Transition to the information society (Part 2: Disruptive effects of transparency)

Phuah Eng Chye (8 July 2023)

“The destination of an information society can only be reached by following the path of transparency. In this regard, economic advancement has always been accompanied by rising levels of transparency. But as societies become more transparent, the harms from transparency become more evident.”

The features of transparency (14 March 2020)

Information makes society transparent. But transparency isn’t always warmly welcomed. There is considerable pushback against the “surveillance” aspects of transparency – by devices, technology, algorithms and data – with concerns that privacy intrusions will lead to harms such as discrimination, prosecution, predatory practices and harassment. These abuses have oppressive effects and could erode individual rights, freedoms and democracy.

But these privacy concerns have been dwarfed by the large footprint of digitalisation and the insatiable demand for information. Everyone from governments and firms to schools, employees, customers and families wants more disclosure and verification. Digital identities and currencies, e-government, location tracking, healthcare, education, transport, deliveries, remote work, social media platforms and streaming services are massive digitalisation projects in progress. Collectively they reinforce a digital transparency regime that enables greater control over citizens’ lives.

In my view, it is too late to resist; societal norms have already moved away from privacy towards broad acceptance of transparency as a core feature of the information society. Privacy regulation has proven ineffectual in recapturing the space ceded to transparency and in remedying information harms. Its most critical flaw is that privacy is a blunt tool that requires the removal of data from use. Privacy creates information blind spots, injects randomness, hampers discovery of problems and bad actors, and impedes the search for solutions. Privacy is espoused but, in practice, it is sidelined.

Societies don’t seem prepared to handle the higher levels of transparency and the scrutiny that come with living in a fishbowl. The standard reflex is to regress. Across the world, governments (including liberal democracies) seem to be reacting to transparency-driven information disorder by asserting even greater control over public narratives, sanitising content (censorship) and promoting superficiality. This leads to a future with less room for free speech and debate, and strengthens the grip of tech firms and enforcement agencies over content and data on platforms. This is surely not a path that leads to freedom and democracy. Societies need to figure out how to make transparency work, and this requires a deepening of our understanding of the features of transparency.

Double-edged sword of transparency

Transparency has profound and far-reaching consequences. Transparency is associated with openness, trust, disclosure, well-defined rules and well-informed participants. It makes individuals willing to interact with strangers. “Transparency is an essential requirement for societal progress. Transparency is needed to enable collaboration, alignment, self-organising and sharing. Transparency is also vital to ensure that leaders and other elites are sensitive to public opinion, and it provides a means to discipline public administration and business conduct. Conversely, making information confidential hampers collaboration, learning and discipline, and impedes progress. Good government, innovation and societal progress are thus highly dependent on the degree of openness in public dialogue and knowledge exchange. Thus, transparency is an essential element in facilitating participation, monetisation, exchange, innovation and informed decision-making”[1].

Nonetheless, transparency is a double-edged sword with substantial harms. Everyone likes their own private space, but transparency is intrusive. No one is comfortable with being monitored, having their habits tracked, or having their past and present scrutinised by governments, businesses, friends and family members. Information harms are not caused by surveillance alone but can also arise from information dissemination. Once personal information is visible, individuals are exposed to a range of abusive behaviours such as threats, exclusion, suppression, harassment, scams, burglary, extortion, blackmail, boycotts, discrimination, lawsuits, exploitation, hacking, spam, doxing and identity theft.

“At the level of society, privacy functions as a buffer. Differences in race, beliefs, opinions, opportunities, income and wealth can be accentuated by transparency. Social media typically reinforce team dynamics and amplify the anger, frustration, and disrespect arising from differences. Transparency therefore removes the protective buffers that (1) hide problems and differences and (2) allows different beliefs and systems to co-exist peacefully in ignorance. Removing the buffer forces society to face up to issues and, unfortunately, this tends often to lead to conflict”[2]. Once-dormant opposing views (e.g. on abortion, climate change, LGBTQ and geopolitics) can be re-energized by transparency. A transparent society is simultaneously polarising and intolerant. This has de-stabilising effects on the political, social and business order.

The economic effects are significant. Transparency facilitates value creation from information. The ability of transparency to create economic value depends on the degree of open access to information, authenticity and reliability. Trust is lost if information is not readily available, if information is false or if the process is undependable (non-delivery). “Transparency opens up new organisational possibilities and alters opportunities and risks. This, in turn, makes change inevitable and disrupts legacy business and regulatory arrangements. In this context, transparency shifts the information chokepoints and increases the speed of adjustments. The rapid adjustments can undermine market (overshooting, herding, contagion) and industry (low friction, law of one price) stability”. Transparency, in tandem with settlement immediacy, minimises the need for relationships. While reducing the frequency of losses, transparency increases their potential severity and reduces the willingness to commit resources for the public good, which increases systemic risks related to crowding and runs. In relation to this, regulators have a schizophrenic attitude to transparency. They believe transparency enhances informed decision-making and reduces risk aversion, yet they are suspicious of its role in facilitating collusion[3] and they favour no-questions-asked liquidity trading over informed trading (insider trading)[4]. Transparency is a major game changer. Players adapt their incentives, strategies and behaviours when information is out in the open. The changing patterns make it difficult to predict responses and outcomes.

Information visibility reduces frictions but it also works to worsen societal inequalities. Transparency enables better-endowed and knowledgeable players to exploit their advantages and to bypass regulatory barriers protecting inefficient and weaker players. Inequality is aggravated by concentration-fragmentation and winner-take-all effects, oligopolistic strategies, financialisation (markets and rising costs), demand-driven pricing, merit- and price-based discrimination, and by signalling and status (conspicuous consumption and advertising) in economic competition. Imbalances are aggravated when transparency is partial – when only select parties (platforms, enforcement agencies) are able to access data. This enhances their competitive advantage or power, which they use for private gain rather than for public goals.

In relation to this, it has been argued that transparency works against low-income groups because data is used against them (exploitation, criminalisation of poverty, systemic discrimination and identification of illegal immigrants). This argument is flawed. Low-income groups are worse off not because of transparency itself but because they are subject to greater scrutiny while their information is used to exploit rather than to assist them. In this regard, imbalances (inequality) arise from unequal access to information and can be remedied by information redistribution. Information blind spots (non-transparency) – such as the lack of identity profiles, low visibility of asset ownership and informal activities – hamper the government’s ability to address income and opportunity shortages for low-income groups. In contrast, privacy benefits the rich and powerful as they enjoy privileged access to other people’s information. Privacy acts as an information shield on their wealth and misdeeds. Information should be redistributed to raise low-income groups’ profile, knowledge, choice and ownership of assets. Distributing information to low-income groups is a prerequisite to increasing their participation and ensuring a more equitable distribution of income and value.

It should be noted that transparency does not create societal problems; it merely highlights their existence. In this regard, transparency is not a panacea, but it expands the range of solutions, previously unavailable, for addressing information harms. Transparency challenges can be resolved through information-based approaches. Platform monopolies can be tackled by breaking their strangleholds on information. Workers’ rights can be protected by making employer relationships (contracts) transparent and by ensuring workers and customers have the necessary information to protect their right to choose. Transparency is the most robust and fairest way of differentiating truth from falsehood and can also be harnessed to strengthen checks and balances on arbitrary decisions and actions by enforcement agencies and platforms. Transparency facilitates the analysis and design of public policies to lower credit defaults among minorities, reduce healthcare costs, improve public safety, tackle corruption and wastage, and resolve many other difficult situations. At this point, the transparency regime is still evolving, and the use of transparency to solve social and economic challenges is held back by considerable shortcomings[5] in national data management. While transparency has had a largely positive and relatively immediate impact on business efficiency, its disruptive impact on law, social norms and power structures is only beginning to be evident.

Transparency, information overload and law

Citizens now spend as much of their economic, social and political life in the digital sphere as they do in the physical sphere. Laws, once largely defined in a physical context, have expanded to cover an extensive range of newly defined information crimes (such as in relation to privacy, data, content and compliance). Physical crimes have also been substituted by cybercrimes; e.g. burglary by cybertheft and intellectual property theft; cold calls by spam; and music and video bootlegging by downloading. New legal frameworks have also been established to cover innovative products (e.g. drones, autonomous cars and digital assets) and activities (e.g. sharing and AI).

Societal transparency is thus accompanied by information overload, which in turn leads to regulatory overload. Rules are required to clarify what information can be captured, used, disclosed and published, and when; and to clarify details on conduct, products, ownership, usage, risks, accountabilities and liabilities. It becomes easy to fall afoul of the law because there are more laws. Regulators find it more expedient to prosecute information failures (omitting or providing false information, and compliance failures) than to prove actual wrongdoing. In information crimes, technicalities matter. The penalties for technical deficiencies can be onerous if they are linked to serious crimes such as money laundering or broadly defined national security offences. In the meantime, the expanded legal framework is fertile ground for disputes and litigation.

Archaic judicial systems are ill-equipped to handle the throughput, transience and complexity of infringements and disputes. In this regard, transparency facilitates the modernisation of courts and enforcement agencies with a view to improving operational efficiency and effectiveness. China[6] is modernising its judicial system by making proceedings digital and accessible, and by using AI to review and support judicial decision-making and ensure sentencing consistency.

Technology tools are also proving effective and efficient in combating crime, screening individuals and reducing policing and immigration manpower needs. Enforcement agencies (police, immigration and national security) use facial recognition technology (FRT) and AI to make predictive assessments of threats and to conduct surveillance to identify and locate suspects.

Nevertheless, the use of surveillance tools is controversial. Nick Corbishley notes “privacy advocates have called for an outright ban on biometric surveillance technologies due to the threat they pose to civil liberties”, while some US cities have passed biometric laws which impose statutory restrictions on biometric use, sale and storage. He is concerned that “in the fully digitised world that is fast taking shape around us, many of the decisions or actions taken by corporations, central banks and local, regional or national authorities that affect us will be fully automated; no human intervention will be needed…these digital systems of surveillance and control, if allowed to take root, will represent one of the biggest collective trade offs of modern history. We are essentially being asked – or rather, not asked – to trade in a system, however imperfect and degraded, of rights, laws and freedoms for one of centralised, automated, top-down technocratic control”.

James Andrew Lewis argues “FRT is another example of law and policy needing to catch up to technology if society is to safely reap its full benefit…facial recognition technology is an irreplaceable tool for maintaining public safety”. In this regard, “FRT has been caught up in a larger public debate over policing, race, and privacy. These are emotional topics, but they have created a confusing narrative. The debate over facial recognition technology also reflects erratic privacy protections in the United States. Digital technologies create immense amounts of data, but the constraints on how this data can be used are inconsistent, particularly for commercial use”. He points out “claims about FRT inaccuracy are either out of date or mistakenly talking about facial characterization. Of course, accuracy depends on how FRT is used. When picture quality is poor, accuracy is lower but often still better than the average human. A 2021 report by the National Institute of Standards and Technology (NIST) found that accuracy had improved dramatically and that more accurate systems were less likely to make errors based on race or gender”.

James Andrew Lewis thinks FRT “will continue to be developed and deployed because of the convenience for consumers and the benefits to public safety”. “One important goal for FRT regulations is to clarify the circumstances under which FRT can and cannot be used”. This would include prescribing transparency requirements such as “annual reporting, public consultation, and making information publicly available on how FRT is being used”, and “defining when images can be stored, for how long, and under what conditions any stored image can be used”, providing opt-out options, establishing adequate mechanisms for oversight and redress to ensure quick rectification of errors, and ensuring appropriate oversight and auditing.

As the use of technology in enforcement increases, there is a need to address data quality problems. Caleb Brennan notes “over the past two decades, the widespread public availability of criminal records and court documents has helped fuel a global, for-profit background check industry worth billions. Services like Checkr, HireRight, and First Advantage claim to offer employers, landlords, and other clients a detailed snapshot of an individual’s criminal past. Some charge as little as around $25. But their assessments are often deeply flawed, in part because background check companies tend to rely on the cheapest and most easily accessible data, which is also the most prone to inaccuracies” – such as pre-conviction arrest reports. “Even FBI data, long considered the gold standard for criminal background checks, contains an astonishing number of errors: Between 50 to 80 percent of their records are at least partially inaccurate or incomplete…Background check firms frequently make mistakes by including expunged records on their reports or by mixing up people with the same name. In many instances, reports fail to specify if an arrest ended in a conviction, or duplicate or misreport the nature of a criminal proceeding”. “The growth of this industry has given rise to new forms of digital punishment…Anything from a police stop to a serious conviction can result in the same type of record: You’re either marked as a criminal or not…it’s just another outgrowth of mass criminalization in that way…The negative impacts of these for-profit background check services disproportionately harm minority communities…The proliferation of these companies has also undermined broader efforts to ban the box – or reduce the stigma of a criminal record in employment by prohibiting potential employers from asking if job candidates have been arrested or incarcerated”.

As the global platforms expand their reach, governments find themselves losing control over their citizens, business conduct and tax revenues. The reality is that governments cannot leave private data and code largely unregulated and unsupervised given the size of the private virtual sphere. Governments are therefore responding with regulations to increase oversight over platform influence and conduct and to increase platforms’ accountability.

One key area relates to oversight over platform data. Stephen Maher explains that “as regulators, activists and researchers wrestle with the issues raised by the social media giants, visibility is a central challenge: only people inside the companies have access to good information…nowadays, most of the data about people in groups in society are actually tied up inside private corporations. We have to figure out how to work with them. We have no choice”. However, platforms struggle “between people who want to routinely open up its processes and others with a more traditional, defensive corporate view”. A platform “doesn’t want to make the data available for others to do the hard work and hold them accountable…acknowledging the source of the engagement might lead to responsibilities the company doesn’t want”, as well as expose it to criticisms and liabilities. The “real problem is that they lose money every time they have some big controversy.”

Sun-ha Hong notes “all this is part of a broader pattern in which the very groups who should be held accountable by the data tend to be its gatekeepers. Facebook is notorious for transparency-washing strategies, in which it dangles data access like a carrot but rarely follows through in actually delivering it. When researchers worked to create more independent means of holding Facebook accountable – as New York University’s Ad Observatory did last year, using volunteer researchers to build a public database of ads on the platform – Facebook threatened to sue them. Despite the lofty rhetoric around Facebook’s Oversight Board (often described as a Supreme Court for the platform), it falls into the same trap of transparency without power: the scope is limited to individual cases of content moderation, with no binding authority over the company’s business strategy, algorithmic design, or even similar moderation cases in the future. Here, too, the real bottleneck is not information or technology, but power: the legal, political and economic pressure necessary to compel companies like Facebook to produce information and to act on it”.

Data transparency is a critical factor in facilitating oversight over the responsible and fair use of data, ensuring individuals are well informed, identifying and correcting discriminatory bias, and minimising algorithmic errors. Given that algorithmic rules and enforcement are rigid and difficult to reverse, clear accountabilities should be established to ensure recourse processes are strengthened to address public complaints and to remedy information harms caused by inaccurate data and unfair processes. An ombudsman should be set up to investigate platform efforts to strengthen their recourse processes. The restructuring of the data relationship with the private sector is a long-term process involving the continuous streamlining of data management and processes, and the reshaping and relocation of conduct accountabilities. China[7] has declared data a factor of production and adopted a national data management framework. However, it should be noted that most governments lack technology and data capabilities and are largely dependent on the private sector.

Overall, there is a need to address the different levels of legal protection and oversight in the physical and digital spheres. Legal protection in the physical sphere is subject to government oversight, and enforcement is relatively transparent. In contrast, the digital sphere is ruled very much like medieval fiefdoms, with platform warlords generally setting and enforcing their own code in a relatively non-transparent manner. Non-transparent code didn’t matter as much in the past because its reach and effects were contained within small private business silos. But as the global platforms expanded their reach, private regulation[8] – which determines access to opportunities, jobs and finance, with rules automatically enforced by code – has become ubiquitous and can no longer be ignored. In this regard, private code tends to be proprietary, and this raises questions as to whether private code or algorithmic regulation is transparent and fair.

Regulators are starting to crack down on misconduct such as fake reviews and bad influencers[9] as well as to address netizen participation in supporting or vilifying others. Lianhe Zaobao observes “to be fair, in this we-media era, it is common practice to post injustices online. However, over the long term, Chinese society has developed an instinct to go public with any injustice, even without verifying the facts; this may create a mentality where anything posted publicly is considered a case of injustice. And in the age of the internet, it seems that anyone who can voice their opinions on social media has the power to put others on trial. But in reality, to uphold justice, just going public is not enough, nor is public trial the best way to judge. What is needed is a clear and professional investigation of the facts, in order to find the truth and move from the deadlock of he said, she said to a resolution. These investigations require the involvement of impartial and professional departments, and the police and authorities should play their roles. Of course, their intervention has to be based on public trust. If professional agencies and public authorities inspire more confidence and are not frequently absent, and if issues are no longer only resolved by making a fuss, society will become much more civilised”.

Governments are pursuing reforms on a piecemeal basis, with agencies implementing siloed initiatives. This results in regulatory fragmentation along segmented lines (e.g. privacy, data, content and AI). Governments should adopt a more holistic approach and conceptualise the integration of the public-private and physical-virtual legal spheres. This likely involves reconciling law and code into a generalised or generic framework that recognises private code as an extension of the public regulatory architecture and strengthens regulatory oversight accordingly. In line with this, governments should work to bridge the differences between public and private, and physical and virtual, regulation and ensure greater rule consistency and sentencing proportionality. In reviewing public governance issues for the information society, regulators should focus on outcomes, goals and accountabilities rather than get bogged down in the mechanics.

The merger of public law and private code may sound like an abhorrent idea to some. In this context, the emergence of digital systems for identification, national data, social credit, CBDCs and healthcare is fanning fears that societies are headed towards a digital Orwellian dystopia. But one should not jump to the conclusion that the outcome is necessarily dystopian. It is more purposeful to figure out how to make digital transparency work towards reinforcing individual rights and fostering democracy.

One approach to merging public law and private code is to design and implement a “social credit system”. As China’s experience shows, social credit systems need careful thinking to avoid the pitfalls associated with criminalisation or trivialisation. In my view, there is room to design a social credit system that is oriented towards ensuring that enforcement is efficient (and convenient) and that seeks to protect the rights of citizens by ensuring transparency and fairness instead of a system that is overly enforcement-focused or punishment-oriented. The needs and reactions of citizens should guide the development of a social credit system.

The more difficult challenge posed by transparency is that it damages societal consensus on “values”. Recently, developed societies have experienced a backlash, with legal and societal reversals of long-established consensus on issues such as abortion, LGBTQ, climate change and diversity. Courts are finding it difficult to maintain judicial consistency on values. Courts risk being guilty of judicial overreach as they attempt to clarify ambiguous laws in complex circumstances. If courts actively support a cause, their judicial decisions become divisive rather than unifying. Inconsistent judgements and activism undermine trust in judicial impartiality and independence. Hence, a judicial system that provides efficient and effective dispute resolution is critical to building trust, ensuring society is not bogged down in a legal quagmire, and maintaining economic, social and political stability.

Harshness and forgiveness

Transparency buffers such as privacy, anonymity, ambiguity and opacity are used to protect the socially disadvantaged; to reduce vulnerabilities and sensitivities; and to underpin societal cohesion and stability. But there is a downside. These societal buffers result in information blind spots that shield individuals, businesses and government agencies from exposure of their wrongdoings or errors, and this reduces accountability. In recent years, as the erosion of transparency buffers accelerated, harshness has increased because society has become less tolerant and forgiving. Stale data taints background checks and results in systemic discrimination or exclusion. Accusations result in individuals, businesses and officials being subject to vigilante-type punishment without regard for fairness or due process. There is less tolerance for misinformation and opposing views (e.g. on abortion, gender, sexual abuse, vaccines, masking and geopolitics). Censorship controls become more intimidating and oppressive.

The threats posed by harshness to the well-being of citizens are well known. Generally, there are checks and balances. “Privacy-based approaches are commonly used to mitigate the harshness of transparency. For example, some laws authorise courts to expunge or seal criminal records for certain types of arrests and convictions, while employers are discouraged from discriminatory practices under the Equal Employment Opportunity laws. Article 9 of the EU’s General Data Protection Regulation (GDPR) disallows the use of information that reveals a person’s race, ethnicity, political views, religion, union membership, health, and sexual practices unless it falls into certain exceptions such as legal defense, consent of the individual, or certain public interests…GDPR’s Article 17 codifies the right to be forgotten, where a user has a right to obtain from the controller the erasure of personal data relating to them and the abstention from further dissemination of such data, especially when there is no longer use for the data or, more importantly, when the data subject decides he or she no longer wants the information to be public. The controller not only has to remove data on sites they are in control of but must also take all reasonable steps, including technical measures…to inform third parties…that a data subject requests them to erase any links to, or copy or replication of, that personal data[10].” In March 2014, the right to be forgotten was replaced by the more modest right to erasure. In contrast, US laws do not provide for the removal of a person’s past, criminal or otherwise, reflecting the belief that free speech allows a person’s criminal past to remain part of the public record in order to protect society.

Generally, access to personal information is restricted: laws prohibit questions about an individual’s political views or associations, rape victimisation, disabilities, diseases and convictions; public access to juvenile records is limited; and privileged communications between attorneys and clients, priests and penitents, and doctors and patients are protected. Privacy regulation is based on the logic that withholding information can provide opportunities for a fresh start as well as reduce discriminatory practices. But there are problems with privacy-based forgetfulness. First, forgetfulness controls are easily bypassed through online searches, secondary databases, and statistical techniques or AI. Second, firms are exposed to legal risks if they recruit employees with criminal records. The absence of legal protection would deter firms from being forgetful or forgiving.

The alternative to privacy-based forgetfulness is transparency-based forgiveness. “A forgiveness approach should be complemented by fairness and the robustness of due process. The importance of these principles is recognised in the EU’s General Data Protection Regulation (GDPR), which highlights fairness, lawfulness and transparency as the three key principles underpinning the processing of personal data. The GDPR contains clauses requiring accuracy, providing individuals explicit rights to access, erase, rectify and restrict the processing of their personal data. This is reinforced by clauses imposing right-to-know, disclosure and various accountabilities to ensure information fiduciaries (platforms) treat users fairly. Hence, the GDPR covers many of the bases for fairness”[11].

Transparency-based forgiveness acknowledges there may be mitigating circumstances for a crime or transgression. Therefore, the enforcement and judicial system should be designed with built-in “tolerance” and legal buffers against harsh formal and informal punishments. This requires transparency (so that people can judge whether decisions are fair) and governance (so that due processes are robust). In this regard, fairness is data-dependent: data and other evidence must be accessible (transparent) and backed by robust due process for complaints, disputes and data rectification.

The government has a critical role in strengthening due processes given the weak incentives for the private sector to invest in recourse. Thus, the government needs to strengthen the regulatory framework to increase the private sector’s accountability. The US Fair Credit Reporting Act (FCRA) and Consumer Credit Reporting Reform Act (CCRRA) are examples of legal frameworks for managing information flows and operator obligations on dispute settlement mechanisms and data correction procedures. The government would need to strengthen its oversight role while guarding against the bureaucratic tendency to evolve rules that favour or protect administrators at the expense of the public and industry. One way to mitigate bureaucratic inertia is to make data transparent and publish KPIs to benchmark dispute resolution and settlement processes, as sketched below. Data transparency facilitates analysis and the use of AI to detect patterns of unfairness, to support an integrated and dynamic approach to forgiveness, and to design and implement targeted policies that prevent or redress “worst cases” by firm or geography and assist the victims who suffer most from denial of opportunities or abuse.
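
As an illustration, here is a minimal sketch of what such KPI benchmarking might look like. The data structure and field names are hypothetical, not drawn from any actual regulatory scheme: published complaint records are aggregated into per-firm indicators of backlog, resolution speed and outcomes.

```python
from dataclasses import dataclass
from datetime import date
from statistics import median
from typing import Optional

@dataclass
class Complaint:
    firm: str                 # operator the complaint was filed against
    filed: date
    resolved: Optional[date]  # None while the dispute is still open
    upheld: bool              # resolved in the complainant's favour

def dispute_kpis(complaints: list[Complaint]) -> dict[str, dict[str, float]]:
    """Aggregate published complaint records into per-firm benchmarks."""
    kpis: dict[str, dict[str, float]] = {}
    for firm in {c.firm for c in complaints}:
        cases = [c for c in complaints if c.firm == firm]
        closed = [c for c in cases if c.resolved is not None]
        kpis[firm] = {
            "open_backlog": float(len(cases) - len(closed)),
            "median_days_to_resolve": float(
                median((c.resolved - c.filed).days for c in closed)
            ) if closed else float("nan"),
            "uphold_rate": sum(c.upheld for c in closed) / len(closed)
            if closed else float("nan"),
        }
    return kpis

# Hypothetical example: two firms, a handful of complaints each.
sample = [
    Complaint("AlphaCheck", date(2023, 1, 3), date(2023, 1, 20), upheld=True),
    Complaint("AlphaCheck", date(2023, 2, 1), None, upheld=False),
    Complaint("BetaScreen", date(2023, 1, 5), date(2023, 3, 1), upheld=False),
    Complaint("BetaScreen", date(2023, 1, 9), date(2023, 1, 30), upheld=True),
]
print(dispute_kpis(sample))
```

Publishing indicators of this kind would let regulators and the public compare operators directly, which is the disciplining effect on dispute resolution described above.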

Society is becoming harsher – less compassionate and more intolerant – and less forgiving. The traditional stance of “innocent until proven guilty”, together with fact-finding and due process, has been shoved aside as viral content, misinformation and censorship give rise to an emotive, vindictive, vigilante-type environment. This should not be the scenario we wish for a transparent society. Instead of emphasising punishment, the focus should be more on deterrence, rehabilitation and the provision of opportunities. The system should provide slack for misjudgements and errors. Penalties should be light for infrequent, inadvertent and low-impact incidents; and heavy when infringements are frequent, high-impact or systemic. Forgiveness can be built into a social credit system with a transparent audit trail for scoring, robust and efficient recourse processes, and opportunities for redemption. Redemption is an incentive-based approach that emphasises rewarding good behaviour rather than punishing bad behaviour. Redemption would allow individuals to “deactivate” adverse records (to reverse poor scores) through good deeds. The harshness of transparency should be mitigated by fostering a regime of forgiveness. In this regard, legal protection is required to compel decision-makers to exclude deactivated records from their decisions.
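
A minimal sketch, under the same hypothetical framing (all class and field names are illustrative), of how redemption could be built into scoring: adverse records are never erased, preserving the transparent audit trail, but can be deactivated by accumulated good-deed credits, and only active records affect the score – consistent with the requirement that decision-makers exclude deactivated records.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class AdverseRecord:
    description: str
    penalty: int        # score points deducted while the record is active
    active: bool = True

@dataclass
class CitizenFile:
    records: list[AdverseRecord] = field(default_factory=list)
    credits: int = 0                                       # earned via good deeds
    audit_trail: list[str] = field(default_factory=list)   # transparent log

    def _log(self, event: str) -> None:
        self.audit_trail.append(f"{datetime.now():%Y-%m-%d} {event}")

    def add_adverse(self, description: str, penalty: int) -> None:
        self.records.append(AdverseRecord(description, penalty))
        self._log(f"adverse record: {description}")

    def earn_credit(self, points: int, deed: str) -> None:
        self.credits += points
        self._log(f"credit +{points}: {deed}")

    def redeem(self, record: AdverseRecord) -> bool:
        """Deactivate an adverse record if enough credits have been earned.
        The record is retained, not erased, so the audit trail stays complete;
        decision-makers are expected to ignore deactivated records."""
        if record.active and self.credits >= record.penalty:
            self.credits -= record.penalty
            record.active = False
            self._log(f"redeemed: {record.description}")
            return True
        return False

    def score(self, base: int = 100) -> int:
        # Only active records affect the score.
        return base - sum(r.penalty for r in self.records if r.active)
```

For example, a citizen with one 20-point adverse record and 25 earned credits could redeem it, restoring the base score while every step remains visible in the audit trail.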

Power and democracy

Sun-ha Hong points out that “in a society beset with black-boxed algorithms and vast surveillance systems, transparency is often hailed as liberal democracy’s superhero. It’s a familiar story: inject the public with information to digest, then await their rational deliberation and improved decision making. Whether in discussions of facial recognition software or platform moderation, we run into the argument that transparency will correct the harmful effects of algorithmic systems. The trouble is that in our movies and comic books, superheroes are themselves deus ex machina: black boxes designed to make complex problems disappear so that the good guys can win. Too often, transparency is asked to save the day on its own, under the assumption that disinformation or abuse of power can be shamed away with information. Transparency without adequate support, however, can quickly become fuel for speculation and misunderstanding…But to blame an inattentive or ignorant public is to miss the larger point: that too often, information is hung out to dry, thrown to the wolves…Disinformation scholar Whitney Phillips argues that contrary to the cliché, light doesn’t always disinfect: too often, rendering hate speech or conspiracy theories visible results in amplifying and legitimizing those views. As we are now seeing with QAnon theories and COVID-19 misinformation, throwing facts on the table doesn’t always have a corrective effect, and may even provoke people into doubling down. In her research on conservative evangelical groups, Francesca Tripodi shows that people fall into disinformation rabbit holes not through a lack of research, but rather an abundance of research – routed through alternative sources or mediators…As Ethan Zuckerman has put it, the problem wasn’t information – it was power. Here we find the pernicious consequence of the myth that information is sunlight and that information alone can expunge wrongdoing. Rules are routinely flouted, with officers often citing technical malfunction or lost equipment as an excuse for missing video”.

Sun-ha Hong thinks “as data-driven systems freely expand across social domains, they are increasingly locking us into existing asymmetries of power and information. For some, datafication might seem an empowering choice, a sovereign and individual decision to walk boldly toward a post-human future. For many of us, to appear correctly in databases is the unhappy obligation on which our lives depend…Breaking up platforms would involve many of the same problems attached to understanding platforms as purely an artifact of the economy, although it would depend on how the breaking up was done; the EU solution, regulating data protection and privacy, does not deal with the fundamental question of political-economy power at all…Hovering over the entire question is the spectre of both ecosystem and civilizational collapse, caused by the very rapacity of the capitalist system…Platform capitalism has, so far, offered very little toward the solution of global climate breakdown. Blockchain champions spread the myth of virtuality, of impact-free wealth creation, and offload their substantial environmental and social costs to vague and distant places and times…One of the biggest issues for those of us who have spent so much time fighting against surveillance and for privacy and human rights is that the longer our current governments fail to deal with climate breakdown…the more likely it is that the solutions become, at once, more authoritarian and more surveillant…We need vision but, as always, what has been squeezed out of the conversation is anything genuinely radical or progressive…If we understand platforms as, simultaneously, an economic, political and social phenomenon – in other words, as emerging alternatives to nation-states – we will be able to confront them better…The platform could continue to be just another differently shaped vessel for the enrichment of the very few, or it could be a way of finally slicing through what philosopher and literary critic Kojin Karatani has called the Borromean knot (a tightly interlocking 3-way bond) of nation, state and capital – and moving away from exploitation, environmental destruction and pervasive surveillance toward an entirely new form of more genuinely democratic government”.

Transparency and power thus sit uneasily with each other. In a transparent society, the battle for control over data and narratives is a battle for power. Governments, firms and groups often claim to champion transparency, yet they display discomfort when transparency produces unfavourable data and narratives or exposes abuses and corruption. Other people’s data should be transparent but not theirs. Only data favourable to them can be disclosed. Unfavourable data goes missing or is obfuscated. This raises the question of whether society can ever be fully transparent.

As the stakes rise in the narrative war, growing frustration among elites is leading to aggressive and vindictive responses to stamp out unofficial and unfavourable narratives. Across Western democracies, there is backsliding on free speech as tough new rules are promulgated and agencies established to control hate speech, disinformation and conspiracy theories delegitimising the state. This is a slippery road that leads to McCarthyism[12]. For example, once an individual is designated as a politically exposed person (PEP), he may find his bank accounts abruptly closed[13]. Banks fear the compliance risks of associating with individuals, companies, industries or countries that may be involved in criminal activities (money laundering), conduct misbehaviour (hate speech) or sanction contravention, or that are tainted by unproven allegations or smears.

There are also concerns that the recently established “censorship” agencies will trigger a wave of political prosecution and harassment, the deplatforming of non-conformists, and the criminalisation of dissent. It should be noted that the current situation is much worse than the McCarthy era because it covers a wider area, ranging from social media content (conspiracy theories, fakes, unfavourable videos, hate speech) to pandemic measures (origin, covid lockdowns, masks and vaccines).

Rising discomfort with transparency does not mean society is on the path to dystopia. The threat of dystopia arises not because more information is captured but because control over information is concentrated rather than shared, making information transparency one-sided. In this context, early visions of information dystopias (George Orwell’s 1984[14] and Jeremy Bentham’s panopticon) are based on concepts of centralised control of information – reflecting hierarchical structures with highly limited public access to information. The emergence of distributed or peer-to-peer networks has broadened the distribution of information and made transparency two-sided. Two-sided transparency means most citizens have the ability to create, distribute and access content. In this regard, technological surveillance and AI have not, for the moment, silenced political and social dissent. Social movements (e.g. #MeToo, Fight for $15, yellow vests and the George Floyd protests), disgruntled individuals and foreign governments have demonstrated the ability to use the same technology – mobile phones, motorcams, bodycams, YouTube, TikTok and Telegram – to organise counter-movements and narratives. Two-sided transparency makes it difficult for governments, intelligence agencies and corporations to dominate virtual space.

Overall, a functioning democracy requires transparency and the wide distribution of information to bring us closer to the ideal of active and broad civic participation in society. Transparency and freedom of speech are the best form of governance. They may increase disputes, but they also increase accountability and act as a check and balance on the autocratic use of power. Regulatory boundaries should be redrawn to protect individual freedoms from government, legal or social retaliation.

Conclusions

We are still adjusting to the new levels of transparency in the information society. However much discomfort there may be with information disorder, we should be wary of arguments that advocate granting information privileges and concentrating information power, as this will inevitably be at the expense of the majority of citizens and possibly represent the biggest threat to democracy. While transparency is disruptive, we should be mindful that it opens up new opportunities. These opportunities can be harnessed through policy and rule changes that facilitate better use of information to improve living conditions. Within this context, three critical areas that require rethinking are the legal system, the mitigation of harshness and the preservation of democracy.

The geopolitical dimensions should not be overlooked. Transparency exposes the myth of global shared values. Geopolitical strains grow as countries and groups push hard on their divergent beliefs (religious, ideological and cultural) and national agendas. As visibility across countries improves, an action in one country may provoke an angry reaction (protests, boycotts) or even contravene laws in other countries. This may trigger government or judicial intervention. One option for a fractious world is to facilitate the emergence of distinct zones catering to different beliefs and national agendas. This is already happening with decoupling, economic fragmentation and de-dollarisation – with information flows increasingly circulating within those zones. Guardrails to mitigate geopolitical conflict would require government acquiescence to voluntary restraints on the reach of cross-border policies and judicial decisions.

References

Caleb Brennan (17 April 2023) “Background check industry profits off digital punishment, despite flawed data”. The Appeal. https://theappeal.org/criminal-background-checks-industry-for-profit/

Eli Hoff (29 July 2021) “‘Is this legal?’: Why an obscure data service has been sued nearly 100 times for facilitating anti-competitive behavior”. Investigate Midwest. https://investigatemidwest.org/2021/07/29/is-this-legal-why-an-obscure-data-service-has-been-sued-nearly-100-times-for-facilitating-anti-competitive-behavior/

James Andrew Lewis (29 September 2021) “Facial recognition technology: Responsible use principles and the legislative landscape”. Center for Strategic and International Studies (CSIS). https://www.csis.org/analysis/facial-recognition-technology-responsible-use-principles-and-legislative-landscape

Jim Quinn (4 July 2023) “Burning books in a brave new 1984 world”. The Burning Platform blog. https://www.zerohedge.com/political/burning-books-brave-new-1984-world

Lianhe Zaobao (22 June 2023) “Trial by Weibo: A young woman accuses a middle-aged man of voyeurism”. https://www.thinkchina.sg/trial-weibo-young-woman-accuses-middle-aged-man-voyeurism

Nick Corbishley (24 March 2023) “The pushback against biometric surveillance and control systems is growing on both sides of the Atlantic”. Naked Capitalism.

Phuah Eng Chye (2015) Policy paradigms for the anorexic and financialised economy: Managing the transition to an information society. http://www.amazon.com/dp/B01AWRAKJG

Phuah Eng Chye (21 December 2019) “The debate on regulating surveillance”. http://economicsofinformationsociety.com/the-debate-on-regulating-surveillance/

Phuah Eng Chye (4 January 2020) “The economics and regulation of privacy”. http://economicsofinformationsociety.com/the-economics-and-regulation-of-privacy/

Phuah Eng Chye (18 January 2020) “Big data and the future for privacy”. http://economicsofinformationsociety.com/big-data-and-the-future-for-privacy/

Phuah Eng Chye (15 February 2020) “The costs of privacy regulation”. http://economicsofinformationsociety.com/the-costs-of-privacy-regulation/

Phuah Eng Chye (29 February 2020) “The journey from privacy to transparency (and back again)”. http://economicsofinformationsociety.com/the-journey-from-privacy-to-transparency-and-back-again/

Phuah Eng Chye (14 March 2020) “The features of transparency”. http://economicsofinformationsociety.com/features-of-transparency/

Phuah Eng Chye (28 March 2020) “The transparency paradigm”. http://economicsofinformationsociety.com/the-transparency-paradigm/

Phuah Eng Chye (11 April 2020) “Anonymity, opacity and zones”. http://economicsofinformationsociety.com/anonymity-opacity-and-zones/

Phuah Eng Chye (15 August 2020) “Economics of data (Part 3: Relationship between data and value and the monetisation framework)”. http://economicsofinformationsociety.com/economics-of-data-part-3-relationship-between-data-and-value-and-the-monetisation-framework/

Phuah Eng Chye (7 November 2020) “Information rules (Part 1: Law, code and changing rules of the game)”. http://economicsofinformationsociety.com/information-rules-part-1-law-code-and-changing-rules-of-the-game/

Phuah Eng Chye (21 November 2020) “Information rules (Part 2: Capitalism, democracy and the path forward)”. http://economicsofinformationsociety.com/information-rules-part-2-capitalism-democracy-and-the-path-forward/

Phuah Eng Chye (8 April 2023) “China’s model (Part 2: Digital China and the information society)”. http://economicsofinformationsociety.com/chinas-model-part-2-digital-china-and-the-information-society/

Stephen Maher (2 August 2021) “Transparency is key to curbing the power of big tech”. Centre for International Governance Innovation (CIGI). https://www.cigionline.org/articles/transparency-is-key-to-curbing-the-power-of-big-tech/

Sun-ha Hong (18 February 2021) “Why transparency won’t save us”. Centre for International Governance Innovation (CIGI). https://www.cigionline.org/articles/why-transparency-wont-save-us/

Sun-ha Hong (12 April 2021) “Control Creep: When the data always travels, so do the harms”. Centre for International Governance Innovation (CIGI). https://www.cigionline.org/articles/control-creep-when-data-always-travels-so-do-harms

Thomas Claburn (30 June 2023) “Uncle Sam cracks down on faked reviews and bad influencers”. The Register. https://www.theregister.com/2023/06/30/us_trade_watchdog_revises_rules/

Tyler Durden (4 July 2023) “Farage banking ban sparks UK govt probe of blacklisting accounts over political views”. Zero Hedge. https://www.zerohedge.com/political/farage-bank-ban-sparks-uk-govt-probe-blacklisting-accounts-over-political-views


[1] See “The features of transparency”.

[2] See “The features of transparency”.

[3] See Eli Hoff on accusations against Agri Stats – a data and analytics firm for the meat processing industry – for facilitating the exchange of confidential, proprietary, and competitively sensitive data that allowed meat producers to collude to fix, raise, maintain, and stabilize prices.

[4] See Policy paradigms for the anorexic and financialised economy: Managing the transition to an information society.

[5] Data quality remains a substantial problem. Data is fragmented (stored in different silos) and not reconciled, and many errors are not corrected.

[6] See “China’s model (Part 2: Digital China and the information society)”.

[7] See “China’s model (Part 2: Digital China and the information society)”.

[8] See “Information rules (Part 1: Law, code and changing rules of the game)”.

[9] Thomas Claburn notes that the US Federal Trade Commission (FTC), the consumer fraud watchdog, has revised its rules for online reviews and testimonials in advertising, raising the possibility of greater legal risk for those deceptively endorsing – or disparaging – products or services online in exchange for payment.

[10] See “The transparency paradigm”.

[11] See “The transparency paradigm”.

[12] “McCarthyism, also known as the second Red Scare, was the political repression and persecution of left-wing individuals and a campaign spreading fear of alleged communist and socialist influence on American institutions and of Soviet espionage in the United States during the late 1940s through the 1950s”. https://en.wikipedia.org/wiki/McCarthyism

[13] See Tyler Durden.

[14] See Jim Quinn.