Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)

Phuah Eng Chye (19 August 2023)

Social media has dramatically increased societal transparency but has been accompanied by information overload and disorder. In this regard, Nicholas Carr observes social media has progressed from an initial phase of context collapse to content collapse. “Before social media, you spoke to different audiences in different ways. You modulated your tone of voice, your words, your behavior, and even your appearance to suit whatever social context you were in (workplace, home, school, nightclub, etc)…On a social network, the theory went, all those different contexts collapsed into a single context. Whenever you posted a message or a photograph or a video, it could be seen by your friends, your parents, your coworkers, your bosses, and your teachers, not to mention the amorphous mass known as the general public. And, because the post was recorded, it could be seen by future audiences as well as the immediate one. When people realized they could no longer present versions of themselves geared to different audiences – it was all one audience now – they had to grapple with a new sort of identity crisis…an infinite number of contexts collapsing upon one another into that single moment of recording…a crisis of self-presentation”.

Nicholas Carr argues “the recent history of social media isn’t a story of context collapse. It’s a story of its opposite: context restoration. Young people led the way, moving much of their online conversation from the public platform of Facebook, where parents and teachers lurked, to the more intimate platform of Snapchat, where they could restrict their audience and where messages disappeared quickly…Context collapse remains an important conceptual lens, but what’s becoming clear now is that a very different kind of collapse – content collapse – will be the more consequential legacy of social media. Content collapse, as I define it, is the tendency of social media to blur traditional distinctions among once distinct types of information – distinctions of form, register, sense, and importance. As social media becomes the main conduit for information of all sorts – personal correspondence, news and opinion, entertainment, art, instruction, and on and on – it homogenizes that information as well as our responses to it”. Content collapse is triggered by digitization which made redundant “the formal standards and organizational hierarchies inherent to the old mediums” and converged information “through a single, universal medium”. “It wasn’t just that the headlines, free-floating, decontextualized motes of journalism ginned up to trigger reflexive mouse clicks, had displaced the stories. It was that the whole organizing structure of the newspaper, its epistemological architecture, had been junked. The news section (with its local, national, and international subsections), the sports section, the arts section, the living section, the opinion pages: they’d all been fed through a shredder, then thrown into a wind tunnel. What appeared on the screen was a jumble, high mixed with low, silly with smart, tragic with trivial. The cacophony of the RSS feed, it’s now clear, heralded a sea change in the distribution and consumption of information. 
The new order would be disorder”. The smartphone “completed the collapse of content…further compacted all forms of information. The instant notifications and infinite scrolls that became the phone’s default design standards required that all information be rendered in a way that could be taken in at a glance, further blurring the old distinctions between types of content. Now all information belongs to a single category, and it all pours through a single channel”.

Nicholas Carr points out “many of the qualities of social media that make people uneasy stem from content collapse. First, by leveling everything, social media also trivializes everything – freed of barriers, information, like water, pools at the lowest possible level…Second, as all information consolidates on social media, we respond to it using the same small set of tools the platforms provide for us. Our responses become homogenized, too. That’s true of both the form of the responses (repost, retweet, like, heart, hashtag, fire emoji) and their content (Love! Hate! Cringe!). The software’s formal constraints place tight limits on our expressiveness, no matter what we’re talking about. Third, content collapse puts all types of information into direct competition. The various producers and providers of content, from journalists to influencers to politicians to propagandists, all need to tailor their content and its presentation to the algorithms that determine what people see. The algorithms don’t make formal or qualitative distinctions; they judge everything by the same criteria. And those criteria tend to promote oversimplification, emotionalism, tendentiousness, tribalism – the qualities that make a piece of information stand out, at least momentarily, from the screen’s blur. Finally, content collapse consolidates power over information, and conversation, into the hands of the small number of companies that own the platforms and write the algorithms. The much maligned gatekeepers of the past could exert editorial control only over a particular type of content that flowed through a particular medium – a magazine, a radio station, a TV network. Our new gatekeepers control information of all kinds. When content collapses, there’s only one gate”.

Informationalisation, narratives and disorder

Byung-Chul Han[1] explains with informationalisation, “everything that binds and connects is disappearing. There are hardly any shared values or symbols, no common narratives that unite people. Truth, the provider of meaning and orientation, is also a narrative. We are very well informed, yet somehow we cannot orient ourselves. The informatization of reality leads to its atomization – separated spheres of what is thought to be true. But truth, unlike information, has a centripetal force that holds society together. Information, on the other hand, is centrifugal, with very destructive effects on social cohesion. If we want to comprehend what kind of society we are living in, we need to understand the nature of information. Bits of information provide neither meaning nor orientation. They do not congeal into a narrative. They are purely additive. From a certain point onward, they no longer inform – they deform. They can even darken the world. This puts them in opposition to truth. Truth illuminates the world, while information lives off the attraction of surprise, pulling us into a permanent frenzy of fleeting moments. We greet information with a fundamental suspicion: Things might be otherwise. Contingency is a trait of information, and for this reason, fake news is a necessary element of the informational order. So fake news is just another piece of information, and before any process of verification can begin, it has already done its work. It rushes past truth, and truth cannot catch up. Fake news is truth-proof. Information goes along with fundamental suspicion. The more we are confronted with information, the more our suspicion grows. Information is Janus-faced – it simultaneously produces certainty and uncertainty. A fundamental structural ambivalence is inherent in an information society. Truth, by contrast, reduces contingency. We cannot build a stable community or democracy on a mass of contingencies. 
Democracy requires binding values and ideals, and shared convictions. Today, democracy gives way to infocracy…another reason for the crisis of community, which is a crisis of democracy, is digitalization. Digital communication redirects the flows of communication. Information is spread without forming a public sphere. It is produced in private spaces and distributed to private spaces. The web does not create a public. This has highly deleterious consequences for the democratic process. Social media intensify this kind of communication without community. You cannot forge a public sphere out of influencers and followers. Digital communities have the form of commodities; ultimately, they are commodities”.

Byung-Chul Han points out “of course, there was information in the past, too. But it did not determine society to such a degree as today. In antiquity, mythical narratives determined people’s lives and behavior. The Middle Ages were, for many, determined by the Christian narrative. But information was embedded in narration: An outbreak of the plague was not pure, simple information. It was integrated into the Christian narrative of sin. Today, by contrast, we no longer have any narratives that provide meaning and orientation for our lives. Narratives crumble and decay into information. With some exaggeration, we might say that there is nothing but information without any hermeneutic horizon for interpretation, without any method of explanation. Pieces of information do not coalesce into knowledge or truth, which are forms of narration. The narrative vacuum in an information society makes people feel discontent, especially in times of crisis, such as the pandemic. People invent narratives to explain a tsunami of disorienting figures and data. Often these narratives are called conspiracy theories, but they cannot simply be reduced to collective narcissism. They readily explain the world. On the web, spaces open to make experiences of identity and collectivity possible again. The web, thus, is tribalized – predominantly among right-wing political groups where there is a very strong need for identity. In these circles, conspiracy theories are taken up as offers for assuming an identity…Friedrich Nietzsche once said that our happiness consists of the possession of a non-negotiable truth. Today, we no longer have such non-negotiable truths. Instead, we have an over-abundance of information. I am not sure that the information society is a continuation of the Enlightenment. Maybe we need a new kind of enlightenment. 
On a new enlightenment, Nietzsche noted: It does not suffice that you realize the ignorance in which humans and animals live, you also have to have the will to be ignorant and learn more. You need to comprehend that without this kind of ignorance life would become impossible, that only on condition of this ignorance can what lives preserve itself and flourish.”

Martin Gurri[2] points out “information now flows freely, unencumbered by either geography or monopoly power, which means what we experience and think has nothing to do with the legacy polities – nation-state, supra-national alliance, municipality – that have defined Western life for three centuries. How do we reconcile the world of bits with that of atoms? We can’t even have a civil war, as it’s impossible to draw the lines. How do we reconcile that seismic tension between the intangible borders of our intellects and values with the very tangible borders that define political and economic realities? Is this part of why these revolts haven’t (largely) congealed into effective movements? Is that a tragedy, or is such intello-geographic tension in fact a saving grace preventing Civil War II? You need at least two preconditions for civil war. One is a potent issue that divides the population. In 1930s Spain it was Communism versus Catholicism. In our own Civil War it was union/freedom versus states’ rights/slavery. Today the big issue is…what? Wearing MAGA hats? Racism certainly isn’t a divisive issue. Everybody’s against it. Capitalism isn’t much of an issue. Nobody even talks about class or poverty any more. Identity’s too diffuse and dispersed…For all the anger, we live pretty comfortably within this evil old system…The second precondition is an inexhaustible supply of young men. To die willingly in appalling numbers as soldiers did both in Spain and here you need a lot of testosterone. We just haven’t produced enough young men to get there…What has changed in the new information landscape is that it has given a voice to the deplorables who are the majority in every country, but also to marginal actors who would have been shut out under the old industrial model. Fifty years ago, the mass audience could only sit and listen. Today the public talks back – hipsters, rednecks, it doesn’t matter, everyone talks back in loud, rude tones.
The uproar around every public conversation is a new and startling development.  It fills the elites with fear and loathing, so that they start hallucinating a civil war”.

What is remarkable about modern movements such as Occupy Wall Street, Gilets Jaunes, or Antifa “is that their ideologies never rise above shouted slogans, barely even filling a single tweet…Unlike the various revolutionary movements of the past, such as the PLO, ETA, IRA, Muslim Brotherhood, the 26th of July, etc., there is no intellectual progenitor, cogent politics, formal policy program, organized political wing as adjunct to the violent one, nor even a quickly-mutating platform used for propaganda. There’s simply a violent and photogenic rejection of the status quo. Does this actually lead to anything or, as you put it, the consequence wasn’t revolution but the threat of perpetual turbulence?…When you organize online, you don’t need any of the trappings of 20th century radicalism – a revolutionary command and control organization, a maximum leader, a program, even a coherent ideology. All you need is a smart phone and a sufficient measure of anger against.  This carries tremendous tactical advantages. The street insurgents will invariably catch the authorities by surprise, because institutionally they don’t exist. They are a crowd from the cloud: people from nowhere that suddenly materialize everywhere. But the digital path to revolt suffers from a congenital, and probably fatal, strategic defect. Without a leader or a program, you can’t maneuver. You can’t adjust your tactics. You can’t negotiate with power, for example. Surprisingly often, governments have caved in and offered to meet the protesters’ demands; this happened in Israel in 2011 and France in 2019. In both cases – in every case I am aware of – the protesters showed little interest in figuring out what their demands were. They wouldn’t take yes for an answer…So when you ask whether today’s protests will ever lead to anything, the answer is probably not. They have little positive content. 
My concern is that they might lead to nothing – to a politics of righteous annihilation and a society lobotomized of all memory. The lust for destruction, rather than fascism or some successor ideology, looms as the great threat to democracy today”.

Martin Gurri explains “human knowledge is much more limited than we like to admit.  To shape the flux of events into a story that will persuade the public, therefore, the elites must control the means of communication. When that control slips, the elite class lapses into a state of crisis. Every major transformation in information technology has brought in train widespread chaos and disruption, often accompanied by bloodshed, as the old elites – wedded to obsolete forms of communication – were chased up their castle towers and heaved out the window. The most disruptive innovation of this nature was surely the printing press. It inspired revolutions in religion, politics, and science. At the present time, we are in the first stages of a gigantic transformation from the industrial mode of information and communication to something that doesn’t even have a name yet. It’s an extinction event for the narratives…Not surprisingly, the people in charge of running things are terrified of saying anything at all – it might come back to bite them. In a Darwinian sense, they are selected for the ability to use words that have no meaning…The appeal to the corpses of once-powerful ideologies itself is evidence of our exhausted powers of explanation, and these dusty mummies, dragged up by the swirl of surface effects, will almost certainly be swept away in the great transformation. I would not say that our institutions are mired in a period of secular incompetence and decline. That is actually true, but I wouldn’t use those words.  I would say that our institutions are structurally (and, I believe, catastrophically) mal-adapted to the new information environment, and that the people who run them are both unable and unwilling to reform them”.

Martin Gurri argues “elites have progressively lost the ability to mediate between events and the old shared stories. Elite omissions and evasions, falsehoods and failures, are now out in the open for all to see…No established authority remains to settle questions of fact. In that sense, the interpretation of reality is up for grabs…The mirror is broken, and the great narratives are fracturing into shards. What passes for authority is devolving to the political war-band and the online mob – that is, to the shock troops of populism left and right…What comes next? Maybe chaos…On the far end of the turbulence, the system will be reconstituted along somewhat different lines.  It is impossible from here to predict the character of the new organizing principles – but it’s safe to say that the radical egalitarianism favored by anti-establishment movements will not be among them. Authority will not devolve from the elites to the public…Stable interpretations of reality seldom arise from a free-for-all…A complex society can’t dispense with elites…The sociopolitical disorders that torment our moment in history, including the fragmentation of truth into post-truth, flow primarily from a failure of legitimacy, of the bond of trust between rulers and ruled…Those at the top have forsaken their function yet cling, illicitly, to their privileged perches…If my analysis is correct, the re-formation of the system, and the recovery of truth, must depend on the emergence of a legitimate elite class”.

Martin Gurri thinks “the great political conflict of our century, I believe, is that between a networked public and the elites who inhabit the great hierarchical institutions that organize modern life” with “this tectonic collision already rattling the world”. “The elites seem to be in a state of confusion bordering on panic. They are baffled by anything digital and utterly clueless about where all the nonentities shouting angrily outside their windows come from…The public, which swims comfortably in the digital sea, knows far more than elites trapped in obsolete structures. The public knows when the elites fail to deliver their promised solutions, when they tell falsehoods or misspeak, when they are caught in sexual escapades, and when they indulge in astonishing levels of smugness and hypocrisy. The public is disenchanted with the elites and their institutions, much in the way science disenchanted the world of fairies and goblins. The natural reaction is cynicism. The elites aren’t seen as fallible humans doing their best, but as corrupt and arrogant jerks. In other words, the elites had abandoned the idea of serving the public before the arrival of the digital tsunami. What that catastrophe did was to reverse the polarities of power: it was the public that was now technologically adept, politically restless, and in revolt against the perplexed elites”.

Martin Gurri concludes distrust is prevalent today because “the story-tellers – public officials, the media, scientists: the elites – live in an entirely different information universe from the rest of us. They behave as if we were still in the 20th century, and information is still their monopoly, which they dispense as they see fit and which we will accept on authority. They pretend that they alone have escaped Plato’s cave:  they know. So their stories strike a mathematical pose, and seek to explain, from on high, how they will apply their expertise to solve political, social, or health problems…there will be, at some point, new stories that take into account the new environment and ensure a return of trust in rulers and institutions. The will to believe is real. The question, for me, is how much damage will be inflicted before that civilizational turning, and whether liberal democracy will still exist on the other side”.

The threat of narratives and content moderation

The transformation of content has amplified the potency of narratives as a weapon. Governments, businesses and even lone wolves have access to the technology and networks needed to conduct a narrative war. Information overload or throughput-based strategies (e.g. AI-driven echo chambers) can be combined with complexity-related strategies – fakes, smears, conspiracy theories, audience targeting and gamification techniques – to construct reality bubbles, manipulate public emotions (usually outrage) and indoctrinate the public through conspiracy theories.

Recently, democratic governments have singled out the threat posed by foreign-influenced narratives. Foreign misinformation campaigns were thought to have polarised voters and swayed the outcomes of the Brexit vote and the 2016 US elections. In addition, intelligence agencies are perceived to be involved in weaponising narratives. Edward Lucas notes that technology has disrupted the intelligence industry by shrinking the “cloak of anonymity”, which has “severely constrained the ability of intelligence officers and their sources to operate safely and secretly”. Intelligence agencies have embraced the commercialization of espionage, blurring “the boundaries between public and private sector intelligence work”. Retired intelligence officers are hired as contractors and actively recruit their ex-colleagues. Instead of traditional intelligence gathering, investigative agencies rely on open-source information, commercial databases and hacked or leaked material. Intelligence agencies also work with outside actors “to find out what is going on and in order to influence it”. “Spies today increasingly need to work with lawyers, both to counter adversaries’ reliance on lawfare – the use of the legal system to delegitimize an enemy or win a public relations victory – and to test the legality of their own operations”. Where “the tricks of the trade – bugging, impersonation, hacking – are illegal, they can simply be outsourced to a suitably unscrupulous subcontractor.” Spy agencies routinely “enlist the help of lawyers, journalists[3], accountants, business executives, and academics”.
“Spies and intelligence chiefs need to be media-savvy, countering and mounting information operations…Intelligence officers involved in active measures – making things happen rather than just finding out about them – can find it useful to brief journalists, either highlighting solid facts and logic that help their case or on occasion inventing or twisting source material in order to produce new coverage with the requisite slant or spin”. He cautioned on the danger that “the intelligence services of democratic countries may become too flexible and too deeply involved in the institutions and procedures of a free society. The temptation to do so will be particularly strong in countries facing the full blast of hostile influence operations…Intelligence-led criminal justice sanctions and regulatory sanctions – arrests, asset freezes, deportations, banning media outlets, and so forth – that should be the exception could become the rule”.

In the past, governments were able to control narratives by managing their relationship with the news media, which was the main communication channel. As a trade-off, governments respected its role as the fourth estate or power[4]: investigating wrong-doings and injustice, expressing independent opinions and serving as a forum for debate. But the influence of news media on the public narrative[5] has been greatly weakened by the emergence of social media platforms. Social media platforms have disintermediated traditional news media from their customers (readers, subscribers and advertisers) and their content (search engines, social media). Traditional media also had to cope with competition from non-traditional players (bloggers, influencers, citizen reporters), the speed and force of viral social media distribution, and new forms of engagement (memes, tweets, livestreaming). The loss of readership and advertising revenues undermined commercial viability and resulted in the closure of many newspaper firms[6].

Traditional news media were relatively pliant because the few dominant players were kept in check by their dependence on government licensing renewals and commercial ad revenues. Governments therefore regarded the news industry as a bulwark defending their narrative. The decline of the traditional news industry means government narratives are no longer distributed effectively, implying a loss of control over the public narrative. Governments have thus sought to defend the traditional industry through various policy actions. For example, Australia took the lead by using anti-monopoly legislation to address bargaining imbalances and to compel the dominant global platforms to negotiate with the domestic news industry. The law also contained non-discrimination and anti-retaliation provisions to ensure the dominant digital platforms did not bully the domestic publishers.

My view is that these government initiatives are likely to flounder and are unable to reverse the deteriorating fundamentals of traditional news media. The traditional news industry has already consolidated into a core surviving group, but this group continues to face pressures from loss of readership, revenue weakness and competition from new players. The economics of content[7] favours “sponsored push content”, with social media actors (influencers, cyber warriors) incentivised to generate politically- and corporate-sponsored narratives that crowd out independent and objective content, which suffers from cost inefficiencies, information overload and attention limitations. To survive, the remaining traditional news media have succumbed to commercial imperatives by becoming part of the “echo chambers”. Unsurprisingly, this has eroded public trust in traditional news media.

There is growing political and public pressure on governments to respond to “misinformation”. But it is difficult for democratic governments to intervene directly because the main battleground is not traditional media but the social media platforms. Narrative controls on social media platforms need to be set up differently because, unlike traditional media, platforms generally do not generate the content on their sites and claim to have little direct responsibility for it. This view is entrenched in the US, where social media platforms are protected by Section 230 of the Communications Decency Act of 1996, which “generally provides immunity for online computer services with respect to third-party content generated by its users”[8].

Evelyn Douek explains “regulators turn to platform procedure because constitutional and practical limitations on governmental power mean that the job of setting most substantive content moderation rules cannot be taken away from private companies. Platforms can and will engage in content moderation beyond what the law could proscribe. Platforms that removed only content that could be made illegal would rapidly become unusable, mired in spam, porn, harassment and other graphic but not unlawful speech. Many of the biggest content moderation controversies involve protected speech. In the United States, content like the Christchurch Massacre livestream, hate speech, or coronavirus misinformation, for example, cannot be legally proscribed”.

Given the above constraints, governments have generally adopted self-regulation of platform content. Aspects of a self-regulatory framework include the level of formality (whether there is a statutory obligation to moderate content), which determines whether membership and rules are mandatory (defined in law) or voluntary (based on industry codes); the degree of regulatory oversight and enforcement; and the feasibility of establishing a self-regulatory organisation (SRO) for social media. This does not preclude the government or the public from pursuing legal action directly against content originators and platforms. Content moderation usually involves the following.

  • Fact-checking. Fact-checking is used to identify “misinformation” but is constrained by the lack of consensus on what constitutes facts. Evelyn Douek notes there is “no public information about how often or why the platform refers content to factcheckers or how fact-checks are used. Like Facebook, TikTok attempts to harness the legitimacy dividends of working with outside experts while structuring the relationship in a way that allows it to retain discretion. In these ways, arrangements designed to ensure the appearance of platform neutrality are plagued with procedural loopholes that subvert that very purpose”. “Platforms’ reliance on authoritative sources like the World Health Organisation to determine what constituted coronavirus misinformation is another example of delegated decision-making”.
  • Content immoderation. Evelyn Douek notes “Facebook CEO Mark Zuckerberg, in a blog post titled A Blueprint for Content Governance, described how, no matter where you draw a policy line, content that approaches that line will get more engagement so rather than simply removing content the most effective way of dealing with misinformation is to reduc[e] its distribution and virality. Facebook’s transparency reports on its content moderation highlight these measures: in boasting about reductions in how often users saw hate speech, Facebook cited not better rules, or better moderation tools, or more human moderators, but changes to the News Feed to reduce the number of times we display posts that later may be determined to violate our policies.” However, content immoderation involves the same false positives and negatives as content moderation.

Bill Rice notes a Facebook employee’s explanation that “we attack virality aspect through feed demotions. We remove content that can lead to imminent physical harm. For content that doesn’t meet that threshold, we instituted borderline demotions. For example, someone sharing negative side effect posts. Similarly, posts questioning whether you get a vaccine under a mandate, whether it’s government overreach. We demote those. That’s not false information but it leads to a vaccine negative environment”. He argues that “almost every post that was blocked or demoted contained information which, if widely disseminated, could have perhaps debunked all the false narratives the government was committed to spreading”. “Here we learn that Facebook users also couldn’t fully share the opinion that vaccine mandates were government overreach. Apparently our government has not overreached when it tells citizens and companies that they can’t share certain opinions. Whether people realize this or not, statements like this mean we might as well be living in North Korea or 1978 East Germany. Basically, one cannot accuse our own democratically-elected government of overreaching – per government decree!”

  • Friction and user controls. Evelyn Douek notes “in the past year platforms have started introducing nudges designed to minimize the amount of violating content users post or share in the first place. Examples include prompts to think twice before posting certain types of content…and adding friction, such as extra clicks or limits on how many times a message can be forwarded”. Another “trend in platform design has been to give users more control. This includes measures like allowing users themselves to control who can engage with their content, create preferences for what kind of content gets recommended to them, mass-delete comments on their posts, or designate their own forum moderators or experts…But these different forms of content moderation all exhibit the same accountability deficits, the same information asymmetries, and the same capacity to influence the online information ecosystem”.
  • Demonetisation and deplatforming. Social media firms can respond to persistent or high-impact content infringements by demonetising or deplatforming content providers. Content providers complain that these actions can be arbitrary (boilerplate notices without clear explanation) and taken without sufficient warning, and that opaque and unaccountable recourse processes can leave accounts demonetised for an uncertain period. Tom Mckay argues that while demonetisation can have some effect, “there are many ways for those who accrue vast audiences via mainstream social media sites to make money off-site, such as donations, and crowdfunding campaigns, supplement and alternative medicine sales, and going on Fox News constantly to promote their book. Allowing misleading scientific claims to remain up, albeit demonetized, thus doesn’t remove the financial incentive to post them in the first place”. He adds “research has shown that while de-platforming might not be a long-term solution to misinformation and hate speech, it does act as a constraint on the relatively small but disproportionately influential individuals and groups most committed to spreading it”. However, “bans on mainstream sites have driven large numbers of believers to alternative platforms where more openly extreme beliefs thrive, like messaging app Telegram. To some degree, the dispute comes down to whether this sort of self-imposed online quarantine is preferable to giving these web users access to audiences on major platforms, or whether it works at all”.
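The friction mechanisms described above (forward limits, extra confirmation clicks) can be caricatured in a few lines of code. This is a minimal illustrative sketch only; the cap of 5 and the function names are assumptions for exposition, not any platform's actual rules, which are far more elaborate:

```python
# Toy sketch of two "friction" mechanisms: a cap on message forwarding
# and a "think twice" confirmation step before posting borderline content.
# The limit of 5 and all names here are hypothetical.

FORWARD_LIMIT = 5  # hypothetical cap on how many times a message may be forwarded


def can_forward(times_already_forwarded: int) -> bool:
    # Allow forwarding only while the message is still under the cap.
    return times_already_forwarded < FORWARD_LIMIT


def next_step(post_is_borderline: bool) -> str:
    # Friction, not removal: borderline content triggers an extra
    # confirmation click before it is published.
    return "confirm_prompt" if post_is_borderline else "publish"
```

The point of the design is that nothing is deleted: the message can still be posted or forwarded, but the added step slows its spread.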

In addition, the costs of content moderation can outweigh its benefits. First, the costs of content moderation controls – organisational structure, quality controls for filtering, flagging and actions against “offending” content, and the consequential reputational, compliance and legal risks (e.g. outsized fines) – can be substantial without tangible gains. Second, the commercial impact is largely negative because content moderation tends to reduce traffic or even divert it to competitors. It is also noted that rising content moderation costs benefit the dominant media firms and social media platforms at the expense of smaller players.

The biggest drawback is the lack of clear evidence that current approaches to content moderation even achieve the broad goals of reducing misinformation or nullifying its impact on the public. The Royal Society’s (UK’s national academy of sciences) report[9] on the online information environment argues that while online misinformation is rampant, the impact of so-called echo chambers may be exaggerated and there is little evidence to support the filter bubble hypothesis (algorithm-fueled extremist rabbit holes). “Demonstrating a causal link between online misinformation and offline harm is difficult to achieve”. While removing misinformation “may be effective and essential for illegal content (eg hate speech, terrorist content, child sexual abuse material) there is little evidence to support the effectiveness of this approach for scientific misinformation, and approaches to addressing the amplification of misinformation may be more effective”.

Thin line separating platform content moderation and censorship

Governments tend to blame the intensifying narrative wars on geopolitical rivalry, but the damage from domestic political polarisation could be just as bad, if not worse. The irony is that authoritarian regimes are more insulated from narrative wars due to their tight grip on content. In contrast, democratic societies are prone to information disorder and destabilisation due to the freedoms they afford to speech and debate. As democratic societies become informationalised, they become more modular, autonomous, complex, diverse and transient. This erodes the binding forces of traditional relationships, values and institutions. Internal divisions are reinforced by a growing sense of insecurity and dissatisfaction among citizens in rich and unequal societies. The narrative wars fan fears and distrust. Distrust of governments, politicians and media unravels societal consensus on a wide range of issues and values such as vaccines, masks, lockdowns and election fraud, abortion, LGBTQ, immigration, climate change and diversity.

Hence, democratic governments are keen to control narratives to limit damage and have relied on content moderation as the main tool. But if content moderation rules are scoped too narrowly and enforcement is too light, regulation is unlikely to have much impact. Recent trends indicate governments are opting for a stronger response by scoping the laws widely and providing enforcement powers to suppress divergent narratives. In effect, this has opened a backdoor to extensive government censorship and political prosecution.

In the US, revelations from the “Twitter files” and from “independent” politicians and journalists highlight how the government colluded with and pressured social media platforms to censor views. In Missouri v. Biden, Judge Terry Doughty made a preliminary ruling to prevent the Biden administration from communicating with social media platforms to censor content containing constitutionally protected speech. The House Committee on the Judiciary and Select Subcommittee on the Weaponization of the Federal Government released an “interim staff report detailing how the Cybersecurity and Infrastructure Security Agency (CISA) expanded its mission to surveil and censor Americans’ speech on social media. The report…outlines collusion between CISA, Big Tech, and government-funded third parties to conduct censorship by proxy and cover up CISA’s unconstitutional activities”. The report also details how “CISA considered the creation of an anti-misinformation rapid response team”, “moved its censorship operation to a CISA-funded non-profit after CISA and the Biden Administration were sued in federal court, implicitly admitting that its censorship activities are unconstitutional” and “wanted to use the same CISA-funded non-profit as its mouthpiece to avoid the appearance of government propaganda”.

Nick Corbishley notes that on 25 August 2023, the “big social media platforms will have to begin fully complying with the European Union’s Digital Services Act, or DSA. The DSA…obliges all Very Large Online Platforms, or VLOPs, to speedily remove illegal content, hate speech and so-called disinformation from their platforms. If not, they risk fines of up to 6% of their annual global revenue. The Commission has so far compiled a list of 19 VLOPs and VLOSEs (Very Large Online Search Engines), most of them from the US. Smaller platforms will have to begin tackling illegal content, hate speech and disinformation from 2024 onwards, assuming the legislation is effective”. The DSA “includes a crisis response mechanism that is clearly modeled on the European Commission’s initially ad hoc response to the conflict in Ukraine and which requires platforms to adopt measures to mitigate crisis-related misinformation.” The general categories of misinformation cover medical (e.g. pandemics), civic (electoral integrity), and crisis (war in Ukraine). “EU’s Code of Practice on Disinformation requires it to do so in connection with the so-called demonetization of disinformation…the Commission will mobilise the entire arsenal of punitive measures at its disposal, in particular the threat or application of fines of 6% of the company’s global turnover”.

Rather than appoint an independent regulator or judicial authority, Nick Corbishley points out “the ultimate decider of what constitutes mis- or dis-information, possibly not just in the EU but across multiple jurisdictions around the world, will be the European Commission”. This raises conflict of interest issues. By granting itself “enforcement powers similar to those it has under anti-trust proceedings,” the Commission has the ability to “take mass censorship to levels not seen in Europe since at least the dying days of the Cold War”. It is ironic that “one of the main justifications for the Collective West’s increasingly aggressive posture” is to “stem the drift toward authoritarianism” led by its strategic rivals. Yet, “the Collective West is, if anything, drifting faster in that direction through its wholehearted embrace of digital censorship, surveillance and control”.

Corynne Mcsherry and Katitza Rodriguez point out that several proposals being discussed in Canada “are dangerously misguided and will inevitably result in the censorship of all kinds of lawful and valuable expression”. They include broad harmful content categories that explicitly include speech that is legal but potentially upsetting or hurtful; a hair-trigger 24-hour takedown requirement; an effective filtering requirement; penalties of up to 3 percent of the providers’ gross revenues or up to 10 million dollars, whichever is higher; mandatory reporting of potentially harmful content (and the users who post it) to law enforcement and national security agencies; website blocking and onerous data-retention obligations. They think “perhaps the most terrifying aspect of the proposal is that it would create a new internet speech czar with broad powers to ensure compliance, and continuously redefine what compliance means. These powers include the right to enter and inspect any place (other than a home)…and examine the document, information or thing or remove it for examination or reproduction; to hold hearing in response to public complaints, and, do any act or thing…necessary to ensure compliance”.

Corynne Mcsherry and Katitza Rodriguez argue “the potential harms here are vast, and they’ll only grow because so much of the regulation is left open…users caught up in these sweeps could end up on file with the local cops – or with Canada’s national security agencies, thanks to the proposed reporting obligations”. They point out there is a chilling effect that will hurt “marginalized groups, both online and offline. Faced with expansive and vague moderation obligations, little time for analysis, and major legal consequences if they guess wrong, companies inevitably overcensor – and users pay the price”. The inadvertent consequence of over-censorship is to create information blindspots that impede analysis and problem-solving, which increases risks to marginalised groups. Even voluntary content moderation rules can backfire because the erasure of online content hides evidence of harassment, discrimination and human rights abuses. In relation to this, several aspects of the Canadian proposals are modelled on rules that have been criticised, such as “Germany’s Network Enforcement Act, which deputizes private companies to police the internet, following a rushed timeline that precludes any hope of a balanced legal analysis, leading to takedowns of innocuous posts and satirical content” and “France’s hate speech law”. Finally, Canada’s proposal falls far short of meeting the criteria set in Article 19 of the International Covenant on Civil and Political Rights, which “allows states to limit freedom of expression under select circumstances, provided they comply with a three-step test: be prescribed by law; have legitimate aim; and be necessary and proportionate. Limitations must also be interpreted and applied narrowly”.

Robert Crawford notes “sustaining public support is a requirement for a global power that regularly employs its coercive instruments and is occasionally subject to democratic checks. The work of legitimation is ongoing, demanding not only a high level of concealment but also efforts to shape that which is accepted without question and considered unworthy of attention and concern. Dissenting views challenging dominant narratives are ignored or ridiculed. Information that counters official accounts is denied, minimized, and neutralized. Leakers and even news organizations that report top secrets are vilified as threatening national security and face potential prosecution”. Drawing from Norman Solomon’s new book, War Made Invisible: How America Hides the Human Toll of its Military Machine, “Solomon reviews several reasons for media complicity, including editorial control, the perceived duty among journalists to support the war effort and support the troops, journalists’ dependence on information provided by the military, and the risk of alienating their sources”. While this criticism is directed at the US, it likely also applies to most governments.

The predicament for democratic societies is that it is easier to figure out what governments shouldn’t do rather than what they should do to manage narratives. Should governments attempt to ban politicians from lying, prevent the spread of gossip and halt the manipulation of information? These are things regulation shouldn’t try to do. Yet, there are signs that democratic governments are choosing to tighten narrative controls. The worry is that censorship overreach will spill over into criminalisation of “misinformation” – content that dissents from government policies – and prosecution of opposition politicians, whistleblowers and journalists. There are also indications the private sector is aiding and abetting government coercion through deplatforming and debanking[10] practices. Stringent government controls over the public narrative may re-establish information order but would undermine an information democracy.

Misinformation and narratives

The debate on content moderation is often misdirected because it doesn’t take into account the ambiguity of information. Jason Kelley explains “platforms are notoriously bad at moderation. Even when detailed guidelines for moderators exist, it’s often very hard to apply strict rules successfully to the vast array of types of speech that exist – when someone is being sarcastic, or content is ironic, for example. As a result, creating new categories of speech that online services are liable for hosting would almost certainly result in overbroad takedowns. Right now, platforms are allowed to mostly create their own rules for how they moderate. Giving the government more power to control speech would not be a remedy for the moderation problems that exist. As an example, social media platforms have long struggled with the problem of extremist or violent content on their platforms. Because there is no international agreement on what exactly constitutes terrorist, or even violent and extremist, content, companies look at the United Nations’ list of designated terrorist organizations or the US State Department’s list of Foreign Terrorist Organizations. But those lists mainly consist of Islamist organizations, and are largely blind to, for example, U.S.-based right-wing extremist groups. And even if there was consensus on what constitutes a terrorist, the First Amendment generally would protect those individuals’ ability to speak absent them making true threats or directly inciting violent acts. The combination of these lists and blunt content moderation systems leads to the deletion of vital information not available elsewhere, such as evidence of human rights violations or war crimes. It is very difficult for human reviewers – and impossible for algorithms – to consistently get the nuances of activism, counter-speech, and extremist content itself right. 
The result is that many instances of legitimate speech are falsely categorized as terrorist content and removed from social media platforms. Those false positives, and other moderation mistakes, will fall disproportionately on Muslim and Arab communities. It also hinders journalists, academics, and government officials because they cannot view and or share this content. While sometimes problematic, the documentation and discussion of terrorist acts is essential given that it is one of the most important political and social issues in the world. With further government intervention into what must be censored, this situation could potentially become much worse, putting marginalized communities and those with views that differ from whoever might be in power in an even more precarious situation online than they already are”. In addition, “what is considered controversial is often shifting, and context- and viewpoint- dependent, it’s important that these views are able to be shared. Defund the police may be considered controversial speech today, but that doesn’t mean it should be censored. Drain the Swamp, Black Lives Matter, or even All Lives Matter may all be controversial views, but censoring them would not be beneficial. Online platforms’ censorship has been shown to amplify existing imbalances in society – sometimes intentionally and sometimes not. The result has been that more often than not, platforms are more likely to censor disempowered individuals and communities’ voices. Without Section 230, any online service that did continue to exist would more than likely opt for censoring more content – and that would inevitably harm marginalized groups more than others”.

Daniel Williams points out “misinformation lacks a consensus definition. If one defines it broadly as misleading information, it is utterly ubiquitous. Even among mainstream news sources, reporting is often highly selective and partisan. Indeed, given that news involves presenting an extremely non-representative sample of attention-grabbing events, one could argue that news is inherently misleading even in the absence of partisan biases”. This is because “regardless of media misinformation, the truth is often complex, uncertain, and counterintuitive, and it can be difficult to figure out, not least because people have limited information, are busy, and are subject to various reasoning biases. Moreover, the beliefs we hold and the identities through which we interpret the world emerge over extended periods from infancy and feature complex interactions between our predispositions, personalities, development, life experiences, communities, and much more. The worldviews that result from this process are often partial, inaccurate, and difficult to dislodge, especially because many people distrust scientists, government agencies, and public health authorities. Although the causes of this distrust and its apparent recent increase in many countries are contested, it seems to be rooted in factors pertaining to economics, identity, polarization, and institutional failures, not  infection by fake news. This perspective is sharply at odds with…hypothesis that false beliefs arise because people are insufficiently skeptical of information they encounter online or from other media sources. If, instead, popular misperceptions often have much deeper roots and emerge from processes spanning many years, the real problem in many cases might be the exact opposite: that people are too wedded to their intuitions and overly skeptical of information from trustworthy sources. 
In that case, teaching people to be more vigilant about possible manipulation might backfire, providing them with greater resources to dismiss information at odds with their unfounded beliefs”.

On science, Michael Chamberlain asks “what if misinformation is coming from the public health officials themselves? And lately, government has not seemed to embrace differences in opinion, preferring instead to smother contrary opinions. The resulting erosion of trust in the officials in charge – less than half of Americans trust the Centers for Disease Control and Prevention (CDC) on COVID – could be more damaging than any misinformation found on social media…Far too often in recent years, when science gets in the way of the government’s agenda, science is disregarded, ignored, or undermined. The CDC, Food and Drug Administration (FDA), and National Institutes of Health (NIH) have appeared to make policy decisions and public representations inconsistent with science – including their own science. Science is undermined when scientists and the institutions that apply science do not follow the findings of unbiased studies. And science is undermined when those institutions do not make a good faith effort to collect data and let the data dictate a conclusion. When science gets tunnel vision for a result, it ceases to be science”. In these instances, agencies breach their obligation to uphold scientific integrity in their decision-making: they knowingly disseminate scientifically unfounded statements, contradict or ignore their own research, and overrule recommendations from their own scientists without proper scientific justification. He warns “the Biden administration often decries misinformation about anything contradicting its own narratives, but it appears to be one of the worst purveyors of misinformation. Citizens can’t trust a government that misrepresents the results of studies, or prevents the collection of, or even intentionally hides, data. The American public should expect that its science-based institutions and most prominent spokesmen follow the science and use the scientific method in reaching policy decisions.
Unless these institutions and their leadership change course, public trust will continue to plummet”.

The difficulty with classifying “misinformation” is that in reality there are few enduring “absolute truths” or “universal values” that survive intense scrutiny. Today’s orthodoxy can easily be overturned tomorrow. Forget about relying on politicians, officials, scholars, priests, scientists and experts to clarify truth or to check facts. They are opinionated cheerleaders for their cause (or income), quarrelsome, and are as likely to be a source of misinformation. How then can content be moderated when there are no absolute truths?

In this context, it may be more appropriate to focus on narratives rather than misinformation. Take for example a situation where “truth” threatens national security. A narrative, which by definition must be a lie, needs to be constructed to counter the “misinformation”. What then are narratives? Narratives are generally uninterested in facts or logic; they are usually distilled into memes, slogans, talking points or smears and tailored to maximise influence given the short attention spans and fickle memories of the public. In essence, narratives are storyline frames that tap into loyalty and team dynamics.

Narratives usually have multiple objectives. From a strategic perspective, narratives are established to justify or reinforce the legitimacy of a set of beliefs and the political-cultural organisational structure. Narratives are dynamic. The storylines need to be constantly evolving and refreshed in line with changing situations. For example, establishment narratives face constant challenges from opposing narratives that seek to undermine their legitimacy. Thus, narratives have attention-based objectives intended to support, defend or refute a certain line of thinking; to either focus or divert attention; or to clarify, deflect or obfuscate issues. On a continuous basis, the objectives of narratives are tactical with the aim of controlling the news cycle. Narratives can also have emotional-based objectives such as to rouse anger or to pacify emotions.

However, the impact of misinformation narratives may be exaggerated. Daniel Williams notes panic about misinformation narratives took off in 2016. “Amid the global resurgence of nationalist populism – and social media’s role in making fringe views more visible – many pundits, policymakers, and social scientists began to ask why so many people had seemingly lost their minds. The misinformation narrative supplied an answer: because people are misinformed about the world, and they are misinformed about the world because they have been exposed to misinformation. Three features of this explanation are worth noting. First, it is apolitical: it explains social and political conflict not in terms of people’s divergent identities, perspectives, and interests but in terms of factual errors caused by exposure to bad information. On this view, our political adversaries are simply ignorant dupes, and with enough education and critical thinking they will come to agree with us; there is no need to reimagine other social institutions or build the political power necessary to do so. Second, the argument is technocratic rather than democratic. By using an epidemiological metaphor, it suggests solutions to social problems akin to public health measures: experts must lead the way in tamping the spread of mind viruses and vaccinating the masses against them. Finally, though many critics find the misinformation panic too pessimistic, there is a deeper sense in which it is extremely optimistic. Foolproof does not just posit a simple explanation and remedy for complex social problems; it imagines a threat that can be straightforwardly identified. These features, I think, help explain why the misinformation-as-virus narrative has won such widespread endorsement. 
The belief that a dangerous misinformation virus is a major source of society’s problems is popular not because it is supported by evidence, and not because it has duped credulous individuals, but, most plausibly, because its apolitical, technocratic, and simplistic character resonates with the interests and biases of those who consume and propagate it”.

In addition, Daniel Williams explains “much of the current misinformation panic depicts humans as profoundly gullible, routinely revising their worldviews and behaviors based on what they encounter on the Internet. In fact, a large body of psychological research demonstrates that people exploit sophisticated psychological mechanisms for evaluating communicated information. If anything, such mechanisms – what cognitive scientists call epistemic vigilance – make individuals overly stubborn, too difficult to influence rather than too easy. In general, people are highly skeptical of information they encounter: if a message conflicts with their preexisting beliefs, they demand arguments that they find persuasive from sources that they judge to be trustworthy. Otherwise, they typically reject the message…This does not mean that everyone is always well-informed, of course. Ignorance is pervasive, and people hold inaccurate beliefs about many topics. But the issue is not whether people are misinformed; it’s why. Inoculation theory traces false beliefs to exposure to misinformation, but this is often a simplistic and misleading picture of our cognitive life”. He concludes that “fears about people’s manipulability are broadly unfounded…there is no such thing as brainwashing; even under the stress of an intense campaign of persuasion and manipulation, people are remarkably stubborn and difficult to influence”. “When people seem to have been manipulated by ideas, it is often not because they have been duped against their interests but, quite the contrary, because their behavior promotes their interests. We can see this at work in both the supply and demand of misinformation”. In this context, “the consumption of such misinformation is highly skewed. Most people consume very little, but a small minority of the population – consisting largely of avid conspiracy theorists, hyper-partisans, and extremists – consume a lot”.

In this context, we need to ask what fact-checking, content moderation and censorship are about, since misinformation lies in the eyes of the beholder. My view is that content controls are about narrative alignment. In the main, governments believe in preserving the dominance of their narratives to maintain public support and contain opposition to the regime. This means governments must also minimise or pre-empt opposing narratives. Hence, governments declare unfavourable narratives to be “misinformation” and present them as threats to “democracy”, “security” or “stability”. Of course, if there is a change of government, the new regime replaces the established narrative with one favourable to itself. Situations may also arise where the government is agnostic about a narrative but finds itself squeezed between conflicting narratives that force it to take a position one way or the other. Thus the objectives of fact-checking, content moderation and censorship have little to do with the truth. They are intended to reinforce the dominance and legitimacy of the government narrative, to blunt or even suppress the force of opposing narratives, or to manage conflicts. These developments represent a major setback to democracy and move society a step closer to authoritarianism.

Government’s role on controlling narratives in a democratic and informationalised society

Given the practical difficulties of moderating content, and the risks of censorship overreach and unintended political prosecution, should democratic governments treat the current narrative wars as a phase whose intensity would eventually be exhausted, and to whose adverse effects society would adapt and build immunity? I think governments cannot be mere observers, for several reasons.

First, the scale and consequence of misinformation is significant in a democratic and informationalised society. Aspects of a surveillance dystopia are already manifest in the security services, on platforms and in finance. In addition, today’s efforts to control narratives make the McCarthyism of the 1950s look inconsequential. Then, censorship was narrow; a witch hunt to root out communists. Today, due to information overload, censorship efforts extend across a broad swath of issues – hate speech, human rights, climate change, liberal democratic values, vaccines, masks, lockdowns, immigration, LGBTQ, and geopolitical rivalries. In this context, the rise of woke and cancel cultures has led to attempts to ban books, rewrite literature or educational texts and make movies or brands conform to “liberal democratic values”. Recently, a “conservative” counter-movement has emerged with opposing narratives. Participation is broad, with many wanting to have their say in censoring content. Narrative conflicts over such a broad range of issues reflect growing intolerance of differences in democratic societies. Censorship coverage will be further expanded by the prioritisation of national security. As censorship coverage becomes sweeping, it crowds out and leaves little space for free speech and democracy to survive.

Second, misinformation also affects individuals and businesses, but they are unable to resolve issues such as cyberbullying and other abuses on their own. Governments need to lead the way; indeed, the public expects governments to take on these responsibilities. Zhang Changyue reported that a China Youth Daily online survey of 1,000 individuals revealed they “expected public prosecution to be launched against severe online abuse with many calling for the police to assist victims in collecting evidence…highlighting increasing public concern over the cyberbullying on social order and the rights of individuals”. The top four obstacles are “the identification of the cyberbullyer because of online anonymity (69.6 percent), the high time and financial costs (64.2 percent), the trouble in collecting evidence (55.3 percent), as well as filing a case and starting legal procedures (52.7 percent)…To tackle these difficulties, 88.1 percent of participants called for public prosecution of more severe cases of cyber violence as the spread of online rumors and insults currently belonged to private prosecution, which means the victims needed to collect evidence by themselves which is usually very challenging such as finding out perpetrator on the internet. Some 68.1 percent of the surveyed youth thus expect the police to provide assistance for victims when it is difficult to collect evidence and 66.1 percent of them hope for enhanced supervision to make sure timely case filing”.
He adds that the survey was undertaken in conjunction with the release of draft guidelines by China’s leading court, the prosecutor’s office and the Ministry of Public Security, which outlined “the behaviors, societal harm and legal remedies concerning online violence, and proposed targeted solutions to address difficulties in filing cases, gathering evidence, and initiating public prosecutions…The draft categorized cyberviolence as behaviors including the spread of online rumors, online insults, infringement of personal information, and offline harassment, as well as the derivative behavior of malicious marketing and sensationalism. The draft emphasized heavier punishment on misconducts that targeted minors or disabled individuals, those who employ internet water army, fabricate sex-related topics and use deepfake technology for illegal information. According to the draft, cyber violence behaviors based on its degree of severity may face punishment in accordance to the criminal law”. In addition to monitoring online platforms where rumors and insults initially appeared, it was suggested zombie and promotional accounts be managed through requirements for real-name registration and faith-breaking penalty mechanisms.

Third, there are concerns that a prolonged narrative war would greatly diminish the public’s ability to differentiate between truth and falsehood. This leads to a widening gap between narrative and reality which will impede governments and societies from facing up to and resolving their real challenges. This is why it is essential that narratives, especially government narratives, are subject to scrutiny, debate and testing to ensure they are robust. This requires a vibrant democracy, which in turn requires tolerance of competing and conflicting narratives. Intolerance of debate reflects elitist distrust of citizens. If citizens cannot be trusted to understand information and make rational choices, can there be a democracy?

Censorship and political prosecution have chilling effects on free speech. But history demonstrates that attempts to repress free speech are likely to fail or will eventually backfire. While the concentration of traditional media and social media platforms facilitates stricter gatekeeping controls, the peer-to-peer structure of networks means misinformation or non-aligned narratives can easily find outlets to elude censorship.

The current narrative war reveals a great deal about the state of democratic societies. It reflects that democratic societies are distracted, full of doubts, insecure and divided on their future. The give-aways are the stagnant conversations on stale issues and the desperation to suppress conflicting narratives. In this context, the path towards restoring information order requires evolving a breakthrough narrative built around a “new” vision of society. But it is not easy to change the conversations as there tends to be substantial resistance to breakthrough narratives. “Participants are generally reluctant to depart from script. Participants are well versed with the playbook and can easily recite the talking points. Often the conversation helps put food on the table. For example, the narratives of central banks and Wall Street support the supply of liquidity and profits to markets and the finance industry. The private sector enthusiastically blames over-regulation for hurting competitiveness because it allows them to shift the blame to the government and to extract even more concessions (less taxes and regulation, more protection and incentives). At other times, the conversations are intended to navigate the path of least resistance and to minimise friction. The drawback of policy conversation reruns is that the same old suggestions will be repeated”[11]. Hence, the tendency is to repeat well-worn narratives (e.g. the US budget crisis, the Cold War) so that participants can go through the motions of solving familiar problems and crises. “It is extremely difficult to shift the policy conversation. Nobody knows where the new conversations could lead to. They could sow disorder, undermine legitimacy and screw up team dynamics. Participants could lose their bearings, suffer missteps, become disenfranchised or upset their income apple cart. New conversations are thus greeted with suspicion and resistance.
But there is no way around the fact that discovering new solutions require having new conversations”. As a result, the public wonders if policy-makers and politicians are out of touch. Widespread public dissatisfaction increases the risk of populist policy reminiscence: a choice between bringing back the good old days of the middle class, manufacturing jobs and Western supremacy or the bad old days of taxing the rich, breaking up platform monopolies or reviving the Cold War.

The stakes are high for a democracy. Democratic governments do not have the luxury of being passive bystanders to the toxicity of the narrative war. The responsibility falls on governments rather than private platforms to lead the exploration into how democracy should evolve with digitalisation. On this note, censorship is not the cure and will likely create more problems. Governments should reposition their policies on content and narratives by treating free speech as a public good and rejecting policies that do not promote democracy. Towards this end, governments should ensure that the scope of censorship is narrow and that wide leeway is given to non-security narratives relating to politics, health and religion. Instead of focusing on the message, the right approach to tackling “misinformation” is to focus on enhancing the quality of narratives – data and analysis to support well-informed debate; tolerance of criticisms; and choices framed within the context of trade-offs. There is also a need to work on the emotional aspects of narratives – generally to substitute anger with human qualities such as common sense and compassion and to replace vigilante-type movements with efficient and effective processes. Overall, societal change works best when it is driven by both visionary and critical narratives. Governments need to be cautious to ensure they do not inadvertently cancel out informed debate, analysis and criticism. We should remind ourselves that the real Orwellian threat to a democratic and informationalised society is centralised control of information and thought.

A broad agenda[12]

From a broader perspective, the task of managing narratives is part of the current tribulations of information disorder, which test society’s capability to manage massive amounts of information. Censorship is just a minor aspect of managing narratives. Governments have the responsibility and should set the vision to strengthen the content ecosystem to develop breakthrough narratives, counter potential threats from misinformation, rebuild credibility and strengthen public trust in a democratic and informationalised society. Below are some suggestions.

  • Re-organising government communications

The content landscape has been dramatically transformed but the government communications machinery remains roughly unchanged. Unsurprisingly, the government’s messages are being drowned out by social media and this is weakening its influence on the public. This is making democratic governments insecure and increasingly intolerant of un-sanitised content. While governments like to blame others (platforms, foreign enemies), the loss of influence reflects that governments are doing a bad job of getting their message out to the public. The government’s communications machinery – comprising ministries, agencies and national news agencies (print, radio and television) – has become anachronistic. Agencies, officials and politicians are increasingly outsourcing communications to consultants, cyber warriors and influencers. It should be noted that government content is weightier than private content because it consists of official pronouncements, and due care is needed as inaccuracies have substantial consequences. Excepting high-profile announcements, government news tends to be detailed and boring to avoid controversy, scrutiny and debate on official decisions and actions. While the readership base is small, the readers are highly influential and high-value.

The starting point for reorganising government communications is to map how government content is produced, vetted and distributed across traditional and new channels; with particular focus on the dependence on and costs of external communication resources. A core unit should be tasked to plan and oversee the reorganisation. This unit would monitor as well as coordinate communication and marketing by government agencies across all channels – social media, newspapers, search engines. It would recommend the streamlining of approval processes and resources, adopt at-source communications, assess messaging quality and channel effectiveness, analyse content users, and recommend initiatives to increase the usage and value-add of government content. To assess communication effectiveness, the unit could track whether government messages are reaching their intended audiences and what impact they have. Information gaps – whether different stakeholders are getting access to the information they need – should also be addressed. Other goals could include improving the public’s knowledge and appreciation of government policies and administration, increasing the levels of transparency, authenticity and accountability, increasing the usage of government information and ensuring the records of public activities – such as at courts and local governments – are well maintained (archived) and accessible to the public. Governments should aim to upgrade one-way communication into engagement by making public news more interactive and participative, consistent with social media trends.

  • Systems approach

Evelyn Douek argues it is misguided to view content moderation as “a rough online analogue of offline judicial adjudication of speech rights, with legislative-style substantive rules being applied over and over again to individual pieces of content by a hierarchical bureaucracy of moderators”. Focusing “on the merits of individual speech decisions…leads to endless and irresolvable arguments about the normative desirability of platforms’ substantive rules, whether they have been correctly and impartially applied in particular cases, and whether platforms have afforded due process to individual users”. “Even if there were not constitutional obstacles to substantive governmental regulation of content moderation, the sheer scale, speed and technological complexity of the task means state actors could not directly commandeer the operations of content moderation. This is a descriptive, not normative, observation: the state simply does not have the capacity to usurp platforms as the frontline of content moderation”.

There are thus questions over the “practicality of individualized due process. The European Union Digital Services Act (DSA) illustrates…requirements for extensive procedural protections in every case: platforms would need to provide reasons for any content removal, a right of appeal open for six months in all cases, a human in the loop for all appeals, and a further right of appeal to a third-party arbitrator…the German Federal Court of Justice has ruled that Facebook must provide every individual user notice if a post is deleted, reasons for that deletion, and an opportunity to reply”. “The futility of maximalist individual process…divorced from the reality of platform scale. It would be entirely impractical and result in most users not receiving any process at all given the percentage of claims that could be resolved would be miniscule”. “Courts will take months or years to resolve speech cases, finely parsing the details in an extraordinary effort to get them right. But even after all this effort, and no matter how much process and checks are piled on, there will always be disagreement about what outcomes in speech cases should be. Perfectibility in any system of speech regulation is illusory. The unfathomable scale of content moderation makes this all the more true. To believe otherwise is to force necessary trade-offs into the shadows and suggests reforms that will be counterproductive”.

Evelyn Douek argues “content moderation should instead be understood as a project of mass speech administration and that looking past a post-by-post evaluation of platform decision-making reveals a complex and dynamic system that needs a more proactive and continuous form of governance than the vehicle of individual error correction allows”. She advocates expanding the toolset for content moderation “beyond individual error correction”, “a systems thinking approach to content moderation regulation that focuses on systems not individual cases, on wholes and interrelationships rather than parts, and on patterns of change rather than static snapshots.” “The common thread here is that new technologies require new thinking about what it means to assert and protect rights. The scale and automated nature of decisionmaking have changed how decisions about rights are made by public and private entities alike. Scholars have argued that this requires rethinking governance in the context of due process, anti-discrimination and privacy rights. And yet speech governance proves especially resistant to this kind of analysis”. “It is a vast system of administration that includes a far broader range of decisions and decisionmakers…Content moderation…now includes many more things than it did even a few years ago: increased reliance on automated moderation; sticking labels on posts; partnerships with fact-checkers; greater platform and government collaboration; adding friction to how users share content; giving users affordances to control their own online experience; looking beyond the content of posts to how users behave online to determine what should be removed; tinkering with the underlying dynamics of the very platforms themselves. The people and processes that determine how user-generated content is treated on online platforms are therefore far more heterogeneous than depicted in the standard account.
Content moderators include engineers, product managers, authorities outside platforms, teams monitoring behavioral signals, industry peers, and government partners”. A systems thinking approach embraces the diversity and dynamism of content moderation systems, which are in constant flux. “Instead of focusing on the downstream outcomes in individual cases, it focuses on the upstream choices about design and prioritization in content moderation that set the boundaries within which downstream paradigm cases can occur. Instead of creating barriers to entry or locking in a vision of content moderation that is fixed and reflects the practices of the current dominant firms, it would allow for innovation and iteration. And in focusing on procedural accountability rather than the pursuit of some substantive conception of an ideal speech environment, it is more politically feasible and less constitutionally vulnerable”.

Evelyn Douek highlights transparency as a key element. The heterogeneous decisionmakers involved in content moderation should be made identifiable so they can be held to account for their decisions. Reforms should include “procedural requirements directed at surfacing platforms’ ex ante decisions about system design, creating processes for holding platforms to and testing their implementation of these commitments, and facilitating systemic transparency that can generate information for the industry and regulators, with a view to creating more specific standards and mandates in the future…can mitigate some of the most persistent concerns about content moderation’s accountability deficits and produce a virtuous cycle of regulatory, public and industry learning. The function of these processes is not to control discretion but to make its practice transparent. They facilitate diagnosis and improvement”. She notes transparency can enable more effective stakeholder input, inform market and regulatory responses, provide an empirical basis for content moderation design and be used to establish industry benchmarks of reasonable and responsible company behavior. Finally, transparency can bring legitimacy and accountability to platforms’ decisions.

  • Using technology to fight technology

Bruce Schneier and Nathan Sanders point out there are legitimate concerns “that AI could spread misinformation, break public comment processes on regulations, inundate legislators with artificial constituent outreach, help to automate corporate lobbying, or even generate laws in a way tailored to benefit narrow interests”. Nonetheless, there is increasing demonstration of “the potential beneficial uses of AI for governance…in democratic processes is to serve as discussion moderator and consensus builder. To help democracy scale better in the face of growing, increasingly interconnected populations – as well as the wide availability of AI language tools that can generate reams of text at the click of a button – the US will need to leverage AI’s capability to rapidly digest, interpret and summarize this content”. “In 2021, the Council of Federal Chief Data Officers recommended modernizing the comment review process by implementing natural language processing tools for removing duplicates and clustering similar comments in processes governmentwide. These tools are simplistic by the standards of 2023 AI. They work by assessing the semantic similarity of comments based on metrics like word frequency (How often did you say personhood?) and clustering similar comments and giving reviewers a sense of what topic they relate to”. The drawback is that the nuances and context of public feedback might be overlooked as well as the risk of missing “out on the opportunity to recognize committed and knowledgeable advocates, whether interest groups or individuals, who could have long-term, productive relationships with the agency”. They note modern AI techniques can be used to identify distinctive experiences, arguments and testimony and to detect outliers.
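The word-frequency clustering described above can be sketched in a few lines. The code below is a minimal illustration, not the Council's actual tooling: it represents each comment as a bag-of-words frequency vector, scores similarity with cosine overlap, and greedily groups near-duplicates. The similarity threshold and the sample comments are arbitrary assumptions for demonstration.

```python
from collections import Counter
import math

def vectorize(text):
    # Bag-of-words frequency vector - a crude stand-in for semantic similarity
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two frequency vectors
    dot = sum(a[w] * b[w] for w in a if w in b)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def cluster_comments(comments, threshold=0.8):
    """Greedy single-pass clustering: each comment joins the first
    cluster whose representative it resembles closely enough,
    otherwise it starts a new cluster. Exact duplicates (cosine 1.0)
    always merge, so this also performs de-duplication."""
    clusters = []  # list of (representative_vector, member_comments)
    for comment in comments:
        vec = vectorize(comment)
        for rep, members in clusters:
            if cosine(vec, rep) >= threshold:
                members.append(comment)
                break
        else:
            clusters.append((vec, [comment]))
    return [members for _, members in clusters]

comments = [
    "I oppose the proposed rule on corporate personhood",
    "I oppose the proposed rule on corporate personhood!",
    "Please consider the impact on small farms",
]
groups = cluster_comments(comments)
```

Running this groups the two near-identical form comments together and leaves the distinctive comment on its own, which is the behaviour that lets reviewers surface outliers. It also makes the article's caveat concrete: a purely frequency-based metric would treat a nuanced comment that happens to reuse common words as "similar", which is why the authors point to modern AI techniques for detecting distinctive testimony.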

Hence, the use of AI and other technologies should be maximised. In this context, the role of humans in generating news content, maintaining editorial quality and handling distribution will diminish because human-intensive processes are too costly. AI should be used to lower the costs of production, quality control and distribution as well as to strengthen the learning curve for humans in the content process.

  • Authentic and transparent zones

The boundaries for credible narratives were once set by traditional media. This has changed. The narrative wars have lowered public trust in the credibility of traditional and social media alike. The credibility vacuum suggests the need to create blue-ribbon zones of credible content; backstopped by high levels of authenticity and transparency. These zones would be regulated and subject to strong oversight – similar to regulated exchanges and financial intermediaries – to build confidence. There would be disclosures on content contributors, ethical constraints on marketing and commercialisation and obligations on informing citizens. The independence of content from safe zones should be respected by content moderators and safeguarded from censorship. I see elements of this in Elon Musk’s remake of Twitter with emphasis placed on promoting independent thinking rather than talking points; debate rather than memes; credibility and trust rather than likes. I agree this is the right direction. In this context, there is still choice as authentic and transparent zones would likely co-exist, complement and compete with unregulated or lightly-regulated internet channels that offer privacy or anonymity. In many ways, authentic and transparent zones represent a digital version of Speaker’s Corner (an area where public speaking and discussion are freely allowed) in Hyde Park in London.

  • Position news as a growth industry

The decline of traditional media doesn’t mean it is a sunset industry. It merely reflects information disruption of legacy media firms. In fact, the emergence of social media is creating many opportunities as news is likely to have a central role in the information society. We just need the right mindset to adapt to the changing environment and reimagine the news industry as a source of economic growth.

The first requirement is to conceptualise a news ecosystem organised based on modularity. Journalism and other forms of content production are no longer distinct but should be viewed as embedded activities within the individual-centric and transparent information society. Everyone can be a reporter in the information society. In turn, reporters are as much the subject of news; their personalities, stories, opinions and events become part of their individual brand. Due to the high throughput and speed, modularity should be supported by at-source concepts and technology to facilitate content production and delivery with minimal interventions. There is a need to clarify and streamline the status, definitions, roles, privileges and obligations between traditional and new media as bloggers, influencers and citizen journalists increasingly account for a large proportion of news stories and videos and are increasingly influential in shaping the public narrative[13].

Rather than defend legacy models, governments should lead in developing a bold vision as markets cannot achieve this transformation on their own. Governments should develop policies, invest resources and provide incentives to accelerate growth and innovation such as individual-centric content models. The news or content industry can be positioned as a source of new jobs; not on a standalone basis but in conjunction with other ancillary activities such as content regulation, spillover activities and technology. In particular, schemes should be set up to improve content monetisation at the long-tail where marginal producers struggle to earn a decent living.

“The establishment of a government platform[14] is a key intervention that will change the dynamics of news intermediation, delivery and engagement. A government platform, modelled on the best features of the global platforms, can be used to customise delivery of government and local community news and facilitate aggregation strategies. In addition, the platforms can promote local content and expand reach to local audiences. This will provide a base to establish a national network to buy and sell independent pull content. The platform can also provide free or inexpensive AI tools (for editing, blogging) to help individual writers mitigate cost constraints[15]. A government platform can also facilitate the monetisation and distribution of value in favour of individual or small firm content producers. In this regard, the government platform offers high levels of community engagement, authenticity and copyright protection. It can attract high-value advertising revenue and the ecosystem can be organised to promote co-branding and spin-offs to ensure the bulk of value is channelled to individual content producers. Lastly, government platforms can be used to transform the relationship with citizens from being passive recipients of government communications into active and responsible participants in a democracy”[16]. “In this regard, the ability of the news industry to operate at the front line of truth should be supported by creating a conducive environment for investigative and community reporting, and by expanding quality sources of information. This should be complemented by initiatives to open up community bubbles, promote informed and active citizen participation and raise the levels of transparency and authenticity. In reimagining the industry, it is important to reinforce the role of the news industry as a source of reality and as the conscience of a democratic society”.

  • Strengthening the sources and quality of domestic information

Sam Lebovic explains the dominance of misinformation is due to “the absence of countervailing forms of necessary political information…Much of what we call information is, strictly speaking, expression – spin, commentary, or analysis (of various degrees of sophistication). The consequences of this media ecosystem are profound, but among other things, they both incentivize demagogic politicians and exaggerate the spread of lies in the polity”. He notes Walter Lippmann’s “solution was to turn away from a focus on the press as the panacea for democracy, and to reject the idea of the omnicompetent individual citizen whose spontaneously emerging interests would guide press policy through the price signals in the market. The press, he argued, is no substitute for institutions. In fact, he noted perceptively, much of what the press reported was, in fact, information generated by other institutions for the purpose of being reported on”. “What were needed, Lippmann argued, were institutions that could produce the sorts of knowledge necessary for self-governance, even when there was no obvious consumer market for those forms of knowledge…the creation of what he called political observatories in all branches of government, national, state, municipal, industrial and even in foreign affairs – which should be endowed and ensured of their independence. The universities, he thought, provided an obvious place to locate them…To create those resources, we need public institutions able to produce information and then make it freely available, even if there is no market for it. (Today, a good deal of policy reporting is, in fact, secreted behind paywalls in trade journals; exacerbating the longstanding regulatory problems created by the informational disparities between producers, who are incentivized to know one industry very closely, and consumers, who are expected to be omnicompetent about all the products they consume.) 
Lippmann was writing before the rise of the administrative state, which was in part the kind of organization he was calling for – expanding funding for the agencies of the state would certainly not be a bad thing, capable as they are of providing all sorts of essential data about social life. But there is also a need for nonprofit newsrooms, focused on the recording and reporting of basic data about political life”.

Sam Lebovic adds “the demands of basic transparency can be met in other ways – public records law, recording local board meetings and posting them online, etc. – but the problem is that no one really has the time to sift through raw material of this sort. What is needed is some kind of curation and summary – the presence of political observers on local beats who can keep an eye on the ongoing, daily work of politics, provide easily digestible summaries of what is happening, provide some hierarchy of importance in their presentation of the material – page one of a physical newspaper was excellent at this – and who are ready to provide context and background when important moments and decisions and developments arise. Some commitment to the fraught professional norm of objectivity would be necessary – objectivity not understood as a balancing of competing political perspectives, but as a process of transparently reporting on political life…What is needed is an easily available resource where citizens can go to see what is happening in their local polities, where they can quickly scan a familiar, trusted outlet to inform themselves, in a relatively efficient manner, about what is going on, what issues may lead them to change their vote, what they might care to learn more about…If one wanted to avoid starting entirely new institutions, one idea might be to establish political observatories attached to student newspapers at local universities and colleges. They have a distribution network and an identity. If they were able to hire permanent staff to work alongside students, and aim their vision not simply at campus politics but at the local communities in which they reside, they could play a very different role than they currently do”. 
In summary, this “meant thinking not about what any one individual believed or was saying, nor even about what rights should be afforded to any class of political expression, but in thinking about how the society, as a whole, was arranging the political economy of its information”.

To a large extent, there has been a general debasement of the quality of domestic information in circulation. Demand shifts (from consumption constraints such as overload, attention limits and consumption behaviours) and the disintermediation of data and advertising revenues by platforms were allowed to erode commercial incentives for the production of pull content (defined as feedstock domestic information such as community content, public records and analysis). The shrinking production of pull content reduces information flows that function as a crucial public good that improves the quality of narratives, informs public debate and acts as a bulwark against fallacious and fake narratives and data. The public benefits of pull content far exceed its costs and it is essential for a vibrant democracy.

The task of producing credible information has become more important with the rising threat from fake content. Only governments can lead this charge. This is partly because they are content regulators and are positioned to address the gaps left by the private sector in relation to content as a public good. At the same time, governments are large producers of information such as announcements, policies, rules, statistics, research, minutes and administrative records. Hence, they can consider how to strengthen the production of and improve access to this information as part of the reorganisation of their communications. In particular, governments need to intervene decisively to overcome monetisation (revenue and cost) constraints impeding expansion. In this regard, pull content can largely be regarded as “non-profit” due to the lack of commercial viability.

First, governments can consider facilitating mutual consumption and monetisation to strengthen the pull content ecosystem. This is because the major producers of information also tend to be the major consumers. In this regard, consumption of content tends to be bundled into pricing (often as a freebie), subsidised (through advertising), cross-subsidised or paid for indirectly (by third parties). Second, the government could designate agencies to anchor demand for existing and new content products, in the form of funding or purchase commitments, by providing seed money, grants or incentives to support content related to public interest activities. These commitments can be used to create income opportunities for private firms and individuals to produce content on public and community activities. Purchase commitments can be supported by private sponsorship and crowdfunding arrangements. Third, governments could explore synergistic opportunities by forging closer linkages between government news, education, civic participation and collaboration with the private sector. Journalistic skills have long learning curves. Content skills should be given greater emphasis in the school curriculum. Apprenticeship programs should be expanded to include writing reports on community events and public interest issues, content moderation (fact-checking, editing) and social media interactions. Ex-journalists could be recruited to teach and develop the educational and apprenticeship programs.

The reimagining of the news industry is an important exercise; not just for addressing the consequences of a decline in the traditional newspaper industry but also to recalibrate policies in recognition of the information forces reshaping the landscape and to reset the vision for the news industry. Ultimately, governments should aspire to evolve a news ecosystem that supports the growth of the information industry; increase citizen engagement and participation; and increase the level of transparency and authenticity to promote an information democracy.

Conclusion

In an era of information overload and disorder, democratic governments have grown increasingly sensitive to threats to their narratives and are tightening their controls to curb “misinformation”. Unfortunately, this is eroding the free speech and political freedoms that are the unique features of a democracy. Governments’ management of narratives is related to the type of democracy and society they aspire to. Instead of focusing on reducing information flows, governments should adopt a broad approach more compatible with defending freedom and democracy as they seek to restore information order.

References

Anthony Ilukwe (10 August 2023) “Technology can support democratic engagement: education is the key”. Centre for International Governance Innovation (CIGI). https://www.cigionline.org/articles/technology-can-support-democratic-engagement-education-is-the-key/

Antonio García Martínez (19 September 2020) “The prophet of the revolt: Martin Gurri and the ungovernable public”. The Pull Request. https://www.thepullrequest.com/p/the-prophet-of-the-revolt?s=r

Bill Rice (11 August 2023) “The one paragraph that reveals all”. The Brownstone Institute. https://www.zerohedge.com/political/one-paragraph-reveals-all

Bruce Schneier, Nathan Sanders (21 June 2023) “Actually, AI could be good for democracy”. Asia Times. https://asiatimes.com/2023/06/actually-ai-could-be-good-for-democracy/

Committee on the Judiciary and Select Subcommittee on the Weaponization of the Federal Government; U.S. House of Representatives (26 June 2023) “The weaponization of CISA: How a cybersecurity agency colluded with big tech and disinformation partners to censor Americans”. Interim Staff Report. https://judiciary.house.gov/sites/evo-subsites/republicans-judiciary.house.gov/files/evo-media-document/cisa-staff-report6-26-23.pdf

Corynne Mcsherry, Katitza Rodriguez (10 August 2021) “O (No!) Canada: Fast-moving proposal creates filtering, blocking and reporting rules – and speech police to enforce them”. Electronic Frontier Foundation (EFF). https://www.eff.org/deeplinks/2021/08/o-no-canada-fast-moving-proposal-creates-filtering-blocking-and-reporting-rules-1

Daniel Williams (7 June 2023) “The fake news about fake news”. Boston Review. https://www.bostonreview.net/articles/the-fake-news-about-fake-news/

D. Wilding, P. Fray, S. Molitorisz, E. McKewon (2018) “The impact of digital platforms on news and journalistic content”. University of Technology Sydney. https://www.accc.gov.au/system/files/ACCC%20commissioned%20report%20-%20The%20impact%20of%20digital%20platforms%20on%20news%20and%20journalistic%20content%2C%20Centre%20for%20Media%20Transition%20%282%29.pdf

Edward Lucas (27 April 2019) “The spycraft revolution”. Foreign Policy.

Evelyn Douek (10 January 2022) “Content moderation as administration”. Harvard Law Review.  https://papers.ssrn.com/sol3/papers.cfm?abstract_id=4005326

Jason Kelley (3 December 2020) “Section 230 is good, actually”.  Electronic Frontier Foundation (EFF). https://www.eff.org/deeplinks/2020/12/section-230-good-actually

Michael Chamberlain (29 June 2023) “Agendas run rampant over science in the Biden Administration”. RealClear Wire. https://www.zerohedge.com/political/agendas-run-rampant-over-science-biden-administration

Nathan Gardels (21 April 2022) “All that is solid melts into information”. Noema Magazine. https://www.noemamag.com/all-that-is-solid-melts-into-information/

Nicholas Carr (13 January 2020) “From context collapse to content collapse.” Rough Type. http://www.roughtype.com/?p=8724

Nick Corbishley (7 July 2023) “The EU’s mass censorship regime is almost fully operational. Will it go global?”. Naked Capitalism.

Olga Peterson (13 June 2023) “Udo Ulfkotte exposed the CIA’s role in controlling worldwide media in his book Journalists for hire and should be celebrated among the great whistleblowers of all time”. Covert Action Magazine.

Phuah Eng Chye (2015) Policy paradigms for the anorexic and financialised economy: Managing the transition to an information society. http://www.amazon.com/dp/B01AWRAKJG

Phuah Eng Chye (22 June 2019) “Policy conversations and the language of information”. http://economicsofinformationsociety.com/policy-conversations-and-the-language-of-information/

Phuah Eng Chye (26 October 2019) “Information and organisation: Cross border data flows and spying”. http://economicsofinformationsociety.com/information-and-organisation-cross-border-data-flows-and-spying/

Phuah Eng Chye (21 November 2020) “Information rules (Part 2: Capitalism, democracy and the path forward)”. http://economicsofinformationsociety.com/information-rules-part-2-capitalism-democracy-and-the-path-forward/

Phuah Eng Chye (2 January 2021) “Information rules (Part 5: The politicisation of content)”. http://economicsofinformationsociety.com/information-rules-part-5-the-politicisation-of-content/

Phuah Eng Chye (16 January 2021) “Information rules (Part 6: Disinformation, transparency and democracy)”. http://economicsofinformationsociety.com/information-rules-part-6-disinformation-transparency-and-democracy/

Phuah Eng Chye (30 January 2021) “Information rules (Part 7: Regulating the politics of content)”. http://economicsofinformationsociety.com/information-rules-part-7-regulating-the-politics-of-content/

Phuah Eng Chye (13 February 2021) “Information rules (Part 8: The decline of the newspaper and publishing industries)”. http://economicsofinformationsociety.com/information-rules-part-8-the-decline-of-the-newspaper-and-publishing-industries/

Phuah Eng Chye (27 February 2021) “Information rules (Part 9: The economics of content)”. http://economicsofinformationsociety.com/information-rules-part-9-the-economics-of-content/

Phuah Eng Chye (13 March 2021) “Information rules (Part 10: Reimagining the news industry for an information society)”. http://economicsofinformationsociety.com/information-rules-part-10-reimagining-the-news-industry-for-an-information-society/

Phuah Eng Chye (27 May 2023) “Transition to the information society (Part 1: Disruption of households and work)”. http://economicsofinformationsociety.com/transition-to-the-information-society-part-1-disruption-of-households-and-work/

Phuah Eng Chye (8 July 2023) “Transition to the information society (Part 2: Disruptive effects of transparency)”. http://economicsofinformationsociety.com/transition-to-the-information-society-part-2-disruptive-effects-of-transparency/

Robert Crawford (20 July 2023) “How media makes impact of US forever wars invisible”. Responsible Statecraft. https://responsiblestatecraft.org/2023/07/20/how-media-makes-impact-of-us-forever-wars-invisible/

Sam Lebovic (18 November 2022) “Fake news, lies, and other familiar problems”. Columbia University. https://academiccommons.columbia.edu/doi/10.7916/sew5-h618

Thomas Brooke (20 July 2023) “Disgraceful Coutts de-banked Nigel Farage because of his conservative views, internal dossier reveals”. Remix News. https://www.zerohedge.com/political/disgraceful-coutts-de-banked-nigel-farage-because-his-conservative-views-internal-dossier

Tom McKay (20 January 2022) “Social media bans of scientific misinformation aren’t helpful, researchers say”. Gizmodo. https://gizmodo.com/researchers-say-bans-on-scientific-misinformation-arent-1848385764

Zhang Changyue (20 June 2023) “Nearly 90 percent of Chinese youth expect public prosecution against severe cyberbullying: survey”. Global Times. https://www.globaltimes.cn/page/202306/1292954.shtml


[1] See Nathan Gardels.

[2] See Antonio García Martínez.

[3] See Olga Peterson’s account on the CIA’s role in influencing worldwide media.

[4] Refers to the role of media in advocacy and political debate and is regarded as complementing the government’s separation of powers into legislative, executive, and judicial branches. https://en.wikipedia.org/wiki/Fourth_Estate

[5] Anthony Ilukwe notes that “as citizen journalism and user-generated content have grown, traditional gatekeepers of information have lost influence”.

[6] See “Information rules (Part 8: The decline of the newspaper and publishing industries)”; D. Wilding, P. Fray, S. Molitorisz, E. McKewon (2018) “The impact of digital platforms on news and journalistic content”.

[7] See “Information rules (Part 9: The economics of content)”.

[8] https://en.wikipedia.org/wiki/Section_230

[9] See Tom McKay.

[10] See Thomas Brooke on how Coutts monitored Nigel Farage’s social media accounts and closed his account because of his political views.

[11] See “Policy conversations and the language of information”.

[12] See “Information rules (Part 10: Reimagining the news industry for an information society)”.

[13] See “Information rules (Part 10: Reimagining the news industry for an information society)”.

[14] See “Government of the Data (Part 3: The future of government platforms)”.

[15] See “Information rules (Part 9: The economics of content)”.

[16] See “Information rules (Part 10: Reimagining the news industry for an information society)”.