Information rules (Part 7: Regulating the politics of content)

Phuah Eng Chye (30 January 2021)

There is growing discomfort that the freedoms afforded to platforms are accentuating the politicisation of content, leading to information disorder and political instability. Rightly or wrongly, opinion has swung in favour of greater regulation, with content regulation becoming synonymous with platform regulation.

Barack Obama[1] argues “the degree to which these companies are insisting that they are more like a phone company than they are like The Atlantic, I do not think is tenable. They are making editorial choices, whether they’ve buried them in algorithms or not. The First Amendment doesn’t require private companies to provide a platform for any view that is out there. At the end of the day, we’re going to have to find a combination of government regulations and corporate practices that address this, because it’s going to get worse. If you can perpetrate crazy lies and conspiracy theories just with texts, imagine what you can do when you can make it look like you or me saying anything on video. We’re pretty close to that now…If we do not have the capacity to distinguish what’s true from what’s false, then by definition the marketplace of ideas doesn’t work. And by definition our democracy doesn’t work. We are entering into an epistemological crisis.”

Valerie C. Brannon notes “social media companies have come under increased scrutiny regarding the type of user content that they allow to be posted on their sites, and the ways in which they may promote – or deemphasize – certain content. A wide variety of people have expressed concern that these sites do not do enough to counter harmful, offensive, or false content. At the same time, others have argued that the platforms take down or deemphasize too much legitimate content…respond to allegations of political bias in their platforms’ content moderation decisions”. Questions are being raised as to whether “social media platforms are living up to their reputation as digital public forums”.

At the moment, there is a legal vacuum on the role of platforms in moderating content. In the US, Valerie C. Brannon points out “currently, federal law does not offer much recourse for social media users who seek to challenge a social media provider’s decision about whether and how to present a user’s content. Lawsuits predicated on these sites’ decisions to host or remove content have been largely unsuccessful, facing at least two significant barriers under existing federal law. First, while individuals have sometimes alleged that these companies violated their free speech rights by discriminating against users’ content, courts have held that the First Amendment, which provides protection against state action, is not implicated by the actions of these private companies. Second, courts have concluded that many non-constitutional claims are barred by Section 230 of the Communications Decency Act…which provides immunity to providers of interactive computer services, including social media providers, both for certain decisions to host content created by others and for actions taken voluntarily and in good faith to restrict access to objectionable material. Some have argued that Congress should step in to regulate social media sites”.

Where and who to regulate

One major challenge in regulating the politics of content is the difficulty of pinpointing where and who to regulate. The complexity of the internet architecture makes it difficult to locate the points of accountability. Joan Donovan explains “at every level of the tech stack, corporations are placed in positions to make value judgments regarding the legitimacy of content, including who should have access, and when and how”. Most content moderation debates revolve around individual websites’ policies for appropriate participation and the terms of service of platforms, search engines and apps. “While platforms, search engines and apps have policies against harassment, hate and incitement to violence, it is difficult to enforce these policies given the enormous scale of user-generated content”.

Joan Donovan notes that for cloud service providers, “content moderation occurs in cases where sites are hosting stolen or illegal content. Websites with fraught content…will often mask or hide the location of their servers to avoid losing hosts”. Content Delivery Networks (CDNs) “provide protection from malicious access attempts, such as distributed denial-of-service attacks that overwhelm a server with fake traffic. Without the protection of CDNs such as Cloudflare or Microsoft’s Azure, websites are vulnerable to political or profit-driven attacks”. “Generally speaking, cloud services, CDNs and domain registrars are considered the backbone of the internet, and sites on the open web rely on their stability, both as infrastructure and as politically neutral services”. However, content decisions are rare “except in the cases of trademark infringement, blacklisting by a malware firm or government order”.

In contrast, Internet Service Providers (ISPs), which allow access to the open web and platforms, “are in constant litigious relations with consumers and the state. ISPs have been seen to selectively control access and throttle bandwidth to content profitable for them, as seen in the ongoing net neutrality fight”. Joan Donovan points out that when governments blacklist websites, domain registrars can be asked to remove them.
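The layered picture Donovan describes can be summarised in a small illustrative sketch. The layer names, example services and lists of moderation levers below are my own shorthand for the points above, drawn from the quoted descriptions rather than from any formal taxonomy.

```python
# Illustrative only: a rough map of the "tech stack" layers discussed above and the
# moderation levers each typically controls. Entries are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class StackLayer:
    name: str
    examples: list
    typical_levers: list

TECH_STACK = [
    StackLayer("Platforms / search engines / apps", ["Facebook", "YouTube", "Twitter"],
               ["terms-of-service enforcement", "labelling", "down-ranking", "account suspension"]),
    StackLayer("Cloud service providers", ["Azure", "AWS"],
               ["refusing to host stolen or illegal content"]),
    StackLayer("Content delivery networks", ["Cloudflare"],
               ["withdrawing DDoS protection (rare)"]),
    StackLayer("Domain registrars", ["registrar services"],
               ["removing domains blacklisted by government order"]),
    StackLayer("Internet service providers", ["broadband carriers"],
               ["blocking access", "throttling bandwidth"]),
]

def layers_with_lever(keyword: str):
    """List the layers whose typical levers mention the given keyword."""
    return [layer.name for layer in TECH_STACK
            if any(keyword in lever for lever in layer.typical_levers)]

if __name__ == "__main__":
    for layer in TECH_STACK:
        print(f"{layer.name}: {', '.join(layer.typical_levers)}")
    print("Layers that can 'block':", layers_with_lever("block"))
```

The point of the sketch is simply that accountability is distributed: each layer can act, but with very different levers and very different consequences.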

Valerie C. Brannon remarks that speech posted on the internet “exists in an architecture of privately owned websites, servers, routers, and backbones, and its existence online is subject to the rules of those private companies”. In this regard, “the most important decisions affecting the future of freedom of speech will not occur in constitutional law; they will be decisions about technological design, legislative and administrative regulations, the formation of new business models, and the collective activities of end-users”.

Since content is distributed through multiple channels and covers many topics, legal jurisdictions are overlapping and fragmented. Judit Bayer, Natalija Bitiukova, Petra Bárd, Judit Szakács, Alberto Alemanno and Erik Uszkiewicz highlight that “social media service, which emerged after 2000, is not defined and its liability is not set out consistently by the relevant legal instruments. These include the E-Commerce Directive, the Audiovisual Media Services (AVMS) Directive, the ePrivacy Directive and the proposed ePrivacy Regulation, the Code of Conduct on countering illegal hate speech, the Commission Recommendation on measures to effectively tackle illegal content online, the Communication from the Commission on tackling online disinformation, the European Council decision of March 2018 and the Proposal for a Regulation on preventing the dissemination of terrorist content online”. Content regulation also is covered by “the German Network Enforcement Act, the French Act against Informational Manipulation and the Italian law against fake news, along with the co-regulatory initiative between the French government and Facebook, the Code of Practice to tackle online disinformation and the Commission Action Plan Against Disinformation”.

It is difficult to pin down platform accountabilities because there are diverse models performing multiple and transitory functions as communication channels, intermediaries, producers, publishers, aggregators and moderators. Valerie C. Brannon summarises “the permissibility of federal regulation of social media sites will turn in large part on what activity is being regulated. To the extent that federal regulation specifically targets communicative content – that is, speech – or social media platforms’ decisions about whether and how to present that content, that regulation may raise constitutional questions. While the Supreme Court has not yet weighed in on the question, lower courts have held that when search engines make decisions regarding the presentation of search results, they are exercising editorial functions protected as speech”.

Judit Bayer, Natalija Bitiukova, Petra Bárd, Judit Szakács, Alberto Alemanno and Erik Uszkiewicz explain “the emergence of social media marks the beginning of a new age of the public sphere (Öffentlichkeit)…This decentralised and horizontal discussion cannot be supervised with the same instruments as the centrally organised, traditional mass media. This control vacuum has allowed rapid innovations in line with business interests, and become exploited by political opportunists”. They suggest “while the ubiquitous content itself can hardly be controlled, the architecture of this communication – algorithms and data flow – should”.

Regulatory approaches

In comparison, traditional media channels are extensively regulated. Hence, the new proposals tend to be platform-centric. They include setting up a new oversight regulator; imposing responsibilities by withdrawing safe harbours and expanding platform accountabilities; and promoting self-regulation.

Oversight regulator. The UK is considering setting up an independent regulator to oversee a compulsory Code of Ethics that sets out “what constitutes harmful content…such a Code of Ethics should be similar to the Broadcasting Code…The process should establish clear, legal liability for tech companies to act against agreed harmful and illegal content on their platform and such companies should have relevant systems in place to highlight and remove types of harm and to ensure that cyber security structures are in place…the independent regulator should have the ability to launch legal proceedings against them, with the prospect of large fines being administered as the penalty for non-compliance with the Code. This same public body should have statutory powers to obtain any information from social media companies that are relevant to its inquiries…This body should also have access to tech companies’ security mechanisms and algorithms, to ensure they are operating responsibly”[2].

The drawback of establishing a new agency is that its jurisdiction is likely to overlap with those of existing agencies, leading to regulatory fragmentation. The alternatives are to modify the mandates of existing regulators or to set up an agency to coordinate inter-agency efforts.

Responsibilities for content. The most favoured approach is to impose greater responsibilities on platforms to manage content, whether through new obligations or by removing immunity from liabilities. In the US, the debate centres on Section 230.

“No provider or user of an interactive computer service shall be treated as the publisher[3] or speaker of any information provided by another information content provider. These twenty-six words from Section 230 of the Communications Decency Act of 1996 are said to have created the Internet…Section 230 lets Internet firms moderate their platforms as they see fit and carry third-party, user-generated content without fear of liability. Absent these protections, it is unlikely that the commercial Internet would have developed such robust forums for online expression”. Ellen P. Goodman, Ryan Whittington (2019) “Section 230 of the Communications Decency Act and the future of online speech”.

Ellen P. Goodman and Ryan Whittington note “everyone can agree that the Internet is very different from what was imagined in 1996 when Section 230 was penned. Internet firms have concentrated power and business models that are nothing like what they had then. No one contemplated the velocity, reach, scale, nature, and influence of the speech now flowing over digital infrastructure. It is entirely reasonable to rethink how Internet liability is apportioned. But it is critical that we are clear about how changes to Section 230 might strengthen government control over speech, powerful platforms’ control, and/or make the Internet even more lawless”.

They explain “Section 230 of the Communications Decency Act protects online intermediaries like social media platforms from being sued for transmitting problematic third-party content. It also lets them remove, label, or hide those messages without being sued for their choices. The law is thus simultaneously a shield from liability – encouraging platforms to transmit problematic content – and a sword – allowing platforms to manage that content as they like. Section 230 has been credited with creating a boisterous and wide-open Internet ecosystem. It has also been blamed for allowing platforms to profit from toxic speech”.

Ellen P. Goodman and Ryan Whittington point out there are limits to the protection. “Section 230 thus does not provide platforms a defense from liability associated with federal criminal law, intellectual property, and digital communications law”. They think it is misconceived that “without Section 230, Internet firms would have more incentive to responsibly police their platforms”. First, “even in the absence of legal pressures, there are political and consumer pressures on them to conduct content moderation”. Second, “platforms would not necessarily address most of the objectionable forms of content…Much of the offensive or harmful content that has raised concerns is entirely legal, rendering a shield from liability superfluous. This content includes non-defamatory fake news, hate speech, and non-criminal harassment, as these examples fall into classes of protected speech”.

Suggestions include the “withdrawal of safe harbor protections such as in relation to deep fakes and platform-hosted advertising”. Ellen P. Goodman and Ryan Whittington note it has been argued “exposing platforms to greater liability for the advertisements they run could potentially reorient the marketplace in a way that improves advertising quality. Internet firms could “force the ad tech and online publishing industries to adopt technologies that give them more control and oversight of the ads they run.” Other possible carve-outs from safe harbor protections include foreign-influenced content, drug-trafficking content, online harassment, conspiracy to incite violence, cyber-stalking and consumer fraud”.

Ellen P. Goodman and Ryan Whittington argue introducing a reasonable care standard “would take a more negligence-centered approach to intermediary liability. This would empower courts to determine whether a platform’s actions regarding specific content was reasonable by considering the context of the content and the platform’s efforts to combat such content…It would also provide plaintiffs with recourse against platforms that encourage the propagation of unlawful content while hiding behind Section 230’s liability shield…Stripping blanket immunity from platforms in exchange for a negligence standard would enable plaintiffs to engage in extensive litigation aimed at determining whether a platform’s conduct was, indeed, reasonable”.

Ellen P. Goodman and Ryan Whittington suggest “a quid pro quo structure for Section 230 protections” would allow platforms to choose between adopting additional duties[4] for content moderation and forgoing some “protections afforded by Section 230”. “The idea is to require large platforms to develop detailed, transparent, appealable practices specifically for disrupting coordinated campaigns that engage in activities that threaten or intentionally incite physical violence…that clearly constitute online harassment, or that constitute commercial fraud.” Another suggestion was to make “immunity for platforms proportional to their ability to reasonably identify speakers that use the platform to engage in harmful speech or conduct”[5].

More broadly, Valerie C. Brannon highlights there are “three possible frameworks…First, using the analogue of the company town, social media sites could be treated as state actors who are themselves bound to follow the First Amendment when they regulate protected speech. If social media sites were treated as state actors under the First Amendment, then the Constitution itself would constrain their conduct, even absent legislative regulation. The second possible framework would view social media sites as analogous to special industries like common carriers or broadcast media. The Court has historically allowed greater regulation of these industries’ speech, given the need to protect public access for users of their services. Under the second framework, if special aspects of social media sites threaten the use of the medium for communicative or expressive purposes, courts might approve of content-neutral regulations intended to solve those problems. The third analogy would treat social media sites like news editors, who generally receive the full protections of the First Amendment when making editorial decisions. If social media sites were considered to be equivalent to newspaper editors when they make decisions about whether and how to present users’ content, then those editorial decisions would receive the broadest protections under the First Amendment. Any government regulations that alter the editorial choices of social media sites by forcing them to host content that they would not otherwise transmit, or requiring them to take down content they would like to host, could be subject to strict scrutiny”.

The EU has moved forward with regulations formalizing “the role of social media platforms as the governors of much of the speech that is being shared online”. Natali Helberger notes the EU recently “announced a revision of the governance framework that should lead to greater accountability of platforms for the content shared by their users, as part of the planned Digital Services Act Package…already adopted three directives that…extend the responsibilities of platforms with respect to the way content is organized and distributed. The Digital Services Act Package must issue ex ante rules to ensure that markets characterized by large platforms with significant network effects and acting as gatekeepers remain fair and contestable by innovators, businesses and new market entrants”.

Natali Helberger notes the German Network Enforcement Act signalled seriousness “about enforcing the existing intermediary obligations, with the introduction of new procedures and significant fines” while proposed additional provisions to its media law will make it the earliest “to impose media pluralism obligations on internet intermediaries”. The German approach “signals an important point of departure from the traditional European e-commerce approach to the regulation of social media, and a move into a media law regime”.

France reinforced its rules against the dissemination of false information. “Alongside transparency obligations and an obligation to actively implement measures to combat the dissemination of false information likely to disturb public order or to affect the integrity of a ballot, France has created a new, much discussed civil procedure to make it possible “for political parties, candidates and even interested individuals to apply for a judicial order to prevent the transmission of allegations or accusations that are factually inaccurate or misleading. The criteria stipulated are that these allegations are likely to alter the integrity of the upcoming ballot and are deliberately, artificially or automatically and massively disseminated by means of an online public communication service”. It also recently adopted a law to combat hate content on the Internet “with the capability of obliging platforms to remove certain types of content within one hour (!), coupled with serious fines”. Following cooperation between the government and Facebook, France has adopted a broad approach based on “five pillars, starting with the development of a broader vision on the role, regulation and realization of public values vis-à-vis social media (first pillar), new accountability regulations with a new duty of care for social media platforms at their heart (second pillar), along with a public stakeholder dialogue under the auspices of the government (third pillar), a specialized regulator (fourth pillar) and European coordination (fifth pillar)”.

Natali Helberger notes that the UK’s white paper “suggests that an extraordinarily broad range of content be tackled, from issues of national security and terrorist content, content infringing copyright, misinformation and filter bubbles, to cyberbullying and cyber-crime. The UK version of a duty of care aims to make companies take more responsibility for the safety of their users and oblige them to address harm caused by content or activity on their services”. However, “this broad approach, in combination with the far-reaching powers of a new regulatory authority, has also earned the proposal substantial criticism from human rights advocates and legal experts”.

Natali Helberger offered caveats on the EU approach, noting “the European Union and its Member States seem to have accepted by and large that the Internet is now governed by certain platforms, and much of the current policy proposals on the table are simply what seem to be desperate attempts by national or regional governments to maintain the illusion of control and a foot in the door. Rules that would, for example, oblige platforms to be more transparent about the way in which social media platforms themselves wield opinion power are virtually absent, whether that is the power to decide whom to show which political message (political microtargeting), or the power to decide for which public values or societal goals their algorithms should be optimized. Platforms are redefining the strategies and values that will supposedly make our world a better place, giving them their own particular Silicon Valley spin…No attention whatsoever is being paid to the growing trend by Facebook and YouTube to move into the active role of content editors by commissioning content and concluding deals with rights holders. Nor are there much meaningful policy discussions about how to address the growing dependence of the legacy media, and indeed society, on platforms, or create more equal negotiation conditions and safeguards for the independence of those whose task it is to remain watchful and investigate abuses of political power, whether by politicians, governments or powerful internet companies”.

Natali Helberger notes “to date, the debate on how to reduce dependence and rein in platform (monopoly) power has been framed as a debate in the realm of competition law”. However, competition law “is only designed and suited to deal with opinion power to a limited degree”. In addition, given measurement difficulties, “digital media concentration law and policy focus will likely have to shift to the creation of counter powers, the diffusion of control over proprietary and opaque algorithms, entirely new forms of transparency, the regulation of political advertising online and the separation of social infrastructure from the distribution of content”. Overall, “without adequate safeguards, all commitments to neutrality, fairness and non-manipulation are meaningless…Dispersing concentrations of opinion power and creating countervailing powers is essential to preventing certain social media platforms from becoming quasi-governments of online speech, while also ensuring that they each remain one of many platforms that allow us to engage in public debate”.

Asian countries tend to be more direct. Singapore passed its “Protection from Online Falsehoods and Manipulation Act” (POFMA) to address the propagation of harmful and divisive stories. Kirsten Han and Charis Loke point out that the use of POFMA has been limited. “No POFMA order has materialised to combat falsehoods circulating on platforms like WhatsApp and WeChat. This is possibly because…chain messages on closed or encrypted apps can’t be targeted with correction notices or takedown orders. And while POFMA leaves open the option of a general correction direction, the practicality of sending a message to every Singaporean WhatsApp user – regardless of whether they’ve received the original false message or not – is highly questionable, if even technically achievable…the government has used clarifications on its own platforms to debunk falsehoods circulating via text messaging and chat apps”. It was also not evident that it “will be able to do much about a serious disinformation campaign coordinated by hostile foreign actors”. They point out the lack of clarity on the threshold for application of POFMA. “This leaves us with a topsy-turvy situation: POFMA is impotent against the closed messaging apps where the most pernicious falsehoods are circulating, but has been used to force correction notices on content even where there is no clear danger of real risk or harm to the public. There is a lack of clarity over when something justifies the issuance of a POFMA directive, instead of a public statement from the relevant authorities, or prosecution under other laws…POFMA continues to be a blunt, clumsy tool, unable to tackle damaging and false rhetoric within the chat apps where it thrives, nor able to deal with bad faith actors abroad”.

Celia Chen notes “China has included censorship of content as part of sweeping new regulations targeting online travel agencies and platforms…The 42 specific regulations, drafted by China’s Ministry of Culture and Tourism, include rules mandating that online travel providers are responsible for regulating content customers upload onto their platforms, including text, pictures, audio and videos. Censorship of content should be done before it is published online to guarantee information security, the draft regulations say. Platform operators also have to take necessary measures to prevent information from spreading if its publication contravenes laws and rules. At the same time, platforms need to keep records of who tried to post such information, report them to the authorities, and cooperate with authorities on any follow up investigation”. Phoebe Zhang reports China also intends to tighten content rules to rein in live streaming, rap and comedy shows and to require public performances to have on-site censors.

Iris Deng and Celia Chen report that the Cyberspace Administration of China recently published a revised draft of its Regulation on Internet Information Service which aims to “ensure the healthy and orderly development of internet information services”, as well as “maintain national security and public interest”. The revised draft expands regulatory coverage and “clearly defines for the first time the types of products that fall under information services, including search engines, instant messaging, websites, online payments, e-commerce and software downloads”. New clauses target rampant forms of fraud such as identity theft and fake news, and explicitly prohibit the release of false information, helping others delete, block or replace online information, mass registration and the selling of online accounts for profit.

Iris Deng and Celia Chen note “the new draft also broadened the definition of harmful online information. In addition to information that endangers national security, leaks state secrets or subverts state power, the new draft would ban online information that disrupts financial market order. False information about disasters, epidemics, emergencies and food and drug safety are also banned. On top of possible criminal charges and other punishments, websites spreading such information could be shut down. Individuals working for such sites could be held liable”. The draft also strengthens defences against online activities that could compromise China’s national security interests and strengthens the ability of internet regulators to take “technical measures and other necessary measures” to block “illegal” information from abroad.

Self-regulation. Industry self-regulation is usually advocated as a more efficient alternative to government regulation. One form of self-regulation is through content guidelines. D. Wilding, P. Fray, S. Molitorisz and E McKewon point out that Australian media is subject to 14 separate codes of practice. The main division is between broadcast media[6] and print/online[7], with separate schemes within these sectors. On top of this is the code of ethics of the journalists’ union, the Media, Entertainment and Arts Alliance. The media standards for broadcasting are considered co-regulatory, as the rules are developed by industry bodies, registered with and enforced by the statutory regulator under the Broadcasting Services Act 1992. Print and online media rules are drafted by the Australian Press Council and Independent Media Council, which are industry-funded self-regulatory schemes with no statutory component. It is an open question whether it is possible to extend these content guidelines, which promote journalistic values and norms[8], to platforms.

Another form is independent industry boards[9] or company-level boards such as Facebook’s Oversight Board. Siva Vaidhyanathan criticised Facebook’s Oversight Board as having limited powers and scope in relation to reviewing “the most challenging content issues” and deciding on “whether specific content should be allowed or removed”. He points out “the board can’t say anything about the toxic content that Facebook allows and promotes on the site. It will have no authority over advertising or the massive surveillance that makes Facebook ads so valuable. It won’t curb disinformation campaigns or dangerous conspiracies. It has no influence on the sorts of harassment that regularly occur on Facebook or WhatsApp. It won’t dictate policy for Facebook Groups, where much of the most dangerous content thrives. And most importantly, the board will have no say over how the algorithms work and thus what gets amplified or muffled by the real power of Facebook”.

Siva Vaidhyanathan suggests oversight boards cannot be considered as a form of self-regulation. “The Facebook board has no such power. It can’t generate a general code of conduct on its own, or consider worst-case scenarios to advise the company how to minimize the risk of harm. That would mean acting like a real advisory board. This one is neutered from the start…process will be slow and plodding. Faux-judicial processes might seem deliberative, but they are narrow by design…But on Facebook, as in global and ethnic conflict, the environment is tumultuous and changing all the time. Calls for mass violence spring up, seemingly out of nowhere. They take new forms as cultures and conditions shift. Facebook moves fast and breaks things like democracy. This review board is designed to move slowly and preserve things like Facebook. This review board will provide a creaking, idealistic, simplistic solution to a trivial problem”. He concludes “ultimately, this board will influence none of the things that make Facebook global: scale (2.5 billion users in more than 100 languages), targeted ads (enabled by surveillance), and algorithmic amplification of some content rather than other content. The problem with Facebook is not that a photograph came down that one time. The problem with Facebook is Facebook”.

The Facebook Oversight Board was operationalised this year and was immediately handed a hot potato – to review the indefinite suspension of the Facebook and Instagram accounts of Donald Trump. Mathew Ingram reports Facebook also asked the board to consider “any observations or recommendations around suspensions when the user is a political leader”. One of the four co-chairs of the 20-person board will assign the case to a panel, typically consisting of five people, which is given 90 days to process the case.

Donald Trump is not the first leader to have been banned from a platform, and past denials of access to leaders and governments have been inconsistent. But this is a case where a platform has banned the leader of its own home country. The Facebook Oversight Board review will be closely watched around the world for its elucidation of principles on private platform moderation of content from leaders and governments.

This case illustrates the pitfalls of content moderation. First, there are far-reaching implications for free speech and democracy if platforms are obliged to censor such content. One concern is that it concentrates too much power in the hands of private platforms. Some argue this would encourage governments to become more authoritarian in their approach to restricting content. In this context, Mathew Ingram notes Twitter’s position that “accounts of politicians are given some leeway because the people have a right to hold power to account in the open” but they are not above repercussions. Jack Dorsey, Twitter’s chief executive, said that he struggled with the decision to ban Trump, which he believes could be destructive to the noble purpose and ideals of the open internet: “A company making a business decision to moderate itself is different from a government removing access, yet can feel much the same.”

Judit Bayer, Natalija Bitiukova, Petra Bárd, Judit Szakács, Alberto Alemanno and Erik Uszkiewicz argue making platforms “responsible for deciding on transmitted content would give them greater power than necessary, because it would give them discretionary power to decide over citizens’ speech. Without disputing the need to remove manifestly illegal content, we argue that the fundamental right to free speech would best be served if platform providers were obliged to remain neutral intermediaries, make their algorithmic principles transparent and be subject to supervision”.

Second, platforms are caught between a rock and a hard place. Whether or not they choose to be more active in moderating content, their franchise would be threatened by regulations and boycotts. In addition, if platforms actively moderate content, they would be acting more like media and publishing companies, which implies their “carrier” protections should be removed.

Third, while self-regulation offers advantages such as expertise, flexibility and timeliness of content moderation, self-regulation is rife with conflicts of interest. Platforms are often reluctant to impose or enforce rules that discourage traffic or empower users. Hence, the pressure to moderate content may end up driving away disenfranchised participants. This comes at a time when platforms are already being pressured by governments over the “unauthorised” use of local content. All of this points to an erosion of their market dominance.

Fourth, though platforms have the technological capabilities, tech cultures lack the empathy and diplomatic skills to handle real-world political and social conflicts. Generally, platform leaders behave like “deer caught in the headlights” when drawn into the political cross-fire. Lastly, without coordination among platforms, rules will be fragmented and inconsistent.

In any case, platforms already have broad leeway to moderate content. Erin Brodwin lauds Pinterest for having “a zero-tolerance vaccine misinformation policy, a team tasked with enforcing it, and a flexible approach that accounts for emerging intel from health authorities…Users who search for either vaccines or Covid-19 and any related terms are shown results only from Pinterest boards maintained by the World Health Organization, the Centers for Disease Control and Prevention, and the American Academy of Pediatrics. For vaccines and Covid-19, subjects that simultaneously threaten individual health and public safety, the company has escalated its anti-misinformation tactics…Pinterest has also suspended people from the platform who violate that policy”.
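A minimal sketch of the kind of allowlisting Brodwin describes is shown below. This is my own illustration, not Pinterest’s code; the keyword list, the set of authoritative sources and the toy search index are hypothetical.

```python
# Illustrative sketch of query-level allowlisting for health-sensitive topics.
# Topic keywords, the allowlist and the search index are hypothetical placeholders.

SENSITIVE_TERMS = {"vaccine", "vaccines", "covid-19", "covid", "immunization"}
AUTHORITATIVE_SOURCES = {"WHO", "CDC", "American Academy of Pediatrics"}

SEARCH_INDEX = [
    {"title": "Measles vaccine schedule", "source": "CDC"},
    {"title": "Vaccines cause X (debunked claim)", "source": "random_user_board"},
    {"title": "COVID-19 prevention basics", "source": "WHO"},
]

def search(query: str):
    """Return matching results, but restrict them to authoritative sources
    when the query touches a health-sensitive topic."""
    terms = set(query.lower().split())
    hits = [item for item in SEARCH_INDEX
            if any(t in item["title"].lower() for t in terms)]
    if terms & SENSITIVE_TERMS:
        hits = [item for item in hits if item["source"] in AUTHORITATIVE_SOURCES]
    return hits

if __name__ == "__main__":
    # The debunked-claim board is filtered out; only the CDC result survives.
    print(search("vaccine schedule"))
```

The design choice illustrated here is to intervene at the query level for a narrow class of high-risk topics, rather than attempting to fact-check every individual piece of user content.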

D. Wilding, P. Fray, S. Molitorisz and E McKewon note media watch blogs and cyber ombudsmen have emerged: “bloggers who have no ties to established media outlets now assume the role of critic and watchdog of those organisations that have historically operated with (varying degrees of) consistency, accuracy and accountability”. However, as a matter of comparison, “while traditional media has multiple layers of editorial review, each accountable to a superior editor, the blogosphere is inherently non-transparent”. This highlights some ambiguity in the function of these blogs: “Whether the internet’s inherent decentralisation, along with concomitant lack of uniform editorial standards, forecloses the possibility that these blogs can be legitimate regulators of the flow of information remains hotly contested…it remains to be seen whether the accountability instruments emerging online like newsroom blogs, online ombudsmen and media criticism on the social web – successfully support or even replace…traditional instruments of self-regulation”[10].

Cross-border regulation

Disinformation campaigns have geopolitical undertones. Pavel Sharikov notes “the rhetoric linked with mutual accusations on interference in elections and internal affairs is a source for special concern. Indicatively, in both Russia and the US, these accusations are linked with information campaigns and cyberattacks. They have a powerful negative impact on bilateral ties and have no clear prospects for settlement”.

There are difficulties in constructing a bilateral agreement to deter cyber-attacks. First, “cyberattacks take place regularly. By some estimates, millions are made every day”.

Second is “the problem of attribution…cyberattacks could easily be mounted by third parties, not just under orders of military-political leadership, but state proxies, non-state actors and others”. “Finally, it is not clear how to ensure parity”. “Apparently, as long as cyberattacks are in the grey zone of international law, the incidents capable of triggering an escalation remain highly probable”, Pavel Sharikov concludes. Hence, disinformation campaigns are part of the geopolitical decoupling, and it will be difficult to curb them[11] without international cooperation.

What to regulate – the challenge of content moderation

Even if governance structures can be agreed upon, it is unlikely content moderation rules can be designed to satisfy all parties. Ryan Broderick notes “moderating content and comments is one of the most vital responsibilities on the internet. It’s where free speech, community interests, censorship, harassment, spam, and overt criminality all butt up against each other. It has to account for a wide variety of always-evolving cultural norms and acceptable behaviors…It also requires a concrete set of rules that can be consistently enforced. Right now, Facebook isn’t sure how it defines hate speech. YouTube can’t figure out what constitutes a conspiracy theory or how to properly fact-check it. Twitter won’t say whether it’d ban powerful users like President Trump for violating its user guidelines. Whether it’s fake news, child exploitation, Russian chaos agents, marketing scams, white nationalism, anti-vaxxers, yellow vests, coordinated harassment, catfishing, doxing, or the looming possibility of an information war escalating into a nuclear one between India and Pakistan, all of it comes down to one very simple issue. Sites like Facebook, YouTube, and Twitter have failed to support clear and repeatable moderation guidelines for years, while their platforms have absorbed more and more of our basic social functions…Community moderators, content moderators, audience development editors — they’re all shades of the same extremely important role that has existed since the birth of the internet. It’s the person looking at what’s being posted to a website and decides if a piece of content or a user should stay there or be taken down. It’s like combining a sheriff and a librarian”.

Martin Gurri points out “fact-checking in the post-truth era is a dicey proposition.  On whose authority do you rest your judgments?  The old-time news media still believes it possesses authority, but that’s a case of senile dementia…Given that millions of users, being human, constantly speak falsely online, what capacity or authority does Facebook have to call them out? A selective approach, like Twitter’s, would immediately strike the public as arbitrary or biased.  Pointers to some third-party judge would only dodge the question of why this fact had been checked, and not another.  A world of infinite fact-checking – of fact-checking the fact-checkers and those who fact-check them – loomed as the last stage of post-truth…Increasingly, the post-truth landscape has come to resemble a science fiction nightmare.  In a fractured universe, multiple spheres of truth reach blindly for domination – but when they collide, every claim for every truth is nullified, and nothing is validated…To linger here is to risk being sucked down into the sound and fury emanating from the labyrinth of post-truth.  Yet I can’t walk away.  None of us can walk away”.

Content moderation is problematic because views are diverse and statistics can be manipulated. The line separating facts, beliefs, opinions, fake news, satire, hoaxes, games and entertainment is a thin one. The more scholarly the topic, the stronger the disagreement – scholars argue constantly over facts, theories and beliefs in science, academia, ideology and religion. It is also time-consuming to provide satisfactory explanations[12] and justifications that can withstand intense public scrutiny.

It is also difficult to differentiate between reporting and propagating. Matt Taibbi notes attempts by YouTube/Google, Facebook, and Twitter to remove content contravening their guidelines have adversely affected live-streaming reports from independent content creators. “Pundits had long worried that live stream capability was allowing the broadcast of violence and hate speech. In the hands of alternative media, however, the tool posed another problem, in the form of simply showing offensive reality…it’s unclear how platforms like YouTube understand the documentation of political demonstrations. If you film a neo-Nazi running his mouth, should you be banned for covering his hate speech? If you show a gun-rights activist carrying a gun, are you yourself engaging in pro-gun activism? For independent outlets like Status Coup, these questions pose a serious problem. Because they’re dependent financially on platforms like YouTube to reach subscribers, they can’t afford to take the risk of being shut down. But how can alternative media operate if it doesn’t know exactly where the lines are? Also, how can such outlets add value when its one advantage over corporate media – flexibility, and willingness to cover topics outside the mainstream – is limited by the fear of consequences from making independent-minded editorial decisions?”

Automation will not provide satisfactory solutions either. Matt Taibbi adds “Youtube and Facebook seem to be relying increasingly on robots to make their decisions, and I think automated decision-making fails to understand the difference between content promoting groups and content covering them…Protests against cops are labeled as hate speech. Non-violent civil disobedience is labeled as graphic violence…My impression is that somewhere, some moderator is supposed to be watching videos and making decisions, but instead, they rush through and take wild guesses”.
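The failure mode Taibbi describes can be illustrated with a deliberately naive sketch: a keyword-style filter sees only surface terms, so footage documenting extremist speech scores the same as the speech itself. The flagged terms and example texts are invented for illustration and do not reflect any platform’s actual rules.

```python
# Deliberately naive illustration: keyword counting cannot distinguish a post that
# promotes violence from a report that documents it, because it ignores intent.
FLAGGED_TERMS = {"militia", "armed march", "race war"}

def naive_risk_score(text: str) -> int:
    """Count flagged terms; real systems are more elaborate but share the blind spot."""
    lowered = text.lower()
    return sum(lowered.count(term) for term in FLAGGED_TERMS)

promoter_post = "Join the militia for the armed march this weekend."
reporter_clip = "At the rally, a speaker urged the militia to stage an armed march; police intervened."

if __name__ == "__main__":
    # Both texts receive the same score, even though one promotes violence
    # and the other documents it.
    print("promoter:", naive_risk_score(promoter_post))
    print("reporter:", naive_risk_score(reporter_clip))
```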

It is an onerous burden to expect moderators to be able to identify every snippet of misinformation in an environment of information overload and ambiguity. Steve Randy Waldman argues “there is no way Twitter or Facebook can solve the moderation problem. Their AIs are beside the point. We have strong disagreements over what kind of speech should be legitimate in public fora, what the lines are between opinion, which may be productive even if mistaken, and disinformation, which should be suppressed for its invidious effects. There is no right answer. Under status quo platforms what will emerge, what has already emerged, is that the standards of good enough moderation will be determined in reaction to the outrage of influential political factions. It’s hard to imagine a regime more antithetical to the purposes of free speech, whether it’s Facebook favoring conservative agitprop to appease prominent Republicans or Democrats suppressing misinformation in the name of believe the science”.

Using transparency and authenticity to build resilience to disinformation

The alternative to direct regulation is to use transparency and authenticity to build resilience against misinformation and disinformation. Misinformation and disinformation thrive only because of the vacuum left by shortages in the supply of quality content.

Eliza Mackintosh cites the multi-pronged, cross-sector approach adopted by Finland “to prepare citizens of all ages for the complex digital landscape of today – and tomorrow”. “The course is part of an anti-fake news initiative launched by Finland’s government…aimed at teaching residents, students, journalists and politicians how to counter false information designed to sow division”. This includes “a checklist of methods used to deceive readers on social media: image and video manipulations, half-truths, intimidation and false profiles”. “The education system was reformed to emphasize critical thinking with students encouraged to analyse and debate current issues, supported by digital literacy toolkit provided by fact-checking agencies…The exercises include examining claims found in YouTube videos and social media posts, comparing media bias in an array of different clickbait articles, probing how misinformation preys on readers’ emotions, and even getting students to try their hand at writing fake news stories themselves”. She cautions that Finland is an exception, as it is a small and homogeneous country that ranks highly for happiness, press freedom, gender equality, social justice, transparency and education. This makes it difficult for external actors to find fissures within society to exploit.

Strategies[13] to neutralise disinformation can be tied to educational and content funding programmes. Platforms should be encouraged to launch programmes promoting constructive citizen engagement and to support programmes that expand the sources of reliable and quality content. Platforms should also elevate their role in identifying sources of disinformation and ensure that effective actions (e.g. labelling content and limiting the reach of messages) are undertaken. Misinformation and disinformation can be countered by increasing the levels of transparency and authenticity. Transparency can be improved by mandating disclosures, improving citizens’ access to information and strengthening content ecosystems that protect critical analysis, investigative reporting and whistle-blowing. Authenticity can be strengthened by increasing the number of safe zones[14] and providing non-anonymous walled areas where platform users and advertisers find it easy to avoid “dubious” content.
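As an illustration of graduated responses (labelling and reach-limiting rather than outright removal), the sketch below maps a confidence score to escalating actions. The thresholds and field names are assumptions, not any platform’s actual policy engine.

```python
# Illustrative sketch of graduated moderation actions: label, limit reach, or remove,
# depending on how confident reviewers or classifiers are that a post is misinformation.
# Thresholds and the structure of `post` are assumptions for illustration.

def apply_moderation(post: dict, misinformation_confidence: float) -> dict:
    """Attach graduated actions to a post based on misinformation confidence."""
    actions = []
    if misinformation_confidence >= 0.9:
        actions.append("remove")
    elif misinformation_confidence >= 0.6:
        actions.append("label: disputed by independent fact-checkers")
        actions.append("limit reach: exclude from recommendations and trending")
    elif misinformation_confidence >= 0.3:
        actions.append("label: additional context available")
    return dict(post, actions=actions)

if __name__ == "__main__":
    example = {"id": 42, "text": "Claim circulating about a ballot process."}
    print(apply_moderation(example, misinformation_confidence=0.7))
```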

Overall, Natali Helberger points out platforms had a “liberating effect on the opinion power of individuals… many positive effects on democracy. However… the liberation of individual users as active actors is also at the heart of many of the current problems concerning misinformation, the proliferation of hate speech, polarization, nationalism and the abuse of the Internet for adversarial purposes”. In my view, this suggests that a pro-active strategy to ensure citizens are well-informed is likely to be superior to a defensive one that focuses only on weeding out misinformation and disinformation. Policy-makers should choose content strategies that favour free speech over censorship and authenticity over sanitisation, and that emphasise expanding the sources of quality content.

Conclusions

There is little disagreement that effective content regulation requires government intervention and cooperation from platforms. But it is uncertain whether the current regulatory initiatives will prove sufficient to tame the politics of content. It should be acknowledged that misinformation and disinformation breed because there is popular demand for narratives and facts that affirm polarised opinions. In many instances, the polarisation of views begins with the politicians themselves. Instead of taking responsibility, elected law-makers have shifted accountability down to the platforms, media and courts, putting everyone in the cross-fire of their political disputes.

This suggests the long-term challenge is to revitalise the public forum so that diversity of opinion can translate into constructive discourse and thoughtful content. This was a role filled by traditional media in the past but the decline of the news industry and the emergence of platforms seems to have created a vacuum.

References

Celia Chen (16 October 2019) “China’s regulatory oversight of booming online travel platforms will include censorship of illegal content”. SCMP. https://www.scmp.com/tech/apps-social/article/3033025/chinas-regulatory-oversight-booming-online-travel-platforms-will

D. Wilding, P. Fray, S. Molitorisz, E McKewon (2018) “The impact of digital platforms on news and journalistic content”. University of Technology Sydney. https://www.accc.gov.au/system/files/ACCC%20commissioned%20report%20-%20The%20impact%20of%20digital%20platforms%20on%20news%20and%20journalistic%20content%2C%20Centre%20for%20Media%20Transition%20%282%29.pdf

Eliza Mackintosh (May 2019) “Finland is winning the war on fake news. What it’s learned may be crucial to Western democracy”. CNN. https://edition.cnn.com/interactive/2019/05/europe/finland-fake-news-intl/

Ellen P. Goodman, Ryan Whittington (1 August 2019) “Section 230 of the Communications Decency Act and the future of online speech”. Rutgers Law School Research Paper. SSRN. http://dx.doi.org/10.2139/ssrn.3458442

Erin Brodwin (21 September 2020) “How Pinterest beat back vaccine misinformation — and what Facebook could learn from its approach”. Stat.

House of Commons, UK (14 February 2019) “Disinformation and fake news: Final report”. Digital, Culture, Media and Sport Committee. https://publications.parliament.uk/pa/cm201719/cmselect/cmcumeds/1791/1791.pdf

Iris Deng, Celia Chen (8 January 2021) “Beijing updates internet regulation to include a wide swathe of services, fake news and fraud”. SCMP. https://www.scmp.com/tech/policy/article/3117015/beijing-updates-internet-regulation-include-wide-swath-services-fake

Jeffrey Goldberg (16 November 2020) “Why Obama fears for our democracy”. The Atlantic. https://www.theatlantic.com/ideas/archive/2020/11/why-obama-fears-for-our-democracy/617087/

Joan Donovan (28 October 2019) “Navigating the tech stack: When, where and how should we moderate content?” Center for International Innovation Governance (CIGON). https://www.cigionline.org/articles/navigating-tech-stack-when-where-and-how-should-we-moderate-content

Judit Bayer, Natalija Bitiukova, Petra Bárd, Judit Szakács, Alberto Alemanno, Erik Uszkiewicz (20 March 2019) “Disinformation and propaganda – Impact on the functioning of the rule of law in the EU and its Member States”. European Parliament Think Tank. http://www.europarl.europa.eu/thinktank/en/document.html?reference=IPOL_STU%282019%29608864

Kirsten Han, Charis Loke (12 June 2020) “POFMA: Singapore’s clumsy fake news hammer”. New Naratif. https://newnaratif.com/journalism/pofma-fake-news-hammer/

Mathew Ingram (22 January 2021) “Facebook asks its new oversight board to rule on banning Trump”. Columbia Journalism Review. https://www.cjr.org/the_media_today/facebook-asks-its-new-oversight-board-to-rule-on-banning-trump.php

Matt Taibbi (18 November 2020) “Meet the censored: Ford Fischer”. TK News. https://taibbi.substack.com/p/meet-the-censored-ford-fischer

Matt Taibbi (27 January 2021) “Meet the censored: Status coup – Silicon Valley is shutting down speech loopholes. The latest target: live content”. TK News. https://taibbi.substack.com/p/meet-the-censored-status-coup

Natali Helberger (7 July 2020) “The political power of platforms: How current attempts to regulate misinformation amplify opinion power”. Digital Journalism. Taylor & Francis Online. https://www.tandfonline.com/doi/full/10.1080/21670811.2020.1773888?src=recsys

Pavel Sharikov (20 November 2020) “Information threats and arms control: Is Russian-US dialogue possible?”. Valdai Club. https://valdaiclub.com/a/highlights/information-threats-and-arms-control-is-russian-us/

Phoebe Zhang (10 December 2019) “China airs tighter content rules to rein in live streaming, rap and comedy shows”. SCMP. https://www.scmp.com/news/china/society/article/3041430/china-airs-tighter-content-rules-rein-live-streaming-and-shows

Phuah Eng Chye (26 October 2019) “Information and organisation: Cross border data flows and spying”. http://economicsofinformationsociety.com/information-and-organisation-cross-border-data-flows-and-spying/

Phuah Eng Chye (29 February 2020) “The journey from privacy to transparency (and back again)”. http://economicsofinformationsociety.com/the-journey-from-privacy-to-transparency-and-back-again/

Phuah Eng Chye (14 March 2020) “Features of transparency”.

Phuah Eng Chye (28 March 2020) “The transparency paradigm”.

Phuah Eng Chye (11 April 2020) “Anonymity, opacity and zones”.

Phuah Eng Chye (7 November 2020) “Information rules (Part 1: Law, code and changing rules of the game)”. http://economicsofinformationsociety.com/information-rules-part-1-law-code-and-changing-rules-of-the-game/

Phuah Eng Chye (21 November 2020) “Information rules (Part 2: Capitalism, democracy and the path forward)”.

Phuah Eng Chye (5 December 2020) “Information rules (Part 3: Regulating platforms – Reviews, models and challenges)”.

Phuah Eng Chye (19 December 2020) “Information rules (Part 4: Regulating platforms – Paradigms for competition)”. http://economicsofinformationsociety.com/900-2/

Phuah Eng Chye (2 January 2021) “Information rules (Part 5: The politicisation of content)”. http://economicsofinformationsociety.com/information-rules-part-5-the-politicisation-of-content/

Phuah Eng Chye (16 January 2021) “Information rules (Part 6: Disinformation, transparency and democracy)”.

Roxanne Khamsi (16 November 2020) “A lack of transparency is undermining pandemic policy”. Wired. https://www.wired.com/story/a-lack-of-transparency-is-undermining-pandemic-policy/

Siva Vaidhyanathan (9 May 2020) “Facebook and the folly of self-regulation”. Wired. https://www.wired.com/story/facebook-and-the-folly-of-self-regulation/

Steve Randy Waldman (16 December 2020) “Repealing Section 230 as antitrust”. Interfluidity. https://www.interfluidity.com/v2/8093.html

Valerie C. Brannon (27 March 2019) “Free speech and the regulation of social media content”. Congressional Research Service. https://fas.org/sgp/crs/misc/R45650.pdf


[1] See Jeffrey Goldberg.

[2] See House of Commons, UK.

[3] “In the offline world…Distributors, such as bookstores and newsstands, are not generally held liable for the content they distribute. By contrast, publishers can be held liable for content. This distinction can be explained by the fact that, unlike distributors, publishers exercise a high degree of editorial control…The advent of the Internet challenged this distinction between distributor and publisher. Lawmakers understood that Internet services did not fit neatly into this distributor-publisher paradigm. These services often exercised more control over content than distributors but could not reasonably be considered publishers of the vast amount of material on their platforms”.

[4] “For example, to qualify for immunity, platforms could be required to publish data on their curation practices and moderation procedures” or platforms above a certain size could be required to pay a portion of their revenue into a fund dedicated to support accountability journalism. See Ellen P. Goodman and Ryan Whittington.

[5] “If the identity of a content creator is unknown and the platform is indemnified, victims of tortious or criminal conduct will often be left without meaningful legal recourse”. This proposal is attributed to Gus Hurwitz. See Ellen P. Goodman and Ryan Whittington.

[6] “In the broadcast environment, there are eight separate sets of rules as each type of broadcasting service (e.g., commercial television, commercial radio) has its own code of practice, as does each of the national broadcasters”. See D. Wilding, P. Fray, S. Molitorisz and E McKewon.

[7] “For print and online news and comment, most large publishers and some smaller publishers are members of the Australian Press Council (APC) and therefore subject to its two statements of principles”. See D. Wilding, P. Fray, S. Molitorisz and E McKewon.

[8] Norms such as accuracy and clarity, fairness and balance, privacy and the avoidance of harm, and integrity and transparency. See D. Wilding, P. Fray, S. Molitorisz and E McKewon.

[9] Such as the Global Alliance for Responsible Media (GARM).

[10] Comments in italics are sourced from others. See D. Wilding, P. Fray, S. Molitorisz and E McKewon.

[11] See “Information and organisation: Cross border data flows and spying”.

[12] See Roxanne Khamsi on the transparency issues in explaining the basis of Covid guidelines.

[13] Removal of fraudulent accounts and various other deterrents to rein in the spread of disinformation.

[14] See “Anonymity, opacity and zones”.