Information rules (Part 1: Law, code and changing rules of the game)

Phuah Eng Chye (7 November 2020)

Every age has its potential regulator, its threat to liberty. Our founders feared a newly empowered federal government; the Constitution is written against that fear. John Stuart Mill worried about the regulation by social norms in nineteenth-century England; his book On Liberty is written against that regulation. Many of the progressives in the twentieth century worried about the injustices of the market. The reforms of the market, and the safety nets that surround it, were erected in response. Ours is the age of cyberspace. It, too, has a regulator. This regulator, too, threatens liberty. But so obsessed are we with the idea that liberty means “freedom from government” that we don’t even see the regulation in this new space. We therefore don’t see the threat to liberty that this regulation presents. This regulator is code – the software and hardware that make cyberspace as it is. This code, or architecture, sets the terms on which life in cyberspace is experienced. It determines how easy it is to protect privacy, or how easy it is to censor speech. It determines whether access to information is general or whether information is zoned. It affects who sees what, or what is monitored. In a host of ways that one cannot begin to see unless one begins to understand the nature of this code, the code of cyberspace regulates. This regulation is changing. The code of cyberspace is changing. And as this code changes, the character of cyberspace will change as well. Cyberspace will change from a place that protects anonymity, free speech, and individual control, to a place that makes anonymity harder, speech less free, and individual control the province of individual experts only.

Lawrence Lessig (January 2000) “Code is law: On liberty in cyberspace”.

Societies advance based on their ability to use information. In advanced societies, information animates social, business and political engagements. But information also disrupts traditional organisation and control systems. It increases the level of complexity, upsets the balance of power and increases disputes. A massive expansion of information rules is therefore needed to clarify what information can be captured and when, and to define ownership, access, disclosure, usage, accountabilities and liabilities related to information. In tandem with this, there is increasing debate on regulating what can be said (content).

But there is no clear path forward on formulating a coherent set of information rules. Instead, kneejerk responses have produced a patchwork of regulation. Tim O’Reilly points out “regulation is the bugaboo of today’s politics. We have too much of it in most areas, we have too little of it in others, but mostly, we just have the wrong kind, a mountain of paper rules, inefficient processes, and little ability to adjust the rules or the processes when we discover the inevitable unintended results”. He thinks the weakness of the “normal regulatory model” is its focus on rules rather than outcomes. “How often have we faced rules that simply no longer make sense? How often do we see evidence that the rules are actually achieving the desired outcome?”

Growing overlap between law and code

Actions and accountabilities in the physical world are bounded by laws based on reasoning, and disputes are resolved by distinct judicial and enforcement processes. In the virtual sphere, by contrast, actions, accountabilities and decisions are determined by network architecture and code. As the virtual sphere expands and intrudes onto the physical sphere, the overlap between the worlds of law and code grows.

Samer Hassan and Primavera De Filippi note “as more and more of our interactions are governed by software, we increasingly rely on technology as a means to directly enforce rules. Indeed, as opposed to traditional legal rules, which merely stipulate what people shall or shall not do, technical rules determine what people can or cannot do in the first place. This eliminates the need for any third party enforcement authority to intervene after the fact, in order to punish those who infringed the law”.

Samer Hassan and Primavera De Filippi point out “today, regulation by code is progressively establishing itself as a regulatory mechanism adopted not only by the private sector but also by the public sector. Governments and public administrations increasingly rely on software algorithms and technological tools in order to define code-based rules, which are automatically executed (or enforced) by the underlying technology”. Examples include “predictive assessments about potential threats to national security, or the use of computer algorithms to support judicial decision-making[1] and determine jail sentences[2] or paroles”.

“Software[3] ultimately ends up stipulating what can or cannot be done in a specific online setting, more frequently than the applicable law, and possibly also much more effectively…The advantage of this form of regulation by code is that, instead of relying on ex-post enforcement by third parties (i.e., courts and police), rules are enforced ex-ante, making it very difficult for people to breach them in the first place. Besides, as opposed to traditional legal rules, which are inherently flexible and ambiguous, technical rules are highly formalized and leave little to no room for ambiguity, thereby eliminating the need for judicial arbitration”.
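The contrast between ex-post and ex-ante enforcement can be made concrete with a toy example. The sketch below is a minimal illustration only; the three-copy limit, the class names and the violations log are assumptions introduced for this sketch and are not drawn from any actual DRM scheme. It shows the same copy-limit rule expressed first as a legal-style rule (the action always succeeds and violations are punished after the fact) and then as a code-style rule (the architecture simply refuses the action).

```python
# Toy contrast between ex-post (legal-style) and ex-ante (code-style) enforcement.
# The copy limit, class names and log are illustrative assumptions only.

COPY_LIMIT = 3  # e.g. a licence term allowing at most three digital copies

class LegalStyleWork:
    """Ex-post: the action always succeeds; violations are recorded for later enforcement."""
    def __init__(self):
        self.copies = 0
        self.violations = []          # a court or regulator reviews this after the fact

    def copy(self, user):
        self.copies += 1
        if self.copies > COPY_LIMIT:
            self.violations.append(f"{user} exceeded the copy limit")  # punish later
        return True                   # the copy is made regardless

class CodeStyleWork:
    """Ex-ante: the architecture refuses the action, so no enforcement step is needed."""
    def __init__(self):
        self.copies = 0

    def copy(self, user):
        if self.copies >= COPY_LIMIT:
            return False              # the rule cannot be breached in the first place
        self.copies += 1
        return True

if __name__ == "__main__":
    legal, code = LegalStyleWork(), CodeStyleWork()
    for _ in range(5):
        legal.copy("alice")
        code.copy("alice")
    print(legal.copies, len(legal.violations))  # 5 copies made, 2 violations to prosecute
    print(code.copies)                          # capped at 3; nothing to prosecute
```

The difference in the printed counts is the whole point: under the legal-style rule there are extra copies and a backlog of violations to prosecute; under the code-style rule there is nothing to enforce because the breach never occurred.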

Samer Hassan and Primavera De Filippi add “regulation by code also comes with important limitations and drawbacks that might create new issues related to fairness and due process”. “On the one hand, in contrast to traditional legal rules, which must be appreciated by a judge and applied on a case-by-case basis, code-based rules are written in the rigid and formalized language of code, which does not benefit from the flexibility and ambiguity of natural language. On the other hand, the architectural implementation of online platforms ultimately depends on the specific choices of platform operators and software engineers, seeking to promote or prevent a certain type of actions. Just like any other technological artifact, code is not neutral, but inherently political: it has important societal implications, insofar as it might support certain political structures or facilitate certain actions and behaviors over others”.

Algorithmic regulation

The ability of code to substitute for law is a significant development. Tim O’Reilly argues “we are at a unique time when new technologies make it possible to reduce the amount of regulation while actually increasing the amount of oversight and production of desirable outcomes…reputation entirely replaces regulation, seemingly with no ill effect. Governments should be studying these models, not fighting them, and adopting them where there are no demonstrable ill effects…Reputation systems are a great example of how open data can help improve outcomes for citizens with less effort by overworked regulators and enforcement officials”.

Hence, the current environment is “amenable to creative forms of measurement, and ultimately algorithmic regulation…Once you understand that you have actionable data being systematically collected, and that your interventions based on that data are effective, it’s time to begin automating those interventions”. Tim O’Reilly argues “we need to find more ways to make the consequences of bad action systemic, rather than subject to haphazard enforcement. This is only possible when laws and regulations focus on desired outcomes rather than the processes used to achieve them”. In this context, “laws should specify goals, rights, outcomes, authorities, and limits. If specified broadly, those laws can stand the test of time”.
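O’Reilly’s argument is, in effect, a control loop: collect outcome data systematically, compare it with the desired outcome, and trigger the intervention automatically. The sketch below illustrates one such loop under stated assumptions – the rating threshold, the minimum-review requirement and the “suspend listing” intervention are hypothetical placeholders, not a description of any existing regulatory system.

```python
# A minimal outcome-focused regulation loop: measure, compare to the desired
# outcome, intervene automatically. All thresholds and data are hypothetical.

OUTCOME_TARGET = 4.0   # e.g. minimum acceptable average rating for a provider
MIN_REVIEWS = 20       # don't act on thin evidence

def assess(provider, ratings):
    """Return the automated intervention (if any) for one provider."""
    if len(ratings) < MIN_REVIEWS:
        return None                      # insufficient data; no intervention
    avg = sum(ratings) / len(ratings)
    if avg < OUTCOME_TARGET:
        return f"suspend listing of {provider} pending review (avg={avg:.2f})"
    return None

def regulation_cycle(reported_ratings):
    """One pass of the loop over systematically collected outcome data."""
    return [action
            for provider, ratings in reported_ratings.items()
            if (action := assess(provider, ratings))]

if __name__ == "__main__":
    data = {
        "provider_a": [5, 4, 5, 4, 5] * 5,   # consistently good: no action
        "provider_b": [2, 3, 1, 3, 2] * 5,   # poor outcomes: automatic action
        "provider_c": [5, 5],                # too few reviews: no action yet
    }
    for action in regulation_cycle(data):
        print(action)
```

The design choice that matters here is the one O’Reilly highlights: the rule is stated as an outcome (an acceptable rating level) rather than as a process, and the intervention follows automatically from the measured data.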

Lambert Strether, however, is critical of arguments for “automating those interventions”. He argues “algorithmic regulation code, unlike data, can be proprietary and secret…how are coders and their employers to be held accountable for bugs and induced to fix them? Will bugs even be tracked? Or is the occasional random loss of a home an acceptable price to pay for an algocratic utopia”. In circumstances where algorithmic regulation is used for law enforcement, people are, to all intents and purposes, governed by software, “but if you believe…software is buggy – then how do you appeal? In the case of proprietary, closed source software, there is no appeal because you can’t see the software, so you can’t find the bugs…it’s worth underlining heavily how untransparent, how opaque, even obfuscatory, software programs are”. In addition, “at least one could read the text” of a law, “but with algorithmic regulation…How in good conscience can we ask a free people to obey regulations that they cannot read, when they cannot verify how the regulation was applied in their case, and when they have no way for appeal”.

This is not a theoretical debate. Algorithmic regulation is already widely used – such as in credit ratings, scores, reviews, lists and likes. At the extreme, the implementation of social credit will take this trend to the next level. The benefits and consequences of algorithmic regulation are immense and should be carefully analysed.

Changing rules of the game

The expansion of algorithmic regulation is consistent with the transition to an information society. In this context, the information society is characterised by disorder[4] due to the disruptive effects of information on activities, processes and organisations. This makes traditional rules, set for a physical environment, obsolete. Order can only be restored by putting in place new rules that can cater to both the physical and virtual worlds.

The shift in the regulatory paradigm from the physical to the informational signifies that the rules of the game are changing. Where crime used to be defined largely in a physical context, the new rules mostly define misdemeanours and illegal conduct in an information context. Physical crimes (such as burglary) are overshadowed by an avalanche of newly defined information crimes (such as those relating to privacy, data or content). This will be accompanied by an expansion of standards and information requirements to define information crimes.

In this context, enforcers and regulators have already found it more expedient to prosecute information or procedural failures (tax evasion, omitting or providing false information, and failure to comply) than to prove actual wrongdoing. The penalties for information failures also become more onerous when they are linked to a serious crime such as money laundering or a national security offence. In other words, the cost of filling in a form wrongly has escalated greatly.

The regulatory focus has thus generally shifted from preserving physical safety to minimising disrepute and conflict and improving authenticity. In tandem with this, rules on corporate and social behaviours and on content have expanded significantly. A consequence of information overload is regulatory overload. More rules, more lawsuits and rising liabilities will increase the pressure to expand legal documentation to mitigate risks. Overall, information rules will sharply increase legal risks, potential liabilities and bureaucracy costs for society as a whole.

A stop-gap agenda for information rules

In the face of growing overlaps between law and code, the current approach to regulating information is haphazard. The existing flaws – conflicting rules and gaps – will be further worsened by the need to introduce or modify rules to cover the expansion of information products (patents, copyright and data), activities (sharing, video-conferencing), technologies (AI, IOT, blockchain) and industries (drones, autonomous cars). It is impractical to expect that a panacea exists, and the likelihood is continued tinkering with existing regulations.

It is therefore useful to develop a stop-gap agenda for the medium term. A set of generic principles should be established to guide the design of information rules. At the moment, the tendency is to develop principles separately for privacy, data, content or AI. Integrating these principles into a holistic framework is a critical step towards ensuring consistent and coherent regulation. The framework should also aim to present a vision of how law and code can be integrated to achieve regulatory goals, and to identify the areas where safeguards are required.

There is also a need for an overview of the regulatory terrain and roles – mapping out the areas or activities that should be regulated, those that require direct government oversight, and those where it may be more practical to devolve responsibilities to the private sector (industry associations, firms or platforms). One possibility is to develop a coverage matrix with industry regulators (such as banking, transportation) on one axis and general regulators (such as for competition, consumer protection) on the other to identify areas of overlap. I would also suggest that, to avoid regulatory fragmentation, consideration be given to establishing specialist agencies (such as for data or AI) that would provide expertise, advice and research and take on the role of assisting and coordinating the regulatory agencies. In tandem with this, the jurisdictions, accountabilities and liabilities of the regulatory and specialist agencies can be mapped out.
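The coverage matrix suggested above can be sketched very simply. The regulators, topics and overlaps named below are illustrative placeholders chosen only to show the mechanics of spotting overlaps and gaps; they do not describe any actual jurisdiction.

```python
# Sketch of a regulatory coverage matrix: industry regulators on one axis,
# general regulators on the other. All names below are illustrative only.

industry_regulators = ["banking", "transportation", "healthcare"]
general_regulators = ["competition", "consumer_protection", "data_protection"]

# Mark each (industry, general) cell where both claim jurisdiction over a topic.
coverage = {
    ("banking", "data_protection"): ["customer data sharing"],
    ("banking", "competition"): ["platform lending"],
    ("transportation", "consumer_protection"): ["ride-hailing pricing"],
    # healthcare rows intentionally left empty to illustrate a gap
}

def report(coverage, industries, generals):
    """Print the overlaps cell by cell, then flag industries with no mapping at all."""
    for ind in industries:
        for gen in generals:
            topics = coverage.get((ind, gen))
            if topics:
                print(f"overlap: {ind} x {gen}: {', '.join(topics)}")
    covered = {ind for ind, _ in coverage}
    for ind in industries:
        if ind not in covered:
            print(f"gap: no general regulator mapped against {ind}")

if __name__ == "__main__":
    report(coverage, industry_regulators, general_regulators)
```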

In particular, the regulatory boundaries between the private and public domains and between the physical and virtual spheres need to be redrawn. Convergence implies regulation can no longer be individually segmented and should be generalised or as universal as possible. It is timely to regard private code as an extension of the regulatory architecture, so as to ensure decisions made by private code are in line with public objectives and laws. In any case, as the real-time intervention of code becomes more intrusive, the reconciliation of law and code into a common framework is inevitable.

The traditional judicial and enforcement system is archaic and this poses several risks. First, it is not equipped to handle the throughput, transience, speed and complexities of modern infringements and disputes. Delays and inconsistent decisions can undermine trust in its fairness and independence. Second, regulatory gaps may lead to judicial overreach – where the burden of clarifying ambiguous laws is left to the courts rather than to the executive and legislative branches. Hence, there is a need to re-conceptualise the judicial and enforcement system in the context of integrating the public-private and physical-virtual spheres. This would involve facilitating better use of technology and data, streamlining processes and relocating responsibilities to ensure an efficient, robust and transparent enforcement and dispute resolution process.

The transition will not be a smooth process. Tom Simonite notes that although AI risk-scoring algorithms were rolled out to assist judges in sentencing decisions, “judges appear not to have trusted that system. After the law took effect, they overruled the system’s recommendation more than two-thirds of the time…Over time, judges reverted to their prior ways…Now we understand that risk assessment can interact with judges to make disparities worse”. In addition, “their developers have sometimes restricted government agencies using their tools from releasing information about their design and performance. Jurisdictions haven’t allowed outsiders access to the data needed to check their performance”.

Overall, the advantage of code is its ability to process large volumes of “situations” quickly and at low cost. By comparison, human justice is costly and inefficient. “Machine justice” means justice need no longer be limited to police and courts and can be made widely available. However, “machine justice” may not result in “better justice” because it may replicate human bias, aggravate bias embedded in existing patterns, and remain subject to human override and manipulation. Then again, “machine justice” may not be worse than “human justice”. Criticism of algorithmic justice can be unfair, particularly when it relates to difficult-to-resolve situations. In addition, criticism of discriminatory bias is usually followed by recommendations to stop using algorithms rather than by advice on how algorithms can be made bias-free.
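Advice on making algorithms less biased usually starts with measuring the bias. The sketch below is one minimal form of such an audit: it compares how often a risk score flags people who did not go on to reoffend, broken down by group. The records, group labels and threshold are fabricated for illustration; real audits rely on far richer data and multiple fairness metrics.

```python
# Minimal disparity audit for a risk-scoring tool: compare false positive rates
# (flagged as high risk but did not reoffend) across groups. Data is fabricated.

RISK_THRESHOLD = 7  # scores at or above this are treated as "high risk"

# (group, risk_score, reoffended) -- hypothetical records
records = [
    ("group_a", 8, False), ("group_a", 6, False), ("group_a", 9, True),
    ("group_a", 5, False), ("group_b", 8, False), ("group_b", 9, False),
    ("group_b", 7, True),  ("group_b", 8, False), ("group_b", 4, False),
]

def false_positive_rate(rows):
    """Share of non-reoffenders who were nonetheless flagged as high risk."""
    non_reoffenders = [score for _, score, reoffended in rows if not reoffended]
    if not non_reoffenders:
        return 0.0
    flagged = sum(1 for score in non_reoffenders if score >= RISK_THRESHOLD)
    return flagged / len(non_reoffenders)

if __name__ == "__main__":
    groups = {g for g, _, _ in records}
    rates = {g: false_positive_rate([r for r in records if r[0] == g]) for g in groups}
    for group, rate in sorted(rates.items()):
        print(f"{group}: false positive rate {rate:.2f}")
    # A large gap between groups is a signal to recalibrate, retrain or restrict use.
```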

To an extent, the regulatory system for the finance industry (with some caveats) can be a role model for information rules. Finance is information’s oldest industry. Efforts to formalise information regulation began in earnest in the 1930s with the promulgation of securities laws which evolved later into a sophisticated framework comprising standards and rules for disclosures, compliance and oversight.

Conclusion

Initially, code did not matter much because its reach and effects were limited to private silos. But code’s boundaries have expanded with the massive growth in datasets containing comprehensive, permanent records that can be aggregated on a historical or cross-sectional basis. The formulation of a cohesive set of information rules is therefore becoming more pressing as the information society comes of age.

In a virtual world, code has an essential role in shaping order. As the overlap between the physical and virtual grows, there will be increasing clashes between law and code. As private justice begins to rival public justice, there is a need to understand which will override which, and when. In setting the rules for an information society, the priority is to integrate rules for the physical and virtual spheres, and this requires managing the integration of code and law. In this context, we should be reminded that the difficulty of regulating code arises because we are also trying to regulate life itself.

In many respects, social credit systems[5] – as abhorrent as the idea may be to some – may be the forerunner of judicial and enforcement systems in the information society. Justice is justice, and the same rules should apply in the public-private and physical-virtual spheres. So rather than tarring social credit systems as dystopian, it may be more purposeful to figure out how to make them work towards reinforcing democracy.

References

Fan Anqi (6 September 2020) “Extra points for good behavior: Suzhou’s civilization code sparks controversy”. Global Times. https://www.globaltimes.cn/content/1200084.shtml

Lambert Strether (20 August 2014) “Algorithmic regulation, code is law, and the case of Ferguson”. Naked Capitalism. https://www.nakedcapitalism.com/2014/08/algorithmic-regulation-code-law-case-ferguson.html

Lawrence Lessig (January 2000) “Code is law: On liberty in cyberspace”. Harvard Magazine. https://www.harvardmagazine.com/2000/01/code-is-law-html

Matt Ho (28 August 2020) “China plans hi-tech supervision of police officers and judges as party tightens grip on domestic security”. SCMP. https://www.scmp.com/news/china/politics/article/3099349/china-plans-hi-tech-supervision-police-officers-and-judges

Phuah Eng Chye (11 May 2019) “Critique of information”. http://economicsofinformationsociety.com/%EF%BB%BFcritique-of-information/

Samer Hassan, Primavera De Filippi (2017) “The expansion of algorithmic governance: From code is law to law is code”. The Journal of Field Actions. https://journals.openedition.org/factsreports/4518

Tim O’Reilly (2013) “Open data and algorithmic regulation”. Chapter 22 Beyond Transparency. https://beyondtransparency.org/chapters/part-5/open-data-and-algorithmic-regulation/

Tom Simonite (9 May 2019) “Algorithms should’ve made courts more fair. What went wrong?” Wired. https://www.wired.com/story/algorithms-shouldve-made-courts-more-fair-what-went-wrong/


[1] See Matt Ho on how China plans to use blockchain, artificial intelligence and big data technologies to identify procedural violations in investigations, trials and enforcement work.

[2] See Tom Simonite on the hazards of using algorithms for court sentencing.

[3] E.g. Digital Rights Management (DRM) schemes transpose the provisions of copyright law into technological measures of protection by restricting usage of copyrighted works (e.g. limiting the number of digital copies). See Samer Hassan and Primavera De Filippi.

[4] See “Critique of information”.

[5] See Fan Anqi on the discussion of Suzhou’s civilization code, a civil behavior scoring system that evaluates “residents’ daily lives including employment, study and entertainment based on their degree of civilization”.