Information rules (Part 11: Regulating AI – Issues)

Phuah Eng Chye (27 March 2021)

Artificial Intelligence (AI) is the key that is unlocking the power of automation and data. It is accentuating the use of information to reconstruct economic activities; in the process creating new industries, business models and forms of employment. It is transforming organisational structures and cultures and altering social and power relationships between individuals, families, communities, firms and nations. With AI, the transformation of society from industrial to information is well on its way.

The benefits have been substantial. AI has opened new opportunities and efficiencies and enhanced safety and risk management capabilities. At the same time, the unparalleled power of AI is giving rise to mounting concern about potential harms. Algorithmic decision-making can be opaque, complex, and subject to error and bias. The widespread and unrestrained use of AI makes the long-term consequences difficult to predict and, even worse, difficult to control. These concerns stretch across a wide front, from the loss of human employment, skills, privacy and dignity to risks emanating from corporate or state surveillance, market domination, distributional inequalities, unconstrained enforcement and military use.

Growing need for algorithmic accountability

Frank Pasquale notes “over the past decade, algorithmic accountability has become an important concern…Exposés have sparked vibrant debates about algorithmic sentencing. Researchers have exposed tech giants showing women ads for lower-paying jobs, discriminating against the aged, deploying deceptive dark patterns to trick consumers into buying things, and manipulating users toward rabbit holes of extremist content. Public-spirited regulators have begun to address algorithmic transparency and online fairness, building on the work of legal scholars who have called for technological due process, platform neutrality, and nondiscrimination principles”.

Frank Pasquale points out “this policy work is just beginning, as experts translate academic research and activist demands into statutes and regulations. Lawmakers are proposing bills requiring basic standards of algorithmic transparency and auditing. We are starting down on a long road toward ensuring that AI-based hiring practices and financial underwriting are not used if they have a disparate impact on historically marginalized communities”.

Frank Pasquale explains the “first wave of algorithmic accountability research and activism” targets existing systems. He suggests the second wave of algorithmic accountability is addressing structural concerns on the type of AI systems being built and their governance. “We need genuine accountability mechanisms, external to companies and accessible to populations. Any A.I. system that is integrated into people’s lives must be capable of contest, account, and redress to citizens and representatives of the public interest”. “Second-wave critics of these apps may bring in a more law and political economy approach, questioning whether the apps are prematurely disrupting markets for (and the profession of) mental health care in order to accelerate the substitution of cheap (if limited) software for more expensive, expert, and empathetic professionals. These labor questions are already a staple of platform regulation. I predict they will spread to many areas of algorithmic accountability research, as critics explore who is benefiting from (and burdened by) data collection, analysis, and use”.

Jacob Mchangama and Hin-Yan Liu point out, though, that “the idea of legal constraint is increasingly difficult to reconcile with the revolution promised by artificial intelligence and machine learning – specifically, those technologies’ promises of vast social benefits in exchange for unconstrained access to data and lack of adequate regulation on what can be done with it. Algorithms hold the allure of providing wider-ranging benefits to welfare states, and of delivering these benefits more efficiently. Such improvements in governance are undeniably enticing. What should concern us, however, is that the means of achieving them are not liberal. There are now growing indications that the West is slouching toward rule by algorithm – a brave new world in which vast fields of human life will be governed by digital code both invisible and unintelligible to human beings, with significant political power placed beyond individual resistance and legal challenge. Liberal democracies are already initiating this quiet, technologically enabled revolution, even as it undermines their own social foundation”.

Translating principles to regulation

The biggest obstacle to implementing high-level generic principles is translating them into practical ground-level rules, as AI is experimental, self-learning and constantly evolving. Andrea Renda notes “several attempts have been made at building a comprehensive framework for responsible, trustworthy AI so far. Some of them limit themselves to declarations of principles (e.g. Asilomar principles), whereas others effectively provide a framework for implementing AI and aligning it with agreed-upon values. None of them provides a final answer to the achievement of responsible AI”.

Marc Böhlen[1] points out “the problem is that it is not clear yet how to make ethical autonomous A.I systems…There are really two very different and interwoven aspects of AI ethics. First is the ethics of those who make and use AI systems. Then there is the ethics of the AIs themselves…a hard call to assess whether a robot is ethical or not”. He notes an impact assessment checklist is typically used to assess ethical behaviour. “That kind of practical oversight is in principle a good idea. But the details really matter…The system has no provision of ensuring quality standards in the compliance…At worst, it could be an ethics whitewashing mechanism that allows less scrupulous actors to quickly comply with vague rules and then bring questionable products and services to market under a seal of approval”.

In addition, Marc Böhlen argues an AI ethics document “will certainly contain a large collection of perspectives…I would consider the excessive focus on inclusive discussion almost a smoke screen that then gives license to an uncompromising drive to push an AI positive industry agenda, with insufficient checks in place to balance the process…in that enforcement and oversight is unclear…focus on enabling AI to become marketable means that existing market logic will control AI… Certainly we will see an ethics compliance industry grow very quickly. It will probably generate similar excesses as other compliance industries have in the recent past…But compliance will not work everywhere. Imagine ethically aligned robots from one nation fighting an opponent with no such constraints”.

The European Commission points out “bias and discrimination are inherent risks of any societal or economic activity. Human decision-making is not immune to mistakes and biases”. While AI-driven bias could have much larger effects, the risks do not stem from a flaw in the original design but from the practical impact of the correlations or patterns the system identifies. “The specific characteristics of many AI technologies, including opacity (black box-effect), complexity, unpredictability and partially autonomous behaviour, may make it hard to verify compliance with, and may hamper the effective enforcement of, rules of existing EU law meant to protect fundamental rights. Enforcement authorities and affected persons might lack the means to verify how a given decision made with the involvement of AI was taken and, therefore, whether the relevant rules were respected. Individuals and legal entities may face difficulties with effective access to justice in situations where such decisions may negatively affect them”.

Andrea Renda observes “what makes the issue almost intractable is that there is no such thing as a neutral algorithm: and even if it was possible to generate one, a neutral algorithm would in many cases be useless, whereas excessively biased algorithms can be dangerous and harmful”. In this regard, the fault may lie with data, which can be biased, inaccurate, or inappropriate, rather than with the algorithm.
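
As an illustration of how the fault can sit in the data rather than in the algorithm, the sketch below (the data, field names and threshold are all hypothetical assumptions) trains a trivially simple rule on skewed historical decisions. The code contains no explicit reference to any protected group, yet it reproduces the skew embedded in the records it learns from.

```python
# Minimal sketch: bias entering through training data rather than code.
# The "model" is just a threshold on historical approval rates per postcode.
from collections import defaultdict

# Hypothetical historical decisions: (postcode, approved). Postcode "A" stands
# in for a neighbourhood that was under-approved in the past.
history = [("A", 0), ("A", 0), ("A", 1), ("A", 0),
           ("B", 1), ("B", 1), ("B", 0), ("B", 1)]

# "Training": estimate an approval rate per postcode from past outcomes.
counts, approvals = defaultdict(int), defaultdict(int)
for postcode, approved in history:
    counts[postcode] += 1
    approvals[postcode] += approved
rate = {p: approvals[p] / counts[p] for p in counts}

def decide(postcode: str, threshold: float = 0.5) -> bool:
    """A seemingly neutral rule: approve if the learned rate clears the threshold."""
    return rate[postcode] >= threshold

# New applicants from each postcode receive different outcomes purely because
# the historical data was skewed, not because of any flaw in the code itself.
for postcode in ("A", "B"):
    print(postcode, "learned rate =", round(rate[postcode], 2),
          "-> approve?", decide(postcode))
```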

AI algorithms also lack transparency. Margot E. Kaminski notes calls “for transparency in algorithmic decision-making, in the form of both notice towards individuals and audits that enable expert third-party oversight. Some of these calls for transparency have been ambitiously deep and broad, suggesting that both algorithmic source code and data sets should be subjected to public scrutiny. Others have responded by enumerating the harms this level of transparency could cause, or by arguing that transparency directed at individuals will be relatively useless since individuals lack the expertise to do much with it. But transparency of some kind has a clear place in algorithmic accountability governance, from recent calls for algorithmic impact assessments to proposals for whistleblower protections, to regularly repeated calls for algorithmic auditing”.

She argues the General Data Protection Regulation (GDPR) “may prove to be an example, both good and bad, of a robust algorithmic accountability regime in practice”. “The General Data Protection Regulation (GDPR) contains a significant set of rules on algorithmic accountability, imposing transparency, process, and oversight on the use of computer algorithms to make significant decisions about human beings. In this regard, the GDPR creates “a system of targeted revelations of different degrees of depth and scope aimed at different recipients. Transparency in practice is not limited to revelations to the public. It includes putting in place internal company oversight, oversight by regulators, oversight by third parties, and communications to affected individuals. Each of these revelations may be of a different depth or kind; an oversight board might get access to the source code, while an individual instead might get clearly communicated summaries that she can understand”.

Margot E. Kaminski clarifies individual transparency rights are protected by “a right to be informed about algorithmic decision-making…while individuals need not be provided with source code, they should be given far more than a one-sentence overview of how an algorithmic decision-making system works. They need to be given enough information to be able to understand what they are agreeing to (if a company is relying on the explicit consent exception); to contest a decision; and to find and correct erroneous information, including inferences”. “Thus, there is a clear relationship between the other individual rights the GDPR establishes – contestation, correction, and erasure – and the kind of individualized transparency it requires. This suggests something interesting about transparency: the substance of other underlying legal rights often determines transparency’s substance. If one has a right of correction, one needs to see errors. If one has a right against discrimination, one needs to see what factors are used in a decision. Otherwise, information asymmetries render underlying rights effectively void”.

Hence, “the guidelines list examples of what kinds of information should be provided to individuals and how it should be provided. Individuals should be told both the categories of data used in an algorithmic decision-making process and an explanation of why these categories are considered relevant. Moreover, they should be told the factors taken into account for the decision-making process, and…their respective weight on an aggregate level. They should be told how a profile used in algorithmic decision-making is built, including any statistics used in the analysis and the sources of the data in the profile. Lastly, companies should provide individuals an explanation of why a profile is relevant to the decision-making process and how it is used for a decision”.
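
The following sketch suggests what such a disclosure could look like in practice. It is a minimal illustration only, assuming a simple linear scoring model with hypothetical feature names, weights and threshold; in a real system the factors, weights and per-decision contributions would have to be drawn from the actual model in use.

```python
# Minimal sketch of a disclosure to an individual: the factors used,
# their aggregate weights, and their contribution to one decision.
WEIGHTS = {              # hypothetical aggregate-level weights of a linear scoring model
    "income": 0.5,
    "existing_debt": -0.3,
    "years_at_address": 0.2,
}
THRESHOLD = 0.6          # hypothetical score needed for approval

def score(applicant: dict) -> float:
    return sum(WEIGHTS[f] * applicant[f] for f in WEIGHTS)

def explain(applicant: dict) -> str:
    """Produce a plain-language account of factors, weights and contributions."""
    lines = [f"Decision threshold: {THRESHOLD}"]
    for factor, weight in WEIGHTS.items():
        contribution = weight * applicant[factor]
        lines.append(f"- {factor}: weight {weight:+.1f}, "
                     f"your value {applicant[factor]}, contribution {contribution:+.2f}")
    total = score(applicant)
    outcome = "approved" if total >= THRESHOLD else "declined"
    lines.append(f"Total score {total:.2f} -> {outcome}")
    return "\n".join(lines)

# Example disclosure for one (normalised, hypothetical) applicant.
print(explain({"income": 0.8, "existing_debt": 0.5, "years_at_address": 0.4}))
```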

Margot E. Kaminski notes “there are a number of ways that systematic transparency can be implemented. First, regulators can use significant information-forcing capabilities under the GDPR to get access to information about algorithms. The GDPR also envisions general data protection audits conducted by government authorities. Second, most companies deploying algorithmic decision-making must set up internal accountability and disclosure regimes. They must perform a data protection impact assessment, and provide information to an internal but independent data protection officer who has, at least on paper, deep information-forcing abilities…Third, the guidelines suggest that companies performing decision-making with a high impact on individuals should use independent third-party auditing and provide that auditor with all necessary information about how the algorithm or machine learning system works.”

However, Margot E. Kaminski cautions “it is one thing to put these requirements on paper and quite another to have them operate in practice. The system of algorithmic accountability that the GDPR and its accompanying interpretative documents envision faces significant hurdles in implementation: high costs to both companies and regulators, limited individual access to justice, and limited technical capacity of both individuals and regulators…there are other ways in which the GDPR may fail. Its heavy reliance on collaborative governance in the absence of significant public or third-party oversight could lead to capture or underrepresentation of individual rights”.

The experiences of several US states and cities in regulating government use of algorithms serve as cautionary tales. In 2017, New York City took the groundbreaking step of setting up its Automated Decision Systems (ADS) Task Force to develop policy on algorithmic technologies. Kate Kaye notes “for all its good intentions, the effort was bogged down in a bureaucratic morass. The task force failed at even completing a first necessary step in its work: getting access to basic information about automated systems already in use…city agencies, accustomed to keeping details of vendor technologies secret, were unwilling to provide even a basic list of automated systems already in use…tech firms insist on protecting such information from disclosure”.

Albert Fox Cahn notes the fundamental question centered on defining “what exactly is an automated decision system?”. “City officials brought up the specter of unworkable regulations that would apply to every calculator and Excel document, a Kafkaesque nightmare where simply constructing a pivot table would require interagency approval. In lieu of this straw man, they offered a constricted alternative, a world of AI regulation focused on algorithms and advanced machine learning alone. The problem is…some of the most powerful forms of automation still run on Excel, or in simple scripts. You don’t need a multi-million-dollar natural-language model to make a dangerous system that makes decisions without human oversight, and that has the power to change people’s lives”. “While the city’s officials hoped to confine our purview to algorithmic source code, the task force was given no details into how even the simplest of automated decision systems worked”. “Rather than providing a thoughtful critique of how specific systems succeed and fail, the document gives a passing reference to an array of concerns, ranging from bias, to funding, to regulatory burden. And thus died this first valiant effort at municipal algorithmic accountability”.

A report led by Rashida Richardson[2] argued that municipalities need to do a better job of ensuring control over contracts with tech providers. Its authors, technology experts and advocates for civil rights and algorithmic accountability, called for New York’s city council to pass a law requiring more transparent procurement of automated decision systems. “New York City should use its purchasing power to only contract with firms willing to disclose the details of their technology, how they will be used, and who could potentially be harmed. Any firm not willing to be transparent about their technology should be left off the NYC payroll”.

In Europe, there have been legal challenges to the use of AI in administering welfare benefits and other core services. Jon Henley and Robert Booth report “a Dutch court has ordered the immediate halt of an automated surveillance system for detecting welfare fraud because it violates human rights, in a judgment likely to resonate well beyond the Netherlands”. The Dutch government’s risk indication system (SyRI) gathered data in low-income neighbourhoods to identify individuals most likely to commit benefit fraud. “A broad coalition of privacy and welfare rights groups, backed by the largest Dutch trade union, argued that poor neighbourhoods and their inhabitants were being spied on digitally without any concrete suspicion of individual wrongdoing…The system did not pass the test required by the European convention on human rights of a fair balance between its objectives, namely to prevent and combat fraud in the interest of economic wellbeing, and the violation of privacy that its use entailed, the court added, declaring the legislation was therefore unlawful”.

Jacob Mchangama and Hin-Yan Liu note the municipality of Gladsaxe in Copenhagen has been “experimenting with a system that would use algorithms to identify children at risk of abuse, allowing authorities to target the flagged families for early intervention that could ultimately result in forced removals”. While “the application of big data and algorithmic processing seems to be perfectly virtuous, aimed as it is at upholding the core human rights of vulnerable children. But the potential for mission creep is abundantly clear…Such government algorithms also weaken public accountability over the government. Danish citizens have not been asked to give specific consent to the massive data processing already underway. They are not informed if they are placed on puzzlement lists, nor whether it is possible to legally challenge one’s designation. And nobody outside the municipal government of Gladsaxe knows exactly how its algorithm would even identify children at risk. Gladsaxe’s proposal has produced a major public backlash, which has forced the town to delay the program’s planned rollout. Nevertheless, the Danish government has expressed interest in widening the use of public-service algorithms across the country to bolster its welfare services – even at the expense of the freedom of the people they are intended to serve”.

Overall, Jacob Mchangama and Hin-Yan Liu believe “the perils of such programs are less understood and discussed than the benefits. Part of the reason may be that the West’s embrace of public-service algorithms are by-products of lofty and genuinely beneficial initiatives aimed at better governance. But these externalities are also beneficial for those in power in creating a parallel form of governing alongside more familiar tools of legislation and policy-setting. And the opacity of the algorithms’ power means that it isn’t easy to determine when algorithmic governance stops serving the common good and instead becomes the servant of the powers that be. This will inevitably take a toll on privacy, family life, and free speech, as individuals will be unsure when their personal actions may come under the radar of the government”.

They add “there are good reasons to think judicial procedures would not be able to serve as a check on the growth of public-service algorithms. Consider the Danish case: the civil servants working to detect child abuse and social fraud will be largely unable to understand and explain why the algorithm identified a family for early intervention or individual for control. As deep learning progresses, algorithmic processes will only become more incomprehensible to human beings, who will be relegated to merely relying on the outcomes of these processes, without having meaningful access to the data or its processing that these algorithmic systems rely upon to produce specific outcomes. But in the absence of government actors making clear and reasoned decisions, it will be impossible for courts to hold them accountable for their actions”.

Alfred Ng notes “lawmakers and researchers have advocated for algorithmic audits, which would dissect and stress-test algorithms to see how they work and whether they’re performing their stated goals or producing biased outcomes. And there is a growing field of private auditing firms that purport to do just that. Increasingly, companies are turning to these firms to review their algorithms, particularly when they’ve faced criticism for biased outcomes, but it’s not clear whether such audits are actually making algorithms less biased – or if they’re simply good PR”.

One issue is that “there are no industry standards or regulations…Generally, audits proceed a few different ways: by looking at an algorithm’s code and the data from its results, or by viewing an algorithm’s potential effects through interviews and workshops with employees. Audits with access to an algorithm’s code allow reviewers to assess whether the algorithm’s training data is biased and create hypothetical scenarios to test effects on different populations…For those that do hire auditors, there are no standards for what an “audit” should entail…And because audit reports are also almost always bound by nondisclosure agreements, the companies can’t compare each other’s work…And tech companies aren’t always forthcoming…situation where trade secrets are a good enough reason to allow these algorithms to operate obscurely and in the dark, and we can’t have that”. Alfred Ng notes “there’s also a growing body of adversarial audits, largely conducted by researchers and some journalists, which scrutinize algorithms without a company’s consent”. However, “there’s no guarantee companies will address the issues raised in an audit…Public pressure can at times push companies to address the algorithmic bias in the technology…But it can be hard to create buzz around algorithmic accountability.”
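
One concrete check an outcome-focused audit might run is a comparison of selection rates across demographic groups, as sketched below. The data, group labels and the use of the four-fifths benchmark are assumptions for illustration; a real audit would combine many such tests with access to code, training data and interviews.

```python
# Minimal sketch of an outcome audit: compare selection rates across groups
# and flag the result against the "four-fifths" disparate-impact benchmark.
from collections import defaultdict

# Hypothetical audited decisions: (group, selected)
decisions = [("group_a", 1), ("group_a", 0), ("group_a", 0), ("group_a", 0),
             ("group_b", 1), ("group_b", 1), ("group_b", 0), ("group_b", 1)]

totals, selected = defaultdict(int), defaultdict(int)
for group, outcome in decisions:
    totals[group] += 1
    selected[group] += outcome

# Selection rate per group and the ratio of the lowest to the highest rate.
rates = {g: selected[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print("selection rates:", {g: round(r, 2) for g, r in rates.items()})
print(f"disparate impact ratio = {ratio:.2f}",
      "(below 0.80: flag for review)" if ratio < 0.80 else "(within benchmark)")
```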

There has thus been little progress in translating high-level ethical concerns into effective oversight over AI. Kai-Fu Lee[3] thinks “regulation is clearly needed. But it should be on an application-specific basis. You can’t regulate a technology in a vacuum. So AI for autonomous vehicles should absolutely be regulated, but not AI in general. We will need to apply domain-specific expertise in each area both to regulate effectively and also to avoid expecting AI to do what it cannot do in each case…A related point is that anthropomorphism here is dangerous: we should not expect AI to explain everything as humans do - it will not always be possible. I do agree we should do our best to have explainable AI, but we can’t expect AI to give reasons like humans. We should remind ourselves that human reasons are not always good, accurate, or truthful either!”

Nonetheless, there are two critical areas where regulatory clarity is urgently required. One relates to accountability for damages caused by AI systems (the liability regime); the other to ownership rights.

Liability regime

New safety risks may be caused by flaws in either the design of AI-related technologies or the availability and quality of data. A European Commission (EC) white paper highlights “a lack of clear safety provisions tackling these risks may, in addition to risks for the individuals concerned, create legal uncertainty for businesses…Market surveillance and enforcement authorities may find themselves in a situation where they are unclear as to whether they can intervene, because they may not be empowered to act and/or don’t have the appropriate technical capabilities for inspecting systems. Legal uncertainty may therefore reduce overall levels of safety and undermine the competitiveness of European companies. If the safety risks materialise, the lack of clear requirements and the characteristics of AI technologies…make it difficult to trace back potentially problematic decisions made with the involvement of AI systems…Persons having suffered harm may not have effective access to the evidence that is necessary to build a case in court, for instance, and may have less effective redress possibilities compared to situations where the damage is caused by traditional technologies. These risks will increase as the use of AI becomes more widespread”.

The EC paper notes “the current product safety legislation already supports an extended concept of safety protecting against all kind of risks arising from the product according to its use. However, provisions explicitly covering new risks presented by the emerging digital technologies could be introduced to provide more legal certainty”. In this respect, the paper suggests that new assessments may be needed for “the autonomous behaviour of certain AI systems during its life cycle”; that “explicit obligations for producers could be considered also in respect of mental safety risks of users when appropriate”; that “union product safety legislation could provide for specific requirements addressing the risks to safety of faulty data”; that “the opacity of systems based on algorithms could be addressed through transparency requirements”; that existing rules may need to be adapted; and that, “given the increasing complexity of supply chains as regards new technologies, provisions specifically requesting cooperation between the economic operators in the supply chain and the users could provide legal certainty”.

Andrea Renda points out “one key aspect of the future policy framework for artificial intelligence is the choice of the liability regime for damages caused by AI systems”. This would require clarifying the scope of the liability[4]; the type of remedy and the type of liability rule; and the problems of attribution[5] or apportionment of liability. In relation to this, there is a need to pin-point “a responsible entity for damages caused by AI”. The challenge is to find a balance to “avoid imposing excessively burdensome obligations on AI developers, vendors and distributors in circumstances in which it is virtually impossible or useless to have a human in the immediate control of the system; and at the same time guarantee that end users will be compensated for the damage caused, and will therefore be more likely to accept and take up the new systems”.

Andrea Renda notes “this approach to responsibility inevitably leads to the identification of a strict (non-fault-based) liability regime…three aspects would need to be defined in detail: whether the regime would be absolute or relative; whether there would be one entity in the whole value chain that is primarily responsible vis-à-vis the end user; and whether there would be joint and several liability in case of joint participation in causing an accident”. Hence, “the design of a liability regime for AI inevitably boils down to a fundamental question: Can AI be considered as an object under the control of a human being, or does AI feature some elements of autonomy, which would warrant a different set of rules?” In this context, “if AI is considered as an extension of the human being, or a part thereof (as could occur in the case of augmented intelligence), then the liability rules applicable to humans would also apply to the AI system. Accordingly, a fault-based regime will most often apply”. “However, the law doctrines could also vary according to whether the AI is considered equivalent to an object, a service, an animal, a slave (e.g. robots) or as an employee”.

Andrea Renda explains that the current EU rules on liability for AI systems relate mostly to the Product Liability Directive (which determines liability if a product causes damage to a person or their private property) and the Machinery Directive (which sets general health and safety requirements for products). While the current EU legal framework appears largely adequate in maintaining a balance between compensating victims and avoiding stifling innovation, there is still a need for “some clarification and interpretive guidance in order to avoid generating confusion and a lack of certainty among industry players”. “At the same time, there are many pieces of sectoral regulation that already impose specific behaviour on the side of producers and service providers, which should not be cumulated with new regulatory obligations, in order to avoid redundancies and overlaps, with consequent losses of legal certainty and productivity”. For example, insurance companies and banks are already subject to information disclosure requirements, which may overlap with transparency, accountability and nondiscrimination obligations in future AI policies.

Ownership rights

The World Intellectual Property Organisation (WIPO) notes “the role of AI in the invention process is increasing, and there are cases in which the applicant has named an AI application as the inventor in a patent application”. Hence, a broad range of issues are being explored on whether intellectual property (IP) laws should be revised to permit AI applications to be named as inventors. There are wide ramifications concerning infringement, liability and dispute resolution, whether AI applications or algorithms are patentable subject matter, and the implications of AI for trademark law and trade secrets.

“Policy positions adopted in relation to the attribution of copyright to AI-generated works will go to the heart of the social purpose for which the copyright system exists… If AI-generated works were excluded from eligibility for copyright protection, the copyright system would be seen as an instrument for encouraging and favoring the dignity of human creativity over machine creativity. If copyright protection were accorded to AI-generated works, the copyright system would tend to be seen as an instrument favoring the availability for the consumer of the largest number of creative works and of placing an equal value on human and machine creativity”.

Additional complexities arise relating to whether the (unauthorised) use of data for machine learning would constitute an infringement of copyright. However, WIPO notes “since data are generated by such a vast and diverse range of devices and activities, it is difficult to envisage a comprehensive single policy framework for data. There are multiple frameworks that have a potential application to data, depending on the interest or value that it is sought to regulate. These include, for example, the protection of privacy, the avoidance of the publication of defamatory material, the avoidance of the abuse of market power or the regulation of competition, the preservation of the security of certain classes of sensitive data or the suppression of data that are false and misleading to consumers…The general question…is whether IP policy should go further than the classical system and create new rights in data in response to the new significance that data have assumed as a critical component of AI. The reasons for considering such further action would include the encouragement of the development of new and beneficial classes of data; the appropriate allocation of value to the various actors in the data value chain, notably, data subjects, data producers and data users; and the assurance of fair market competition against acts or behavior deemed inimical to fair competition”.

Tracy Qu notes “in the first known court case involving Chinese AI copyright protection, a Beijing-based law firm sued Baidu for infringement after one of the search giant’s platforms reposted a WeChat article by the law firm. Baidu’s defence was that the article was created by AI, so it was not protected by copyright. The case was heard by the new Beijing Internet Court, which in April held that only works created by a natural person can be protected under copyright law, but added that authorship of the AI-created work in question should still have been protected by law”. Xu Xinming[6] commented “the court’s decision giving authorship to the user of the AI is only from the perspective of promoting cultural communication and the development of science, but it did not point to any legal evidence supporting it…This was only a single case and a way for the Beijing Internet Court to explore the legal [dilemma], but the situation is far from mature”.

Conclusion

AI, data, content and platforms represent aspects of technological progress that have brought us through the doors of the information society. They have greatly expanded the information sphere. This, in turn, has increased the necessity for information rules to set out the new rules of the game. At the moment, government policies and regulations are being crafted on the fly in a haphazard manner. The risks of regulatory overreach and overload are obvious. It is also evident that there is insufficient regulatory capacity to handle the technical complexities. Hence, there is still a considerable way to go to evolve a coherent regulatory framework for AI. Nonetheless, the geopolitical competition to lead AI regulation is already intensifying.

References

Albert Fox Cahn (26 November 2019) “The first effort to regulate AI was a spectacular failure”. Fast Company. https://www.fastcompany.com/90436012/the-first-effort-to-regulate-ai-was-a-spectacular-failure

Alfred Ng (23 February 2021) “Can auditing eliminate bias from algorithms?” The Markup. https://themarkup.org/ask-the-markup/2021/02/23/can-auditing-eliminate-bias-from-algorithms

Andrea Renda (15 February 2019) “Artificial Intelligence: Ethics, governance and policy challenges”. Report of a CEPS Task Force. Centre for European Policy Studies (CEPS). https://www.ceps.eu/publications/artificial-intelligence-ethics-governance-and-policy-challenges

European Commission (19 February 2020) “White paper on Artificial Intelligence: A European approach to excellence and trust”. https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

European Commission (19 February 2020) “The report on liability for Artificial Intelligence and other emerging technologies”. Prepared by the Expert Group on Liability and New Technologies.

Frank Pasquale (25 November 2019) “The second wave of algorithmic accountability”. Law and Political Economy. https://lpeblog.org/2019/11/25/the-second-wave-of-algorithmic-accountability/

Jacob Mchangama, Hin-Yan Liu (25 December 2018) “The welfare state is committing suicide by artificial intelligence”. Foreign Policy.

Jon Henley, Robert Booth (5 February 2020) “Welfare surveillance system violates human rights, Dutch court rules”. The Guardian. https://www.theguardian.com/technology/2020/feb/05/welfare-surveillance-system-violates-human-rights-dutch-court-rules

Kate Kaye (13 December 2019) “New York just set a dangerous precedent on algorithms, experts warn”. Bloomberg Citylab. https://www.bloomberg.com/news/articles/2019-12-12/nyc-sets-dangerous-precedent-on-algorithms

Margot E. Kaminski (15 June 2018) “The right to explanation, explained”. SSRN. https://ssrn.com/abstract=3196985

Martin Reeves (13 May 2018) “How AI will reshape companies, industries, and nations: An interview with Kai-Fu Lee of Sinovation Ventures”. Journal of Beautiful Business. https://journalofbeautifulbusiness.com/how-ai-will-reshape-companies-industries-and-nations-an-interview-with-kai-fu-lee-of-sinovation-6409ee8af953

Matthew Linares (28 February 2019) “Developing industry standards won’t give AI a conscience”. Originally published at openDemocracy.

National Security Commission on Artificial Intelligence (March 2021) “NSCAI Final Report”. https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf

Phuah Eng Chye (7 November 2020) “Information rules (Part 1: Law, code and changing rules of the game)”. http://economicsofinformationsociety.com/information-rules-part-1-law-code-and-changing-rules-of-the-game/

Phuah Eng Chye (21 November 2020) “Information rules (Part 2: Capitalism, democracy and the path forward)”. http://economicsofinformationsociety.com/information-rules-part-2-capitalism-democracy-and-the-path-forward/

Phuah Eng Chye (5 December 2020) “Information rules (Part 3: Regulating platforms – Reviews, models and challenges)”. http://economicsofinformationsociety.com/information-rules-part-3-regulating-platforms-reviews-models-and-challenges/

Phuah Eng Chye (19 December 2020) “Information rules (Part 4: Regulating platforms – Paradigms for competition)”. http://economicsofinformationsociety.com/900-2/

Phuah Eng Chye (2 January 2021) “Information rules (Part 5: The politicisation of content)”. http://economicsofinformationsociety.com/information-rules-part-5-the-politicisation-of-content/

Phuah Eng Chye (16 January 2021) “Information rules (Part 6: Disinformation, transparency and democracy)”. http://economicsofinformationsociety.com/information-rules-part-6-disinformation-transparency-and-democracy/

Phuah Eng Chye (30 January 2021) “Information rules (Part 7: Regulating the politics of content)”. http://economicsofinformationsociety.com/information-rules-part-7-regulating-the-politics-of-content/

Phuah Eng Chye (13 February 2021) “Information rules (Part 8: The decline of the newspaper and publishing industries)”. http://economicsofinformationsociety.com/information-rules-part-8-the-decline-of-the-newspaper-and-publishing-industries/

Phuah Eng Chye (27 February 2021) “Information rules (Part 9: The economics of content)”. http://economicsofinformationsociety.com/information-rules-part-9-the-economics-of-content/

Phuah Eng Chye (13 March 2021) “Information rules (Part 10: Reimagining the news industry for an information society)”. http://economicsofinformationsociety.com/information-rules-part-10-reimagining-the-news-industry-for-an-information-society/

Tracy Qu (20 December 2019) “Legal experts grapple with copyright when it comes to works created using artificial intelligence”. SCMP. https://www.scmp.com/tech/start-ups/article/3042811/legal-experts-grapple-copyright-when-it-comes-works-created-using

World Intellectual Property Organisation (WIPO) (20 May 2020) “Revised issues paper on intellectual property policy and artificial intelligence”. https://www.wipo.int/meetings/en/doc_details.jsp?doc_id=499504


[1] See Matthew Linares.

[2] See Kate Kaye.

[3] See Martin Reeves.

[4] “Whether developers, vendors or distributors of AI systems should be liable for damages” and whether it is caused by data, inadequate safeguards, algorithm design or conduct.  See Andrea Renda.

[5] The difficulty in attributing responsibility arises because the individual contribution of AI to the damage is impossible to prove or the damage arose from its interactions (with humans or other AI). It may also be difficult to prove who, between the AI vendor, the distributor, or the OEM (original equipment manufacturer), caused the damage. See Andrea Renda.

[6] See Tracy Qu.