Information rules (Part 12: The global race for AI governance leadership)

Phuah Eng Chye (10 April 2021)

In its early years, global adoption of AI benefitted immensely from openness and minimal restraints. The governance vacuum was filled by the private sector. Janosch Delcker notes “organizations ranging from the Catholic Church to Big Tech corporations have all released nonbinding ethical principles for how to develop and use AI…rules [that] do exist have largely been developed by industry players – such as Google, Facebook and Apple in the U.S., or Tencent and Alibaba in China – for themselves…The rules for AI are, in effect, being written in code by the biggest tech companies who have the biggest AI research and the biggest pools of data and the biggest connection to a number of people who are the providers of data”.

This is changing with rising geopolitical tensions. AI supremacy is now perceived as the key to winning the global technology race. Governments are flexing their policy muscles on technology and information, contemplating policies to promote their national and commercial interests. Andrea Renda explains “the international political order will be heavily affected by this transition…the data-hungry nature of current AI applications…the emerging technology stack will be considered as critical infrastructure, i.e. essential to national stability in the near future…The explosion of the Internet of Things and the massive generation of data driven, AI-powered applications that run key critical infrastructure such as energy grids, internet pipelines, the food chain, the ATM network, hospital logistics and care delivery will gradually lead countries to try to protect the IT stack as a domestic asset. The risk of foreign intrusion into the data architecture, already existing today…will gradually become an existential risk for governments. Thus, a temptation to invoke so-called AI sovereignty or AI autarchy may emerge…AI sovereignty will be even more loudly invoked in the age of quantum supremacy, given the need to avoid that advances in cryptography provide hostile nations with important strategic advantages in global intelligence”. Global governance of AI will thus be shaped by rivalry among the major economic powers. The pursuit of individual national interests points to a future of regulatory fragmentation and uncertainties.


In a space once dominated by the US and US firms, China has suddenly emerged as a major force and has become the hare setting the pace for others in AI. China has outlined a “next generation artificial intelligence development plan”[1] to “achieve world-leading levels, making China the world’s primary AI innovation center” and to occupy “the commanding heights of AI technology” by 2030. “China will have established a number of world-leading AI technology innovation and personnel training centers (or bases), and will have constructed more comprehensive AI laws and regulations, and an ethical norms and policy system”.

Janosch Delcker notes “China’s blatant ambitions to become the world leader in AI had since alarmed U.S. policymakers and tech executives. Not only was Beijing pouring billions into the research and development of AI, but reports also suggested that the Chinese government was using it to build up an all-seeing surveillance state. And it was becoming increasingly clear that, as the country’s tech giants were exporting their technology around the globe, China was pushing for its ideas to become international standards. This convinced the White House that, despite differences with its allies over what rules are needed, Western countries should band together to make sure their companies, rather than Beijing’s, remained at the forefront of developing AI”.

Fu Ying offers a different perspective. “The US view of hi-tech as an area of strategic rivalry is not a perspective shared by China. While there is competition, the reality in the field is a kind of constructive and strategic mutual dependency…Both countries can benefit tremendously in a partnership, unless the US forces a decoupling and pushes China to find other partners or to develop its own solutions – which would also weaken US companies’ position and influence. So for China, the preferred outcome is an interdependent community with a shared future, international conversations to encourage collaboration, and common rules for safe, reliable and responsible AI”.

Dieter Ernst’s assessment is that “in China, public research institutions conduct AI research into both neural networks and symbolic AI. Interactions with industry, however, remain limited, as industry’s primary concern is to forge ahead in China’s rapidly growing mass market for AI applications. China’s AI strategy has emphasized data as a primary advantage. With fewer obstacles to data collection and use, China has amassed huge data sets, the likes of which do not exist in other countries. It was assumed that China could always purchase the necessary AI chips from global semiconductor industry leaders. Until recently, AI applications run by leading-edge major Chinese technology firms were powered by foreign chips, mostly designed by a small group of top US semiconductor firms. However, this global knowledge sourcing was not supported by a robust body of domestic research. With the escalation of the US-China technology war, this lack of research has become a major vulnerability for China’s AI industry”.


Martijn Rasser notes the US “already has many of the necessary components to foster an American AI century: World-class universities and research institutes, an open society that promotes innovation, a dynamic and hard-working population, and the world’s leading technology companies”. But he points out “much of its present achievements is rooted in policies implemented decades ago…U.S. technological leadership is not guaranteed”. “On its current trajectory, the United States is poised to fall behind its competitors. China, in particular, is spending growing sums on AI research and development and is outpacing the United States in the deployment of AI-based systems. China is also luring more expats back home to join the AI ecosystem”.

Martijn Rasser points out “what is still missing, however, is a true framework for a national AI strategy. The United States needs to think big and take bold action to harness the technology’s potential and address the challenges”. He recommends boosting R&D funding, greater emphasis on developing human capital, reconsidering immigration policies and protecting its technological edge, particularly by placing “broad export controls on semiconductor manufacturing equipment to China”.

In 2019, the US[2] launched its “American AI Initiative…a concerted effort to promote and protect national AI technology and innovation. The Initiative implements a whole-of-government strategy in collaboration and engagement…directs the Federal government to pursue five pillars for advancing AI: (1) invest in AI research and development, (2) unleash AI resources, (3) remove barriers to AI innovation, (4) train an AI-ready workforce, and (5) promote an international environment that is supportive of American AI innovation and its responsible use”.

Dieter Ernst points out “debates on AI governance need to keep in mind that countries differ in their AI research trajectories…apart from data and algorithms, America’s AI leadership is based primarily on advanced, specialized AI chips. The markets for leading-edge AI chips are tight oligopolies, controlled by a handful of US companies. Equally important, US companies control the software for integrated circuit design (called Electronic Design Automation/EDA tools), and are (with the important exception of ASML, a Dutch company) the lead players in semiconductor production equipment. In light of such an extremely unequal distribution of market power, a handful of US market leaders can shape technology trajectories, standards and pricing strategies for AI chips”.

A National Security Commission on Artificial Intelligence (NSCAI) report highlights that the US is drastically underprepared for the age of artificial intelligence and that, given the military implications, the US must resolve to win the strategic competition with China. Sam Shead notes “the commission calls on the U.S. government to more than double its AI research and development spending to $32 billion a year by 2026. It also suggests establishing a new body to help the president guide the U.S.′ wider AI policies, relaxing immigration laws for talented AI experts, creating a new university to train digitally skilled civil servants, and accelerating the adoption of new technologies by U.S. intelligence agencies”. The NSCAI report also highlights the need “to become self-reliant on computer chips and warns about the dangers of being so dependent on Taiwan’s TSMC…we must reevaluate the meaning of supply chain resilience and security.” In tandem with this, the NSCAI also recommends the US tighten choke points on chipmaking technology to prevent China from overtaking it.


The EU is facing the reality it is lagging in AI. Guntram B. Wolff notes “when European companies want to adopt AI solutions, they are almost inevitably forced to turn to vendors from the United States or China – the world’s two incontestable leaders…The EU offering is comparatively poor. Of the top 30 AI-related patent applicants, only four are European. Nor is the future looking more promising. Of the 100 most promising AI startups in the world, only two are from the EU (while six are from the U.K.), and they attract well-below-average funding”.

Guntram B. Wolff argues the dependency “raises at least two concerns. The first concern is geopolitical…As the use of AI grows, so will the risks that come with depending on a technology that is produced and controlled outside of the EU. The risk that supply chains will be disrupted by economic decoupling has become very real in the current geopolitical climate. The second concern is that giving access to data can provide long-term advantages to existing AI companies – making the development of European AI ever more difficult”.

Guntram B. Wolff points out “only a bold strategy…can secure AI in Europe. First, the EU needs to invest in its own AI technologies…conditions for entrepreneurs need to be improved, and industrial policy should be designed to support AI innovation clusters. Second, the EU needs to make sure that AI firms in Europe can use data to train their machine-learning algorithms…The deployment of IoT in manufacturing is an opportunity for the EU to become a leader in AI if the data management can be governed by European rules. Finally, the EU needs to create a single market for data, especially data from industrial processes”.

Dieter Ernst thinks “both Europe and China are way behind the United States in the critically important AI chip ecosystem. The real challenge for Europe is to define a handful of AI niches that require domain knowledge for specific applications like health or transportation systems. Aggressive competition policy helps, as do regulatory frameworks such as the GDPR and the forthcoming Digital Services Act, which will set new rules regarding digital platforms’ responsibility for illegal content and disinformation online. AI standardization is another area where the European Union should have some leverage. However, all of these regulatory efforts need to be supported by joint efforts to strengthen domestic technological and management capabilities”.

Towards this end, the EU plans to “combine its technological and industrial strengths with a high-quality digital infrastructure and a regulatory framework based on its fundamental values to become a global leader in innovation in the data economy”. The European Commission emphasises the key elements of a future regulatory framework for AI involves creating “a unique ecosystem of trust” by ensuring “compliance with EU rules, including the rules protecting fundamental rights and consumers’ rights”. “The Commission is convinced that international cooperation on AI matters must be based on an approach that promotes the respect of fundamental rights, including human dignity, pluralism, inclusion, nondiscrimination and protection of privacy and personal data and it will strive to export its values across the world. It is also clear that the responsible development and use of AI can be a driving force to achieve the Sustainable Development Goals and advance the 2030 Agenda”.

In this regard, the European Commission notes “developers and deployers of AI are already subject to European legislation on fundamental rights (e.g. data protection, privacy, non-discrimination), consumer protection, and product safety and liability rules” but nonetheless acknowledges that requirements regarding “transparency, traceability and human oversight are not specifically covered under current legislation in many economic sectors…In addition, it must ensure socially, environmentally and economically optimal outcomes and compliance with EU legislation, principles and values. This is particularly relevant in areas where citizens’ rights may be most directly affected, for example in the case of AI applications for law enforcement and the judiciary”.

However, the European Commission acknowledged “some specific features of AI (e.g. opacity) can make the application and enforcement of this legislation more difficult… a need to examine whether current legislation is able to address the risks of AI and can be effectively enforced, whether adaptations of the legislation are needed, or whether new legislation is needed. Given how fast AI is evolving, the regulatory framework must leave room to cater for further developments…Member States are pointing at the current absence of a common European framework…If the EU fails to provide an EU-wide approach, there is a real risk of fragmentation in the internal market, which would undermine the objectives of trust, legal certainty and market uptake”.

Andrea Renda notes the EU approach is based on exploiting “the existence of a huge gap in leadership on AI global governance: a gap that the US and China are probably unwilling to deal with, and that only the European Union, working as a collective, would have the strength to fill. The EU could become a leading voice in a new global governance setting where technical standards are otherwise being shaped only through voluntary, multi-stakeholder, transnational private regulation”.

Susan Ariel Aaronson agrees the EU roadmap is a signal that it “views AI development as a global good”. This is reflected by the EU’s use of trade agreements to promote the free flow of data, enforcement actions against firms engaging in uncompetitive business practices and its protection of personal data. “Many nations strive to be judged as adequate, to freely trade data with and from EU citizens. In short, the EU’s internationalist and trustworthy approach is gaining converts”.

Janosch Delcker thinks the “EU, home of stringent privacy laws, wants to become a leader in trustworthy AI and plans to release the world’s first laws for AI early next year. This, Brussels hopes, will protect Europeans against abuse and boost consumer trust in European AI while giving its industry a competitive advantage in the long run”.

However, Europe’s governance-led approach to AI has its critics. Guntram B. Wolff points out “Europe leads the world in AI regulation, which is important but insufficient to exploit its potential…must decide if Europe wishes to become a global player, and not just a global referee”.

Nigel Cory points out “Europe is already struggling in this race” and “yet it fails to recognize the likelihood that a new restrictive conformity assessment framework is likely to further undermine the EU’s position”. He points out “the EC is rushing to apply the precautionary principle – the idea that innovations must be proven safe before they are deployed – based on the widespread but incorrect beliefs that there is something inherently suspect about the technology, that organizations will have strong incentives to use the technology in ways that harm individuals, and that existing laws are insufficient to effectively oversee the use of this technology”. He believes “the likelihood of these risks coming to fruition is often overstated…Many proposed solutions are a poor fit, inadequate, and/or ineffective”. In this regard, he observes that each country’s ability “to influence global standards builds on its firms’ ability to develop these new technologies, not the other way around”.

Nigel Cory thinks “the rush to regulation and implementation, without waiting on international discussions on AI and standards to evolve, indicates that the EU is willing to use AI regulation as a protectionist and expansionist strategy rather than building bridges between common approaches that each address shared public policy interests”. For example, the EU approach to rules and conformity assessment advantage “its own intra-regional regulatory standards and a select, designated group of European standards” at the expense of non-EU players. “Such Europe-specific conformity testing for data-driven applications represents a mechanism for localization and discrimination between local and foreign firms and their digital products” and are “precisely the kind of localization barrier to trade that the EC advocates against in forums like the WTO. Its application to new technology stands to exacerbate its negative impact on trade and interoperability”.

This could boomerang if, in the future, EU firms are affected by other countries “enacting their own opaque and arbitrary conformity assessment frameworks for AI”. National conformity testing frameworks will almost certainly create “a whole other realm of potential trade disputes…The proposed institutional framework for administrating this framework is equally problematic…Creating or designating completely new agencies or offices, competencies, and coordination mechanisms is costly and complicated. It also presumes the competency and appropriateness of notified bodies”.

Nigel Cory argues “Europe shouldn’t focus on being first with new digital rules, it should focus on creating and implementing rules that allow AI-driven businesses and innovations to flourish in Europe, and in other likeminded nations that embrace the principles of rules-governed, enterprise-led, market-based trade. European policies should be designed to enable and promote healthy and robust competition in digital industries, for doing so will have a powerful effect on promoting European productivity and economic growth”.

The global race for AI governance leadership

Nick Wright notes “there are currently no global, multilateral bodies exclusively focussed on governing AI-based technologies that would enable states to deliberate, develop norms, and set agendas on issues ranging from algorithmic discrimination to AI in warfare. In an era of rising great power competition, the creation of new multilateral institutions or global AI treaties would be fraught by years of negotiations, and, moreover, is highly unlikely. As such, there is the risk of a governance vacuum, whereby global AI standards and innovation evolves in a disparate fashion, with a lack of coordination and cooperation among major powers. Given the current global context, the focus should be on utilising, maximising, and strengthening the potential and scope of existing instruments and institutions, in order to advance the development of shared standards and solve global problems related to AI”.

Susan Ariel Aaronson thinks countries won’t be able to reap the benefits of AI without international cooperation “to create an effective enabling environment for AI, an internationally accepted system of norms to govern both data and AI, and adopt policies to discourage anti-competitive behaviour. The US should be leading this effort because it holds a large share of the global market for AI services. However, it is sending mixed messages”. On one hand, officials have developed ethical guidelines, undertaken actions against anti-competitive practices and improved access to public data, while the US-Mexico-Canada Agreement facilitated the free flow of data. On the other, the US has not fostered an effective enabling environment for AI. The US lacks a national law protecting personal data, has adopted “a nationalist conception of AI, emphasising its role as a military technology and its importance to national security”, and has restricted exports of AI and visas for foreign AI researchers. She argues the nationalistic approach undermines the international network of research, talent and capital that fuels AI. “Like the EU, the US wants to create trusted AI systems”. “Taken together, its actions send a message that America is less interested in cooperation than domination”.

However, as Dieter Ernst observes, “there is no one best way to approach AI research and deployment. China, Europe and the United States can only play the cards they have, and diversity will continue to shape their unique AI development trajectories…This persistent diversity of national AI research trajectories indicates that resolving AI governance challenges through cooperation will not be easy”.

Janosch Delcker notes most governments “acknowledge that to regulate the global AI industry, international rules are needed – rules that are often drafted in standard-setting bodies where China has been particularly active over the past decade. Beijing has notably been pushing its ideas inside the Internet Engineering Task Force, an international organization developing core protocols for the internet; and technical standard-setting bodies like the United Nations’ International Telecommunication Union, whose recommendations are often made policy by countries in the Global South”.

The US and Europe are concerned about China’s efforts to influence global rules. Janosch Delcker notes “to counter these efforts, the West is seeking to build its own community”. However, their approaches diverge. “While European leaders are eyeing laws as sweeping as the General Data Protection Regulation, the U.S. has been pushing a light-touch approach to regulation within its own borders”. In this regard, the US considers Brussels’ approach[3] “heavy-handed and innovation-killing, echoing the position of its own tech industry, whose representatives have warned that too many rules would stifle innovation”.

Nonetheless, Susan Ariel Aaronson notes the US government is “working with 41 other countries at the Organisation for Economic Cooperation and Development (OECD) on an international agreement for building trustworthy artificial intelligence”. In May 2019, the US adopted “the OECD AI Recommendation, the first intergovernmental standard for AI, which includes five complementary values-based principles and five recommendations”[4] that promote AI that is innovative and trustworthy and that respects human rights and democratic values. The US also supported “the G20 AI Principles, which are drawn from the OECD Recommendation”.

She adds that in June 2020, the U.S. “joined an alliance dubbed the Global Partnership on Artificial Intelligence (GPAI) – comprising all G7 members (Canada, France, Germany, Italy, Japan, the U.K. and the U.S.) plus Australia, South Korea, Singapore, Mexico, India, Slovenia and the EU – with the aim of supporting the “responsible and human-centric development and use” of AI…in a manner consistent with human rights, fundamental freedoms and our shared democratic values…In other words: a different approach from China’s”.

Dieter Ernst notes that while “it makes sense to start with a plurilateral approach, focusing on collaboration between…countries who share at least some common values and institutional histories. But at some stage…needs to lay out a longer-term strategy that defines interactions with other countries, such as China, India, South Korea and Taiwan, Province of China. These countries already play an important role in the development of AI research, patents and standards. Freezing out China from global AI governance would cause irreparable collateral damage to all members of the global AI community”.

Dieter Ernst points out “more fundamental issues are at stake. Due to the US-China technology war, international governance of AI that is based on state representation has ceased to be effective” with the erosion of the “rules-based order”. The US is “further expanding the exterritorial reach of US law through its Clean Network initiative, while neglecting UN organizations. China, on the other hand, has successfully occupied the levers of command and control in the International Telecommunication Union and other UN organizations. The spread of technology warfare is severely handicapping current efforts to develop international standards for AI. As established rules of trade are broken, mutual distrust and rising uncertainty are disrupting international trade, investment and knowledge exchange, especially in high-tech industries such as AI”. “As a result, international AI standardization based on state representation is no longer able to keep pace with the breathtaking speed of AI research and applications. Geopolitical antagonisms threaten to politicize the work of technical committees. Gone are the days when international technical standardization was all about a cooperative search for technical solutions to benefit transnational corporations, trade and technological innovation”.

Dieter Ernst points out there is an alternative. “For many AI research groups, an important motivation for relying on open-source platforms is protection against the exterritorial reach of US technology export restrictions…open-source platforms and communities play an increasingly active role in AI standardization. Technical standard setting is painfully slow, while open-source projects tend to happen amazingly fast. In response, many AI tools have shifted toward open-source environments to provide software-based interoperability. In addition, open-source communities tend to avoid traditional fora out of fear that geopolitics may trump ethical and moral principles”.

However, he notes “there is no guarantee that open-source platforms on their own will improve the distribution of AI research capabilities and assets…only 15 percent of AI research papers publish their code, and these are predominantly from academic groups. Global platform leaders typically embed their code in their proprietary frameworks that cannot be released. New approaches to AI governance are needed to address this important, unresolved issue”.

Dieter Ernst concludes “there is reason for cautious optimism. Despite the disruptions caused by technology war and the global pandemic, the inherent vitality of global AI research and development communities keeps finding ways to overcome the barriers that are currently suffocating government-to-government cooperation. These new forms of informal cooperation will exert pressure on policy makers to create new regulatory frameworks for strengthening the role of open-source platforms and communities”. Hence, “it is high time to systematically explore how these new hybrid forms of cooperation work, and what needs to be done to improve access for smaller companies and research labs across the globe”.


Andrea Renda (15 February 2019) “Artificial Intelligence: Ethics, governance and policy challenges”. Report of CEPS Task Force. Centre for European Policy Studies (CEPS).

Dieter Ernst (22 October 2020) “AI research and governance are at a crossroads”. Centre for International Governance Innovation (CIGI).

European Commission (19 February 2020) “White paper on Artificial Intelligence: A European approach to excellence and trust”.

Fu Ying (5 December 2019) “Why the US should join China in future-proofing AI technology”. SCMP.

Guntram B. Wolff (19 February 2020) “Europe may be the world’s AI referee, but referees don’t win”. Bruegel.

Janosch Delcker (7 September 2020) “Wary of China, the West closes ranks to set rules for artificial intelligence”. Politico.

Martijn Rasser (24 December 2019) “The United States needs a strategy for artificial intelligence”. Foreign Policy.

Meredith Broadbent (17 March 2021) “What’s ahead for a cooperative regulatory agenda on artificial intelligence?” Center for Strategic & International Studies (CSIS).

National Security Commission on Artificial Intelligence (March 2021) “NSCAI Final Report”.

Nick Wright, Oliver Patel, Audrey Tan (2 July 2020) “AI and international relations: Shaping the policy and research agenda”. UCL European Institute.

Nigel Cory (12 June 2020) “Response to the public consultation for the European Commission’s white paper on a European approach to artificial intelligence”. Information Technology & Innovation Foundation (ITIF).

Phuah Eng Chye (7 November 2020) “Information rules (Part 1: Law, code and changing rules of the game)”.

Phuah Eng Chye (21 November 2020) “Information rules (Part 2: Capitalism, democracy and the path forward)”.

Phuah Eng Chye (5 December 2020) “Information rules (Part 3: Regulating platforms – Reviews, models and challenges)”.

Phuah Eng Chye (19 December 2020) “Information rules (Part 4: Regulating platforms – Paradigms for competition)”.

Phuah Eng Chye (2 January 2021) “Information rules (Part 5: The politicisation of content)”.

Phuah Eng Chye (16 January 2021) “Information rules (Part 6: Disinformation, transparency and democracy)”.

Phuah Eng Chye (30 January 2021) “Information rules (Part 7: Regulating the politics of content)”.

Phuah Eng Chye (13 February 2021) “Information rules (Part 8: The decline of the newspaper and publishing industries)”.

Phuah Eng Chye (27 February 2021) “Information rules (Part 9: The economics of content)”.

Phuah Eng Chye (13 March 2021) “Information rules (Part 10: Reimagining the news industry for an information society)”.

Phuah Eng Chye (27 March 2021) “Information rules (Part 11: Regulating AI – Issues)”.

Sam Shead (2 March 2021) “U.S. is not prepared to defend or compete in the A.I. era, says expert group chaired by Eric Schmidt”. CNBC.

State Council Notice (1 August 2017) “A next generation artificial intelligence development plan”. Translated by Graham Webster, Paul Triolo, Elsa Kania, Rogier Creemers. China Copyright and Media.

Susan Ariel Aaronson (7 October 2019) “The Trump administration’s approach to artificial intelligence is not that smart: it’s about cooperation, not domination”. SCMP.

[1] See State Council Notice.

[2] See “Artificial Intelligence for the American People”.

[3] See also Meredith Broadbent on the frictions in the US-EU tech relationship.

[4] See