Global reset – Technology decoupling (Part 6: Fragmentation, data sovereignty and localisation)

Phuah Eng Chye (21 May 2022)

UNCTAD notes “the three data behemoths – the United States, China and the European Union – have each created distinct data realms, which creates problems of compatibility or interoperability among them, severely impeding the ability to devise global rules to govern cross-border data flows and, thereby, create a level playing field for all countries. For those countries outside these dominant data realms (except for a few exceptions, such as India and the Russian Federation), this means that, as rule-takers, they will likely have to choose which of the models of data governance to follow if divergence continues to grow. To enhance their access to data and their market dominance, the United States, China and the European Union seek to bring other countries under their realm through instruments such as trade agreements or capacity-building, or in exchange for market access. Officials in smaller or less advanced countries will likely feel compelled to choose one realm over the others, because they already have significant trade relations with that market, or because they favour that realm’s approach to data governance. For many countries, however, it will prove difficult, if not impossible, to choose, since they have significant economic relations with more than one realm. Consequently, those countries’ Governments will try to delay for as long as possible before aligning themselves with one particular realm. Thus, developing countries would be trapped in making choices that would affect other economic relations. For instance, Latin American countries often have to choose between the GDPR model and the United States model with regard to regulation of cross-border data flows and data protection rules; given that their economic interests are aligned with both these blocs, most Latin American countries face a tough choice. Several countries in Africa now appear to be aligning with the Chinese model of cybersovereignty, but they also have ties with the European Union and the United States. China has stronger influence in many Asian developing countries. The traditional allies of the United States have been encouraged to take a tough stance against Chinese companies, such as excluding Huawei from their telecommunications networks and banning social media apps such as TikTok. In terms of infrastructure, less points of interconnection to the global network resulting from Internet fragmentation would entail increased costs and overall lower efficiency; fragmentation would also lead to a reduced ability to participate in the network effects of the dynamics of a relatively global interconnection. Given the high degree of interconnection and interdependence with global content and service providers in many developing countries, there may be significant implications for local companies and users affected by the fragmentation of Internet services”.

Fragmentation risks and costs

UNCTAD argues “a divided approach to data governance could eventually lead to a world of divergent data nationalism, where countries adopt inward-looking data policies with no international consensus, resulting in reduced opportunities for digital innovation and development across the world. This fragmentation is likely to lead to a suboptimal outcome, where it would not be possible for the potential benefits of the data-driven economy, which are mostly based on the flow of data, to materialize…A potential fragmentation in the data-driven digital economy may create difficulties for technological progress, with reduced competition, oligopolistic market structures in the different areas, and stronger influence of the Government. It would reduce business opportunities, as the access of users and companies to supply chains would become more complicated, and data flows would be restricted across borders. Also, there would be more obstacles for collaboration across jurisdictions, which would become less reliable”.

UNCTAD adds that “divergent data nationalism will be especially inimical to the interests of developing countries, including LDCs. First, it will result in suboptimal domestic regulations, especially in developing countries with low regulatory capacity, resulting in adverse consequences for privacy and security, and prejudicing the interests of domestic Internet users…Second, a fragmented Internet reduces market opportunities for domestic Micro, Small and Medium-sized Enterprises (MSMEs) to reach worldwide markets, which may instead be confined to some local or regional markets. Third, divergent data nationalism reduces opportunities for digital innovation, including various missed opportunities for inclusive development that can be facilitated by engaging in data-sharing through strong international cooperation. Finally, a world of divergent data nationalism has only a few winners and many losers. Certain established digital economies may emerge as winners due to their advantageous market size and technological prowess, but most small, developing economies will lose opportunities for raising their digital competitiveness. However, in the absence of a properly functioning international system of regulations of cross-border data flows that allows maximizing benefits from data, while addressing the risks, in a way that income gains are equitably distributed, the only option for developing countries is to regulate their data flows at the national level”.

In this context, UNCTAD argues fragmentation of the digital space “goes against the original spirit of the Internet as a free, decentralized and open network”. “Academia and think tanks mostly tend to support the free flow of data…favour cross-border data flows, as these lower costs of doing business and expand international trade, consumer welfare and GDP”. It adds “some research argues towards free data flows to support human rights, freedom of speech and democracy…Data may represent activities that are of concern for national security and law enforcement, as well as national culture and values. As more and more activities become encoded within data, the nature of data flows therefore becomes a concern for those focused on security and enforcement. Ensuring security and protection of data produced by key organizations (such as the military or within critical infrastructure) increasingly plays a central role in national security. This perspective on data can often overlap with the economic perspective. For example, national security rules within countries with a stronger geopolitical focus might be concerned with protecting trade secrets and intellectual property of domestic organizations as much as with critical national activities. As data become more prevalent, they also provide a means to track criminality and enforce laws. Therefore, accessibility and jurisdiction of data are becoming more important in law enforcement. Data can also overlap with domestic security questions. In some countries, data flows (for example, those that embed certain media or applications) might be counter to cultural or moral norms, or of a politically sensitive nature that leads to censorship”.

Lindsey R. Sheppard, Erol Yayboke and Carolina G. Ramos argue that in the information and communications technology (ICT) sector, “evidence suggests that data localization increases prices and [limits the] availability of ICT products and services while creating few data center jobs. Despite economic protectionist arguments that cross-border data flows could make local internet-based businesses less competitive, there is limited evidence to suggest that data localization drives local economic development, online or off. Efforts to erect barriers might provide short-term commercial benefits to newly advantaged domestic firms, though potentially at the expense of innovation and the broader, long-lasting global economic growth spurred by the advent of the internet”.

Despite the negative views on fragmentation risks, Hung Tran observes “at present, it appears that more and more countries are moving toward data-localization requirements, either de jure as parts of their claims of data sovereignty or de facto when they insist that data can only be transferred freely to countries deemed to have adequate privacy protection. In fact, the number of countries having enacted data-localization legislation has increased from thirty-five in 2017 to sixty-four in 2021. The tendency of many countries to require data localization is becoming a difficult obstacle to making progress in negotiating digital trade deals”.

One key development has been the Schrems II case[1] where the Court of Justice of the European Union (CJEU) declared the EU-US Privacy Shield framework an invalid mechanism for transferring personal data to the US and questioned the use of Standard Contractual Clauses (SCCs)[2] facilitating such transfers. Carolyn Bigg explains “casting doubt on SCCs makes it much more expensive and risky to transfer or access personal data outside the EU, and risks massive fines for businesses which get it wrong. Although SCCs do remain a valid mechanism for processing EU personal data outside Europe, the judgment confirms that individual businesses must now verify whether the conditions of transfer (including the destination country) offer appropriate safeguards to individual’s personal data in accordance with GDPR”.

At first glance, the CJEU ruling on the Schrems II case “appeared only to have major consequences for data sharing between the EU and the US. Now it’s abundantly clear that the decision presents a vast set of commercial, operational and legal challenges and risks for businesses all over the world. The ruling affects all multinational businesses that transfer data in and out of the EU, use EU service providers, have EU entities or operations, or even just have EU-based customers or users – and not least the many businesses that rely on cloud and outsourcing providers in Asia…each business must now consider each and every single international data flow: even commonplace daily activities such as using global HR or CRM systems or cloud solutions, or operating a global website”. “Not only will organisations need to assess the standard of data protection laws in each country according to GDPR, the surveillance practices and national security laws of these countries will also need to be taken into account as part of the risk assessment. Transfers which continue despite a failed assessment must be reported to an EU data protection regulator”.
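
To make the compliance burden concrete, the following is a minimal sketch (in Python, and emphatically not legal advice) of the kind of per-flow check a business might now run before relying on SCCs. The adequacy list, the risk factors and the function are illustrative placeholders rather than an official GDPR rule set.

```python
# Minimal sketch (not legal advice): the kind of per-flow check a business
# might run after Schrems II before relying on SCCs. The country list and
# the risk logic are hypothetical placeholders, not an official rule set.

# Hypothetical adequacy list; the real list is maintained by the European Commission.
ADEQUATE_COUNTRIES = {"JP", "CH", "NZ", "KR", "UY"}

def assess_transfer(destination_country: str,
                    uses_sccs: bool,
                    surveillance_risk_high: bool,
                    supplementary_measures: bool) -> str:
    """Classify a single international data flow out of the EU."""
    if destination_country in ADEQUATE_COUNTRIES:
        return "allowed: adequacy decision covers destination"
    if not uses_sccs:
        return "blocked: no valid transfer mechanism in place"
    if surveillance_risk_high and not supplementary_measures:
        # Failed assessment: under the ruling such a transfer must be
        # suspended or reported to an EU data protection regulator.
        return "failed assessment: suspend transfer or notify regulator"
    return "allowed: SCCs plus case-by-case safeguards"

# Example: a global HR system hosted in a third country with strong
# surveillance laws and no extra safeguards.
print(assess_transfer("US", uses_sccs=True,
                      surveillance_risk_high=True,
                      supplementary_measures=False))
```

Multiplied across every HR, CRM, cloud and advertising flow a multinational operates, even a checklist this crude suggests why the assessment exercise is described as forensic.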

Carolyn Bigg advises “the cost and operational burden this requirement places on businesses cannot be underestimated”. “The judgment also confers huge power on EU data protection regulators to suspend or prohibit transfers where such appropriate safeguards cannot be provided, and non-compliance could expose exporters of EU data to fines of up to a prohibitive 4 per cent of total global annual turnover”. “Given the risks of investigations or substantial fines under GDPR, contractual liabilities with existing vendors, or costs and operational challenges of finding alternative cloud, IT or service solutions, businesses will need to be forensic and thorough in their assessments if they wish to continue to use SCCs. The decision has profound implications for software and technology companies which rely on globally hosted cloud services or outsourcing providers in countries such as India or the Philippines. In the coming weeks and months, both providers and their customers will need to grapple with some extremely tough questions. Are they still able to service European customers? Will they need to build new data centres in Europe, or relocate? Do they need to repatriate, silo and/or ring fence GDPR-protected data within Europe? Should they avoid SCCs for transferring EU data altogether, and will the alternative GDPR cross-border mechanisms remain valid for much longer?”

Nicholas Davis, Landry Signé and Mark Esposito note “regulatory divergence around the world is estimated to cost financial institutions between five and 10 percent of annual revenue, amounting to at least $780 billion each year. In addition to creating compliance costs, fragmented approaches to governance may increase the very risks they seek to address, simply by creating complexity and confusion through conflicting measures. Extraterritorial regulations, such as those emanating from the U.S., EU, and China, can undermine local markets and regulations, while the inconsistent implementation of standards can distort markets and create negative consequences for end users. It is unhelpful when the economic complexity on trade and the nascent needs around transparency and disclosure of non-financial performance require a converging system, not a devolved one”.

Data sovereignty

UNCTAD notes “traditionally, sovereignty has been associated with national territories and physical borders. However, the data-driven digital economy challenges this concept, as data are transmitted through the Internet…and national borders become blurred. An additional factor that affects sovereignty is that, with increasing market power and size, powerful global digital platforms can behave in a nation-State-like manner, self-regulating their huge digital ecosystems, which include more and more aspects of life and society, and affect the sovereignty of true nation States…Conventionally, sovereignty has been advanced at the level of the nation State, as it has the legitimacy, power and capacity to establish rules and govern (normally attributed by the sovereign will of its population through democratic elections). As data become increasingly economically important and States perceive a loss of control, against other countries or global digital platforms, as a result of cross-border data flows, there have been growing concerns in relation to data sovereignty at a national level. The terms digital and data sovereignty have been widely debated recently; the notion of data sovereignty practically did not exist before 2011 in academic and public debates. It has taken various meanings that reflect different cultural values and political preferences in different regions; the meaning may also be evolving over time as national priorities change”.

“The concept of data sovereignty or digital sovereignty refers to the idea that data is subject to the laws and governance structures within the nation in which it is collected”. In this regard, Daniel Araya and Maithili Mavinkurve note “there are gaping holes in national, regional and international laws on data as well as regulations, norms and institutions around how data should be governed, collected, processed, monetized and used. Many of these holes are already being filled but much is left to be done. In response to the challenges of data governance, many governments are now instituting virtual borders to control their data and protect data sovereignty”.

The importance of asserting data sovereignty is highlighted by various risks. Daniel Araya and Maithili Mavinkurve point out “the fact that US companies (as per the US Foreign Intelligence Surveillance Act and the US Clarifying Lawful Overseas Use of Data Act) can be compelled to hand over Canadian data to the US government without notifying Canadian authorities. These kinds of digital vulnerabilities represent substantial holes within the contemporary design of Canada’s data infrastructure…In fact, the Canadian government has clearly stated that as long as a CSP [cloud service provider] that operates in Canada is subject to the laws of a foreign country, Canada will not have full sovereignty over its data. This is because there remains a risk that data stored in the cloud could be accessed by another country. The issue of data sovereignty is complex and continuously evolving as foreign laws are being tested in foreign courts. This issue highlights the need for sovereign Canadian networks and data infrastructure (including 5G), data regulations and data laws to protect Canadian sovereignty”. In this context, there is a need “to recognize that data supersedes economics and innovation as a matter of national security…As an example, a foreign blockade in the digital realm could mean that a foreign power might restrict or suspend Canada from accessing critical data or services…Additionally, what if there is a scenario whereby, to protect itself, a foreign ally is forced to shut off its virtual borders?” Canada’s dependency on US digital infrastructure also means it is exposed to extreme cyberthreat scenarios. “Ensuring sovereignty over data and key cyber-infrastructure is now increasingly critical to protecting citizens”. “Canadian national security measures should require that sensitive data in certain critical industries be stored, routed and processed within Canada. Canadian domestic internet traffic should remain within Canada”. In addition, “accumulating and monetizing Canadian data is critical to future economic growth…The asymmetry at play between large data platform companies and individual consumers means that ordinary Canadians are at a substantial disadvantage. Without public oversight, platform companies can act in ways that are harmful to Canada and Canadians”.

Similar concerns have surfaced in Europe. UNCTAD points out “there is a growing discussion in the European Union on digital sovereignty, based on its values focused on the protection of fundamental rights; it also connects to the idea that the European Union needs to build capacity and catch up in the data-driven digital economy, in the face of dominant global digital platforms from the United States and China. But the focus seems to be moving more recently towards the concept of strategic autonomy. The approach of China to digital sovereignty positions digital technologies and the Internet as a broader geopolitical asset. Therefore, it emphasizes nationally-driven plans that push global technology leadership, and protection of data as a core and strategic asset for the Government, with a strong focus on security. In the United States, sovereignty over data is mainly entrusted to the private sector”.

UNCTAD notes developing countries adopt varying approaches to data sovereignty. In Brazil and Indonesia, “discussions have stressed the building of capacity, as well as alluding to critical infrastructures that nations need to control within the idea of sovereignty. Developing country discussions have also more strongly embedded social and cultural ideas of digital sovereignty that were previously more common among social movements and open-source communities. These link to longer histories of dominance and post-colonial inequalities, with the desire for groups to collectively take control of their own assets and destinies. In the context of the data-driven economies, digital/data colonialism is understood to take a broader reach than the historical colonialism of countries over countries; colonialism in the digital context is related to the exploitation of human beings over data by companies or by Governments, and it can happen in all countries. The emergence of national sovereignty in all these cases, however, can sit uneasily with the global nature of the Internet and the difficulty in assigning territoriality to cross-border data flows. The approach of more strategically controlling key digital assets is also potentially only viable in large nations with centralized leadership that are willing to undertake highly interventionist regulations”.

UNCTAD notes “claims for digital sovereignty have been made at different geographic levels. At a subnational level, these typically focus on gaining access to privately collected data in spaces within the public interest. This might include local traffic, citizen or pollution data held by private firms that can support better spatial analysis, management and planning. Through negotiation or in specific moments, technology firms such as Uber, Siemens, Airbnb and Orange have shared data to support urban projects. In some developing country projects, sovereignty has also emerged through strategic joint projects between data providers and the public sector in building data infrastructure, and capturing and analysing data, as seen – for example, in smart city projects in India. There are also proposals that seek to support expanded sovereignty over data, such as open data, data trusts, data cooperatives and data stewardship. Such claims to sovereignty are often less strongly made and their practical implementation is still limited. In the examples mentioned above, cities have rarely sought to control data or prevent cross-border data flows. Rather, they simply demand the ability to access and use data for their own ends. In sum, there are different notions of sovereignty for claiming rights over data, and at different layers and geographical levels; the meaning of digital/data sovereignty (and therefore the associated sovereign rights) remains confusing. There are significant difficulties in reconciling the notion of national sovereignty traditionally associated with country territories with the borderless nature, globality and openness of the digital space in which data flow”.

UNCTAD thinks “it is not only national sovereignty that matters in the data-driven digital economy; individual (or community) data sovereignty also becomes key in view of the nature of data. This implies that individual data sovereignty of people or communities may need to be protected from both private companies and Governments, to guarantee that individuals (and communities) have control of their data, and to prevent abuse and misuse of data. Hence the need for data to be properly regulated in a broad international data governance framework. It is important that countries are able to claim their sovereignty rights over data generated domestically, in order to be able to take autonomous decisions based on those data, and benefit from them, as well as maintain their independence from global digital platforms and foreign Governments. However, this should not be reflected in self-sufficiency or isolationist strategies, which are not likely to pay, given the network character of the Internet and the high level of interdependence in the data-driven digital economy”.

As the importance of data in an economy and society grows, no government can afford to remain passive about managing data as a national resource. In my view, the shift in favour of data sovereignty, as with most other digital trends (such as currency), is inevitable. We will get a better sense of this as more nations[3] evolve more comprehensive cyber, digital and data regulatory frameworks. Regulation of data is not possible without asserting jurisdictional boundaries. The US has lagged on, and possibly resisted, digital regulation (and currency) because, as the dominant digital and financial power, it has the most to lose from data fragmentation. In this regard, digital regulation has disruptive and cannibalising effects, and can be regarded as a defensive move to protect local interests.

Data localisation – concept, measurement and regulation

UNCTAD argues “in the new context of the data-driven digital economy, concepts such as ownership and sovereignty are being challenged. Rather than trying to determine who owns the data, what matters is who has the right to access, control and use the data. There are significant difficulties in reconciling the notion of national sovereignty traditionally associated with country territories and the borderless nature, globality and openness of the digital space in which data flow. Digital sovereignty is often associated with the need to store data within national borders, but the link between the geographic storage of data and development is not evident…Well-established approaches applied to international trade across different territories (for example, rules of origin) cannot be easily applied to data, given their nature. The flows of raw data that are not linked to a specific exchange of a good or service are not included in the concept of digital trade…globally agreed definitions are lacking…varying definitions can lead to large differences in the volume of data flows that are categorized as personal data…cross-border data flows in themselves are neither e-commerce nor trade, and should not be regulated purely as such. Command of data leads to information advantages, adding to the sources of potential market failure in economies built on data, including economies of scale and scope, as well as network effects. The information asymmetry inherent in the data economy seems irreducible, as there are no market solutions to correct for it. Additional trade-offs linked to the ethics of data are similarly important, including the relationship between creating value from data and data surveillance of populations, and the links between data filtering and censorship. As a consequence, the governance of data and data flows is crucial. However, while setting appropriate rules on cross-border data flows at the right point can help to guarantee data rights, reduce structural challenges and support economic development, there is no consensus on the policy approach to take”.

UNCTAD further explains “in goods trade, the ordering and payment of goods or services may be done digitally. In the case of goods and services that become digitalized, these may not only be ordered, but also delivered online…Users may be able to use a foreign online service for free (such as search engines, social media, video streaming and web browsing), but during this process, data generated about them are extracted, processed and monetized – for example, through targeted advertising. Moreover, as products and services become integrated, enduring cross-border data flows may also be related to facilitating services on devices such as phones and sensors. Whether they are coupled with trade flows or not, cross-border data flows differ vastly in their character, speed, regularity and ability to track. Cross-border data flows are often much less clearly associated with commercial transactions, and in many cases they are not. A mobile device, for instance, may transmit or receive data flows about its user over a long period simply by being switched on. The speed and regularity of cross-border data flows also lead to a very different character compared with international trade. A single user interaction in an app might result in a cascade of different cross-border data flows, including captured user data, data being requested from cloud storage, and data flows related to advertising and other uses, sometimes between a set of intermediate services and organizations. As data flows are fluid and frequent, and location is hard to determine in a borderless network…trade in the same set of data can occur repeatedly in nanoseconds. Researchers and policymakers may find it hard to determine what is an import or export. They also struggle to ascertain when data are subject to domestic law…and what type of trans-border enforcement is appropriate…The technical characteristics of data flows – their frequency, their routing as packets across the Internet, and the role of intermediaries (such as platforms) involved in facilitating data flows – make it difficult to establish the origin and destination of data flows. Similarly, assessing the value of data and data flows is a daunting task, given that this value is mainly a potential option value, materializing only at use, and it is highly contextual. Moreover, data are most often the unpriced by-product of the production and consumption of goods and services, making it difficult to determine where value is created and captured. Therefore, well-established approaches applied to international trade (for example, rules of origin) across different territories would not lend themselves to work well in the case of data, given the nature of data and cross-border data flows. In view of the different characteristics of data in comparison to goods and services and the multidimensional nature of data, cross-border data flows require a different treatment from international trade in terms of their regulation. As opposed to trade, in many countries, certain types of data (such as non-personal or non-sensitive data, as discussed in the next section) can be sent through the Internet without registration, approval or permissions. Transmitting other types of data, including personal data, will link to legal accountability regimes. In this case, there will be no technical barriers to free flows, but organizations will be expected to follow rules and are legally accountable if issues arise. Within recent personal data regulations, for instance, organizations are often required to formally register with regulators”.
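
UNCTAD’s point about a single interaction triggering a cascade of flows can be illustrated with a toy example. The sketch below is purely hypothetical: every service, jurisdiction and purpose is invented, and a real application would involve far more hops.

```python
# Illustrative sketch only: how one user interaction in an app can fan out
# into several distinct cross-border data flows, as described above.
# Every jurisdiction and purpose here is hypothetical.

interaction = {"user_country": "BR", "event": "open_app"}

downstream_flows = [
    {"purpose": "captured user data",  "from": "BR", "to": "US"},  # analytics backend
    {"purpose": "cloud storage fetch", "from": "US", "to": "IE"},  # content pulled from an EU region
    {"purpose": "ad auction request",  "from": "BR", "to": "SG"},  # advertising intermediary
    {"purpose": "ad attribution ping", "from": "SG", "to": "US"},  # measurement provider
]

cross_border = [f for f in downstream_flows if f["from"] != f["to"]]
print(f"1 interaction -> {len(cross_border)} cross-border flows")
for f in cross_border:
    print(f"  {f['from']} -> {f['to']}: {f['purpose']}")
```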

UNCTAD points out different methodologies have been used to address measurement challenges. “McKinsey largely defines these flows as cross-border data and communication flows. Hence, they are measured using Internet bandwidth, Internet penetration and Internet call minutes. At the same time, these reports try to differentiate cross-border data flows from other flows, such as financial ones, even though banking is associated with large data flows…Castro and McQuinn illustrate how firms such as airplane manufacturers collect terabytes of data during international flights to support maintenance and repair services. Similarly, a manufacturer of trucks and buses has established a data-driven arm to analyse driving data to optimize fuel efficiency, reduce transport’s environmental impact, and use aggregated data to monitor the fleet and detect problems earlier…In addition to the challenge of measuring these flows quantitatively, there is also a legal question on what constitutes cross-border data flows, which can impact their measurement. For instance, a transfer of ownership of data from an entity in one country to one in another, without moving the data out of their data centre, could constitute a flow of data across borders without an actual flow of data having occurred and being measured”.

According to UNCTAD, “the location of data can be determined by a number of factors, which can be of a technical, economic, security, jurisdiction or privacy-related nature; it is also dependent on the availability and reliability of data-related infrastructure and energy to support it. Whether data flows are cross-border or not is often determined by the location of data storage. When interacting with a website or an application, the server where the content or application is hosted can be located anywhere in the world. Some of the online services own and operate their own data centres; others rent server space from other companies, such as Amazon Web Services, Microsoft Azure, Google or others. A server could also be located at an ISP, a small business or at home. In turn, the Internet server might store the data locally on its disk drives, or it might send the data to another server – usually, but not always, in the same location…increasing volumes of data are stored within a limited number of hyperscale data centres (linked to the concentration of key cloud servers, infrastructure and data warehousing), a large part of them in developed countries and China. Technically, data travel over fibre at the speed of light and for many applications, and data storage is not required to be in a specific location. There can be queries rapidly transmitted within applications or services. The business models of large technology firms tend to build on this location independence of storage. Core data infrastructure provides services globally or to a broad region, with a strong dominance of data centres in North America and Western Europe, which together account for almost two thirds of all co-location data centres”.

Ingo Borchert and L. Alan Winters point out “the growing and pervasive use and exchange of data, including across borders, has fuelled concerns about the use – and especially the misuse – of data, including in the context of power relations among firms and between firms and consumers, but in particular with respect to privacy and personal data protection. These concerns are compounded when data move beyond the reach of domestic regulatory bodies or is subject to differing regulations depending on where it is located and the type of information that it contains. Indeed, while data and digital activity are inherently borderless, regulations are not, and ensuring privacy and digital security, protecting intellectual property, enabling economic development and maintaining the reach and oversight of regulatory and audit bodies can become more complex when data cross jurisdictions. Furthermore, different data are subject to different data governance frameworks. Personal data is subject to privacy and personal data protection but data from the private sector is generally subject to intellectual property rights (IPR) and specific sector level regulations (as might be the case for banking or telecommunications data). Data related to the activities of governments and other public sector bodies are often subject to specific policies on access and disclosure. However, data types overlap, as is the case with publicly funded collection of personal data by private firms, raising issues that touch on different policy domains and data governance frameworks. In addition, different definitions exist for different types of data. What one country might consider as personal data might not be considered as such in another. All of this means that what data is subject to which data governance framework is a complex issue, with challenges compounded when data cross international borders where definitions, policy domains and data governance frameworks can differ”.

UNCTAD notes “regulations on cross-border data flows can be found in different kinds of laws and regulations…include data protection laws; cybersecurity laws, regulations and policies; Internet laws and regulations; regulations pertaining to both hardware and software; government procurement laws; laws related to protecting State secrets; income tax laws; corporate and accounting laws and regulations; policies related to e-commerce and digital development; and data strategies. Thus, as different areas of policymaking are involved, regulating in a silo approach may lead to inconsistent measures in different ministries. This would call for a whole-of-government approach in regard to the governance of cross-border data flows…Several countries use sectoral regulations of cross-border data flows…expressly prohibit cross-border data flows in the health sector to safeguard patient confidentiality…restrictions on the cross-border transfer of web mapping data…requires defence-related data to be stored in domestic cloud servers…require local data storage in sectors requiring stronger regulatory oversight, such as financial data, insurance data, electronic payments, telecommunications data and gambling data”.

In this regard, “at a general level, regulations on cross-border data flows suffer from some implementation challenges. First, as multiple government agencies are responsible for managing different dimensions of cross-border data flows (for example, trade, telecommunications, domestic industry and development, home affairs and Internet regulation), the possible overlap and lack of coordination between these agencies can lead to inconsistent and uncoordinated domestic regulations or policy positions on cross-border data flows…regulators rarely cooperate in practice…Second, many countries deliberately frame their regulations on cross-border data flows ambiguously, to allow for unfettered administrative discretion. For instance, terms such as critical data, important data, sensitive personal data, critical infrastructure, data sovereignty, digital/cybersovereignty – although used in many policy documents and regulations – can have different meanings and contexts…lead to uncertainty and adversely affect both consumer and business interests, not least through higher compliance costs for multinational as well as smaller companies engaging in international trade. Third, a related implementation challenge is the extent to which data protection laws apply to non-personal data. As most data sets used in business processing contain at least some personal data, many small companies, without sufficient resources to store these two types of data separately are forced to adopt the highest standard for the entire data set, leading to additional costs and reducing their overall competitiveness. Fourth, sector-specific regulations can entail practical implementation challenges. For example, several countries restrict the outflow of health data of individuals. But it is unclear if health data are limited to medical records, or if they include health-related information that can be tracked by IoT products such as smart watches, or by simply observing the browsing behaviour of individuals. Lastly, implementation and enforcement challenges at the institutional level are related to budgetary constraints and lack of political will…without the necessary human and institutional support”.

UNCTAD notes “regulations often specifically apply to personal data, and can be roughly categorized as incorporating: (a) an adequacy approach (or geographically-based approach), where data transfers are regulated on the basis of the data protection standards/laws in the recipient country – for instance, the Government may determine which foreign countries have adequate, sufficient or equivalent data protection frameworks, thereby expressly allowing data transfers to such countries or approving transfers on a case-by-case basis; (b) an accountability (or organizationally-based) approach, where data transfers are based on the data exporter remaining accountable to the domestic Government and, by extension, to the users, for compliance with data protection standards, irrespective of where the data are transferred, stored or processed. An accountability approach would require cross-border enforcement – i.e. where the data processor located abroad has acted in contravention of the requirements in the domestic law…In practice, a data protection framework could incorporate both an adequacy and accountability approach”. In this regard, “the term free flow of data typically refers to regulations that do not impose any specific restrictions on cross-border data flows, although the regulations may contain rules for ex post accountability for companies…in Canada, any company that transfers personal data abroad is responsible for ensuring compliance with domestic laws, but there are no express restrictions on such transfers. Instead, organizations are required to designate an individual who can be held accountable, to ensure compliance with domestic data protection laws. Consent of the data subject is not necessary specifically for transferring data abroad, although organizations should include information in their privacy policies regarding transfer to foreign countries…Many LDCs have not yet implemented a regulatory framework for data protection and, as such, have not imposed any regulations that affect cross-border data flows, i.e. data flow freely across borders by default as they remain unregulated”.

UNCTAD defines localisation requirements as either strict or partial. “Strict localization refers to a legal requirement to store and/or process data in the country, and may potentially include a complete prohibition on cross-border data transfers (even for purposes of processing)…China has imposed strict data localization requirements for personal information and important data collected by operators of critical infrastructure…The cybersecurity law in Vietnam…requires all foreign and domestic suppliers of telecommunications, as well as Internet services offered online to store data locally. In some other countries, localization requirements can be applied very broadly, subject to the regulator’s discretion…Some countries impose strict localization requirements for specific data categories, including health, defence, IoT, and mapping data and, more broadly, for critical government and public data. Other examples of strict localization requirements relate to business records, tax records and accounting records. The localization requirements related to business or accounting records are often legacy laws…some experts argue that these laws may be less suited to the current digital age, where most records are stored in the cloud”.

“Partial localization refers to a legal requirement to store data locally, but does not include a prohibition on transferring or storing copies of the data abroad, although specific compliance requirements may be imposed for cross-border data transfer and storage”. “A conditional transfer requirement means that data can be transferred abroad subject to the data processor complying with specified regulatory requirements. Depending on the design of these compliance requirements, conditional transfers may be categorized as hard, intermediate or soft. Compliance requirements for cross-border data transfer are extremely common in data protection laws. Hard conditional transfers entail a comprehensive compliance regime that includes country-specific approvals for transfers (e.g. an adequacy approach), regulatory approvals for transfers, approved contracts for transfers (e.g. standard contractual clauses (SCCs) and binding corporate rules (BCRs) provided under GDPR), and are subject to strict regulatory audit…Even when hard compliance requirements are in place, countries often allow cross-border transfers of personal data in limited circumstances, such as where necessity-based derogations exist in the domestic data protection law (e.g. necessity to perform a contract, to protect public interest, or to protect vital interests of the data subject), or where due consent is obtained from the data subjects. Some data protection laws also contain specific exemptions for cross-border data transfers for governmental or law enforcement purposes, medical research purposes, bank or stock transfers, or in accordance with an international treaty. Intermediate or soft conditional transfer requirements refer to easier compliance requirements, such as obtaining implicit consent of users or limited user notice requirements, or if data processors can conduct cross-border data flows subject to a self-assessment of the data protection framework of the recipient country with necessary contracts (i.e. if prescribed by law)”.
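
The taxonomy described in the preceding paragraphs (strict and partial localization, plus hard, intermediate and soft conditional transfer requirements) can be summarised as a simple decision rule. The sketch below is a simplification for illustration only: it assigns one label per regime, whereas real laws often combine categories, and none of the conditions corresponds to any particular country’s legislation.

```python
# Rough sketch of the taxonomy described above. Category names follow the
# text; the classification rules are simplified illustrations, not a
# reproduction of any actual law.
from enum import Enum, auto

class Regime(Enum):
    FREE_FLOW = auto()                  # no express restrictions, ex post accountability only
    CONDITIONAL_SOFT = auto()           # e.g. implicit consent or user notice requirements
    CONDITIONAL_INTERMEDIATE = auto()   # e.g. exporter self-assessment plus contracts
    CONDITIONAL_HARD = auto()           # e.g. adequacy lists, regulatory approvals, SCCs/BCRs, audits
    PARTIAL_LOCALIZATION = auto()       # local copy required, transfers of copies still allowed
    STRICT_LOCALIZATION = auto()        # storage/processing in-country, transfers prohibited

def classify(transfer_prohibited: bool,
             local_copy_required: bool,
             adequacy_or_approval_required: bool,
             self_assessment_with_contracts: bool,
             consent_or_notice_only: bool) -> Regime:
    """Assign the first matching label; real regimes often overlap."""
    if transfer_prohibited:
        return Regime.STRICT_LOCALIZATION
    if local_copy_required:
        return Regime.PARTIAL_LOCALIZATION
    if adequacy_or_approval_required:
        return Regime.CONDITIONAL_HARD
    if self_assessment_with_contracts:
        return Regime.CONDITIONAL_INTERMEDIATE
    if consent_or_notice_only:
        return Regime.CONDITIONAL_SOFT
    return Regime.FREE_FLOW

# Example: a hypothetical law requiring a local copy of payment data
# while permitting mirrored copies abroad.
print(classify(transfer_prohibited=False, local_copy_required=True,
               adequacy_or_approval_required=False,
               self_assessment_with_contracts=False,
               consent_or_notice_only=False))
```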

The debate on data localisation

The views supporting “free cross-border data flows” have prevailed among the tech community as data fragmentation is perceived to be an impediment. However, UNCTAD notes that “from a development perspective, there is little evidence that backs positions in support of either free cross-border data flows or strict data localization policies. Most studies favouring free flows seek to estimate the negative impact of data flow restrictions in terms of opportunity cost. However, such an approach may fail to incorporate equity and distributional issues related to who appropriates the gains. They may also fail to factor in the non-economic dimensions of data, such as privacy and security. At the same time, the case for strict data localization policies in support of domestic development is weak. It is not evident that keeping data inside national borders results in economic or social development. The lack of evidence in either direction is partly related to measurement problems, and partly to the fact that the data-driven digital economy and the exploding cross-border data flows are relatively recent phenomena”. The debate on data localisation and, by implication, data governance covers a wide range of issues that need to be considered separately.

  • Jurisdiction and security issues

UNCTAD points out “a common reason for storing data locally concerns questions of jurisdiction and security. In cases where data are stored outside a State’s borders, the argument is that accessing such data for legal reasons can be a challenge. Mutual legal assistance treaties exist to allow nations to access data outside a jurisdiction, but these are not in place between all countries, and such requests are reported to take between 6 weeks to 10 months, even when the United States is the requestor. There are high-profile examples where data access for security reasons was less than forthcoming…Cybersecurity implications might also be used to justify storing data locally. Cross-border flows and international storage have been linked to perceived risks, where nations fear cross-State surveillance and/or unwarranted mining of national data”.

In addition, “companies in countries with rigid data residency and access requirements will acquire less crucial information in times of emergency because they are not trusted by business partners and governments abroad. Even now private platforms generally resist giving information to government except when legally compelled. This is because of customer and competitive considerations, and also to avoid making it a habit of giving information to government and allowing it to strengthen oversight…any critical business will be handicapped by data residency requirements, as it will not be able to access cutting-edge cloud computing, machine learning, and other technologies developed and hosted abroad. Businesses restrained by data residency laws end up with higher costs, less efficient technologies, and a greater risk of having to be taken over in a crisis”.

UNCTAD points out “from a technological perspective, location of data storage/processing does not ensure data protection or security per se; rather, privacy/data protection is a function of the underlying technologies and standards used in the data-driven sectors. Cyberthreats are global in nature and may even originate domestically. Thus, storing data domestically does not necessarily reduce vulnerability to cyberattacks. Indeed, it may further prejudice the security of data when localization is mandated in countries with poor digital infrastructure. In contrast, strong privacy and cybersecurity standards can help to protect data from intrusion, irrespective of where such data are stored. Moreover, forced data storage in countries where Governments can demand backdoor access to such data facilitates government surveillance. On the other hand, personal data can be better protected with high encryption standards, irrespective of where companies store the data. Other concerns include the possibility of large-scale natural disasters wiping out data servers located in specific regions. Finally, localized data sets resulting from restrictions on data flows, as opposed to global data sets combining data from across countries, entail new policy risks; for instance, local data sets make it harder for companies to detect patterns in criminal activities such as money laundering, terrorism financing and fraud”. “Indeed, domestic storage of data across multiple countries poses risks of many small, poorly managed and costly data centres…In terms of security, firms tend to place data in diversified locations in order to minimize risks”. Hence, an “isolationist mentality around cybersecurity can undermine access to state-of-the-art international best in class solutions”.
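
The argument that protection follows the technology rather than the territory is easy to illustrate. The sketch below uses the widely available Python cryptography package to encrypt a record before it leaves the organisation; it is a toy example (key management, where the hard problems actually live, is omitted), but it shows why strongly encrypted data remains protected wherever the ciphertext is physically stored.

```python
# Sketch of the point that protection travels with the data, not the
# territory: data encrypted before it leaves the organization stays
# protected wherever the ciphertext is stored. Requires the third-party
# `cryptography` package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # kept domestically, e.g. in a local key vault
cipher = Fernet(key)

record = b'{"patient_id": 1024, "diagnosis": "..."}'
ciphertext = cipher.encrypt(record)  # this blob can be stored in any jurisdiction

# Only a holder of the key can recover the plaintext, irrespective of where
# the ciphertext physically resides.
assert cipher.decrypt(ciphertext) == record
```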

Lindsey R. Sheppard, Erol Yayboke and Carolina G. Ramos argue “there is a case to be made that the free flow of data to hostile or authoritarian regimes threatens the national security of their geopolitical adversaries…Further, there are legitimate reasons why law enforcement agencies, for example, would desire both access to data and to restrict the ability of malign actors to share data across international borders…For G20 member countries such as China, India, Indonesia, Russia, and Turkey, the lack of an agreed-upon definition of data localization-related national security concerns provides an opportunity to argue for stronger data localization mandates. Some of these justifications lack evidence; others strain credulity…control over data flows has enabled governments to assert control over citizens more than it has addressed legitimate cybersecurity and other traditional national security concerns…In other words, control over data flows is often not actually about national security; it is about control”. They point out “data localization territorializes data so that domestic governments can assert jurisdiction over it and, by extension, service providers…this increasing control has potentially negative effects on civil society, democracy, and human rights…Data localization can limit collaboration between military, law enforcement, intelligence, and other security actors by creating obstacles to accessing information across borders. It effectively provides a safe haven for actors who execute gray zone tactics, including information operations via social media and illicit financial activities, on platforms subject to localization requirements – limiting the ability of targeted countries to combat and investigate them and, if applicable, prosecute the perpetrators of related crimes…This would weaken current information-sharing channels and businesses’ reporting obligations, thereby impacting intelligence-gathering methods and criminal investigations…Americans abroad, including U.S. government officials, depend on secure telecommunications that become more complicated as data localization requirements harden… It can also further culturally isolate nations from one another, making diplomacy and peacebuilding efforts more difficult. Most specifically, if certain forms of data localization (such as hard or hybrid) are widely adopted, they could impede research into terrorist organizations’ funding structures, compromise informants, and weaken traditional U.S. intelligence-gathering networks”.

The case for free cross-border data flows, or against data localisation, reflects a uniquely historical (tied to the evolution of the internet) and US-centric perspective. As more countries opt to implement data localisation regulations, this will impede the ability of the US to exercise oversight over other countries and actors. UNCTAD suggests “a well-defined and properly functioning international framework for data governance, including for cross-border data flows, could allow for some common understanding and clarity over sovereign rights over data”.

  • Economic and efficiency issues

The economic debate on data localisation can be divided into micro and macro issues. Micro issues relate to how the organisation and location of data affect processing and application efficiency. UNCTAD points out “while data storage does not need to be location-specific, there are technical arguments for data and storage infrastructure becoming more globally spread. Having a more local source of data may benefit local firms in terms of cost. Moreover, lower latency, or time response to the request, works in favour of locating the data closer to their origin. Other technical risks, such as sporadic fibre cuts and lack of redundancy, are reduced with an increasing diversity of data centres. These arguments are less important to low bandwidth or non-real-time data, but become more of a challenge for a newer generation of real-time applications where users require data flows that are highly sensitive to delay or highly interactive (such as cloud applications or real-time monitoring in industry). In these cases, proximity becomes important in ensuring that large-scale data flows are viable”. Another issue relates to the direct increase in operating costs from having more data centres, and the significant opportunity costs from data localisation, “as a fragmented Internet will have adverse effects on emerging technologies, such as making them more biased if they rely on a limited and homogenous set of data for transforming data into insights”.
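
The latency argument can be made concrete with a back-of-envelope calculation. The sketch below considers only propagation delay over optical fibre (roughly 200,000 km/s, about two-thirds the speed of light in a vacuum) and ignores routing, queuing and processing; the distances are illustrative.

```python
# Back-of-envelope sketch: propagation delay alone for a request/response
# over optical fibre. Light in fibre travels at roughly 200,000 km/s
# (about 2/3 c); distances below are illustrative.
FIBRE_SPEED_KM_PER_MS = 200.0  # ~200,000 km/s expressed per millisecond

def round_trip_ms(distance_km: float) -> float:
    return 2 * distance_km / FIBRE_SPEED_KM_PER_MS

for label, km in [("same metro area", 50),
                  ("regional data centre", 1_000),
                  ("intercontinental", 10_000)]:
    print(f"{label:22s} {km:>6,} km  ~{round_trip_ms(km):5.1f} ms round trip")
# Roughly 0.5 ms, 10 ms and 100 ms respectively: only the nearest option
# suits highly interactive or real-time applications.
```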

At the macroeconomic level, the costs and benefits of data localisation for growth, employment and innovation are debated. UNCTAD notes conventional views that “local production plays a key role in supporting skills, the emergence of domestic firms and development more broadly”. Data localisation is thus justified on the grounds that it “potentially support local data capacities and infrastructure, and drive the digital economy”. However, “the limitation of these arguments is that, as opposed to localizing the production of goods or services, even if data centres are domestically located, activities associated with data may still be done remotely. Therefore, the direct local benefits of domestic data centres will lead to the creation of a relatively small number of direct jobs. These will mainly be in the initial construction of buildings with a limited number of network engineers, technicians and security required on the ground”.

Nonetheless, UNCTAD notes the “spillovers from data centre investments can be more significant, highlighting how other types of data-related capital and capacity emerge with the presence of data centres. Such arguments are less well researched in developing countries, but evidence in developed countries suggests that data centres can complement other investments in data infrastructure, and have important spillover effects in the economy – for example, by supporting joint public–private upgrading of energy and transport infrastructure. Therefore, while the direct economic gains from localizing data centres are limited, in some instances the presence of data centres might be an important part of a broader package of planned investments that build data capacity and capital in a country”.

UNCTAD makes the important distinction that “the strategy of requiring data to be stored domestically may only pay off in large countries that can achieve the necessary critical mass and scale to be able to create value from the data. In addition, keeping the data inside borders can lead to economic development only when the capacities to transform the data into digital intelligence and monetize them exist in the country, as will be discussed below. Data use skills are more important, and can be developed locally, even if the data centre is located elsewhere; the connectivity infrastructure is also more relevant than the data centres themselves. For smaller countries, little value can be generated from data when they are not allowed to flow across borders, given that the value of data emerges from aggregation of data. Thus, it is more important to focus on the location of the value created from data (and its capture), from the processing of data into data products, which does not necessarily match the place where data are generated. It is in the location of the use of data where real economic value is added; thus, it is the flow of data value that matters more than the flow of data themselves. In this sense, the physical location of the data storage may not be such an important factor for development. However, this may also depend on the needs for processing data, since the strongest capacity for data processing is found in the hyperscale international data centres, which are rarely located in developing countries, except for China. It may be argued that, as long as access to the data is ensured, there should not be any relation between the location of the data storage and economic development since, with guaranteed access, domestic actors can use the data for economic purposes. This would be the case for a firm that stores its data in a data centre outside a country (leading to a cross-border data flow), which, as long as it can use the data for its purposes, will be able to benefit from the data”.

Lothar Determann thinks “whilst in the immediate term this may appear to be advantageous for indigenous companies, in the long run, such protectionism tends to harm the protected companies by shielding them from much-needed global competition. Also, foreign countries will eventually reciprocate and foreign business may shy away from entering markets where data residency laws apply to avoid additional costs and taxation. Consequently, indigenous business may find it difficult to scale and succeed internationally. They will ultimately become a local liability. Mandating the use of local data centers or locally-made technology seems less helpful if local facilities end up not being globally competitive and slow down local progress…Data residence laws could force multinationals to invest in local infrastructure and data centers. But, the opposite, negative effect is more likely: Many multinationals may prefer to operate without local government access to data and the related risks of corruption and compliance deficits associated with establishing local presences”. In addition, “most personal data that companies collect is not crucial for national security purposes and not accessed by governments out of respect for individual privacy and freedoms. Therefore, it is not necessary or proportionate to mandate that companies must store all personal data locally. Moreover, for purposes of securing government access to data, it would be sufficient to require companies to guarantee remote access to data (wherever it is stored) or keep local back-up copies, which companies could create on a daily or weekly basis at much reduced cost compared to duplicating primary systems locally”.
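
Determann’s suggestion of local back-up copies as a lighter-touch alternative to full localisation can be sketched as follows. The example is hypothetical: the store names and copy mechanism are invented, and a production system would rely on database replication or object-storage synchronisation rather than an explicit loop.

```python
# Minimal sketch of the back-up alternative to full localization: keep the
# primary system wherever it is most efficient, and periodically write a
# copy into in-country storage. Store names and mechanism are hypothetical.
import datetime

def nightly_local_backup(primary_store: dict, local_store: dict) -> None:
    """Copy all records from the (possibly foreign-hosted) primary store
    into a store located in the home jurisdiction."""
    snapshot_time = datetime.datetime.now(datetime.timezone.utc).isoformat()
    for record_id, record in primary_store.items():
        local_store[record_id] = {**record, "backed_up_at": snapshot_time}

primary = {"cust-1": {"name": "A. Silva"}, "cust-2": {"name": "B. Tan"}}
local_copy: dict = {}
nightly_local_backup(primary, local_copy)
print(len(local_copy), "records mirrored in-country")
```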

Overall, the micro and macro effects of data localisation work in different directions because the benefits and costs fall unevenly on different stakeholders. Governments have the most to gain, as data localisation consolidates their grip on national data, though the spill-over benefits should not be overlooked. In contrast, data localisation leaves MNCs and global tech firms with difficult choices because of losses in efficiency and scale. Lindsey R. Sheppard, Erol Yayboke and Carolina G. Ramos note “some multinational corporations have chosen to leave certain markets rather than comply with restrictive data localization mandates, while others have chosen to remain and adapt”. In my view, data regulation and rising regulatory costs are inevitable trends; while inconvenient, most businesses will adjust and find solutions. Governments, for their part, need to manage the impact of data localisation on foreign participation and mitigate potential adverse effects by adopting facilitative approaches and ensuring their policies are aligned.

  • Distributional effects

UNCTAD highlights that “digital markets are often based on winner-take-all dynamics” and as a result “a data-related divide has compounded the digital divide. In this new configuration, developing countries may find themselves in subordinate positions, with data and their associated value capture being concentrated in a few global digital corporations and other multinational enterprises that control the data. They risk becoming mere providers of raw data to global digital platforms, while having to pay for the digital intelligence obtained from their data”. Hence, “a global digital platform extracts the data from the users in a particular country, using them for its private benefit, without any compensation or possibility for domestic firms to productively use those data. Indeed, foreign entities are likely to have a first-mover advantage in data analysis and processing that may be challenging to bridge by latecomer developing countries, even with access to their data”. The “distributional effects of the gains from free data flows…are likely to accrue especially in sectors and to people that are already privileged in terms of international market access or skills. This could exacerbate existing inequality within and across countries”.

“Further, as digital investments tend to be asset-light, many companies based in developed countries do not make extensive investments in local infrastructure, even when they derive significant revenues from providing services in the domestic market. As an example, Africa and Latin America taken together account for only 4 per cent of the world’s co-location data centres. Further, with the exception of some Chinese platforms, no other technology company from developing countries has been able to establish a global market presence”. UNCTAD adds “large tech firms’ infrastructure, for example, has neglected certain regions, such as Africa, which suffers from a lack of data infrastructure, including key application servers, data centres and content delivery networks. Even if the state of affairs has improved in recent years, it can have an impact, for example, by downgrading the performance of specific cloud applications or increasing overall costs for data providers”.

UNCTAD notes “several countries believe that targeted industrial policies in the digital economy are essential for catch-up and to avoid an unhealthy dependence on American and Chinese technology companies… [some countries], such as India, are now focusing on the development of domestic data capabilities as a means to capture more of the revenue flowing to foreign digital companies, and thereby boosting the growth of their domestic digital sectors. In such countries, preventing the transfer of massive volumes of data on residents to foreign companies through strict data localization laws and policies is seen as a potential route to encourage the growth of domestic data facilities and massive data sets. This growth in data capabilities may in turn facilitate the development of domestic digital products and services for growing domestic consumer demand, thereby powering the growth of home-bred digital companies. However…data localization cannot per se facilitate development of successful digital platforms in developing countries”.

UNCTAD cautions “if free flow of data is imposed on countries through trade agreements, developing countries may be taken advantage of. These agreements may then limit the scope for national policymaking and country-specific approaches to development. Moreover, for developing countries to benefit from the digital economy, they need to find means to localize the economic value of data, which could require temporary protectionist measures or an improved framework for data ownership and remuneration. In the absence of better domestic rules, including on taxation of international technology firms, gaps in income and privacy issues are likely to grow, entrenching dependencies…As long as developing countries are not able to drive their own development in the digital sphere, limited capabilities and financial means create a new dependency. This so-called digital colonialism involves actions by major technology firms to shape the policy debate in their favour through lobbying, investments in infrastructure, and donations of hardware and software to developing countries”. UNCTAD suggests “a proper international framework regulating cross-border data flows should ensure access, and guarantee that the income gains from data are equitably shared when access is restricted. This should be complemented by improvements in the capacity to process the data in developing countries”.

Conclusion

UNCTAD notes “the ability of a country to make its own decisions in shaping policies on data and data flows – their data sovereignty – is gaining in importance, although the definition and motivation of data sovereignty can vary widely across countries”. The trend towards data sovereignty is a consequence of the digitalisation of society: governments cannot afford to forgo their data rights as ever larger parts of the economy and society become digitalised. Tech decoupling and geopolitical conflicts make data localisation requirements all but inevitable. In the information society, control over data determines a nation’s fate. Realistically, however, only large nations have the ability to protect their data, while smaller nations have little choice but to follow.

References

Antonio Douglas, Hannah Feldshuh (April 2022) “How American companies are approaching China’s data, privacy, and cybersecurity regimes”. US-China Business Council (USCBC). https://www.uschina.org/reports/how-american-companies-are-approaching-china%E2%80%99s-data-privacy-and-cybersecurity-regimes

Carolyn Bigg (29 July 2020) “Why global data flow is under threat, and why Asia is in a strong position to benefit”. SCMP. https://www.scmp.com/tech/policy/article/3095025/why-global-data-flow-under-threat-and-why-asia-strong-position-benefit

Daniel Araya, Maithili Mavinkurve (24 January 2022) “Emerging technologies, game changers and the impact on national security”. Centre for International Governance Innovation (CIGI). https://www.cigionline.org/static/documents/NSS_Report9_SgEhTat.pdf

Hung Tran (November 2021) “Competing data governance models threaten the free flow of information and hamper world trade”. Atlantic Council. https://www.atlanticcouncil.org/wp-content/uploads/2021/11/Competing-Data-Governance-Models-Threaten-the-Free-Flow-of-Information-and-Hamper-World-Trade.pdf

Ingo Borchert, L. Alan Winters (eds) (2021) “Addressing impediments to digital trade”. Centre for Economic Policy Research (CEPR) Press.

Jacques Crémer, David Dinielli, Amelia Fletcher, Paul Heidhues, Monika Schnitzer, Fiona Scott Morton (11 February 2022) “The Digital Markets Act: An economic perspective on the final negotiations”. Voxeu. https://voxeu.org/article/digital-markets-act-economic-perspective-final-negotiations

Lindsey R. Sheppard, Erol Yayboke, Carolina G. Ramos (23 July 2021) “The real national security concerns over data localization”. Center for Strategic and International Studies (CSIS). https://www.csis.org/analysis/real-national-security-concerns-over-data-localization

Lothar Determann (9 June 2020) “How data residency laws can harm privacy, commerce and innovation – and do little for national security”. World Economic Forum. https://www.weforum.org/agenda/2020/06/where-data-is-stored-could-impact-privacy-commerce-and-even-national-security-here-s-why/

Nicholas Davis, Landry Signé, Mark Esposito (January 2022) “Interoperable, agile, and balanced: Rethinking technology policy and governance for the 21st century”. Brookings. https://www.brookings.edu/wp-content/uploads/2022/01/Rethinking-tech-policy_final.pdf

Phuah Eng Chye (12 March 2022) “Global reset – Technology decoupling (Part 1: Challenges, checkpoints, chokepoints and IOT)”. http://economicsofinformationsociety.com/global-reset-technology-decoupling-part-1-challenges-checkpoints-chokepoints-and-iot/

Phuah Eng Chye (26 March 2022) “Global reset – Technology decoupling (Part 2: Decoupling race and scenarios)”. http://economicsofinformationsociety.com/global-reset-technology-decoupling-part-2-decoupling-race-and-scenarios/

Phuah Eng Chye (9 April 2022) “Global reset – Technology decoupling (Part 3: The standard setting battleground)”. http://economicsofinformationsociety.com/global-reset-technology-decoupling-part-3-the-standard-setting-battleground/

Phuah Eng Chye (23 April 2022) “Global reset – Technology decoupling (Part 4: The geopolitics of data)”. http://economicsofinformationsociety.com/global-reset-technology-decoupling-part-4-the-geopolitics-of-data/

Phuah Eng Chye (7 May 2022) “Global reset – Technology decoupling (Part 5: Digital trade agreements and digital governance)”. http://economicsofinformationsociety.com/global-reset-technology-decoupling-part-5-digital-trade-agreements-and-digital-governance/

United Nations Conference on Trade and Development (UNCTAD) (2021) “Digital economy report 2021 – Cross-border data flows and development: For whom the data flow”. https://unctad.org/system/files/official-document/der2021_en.pdf


[1] Data Protection Commissioner v Facebook Ireland Limited arises from a complaint filed by Maximillian Schrems that Facebook was not adequately protecting EU personal data when transferring and storing it in the US, in part because the US does not have data protection laws similar to those in the EU (i.e. the GDPR), and also because of the reach of US surveillance and national security laws over the data once in the US.

[2] SCCs are standard sets of contractual terms to which both the provider and recipient of the personal data sign up, to ensure that recipients of GDPR-protected personal data comply with GDPR requirements even if the local data privacy laws do not provide the same safeguards.

[3] See Jacques Crémer, David Dinielli, Amelia Fletcher, Paul Heidhues, Monika Schnitzer and Fiona Scott Morton on the EU’s Digital Markets Act.