The costs of privacy regulation

Phuah Eng Chye (15 February 2020)

Within the context of the information society, privacy regulation is a concept more suited to a low-information environment. When the supply and uses of information are limited, privacy regulation is cost-effective because the amount of data is small and centrally stored. There is less information to protect, and it is relatively easy to identify and safeguard against infringements. Meanwhile, the opportunity costs of not having information are mostly insignificant due to the limited scale and uses of information.

In the information society, the supply and uses of information are abundant. Privacy regulation is costly because large amounts of data are collected, exchanged and stored in multiple silos. Privacy infringements are frequent and occur across a wide network. Due to the high volumes and multiple copies, the costs of protecting privacy are prohibitive[1]. On the other hand, the opportunity costs of not having information are significant. In particular, the substantial value generated from verification, profiling, aggregation and analysis of individuals will be foregone.

Hence, it is becoming increasingly costly to keep individual data private in an environment where information is the basic raw material. As rising levels of transparency crowd out the space for individual privacy, the challenge is to develop a regulatory framework that can balance the trade-offs between privacy rights and their potential costs in terms of convenience, efficiency, growth and innovation.

Alessandro Acquisti, Curtis Taylor and Liad Wagman note there are “two schools of thought about the interplay of privacy concerns, regulation, and technological innovation…The first school of thought holds that regulatory protection inhibits technology diffusion by imposing costs upon the exchange of information. In addition to these trade-offs…complexities concerning state-specific regulation and information exchange…The second school of thought, instead, argues that explicit privacy protection promotes the use of information technology by reassuring potential adopters that their data will be safe”[2].

In this context, they note Chicago School scholars G.J. Stigler and R.A. Posner advanced theories that “the protection of privacy creates inefficiencies in the marketplace, since it conceals potentially relevant information from other economic agents…Removing an individual’s personal information from the marketplace through privacy regulation ultimately transfers the cost of that person’s possibly negative traits onto other market participants…regulatory interventions blocking the flow of personal information would be redistributive and inefficient: economic resources and productive factors would end up being used inefficiently, or rewarded unfairly, because information about their quality had been removed from the marketplace”. The counter-argument is that “the Chicago School’s privacy models may fail to capture the complexity inherent in privacy decision-making by individuals and organizations”, particularly in assessing private and public welfare effects[3].

Alessandro Acquisti, Curtis Taylor and Liad Wagman also note hypotheses suggesting that “consumers may suffer privacy costs when too little personal information about them is being shared with third parties, rather than too much”[4] while “distrust of newcomers is an inherent social cost of cheap pseudonyms – privacy of identity can be a barrier to trust building”[5]. Other costs include “opportunity costs when useful data is not disclosed” and the “costs associated with the act of protecting data”.

Alessandro Acquisti, Curtis Taylor and Liad Wagman argue that privacy laws need to be tailored to take into account and balance specific and continually evolving trade-offs and that, “rather than looking at privacy regulation in a binary, monotonic fashion, the effect of regulation on technology efforts can be heterogeneous – depending on the specific requirements included in the legislation. Consider, again, genetic data: genomic analyses may not only reveal information about an individual’s current health, but also about future health risks, and this potential to reveal information is likely to expand…At the same time, as personal genetic and genomic information becomes increasingly available, consumers face new privacy risks – for instance, if such information reaches the hands of advertising platforms and data aggregators, the latter may use it to construct risk profiles for individuals and their biological relatives such as children and parents, combine it with other data, and improve their targeting of product offerings”. Privacy requirements therefore should be balanced against “the observation that wider access to genetic and genomic analyses can lead to broader improvements in overall healthcare”, while the reporting of infectious disease “is critical for the effective initiation of public-health intervention measures”[6].

There is also a need to consider that the trade-offs are complex and often carry unintended consequences. Andrea O’Sullivan and Adam Thierer point out that “it is quickly becoming one of the iron laws of technology policy that by attempting to address one problem (like privacy, security, safety, or competition), policymakers often open up a different problem on another front”.

For example, the right of access provision in Europe’s General Data Protection Regulation (GDPR) “– which mandates that companies give users their personal data – can be exploited by malicious actors to steal personally identifiable information”. Encrypted communications provide cover to criminals and terrorists, but if government agencies are provided backdoors, privacy is infringed. In addition, “if a government can get into an encryption standard, so might a malicious hacker”. In relation to online child safety, users have been required to verify their age and identity to prevent predators from posing as children, but this puts the security of the children’s own data at risk.

Alec Stapp adds that if your account gets hacked, the hacker can use the right of access to get all of your data. The right to be forgotten is in conflict with the public’s right to know a bad actor’s history. The right to data portability creates another attack vector for hackers to exploit. And the right to opt-out of data collection creates a free-rider problem where users who opt-in subsidize the privacy of those who opt-out.

Nonetheless, the GDPR is currently regarded as the gold standard in attempts to develop a comprehensive framework for digital privacy and data protection. Jeanette Herrle and Jesse Hirsh highlight that GDPR has been successful in educating citizens on the need to be “informed about the use of their data. And it has been pretty successful at shining a spotlight on shady practices…a useful tool for policing and curbing the worst excesses and exploitation (such as dark patterns, data mining and so on)”. On a positive note, “there has been a huge increase in people exercising their rights, with 144,000 individual complaints (concerning access requests, unwanted marketing, employee privacy and deletion requests). The GDPR also seems to have brought to the fore a new awareness of the many potential flaws or shortcomings regarding data protection…has given the entire notion of privacy as a human right a currency it did not possess before”. Moreover, “while lack of consumer awareness of rights probably contributes to corporate non-compliance (why change when no one is reporting you?), a greater number of fines and actions could dramatically reduce such practices”.

However, Jeanette Herrle and Jesse Hirsh point out that “after one year in effect, considerable blind spots are coming to the fore”. While breach notification[7] has “been a resounding success”, “potential loopholes persist” that reduce the effectiveness of controls such as informed consent and restrictions on automated decision-making. They suggest “the GDPR has potential to help change the data collection ecosystem as a whole – whether or not it has done so yet is up for debate…On a positive note, some organizations are now openly discussing the changes needed to reduce the data they require or to be less intrusive”. In addition, there are hints that further statutory actions are pending and, “across the European Union, a ramping up of staffing in data protection agencies is under way”.

The initial experience with GDPR indicates the costs of privacy regulation are significant. According to the European Data Protection Board, within a year there had been a total of 281,088 cases, comprising 144,376 complaints, 89,271 data breach notifications and 47,441 others[8]. Jeanette Herrle and Jesse Hirsh note “although fines were imposed on 91 different companies in GDPR’s first year of implementation, most were relatively minor; a single fine accounted for 89 percent of the total €56 million in fines issued. And even this €50 million fine levied against Google is far from the maximum allowable fine of €3.7 billion (which would be four percent of Google’s entire global revenue)…Recently, the UK Information Commissioner’s Office (ICO) has fined British Airways £183.39 million for a major data breach resulting from poor security, roughly four times the amount…against Google…seems to indicate a willingness to push companies through accountability measures to embrace more than just the letter of the law. As the GDPR moves into its second year, the role of fines in changing corporate behaviour will undoubtedly come back into the spotlight”.
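The scale of these fines can be checked with some back-of-the-envelope arithmetic. The following is a minimal sketch using only the figures quoted above; the GBP/EUR exchange rate is an assumption added for illustration, not a figure from the sources:

```python
# Back-of-the-envelope check on the fine figures quoted above.
google_fine_eur = 50e6     # CNIL fine against Google (EUR)
max_fine_eur = 3.7e9       # stated maximum: 4% of Google's global revenue (EUR)
ba_fine_gbp = 183.39e6     # ICO fine against British Airways (GBP)
gbp_to_eur = 1.11          # assumed mid-2019 exchange rate (illustrative only)

# Global revenue implied by the quoted 4% cap.
implied_revenue_eur = max_fine_eur / 0.04
print(f"Implied global revenue: EUR {implied_revenue_eur / 1e9:.1f} billion")  # ~92.5

# How far below the cap the actual fine sits.
print(f"Google fine as share of maximum: {google_fine_eur / max_fine_eur:.1%}")  # ~1.4%

# The BA fine is 'roughly four times' the Google fine once converted.
print(f"BA fine vs Google fine: {ba_fine_gbp * gbp_to_eur / google_fine_eur:.1f}x")  # ~4.1x
```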

Jeanette Herrle and Jesse Hirsh also note “a widespread perception that the GDPR has not changed corporate practices but instead added a layer of bureaucracy that is especially onerous to smaller enterprises. Because the regulation leans toward a self-policing, self-reporting model, companies have been focusing on adding personnel in order to achieve compliance rather than actually changing what they do and why. For example, Facebook has not changed their business model; rather, they have hired more lawyers to defend their model and adopted language that makes it easier to obtain consent, disregarding the fact that most Facebook users have little choice but to consent if they want to communicate with friends and family…Combine this with poor board-level awareness and superficial efforts at compliance, and it is no wonder that the GDPR instigated new bureaucracy and not a culture change for corporate practices”.

Jeanette Herrle and Jesse Hirsh note there are “an estimated 500,000 organizations that have registered data protection officers (DPOs) across Europe…However, 52 percent of the organizations that had done so by the end of 2018 said they had only done it for compliance with the law (although 48 percent felt it served a valuable purpose within the company)”. Alec Stapp observed that “compliance costs are astronomical”, with estimates that aggregate GDPR compliance costs for large US firms “could reach $150 billion” and that “75,000 DPOs would need to be hired for compliance”.

Privacy regulation has also had a chilling effect on corporate activities and innovation. Alec Stapp notes that due to GDPR, “small and medium-sized businesses have left the EU market in droves (or shut down entirely)”, M&A deals have fallen apart “because of concerns about a target company’s data protection policies and compliance with GDPR”, while researchers think GDPR “will make it harder to share information across borders or outside their original research context”.

Andrea O’Sullivan and Adam Thierer suggest “titans like Google and Facebook have dominated the European ad tech market since the advent of the GDPR because they can shoulder compliance risks in a way that smaller vendors cannot…What is clear is that the data privacy laws enacted so far have had predictable negative impacts on security and competition, and that ill-defined privacy fundamentalism too often drives ill-fitting policies”.

Jian Jia, Ginger Jin and Liad Wagman find GDPR had an “immediate, pronounced, and negative” impact on technology venture investment and, thus, potentially on innovation and job creation, in the several months following its rollout. “At our aggregate unit of observation, EU venture funding decreased by $3.38 million at the mean of $23.18 million raised per week per state per crude technology category. This reduction takes place in both the intensive margin (the average dollar amount raised per round of funding, which decreased 39%) and the extensive margin (the number of deals, which incurred a 17% average drop)”. “GDPR’s effect is particularly pronounced for young (0–3 year old) EU ventures, where an average reduction of 19% in the number of deals is observed…Our back-of-the-envelope calculation suggests that the investment reduction for young ventures could translate into a yearly loss between 3,604 to 29,819 jobs in the EU, corresponding to 4.09% to 11.20% of jobs created by 0–3 year old ventures in our sample”. In particular, they point out GDPR’s rollout had a considerable impact on small technology ventures. To comply with GDPR, large firms changed their platform operating guidelines – revising app permissions, privacy terms and consent requirements – and this adversely affected the small businesses operating on their platforms.
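The headline percentages can be reproduced from the quoted figures. The following is a minimal sketch using only the numbers quoted above; the two margin estimates come from separate specifications in the study, so they are reported alongside the aggregate figure rather than derived from it:

```python
# Reproducing the quoted aggregate effect from Jia, Jin and Wagman.
mean_weekly_funding = 23.18e6  # mean raised per week per state per category (USD)
estimated_reduction = 3.38e6   # estimated post-GDPR reduction at that mean (USD)

print(f"Aggregate drop in weekly funding: "
      f"{estimated_reduction / mean_weekly_funding:.1%}")  # ~14.6%

# Margin estimates quoted above (separate specifications, not a decomposition
# of the aggregate figure).
intensive_margin_drop = 0.39   # fall in average dollars raised per round
extensive_margin_drop = 0.17   # fall in the number of deals
print(f"Intensive margin: -{intensive_margin_drop:.0%}; "
      f"extensive margin: -{extensive_margin_drop:.0%}")
```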

Jeanette Herrle and Jesse Hirsh add that “for all its virtues, the GDPR does little to question existing models. The emerging unintended side effects are the wholly foreseeable consequences of treating data as a commodity rather than as a collective good; the GDPR could certainly boost the power of big tech or reinforce the concerning data use practices that inspired the GDPR to begin with. One year in, it seems as if the GDPR has failed to mitigate the de facto monopoly technology giants have on the collection and use of data. And frankly, if that is what needs to happen, more than the GDPR is needed”.

Overall, society is approaching a critical juncture on regulating personal data. Jeanette Herrle and Jesse Hirsh argue “GDPR made a greater impact on national and international governance than it did on citizen data or industry practice. Countries around the world are now debating or passing new privacy legislation, as well as entertaining greater regulatory action against growing global technology giants. The GDPR has been regarded as a new standard that many countries are aspiring to align with. While this does not mean that the GDPR is the ultimate regulatory goal, it has presented a target or milestone that other countries are now moving toward”.

The application of GDPR across borders is problematic because it extends “its protection of EU citizens’ data outward, but enforcement is typically jurisdiction-bound”. In this regard, national regulations that vary from the GDPR will fragment data governance and impede enforceability. Jeanette Herrle and Jesse Hirsh point out that “cross-border processing of cases by EU supervisory authorities is also on the rise, and the continuing evolution of mechanisms for cooperation, such as procedures for mutual assistance, joint operations and the ‘one-stop shop’ (which designates a lead for cross-border cases based on where the company is headquartered), will be critical to the success of GDPR implementation”.

While privacy regulation may appeal to national sensitivities, the risk is that GDPR will trigger a new arms race for control of what is emerging as the world’s most valuable resource: data. As countries evolve new regulatory frameworks and courts attempt to assert national property rights over local data, the consequential effect will be to damage global data flows, raise the costs of using data significantly and accelerate the process of deglobalisation.

References

Alessandro Acquisti, Curtis Taylor, Liad Wagman (8 March 2016) “The economics of privacy”. Journal of Economic Literature, Vol. 54, No. 2; Sloan Foundation Economics Research Paper. https://ssrn.com/abstract=2580411

Alec Stapp (24 May 2019) “GDPR after one year: Costs and unintended consequences”. Truth on the Market. https://truthonthemarket.com/2019/05/24/gdpr-after-one-year-costs-and-unintended-consequences/

Andrea O’Sullivan, Adam Thierer (25 September 2019) “Tech policy, unintended consequences & the failure of good intentions”. Mercatus. https://www.mercatus.org/bridge/commentary/tech-policy-unintended-consequences-failure-good-intentions

Jeanette Herrle, Jesse Hirsh (9 July 2019) “The peril and potential of the GDPR”. Centre for International Governance Innovation. https://www.cigionline.org/articles/peril-and-potential-gdpr

Jian Jia, Ginger Jin, Liad Wagman (7 January 2019) “The short-run effects of GDPR on technology venture investment”. https://voxeu.org/article/short-run-effects-gdpr-technology-venture-investment

Phuah Eng Chye (3 August 2019) “Information and development: Globalisation in transition”. http://economicsofinformationsociety.com/information-and-development-globalisation-in-transition/

Phuah Eng Chye (17 August 2019) “Information and development: Globalisation interrupted and deglobalisation risks”.

Phuah Eng Chye (21 December 2019) “The debate on regulating surveillance”.

Phuah Eng Chye (4 January 2020) “The economics and regulation of privacy”.

Phuah Eng Chye (18 January 2020) “Big data and the future for privacy”.


[1] Privacy breaches can occur over a broad area – e.g. data, content, communications, networks, storage, platforms and even algorithms.

[2] Cited from Miller and Tucker. See Alessandro Acquisti, Curtis Taylor and Liad Wagman. 

[3] “If a physician were not bound by confidentiality, a patient may not feel comfortable sharing all the relevant detail of her condition. On the other hand, when charitable contributions are public, amounts donated may increase, because contributing raises the reputation of the donor”. See Alessandro Acquisti, Curtis Taylor and Liad Wagman.

[4] Cited from Hal Varian. See Alessandro Acquisti, Curtis Taylor and Liad Wagman. 

[5] Cited from Friedman and Resnick. See Alessandro Acquisti, Curtis Taylor and Liad Wagman. 

[6] Cited from Miller and Tucker. See Alessandro Acquisti, Curtis Taylor and Liad Wagman. 

[7] “When a breach happens, the supervisory authority needs to be notified within 72 hours, with the ultimate goal being the notification of affected users so they can take action to protect themselves”. Jeanette Herrle and Jesse Hirsh.

[8] Alec Stapp.