The journey from privacy to transparency (and back again)
Phuah Eng Chye (29 February 2020)
Society’s progress is marked by milestones in its ability to use information to organise activities. In this context, improvements in knowledge and technological capabilities have expanded the generation and use of information to monetise activities and the exchange of products and services. Implicit in these activities is the condition that information is transparent and accessible. Economic development is thus characterised by rising use of information and rising levels of transparency.
Nowhere is the journey towards the pre-eminence of transparency more notable than in the finance industry. Historically, privacy was sacrosanct in finance, but its status has been eroded in recent decades. Regulators now require banks and other financial intermediaries to “know your customer” – to seek client disclosures and to establish internal processes to prevent money laundering, fraud (including tax fraud) and mis-selling. Securities regulation is built on the principle that “sunshine is the best disinfectant”. Disclosure standards and rules are used to prosecute abuses as well as to promote good governance and corporate responsibility. Central banks have moved from their cult of secrecy towards using disclosure, forward guidance and communication as key tools of policy conduct. There is growing intolerance of privacy even across national borders. Tax havens and financial centres[1] are under pressure to comply with international requirements to disclose tax and anti-money laundering information to the relevant authorities or risk being placed on a blacklist.
It is difficult to avoid meeting the demands for information from governments and the private sector. In addition, personal data is being generated from everyday activities – communications, surveillance – and by third parties (ratings, likes), sometimes with an individual’s knowledge and permission and sometimes without. It is difficult to keep personal data private when there is already so much information sloshing about.
Transparency has also permeated social norms. Leslie K. John notes that “heightening the desire to disclose appears to be central to many social media sites, right down to the perpetual “What’s on your mind?” prompt on Facebook…Users build their social graph by adding contacts; those contacts’ transactions are displayed prominently in a newsfeed. That makes financial transactions feel like social transactions, turning something that people would ordinarily keep private into something that they not only are comfortable sharing but potentially want to share”.
She explains that individuals have a natural “desire for disclosure”. “Humans have what appears to be an innate desire, or even need, to share with others. After all, that’s how we forge relationships – and we’re inherently social creatures. In one study, even people who were very concerned about their privacy went on to readily divulge personal information to a chat bot. Unloading your secrets has psychological and physical benefits. When strangers are paired up in lab experiments and prompted to disclose personal information with each other, they build greater rapport. Keeping a journal in which you share your worries can improve physical health, while keeping secrets can reduce well-being”.
The social norm is evolving into an expectation that personal information will be disclosed. Leslie K. John argues “our orientation toward disclosure is also apparent in how we perceive those who abstain: We view people who withhold with contempt…we dislike and distrust those who avoid answering personal questions even more than those who reveal damaging information about themselves”. Individuals or firms that do not identify themselves or do not possess a social media history or rating may find themselves ostracised from transactions and activities (a job, a ride or even a date).
But there appear to be limits to public forbearance of transparency. The increasing encroachment on private space is leading to public disillusionment. Leonard Kleinrock notes that in the early days, we were “constantly surprised by unanticipated applications…email, the World Wide Web, peer-to-peer file sharing, user generated content, Napster, YouTube, Instagram, social networking”. In this regard, “the internet was designed to promote decentralized information, democracy and consensus based upon shared values and factual information”. “We enjoyed a wonderful culture of openness, collaboration, sharing, trust and ethics…adherence to netiquette persisted for the first two decades of the Internet”. He suggests the decline began “in the early 1990s when spam first appeared at the same time there was an intensifying drive to monetize the Internet as it reached deeply into the world of the consumer. This enabled many aspects of the dark side to emerge (fraud, invasion of privacy, fake news, denial of service, etc.). It also changed the nature of internet technical progress and innovations as risk aversion began to stifle the earlier culture of moon shots”.
Leonard Kleinrock argues “today, almost no one would say that the internet was unequivocally wonderful, open, collaborative, trustworthy or ethical. How did a medium created for sharing data and information turn into such a mixed blessing of questionable information? How did we go from collaboration to competition, from consensus to dissention, from a reliable digital resource to an amplifier of questionable information?” Nonetheless, he concludes that the internet “is fast on its way to becoming invisible… When I walk into a room, the room should know I’m there and it should provide to me the services and applications that match my profile, privileges and preferences…We are rapidly moving into such a future as the Internet of Things pervades our environmental infrastructure with logic, memory, processors, cameras, microphones, speakers, displays, holograms, sensors”.
Siva Vaidhyanathan suggests early visions of internet freedom “began at the apex of naivete about the potential for the internet to enhance democracy and improve the quality of life on Earth. By the end of 2019, very few people could still hold such a position with honesty”. In this regard, the role of the internet in the early people-power revolutions was vastly overrated. “What Facebook, Twitter, and YouTube offered to urban, elite protesters was important, but not decisive, to the revolutions in Tunisia and Egypt. They mostly let the rest of the world know what was going on…Facebook and Twitter leveraged all this good publicity to give themselves more central roles in politics and policy”.
However, “the rosy optimism of 2011 soon ebbed” as “the dark side of the digital revolution became too glaring to ignore…Two political events would be the fulcra for this pivot. The first was the 2013 revelation by former intelligence contractor Edward Snowden that governments had tapped into the formerly secure channels of major data companies to track and profile citizens without their knowledge. We realized, all at once, that what might once have seemed like a harmless system of private surveillance – the tracking of our preferences, expressions, and desires for the sake of convenience and personalization – had been handed over to unaccountable state actors. Snowden’s whistle-blowing put the dangers of massive data surveillance into public conversation…The next…the breadth of voter data lifted off of Facebook by a little-known, London-based consulting firm. Cambridge Analytica…In the meantime, news media reported on Facebook’s role in amplifying calls to genocide in Myanmar, as well as sectarian violence in India and Sri Lanka. Other services were also named as culpable in spreading destructive, hate-filled content. Reports outlined how YouTube’s recommendation engine drives videogame fans toward misogynistic and racist videos; and explained that Twitter has been populated with trolls and bots that amplify propaganda aimed at fracturing liberal democracies around the world”.
Siva Vaidhyanathan concludes that “in the end, the myth of 2010 was transformed into another myth: Where once we thought online platforms would help depose dictators all around the world, we came to think that the same technologies are predisposed to do the opposite – to empower bigots and prop up authoritarian regimes. Neither of these notions is entirely wrong. But they don’t lead us to a clear agenda for confronting excesses and concentrations of power. Technologies determine nothing. Technologies influence everything”.
Hence, an internet that once thrived on anonymity and freedom is increasingly subject to calls for regulation of all types to curb the excesses and dominance of technology firms. Recognition technologies, IoT and AI, once welcomed, are now viewed with suspicion as tools that will be used to control social and political behaviours.
In tandem with this, public policy has recently favoured the use of privacy regulation to regain the private space earlier ceded to transparency. Europe led the way with the General Data Protection Regulation (GDPR) to uphold the primacy of individual privacy rights. “In a society where individuals will generate ever increasing amounts of data, the way in which the data are collected and used must place the interests of the individual first, in accordance with European values, fundamental rights and rules. Citizens will trust and embrace data-driven innovations only if they are confident that any personal data sharing in the EU will be subject to full compliance with the EU’s strict data protection rules” [2].
But this leads to a policy conundrum: namely, how the EU is to move forward to unlock the value of data in its economy. This is evident from the recently announced strategies to shape Europe’s digital future[3]. Simon Chandler interprets the new strategies as influenced by geopolitical considerations, with the objective of positioning European tech companies as significant players on the global data stage.
The EU report[4] states “currently, a small number of Big Tech firms hold a large part of the world’s data” and that “the EU should create an attractive policy environment so that, by 2030, the EU’s share of the data economy – data stored, processed and put to valuable use in Europe – at least corresponds to its economic weight.” Simon Chandler argues the EU strategies are intended to create “a single European data space”; one that, by ensuring “data can flow within the EU,” will favour EU data-based companies at the expense of (American) outsiders. “The Commission will provide more guidance to stakeholders on the compliance of data sharing and pooling arrangements with EU competition law”, which could theoretically allow a number of big EU companies to come together and agree to share data only between themselves, at the expense of their competitors. It was also announced that “the Commission will examine the relationship between public support to undertakings (e.g. for digital transformation) and the minimisation of competition distortions through data-sharing requirements for beneficiaries”, which potentially permits EU member states to provide greater financial support to EU companies.
Nonetheless, achieving data competitiveness requires enabling data sharing, which is at odds with GDPR’s stringent privacy and ethical requirements. Simon Chandler notes that “if the EU seriously wants to challenge the American tech industry’s dominance of data, it may in the end have to relax some of its more scrupulous regulations. This is worrying for EU citizens, who may suffer at the expense of the EC’s desire to outcompete Google, Amazon and Microsoft”.
In my view, the new data strategies seem aimed at addressing the obvious question of why the EU, despite possessing the talent, wealth and infrastructure, has not been able to groom a home-grown champion to compete with the US and Chinese tech giants. There are several plausible explanations for why Europe has so far been unable to match the success of Silicon Valley. Europe tends to protect traditional business organisations and to resist information disruption. Though a union, Europe is fragmented by jurisdiction and language and therefore unable to achieve scale as easily. Its most promising companies are snapped up by American and Chinese firms. GDPR’s chilling effect on innovation has not been helpful either.
My hypothesis is that China’s success at building homegrown champions to rival the US tech giants was an eye-opener of sorts. Europe also recognised the risks of being sandwiched between the two geopolitical giants without homegrown champions of its own. The EU strategies seem to bear some resemblance to China’s approach. The EU seems to be building a firewall to corral data and ownership within domestic silos. It has identified focus areas for data-sharing to create space for its own players. It has elevated the role of state intervention and subsidies to support growth in strategic areas while tightening controls on foreign takeovers.
The frequent regulatory toing-and-froing between privacy and transparency reflects two points. First, focal points and context shift as one moves along the surveillance-privacy-data axis, and this blurs the objectives of regulating information. Second, it illustrates that privacy and transparency are the two prongs of a balancing act in information regulation.
Nonetheless, society should not stray too far off the path of transparency. An extensive privacy regime will shrink the quantity and quality of information, and the information society cannot function well with too many blind spots. In addition, it is unlikely the ideals of equality, freedom and fairness will be achieved through privacy. Privacy implies an unequal distribution of information and, therefore, of power. Historically, equality, freedom and fairness have increased with greater transparency. At the other end of the scale, despotic regimes are often lacking in transparency. It is thus more likely the ideals of equality, freedom and fairness will be achieved through transparency-based policies. In this context, those protesting against the use of information are more likely to be regarded as information Luddites attempting to halt the advance to an information society than as freedom fighters against the tyranny of a surveillance dystopia.
References
European Commission (19 February 2020) “Communication: A European strategy for data”. https://ec.europa.eu/info/sites/info/files/communication-european-strategy-data-19feb2020_en.pdf
European Commission (19 February 2020) “Communication: Shaping Europe’s digital future”. https://ec.europa.eu/info/sites/info/files/communication-shaping-europes-digital-future-feb2020_en_3.pdf
Leonard Kleinrock (17 March 2019) “Fifty years of the internet: What we learned, and where will we go next”. TechCrunch. https://techcrunch.com/2019/03/18/fifty-years-of-the-internet/
Leslie K. John (September 2018) “Uninformed consent”. Harvard Business Review. https://hbr.org/cover-story/2018/09/uninformed-consent
Phuah Eng Chye (23 November 2019) “Information and organisation: China’s surveillance state growth model (Part 2: The clash of models)”.
Phuah Eng Chye (21 December 2019) “The debate on regulating surveillance”.
Phuah Eng Chye (4 January 2020) “The economics and regulation of privacy”.
Phuah Eng Chye (18 January 2020) “Big data and the future for privacy”.
Phuah Eng Chye (15 February 2020) “The costs of privacy regulation”.
Pierce O’Reilly, Kevin Parra Ramirez, Michael A. Stemmer (31 January 2020) “Exchange of information and bank deposits in international financial centres: An example of multilateral cooperation at work”. VoxEU. https://voxeu.org/article/exchange-information-and-bank-deposits-international-financial-centres
Simon Chandler (19 February 2020) “EU plans European rival to Google with new data and AI proposals”. Forbes. https://www.forbes.com/sites/simonchandler/2020/02/19/eu-plans-european-rival-to-google-with-new-data-and-ai-proposals/#42acece1487c
Siva Vaidhyanathan (27 December 2019) “The two myths of the internet”. Wired. https://www.wired.com/story/the-two-myths-of-the-internet/
[1] See Pierce O’Reilly, Kevin Parra Ramirez and Michael A. Stemmer.
[2] European Commission “Communication: A European strategy for data”.
[3] European Commission “Communication: Shaping Europe’s digital future”.
[4] Quotations are based on the EU “Communication: A European strategy for data” cited by Simon Chandler in his article.