The debate on regulating surveillance

Phuah Eng Chye (21 December 2019)

These warnings are commonplace, but they are rarely very specific. Other than the vague threat of an Orwellian dystopia, as a society we don’t really know why surveillance is bad and why we should be wary of it. To the extent that the answer has something to do with “privacy”, we lack an understanding of what “privacy” means in this context and why it matters. We’ve been able to live with this state of affairs largely because the threat of constant surveillance has been relegated to the realms of science fiction and failed totalitarian states. But these warnings are no longer science fiction. – Neil M. Richards (2013) “The dangers of surveillance”.

The question for the future is how society is to respond to pervasive surveillance. Gus Hosein explains “social and technological changes have increased the power and pervasiveness of surveillance. First, nearly everything we do today is a communicative act that is digitally observable, recordable, and most likely logged, and analysed…Second…nearly every communication can now be collected, analysed, retained and monetised…Third, every communication generates increasingly sensitive metadata – data related to the communications – that is captured, logged, rendered accessible, and mined to draw lists of suspects and targets, and to understand our relationships and interactions. Fourth, nearly every communication today involves a third party – the post office, the mobile phone company, the search engine, and the undersea cable company, who are likely to be tasked with surveillance on behalf of the state. Fifth, all of this surveillance can now be done in secret – the tampered envelope is now replaced with perfect, secretive replications of communications, captured at a number of points in a network”.

Elijah Sparrow notes that the features of a digital environment – i.e. perfect digital copies, many points of capture, data immortality, automation, high confidentiality and low anonymity – give rise to “dragnet surveillance” where “the capture and processing of personal information by powerful actors is not just routine but ubiquitous”. “Increasingly, surveillance does not seem an activity undertaken for simple influence, management, protection or direction, but instead seems to be much more, constituting the core security strategy of many nation-states and the core business model for the largest internet firms, credit card companies, and advertisers”.

Yasha Levine explains this evolution is not accidental. “The Internet was hardwired to be a surveillance tool from the start…an attempt to build computer systems that could collect and share intelligence, watch the world in real time, and study and analyze people and political movements with the ultimate goal of predicting and preventing social upheaval…it always had a dual-use nature rooted in intelligence gathering and war”.

However, as Elijah Sparrow points out, “digital surveillance is still in its infancy. Governments collect more data than they know how to effectively process, facial recognition is still not accurate, and tracking databases are full of false information. For some, this is a comfort: no matter how much the surveillance net expands, it will be full of holes (and also false positives, with sometimes tragic personal results for those falsely convicted)”. “The struggle for the future of digital communication – who can control the flow of bits and who can assign identity to those bits – is being actively fought on the terrains of politics, law and technology. While all these terrains are important, new advances in the technology of encryption, usability and open protocols have the potential to offer powerful protection to the common user in the near future”.

Control over personal data is shaping up as a core policy issue in modern societies. Gus Hosein argues “we must not presume that this is only about communications privacy. As nearly everything involves communication in modern society, communications surveillance can itself generate previously unseen power for the watchers over the watched: individuals, groups and even societies”. “We have barely scratched the surface on any of these questions, and within all of this we find ourselves racing to the future where the boundaries of privacy will be further tested, innocuous information increasingly revelatory, and the power to surveil increasing in its power and scope”.

There is confusion on how technological surveillance should be perceived. Neil M. Richards argues that “as a society, we are thus of two minds about surveillance. On the one hand, it is creepy, Orwellian, and corrosive of civil liberties. On the other hand, it keeps us and our children safe. It makes our lives more convenient and gives us the benefit of a putatively free Internet. Moreover, some influential thinkers argue that data surveillance does not affect privacy at all…We like its benefits, though we are fearful (and sometimes dismissive) of its costs. This confusion points to a larger problem: civil liberties advocates lack a compelling account of when and why (if at all) surveillance is harmful. As a society, we have an intuitive understanding that public- and private-sector surveillance is potentially bad, but we do not have an articulate explanation of why it is bad. Some of our intuitions stem from literature, such as George Orwell’s chilling portrait of Big Brother in Nineteen Eighty-Four. But few critics of government surveillance such as the NSA wiretapping program and the British data-retention regulations would suggest that these programs are directly analogous to the evil regime depicted in Orwell’s dystopia. Moreover, the Orwell metaphor seems wholly inapplicable to databases used to personalize targeted advertising on the web, the efforts of insurance companies to promote safe driving, and the practices of online booksellers to sell more books by monitoring consumers’ shopping habits in ways that used to be impossible. We need an account of when and why surveillance is problematic to help us see when we should regulate and when we should not”.

Privacy advocates nonetheless are steadfast in their view that privacy is a human right critical to the functioning of democracy and that surveillance inflicts irreparable damage to a liberal society. Neil M. Richards warns “surveillance is harmful because it can chill the exercise of our civil liberties, especially our intellectual privacy. It also gives the watcher power over the watched, creating the risk of a variety of other harms, such as discrimination, coercion, and the threat of selective enforcement, where critics of the government can be prosecuted or blackmailed for wrongdoing unrelated to the purpose of the surveillance”. Examples include the misuse of information to prosecute political, religious or racial minorities, the exploitation of vulnerabilities arising from data exposure and the reduction of choice due to discrimination and manipulation.

Privacy advocates however concede it is an uphill challenge to restrict surveillance in the era of big data. This is because “modern privacy problems emerge not just from disclosing deep secrets, but from making obscure information more accessible (increased accessibility) or from consistent observation or eavesdropping (surveillance)”. Daniel J. Solove points out “information dissemination is one of the broadest groupings of privacy harms. These harms consist of the revelation of personal data or the threat of spreading information”. Freer access to personal information could thus contravene “laws protecting against asking questions about a person’s political views or associations”, undermine protections covering rape victimisation, disabilities, diseases and convictions, and dilute the privilege attached to communications between attorneys and clients, priests and penitents, and doctors and patients.

Daniel J. Solove points out “identification can inhibit one’s ability to be anonymous or pseudonymous. Anonymity and pseudonymity protect people from bias based on their identities and enable people to vote, speak, and associate more freely by protecting them from the danger of reprisal”. Similarly, aggregation of different pieces of data “can reveal new facts about a person that she did not expect would be known about her when the original, isolated data was collected…aggregation’s power and scope are different in the Information Age”. In this regard, the ability to aggregate and identify “increases the government’s power over individuals. Identification has been a critical tool for governments seeking to round up radicals or disfavored citizens. It is also an efficient tool for controlling people”.

Overall, there are structural trends inhibiting the ability of regulation to curb surveillance. First, the supply of and demand for surveillance are likely to continue to expand. On the supply side, recent innovations such as the IoT, big data and AI will capture, connect and learn from massive amounts of data. On the demand side, the appetite for surveillance information is insatiable. Governments, firms and individuals want information on activities and people. The information society is risk averse and prefers sanitised environments. Surveillance is necessary to create valuable safe zones where identities, behaviours and risks are known and controllable. This means regulation can only, at best, aim to build safeguards; it does not appear able to rein in technological surveillance.

Second, increased surveillance of public and private space[1] will not only crowd out non-surveilled space but will also change societal attitudes. Jevan Hutson suggests the surveillance challenge is “not only physical, it is cultural. Consumer surveillance technologies entrench surveillance as an essential duty of citizenship. Beyond offloading the costs and pressures of physical infrastructure from the state to consumers – creating new avenues for surveillance and data collection with less restrictions – these technologies inculcate surveillance as a social and communal obligation and engender public support and acceptance of ever more pervasive and invasive surveillance…By reorienting the surveillance relationship between individual and the state to the individual versus the individual…fragments accountability and deprives the individual of the ability to challenge or escape data collection. It becomes harder to challenge a larger, consolidated surveillance apparatus because it is built consensually by private parties. Our neighbors have the right to watch and protect their private property, despite the objections of others. A diffused network of cameras reduces freedom of choice in how individuals protect their privacy because they are up against an architecture of fragmented private parties, rather than just the state…crowdsourced security footage…spells the evisceration of reasonable expectations of privacy in public life and the significant chilling of constitutionally protected activities by allowing government-technology partnerships to create a detailed picture of the movements – and therefore the lives – of a massive number of community members doing nothing more than going about their daily business. This enables law enforcement agencies to undertake widespread, systematic surveillance on a level that was never possible before”. In this regard, Scott Peppet foresees the “unraveling of privacy, as economic incentives lead consumers to agree to surveillance devices…Peppet argues that this unraveling of privacy creates a novel challenge to privacy law, which has long focused on unconsented surveillance rather than on surveillance as part of an economic transaction”[2].

Third, increasing overlap between the state and private surveillance will reduce the ability of regulation to shield privacy rights. Neil M. Richards notes “one of the most significant changes that the age of surveillance has brought about is the increasing difficulty of separating surveillance by governments from that by commercial entities. Public- and private sector surveillance are intertwined – they use the same technologies and techniques, they operate through a variety of public/private partnerships, and their digital fruits can easily cross the public/private divide”. Hence, “we must recognize that surveillance transcends the public/private divide. Public and private surveillance are simply related parts of the same problem, rather than wholly discrete. Even if we are ultimately more concerned with government surveillance, any solution must grapple with the complex relationships between government and corporate watchers”.

He explains “it might seem curious to think of information gathering by private entities as surveillance. Notions of surveillance have traditionally been concerned with the watchful gaze of government actors like police and prison officials rather than companies and individuals. But in a postmodern age of liquid surveillance[3], the two phenomena are deeply intertwined. Government and nongovernment surveillance support each other in a complex manner that is often impossible to disentangle. At the outset, the technologies of surveillance – software, RFID chips, GPS trackers, cameras, and other cheap sensors – are being used almost interchangeably by government and nongovernment watchers. Private industry is also marketing new surveillance technologies to the state…Nor do the fruits of surveillance respect the public/private divide…governments have been eager to acquire the massive consumer and internet-activity databases that private businesses have compiled for security and other purposes, either by subpoena or outright purchase. Information can also flow in the other direction; the U.S. government recently admitted that it was giving information to insurance companies that it had collected from automated license-plate readers at border crossings…governments also have an interest in making privately collected data amenable to public-sector surveillance…the United States…requires telecommunications providers to build their networks in ways that make government surveillance and interception of electronic communications possible…retain details of all internet access, email, and internet telephony by users for twelve months, so that they can be made available to government investigators for cases of antiterrorism, intellectual property, child protection, or for other purposes. This surveillant symbiosis between companies and governments means that no analysis of surveillance can be strictly limited to just the government or the market in isolation. Surveillance must instead be understood in its aggregated and complex social context”.

Companies[4] professing idealistic notions against surveillance also find it difficult to avoid direct or indirect involvement with state agencies. In addition, specialist firms such as Palantir[5] operate to specifically support government intelligence and enforcement activities. Andrew Ferguson[6] notes private entities are becoming “instrumental in the day-to-day functioning of police departments around the country…there’s been a shift toward the privatization of public safety…You are relying on a new technology to help you do the ordinary business of policing, which means you lose some control over how it gets done…You have to rely on technical experts that may not be in house to really run your police department, which is a problem because of public accountability and different incentives.” He adds there is an “economic and financial race to become the platform for policing, recognizing that if you become the platform in the data, you win, because everything goes through you.”

The convergence of politics, national security and commercial interests makes it increasingly difficult to delineate between government and private sector conduct of surveillance and, more critically, to assess the legitimacy of such activities. Even if surveillance initially serves legitimate public or commercial objectives, questionable uses are likely to emerge at a later point in time.

Neil M. Richards argues “our existing models for understanding surveillance – such as Big Brother and the Panopticon – are the most out of date. Even if we are primarily worried about state surveillance, perhaps because we fear the state’s powers of criminal enforcement, our solutions to the problem of surveillance can no longer be confined to regulation of government actors. Any solutions to the problem of surveillance must thus take into account private surveillance as well as public…Additional legal protections will be needed to cope with developments in surveillance practices. Because the government can sidestep many legal restrictions on the collection of data by buying it from private databases, we should place additional restrictions on this growing form of state surveillance”.

Smart cities reflect the twin dilemmas of overlapping space and roles in surveillance. Nancy Scola notes “a truly smart city stands to radically increase the amount of data collected on its citizens and visitors, and it puts into sharp relief the responsibility a local government – and the contractors it would inevitably hire to manage some of that digital infrastructure – would have to both hold and probe that data. That dynamic quickly turns the future of the smart city from a technological question to a fundamentally civic one. Heaps of data are already piling up in cities around the world, with very little agreement on the best way to handle all that information”.

The worries range “from basic concerns about citizen privacy to big questions about balancing democracy with corporate control”. Nancy Scola notes “some privacy advocates worry about the idea of collective privacy, or the idea that data can be used to know things about communities they would rather not have everyone know, like tracking residents’ movements to create a profile of the overall rhythms of a neighborhood, or even analyzing sewage for signs of concentrated drug use”.

The largest concern is over the “power imbalance”; “local governments eager to get their hands on tech’s benefits quickly – but which often lack the time, money and expertise to properly assess what, exactly, they stand to gain from it”. Nancy Scola argues that “officials might soon find that those companies end up owning not just slices of real estate but also, as they take on more local responsibility, huge chunks of information about how cities themselves function” and that “when it comes to future negotiations, it’s frightening that Google will have that data and cities won’t.” In this regard, “who owns all the data produced by the city of the future? Who controls it? Whose laws apply? There is concern that handing over too much control to a private company will set the wrong precedent. By definition, the autonomy of a smart city means taking some hands-on day-to-day decision-making away from elected officials and civil servants. And when the complex algorithms and data-collection decisions driving those city operations are in the hands of one company, that can raise worries that too much power over our civic lives is being handed over to private interests”.


For its smart city project, Sidewalk Labs committed to not using personal information for advertising purposes or selling it to third parties, without explicit consent. One core proposal is to use “de-identification techniques…When data is de-identified correctly – using principles including k-anonymity, and frameworks such as differential privacy – it is no longer personal information. While de-identification of data may not completely eliminate the risk of the re-identification of a data set, when proper guidelines and techniques are followed, the process can produce data sets for which the risk of re-identification is very small”. In tandem with this, “organizations should make properly de-identified or non-personal data that they have collected publicly accessible to third parties by default, formatted according to open standards. This approach would help to ensure that individual privacy is preserved while also enabling data and source code to be accessible by others to catalyze innovation. Organizations should be prepared to detail their methods for making such data publicly accessible, and to justify any plans to restrict data access”.
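Sidewalk Labs does not publish the mechanics behind these commitments, but the two concepts it names can be illustrated in a few lines. The Python sketch below is only a minimal illustration: it checks whether a small set of invented records satisfies k-anonymity over chosen quasi-identifiers, and answers a counting query with Laplace noise in the spirit of differential privacy. The field names, parameter values and data are assumptions for illustration, not Sidewalk Labs’ actual pipeline.

```python
from collections import Counter
import numpy as np

# Hypothetical de-identified records: quasi-identifiers only, no names or addresses.
records = [
    {"age_band": "30-39", "postcode_prefix": "M5A", "mode": "bike"},
    {"age_band": "30-39", "postcode_prefix": "M5A", "mode": "bike"},
    {"age_band": "30-39", "postcode_prefix": "M5A", "mode": "car"},
    {"age_band": "40-49", "postcode_prefix": "M5V", "mode": "transit"},
]

def is_k_anonymous(rows, quasi_identifiers, k):
    """True if every combination of quasi-identifier values occurs at least k times."""
    groups = Counter(tuple(r[q] for q in quasi_identifiers) for r in rows)
    return all(count >= k for count in groups.values())

def noisy_count(rows, predicate, epsilon=1.0):
    """Count rows matching a predicate, adding Laplace noise with scale 1/epsilon
    (the sensitivity of a counting query is 1)."""
    true_count = sum(1 for r in rows if predicate(r))
    return true_count + np.random.laplace(loc=0.0, scale=1.0 / epsilon)

print(is_k_anonymous(records, ["age_band", "postcode_prefix"], k=2))  # False: the M5V group has only 1 row
print(noisy_count(records, lambda r: r["mode"] == "bike", epsilon=0.5))  # 2 plus random noise
```

The point of the sketch is simply that both techniques trade accuracy for protection: groups that are too small must be suppressed or coarsened, and noisy counts are deliberately imprecise.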

Sidewalk Labs however argued transaction data[7] should be excluded from the review process as “the data collector is already accountable under applicable privacy laws either to obtain consent…or, if it is a public-sector entity, to ensure they have the proper legislated authority. Second, this type of data arguably is not uniquely connected to public spaces, nor is it generally considered a public asset requiring additional protections within the public interest”. This “reflects the belief that incorporating transaction data into a governance model for the Sidewalk Toronto project would be unworkable given the lack of a relationship between this kind of data collection and a specific geography”[8].

The Sidewalk Labs data governance proposals raise several questions. The first is whether de-identification techniques can effectively address privacy concerns. Charlotte Jee notes that “anonymization”[9] – stripping out obviously identifiable data such as names, phone numbers and email addresses, altering data sets “to be less precise”, removing “columns in spreadsheets” and introducing “noise” into databases – is intended to “reassure us that this means there’s no risk we could be tracked down in the database”. However, researchers have demonstrated otherwise by creating “a machine-learning model that estimates exactly how easy individuals are to reidentify from an anonymized data set…On average, in the US, using those three records [zip code, gender and date of birth], you could be correctly located in an ‘anonymized’ database 81% of the time. Given 15 demographic attributes of someone living in Massachusetts, there’s a 99.98% chance you could find that person in any anonymized database”. “As the information piles up, the chances it isn’t you decrease very quickly…The fact that the data set is incomplete does not protect people’s privacy”.
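The re-identification risk these studies describe comes from linkage: a few attributes that are individually harmless can, in combination, single out one row in a supposedly anonymous data set. A toy sketch of such a linkage attack, using entirely invented records and attribute names:

```python
# Toy linkage attack: match an "anonymized" row using a few known attributes.
anonymized = [
    {"zip": "02139", "birth_year": 1985, "gender": "F", "diagnosis": "asthma"},
    {"zip": "02139", "birth_year": 1990, "gender": "M", "diagnosis": "diabetes"},
    {"zip": "02140", "birth_year": 1985, "gender": "F", "diagnosis": "flu"},
]

# Attributes an attacker already knows about the target (e.g. from a public record).
known = {"zip": "02139", "birth_year": 1985, "gender": "F"}

matches = [row for row in anonymized
           if all(row[attr] == value for attr, value in known.items())]

if len(matches) == 1:
    print("Re-identified:", matches[0])  # the target's sensitive attribute is exposed
else:
    print(f"{len(matches)} candidates - the attack narrows but does not pin down the target")
```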

The second relates to the repackaging and reselling of data. Ava Kofman points out Sidewalk Labs has developed a “program, known as Replica.” “Typical urban planners rely on processes like surveys and trip counters that are often time-consuming, labor-intensive, and outdated. Replica, instead, uses real-time mobile location data…provides a full set of baseline travel measures…including the total number of people on a highway or local street network, what mode they’re using (car, transit, bike, or foot), and their trip purpose (commuting to work, going shopping, heading to school)…the program gathers and de-identifies the location of cellphone users, which it obtains from unspecified third-party vendors. It then models this anonymized data in simulations – creating a synthetic population that faithfully replicates a city’s real-world patterns but that obscures the real-world travel habits of individual people”.
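Replica’s methodology has not been published in detail; the sketch below only illustrates the general idea of replacing individual trip records with a synthetic population drawn from aggregate shares. The data, field names and the simple resampling step are assumptions for illustration, not Sidewalk Labs’ actual model.

```python
import random
from collections import Counter

random.seed(0)

# Hypothetical de-identified trips (no user identifiers).
trips = [
    {"mode": "transit", "purpose": "work"},
    {"mode": "car", "purpose": "shopping"},
    {"mode": "transit", "purpose": "work"},
    {"mode": "foot", "purpose": "school"},
]

# Step 1: reduce individual records to aggregate shares by mode and purpose.
totals = Counter((t["mode"], t["purpose"]) for t in trips)
shares = {key: count / len(trips) for key, count in totals.items()}

# Step 2: draw a synthetic population that reproduces the aggregate pattern
# without copying any real person's trip history.
synthetic = random.choices(list(shares.keys()), weights=list(shares.values()), k=1000)

print(Counter(synthetic))  # roughly matches the original mode/purpose shares
```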

Ava Kofman notes concerns that sensitive data such as location is being “harvested by third parties” and that this data could be sold “to stalkers and bounty hunters”. In addition, “because the exact sources of data have not been revealed, it is unclear whether Replica draws from the ranks of unregulated apps that profit from indefinite privacy policies to continuously collect users’ precise whereabouts”. “At stake with Replica is the value that can be produced by aggregating data about our movements and then selling it back to governments…Some see the project as an example of the way the proprietary tools and techniques developed by Sidewalk Labs at Quayside might be exported – or imported – to other cities, without creating any additional economic benefits for the residents who have produced this data”.

The third relates to who should conduct oversight and how. The risk of setting up an Urban Data Trust is that it becomes an ineffective work-around. In my view, the government has the ultimate accountability and should take on the task. Creating additional agencies tends to diffuse accountabilities and fragment regulation and enforcement.

Competitive concerns also need to be addressed. Sidewalk Labs highlights “data collected in the public realm or in publicly owned spaces should not solely benefit the private or public sector; instead, it should benefit multiple stakeholders…Part of using data responsibly involves making sure that no one entity – Sidewalk Labs or another – controls urban data that could reasonably be considered a public asset. The opportunities to use urban data to create new digital innovations must be available to everyone, from the local startup to the global corporation”. In relation to this, Aria Bendix notes “reports that Sidewalk Labs had asked potential consultants to either hand over intellectual property or issue an exclusive, royalty-free license. That could force competition to dwindle, since not all firms would be willing to agree to these terms”.

The expansion of surveillance has triggered “a widespread techlash, which could have profound implications for firms that use consumers’ data”. Nonetheless, firms can “find loopholes to wriggle out of while complying with the letter of the law”. Leslie K. John explains “the behind-the-scenes plumbing of the surveillance economy is so byzantine and opaque that it’s effectively impossible for consumers to be adequately informed. There is also no way to know what all third parties are doing, or will do, with your data. Although Facebook has been tightening oversight of apps as well as their access to user data, the fact is that many apps have been selling user information obtained on Facebook, and consumers can’t possibly have known where their data would end up when they agreed to the apps’ terms and conditions…effectively impossible for you to figure out how your data moved through the advertising ecosystem or identify the brokers or agencies involved”.

Leslie K. John points out that “when people perceive decisions to be overwhelmingly complex, they are prone to disengage” – “It’s unlikely to solve the problem given that users don’t read privacy policies and, despite the media uproar, don’t take much action when they learn of breaches”. In this regard, “informed consent – the principle companies use as permission to operate in this economy – is something of a charade. Most consumers are either unaware of the personal information they share online or, quite understandably, unable to determine the cost of sharing it – if not both”.

Firms take advantage of these vulnerabilities. Leslie K. John notes “firms have made loose privacy defaults, well, the default for the tech industry…examples…In November 2016, Uber changed its preset options to allow it to track users at all times. (It changed them back in September 2017 after facing criticism.) On the social payments app Venmo, transactions are public by default. Google automatically stores your location when you merely open its Maps app; opting out is confusing, if not downright misleading”. She adds “users’ ability to opt out is also often obfuscated…Facebook’s mobile app used defaults to deceptively prod users into consenting to upload their phone contacts to Facebook (something highly lucrative to Facebook in tracking a user’s social graph).”

There are concerns on adherence to the principle of consent. Ava Kofman notes investigations “showed that Google’s apps and website track people even after they have disabled the location history on their phones…tracking Android users by collecting the addresses of nearby cellphone towers even if all location services were turned off. The company has also been caught using its Street View vehicles to collect the Wi-Fi location data from phones and computers”.

Ava Kofman points out lingering questions about “the type and quality of consent obtained…many users do not understand how closely they are being tracked and how often their data is being resold to advertisers or third parties or programs like Replica”. “Consent has historically been defined by broad and vague terms of service, leveraging companies’ knowledge of intricate technical details at the expense of users too pressed for time to read – let alone understand – their jargon-laden privacy policies…explanations…to give permission are often incomplete or misleading.” Critically, “it’s difficult to evaluate who might be consenting when it’s not clear where the data comes from”.

Firms are also evasive about providing users a means to opt out of being tracked. Kashmir Hill describes the “Do Not Track” (DNT) feature as resembling a “spray-on sunscreen, a product that makes you feel safe while doing little to actually protect you”. She notes “only a handful of sites respect the request…The vast majority of sites…ignore it…Yahoo and Twitter initially said they would respect it, only to later abandon it. The most popular sites on the internet, from Google and Facebook to Pornhub and xHamster, never honored it in the first place. Facebook says that while it doesn’t respect DNT, it does provide multiple ways for people to control how we use their data for advertising…Google’s Chrome browser offers users the ability to turn off tracking, but Google itself doesn’t honor the request”. “There are other options for people bothered by invasive ads, such as an obscure opt-out offered by an alliance of online advertising companies, but that only stops advertising companies from targeting you based on what they know about you, not from collecting information about you as you browse the web”.
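Part of the reason DNT is so easy to ignore is that it is nothing more than a request header (“DNT: 1”) attached by the browser; whether anything changes is entirely up to the server. A minimal sketch of how a site could voluntarily honour it, assuming a plain dictionary of request headers rather than any particular web framework:

```python
def should_track(headers: dict) -> bool:
    """Honour the Do Not Track request header if it is present.

    Browsers with DNT enabled send the header "DNT: 1"; nothing forces a
    server to check it, which is why compliance is purely voluntary.
    """
    return headers.get("DNT") != "1"

# Example: a request from a browser with Do Not Track enabled.
request_headers = {"User-Agent": "ExampleBrowser/1.0", "DNT": "1"}

if should_track(request_headers):
    print("attach tracking cookies / fire analytics")
else:
    print("skip third-party trackers for this request")
```

Because no law or specification obliges a site to run a check like this, omitting it carries no penalty, which is the gap Hill describes below.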

Kashmir Hill explains that the reason DNT became “a useless tool is that the government refused to step in and give it any kind of legal authority…There is no penalty for ignoring DNT…proved useful to the industry to create the illusion of a voluntary self-regulatory process, seemingly preempting the need for regulation…The biggest obstacle was advertisers who didn’t want to give up delicious data and revenue streams; they insisted that DNT would kill online growth and stymied the process”.

Hence, controls tend to be porous. There are instances of enforcement agencies violating platform policies on authentic identities. Dave Maass notes “cops continue to create fake and impersonator accounts to secretly spy on users. By pretending to be someone else, cops are able to sneak past the privacy walls users put up and bypass legal requirements that might require a warrant to obtain that same information”. To curb fake Facebook accounts, Dave Maass suggests potential actions such as suspending the accounts, publicising information, alerting affected users and groups, and amending terms and conditions.

In addition, Leslie K. John points out “there’s also nothing stopping your friends from sharing information on your behalf…Even when consumers actively seek to uncover what personal information about them has been shared and with which entities, firms are not always forthcoming…discrete pieces of data can be synthesized to form new data…Algorithms and processing power now make it possible to build behavioral profiles of users without ever having to ask for their data…This phenomenon creates entirely new conundrums: If a company profiles a consumer using machine learning, is that profile subject to the regulatory rules of personally identifiable information? Does the consumer have any right to it? Should a company be allowed to use such techniques without the consent of the targets, or at all? No one knows”.

Leslie K. John notes “some argue that it may be too late to protect consumers’ personal data, because it has already been fed into machine-learning tools that can accurately infer information about us without collecting any more…regulation could place basic limits on what firms could do with those predictions (…preventing health insurers from using them to discriminate against applicants who might have medical problems)”.

Economic incentives also appear to be stacked against protecting consumer data. Leslie K. John notes “consumers don’t reward companies for offering better privacy protection. Privacy-enhancing technologies have not been widely adopted”. While people do care about privacy, “several factors impede their ability to make wise choices”. These include impatience and a preference for convenience, while platforms adopt loose privacy defaults and structure data transactions to nudge consumers towards disclosing information. In addition, “breaches make information increasingly public. And when our information is public, we value our privacy less, in turn making us more comfortable with parting with it”. The fact that firms have “an informational advantage over consumers…suggests a market failure and thus invites regulatory intervention”.

Leslie K. John suggests “regulation is not a panacea for the surveillance economy. It will surely introduce some new issues. There’s also more to gaining consumers’ trust than merely following the law. But if we draw on insights from behavioral science and accept that consumers are imperfect decision makers rather than perfectly rational economic actors, we can design better regulation that will help realize the benefits of data collection while mitigating its pitfalls – for both firms and consumers alike”.

In situations where “consumers couldn’t properly assess risks” and “firms weren’t motivated to address them”, she argues “it makes sense for regulators to shift risk onto those best able to manage it: the makers of the products”. Leslie K. John suggests “the real promise of government intervention may lie in giving firms an incentive to use consumers’ personal data only in reasonable ways. One way to do that is to adopt a tool used in the product safety regime: strict liability, or making firms responsible for negative consequences arising from their use of consumer data, even in the absence of negligence or ill intent. Relatedly, firms that collect our personal data could be deemed…information fiduciaries – entities that have a legal obligation to behave in a trustworthy manner with our data. Interventions such as these would give firms a sincere interest in responsibly using data and in preempting abuses and failures in the system of data collection and sharing (because otherwise they’d face financial penalties)”.

Neil M. Richards favours placing “restrictions both on the government’s ability to buy private databases and on its ability to share personal information with the private sector. Privacy law already has numerous models for this latter category…which prevents the government from disclosing many kinds of records about individuals that it has in its possession”.

Neil M. Richards notes that “although we have laws that protect us against government surveillance, secret government programs cannot be challenged until they are discovered. And even when they are, courts frequently dismiss challenges to such programs for lack of standing, under the theory that mere surveillance creates no tangible harms”. He argues “secret surveillance” should be recognised as “illegitimate, and prohibit the creation of any domestic surveillance programs whose existence is secret” and “reject the idea that it is acceptable for the government to record all Internet activity without authorization”.

Overall, individuals in society increasingly have to share, voluntarily or otherwise, more and more pieces of their personal data to maintain access to public and private sector services. As such, there is growing public concern over the nefarious use of personal data – which includes addresses, financial transactions, messages and location. Hence, there is increasing public pressure to legislate privacy laws to give individuals greater control over their data.

Tech firms have generally adopted two strategies to head off legal constraints on their ability to use personal data. The first is to dilute the provisions of proposed legislation. Rob Dozier points out that in Illinois (US), a “bill that sought to empower average people to file lawsuits against tech companies for recording them without their knowledge via microphone-enabled devices was defanged this week after lobbying from trade associations representing Silicon Valley giants”. The provisions were watered down after the technology trade associations objected and “claimed that the state’s definition of a digital device was too broad, and that the Act would lead to private litigation which can lead to frivolous class action litigation.”

The second is to draft parallel legislation that would overrule stringent proposals and “instead put into place a kinder set of rules that would give the companies wide leeway over how personal digital information was handled”. Jerri-Lynn Scofield notes Cisco recently proposed “three basic principles for US data privacy legislation”: (1) “Ensure interoperability between different privacy protection regimes”, particularly with the EU General Data Protection Regulation (GDPR); (2) “Avoid fracturing of legal obligations for data privacy through a uniform federal law that aligns with the emerging global consensus” – the concern over fracturing arises from the “cost of having to comply with multiple state regulatory regimes”; and (3) “Reassure customers that enforcement of privacy rights will be robust without costly and unnecessary litigation”.

In summary, the regulation of surveillance has become a major legislative agenda item. Leslie K. John cautions “the goal should not be simply to make it harder to share or to unilaterally increase firms’ barriers to consumer data. Such an approach would be overly simplistic, because firms and consumers alike have much to gain from sharing information…the costs of restricting information flow…impede innovation… reduce competition. The cost of compliance is disproportionately burdensome for small players”. Neil M. Richards suggests “the future development of surveillance law” should allow for “a more appropriate balance between the costs and benefits of government surveillance”. In this regard, the regulation of surveillance is often justified based on the requirements for privacy. Exploring the economics of privacy will thus shed greater light on the regulatory issues relating to surveillance.

References

Aria Bendix (21 October 2018) “There’s a battle brewing over Google’s $1 billion high-tech neighborhood, and it could have major privacy implications for cities”. Business Insider. https://www.businessinsider.com/google-sidewalk-labs-toronto-privacy-data-2018-10

Ava Kofman (28 January 2019) “Google’s Sidewalk Labs plans to package and sell location data on millions of cellphones”. The Intercept. https://theintercept.com/2019/01/28/google-alphabet-sidewalk-labs-replica-cellphone-data/

Caroline Haskins (6 February 2019) “Dozens of cities have secretly experimented with predictive policing software”. Motherboard. Tech by Vice. https://motherboard.vice.com/en_us/article/d3m7jq/dozens-of-cities-have-secretly-experimented-with-predictive-policing-software

Charlotte Jee (23 July 2019) “You’re very easy to track down, even when your data has been anonymized”. MIT Technology Review. https://www.technologyreview.com/s/613996/youre-very-easy-to-track-down-even-when-your-data-has-been-anonymized/

Daniel J. Solove (January 2006) “A taxonomy of privacy”. University of Pennsylvania Law Review, Vol. 154, No. 3, p. 477; GWU Law School Public Law Research Paper No. 129. https://ssrn.com/abstract=667622

Dave Maass (14 April 2019) “Four steps Facebook should take to counter police sock puppets”. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2019/04/facebook-must-take-these-four-steps-counter-police-sock-puppets

Elijah Sparrow (2014) “Digital surveillance”.  Communications surveillance in the digital age. Global Information Society Watch. Association for Progressive Communications (APC) and Humanist Institute for Cooperation with Developing Countries (Hivos). https://www.giswatch.org/2014-communications-surveillance-digital-age

Gus Hosein (2014) “Introduction”. Communications surveillance in the digital age. Global Information Society Watch. Association for Progressive Communications (APC) and Humanist Institute for Cooperation with Developing Countries (Hivos). https://www.giswatch.org/2014-communications-surveillance-digital-age

Jerri-Lynn Scofield (8 February 2019) “Cisco joins other tech giants in calling for a federal privacy law”. Naked Capitalism. https://www.nakedcapitalism.com/2019/02/cisco-joins-tech-giants-calling-federal-privacy-law.html

Jevan Hutson (28 January 2019) “How Ring & Rekognition set the stage for consumer generated mass surveillance”. Washington Journal of Law, Technology & Arts.

Julie Bort (16 July 2019) “The cofounder of Palantir just called Google an unpatriotic company. Here’s why this alarming new level of rhetoric within tech is really just a deflection”. Business Insider US. https://www.businessinsider.my/why-joe-lonsdale-accused-google-unpatriotic-deflection-2019-7/?r=US&IR=T

Kashmir Hill (15 October 2018) “Do Not Track, the privacy tool used by millions of people, doesn’t do anything”. Gizmodo. https://gizmodo.com/do-not-track-the-privacy-tool-used-by-millions-of-peop-1828868324

Ira S. Rubinstein, Woodrow Hartzog (17 August 2015) “Anonymization and risk”. https://www.ftc.gov/system/files/documents/public_comments/2015/10/00033-97824.pdf

Leslie K. John (September 2018) “Uninformed consent”. Harvard Business Review. https://hbr.org/cover-story/2018/09/uninformed-consent

Nancy Scola (July/August 2018) “Google is building a city of the future in Toronto. Would anyone want to live there?” Politico. https://www.politico.com/magazine/story/2018/06/29/google-city-technology-toronto-canada-218841


Neil M. Richards (25 March 2013) “The dangers of surveillance”. Harvard Law Review. https://ssrn.com/abstract=2239412

Phuah Eng Chye (12 October 2019) “Information and organisation: Shades of surveillance”. http://economicsofinformationsociety.com/information-and-organisation-shades-of-surveillance/

Phuah Eng Chye (26 October 2019) “Information and organisation: Cross border data flows and spying”.

Rob Dozier (12 April 2019) “Big tech lobbying gutted a bill that would ban recording you without consent”. Motherboard. Tech by Vice. https://www.vice.com/en_us/article/ywyzm5/big-tech-lobbying-gutted-a-bill-that-would-ban-recording-you-without-consent

Ryan Gallagher (11 July 2019) “How U.S. tech giants are helping to build China’s surveillance state”. The Intercept. https://theintercept.com/2019/07/11/china-surveillance-google-ibm-semptian/

Sidewalk Labs (17 June 2019) “Sidewalk Labs’ proposal: Master innovation and development plan”. Volume 2, Chapter 5, “Digital Innovation”. https://quaysideto.ca/wp-content/uploads/2019/06/MIDP_Vol.2_Chap.5_DigitalInnovation.pdf

Tracy Qu (15 August 2019) “China’s internet regulator warns app operators over data privacy and says more rectification is needed”. SCMP. https://www.scmp.com/tech/apps-social/article/3022795/chinas-internet-regulator-warns-app-operators-over-data-privacy

Yasha Levine (6 February 2018) “Surveillance valley: Why are internet companies like Google in bed with cops and spies?” The Baffler. https://thebaffler.com/latest/oakland-surveillance-levine


[1] “Information and organization: Shades of surveillance”.

[2] See Neil M. Richards.

[3] Zygmunt Bauman and David Lyon highlighted the phenomenon of “liquid surveillance” which describes the spread of surveillance beyond nonconsensual state watching to a sometimes-private surveillance in which the subjects increasingly consent and participate. See Neil M. Richards.

[4] See articles by Julie Bort, Ryan Gallagher and Yasha Levine.

[5] Julie Bort.

[6] See article by Caroline Haskins.

[7] Transaction data is information that individuals consent to providing for commercial or government-operated services through a direct interaction, such as apps, websites, and product or service delivery.

[8] One realistic example is an app-based ride-hail service whose vehicles are equipped with sensors or cameras capable of collecting data on passengers or the environment.

[9] Ira S. Rubinstein and Woodrow Hartzog argue that “process-based data release policy, which resembles the law of data security, will help us move past the limitations of focusing on whether data sets have been anonymized. It draws upon different tactics to protect the privacy of data subjects, including accurate deidentification rhetoric, contracts prohibiting reidentification and sensitive attribute disclosure, data enclaves, and query-based strategies to match required protections with the level of risk”.