Information and organisation: Shades of surveillance

Phuah Eng Chye (12 October 2019)

Information is critical to organising society. The boundaries for the capture and use of information are set by policies and social attitudes towards surveillance. But it is difficult to draw clear boundaries, as information and surveillance are conjoined at the hip. Surveillance has always existed but its reach was limited. The shift from manual to technological surveillance has made it pervasive. Technological surveillance is not deployed only by authoritarian governments; it exists in many shades. I have loosely categorised technological surveillance into four broad and overlapping groups.

  1. State surveillance

Technology arouses fears. The fear can be economic, such as the fear of technological unemployment or the loss of jobs to automation. But the greatest fear has ideological roots: the fear of technological surveillance. Concerns over a dystopian future of totalitarian state control, as portrayed in the Orwellian “Big Brother is watching you” or Jeremy Bentham’s panopticon[1], have been around for more than a century.

Many commentators view China’s authoritarian surveillance regime as the closest to resembling the dystopian threat to liberty. Bernhard Zand notes that due to unrest in the Xinjiang Uighur Autonomous Region, “Beijing has tightened its grip and turned Xinjiang into a security state that is extreme even by China’s standards, being a police state itself…state-of-the-art surveillance technology, with cameras illuminating every street all over the region, from the capital Urumqi to the most remote mountain village. Iris scanners and WiFi sniffers are in use in stations, airports and at the ubiquitous checkpoints – tools and programs that allow data traffic from wireless networks to be monitored. The data is then collated by an integrated joint operations platform that also stores further data on the populace – from consumer habits to banking activity, health status and indeed the DNA profile of every single inhabitant of Xinjiang”.

He observes “anyone with a potentially suspicious data trail can be detained. The government has built up a grid of hundreds of re-education camps…members of the local Communist Party committee given powers to inspect family homes and interrogate them about their lives: Who lives here? Who visited? What did you talk about? Even the controllers are getting controlled: Many apartments have bar code labels on the inside of the front door which the official must scan to prove that he or she carried out the visit”.

Harrison Jacobs notes “the Chinese government is working to create a techno-authoritarian state powered by artificial intelligence and facial recognition to track and monitor its 1.4 billion citizens…China already has 170 million security cameras in use for its so-called Skynet surveillance system, with 400 million more on the way in the coming years”. Surveillance capabilities are enhanced by a “mobile app to log citizens’ personal data, track citizens’ activity, and flag dangerous individuals for investigation or punishment”[2], and by AI and 5G to “track vehicles, recognise faces and a person’s gait, read micro expressions and body language, to spot anomalies and predict crime – all in real time”[3].

However, the deployment of technological surveillance is spreading worldwide. Neil M. Richards notes “autocratic regimes have long been the villains in the stories we tell about surveillance, but they are no longer the only governments that have stepped up their surveillance activities. Democratically elected governments in the West have deepened their commitment to surveillance of the public as well. Since 2001 this monitoring has often been done in the name of counterterrorism, but it has also been justified as protecting cybersecurity, intellectual property, children from predators, and a seemingly evergrowing list of other concerns. Some of the most well-known and valuable publicly traded corporations have also got in on the act, often with the consent (in varying degrees) of their customers. Surveillance, it seems, is not just good politics, but also good business”.

The difficulties in distinguishing between the “overt” authoritarian surveillance and the “embedded” surveillance in democracies arise because both rely on the same technologies and, in many instances, on the same firms. Technology firms are actively involved in developing and selling surveillance-related technologies such as facial and voice recognition, biometric scanners, genealogy databases[4], license plate readers[5], camera-enabled drones and predictive policing software[6] and in providing technological support and access to databases to military, intelligence and enforcement agencies[7].

Siena Anstis, Ronald J. Deibert and John Scott-Railton describe sophisticated spyware available on the market that “can infiltrate both iOS and Android devices. It also allows an operator to read text messages, including those that are end-to-end encrypted; examine photos; and track a phone’s location. The technology can also silently enable microphones and cameras, turning the phone into a portable surveillance tool to overhear and observe conversations happening in the phone’s vicinity”. They suggest the connection between spyware and the silencing of dissent highlights “how the availability and abuse of highly intrusive surveillance technology accelerates the already rapidly shrinking space in which vulnerable people can express dissent without facing repercussions such as torture, arbitrary imprisonment or killing”.

They add: “While lucrative, the business of hacking phones and computers is highly nontransparent…operates largely without restraints…This wild west system means states purchasing spyware could be at liberty to abuse it with limited or no transparency or regulation and, in many countries, without legal ramifications…States, technology manufacturers, private equity firms and other participants in the targeted surveillance industry now enjoy unconstrained freedom to profit. Meanwhile, there are significant consequences. As targeted surveillance technology becomes a mainstay of intelligence gathering and law enforcement among states lacking safeguards against abuse, our ability to express ourselves at liberty, in privacy and without the looming threat of repercussions will be dramatically threatened”.

There is considerable overlap between state surveillance and technology firms. For example, Glenn Greenwald notes “Amazon…is a critical partner for the U.S. Government in building an ever-more invasive, militarized and sprawling surveillance state”. This includes “face-recognition software for crowds, which it called Rekognition”. “If police body cameras, for example, were outfitted with facial recognition, devices intended for officer transparency and accountability would further transform into surveillance machines aimed at the public. With this technology, police would be able to determine who attends protests…continuously monitor immigrants. Cities might routinely track their own residents, whether they have reason to suspect criminal activity or not. As with other surveillance technologies, these systems are certain to be disproportionately aimed at minority communities”.

Amazon products “will allow users to report crimes directly to their smart speakers…a startling reminder of the growing reach that technology companies have into our daily lives, intimate habits, and vulnerable moments – with and without our permission…Simply walking up to a friend’s house could result in your face, your fingerprint, or your voice being flagged as suspicious and delivered to a government database without your knowledge or consent. With Amazon selling the devices, operating the servers, and pushing the technology on law enforcement, the company is building all the pieces of a surveillance network, reaching from the government all the way to our front doors”.

There is, of course, a difference between China and Western societies. In the West, there is strong public pushback against threats to individual privacy rights and civil liberties. But there is no clear direction on where this is headed. There is a trend favouring stringent privacy laws to protect the rights of citizens. At the same time, there has been increased sharing of data for enforcement purposes. In Canada[8], police, social services and health workers rely on shared databases to track the behaviour of vulnerable people – including minors and people experiencing homelessness – with little oversight and often without consent. In the US, some courts have approved “reverse location search warrants”[9] to facilitate general-purpose digital information collection by the police.

Hence, intelligence and enforcement agencies search for opportunities to bypass regulatory constraints on access to personal data. In response, some US cities have passed legislation to require companies to notify passers-by that their faces are being scanned, or to ban local agencies from using facial-recognition software[10].

But these restraints are limited to specific localities and conditions, leaving many areas and instances without restrictions. For example, US border security deploys smart wall[11] technologies such as drones, sensors and AI to curb illegal border crossings. More controversially, according to Barbara Boland, border “agents have been detaining American citizens without arrest, searching, and in some cases downloading the entire contents of phones, tablets, laptops, and other devices. And this all happens without a warrant or access to an attorney”. She notes these searches can “reveal information about financial and commercial crimes, such as those relating to copyright, trademark and export control violations…resulted in arrests for child pornography, evidence helpful in combating terrorist activity, violations of export controls, convictions for intellectual property rights violations, and visa fraud discoveries” – crimes which border agencies have not traditionally been tasked to investigate. In addition, the government does not provide any information on “where the information goes”, and it could be stored indefinitely and shared.

Governments have also stepped up social media monitoring. Jerri-Lynn Scofield adds the US State Department recently required “nearly all applicants for US visas to submit their handles for Facebook, Twitter, Instagram and YouTube as well as previous email addresses and phone numbers”. This is based on the reason that social media monitoring is “a vital tool to screen out terrorists, public safety threats, and other dangerous individuals from gaining immigration benefits and setting foot on U.S. soil.” She cautions that “it’s likely other countries will adopt reciprocal policies”.

Isobel Asher Hamilton notes Britain’s spy agency put out a proposal for “encrypted messaging services like WhatsApp or iMessage surreptitiously blind copying government agencies in on a chat without alerting the other users”. Critics argue that “reconfiguring messaging services’ software to allow them to quietly add the government to private chats could introduce unforeseen vulnerabilities, which in turn could be exploited by hackers…It would also mean redesigning services like WhatsApp so that the company could access and view individual chats, which is intentionally designed to be impossible at the moment as it would constitute a serious invasion of privacy by the company”. In addition, “creating a point of access to private chats for the UK government could result in an international domino effect”. Critics also argue that “the moment users find out that a software update to their formerly secure end-to-end encrypted messaging application can now allow secret participants to surveil their conversations, they will lose trust in that service.”

  2. Community surveillance

There is considerable overlap between state and community surveillance. In China, community surveillance is regarded as an extension of state security. In 2014, the State Council of China published its “Planning outline for the construction of a social credit system”[12]. The system was aimed at forging “a public opinion environment where keeping trust is glorious. It will strengthen sincerity in government affairs, commercial sincerity, social sincerity and the construction of judicial credibility.” This project has been controversial. Louise Matsakis notes “the Chinese government and state media say the project is designed to boost public confidence and fight problems like corruption and business fraud. Western critics[13] often see social credit instead as an intrusive surveillance apparatus for punishing dissidents and infringing on people’s privacy”.

There is considerable confusion as to how the social credit system will work, as it is currently voluntary and slated to become mandatory by 2020. Nicole Kobie points out that “as yet, there’s no one social credit system. Instead, local governments have their own social record systems that work differently, while unofficial private versions are operated at companies such as Ant Financial’s Zhima Credit, better known as Sesame Credit”. There are many pilot schemes and, “though the pilot is approved, and indeed encouraged, it could one day be shut down by the government”.

At the moment, China’s social credit system comprises a patchwork of city, provincial and private sector schemes. Based on several articles[14], a citizen’s score can be reduced for offences ranging from crimes, traffic violations, unpaid fines or bills and bribery to refusal to carry out military service. Some are credit-related (credit delinquency) while others are platform-related (posting fake product reviews, booking a hotel room without showing up, spending too many hours playing video games or wasting money on frivolous purchases).

A range of penalties are imposed to discourage bad behaviours. Citizens with low scores may face restrictions on their rights to travel, access to the internet, transportation, housing, financial products, licences, permits, jobs, schools and social services, be publicly named as a bad citizen or even have their dogs taken away[15].

Incentives are offered for good behaviour, based on factors such as timely repayment of bills, the types of goods purchased, the level of education and even the scores of friends. Citizens are awarded extra points for taking care of elderly family members and helping the poor, and can become eligible for a good citizen list. High scores could entitle citizens to priority on healthcare waiting lists, school admissions and employment discounts; preferential loans; the ability to rent without deposits; and more matches on dating websites.
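The penalty-and-incentive logic described in the reports above can be sketched as a toy points model. This is purely illustrative: no unified official scheme has been published, and the behaviour names, point values and thresholds below are invented.

```python
# Toy sketch of a points-based social credit model. Purely illustrative:
# behaviour names, point values and thresholds are invented, not official.

PENALTIES = {
    "traffic_offence": -5,
    "unpaid_fine": -10,
    "credit_delinquency": -20,
}
REWARDS = {
    "timely_bill_payment": 5,
    "caring_for_elderly": 10,
}

def update_score(score: int, events: list[str]) -> int:
    """Apply a list of recorded behaviours to a running score."""
    for event in events:
        score += PENALTIES.get(event, 0) + REWARDS.get(event, 0)
    return score

def consequences(score: int) -> list[str]:
    """Map a score band to the kinds of outcomes reported in the press."""
    if score < 50:   # hypothetical blacklist threshold
        return ["travel restrictions", "loan restrictions", "public naming"]
    if score > 120:  # hypothetical red-list threshold
        return ["priority healthcare", "preferential loans"]
    return []
```

The point of the sketch is only that disparate behaviours – financial, civic, commercial – are folded into a single number that then gates access to services, which is precisely what makes the scoring rules and thresholds so consequential.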

Louise Matsakis notes concerns that “the primary mechanism of the Social Credit System are the nationwide blacklists and red lists”. The blacklists comprise “a rap sheet of its worst offenders” and the red lists are “rosters of companies and people that have been particularly compliant”. These lists are “then made public on a centralized website, called China Credit, where anyone can search them”. “Many regulatory agencies have signed memorandums of understanding with each other, in which they promise to punish people and businesses on one another’s blacklists”. “Chinese legal researchers are worried about the Supreme People’s Court” blacklist as that “comes with harsh punishments”.

Yu-Jie Chen argues the judgement defaulter’s blacklist is imposing “layers of disproportionate, arbitrary, and wide-ranging punishments”, on people who have largely already suffered the consequences of breaking the law. She is also worried about “how the list penalizes people who didn’t commit any offense, like a child who is barred from attending certain schools because of their parent’s actions. It’s not clear whether citizens can effectively get off the list if they’re included on it by accident, or even if they fulfil their court-ordered obligations”[16].

Phoebe Zhang adds new approaches are being developed to increase the pressure on defaulters. “Hebei province developed a digital map that displays deadbeats, including individuals and companies, in a user’s vicinity…Lishui teamed up with local movie theaters to play a video about bad debtors ahead of a screening of Avengers: Endgame”. Recently, the Jianggan district court sent “targeted social media ads to people close to debtors, including family, friends and colleagues, offering money for those who manage to make the indebted pay up…The ads can be generated through an application on WeChat”.

The greatest controversy is whether the credit score will be linked to political activities. It is speculated that people who associate with dissidents or critics (including family members), circulate critical petitions, posts or fake news, take part in protests or are involved with illegal social organisations could be penalised. A recent Abacus article points out that draft regulation from “the Cyberspace Administration of China (CAC) stipulates that both internet service providers and internet users could be blacklisted under China’s social credit system for seriously untrustworthy conduct. The CAC didn’t detail what makes certain behavior seriously untrustworthy.” However, it gave a few examples. “Besides sharing false information, the CAC said companies that have violated laws and regulations, had their websites shut down or business licenses revoked, or failed to carry out a punishment could be blacklisted”.

There is also concern about the extension of the social credit system to businesses. Frank Tang notes “the National Development and Reform Commission (NDRC) is pushing ahead with social credit-based supervision of all commercial entities from large firms to small, independently owned and operated businesses, prompting complaints over corporate privacy and heavy-handed government intervention. The social credit rating will include court rulings, tax records, environmental protection issues, government licensing, product quality, work safety, and administrative punishments by market regulators…Firms will be labelled as having excellent, good, fair or a poor credit rating, with the initial assessment used as basic proof to allow the government to conduct varying degrees of supervision. For any business deemed to have a poor credit history, the management will be called in by local officials for a detailed review, which will include plans to correct the problems”.

He adds “the NDRC did not elaborate on the business credit rating methodology, but said that it will solicit public feedback”. “The NDRC has already completed its assessment of travel service companies, coal mining firms, long-distance bus providers, natural gas suppliers and home services. According to the assessment of the coal sector published in April, only 98 of over 19,000 firms were rated as excellent, while 1,868 were labelled as having poor social credit ratings. Some 204 natural gas suppliers were rated as poor after they appeared on the lists of the Supreme Court or the State Administration of Market Regulation for violating laws or regulations”.

Frank Tang notes comments from NDRC official that “the assessment would not be used directly for punishment, instead, punishments would be meted out in accordance with joint memorandums by different government agencies that follows a strict procedure. There are more than 50 such joint memorandums that stipulate both incentives and punishments in different sectors…For severe violations, especially those endangering life and property, harsh punishment will be adopted, such as a temporary or even permanent ban on market entry…Internet and big data technology will be used effectively to aggregate all kinds of information. It will set up a risk warning mechanism to prevent the emergence of cross-sector and cross-region risks”.

An EU Chamber of Commerce report[17] notes “there will also be a parallel set of compliance records, such as anti-monopoly cases, data transfers, pricing and licensing…a multinational company in China will be expected to deal with 30 different ratings and compliance records based on up to 300 requirements”. The report suggests that “for companies, higher scores mean lower tax rates, better credit conditions, easier market access and more public procurement opportunities, while lower scores lead to the opposite and negative ratings could possibly result in sanctions and blacklisting.” It anticipated “the corporate social credit system could in principle create a more level playing field, since automated data processing and impartial algorithm-based ratings could largely eliminate arbitrary decision-making and regulatory grey areas…But the report warned that the system had the potential to be used to discriminate against international companies”. The comprehensive, non-financial credit rating is also “envisaged to counter the current dominance of Western credit-rating companies”.

Frank Tang adds “the new system has also created major discomfort for foreign firms, who are afraid that it could be used as a weapon during international trade disputes or to give domestic firms an advantage. These worries have intensified given that China is now compiling an unreliable entity list to sanction foreign firms who hurt Chinese companies for non-commercial reasons…concerns that the submission of sensitive data could put firms’ intellectual property at risk”.

There are also contentious issues revolving around the transparency of scoring methodologies. Rachel Botsman notes “Alibaba does not divulge the complex algorithm it uses…but they do reveal the five factors taken into account. The first is credit history…Next is fulfilment capacity, which it defines in its guidelines as “a user’s ability to fulfil his/her contract obligations”. The third factor is personal characteristics, verifying personal information such as someone’s mobile phone number and address. But the fourth category, behaviour and preference…judges people by the types of products they buy…It nudges citizens away from purchases and behaviours the government does not like…The fifth category is interpersonal relationships…choice of online friends and their interactions say about the person being assessed”.
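Since Alibaba does not divulge its algorithm, any reconstruction is speculative, but the five-factor structure Botsman describes can be sketched as a weighted sum mapped onto Zhima Credit's reported 350–950 range. The weights below are assumptions for illustration, not the real ones.

```python
# Hedged sketch of a five-factor weighted score in the shape Botsman
# describes. The weights are assumed; Alibaba's actual algorithm is secret.

FACTORS = [
    "credit_history",
    "fulfilment_capacity",
    "personal_characteristics",
    "behaviour_and_preference",
    "interpersonal_relationships",
]
WEIGHTS = [0.35, 0.25, 0.15, 0.15, 0.10]  # assumed; must sum to 1.0

def zhima_style_score(sub_scores: dict[str, float]) -> float:
    """Combine five sub-scores (each in 0.0-1.0) into a 350-950 score."""
    weighted = sum(w * sub_scores[f] for w, f in zip(WEIGHTS, FACTORS))
    return 350 + weighted * (950 - 350)
```

The transparency dispute is visible even in this toy version: a user who sees only the final number cannot tell whether a low score reflects missed payments or disfavoured purchases and friends, because the weights are hidden.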

Similar tracking systems operate in the West as well. Rachel Botsman notes “most Americans have dozens of scores…most of them held by companies that give us no chance to opt out. Others we enter into voluntarily. The US government can’t legally compel me to participate in some massive data-driven social experiment, but I give up my data to private companies every day. I trust these corporations enough to participate in their vast scoring experiments…I post my thoughts and feelings on Facebook and leave long trails of purchases on Amazon and eBay. I rate others in Airbnb and Uber and care a little too much about how others rate me…through a process called identity resolution, data aggregators can use the clues I leave behind to merge my data from various sources”.

The trend generally favours rising levels of community surveillance. Peter Rogers notes Darwin in Australia, a sister city to Haikou in China, seems to have drawn “inspiration from the Chinese social credit surveillance system”, elements of which have been “embedded innocuously in the Switching on Darwin plans” for a smarter city. These include the “potential of the system for gathering data on citizens’ use of public services, such as Wi-Fi…to enhance council profitability through sale of user data to the private sector is significant…track citizen movements in real time”. He also notes that in Australia “the Encryption Act, rushed through federal parliament in December 2018, gave law enforcement and intelligence agencies unprecedented access to communications technology. Telecommunications providers must now provide potentially unlimited back doors into private data. They must also, by law, conceal that they have done so from customers/citizens…Combine these points of technology and law and we see the foundation of a surveillance state”.

Smart city projects are the quintessential form of community surveillance. Nancy Scola notes “the notion of the feedback-rich smart city has circulated for years, and in practice has mostly taken the shape of centuries-old cities like New York or Boston adopting sensor-enabled stoplights or equipping their residents with an app for spotting potholes. But the real dream, a place whose constant data flow lets it optimize services constantly, requires something different, a ground-up project not only woven through with sensors and Wi-Fi, but shaped around waves of innovation still to come, like self-driving cars. Thanks to a host of technological advances, that’s practical now in a way it never has been before. Mass-produced sensors now cost less than a dollar apiece, even for hobbyists; high-speed broadband and cheap cloud computing mean that a city can collect and analyze reams of data in real time”.

Smart cities are an attractive proposition. “Hitching up with tech companies that are flush with both cash and grand visions might be cities’ best chance to leap into the future, or at least to turbocharge their lagging districts”. But smart cities are subject to the same criticisms that beset social credit. Nancy Scola notes that “to critics, nonstop data looks a lot like tracking and surveillance – opening big questions about privacy risks, plus how the data is used and who controls it”.

A coalition of the Toronto, Ontario and Canadian governments contracted with Sidewalk Labs[18], a sister company of Google, to design and oversee the development of a dozen acres, called Quayside, “as a smart city, a sensor-enabled, highly wired metropolis that can run itself”. Nancy Scola suggests “a truly smart city stands to radically increase the amount of data collected on its citizens and visitors, and it puts into sharp relief the responsibility a local government – and the contractors it would inevitably hire to manage some of that digital infrastructure – would have to both hold and probe that data. That dynamic quickly turns the future of the smart city from a technological question to a fundamentally civic one. Heaps of data are already piling up in cities around the world, with very little agreement on the best way to handle all that information”.

Hence, she notes “the relationship between government and Sidewalk remains a work in progress, and some critics worry that handing over too much control to a private company will set the wrong precedent. By definition, the autonomy of a smart city means taking some hands-on day-to-day decision-making away from elected officials and civil servants. And when the complex algorithms and data-collection decisions driving those city operations are in the hands of one company, that can raise worries that too much power over our civic lives is being handed over to private interests”. The risk is that these companies might “end up owning not just slices of real estate but also, as they take on more local responsibility, huge chunks of information about how cities themselves function”.

Joel Kotkin paints a dystopian future from the “tech world” initiatives to design smart cities. “The tech oligarchs who already dominate our culture and commerce, manipulate our moods, and shape the behaviors of our children while accumulating capital at a rate unprecedented in at least a century want to fashion our urban future in a way that dramatically extends the reach of the surveillance state already evident in airports and on our phones”. He suggests “this new urban vision negates the notion of organic city-building and replaces it with an algorithmic regime that seeks to rationalize, and control, our way of life…This is a vision of the urban future in which the tech companies’ own workers and whatever other people with skills the machines haven’t yet replaced are a new class of urban serfs living in small apartments, along with a much larger class of dependent persons living on income maintenance and housing or housing subsidies provided by the state”.

Attention has focused on the use of facial recognition technology. In the West, facial recognition technology is increasingly being deployed at borders, airports, railway stations and shopping centres. Freddie Stuart highlights the rising installation of facial recognition cameras in London. This includes “the implementation of advanced surveillance cameras in The Barbican Centre, where 16 of the new 65 cameras will be capable of recognising faces, and possess an invasive two-way audio feature – potentially allowing controllers to listen in”.

One concern is that the new surveillance technologies are being rolled out by private companies in public spaces. Freddie Stuart notes that “while the centrality of data to the business models of tech companies is well-documented, the collection of data in privately owned physical space is a relatively unexplored phenomenon”. “In the same way that the ownership of online platforms is used as space to collect personal information, these physical spaces could soon become the real-world data mines of private firms”. He suggests that while “in the UK, the use of facial recognition technology to monitor members of the public for commercial purposes is illegal without prior notice… there are no checks and balances on the use of facial recognition by private firms, if they are issued as part of a security strategy”.

In the West, there has been pushback on the threat to individual privacy rights and civil liberties. Freddie Stuart notes the Biometrics Commissioner and the Forensic Science Regulator have called for “a moratorium on the current use of facial recognition technology” and noted that “no further trials should take place until a legislative framework has been introduced and guidance on trial protocols, and an oversight and evaluation system, has been established”. Similarly, “the human rights group Liberty has launched a petition for the Home Secretary to ban the use of facial recognition technology in public spaces”.

In the US, several states have enacted laws restricting the use of facial recognition technology. In 2015, “Illinois users accused Facebook of violating that state’s Biometric Information Privacy Act by using facial recognition technology to collect biometric data. Facebook allegedly accomplished this through its Tag Suggestions feature, which allowed users to recognise their Facebook friends from previously uploaded photos”. Facebook Inc. lost the “class action lawsuit claiming that it illegally collected and stored biometric data for millions of users without their consent”. Recently, “Facebook agreed to pay a record $5 billion fine to settle a Federal Trade Commission data privacy probe”[19].

In contrast, China is enthusiastic about “extending artificial intelligence technologies to many walks of life, from catching criminals, detecting cancers to developing self-driving cars”[20]. Facial recognition technology is being deployed in airport security, crime prevention and the detection of traffic violations.

Jane Zhang relates that in Chongqing, “officially the world’s most surveilled city”, residents are generally supportive as “it gives people a sense of security and there are fewer crimes”. She notes many people in China “appear willing to trade some privacy for extra security or a practical improvement in their daily lives when it comes to new tech adoption…There is no impact [on personal privacy] if you do not steal, rob or break laws.” Due to the low costs, shop owners have been installing surveillance cameras as “this system has helped me catch thieves several times. Even if I called the police [for theft], they would just file the case”.

Technological surveillance is appealing to most developing economies as it provides a means of immediately addressing enforcement weaknesses. Bloomberg reports India is planning to “build a system to centralise facial recognition data captured through surveillance cameras across India. It would link up with databases containing records for everything from passports to fingerprints to help India’s depleted police force identify criminals, missing people and dead bodies…The government says the move is designed to help one of the world’s most understaffed police forces, which has one officer for every 724 citizens – well below global norms”. The system is also viewed as an essential tool to fight crime and terrorists. “But the project is also ringing alarm bells in a nation with no data privacy laws…the lack of proper safeguards opens the door for abuses”.

In defence of using surveillance technology, Koh Hong-Eng argues “video cameras and other public safety technology support local economies and improve quality of life for residents”. In this regard, he views “cities are engine of growth…But dangerous conditions can stunt that growth. Violent crime in particular depresses property values and chokes off local tax revenue, leaving less money available for public services”.

Koh Hong-Eng highlights that the new technologies can eliminate “information silos” and give the different agencies (police, fire, public transport and health care) better tools for sharing information and for collaborating. “Cameras also help authorities respond more effectively to incidents in progress. During an earthquake, flood or fire, cameras gives first responders a unified, real-time snapshot of citywide operations, so that each service has the same information at the same time”.

In their study, Ignacio Munyo and Martín Rossi note “an increasing number of cities worldwide are relying on surveillance cameras as a tool for preventing crimes and supporting investigations and prosecutions”. Their recent study on the “impact of a large-scale introduction of police-monitored cameras in Montevideo, Uruguay…indicate a 20% reduction in crime in areas of the city where the cameras are located, with no evidence of a displacement effect. The programme also appears to offer value for money compared with other security and crime prevention measures”. They attribute these results to “deterrence (a police presence makes criminal activity less attractive) and incapacitation (police officers apprehend criminals, leaving fewer of them around to commit future crimes)”.

  3. Firm surveillance

If the concerns on state and community surveillance are valid, how should one regard the intrusive surveillance conducted by private-sector firms on employees and customers?

“Technology has advanced far beyond the browser cookies and retargeting that allow ads to follow us around the internet. Smartphones now track our physical location and proximity to other people – and, as researchers recently discovered, can even do so when we turn off location services. We can disable the tracking on our web browsers, but our digital fingerprints can still be connected across devices, enabling our identities to be sleuthed out. Home assistants like Alexa listen to our conversations and, when activated, record what we’re saying. A growing range of everyday things – from Barbie dolls to medical devices – connect to the internet and transmit information about our movements, our behavior, our preferences, and even our health. A dominant web business model today is to amass as much data on individuals as possible and then use it or sell it – to target or persuade, reward or penalize. The internet has become a surveillance economy. What’s more, the rise of data science has made the information collected much more powerful, allowing companies to build remarkably detailed profiles of individuals. Machine learning and artificial intelligence can make eerily accurate predictions about people using seemingly random data. Companies can use data analysis to deduce someone’s political affiliation or sexuality or even who has had a one-night stand. As new technologies such as facial recognition software and home DNA testing are added to the tool kit, the surveillance done by businesses may soon surpass that of the 20th century’s most invasive security states”. Leslie K. John (September 2018) “Uninformed consent”.

Simon Head relates how the rigorous workplace control techniques espoused by Frederick Winslow Taylor (in the 1880s) have been advanced by technology to monitor employee activities and communications in the workplace and outside. This includes tracking employees’ online activities, webcams, listening devices, an “ultrasonic tracker of a worker’s hands to monitor performance of assigned tasks”[21] and even microchip implants[22].

In particular, “People Analytics”[23] (PA) badges (attached to microphones and sensors) “record their subjects’ frequency of speaking, tone of voice, facial expressions, and body language…automatically measure individual and collective patterns of behavior, predict human behavior from unconscious social signals, identify social affinity among individuals…and enhance social interactions by providing feedback”. These tools provide “real-time and automatic tracking of aspects of employee conversation like the variability in the timing of replies, frequency in communications, use of emoticons, slang, sentiment and banter…flagging anything that deviated from the norm for further investigation. That could be something as seemingly innocuous as shouting on a phone call, accessing a work computer in the middle of the night, or visiting the restroom more than colleagues”. Simon Head suggests workplace technologies could be friendlier if employees (unions) were able to negotiate limits on their use.

Shainaz Firfiray points out serious concerns about the impact of microchip implants on human dignity, ethics, health and discriminatory practices, and questions whether it is right to track employees’ activities outside the office. She notes that “even if implants are technically voluntary, it’s not hard to imagine situations where employees might feel pressured to accept the chips by their managers or warned of unfavourable consequences if they don’t agree”. Hence, “that probably won’t stop some employers seeing what they can get away with at a time when it’s increasingly common to let private companies know almost everything about us”. In this regard, Miranda Katz highlights a study’s estimate that “94 percent of organizations currently monitor workers in some way. Regulations governing such conduct are lax; they haven’t changed since the 19th century”.

Companies not only monitor their employees intrusively but also their customers. The list of technologies[24] to glean more information from customers includes the use of thermal imaging technology to monitor patterns in customer movements; devices to track mobile devices of individual consumers through the store; monitors to track specific items being moved from shelves by consumers; wi-fi to share the users unique MAC (media access control) address and capture personal information; floor sensors to measure footpath and engagement time by location; apps such as mobile payment software, which can identify customers and their buying patterns; radio frequency identification transmitters attached to products to detect their movement and location; emotional capture technology to determine the emotional or cognitive state of shoppers by analysing their facial expressions; and eye-tracking[25] to analyse sub-conscious responses.

Doug Stephens notes “technology giant Adobe recently launched a cloud-based platform that, by using a variety of data points and technologies, identifies individual shoppers in real-time as they enter a store, portraying them as moving dots on a store map. It then allows store management to click on and receive a full profile of each individual, including spending patterns, marital status, age range, city of residence and more. From there, each individual consumer can be micro-targeted with specific offers and promotions to suit their known purchasing patterns”. Farfetch founder José Neves suggests the new technologies are turning customers into an “offline cookie, a technology that automatically adds products to your wish list on your app as you touch them in the store, without having to scan anything”[26].

Doug Stephens notes that “while studies suggest that we are largely willing to share data about who we are, we are decidedly more reluctant to share information about where we are. In other words, there’s an important difference in the shopper’s mind between the sharing of data and straight-up surveillance”.

In this context, he suggests that “two directly competing forces are at play in the retail market. First, the pervasive nature of the internet is rewiring consumers to expect new levels of personalisation of shopping experiences and products. We expect retailers to help us cut through the clutter and give us exactly what we want, how and when we want it. That said, we also desire privacy and these two conflicting consumer needs are now colliding head-on. Can one achieve intimacy without information? Can we enjoy hyper-personalisation while maintaining our privacy? These remain open questions”.

But Doug Stephens is doubtful “we can get the toothpaste back in the tube. It’s highly unlikely that marketers will un-see the opportunity to turn stores into living and responsive websites. Retailers most certainly will use technology to better understand who is in their stores and what those customers want. Their capacity to do so is no longer in question. It’s only the rules of engagement that remain to be sorted out. Laws will be broken, rights will be violated, cases will be litigated and, in the end, as we always do, we’ll adapt”.

Nick Tabor notes the increasing convergence between security and marketing activities; that the same information used for surveillance is also being used for marketing. Hence, there is a need to clarify issues in relation to commercial surveillance data; such as the type of data being collected, how it is being stored, for how long and the people who would have access.

Mike Elgan suggests there are elements in “America’s growing social credit system” or the “surveillance of social media activity by private companies” that resemble China’s social credit system. For example, “life insurance companies can base premiums on what they find in your social media posts”. Users can be banned from service by AirBnB and Uber and from “WhatsApp if too many other users block you. You can also get banned for sending spam, threatening messages, trying to hack or reverse-engineer the WhatsApp app, or using the service with an unauthorized app”.

Mike Elgan notes “nobody likes antisocial, violent, rude, unhealthy, reckless, selfish, or deadbeat behavior. What’s wrong with using new technology to encourage everyone to behave?  The most disturbing attribute of a social credit system is not that it’s invasive, but that it’s extralegal. Crimes are punished outside the legal system, which means no presumption of innocence, no legal representation, no judge, no jury, and often no appeal. In other words, it’s an alternative legal system where the accused have fewer rights. Social credit systems are an end-run around the pesky complications of the legal system. Unlike China’s government policy, the social credit system emerging in the U.S. is enforced by private companies. If the public objects to how these laws are enforced, it can’t elect new rule-makers. An increasing number of societal privileges related to transportation, accommodations, communications, and the rates we pay for services (like insurance) are either controlled by technology companies or affected by how we use technology services. And Silicon Valley’s rules for being allowed to use their services are getting stricter. If current trends hold, it’s possible that in the future a majority of misdemeanors and even some felonies will be punished not by Washington, D.C., but by Silicon Valley. It’s a slippery slope away from democracy and toward corporatocracy. In other words, in the future, law enforcement may be determined less by the Constitution and legal code, and more by end-user license agreements”.

There is also concern over the use of customers’ personal data to create blacklists. Susie Cagle notes bars use PatronScan kiosks to verify IDs and collect and track basic customer demographic data. The PatronScan service maintains “a record of bad customer behavior and flag those individuals, alerting every other bar…What constitutes bad behavior is at a bar manager’s discretion, and ranges from sexual assault to violence to public drunkenness and other…Unless patrons successfully appeal their status to PatronScan or the bar directly, their status can follow them for anywhere from a couple weeks to a few months, to much, much longer”.

Susie Cagle highlights “PatronScan’s product raises a number of concerns about privacy, surveillance, and discrimination. PatronScan’s reports reveal the company logged where customers live, the household demographics for that area, how far each customer travelled to a bar, and how many different bars they had visited”. “According to the company’s own policies, the company readily shares the information it collects on patrons, both banned and not, at the request of police”. However, the police are not provided a backdoor and they have to ask permission.

Susie Cagle notes “civil liberties advocates are more skeptical about the PatronScan model. The system is inherently subjective: Banning criteria are nebulous and determined on an individual basis…There’s essentially nothing to stop a bad-actor business from using PatronScan for discriminatory purposes under the guise of security”. She notes liberties may be infringed as businesses “don’t have an unfettered right to refuse service to anyone” and that people who are banned may not receive an “explanation of what rule they had violated or how they had transgressed”.

The expansion in the collection and use of surveillance data is likely to lead to infringement of other laws. For example, a recent working paper[27] on the advantages of trading based on satellite imagery of parking lot traffic raised “questions about individual investor protections in an age of new alternative data sources”. It was pointed out that “trading on material non-public information…is against the law…insider trading”. In this regard, “technology is increasingly blurring the boundaries between public and private information, creating data opportunities that are legal, but are expensive and often require special expertise to access… Our evidence suggests that unequal access to alternative data leaves individual investors outside the information loop”.

  4. Personal surveillance

Despite the surveillance concerns, users have enthusiastically embraced the new technologies. They use facial recognition to unlock their phones and computers and to authorise transactions (e.g. ride-sharing), and GPS to navigate and to track their devices. Individuals also wear smart watches and activity trackers to monitor their health, exercise regime and eating habits. They have even inserted microchip implants “to speed up users’ daily routines and make their lives more convenient – accessing their homes, offices and gyms is as easy as swiping their hands against digital readers. They also can be used to store emergency contact details, social media profiles or e-tickets for events and rail journeys within Sweden”[28].

There is also rising use of home surveillance devices to protect against burglaries or to monitor infants. Individual home surveillance units can be networked into a community surveillance system. Amazon’s Ring is “a system of home surveillance doorbell cameras which operate on an integrated social media platform, Neighbors. Neighbors allows users to share camera footage with other users and law enforcement agencies, as well as report safety issues, strangers, or suspicious activities. The platform aggregates user-generated reports and video data into local activity maps and watchlists”. Jevan Hutson highlights that “by integrating facial recognition and contracting with local and federal law enforcement agencies, Amazon supercharges the potential for its massive network of surveillant consumers to comprehensively track the movements of individuals over time, even when the individual has not broken any law. Fully realized, these technologies set the stage for consumer generated mass surveillance”. In particular, this gives rise to concern that the use of “facial recognition technology amplifies bias, intensifies mass surveillance and ought to be subject to stringent regulation”.

Echo Xie notes surveillance is being extended to schools and universities as part “of a national effort to lead the world in emerging technologies and move China’s economy up the value chain”. “In a detailed plan published in 2018, the ministry suggested that schools explore a new teaching model based on AI, including using artificial intelligence to monitor the teaching process and analyse the performance of students and teachers”.

Schools across China have enthusiastically adopted AI, particularly facial recognition technology. The systems are not only used to gain entry to and secure school facilities, they also record student attendance and handle enrolments. A school in “Zhejiang uses the systems in various applications, from its canteens to manage distribution of school meals to its classrooms where it monitors whether students enjoy their classes…cameras in the school’s classrooms can pick up seven emotions in the students – neutral, happy, sad, disappointed, angry, scared and surprised”. In “Guizhou, Guanyu Technology supplies chip-equipped smart uniforms to track students’ locations, according to its website”.

Echo Xie notes concern over the impact on children’s psychological health and the privacy and security of their personal data. “The ministry has acknowledged growing concern among teachers, students and parents over use of AI applications in schools, releasing guidelines…tightening the range of students’ personal data that app developers can collect…appointed a specialist panel to look into data security and privacy concerns with facial recognition in campuses”.

Surveillance technology is also being deployed in US schools. Benjamin Herold notes that due to shooting incidents, “schools are eagerly searching out new technologies. Companies feed those fears, then respond by offering new services. The systems are then deployed with minimal forethought or oversight”. In this regard, there has been “a fresh boom in the school safety technology market, with a handful of established companies and a growing crop of startups now competing to offer ever-more comprehensive surveillance capabilities”.

Benjamin Herold suggests “such is the new reality for America’s schools, which are hastily erecting a massive digital surveillance infrastructure, often with little regard for either its effectiveness or its impact on civil liberties. Social media monitoring companies track the posts of everyone in the areas surrounding schools, including adults. Other companies scan the private digital content of millions of students using district-issued computers and accounts. Those services are complemented with tip-reporting apps, facial-recognition software, and other new technology systems. Florida offers a glimpse of where it all may head: Lawmakers there are pushing for a state database that would combine individuals’ educational, criminal justice, and social-service records with their social media data, then share it all with law enforcement. Across the country, the results of such efforts are already far-reaching. The new technologies have yielded just a few anecdotal reports of thwarted school violence, the details of which are often difficult to pin down. But they’ve also shone a huge new spotlight on the problems of suicide and self-harm among the nation’s children. And they’ve created a vast new legal and ethical gray area, which harried school administrators are mostly left to navigate on their own”.

According to Benjamin Herold, critics[29] argue “the real threats related to school surveillance: chilling students’ intellectual freedom and free-speech rights. Undermining their reasonable expectations of privacy. Traumatizing children with false accusations. And systematically desensitizing a generation of kids to pervasive surveillance”. There are also “legal and ethical considerations for schools when students plug their personal devices into district-issued computers, leading Gaggle’s filters to automatically suck up and scan their private photos and videos”.


Technological surveillance is spreading across the world. Steven Feldstein notes “at least seventy-five out of 176 countries globally are actively using AI technologies for surveillance purposes. This includes: smart city/safe city platforms (fifty-six countries), facial recognition systems (sixty-four countries), and smart policing (fifty-two countries)”. “China is a major driver of AI surveillance worldwide. Technology linked to Chinese companies – particularly Huawei, Hikvision, Dahua, and ZTE – supply AI surveillance technology in sixty-three countries…Huawei alone is responsible for providing AI surveillance technology to at least fifty countries worldwide. No other company comes close”. “AI surveillance technology supplied by U.S. firms is present in thirty-two countries…France, Germany, Israel, Japan are also playing important roles in proliferating this technology”.

Steven Feldstein adds that “liberal democracies are major users of AI surveillance…51 percent of advanced democracies deploy AI surveillance systems. In contrast, 37 percent of closed autocratic states, 41 percent of electoral autocratic/competitive autocratic states, and 41 percent of electoral democracies/illiberal democracies deploy AI surveillance technology. There is a strong relationship between a country’s military expenditures and a government’s use of AI surveillance systems”.

Perhaps the trend favouring technological surveillance shouldn’t come as a surprise because it is consistent with the transition from a low-information to a high-information environment. In this transition, the regulation of surveillance is a central issue because it determines the quantity of information available for use and the basis on which activities can be organised. But it is difficult to shape the rules because surveillance exists in many shades. It is difficult to differentiate one form of surveillance from another, since they are underpinned by the same technologies and types of data. What makes one form of surveillance more acceptable than another? Is it a question of who is conducting the surveillance, or of its purpose? Even then, the answers at the point of collection may not matter much because it is hard to control where data ends up and how it is used. One of the biggest challenges in regulating surveillance relates to the flow of data across borders.


Abacus (1 August 2019) “China’s social credit system may soon target online speech”. SCMP.

Abacus (17 August 2019) “Facial recognition is enforcing traffic laws in Shenzhen”. SCMP.

Alexandra Ma (29 October 2018) “China has started ranking citizens with a creepy social credit system – here’s what you can do wrong, and the embarrassing, demeaning ways they can punish you”. Business Insider US.

Alexandra Ma (11 May 2019) “China uses an intrusive surveillance app to track its Muslim minority, with technology that could be exported to the rest of the world. Here’s how it works”. Business Insider US.

Anna Mitchell, Larry Diamond (2 February 2018) “China’s surveillance state should scare everyone”. The Atlantic.

Avi Bar-Zeev (28 May 2019) “The eyes are the prize: Eye-tracking technology is advertising’s holy grail”. Motherboard. Tech by Vice.

Barbara Boland (8 July 2019) “Americans shocked to find their rights literally vanish at U.S. airports”. The American Conservative.

Benjamin Herold (30 May 2019) “Schools are deploying massive digital surveillance systems. The results are alarming”. Education Week.

Bernhard Zand (26 July 2018) “A surveillance state unlike any the world has ever seen”. Spiegel.

Bloomberg (20 September 2019) “India plans to adopt China-style facial recognition in policing, despite having no data privacy laws”.

Caroline Haskins (6 February 2019) “Dozens of cities have secretly experimented with predictive policing software”. Motherboard. Tech by Vice.

Caroline Haskins (12 July 2019) “Revealed: This is Palantir’s top-secret user manual for cops”. Motherboard. Tech by Vice.

Charles Rollet (5 June 2018) “The odd reality of life under China’s all-seeing credit score system”. Wired.

Doug Stephens (29 January 2019) “Is surveillance the future of service?”

Echo Xie (16 September 2019) “Artificial intelligence is watching China’s students but how well can it really see?” SCMP.

Frank Tang (17 September 2019) “China pushing ahead with controversial corporate social credit rating system for 33 million firms”. SCMP.

Freddie Stuart (28 September 2019) “How facial recognition technology is bringing surveillance capitalism to our streets”. Originally published at openDemocracy.

Glenn Greenwald (8 February 2019) “Jeff Bezos protests the invasion of his privacy, as Amazon builds a sprawling surveillance state for everyone else”. The Intercept.

Harrison Jacobs (July 2018) “China’s Big Brother surveillance technology is impressive and chilling – but it’s not nearly as all-seeing as the government wants you to think.” Business Insider.

Ignacio Munyo, Martín Rossi (30 June 2019) “Police-monitored cameras and crime”. Voxeu.

Isobel Asher Hamilton (30 May 2019) “Apple and WhatsApp are trying to fight off plans from British spies to ghost their way into your encrypted messages”. Business Insider US.

Jane Zhang (4 October 2019) “In Chongqing, the world’s most surveilled city, residents are happy to trade privacy for security”. SCMP.

Jerri-Lynn Scofield (4 June 2019) “Privacy watch: US visa applicants must provide social media details”. Naked Capitalism.

Jevan Hutson (28 January 2019) “How Ring & Rekognition set the stage for consumer generated mass surveillance”. Washington Journal of Law, Technology & Arts.

Joel Kotkin (18 February 2018) “From disruption to dystopia: Silicon Valley envisions the city of the future”. The Daily Beast.

Jonathan Stempel (8 August 2019) “Facebook loses facial recognition technology appeal, must face class action”. Reuters.

Kellen Browning (10 August 2018) “Sacramento welfare investigators track drivers to find fraud. Privacy group raises red flags”. The Sacramento Bee.

Koh Hong-Eng (3 June 2018) “Big Brother surveillance? How video cameras can make cities safer and contribute to higher economic growth”. South China Morning Post.

Laura Counts (28 May 2019) “How hedge funds use satellite images to beat Wall Street – and Main Street”. BerkeleyHass.

Leslie K. John (September 2018) “Uninformed consent”. Harvard Business Review.

Louise Matsakis (29 July 2019) “How the West got China’s social credit system wrong”. Wired.

Maddy Savage (22 October 2018) “Thousands of Swedes are inserting microchips under their skin”. National Public Radio.

Mara Hvistendahl (14 December 2017) “Inside China’s vast new experiment in social ranking”. Wired.

Mike Elgan (26 August 2019) “Uh-oh: Silicon Valley is building a Chinese-style social credit system”. Fast Company.

Miranda Katz (12 August 2018) “The creative ways your boss is spying on you”. Wired.

Nancy Scola (July/August 2018) “Google Is building a city of the future in Toronto. Would anyone want to live there?” Politico.

Nathan Munn (28 February 2019) “Police in Canada are tracking people’s negative behavior in a risk database”. Motherboard. Tech at Vice.

Neil M. Richards (25 March 2013) “The dangers of surveillance”. Harvard Law Review.

Nick Tabor (20 October 2018) “Smile! The secretive business of facial-recognition software in retail stores”. NYmag.

Nicole Kobie (21 January 2019) “The complicated truth about China’s social credit system”. Wired.

Peter Aldhous (19 May 2019) “This genealogy database helped solve dozens of crimes. But its new privacy rules will restrict access by cops”. BuzzFeed News.

Peter Rogers (29 May 2019) “Is China’s social credit system coming to Australia?” The Conversation.

Phoebe Zhang (23 September 2019) “Chinese court offers bounties to catch deadbeats”. Inkstone Newsletter.

Rachel Botsman (21 October 2017) “Big data meets big brother as China moves to rate its citizens”. Wired.

Sarah Dai (26 Feb 2019) “Chinese police test gait-recognition technology from AI start-up Watrix that identifies people based on how they walk”. South China Morning Post.

Sarah Dai (15 May 2019) “How 9/11 and China’s plan for blanket surveillance created a wave that CCTV camera makers Hikvision and Dahua rode to huge success”. South China Morning Post.

Scott Thurm (13 July 2018) “Microsoft calls for federal regulation of facial recognition”. Wired.

Siena Anstis, Ronald J. Deibert, John Scott-Railton (19 July 2019) “A proposed response to the commercial surveillance emergency”. Lawfare Blog.

Shainaz Firfiray (22 November 2018) “Microchip implants are threatening workers’ rights”. The Conversation.

Shirin Ghaffary (14 May 2019) “San Francisco’s facial recognition technology ban, explained”. Recode-Vox.

Shirin Ghaffary (16 May 2019) “The ‘smarter’ wall: How drones, sensors, and AI are patrolling the border”. Recode-Vox.

Sidewalk Labs (17 June 2019) “Sidewalk Lab’s proposal: Master innovation and development plan”.

Simon Head (24 May 2018) “Big Brother Goes Digital”. NYbooks.

Steven Feldstein (17 September 2019) “The global expansion of AI surveillance”. Carnegie Endowment.

Susie Cagle (29 May 2019) “Meet the security company building an international database of banned bar patrons”. Medium.

Wendy Wu (28 August 2019) “European firms warned China’s social credit system could be a matter of life or death.” SCMP.

Yves Smith (12 February 2019) “Reverse location search warrant: A new personal data hoovering exercise brought to you by Google”. Naked Capitalism.

Zsolt Katona, Marcus Painter, Panos N. Patatoukas, Jieyen Zeng (30 July 2018) “On the capital market consequences of alternative data: Evidence from outer space”. 9th Miami Behavioral Finance Conference.


[2] Alexandra Ma.

[3] Sarah Dai.

[4] Peter Aldhous.

[5] Kellen Browning.

[6] Caroline Haskins.

[7] Caroline Haskins.

[8] Nathan Munn.

[9] Yves Smith.

[10] Shirin Ghaffary.

[11] Shirin Ghaffary.

[12] Rachel Botsman.

[13] US Vice President Mike Pence has criticized the project. “By 2020, China’s rulers aim to implement an Orwellian system premised on controlling virtually every facet of human life – the so-called social credit score”. See Louise Matsakis.

[14] Compiled from articles by Anna Mitchell and Larry Diamond, Alexandra Ma, Mara Hvistendahl, Charles Rollet.

[15] The eastern Chinese city of Jinan started enforcing a social credit system for dog owners in 2017. Alexandra Ma.

[16] See Louise Matsakis.

[17] See Wendy Wu.

[18] “Sidewalk Lab’s proposal: Master innovation and development plan”.

[19] Jonathan Stempel.

[20] Abacus.

[21] Miranda Katz.

[22] Shainaz Firfiray.

[23] A term popularised by Alex “Sandy” Pentland and his colleagues at MIT’s Media Lab. Simon Head.

[24] Doug Stephens.

[25] Avi Bar-Zeev.

[26] Doug Stephens.

[27] The text is based on a summary of the paper by Laura Counts. The working paper is authored by Zsolt Katona, Marcus Painter, Panos N. Patatoukas and Jieyen Zeng.

[28] Maddy Savage.

[29] American Civil Liberties Union.