The transparency paradigm

Phuah Eng Chye (28 March 2020)

The gap between the ideals of privacy regulation and the operating realities of an information society is growing wider. The pace of information accumulation is accelerating and undermining the relevance of a privacy paradigm. Recent studies[1] indicate that (1) the ability to protect privacy diminishes as the quantity of information increases; and (2) privacy regulation is unable to address an expanding range of information ills. A privacy paradigm is therefore likely to be riddled with gaps or contradictions.

The alternative is to view the information challenges (including privacy) within the context of a transparency paradigm. Replacing a privacy paradigm with a transparency paradigm is more than just semantics. Strictly speaking, privacy’s scope is narrow. For example, while there is overlap between privacy, anonymity, data security and fairness, this does not mean that privacy is the appropriate or only remedy. In this regard, the use of privacy as a legal euphemism to address a broad range of information ills looks like overreach.

Privacy’s other flaw is that it is a blunt tool with a single solution – removing data from use. This has evident weaknesses. For example, using privacy regulation to address discrimination leads to solutions that depend on randomness rather than analysis[2]. Privacy regulation can end up worsening rather than eliminating discrimination because it hampers the discovery of problems and bad actors and impedes victims from proving their innocence or getting assistance.

A transparency paradigm would not ignore privacy considerations. But it would expand the range of possible solutions by considering transparency-based solutions. Transparency-based solutions require data to diagnose problems. They are not fool-proof, but they are the fairest way of differentiating truth from falsehood and they facilitate a direct and calibrated response. In any case, this is a superior approach to not using data. It should be noted that a high-information environment opens up new transparency-driven approaches to addressing information ills that were previously not available in a low-information environment.

Privacy-based forgetfulness vs transparency-based forgiveness

Privacy-based approaches are commonly used to mitigate the harshness of transparency. For example, some laws authorise courts to expunge or seal criminal records for certain types of arrests and convictions, while employers are discouraged from discriminatory practices under the Equal Employment Opportunity laws[3]. Article 9 of the EU’s General Data Protection Regulation (GDPR) “disallows the use of information that reveals a person’s race, ethnicity, political views, religion, union membership, health, and sexual practices unless it falls into certain exceptions such as legal defense, consent of the individual, or certain public interests”[4].

This approach has been distilled into the principle of forgetfulness. Marcus Cayce Myers notes GDPR’s Article 17 codifies the right to be forgotten[5] where a user has a “right to obtain from the controller the erasure of personal data relating to them and the abstention from further dissemination of such data, especially in relation to personal data” when there is no longer use for the data or, more importantly, when the data subject decides he or she no longer wants the information to be public. The controller not only has to remove data on sites they are in control of but must also “take all reasonable steps, including technical measures…to inform third parties which are processing such data, that a data subject requests them to erase any links to, or copy or replication of that personal data.”

The right to be forgotten is controversial in terms of its effects on third parties. “Under this regulation, retweets, sharing, comments, re-posting, or posted comments constitute dissemination of personal information. Search engines and social media outlets are required to not only inform these users of this request, but also take technical measures to remove these specific data…that the data subject no longer wants disseminated on the Internet. Controllers are not only required to remove information when requested, but they are also required to limit access to data when the information is no longer used by the controller or if there is some question as to whether the data is a truthful representation of the data subject”. Exceptions are made for data involving the freedom of expression or public interest.

Marcus Cayce Myers notes the “required removal or limited access to data provides a logistical problem for many search engines. It first requires non-E.U. based companies to adhere to E.U. regulation in their maintenance and promotion of world-wide websites and search engines. Second, and perhaps most problematic, is that these regulations direct companies to respond immediately to the requests of individual users who at any given time may have small, complex removal requests that are difficult to track and expensive to remove. In March 2014 the right to be forgotten in Article 17 was replaced with a more moderate right to erasure. The major difference between the right to be forgotten and right to erasure is unclear because both deal with the removal of personal information from controllers of data, such as search engines”.

Marcus Cayce Myers adds that “American law, by contrast, has no such historical justification for removing a person’s past, criminal or otherwise. American law has always favored the idea that free speech allowed for a person’s criminal past to become part of public record because of the need to protect society”. “It is this clash of legal and philosophical values that underpins the right to be forgotten issue”.

Thus, privacy regulation is based on the logic that withholding certain information can prevent discriminatory practices as well as provide opportunities for a fresh start. There are, however, several problems with privacy-based forgetfulness.

First, forgetfulness controls are easily bypassed. For example, “employers are supposed not to ask about such information during the hiring process – but searching for it online significantly reduces the risk of detection”. Employers may also rely on “statistical discrimination strategies”[6]. In addition, these rules apply only to known databases within jurisdictional reach, not to unofficial or illegal ones.

Second, Ciara Byrne notes the right to be forgotten – or the scrubbing of personal data used by credit scoring agencies, potential employers or landlords – is mainly utilised by high-income and professional users. “In 2018, Google released data on the 2.4 million right to be forgotten requests…Most delisting requests (30%) related to professional information or professional wrongdoing and self-authored content like social media posts. And just 1,000 requesters, or 0.25% of the total, generated 15% of all URLs. Many of these frequent requesters were law firms and reputation management services. Wealthier users meanwhile have an array of privacy-protecting tools at their disposal. Reputation management firms like Igniyte charge $1,850 to $26,000 per month to repair the online reputations of individuals or companies by removing or suppressing negative content, adding positive content and tweaking SEO”. The risk is a divide comprising “privacy haves and privacy have-nots, where privacy is a luxury for affluent people.”

Third, does it make sense to make the internet forgetful when the real world is not? There have been many instances where past accounts of misdeeds resurface to cause embarrassment, upend careers or even result in prosecutions. In relation to this, there is a need to reconcile forgetfulness with the risk of potential legal liabilities. For example, would a firm that recruited an individual with a criminal record be liable if that employee commits a crime? The problem is that the law does not offer protection for overlooking adverse data. Most governance controls require due process to explain decisions (such as hiring). To mitigate legal risks, firms may implement controls that neutralise the rules of forgetfulness. Hence, privacy-based forgetfulness has difficulty achieving its desired outcomes.

To sum up, there is little excuse for being forgetful or overlooking data in a high-information environment. The more compatible approach in a transparent environment is the principle of forgiveness. Forgiveness requires knowledge of the facts and is therefore a relatively more truthful and transparent approach. Hence, a system based on forgiveness would first require that most data and records (at least the official ones) are accessible. However, a transparent environment can be harsh.

The harshness of transparency needs to be mitigated by applying the principles of tolerance (of errors), compassion and redemption. The system should not be enforcement-focused or punishment-oriented. In maintaining transparency of past instances of misbehaviour, the system should provide room for slack by being tolerant of misjudgements and errors. Penalties should only be punitive when the infringements are frequent, high-impact or systemic.

Forgiveness requires compassion. There may be mitigating circumstances for the error or crime. These instances would provide grounds for redemption, where records of the infringement are “deactivated” and cannot be used against the individual. Legal protection should be afforded to decision-makers who act in accordance with the deactivation of a record. Redemption[7] is made possible by instituting processes through which individuals can “deactivate” adverse records through their good deeds. Forgiveness can be complemented by an incentive-based approach that emphasises rewarding good behaviour rather than punishing bad behaviour. The forgiveness approach can be tested in the design of a social credit system.
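As a purely illustrative sketch (not a description of any existing system), the record structure below shows one way “deactivation” might work in practice: adverse records remain on file for transparency, but once enough good deeds are logged the oldest adverse record is flagged as deactivated and excluded from any score used by decision-makers. All names and thresholds are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Record:
    description: str
    adverse: bool = False       # True for infringements, False for good deeds
    deactivated: bool = False   # deactivated adverse records stay visible but carry no weight

@dataclass
class CitizenFile:
    records: List[Record] = field(default_factory=list)
    DEEDS_PER_REDEMPTION = 3    # hypothetical threshold: good deeds needed to deactivate one adverse record

    def add(self, description: str, adverse: bool = False) -> None:
        self.records.append(Record(description, adverse))
        if not adverse:
            self._redeem()

    def _redeem(self) -> None:
        # Count good deeds not yet "spent" on an earlier redemption.
        good = sum(1 for r in self.records if not r.adverse)
        spent = sum(1 for r in self.records if r.adverse and r.deactivated) * self.DEEDS_PER_REDEMPTION
        active_adverse = [r for r in self.records if r.adverse and not r.deactivated]
        if active_adverse and good - spent >= self.DEEDS_PER_REDEMPTION:
            active_adverse[0].deactivated = True  # the oldest active adverse record is forgiven

    def score(self) -> int:
        # Decision-makers see the full history, but deactivated records no longer count against the individual.
        return sum(1 for r in self.records if not r.adverse) - \
               sum(1 for r in self.records if r.adverse and not r.deactivated)
```

Under these assumptions, after three good deeds are recorded the oldest infringement stops counting against the individual’s score while remaining visible in the file, which is the combination of transparency and forgiveness described above.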

Fairness and due process

A forgiveness approach should be complemented by fairness and robust due process. The importance of these principles is recognised in the EU’s General Data Protection Regulation (GDPR)[8], which highlights fairness, lawfulness and transparency as the three key principles underpinning the processing of personal data. The GDPR contains clauses requiring accuracy and providing individuals explicit rights to access, erase, rectify and restrict the processing of their personal data. This is reinforced by clauses imposing right-to-know, disclosure and various accountabilities to ensure information fiduciaries (platforms) treat users fairly. Hence, the GDPR covers many of the bases for fairness.

Fairness is more compatible with transparency as it is data-dependent. Hence, data can be analysed to highlight patterns of unfairness, such as to prevent[9] or detect[10] systemic discrimination or incidences of exploitation. This allows the formulation of direct and targeted initiatives, either to tackle the worst cases (by firm or geography) or to assist and rehabilitate the victims who suffer most from denial of opportunities or abusive conduct.
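To make this concrete, the snippet below is a minimal sketch (not drawn from Kusner and Loftus or any specific regulator) of how transparency enables detection: given access to decision data, a simple disparate-impact check can flag groups whose approval rates fall well below that of the best-treated group. The column names and the 0.8 threshold (the conventional “four-fifths rule”) are assumptions for illustration.

```python
import pandas as pd

def disparate_impact(decisions: pd.DataFrame, group_col: str, outcome_col: str,
                     threshold: float = 0.8) -> pd.DataFrame:
    """Flag groups whose favourable-outcome rate falls below `threshold`
    times the rate of the best-treated group (the 'four-fifths rule')."""
    rates = decisions.groupby(group_col)[outcome_col].mean()   # approval rate per group
    ratios = rates / rates.max()                               # each group relative to the best-treated group
    return pd.DataFrame({"approval_rate": rates,
                         "impact_ratio": ratios,
                         "flagged": ratios < threshold})

# Hypothetical usage with hiring or lending decision records:
# report = disparate_impact(df, group_col="group", outcome_col="approved")
# print(report[report["flagged"]])
```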

There remain areas where transparency is regarded as problematic; such as in rating information and scoring methodologies. Jerri-Lynn Scofield notes consumers are subject to “secret scores: hidden ratings that determine how long each of us waits on hold when calling a business, whether we can return items at a store, and what type of service we receive. A low score sends you to the back of the queue; high scores get you elite treatment”…There are questions about these scores; not least “the data they used to make decisions about you”, “how they analyzed that data or what their decision was…machines and algorithms… need to be understood, maintained, and regulated”. Red flags are raised “any time a decision is outsourced to a black box, and oversight surrendered”.

Jerri-Lynn Scofield highlights #REPRESENT[11] made considerable efforts to “shine a light on a part of the world of unregulated data collection” where “the ability of corporations to target, manipulate and discriminate against Americans is unprecedented and inconsistent with the principles of competition and free markets…Surveillance scoring promotes inequality by empowering companies to decide which consumers they want to do business with and on what terms, weeding out the people who they deem less valuable. Such discrimination is as much a threat to democracy as it is to a free market.” A major criticism is that “users have little recourse to correct false information about them or challenge their ratings…credit reports frequently contain errors – which are time-consuming to correct – and the system is vulnerable to hacking”.

Fairness is affected by how rules are made, operationalised and enforced. In this regard, the achievement of fairness is dependent on the levels of transparency (so people can judge whether decisions are fair) and governance (whether due processes are robust). Otherwise, the consequences are that disadvantaged individuals may get victimised or made to bear the costs of errors.

It is common to find high levels of public dissatisfaction with the due processes for complaints, disputes and data rectification. Anthony Dukes notes “consumer resentment may stem from poor customer service. In fact, most Americans have fought with phone menus, desperately seeking a live service agent to seek a refund. In 2013, Americans spent an average of 13 hours disputing a purchase or resolving a problem with customer service”. Their research[12] reveals “many complaint processes are actually designed to help companies retain profits by limiting the number of customers who can successfully resolve their complaints…Forcing customers to talk to a computer, circulate through phone menus or sit on hold”. “Only by insisting to talk to a manager or threatening to leave the company do consumers come closer to obtaining a refund…This allows companies to exploit customers’ individual differences in age, race and gender so that only the squeakiest wheels are compensated”. The research “suggests that in markets without much competition, companies are more likely to implement a tiered complaint process and profit from the reduced payouts to customers”.

Lou Downe notes the service shortfalls arise “because we don’t design services, we let them happen by accident. The services we use everyday, from student loans to healthcare and housing, are more likely to be the product of technological constraints, political whim, and personal taste than they are the conscious decision of an individual or organization. By not designing our services, we’re accepting that they will simply evolve to the conditions around them, regardless of whether or not that means a service meets user needs, is financially sustainable, or achieves a certain outcome”.

“What is more surprising perhaps is that up to 60% of the cost of these services is spent on service failure – phone calls asking government how to do something, or pieces of casework where forms aren’t filled in correctly. Spending on public services amounts to roughly a third of U.K. GDP, meaning that bad service design is one of the biggest unnecessary costs to U.K. taxpayers. And yet, it’s not simply users that are paying the price for bad service design, it’s our organizations, too. We are footing the bill for the unnecessary phone calls, the returned products, complaints, or missed appointments as much as our users”.

Lou Downe suggests services continue to work this way because it has “remained unrefereed and unscrutinized…We have a collective blindness for services. They are the gaps between things, and so not only do we fail to see them, but we fail to recognize when they aren’t working. They are often provided by multiple organizations, or parts of an organization, so their cost and the negative effect they have on the world is more difficult to track than the cost of failing technology…Service failure is hidden in wrongly worded questions, broken links, and poorly trained staff; in emails not sent, phone lines that have been closed or inaccessible PDFs. In short, it’s hidden in the small, everyday failures of our services to meet the very basic needs our users have – to be able to do the thing they set out to do”. Hence, “we need to turn these everyday decisions into conscious design decisions, with the full awareness of the effect they will have on the service we provide, but we can only do that if we know what we want to achieve – what a good service is and what it isn’t”.

Overall, due processes tend to be weak because there is little incentive for the public and private sectors to invest in recourse systems. In addition, there is a bureaucratic tendency (even among private firms) to evolve rules to favour or protect administrators at the expense of the public. This weakness can be addressed through greater process transparency and public discussion. The alternative is to consider oversight regulation to protect the interests of users.

There are well-established recourse frameworks in the finance industry. Alessandro Acquisti, Curtis Taylor and Liad Wagman note the Fair Credit Reporting Act (FCRA) “established permissible purposes of credit information disclosure, codified information flows along the lines that they had naturally developed in the market, introduced dispute settlement mechanisms and data correction procedures, and assigned expiration dates to negative information such as bankruptcy and payment defaults…The Consumer Credit Reporting Reform Act (CCRRA) of 1996 introduced for the first time duties for financial information providers. In order to correct inaccuracies in consumers’ records, the CCRRA mandated a two-sided information flow to/from credit bureaus and providers, and formalized some information flows among affiliates. The Gramm-Leach-Bliley (GLB) Act of 1999 extended the CCRRA by formally and legally allowing a variety of financial institutions to sell, trade, share, or give out nonpublic personal information about their customers to non-affiliates, unless their customers direct that such information not be disclosed by opting out…financial institutions can disclose users’ information to credit reporting agencies to comply with any other laws or regulations”.

Distribution of information and democracy

Intuitively, most associate privacy with democracy. I would argue this intuition is broadly incorrect. Privacy tends to concentrate the power of those with information privileges. On this note, we should be wary of arguments based on national security, as they end up granting information privileges, usually at the expense of the majority of citizens, and impede coordination and cooperation. Transparency rather than privacy is more likely to foster democracy. Rising transparency, of course, implies an erosion of individual privacy.

Nonetheless, there is discomfort with transparency because it exposes the power of information. First, there are adverse consequences when personal information is used to embarrass or harass individuals. To protect themselves, children and adults often prefer to keep some information away from the prying eyes of family and friends, teachers, bosses, co-workers and the government in general.

Second, there are risks that governments or private firms will not be able to resist the temptation to exercise power arising from their control of large amounts of personal data. Third, some concerns relate to organisational restructuring. For example, transparency may mean employers will be alerted to staff who are moonlighting at gigs.

Transparency also poses a threat to the power of the elite. Data is autonomous: it can be unfavourable and undermine the political or corporate narrative. When it is favourable and serves the cause, it is published. When it is not, it is manipulated or even discarded. Expanding transparency also increases the risk that hidden benefits, corruption and regulatory evasion may be uncovered. Given that transparency threatens vested interests, there will be considerable resistance to widening information distribution.

However, the discomfort does not mean that transparency will translate into dystopia. The threat of dystopia arises not because more information is captured but because power over information is concentrated and the transparency of information is one-sided. In this context, early visions of information dystopias (George Orwell’s 1984 and Jeremy Bentham’s panopticon) are based on centralised control of information – reflective of hierarchical structures with highly limited access to information.

In the modern landscape, the transparency or distribution of information has widened with the emergence of distributed or peer-to-peer systems such as networks and platforms. Two-sided transparency puts information out in the open at everyone’s reach. It may increase disputes but it also increases accountability, and acts as a check and balance on behaviour. Transparency is the best form of governance.

Hence, it does not necessarily follow that China’s[13] extensive surveillance infrastructure will end up as a dystopia. The issue is not the surveillance infrastructure. China’s fatal shortcoming is that its information system lacks the two-sided transparency and robust due processes needed to widen information distribution and reinforce democracy. In the transparency paradigm, wide distribution of information constitutes a vital safeguard.

In this context, Rachel Botsman asks whether China’s social credit system “is in fact a more desirably transparent approach to surveillance in a country that has a long history of watching its citizens…As a Chinese person, knowing that everything I do online is being tracked, would I rather be aware of the details of what is being monitored and use this information to teach myself how to abide by the rules?” She suggests “while it might be too late to stop this new era, we do have choices and rights we can exert now. For one thing, we need to be able to rate the raters…Our central choice now is whether this surveillance is a secret, one-way panopticon – or a mutual, transparent kind of ‘coveillance’ that involves watching the watchers[14]“.

Rachel Botsman argues “our trust should start with individuals within government (or whoever is controlling the system). We need trustworthy mechanisms to make sure ratings and data are used responsibly and with our permission. To trust the system, we need to reduce the unknowns. That means taking steps to reduce the opacity of the algorithms. The argument against mandatory disclosures is that if you know what happens under the hood, the system could become rigged or hacked. But if humans are being reduced to a rating that could significantly impact their lives, there must be transparency in how the scoring works”.

Rachel Botsman concludes “it is still too early to know how a culture of constant monitoring plus rating will turn out. What will happen when these systems, charting the social, moral and financial history of an entire population, come into full force? How much further will privacy and freedom of speech (long under siege in China) be eroded? Who will decide which way the system goes? These are questions we all need to consider, and soon. Today China, tomorrow a place near you. The real questions about the future of trust are not technological or economic; they are ethical. If we are not vigilant, distributed trust could become networked shame. Life will become an endless popularity contest, with us all vying for the highest rating that only a few can attain”.

Rising levels of transparency are thus a major game changer. Incentives, strategies and behaviours change when information is out in the open and players become more skilful and adept. The changing patterns of social engagement make it difficult to predict whether technological surveillance will tighten or disrupt control.

In this context, the advancement of technological surveillance does not seem to have silenced political and social discourse. Instead, the opposite seems to be true. The emergence of the new social movements (e.g. Me-too, Fight-for-$15, yellow vests and the HK protestors), hackers, whistle blowers and bitcoin enthusiasts demonstrate that the same technology and data used for state and corporate control can be weaponised by disgruntled citizens to organise a counter-movement.

Hence, wide distribution of information and increased transparency make it difficult for governments, intelligence agencies and corporations to dominate virtual space[15]. This is particularly evident in Hong Kong, where the protestors[16] are highly sophisticated and seem to have gotten the upper hand, including the ability to foil surveillance technology such as facial recognition.

Victor Ting notes “doxing has been used as a weapon during the social unrest, with people on either side of the anti-government movement disclosing and spreading personal information of not only their opponents but also their family members, often along with intimidating messages…Complaints have shot up from 57 in 2018 to a staggering 4,370 by the end of last year. About 36 per cent or 1,580 cases involved unauthorised disclosure of personal data belonging to police officers and their families, while 20 per cent, or 873 cases, concerned doxing of protesters”.

We can debate what democracy is but democracy does not occur in a vacuum. Deliberate policies are required to mitigate the harsh consequences of information visibility. In other words, policies and rules are needed to promote an environment of “transparency without fear or favour”. It is important to establish a fence to protect individuals as well as decision-makers from official, legal or social retaliatory actions.

Towards this end, there should be greater exploration of transparency-based remedies to blunt the threat of abusive actions. For example, most attacks are non-transparent; i.e. the source is anonymous. One approach is to identify and isolate attackers through the creation of a transparent and authenticated zone[17] where the identities of attackers would be visible. This would create a safe zone, while attacks would be more likely to be confined to the non-transparent zones, which have lower levels of credibility.
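A minimal sketch of the idea, with entirely hypothetical names and no reference to any deployed system: content is partitioned by whether the author’s identity has been verified, so that the authenticated zone remains a “safe zone” while unverified material is segregated and treated as less credible.

```python
from typing import NamedTuple, List, Tuple

class Post(NamedTuple):
    author_id: str
    verified: bool   # identity authenticated, e.g. via a trusted registry (assumption)
    text: str

def partition_by_zone(posts: List[Post]) -> Tuple[List[Post], List[Post]]:
    """Split posts into a transparent (authenticated) zone and an opaque (anonymous) zone."""
    transparent = [p for p in posts if p.verified]
    opaque = [p for p in posts if not p.verified]
    return transparent, opaque

# Hypothetical usage: attacks originating in the transparent zone are attributable,
# while content in the opaque zone can be down-ranked or labelled as unverified.
```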

We are still adjusting to the new levels of transparency. In this regard, transparency can have the same effects as “coming out of the closet”. To some extent, making information public eliminates the value of blackmail. Instead, it exposes individuals to being blackballed or discriminated against. This requires different types of protection.

Overall, a functioning democracy requires democracy of information. Information transparency and its wide distribution are critical building blocks to ensuring citizens are well informed and for bringing us closer to the ideal of active and broad civic participation in society.

Trust and income inequality

Trust and transparency feed on each other. For example, the ability of sharing platforms to scale stranger sharing is dependent on the willingness of individuals to reveal personal information and share personal effects. The success of several platforms is due to two-sided transparency (the identities of the individuals and assets) and trust in authenticity (what is promised will be delivered). Trust is lost and customers will flee if the information turns out to be false or the process is faulty.

In the same manner, transparency should be used to address the challenges of inequality and poverty. There is a view that transparency works against the lower income groups because information is typically used against them (exploitation, criminalisation of poverty and identification of illegal immigrants). It is true that information is being used against the poor, but the problem is that information is not being used to assist the lower income groups.

In a transparency paradigm, value is created from information[18]. Blind spots (non-transparency) – such as the lack of identification, low visibility of ownership and informal activities – hamper the government’s ability to address income and opportunity shortages. In contrast, privacy actually benefits the rich and powerful more as it acts as a shield for their wealth and misdeeds. Typically, they would be privileged in having favoured access to other people’s information while not divulging their own.

Hence, transparency can benefit the lower income groups if it is two-sided and information is instead used to solve their problems and protect their interests. Income redistribution therefore needs to be accompanied by information redistribution. Information redistribution should be undertaken with a view to increasing the lower income groups’ possession of identity, knowledge, assets and choice. Improvements in profiling can form the basis for identifying gaps and can underpin targeted assistance. Distributing information to the lower income groups is therefore a prerequisite to increasing their participation and ensuring a more even distribution of income and value.

Conclusion

Transparency expands possibilities and is a major driver of organisational and social change. But transparency amplifies the power of information. It is difficult to resist the temptation to use information for personal gain or to oppress or exploit others. Transparency also makes differences in beliefs and cultures more visible and gives rise to team-driven conflicts.

Overall, our experience with modern transparency is limited. We are not sure about the economics of transparency, nor is there a clear view on what the rules should be for managing social life in a fishbowl. I believe the key to unlocking the transparency paradigm is to embrace transparency, to learn how to use information better and to formulate policies to manage its double-edged nature so that information can be used to bring out the best in us as a society.

References

Alessandro Acquisti, Curtis Taylor, Liad Wagman (8 March 2016) “The economics of privacy”. Journal of Economic Literature, Vol. 52, No. 2; Sloan Foundation Economics Research Paper. https://ssrn.com/abstract=2580411

Anthony Dukes (30 December 2019) “Why bad customer service won’t improve anytime soon”. Originally published at The Conversation. https://www.nakedcapitalism.com/2019/12/why-bad-customer-service-wont-improve-anytime-soon.html

Anthony Dukes, Yi Zhu (16 May 2019) “Why customer service frustrates consumers: using a tiered organizational structure to exploit hassle costs”. https://doi.org/10.1287/mksc.2019.1149

Chauncey Jung (23 February 2020) “China ventures outside the Great Firewall, only to hit the brick wall of online etiquette and trolls”. SCMP. https://www.scmp.com/comment/opinion/article/3051757/china-ventures-outside-great-firewall-only-hit-brick-wall-online

Harper Neidig (25 June 2019) “Advocates push FTC crackdown on secret consumer scores”. The Hill. https://thehill.com/policy/technology/450084-advocates-push-ftc-crackdown-on-secret-consumer-scores

Jerri-Lynn Scofield (4 November 2019) “What’s inside that black box: What regulating data privacy and policing drunk driving have in common”. Naked Capitalism. https://www.nakedcapitalism.com/2019/11/whats-inside-that-black-box-what-regulating-data-privacy-and-policing-drunk-driving-have-in-common.html

Kate Eichhorn (27 December 2019) “Why an internet that never forgets is especially bad for young people”. MIT Technology Review.  https://www.technologyreview.com/s/614941/internet-that-never-forgets-bad-for-young-people-online-permanence/

Lou Downe (12 February 2020) “The hidden design failure that’s costing consumers trillions”. Fast Company. https://www.fastcompany.com/90463081/the-hidden-design-failure-thats-costing-consumers-trillions

Marcus Cayce Myers (December 2014) “Digital immortality vs. The right to be forgotten: A comparison of U.S. and E.U. laws concerning social media privacy”. Researchgate.net.

Matt J. Kusner, Joshua R. Loftus (4 February 2020) “The long road to fairer algorithms. Build models that identify and mitigate the causes of discrimination”. Nature. https://www.nature.com/articles/d41586-020-00274-3

Phuah Eng Chye (2015) Policy paradigms for the anorexic and financialised economy: Managing the transition to an information society.

Phuah Eng Chye (23 November 2019) “Information and organisation: China’s surveillance state growth model (Part 2: The clash of models)”.

Phuah Eng Chye (7 December 2019) “Information and organisation: China’s surveillance state growth model (Part 3: The relationship between surveillance and growth)”. http://economicsofinformationsociety.com/information-and-organisation-chinas-surveillance-state-growth-model-part-3-the-relationship-between-surveillance-and-growth/

Phuah Eng Chye (21 December 2019) “The debate on regulating surveillance”.

Phuah Eng Chye (4 January 2020) “The economics and regulation of privacy”.

Phuah Eng Chye (18 January 2020) “Big data and the future for privacy”.

Phuah Eng Chye (15 February 2020) “The costs of privacy regulation”.

Phuah Eng Chye (29 February 2020) “The journey from privacy to transparency (and back again)”. http://economicsofinformationsociety.com/the-journey-from-privacy-to-transparency-and-back-again/

Phuah Eng Chye (14 March 2020) “Features of transparency”.

Rachel Botsman (21 October 2017) “Big data meets big brother as China moves to rate its citizens”. Wired. https://www.wired.co.uk/article/chinese-government-social-credit-score-privacy-invasion

Siddhartha Bandyopadhyay (24 March 2020) “Why rehabilitation – not harsher prison sentences – makes economic sense”. The Conversation. https://theconversation.com/why-rehabilitation-not-harsher-prison-sentences-makes-economic-sense-132213

Victor Ting (21 January 2020) “Hong Kong’s privacy watchdog reveals 75-fold increase in doxxing complaints amid anti-government protests in 2019”. SCMP. https://www.scmp.com/news/hong-kong/law-and-crime/article/3047059/hong-kongs-privacy-watchdog-reveals-75-fold-increase


[1] See earlier articles on privacy.

[2] See Matt J. Kusner and Joshua R. Loftus.

[3] Alessandro Acquisti, Curtis Taylor and Liad Wagman.

[4] Marcus Cayce Myers.

[5] In France, the right to be forgotten or droit à l’oubli began as a concept that former criminals who had served their sentences were entitled to a fresh start unencumbered by their criminal past. Their criminal histories were erased to allow them to begin their lives anew as productive members of society. See Marcus Cayce Myers.

[6] See Alessandro Acquisti, Curtis Taylor and Liad Wagman.

[7] See Siddhartha Bandyopadhyay’s overview on the economic merits of rehabilitation vis-à-vis punishment.

[8] GDPR. https://eur-lex.europa.eu/legal-content/EN/TXT/?qid=1528874672298&uri=CELEX%3A02016R0679-20160504

[9] For example, facial recognition could be used to minimize incidences of police questioning minorities without a criminal record.

[10] Matt J. Kusner and Joshua R. Loftus outline what is required to build models to explore ethical issues, unearth the causes of discrimination and to build algorithms that correct for discrimination.

[11] See Harper Neidig.

[12] See Anthony Dukes and Yi Zhu.

[13] “Information and organisation: China’s surveillance state growth model (Part 3: The relationship between surveillance and growth)”.

[14] Quote attributed to Kevin Kelly from his book The inevitable, where he describes a future where the watchers and the watched will transparently track each other. See Rachel Botsman.

[15] See Chauncey Jung.

[16] Information and organisation: China’s surveillance state growth model (Part 2: The clash of models).

[17] See forthcoming articles on “Anonymity, opacity and zones” and “Government of the data (Part 3: Facebook paradise)”.

[18] Investment and lending would be minimal without information.