The dismal decade (Part 8: Seven forces shaping 2030 #6-#7)

Phuah Eng Chye (26 October 2024)

In my final article for this blog, I will focus on why Force #6 (the crisis of governments) and Force #7 (the transition to an information society) are major forces shaping 2030 scenarios.

Force #6 – The crisis of governments

The geopolitical-driven transition from governance to governments

“Decades ago, market forces were unleashed to drive globalisation. This period was marked by financial crises which could be traced to crises of governance. In recent years, governments began to reject market outcomes for geopolitical, climate and social reasons. Politicians took the opportunity to reclaim the power of running the economy that they lost to globalisation, MNCs, markets and platforms. However, their policy missives are blowing up markets and industries; and crushing animal spirits and increasing public agitation in the process. Today’s crises are a crisis of governments.

Governments are treading on treacherous and slippery ice. Yet we expect them to solve it, and only they are in a position to do so”.

“The Great Economic War (GEW) (Part 14: The middle game – Instability and reset of world order and global governance)”.

Oddly enough, there isn’t a crisis of governments in authoritarian countries because those governments have always maintained an iron grip. In contrast, it is governments in the liberal, market-driven Western democracies that are facing a crisis, and it is there that interventions are becoming intrusive and heavy-handed as they respond to various threats.

By way of background, the West used to preach that governments should limit their role in the economy to give room for private sector expansion and that they should rely primarily on governance as a means of keeping market forces in check. Certainly, their advice was followed as many developing countries enthusiastically embraced globalisation, markets and governance – albeit with caveats.

However, globalisation seemingly picked the “wrong” winners and produced “unwanted” outcomes. The West still hasn’t come to terms with the rise of China and the Global South, nor has it adjusted to a multipolar landscape. In this respect, the West’s shrinking global presence is reflected in its declining share and control of global resources, markets and financing. Western economies have generally been sluggish, their spending power increasingly constrained while their vulnerabilities keep getting exposed. Western anxieties have grown as their economies hollowed out and their industries and firms were disrupted by new technologies and challengers. As their labour force migrated from “stable” manufacturing jobs to “transient” service-gig jobs, Western economies became increasingly unequal and then polarised. Growing public restlessness has fuelled populist and policy backlashes against globalisation, markets and technology in Western societies.

Western commitment to free markets and democracy has waned. During this decade, Western governments seem more obsessed with winning the Darwinian race against adversaries. Towards this end, they have intervened more frequently in attempts to impose their will on other nations and even on their own firms and citizens. New policies and re-regulation were implemented to enable governments to wield greater control over resources, finance, supply chains, technology, infrastructure, data, platforms and weaponry. This includes prioritising reindustrialisation and erecting barriers to tame globalisation. However, these interventions have the side effect of repressing market forces, often with adverse domestic and international economic and social consequences.

There is likely to be more economic and social pain ahead for most countries over the next six years. At some point, governments will be forced to withdraw fiscal and monetary stimulus and the damage to the private sector will become evident. The “Japanification disease” might take hold in several countries where macroeconomic aggression has over-ridden market discipline, leaving the private sector highly dependent on fiscal stimulus. The Japanification dilemma materialises when governments find they need to cut fiscal expenditure at the same time the private sector is retrenching. This foreshadows a prolonged economic downturn once the post-pandemic recovery phase is over.

Hence, the shift from governance to governments is setting up the 2020s to be a dismal decade. Geopolitically-driven policies are setting the world on a trajectory towards recessions, crises and/or wars rather than towards growth, stability and peace. Deterrence and de-risking, intended to change an adversary’s conduct, are instead provoking escalatory retaliation. The international safe space fostered by geopolitical tolerance and accommodation on controversial issues is shrinking. In tandem with this, Western and authoritarian governments alike insist that only their narratives should prevail. The roles, voices, ideas and criticisms of multilateral institutions, NGOs, private firms and luminaries are largely ignored or, worse still, repressed. There has been a loss of truth, freedom and democracy around the world during this decade. In the West, electorates have been voting out incumbent governments, leading to erratic reversals on economic and social policies, climate, diversity, immigration and reproductive rights which, in turn, generate long-term uncertainty and instability.

The irony is that because authoritarian countries are conditioned to expect the worst from geopolitical conflict, they appear better prepared for difficult times. The question then is whether the liberal democracies of the West can face up to the dismal realities when fiscal and monetary gravity kicks in. Do Western governments have the courage to implement the bitter remedies? Most of all, do countries have the wisdom to veer off the path to a major geopolitical collision? Is there sufficient goodwill left among the global powers to cooperate in the event of a major global crisis?

It will probably take a crisis, a natural disaster or even a war to knock some sense into governments (politicians) and start the multilateral process of reducing geopolitical risks. The big two, the US and China, need to voluntarily accept constraints on their global power and demonstrate a willingness to compromise so that a grand bargain can be conceived to address global imbalances in the military, economic, technological, financial, industrial and information realms. Politicians also need to be “encouraged” to tone down their war narratives. Perhaps social movements need to raise their voices in clamouring for global peace and cooperation.

Reset the public-private sector relationship

Western governments – under pressure from the US – have pushed onwards with geoeconomic fragmentation. The significance of geoeconomic fragmentation is that it paves the way for direct military confrontations in the future; after all, you can’t bomb an adversary if you depend on their factories for supplies. However, this ignores the fact that “beggar-thy-adversary” geoeconomic fragmentation is hurting Western MNCs badly. Western MNCs are being pressured to play for the “home” team, not to do business with adversaries, and to accept the sacrifices that come with complying with political and populist dictates. Thus, Western MNCs find themselves caught in the crossfire as they are pressured to vacate markets, cope with policy uncertainties and manage rising compliance costs.

The reclamation of power by governments is creating ambivalence about the roles of the state and private enterprise. Governments may not have intended to replace markets, but their interventions create uncertainty which compounds the challenges from information disruption, higher interest rates, supply chain shocks and anaemic aggregate demand. With Keynes’s animal spirits in a state of confusion, the Western private sector is likely to retreat and retrench.

This comes at a time when Western MNCs are facing greater competition as corporate power shifts from the West to the Global South for several reasons. First, Western economies are mature and Western MNCs find their global market shrinking (due to decoupling policies). Despite efforts to block China from making inroads into Western markets, there is plenty of room for Chinese MNCs to expand in the Global South, which is still growing at a relatively faster pace.

Second, Western businesses are being disrupted by cost-competitive and state-sponsored companies in the Global South. In particular, the emergence of innovative Chinese business models is exposing the weaknesses of the Japanese business models. The Japanese practice of lifetime employment appears anachronistic in the gig economy. The Keiretsu (the Japanese business network) has been overtaken by platforms. Quality control and lean manufacturing techniques are being superseded by AI, IoT, robotics and vertical re-integration. Mark J. Greeven, Katherine Xin and George S. Yip suggest the West needs to pay greater attention to Chinese management strategies such as digitally enhanced directed autonomy (DEDA). They describe the “three core features of the DEDA approach: granting employees autonomy at scale, supporting them with digital platforms, and setting clear, bounded business objectives”. Under autonomy at scale, “China’s companies embrace single-threaded leadership, which severely limits managerial distraction by giving a leader a clearly defined task, budget, and timeline”. With specific targeting, small teams can organize themselves around specific business opportunities without managerial intervention, but “what employees do with their autonomy is carefully tracked”. Digital platforms are used “to give frontline employees direct access to shared corporate resources and capabilities”. They note the Chinese approach recognises “that the centralization of shared business functions does not have to translate into increased power at the top”.

Third, Western policies like tariffs, export bans and reindustrialisation favour incumbents and legacy industries. This delays the creative destruction of old industries and diverts resources needed to build the infrastructure and industries of the future. In contrast, China’s policies support the disruption of legacy models and focus on accelerating the growth of new industries. This places the West at a long-term disadvantage.

Hence, there is urgency for the West to reset its public-private sector relationships to stem the decline of its MNCs. If the West is unable to reverse the decline, it is unlikely to be able to hold onto its global dominance. In this regard, the private-public sector relationship is less critical in authoritarian economies, which are unconstrained by the profit motive. But it is highly problematic for Western economies, which have relied on fiscal stimulus, loose and cheap credit, and rising corporate profits to sustain euphoric asset prices. A reversal of these “market-oriented” policies could trigger a sharp fall in asset prices with severe after-effects. In the meantime, Western governments also need to address the significant public goods shortfalls arising from under-investment, and the concentration (monopolies) that is aggravating social problems.

Overall, the public-private sector relationship needs to be reset at two levels. At the macroeconomic level, governments eventually need to face up to the hard decision to cut spending. Hence, governments should prepare a policy transition to ensure an orderly fiscal exit and to lay the groundwork for a private sector-led economic recovery. In this regard, fiscal spending reductions should be targeted and gradual to prevent the recession from cutting too deep. This might include sacrificing security goals (reducing defence spending), a phased withdrawal of corporate and social handouts, closing tax loopholes, raising tax revenues and generally easing controls. Rather than relying solely on monetary accommodation, regulators should facilitate market clearing to restore financial soundness, assist the private sector in repairing its balance sheets, and promote a shift from liquidity-driven to information-driven price discovery. The re-establishment of market discipline and the management of creative destruction are prerequisites for establishing a base for a private sector-led recovery. In addition, tax policies need to be re-balanced to achieve greater neutrality between virtual and physical operations, and between savings, investment and consumption. Tax policies should also be tilted in favour of promoting employment and wages for low-income individuals and small companies.

At the operating level, governments should undertake a broad review of public-private arrangements with the aim of re-organising the provision of public goods. First, governments need to re-assess past arrangements with the private sector to provide public goods. Privatisation, public-private partnerships and outsourcing were popular policy choices to reduce the government’s financing burden and to tap private sector efficiencies. The major disappointment is that despite shifting the responsibility for delivering public goods to the private sector, government expenditures continued to rise while the negative social impact became more evident. To be fair, the long-term negative outcomes were not surprising as the private sector and markets are oriented towards optimising profits rather than social benefits. In the long term, the burden will once again fall on governments to make good on the social shortfalls that arise. The review should also analyse the strengths and weaknesses of the private sector’s delivery of public goods and assess whether benefits are reaching the intended targets. It should consider all options for addressing shortcomings, including project oversight and maintenance, corporate subsidies, restructuring and bail-out processes, and best practice models for collaboration. Incentive systems should be redesigned to ensure private goals and behaviours are firmly aligned with public policy objectives.

Second, the public-private sector roles should be streamlined by removing or reducing the social obligations imposed on businesses. This is conditional on the quid pro quo that the private sector should then transfer sufficient resources via taxation to the government, or pay higher wages to employees, to meet social needs. In tandem with this, cost and quality baselines should be established for public access to goods such as education, healthcare, transportation, accommodation and utilities. Third, governments need to clarify the limits of their interventions to assure the private sector that intervention will not go overboard. In this regard, government agencies and corporate boards are bombarded by “fashionable” objectives such as climate change, diversity and national security, resulting in mission creep. This leads to objective overload and agency over-reach, dilutes resources and imposes unreasonable performance burdens. It may be better for government agencies and corporate boards to return to focusing narrowly on their mandates.

The public-private sector relationship should be reset to position governments and businesses to do what they know best. Governments are the ultimate stewards of society and need to carry out their duties impartially and efficiently. Businesses are best at competing, innovating and pursuing profits while markets are essential for price discovery and ensuring allocation discipline.

Transform government administration to be fit for the 21st century

For governments to lead society into the 21st century, transforming their administrations is probably the first step. In this regard, it should be noted that government administrations have suffered significant damage from decades of downsizing, privatisation and outsourcing. The prestige, culture, implementation and operational capabilities of government administration are generally a shadow of what they once were. Yet for all that, the number of agencies has mushroomed, headcounts have expanded, the scope of responsibilities has broadened and government spending has risen. Something has gone wrong somewhere.

“There is no getting around the need to reform governments. The much-needed reforms are not about cost-cutting or KPI efficiency. We’ve had enough of those. It is not about delivery of government services. Those will mainly be digitalised and increasingly handled through a platform. In this context, government administration should not be modelled on commercial best practices. You can’t afford to have government administrations run as though they are platform companies. In this regard, government capabilities have been weakened by decades of pro-market policies. Politically-appointed officials parachuted into government administrations and external consultants further damaged the civil service by treating it as bureaucracies to be culled rather than focusing on the internal cultivation of culture and talent. In a weakened government administration, politicians find it easier to over-ride the civil service without taking on accountability and to weaponise government policies. The loss of neutrality and independence undermines the government’s prestige. Mismanagement and loss of prestige results in staffing shortages across various government professions”[1].

We need to develop a vision of how government administrations should be re-organised for the 21st century. One possibility is to visualise governments as operating an AI-enabled platform for digital citizens. Proposals to reorganise government administration should also aim to restore the civil service’s prestige and ethos of public duty. The restoration of prestige is key to attracting talent and establishing a strong culture. This needs to be supported by decent incomes and career paths to ensure that even low- to middle-ranked civil servants can earn enough to maintain a comfortable family life. Governments should therefore ensure a baseline structure for wages, working conditions and benefits for public employees. The permanence of government employment can offset the transience of private sector gig employment. The reassurance of a government job can provide an anchor of stability for the economy and society.

Conclusions

Over the next six years, governments will face immense geopolitical and societal pressures. Though they are mostly trying to do their best, they will largely fail if they continue to operate with an industrial society mindset. In my view, governments need to embrace the information society mindset by adopting new visions, setting new goals and delivering services in radically different ways. Governments need to forge a new social contract with businesses and citizens. To be fair, we are still in the early stages of the learning curve in a fast-changing and fast-moving landscape. This means governments need to think on the fly and be prepared to “move fast and break things”. At the same time, they need to tread cautiously due to the inherent resistance to radical organisational change. This is a difficult balance to achieve. From my experience, it may be more beneficial to experiment and fail than to be bogged down and stagnate.

Force #7 – Transition to an information society

The transition from an industrial to an information society began with the internet and accelerated with the emergence of platforms, connected devices, mobility, AI and robotics. This has vastly expanded the virtual realm which co-exists with the physical realm. Initially, these technology and information-driven innovations were greeted with enthusiasm because they created digital abundance by introducing new capabilities and conveniences. However, attitudes changed as information disruption placed pressure on legacy societal and business models, causing them to break down. In a world made smaller by greater visibility and speed, information abundance created throughput overload and made the world feel “crowded”. The information society may come laden with conveniences, but society itself seems to be more intolerant and harsher. Traditional buffers to conflicts and crises have been eliminated and new stress points are appearing due to non-linear information effects – such as intangibility, size, speed, transparency, concentration-fragmentation, polarisation, transience and complexity.

These developments are a sign that the industrial society is drawing to a close and that we are moving into the age of the information society. The pace of change is accelerating while relationships have moved from hierarchical to peer-to-peer structures. This poses a problem for democratic societies as these trends are loosening controls and leading to greater disorder in society. Western governments are responding with muscular policies – partly to reclaim power that had been diffused to other stakeholders, partly to restore order, and partly due to the absence of obvious alternative solutions. But Western governments are making a mistake. They are attempting to reinstate the nostalgic features of the industrial society – open migration, industrialisation, the middle class and trade unions. They have also abandoned global free markets and connectivity and instead leaned on trade barriers, sanctions and fragmentation to keep themselves globally relevant. However, the information society challenges will only become more pressing over time. Governments that avoid addressing these challenges run the risk of economic and social retardation.

China’s digital model and data as a factor of production

The digital economy[2] is the main driver of the transition to an information society. In many respects, vibrant growth in the digital sector is needed to partially offset fading growth in traditional sectors such as manual-based services, property and construction and to neutralise the effects of demographic aging.

China has been leading the digitalisation charge. On the other hand, the West appears hesitant to discard its legacy industrial capitalist model. In this context, the debate on China’s digital economy model generally gets sidetracked into the ideological debate on the merits of central planning versus markets. More than half a century ago, Hayek argued that central planning is inferior to markets in economic coordination. This is because market price discovery was superior to bureaucratic central planning in ensuring efficient resource allocation while the profit motive was the best tool to incentivise innovation and productivity. But these propositions are no longer valid in a changing information environment. “In the 1950s and 1960s, information systems were primitive in a largely physical environment. Central planning failed because data was manually captured and unreliable. The system was driven by political decisions and gamed by bureaucrats. Coordination broke down because the system was production-driven and generally unresponsive to demand. In a rudimentary information system of an industrial society, prices are superior to central planning in facilitating coordination. But information systems have evolved. The role of prices has altered as the environment transits from physical scarcity to information abundance. When information is scarce, transactions and resource allocation are dependent on prices to convey information. When information is abundant, price becomes one among many variables to consider. In the information society, the real-time speed of coordination blurs the distinctions between central planning and spontaneous markets. Resource allocation and coordination is data- and demand-driven. Autonomous exchange takes place on distributed networks supported by machine data, AI and a knowledge ecosystem of standards, disclosures, rules and dispute mechanisms. Human administrators are relegated to support, oversight and strategic roles. The availability of detailed information and transaction speed assists in overcoming information asymmetry and facilitating transactions among strangers”[3].

On the surface, the West appears to have the edge in advancing its digital economy – given its technology leadership, financial strengths and its open society. But, as argued earlier, the West seems to be regressing into its industrial society shell. In contrast, China has a comprehensive approach[4] to digitalisation. David Dorman and John Hemmings describe China’s success “as dependent on the comprehensive integration of big data, computing power, and artificial intelligence (through algorithms and application software) as well as constructing the digital infrastructure and governance systems needed to manage it. This key state-level requirement for digital infrastructure drives Beijing’s New Type Infrastructure campaign…In Beijing’s view, although the global competition over technology is fundamentally about ideology, in concrete terms the competition itself will be increasingly focused on big data. As the newest and most important factor of production in the digital age, data is comprehensively reconstructing global production, distribution, and consumption and becoming the high ground in the competition between major countries…Once described only in terms of informatization (applying information technology), a new and perhaps more critical lane has now been added: digitalization (applying value to data)…We must…give full play to the driving and leading role of informatization and digitalization in Chinese-Style Modernization. In the party’s view, the West focuses haphazardly on the competition over individual digital technologies. Beijing focuses long term on the competition over big data, and developing the complex digital ecosystems that will enable its intelligent application”[5].

Rebecca Arcesati and Jeroen Groenewegen-Lau note that, with China’s digital economy accounting “for as much as 23 percent of transnational data flows, and AI increasingly transforming traditional business models, Beijing’s stringent data localization requirements have sweeping implications for global trade, investment, and innovation”. In this regard, “a key component of Beijing’s efforts to control data flows is that it installs actors that it has control over at strategic positions in the digital ecosystem”. Across the transport, healthcare and education industries, China plans to consolidate and integrate data on a national information platform and to support this with a large network of data centers. This would also make it easier for the government to wrest control of data from platform companies. Beijing regards data as a matter of national security and requires domestic and foreign firms to interact with state agencies and trusted actors on data storage, sharing, trading, and use. Beijing says firmer state control is aimed at addressing the mishandling of personal information and security-sensitive data, as well as taming China’s rampant black market for data. “It is also a manifestation of Beijing’s impatience with private data-collecting monopolies and lack of supervision, which policymakers believe are preventing data flows from upgrading traditional sectors such as manufacturing. Nearly every data-related policy announcement urges companies to properly handle the relationship between the government and the market, give full play to the decisive role of the market in resource allocation, and optimize the [supporting] role of government guidance and regulation. The Data Security Law (DSL) makes it clear that non-sensitive data is to circulate freely in a data exchange market to make domestic governance more efficient and the economy more productive”.

Rebecca Arcesati and Jeroen Groenewegen-Lau note “since the Fourth Plenum of the 19th CCP Central Committee in 2019 designated data a factor of production alongside land, labor, capital, and technology, a number of top-level policies have called for a better integrated and more efficient data market in China. These include the 14th Five-Year Plan for China’s socioeconomic development, as well as sectoral plans for national informatization, the digital economy, and Big Data development. By 2025, the digital economy is to account for 10% of national gross domestic product, up from 7.8% in 2020, while a functional data trading system shall be in place, the plan says”.

China’s digital model espouses datafication[6] of its citizens. Dylan Levi King notes “Tu Zipei, the former Alibaba executive and theorist of social governance, has called the proposed Chinese model single-particle governance. The model integrates data from government and commercial sources into individual master files that become the elementary particle…Horizontality requires not just individual autonomy, but also a sense that local communities or interests can be organized and exercise some kind of collective agency”. In China, “a person’s political identity was linked in part to the collective bodies in which they participated. But with an urbanizing population that is increasingly integrated into service economies instead of life-long economic or social roles, the bases for these collective expressions of political identity are disappearing…As the population atomizes, the government seems intent on creating a stronger civic Chinese identity and wants its citizens to politically relate primarily to the national government…the new rhetoric about data-driven governance: it presumes a population where the individual is a data-generating automaton whose activities are input for the state to work with, with few or no intervening social structures. The logic of big data governance at its highest scale appears horizontal in flattening the inputs into decision making. It de-emphasizes the importance of political, economic, and intellectual elites but also of local government. It also increasingly removes the possibility of a cadre-managed collective autonomy in goals and decisions”. It is worthwhile contrasting the massive support from Chinese citizens for technological innovation with the fierce resistance from Western citizens towards surveillance-related technologies such as facial recognition, digital IDs, AI and CBDCs.

Digital identities, surveillance[7] and control over data

The information society is a broad concept that not only encompasses the digital economy but also extends into social life. One core aspect revolves around the datafication of citizens through digital ID numbers and its implications for individual privacy rights or, conversely, the extent to which it facilitates state surveillance.

Nick Corbishley notes India pioneered the launch of Aadhaar (Hindi for foundation), a 12-digit unique identity (UID) number based on biometric and demographic information[8], in 2012. It is the world’s largest digital ID system with 1.3 billion UIDs issued by 2021 covering a staggering 92% of India’s population. “Aadhaar was first introduced as a voluntary way of improving welfare service delivery. But the Modi government rapidly expanded its scope by making it mandatory for welfare programs and state benefits. The mission creep didn’t end there. Aadhaar has become all but necessary to access a growing list of private sector services, including medical records, bank accounts and pension payments…Plans are also afoot to link voter registration to Aadhaar”. “Put simply, life in India without Aadhaar is one of near-total exclusion”.

Nick Corbishley points out that, typical of digital identity systems, Aadhaar is vulnerable to hacking. It is believed that “hundreds of millions of Indians are now up for grabs on the dark web, for as little as $80,000”. The leak of “personal identifiable information (PII) creates a significant risk of digital identity theft…Threat actors leverage stolen identity information to commit online banking theft, tax refund fraud, and other cyber-enabled financial crimes. Nation-state actors are also hunting for Aadhaar data with the goal of espionage and influence campaigns that leverage detailed insights on the Indian population”. There are other downsides. Aadhaar’s system can track “users’ movements between cities, their employment status and purchasing records. It is a de facto social credit system that serves as the key entry point for accessing services in India. While the system has helped to speed and clean up India’s bureaucracy, it has also massively increased the Indian government’s surveillance powers and excluded over 100 million people from welfare programs as well as basic services”.

Nick Corbishley notes China recently announced pilot tests for a new national digital identification system across more than 80 internet service applications and released draft rules. Beijing’s proposed digital ID system will form part of the broader RealDID program that aims to store individual identity records on the country’s government-run Blockchain-based Service Network (BSN). “According to an article in Caixin Global, the main goal of the new system is to cut down on the personal information that internet platforms can collect from their users. The current real-name registration system has left platforms with an excessive amount of their users’ personal information, exacerbating privacy concerns and the risk of breaches”. He points out China’s objective “appears to be about bringing private data under greater public control”. He notes that the book Surveillance State, by Josh Chin and Liza Lin, argues the Chinese government is placing the state and citizens on the same side of the privacy battle against private companies. “The Chinese government is now proposing that by collecting every Chinese citizen’s data extensively, it can find out what the people want (without giving them votes) and build a society that meets their needs. But to sell this to its people – who, like others around the world, are increasingly aware of the importance of privacy – China has had to cleverly redefine that concept, moving from an individualistic understanding to a collectivist one…Consider recent Chinese legislation like the Personal Information Protection Law (in effect since November 2021) and the Data Security Law (since September 2021), under which private companies face harsh penalties for allowing security breaches or failing to get user consent for data collection. State actors, however, largely get a pass under these laws”. During the pilot phase, the digital ID is being marketed as optional and Chinese residents who sign up “will then be given an electronic network identity authentication certificate with a network number, with which they will be able to sign up for and log in to popular apps such as WeChat and Taobao”. In China’s case, the government already exercises tight controls and closely monitors content and people’s behavior on the internet. “Websites and apps must verify users with their phone numbers, which are tied to personal identification numbers”. “This allows platforms and authorities to police online activity, such as combating cyberbullying and misinformation, as well as to censor critical discussion of the government”. “Until now, that control has been fragmented as censors have had to track people across different online platforms. A national internet ID could centralize it. With that centralisation of data comes a heightened risk not only of government overreach but also data breaches”. However, control of data shifts as “personal IDs had empowered platform companies to gather user data that could be used for their financial gain. Replacing personal IDs with anonymous digital ones would allow the state to monitor online activity while limiting companies’ ability to track consumer behaviour”.

Nick Corbishley argues that while centralised digital identity systems in India and China are scrutinised and criticised, the EU, UK and Australia are also rolling out similar systems with hardly any fanfare. “Nor has there been any coverage of the close cooperation between the EU and the US to align their digital identity standards, even though the US doesn’t even have an official digital ID system in place”.

Nick Corbishley cautions “a full-fledged digital identity system, as currently conceived, could end up touching just about every facet of our lives, from our health (including the vaccines we are supposed to receive) to our money (particularly once central bank digital currencies are rolled out), to our business activities, our private and public communications, the information we are able to access, our dealings with government, the food we eat and the goods we buy. A system like this will offer governments and the companies they partner with unprecedented levels of surveillance and control. And most of the decision processes will be automated…among the most important questions today’s societies could possibly grapple with since they threaten to transform our lives beyond recognition, granting governments and their corporate partners much more granular control over our lives”. There are fears “that the real objective is to further clamp down on expression and the free exchange of information online, eventually removing a means for people to post anonymously or without having their entire internet presence readily open to government inspection”. These “systems pose no less grave threat to privacy, freedom of expression and other basic rights (or privileges)… could drastically extend authorities’ oversight over online behaviour, potentially covering everything from internet shopping history to travel itineraries”.

The higher levels of resistance to “surveillance-based” models in Western societies can be explained. Generally, Western societies have evolved “trust institutions and laws” and are thus in a position to place greater value on individual privacy rights. Given this success, the West finds it difficult to give up on legacy institutions and to switch to digital “trust” alternatives. In contrast, developing economies find it difficult to build comparable Western trust institutions and laws. It is therefore unsurprising that developing economies enthusiastically follow China’s lead in using “surveillance” technologies and data-driven methodologies to develop their economies and societies. Digital systems – comprising IDs, payment, credit and surveillance systems – provide developing countries with a practical and cost-effective way of leapfrogging the development of institutional trust. “China’s state surveillance model demonstrates economic growth can be achieved by collecting information extensively (surveillance) and organising activities effectively (state). The integration of technological surveillance into the core of its ecosystem reduces the costs of capture, improves data reliability, improves participation (access), broadens reach and applications (connectivity and ecosystem), reduces information asymmetry, strengthens community trust and facilitates self-organisation”[9].

I view the trend towards digital IDs as inevitable due to the overwhelming economic benefits. The Chinese government’s reliance on surveillance tools, data and AI makes it possible to meet the needs of its huge population with a comparatively small number of civil servants. This includes maintaining law and order, streamlining approval processes, managing social disputes and public grievances, building analytical and predictive models for public services, and identifying private and public sector misconduct (national security, anti-corruption, embezzlement, abuse of power, misuse of government funds and nepotism).

Critically, digital IDs are a springboard for a government platform[10] to coordinate the diverse range of public and private schemes covering e-government services (healthcare, welfare and pensions), digital documentation management (school, health, ownership, licenses), taxation, surveys and government communications. A government platform could act as the single true source of data by ensuring accurate, up-to-date profiles. This would significantly reduce administrative and verification costs, improve on-boarding, matching and cross-referencing, reduce leakages and fraud, and facilitate self-organisation. The consolidation of personal data would provide a coherent overview of overlaps and gaps. It would facilitate government targeting of specific objectives such as unemployment by location or groups; or activities such as environmental sustainability, care-taking, welfare services and community building. It is possible to coordinate welfare and participation schemes to feed into each other to strengthen the community ecosystem and economic renewal process. Employment schemes could be designed to assist debt repayment and promote business start-ups; the concept of retirement homes can be expanded to include work programs; welfare or retirement payments can be linked to payment for caring services by family or relatives. Industry apprenticeship or skill-building schemes can be improved and tied to the educational system or other job-matching platforms. In addition, such platforms can provide notifications of relevant job openings, reducing job search frictions. There are also synergies from matching job schemes with team-building and community-based enterprise creation to bridge the gap between school and work.
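To make the idea concrete, here is a minimal, hypothetical sketch of what consolidating agency records into a single citizen profile and running a basic job-matching query might look like. The field names, sample records and matching rule are illustrative assumptions, not a description of any actual government platform.

```python
# Hypothetical sketch: a government platform merging agency records into one
# citizen profile, then matching unemployed citizens to job openings.
# All field names and data below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class CitizenProfile:
    citizen_id: str                              # digital ID issued by the state
    skills: set = field(default_factory=set)
    location: str = ""
    welfare_schemes: set = field(default_factory=set)
    employment_status: str = "unknown"

def consolidate(records):
    """Merge records from multiple agencies into one profile per citizen,
    giving the platform a single, de-duplicated source of personal data."""
    profiles = {}
    for rec in records:
        p = profiles.setdefault(rec["citizen_id"], CitizenProfile(rec["citizen_id"]))
        p.skills.update(rec.get("skills", []))
        p.welfare_schemes.update(rec.get("welfare_schemes", []))
        p.location = rec.get("location", p.location)
        p.employment_status = rec.get("employment_status", p.employment_status)
    return profiles

def match_jobs(profiles, openings):
    """Notify unemployed citizens of openings in their location that fit their skills."""
    for p in profiles.values():
        if p.employment_status != "unemployed":
            continue
        for job in openings:
            if job["location"] == p.location and job["skills"] & p.skills:
                yield (p.citizen_id, job["title"])

# Two agency records for the same citizen are merged, then matched to one opening.
records = [
    {"citizen_id": "C001", "skills": ["welding"], "location": "Penang",
     "employment_status": "unemployed"},
    {"citizen_id": "C001", "welfare_schemes": ["housing"], "location": "Penang"},
]
openings = [{"title": "Fabricator", "location": "Penang", "skills": {"welding"}}]
print(list(match_jobs(consolidate(records), openings)))  # [('C001', 'Fabricator')]
```

The point of the sketch is the consolidation step: once profiles are de-duplicated into a single authoritative source, the matching, cross-referencing and notification functions described above become straightforward queries over that source.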

To a large extent, the emergence of government platforms would reduce the gatekeeping power of global platforms. In this context, the days of the “free internet” are over. Governments around the world are reacting to global platform domination on the grounds of sovereignty, national security, tax revenues, individual privacy and consumer rights, fair competition and content moderation obligations.

Accompanying this is the looming tussle between the public and private sector[11] over control of data. The question is whether it is better for society to entrust guardianship of personal data to governments, or to global and local firms. Of course, concerns about governments using surveillance data to suppress citizens cannot be dismissed. But it should be borne in mind that suppression tends to occur more frequently in low-information environments.

There are advantages when responsibility for data resides with the government. First, there is a clear point of accountability. The public can put pressure on the government to manage personal data in a transparent and responsible (safeguarding individual privacy rights) manner as well as to expand the benefits to the public. In addition, only governments are in a position to reconcile and clean up personal data from multiple sources. It is important to establish an authoritative source of personal data as this dramatically reduces verification costs and improves accuracy and authenticity.

In contrast, the West’s approach of allowing informal private data systems to flourish creates information blindspots, contains hidden hazards (inaccurate data, systemic discrimination) and results in automated decision-making that lacks transparency, accountability and oversight. China’s data approach implies the private sector cannot be trusted with personal data and that state control and oversight are crucial to minimise private sector abuses and to ensure data is available (data-sharing) for a broad range of public uses. Overall, the adverse aspects of digital identity systems arise only because authorities (public or private) did not address exclusionary challenges and information asymmetry abuses, and did not do enough to promote transparency, fairness and recourse or to increase the supply of digital public goods.

In any case, neither the public nor the private sector is likely to voluntarily relinquish its rights to collect and use other people’s data. The private sector will use data for profits and that’s fine. But that does not mean its ownership of or power over data should be absolute. There need to be checks and balances. Ultimately, governments should have the right to oversee and discipline the private sector to prevent it from abusing its information privileges. Governments should also be able to assert sovereign rights over virtual space controlled by foreign platforms. Most importantly, governments need to improve how they use data for the public good. On reflection, perhaps governments should manage data for people the way platforms manage it for their business. In reality, however, most governments lack the technical capability to impose their will on platforms.

Digital identity systems have often been criticised for undeservedly excluding the poor (and illegal migrants). I would argue that digital identity systems are an essential and powerful tool for addressing poverty. My hypothesis is that “the value of individuals is enhanced by more information…poor people lack identity (e.g. even an address) and are unable to generate value…Overcoming poverty requires not only providing individuals with identities but also by ensuring they are endowed with assets (property, education, health, access and opportunities)…development strategies should adopt a human-centric approach by focusing on the organisation and use of information to improve the value of human capital and to let economic development take care of itself”[12]. “There is a need for game-changing business models for poverty eradication. A superior model is one that views the poor as representing a source of untapped value from human capital. In this context, data-driven strategies can be implemented by establishing platforms responsive to the need of local residents, that can contribute towards creating decent jobs and promote efficiency and innovation in public services. Unlocking the value of human capital at the bottom of the pyramid will undoubtedly have a positive impact on the aggregate value of an economy”[13]. Increasing digital access increases opportunities for monetisation and exchange, and is a major requirement for ensuring equal digital rights. For sure, governments should not neglect the exclusionary challenges and should ensure the poor have the ability to access digital systems or alternatives. In addition, data transparency can neutralise the disadvantage of inequality by ensuring the rich and powerful are subject to the same rules and discipline.

Managing the information society

Governments need to consider several critical issues in managing the information society.

  • Data ownership

The number one issue in the information society is who owns or controls data. In this regard, the core value of a virtual world lies not so much in hardware (physical infrastructure) and definitely not in software which is transient and substitutable. The basic resource in the information society is data. While articles are replete with references to “data as oil”[14], it may not be an apt comparison as data is abundant rather than scarce like oil.

Having recognised the value of data, initial academic discussions explored mechanisms for the fair distribution of the value of data from platforms to individuals or content creators. This theoretical approach has not gotten very far for the practical reason that, since individuals have “exchanged” their ownership rights in return for “services”, they are no longer in a position to retain ownership. In addition, it is large data sets that are valuable rather than individual pieces of data. Generally, the value of data increases with aggregation (when data can be cross-referenced across time and subjects), timeliness, authenticity and accuracy. Collecting, verifying and using data is costly. The value of data grows exponentially only when it is needed or used. Otherwise, it is just lying around and can be costly to maintain. It is difficult to make a case for blanket protection of individual ownership rights because data is non-rivalrous and easily copied. This has a bearing on the protection of individual privacy rights given that personal data is already “out there” anyway. Therefore, issues of individual privacy rights should be reframed as questions relating to rights of access, usage and aggregation. It is important to avoid overly precise rules as these tend to result in a convoluted legal framework. I prefer a principles-based or generic approach that focuses more on points of accountability, with stringent requirements for audit trails relating to data sources, uses and amendments.

It is difficult to operationalise legally enforceable rights to protect data ownership. Data can be copied and transmitted at almost zero cost and this renders barriers to protect ownership rights and exclusivity ineffective. “The Australian Government Productivity Commission Inquiry Report points out that under Australian law, no one owns data, and this is generally the case overseas too, although copyright and various other laws can ascribe various rights to parties. The concept of data ownership is nebulous. If a consumer cannot trade with their data, then it is hardly accurate to contend there is data ownership. They explained that thinking about data as personal property creates messy overlaps with copyright law. In instances where there were multiple owners, the difficulty in resolving competing claims could render data unusable in practice. They argue that “thinking about individual’s information in the context of consumer rights avoids many of these problems. And, a case can be made that the concept of your data always being your data suggests a more inalienable right than one of ownership. Rights may be balanced against other competing interests, but they cannot be contracted away or sold with no further recourse for the individual in the event of data misuse or emerging new opportunities for beneficial data use. The Commission believes data rights give a more enduring and workable outcome for individuals. Hal Varian suggests that instead of focusing on data ownership – a concept appropriate for private goods – we really should think about data access. Data is rarely sold in the same way private goods are sold, rather it is licensed for specific uses. For example, rather than ask who should own autonomous vehicle data, it is better to ask “who should have access to autonomous vehicle data and what can they do with it?” Hal Varian highlights many parties can simultaneously access autonomous vehicle data. In fact, from the viewpoint of safety it seems very likely that multiple parties should be allowed to access autonomous vehicle data. There could easily be several data collection points in a car: the engine, the navigation system, mobile phones in rider’s pockets, and so on. Requiring exclusivity without a good reason for doing so would unnecessarily limit what can be done with the data[15].
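To illustrate the access-and-audit framing (rather than ownership), here is a rough sketch assuming invented parties, datasets and purposes: each request is checked against a purpose-limited licence and every decision is appended to an audit trail, so the same non-rivalrous dataset can serve multiple licensed users while remaining accountable.

```python
# Illustrative sketch (invented parties, datasets and purposes) of treating data as an
# access/usage question rather than an ownership question: requests are checked against
# purpose-limited licences and every decision is appended to an audit trail.
from datetime import datetime, timezone

# Hypothetical licences: which party may use which dataset, and for what purpose.
LICENCES = {
    ("road_safety_agency", "vehicle_telemetry"): {"safety_analysis"},
    ("insurer_x", "vehicle_telemetry"): {"claims_assessment"},
}

AUDIT_TRAIL = []  # append-only record of data requests, uses and outcomes

def request_access(party, dataset, purpose):
    """Grant access only for licensed purposes, and log every request either way."""
    granted = purpose in LICENCES.get((party, dataset), set())
    AUDIT_TRAIL.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "party": party,
        "dataset": dataset,
        "purpose": purpose,
        "granted": granted,
    })
    return granted

# Multiple parties can use the same non-rivalrous dataset for licensed purposes,
# while unlicensed uses are refused but still leave a trace for accountability.
print(request_access("road_safety_agency", "vehicle_telemetry", "safety_analysis"))  # True
print(request_access("insurer_x", "vehicle_telemetry", "marketing"))                  # False
print(len(AUDIT_TRAIL))                                                               # 2
```

This mirrors Hal Varian’s autonomous vehicle example: several parties can simultaneously hold access rights to the same data for defined uses, and the accountability comes from the licence and the trail rather than from exclusive ownership.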

There has been much debate on content ownership issues. Caitlin Chin notes “technology platforms should recognize the IP rights of news outlets and human creators, especially when using copyrighted articles to train algorithms. AI developers have trained LLMs by scraping billions of written articles, images, audio, and lines of software code from humans, typically without compensating, citing, obtaining permission from, or even informing the original creators. A wide range of professionals, ranging from the NMA to comedian Sarah Silverman to computer programmers, are asking – or, in some cases, suing – AI developers to pay their training data sources, stating their unlicensed use of content violates IP rights. Days after the Associated Press reached a licensing deal with OpenAI in July 2023, thousands of authors signed an open letter to urge LLM developers to both obtain consent from and compensate writers in order to scrape their work. In January 2023, a group of software developers sued OpenAI and GitHub for building the code-generating algorithm Copilot based on their licensed work. That same month, several artists filed a class action lawsuit against Stability AI, Midjourney, and DeviantArt for processing their copyrighted material to train algorithms that generated images in their unique styles. Shortly after, Getty Images sued Stability AI in the United Kingdom and the United States for training algorithms based on 12 million copyrighted images. In addition, the Daily Mail is reportedly considering legal action against Google for scraping hundreds of thousands of copyrighted articles to develop Bard without permission. These cases could take years to resolve in court, and their outcomes are uncertain. Generative AI has created novel questions over the interpretation of existing IP rights, particularly whether algorithms fall under the fair use exception in the Copyright Act. Although AI developers have acknowledged their history of scraping copyrighted material without consent, they have also argued that generative AI qualifies as fair use because the output is sufficiently transformative in nature compared to the original input. The plaintiffs in these lawsuits disagree, arguing that fair use does not protect the exploitation of copyrighted material in highly commercial contexts where AI developers benefit financially at the expense of human creators. Furthermore, generative AI tools reproduce copyrighted text or images in many cases, sometimes even quoting source text verbatim, which possibly contradicts the transformative use argument. Going forward, the definitions of fair use and derivative works will be critical for Congress or the courts to clarify to help writers and other content creators exercise their IP rights in the production of AI”. The complex issues relating to ownership of data and content are still not well explored and there are many grey areas in relation to ownership claims, accountabilities and liabilities.

  • Law, code[16] and data quality

The shift from a physical to an information paradigm signals a change in the rules of the game. Where “crime used to be largely defined in a physical context, the new rules mostly define misdemeanours and illegal conduct within an information context. Physical crimes (such as burglary) are overshadowed by an avalanche of newly defined information crimes (such as in relation to privacy, data or content). This will be accompanied by an expansion of standards or information requirements to define information crimes. In this context, enforcers and regulators have already found it is more expedient to prosecute on information or procedural failures (tax evasion, omitting or providing false information, and failure to comply) than to prove actual wrong-doings. The penalties for information are also becoming more onerous if it is linked to a serious crime like money laundering or national security.  In other words, the costs of filling up a form wrongly has been greatly escalated. The regulatory focus has thus generally shifted from preserving physical safety to minimising disrepute and conflict and to improving authenticity. In tandem with this, rules on corporate and social behaviours and content has expanded significantly. A consequence of information overload is regulatory overload. More rules, more lawsuits and rising liabilities will increase the pressure to expand legal documentation to mitigate risks. Overall, information rules will sharply increase legal risks, potential liabilities and bureaucracy costs for society as a whole”.

“In face of the growing overlaps between law and code, the current approach to regulating information is haphazard. The existing flaws – conflicting rules and gaps – will be further worsened by the need to introduce/modify rules to cover the expansion of information products (patents, copyright and data), activities (sharing, video-conferencing), technologies (AI, IOT, blockchain) and industries (drones, autonomous cars). It is impractical to expect a panacea exists and the likelihood is continued tinkering of existing regulations. It is useful to therefore develop a stopgap agenda for the medium-term. A set of generic principles should be established to guide the design of information rules. At the moment, the tendency is to develop principles separately for privacy, data, content or AI. Integrating these principles into a holistic framework is a critical step towards ensuring consistent and coherent regulation. The framework should also aim to present a vision of how law and code can be integrated to achieve regulatory goals and to identify the areas where safeguards are required. There is also a need for an overview of regulatory terrain and roles – by mapping out the areas or activities that should be regulated, those that require direct government oversight or where it may be more practical to devolve the responsibilities to the private sector (industry associations, firms or platforms)…In particular, the regulatory boundaries between the private and public domains and between the physical or virtual spheres needs to be redrawn. Convergence implies regulation can no longer be individually segmented and should be generalised or as universal as possible. It is timely to regard private code as an extension of the regulatory architecture so as to ensure decisions made by private code are in line with public objectives and laws. In any case, as the real-time intervention of code becomes more intrusive, the reconciliation of law and code into a common framework is inevitable”.

Samer Hassan and Primavera De Filippi point out “today, regulation by code is progressively establishing itself as a regulatory mechanism adopted not only by the private sector but also by the public sector. Governments and public administrations increasingly rely on software algorithms and technological tools in order to define code-base rules, which are automatically executed (or enforced) by the underlying technology”. Examples include making “predictive assessments about potential threats to national security, or the use of computer algorithms to support judicial decision-making and determine jail sentences or paroles…software ultimately ends up stipulating what can or cannot be done in a specific online setting, more frequently than the applicable law, and possibly also much more effectively…The advantage of this form of regulation by code is that, instead of relying on ex-post enforcement by third parties (i.e., courts and police), rules are enforced ex-ante, making it very difficult for people to breach them in the first place. Besides, as opposed to traditional legal rules, which are inherently flexible and ambiguous, technical rules are highly formalized and leave little to no room for ambiguity, thereby eliminating the need for judicial arbitration”.
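To make the ex-ante/ex-post distinction concrete, here is a minimal, hypothetical sketch (not drawn from Hassan and De Filippi) of a code-based rule: the limit is embedded in the system itself, so a breach is blocked before it can occur rather than litigated afterwards. All names and values are illustrative.

```python
# Hypothetical illustration of "regulation by code": the rule is written into
# the system and enforced ex-ante, before the action occurs, rather than
# adjudicated ex-post by courts or regulators. All values are illustrative.

DAILY_TRANSFER_CAP = 10_000  # assumed regulatory limit (illustrative only)

class RuleViolation(Exception):
    """Raised when a code-based rule blocks an action outright."""

def execute_transfer(amount: float, transferred_today: float) -> float:
    """Allows a transfer only if it stays within the coded daily cap."""
    if transferred_today + amount > DAILY_TRANSFER_CAP:
        # The breach is impossible by construction: no enforcement action is
        # needed afterwards, but there is also no room for discretion.
        raise RuleViolation("Transfer exceeds the daily cap and is blocked.")
    return transferred_today + amount

if __name__ == "__main__":
    total = execute_transfer(4_000, transferred_today=0)   # allowed
    try:
        execute_transfer(8_000, transferred_today=total)    # blocked ex-ante
    except RuleViolation as err:
        print(err)
```

Even this toy example shows the trade-off the authors go on to describe: enforcement is automatic and total, but there is no room for the discretion or appeal that ex-post adjudication allows.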

Samer Hassan and Primavera De Filippi add “regulation by code also comes with important limitations and drawbacks that might create new issues related to fairness and due process”. “On the one hand, in contrast to traditional legal rules, which must be appreciated by a judge and applied on a case-by-case basis, code-based rules are written in the rigid and formalized language of code, which does not benefit from the flexibility and ambiguity of natural language. On the other hand, the architectural implementation of online platforms ultimately depends on the specific choices of platform operators and software engineers, seeking to promote or prevent a certain type of actions. Just like any other technological artifact, code is not neutral, but inherently political: it has important societal implications, insofar as it might support certain political structures or facilitate certain actions and behaviors over others”.

Overlaps between law and code are growing and this has many consequences. Algorithms determine whether citizens get or are denied access to finance, transportation, welfare and other services. Citizens may not even be aware that they are being penalised or have become victims of discriminatory bias. Algorithmic regulation may replicate human bias, aggravate bias in existing patterns and be subject to human manipulation.

Matt Sheehan notes China was the first country to implement detailed regulations which represented the foundation of its AI governance regime. “China’s three most concrete and impactful regulations on algorithms and AI are its 2021 regulation on recommendation algorithms, the 2022 rules for deep synthesis (synthetically generated content), and the 2023 draft rules on generative AI. Information control is a central goal of all three measures, but they also contain many other notable provisions. The rules for recommendation algorithms bar excessive price discrimination and protect the rights of workers subject to algorithmic scheduling. The deep synthesis regulation requires conspicuous labels be placed on synthetically generated content. And the draft generative AI regulation requires both the training data and model outputs to be true and accurate, a potentially insurmountable hurdle for AI chatbots to clear. All three regulations require developers to make a filing to China’s algorithm registry, a newly built government repository that gathers information on how algorithms are trained, as well as requiring them to pass a security self-assessment”.

It is also important to pay attention to data quality as information errors could unfairly penalise citizens. For example, Martin Kretschmer, Tobias Kretschmer, Alexander Peukert and Christian Peukert note the EU’s “AI Act proposal does lay out explicit goals for data quality, such as that training, validation and testing data sets shall be relevant, representative, free of errors and complete…We categorise input risks according to the underlying reasons that may impede data quality. In some applications, the most adequate dataset may not exist yet or a once adequate dataset is no longer up-to-date because the context changed over time. All these issues are exogenous; methods to collect adequate data may not exist yet, or the environment simply has changed, but the underlying reasons that make data quality suboptimal are not due to specific actions of any individual stakeholder. On the other hand, data quality may change because of strategic behaviour of market participants. For example, even a large language model that was trained on essentially all the information on the internet suffers from data quality issues. It can only work with information that someone has decided to make publicly available. As a result of such input biases, an AI system may produce outputs of insufficient quality…For constructing a liability framework we therefore analyse data quality as a function of exogenous issues, such as cold start (underrepresented data), concept drift (when data becomes outdated), and data availability because of privacy concerns. At the other extreme, data quality can be understood as a function of endogenous issues, such as agents trying to game the system, for example with the AI equivalent of search engine optimization or adversarial attacks that inject biased/selected data”.
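As a rough illustration of how the exogenous issues mentioned above (cold start and concept drift) could be monitored in practice, here is a minimal sketch; the thresholds are invented for illustration and are not taken from the cited paper or the AI Act.

```python
# Minimal sketch of monitoring two exogenous data-quality risks:
# "cold start" (too little training data) and "concept drift" (live data
# drifting away from the training distribution). Thresholds are assumptions.

from statistics import mean, pstdev

MIN_SAMPLES = 30         # assumed threshold for flagging a cold start
DRIFT_Z_THRESHOLD = 3.0  # assumed threshold for flagging concept drift

def data_quality_flags(training: list[float], live: list[float]) -> dict:
    flags = {"cold_start": len(training) < MIN_SAMPLES, "concept_drift": False}
    if training and live and not flags["cold_start"]:
        mu, sigma = mean(training), pstdev(training) or 1.0
        # Flag drift when the live mean sits far outside the training distribution.
        flags["concept_drift"] = abs(mean(live) - mu) / sigma > DRIFT_Z_THRESHOLD
    return flags

if __name__ == "__main__":
    train = [100.0 + i for i in range(60)]    # stable historical data
    recent = [450.0 + i for i in range(10)]   # environment has shifted
    print(data_quality_flags(train, recent))  # {'cold_start': False, 'concept_drift': True}
```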

  • Resetting the social contract

Society is moving away from an industrial paradigm of large households, youthful population, physical activities and permanent employment and is heading towards an information paradigm of modular households, aging population, virtual activities and transient employment. In this context, the industrial-era social contract is becoming obsolete due to changing household composition, monetisation of living costs and the changing relationship between households and work[17].

“Population contraction is accompanied by drastic changes in household size and composition. Information society households are diverse (combinations ranging from same-sex to polygamous families), transient (divorces and mobility), modular (individualistic) and autonomous (independent). These household trends reinforce demographic aging and are reshaping family and community cultures, relationships and the societal order. Attitudes and roles are changing – overturning industrial society ideals on families and responsibilities (family obligations, attitudes on work). Individuals are detached from families as knowledge transmission is virtual and autonomous (social media) rather than physical (school) and relationship-based (family)”.

“A related source of disruption is the monetisation of household costs. When information is scarce, many activities are relationship-driven (e.g. at the household) and thus unpaid or lowly priced. Rising affluence, information abundance, high-population densities and shrinking household sizes contributed to the monetisation of unpaid activities, especially household chores such as cooking, cleaning and caregiving. While the monetisation of household chores enabled individuals to live outside of their families, it meant that previously free household services were monetised by strangers. This increases the costs of maintaining a family and makes income indispensable…This is reinforced by the effects of Baumol’s Cost Disease and the costs for new essential goods related to connectivity and travel…the broad trends of longevity, rising dependency, depopulation, expanding coverage and rising operating costs are putting the industrial-era welfare, healthcare and retirement systems under tremendous strain. We should not under-estimate the social risks from disruption of households. An information society with changing family structures, new generational cultures and rising support costs faces worsening social vulnerabilities. This is unbinding the social glue that held families and communities together and is diluting the sense of common purpose. The traditional economic paradigms are unable to map the logic of disruption risks. Governments need to step up with a new policy paradigm that recognises their resource constraints and undertakes radical reform in managing household disruption and redesigning the social safety net for the sake of future generations”.

It is worth highlighting that Baumol’s cost disease postulates that as economies shift from manufacturing to services, service wages would rise despite the absence of productivity gains. It turned out instead that Baumol’s cost disease is profit-driven rather than wage-driven; i.e. that the benefits from rising service prices flow into profits rather than into wages. It is thus not a coincidence that the expansion of the service sector has corresponded with the falling share of wages in national income and with growing inequality.
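For readers unfamiliar with the mechanism, the standard two-sector logic can be sketched as follows (a stylised textbook-style formulation, not a precise model of the claim above):

```latex
% Stylised two-sector illustration of Baumol's cost disease.
% Manufacturing productivity grows at rate g; service productivity is flat.
% With mobile labour, a common wage tracks the progressive sector, so the
% relative price of services drifts upward even without any service-sector
% productivity gain.
\[
  w_t = w_0 (1+g)^t,
  \qquad
  \frac{p_{S,t}}{p_{M,t}} = (1+g)^t \, \frac{p_{S,0}}{p_{M,0}}
\]
```

The point made above is that, in practice, the rising relative price of services has flowed into profit margins rather than into service wages.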

The changing relationship between households and work takes center stage in the information society. “Changes in household and work structures are increasing tensions between income and costs, and increasing the vulnerability of new generations. The older generations had a well-defined path towards achieving a good life. Education was relatively inexpensive and wages assured. Individuals were able to purchase homes and start a family at relatively young ages. The goalposts have moved. The middle class has shrunk and the ranks of the low-income expanded, reflecting less upside opportunity for many. The middle class are finding the income-cost relationship has changed so drastically that they are no longer able to afford the benefits (e.g. house, education, healthcare) once taken for granted. The new generations begin working life at a disadvantage – indebted to finance their education,  finding it harder to secure a stable career and to accumulate savings due to higher living costs, and debt and rental servicing obligations. They are also more vulnerable to setbacks. In effect, economic hardship has become more widespread despite full employment. Rising job and income uncertainties are also dampening household expectations of life-time or permanent income. Hence, a confluence of demographic aging, anaemic household and business formation, rising income uncertainty and industrial maturity are likely to have deflationary consequences”.

It should be noted that concerns that technology will lead to higher unemployment have not really been borne out, at least not yet. Geopolitics and macroeconomic policy seem to have had a more substantial influence on unemployment. The other threat was the fissuring of traditional employment via outsourcing, contracting and alternative work arrangements – e.g. gigs, contract work, freelancers, temps, remote and work from home. Initially, alternative work was greeted with enthusiasm. Employers appreciated the flexibility of matching workforce size to real-time fluctuations in demand as this allowed them to minimise costs and risks. Workers welcomed the autonomy, flexibility and the opportunity to earn or supplement income, which was useful for mitigating contingencies or as a stop-gap for small entrepreneurs.

In tandem with the growth of the digital economy, it is estimated that around 10% to 40% of the labour force is now employed in alternative work. Thus, the social consequences of alternative work have become significant. Reports on “the future of work” were launched with much fanfare. But these optimistic reports have largely disappeared. Governments instead have to manage complaints about how alternative work is facilitating worker exploitation.

Kathryn Taylor notes “Veena Dubal has exposed a constellation of technology-driven practices that are being deployed by companies to subvert worker autonomy and keep wages low, varied, and difficult to predict or calculate”. “Together, these kinds of practices amount to what Dubal has termed algorithmic wage discrimination, where individual workers are paid different hourly wages – calculated with ever-changing formulas using granular data on location, individual behavior, demand, supply, and other factors – for broadly similar work”.
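To illustrate what such individualised, data-driven pay might look like, here is a purely hypothetical sketch; the variables and weights are invented and do not represent Dubal's findings or any platform's actual formula.

```python
# Hypothetical illustration of "algorithmic wage discrimination": two workers
# doing broadly similar work receive different effective pay because the rate
# is recomputed per worker, per moment, from granular behavioural and demand
# data. All names and weights below are invented for illustration.

BASE_RATE = 18.00  # assumed nominal hourly rate

def offered_wage(surge: float, acceptance_rate: float, hours_this_week: float) -> float:
    """Returns a per-worker, per-moment hourly offer."""
    rate = BASE_RATE * surge
    # Workers who accept most jobs can be offered less without losing them.
    rate *= 1.0 - 0.15 * acceptance_rate
    # The incentive decays once the platform has already secured enough hours.
    if hours_this_week > 30:
        rate *= 0.9
    return round(rate, 2)

if __name__ == "__main__":
    # Same shift, same task, different pay:
    print(offered_wage(surge=1.2, acceptance_rate=0.95, hours_this_week=35))  # ~16.67
    print(offered_wage(surge=1.2, acceptance_rate=0.40, hours_this_week=10))  # ~20.3
```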

Kathryn Taylor argues “new laws and enforcement efforts should prohibit employers from deploying dark patterns against employees…employment dark patterns laws should aim to eliminate employee-facing interface designs that embody any of the following commonly recognized attributes of dark patterns” that could be:

  • Asymmetric (making options that are detrimental or less appealing to employees more visible while making more beneficial options harder to access);
  • Restrictive (overly reducing or eliminating choices that should be available to employees);
  • Disparate in treatment (purposely disadvantaging a subset of employees);
  • Covert (hiding the mechanism of influence over employees);
  • Deceptive (inducing false beliefs about some aspect of the employment); or
  • Information hiding (obscuring or delaying access to information essential to the employment).

For example, Veena Dubal notes Amazon “does not directly employ the delivery workers. Rather, the company contracts with Delivery Service Providers (DSPs), small businesses that Amazon helps to establish. In this putative nonemployment arrangement, Amazon does not provide to the DSP drivers workers’ compensation, unemployment insurance, health insurance, or the protected right to organize. Nor does it guarantee individual DSPs or their workers minimum wage or overtime compensation…Instead, DSPs receive a variable hourly rate based on fluctuations in demand and routes, along with bonuses based on a quantified digital evaluation of on-the-job behavior, including service, safety, [and] client experience…is it the structure of Amazon’s payment system, rooted in evasion of employment law, data extraction from labor, and digitalized control?…Scholars and advocates have raised concerns about the growing limitations on worker privacy and autonomy, the potential for society-level discrimination to seep into machine learning systems, and a general lack of transparency on workplace rules”.

The enabler of this data-intensive approach is workplace surveillance. Zephyr Teachout points out “the market for employee surveillance systems is booming. Monitoring systems include thumb scans, identification badges, closed circuit cameras, geolocation tracking, and sensors on tablets and vehicles. Software flags not only a decrease in productivity but also the expression of negative attitudes. Tools record keyboard strokes and conversations, which are then analyzed to rate an employee’s emotional status based on word patterns and content. The company Cogito, for instance, sells software to call centers that records and then analyzes calls between employees and customers, with a real-time behavioral dashboard that tells the employees when to be more empathetic, when to pick up the pace, and when to exude more confidence and professionalism. Supervisors have dashboards summarizing these performance metrics, which are used to determine pay and retention…In addition to making worker’s pay much less predictable, the potential spread of these management techniques has broader democratic implications. They will increase economic and racial inequality, undermine labor solidarity, and put workers in a profoundly humiliating position in relationship to their boss, one where worker speech and autonomy are highly circumscribed”.

There are adverse macroeconomic consequences. Alternative work has diminished labour bargaining power, as reflected by a decline in the labour share of income. Absent government regulation, it is likely that alternative work is turning into a form of precarious labour. In the meantime, rising costs are reducing affordability and deterring household formation, as uncertain future income streams make it difficult for workers to gain access to bank loans to purchase houses, while household vulnerability rises due to the lack of insurance and retirement savings coverage.

Nonetheless, in an environment of abundant information and rapid reactions, demand-driven dynamic pricing models are here to stay and it does not make sense to revert to fixed price regulation. It should be noted that dynamic pricing and algorithm-driven models have the benefit of expanding both demand and supply. The problem though is not price volatility but rather discriminatory, exploitative and abusive behaviour. Abusive behaviours can be mitigated through conduct regulation. “Regulators can choose to oversee the data and AI that drive platform pricing algorithms as well as impose transparency and choice requirements to neutralise the platforms’ monopsony advantage with a view to ensuring fairness for workers and customers. Rather than directly target data and AI, regulators can instead focus on outcomes. Regulators can establish public good objectives such as labour and environmental protections and/or to reduce traffic congestion and accidents. It can then design the regulations and set KPIs for platforms. Ensure there is competition to the platforms by promoting competitors, through inter-operating rules, or even by governments setting up their own platforms”[18].
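As a minimal sketch of regulating outcomes rather than the algorithm itself, the guardrails below bound what a pricing algorithm can do to customers and workers while leaving its demand-driven logic intact; the caps and shares are illustrative assumptions, not actual regulatory parameters.

```python
# Hypothetical sketch of conduct/outcome regulation layered on top of a
# demand-driven pricing algorithm: the platform keeps dynamic pricing, but
# regulator-set guardrails bound the outcomes. All thresholds are invented.

SURGE_CAP = 2.0          # assumed maximum fare multiplier
MIN_WORKER_SHARE = 0.70  # assumed minimum share of the fare paid to the worker

def regulated_fare(base_fare: float, demand_multiplier: float) -> tuple[float, float]:
    """Returns (customer_fare, worker_payout) after applying outcome guardrails."""
    surge = min(demand_multiplier, SURGE_CAP)  # cap exploitative price spikes
    fare = base_fare * surge
    # Floor on the worker's share, regardless of the platform's commission rule.
    payout = max(fare - base_fare * 0.5, fare * MIN_WORKER_SHARE)
    return round(fare, 2), round(payout, 2)

if __name__ == "__main__":
    print(regulated_fare(10.0, demand_multiplier=3.5))  # surge capped -> (20.0, 15.0)
    print(regulated_fare(10.0, demand_multiplier=1.1))  # worker floor binds -> (11.0, 7.7)
```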

We should also overcome our fears that humans would be replaced by AI or machines. I think it is great if robots and AI can be utilised to do as much “work” as possible. In this context, the bigger challenge is to reorganise or reinvent work for the information society. Rather than delay creative destruction, governments should focus on accelerating the creation of new economic activities – whether it involves the repackaging of physical work or activities in the metaverse – with a view to generating new jobs, income streams and social interactions.

In this context, an effective problem-solving approach would likely involve synthesising the best ideas from capitalism and socialism to address social contract challenges such as aging demographics, establishing a baseline for the bottom 40%, strengthening oversight of gig employment, deploying technology to expand low-cost access to high-quality education, healthcare and finance, and strengthening community interactions and development.

Finally, there is a need to address the “loss” of human capital arising from the changing relationship between work and humans. There are valid concerns that machines and AI are diminishing human accumulation of knowledge and detaching humans from the pleasures of creativity, problem-solving, decision-making and ownership (of the tasks associated with work). The loss of valuable social features such as apprenticeship, career progression, interactions and relationships have social exclusionary effects that could lead to an increase in social problems and crime.

Similarly, there is a need to address the cultural issues arising from a generational clash[19] in the information society, where the younger generation is socially atomised. “Atomization means an individual cannot establish meaningful relations to other individuals. Atomized individuals do not have a meaningful language to describe their experiences or articulate an identity. An atomized individual no longer knows what criteria they should follow when making judgements or decisions. There is no concrete relation between the self and collectives, be the collective a family, a local place, or the nation”. “This generational clash defines China’s politics. The elders have power and the youth are expected to propel the economy forward, innovating but never forgetting the struggle. In truth, the younger generations were born into a society profoundly more individualistic than their elders’…This manifests as a political problem due to China’s birth strike – fewer children are being born, which will make China feel older and less energetic, and transform the raw material of economic growth: humanity itself…With China building out an ambitious fourth industrial revolution, some factories are hoping to replace the missing workers with robotics, big data-driven efficiencies and AI”.

The future of democracy

Over the next six years, the biggest challenge for the West will be whether its governments can defend freedom in their democratic societies without resorting to draconian measures. In this regard, Luis E. Santana, Inga Trauthig and Samuel Woolley see “the rise of artificial intelligence (AI) presents unprecedented challenges to the concept of citizenship in the twenty-first century…potentially, democracy itself is at stake…Today, digital technologies and the internet have expanded the anthroposphere, the realm of human interaction, to include physical and virtual spaces. In essence, citizenship is progressing – and struggling to both keep existing rights in the face of new challenges and acquire new ones…But the shift to digital has not erased social inequalities – quite the opposite. Indeed, the rise of AI may exacerbate social inequalities related to digital citizenship…Our current digital geography mirrors offline social inequalities, making it difficult for historically marginalized groups to create content and access reliable information. Those with limited proficiency with technological tools find their access to online public services restricted and their abilities to discern between real and false content online hampered, which affects their capacity to exercise citizenship”.

Luis E. Santana, Inga Trauthig and Samuel Woolley argue “a timely example is the torrent of false images emerging in the 2024 US presidential race. It’s telling that the various interpretations of digital citizenship all advocate for individuals’ ability to fully participate in digital society as a premise. But these challenges in the current digital public sphere are obstacles to informed, fact-based participation by all…While our digital societies grapple with these challenges, AI introduces further complexities. What will happen when it is not only individuals but also machine entities asserting citizenship rights online? How will regulators respond if AI-powered corporations come to wield influence surpassing that of entire cities or nations…Indeed, digital citizenship is at risk of perishing in the cradle, due to the still largely unregulated applications of AI. Moreover, digital citizenship can be engineered, exploited or even rendered harmful through AI-driven online manipulation. Citizenship – like democracy – does not necessarily evolve in a linear, progressive fashion; it experiences setbacks, obstacles and attacks. But the trajectory can still change for the better”.

Governments and elites are already experiencing a sense of “loss of control” as traditional orders are disrupted. The government response is important as their policies will shape the future of democracy. There is a risk that, out of desperation, governments might resort to muscular policies in their attempts to re-establish societal order and, in the process, end up endangering freedom itself; particularly by imposing draconian restrictions on the freedom of speech and thought. I explore several critical issues on free speech, censorship and transparency, and democracy (end-state).

  • Free speech and democracy[20]

Nothing symbolises democracy more than the right to free speech. Western societies cherish their citizens’ right to open debate at the public town square. But freedom of speech is being threatened by social media’s viral overload of misinformation and inflammatory content such as rumours, conspiracy theories, fake news, propaganda, hate speech, smears, and spam accompanied by abusive conduct such as blackmail, bullying, doxing, identity theft and witch-hunts. Matt Stoller notes other “bad acts include Grindr knowingly facilitating sexual violence, credit reporting agencies trying to avoid being regulated for mistakes in credit reports, or Facebook fostering ethnic violence. Identity thieves use social media sites to steal money, with the IRS flagging more than 1 million tax returns for identity fraud in 2023, with information largely captured on LinkedIn or Meta’s sites. Today, scammers use Facebook to impersonate soldiers so as to start fake long-distance relationships with lonely people, eventually tricking their victims into sending their ‘boyfriends’ money. And social media firms use algorithms to target gambling addicts with social casino apps. These platforms aren’t liable for any of it, because of the weird and creepy reading of Section 230 combined with a corporatized First Amendment”.

Unlike in authoritarian societies, “democratic societies are prone to information disorder and destabilisation due to the freedoms they afford to speech and debate. As democratic societies become informationalised, they become more modular, autonomous, complex, diverse and transient. This erodes the binding forces of traditional relationships, values and institutions. Internal divisions are reinforced by a growing sense of insecurity and dissatisfaction among citizens in rich and unequal societies. The narrative wars fan fears and distrust. Distrust of governments, politicians and media unravels societal consensus on a wide range of issues and values such as vaccines, masks, lockdowns and election fraud, abortion, LGBTQ, immigration, climate change and diversity”.

But does strictly adhering to free speech commitments mean allowing misinformation to flourish and wantonly damage social cohesion and stability? There are adverse side effects from the relationship between narratives and social movements. The narrative “reinforces polarisation by nationality, race or religion and triggers strong team dynamics (colour revolution, cancel cultures) to force individuals to choose sides. The cause is often hijacked by extremes”. Information overload from an “overdose of warnings and accusations can lead to saturation and diminishing returns. Audiences become exhausted, everyone’s credibility is undermined, public sensitivity is numbed and the social movement energy is dissipated. There are also dangers from believing too much in your own rhetoric as it can cloud your judgement. At some point, the gap between the narrative and the on-the-ground reality becomes too wide and reality starts to sink in. There can be rebound effects as disillusioned audiences turn on the authorities”.

In the past, governments could more easily achieve a balance between allowing free speech to flourish and maintaining control of the narrative because they only needed to control traditional media, which was the main distribution channel for narratives. But traditional media has now been surpassed by social media such as WhatsApp, Google, Facebook, Instagram, Reddit, Telegram, X and TikTok.

Governments are concerned as the “decline of the traditional news industry means government narratives were not getting distributed effectively and this implies a loss of control over the public narrative. But attempts to resuscitate traditional media are likely to flounder. The economics of content favours sponsored push content with social media (influencers, cyber warriors) incentivised to generate political and corporate-sponsored narratives that crowds out independent and objective content which suffer cost inefficiencies, information overload and attention limitations. To survive, the remaining traditional news media has succumbed to commercial imperatives by becoming part of the echo chambers. Unsurprisingly, this has affected public trust in traditional news media”[21].

Governments have leaned on platforms to maintain control over narratives. Caitlin Chin argues “the sustainability of news cannot fall on publishers alone; large digital platforms must share responsibility to understand and address their sizable impacts on society. Yet search engine and social media companies operate with relatively few U.S. legal requirements to build fairness and transparency into algorithms, protect sensitive personal information when serving personalized advertisements, engage in ad-tech practices that promote fair competition with news publishers, and mitigate the spread of harmful content online. Without bright-line U.S. regulations for technology companies, the recent acceleration in AI adoption presents at least four major risks that could severely undermine both news availability and public access to information in the long term”.

The first risk is that “Google, which controls approximately 92 percent of the search engine market worldwide, sends news websites approximately 24 billion views per month. This may account for over one-third of publishers’ online traffic, which is a critical metric for digital advertisements…LLMs could increase the gatekeeper power of dominant search engines that aim to maximize user engagement or screen time on their platforms. Should LLMs direct fewer readers to click through Google to external websites, digital news organizations risk losing a major source of online visibility, audience engagement, and advertising revenue…individuals who cannot afford multiple newspaper subscriptions may be more likely to believe misinformation and lower-quality content – whether human- or AI-generated – that they view on social media or search engines for free. In a more fragmented internet, people are more likely to exist within their ideological bubbles, as chatbots cannot offer diverse perspectives like a human journalist can. Social media algorithms, which typically recommend or promote content based on past browsing activity or personal interests, further reinforce echo chambers based on user engagement and not the common good”.

The second risk is that “social media platforms are using AI to automatically rank posts, which enables the mass de-prioritization of legitimate news outlets in favor of fake, spammy, or manipulative user-uploaded content…In addition to text, the widespread availability of generative AI tools allows any internet user to easily post doctored images, video, and audio online, which could facilitate the impersonation of newsrooms or even threaten the safety of individual journalists…There are no U.S. federal laws that specifically regulate deepfake AI technologies, so every social media platform, app store, search engine, and online forum treats this content differently. Meta’s policy is to remove synthetic media that would likely mislead someone into thinking that a subject of the video said words that they did not or that merges, replaces, or superimposes content on a video, making it appear to be authentic. However, the company exempts parody or satire. Furthermore, as deepfake imagery becomes more realistic and commonplace, synthetic media policies will likely become progressively difficult to enforce. Content detection algorithms must continuously advance, too; otherwise, the internet ecosystem may become a more perilous space for public-facing journalists, with audiences who are less receptive to the information they convey”.

Caitlin Chin points out the third risk is that “chatbots cannot perform the same functions as a human journalist, but news executives may still leverage AI to streamline operations or justify workforce reductions in the short term…LLMs like ChatGPT are best equipped to automate specific functions like summarizing documents – but not advanced editorial skills like relationship building with sources, original analytical thinking, contextual understanding, or long-form creative writing. LLMs predict patterns and word associations based on their training datasets but, during large-scale deployments, are known to contain factual inaccuracies or even generate fake stories altogether. In February 2023, Penn State researchers also found that LLMs can spit out plagiarized text, whether by inadequately paraphrasing or copying training material verbatim. Such behavior is doubly problematic for some models, like ChatGPT, which do not attribute or cite sources by default. In addition, since many LLMs build upon text from online websites and forums – many of which have historically excluded or exhibited hostility toward individuals based on factors like gender identity, race, or sexual orientation – their automated outputs can reproduce broader societal biases…Despite these shortcomings, some corporate news executives may leverage LLMs to cut expenditures in the short term and not simply to boost productivity or create new value in the long term. When G/O Media, the parent company of Gizmodo and Deadspin, published AI-generated entertainment articles in July 2023, it attracted high public backlash over their many factual errors, lack of human editorial oversight, and overall substandard quality of writing. CNET paused its use of LLMs in January 2023 after a significant number of errors and plagiarized language were detected within its AI-generated articles, which the outlet admitted to having quietly published for months without clear disclosures…In March 2023, OpenAI, OpenResearch, and University of Pennsylvania researchers estimated that LLMs could affect job functions for 80 percent of the U.S. workforce – with writers, reporters, and journalists among the most vulnerable. Moreover, MIT, London School of Economics, and Boston University researchers detected a negative correlation between AI adoption and job recruitment between 2010 and 2018: for every 1 percent increase in AI deployment, companies cut hiring by approximately 1 percent. It is hardly surprising that CNET staffers cited long-term uncertainty from AI as one reason for unionizing in May 2023 or that the Writers’ Guild of America (WGA) proposed banning AI in screenwriting and prohibiting creative material from training algorithms when striking the same month”.

Caitlin Chin notes the fourth risk is that “generative AI can increase the prevalence of spammy or false content online, which obscures legitimate news and funnels advertising dollars away from traditional publishers. While present-day LLMs cannot compose original prose comparable to that of a highly skilled journalist, they are well suited to churning out low-cost, low-quality, and high-volume clickbait. While clickbait production does not help most traditional newsrooms, it benefits made-for-advertising (MFA) websites, which are spammy, traffic-driven sites designed solely to maximize page views and advertising dollars. As of August 2023, analytics firm NewsGuard discovered at least 437 websites that deployed generative AI to churn out large quantities of fictitious articles – many containing unsubstantiated conspiracy theories, unreliable medical advice, or fabricated product reviews…MFA websites provide no material public benefits but, without proper safeguards, could create significant negative externalities in an AI era. LLMs are designed to generate outcomes at scale – a perfect fit for content farms whose sole purpose is search engine optimization (SEO) through nonsensical keywords, summarized or verbatim text from news sources, and highly repetitive spam. These articles often list fake authors or anonymous bylines and appear to lack human oversight. The rising prevalence of AI-generated spam could decrease public trust and understanding of critical current events, especially if it distorts the market for real news and obscures legitimate newsrooms as centralized sources of information. It will become exponentially harder for human journalists to disseminate trustworthy information when the internet ecosystem is stuffed with bots. Content farms divert more than user attention away from legitimate news websites; they also cost valuable digital advertising dollars. The AI-generated websites that NewsGuard detected were stuffed with programmatic advertisements, including from major brands like Subaru and Citigroup—almost all of which were automatically routed through Google’s Ad Exchange. Google Ads maintains policies against servicing spammy automatically-generated content but does not publicly reveal the results of its placement algorithm or content review outcomes. In June 2023, an Adalytics study showed that Google frequently served video ads on lower-quality clickbait or junk websites without the awareness of its buy-side advertising clients. The same month, the Association of National Advertisers estimated that about $13 billion in digital advertising revenue is algorithmically funnelled into clickbait MFA websites, which amounts to approximately 15 percent of the total $88 billion pie that marketers spend on automated ad exchanges every year”.

Luis E. Santana, Inga Trauthig and Samuel Woolley argue “the emergence of generative AI chatbots that are capable of mimicking human behaviour raises concerns about the legitimacy of online interactions. These AI personas will be able to imitate genuine citizens and, with that, perhaps acquire political influence. Deepfake video and audio recordings already pose a threat to the integrity of online information – undermining the digital public sphere. These AI-created pollutions of the online environment can skew the online conversation further and make digital worlds the very worst place for citizens to inform themselves – obstructing democratic deliberation, and further impeding the development of literate digital citizenship…the dominance of large social media platforms has concentrated significant power in the hands of a few. These platforms leverage data, algorithms and potentially AI to influence not only online spaces but also physical ones through policy and business models. And, they are the best positioned to train new AI with all their users’ accumulated data. This concentration of power (data) could be further amplified by AI, potentially before effective regulations or counterbalances are established. Some experts…suggest that large tech companies are already trying to amass power before regulation comes. Additionally, as AI and hardware become increasingly intertwined, tech giants such as Microsoft may leverage this to solidify their dominance in hardware and software markets…AI is poised to make challenges to democracy more potent – in this case, by interfering with the developments of digital citizenship worldwide. This risk calls for urgent deliberations and reassessments of digital citizenship practices – and includes the need for innovative citizen technologies that factor in emergent methods of voter manipulation and deal with new social and digital inequalities that will affect the enjoyment of rights. Civil society and the global research community should consider what practices of digital citizenship work to uphold democratic principles and ensure the fulfilment of rights within this rapidly evolving landscape”.

Andy Kessler explains “it’s a new world. Large language models churn out speech by the mile. Chatbots hallucinate and write strange things, spewing statements they think are true but are false – like many politicians. Both OpenAI and Microsoft have been sued for defamation for their chatbot’s output. Congress and courts should label generative AI companies as publishers, which they are, with all the ensuing copyright and liability issues. They will fight it tooth and nail, but let’s call a bot a bot – they don’t host, they publish. Sen. Tom Cotton…TikTok is a tool of Chinese Communist propaganda. That expressive activity is a long way from human information content publisher. Let’s treat them this way as publishers. Russian bots feeding us lies aren’t human either. We need laws structured to shut them down…Sadly, whether government can pressure social-media companies to censor is still an open issue. For lack of standing, the Supreme Court recently tossed a lawsuit trying to limit such interference. Thirty years is a long time in Techworld. The norm of the mid-1990s was big clunky monitors, 28K dial-up modems and America Online. With no clue about smartphones or 5G, legislation written then inadvertently spawned the digital world we know today, good and bad. This time, legislators should think hard about a future world of chatbots and machine-learning algorithms, with laws that affect billions of digitally connected users. I can easily imagine an AI bot defaming whoever it wants to or, gulp, whoever dares speak against it”.

AI is turning out not to be the panacea for overcoming human weaknesses but rather a tool that amplifies them. AI seems to learn and replicate unethical behaviours such as lying, cheating and bullying; abetting fraud, blackmail, discrimination and exploitation; and systemically propagating misinformation and falsehoods. This is because AI is goal-oriented rather than imbued with values to do the right thing. AI is useful, as all systems are, but we need to know its weaknesses to use it well.

In this regard, authoritarian governments have large censorship apparatuses and a firm grip on narratives and are not bothered by democratic challenges. In contrast, it is a much more uncomfortable situation for democratic societies, with governments forced to manage the trade-offs between free speech and strategic control of narratives. Several governments – Australia, Brazil, Canada, EU, India, UK and US – have either proposed or are considering legislation to compel global platforms to pay domestic publishers for use of their content. In this instance, the responses of global platforms have varied depending on the country. Sometimes they ignore the directives, sometimes they comply, and in other instances they threaten to shut down or reduce news flow or to withdraw.

Courts are also taking a stricter view by rescoping responsibilities and liabilities. Matt Stoller notes “courts have begun narrowing Section 230, with such cases as Lemmon vs Snap, which began to treat harmful features on tech platforms under product liability law instead of speech law. Still, all of these decisions are at a circuit court level…NetChoice, a trade association of big tech trade companies, sought, and got, First Amendment protection for curated feeds, which meant certain aspects of how platforms operate were beyond government regulation. But be careful what you wish for, because the Third Circuit ruled that now everything said on those feeds is now the responsibility of the tech platform. The Court held that a platform’s algorithm that reflects editorial judgments about compiling the third-party speech it wants in the way it wants is the platform’s own expressive product and is therefore protected by the First Amendment. Given the Supreme Court’s observations that platforms engage in protected first-party speech under the First Amendment when they curate compilations of others’ content via their expressive algorithms, it follows that doing so amounts to first-party speech under §230, too. Because TikTok’s algorithm curates and recommends a tailored compilation of videos for a user’s FYP based on a variety of factors, including the user’s age and other demographics, online interactions, and other metadata, it becomes TikTok’s own speech. And now TikTok[22] has to answer for it in court. Basically, the court ruled that when a company is choosing what to show kids and elderly parents, and seeks to keep them addicted to sell more ads, they can’t pretend it’s everyone else’s fault when the inevitable horrible thing happens…This case is going to be catalytic. If/when this goes to the Supreme Court…This Is the Case That Could Destroy/Save the Internet. And now plaintiff lawyers will think about the litigation they can bring. Without Section 230 as a shield, at least in the Third Circuit, is Facebook now liable for facilitating military scams? Are the big tech platforms going to have to face claims for helping violate state gambling laws or being a party to mass identity theft or sexual assault or child abuse? What about garden variety defamation claims they have been able to ignore until now? These are the billion-dollar questions”.

  • Censorship – Is the cure worse than the disease?

Governments are taking the lead to contain the flow of vitriolic online content that is destabilising society as well as taking measures to protect individuals and businesses from related content abuses such as bullying, spams and blackmail. In this regard, most governments either lack the capabilities or are restricted by laws from directly moderating content. Therefore, the content moderation task is usually delegated to platforms[23] which are designated as the gatekeepers or front-line regulators. The platforms rely on a variety of tools such as fact-checking, AI, content labelling, adding friction to content sharing, increasing user control over content, and reducing the ability of content producers to earn income.

Laws are being stiffened to punish platforms for content moderation failures. Ben Weingarten notes that under the United Kingdom’s Online Safety Act, the regulator Ofcom “will be able to charge firms up to 10% of their global annual revenues should they fail to take robust action against content that includes racially or religiously aggravated public order offences, inciting violence – or apparently even raising concerns about illegal immigration.  In September, U.K. officials held talks with X regarding the spread of misinformation and other harmful content…Australia, too, recently indicated it will fine platforms up to 5% of their global revenue should they fail to prevent the spread of misinformation online, specifically around elections and public health”.

Governments have also been taking dramatic actions against platforms and founders. Ben Weingarten notes “Brazil’s government recently took the extraordinary step of banning X over the platform’s refusal to comply with orders from its Supreme Court that it take down the accounts of former President Jair Bolsonaro and his supporters in a sweeping effort to curtail the speech of the country’s populist-nationalist right. This marked the climax of a fight in which X’s legal representative faced the threat of arrest, Musk found himself under criminal investigation, and Brazil seized funds from his Starlink satellite Internet service companies’ accounts to satisfy fines…in September, X agreed to comply with orders from Brazil’s Supreme Court…Under the same pressure to remove disfavored content creators, YouTube alternative Rumble announced it would be leaving Brazil last December”. “French authorities arrested Telegram founder and CEO Pavel Durov in August for allegedly permitting criminal activity on the messaging application and refusing to turn over information or documents with investigators pursuant to law, drawing outcries from Musk and other free speech proponents. Telegram, too, eventually agreed to comply with government requests for user data concerning alleged crimes.”

However, it is not evident that the current approaches to content moderation are achieving the goals of reducing misinformation or nullifying its impact on the public. In this context, there are several conundrums. First, the biggest challenge is the determination of facts or truths. In areas such as academia, religion, culture, ideology, politics and even science, many issues are hotly debated; there are few absolute truths and most assertions are a matter of interpretation. Second, the fact-checking process can be tedious, lengthy and costly. It is also likely the “appointed” fact-checking institutions will lack independence as funding will most likely come from interested parties that want them to promote their views.

Lastly, it is difficult to set the boundaries to prevent censorship from becoming sweeping and crossing the red lines protecting the freedom of speech. In this regard, “today’s efforts to control narratives make McCarthyism of the 1950s look inconsequential. Then censorship was narrow; a witch hunt to root out communists. Today, due to information overload, censorship efforts extend across a broad swath of issues – hate speech, human rights, climate change, liberal democratic values, vaccines, masks, lockdowns, immigration, LGBTQ, and geopolitical rivalries. In this context, the rise of woke and cancel cultures has led to attempts to ban books, rewrite literature or educational texts and to make movies or brands conform to liberal democratic values. Recently, a conservative counter-movement has emerged with opposing narratives…The narrative conflicts over such a broad range of issues reflect growing intolerance of differences in democratic societies”[24].

“There are concerns that a prolonged narrative war would greatly diminish the public’s ability to differentiate between truth and falsehoods. This leads to a widening gap between narrative and reality which will impede governments and societies from facing up to and resolving their real challenges. This is why it is essential that narratives, especially government narratives, are subject to scrutiny, debated and tested to ensure they are robust. This requires a vibrant democracy which in turn requires tolerance of competing and conflicting narratives. Intolerance of debate reflects elitist distrust of citizens. If you can’t trust citizens to understand information and make rational choices, can there be a democracy?” “Censorship and political prosecution have chilling effects on free speech. But history demonstrates that attempts to repress free speech are likely to fail or will eventually backfire. While the concentration of traditional media and social media platforms facilitates stricter gatekeeping controls, the peer-to-peer structure of networks means misinformation or non-aligned narratives can easily find outlets to elude censorship”.

“The current narrative war reveals a great deal about the state of democratic societies. It reflects democratic societies are distracted, full of doubts, insecure and divided on their future. The give-aways are the stagnant conversations on stale issues and the desperation to suppress conflicting narratives. In this context, the path towards restoring information order requires evolving a breakthrough narrative built around a new vision of society. But it is not easy to change the conversations as there tends to be substantial resistance to breakthrough narratives. Participants are generally reluctant to depart from script. Participants are well versed with the playbook and can easily recite the talking points. Often the conversation helps put food on the table…The private sector enthusiastically blames over-regulation for hurting competitiveness because it allows them to shift the blame to the government and to extract even more concessions (less taxes and regulation, more protection and incentives). At other times, the conversations are intended to navigate the path of least resistance and to minimise friction. The drawback of policy conversation reruns is that the same old suggestions will be repeated. Hence, the tendency is to repeat well-worn narratives (e.g. the US budget crisis, the Cold War) so that participants can go through the motions of solving familiar problems and crises. It is extremely difficult to shift the policy conversation. Nobody knows where the new conversations could lead. They could sow disorder, undermine legitimacy and screw up team dynamics. Participants could lose their bearings, suffer missteps, become disenfranchised or upset their income apple cart. New conversations are thus greeted with suspicion and resistance. But there is no way around the fact that discovering new solutions requires having new conversations. As a result, the public wonders if policy-makers and politicians are out of touch. Widespread public dissatisfaction increases the risks of populist policy reminiscence; a choice between bringing back the good old days of the middle class, manufacturing jobs and Western supremacy or the bad old days of taxing the rich, breaking up platform monopolies or reviving the Cold War”[25].

Are Western governments in danger of crossing the red lines of a democratic society? I think the recent display of aggressive censorship constitutes an unreasonable display of absolute power; i.e. with governments and agencies threatening disproportionate and harsh punishments (jail terms, hefty fines) or actions (surveillance, prosecutions, de-platforming, loss of access to services, bans) against authors, content generators, influencers, publishers (platforms, their owners and management) and even individuals forwarding messages. Aggressive censorship increases the regulatory burden and criminalises debate and dissent, while vindictive over-reach borders on a form of thought control.

It is important to remember Voltaire’s[26] dictum that “I do not agree with what you have to say, but I would defend to the death your right to say it”. Censorship is a negative and dangerous way of tackling misinformation. Censorship diminishes the difference between a democratic and an authoritarian society. In particular, governments should refrain from turning “the right to speak (or publish) without fear or favour” into an information crime.

From a broad perspective, the tribulations of content moderation are a natural and minor aspect of managing information overload and disorder. Given this, my view is that governments should reject broad and aggressive censorship approaches; i.e. censorship should be kept to a minimum. Governments should rely more on alternative approaches to restore information order that are more compatible with a democracy. Examples include focusing on improving the quality of narratives – by increasing the availability of data and analysis to support well-informed debate – and displaying greater tolerance of criticisms and choices. The emotional aspects of narratives should also be managed to tone down anger and vigilante-type social movements. Respectful debate requires mutual respect, common sense and compassion. “Overall, societal change works best when it is driven by both visionary and critical narratives. Governments need to be cautious to ensure they do not inadvertently cancel out informed debate, analysis and criticisms. We should remind ourselves that the real Orwellian threat to a democratic and informationalised society is centralised control of information and thoughts”[27].

Governments should therefore aim to strengthen the content ecosystem to enable the development of breakthrough narratives, counter potential threats from misinformation, rebuild credibility and strengthen public trust. The starting point is to revamp the legacy government communications. A core unit should be established to map the production and distribution of government content across traditional and new channels, identify the use of external resources; streamline approval processes; recommend at-source communication approaches; assess messaging quality; analyse channel effectiveness and gaps (whether government messages are reaching its intended audience and its impact); and recommend initiatives to increase usage and the value add of government content. This could be complemented by initiatives to improve the public’s knowledge and appreciation of government policies and administration, increase the levels of transparency, authenticity and accountability, increase the usage of government information and ensure the records of public activities – e.g. courts and local governments – are well maintained and accessible to the public. Governments should also aim to upgrade one-way communication into an engagement and participative process, consistent with social media trends.

In relation to this, “the establishment of a government platform is a key intervention that will change the dynamics of news intermediation, delivery and engagement. A government platform, modelled on the best features of the global platforms, can be used to customise delivery of government and local community news and facilitate aggregation strategies. In addition, the platforms can promote local content and expand reach to local audiences. This will provide a base to establish a national network to buy and sell independent pull content. The platform can also provide free or inexpensive AI tools (for editing, blogging) to help individual writers mitigate cost constraints. A government platform can also facilitate the monetisation and distribution of value in favour of individual or small firm content producers. In this regard, the government platform offers high levels of community engagement, authenticity and copyright protection. It can attract high-value advertising revenue and the ecosystem can be organised to promote co-branding and spin-offs to ensure the bulk of value is channelled to individual content producers. Lastly, government platforms can be used to transform the relationship with citizens from being passive recipients of government communications into active and responsible participants in a democracy”. “In this regard, the ability of the news industry to operate at the front line of truth should be supported by creating a conducive environment for investigative and community reporting, and by expanding quality sources of information. This should be complemented by initiatives to open up community bubbles, promote informed and active citizen participation and raise the levels of transparency and authenticity. In reimagining the industry, it is important to reinforce the role of the news industry as a source of reality and as the conscience of a democratic society”.

In this regard, the decline in public trust can be stemmed by creating “blue-ribbon” zones of credible content, backstopped by high levels of authenticity and transparency. These zones would be regulated and subject to strong oversight – similar to regulated exchanges and financial intermediaries – to build confidence. There would be greater disclosures on content contributors and ethical constraints on marketing and commercialisation. The independence of safe-zone content should be respected by content moderators and safeguarded from censorship. There are elements of this in Elon Musk’s remake of Twitter, with emphasis placed on promoting independent thinking rather than talking points; debate rather than memes; credibility and trust rather than likes. In this context, authentic and transparent zones would co-exist, complement and compete with unregulated or lightly-regulated internet channels that offer privacy or anonymity.

Another key objective is to strengthen the sources and quality of domestic information. “To a large extent, there has been a general debasement of the quality of domestic information in circulation. Demand shifts (from consumption constraints such as overload, attention limits and consumption behaviours) and the disintermediation of data and advertising revenues by platforms were allowed to erode commercial incentives for the production of pull content (defined as feedstock domestic information such as community content, public records and analysis). The shrinking production of pull content reduces information flows that function as a crucial public good that improves the quality of narratives, informs public debate and acts as a bulwark against fallacious and fake narratives and data. The public benefits of pull content far exceed their costs and are essential for a vibrant democracy. The task of producing credible information has become more important with the rising threat from fake content. Only governments can lead this charge. This is partly because they are content regulators and are positioned to address the gaps left by the private sector in relation to content as a public good. At the same time, governments are large producers of information such as announcements, policies, rules, statistics, research, minutes and administrative records. Hence, they can consider how to strengthen the production of, and improve access to, their information as part of the reorganisation of their communications. In particular, governments need to intervene decisively to overcome the monetisation (revenue and cost) constraints impeding expansion. In this regard, pull content can largely be regarded as non-profit due to the lack of commercial viability.

First, governments can consider facilitating mutual consumption and monetisation to strengthen the pull content ecosystem. This is because the major producers of information also tend to be the major consumers. In this regard, consumption of content tends to be bundled into pricing (often as a freebie), subsidised (through advertising), cross-subsidised or paid for indirectly by third parties. Second, the government could designate agencies to anchor demand for existing and new content products, in the form of funding or purchase commitments, by providing seed money, grants or incentives to support content related to public interest activities. These commitments can be used to create income opportunities for private firms and individuals to produce content on public and community activities. Purchase commitments can be supported by private sponsorship and crowdfunding arrangements. Third, governments could explore synergistic opportunities by forging closer linkages between government news, education, civic participation and collaboration with the private sector. Journalistic skills have long learning curves. Content skills should be given greater emphasis in the school curriculum. Apprenticeship programs should expand to include writing reports on community events and public interest issues, content moderation (fact-checking, editing) and social media interactions. Ex-journalists could be recruited to teach and develop the educational and apprenticeship programs.

The reimagining of the news industry is an important exercise; not just for addressing the consequences of the decline of the traditional newspaper industry but also to recalibrate policies in recognition of the information forces reshaping the landscape and to reset the vision for the news industry. Ultimately, governments should aspire to evolve a news ecosystem that supports the growth of the information industry; increases citizen engagement and participation; and raises the level of transparency and authenticity to promote an information democracy”[28].

  • The transparency challenge[29]

Apart from authenticity, transparency is the other antidote to misinformation. But the tendency is to take transparency for granted. There is an inherent contradiction in that everyone wants others to be transparent but seeks privacy for themselves. Whether they are government officials, politicians, businesses, NGOs or individuals, few want unfavourable content that makes them look bad or creates trouble for them. This results in a trend towards sanitisation.

It’s not so easy to be transparent. China is an interesting case study. Vincent Brussee and Kai von Carnap note “weeks after becoming the country’s top leader, President Xi Jinping stipulated a transparent government was the key to fighting corruption. Soon thereafter, he popularized the slogan disclosure as the norm, nondisclosure as the exception. This had an immediate positive impact on transparency. During Xi’s first full year in power, the average delay between issuing a State Council policy document and publishing it online dropped from over two years to only 91 days. Similarly, the National Bureau of Statistics increased the number of statistical indicators – indicators like population data, economic statistics – by 673% in the immediate years after Xi’s ascension to power. In recent years, policy documents related to Open Government Information show a shift in priorities. In 2022, OGI work plans emphasized improving the OGI confidentiality review system, strictly conducting confidentiality reviews itself, and preventing leaks not just of state secrets but also other sensitive information. In 2023 the new Work Regulations for the State Council removed all references to transparency and to disclosure being the norm, among others. It instead replaced these with more restrictive calls to disclose government information according to law, timely, and accurately”. Similarly, “the first years under Xi Jinping saw remarkable improvements in transparency, with China’s National Bureau of Statistics publishing over tenfold more indicators than pre-Xi. However, this pattern has gradually started to decrease since 2020, with a 3.2 percent decrease between 2020 and 2022. In addition, international news outlets have suggested that even fewer of these indicators have been shared with international data service providers like CEIC”.

Recently, China seems to have backtracked on transparency. Vincent Brussee and Kai von Carnap think “geopolitical tensions are a principal driver behind the disappearing data. China’s authorities are concerned that online information can be used in ways to harm its development or discredit its policies. Controlling the sources foreign observers can use to study and analyze the country is one way for Beijing to control the narrative”. “Technology policy seems to have the most rapidly increasing restrictions. While topics like human rights had always been sensitive in China, competition over science and technology is driving Beijing to reassess disclosure of key sources in this field”. “Especially since the Covid-19 pandemic, China’s leadership has expressed heightened concern that foreigners are using information left in the open on China’s internet to smear the country…controversial theories about a potential Covid-19 lab leak based on (often mistranslated) information from the Chinese internet”. Another example is population data from Xinjiang. “After foreign researchers used census data as potential proof of potential human rights violations in the region, local authorities scrambled to take down the information. Notably, at least three local governments – Hotan, Hami, and Tacheng – have removed reports containing the population data, only to later re-upload the reports with omission of the sensitive sections”. “Similarly, after data on youth unemployment revealed a record high in August 2023, the National Bureau of Statistics abruptly suspended its regular release”. “Next to the decline in government transparency, a second emerging trend involves diverse methods to restrict, limit, or divert foreign access to online resources. A surprisingly large number of popular websites and apps are either fully unavailable or altered for international use. China’s censorship apparatus – perhaps the most sophisticated in the world – no longer focuses only on domestic issues but also filters and curates what can be seen from abroad”.

Vincent Brussee and Kai von Carnap also note that non-geopolitical factors such as internet bottlenecks, privacy considerations and risk management may play a role in reducing information transparency. “Starting from 2021, the Supreme People’s Court has been reducing the number of cases it releases on Court Judgements Online (CJO), the official database for judicial rulings. While the Supreme People’s Procuratorate prosecuted 12 percent more cases from 2017 to 2022, the publication of court cases decreased by 63 percent between 2020 and 2022. A central reason is that such court reforms do not just serve transparency but equally facilitate standardization of judicial behavior and top-down supervision. This means courts can restrict disclosure where it does not align with political objectives…Politically sensitive cases are less likely to be released to the public. Cases involving politically well-connected parties are frequently missing from online databases, and there are suggestions that cases that may involve collective protests are missing too. Administrative cases can be especially sensitive as they may involve complex bargaining against the state, such as regarding land use and ownership”. However, “courts are concerned about the societal impact of disclosed criminal cases. Judges have expressed concerns that verdicts on topics like corruption could be used as potential blueprints for committing crimes. Therefore, many such cases have been removed. Furthermore, many verdicts that present an unflattering view of China’s society, especially those involving the death penalty, have also been removed”. “Privacy regulations have led to a decline in published civil cases. Civil cases are the most common court cases and publication increased yearly until 2020. However, newly uploaded cases decreased by eight percent in 2021 and nearly 40 percent in 2022”.

From the judges’ perspective, uploading judgments represents extra work with little benefits. He Haibo[30] argues “for judges, the most significant concern is that any flaws in their judgments might be exaggerated. Minor issues that weren’t a big deal initially can appear more significant when scrutinized, leading to public criticism. For instance, someone online might point out numerous problems in a judgment, but on closer examination, these issues could be related to wording and punctuation. However, when highlighted online, it can create the impression of a seriously flawed judgment…Another significant challenge is the lack of clarity in responsibilities. Rules regarding what can be made public and what can’t be are often ambiguous. Initially, judges may see no harm in making judgements public, but when public controversies arise later, it can create significant pressure for both judges and the court. Consequently, judges often have mixed feelings or even opposition to making court judgments public”.

Privacy considerations and regulations are another major reason. He Haibo points out “many parties see the benefits of public access to judgments but don’t want their issues made public. In cases where they lose, the vast majority of parties do not want their cases to be made public, and even those who win are not necessarily keen on publicizing them”. “The rules regarding the online publication of judgments still remain somewhat unclear, particularly concerning the handling of parties’ identity information”. He suggests that when it comes to sharing the identity information of people involved, consideration should be given to protecting the privacy of individuals. However, he firmly believes that “when it comes to entities involved in market transactions, particularly companies, their business details should be transparent and public. There’s no reason to keep information such as the company’s name, address, and legal representative confidential”.

He Haibo notes “the issue of making judgments public also involves larger interests, including national security, social stability, and the government’s image…where exactly lie the boundaries of national security? Without a clear demarcation of these boundaries, the decision to publicly release court judgments could face unpredictable pressures…The same logic applies to social stability. The disclosure of certain judgments can impact the reputation of government agencies…When…a case becomes public, it’s not only about the individual officer; it reflects on the entire public security system, potentially tarnishing its image and creating significant pressure”.

He Haibo adds “there should also be a systemic approach and clear procedures to handling requests for the revision, withdrawal, and annotation of judgments. This is to minimise the risk of allowing personal connections to dictate online publication”. He also proposed that “when publishing judgments online, there should be a prominent notation indicating if a judgment has been overturned. Without this feature, once a case undergoes a first instance, second instance, and retrial, and the judgment changes, the original judgment may become inaccessible. By establishing these connections and making it clear to readers about the current status of a judgment, CJO can become more comprehensive and transparent”. He also hopes “the Supreme People’s Court can develop a robust system for proofreading judgments and handling sensitive information”.

China’s experience highlights that it is not so easy to be transparent even when one wants to be. In this regard, China’s relatively opaque system of governance continues to be a major weakness in its digital revolution. Criticism and debate are tightly controlled in China and people still “disappear” without explanation. Due to the transparency gaps, there is doubt and confusion about the government’s intentions and foreign distrust of the fairness and reliability of China’s due process and legal system.

Overall, transparency is a key benchmark for democracy. Challenges arise due to the double-edged nature of transparency. First, it relates to the willingness of authorities to share information with the public. Second, transparency reduces buffers to conflict and increases polarisation and harshness[31]. Third, transparency intensifies the battle for influence and power, reduces the prospects for compromise and tends to result in intimidation and sanitisation. Transparency also has a chilling effect. When behaviour is easily observed, people tend to be conformist, to self-censor and even to spy on or criticise others.

Only governments can manage the transparency challenge for the well-being of their citizens. This would include using transparency to strengthen organisational coordination for problem-solving and to increase governments’ administrative accountability to citizens. Governments should also ensure there are robust systems for forgiveness, forgetfulness and rehabilitation to mitigate harshness, and an efficient due process for dispute resolution and recourse to ensure fairness.

  • Utopia or dystopia

Digitalisation will continue its relentless advance with the widespread deployment of biometric-based digital identities, mobile connectivity, surveillance cameras and sensors, facial recognition, AI and digital currencies. Simply put, society and governments will become ever more reliant on surveillance technologies. But I think fears that this will lead to an Orwellian dystopia of Big Brother and thought control are overly pessimistic.

Take China as an example. In the West, there are fears of China’s social credit system as the front-runner of an authoritarian surveillance system. Jerry Grey explains that this arises from the misperception that an omnipresent social credit system exists in China. The reality is that there are several different systems, including credit scoring systems, operating in China and some bear similarities to systems operating in the West. He notes the systems are relatively transparent. “In China, if you want to know what your credit rating is, you go to your bank, enter your bank card into an ATM and it will provide you with a printout in seconds, completely free of charge”. “Another is an app for parents of primary and middle school kids…Learning to Strengthen the Country, provides positive information flow on China and encourages parents to interact with the school and their kids”. He comments that the systems are generally based on “no compulsion, reward good behavior and do not penalize bad behavior”. Hence, cities may launch community volunteer apps – helping the disabled, assisting traffic flow control or other community work – in return for preferential access to events or schools. However, individuals involved in major court cases related to debts or abusive misconduct may find themselves blocked from traveling on public transport or booking hotel accommodation.

I would argue that technology-driven surveillance and information systems are agnostic and, by themselves, do not necessarily constitute either a dystopia or utopia. Instead, societal conditions are determined by the transparency of the system to citizens. This is why it is important that systems should not operate as black boxes. Citizens should have access to and understand their scores or rankings. Citizens should be able to challenge and quickly amend data errors and be provided recourse to resolve disputes. To mitigate harshness, it is important that punishments are not determined ad hoc or crowd-driven, that penalties are proportionate (particularly for omissions or errors) and that the systems are oriented towards forgiveness, rehabilitation and rewarding good deeds.
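To make these design principles a little more concrete, the sketch below shows, in schematic Python, one way a citizen-facing scoring record could expose plain-language entries, accept disputes and let penalties lapse over time. It is purely illustrative: the record fields, the dispute workflow and the one-year decay rule are my own assumptions, not a description of any actual system.

```python
from dataclasses import dataclass, field
from datetime import date, timedelta

# Hypothetical sketch only. The fields and rules below illustrate the
# principles discussed above (transparency, contestability, proportionate
# penalties, rehabilitation); they do not describe any real scoring system.

@dataclass
class ScoreEntry:
    description: str      # plain-language reason, visible to the citizen
    points: int           # bounded reward or penalty (proportionality)
    recorded_on: date
    disputed: bool = False

@dataclass
class CitizenScore:
    citizen_id: str
    entries: list[ScoreEntry] = field(default_factory=list)

    def current_score(self, today: date, decay_days: int = 365) -> int:
        # Rehabilitation: entries lapse after a fixed period; disputed
        # entries are excluded until the dispute is resolved.
        return sum(
            e.points for e in self.entries
            if not e.disputed
            and (today - e.recorded_on) <= timedelta(days=decay_days)
        )

    def file_dispute(self, index: int) -> None:
        # Contestability: a citizen can flag any entry for review.
        self.entries[index].disputed = True


record = CitizenScore("citizen-001")
record.entries.append(ScoreEntry("Community volunteering", +5, date(2024, 3, 1)))
record.entries.append(ScoreEntry("Data-entry error by agency", -10, date(2024, 6, 1)))
record.file_dispute(1)  # the contested penalty no longer counts while under review
print(record.current_score(today=date(2024, 10, 1)))  # -> 5
```

The point of the sketch is simply that each of the safeguards argued for above can be expressed as an explicit, inspectable rule rather than buried in a black box.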

Will governments find ways to use technology in a manner that promotes the freedom of citizens or will they end up using technology to suppress citizens? I don’t think there is a single definitive answer. My view is that technology has created a virtual world alongside the physical world. Governments have only begun to regulate the “untamed” and “platform-controlled” virtual world. I think there will be a lot of regulatory experimentation on data and content control, and on citizens’ access rights and presence, to see what works and what doesn’t. My hope is that governments will find ways to harness technology as a positive force for community, social, political and economic organisation, and that they will avoid the pitfalls of turning it into an authoritarian dystopia.

There are warning signs that things are taking a turn for the worse for free speech. Western governments seem increasingly intolerant of dissent and are strengthening their censorship frameworks. Worse still, there is a worrying trend towards militarisation, marking a return to the use of brute physical force to resolve arguments and conflicts. The backsliding is of concern because the West is the last bastion of free speech and democracy.

But it’s not going to get us very far if we are drowning in our fears of surveillance and dystopia. It is more productive to spend time thinking about how differently democracy and human rights work in a virtual world compared with a physical world. For example, how should individual privacy rights be protected without breaking the fishbowl? We simply do not have enough examples of how democracy can thrive in an information society. Perhaps governments should set KPIs to measure whether they are managing data to increase double-sided transparency; improve living standards; and increase participation, fairness, cooperation, choice and freedom of speech.

Global information order and cross-border censorship

In the past, contests for geopolitical supremacy were framed in territorial or physical paradigms. This time is different. Initially, informationalisation was a major driver of globalisation and financialisation. This coincided with the golden age of American unipolarity. China’s rise and its ability to establish a rival global information network are starting to diminish the West’s soft-power hegemony across many fronts – multilateral institutions, finance (currency), infrastructure, technology, data, law and culture. China’s threat to the US-led international order is a driver of geopolitical conflict.

Technology bifurcation is likely to cut deeper than initially anticipated. The US is leading its allies to contain China’s advances by restricting cross-border trade and information flows and weaponising the entire technology supply chain and market space. Semiconductor chips are at the center of the battle but the fissures are deepening to include communication infrastructure and networks, software and memory ecosystems, with the rivalry extending all the way to products, data and standards.

Over time, the West is likely to find it is fighting a losing battle to defend its dominance of the global information order. Generally, governments across the world are strengthening national rules to assert their sovereign rights over technology, data and content. Thus, the global information landscape is becoming dotted with data silos in a peer-to-peer network fragmented by multiple standards and information rules.

There are also signs of increasing cross-border conflict over global narratives, with countries seeking to exercise extraterritorial jurisdiction. In recent months, authorities in Europe and Brazil have taken actions against platforms (like X, Meta, Telegram and TikTok) and their owners to force them to comply with their content moderation requirements. Ben Weingarten notes the EU’s Digital Services Act of 2022 “is seen by champions of stringent content moderation standards and critics alike as the strongest global effort to regulate speech…the measure imposes a slew of regulatory requirements on the more than a dozen social media platforms and search engines that have at least 45 million users in the EU. It requires these platforms to take measures to counter illegal content online, not only responding to user-flagged posts but those fingered by specialised trusted flaggers for removal…Illegal content…includes illegal hate speech and other prohibited rhetoric, pursuant to EU law or those within any of its 27 member states. Platforms also must take risk-based action, including undergoing independent audits to combat disinformation or election manipulation – with the expectation those measures should be taken in consultation with independent experts and civil society organisations. The Commission says these measures are aimed at mitigating systemic issues such as…hoaxes and manipulation during pandemics, harms to vulnerable groups and other emerging societal harms driven by harmful but not illegal content…The DSA also references a Code of Practice on Disinformation, under which Big Tech companies such as Google, Meta, and Microsoft have agreed to demonetize purported disinformation pursuant to European Commission guidance”. He notes “notable signatories and contributors to the self-regulatory code” seem to include government-funded entities which have “allegedly targeted the advertising revenue of independent media outlets”. Ben Weingarten notes X (and Elon Musk) and Meta “have both faced formal proceedings under the Digital Service Act over the last year concerning potentially non-compliant practices touching on political speech”. Platforms have also come out with their own content guidelines. For example, Facebook’s Community Standards “apply to everyone, all around the world”. Academics have termed the tendency of companies to apply the strictest local guidelines globally as the Brussels Effect.

Ben Weingarten highlights that efforts to combat misinformation in one country to protect its citizens from “serious harm” may be construed as threatening free speech and interfering in domestic politics in another country. For example, he notes that, in reaction to the anti-immigration fervor sparked by viral social media posts, UK “Metropolitan Police Commissioner Mark Rowley threatened extradition and jail time for Americans should they violate British speech laws concerning incitement, stirring up racial hatred, or other terrorist offenses regarding the publishing of material”. “With the Biden-Harris administration silent in the face of the targeting of American platforms, Republicans are bringing forth legislation to combat foreign threats to domestic speech. Last month, House Republicans introduced two bills – the No Censors on our Shores Act and the No Funding or Enforcement of Censorship Abroad Act – to punish foreign individuals and entities that promote or engage in the censorship of American speech”.

Ben Weingarten notes the US State Department’s view of the US as a “champion of and leader in the protection of freedom of expression”. Secretary of State Antony Blinken argues “disinformation transcends borders. It crosses platforms. No single country, no single entity can meet this challenge alone.” To create “a healthier information environment…the administration is using diplomacy, advancing a shared understanding of the problem as well as creative solutions to address it.” These diplomatic efforts include “aligning partners and allies around a framework to counter information manipulation by foreign adversaries,” “training partners to analyze disinformation,” sharing best practices, and “co-chairing the OECD’s new Misinformation and Disinformation Hub, helping governments shift from ad hoc tactics to more holistic policies that enable reliable information to thrive.” “The U.S. government has used the FBI and the State Department, among other agencies, to coordinate counter-disinformation efforts globally with other nations”. The US State Department maintains it is committed to advancing a rights-respecting approach to technology that mitigates potential harms while maintaining the free and open use of digital platforms, and that it is concerned by actions to limit access to information anywhere in the world.

Censorship is routine in authoritarian countries but it is worrying when Western democracies are doing the same. As more countries venture into cross-border narrative control, this creates several dilemmas. First, misinformation in one country may constitute the official narrative in another country. Global platforms are already finding they need to censor a certain perspective in one country and censor the opposing perspective in another country. Second, the attempt by countries to expand jurisdictional reach to control the global narrative will likely have spillover geopolitical consequences which will undermine global governance and global democracy.

Conclusions: The journey to 2030

This is an epochal decade where the global landscape is being dramatically reshaped. There are seven drivers at work: (1) Great power conflict, (2) Military warfare, (3) Asymmetric warfare and reset of global rules, (4) Global imbalances and anorexic economies, (5) Monetary disorder and the Fed policy transition, (6) The crisis of governments, and (7) The transition to an information society. Based on the current trajectory, the world seems headed for extreme scenarios such as a global depression or a world war. Of course, policy-makers will act to avert catastrophes or at the very least to contain them; but it only takes one slip-up to result in a fatal scenario. In this conclusion, I briefly summarise the geopolitical and information journey to 2030.

  • The geopolitical journey

The fragmentation of the world into broad geographical alliances is a key driver of 2030 scenarios. The world order is being reshaped according to the outcomes of the direct confrontation between the US and China at one level and a broader multipolarity contest involving the West and the Global South at another level.

The first is the challenge of avoiding a world war. When countries adopt the mindset that they must defeat their adversaries, conflict will escalate across all fronts. The military front is the most critical but there hasn’t been much leeway for diplomacy to ease the tensions. Will this change? To be sure, citizens in the US, Europe and China share the sentiment that they would not want to be subjected to senseless destruction. But politicians have their own agendas and perhaps it would take a major crisis or military confrontation before there can be de-escalation. Another possibility is that positive developments in Ukraine or the Middle East could shift political sentiment. In the meantime, tensions are building up in the South China Sea and it is possible there could be a naval stand-off between China and the US before the end of the decade.

The second is what a fragmented landscape would look like in 2030. Retaliations will likely further shrink the safe space for cooperation and interoperability and result in clearer definition of the separate spheres. The key question is whether or when the West would attempt to impose comprehensive sanctions on China similar to what it did to Russia. Clearly, the substantial repercussions are holding them back but this move could be triggered by military hostilities. At this stage, even if the West chose to lift sanctions and bans, many of the decoupling measures have already had irreversible effects. It will be difficult to reverse the beggar-thy-neighbour dynamics as most countries and MNCs have already found alternative suppliers or markets. An alternative non-Western ecosystem is already evolving. This augurs oversupply and a global industry fallout.

What would a Cold War be like in an integrated global and virtual economy? To what extent can countries restrict cross-border flows of information, capital, companies, employees, customers, tourists and ideas? If information is not allowed to be freely shared, it will result in information blindspots that reduce abundance and increase scarcity. Business will become more informal and physical, and this will increase the risk of supply dislocation and risk aversion.

Worst of all, there will be shortfalls in collective action due to fears of free-riding by adversaries. The loss of trust will hamper the ability of governments to deal with future economic crises. It is unlikely the West would come to China’s rescue if its economy crashes, and vice-versa. If globalisation and unipolarity are no longer the future paths, then it is imperative to chart a path to a stable multipolar system. This would probably involve the rebalancing of the world’s two largest imbalances; namely China’s manufacturing and export dominance and the US’s financial and import dominance. There is also the pending question of how the collective West would respond to a rising India, a resilient Russia and an assertive Global South. In this regard, we need to ask whether the geopolitical blocs themselves will be internally stable; i.e. are there sufficient economic benefits for allies to stay united under an umbrella? The depth of geoeconomic fragmentation will therefore depend on how far allies and neutrals will participate in ostracising adversaries. The countries with the most to lose from geoeconomic fragmentation are those with the largest global presence.

In a global environment dominated by self-interest and bargaining, is there an alternative scenario to the dismal decade? I think it would help to change the mainstream narrative and reframe geopolitical issues to create a more positive atmosphere for de-escalation. There needs to be more in-depth discussion on how a multipolar world order would operate. Global institutions, countries and social movements should actively promote de-escalation and de-militarisation, and highlight the potential destructive costs of war. There is a need to halt the beggar-thy-adversary dynamics and increase the focus on addressing global imbalances, mitigating financial stability risks and assisting the poorest economies.

  • The information society journey

I have written extensively on the information society to analyse its immense consequences for economic, political and social stability. Policy-makers need to deepen their understanding of the workings of the information society if they are to understand the challenges of managing citizens – who are more mobile and connected – in an overlapping virtual and physical world. We are still in the early stages of the journey but the deficiencies of the industrial society model are already evident. If governments respond to social backlash by reducing (controlling) the flow of information and choice, the lack of coordination will eventually lead to economic and social contraction. Governments need to develop a vision to fix the broken industrial society. They should formulate policies to harness information, tap new opportunities, manage creative destruction and strengthen financial stability to promote orderly growth, and temper inequalities and harshness to strengthen social stability and democracy.

References

Andy Kessler (22 September 2024) “Section 230 catches up to AI”. Wall Street Journal. https://archive.is/BBigs

Ben Weingarten (10 October 2024) “Global crackdown: How foreign censorship threatens American free speech”. Zero Hedge. https://www.zerohedge.com/geopolitical/global-crackdown-how-foreign-censorship-threatens-american-free-speech

Caitlin Chin (31 August 2023) “Navigating the risks of artificial intelligence on the digital news landscape”. https://www.csis.org/analysis/navigating-risks-artificial-intelligence-digital-news-landscape

David Dorman, John Hemmings (11 May 2022) “China’s digital challenge: Hidden in plain sight, bigger than you thought, and much harder to solve”. Center for Strategic & International Studies (CSIS). https://www.csis.org/analysis/chinas-digital-challenge-hidden-plain-sight-bigger-you-thought-and-much-harder-solve

David Dorman, John Hemmings (February 2023) “Digital China: The strategy and its geopolitical implications”. Pacific Forum. https://pacforum.org/wp-content/uploads/2023/02/IssuesandInsights_VOL23_WP2.pdf

Dylan Levi King (18 November 2021) “The second death of Jiao Yulu”. Palladium Magazine. https://www.palladiummag.com/2021/11/18/the-second-death-of-jiao-yulu/

Gary Zhexi Zhang (16 September 2024) “China’s hinterland becomes a critical datascape”. Noema Magazine. https://www.noemamag.com/chinas-hinterland-becomes-a-critical-datascape/

Jacob Dreyer (26 January 2023) “China’s last generation”. Noema Magazine.

Jerry Grey (7 September 2023) “In Western scaremongering, the alleged Chinese social credit system or social credit score is played up, but does it even exist?” Eastern Angle. https://www.easternangle.com/in-western-scaremongering-the-alleged-chinese-social-credit-system-or-social-credit-score-is-played-up-but-does-it-even-exist/

Kathryn Taylor (7 February 2023) “Gig companies are manipulating their workers. Dark patterns laws should step in”. N.Y.U. Journal of Legislation & Public Policy.

Li Hiang Ng, Bram Edens (22 November 2023) “Measuring the digital economy (Part 1): Why is it important?” Statistics and Data Directorate (OECD).

Luis E. Santana, Inga Trauthig, Samuel Woolley (26 September 2024) “We can harness digital citizenship to confront AI risks”. Centre for International Governance Innovation (CIGI). https://www.cigionline.org/articles/we-can-harness-digital-citizenship-to-confront-ai-risks/

Mark J. Greeven, Katherine Xin, George S. Yip (March–April 2023) “How Chinese companies are reinventing management”. Harvard Business Review. https://hbr.org/2023/03/how-chinese-companies-are-reinventing-management

Marshall Reinsdorf, Gabriel Quirós, Statistics Department (5 April 2018) “Measuring the digital economy”. International Monetary Fund (IMF). https://www.imf.org/en/Publications/Policy-Papers/Issues/2018/04/03/022818-measuring-the-digital-economy

Martin Kretschmer, Tobias Kretschmer, Alexander Peukert, Christian Peukert (22 Nov 2023) “The global AI regulation race: Why the EU should focus on data quality and liability rules”. Voxeu. https://cepr.org/voxeu/columns/global-ai-regulation-race-why-eu-should-focus-data-quality-and-liability-rules

Matt Sheehan (10 July 2023) “China’s AI regulations and how they get made”. Carnegie Endowment. https://carnegieendowment.org/2023/07/10/china-s-ai-regulations-and-how-they-get-made-pub-90117

Matt Sheehan (27 February 2024) “Tracing the roots of China’s AI regulations”. Carnegie Endowment. https://carnegieendowment.org/2024/02/27/tracing-roots-of-china-s-ai-regulations-pub-91815

Matt Stoller (29 August 2024) “Judges rule big tech’s free ride on Section 230 is over”. Thebignewsletter.com. https://www.thebignewsletter.com/p/judges-rule-big-techs-free-ride-on

Nick Corbishley (3 November 2023) “The world’s largest biometric digital ID system, India’s Aadhaar, just suffered its biggest ever data breach”. Naked Capitalism.

Nick Corbishley (16 August 2024) “Western media finally begin warning about the dark side of digital identity… in China”. Naked Capitalism.

Phuah Eng Chye (2015) Policy paradigms for the anorexic and financialised economy: Managing the transition to an information society. http://www.amazon.com/dp/B01AWRAKJG

Phuah Eng Chye (7 December 2019) “Information and organisation: China’s surveillance state growth model (Part 3: The relationship between surveillance and growth)”. http://economicsofinformationsociety.com/information-and-organisation-chinas-surveillance-state-growth-model-part-3-the-relationship-between-surveillance-and-growth/

Phuah Eng Chye (28 March 2020) “The transparency paradigm”. http://economicsofinformationsociety.com/the-transparency-paradigm/

Phuah Eng Chye (11 April 2020) “Anonymity, opacity and zones”. http://economicsofinformationsociety.com/anonymity-opacity-and-zones/

Phuah Eng Chye (23 May 2020) “Public and private roles in managing data (Part 3: Evolving roles)”. http://economicsofinformationsociety.com/public-and-private-roles-in-managing-data-part-3-evolving-roles/

Phuah Eng Chye (20 June 2020) “Government of the data (Part 2: India’s Aadhaar and the debate on digital IDs)”. http://economicsofinformationsociety.com/government-of-the-data-part-2-indias-aadhaar-and-the-debate-on-digital-ids/

Phuah Eng Chye (4 July 2020) “Government of the data (Part 3: The future of government platforms)”. http://economicsofinformationsociety.com/government-of-the-data-part-3-the-future-of-government-platforms/

Phuah Eng Chye (18 July 2020) “Economics of data (Part 1: What is data?)”. http://economicsofinformationsociety.com/economics-of-data-part-1-what-is-data/

Phuah Eng Chye (15 August 2020) “Economics of data (Part 3: Relationship between data and value and the monetisation framework)”. http://economicsofinformationsociety.com/economics-of-data-part-3-relationship-between-data-and-value-and-the-monetisation-framework/

Phuah Eng Chye (26 September 2020) “Economics of data (Part 6: Data and poverty eradication)”. http://economicsofinformationsociety.com/economics-of-data-part-6-data-and-poverty-eradication/

Phuah Eng Chye (10 October 2020) “Hayek: The coordination problem, prices and information”. http://economicsofinformationsociety.com/hayek-the-coordination-problem-prices-and-information/

Phuah Eng Chye (7 November 2020) “Information rules (Part 1: Law, code and changing rules of the game)”. http://economicsofinformationsociety.com/information-rules-part-1-law-code-and-changing-rules-of-the-game/

Phuah Eng Chye (21 November 2020) “Information rules (Part 2: Capitalism, democracy and the path forward)”. http://economicsofinformationsociety.com/information-rules-part-2-capitalism-democracy-and-the-path-forward/

Phuah Eng Chye (19 December 2020) “Information rules (Part 4: Regulating platforms – Paradigms for competition)”. http://economicsofinformationsociety.com/900-2/

Phuah Eng Chye (16 January 2021) “Information rules (Part 6: Disinformation, transparency and democracy)”. http://economicsofinformationsociety.com/information-rules-part-6-disinformation-transparency-and-democracy/

Phuah Eng Chye (30 January 2021) “Information rules (Part 7: Regulating the politics of content)”. http://economicsofinformationsociety.com/information-rules-part-7-regulating-the-politics-of-content/

Phuah Eng Chye (27 February 2021) “Information rules (Part 9: The economics of content)”. http://economicsofinformationsociety.com/information-rules-part-9-the-economics-of-content/

Phuah Eng Chye (13 March 2021) “Information rules (Part 10: Reimagining the news industry for an information society)”. http://economicsofinformationsociety.com/information-rules-part-10-reimagining-the-news-industry-for-an-information-society/

Phuah Eng Chye (18 December 2021) “Global reset – Economic decoupling (Part 1: China’s socialism big bang)”. http://economicsofinformationsociety.com/global-reset-economic-decoupling-part-1-chinas-socialism-big-bang/

Phuah Eng Chye (12 February 2022) “Global reset – Economic decoupling (Part 5: Growing divergence between governments and MNCs)”. http://economicsofinformationsociety.com/global-reset-economic-decoupling-part-5-growing-divergence-between-governments-and-mncs/

Phuah Eng Chye (26 February 2022) “Global reset – Economic decoupling (Part 6: MNCs in a deglobalizing world)”. http://economicsofinformationsociety.com/global-reset-economic-decoupling-part-6-mncs-in-a-deglobalizing-world/

Phuah Eng Chye (19 November 2022) “The Great Economic War (GEW) (Part 9: The geopoliticisation of MNCs)”. http://economicsofinformationsociety.com/the-great-economic-war-gew-part-9-geopoliticisation-of-mncs/

Phuah Eng Chye (8 April 2023) “China’s model (Part 2: Digital China and the information society)”. http://economicsofinformationsociety.com/chinas-model-part-2-digital-china-and-the-information-society/

Phuah Eng Chye (27 May 2023) “Transition to the information society (Part 1: Disruption of households and work)”. http://economicsofinformationsociety.com/transition-to-the-information-society-part-1-disruption-of-households-and-work/

Phuah Eng Chye (8 July 2023) “Transition to the information society (Part 2: Disruptive effects of transparency)”. http://economicsofinformationsociety.com/transition-to-the-information-society-part-2-disruptive-effects-of-transparency/

Phuah Eng Chye (19 August 2023) “Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)”. http://economicsofinformationsociety.com/transition-to-the-information-society-part-3-disruption-of-content-narratives-and-the-implications-for-democracy/

Phuah Eng Chye (14 October 2023) “Transition to the information society (Part 4: Citizens as data)”. http://economicsofinformationsociety.com/transition-to-the-information-society-part-4-citizens-as-data/

Phuah Eng Chye (11 November 2023) “Transition to the information society (Part 5: Only governments can solve today’s problems but can they?)” http://economicsofinformationsociety.com/transition-to-the-information-society-part-5-only-governments-can-solve-todays-problems-but-can-they/

Phuah Eng Chye (9 December 2023) “The dismal decade (Part 1: De-dollarisation and currency landscape in 2030)”. http://economicsofinformationsociety.com/the-dismal-decade-part-1-de-dollarisation-and-currency-landscape-in-2030/

Phuah Eng Chye (27 January 2024) “The dismal decade (Part 2: Adversarial monetary policies)”. http://economicsofinformationsociety.com/the-dismal-decade-part-2-adversarial-monetary-policies/

Phuah Eng Chye (24 February 2024) “The dismal decade (Part 3: The economic race: How US is faring)”. http://economicsofinformationsociety.com/the-dismal-decade-part-3-the-economic-race-how-us-is-faring/

Phuah Eng Chye (30 March 2024) “The dismal decade (Part 4: The economic race: Can China catch US by 2030)”. http://economicsofinformationsociety.com/the-dismal-decade-part-4-the-economic-race-can-china-catch-us-by-2030/

Phuah Eng Chye (27 April 2024) “The dismal decade (Part 5: China and Japanification risks)”. http://economicsofinformationsociety.com/the-dismal-decade-part-5-china-and-japanification-risks/

Phuah Eng Chye (29 June 2024) “The dismal decade (Part 6: Seven forces shaping 2030 #1- #3)”. http://economicsofinformationsociety.com/the-dismal-decade-part-6-seven-forces-shaping-2030-1-3/

Phuah Eng Chye (27 July 2024) “The dismal decade (Part 7: Seven forces shaping 2030 #4)”. http://economicsofinformationsociety.com/the-dismal-decade-part-7-seven-forces-shaping-2030-4/

Phuah Eng Chye (31 August 2024) “The dismal decade (Part 8: Seven forces shaping 2030 #5)”. http://economicsofinformationsociety.com/the-dismal-decade-part-8-seven-forces-shaping-2030-5/

Rebecca Arcesati, Jeroen Groenewegen-Lau (5 December 2023) “China’s data management: Putting the party state in charge”. Mercator Institute for China Studies (MERICS), Hinrich Foundation Limited. https://merics.org/en/report/chinas-data-management-putting-party-state-charge

Samer Hassan, Primavera De Filippi (2017) “The expansion of algorithmic governance: From code is law to law is code”. The Journal of Field Actions. https://journals.openedition.org/factsreports/4518

Tyler Durden (27 September 2024) “Revealed: Big brother’s Facebook censorship dashboard”. Zero Hedge. https://www.zerohedge.com/political/revealed-big-brothers-facebook-censorship-dashboard

Veena Dubal (November 2023) “On algorithmic wage discrimination”. Columbia Law Review. https://columbialawreview.org/content/on-algorithmic-wage-discrimination/

Vincent Brussee, Kai von Carnap (15 February 2024) “The increasing challenge of obtaining information from Xi’s China”. Mercator Institute for China Studies (MERICS). https://merics.org/en/report/increasing-challenge-obtaining-information-xis-china

Zephyr Teachout (11 June 2023) “Surveillance wages: A taxonomy”. Lpeproject.org.

Zichen Wang, Peiyu Li, Jia Yuxuan (17 December 2023) “He Haibo says China’s online publication of judgments must NOT regress”. Pekingnology. https://www.pekingnology.com/p/he-haibo-says-chinas-online-publication


[1] See “Transition to the information society (Part 5: Only governments can solve today’s problems but can they?)”.

[2] See Marshall Reinsdorf, Gabriel Quirós, Statistics Department and Li Hiang Ng, Bram Edens on measuring the digital economy.

[3] See “Hayek: The coordination problem, prices and information”.

[4] “China’s model (Part 2: Digital China and the information society)”. See Gary Zhexi Zhang on the transformation of Gui’an New Area into China’s “Big Data Valley”.

[5] “China’s model (Part 2: Digital China and the information society)”.

[6] “Transition to the information society (Part 4: Citizens as data)”.

[7] See “Information and organisation: China’s surveillance state growth model (Part 3: The relationship between surveillance and growth)”.

[8] “To register for an Aadhaar card, Indian residents have to provide basic demographic information, including name, date of birth, age, address and gender, as well as biometric information, including ten fingerprints, two eyeball scans and a facial photograph”. See “Government of the data (Part 2: India’s Aadhaar and the debate on digital IDs)”.

[9] See “Information and organisation: China’s surveillance state growth model (Part 3: The relationship between surveillance and growth)”.

[10] See “Government of the data (Part 3: The future of government platforms)”.

[11] See “Public and private roles in managing data (Part 3: Evolving roles)”.

[12] See “Information and organisation: China’s surveillance state growth model (Part 3: The relationship between surveillance and growth)”.

[13] See “Economics of data (Part 6: Data and poverty eradication)”.

[14] See “Economics of data (Part 1: What is data?)”.

[15] See “Economics of data (Part 1: What is data?)”.

[16] This section mainly draws on “Information rules (Part 1: Law, code and changing rules of the game)”.

[17] “Transition to the information society (Part 1: Disruption of households and work)”.

[18] See “Government of the data (Part 3: The future of government platforms)”.

[19] Generalised from Jacob Dreyer’s extracts from Chinese anthropologist Xiang Biao’s book “Self as Method”.

[20] See “Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)”.

[21] “Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)”.

[22] This refers to the case where TikTok, via its algorithm, recommended and promoted videos posted by third parties to ten-year-old Nylah Anderson on her uniquely curated “For You Page.” After watching a video encouraging viewers to record themselves engaging in acts of self-asphyxiation, Nylah attempted the conduct depicted in the challenge and unintentionally hanged herself. Anderson’s family sued under Pennsylvania state law for product liability, negligence, and wrongful death. The lower district court dismissed the claim, ruling that TikTok isn’t responsible, because TikTok was merely hosting the speech of others, not making the speech itself. However, the Court of Appeals for the Third Circuit has now allowed Anderson’s lawsuit to once again proceed, on the basis that Section 230 immunity does not protect TikTok from being held to account for the alleged “knowing distribution and targeted recommendation of videos it knew could be harmful.”

[23] See Tyler Durden “Revealed: Big brother’s Facebook censorship dashboard”.

[24] “Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)”.

[25] “Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)”.

[26] https://www.azquotes.com/author/15138-Voltaire

[27] “Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)”.

[28] “Transition to the information society (Part 3: Disruption of content, narratives and the implications for democracy)”.

[29] See “The transparency paradigm”, “Anonymity, opacity and zones”, and “Transition to the information society (Part 2: Disruptive effects of transparency)”.

[30] See Zichen Wang, Peiyu Li and Jia Yuxuan.

[31] “Transition to the information society (Part 2: Disruptive effects of transparency)”.