Economics of data (Part 3: Relationship between data and value and the monetisation framework)

Phuah Eng Chye (15 August 2020)

At the macroeconomic level, it is difficult to simplify the relationship between data and aggregate value. For example, the concept of marginal value is used to explain why data (like water), although essential, is cheap because it is abundant. The opposite notion has also been put forward – that although a single data point (like a single drop of water) is of little value, many data points together can be very valuable. These hypotheses are, of course, contradictory.
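One stylised way to see the tension, assuming an illustrative concave value function V(n) for a stock of n data points (the functional form is assumed purely for exposition), is:

\[ V(n) = a \ln(1+n), \qquad V'(n) = \frac{a}{1+n}. \]

As n grows, the marginal value V'(n) falls towards zero – any single data point is cheap – while the aggregate value V(n) keeps rising – the stock as a whole becomes ever more valuable.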

The fault may lie in using price theory for physical goods to explain the value of data, which is an information good. Price theory for physical goods tends to be single-dimensional and linear. Price theory for information goods needs to cope with features that are multi-dimensional and non-linear. Multi-dimensional values are riddled with complexity and ambiguity.

Data values are modular – value is derived from bundling or unbundling a combination of features such as ownership, risks, income and price. Data values are also contextual and ephemeral. The right information at the right time in the right hands is extremely valuable. But it is worthless if you don’t need it. This tension is reflected in the famous quote that “information wants to be free”.

“On the one hand information wants to be expensive, because it’s so valuable. The right information in the right place just changes your life. On the other hand, information wants to be free, because the cost of getting it out is getting lower and lower all the time. So you have these two fighting against each other”. Stewart Brand (1984)[1]

There are thus significant challenges to developing a coherent framework on the aggregate value of data. The main challenge is that price models used for tangibles operate under conditions of scarcity and are relatively static. The theoretical framework for information goods like data needs to explain how the price mechanism works under conditions of abundance, dynamism and convergence. As a starting point, there are several influences on the aggregate value of data to consider.

The aggregate value of data tends to increase with authenticity. Information becomes more valuable when people believe it is authentic. George A. Akerlof’s “The market for lemons: Quality uncertainty and the market mechanism” highlights the importance of trust, drawing on the classic observation that the proclivity to offer inferior goods such as automobile lemons “tends to drive the market out of existence”. The theory suggests information asymmetry can lead to adverse selection and trigger market failure. He explains “the difficulty of distinguishing good quality from bad…may indeed explain many economic institutions and may in fact be one of the more important aspects of uncertainty”. He adds that “one natural result of our model is that the risk is borne by the seller rather than by the buyer”. His hypothesis reinforces the view that authenticity has value.
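A compressed sketch of the adverse-selection logic (a standard textbook parameterisation rather than Akerlof’s own notation): suppose quality q is uniformly distributed on [0,1], sellers value a car at q, buyers value it at 3q/2, and only sellers observe q. At any posted price p, only cars with q ≤ p are offered, so expected quality on offer is p/2 and buyers are willing to pay only

\[ \tfrac{3}{2}\,E[\,q \mid q \le p\,] = \tfrac{3}{2}\cdot\tfrac{p}{2} = \tfrac{3p}{4} < p. \]

No positive price clears the market and trade unravels, even though every car is worth more to buyers than to sellers. Any mechanism that lets buyers verify q – that is, authentication – removes the asymmetry and restores the gains from trade.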

The challenge of information asymmetry needs to be differentiated between an environment where information is scarce (the lemons model) and one where information is abundant (where information overload is common). Information overload replaces the cost of information asymmetry with the costs of spam, complexity, transience and disinformation. The value of authenticity thus rises with information overload.

Another distinction is that trust is a relationship-based concept involving motives, beliefs and norms (which are institution-based). Authenticity reduces the need to rely on trust. Hence, authenticity is autonomous. Authenticity is driven by confidence in the integrity of data and the levels of transparency in a system. Moving beyond relationships expands scale and increases efficiency by reducing the costs of verification.

Sharing platforms illustrate these principles well. Shanu Athiparambath explains “millions of people stay in Airbnb homes every night. It’s not trust which makes this possible…Airbnb puts hosts and guests in a position where behaving badly would ruin their reputations…”. He highlights that “in the ancestral environment, there was no formal third-party enforcement of norms. There was usually no penalty for treating outsiders dishonestly and unfairly”. In this regard, “the Airbnb review system is an extremely powerful third-party norm enforcement system…When the hosts try to deceive guests through evolutionarily familiar ways, the penalty comes in evolutionarily novel ways. A negative review can haunt you for very long”.

In relation to value, he suggests that “in our fairly stable, hunter-gatherer past, there was no market pricing because they didn’t trade with strangers. There was only authority ranking, communal sharing and equality matching…The price can’t vary according to supply and demand if you’re equality matching”. Shanu Athiparambath points out “Airbnb has tools that help you price your home right, allow the price to fluctuate according to supply and demand, and offer last minute discounts. When Airbnb’s smart pricing brings the prices down, hosts think they got the short end of the stick. They have too much pride to lower the prices even when the demand is ridiculously low”. In particular, “when everybody is free to list their homes, stars are inevitable…The best Airbnb hosts don’t feel very tempted to overcharge or deceive their guests…When you look at things this way, the distinction between genuine trustworthiness and benign, self-interested behaviour seems to be a matter of degree…It is enough to have high social intelligence and be moderately trusting and trustworthy. Airbnb can’t produce genuine trust. No institution can”.

The aggregate value of data tends to increase with transparency. Transparency is an effective tool for reducing information asymmetry. In this regard, transparency can be associated with access, use, authenticity and network effects. Data transparency implies that data is widely accessible. This expands the user base and use or sharing of the data. Data quality improves with multiple use. Access and authenticity increase the range of products that can be built on data – particularly when data can be aggregated across people, time, place and activities. The collective efforts of multiple users in an open environment can enhance the quality, usefulness and value of data and generate expanding returns from network effects.
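One common stylisation of such network effects (a rough heuristic, often associated with Metcalfe’s law, rather than a claim from the sources above) is that the number of potential pairwise connections among n users of a shared dataset grows as

\[ \binom{n}{2} = \frac{n(n-1)}{2} \approx \frac{n^{2}}{2}, \]

so potential aggregate value scales roughly with the square of the user base while the cost of adding users scales roughly linearly. Refinements exist (n log n variants, for instance), but the broad point stands: wider access multiplies the combinations of people, time, place and activities over which data can be aggregated.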

Privacy and property rights adversely affect the aggregate value of data. Private ownership results in data enclosure and restricts public access. Isolating data within silos generally lowers use and quality and hampers the ability to scale. It reduces transparency and creates information asymmetry. When only limited parties can access the data, only these parties are able to extract value from it. Hence, privacy and property rights produce value (profits) for individuals and firms but tend to constrain aggregate value by reducing usage by society. Legal protections that increase the private value of data also incentivise criminal activities such as bootlegging, piracy and cybertheft (thus resulting in policing costs). Private ownership also has exclusionary effects on society as it prevents the poor from enjoying the benefits. Generally, economies and markets where data transparency is limited are unable to achieve their potential.

Overall, transparency is a double-edged sword[2]. It increases market efficiencies but reduces private arbitrage opportunities as it erases the information advantage. Hence, there is a debate over whether a reduction in private opportunities will reduce the incentive to innovate. Like many games, there are no pre-determined outcomes. Specific outcomes depend on how players, particularly information intermediaries, modify their strategies in reaction to higher levels of transparency. Sometimes, rising transparency could increase the premiums for shielding data (privacy). At other times, it could cause customers to flee to other services.

The aggregate value of data increases with organisation. Standalone data has little value; the ability to aggregate and connect data adds context and increases its value. In a sense, the opportunity to extract value from data acts as an incentive to undertake data capture, authentication, aggregation and analysis. Better organisation of data also creates efficiencies that reduce risks (costs). For example, shortening the settlement period reduces credit risks, while digital payments reduce the risks of physical theft.
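A back-of-envelope illustration of the settlement-period point, using purely hypothetical numbers: if the daily probability of counterparty default is p, the probability of a default occurring within a settlement window of T days is roughly

\[ 1 - (1-p)^{T} \approx pT \quad \text{for small } p, \]

so expected credit loss (exposure × default probability × loss given default) scales roughly with the length of the window. On an exposure of 1 million with p = 0.01% per day and a 60% loss given default, shortening settlement from two days to one cuts the expected loss from about 120 to about 60.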

Overall, the value that can be generated from data is dependent on the strengths of the ecosystem. First, an information infrastructure (connectivity) is needed to support autonomous and modular transactions, bundling (and unbundling) and liquidity (tradability and investability). Second, rules are needed for information disclosures, standardisation and enforcement processes to underpin the reliability of the system.

Monetisation framework

The relationship between value and data can be analysed within the context of a monetisation framework[3]. When data is scarce, many activities are relationship-driven (e.g. within the household) and thus unpaid or lowly priced. Monetisation can be described as a data-driven process which enables relationship-based services to be monetised and exchanged autonomously. Data abundance is a prerequisite for scaling the level of monetisation. “Critical mass is needed not only to justify the costs of managing information but also to ensure sufficient thickness of demand and supply for markets to flourish. This explains why monetisation is more likely to flourish in areas with high population densities. Higher levels of information usage, financialisation and the emergence of highly-populated cities are contributory factors to the monetisation of many unpaid activities, especially household services”[4]. In this regard, the price of data in different locations is generally related to the information infrastructure available to support monetisation and exchange. It is also influenced by the value (income and wealth) of residents and their environment; data would likely have a higher value in a rich community than in a poor one[5]. Valuing data at the same level across geographies would imply a convergence of living standards around the world.
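The critical-mass condition can be given a rough formalisation (an illustrative sketch with assumed variables, not drawn from the quoted passage): if F is the fixed cost of the information infrastructure needed to manage and price an activity, m the margin earned per transaction and n the volume of transactions, then monetisation is viable only when

\[ n \cdot m \ge F \quad \Longleftrightarrow \quad n \ge n^{*} = \frac{F}{m}, \]

which is why dense populations, which raise n, cross the threshold n* far more easily than sparse ones.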

Monetisation works in both directions and there is a need to deepen our understanding of how data drives repricing patterns. Digitalisation monetises and reprices some activities, products and assets upwards. But it also causes some activities, products and assets to be demonetised and be repriced downwards or even made available free.

For example, as economies become more data-driven, typical household chores such as cooking, cleaning and caregiving become monetised. Their prices are reinforced by Baumol’s cost disease[6] effects over the long term. In particular, public goods (education, housing and health) have also been repriced upwards significantly. The monetisation of public goods reflects their transformation into private goods (as governments share the burden of rising costs with consumers). In several countries, it also reflects a financialisation of these services as they are turned into private enterprises or investments, with the result that they are priced as investment assets.
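A compressed sketch of the Baumol mechanism (the standard two-sector logic, with notation assumed here for exposition): suppose productivity in the “progressive” sector grows at rate g while productivity a_s in labour-intensive services such as cooking, cleaning and caregiving stays roughly flat, and wages w(t) in both sectors track progressive-sector productivity. Unit labour cost in services is then

\[ c_{s}(t) = \frac{w(t)}{a_{s}} = \frac{w_{0}\,e^{gt}}{a_{s}}, \]

so the relative price of these services rises at roughly the rate g even though the services themselves are unchanged – the long-term upward pressure referred to above.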

In the other direction, many consumer products and services have been digitised (such as content and music) or their production and distribution scaled so that they are repriced downwards or offered for free. At an earlier point, there was a belief (false as a generalisation) that consumers, conditioned by a free internet, would be resistant to paying, at least for digital content. Pricing and delivery strategies have undergone drastic changes with the onset of information disruption or data-driven organisational change (in relation to households, markets and government regulations). To some extent, the repricing patterns are driven by corporate strategies in response to landscape changes.
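Part of the downward repricing follows a simple logic (a general observation rather than a point made in the sources above): once a product is digitised, the marginal cost of reproducing and distributing an additional copy approaches zero, and competition tends to push price towards that marginal cost,

\[ p \rightarrow MC \approx 0, \]

so “free” becomes a sustainable price point when revenue is recovered elsewhere, for example through bundling, advertising or premium tiers.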

There are immeasurable benefits from lower costs and free products. Yet demonetisation has not been welcomed; instead it is greeted with the suspicion that there is a catch – that free comes with conditions attached. In the case of platforms, free is perceived as part of a bundling strategy to lock out competitors, or to exploit consumers or other people’s content[7]. Hence, the repricing patterns raise questions about the norms that should be set for business behaviour – for example, the rules governing the use of content, the use of demand-driven pricing to exploit elasticities, and the use of concentration power to dictate terms.

Repricing patterns have significant macroeconomic implications. First, monetisation and demonetisation generate repricing (inflationary and deflationary) pressures in the economy. Demonetisation has deflationary effects but the public benefits, while not measured, can be substantial, particularly in widening public access and inclusiveness. Second, monetisation and financialisation go hand in hand. Monetisation generates income and employment (wages) but aggravates inequalities. Third, repricing has spill-over effects on costs, growth, production, wages and profits.

From a public policy perspective, repricing patterns are important because of their social consequences on issues such as the fairness of income distribution, the impact of the rising costs of public goods and the viability of welfare arrangements. Governments need to rethink their social objectives in relation to inclusiveness, equality and stability within the context of the monetisation framework and refine their policies to influence what becomes more expensive, cheaper or free in accordance with their objectives for society. They would also need to re-evaluate their intervention tools: assessing whether price regulation is still relevant and whether intervention is more effective through targeting supply (creating low-cost competition), changing the price mechanisms for public goods or providing subsidies (an approach that has been criticised).

Conclusions

The hypothesis on the relationship between data and value is simple. As data becomes abundant, it becomes more valuable in aggregate rather than on the margin. Societies will become wealthier if they are able to enhance authenticity, transparency and organisation. The relationship between data and value can also be analysed within a monetisation framework. The repricing patterns arising from monetisation and demonetisation matter for public policy because of their impact on social life, and governments should refine their policies to manage these repricing patterns. These observations reaffirm that data matters in the economy.

References

Charles Goldfinger (2000) “Intangible economy and financial markets”.  Communications & Strategies, No. 40, 4th quarter 2000. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=83F42CBBC58994AFC46DCD93DF9D0988?doi=10.1.1.461.6988&rep=rep1&type=pdf

George A. Akerlof (August 1970) “The market for lemons: Quality uncertainty and the market mechanism”. The Quarterly Journal of Economics. MIT Press. https://www2.bc.edu/thomas-chemmanur/phdfincorp/MF891%20papers/Ackerlof%201970.pdf

Phuah Eng Chye (2015) Policy paradigms for the anorexic and financialised economy: Managing the transition to an information society.

Phuah Eng Chye (26 August 2017) “The services economy: Revisiting Baumol’s cost disease”. http://economicsofinformationsociety.com/the-services-economy-revisiting-baumols-cost-disease/

Phuah Eng Chye (5 January 2019) “Future of work: Redefining work (Part 6: Monetising participation)”. http://economicsofinformationsociety.com/future-of-work-re-defining-work-part-6-monetising-participation/

Phuah Eng Chye (3 August 2019) “Information and development: Globalisation in transition”. http://economicsofinformationsociety.com/information-and-development-globalisation-in-transition/

Phuah Eng Chye (7 December 2019) “Information and organisation: China’s surveillance state growth model (Part 3: The relationship between surveillance and growth)”. http://economicsofinformationsociety.com/information-and-organisation-chinas-surveillance-state-growth-model-part-3-the-relationship-between-surveillance-and-growth/

Phuah Eng Chye (28 March 2020) “The transparency paradigm”.

Phuah Eng Chye (11 April 2020) “Anonymity, opacity and zones”.

Phuah Eng Chye (18 July 2020) “Economics of data (Part 1: What is data?)”. http://economicsofinformationsociety.com/economics-of-data-part-1-what-is-data/

Phuah Eng Chye (1 August 2020) “Economics of data (Part 2: Market approach to valuing data)”. http://economicsofinformationsociety.com/economics-of-data-part-2-market-approach-to-valuing-data/

Shanu Athiparambath (2019) “How Airbnb is silently changing Himalayan villages”. Veridici. https://veridici.com/how-airbnb-is-silently-changing-himalayan-villages/


[1] http://en.wikipedia.org/wiki/Information_wants_to_be_free

[2] See “The transparency paradigm” and “Anonymity, opacity and zones”.

[3] See “Future of work: Redefining work (Part 6: Monetising participation)”.

[4] See “Future of work: Redefining work (Part 6: Monetising participation)”.

[5] By the same token, privacy is more valuable to the rich rather than to the poor.

[6] See “The services economy: Revisiting Baumol’s cost disease”.

[7] That is, deliberately allowing the use of, or using, content without permission or payment to creators, thus depriving them of income.