Policy Significance Statement
Today’s big tech platforms can use their power over the attention of billions of users to shape the markets in which they participate for their own benefit, and against the interests of rivals, their users, and firms that depend on that flow of attention. Measures of market power based only on revenues and profit are insufficient for understanding these dynamics. To better regulate these attention-gatekeepers—and their coming AI-powered successors—we must mandate regular and consistent disclosures of the internal operating metrics that measure how they allocate attention, and develop a baseline understanding of what constitutes good behavior and what constitutes abuse.
1. Introduction
The dominant policy narrative that guides the regulation of internet platforms today focuses on user data and privacy (Albrecht, Reference Albrecht2016; Srinivasan, Reference Srinivasan2019). Platforms are said to abuse their market power by taking data from consumers (or “users”) without their permission and using it to manipulate their behavior through personalization. Zuboff (Reference Zuboff2019) calls this surveillance capitalism.
We argue instead that it may be more productive to understand platform market power and to regulate its possible abuses by measuring the ways that internet platforms control and monetize the attention of their users. The fairness or unfairness of the algorithmic systems by which platforms allocate user attention affects not only users but also an entire ecosystem of third-party suppliers (such as websites, content creators, or app developers), as well as advertisers.
As Simon (Reference Simon1971) noted, an abundance of information leads to a scarcity of attention. In the face of increasing information abundance, he predicted that we would use machines to help us better allocate our time and attention. And so it has transpired: information has become so abundant that it defies manual curation. Instead, powerful, proprietary algorithmic systems use the data they collect to match users with the answers, news, entertainment, products, applications, and services they seek.
Tirole (Reference Tirole2017, Chapter 14) echoed Simon’s idea decades later, noting: “We suffer from too much choice, not too little. Our problem now is how best to allocate time and attention to this plethora of potential activities, trades, and relationships […] The more the other costs (transportation, customs duties, listing) fall, the more important costs associated with signaling, reading, and selecting become, and the more we need sophisticated platforms to match the buyers and sellers.”
Each of the dominant internet platforms is an attention gatekeeper of one kind or another, matching requests from billions of consumers to content, services, and products from millions of suppliers. Despite their differences, these platforms have risen to prominence because each has developed an effective way to efficiently allocate user attention to the most relevant information, products, and services. Google and other search engines promise to find the most relevant web pages from millions of possibilities for each user query; e-commerce sites promise to find the best products available at the best price; social media sites promise to generate a unique, personalized feed of updates from friends; music and video recommendation services promise to deliver a feed that matches a user’s taste; on-demand transportation services promise to find the closest driver. And so on. And these platforms make a corresponding promise to the suppliers of content, products, or services (websites, app developers, merchants, creators, and even other users) on the other side of what is typically a two-sided or three-sided marketplace consisting of consumers (“users”), producers (“suppliers”), and advertisers: that if the supplier provides the most relevant information, they will be rewarded with consumer attention.
In many cases, these information matching marketplaces have proven to be remarkably efficient. Complex, data-driven algorithmic systems act as a kind of proprietary “invisible hand,” making use of immense amounts of consumer, producer, and advertiser information to efficiently match supply with demand. When the algorithms are fair, they deliver services to consumers that were previously unthinkable, saving them time and effort in making better choices by providing extraordinarily relevant results despite an overwhelming number of competing options. Suppliers and advertisers find new customers and the ability to transact with existing ones. But these markets have proven to be “winner takes most” (Wu, Reference Wu2013; Petit, Reference Petit2020), or sometimes “winner takes all,” leaving them ripe for abuse. Once a platform establishes dominance, it is in a position to extract additional time and attention from its users, and economic rents from its supplier marketplace or advertisers, by controlling that flow of attention.
Economists and policymakers have long been concerned about the power of dominant companies to extract economic rents, and there is a growing body of research arguing that an increase in rents is a major contributor to increased inequality, less vibrant entrepreneurial ecosystems, and lower levels of productivity growth and investment in modern economies (Piketty, Reference Piketty2014; Standing, Reference Standing2016; Ryan-Collins et al., Reference Ryan-Collins, Lloyd and Macfarlane2017; Mazzucato, Reference Mazzucato2018; Stiglitz, Reference Stiglitz2019; Christophers, Reference Christophers2020; Kurz, Reference Kurz2023).
Rents typically reflect control over a scarce factor of production. This control allows its holder to extract profits above a “normal” rate achievable in a competitive market. These profits are not the result of productive improvements that grow the economic pie; they are a reallocation of economic value from one party to another as a result of some kind of market power.
Not all rents represent abuse of power, though. As noted by Schumpeter (Reference Schumpeter2013), innovation—whether protected by patents, trade secrets, or just by moving faster and more capably than the competition—provides an opportunity to receive a disproportionate share of profits until the innovation is spread more widely. A company that continues to innovate can earn disproportionate profits for a long time, especially in a growing market.
During the expansive period of a new technology cycle, market leaders do emerge through innovation, solving new problems, and creating new value not only for consumers but also for a rich ecosystem of suppliers, intermediaries, and even competitors. These market leaders can reach astonishing levels of Schumpeterian profit as they lay waste to incumbents and dominate the new market. But once the growth of the market as a whole slows, they can no longer rely on the rising tide of new user adoption and their own innovations to maintain that level of profit. At that point, they may turn to more traditional extractive techniques, using their market power to maintain or increase their now-customary level of profits in the face of macroeconomic factors and competition that ought to be eroding them.Footnote 1
Companies like Amazon, Apple, Google, and Microsoft have been innovators, and much of the value they have received has been well earned as a return on their investments. But they are also increasingly the beneficiaries of economic rents. What, then, is the scarce factor of production that allows them to extract these rents? And how do you measure rents in a market where services are offered for free? This article argues that the scarce factor of production is the attention of users, and that rents can be identified by deviations from the best possible attention allocations of which a platform is capable.Footnote 2 These are represented by what in the search engine literature are referred to as “organic” results; that is, by the results chosen as best by the platform’s own search or recommendation algorithms before any self-serving distortions.
These “algorithmic attention rents” are rents in the classical sense. Attention is a factor of production that is limited in supplyFootnote 3 and can see its value appropriated by others than those who supply it.Footnote 4 By virtue of a platform’s dominance in a given attention market, it is able to appropriate an increasing share of the return to “attention”—including by providing lower-quality results, by charging a higher price than what the attention may be worth to those buying it, by forcing ecosystem participants to pay for visibility, or by trying to monopolize vertical product or service markets.
In allocating user attention, the platform is also shaping the allocation of economic value between competing stakeholders on the platform, including itself, its users, its third-party supplier ecosystem, and its advertisers. A platform’s third-party producers compete with each other, and advertisers compete with these producers and other advertisers, for a fixed quantum of user attention. Not only is a user’s attention finite, so too is the narrow window onto abundant information provided by the screen, whose interface design is controlled by the platform. Every user attention allocation can thus lead to a pecuniary gain or harm for a firm, website owner, or content creator on another side of the platform. Attention allocations drive value allocations.
This understanding shifts the analysis of a platform’s abuse of market power away from prices. A platform’s dominance can be measured by its ability to shape user attention independently of user preferences,Footnote 5 user inputs, and the relevance of its third-party ecosystem’s information.
Our approach differs from the surveillance capitalism view that “Big Tech” algorithms extract a “behavioral surplus” from users as excess data (beyond service improvement) to manipulate them (Zuboff, Reference Zuboff2019). While it is true that platforms collect enormous amounts of data on their users, profit from it, and use it not only for their users’ benefit but also for the benefit of their advertisers, this narrative misses the mark in several important ways. Data is an essential raw material that is aggregated and made useful by internet services, and personalization is often experienced as a benefit by consumers rather than a harm. Drawing a clear line between permissible and impermissible uses of data and personalization is often difficult. Data is ultimately a means to more effective attention allocations, not an end in itself.
In our framing, it is attention that can be extracted in excess of that needed by the platform to earn a normal return on capital. And once that excess attention has been extracted from the consumer, it can be redirected to extract pecuniary rents from suppliers or advertisers—or to allocate more value to the platform’s own information. A surveillance capitalism paradigm ignores that platforms are multisided, such that every suboptimal allocation or action impacts not just users but also the other platform sides.Footnote 6
Our emphasis on nonpecuniary attention rents being extracted from users in order to extract pecuniary rents from suppliers is in line with the predictions of Rochet and Tirole’s benchmark economic models of platforms (Rochet and Tirole, Reference Rochet and Tirole2003, Reference Rochet and Tirole2006). These predict that a monopolist will charge users zero pecuniary prices to maximize profits, if cross-side network effects are large (such that each advertiser benefits considerably from each additional user).Footnote 7 Our article elaborates on what happens next: a monopolist charging users zero pecuniary prices can increase profits further by degrading the quality of attention allocations to users below a competitive level without a loss of revenue that would make such a strategy unprofitable (Begent and Collyer, Reference Begent and Collyer2013). Unsurprisingly, this is what is widely observed today: such platforms have become more focused on attracting advertisers than on providing a good user experience, in order to extract excessive profits from advertisers or their producer ecosystems. Doctorow (Reference Doctorow2023) calls this “enshittification.”
Why do users, suppliers, and advertisers not switch to other platforms? One answer is that it is difficult to assemble what Amazon famously called the “flywheel,” in which a critical mass of algorithmically curated content from suppliers draws users, and more users draw more suppliers, in a virtuous circle through which the marketplace provider is able to continuously improve its services.Footnote 8 Data does play a role here. The more users that a platform has, the more data that it can collect about them and the better its algorithmic results can be. That means in practice that the market leaders are far enough ahead of the competition that, once they have established market power, they have headroom to worsen the product in other ways without losing users to competitors.
A critical part of the monopolist’s toolkit is also to raise switching costs by reducing frictions internally and raising them externally. For example, free shipping with Amazon Prime encourages users not to shop around, and Amazon’s “most favored nation” pricing contracts with its suppliers make it unlikely that lower prices will be found elsewhere (Graham, Reference Graham2023). While this is not the focus of our analysis, it is a backdrop to any understanding of how a marketplace platform can reduce the quality of results without losing participants.
The identification of a dominant platform’s pecuniary rents as being extracted via algorithmically manipulated attention makes it possible to better understand several different types of platform harms, including self-preferencing, excessive advertising, exploitation of third-party ecosystems, and exploitation of user click behavior. A major implication of this work is the need for greater disclosure, to allow regulators, investors, and the public to better observe, measure, and ultimately regulate potential harms stemming from how user attention is allocated.
Our primary goal here is to articulate a theory of platform market power and its abuses in the digital age that serves as a foundation for future work. In a companion paper, “Amazon’s Algorithmic Rents” (Strauss et al., Reference Strauss, O’Reilly and Mazzucato2023), we take a deeper look at the legal and policy application of these ideas to Amazon’s third-party marketplace. And in “Behind the Clicks: Can Amazon allocate user attention as it pleases?” (Rock et al., Reference Rock, Strauss, O’Reilly and Mazzucato2023), we demonstrate an approach to measuring algorithmic attention rents in Amazon’s marketplace. This research is part of a broader effort to map modern economic rents (Mazzucato et al., Reference Mazzucato, Ryan-Collins and Gouzoulis2023).
2. A theory of rents in digital markets
Economists see prices as the coordinator of economic activity: they are the sinews of the market’s invisible hand. Prices are thought to optimally allocate resources among competing ends when they reflect the dynamic information (preferences and scarcities) contained in the billions of daily decentralized interactions between demand and supply. As Hayek (Reference Hayek1945, p. 1) notes, decentralized price formation solves “the problem of the utilization of knowledge which is not given to anyone in its totality.” Market-driven price formation is superior to any centralized mechanism of coordination of economic activity because it ensures that “fuller use will be made of the existing knowledge” contained in the economy (Hayek, Reference Hayek1945, p. 2).
In neoclassical (marginalist) theory, perfect information exists, such that prices are optimal because they reflect the subjective utility evaluations of consumers.Footnote 9 By contrast, in Simon’s view of decision-making and in more recent work in behavioral economics, perfect information does not exist. Instead, consumers and producers make decisions that are shaped not only by human limits and biases but also by the institutions that shape the information that is available to decision-makers (Simon, Reference Simon1997). And today, what makes information imperfect is often not that there is too little of it, but too much, and the institutions that help us manage that abundance have extraordinary power to shape our decisions. Internet platforms change the institutional context and challenge the conventional view of decision-making in markets in several ways:
1. In informationally complex markets, platforms transfer much of the work of decision-making from humans to machines. Internet search, e-commerce search, social media feeds, and other algorithmically managed recommendation engines are examples of such machines.
2. These algorithmic machines are often used to match the supply and demand for non-priced goods and services, or those that are not individually priced. For example, in a free, ad-supported service such as Google Search or Facebook, or a subscription service such as Netflix or Spotify, consumers are matched with suppliers of information without considering price as a factor. Matching instead relies on other non-price factors to gauge the objectives and preferences of the consumer, the quality of the products or services on offer from suppliers, and even the reliability and reputation of the suppliers themselves. As Google founders Larry Page and Sergey Brin noted (Brin, Reference Brin1998), a platform such as Google Search offers objective rankings based on something as seemingly subjective as optimizing for “relevancy.”Footnote 10 Much like the decentralized markets celebrated by Hayek, platforms work their allocative magic by processing signals based on millions of decisions taken by other users on the internet and in real life, combined with data, both expressed and implied, on the user’s personal preferences.Footnote 11 Collecting more data is an essential part of what makes these systems work. But in processing these data to produce a relevancy ranking for user search or recommendations, the algorithmic system takes on the role of the invisible hand and works either to preserve the competitive process or to distort it.
3. Even when price is a factor, as in e-commerce or an advertising marketplace, the platform’s proprietary matching algorithms internalize and centralize the otherwise decentralized market mechanism. Furthermore, this internalized and centralized market is opaque. Rather than providing explicit information to consumers about the basis for the ranking of products and services, an algorithmically generated ranking implies much of that information, with the user expected to trust the rank ordering provided by the platform. The platforms control the presentation of information, and their algorithmically populated interface designs become the context for user decision-making.
Algorithmic attention allocations thus supplement—and at times supplant—traditional markets as the key institutional mechanism coordinating economic activity and shaping the terms on which exchange takes place online. The resulting algorithmic systems decide the winner among different producers whose information is competing for the user’s attention. They not only facilitate the effective delivery of the platform’s information services to users but also the monetization between platform sides (e.g., advertisers and users).
Because internet platforms have effectively internalized the market mechanism, their algorithmic allocations tend to reflect the degree of competitiveness within and between platforms. In a competitive market, a platform has a strong incentive for its algorithms to be fair; once it has market power, it is liable to make allocations that are self-serving.
Attention allocations involve design choices and trade-offs because attention is finite and consumable.Footnote 12 For example, a platform allocating more top screen space to advertising information can prevent the user from spending attention on more relevant organic results, leading to poorer choices. A platform providing information directly in response to a query rather than directing traffic to a third-party website might yield benefit to its users even as it reduces benefit to third-party suppliers.
Internet platforms are in a unique position to explore and optimize these trade-offs for their own benefit due to their access to real-time data on the participants from all platform sides. With millions of users repeating the same search, or responding to the same recommendations, the platform is able to run statistically meaningful A/B tests on thousands of different algorithmic weightings and design options. Google reports that in 2022 it ran more than 800,000 search experiments, which led to more than 4,000 changes to Search (Google, 2023a).
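The statistical leverage behind this experimental scale can be sketched with a standard two-proportion z-test (the test itself is textbook statistics; the traffic and click numbers below are hypothetical, not Google’s):

```python
import math

def two_proportion_z(clicks_a, n_a, clicks_b, n_b):
    """Z-statistic for the difference between two click-through rates."""
    p_a, p_b = clicks_a / n_a, clicks_b / n_b
    p_pool = (clicks_a + clicks_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    return (p_b - p_a) / se

# Hypothetical experiment: one million users per arm.
z = two_proportion_z(clicks_a=50_000, n_a=1_000_000,   # 5.0% CTR, control
                     clicks_b=51_000, n_b=1_000_000)   # 5.1% CTR, variant
print(round(z, 1))
```

At this scale, even a 0.1-percentage-point difference in click-through rate yields a z-statistic above 3, comfortably beyond conventional significance thresholds, which is why a platform with this much traffic can confidently evaluate thousands of algorithmic and design variations per year.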
2.1. How the limits of human cognition enable algorithmic authority
Simon’s (Reference Simon and Estes1978a, Reference Simon1995) information processing paradigm focuses on how humans make decisions in the real world: “compatible with the access to information and the computational capacities that are actually possessed by organisms, including man, in the kinds of environments in which such organisms exist” (Simon, Reference Simon1955, p. 99).
Human computational capacities are limited, as is their time.Footnote 13 These “hardware” limits help explain real-world behaviors, which tend to follow heuristics—informational shortcuts and strategies that allow humans to make reasonable choices in everyday complex environments (Simon, Reference Simon1978b, p. 12, Simon, Reference Simon, Durlauf and Blume2017). Heuristics reflect human “satisficing behavior” which aims for “good enough” outcomes,Footnote 14 based on many unknowns. This contrasts with the economic assumption that humans “optimize” to achieve the “best” solution based on the known outcomes from every action. Simon’s emphasis on the decision-making process contrasts strongly with the neoclassical focus on (equilibrium) outcomes by unconstrained actors.
Simon’s insights are echoed by contemporary behavioral economics. Tversky and Kahneman (Reference Tversky and Kahneman1974, 1981) and Kahneman (Reference Kahneman2011) posited that humans have two decision-making modes: what they called System 1, for decisions that need to be made quickly, and System 2, for those that depend on careful rational analysis. According to Kahneman (Reference Kahneman2011), 98% of human decision-making relies on System 1. While these two systems are appropriate for different types of activity (immediate response to a threat, for instance, versus long-term planning), they may be applied inappropriately. System 1 is particularly subject to various cognitive biases, such as anchoring bias, by which the first piece of information presented to us frames our judgment of additional information.
Studies of actual human behavior on internet platforms bear out the predictions of both Simon’s “information abundant/attention scarce” model and behavioral economics. The majority of clicks tend to go to the first few results displayed near the top left of the screen (Keane and O’Brien, Reference Keane and O’Brien2006; Craswell et al., Reference Craswell, Zoeter, Taylor and Ramsey2008), even if click shares vary by some unknown amount as these results deteriorate in relevance. This is called positional bias (Joachims et al., Reference Joachims, Swaminathan and Schnabel2017).Footnote 15 It is a form of anchoring bias that has been well documented by search engine optimization consultants. For example, a 2022 study of more than 4 million Google Search results pages found that more than 54% of clicks go to the first three organic results. Only 0.63% of users click through to the second page of the results (Dean, Reference Dean2023). A 2018 report from the web tracking firm Jumpshot (2018), based on a large sample of users, noted the same behavior on Amazon: user clicks are concentrated on the first few rows of product results. In our own study, we found that 78% of the most clicked product listings are positioned within the first two rows of the search results (Rock et al., Reference Rock, Strauss, O’Reilly and Mazzucato2023). Perhaps even more strikingly, a 2022 study by e-commerce consulting firm Feedvisor (2022) reported that 32% of shoppers simply opted to buy the first product listed on a search results page.
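These click-concentration figures are consistent with a simple “examination” model of positional bias from the information-retrieval literature, in which the probability of a click is the probability that a position is examined at all, times the probability that its result is relevant. A minimal sketch (the geometric decay rate is an illustrative assumption, chosen only to roughly reproduce the observed top-three concentration):

```python
def click_share(examine_probs, relevance):
    """Expected share of clicks per position under an examination model."""
    raw = [e * r for e, r in zip(examine_probs, relevance)]
    total = sum(raw)
    return [x / total for x in raw]

# Assume examination decays geometrically with rank (hypothetical rate 0.8)
# while relevance is flat: all ten results are equally good.
examine = [0.8 ** i for i in range(10)]
relevance = [1.0] * 10
shares = click_share(examine, relevance)
print(f"Top-3 share: {sum(shares[:3]):.0%}")
```

Even with all ten results equally relevant, the decay in examination alone concentrates roughly 55% of expected clicks on the first three positions in this toy parameterization, close to the share reported by Dean (2023).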
The value proposition and business model of the modern internet is thus entirely in line with the premise that users are not perfect “hedonic calculators” (Robinson, Reference Robinson2001). And this is why users are able to benefit from intermediators such as Google or Amazon. Time savings accrue to users when they click on algorithmically ranked results based on position, trusting that the platform has done the work for them, rather than having to evaluate intrinsic and latent product qualities themselves. This behavioral heuristic saves users enormous amounts of time and cognition in decision-making. Following a platform’s algorithm can lead the user to make better choices, since it effectively secures what Simon (Reference Simon1997) calls a degree of expertise in the decision-making process.
The impact of that expertise is borne out by actual observed user behavior. Clicks go to the top results not just because of human positional bias, but also because the platforms have traditionally worked very hard to make the top results the best results. Google went so far as to patent a ranking factor that it called “the long click” (Lopatenko et al., Reference Lopatenko, Kim, Dornbush, Wei, Kilbourn and Lopyrev2015). The patent posited that a “short click” (i.e., when a user clicks on a result and comes right back and clicks on another result) indicates dissatisfaction with the result, while a long click (when the user goes away and does not come back) represents success. Along with countless other factors measured across millions of searches, long clicks could be used to raise or lower the rank of search results, with the goal that the first result gets the long click.Footnote 16 The success of these systems is precisely why they have gained the trust of users, and why that trust, once earned, can be abused.
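A toy rendering of the “long click” signal makes the mechanics concrete (the real ranking system is proprietary; the 30-second threshold and the scoring rule here are invented for illustration):

```python
# A "short click" (user bounces back quickly) counts against a result;
# a "long click" (user stays away) counts for it.
SHORT_CLICK_SECONDS = 30  # hypothetical dwell-time threshold

def long_click_score(dwell_times):
    """Fraction of clicks that were 'long', usable as a relevance signal."""
    if not dwell_times:
        return 0.0
    long_clicks = sum(1 for t in dwell_times if t >= SHORT_CLICK_SECONDS)
    return long_clicks / len(dwell_times)

# Result A: users stay on the destination page; Result B: users bounce.
print(long_click_score([120, 300, 45, 600]))  # 1.0
print(long_click_score([5, 8, 200, 3]))       # 0.25
```

Aggregated over millions of searches, a score like this can be folded into ranking so that results earning long clicks rise and results earning short clicks fall, aligning the top positions with demonstrated user satisfaction.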
A platform’s power to get users to click on its algorithmic outputs is imperfect, since clicks and views are also influenced by the broader relative prominence and attractiveness of the results as a whole (Yue et al., Reference Yue, Patel and Roehrig2010), including their relative informativeness (Keane and O’Brien, Reference Keane and O’Brien2006; Craswell et al., Reference Craswell, Zoeter, Taylor and Ramsey2008). Yet, the power that trusted platforms have to drive user attention, and in turn clicks or views, is immense. Attention directed to new information, including advertising, can lead to immediate, almost frictionless action. A purchase, a view of an addictive TikTok, YouTube, or Facebook video, a path down a rabbit hole of attention consumption, is just a click away. In feed recommendation systems such as those offered by social media systems, even clicks may not be necessary, as platforms increasingly use autoplaying videos to capture user attention, which thus becomes “opt out” rather than “opt in.”
The attention-shaping power of online platforms is a reminder of Simon’s key assertion that organizations or institutions—he often used the terms interchangeably—shape individual behavior by determining “the environments of information in which decisions are taken” (Simon, Reference Simon1997). Institutions help consumers make decisions by establishing a division of labor in “the process of deciding,” which is just as important as a division of labor in the processes of “doing,” argues Simon. He elaborates on these ideas in Administrative Behavior (Simon, Reference Simon1997, Chapter 7), focusing on the concept of authority in securing an effective division of labor in decision-making within the organization:
‘Authority’ may be defined as the power to make decisions which guide the actions of another. […] That is, he holds in abeyance his own critical faculties for choosing between alternatives and uses the formal criterion of the receipt of a command or signal as his basis for choice.
This passage could be read today as applying to algorithmic recommendations made by a dominant digital platform to a user, just as much as a recommendation coming from a senior manager or expert to a worker, as outlined by Simon.
2.2. How algorithmic authority enables attention rents
Attention rents occur when a platform abuses its algorithmic authority and exploits its role as a trusted intermediator to direct user attention (clicks or time spent) to suboptimal—often sponsored—information. At their core, these rents exploit users’ positional bias in how they click, or what they view, by placing this suboptimal information in users’ core attentional zone. This positional bias relies heavily on the suboptimal information trying to replicate, or leverage, the authority of a platform’s organic algorithmic result (so-called “trust bias”) (Keane and O’Brien, Reference Keane and O’Brien2006; Keane et al., Reference Keane, O’Brien and Smyth2008). The information may be embedded between the optimal organic results, or increasingly, may displace them.
One implication of this algorithmic power in allocating user attention is that it provides the platform with considerable leverage over both parties (suppliers and advertisers) looking to access user attention—the platform’s core commodity. The platform can ensure that paid (advertising) and organic results directly compete with one another for user attention, especially when placed “above the fold,” as it was called in the days of newspapers, or “before the scroll,” as it might be called today.Footnote 17 The attention rent is levied on the users—to get them to allocate attention to more profitable content for the platform—in order to extract a pecuniary rent from the supplier or advertiser side of the dominant platform’s multisided market. This rent can help create an above-normal return for the platform, especially when combined with other charges and fees already placed on its ecosystem.
Thus, greater monetization of users entails a double-sided process, impacting not just a platform’s users but also third-party firms and advertisers, as the nature of content shown to users changes. Information from some third-party firms gets demoted by additional paid or addictive content, and the demoted content must adjust to the new algorithmic optimization to survive. Potential advertisers receive more priority attention space and, in turn, greater incentive to advertise. And users must consume more paid-for content. What happens on one side of the platform impacts the other(s) (Hovenkamp, Reference Hovenkamp2020)—not necessarily from network effects,Footnote 18 but by virtue of the fixed screen space (i.e., attention sphere) over which the platform sides compete.
2.3. How attention rents become pecuniary rents
Contemporary theories of platforms focus mostly on externalities arising from network effects. This helps explain why platforms provide services to users for free, in order to generate network effects and/or economies of scale. This also explains pricing structures on platforms, that is, how the pricing burden falls between the two sides of the platform. For Rochet and Tirole (Reference Rochet and Tirole2006), the optimal price structure on platforms is set indirectly to optimize total profits from both sides. This means that pricing on one side of the platform should take into account the impact it would have on participation—and in turn profits—on the other side of the platform.Footnote 19 This tends to create a highly imbalanced pricing structure, whereby users pay no pecuniary price for the platform and the price burden instead falls on third-party firms or advertisers. Because advertisers value an additional user considerably, and because users prefer free services to paying directly for them, it is often more profitable for the platform to charge users indirectly, through their attention.Footnote 20
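The corner solution predicted by Rochet and Tirole can be reproduced in a deliberately stylized numerical model (linear user demand and a fixed advertising value per user; all functional forms and numbers are invented for illustration): when the cross-side value of a user to advertisers is large enough, the profit-maximizing user price drops to zero.

```python
def profit(user_price, ad_value_per_user):
    """Platform profit: direct user charges plus per-user advertising value."""
    users = max(0.0, 1.0 - user_price)  # linear demand: users fall with price
    return users * user_price + ad_value_per_user * users

def best_user_price(ad_value_per_user):
    """Grid search over user prices in [0, 1]."""
    prices = [i / 100 for i in range(101)]
    return max(prices, key=lambda p: profit(p, ad_value_per_user))

print(best_user_price(0.2))  # weak cross-side value: charge users directly
print(best_user_price(1.5))  # strong cross-side value: free-to-user service
```

With weak cross-side value the platform charges users a positive price (0.4 in this toy parameterization); with strong cross-side value the optimum is a free-to-user service monetized entirely on the advertiser side, matching the imbalanced price structures observed in practice.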
But this theory says less about what mature platforms, which have for some time been charging users zero (or low) prices, do next. When users face zero pecuniary prices, and their attention is monetized on the other side of the market, increasing profits initially come from gaining more users. But over time, as user growth slows, increasing profits from user participation become a function of the quantity of advertising. Weyl (Reference Weyl2010, p. 1643) goes so far as to see “the platform’s problem as [first] choosing participation rates on the two sides rather than the prices supporting this allocation.” User responsiveness to advertising, often measured in real time by the platform, becomes key for determining the optimal business model mix such that (Behringer and Filistrucchi, Reference Behringer and Filistrucchi2015, p. 293): “In two-sided markets, quantities on one market side are functions of prices on that market side and quantities on the other market side.”
It follows that algorithms coordinate platform sides not just by setting the price level as Tirole notes, but also by controlling attention allocations. If the participation of sides is relatively fixed (due to user stickiness and third-party lock-in), then extracting monopoly rents from one side of the platform requires adjusting attention allocations on the other side(s), since users must consume greater quantities of the more profitable content.Footnote 21
In a multisided digital platform, increasing profits or profitability thus tends to require changing the information content shown to users on the screen (Behringer and Filistrucchi, Reference Behringer and Filistrucchi2015). Which outputs a platform’s organic algorithms optimize for (such as recommendation algorithms optimizing for more sustained engagement), and the algorithmic mix of outputs (such as more advertising and fewer organic results), are essential levers by which a platform increases user monetization and extracts more profit from its ecosystem of firms or advertisers. Without changing the type of content shown to users, the opportunities for user monetization by the platform’s third-party firms or advertisers are limited to the traditional solution of raising prices (e.g., for advertising, subscriptions, or other fees).
These changes in information can impact value allocations if the platform chooses to algorithmically allocate more attention to paid or otherwise self-serving content. This is done constantly, by changes in the relative screen presentation of results, by changes to the underlying algorithmic ranking systems, and, as asserted in the US Department of Justice complaint against Google, by internal controls that “tweak” the results to get the desired outcome (United States of America v. Google LLC, 2020). Measured user behavior provides a rich stream of data by which the platform can assess the impact of changes and make further adjustments to reach its objectives (“operating metrics”).
In traditional markets, in which goods and services are directly traded for money, prices persistently higher than “normal” are seen as a sign of excess market power and rent extraction. In attention markets, the primary expression of algorithmic market power is the ability of the platform to profitably direct user attention, and to produce information allocations, that are to an appreciable degree independent of consumer preferences, competitor information relevance, and its users’ explicit search inputs. Algorithmic rent is part of a platform’s wider exertion of market power when it can make inferior attention allocations without losing sufficient sales, or users and third-party suppliers, to make such an allocation unprofitable (Begent and Collyer, Reference Begent and Collyer2013; Federal Trade Commission, et al. v. Amazon.Com, Inc. 2023).
A core notion explored in our research is that an attention rent exists when there is a deviation from the best attention allocations of which the platform is capable, and on whose promise the platform drew its users in the first place. In a competitive market, platforms win by providing the best possible results to consumers. Deviations from this standard generally occur once the platform has cemented its dominance and hold over user attention. Because it is extremely difficult and costly to deliver high-quality search or recommendation results, the market leaders are frequently far more capable than their competitors, which gives them additional room to worsen their results without losing customers.
Algorithmically driven attention allocations by the platform, matching users to suboptimal information, can create rents in the sense of “returns” to a factor of production—here “attention” (proxied by screen space and user time)—which is fixed in supply, largely invariant to changes in prices, and exploited for the platform’s own profit (Blaug, Reference Blaug1997; Alchian, Reference Alchian, Durlauf and Blume2017). A platform is increasing profits not from better matches (productivity improvements), but from information matches that are more profitable for itself, or from higher matching fees.
This pattern of rent extraction by exploiting the relative proportion and position of advertising and organic results is exactly what has been observed in practice. In its earlier years, Google Search result pages consisted of a list of ten organic search results (often referred to as the “ten blue links”) and a snippet of content from the destination site. These organic links were framed by adsFootnote 22: three above the organic results, with additional ads in a column to the right of the organic results. The ads were clearly differentiated from the organic results in a distinct color block. A remarkable series of screenshots recorded by Dutch search engine consultant Blacquière (Reference Blacquière2014)—updated by Marvin (Reference Marvin2020)—shows how ads took over more of the Search screen and became increasingly hard to distinguish from organic search results on Google Search.
In 2016, the right-hand column of ads was removed completely (Kim, Reference Kim2016), and today, ads are nearly indistinguishable from organic results, interspersed between the organic results, and far more of the most favored space at the top of the page is given over to ads. Additional space is taken up by Google’s own content, often displayed in a carousel of images, or in an informational panel that Google refers to as a “OneBox” (O’Reilly, Reference O’Reilly2019), or most recently, in an AI-generated answer to the user’s query.
Clearly, internet platforms are well within their rights to monetize their offerings. When services are free to users, someone has to pay the bills. The question, though, is what level of monetization is justified to recover costs and earn a fair return on investment, and when does it become excessive? When is advertising of value to the supplier ecosystem and to users, and when does it become a source of extractive rents? It is not always easy to determine the answers.
At Amazon, however, the answer to the question of whether monetization has become extractive appears unambiguous. Almost all organic recommendations,Footnote 23 which helped make Amazon’s digital marketplace revolutionary, have been replaced by purely paid-for recommendations or hybrid recommendations, for example, “Trending now—Sponsored,” or “Highly rated—Sponsored” (Kaziukėnas, Reference Kaziukėnas2021). E-commerce research firm Marketplace Pulse estimates that of the first 20 products a shopper sees when searching on Amazon, only 4 are now organic results (Kaziukėnas, Reference Kaziukėnas2022). Because of how paid results are placed, it can sometimes take scrolling past three browser windows worth of search results to get to the fifth organic result (Kaziukėnas, Reference Kaziukėnas2022).
Our own analysis of Amazon’s ability to allocate value (i.e., clicks) between competing products in its third-party marketplace through the algorithmic arrangement of search results shows that a product listing shifted to a sufficiently higher “attention share” position will receive more clicks, regardless of its relevance or price. A product listing in the bottom 10 for relevance, but in the highest percentile for “attention share,” is as likely to be clicked as a top-5 most-relevant product in an average position. This dynamic enables Amazon to trade off the relevance of a search result against its screen position, through advertising, while maintaining a high click-through rate (Rock et al., Reference Rock, Strauss, O’Reilly and Mazzucato2023).
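The trade-off described above can be illustrated with a toy position-bias model of expected clicks. The functional form and all numbers below are hypothetical assumptions chosen for illustration; they are not estimates from our data or Amazon's actual ranking system:

```python
# Toy model (illustrative only): expected clicks as the product of a
# position-bias curve (attention available at a screen slot) and an
# intrinsic-relevance score. Curve shape and parameters are hypothetical.

def position_bias(rank: int) -> float:
    """Hypothetical attention share by screen rank: attention decays with rank."""
    return 1.0 / (1.0 + 0.5 * (rank - 1))

def expected_clicks(rank: int, relevance: float, base_clicks: float = 100.0) -> float:
    """Expected clicks = attention available at the slot x relevance of the item."""
    return base_clicks * position_bias(rank) * relevance

# A low-relevance product promoted to the top slot...
promoted_low_relevance = expected_clicks(rank=1, relevance=0.4)
# ...matches a highly relevant product left in an average position.
organic_high_relevance = expected_clicks(rank=4, relevance=1.0)

print(promoted_low_relevance, organic_high_relevance)  # both come to ~40 clicks
```

Under this stylized model, the attention attached to a screen slot can fully offset a relevance deficit, which is precisely the lever that selling screen position monetizes.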
Users may be harmed when a platform replaces organic results with advertising because they may see fewer relevant options and may be directed away from better products or lower prices. Organic outputs and advertising both aim to be clicked on or looked at, but the ranking of the former is optimized for intrinsic relevance among all eligible information sources, while the latter optimizes directly for clicks among those firms that bid for user attention. While the most relevant ads can be as useful to consumers as the organic results, showing more ads means diving deeper into the inventory of ads and potentially showing worse ads that are not as useful.
The harms to Amazon’s marketplace suppliers from replacing organic content with ads are far more immediate. In a world where gatekeepers’ principal benefit to consumers is algorithmically curated access to a rich ecosystem of suppliers, and where their benefit to suppliers is algorithmically curated access to a huge market of end users, organic search results are the coin of the realm and fairness is of the essence. Paid results standing in for (rather than supplementing) organic results are the equivalent of a debased currency. When advertising replaces organic search results, the supplier ecosystem must now pay for visibility that it once earned through product quality and reputation signals.
As advertising dominates more of the screen, it has become a barrier to entryFootnote 24 for merchants wanting to sell on Amazon—a tax on top of referral and fulfillment fees (Morrison, Reference Morrison2021). “There’s fewer organic search results on the Amazon marketplace page, so that increasingly means the only way to get on the page is to buy your way on there,” said Jason Goldberg, chief commerce strategy officer at advertising mega-firm Publicis (Palmer, Reference Palmer2021).Footnote 25 The result, according to Quartile (2018), a major AI-powered advertising platform, is that Amazon is now a “pay-to-play” platform for top screen positions. Three-quarters of all sellers on Amazon now “choose” to advertise (Mileva, Reference Mileva2022). For small- and medium-sized sellers, the figure is 79% (Jungle Scout, 2022).
This is not advertising like that offered by Google or Facebook or traditional media, which rides as a passenger on a current of attention focused on information or entertainment, but a kind of Hunger Games-like competition between merchants to capture the purchases of people who are already looking to buy. It is a zero-sum transfer of attention and value between sellers (Gornyi, Reference Gornyi2023), which simultaneously increases revenue for Amazon but without “necessarily growing the sales volume” (Kaziukėnas, Reference Kaziukėnas2021). An attention boost to one seller necessarily comes at the expense of another.
Moreover, unlike in traditional media advertising, these ads appear in the actual decision-making interface with an almost identical appearance to the organically recommended product choices. The majority of Amazon marketplace ads simply duplicate the organic listings and appear on the same page, often adjacent to them, offering no additional information to consumers. And with ads and organic listings competing for user attention on the same search results screen, Amazon exploits users’ positional bias, and puts the ad rather than the organic result in the position most likely to be clicked on, thus extracting a fee from the supplier while providing no added benefit.Footnote 26 This behavior demonstrates exactly what we mean when we say that Amazon uses attention rents extracted from users to extract corresponding pecuniary rents from its supplier ecosystem.
Through this zero-sum competition, Amazon has driven both higher ad prices and lower returns on ad spend (Jungle Scout, 2022; Soper, Reference Soper2023), leading to a rent transfer from third-party firms to the platform itself. By 2022, advertising had become a highly profitable $37.7 billion business for Amazon (Amazon.com, 2022, p. 67).Footnote 27 Meanwhile, the average cost per click on Amazon ads more than doubled, from $0.56 in 2018 to $1.20 in 2021 (Business of Apps, 2022). The average advertising cost of sale (ACoS) was 30% according to Ad Badger, meaning that 30 cents must now be spent on ads to drive $1 of sales (The Badger, 2023).
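The cost-of-sale figure cited above is a simple ratio of ad spend to attributed sales. A minimal sketch, with illustrative numbers:

```python
# Advertising cost of sale (ACoS) = ad spend / attributed sales revenue.
# The dollar figures below are illustrative, not drawn from any seller's data.

def acos(ad_spend: float, attributed_sales: float) -> float:
    """Share of each sales dollar consumed by advertising."""
    return ad_spend / attributed_sales

# A seller spending $300 on ads to drive $1,000 of sales has a 30% ACoS,
# i.e., 30 cents of ad spend per $1 of sales.
print(f"{acos(300, 1_000):.0%}")  # prints 30%
```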
Interestingly, as TikTok begins to mount an e-commerce challenge to Amazon, it is relying on new forms of algorithmic earned attention to attract and engage users. As with its social feed product, it uses viral cascades of user attention to surface popular content. In the case of e-commerce, that is the discovery of so-called “dupes,” lower-cost products that have the same or better features than higher-cost branded products (Barinka, Reference Barinka2023). That is, TikTok is demonstrating once again that finding new signals that provide better organic algorithmic results confers a competitive advantage in acquiring and retaining users, and that algorithmic rents tend to be extracted by platforms only once they have cemented their dominance.
Distortion of the best algorithmically chosen results can also be seen in social media systems. Facebook, X (formerly Twitter), and Instagram all began by matching users with a unique feed of content from other users that they have chosen to follow, whether they be friends, celebrities, or news sources. During the growth period, when user acquisition is the paramount goal, these platforms align their algorithmic selections with this user promise.
In this early stage, sorting of posts in the feed is typically in reverse chronological order (newest to oldest), or through a social graph (from friends). Later, other possible arrangements are tried, such as sorting by posts with the most engagement. Eventually, the sorting is fully given over to the algorithmic recommendations. At first, users are given some control, allowing them to revert, for example, to a chronological sort, or to preference posts from those marked as friends, or from those the user has chosen to follow. Over time, these expressions of user preference are made harder to find or disappear entirely. Meanwhile, the ad load goes up, also making it harder to find and enjoy the posts that the user originally signed up for. Moreover, “dark patterns,” such as automatically switching the user to a feed of recommended videos (e.g., Instagram reels) as soon as they finish viewing a video posted by a friend, are also used to help override the user’s preferences. From there, it is a slippery slope to shaping the content shown in such a way as to extract additional attention from users by showing them whatever content will drive the most engagement.
While we have not done empirical research to measure algorithmic attention rents in social media, we believe that the notion of algorithmic attention rents provides fruitful avenues for further research and analysis of social media and other recommendation systems as well as for search-based systems.
3. Harms from algorithmic rents
While it is harm to consumers or to competitors that is most likely to draw the eye of regulators, many of the harms from algorithmic attention rents may fall more heavily on the supply side of a platform’s ecosystem.
Thompson (Reference Thompson2014, Reference Thompson2023) notes that in order to provide ranked results or recommendations to its users, a platform must aggregate information—pooling and digitizing supply, and commoditizing it in the process. This aggregation reduces the power of the platform’s suppliers, and makes them dependent on the fairness of the algorithmic rankings provided by the platform.Footnote 28 It is these suppliers, not just consumers, who are harmed when information allocations are distorted from their optimally relevant competitive level.
To understand why clearly defining the role of the platform as intermediator and aggregator matters, it is important to take an ecosystem view of the total investment in value creation, rather than attributing all of the value creation to the platform. Any notion of fairness depends on this ecosystem view: without websites, there would be no need for Google Search; without merchants, no Amazon; without app developers, no App Stores; without users creating content as well as consuming it, no social media. When suppliers are harmed, users too will be harmed over the long run.
These ecosystems of value creators depend on the platform’s algorithms for what in the industry is called “earned attention.”Footnote 29 When the platform exempts its own competitive content or services from its own algorithms, or when it displaces organic results with paid results, the ecosystem suffers a loss of incentive and reward for continuing to produce value. Eventually, this loss of value affects both users and the platform itself, as the whole virtuous circle of creation, aggregation, and curation breaks down. That which has been earned is appropriated by the platform.
Advertisers can also provide value to consumers, especially when ads are well targeted to their interests. This value too can be algorithmically misappropriated by the platform. Targeted advertising commoditizes attention, personalized by demographics, location, and interests, to be auctioned off at scale and algorithmically matched billions of times per day with content viewed by consumers. Advertisers bid for that attention, but what they get may not be what they pay for if the platform places ads near inappropriate brand-damaging content (Internet Advertising Bureau, 2020), places display ads (which advertisers pay for by a count of impressions [i.e., inclusion in pages being displayed] rather than clicks) in locations far below the scroll where they are unlikely actually to be seen by users, or doesn’t properly police problems such as click fraud, viewing by bots rather than humans, autoplaying videos that are not actually chosen by the user, and so on. Hwang (Reference Hwang2020) calls this ad quality problem “the subprime attention crisis.”Footnote 30
A fair reward for value created is a useful framework within which to evaluate harms from algorithmic allocations. When considering algorithmic attention allocations, it is helpful to ask who wins, who loses, and who decides.Footnote 31 This can be a complex calculus. Examples from Google Search, Amazon marketplace, and social media feeds, which we explore below, show the range of questions that can be explored using the analysis of algorithmic attention rents in ad-based businesses.
3.1. Google Search: The market shaping power of attention allocations
Google does make a substantial effort to find the most relevant advertisements (Google, 2023b).Footnote 32 In the pay-per-click (PPC) ad system used in Google Search (“Adwords”), the company has strong incentives to find the ads that are most likely to draw clicks. In display advertising formats (pay-per-view), in which Google is a player through its algorithmic placement of ads on third party websites and its control of the ad exchanges in which advertisements are algorithmically bought and sold, Google’s incentives are less clearly aligned with those of its advertisers.
Even in the PPC system of Adwords, though, the pool of possible results comes from those willing and able to pay. And that strongly favors some firms over others. A good example may be found in a search such as “buy new tires.” In current designs, ads populate the best screen positions, and those ads typically come from large national chains and online sellers (O’Reilly, Reference O’Reilly2022). Local merchants are shown on a map far down the page, requiring scrolling to see them. The map does provide unpaid business listings that allow users to learn more about each of the merchants, including their opening hours, address, and phone number. This is valuable to both users and to merchants even though it does not take the form of a traditional web link. But this information is far harder to find today than it was 10 years ago. This imposes a time cost on users and a visibility penalty on suppliers, such that they must now advertise to gain attention that they had previously earned by signals such as quality and a location convenient to the user.
Note the market shaping power on display. As consumers increasingly rely on search engines to find products, Google is essentially deciding that those most willing and able to pay are more deserving of consumer attention than those that relevancy factors such as location or reputation suggest. This example highlights the responsibility of platforms to think deeply about the consequences of their attention allocations. In short, the trade-offs are not always obvious, and weighing the harms to any party requires careful analysis.
3.2. Amazon marketplace: Advertising as extractive rent
While celebrated by shareholders and analysts as a triumph of business strategy, earning the company enormous profits, Amazon’s marketplace advertising business seems to provide little benefit to either users or suppliers (who, in the case of Amazon’s third party marketplace, are also its advertisers). Users see fewer of the results that Amazon’s own algorithms have calculated as an ideal match for their search, and instead see ads, whose relevance is often lower.
In Rock et al. (Reference Rock, Strauss, O’Reilly and Mazzucato2023), we compared the organic rank of a search result with the ad rank of the same result.Footnote 33 We found that, on average, an ad boosted the visibility of a possibly inferior (by the judgment of Amazon’s own algorithms) product by between 5 and 50 positions, with a median boost of 17 positions. We also found that occupying high-value screen positions matters more for sponsored than for organic listings, suggesting that users are not finding the most relevant products in the top-ranking spots.
There may be some benefit from advertising to merchants trying to raise the visibility of new high-quality products (which may have fewer reviews and less history of being clicked on and purchased), but for the most part Amazon’s ad business appears simply to be an additional fee levied on merchants rather than a source of added value to them. It raises their cost of doing business, which may be passed along to consumers in the form of higher prices. We found that the top three most-clicked advertised products are about 17% more expensive than organic ones ($19.30 vs. $16.50) and one-third less relevant (an average organic rank of 4 vs. 3).
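The rank-boost comparison behind these figures can be sketched as follows. The (organic rank, ad rank) pairs below are invented to echo the reported 5-to-50 range and median of 17; they are not the study's dataset:

```python
# Sketch of the organic-vs-ad rank comparison: for each advertised product,
# the "boost" is how many positions the ad gains over the same product's
# organic rank. Sample pairs are hypothetical, constructed for illustration.
from statistics import median

# (organic_rank, ad_rank) pairs for the same product -- illustrative values only
pairs = [(22, 5), (9, 4), (60, 10), (18, 1), (36, 6)]

boosts = [organic - ad for organic, ad in pairs]
print(sorted(boosts), "median boost:", median(boosts))
```

A real audit would build such pairs at scale from scraped search-result pages, matching each sponsored listing to the organic position of the identical product.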
Why do sellers put up with this behavior? As outlined in the recent United States Federal Trade Commission complaint against Amazon (Graham, Reference Graham2023),Footnote 34 the company has used contractual requirements to raise switching costs and to prevent sellers from offering lower prices on competing marketplaces, or even on their own sites. Additional techniques involve punishing sellers who do not purchase services such as fulfillment and advertising from Amazon by sentencing them to virtual algorithmic invisibility. We explore this subject in greater detail in the companion paper, “Amazon’s Algorithmic Rents” (Strauss et al., Reference Strauss, O’Reilly and Mazzucato2023).
Why do users put up with it? It is quite possible that they do not, but just not in sufficient numbers to affect Amazon’s calculus about the overall profitability of its strategy. According to Stone (Reference Stone2021), “When sponsored ads were prominently displayed, there was a small, statistically detectable short-term decline in the number of customers who ended up making a purchase… But while he [Bezos] cautioned against alienating customers by serving too many ads, he opted to vigorously move forward, saying that any deleterious long-term consequences would have to be implausibly large to outweigh the potential windfall and the investment opportunities that could result from it.”
The damage to Amazon may be a gradual downslope or a sudden cliff. When does brand and reputation damage accumulate to the point that consumers start trusting Amazon less, shopping at Amazon less, and expending the effort of trying alternatives? If Amazon is experiencing slow, incremental costs from what it is doing, there is more hope that it will change its behaviorFootnote 35 than if it is experiencing no costs unless regulators impose them.
3.3. Social media: Engagement is a two-edged sword
Assessing optimality for social media platforms, which may be used either for utility or entertainment, is more difficult than it is for utility platforms like search or e-commerce. If the user is simply looking to be entertained, an algorithmic feed optimized for engagement may arguably be what the user wants, as the success of TikTok demonstrates. And especially when informed by personalized data, ads can be highly relevant and add information value to consumers. But while higher engagement and time spent on a social media site can, on the surface, be seen as a sign that users are finding more value on it, there are also harms, starting with addictive behavior, and when engagement is driven by posts fueling anger, self-doubt, misinformation, or controversy.
Social media feed recommendations, which contain advertising directly in the feed, give platforms an incentive to extract attention rents via increased engagement. More time spent on the platform means more surface area for advertising. Thus, consumers may see a higher proportion of stories optimized for “bad” forms of engagement, including those that are divisive, titillating, contain misinformation, or are otherwise harmful to consumers. As Stray (Reference Stray2021) notes, over the long term, companies have incentives to manage for “good” engagement so that users do not leave, but over the short term, they can profit even from “bad” engagement.
Advertisers are also harmed when their ads are algorithmically matched to inappropriate content (Internet Advertising Bureau, 2020). Only the platform benefits, which is why we consider these systems ripe for attention rents. More research is needed.
4. Measuring algorithmic attention rents
Today’s platforms are data-rich environments, where harms can be measured and estimated directly. Antitrust regulatory bodies now include data teams (Hunt, Reference Hunt2022), though they remain hampered by platforms not providing regular operating and data disclosures (Section 5.2).
Measuring the extent to which sponsored listings have displaced organic search results is one way to understand whether or not these platforms are abusing their market power. We make the assumption that if, as the platforms have both promised and demonstrated, their algorithms aim to make the best possible attention allocations, the deviation between organic and paid results provides evidence of rents and harms. Using this approach, the presence of algorithmic attention rents can be inferred as follows:
1. Comparing the organic ranking of a product with its paid ranking to determine the extent to which the platform is preferencing results that its own organic algorithm shows are inferior. For example, on Amazon, paid advertising often ranks a product higher—sometimes far higher—than the organic listing for the same product, and above other, organically superior products (Rock et al., Reference Rock, Strauss, O’Reilly and Mazzucato2023).
2. Examining whether ads bring additional information to consumers. For example, on Amazon, the organic listing for a product and an identical paid listing (ad) for the same product often appear side by side. This duplication reduces variety: no new information is provided; on net, information is taken away.
3. Comparing the quality of a dominant platform’s organic algorithmic results with the organic allocations offered by other less dominant platforms that do face competitive pressures. For example, it is possible to compare Amazon’s product search results on its marketplace with those from Google Shopping; or Google Travel results with those from a site such as TripAdvisor. This entails an assessment of the relative quality of the organic results, which may be a very data-intensive and complex task.
4. Examining whether or not the information (including information quality) that a business or consumer could reasonably expect to find in a competitive market is available (Areeda and Hovenkamp, Reference Areeda and Hovenkamp2023).Footnote 36 For example, in a competitive market a consumer would expect to find the total (and unit) price of a good or service displayed to aid in shopping comparisons. Yet, this is often hidden from view online. Airbnb, for example, only recently started showing total prices of accommodation (including cleaning fees) (Airbnb, 2022). So-called “drip pricing” (Fletcher, Reference Fletcher2012), only showing part of the price to the consumer initially, is widespread, but may be more common on platforms with greater market power.Footnote 37
In another example, companies such as Yelp and TripAdvisor have complained that Google’s Travel listings do not make the same efforts as these smaller platforms to filter out review spam—reflecting Google’s added market power (Hawkins, Reference Hawkins2018).Footnote 38 The information degraded could also be the quality of advertising results (and disclosures) that an advertiser could reasonably expect from the platform. The DoJ argued in its 2023 trial against Google that the company degraded ad quality in search results by making changes to its keyword matching. This reflects Google’s monopoly over “when and where” an ad appears (United States of America v. Google LLC, 2020, p. 6).
5. Examining whether ads have increased (and organic output declined) beyond the level reasonably required for the platform to earn a competitive return on capital invested. Estimating a reasonable level of return is always difficult, especially in a new industry where norms have not been widely established, and network effects create high levels of concentration. However, a historical view of the platform operating and financial metrics can provide some perspectives. For example, if profit per user (or per search, per session, or other relevant measure of service delivered) continues to increase once user growth has leveled off, without demonstrated reductions in costs or commensurate ecosystem benefits, this would suggest increased monetization above the level that was previously considered sufficient to deliver the service. This profit-driven monetization growth would need to be linked to the platform’s overall level of profit margins or profit growth for it to be found to be exploitative, or above normal.Footnote 39
Advertising needs to be looked at in combination with the other platform fees and prices a platform charges in order to assess its reasonableness. For example, Apple justifies the 30% commission it charges for purchases on its App Store by relating it to various investments and user benefits such as privacy and quality (Epic Games v. Apple Inc., 2021). Yet in addition, Apple charges app developers for visibility on its App Store by reserving prime display and result spots for advertising. These two fees combined—advertising and commission—are significantly above 30% (Kuriata, Reference Kuriata2022). Financial disclosures by the company, which put the two revenue sources in different categories, help obscure the total cost to app developers.
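Tests (1) and (2) above lend themselves to a simple audit over scraped result pages. The data structure, field names, and sample page below are hypothetical placeholders; a minimal sketch:

```python
# Minimal audit sketch for tests (1) and (2): count top screen slots where a
# sponsored listing sits above a product the platform's own organic ranking
# judges more relevant, and where an ad merely duplicates an organic listing
# already on the page. All field names and sample data are hypothetical.
from dataclasses import dataclass

@dataclass
class Listing:
    product_id: str
    sponsored: bool
    organic_rank: int   # the platform's own relevance ranking for this product

def rent_signals(page: list[Listing], top_n: int = 5) -> dict:
    top = page[:top_n]
    # Best (lowest) organic rank anywhere on the page.
    best_organic = min(listing.organic_rank for listing in page)
    # Sponsored listings in the top slots that outrank a more relevant product.
    displacing = sum(
        1 for l in top if l.sponsored and l.organic_rank > best_organic
    )
    # Sponsored listings that duplicate an organic listing on the same page.
    organic_ids = {l.product_id for l in page if not l.sponsored}
    duplicated = sum(
        1 for l in top if l.sponsored and l.product_id in organic_ids
    )
    return {"displacing_ads": displacing, "duplicate_ads": duplicated}

page = [
    Listing("A", sponsored=True, organic_rank=17),  # ad shown first
    Listing("B", sponsored=True, organic_rank=2),
    Listing("B", sponsored=False, organic_rank=2),  # same product, organic
    Listing("C", sponsored=False, organic_rank=1),  # most relevant, shown last
]
print(rent_signals(page))  # → {'displacing_ads': 2, 'duplicate_ads': 1}
```

Aggregated across many queries and over time, counts like these would provide the kind of consistent operating metric that the disclosure regime discussed below could mandate.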
Much of the data for these assessments can be found in an extensive literature from the Search Engine Optimization community, social media marketing consultants, e-commerce consultants, and the like, which provides some understanding of the ranking factors that guide algorithmic rankings and recommendations. The platforms themselves drop tantalizing clues in their announcements, blog posts, conference presentations, and annual shareholder letters (O’Reilly et al., Reference O’Reilly, Strauss and Mazzucato2023a). In addition, there are numerous one-time studies by academics, consultants, marketplace or advertising data firms, and occasionally, regulators. These studies usually consist of statistical analysis of a snapshot of web-scraped data at a specific time. For example, from a study of 1.4 billion searches by 28 million UK citizens (Goodwin, Reference Goodwin2012), we know that in 2011, 94% of Google clicks were organic and only 6% went to ads. But we have no idea what the ratio was in different countries, what that ratio is today, or how it changed in the intervening years as Google has updated its algorithms and screen designs.
While information from studies such as this is useful, it points at a gaping hole in the regulatory apparatus: the lack of regular, mandated disclosures by platform companies of the operating metrics that actually guide the design of their algorithms, measure their results, and ultimately control the monetization of user attention.
5. Some possible regulatory interventions
The theory of algorithmic rents, together with data from studies such as the one we have done, suggests some relatively straightforward regulatory interventions.
5.1. Regulations of algorithmic output and preferences
For search-based algorithmic systems, including App Stores and e-commerce, regulators could impose the following requirements:
1. A percentage of the screen positions receiving the top share of attention could be reserved for organic results.
2. When ads duplicate organic results and provide no additional information (as in the Amazon marketplace), regulations could require that the organic result appear first, in the position most likely to be clicked on.
3. When there is an exact match for a user query (as in a search for a product by brand name in the Amazon marketplace), the exact-match organic result must be the first result.
For feed-based algorithmic systems, such as TikTok, Twitter, and Instagram:
1. Platforms could be required to offer “sticky” preferences to users, rather than requiring them to repeatedly assert their wishes (e.g., in Instagram, it is possible for the user to choose to see posts only from those they follow or have marked as favorites, but the feed almost immediately reverts to the platform’s own algorithmic feed preferences). In general, when users opt out of a new behavior offered by the platforms, that preference should be persistent.
2. Attention-hijacking patterns such as autoplaying videos could require an opt-in expression of preference.
However, these interventions suffer from several flaws:
1. Platform behavior is a moving target, with new features, constant algorithm updates, and design changes.
2. Regulatory interventions are likely to be countered by the platforms, much as they counter attempts by users and marketplace participants to “game” their algorithms.
3. Over-specified regulations could inadvertently strangle legitimate innovations that would benefit not only the platforms but also their users and marketplace participants.
The more fundamental problem that regulators need to address is that the mechanisms by which platforms measure and manage user attention are poorly understood. Effective regulation depends on enhanced disclosures.
5.2. The need for regular, mandated disclosures of operating metrics
The platforms themselves collect numerous and detailed operating metrics to judge and manage the performance of their own systems for directing user attention. They know how many users they have, how much time those users spend on each of the platform’s services, how they are monetized, and the impact of new services and designs on their usage and monetization. They know the ad load. They know the ratio of organic clicks to ad clicks. They know how much traffic is sent on to outside sites, by market segment. They know the gross merchandise volume of an e-commerce marketplace or app store, and they know what percentage they collect in fees. And they know how each of these measurements compares to prior periods—and how those changes result from updates of their interface designs and algorithms—just as well as they know their personnel costs, capital equipment costs, revenue, and their profit. But only the financial metrics are reported regularly and consistently, and those financial reports are almost completely disconnected from the operating metrics that are used to actually manage so much of the business (O’Reilly et al., Reference O’Reilly, Strauss and Mazzucato2023a).
The lack of disclosure of operating metrics for the free side of internet aggregators is a gaping hole in the regulatory apparatus. Costs, revenue, profit, and other financial metrics may be sufficient to understand a business based on tangible inputs and outputs, but they are not fit for purpose for information businesses whose assets and activities are largely intangible and whose market power is exercised through the delivery of services that are free to consumers (Mazzucato et al., Reference Mazzucato, Strauss and Ryan-Collins2023).
Given the size of the major internet companies, these metrics need to be highly disaggregated, both on a geographical and product basis. Google parent Alphabet alone has more than nine free products, each with more than a billion users, yet it reports only one major business—“advertising.” The connection between its revenues and the underlying free products and services is completely opaque. Meta too discloses little disaggregated information about products such as Facebook, Instagram, WhatsApp, and Messenger, each with billions of users. This pattern is repeated across the industry.
Even the products that are directly monetized often are not required to be broken out in detail, due to outdated segment reporting rules. US securities regulations require companies to break out financial detail for any business operation (or “segment”) that represents more than 10% of revenue or profit, yet in practice, they allow for management discretion in what segments are reported. This allowed Amazon to hide the remarkable growth and profitability of its Amazon Web Services business for years, and allowed Apple to claim in response to a lawsuit from Epic Games that it did not actually know the profitability of its App Store (Mazzucato et al., Reference Mazzucato, Strauss and Ryan-Collins2023). But perhaps more importantly, it allows them to hide the workings of the free side of the products that underpin their enormous market power.
The most important first step is to understand how much is left out of the picture when market power is measured purely in financial terms. We live in an attention economy. A product like Google Maps, with over a billion users, has insignificant revenue relative to Alphabet’s total (perhaps $3 billion out of $289 billion), yet it is unquestionably the most powerful player in its competitive segment, offering free products that cap the growth and opportunities for smaller mapping companies like ESRI or Garmin with more traditional business models. Segment reporting rules should be triggered by the number of active users a product has, not just by its contribution to company revenue or profit.
5.3. Some recommended reportable operating metrics
Understanding what operating metrics need to be reported for free products and services is in its infancy. In contrast to the financial metrics required on the money side of businesses, which are rooted in systems of accounting dating back to the 13th century, the metrics that are used to govern algorithmic attention businesses are at best a few decades old. Nonetheless, we must make a start.
All metrics should be reported quarterly, with more detail annually, as part of the existing financial disclosures required of public companies. And because a historical view would be useful, to whatever extent possible, the introduction of such reporting should require a backward look for at least several years, and ideally much longer. As noted above, metrics should also be disaggregated by product, with reporting required for any product having more than 100 million monthly users. They should also be disaggregated by country, and by device type (desktop or mobile).
Here are some of the metrics that would be useful for examining attention rents monetized through advertising:
1. Ad load. Because not every page has the same number of ads—on Google, for example, many search engine results pages are noncommercial and carry no ads at all—ad load should be reported by decile, or some other framework that highlights the ad concentration on the most highly monetized pages.
2. Ratio of organic clicks to ad clicks. Again, by decile or other weighted format.
3. Average click-through rate of the first organic result. The proportion of users visiting a page who click on the first organic result.
4. Average click-through rate of the first ad. The proportion of users visiting a page who click on the first ad.
5. Amount of traffic sent on to third-party sites. This should be bucketed by market segment, such as news, entertainment, commerce, travel, local search, and so on.
6. Amount of traffic sent to the company’s own other products and services. This could be further detailed by traffic source. For example, it would be useful to know how many users come to Google Search from Chrome on Apple devices vs. Chrome on Android, vs. from other browsers such as Firefox.
7. Gross merchandise volume (for e-commerce platforms). Without this information, it is impossible to determine the percentage of all fees levied on third-party marketplace participants.
8. Gross fee revenue, including advertising from marketplace participants (for e-commerce platforms and app stores).
9. A monetization narrative that explains the relationship between the metrics describing the free side of the platform and its monetization on the platform’s other sides.
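The first four metrics above are mechanical to compute once per-page logs exist. The following sketch uses invented data; the record layout and the two-bucket split are assumptions for illustration (real reporting would use deciles over millions of pages):

```python
# Hypothetical illustration of metrics 1-4: ad load by bucket and the two
# click-through rates. All records are invented.
# Each record: (ads_on_page, visits, clicks_on_first_organic, clicks_on_first_ad)
pages = [
    (0, 1000, 420, 0),    # noncommercial page: no ads at all
    (2, 800, 300, 40),
    (4, 600, 150, 90),
    (8, 400, 60, 120),    # heavily monetized page
]

def ad_load_by_bucket(pages, n_buckets=2):
    """Mean ad count per bucket after sorting pages by ad load.
    (Deciles in practice; two buckets here to keep the example small.)"""
    ordered = sorted(p[0] for p in pages)
    size = len(ordered) // n_buckets
    return [sum(ordered[i * size:(i + 1) * size]) / size for i in range(n_buckets)]

def ctr_first_organic(pages):
    """Share of all page visits that click the first organic result."""
    return sum(p[2] for p in pages) / sum(p[1] for p in pages)

def ctr_first_ad(pages):
    """Share of all page visits that click the first ad."""
    return sum(p[3] for p in pages) / sum(p[1] for p in pages)

print(ad_load_by_bucket(pages))            # → [1.0, 6.0]
print(round(ctr_first_organic(pages), 3))  # → 0.332
print(round(ctr_first_ad(pages), 3))       # → 0.089
```

The bucketed view matters because an average over all pages would conceal exactly the concentration of ads on the most highly monetized pages that metric 1 is designed to reveal.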
Ideally, regulators, working with cooperative industry players, would define reportable metrics based on those that are actually used by the platforms themselves to manage search, social media, e-commerce, and other algorithmic relevancy and recommendation engines. These metrics should then be standardized and required. There may be some metrics that can legitimately be considered trade secrets, but there are many that are common to most if not all internet businesses of the same type.
Note also that the operating metrics of big tech players are a moving target, constantly updated as the platforms continue to innovate. So this is also an opportunity to update the standards-setting process by which required reporting metrics are defined, mandating updated and timely reporting of any meaningful change in operating metrics.
Platforms will claim, with some justice, that disclosure will harm their businesses, as it will allow third parties to game their systems more easily. But this is akin to the old approach to cybersecurity, of “security through obscurity.” We have learned that it is far better to find and fix vulnerabilities than to hide them.
This is particularly important as we enter the age of large language models (LLMs) and generative AI, the next generation of attention management machines foretold by Herbert Simon. You cannot regulate what you do not understand (O’Reilly, Reference O’Reilly2023). It is not enough to rely on the assurances of powerful players that they are doing their best. Regular, reportable metrics will allow investors, the public, regulators, and the platforms themselves to better understand and operate truly free markets, which, as classical economists such as Adam Smith and David Ricardo believed, are not markets free of government intervention, but markets free of rents (Mazzucato, Reference Mazzucato2018).
6. Conclusion: AI and attention
While this article focuses on the present state of algorithmic attention rents, the institutional context of human decision-making online continues to change. Today’s LLMs do not provide ranked choices but answers. They do not (at present) send traffic or other compensation to third-party content, sites, or apps; nor do they depend on clicks and views. They do not (yet) have an advertising-based business model. Yet they are quite consistent with the thrust of this article, since they depend on users accepting the increasing penetration of algorithmic authority (Choudhury and Shamszare, Reference Choudhury and Shamszare2023), not just into everyday decision-making but, increasingly, into everyday thinking.
Attention and cognition conserving heuristics help explain much about the present trajectory of today’s LLMs and other “frontier” AI systems. Newell and Simon (Reference Newell and Simon1975) viewed both human and artificial intelligence as inextricably bound up with the use of context and selectivity to create more efficient, guided, approximate solutions—for humans and machines alike.Footnote 40
As platforms evolve, AI may come to guide and replace not only human decision-making but human thinking (cognition). LLMs make use of existing information by creating new or summarized outputs from that information, rather than by ranking it. As AI improves, those outputs may become increasingly original, perhaps transcending their inputs and creating new knowledge and new possibilities. But even today:
1. The model can save users enormous cognitive costs and time. It demands little by way of users’ attention and so improves upon the net time saving provided to users by existing productivity platforms and services. But at the same time, it requires even greater trust by users in the reliability and fairness of the model and of the platform providing it.
2. The model produces a probabilistic service for its users. It does so by consuming information inputs in a production process (“training”) to produce outputs that may vary each time. In the same way, an orchestra trains to gain skills, and then each performance may vary depending on the occasion. This makes it even harder to measure bad behavior from the outside, making the need for preemptive disclosures even more urgent.
3. While the first generation of LLMs were not updated in real time through interaction with users, suppliers, and advertisers, that is coming. The early instantiations of today’s web applications were also relatively static captures of a moment in time, and became rapidly updated only as the technology progressed. But even today, these models depend on content created by humans—the vast corpus of human knowledge and creativity on which they have been trained.
Despite, and perhaps because of, all these differences between current search, recommendation, and feed algorithms and LLMs, the need for disclosure is paramount. Like their predecessors, these LLM systems internalize and centralize a vast marketplace of human knowledge and experience. As presently implemented, AI systems pass through neither attention nor remuneration to the providers of content used to train the model. As in present systems, human inputs are regarded as raw materials that can be appropriated by the developer of the system. If history is any guide, control over these raw materials by frontier AI platforms will eventually lead to the quest for a business model that allows for the extraction of monopoly rents.
Looking back at what we know now about present platforms, we can only wish there had been a disclosure regime that would have shown us the state of these systems when their creators were focused on serving their users and other ecosystem partners, and thus told us when and how they began to turn from that path to extract self-serving economic rents. Much like their predecessors, these frontier AI systems are managed by metrics whose details are known only to their creators and disclosed to the outside world only via generalities and sporadic, often self-serving data points. The time to establish rules for disclosure of operating metrics for frontier AI systems is now (O’Reilly, Reference O’Reilly2023).
We are still in the early stages of the attention economy, and innovation should be allowed to flourish. But this places an even greater emphasis on the need for transparency, and the establishment of baseline reporting frameworks that will allow regulators to measure whether attention allocation systems, including frontier AI systems, are getting better or worse over time. Greater public visibility into the operation of these platforms can, in conjunction with more informed policy making, lead to better behavior on the part of those who own and manage these systems, more balanced ecosystems of value creation, and the optimal use of knowledge in society.
Data availability statement
All secondary data used in this article have been cited and are available on request from the respective authors. Contact [email protected] for data cited in Rock et al. (Reference Rock, Strauss, O’Reilly and Mazzucato2023).
Acknowledgments
The authors acknowledge the helpful comments from Jennifer Pahlka, Bill Janeway, Greg Linden, Betsy Masiello, and Derek Slater. All remaining errors are our own.
Author contribution
Writing – original draft: M.M., T.O., I.S.; Writing – review & editing: M.M., T.O., I.S.
Funding statement
This research was supported by the Omidyar Network.
Competing interest
No competing interests exist which have influenced in any undue manner the views advanced in this article.
Provenance statement
This article was published as a working paper (O’Reilly et al., Reference O’Reilly, Strauss and Mazzucato2023b).