
Part III - Platform Governance

Published online by Cambridge University Press:  16 May 2024

Kyle Langvardt
Affiliation:
University of Nebraska, Lincoln
Justin (Gus) Hurwitz
Affiliation:
University of Pennsylvania Law School

Publisher: Cambridge University Press
Print publication year: 2024
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (CC BY-NC-ND 4.0) https://creativecommons.org/cclicenses/

11 Introduction: Platform Governance

Kyle Langvardt

The term “content moderation,” a holdover from the days of small bulletin-board discussion groups, is quite a bland way to describe an immensely powerful and consequential aspect of social governance. Today’s largest platforms make judgments on millions of pieces of content a day, with world-shaping consequences. And in the United States, they do so mostly unconstrained by legal requirements. One senses that “content moderation” – the preferred term in industry and in the policy community – is something of a euphemism for content regulation, a way to cope with the unease that attends the knowledge (1) that so much unchecked power has been vested in so few hands and (2) that the alternatives to this arrangement are so hard to glimpse.

Some kind of content moderation, after all, is necessary for a speech platform to function at all. Gus Hurwitz’s “Noisy Speech Externalities” (Chapter 12) makes this high-level point from the mathematical perspective of information theory. For Professor Hurwitz, content moderation is not merely about cleaning up harmful content. Instead, content moderation becomes most important as communications channels approach saturation with so much content that users cannot pick out the signal from the noise. In making this particular case for content moderation, Professor Hurwitz offers a striking inversion of the traditional First Amendment wisdom that the cure for bad speech is more speech. When speech is cheap and bandwidth is scarce, any incremental speech may create negative externalities. As such, he writes, “the only solution to bad speech may be less speech – encouraging more speech may actually be detrimental to our speech values.” Professor Hurwitz therefore suggests that policymakers might best advance the marketplace of ideas by encouraging platforms to “use best available content moderation technologies as suitable for their scale.”

Laura Edelson’s “Content Moderation in Practice” (Chapter 13) provides some detail on what these technologies might look like. Through a survey of the mechanics of content moderation at today’s largest platforms – Facebook, YouTube, TikTok, Reddit, and Zoom – Dr. Edelson demonstrates that the range of existing techniques for moderating content is remarkably diverse and complex. “Profound differences in content moderation policy, rules for enforcement, and enforcement practices” produce similarly deep differences in the user experience from platform to platform. Yet all these platforms, through their own mechanisms, take a hardline approach toward content that is “simply illegal” or that otherwise contravenes some strong social expectation.

In Chapter 14, “The Reverse Spider-Man Principle: With Great Responsibility Comes Great Power,” Eugene Volokh examines the hazards that arise when private go-betweens assume the responsibility of meeting public expectations for content regulation. As companies develop technical capabilities that insinuate them more deeply into human decision-making and interaction, there is a natural temptation to require them to use their powers for harm prevention. But as seen in the case of online platforms, these interventions can create discomfiting governance dynamics where entities micromanage private life without clear guardrails or a public mandate. Volokh argues that courts do grasp this Reverse Spider-Man Principle at some level, and that they have worked to avoid its dangers in diverse settings. Tort law, for example, does not generally hold landlords responsible for screening out allegedly criminal tenants, even if such screening might help protect other tenants from violent crime. If it were otherwise, then the law would appoint landlords as narcotics officers, with likely disastrous consequences for individual liberty.

Alan Z. Rozenshtein’s “Moderating the Fediverse: Content Moderation on Distributed Social Media” (Chapter 15) points toward an alternative social media architecture that would address the Reverse Spider-Man problem by dialing down the reach, responsibility, and power of any one community of moderators. This “Fediverse” does not rotate around any single intermediary in the way that today’s mainstream social media architecture does. Instead, the Fediverse is held together by a common protocol, ActivityPub, that allows any user to found and operate their own “instance.” In the case of Mastodon, the Fediverse’s most popular social media platform, each instance works a bit like a miniature X platform with its own content policies and membership criteria. Groups of instances, in turn, can enter into federative agreements with each other: Instance A may allow its users to see content posted in instance B, but not content posted in instance C.

This architecture ensures that no one group of moderators has the scale – or the responsibility, or the power – to set content rules that control the shape of public discourse. But achieving this result would require great effort in the form of a distributed, almost Jeffersonian moderation culture in which a much larger group of users participates intimately in content decisions. Moreover, it is unclear that the Fediverse lends itself to ad-based monetization in the same way that platformed social media does. The seemingly natural behavioral and economic inclination toward market concentration and walled gardens indicates that public policy will have to play some role in encouraging the Fediverse to flourish. Professor Rozenshtein’s chapter offers some suggestions.

12 Noisy Speech Externalities

Gus Hurwitz
12.1 Introduction

A central tenet of contemporary First Amendment law is the metaphor of the marketplace of ideas – that the solution to bad speech is more, better speech.Footnote 1 This basic idea is well established in both judicial and scholarly writing – but it is not without its critics. My contribution to this volume adds a new criticism of the marketplace-of-ideas metaphor. I argue that there are circumstances in which listeners cannot distinguish ostensibly “good” speech from bad speech – indeed, circumstances in which any incremental speech can make other good speech indistinguishable from bad speech. In such cases, seemingly “good” speech has the effect of “bad” speech. I call this process, by which ostensibly good speech turns the effects of other speech bad, a “noisy speech externality.”

This thesis has important implications. First, it offers a pointed critique of the marketplace-of-ideas aphorism introduced by Justice Holmes in his Abrams dissent.Footnote 2 If the marketplace of ideas is subject to significant market failure, correctives may be justified. Market failures, after all, are a standard justification for regulatory intervention. But, second, my contribution goes a step further, suggesting not only that there are circumstances in which good speech may fail as a corrective to bad speech but also that there are circumstances in which the addition of seemingly good speech may only yield more bad speech. In such cases, the only solution to bad speech may be less speech – encouraging more speech may actually be detrimental to our speech values. If that is the case, then correctives may be not only justified but needed to satisfy an important societal interest. And, third, this chapter presents content-neutral ways in which such correctives might be implemented.

The insight underlying this thesis builds on my prior work applying Claude Shannon’s information theory to social media.Footnote 3 That piece argued that, at least at a metaphorical level and potentially at a cognitive level, our capacity to communicate is governed by Shannon’s channel-capacity theorem.Footnote 4 This theorem tells us that the capacity of a communications channel is limited by that channel’s signal-to-noise ratio. Critically, once that capacity is exceeded, any additional signal is indistinguishable from noise – and this has the effect of worsening the signal-to-noise ratio, further reducing the communications capacity. In other words, after a certain threshold, additional speech is not merely ineffective: It creates a negative externality that interferes with other speech.

Other scholars have made similar arguments, which can loosely be framed as exploring the effects of “too much information” or “information overload.”Footnote 5 But the negative-externality element of my argument goes a step further. A “too much information” argument suggests that listeners are overwhelmed by the quantity of speech to which they are subjected. The externality argument suggests that speakers can – deliberately or otherwise – exercise a veto over other speakers by saturating listeners’ information sources. For the listener, it is not merely a question of filtering out the good information from the bad (the signal from the noise): At the point of saturation, signal cannot be differentiated from noise, and any filtering must necessarily occur upstream from the listener.

Filtering – reducing the overall amount of speech – has always been a key tool in fighting bad speech. All platforms must filter. Indeed, this is nothing new: Editorial processes have always been valuable to listeners. The question is how platforms filter, along with the related question of how the law treats that filtering. Under the current approach (facilitated through Section 230 of the Communications Decency Act and built upon First Amendment principles), platforms have substantial discretion over what speech they host. This chapter’s normative contribution is to argue that Section 230’s liability shield should be contingent upon platforms using “reasonable best-available technology” to filter speech – a standard that most platforms, this chapter also argues, likely already meet.

The discussion in this chapter proceeds in four sections. It begins in Section 12.2 by introducing technical concepts from the field of information theory – most notably the ideas of channel capacity and the role of the signal-to-noise ratio in defining a channel’s capacity. Section 12.3 then introduces the traditional “marketplace-of-ideas” understanding of the First Amendment and builds on lessons from information theory to argue that this “marketplace” may in some cases be subject to negative externalities – noisy speech externalities – and that such externalities may justify some forms of corrective regulation. It also considers other arguments of a similar flavor (“too much information,” “information overload,” and “listeners’ rights”), and explains how the negative-externalities consequence of exceeding a channel’s carrying capacity presents an even greater concern than those ideas advance. Sections 12.4 and 12.5 then explore the First Amendment and regulatory responses to these concerns, arguing that the negative-externalities concern might justify a limited regulatory response. In particular, Section 12.5 argues that platforms can reasonably be expected to implement “reasonable best-available technologies” to address noisy speech externalities.

12.2 Information Theory, Channel Capacity, and the Signal-to-Noise Ratio

Initially developed by Claude Shannon at AT&T Bell Labs in the 1940s to study how, and how much, information could be transmitted over the communications channels making up the telephone network, information theory studies how we encode and transmit usable information over communications channels.Footnote 6 While mathematical and abstract in its characterization of information and communication, it is quite literally at the foundation of all modern communications networks.Footnote 7

To understand the questions that information theory answers, we can start with a counterfactual. Imagine a perfect, noiseless communications medium being used by two people to share meaningful information – say, a professor wants to transmit a 90,000-word “article” to a journal editor. Assuming no limits on the part of the two communicating individuals, how quickly can one transmit that information to the other? We can ask the same question in a slightly different way: Because we assume the speakers do not impose any constraints (i.e., we assume that each can speak or listen at any speed), we want to know how much information the communications medium can carry per unit of time – its “channel capacity.”Footnote 8

To answer this to a first approximation, one could imagine that a professor could read the article aloud in three hours. So, the channel capacity is at least 30,000 words per hour. But we have assumed that the communicating individuals aren’t the constraint. So, in principle, the professor could read faster, and the editor could transcribe faster – say, 90,000 words per hour, or even 180,000 words per hour, or even 180,000,000 words per hour. In the limit case, because we have assumed that the endpoints (the speaker and listener) do not impose any constraints and that the communications channel is a perfect, noiseless medium, the professor and the editor could communicate instantaneously.

This of course is not the case. But it illustrates two distinct limits we need to be aware of: the ability of the endpoints (speaker and listener) to encode and decode information at a given speed, and the ability of the communications channel to transmit, or carry, that information at a given speed.

Shannon studied both the encoding and carrying questions. We are focused on the carrying question, which for Shannon boiled down to two factors: the strength of the information-carrying signal and the amount of background noise. Taken together, these define the signal-to-noise ratio of the communications channel (mathematically, signal divided by noise, or signal/noise). Increasing this ratio increases the channel capacity. This means that you can increase the channel capacity either by increasing the signal strength or by decreasing the noisiness of the channel.
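To make the dependence on the signal-to-noise ratio concrete, the channel-capacity theorem is conventionally stated in its Shannon–Hartley form (a standard textbook formulation, added here for reference):

$$ C = B \log_2\!\left(1 + \frac{S}{N}\right) $$

where $C$ is the channel capacity in bits per second, $B$ is the channel’s bandwidth, and $S/N$ is the ratio of signal power to noise power. Raising the signal power or lowering the noise both increase capacity, but only logarithmically, which is why a badly degraded signal-to-noise ratio cannot simply be shouted over.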

This should make intuitive sense to anyone who has ever had a conversation in a noisy room. It is hard to have a conversation at a loud party – you generally need to speak more slowly and more loudly to be heard clearly. You speak more slowly because the carrying capacity of the room (qua communications channel) is reduced by the noise; you speak more loudly to increase the strength of your communications signal.

The example of the noisy room also demonstrates three key takeaways from information theory. First, consider the source of the “noise” in the room: mostly other people having their own conversations, or perhaps music playing in the background. This “noise” is not necessarily meaningless. Noise is not merely static or unintelligible sound (though static would be noise). Rather, noise is any signal that is not carrying meaningful information for the recipient. The recipient needs to expend mental energy trying to differentiate signal from noise (if there is enough signal available to reconstruct the intended communication), which slows down her ability to receive information.

Second, all noise is, therefore, reciprocal. Your conversation is noise to everyone else in the room! This means that when you speak more loudly so that your interlocutor can make out what you are saying, you are increasing the amount of noise that everyone else in the room must deal with – you are worsening their signal-to-noise ratio.

Something similar happens when you exceed the carrying capacity of a given communications channel, even if there is no one using that channel other than the speaker and listener. If you think about having a conversation on a phone line with a lot of noise in the background, there is a maximum speed at which you can talk and be understood. What happens when you speak faster than this? The sounds you make become unintelligible – they become noise. This is another lesson quantified by Shannon: When a channel’s carrying capacity is exceeded, any additional information put onto that channel is interpreted not as signal but as noise.Footnote 9

This last observation illustrates a third key takeaway: Signal and noise are interpreted by, and at, the receiver. There could be a thousand conversations going on in the room. Only those that reach the given individual’s ears contribute to the signal and noise she must decipher. Similarly, in the online context, the signal-to-noise ratio is a function only of the message that an individual receives, not of the universe of messages that a platform carries. If a platform that carries a billion messages each day only delivers the relevant, meaningful ones to its users, it will have a very high signal-to-noise ratio; if another platform carries tens or hundreds of messages each day but delivers them all to a user regardless of their relevance, requiring her to sift through all of the messages in order to find those of relevance to her, that platform will have a relatively low signal-to-noise ratio.
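A toy calculation illustrates the receiver-centric point. The sketch below uses invented numbers and a deliberately crude definition of “signal” (delivered messages the user actually wanted) and “noise” (delivered messages she did not); it illustrates the concept and is not a real platform metric.

```python
def delivered_snr(wanted_delivered: int, unwanted_delivered: int) -> float:
    """Signal-to-noise ratio as seen by one user, based only on what reaches her."""
    return wanted_delivered / unwanted_delivered

# Platform A carries a billion messages a day but delivers only 50 to this user,
# 45 of which she wanted; Platform B carries 200 messages and delivers all of
# them, of which she wanted only 20. (All numbers are hypothetical.)
print(delivered_snr(45, 5))     # 9.0   -> high SNR despite enormous total volume
print(delivered_snr(20, 180))   # ~0.11 -> low SNR despite tiny total volume
```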

12.3 Externalities and Speech Regulation

The discussion above tells us something additional about shared communications channels: At a certain point, information added to a communications channel creates a negative externality, reducing the capacity of that channel for everyone using it. But additional foundation is needed before we can look at why this carries an important lesson for how we think about speech regulation. The discussion below revisits the metaphor of the marketplace of ideas, looks at other scholars who have considered the challenges of limited communications capacity, and then introduces the idea that the negative externality created when a channel’s carrying capacity is exceeded can justify regulation. It concludes by discussing some examples to illustrate these concerns.

12.3.1 Recapitulating the Marketplace of Ideas

Justice Holmes’s dissent in Abrams v. United States introduced one of the most enduring metaphors of American law: the marketplace of ideas.Footnote 10 The concept of the marketplace of ideas is more intuitive than it is appealing: Just as better products (in terms of either price or quality) brought to market will sell better than inferior ones, so too will better ideas curry more favor with the public than lesser ones. And, in dynamic terms, just as overpriced or low-quality products will encourage new entrants into the market, lesser ideas will create an opportunity for better ideas to prevail.

This metaphor does important work toward vindicating the First Amendment’s protection of individuals’ speech against government interference – indeed, this is its true appeal, rather than the idea that speech will work as a marketplace. It promises that there is a mechanism to arbitrate between competing speech in place of the government. Even where there may be some social need for speech to be moderated – a need that might otherwise create demand for government intervention – state actors can take a step back and rely on this alternative mechanism to moderate in their stead.

The marketplace-of-ideas metaphor has monopolized understandings of the First Amendment’s protection of speech for the past century. While one can, and many do, debate its propriety and fidelity to the Amendment, I will posit that it was fit to task for most of this era. The reason is that the listener-to-speaker ratio was relatively high. This was an era of rapidly changing technologies in which innovation regularly brought new entrants and new media into the marketplace, but the high capital costs of those technologies inhibited entry, largely limiting it to those with the resources and ability to engage sincerely as participants in this marketplace. A relative few broadcast platforms competed for market share based on the quality of their reporting, and a number of local media outlets pruned these broadcasters’ speech even further as a means to reach local communities. And throughout much of the twentieth century, where media failed a community’s needs, entry was possible and often occurred.

12.3.2 Other Characterizations of Speech Regulation

As we entered the modern era of communications – with the widespread adoption of cable television and explosive growth of talk radio in the 1980s and the rapid digitalization of consumer-focused communications in the early 1990sFootnote 11 – increasing attention was paid to the idea of “too much information.”Footnote 12 Indeed, television had famously been described as a “vast wasteland” as early as 1961.Footnote 13 By the early 1990s, the number of channels that cable systems could carry exceeded the number of channels of content being produced, satellite systems that could carry several times that many channels were being developed, and the internet had come to the attention of sophisticated commentators.

Around the turn of the century, for instance, Cass Sunstein and Richard Posner both considered how the changing media landscape might affect our understanding of media regulation.Footnote 14 Sunstein juxtaposed the marketplace-of-ideas approach to free speech with a Madisonian perspective, under which the purpose of the First Amendment is not merely to protect private speakers from government intrusion into their speech, but also affirmatively to promote and facilitate deliberative democracy.Footnote 15 Under the marketplace model, regulatory intervention had generally only been understood as appropriate in the face of scarcity – a lack of sufficient communications channels that prevented competition within the marketplace. As newer technologies increased the capacity of communications channels, and decreased the cost of deploying new ones, this rationale for regulating speech diminished. But, Sunstein argued, the Madisonian perspective suggested that regulation might nonetheless be appropriate if the new, emerging marketplace of ideas was not conducive to a functioning deliberative democracy.Footnote 16

A decade later, Richard Posner considered many of the same issues that result from the decreasing costs of entering the market and sharing information in the information ecosystem.Footnote 17 He presented a different perspective from Sunstein’s, however, arguing that most consumers of information had always primarily wanted entertainment – not dry information – and that increased competition in the marketplace was catering to this interest.Footnote 18 In typical contrarian Posnerian fashion, he argued this was possibly not a bad thing: Just as increased competition in the marketplace catered to those citizens who were more interested in entertainment than in information, it would also better cater to those citizens who were more interested in information.Footnote 19

More recently, there have been arguments about “too much information” and “information overload.”Footnote 20 The general theme of these arguments is apparent on its face: Consumers of information face a glut of information that overwhelms their ability to process it all. A generation or two ago, there were relatively few sources of information. Consumers could reasonably assume that these sources had gone through some kind of vetting process and were therefore basically trustworthy. Indeed, should she so desire, an interested consumer could at least somewhat meaningfully undertake to investigate the quality of those competing information sources. The “too much information” argument says that neither of these is as possible today, if it is possible at all, as it was in prior generations – that the sheer quantity of information we encounter on a day-to-day basis undermines media sources’ authority and interferes with listeners’ purposes. And this resonates with the “marketplace-of-ideas” frame as well, for markets are driven, in part, by the consumer’s ability to make informed choices – if that is not possible, the marketplace may not work.

There is another, more recent, argument that, again, challenges the marketplace orthodoxy: listeners’ rights.Footnote 21 The listeners’-rights idea echoes the Madisonian (i.e., democracy-oriented) perspective on the First Amendment, though it may or may not be aligned with the marketplace concept. Under this view, the purpose of the First Amendment is not merely to ensure individuals’ unfettered ability to speak without government interference, but also to ensure that individuals have access to (viz., the opportunity to listen to) information without undue government interference. Thus, if listeners want certain types of information but speakers interfere with their ability to obtain that information, the government may have some role in mediating that conflict and, when it does so, it should give preference to the listeners’ choices about what information they want to receive over the speakers’ efforts to influence those listeners.

12.3.3 An Externalities Argument for Speech Regulation

We can now return to the ideas introduced with information theory. The discussion in Section 12.2 concluded with the idea that any additional speech added to a saturated communications channel is interpreted as noise, not signal, by all parties to that communications channel. This has the effect of worsening the signal-to-noise ratio, which reduces the overall channel capacity for everyone using that communications channel. In effect, this combines both the “too much information” construct and the listeners’-rights understanding of the First Amendment.

More important, it introduces a fundamentally different justification for (and, as will be discussed in Sections 12.4 and 12.5, a different approach to) speech regulation: externalities. Both the “too much information” and listeners’-rights perspectives present an information-asymmetry rationale for regulating the marketplace of ideas. Information asymmetries are a traditional justification for intervening to regulate a market: When one side of the market systematically has better information than the other, we might regulate to prevent harmful exploitation of that information.Footnote 22 For instance, we may require nutrition or energy-usage labels on products where consumers are not in a position to ascertain that information on their own. So too one could imagine requiring disclosures about the sources of information that speakers communicate to listeners, either as a way of helping consumers to meaningfully make use of the glut of information communicated to them or as a way of vindicating their rights to receive meaningful information as balanced against speakers’ rights to share information.Footnote 23

Externalities are another traditional justification for regulating markets.Footnote 24 Externalities occur where one party’s private conduct has impacts on one or more third parties. Those impacts are “external” to the primary private conduct – as such, parties engaging in that conduct have little incentive to take them into account. Perhaps the most standard example of an externality is pollution: If I burn coal to generate electricity and no one has told me that I cannot put smoke into the air, I will not factor the environmental, health, or other costs of that pollution into my prices. The same can be said of many other types of activity (sometimes even individual conduct), and the resulting externalities can be positive or negative. A neighborhood in which many people have dogs that they need to take on walks regularly (a private activity) may not be as welcoming to individuals who are scared of dogs (a negative externality) and also may have less crime (a positive externality).

Importantly, because the impacts of negative externalities are usually dispersed among many people and are difficult to measure except in aggregate, it may not be possible for injured parties to bring a lawsuit to recover for the injuries, either practically or as a matter of law. Lawmakers therefore might step in to address externalities, such as by prohibiting the underlying private conduct, requiring the parties to it to take care to prevent the externalities, or imposing taxes or fees on those parties that can be used to compensate injured third parties.

Additional speech – even ostensibly productive speech – added to a saturated communications channel has the characteristics of a negative externality. Because the carrying capacity of the channel is already saturated, the additional speech is interpreted by all who hear it as noise. This worsens the signal-to-noise ratio, further decreasing the carrying capacity of the channel. In a very real sense, noise is like air pollution: Just as pollution reduces the usability of air for all who breathe it, noise reduces the usability of a communications channel for all who communicate over it. Thinking back to the example of the loud room at a party, it is intuitive that if someone walks into a room in which several people are having conversations and turns up the volume on a stereo, this act will negatively impact the ability of all those in the room to continue their conversations.

12.3.4 Some Examples of Noisy Speech Externalities

Pointing to concrete examples of noisy speech externalities is challenging because the concept itself is somewhat abstract, and because the impacts may not be readily identifiable as discrete events.

The example of the noisy room presents a case study: At some point, a quiet room becomes too noisy to comfortably have a conversation; at a further point, it becomes impractical to have a conversation; at a further point, it becomes impossible to have a conversation. This transition charts the increasing harms that stem from noisy speech externalities. These harms most clearly need remedy when conversation becomes impossible. But in practice, the forum is likely to be abandoned by most participants before that point.

Useful forums have to solve the signal-to-noise problem somehow, then, and they differentiate themselves by addressing the problem in different ways. Newspapers are as much a filter of information as a source of information; so are television and radio stations. Bookstores sort their books into sections and by topics; publishers select books for publication, perhaps filtering by subject, genre, or audience, and ensuring a quality threshold. Noisy forums are not usually sought after as platforms for information sharing. One would not ordinarily negotiate a contract at a rock concert or debate politics with strangers in a busy subway station.

Social media presents the clearest setting for examples of noisy speech externalities – and mis- and disinformation are likely the clearest examples. The very promise of social-media platforms is that they allow users to communicate directly with one another – their defining feature is that they do not filter or select the information shared between users. And they also lack the traditional indicia of being too noisy to serve as a forum for information to be exchanged. A rock concert is a poor forum because all of the sound is heard at once – attendees attempting to have a conversation necessarily hear the loud music at the same time they are trying to hear their interlocutor – making it difficult to differentiate signal from noise. But in the social-media setting, content is presented in individual pieces, creating a perception for users that they have the ability to meaningfully engage with it.

We see the effects of noisy speech externalities, both intentionally and unintentionally created, with dis- and misinformation. For instance, there is some agreement that Russia weaponized disinformation around the 2016 U.S. election.Footnote 25 Commentators such as Bruce Schneier have argued that the purpose of Russian disinformation campaigns has been less to influence specific outcomes than to attack the American information ecosystem.Footnote 26 An adversary can win just as much by attacking our ability to separate fact from fiction as by convincing us to accept falsity as truth. But we do not need to turn to deliberate efforts to cause harm for examples. The inventor of X’s “retweet” button, for instance, has described the retweet feature as “hand[ing] a 4-year-old a loaded weapon.”Footnote 27 Referring to one example, he said: “Ask any of the people who were targets at that time, retweeting helped them get a false picture of a person out there faster than they could respond. We did not build a defense for that. We only built an offensive conduit.”Footnote 28

It is important to note here – and to draw attention to a theme to which I will return in Section 12.5 – that the point of these examples is not to demonstrate that X (or any other particular platform) ought to be regulated. To the contrary, platforms like X have long tried to develop new features to address concerns like these. Even the retweet feature was initially envisaged as a way to improve the signal-to-noise ratio (the theory being that making it easier to quickly share good information would help users get more of the information they wanted). Platforms like Reddit have comprehensive user-based moderation technologies and norms.Footnote 29 Facebook and X have invested substantially in addressing mis- and disinformation. The argument I make in Section 12.5 is that it may be reasonable and permissible for Congress to require firms to engage in such efforts, and that efforts such as these should satisfy any regulatory obligations.

12.4 Preliminaries of Addressing Noisy Speech Externalities

Whether there is a need, and a legally justifiable basis, for regulating speech in the digital era has prompted substantial debate in recent years. Arguments against such regulations most often sound, on both fronts, in the metaphor of the marketplace of ideas: The ostensible problem we face today is that the cost of speech is too low and the barriers to expression are too few, both of which would typically support the functioning of a robust marketplace.Footnote 30 The discussion below argues that concern about negative externalities, derived from an information-theory-based understanding of channel capacity, is sufficient to justify regulation and to overcome fundamental legal obstacles (i.e., First Amendment concerns) to it. It then considers what such regulation could look like, drawing from other settings where the law addresses externalities.

12.4.1 Speech Regulation and the First Amendment

The first question is simply, as a matter of law, whether noisy speech externalities provide a cognizable legal basis for speech regulation – that is, whether such regulation could survive First Amendment scrutiny. As above, debates about such speech regulation are often framed in terms of Madisonian vs Holmesian principles, whether the purpose of the First Amendment is to support robust democratic engagement or to foster a robust marketplace of ideas. But it may be useful to anchor the discussion in more doctrinal terms: Would regulation intended to address noisy speech externalities survive First Amendment scrutiny under existing doctrine?

There are two distinct lines of cases most relevant to this question: broadcast-regulation cases (e.g., Red Lion, Pacifica, Turner)Footnote 31 and the noise-regulation cases (such as Ward v. Rock Against Racism).Footnote 32 The broadcast cases are the foundation for media regulation in the United States – and, as foundations go, they are notably weak. They derive from midcentury understandings of spectrum and of the technological ability to use spectrum as a broadcast medium. The central concept of these cases is scarcity.Footnote 33 Because spectrum was a scarce resource that supported only relatively few television or radio broadcast stations in any geographic area, there was sufficient justification for the government to regulate who had access to that spectrum in order to ensure “the right of the public to receive suitable access to social, political, esthetic, moral and other ideas and experiences.”Footnote 34 These cases expressly discuss this right in terms of the marketplace of ideas – importantly, scarcity is one of the most traditional justifications for regulatory intervention in the operation of markets – though it is arguably the case that this “right” is compatible with the Madisonian view of the First Amendment.

Ward v. Rock Against Racism is best known for its treatment of content-neutral “time, place, and manner” restrictions on speech and its clarification that such regulations need only be narrowly tailored to serve a significant government interest, not be the least restrictive means of doing so.Footnote 35 Curiously, Ward v. Rock itself dealt with noise regulations that limited the volume of music at a concert in New York’s Central Park – though the Court’s concern about “noise” in that case is understood in the colloquial sense of “disturbing or distracting sound” rather than in information theory’s more technical sense. But even without resorting to information theory and the concern that noise reduces a channel’s information-carrying capacity, this type of noise is a classic example of a negative externality.Footnote 36

Where a communications channel is at its carrying capacity, we may see both scarcity and negative externalities coming into play. Scarcity does not necessarily implicate externalities, because the lack of options may not affect third parties. And externalities alone do not necessarily meaningfully implicate scarcity because they can adversely affect every option that a consumer may reasonably have. But if a communications channel reaches the point of saturation, that suggests that its users do not have a robust set of alternative channels available to them (as otherwise they would switch to a less congested channel) and so face scarcity, and that any additional information added to that channel worsens the signal-to-noise ratio for all users, creating a negative externality.

It is therefore likely, or at least plausible, that the government would have a substantial interest in narrowly tailored regulations intended to lessen these impacts on a content-neutral basis.

12.4.2 Technical Responses to a Poor Signal-to-Noise Ratio

Mathematically, the way to address a poor signal-to-noise ratio is to increase the strength of the signal or decrease the amount of noise in the channel. This guides technical approaches to the problem. In settings where the communications channel’s capacity is not exceeded, the signal strength can be directly increased (akin to speaking more loudly). Alternatively, filters can be added to reduce noise, either at the transmitter or at the receiver. Importantly, filters can reject both true “noise” (e.g., background static) and “unwanted signal.” For instance, a radio receiver might use a filter to reject signal from adjacent radio stations (e.g., a radio tuned to FM station 101.3 might filter out signal from stations 101.1 and 101.5).
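As a rough numerical illustration of receiver-side filtering, the sketch below buries a slow sine-wave “signal” in broadband noise and applies a simple moving-average filter. The signal, the noise level, and the filter are all my own choices for illustration, not anything specific to the technologies discussed in this chapter.

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 8000, endpoint=False)
signal = np.sin(2 * np.pi * 5 * t)                       # slow "wanted" signal (5 Hz)
received = signal + rng.normal(scale=1.0, size=t.size)   # signal plus broadband noise

def snr_db(sig, residual):
    """Signal-to-noise ratio in decibels: 10 * log10(P_signal / P_residual)."""
    return 10 * np.log10(np.mean(sig**2) / np.mean(residual**2))

# A moving-average filter smooths out fast-varying noise while largely
# preserving the slow signal -- a crude receiver-side noise filter.
window = 101
filtered = np.convolve(received, np.ones(window) / window, mode="same")

print(f"SNR before filtering: {snr_db(signal, received - signal):.1f} dB")
print(f"SNR after filtering:  {snr_db(signal, filtered - signal):.1f} dB")
```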

In more complex settings, such as cellular telephone networks, solutions are more sophisticated. In cellular networks, capacity can be increased by adding more “cells” (antennas spread across the network) and decreasing the power at which the antennas within them transmit. Adding more cells brings antennas closer to cell phones – the reduced distance decreases the minimum signal strength that is needed for communications. This, in turn, reduces the extent to which one’s phone interferes with another’s. And there is always the least-sophisticated solution to a poor signal-to-noise ratio: decreasing the speed of communications.Footnote 37
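One way to see why smaller cells allow lower transmit power is the textbook free-space (Friis) propagation relation, a simplification of real cellular propagation that nonetheless captures the key point: received power falls off with the square of distance, so halving the distance to the nearest antenna lets a handset transmit at roughly a quarter of the power for the same received signal strength.

$$ P_r = P_t\, G_t G_r \left(\frac{\lambda}{4\pi d}\right)^{2} $$

Here $P_t$ and $P_r$ are the transmitted and received power, $G_t$ and $G_r$ are the antenna gains, $\lambda$ is the wavelength, and $d$ is the distance between transmitter and receiver.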

12.4.3 Legal Responses to Externalities

On the legal side of the ledger, both public and private legal institutions address externalities. On the public law side, environmental regulation to reduce pollution presents the clearest analogy. Here, there are a few standard tools in the regulatory toolbox. Environmental regulations, for instance, might impose direct command-and-control-style rules, prohibiting certain types of conduct (such as emitting certain pollutants beyond a threshold, or using or emitting certain pollutants or chemicals at all).Footnote 38 In other cases, the EPA uses “best-available control technology” (BACT) or similar requirements, under which the agency undertakes a regulatory process to ascertain the state of the art in pollution-control technologies and requires sources of pollutants to use those technologies.Footnote 39

There are also private law institutions that address externalities – though they are rarely applied at scale. At common law, these are most often seen in the context of new or changing technologies that lead to new conflicts between individuals. For instance, when once-distant residential communities and industrial farming operations expand to the point that they are near-neighbors,Footnote 40 noise-generating machinery is installed in areas where it was not previously used,Footnote 41 cement plants expand to serve the needs of growing communities,Footnote 42 new construction obstructs longstanding enjoyment of the sun,Footnote 43 or new technologies such as mills alter the character and landscape of a community,Footnote 44 judges may be called in to adjudicate the uncertain rights that exist between individuals engaging in these activities and the third parties affected by the resulting externalities. Most often, legal claims arising from these changing uses are styled as public or private nuisance – although where they implicate rights that have been clearly established under the common law or statute, they may be treated as trespass or statutory violations.

12.5 How We Should Regulate Noisy Speech Externalities

This brings us to this chapter’s ultimate question: How should we regulate online speech in response to noisy speech externalities? My answer is that we should adopt a model similar to that used by the EPA for pollution control – a best-available control technology requirement – but rely on customary industry practices to determine whether that standard is being met. Unlike under the EPA model, the baseline requirement in this setting ought to be a “reasonable best-available technology” requirement, recognizing the vast variation in platforms’ capabilities and their users’ needs. One option for implementing this requirement is to make Section 230’s liability shield contingent upon the use of such technologies.Footnote 45

Section 12.4.3 introduced standard legal approaches to addressing externalities, including public law approaches such as environmental regulation and private law approaches such as nuisance and trespass claims. So far, neither of these approaches has been put into practice in the online environment. Rather, Section 230 of the Communications Decency Act creates a permissive self-regulatory environment.Footnote 46 Section 230 frees up platforms to moderate users’ speech while making clear that they are under no obligation to do so.Footnote 47 Under this approach, platforms are shielded from liability for any harms caused by speech generated by their users, including harms to the speech interests of their other users.

Section 12.4.1, however, suggested that the government may have a sufficient interest in regulating this speech with narrowly tailored regulations intended to lessen the impacts of noisy speech externalities. We might look to aspects of both public and private law approaches to regulating externalities for a model of how the government could respond to noisy speech externalities. In the final analysis, this approach would combine the self-regulatory approach embraced by Section 230 with an affirmative requirement that platforms use best-available content-moderation technologies as suitable for their scale.Footnote 48

Pollution and pollution control are the most traditional legal analogies for thinking about noisy speech externalities and their regulation – and the control-technologies analogy maps onto the concept of content-moderation technologies. A requirement that a platform use “best-available content-moderation technology” to ensure, as best as possible, that its users have meaningful access to content on the platform is a content-neutral policy. More-prescriptive policies would run the risk of making content-based distinctions, especially if they were based on concerns that some types of speech on platforms were more or less in need of protection.Footnote 49 And while policies such as a common-carrier obligation would likely qualify as facially neutral, their effect on the signal-to-noise ratio of platforms would be to render the platforms useless for the vast majority of communications.Footnote 50

Content-moderation techniques and technologies are akin to technological filters or amplifiers that reduce noise or increase the strength of desirable signal to improve the signal-to-noise ratio. These techniques or technologies may come in many forms (indeed, they need not be technological or algorithmic, but could result, for instance, from cultivating community norms or market mechanisms). Their defining characteristic is that they improve a platform’s signal-to-noise ratio, making it easier for users to engage with desired information (signal) or less likely that they will encounter undesired information (noise). As with filters and amplifiers, these technologies can be misconfigured – content moderation can have the effect of amplifying harmful speech or filtering desirable speech. But this is a specific question of how a technology is implemented (including whether it is a “best-available” technology), not a question of the viability or desirability of the underlying technology.

The harder question is who decides what content-moderation technologies are reasonably considered “best available” and how regulation based upon those technologies may be implemented. In the environmental-regulation context, this is done through a regulatory process in which the regulator gathers information about industry practices and dictates what technologies to use. This is not a desirable approach in the speech-moderation setting. As an initial matter, different content-moderation technologies may have different effects on different speech or speakers. Unlike in the pollution context, this potentially creates substantial issues, including embedding content-based distinctions into regulations. That could bring us into the domain of strict scrutiny and concerns about government interference in private speech – a central concern against which the First Amendment is meant to protect.

An additional challenge is the range of speech and the range of platforms hosting that speech. This is a more dynamic environment than the environmental-pollution setting. The EPA regulates a small number of pollutants produced by a small number of chemical processes, which can only be addressed by a small number of control technologies. This makes assessing the best available among those control technologies a tractable task. Courts and regulators are unlikely to be able to keep up with the changing needs and capabilities of content moderation – indeed, they are likely to lack the sophistication needed to understand how the technologies even work. And different technologies may be better or worse suited to different types of speech or different types of communication platforms.

It is not unusual for courts to look to industry custom in the face of changing technologies or scientifically complex settings.Footnote 51 This is a setting where deference to customary industry practices, as opposed to prescriptive command-and-control regulation, makes good sense. And it is an important margin along which online speech platforms compete today. Indeed, platforms invest substantially in their content-moderation operations and are continually innovating new techniques to filter undesired speech, amplify desired speech, and generally to give users greater control of the information that they receive. To be sure, not all of these technologies succeed, and platforms often need to balance the effectiveness of these technologies with the business needs of the platform. To take one example of the former effect, the initial theory behind adding the ability to “like” and “retweet” content on X was to amplify desirable content – but its greater effect was to substantially worsen the platform’s signal-to-noise ratio by increasing the velocity of lower-value content and superficial engagement with that content.Footnote 52 On the other hand, tools like verified accounts and the ability for high-reputation users to help moderate posts through systems like X’s Birdwatch or Reddit’s moderator system provide useful amplification and filters that help manage the platforms’ signal-to-noise ratios.

Of course, any regulation of online platforms’ speech practices needs to confront Section 230’s liability shield. Today, Section 230 permits but does not require platforms to adopt content-moderation policies. As suggested above, if a platform is exceeding its channel-carrying capacity without losing users, there is likely a market failure keeping those users beholden to the platform. At that point, the justification for Section 230’s permissive no-moderation provisions is at its nadir, and it becomes reasonable to expect the platform to adopt improved moderation practices.

This is no small recommendation, and I do not make it lightly. Weakening the protections of Section 230’s liability shield significantly increases the cost of litigation, especially for smaller platforms. In its current form, it is difficult for a plaintiff in a Section 230 case to survive a motion to dismiss. Making that shield contingent upon a nebulous “reasonable best-available technology” requirement invites suits that would survive a motion to dismiss. Critically, this raises significant concerns that litigation could deprive platforms’ users of their chosen venue for the exercise of their constitutionally protected speech rights. Any alteration to Section 230’s liability shield should thus be accompanied by specific requirements to counterbalance these concerns, such as sanctions for sham or strategic litigation, fee-shifting requirements, specific pleading or discovery requirements, and safe harbors for smaller platforms. These should be accompanied by a presumption that any industry-standard content-moderation techniques satisfy a “reasonable best-available technology” benchmark. Policymakers might also consider a two-pronged requirement (a) that concerns about a platform’s content-moderation practices must be reported to a state attorney general prior to the commencement of a suit and (b) that a private suit can only be brought after that attorney general non-prejudicially declines to conduct their own investigation.

12.6 Conclusion

A central tenet of contemporary First Amendment law is the metaphor of the marketplace of ideas – that the solution to bad speech is more, better speech. But this is built upon an assumption that more, better speech is possible. Information theory tells us that there are circumstances in which any additional speech is necessarily bad speech. This is analytically equivalent to an externality, a common form of market failure and a traditional justification for regulatory intervention; and it is analogically equivalent to a market failure in the marketplace of ideas. Indeed, examples of regulation in the face of such failures are common, in settings such as pollution and nuisance law – as well as in the First Amendment setting.

This chapter has argued that regulation may be justified, and may survive First Amendment challenges, in cases where noisy speech externalities are likely to occur. It has also argued that such regulation should draw from pollution control’s example of using best-available control technologies to mitigate externalities – but that, unlike in the pollution setting, courts should look to customary industry practices to evaluate whether a particular platform is using such technologies. This chapter suggests that Section 230’s liability shield, which currently allows but does not require platforms to implement content-moderation technologies, should be made contingent on platforms’ adopting such technologies. This is likely a less radical suggestion than it may seem, as most platforms already actively use and develop content-moderation technologies in the standard course of business; such efforts should be sufficient to satisfy a “best-available content-moderation technology” requirement. Only platforms that actively eschew content-moderation practices, or that otherwise neglect these technologies, would risk the loss of Section 230’s liability shield.

13 Content Moderation in Practice

Laura Edelson
13.1 Introduction

Almost all platforms for user-generated content have written policies around what content they are and are not willing to host, even if these policies are not always public. Even platforms explicitly designed to host adult content, such as OnlyFans,Footnote 1 have community guidelines. Of course, different platforms’ content policies can differ widely in multiple regards. Platforms differ on everything from what content they do and do not allow, to how vigorously they enforce their rules, to the mechanisms for enforcement itself. Nevertheless, nearly all platforms have two sets of content criteria: one set of rules setting a minimum floor for what content the platform is willing to host at all, and a more rigorous set of rules defining standards for advertising content. Many social-media platforms also have additional criteria for what content they will actively recommend to users that differ from their more general standards of what content they are willing to host at all.

These differences, which exist in both policy and enforcement, create vastly different user experiences of content moderation in practice. This chapter will review the content-moderation policies and enforcement practices of Meta’s Facebook platform, YouTube (owned by Google), TikTok, Reddit, and Zoom, focusing on four key areas of platforms’ content-moderation policies and practices: the content policies as they are written, the context in which platforms say those rules will be enforced, the mechanisms they use for enforcement, and how platforms communicate enforcement decisions to users in different scenarios.

Platforms usually outline their content-moderation policies in their community guidelines or standards. These guideline documents are broad and usually have rules about what kinds of actions users can take on their platform and what content can be posted. These guideline documents often also describe the context in which rules will be enforced. Many platforms also provide information about the enforcement actions they may take against content that violates the rules. However, details about the consequences for users who post such content are typically sparse.

More detail is typically available about different platforms’ mechanisms for enforcement. Platforms can enforce policies manually, by having human reviewers check content for compliance directly, or they can employ automated methods to identify violating content. In practice, many platforms take a hybrid approach, using automated means to flag content that may need additional human review. Whether they rely primarily on manual or on automated review, platforms have an additional choice to make regarding what will trigger enforcement of their rules. Platforms can enforce their content-moderation policies either proactively, by looking for content that violates policies, or reactively, by responding to user complaints about violating content.
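A minimal sketch of such a hybrid pipeline might look like the following, with the classifier, the thresholds, and the action names all invented for illustration rather than drawn from any platform’s actual system:

```python
from dataclasses import dataclass

@dataclass
class Item:
    item_id: str
    text: str

def violation_score(item: Item) -> float:
    """Placeholder for a trained classifier returning an estimated P(violation)."""
    raise NotImplementedError  # hypothetical model, not any platform's real system

def route(item: Item, auto_action_at: float = 0.95, review_at: float = 0.60) -> str:
    """Hybrid triage: automate confident calls, escalate uncertain ones to humans."""
    score = violation_score(item)
    if score >= auto_action_at:
        return "enforce_automatically"
    if score >= review_at:
        return "queue_for_human_review"
    return "leave_up"
```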

Platforms also have a range of actions they can take regarding content found to be policy violating. The bluntest tool they can employ is simply to take the content down. A subtler option involves changing how the content is displayed by showing the content with a disclaimer or by requiring a user to make an additional click to see the content. Platforms can also restrict who can see the content, limiting it to users over an age minimum or in a particular geographic region. Lastly, platforms can make content ineligible for recommendation, an administrative decision that might be entirely hidden from users.
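The menu of enforcement actions described above can be summarized as a small taxonomy; the sketch below simply restates the options from the preceding paragraph and is not any platform’s actual schema.

```python
from enum import Enum, auto

class EnforcementAction(Enum):
    REMOVE = auto()        # take the content down entirely
    LABEL = auto()         # display behind a disclaimer or click-through warning
    AGE_RESTRICT = auto()  # show only to users above an age minimum
    GEO_RESTRICT = auto()  # show only in certain geographic regions
    DEMOTE = auto()        # make ineligible for recommendation; often hidden from users
```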

Once a moderation decision is made, either by an automated system or by a human reviewer, platforms have choices about how (and whether) to inform the content creator about the decision. Sometimes platforms withhold notice in order to avoid negative reactions from users, though certain enforcement actions are hard or impossible to hide. In other instances, platforms may wish to keep users informed about actions they take either to create a sense of transparency or to nudge the user not to post violating content in the future.

13.2 Facebook

Facebook (owned by Meta) has made more information about its content-moderation policies and practices available than the other social-media companies discussed here. It is also, at the time of this writing, the only major platform that gives an outside body – its external Oversight Board – discretion over the enforcement of its policies.

13.2.1 Content Policies

Facebook outlines its content policies in its Community Standards.Footnote 2 Broadly speaking, Facebook prohibits or otherwise restricts content that promotes violent or criminal behavior, poses a safety risk, or is “objectionable content,” usually defined as hate speech, sexual content, or graphic violence.

Violent, sexual, hateful, and fraudulent content are all prohibited outright. However, there are limited exceptions for newsworthy content, such as police body-cam footage from shooting incidents, which must be shared behind a warning label if at all. Content that poses an immediate safety risk, such as non-consensual “outing” of LGBTQ+ individuals or doxing, is always prohibited. Many other forms of “borderline” content are restricted, rather than banned outright, if they are found to be satirical, expressed as an opinion, or newsworthy.

Meta’s policy around misinformation is more ambiguous than these prohibited categories of content. The company’s policy says, “misinformation is different from other types of speech addressed in our Community Standards because there is no way to articulate a comprehensive list of what is prohibited.” The policy continues, “We remove misinformation where it is likely to directly contribute to the risk of imminent physical harm. We also remove content that is likely to directly contribute to interference with the functioning of political processes and certain highly deceptive manipulated media.”Footnote 3 In practice, this policy has produced subcategories of misinformation with varying levels of protection. For example, over the past several years, the company has interpreted this policy as prohibiting vaccine misinformation but not climate change-related misinformation.

13.2.2 Enforcement Practices

Meta also provides some information about Facebook’s policy-enforcement practices in its “Transparency Center.”Footnote 4 Facebook says that it enforces its policies with a mix of automated methods and human reviewers who train the automated systems over time. In Meta’s words, a new automated system “might have low confidence about whether a piece of content violates our policies. Review teams can then make the final call, and our technology can learn from each human decision. Over time – after learning from thousands of human decisions – the technology becomes more accurate.”Footnote 5

This quote describes a fairly standard process in machine learning where automated systems and humans collaborate to make decisions, with humans having a more significant role early in the process and automated systems “learning” from the decisions humans make over time. While Meta’s documentation clearly states that human reviewers make the call when automated classifiers have low confidence, it is less clear about human reviewers’ role in more established domains. Meta states that there are some circumstances where automated systems remove content without human intervention: “Our technology will take action on a new piece of content if it matches or comes very close to another piece of violating content.” According to Meta, their “technology [i.e., automated system] finds more than 90% of the content we remove before anyone reports it for most violation categories.”Footnote 6 A careful reader will note that this does not say that 90 percent of content is removed before users report it, only that it is found before users report it. Still, it is likely a safe assumption that the vast majority of content moderation that happens on the Facebook platform is proactive, rather than reactive.
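
The “matches or comes very close” step can be pictured as checking new content against a bank of known violations. The sketch below is a deliberately simplified illustration of that general technique – real systems rely on perceptual hashes and learned embeddings rather than the naive token-overlap measure used here, and nothing below is Meta’s actual code:

```python
# Minimal illustration of matching new content against known violating content.
# Exact matches are caught by a cryptographic hash; "comes very close" is
# approximated here with a crude token-overlap (Jaccard) similarity.

import hashlib


def fingerprint(text: str) -> str:
    return hashlib.sha256(text.lower().encode()).hexdigest()


def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if (ta | tb) else 0.0


KNOWN_VIOLATIONS = ["example of previously removed violating text"]
KNOWN_HASHES = {fingerprint(t) for t in KNOWN_VIOLATIONS}


def matches_known_violation(text: str, near_threshold: float = 0.8) -> bool:
    """Return True on an exact match or a sufficiently close near-duplicate."""
    if fingerprint(text) in KNOWN_HASHES:
        return True
    return any(jaccard(text, known) >= near_threshold for known in KNOWN_VIOLATIONS)


print(matches_known_violation("Example of previously removed violating text"))  # True
print(matches_known_violation("an unrelated post"))                             # False
```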

When Facebook removes content (as opposed to restricting who can see it or reducing how often it is recommended in users’ newsfeeds), it notifies the user who posted the content.Footnote 7 It then employs a “strike” system to restrict the accounts of users whom the company finds to have violated content policies repeatedly over time.Footnote 8 A first strike is only a warning, but after that, strikes result in increasingly longer bans from creating content, ranging from a one-day ban at the second strike to a thirty-day ban at the fifth. Users can appeal decisions they think are incorrect, and Meta publishes statistics about how often it reinstates removed content in various violation categories in its quarterly Community Standards Enforcement Report.Footnote 9 Finally, accounts that repeatedly post policy-violating content and thus receive five or more strikes can be disabled entirely.Footnote 10 As a final layer of oversight of its content-moderation practices, Meta – uniquely among major social-media companies – has established an Oversight Board.Footnote 11 The Board serves, among other things, as a final court of appeals for Facebook’s moderation decisions. As of this writing, the Oversight Board has reviewed thirty-six appeals and found in twenty-four cases that content should be reinstated.Footnote 12
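
Expressed as a simple rule, the strike ladder described above might look like the sketch below. The one-day and thirty-day figures come from Meta’s documentation as summarized here; the intermediate durations are placeholders for illustration, not Meta’s published values:

```python
# A sketch of an escalating "strike" ladder of the kind described above.
# Only the second-strike (1 day) and fifth-strike (30 days) durations are
# taken from the text; the third- and fourth-strike values are assumptions.

def penalty_for_strike(strike_count: int) -> str:
    if strike_count <= 0:
        return "no action"
    if strike_count == 1:
        return "warning only"
    ladder = {2: 1, 3: 3, 4: 7, 5: 30}  # strike number -> days restricted from posting
    if strike_count in ladder:
        return f"{ladder[strike_count]}-day restriction on creating content"
    return "30-day restriction; account may be disabled entirely"


for strikes in range(1, 7):
    print(strikes, "->", penalty_for_strike(strikes))
```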

13.3 YouTube

Rather than in a standalone section of its website, YouTube outlines its content policies (“Community Guidelines”) within its Help pages.Footnote 13 YouTube prohibits nearly all the same categories of content as Facebook, although the companies’ policies use different nomenclature in some cases and demonstrate different areas of focus. For example, both platforms prohibit sexual content, but Facebook groups this category under the umbrella of “objectionable content” while YouTube groups it with “sensitive content.” Similarly, both platforms broadly prohibit fraudulent content, but YouTube focuses more on preventing spam, while Facebook focuses on financial scams.

In contrast to its relatively well-developed content-policy documentation, YouTube’s documentationFootnote 14 of its policy-enforcement mechanisms is sparse. The company thoroughly describes how users can flag content that violates policy and how such content is then reactively reviewed (always by human reviewers). The policies do note, however, that YouTube will “use technology to identify and remove spam automatically, as well as re-uploads of content we have already reviewed and determined violates our policies.”Footnote 15 Google (YouTube’s owner) also publishes data about content moderation on YouTube in quarterly Transparency Reports.Footnote 16 In these reports, Google breaks down the share of removals originating from automated systems versus users, with more than 90 percent of removals originating from automated systems. Google also provides statistics on when in a post’s lifecycle removals happen, breaking down the share of removals that occur before a post receives any views, after one to ten views, or after more than ten views.

Like Facebook, YouTube employs a “strike” system to nudge users into better behavior.Footnote 17 YouTube’s strike system is significantly more aggressive, however. Users get a warning with no other penalty attached the first time YouTube finds that they have posted content that violates its policies. After that, users who receive three additional strikes in a ninety-day period will have their YouTube channel permanently removed. YouTube further says that “[i]f your channel or account is terminated, you may be unable to use, own, or create any other YouTube channels/accounts.”Footnote 18 This implies that channel removal is indeed a complete ban of the user in some cases, but it’s unclear how often this penalty is imposed in full.

13.4 TikTok

TikTok, similar to Facebook, maintains a separate “Community Guidelines” section of its website.Footnote 19 Content prohibitions are grouped slightly differently, but they generally resemble those of other platforms insofar as they focus on sexually explicit content, fraudulent content, and content deemed to pose a safety risk.

TikTok has released very little information about its mechanisms for enforcement, about which violations will result in permanent bans, and about how many “strikes” users might receive before being permanently banned. In 2021, TikTok published a blog postFootnote 20 announcing that the platform would begin automated proactive content removals for some categories of content. The platform also publishes quarterly Community Guidelines Enforcement reportsFootnote 21 with details about content removal and restoration after appeal.

Unlike Meta and Google, TikTok does not give removal statistics by method of initial flagging. Rather, it breaks down final removals by “automated” versus “manual” means. The word “automated” is undefined, but one can reasonably infer that it refers to removals made without any human review. In TikTok’s case, such removals appear to account for about one-quarter of overall removals. Note, however, that this metric is not equivalent to the flagging-based figures other platforms report, so the numbers are not directly comparable: TikTok’s figure likely reflects human involvement at any point in the moderation process, not just at the point of initial flagging.
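
A toy calculation, with invented numbers, shows why these two kinds of metrics can diverge sharply:

```python
# Worked example (numbers invented) of why "found by automation first" and
# "removed with no human review" are not comparable metrics.
total_removals = 100
flagged_by_automation_first = 90    # Meta/Google-style metric: found before any user report
removed_with_no_human_review = 40   # TikTok-style metric: no human involved at any point

proactive_detection_share = flagged_by_automation_first / total_removals      # 0.90
fully_automated_removal_share = removed_with_no_human_review / total_removals  # 0.40

# The same moderation system could thus report "90% automated" under the first
# definition and "40% automated" under the second.
print(proactive_detection_share, fully_automated_removal_share)
```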

At the same time as its automated proactive-content-removal announcement, TikTok also confirmed that it employs a strike system to ban users who repeatedly post violating content. TikTok does not currently disclose how many times (or at what frequency) users would have to violate policy to receive a ban. Its Community Guidelines do make clear that the platform has a zero-tolerance policy for the most serious categories of violations, such as Child Sexual Abuse Material (CSAM) or violent content. In its transparency reports, the company provides data about the number of accounts removed each month. Still, there is no way to connect the number of removed posts to the number of removed accounts without more intermediate data.

13.5 Reddit

Like other platforms reviewed in this chapter, Reddit publishes Community Guidelines that apply across the entire platform.Footnote 22 However, these Community Guidelines are best thought of as a content-moderation “floor” that describes a substantially lower threshold than is actually enforced across the vast majority of the platform. This is because all Reddit content is posted to “subreddits” (also known as channels), each having its own set of policies and practices that users create and enforce themselves.Footnote 23 Reddit does require that channel moderators post their policies clearly and maintain an appeals process, but communities are otherwise free to self-moderate as they see fit.

This overarching policy of relatively few platform-wide limitations has naturally led to many groups hosting a great deal of content that many users would find objectionable for one reason or another. To manage this issue, Reddit has a policy of “quarantining” subreddits that most users might find highly offensive or upsetting.Footnote 24 Reddit will not run ads on quarantined channels, which therefore generate no revenue for Reddit. Content posted in these channels also does not appear in the feeds of users who are not subscribed to them and is not discoverable in user searches.
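
In code, the quarantine mechanism amounts to a visibility flag that gates distribution; the sketch below uses hypothetical field and function names to illustrate the effects described above:

```python
# Hypothetical sketch of how a quarantine flag might gate distribution,
# based only on the effects described in the text (no ads, no feed
# distribution to non-subscribers, excluded from search).

from dataclasses import dataclass


@dataclass
class Subreddit:
    name: str
    quarantined: bool = False


def eligible_for_ads(sub: Subreddit) -> bool:
    return not sub.quarantined


def eligible_for_feed(sub: Subreddit, viewer_is_subscriber: bool) -> bool:
    return viewer_is_subscriber or not sub.quarantined


def eligible_for_search(sub: Subreddit) -> bool:
    return not sub.quarantined


quarantined_sub = Subreddit("r/example", quarantined=True)
print(eligible_for_ads(quarantined_sub),                               # False
      eligible_for_feed(quarantined_sub, viewer_is_subscriber=False),  # False
      eligible_for_search(quarantined_sub))                            # False
```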

Similar to the other platforms we have discussed, Reddit publishes a transparency report with details about its content-policy enforcement, though it does so only annually.Footnote 25 Reddit has some site-wide enforcement of its content-moderation policies, but according to its transparency report, subreddit moderators do the majority of content removal. To support the enforcement of both site-wide and community-specific content guidelines, Reddit makes an extensive set of moderator documentationFootnote 26 and toolsFootnote 27 available to its army of volunteer channel moderators. One community-moderation tool unique to Reddit among the platforms we have discussed is “flair.”Footnote 28 Flairs are short text tags consisting of single words, phrases, or emoticons. While flair can be used for a variety of purposes, when it is attached to a user account it typically conveys that user’s reputation.

Due to the fragmented nature of both content policy and enforcement on Reddit, there is little that can be said about how enforcement decisions are communicated to users when they happen on the channel level. However, while subreddit moderators have broad autonomy to police their channels (and to ban users from them) as they see fit, only Reddit can ban user accounts from the site entirely. Reddit publishes data about both content and user-account removal in its transparency report, but the platform does not outline any explicit thresholds of policy violations (either what kind or how many) that would prompt a user’s account to be suspended.

13.6 Zoom

While Zoom is not generally considered a social-media company, it is still a platform for users to share content. Readers may be most familiar with Zoom as a tool for one-on-one video calling, but Zoom can also be used to host multi-party calls with up to 1,000 participants and webinars with up to 10,000, depending on the host’s account type.Footnote 29 Zoom users can also record videos and save them to Zoom’s cloud so that others can watch them later. The company has therefore published standards for what content it is and is not willing to host.Footnote 30 In its community standards, Zoom prohibits many of the same content categories as the other platforms we have reviewed, including hate speech, promotion of violence, and sexual or suggestive content, though some other commonly prohibited categories, such as misinformation, go unaddressed. And unlike the other platforms we have discussed, Zoom enforces its policies only in reaction to user reports.Footnote 31

Zoom thus appears to have no proactive enforcement of its content policies, and it states that all moderation in response to user reports is done manually rather than by automated means.Footnote 32 Notably, the company does not currently publish data about its content-policy enforcement: Zoom’s annual transparency report includes only statistics about the company’s responses to various types of government requests, with no data about how many pieces of content have been removed or how many users have been banned under its content policies.

Zoom does not have external oversight of its content-moderation decisions – among the platforms discussed here, only Meta does – but, interestingly, the platform does have several progressive tiers of internal content-moderation review to which users can appeal decisions. At the highest tier, an “appeals panel” makes decisions by majority vote. Panel members are chosen from a pool of Zoom employees and serve for no longer than two years. Panel decisions are documented so they can guide future internal decision-making. In many respects, Zoom’s “appeals panel” is described quite similarly to Meta’s Oversight Board.

13.7 Differences in Content-Moderation Policy

It is likely no coincidence that the three largest platforms we have reviewed – Facebook, YouTube, and TikTok – have similar written content-moderation policies: All three are attempting to serve very broad user bases and therefore face similar challenges. They all have platform-wide policies against many of the same types of content. They all take tiered approaches to enforcement, banning some kinds of content and limiting access to or distribution of other kinds. And they all describe (in greater or lesser detail) a policy of warning users who post violative content and banning users who do so repeatedly.

Reddit’s channel-specific approach is different in almost every respect from the approach taken at Facebook, YouTube, and TikTok. While there is a minimum standard for allowable content on Reddit, most policy rules are set by users themselves to facilitate the types of discussions they want to engage in within specific groups. As they are written, Zoom’s content policies fall somewhere between the permissiveness of Reddit and the broad prohibitions against offensive content that the largest platforms have. Zoom prohibits sexual and fraudulent content, as well as explicit calls for violence. However, the platform makes no explicit rules against many other categories of content, including misinformation, that are harder to define. In this respect, Zoom’s content policies are significantly less aggressive than those of Facebook, TikTok, and YouTube.

13.8 Differences in Content-Moderation Enforcement Rules

The starkest differences between the platforms we have studied lie not in their policies as written, but in their rules for enforcing those policies. For example, Zoom’s clear statement that it enforces its policies only in response to user reports creates conditions for what content actually circulates that are manifestly different from those on platforms that engage in proactive enforcement.

There are also meaningful differences in the consequences platforms impose on users who violate their rules. Most platforms we have discussed employ “strike” systems of some kind, but not all are clear about which penalties follow which strike, or for how long strikes are counted; YouTube’s clarity on these points is a notable exception. This ambiguity is likely strategic, giving platforms the freedom to adjust their policies in reaction to events without having to communicate every change publicly. It is interesting to note that one of Reddit’s rules for its channel moderators is not to create “Secret Guidelines”Footnote 33 that aren’t clearly communicated to users, even though Reddit itself is largely opaque about how it enforces its own guidelines.

Reddit and Zoom take a much more reactive approach to content moderation than Facebook, YouTube, and TikTok. Reddit, as discussed above, leaves most aspects of content moderation – including enforcement – to its user community. Zoom’s content policies look much more like those of Facebook, YouTube, or TikTok on paper, but unlike those platforms, Zoom intervenes only in response to user complaints. In effect, then, any given group of users on a Zoom call can agree on and enforce a local content-moderation policy – much as if they were on a subreddit. Unlike on Reddit, however, there is no “floor” of allowable content for consenting users, because Zoom enforces its content policies only if it receives a complaint.

However, there do appear to be some areas where the effects of policy enforcement are relatively consistent across platforms, even if the mechanisms for achieving those effects differ. This is particularly true of content that is simply illegal, such as violent terrorist imagery or CSAM. Every platform we have discussed makes clear not only that this type of content is prohibited, but that posting it will result in users losing their accounts immediately, without strikes or warnings.

13.9 Differences in Content-Moderation Enforcement Implementation and Transparency

Differences in policy enforcement extend beyond the rules for what enforcement looks like and what triggers it; there are also serious differences in how platforms implement their enforcement systems. Zoom’s all-manual, tiered enforcement system has very different accuracy characteristics than systems that use machine learning to evaluate content proactively. TikTok appears to rely more heavily on fully automated content moderation, with an expectation that users will dispute some decisions and that some content will be restored after those disputes. These implementation details create user experiences very different from those on other platforms.

Some of these differences are the result of platforms’ differing structures. Reddit’s uniquely manually intensive moderation system results from its channel-focused design. Reviewing the resources needed to build accurate machine-learning systems is beyond the scope of this chapter, but the largest platforms can employ machine-learning techniques to identify violative content automatically at least in part because the enormous volumes of user content they host allow them to build correspondingly large training datasets.

All of the platforms we have reviewed publish transparency reports that provide some information about how their policies are implemented in practice. But each of these reports has developed independently and, even when theoretically reporting data about the same category, often uses different metrics to measure slightly different things. This means that while the reports can be individually informative, they are rarely directly comparable.

13.10 Conclusion

The platforms reviewed here have profound differences in content-moderation policy, rules for enforcement, and enforcement practices. How, then, can we compare them when they differ on so many dimensions? Ultimately, platforms (and their policies) exist to shape their users’ experience. This chapter therefore proposes that users’ ultimate experience of platforms’ content policies provides the most meaningful basis for comparison. This outcome-focused framework leads to a series of questions that can be asked about each category of content on each platform:

  • What content are users able to post?

  • What content will be taken down after users post it and how quickly will it be removed?

  • What content will be visible to users other than the poster?

  • What content will be recommended to other users?

  • What will the consequences be for users who post violating content?

An example of how to apply this framework to a category of content, in this case sexual content, is shown in the table below.

Sexually Explicit Content

Can users post this content?
  • Facebook: May be blocked at time of upload
  • YouTube: May be blocked at time of upload
  • TikTok: May be blocked at time of upload
  • Reddit: Yes
  • Zoom: Yes

Will this content be taken down?
  • Facebook: Yes
  • YouTube: Yes
  • TikTok: Yes
  • Reddit: Only if it goes against the rules of the channel in which it is posted
  • Zoom: Only if a viewer objects

Will this content be visible to other users?
  • Facebook: Generally no (because it will not be recommended)
  • YouTube: Yes, until it is taken down
  • TikTok: Generally no (because it will not be recommended)
  • Reddit: Yes, unless it violates channel rules and is removed by a moderator
  • Zoom: Yes, unless a viewer objects and the content is taken down

Will this content be recommended to other users?
  • Facebook: No
  • YouTube: No
  • TikTok: No
  • Reddit: Only if the user has subscribed to the channel
  • Zoom: No (Zoom does not recommend content)

What are the consequences for users who post this content?
  • Facebook: One strike (out of an unknown number)
  • YouTube: One strike (out of three to four)
  • TikTok: One strike (out of an unknown number)
  • Reddit: May be banned from the channel (if in violation of channel rules)
  • Zoom: Unclear

Platforms and policymakers often discuss aspects of content moderation in isolation. Our exploration of moderation policy and implementation demonstrates the degree to which these dynamic systems are the result of multiple interlocking parts, where aspects of one part of the system affect the efficacy of another. The reality of how policies are experienced by users is heavily shaped by how those policies are implemented. In closing, we encourage the reader, when attempting to compare platforms or even to understand the impact of changes to a single system, to consider the whole rather than the parts.

14 The Reverse Spider-Man Principle: With Great Responsibility Comes Great Power

Eugene Volokh
Footnote *
14.1 Introduction

An entity – a landlord, a manufacturer, a phone company, a credit card company, an internet platform, a self-driving-car manufacturer – is making money off its customers’ activities. Some of those customers are using the entity’s services in ways that are criminal, tortious, or otherwise reprehensible. Should the entity be held responsible, legally or morally, for its role (however unintentional) in facilitating its customers’ activities? This question has famously been at the center of the debates about platform content moderation,Footnote 1 but it can come up in other contexts as well.Footnote 2

It is a broad question, and there might be no general answer. (Perhaps it is two broad questions – one about legal responsibility and one about moral responsibility – but I think the two are connected enough to be worth discussing together.) In this chapter, though, I’d like to focus on one downside of answering it “yes”: what I call the Reverse Spider-Man Principle – with great responsibility comes great power.Footnote 3 Whenever we are contemplating holding entities responsible for their customers’ behavior, we should think about whether we want to empower such entities to surveil, investigate, and police their customers, both as to that particular behavior and as to other behavior.Footnote 4 And that is especially so when the behavior consists of speech, and the exercise of power can thus affect public debate.

Of course, some of the entities with whom we have relationships do have power over us. Employers are a classic example: In part precisely because they are responsible for our actions (through principles such as respondeat superior or negligent hiring/supervision liability), they have great power to control what we do, both on the job and in some measure off the job.Footnote 5 Doctors have the power to decide what prescription drugs we can buy, and psychiatrists have the responsibility (and the power) to report when their patients make credible threats against third parties.Footnote 6 And of course we are all subject to the power of police officers, who have the professional though not the legal responsibility to prevent and investigate crime.

On the other hand, we generally do not expect to be in such subordinate relationships to phone companies, or to manufacturers selling us products. We generally do not expect them to monitor how we use their products or services (except in rare situations where our use of a service interferes with the operation of the service itself), or to monitor our politics to see if we are the sorts of people who might use the products or services badly. At most, we expect some establishments to perform some narrow checks at the time of a sale, often defined specifically and clearly by statute, for instance by laws that require bars not to serve people who are drunk or that require gun dealers to perform background checks on buyers.Footnote 7

Many of us value the fact that, in service-oriented economies, companies try hard to do what it takes to keep customers (consider the mentality that “the customer is always right”), rather than expecting customers to comply with the companies’ demands. But if we insist on more “responsibility” from such providers, we will effectively push them to exercise more power over us, and thus fundamentally change the nature of their relationships with us. If companies are required to police the use or users of their products and services (what some call “third-party policing”Footnote 8), then people’s relationship with them may become more and more like people’s relationship with the police.

To be sure, none of this is a dispositive argument against demanding such responsibility. Perhaps sometimes such responsibility is called for. My point, though, is that this responsibility also carries costs. We should take those costs into account when we engage in “balancing,” “proportionality tests,” Learned Hand cost–benefit analysis, or something similar – whether as a matter of adjudication, policymaking, or even just moral judgment – in deciding whether to demand such responsibility.

14.2 The Virtues of Irresponsibility

Let me begin by offering three examples of where some courts have balked at imposing legal liability, precisely because they did not want to require or encourage businesses to exercise power over their customers.

14.2.1 Telephone and Telegraph Companies

The first came in the early 1900s, when some government officials demanded that telephone and telegraph companies block access to their services by people suspected of running illegal gambling operations. Prosecutors could have gone after the bookies directly, of course, and they did. But they also argued that the companies should have policed the bookies’ access to their services – and indeed sometimes prosecuted the companies for allowing their services to be used for such criminal purposes.

No, held some courts (though not allFootnote 9); to quote one:

A railroad company has a right to refuse to carry a passenger who is disorderly, or whose conduct imperils the lives of his fellow passengers or the officers or the property of the company. It would have no right to refuse to carry a person who tendered or paid his fare simply because those in charge of the train believed that his purpose in going to a certain point was to commit an offense. A railroad company would have no right to refuse to carry persons because its officers were aware of the fact that they were going to visit the house of [the bookmaker], and thus make it possible for him and his associates to conduct a gambling house.

Common carriers are not the censors of public or private morals. They cannot regulate the public and private conduct of those who ask service at their hands.Footnote 10

If the telegraph or telephone company (or the railroad) were held responsible for the actions of its customers, the court reasoned, then it would acquire power – as “censor[] of public or private morals” – that it ought not possess.

And indeed, Cloudflare, a provider of internet services that protect against denial-of-service attacks, drew an analogy to a phone company in saying that it would generally not reject customers based on their views (though it might stop serving a customer whose services were actively being used to organize criminal attacksFootnote 11):

Our conclusion … is that voluntarily terminating access to services that protect against cyberattack is not the correct approach…. Just as the telephone company does not terminate your line if you say awful, racist, bigoted things, we have concluded in consultation with politicians, policy makers, and experts that turning off security services because we think what you publish is despicable is the wrong policy. To be clear, just because we did it in a limited set of cases before does not mean we were right when we did. Or that we will ever do it again.Footnote 12

14.2.2 Email Systems

Telegraph and telephone companies were common carriers, denied such power (and therefore, those courts said, responsibility) by law. But consider a second example, Lunney v. Prodigy Services Co., a 1999 case in which the New York high court held that email systems were immune from liability for allegedly defamatory material sent by their users.Footnote 13

Email systems aren’t common carriers, but the court nonetheless reasoned that they should not be held responsible for failing to block messages, even if they had the legal authority to block them: An email system’s “role in transmitting e-mail is akin to that of a telephone company,” the court held, “which one neither wants nor expects to superintend the content of its subscribers’ conversations.”Footnote 14 Even though email systems aren’t forbidden from being the censors of their users’ communications, the court concluded that the law should not pressure them into becoming such censors.

14.2.3 Landlords

Courts have likewise balked at imposing obligations on residential landlords that would encourage the landlords to surveil and police their tenants. Consider Castaneda v. Olsher, where a mobile-home-park tenant injured in a gang-related shootout involving another tenant sued the landlord, claiming it “had breached a duty not to rent to known gang members.”Footnote 15 No, said the California Supreme Court:

[W]e are not persuaded that imposing a duty on landlords to withhold rental units from those they believe to be gang members is a fair or workable solution to [the] problem [of gang violence], or one consistent with our state’s public policy as a whole….

If landlords regularly face liability for injuries gang members cause on the premises, they will tend to deny rental to anyone who might be a gang member or, even more broadly, to any family one of whose members might be in a gang.Footnote 16

This would in turn tend to lead to “arbitrary discrimination on the basis of race, ethnicity, family composition, dress and appearance, or reputation,”Footnote 17 which may itself be illegal (so the duty would put the landlord in a damned-if-you-do, damned-if-you-don’t position).

But even apart from the possible illegality of such likely reactions by landlords, making landlords liable would jeopardize people’s housing options and undermine their freedom even if they aren’t gang members, further subjecting them to the power of their landlords: “[F]amilies whose ethnicity, teenage children, or mode of dress or personal appearance could, to some, suggest a gang association would face an additional obstacle to finding housing.”Footnote 18 Likewise, even if landlords respond only by legally and evenhandedly checking all tenants’ criminal histories, “refusing to rent to anyone with arrests or convictions for any crime that could have involved a gang” would “unfairly deprive many Californians of housing.”Footnote 19 This “likely social cost” helped turn the court against recognizing such a responsibility on the part of landlords.Footnote 20

Other courts have taken similar views. In Francis v. Kings Park Manor, Inc., for instance, the Second Circuit sitting en banc refused to hold a landlord liable for its tenants’ racial harassment of fellow tenants, partly because of concern that such responsibility would pressure landlords to exercise undue power over tenants:

[U]nder the alternative proposed by Francis, … prospective and current renters would confront more restrictive leases rife with in terrorem clauses, intensified tenant screening procedures, and intrusions into their dealings with neighbors, all of which could result in greater hostility and danger, even culminating in (or beginning with) unwarranted evictions.

Our holding should also be of special interest to those concerned with the evolution of surveillance by state actors or by those purporting to act at their direction. See Note 44, ante (warning against broad liability schemes that would encourage landlords to act as law enforcement).Footnote 21

The New York intermediate appellate court took a similar view in Gill v. New York City Housing Authority, rejecting liability for tenant-on-tenant crime that the plaintiff claimed might have been avoided had the landlord dealt better with a tenant’s mental illness:

The practical consequences of an affirmance in this case would be devastating. The Housing Authority would be forced to conduct legally offensive and completely unwarranted “follow-up” of all those tenants within its projects known to have a psychiatric condition possibly … injurious to another tenant…. [E]viction, which is described in the Housing Authority Management Manual as a “last resort,” would become almost commonplace.Footnote 22

A New Jersey intermediate appellate court took the same view in Estate of Campagna v. Pleasant Point Properties, LLC, rejecting a claim that landlords should be responsible for doing background checks on tenants.Footnote 23 Likewise, in the related context of university liability for students’ consumption of alcohol, the Massachusetts high court concluded:

As many courts have noted, requiring colleges and universities to police all on-campus use of alcohol would be inappropriate and unrealistic. Although “[t]here was a time when college administrators and faculties assumed a role in loco parentis” and “[s]tudents were committed to their charge because the students were considered minors,” “[c]ollege administrators no longer control the broad arena of general morals.” College-aged students, while sometimes underage for the purposes of the purchase and consumption of alcohol, otherwise are adults expected to manage their own social activities…. [T]he additional intrusion into the private lives of students that would be necessary to control alcohol use on campus would be both impractical for universities and intolerable to students.Footnote 24

To be sure, the pattern here is not uniform. Sometimes landlords are held responsible (by statutes, ordinances, or tort-law rules) for monitoring their tenants for potentially illegal behavior, such as the distribution of drugs; for failing to evict tenants who are violating the law,Footnote 25 or even tenants who are being victimized by criminals and are thus calling 911 too often;Footnote 26 for failing to warn co-tenants of tenants’ past criminal records;Footnote 27 or even for renting to tenants who have criminal records.Footnote 28 But the result of those decisions has indeed been what the courts quoted above warned about: greater surveillance of tenants by landlords, and greater landlord power exercised over tenants.Footnote 29

14.2.4 The Limits of Complicity

One way of understanding these cases is that they put limits on concepts of complicity. The law does sometimes hold people liable for enabling or otherwise facilitating others’ wrongful conduct, even in the absence of a specific wrongful purpose to aid such conduct;Footnote 30 consider tort law principles such as negligent hiring and negligent entrustment. But there are often good public-policy reasons to limit this.

Sometimes those reasons stem from our sense of professional roles. We do not fault a doctor for curing a career criminal, even if as a result the criminal goes on to commit more crimes. It’s not a doctor’s job to decide whether someone merits healing, or to bear responsibility for the consequences of successfully healing bad people.

Likewise, the legal system expects defense lawyers to do their best to get clients acquitted, and does not hold the lawyers responsible for the clients’ future crimes. (Indeed, historically the legal system allowed courts to order unwilling lawyers to represent indigent defendants.Footnote 31) When there is public pressure on lawyers to refuse to represent certain clients, the legal establishment often speaks out against such pressure.Footnote 32

And sometimes those reasons stem from our sense of who should and who should not be “censors of public or private morals.” The police may enforce gambling laws, or arrest gang members for gang-related crimes. The courts may enforce libel law. But various private entities, such as phone companies, email services, and landlords, should not be pressured into doing so.Footnote 33

14.3 Practical Limits on Private Companies’ Power, in the Absence of Responsibility

Of course, many such companies (setting aside the common carriers or similarly regulated monopolies) already have great power over whom to deal with and what to allow on their property, even when they aren’t held responsible – by law or by public attitudes – for what happens on their property. In theory, for instance, Prodigy’s owners could have decided that they wanted to kick off users who were using Prodigy email for purposes that they found objectionable: libel, racist speech, Communist advocacy, or whatever else. Likewise, some companies may decide not to deal with people who they view as belonging to hate groups or anti-American organizations, just because their shareholders or managers think that’s the right thing to do, entirely apart from any social or legal norms of responsibility.

But in practice, in the absence of responsibility (whether imposed by law or social norms), many companies will eschew such power, for several related reasons – even setting aside the presumably minor loss of business from the particular customers who are ejected:

  1. Policing customers takes time, effort, and money.

  2. Policing customers risks error and bad publicity associated with such error, which could alienate many more customers than the few who are actually denied service.

  3. Policing customers risks allegations of discriminatory policing, which may itself be illegal and at least is especially likely to yield bad publicity.

  4. Policing some customers will often lead to public demands for broader policing: “You kicked group X, which we sort of like, off your platform; why aren’t you also kicking off group Y, which we loathe and which we view as similar to X?”Footnote 34

  5. Conversely, a policy of “we do not police our customers” – buttressed by social norms that do not require (or even affirmatively condemn) such policing – offers the company a simple response to all such demands.

  6. Policing customers creates tension even with customers who aren’t violating the company’s rules – people often do not like even the prospect that some business is judging what they say, how they dress, or whom they associate with.

  7. Policing customers gives an edge to competitors who publicly refuse to engage in such policing and who sell their services as “our only job is to serve you, not to judge you or eject you.”

Imposing legal responsibility on such companies can thus pressure them to exercise power even when they otherwise would not have. And that is so in some measure even if responsibility is accepted just as a broad moral norm, created and enforced by public pressure (likely stemming from influential sectors of society, such as the media or activists or professional organizations), and not a legal norm.Footnote 35 That moral norm would increase the countervailing costs of non-policing. It would decrease the costs of policing: For instance, the norm and the corresponding pressure would likely act on all major competitors, so the normal competitive pressures encouraging a “the customer is always right” attitude would be sharply reduced. And at some point, the norm might become the standard against which the reasonableness of behavior is measured as a legal matter.

Likewise, when people fault a company for errors or perceived discrimination, the company can use the norm as cover, for instance arguing that “regrettably, errors will happen, especially when one has to do policing at scale.” “After all, you have told us you want us to police, haven’t you?”

Accepting such norms of responsibility could also change the culture and organization of the companies. It would habituate the companies to exercising such power. It would create internal bureaucracies staffed with people whose jobs rely on exercising the power – and who might be looking for more reasons to exercise that power.

And by making policing part of the companies’ official mission, the acceptance of responsibility norms would subtly encourage employees to make sure that the policing is done effectively and comprehensively, and not just at the minimum that laws or existing social norms command. Modest initial policing missions, based on claims of responsibility for a narrow range of misuse, can thus creep into much more comprehensive use of such powers.Footnote 36

Indeed, it appears that something like this happened with social-media platforms. 47 U.S.C. § 230 freed online companies of legal responsibility for the content of users’ speech, and many such companies therefore did not exercise their legal power to restrict what users posted, or did so only lightly.Footnote 37 But the mid-2010s saw a combination of social and congressional pressure that held platforms responsible for supposed misinformation and other bad speech, which caused the leading platforms to exercise such power more and more.Footnote 38 Platforms have now begun making decisions about which political candidates and officials to deplatform and which important political stories to block (including in the heat of an election campaign).Footnote 39 One might approve or disapprove of such power being exercised by large business corporations over public discourse;Footnote 40 but my point here is simply that calls for great responsibility have indeed increased the exercise of such power.

14.4 The Internet of Things, Constant Customer/Seller Interaction for Tangible Products, and the Future of Responsibility

So far, there has been something of a constraint on calls for business “responsibility” for the actions of their customers: Such calls have generally involved ongoing business–customer relationships, for instance when Facebook can monitor what its users are posting (or at least respond to other users’ complaints).

Occasionally, some have called on businesses to simply not deal with certain people at the outset – consider Castaneda v. Olsher, where the plaintiffs argued that the defendants just should not have rented the mobile homes to likely gang members. But such exclusionary calls have been rare.

I expect, for instance, that few people would think of arguing that car dealers should refuse to sell cars to suspected gang members who might use the cars for drive-by shootings or for crime getaways.Footnote 41 Presumably, most people would agree that even gang members are entitled to buy and use cars in the many lawful ways that cars can be used, and that car dealers should not see their job as judging the likely law-abidingness of their customers.Footnote 42 If the legislature wants to impose such responsibilities, for instance by banning the sale of guns to felons or of spray paint to minors, then presumably the legislature should create such narrow and clearly defined rules, which would rely on objective criteria that do not require seller judgment about which customers merely seem likely to be dangerous.

But now more and more products involve constant interaction between the customer and the seller.Footnote 43 Say, for instance, that I’m driving a partly self-driving Tesla that is in constant contact with the company. Recall how Airbnb refused to rent to people who it suspected were going to a “Unite the Right” rally.Footnote 44 If that is seen as proper – and indeed is seen as mandated by corporate social responsibility principles – then one can imagine similar pressure on Tesla to stop Teslas from driving to the rally (or at least to stop such trips by Teslas owned by those people suspected of planning to participate in the rally).Footnote 45

To be sure, this might arouse some hostility, because it’s my car, not Tesla’s. But Airbnb was likewise refusing to arrange bookings for other people’s properties, not its own. Airbnb’s rationale was that it had a responsibility to stop its service from being used to promote a racist, violent event.Footnote 46 Why wouldn’t Tesla then have a similar responsibility to stop its intellectual property and its central computers (assuming they are in constant communication with my car) from being used the same way?

True, the connection between the Tesla and its user’s driving to the rally is somewhat indirect – but not more so than Airbnb’s. Indeed, Tesla’s connection is a bit more direct: Its product and the accompanying services would get the driver the last mile to the rally itself, rather than just providing a place to stay the night before. Indeed, there’s just one eminently foreseeable step (a short walk from the parking space) between the use of the Tesla and the driver’s attendance at the rally. And conversely, if we think Tesla should not be viewed as responsible for its cars being used to get to rallies that express certain views, what should that tell us about whether Facebook should be responsible for use of its service to convey those views?

Now, Tesla’s sales contract might be seen as implicitly assuring that its software will always try to get me to my destination. But that is just a matter of the contract. If companies are seen as responsible for the misuse of their services, why wouldn’t they have an obligation to draft contracts that let them fulfill that responsibility?

Of course, maybe some line might be drawn here: Perhaps, for instance, we might have a special rule for services that are ancillary to the sale of goods (Tesla, yes; Airbnb, no), under which the transfer of the goods carries with it the legal or moral obligation for the seller to keep providing the services even when one thinks the goods are likely to be used in illegal or immoral ways. (Though what if I lease my Tesla rather than buying it outright, or rent it for the day just as I might rent an Airbnb apartment for the night?) Or at least we might say there’s nothing irresponsible about a product seller refusing to police customers’ continuing use of the services that make those products work.

But that would just be a special case of the broader approach that I’m suggesting here: For at least some kinds of commercial relationships, a business should not be held responsible for what its customers do – because we do not want it exercising power over its customers’ actions. We might then ask whether we should apply the same principle to other commercial relationships.

14.5 Big Data and the Future of Responsibility

There has historically also been another constraint on such calls for business “responsibility”: It’s often very hard for a business to determine what a customer’s plans are. Even if there is social pressure to get businesses to boycott people who associate with supposed “hate groups”Footnote 47 – or even if the owners of a business (say, Airbnb) just want to engage in such a boycott – how is a business to know what groups a person associates with, at least unless the person is famous, or unless someone expressly complains about the person to the business?Footnote 48

But these days we can get a lot more data about people, just by searching the Internet and some other databases (some of which may cost money, but all of which are well within the means of most big businesses). To be sure, this might yield too much data about each prospective customer for a typical business to process at scale. But AI technology will likely reduce the cost of such processing by enabling computers to quickly and cheaply sift through all that data, and to produce some fairly reliable estimate: Joe Schmoe is 93 percent likely to be closely associated with one of the groups that a business is being pressured to boycott. At that point, the rhetoric of responsibility may suggest that what now can be done (identifying supposedly evil potential clients) should be done.

Consider one area in which technological change has sharply increased the scope of employer responsibility – and constrained the freedom of many prospective employees. American tort law has long held employers responsible for negligent hiring, negligent supervision, or negligent retention when they unreasonably hire employees who are incompetent at their jobs in a way that injures third parties,Footnote 49 or who have a tendency to commit crimes that are facilitated by the job.Footnote 50 But until at least the late 1960s, this had not required employers to do nationwide background checks, because such checks were seen as too expensive, and thus any such requirement “would place an unfair burden on the business community.”Footnote 51 Even someone who had been convicted of a crime could thus often start over and get a job, at least in a different locale, without being dogged by his criminal record.

Now, though, as nationwide employee background checks have gotten cheaper, they have in effect become mandatory for many employers: “Lower costs and easier access provide [an] incentive to perform [background] checks, potentially leaving employers who choose not to conduct such checks in a difficult position when trying to prove they were not negligent in hiring.”Footnote 52 As a result, people with criminal records now often find it especially hard to get jobs.

Perhaps that’s good, given the need to protect customers from criminal attack. Or perhaps it’s bad, given the social value of giving people a way to get back to productive, law-abiding life. Or perhaps it’s a mix of both. But my key point here is that, while the employer’s responsibility for screening his employees has formally remained the same – the test is reasonable care – technological change has required employers to exercise that responsibility in a way that limits the job opportunities of prospective employees much more than it did before.

Similarly, commercial property owners have long been held responsible for taking reasonable – which is to say, cost-effective – measures to protect their business visitors from criminal attack. Thus, as video surveillance cameras became cheap enough to be cost-effective, courts began to hold that defendants may be negligent for failing to install surveillance cameras,Footnote 53 even though such surveillance would not have been required when cameras were much more expensive.

We can expect to see something similar as technological change renders cost-effective other forms of investigation and surveillance – not just of employees or of outside intruders, but of customers. If it is a company’s responsibility to make sure that bad people do not use the company’s products or services for bad purposes, then as technology allows companies to investigate their clients’ affiliations and beliefs more cost-effectively, companies will feel pressure to engage in such investigation.

14.6 Conclusion

“Responsibility” is often viewed as an unalloyed good. Who, after all, wants to be known as “irresponsible”?Footnote 54 Sometimes we should indeed hold people and organizations legally or morally responsible for providing tools that others misuse. People and organizations are also of course entitled to choose to accept such responsibility, even if they are not pressured to do so.Footnote 55 And sometimes even if they do not feel responsible for doing something, they might still choose to do it, whether because they think it’s good for their users and thus good for business, or because they think it’s good for society. In particular, I’m not trying to take a position here on what sort of moderation social-media platforms should engage in.Footnote 56

My point here is simply that such responsibility has an important cost, and refusal to take responsibility has a corresponding benefit. Those who are held responsible for what we do will need to assert their power over us, surveilling, second-guessing, and blocking our decisions. A phone company or an email provider or a landlord that’s responsible for what we do with its property will need to control whether we are allowed to use its property, and control what we do with that property; likewise for a social-media platform or a driverless-car manufacturer. If we want freedom from such control, we should try to keep those companies from being held responsible for their users’ behavior.

There is value in businesses being encouraged to “stay in their lane,” with their lane being defined as providing a particular product or service. They should be free to say that they “are not the censors of public or private morals,” and that they should not “regulate the public and private conduct of those who ask service at their hands.”Footnote 57 Even if, unlike in the telephone and telegraph cases, they have the legal right to reject some customers, they should be free to refrain from exercising that right. Sometimes the responsibility for stopping misuse of the product should be placed solely on the users and on law enforcement – not on businesses that are enlisted as largely legally unsupervised private police forces, doing what the police are unable to do or (as with speech restrictions) are constitutionally forbidden from doing.

15 Moderating the Fediverse: Content Moderation on Distributed Social Media

Alan Z. Rozenshtein
Footnote *
15.1 Introduction

Current approaches to content moderation generally assume the continued dominance of “walled gardens”: social-media platforms that control who can use their services and how. Whether the discussion is about self-regulation, quasi-public regulation (e.g., Facebook’s Oversight Board), government regulation, tort law (including changes to Section 230), or antitrust enforcement, the assumption is that the future of social media will remain a matter of incrementally reforming a small group of giant, closed platforms. But, viewed from the perspective of the broader history of the internet, the dominance of closed platforms is an aberration. The internet initially grew around a set of open, decentralized applications, many of which remain central to its functioning today.

Email is an instructive example. Although email is hardly without its content-moderation issues – spam, in particular, has been an ongoing problem – those issues generate far less discussion than social media’s do. Part of this is because email lacks some of the social features that can make social media particularly toxic. But it is also because email’s architecture simply does not permit the degree of centralized, top-down moderation that social-media platforms can perform. If “ought” implies “can,” then “cannot” implies “need not.” There is a limit to how heated the debates around email-content moderation can be, because there’s an architectural limit to how much email moderation is possible. This raises the intriguing possibility of what social media, and its accompanying content-moderation issues, would look like if it too operated as a decentralized protocol.

Fortunately, we do not have to speculate, because decentralized social media already exists in the form of the “Fediverse” – a portmanteau of “federation” and “universe.” Much like the decentralized infrastructure of the internet, in which the HTTP communication protocol facilitates the retrieval of, and interaction with, webpages stored on servers around the world, Fediverse protocols power “instances,” which are comparable to social-media applications and services. The most important Fediverse protocol is ActivityPub, which powers the most popular Fediverse apps, notably the X-like microblogging service Mastodon, which has over a million active users and continues to grow, especially in the wake of Elon Musk’s purchase of X.Footnote 1

The importance of decentralization and open protocols is increasingly recognized within Silicon Valley. X co-founder Jack Dorsey has launched Bluesky, an X competitor built on the decentralized ATProtocol. Meta’s Mark Zuckerberg has described his plans for an “open, interoperable metaverse” (though how far this commitment to openness will go remains to be seen).Footnote 2 And established social media platforms are building in interoperability with ActivityPub applications.Footnote 3

Building on an emerging literature around decentralized social media,Footnote 4 this brief essay seeks to give an overview of the Fediverse, its benefits and drawbacks, and how government action can influence and encourage its development. Section 15.2 describes the Fediverse and how it works, first distinguishing open from closed protocols and then describing the current Fediverse ecosystem. Section 15.3 looks at the specific issue of content moderation on the Fediverse, using Mastodon as a case study to draw out the advantages and disadvantages of the federated content-moderation approach as compared to the currently dominant closed-platform model. Section 15.4 considers how policymakers can encourage the Fediverse through participation, regulation, antitrust enforcement, and liability shields.

15.2 Closed Platforms and Decentralized Alternatives
15.2.1 A Brief History of the Internet

A core architectural building block of the internet is the open protocol. A protocol is a rule that governs the transmission of data. The internet consists of many such protocols, ranging from those that direct how data is physically transmitted to those that govern the most common internet applications, like email or web browsing. Crucially, all these protocols are open, in that anyone can set up and operate a router, website, or email server without needing to register with or get permission from a central authority.Footnote 5 Open protocols were key to the first phase of the internet’s growth because they enabled unfettered access, removing barriers and bridging gaps between different communities. This enabled and encouraged interactions between groups with various interests and knowledge, resulting in immense creativity and idea-sharing.
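
To make the idea of an “open protocol” concrete, consider the short sketch below (in Python, purely for illustration). It speaks raw HTTP to a public web server; nothing in the exchange requires registering with, or getting permission from, any central authority – any client that follows the published rules can participate.

```python
# A minimal illustration of an open protocol: any client that follows the
# published HTTP rules can talk to any web server, with no central
# registration or permission required. (Illustrative sketch only.)
import socket

HOST = "example.com"  # any public web server will do

request = (
    "GET / HTTP/1.1\r\n"
    f"Host: {HOST}\r\n"
    "Connection: close\r\n"
    "\r\n"
)

with socket.create_connection((HOST, 80)) as sock:
    sock.sendall(request.encode("ascii"))
    response = b""
    while chunk := sock.recv(4096):
        response += chunk

# The first line of the reply is the HTTP status line, e.g., "HTTP/1.1 200 OK".
print(response.split(b"\r\n", 1)[0].decode("ascii"))
```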

But starting in the mid-2000s, a new generation of closed platforms – first Facebook, YouTube, and X, and later Instagram, WhatsApp, and TikTok – came to dominate the internet habits of most users.Footnote 6 Today’s internet users spend an average of seven hours online a day, and approximately 35 percent of that time is spent on closed social-media platforms.Footnote 7 Although social-media platforms use the standard internet protocols to communicate with their users – from the perspective of the broader internet, they just operate as massive web servers – their internal protocols are closed. There’s no Facebook protocol that you could use to run your own Facebook server and communicate with other Facebook users without Facebook’s permission. Thus, major social-media platforms are the most important example of the internet’s steady, two-decades-long takeover by “walled gardens.”Footnote 8

There are many benefits to walled gardens; otherwise, they would not have taken over. Closed systems are attractive for the companies that run them because the companies can exert greater control over their platforms through content and user moderation. But the draw for platform owners is not by itself enough to explain their dominance; only by providing users with a better experience (or at least convincing them that their experience is better) could closed platforms have come to dominate social media.

Closed platforms have indeed often provided more value to users. The logic of enclosure applies as much to virtual spaces as it does to real ones: Because companies can more thoroughly monetize closed platforms, they have a greater incentive to invest more in those platforms and provide better user experiences. One can create an X account and begin posting tweets and interacting with others within minutes; good luck setting up your own microblogging service from the ground up. And because companies have full control over the platform, they can make changes more easily – thus, at least in the short term, closed platforms can improve at a faster rate than can open platforms, which often struggle with cumbersome, decentralized consensus governance.

Most important, at least from the perspective of this chapter, are closed platforms’ advantages when it comes to moderation. Closed platforms can be moderated centrally, which enables greater control over what appears on the network. And the business models of closed platforms allow them to deploy economic and technological resources at a scale that open, decentralized systems simply cannot match. For example, Meta, Facebook’s parent company, has spent over $13 billion on “safety and security” efforts since the 2016 election, employing, both internally and through contractors, 40,000 employees on just this issue. And Meta’s investments in AI-based content-moderation tools have led it to block billions of fake accounts.Footnote 9 Content moderation, as Tarleton Gillespie notes, “is central to what platforms do, not peripheral” and “is, in many ways, the commodity that platforms offer.”Footnote 10 Indeed, this concern with security – whether about malicious code, online abuse, or offensive speech – is one of the most important drivers of the popularity of closed systems.Footnote 11

But closed platforms have become a victim of their own success. They have exacerbated the costs of malicious action by creating systems that are designed to be as frictionless as possible within the network (even if access to the network is controlled by the platform). At the same time, they have massively increased user expectations regarding the moderation of harmful content, since centralization allows (in theory, though not in practice) the complete elimination of harmful content in a way that the architecture of an open system does not. Closed platforms impose uniform, top-down standards, which inevitably leave many users unsatisfied. And they raise concerns about the handful of giant companies and Silicon Valley CEOs exercising outsized control over the public sphere.Footnote 12

In other words, large, closed platforms face what might be called the moderator’s trilemma. The first prong is maintaining a large and diverse user base. The second is moderating that user base through centralized, top-down policies and practices. The third is avoiding angering large swaths of users (not to mention the politicians who represent them). But the content-moderation controversies of the past decade suggest that these three goals cannot all be met. The large closed platforms are unwilling to shrink their user bases or give up control over content moderation, so they have tacitly accepted high levels of dissatisfaction with their moderation decisions. The Fediverse, by contrast, responds to the moderator’s trilemma by giving up on centralized moderation.

15.2.2 The Fediverse and Its Applications

The term “Fediverse” refers collectively to the protocols, servers, applications, and communities that enable decentralized social media. The most popular of these protocols is ActivityPub, which is developed by the World Wide Web Consortium, the main international standards organization for the World Wide Web and the body behind HTML, XML, and other foundational web standards.Footnote 13

To understand how ActivityPub operates, it’s important to appreciate that all social-media platforms are built around the same core components: users creating and interacting with pieces of content, whether posts (Facebook), tweets (X), messages (WhatsApp), images (Instagram), or videos (YouTube and TikTok). When a user tweets, for example, they first send the tweet to an X server. That X server then distributes that tweet through the X network to other users. Like all platforms, X has its own internal protocol that processes the data representing the tweet: the tweet’s content plus metadata like the user handle, the time the tweet was made, responses to the tweet (“likes” and “retweets”), and any restrictions on who can see or reply to the tweet.

ActivityPub generalizes this system. The ActivityPub protocol is flexible enough to accommodate different kinds of social-media content. This means that developers can build different applications on top of the single ActivityPub protocol; thus, Friendica replicates the main features of Facebook, Mastodon replicates those of X, and PeerTube those of YouTube. But unlike legacy social-media platforms, which do not naturally interoperate – one can embed a YouTube link in a tweet, but X sees the YouTube content as just another URL, rather than a type of content that X can directly interact with – all applications built on top of ActivityPub have, in principle, access to the same ActivityPub data, allowing for a greater integration of content.Footnote 14
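
The shared data model can be made concrete. The sketch below (in Python, with hypothetical account URLs) shows roughly the kind of JSON object ActivityPub servers exchange when a user posts a short message – an ActivityStreams “Create” activity wrapping a “Note.” Real servers attach further metadata, such as cryptographic signatures, but this basic shape is what lets a microblogging app, a video app, and an image-sharing app consume one another’s content.

```python
# A rough sketch of the JSON object an ActivityPub server distributes when a
# user posts a short message: a "Create" activity wrapping a "Note". Account
# URLs are hypothetical; real servers add further metadata (e.g., signatures).
import json

activity = {
    "@context": "https://www.w3.org/ns/activitystreams",
    "type": "Create",                                   # the user action
    "actor": "https://example.social/users/alice",      # who posted
    "published": "2024-01-01T12:00:00Z",                # when
    "to": ["https://www.w3.org/ns/activitystreams#Public"],  # visibility
    "object": {
        "type": "Note",                                 # the content itself
        "content": "Hello, Fediverse!",
        "attributedTo": "https://example.social/users/alice",
    },
}

print(json.dumps(activity, indent=2))
```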

The most important feature of ActivityPub is that it is decentralized. The servers that users communicate with and that send content around the network are independently owned and operated. Anyone can set up and run an ActivityPub server – generally called an “instance” – as long as they follow the ActivityPub protocol. This is the key feature distinguishing closed platforms like X or Facebook from open platforms like ActivityPub – or email or the World Wide Web, for that matter: Anyone can run an email or web server if they follow the relevant protocols.

ActivityPub’s decentralized nature means that each instance can choose what content flows across its network and use different content-moderation standards. An instance can even choose to block certain users, types of media (e.g., videos or images), or entire other instances. At the same time, each instance’s content-moderation decisions are locally scoped: No instance can control the behavior of any other instance, and there is no central authority that can decide which instances are valid or that can ban a user or a piece of content from the ActivityPub network entirely. As long as someone is willing to host an instance and allow certain content on that instance, it exists on the ActivityPub network.

This leads to a model of what I call content-moderation subsidiarity. Just as the general principle of political subsidiarity holds that decisions should be made at the lowest organizational level capable of making such decisions,Footnote 15 content-moderation subsidiarity devolves decisions to the individual instances that make up the overall network.

A key guarantor of content-moderation subsidiarity is the ability of users to switch instances if, for example, they are dissatisfied with how their current instance moderates content. If a user decides to move instances, their followers will automatically refollow them at their new account.Footnote 16 Thus, migrating from one Mastodon instance to another does not require starting from scratch. The result is that, although Fediverse instances show some of the clustering that is characteristic of the internet as a whole,Footnote 17 no single instance monopolizes the network.Footnote 18
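
The refollow mechanism can be sketched roughly as follows (a simplified illustration in Python, not Mastodon’s actual code or field names): the departing account announces a “Move” to a new address, and each follower’s server switches its follow to that address – but only if the new account lists the old one as an alias, so that a “Move” cannot be used to hijack someone else’s followers.

```python
# Simplified follower-side logic for account migration: on receiving a "Move"
# activity, drop the follow of the old account and follow the new one, but
# only if the new account vouches for the old one as an alias. Names and
# fields are illustrative.

def handle_move(move: dict, new_account_aliases: list[str],
                my_follows: set[str]) -> set[str]:
    old_account = move["object"]    # the account being moved away from
    new_account = move["target"]    # the account being moved to

    if old_account not in my_follows:
        return my_follows           # we never followed them; nothing to do
    if old_account not in new_account_aliases:
        return my_follows           # the new account does not vouch for the move

    updated = set(my_follows)
    updated.discard(old_account)
    updated.add(new_account)        # "refollow" at the new address
    return updated


# Example: Alice moves instances; her follower's server updates automatically.
follows = {"https://old.example/users/alice", "https://example.net/users/bob"}
move = {"type": "Move",
        "object": "https://old.example/users/alice",
        "target": "https://new.example/users/alice"}
print(handle_move(move, ["https://old.example/users/alice"], follows))
```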

Using Albert Hirschman’s theory of how individuals respond to dissatisfaction with their organizations,Footnote 19 we can say that the Fediverse empowers users to exercise powers of voice and exit more readily and meaningfully than they could on a centralized social-media platform. Rather than forcing users simply to put up with their dissatisfactions, the Fediverse permits them to choose the instance that best suits them (exit) and to use that leverage to participate in instance governance (voice). Of course, users on closed platforms can (and frequently do) express their grievances with how the platform is moderated – perhaps most notably on X, where a common (and ironic) subject for tweets is how terrible X is – but such an “affective voice” is far less likely to lead to meaningful change than the “effective voice” that the Fediverse enables.Footnote 20

Some existing companies, though they remain centralized in most respects, have enhanced users’ voice and exit privileges by decentralizing their platforms’ moderation practices. For example, Reddit, the popular message-board platform, grants substantial autonomy to its various subreddits, each of which has its own moderators. Indeed, Reddit is frequently held up as the most prominent example of bottom-up, community-based content moderation.Footnote 21 One might thus ask: does the Fediverse offer anything beyond what already exists on Reddit and other sites, like Wikipedia, that enable user-led moderation?

Indeed it does, because the Fediverse’s decentralization is a matter of architecture, not just policy. A subreddit moderator has control only insofar as Reddit, a soon-to-be public company,Footnote 22 permits that control. Because Reddit can moderate any piece of content – and can even ban a subreddit outright – whether or not the subreddit’s moderators agree, the company is subject to public pressure to do so. Perhaps the most famous example is Reddit’s banning of the controversial pro-Trump r/The_Donald subreddit several months before the 2020 election.Footnote 23

Taken as a whole, the architecture of the Fediverse represents a challenge not only to the daily operations of incumbent platforms, but also to their very theoretical bases. Media scholars Aymeric Mansoux and Roel Roscam Abbing have developed what is so far the most theoretically sophisticated treatment of the Fediverse’s content-moderation subsidiarity, which they characterize as a kind of “agonism”: the increasingly influentialFootnote 24 model of politics that seeks a middle ground between, on the one hand, unrealistic hopes for political consensus and, on the other hand, the zero-sum destructiveness of antagonism:

The bet made by agonism is that by creating a system in which a pluralism of hegemonies is permitted, it is possible to move from an understanding of the other as an enemy, to the other as a political adversary. For this to happen, different ideologies must be allowed to materialize via different channels and platforms. An important prerequisite is that the goal of political consensus must be abandoned and replaced with conflictual consensus…. Translated to the Fediverse, it is clear that it already contains a relatively diverse political landscape and that transitions from political consensus to conflictual consensus can be witnessed in the way communities relate to one another. At the base of these conflictual exchanges are various points of view on the collective design and use of the software stack and the underlying protocols that would be needed to further enable a sort of online agonistic pluralism.Footnote 25

The Fediverse is a truly novel evolution in online speech. It works in theory – but does it work in practice?

15.3 Content Moderation on the Fediverse
15.3.1 The Mastodon Case Study

Although the organization that runs the Mastodon project recommends certain content-moderation policies,Footnote 26 each Mastodon instance is able to choose whether and how much to moderate content. The large, general-interest instances tend to have fairly generic policies. For example, Mastodon.social bans “racism, sexism, homophobia, transphobia, xenophobia, or casteism” as well as “harassment, dogpiling or doxxing of other users.”Footnote 27 By contrast, other instances do not specify prohibited categories of content;Footnote 28 this, of course, does not prevent the instance administrators from moderating content on an ad hoc basis, but it does signal a lighter touch. Content moderation can also be based on geography and subject matter; for example, Mastodon.social, which is hosted in Germany, explicitly bans content that is illegal in Germany,Footnote 29 and Switter, a “sex work friendly social space” that ran from 2018 to 2022, permitted sex-work advertisements that mainstream instances generally prohibited.Footnote 30 Mastodon instances can also impose various levels of moderation on other instances, which can be: (1) fully accessible (the default); (2) filtered but still accessible; (3) restricted such that users can only view content posted on the restricted instances if they follow users on those instances; and (4) fully blocked.
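
These four levels amount to a per-instance federation policy. The sketch below (illustrative only; the level names are not Mastodon’s exact terms) shows how such a policy might be represented and applied. The key point is that the table is purely local: it binds no one but the instance that maintains it.

```python
# An illustrative per-instance federation policy implementing the four levels
# described above. Domain names are hypothetical; nothing here is global.
from enum import Enum

class FederationLevel(Enum):
    ACCESSIBLE = "accessible"   # default: content flows freely
    FILTERED = "filtered"       # delivered, but demoted or placed behind warnings
    RESTRICTED = "restricted"   # visible only to users who follow accounts there
    BLOCKED = "blocked"         # no content exchanged at all

policy = {
    "friendly.example": FederationLevel.ACCESSIBLE,
    "spammy.example": FederationLevel.FILTERED,
    "edgy.example": FederationLevel.RESTRICTED,
    "abusive.example": FederationLevel.BLOCKED,
}

def should_deliver(post_domain: str, viewer_follows_author: bool) -> bool:
    """Decide whether to show a remote post to a local user under this policy."""
    level = policy.get(post_domain, FederationLevel.ACCESSIBLE)
    if level is FederationLevel.BLOCKED:
        return False
    if level is FederationLevel.RESTRICTED:
        return viewer_follows_author
    return True  # ACCESSIBLE and FILTERED both deliver (FILTERED may be demoted)

print(should_deliver("edgy.example", viewer_follows_author=False))      # False
print(should_deliver("friendly.example", viewer_follows_author=False))  # True
```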

Mastodon instances thus operate according to the principle of content-moderation subsidiarity: Content-moderation standards are set by, and differ across, individual instances. Any given Mastodon instance may have rules that are far more restrictive than those of the major social-media platforms. But the network as a whole is substantially more protective of speech than are any of the major social-media platforms, since no user or content can be permanently banned from the network and anyone is free to start an instance that communicates both with the major Mastodon instances and with the peripheral, shunned instances.

The biggest content-moderation challenge for Mastodon has been Gab, an X-like social network that is popular on the far right. Gab launched in 2016, and, in 2019, switched its software infrastructure to run on a version of Mastodon, in large part to get around Apple and Google banning Gab’s smartphone app from their app stores. By switching its infrastructure to Mastodon and operating as merely one of Mastodon’s many instances, Gab hoped to hitch a ride back to users’ smartphones.Footnote 31

Gab is a useful case study in how decentralized social media can self-police. On the one hand, there was no way for Mastodon to expel Gab from the Fediverse. As Mastodon’s founder Eugen Rochko explained, “You have to understand it’s not actually possible to do anything platform-wide because it’s decentralized…. I do not have the control.”Footnote 32 On the other hand, individual Mastodon instances could – and the most popular ones did – refuse to interact with the Gab instance, effectively cutting it off from most of the network in a spontaneous, bottom-up process of instance-by-instance decision-making. Ultimately, Gab was left almost entirely isolated, with more than 99 percent of its users interacting only with other Gab users. Gab responded by “defederating”: voluntarily cutting itself off from the remaining instances that were still willing to communicate with it.Footnote 33

15.3.2 Benefits and Drawbacks of Federated Moderation

As the Gab story demonstrates, the biggest benefit of a decentralized moderation model is its embrace of content-moderation subsidiarity: Each community can choose its own content-moderation standards according to its own needs and values, while at the same time recognizing and respecting other communities’ content-moderation choices. This is in stark contrast to the problem faced by large, centralized platforms, which by their nature must choose a single moderation standard that different groups of users will inevitably find either under- or overinclusive.

The difference in business models also lowers the need for content moderation generally. The business models of the major platforms – selling advertisements – require them to maximize “user engagement,” and the discovery algorithms designed to promote this goal tend to emphasize conflict across users. By contrast, Fediverse applications can be, and often are, engineered with “antivirality” in mind.Footnote 34 For example, Mastodon’s lack of X’s “quote tweet” feature was an intentional design choice by Eugen Rochko, who judged that such a feature “inevitably adds toxicity to people’s behaviours” and encourages “performative” behavior and “ridiculing.”Footnote 35 The same considerations underpin Mastodon’s lack of full-text search and eschewal of algorithmic amplification in favor of reverse-chronological feeds.Footnote 36 In addition, Fediverse instances, which are generally run by volunteers and without a profit imperative, can afford to focus on smaller communities in which like-minded users do not suffer the problem of “context collapse” that frequently leads to conflicts on the major social-media platforms.Footnote 37

Of course, if the Fediverse proves popular, for-profit entities may enter the space, thus introducing the problematic incentives of the major platforms. But even if this were to occur, the ability of users to switch Fediverse applications and instances will limit the extent to which the Fediverse’s architecture will reflect the values of the extractive attention economy.

The main objection to the Fediverse is that what some see as its key feature – its decentralized model – is for others its main bug. Because there is no centralized Fediverse authority, there is no way to fully exclude even the most harmful content from the network. And, as noted above, Fediverse administrators will generally have fewer resources as compared to giant social-media platforms.Footnote 38 By contrast, if Facebook or X want to fully ban a user or some piece of content, they can in principle do so (although in practice it can be a challenge given the size of their networks and users’ ability to evade content moderation).

In considering the limits of decentralized content moderation, it is helpful to distinguish between two categories of objectionable conduct. The first category consists of content that is broadly recognized as having no legitimate expressive value. Examples of such content are child-exploitation material, communication that facilitates criminal conduct, and spam. The challenges of moderating these types of content are technological and organizational, and the main question is whether decentralized social media can handle them at scale. Ultimately, it’s an empirical question, and we’ll have to wait until the Fediverse grows to find out the answer. But there are reasons for optimism.

First, the Fediverse itself may be up to the task. Automated scanning, while hardly foolproof, could lower moderation costs. For example, many of the major platforms use Microsoft’s PhotoDNA system to scan for child pornography,Footnote 39 and the same software could be used by Fediverse instances for content that they host. And if effective moderation turns out to require more infrastructure, that could lead to a greater consolidation of instances. This is what happened with email, which – in part due to the investments necessary to counter spam – has become increasingly dominated by Google and Microsoft.Footnote 40
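
PhotoDNA itself is proprietary – it computes a perceptual hash that is robust to resizing and re-encoding and matches it against a curated database of known material – but the basic hash-and-match workflow an instance could automate can be sketched as follows. The sketch substitutes an ordinary SHA-256 hash and an invented hash list purely for illustration.

```python
# An illustrative sketch of the hash-and-match pattern behind tools such as
# PhotoDNA. The real system uses a proprietary perceptual hash matched against
# a curated database; this sketch uses an exact SHA-256 hash and a made-up
# hash list purely to show the moderation workflow an instance could automate.
import hashlib

# Hashes of known-prohibited images, in practice supplied by a clearinghouse.
# (This sample entry is just the SHA-256 of the bytes b"test".)
KNOWN_BAD_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def image_hash(image_bytes: bytes) -> str:
    """Stand-in for a perceptual hash; here simply SHA-256 of the raw bytes."""
    return hashlib.sha256(image_bytes).hexdigest()

def scan_upload(image_bytes: bytes) -> str:
    """Return a moderation action for an uploaded image."""
    if image_hash(image_bytes) in KNOWN_BAD_HASHES:
        return "block_and_report"  # remove the upload and escalate
    return "allow"

print(scan_upload(b"test"))          # "block_and_report"
print(scan_upload(b"vacation.jpg"))  # "allow"
```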

If similar scale is necessary to fight spam and bot accounts on the Fediverse, this could serve as a centripetal force to counter the Fediverse’s decentralized architecture and lead to a Fediverse that is more centralized than it is today (albeit still far more decentralized than architecturally closed platforms). Partial centralization would reintroduce some of the content-moderation dilemmas that decentralization is meant to avoid,Footnote 41 and there is a trade-off between a vibrant and diverse communication system and the degree of centralized control that would be necessary to ensure 100 percent filtering of content. The question, to which the answer is as yet unknown, is how stark that trade-off is.

A second reason to think that federated systems can have sufficient content moderation is that governments could step in to deal with instances that cannot, or choose not to, deal with the worst content. Although the Fediverse may live in the cloud, its servers, moderators, and users are physically located in nations whose governments are more than capable of enforcing local law.Footnote 42 A Mastodon instance that hosted child pornography would not only be blocked by all mainstream Mastodon instances, but would also be quickly taken offline – and have its members prosecuted – by the relevant jurisdictions. Even the threat of state action can have large effects. For example, Switter, which by the end of its life was the third-largest Mastodon instance, shut down because its organizers concluded that Switter’s continued existence was increasingly untenable as major jurisdictions like the United States, Australia, and the United Kingdom advanced online-safety and antitrafficking legislation.Footnote 43

When it comes to the second category of content moderation – content that is objectionable to one group but that others view as legitimate, even core, speech – the Fediverse will host content that current platforms prohibit. But whether this is a weakness or a strength depends on one’s substantive views about the content at issue. What looks to one group like responsible moderation can appear to others as unjustified censorship. And when platforms inevitably make high-profile moderation mistakes – moderation, after all, is not an exact science – they undermine their credibility even further, especially where determinations of “misinformation” or “disinformation” are perceived as tendentious attempts to suppress conflict over politics, health, or other important social and cultural issues.Footnote 44

The benefit of decentralized moderation is that it can satisfy both those that want to speak and those that do not want to listen. By empowering users, through their choice of instance, to avoid content they find objectionable, the Fediverse operationalizes the principle that freedom of speech is not the same as freedom of reach. In a world where there simply is not consensus on what content is and is not legitimate, letting people say what they want while giving others the means to protect themselves from that speech may be the best we can do.

A different concern with decentralized moderation is that it will lead to “filter bubbles” and “echo chambers” in which members choose to interact only with like-minded users.Footnote 45 For Mansoux and Abbing, this state of affairs would produce a watered-down, second-best agonism:

Rather than reaching a state of agonistic pluralism, it could be that the Fediverse will create at best a form of bastard agonism through pillarization. That is to say, we could witness a situation in which instances would form large agonistic-without-agonism aggregations only among both ideologically and technically compatible communities and software, with only a minority of them able and willing to bridge with radically opposed systems.Footnote 46

This concern, though understandable, can be addressed in several ways. First, filter bubbles are not a Fediverse-only phenomenon; closed platforms can design their systems so as to keep dissimilar users from interacting with each other.

Second, it is important not to overstate the effect of filter bubbles; even the most partisan users frequently consume and even seek out information that challenges their beliefs.Footnote 47 While Fediverse applications like Mastodon may make it easier for users to communicate only with like-minded peers, users can still go outside their instances to access whatever information they want.

And third, even if filter bubbles exist, it is unclear whether they are a net negative, at least from the perspective of polarization and misinformation. The “backfire effect” (also known as belief perseverance) is a well-established psychological phenomenon whereby individuals who are exposed to evidence that challenges their views end up believing in those views more rather than less.Footnote 48 In this view, a more narrowly drawn epistemic environment, while hardly a model of ideal democratic public reason, may actually be better than a social-media free-for-all.

Put another way, the smaller communities of the Fediverse may be a useful corrective to the “megascale” of contemporary social media, which pushes us to “say so much, and to so many, so often.”Footnote 49

15.4 Encouraging the Fediverse

The Fediverse is still a very small part of the broader social-media ecosystem. Mastodon’s several million users pale in comparison with Facebook’s billions or X’s hundreds of millions of users. Whether the Fediverse ever grows large enough to challenge the current dominance of closed platforms is very much an open question, one that will ultimately depend on whether it provides a product that ordinary users find superior to what is currently available on the dominant platforms.

Such an outcome is hardly preordained. It would require millions of people to overcome the steeper learning curves of Fediverse applications, commit to platforms that are often intentionally less viral than the engagement-at-all-costs alternatives, and navigate the culture shock of integrating into an existing community.Footnote 50 After experiencing a mass influx of X users who defected after Elon Musk purchased the platform, Mastodon has seen its active users drop from its late-2022 high of 2.5 million, suggesting that, for many users, Mastodon does not work as an X replacement.Footnote 51

But Mastodon has demonstrated that, for millions of people, decentralized social media is a viable option. And even if Mastodon’s market share remains modest, other decentralized applications, whether operating on ActivityPub or other protocols (as with the ATProtocol-powered Bluesky) will continue to grow, especially if they combine Mastodon’s emphasis on decentralization with Silicon Valley’s engagement-at-all-costs priorities. In the end, the current dominance of the incumbent platforms may prove illusory. They are, after all, themselves subject to shakeups, as is demonstrated by the meteoric rise of apps like TikTok.

Although decentralized social media will have to stand on its own merits, public-policy interventions could nevertheless encourage its growth. Here I briefly consider four such interventions, ranging from most- to least-direct government involvement.

First, governments could support the Fediverse by participating in it as users or, better yet, as instances. This would directly contribute to the Fediverse’s growth and, more importantly, would help legitimate it as the preferred social-media architecture for democratic societies. For example, shortly after Musk announced plans to purchase X, the European Commission, the executive branch of the European Union, launched EU Voice, a Mastodon instance that “provides EU institutions, bodies and agencies with privacy-friendly microblogging accounts that they typically use for the purposes of press and public relations activities.”Footnote 52 Other governments and international organizations could follow suit.

Second, governments could mandate that large social-media platforms interoperate with the Fediverse. For example, under such a regime, Facebook would be allowed to choose what users or content appear on its servers, but it would have to allow other Fediverse instances to communicate with it. This would let users access content that Facebook removes while still interacting with the broader Facebook community.Footnote 53 Such regulation would have to specify to what extent Facebook could block other instances entirely, since otherwise Facebook could effectively defederate. But even a limited interoperability mandate would enable a balance between the currently envisioned options: totally unfettered control by closed platforms or common-carrier-type regulations that make any sort of moderation impossible.Footnote 54

Such regulation is already being pursued in Europe, where the Digital Markets Act would require large platforms to interoperate, a requirement that could easily be modified to include the Fediverse.Footnote 55 In the United States, interoperability legislation, which has already been introduced in Congress,Footnote 56 would be a welcome alternative to recent overbroad state laws from Texas, Florida, and other Republican-governed states that purport to limit the ability of major social-media platforms to moderate content. These laws, in addition to being poorly thought out and overtly political, may also violate the First Amendment, at least in their more extreme versions.Footnote 57

Third, antitrust regulators like the Department of Justice and the Federal Trade Commission could use an incumbent platform’s willingness to interoperate as a consideration in antitrust cases.Footnote 58 Interoperability could then be an alternative to calls to “break up” social-media giants, a tactic that is both controversial and legally risky.Footnote 59

Finally, policymakers should consider how the background legal regime can be tweaked to improve the incentives for the Fediverse. In the United States, the most important factor is Section 230 of the Communications Decency Act of 1996, which shields platforms from liability as publishers of content created by users.Footnote 60 Although Section 230 has become increasingly controversial, especially as it applies to giant platforms, it’s hard to imagine how the Fediverse could function without it. The open nature of the Fediverse – with users being able to travel between and communicate across instances – limits the scope of monetization, since users can choose instances that limit advertisements and algorithmic ranking. But this also means that Fediverse instances will lack the resources necessary to perform the sort of aggressive content moderation that would be necessary were they to be held liable for their users’ content. The rationale for Section 230 immunity when it was enacted in the mid-1990s – to help support a nascent internet – no longer applies to the technology giants. But it does apply to the current generation of internet innovators: the federated social-media platforms.

Footnotes

11 Introduction Platform Governance

12 Noisy Speech Externalities

1 Whitney v. California, 274 U.S. 357, 377 (1927) (Brandeis, J., concurring) (“If there be time to expose through discussion the falsehood and fallacies, to avert the evil by the processes of education, the remedy to be applied is more speech, not enforced silence.”), overruled in part by Brandenburg v. Ohio, 395 U.S. 444 (1969).

2 Abrams v. United States, 250 U.S. 616, 630 (1919) (“[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market…. That at any rate is the theory of our Constitution.”).

3 Justin “Gus” Hurwitz, Madison and Shannon on Social Media, 3 Bus. Entrepreneurship & Tax L. Rev. 249 (2019).

4 Cognitive psychologists and neurobiologists have identified some of these limits. Some of this research, such as that showing that there is a roughly constant information density across spoken human languages despite their vastly different syntax, grammars, and word complexity, is considered in Hurwitz, supra Footnote note 3.

5 See infra Section 12.3.2.

6 See Hurwitz, supra Footnote note 3, at 259.

7 C. E. Shannon, A Mathematical Theory of Communication, 27 Bell Sys. Tech. J. 379 (1948).

8 A communications medium is referred to as a “channel” – as in a communications channel. As defined by Shannon, a “channel is merely the medium used to transmit the signal from transmitter to receiver.” Footnote Id. at 381.

9 Shannon, supra Footnote note 7, at 410 (“If an attempt is made to transmit at a higher rate than C, say C + R1, then there will necessarily be an equivocation equal to or greater than the excess R1. Nature takes payment by requiring just that much uncertainty, so that we are not actually getting any more than C through correctly.”).

10 250 U.S. 616, 630 (1919) (Holmes, J., dissenting) (“[T]he best test of truth is the power of the thought to get itself accepted in the competition of the market…. That at any rate is the theory of our Constitution.”).

11 Cf. Note, The Awareness Doctrine, 135 Harv. L. Rev. 1907, 1911 (2022) (citing Steve Rendall, The Fairness Doctrine: How We Lost It, and Why We Need It Back, FAIR (Jan. 1, 2005), https://perma.cc/P557-C8FA), with Marvin Ammori, The Fairness Doctrine: A Flawed Means to Attain a Noble Goal, 60 Admin. L. Rev. 881 (2008) (discussing issues with the Fairness Doctrine in practice).

12 See, e.g., Cass Sunstein, Too Much Information: Understanding What You Don’t Want to Know (2020).

13 Newton N. Minow, Television and the Public Interest, 55 Fed. Commc’ns L.J. 395, 397 (2003).

14 Cass R. Sunstein, The First Amendment in Cyberspace, 104 Yale L.J. 1757 (1995); Richard Posner, Bad News, N.Y. Times Book Rev. (July 31, 2005) (reviewing and discussing eight recent books on the changing media landscape).

15 Sunstein, supra Footnote note 14, at 1759.

16 Footnote Id. at 1804.

17 Posner, supra Footnote note 14.

18 Footnote Id. (“But increased competition has not produced a public more oriented toward public issues, more motivated and competent to engage in genuine self-government, because these are not the goods that most people are seeking from the news media. They are seeking entertainment, confirmation, reinforcement, emotional satisfaction; and what consumers want, a competitive market supplies, no more, no less.”).

19 Footnote Id. (“Yet what of the sliver of the public that does have a serious interest in policy issues? Are these people less well served than in the old days? Another recent survey by the Pew Research Center finds that serious magazines have held their own and that serious broadcast outlets, including that bane of the right, National Public Radio, are attracting ever larger audiences. And for that sliver of a sliver that invites challenges to its biases by reading The New York Times and The Wall Street Journal, that watches CNN and Fox, that reads Brent Bozell and Eric Alterman and everything in between, the increased polarization of the media provides a richer fare than ever before.”).

20 Sunstein, supra Footnote note 12.

21 James Grimmelmann, Listeners’ Choices, 90 Colo. L. Rev. 65 (2019).

22 See Thomas Lambert, How to Regulate: A Guide for Policymakers 185–218 (2017). See also George A. Akerlof, The Market for “Lemons”: Quality Uncertainty and the Market Mechanism, 84 Q.J. Econ. 488 (1970); Howard Beales, Richard Craswell & Steven C. Salop, The Efficient Regulation of Consumer Information, 24 J.L. & Econ. 491 (1981).

23 This is not meant to advocate for either of those outcomes. One could also imagine a Posnerian market-based approach in which information providers compete to be trustworthy by making the information they provide easy for consumers to verify. It is entirely possible that the market could address concerns about either listeners being overwhelmed by information or vindicating their rights against speakers better than regulatory approaches.

24 See Lambert, supra Footnote note 22, at 22–59. See also Garrett Hardin, The Tragedy of the Commons, 162 Sci. 1243 (1968); Ronald Coase, The Problem of Social Cost, 3 J.L. & Econ. 1 (1960).

25 See Henry Farrell & Bruce Schneier, Democracy’s Dilemma, Bos. Rev. (May 15, 2019), https://perma.cc/MG27-J4K4; Yochai Benkler, Robert Faris & Hal Roberts, Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics (2018).

27 Alex Kantrowitz, The Man Who Built the Retweet: “We Handed A Loaded Weapon to 4-Year-Olds”, BuzzFeed (July 23, 2019), https://perma.cc/5LQ8-QFWC.

29 See Reddit Mod Education, Reddit, https://perma.cc/R8QX-V5WV.

30 See Posner, supra Footnote note 14; Eugene Volokh, Cheap Speech and What It Will Do, 104 Yale L.J. 1805 (1995).

31 Red Lion Broad. Co. v. Fed. Commc’ns Comm’n, 395 U.S. 367 (1969); Fed. Commc’ns Comm’n v. Pacifica Found., 438 U.S. 726 (1978); Turner Broad. Sys., Inc. v. Fed. Commc’ns Comm’n, 520 U.S. 180 (1997).

32 Ward v. Rock Against Racism, 491 U.S. 781 (1989).

33 Nat’l Broad. Co. v. United States, 319 U.S. 190 (1943).

34 Red Lion, 395 U.S. at 390.

35 Ward, 491 U.S. at 791.

36 It bears emphasis that the negative externality of noise alone is less substantial than that which occurs when incremental speech is added to a saturated communications channel. In the latter case, the incremental speech is a negative impact on its own, but it has the more substantial effect of making otherwise-meaningful speech also on the communications channel indistinguishable from noise.

37 While less sophisticated, this approach is nonetheless important. It is probably one of the two intuitive responses to overcoming a poor signal-to-noise ratio (the other being to increase signal strength – that is, to speak more loudly or clearly). It also finds analogy in discussions about online speech regulation, for instance with suggestions that platforms insert “friction” into speech.

38 See, e.g., Bruce A. Ackerman & Richard B. Stewart, Reforming Environmental Law, 37 Stan. L. Rev. 1333 (1985); Rena Steinzor, Reinventing Environmental Regulation: The Dangerous Journey from Command to Self-Control, 22 Harv. Env’t L. Rev. 103 (1998) (“Command and control rules impose detailed, legally enforceable limits, conditions, and affirmative requirements on industrial operations, generally controlling sources that generate pollution on an individual basis.”).

39 Steinzor, supra Footnote note 38, at 114; Ackerman & Stewart, supra Footnote note 38, at 1335 (discussing best-available technology regulations); 42 U.S.C. § 7475(a)(4) (“[T]he proposed facility is subject to the best available control technology for each pollutant subject to regulation under this chapter.”).

40 Spur Industries v. Del E. Webb Development Co., 108 Ariz. 178 (1972).

41 Sturges v. Bridgman, 11 Ch. D. 852 (1879).

42 Boomer v. Atlantic Cement Co., 257 N.E.2d 870 (N.Y. 1970).

43 Fontainebleau H. Corp. v. Forty-Five Twenty-Five, Inc., 114 So. 2d 357 (Fla. Ct. App. 1960); Prah v. Maretti, 108 Wis. 2d 223 (1982).

44 Rylands v. Fletcher, L.R. 3 H.L. 330 (1868).

45 The most trenchant response to proposals such as this is that weakening Section 230’s liability shield creates a legitimately worrisome possibility of harming platforms – especially smaller platforms that cannot easily absorb the cost of litigation – by exposing them to potential claims sufficient to survive a motion to dismiss. This concern is addressed near the end of this section.

46 47 U.S.C. § 230.

47 § 230(c)(2) provides that platforms shall not be held liable on account of their moderation activities and § 230(c)(1) provides that they shall not be held liable for information shared by their users.

48 That last proviso, “as suitable for their scale,” is likely redundant. A small platform likely has little need for significant content moderation practices because users are likely able to replicate the experience of that platform on any number of other platforms. As the scale of the platform grows, the value of content moderation grows, and the costs to the platform of poor practices potentially decreases as users face fewer competitive alternatives. In addition to potential competition considerations, it is potentially the case that as the signal-to-noise ratio decreases, so too will users’ ability to evaluate the quality of the platform. This creates an endogenous challenge that might on its own justify some amount of regulatory intervention.

49 See, e.g., Turner Broad. Sys., Inc. v. Fed. Commc’ns Comm’n, 520 U.S. 180, 234 (O’Connor, J., dissenting) (“But appellees’ characterization of must-carry as a means of protecting [local broadcast] stations, like the Court’s explicit concern for promoting ‘community self-expression’ and the ‘local origination of broadcast programming,’ reveals a content-based preference for broadcast programming.”).

50 Indeed, one could argue that a common-carrier obligation would amplify certain types of speech over others in a social-media environment. If we need carriage guaranteed to ensure that some types of speech are viable on the platform, that suggests that the regulations are not, in fact, content-neutral. Footnote Id.

51 The classic case demonstrating the role of industry custom in judicial decision-making is The T.J. Hooper, 60 F.2d 737 (2d Cir. 1932). Medical-malpractice cases are an instance where courts rely on customary practices as a tool for understanding complex scientific settings. See, e.g., Richard N. Pearson, The Role of Custom in Medical Malpractice Cases, 51 Ind. L.J. 528 (1976).

52 Kantrowitz, supra Footnote note 27.

13 Content Moderation in Practice

1 Help, Onlyfans, https://perma.cc/WCW7-VDSY.

2 Facebook Community Standards, Meta, https://perma.cc/G36P-CAU8.

3 Misinformation, Meta, https://perma.cc/2DTC-R7CT.

4 How Meta Enforces Its Policies, Meta, https://perma.cc/82GV-37N6.

6 How Technology Detects Violations, Meta (Jan. 19, 2022), https://perma.cc/QC6Q-L9RM.

7 Taking Down Violating Content, Meta (Sept. 9, 2022), https://perma.cc/B3VX-388A.

8 Restricting Accounts, Meta (Oct. 4, 2022), https://perma.cc/A7BJ-AHPF.

9 Community Standards Enforcement Report, Meta, https://perma.cc/9BHW-SAPP.

10 Disabling Accounts, Meta (Jan. 19, 2022), https://perma.cc/RYR7-RZ6J.

11 Oversight Board, https://perma.cc/M32S-356A.

13 YouTube’s Community Guidelines, YouTube, https://perma.cc/85SE-MW4X.

14 YouTube Community Guidelines Enforcement FAQs, Google, https://perma.cc/X3FD-Q7RM.

15 See Footnote id. (answering the question “Is flagged content automatically removed?”).

16 YouTube Community Guidelines Enforcement, Google, https://perma.cc/EAS7-X6NQ.

17 Community Guidelines Strike Basics on YouTube, Google, https://perma.cc/6WPD-B2R3.

18 Channel or Account Terminations, Google, https://perma.cc/Y6DC-FZHN.

19 Community Guidelines, TikTok, https://perma.cc/XDM8-DQQ9.

20 Eric Han, Advancing Our Approach to User Safety, TikTok, https://perma.cc/V7Y2-ZG9Y.

21 Reports, TikTok, https://perma.cc/L7YF-4KRF.

22 Reddit Content Policy, Reddit, https://perma.cc/3A9D-3BJ7.

23 Moderator Code of Conduct, Reddit (Sept. 8, 2022), https://perma.cc/GYS2-5UUP.

24 Quarantined Subreddits, Reddit, https://perma.cc/2FPP-66FQ.

25 Transparency Report 2021, Reddit, https://perma.cc/7HLX-BT2J.

26 Reddit Mods, Reddit, https://perma.cc/5HU2-DVRU.

27 Reddit Moderation Tools, Reddit, https://perma.cc/99P4-T8C3.

28 User Flair, Reddit, https://perma.cc/49JR-2M7W.

29 Ajaay, Zoom Limit: Maximum Participants, Call Duration, and More, Nerds Chalk (Oct. 21, 2020), https://perma.cc/EWQ8-4YMM.

30 Acceptable Use Guidelines, Zoom, https://perma.cc/3SS4-86GN.

31 Acceptable Use Guidelines Enforcement, Zoom, https://perma.cc/P8GZ-BKRF.

32 Our Tier Review System, Zoom, https://perma.cc/25TT-JWKD.

33 Moderator Code of Conduct, supra Footnote note 23.

14 The Reverse Spider-Man Principle With Great Responsibility Comes Great Power

* Thanks to Nikita Aggarwal, Laura Edelson, Gus Hurwitz, Michael Karanicolas, Kyle Langvardt, Marc McKenna, Courtney Radsch, John Villaseñor, and Mark Verstraete for their help.

1 See, e.g., Dory Knight-Ingram, Hate Speech in Social Media: How Platforms Can Do Better, Mich. News (Feb. 17, 2022), https://perma.cc/D6Z2-TR7E (“‘the companies behind [social-media platforms] have civic responsibilities to combat abuse and prevent hateful users and groups from harming others’”) (quoting Professor Libby Hemphill, author of an Anti-Defamation League report urging platforms to ban “white supremacist speech”); Karis Stephen, The Social Responsibility of Social Media Platforms, Reg. Rev. (Dec. 21, 2021), https://perma.cc/WT48-ZCRE.

2 See, e.g., Henry Fernandez, Curbing Hate Online: What Companies Should Do Now, Ctr. for Am. Progress (Oct. 25, 2018), https://perma.cc/Y83F-VMRE (arguing that payment processors have a responsibility to refuse to process payments to “hate groups”).

3 “With great power comes great responsibility” of course predates Spider-Man’s Uncle Ben, though it is most associated with him. The phrase is often credited to, among others, Voltaire, see, e.g., Montpelier US Ins. Co. v. Collins, No. CIV. 11-141-ART, 2012 WL 588799, *1 (E.D. Ky. Feb. 22, 2012). But see With Great Power Comes Great Responsibility, Quote Investigator, https://perma.cc/5FAU-F655 (casting doubt on this attribution). Luke 12:48 (King James) – “For unto whomsoever much is given, of him shall be much required” – also seems to express a similar sentiment; in context, the “much is given” does appear to refer to power (see Luke 12:42 (King James), discussing someone “whom his lord shall make ruler over his household”) rather than wealth.

The official name is apparently “Spider-Man” rather than “Spiderman,” but not “Bat-Man” or “Super-Man.” This was apparently deliberate product differentiation. See Patricia T. O’Conner & Stewart Kellerman, Why the Hyphen in Spider-Man?, Grammarly (July 13, 2012), https://perma.cc/XXC4-QQEV (relying on, among other sources, a Tweet by Stan Lee).

4 I assume in all such situations that the entities aren’t acting with the specific purpose of promoting illegal behavior. If such a purpose is present, their actions may well be criminal aiding and abetting or even criminal conspiracy. See, e.g., 18 Pa. Cons. Stat. Ann. § 306 (aiding and abetting); Tex. Penal Code Ann. § 7.02 (2004) (likewise); United States v. Pino-Perez, 870 F.2d 1230, 125 (7th Cir. 1989) (likewise); Ocasio v. United States, 578 U.S. 282, 288 (2016) (conspiracy).

5 Some statutes do limit employers’ power to act on their employees’ religious practices, speech, and certain off-the-job activities. See, e.g., Title VII of the Civil Rights Act of 1964, 42 U.S.C. § 2000e; Colo. Rev. Stat. Ann. § 24-34-402.5(1) (lawful off-the-job activities generally); N.D. Cent. Code Ann. §§ 14-02.4-03, 14-02.4-08 (same); 820 Ill. Comp. Stat. Ann. 55/5 (off-the-job consumption of lawful products); Mont. Code Ann. §§ 39-2-313(2), 39-2-313(3) (2011) (same); Nev. Rev. Stat. Ann. § 613.333(1)(B) (same); N.C. Gen. Stat. Ann. § 95-28.2(B) (same); Wis. Stat. Ann. §§ 111.321, 111.35(2) (same); Eugene Volokh, Private Employees’ Speech and Political Activity: Statutory Protection against Employer Retaliation, 16 Tex. Rev. L. & Pol. 295 (2012); Eugene Volokh, Should the Law Limit Private-Employer-Imposed Speech Restrictions?, 2 J. Free Speech L. 269 (2023) (containing a map of such statutes throughout the country).

6 See, e.g., Tarasoff v. Regents of Univ. of Cal., 551 P.2d 334 (Cal. 1976).

7 See, e.g., Cal. Bus. & Prof. Code § 25602(a); 18 U.S.C. § 922(t).

8 See, e.g., Lorraine Mazerolle & Janet Ransley, Third Party Policing (2005); Tracey L. Meares & Emily Owens, Third-Party Policing: A Critical View, in Police Innovation: Contrasting Perspectives 249, 273–87 (David Weisburd & Anthony A. Braga eds., 2019).

9 For the contrary view, see, e.g., Howard Sports Daily v. Weller, 18 A.2d 210 (Md. 1941).

10 Commonwealth v. W. Union Tel. Co., 67 S.W. 59, 60 (Ky. 1901) (paragraph break added); see also Pennsylvania Publications v. Pennsylvania Pub. Util. Comm’n, 36 A.2d 777, 781 (Pa. 1944) (cleaned up); People v. Brophy, 120 P.2d 946, 956 (Cal. App. 1942).

11 Matthew Prince, Blocking Kiwifarms, Cloudflare Blog (Sept. 3, 2022), https://perma.cc/WG5N-6YPK.

12 Matthew Prince & Alissa Starzak, Cloudflare’s Abuse Policies & Approach, Cloudflare Blog (Aug. 31, 2022), https://perma.cc/J5KB-JRE9.

13 The case turned on conduct that happened before the enactment of 47 U.S.C. § 230, which provided such immunity by statute. The court therefore addressed whether a libel claim was available in the first place, thus avoiding the need to determine whether § 230 was retroactive.

14 Lunney v. Prodigy Servs. Co., 723 N.E.2d 539, 542 (N.Y. 1999).

15 Castaneda v. Olsher, 162 P.3d 610, 613 (Cal. 2007).

16 Footnote Id. at 617. On this point, the Justices were unanimous.

18 Footnote Id. at 618.

20 Footnote Id. at 619.

21 992 F.3d 67, 79 n.47 (2d Cir. 2021).

22 130 A.D.2d 256, 266 (N.Y. App. Div. 1987).

23 234 A.3d 348, 369 (N.J. App. Div. 2020); see also Anderson v. 124 Green St. LLC, No. CIV.A. 09-2626-H, 2011 WL 341709, at *5 (Mass. Super. Jan. 18, 2011), aff’d, 82 Mass. App. Ct. 1113 (2012).

24 Helfman v. Northeastern Univ., 149 N.E.3d 758, 768 (Mass. 2020) (citations omitted). The court recognized a university’s duty to protect intoxicated students when it is aware of an “alcohol-related emergency,” id. at 771, but concluded that universities are not responsible for monitoring alcohol use proactively, id. at 774–76.

25 See, e.g., Giggers v. Memphis Hous. Auth., 277 S.W.3d 359, 371 (Tenn. 2009).

26 See, e.g., Bd. of Trustees of Vill. of Groton v. Pirro, 152 A.D.3d 149, 157–58 (N.Y. App. Div. 2017); Erik Eckholm, Victims’ Dilemma: 911 Calls Can Bring Eviction, N.Y. Times (Aug. 16, 2013); Matthew Desmond & Nicol Valdez, Unpolicing the Urban Poor: Consequences of Third-Party Policing for Inner-City Women, 78 Am. Socio. Rev. 117 (2012).

27 See generally Eugene Volokh, Tort Law vs. Privacy, 114 Colum. L. Rev. 879, 895–97 (2014).

28 See David Thacher, The Rise of Criminal Background Screening in Rental Housing, 33 Law & Soc. Inquiry 5, 26 (2008) (“government efforts that encouraged landlords to adopt criminal history screening were partly motivated by a growing belief that private institutions should take more responsibility for their social impacts”).

29 See generally, e.g., B.A. Glesner, Landlords as Cops: Tort, Nuisance & Forfeiture Standards Imposing Liability on Landlords for Crime on the Premises, 42 Case W. Res. L. Rev. 679, 780 (1992); Deborah J. La Fetra, A Moving Target: Property Owners’ Duty to Prevent Criminal Acts on the Premises, 28 Whittier L. Rev. 409, 439–59 (2006); Robert J. Aalberts, Drug Testing Tenants: Does It Violate Rights of Privacy?, 38 Real Prop. Prob. & Tr. J. 479, 481–82 (2003); Desmond & Valdez, supra Footnote note 26.

30 See supra Footnote note 4.

31 See, e.g., Sacandy v. Walther, 262 Ga. 11 (1992); David L. Shapiro, The Enigma of the Lawyer’s Duty to Serve, 55 N.Y.U. L. Rev. 735 (1980).

32 See, e.g., Guantanamo Remarks Cost Policy Chief His Job, CNN (Feb. 2, 2007), https://perma.cc/236W-5DDP; Michel Paradis & Wells Dixon, In Defense of Unpopular Clients – and Liberty, Wall St. J. (Nov. 18, 2020); cf. Eugene Volokh, Defending Guantanamo Detainees, Volokh Conspiracy (Jan. 12, 2007), https://perma.cc/K3DD-CN4N.

33 I set aside here still other reasons, for instance, stemming from the sense that excessive complicity liability may wrongly chill proper behavior as well as improper, or may unduly deter the exercise of constitutional rights. See, e.g., New York Times Co. v. Sullivan, 376 U.S. 254 (1964) (limiting newspaper publisher’s liability for publishing allegedly libelous ads); Protection for Lawful Commerce in Arms Act, 15 U.S.C. §§ 7901–7903 (limiting firearms manufacturers’ and sellers’ liability for criminal misuse of firearms by third parties).

34 Judge Alex Kozinski and I have labeled this “censorship envy,” at least when it applies to speech-restrictive decisions. Alex Kozinski & Eugene Volokh, A Penumbra Too Far, 106 Harv. L. Rev. 1639, 1655 n.88 (1993).

35 Of course, some broad moral norms may be prompted or reinforced by government actors, such as elected representatives who are holding hearings. See, e.g., Transcript, House of Representatives Energy & Commerce Comm., Subcomms. on Communications & Tech. and on Consumer Protection & Commerce, Disinformation Nation: Social Media’s Role in Promoting Extremism and Misinformation, 117th Cong. (Mar. 25, 2021).

36 See Eugene Volokh, The Mechanisms of the Slippery Slope, 116 Harv. L. Rev. 1026, 1051–56 (2003) (discussing such “enforcement need” slippery slopes).

37 See, e.g., Kate Klonick, The New Governors: The People, Rules, and Processes Governing Online Speech, 131 Harv. L. Rev. 1598, 1618–21 (2018).

38 Id. at 1664, 1667; cf. Jack M. Balkin, How to Regulate (and Not Regulate) Social Media, 1 J. Free Speech L. 71, 87–88 (2021) (“Public pressure and media coverage of social media companies can push them, at the margins, to behave as more responsible curators of public discourse.”); Richard L. Hasen, Deep Fakes, Bots, and Siloed Justices: American Election Law in a “Post-Truth” World, 64 St. Louis U. L.J. 535, 554 (2020).

39 See Eugene Volokh, Treating Social Media Platforms Like Common Carriers?, 1 J. Free Speech L. 377, 395–96 (2021).

40 There is an element here of the debate about Citizens United v. FEC, 558 U.S. 310 (2010), though with the ideological polarity largely reversed. Volokh, supra note 39, at 388–95.

41 But see Andrew Jay McClurg, The Tortious Marketing of Handguns: Strict Liability Is Dead, Long Live Negligence, 19 Seton Hall Legis. J. 777, 816 n.178 (1995) (quoting a proposal that gun sellers must, on pain of liability for negligence, “be especially alert to, and wary of, gun buyers who display certain behavioral characteristics such as … appear[ing] in unkempt clothing and hav[ing] a slovenly appearance”).

42 A few companies have said that they will refuse to do business with anyone “associated with known hate groups.” See An Update on Our Work to Uphold Our Community Standards, Airbnb (Mar. 18, 2021), https://perma.cc/SVJ7-RLT8; Michelle Malkin, Why Airbnb Banned Me (And My Hubby, Too!), Prescott eNews (Feb. 6, 2022), https://perma.cc/G8ER-GR3M; Off Service Conduct, Twitch, https://perma.cc/37HD-66J7. Twitch also says it will ban users who are “[h]armful misinformation actors, or persistent misinformation superspreaders,” even when none of the alleged misinformation was spread on Twitch.

43 Rebecca Crootof, The Internet of Torts: Expanding Civil Liability Standards to Address Corporate Remote Interference, 69 Duke L.J. 583 (2019), discusses this interaction in detail; but that article focuses on corporations monitoring and controlling the products they sell in order to promote their own financial interests (for instance, enforcing otherwise hard-to-enforce license terms, or electronically “repossessing” the products in the event of failure to pay), rather than in order to fulfill some legally or socially mandated responsibilities to prevent supposed misuse by customers.

In addition to the question discussed in the text – whether the companies should have a responsibility for monitoring customer use of such connected products, and preventing misuse – there are of course other questions as well, such as (1) whether companies should have a responsibility to report possible misuse, see Volokh, supra note 27; (2) whether companies’ records of user behavior should in some measure be shielded from law enforcement subpoenas and warrants, and from civil discovery; and (3) whether companies should be required to design their products in a way that facilitates law enforcement, cf. 47 U.S.C. §§ 1002, 1003, 1005 (requiring that telephone systems be designed to facilitate legally authorized surveillance).

44 Will Sommer, Airbnb, Uber Plan to Ban ‘Unite the Right’ White-Supremacist Rally Participants, Daily Beast (Aug. 10, 2018), https://perma.cc/TG4L-7L2V. Uber and Lyft apparently stressed only that their drivers could “refuse service to passengers connected to the … rally,” id., rather than themselves forbidding their drivers from serving such passengers.

45 Maybe Tesla’s CEO, Elon Musk, would be reluctant to impose such rules, but then imagine some other car company that sells such cars.

46 See, e.g., Should Airbnb Ban Customers It Disagrees With?, BBC (Aug. 8, 2017), https://perma.cc/C4MZ-9JBZ.

47 See supra note 42.

48 See, e.g., the Michelle Malkin incident cited in note 42; Malkin is a prominent commentator.

49 See, e.g., Carman v. City of New York, 14 Abb. Pr. 301 (N.Y. Sup. Ct. 1862) (noting liability for “want of sufficient care in employing suitable persons”).

50 See, e.g., F. & L. Mfg. Co. v. Jomark, Inc., 134 Misc. 349 (N.Y. App. Term. 1929) (noting liability when a messenger hired by defendant stole property, when “[t]he most casual investigation would have disclosed that this messenger was not a proper person to whom defendant’s goods might be intrusted,” presumably because the investigation would have shown that the messenger was dishonest); Hall v. Smathers, 240 N.Y. 486, 490 (1925) (noting liability for an “assault upon a tenant of an apartment house by a superintendent kept in his position in spite of the complaints of the tenants, and with full knowledge of the defendants’ agents of his habits and disposition”).

51 See Stevens v. Lankard, 297 N.Y.S.2d 686, 688 (App. Div. 1968), aff’d, 254 N.E.2d 339 (N.Y. 1969).

52 Ryan D. Watstein, Note, Out of Jail and Out of Luck: The Effect of Negligent Hiring Liability and the Criminal Record Revolution on an Ex-Offender’s Employment Prospects, 61 Fla. L. Rev. 581, 592–93 (2009); cf., e.g., Malorney v. B & L Motor Freight, Inc., 496 N.E.2d 1086, 1089 (Ill. App. Ct. 1986) (“[T]here is no evidence … that the cost of checking on the criminal history of all truck driver applicants is too expensive and burdensome when measured against the potential utility [(preventing sexual assault of hitchhikers)] of doing so.”); Carlsen v. Wackenhut Corp., 868 P.2d 882, 887–88 (Wash. Ct. App. 1994) (concluding employer may have duty to conduct background check for certain employees, including unarmed concert security guards).

53 See Volokh, supra note 27, at 918 n.176 (collecting cases).

54 Well, maybe it seems romantic at times – cf. Bobby Darin, Call Me Irresponsible, on From Hello Dolly to Goodbye Charlie (Capitol Records 1962), https://perma.cc/5C62-W2P4 – but we can set that aside here.

55 Occasionally people’s felt moral or religious obligation to avoid what they see as complicity with evil behavior will clash with public accommodations laws, and will raise interesting questions under various religious freedom statutes and constitutional regimes; but this is a separate matter. See, e.g., Eugene Volokh, A Common-Law Model for Religious Exemptions, 46 UCLA L. Rev. 1465, 1525–26 (1999); Eugene Volokh, Religious Exemption Regimes and Complicity in Sin, Volokh Conspiracy (Dec. 6, 2021), https://perma.cc/FZ3U-8N94; Eugene Volokh, Bans on Political Discrimination in Places of Public Accommodation and Housing, 15 NYU J. L. & Lib. 709 (2021).

56 Cf. Laura Edelson, Content Moderation in Practice, 3 J. Free Speech L. 183 (2023) (describing some actual moderation practices of various social-media platforms); Volokh, supra note 39 (discussing some arguments for and against limiting social-media platform moderation).

57 See supra note 10 and accompanying text.

15 Moderating the Fediverse: Content Moderation on Distributed Social Media

* For helpful comments I thank Laura Edelson, Kyle Langvardt, Erin Miller, Chinmayi Sharma, and participants at the Big Tech and Antitrust Conference at Seton Hall Law School, the Information Society Project and the Freedom of Expression Scholars Conference at Yale Law School, the Association for Computing Machinery (ACM) Symposium on Computer Science and Law, and the Max Weber Programme Multidisciplinary Research Workshop at the European University Institute. For excellent research assistance I thank Caleb Johnson and Isabel Park.

1 See Barbara Ortutay, Twitter Drama Too Much? Mastodon, Others Emerge as Options, AP News (Nov. 12, 2022), https://perma.cc/PY4F-8GD9.

2 Andrew Hayward, An ‘Open, Interoperable’ Metaverse Is ‘Better for Everyone’: Meta’s Mark Zuckerberg, Yahoo! News (Oct. 11, 2022), https://perma.cc/E32U-FQ7C.

3 David Pierce, Can ActivityPub Save the Internet?, Verge (Apr. 20, 2023).

4 See, e.g., Mike Masnick, Protocols, Not Platforms: A Technological Approach to Free Speech, Knight First Amend. Inst. (Aug. 21, 2019), https://perma.cc/J2QD-YVF7; Francis Fukuyama et al., Stanford Cyber Pol’y Ctr., Middleware for Dominant Digital Platforms: A Technological Solution to a Threat to Democracy (2021), https://perma.cc/S54K-JVEX; Daphne Keller, The Future of Platform Power: Making Middleware Work, 32 J. Democracy 168 (2021); Chand Rajendra-Nicolucci & Ethan Zuckerman, What If Social Media Worked More Like Email?, in An Illustrated Field Guide to Social Media 24 (Chand Rajendra-Nicolucci & Ethan Zuckerman eds., 2021), https://perma.cc/F3LC-LGR4; Robert W. Gehl & Diana Zulli, The Digital Covenant: Non-centralized Platform Governance on the Mastodon Social Network, Info., Commc’n & Soc’y (forthcoming), https://perma.cc/H4XN-9E9K.

5 The distinction between open and closed protocols is not clear-cut. Some of the core technology behind the internet – for example, the Domain Name System, which maps IP addresses to human-readable domain names – has a centralized registration system. But this system imposes relatively minimal control, and the entity that runs it, the Internet Corporation for Assigned Names and Numbers (ICANN), is a multistakeholder nonprofit that prioritizes openness and interoperability.

6 An early challenge to the open internet came from the first generation of giant online service providers like America Online, CompuServe, and Prodigy, which combined dial-up internet access with an all-encompassing web portal that provided both internet content and messaging. But as internet speeds increased and web browsing improved, users discovered that the limits of these closed systems outweighed their benefits, and they faded into irrelevance by the 2000s.

7 Simon Kemp, Digital 2022: Global Overview Report, DataReportal (Jan. 26, 2022), https://perma.cc/XM4G-DLND.

8 The other major example of a move to a closed system is the dominance of smartphones, which (especially iOS devices) are far more closed than are personal computers.

9 Our Progress Addressing Challenges and Innovating Responsibly, Facebook (Sept. 21, 2021), https://perma.cc/3FHT-3TB8.

10 Tarleton Gillespie, Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions That Shape Social Media 13 (2018).

11 Jonathan Zittrain, The Future of the Internet and How to Stop It 59 (paperback ed. 2008).

12 When Elon Musk first made his bid to purchase X, X co-founder Jack Dorsey tweeted:

In principle, I do not believe anyone should own or run Twitter. It wants to be a public good at a protocol level, not a company. Solving for the problem of it being a company however, Elon is the singular solution I trust. I trust his mission to extend the light of consciousness.

@jack, Twitter (Apr. 25, 2022, 9:03 PM), https://perma.cc/VD56-QNRQ.

The chaos that has roiled X since Musk’s takeover suggests that Dorsey’s faith in Musk’s “mission to extend the light of consciousness” was misplaced, even as it underscores Dorsey’s observation that X would be better as “a public good at a protocol level, not a company.” To his credit, Dorsey has since recognized Musk’s faults as X’s owner. See Faiz Siddiqui & Will Oremus, Twitter Founder Jack Dorsey Says Musk Wasn’t an Ideal Leader after All, Wash. Post (Apr. 29, 2023).

13 ActivityPub, W3C, https://perma.cc/L84U-C5D6.

14 For example, as PeerTube, a video-sharing platform, notes, “you can follow a PeerTube user from Mastodon (the latest videos from the PeerTube account you follow will appear in your feed), and even comment on a PeerTube-hosted video directly from your Mastodon’s account.” PeerTube, https://perma.cc/RT9C-9TVH.
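The cross-platform interaction described in note 14 rests on ActivityPub servers exchanging ActivityStreams 2.0 objects with one another. The sketch below is illustrative only, not drawn from this chapter or from any particular implementation: it shows roughly what the “Follow” activity a Mastodon server would deliver to a PeerTube server’s inbox looks like. The server names and account URLs are hypothetical, and real deliveries include HTTP signatures and other details omitted here.

    import json

    # Illustrative sketch (hypothetical URLs): an ActivityStreams 2.0 "Follow"
    # activity of the kind one federated server sends to another under ActivityPub.
    follow_activity = {
        "@context": "https://www.w3.org/ns/activitystreams",
        "id": "https://mastodon.example/activities/12345",
        "type": "Follow",
        "actor": "https://mastodon.example/users/alice",    # the Mastodon account doing the following
        "object": "https://peertube.example/accounts/bob",  # the PeerTube account being followed
    }

    # Serialize the activity for delivery to the recipient server's inbox endpoint.
    print(json.dumps(follow_activity, indent=2))

In general terms, once the PeerTube server accepts such a Follow, it delivers the followed account’s subsequent activities (for example, new video posts) to the follower’s server, which is how those videos can surface in a Mastodon feed.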

15 See generally Andreas Føllesdal, Subsidiarity, 6 J. Pol. Phil. 190 (1998).

16 Mastodon does not currently allow moving posts from one instance to another, but it does allow users to download a record of their posts. How to Migrate from One Server to Another, Mastodon, https://perma.cc/Y4XY-KM6W.

17 See Lada A. Adamic & Bernardo A. Huberman, Zipf’s Law and the Internet, 3 Glottometrics 143, 147–48 (2002), https://perma.cc/H8LL-G9LY (“[T]here are many small elements contained within the Web, but few large ones. A few sites consist of millions of pages, but millions of sites only contain a handful of pages. Few sites contain millions of links, but many sites have one or two. Millions of users flock to a few select sites, giving little attention to millions of others.”).
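Stated generally (and not as the cited paper’s exact fit), Zipf’s law holds that the size or popularity of the element of rank r falls off as a power of the rank:

    f(r) \propto r^{-\alpha}, \qquad \alpha \approx 1,

so the handful of top-ranked sites accounts for most pages, links, and visitors, while the long tail of lower-ranked sites receives very little of each.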

18 A list of Mastodon instances, sorted by number of users, is available at https://perma.cc/S8JU-GGTW.

19 See Albert O. Hirschman, Exit, Voice, and Loyalty: Responses to Decline in Firms, Organizations, and States (1970).

20 See Seth Frey & Nathan Schneider, Effective Voice: Beyond Exit and Affect in Online Communities, New Media & Soc’y (Sept. 2021), https://perma.cc/VQ6K-6CBY.

21 See, e.g., James Grimmelmann, The Virtues of Moderation, 17 Yale J.L. & Tech. 42, 94–101 (2015).

22 Cory Weinberg, Reddit Aims for IPO in Second Half as Market’s Gears Quietly Turn, Information (Feb. 14, 2023), https://perma.cc/XT6C-CS35.

23 Mike Isaac, Reddit, Acting against Hate Speech, Bans “The_Donald” Subreddit, N.Y. Times (June 29, 2020).

24 For a recent attempt to bring agonism into the mainstream of legal scholarship, see Daniel E. Walters, The Administrative Agon: Democratic Theory for a Conflictual Regulatory State, 132 Yale L.J. 1 (2022).

25 Aymeric Mansoux & Roel Roscam Abbing, Seven Theses on the Fediverse and the Becoming of FLOSS, in The Eternal Network: The Ends and Becomings of Network Culture 124, 131 (Kristoffer Gansing & Inga Luchs eds., 2020). For an influential general account of agonism, see Chantal Mouffe, Agonistics: Thinking the World Politically (2013).

26 Specifically, the Mastodon project has promulgated a “Mastodon Server Covenant,” whereby instances that commit to “[a]ctive moderation against racism, sexism, homophobia and transphobia” such that users will have “confidence that they are joining a safe space, free from white supremacy, anti-semitism and transphobia of other platforms” are eligible to be listed on the project’s homepage as recommended instances. See Eugen Rochko, Introducing the Mastodon Server Covenant, Mastodon (May 16, 2019), https://perma.cc/GP8H-MXXK. But the covenant is not binding on any Mastodon instance, and noncomplying instances remain full-fledged members of the overall Mastodon network, subject only to the moderation decisions of other instances.

27 Welcome, Mastodon, https://perma.cc/326M-JW5A; see also mas.to!, Mastodon, https://perma.cc/TBH6-BKWA.

28 See, e.g., Mastodon.cloud, Mastodon, https://perma.cc/7YQQ-ZX87.

29 Welcome, Mastodon, supra note 27.

31 Adi Robertson, How the Biggest Decentralized Social Network Is Dealing with Its Nazi Problem, Verge (July 12, 2019), https://perma.cc/QA6F-J54U. Gab is not the only right-wing social-media network to use Mastodon as its base. Truth Social, Donald Trump’s social-media site, is also built off of Mastodon. Michael Kan, Trump’s Social Media Site Quietly Admits It’s Based on Mastodon, PCMag (Dec. 1, 2021), https://perma.cc/3CJE-S2AA.

32 Robertson, supra note 31.

33 Rob Colbert (@shadowknight412), Gab (May 27, 2020), https://perma.cc/G82J-73WX.

34 Clive Thompson, Twitter Alternative: How Mastodon Is Designed to Be “Antiviral”, Medium (Nov. 9, 2022), https://perma.cc/49N4-YWGZ.

35 Eugen Rochko (@Gargron), Mastodon (Mar. 10, 2018), https://perma.cc/VXE7-XVLC.

36 Thompson, supra note 34.

37 See, e.g., Jenny L. Davis & Nathan Jurgenson, Context Collapse: Theorizing Context Collusions and Collisions, 17 Info., Commc’n & Soc’y 476 (2014).

38 See supra notes 9–11 and accompanying text.

39 See Hany Farid, Reining in Online Abuses, 19 Tech. & Innovation 596 (2018).

40 See Enze Liu et al., Who’s Got Your Mail?: Characterizing Mail Service Provider Usage, in Proceedings of the 2021 ACM Internet Measurement Conference 113 (2021).

41 For example, the outsize importance of a few email providers has led to complaints of censorship. See, e.g., Republican National Committee Sues Google over Email Spam Filters, Reuters (Oct. 24, 2022), https://perma.cc/49EU-JFEQ.

42 See generally Jack Goldsmith & Tim Wu, Who Controls the Internet?: Illusions of a Borderless World (2006).

43 See Switter, supra note 30.

44 A high-profile example is X and Facebook’s decision on the cusp of the 2020 election to block news reports of Hunter Biden’s stolen laptop. While X and Facebook, both of which played an important role in amplifying Russian election interference in 2016, were understandably concerned that the laptop story was foreign disinformation, later revelations suggesting that the laptop was in fact authentic have further undermined many conservatives’ faith in the platforms, and even the platforms themselves have conceded the mistake. See Cristiano Lima, Hunter Biden Laptop Findings Renew Scrutiny of Twitter, Facebook Crackdowns, Wash. Post (Mar. 31, 2022); Jessica Bursztynsky, Twitter CEO Jack Dorsey Says Blocking New York Post Story Was “Wrong,” CNBC (Oct. 16, 2020), https://perma.cc/7CMJ-5VGA; David Molloy, Zuckerberg Tells Rogan FBI Warning Prompted Biden Laptop Story Censorship, BBC (Aug. 26, 2022), https://perma.cc/XG9Q-5PWQ.

45 See generally Cass R. Sunstein, #Republic: Divided Democracy in the Age of Social Media (2018).

46 Mansoux & Abbing, supra note 25, at 132.

47 See Peter M. Dahlgren, A Critical Review of Filter Bubbles and a Comparison with Selective Exposure, 42 Nordicom Rev. 15 (2021).

48 See Brendan Nyhan & Jason Reifler, When Corrections Fail: The Persistence of Political Misperceptions, 32 Pol. Behav. 303 (2010).

49 Ian Bogost, People Are Not Meant to Talk This Much, Atlantic (Oct. 22, 2021), https://perma.cc/U3NT-7MGF.

50 See Alan Rozenshtein, Mastodon’s Content-Moderation Growing Pains, Volokh Conspiracy (Nov. 21, 2022), https://perma.cc/5MPT-3WYK.

51 Amanda Hoover, The Mastodon Bump Is Now a Slump, Wired (Feb. 7, 2023), https://perma.cc/TJ5W-YRLZ.

53 Interestingly, Meta is reportedly working on a decentralized text-based social media platform that would interoperate with Mastodon. Deepsekhar Choudhury & Vikas Sn, Exclusive: Meta Mulls a Twitter Competitor Codenamed “P92” That Will Be Interoperable with Mastodon, MoneyControl (Mar. 10, 2023), https://perma.cc/6E8L-BFC6.

54 To be sure, interoperability mandates are not without their own risks, especially to user privacy. See, e.g., Thomas E. Kadri, Digital Gatekeepers, 99 Tex. L. Rev. 951, 999 (2021); Jane Bambauer, Reinventing Cambridge Analytica One Good Intention at a Time, Lawfare (June 8, 2022), https://perma.cc/7V7W-GML6.

55 At the same time, other requirements of the Digital Services Act, especially around mandatory content moderation, might hinder the Fediverse’s development. See Konstantinos Komaitis & Louis-Victor de Franssu, Can Mastodon Survive Europe’s Digital Services Act?, Tech Pol’y Press (Nov. 16, 2022), https://perma.cc/W8RC-2XVL.

56 See, e.g., Press Release, Sen. Mark R. Warner, Lawmakers Reintroduce Bipartisan Legislation to Encourage Competition in Social Media (May 25, 2022), https://perma.cc/SC2Z-3XQL.

57 See, e.g., Alan Z. Rozenshtein, First Amendment Absolutism and the Florida Social Media Law, Lawfare (June 1, 2022), https://perma.cc/WXT9-4HAL; see generally Alan Z. Rozenshtein, Silicon Valley’s Speech: Technology Giants and the Deregulatory First Amendment, 1 J. Free Speech L. 337 (2021).

58 See generally Chinmayi Sharma, Concentrated Digital Markets, Restrictive APIs, and the Fight for Internet Interoperability, 50 U. Mem. L. Rev. 441 (2019).

59 See Herbert Hovenkamp, Antitrust and Platform Monopoly, 130 Yale L.J. 1952 (2021).

60 47 U.S.C. § 230(c)(1).
