Introduction
In 2013, eleven-year-old Alexis Spence, a fifth grader, joined Instagram after her classmates made fun of her for not having a social media account.Footnote 1 She was two years under the platform’s minimum age requirement to open an account, but other user content showed her how to obtain a parent’s passcode to disable parental blocks on the social media platform.Footnote 2 On her tablet,Footnote 3 she made her Instagram app icon look like a calculator to hide it from her parents.Footnote 4 After joining the app, Alexis was confronted with algorithm-driven content portraying underweight models and links to extreme dieting websites that glorified anorexia nervosa, negative body image, and self-harm.Footnote 5 When she was twelve years old, Alexis drew a picture of herself crying on the floor next to her phone with words like “stupid,” “ugly,” and “fat” emanating from the screen, and “kill yourself” in a thought bubble.Footnote 6 She saved pictures of anorexic models as “motivation” to look at whenever she felt hungry.Footnote 7 Months after opening the Instagram account, Alexis started showing signs of depression and her parents sought mental health treatment, but she refused to continue to see a therapist after a handful of initial sessions.Footnote 8 In Instagram posts she shared in spring 2018, Alexis wrote: “I hate myself and my body…. Please stop caring about me, I’m a waste of time and space.”Footnote 9 Alerted by school counselors to the posts, Alexis’s parents had her hospitalized.Footnote 10 Alexis was suffering from an eating disorder, anxiety, depression, and suicidal thoughts.Footnote 11
As a result of Alexis’s exposure to Instagram’s toxic algorithm practices, she underwent years of professional counseling through in-patient and out-patient programs, participated in eating disorder treatment services, used a service dog, and required ongoing medical attention to ensure she did not relapse.Footnote 12 In June 2022, when Alexis was nineteen, the Social Media Victims Law Center filed a personal injury lawsuit on her behalf in California federal court alleging that Meta Platforms, Inc. (Meta), Instagram’s parent company, purposely designed its social media platform to addict young users, and that Meta steered her down a years-long path of physical and psychological harm.Footnote 13
Social media algorithms that push extreme content to vulnerable youth are linked to a pronounced increase in mental health problems for adolescents, including poor body image, eating disorders, and suicidality. A 2021 Wall Street Journal investigation revealed that TikTok floods child and adolescent users with videos of rapid weight loss methods, including tips on how to consume less than 300 calories a day, and encourages a “corpse bride diet,” showing emaciated girls with protruding bones.Footnote 14 The journalistic investigation involved the creation of a dozen automated accounts registered as thirteen-year-olds and revealed that TikTok’s algorithm-driven “For You” page, a section of the platform that algorithmically recommends content to users, fed adolescent accounts tens of thousands of weight-loss videos within just a few weeks of joining the platform.Footnote 15
Another report revealed the scale and intensity with which TikTok bombards vulnerable teen users with dangerous content that might encourage self-harm, suicide, and disordered eating. In 2022, researchers from the Center for Countering Digital Hate studied the TikTok algorithm by establishing new social media accounts posing as thirteen-year-old girls in the United States, United Kingdom, Australia, and Canada.Footnote 16 Researchers recorded the first thirty minutes of content automatically recommended by TikTok to these accounts in their “For You” page.Footnote 17 The study revealed that the volume of harmful content shown to vulnerable accounts (i.e., with the term “loseweight” in their username) was significantly higher than that shown to standard accounts.Footnote 18 For instance, vulnerable accounts were served twelve times more self-harm and suicide videos than standard accounts.Footnote 19
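To make concrete the kind of comparison underlying these findings, consider the following simplified sketch (hypothetical data and labels, not the Center’s actual code or coding scheme). It tallies the labeled videos logged from each test account’s recommendation feed and computes the relative rate at which harmful content was served:

```python
from collections import Counter

# Hypothetical logs: one label per video recommended during the
# thirty-minute observation window for each type of test account.
standard_feed = ["dance", "comedy", "self_harm", "diet", "pets", "comedy"]
vulnerable_feed = ["self_harm", "diet", "self_harm", "dance", "self_harm", "diet"]

def exposure_rate(feed, label):
    """Share of recommended videos in a feed log carrying a given label."""
    return Counter(feed)[label] / len(feed)

for label in ("self_harm", "diet"):
    std = exposure_rate(standard_feed, label)
    vul = exposure_rate(vulnerable_feed, label)
    ratio = vul / std if std else float("inf")
    print(f"{label}: standard={std:.2f}, vulnerable={vul:.2f}, {ratio:.1f}x")
```

A finding such as “twelve times more self-harm videos” is the vulnerable-to-standard ratio computed in this fashion, aggregated across many test accounts.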
Social media companies employ algorithms for a variety of reasons, but their primary purpose is keeping users engaged with constant feeds of information for extended periods of time; such engagement generates massive profits for the companies, which are paid by advertisers to target ads at particular demographics.Footnote 20 A recent study by our research team, discussed later in this Article, found that in 2022, major social media platforms earned nearly $11 billion in advertising revenues from U.S. children ages zero to seventeen years.Footnote 21 Given these handsome profits, social media platforms have little incentive to moderate their own harmful practices. Policymakers must instead step forward and make changes to curb the harmful use of algorithms by social media platforms. Legal obstacles, however, stand in the way of such reform.
Social media platforms currently enjoy substantial First Amendment speech protection and are relatively insulated from culpability by the liability shield extended to website owners and operators under Section 230 of the Communications Decency Act (CDA).Footnote 22 Alexis Spence’s lawsuit is part of a recent wave of litigation that attempts to circumvent Section 230 and other free speech protections, but the lawsuit is designed to help only one person, with relief made possible only after harm has been inflicted. Further, while studies have shown an association between exposure to social media algorithms and increased mental health harms in young users (detailed below), the task of demonstrating that social media has directly caused such harms has been difficult because platforms do not allow external researchers access to their algorithms. Stronger evidence of causation is needed to demonstrate that social media platforms are liable for harm.
This Article advocates for state and federal legislation requiring social media companies to conduct periodic algorithm risk audits that measure the incidence of harm inflicted on young users. Such risk audits should be conducted by independent third parties, and the results should be publicly disclosed. This policy measure is urgently needed to curb social media companies’ pernicious use of relentless algorithms and to protect the millions of young users who are vulnerable to their harms.
The first section of this Article examines the federal Children’s Online Privacy Protection Act (COPPA), the age restriction for young users of social media, and the failure of platforms to verify the age of users. The second section discusses the results of public health and neuroscience studies that demonstrate evidence of mental health harms to adolescents resulting from social media use. This section also discusses how this evidence could help establish the causation needed to prove unfair and deceptive business practice claims and products liability claims. The third section presents results of a new study showing that social media platforms are economically incentivized to keep young users actively engaged on their platforms. The fourth section discusses the legal obstacles to preventing harm to young people caused by social media algorithms and the Supreme Court cases in which the Court refused to rein in the blanket immunity currently granted to social media platforms under Section 230 of the CDA. The fifth section explores strategies to circumvent First Amendment protection and the immunity granted by Section 230 of the CDA, including bringing an unfair or deceptive business practice claim against social media platforms, filing a products liability lawsuit, and bringing claims under a public nuisance tort theory. The sixth section discusses recent state-level legislation, including the California Age Appropriate Design Code, which is a promising step in addressing harms to young people but faces a court challenge, as well as California’s Data Protection Impact Assessment law and Utah’s Social Media Amendments law, which are less effective because they rely on social media companies to evaluate themselves. It also discusses the Kids Online Safety Act, introduced in Congress in 2023, which would give parents and users under seventeen the ability to opt out of algorithmic recommendations, limit the time young people spend on a platform, and require platforms to undergo risk assessments conducted by independent third parties, though it is uncertain whether the bill will become law. The final section advocates for states to require social media companies to conduct algorithm risk audits that would provide evidence for legal actions seeking to reform the harmful practices of social media platforms.
I. A Growing Number of Young People Have Easy Access to Social Media Platforms and Their Resulting Harms, and Current Federal Law Restricting Young Users Is Ineffectual
Ease of access is a foundational issue in understanding how social media platforms affect adolescent mental health. Social media has become increasingly popular over the past two decades. The beginning of popular social media as we know it arguably dates to 2004, when MySpace became the first social media platform to reach one million monthly active users.Footnote 23 Throughout the next decade, social media became an integral part of many lives—especially those of adolescents. Popular platforms began to spring up, notably Facebook in 2004, Twitter in 2006, and Instagram in 2010.Footnote 24 The popularity of social media grew alongside the number of available platforms. When TikTok launched in 2016, social media was so popular that the platform gained half a billion users worldwide in less than two years.Footnote 25 TikTok, a platform where young users create and share short videos often showing themselves singing, dancing, doing comedy, and lip-syncing, on average added twenty million new users each month during its first two years.Footnote 26 A large portion of social media users are under the age of eighteen.Footnote 27 Exposing minors to the harmful content and addictive design of these platforms has produced a generation plagued by the constant need to be online, where they are confronted by content that can damage their mental health.
Tammy Rodriguez is among the increasing number of parents who understand the toll social media platforms take on children’s mental health. On January 20, 2022, she filed a wrongful death lawsuit against Meta on behalf of her eleven-year-old daughter, Selena, who took her own life as a result of being severely addicted to social media.Footnote 28 As a result of her use of Instagram and Snapchat, Selena was hospitalized for emergency psychiatric care and experienced “worsening depression, poor self-esteem, eating disorders, self-harm, and, ultimately, suicide.”Footnote 29 In her complaint, Ms. Rodriguez claims that, due to the lack of parental controls on Instagram and Snapchat, the only way to effectively limit Selena’s access to social media was to physically confiscate her phone, which caused her to run away to access her social media accounts on other devices.Footnote 30 Selena was solicited several times for sexually exploitative content and once sent sexually explicit images, which were leaked to her classmates.Footnote 31 Ms. Rodriguez claims that Meta knew or should have known that its platform was harmful to a significant percentage of its minor users and still failed to redesign its products to ameliorate these harms.Footnote 32
This is an ongoing lawsuit, and, unfortunately, one of many. Selena was exposed to social media platforms at a very young age and suffered severely because of it. Theoretically, Selena should have been protected online by COPPA, but loopholes in the law wrongly expose many children to the addictive design of and harmful content on social media platforms.
A. The Federal Children’s Online Privacy Protection Act Does Not Adequately Protect Children Against the Mental Health and Addictive Harms of Social Media
Congress enacted COPPA in 1998 with the primary goal of placing parents in control of the information collected from their young children online.Footnote 33 COPPA prohibits social media platforms from collecting, using, or disclosing the personal information of children under the age of thirteen years without verifiable parental consent.Footnote 34 COPPA defines personal information as the child’s first and last name; physical address; online contact information, including username; telephone number; social security number; persistent identifiers, such as IP address; photographs, videos, or audio files that contain the child’s image or voice; and geolocation.Footnote 35 COPPA applies to a social media platform where the platform either (1) is directed to children under thirteen or (2) has actual knowledge that it is collecting, using, or disclosing the personal information of someone under thirteen years old.Footnote 36
As a result of the age restriction contained in COPPA, a vast majority of social media platforms require users to be at least thirteen years old to open an account.Footnote 37 These same platforms insist that COPPA regulations do not apply to them because the platforms are not directed at children under the age of thirteen.Footnote 38 These platforms’ age-minimum “workarounds” result in mental health harm to the adolescents who use them, because many platforms do not enforce age verification, allowing young users to easily misrepresent their age to gain access.Footnote 39 Young users are thus left vulnerable not only to the harmful content present on the platforms but also to the exploitative business practices that manipulate users into staying on the platforms longer, such as infinite scrolling of content and features that encourage posting content to obtain “likes.”Footnote 40 By failing to establish effective ways to verify users’ ages, social media companies ultimately enable minors under the age of thirteen to set up accounts without verifiable parental consent—and place themselves squarely in direct violation of COPPA. These platforms’ failures to verify user age circumvent COPPA’s very purpose, which is to protect against the collection, use, and disclosure of the personal information of minors under the age of thirteen.
Many social media platforms also fail to comply with the advertising rules that COPPA sets forth. COPPA prohibits social media platforms from using behavioral or demographic advertising, due to the ban on collection of personal information from users under the age of thirteen absent verifiable parental consent.Footnote 41 (Behavioral advertising is curated based on the web-browsing behavior of the user, while demographic advertising is curated based on the personal demographic information of the user.Footnote 42) Therefore, when adhering to COPPA regulations, social media platforms must deliver advertising on only a contextual basis—placing ads on webpages based on the context of those webpages—to those under thirteen.Footnote 43
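The operative distinction can be illustrated in a few lines of code. The sketch below is hypothetical (invented types and fields, not any platform’s actual ad system); it shows an ad selector falling back to page context alone when the viewer is under thirteen and lacks verifiable parental consent:

```python
from dataclasses import dataclass, field

@dataclass
class User:
    age: int
    parental_consent: bool = False
    browsing_history: list = field(default_factory=list)  # fuels behavioral ads

def select_ad_basis(user: User, page_topic: str) -> str:
    """Return the targeting basis COPPA would permit for this ad impression."""
    if user.age < 13 and not user.parental_consent:
        # No personal information may be collected or used, so only the
        # context of the page itself may drive the ad choice.
        return f"contextual ad matched to '{page_topic}'"
    # COPPA does not bar behavioral or demographic targeting for this user.
    return f"behavioral ad based on {user.browsing_history or 'profile data'}"

print(select_ad_basis(User(age=12), "chess puzzles"))
print(select_ad_basis(User(age=16, browsing_history=["sneakers"]), "chess puzzles"))
```

The point of the sketch is that the restriction binds only if the platform actually knows the user’s age, which is exactly the premise that misrepresented birthdates defeat.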
However, when users misrepresent their age to open an account, social media platforms that rely on that inaccurate data are essentially allowed to disregard COPPA’s advertising restrictions, and instead expose their young users to behavioral and demographic advertising as well as contextual advertising. This issue is further compounded by the fact that many social media companies disregard COPPA because they blithely claim their platforms are not targeted to children;Footnote 44 some platforms do not even attempt to detect underage users who join the site with a falsified birthdate.
1. Disregard of COPPA Requirements by Social Media Platforms, Low Age Cut-off, and Inadequate Age Verification Procedures Result in Harm to Young Social Media Users
A 2022 study by Pixalate (a fraud protection, privacy, and compliance analytics platform) revealed that social media platforms’ disregard of COPPA is likely exposing children to harmful advertising.Footnote 45 The study showed that while many apps claim their advertisements are not targeted to children, eight percent of Apple App Store apps and seven percent of Google Play Store apps actually are targeted to minors.Footnote 46 The study also found that child-targeted apps are forty-two percent more likely to share both GPS and IP address information with third-party digital advertisers than are non-child-targeted apps.Footnote 47 As noted earlier, geolocation and persistent identifiers, such as IP address, are considered personal information under COPPA; thus, COPPA forbids the collection, use, and disclosure of this kind of information absent verifiable parental consent.Footnote 48 Lastly, the Pixalate study revealed that advertisers spend 3.1 times more money on child-targeted apps than they spend on apps directed to a general audience.Footnote 49 Because advertisers focus a larger share of their financial resources on child-targeted apps, exposure to potentially harmful advertising is more likely for users of child-targeted apps than for users of general-audience apps.
Congress has explicitly recognized the problem of minors’ use of social media platforms without verifiable parental consent, and the reality of platforms’ evasions of COPPA by claiming not to be child-targeted. In the 2021 Appropriations Act, Congress directed the Federal Trade Commission (FTC) to study and report on “whether and how artificial intelligence (AI) may be used to identify, remove, or take any other appropriate action necessary to address a wide variety of specified online harms.”Footnote 50 One application of AI relevant to these harms is age verification: AI can be used to estimate whether users appear to be under the age of thirteen years.Footnote 51 Social media platforms could thus use AI to determine whether a user joined the site with a fake birthdate and whether COPPA regulations are applicable.
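To illustrate how such a check might work in principle, the sketch below pairs a declared birthdate with a hypothetical AI age estimate; the model behind `estimate_age` is an assumption for illustration, not a real product or API:

```python
def estimate_age(profile_signals: dict) -> float:
    """Hypothetical stand-in for an AI model inferring a user's age (in years)
    from signals such as writing style, activity patterns, or profile photos."""
    return profile_signals.get("model_estimate", 18.0)

def coppa_review_needed(declared_birth_year: int, current_year: int,
                        profile_signals: dict, margin: float = 2.0) -> bool:
    """Flag an account for review when the AI estimate suggests the user is
    under thirteen or the declared birthdate looks falsified."""
    declared_age = current_year - declared_birth_year
    estimated_age = estimate_age(profile_signals)
    return estimated_age < 13.0 or (declared_age - estimated_age) > margin

# A user who registered with a birth year implying age 18,
# but whom the model estimates to be about 11:
print(coppa_review_needed(2005, 2023, {"model_estimate": 11.0}))  # True
```

As the next paragraph explains, however, the FTC has cautioned against relying on exactly this kind of tool.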
However, the FTC, in its responding report to Congress, advised against the usage of AI technology for this purpose, warning that using AI technology as a policy solution could lead to a myriad of unintended harms.Footnote 52 These harms derive from the inherent design flaws and inaccuracy of AI tools—including the potential bias built into the tool that reflects the biases of its developers—and the possibility of AI tools incentivizing and enabling invasive commercial surveillance and data extraction practices due to the vast amount of data required to develop, train, and use the tool.Footnote 53 The FTC advised that policies to alleviate online harms must not be rooted in the use of AI.Footnote 54 Rather, the FTC posits, it is imperative to understand the specific ways in which social media platforms are harmful to children and adolescents to enable policymakers to explore legal remedies and strategies that would hold the platforms accountable for the harm they create.
Congress is currently focused on the minimum age requirements that social media platforms impose upon users who wish to open accounts. By protecting only minors under the age of thirteen years, COPPA treats minors thirteen and older as adults, thus exposing them to the harms of social media without any age-related restrictions.Footnote 55 This issue has been recognized by a bipartisan group of U.S. Senators who sponsored a bill in 2022, and reintroduced it in 2023, that would raise the cut-off age to seventeen years.Footnote 56 This bill is discussed later in this Article.Footnote 57
II. Public Health and Neuroscience Studies Point to Mounting Evidence of Mental Health Harms to Adolescents Resulting from Social Media Use
A. Rigorous Experimental and Longitudinal Public Health Studies of Social Media Effects Strongly Suggest Social Media Has a Harmful Impact on the Mental Health of Young Users
A growing body of evidence demonstrates that high amounts of social media use, and of image-based social media in particular, are associated with poor mental health outcomes. Social media platforms are designed to provide content to retain viewers, using algorithms that populate individual feeds with material that entices users to stay engaged for longer periods of time.Footnote 58 Algorithm-driven features, such as limitless scrolling, social pressure and social reward (e.g., “likes” on posts), notifications, and individualized content feeds, are designed to maximize time spent on platforms.Footnote 59 Practices of social media platforms and apps designed to retain the attention of users are essential, albeit pernicious, features of platforms’ business models, which are predicated on monetizing users’ time and attention.
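The core mechanism is straightforward to state in code. The following is a toy sketch of an engagement-optimizing ranker (invented weights and fields, not any platform’s actual system): each candidate post is scored by its predicted ability to hold the user’s attention, and the feed is simply the highest-scoring candidates, replenished without end:

```python
from dataclasses import dataclass

@dataclass
class Post:
    topic: str
    predicted_watch_seconds: float  # output of an engagement-prediction model
    predicted_like_prob: float      # probability the user taps "like"

def engagement_score(post: Post, topic_affinity: dict) -> float:
    # Hypothetical weights; real systems tune such weights against
    # aggregate time-on-platform, not against user well-being.
    affinity = topic_affinity.get(post.topic, 0.1)
    return affinity * (post.predicted_watch_seconds + 30 * post.predicted_like_prob)

def next_feed_page(candidates: list[Post], topic_affinity: dict, k: int = 3) -> list[Post]:
    """Return the k candidates most likely to prolong the session. Nothing in
    this objective asks whether the content is healthy for the viewer."""
    return sorted(candidates,
                  key=lambda p: engagement_score(p, topic_affinity),
                  reverse=True)[:k]
```

Because dwell time on one kind of content raises the affinity used to score the next page, a user who lingers on dieting videos is served more of them; the objective rewards whatever holds attention.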
These practices can be harmful to the mental health of users, particularly young users. To understand the associations between social media use and mental health outcomes, a plethora of research studies have been conducted, which have subsequently been summarized in several systematic reviews and meta-analyses.Footnote 60 A 2020 review summarizing the results of studies published between 2011 and 2018 that evaluated associations between social media use and indicators of mental health problems among adolescents concluded there was a positive association, while also noting the complexity of the relationship.Footnote 61 The authors stated that aspects of adolescents’ personal and social identity formation may be vulnerable to the effects of social media use and described hypothesized mechanisms including limited self-regulation skills, displacement of sleep and/or physical activity, and negative social comparisons.Footnote 62 The review identified risk factors for mental health problems, including time spent on social media, personal investment, repeatedly checking for messages, and addictive use.Footnote 63
While the body of research covering the early years of social media lays the groundwork for understanding the relationship between social media use and mental health outcomes, the platforms and their business practices have changed in profound ways, rendering updated research necessary. Much of the research to date focuses on early platforms, such as Facebook and Twitter, which were created in 2004 and 2006, respectively, and does not adequately assess or account for the impact of currently popular platforms, such as Instagram and TikTok, which were created more recently, in 2010 and 2016, respectively. Due to the rapid changes in the industry since its inception, the study designs used in the early years of social media research among youth provide limited insight into the effects of social media in its current form.
In recent years, scholars have more precisely assessed social media, and have employed more rigorous study designs, including experimental and longitudinal observational cohort studies with young people followed over years.Footnote 64 These enhancements strengthen the quality of the evidence generated by these studies and our ability to make causal conclusions about the relationship between social media use and mental health among youth. The most compelling studies in recent years have been those examining associations of social media use with body image and eating disorders and also those examining anxiety, depression, and suicidality.
Body image consists of the thoughts, feelings, and perceptions an individual has about the way they look,Footnote 65 and social media use has been associated with poor body image (i.e., body dissatisfaction).Footnote 66 Eating disorders are a serious public health concern,Footnote 67 and adolescence is a vulnerable window for the onset of disordered eating behaviors.Footnote 68 A number of studies have explored the relationship between social media use and body image using experimental designs, where participants are randomly assigned to different exposures or experiences of social media content to allow for comparisons between groups.Footnote 69 Random assignment helps researchers isolate the impact of the experiment, rather than external factors.Footnote 70 One European study found that an interaction between peer feedback and images of professional models contributed to adolescent girls’ conceptualization of what an “ideal” body shape is, as well as differences in individual susceptibility to perceiving the ideal body as very thin.Footnote 71 Another experimental study randomly assigned U.S. undergraduate college women to groups using either Facebook or Instagram, or to a control group (the control group participants played a game rather than use social media).Footnote 72 The researchers found that Facebook and Instagram users reported engaging in more appearance comparisons than the control group.Footnote 73 Further, Instagram users also reported increased appearance comparison relative to Facebook users, and experienced decreased body satisfaction and increased negative affect.Footnote 74 In a third experimental study examining Instagram use, male and female college students viewed posts with two body-size conditions: a thin body type and a higher-weight body type.Footnote 75 Researchers measured attention to the Instagram posts and state body dissatisfactionFootnote 76 and found that exposure to images with thin-body portrayals resulted in both increased attention to the posts and increased body dissatisfaction compared to participants exposed to images of a higher-weight body type.Footnote 77 Female participants who perceived their own body type as higher-weight experienced increased body dissatisfaction in response to thin-image posts compared to higher-weight-image posts; this was not observed for females who perceived their body type as thin or average weight.Footnote 78 Another randomized controlled trial evaluated the impact of a break from social media (including Facebook, Instagram, TikTok, and Twitter) and found that individuals who were randomly assigned to a group that stopped using social media for a week saw improvements in measures of depression and anxietyFootnote 79 compared to participants who continued to use social media as usual.Footnote 80
Ultimately, experimental study findings make clear that the kinds of social media platforms adolescents use have different effects on mental health outcomes, and that image-based or visual platforms are an important driver of the associations with worse mental health outcomes, particularly regarding body image and disordered eating.Footnote 81
In addition to experimental study designs, longitudinal cohort study designs provide some of the most rigorous research findings to date linking social media use to eating disorders risk. For instance, a UK-based longitudinal observational study that enrolled youth (fifty-six percent male, mean age at time 1 = 14.3 years) and assessed social media use and body satisfaction at three times over one year found that adolescents with higher social media use engaged in more social comparison,Footnote 82 which was then associated with lower body satisfaction later in the year.Footnote 83 One U.S.-based study followed a cohort of adolescents beginning at age ten to thirteen years and measured social media use as a distinct component of media use.Footnote 84 This specificity in defining what to measure is key, as earlier research studies tended to assess “screen time” as a catch-all term that also included screen-based activities like television and internet browsing, making it difficult to ascertain the impact of social media as distinct from other activities.Footnote 85 Researchers found that adolescent girls with high social media use (two to three hours per day) early in adolescence who subsequently increased use over time had increased suicide risk ten years later.Footnote 86 Data from the longitudinal UK-based “Our Futures” study show that frequent social media use (two to three times per day) at the beginning of the study, when participants were age twelve to fourteen years, was associated with more psychological distress as measured by the General Health QuestionnaireFootnote 87 at follow-up, when these same participants were ages fifteen to sixteen years.Footnote 88 Finally, in a longitudinal study conducted among U.S. high school students, appearance-related social media consciousness at the start of the study was associated with subsequent depressive symptoms one year later.Footnote 89
Results of these experimental study designs and longitudinal cohort study designs provide rigorous evidence linking high amounts of social media use, and time spent on image-based social media in particular, to mental health harms in young users.
B. Neuroscience Pathways Directly Link Social Media Use to Mental Health Risks in Young Social Media Users
Evidence from the psychological literature highlights two psychological processes that are especially important in explaining how social media use negatively impacts adolescent mental health, particularly as related to eating disorders risk: upward comparison and thin-ideal internalization. Upward comparison occurs when an individual compares aspects of themselves (e.g., appearance) against more popular or esteemed others, such as social media influencers, professional models, or celebrities.Footnote 90 Research on Instagram suggests that among adolescents, use of the platform increases the tendency of users to engage in upward comparison, which is ultimately associated with body dissatisfaction.Footnote 91 Thin-ideal internalizationFootnote 92 occurs when a society highly values being thin as a component of being attractive, and it can be especially harmful when an adolescent within that society adopts the cultural norm equating thinness with attractiveness as their own belief. Thin-ideal internalization is a risk factor for body image disturbancesFootnote 93 and contributes to hyperconsciousness about one’s appearance, including frequent body checking and shame (i.e., feeling like a bad person for how they look or how much they weigh, or feeling ashamed for not being smaller).Footnote 94 Evidence linking social media use to this type of harmful upward comparison and thin-ideal internalization, and subsequently to eating disorder symptoms, is stronger in adolescent girls than boys.Footnote 95 This evidence highlighting a greater impact on adolescent girls is corroborated by a 2023 statement from the U.S. Surgeon General acknowledging that social media use “perpetuate[s] body dissatisfaction, disordered eating behaviors, social comparison, and low self-esteem, especially among adolescent girls.”Footnote 96
Evidence from the neuroscience literature has identified several features of the adolescent brain that lead to uniquely elevated risks of social media use in adolescents compared to adults or even to younger children. First, adolescence is a time of heightened sensitivity to peer feedback and social cues, which are processed by the brain’s social cognition and emotional response circuitry, including brain regions such as the amygdala, striatum, and medial prefrontal cortex.Footnote 97 When sense of self-worth and identity is forming in adolescence, “brain development is especially susceptible to social pressures, peer opinions, and peer comparison.”Footnote 98 As such, the adolescent brain is particularly tuned in to “rewarding” feedback on social media, such as “likes” on a post. Adolescents use this information to shape their understanding of social norms and values.Footnote 99 For example, if an adolescent posts an image of a thin model and subsequently receives hundreds of “likes,” their brain interprets that they were “rewarded” for sharing the image and they are more likely than adults to use the “likes” to inform their concept of what images are socially desirable.
Additionally, the naturally uneven pace of development of different regions of the brain during adolescence exacerbates vulnerability to the harms posed by social media use. For instance, during adolescence, brain regions that process emotions (e.g., the amygdala) develop faster, while brain regions responsible for decision-making, reasoning, and impulse control (i.e., frontal regions) develop more slowly and continue to develop well into young adulthood.Footnote 100 This lopsided neural development is associated with heightened emotionality and sensitivity to emotion-inducing social media content, since adolescents’ ability to regulate emotional responses is hindered.Footnote 101 Lastly, adolescents are more sensitive than adults to social rewards (in contrast to monetary or other types of rewards).Footnote 102 The activation of reward processing regions in the brain when using social media platforms can make these platforms highly influential on teens. Features including getting “likes” on a post or comment, autoplay, infinite scroll, and algorithms that leverage use data to serve content recommendations motivate continued engagement despite psychological harms and promote excessive use of social media.Footnote 103 As a common example, a teen who received several “likes” on a previously shared picture of themselves vaping is neurologically motivated to post a similar picture to receive the same stimulating reward.
In sum, neuroscience research has identified unique characteristics of the adolescent brain that place adolescents, rather than adults or younger children, at particular risk to negative mental health effects of social media use. These characteristics include: (1) the heightened sensitivity to social cues, (2) increased emotional responses as a product of underdeveloped judgment regions and more mature emotion processing regions, and (3) social media’s ability to activate reward processing regions in the brain to motivate continued engagement. Social media platforms that are highly visual or image-based, where digitally altered and unrealistic images of body shape and thinness are common, compound the links between social media use and subsequent mental health harms.Footnote 104 Adolescents are especially sensitive to peer feedback that communicates social preferences and have exaggerated emotional responses because of the brain’s reduced ability to regulate emotional responses. Persistent exposure to social media content is driven by algorithms and platform practices that engage sensitive reward processing structures to motivate teens to stay on platforms longer. Altogether, the interaction of normative adolescent neurodevelopment with features of social media platforms, particularly those that are image-based, increases mental health risks to young people. This developmental-stage-based vulnerability must be accounted for when (1) assessing the harm inflicted on young users by social media platform practices and (2) creating legislation and regulations to curb such harms.
III. Social Media Platforms Are Economically Incentivized to Use Relentless Algorithms that Push Harmful Content to Young Online Users
The immense advertising revenue social media platforms generate from young users discourages efforts by the platforms to self-regulate and curb the online harms caused to young people. The economic benefit social media companies enjoy from exploiting young social media users is assumed to be considerable but has not been well documented. Social media platforms have no obligation to release data surrounding the types of content to which young users are exposed or the impacts of such content.Footnote 105 And platforms are highly incentivized to keep youth online; children’s online experiences are heavily monetized through advertising revenue on social media platforms and mobile applications.Footnote 106
Since platforms are not held accountable to children or to regulatory agencies,Footnote 107 they are not required to report advertising revenue or the age breakdown of their users. To fill gaps in the information on how much revenue social media platforms generate from minors, the authors of this Article obtained data from a business marketing source and from public survey dataFootnote 108 to conduct a novel simulation analysis providing the first known estimates of the number of users and the annual advertising revenue generated from U.S.-based users aged zero to twelve and thirteen to seventeen years for six major social media platforms. We found, across the major platforms, that annual advertising revenue from U.S. children ages zero to twelve years is estimated to be over $2 billion, and from all children ages zero to seventeen years, nearly $11 billion.Footnote 109 For several social media platforms, thirty to forty percent of their annual advertising revenue is generated from users ages zero to seventeen years. The massive revenue generated from young users discourages social media platforms from self-regulation and further demonstrates the need for government policy and legislative intervention to curb harms.
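The structure of such an estimate can be conveyed in a short sketch. The Monte Carlo toy below is not the study’s actual model, and every number in it is a placeholder; it simply shows how a platform’s reported U.S. ad revenue can be combined with survey-based ranges for the share of users who are minors, and for their relative time on the platform, to produce an interval estimate rather than a single guess:

```python
import random

def simulate_minor_revenue(total_us_ad_revenue: float,
                           minor_user_share: tuple[float, float],
                           minor_time_multiplier: tuple[float, float],
                           n_draws: int = 10_000) -> tuple[float, float, float]:
    """Monte Carlo estimate of ad revenue attributable to minor users.
    Range inputs are (low, high); revenue is apportioned by the minors'
    share of total attention, since ads are sold against attention."""
    draws = []
    for _ in range(n_draws):
        share = random.uniform(*minor_user_share)
        time_mult = random.uniform(*minor_time_multiplier)
        attention_share = share * time_mult / (share * time_mult + (1 - share))
        draws.append(total_us_ad_revenue * attention_share)
    draws.sort()
    return (draws[n_draws // 2],           # median
            draws[int(0.025 * n_draws)],   # 2.5th percentile
            draws[int(0.975 * n_draws)])   # 97.5th percentile

# Placeholder inputs: $10B U.S. ad revenue, 15-25% of users are minors,
# and minors spend 0.9-1.3x as much time on the platform as adults.
median, lo, hi = simulate_minor_revenue(10e9, (0.15, 0.25), (0.9, 1.3))
print(f"median ${median/1e9:.1f}B (95% interval ${lo/1e9:.1f}B-${hi/1e9:.1f}B)")
```

Repeating such a calculation per platform and per age band, with inputs drawn from marketing and survey data, yields aggregate figures of the kind reported above.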
IV. Legal Obstacles to Preventing Harm to Young People Caused by Social Media Algorithms and the Strategies to Circumvent Them: How to Grapple with First Amendment Speech Protections and Section 230 of the Communications Decency Act
A. First Amendment Protection for Content on Social Media Platforms Allows Harm to Be Inflicted on Young Users Through the Platforms’ Use of Algorithms
Those attempting to regulate harms to children and teens resulting from time spent on social media platforms face the daunting legal obstacles of the First Amendment and Section 230 of the federal CDA. As technology becomes more and more entangled with the everyday life and communication of most Americans, social media platforms like Facebook, Instagram, Twitter, Snapchat, and TikTok have become forums where individuals can exercise their right to free speech. The First Amendment protects a wide swath of speech, ranging from highly protected political speech to lesser-protected commercial speech and sexually explicit speech.Footnote 110 Certain categories of speech, such as defamation, bribery, incitement, fighting words, and conspiracy to commit a crime, are illegal and do not enjoy First Amendment protection.Footnote 111 Generally, however, laws trying to regulate the specific content of speech, for instance hate speech, will be found unconstitutional, while content-neutral laws that instead regulate the time, place, and manner of speech, no matter its content, will not be deemed violative of the First Amendment.Footnote 112
Those trying to regulate the harms of social media platforms risk violating First Amendment free speech rights because of the restrictions they seek to impose on content, specifically the algorithms used by social media platforms.Footnote 113 Thus, the more specific issue is whether algorithms—in this case, computer programs that sort and recommend content for users of social media—are protected speech.Footnote 114 Though the Supreme Court has not ruled on the issue of whether algorithms are protected under the First Amendment, algorithms are likely protected speech under the Free Speech Clause because algorithms are, in essence, computer code,Footnote 115 and federal courts have repeatedly found that computer code is speech protected under the First Amendment.Footnote 116 Similarly, federal courts have found that search engine results are protected speech under the First Amendment.Footnote 117 Algorithms and search engine outputs function similarly in that both are edited compilations of speech generated by other individuals, such as engineers, and are arranged to appear in a specific order on a user’s social media feed.Footnote 118 Due to this similarity, courts would likely find that a social media algorithm is a type of computer code or output code, and is consequently protected under the First Amendment.Footnote 119
Thus, the content circulated by a social media platform’s algorithm—the information that shows up in the feeds of social media users—likely cannot be specifically targeted by any legislation due to First Amendment protections.Footnote 120 Neither can the actual computer code of the algorithm that selects and directs the content, even though some of it is harmful to young people.Footnote 121 Legislation aiming to regulate harmful algorithms must surmount this high bar of First Amendment speech protection.
B. Immunity Granted to Social Media Platforms by Section 230 of the Communications Decency Act Protects Social Media Companies Against Injury Claims
The second major obstacle to regulating harm caused by social media platforms is Section 230 of the CDA. Section 230 grants immunity to online services, meaning that online services are not liable for the speech of third parties published on their platforms.Footnote 122 Enacted in 1996, Section 230 was considered foundational in supporting the internet as a free speech medium. At that time, however, social media did not exist and was not a primary source of communication and information as it is today.
Nevertheless, courts in recent years have applied Section 230’s protections to social media platforms, including Facebook and Twitter.Footnote 123 As a result, Section 230 has become a major roadblock to legislation aimed at protecting children and teens from online harms. Certainly, Section 230 has had a positive impact, notably keeping in business the social media companies that have become so central to everyday life.Footnote 124 However, many legislators attempting to regulate harmful content in digital spaces argue that Section 230 is overbroad and has granted immunity to platforms that knowingly profit from harmful content on their social media platforms.Footnote 125
When drafting legislation to regulate social media harms, legislators may succeed in circumventing Section 230’s rigorous protections in the following three circumstances:
First, if the defendant may have induced or contributed to the development of the illegal content in question, then Section 230 does not apply. Second, if the plaintiff’s claim does not arise from the defendant’s publishing or content moderation decisions, then Section 230 does not apply because Section 230 does not protect providers from all liability (only liability from its role as a publisher). Third, if the case relates to a content-removal decision and the defendant fails to meet Section 230(c)(2)’s “good faith” requirement, then Section 230 does not apply because the defendant does not qualify for its protection.Footnote 126
1. U.S. Supreme Court Declines to Limit Section 230 Blanket Immunity for Social Media Companies, but a Revenue Sharing Claim that Could Remove Section 230 Protection Potentially Exists
Of note are the legal challenges to the broad immunity granted to social media platforms by Section 230 that have arisen in recent years, including two cases decided by the U.S. Supreme Court in 2023. In those decisions, Twitter, Inc. v. Taamneh and Gonzalez v. Google, the U.S. Supreme Court declined to limit the broad immunity Section 230 offers to social media companies for promoting inappropriate content that is published by third parties on their platforms. (The decisions did, however, appear to leave intact a revenue sharing theory under which a plaintiff may allege that a platform that commercially profits from an algorithm that pushes illegal content could be considered an information content provider, thus removing the immunity protection of Section 230 and opening the platform up to liability.) These rulings were celebrated by tech companies and their allies as a win for free expression on the internet,Footnote 127 while critics of Section 230 viewed the decisions with disappointment.
In Twitter, Inc. v. Taamneh, the family of Nawras Alassaf, who was killed in an ISIS terrorist attack on the Reina nightclub in Istanbul, Turkey, alleged that social media companies knowingly aided ISIS in violation of the Anti-Terrorism Act by allowing ISIS content on their platforms, failing to remove such content, and recommending ISIS content using algorithms.Footnote 128 The Supreme Court unanimously held that social media companies, including Twitter, did not “aid and abet” ISIS simply because their algorithms recommended ISIS content, failed to remove such content, or knew such content existed on the platform.Footnote 129 The Court explained that these actions did not rise to the level of substantial assistance required to seek damages under the Anti-Terrorism Act for a secondary-liability claim.Footnote 130 Although the Taamneh decision did not directly address Section 230, it declined to impose any third-party liability on Twitter because the company did not “knowingly” provide substantial assistance and thus could not have aided and abetted ISIS in the terrorist attack on the Reina nightclub.
In Gonzalez v. Google, the Court left Section 230 fully intact and declined to rule definitively on whether it protects a platform’s recommendation algorithms because the plaintiffs in Gonzalez failed to state a claim.Footnote 131 Nohemi Gonzalez was killed in 2015 during an ISIS terrorist attack in Paris while studying abroad.Footnote 132 Nohemi’s family alleged that Google, Twitter, and Facebook aided and abetted ISIS with algorithms and recommended video content.Footnote 133 Specifically, the family asserted a revenue sharing theory, alleging that the platforms placed paid advertisements in proximity to ISIS-related content and shared in the resulting ad revenue; therefore, the three social media companies should be liable for the ISIS-related content they generated revenue from.Footnote 134
The revenue sharing theory articulated in Gonzalez asserts that, if a platform is commercially profiting off of an algorithm, it should be considered an information content provider under Section 230, thus barring immunity from liability. A recent California case, In re Apple Inc. Litigation, analyzed this theory. The case involved social media casino apps, including the purchase of virtual “chips” to wager for gambling purposes.Footnote 135 There, the plaintiffs asserted two distinct theories of revenue sharing that would arguably bar the platforms from immunity under Section 230. Under the first theory, plaintiffs alleged that the platforms operated as the payment processor for all purchases of virtual chips and thus aided in the “exercise of illegal gambling by selling chips that [were] substantially certain to be used to wager on a slot machine.”Footnote 136 The court found that since this theory was grounded in the platforms’ own bad acts, and not in the content of the social media casino apps, the platforms could not rely on Section 230 to escape liability.Footnote 137 The second revenue sharing theory asserted that the platforms were liable for “offering, categorizing, and promoting” social casino applications in their app stores, which helped the platforms generate a profit by targeting advertisements at specific users.Footnote 138 The plaintiffs alleged the platforms not only recommended content but helped develop advertisements to attract users to the social casino apps, making the illegal product “more appealing and addicting.”Footnote 139 But the court noted that the platforms’ contribution of data, which aided in the creation of advertisements, did not create and develop the casino apps; rather, the contribution of data was akin to offering publishing advice.Footnote 140 Thus the platforms behaved like editors, rather than content providers, and were shielded from liability by Section 230.Footnote 141
In Gonzalez, the Ninth Circuit likewise found that Section 230 did not bar the ad revenue sharing claims, because such allegations did not seek to hold the social media platforms liable for any content provided by a third party.Footnote 142 On the facts of Gonzalez, however, the Ninth Circuit held that the plaintiffs “failed to state a claim for aiding-and-abetting liability” because the allegations were devoid of any statements “about how much assistance Google provided” and therefore did not plausibly allege “that Google’s assistance was substantial.”Footnote 143 By failing to demonstrate that Google provided substantial assistance to ISIS, the plaintiffs did not have a viable claim under the Anti-Terrorism Act, and thus Google could assert a Section 230 immunity defense.
The Supreme Court agreed with this holding, because the Gonzalez complaint “allege[d] nothing about the amount of money that Google supposedly shared with ISIS, the number of accounts approved for revenue sharing, or the content of the videos that were approved.”Footnote 144 The Court thus explained that there was nothing in the complaint “to view Google’s revenue sharing as substantial assistance” and that without more the plaintiffs failed to demonstrate “that Google knowingly provided substantial assistance” to the Reina attack, the Paris attack, or any other ISIS terrorist attack.Footnote 145 Because Google did not violate any law, it could still benefit from Section 230 immunity.
While the revenue sharing claims did not succeed in this particular case, the Ninth Circuit acknowledged that, in a different scenario, ad revenue sharing by a social media platform would not be immune to liability under Section 230.Footnote 146 The Supreme Court did not reject this idea, but explained that without a viable claim, in this case “aiding and abetting” under the Anti-Terrorism Act, it could not address Section 230.Footnote 147 Rather, to bar a platform from asserting Section 230 immunity, a plaintiff would first need to raise a viable claim for the platforms to be held liable for their conduct.
Unfortunately, when considering how a revenue sharing liability claim could challenge the harm platforms inflict on adolescents through relentless algorithms, the second revenue sharing theory in In re Apple seems to apply best. Creating algorithms that offer, categorize, and promote harmful content would likely not be considered a content-providing act, but rather an editorial function protected from liability, even if a social media platform earns profits from ad revenue. Platforms may be aiding in, and profiting from, the targeting of harmful content at minor users, but they are not creating the content itself and thus may be immunized under Section 230. Further, both Gonzalez and In re Apple involved online activity that was illegal—terrorism and gambling—which gave the Ninth Circuit and the district court more reason to hold the social media platforms liable for their conduct. In contrast, while platforms that feed harmful content to minors through the use of algorithms—and earn tidy sums from ad revenues—may injure young users, the platforms are not promoting an illegal activity.Footnote 148 These differences, and the Supreme Court’s decision not to review the revenue sharing theory of liability in the Gonzalez and Taamneh cases, hint at the claim’s potential—but it is difficult to predict how exactly it could be used in the future to bar social media platforms from immunity under Section 230 of the CDA.
V. Legal Strategies to Circumvent First Amendment Protection and Immunity Granted by Section 230 of the Communications Decency Act to Social Media Platforms
There are a few legal strategies that can be used to regulate harms created in the online world that surmount the obstacles created by the First Amendment and Section 230 of the CDA.Footnote 149 First, the FTC or states’ attorneys general could bring claims against social media companies for unfair or deceptive business practices. Second, products liability lawsuits could be brought against social media platforms, though such suits may benefit only a few people and only after harm has occurred. Lastly, states could pass legislation based on products liability theory that would require a study of social media design functions and the reform of those functions to prevent harm to users.Footnote 150
A. Unfair or Deceptive Business Practice Claims Brought Against Social Media Companies by the FTC or States’ Attorneys General Could Withstand a First Amendment Free Speech Defense and Circumvent Section 230 of the Communications Decency Act
One approach to regulating the harms that children and teens experience due to social media use is applying Section 5 of the FTC Act, which declares unlawful “unfair or deceptive acts or practices in or affecting commerce.”Footnote 151 The FTC finds that an act or business practice is unfair where “(1) the act or practice causes or is likely to cause substantial injury to consumers which (2) is not reasonably avoidable by consumers themselves and (3) not outweighed by countervailing benefits to consumers or to competition.”Footnote 152 In addition, the FTC finds that an act or business practice is deceptive where (1) a representation, omission, or practice misleads or is likely to mislead the consumer; (2) a consumer’s interpretation of the representation, omission, or practice is considered reasonable under the circumstances; and (3) the misleading representation, omission, or practice is material.Footnote 153
The FTC Act does not grant a private right of action; enforcement of the FTC Act can only be achieved through the FTC itself.Footnote 154 An action could be brought under Section 5 of the FTC Act if it can be proven that the persistent algorithmic pushing of harmful content, such as eating disorder content shown to a user through the social media platform’s algorithm-driven feed, meets the definition of an “unfair or deceptive business practice,” regardless of the platform’s intent to harm the user. When an action is brought under Section 5 alleging unfair or deceptive business practices, the defendant may not use good faith as a defense because intent to deceive the consumer is not an element of the claim.Footnote 155
States’ attorneys general offices can also bring claims of unfair or deceptive business practices against social media companies, because the FTC assigns certain enforcement authority to states in this area.Footnote 156 State consumer protection laws also grant attorneys general significant authority to bring such claims.Footnote 157 The FTC Act has prohibited unfair or deceptive acts and practices since 1938, and states followed suit in the 1970s and 1980s when they began to adopt their own forms of consumer protection statutes, largely modeled after the FTC Act.Footnote 158
A multi-state investigation of TikTok and Meta was launched in 2022 by attorneys general in eight states; it focused on the methods and techniques used by social media companies to boost engagement among young users.Footnote 159 Specifically, the attorneys general are examining the methods used to increase the duration of time spent on the platforms as a means to uncover the harm such usage may cause young people and what social media companies know about those harms.Footnote 160 In the investigation, the attorneys general will likely seek disclosure and reform from social media companies related to the effect algorithmic operations have on adolescent users,Footnote 161 as well as gather information on a growing number of public health studies that examine mental health harms suffered by young users of social media.Footnote 162
States’ attorneys general, however, may struggle to bring a successful claim against social media platforms because of the difficulty in proving that harms suffered by young social media users are caused by unfair or deceptive business practices such as algorithms to push harmful content. Most difficult to prove would be the first element of an unfair practice claim, which requires “the act or practice causes or is likely to cause substantial injury to consumers,”Footnote 163 and the third element of a deceptive practice claim, which requires “the misleading representation, omission, or practice is material.”Footnote 164
Numerous public health studies (explored earlier in this article) can undoubtedly establish an association, and these studies strongly suggest causation between social media use by young people and the mental health harms they suffer. However, risk audits of social media platforms—and specifically legislation that would require those audits—are necessary to show that the algorithmic function of social media platforms is directly linked to substantial harms to young users, many of whom suffer from body dissatisfaction, eating disorders, substance abuse, anxiety, depression, self-harm, and suicidality.
B. Product Liability Claims Focused on Harmful Design of Social Media Platforms Could Withstand a First Amendment Challenge and Circumvent Section 230 of the Communications Decency Act
A second legal remedy that could surmount the obstacles imposed by the First Amendment and Section 230 immunity would be a products liability claim, brought under a negligence theory. A plaintiff bringing a products liability lawsuit against a social media platform would need to allege that the social media platform is harmful and that the platform knew or should have known that the design of the product, the social media app or website, would cause harm.Footnote 165 The plaintiff would need to show that the defendant is not immune from liability under Section 230 of the CDA in order to be successful.Footnote 166 For example, in Lemmon v. Snap, Inc., plaintiffs alleged that their two sons died in a high-speed car crash due to the negligent design of the Snapchat app, which, through the use of a Speed Filter, encouraged their sons to drive at excessively high speeds while the app measured and displayed their speed in real time.Footnote 167 The court held that Snap Inc., as a products manufacturer, had a duty to design a reasonably safe product.Footnote 168 Moreover, the court found that Snapchat was not immune from liability under Section 230 of the CDA because its duty to design a reasonably safe product was separate from its role in monitoring and publishing third-party content.Footnote 169 Another product liability case, discussed earlier, is being pursued by Tammy Rodriguez, the mother of an eleven-year-old suicide victim, alleging that Meta and Snap must be held liable for the wrongful death of her daughter, Selena Rodriguez.Footnote 170 Ms. Rodriguez alleges that Meta and Snap “knowingly” and “purposefully” designed their platforms to be addictive, making these platforms unreasonably dangerous to minor users, such as Selena.Footnote 171 Under a products liability strategy, companies such as Meta and Snap may be held liable for the physical and mental harm to users of their platforms if the court finds that the company knew or should have known that the design of the platform posed unreasonable dangers.Footnote 172 A products liability strategy circumvents both the First Amendment free speech protections and Section 230 of the CDA because it does not challenge the content and speech found on a platform but, instead, cites fault with the harmful design of the platform itself.Footnote 173
Relying on a products liability claim to combat the harms inflicted on children by social media has its limitations, however. Products liability cases typically involve only a single plaintiff, or a specific class of plaintiffs in a class action suit. At best, products liability cases are a reactive legal strategy that addresses harms occurring in the online world only after those harms have occurred. A products liability claim does not address the continuing or future harm social media platforms inflict on the broader public of users.
In addition, proving that a faulty design of a product was the direct cause of an injury can be difficult. Generally, for a products liability case to be successful, a plaintiff must prove: (1) the product caused them to be injured; (2) the product that injured them was defective; (3) the defect of the product is what caused their injury; and (4) the product was being used the way it was intended.Footnote 174 It is not enough to argue that one was injured while using the defective product correctly; the plaintiff must also demonstrate specifically that their injury was caused by the defect itself.Footnote 175 In some cases, linking the defect in the product to the injury is fairly straightforward; in other cases, it is not. That is precisely the problem in cases where young persons experience harm by engaging with social media platforms. Public health studies can conclusively demonstrate an association between social media use and the mental health harms suffered by young social media users, but direct causation of harm is harder to prove. Still, a groundswell of reputable public health, psychology, and neuroscience studies in recent years goes a long way toward directly linking social media use to severe harms suffered by young social media users. Mandatory algorithm risk audits of social media platforms, discussed in detail later in this article, would demonstrate that the faulty and intentional design of social media platforms causes many of the harms experienced by youth.Footnote 176
2. Public Nuisance Theory Brought Against Social Media Giants by School Districts Could Restrain Platforms From Targeting Addictive Social Media Algorithms at Minors
A third legal remedy that could circumvent First Amendment speech protections and Section 230 immunity is the tort theory of public nuisance. In January 2023, Seattle School District No. 1 (Seattle Schools) brought a case against Meta, Snapchat, TikTok, and YouTube, alleging that the social media platforms “intentionally marketed and designed their social media platforms for youth users, substantially contributing to a student mental health crisis.”Footnote 177 Seattle Schools specifically alleges that the four platforms intentionally design their services to maximize, through the use of harmful algorithms, the time youth users spend on them.Footnote 178 Furthermore, Seattle Schools alleges that the harm to adolescent mental health is reasonably foreseeable.Footnote 179
Seattle Schools brought this complaint because the school district is a primary provider of mental health services to the children and teenagers whom social media platforms specifically target.Footnote 180 In 2023, there were 109 schools within the Seattle Schools district, with a population of 53,873 students.Footnote 181 Seattle Schools claims that harmful social media algorithms have created a mental health crisis for children and teens and that the district has struggled to provide adequate mental health services to meet the growing need among adolescents in its schools.Footnote 182 In a similar suit, in April 2023, Dexter Community Schools in Washtenaw County, Michigan joined a lawsuit with at least eleven other Michigan schools against major social media platforms, including Meta, Snapchat, TikTok, and YouTube.Footnote 183 Plaintiffs are seeking damages for past and future harm resulting from social media addiction and funding for school counselors to address the mental health crisis resulting from high social media use.Footnote 184
Seattle Schools brings its complaint under a public nuisance theory. Under the relevant code, public nuisance is defined as “whatever is injurious to health or indecent or offensive to the senses, or an obstruction to the free use of property, so as to essentially interfere with the comfortable enjoyment of life and property.”Footnote 185 A public nuisance occurs when someone commits an act or performs a duty that “annoys, injures, or endangers the comfort, repose, health or safety of others, offends decency…or in any way renders other persons insecure in life, or in the use of property.”Footnote 186 It impacts an entire community.Footnote 187 In the past, public nuisance claims have been used to address pollution, road obstructions, and operating houses for prostitution.Footnote 188 More recently, public nuisance has been used to litigate claims regarding climate change, gun violence, and teen vaping.Footnote 189
Because a public nuisance is experienced by the entire community, a plaintiff has standing to sue under this theory if they are one of the following: (1) a public authority charged with the responsibility of protecting the public; or (2) an individual who has suffered harm from the specific nuisance.Footnote 190 In this case, Seattle School District No. 1 has standing to bring this claim under the first category of plaintiffs.
When defending against a public nuisance allegation, a defendant can assert various defenses: contributory negligence, assumption of the risk, coming to the nuisance, or statutory compliance.Footnote 191 If these defenses fail and the plaintiff prevails, the typical remedy the court awards is damages.Footnote 192 In some cases, an injunction may be appropriate, wherein the defendant would be restrained from continuing the wrongful conduct.Footnote 193 Defendants may also be fined for committing a public nuisance in addition to being subject to an injunctive order.Footnote 194 While a public nuisance claim may help many students, especially if an injunction is ordered, it addresses harm only in the specific school districts in which the suit is brought. Thus, the number of young people who are protected is limited.
Juul, a company that sells electronic cigarettes, has faced ongoing litigation from school districts, cities, and counties across the nation under the public nuisance theory for contributing to nicotine addiction among adolescents.Footnote 195 The first of these suits was brought in Massachusetts in 2018.Footnote 196 Plaintiffs in these lawsuits alleged that Juul marketed e-cigarettes to youth, using false and misleading language describing e-cigarettes as fun and safe for adolescents.Footnote 197 The Connecticut Attorney General claimed that Juul “relentlessly marketed vaping products to underage youth, manipulated their chemical composition to be palatable to inexperienced users, employed an inadequate age verification process and misled consumers about the nicotine content and addictiveness of its products.”Footnote 198 In December 2022, Juul agreed to settle claims arising from 10,000 lawsuits for a sum of $1.2 billion.Footnote 199 Similarly, in April 2023, in a lawsuit involving six states, Juul agreed to pay $462 million to settle claims that it unlawfully marketed addictive products to minors.Footnote 200 In March 2023, Juul also agreed to settle a complaint brought under the public nuisance theory by Minnesota in December 2019; however, the terms of that settlement have not yet been released.Footnote 201
While public nuisance claims against Juul have been successful, allegations that Juul unlawfully marketed an addictive product—electronic cigarettes—to teens are fundamentally different from allegations that social media giants are targeting an addictive product—social media—at teens. It is illegal to encourage minors to use electronic cigarettes and consume nicotine; it is completely legal to encourage minors to use social media. Seattle Schools, and any other school district bringing a public nuisance claim, will have difficulty proving that the social media platforms are involved in illegal conduct. Furthermore, Seattle Schools will need to prove that the social media companies caused the mental health crisis among youth in the school district, not merely that there is a correlation between negative mental health and increased social media use.Footnote 202
VI. The California Age Appropriate Design Code Leads the Nation in Attempts to Address Harms to Young People Inflicted by Social Media Platforms and Should Have Ripple Effects Nationally
The passage of the California Age Appropriate Design Code (California Code) on September 15, 2022, constituted a significant step forward in the United States to combat online harms to children and adolescent users, including online content that contributes to eating disorders, depression, anxiety, social media addiction, and other mental health harms.Footnote 203 The law thwarts First Amendment challenges because it does not regulate the content of speech on social media platforms, but instead focuses on the functional design of the platforms that cause harm, essentially applying a products liability theory, but more broadly.Footnote 204
The California Code sets forth certain standards with which social media platforms must comply. For example, it mandates that a social media company conduct a Data Protection Impact Assessment for services or platforms likely to be accessed by consumers younger than eighteen years of age,Footnote 205 establish the age of consumers using the platform with a level of certainty,Footnote 206 and ensure that platform websites and apps used by minors are set to the highest level of privacy possible.Footnote 207
Furthermore, the Code prohibits social media platforms from using the private information of a child user in a way that is harmful to the child’s physical or mental health,Footnote 208 from collecting a user’s geolocation information,Footnote 209 and from using deceptive design features, such as targeted advertising, that expose children to harmful content and contacts or that pressure children to provide personal, private information beyond what is necessary.Footnote 210 These are only a few of the many important standards with which social media companies must comply under the California Code. To implement and enforce these standards, the California Code requires the establishment of the California Children’s Data Protection Task Force.Footnote 211 The California Code was signed into law in September 2022Footnote 212 and goes into effect on July 1, 2024.Footnote 213
Of the many pieces of legislation filed in the United States aimed at addressing online harms, particularly those that impact minor users of a platform, the California Code will perhaps be the most influential. Currently, social media platforms must follow COPPA, which imposes requirements on online service operators to protect the privacy of users under the age of thirteen.Footnote 214 Where COPPA places the burden on parents to take control of their child’s privacy online, the California Code places the burden on the social media platform to create services and devices that are safe for the physical and mental wellbeing of children.Footnote 215 California is a leader in U.S. technology and privacy law and is itself a technology hub; thus, the California Code is likely to have a ripple effect throughout the nation, changing the structure and design of social media platforms for the better.
The California Code is modeled after the United Kingdom Age Appropriate Design Code (UK Code), which has inspired significant change by social media companies. The UK Code came into full force on September 2, 2021, after a twelve-month transition period.Footnote 216 Since the UK Code came into effect, many social media platforms have made changes to their services and devices to comply with its requirements. For example, TikTok has turned off notifications past bedtime for children under thirteen years old and has provided safe search mechanisms as a default; Instagram has disabled targeted advertisements for minor users; YouTube has disabled autoplay for minor users; and Google has stopped targeted advertising for minor users.Footnote 217 Given the success of the UK Code, passage of the California Code should produce similarly instrumental changes in the United States.
A. California’s Data Protection Impact Assessment Requirement to Measure Social Media Harms on Young Users Is Well Intentioned but Very Limited Because It Relies on Social Media Companies to Assess Themselves
The California Code requires businesses to complete a Data Protection Impact Assessment before a new online service, product, or feature is offered to the public, and to maintain documentation of the assessment.Footnote 218 A Data Protection Impact Assessment is a “systematic survey” that assesses and mitigates risks that “arise from data management practices of the business to children who are reasonably likely to access the online service, product, or feature at issue.”Footnote 219 Specifically, the assessment addresses whether a product, service, or feature could harm children or expose them to harmful content; could lead children to experience harmful contacts; could permit children to witness or participate in harmful conduct; and whether the algorithms and targeted advertisements the service uses could harm the child.Footnote 220
However, the Data Protection Impact Assessments are confidential and will not be publicly disclosed to anyone other than the California Attorney General’s Office.Footnote 221 Moreover, the Data Protection Impact Assessments will be conducted internally by a social media company, instead of by a third party.Footnote 222 For example, Facebook would be in charge of running a Data Protection Impact Assessment of its own design on its own platform. These two factors drastically weaken the impact the Data Protection Impact Assessments could have, particularly because the social media platform would not be subject to outside scrutiny. California should replace the assessments with algorithm risk audits that are conducted by independent third parties and are required to be publicly disclosed, therefore providing for greater accountability and enforceability of the California Code objectives.
The California Code is arguably the strongest state law in the United States addressing the mental health of children and teens and the role social media plays in it. The First Amendment and Section 230 of the CDA have been barriers to legislation aimed at regulating online harms caused by social media platforms.Footnote 223 However, the California Code circumvents the First Amendment and Section 230 of the CDA by regulating the design and function of social media platforms, rather than the content or speech posted on them. In this way, the California Code employs a products liability theory, but it has the potential to be more effective in curbing harms caused by social media platforms than a single products liability lawsuit such as Lemmon v. Snap, Inc. or Rodriguez v. Meta Platforms, Inc.
The California Code is preventive rather than reactive in nature. Unlike products liability cases, which involve a single plaintiff and are brought after a harm has already occurred, the California Code attempts to prevent online harms before they occur by requiring social media platforms to comply with certain standards. The law places the burden on social media platforms to create services and products that are safe for the mental and physical wellbeing of users, with specific attention to the vulnerabilities of children using their platforms. Unlike products liability suits that benefit only a handful of users or just one person, the California Code will likely have a broad impact on all young users of social media in California.
B. Social Media Platforms Must Institute a Reliable Age Verification Method to Protect Minors and Enact Laws to Assess the Injuries that Platforms Inflict on Young Users
While the California Code leads the nation in legislatively contemplating the harms social media causes adolescents, it falls short of identifying the actual injuries teens experience and does not go far enough to prevent those harms. For the California Code to be effective, platforms’ age verification processes must be appropriately addressed.Footnote 224 COPPA commands that operators of online services restrict their platforms to users age thirteen or older, absent verifiable parental consent.Footnote 225 For example, both Facebook and Instagram require users to be at least thirteen years old to create an account, but they implement this requirement only by asking for the user’s birthdate during account creation.Footnote 226
This COPPA regulation is not easy to enforce; many child users are able to evade the age requirements on social media platforms by simply misrepresenting their birth date when registering for an account.Footnote 227 Social media platforms must administer a mechanism to verify the age of minor users to a degree of certainty, including those minor users who have lied about their age. For example, Instagram is currently testing the following three options to verify the age of Instagram users: (1) the user must upload an image of their ID, (2) the user must record a video of themselves, or (3) the user must ask friends to verify their age.Footnote 228 While these options are currently being explored only in instances where an Instagram user attempts to change their age from under eighteen to eighteen years or older, they may be implemented by other social media companies to ensure all users are age thirteen years or older.Footnote 229
Congress is currently focusing on the low cut-off age requirement that social media platforms impose upon users wishing to open accounts. On May 2, 2023, Senators Richard Blumenthal (D-CT) and Marsha Blackburn (R-TN) reintroduced to Congress the Kids Online Safety Act (KOSA). This bill seeks to give parents and users under seventeen the ability to opt out of algorithmic recommendations, prevent third parties from viewing a minor’s data, and limit the time young people spend on a platform.Footnote 230 KOSA has received bipartisan support from U.S. Senators across the country and is endorsed by several mental health organizations and associations.Footnote 231
KOSA lists a set of harms that social media companies must mitigate, including preventing the spread of content that promotes suicidal behaviors, eating disorders, substance use disorders, sexual exploitation, advertisements for certain illegal products (e.g. tobacco and alcohol), and other matters.Footnote 232 Mitigation efforts could include removing rewards given to young users for time spent on the platform or other features that result in compulsive usage.Footnote 233 KOSA also requires social media companies “to perform an annual independent, third-party audit that assesses the risks to minors.”Footnote 234 This audit must be made public and must evaluate the risks to minors who use the platform.Footnote 235 The bill further requires platforms “to enable the strongest privacy settings by default” for kids.Footnote 236
Unlike an earlier version of the bill proposed in 2022, “KOSA 2.0” addresses concerns that it could inadvertently cause harm to young people. Opponents of the earlier version expressed concerns that KOSA would create pressure to over-moderate content and allow political agendas to influence what information was accessible to young people.Footnote 237 For example, if a young teenager with an eating disorder were looking for counseling or health care resources, such content, though beneficial, might be censored by the online platform to avoid liability. However, KOSA 2.0 includes protections for beneficial support services like the National Suicide Hotline, substance use disorder resources, and LGBTQ youth centers.Footnote 238 These safeguards ensure young people’s access to such groups is not hindered by the bill’s requirements.
Despite these changes, Big Tech groups and some civil liberties organizations, including the American Civil Liberties Union, oppose the legislation, raising concerns about young people’s privacy and First Amendment rights.Footnote 239 While KOSA 2.0 has addressed many of the concerns of children’s mental health advocates, and would force social media companies to be transparent about their potentially harmful business practices, it is unclear whether the bill will garner the necessary legislative support (especially from the U.S. House of Representatives) to become federal law.Footnote 240
Laws must also be enacted to examine the deliberately addictive design of social media platforms and to prevent platforms from targeting vulnerable adolescent users. Social media platforms typically employ a three-step method that draws users in and makes it psychologically more difficult to put down the phone: (1) a trigger, such as a notification, which pushes the user to check their device; (2) an action, where the user “clicks” to open and use an application on their device; and (3) a reward, like a favorite or “like” on a post, that motivates continued engagement on the platform.Footnote 241 Minor users, such as eleven-year-old Selena Rodriguez, who took her own life in July 2021, are most vulnerable to social media addiction and the resulting mental and physical harms.Footnote 242 Tammy Rodriguez, Selena’s mother, alleges in her suit against Instagram and Snapchat that the platforms were purposefully designed to “exploit human psychology” and addict users; therefore, Instagram and Snapchat should be liable for the harm that resulted from Selena’s addiction to their platforms.Footnote 243
To protect young users like Selena, another piece of legislation, the Social Media Duty to Protect Children Act, was considered in California in 2022. The bill attempted to impose a duty upon social media companies not to addict children to their platforms, but it did not pass.Footnote 244 Under the bill, social media platforms would have been prohibited from addicting a child to a platform through “using a design, feature, or affordance that the platform knew, or by the exercise of reasonable care should have known, causes a child user, as defined, to become addicted to the platform.”Footnote 245 The Duty to Protect Children Act took a direct and controversial path toward holding social media companies accountable and, not surprisingly, big tech lobbyists worked hard to ensure the measure was defeated.Footnote 246 It was voted down by the California Senate in August 2022.Footnote 247
A law similar to California’s failed legislation, however, found success in Utah.Footnote 248 In March 2023, the Utah legislature adopted the Social Media Usage Amendments law, which prohibits social media companies from using practices, designs, or features that the company knows or should know would cause a young person to form an addiction to the platform.Footnote 249 To enforce this, the law gives the Utah Division of Consumer Protection the ability to audit the records of social media companies to determine compliance with the law and to investigate complaints alleging violations.Footnote 250 If a social media company is found to be in violation, it is subject to civil penalties of “$250,000 for each practice, design, or feature of its platform shown to have caused addiction.”Footnote 251 The company can also face penalties of up to $2,500 for each teen user shown to have been exposed to the addictive practice, design, or feature. The court may also issue an injunction or award actual damages to the injured young person.
The law also creates a private right of action allowing individuals to sue social media companies for “any addiction, financial, physical, or emotional harm suffered by a Utah young person as a consequence of using or having an account on the social media company’s platform.”Footnote 252 Any minor who suffers such harms is entitled to an award of “$2,500 per each incident of harm” in addition to other relief the court deems necessary.Footnote 253 If a young user or account holder is under the age of sixteen, it is presumed that harm was caused as a result of having or using a social media account unless proven otherwise.Footnote 254
In response to an alleged violation, the social media company can assert an affirmative defense to such penalties if it conducts a quarterly audit of its practices, designs, and features to detect potential addiction of young users and corrects any violation within thirty days of the audit’s completion. The law does not require social media companies to conduct audits, but rather allows them to use quarterly audits as the basis for an affirmative defense.Footnote 255 The law does not specify how these audits would be conducted, but it suggests that social media companies would audit themselves.Footnote 256
The Utah law, however, faces a significant legal battle: in December 2023, social media companies filed suit claiming free speech violations. Tech advocacy groups established and funded by members of the Big Tech industry, including NetChoice, publicly opposed the passage of the law.Footnote 257 NetChoice has already sued to challenge California’s Age-Appropriate Design Code Act for restricting young users’ social media usage and may file a similar claim against the Utah legislation.Footnote 258
The Utah law, and the bills that were defeated or watered down in California, serve as models of viable legal remedies that could be employed elsewhere to curb social media harm. The California Code is a significant step forward in attempts to reduce harm to minors using social media, but the defeat of the Social Media Duty to Protect Children Act demonstrated that bluntly identifying social media use as addictive and dangerous to children may be politically difficult. Further, the Data Protection Impact Assessment requirement under the California Code, which had the potential to directly identify the social media functions that harm minors, was rendered toothless because the assessments will be neither conducted by independent third-party auditors nor publicly disclosed. Similarly, the Utah Social Media Usage Amendments law, while well-intentioned, is weakened by its apparent reliance on social media companies to conduct their own internal audits.
Essentially, social media companies are entrusted to police themselves, which will undoubtedly result in superficial auditing and ineffectual enforcement. To ensure laws impose the right regulations to alleviate harm to minors, and that social media companies comply by taking the best corrective action, the social media functions that pose the greatest risk to minors must be accurately assessed and the results made publicly available. Algorithm risk audits conducted by independent third parties that continuously measure harmful algorithmic practices directed toward minors who use social media should be required by legislation that is passed in tandem with a law similar to the California Code.
VII. Laws Requiring Algorithm Risk Audits Will Provide Compelling Evidence Linking Social Media’s Use of Algorithms to Harm to Children and Thereby Enhance Enforcement of Laws Mandating Reform of Social Media Platforms
“The real problem of humanity is the following: We have Paleolithic emotions, medieval institutions and godlike technology.” Footnote 259 – E.O. Wilson
This observation by the late E.O. Wilson, a pioneer of evolutionary biology, captures the uneven balance between human vulnerability and the technologically advanced spaces in which we spend so much time. In a world where social media technology is constantly developing and ultimately outpacing humans’ ability to navigate its effects, legislation must be implemented to protect minor users from the harms this technology can cause. To best draft such legislation, policymakers must fully understand what harms social media causes and the effects of these harms on young people. The most pernicious practice is arguably the use of algorithms that relentlessly direct targeted content to minors on their social media feeds.
Public health, psychology, and neuroscience studies clearly demonstrate an alarming rise in depression, anxiety, suicidality, and other mental illnesses among adolescents in the last decadeFootnote 260 coinciding with the introduction of social media platformsFootnote 261 such as Instagram (2010), Snapchat (2011), and TikTok (2016), which are all heavily used by young people.Footnote 262 Therefore, any law aimed at protecting minors online must address how social media platforms employ algorithms in the function and design of their products. To be effective, the laws must incorporate enhanced means of enforcement, rather than mere prohibitions on particular acts. This objective could best be accomplished through the use of algorithm risk audits.
Legally mandating algorithm risk audits is a relatively new strategy that is gaining traction nationally and globally.Footnote 263 New York City was among the first jurisdictions to mandate these types of audits, passing a law on December 11, 2021, that requires annual audits assessing bias in automated employment decision tools, which use algorithms to screen applicants for employment positions.Footnote 264 By requiring these audits, known as “bias audits,” the law helps identify when an algorithm might intentionally or unintentionally weed out applicants based on certain demographics, such as race and gender.Footnote 265 The New York City law, which took effect on April 15, 2023, requires an impartial evaluation by an independent auditor, the results of which must be made publicly available.Footnote 266
The bias audits will measure the disparate impact the use of algorithms has on a specific demographic by comparing the number of applicants from a specific demographic selected to move forward in the hiring process to the number of those in the most highly selected demographic.Footnote 267 For example, the bias audit might compare the number of applicants who are women selected to move forward in the hiring process to the number of applicants who are men, who were the most selected demographic. This comparison will allow the independent auditors to assess whether the use of algorithms in the hiring process disproportionately impacts a certain demographic, such as women.Footnote 268 The demographic categories the bias audits assess are gender, race/ethnicity, and intersectional (i.e., overlapping demographics, such as an applicant who is a woman of a minoritized race).Footnote 269
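To make the comparison concrete, the short Python sketch below computes each group’s selection rate and its ratio to the most-selected group’s rate, the basic disparate-impact arithmetic described above. The applicant counts and the 0.8 flagging threshold are hypothetical illustrations of ours, not figures drawn from the New York City law.

```python
# Illustrative sketch of the bias-audit comparison described above.
# The applicant counts and the 0.8 flag threshold are hypothetical,
# chosen only to show the arithmetic; they are not from the NYC law.

applicants = {"women": 500, "men": 400}   # applicants per group
selected   = {"women": 100, "men": 120}   # advanced by the algorithm

# Selection rate for each demographic group.
rates = {g: selected[g] / applicants[g] for g in applicants}

# The most highly selected demographic serves as the benchmark group.
benchmark = max(rates, key=rates.get)

for group, rate in rates.items():
    impact_ratio = rate / rates[benchmark]
    flag = "POSSIBLE DISPARATE IMPACT" if impact_ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, "
          f"impact ratio {impact_ratio:.2f} vs {benchmark} -> {flag}")
```

In a full bias audit, the same arithmetic would be repeated for each demographic category the law covers: gender, race/ethnicity, and intersectional groups.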
Another real-world example to look to for guidance on how an algorithm risk audit might work is the recent settlement between Meta Platforms, Inc. (Meta) and the U.S. Department of Justice (DOJ). On June 21, 2022, the DOJ announced its entrance into a settlement agreement that resolved allegations that Meta engaged in discriminatory advertising in violation of the Fair Housing Act (FHA).Footnote 270 The agreement also resolved a lawsuit filed against Meta by the United States, which alleged that “Meta’s housing advertising system discriminated against Facebook users based on their race, color, religion, [gender], disability, familial status, and national origin.”Footnote 271 Meta was charged with unevenly displaying housing ads to Facebook users of certain FHA-protected demographics, such as gender and race.Footnote 272 The settlement between Meta and the DOJ required Meta to develop a new system to make housing ads more evenly displayed across race and gender groups, and therefore address the discrimination caused by its algorithms.Footnote 273
The settlement set forth a three-step approach: (1) identify the specific harm, (2) determine how to measure the extent of the harm, and (3) agree on reporting periods and benchmarks to reduce the harm.Footnote 274 The first step was to identify the specific harm: the discrimination caused by housing ads being unevenly displayed to Meta users of certain demographics, namely gender and race, in violation of the Fair Housing Act. The second step required Meta and tech experts to determine how to analyze Meta’s data to assess the extent of the discriminatory harm. That harm is shown through variances between the eligible and actual audiences for housing ads.
The eligible audience includes all users who (1) fit the targeting options selected by an advertiser for an ad, and (2) were shown one or more ads on a Meta platform over the past 30 days.Footnote 275 The actual audience includes all users in the eligible audience who actually viewed the specific ad.Footnote 276 Once these audiences are identified, a measurement is taken to determine the variance between them using a measurement method called the Earth Mover’s Distance.Footnote 277 To conceptualize this measurement, think of side-by-side pie charts. One pie chart shows the eligible audience for a housing ad—suppose it is split fifty percent for male users and fifty percent for female users.Footnote 278 The other pie chart shows the actual audience for the ad—suppose this is split forty percent for male users and sixty percent for female users. To determine the variance, compare the differences between corresponding slices of the pie charts.Footnote 279 Here, there is a ten percent difference for male users (fifty percent in the eligible audience chart and forty percent in the actual audience chart) and a ten percent difference for female users (fifty percent in the eligible audience chart and sixty percent in the actual audience chart).Footnote 280 Once the difference for each demographic is found, add the differences together and divide by two (since any decrease in one slice becomes an equivalent increase in another slice, and would otherwise be double-counted) to determine the total variance. In this case, the total variance is (10% + 10%) / 2 = 10%.Footnote 281
As a word of caution, this calculation of the Earth Mover’s Distance is quite simple, and works best in a context where demographic groups are of relatively equal size in the eligible population.Footnote 282 The metric, however, will be less useful under scenarios where demographic groups are of widely varying sizes, as is the case when comparing across racial/ethnic groups in the United States.Footnote 283 For this reason, it would be prudent for the Earth Mover’s Distance metric to be supplemented with an additional metric to flag when any particular group, for instance, a small demographic group, experiences a large relative variance, such as exceeding fifty percent, when comparing eligible to actual audience sizes.Footnote 284
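The worked pie-chart example, together with the supplemental relative-variance check we suggest, can be captured in a few lines of Python. This is our illustrative sketch of the arithmetic described above, using the hypothetical fifty/fifty and forty/sixty audience splits; it is not code from the settlement.

```python
# Earth Mover's Distance over demographic shares, as in the worked
# example above: half the sum of absolute differences between the
# eligible and actual audience distributions. Shares are hypothetical.

eligible = {"male": 0.50, "female": 0.50}   # eligible-audience shares
actual   = {"male": 0.40, "female": 0.60}   # actual-audience shares

emd = sum(abs(eligible[g] - actual[g]) for g in eligible) / 2
print(f"total variance (EMD): {emd:.0%}")   # -> 10%

# Supplemental check suggested above: flag any group whose share shifts
# by more than fifty percent relative to its eligible share, which the
# aggregate EMD can mask when groups differ widely in size.
for g in eligible:
    relative = abs(eligible[g] - actual[g]) / eligible[g]
    flag = "FLAG" if relative > 0.50 else "ok"
    print(f"{g}: relative variance {relative:.0%} -> {flag}")
```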
Under the final step of the approach described in the settlement, Meta and the DOJ must agree on reporting periods and benchmarks to reduce harm.Footnote 285 Meta must meet “certain [benchmarks] within a specific period of time” to reduce the variance between the eligible and actual audiences for housing ads.Footnote 286 These benchmarks call for Meta, by December 31, 2023, to reduce variances to “less than or equal to 10% for 91.7% of those ads for [gender] and less than or equal to 10% for 81.0% of those ads for […] race/ethnicity.”Footnote 287 In other words, by the end of 2023, Meta must ensure that for 91.7% of all housing ads on its platform, the variance between the eligible and actual audience for gender is 10% or less, and that for 81% of housing ads, the variance between the eligible and actual audience for race/ethnicity is 10% or less.
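A benchmark-compliance check of this kind reduces to counting the share of ads whose variance falls at or below ten percent. The sketch below shows the test using invented per-ad variances; only the 91.7% and 81.0% benchmark figures come from the settlement quoted above.

```python
# Checks hypothetical per-ad variances against the settlement's
# benchmarks: >= 91.7% of housing ads must have gender variance <= 10%,
# and >= 81.0% must have race/ethnicity variance <= 10%.
# The per-ad variance lists below are invented for illustration.

gender_variances = [0.04, 0.08, 0.10, 0.06, 0.12, 0.03, 0.09, 0.07]
race_variances   = [0.05, 0.11, 0.09, 0.15, 0.08, 0.10, 0.06, 0.04]

def share_compliant(variances, cap=0.10):
    """Fraction of ads whose audience variance is at or below the cap."""
    return sum(v <= cap for v in variances) / len(variances)

for label, variances, benchmark in [
    ("gender", gender_variances, 0.917),
    ("race/ethnicity", race_variances, 0.810),
]:
    share = share_compliant(variances)
    status = "meets benchmark" if share >= benchmark else "falls short"
    print(f"{label}: {share:.1%} of ads compliant -> {status}")
```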
To meet these benchmarks, Meta has developed a system called the Variance Reduction System (VRS), which helps reduce variances between the eligible and actual audiences for housing ads.Footnote 288 Once a variance is detected between the eligible and actual audiences using the Earth Mover’s Distance measurement, Meta can use the VRS to help reduce that variance. Think of the two working in tandem, similar to how radar and autopilot work together on a plane.Footnote 289 The radar identifies a hazard ahead, and the autopilot shifts the plane’s speed or altitude to avoid it. Likewise, the Earth Mover’s Distance identifies the variance between the audiences, and the VRS works to shrink that variance.Footnote 290
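To illustrate the radar/autopilot analogy in code, the following deliberately simplified loop repeatedly nudges a hypothetical actual-audience distribution toward the eligible distribution until the measured variance falls below a target. It is a conceptual sketch of ours only; Meta’s actual VRS is proprietary and far more sophisticated.

```python
# Conceptual sketch of a variance-reduction feedback loop, in the spirit
# of the radar/autopilot analogy. This is NOT Meta's VRS; it only shows
# how a measured variance (the "radar") can drive an adjustment step
# (the "autopilot") until a target is met. All numbers are hypothetical.

eligible = {"male": 0.50, "female": 0.50}
actual   = {"male": 0.40, "female": 0.60}
TARGET = 0.02   # stop once total variance is at or below 2%
STEP = 0.25     # fraction of each group's gap closed per round

def emd(p, q):
    """Half the sum of absolute share differences (total variance)."""
    return sum(abs(p[g] - q[g]) for g in p) / 2

rounds = 0
while emd(eligible, actual) > TARGET:
    # Shift each group's actual share partway toward its eligible share.
    actual = {g: actual[g] + STEP * (eligible[g] - actual[g])
              for g in actual}
    rounds += 1

print(f"variance {emd(eligible, actual):.1%} after {rounds} rounds")
```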
Additionally, under the settlement, Meta must prepare a report every four months confirming that it has met the benchmarks for the previous four-month period.Footnote 291 Importantly, Meta and the DOJ selected an independent, third-party reviewer “to investigate and verify on an ongoing basis” whether the benchmarks are being met.Footnote 292 The third-party reviewer, therefore, serves as an objective check on Meta’s compliance with the DOJ agreement.
This settlement agreement marks the first time Meta will be subject to court oversight for its ad targeting and delivery system.Footnote 293 The settlement requires Meta to alter the way its algorithms target and deliver housing ads to ensure compliance with the Fair Housing Act. We believe that this three-step approach to monitoring and measuring harm caused by algorithms can be adapted to assess harm caused by social media platforms to adolescent users in the form of an algorithm risk audit.
A. Legislation Based on the Meta/DOJ Settlement Should Require Social Media Companies to Conduct Algorithm Risk Audits to Reduce Harm to Children
New legislation mandating algorithm risk audits would mirror the three-step approach used in the Meta/DOJ settlement: (1) identify the specific harm(s), (2) determine how to measure the extent of each harm, and (3) agree on reporting periods and benchmarks to reduce harm. Our model legislation does not provide a specific set of harms to be measured; lawmakers could customize it to the kinds of harms they want to address.Footnote 294 To conceptualize how an algorithm risk audit would work, consider the specific harm adolescent users experience when confronted with pro-eating disorder content. Pro-eating disorder content may include very restrictive dieting plans, extreme exercise regimens, and images of very thin bodies intended to serve as “inspiration” for users who see the content.Footnote 295 An algorithm risk audit could be used to measure the extent of this harm, and its findings could lead to social media platforms being pressured, and possibly required (for instance, by attorneys general enforcing existing prohibitions on unfair or deceptive business practices), to alter the way their algorithms function to reduce the harm.
Using the audit’s three-step approach, the first step would be to identify the specific harm. The specific harm might be described as “eating disorder rabbit holes,”Footnote 296 such as when adolescent social media users begin searching for and engaging with content related to mental health and body image and then are progressively shown more and more pro-eating disorder related content.Footnote 297
The second step would be to determine how to measure eating disorder rabbit holes.Footnote 298 For this step, a social media platform might be required to measure the number of users who have made the transition from mental health and body image-related content to pro-eating disorder related content (e.g., an extremely restrictive dieting plan) within a certain number of minutes, hours, or days. The social media platform could measure the users who plunge into eating disorder rabbit holes and compare the demographics of these users. If the specific concern is adolescent users, the social media platform could compare the number of all users who enter eating disorder rabbit holes to that of adolescent users who do. Comparing the difference between these numbers would show whether adolescent users are disproportionately likely to tumble down eating disorder rabbit holes.
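Hypothetically, the measurement could be implemented along the lines of the sketch below: from per-user viewing logs, flag each user whose viewing moved from body-image content to pro-eating-disorder content within a fixed window, then compare the rate among adolescent users with the rate among all users. The content categories, the forty-eight-hour window, and the data are all invented for illustration.

```python
# Hypothetical sketch of the rabbit-hole measurement described above.
# Each user has an adolescence flag and a list of (category, hour)
# viewing events. Categories, the 48-hour window, and all data are
# invented for illustration; a real audit would draw on platform logs.

WINDOW_HOURS = 48

users = {
    "u1": {"adolescent": True,  "events": [("body_image", 0), ("pro_ed", 20)]},
    "u2": {"adolescent": True,  "events": [("body_image", 0), ("cooking", 30)]},
    "u3": {"adolescent": False, "events": [("body_image", 0), ("pro_ed", 90)]},
    "u4": {"adolescent": False, "events": [("body_image", 0), ("pro_ed", 60)]},
}

def fell_down_rabbit_hole(events):
    """True if a pro-ED view follows a body-image view within the window."""
    starts = [t for cat, t in events if cat == "body_image"]
    return any(cat == "pro_ed" and any(0 < t - s <= WINDOW_HOURS for s in starts)
               for cat, t in events)

def rabbit_hole_rate(group):
    flags = [fell_down_rabbit_hole(u["events"]) for u in group]
    return sum(flags) / len(flags)

everyone = list(users.values())
adolescents = [u for u in everyone if u["adolescent"]]
print(f"all users: {rabbit_hole_rate(everyone):.0%}")       # -> 25%
print(f"adolescents: {rabbit_hole_rate(adolescents):.0%}")  # -> 50%
```

A disproportion between the two rates, as in this toy data, is exactly the kind of evidence the audit’s third step would track against agreed benchmarks.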
The social media platform and the governmental body that enacted a law requiring an algorithm risk audit would then move to the third step—agreeing on reporting periods and benchmarks to reduce harm. Determinations for reporting periods and benchmarks could be made in collaboration with the social media platforms and some governmental entity serving as an enforcement group for the law, including the enacting legislative body, state attorneys general offices, or state administrative agencies. The enforcement group could determine that the social media platform needs to implement a new system, similar to Meta’s development and implementation of the VRS, to alter its current algorithm to address the disparate impact it has on adolescent users. In implementing such a change, the parties would need to determine benchmarks for improvement and reporting periods to ensure compliance with those benchmarks. Reporting periods could be required at any reasonable rate, such as on a quarterly, monthly, or even weekly basis. Similar to the Meta/DOJ settlement, a law requiring algorithm risk audits would require that the reports be evaluated by a third-party, independent reviewer to ensure compliance with the benchmarks agreed upon by the parties.
Our proposed legislation for algorithm risk audits, however, would move beyond the requirements dictated by the Meta/DOJ agreement. In addition to the three-step approach, a law mandating algorithm risk audits would require public disclosure of a social media platform’s compliance with the agreed upon benchmarks. Indeed, the compliance reports developed by the platform, and reviewed by a third-party, should be made publicly available. These reports could be required on a quarterly, monthly, or even weekly basis, allowing policymakers to determine the necessary frequency. This level of transparency would encourage social media platforms to be diligent in their prevention of harms caused by algorithms. Additionally, public disclosure would provide users with information about a platform’s algorithmic practices, including its benefits and harms, which would allow users to choose whether or not to use a platform that employs such an algorithm.Footnote 299 Further, required public disclosure would provide data to researchers examining the potential harms caused to adolescents by social media and could also inform policymakers as to the actual risks of harm and inspire concrete legislative solutions to remedy it.Footnote 300
Significantly, while harms caused to adolescents by social media platforms are currently criticized as theoretical in nature, algorithm risk audits would generate evidence of specific instances of harm that could add significantly to the mounting evidence demonstrating a causal link between social media platforms’ business practices and harm to adolescents.Footnote 301 Indeed, if social media platforms are able to alter their practices to comply with benchmarks required under a law of this kind, it might indicate that these platforms have at least some control over the harms their algorithms cause. Policymakers, state attorneys general offices, and state administrative agencies could therefore pursue lawsuits aimed at holding social media platforms accountable for the harm caused to adolescent users that they negligently create and ignore.
Notably, to create an algorithm risk audit, a social media platform would need to share data with the independent third-party assessing compliance with the agreed upon metrics.Footnote 302 Social media platforms may object to this, fearing that trade secrets or proprietary information would be exposed, which might allow competitors to gain a business advantage. However, a law requiring algorithm risk audits could require that only the measured harms to adolescents be publicly disclosed and not the company’s proprietary data.Footnote 303
Finally, and significantly, a law requiring algorithm risk audits would survive a constitutional challenge under the First Amendment due to its content-neutral nature. An algorithm risk audit would not regulate content or speech on social media platforms, nor prohibit the use of particular algorithms; rather, it would measure the effects an algorithm has on its users.Footnote 304 Such evidence could be used to help establish causation, the most difficult element to prove in FTC claims against businesses for unfair or deceptive practices and in products liability claims. Such findings could be a catalyst for attorneys general to enforce state laws aimed at preventing harm caused by social media platforms, including the California Age Appropriate Design Code. Publicly disclosed algorithm risk audits would, therefore, provide vital new evidence needed to compel social media companies to change their harmful practices.
VIII. Conclusion
Mental and physical health injuries to children and adolescents caused by harmful algorithm feeds on Instagram, TikTok, and other social media platforms are far-reaching and must be confronted as a public health crisis. Social media companies employ relentless feeds of algorithm-driven content to keep young users engaged on their platforms, generating billions of dollars in annual revenue from advertisers targeting ads at children. With such economic incentives, platforms will not take it upon themselves to cure practices that harm young social media users. That task must fall to policymakers in Congress and state legislatures. Any new law must be careful not to run afoul of First Amendment free speech protection for social media platforms and must circumvent the immunity currently granted to social media platforms under Section 230 of the CDA. The Supreme Court, in Twitter, Inc. v. Taamneh and Gonzalez v. Google, recently declined to diminish the immunity from liability social media companies currently enjoy under Section 230, but it appeared to leave intact a revenue-sharing theory under which a plaintiff may allege that a platform commercially profiting from an algorithm that pushes illegal content is an information content provider, thereby removing the immunity protection of Section 230 and opening the platform up to liability. Further, laws such as the California Age Appropriate Design Code, which requires Data Protection Impact Assessments, while positive, do not provide enough enforcement to be truly effective in curbing social media harms. Claims lodged by state attorneys general against platforms for unfair or deceptive business practices will also fail if the causal link between social media practices and harm to minors cannot be established.
To best accomplish this, social media companies should be required to conduct algorithm risk audits that identify specific sections of computer code as deceptive design elements. The U.S. Senate is contemplating a law, KOSA 2.0, that would mandate risk audits of social media algorithms by independent third parties, with the results made public. But the bill faces opposition from respected civil liberties groups and Big Tech, which raise concerns about young people’s privacy and First Amendment rights, and it may fail to garner the necessary legislative support in the U.S. House of Representatives to become law. Thus, state legislatures must be urged to craft legislation that requires algorithm risk audits, and the neutrality and transparency of those audits are imperative. Any algorithm risk audit that a social media company conducts must be administered by an independent, third-party auditor, and the results should be publicly disclosed. Such disclosure will allow law enforcement organizations, such as attorneys general, and researchers examining the risks and benefits of social media practices to access the audits’ findings. Requiring algorithm risk audits is a crucial step toward protecting children, who risk their mental and physical well-being when they delve into the relentless algorithmic information feeds of social media.
Funding
This study was supported by the Becca Schmill Foundation and the Strategic Training Initiative for the Prevention of Eating Disorders. A Raffoul is supported by the Canadian Institutes of Health Research Institute of Population and Public Health grant MFE-171217. SB Austin is supported by the US Maternal and Child Health Bureau training grant T76-MC00001. The funders were not involved in the conduct of the study. The authors do not have financial conflicts of interest with this study.