1. Introduction
In November 2021, leaked internal documents revealed that the software Facebook used to automatically remove hate speech was heavily biased against minorities: around 90 per cent of ‘hateful’ posts removed criticised white people and/or men.Footnote 1 External studies also suggest that Facebook’s content moderation systems disproportionately target marginalised groups.Footnote 2 Facebook has since changed its race-blind moderation policies, which ignored systemic inequalities between racial groups and treated speech targeting privileged and marginalised groups as equivalent.Footnote 3 However, discrimination will almost certainly persist, for example due to widespread bias in artificial intelligence (AI) classifiers.Footnote 4 Other platforms also mostly operate race-blind policiesFootnote 5; user accusationsFootnote 6 and survey evidenceFootnote 7 suggest they exhibit similar racial biases. Most major platforms also strictly ban content deemed ‘adult’ or sexually suggestive, and content related even tangentially to sex work. These policies not only materially harm sex workers,Footnote 8 but are typically enforced arbitrarily and disproportionately against women of colour and LGBTQ+ people.Footnote 9
Content moderation has become a particular focus of academic and political debates, but moderation policies like these are not the only ways platforms reinforce existing social inequalities. Targeted advertising-based business models require continual surveillance and classification of users, which both reflects and reproduces existing social inequalities.Footnote 10 Gender- and race-based targeting enables discrimination in employment and housing adverts,Footnote 11 and facilitates advertising campaigns which exploit reductive stereotypes.Footnote 12 Platforms impose simplistic and reductive identity categories, like binary gender classifications,Footnote 13 to fulfil marketers’ demands for gender-segregated audiences.Footnote 14 Such business models not only enable direct discrimination, but also channel online culture and communication in predictable, advertiser-friendly directions that reinforce dominant social norms. According to the most heavily-promoted social media content, women wear makeup, while men play video games.Footnote 15 Queer people can only be visible if they are desexualised and embrace heteronormative lifestyles.Footnote 16 As Iris Marion Young emphasises, restrictive social norms and stereotypes like these are as much a form of injustice as more obvious forms of discrimination and maldistribution.Footnote 17
Social media regulation must do more to address these manifestations of social inequality. So far this has not been a focus for European lawmakers, who have instead prioritised issues like misinformation, extremism, and copyright infringement. However, these problems are beginning to be recognised. Initiatives including the self-regulatory Code of Conduct on Hate SpeechFootnote 18 and the 2018 Audiovisual Media Services Directive (AVMSD),Footnote 19 as well as the 2022 Digital Services Act (DSA),Footnote 20 contain provisions aimed at addressing discrimination and prejudice. They also address broader issues, such as opacity and arbitrariness in content moderation, which are relevant to addressing bias and inequality.
The predominant framework through which the European Union (EU) addresses such issues is that of fundamental rights.Footnote 21 In various legislative measures, mandating respect for users’ rights serves as a key safeguard against state and corporate censorship. Now, under the DSA, platforms will increasingly have to consider human rights when implementing moderation and content governance policies.Footnote 22 Moreover, human rights have been the normative starting point for much academic commentary and activism around social media regulation. They function as a default moral standard against which law and policy can be judged, and a widely-accepted normative basis for calls for reform.
This paper aims to question the dominance of rights frameworks in law, policy and academic debates around social media. Without dismissing the importance of human rights, it argues that their ubiquity as the rarely questioned moral yardstick against which all platform and state policies are measured is problematic. As a legal framework, even if human rights protection is made more effective and comprehensive, it is structurally unsuited to addressing systemic, collective and cultural issues that are irreducible to discrete decisions or individual victims. As a mode of political discourse, human rights favour certain (liberal, individualistic) framings of problems and solutions and distract from issues which are not easily framed in these terms. Thus, human rights are not well suited to addressing structural inequalities in social media governance, and should not be the sole legal or normative framework for regulation and research. Research concerned with social media’s unequal impacts would benefit from drawing on and developing alternative normative frameworks and values which place more emphasis on structural conditions and the distribution of power and resources: these could, for example, start from theories of data justice,Footnote 23 media justice,Footnote 24 or democratic legitimacy.Footnote 25
The paper proceeds as follows. Section 2 outlines the roles of human rights in EU social media law and surrounding academic debates. Section 3 discusses the limitations of rights-based legal frameworks. It first shows that individual legal rights are not only practically incapable of offering effective and equal protection, but also structurally unsuited to addressing systemic inequalities. It then addresses the counter-argument that fundamental rights in EU social media law function not only as individual legal protections, but also as general principles, arguing that the indeterminacy of these principles makes them equally incapable of effectively addressing systemic inequalities. Section 4 argues that the dominance of human rights framings in legal and policy debates about social media does not serve progressive goals. It reinforces corporate power by suggesting that corporations can legitimately control the online public sphere if they make minor operational reforms, and sidelines criticisms of their business models and market structure. Finally, Section 5 considers reconfigured human rights frameworks which arguably better address the paper’s key concerns. Attempts to reinterpret human rights for the digital age in more structural or collective ways cannot fully overcome the limitations identified. However, arguments from critical race theory that human rights are imperfect but necessary in the absence of widely supported alternatives should be taken seriously. Research and advocacy should use rights discourse and litigation pragmatically, while recognising their limitations and more strongly emphasising alternative frameworks which centre political economy, structural oppression and collective action.
2. Fundamental rights and social media
Since the 2000 E-Commerce Directive (ECD) set the baseline standards for European platform regulation,Footnote 26 the role of fundamental rights has markedly increased. The ECD primarily pursued economic objectives, instituting a liberal regulatory regime in order to develop the nascent internet industry, inspired by the Clinton administration’s pro-market internet policies.Footnote 27 In contrast, the preamble to the DSA – which overhauls and updates the ECD – states that ‘protection of fundamental rights’ is one of its primary objectives, alongside two other classic liberal values: public security and market development.Footnote 28 This growing importance can be explained partly by the increased social and political significance of online platforms, which has produced a consensus that they must be regulated to protect ‘European values’,Footnote 29 and partly by broader developments in EU fundamental rights law, such as the Charter becoming binding and the developing jurisprudence on states’ positive obligations.Footnote 30
Section 2(A) first outlines the four main ways EU social media law draws on fundamental rights. Section 2(B) then briefly reviews academic literature on EU fundamental rights and international human rights law (IHRL) in this context. Legal literature on social media is extensive and diverse, but a large proportion of critical EU law research relies on fundamental rights as the key normative framework, focusing on potential threats to or inadequate protection of fundamental rights. Outside the EU, activists and academics have influentially argued that IHRL should play a greater role in social media governance.
A. Fundamental rights in EU law
Fundamental rights play four major roles in EU social media law. First, mandating respect for rights frequently serves as a key safeguard against excessive censorship by platforms. For example, the controversial 2021 Terrorist Content Regulation (TCR) requires platforms to remove content flagged by law enforcement within one hour.Footnote 31 Its recitals repeatedly mention that authorities and platforms handling removal orders must respect fundamental rights.Footnote 32 The even more controversial 2019 Copyright Directive (CD) created a new liability regime in which platforms must make best efforts to ensure unavailability of unlicensed copyright works, effectively requiring filtering of all user uploads.Footnote 33 Article 17(10) states that industry best practices on enforcement should be guided by fundamental rights.Footnote 34
In this context, as well as serving as guiding principles, fundamental rights safeguards are operationalised through legal rights for individual users to contest moderation decisions. Article 17(9) CDFootnote 35 and Article 10 TCRFootnote 36 require platforms to offer appeals systems for users whose content is removed pursuant to platforms’ legal obligations. The DSA extends this obligation to all moderation, and additionally allows users to appeal to independent out-of-court institutions.Footnote 37 This regulatory approach ultimately aims to prevent arbitrary moderation by empowering individuals to defend their rights.
Second, several recent EU regulations incorporate ‘private ordering’: platforms are required to take action to pursue certain goals, but can decide largely autonomously how they do so.Footnote 38 Such legislation typically envisages fundamental rights as a guiding principle for, and constraint on, such actions. For example, under Article 5 TCR, platforms must take ‘specific measures’ to address terrorist content; an indicative list of possible measures is given, but platforms have broad discretion.Footnote 39 Similarly, Article 28b AVMSD requires video-sharing platformsFootnote 40 to take appropriate and proportionate measures to protect children from harmful content, and the public generally from illegal content such as hate speech.Footnote 41 Both provisions specify that these proactive measures must have regard to users’ fundamental rights. The DSA gives fundamental rights an even broader role. Article 14(4) requires all platforms to have regard to users’ rights when implementing moderation policies.Footnote 42 Articles 34 and 35 require ‘very large online platforms’ with over 45 million EU users to regularly evaluate and address ‘systemic risks’ to various public values, including fundamental rights.Footnote 43
Third, fundamental rights principles serve as judicial constraints on EU legislation. In the leading SABAM cases,Footnote 44 the European Court of Justice (ECJ) established that the ECD’s prohibition of general monitoring obligations means hosting platforms cannot be required to check all uploads for illegal material. Central to both decisions was the balancing of copyright owners’ IP rights against users’ rights to privacy and freedom of expression and information, and platform companies’ freedom to conduct a business. More recently, the Polish government brought an unsuccessful judicial review case against Article 17 CD, arguing that requiring advance filtering of user content (which is inevitably somewhat over-inclusive) was incompatible with fundamental rights, in particular freedom of expression. The ECJ held that Article 17 is compatible with fundamental rights, but only if interpreted narrowly, to minimise removal of non-infringing content.Footnote 45
Finally, rights play a prominent rhetorical role in orienting and legitimising EU social media regulation. As noted above, fundamental rights protection is framed as a key objective and guiding value of the DSA – both in the legislation itself,Footnote 46 and in EU officials’ surrounding press statements.Footnote 47 This shared understanding can shape the functioning of the EU regulatory regime as a whole, including aspects which are not based on individual fundamental rights claims, such as the regulatory oversight by the Commission and national digital services coordinators (DSCs).Footnote 48 The consensus that fundamental rights protection is a key goal of the DSA can be expected to influence how these actors engage with platforms and understand their regulatory responsibilities. This is especially the case as some aspects of the regulatory framework explicitly encourage them to frame their goals in rights terms. For example, industry codes of conduct led by the Commission must address the categories of ‘systemic risk’ specified in Article 34(1), which prominently include risks to fundamental rights.Footnote 49
B. Human rights in academic debates
This framing is also dominant in academic debates. Much of the leading scholarship on EU platform regulation evaluates its compatibility with fundamental rights.Footnote 50 For other scholars, fundamental rights provide a general normative orientation to analyse the implications of practices like algorithmic recommendations and guide recommendations for future regulation.Footnote 51 This is by no means the only normative perspective represented in the literature. For example, competition scholars have focused on market structure and economic power,Footnote 52 while others address broader social concerns including media pluralismFootnote 53 and the commercialisation of online discourse.Footnote 54 Overall, however, fundamental rights provide a prominent, widely accepted normative framework for critiques of social media law.
Another, more international body of literature calls for IHRL to play a greater role in social media governance.Footnote 55 Three broad strands in this literature can be identified. First, there have been calls for platforms to voluntarily respect or formally consider IHRL in decision-making processes, as a form of corporate social responsibility commitment.Footnote 56 This has been advocated prominently by former UN freedom of expression rapporteur David Kaye,Footnote 57 as well as the coalition of human rights NGOs behind the Santa Clara content moderation principles.Footnote 58 Such calls typically attach particular weight to the UN Guiding Principles on Business and Human Rights (UNGPs), which outline a ‘moral responsibility’ for businesses to respect IHRL.Footnote 59 In a sympathetic but critical overview of this literature, Evelyn Douek identifies six key commonly-cited benefits: strengthening legitimacy; providing globally applicable norms; providing a ‘common vocabulary’ for debates; helping companies resist state censorship demands; providing procedural safeguards; and being ‘the least-worst option available’.Footnote 60 She also highlights difficulties including the indeterminacy of IHRL and the possibility that it will simply serve as legitimating rhetoric for companies, but concludes that it can be useful and calls for multi-stakeholder debate to further develop IHRL norms for the social media context.
A second strand makes similar arguments for IHRL’s benefits, but places more emphasis on reining in platforms whose scale and power threaten users’ rights. Accordingly, instead of voluntary commitments, it calls for dominant platforms to be directly bound by IHRL. For example, Agnès Callamard argues that, in an ideal world, a new international human rights treaty would create duties for large social media platforms (while acknowledging that this is unlikely and has drawbacks in practice).Footnote 61 Other authors interpret existing IHRL norms as already creating such obligations in some circumstances – in particular, where dominant ‘gatekeeper’ platforms can significantly restrict freedom of expression.Footnote 62
A final strand emphasises states’ positive obligations, which are well-established under major human rights treaties. The UNGPs state that businesses are morally obliged to respect human rights, but states are legally obliged to protect them, including by regulating businesses. Positive obligations to protect rights including freedom of expression by regulating private actors are also well-established in European human rights jurisprudence.Footnote 63 Accordingly, it is argued that states must regulate social media to ensure platforms do not significantly interfere with users’ rights.Footnote 64 Like the EU law literature discussed above, such perspectives criticise state regulation which mandates or incentivises over-broad censorship. They also call for regulation to ban arbitrary censorship by platforms, and to require proactive action on issues like online hate speech.Footnote 65
3. Human rights as a legal framework
Critical scholarship has long argued that legal frameworks based on individual rights are unsuited to addressing systemic inequalities. These arguments are highly relevant in the context of social media, where unequal outcomes like those discussed in the introduction principally result from systemic issues which are not reducible to individual rights. Building on established critiques of human and fundamental rights law, Section 3(A) shows that the individual procedural rights established in the CD, TCR, and DSA are incapable of addressing systemic inequalities in content governance. Section 3(B) then discusses fundamental rights as general guiding principles for social media governance, showing that they are in practice unlikely to place meaningful constraints on social media companies.
A. Individual rights claims
David Kennedy suggests that human rights frameworks often emphasise ‘participation and procedure’ over material resources and capabilities.Footnote 66 This tendency is highly visible in EU social media policy. Individual procedural rights – most importantly the ability to appeal content removals – have become the go-to mechanism to strengthen accountability in content moderation. In the CD and TCR, these appeals procedures are the key safeguard against state-mandated or private censorship.Footnote 67 In the DSA, they are one element of a broader framework which also involves more systemic safeguards, such as oversight by national DSCs, and general mandates for content policies to respect fundamental rights.Footnote 68 However, the notice and appeal procedures involve the most concrete, detailed and extensive obligations.Footnote 69 One expert has described the DSA as ‘in its essence…a digital due process regulation bundled with risk management tools’.Footnote 70
Giovanni De Gregorio describes this regulatory approach as ultimately based on a liberal ethos which aims to protect individual autonomy and dignity by enabling individuals to understand and contest decisions which affect them.Footnote 71 This very much aligns with the liberal values traditionally identified as central to human rights law.Footnote 72 It is plausible that the reliance on fundamental rights as the primary safeguard against abuses of power in the EU legal framework, and their prominence in surrounding political discourse, have predisposed EU legislators towards this approach – especially as such procedural remedies are often a key demand of scholars and NGOs advocating a greater role for human rights in social media governance.Footnote 73
However, regulation focused on individual rights and due process is inherently limited in two ways. First, even where individual rights claims are in principle relevant, in that the issues at hand primarily affect legally protected interests of identifiable individuals, in practice they inevitably offer imperfect and unequal protection. Second, more fundamentally, individual legal rights simply cannot address many important issues related to contemporary social media: in particular, those that result from systemic or institutional design choices, and that affect collective rather than individual interests.
Practical limitations
Even where decisions directly affect individual rights and are open to challenge by those individuals – for example, when platforms arbitrarily censor content that does not violate their stated policies, or content protected by mandatory copyright exceptions – several factors suggest that most people’s rights will not be effectively protected in practice. First, these procedural protections are unlikely to be widely used. Evidence from the longstanding notice-and-takedown system in US copyright law suggests that people rarely submit appeals.Footnote 74 Many users simply lack the time, energy or motivation. They may also be intimidated by needing to state that their content does not infringe copyright, potentially starting a legal conflict with corporate rightsholders,Footnote 75 while often facing intentionally off-putting messages from platforms.Footnote 76 Similar patterns can be expected in other policy areas. Non-expert users are unlikely to confidently understand legal categories such as hate speech, and research shows that users generally have little understanding of platforms’ content policies.Footnote 77 This will inevitably limit their ability to challenge platforms’ application of these norms. The copyright literature suggests that formally protecting rights without considering the context in which they are exercised, and the imbalances of knowledge and power between users and platform companies, is unlikely to be effective in practice.
Second, rights frameworks inevitably fail to represent everyone’s rights equally. Generally, individuals with more economic and social capital are more likely to have the time and informational resources needed to enforce legal rights.Footnote 78 Inequalities in digital literacy are also highly relevant in this context: many people, disproportionately the economically and socially disadvantaged, do not understand basic features of social mediaFootnote 79 and are thus unlikely to engage with appeals procedures. Some evidence also suggests that women are less likely than men to submit appeals, and more likely to be discouraged from sharing content in future.Footnote 80 In Germany, some users have successfully sued platforms for violating their own terms and conditions – which must be interpreted so as to adequately respect users’ constitutional rights – by removing content arbitrarily or without notice. A recent analysis of the case law indicates that this possibility has so far mostly been used by right-wing men whose content was removed as hate speech.Footnote 81 While some or all of those cases may have been well-founded, this skewed uptake of rights meant to protect everyone illustrates the practical limitations of individual legal rights in promoting equality.
Insofar as enforcement of legal rights reflects existing social inequalities, their distributional effects will tend to be regressive, shifting limited resources towards those individuals who are best able to enforce their rights.Footnote 82 Compliance with the DSA’s detailed procedural obligations is expected to be resource-intensive.Footnote 83 This is especially relevant for smaller platforms, but leaked information and journalistic investigations suggest that even at the largest and wealthiest companies, moderation and security teams are overstretched and under-resourced.Footnote 84 These procedural rights might thus not only disproportionately benefit relatively privileged individuals, but could divert resources from other areas – such as systemic improvements to moderation processes,Footnote 85 or research into safer technological designFootnote 86 – that might bring more benefits to marginalised users.
Notably, some of these limitations are recognised in Advocate General Øe’s Opinion in Poland’s judicial review case against Article 17 CD. He clearly acknowledges that appeals procedures are insufficient to protect freedom of expression and information: many users will not submit them, and even successful appeals may be too late for the content to have its intended impact (he does not address inequalities between users).Footnote 87 To minimise mistaken removals, his favoured interpretation requires platforms to block only content which manifestly infringes copyright and is not covered by an exception.Footnote 88 This was largely followed by the ECJ, which did not use the word ‘manifest’ but stated that content should not be filtered where determining infringement requires independent (manual) assessment.Footnote 89 The judgement thus recognises that individual procedural protections are an insufficient safeguard against censorship: filtering systems must rather be designed to minimise it from the start.
However, neither Opinion nor judgement specifies how this should be legally guaranteed. Advocate General Øe suggests platforms blocking non-infringing content could lose their immunity from intermediary liability for infringementFootnote 90 – which would produce the odd scenario that users’ rights to share content based on or similar to copyright material rely on litigation by copyright owners. Otherwise, concrete legal solutions are not proposed. Instead, both judgementFootnote 91 and OpinionFootnote 92 state that member states must ensure that their implementing legislation and its supervision by judicial and administrative authorities protect non-infringing content, and that the stakeholder dialogues on best practices for content filtering must ensure protection of users’ rights.
The fact that both leave it to other actors to determine how overblocking should be prevented could be taken to illustrate the difficulty of identifying solutions to structural problems while thinking in terms of individual rights. However, even if this reticence is explained on other grounds, such as the ECJ’s limited institutional competence to devise solutions, it is unclear how effective any national-level safeguards will be. Most Member States so far largely transposed Article 17 word-for-word,Footnote 93 and copyright scholars disagree over what (if any) further action the judgement requires.Footnote 94 Overall, the Article 17 litigation suggests that a structural approach to content governance, protecting collective interests in free exchange of information as well as individual rights, must be clearly incorporated into EU legislation from the outset and not only read in as an afterthought.Footnote 95
Structural limitations
This points to broader limitations of rights frameworks. Even if legal rights could be perfectly and fairly enforced, critics have argued that their inherently individualistic nature makes them unsuited to addressing structural and collective problems. This line of criticism is related to leftist and feminist critiques of legal frameworks focusing on formal equality – often protected through individual rights to equal treatment – over substantive or transformative equality, understood as requiring structural or institutional change.Footnote 96 Similarly, critical legal studies (CLS) and law and political economy scholars have argued that pursuing equality requires a more holistic analysis of how legal institutions allocate power and resources, and make some people more vulnerable to rights violations than others.Footnote 97 Generally, rights claims are not suited to articulating the need for systemic or institutional reforms,Footnote 98 instead offering individuals ‘empowerment…understood as agency within existing constraints’.Footnote 99
In technology law, similar arguments have been forcefully made by privacy and data protection scholars. Leading scholars have increasingly sought to reframe privacy as a collective value serving social interests like political and intellectual freedom, not (only) an individual interest,Footnote 100 and have argued that even perfectly-enforceable rights could not adequately address the social impacts of contemporary data-processing practices, which are essentially collective.Footnote 101 A person’s data is primarily valuable not as information about them, but as part of much bigger datasets which can be used to infer information about and act on others. Thus, data processing may respect the data subject’s rights, but harm other people or society generally. Addressing these effects requires normative frameworks which centre collective interests and democratic control.Footnote 102
These arguments are also highly relevant in the social media context. Many of the most consequential and concerning practices of contemporary social media companies do not primarily involve decisions that directly affect individuals, but higher-level decisions about how technical and operational systems are designed. This has most comprehensively been shown by Douek, who argues that ‘the scale and speed of online speech means content moderation cannot be understood as simply the aggregation of many (many!) individual adjudications’.Footnote 103 The most salient questions are not about individual posts, but about how moderation systems function and, since errors are inevitable, which types of error will be preferred. Rights to understand and contest individual moderation decisions do not allow users to challenge the systemic choices that structure them, even though it is these that underlie many of the systemic biases discussed in the introduction.
For example, strict bans on sexual content – in place at almost all major platformsFootnote 104 – tend to be enforced arbitrarily and disproportionately against LGBTQ+ users,Footnote 105 for several reasons. On the one hand, policy enforcement frequently discriminates against LGBTQ+ people, due to algorithmic bias in AI moderation tools (which are often built on image classification datasets pervaded by homophobia,Footnote 106 or default to blunt censorship of LGBTQ+-related keywordsFootnote 107) and widespread ideological biases which make human moderators disproportionately likely to see queer sexuality as adult or inappropriate.Footnote 108 On the other hand, even assuming unbiased enforcement would be possible, many queer communities place particular value on open and unconventional expressions of gender and sexuality, meaning they will be disproportionately harmed by policies banning explicit or suggestive content.Footnote 109 The appeals processes established by Articles 20–21 DSA would allow users to challenge instances where content which clearly respects platforms’ stated policies on adult content is removed – but not the reasonableness of the policies themselves, the AI systems used to implement them, the widespread cultural prejudices against queer sexuality, or the broader assumptions that all social media should be policed so as to be safe for children.Footnote 110 Individual rights claims cannot achieve the systemic changes to policies, technical tools and company practices that would be necessary for substantively equal treatment of LGBTQ+ users. Nor do they offer democratic oversight or contestation of how these systems are designed.
These rights are also structurally incapable of representing all relevant interests. In particular, enabling individuals to challenge removal of their content fails to represent the collective interests of the content’s potential audience.Footnote 111 Equally, although the DSA in principle allows challenges to decisions not to remove content,Footnote 112 harmful content such as hate speech or misinformation often primarily affects collective interests rather than identifiable individuals, making such challenges less likely.Footnote 113 These problems reflect established limitations of rights frameworks. As Salomé Viljoen demonstrates in the privacy context, they cannot address decisions that are directly about one person, and respect their rights, but have harmful downstream effects for others or for society generally.Footnote 114
Moreover, in addition to moderating content, platforms make many other governance decisions with systemic effects on user behaviour and information flows. These include the content and interactions technically permitted by interfaces (for example, the ease of commenting on strangers’ posts affects the incidence of abuse and harassmentFootnote 115), the presentation of content (for example, TikTok’s presentation of short-form videos with little context appears to encourage misinformationFootnote 116), and recommendation systems (for example, Instagram’s algorithms appear to recommend photos more when users wear revealing clothingFootnote 117). These choices have a much broader impact than individual moderation decisions, but are not easily addressed through individual rights. This may be part of the reason that platform recommendations and other design choices are left largely unregulated by the DSA.Footnote 118 Its focus on content moderation as the primary area of concern makes sense within a rights framework which emphasises harms to identifiable individuals at the expense of broader questions about how platforms shape online media and communications.
Citing Young,Footnote 119 Anna Lauren Hoffmann argues that focusing on ‘rights, opportunities and resources’ fails to capture injustices stemming from the ways information technologies ‘shape normative standards of identity and behavior’.Footnote 120 Such concerns are particularly relevant for social media, as intermediaries for all kinds of media and cultural consumption.Footnote 121 Research in creator studies has shown that professional social media creators perceive moderation and recommendation systems as pervasively biased in favour of white, straight, conventionally attractive creators,Footnote 122 and that the most successful creators are often those whose content conforms to dominant norms and stereotypes around gender, race and class.Footnote 123 This has implications not only for equality between creators, but for social norms and culture much more broadly. For example, survey evidence shows that social media reinforce gendered beauty standards which young women and non-binary people experience as oppressive and often distressing.Footnote 124 Such diffuse, cumulative impacts on culture and media cannot be addressed through individual rights, but require more systemic consideration of the logics and objectives of content curation systems.
Overall, EU regulation of content moderation lends support to arguments that rights frameworks tend towards conservatism, offering better treatment for individuals within existing social institutions rather than institutional reform or democratic governance.Footnote 125 Alexander Somek has argued that EU anti-discrimination law essentially aims to guarantee all individuals fair access to markets, unhindered by irrational prejudices, as opposed to alleviating dependence on markets or the unequal outcomes they produce.Footnote 126 This could aptly describe the DSA’s regulatory approach. Platforms can determine according to their own business interests what content to allow, how their policies are enforced, and how they organise and promote content; procedural rights simply aim to guarantee fair access to these market services. As this section has shown, not only do these rights fail to offer substantively equal protection to all users; they are fundamentally incapable of addressing systemic biases and inequalities in social media governance.
B. Rights as principles
A possible counterargument to this is that individual legal claims are not the only, or even the primary way that fundamental rights are understood and protected in EU social media law. As section 2(A) outlined, they are also operationalised in numerous provisions as general guiding principles for companies and regulators. Private ordering measures like those in the TCR and AVMSD require platforms to consider fundamental rights when implementing their legal obligations,Footnote 127 while the DSA gives fundamental rights a broader guiding role. Article 14(4) requires platforms to be ‘diligent, objective and proportionate’ and have regard to fundamental rights whenever applying and enforcing their terms and conditions. Notably, it explicitly mentions the right to media freedom and pluralism, clearly indicating that fundamental rights are understood here as collective values, not only individual interests. In addition, Articles 34–35 require very large online platforms to regularly assess and take measures to mitigate ‘systemic risks’ to various social values, including fundamental rights.
These provisions could arguably address structural issues like those discussed above, since they require platforms to consider not only whether they are treating individuals fairly, but also whether they are appropriately balancing everyone’s fundamental rights in the design and operation of their systems as a whole – with Articles 34–35 extending to all design and business practices, not only moderation policies.Footnote 128 For example, operating content moderation systems which are systematically biased against LGBTQ+ users could be argued to violate Article 14(4). Articles 34–35 could require large platforms to identify design features that exacerbate problems like misinformation or harassment, and change them to mitigate these risks.
However, the claim that fundamental rights in principle could address certain issues does not mean they are actually likely to be interpreted and enforced in that way. The CLS movement influentially argued that rights (and law generally) are intrinsically indeterminate – meaning that applying them in particular situations inevitably involves significant discretion, and will be influenced by decision-makers’ perspectives and ideologies.Footnote 129 Several factors make the indeterminacy critique particularly relevant in this context, and suggest that fundamental rights will not place significant constraints on platform companies.
First, not only are the Charter rights themselves abstract and open to different interpretations, the legal provisions requiring platforms to take them into consideration are even more vague. What it concretely means for platforms to ‘take into account’Footnote 130 or have ‘due regard to’Footnote 131 rights is unclear, though it seems obviously less stringent than a requirement to ‘respect’ or ‘protect’ them; arguably companies could comply purely by documenting consideration of relevant rights in decision-making processes, without making any substantive changes.Footnote 132 The novel concept of a systemic risk to rights is even less clear: what does it mean for a right to be at risk, and how widespread must that risk be to be systemic? Moreover, almost any content governance decision affects multiple, competing rights.Footnote 133 The requirement to have regard to all of them offers no indication of how to resolve such conflicts. Since there are so many plausible interpretations of the relevant rights and the appropriate mitigation measures, Articles 14 and 34–35 DSA offer virtually no substantive guidance on content governance.
This uncertainty is compounded by the lack of established standards on how Charter rights should be interpreted in the social media context and in relation to private companies. This is particularly relevant to Article 21 of the Charter on non-discrimination, which would obviously be central in mandating platforms to redress systemic inequalities. By default, the Charter only binds EU and Member State institutions. Some rights, including non-discrimination, can bind private actors where they have been concretised by EU legislation.Footnote 134 However, of the various EU anti-discrimination measures, only the 2000 Race Equality Directive (RED) covers all private services; discrimination on other grounds is only prohibited in specific contexts, such as employment.Footnote 135 Thus, the EU law right to non-discrimination generally does not apply to social media, making it difficult to establish what it would mean for social media companies to have regard to this right.
In practice, the meaning of fundamental rights provisions will in the first instance be determined by platform companies themselves, since they are responsible for showing that they have considered relevant rights, with regulators playing the secondary role of overseeing compliance. A likely outcome is that platforms make whatever decisions they would have made regardless, while going through the formalities of risk assessments and using fundamental rights language to justify them.Footnote 136 Where they do make substantive changes, they will probably prioritise the most superficial and least costly measures. Journalistic investigations have documented multiple cases where major companies’ internal research teams identified changes to recommendation algorithms that could reduce the visibility of harmful content, but company executives rejected them because they could reduce engagement and advertising revenue.Footnote 137 Given these business incentives, companies will likely interpret and balance fundamental rights in ways that require only minor adjustments, rather than making the extensive investments in technology and human resources which would be needed to address systemic biases in content moderation, or redesigning platforms to prioritise other goals over profit maximisation.
Of course, national regulators and the CommissionFootnote 138 can shape the interpretation of the relevant provisions and ensure that compliance is not a mere formality. As well as threatening fines for non-compliance where policies are not considered to have due regard to fundamental rights or risk assessments are deemed inadequate, they can develop abstract rights provisions into more concrete and stringent standards: for example, by publishing guidance and helping develop industry codes and best practices.Footnote 139 In turn, independent research and activism can influence regulators’ agendas, pushing them to focus on systemic issues.Footnote 140
However, the indeterminacy of fundamental rights law will still to some extent limit regulators’ ability to put pressure on platforms. If companies produce self-serving but defensible accounts of how they considered and balanced fundamental rights, it will be difficult for regulators to make a clear case for non-compliance. Regulators’ capacities and motivations to push for resource-intensive, systemic reforms can also be questioned. The DSA regime is generally focused on individual due process and committed to a market-based model of social media,Footnote 141 suggesting that the Commission is not aiming for a particularly interventionist approach. The DSA’s procedural obligations are also much more detailed and specific than the open-ended provisions mandating consideration of fundamental rights, and even enforcing these obligations will be resource-intensive for regulators. This may leave little capacity for proactive investigation and oversight of other provisions where establishing non-compliance would be less clear-cut.
Finally, even if fundamental rights are interpreted as representing collective interests and values, they remain unsuited to addressing systemic problems which go beyond particular companies. Critical scholarship on IHRL and EU fundamental rights law has argued that rights frameworks focus attention not only on individual victims, but also on the wrongdoing of individual perpetrators, at the expense of broader social structures which produce inequality.Footnote 142 Similarly, EU social media law operationalises fundamental rights as guiding principles for individual companies; it thus excludes consideration of how moderation and other aspects of social media governance unfold across the industry.
For example, returning to the example of sexual content bans, such policies are unlikely to be regarded as violating rights in individual cases: individuals will rarely be severely harmed by being unable to post on a particular platform, and companies could easily defend their policies as a proportionate restriction of free speech, justified by child safety and by their own business interests in appealing to a wide audience. However, the cumulative effect of requiring almost all major platforms to be child-friendly and free of sexual content is deeply concerning. It bars adults from healthy forms of self-expression, impedes access to sexual health advice, and suppresses queer subcultures.Footnote 143 It also sets questionable boundaries for art and culture more broadly, as when museums are prevented from posting images of nude art.Footnote 144 Rights frameworks do not facilitate discussions of when and in what context society generally, and particular communities, need platforms that permit adult content.
Douek’s call for ‘content moderation as systems thinking’Footnote 145 should thus not be limited to considering system design within individual platforms, but should consider how biases and unequal impacts play out across the social media ecosystem. This cannot be achieved by subjecting individual market actors to fundamental rights principles, but requires broader reform of how the industry is governed.
4. Human rights as political discourse
As this suggests, the reliance on fundamental rights as guiding principles not only encounters practical problems, but also raises broader normative questions about whether this is the most desirable framing for understanding and discussing policy issues. In EU technology regulationFootnote 146 and surrounding civil society advocacy,Footnote 147 ‘fundamental rights’ sometimes seems to be used as a synonym for the public interest. All policy concerns can be understood as threats to rights; stronger rights protection must therefore be the solution. This influences legal and policy debates in ways which are generally unlikely to favour progressive goals.
Critical IHRL scholars have argued that human rights discourse can displace or delegitimise alternative normative frameworks focused on structural and political-economic conditions, democratic governance and equality.Footnote 148 While such effects are difficult to conclusively demonstrate, it is strongly arguable that the predominance of rights framings is displacing other normative frameworks in the social media context. Given the consensus around the importance of human rights, researchers and other stakeholders are incentivised to frame issues in rights terms in order to bolster their authority and attract support. These incentives are now also built into the DSA. To challenge platforms’ content policies under Article 14(4), stakeholders must frame issues in fundamental rights terms. Similarly, researchers requesting access to platform data must show that their research relates to one of the systemic risks categorised in Article 34(1).Footnote 149 Fundamental rights offer the broadest and most flexible category, meaning that unless research involves another more specific area, such as electoral integrity, researchers will generally have to frame issues in terms of their fundamental rights impacts. This may not be completely incompatible with alternative normative frameworks focused on more collective values, like justice or democracy, but is likely to displace them to some extent.
One implication of this is the depoliticising nature of human rights discourse. Human rights purportedly express universal values,Footnote 150 and thus promise a way of making authoritative normative claims which bypass political disagreements.Footnote 151 However, choosing to understand issues in terms of individual legal rights does have political implications. The extent to which the individualism of human rights frameworks favours right-wing politics is disputed,Footnote 152 but they are widely considered to align with liberal ideologies which emphasise individual autonomy over other values.Footnote 153 By focusing on protecting individuals against rights violations by identifiable perpetrators, human rights can divert attention from the macro-level social, political and economic context. Susan Marks describes this as ‘false contingency’: focusing on particular instances of individual harm can make them seem ‘random, accidental or arbitrary’, and therefore fail to address the underlying structural conditions.Footnote 154 Indeed, focusing on isolated rights violations may legitimate these conditions, by normalising activities which do not violate rights directly and obviously.Footnote 155
This depoliticising tendency is apparent in the social media literature. Authors advocating a greater role for human rights often emphasise legal form and procedure over political and normative substance: for example, arguing that IHRL would not prevent platforms from setting content policies at their discretion, but would improve matters by requiring transparency and due process.Footnote 156 The impression is often that they do not mind by whom and in whose interests social media are governed, provided they follow some basic procedural rules. Nonetheless, how media and communications infrastructure are governed is a political question, with distributional implications. The implicit view that the current marketised industry is acceptable, so long as corporations respect human rights and due process norms, is a political and ideological position – one that is rarely explicitly defended.
Another recurring argument is that human rights provide a common language and structured framework to address problems, without necessarily prescribing determinate solutions.Footnote 157 However, it is misleading to present having a common language as unqualifiedly good, as if any shared language is equivalently useful. Such claims have a further depoliticising effect, suggesting that lack of consensus about policy issues is due to miscommunication between stakeholders, rather than power imbalances, conflicting interests or fundamental disagreements.Footnote 158 Human rights language is also technical and accords particular authority to legal experts, which may exacerbate power differentials and limit participation in social media governance by those lacking this expertise.Footnote 159
More fundamentally, language is not neutral, but structures our thinking.Footnote 160 The dominance of fundamental rights framings not only encourages reliance on ineffective individual remedies, but diverts attention from systemic and collective issues, focusing instead on those which are readily understood in terms of harm to individuals. Legal research on social media has generally been dominated by discussions of content moderation, which is easily conceptualised in terms of individual users’ free speech rights, rather than issues like recommendations and platform design. Similarly, scholarship on profiling by social media companies has focused on individual judicial remedies for discrimination,Footnote 161 even as the unequal impacts of surveillance advertising – for example, when women are overall less likely to see a job advert – primarily exacerbate inequalities at the group level.Footnote 162 Discussing injustice in terms of universal rights can also obscure the particular interests and vulnerabilities of marginalised groups.Footnote 163 It is notable that social injustice and discrimination are major themes in other areas of European technology law scholarship, such as AI and workplace technologies,Footnote 164 but have so far been less prominent in social media law, where debates have tended to focus on supposedly universal issues like freedom of expression.
The unequal impacts of social media cannot be adequately understood without considering the political economy of the contemporary industry, which is not readily analysed in rights terms. Given major platforms’ business models, a core aim of their content governance systems is to attract advertisers.Footnote 165 This commercial business model is inherently in tension with aspirations to create more inclusive and egalitarian online environments. Platforms value media content according to its potential to keep users engaged and offer a suitable vehicle for adverts,Footnote 166 and profile users according to their potential value as consumers, which will inevitably reflect structural social inequalities.Footnote 167 Advertisers demand audiences segregated by binary genderFootnote 168 and crude racial categories,Footnote 169 and encourage platforms to ban challenging or controversial content which could threaten ‘brand safety’.Footnote 170 Human rights are not suited to criticising and addressing issues like these, given their inherent bias towards micro-level decisions rather than macro-level political-economic conditions.
This also points to the relevance of arguments that rights frameworks place insufficient emphasis on equality and democratic governance.Footnote 171 Human rights law is structurally oriented towards setting outer limits for acceptable state or corporate action, rather than shaping the underlying logics and objectives which these institutions pursue. In social media governance, rights discourse focuses attention on the details of how companies run their platforms: for example, whether they respect procedural and substantive constraints in individual moderation decisions,Footnote 172 or assess the human rights impacts of particular products or policies.Footnote 173 This distracts attention from more fundamental questions about how, by whom and in whose interests online media should be run. Calling for dominant corporate platforms based on surveillance advertising to operate within limits set by fundamental rights misses the opportunity to envisage alternative governance systems, pursuing different aims that might better serve the public.
Indeed, rights discourse can legitimise and stabilise current configurations of corporate power. Grietje Baars argues that using IHRL to promote ‘corporate accountability’ serves corporate interests by framing injustice in terms of exceptional wrongdoing, rather than the normal functioning of unjust systems.Footnote 174 Conversely, idealistic human rights language allows corporations which avoid obvious violations to position themselves as morally worthy, normalising harmful and unequal impacts of their everyday business practices.Footnote 175 This is apparent in the UNGPs’ claim that companies have a ‘moral responsibility’ to respect rights, and in much of the literature using them to argue that platforms should voluntarily respect IHRL, which appears to assume that, if put under enough moral pressure, platform companies can steward social media in the public interest.Footnote 176 Platform companies – most notably Meta – have played on these assumptions, relying heavily on human rights rhetoric to deflect criticism and portray themselves as socially responsible.Footnote 177
Academic literature often portrays human rights law – perhaps correctly – as unthreatening to corporate interests. Barrie Sander suggests that the inevitability of balancing competing rights would mean platforms could still set discretionary content policies, and emphasises that IHRL obligations would not ‘over-burden’ companies but would be tailored to minimise disruption to business.Footnote 178 Other authors stress that users’ rights must be balanced against platforms’ rights to run their businesses.Footnote 179 Corporate platforms’ current business models and objectives are thus portrayed as not only generally acceptable, but worthy of legal protection, and requiring only minor adjustments. None of these authors argues that human rights compliance will solve all problems; however, the centrality of human rights compliance and relatively superficial reforms such as ‘due process’ in the academic literature can give the overall impression that more structural reforms are unnecessary.
Even where scholars do propose more critical and structural analyses of corporate power, the predominance of human rights discourse and their purported status as apolitical, universally-shared values creates incentives to frame these arguments in rights terms – even where they are obviously motivated by political commitments which are not universal, or reducible to individual rights.Footnote 180 For example, authors emphasising states’ positive obligations to protect human rights often call for structural market reforms, such as stronger competition regulation.Footnote 181 However, given the indeterminacy of human rights norms, especially regarding states’ positive obligations, it is unclear why they should demand one intervention (such as increasing market competition) and not another (such as regulating dominant platforms as public utilities). Human rights provisions are not doing much work here beyond providing general rhetorical support for policy arguments influenced by other political views about how online media should be governed.
The introduction to a recent edited volume on platforms and human rights places great emphasis on economic and infrastructural power, stating that it is ‘concerned with the democratic implications of having an online domain governed by a relatively small group of powerful technology companies’.Footnote 182 It later notes that the biggest platforms ‘may affect billions of users’ human rights’.Footnote 183 If political problems result from the economic structure of an industry dominated by a few powerful corporations, and affect billions of people, the choice of individual rights as the primary framework for thinking through these problems is questionable. Yet this may be (partly) strategic. In David Kennedy’s words, human rights arguments are ‘addressed to an imaginary third eye – the bystander who will solidarise with the (unstated) politics of the human rights speaker because it is expressed in an apolitical form’.Footnote 184 Structural political–economic reform would inevitably be controversial and conflict with elite interests: platform companies have some of the world’s highest stock valuesFootnote 185 and are a major driver of US economic growth.Footnote 186 Authors framing calls for structural reform in rights terms may hope to defuse looming political conflicts and appeal to a wider audience. However, this also comes at a cost. Suggesting that reforms can be based on apolitical shared values rather than open political conflict, and that ensuring powerful actors act morally is more important than redistributing power and resources, will ultimately tend to legitimise the corporate status quo.
5. Alternative human rights frameworks
Criticisms of the liberal–individualistic orientation of human rights are not new. Several authors focusing on technology governance have addressed them by reframing human rights as more structural or collective values. These reconceptualisations are valuable, but are still unlikely to fully address the unequal impacts of social media, a task which requires additional normative frameworks not based on rights. A more significant challenge to the arguments put forward in this article comes from critical race theorists who, despite their criticisms of rights frameworks, nonetheless consider them useful and necessary for social justice movements. Given the longstanding dominance of human rights in social media law, and the political challenges facing more progressive normative visions, rights discourse and legal strategies cannot be abandoned.
A. Structural conceptions of human rights
In technology regulation, several authors have recognised the limitations of individualistic rights and remedies, reframing human rights in terms of collective values or structural conditions. These reconceptualisations are argued to offer more egalitarian, less individualistic approaches to technology regulation, and to better reflect how technological environments condition the enjoyment of rights in practice. While they offer interesting and generative ways of thinking about rights, they do not entirely overcome the limitations identified here.
For example, Sander contrasts ‘marketised’ conceptions of human rights law, which primarily protect individual agency against state intervention, with ‘structural’ conceptions which focus on proactively addressing systemic power imbalances.Footnote 187 He suggests that the ECtHR’s Delfi decision (approving a duty for an Estonian news website to actively monitor all user comments)Footnote 188 took a marketised approach, insofar as it targeted discrete, obvious harms associated with illegal hate speech, while overlooking the systemic risks to free speech created by obliging intermediaries to monitor user activity. In contrast, the Inter-American Commission on Human Rights has taken a more structural approach by holding that intermediary liability laws can have cumulative and systemic effects on freedom of expression which amount to rights violations, even where users’ rights to post legal content are formally protectedFootnote 189; the ECJ’s Article 17 judgement takes a similar position.Footnote 190 Sander advocates wider adoption of structural approaches, suggesting that this would entail greater focus on collective values like media diversity and broader problems in the ‘social media ecosystem’, rather than discrete decisions.Footnote 191
This approach to interpreting human rights law certainly offers advantages. Given the scale of content moderation systems, human rights oversight should address legal interventions’ systemic and indirect effects.Footnote 192 However, insofar as structural interpretations are instantiated by courts in legal claims brought by individuals, many of the problems discussed in section 3(A) will remain. On the other hand, insofar as they serve as more general guiding principles for governments – something Sander favours – the indeterminacy of human rights principles means they offer little guidance unless supplemented with other political commitments, and can easily be interpreted in ways that serve elite interests.
This is illustrated by another of Sander’s examples, the ECtHR’s Animal Defenders International ruling that the United Kingdom’s (UK) strict ban on broadcast political advertising was justified by its positive obligations to protect freedom of expression and free elections, in this case by intervening to prevent undue distortion of political debate by wealthy advertisers.Footnote 193 Considering this case’s institutional context illustrates the benefits and limitations of structural interpretations of rights. The ECtHR’s role is not to positively determine how states should implement human rights norms, but to establish minimum standards of protection. The indeterminacy of rights is embraced, via the ‘margin of appreciation’ doctrine: it balances legal accountability with democratic states’ freedom of action, allowing governments to resolve indeterminacy by making explicitly political judgements about what serves the public interest. Thus, Animal Defenders did not hold that political broadcast advertising must be banned, only that this is one defensible way of balancing negative free speech rights with positive obligations.
In such cases, where human rights function as minimum constraints on state action, the advantage of the structural approach is that it prevents an overly rigid approach to negative liberties which would prevent states from limiting individual rights to pursue collective goals. However, considering structural interpretations of free speech and free elections as positive principles setting out how to regulate political communication does not take us very far. They do not indicate which of the many possible policies to prevent the wealthy from unduly influencing the media are most desirable, or how the distributional effects of these political choices should be evaluated. For example, they do not explain whether it is reasonable for UK governments to ban all broadcast political advertising in the name of equal participation in political debate, while simultaneously embracing an oligopolistic media system where four individuals control three-quarters of newspaper circulation.Footnote 194 Equally, structural interpretations of human rights do not by themselves offer a positive vision for social media governance, unless they are supplemented by more collective and politicised normative frameworks which are not ultimately based on human rights.
Other technology regulation scholars have argued for a different understanding of human rights, which could also be called structural, in that it focuses on how sociotechnical environments condition rights in practice.Footnote 195 Julie Cohen’s argument for ‘rights-as-affordances’ holds that effectively protecting human rights requires understanding them as collective values, because people can only enjoy individual freedoms if the shared material environment accommodates them. Since our technological environment is heavily shaped by private corporations, protecting collective rights-as-affordances must also involve confronting private power.Footnote 196 In the social media context, such structural perspectives can be seen in some human rights activism advocating market interventions like competition regulation, as a means to strengthen freedom of expression by affording users more choice between platforms.Footnote 197
Although this also improves on individualistic rights framings, rights language may not be the only or best way to analyse how private power operates through sociotechnical environments. First, it again sidelines issues that are less easily framed in terms of individual rights. For example, thinking about how technologies afford or deny rights could be a productive way of critiquing the copyright-filtering systems required by Article 17 CD.Footnote 198 However, it is harder to analyse in rights terms the algorithms that determine which content becomes widely visible. These algorithms also raise concerns about how private power is exercised through design, but their effects on culture, social norms and political debate are more diffuse.
Second, the language of rights-as-affordances may focus analysis on technical design choices, rather than the political-economic conditions that shape them. Cohen is an exception, but other scholarship and activism examining how technology conditions human rights primarily emphasise liberal values like individual autonomy, rather than inequality and structural disadvantage.Footnote 199 As famously illustrated by Langdon Winner’s discussion of how New York bridges facilitated racial segregation, sometimes sociotechnical environments are designed precisely to afford rights to some while denying them to others.Footnote 200 Framing such issues in terms of universal rights can depoliticise them, downplaying questions about whose interests sociotechnical environments are built to serve.
For example, Article 17 CD can be critiqued on the basis that it gives too much weight to rightsholders’ interests and not enough to users’ fundamental rights.Footnote 201 However, this does not necessarily capture the actual aims and sociotechnical context of the provision. Annemarie Bridy argues that it was essentially designed around the pre-existing technical affordances of YouTube’s Content ID copyright filtering system, as music and third-party software companies successfully lobbied for platforms to be required to offer such services.Footnote 202 Framing this process as a fundamental rights balancing exercise paints a misleading picture: the technical systems which enforce copyright, and the legal rules around them, were designed from the start to serve private interests, not to achieve concordance between competing universal values. Alternative theoretical approaches, such as Cohen’s detailed account of how corporations shaped the law and political economy of today’s privatised, hypercommercialised internet industry, could provide a more useful starting point for critiques of EU social media regulation.Footnote 203
Overall, therefore, even structural conceptions of rights cannot by themselves address social media’s unequal impacts. They still focus attention on supposedly universal interests and liberal values, diverting attention from institutions and social structures that systematically disadvantage some while benefiting others. Arguments for structural reforms of social media are essentially political, and can be better expressed without using the language of individual rights and universal values. In technology law, a possible alternative approach is exemplified by Niklas Eder’s work on algorithmic decision-making.Footnote 204 He rejects individualistic rights-based solutions because they fail to engage with the systemic patterns and effects of corporate surveillance. Instead, he argues privacy regulation should focus on the legitimacy of surveillance, acknowledging that the concept of legitimacy is open to different meanings depending on underlying political philosophies, and that substantially reforming corporate surveillance would necessarily be politically contentious.
B. Ambivalent views of rights
Another influential rethinking of rights which addresses their capacity to redress structural oppression is provided by Kimberlé Crenshaw, building on other critical race theorists such as Patricia Williams.Footnote 205 Crenshaw directly challenges CLS arguments that relying on legal rights is counterproductive for progressive movements. Although largely in agreement with their basic points that rights are indeterminate and easily manipulated to justify desired decisions, and form part of a legal ideology which stabilises and legitimates the prevailing social order, she argues forcefully that CLS scholars overlook the need for social movements to make pragmatic compromises.
First, Crenshaw observes that critiques often implicitly suggest that rights should be abandoned in favour of a superior political strategy for pursuing equality, but that ‘no such strategy has yet been articulated.’Footnote 206 Subordinated social groups, by definition, have relatively limited ways to put pressure on more powerful groups, and legal rights may offer the best option. Second, she suggests that legitimation is double-edged. Rights legitimise unjust social structures, and movements relying on rights must accommodate themselves to those structures. However, this also means rights can legitimise these movements’ claims in a way that resonates broadly. Rights arguments which threaten the state’s legitimacy by pointing out its failure to respect its own stated values can be an effective lever for change. Brown makes similar points in her analysis of the paradoxes rights present for feminists, arguing that even as rights discourse has legitimised existing power structures and promoted liberal ideology, it has achieved meaningful progress and is effectively indispensable as a legal and political strategy.Footnote 207 More recently, Odette Mazel has argued for a reparative reading of pro-LGBTQ+-rights litigation, understanding it as pursuing change within existing constraints without necessarily accepting or misunderstanding those constraints.Footnote 208
These arguments are highly relevant in the social media context. This article has argued that human rights cannot satisfactorily address structural inequalities in social media governance, and called for greater emphasis on alternative normative frameworks, especially regarding the political economy of social media. However, rights cannot simply be abandoned, given their central role in the established legal regime: legal and political challenges to platform practices have little choice but to rely on them. Moreover, more explicitly progressive normative frameworks would inevitably face disagreement. Centring questions of political economy makes it apparent that the current configuration of power in the industry benefits powerful elites, and that structural industry reforms would face serious political challenges.Footnote 209 In this context, the strategic usefulness of linking calls for reform with the widely-endorsed and authoritative framework of fundamental rights law will often outweigh the disadvantages.
In this respect, literature on legal mobilisation has highlighted the potential for collective action to take advantage of rights frameworks, and to compensate for some of their individualistic and depoliticising aspects. For example, strategic litigation by associations can highlight systemic issues and represent the interests of vulnerable social groups.Footnote 210 Within the DSA framework, fundamental rights norms create some space for collective challenges to systemic injustice. Article 14(4) in principle enables regulators to address the substantive content and system-level enforcement of platforms’ content policies, not only their application in specific decisions.Footnote 211 The DSA also empowers users and – importantly – associations to complain to regulators about breaches of platforms’ obligations, including Article 14(4).Footnote 212 Additionally, the ongoing development of codes of conduct, which will play a major role in concretising platforms’ obligations under the AVMSD and DSA,Footnote 213 offers opportunities for independent researchers and other stakeholders to shape regulatory strategies and the interpretation of fundamental rights obligations. For example, they could advocate for codes to include more concrete requirements regarding platform design, the resources allocated to moderation and safety measures, and investigation and mitigation of systemic bias.Footnote 214 The fundamental rights framework ultimately constrains the terms in which such challenges and advocacy can be expressed, for example by limiting consideration to individual companies’ conduct rather than industry-wide problems. Nonetheless, it offers a basis for collective challenges to unequal treatment.
6. Conclusion
If our aim is to create a more just and egalitarian online public sphere, a world in which profit-driven multinational corporations comply with the minimum standards of IHRL or the EU Charter is, in Samuel Moyn’s words, not enough.Footnote 215 What human rights require of states is already highly indeterminate and disputed; for companies, even more so.Footnote 216 Nonetheless, it is relatively clear that this would not prevent corporations from setting online speech norms based on profitability; distributing information in ways that reproduce structural inequalities; or channelling online culture and communications in the predictable, homogenised directions most conducive to advertising. Nor would it make social media governance more democratic. Europe has the world’s oldest traditions of independent public-service media, founded on the belief that wholly privatised broadcast media systems cannot serve the public interest even if they are well-regulated. On this view, it is inherently problematic that online media are governed by profit-driven conglomerates, even if they are subject to human rights obligations.
Who should own and control the media, and how online speech should be governed, are highly political questions which cannot be answered based on universally shared values. Any project for social media regulation relies, implicitly or explicitly, on some political vision. To actively redress structural social inequalities, such visions must be guided by collective interests and address questions of political economy that do not fit within a rights framework. Structural reform of the social media industry may seem a distant prospect, making it tempting to retreat into the seemingly apolitical, consensual zone of human rights discourse. However, the entrenched dominance of corporate platforms only makes it more important to develop clear conceptual frameworks to challenge and criticise them. It is clear that fundamental rights will continue to play prominent roles in EU social media law, and that they offer a way to make political claims that resonate broadly, so progressive research and activism cannot abandon rights discourse entirely. The challenge is to simultaneously develop convincing critiques based on less individualistic normative frameworks, and to rely strategically on fundamental rights while recognising their limitations.
Acknowledgements
Thank you to Helena Alviar, Beatriz Botero Arcila, Marija Bartl, Séverine Dusollier, Brenda Dvoskin, Alexia Katsiginis, Jeremy Perelman, Barrie Sander and two anonymous reviewers for helpful comments on earlier drafts. Thank you also to the organisers of the GIG-ARTS Conference, the WZB Workshop on Radical Approaches to Platform Governance, and the Leibniz Media Lunch Talks for opportunities to present and receive feedback on this work.
Funding statement
This work received no specific grant from any funding agency, commercial or not-for-profit sectors.
Competing interests
The author has no conflicts of interest to declare.