6.1 Introduction
The issue of mass disinformation on the Internet is a long-standing concern for policymakers, legislators, academics and the wider public. Disinformation is believed to have had a significant impact on the outcome of the 2016 US presidential election.Footnote 1 Concern about the threat of foreign – mainly Russian – interference in the democratic process is also growing.Footnote 2 The COVID-19 pandemic, which reached global proportions in 2020, gave new impetus to the spread of disinformation, which even put lives at risk.Footnote 3 The problem is real and serious enough to force all parties concerned to reassess the previous European understanding of the proper regulation of freedom of expression.
This chapter reviews the measures taken by the European Union and its Member States to limit disinformation, mainly through regulatory instruments. After a clarification of the concepts involved (Section 6.2), I will review the options for restricting false statements which are compatible with the European concept of freedom of expression (Section 6.3), and then examine the related tools of media regulation (Section 6.4). This will be followed by a discussion of the regulation of online platforms in the EU (Section 6.5), and by a presentation of EU (Section 6.6) and national (Section 6.7) measures which specifically address disinformation. Finally, I will attempt to draw some conclusions with regard to possible future regulatory responses (Section 6.8).
6.2 Definitional Issues
Not only are the categories of fake news, disinformation and misinformation not precisely defined in law, but their exact meaning is also disputed in academia. Since the 2016 US presidential election campaign, the use of the term ‘fake news’ has spread worldwide. It is usually applied to news published on a public platform, in particular on the Internet, that is untrue in content or misleading as to the true facts, and which is not published with the intention of revealing the truth but with the aim of deliberately distorting a democratic process or the informed resolution of a public debate.Footnote 4 According to Hunt Allcott and Matthew Gentzkow, fake news is news that is ‘intentionally and verifiably false, and could mislead readers’,Footnote 5 meaning that intentionality and verifiable falsehood are important elements of it. However, in principle, fake news could also include content that is specifically protected by freedom of expression, such as political satire, parody and subjective opinions, a definition which would certainly be undesirable in terms of the protection of freedom of expression.Footnote 6 Since President Trump, after his successful campaign in 2016, mostly applied the term to legacy media that was critical of him, it has gradually lost its original meaning and has fallen out of favor in legal documents.Footnote 7
The EU has for some time preferred the term ‘disinformation’ to describe the phenomenon. Of course, fake news and disinformation are in fact two categories with a significant overlap, and as Björnstjern Baade points out, the former will not disappear from the public sphere, so legislators, law enforcers and public policymakers will have to continue dealing with it.Footnote 8 Tarlach McGonagle cuts the Gordian knot by defining fake news as content that is ‘disinformation that is presented as, or is likely to be perceived as, news’.Footnote 9 The Code of Practice on Disinformation, in line with several EU documents, defines disinformation as ‘false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm’.Footnote 10 Thus, intentional deception and the undue gain or public harm caused are also conceptual elements here. By comparison, misinformation is ‘false or misleading content shared without harmful intent though the effects can still be harmful, e.g. when people share false information with friends and family in good faith’.Footnote 11 However, the inclusion of intentionality as an essential characteristic of disinformation may also raise concerns. It is inherently problematic to limit speech on the basis of a speaker’s intent, and not merely on the basis of the effect achieved. Furthermore, while there is a consensus that satire and parody, being protected opinions, cannot be considered disinformation, they can also be published in bad faith, distorting the true facts, and so can have an effect similar to the one that attempts to suppress disinformation seek to prevent.Footnote 12
The Code of Practice approved by the EU focuses primarily on curbing disinformation in political advertising, which, according to the proposed regulation of the European Parliament and of the Council, ‘means the preparation, placement, promotion, publication or dissemination, by any means, of a message: (a) by, for or on behalf of a political actor, unless it is of a purely private or a purely commercial nature; or (b) which is liable to influence the outcome of an election or referendum, a legislative or regulatory process or voting behavior’ (on the Code, see Section 6.6.1).
The distinction between disinformation and misinformation, or in other words, the difference between falsehoods made with the intent to harm and untruths that are communicated in good faith but are likely to cause harm, is indeed important and each problem warrants different levels of action and intervention. Prosecuting both disinformation and misinformation with equal force might lead to an unfortunate situation in which citizens who wish to participate in the debate on public affairs, but who do not have the means to verify the truth of a piece of information or communication, who have no malicious intent and seek no direct personal gain, would suffer a disproportionate restriction on their freedom of expression. The most dangerous form of disinformation is that which comes from governments and public bodies. Tackling this is a separate issue, which allows for the use of more robust instruments.Footnote 13 For example, in March 2022, the Council of the EU banned certain Russian television broadcasters on that basis following the outbreak of the Russian–Ukrainian war.Footnote 14
To summarize the above brief conceptual overview, the current approach in the EU is to consider as disinformation content that: (a) is untrue or misleading; (b) is published intentionally; (c) is intended to cause harm or undue gain; (d) causes harm to the public; (e) is widely disseminated, typically on a mass scale; and (f) is disseminated through an internet content service. Points (e) and (f) are not conceptual elements but refer to the usual characteristics of disinformation. Consequently, distorted information resulting from possible bias in the traditional media is not considered disinformation, nor is the publication of protected opinions (satire, parody). Since a specific characteristic of disinformation is that it is spread mainly on the Internet, in particular on social media platforms, attempts at preventing it focus especially on these services.
6.3 The Restriction of False Statements in the European Free Speech Doctrine
The European approach to disinformation, unlike that of the United States, allows for a broad restriction of certain false statements. The US Supreme Court in United States v. Alvarez held that the falsity of an allegation alone is not sufficient to exclude it from First Amendment protection,Footnote 15 but that does not mean that untrue statements of fact, if they cause harm, cannot be restricted, albeit within a narrower range than in Europe.Footnote 16 While the extent to which and under what circumstances disinformation is restricted in Europe is a matter for national law, normative considerations generally take into account the following three requirements when assessing a restriction on speech: the principle of legality (that the restriction is provided for by an appropriate rule, preferably codified law), the principle of necessity (that the restriction is justified in a democratic society) and the principle of proportionality (that the restriction does not go beyond the legitimate aim pursued).Footnote 17
Within the framework of the protection of freedom of expression in Europe, according to the current doctrine, deliberate lies (intentional publication of untruthful information) may not be subject to a general prohibition. This does not mean that it is not permissible in certain circumstances to prohibit false factual statements but that a general prohibition is usually understood to be incompatible with the doctrine of freedom of speech. The special circumstances in which speech may be prohibited can be grouped into several areas.
First, defamation law and the legal protection of reputation and honor seek to prevent unfavorable and unjust changes being made to an individual’s image and evaluation by society. These regulations aim to prevent an opinion published in the public sphere concerning an individual from tarnishing the ‘image’ of an individual without proper grounds, especially when it is based upon false statements. The approaches taken by individual states to this question differ noticeably, but their common point of departure is the strong protection afforded to debates on public affairs and the correspondingly weaker protection of the personality rights of public figures compared to the protection of freedom of speech.Footnote 18
Second, the EU Council’s Framework Decision on combating racism and xenophobia in the Member States of the EUFootnote 19 places a universal prohibition on the denial of crimes against humanity, war crimes and genocide. Most Member States of the EU have laws prohibiting the denial of the crimes against humanity committed by the Nazis before and during World War II, or the questioning of those crimes or watering down their importance.Footnote 20
Third, a number of specific rules apply to false statements made during election campaigns. These can serve two purposes. On the one hand, communication in the campaign period enjoys robust protection: political speech is the most closely guarded core of freedom of expression, and what is spoken during a campaign is as closely linked to the functioning of democracy and democratic procedures as any speech can be. On the other hand, these procedures must also be protected so that no candidate or political party distorts the democratic decision-making process and ultimately damages the democratic order.Footnote 21
Fourth, commercial communication can be regulated in order to protect consumers from false (misleading) statements. The European Court of Human Rights (ECtHR), in Markt Intern and Beerman v. Germany,Footnote 22 declared that advertisements serving purely commercial interests, rather than contributing to debates in the public sphere, are also to be awarded the protection of the freedom of speech.Footnote 23 Nevertheless, this protection is of a lower order than that granted to ‘political speech’.
Fifth, in some jurisdictions, ‘scaremongering’ – that is, the dissemination of false information that disturbs or threatens to disturb public order or peace – may also be punishable.Footnote 24
Another example of an indirect restriction on untrue statements is the ban on tobacco advertising. The EU has a broad ban on the subject,Footnote 25 which may be further strengthened by national regulations. The advertising ban covers, by definition, the positive portrayal of tobacco, while the publication of non-advertising opinions arguing for the potential positive effects of tobacco is obviously not banned from public discourse.
6.4 European and National Media Regulation
Hate speech can also be tackled through media regulation. The Audiovisual Media Services Directive requires Member States to prohibit incitement to violence or hatred directed against a group of persons or a member of a group on the grounds of race, sex, religion or nationality, as well as public provocation to commit terrorist offences, in both linear and nonlinear (television and on-demand) audiovisual media services (Article 6). Member States have transposed these provisions into their national legal systems. Under the Directive, only the authority of the state in which the media service provider is established has jurisdiction to verify whether the conduct in question constitutes hate speech, and to ensure that the broadcasts of the media service provider do not contain incitement to hatred or violence. If a media service provider is not established in an EU Member State, it is not subject to the provisions of the Directive, and the national authorities can take action against it under their own legal systems. According to the well-established case law of the Court of Justice of the EU and the ECtHR, a television broadcaster which incites terrorist violence cannot itself claim freedom of expression.Footnote 26
Other (indirect) measures in media regulation can also be applied against disinformation. One such measure is the right of reply, under which the legislator grants access to the content of a media service provider not on the basis of an external condition but in response to content published previously by that provider. The Audiovisual Media Services Directive prescribes that EU Member States should introduce national legal regulations with regard to television broadcasting that ensure adequate legal remedies for individuals whose personality rights have been infringed through false statements.Footnote 27 Such regulations are applied throughout Europe and typically impose obligations not only on audiovisual media but also on both printed and online press,Footnote 28 and the granting of the right of reply is also suggested in the EU High Level Expert Group’s report on disinformation (see Section 6.6.1) as a possible tool to combat disinformation.Footnote 29 The promotion of media pluralism may involve a requirement for impartial news coverage, on the basis of which public affairs must be reported impartially in programs which provide information on them. Such regulation may apply to television and radio broadcasters, and it has been implemented in several European states.Footnote 30
In July 2022, the British media regulator Ofcom published its decisions on twenty-nine programs that were broadcast on Russia Today (RT) between 27 February 2022 and 2 March 2022. The licence for the RT service was, at the time of broadcast, held by Autonomous Non-Profit Organization TV-Novosti. The programs had raised issues warranting investigation under the due impartiality rules.Footnote 31 Under Section 3(3) of the Broadcasting Act 1990 and of the Broadcasting Act 1996, Ofcom ‘shall not grant a licence to any person unless satisfied that the person is a fit and proper person to hold it’ and ‘shall do all that they can to secure that, if they cease to be so satisfied in the case of any person holding a licence, that person does not remain the holder of the licence’. Taking into account a series of breaches by RT of the British broadcasting legislation concerning the due impartiality and accuracy rules, Ofcom revoked these licences.Footnote 32
The 1936 International Convention on the Use of Broadcasting in the Cause of Peace and the 1953 Convention on the International Right of Correction would also provide for action against communications from state bodies that have a detrimental effect on international relations, but they are hardly applicable generally to disinformation or misinformation.Footnote 33
6.5 Platform Regulation in the European Union
False claims spread across online platforms at an unprecedented rate and, at the same time, on a massive scale. In particular, disinformation that deliberately targets electoral campaigns is distributed on social media platforms for political reasons (involving political parties with conflicting interests, other states acting against a particular state and so on). Initially, the platforms defended themselves by claiming that they were neutral players in this communication.Footnote 34 It became increasingly obvious, however, that the platforms themselves are actively able to shape the communication on their services, and that they have an economic interest in its vigor and intensity, and hence that the spread of false news is not necessarily contrary to their interests.Footnote 35 Under EU law, online platforms are considered a type of host provider, whose liability for infringing content appearing in their services is limited, but by no means excluded.
6.5.1 Directive on Electronic Commerce
According to the Directive on Electronic Commerce, if these platforms provide only technical services when they make available, store or transmit the content of others (much like a printing house or a newspaper stand), then it would seem unjustified to hold them liable for the violations of others (‘illegal activity or information’), as long as they are unaware that such violations have occurred. However, in the European approach, gatekeepers may be held liable for their own failure to act after becoming aware of a violation (if they fail to remove the infringing material).Footnote 36 The Directive requires all types of intermediaries to remove such materials after they become aware of their infringing nature (Articles 12–14). In addition, the Directive stipulates that intermediaries may not be subject to a general monitoring obligation to identify illegal activities (Article 15).
While this system of legal responsibility should not necessarily be considered outdated, things have certainly changed since 2000, when the Directive was enacted: there are fewer reasons to believe that today’s online platforms remain passive with regard to content and do nothing more than store and transmit information. While content is still produced by users or other independent actors, the services of gatekeepers select from and organize, promote or reduce the ranking of such content, and may even delete it or make it unavailable within the system. This notice and takedown procedure applies to the disinformation that appears on the platforms, but resorting to the actual removal of content is reserved for disinformation that is unlawful under the legal system of the state in question (slander, terrorist propaganda, denials of genocide and so on). Generally speaking, false claims are not subject to the removal obligation as they are not illegal. Similarly, even if a piece of content is infringing, but no one reports it to the platform, there is no obligation to remove it.
The notion of ‘illegal activity or information’ raises an important issue, as the obligation to remove offending content is independent of the outcome of any possible court or official procedure that may establish that a violation has been committed, and the host provider is required to take action before a decision is passed (provided that a legal procedure is actually initiated). This means that the provider has to decide on the illegality of content on its own, and its decision is not accompanied by any legal guarantee (even though it may have an impact on the freedom of expression). This rule may encourage providers to remove content to escape possible liability, even in highly questionable situations. It would be comforting (but probably inadequate, considering the speed of communication) if the liability of an intermediary could not be established unless the illegal nature of the content it has failed to remove has been determined by a court.Footnote 37
Although continuous, proactive monitoring of infringing content is not mandatory for platforms, the European Court of Justice opened up a loophole for it, well before the recent Regulation banning Russian media outlets, in its 2019 judgment in Glawischnig-Piesczek v. Facebook Ireland.Footnote 38 The decision in that case required the platform to delete defamatory statements that had been reported once and removed but which had subsequently reappeared. Likewise, the hosting provider may be obliged to ‘remove information which it stores, the content of which is identical to the content of information that was previously declared to be unlawful, or to block access to that’. This is only possible through the use of artificial intelligence, which is encouraged by this decision and even implicitly made mandatory. Putting that decision in a broader context, it seems that platforms are required to act proactively against unlawful disinformation (or any unlawful content), even given the purported continued exclusion of monitoring obligations. The legality of the content is determined by algorithms, which seems quite risky for freedom of speech.Footnote 39
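Purely by way of illustration, the following minimal sketch (the names and logic are hypothetical and are not drawn from any platform’s actual systems) shows how automated detection of re-uploads that are identical to content already declared unlawful might work; content that merely paraphrases the unlawful statement would escape such a simple check, which is why more sophisticated, AI-based matching comes into play.

```python
import hashlib

# Hypothetical illustration: fingerprints of content already declared unlawful
# (for example, by a court) that the platform is obliged to keep off its service.
blocked_fingerprints: set[str] = set()

def fingerprint(text: str) -> str:
    """Normalize whitespace and case, then hash the content."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

def register_unlawful(text: str) -> None:
    """Record a piece of content that has been declared unlawful."""
    blocked_fingerprints.add(fingerprint(text))

def is_identical_reupload(text: str) -> bool:
    """Return True if a new upload is identical (after trivial normalization) to
    previously blocked content; a paraphrased version would escape this check."""
    return fingerprint(text) in blocked_fingerprints

# Usage: an identical re-upload is caught, a reworded variant is not.
register_unlawful("Allegation X about person Y, declared defamatory by a court.")
print(is_identical_reupload("allegation x about person y, declared defamatory by a court."))  # True
print(is_identical_reupload("A reworded version of the same allegation."))  # False
```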
6.5.2 Digital Services Act
The EU’s Digital Services Act (DSA),Footnote 40 which aims to regulate online platforms in a more detailed and nuanced way, and which is applicable from 2023 in the case of very large online platforms and from 2024 for all other services covered, keeps the most important foundations of European regulation of online platforms in place. The response of the EU to the problem of disinformation is to legislate for more societal responsibility for very large online platforms, while still leaving it to the discretion of the platforms themselves to decide if and how to deal with any systemic risks to freedom of expression.
The DSA retains the essence of the notice and takedown procedure, and platforms still cannot be obliged to monitor user content (Articles 6 and 8), but if they receive a notification that a certain piece of content is illegal, they will be obliged to remove it (Article 6), as set out also in the Directive on Electronic Commerce. The DSA will also seek to protect users’ freedom of expression. It requires users to be informed of the content removed by platforms and gives them the possibility to have recourse to dispute resolution mechanisms in their own country, as well as to the competent authorities or courts if the platform has infringed the provisions of the DSA. These provisions seek to strengthen the position of users, in particular by providing procedural guarantees (most importantly through greater transparency, the obligation to give reasons for deletion of a piece of content or for the suspension of an account, and the right of independent review).Footnote 41
The democratic public sphere is protected by the DSA (Article 14(4)), which states that the restrictions in contractual clauses (Article 14(1)) must take into account freedom of expression and media pluralism. Article 14(4) states that:
Providers of intermediary services shall act in a diligent, objective and proportionate manner in applying and enforcing the restrictions … with due regard to the rights and legitimate interests of all parties involved, including the fundamental rights of the recipients of the service, such as the freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms as enshrined in [the Charter of Fundamental Rights of the European Union].
Where platforms, when deleting user content, do not act with due care, objectivity and proportionality in applying and enforcing their restrictions, taking due account of the rights and legitimate interests of all interested parties, including users’ fundamental rights, such as the freedom of expression, the freedom and pluralism of the media, and other fundamental rights and freedoms as set out in the Charter of Fundamental Rights of the European Union (CFR), the user may have recourse to the public authorities. In the case of very large online platforms in Europe, this will most often be the designated Irish authority, to which other national authorities must refer complaints they receive concerning these platforms; the European Commission has also reserved certain powers for itself in relation to these platforms (it is for the Commission to decide whether to act itself or to delegate this power to the Irish authority).
Under the DSA, the authorities do not explicitly take action against disinformation, only doing so if it constitutes an infringement (war propaganda, which can be conducted through disinformation, can of course constitute an infringement). However, since disinformation alone does not constitute an infringement in national jurisdictions, the DSA does not introduce any substantive change in this respect. Furthermore, very large online platforms and very large online search engines must identify and analyze the potential negative effects of their operations (in particular their algorithms and recommendation systems) on freedom of expression and on ‘civil discourse and electoral processes’,Footnote 42 and must then take appropriate and effective measures to mitigate these risks (Article 35). In addition, the DSA’s rules on codes of conduct also encourage the management of such risks and promote the enforcement of codes (including, for example, the Code of Practice on Disinformation, which predates the DSA). These tools also provide an indirect means of tackling disinformation. One of the main purposes of the DSA is to protect users’ freedom of speech, but users’ speech can also contain dis- or misinformation. It will be difficult to reconcile these conflicting interests when applying the regulation.
Article 36 of the DSA introduces a new ‘crisis response mechanism’. A crisis in this legislation means ‘extraordinary circumstances’ that ‘lead to a serious threat to public security or public health in the Union or in significant parts of it’ (Article 36(2)). Very large online platforms will need to assess to what extent and how the functioning and use of their services significantly contribute to such a serious threat, or are likely to do so, and then identify and apply specific, effective and proportionate measures to prevent, eliminate or limit any such contribution to the serious threat identified (Article 36(1)).
6.6 The European Union’s Efforts to Curb Disinformation on Online Platforms
European jurisdictions allow action against disinformation, for example on the grounds of defamation or the violation of the prohibitions on hate speech or scaremongering, while platforms, being ‘host providers’, can be required to remove infringing content. However, these measures in and of themselves seem inadequate to deal with such threats in a reassuring manner. Concerns of this nature have been addressed by the EU in various documents it has produced since 2017.
6.6.1 Communications, Recommendations and Co-Regulation
The first relevant EU Communication, issued in 2017,Footnote 43 concerns tackling illegal content, so it only indirectly addresses the issue of disinformation. It mentions that ‘[t]here are undoubtedly public interest concerns around content which is not necessarily illegal but potentially harmful, such as fake news or content that is harmful for minors. However, the focus of this Communication is on the detection and removal of illegal content.’Footnote 44 The Communication introduced a requirement for platforms to take action against violations in a proactive manner and even in the absence of a notice, even though the platforms are still exempted from liability.Footnote 45 The Recommendation that followed the Communication reaffirmed the requirement to apply proportionate proactive measures in appropriate cases, which thus permits the use of automated tools to identify illegal content.Footnote 46
The High Level Expert Group on Fake News and Online Disinformation published a report in 2018.Footnote 47 The report defines disinformation as ‘false, inaccurate, or misleading information designed, presented and promoted for profit or to intentionally cause public harm’.Footnote 48 While this definition might be accurate, the report refrains from raising the issue of government regulation or co-regulation, and is limited to providing a review of the resources and measures that are available to social media platforms and which they may apply voluntarily. The CommunicationFootnote 49 issued following the report of the High Level Expert Group already recognized the need for more concrete action, not only by online platforms but also by the European Commission and Member States. The document called for a more transparent, trustworthy and accountable online ecosystem. It foresaw the reinforcement of the EU bodies concerned and the creation of a rapid alert system that would identify in real time, through an appropriate technical infrastructure, any disinformation campaign.
Later in 2018, online platforms, leading technology companies and advertising industry players agreed, under pressure from the European Commission, on a code of conduct to tackle the spread of online disinformation. The 2018 Code of Practice on Disinformation was designed to set out commitments in areas ranging from transparency in political advertising to the demonetization of disinformation spreaders. The Code may appear to be voluntary in form – that is, a self-regulatory instrument – but it is in fact a co-regulatory solution that was clearly imposed on the industry players by the European Commission. Its primary objectives are to deprive disseminators of disinformation of advertising revenue from that activity, to make it easy to identify publishers of political advertising, to protect the integrity of the platform’s services (steps against fake accounts and bots) and to support researchers and fact-checkers working on the subject. The Code actually further exacerbates the well-known problem of private censorship (the recognition of the right of platforms to restrict the freedom of expression of their users through rules of their own making),Footnote 50 by putting decisions on individual content in the hands of the platforms, which raises freedom of expression issues.Footnote 51
The Code of Practice was signed in October 2018 by online platforms such as Facebook, Google, Twitter and Mozilla, as well as advertisers and other players in the advertising industry, and was later joined by Microsoft and TikTok. The online platforms and trade associations representing the advertising industry submitted a report in early 2019 setting out the progress they had made in meeting their commitments under the Code of Practice on Disinformation. In the first half of 2019, the European Commission carried out targeted monitoring of the implementation of the commitments by Facebook, Google and Twitter, with a particular focus on the integrity of the European Parliament elections. The Commission published its evaluation of the Code in September 2020, which found that the Code provided a valuable framework for structured dialogue between online platforms, and ensured greater transparency and accountability for their disinformation policies. It also led to concrete actions and policy changes by relevant stakeholders to help combat disinformation.Footnote 52
The Joint Communication on an Action Plan against Disinformation foresees the same measures as the Communication issued earlier in 2018.Footnote 53 The Communication called upon all signatories of the Code of Practice to implement the actions and procedures identified in the Code swiftly and effectively on an EU-wide basis. It also encouraged the Member States to launch awareness-raising initiatives and support fact-checking organizations. While this document reaffirms the primacy of means that are applied voluntarily by platform providers, it also displays restraint when it comes to compelling the service providers concerned to cooperate (in a forum convened by the European Commission). If the impact of voluntary undertakings falls short of the expected level, the necessity of action of a regulatory nature might arise.Footnote 54
The arrival of the COVID pandemic in Europe in early 2020 gave a new impetus to the mass spread of disinformation, this time directly threatening human lives. Therefore, the EU bodies issued a new document proposing specific measures to be taken by platforms to counter disinformation about the epidemic, but did not actually broaden the scope of the general measures on disinformation previously set out.Footnote 55 Section 4 of the European Democracy Action Plan also specifically addresses the fight against disinformation and foresees the reinforcement of the 2018 Code of Practice, the addition of further commitments and the establishment of a monitoring mechanism.Footnote 56
In 2021, EU bodies issued a new Communication,Footnote 57 which foreshadowed the content of the updated Code of Practice. Subsequently, a review of the Code was launched, leading to the signing of the Strengthened Code of Practice on Disinformation by thirty-four signatories in June 2022.Footnote 58 The updated and strengthened Code aims to deliver on the objectives of the Commission’s guidance,Footnote 59 presented in May 2021, by setting out a broader range of commitments and measures to combat online disinformation. While the Code has not been officially endorsed by the Commission, the Commission set out its expectations in its Communication, and has indicated that it considers that the Code meets these expectations overall. Since this guidance sets out the Commission’s expectations in imperative terms (‘the Code should’, ‘the signatories should’, and so on), it is not an exaggeration to say that the fulfilment of the commitments is seen as an obligation for the platforms; by fulfilling them, the platforms may avoid the imposition of strict legal regulation. Consequently, it is correct to consider the Code not as a self-regulatory instrument, but as a co-regulatory mechanism, which is not created and operated purely by the free will of industry actors but by a public body (in this case, the EU Commission) working in cooperation with industry players.
The Strengthened Code of Practice on Disinformation includes 44 commitments and 128 concrete measures in the areas of demonetization (reducing the financial incentives for disseminators of disinformation); the transparency of political advertising (provisions allowing users to better identify political ads through improved labelling); ensuring the integrity of services (measures to curb manipulative behavior such as the use of spam, fake accounts, bot-driven amplification, impersonation and malicious deep fakes); empowering users through media literacy initiatives and greater transparency of platforms’ recommendation systems; supporting research into disinformation; and strengthening the fact-checking community. These measures will be supported by an enhanced monitoring framework, including service-level indicators to measure the implementation of the Code at EU and Member State level. Signatories submitted their first reports on the implementation of the Code to the Commission in early 2023. Thereafter, very large online platforms (as defined in the DSA) will report every six months, while other signatories will report annually. The Strengthened Code also includes a clear commitment to work towards the establishment of structural indicators to measure the overall impact of the Code’s requirements. The 2022 Strengthened Code focuses on political advertising, but also refers to other ‘malicious actors’ beyond those who commission political campaigns containing disinformation.Footnote 60 However, it addresses other speakers beyond the political sphere (citizens interested in public affairs and participating in debates) and misinformation spread without malicious intent only more narrowly. Moreover, as with previous documents, it leaves the most important question open: who decides what constitutes disinformation? More precisely, it leaves the decision to the platform moderators and, to a lesser extent, to the fact-checkers.
The first baseline reports on the implementation of the Code were published in February 2023.Footnote 61 According to their reports, the service providers that signed the Code have taken a number of measures: for example, Google deprived disseminators of disinformation of €13 million in advertising revenue in the third quarter of 2022, while TikTok removed 800,000 fake user accounts – which had been followed by a total of 18 million users – during the same period, and, on Facebook, 28 million fact-checking tags were added to different posts.
An ongoing legislative procedure is also worth noting in this regard. In 2021, the European Commission proposed a regulation of the European Parliament and of the Council on the transparency and targeting of political advertising.Footnote 62 The regulation, if adopted, would be uniformly binding on all Member States, covering the identification of the customers of political advertising, their recognizability, measures against illegal political advertising and the requirements for targeting specific users.
The EU’s approaches are in many respects forward-looking and can help to achieve several objectives, although they have also faced a number of criticisms. We may perceive a certain lack of sincerity on the part of both Member States and the EU when it comes to disinformation. All the related documents avoid a clear assessment of the question of whether the dissemination of disinformation falls within the scope of freedom of expression. Following the prevailing European doctrine, one cannot but conclude that a significant proportion of communications containing disinformation is content protected by freedom of expression, so their restriction by instruments with a not entirely clear legal status, such as a co-regulatory code of practice, may be cause for concern. These communications relate to matters of public interest and are therefore subject to the strongest protection of freedom of expression, with the exception of unlawful content, the publication of which is prohibited by specific rules (see Section 6.3). This also applies to content generated or sponsored by governments. However, communications involving untrue statements of fact may not be considered particularly valuable in the European approach, and could actually be restricted by the imposition of further prohibitions. In other words, Member States are free to introduce prohibitions against intentional disinformation that harms society, if this is necessary and proportionate, as this is not within the EU’s competence.Footnote 63 Member States must take the ECtHR’s case law into account when restricting freedom of expression, and this applies equally to disinformation.Footnote 64 Even so, the production and transmission of disinformation can justify restrictions on freedom of expression. However, other content beyond the sufficiently narrow prohibitions thus defined may still claim protection of freedom of expression, so measures taken against them by online platforms – based either on voluntary commitments or on the co-regulatory Code of Practice, but which are not based on the law as it stands – may be unjustified or disproportionate.Footnote 65
The EU approach also reveals a kind of hidden elitism. While the EU focuses on political advertising and intentional disinformation campaigns, some of the measures it enforces on platforms also cover misinformation and communications by citizens. Ian Cram argues that the ECtHR’s jurisprudence privileges traditional (institutional) media over citizen journalists, imposing standards of ‘responsible journalism’ on the latter.Footnote 66 It follows from this that the obligations of the media and other speakers are, where conceptually possible, the same. According to Cram, this is a kind of elitist approach, linked to a – democratically contradictory – perception of media freedom that seeks to create an ‘enlightened public opinion’ even vis-à-vis ‘the people’ (that is, individual speakers, who may be unbridled, perhaps foul-mouthed, and may lack the resources of the institutional media to uncover reality or create informed opinions).Footnote 67 The same is true for the obligations imposed on platforms, which ultimately also restrict this kind of citizen participation in the public sphere. The EU thus turns to the ‘elite’ of the public arena, namely to the traditional media and fact-checking organizations, for help in judging disinformation.
The lack of honesty is also reflected in the interpretation of the Code of Practice, a formally self-regulatory instrument, which in reality is co-regulation imposed by the EU,Footnote 68 where coercion is not based on legislation but on informal agreements, and accompanied by concerns on the part of service providers about the risk of stricter regulation in the future. This co-regulatory nature is recognized by the reference in the Preamble of the Code: ‘This Code of Practice aims to become a Code of Conduct under Article 35 of the DSA’Footnote 69 (in this section, the DSA itself advocates the creation of codes of conduct that set out the various obligations of platforms). Of course, the concerns of service providers are not necessarily justified, given their economic interest in the spread of disinformation, as the 2021 leak by a former Facebook employee, Frances Haugen, starkly highlighted.Footnote 70 Disinformation, unfortunately, tends to attract users, who readily consume such content and interact with it heavily, which in turn generates financial benefits for the platforms. It is therefore also difficult to believe that the transparency required by the EU and committed to by the service providers in relation to the spread of disinformation – covering decision-making and all relevant information – will actually be achieved, and it is very difficult for an actor outside the platform to verify whether it has been.
Twitter announced in May 2023, under the leadership of Elon Musk, that it would leave the Code. Because of its formally self-regulatory nature, this was, of course, within its rights. In any case, Thierry Breton, a senior official of the European Commission, announced immediately after the decision that the Code would nevertheless be enforced, including against Twitter.Footnote 71 This will be possible indirectly, if the Code becomes a kind of industry standard, and thus effectively binding, by applying Article 35 of the DSA.
A problem that goes hand in hand with the spread of disinformation is the breakdown of traditional media. The media are gradually losing the trust of the public,Footnote 72 but their economic foundations and, in turn, their professionalism are also under threat, not least because of the proliferation of internet services. Some EU documents mention the role and importance of the traditional media, although they can hardly offer solutions to these problems. Similarly, only at the level of a mere mention does the EU, including the DSA, address the issue of filter bubbles,Footnote 73 which reinforce social fragmentation, such as the ‘Daily Me’ content offer,Footnote 74 customized for each user, which contributes significantly to the spread of disinformation among susceptible users.Footnote 75 It would not be inconceivable to adopt some of the approaches taken in the regulation of traditional media, such as the right of reply, which would allow disinformation to be accompanied immediately by a reaction containing true facts, or an appropriate adaptation of the obligation of balanced coverage, which would allow a controversial issue to be presented in several readings, immediately visible to the user. This is also hinted at in the Code of Practice, which seeks to steer users towards reliable sources. Measure 22(7) of the Code states that ‘Relevant Signatories will design and apply products and features (for instance, information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.’ The right to information from multiple sources is the objective of both the right of reply and the obligation to provide balanced information, meaning that even if the means differ, the objectives may be similar in the regulation of traditional media and platforms.
Finally, another problem with the EU’s approach that has been identified is that it is up to platforms and fact-checkers to judge content in the fight against disinformation. This is understandable, since the EU did not want to set up a kind of Orwellian Ministry of Truth, as it would consider it incompatible with freedom of expression for state bodies, courts and authorities to decide on the veracity of a claim. However, it is also doubtful whether leaving such decisions up to private individuals is capable of facilitating informed, fair and unbiased decision-making and whether it does not itself pose a threat to freedom of expression. The very term ‘fact-checking’ is unfortunately Orwellian, and the fact-checkers – and the platform moderators – can themselves be biased, as well as wrong.Footnote 76 Human cognitive mechanisms themselves make fact-checking difficult,Footnote 77 and its credibility is easily undermined, as ‘fact-checkers … disagree more often than one might suppose, particularly when politicians craft language to be ambiguous’.Footnote 78 An empirical study found that ‘fact-checkers are both less likely to fact-check ideologically close entities and more likely to agree with them’.Footnote 79 Fact-checkers are not accountable to society, even less so than the traditional media (through legal regulation or ethics-based self-regulation). Their activities are neither necessarily transparent, nor do they have guarantees of independence. In many cases, such as EU-funded organizations, they operate using public money, which makes these shortcomings problematic. If the traditional media are increasingly losing people’s trust, what reason would people have to trust fact-checking organizations, which face similar credibility problems? While fact-checkers share similar problems with traditional media, their emergence is an interesting development and, if they can bridge the institutional problems, it is not inconceivable that they could be a useful contributor to the public sphere.Footnote 80 It is noteworthy that those fact-checkers who work on behalf of or with the approval or support of social media platforms, and who check the veracity of users’ posts on those sites, bring social media closer to traditional media in terms of the way they operate, as these verifiers have a specific editorial role.
6.6.2 Banning Russian Media Outlets in the Context of the Russian–Ukrainian War
Shortly after the outbreak of the Russian–Ukrainian war, on 1 March 2022, the Council of the EU adopted a DecisionFootnote 81 pursuant to Article 29 of the Treaty on European Union (TEU) and a RegulationFootnote 82 pursuant to Article 215 of the Treaty on the Functioning of the European Union (TFEU) under which it is prohibited for:
operators to broadcast or to enable, facilitate or otherwise contribute to broadcast, any content by the legal persons, entities or bodies listed in Annex XV [RT – Russia Today English, RT – Russia Today UK, RT – Russia Today Germany, RT – Russia Today France, RT – Russia Today Spanish, and Sputnik news agency], including through transmission or distribution by any means such as cable, satellite, IP-TV, internet service providers, internet video-sharing platforms or applications, whether new or pre-installed.
All broadcasting licences or authorizations, and all transmission and distribution arrangements, with RT and Sputnik were suspended. (Later, these measures were extended to other Russian media outlets.) These sanctioning rules derive directly from the TEU. The Council of the EU used the prerogatives under Title V of the TEU concerning the general provisions on the EU’s External Action and the specific provisions on the Common Foreign and Security Policy.Footnote 83 According to a leaked letter, the Regulation should be applied to any links to the internet sites of the media outlets, as well as to their social media accounts.Footnote 84 As a result, the ban is a departure from the general monitoring ban in Article 15 of the E-Commerce Directive.Footnote 85 That provision makes it clear that state-imposed general obligations on social media platforms (referred to in the Directive as host services) to monitor users’ content are not compatible with European law. Later, a lawsuit was initiated by RT France against the Regulation, but the Court of Justice of the EU dismissed RT France’s application.Footnote 86
According to the Recitals of the Decision and the Regulation, the Russian Federation ‘has engaged in a systematic, international campaign of media manipulation and distortion of facts in order to enhance its strategy of destabilization of its neighboring countries and of the Union and its Member States’.Footnote 87 The recitals indicate two reasons for the ban: disinformation and propaganda.Footnote 88 Under Article 52(1) of the CFR, any such interference must pursue ‘objectives of general interest recognized by the Union’. Considering this, the restriction targeting disinformation and propaganda might be in line with the CFR.Footnote 89 However, according to Baade, the EU should not invoke the prohibition of disinformation or propaganda as a legitimate aim, as they may be protected expressions. An alternative aim would be to stop propaganda for war specifically.Footnote 90 The prohibition of propaganda for war is enshrined in Article 20 of the International Covenant on Civil and Political Rights. As all the EU Member States have ratified the Covenant, this prohibition can also be considered a generally accepted principle of EU law. As Baade notes, the justification for the ban imposed on RT and Sputnik in the current situation cannot be based solely on the characterization of their content as ‘propaganda’, nor even as disinformation.Footnote 91 As already mentioned, propaganda and disinformation are generally protected by the freedom of expression, with certain exceptions.
After the Regulation came into force, the largest social media companies relaxed the enforcement of their rules involving threats against Russian military personnel in Ukraine.Footnote 92 According to a leaked internal letter, Meta allowed Facebook and Instagram users to call for violence against the Russian and Belarusian leaders, Vladimir Putin and Alexander Lukashenko, so long as the violence was nonspecific (without referring to an actual plot), as well as violence against Russian soldiers (except prisoners of war) in the context of the Ukraine invasion, which involves a limited and temporary change to its hate speech policy.Footnote 93 Twitter also announced some changes in its policies related to the war, although the company did not amend its generally applicable hate speech policies.Footnote 94
The right of platforms to change the boundaries of free speech at will, without any constitutional guarantee or supervision, is an extremely dangerous development. Their propensity to make changes in a less transparent way, avoiding any meaningful public debate on the proposed changes, only increases the risk to freedom of expression.
6.7 Attempts to Regulate Disinformation at the National Level
In order to strengthen the obligations of online platforms, some European countries have adopted rules, in line with common European law, to compel platforms to remove illegal content more quickly and effectively. The corresponding Act in German law (effective as of 1 January 2018) is a paramount example of this trend.Footnote 95 According to the applicable provisions, all platform providers within the scope of the Act (that is, platform providers with over 2 million users from Germany) must remove all user content that constitutes certain criminal offences specified by the Act. Such offences include defamation, incitement to hatred, denial of the Holocaust and the spreading of scaremongering news stories.Footnote 96 Manifestly unlawful pieces of content must be removed within twenty-four hours after receipt of a notice, while any ‘ordinary’ unlawful content must be removed within seven days.Footnote 97 If a platform fails to remove a given piece of content, it may be subject to a fine of up to €50 million (theoretically, in cases of severe and multiple violations).Footnote 98 The German legislation does not go much further than the E-Commerce Directive itself, or its successor, the DSA; it simply refines the provisions of the Directive, lays down the applicable procedural rules and sets harsh sanctions for platforms which violate them. Nonetheless, the rules are followed in practice, and Facebook seems eager to perform its obligation to remove objectionable content.Footnote 99 The German regulation shows how difficult it is to apply general pieces of legislation and platform-specific rules simultaneously, and it demonstrates how governments prefer to have social media platforms act as the judges of user-generated content.
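Purely as a schematic illustration of the deadline logic described above (the function name and structure are hypothetical, and the Act’s procedural details and any exceptions are deliberately omitted), the timing rules could be sketched as follows.

```python
from datetime import datetime, timedelta

def removal_deadline(notice_received: datetime, manifestly_unlawful: bool) -> datetime:
    """Schematic sketch of the removal deadlines described above: twenty-four hours
    after receipt of a notice for manifestly unlawful content, seven days for other
    unlawful content. Procedural details and exceptions are ignored."""
    window = timedelta(hours=24) if manifestly_unlawful else timedelta(days=7)
    return notice_received + window

# Usage: a notice concerning manifestly unlawful content received at noon on 1 March 2024
print(removal_deadline(datetime(2024, 3, 1, 12, 0), manifestly_unlawful=True))
# 2024-03-02 12:00:00
```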
Subsequently, FranceFootnote 100 and AustriaFootnote 101 adopted similar rules, although the French law (‘Avia law’) was annulled by the Constitutional Council because some of its provisions did not meet the constitutional requirements.Footnote 102 France had introduced transparency and reporting obligations for platforms in a law adopted in 2018, prior to the Avia law, along with a fast-track judicial procedure to remove content disseminated during election campaigns and deemed misleading or inaccurate.Footnote 103 The law confers new powers on the Media Council (Conseil supérieur de l’audiovisuel), such as the ability to suspend or withdraw the licence of certain media services if, for example, a service under the control or influence of a foreign state is endangering France’s fundamental interests, including the proper functioning of its institutions, by transmitting false information.Footnote 104 Under an amendment to the 1986 Freedom of Communication Act,Footnote 105 the Media Council can order the suspension of the electronic distribution of a television or radio service owned or controlled by a foreign state if that service deliberately transmits false information that could call into question the integrity of an election.Footnote 106 These powers may be exercised from the beginning of the three months preceding the first day of the month in which the presidential, general or European Parliamentary elections or referendums are held. The Constitutional Council found this law constitutional.Footnote 107
The German and French attempts to regulate disinformation have introduced rules imposing obligations on platforms to remove certain content quickly. At the same time, German legislation only imposes obligations on content that is in breach of the Criminal Code, hence it is only the 2018 French law that regulates disinformation that is not in any case illegal, during election campaigns. However, these approaches still leave the decision on content in the hands of the platforms, and do not attempt to limit the spread of disinformation in general.Footnote 108 In Germany, another important piece of legislation has been passed, which also addresses the issue of disinformation. In 2020, the Interstate Treaty on Media Services (Medienstaatsvertrag, MStV) was adopted, which provides for the transparency of algorithms, the proper labelling of bots and the easy findability of public service media content on the platforms on which it is available. The MStV obliges social media platforms, video-sharing platforms and search engines to be nondiscriminatory in terms of content and to prioritize public service content, while not restricting user preferences. On video-sharing platforms, available public broadcasting content should be especially highlighted and made easy to find. These intermediaries may not unfairly disadvantage (directly or indirectly) or treat differently providers of journalistic editorial content to the extent that the intermediary may potentially have a significant influence on their visibility.Footnote 109 These rules only indirectly limit the spread of disinformation, but they provide a good example of how regulation can try to steer users towards credible content, in line with the traditional approach to media regulation.Footnote 110
In the fight against the COVID pandemic and the disinformation related to it, several European countries tried to curb the spread of false and dangerous information by tightening criminal laws. Hungary, for example, tightened its rules on scaremongering,Footnote 111 and Greece extended the scope of the existing offence of dissemination of false information and introduced a prison sentence for those who spread disinformation on the Internet.Footnote 112
6.8 On Possible Future Solutions: Some Conclusions
The European states and the EU clearly assign primary responsibility for addressing disinformation issues to the platforms. Of course, the national governments and the European institutions have made a number of commitments themselves, but they leave it to the platforms to sort out the substantive issues, including compliance with their commitments under the Code. However, this is not a reason to give up on introducing further restrictions on free speech, as allowed by the European concept of freedom of expression. Even in the context of the US legal system, Cass Sunstein argues that intentional lies, if they cause at least moderate harm, may be constitutionally prohibited – and even negligent or mistaken misrepresentations can be restricted, if the harm incurred by them is serious.Footnote 113 It is still better – at least in Europe, we typically think so – that the line between forbidden and permissible speech is drawn by the legislature and the courts, constrained by strict constitutional guarantees, rather than by private organizations (in this case, mainly social media platforms) operating without such guarantees. But not every social media post of concern can be taken to court, because no judicial system could cope with such a workload. Therefore, the right of platforms to decide on user content is likely to remain necessary in the long term. However, the protection of content that is not prohibited under the regime of freedom of expression is an important consideration, even if it contains untruths.
Although the European approach is wary of according high value to the communication of untrue statements of fact, freedom of expression, at least according to the traditional approach, is in a sense a ‘black and white’ issue: either a particular piece of content falls within the scope of freedom of expression or it does not. In other words, once the sometimes difficult question of whether a particular piece of content constitutes defamation, invasion of privacy, hate speech and so on has been answered, the consequences are self-evident: the content will either be protected or it will not. ‘Pizzagate’,Footnote 114 for example, could in principle have been dealt with under defamation law (at least had it happened in Europe, since under US defamation law it is more difficult to protect the reputation of a specific person against false allegations), and the false allegations made in the Brexit campaignFootnote 115 could in principle have been prohibited under the rules governing fair election or referendum campaigns. Of course, even in these cases, the permissibility of restricting speech is not clear-cut and requires a nuanced decision by a court. Furthermore, an otherwise patently untrue statement – for example, about how much more money would be available for the National Health Service in the United Kingdom if the country left the EU – may not be clearly refutable in a legal proceeding. But the main point is that many untrue statements are in fact protected by freedom of speech. This does not mean that protected content has a right to reach an audience or to have its volume amplified by a particular service (for example, through the media), but only that it may not be restricted. This traditional approach is being disrupted by online platforms, which, as is their general practice, also restrict content that is not legally prohibited, according to their own intentions and contractual terms. The same problem dogs the fight against (not legally prohibited) disinformation: the EU also encourages restrictions on content that is otherwise protected by freedom of expression, and the relevant documents do not attempt to resolve this contradiction.
It is also important to make a clear distinction between disinformation originating from governments and dis- or misinformation that comes from members of society, whether spread deliberately or in good faith; in this respect, the EU documents currently available are not fully consistent. Members of society should not be disproportionately restricted in their freedom of expression, even if they approach public debate with malicious intent, and certainly not if they are unaware of the falsity or damaging potential of the news they are spreading (the good-faith transmission of government disinformation also falls into this category). Private speech that is controlled or promoted by a government should be taken into account as such, and it is only the speech of honest citizens who are simply mistaken that should be strongly protected. The question is whether this separation is even possible. And if so, whose job is it: that of legal regulators, or purely of the platforms? We do not have good answers to this dilemma at the moment.
Nor would it be inconceivable to regulate platforms more strictly, setting out their obligations with respect to content not protected by freedom of expression not in self- or co-regulatory instruments but in clearly prescribed legal rules. This would, of course, require close cooperation between the Member States and the EU, as speech bans can only be imposed at Member State level, while platform regulation can only be effective at EU level.
Users also need to be led out of the filter bubble imposed on them by the platforms, which would fundamentally affect the platforms’ business model. In this regard, the option prescribed by the DSA to opt out of content recommendation based on profiling is a step in the right direction, but not a big enough one: it puts the decision in the hands of users, it is questionable how many will take advantage of it, and the same bubble can also be produced by means other than profiling. Data protection regulation can also be called upon to help in this fight, in particular by tightening the rules on data processing by platforms.Footnote 116
It would be worth considering, on the model of traditional media regulation, making the transmission to users of substantiated statements and opinions on public affairs mandatory, or providing easy access to divergent and dissenting views on specific issues, while preserving the choice of users who do not wish to hear them. In respect of television and radio, such instruments include the right of reply, the obligation to provide balanced (impartial) news coverage, the mandatory publication of local or national content, and the mandatory transmission of certain content of public interest by broadcasters. These duties could, with some adaptation, also be applied to social media. In principle, social media could be required to make available, alongside a post on a contentious issue, posts presenting the dissenting views on that issue. Algorithms might be able to do this, although the platforms’ business model might be adversely affected. Such a rule would be similar to the right-of-reply and impartial-information obligations known from media regulation, except that it could be applied automatically, without a specific request. Strengthening nonlegislative approaches, raising awareness and supporting traditional media are also necessary tools – ones within the competence of the Member States.
The fight against disinformation is a seemingly open-ended task that poses particular challenges for policymakers, both in terms of protecting freedom of expression and in defining new obligations for members of the public. It has become clear that traditional legal instruments, legislation and the imposition and enforcement of obligations by the relevant authorities can only partially address the problems it raises, and that the cooperation of all stakeholders is necessary. However, this should not lead to the ‘outsourcing’ of decisions by putting them fully in the hands of private companies. Member States and the EU must continue to play a leading role in shaping the rules. The EU has taken a number of important measures, and some Member States are trying to address some of the issues, but it is reasonable to fear that we are only at the beginning of the journey and that further technological developments will bring new risks. Disinformation, as Paul Bernal has so eloquently demonstrated,Footnote 117 is essentially as old as public communication; there is nothing new under the sun, but we must be able to formulate new answers to old questions all the time. Whatever the outcome of legal systems’ struggles in this regard, responsible, informed participation in public debate will remain primarily the responsibility of the individual, just as it has been in past centuries.