6.1 Introduction
The issue of mass disinformation on the Internet is a long-standing concern for policymakers, legislators, academics and the wider public. Disinformation is believed to have had a significant impact on the outcome of the 2016 US presidential election.Footnote 1 Concern about the threat of foreign – mainly Russian – interference in the democratic process is also growing.Footnote 2 The COVID-19 pandemic, which reached global proportions in 2020, gave new impetus to the spread of disinformation, which even put lives at risk.Footnote 3 The problem is real and serious enough to force all parties concerned to reassess the previous European understanding of the proper regulation of freedom of expression.
This chapter reviews the measures taken by the European Union and its Member States to limit disinformation, mainly through regulatory instruments. After a clarification of the concepts involved (Section 6.2), I will review the options for restricting false statements which are compatible with the European concept of freedom of expression (Section 6.3), and then examine the related tools of media regulation (Section 6.4). This will be followed by a discussion of the regulation of online platforms in the EU (Section 6.5), and by a presentation of EU (Section 6.6) and national (Section 6.7) measures which specifically address disinformation. Finally, I will attempt to draw some conclusions with regard to possible future regulatory responses (Section 6.8).
6.2 Definitional Issues
Not only are the categories of fake news, disinformation and misinformation not precisely defined in law, but their exact meaning is also disputed in academia. Since the 2016 US presidential election campaign, the use of the term ‘fake news’ has spread worldwide. It is usually applied to news published on a public platform, in particular on the Internet, that is untrue in content or misleading as to the true facts, and which is not published with the intention of revealing the truth but with the aim of deliberately distorting a democratic process or the informed resolution of a public debate.Footnote 4 According to Hunt Allcott and Matthew Gentzkow, fake news is news that is ‘intentionally and verifiably false, and could mislead readers’,Footnote 5 meaning that intentionality and verifiable falsehood are important elements of it. However, in principle, fake news could also include content that is specifically protected by freedom of expression, such as political satire, parody and subjective opinions, a definition which would certainly be undesirable in terms of the protection of freedom of expression.Footnote 6 Since President Trump, after his successful campaign in 2016, mostly applied the term to legacy media that was critical of him, it has gradually lost its original meaning and has fallen out of favor in legal documents.Footnote 7
The EU has for some time preferred the term ‘disinformation’ to describe the phenomenon. Of course, fake news and disinformation are in fact two categories with a significant overlap, and as Björnstjern Baade points out, the former will not disappear from the public sphere either, so legislators, law enforcers and public policymakers will have to continue dealing with it.Footnote 8 Tarlach McGonagle cuts the Gordian knot by defining fake news as content that is ‘disinformation that is presented as, or is likely to be perceived as, news’.Footnote 9 The Code of Practice on Disinformation, in line with several EU documents, defines disinformation as ‘false or misleading content that is spread with an intention to deceive or secure economic or political gain and which may cause public harm’.Footnote 10 Thus, intentional deception and the undue gain accrued or harm it causes are also conceptual elements here. By comparison, misinformation is ‘false or misleading content shared without harmful intent though the effects can still be harmful, e.g. when people share false information with friends and family in good faith’.Footnote 11 However, the inclusion of intentionality as an essential characteristic of disinformation may also raise concerns. It is inherently problematic to limit speech on the basis of a speaker’s intent, and not merely on the basis of the effect achieved. Furthermore, while there is a consensus that satire and parody, being protected opinions, cannot be considered disinformation, they can also be published in bad faith, distorting the true facts, which can have an effect similar to that which attempts to suppress disinformation seek to prevent.Footnote 12
The Code of Practice approved by the EU focuses primarily on curbing disinformation in political advertising, which, according to the proposal for a regulation of the European Parliament and of the Council, ‘means the preparation, placement, promotion, publication or dissemination, by any means, of a message: (a) by, for or on behalf of a political actor, unless it is of a purely private or a purely commercial nature; or (b) which is liable to influence the outcome of an election or referendum, a legislative or regulatory process or voting behavior’ (on the Code, see Section 6.6.1).
The distinction between disinformation and misinformation, or in other words, the difference between falsehoods made with the intent to harm and untruths that are communicated in good faith but are likely to cause harm, is indeed important and each problem warrants different levels of action and intervention. Prosecuting both disinformation and misinformation with equal force might lead to an unfortunate situation in which citizens who wish to participate in the debate on public affairs, but who do not have the means to verify the truth of a piece of information or communication, who have no malicious intent and seek no direct personal gain, would suffer a disproportionate restriction on their freedom of expression. The most dangerous form of disinformation is that which comes from governments and public bodies. Tackling this is a separate issue, which allows for the use of more robust instruments.Footnote 13 For example, in March 2022, the Council of the EU banned certain Russian television broadcasters on that basis following the outbreak of the Russian–Ukrainian war.Footnote 14
To summarize the above brief conceptual overview, the current approach in the EU is to consider as disinformation content that: (a) is untrue or misleading; (b) is published intentionally; (c) is intended to cause harm or undue gain; (d) causes harm to the public; (e) is widely disseminated, typically on a mass scale; and (f) is disseminated through an internet content service. Points (e) and (f) are not conceptual elements but refer to the usual characteristics of disinformation. Consequently, distorted information resulting from possible bias in the traditional media is not considered disinformation, nor is the publication of protected opinions (satire, parody). Since a specific characteristic of disinformation is that it is spread mainly on the Internet, in particular on social media platforms, attempts at preventing it focus especially on these services.
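Purely by way of illustration, and not as part of any EU definition, the combination of these elements can be modelled as a simple checklist in which points (a) to (d) are necessary conceptual elements and points (e) and (f) are merely typical markers. All names in this sketch are my own assumptions:

```python
from dataclasses import dataclass

@dataclass
class Content:
    false_or_misleading: bool      # (a) untrue or misleading
    published_intentionally: bool  # (b) intentional publication
    intent_harm_or_gain: bool      # (c) aims at harm or undue gain
    causes_public_harm: bool       # (d) causes harm to the public
    mass_disseminated: bool        # (e) typical only: mass-scale spread
    via_internet_service: bool     # (f) typical only: internet content service

def is_disinformation(c: Content) -> bool:
    """Points (a)-(d) must all hold; (e)-(f) describe the usual mode of
    dissemination and are deliberately not required."""
    return (c.false_or_misleading and c.published_intentionally
            and c.intent_harm_or_gain and c.causes_public_harm)

# A satirical piece is untrue on its face but lacks the intent to deceive
# or to cause harm, so it falls outside the definition, as the text notes.
satire = Content(True, True, False, False, True, True)
assert not is_disinformation(satire)
```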
6.3 The Restriction of False Statements in the European Free Speech Doctrine
The European approach to disinformation, unlike that of the United States, allows for a broad restriction of certain false statements. The US Supreme Court in United States v. Alvarez held that the falsity of an allegation alone is not sufficient to exclude it from First Amendment protection,Footnote 15 but that does not mean that untrue statements of fact, if they cause harm, cannot be restricted, albeit within a narrower range than that of the EU.Footnote 16 While the extent to which and under what circumstances disinformation is restricted in Europe is a matter for national law, normative considerations generally take into account the following three requirements when assessing a restriction on speech: the principle of legality (that the restriction is provided for by an appropriate rule, preferably codified law), the principle of necessity (that the restriction is justified in a democratic society) and the principle of proportionality (that the restriction does not go beyond the legitimate aim pursued).Footnote 17
Within the framework of the protection of freedom of expression in Europe, according to the current doctrine, deliberate lies (intentional publication of untruthful information) may not be subject to a general prohibition. This does not mean that it is not permissible in certain circumstances to prohibit false factual statements but that a general prohibition is usually understood to be incompatible with the doctrine of freedom of speech. The special circumstances in which speech may be prohibited can be grouped into several areas.
First, defamation law and the legal protection of reputation and honor seek to prevent unfavorable and unjust changes being made to an individual’s image and evaluation by society. These regulations aim to prevent an opinion published in the public sphere concerning an individual from tarnishing the ‘image’ of an individual without proper grounds, especially when it is based upon false statements. The approaches taken by individual states to this question differ noticeably, but their common point of departure is the strong protection afforded to debates on public affairs and the correspondingly weaker protection of the personality rights of public figures compared to the protection of freedom of speech.Footnote 18
Second, the EU Council’s Framework Decision on combating racism and xenophobia in the Member States of the EUFootnote 19 places a universal prohibition on the denial of crimes against humanity, war crimes and genocide. Most Member States of the EU have laws prohibiting the denial of the crimes against humanity committed by the Nazis before and during World War II, or the questioning of those crimes or watering down their importance.Footnote 20
Third, a number of specific rules apply to false statements made during election campaigns. These can serve two purposes. On the one hand, communication in the campaign period enjoys robust protection: political speech is the most closely guarded core of freedom of expression, and what is spoken during a campaign is as closely linked to the functioning of democracy and democratic procedures as any speech can be. On the other hand, these procedures must also be protected so that no candidate or political party distorts the democratic decision-making process and ultimately damages the democratic order.Footnote 21
Fourth, commercial communication can be regulated in order to protect consumers from false (misleading) statements. The European Court of Human Rights (ECtHR), in Markt Intern and Beermann v. Germany,Footnote 22 declared that advertisements serving purely commercial interests, rather than contributing to debates in the public sphere, are also to be awarded the protection of the freedom of speech.Footnote 23 Nevertheless, this protection is of a lower order than that granted to ‘political speech’.
Fifth, in some jurisdictions, ‘scaremongering’ – that is, the dissemination of false information that disturbs or threatens to disturb public order or peace – may also be punishable.Footnote 24
Another example of an indirect ban on untrue statements is tobacco advertising. The EU has a broad ban on the subject,Footnote 25 which may be further strengthened by national regulations. The advertising ban includes, by definition, the positive portrayal of tobacco, while the publication of opinions other than advertising arguing for the potential positive effects of tobacco is obviously not banned from the public discourse.
6.4 European and National Media Regulation
Hate speech can also be tackled through media regulation. The Audiovisual Media Services Directive requires Member States to prohibit incitement to violence or hatred directed against a group of persons or a member of a group on the grounds of race, sex, religion or nationality, as well as public provocation to commit terrorist offences, in both linear (television) and nonlinear audiovisual media services (Article 6). Member States have transposed these provisions into their national legal systems. Under the Directive, only the authority of the state in which the media service provider is established has jurisdiction to verify whether the conduct in question constitutes hate speech, and to ensure that the broadcasts of the media service provider do not contain incitement to hatred or violence. If a media service provider is not established in an EU Member State, it is not subject to the provisions of the Directive, and the national authorities can take action against it under their own legal systems. According to the well-established case law of the Court of Justice of the EU and the ECtHR, a television broadcaster which incites terrorist violence cannot itself claim freedom of expression.Footnote 26
Other (indirect) measures can also be applied against disinformation in media regulation. One such measure is the right of reply, through which the legislator grants an individual access to the content of a media service provider not on the basis of some external condition but in response to content previously published by that provider. The Audiovisual Media Services Directive prescribes that EU Member States should introduce national legal regulations with regard to television broadcasting that ensure adequate legal remedies for individuals whose personality rights have been infringed through false statements.Footnote 27 Such regulations are applied throughout Europe and typically impose obligations not only on audiovisual media but also on both the printed and the online press,Footnote 28 and the granting of the right of reply is also suggested in the EU High Level Expert Group’s report on disinformation (see Section 6.6.1) as a possible tool to combat disinformation.Footnote 29 The promotion of media pluralism may involve a requirement for impartial news coverage, on the basis of which public affairs must be reported impartially in programs which provide information on them. Such regulation may apply to television and radio broadcasters, and it has been implemented in several states in Europe.Footnote 30
In July 2022, the British media regulator Ofcom published its decisions on twenty-nine programs that were broadcast on Russia Today (RT) between 27 February 2022 and 2 March 2022. The licence for the RT service was, at the time of broadcast, held by Autonomous Non-Profit Organization TV-Novosti. The programs had raised issues warranting investigation under the due impartiality rules.Footnote 31 Under Section 3(3) of the Broadcasting Act 1990 and of the Broadcasting Act 1996, Ofcom ‘shall not grant a licence to any person unless satisfied that the person is a fit and proper person to hold it’ and ‘shall do all that they can to secure that, if they cease to be so satisfied in the case of any person holding a licence, that person does not remain the holder of the licence’. Taking into account a series of breaches by RT of the British broadcasting legislation concerning the due impartiality and accuracy rules, Ofcom revoked the licence.Footnote 32
The 1936 International Convention on the Use of Broadcasting in the Cause of Peace and the 1953 Convention on the International Right of Correction would also provide for action against communications from state bodies that have a detrimental effect on international relations, but they are hardly applicable generally to disinformation or misinformation.Footnote 33
6.5 Platform Regulation in the European Union
False claims are spreading across different online platforms at an unprecedented rate and, at the same time, to a massive extent. In particular, disinformation that consciously focuses on electoral campaigning is being distributed on social media platforms for political reasons (involving political parties with conflicting interests, other states acting against a particular state and so on). Initially, the platforms defended themselves by claiming that they were neutral players in this communication.Footnote 34 It became increasingly obvious, however, that the platforms themselves are actively able to shape the communication on their services, and that they have an economic interest in its vigor and intensity, and hence that the spread of false news is not necessarily contrary to their interests.Footnote 35 Under EU law, online platforms are considered a type of host provider, whose liability for infringing content which appears in their services is limited, but by no means excluded.
6.5.1 Directive on Electronic Commerce
According to the Directive on Electronic Commerce, if these platforms provide only technical services when they make available, store or transmit the content of others (much like a printing house or a newspaper stand), then it would seem unjustified to hold them liable for the violations of others (‘illegal activity or information’), as long as they are unaware that such violations have occurred. However, in the European approach, gatekeepers may be held liable for their own failure to act after becoming aware of a violation (if they fail to remove the infringing material).Footnote 36 The Directive requires all types of intermediaries to remove such materials after they become aware of their infringing nature (Articles 12–14). In addition, the Directive stipulates that intermediaries may not be subject to a general monitoring obligation to identify illegal activities (Article 15).
While this system of legal responsibility should not necessarily be considered outdated, things have certainly changed since 2000, when the Directive was enacted: there are fewer reasons to believe that today’s online platforms remain passive with regard to content and do nothing more than store and transmit information. While content is still produced by users or other independent actors, the services of gatekeepers select from and organize, promote or reduce the ranking of such content, and may even delete it or make it unavailable within the system. This notice and takedown procedure applies to the disinformation that appears on the platforms, but resorting to the actual removal of content is reserved for disinformation that is unlawful under the legal system of the state in question (slander, terrorist propaganda, denials of genocide and so on). Generally speaking, false claims are not subject to the removal obligation, as they are not in themselves illegal. Similarly, if a piece of content is infringing but no one reports it to the platform, there is no obligation to remove it.
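To make the mechanics of this liability regime concrete, the following sketch models the notice and takedown logic described above. It is an illustration only: the names and structure are my own assumptions, not terms used by the Directive, and the assessment of illegality is deliberately shown as a function supplied by the platform itself, since that is precisely where the Directive leaves the decision:

```python
from dataclasses import dataclass, field

@dataclass
class HostedItem:
    content_id: str
    notices: list = field(default_factory=list)  # notifications received
    removed: bool = False

def handle_notice(item: HostedItem, notice: str, is_illegal) -> None:
    """Awareness arises only from the notice; the removal duty attaches
    only to content that is illegal under the relevant national law.
    Merely false but lawful claims fall outside the obligation."""
    item.notices.append(notice)
    # The provider must judge illegality itself, before any court ruling:
    # the guarantee-free decision criticized in the text below.
    if is_illegal(item.content_id):
        item.removed = True  # failing to act would forfeit the liability shield

# Without a notice there is no awareness, and thus no removal obligation,
# even for content that is in fact infringing.
item = HostedItem("post-123")
handle_notice(item, "user notice: allegedly defamatory", lambda cid: True)
assert item.removed
```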
The notion of ‘illegal activity or information’ raises an important issue, as the obligation to remove offending content is independent of the outcome of any possible court or official procedure that may establish that a violation has been committed, and the host provider is required to take action before a decision is passed (provided that a legal procedure is actually initiated). This means that the provider has to decide on the illegality of content on its own, and its decision is subject to no legal guarantees (even though it may have an impact on freedom of expression). This rule may encourage providers to remove content to escape possible liability, even in highly questionable situations. It would be comforting (but probably inadequate, considering the speed of communication) if the liability of an intermediary could not be established unless a court had first established the illegal nature of the content it failed to remove.Footnote 37
Although continuous, proactive monitoring of infringing content is not mandatory for platforms, the Court of Justice of the EU opened up a loophole for it in 2019, well before the recent Regulation banning Russian media outlets, in Glawischnig-Piesczek v. Facebook Ireland.Footnote 38 The decision in that case required the platform to delete defamatory statements that had been reported once and removed but which had subsequently reappeared. Likewise, the hosting provider may be obliged to ‘remove information which it stores, the content of which is identical to the content of information that was previously declared to be unlawful, or to block access to that’. This is only possible through the use of artificial intelligence, which is encouraged by this decision and even implicitly made mandatory. Putting that decision in a broader context, it seems that platforms are required to act proactively against unlawful disinformation (or any unlawful content), even given the purported continued exclusion of monitoring obligations. The legality of the content is thus determined by algorithms, which seems quite risky for freedom of speech.Footnote 39
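What such automated re-removal might look like in its simplest form is sketched below. The normalization-and-hashing approach is my own assumption for illustration; the judgment prescribes no particular technique. Note that this catches only identical re-postings: extending it to ‘equivalent’ content would require machine-learning classifiers, with the free-speech risks just described:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Normalize and hash a post so that verbatim re-uploads of content
    previously declared unlawful can be recognized automatically."""
    normalized = " ".join(text.lower().split())
    return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

# Fingerprints of statements a court has already declared unlawful.
blocklist = {fingerprint("statement previously declared defamatory")}

def should_block(new_post: str) -> bool:
    # Matches identical wording only, regardless of case and spacing.
    return fingerprint(new_post) in blocklist

assert should_block("Statement  previously declared DEFAMATORY")
assert not should_block("A reworded, 'equivalent' version of the statement")
```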
6.5.2 Digital Services Act
The EU’s Digital Services Act (DSA),Footnote 40 which aims to regulate online platforms in a more detailed and nuanced way, and which became applicable in 2023 for very large online platforms and in 2024 for all other service providers, keeps the most important foundations of European regulation of online platforms in place. The response of the EU to the problem of disinformation is to legislate for more societal responsibility for very large online platforms, while still leaving it to the discretion of the platforms themselves to decide if and how to deal with any systemic risks to freedom of expression.
The DSA retains the essence of the notice and takedown procedure, and platforms still cannot be obliged to monitor user content (Articles 6 and 8), but if they receive a notification that a certain piece of content is illegal, they will be obliged to remove it (Article 6), as set out also in the Directive on Electronic Commerce. The DSA will also seek to protect users’ freedom of expression. It requires users to be informed of the content removed by platforms and gives them the possibility to have recourse to dispute resolution mechanisms in their own country, as well as to the competent authorities or courts if the platform has infringed the provisions of the DSA. These provisions seek to strengthen the position of users, in particular by providing procedural guarantees (most importantly through greater transparency, the obligation to give reasons for deletion of a piece of content or for the suspension of an account, and the right of independent review).Footnote 41
The democratic public sphere is protected by the DSA (Article 14(4)), which states that the restrictions in contractual clauses (Article 14(1)) must take into account freedom of expression and media pluralism. Article 14(4) states that:
Providers of intermediary services shall act in a diligent, objective and proportionate manner in applying and enforcing the restrictions … with due regard to the rights and legitimate interests of all parties involved, including the fundamental rights of the recipients of the service, such as the freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms as enshrined in [the Charter of Fundamental Rights of the European Union].
Where platforms fail to act with due care, objectivity and proportionality in applying and enforcing such restrictions when deleting user content, taking due account of the rights and legitimate interests of all interested parties, including the fundamental rights of the users of the service, such as freedom of expression, freedom and pluralism of the media, and other fundamental rights and freedoms as set out in the Charter of Fundamental Rights of the European Union (CFR), the user may have recourse to the public authorities. In the case of very large online platforms in Europe, this will most often be the designated Irish authority, to which other national authorities must also refer complaints they receive concerning these platforms. The European Commission has also reserved certain powers in this area (it is for the Commission to decide whether to act itself or to leave a case to the Irish authority).
Under the DSA, the authorities do not explicitly take action against disinformation, only doing so if it constitutes an infringement (war propaganda, which can be conducted through disinformation, can of course constitute an infringement). However, since disinformation alone does not constitute an infringement in national jurisdictions, the DSA does not introduce any substantive change in this respect. Furthermore, very large online platforms and very large online search engines must identify and analyze the potential negative effects of their operations (in particular their algorithms and recommendation systems) on freedom of expression and on ‘civil discourse and electoral processes’,Footnote 42 and must then take appropriate and effective measures to mitigate these risks (Article 35). In addition, the DSA’s rules on codes of conduct also encourage the management of such risks and promote the enforcement of codes (including, for example, the Code of Practice on Disinformation, which predates the DSA). These tools also provide an indirect means of tackling disinformation. One of the main purposes of the DSA is to protect users’ freedom of speech, but users’ speech can also contain dis- or misinformation. It will be difficult to reconcile these conflicting interests when applying the regulation.
Article 36 of the DSA introduces a new ‘crisis response mechanism’. A crisis within the meaning of this legislation involves ‘extraordinary circumstances’ that ‘lead to a serious threat to public security or public health in the Union or in significant parts of it’ (Article 36(2)). Very large online platforms will need to assess to what extent and how the functioning and use of their services significantly contribute to such a serious threat, or are likely to do so, and then identify and apply specific, effective and proportionate measures to prevent, eliminate or limit any such contribution to the serious threat identified (Article 36(1)).
6.6 The European Union’s Efforts to Curb Disinformation on Online Platforms
European jurisdictions allow action to be taken against disinformation on grounds such as defamation or the violation of prohibitions on hate speech or scaremongering, while platforms, being ‘host providers’, can be required to remove infringing content. However, these measures in and of themselves seem inadequate to deal with such threats in a reassuring manner. Concerns of this nature have been addressed by the EU in various documents it has produced since 2017.
6.6.1 Communications, Recommendations and Co-Regulation
The first relevant EU Communication, issued in 2017,Footnote 43 concerns tackling illegal content, so it only indirectly addresses the issue of disinformation. It mentions that ‘[t]here are undoubtedly public interest concerns around content which is not necessarily illegal but potentially harmful, such as fake news or content that is harmful for minors. However, the focus of this Communication is on the detection and removal of illegal content.’Footnote 44 The Communication introduced a requirement for platforms to take action against violations in a proactive manner and even in the absence of a notice, even though the platforms are still exempted from liability.Footnote 45 The Recommendation that followed the Communication reaffirmed the requirement to apply proportionate proactive measures in appropriate cases, which thus permits the use of automated tools to identify illegal content.Footnote 46
The High Level Expert Group on Fake News and Online Disinformation published a report in 2018.Footnote 47 The report defines disinformation as ‘false, inaccurate, or misleading information designed, presented and promoted for profit or to intentionally cause public harm’.Footnote 48 While this definition might be accurate, the report refrains from raising the issue of government regulation or co-regulation, and is limited to providing a review of the resources and measures that are available to social media platforms and which they may apply voluntarily. The CommunicationFootnote 49 issued following the report of the High Level Expert Group already recognized the need for more concrete action, not only by online platforms but also by the European Commission and Member States. The document called for a more transparent, trustworthy and accountable online ecosystem. It foresaw the reinforcement of the EU bodies concerned and the creation of a rapid alert system that would identify in real time, through an appropriate technical infrastructure, any disinformation campaign.
Later in 2018, online platforms, leading technology companies and advertising industry players agreed, under pressure from the European Commission, on a code of conduct to tackle the spread of online disinformation. The 2018 Code of Practice on Disinformation was designed to set out commitments in areas ranging from transparency in political advertising to the demonetization of disinformation spreaders. The Code may appear to be voluntary in form – that is, a self-regulatory instrument – but it is in fact a co-regulatory solution that was clearly imposed on the industry players by the European Commission. Its primary objectives are to deprive disseminators of disinformation of advertising revenue from that activity, to make it easy to identify publishers of political advertising, to protect the integrity of the platform’s services (steps against fake accounts and bots) and to support researchers and fact-checkers working on the subject. The Code actually further exacerbates the well-known problem of private censorship (the recognition of the right of platforms to restrict the freedom of expression of their users through rules of their own making),Footnote 50 by putting decisions on individual content in the hands of the platforms, which raises freedom of expression issues.Footnote 51
The Code of Practice was signed in October 2018 by online platforms such as Facebook, Google, Twitter and Mozilla, as well as advertisers and other players in the advertising industry, and was later joined by Microsoft and TikTok. The online platforms and trade associations representing the advertising industry submitted a report in early 2019 setting out the progress they had made in meeting their commitments under the Code of Practice on Disinformation. In the first half of 2019, the European Commission carried out targeted monitoring of the implementation of the commitments by Facebook, Google and Twitter, with a particular focus on the integrity of the European Parliament elections. The Commission published its evaluation of the Code in September 2020, which found that the Code provided a valuable framework for structured dialogue between online platforms, and ensured greater transparency and accountability for their disinformation policies. It also led to concrete actions and policy changes by relevant stakeholders to help combat disinformation.Footnote 52
The Joint Communication to the European Parliament and the Council on an Action Plan against Disinformation foresees the same measures as the previous Communication of 2018.Footnote 53 The Communication called upon all signatories of the Code of Practice to implement the actions and procedures identified in the Code swiftly and effectively on an EU-wide basis. It also encouraged the Member States to launch awareness-raising initiatives and support fact-checking organizations. While this document reaffirms the primacy of means that are applied voluntarily by platform providers, it also displays restraint when it comes to compelling the service providers concerned to cooperate (in a forum convened by the European Commission). If the impact of voluntary undertakings falls short of the expected level, the necessity of action of a regulatory nature might arise.Footnote 54
The arrival of the COVID pandemic in Europe in early 2020 gave a new impetus to the mass spread of disinformation, this time directly threatening human lives. Therefore, the EU bodies issued a new document proposing specific measures to be taken by platforms to counter disinformation about the epidemic, but did not actually broaden the scope of the general measures on disinformation previously set out.Footnote 55 Section 4 of the European Democracy Action Plan also specifically addresses the fight against disinformation and foresees the reinforcement of the 2018 Code of Practice, the addition of further commitments and the establishment of a monitoring mechanism.Footnote 56
In 2021, EU bodies issued a new Communication,Footnote 57 which foreshadowed the content of the updated Code of Practice. Subsequently, a review of the Code was launched, leading to the signing of the Strengthened Code of Practice on Disinformation by thirty-four signatories in June 2022.Footnote 58 The updated and strengthened Code aims to deliver on the objectives of the Commission’s guidance,Footnote 59 presented in May 2021, by setting out a broader range of commitments and measures to combat online disinformation. While the Code has not been officially endorsed by the Commission, the Commission set out its expectations in its Communication, and has indicated that it considers that the Code meets these expectations overall. Since this guidance sets out the Commission’s expectations in imperative terms (‘the Code should’, ‘the signatories should’, and so on), it is not an exaggeration to say that the fulfilment of the commitments is seen as an obligation for the platforms, one which, if fulfilled, could spare them the imposition of strict legal regulation. Consequently, it is correct to consider the Code not as a self-regulatory instrument but as a co-regulatory mechanism, which is not created and operated purely by the free will of industry actors but by a public body (in this case, the European Commission) working in cooperation with industry players.
The Strengthened Code of Practice on Disinformation includes 44 commitments and 128 concrete measures in the areas of demonetization (reducing financial incentives for the disseminators of disinformation), transparency of political advertising (provisions to allow users to better identify political ads through better labelling), ensuring the integrity of services (measures to curb manipulative behavior such as spam, fake accounts, bot-driven amplification, impersonation and malicious deep fakes), empowering users through media literacy initiatives, ensuring greater transparency for platforms’ recommendation systems, supporting research into disinformation, and strengthening the fact-checking community. These measures will be supported by an enhanced monitoring framework, including service-level indicators to measure the implementation of the Code at EU and Member State level. Signatories submitted their first reports on the implementation of the Code to the Commission in early 2023. Thereafter, very large online platforms (as defined in the DSA) will report every six months, while other signatories will report annually. The Strengthened Code also includes a clear commitment to work towards the establishment of structural indicators to measure the overall impact of the Code’s requirements. The 2022 Strengthened Code focuses on political advertising, but also refers to other ‘malicious actors’ beyond those who commission political campaigns containing disinformation.Footnote 60 However, it covers speakers beyond the political sphere (citizens interested in public affairs and participating in debates) and misinformation spread without malicious intent only much more narrowly. Moreover, as with previous documents, it leaves the most important question open: who decides what constitutes disinformation? In practice, it leaves the decision to the platform moderators and, to a lesser extent, to the fact-checkers.
The first baseline reports on the implementation of the Code were published in February 2023.Footnote 61 According to these reports, the service providers that signed the Code have taken a number of measures: for example, in the third quarter of 2022, Google deprived disseminators of disinformation of €13 million in advertising revenue, TikTok removed 800,000 fake user accounts which between them were followed by a total of 18 million users, and 28 million fact-checking tags were added to posts on Facebook.
An ongoing legislative procedure is also worth noting in this regard. In 2021, a proposal was presented for a regulation of the European Parliament and of the Council on the transparency and targeting of political advertising.Footnote 62 The regulation, if adopted, would be uniformly binding on all Member States, covering the identification of the customers of political advertising, their recognizability, measures against illegal political advertising and the requirements for targeting specific users.
The EU’s approaches are in many respects forward-looking and can help to achieve several objectives, although they have also faced a number of criticisms. We may perceive a certain lack of sincerity on the part of both Member States and the EU when it comes to disinformation. All the related documents avoid a clear assessment of the question of whether the dissemination of disinformation falls within the scope of freedom of expression. Following the prevailing European doctrine, one cannot but conclude that a significant proportion of communications containing disinformation is content protected by freedom of expression, so their restriction by instruments with a not entirely clear legal status, such as a co-regulatory code of practice, may be cause for concern. These communications relate to matters of public interest and are therefore subject to the strongest protection of freedom of expression, with the exception of unlawful content, the publication of which is prohibited by specific rules (see Section 6.3). This also applies to content generated or sponsored by governments. However, communications involving untrue statements of fact may not be considered particularly valuable in the European approach, and could actually be restricted by the imposition of further prohibitions. In other words, Member States are free to introduce prohibitions against intentional disinformation that harms society, if this is necessary and proportionate, as this is not within the EU’s competence.Footnote 63 Member States must take the ECtHR’s case law into account when restricting freedom of expression, and this applies equally to disinformation.Footnote 64 Even so, the production and transmission of disinformation can justify restrictions on freedom of expression. However, other content beyond the sufficiently narrow prohibitions thus defined may still claim protection of freedom of expression, so measures taken against them by online platforms – based either on voluntary commitments or on the co-regulatory Code of Practice, but which are not based on the law as it stands – may be unjustified or disproportionate.Footnote 65
The EU approach also reveals a kind of hidden elitism. While the EU focuses on political advertising and intentional disinformation campaigns, some of the measures it enforces on platforms also cover misinformation and communications by citizens. Ian Cram argues that the ECtHR’s jurisprudence privileges traditional (institutional) media over citizen journalists, imposing standards of ‘responsible journalism’ on the latter.Footnote 66 It follows from this that the obligations of the media and other speakers are, where conceptually possible, the same. According to Cram, this is a kind of elitist approach, linked to a – democratically contradictory – perception of media freedom that seeks to create an ‘enlightened public opinion’ even vis-à-vis ‘the people’ (that is, individual speakers, who may be unbridled, perhaps foul-mouthed, and may lack the resources of the institutional media to uncover reality or create informed opinions).Footnote 67 The same is true for the obligations imposed on platforms, which ultimately also restrict this kind of citizen participation in the public sphere. The EU thus turns to the ‘elite’ of the public arena, namely to the traditional media and fact-checking organizations, for help in judging disinformation.
The lack of honesty is also reflected in the interpretation of the Code of Practice, a formally self-regulatory instrument, which in reality is co-regulation imposed by the EU,Footnote 68 where coercion is not based on legislation but on informal agreements, and accompanied by concerns on the part of service providers about the risk of stricter regulation in the future. This co-regulatory nature is recognized by the reference in the Preamble of the Code: ‘This Code of Practice aims to become a Code of Conduct under Article 35 of the DSA’Footnote 69 (in that provision, the DSA itself advocates the creation of codes of conduct that set out the various obligations of platforms). Of course, the concerns of service providers are not necessarily justified, given their economic interest in the spread of disinformation, as the 2021 leak by a former Facebook employee, Frances Haugen, starkly highlighted.Footnote 70 Disinformation, unfortunately, tends to attract users, who readily consume such content and interact with it heavily, which in turn generates financial benefits for the platforms. It is therefore also difficult to believe that the transparency required by the EU and committed to by the service providers in relation to the spread of disinformation – covering decision-making and all relevant information – will actually be achieved, and it is very difficult for an actor outside the platform to verify whether it has been.
Twitter announced in May 2023, under the leadership of Elon Musk, that it would leave the Code. Because of the Code’s formally self-regulatory nature, this was, of course, within its rights. In any case, Thierry Breton, a senior official of the European Commission, announced immediately after the decision that the Code would nevertheless be enforced, including against Twitter.Footnote 71 This will be possible indirectly, if the Code becomes a kind of industry standard, and thus effectively binding, through the application of Article 35 of the DSA.
A problem that goes hand in hand with the spread of disinformation is the breakdown of traditional media. The media are gradually losing the trust of the public,Footnote 72 but their economic foundations and, in turn, their professionalism are also under threat, not least because of the proliferation of internet services. Some EU documents mention the role and importance of the traditional media, although they can hardly offer solutions to these problems. Similarly, only at the level of a mere mention does the EU, including the DSA, address the issue of filter bubbles,Footnote 73 which reinforce social fragmentation, such as the ‘Daily Me’ content offer,Footnote 74 customized for each user, which contributes significantly to the spread of disinformation among susceptible users.Footnote 75 It would not be inconceivable to adopt some of the approaches taken in the regulation of traditional media, such as the right of reply, which would allow disinformation to be accompanied immediately by a reaction containing true facts, or an appropriate adaptation of the obligation of balanced coverage, which would allow a controversial issue to be presented in several readings, immediately visible to the user. This is also hinted at in the Code of Practice, which seeks to steer users towards reliable sources. Measure 22(7) of the Code states that ‘Relevant Signatories will design and apply products and features (for instance, information panels, banners, pop-ups, maps and prompts, trustworthiness indicators) that lead users to authoritative sources on topics of particular public and societal interest or in crisis situations.’ The right to information from multiple sources is the objective of both the right of reply and the obligation to provide balanced information, meaning that even if the means differ, the objectives may be similar in the regulation of traditional media and platforms.
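A hedged sketch of how such a product feature might work follows; the topic list and panel texts are invented for illustration, and a real system would rely on classifiers rather than naive keyword matching:

```python
# Hypothetical mapping of sensitive topics to authoritative pointers,
# mirroring the 'information panels, banners, pop-ups' of Measure 22(7).
AUTHORITATIVE_SOURCES = {
    "covid": "See your national public health authority for vaccine facts.",
    "election": "See the official electoral commission for voting information.",
}

def attach_info_panel(post_text: str) -> str | None:
    """Return panel text if the post touches a listed topic, else None.
    The post itself is not removed; users are merely steered to sources."""
    lowered = post_text.lower()
    for topic, panel in AUTHORITATIVE_SOURCES.items():
        if topic in lowered:
            return panel
    return None

print(attach_info_panel("Miracle covid cure found!"))  # panel is displayed
print(attach_info_panel("Nice weather today"))         # None: no panel
```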
Finally, another problem with the EU’s approach that has been identified is that it is up to platforms and fact-checkers to judge content in the fight against disinformation. This is understandable, since the EU did not want to set up a kind of Orwellian Ministry of Truth, as it would consider it incompatible with freedom of expression for state bodies, courts and authorities to decide on the veracity of a claim. However, it is also doubtful whether leaving such decisions up to private individuals is capable of facilitating informed, fair and unbiased decision-making and whether it does not itself pose a threat to freedom of expression. The very term ‘fact-checking’ is unfortunately Orwellian, and the fact-checkers – and the platform moderators – can themselves be biased, as well as wrong.Footnote 76 Human cognitive mechanisms themselves make fact-checking difficult,Footnote 77 and its credibility is easily undermined, as ‘fact-checkers … disagree more often than one might suppose, particularly when politicians craft language to be ambiguous’.Footnote 78 An empirical study found that ‘fact-checkers are both less likely to fact-check ideologically close entities and more likely to agree with them’.Footnote 79 Fact-checkers are not accountable to society, even less so than the traditional media (which are accountable through legal regulation or ethics-based self-regulation). Their activities are neither necessarily transparent, nor do they have guarantees of independence. In many cases, such as EU-funded organizations, they operate using public money, which makes these shortcomings problematic. If the traditional media are increasingly losing people’s trust, what reason would people have to trust fact-checking organizations, which face similar credibility problems? While fact-checkers share similar problems with traditional media, their emergence is an interesting development and, if they can overcome these institutional problems, it is not inconceivable that they could be a useful contributor to the public sphere.Footnote 80 It is noteworthy that those fact-checkers who work on behalf of or with the approval or support of social media platforms, and who check the veracity of users’ posts on those sites, bring social media closer to traditional media in terms of the way they operate, as these verifiers have a specific editorial role.
6.6.2 Banning Russian Media Outlets in the Context of the Russian–Ukrainian War
Shortly after the outbreak of the Russian–Ukrainian war, on 1 March 2022, the Council of the EU adopted a DecisionFootnote 81 pursuant to Article 29 of the Treaty on European Union (TEU) and a RegulationFootnote 82 pursuant to Article 215 of the Treaty on the Functioning of the European Union (TFEU) under which it is prohibited for:
operators to broadcast or to enable, facilitate or otherwise contribute to broadcast, any content by the legal persons, entities or bodies listed in Annex XV [RT – Russia Today English, RT – Russia Today UK, RT – Russia Today Germany, RT – Russia Today France, RT – Russia Today Spanish, and Sputnik news agency], including through transmission or distribution by any means such as cable, satellite, IP-TV, internet service providers, internet video-sharing platforms or applications, whether new or pre-installed.
All broadcasting licences or authorizations, and all transmission and distribution arrangements, with RT and Sputnik were suspended. (Later, these measures were extended to other Russian media outlets.) These sanctioning rules derive directly from the TEU. The Council of the EU used its prerogatives under Title V of the TEU concerning the general provisions on the EU’s External Action and the specific provisions on the Common Foreign and Security Policy.Footnote 83 According to a leaked letter, the Regulation should be applied to any links to the internet sites of the media outlets, as well as to their social media accounts.Footnote 84 As a result, the ban is a departure from the general monitoring ban in Article 15 of the E-Commerce Directive.Footnote 85 That provision makes it clear that state-imposed orders on social media platforms (referred to in the Directive as host services) to monitor users’ content are not compatible with European law. Later, a lawsuit was initiated by RT France against the Regulation, but the Court of Justice of the EU dismissed RT France’s application.Footnote 86
According to the Recitals of the Decision and the Regulation, the Russian Federation ‘has engaged in a systematic, international campaign of media manipulation and distortion of facts in order to enhance its strategy of destabilization of its neighboring countries and of the Union and its Member States’.Footnote 87 The recitals indicate two reasons for the ban: disinformation and propaganda.Footnote 88 Under Article 52(1) of the CFR, any such interference must pursue ‘objectives of general interest recognized by the Union’. Considering this, the restriction targeting disinformation and propaganda might be in line with the CFR.Footnote 89 However, according to Baade, the EU should not invoke the prohibition of disinformation or propaganda as a legitimate aim, as they may be protected expressions. An alternative aim would be to stop propaganda for war specifically.Footnote 90 The prohibition of propaganda for war is enshrined in Article 20 of the International Covenant on Civil and Political Rights. As all the EU Member States have ratified the Covenant, this prohibition can also be considered a generally accepted principle of EU law. As Baade notes, the justification for the ban imposed on RT and Sputnik in the current situation cannot rest solely on the characterization of their content as ‘propaganda’, or even as disinformation.Footnote 91 As already mentioned, propaganda and disinformation are generally protected by the freedom of expression, with certain exceptions.
After the Regulation came into force, the largest social media companies relaxed the enforcement of their rules involving threats against Russian military personnel in Ukraine.Footnote 92 According to a leaked internal letter, Meta allowed Facebook and Instagram users to call for violence against the Russian and Belarusian leaders, Vladimir Putin and Alexander Lukashenko, so long as the violence was nonspecific (without referring to an actual plot), as well as violence against Russian soldiers (except prisoners of war) in the context of the Ukraine invasion, which involved a limited and temporary change to its hate speech policy.Footnote 93 Twitter also announced some changes in its policies related to the war, although the company did not amend its generally applicable hate speech policies.Footnote 94
The right of platforms to change the boundaries of free speech at will, without any constitutional guarantee or supervision, is an extremely dangerous development. Their propensity to make changes in a less transparent way, avoiding any meaningful public debate on the proposed changes, only increases the risk to freedom of expression.
6.7 Attempts to Regulate Disinformation at the National Level
In order to strengthen the obligations of online platforms, some European countries have adopted rules, in line with common European law, to compel platforms to remove illegal content more quickly and effectively. The corresponding Act in German law (effective as of 1 January 2018) is a paramount example of this trend.Footnote 95 According to the applicable provisions, all platform providers within the scope of the Act (that is, platform providers with over 2 million users in Germany) must remove all user content that constitutes certain criminal offences specified by the Act. Such offences include defamation, incitement to hatred, denial of the Holocaust and the spreading of scaremongering news stories.Footnote 96 Manifestly unlawful pieces of content must be removed within twenty-four hours after receipt of a notice, while any ‘ordinary’ unlawful content must be removed within seven days.Footnote 97 If a platform fails to remove a given piece of content, it may be subject to a fine of up to €50 million (theoretically, in cases of severe and multiple violations).Footnote 98 The German legislation does not go much further than the E-Commerce Directive itself, or its successor, the DSA; it simply refines the provisions of the Directive, lays down the applicable procedural rules and sets harsh sanctions for platforms which violate them. Nonetheless, the rules are followed in practice, and Facebook seems eager to perform its obligation to remove objectionable content.Footnote 99 The German regulation shows how difficult it is to apply general pieces of legislation and platform-specific rules simultaneously, and it demonstrates how governments prefer to have social media platforms act as the judges of user-generated content.
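The Act’s two-track deadline can be expressed very simply, as the sketch below shows. It is illustrative only, and the classification of content as manifestly unlawful is assumed to come from the platform’s own legal review, which is precisely the delegation of judgment the text criticizes:

```python
from datetime import datetime, timedelta

def removal_deadline(notice_received: datetime,
                     manifestly_unlawful: bool) -> datetime:
    """Timing rule of the Act: manifestly unlawful content must be removed
    within 24 hours of the notice; other unlawful content within 7 days."""
    window = timedelta(hours=24) if manifestly_unlawful else timedelta(days=7)
    return notice_received + window

notice = datetime(2018, 1, 2, 9, 0)
print(removal_deadline(notice, manifestly_unlawful=True))   # 2018-01-03 09:00:00
print(removal_deadline(notice, manifestly_unlawful=False))  # 2018-01-09 09:00:00
```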
Subsequently, FranceFootnote 100 and AustriaFootnote 101 adopted similar rules, although the French law (‘Avia law’) was annulled by the Constitutional Council because some of its provisions did not meet the constitutional requirements.Footnote 102 France had introduced transparency and reporting obligations for platforms in a law adopted in 2018, prior to the Avia law, along with a fast-track judicial procedure to remove content disseminated during election campaigns and deemed misleading or inaccurate.Footnote 103 The law confers new powers on the Media Council (Conseil Supérieur de l’Audiovisuel), such as the ability to suspend or withdraw the licence of certain media services if, for example, a service under the control or influence of a foreign company is endangering France’s fundamental interests, including the proper functioning of its institutions, by transmitting false information.Footnote 104 Under an amendment to the 1986 Freedom of Communication Act,Footnote 105 the Media Council can order the suspension of the electronic distribution of a television or radio service owned or controlled by a foreign state if the company deliberately transmits false information that could call into question the integrity of an election.Footnote 106 These powers may be exercised from the beginning of the three months preceding the first day of the month in which the presidential, general or European Parliamentary elections or referendums are held. The Constitutional Council found this law constitutional.Footnote 107
The German and French attempts to regulate disinformation have introduced rules imposing obligations on platforms to remove certain content quickly. At the same time, the German legislation only imposes obligations in respect of content that is in breach of the Criminal Code; hence only the 2018 French law regulates disinformation that is not otherwise illegal, and only during election campaigns. However, these approaches still leave the decision on content in the hands of the platforms, and do not attempt to limit the spread of disinformation in general.Footnote 108 In Germany, another important piece of legislation has been passed which also addresses the issue of disinformation. In 2020, the Interstate Treaty on Media Services (Medienstaatsvertrag, MStV) was adopted, which provides for the transparency of algorithms, the proper labelling of bots and the easy findability of public service media content on the platforms on which it is available. The MStV obliges social media platforms, video-sharing platforms and search engines to be nondiscriminatory in terms of content and to prioritize public service content, while not restricting user preferences. On video-sharing platforms, available public broadcasting content should be especially highlighted and made easy to find. These intermediaries may not unfairly disadvantage (directly or indirectly) or treat differently providers of journalistic editorial content to the extent that the intermediary may potentially have a significant influence on their visibility.Footnote 109 These rules only indirectly limit the spread of disinformation, but they provide a good example of how regulation can try to steer users towards credible content, in line with the traditional approach to media regulation.Footnote 110
In the fight against the COVID pandemic and the disinformation related to it, several European countries tried to curb the spread of false and dangerous information by tightening criminal laws. Hungary, for example, tightened its rules on scaremongering,Footnote 111 and Greece extended the scope of the existing offence of dissemination of false information and introduced a prison sentence for those who spread disinformation on the Internet.Footnote 112
6.8 On Possible Future Solutions: Some Conclusions
The European states and the EU clearly assign primary responsibility for addressing disinformation issues to the platforms. Of course, the national governments and the European institutions have made a number of commitments themselves, but they leave it to the platforms to sort out the substantive issues, including compliance with their commitments under the Code. However, this is not a reason to give up on introducing further restrictions on free speech, as allowed by the European concept of freedom of expression. Even in the context of the US legal system, Cass Sunstein argues that intentional lies, if they cause at least moderate harm, may be constitutionally prohibited – and even negligent or mistaken misrepresentations can be restricted, if the harm incurred by them is serious.Footnote 113 It is still better – at least in Europe, we typically think so – that the line between forbidden and permissible speech is drawn by the legislature and the courts, constrained by strict constitutional guarantees, rather than by private organizations (in this case, mainly social media platforms) operating without such guarantees. But not each and every social media post of concern can be taken to court, because no judicial system could cope with such a workload. Therefore, the right of platforms to decide on user content is likely to remain necessary in the long-term future. However, the protection of content that is not prohibited under the regime of freedom of expression is an important consideration, even if it contains untruths.
Although the European approach is wary of attributing high value to the communication of untrue statements of fact, freedom of expression, at least according to the traditional approach, is in a sense a ‘black and white’ issue. Either a particular piece of content falls within the scope of freedom of expression or it does not. In other words, once the sometimes-difficult question of whether a particular piece of content constitutes defamation, invasion of privacy, hate speech and so on has been answered, the consequences are self-evident: the content will either be protected or it will not. ‘Pizzagate’,Footnote 114 for example, could in principle have been dealt with under defamation law (at least if it had happened in Europe, as under US defamation law it is more difficult to protect the reputation of a specific person against false allegations), and the false allegations made in the Brexit campaignFootnote 115 could in principle also have been prohibited under the rules governing fair election or referendum campaigns. Of course, even in these cases, the permissibility of restricting speech is not clear-cut and requires a nuanced decision by a court. Furthermore, an otherwise patently untrue statement – for example, about how much more money would be available for the National Health Service in the United Kingdom if the country left the EU – may not necessarily be clearly refutable in a legal proceeding. But the main point is that many untrue statements are actually protected by freedom of speech. This does not mean that protected content carries a right to reach an audience or to have its volume amplified by a particular service (for example, through the media), but rather that its restriction is not allowed. This traditional approach is being disrupted by online platforms, which, as is their general practice, also restrict content that is not legally prohibited, according to their own intentions and contractual terms. The same problem dogs the fight against (not legally prohibited) disinformation: the EU also encourages restrictions on content that is otherwise protected by freedom of expression, and the relevant documents do not attempt to resolve this contradiction.
It is also important to make a clear distinction between disinformation originating from governments and dis- or misinformation that comes from members of society, whether spread deliberately or in good faith; in this respect, however, the EU documents currently available are not fully consistent. Members of society should not be disproportionately restricted in their freedom of expression, even if they approach public debate with malicious intent, and certainly not if they are unaware of the falsity or damaging potential of the news they are spreading (the good-faith transmission of government disinformation also falls into this category). Ostensibly private speech that is in fact controlled or promoted by a government should be treated accordingly, while the speech of honest citizens who are simply mistaken should be strongly protected. The question is whether this separation is even possible. And if so, whose task is it: that of legal regulators, or solely that of the platforms? We have no good answers to this dilemma at the moment.
Nor would it be inconceivable to regulate platforms more strictly, setting out their obligations vis-à-vis content not protected by freedom of expression, not in self- or co-regulatory instruments but in clearly prescribed legal rules. This would of course require close cooperation between Member States and the EU, as speech bans can only be imposed at Member State level, while platform regulation can only be effective at EU level.
Users need to be led out of the filter bubbles imposed on them by the platforms, even though this would fundamentally affect the platforms’ business model. In this regard, the option prescribed by the DSA to opt out of content recommendation based on profiling is a step in the right direction, but not a big enough one: it puts the decision in the hands of the users, it is questionable how many will take advantage of it, and the same bubble can also be produced by means other than profiling. Data protection regulations can also be called upon to help in the fight, in particular by tightening up the data-processing activities of platforms.Footnote 116
It would be worth considering making the transmission to users of substantiated statements and opinions on public affairs mandatory, or providing easy access to divergent and dissenting views on specific issues, while maintaining the choice for users who do not wish to hear them, as exemplified by the regulation of traditional media. Such instruments include, in respect of television and radio, the right of reply, the obligation to provide balanced (impartial) news coverage, the mandatory publication of local or national content, and the mandatory transmission of certain content of public interest by broadcasters. These duties could also be applied to social media, with some adaptation. In principle, social media could be required to make available, alongside a post on a contentious issue, posts that present dissenting views on that issue. Algorithms might be able to do this, although the business model of the platforms might be adversely affected. Such a rule would be similar to the right-of-reply and impartial-information obligations known from media regulation, except that it could be applied automatically, without a specific request. Strengthening nonlegislative approaches, raising awareness and supporting traditional media are also necessary tools – within the competence of Member States.
The fight against disinformation is a seemingly open-ended task that poses particular challenges for policymakers, both in terms of protecting freedom of expression and in defining new obligations for members of the public. It has become clear that traditional legal instruments, legislation and the imposition and enforcement of obligations by the relevant authorities can only partially address the problems it raises, and that the cooperation of all stakeholders is necessary. However, this should not lead to the ‘outsourcing’ of decisions by putting them fully in the hands of private companies. Member States and the EU must continue to play a leading role in shaping the rules. The EU has taken a number of important measures, and some Member States are trying to address some of the issues, but it is reasonable to fear that we are only at the beginning of the journey and that further technological developments will bring new risks. Disinformation, as Paul Bernal has so eloquently demonstrated,Footnote 117 is essentially the same age as public communication; there is nothing new under the sun, but we must be able to formulate new answers to old questions all the time. But the end result of any struggle of legal systems in this regard will be that responsible, informed participation in public debates will remain primarily the responsibility of the individual concerned, just as it has been in past centuries.
7.1 Introduction
A broad consensus has emerged in recent years that although rumours, conspiracy theories and fabricated information are far from new,Footnote 1 in the changed structure and operating mechanisms of the public sphere today we are faced with something much more challenging than anything to date,Footnote 2 and the massive scale of this disinformation can even pose a threat to the foundations of democracy.Footnote 3 However, the consensus extends only to this statement, and opinions differ considerably about the causes of the increased threat of disinformation, whom to blame for it, and the most effective means to counter it. From the perspective of freedom of speech, the picture is not uniform either, and there has been much debate about the most appropriate remedies. It is commonly argued, for example, that the free speech doctrine of the United States does not allow for effective legal action against disinformation, while in Europe there is much more room for manoeuvre at the disposal of the legislator.
This chapter presents an example of European thinking: the search for answers to the problem of disinformation within the Council of Europe (CoE). For several reasons, the CoE provides an excellent opportunity to examine this quest as a joint European journey. The organization, which comprises forty-seven Member States, truly represents the vast majority of Europe, and thanks to the diversity of the work within its institutions, it has a complex impact on the legal development and public policy of the continent. The bodies of the CoE, with their sometimes harder, sometimes softer tools – binding international court rulings, opinions on national legislation, public policy recommendations, various reports and analyses – always seek the common European denominator that can provide a solid basis for identifying the peculiarities of the continent’s joint approach.
In this chapter, I will examine the practice of three institutions of the CoE (the European Court of Human Rights, the Committee of Ministers and, to a lesser extent, the Venice Commission) which have had a great impact on the development of Member States’ legislative and judicial processes. The problem of disinformation has not been directly addressed by a specific policy document, either in the decisions of the European Court of Human Rights (ECtHR) or in the recommendations of the Council of Europe’s Committee of Ministers, or among the documents of the Venice Commission. However, the key points that can serve as guidelines for a European approach can be identified from the practice of the three bodies. First, I will outline the common conceptual frameworks in the relevant practice of the three institutions, with special regard to the fundamentals of the doctrine of freedom of speech (Section 7.2). After that, I will analyse the decisions of the ECtHR that can be considered relevant to the question of disinformation, especially those involving the possibility of restrictive legal means against it (Section 7.3). Finally, through the recommendations of the Committee of Ministers, I will examine what public policy solutions the CoE recommends for dealing with disinformation (Section 7.4). My three main sources are: the decisions of the ECtHR (the implementation of which is a fundamental obligation for all Member States), the recommendations of the Committee of Ministers (which provide guidelines for legislators and political decision-makers), and the relevant documents of the Venice Commission, the advisory body on constitutional matters (which comments on the constitutional processes of the Member States).
The chapter will focus chiefly on the role of the state in the fight against disinformation, so other issues that might be extremely important for the topic – for instance, the complex question of self-regulation – are discussed only tangentially. I hope that by the end of the chapter it will be clear that – although European law undoubtedly seeks answers based on different constitutional doctrines compared to the United States, and this leads to significantly different legal practices on important social issues (especially in the area of hate speech) – the problem of disinformation represents a challenge that even on the old continent cannot be effectively answered by restrictive legal means.
7.2 The Legal Framework of the Fight against Disinformation in the Council of Europe
Although the CoE has not issued any decision or recommendation specifically aimed at combating disinformation, the outlines of how to deal with the issue can be clearly drawn from the relevant documents. The present problem of disinformation did not materialize in a vacuum, as clear principles and a system of criteria for the regulation of public discourse have been developed in recent decades. There is no doubt that the challenge posed by disinformation lies precisely in the fact that digital technologies, platforms and social media have significantly subverted the previous operating mechanisms of the public sphere. However, the response to it needs to be situated among established constitutional principles. We must confront the accepted doctrine with the new phenomenon: we must determine whether new approaches are needed at some point and we must also make it clear where it is not possible to compromise on the principles that have been followed so far.
The Council of Europe has been at the forefront of analysing the social issue of disinformation from the beginning. It was one of the first bodies to recognize the gravity of the phenomenon while trying to clear up the increasingly confusing picture. Amid the rampant use of the label ‘fake news’, on the one hand, and demands for immediate and decisive intervention against misleading information, on the other, a comprehensive study carried out for the CoE in 2017 was pioneering in its emphasis on the importance of a calm, analytical approach. The significance of the paper for our purposes is that – examining the intensifying information disturbances in a nuanced manner – it provided a new conceptual framework for the analysis of the issue. The authors of the study proposed a threefold conceptual categorization to describe information disorders, which until then had mostly been associated with the vague concept of ‘fake news’.Footnote 4
In this model, misinformation is when false information is shared, but no harm is meant. Disinformation, meanwhile, is when false information is knowingly shared to cause harm. A further category, mal-information, is when genuine information is shared with the intent to cause harm, often by bringing information designed to remain private into the public sphere. In this section, I will consider this conceptual framework to be authoritative while dealing primarily with the response to disinformation – that is, knowingly and harmfully spreading untruths. The starting points for the regulatory treatment of social information disturbances, including the intensification of attempts at disinformation, are provided by the legal framework of public communication. The most important points of this framework in the work of the bodies of the CoE are summarized below.
7.2.1 The Participatory Model of Free Speech and Democratic Public Opinion
The question that defines the entire doctrine of freedom of speech is which theoretical justification the right relies on or, to put it more accurately, which aspects of the possible justifications a given doctrine prioritizes. The concept of ‘speech’ has a normative nature that each constitutional doctrine defines based on the justification(s) it emphasizes.Footnote 5 These justifications have already been systematized by others.Footnote 6 Essentially, communications represent three types of value: they can contribute to the discovery of the truth we seek together,Footnote 7 they can be manifestations of the free fulfilment of our personality (the individualist approach),Footnote 8 and they can ensure our participation in democratic social life (democratic theories).Footnote 9 Although it is neither possible nor necessary to insist on exclusivity among the justifications, the primary basis on which the European doctrine rests can be clearly established.Footnote 10
From the very beginning, the ECtHR has focused on the democratic justification of freedom of expression. According to the reasoning it has consistently subscribed to, ‘freedom of expression constitutes one of the essential foundations of a democratic society and one of the basic conditions for its progress and for each individual’s self-fulfilment’.Footnote 11 Considering the practice as a whole, it is clear that, despite the mention of individual fulfilment, the legal interpretations are not primarily based upon the individualistic justification, although it plays an important role within the democratic approach. Democratic justifications are not completely uniform in all details, and the two main models focus on somewhat different elements in important situations of legal interpretation – and the issue of disinformation is one such situation.
One democratic theory sees the value of freedom of speech in its being essential to the common, informed decision-making that is the essence of democracy; this approach places the audience’s need for information at the centre.Footnote 12 The other theory sees the value of free speech above all in that it ensures that everyone has the opportunity to become involved in the life of the democratic community. In this model, participation is at the centre of the concept of democracy and democratic public opinion,Footnote 13 and freedom of speech focuses much more strongly on the speaker and their intention to communicate.Footnote 14 The practice of the ECtHR draws on both approaches, but it is chiefly based on the participation model in the sense that the central issue of its legal interpretations is the protection of the speaker’s right to personal expression.
The aspects of participatory democracy are also emphasized in the documents of other bodies of the Council of Europe. As a recommendation of the Committee of Ministers on the new notion of media highlights, freedom of expression is indispensable for a genuine democracy and for democratic processes. ‘In a democratic society, people must be able to contribute to and participate in the decision-making processes which concern them.’Footnote 15 The internet-related recommendations of the Committee of Ministers – as will be shown below – also recognize the revolutionary importance of the digital age for freedom of speech in the expansion of opportunities for personal participation. In addition, the democracy-based approach is most evident in the concrete interpretation of the law, in which, although the scope of freedom of speech is broader than that of political communication, significantly stronger protection is afforded to political speech. The ECtHR consistently emphasizes that ‘the promotion of free political debate is a very important feature of a democratic society and the Court attaches the highest importance to freedom of expression in the context of such debate’.Footnote 16 The importance of this approach is particularly highlighted by the practice of the ECtHR on artistic expressions, which grants strong protection to works of art only if they form part of the public debate.Footnote 17
Both the ECtHR and other bodies of the CoE therefore attach special importance to the democratic formation of public opinion, to which they attribute specific characteristics. On the one hand, democratic public opinion means a lively discourse that embraces as many points of view as possible, and develops according to its own logic and under its own rules in a lively discussion of opinions and counter-opinions. As the Venice Commission emphasizes, open and robust public debate is the cornerstone of democracy: ‘A democracy should not fear debate, even on the most shocking or anti-democratic ideas. It is through open discussion that these ideas should be countered and the supremacy of democratic values be demonstrated … Persuasion through open public debate, as opposed to ban or repression, is the most democratic means of preserving fundamental values.’Footnote 18 On the other hand, although this does not appear expressis verbis in the documents, the legal interpretation of the public debate starts from a specific anthropological view.
The decisions of the ECtHR on the restriction of commercial communication can be usefully contrasted with its decisions on communication deemed to be part of the public debate. The Court has consistently held that although freedom of speech extends to commercial advertisements, their publication can be widely restricted. In order to ensure that consumers receive accurate information about the specific features of goods and services, restrictions may be imposed especially in the case of misleading or untrue information. The ECtHR therefore regards the consumer as an actor vulnerable to the manufacturer and in need of protection.Footnote 19 As we will see, in the democratic public debate the Court does not admit the possibility of such a general restriction even in the case of untrue information, and considers citizens participating in the formation of public opinion as autonomous (rather than vulnerable) actors.Footnote 20
It should also be mentioned that, in the European doctrine, the press is an actor in the public sphere with distinct rights and obligations. As multiple recommendations of the CoE emphasize, the press’s function – namely the scrutiny of public and political affairs and of private or business-related matters of public interest – justifies the media’s broad freedom; this freedom is, however, counterpoised by a requirement of greater diligence in respect of factual information. Regarding the reliability of information, ‘professionalism requires verifying information and assessing credibility’.Footnote 21
In the ECtHR’s view, electoral campaigns have a significance of their own for the formation of democratic public opinion. The Court has reiterated several times that free elections and freedom of expression, particularly freedom of political debate, together form the bedrock of any democratic system.
For this reason, it is particularly important in the period preceding an election that opinions and information of all kinds are permitted to circulate freely … At the same time the Court recognises the importance of protecting the integrity of the electoral process from false information that affect voting results, and the need to put in place the procedures to effectively protect the reputation of candidates.Footnote 22
We will see below, in the relevant case law (Section 7.3), that the ECtHR has so far always placed the reputation of the candidates concerned at the centre of its analysis.
7.2.2 The Internet and Freedom of Speech
To discuss the issue of disinformation, it is important and instructive to examine more closely how the documents of the CoE view the Internet. A wealth of material is available in this regard, as the Committee of Ministers has dealt with the issues raised by the Internet in many of its recommendations – even mentioning disinformation among these problems – and, of course, cases related to the role of the Internet have also been brought before the ECtHR.
It is clear from the documents that the CoE has taken into account the new dangers arising from the functioning of the Internet from the very beginning, but it welcomes the Internet first and foremost as a tool that can radically expand the possibilities for democratic participation. The recommendation of the Committee of Ministers on measures to promote the public service value of the Internet notes that while digital tools can significantly enhance the exercise of human rights and fundamental freedoms, such as freedom of expression, they may also adversely affect these and other rights. Still, the Committee recommends that, in order to promote democracy, Member States should strengthen the participation and involvement of their citizens in public debate through the Internet, and encourage the use of infocommunication services, including online forums, weblogs, political chats, instant messaging and other forms of citizen-to-citizen communication. The recommendation strongly supports citizens’ engagement with the public through user-generated communities rather than official websites.Footnote 23
The ECtHR also approaches the Internet as one of the principal means of providing essential tools for participation in discussions concerning political issues, highlighting that the possibility of user-generated expressive activity on the Internet provides an ‘unprecedented platform for the exercise of freedom of expression’.Footnote 24 The Court welcomes the fact that the Internet has fostered the ‘emergence of citizen journalism’, as political content ignored by the traditional media is often disseminated via websites to a large number of users, who are then able to view, share and comment upon the information.Footnote 25 However, the bodies of the CoE have also identified the dangers entailed in making use of the new opportunities provided by the Internet. Defamatory and other types of clearly unlawful speech, including hate speech and speech inciting violence, can be disseminated as never before, in a matter of seconds.Footnote 26 Digital transformation and the shift towards an increasingly digital, mobile and social media environment have profoundly changed the dynamics of the production, dissemination and consumption of news.Footnote 27 Newer materials also mention the problem of disinformation among the Internet’s dangers: ‘Targeted disinformation campaigns online, designed specifically to sow mistrust and confusion and to sharpen existing divisions in society, may also have destabilizing effects on democratic processes.’Footnote 28 Meanwhile, ‘[d]emocracies have experienced growing threats posed by the spread of disinformation and online propaganda campaigns, including as part of large-scale co-ordinated efforts to subvert democratic processes’.Footnote 29
It is worth briefly mentioning the CoE bodies’ main approach to responsibility for internet content. The central concept of the documents is ‘shared liability’. According to this, ‘a wide, diverse and rapidly evolving range of players facilitate interactions on the Internet between natural and legal persons by offering and performing a variety of functions and services’,Footnote 30 and responsibility for content must be adapted to this multi-player environment. According to the Committee of Ministers, instead of summary solutions, a fine-tuned approach is needed that elaborates and delineates the boundaries of the roles and responsibilities of all key stakeholders within a clear legal framework, using complementary regulatory frameworks.Footnote 31 The ECtHR also focuses on ‘a context of shared liability between various actors’.Footnote 32
An important starting point for the CoE’s approach to responsibility is that providers of intermediary services – which contribute to the functioning of or access to media and content, but do not themselves exercise editorial control – should not be regarded as media. However, their activity is certainly relevant in the media context and for the formation of democratic public opinion.Footnote 33 The CoE agrees with the view that state authorities should not impose on intermediaries a general obligation to monitor content to which they merely provide access, and recommends that states ensure that intermediaries are not held liable for such third-party content. Intermediaries may be liable if they do not act expeditiously to restrict access to content or services as soon as they become aware of their illegal nature.Footnote 34 As the ECtHR emphasized: ‘to exempt these services from all liability might facilitate or encourage abuse and misuse, including hate speech and calls to violence, but also manipulation, lies and disinformation’.Footnote 35
7.2.3 Competing Values and Their Power against Free Speech
Before examining the concrete decisions of the ECtHR in Section 7.3, it is also worth establishing in principle the methodology with which the jurisprudence evaluates the values and interests that may compete with the interest of freedom of speech. It is clear what these values and interests are under the European Convention on Human Rights. Article 10(2) of the Convention, on freedom of expression, lists the reasons that may serve as a basis for restricting freedom of expression. According to this part of the text, freedom of expression can be limited in the interests of national security, territorial integrity or public safety, for the prevention of disorder or crime, for the protection of health or morals, for the protection of the reputation or rights of others, for preventing the disclosure of information received in confidence, or for maintaining the authority and impartiality of the judiciary. It is well established in the jurisprudence of the ECtHR and the CoE documents that the list provided in the Convention is exhaustive: any restriction of the right to free speech must pursue a legitimate aim as exhaustively enumerated in Article 10.Footnote 36
The further question of what power these specific reasons for restriction represent against freedom of speech is connected to a dilemma that is also known from the academic discourse: whether freedom of speech should be protected with a categorical or a balancing approach,Footnote 37 or – to adapt this question to the more recent American terminology – whether strict or intermediate scrutiny should be the main method.Footnote 38 At one end of the scale of theoretically possible answers is the position that grants constitutional protection to communications in all circumstances, while the other end of the scale is represented by the view that conflicts of relevant constitutional values can only be resolved by considering the special circumstances of specific cases. According to the most common view, which rather oversimplifies the picture, while the categorical approach prevails in the USA, in Europe balancing interests is more typical. However, the situation is more complex than this: although the absolutist understanding of freedom of speech is undoubtedly quite different from the European doctrine, the jurisprudence of the ECtHR strives to combine both the categorical and the balancing approaches in its practice.Footnote 39
On the one hand, the Court must weigh the grounds for restriction listed in the Convention against freedom of expression, which the Convention itself also protects.Footnote 40 On the other hand, with regard to political speech, the jurisprudence applies a more categorical understanding of protection. According to the consistently emphasized thesis, there is little scope under Article 10(2) for restrictions on freedom of expression in the field of political speech, and the authorities’ margin of appreciation in assessing competing interests against freedom of expression in this context is particularly narrow.Footnote 41 The Recommendation of the Committee of Ministers concerning internet freedom also points out that restrictions on freedom of speech based on legitimate aims, including defamation laws, hate speech laws or laws protecting public order, should be specific and narrowly defined in their application so that they do not inhibit public debate.Footnote 42 Although included in the above-mentioned list, hate speech is something of an exception to the more categorical approach, as its restriction is accepted by the CoE’s bodies, including the Court, with an increasingly permissive attitude. In general, however, it remains true that the ECtHR’s approach to political speech departs from case-by-case consideration and tends towards more categorical protection.Footnote 43
7.2.4 The Role of the State
In the practice of the CoE concerning freedom of speech, the peculiarities of the European approach to fundamental rights can be clearly identified when considering the role of the state in the enforcement of those rights. The starting point differs little from the US doctrine: the purpose of the constitution and of fundamental rights is, above all, to limit state power, thus ensuring the exercise of civil liberties. The primary obligation of the state is therefore to refrain from violating these freedoms. Regarding freedom of speech, the key point is that the state should not interfere in the formation of public opinion. However, the European approach goes beyond this starting point in two important ways.
On the one hand, the documents of the CoE consistently emphasize that the state has not only negative but also positive obligations in connection with the protection of freedom of speech.Footnote 44 The state is obliged not only to refrain from restricting expression but also to actively contribute to the creation of an environment that supports the exercise of freedom of speech.Footnote 45 In line with this, states also have a positive obligation in the digital environment, ‘to create a safe and enabling environment for everyone to participate in public debate and to express opinions and ideas without fear, including those that offend, shock or disturb State officials or any sector of the population’.Footnote 46 The state must not only protect the individual exercise of rights, but also promote the fulfilment of freedom of opinion as a social value and institution based on its obligation to ensure an ‘objective institutional protection’. This obligation allows broader scope than the US doctrine to regulate social relations related to the fundamental right. The state’s obligation to act in support of the formation of democratic public opinion – as we will see in Section 7.3 analysing the practice of the ECtHR – justifies only a very narrow range of substantive intervention in the content of social communication.
Jurisprudence considers only the ‘gravest forms of hate speech’ to be inherently incompatible with the basic values of democracy, so they can be excluded from the scope of freedom of speech without further consideration.Footnote 47 Beyond that, however, the protection of democracy does not justify substantive intervention in the formation of public opinion but serves as a basis for the state to contribute through regulation to the proper framework and structure of the democratic public sphere. Among other things, this consideration creates a solid basis for European media regulations, the development of which is strongly supported by the CoE. According to its Recommendation on media pluralism, the states are the ‘ultimate guarantors’ of the democratic, plural functioning of the social public domain and have a positive obligation to put in place an appropriate legislative and policy framework to ensure the proper flow of views and information.Footnote 48
On the other hand, the role of the state is fundamentally influenced by the fact that, according to the European doctrine, the protection of fundamental rights is not only relevant between citizens and states. It has been an integral tenet of European constitutional law for decades that in some cases, when private actors find themselves in a situation that significantly affects the enforcement of the fundamental rights of others, constitutional values also impose requirements on them.Footnote 49 States have a ‘positive obligation to ensure the exercise and enjoyment of rights and freedoms (which) includes, due to the horizontal effects of human rights, the protection of individuals from actions of private parties by ensuring compliance with relevant legislative and regulatory frameworks’.Footnote 50 Recently, the thesis of the horizontal scope of fundamental rights has gained traction in the field of freedom of speech, especially in the relationship between social media platforms and their users.Footnote 51 According to this argument, platforms cannot shape and apply their community rules at their own discretion, but must pay attention to the rights of their users, above all their freedom of speech. In terms of content moderation, for instance, the hands of the service providers are therefore tied to a certain extent by the requirements arising from freedom of speech. As a consequence, while the recommendation of the CoE welcomes that ‘some online platforms have made considerable efforts to prevent the use of their networks as conduits for large-scale disinformation and manipulation of public opinion’, it also warns that ‘the impact of these measures on the free flow of information and ideas in democratic societies must be studied carefully’.Footnote 52
7.3 False and Harmful Information in the Jurisprudence of the ECtHR
According to the previously cited definition from the CoE analysis, disinformation is false information knowingly shared in order to cause harm. Although, with the exception of a single stray mention, the ECtHR has not yet addressed disinformation in its decisions,Footnote 53 it has a remarkable body of practice regarding the untruthfulness of communications and the harm it causes. In analysing this practice, I will concentrate on the aspects that play an important role in the Court’s reasoning with respect to the conceptual elements of disinformation (falsity, intent and harm).
7.3.1 Statements of Facts and Value Judgements
One of the most important guiding threads of the ECtHR’s decisions reveals the difficulty of distinguishing between factual statements and value judgements. According to the Court’s general practice, the protection of freedom of speech applies differently to statements of facts and value judgements. The Court distinguishes between the two categories on the ground that ‘while the existence of facts can be demonstrated, the truth of value judgments is not susceptible of proof’, and ‘the requirement to prove the truth of a value judgment is impossible to fulfil and infringes freedom of opinion itself’.Footnote 54 The two categories cannot be completely separated, however. First, statements of facts are an integral part of the formation of present or future value judgements, and this must also be taken into account when determining the standards of their protection. Second, the ECtHR reiterates that where a statement amounts to a value judgement, ‘the proportionality of an interference may depend on whether there exists a sufficient factual basis for the impugned statement, since even a value judgment without any factual basis to support it may be excessive’.Footnote 55
From the perspective of legal action against disinformation, an important question is how strictly the categories of facts and value judgements are regarded as separable. If we rigidly insist on the truthfulness of each element of an expression, the possibility of restriction opens up greatly, while if we place the emphasis on judging the expression of opinion as a whole, the speaker’s participation in the democratic dialogue becomes much better protected. In the practice of the ECtHR, the latter approach clearly follows from the fundamental aspects of freedom of speech. This is well illustrated by the Polish cases described below, which dealt with the application of a provision of the local election law sanctioning false statements.Footnote 56
In Kita v. Poland, a decision had to be made on the legality of leaflets distributed in a local election campaign which drew the voters’ attention to financial abuses. The ECtHR disallowed the summary classification applied by the Polish courts, which, in categorizing the statements in question, unreservedly qualified all of them as statements lacking any factual basis, without examining whether they could be considered value judgements. While the ECtHR recognized that some of the statements could be considered to lack a sufficient factual basis, it based its decision on the fact that ‘the thrust of the applicant’s article was to cast doubt on the suitability of the local politicians for public office’ in relation to issues of public interest. According to the ratio decidendi of the Strasbourg judgment, ‘the distinction between statements of fact and value judgments is of less significance in a case where the impugned statements were made in the course of a lively political debate, even where the statements made may lack a clear basis in fact’.Footnote 57
In Kwiecien v. Poland, the question was again the interpretation of the Polish election law. In this case, an open letter attacked one of the candidates, and the ECtHR again assumed that the purpose of the speech published during the campaign was to dispute the candidate’s suitability for public office. As a general rule of interpretation, the Court established a presumption that, in the case of campaign communications, ‘opinions and information … should be considered as forming part of a debate on questions of public interest’. The judgment did not dispute the assertion that certain elements of the open letter lacked a sufficient factual basis, but overall it considered that ‘its thrust was to cast doubt on the candidate’s suitability for local public office, based on the applicant’s long experience with him’.Footnote 58
7.3.2 Good Faith or Bad Faith
In both Kita and Kwiecien, the ECtHR also referred to the fact that nothing in the circumstances of the case indicated that the speakers had acted in bad faith. As we have seen, the fact that certain elements of the expressions did not have an adequate factual basis was not enough to establish bad faith. Proving bad faith means rebutting the presumption that the speaker acted with the intention of participating in democratic political debate. If it emerges from the circumstances of the case that the author of the opinion was no longer motivated by the shaping of the debate on public affairs, but by the goal of an unfounded personal attack, then their expression of opinion is not entitled to the same protection.Footnote 59
Salov v. Ukraine sheds light on the same consideration in a true ‘conspiracy theory case’. One of the candidates in the Ukrainian presidential election campaign was held responsible by the Ukrainian courts for disseminating the false information that the incumbent president (who was running for re-election) was in fact already dead and was being replaced by a double during his public appearances. However, the ECtHR emphasized in its decision that the original source and publisher of the untrue information was not the convicted presidential candidate; he had only dealt in good faith with information that had already emerged during the campaign. The judgment points out that the sharing or discussion of received information cannot be prohibited in itself, ‘even if it is strongly suspected that this information might not be truthful. To suggest otherwise would deprive persons of the right to express their views and opinions about statements made in the media’. The Court considered that the presidential candidate had acted in good faith because he emphasized that, while discussing the information with others, he had not known whether it was true or false, and claimed that he had been trying to verify it.Footnote 60
At the same time, the standard of good faith is not uniform, but may vary according to the position of the speaker. In Medzlis Islamske Zajednice Brcko, religious and ethnic civil organizations appealed to the competent authorities of an administrative district in Bosnia and Herzegovina and, making a number of incriminating allegations, criticized a person they considered unsuitable for the post of director of the district’s multi-ethnic public radio station. Some of the claims came from other people’s reports, while others were presented to the authorities without sources. The ECtHR started from the premise that, in the eyes of society, reputable non-governmental organizations (NGOs) bear an increased responsibility when disseminating factual statements. According to the Code of Ethics and Conduct for NGOs adopted by the World Association of Non-Governmental Organizations, they were bound by the requirement to verify the veracity of the allegations submitted. In contrast, the organizations involved in the case made statements based on guesswork and rumour, without making any serious effort to verify their authenticity.Footnote 61
7.3.3 Journalistic Ethics
The standard of good faith and careful conduct is always stricter when it applies to the press. The ECtHR attributes a particularly important and prominent role in the functioning of democracy to the press as an institution that professionally informs the public.Footnote 62 This role does not mean that the press should not respect the rights of others, but the aspects of information dissemination are given special weight, since its basic task is to present information and opinions related to public affairs.Footnote 63
However, according to the practice of the court, the press is entitled to enhanced protection in its informative activities only when it acts in good faith – that is, when it acts in accordance with the tenets of ‘responsible journalism’.Footnote 64 The standard of good faith is acting according to journalistic ethics, which sets expectations for journalists in order to provide the public with accurate and reliable information.Footnote 65 Based on journalistic ethics, it is the basic task of the press to take steps before publication to verify the accuracy of the information it receives.Footnote 66 However, the fulfilment of this obligation is, by definition, adjusted to the circumstances of the given case. When covering communication by others, for example in an interview, the general requirement for journalists to distance themselves systematically and formally from the content of a quotation that might insult or provoke others or damage their reputation is not reconcilable with the press’s role of providing information on current events, opinions and ideas. ‘Punishment of a journalist for assisting in the dissemination of statements made by another person in an interview would seriously hamper the contribution of the press to discussion of matters of public interest.’Footnote 67 In addition, it should always be taken into account that journalistic freedom also covers possible recourse to a degree of exaggeration, or even provocation.Footnote 68
It is very instructive, however, that in Staniszewski the ECtHR saw no reason to protect the free flow of information when the editor of a monthly newsletter completely failed to indicate the sources of his information. Without any indication of sources, we are probably facing the fabrication rather than the dissemination of news, which the Court considered clear proof of acting in bad faith.Footnote 69 In the Court’s approach, in today’s ever more effervescent information whirlwind, the importance of journalistic ethics has not diminished at all; in fact, it is becoming ever more important: ‘In a world in which the individual is confronted with vast quantities of information circulated via traditional and electronic media and involving an ever-growing number of players, monitoring compliance with journalistic ethics takes on added importance.’Footnote 70 The ECtHR quite accurately and tellingly identifies one of the main reasons why disinformation is felt to be so overwhelming in our public discourse: partisan and activist journalism is increasingly trumping the standards of responsible journalism.
7.3.4 The Impact of Speech
The Strasbourg Court considers the influencing power of speech to be a factor to be taken into account even in the case of harmful communications, including untrue statements of fact. In the above-mentioned Salov case, for example, an important circumstance was that the convicted presidential candidate shared untrue information from a small newspaper reporting fabricated news with a limited number of people.Footnote 71 When assessing information published in the legacy media, the ECtHR traditionally takes into account the differences between the influencing powers of different media. ‘In considering the duties and responsibilities of a journalist, the potential impact of the medium concerned is an important factor and it is commonly acknowledged that the audio-visual media often have a much more immediate and powerful effect than the print media.’Footnote 72
There may also be large variations in the potential impact of different expressions given the extremely wide range of types of communication on the Internet.
It is clear that the reach and thus potential impact of a statement released online with a small readership is certainly not the same as that of a statement published on mainstream or highly visited web pages. It is therefore essential for the assessment of a potential influence of an online publication to determine the scope of its reach to the public.Footnote 73
Likewise in connection with the potential influence of speech, it is worth mentioning the thread of interpretation which – primarily in cases related to hate speech – also attributes importance to the person of the speaker. According to the Court, the opinions of speakers in certain positions receive more attention, so their impact can be more significant. In Sanchez v. France, the ECtHR emphasized that, with politicians, the degree of notoriety and representativeness necessarily lend a certain resonance and authority to a person’s words or deeds. Owing to politicians’ particular status and position in society, they are more likely to influence people.Footnote 74 In terms of communication on the Internet, a similarly enhanced impact can be presumed if the speaker is a well-known blogger or a popular contributor to social media.Footnote 75
The Court always evaluates the social and political context of the expression of an opinion as an important or, in some cases, decisive factor in the impact of the communication. When a statement is made against a tense political or social background, the presence of such a background generally leads the Court to accept that some form of interference is justified.Footnote 76 In contrast, if there was no indication that a sensitive social or political background existed, the Court was reluctant to admit the necessity of restriction.Footnote 77
7.3.5 The Use of Internet Sources
The line of interpretation developed by the ECtHR regarding the use of internet sources by the press has significance beyond the media. The study prepared for the CoE also emphasizes that ‘in the age of disinformation’ people increasingly encounter information created by unofficial sources (social media accounts, or websites which have only recently appeared), so there is ‘a need to be doing source-checking as well as fact-checking’. This is partly because the nature of the source which originally created the content, or which first shared it, can provide the strongest evidence as to whether the information is accurate.Footnote 78
At the same time, the practice of the Court does not accept objective liability for the dissemination of information, even for the press, which is charged with a strict duty of care. In Editorial Board of Pravoye Delo and Shtekel v. Ukraine,Footnote 79 the ECtHR found fault with the Ukrainian regulation because, although the Press Act exempted journalists from civil liability for the verbatim reproduction of material published in the press, no such immunity existed for journalists reproducing material from internet sources not qualified as press. The Court admitted that the risk of certain forms of harm posed by content and communications on the Internet is higher than that posed by the press, and therefore the policies governing the reproduction of material from the printed media and the Internet may differ. Even so, considering the importance of the Internet for the exercise of the right to freedom of expression, the Court ruled that ‘the absence of a sufficient legal framework allowing journalists to use information obtained from the Internet without fear of incurring sanctions seriously hinders the exercise of the vital function of the press as a public watchdog’.Footnote 80
Similarly, the ECtHR considered the practice of the Hungarian Constitutional Court, which only exempted journalists from responsibility for the coverage of official press conferences, to be insufficient. In the case of Magyar Jeti Zrt. v. Hungary,Footnote 81 the Court considered it necessary to enforce such immunity on a much wider scale. In the specific case, in connection with the distribution of hyperlinks, the reasoning explained that their very purpose, by directing internet users to other pages and web resources, is to allow them to navigate to and from material in a network characterized by the availability of an immense amount of information. Hyperlinks, as a technique of reporting, are essentially different from traditional acts of publication in many senses, including that the person offering information through a hyperlink does not exercise control over the content of the website to which the hyperlink enables access. Consequently, the Court could not agree with the approach equating the mere posting of a hyperlink with the dissemination of defamatory information, thus automatically entailing liability for the content itself.Footnote 82 Of course, the consideration of other factors also played a role in the decision, especially that the press organ acted in good faith, in the service of the discussion of public affairs.
In another Hungarian case, the ECtHR explained in connection with the comments section of an internet portal that providing a platform for third parties to exercise their freedom of expression by posting comments is a journalistic activity of a particular nature. Therefore, the question of liability for defamatory statements posted there should always be considered on the grounds that ‘the punishment of a journalist for assisting in the dissemination of statements made by another person in an interview would seriously hamper the contribution of the press to discussion of matters of public interest’.Footnote 83 Although the above theses regarding the use of internet resources were elaborated by the ECtHR in cases related to the press, they are deeply rooted in its general approach to internet communication and are thus also applicable to other speakers.
7.3.6 The Harm of Speech
It is also worth briefly considering what, in the ECtHR’s case law, the legal basis for action against false information may be. In other words, what kinds of grievance can justify legal restrictions that respond to disinformation? As has already been discussed, any limitation of freedom of speech must be traceable to one of the grounds for limitation listed in Article 10(2) of the Convention. Those decisions of the ECtHR which found it permissible to limit freedom of expression due to the dissemination of false information accepted such restrictions in order to protect the rights of others. In terms of the legal response to disinformation, however, the ‘protection of the rights of others’ can be interpreted in two ways. When a restriction is imposed in order to protect the rights of a specific person, typically their reputation, this has long been considered a traditional intervention in line with the doctrine of freedom of speech. In such cases, the ECtHR considers in each situation how much the person affected by the opinion is obliged to tolerate in the interest of the free discussion of public affairs. On the other hand, the ‘rights of others’ can also be interpreted more abstractly, as the legitimate interests of certain groups or even of the entire society. While the protection of social groups, especially minorities in a sensitive situation, is an obvious consideration in cases related to hate speech, the ECtHR has so far considered the protection of the rights of specific individuals as the appropriate goal in decisions related to the dissemination of false statements.
Although the Court – considering the legitimate aim of the interference – has already referred to the more abstract ways in which disinformation may cause harm in two election-related cases, in both cases the individual candidates concerned were actually affected by the false statements. According to the summarizing statement of the Salov judgment, the Court agreed with the government that the interference at issue was intended to pursue the legitimate aim of providing the voters with true information in the course of the presidential campaign.Footnote 84 However, the submission of the government was based on the argument that the dissemination of false information about a presidential candidate could have a damaging effect on their reputation and effectively prevent them from conducting an effective electoral campaign.Footnote 85
The rights of the voters therefore only arose indirectly in the case. In Staniszewski, the ECtHR similarly indicated that the interference had served to protect the integrity of the electoral process and thus the rights of the voters, but in the first place the Court emphasized that the legitimate aim of the restriction was to protect the reputation of one of the candidates in the local elections.Footnote 86 The ECtHR thus considers it relevant to refer to the rights of the voters and the integrity of the election as a more abstract value to be protected in the case of disinformation published in an election campaign, but the restrictions on the freedom of speech that have been deemed legal until now have always been imposed to protect the rights of a specific person, even in the context of an election.
At the same time, in the field of media law the right to adequate information can justify intervention even in this more abstract form. The practice of the bodies of the CoE, including the Court, regards the audiovisual media as an actor with particularly strong power to influence others, and it has thus developed specific standards of intervention, which are now applied as a matter of course. In NIT v. Moldova, the broadcasting of misleading and untrue information was found to be a suitable legal basis even for revoking the right to broadcast, as the measure was intended to ensure the audience’s right to balanced and unbiased coverage of matters of public interest in news programmes.Footnote 87
In principle, based on the list in Article 10(2), the ‘prevention of disorder’ could also serve as a ground for limiting free speech in the field of disinformation. It may be no coincidence that the title of the CoE’s report on disinformation is ‘Information Disorder’.Footnote 88 It can be argued that, within the changed framework of the public sphere, disinformation, especially in relation to election campaigns, disrupts democratic conditions and causes hard-to-repair damage to the smooth functioning of public discourse and democratic elections. However, the case law of the ECtHR does not support ‘prevention of disorder’ becoming another legal basis for restricting freedom of speech in the fight against disinformation.
In the Court’s approach, given that the Convention serves to protect individual freedoms, the legal grounds for limiting rights must always be interpreted restrictively. The ECtHR has interpreted the meaning of ‘prevention of disorder’ together with the term ‘protection of public order’ used elsewhere in the Convention. Accordingly, while ‘public order’ appears to bear a wider meaning, referring to the body of political, economic and moral principles essential to the maintenance of the social structure, the notion of ‘disorder’ ‘conveys a narrower idea, understood in essence in cases of this type as riots or other forms of public disturbance’.Footnote 89 This means that the ‘prevention of disorder’ can serve as a legitimate aim for restricting free speech in cases of disinformation only where the communication in question is capable of triggering riots or other forms of violent public disturbance. This cannot be ruled out, but it is certain that disinformation – even if it clearly disturbs the debate on public affairs – has such an effect only in very special circumstances.
Interventions for the sake of public health have recently come into focus in the wake of the COVID-19 pandemic. The Convention mentions public health as one of the grounds for restriction, and although such a case has yet to be brought before the ECtHR, it may well be relevant in the field of disinformation. It is clear from Hertel v. Switzerland that when a public health issue – in that specific case, the effect of microwave ovens on health – is the subject of social debate, the state has only a very narrow margin to limit the viewpoints expressed.Footnote 90 However, this room for manoeuvre widens according to how solid the (European) consensus is on a given health issue. In the event of such a consensus, fundamental considerations of public health could prevail even over fundamental rights such as freedom of expression.Footnote 91
7.3.7 Article 17: Abuses of Freedom of Expression
Due to their significance, I will separately examine the types of harm in relation to which the Court has invoked Article 17 of the Convention, the so-called abuse clause. Article 17 is a provision on the prohibition of the abuse of rights. It reads as follows: ‘Nothing in this Convention may be interpreted as implying for any State, group or person any right to engage in any activity or perform any act aimed at the destruction of any of the rights and freedoms set forth herein or at their limitation to a greater extent than is provided for in the Convention.’ The methodological significance of Article 17 lies in the fact that, when applying it, the ECtHR treats the expression in question as speech falling outside the scope of freedom of speech, and does not even conduct a strict examination of the legality of the restriction.Footnote 92
Article 17 is the ultimate tool for the protection of human rights and democracy, the application of which is reserved by the ECtHR for cases that deny democratic values and principles.Footnote 93 It is only applicable on an exceptional basis and in extreme situations, and in cases concerning freedom of expression it should only be resorted to in situations where it is immediately clear that the impugned statements involve this right for ends clearly contrary to the values of the Convention.Footnote 94 Given that, according to general opinion, the increasing prevalence of disinformation ultimately undermines democracy, and in its radical sense disinformation is per se contrary to all the values that the Convention promotes and protects, the possibility of applying Article 17 can also be raised.Footnote 95 The validity of this suggestion seems to be strengthened by the fact that the ECtHR saw the possibility of its application in several cases in connection with the denial of true facts or untrue statements of facts. However, it is clear from the following relevant cases that Article 17 is actually applied in practice as a means of action against special forms of hateful and discriminatory speech.
The book at issue in Garaudy v. FranceFootnote 96 analysed in detail a number of historical events relating to the Second World War, such as the persecution of the Jews, the Holocaust and the Nuremberg Trials, questioning their reality, extent and seriousness. The Court emphasized that this opinion was far from political or ideological criticism, or a call for ‘a public and academic debate’ on these historical events.
There can be no doubt that denying the reality of clearly established historical facts, such as the Holocaust, does not constitute historical research akin to a quest for the truth. The aim and the result of that approach are completely different, the real purpose being to rehabilitate the National-Socialist regime and, as a consequence, accuse the victims themselves of falsifying history. Denying crimes against humanity is therefore one of the most serious forms of racial defamation of Jews and of incitement to hatred of them.
The Court therefore found that the denial or rewriting of historical facts of this type undermines basic values and is incompatible with democracy and human rights.
In W.P. and Others v. Poland,Footnote 97 concerning the prohibition of an association, the subject of the case was not the denial of historical facts but the assertion of untrue historical claims. The association of ‘Polish Victims of Bolshevism and Zionism’ proclaimed, among other things, that the persecution of Poles was a crime perpetrated by the Jewish minority, and one of the main points of its programme was to eliminate the inequality oppressing the Polish majority in favour of the Jewish minority. The Court observed that the general purpose of Article 17 is to prevent totalitarian groups from exploiting in their own interests the principles enunciated by the Convention, and the anti-Semitic activities of the association clearly met this standard. The ECtHR likewise disposed of a case by applying Article 17 where the applicant did not deny the Holocaust as such, but ‘denied an equally significant and established circumstance of the Holocaust considering it false and historically unsustainable that Hitler and the NSDAP had planned, initiated and organized the mass killing of Jews’. The Court found again that these views ran counter to the text and the spirit of the Convention.Footnote 98
Regarding historical facts beyond the Nazi atrocities, a decision of great importance was handed down in Perincek v. Switzerland. The essential question in this case was whether the criminal conviction for the opinion that ‘the allegations of the “Armenian genocide” are an international lie’ was lawful. The ECtHR found that the decisive point under Article 17 – whether the applicant’s statements sought to stir up hatred or violence, and whether by making them he attempted to rely on the Convention to engage in an activity or to perform acts aimed at the destruction of the rights and freedoms laid down in it – was not immediately clear, and the question of the applicability of Article 17 therefore had to be joined to the consideration of the merits. First and foremost, the Court admitted that its assessment of the necessity of interference with statements relating to historical events has been quite case-specific.
Taking into account a plethora of factors – that the applicant’s statements related to a matter of public interest and did not amount to a call for hatred or intolerance; that the context in which they were made was not marked by heightened tensions or special historical overtones in Switzerland; that the statements could not be regarded as affecting the dignity of the members of the Armenian community to the point of requiring a criminal-law response in Switzerland; that there was no international-law obligation for Switzerland to criminalize such statements; that the Swiss courts appeared to have censured the applicant for voicing an opinion that diverged from the established opinion in Switzerland; and that the interference took the serious form of a criminal conviction – the Court concluded that the criminal conviction violated the Convention.Footnote 99 However, as the remarkable dissenting opinion of four judges shows, despite all the efforts of the majority’s reasoning and the many factors put forward for the sake of distinction, one of the most controversial points of the decision remained the application of Article 17.Footnote 100 The judges’ debate clearly shows that the standard applied under the abuse clause to the denial of historical facts has become unpredictable.
Although they are not directly related to the veracity of factual statements, it is instructive to look at two additional cases on the application of Article 17. Norwood v. United KingdomFootnote 101 concerned a poster depicting a photograph of the Twin Towers in flames, with the words ‘Islam out of Britain – Protect the British People’ and a symbol of a crescent and star in a prohibition sign. The Court noted that the words and images on the poster amounted to a public attack on all Muslims in the United Kingdom. Such a general, vehement attack against a religious group, linking the group as a whole with a grave act of terrorism, was found to be incompatible with the values proclaimed and guaranteed by the Convention, and Article 17 was therefore applied.
In Lilliendahl v. Iceland,Footnote 102 however, the Court did not find the abuse clause applicable in connection with statements describing homosexual people as disgusting sexual deviants. Seeking to clarify its case law, the ECtHR explained that hate speech falls into two categories. The first comprises the gravest forms of hate speech, which the Court regards as falling under Article 17. The second includes less grave forms of hate speech which the Court does not consider to fall entirely outside the scope of Article 10, but which it is permissible to restrict.
Into this second category, the Court has not only put speech which explicitly calls for violence or other criminal acts, but has held that attacks on persons committed by insulting, holding up to ridicule or slandering specific groups of the population can be sufficient for allowing the authorities to favour combating prejudicial speech within the context of permitted restrictions on freedom of expression.Footnote 103
In relation to the discussion about the fight against disinformation, the following conclusions can be drawn from the judicial application of Article 17. The abuse clause has been applied in practice as a tool for handling the gravest forms of hate speech. Cases of disinformation (the denial of true facts or untrue statements of fact) may fall within the scope of Article 17 if the Court can establish that they are manifestations of racial discrimination and incitement to hatred. Moreover, the conditions for the application of Article 17 are not objective and predictable, but rest on a complex weighing of the special circumstances of socially sensitive cases. In such circumstances, it is in principle undesirable for the Court to decide without applying the strict methodology governing restrictions on freedom of speech, that is, to expand the range of cases handled under Article 17. Finally, it would be particularly undesirable to circumvent these methods of examination in relation to disinformation, restrictions on which, as we have seen, the Court always assesses against a complex set of criteria. The threat posed by disinformation to the sustainable functioning of elections and democracy in general by no means constitutes a sufficient reason for the use of the abuse clause.
7.3.8 Restrictive Legal Means against Disinformation
On the basis of Sections 7.2 and 7.3 of this chapter, the main conclusions can now be drawn, grounded in the doctrine shared by the CoE, as to the extent to which the challenge posed by disinformation can be answered by legal means that limit freedom of speech. The key finding is that there is no justification for general action against speech that can be classified as disinformation, and that restrictive intervention can take place only in rare, exceptional cases. The basic principles and aspects of the democratic participatory concept of free speech bind the legal interpretation of the conceptual elements of disinformation (untrue statement of fact, intent to deceive and causing harm) in such a way that restrictive mechanisms can be justified only within a narrow range of misleading (and otherwise socially problematic) cases. In the following section, I will first summarize the most important arguments against restrictive interventions, and then identify the scope within which restrictive measures can still be justified.
7.3.8.1 Arguments against Restrictive Legal Means
Among the theoretical obstacles that stand in the way of imposing legal restrictions on disinformation, the first is the basic approach of the participatory model of free speech, according to which the active involvement of as many people as possible in the discussion of public affairs is not a source of risk but a value to be supported. In the logic of democratic public opinion, the answer to the undoubtedly existing risk that anyone can shape public opinion is not the limitation of participation but the corrective power of robust debate. This is more than an abstract doctrine: where the corrective factors of a society are pathologically weak, legal restrictions are in fact unsuitable tools for improving the situation. The findings of the CoE presented in Section 7.4 are telling. Restrictive legal instruments are hardly suited to remedying the lack of sources of information worthy of public trust, the shrinking ethos of quality journalism, and the increasingly irrational tribalism prevailing in public discussions. They can, however, further increase distrust in the institutional system at any time.
Second, under the auspices of democratic public opinion, the European doctrine (also) regards the speaker and their audience first and foremost as autonomous citizens who interpret information and context in their complexity, and who then jointly bear the result of the exchange of opinions. Democratic public opinion emerges from dialogue among the members of a community that organizes itself democratically about how to govern itself. All this supports the rejection of any intervention that would steer the development of public opinion in the ‘right’ direction and protect the audience in a paternalistic way. In the field of disinformation (and absent specific additional circumstances), the many restrictions applied to commercial communication cannot be taken as an example to follow,Footnote 104 because their anthropological approach sees the consumer as vulnerable to the manufacturer and distributor, whose position must be protected by the state, above all for the sake of their health and safety.
Third, for the doctrine of freedom of speech, statements that can be considered troubling in the informational sense are in many cases not untrue statements of fact, but wholly or partly political opinions through which participants in the public debate interpret reality. Conspiracy theories, misinterpretations and distortions are traditional elements of public discourse, which must also be reckoned with in the altered circumstances of today’s public sphere.
Fourth, in line with this, in evaluating the motivation of the speaker, harmful intent can only be interpreted narrowly. Influencing the plural political public is often accompanied by one-sided communications, thus carrying the possibility of misrepresentation, even without the speaker being guided by an intention to harm. In a public life based on political competition, the discrediting of an opponent’s abilities or policies is part of participation in the public debate, even when it rests on the arbitrary highlighting or exaggeration of certain factors, or on subjective and baseless assumptions.
Fifth, in order to promote participation and avoid excessive interventions, the doctrine of freedom of speech also limits the legal consideration of grievances. In the case of public figures, jurisprudence often decides in favour of freedom of speech even when specific personal rights are involved, while the abstract interest of informing society or the electorate can justify restrictions even more rarely.
7.3.8.2 Possible Legal Restrictions
While an allegation of untruthfulness alone is not a sufficient reason for intervention, in the case of expressly fabricated news there is room for limitation under the Strasbourg jurisprudence. This requires both that the communicator’s intention be clearly and directly aimed at publishing untrue and harmful information, and that this information have independent significance and weight (that is, the speech cannot be evaluated as a political expression of opinion as a whole). When these conditions are met, the scope of the restriction is determined by the nature of the harm: the rights of a specific person are protected more broadly, while more abstract social interests (for example, public health) can be protected at the cost of free speech only in special cases, when they are especially endangered. In particular, there is room for intervention in the case of information sources (for example, social media profiles) that are created and operated in order to systematically fabricate and spread falsehoods.
In the field of shaping public opinion, even within the new framework of the public sphere, stricter rules apply to journalists, whose privileged role also carries responsibilities: they are liable for violating journalistic ethics, especially for untruths reported without a good-faith attempt to check the veracity of the information. This obligation may also apply to other actors (for example, NGOs) that participate in the discourse with a ‘public watchdog’ function. In addition, media law can impose the obligation to provide factual and objective information on the more influential audiovisual media as a specific obligation limiting their editorial freedom.
Increased responsibility for shaping the political aspects of the public sphere seems to appear in the practice of the ECtHR in cases where the speaker is an active politician. For the time being, however, the case law has established a stricter standard only in the field of hate speech, and there are strong arguments against developing a similar interpretation in the field of disinformation. The arguments for politicians having to accept even the harshest criticism and personal attacks are also valid in the other direction: in their case, the correction of inaccuracies can best be entrusted to the public exchange of ideas. The attention paid to politicians’ statements comes from many directions, and their misleading statements are attacked by many other speakers in public. Moreover, in the conditions of a democratic, pluralistic, multi-party public sphere, the audience may well be aware of the often exaggerated and even manipulative nature of political communication. On the basis of the state’s obligation to support the proper structure of democratic public opinion, special requirements supporting the dissemination of quality information can be imposed on the operation of social platforms that do not qualify as media. However, since these requirements differ in nature from the classic restrictive state interventions discussed so far, they will be addressed separately in Section 7.4.1.
Overall, the documents of the CoE and the practice of the ECtHR suggest that measures restricting communication can play a more significant role in the fight against disinformation than at present only if the basic principles of freedom of speech are set aside. The prospect of overruling the aspects that have defined the doctrine of freedom of speech to date can certainly be raised as an open question. It can be argued that these aspects were tailored to circumstances in which there were fewer speakers, a slower flow of communication, and more rational forms of public discourse, and that in the new, altered circumstances new standards must be established and used as tools for effective interventions. However, these constitutional aspects of the freedom of speech doctrine were in fact tailored not to particular circumstances but to a general principle of the democratic formation of public opinion. It is an undoubted fact that social relations and the conditions for the democratic exchange of ideas are significantly different now than they were decades ago. Even so, the aforementioned starting points stem from the essence of democracy; hence, as long as there is a shared belief that we want to manage our public affairs democratically, our social practices must be adapted to them, and not the other way around. The limits this places on the use of restrictive legal instruments do not, however, mean that we are helpless. What is more, the deeper reasons for the growth of disinformation can be found in social phenomena against which the state can successfully act primarily not with restrictive measures but through other means. The next section will describe what other tools the regulator has at its disposal in the fight against disinformation.
7.4 Policy Means against Disinformation
It is clear from the materials of the CoE that the institution is aware that solving the worsening problem of disinformation cannot be achieved by means of restricting public discourse. It is difficult to imagine such wide-scale interventions without violating the democratic principles of freedom of speech, and the problem in any case has deeper roots than could be successfully countered by enforcing new prohibitions. The common understanding within the CoE is that ‘our problem isn’t “fake news”. Our problem is trust.’Footnote 105 A large part of the disinformation problem stems from the fact that trust in mainstream media has been falling for decades, as has trust in other public institutions.Footnote 106 In other words, there is a serious problem with sources, in the sense that people trust neither the traditional nor the new sources of the widened information ecosystem.Footnote 107 As the Committee of Ministers has summarized it, in the present environment of intensified political partisanship and an increasingly polarized information ecosystem, ‘individuals’ trust in media, as well as trust in politics, institutions and expertise, has in many States declined to a worryingly low level’.Footnote 108
Several documents produced by the CoE outline a multifaceted action plan for the actors involved, adapted to the complexity of the problem. It is clear that both the old and new players in the digital ecosystem have a lot to do to improve the quality of public discourse. In accordance with the focus of this chapter, I will now consider the main proposals that identify the areas in which the state can take action. Three such areas stand out in terms of their importance: (1) the state’s participation in the regulation of newer actors; (2) the promotion of quality journalism; and (3) the development of media and information literacy skills. The latter two areas are considered soft actions, relegated to the background in traditional regulatory logic, but they actually represent the best hope for progress in the field of disinformation. The proposals formulated by the Council fit well into the European approach, according to which the state also has a positive obligation to create a regulatory and social environment that best serves the enforcement of free speech. The state has a constitutional responsibility to play its part in forming an appropriate framework for democratic public opinion.
7.4.1 Regulation of Intermediaries
7.4.1.1 Content Moderation
Self-regulation is not a subject of this chapter, for several reasons. First, the role of the state is not a major focus in self-regulation, and second, the importance and complexity of the topic requires a separate analysis.Footnote 109 However, self-regulation must be considered insofar as the materials of the CoE emphasize constitutional reasons why the state cannot leave self-regulatory systems entirely to themselves. At certain points, the European doctrine of the horizontal effect of freedom of speech limits the room for manoeuvre of the major actors of digital communication, and compliance with these boundaries must ultimately be guaranteed by the state.
This dichotomy can be observed in every mention of self-regulation in the CoE’s documents. On the one hand, the Committee of Ministers emphasizes that states should encourage the establishment and maintenance of appropriate self-regulatory frameworks or the development of co-regulatory mechanisms. On the other hand, these mechanisms must take due account of the role of intermediaries in providing services of public value and in facilitating public discourse and democratic debate, as protected by Article 10 of the Convention. More specifically, and in connection with one of the most pressing issues in this regard, in their content moderation efforts ‘Internet intermediaries should respect the rights of users to receive, produce and impart information, opinions and ideas’.Footnote 110
When restricting access to content in line with their own community standards and policies, intermediaries should do so in a transparent and non-discriminatory manner. In addition, providers must pay attention to users’ right to freedom of speech. This is not to say that the system of requirements placed upon states should be transferred wholesale to social media platforms. First, the bearer of obligations with regard to fundamental rights remains primarily the state, so it follows that it is states which are most constrained by precepts arising from freedom of expression. Second, the enforcement of constitutional rights against private actors always takes into account the specific, legitimate interests of the obliged party. In spite of this, the emergence of a fundamental rights aspect hinges precisely on the fact that these interests cannot be invoked without restriction.
Although platform providers may impose special restrictions based on the objectives of the social network’s operator, they must respect the essential aspects of the fundamental rights thus affected. One such criterion, which follows from the principle of freedom of expression, is that everyone should be free, above all, to express and publish their views in the debate on public affairs. The more current and legitimate the debate on a social issue, the narrower the opportunity for the owners of platforms to intervene in the expression of opinion, and the less the service provider can deviate from the key constitutional standards.
Based on all of this, there are strong arguments against platforms restricting individual communications that are considered worrisome in connection with disinformation but are not illegal. Where the criteria for the free discussion of public affairs protect speakers against the state, there is a chance that they can also be invoked against the major social media service providers. Platforms can refer to their own legitimate interests under the doctrine of the horizontal effect of free speech, but they are much more limited with regard to current social debates. Moreover, considering how difficult it is to judge disinformation, it does not seem reasonable for private companies to be granted the right to decide in place of state authorities, especially courts. In the area of self-regulation, the situation in Europe is therefore the opposite of the usual one: in constitutional terms there is a narrower scope for intervention. The hands of service providers are tied by the requirements of freedom of speech, and the courts must take care to develop a corresponding practice.
7.4.1.2 The Regulation of Algorithmic Navigation
The nature of the algorithmic recommendation systems used by platforms seems particularly important for state policymaking. The fundamental controlling factor of communication on social media is algorithmic navigation: an algorithm decides which content to bring to users’ attention, and the content it selects has an incomparably greater chance of widespread dissemination. It is certainly worrisome that, in the current state of affairs, we can only catch a glimpse of the workings of this influence on the social dialogue through data selectively released by the service providers or the subsequent (self-)critical testimony of their former employees. The information that has been made public so far, which mostly points to the deliberate neglect by social media firms of the harmful effects of their algorithms, provides particularly strong arguments in favour of ensuring transparency.Footnote 111
At the same time, this issue concerns more than just increasing transparency. Navigation can play an important role in alleviating some of the problems of the public sphere, which can justify the formulation of meaningful expectations. Either we also regulate algorithms, or they alone will regulate us.Footnote 112 It would be a valuable contribution to the further development of public debate, for example, if the platforms used their algorithms to shape the structure of the discourse in their domain in such a way that masses of users encounter more diverse content, instead of further deepening the gap between echo chambers. In the fight against disinformation, making reliable news sources more accessible could bring substantial progress. Just as they have developed algorithms for their various business goals, the platforms could develop a methodology of navigation that tries to respond to these problems. Ultimately, the state can also oblige them to do so in the context of regulating the structure of the democratic public sphere. However, such an obligation can of course only be meaningful and workable if there are credible news sources deserving of public trust, so that their positioning in the digital environment can contribute – across political tribes – to clearing up the information chaos. I am convinced that we have just arrived at key areas of state involvement.
7.4.2 Promoting Quality Journalism
The importance of quality journalism is clearly demonstrated by the fact that the CoE dedicated a separate document to the topic among its recommendations discussing the problems of the digital media environment. The Committee of Ministers identifies the shift towards an increasingly digital, mobile and social media environment as posing fundamental challenges that have profoundly changed the dynamics of the production, dissemination and consumption of news and other media content. As a result, quality journalism has to compete for audience attention with other types of content that are not subject to the same legal, regulatory or ethical frameworks. A number of media outlets that have traditionally been committed to producing reliable information now find themselves unable to counteract these processes due to a declining reader or viewer base. They are struggling to adapt their operations to a digital environment and to stay connected to the communities they serve.
An important question, of course, is what we consider to be quality journalism. According to the definition provided in the recommendation, quality journalism is characterized by a firm commitment to the search for truth, correctness, credibility and independence, as well as by the enforcement of social responsibility in the public interest. It is important to note that, in terms of the general media situation, the task is twofold: it is necessary to work not only on positioning quality journalism in a new environment but even on recreating it. One explanation for the weakening of trust in the media is the participation of a critical mass of activist journalists in the intensified partisanship of the political sphere.
The Council of Europe advocates a significant role for the state in two areas: financing and education. States are encouraged to ensure the financial sustainability of quality journalism, which is one of the most formidable challenges facing them. ‘Traditional, advertising-based media business models have been disrupted, while the transformation of major online platforms, in many respects, into publishing organizations has separated news production from news dissemination.’ State actions, implemented in a viewpoint-neutral manner, should include:
granting tax relief for media organizations;
making public funds available for community and local media;
removing any regulatory obstacles to the operation of not-for-profit journalism with new forms of donation;
providing funds, grants or other targeted assistance to investigative journalism;
financing self-regulatory press councils and mechanisms.
As for education, the state must help journalists to regularly update their skills and knowledge, specifically in relation to their duties and responsibilities in the digital environment, through fellowship programmes and financial support. In addition, specific education curricula and professional training courses should be made available in the fields of science, health, environment, engineering, law and other specialized subjects of public interest, which would ideally motivate journalism students to acquire the practical skills and theoretical background needed to cover such fields.
Due to its importance, the issue of maintaining public service media should be highlighted separately. The CoE believes that public service media are a key stabilizing actor in the media sector if they function as an authentic and reliable source of information independent from political and commercial interests. Public service media could and should play a special role in setting quality standards.
States should ensure stable and sufficient funding for public service media in order to guarantee their editorial and institutional independence, their capacity to innovate, high standards of professional integrity, and to enable them to properly fulfil their remit and deliver quality journalism.
The recommendation emphasizes that public service media should be at the forefront of the fight against disinformation and should mobilize the actors of quality journalism to develop and share good practices of fact-checking and credible reporting.
When addressing the issue of the proper financing of quality journalism, the question of the transparency of advertising mechanisms should also be discussed. In light of the dominance of the major online platforms in the online advertising market, measures should be undertaken to improve the transparency of their advertising systems and practices, underpinned by legal obligations and independent oversight mechanisms. States should intervene ‘to avoid the diversion of advertising revenues from accurate and reliable news sources to sources of disinformation and blatantly false content, and instead seek to reward reliable sources of news.’
7.4.3 Media and Information Literacy
Another crucial field for policy-based state actions is the development of the media and information literacy (MIL) skills of society. A well-informed and media-literate society (including journalists, the media, online platforms, non-governmental organizations and individuals) is an essential part of the defence against information manipulation in democratic societies. States have a responsibility for creating and promoting MIL initiatives to help citizens recognize and develop resilience to disinformation. The aim is ‘to encourage a media- and information-literate public that is empowered to make informed and autonomous decisions about its media use, that is able and willing to critically engage with the media, that appreciates quality journalism and that trusts credible news sources’.Footnote 113 The skills that are required to understand information disorders should be developed as part of more general MIL skills for accessing and managing the digital space, including the ability to deal with an information and communications environment that provides access to degrading content of a sexual or violent nature.Footnote 114
The Council of Europe lists the following actions that states should take in terms of MIL skills:
provide maximum support for the development of MIL initiatives that illustrate the benefits of quality journalism to various audiences and help them engage with such content in new ways and on new platforms;
support MIL programmes and activities, which should help users to better understand how online infrastructure and economy are operated and regulated and how technology can influence choice in relation to media;
define the promotion of MIL as an explicit common aim of media, information and education policies;
invest adequate resources in MIL and in developing strategies for collaboration, communication and education;
integrate MIL measures in the education of all age groups, as an essential part of school curricula from primary school onwards;
fund independent MIL initiatives by media organizations, public service media, community media, independent regulatory bodies, civil society actors and other relevant actors;
promote specific media literacy programmes for newsrooms, in particular to foster newsroom collaboration, community building and participatory audience engagement;
assist in the development of MIL initiatives that help individuals become more aware of how online advertising works.
7.5 Conclusion
Having searched for European answers to the problem of disinformation, this chapter examined the practice of three institutions of the CoE which have had a great impact on the development of Member States’ legislative and judicial processes. Overall, the practice of the ECtHR and the documents of the Committee of Ministers and of the Venice Commission suggest that solving the worsening problem of disinformation cannot be achieved by means of restricting public discourse. It is difficult to imagine such wide-scale interventions without violating the basic principles of the democratic formation of public opinion, and the problem in any case has deeper roots than could be successfully countered by enforcing new prohibitions.
However, the limitations on the use of restrictive legal instruments do not mean that there is no way to address the challenges of disinformation. Indeed, the deeper reasons for the growth of disinformation can be found in social phenomena against which the state can successfully act primarily not by means of restrictive measures, but through other instruments. The common understanding within the CoE is that the problem of disinformation largely stems from a situation where – in the present environment of intensified political partisanship and an increasingly polarized information ecosystem – people trust neither traditional media nor the new online sources of information. According to the European approach, states also have a positive obligation to create a regulatory and social environment that best serves the enforcement of free speech. The state has a constitutional responsibility to play its part in forming an appropriate framework for democratic public opinion. The materials of the CoE identify important areas in which the state can take action. Through the promotion of quality journalism, the development of media and information literacy skills, and the regulation of newer actors in the public sphere, states can seek ways of fostering the creation and positioning of credible news sources that deserve public trust across the political spectrum. Make no mistake: if we are to see any positive development in this field, clearing up the information chaos needs to be a joint effort by the whole of society.