
Part III - Roles and Responsibilities of Private Actors

Published online by Cambridge University Press:  01 November 2021

Hans-W. Micklitz, European University Institute, Florence
Oreste Pollicino, Bocconi University
Amnon Reichman, University of California, Berkeley
Andrea Simoncini, University of Florence
Giovanni Sartor, European University Institute, Florence
Giovanni De Gregorio, University of Oxford

Publisher: Cambridge University Press
Print publication year: 2021
Creative Commons
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 (https://creativecommons.org/licenses/by-nc-nd/4.0/).

13 Responsibilities of Companies in the Algorithmic Society

Hans-W. Micklitz and Aurélie Anne Villanueva
13.1 Context – New Wine in Old Bottles?

The major focus of the book is on the constitutional challenges of the algorithmic society. In the public/private divide type of thinking, such an approach puts the constitution, and thereby the state, into the limelight. There is a dense debate on the changing role of the nation-state in the aftermath of what is called globalization and on how the transformation of the state is affecting private law and thereby private parties.Footnote 1 This implies the question of whether the public/private divide can still serve as a useful tool to design responsibilities on both sides, public and private.Footnote 2 If we ask for a constitutional framing of business activities in a globalized world, there are two possible approaches: the first is the external or outer reach of national constitutions; the second the potential impact of a global constitution. Our approach is broader and narrower at the same time. It is broader in that we look not at the constitutional dimension alone but at the public/private law below the constitution and at the role of and impact on private responsibilities; it is narrower in that we will engage neither in the debate on the external/outer reach of nation-state constitutions nor in that on the existence of a ‘Global Constitution’ or an ‘International Economic Constitution’ based on the GATT/WTO and international human rights.Footnote 3 Such an exercise would require a discussion about global constitutionalization and global constitutionalism in and through the digital society and digital economy.Footnote 4

Therefore, this contribution does not look at private parties through the lenses of constitutions or constitutionalization processes but through the lenses of private parties, here companies. The emphasis is on the responsibilities of private companies, which does not mean that there is no responsibility of nation-states. Stressing private responsibilities below the surface of the constitution directs attention to the bulk of national, European and international rules that have been developed over the last decades and that, in one way or another, deal with the responsibility, or perhaps better the responsibilities, of private and public actors. Responsibility is a much broader term than legal civil liability, as it includes the moral dimension,Footnote 5 which may or may not leave room to give private responsibility a constitutional outlook or, more demanding still, a constitutional anchoring, be it in a nation-state constitution, the European or even the Global Constitution.Footnote 6 The culmination point of the constitutional debate is the question of whether human rights address states alone or also bind private parties directly.Footnote 7 Again, this is not our concern. The focus is on the level below the constitution, the ‘outer space’ where private parties and public – mainly administrative – authorities are co-operating in the search for solutions that strike a balance between the freedom of private companies to do business outside state borders and their responsibility as well as that of the nation-states.

The intention is to deliver a rough overview of where we stand politically, economically and legally when discussing possible legal solutions that design the responsibility of private companies in the globalized economy. This is done against the background of Baldwin’sFootnote 8 structuring of the world trade order along the lines of the decline of, first, transportation costs and, second, communication costs. The two stages can be associated with two very different forms of world trade. The decline of transportation costs enabled the establishment of the post–World War II order. Products and services could circulate freely without customs and non-tariff barriers to trade. The conditions under which the products were manufactured, however, were left to the nation-states. This allowed private companies to benefit from economies of scale and from differences in labour costs and, later, environmental costs. The decline of communication costs changed the international trade order dramatically. It enabled the rise of global value chains, often used as a synonym for global supply chains. Here product and process regulation are interlinked through contract.Footnote 9 It will have to be shown that the two waves display superficial similarities, economically and technologically, though there are differences which affect the law and which will have to be taken into account when it comes to the search for solutions.

13.2 The First Wave – Double Standards in Unsafe Products and Unsafe Industrial Plants

Timewise, we are in the 1960s and 1970s. International trade is blossoming. The major beneficiaries are Western democratic states and multinationals, as they were then called. Opening the gateway towards the responsibility of multinationals ‘beyond the nation-state’Footnote 10 takes the glamour away from the sparkling language of the algorithmic economy and society and discloses a well-known though rather odd problem which industrialized states had to face hand in hand with the rise of the welfare state, in whatever form, and the increase of protective legislation shielding consumers, workers and the environment against unsafe products.

13.2.1 Double Standards on the Export of Hazardous Products

The Western democratic states restricted the reach of the regulation of chemicals, pharmaceuticals, pesticides and dangerous technical goods to their territory, paving the way for their industries to export products to the rest of the world, although their use was prohibited or severely restricted in the home country. The phenomenon became known worldwide as the policy of ‘double standards’ and triggered political awareness around the globe, in the exporting and importing states, in international organizations and in what could ambitiously be called an emerging global society.Footnote 11 Communication costs, however, determined the search for political solutions. It has to be recalled that until the 1980s telephone costs were prohibitive, fax did not yet exist and the only way to engage in serious exchange was to meet physically. The decrease in transportation costs rendered international gatherings possible. The level of action to deal with ‘double standards’ was first and foremost political.

The subject-related international organizations – the WHO with regard to pharmaceuticals, UNEP and FAO with regard to chemicals, pesticides and waste, and the later abolished UN-CTC with regard to dangerous technical goods – invested in the elaboration of international standards on what is meant to be ‘hazardous’ and equally pushed for international solutions tying the export of double-standard products to the ‘informed’ consent of the recipient states. Within the international organizations, the United States dominated the discussions and negotiations. That is why each and every search for a solution was guided by the attempt to seek the support of the United States, whose president was no longer Jimmy Carter but Ronald Reagan. At the time, the European Union (EU) was near to non-existent in the international sphere, as it had not yet gained the competence to act on behalf of the Member States or jointly with the Member States. The Member States were speaking for themselves, built around two camps: the hard-core free-trade apologists and the softer group of states that were ready to join forces with voices from what is today called the Global South, seeking a balance between free trade and labour, consumer and environmental protection. Typically, the controversies ended in soft-law solutions, recommendations adopted by the international organizations if not unanimously then at a minimum with the United States abstaining.

There is a long way from the recommendations adopted in the mid-1980s to the Rotterdam Convention on the export of hazardous chemicals and pesticides, adopted in 1998, which entered into force in 2004.Footnote 12 On the bright side, there is definitely the simple fact that multilateralism was still regarded as the major and appropriate tool for what was recognized as a universal problem calling for universal solutions. However, there is also a dark side to be taken into consideration. The UN organizations channelled the political debate on double standards, which was originally much more ambitious. NGOs, environmental and consumer organizations, and civil society activists were putting political pressure on the exporting countries to abolish the policy of double standards. The highly conflictual question then was, and still is: ‘Is there a responsibility of the exporting state for the health and safety of the citizens of the recipient countries?’ Is there even a constitutional obligation of nation-states to exercise some sort of control over the activities of ‘their’ companies, which operate from their Western home base in the rest of the world? How far does the responsibility/obligation reach? If double standards are legitimate, are nation-states at least constitutionally bound to elaborate and to ensure respect for internationally recognized standards on the safety of products, on health and safety at work, and on environmental protection?

The adoption of the Rotterdam Convention suffocated the constitutional debate and shifted the focus towards its ratification. The juridification of a highly political conflict on double standards ends in de-politicization. The attention shifted from the public political fora to the legal fora. The Member States of the EU and the EU ratified the Convention through EU Regulation 304/2003, later 689/2008, today 649/2012.Footnote 13 The United States signed the Convention but never ratified it. In order to assess the potential impact of the Rotterdam Convention or, more narrowly, the role and function of the implementing EU Regulation on the European Member States, one has to dive deep into the activities of the European Chemicals Agency, where all the information from the Member States comes together.Footnote 14 When comparing the roaring public debate on double standards with the non-existent public interest in its bureaucratic handling, one may wonder to what extent ‘informed consent’ has improved the position of the citizens in the recipient states. The problem of double standards has not vanished at all.Footnote 15

13.2.2 Double Standards on Industrial Plants

Public attention seems to focus ever more strongly on catastrophes which shatter the global world order – from time to time, but with a certain regularity. The level of action is not necessarily political or administrative; it is judicial. The eyes of the victims, but also of NGOs, civil society organizations, and consumer and environmental organizations, were and are directed towards the role and function of courts. Dworkin published Law’s Empire, in which he relied on the ‘Hercules judge’, in 1986, exactly at a time when, even in the transnational arena, national courts and national judges turned into key actors and had to carry the hopes of all those who were fighting against double standards. This type of litigation can easily be associated with Baldwin’s distinction. The decline of transportation costs allowed Western-based multinationals to build subsidiaries around the world. Due to economies of scale, it was cheaper for the multinationals to have the products manufactured in the subsidiaries and shipped back to the Western world to be assembled. Typically, the subsidiaries were owned by the mother company, having its business seat in the United States or in Europe, either fully or at least up to a 51 per cent majority.

Again, the story to tell is not new, but it is paradigmatic for the 1980s. In 1984, a US-owned chemical plant in Bhopal, India, exploded. Thousands of people died. The victims argued that the plant did not even respect the rather low Indian standards of health and safety at work and Indian environmental standards. They sought compensation from Union Carbide Corporation, the mother company, and launched tort action claims in the United States.Footnote 16 The catastrophe mobilized NGOs and civil society organizations, along with class-action lawyers in the United States who combined the high expectations of the victims with their self-interest in bringing the case before US courts. The catastrophe laid bare the range of legal conflicts which arise in North-South civil litigation. Is there a responsibility of US companies operating outside US territory to respect the high standards of the export state, or international minimum standards if they exist? Or does it suffice to comply with the lower standards of the recipient state? Is the American mother company legally liable for the harm produced through its subsidiary to the Indian workers, to the Indian citizens affected in the community and to the Indian environment? Which is the competent jurisdiction, that of the US or that of India, and what is the applicable law: US tort and class action law with its high compensation schemes, or the tort law of the recipient state? The litigation fell into a period where socio-legal research played a key role in the United States and where legal scholars engaged heavily in the litigation, providing legal support to the victims. There was a heated debate, even between scholars sympathizing with the victims, over whether it would be better for India to instrumentalise Bhopal so as to develop the Indian judiciary through litigation in India, accepting the risk that Indian courts could give carte blanche to the American mother company, or whether the rights of the victims should be preserved through the much more effective and generous US law before US courts. One of the key figures was Marc Galanter from Wisconsin, who left the material collected over decades on the litigation in the United States and in India, background information on the Indian judiciary, and material on the role and function of American authorities to the Wisconsin public library.Footnote 17 It remains to be added that in 1986 the US district court declined the jurisdiction of the American courts on grounds of forum non conveniens and that the victims, who had to refile their case before Indian courts, have never been adequately compensated – until today. There are variations on the Bhopal type of litigation; the last one so far to have gained public prominence is Kiobel.Footnote 18

The political and legal debate on double standards which dominated the public and legal fora in the 1980s differs in two ways from the one we have today on the responsibility of private parties in the digital economy and society. First and foremost, the primary addressees of the call for action were the Western democratic states as well as international organizations. They were expected to find appropriate solutions for what could not be solved otherwise. There are few examples of existing case law on double standards. Bhopal, though mirroring the problem of double standards, is different due to the dimension of the catastrophe and to the sheer number of victims, who were identifiable. It is still noteworthy, though, that the international community left the search for solutions in the hands of the American and, respectively, the Indian judiciary, and that there was no serious political attempt, by either of the two states or by the international community, to seek extra-judicial compensation schemes. The American court delegated the problem of double standards back to the Indian state and the Indian society alone. Second, in the 1980s, human rights were not yet, or at least to a much lesser extent, invoked in the search for political as well as for judicial solutions. There was less emphasis on the ‘rights’ rhetoric, on consumer rights as human rights or the right to safety as a human right.Footnote 19 Health, safety and the environment were treated as policy objectives that had to be implemented by the states, either nationally or internationally. The 1980s still breathe a different spirit: the belief in and the hope for an internationally agreeable legal framework that could provide a sound compromise between export and import states or, put differently, between the free-trade ideology and the need for some sort of internationally agreeable minimum standards of protection.

13.3 The Second Wave – GAFAs and Global Value Chains (GVCs)

When it comes to private responsibilities in the digital economy and society, attention is directed to the GAFAs, to what is called the platform economy and to their role as gatekeepers to the market. Here competition law ties in. National competition authorities have taken action against the GAFAs under national and European competition law, mainly with reference to the abuse of a dominant position.Footnote 20 The EU, on the other hand, has adopted Regulation 2019/1150Footnote 21 on platform-to-business relations in order to ‘create a fair, transparent and predictable business environment for smaller businesses and traders’, which entered into force on 20 July 2020. The von der Leyen Commission has announced two additional activities: a sector-specific proposal meant to counter potential anti-competitive effects, by December 2020, and a Digital Services Act which will bring amendments to the e-Commerce Directive 2000/31/EC, probably also with regard to the rights of customers. While platforms hold a key position in the digital economy and society, they form, in Baldwin’s scenario, no more than an integral part of the transformation of the economic order towards GVCs. Platforms help reduce communication costs, and they are opening up markets for small- and medium-sized companies in the Global South which had no opportunity to gain access to the market before the emergence of platforms.

The current chapter is not the ideal place to do justice to the various roles and functions of platforms or GVCs. There is not even an agreed-upon definition of platforms or GVCs. What matters in our context, however, is to understand GVCs as networks interwoven through a dense set of contractual relations, which cannot be reduced to a lead company that organizes the chain upstream and downstream and holds all the power in its hands. Not only public attention but also political attention is very much concentrated on the GAFAs and on multinationals, sometimes even identified and personalized. Steve Jobs served as the incarnation of Apple, and Mark Zuckerberg is a symbolic and even a public figure. The responsibility of private actors is referred to under various denominations: sociétés, corporations and multinationals. Digitization enabled the development of the platform economy. Communication costs were reduced to close to zero. Without digitalization and without the platforms, the great transformation of the global economy, as Baldwin calls it, would not have been possible. The result is GVCs understood as complex networks in which SMEs may equally be able to exercise power; moreover, the focus on the chain sets aside the external effects of contractualization on third parties.Footnote 22 That is why the personalization of the GAFAs is as problematic as the desperate search for a lead company which can be held responsible upstream and downstream.Footnote 23

The overview of the more recent attempts at the international, national and EU levels lays the ground for discussion. The idea of holding multinationals responsible for their actions in third countries, especially down the GVCs, has been vividly debated in recent years. Discussions have evolved to cover not only the protection of human rights but also environmental law, labour law and good governance in general. Developments in the field and the search for accountability have led to political action at the international level, to legislative action at the national and European levels and to litigation before national courts. Most of the initiatives fall short of an urgently needed holistic perspective, one which takes the various legal fields into account, takes the network effects seriously and provides for an integrated regulation of due diligence in corporate law, of commercial practices, of standard terms and of contractual and tortious liability, let alone the implications with regard to labour law, consumer law and environmental law within GVCs.Footnote 24

13.3.1 International Approaches on GVCs

In June 2011 the United Nations Human Rights Council unanimously adopted the Guiding Principles on Business and Human Rights (UNGPs). This was a major step towards the protection of human rights and the evolution of the concept of Corporate Social Responsibility. The adoption of the UNGPs was the result of thirteen years of negotiations. The year 2008 had marked an earlier step in the work of the Human Rights Council, with the adoption of the framework ‘Protect, Respect and Remedy: A Framework for Business and Human Rights’.Footnote 25 The framework laid down three fundamental pillars: the duty of the state to protect against human rights violations by third parties, including companies; the responsibility of companies to respect human rights; and better access by victims to effective remedies, both judicial and non-judicial. The Guiding Principles, which are seen as the implementation of the Protect, Respect and Remedy Framework, further detail how the three pillars are to be developed. The Guiding Principles are based on the recognition of

[the] State’s existing obligations to respect, protect and fulfil human rights and fundamental freedoms; the role of business enterprises as specialized organs of society performing specialized functions, required to comply with all applicable laws and to respect human rights; the need for rights and obligations to be matched to appropriate and effective remedies when breached.Footnote 26

The Guiding Principles not only cover state behaviours but introduce a corporate responsibility to respect human rights as well as access to remedies for those affected by corporate behaviour or activities. Despite its non-binding nature, the UN initiative proves the intention to engage corporations in preventing negative impacts of their activities on human rights and in making good the damage they would nevertheless cause.

Here is not the place to give a detailed account of the initiatives taken at the international level, but it is relevant to stop at the case of the OECD. The OECD worked closely with the UN Human Rights Council in elaborating the OECD Guidelines for Multinational Enterprises.Footnote 27 The guidelines notably introduced an international grievance mechanism. The governments that adhere to the guidelines are required to establish a National Contact Point (NCP), which has the task of promoting the OECD guidelines and handling complaints against companies that have allegedly failed to adhere to the Guidelines’ standards. The NCP usually acts as a mediator or conciliator in case of disputes and helps the parties reach an agreement.Footnote 28

13.3.2 National Approaches to Regulate GVCs

Not least through the international impact and the changing global environment, national legislators are becoming more willing to address the issue of the responsibility of corporations for their actions abroad from a GVC perspective. They focus explicitly or implicitly on a lead company which has to be held responsible. None have taken the network effects of GVCs seriously. In 2010, California passed the Transparency in Supply Chains Act;Footnote 29 in the same year, the United Kingdom adopted the UK Bribery Act, which creates a duty for undertakings carrying out an economic activity in Britain to verify that there is no corruption in the supply chain.Footnote 30 The Bribery Act was then complemented by the UK Modern Slavery Act 2015, which focuses on human trafficking and exploitation in GVCs.Footnote 31 In the same line, the Netherlands adopted a law on the duty of care in relation to child labour, covering international production chains.Footnote 32 Complemented by EU instruments, such legislation is useful and constitutes a step forward, particularly at the political and legislative levels. Nevertheless, the focus on a sector, a product or certain rights does not enable this body of initiatives to be mutually reinforcing. There is a crucial need for a holistic, network-related approach to the regulation of GVCs.

Legislation on the responsibility of multinationals for human rights, environmental or other harms is being designed in different countries. Germany and Finland have announced that they are in the process of drafting due diligence legislation.Footnote 33 Switzerland had been working on a proposal, led by NGOs and left parties. The initiative was put to a popular vote (votation) in the last days of November 2020. A total of 47 per cent of the population participated, of which 50.73 per cent voted ‘Yes’.Footnote 34 The project was, however, rejected at the level of the cantons. Therefore this initiative will not go forward. At the time of writing, it seems that a lighter initiative will be discussed instead – one where responsibility is not imposed along the supply chain but on Swiss companies in third countries. The vote is nevertheless remarkable in terms of the willingness to carry out such a project, of participation and of result. The result of the vote of the cantons can be partly explained by the lobbying strategies multinationals have conducted from the beginning of the initiative.

The French duty of vigilance law was adopted in 2017 and introduced into the Code of Commerce among the provisions on public limited companies, in the sub-part on shareholders’ assemblies.Footnote 35 The provisions require large public limited companies with subsidiaries abroad to establish a vigilance plan. A vigilance plan sets out vigilance measures to identify risks and to prevent serious harm to human rights, health, security or the environment resulting from the activities of the mother company, but also from those of the companies it controls, its subcontractors and its suppliers. The text provides for two enforcement mechanisms. First, a formal notice (mise en demeure) can be addressed to a company that does not establish a vigilance plan or establishes an incomplete one. The company has three months to comply with its obligations. Second, there can be an action in responsibility (action en responsabilité) against the company. Here the company must repair the prejudice that compliance with its obligations would have avoided. French multinationals have already received letters of formal notice. This is the case of EDF and its subsidiary EDF Energies Nouvelles for human rights violations in Mexico.Footnote 36 The first case was heard in January 2020. It was brought by French and Ugandan NGOs against Total. The NGOs argue that the vigilance plan designed and put in place by Total does not comply with the law on due diligence and that the measures adopted to mitigate the risks are insufficient or do not exist at all.

13.3.3 The Existing Body of EU Approaches on GVCs and the Recent European Parliament Initiative

Sector-specific or product-specific rules imposed on GVCs have been adopted at the EU level and introduced due diligence obligations. The Conflict Minerals RegulationFootnote 37 and the Regulation of timber productsFootnote 38 impose obligations along the supply chain; the importer at the start of the GVC bears the obligations. The Directive on the Disclosure of Non-Financial and Diversity Information obliges large capital market-oriented companies to include in their non-financial statement information on the effects of the supply chain and the supply chain concepts they pursue.Footnote 39 The Market Surveillance Regulation extends the circle of obligated economic operators in the EU to include participants in GVCs, thus already regulating extraterritorially.Footnote 40 The Directive on unfair trading practices in the global food chain regulates trading practices in supply chains through unfair competition and contract law.Footnote 41 Although these bits and pieces of legislation introduce a form of due diligence along the supply chain, they remain product- or sector-specific, which prevents an overall legal approach to due diligence across sectors for all products. This concern is addressed by the latest Recommendation of the European Parliament.

Most recently, in September 2020, the JURI Committee of the European Parliament published a draft report on corporate due diligence and corporate accountability which includes recommendations for drawing up a Directive.Footnote 42 Although the European Parliament’s project has to undergo a number of procedures and discussions among the European institutions and is unlikely to be adopted in its current form, a few aspects are relevant for our discussion. Article 3 defines due diligence as follows:

‘[D]ue diligence’ means the process put in place by an undertaking aimed at identifying, ceasing, preventing, mitigating, monitoring, disclosing, accounting for, addressing, and remediating the risks posed to human rights, including social and labour rights, the environment, including through climate change, and to governance, both by its own operations and by those of its business relationships.

Following the model of the UN Guiding Principles, the scope of the draft legislation goes beyond human rights to cover social and labour rights, the environment, climate change and governance. Article 4 details that undertakings are to identify and assess risks and publish a risk assessment. This risk-based approach is based on the second pillar of the UN Guiding Principles; it is also followed in the French due diligence law. In case risks are identified, a due diligence strategy is to be established whereby an undertaking designs measures to stop, mitigate or prevent such risks. The firm is to disclose reliable information about its GVC, namely, names, locations and other relevant information concerning subsidiaries, suppliers and business partners.Footnote 43 The due diligence strategy is to be integrated in the undertaking’s business strategy, particularly in the choice of commercial partners. The undertaking is to contractually bind its commercial partners to comply with the company’s due diligence strategy.

13.3.4 Litigation before National Courts

Civil society, NGOs and trade unions are key players in holding multinationals accountable for their actions abroad and along the GVC. They have supported legal actions for human rights violations beyond national territories. Such involvement of civil society is considerably facilitated through digitalization, through the use of platforms and through the greater transparency in GVCs.Footnote 44 Courts face cases where they have to assess violations of human rights in third countries by multinationals and their subsidiaries and construct extraterritorial responsibility. There is a considerable evolution from the 1980s in that the rights rhetoric goes beyond human rights so as to cover labour law, environmental law, misleading advertising or corporate law. Although the rights rhetoric recognizes the moral responsibility of private companies and accounts for the gravity of the harms, the challenges before and during trials in turning a moral responsibility into a legal liability are numerous.

In France, three textile NGOs brought a complaint arguing that Auchan’s communication strategy regarding its commitment to social and environmental standards in the supply chain constituted misleading advertising, since Auchan’s products were found in the Rana Plaza factory in Bangladesh, a factory well known for its poor working and safety conditions. The case was dismissed at the stage of the investigation. In another case, Gabonese employees of COMILOG were victims of a train accident while at work, which led to financial difficulties for the company. They were dismissed and promised compensation, which they never received. With the support of NGOs, they brought the case to a French employment tribunal, claiming that COMILOG was owned by a French company. Their claim was dismissed at first instance but successful on appeal, where the court held COMILOG France and COMILOG International responsible for their own conduct and for the conduct of their subsidiaries abroad. On the merits, the court found that COMILOG had to compensate the workers. On further appeal, the Cour de cassation annulled this finding, arguing that there was no sufficient evidence for the legally required strong link with the mother company in France.Footnote 45 There is a considerable number of cases with similar constellations, in which courts struggle to find a coherent approach to these legal issues.

In Total, the NGOs claimed that the vigilance plan was incomplete and did not offer appropriate mitigating measures, or failed to adopt them. The court did not rule on the merits, as the competence lies with the commercial court, since the law on due diligence is part of the Commercial Code. Nevertheless, the court made a distinction between the formal notice procedure, which is targeted at the vigilance plan and its implementation, and the action in responsibility.Footnote 46 It is unclear whether the court suggested a twofold jurisdiction: a commercial one for due diligence strategies and another one for actions in responsibility. The case triggers fundamental questions as to what a satisfactory vigilance plan is and what appropriate mitigating measures are. It also requires clarification of the relevant field of law applicable, the relevant procedure and the competent jurisdiction.

Even if there is an evolution as to the substance, today’s cases carry the heritage of those from the 1980s. Before ruling on the merits, courts engage in complex procedural issues, just as in the context of the Bhopal litigation or Kiobel. Such legal questions have not yet been settled at the national level, and they are still examined on a case-by-case basis. This lack of consistency renders the outcome of litigation uncertain. The first barrier is procedural; it concerns the jurisdiction of the national court over corporate action beyond the scope of its territorial jurisdiction. The second relates to the responsibility of the mother companies for their subsidiaries. In the two Shell cases brought in the UKFootnote 47 and in the Netherlands,Footnote 48 Nigerian citizens had suffered environmental damage which affected their territory, water, livelihood and health. Here the jurisdiction of the national courts was not an issue, but the differentiation between the mother company and its subsidiary remained controversial.Footnote 49

The tour d’horizon indicates how fragile the belief in judicial activism still is. The adoption of due diligence legislation has not levelled the playing field. Courts are to design the contours and requirements of due diligence. Two methodological questions are at the heart of the ongoing discussions of the private responsibilities of companies in the GVCs: Who is competent? Who is responsible? Such are the challenges of the multilevel, internationalized and digitalized environment, in which the law finds itself unequipped to address the relevant legal challenges.

13.3.5 Business Approaches to GVCs within and beyond the Law

Recent initiatives suggest a different approach, one where legal obligations are placed on companies not only to comply with their own obligations but also to make them responsible for respect of due diligence strategies along the GVC. The role and function of Corporate Social Responsibility and Corporate Digital Responsibility are in the political limelight.Footnote 50 Thereby firms have the potential to exercise impact over the GVC. This is particularly true where a lead company can easily be identified. If the upstream lead company decides to require its downstream partners to comply with its due diligence strategy, the lead company might be able to ensure compliance.Footnote 51 In GVCs, contracts are turned into a regulatory tool to the benefit of the lead company and perhaps to the benefit of public policy goals. There are two major problems: the first results from the exercise of economic power, which might be for good, though the opposite is also true. The second relates to the organization of the GVC, which more often than not lacks a lead company and is composed of a complex network of big, small and medium-sized companies. Designing responsibilities in networks is one of the still unsolved legal issues.

A consortium of French NGOs has drafted a report on the first year of application of the law on due diligence, in which they examined eighty vigilance plans published by French corporations falling under the scope of the due diligence law.Footnote 52 The report is entitled ‘Companies Must Do Better’ and sheds light on questions we have raised before. As regards the publication and content of the due diligence plans, not all companies have published their vigilance plans, some have incomplete ones, some lack transparency and others seem to ignore the idea behind the due diligence plan. The report states, ‘The majority of plans are still focusing on the risks for the company rather than those of third parties or the environment.’Footnote 53 Along the different criteria of the vigilance plan analysed by the consortium of NGOs, it becomes clear that few companies have developed methodologies and appropriate responses in designing their due diligence strategy, identifying and mitigating risks. It is also noted that companies have re-used some previous policies and collected them to constitute due diligence. This lack of seriousness does not only make the vigilance plans unreadable; it negates any due diligence strategy of the firm. If multinationals do not take legal obligations seriously at the level of the GVC lead company, are they likely to produce positive spillover effects along the chain? It is too early to condemn the regulatory approach and the French multinationals. Once similar obligations are adopted in most countries, at least in the EU, we might see generalization and good practices emerge. Over the long term, we might witness competition arise between firms on the ground of their due diligence strategies.

From outside the GVC, compliance can also be monitored by actors such as trade unions and NGOs. They have long been active in litigation and were consulted in the process of designing legislation. The European Parliament’s Recommendation suggests their involvement in the establishment of the undertaking’s due diligence strategies, similar to French law.Footnote 54 Further, due diligence strategies are to be made public. In France, few companies have made public the NGOs or stakeholders contributing to the design of the strategy. While there is no constructive cooperation between multinationals and NGOs yet, NGOs have access to grievance mechanisms under the European Parliament’s Recommendation, which resemble the letter of formal notice under the French law.Footnote 55 Stakeholders, who are not limited to NGOs, could thereby voice concerns as to the existence of risks, which the undertakings would have to answer and be transparent about through publication.

NGOs have a unique capacity for gathering information abroad on the ground. The European Parliament’s text explicitly refers to the National Contact Points under the OECD framework. National Contact Points are not only entrusted with the promotion of the OECD guidelines; they offer a non-judicial platform for grievance mechanisms.Footnote 56 The OECD conducts an in-depth analysis of the facts and publishes a statement as to the conflict and what it can offer to mediate it. Although such proceedings are non-binding, they do offer the possibility of an exchange between the parties, and the case files are often relied on before courts. It seems that NGOs and other stakeholders have a role to play in compliance with the due diligence principles. They are given the possibility to penetrate the network and work with it from the inside. There are equally mechanisms that allow for external review of the GVC’s behaviour.

13.4 The Way Ahead: The Snake Bites Its Own Tail

The European Parliament has discussed the introduction of an independent authority with investigative powers to oversee the application of the proposed directive – namely, the establishment of due diligence plans and appropriate responses in case of risks.Footnote 57 In EU jargon, this implies the creation of a regulatory agency or a similar form. Such an agency could take different forms and could have different powers; what is crucial is the role such an agency might play in the monitoring and surveillance of fundamental rights, the environment, labour rights, consumer rights and so on. A general cross-cutting approach would have a broader effect than isolated pieces of sector- or product-specific legislation. If such rights were treated as being as important as, for instance, competition law, the EU would turn into a leader in transmitting its values to the GVCs at the international level. Playing and being the gentle civiliser does not mean that the EU does not behave like a hegemon, though.Footnote 58

Does the snake bite its own tail? Despite the idealistic compliance mechanisms, a return to courts seems inevitable, and fundamental questions remain. Are multinationals responsible for their actions abroad? Let us flip a coin. Heads: yes, there is legislation, or it is underway. There is political will and civic engagement. There is a strong rights rhetoric that people, politicians and multinationals relate to. Heads of multinationals and politicians have said this is important. Firms are adopting due diligence strategies; they are mitigating the risks of their activities. They are taking their responsibility seriously. Tails: all the above is true, there has been considerable progress and there is optimism. But does it work in practice? Some doubts arise. There are issues of compliance, and courts struggle. Multinationals and nowadays the GAFAs have communication strategies to send positive messages. They do not have mailboxes; it is sometimes difficult to find them. Mostly, they might even own GVCs, and what happens there stays there. It depends upon their desire to commit to their duty of due diligence, not upon the state. How will these parties react in the algorithmic society?

14 Consumer Law as a Tool to Regulate Artificial Intelligence

Serge Gijrath
14.1 Introduction

Ongoing digital transformation combined with artificial intelligence (AI) brings serious advantages to society.Footnote 1 Transactional opportunities knock: optimal energy use, fully autonomous machines, electronic banking, medical analysis, constant access to digital platforms. Society at large is embracing the latest wave of AI applications as one of the most transformative forces of our time. Two developments contribute to the rise of the algorithmic society: (1) the possibilities resulting from technological advances in machine learning, and (2) the availability of data analysis using algorithms. Where the aim is to promote competitive data markets, the question arises of what benefits or harm can be brought to private individuals. Some are concerned about human dignity.Footnote 2 They believe that human dignity may be threatened by digital traders who demonstrate an insatiable hunger for data.Footnote 3 Through algorithms the traders may predict, anticipate and regulate future private individual, specifically consumer, behaviour. Data assembly forms part of reciprocal transactions in which these data are currency. With the deployment of AI, traders can exclude uncertainty from the automated transaction processes.

The equality gap in the employment of technology in automated transactions begs the question of whether the private individual’s fundamental rights are adequately warranted.Footnote 4 Prima facie, the consumer stands weak when she is subjected to automatic processes – no matter whether it concerns day-to-day transactions, like boarding a train, or a complex decision tree used to validate a virtual mortgage. When ‘computer says no’, the consumer is left with limited options: click yes to transact (and even then she could fail), abort or restart the transaction process, or – much more difficult – obtain information or engage in renegotiations. But where the negotiation process is almost fully automated and there is no human counterpart, the third option is circular rather than complementary to the first two. Empirical evidence suggests that automated decisions will be acceptable to humans only if they are confident that the technology used and its output are fair, trustworthy and corrigible.Footnote 5 How should Constitutional States respond to new technologies on multisided platforms that potentially shift the bargaining power to the traders?

A proposed definition of digital platforms is that these are companies (1) operating in two- or multisided markets, where at least one side is open to the public; (2) whose services are accessed via the Internet (i.e., at a distance); and (3) that, as a consequence, enjoy particular types of powerful network effects.Footnote 6 With the use of AI, these platforms may create interdependence of demand between the different sides of the market. Interdependence may create indirect network externalities. This leads to the question of whether and, if so, how traders can deploy AI to attract one group of customers in order to attract the other, and to keep both groups thriving on the digital marketplace.

AI is a collection of technologies that combine data, algorithms and computing power. Yet science is unable to agree even on a single definition of the notion of ‘intelligence’ as such. AI often is not defined either; rather, its purpose is described. A starting point for understanding algorithms is to see them as virtual agents. Agents learn, adapt and even deploy themselves in dynamic and uncertain virtual environments. Such learning is apt to create a stable and reliable environment of automated transactions. AI seems to entail the replication of human behaviour, through data analysis that models ‘some aspect of the world’. But does it? AI employs data analysis models to map behavioural aspects of humans.Footnote 7 Inferences from these models are used to predict and anticipate possible future events.Footnote 8 The difference in applying AI rather than standard methods of data analysis is that AI does not analyse data as initially programmed. Rather, AI assembles data, learns from them to respond intelligently to new data, and adapts the output accordingly. Thus AI is not ideal for the linear analysis of data in the manner in which they have been processed or programmed. Conversely, algorithms are more dynamic, since they apply machine learning.Footnote 9

Machine learning algorithms build a mathematical model based on sample data, known as ‘training data’.Footnote 10 Training data serve computer systems to make predictions or decisions without being programmed specifically to perform the task. Machine learning focuses on predicting unknown properties learned from the training data. Conversely, data analysis focuses on the discovery of (previously) unknown properties in the data. The analytics process enables the processor to mine data for new insights and to find correlations between apparently disparate data sets through self-learning. Self-learning AI can be supervised or unsupervised. Supervised learning is based on algorithms that build on and rely on labelled data sets. The algorithms are ‘trained’ to map from input to output by the provision of data with ‘correct’ values already assigned to them. The first, training, phase creates models on which predictions can then be made in the second, ‘prediction’, phase.Footnote 11 Unsupervised learning entails that the algorithms are ‘left to themselves’ to find regularities in input data without any instructions on what to look for.Footnote 12 It is the ability of the algorithms to change their output based on experience that gives machine learning its power.
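The supervised/unsupervised distinction described above can be made concrete in a few lines of code. What follows is a minimal illustrative sketch, not drawn from the chapter; it assumes Python with the scikit-learn library, and the synthetic data set and all parameter choices are invented purely for demonstration.

```python
# Illustrative sketch (assumption, not from the chapter): supervised vs.
# unsupervised machine learning with scikit-learn on synthetic data.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# 'Training data': feature vectors X with 'correct' labels y already assigned.
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Supervised learning: the first (training) phase learns a mapping from
# labelled input to output ...
model = LogisticRegression().fit(X_train, y_train)
# ... and the second (prediction) phase applies the model to new data.
print("prediction accuracy:", model.score(X_test, y_test))

# Unsupervised learning: the algorithm is 'left to itself' to find
# regularities (here, two clusters) in unlabelled input data.
clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
print("cluster sizes:", [(clusters == k).sum() for k in (0, 1)])
```

The sketch only shows where the ‘correct’ values enter: in the supervised branch they are given in advance, whereas in the unsupervised branch the algorithm produces its own grouping, which is precisely what makes its output hard for a human to contest.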

For humans, it is practically impossible to deduce and contest in an adequate manner the veracity of a machine learning process and the subsequent outcome based on it. This chapter contends that the deployment of AI on digital platforms could lead to potentially harmful situations for consumers, given the circularity of algorithms and data. Policy makers struggle to formulate answers. In Europe, the focus has been on establishing that AI systems should be transparent, traceable and guarantee human oversight.Footnote 13 These principles form the basis of this chapter. Traceability of AI could contribute to another requirement for AI in the algorithmic society: veracity, or truthfulness of data.Footnote 14 Veracity and truthfulness of data are subject to the self-learning AI output.Footnote 15 In accepting the veracity of the data, humans require trust. Transparency is key to establishing trust. However, many algorithms are non-transparent and thus incapable of explanation to humans. Even if transparent algorithms were capable of explanation to humans, the most effective machine learning processes would still defy human understanding. Hence the search for transparent algorithms is unlikely to provide insights into the underlying technology.Footnote 16 The quality of output using non-transparent AI is probably better, but it makes the position of the recipient worse, because there is no way for her to test the processes. Consequently, Constitutional States may want to contain the potential harms of these technologies by applying private law principles.

This chapter’s principal research question is how Constitutional States should deal with new forms of private power in the algorithmic society. In particular, the thesis is that regulatory private law can be revamped in the realm of consumer rights to serve as a tool to regulate AI and its possible adverse consequences for the weaker party on digital platforms. Rather than top-down regulation of AI’s consequences to protect human dignity, this chapter proposes considering a bottom-up approach of empowering consumers in the negotiation and governance phases of mutual digital platform transactions. Following the main question, it must be seen how consumer rights can be applied to AI in a meaningful and effective manner. Could AI output be governed better if the trader had to comply with certain consumer law principles such as contestability, traceability, veracity and transparency?

One initial objection may query why we limit this chapter to consumer law. The answer is that consumers are affected directly when there is no room to negotiate or contest a transaction. Consumer rights are fundamental rights.Footnote 17 The Charter of Fundamental Rights of the EU (CFREU) dictates that the Union’s policies ‘shall ensure a high level of consumer protection’.Footnote 18 The high level of consumer protection is sustained by safeguarding, inter alia, the consumers’ economic interests in the Treaty on the Functioning of the European Union (TFEU).Footnote 19 The TFEU stipulates that the Union must promote consumers’ rights to information and must contribute to the attainment of a high-level baseline of consumer protection that also takes technological advances into account.Footnote 20 It is evident that in the algorithmic society the EU will strive to control technologies if these potentially cause harm to the foundations of European private law. By responding adequately to the impact that AI deployment may have on private law norms and principles, a technology-and-private-law approach to AI could, conversely, reinforce European private law.Footnote 21 Although AI is a global phenomenon, it is challenging to formulate a transnational law approach, given the lack of global AI and consumer regulation.

The structure is as follows: Section 14.2 sets the stage: AI on digital platforms is discussed bottom-up in the context of EU personal data and internal market regulation, in particular revamped consumer law, online intermediaryFootnote 22 and free-flow of data regulation. The focus is on contributing to the ongoing governance debate of how to secure a high level of consumer protection when AI impacts consumer transactions on digital platforms, along with what rights consumers should have if they want to contest or reject AI output. Section 14.2.1 explores why consumer law must supplement AI regulation to warrant effective redress. Section 14.2.2 alludes to principles of contract law. Section 14.2.3 juxtaposes consumer rights with the data strategy objectives. Section 14.2.4 discusses trustworthiness and transparency. Section 14.3 is designed to align consumer rights with AI. Section 14.3.1 reflects on the regulation of AI and consumer rights through GTC. Section 14.3.2 presents consumer law principles that could be regulated: contestability (Section 14.3.2.1), traceability and veracity (Section 14.3.2.2) and transparency (Section 14.3.2.3). Section 14.3.3 considers further harmonization of consumer law in the context of AI. Section 14.4 contains closing remarks and some recommendations.

14.2 AI on Digital Platforms
14.2.1 Consumers, Data Subjects and Redress

Consumers may think they are protected against the adverse consequences of AI under privacy regulations and personal data protection regimes. However, it remains to be seen whether personal data protection extends to AI. Privacy policies are not designed to protect consumers against adverse consequences of data generated through AI. In that sense, there is a significant conceptual difference between policies and general terms and conditions (GTC): privacy policies are unilateral statements for compliance purposes. The policies do not leave room for negotiation. Moreover, privacy policies contain fairly moot purpose limitations. The purpose limitations are formulated de facto as processing rights. Private consumers/data subjects consider their consent to data processing implied, whatever tech is employed. Hence, the traders might be apt to apply their policies to consumers who are subjected to AI and machine learning. The General Data Protection Regulation (GDPR) contains one qualification in the realm of AI:Footnote 23 a data subject has the right to object at any time against automated decision-making (ADM), including profiling. This obligation for data controllers is set off by the provision that controllers may employ ADM, provided they demonstrate compelling legitimate grounds for the processing which override the interests, rights and freedoms of the data subject.

Most of the traders’ machine learning is fed by aggregated, large batches of pseudonymised or anonymised non-personal data.Footnote 24 There is no built-in yes/no button to express consent to being subjected to AI, and there is no such regulation on the horizon.Footnote 25 The data policies are less tailored than GTC to defining consumer rights for complex AI systems. Besides, it is likely that most private individuals do not read the digital privacy policies – nor the GTC, for that matter – prior to responding to AI output.Footnote 26 The questions posed reveal important private law concerns. ‘What are my rights?’ relates to justified questions as regards access rights and vested consumer rights; the right to take note of and save/print the conditions; void unfair user terms; and termination rights. Traders usually refer to the GTC that can be found on the site. There is no meaningful choice. That is even more the case in the continental tradition, where acceptance of GTC is explicit. In Anglo-American jurisdictions, the private individual is confronted with a pop-up window which must be scrolled through and accepted. Declining means aborting the transaction.

‘How can I enforce my rights against the trader?’ requires that the consumer who wishes to enforce her rights be able to address the trader, either on the platform or through online dispute resolution mechanisms. Voidance or nullification are remedies when an agreement came about through a defect of consent recognised in settled European private law principles, such as coercion, error or deceit. Hence the consumer needs to know that there is a remedy if the AI process contained errors or was faulty.Footnote 27

14.2.2 Principles of Contract Law

In the algorithmic society, consumers should still have at least some recourse to a counterparty, whom they can ask for information during the consideration process. They must have redress when they do not understand or agree with transactional output that affects their contractual position without explanation. The right to correct steps in contract formation is moot where the process is cast in stone. Once the consumers have succeeded in identifying the formal counterparty, they can apply remedies. Where does that leave them if the response to these remedies is also automated as a result of the trader’s use of profiling and decision-making tools? This reiterates the question of whether human dignity is at stake when the counterpart is not a human but a machine. The consumer becomes a string of codes and loses her feeling of uniqueness.Footnote 28 Furthermore, when distributed ledger technology is used, the chain of contracts is extended. There is the possibility that an earlier contractual link will be ‘lost’. For example, there is a gap in the formation on the digital platform because the contract formation requirements either were not fully met or were waived. Another example is where the consumer wants to partially rescind the transaction but the system does not cater for a partial breach. The impact of a broken upstream contractual link on a downstream contract in an AI-enabled transactional system is likely to raise novel contract law questions, too. An agreement may lack contractual force if there is uncertainty or if a downstream contractual link in the chain is dependent on the performance of anterior upstream agreements. An almost limitless range of possibilities will need to be addressed in software terms in order to execute the platform transaction validly. When the formation steps use automated decision-making processes that are not covered in the GTC governing the status of AI output, this raises the question of how AI using distributed ledger technology could react to non-standard events or conditions, and if and how the chain of transactions is part of the consideration. The consumer could wind up in a vicious cycle, and her fundamental right to a high level of consumer protection could be at stake, more so than in the information society. Whereas the e-Commerce, Distance Selling and, later, Services Directives imposed information duties on traders, the normative framework for the algorithmic society is based on rather different principles. Doctrines such as freedom of contract – which entails the exclusion of coercion – and error may prove unenforceable in practice when AI output contains flaws or defects. For the consumer to invoke such lack-of-will theories, she needs to be able to establish where and how in the system the flaws or mistakes occurred.

14.2.3 Data Strategy

Does the data strategy stand in the way of consumer protection against AI? The focus of the EU’s data strategy is on stimulating the potential of data for business, research and innovation purposes.Footnote 29 The old regulatory dilemma of how to balance a fair and competitive business environment with a high level of consumer rights is revived. In 2019–2020, the Commission announced various initiatives, including rules on (1) securing the free flow of data within the Union,Footnote 30 (2) provisions on data access and transfer,Footnote 31 and (3) enhanced data portability.Footnote 32 Prima facie, these topics exhibit different approaches to achieving a balance between business and consumer interests. More importantly, how does the political desire for trustworthy technology match with such diverse regulations? The answer is that it does not. The Free Flow of Non-Personal Data Regulation lays down rules relating to data localization requirements, the availability of data to competent authorities and data porting for professional users.Footnote 33 It does not cover AI use. The Modernization of Consumer Protection Directive alludes to the requirement for traders to inform consumers only about the default main parameters determining the ranking of offers presented to the consumer as a result of the search query, and their relative importance as opposed to other parameters.Footnote 34 The proviso contains a reference to ‘processes, specific signals incorporated into algorithms or other adjustment or demotion mechanisms used in connection with the ranking are not required to disclose the detailed functioning of their ranking mechanisms, including algorithms’.Footnote 35 It does not appear that the Modernization of Consumer Protection Directive is going to protect consumers against adverse consequences of AI output. It also seems that the Trade Secrets Directive stands somewhat in the way of algorithmic transparency.

The provisions on data porting revert to information duties. Codes of Conduct must detail the information on data porting conditions (including technical and operational requirements) that traders should make available to private individuals in a sufficiently detailed, clear and transparent manner before a contract is concluded.Footnote 36 In light of the limited scope of data portability regulation, there is some doubt as to whether the high-level European data strategy is going to contribute to a human-centric development of AI.

14.2.4 Trustworthiness and Transparency

The next question is what regulatory requirements could emerge when AI becomes ubiquitous in mutual transactions.Footnote 37 The Ethical Guidelines on AI of 2019 allude to seven key requirements for ‘Trustworthy AI’: (1) human agency and oversight; (2) technical robustness and safety; (3) privacy and data governance; (4) transparency; (5) diversity, non-discrimination and fairness; (6) environmental and societal well-being; and (7) accountability.Footnote 38 These non-binding guidelines address different topics, some of which fall outside the scope of private law principles. In this chapter, the focus is on transparency, accountability and other norms, notably traceability, contestability and veracity.Footnote 39 These notions are covered in the following discussion. First, it is established that opaqueness about technology use and lack of accountability could be perceived as potentially harmful to consumers.Footnote 40 There are voices claiming that technology trustworthiness is essential for citizens and businesses that interact.Footnote 41 Is it up to Constitutional States to warrant and monitor technology trustworthiness, or should this be left to businesses? Does warranting technology trustworthiness not revive complex economic questions, such as how to deal with the possibility of an adverse impact on competition or the stifling of innovation when governments impose standardized technology norms to achieve a common level of technology trustworthiness – in the EU only? What if trust in AI is broken?

A possible common denominator for trustworthiness may be transparency. Transparency is a key principle in different areas of EU law. A brief exploration of existing regulation reveals different tools to regulate transparency. Recent examples from 2019–2020 range from the Modernization of Consumer Protection Directive to the Online Intermediary Services Regulation, the Ethical Guidelines on AI, the Open Data Directive and the 2020 White Paper on Artificial Intelligence.Footnote 42 All these instruments at least allude to the need for transparency in the algorithmic society. The Modernization of Consumer Protection Directive provides that more transparency requirements should be introduced. Would it be necessary to redefine transparency as a principle of private law in the algorithmic society? One could take this a step further: to achieve technology trustworthiness, should there be more focus on regulating the transparency of AI and machine learning?Footnote 43 The Ethics Guidelines 2019 point at permission systems, fairness and explicability. From a private law perspective, permission systems especially could be considered to establish and safeguard trust. But reference must also be made to the factual problem that consumers often do not take note of the provisions that drive the permission.

Explicability is not enshrined as a guiding principle. Nevertheless, transparency notions could be a stepping stone to obtaining explicability.Footnote 44 Accuracy may be a given. What matters is whether the consumer has the right and is enabled to contest an outcome that is presented as accurate.

14.3 Consumer Rights, AI and ADM
14.3.1 Regulating AI through General Terms and Conditions

There are two aspects of GTC that must be considered. First, contrary to permission systems, the general rule in private law remains that explicit acceptance of GTC by the consumer is not required, as long as the trader has made the terms available prior to or at the moment the contract is concluded. Contrary to jurisdictions that require parties to scroll through the terms, the European approach of implied acceptance in practice leads to consumer passivity. Indeed, the system of implicit permission encourages consumers not to read GTC. Traders on digital platforms need to provide information on what technologies they use and how they are applied. Given the sheer importance of the fundamental rights of human dignity and consumer protection when AI is applied, the question is whether consumers should be asked for explicit consent when the trader applies AI. It would be very simple for traders to implement consent buttons applying varied decision trees, as sketched below. But what is the use when humans must click through to complete a transaction? Take, for example, the system for obtaining cookies consent on digital platforms.Footnote 45 On the one hand, the traders (must) provide transparency on which technologies they employ. On the other hand, cookie walls prevent the consumer from making an informed decision, as she is coerced into accepting the cookies. A recognizable issue with cookies, as with AI, is that consumers are often unable to understand what the different technologies could mean for them personally. In the event the AI output matches their expectations or requirements, consumers are unlikely to protest against consent previously given. Hence the real question is whether consumers should be offered a menu of choices beforehand, plus an option to accept or reject AI output or ADM. This example will be covered in the following discussion.
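The following minimal sketch (in Python, purely for illustration) shows what such a consent decision tree could look like: instead of a single take-it-or-leave-it button, the trader offers a granular choice per AI practice before any AI or ADM is applied. All names and questions are assumptions, not drawn from any existing platform or legal instrument.

```python
# A minimal, hypothetical sketch of a granular AI-consent menu. All practice
# names and questions are illustrative assumptions.

CHOICES = {
    "profiling": "May we build a profile to personalise offers?",
    "adm": "May we take automated decisions (e.g., pricing) about you?",
    "ai_output": "May we present AI-generated recommendations?",
}

def collect_consent(answers: dict) -> dict:
    """Record an explicit yes/no per practice; anything unanswered is 'no'."""
    return {key: bool(answers.get(key, False)) for key in CHOICES}

# The consumer accepts recommendations but rejects profiling and ADM; the
# transaction can proceed with the rejected practices switched off (and the
# trader free to warn that the 'best outcome' is no longer guaranteed).
consent = collect_consent({"ai_output": True})
print(consent)  # -> {'profiling': False, 'adm': False, 'ai_output': True}
```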

Second, where there is no negotiation or modification of the GTC, the consumer will still be protected by her right to void or rescind black-, blue- or grey-list contract provisions. Additionally, the EU Unfair Contract Terms Directive contains a blue list of voidable terms and conditions.Footnote 46 However, the black, grey and blue lists do not count for much. Rather, the GTC should contain clauses that oblige the trader to observe norms and principles such as traceability, contestability, transparency and veracity of the AI process. This raises the question of whether ethics guidelines and new principles could be translated into binding, positively formulated obligations on AI use. Rather than unilateral statements on data use, GTC could be required to comply with general principles and obligations.

The key to prospective regulation does not lie in art. 6(1) Modernization of Consumer Protection Directive. Although this clause contains no fewer than twenty-one provisions on information requirements, including two new requirements on technical aspects, none of the requirements applies to providing the consumer with information on the use of AI and ADM, let alone the contestability of the consumer transaction based thereon. Granted, there is an obligation for the trader to provide information on the scope of the services, but not on the specific use of AI technology. It is a very big step from the general information requirements to providing specific information on the application of AI and ADM in mutual transactions. When a consumer is subjected to AI processes, she should be advised in advance, not informed after the fact. A commentary to art. 6 clarifies that traders must provide the information mentioned therein prior to the consumer accepting the contract terms (GTC).Footnote 47 The underlying thought is not new: to protect consumers, as the weaker contractual parties, from concluding contracts that may be detrimental to them as a result of not having all the necessary information. Absent any relevant information, the consumer lags behind, especially in terms of not being informed adequately (1) that, (2) how and (3) for which purposes AI and machine learning are applied by the trader. Commentators generally feel that providing consumers with the relevant information prior to the conclusion of the contract is essential. Knowing that the trader uses such technologies could be of the utmost importance to the consumer. Even if she cannot oversee what the technological possibilities are, she should still get advance notice of the application of AI. Advance notice means a stand-still period during which she can make an informed decision. Going back to the cookie policy example, it is not onerous for the trader to offer the consumer a menu of choices beforehand. This would be especially relevant for the most used application of AI and ADM: profiling. The consumer should have the right to reject a profile scan that contains parameters she does not find relevant or which she perceives as onerous to her. Granted, the trader will warn the consumer that she will not benefit from the best outcome, but that should be her decision. The consumer should have a say in this important and unpredictable process. She should be entitled to anticipate adverse consequences of AI for her.

The consumer must be able to trace and contest the AI output and ADM. The justification for such rights lies in the risk of discrimination and in the lack of information on the essentials underlying the contract terms that come about through the private law principle of offer and acceptance. Granted, art. 9 Modernization of Consumer Protection Directive contains the generic right of withdrawal.Footnote 48 Contesting a consumer transaction based on AI is not necessary for that purpose: the consumer can simply fill in a form to rescind the agreement. Regardless, the point of a consumer approach to AI use is not to enable the consumer to walk away. The consumer must have the right to know what procedures were used, what kind of outcome they produced, what it means for the transaction and what she can do against it. As said, the consumer also must have a form of redress, not just against the trader but also against the developer of the AI software, the creator of the process, the third party instructing the algorithms and/or the intermediary or supplier of the trader.

14.3.2 Consumer Law Principles

Which consumer law principles could be reignited in GTC to enable consumers to hold traders accountable for unfair processes or non-transparent output? This goes back to the main theorem. Transactions on digital platforms are governed by mutually agreed contract terms. It is still common practice that these are contained in GTC. Is there a regulatory gap that requires Constitutional States to formulate new conditions, or bend existing ones, for traders using AI? The Bureau Européen des Unions de ConsommateursFootnote 49 proposes ‘a set of transparency obligations to make sure consumers are informed when using AI-based products and services, particularly about the functioning of the algorithms involved and rights to object automated decisions’. The Modernization of Consumer Protection Directive is open to adjustment of consumer rights ‘in the context of continuous development of digital tools’. The Directive makes a clear-cut case for protecting consumers against the adverse consequences of AI.Footnote 50 But it contains little concrete wording on AI use and consumers.Footnote 51 Embedding legal obligations for the trader in GTC could, potentially, be a very effective measure. There is one caveat, in that GTC often contain negatively formulated obligations.Footnote 52 Positively phrased obligations, such as the obligation to inform consumers that the trader employs AI, require further conceptual thinking. Another positively phrased obligation could be for traders to explain the AI process and to explain and justify the AI output.

14.3.2.1 Contestability

How unfair is it when consumers are subject to decisions that are cast in stone (i.e., non-contestable)? A possible response is to embed contestability steps in smart consumer contracts. At their core, smart contracts are self-executing arrangements that the computer can make, verify, execute and enforce automatically under event-driven conditions set in advance. From an AI perspective, an almost limitless range of possibilities must be addressed in software terms. It is unlikely that these possibilities can be revealed step-by-step to the consumer. Consumers are probably unaware of the means of redress against AI output used in consumer transactions.Footnote 53 Applying a notion of contestability – not against the transaction but against the applied profiling methods or AI output – is no fad. If the system enables the consumer to test the correctness of the AI technology process and output, there must be a possibility of reconsidering the scope of the transaction. Otherwise, the sole remedy for the consumer could be a re-test of the AI process, which is an empty remedy. Indeed, the possibility of technological error or fraud underlines that a re-test is not enough. Traditional contract law remedies, such as termination for cause, could be explored. Furthermore, in connection with the information requirements, it would make sense to oblige traders to grant the consumer a single point of contact. This facilitates contesting the outcome with the trader or a third party, even if the automated processes are not monitored by the trader.Footnote 54
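To make the idea of an embedded contestability step concrete, the following stylised Python sketch shows a self-executing arrangement that finalises only after a window in which the consumer may contest the AI output. The class, the 48-hour window and the suspension logic are illustrative assumptions, not a description of any deployed smart contract.

```python
# A stylised sketch (not production code) of a self-executing arrangement
# with an embedded contestability step. All names and the 48-hour window
# are illustrative assumptions.

import datetime

class SmartConsumerContract:
    CONTEST_WINDOW = datetime.timedelta(hours=48)

    def __init__(self, ai_output: str):
        self.ai_output = ai_output              # e.g., a personalised price
        self.created = datetime.datetime.now()
        self.contested = False

    def contest(self) -> bool:
        """The consumer contests the AI output, not the whole transaction."""
        if datetime.datetime.now() - self.created <= self.CONTEST_WINDOW:
            self.contested = True               # execution suspended for review
        return self.contested

    def execute(self) -> str:
        """Self-execute only if the contest window passed without objection."""
        if self.contested:
            raise RuntimeError("suspended: AI output under review")
        if datetime.datetime.now() - self.created < self.CONTEST_WINDOW:
            raise RuntimeError("not final: contest window still open")
        return f"executed at AI-determined terms: {self.ai_output}"

contract = SmartConsumerContract("price set at 42 instead of 50")
contract.contest()  # objection raised within the window; execution is suspended
```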

14.3.2.2 Traceability, Veracity

Testing veracity requires reproducibility of the non-transparent machine learning process. Does a consumer have a justified interest in tracing the process steps of machine learning, whether or not this has led to undesirable AI output? Something tells a lawyer that, as long as the AI output has an adverse impact on the consumer, it seems reasonable for the trader to bear the burden of proving that the output is correct and, in order for the consumer to be able to make a meaningful correction request, for the consumer to be provided with a minimum of the necessary technical information used in the AI process. Traceability is closely connected with the requirement of accessibility of information, enshrined in the various legal instruments for digital platform regulation. As such, traceability is closely tied to the transparency norm.

It is likely that a trader using AI in a consumer transaction will escape the onus of proving that the machine learning process, the AI output or the ADM is not faulty. For the average consumer, it will be very difficult to provide evidence against the veracity of – both non-transparent and transparent – AI. The consumer is not the AI expert. The process of data analysis and machine learning does not rest in her hands. Besides, the trail of algorithmic decision steps is probably impossible to reconstruct. Hence, the consumer starts from a weaker position than the trader who applies AI. Granted, it was mentioned in Section 14.2.2 that it makes no practical sense for the consumer to ask for algorithmic transparency should she not agree with the output. The point is that the consumer should at least be given a chance to trace the process. Traceability – with the help of a third party who is able to audit the software trail – should be a requirement on the trader and a fundamental right for the consumer.
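What a third-party-auditable ‘software trail’ might look like can be sketched as follows: each step of the AI process appends a tamper-evident, hash-chained record that an independent auditor can later replay. This is an illustrative assumption about one possible technical design, not a description of any existing system.

```python
# A minimal sketch of a hash-chained audit trail for an AI process. All step
# names and payloads are illustrative.

import hashlib, json

def append_record(trail, step, payload):
    prev_hash = trail[-1]["hash"] if trail else "0" * 64
    body = json.dumps({"step": step, "payload": payload, "prev": prev_hash},
                      sort_keys=True)
    trail.append({"step": step, "payload": payload, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})

def verify(trail):
    """Auditor re-computes every hash; any tampering breaks the chain."""
    prev = "0" * 64
    for rec in trail:
        body = json.dumps({"step": rec["step"], "payload": rec["payload"],
                           "prev": prev}, sort_keys=True)
        if rec["prev"] != prev or rec["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = rec["hash"]
    return True

trail = []
append_record(trail, "input", {"profile_features": ["age_band", "region"]})
append_record(trail, "model", {"version": "v2.3", "score": 0.81})
append_record(trail, "output", {"decision": "personalised price applied"})
assert verify(trail)  # the consumer's auditor can replay the whole trail
```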

14.3.2.3 Transparency

Transparency is intended to resolve information asymmetries vis-à-vis the consumer in the AI process. Transparency is tied closely to the information requirements laid down in digital platform regulation and dating back to the Electronic Commerce Directive.Footnote 55 What is the consequence when information requirements are delisted because they have become technologically obsolete? Advocate General Pitruzzella proposed that the Court rule that an e-commerce platform such as Amazon can no longer be obliged to make a fax line available to consumers.Footnote 56 He also suggested that digital platforms must guarantee a choice of several different means of communication available to consumers, as well as rapid contact and efficient communication.Footnote 57 By analogy, in the algorithmic society, transparency obligations on AI-driven platforms could prove to be a palpable solution for consumers. Providing transparency on the output also contributes to the consumer exercising some control over data use in the AI process, notwithstanding the argument that transparent algorithms cannot be explained to a private individual.

14.3.3 Further Harmonization of Consumer Law in the Context of AI

It should be considered whether the Unfair Commercial Practices Directive could be updated with terms that regulate AI.Footnote 58 At a high level, this Directive introduced the notion of ‘good faith’ to prevent imbalances in the rights and obligations of consumers on the one hand and sellers and suppliers on the other.Footnote 59 It should be borne in mind that consumer protection will become an even more important factor when the chain of consumer agreements with a trader becomes extended. Granted, the question of whether and how to apply AI requires further thinking on what types of AI and data use could constitute unfair contract terms. A case could be made of an earlier agreement voiding follow-up transactions, for example, because the initial contract formation requirements were not met after AI deployment. But the impact of a voidable upstream contractual link on a downstream agreement in an AI-enabled contract system is likely to raise different novel contract law questions, for instance, regarding third party liability.

In order to ensure that Member State authorities can impose effective, proportionate and dissuasive penalties in relation to widespread infringements of consumer law and to widespread infringements with an EU dimension that are subject to coordinated investigation and enforcement,Footnote 60 special fines could be introduced for the unfair application of AI.Footnote 61 Contractual remedies, including claims for damages suffered as a result of incorrect ADM, could be considered.

Prima facie, the Modernization of Consumer Protection Directive provides for the inclusion of transparency norms related to the parameters for the ranking of prices and persons on digital platforms. However, the Directive does not contain an obligation to inform the consumer about the reasons why, and through what human process, if any, the input criteria were determined. This approach bodes well for the data strategy, but consumers could end up unhappy, for instance, if information about the underlying algorithms is not included in the transparency standard.

By way of an example, the Modernization of Consumer Protection Directive provides for a modest price transparency obligation at the retail level. It proposes a specific information requirement to inform consumers clearly when the price of a product or service presented to them is personalized on the basis of ADM. The purpose of this clause is to ensure that consumers can take into account the potential price risks in their purchasing decision.Footnote 62 But the proviso does not go as far as to determine how the consumer should identify these risks. Digital platforms are notoriously silent on price comparisons. Lacking guidance on risk identification results in a limited practical application of pricing transparency. What does not really help is that the Modernization of Consumer Protection Directive provides traders with a legal – if flimsy – basis for profiling and ADM.Footnote 63 This legal basis is, unfortunately, not supplemented by consumer rights that go beyond them receiving certain, non-specific information from the trader. The Modernization of Consumer Protection Directive, as it stands now, does not pass the test of a satisfactorily high threshold for consumer protection on AI-driven platforms.

14.4 Closing Remarks

This chapter makes a case for a bottom-up approach to AI use in consumer transactions. The theorem was that the use of AI could well clash with the fundamental right to a high level of consumer protection. Looking at principles of contract law, there could be a regulatory gap when traders fail to be transparent about why and how they employ AI. Consumers also require a better understanding of AI processes and of the consequences of output, and should be allowed to contest the AI output.

Regulators, likewise, could look at enhancing GTC provisions, to the extent that the individual does not bear the onus of evidence when contesting AI output. Consumers should have the right to ask for correction, modification and deletion of output directly from the traders. It should be borne in mind that the individual is contesting the way the output was produced, generated and used. The argument was also made that consumer rights could supplement the very limited personal data rights on AI.

When Constitutional States determine what requirements could be included in GTC by the trader, they could consider a list of transparency principles. The list could include (1) informing the consumer, prior to any contract being entered into, that the trader is using AI; (2) clarifying for what purposes AI is used; (3) providing the consumer with information on the technology used; (4) granting the consumer a meaningful, tailored and easy-to-use set of options for accepting or rejecting the use of AI and/or ADM, before the trader engages in such practice; (5) informing the consumer beforehand of the possible adverse consequences for her if she refuses to submit to the AI; (6) explaining how to require from the trader a rerun of contested AI output; (7) adhering to an industry-approved code of conduct on AI and making this code easily accessible to the consumer; (8) informing the consumer that online dispute resolution extends to contesting AI output and/or ADM; (9) informing the consumer that her rights under the GTC are without prejudice to other rights, such as those under personal data regulation; (10) enabling the consumer – with one or more buttons – to say yes or no to any AI output, and giving her alternative choices; (11) enabling the consumer to contest the AI output or ADM outcome; (12) accepting liability for incorrect, discriminatory and wrongful output; (13) warranting the traceability of the technological processes used and allowing for an audit at reasonable cost; and (14) explaining the obligations related to how consumer contracts are shared with a third party performing the AI process. These suggestions presuppose that the consumer is entitled to have a human, independent third party monitor AI output, and that the onus of evidence regarding the veracity of the output rests on the trader.

The fact that AI is aimed at casting algorithmic processes in stone to facilitate mutual transactions on digital platforms should not give traders a carte blanche, when society perceives a regulatory gap.

15 When the Algorithm Is Not Fully Reliable: The Collaboration between Technology and Humans in the Fight against Hate Speech

Federica Casarosa
15.1 Introduction

Our lives are increasingly inhabited by technological tools that help us deliver our workload, connect with our families and relatives, and enjoy leisure activities. Credit cards, smartphones, trains, and so on are all tools that we use every day without noticing that each of them works only through its internal ‘code’. These objects embed software programmes, and each piece of software is based on a set of algorithms. Thus we may affirm that most of (if not all) our experiences are filtered by algorithms each time we use such ‘coded objects’.Footnote 1

15.1.1 A Preliminary Distinction: Algorithms and Soft Computing

According to computer science, algorithms are automated decision-making processes to be followed in calculations or other problem-solving operations, especially by a computer.Footnote 2 Thus an algorithm is a detailed and numerically finite series of instructions which can be processed through a combination of software and hardware tools: Algorithms start from an initial input and reach a prescribed output, which is based on the subsequent set of commands that can involve several activities, such as calculation, data processing, and automated reasoning. The achievement of the solution depends upon the correct execution of the instructions.Footnote 3 However, it is important to note that, contrary to the common perception, algorithms are neither always efficient nor always effective.
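A textbook illustration of this definition is Euclid’s algorithm for the greatest common divisor: a finite series of instructions that transforms an initial input into a prescribed output (shown here in Python purely for illustration).

```python
# Euclid's algorithm: a detailed, finite series of instructions leading
# from an initial input to a prescribed output.

def gcd(a: int, b: int) -> int:
    while b != 0:            # a finite, well-defined sequence of steps
        a, b = b, a % b      # each iteration strictly reduces the problem
    return a                 # the prescribed output

print(gcd(48, 18))  # -> 6; correct execution of the instructions yields the solution
```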

From the efficiency perspective, algorithms must be able to execute their instructions without consuming an excessive amount of time and space. Although technological progress has allowed for the development of ever more powerful computers, provided with more processors and greater memory capacity, when an algorithm executes instructions that produce numbers exceeding the space available in a computer’s memory, the ability of the algorithm itself to solve the problem is called into question.

As a consequence, from the effectiveness perspective, algorithms may not always reach the exact solution or the best possible solution, as they may include a level of approximation ranging from a second-best solution,Footnote 4 to a very low level of accuracy. In this case, computer scientists use the term ‘soft computing’ (i.e., the use of algorithms that are tolerant of imprecision, uncertainty, partial truth, and approximation), because the problems they address may not be solvable at all or may be solvable only through an excessively time-consuming process.Footnote 5
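A toy example makes the trade-off tangible: the following greedy heuristic partitions numbers into two groups with balanced sums, a problem whose exact solution is computationally hard. The heuristic answers quickly but may return a second-best split; the numbers are purely illustrative.

```python
# A minimal sketch of 'soft computing' in the sense used above: a greedy
# heuristic that trades optimality for speed.

def greedy_partition(values):
    """Assign each value to the currently lighter group (approximate)."""
    group_a, group_b = [], []
    sum_a = sum_b = 0
    for v in sorted(values, reverse=True):
        if sum_a <= sum_b:
            group_a.append(v); sum_a += v
        else:
            group_b.append(v); sum_b += v
    return group_a, group_b, abs(sum_a - sum_b)

# -> ([8, 5, 4], [7, 6], 4): a second-best split, since the optimal
# partition {8, 7} vs {6, 5, 4} has difference 0.
print(greedy_partition([8, 7, 6, 5, 4]))
```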

Accordingly, the use of these types of algorithms makes it possible to provide solutions to hard problems, though these solutions, depending on the type of problem, may not always be the optimal ones. Given the ubiquitous use of algorithms processing our data and consequently affecting our personal decisions, it is important to understand on which occasions we may (or should) not fully trust the algorithm and add a human in the loop.Footnote 6

15.1.2 The Power of Algorithms

According to Neyland,Footnote 7 we may distinguish between two types of power: one exercised by algorithms, and one exercised across algorithms. The first is the traditional one, based on the ability of algorithms to influence and steer particular effects. The second is based on the fact that ‘algorithms are caught up within a set of relations through which power is exercised’.Footnote 8 In this sense, it is possible to affirm that the groups of individuals that, at different stages, play a role in the definition of the algorithm share a portion of power.

In practice, one may distinguish between two levels of analysis. Under the first, for instance, when we type a query into a search engine, the search algorithm activates and identifies the best results related to the keywords inserted, providing a ranked list of results. These results are based on a set of variables that depend on the context of the keywords, but also on the trustworthiness of the source,Footnote 9 on the individual’s previous search history, and so forth. The list of results available will then steer the decisions of the individual and affect his/her interpretation of the information searched for. Such power should not be underestimated, because the algorithm has the power to restrict the options available (e.g., omitting some content because it is evaluated as untruthful or irrelevant) or to make it more likely that a specific option is selected. If this can be qualified as the added value of algorithms able to correct the flaws of human reasoning, which include myopia, framing, loss aversion, and overconfidence,Footnote 10 then it also shows the power of the algorithm over individual decision-making.Footnote 11

Under the second level of analysis, one may widen the view, taking into account the criteria that are used to identify the search results, the online information that is indexed, the computer scientists who set those variables, the company that distributes the algorithm, the public or private company that uses the algorithm, and the individuals who may steer the selection of content. All these elements have intertwining relationships that reveal a more distributed allocation of power – and, as a consequence, a quest for shared accountability and liability systems.

15.1.3 The Use of Algorithms in Content Moderation

In this chapter, the analysis will focus on those algorithms that are used for content detection and control on user-generated platforms, so-called content moderation. Big Internet companies have always used filtering algorithms to detect and classify the enormous quantity of data uploaded daily. Automated content filtering is not a new concept on the Internet. Since the first years of Internet development, many tools have been deployed to analyse and filter content, the most common and best known among them being those adopted for spam detection or hash matching. For instance, spam detection tools identify content received in one’s email address, distinguishing between clean emails and unwanted content on the basis of sharply defined criteria derived from previously observed keywords, patterns, or metadata.Footnote 12
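A deliberately simple sketch of such keyword-based filtering is shown below; real spam filters also use patterns, metadata and statistical models, and the word list here is purely illustrative.

```python
# A toy keyword-based spam filter: mail is classified against sharply
# defined criteria derived from previously observed keywords.

SPAM_KEYWORDS = {"lottery", "winner", "free money", "click here"}

def is_spam(message: str) -> bool:
    text = message.lower()
    return any(keyword in text for keyword in SPAM_KEYWORDS)

print(is_spam("You are our lottery WINNER, click here!"))  # True
print(is_spam("Minutes of yesterday's project meeting"))   # False
```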

Nowadays, algorithms used for content moderation are widely diffused, having the advantage of scalability. Such systems promise to make the process much easier, quicker, and cheaper than would be the case using human labour.Footnote 13

For instance, the LinkedIn network published an update of the algorithms used to select the best matches between employers and potential employees.Footnote 14 The first steps of its content moderation are worth describing. As a first step, the algorithms check and verify the compliance of the published content with the platform rules (leading to a potential downgrade of visibility or a complete ban in case of non-compliance). Then, the algorithms evaluate the interactions triggered by the posted content (such as sharing, commenting, or reporting by other users). Finally, the algorithms weigh such interactions, deciding whether the post will be demoted for low quality (a low interaction level) or disseminated further for its high quality.Footnote 15
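The following stylised sketch renders these three steps in code. It is a reading of the published description, not LinkedIn’s actual code; the banned terms, weights and threshold are illustrative assumptions.

```python
# A stylised three-step moderation pipeline; all terms, weights and the
# threshold are illustrative assumptions.

BANNED_TERMS = {"scam", "fake job"}  # illustrative placeholder list

def violates_platform_rules(post: dict) -> bool:
    # Step 1: check compliance of the content with the platform rules.
    return any(term in post["text"].lower() for term in BANNED_TERMS)

def moderate(post: dict) -> str:
    if violates_platform_rules(post):
        return "visibility downgraded or banned"
    # Step 2: evaluate the interactions the content triggered.
    shares, comments, reports = post["shares"], post["comments"], post["reports"]
    # Step 3: weigh the interactions into a quality score and decide.
    score = 2 * shares + comments - 5 * reports
    return "disseminated further" if score > 10 else "demoted for low quality"

post = {"text": "We are hiring!", "shares": 8, "comments": 3, "reports": 0}
print(moderate(post))  # -> 'disseminated further' (score 19 > threshold 10)
```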

As the example of the LinkedIn algorithm clearly shows, the effectiveness of the algorithm depends on its ability to accurately analyse and classify content in its context and potential interactions. The capability to parse the meaning of a text is highly relevant for making important distinctions in ambiguous cases (e.g., when differentiating between contemptuous speech and irony).

For this task, the industry has increasingly turned to machine learning to train its programmes to become more context sensitive. Although there are high expectations regarding the ability of content moderation tools, one should not underestimate the risks of overbroad censorship,Footnote 16 violation of the freedom of speech principle, and biased decision-making against minorities and non-English speakers.Footnote 17 The risks are even more problematic in the case of hate speech, an area where the recent interventions of European institutions are pushing for more human and technological investment by IT companies, as detailed in the next section.

15.2 The Fight against Hate Speech Online

Hate speech is not a new phenomenon. Digital communication may be qualified only as a new arena for its dissemination. The features of social media pave the way to a wider reach of harmful content. ‘Sharing’ and ‘liking’ lead to a snowball effect, which allows the content to have a ‘quick and global spread at no extra cost for the source’.Footnote 18 Moreover, users see in the pseudonymity allowed by social media an opportunity to share harmful content without bearing any consequence.Footnote 19 In recent years, there has been a significant increase in the availability of hate speech in the form of xenophobic, nationalist, Islamophobic, racist, and anti-Semitic content in online communication.Footnote 20 Thus the dissemination of hate speech online is perceived as a social emergency that may lead to individual, political, and social consequences.Footnote 21

15.2.1 A Definition of Hate Speech

Hate speech is generally defined as speech ‘designed to promote hatred on the basis of race, religion, ethnicity, national origin’ or other specific group characteristics.Footnote 22 Although several international treaties and agreements do include hate speech regulation,Footnote 23 at the European level, such an agreed-upon framework is still lacking. The point of reference available until now is the Council Framework Decision 2008/913/JHA on Combatting Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law.Footnote 24 As emerges from the title, the focus of the decision is the approximation of Member States’ laws regarding certain offences involving xenophobia and racism, whereas it does not include any references to other types of motivation, such as gender or sexual orientation.

The Framework Decision 2008/913/JHA should have been implemented by Member States by November 2010. However, the implementation was less effective than expected: not all Member States have adapted their legal frameworks to the European provisions.Footnote 25 Moreover, in the countries where implementation occurred, the legislative interventions followed different approaches, depending on the national approach to hate speech: either the inclusion of the offence within the criminal code or the adoption of special legislation on the issue. The choice is not without effects, as the procedural provisions applicable to special legislation may differ from those applicable to offences included in the criminal code.

Given the limited effect of the hard law approach, the EU institutions moved to a soft law approach regarding hate speech (and, more generally, also illegal content).Footnote 26 Namely, EU institutions moved toward the use of forms of co-regulation where the Commission negotiates a set of rules with the private companies, under the assumption that the latter will have more incentives to comply with agreed-upon rules.Footnote 27

As a matter of fact, on 31 May 2016, the Commission adopted a Code of Conduct on countering illegal hate speech online, signed by the biggest players in the online market: Facebook, Google, Microsoft, and Twitter.Footnote 28 The Code of Conduct requires that the IT company signatories to the code adapt their internal procedures to guarantee that ‘they review the majority of valid notifications for removal of illegal hate speech in less than 24 hours and remove or disable access to such content, if necessary’.Footnote 29 Moreover, according to the Code of Conduct, the IT companies should provide for a removal notification system which allows them to review the removal requests ‘against their rules and community guidelines and, where necessary, national laws transposing the Framework Decision 2008/913/JHA’.

As is evident, the approach taken by the European Commission is focused more on the timely removal of allegedly hateful speech than on the procedural guarantees that such a private enforcement mechanism should adopt in order not to unreasonably limit the freedom of speech of users. The most recent evaluation of the effects of the Code of Conduct on hate speech shows an increasing number of notifications being evaluated and eventually leading to the removal of hate speech content within an ever-shorter time frame.Footnote 30

In order to achieve such results, the signatory companies adopted a set of technological tools for assessing and evaluating the content uploaded on their platforms. In particular, they fine-tuned their algorithms in order to detect potentially harmful content.Footnote 31 According to the figures provided by the IT companies regarding flagged content, human labour alone could not achieve such a task.Footnote 32 However, such algorithms may only flag content based on certain keywords, which are continuously updated but always lag behind the evolution of language. And, most importantly, they may still misinterpret context-dependent wording.Footnote 33 Hate speech is a type of language that is highly context sensitive, as the same word may radically change its meaning when used in different places over time. Moreover, algorithms may be improved and trained in one language, but not in other languages that are less prominent in online communication. As a result, an algorithm that works only through the classification of certain keywords cannot attain the level of complexity of human language and runs the risk of producing unexpected false positives and negatives in the absence of context.Footnote 34
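A toy example illustrates why purely keyword-based flagging misfires without context; the keyword and both sentences are invented for illustration only.

```python
# Why keyword matching without context produces both kinds of error.

HATE_KEYWORDS = {"vermin"}  # invented placeholder keyword

def naive_flag(text: str) -> bool:
    return any(word in text.lower() for word in HATE_KEYWORDS)

# False positive: a news report quoting hate speech in order to condemn it.
print(naive_flag("The politician was criticised for calling migrants 'vermin'."))  # True

# False negative: hateful meaning conveyed without any listed keyword.
print(naive_flag("People like them should go back to where they crawled from."))   # False
```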

15.2.2 The Human Intervention in Hate Speech Detection and Removal

One of the strategies able to reduce the risk of structural over-blocking is the inclusion of some human involvement in the identification and analysis of potential hate speech content.Footnote 35 Such human involvement can take different forms, either internal content checking or external content checking.Footnote 36

In the first case, IT companies allocate to teams of employees the task of verifying the sensitive cases in which the algorithm was not able to determine whether or not the content is contrary to community standards.Footnote 37 Given the high number of doubtful cases, the employees work under stressful conditions.Footnote 38 They are asked to evaluate the potentially harmful content within a very short time frame, in order to provide a decision on whether to take the content down. This decision then provides additional feedback to the algorithm, which learns the lesson. In this framework, the algorithms automatically identify pieces of potentially harmful content, and the people tasked with confirming this barely have time to make a meaningful decision.Footnote 39

External content checking instead involves ‘trusted flaggers’ – that is, individuals or entities considered to have particular expertise and responsibilities for the purposes of tackling hate speech. Such notifiers can range from individuals or organised networks of private organisations, civil society organisations, and semi-public bodies, to public authorities.Footnote 40

For instance, YouTube defines trusted flaggers as individual users, government agencies, and NGOs that have identified expertise, (already) flag content frequently with a high rate of accuracy, and are able to establish a direct connection with the platform. It is interesting to note that YouTube does not fully delegate the content detection to trusted notifiers but rather affirms that ‘content flagged by Trusted Flaggers is not automatically removed or subject to any differential policy treatment – the same standards apply for flags received from other users. However, because of their high degree of accuracy, flags from Trusted Flaggers are prioritized for review by our teams’.Footnote 41

15.3 The Open Questions in the Collaboration between Algorithms and Humans

The added value of the human intervention in the detection and removal of hate speech is evident; nonetheless, concerns may still emerge as regards such an involvement.

15.3.1 Legal Rules versus Community Standards

As hinted previously, both the algorithms and the humans involved in content detection and removal of hate speech evaluate content vis-à-vis the community standards adopted by each platform. This distinction is clearly drawn in the YouTube trusted flagger programme, which states that ‘the Trusted Flagger program exists exclusively for the reporting of possible Community Guideline violations. It is not a flow for reporting content that may violate local law. Requests based on local law can be filed through our content removal form’.

These standards, however, do not fully overlap with the legal definition provided by EU law, pursuant to the Framework Decision 2008/913/JHA.

Table 15.1 shows that the definitions provided by the IT companies widen the scope of the prohibition on hate speech to sex, gender, sexual orientation, disability or disease, age, veteran status, and so forth. This may be interpreted as achieving a higher level of protection. However, the breadth of the definition is not always coupled with a correspondingly detailed definition of the selected grounds. For instance, the YouTube community standards list the previously mentioned set of attributes, providing some examples of hateful content. But the standard only sets out two clusters of cases: encouragement of violence against individuals or groups based on the attributes, such as threats, and the dehumanisation of individuals or groups (for instance, calling them subhuman, comparing them to animals, insects, pests, disease, or any other non-human entity).Footnote 45 The Facebook Community policy provides a better example, as it includes a more detailed description of the increasing levels of severity attached to three tiers of hate speech content.Footnote 46 In each tier, keywords are provided to show the type of content that will be identified (by the algorithms) as potentially harmful.

Table 15.1 Hate speech as defined by several major IT companies

Facebook definition:Footnote 42 What does Facebook consider to be hate speech? Content that attacks people based on their actual or perceived race, ethnicity, national origin, religion, sex, gender or gender identity, sexual orientation, disability or disease is not allowed. We do, however, allow clear attempts at humour or satire that might otherwise be considered a possible threat or attack. This includes content that many people may find to be in bad taste (example: jokes, stand-up comedy, popular song lyrics, etc.).

YouTube definition:Footnote 43 Hate speech refers to content that promotes violence against or has the primary purpose of inciting hatred against individuals or groups based on certain attributes, such as: (1) race or ethnic origin; (2) religion; (3) disability; (4) gender; (5) age; (6) veteran status; (7) sexual orientation/gender identity.

Twitter definition:Footnote 44 Hateful conduct: You may not promote violence against or directly attack or threaten other people on the basis of race, ethnicity, national origin, sexual orientation, gender, gender identity, religious affiliation, age, disability, or serious disease. We also do not allow accounts whose primary purpose is inciting harm towards others on the basis of these categories.

Framework Decision 2008/913/JHA: All conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, colour, religion, descent or national or ethnic origin.

As a result, such wide hate speech definitions within the Community Guidelines or Standards become de facto rules of behaviour for users of these services.Footnote 47 The IT companies are allowed to evaluate a wide range of potentially harmful content published on their platforms, even though this content may not be illegal according to the Framework Decision 2008/913/JHA.

This has two consequences. First, there is an extended privatisation of enforcement as regards conduct that is not covered by legal provisions, with the risk of excessive interference with users’ right to freedom of expression.Footnote 48 Algorithms deployed by IT companies will then have the power to draw the often-thin line between the legitimate exercise of the right to free speech and hate speech.Footnote 49

Second, the extended notion of harmful content provided by community rules imposes a wide obligation on platforms regarding the flow of communication. This may conflict with the liability regime adopted pursuant to the relevant EU law, namely the e-Commerce Directive, which draws a three-tier distinction across intermediary liability and, most importantly, prohibits any general monitoring obligation on ISPs pursuant to art. 15.Footnote 50 As will be addressed later, in the section on liability, striking the balance between sufficient incentives to block harmful content and over-blocking effects is crucial to safeguarding the freedom of expression of users.

15.3.2 Due Process Guarantees

As a consequence of the previous analysis, the issue of users’ procedural guarantees emerges.Footnote 51 A first question relates to the availability of internal mechanisms that allow users to be notified about potentially harmful content, to be heard, and to seek review of or appeal against the decisions of IT companies. Although the strongest position safeguarding freedom of expression and the fair trial principle would suggest that any restriction (i.e., any removal of potentially harmful content) should be subject to judicial intervention,Footnote 52 the number of decisions adopted on a daily basis by IT companies allows neither the intervention of potential victims and offenders nor that of the judicial system. It should be noted that the Code of Conduct does not provide for any specific requirement in terms of judicial procedures or alternative dispute resolution mechanisms; thus it is left to the IT companies to introduce an appeal mechanism.

Safeguards to limit the risk of removal of legal content are provided instead in the Commission Recommendation on Tackling Illegal Content Online,Footnote 53 which includes hate speech within the wider definition of illegal content.Footnote 54 The Recommendation points to automated content detection and removal and underlines the need for a counter-notice procedure in case of removal of legal content. The procedure involves an exchange between the user and the platform, which should provide a reply: where the user provides evidence that the content may not be qualified as illegal, the platform should restore the removed content without undue delay or allow the user to re-upload it; where the decision is negative, the platform should give reasons for it.

Among the solutions proposed by the signatories to the Code of Conduct, Google provides for a review mechanism allowing users to present an appeal against the decision to take down any uploaded content.Footnote 55 The evaluation of the justifications provided by the user is then processed internally, and the final decision is sent afterward to the user, with limited or no explanation.

A different approach is adopted by Facebook. In September 2019, the social network announced the creation of an ‘Oversight Board’.Footnote 56 The Board has the task of deciding appeals in selected cases that address potentially harmful content. Although the detailed regulation concerning the activities of the Board is still to be drafted, it is clear that it will not be able to review all the content under appeal.Footnote 57 While this approach has been praised by scholars, several questions remain open: the transparency of the selection of the people entrusted with the role of adjudication, the type of explanation given for the decisions taken, the risk of capture (in particular for the Oversight Board), and so on. At the moment, these questions are still unanswered.

15.3.3 Selection of Trusted Flaggers

As mentioned previously in Section 15.2.2, the intervention of trusted flaggers in content detection and removal has become a crucial element in improving the results of that process. The selection process used to identify and recruit trusted flaggers, however, is not always clear.

According to the Commission Recommendation, the platforms should ‘publish clear and objective conditions’ for determining which individuals or entities they consider as trusted flaggers. These conditions include expertise and trustworthiness, and also ‘respect for the values on which the Union is founded as set out in Article 2 of the Treaty on European Union’.Footnote 58

Such a level of transparency does not match the practice: although the Commission’s monitoring exercise provides data regarding at least four IT companies, including the percentage of notifications received from users vis-à-vis trusted flaggers as regards hate speech,Footnote 59 apart from the previously noted YouTube programme, none of the other companies provides a procedure for becoming a trusted flagger. Nor is any guidance provided on whether the selection of trusted notifiers is a one-time accreditation process or rather an iterative process in which the privilege is monitored and can be withdrawn.Footnote 60

This issue should not be underestimated, as the risk of rubberstamping the decisions of trusted flaggers may lead to over-compliance and excessive content takedown.Footnote 61

15.3.4 Liability Regime

When IT companies deploy algorithms and recruit trusted flaggers in order to proactively detect and remove potentially harmful content, they may run the risk of losing their exemption from liability under the e-Commerce Directive.Footnote 62 According to art. 14 of the Directive, hosting providers are exempted from liability when they meet the following conditions:

  1. Service providers provide only for the storage of information at the request of third parties;

  2. Service providers do not play an active role of such a kind as to give it knowledge of, or control over, that information.

In L’Oréal v. eBay,Footnote 63 the Court of Justice clarified that the fact that an online platform provides for the storage of content (in that specific case, offers for sale), sets the terms of its service, and receives revenues from such service does not in itself deprive the hosting provider of the exemptions from liability. The position changes, in contrast, when the hosting provider ‘has provided assistance which entails, in particular, optimising the presentation of the offers for sale in question or promoting those offers’.

This indicates that the active role of the hosting provider is only to be found when it intervenes directly in user-generated content. If the hosting provider adopts technical measures to detect and remove hate speech, does it forfeit its neutral position vis-à-vis the content?

The liability exemption may still apply, but only if one of two further conditions set by art. 14 e-Commerce Directive is met. Namely,

  1. hosting providers do not have actual knowledge of the illegal activity or information and, as regards claims for damages, are not aware of facts or circumstances from which the illegal activity or information is apparent; or

  2. upon obtaining such knowledge or awareness, they act expeditiously to remove or to disable access to the information.

It follows that proactive measures taken by the hosting provider may result in that platform obtaining knowledge or awareness of illegal activities or illegal information, which could thus lead to the loss of the liability exemption. However, if the hosting provider acts expeditiously to remove or to disable access to content upon obtaining such knowledge or awareness, it will continue to benefit from the liability exemption.

From a different perspective, it is possible that the development of technological tools may have a reverse effect as regards the monitoring obligations imposed on IT companies. According to art. 15 of the e-Commerce Directive, no general monitoring obligation may be imposed on hosting providers as regards illegal content. But in practice, algorithms may already perform such tasks. Would this indirectly legitimise monitoring obligations imposed by national authorities?

This is the question an Austrian court posed to the CJEU as regards hate speech published on the social platform Facebook.Footnote 65 The preliminary reference arose from the following case: in 2016, the former leader of the Austrian Green Party, Eva Glawischnig-Piesczek, was the subject of a series of posts published on Facebook by a fake account. The posts included insulting comments about the politician, in German, accompanied by her image.Footnote 66

Although Facebook complied with the injunction of the first instance court, blocking access throughout Austria to the original image and comments, the social platform appealed against the decision. After the appeal decision, the case reached the Oberste Gerichtshof (Austrian Supreme Court). Upon analysing the case, the Austrian Supreme Court affirmed that Facebook could be considered an abettor to the unlawful comments; thus it could be required to take steps to prevent the repetition of publications with identical or similar wording. However, an injunction imposing such a proactive role on Facebook could indirectly impose a monitoring obligation, in conflict not only with art. 15 of the e-Commerce Directive but also with the previous jurisprudence of the CJEU. The Supreme Court therefore decided to stay the proceedings and make a preliminary reference to the CJEU. It asked, in particular, whether art. 15(1) of the e-Commerce Directive precludes a national court from ordering a hosting provider, which has failed to expeditiously remove illegal information, not only to remove that specific information but also other information identical in wording.Footnote 67

The CJEU decided the case in October 2019. It held that, as Facebook was aware of the existence of illegal content on its platform, it could not benefit from the exemption from liability under art. 14 of the e-Commerce Directive. In this sense, the Court affirmed that, according to recital 45 of the e-Commerce Directive, national courts cannot be prevented from requiring a hosting provider to stop or prevent an infringement. The Court then followed the interpretation of the AG in the case,Footnote 68 affirming that no violation of the prohibition of monitoring obligations in art. 15(1) of the e-Commerce Directive occurs where a national court orders a platform to stop and prevent illegal activity if there is a genuine risk that the information deemed illegal can easily be reproduced. In those circumstances, it was legitimate for a court also to prevent the publication of ‘information with an equivalent meaning’; otherwise the injunction would simply be circumvented.Footnote 69

Regarding the scope of the monitoring activity allocated to the hosting provider, the CJEU acknowledged that the injunction cannot impose excessive obligations on an intermediary and cannot require it to carry out an independent assessment of equivalent content deemed illegal; automated technologies could therefore be used to automatically detect, select, and take down equivalent content.
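
To make the distinction concrete, the following minimal Python sketch shows how a platform might screen new posts against a post already deemed illegal: an exact check after normalization for ‘identical’ content, and a crude surface-similarity threshold standing in for ‘equivalent’ content. All names, thresholds, and the matching technique are illustrative assumptions, not a description of any platform’s actual filter.

```python
# Hypothetical sketch: flagging "identical" and "equivalent" content.
from difflib import SequenceMatcher

def normalize(text: str) -> str:
    # Case-fold and collapse whitespace so trivial edits do not defeat the match.
    return " ".join(text.lower().split())

def is_identical(post: str, banned: str) -> bool:
    return normalize(post) == normalize(banned)

def is_equivalent(post: str, banned: str, threshold: float = 0.9) -> bool:
    # Surface similarity as a crude proxy for "equivalent meaning"; real
    # systems would use embeddings or classifiers, with the same over-blocking
    # risk discussed in the text (e.g., a journalist quoting the post to report on it).
    ratio = SequenceMatcher(None, normalize(post), normalize(banned)).ratio()
    return ratio >= threshold

banned = "Politician X is a corrupt traitor"
print(is_identical("politician  x is a corrupt traitor", banned))    # True
print(is_equivalent("Politician X is a corrupt traitor!!!", banned))  # True
```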

The CJEU decision tries as far as possible to strike a balance between freedom of expression and the freedom to conduct a business, but the wide interpretation of art. 15 of the e-Commerce Directive can have indirect negative effects, in particular in view of the opportunity for social networks to monitor, through technological tools, the upload of identical or equivalent information.Footnote 70 This approach safeguards the incentives for hosting providers to check for the availability of harmful content without incurring additional levels of liability. However, the use of technical tools may pave the way for additional false positives, as such tools may remove or block content that is used lawfully, such as journalistic reporting on a defamatory post – thus reopening the problem of over-blocking.

15.4 Concluding Remarks

We are presently witnessing an intense debate about technological advancements in algorithms and their deployment in various domains and contexts. In this context, content moderation and communication governance on digital platforms have emerged as a prominent but increasingly contested field of application for automated decision-making systems. Major IT companies are shaping the communication ecosystem in large parts of the world, allowing people to connect in various ways across the globe, but also offering opportunities to upload harmful content. The rapid growth of hate speech content has triggered the intervention of national and supranational institutions seeking to restrict such unlawful speech online. In order to overcome the differences emerging at the national level and better engage international IT companies, the EU Commission adopted a co-regulatory approach, inviting regulators and regulatees to the same table so as to define shared rules.

This approach has the advantage of providing incentives for IT companies to comply with shared rules while non-compliance with the voluntary commitments does not lead to any liability or sanction; the risk of over-blocking may thus be avoided or at least reduced. Nonetheless, considerable incentives to delete not only illegal but also legal content remain. The community guidelines and standards presented herein show that the definition of hate speech and harmful content is not uniform, and each platform may set the boundaries of these concepts differently. When algorithms apply criteria defined on the basis of such differing concepts, they may unduly limit users’ freedom of speech, as they will lead to the removal of lawful statements.

The Commission approach explicitly demands proactive monitoring: ‘Online platforms should, in light of their central role and capabilities and their associated responsibilities, adopt effective proactive measures to detect and remove illegal content online and not only limit themselves to reacting to notices which they receive’. But this imposes de facto monitoring obligations, which may be carried out through technical tools that are far from free of flaws and bias.

From the technical point of view, the introduction of a human in the loop, as in the cases of trusted flaggers or the Facebook Oversight Board, does not resolve the questions of effectiveness, accessibility, and transparency raised by the mechanisms adopted. Both strategies show, however, that some space for stronger accountability mechanisms can be found, though the path to be pursued is still long.

16 Smart Contracts and Automation of Private Relationships

Pietro Sirena and Francesco Paolo Patti
16.1 Introduction

Technological advancements and cyberspace have forced us to reconsider the existing limitations of private autonomy. Within the field of contract law, the public dimension affects private interests through several regulatory strategies. These include the application of mandatory rules and enforcement mechanisms capable of securing certain results and ensuring a sufficient level of effectiveness. This is particularly the case in European contract law, where the law pursues regulatory goals related to the establishment and enhancement of a common European market.Footnote 1

The digital dimension represents a severe challenge for European and national private law.Footnote 2 In order to address the implications of the new technologies for private law, recent studies have been conducted inter alia on algorithmic decisions, digital platforms, the Internet of Things, artificial intelligence, data science, and blockchain technology. The broader picture seems to indicate that, in the light of the new technologies, the freedom to conduct business has often turned into power. Digital firms are no longer only market participants: rather, they are becoming market makers capable of exerting regulatory control over the terms on which others can sell goods and services.Footnote 3 In so doing, they are replacing the exercise of states’ territorial sovereignty with functional sovereignty. This situation has raised concern in different areas of law and recently also in the field of competition law.Footnote 4

As Lawrence Lessig pointed out, in the mid-1990s cyberspace became a new target for libertarian utopianism, a place where freedom from the state would reign.Footnote 5 According to this belief, the society of this space would be a fully self-ordering entity, cleansed of governors and free from political hacks. Lessig was not a believer in this utopian view. He correctly pointed out the need to govern cyberspace, as he understood that, left to itself, cyberspace would become a perfect tool of ‘Control. Not necessarily control by government.’Footnote 6 These observations may be connected to the topic of private authorities who exercise power over other private entities with limited control by the state. The issue was tackled in a study by an Italian scholar that is now more than forty years old,Footnote 7 and more recently by several contributions on different areas of private law.Footnote 8 The emergence of private authorities has also been affirmed in the context of global governance.Footnote 9 These studies were able to categorize the forms and consequences of private authorities, identify imbalances of power, envisage power-related rules of law, and question the legitimacy of private power. One of the main problems is that private authorities can be resistant to the application and enforcement of mandatory rules.

The present chapter aims to investigate whether and how blockchain technology platforms and smart contracts could be considered a modern form of private authority, one which at least partially escapes the application of mandatory rules and traditional enforcement mechanisms.Footnote 10 Blockchain technology presents itself as democratic in nature, as it is based on an idea of radical decentralization.Footnote 11 This stands in stark contrast to the giant Big Tech corporations operating over the internet in the fields of social networking, online search, online shopping, and so forth; with blockchain technology, users put their trust in a network of peers. Nevertheless, as happened with the internet, market powers could create monopolies or highly imbalanced legal relationships.Footnote 12 In this sense, contractual automation seems to play a key role in understanding the potentialities and the risks involved in the technology. In general terms, one of the main characteristics of a smart contract is its self-executing character, which should eliminate the possibility of a breach of contract. But smart contracts may also provide for effective self-help against breaches of traditional contracts. Finally, when implemented on blockchain platforms, smart contract relationships may also benefit from the application of innovative dispute resolution systems, which present themselves as entirely independent from state authorities.

16.2 Smart Contracts: Main Characteristics

In his well-known paper entitled ‘Formalizing and Securing Relationships on Public Networks’, Nick Szabo described how cryptography could make it possible to write computer software able to resemble contractual clauses and bind parties in a way that would almost eliminate the possibility of breaching an agreement.Footnote 13 Szabo’s paper was just a first step, and nowadays basically every scholar interested in contract law can expound on the essentials of how a smart contract functions. Some jurisdictions, such as Italy, have also enacted rules defining a smart contract.Footnote 14 The great interest is due to the growing adoption of Bitcoin and other blockchain-based systems, such as Ethereum.Footnote 15 The latter provides the technology necessary to carry out Szabo’s ideas.

Smart contracts do not differ greatly from natural language agreements with respect to the parties’ aims or interests.Footnote 16 In reality, except where the decision to conclude the contract is taken by an ‘artificially intelligent agent’, they merely form a technological infrastructure that makes transactions cheaper and safer.Footnote 17 The main quality of a smart contract lies in the automation of the contractual relationship, as performance is triggered by an algorithm upon the occurrence of certain events. In this sense, a distinction is often drawn between the notions of ‘smart contract’ and ‘smart legal contract’, with the result that contractual automation in the majority of cases affects only performance.Footnote 18 The contract as such (i.e., the legal contract), in contrast, is still the product of a meeting of the minds, through an offer and an acceptance.Footnote 19 In many cases, this induces parties to ‘wrap’ the smart contract in paper and to ‘nest’ it in a certain legal system.Footnote 20

It is therefore often argued that ‘smart contract’ is a misnomer, as the ‘smart’ part of the contract in reality affects only performance.Footnote 21 In addition, smart contracts are not intelligent but rely on an ‘If-Then’ principle, meaning, for instance, that a given performance will be executed only when the agreed-upon amount of money is sent to the system.Footnote 22 These criticisms seem to be correct, and they go some way towards demystifying the phenomenon,Footnote 23 which is sometimes described as a game-changer that will impact every contractual relationship.Footnote 24 Discussions are beginning to be held on automated legal drafting, through which contractual clauses are shaped on the basis of big data by machine learning tools and predictive technologies; for now, however, these do not really affect the emerging technology of smart contracts on blockchain platforms.Footnote 25 The latter are based on rather simple software protocols and other code-based systems, which are programmed ex ante without the intervention of artificial intelligence.Footnote 26

Nevertheless, the importance of the ‘self-executing’ and ‘self-enforcing’ character of smart contracts should not be understated. Most of the benefits arising from the new technology are in fact based on these two elements, which represent a source of innovation for general contract law. The ‘self-executing’ character should eliminate the occurrence of contractual breaches, whereas the ‘self-enforcing’ character makes it unnecessary to turn to the courts in order to obtain legal protection.Footnote 27 In addition, code theoretically requires no interpretation, as it should leave no ambiguous terms in need of explanation.Footnote 28 Currently, it is not clear whether smart contracts will diminish transaction costs, given the complexity of digital solutions and the need to acquire the necessary knowledge.Footnote 29 For reasons that will be outlined, implementation costs do not seem to hinder the potential spread of smart contracts, especially in the fields of consumer contracts and the Internet of Things.

16.3 Self-Execution and Self-Enforcement

As stated before, through the new technology one or more aspects of the contract’s execution become automated, and once the parties have entered into the contract, they cannot prevent performance from being executed. Smart contracts use blockchain to ensure the transparency of the contractual relationship and to create trust in the capacity to execute the contract, a capacity which depends on the technology involved. As previously stated, the operation is based on ‘If-Then’ statements, which are among the most basic building blocks of any computer program.
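
By way of illustration, the following Python sketch (all names and amounts are purely hypothetical) reduces such a contract to its ‘If-Then’ core: once the condition of full payment is met, the counter-performance is released automatically and neither party can intervene.

```python
# Minimal sketch of the "If-Then" logic of a smart contract: once the
# agreed amount is received, the counter-performance follows automatically.
from dataclasses import dataclass

@dataclass
class SmartSale:
    price: int                    # agreed price, e.g. in a token's smallest unit
    asset_delivered: bool = False
    paid: int = 0

    def receive_payment(self, amount: int) -> None:
        self.paid += amount
        # The "If" condition: full payment received.
        if self.paid >= self.price and not self.asset_delivered:
            self.deliver_asset()  # the "Then" branch executes automatically

    def deliver_asset(self) -> None:
        # On an actual blockchain this transfer could not be stopped or
        # reversed by either party once the condition is met.
        self.asset_delivered = True

sale = SmartSale(price=100)
sale.receive_payment(40)   # nothing happens: condition not yet met
sale.receive_payment(60)   # condition met: asset released
assert sale.asset_delivered
```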

Undeniably, such a technology can easily govern a simple contractual relationship, in which the system has only to determine whether a given amount of money has been paid in order to deliver something in return (e.g., a digital asset), or whether the performance is due because certain external conditions of the real world are met. Since a modification of the contractual terms of a smart contract implemented on a blockchain platform is hardly possible, execution appears certain, and personal trust or confidence in the counterparty is not needed.Footnote 30 This has led to the claim that in certain situations contracting parties will face the ‘cost of inflexibility’, as blockchain-based smart contracts are difficult to manipulate and therefore resistant to change.Footnote 31 In fact, smart contracts are built on the assumption that there will be no modifications after the conclusion of the contract. As a result, if or when circumstances relevant to the smart contract change, a whole new contract would need to be written.

‘Inflexibility’ is often considered a weakness of smart contracts.Footnote 32 Supervening events and changes of circumstances may require parties to intervene in the contractual arrangement and provide for amendments.Footnote 33 Legal systems therefore contain rules that may lead to a judicial adaptation of the contract, sometimes through a duty to renegotiate its content.Footnote 34 In this regard, smart contracts differ from traditional contracts, as they take an ex ante view instead of the law’s common ex post judicial assessment.Footnote 35

In reality, this inflexibility does not constitute a weakness of smart contracts. Instead, it makes clear that self-execution and self-enforcement can bring substantial benefits only in certain legal relationships, where the parties are interested in a simple and instantaneous exchange. Moreover, self-execution does not necessarily affect the entire agreement. Indemnity payouts, insurance triggers, and various other provisions of the contract could be automated and self-fulfilling, while other provisions may remain subject to an ordinary bargain and be expressed in natural language.Footnote 36 One can therefore correctly state that smart contracts automatically perform obligations which arise from legal contracts, but not necessarily all the obligations. Finally, it should be observed that future contingencies that impact the contractual balance, such as an increase in the price of raw materials, could be assessed through lines of code in order rationally to adapt the contractual performance.Footnote 37

The latter issue makes clear that the conditions for contractual performance often relate to the real, non-digital world outside of blockchains. It is therefore necessary to create a link between the real world and the blockchain. Such a link is provided by so-called oracles, which can be defined as interfaces through which information from the real world enters the ‘digital world’. There are different types of oracles,Footnote 38 and some scholars argue that their operation undermines the self-executing character of smart contracts, because execution is ultimately remitted to an external source.Footnote 39 Given the technology involved, oracles do not seem to impair the automated execution of smart contracts. The main challenge with oracles is that contracting parties need to trust these outside sources of information, whether they come from a website or a sensor. As oracles are usually third-party services, they are not subject to the blockchain’s security consensus mechanisms. Moreover, their mistakes or inaccuracies are not subject to the rules that govern breach of contract between the two contracting parties.
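
The oracle pattern can be illustrated with a short, purely hypothetical Python sketch: a parametric insurance contract cannot itself observe the weather, so a designated external feed supplies the relevant fact, and the contract executes on whatever that feed reports.

```python
# Hypothetical sketch of the oracle pattern: a parametric drought-insurance
# contract that pays out if reported rainfall falls below a threshold.
from typing import Callable

class DroughtInsurance:
    def __init__(self, oracle: Callable[[str], float], threshold_mm: float) -> None:
        self.oracle = oracle              # trusted outside source of facts
        self.threshold_mm = threshold_mm
        self.payout_triggered = False

    def settle(self, region: str) -> None:
        rainfall = self.oracle(region)
        # The contract trusts the oracle: a wrong feed yields a wrong outcome,
        # which is exactly the trust problem noted in the text.
        if rainfall < self.threshold_mm:
            self.payout_triggered = True

def fake_weather_feed(region: str) -> float:
    # Stand-in for a real sensor or web API outside the blockchain.
    return {"region-A": 12.5}.get(region, 100.0)

policy = DroughtInsurance(oracle=fake_weather_feed, threshold_mm=20.0)
policy.settle("region-A")
assert policy.payout_triggered
```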

In the light of the above, self-execution and self-enforcement assure the automated performance of the contract. Nevertheless, whether due to the incorrect intervention of an oracle, a technological dysfunction, or a programming error, things may go wrong and leave the contracting parties dissatisfied. In such cases, there may be an interest in unwinding the smart contract. According to a recent study,Footnote 40 this can be done in three ways. Needless to say, the parties can unwind the legal contract in the old-fashioned way by refunding what they have received from the other party, be it voluntarily or under judicial coercion. At any rate, it would be closer to the spirit of fully automated contracts if the termination of the contract and its unwinding could also be recorded in the computer code itself and thus carried out automatically.Footnote 41 Finally, it is theoretically possible to provide for technical modifications of the smart contract in the blockchain. The three options, as the author also argues,Footnote 42 are not easily feasible, and there is the risk of losing the advantages related to self-execution. It is therefore of paramount importance to devote attention to the self-help and dispute resolution mechanisms developed on blockchain platforms.Footnote 43

16.4 Automated Self-Help

The functioning of smart contracts may also give rise to a vast new array of self-help tools (i.e., enforcement mechanisms that do not require the intervention of state power). The examples of self-help that have recently been discussed relate to Internet of Things technology.Footnote 44 The cases under discussion concern self-enforcement devices that react automatically to a contractual breach and place the creditor at an advantage over the debtor. The latter, who is in breach, cannot raise any legal defence against automated self-help based on algorithms. Scholars who have addressed the issue stress the dangers connected to a pure exercise of private power through technology.Footnote 45

Among the most frequent examples is the lease contract, for which a smart contract could automatically send a withdrawal notice in case of a two-month delay in the payment of the rent instalment. If the lessee does not pay the instalment due within one month, the algorithm automatically locks the door and prevents the lessee from entering the apartment. Another example is the ‘starter interrupt device’, which can be connected to a bank loan used to buy a vehicle. If the owner does not pay the instalments, the smart contract prevents the vehicle from starting. Similar examples exist in the field of utilities (gas, electricity, etc.).Footnote 46 If the customer does not pay for the service, the utilities are no longer available. Looking at general contractual remedies, the potential of such self-help instruments appears almost unlimited. Automation could also affect the payment of damages or liquidated damages.
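
A minimal Python sketch of the ‘starter interrupt’ example may clarify how such a device collapses breach, remedy, and enforcement into code; the two-instalment grace threshold and all names are assumptions made purely for illustration.

```python
# Hypothetical sketch of an automated self-help device: a vehicle loan
# linked to a "starter interrupt" that disables the car after missed payments.
class StarterInterruptLoan:
    def __init__(self, instalment: int, grace_missed: int = 2) -> None:
        self.instalment = instalment
        self.grace_missed = grace_missed
        self.missed = 0
        self.vehicle_enabled = True

    def instalment_due(self, amount_paid: int) -> None:
        if amount_paid >= self.instalment:
            self.missed = 0
            self.vehicle_enabled = True       # curing the breach restores use
        else:
            self.missed += 1
            if self.missed >= self.grace_missed:
                # Enforcement happens in code, with no notice, court order,
                # or defence for the debtor - the legal problem discussed above.
                self.vehicle_enabled = False

loan = StarterInterruptLoan(instalment=300)
loan.instalment_due(0)   # first missed payment: still within the grace margin
loan.instalment_due(0)   # second missed payment: the car no longer starts
assert not loan.vehicle_enabled
```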

Self-help devices take advantage of technology and place an effective tool in creditors’ hands, one which – at the same time – reduces the costs of enforcement and significantly enhances the effectiveness of contractual agreements. This is mainly because recourse to a court is no longer necessary. Contractual automation may increase awareness of the importance of fulfilling obligations on time. Moreover, the reduction of enforcement costs may lead to a decrease in prices for diligent contracting parties. At any rate, as correctly pointed out, the described ‘technological enforcement’ – although effective – does not necessarily respect the requirements set by the law.Footnote 47 In other words, even if smart contracts are technologically enforceable, they are not necessarily also legally enforceable.Footnote 48 In the examples outlined previously, it is possible to imagine a withdrawal from the contract without due notice or the payment of an exorbitant sum of money as damages.

How should the law react to possible deviations between the code and the law? It seems that a kind of principle of equivalent treatment should provide guidance in resolving such cases:Footnote 49 limits that exist for the enforcement of traditional contracts should be extended to smart contracts. From a methodological point of view, practical difficulties in applying the law should not prevent an assessment of the (un)lawful character of certain self-help mechanisms. In cases where the law provides for mandatory proceedings or legal steps in order to enforce a right, the same should in principle apply to smart contracts.

Nevertheless, the evaluation of the lawfulness of self-help mechanisms should not be too strict, and should essentially be aimed at protecting fundamental rights – for instance, the right to housing. ‘Automated enforcement’ relies on party autonomy and cannot per se be considered an act of oppression exercised by a ‘private power’. Therefore, apart from the rights protected, the assessment should also take into account the characteristics of the contracting parties and the subject matter of the contract. In this regard, it has correctly been pointed out that EU law provides for some boundaries of private autonomy in consumer contracts, which apply to smart contracts.Footnote 50

For instance, the unfair terms directiveFootnote 51 indicates that clauses which exclude or hinder a consumer’s right to take legal action may create a significant imbalance in the parties’ rights and obligations.Footnote 52 The same is stated with respect to clauses irrevocably binding the consumer to terms with which she or he had no real opportunity of becoming acquainted before the conclusion of the contract.Footnote 53 According to the prevailing opinion, the scope of application of the unfair terms directive also covers smart contracts, even if the clauses are expressed through lines of code.Footnote 54

Undeniably, smart contracts may pose difficulties for consumers when it comes to exercising a right against illicit behaviour on the part of the business. At any rate, it would not be proper to consider self-help systems unlawful as such. The enforcement of EU consumer law is also ensured by public authorities,Footnote 55 which in the future may exercise control over the contractual automation processes adopted and require modifications to businesses’ computer protocols. If the self-help mechanism reacts to a breach by the consumer, it should not in principle be considered unfair. On the one hand, contractual automation may provide for lower charges, owing to the savings in enforcement costs. On the other hand, it could augment the reliability of consumers by excluding opportunistic choices and making them immediately aware of the consequences of a breach. Finally – as will be seen – technological innovation must not be seen only as a menace to consumers, as it could also improve the application of consumer law and, therefore, enhance its level of effectiveness.Footnote 56

16.5 Automated Application of Mandatory Rules

A lively debate concerns the application of mandatory rules in the field of smart contracts. The risk that this innovative technology could be used as an instrument to carry out unlawful activities, such as the conclusion of immoral or criminal contracts, is often pointed out.Footnote 57 Their mode of operation may render smart contracts and blockchain technology attractive to ill-intentioned people interested in engaging in illicit acts.

Among the mandatory rules that may be infringed by smart contracts, special attention is devoted to consumer law.Footnote 58 The characteristics of smart contracts make them particularly compatible with the interests of individual businesses in business-to-consumer relationships, as blockchain technology can guarantee a high level of standardization and potentially serve as a vehicle for the conclusion of mass contracts. On the application of mandatory consumer law to smart contracts, opinions differ significantly. According to one author, smart contracts will bring about the end of consumer law, as they may systematically permit businesses to escape its application.Footnote 59 The claim has also been made that automated enforcement in the sector of consumer contracts amounts to an illusion, as mandatory rules prevent the use of automated enforcement mechanisms.Footnote 60

Both opinions seem slightly overstated and fail to capture the most interesting aspect of smart consumer contracts. In fact, as has recently been discussed, technology and contractual automation may also be used as tools to enforce consumer law and augment its level of effectiveness.Footnote 61 Many consumers are indeed not aware of their rights or, even if they are, find it difficult to enforce them, owing to the costs involved and a lack of experience. In addition, most consumer contractual claims are of low value.

In this regard, a very good example is provided by the EU Regulation on compensation for long delays of flights.Footnote 62 The consumer has a right to fixed compensation, depending on the flight distance, ranging from 125 to 600 euros. For the reasons outlined previously, what often happens is that consumers do not claim compensation; the compensation scheme thus lacks effectiveness. In the interest of consumers, reimbursement through a smart contract device has been proposed to automate the process.Footnote 63 The latter would work on the basis of a reliable system of external interfaces.Footnote 64 The proposal seems feasible and is gaining attention, especially in Germany, where the introduction of such a smart compensation scheme in cases of flight cancellations or delays has been discussed in Parliament.Footnote 65
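
A Python sketch of the proposed logic follows. The distance bands mirror the fixed amounts of the Regulation (250/400/600 euros, halved in certain cases of moderate delay), but the simplified banding and the wiring to flight data are assumptions made for illustration only.

```python
# Hypothetical sketch of an automated flight-delay compensation scheme.
# A smart contract device could pay this out automatically once a trusted
# oracle confirms the delay, instead of waiting for the passenger to claim.
def compensation(distance_km: int, delay_hours: float) -> int:
    if delay_hours < 3:
        return 0                     # no right to compensation below 3 hours
    if distance_km <= 1500:
        base = 250
    elif distance_km <= 3500:
        base = 400
    else:
        base = 600
    # The Regulation allows a 50% reduction for moderate delays on the
    # longest routes (a simplification of art. 7(2)).
    if base == 600 and delay_hours < 4:
        base //= 2
    return base

print(compensation(1200, 3.5))   # 250
print(compensation(4000, 3.5))   # 300
```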

Two possible drawbacks attach to this type of legislative intervention. Owing to the wide, border-crossing distribution of the technology, the adoption of smart enforcement may produce strong distortions of international competition.Footnote 66 For instance, the imposition of a smart compensation model such as the one discussed in Germany for the delay or cancellation of flights may increase costs for airlines that operate predominantly in that country. In order not to harm the aims of the internal market, smart enforcement should thus be implemented at the European level.

Another danger of the proposed use of smart contract devices is ‘over-enforcement’.Footnote 67 The latter may be detrimental because it could induce businesses to abandon an activity in order to escape liability and sanctions. The described adoption of technology in cases of flight delays may lead to a digitalization of enforcement that drops the rate of unpaid compensation almost to zero. This scenario is not necessarily convenient for consumers, as the additional costs borne by airlines would probably be passed on to all customers through an increase in prices. The level of technology required to automatically detect every single delay of an airplane and grant compensation to the travellers would probably lead to an explosion in costs for the companies. While this may increase efficiency in the sector, it is questionable whether such a burden would be bearable for the airlines. This is not to say that strict enforcement is inherently evil: the enforcement of existing rules is of course a positive development. Nevertheless, the economic problems it may give rise to suggest that enforcement through technological devices should be considered an independent element that could in principle also require modifications of substantive law.Footnote 68 For instance, the technology could enable the recognition of ‘tailored’ amounts of compensation depending on the seriousness of the delay.Footnote 69

Many aspects remain uncertain, and it is not surprising that, as things stand, smart enforcement mechanisms are not (yet) at the core of legislative intervention.Footnote 70 In reality, the current regulatory approach appears to be quite the opposite. Legislators are not familiar with the new technologies and tend to lighten the obstacles that mandatory rules set for blockchain technology, with the aim of not harming its evolution.Footnote 71 In many legal systems, ‘regulatory sandboxes’ have been created,Footnote 72 in order to support companies exercising their activities in the fields of fintech and blockchain technology. In general terms, regulatory sandboxes enable companies to test their products with real customers in an environment that is not subject to the full application of legal rules. In this context, regulators typically provide guidance, with the aim of creating a collaborative relationship between the regulator and the regulated companies. The regulatory sandbox can also be considered a form of principles-based regulation, because it lifts some of the more specific regulatory burdens from sandbox participants by affording flexibility in satisfying the regulatory goals of the sandbox.Footnote 73 This line of reasoning shows the willingness of legislators not to impede technological progress and to help out domestic companies. The approach inevitably creates tensions when it comes to the protection of consumers’ interests.Footnote 74

16.6 Smart Contracts and Dispute Resolution

Even if the claim that ‘code is law’ or the expression ‘lex cryptographica’Footnote 75 may appear exaggerated, it seems evident that the developers of smart contracts and blockchain platforms aim to create an order without law and to implement a private regulatory framework. Achieving such a goal requires shaping a model of dispute resolution capable of resolving conflicts efficiently, without the intervention of national courts and state power.Footnote 76 The self-executing character of smart contracts cannot prevent disputes from occasionally arising between parties, connected for instance to defects in the product purchased or to the existence of an unlawful act. Moreover, the parties’ agreement cannot always be encoded in ‘if-then’ statements and may need to be expressed through non-deterministic notions and general clauses such as good faith and reasonableness. Unless artificial intelligence develops to the stage where a machine can substitute for human reasoning in filling contractual gaps or giving effect to general clauses,Footnote 77 contractual disputes may still arise. The way smart contracts operate could lead parties to abandon the digital world and resolve their disputes off-chain. The issue is of great importance, as the practical difficulty of resolving possible disputes between the parties could obscure the advantages connected to contractual automation.Footnote 78

One of the starting points in the discussion of dispute resolution in the field of blockchain is the observation that regular courts are nowadays not well enough equipped to face the challenges arising from the execution of lines of code.Footnote 79 This claim may be correct at this stage, but it does not rule out courts acquiring the capacity to tackle such disputes in the future. In reality, part of the industry is attempting to build a well-organized and completely independent jurisdiction in the digital world through the intervention of particular types of oracles, usually called ‘adjudicators’ or ‘jurors’.Footnote 80

Whether such a dispute resolution model can work depends strictly on the coding of the smart contract. As seen before,Footnote 81 once a smart contract is running, in principle neither party can stop the protocol, reverse an already executed transaction, or otherwise amend the smart contract. The power to interfere with the execution of the smart contract must therefore be foreseen ex ante and granted to a trusted third party. The latter is allowed to make determinations beyond the smart contract’s capabilities. It will feed the smart contract with information and, if necessary, influence its execution in order to reflect the trusted third party’s determination.Footnote 82

Independence from the traditional judiciary is secured by ‘routine escrow mechanisms’. Rather than being paid directly to the seller, the sale price is kept in escrow by a third party. If no disputes arise from the contract, the funds held in escrow are released in favour of the seller.Footnote 83 Nowadays, platforms adopt sophisticated systems based on ‘multi-signature addresses’, which do not give the third party involved as an adjudicator exclusive control of the price.Footnote 84 This amounts to an additional guarantee in favour of the contracting parties.Footnote 85 The outcome is a kind of advanced ODR system,Footnote 86 which is particularly suitable for the high-volume, low-value consumer complaints market.Footnote 87
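
The following Python sketch illustrates, under purely hypothetical names, how a 2-of-3 ‘multi-signature’ escrow of this kind works: buyer, seller, and adjudicator each hold a key, any two concurring signatures release the escrowed price, and the adjudicator alone controls nothing.

```python
# Hypothetical sketch of a 2-of-3 multi-signature escrow.
class MultisigEscrow:
    PARTIES = {"buyer", "seller", "adjudicator"}

    def __init__(self, amount: int) -> None:
        self.amount = amount
        self.approvals: set[tuple[str, str]] = set()  # (signer, beneficiary)
        self.released_to = None       # set to "buyer" or "seller" on release

    def sign(self, party: str, release_to: str) -> None:
        assert party in self.PARTIES and release_to in {"buyer", "seller"}
        self.approvals.add((party, release_to))
        # Any 2 of 3 signatures suffice: the parties can settle without the
        # adjudicator, and the adjudicator cannot move funds unilaterally.
        for beneficiary in ("buyer", "seller"):
            signers = {p for (p, b) in self.approvals if b == beneficiary}
            if len(signers) >= 2 and self.released_to is None:
                self.released_to = beneficiary

escrow = MultisigEscrow(amount=100)
escrow.sign("buyer", "seller")    # no dispute: buyer approves delivery
escrow.sign("seller", "seller")   # seller co-signs; the funds are released
assert escrow.released_to == "seller"
```

In a disputed case, the adjudicator would supply the second signature for whichever party it finds to be in the right, which is what makes the mechanism self-enforcing without recourse to a court.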

The autonomous dispute resolution system is not considered a modern form of the judiciary.Footnote 88 It is presented as a return to the ancient, pre-Westphalian past, where jurisdiction did not usually emanate from state sovereignty but from a private service, largely based on the consent of the disputing parties. Nevertheless, given the development of the modern state judiciary, there are many problematic aspects of dispute resolution on blockchain platforms. For instance, it has been pointed out that the decision is rendered by subjects who do not necessarily have legal knowledge (often selected through a special ranking based on users’ appreciation), that the decision cannot be recognized by a state court as an arbitral award would be, and that enforcement does not respect the time limits and safeguards provided by regular enforcement proceedings.Footnote 89

With respect to the aforementioned issues, the fear is that such advanced ODR systems, based on rules autonomous from those of national legal systems, may limit the importance of the latter in regulating private relationships.Footnote 90 On the other hand, some authors affirm that such procedures may, under certain conditions, become a new worldwide model of arbitration.Footnote 91

Here too, the advantages of the dispute resolution procedures are strictly connected to the self-enforcing character of the decision. The legitimacy of such proceedings must be carefully assessed, but the outcome should not necessarily be considered unlawful. The parties voluntarily chose to submit to the scrutiny of the adjudicator, and from a private law perspective the situation does not differ significantly from that of a third-party arbitrator who determines the content of the contract. In addition, the scope of automated enforcement does not extend to the parties’ entire estate; the assets subject to the assignment decided by the adjudicator are made available by the parties for that very purpose. It is not yet clear how far such proceedings will spread or whether they could functionally substitute for state court proceedings. Needless to say, in the absence of specific recognition by legal rules, these dispute resolution mechanisms are subject to the scrutiny of state courts.Footnote 92 Although it may be difficult in practice, the party who does not agree with a decision that is not legally recognizable may bring the dispute before the competent state court in order to have it resolved.

16.7 Conclusion

The actual dangers caused by the creation of private powers on blockchain platforms relate to the technology that enables the automation of the contractual relationship. On the one hand, if rights and legal guarantees are excluded or limited, the adoption of self-enforcement devices should of course be considered unlawful. On the other hand, every situation has in principle to be assessed carefully, as the contracting parties have freely chosen to enter into a smart contract.

Problems may arise when smart contracts are used as a means of self-help imposed by one of the contracting parties. An automated application of remedies may harm the essential interests of debtors. Nevertheless, automation does not seem to infringe debtors’ rights if enforcement complies with the deadlines and legal steps provided by the law. Moreover, some economic advantages arising from automation may produce positive effects for whole categories of users, and self-enforcement could also become an efficient tool in the hands of the European legislator for significantly augmenting the effectiveness of consumer protection.

In the light of the issues examined herein, if the technology is to augment users’ trust in the functioning of smart contracts and blockchain, it should not aim to abandon the law.Footnote 93 To be successful in the long run, innovative enforcement and dispute resolution models should respect and emulate legal guarantees. Smart contracts are not necessarily constructed with the democratic oversight and governance that are essential for a legitimate system of private law.Footnote 94 Widespread acceptance of new services requires that the main pillars on which legal systems are based not be erased.

Footnotes

13 Responsibilities of Companies in the Algorithmic Society

1 Hans-Wolfgang Micklitz and Dennis Patterson, ‘From the Nation-State to the Market: The Evolution of EU Private Law as Regulation of the Economy beyond the Boundaries of the Union?’ in Bart Van Vooren, Steven Blockmans and Jan Wouters (eds), The EU’s Role in Global Governance: The Legal Dimension (Oxford University Press 2013).

2 Matthias Ruffert, The Public-Private Law Divide: Potential for Transformation? (British Institute of International and Comparative Law 2009). Lukas van den Berge, ‘Rethinking the Public-Private Law Divide in the Age of Governmentality and Network Governance’ (2018) 5 European Journal of Comparative Law and Governance 119. Hans-W. Micklitz, ‘Rethinking the Public/Private Divide’ in Miguel Poiares Maduro, Kaarlo Tuori and Suvi Sankari (eds), Transnational Law: Rethinking European Law and Legal Thinking (Cambridge University Press 2014).

3 Cahier à Thème, Les Grandes Théories du Droit Transnational, avec contributions du K. Tuori, B. Kingsbury, N. Krisch, R. B. Stewart, H. Muir Watt, Ch. Joerges, F. Roedel, F. Cafaggi, R. Zimmermann, G.-P. Calliess, M. Renner, A. Fischer-Lescano, G. Teubner, P. Schiff Berman, Numéro 1–2, Revue Internationale de Droit Economique, 2013.

4 Gunther Teubner, Constitutional Fragments: Societal Constitutionalism and Globalization (Oxford University Press 2012).

5 Hans Jonas, Das Prinzip Verantwortung: Versuch Einer Ethik Für Die Technologische Zivilisation (Suhrkamp 1984).

6 Hans-W. Micklitz, Thomas Roethe and Stephen Weatherill, Federalism and Responsibility: A Study on Product Safety Law and Practice in the European Community (Graham & Trotman/M Nijhoff; Kluwer Academic Publishers Group 1994).

7 For a detailed account, see Chiara Macchi and Claire Bright, ‘Hardening Soft Law: The Implementation of Human Rights Due Diligence Requirements in Domestic Legislation’ in Martina Buscemi et al. (eds), Legal Sources in Business and Human Rights Evolving Dynamics in International and European Law (Brill 2020). Liesbeth Enneking et al., Accountability, International Business Operations and the Law: Providing Justice for Corporate Human Rights Violations in Global Value Chains (Routledge 2019). Stéphanie Bijlmakers, Corporate Social Responsibility, Human Rights, and the Law (Routledge 2019). Angelica Bonfanti, Business and Human Rights in Europe: International Law Challenges (Routledge 2018).

8 Richard E. Baldwin, The Great Convergence: Information Technology and the New Globalization (The Belknap Press of Harvard University Press 2016).

9 European Review of Contract Law, Special Issue: Reimagining Contract in a World of Global Value Chains, 2020, Volume 16, Issue 1, with contributions by Klaas Hendrik Eller, Jaakko Salminen, Fabrizio Cafaggi and Paola Iamiceli, Mika Viljanen, Anna Beckers, Laura D. Knöpfel, Lyn K. L. Tjon Soel Len, Kevin B. Sobel-Read, Vibe Ulfbeck and Ole Hansen. Society of European Contract Law (SECOLA), Common Frame of Reference and the Future of European Contract Law, conference 1 and 2 June 2007, Amsterdam.

10 Ralf Michaels and Nils Jansen, ‘Private Law beyond the State – Europeanization, Globalization, Privatization’ (2006) 54 American Journal of Comparative Law 843.

11 Barry Castleman, ‘The Export of Hazardous Industries in 2015’ (2016) 15 Environmental Health 8. Hans-W. Micklitz, Internationales Produktsicherheitsrecht: Zur Begründung Einer Rechtsverfassung Für Den Handel Mit Risikobehafteten Produkten (Nomos Verlagsgesellschaft 1995). Hans-W. Micklitz and Rechtspolitik, Internationales Produktsicherheitsrecht: Vorueberlegungen (Universität Bremen 1989).

12 Rotterdam Convention on the Prior Informed Consent Procedure for Certain Hazardous Chemicals and Pesticides in International Trade https://treaties.un.org/pages/ViewDetails.aspx?src=TREATY&mtdsg_no=XXVII-14&chapter=27.

13 Regulation (EC) No 304/2003 of the European Parliament and of the Council of 28 January 2003 concerning the export and import of dangerous chemicals OJ L 63, 6. 3. 2003, p. 1–26. Today Regulation (EU) No 649/2012 of the European Parliament and of the Council of 4 July 2012 concerning the export and import of hazardous chemicals OJ L 201, 27. 7. 2012, p. 60–106.

15 Webinar ‘Hazardous Pesticides and EU’s Double Standards’, 29. 9. 2020, www.pan-europe.info/resources/articles/2020/08/webinar-hazardous-pesticides-and-eus-double-standards.

16 Daniel Augenstein, Global Business and the Law and Politics of Human Rights (Cambridge University Press, forthcoming).

18 Kiobel v. Royal Dutch Petroleum Co., 569 US 108 (2013).

19 Hans-W. Micklitz, ‘Consumer Rights’ in Andrew Clapham, Antonio Cassese and Joseph Weiler (eds), European Union – The Human Rights Challenge, Human Rights and the European Community: The Substantive Law (Nomos 1991).

20 There is a vibrant debate in competition law on the reach of Art. 102 TFEU and the corresponding provisions in national cartel laws. See Nicolas Petit, Big Tech and the Digital Economy: The Moligopoly Scenario (1st ed., Oxford University Press 2020). With regard to the customer dimension, see the following judgment of the Federal Supreme Court of Germany (BGH) on Facebook, KVR 69/19, 23. 6. 2020 openJur 2020, 47441.

21 Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services PE/56/2019/REV/1, OJ L 186, 11. 7. 2019, pp. 57–79.

22 Jaakko Salminen, ‘Contract-Boundary-Spanning Governance Mechanisms: Conceptualizing Fragmented and Globalized Production as Collectively Governed Entities’ (2016) 23 Indiana Journal of Global Legal Studies 709.

23 This comes clear from the methodology used by Jaakko Salminen and Mikko Rajavuori, ‘Transnational Sustainability Laws and the Regulation of Global Value Chains: Comparison and a Framework for Analysis’ (2019) 26 Maastricht Journal of European and Comparative Law 602.

24 For a first attempt to at least systematically address the legal fields and the questions that require a solution, see Anna Beckers and Hans-W. Micklitz, ‘Eine ganzheitliche Perspektive auf die Regulierung globaler Lieferketten’ (2020) 6 Europäische Zeitschrift für Wirtschafts- und Steuerrecht 324–329.

25 United Nations Human Rights Council, Protect, Respect and Remedy: A Framework for Business and Human Rights, 2008 A/HRC/8/5.

26 United Nations Human Rights Council, United Nations Guidelines on Business and Human Rights, 2011 A/HRC/17/31; for details, see Claire Bright, ‘Creating a Legislative Level Playing Field in Business and Human Rights at the European Level: Is French Duty of Vigilance Law the Way Forward?’ EUI working paper MWP 2020/01, 2020, 2.

27 OECD Guidelines for Multinational Enterprises, 2011.

28 S. Eickenjäger, Menschenrechtsberichterstattung durch Unternehmen (Mohr Siebeck 2017) 274.

29 California Transparency in Supply Chains Act of 2010 (SB 657).

30 Bribery Act 2010 c. 23.

31 Modern Slavery Act 2018, No. 153, 2018.

32 Wet zorgplicht kinderarbeid, 14 May 2019. For a comparison of the legislation discussed above, see Salminen and Rajavuori (Footnote n 23).

33 For a detailed overview of the status quo, see Macchi and Bright (Footnote n 7).

34 www.bk.admin.ch/ch/f/pore/va/20201129/index.html accessed on 1 December 2020.

35 LOI no. 2017–399 du 27 mars 2017 relative au devoir de vigilance des sociétés mères et des entreprises donneuses d’ordre JORF no. 0074 du 28 mars 2017, S. Brabant and E. Savourey, ‘French Law on the Corporate Duty of Vigilance: A Practical and Multidimensional Perspective’, Revue international de la compliance et de l’éthique des affairs, 14 December 2017, www.bhrinlaw.org/frenchcorporatedutylaw_articles.pdf.

36 For further details, see Claire Bright, ‘Creating a Legislative Level Playing Field in Business and Human Rights at the European Level: Is French Duty of Vigilance Law the Way Forward?’ EUI working paper MWP 2020/01, 2020, 6.

37 Regulation 2017/821/EU of 17 March 2017 laying down supply chain due diligence obligations for Union importers of tin, tantalum, tungsten, their ores and gold originating from conflict-affected and high-risk areas, OJ L 130, 19. 5. 2017, p. 1–20.

38 Regulation 995/2010/EU of 20 October 2010 on the obligations of operators who place timber and timber products on the market, OJ L 295, 12. 11. 2010, p. 23–34.

39 Directive 2014/95/EU of 22 October 2014 amending Directive 2013/34/EU as regards the disclosure of non-financial and diversity information by certain large companies and groups, OJ L 330, 15. 11. 2014, p. 1–9.

40 Regulation 2019/1020/EU of 20 June 2019 on market surveillance and compliance of products and amending Directive 2004/42/EC and Regulations 765/2008/EC and 305/2011/EU, OJ L 169, 25. 6. 2019, p. 1–44.

41 Directive 2019/633/EU of 17 April 2019 on unfair trading practices in business-to-business relationships in the agricultural and food chain, OJ L 111, 25. 4. 2019, p. 59–72.

42 Committee on Legal Affairs, Draft Report with recommendations to the Commission on corporate due diligence and corporate accountability, 2020/2129(INL), 11. 09. 2020.

43 European Parliament Committee on Legal Affairs, Draft Report with recommendations to the Commission on corporate due diligence and corporate accountability, 2020/2129(INL), 11. 09. 2020, article 4.

44 For an early account of the new opportunities, see Eric Brousseau, Meryem Marzouki and Cécile Méadel, Governance, Regulations and Powers on the Internet (Cambridge University Press 2012). Andrea Calderaro and Anastasia Kavada, ‘Challenges and Opportunities of Online Collective Action for Policy Change’ (2013) in Diane Rowland, Uta Kohl and Andrew Charlesworth (eds), Information Technology Law (5th ed., Routledge 2017).

45 Cass. Civ. 14 sept. 2017 no. 15–26737 et 15–26738.

46 Tribunal Judiciaire de Nanterre, 30 January 2020, no. 19/02833.

47 High Court, The Bodo Community et al. v Shell Petroleum Development Company of Nigeria Ltd, (2014) EWHC 1973, 20 June 2014.

48 Court of Appeal of The Hague, Eric Barizaa Dooh of Goi et al. v. Royal Dutch Shell Plc et al., 200.126.843 (case c) and 200.126.848 (case d), 18 December 2015.

49 Claire Bright, ‘The Civil Liability of the Parent Company for the Acts or Omissions of Its Subsidiary The Example of the Shell Cases in the UK and the Netherlands’ in Angelica Bonfanti (ed), Business and Human Rights in Europe: International Law Challenges (Routledge 2018).

50 Monika Namysłowska, ‘Monitoring Compliance with Contracts and Regulations: Between Private and Public Law’ in Roger Brownsword, R. A. J. van Gestel and Hans-W. Micklitz (eds), Contract and Regulation: A Handbook on New Methods of Law Making in Private Law (Edward Elgar Publishing 2017); Anna Beckers, Enforcing Corporate Social Responsibility Codes: On Global Self-Regulation and National Private Law (Hart Publishing 2015).

51 Walter van Gerven, ‘Bringing (Private) Laws Closer to Each Other at the European Level’ in Fabrizio Cafaggi, The Institutional Framework of European Private Law (Oxford University Press 1993). Fabrizio Cafaggi and Horatia Muir Watt (eds), Making European Private Law: Governance Design (Edward Elgar 2008).

52 Actionaid, Les Amis de la Terre France, Amnesty International, Terre Solidaire, Collectif Éthique sur l’étiquette, Sherpa, The law on duty of vigilance of parent and outsourcing companies, Year 1: Companies must do better (2019), 49.

53 Actionaid, Les Amis de la Terre France, Amnesty International, Terre Solidaire, Collectif Éthique sur l’étiquette, Sherpa, The law on duty of vigilance of parent and outsourcing companies, Year 1: Companies must do better (2019), 10.

54 European Parliament Committee on Legal Affairs, Draft Report with recommendations to the Commission on corporate due diligence and corporate accountability, 2020/2129(INL), 11. 09. 2020, articles 5 and 8.

55 European Parliament Committee on Legal Affairs, Draft Report with recommendations to the Commission on corporate due diligence and corporate accountability, 2020/2129(INL), 11. 09. 2020, articles 9 and 10.

56 OECD Guidelines for Multinational Enterprises, 2011, part II, Procedural Approaches, Part C, Application of the guidelines in special cases.

57 European Parliament Committee on Legal Affairs, Draft Report with recommendations to the Commission on corporate due diligence and corporate accountability, 2020/2129(INL), 11. 09. 2020, articles 14 and 15.

58 H.-W. Micklitz, ‘The Role of the EU in the External Reach of Regulatory Private Law – Gentle Civilizer or Neoliberal Hegemon? An Epilogue’, in M. Cantero and H.-W. Micklitz (eds), The Role of the EU in Transnational Legal Ordering: Standards, Contracts and Codes (Edward Elgar Publishing 2020) 298–320.

14 Consumer Law as a Tool to Regulate Artificial Intelligence

1 Press Release 19 February 2020, Shaping Europe’s Digital Future: Commission Presents Strategies for Data and Artificial Intelligence.

2 M. Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence, New York, 2017.

3 For consistency purposes, this article refers to ‘traders’ when referring to suppliers and services providers. Art. 2(2) Directive 2011/83/EU OJ L 304, 22 November 2011 (Consumer Rights Directive). See also Directive (EU) 2019/2161 amending Council Directive 93/13/EEC (Unfair Contract Terms Directive) and Directives 98/6/EC, 2005/29/EC and 2011/83/EU as regards the better enforcement and modernisation of Union consumer protection rules, OJ L 328, 18 December 2019 (Modernization of Consumer Protection Directive).

4 Council of Europe research shows that a large number of fundamental rights could be impacted from the use of AI, https://rm.coe.int/algorithms-and-human-rights-en-rev/16807956b5.

5 B. Custers et al., e-Sides, deliverable 2.2, Lists of Ethical, Legal, Societal and Economic Issues of Big Data Technologies. Ethical and Societal Implications of Data Sciences, https://e-sides.eu/resources/deliverable-22-lists-of-ethical-legal-societal-and-economic-issues-of-big-data-technologies accessed 12 April 2019 (e-SIDES, 2017).

6 H. Feld, The Case for the Digital Platform Act: Breakups, Starfish Problems, & Tech Regulation, e-book, 2019.

7 UK Government Office for Science, Artificial intelligence: opportunities and implications for the future of decision making, 2016. OECD, Algorithms and Collusion – Background Note by the Secretariat, DAF/COMP (2017) 4 (OECD 2017).

8 The Society for the Study of Artificial Intelligence and Simulation of Behaviour, ‘What Is Artificial Intelligence’, AISB Website (no longer accessible); Government Office for Science, Artificial Intelligence: Opportunities and Implications for the Future of Decision Making, 9 November 2016; Information Commissioner’s Office, UK, Big Data, Artificial Intelligence, Machine Learning and Data Protection, Report, v. 2.2, 20170904 (ICO 2017).

9 J. R. Koza, F. H. Bennett, D. Andre, and M. A. Keane, ‘Paraphrasing Arthur Samuel (1959), the Question Is: How Can Computers Learn to Solve Problems without Being Explicitly Programmed?’ in Automated Design of Both the Topology and Sizing of Analog Electrical Circuits Using Genetic Programming. Artificial Intelligence in Design, Springer, 1996, 151–170. L. Bell, ‘Machine Learning versus AI: What’s the Difference?’ Wired, 2 December 2016.

10 C. M. Bishop, Pattern Recognition and Machine Learning, Springer Verlag, 2006.

11 ICO 2017, p. 7.

12 E. Alpaydin, Introduction to Machine Learning, MIT Press, 2014.

13 European Commission, White Paper on Artificial Intelligence – A European Approach to Excellence and Trust, 19 February 2020, COM(2020) 65 final.

14The quality of being true, honest, or accurate’, Cambridge Dictionary, Cambridge University Press, 2020.

15 J. Modrall, ‘Big Data and Algorithms, Focusing the Discussion’, Oxford University, Business Law Blog, 15 January 2018; D. Landau, ‘Artificial Intelligence and Machine Learning: How Computers Learn’, iQ, 17 August 2016, https://iq.intel.com/artificial-intelligence-and-machine-learning, now presented as ‘A Data-Centric Portfolio for AI, Analytics and Cloud’; last accessed 14 March 2019.

16 W. Seymour, ‘Detecting Bias: Does an Algorithm Have to Be Transparent in Order to Be Fair?’, www.CEUR-WS.org, vol. 2103 (2017).

17 Art. 38 Charter of Fundamental Rights of the EU (CFREU).

18 Art. 38 CFREU.

19 Article 169(1) and point (a) of Article 169(2) TFEU.

20 Article 114(3) of the Treaty on the Functioning of the European Union (TFEU). This clause provides that, within their respective powers, the European Parliament and the Council will also seek to achieve a high level of consumer protection.

21 Reiner Schulze, ‘European Private Law: Political Foundations and Current Challenges’ and J. M. Smits, ‘Plurality of Sources in European Private Law’, in R. Brownsword, H.-W. Micklitz, L. Niglia, and S. Weatherill, The Foundations of European Private Law, Oxford, 2011, pp. 303–306 and 327ff.

22 The Modernization of Consumer Protection Directive and Regulation (EU) 2019/1150 of the European Parliament and of the Council of 20 June 2019 on promoting fairness and transparency for business users of online intermediation services OJ L 186, 11 July 2019 (Online Intermediary Services Regulation). Regulation (EU) 2018/1807 of 14 November 2018 on a Framework for the Free Flow of Non-Personal Data in the European Union, OJ L 303/59, 28 November 2018, entry into force May 2019 (Free Flow of Non-Personal Data Regulation).

23 Regulation (EU) 2016/679 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, OJ L 119/1 (General Data Protection Regulation, or GDPR), contains the right to object and rights relating to automated individual decision-making (articles 21–22 GDPR), subject to fairly complex exclusions that are explained in detail in the extensive recitals.

24 There is no legal basis when no personal data are involved; see chapter III, section 3 GDPR, e.g., articles 16 (rectification), 17 (erasure), 18 (restriction of processing), and 20 (data portability) GDPR – the rights are often qualified, and the burden of proof is not clear. This makes the consumer’s rights rather difficult to enforce.

25 H. U. Vrabec, Uncontrollable: Data Subject Rights and the Data-Driven Economy, dissertation, Leiden University, 2019.

26 See Eurobarometer Special 447 on Online Platforms (2016).

27 This chapter does not discuss online dispute resolution.

28 Spike Jonze, Her (2013). In this movie, the protagonist in an algorithmic society develops an intimate relationship with his operating system – that is, until he finds out the operating system communicates with millions of customers simultaneously.

29 Directive (EU) 2019/1024 on open data and the re-use of public sector information, OJ L 172/56 (Open Data Directive); Commission Communication, ‘Building a European Data Economy’, COM (2017) 9 final.

30 Free Flow of Non-Personal Data Regulation.

31 Regulation 2017/1128/EU of the European Parliament and of the Council of 14 June 2017 on Cross-border Portability of Online Content Services in the Internal Market, [2017] OJ L 168/1 including corrigendum to regulation 2017/1128.

32 GDPR, articles 13 and 20.

33 The Free Flow of Non-Personal Data Regulation does not define ‘non-personal data’. Cf. art. 3 of the Non-personal Data Regulation: ‘“Data” means data other than personal data as defined in point (1) of Article 4 of Regulation (EU) 2016/679’.

34 Modernization of Consumer Protection Directive, recital (22).

35 Directive (EU) 2016/943 of the European Parliament and of the Council of 8 June 2016 on the protection of undisclosed know-how and business information (trade secrets) against their unlawful acquisition, use and disclosure OJ L 157 (Trade Secrets Directive).

36 Commission Communication DSM 2017, p. 2. The Commission mentions a number of activities, including online advertising platforms, marketplaces, search engines, social media and creative content outlets, application distribution platforms, communications services, payment systems and collaboration platforms.

37 Commission Communication on Shaping Europe’s Digital Future, Brussels, 19.2.2020, COM(2020) 67 final; White Paper on Artificial Intelligence, setting out options for a legislative framework for trustworthy AI, with a follow-up on safety, liability, fundamental rights and data (Commission Communication 2020).

38 Following two Commission Communications on AI supporting ‘ethical, secure and cutting-edge AI made in Europe’ (COM(2018) 237 and COM(2018) 795), a High-Level Expert Group on Artificial Intelligence was established: Ethics Guidelines for Trustworthy AI, 8 April 2019 (COM(2019) 168; Ethics Guidelines AI 2019); https://ec.europa.eu/digital-single-market/en/high-level-expert-group-artificial-intelligence. The Guidelines seem to have been overridden by the White Paper AI 2020.

39 Ethics Guidelines AI 2019, p. 2.

40 e-Sides 2017, i.a., p. 85ff., and the attached lists.

41 Commission Communication AI 2018, para. 3.3.

42 White Paper AI 2020.

43 Ethics Guidelines AI 2019, pp. 12–13: the Guidelines do not focus on consumers. Rather, they address different stakeholders going in different directions.

44 P. Boddington, Towards a Code of Ethics for Artificial Intelligence, Springer International Publishing, 2017.

45 At the time of writing, the draft proposal for a Regulation concerning the respect for private life and the protection of personal data in electronic communications and repealing Directive 2002/58/EC, COM(2017) 10 final (draft Regulation on Privacy and Electronic Communications), was in limbo.

46 Unfair Contract Terms Directive.

47 J. Luzak and S. van der Hof, part II, chapter 2, in Concise European Data Protection, E-Commerce and IT Law, S. J. H. Gijrath, S. van der Hof, A. R. Lodder, G.-J. Zwenne, eds., 3rd edition, Kluwer Law International, 2018.

48 Cf. the standard withdrawal form in Annex 1 to the Consumer Rights Directive.

49 Bureau Européen des Unions de Consommateurs AISBL, Automated Decision Making and Artificial Intelligence – A Consumer Perspective, Position Paper, 20 June 2018 (BEUC 2018).

50 Modernization of Consumer Protection Directive, recital (17).

51 Cf. Regulation (EU) 2017/2394 on cooperation between national authorities responsible for the enforcement of consumer protection laws and repealing Regulation (EC) No. 2006/2004 (OJ L 345, 27.12.2017, p. 1).

52 Cf. the Unfair Contract Terms Directive.

53 Modernization of Consumer Protection Directive, recital (2).

54 Cf. the Online Intermediary Services Regulation where corrections can be made at the wholesale level.

55 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on certain legal aspects of information society services, in particular electronic commerce, in the Internal Market [2000] OJ L 178/1 (Electronic Commerce Directive).

56 Modernization of Consumer Protection Directive, recital (46); CJEU ECLI:EU:C:2019:576, Bundesverband der Verbraucherzentralen und Verbraucherverbände – Verbraucherzentrale Bundesverband e.V. v Amazon EU Sàrl, request for a preliminary ruling from the Bundesgerichtshof, 10 July 2019. The Court followed the non-binding opinion of the Advocate General, holding that traders are not invariably obliged to provide certain additional information, such as a telephone or fax number.

57 Advocate General’s Opinion in Case C-649/17 Bundesverband der Verbraucherzentralen and Others v. Amazon EU, CJEU, Press Release No. 22/19 Luxembourg, 28 February 2019.

58 Directive 2005/29/EC of the European Parliament and of the Council of 11 May 2005 concerning unfair business-to-consumer commercial practices in the internal market (Unfair Commercial Practices Directive 2005), OJ 2005, L. 149.

59 Unfair Contract Terms Directive, pp. 29–34.

60 Regulation (EU) 2017/2394.

61 To ensure that fines have a deterrent effect, Member States should set in their national law the maximum fine for such infringements at a level that is at least 4 per cent of the trader’s annual turnover in the Member State or Member States concerned. Traders in certain cases can also be groups of companies.

62 ‘Pricing that involves changing the price in a highly flexible and quick manner in response to market demands when it does not involve personalisation based on automated decision making.’ Directive 2011/83/EU.
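
To make the quoted distinction concrete, the following minimal Python sketch (the demand multiplier is invented for illustration) shows a dynamically priced offer that responds only to an aggregate market signal: no customer-specific input enters the calculation, which is precisely what separates dynamic pricing from personalisation based on automated decision-making.

def dynamic_price(base_price: float, demand_index: float) -> float:
    # Dynamic pricing: the price tracks aggregate demand only.
    # demand_index is a market-wide signal in [-1.0, 1.0]; no data about
    # the individual customer is used, so no personalisation is involved.
    multiplier = 1.0 + 0.5 * max(-1.0, min(1.0, demand_index))
    return round(base_price * multiplier, 2)

# At a given moment, every customer sees the same price.
print(dynamic_price(100.0, 0.4))  # 120.0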

63 Modernization of Consumer Protection Directive, recital (45).

15 When the Algorithm Is Not Fully Reliable: The Collaboration between Technology and Humans in the Fight against Hate Speech

The contribution is based on the analysis developed within a DG Justice supported project e-NACT (GA no. 763875). The responsibility for errors and omissions remains with the author.

1 See Ben Wagner ‘Algorithmic Regulation and the Global Default: Shifting Norms in Internet Technology’ (2016) Etikk i praksis: Nord J Appl Ethics 5; Rob Kitchin and Martin Dodge, Code/Space: Software and Everyday Life (MIT Press, 2011).

2 See Jane Yakowitz Bambauer and Tal Zarsky, ‘The Algorithm Game’ (2018) 94 Notre Dame Law Review 1.

3 The set of instructions can include different types of mathematical operations, ranging from linear equations to polynomial calculations, to matrix calculations, and so forth. Moreover, each instruction can itself be another algorithm, which increases the level of complexity of the overall procedure. See Erika Giorgini, ‘Algorithms and Law’ (2019) 5 Italian Law Journal 144.

4 A well-known example is the Knapsack problem, where the goal is to select, among a number of given items, the ones that have the maximum total value. However, since each item has a weight, the total weight that can be carried must not exceed some fixed number X, so the solution must consider the weights of the items as well as their values. Although a recursive algorithm can find the best solution, as the number of items increases, the time spent evaluating all the possible combinations grows exponentially, which in practice leads to accepting suboptimal solutions.
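
A minimal Python sketch of the exhaustive search just described (item values and weights are invented for illustration): each item is either taken or skipped, so the number of combinations to evaluate doubles with every additional item, which is why exact solutions quickly become impractical and approximate ones are accepted instead.

def knapsack(items, capacity):
    # Exhaustive recursion over (value, weight) pairs: O(2^n) combinations.
    if not items:
        return 0
    (value, weight), rest = items[0], items[1:]
    best_without = knapsack(rest, capacity)  # skip the first item
    if weight > capacity:
        return best_without
    return max(best_without, value + knapsack(rest, capacity - weight))  # or take it

# Three items are evaluated instantly; thirty items already imply over a billion combinations.
print(knapsack([(60, 10), (100, 20), (120, 30)], capacity=50))  # 220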

5 See the definition at https://en.wikipedia.org/wiki/Soft_computing accessed 13 March 2020.

6 Council of Europe ‘Algorithms and Human Rights – Study on the Human Rights Dimensions of Automated Data Processing Techniques and Possible Regulatory Implications’ (2018) https://edoc.coe.int/en/internet/7589-algorithms-and-human-rights-study-on-the-human-rights-dimensions-of-automated-data-processing-techniques-and-possible-regulatory-implications.html accessed 13 March 2020.

7 Daniel Neyland, The Everyday Life of an Algorithm (Palgrave Macmillan, 2019).

9 Such as, for instance, the well-known algorithm originally used by Google, namely PageRank. See Larry Page et al. ‘The PageRank Citation Ranking: Bringing Order to the Web’ (1999) http://ilpubs.stanford.edu:8090/422/1/1999-66.pdf accessed 13 March 2020.

10 David Stevens, ‘In Defence of “Toma”: Algorithmic Enhancement of a Sense of Justice’ in Mireille Hildebrandt and Kieron O’Hara (eds.) Life and the Law in the Era of Data-Driven Agency (Edward Elgar, 2020), analysing Mireille Hildebrandt, Smart Technologies and the End(s) of Law: Novel Entanglements of Law and Technology (Edward Elgar Publishing, 2015).

11 Kevin Slavin ‘How Algorithms Shape Our World’ (2011) www.ted.com/talks/kevin_slavin_how_algorithms_shape_our_world.html accessed 13 March 2020; Frank Pasquale ‘The Algorithmic Self’ (2015) The Hedgehog Review, Institute for Advanced Studies in Culture, University of Virginia. Note that this aspect is the premise of so-called surveillance capitalism as defined by Shoshana Zuboff in ‘Big Other: Surveillance Capitalism and the Prospects of an Information Civilization’ (2015) 30 Journal of Information Technology 75.

12 Thamarai Subramaniam, Hamid A. Jalab, and Alaa Y. Taqa, ‘Overview of Textual Anti-spam Filtering Techniques’ (2010) 5 International Journal of Physical Science 1869.

13 Christoph Krönke, ‘Artificial Intelligence and Social Media’ in Thomas Wischmeyer and Timo Rademacher (eds.) Regulating Artificial Intelligence (Springer, 2019).

14 For a description of the LinkedIn platform, see Jian Raymond Rui, ‘Objective Evaluation or Collective Self-Presentation: What People Expect of LinkedIn Recommendations’ (2018) 89 Computers in Human Behavior 121.

15 See the wider procedure described at https://engineering.linkedin.com/blog/2017/03/strategies-for-keeping-the-linkedin-feed-relevant accessed 13 March 2020.

16 See, for instance, the wide debate regarding the effectiveness of filtering systems adopted at national level against child pornography. See Yaman Akdeniz, Internet Child Pornography and the Law – National and International Responses (Routledge, 2016), and T. J. McIntyre and Colin Scott, ‘Internet Filtering – Rhetoric, Legitimacy, Accountability and Responsibility’ in Roger Brownsword and Karen Yeung (eds.) Regulating Technologies: Legal Futures, Regulatory Frames and Technological Fixes (Bloomsbury Publishing, 2008).

17 Natasha Duarte, Emma Llansó, and Anna Loup, ‘Mixed Messages? The Limits of Automated Social Media Content Analysis’, Proceedings of the 1st Conference on Fairness, Accountability and Transparency (2018) 81 PMLR 106.

18 Katharina Kaesling ‘Privatising Law Enforcement in Social Networks: A Comparative Model Analysis’ (2018) Erasmus Law Review 151.

19 Natalie Alkiviadou, ‘Hate Speech on Social Media Networks: Towards a Regulatory Framework?’ (2019) 28 Information & Communications Technology Law 19.

20 See Eurobarometer ‘Special Eurobarometer 452 – Media Pluralism and Democracy Report’ (2016) http://ec.europa.eu/information_society/newsroom/image/document/2016-47/sp452-summary_en_19666.pdf accessed 13 March 2020. See also Article 19 ‘Responding to “Hate Speech”: Comparative Overview of Six EU Countries’ (2018) www.article19.org/wp-content/uploads/2018/03/ECA-hate-speech-compilation-report_March-2018.pdf accessed 13 March 2020.

21 See European Commission – Press Release ‘A Europe That Protects: Commission Reinforces EU Response to Illegal Content Online’ 1 March 2018 http://europa.eu/rapid/press-release_IP-18-1169_en.htm accessed 13 March 2020.

22 Michel Rosenfeld, ‘Hate Speech in Constitutional Jurisprudence: A Comparative Analysis’ (2002–2003) 24 Cardozo L Rev 1523; Alisdair A. Gillespie, ‘Hate and Harm: The Law on Hate Speech’ in Andrej Savin and Jan Trzaskowski (eds.), Research Handbook on EU Internet Law (Edward Elgar, 2014); Natalie Alkiviadou, ‘Regulating Internet Hate: A Flying Pig?’ (2016) 7 Journal of Intellectual Property, Information Technology and E-Commerce Law 3; Oreste Pollicino and Giovanni De Gregorio, ‘Hate Speech: una prospettiva di diritto comparato’ (2019) 4 Giornale di Diritto Amministrativo 421.

23 Note that the definitions of hate speech provided at international level focus on different facets of this concept, looking at content and at the manner of speech, but also at the effect and at the consequences of the speech. See the Rabat Plan of Action adopted by the United Nations in 2013, Annual report of the United Nations High Commissioner for Human Rights, A/HRC/22/17/Add.4.

24 Council Framework Decision on Combating Certain Forms and Expressions of Racism and Xenophobia by Means of Criminal Law, [2008] O.J. (L 328) 55 (Framework Decision 2008/913/JHA).

25 European Parliament ‘Study on the Legal Framework on Hate Speech, Blasphemy and Its Interaction with Freedom of Expression’ (2015) www.europarl.europa.eu/thinktank/en/document.html?reference=IPOL_STU%282015%29536460 accessed 13 March 2020.

26 See also the recent interventions on fake news and illegal content online, respectively the EU Code of Practice on Disinformation http://europa.eu/rapid/press-release_STATEMENT-19-2174_en.htm accessed 13 March 2020, and Commission Recommendation of 1.3.2018 on measures to effectively tackle illegal content online (C(2018) 1177 final) https://ec.europa.eu/digital-single-market/en/news/commission-recommendation-measures-effectively-tackle-illegal-content-online accessed 13 March 2020.

27 Chris Marsden Internet Co-regulation – European Law, Regulatory Governance, and Legitimacy in Cyberspace (Cambridge University Press, 2011).

28 European Commission Press Release IP/16/1937 ‘European Commission and IT Companies Announce Code of Conduct on Illegal Online Hate Speech’ (May 30, 2016); see also European Commission ‘Countering Illegal Hate Speech Online #NoPlace4Hate’ (2019) https://ec.europa.eu/newsroom/just/item-detail.cfm?item_id=54300 accessed 13 March 2020. Note that since 2018, five new companies joined the Code of Conduct: Instagram, Google+, Snapchat, Dailymotion and jeuxvideo.com. This brings the total number of companies that are part of the Code of Conduct to nine.

29 Ibid. at p. 2.

30 See the Commission Factsheet ‘5th evaluation of the Code of Conduct’, June (2020) https://ec.europa.eu/info/sites/default/files/codeofconduct_2020_factsheet_12.pdf accessed 28 June 2021. In particular, the document highlights that ‘on average 90% of the notifications are reviewed within 24 hours and 71% of the content is removed’.

31 See Sissi Cao ‘Google’s Artificial Intelligence Hate Speech Detector Has a “Black Tweet” Problem’ (Observer, 13 August 2019) https://observer.com/2019/08/google-ai-hate-speech-detector-black-racial-bias-twitter-study/ accessed 13 March 2020.

32 See EU Commission ‘Results of the Fourth Monitoring Exercise’ https://ec.europa.eu/info/sites/info/files/code_of_conduct_factsheet_7_web.pdf accessed 13 March 2020. The Commission states that the monitoring exercise covered little more than 4,000 notifications over a period of six weeks, submitted by only 39 organisations from 26 Member States.

33 Sean MacAvaney et al. ‘Hate Speech Detection: Challenges and Solutions’ (2019) 14(8) PLOS One 1.

34 This is even more problematic in the case of image detection, as shown by the recent case in which the publication of the Led Zeppelin album cover on Facebook was deemed contrary to community standards due to nudity and sexual images. See Rob Picheta ‘Facebook Reverses Ban on Led Zeppelin Album Cover’ (CNN, 21 June 2019) www.cnn.com/2019/06/21/tech/facebook-led-zeppelin-album-cover-scli-intl/index.html accessed 13 March 2020. For a wider analysis of the reasons to avoid the ubiquitous use of algorithms for decision-making, see Guido Noto la Diega ‘Against the Dehumanisation of Decision-Making – Algorithmic Decisions at the Crossroads of Intellectual Property, Data Protection, and Freedom of Information’ (2018) 9 JIPITEC 3.

35 Cambridge Consultants, ‘The Use of AI in Content Moderation’ (2019) www.ofcom.org.uk/__data/assets/pdf_file/0028/157249/cambridge-consultants-ai-content-moderation.pdf accessed 13 March 2020.

36 James GrimmelmannThe Virtues of Moderation’ (2015) 17 Yale J.L. & Tech. 42.

37 See the approach adopted by Facebook and Google in this regard: Issie Lapowsky, ‘Facebook Moves to Limit Toxic Content as Scandal Swirls’ (Wired, 15 November 2018) www.wired.com/story/facebook-limits-hate-speech-toxic-content/ accessed 13 March 2020; Sam Levin ‘Google to Hire Thousands of Moderators after Outcry over YouTube Abuse Videos’ (The Guardian, 5 December 2017), www.theguardian.com/technology/2017/dec/04/google-YouTube-hire-moderators-child-abuse-videos accessed 13 March 2020.

38 Nicolas P. Suzor Lawless: The Secret Rules That Govern Our Digital Lives (and Why We Need New Digital Constitutions That Protect Our Rights) (Cambridge University Press, 2019).

39 Sarah T. RobertsCommercial Content Moderation: Digital Laborers’ Dirty Work’ in S. U. Noble and B. Tynes (eds.) The Intersectional Internet: Race, Sex, Class and Culture Online (Peter Lang Publishing, 2016); Ben WagnerLiable, but Not in Control? Ensuring Meaningful Human Agency in Automated Decision-Making Systems’ (2018) 11 Policy & Internet 104; Andrew Arsht and Daniel Etcovitch ‘The Human Cost of Online Content Moderation’ (2018) Harvard Law Review Online https://jolt.law.harvard.edu/digest/the-human-cost-of-online-content-moderation accessed 13 March 2020.

40 Flagging is the mechanism provided by platforms to allow users to express concerns about potentially offensive content. This mechanism helps reduce the volume of content that has to be reviewed automatically. See Kate Klonick, ‘The New Governors: The People, Rules and Processes Governing Online Speech’, 131 Harvard Law Review 1598, at 1626 (2018).
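
As a rough illustration of how flagging interacts with the ‘trusted flagger’ programmes discussed in the following notes, consider this Python sketch (the two-level priority scheme is invented for illustration): all reports enter a single review queue, but reports from trusted flaggers are examined first.

import heapq

class FlagQueue:
    # A review queue for flagged content; trusted reports jump ahead.
    def __init__(self):
        self._heap = []
        self._counter = 0  # preserves first-come order within each priority level

    def flag(self, content_id, trusted=False):
        priority = 0 if trusted else 1  # lower number = reviewed earlier
        heapq.heappush(self._heap, (priority, self._counter, content_id))
        self._counter += 1

    def next_for_review(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

queue = FlagQueue()
queue.flag("post-1")
queue.flag("post-2", trusted=True)
print(queue.next_for_review())  # post-2: the trusted report is handled first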

41 See ‘YouTube Trusted Flagger Program’ https://support.google.com/YouTube/answer/7554338?hl=en accessed 13 March 2020.

42 Facebook ‘How Do I Report Inappropriate or Abusive Things on Facebook (Example: Nudity, Hate Speech, Threats)’ www.facebook.com/help/212722115425932?helpref=uf_permalink accessed 13 March 2020.

43 Google ‘Hate Speech Policy’ https://support.google.com/YouTube/answer/2801939?hl=en accessed 13 March 2020.

44 Twitter ‘Hateful Conduct Policy’ https://help.twitter.com/en/rules-and-policies/hateful-conduct-policy accessed 13 March 2020.

45 Article 19, ‘YouTube Community Guidelines: Analysis against International Standards on Freedom of Expression’ (2018) www.article19.org/resources/YouTube-community-guidelines-analysis-against-international-standards-on-freedom-of-expression/ accessed 13 March 2020.

46 Article 19, ‘Facebook Community Standards: Analysis against International Standards on Freedom of Expression’ (2018) www.article19.org/resources/facebook-community-standards-analysis-against-international-standards-on-freedom-of-expression/ accessed 13 March 2020.

47 Wolfgang Benedek and Matthias C. Kettemann, Freedom of Expression and the Internet (Council of Europe Publishing, 2013), 101. See the decisions of Italian courts on this matter, as presented in F. Casarosa, ‘Does Facebook get it always wrong? The decisions of Italian courts between hate speech and political pluralism’, presented at the Cyberspace conference, November 2020.

48 Council of Europe, Draft Recommendation CM/Rec (2017x)xx of the Committee of Ministers to Member States on the Roles and Responsibilities of Internet Intermediaries, MSI-NET (19 September 2017).

49 National and European courts are still struggling to identify such a boundary; see, for instance, the rich jurisprudence of the ECtHR: European Court of Human Rights Press Unit, Factsheet – Hate Speech (January 2020), www.echr.coe.int/Documents/FS_Hate_speech_ENG.pdf accessed 13 March 2020.

50 Note that this principle is also confirmed by the Council of Europe (n 48).

51 Giancarlo Frosio, ‘Why Keep a Dog and Bark Yourself? From Intermediary Liability to Responsibility’ (2018) 26 Oxford Int’l J. of Law and Information Technology 1.

52 See, for instance, the suggestion made by UN Rapporteur Frank La Rue, in Report of the Special Rapporteur on the Promotion and Protection of the Right to Freedom of Opinion and Expression (2011) www2.ohchr.org/english/bodies/hrcouncil/docs/17session/A.HRC.17.27_en.pdf, p. 13 accessed 13 March 2020.

53 Commission Recommendation 2018/334 on measures to effectively tackle illegal content online, C/2018/1177, OJ L 63, 6.3.2018, pp. 50–61.

54 Ibid., at 3.

55 See Google ‘Appeal Community Guidelines Actions’ https://support.google.com/YouTube/answer/185111 accessed 13 March 2020.

56 For a detailed description of the structure and role of the Oversight Board, see Facebook ‘Establishing Structure and Governance for an Independent Oversight Board’ (Facebook Newsroom, 17 September 2019) https://newsroom.fb.com/news/2019/09/oversight-board-structure/ accessed 13 March 2020, and Facebook ‘Oversight Board Charter’ (Facebook Newsroom, 19 September 2019) https://fbnewsroomus.files.wordpress.com/2019/09/oversight_board_charter.pdf accessed 13 March 2020.

57 The figures can clarify the challenge: the number of board members is currently set at 40 people, while the number of cases appealed yearly on Facebook is 3.5 million (related to hate speech alone), according to the 2019 Community Standards Enforcement Report https://transparency.facebook.com/community-standards-enforcement#hate-speech accessed 13 March 2020; that is roughly 87,500 cases per board member per year.

58 Commission Recommendation 2018/334 of 1 March 2018 on measures to effectively tackle illegal content online, C/2018/1177, OJ L 63, 6.3.2018, pp. 50–61.

59 See also the figures provided in Commission Factsheet, ‘How the Code of Conduct Helped Countering Illegal Hate Speech Online’, February (2019) https://ec.europa.eu/info/sites/info/files/hatespeech_infographic3_web.pdf accessed 13 March 2020. The Commission report affirms that ‘The IT companies reported a considerable extension of their network of ‘trusted flaggers’ in Europe and are engaging on a regular basis with them to increase understanding of national specificities of hate speech. In the first year after the signature of the Code of conduct, Facebook reported to have taken 66 EU NGOs on board as trusted flaggers; and Twitter 40 NGOs in 21 EU countries.’

60 Sebastian Schwemer, ‘Trusted Notifiers and the Privatization of Online Enforcement’ (2018) 35 Computer Law & Security Review.

61 Note that evidence from the SCAN project highlights that removal rates differ between the reporting channels used to send the notification, being on average 15 per cent higher for trusted flaggers, with the exceptional case of Google+, where all the notified cases were accepted by the company. See SCAN ‘Diverging Responsiveness on Reports by Trusted Flaggers and General Users – 4th Evaluation of the EU Code of Conduct: SCAN Project Results’ (2018) http://scan-project.eu/wp-content/uploads/2018/08/sCAN_monitoring1_fact_sheet_final.pdf accessed 13 March 2020.

62 Directive 2000/31/EC of the European Parliament and of the Council of 8 June 2000 on Certain Legal Aspects of Information Society Services, in Particular Electronic Commerce, in the Internal Market, [2000] O.J. (L 178) 1, 16 (e-Commerce Directive). Note that the proposed Digital Services Act, COM(2020) 825 final, confirms that providers of intermediary services are not subject to general monitoring obligations.

63 Case 324/09 L’Oréal SA and Others v. eBay International AG and Others [2011] ECR I-06011.

64 Christina Angelopoulos et al. ‘Study of Fundamental Rights Limitations for Online Enforcement through Self-Regulation’ (2016) https://openaccess.leidenuniv.nl/handle/1887/45869 accessed 13 March 2020.

65 Case C-18/18, Eva Glawischnig-Piesczek v. Facebook Ireland Limited [2019] ECLI:EU:C:2019:821.

66 Ms Glawischnig-Piesczek requested Facebook to delete the image and the comments, but it failed to do so. She then filed a lawsuit before the Vienna court of first instance, which eventually resulted in an injunction obliging the social network not only to delete the image and the specific comments, but also to delete any future uploads of the image if it was accompanied by comments that were identical or similar in meaning to the original comments.

67 Questions as translated from the preliminary reference decision of the Oberste Gerichtshof (OGH), case number 6Ob116/17b.

68 In his opinion, A. G. Szpunar affirmed that an intermediary does not benefit from immunity and can ‘be ordered to seek and identify the information equivalent to that characterised as illegal only among the information disseminated by the user who disseminated that illegal information. A court adjudicating on the removal of such equivalent information must ensure that the effects of its injunction are clear, precise and foreseeable. In doing so, it must weigh up the fundamental rights involved and take account of the principle of proportionality’.

69 The CJEU then defined information with an equivalent meaning as ‘information conveying a message the content of which remains essentially unchanged and therefore diverges very little from the content which gave rise to the finding of illegality’ (par. 39).
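
To give a sense of what automated detection of ‘equivalent’ information might involve, here is a deliberately naive Python sketch (the 0.85 similarity threshold and the sample strings are invented for illustration); production filters rely on far more robust techniques, and, as the commentary cited in the next note stresses, even those struggle with context such as irony or quotation.

import difflib

def is_equivalent(candidate: str, banned: str, threshold: float = 0.85) -> bool:
    # Share of matching characters between the two normalised strings: a high
    # ratio means the candidate 'diverges very little' from the banned message.
    ratio = difflib.SequenceMatcher(
        None, candidate.lower().strip(), banned.lower().strip()
    ).ratio()
    return ratio >= threshold

print(is_equivalent("Politician X is utterly corrupt!!", "Politician X is utterly corrupt"))  # True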

70 See Agnieszka Jabłonowska ‘Monitoring Duties of Online Platform Operators Before the Court – Case C-18/18 Glawischnig-Piesczek’ (6 October 2019) http://recent-ecl.blogspot.com/2019/10/monitoring-duties-of-platform-operators.html; Eleftherios Chelioudakis ‘The Glawischnig-Piesczek v. Facebook Case: Knock, Knock. Who’s There? Automated Filters Online’ (12 November 2019) www.law.kuleuven.be/citip/blog/the-glawischnig-piesczek-v-facebook-case-knock-knock-whos-there-automated-filters-online/ accessed 13 March 2020; Marta Maroni and Elda Brogi, ‘Eva Glawischnig-Piesczek v. Facebook Ireland Limited: A New Layer of Neutrality’ (2019) https://cmpf.eui.eu/eva-glawischnig-piesczek-v-facebook-ireland-limited-a-new-layer-of-neutrality/ accessed 13 March 2020.

16 Smart Contracts and Automation of Private Relationships

1 See generally Stefan Grundmann, ‘The Structure of European Contract Law’ (2001) 4 Eur Rev Contr L 505. On mandatory rules on consumer contracts, see Gerhard Wagner, ‘Zwingendes Vertragsrecht’ in Horst Eidenmüller et al., Revision des Verbraucher-acquis (Mohr Siebeck 2011) 1, 14.

2 See especially Stefan Grundmann and Philipp Hacker, ‘The Digital Dimension as a Challenge to European Contract Law’ in Stefan Grundmann (ed.), European Contract Law in the Digital Age (Intersentia 2018) 345; Alberto De Franceschi and Reiner Schulze (eds.), ‘Digital Revolution – New Challenges for the Law: Introduction’ in Digital Revolution – New Challenges for the Law (C. H. Beck 2019) 1–15; Matthias Weller and Matthias Wendland (eds.), Digital Single Market – Bausteine eines Digitalen Binnenmarkts (Mohr Siebeck 2019).

3 Alessandro Morelli and Oreste Pollicino, ‘Metaphors, Judicial Frames and Fundamental Rights in Cyberspace’ (2020) Am J Comp L 1, 26 (forthcoming).

4 Viktoria H. S. E. Robertson, ‘Excessive Data Collection: Privacy Considerations and Abuse of Dominance in the Era of Big Data’ (2020) 57 CML Rev 161–190. On price discrimination based on big data, see Chiara Muraca and Mariateresa Maggiolino, ‘Personalized Prices under EU Antitrust rules’ (2019) Eu Comp L Rev 483.

5 Lawrence Lessig, Code. Version 2.0 (Basic Books 2006) 2: ‘The space seemed to promise a kind of society that real space would never allow–freedom without anarchy, control without government, consensus without power.’

6 Lessig (n 5) 3. On whether cyberspace required new regulations, see also Frank H. Easterbrook, ‘Cyberspace and the Law of the Horse’ (1996) U Chicago Leg Forum 207–216.

7 C. Massimo Bianca, Le autorità private (Jovene 1977).

8 See Florian Möslein (ed.), Private Macht (Mohr Siebeck 2016); Kit Barker et al. (eds.), Private Law and Power (Hart Publishing 2017); Pietro Sirena and Andrea Zoppini (eds.), I poteri privati e il diritto della regolazione (Roma Tre Press 2018).

9 Rodney Bruce Hall and Thomas J Biersteker, The Emergence of Private Authority in Global Governance (Cambridge University Press 2009).

10 A relevant problem that is not tackled in the present essay is the liability of the blockchain-platforms’ operators in cases of bugs or hacks. See Luigi Buonanno, ‘Civil Liability in the Era of New Technology: The Influence of Blockchain’, Bocconi Legal Studies Research Paper No. 3454532, 16 September 2019, available at SSRN: https://ssrn.com/abstract=3454532 (outlining a ‘European strategy’ to face the severe challenges).

11 See William Magnuson, Blockchain Democracy. Technology, Law and the Rule of the Crowd (Cambridge University Press 2020) 61–90.

13 Nick Szabo, ‘Formalizing and Securing Relationships on Public Networks’ (1997) 2 (9) First Monday, at https://doi.org/10.5210/fm.v2i9.548.

14 See article 8-ter Decreto legge 14 December 2018, n. 135 (converted into Legge 11 February 2019, n. 12): ‘Si definisce “smart contract” un programma per elaboratore che opera su tecnologie basate su registri distribuiti e la cui esecuzione vincola automaticamente due o più parti sulla base di effetti predefiniti dalle stesse.’ (‘Smart contracts’ are defined as computer programs that operate on distributed ledger-based technologies and whose execution automatically binds two or more parties according to the effects predefined by said parties.) With respect to the Italian provision, see Andrea Stazi, Automazione contrattuale e contratti intelligenti. Gli smart contracts nel diritto comparato (Giappichelli 2019) 134–135.

15 See Primavera De Filippi and Aaron Wright, Blockchain and the Law. The Rule of Code (Harvard University Press 2018) 74.

16 See generally Eliza Mik, ‘The Resilience of Contract Law in Light of Technological Change’ in Michael Furmston (ed.), The Future of the Law of Contract (Routledge 2020) 112 (opposing all theories seeking to modify the principles of contract law due to the fact that a given transaction is mediated by technology).

17 See Kevin Werbach and Nicholas Cornell, ‘Contracts Ex Machina’ (2017) 67 Duke LJ 313, 318 (declaring that ‘Algorithmic enforcement allows contracts to be executed as quickly and cheaply as other computer code. Cost savings occur at every stage, from negotiation to enforcement, especially in replacing judicial enforcement with automated mechanisms’).

18 See Mateja Durovic and Franciszek Lech, ‘The Enforceability of Smart Contracts’ (2019) 5 Italian LJ 493, 499.

19 See especially Gregorio Gitti, ‘Robotic Transactional Decisions’ (2018) Oss dir civ comm 619, 622; Mateja Durovic and André Janssen, ‘The Formation of Blockchain-Based Smart Contracts in the Light of Contract Law’ (2018) 26 Eur Rev Priv L 753–771 (‘neither on-chain nor off-chain smart contracts are really challenging the classic elements of English Common Law on formation of contracts – offer and acceptance, consideration, intention to create legal relations, and capacity’).

20 Jason G. Allen, ‘Wrapped and Stacked: “Smart Contracts” and the Interaction of Natural and Formal Language’ (2018) 14 Eur Rev Contr L 307–343.

21 See Scott A. McKinney, Rachel Landy, and Rachel Wilka, ‘Smart Contracts, Blockchain, and the Next Frontier of Transactional Law’ (2018) 13 Wash J L Tech & Arts 313, 322 (‘A smart contract, however, is not actually very “smart.” Smart contracts do not (at least, as of the date of this Article) include artificial intelligence, in that a smart contract does not learn from its actions’); Jeffrey M. Lipshaw, ‘The Persistence of Dumb Contracts’ (2019) 2 Stan J Blockchain L & Pol’y 1. With specific regard to the well-known ‘TheDAO’ hack, see the criticism by Adam J. Kolber, ‘Not-So-Smart Blockchain Contracts and Artificial Responsibility’ (2018) 21 Stan Tech L Rev 198. See also Blaise Carron and Valentin Botteron, ‘How Smart Can a Contract Be?’ in Daniel Kraus et al. (eds.), Blockchains, Smart Contracts, Decentralised Autonomous Organisations and the Law (Edward Elgar 2019) 101.

22 See, e.g., Eliza Mik, ‘Smart Contracts: Terminology, Technical Limitations and Real World Complexity’ (2017) 9 L Innovation & Tech 269.

23 See, in this regard, André Janssen, ‘Demystifying Smart Contracts’ in Corjo J. H. Jansen et al. (eds.), Onderneming en Digitalisering (Wolters Kluwer 2019) 15–29, at 22–23.

24 See, e.g., the optimistic view of Jeff Lingwall and Ramya Mogallapu, ‘Should Code Be Law: Smart Contracts, Blockchain, and Boilerplate’ (2019) 88 UMKC L Rev 285.

25 See, generally, Kathryn D. Betts and Kyle R. Jaep, ‘The Dawn of Fully Automated Contract Drafting: Machine Learning Breathes New Life into a Decades Old Promise’ (2017) 15 Duke L & Tech Rev 216; Lauren Henry Scholz, ‘Algorithmic Contracts’ (2017) 20 Stan Tech L Rev 128; Spencer Williams, ‘Predictive Contracting’ (2019) Colum Bus L Rev 621. With respect to the differences between traditional contracts concluded through particular technological devices and contractual automation, which involves the use of AI, see Tom Allen and Robin Widdison, ‘Can Computers Make Contracts’ (1996) 9 Harv J L & Tech 25. On the philosophical implications, cf. Karen Yeung, ‘Why Worry about Decision-Making by Machine?’ in Karen Yeung and Martin Lodge (eds.), Algorithmic Regulation (Oxford University Press 2019) 21.

26 On the different technological steps that lead to a smart contract execution on a blockchain platform, see Michèle Finck, ‘Grundlagen und Technologie von Smart Contracts’ in Martin Fries and Boris P. Paal (eds.), Smart Contracts (Mohr Siebeck 2019) 1, 4–8. See also Valentina Gatteschi, Fabrizio Lamberti, and Claudio Demartini, ‘Technology of Smart Contracts’ in Larry A. DiMatteo et al. (eds.), The Cambridge Handbook of Smart Contracts, Blockchain Technology and Digital Platforms (Cambridge University Press 2019) 37.

27 See Finck (n 26) 9.

28 Cf. Michel Cannarsa, ‘Interpretation of Contracts and Smart Contracts: Smart Interpretation or Interpretation of Smart Contracts?’ (2018) 26 Eur Rev Priv L 773 (pointing out that computer language is deterministic – just one meaning and one result are conceivable – whereas natural language is open to several potentially different meanings).

29 Janssen (n 23) at 24–25.

30 Rolf H. Weber, ‘Smart Contracts: Do We Need New Legal Rules?’ in De Franceschi and Schulze (n 2) 299, 302.

31 Enrico Seidel, Andreas Horsch, and Anja Eickstädt, ‘Potentials and Limitations of Smart Contracts: A Primer from an Economic Point of View’ (2020) 31 Eur Bus L Rev 169, 176–179.

32 Jeremy M. Sklaroff, ‘Smart Contracts and the Cost of Inflexibility’ (2017) 166 U Penn L Rev 263 (arguing that forms of flexibility – linguistic ambiguity and enforcement discretion – create important efficiencies in the contracting process). See also Finck (n 26) 11.

33 See generally Rodrigo A. Momberg Uribe, The Effect of a Change of Circumstances on the Binding Force of Contracts: Comparative Perspectives (Intersentia 2011).

35 Eric Tjong Tjin Tai, ‘Force Majeure and Excuses in Smart Contracts’ (2018) 26 Eur Rev Priv L 787.

36 See McKinney, Landy, and Wilka (n 21) 325.

37 Ibid. at 338.

38 See, e.g., ‘software oracles’, which handle information originating from online sources, such as temperature, prices of commodities and goods, flight or train delays, and so forth; ‘hardware oracles’, which take information directly from the physical world; ‘inbound oracles’, which provide data from the external world; ‘outbound oracles’, which have the ability to send data to the outside world; and ‘consensus-based oracles’, which get their data from human consensus and prediction markets (e.g., Augur, based on Ethereum).
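
A minimal Python sketch of the first category, a ‘software oracle’ (the endpoint URL and the 40-degree trigger are placeholders invented for illustration): the oracle fetches an off-chain fact from an online source, and the contract logic then executes mechanically on that answer.

import json
import urllib.request

def temperature_oracle(city: str) -> float:
    # A 'software oracle': pulls an off-chain fact from an online source.
    # The endpoint below is a placeholder, not a real API.
    url = f"https://example.com/weather/{city}"
    with urllib.request.urlopen(url) as response:
        return float(json.load(response)["temperature_celsius"])

def crop_insurance_contract(city: str) -> str:
    # The contract pays out mechanically once the oracle reports a heatwave.
    return "pay out" if temperature_oracle(city) > 40.0 else "do nothing"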

39 See Janssen (n 23) 23 (declaring that every oracle added to a smart contract decreases the self-enforcement level).

40 Olaf Meyer, ‘Stopping the Unstoppable: Termination and Unwinding of Smart Contracts’ (2020) EuCML 17, at 20–24.

41 See Larry A. DiMatteo and Cristina Poncibó, ‘Quandary of Smart Contracts and Remedies: The Role of Contract Law and Self-Help Remedies’ (2018) 26 Eur Rev Priv L 805 (observing: ‘It is in the area of self-enforcement and remedies where the vision of smart contracts confronts the reality of contract law and business lawyering. Smart contracts need to be drafted by lawyers, focused on client interests and not technological prowess’).

42 Meyer (n 40) 24.

43 See Section 16.4.

44 Robin Matzke, ‘Smart Contracts statt Zwangsvollstreckung? Zu den Chancen und Risiken der digitalisierten privaten Rechtsdurchsetzung’ in Fries and Paal (n 26) 99, 103. See generally, on Internet of Things liability issues, Christiane Wendehorst, ‘Consumer Contracts and the Internet of Things’ in Reiner Schulze and Dirk Staudenmayer (eds.), Digital Revolution: Challenges for Contract Law in Practice (Hart Publishing 2016) 189; Francesco Mezzanotte, ‘Risk Allocation and Liability Regimes in the IoT’ in De Franceschi and Schulze (n 2) 169; specifically on consumer contracts, Katarzyna Kryla-Cudna, ‘Consumer Contracts and the Internet of Things’ in Vanessa Mak et al. (eds.), Research Handbook in Data Science and Law (Edward Elgar 2018) 83.

45 See Thomas Riehm, ‘Smart Contracts und verbotene Eigenmacht’ in Fries and Paal (n 26) 85; Florian Möslein, ‘Legal Boundaries of Blockchain Technologies: Smart Contracts as Self-Help?’ in De Franceschi and Schulze (n 2) 313.

46 With reference to the German legal system, see Christoph G. Paulus and Robin Matzke, ‘Smart Contracts und Smart Meter – Versorgungssperre per Fernzugriff’ (2018) NJW 1905.

47 Möslein (n 45) 318.

48 See Max Raskin, ‘The Law and Legality of Smart Contracts’ (2017) 1 Geo L Tech Rev 305 (pointing out: ‘The central problem in the final question of contract law is what happens when the outcomes of the smart contract diverge from the outcomes that the law demands’). With respect to blockchain technology, see also Roger Brownsword, ‘Automated Transactions and the Law of Contract. When Codes Are Not Congruent’ in Furmston (n 16) 94, 102 (declaring: ‘If such technological enablement or disablement is at variance with what a court applying the law of contract would order, then we have a question of non-congruence’).

49 Such a principle is discussed, in the context of contractual automation, by Brownsword, ibid. 102–110.

50 See Möslein (n 45) 323–324.

51 Council Directive 93/13/EEC of 5 April 1993 on unfair terms in consumer contracts [1993] OJ L 95/29.

52 Ibid., annex, terms referred to in article 3(3), let (q).

53 Ibid., let (i).

54 See Janssen (n 23) 26 (arguing that the unfair terms directive does not per se require a textual form of the contractual terms in order to apply).

55 On the different enforcement mechanisms in the field of unfair terms, see generally Hans-Wolfgang Micklitz, ‘Unfair Terms in Consumer Contracts’ in Norbert Reich et al. (eds.), European Consumer Law (2nd ed., Intersentia 2014) 136; Peter Rott, ‘Unfair Contract Terms’ in Christian Twigg-Flesner (ed.), Research Handbook on EU Consumer and Contract Law (Edward Elgar 2016) 287, 293–296.

56 On the notion of effectiveness in EU consumer law, see generally Norbert Reich, ‘The Principle of Effectiveness and EU Contract Law’ in Jacobien Rutgers and Pietro Sirena (eds.), Rules and Principles in European Contract Law (Intersentia 2015) 45–68.

57 See generally De Filippi and Wright (n 15) at 86–88; Magnuson (n 11) 91–170.

58 See Tatiana Cutts, ‘Smart Contracts and Consumers’ (2019) 122 W Va L Rev 389.

59 Alexander Savelyev, ‘Contract Law 2.0: “Smart” Contracts as the Beginning of the End of Classic Contract Law’, Higher School of Economics Research Paper No. WP BRP 71/LAW/2016, available at SSRN: https://ssrn.com/abstract=2885241.

60 See Danielle D’Onfro, ‘Smart Contracts and the Illusion of Automated Enforcement’ (2020) 61 Wash U J L & Pol’y 173 (arguing that ‘The volume of consumer protection laws, and their tendency to change over time, all but eliminates the prospect of coding smart contracts for perfect compliance ex-ante’).

61 See Oscar Borgogno, ‘Smart Contracts as the (New) Power of the Powerless? The Stakes for Consumers’ (2018) 26 Eur Rev Priv L 885; Janssen (n 23) 26–29.

62 Regulation (EC) No 261/2004 of the European Parliament and of the Council of 11 February 2004 establishing common rules on compensation and assistance to passengers in the event of denied boarding and of cancellation or long delay of flights, and repealing Regulation (EEC) No 295/91 [2004] OJ L 46/1. On the latter, see generally Ricardo Pazos, ‘The Right to a Compensation in Case of Long Delay of Flights: A Proposal for a Future Reform’ (2019) 27 Eur Rev Priv L 695 (indicating i.a. the most relevant decisions of the European Court of Justice, which not seldom depart from the wording of the Regulation, especially with respect to long delays).

63 Borgogno (n 61) 897–898.

65 See for details Martin Fries, ‘Schadensersatz ex machina’ (2019) NJW 901; Anusch Alexander Tavakoli, ‘Automatische Fluggast-Entschädigung durch smart contracts’ (2020) ZRP 46.
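
Part of the appeal of such automated compensation schemes is that the operative rules of Regulation (EC) No 261/2004 reduce largely to fixed thresholds that a program can apply mechanically. A simplified Python sketch of the compensation scale in article 7(1), read together with the Sturgeon/Nelson case law treating delays of three hours or more like cancellations (the sketch deliberately ignores further conditions such as extraordinary circumstances and the 50 per cent reduction under article 7(2)):

def compensation_eur(distance_km: float, delay_hours: float, intra_eu: bool) -> int:
    # Compensation scale of art. 7(1) Reg. 261/2004; extraordinary
    # circumstances and the art. 7(2) reduction are ignored in this sketch.
    if delay_hours < 3:  # Sturgeon/Nelson threshold for long delays
        return 0
    if distance_km <= 1500:
        return 250
    if intra_eu or distance_km <= 3500:
        return 400
    return 600

print(compensation_eur(2200, 4.5, intra_eu=False))  # 400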

66 See generally, on the regulatory challenges, Vincent Mignon, ‘Blockchains – Perspectives and Challenges’ in Kraus et al. (n 21) 1, 9.

67 See generally Franz Hofmann, ‘Smart Contracts und Overenforcement – Analytische Überlegungen zum Verhältnis von Rechtszuweisung und Rechtsdurchsetzung’ in Fries and Paal (n 26) 125.

68 Ibid., at 130.

69 In the aforementioned context of compensation for flight delays or cancellation, the somewhat futuristic proposals for a ‘personalization’ of the legal treatment could be of interest: see, generally, Ariel Porat and Lior Jacob Strahilevitz, ‘Personalizing Default Rules and Disclosure with Big Data’ (2014) 112 Mich L Rev 1417; Omri Ben-Shahar and Ariel Porat, ‘Personalizing Mandatory Rules in Contract Law’ (2019) 86 U Chi L Rev 255. With respect to consumer law, see also Christopher G. Bradley, ‘The Consumer Protection Ecosystem: Law, Norms, and Technology’ (2019) 97 Denv L Rev 35.

70 See for an assessment Nico Kuhlmann, ‘Smart Enforcement bei Smart Contracts’ in Fries and Paal (n 26) 117.

71 Michèle Finck, ‘Blockchains: Regulating the Unknown’ (2018) 19 German LJ 665, 687 (arguing that there are very few blockchain experts, and most regulators have not yet familiarized themselves with the available knowledge on the matter). On the different regulatory techniques concerning blockchain technology, see also Karen Yeung, ‘Regulation by Blockchain: The Emerging Battle for Supremacy between the Code of Law and Code as Law’ (2019) 82 Modern L Rev 207; Roger Brownsword, ‘Smart Contracts: Coding the Transaction, Decoding the Legal Debates’ in Philipp Hacker et al. (eds.), Regulating Blockchain. Techno-Social and Legal Challenges (Oxford University Press 2019).

72 Hilary J. Allen, ‘Regulatory Sandboxes’ (2019) 87 Geo Wash L Rev 579, 592 (declaring that the United Kingdom adopted a regulatory sandbox for fintech in 2016, and Australia, Bahrain, Brunei, Canada, Hong Kong, Indonesia, Malaysia, Mauritius, the Netherlands, Singapore, Switzerland, Thailand, and the United Arab Emirates have followed suit in adopting some form of regulatory sandbox model). See also Dirk A. Zetzsche, Ross P. Buckley, Janos N. Barberis, and Douglas W. Arner, ‘Regulating a Revolution: From Regulatory Sandboxes to Smart Regulation’ (2017) 23 Fordham J Corp & Fin L 31.

73 Allen, ibid.

74 See Magnuson (n 11) 183–184.

75 The notion ‘lex cryptographica’ is adopted by De Filippi and Wright (n 15) 5.

76 See generally Pietro Ortolani, ‘The Impact of Blockchain Technologies and Smart Contracts on Dispute Resolution: Arbitration and Court Litigation at the Crossroads’ (2019) 24 Unif L Rev 430.

77 See Section 16.2.

78 Falco Kreis and Markus Kaulartz, ‘Smart Contracts and Dispute Resolution – A Chance to Raise Efficiency?’ (2019) 37 ASA Bulletin 336, 339 (affirming: ‘If the parties revert to traditional means to resolve their dispute, the efficiency gained during the period of the contract performance will likely be significantly impaired’).

79 Markus Kaulartz, ‘Smart Contract Dispute Resolution’ in Fries and Paal (n 26) 73, 74–75.

80 See, e.g., the ‘Aragon Project’, implemented on Ethereum, which is defined on the official website as ‘a dispute resolution protocol formed by jurors to handle subjective disputes that cannot be resolved by smart contracts’ (https://aragon.org/blog/aragon-court-is-live-on-mainnet). For other examples, cf. Amy Schmitz and Colin Rule, ‘Online Dispute Resolution for Smart Contracts’ (2019) J Disp Resol 103, 116–122.

81 See Section 16.3.

82 Kreis and Kaulartz (n 78) 341.

83 See Ortolani (n 76) 433; Schmitz and Rule (n 80) 123.

84 The system is described by Ortolani, ibid., 434, as follows: ‘This device essentially works like a lock with two keyholes; it can only be opened if two keys are used. Two parties entering into a transaction can use this device to store coins (for example, the price for the sale of certain goods), until the obligations arising out of that transaction have been performed. Both parties are provided with a digital key to the address; if no dispute arises, they can use the two keys to unlock the coins, jointly determining their final destination (typically, the address of the seller). In case of a dispute, however, neither party can access the coins autonomously, but either of them can ask a private adjudicator to review the facts of the case and determine which of the two disputants is entitled to the disputed funds.’
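
A minimal Python sketch of that two-keyhole logic, which in practice is usually realised as a multi-signature script on the blockchain itself rather than in ordinary application code (the class and method names are invented for illustration): the coins are released only to a destination approved by two key-holders, so in a dispute either party together with the adjudicator can unlock them.

class TwoKeyEscrow:
    # Coins stay locked until two authorised key-holders approve the same destination.
    def __init__(self, amount: int, key_holders: set):
        self.amount = amount
        self.key_holders = key_holders  # e.g. {"buyer", "seller", "adjudicator"}
        self.approvals = {}             # key-holder -> approved destination

    def approve(self, key_holder: str, destination: str) -> str:
        if key_holder not in self.key_holders:
            raise PermissionError("not an authorised key-holder")
        self.approvals[key_holder] = destination
        # Two matching keys unlock the funds: buyer and seller if no dispute
        # arises, or one disputant plus the adjudicator if there is one.
        if sum(d == destination for d in self.approvals.values()) >= 2:
            return f"released {self.amount} to {destination}"
        return "pending"

escrow = TwoKeyEscrow(100, {"buyer", "seller", "adjudicator"})
print(escrow.approve("buyer", "seller-address"))        # pending
print(escrow.approve("adjudicator", "seller-address"))  # released 100 to seller-address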

85 See also the proposals of Wulf A. Kaal and Craig Calcaterra, ‘Crypto Transaction Dispute Resolution’ (2017–2018) 73 Bus Law 109.

86 Schmitz and Rule (n 80) 114–124 (envisaging an ‘ODR clause’ to be implemented in the smart contracts).

87 See generally Richard Susskind, Online Courts and the Future of Justice (Oxford University Press 2019) 260–262.

88 Ortolani (n 76) 434.

89 Ortolani (n 76) 435–438.

90 See Christoph Althammer, ‘Alternative Streitbeilegung im Internet’, in Florian Faust and Hans-Bernd Schäfer (eds.), Zivilrechtliche und rechtsökonomische Probleme des Internet und der künstlichen Intelligenz (Mohr Siebeck 2019) 249, at 266–269.

91 See Gauthier Vannieuwenhuyse, ‘Arbitration and New Technologies: Mutual Benefits’ (2018) 35 J Int’l Arb 119.

92 Möslein (n 45).

93 See Kevin Werbach, ‘Trust, but Verify: Why the Blockchain Needs the Law’ (2018) 33 Berkeley Tech LJ 487.

94 See Mark Verstraete, ‘The Stakes of Smart Contracts’ (2019) 50 Loy U Chi LJ 743.


Table 15.1 Hate speech as defined by several major IT companies
