
Artificial Intelligence Risks and Algorithmic Regulation

Published online by Cambridge University Press: 15 June 2022

Pedro Rubim Borges Fortes
Affiliation:
UFRJ, Rio de Janeiro, Brazil
Pablo Marcello Baquero*
Affiliation:
HEC Paris, Jouy-en-Josas, France
David Restrepo Amariles
Affiliation:
HEC Paris, Jouy-en-Josas, France
*Corresponding author. E-mail: [email protected]

Abstract

In this editorial article, we aim to map out the central features of algorithmic regulation and its conceptual basis – seeking to bring together different strands of the literature relating to the topic that have often remained apart. We then reflect on the ways through which algorithmic law could evolve to address the challenges of artificial intelligence in the legal domain, particularly by examining the potential of applying a “prudential” test to determine whether automated decision-making systems can adequately support legal decision-making.

Type: Symposium on Algorithmic Regulation and Artificial Intelligence Risks
Copyright: © The Author(s), 2022. Published by Cambridge University Press

I. Introduction: algorithmic law and regulation and risks of artificial intelligence

The challenges of defining artificial intelligence (AI) have led academics to focus on the algorithm – a procedure for solving a problem in a series of steps – as its foundational element.Footnote 1 Machine-learning algorithms model complex human performance through these processes, having become capable of learning from experience and solving problems in ways that are novel to human operators.Footnote 2 Starting from the concept of the algorithm, scholars from various disciplines have written on algorithmic regulation – now a popular theme among academics, politicians and the public in general as an instrument to address the various challenges brought about by the use of AI in society.Footnote 3

In this editorial article, we propose to map out the central features of algorithmic regulation and its conceptual basis – seeking to bring together different strands of the literature relating to the topic that have often remained apart. We then reflect on the ways by which algorithmic law could evolve to address the challenges of AI in the legal domain, particularly by examining the potential of applying a “prudential” test to determine whether automated decision-making systems can adequately support legal decision-making.

Algorithmic regulation consists of standard setting through computational instructions: mathematical formulae that facilitate the massive generation of knowledge from “Big Data”. On one side, algorithms generate predictions regarding future behaviour based on the analysis of significant amounts of data. On the other, they execute decisions relying on those predictions with relative autonomy – concerning, for instance, credit denial or an increase in an electricity bill. When deployed in the legal domain, while they may help to increase the effectiveness of decision-making, they also create various legal risks, such as those concerning privacy, bias or the manipulation of the democratic process.
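To make this two-sided structure concrete, the following minimal Python sketch pairs a learned prediction with an automatically executed consequence in a credit decision. It is entirely our own illustration – the features, the training data and the 0.5 threshold are invented, not drawn from any deployed system:

```python
# Minimal sketch of the two sides of algorithmic regulation: a model
# that predicts behaviour from data and a rule that autonomously
# executes a decision based on the prediction. All values are invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Past applicants: (annual income in thousands, debt-to-income ratio)
# and whether they defaulted (1) or repaid (0).
X_train = np.array([[52, 0.10], [31, 0.55], [78, 0.20], [24, 0.70]])
y_train = np.array([0, 1, 0, 1])

model = LogisticRegression().fit(X_train, y_train)

def decide_credit(applicant: np.ndarray) -> str:
    """Side one: predict default risk. Side two: execute the decision."""
    risk = model.predict_proba(applicant.reshape(1, -1))[0, 1]
    return "deny" if risk > 0.5 else "approve"  # the embedded standard

print(decide_credit(np.array([45, 0.30])))
```

The final line of decide_credit is where the standard setting occurs: the threshold encodes a level of acceptable risk that was never publicly deliberated, which is precisely why such embedded norms attract regulatory attention.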

Decisions involving key elements of society are increasingly being delegated to algorithmic systems. An example involves the immigration system of the European Union (EU). The European Travel Information and Authorisation System (ETIAS) – an algorithmic system discussed in one of the articles in this symposium – is expected to become operational soon. It will be used to make automated risk assessments that recommend which visa-exempt foreign citizens should be able to enter EU territory. There are many risks involved in the implementation of this system, notably regarding discrimination against people of particular nationalities, races, socioeconomic conditions or educational backgrounds.

In this context, governments and regulatory agencies have an opportunity to intervene by auditing, validating and nullifying algorithms. In the regulatory environment, algorithms thus emerge both as potential objects and as means for risk regulation. In other words, algorithms can also be described as commands articulated through mathematical formulae that contain normativity embedded in their code.Footnote 4 Analogous to recipes, their instructions, guidelines and orientations set standards for safety, privacy and economic development, affecting internal processes, informational transparency and the distribution of outcomes.

The insight that code is law and subject to regulation was the central thesis of Lawrence Lessig’s research on “code” in the late 1990s.Footnote 5 As the normative architecture structures and constrains social and legal power, code shapes and regulates cyberspace through checks and balances built to protect fundamental values.Footnote 6 In the context of the Internet, “regulability” means the ability of the government to regulate the behaviour of citizens while on the Net (“Netizens”), primarily through code.Footnote 7 Regulability depends on the design and plasticity of the technology, which facilitate transformation and adaptability and help address challenges related to information regarding users, geography and use.Footnote 8 Governments may regulate behaviour indirectly through technologies that affect behaviour, influencing the development of code and making behaviour more regulable.Footnote 9 According to Lessig, code’s architecture determines what people can and cannot do, operating as a kind of law dependent on politics, because if code is law, control of code is power.Footnote 10 Code regulates cyberspace because it defines the terms upon which cyberspace is offered.Footnote 11

Updating Lessig’s thesis to our setting of algorithmic regulation, an algorithm may also be considered law from a realistic perspective. As an alternative to the positivistic concept of law,Footnote 12 the realistic theory examines the law-jobs and the institutional rules of the game that function as a social practice orientated to ordering relations between subjects.Footnote 13 Karl Llewellyn focuses his jurisprudence on the jobs that law helps get done, and he examines tools for doing these law-jobs.Footnote 14 His theory of legal rules consists of elements such as: (1) a command to do things as described by the rule; and (2) a predicted consequence calculated and estimated in terms of the concrete case in hand.Footnote 15 Realistic theories of law are forged in the interdisciplinary tradition of socio-legal theories that incorporate historical and sociological insights on law.Footnote 16 An algorithm operates as a functional equivalent of a legal rule, containing an analogous structure of a command and a predicted consequence. The study of the legal rules embedded in these algorithms and of the mechanisms for their normative review may be considered part of an emerging discipline of algorithmic law. A realistic theory defines law-like working tools for achieving objectives and solving problems as components of the machinery of functioning legal institutions.Footnote 17
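This structural analogy can be rendered explicit in code. The sketch below is our own illustration – the speeding rule and its sanction are invented – showing Llewellyn’s two elements, the command and the predicted consequence, as a conditional applied to the facts of a concrete case:

```python
# Llewellyn's rule structure rendered as code (illustrative only):
# (1) a command describing what the rule requires, and (2) a predicted
# consequence calculated in terms of the concrete case in hand.
from dataclasses import dataclass
from typing import Callable

@dataclass
class LegalRule:
    command: Callable[[dict], bool]      # do the facts fall under the rule?
    consequence: Callable[[dict], str]   # the predicted consequence

speeding_rule = LegalRule(
    command=lambda facts: facts["speed"] > facts["limit"],
    consequence=lambda facts: f"fine of {10 * (facts['speed'] - facts['limit'])} units",
)

facts = {"speed": 72, "limit": 50}
if speeding_rule.command(facts):             # the rule "fires" on the facts
    print(speeding_rule.consequence(facts))  # prints: fine of 220 units
```

An algorithmic law perspective would then ask who may inspect, contest and revise such embedded rules.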

Ultimately, regulating algorithmic law involves discussion of how the law must be adapted and how legal tech tools may be designed to achieve regulatory purposes related to different uses of contemporary technology.

This editorial article is organised as follows: in Section II, we provide a conceptual framework to understand and reflect on algorithmic law and regulation, bringing together different strands of the interdisciplinary literature that have often remained apart. In Section III, we propose a prudential test for evaluating algorithmic decision-making in the legal domain in order to improve algorithmic regulation. In Section IV, we discuss some of the challenges related to the risks of AI, providing a contextualised introduction to the articles published in this special issue focused on debates regarding algorithmic regulation, electronic democracy and the character of algorithmic law. These articles were presented in the context of the Algorithmic Law and Society Symposium held at HEC Paris in December 2021.

II. A conceptual framework for algorithmic law and regulation

In this section, we seek to build a conceptual framework to understand and reflect on algorithmic law and regulation, connecting different strands of the literature that have often remained apart.

The term “algorithmic regulation” was coined by Tim O’Reilly only in 2013.Footnote 18 Earlier projects have since been identified retrospectively as regulation supported by computational systems, such as the Chilean Cybersyn Project of the 1970s, an ambitious technological programme aimed at controlling the country’s industrial production.Footnote 19 This resignification of previous experiences occurred because of the powerful idea behind “algorithmic regulation” as a conceptual framework for reflecting on law regulating algorithms regulating law.

This dialectical effect of algorithmic law and regulation was absent from the early definition proposed by Karen Yeung, one of the leading scholars in the field. Originally, Yeung restricted “algorithmic regulation” to regulatory governance systems that use algorithmic decision-making, focusing on regulation through algorithms.Footnote 20 More recently, however, Yeung revised her initial view and, with co-author Lena Ulbricht, adopted a broader terminology that incorporates processes not involving decision-making and includes the regulation of algorithms.Footnote 21 We focus on this broader conceptual framework, which is more aligned with the current experience of algorithmic regulation, because the modes of regulating algorithmic law go beyond decision-making and the use of algorithms as tools for regulation. A conceptual framework for algorithmic law and regulation should consider the dynamics of regulating law.Footnote 22 Regulation of law operates through a multidimensional model in which legal rules, public policies and bodies of law interact by accommodating or integrating competing goals that form part of the regulatory scheme, in such a way that the meaning of law is relative to the particular context of the legal operation.Footnote 23 The responsiveness of regulating law implies the collaboration and cooperation of those subject to such regulation through hybrid forms of regulation.Footnote 24

Yeung formulated her conception of algorithmic regulation based on the functional approach to regulation, composed of a tripartite structure involving the three elements of standard-setting, information-gathering and behavioural modification.Footnote 25 As Yeung and Bronwen Morgan highlighted in their introductory manual on law and regulation, the focus on these three core functions avoids the pursuit of a definitional quest for the proper scope of regulation.Footnote 26 On the one hand, narrow definitions of regulation centre on intentional state action to influence behaviour through establishing, monitoring and enforcing legal rules.Footnote 27 On the other hand, broader definitions of regulation include various forms of social control, even if they are unintentional or originate from a non-state actor.Footnote 28 Even though legal scholars normally adopt a narrower definition based on a state-centric and hierarchical conception of law, algorithmic regulation consists of institutional rules of the game that are also formed by non-state actors and that reflect a more heterarchical conception of law. Instead of the vertical Kelsenian normative pyramid,Footnote 29 the algorithmic law of cyberspace seems to be composed of horizontal normative networks.Footnote 30 In terms of regulatory theory, the contemporary experience of algorithmic regulation provides a complex setting of hybrid regulation in which regulators, regulatees and third parties interact, negotiate and reorientate normative standards that are set and reset through regulation at multiple levels.Footnote 31 Regulatory regimes combine mixed forms of enforced self-regulation (the regulator compels the regulatee to write a set of rules), co-regulation (the regulator and regulatee share responsibility for regulatory design and/or regulatory enforcement) and meta-regulation (the regulatee may define its own rules, but the regulator institutionalises them and monitors the integrity of institutional compliance).Footnote 32 In addition to the more traditional perspective of state actors as regulators acting through regulatory agencies, this complex regulatory space is also occupied by market actors and civil actors performing the role of regulators.Footnote 33 If the traditional forms of regulation were self-regulation (first-party regulation) and independent state regulation (second-party regulation), today the relationships between regulator and regulatee are mediated by third parties that occupy the regulatory space and perform regulatory functions through processes of communication, negotiation, accreditation, monitoring, assessment and auditing, for example.Footnote 34

Importantly, regulatory theories are classified based on the character of the actors that contribute to their emergence and the typical patterns of interaction between the regulatory actors. The typology of regulatory theories is composed of the following types of theories: (1) public interest theories, where regulation is attributed to a public body such as the legislature, a governmental department or regulatory agency, whose deliberation is based on the pursuit of collective goals for the promotion of the general welfare of a particular political community; (2) private interest theories, where regulation emerges from the actions of individuals or groups motivated to maximise their self-interest as private individuals or private bodies, such as lobby groups or corporations; and (3) institutionalist theories of regulation, where regulation emerges through the prominent role of organisations, institutions and systems in the regulatory dynamics that shape outcomes in ways that transcend the preferences and interests of the regulatory participants.Footnote 35 Algorithmic regulation may transcend this public–private divide, as these computational systems are predominantly developed by private actors, but public actors could potentially make regulatory interventions to require those systems to be developed in accordance with the requirements of due process of law. In theory, algorithms could be developed in an exclusively public setting for a planned regulatory purpose to perform a specific governmental function that establishes normative standards, gathers information from citizens and produces consequential effects that influence behaviour. Similarly, algorithms could also theoretically be produced privately by a corporation that defines the rules of the game for a private activity without any direct state intervention. In practice, when the state is involved in “algorithmic regulation”, the experience of private individuals also shapes the regulatory space. On the other hand, when the state does not intervene directly in “algorithmic regulation”, private parties behave under the shadow of the state, and so their experiences are also influenced by state action or omission.Footnote 36 Therefore, in a complex regulatory space, algorithmic regulation may be institutionalised through the roles of organisations, institutions and systems that shape the normativity of algorithms through a combination of public and private contributions to a transcendent final outcome.

Ulbricht and Yeung highlight the growing literature on the lawfulness, legitimacy and acceptability of algorithms, but they consider the relationship between this rich area of research and the concept of “algorithmic regulation” to be uncertain and yet to be interrogated.Footnote 37 Our understanding of algorithmic regulation, however, considers the normative control of the commands embedded in these mathematical formulae to be part and parcel of “algorithmic regulation”, because demands to transform these rules, values and trade-offs are ultimately part of the process of defining these standards. Whenever algorithms are subject to this review process, we may speak of the regulation of algorithms, as this is part of the process of transforming the commands embedded in their computational programs. In exceptional cases, the normative control of algorithms results from judicial review by the courts, as in the pioneering case of digital discrimination through geo-blocking and geo-pricing in the context of the Olympic Games in Rio de Janeiro in 2016.Footnote 38 More commonly, the review of algorithms results from the interaction of various private and public actors in the regulatory space. Today, one important form of third-party regulation is “auditing”, which is now used in various contexts in response to growing pressures for verification requirements.Footnote 39 Cathy O’Neil strongly supports an immediate change in algorithmic law and regulation to incorporate human values into computational systems and to conduct algorithmic audits that analyse the software code and the data in order to correct any unfairness found.Footnote 40 Even if auditors face resistance from web giants, auditing may reveal the algorithms’ inner workings and their prejudices, generating even more public demand for algorithmic accountability.Footnote 41 In this context, O’Neil emphasises the powerful regulatory role of the government in adapting and enforcing these laws and regulations in response to consumer demands for more transparency, information and justice.Footnote 42 Ariel Ezrachi and Maurice E. Stucke also defend auditing algorithms as part of the enforcement toolbox, but they warn of its limited practical appeal, especially because of the technological challenges of producing evidence of unlawfulness in a controlled laboratory test, exercising control over processed data and keeping pace with the state of the art of technological developments.Footnote 43 A similar challenge arose in the laboratory tests of Volkswagen vehicles equipped with a “defeat device” – software that could identify that the car was undergoing laboratory testing and temporarily alter the engine’s emissions performance so as to comply with Californian environmental laws and regulations.Footnote 44 Such situations of fraud against consumers require a combination of normative responses from administrative law, criminal law and civil law, including fines, tort liability and criminal sanctions.Footnote 45
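The kind of algorithmic audit that O’Neil advocates can be illustrated with a small outcome test. The sketch below compares favourable-outcome rates across two groups and applies the “four-fifths” disparity threshold used in US employment auditing; the decision log and group labels are entirely hypothetical:

```python
# Sketch of a simple outcome audit in the spirit of O'Neil's proposal:
# compare favourable-outcome rates across groups and flag disparities
# worth investigating in the code and training data. Data are invented.
from collections import defaultdict

decisions = [  # (group, outcome) pairs logged from an audited system
    ("A", "approve"), ("A", "approve"), ("A", "deny"), ("A", "approve"),
    ("B", "deny"), ("B", "deny"), ("B", "approve"), ("B", "deny"),
]

stats = defaultdict(lambda: {"approve": 0, "total": 0})
for group, outcome in decisions:
    stats[group]["total"] += 1
    stats[group]["approve"] += outcome == "approve"

rates = {g: s["approve"] / s["total"] for g, s in stats.items()}
ratio = min(rates.values()) / max(rates.values())
print(rates, f"disparity ratio = {ratio:.2f}")
if ratio < 0.8:  # the "four-fifths rule" threshold
    print("Flag: possible disparate impact; inspect code and training data.")
```

Such an outcome test cannot prove unlawfulness on its own – which is exactly the limitation Ezrachi and Stucke point to – but it shows how third-party verification can begin without access to the full source code.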

The reference to normativity does not necessarily imply an intentional order that one ought to do something, because the algorithmic recipe may impose an order, a series of acts, guidelines, directions and other technical consequences that constrain, impose or limit some action in a specific way. According to Hakan Hydén, algorithms are primarily technical and secondarily normative, providing conditional instructions and free-standing imperatives for AI systems conducting operations that affect people in their everyday lives.Footnote 46 He considers normativity to be an indirect effect of algorithms, and his neologism “algo-norms” refers to those norms that are related to the societal consequences of the use of algorithms.Footnote 47 The normativity of algorithms originates not in positive law but in the mathematical formula’s structure of commands resulting in predicted consequences. When regulation is defined broadly – encompassing soft law – it refers to mechanisms of social control, including unintentional and non-state processes.Footnote 48 Once intentionality is no longer included in our definition of regulation, anything producing effects on behaviour is considered regulatory.Footnote 49 Regulation may be considered a constitutive mechanism of the market and of property rights.Footnote 50 In the case of electronic commerce, the invisible hand of the market may be displaced by a digitalised hand subject to manipulation and anti-competitive practices when the algorithmic price is no longer a competitive price but merely a fiction created by technology industries.Footnote 51 Because of strong asymmetries of information and power between Big Tech corporations and individual consumers, companies may produce algorithms whose code maximises profit through perfect behavioural pricing discrimination.Footnote 52 Regulation would be necessary to prevent these practices.Footnote 53 Ezrachi and Stucke remind us that competition is normative and that norms shape participants’ incentives and market structures, so that the current landscape of competition may be changed through state intervention and enforcement.Footnote 54 As digital consumers have no power to negotiate or renegotiate the terms of their electronic contracts, the privacy model of “notice and consent” fails to protect their rights, and so novel strategies of privacy by design and consumer empowerment are necessary for market protection.Footnote 55
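What perfect behavioural pricing discrimination might look like in code can be sketched as follows; the willingness-to-pay estimator, the proxy signals and the margins are invented for illustration:

```python
# Hypothetical sketch of behavioural price discrimination: the quoted
# price tracks each consumer's estimated willingness to pay rather than
# a single competitive market price. The estimator is invented.
def estimated_willingness_to_pay(profile: dict) -> float:
    estimate = 40.0
    if profile.get("device") == "high_end_phone":
        estimate *= 1.3  # proxy signals inflate the estimate
    if profile.get("past_purchases", 0) > 10:
        estimate *= 1.2
    return estimate

def personalised_price(profile: dict, cost: float = 25.0) -> float:
    """Charge just under the estimated reservation price, never below cost."""
    return max(cost, round(0.95 * estimated_willingness_to_pay(profile), 2))

# Two consumers, the same product, two different "market" prices.
print(personalised_price({"device": "high_end_phone", "past_purchases": 12}))
print(personalised_price({"device": "old_laptop", "past_purchases": 1}))
```

The algorithmic price here is, in Ezrachi and Stucke’s terms, a fiction: it reflects the consumer’s profile rather than any competitive equilibrium.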

Power dynamics are also relevant to the analysis of the democratic dimensions of regulation. As code is the expression of an algorithmic formula in computational programming language, code is power because it may compel people to do things they would not otherwise do by means of force, coercion, influence and/or manipulation.Footnote 56 Reflecting on the future of politics, Jamie Susskind predicts that digital technology will carry out much of the law enforcement currently done by officials, with algorithms effectively enforcing the law by being programmed to detect and prohibit errant behaviour.Footnote 57 Instead of drivers being coerced into staying under the speed limit, AI may simply be programmed to cap a car’s speed at the legal limit, such that the vehicle is electronically forced to comply with traffic laws. Another important dimension of politics comes from the digital surveillance based on data control to which everyone is subject today, leading to the classification, labelling and scoring of individuals according to the attributions given by AI systems, including the possibility of designing national social credit score systems to rate individual citizens.Footnote 58 Algorithmic filters may also direct information, communication and ideological content across social networks, creating artificial bubbles and echo chambers among people with similar views, reducing the possibility of debate and forming digital environments that are hostile to the reception, incorporation and circulation of certain political ideas.Footnote 59 The political power of these technology companies becomes enormous when they control the code in their digital platforms and their devices, because software may be reprogrammed without user consent or knowledge.Footnote 60 Today, these digital arenas are forums for public debate, and powerful private actors control the algorithmic rules of the game, defining the power to speak, to express and to communicate.Footnote 61 This deficit of governance and accountability provides an opportunity for algorithmic regulation.Footnote 62 In contrast to the original libertarian perspective expressed by John Perry Barlow in his 1996 “Declaration of the Independence of Cyberspace”,Footnote 63 the contemporary political climate seems less resistant to the liberal perspective of algorithmic regulation of the Internet, as symbolised by Tim Berners-Lee’s call for a Magna Carta for the Web.Footnote 64 Similarly, algorithmic regulation of AI is part of the contemporary global political agenda under the leadership of the EU and its call for trustworthy and human-centred AI.Footnote 65

In this context, the notion of SMART law – an acronym expressing the emergence of “scientific, mathematical, algorithmic law shaped by risks and technology” – becomes a useful concept.Footnote 66 The scientific dimension of SMART law originates from its empirical orientation, informed by the best available scientific knowledge and qualified as “evidence-based law”.Footnote 67 The mathematical dimension is expressed by the proliferation of statistical and mathematical tools in the field of law, as exemplified by the use of legal indicators for ranking or rating legal institutions and by the adoption of methods of the economic analysis of law and of analytical methods focusing on questions regarding Big Data in law.Footnote 68 The algorithmic dimension supports data analysis, data implementation and law enforcement through specific digital means, such as the constant connectivity of objects to the Internet in real time (ie the “Internet of Things”) or the use of algorithms for extracting patterns, visualisations and relevant information from masses of data (ie “Big Data”).Footnote 69 The risk-based approach indicates an orientation towards reflexiveness, cost–benefit assessment and the use of risk-management tools.Footnote 70 The technological dimension comes from specialised software solutions in the legal field, ranging from blockchain technology to AI research and robotisation.Footnote 71 Importantly, within a conceptual framework of algorithmic law and regulation, classical distinctions of legal theory (facts/norms, law/regulation, soft law/hard law, code/legal rules) become either redundant or obsolete.Footnote 72

III. A prudential test for algorithmic decision-making

This section examines the importance of evaluating algorithmic decision-making and of setting standards for computer engineers through tests that assess whether a computer prediction or recommendation used to support decision-making in the legal domain can resemble the evidence-based, justifiable, reasonable and prudential activities expected of the legal decision-maker.

This idea is reminiscent of Alan Turing’s proposed “imitation game” as an empirical test to evaluate whether machines can think.Footnote 73 According to Turing, the insurmountable difficulty of defining the meaning of “thinking” forces us to establish a game in which an interrogator is in a room connected to two other participants located in other rooms. These three participants communicate with each other through typewritten text displayed on a teleprinter.Footnote 74 The objective of the interrogator is to pose questions and analyse the responses given by the two other participants so as to identify which of the two is a man and which is a woman.Footnote 75 However, Turing proposed that, instead of a woman, a machine could participate in this game, and engineers could try to develop electronic or digital computers that could perform well in the game by mimicking the actions of a human very closely.Footnote 76 Writing in 1950, Turing predicted that, by the end of the twentieth century, one would be able to speak of machines thinking without being contradicted, as a result of the transformation in the use of words and in general educated opinion.Footnote 77 After challenging a series of arguments against the possibility of machines’ thinking, Turing speculated on the possibility of machine learning in machines with structures analogous to nerve cells that could be stimulated by punishments and rewards in their teaching processes.Footnote 78 Acknowledging that machine learning may appear paradoxical, Turing reiterated that the rules of operation of the machine may change during the learning process, much like changes in constitutional law.Footnote 79 In his visionary fashion, Turing also affirmed that most programs would lead to machines producing outputs that we cannot make sense of or that might seem completely random.Footnote 80 For Turing, machines would compete with humans in all intellectual fields, from typically abstract activities such as playing the game of chess to more social activities such as speaking the English language.Footnote 81 According to Martin Ford, Turing’s seminal article established AI as a modern field of study and set the standards for computer engineers in programming code that would eventually pass the “Turing Test”.Footnote 82

Today, we should likewise evaluate the contemporary experience of algorithmic decision-making and set standards for computer engineers, so that the code they program may eventually pass a test of resembling the evidence-based, justifiable, reasonable and prudential activities of a legal decision-maker. In terms of the performance that we would expect of AI in an imitation game, can machines provide useful predictions for decision-making systems in the legal domain? Perhaps we should adapt the Turing Test to a similar setting in which an impartial spectator may engage in an exchange of messages with other participants in an electronic game that simulates legal knowledge and decision-making through typewritten texts displayed on a teleprinter. This impartial spectator could pose legal questions and analyse the responses given by the participants of the game to identify which one is a lay individual and which is a trained lawyer. Then, instead of a bar-accredited lawyer, a machine could participate in this game, and engineers could attempt to develop a computer that performs well in the game and appears to think like a lawyer. Today, one can already speak of AI trained to mimic the legal actions of professional lawyers, such as the system ROSS.

However, arguments have been made against the possibility of using machines to support legal decision-making. For instance, Melissa Love Koenig, Julie A. Oseid and Amy Vorenberg consider that empathy, imagination and creativity are essential and exclusively human lawyering skills.Footnote 83 Even if AI may provide support through the electronic discovery of documents and with basic legal research, the authors consider that technology will not be able to pursue the artisanal legal crafts of listening with empathy to clients’ stories, devising strategies for a case, imagining how an argument could appeal to an audience and creatively structuring the line of legal argumentation.Footnote 84 In their opinion, empathy and storytelling are core human characteristics that are essential for lawyering and cannot be mastered by AI.Footnote 85 However, their conclusion rests on the fact that human beings are the judges making the ultimate decisions in legal cases.Footnote 86 What about the possibility of machines competing in all intellectual fields, including judging? What would be the standards for AI to exercise the role of judges in judicial decision-making or the role of regulators in standard setting?

One initial challenge for algorithmic decision-making consists in the capacity to gather data and incorporate knowledge of the relevant facts of the case. Fact-finding is an essential part of an evidence-based judgment, and AI should be trained to assimilate the relevant facts of a given case. Importantly, algorithms must often be trained to incorporate information on facts into their computational systems so that they may evaluate the decision to be taken. Consider, for instance, that self-driving cars must become able to recognise their concrete environment by continuously learning to identify other cars and traffic signs through Big Data processing and analysis.Footnote 87 Experience with current projects developing autonomous self-driving vehicles has revealed that unsupervised self-learning processes may lead to flawed outcomes and are riskier than supervised AI projects.Footnote 88
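The supervised/unsupervised distinction underlying this risk assessment can be sketched briefly; the toy feature vectors and labels below are our own illustration:

```python
# Toy contrast between supervised and unsupervised learning: with human
# labels, predictions can be checked against ground truth; without them,
# the system invents categories that nothing external verifies.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.1, 0.2], [0.2, 0.1], [0.9, 0.8], [0.8, 0.9]])  # image features
y = np.array(["traffic_sign", "traffic_sign", "car", "car"])    # human labels

supervised = KNeighborsClassifier(n_neighbors=1).fit(X, y)
print(supervised.predict([[0.15, 0.15]]))  # checkable against labelled truth

unsupervised = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
print(unsupervised.labels_)  # clusters exist, but their meaning is unverified
```

The supervision is what makes errors measurable – and hence auditable – before the system is deployed on the road.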

If AI can successfully provide correct responses in evidence-based clinical decisions in healthcare diagnosis, perhaps algorithms could be developed to support evidence-based empirical decisions in legal analysis as well.Footnote 89 Additionally, both judicial and regulatory decision-making are justifiable, meaning that their grounds are transparent, their rationales are explainable and their fairness is subject to contestability through appeal and other forms of normative review; yet justifiable, transparent, explainable and contestable AI seems very hard to realise in practice.Footnote 90 Moreover, there are criteria for decision-making that are metaphorically associated with the scales of justice, leading to the construction of various techniques in the search for the correct response to a legal problem, such as proportionality, reasonableness and fairness,Footnote 91 although critics consider that this search for justice may be elusive and that these decisions are ultimately based on discretionary exercises of power.Footnote 92 Finally, judging may also be characterised by the prudence or practical reason of the human judge: a disposition to take into consideration the complexities of the institutional setting, to devise strategic behaviour for the advancement of principles and to display patience, modesty and flexibility in order to compromise, to meet resistance and delays and to deal with the contradictions of society.Footnote 93 This personality trait of prudence may be extremely difficult to encode in AI, as revealed by the failure of artificial neural network algorithms to spontaneously learn to develop a plan with patience and caution that protects Ms. Pac-Man from ghost attacks in the Atari game of that name.Footnote 94 Prudence may be a typical characteristic of general human intelligence, but the prudential test challenges computer engineers to develop AI systems that are trained in practical reason and expertise as decision-makers, much like a judge or a regulator.

In this sense, AI would be trained for the specific task of providing judicial decision-making. In this context, one essential question would be to evaluate the social meaning of substituting artificial judges for human judges. As Jack M. Balkin correctly puts it, the substitution of robots for human beings normally has a social meaning that should also be interpreted in terms of its context, morality and politics: a government may decide to substitute AI soldiers for human ones because robots have no families and will not return from war in body bags; or a corporation may decide to substitute AI workers for human ones because robots will not unionise and will not suffer from alcoholism, depression or absenteeism.Footnote 95 On the other hand, some activities are considered to be essentially human, such that our society would value the presence of a “human in the loop” as the decision-maker. Robots and AI may carry out the services and activities that we no longer want to perform. In this sense, we should carefully examine whether we would prefer to be judged by human intelligence or by AI. After the French Revolution, the ideal of judicial decision-making in nineteenth-century France became the literal interpretation of the law through the school of exegesis, which hoped that judges would be nothing more than the “mouth of the law”. Algorithms may be proposed according to this myth of the minimalist judge who, in their judicial decisions, simply verbalises what was already written in the legal code. For a computer engineer pursuing the prudential test of developing a judicial robot, prudence would then consist of the minimalist, neutral and positivist style adopted in the French courts. Importantly, however, ethnographic analysis of the backstage of courtroom proceedings reveals that judges’ decisions are not simply impersonal expressions of the voice of the law; rather, they are the result of a complex situation arising from interlocution with the counsellors in the conference room.Footnote 96 In her analytical essay on algorithmic regulation and the rule of law, Mireille Hildebrandt proposed a typology of algorithmic regulation composed of two types: (1) code-driven regulation, which refers to self-executing algorithms in which standard-setting integrates with behaviour modification; and (2) data-driven regulation, which refers to predictive algorithms that may support decisions by suggesting standards for monitoring, predicting and influencing behaviour.Footnote 97 A typology of algorithmic regulation should distinguish whether the logic of code or of data prevails and whether the mode of operation is automatic or not – a contrast sketched below. We should unpack the logic behind this standard-setting and the presence of a “human in the loop” as part of algorithmic decision-making.
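Hildebrandt’s two types can be contrasted in a short sketch, reusing Susskind’s speed-limit example for the code-driven arm; the functions, thresholds and case data are our own illustration:

```python
# Schematic contrast of Hildebrandt's typology (illustrative only).

def code_driven_speed_control(requested_kmh: float, legal_limit_kmh: float) -> float:
    """Code-driven regulation: self-executing, with standard-setting fused
    into behaviour modification. The vehicle simply cannot exceed the limit."""
    return min(requested_kmh, legal_limit_kmh)

def data_driven_support(case: dict, human_decides) -> str:
    """Data-driven regulation: a predictive score merely supports a decision
    that remains with a human in the loop."""
    risk_score = 0.9 if case["amount"] > 10_000 else 0.1  # toy prediction
    return human_decides(case, risk_score)

print(code_driven_speed_control(90.0, 50.0))  # driver asks for 90, car does 50
print(data_driven_support({"amount": 15_000},
                          lambda case, score: "refer to an official for review"))
```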

IV. Challenges and risks of AI: legal design, risk regulation, politics and democracy

Algorithmic law and regulation challenge everyone to rethink power, democracy, regulation and institutional design, among other themes discussed in the contributions written for this symposium. Revisiting Michel Foucault’s “panopticism” as an instrument for inducing “a state of conscious and permanent visibility that assures the automatic functioning of power”Footnote 98 in the context of our contemporary surveillance capitalist society seems inevitable, even outside the context of the penitentiary system.Footnote 99 Similarly, Foucauldian studies on law and regulation based on the concept of “governmentality” (ie the institutions that exercise complex power over the population) become relevant, as AI and algorithms can be examined as part of the governmental apparatuses and knowledge that governmentalise the contemporary state.Footnote 100 The ubiquity and pervasiveness of power should not be neglected as challenges for algorithmic law and regulation in relation to the asymmetries of power and information in our contemporary digital societies.Footnote 101

Political challenges often lead us to reflect on democracy, and today some even refer to “AI democracy”, “data democracy” and “wiki democracy”.Footnote 102 Competition among various political groups within cyberspace and the AI scene may invite our reflection on “polyarchy” and the agonistic model of democracy, with an opportunity for real political discussion among groups with contrasting ideological opinions and political stakes.Footnote 103 Critics consider that politics becomes frozen by algorithmsFootnote 104 and that political debate is threatened by extremism and propaganda.Footnote 105 “Algocracy” could mean governance by algorithms or even a more extreme version of government by algorithms in a scenario of subordination of human beings to AI.Footnote 106 On the other hand, these risks justify regulation as control through code, governments, self-regulatory standards or the commercial interests of private actors, as Big Tech companies may be required to rewrite their codes to comply with legal norms.Footnote 107 Karl Polanyi’s insight into the embeddedness of economics within social relationships may inspire regulatory transformations and the use of legal design to embed real guarantees for the protection of users’ rights into code.Footnote 108 In a complex regulatory space and with the rise of unelected authorities, regulatory legitimacy may be achieved through expertise and protecting fundamental rights, economic interests and political guarantees.Footnote 109 However, achieving “better regulation” is always a challenge because decisions in this area involve measurements and value judgments that are complex and controversial.Footnote 110 In any event, our societies will have to deal with all of “the confusion and difficulty that notoriously attends regulation of a generative space”, as Jonathan Zittrain frames the challenge facing states, organisations and stakeholders.Footnote 111 Our symposium hopes to contribute to these discussions by conceptually framing algorithmic law and regulation, proposing a prudential test for algorithmic decision-making and inviting readers to reflect on the challenges facing legal design, risk regulation, politics and democracy.

The first article of our symposium on algorithmic regulation is “The Spread of Legal Tech Solutionism and the Need for Legal Design”, in which Siddharth de Souza reflects on the potential of legal design as an integrated approach for improving the responses that technology may provide to legal problems.Footnote 112 As a framework for building comprehensive products and services focused on systemic outcomes, design thinking may contribute to a more legitimate, accountable and accessible delivery of legal services than the ad hoc responses resulting from legal tech solutionism. By explaining that technological solutions produced by the market, based on a logic primarily of high efficiency and low costs, may raise problematic questions of equity and justice, de Souza shows how predictive policing algorithms reinforce the biases found in the police databases used for their training and reproduce power asymmetries related to race. The design of legal tech solutions should consider more deliberative and reflexive processes and the concrete challenges of the legal system, such as the poor training of judges, administrative bottlenecks in judicial institutions and the challenges of accountability and transparency. In this context, legal design considers how to make the legal system work to meet people’s needs by developing participatory processes, evidence-based engagements and more reflective and interactive solutions. By focusing on the empirical reality of the law, designers could facilitate the circulation of legal information and reduce power asymmetries, and legal design could also empower communities, give voice to vulnerable people and find collaborative ways to change life experiences by helping to add value to legal products and services. Particularly in terms of the regulation of legal tech, designers must consider the lived realities of users and collaborate with people so that they can learn to understand, control and interact with algorithms. De Souza concludes his article with the cautionary message that, without careful legal design, information technology may contribute to the exclusion and alienation of product users due to its unfamiliar language, technology and contexts.

In their article “The Risks of Trustworthy Artificial Intelligence: The Case of the European Travel Information and Authorisation System”, Charly Derave, Nathan Genicot and Nina Hetmanska provide a comprehensive analysis of the current European challenges related to the promotion of a human-centric and trustworthy approach to AI.Footnote 113 In parallel with its efforts to lead the enactment of regulatory guidelines for AI based on ethical values, the EU has established ETIAS, which will provide travel authorisations to visa-exempt foreigners. A profiling algorithm will perform the risk assessment, and machine-learning techniques are being considered for ETIAS, which will become the first European automated risk-profiling system used in migration management. The Foucauldian metaphor of the panopticon as a surveillance system reminds us that the six EU databases on third-country nationals perform the role of tools of mass surveillance of foreigners and act as instruments of individualised population management. The European Data Protection Supervisor questioned the necessity and proportionality of such a system and criticised the presupposition that travellers are suspect and must demonstrate their good faith. Decisions should be based on autonomous human assessment and not on automatic algorithmic decision-making. “Profiling” refers to making a prediction about a hidden variable of interest based on rules defining risk profiles and on the data that make up the complex algorithmic decision-making system. The authors highlight the fact that the ETIAS Regulation does not precisely define the nature of the algorithm and employs vague terminology, with references to “risks”, “specific risks” and “specific risk indicators” that need to be further defined. Critics have questioned the volatility of the ever-changing screening rules, the opacity of the unintelligible and publicly inaccessible reasons for considering someone a risky applicant and the potential for indirect discrimination in risk profiles based on age, gender, nationality, place of residence, education and occupation. Algorithmic bias may be encoded in the calculation of risk, reproducing existing inequalities and leading to discriminatory disparities, but such bias may also emerge at the stages of training data and feature selection if the data sample is biased or if the choice of attributes results in unfair outcomes for specific groups of travellers. Algorithmic data processing can produce adverse effects on specific groups, who may be discriminated against through proxies inferred from data on nationality, place of birth and education level. As part of the global mobility infrastructure, ETIAS could be interpreted as an instrument of selective and differentiated inclusion that regulates the mobility of some categories of people and restricts the rights of entry of others through an algorithm designed for visa allocation according to political priorities. As the authors reveal in their case study, ETIAS will represent a massive infrastructure of surveillance and serve as a tool of differential exclusion and individualisation of travel restrictions, and it is likely to discriminate against some protected groups and produce biased results. On the other hand, ETIAS is fully embedded in the “ecosystem of trust” championed by the EU as an instrument aimed at countering future threats and assessing future risks, such as security risks, risks of irregular immigration and health risks.
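To see why vague “risk indicators” worry the authors, consider a deliberately simplified, entirely hypothetical sketch of rule-based risk profiling. None of the indicators, weights or thresholds below is taken from the ETIAS Regulation or its screening rules, which are not publicly known:

```python
# Entirely hypothetical sketch of rule-based risk profiling of the kind
# the authors criticise. No indicator, weight or threshold here is drawn
# from the actual ETIAS screening rules.
RISK_INDICATORS = {
    "age_under_25": 0.2,          # each attribute is a proxy that can
    "occupation_unlisted": 0.3,   # encode indirect discrimination
    "education_below_secondary": 0.25,
}

def risk_score(applicant: dict) -> float:
    """Sum the weights of whichever indicators the applicant matches."""
    return sum(w for ind, w in RISK_INDICATORS.items() if applicant.get(ind))

applicant = {"age_under_25": True, "occupation_unlisted": True}
score = risk_score(applicant)
print(score, "-> manual review" if score >= 0.4 else "-> authorise")
```

Even in this toy form, the opacity the authors describe is visible: nothing in the output tells applicants which indicators fired against them or why the weights are set as they are.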

Finally, Paolo Cavaliere and Graziella Romeo contribute to the symposium with their article titled “From Poisons to Antidotes: Algorithms as Democracy Boosters”, suggesting that algorithmic decision-making can contribute to an output-orientated democratic process centred on the protection of fundamental rights.Footnote 114 Digital technologies have the potential to increase the quality of democracy in times of populism through technology-enabled policymaking mechanisms that may positively affect democratic representation and legitimation, excluding irrational and detrimental concerns from the policymaking process. According to the conception of output (“result”) democracy, institutions gain legitimacy when they maximise the expected value of an independently specified social welfare function. Algorithms may help boost the democratic legitimation of the public bodies that utilise them through technologically developed regulatory standards and the provision of a range of services. Algorithms do not replace political choices; rather, they create conditions for a political choice to be confronted with concrete outputs. Algorithms may also ensure the efficiency of the selection process and the consistency of results. According to the authors, computer science could be used to replace political deliberation, with possible benefits in terms of the efficiency of a democratic system, and algorithms also offer the opportunity to refocus on output legitimacy by connecting inputs and outputs and making such connections rationally appraisable. The democratic soundness of algorithmic decision-making can be framed as a guarantee of political participation in the computational processes through which algorithms learn to select and process data. Additionally, the political community may gain the ability to control algorithmic decision-making processes by choosing the issues allocated to AI and the scope of democratic governance. Moreover, democratisation implies that there is an opportunity to challenge algorithmic decision-making through assessing, questioning and potentially changing the outcome of any non-human decision. Potential risks related to algorithmic decision-making include a lack of privacy and data protection, system failures, unfairness, a lack of transparency and the risk of reinforcing existing inequalities and discrimination. The adequate response to these risks involves evaluating and challenging algorithmic decision-making. Parliaments may scrutinise algorithmic decision-making in order to minimise the potential negative impacts of such technology. In their conclusion, Cavaliere and Romeo state that algorithms may expose populist rhetoric by serving as an instrument of knowledge and a tool for reading reality and solving its problems.

Competing interests

The authors declare none.

Posthumous note

Nina Hetmanska

In March 2022, Nina Hetmanska passed away. She co-authored the article “The Risks of Trustworthy Artificial Intelligence: The Case of the European Travel Information and Authorisation System” in this special issue. Nina was a PhD researcher and an instructor at the Perelman Centre for Legal Philosophy at the Faculty of Law and Criminology, Université Libre de Bruxelles (ULB), where she was responsible for supervising first-year law students in the Introduction to Law course, a task to which she was particularly dedicated. Nina was a young researcher who was full of enthusiasm and promise, and she was strongly committed to reflection and action in the service of the poorest and most excluded. All of us who had the privilege of knowing her and working with her want to pay tribute to her person, her work and her talent.

A full tribute to Nina can be found at https://droit.ulb.be/fr/hommage-a-nina-hetmanska-chercheuse-a-lulb-decedee-le-1er-mars-2022.

References

1 W Barfield, “Towards a Law of Artificial Intelligence” in Research Handbook on the Law of Artificial Intelligence (Cheltenham, Edward Elgar Publishing 2018) p 4.

2 ibid. AI systems have already defeated the best human players in chess, Go and Jeopardy. D Sumpter, Outnumbered: From Facebook and Google to Fake News and Filter-Bubbles – The Algorithms That Control Our Lives (London, Bloomsbury Publishing 2018). Soon, AI will be responsible for our transportation through autonomous cars and for the transportation of goods through self-driving trucks in a safer and cheaper way. LD Burns and C Shulgan, Autonomy: The Quest to Build the Driverless Car – And How It Will Reshape Our World (New York, HarperCollins 2018).

3 Karen Yeung and Martin Lodge highlighted the impact of new technologies on social processes, power relations and the distribution of economic resources as examples of how relevant algorithmic regulation became to sociology, political science and economics. K Yeung and M Lodge, “Algorithmic Regulation: An Introduction” in K Yeung and M Lodge (eds) Algorithmic Regulation (Oxford, Oxford University Press 2019) p 2.

4 PRB Fortes, “Paths to Digital Justice: Judicial Robots, Algorithmic Decision-Making, and Due Process” (2020) 7(3) Asian Journal of Law and Society 453–69.

5 L Lessig, “The Limits in Open Code: Regulatory Standards and the Future of the Net” (1999) 14 Berkeley Technology Law Journal 759; L Lessig, “Law Regulating Code Regulating Law” (2003) 35 Loyola University Chicago Law Journal 1; L Lessig, Code: And Other Laws of Cyberspace (New York, Basic Books 2009).

6 L Lessig, Code, supra, note 5, pp 4–8.

7 ibid, pp 23–24.

8 ibid, pp 31–37.

9 ibid, p 67.

10 ibid, pp 77–79.

11 ibid, pp 83–84.

12 HLA Hart, J Raz and L Green, The Concept of Law (Oxford, Oxford University Press 2012).

13 W Twining, General Jurisprudence: Understanding Law from a Global Perspective (Cambridge, Cambridge University Press 2009) pp 103–17.

14 KN Llewellyn, “The Theory of Rules” in The Theory of Rules (Chicago, IL, University of Chicago Press 2011) pp 64–65.

15 ibid, pp 51–58.

16 BZ Tamanaha, A Realistic Theory of Law (Cambridge, Cambridge University Press 2017) pp 36–37; W Twining, Karl Llewellyn and the Realist Movement (Cambridge, Cambridge University Press 1973); W Twining, Jurist in Context (Cambridge, Cambridge University Press 2019); PRB Fortes, “An Explorer of Legal Borderlands: A Review of William Twining’s Jurist in Context, a Memoir” (2019) 5(2) REI – Revista Estudos Institucionais 777–90; PRB Fortes and I Kampourakis, “Exploring Legal Borderlands: Introducing the Theme” (2019) 5(2) REI – Revista Estudos Institucionais 639–55.

17 KN Llewellyn and EA Hoebel, The Cheyenne Way: Conflict and Case Law in Primitive Jurisprudence (Norman, OK, University of Oklahoma Press 1941) p 42.

18 T O’Reilly, “Open Data and Algorithmic Regulation” (2013) 21 Beyond Transparency: Open Data and the Future of Civic Innovation 289–300.

19 E Medina, “Rethinking Algorithmic Regulation” (2015) 44(6) Kybernetes 1005–19.

20 Yeung and Lodge, supra, note 3, p 2.

21 L Ulbricht and K Yeung, “Algorithmic Regulation: A Maturing Concept for Investigating Regulation of and through Algorithms” (2022) 16(1) Regulation & Governance 3–22, p 18.

22 C Parker, C Scott, N Lacey and J Braithwaite, “Introduction” in C Parker, C Scott, N Lacey and J Braithwaite (eds) Regulating Law (Oxford, Oxford University Press 2004).

23 A Corbett and S Bottomley, “Regulating corporate governance” in C Parker, C Scott, N Lacey and J Braithwaite (eds) Regulating Law (Oxford, Oxford University Press 2004) pp 64–66.

24 H Collins, Regulating Contracts (Oxford, Oxford University Press 2002) pp 65–69.

25 ibid, p 8.

26 B Morgan and K Yeung, An Introduction to Law and Regulation: Text and Materials (Cambridge, Cambridge University Press 2007) p 3.

27 ibid.

28 ibid, pp 3–4.

29 H Kelsen, “Pure Theory of Law – Its Method and Fundamental Concepts” (1934) 50 Law Quarterly Review 474; H Kelsen, “Pure Theory of Law and Analytical Jurisprudence” (1941) 55 Harvard Law Review 44.

30 M Van de Kerchove and F Ost, De la pyramide au réseau?: pour une théorie dialectique du droit (Brussels, Presses de l’Université Saint-Louis 2019).

31 D Levi-Faur, “Regulation and Regulatory Governance” in D Levi-Faur (ed.) Handbook on the Politics of Regulation (Cheltenham, Edward Elgar Publishing 2011) pp 8–11.

32 ibid.

33 ibid.

34 ibid.

35 Morgan and Yeung, supra, note 26, ch 2.

36 RH Mnookin and L Kornhauser, “Bargaining in the Shadow of the Law: The Case of Divorce” (1978) 88 Yale Law Journal 950.

37 Ulbricht and Yeung, supra, note 21, pp 4–5.

38 PRB Fortes, GM Martins and PF Oliveira, “Digital Geodiscrimination: How Algorithms May Discriminate Based on Consumers’ Geographical Location” (2021) 1 Droit et societe 145–66; PRB Fortes, “O consumidor contemporâneo no Show de Truman: a geodiscriminação digital como prática ilícita no direito brasileiro” (2020) 124(28) Revista de Direito do Consumidor 235–60.

39 Levi-Faur, supra, note 31, pp 8–9.

40 C O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (New York, Broadway Books 2016) pp 205–09.

41 ibid, pp 211–12.

42 ibid, pp 212–13.

43 A Ezrachi and ME Stucke, Virtual Competition. The Promise and Perils of the Algorithm-Driven Economy (Cambridge, MA, Harvard University Press 2016) pp 230–31.

44 J Ewing, Faster, Higher, Farther: The Inside Story of the Volkswagen Scandal (New York, Random House 2017); PRB Fortes and PF Oliveira, “A insustentável leveza do ser? A quantificação do dano moral coletivo sob a perspectiva do fenômeno da ilicitude lucrativa e o ‘caso Dieselgate’” (2019) 2(3) Revista IBERC; PRB Fortes, “O Fenômeno da Ilicitude Lucrativa” (2019) 5(1) REI – Revista Estudos Institucionais 104–32.

45 MF Di Rattalma (ed.), The Dieselgate: A Legal Perspective (Berlin, Springer 2017); P Kolba, Davids gegen Goliath: Der VW-Skandal und die Möglichkeit von Sammelklagen (Vienna, Mandelbaum 2017).

46 H Hydén, “AI, Norms, Big Data, and the Law” (2020) 7(3) Asian Journal of Law and Society 409–36; H Hydén, “Sociology of Digital Law and Artificial Intelligence” in J Přibáň (ed.), Research Handbook on the Sociology of Law (Cheltenham, Edward Elgar Publishing 2020).

47 ibid.

48 Levi-Faur, supra, note 31, p 6.

49 ibid.

50 ibid, p 3.

51 Ezrachi and Stucke, supra, note 43, pp 27–33.

52 ibid, pp 129–30.

53 ibid.

54 ibid, pp 223–26.

55 ibid, pp 226–28.

56 J Susskind, Future Politics: Living Together in a World Transformed by Tech (Oxford, Oxford University Press 2018) pp 94–97.

57 ibid, pp 101–03.

58 A Webb, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity (London, Hachette UK 2019) pp 80–85.

59 CR Sunstein, #Republic (Princeton, NJ, Princeton University Press 2018).

60 Susskind, supra, note 56, pp 153–60.

61 ibid, pp 188–94.

62 ibid.

63 See AD Murray, “Internet Regulation” in D Levi-Faur (ed.) Handbook on the Politics of Regulation (Cheltenham, Edward Elgar Publishing 2011) p 269.

64 T Berners-Lee, “An Online Magna Carta: Berners-Lee Calls for Bill of Rights for Web” (The Guardian, 12 March 2014) <https://www.theguardian.com/technology/2014/mar/12/online-magna-carta-berners-lee-web>.

65 S Larsson, CI Bogusz, JA Schwarz and F Heintz, Human-Centred AI in the EU: Trustworthiness as a Strategic Priority in the European Member States (Stockholm, Fores 2020).

66 D Restrepo-Amariles and G Lewkowicz, “Unpacking Smart Law: How Mathematics and Algorithms Are Reshaping the Legal Code in the Financial Sector” (2020) 25(3) Lex Electronica 171–85.

67 ibid, pp 173–74.

68 ibid, pp 174–76.

69 ibid, pp 176–77.

70 ibid, pp 177–78.

71 ibid, pp 178–79.

72 ibid, pp 183–84.

73 AM Turing, “Computing Machinery and Intelligence” (1950) LIX(236) Mind 433.

74 ibid, pp 433–34.

75 ibid.

76 ibid, pp 435–38.

77 ibid, p 442.

78 ibid, pp 454–57.

79 ibid, p 458.

80 ibid, pp 458–59.

81 ibid, p 460.

82 M Ford, Architects of Intelligence: The Truth About AI from the People Building It (Birmingham, Packt Publishing Ltd 2018) p 13.

83 ML Koenig, JA Oseid and A Vorenberg, “Ok, Google, Will Artificial Intelligence Replace Human Lawyering?” (2018) 102 Marquette Law Review 1269.

84 ibid.

85 ibid.

86 ibid, p 1272.

87 A von Ungern-Sternberg, “Autonomous Driving: Regulatory Challenges Raised by Artificial Decision Making and Tragic Choices” in W Barfield and U Pagallo (eds) Research Handbook on the Law of Artificial Intelligence (Cheltenham, Edward Elgar Publishing 2018) pp 264–65.

88 ibid.

89 D Ferrucci, A Levas, S Bagchi, D Gondek and ET Mueller, “Watson: Beyond Jeopardy!” (2011) 199 Artificial Intelligence 93–105.

90 B Waltl and R Vogl, “Increasing Transparency in Algorithmic-Decision-Making with Explainable AI” (2018) 42(10) Datenschutz und Datensicherheit – DuD 613–17; C Henin and D Le Métayer, “Beyond Explainability: Justifiability and Contestability of Algorithmic Decision Systems” (2021) AI & SOCIETY 1–14; J Zerilli, A Knott, J Maclaurin and C Gavaghan, “Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard?” (2019) 32(4) Philosophy & Technology 661–83; H de Bruijn, M Warnier and M Janssen, “The Perils and Pitfalls of Explainable AI: Strategies for Explaining Algorithmic Decision-Making” (2022) 39(2) Government Information Quarterly 101666.

91 R Dworkin, Law’s Empire (Cambridge, MA, Harvard University Press 1986); R Alexy, A Theory of Constitutional Rights (Oxford, Oxford University Press 2010); J Rawls, A Theory of Justice (Cambridge, MA, Harvard University Press 1971).

92 D Kennedy, A Critique of Adjudication [fin de siècle] (Cambridge, MA, Harvard University Press 1998); Hart et al, supra, note 12; RA Posner, Law, Pragmatism, and Democracy (Cambridge, MA, Harvard University Press 2005); R Nozick, Anarchy, State, and Utopia (New York, Basic Books 1974).

93 Fortes, supra, note 4; AT Kronman, “Alexander Bickel’s Philosophy of Prudence” (1985) 94(7) Yale Law Journal 1567–616.

94 Sumpter, supra, note 2, p 219.

95 JM Balkin, “The Path of Robotics Law” (2015) 6 California Law Review 45.

96 B Latour, The Making of Law: An Ethnography of the Conseil d’Etat (Cambridge, Polity Press 2010).

97 M Hildebrandt, “Algorithmic Regulation and the Rule of Law” (2018) 376(2128) Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 20170355.

98 M Foucault, Discipline and Punish: The Birth of the Prison (New York, Vintage 1977) p 201.

99 S Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (London, Profile Books 2019); M Moore, Democracy Hacked: How Technology Is Destabilising Global Politics (New York, Simon & Schuster 2018).

100 M Foucault, Security, Territory, Population: Lectures at the Collège de France, 1977–78 (London, Palgrave Macmillan 2007) pp 108–10.

101 B Golder and P Fitzpatrick, Foucault’s Law (Abingdon-on-Thames, Routledge-Cavendish 2009); B Golder, Foucault and the Politics of Rights (Stanford, CA, Stanford University Press 2015); D Kennedy, “The Stakes of Law, or Hale and Foucault” (1991) 15 Legal Studies Forum 327; B Lange, “Foucauldian-Inspired Discourse Analysis: A Contribution to Critical Environmental Law Scholarship?” in A Philippopoulos-Mihalopoulos (ed.) Law and Ecology: New Environmental Foundations (Abingdon-on-Thames, Routledge 2011) pp 39–64.

102 Susskind, supra, note 56, pp 211–54.

103 RA Dahl, Polyarchy: Participation and Opposition (New Haven, CT, Yale University Press 1971); C Mouffe, The Democratic Paradox (London, Verso 2000).

104 Moore, supra, note 99, p 245.

105 C Bjola and J Pamment, “Introduction” in C Bjola and J Pamment (eds) Countering Online Propaganda and Extremism: The Dark Side of Digital Diplomacy (Abingdon-on-Thames, Routledge 2018).

106 PRB Fortes, “Hasta la Vista, Baby: Reflections on the Risks of Algocracy, Killer Robots, and Artificial Superintelligence” (2021) 70(279-1) Revista de la Facultad de Derecho de México 45–72.

107 I Brown and CT Marsden, Regulating Code: Good Governance and Better Regulation in the Information Age (Cambridge, MA, MIT Press 2013) pp x–xv.

108 K Polanyi, The Great Transformation (Boston, MA, Beacon 1944) pp xxv, 135; B Lange, F Haines and D Thomas (eds), Regulatory Transformations: Rethinking Economy–Society Interactions (London, Bloomsbury Publishing 2015).

109 F Vibert, The New Regulatory Space: Reframing Democratic Governance (Cheltenham, Edward Elgar Publishing 2014); F Vibert, The Rise of the Unelected: Democracy and the New Separation of Powers (Cambridge, Cambridge University Press 2007).

110 S Weatherill, “The Challenge of Better Regulation” in S Weatherill (ed.) Better Regulation (London, Bloomsbury Publishing 2007) p 4.

111 J Zittrain, The Future of the Internet – And How to Stop It (London, Penguin 2008) p 246.

112 S de Souza, “The Spread of Legal Tech Solutionism and the Need for Legal Design” European Journal of Risk Regulation (this issue).

113 C Derave, N Genicot and N Hetmanska, “The Risks of Trustworthy Artificial Intelligence: The Case of the European Travel Information and Authorisation System”, European Journal of Risk Regulation (this issue).

114 P Cavaliere and G Romeo, “From Poisons to Antidotes: Algorithms as Democracy Boosters”, European Journal of Risk Regulation (this issue).