
Part II - Current and Future Approaches to AI Governance

Published online by Cambridge University Press:  28 October 2022

Silja Voeneky, Albert-Ludwigs-Universität Freiburg, Germany
Philipp Kellmeyer, Medical Center, Albert-Ludwigs-Universität Freiburg, Germany
Oliver Mueller, Albert-Ludwigs-Universität Freiburg, Germany
Wolfram Burgard, Technische Universität Nürnberg

The Cambridge Handbook of Responsible Artificial Intelligence: Interdisciplinary Perspectives, pp. 83–184
Publisher: Cambridge University Press
Print publication year: 2022
This content is Open Access and distributed under the terms of the Creative Commons Attribution licence CC BY-NC-ND 4.0 https://creativecommons.org/cclicenses/

6 Artificial Intelligence and the Past, Present, and Future of Democracy

Mathias Risse
Footnote *
I. Introduction: How AI Is Political

Langdon Winner’s classic essay ‘Do Artifacts Have Politics?’ resists a widespread but naïve view of the role of technology in human life: that technology is neutral and everything depends on how it is used.Footnote 1 Winner does so without enlisting an overbearing determinism that makes technology the sole engine of change. Instead, he distinguishes two ways for artefacts to have ‘political qualities’. First, devices or systems might be means for establishing patterns of power or authority, but the design is flexible: such patterns can turn out one way or another. An example is traffic infrastructure, which can assist many people but also keep parts of the population in subordination, say, if they cannot reach suitable workplaces. Secondly, devices or systems are strongly, perhaps unavoidably, tied to certain patterns of power. Winner’s example is atomic energy, which requires industrial, scientific, and military elites to provide and protect energy sources. Artificial Intelligence (AI), I argue, is political the way traffic infrastructure is: it can greatly strengthen democracy, but only with the right efforts. Understanding ‘the politics of AI’ is crucial since Xi Jinping’s China loudly champions one-party rule as a better fit for our digital century. AI is a key component in the contest between authoritarian and democratic rule.

Unlike conventional programs, AI algorithms learn by themselves. Programmers provide data, which a set of methods known as machine learning analyzes for trends and inferences. Owing to their sophistication and sweeping applications, these technologies are poised to dramatically alter our world. Specialized AI is already broadly deployed. At the high end, one may think of AI mastering Chess or Go. More commonly we encounter it in smartphones (Siri, Google Translate, curated newsfeeds), home devices (Alexa, Google Home, Nest), personalized customer services, or GPS systems. Specialized AI is used by law enforcement and the military, in browser searches, advertising and entertainment (e.g., recommender systems), medical diagnostics, logistics, and finance (from assessing credit to flagging transactions), in speech recognition producing transcripts, in trading bots using market data for predictions, but also in music creation and article drafting (e.g., GPT-3’s text generator writing posts or code). Governments track people using AI in facial, voice, or gait recognition. Smart cities analyze traffic data in real time and use it to design services. COVID-19 accelerated the use of AI in drug discovery. Natural language processing – normally used for texts – interprets genetic changes in viruses. Amazon Web Services, Azure, or Google Cloud’s low- and no-code offerings could soon let people create AI applications as easily as websites.Footnote 2
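To make the learning-from-data pattern just described concrete, here is a minimal, hypothetical sketch in Python using the scikit-learn library; the features, labels, and spam-filtering task are invented solely for illustration and are not drawn from the chapter.

```python
# Toy illustration of machine learning: instead of hand-coding rules,
# we hand the algorithm labelled examples and let it infer a decision rule.
# All data below are invented.
from sklearn.linear_model import LogisticRegression

# Each row: [number of links, number of exclamation marks] in a message.
X_train = [[0, 0], [1, 0], [7, 5], [9, 3], [2, 1], [8, 6]]
y_train = [0, 0, 1, 1, 0, 1]  # 0 = legitimate, 1 = spam (toy labels)

model = LogisticRegression()
model.fit(X_train, y_train)   # the 'learning' step: parameters are fit to the data

print(model.predict([[6, 4]]))  # infer a label for an unseen message
```

The point is not the particular model but the division of labor: programmers supply examples, and the method generalizes from them.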

General AI approximates human performance across many domains. Once there is general AI smarter than we are, it could produce something smarter than itself, and so on, perhaps very fast. That moment is the singularity, an intelligence explosion with possibly grave consequences. We are nowhere near anything like that. Imitating how mundane human tasks combine agility, reflection, and interaction has proven challenging. However, ‘nowhere near’ means ‘in terms of engineering capacities’. A few breakthroughs might accelerate things enormously. Inspired by how millions of years of evolution have created the brain, neural nets have been deployed in astounding ways in machine learning. Such research indicates to many observers that general AI will emerge eventually.Footnote 3

This essay is located at the intersection of political philosophy, philosophy of technology, and political history. My purpose is to reflect on medium and long-term prospects and challenges for democracy from AI, emphasizing how critical a stage this is. Social theorist Bruno Latour, a key figure in Science, Technology and Society Studies, has long insisted no entity matters in isolation but attains meaning through numerous, changeable relations. Human activities tend to depend not only on more people than the protagonists who stand out, but also on non-human entities. Latour calls such multitudes of relations actor-networks.Footnote 4 This perspective takes the materiality of human affairs more seriously than is customary, the ways they critically involve artefacts, devices, or systems. This standpoint helps gauge AI’s impact on democracy.

Political theorists typically treat democracy as an ideal or institutional framework rather than considering its materiality. Modern democracies involve structures for collective choice that periodically empower relatively few people to steer the social direction for everybody. As in all forms of governance, technology shapes how this unfolds. Technology explains how citizens obtain information that delineates their participation (often limited to voting) and frees up people’s time to engage in collective affairs to begin with. Devices and mechanisms permeate campaigning and voting. Technology shapes how politicians communicate and bureaucrats administer decisions. Specialized AI changes the materiality of democracy, not just in the sense that independently given actors deploy new tools. AI changes how collective decision making unfolds and what its human participants are like: how they see themselves in relation to their environment, what relationships they have and how those are designed, and generally what forms of human life can come to exist.Footnote 5

Section II explores what democracy is, emphasizes the materiality of ‘early’ and ‘modern’ democracy and rearticulates the perspective we take from Winner. Section III recounts some of the grand techno-skeptical narratives of twentieth-century philosophy of technology, distilling the warnings they convey for the impact of AI on democracy. Section IV introduces another grand narrative, a Grand Democratic AI Utopia, a way of imagining the future we should be wary of. Section V discusses challenges and promises of AI for democracy in this digital century without grand narratives. Instead, we ask how to design AI to harness the public sphere, political power, and economic power for democratic purposes, to make them akin to Winner’s inclusive traffic infrastructure. Section VI concludes.

II. Democracy and Technology

A distinctive feature – and an intrinsic rather than comparative advantage – of recognizably democratic structures is that they give each participant at least minimal ownership of social endeavors and inspire many of them to recognize each other as responsible agents across domains of life. There is disagreement about that ideal, with Schumpeterian democracy stressing peaceful removal of rulers and more participatory or deliberative approaches capturing thicker notions of empowerment.Footnote 6 Arguments for democracy highlight democracy’s possibilities for emancipation, its indispensability for human rights protection, and its promise of unleashing human potentials. Concerns to be overcome include shortsightedness vis-a-vis long-term crises, the twin dangers of manipulability by elites and susceptibility to populists, the potential of competition to generate polarization, and a focus on process rather than results. However, a social-scientific perspective on democracy by David Stasavage makes it easier to focus on its materiality and thus, later on, the impact of AI.Footnote 7 Stasavage distinguishes early from modern democracy, and both of those from autocracy. Autocracy is governance without consent of those people who are not directly controlled by the ruling circles anyway. The more viable and thus enduring autocracies have tended to make up for that lack of consent by developing a strong bureaucracy that would at least guarantee robust and consistent governance patterns.

1. Early Democracy and the Materiality of Small-Scale Collective Choice

Early democracy was a system in which rulers governed jointly with councils or assemblies consisting of members who were independent from rulers and not subject to their whims. Sometimes such councils and assemblies would provide information, sometimes they would assist with governance directly. Sometimes councils and assemblies involved participation from large parts of the population (either directly or through delegation), sometimes councils were elite gatherings. Rulership might be elective or inherited. Its material conditions were such that early democracy would arise in smaller rather than larger polities, in polities where rulers depended on subjects for information about what they owned or produced and so could not tax without compliance, and where people had exit options. Under such conditions, rulers needed consent from parts of the population. Early democracy thus understood was common around the globe and not restricted to Greece, as the standard narrative has it.Footnote 8

However, what is special about Athens and other Greek democracies is that they were most extensively participatory. The reforms of Cleisthenes, in the sixth century BC, divided Athens into 139 demes (150 to 250 men each, women playing no political role) that formed ten artificial ‘tribes’. Demes in the same tribe did not inhabit the same region of Attica. Each tribe sent 50 men, randomly selected, for a year, to a Council of 500 to administer day-to-day affairs and prepare sessions of the Assembly of all citizens. This system fed knowledge and insights from all eligible males into collective decision making without positioning anyone for take-over.Footnote 9 It depended on patterns of production and defense that relied on enslaved people to free parts of the population to attend to collective affairs. Transport and communication had to function to let citizens do their parts. This system also depended on a steady, high-volume circulation of people in and out of office to make governance impersonal, representative, and transparent at the same time. That flow required close bookkeeping to guarantee people were at the right place – which involved technical devices, the material ingredients of democratic governance.

Let me mention some of those. The kleroterion (allotment machine) was a two-by-three-foot slab of rock with a grid of deep, thin slots gouged into it. Integrating some additional pieces, this sophisticated device helped select the required number of men from each tribe for the Council, or for juries and committees where representation mattered. Officers carried allotment tokens – pieces of ceramics inscribed with pertinent information that fit with another piece at a secure location to be produced if credentials were questioned. (Athens was too large for everyone to be acquainted.) With speaking times limited, a water clock (klepsydra) kept time. Announcement boards recorded decisions or messages. For voting, juries used ballots, flat bronze disks. Occasionally, the Assembly considered expelling citizens whose prominence threatened the impersonal character of governance, ostracisms for which citizens carved names into potsherds. Aristotle argued that citizens assembled for deliberation could display virtue and wisdom no individual could muster, an argument for democracy resonant through the ages.Footnote 10 It took certain material objects to make it work. These objects were at the heart of Athenian democracy, devices in actor-networks to operationalize consent of the governed.Footnote 11
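For readers who think in computational terms, the selection logic that the kleroterion implemented in stone – a fixed quota drawn by lot from each tribe – can be sketched in a few lines of Python. The tribe names and rosters below are placeholders, not historical data, and the sketch ignores the device’s physical details.

```python
# Illustrative sketch of sortition as the kleroterion mechanized it:
# draw a fixed number of citizens at random from each tribe, so the Council
# is broadly representative without anyone being able to engineer a seat.
import random

tribes = {
    "Erechtheis": [f"citizen_{i}" for i in range(300)],
    "Aigeis":     [f"citizen_{i}" for i in range(300, 600)],
    # ... the remaining tribes would follow the same pattern
}

SEATS_PER_TRIBE = 50

council = {
    tribe: random.sample(members, SEATS_PER_TRIBE)  # random draw within each tribe
    for tribe, members in tribes.items()
}

for tribe, chosen in council.items():
    print(tribe, len(chosen), "councillors drawn by lot")
```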

2. Modern Democracy and the Materiality of Large-Scale Collective Choice

As a European invention, modern democracy is representative, with mandates that do not bind representatives to an electorate’s will. Representatives emerge from competitive elections under increasingly universal suffrage. Participation is broad but typically episodic. The material conditions for its existence resemble those of early democracy: modern democracies emerge where rulers need subjects to volunteer information and people have exit options. But modern democracies arise in large territories, as exemplified by the United States.Footnote 12 Their territorial dimensions (and large populations) generate two legitimacy problems. First, modern democracy generates distrust because ‘state’ and ‘society’ easily remain abstract and distant. Secondly, there is the problem of overbearing executive power. Modern democracies require bureaucracies to manage day-to-day affairs. Bureaucracies might generate their own dynamics, and eventually citizens no longer see themselves governing. If the head of the executive is elected directly, excessive executive power becomes personal power.Footnote 13

Modern democracy too depends on material features to function. Consider the United States. In 1787 and 1788, Alexander Hamilton, James Madison, and John Jay, under the collective pseudonym ‘Publius’, published 85 articles and essays (‘Federalist Papers’) to promote ratification of the Constitution. Hamilton calls the government the country’s ‘center of information’.Footnote 14 ‘Information’ and ‘communication’ matter greatly to Publius: the former term appears in nineteen essays, the latter in a dozen. For these advocates of this trailblazing system, the challenge is to find structures for disclosure and processing of pertinent information about the country. Publius thought members of Congress would bring information to the capital, after aggregating it in the states. But at the dawn of the Republic, the vastness of the territory made these challenges formidable. One historian described the communication situation as a ‘quarantine’ of government from society.Footnote 15 Improvements in postal services and changes in the newspaper business in the nineteenth century brought relief, facilitating the central role of media in modern democracies. Only such developments could turn modern democracies into actor-networks where representatives do not labor in de facto isolation.Footnote 16

‘The aim of every political constitution is, or ought to be, first to obtain for rulers men who possess most wisdom to discern, and most virtue to pursue, the common good of the society’, we read in Federalist No. 57.Footnote 17 To make this happen, in addition to a political culture where the right people seek office, voting systems are required, the design of which was left to states. Typically, what they devised barely resembled the orderliness of assigning people by means of the kleroterion. ‘Ballot’ comes from Italian ballotta (little ball), and ballots often were something small and round, like pebbles, peas, beans, or bullets.Footnote 18 Paper ballots gradually spread, partly because they were easier to count than beans. Initially, voters had to bring paper and write down properly spelled names and offices. The rise of parties was facilitated by the spread of paper ballots. Party leaders printed ballots, often in newspapers – long strips listing entire slates, or pages to be cut into pieces, one per candidate. Party symbols on ballots meant voters did not need to know how to read or write, an issue unknown when people voted by surrendering beans or by voice.

In 1856, the Australian colony of Victoria passed its Electoral Act, detailing the conduct of elections. Officials had to print ballots and erect booths or hire rooms. Voters marked ballots in secret, and nobody else was allowed in polling places. The ‘Australian ballot’ gradually spread, against much resistance. Officially, such resistance arose because the secret ballot eliminated the public character of the vote that many considered essential to honorable conduct. But de facto there often was resistance because the Australian ballot made it hard for politicians to get people to vote for them in exchange for money (as such voting behavior then became hard to verify). In 1888, Massachusetts passed the first statewide Australian-ballot law in the United States. By 1896, most Americans cast secret, government-printed ballots. Such ballots also meant voters had to be able to read, making voting harder for immigrants, formerly enslaved people, and the uneducated poor. Machines for casting and counting votes date to the 1880s. Machines could fail, or be manipulated, and the mechanics of American elections have remained contested ever since.

3. Democracy and Technology: Natural Allies?

The distant-state and overbearing-executive problems are so substantial that, for Stasavage, ‘modern democracy is an ongoing experiment, and in many ways, we should be surprised that it has worked at all.’Footnote 19 The alternative to democracy is autocracy, which is viable only if backed by competent bureaucracies. Stasavage argues that advances in production and communication often undermined early democracy. New or improved technologies could reduce the information advantages of subjects over rulers, for example regarding the fertility of land: if governments have ways of assessing the value of land, they know how to tax it; if they do not, they have no good way of taxing it without informational input from the owners. Agricultural improvements led to people living closer together, so that bureaucrats could more easily monitor them. Conversely, slow progress in science and development favored the survival of early democracy.

Innovations in writing, mapping, measurement, or agriculture made bureaucracies more effective, and thus made autocracies with functioning bureaucracies more viable. Much depends on sequencing. Entrenched democracies are less likely to be undermined by technological advances than polities where autocracy is a live option. And so, in principle, entrenched democracies these days could make good use of AI to enhance their functionality (and thus make AI a key part of the materiality of contemporary democracies). In China, the democratic alternative never gained much traction. In recent decades, the country made enormous strides under an autocratic system with a competent bureaucracy. Under Xi Jinping, China aggressively advertises its system, and AI has started to play a major role in it, especially in the surveillance of its citizens.Footnote 20

Yuval Noah Harari recently offered a somewhat different view of the relationship between democracy and technology.Footnote 21 Historically, he argues, autocracies have faced handicaps around innovation and growth. In the late twentieth century especially, democracies outperformed dictatorships because they were better at processing information. Echoing Hayek’s Road to Serfdom, Harari thinks twentieth-century technology made it inefficient to concentrate information and power.Footnote 22 But Harari also insists that, at this stage, AI might altogether alter the relative efficiency of democracy vs. authoritarianism.

Stasavage and Harari agree that AI undermines conditions that make democracy the more viable system. This does not mean existing democracies are in imminent danger. In fact, it can only be via technology that individuals matter to politics in modern democracies in ways that solve the distant-state and overbearing-executive problems. Only through the right kind of deployment of modern democracy’s materiality could consent to governance be meaningful and ensure that governance in democracies does not mean quarantining leadership from population, as it did in the early days of the American Republic. As the twenty-first century progresses, AI could play a role in this process. Because history has repeatedly shown how technology strengthens autocracy, democrats must be vigilant vis-à-vis autocratic tendencies from within. Technology is indispensable to make modern democracy work, but it is not its natural ally. Much as in Winner’s infrastructure design, careful attention must be paid to ensure technology advances democratic purposes.Footnote 23

III. Democracy, AI, and the Grand Narratives of Techno-Skepticism

Several grand techno-skeptical narratives have played a significant role in the twentieth-century philosophy of technology. To be sure, that field now focuses on a smaller scale, partly because grand narratives are difficult to establish.Footnote 24 However, these narratives issue warnings about how difficult it might be to integrate AI, specifically, into flourishing democracies – warnings we are well advised to heed, as much is at stake.

1. Lewis Mumford and the Megamachine

Mumford was a leading critic of the machine age.Footnote 25 His 1934 Technics and Civilization traces a veritable cult of the machine through Western history that often devastated creativity and independence of mind.Footnote 26 He argues that ‘men had become mechanical before they perfected complicated machines to express their new bent and interest’.Footnote 27 People had lived in coordinated ways (forming societal machines) and endorsed ideals of specialization, automation, and rationality before physical machines emerged. That they lived that way made people ready for physical machines. In the Middle Ages, mechanical clocks (whose relevance for changing life patterns Mumford tirelessly emphasizes) literally synchronized behavior.Footnote 28

Decades later Mumford revisited these themes in his two-volume ‘Myth of the Machine’.Footnote 29 These works offer an even more sweeping narrative, characterizing modern doctrines of progress as scientifically upgraded justifications for practices the powerful had deployed since pharaonic times to maintain power. Ancient Egypt did machine work without actual machines.Footnote 30 Redeploying his organizational understanding of machines, Mumford argues pyramids were built by machines – societal machines, centralized and subtly coordinated labor systems in which ideas like interchangeability of parts, centralization of knowledge, and regimentation of work are vital. The deified king, the pharaoh, is the chief engineer of this original megamachine. Today, the essence of industrialization is not even the large-scale use of machinery. It is the domination of technical knowledge by expert elites, and our structured way of organizing life. By the early twentieth century, the components of the contemporary megamachine were assembled, controlled by new classes of decision makers governing the ‘megatechnical wasteland’ (a dearth of creative thinking and of possibilities for most people to design their own lives).Footnote 31 The ‘myth’ to be overcome is that this machine is irresistible but also beneficial to whoever complies.

Mumford stubbornly maintained faith in life’s rejuvenating capacities, even under the shadow of the megamachine. But clearly any kind of AI, and social organization in anticipation of general AI, harbors the danger of streamlining the capacities of most people in society – the very tendency Mumford saw at work since the dawn of civilization. This cannot bode well for governance based on meaningful consent.

2. Martin Heidegger and the World As Gestell

Heidegger’s most influential publication on technology is his 1953 ‘The Question Concerning Technology’.Footnote 32 Modern technology is the contemporary mode of understanding things. Technology makes things show up as mattering, one way or another. The mode of revealing (as Heidegger says) characteristic of modern technology sees everything around us as merely a standing-reserve (Bestand), resources to be exploited as means.Footnote 33 This includes the whole natural world, even humans. In 1966, Heidegger even predicted that ‘someday factories will be built for the artificial breeding of human material’.Footnote 34

Heidegger offers the example of a hydroelectric plant converting the Rhine into a mere supplier of waterpower.Footnote 35 In contrast, a wooden bridge that has spanned the river for centuries reveals it as a natural environment and permits natural phenomena to appear as objects of wonder. Heidegger uses the term Gestell (enframing) to capture the relevance of technology in our lives.Footnote 36 The prefix ‘Ge-’ indicates a linking together of elements, as in Gebirge, a mountain range. Gestell is a linking together of things that are posited. The Gestell is a horizon of disclosure according to which everything registers only as a resource. Gestell deprives us of any ability to stand in caring relations to things. Strikingly, Heidegger points out that ‘the earth now reveals itself as a coal mining district, the soil as a mineral deposit’.Footnote 37 Elsewhere he says the modern world reveals itself as a ‘gigantic petrol station’.Footnote 38 Technology lets us relate to the world only in impoverished ways. Everything is interconnected and exchangeable; efficiency and optimization set the stage. Efficiency demands standardization and repetition. Technology saves us from having to develop skills while also turning us into people who are satisfied with lives that do not involve many skills.

For Heidegger, modern democracy with its materiality could only serve to administer the Gestell, and thus is part of the inauthentic life it imposes. His interpreter Hubert Dreyfus has shown how specifically the Internet exemplifies Heidegger’s concerns about technology as Gestell.Footnote 39 As AI progresses, it would increasingly encapsulate Heidegger’s worries about how human possibilities vanish through the ways technology reveals things. Democracies that manage to integrate AI should be wary of such loss.

3. Herbert Marcuse and the Power of Entertainment Technology

Twentieth-century left-wing social thought needed to address why the revolution Marx predicted never occurred. A typical answer was that capitalism persevered by not merely dominating culture, but by deploying technology to develop a pervasive entertainment sector. The working class got mired in consumption habits that annihilated political instincts. But Marxist thought sustains the prospect that, if the right path were found, a revolution would occur. In the 1930s, Walter Benjamin thought the emerging movie industry could help unite the masses in struggle, capitalism’s efforts at cultural domination notwithstanding. Shared movie experiences could allow people to engage with the vast capitalist apparatus that intrudes upon their daily lives. Deployed the right way, this new type of art could help finish off capitalism after all.Footnote 40

When Marcuse published his ‘One-Dimensional Man’ in 1964, such optimism about the entertainment sector had vanished. While he had not abandoned the Marxist commitment to the possibility of a revolution, Marcuse saw culture as authoritarian. Together, capitalism, technology, and entertainment culture created new forms of social control, false needs and a false consciousness around consumption. Their combined force locks one-dimensional man into one-dimensional society, which produces the need for people to recognize themselves in commodities. Powers of critical reflection decline. The working class can no longer operate as a subversive force capable of revolutionary change.

‘A comfortable, smooth, reasonable, democratic unfreedom prevails in advanced industrial civilization, a token of technical progress’, Marcuse starts off.Footnote 41 Technology – as used especially in entertainment, which Benjamin had still viewed differently – immediately enters Marcuse’s reckoning with capitalism. It is ‘by virtue of the way it has organized its technological base, [that] contemporary industrial society tends to be totalitarian’.Footnote 42 He elaborates: ‘The people recognize themselves in their commodities; they find their soul in their automobile, hi-fi set, split-level home, kitchen equipment.’Footnote 43 Today, Marcuse would bemoan that people see themselves in the possibilities offered by AI.

4. Jacques Ellul and Technological Determinism

Ellul diagnoses a systemic technological tyranny over humanity. His most celebrated work on philosophy of technology is ‘The Technological Society’.Footnote 44 In the world Ellul describes, individuals factor into overall tendencies he calls ‘massification’. We might govern particular technologies and exercise agency by operating machines, building roads, or printing magazines. Nonetheless, technology overall – as a Durkheimian social fact that goes beyond any number of specific happenings – outgrows human control. Even as we govern techniques (a term Ellul uses broadly, almost synonymously with a rational, systematic approach, with physical machines being the paradigmatic products), they increasingly shape our activities. We adapt to their demands and structures. Ellul is famous for his thesis of the autonomy of technique, its being a closed system, ‘a reality in itself […] with its special laws and its own determinations’. Technique, he writes, ‘elicits and conditions social, political, and economic change. It is the prime mover of all the rest, in spite of any appearances to the contrary and in spite of human pride, which pretends that man’s philosophical theories are still determining influences and man’s political regimes decisive factors in technical evolution.’Footnote 45

For example, industry and the military began to adopt automated technology. One might think this process resulted from economic or political decisions. But for Ellul the sheer technical possibility provided all the required impetus for going this way. Ellul is a technological determinist, but only for the modern age: technology, one way or another, determines all other aspects of society and culture. It does not have to be this way, and in the past it was not. But now, that is how it is.

Eventually, the state becomes inextricably intertwined with advancements of technique, as well as with corporations that produce machinery. The state no longer represents citizens if their interests contradict those advancements. Democracy fails, Ellul insists: we face a division between technicians, experts, and bureaucrats – the standard-bearers of technique – on the one hand, and politicians who are supposed to represent the people and be accountable on the other. ‘When the technician has completed his task,’ Ellul says, ‘he indicates to the politicians the possible solutions and the probable consequences – and retires.’Footnote 46 The technical class understands technique but is unaccountable. In his most chilling metaphor, Ellul concludes the world technique creates is ‘the universal concentration camp’.Footnote 47 AI would perfect this trend.

IV. The Grand Democratic AI Utopia

Let us stay with grand narratives a bit longer and consider what we might call the Grand Democratic AI Utopia. We are nowhere near deploying anything like what I am about to describe. But once general AI is on our radar, AI-enriched developments of Aristotle’s argument from the wisdom of the multitude should also be. Futurists Yuval Noah Harari and Jamie Susskind touch on something like this;Footnote 48 and with technological innovation, our willingness to integrate technology into imageries for the future will only increase. Environmentalist James Lovelock thinks cyborgs could guide efforts to deal with climate change.Footnote 49 And in his discussion of future risks, philosopher Toby Ord considers AI assisting with our existential problems.Footnote 50 Such thinking is appealing because our brains evolved for the limited circumstances of small bands in earlier stages of Homo sapiens rather than the twenty-first century’s complex and globally interconnected world. Our brains could create that world but might not be able to manage its existential threats, including those we created.

But one might envisage something like this. AI knows everyone’s preferences and views and provides people with pertinent information to make them competent participants in governance. AI connects citizens to debate views; it connects like-minded people but also those who dissent from each other. In the latter case, people are made to hear each other. AI gathers the votes, which eliminates challenges of people reaching polling stations, vote counting, etc. Monitoring everything, AI instantly identifies fraud or corruption, and flags or removes biased reporting or misleading arguments. AI improves procedural legitimacy through greater participation while the caliber of decision making increases because voters are well-informed. Voters no longer merely choose one candidate from a list. They are consulted on multifarious issues, in ways that keep them abreast of relevant complexities, ensure their views remain consistent, etc. More sophisticated aggregation methods are used than simple majoritarian voting.Footnote 51
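As one illustration of what ‘more sophisticated aggregation methods’ could mean, the following sketch implements a Borda count, a standard alternative to plurality voting in social choice theory; the policy options and ballots are invented, and nothing in the chapter commits to this particular rule.

```python
# Borda count: each ballot ranks all options; an option earns points according
# to its rank on every ballot, so the outcome reflects full preference orderings
# rather than first choices alone. Ballots below are invented.
from collections import defaultdict

ballots = [
    ["expand_transit", "green_roofs", "bike_lanes"],
    ["green_roofs", "bike_lanes", "expand_transit"],
    ["green_roofs", "expand_transit", "bike_lanes"],
]

def borda(ballots):
    scores = defaultdict(int)
    for ranking in ballots:
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += n - 1 - position  # top rank earns n-1 points
    winner = max(scores, key=scores.get)
    return winner, dict(scores)

winner, scores = borda(ballots)
print(winner, scores)  # the winner is determined by aggregate rank information
```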

Perhaps elected politicians are still needed for some purposes. But by and large AI catapults early democracy into the twenty-first century while solving the problems of the distant state and of overbearing executive power. AI resolves relatively unimportant matters itself, consulting representative groups for others to ensure everything gets attention without swallowing too much time. In some countries citizens can opt out. Others require participation, with penalties for those with privacy settings that prohibit integration into the system. Nudging techniques – to get people to do what is supposed to be in their own best interest – are perfected for smooth operations.Footnote 52 AI avoids previously prevalent issues around lack of inclusiveness. Privacy settings protect all data. AI calls for elections if confidence in the government falls below a threshold. Bureaucracies are much smaller because AI delivers public services, evaluating experiences from smart cities to create smart countries. Judges are replaced by sophisticated algorithms delivering even-handed rulings. These systems can be arranged such that many concerns about functionality and place of AI in human affairs are resolved internally. In such ways enormous amounts of time are freed up for people to design their lives meaningfully.

As a desirable possibility, something like this might become more prominent in debates about AI and democracy. But we should be wary of letting our thinking be guided by such scenarios. To begin with, imagining a future like this presupposes that for a whole range of issues there is a ‘most intelligent’ solution that for various reasons we have just been unable to put into practice. But intelligence research does not support the conceptual uniqueness of intelligence, that is, the idea that there is only one kind of intelligence.Footnote 53 Appeals to pure intelligence are illusory, and allowing algorithms to take over judgments and decisions like this harbors dangers. It might amount to brainwashing people, with intelligent beings downgraded to responders to stimuli.Footnote 54 Moreover, designing such a system inevitably involves unprecedented efforts at building state capacities, which are subject to hijacking and other abuse. We should not forget that at the dawn of the digital era we also find George Orwell’s 1984.

This Grand Democratic AI Utopia, a grand narrative itself, also triggers the warnings from our four techno-skeptical narratives: Mumford would readily see in such designs the next version of the megamachine, Heidegger detect yet more inauthenticity, Marcuse pillory the potential for yet more social control, and Ellul recognize how in this way the state is ever more inextricably intertwined with advancements of technique.

V. AI and Democracy: Possibilities and Challenges for the Digital Century

We saw in Section II that modern democracy requires technology to solve its legitimacy problems. Careful design of the materiality of democracy is needed to solve the distant-state and overbearing-executive problems. At the same time, autocracy benefits from technological advances because they make bureaucracies more effective. The grand techno-skeptical narratives add warnings to the prospect of harnessing technology for democratic purposes – warnings which, however, do not undermine efforts to harness technology to advance democracy. Nor should we be guided by any Grand Democratic AI Utopia. What then are the possibilities and challenges of AI for democracy in this digital century? Specifically, how should AI be designed to harness the public sphere, political power, and economic power for democratic purposes, and thus make them akin to Winner’s inclusive traffic infrastructure?

1. Public Spheres

Public spheres are actor-networks to spread and receive information or opinions about matters of shared concern beyond family and friendship ties.Footnote 55 Prior to the invention of writing, public spheres were limited to people talking. Their flourishing depended on the availability of places where they could do so safely. The printing press mechanized exchange networks, dramatically lowering the costs of disseminating information or ideas. Eventually, newspapers became so central to public spheres that the press and later the media collectively were called ‘the fourth estate’.Footnote 56 After newspapers and other printed media came the telegraph, then radio, film production, and television. Eventually, leading twentieth-century media scholars coined slogans to capture the importance of media for contemporary life, most distinctly Marshall McLuhan announcing ‘the medium is the message’ and Friedrich Kittler stating ‘the media determine our situation’.Footnote 57

‘Fourth estate’ is an instructive term. It highlights the relevance of the media, and the deference accorded to the more prominent among them, as well as to particular journalists whose voices carry weight with the public. But the term also highlights that media have class interests of sorts: aside from legal regulations, journalists had demographic and educational backgrounds that generated certain agendas rather than others. The ascent of social media, enabled by the Internet, profoundly altered this situation, creating a public sphere where the availability of information and viewpoints was no longer limited by ‘the fourth estate’. Big Tech companies have essentially undermined the point of referring to media that way.

In the Western world, Google became dominant in internet search. Facebook, Twitter, and YouTube offered platforms for direct exchanges among individuals and associations at a scale previously impossible. Political theorist Archon Fung refers to the kind of democracy that arose this way as ‘wide aperture, low deference democracy’: a much wider range of ideas and policies is explored than before, with traditional leaders in politics, media, or culture no longer treated with deference but ignored or distrusted.Footnote 58 Social media generated not only new possibilities for networking but also an abundance of data, gathered and analyzed to predict trends or to target people with messages to which data mining deems them receptive. The 2018 Cambridge Analytica scandal – a British consulting firm obtaining personal data of millions of Facebook users without consent, to be used for political advertising – revealed the potential of data mining, especially for locations where elections tend to be won by small margins.Footnote 59

Digital media have by now generated an online communications infrastructure that forms an important part of the public sphere, whose size and importance will only increase. This infrastructure consists of the paraphernalia and systems that make our digital lives happen, from the hardware of the Internet to institutions that control domain names and the software that maintains the functionality of the Internet and provides tools to make digital spaces usable (browsers, search engines, app stores, etc.). Today, private interests dominate our digital infrastructure. Typically, engineers and entrepreneurs ponder market needs, profiting from the fact that more and more of our lives unfolds on platforms optimized for clicks and virality.

In particular, news is presented to appeal to certain users, which not only creates echo chambers but spreads a plethora of deliberate falsehoods (disinformation, rather than misinformation) to reinforce the worldviews of those users. Political scientists have long lamented the ignorance of democratic citizens and the resulting poor quality of public decision making.Footnote 60 Even well-informed, engaged voters choose based on social identities and partisan loyalties.Footnote 61 Digital media reinforce these tendencies. Twitter, Facebook, YouTube, and competitors seek growth and revenue. Attention-grabbing algorithms of social media platforms can sow confusion, ignorance, prejudice, and chaos. AI tools then readily create artificial unintelligence.Footnote 62
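The dynamic described here can be made explicit with a schematic sketch of engagement-optimized ranking. The posts, predicted probabilities, and weights are invented, and real platforms use far more elaborate models; the point is only that the objective ranks by predicted engagement, not accuracy.

```python
# Schematic engagement ranking: items predicted to draw more clicks and shares
# are shown first; whether they are accurate plays no role in the objective.
posts = [
    {"title": "City council budget report",   "p_click": 0.02, "p_share": 0.01, "accurate": True},
    {"title": "Outrageous rumour about rival", "p_click": 0.20, "p_share": 0.15, "accurate": False},
    {"title": "Fact-check of the rumour",      "p_click": 0.05, "p_share": 0.02, "accurate": True},
]

def engagement_score(post, w_click=1.0, w_share=3.0):
    # Only predicted engagement enters the score.
    return w_click * post["p_click"] + w_share * post["p_share"]

feed = sorted(posts, key=engagement_score, reverse=True)
for post in feed:
    print(round(engagement_score(post), 2), post["title"])
# The inaccurate but provocative item ranks first -- the tendency the text describes.
```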

Maintaining a public sphere where viewpoints can be articulated with authority has recently become much harder with the emergence of deepfakes. Bringing photoshopping to video, deepfakes replace people in existing videos with someone else’s likeness. Currently their reach is mostly limited to pornography, but their potential goes considerably beyond that. In recent decades, video has played a distinguished role in inquiry. What was captured on film served as indisputable evidence, in ways photography no longer could after manipulation techniques became widespread. Until the arrival of deepfakes, videos offered an ‘epistemic backstop’ in contested testimony.Footnote 63 Alongside other synthetic media and fake news, deepfakes might help create no-trust societies where people no longer bother to separate truth from falsehood, and no media help them do so.

What is needed to counteract such tendencies is the creation of what media scholar Ethan Zuckerman calls ‘digital public infrastructure’.Footnote 64 Digital public infrastructure lets us engage in public and civic life in digital spaces, with norms and affordances designed around civic values. Figuratively speaking, designing digital public infrastructure is like creating parks and libraries for the Internet. They are devised to inform us, structured to connect us to both people we agree with and people we disagree with, and meant to encourage dialogue rather than simply reinforcing perceptions. As part of the design of such infrastructures, synthetic media must be integrated appropriately, in ways that require clear signaling of how they are put together. People would operate within such infrastructures also in ways that protect their entitlements as knowers and knowns, their epistemic rights.Footnote 65

One option is to create a fleet of localized, community-specific, public-serving institutions to fulfill the functions in digital space that community institutions have fulfilled for centuries in physical places. There must be some governance model, so that this fleet serves the public. Wikipedia’s system of many editors and authors or Taiwan’s digital democracy platform provide inspiring models for decentralized participatory governance.Footnote 66 Alternatively, governments could create publicly funded nonprofit corporations to manage and maintain the public’s interest in digital life. Specialized AI would be front and center in such work. Properly designed digital public infrastructures would be like Winner’s inclusive traffic infrastructures and could greatly help solve the distant-state and overbearing-executive problems.

2. Political Power

As far as the use of AI for the maintenance of power is concerned, the Chinese social credit system – a broad-based system for gathering information about individuals and bringing that information to bear on what people may do – illustrates how autocratic regimes avail themselves of technological advances.Footnote 67 Across the world, cyberspace has become a frequent battleground between excessively profit-seeking or outright criminal activities and overly strong state reactions to them. By now many tools exist that help governments rein in such activities, but those same tools then also help authoritarians suppress political activities.Footnote 68 While most mass protests in recent years, from Hong Kong to Algeria and Lebanon, were inspired by hashtags, coordinated through social networks, and convened by smartphones, governments have learned how to counter such movements. They control online spaces by blocking platforms and disrupting the Internet.Footnote 69

In his 1961 farewell speech, US president Dwight D. Eisenhower famously warned against the acquisition of unwarranted influence ‘by the military-industrial complex’ and against public policy becoming ‘captive of a scientific-technological elite’.Footnote 70 Those interconnected dangers would be incompatible with a flourishing democracy. Eisenhower spoke only a few years after the Office of Naval Research had partly funded the first Summer Research Project on AI at Dartmouth in 1956, and thereby indicated that the military-industrial complex had a stake in this technology developed by the scientific-technological elite.Footnote 71 Decades later, the 2013 Snowden revelations showed what US intelligence could do with tools we can readily classify as specialized AI. Phones, social media platforms, email, and browsers serve as data sources for the state. Analyzing meta-data (who moved where, connected to whom, read what) provides insights into the operations of groups and the activities of individuals. Private-sector partnerships have considerably enhanced the capacities of law enforcement and the military to track people (also involving facial, gait, and voice recognition), from illegal immigrants at risk of deportation to enemies targeted for killing.Footnote 72

Where AI systems are deployed as part of the welfare state, they often surveil people and restrict access to resources, rather than providing greater support.Footnote 73 Secret databases and little-known AI applications have had harmful effects in finance, business, education, and politics. AI-based decisions on parole, mortgages, or job applications are often opaque and biased in ways that are hard to detect. Democratic ideals require reasons and explanations, but the widespread use of opaque and biased algorithms has prompted one observer to call societies that make excessive use of algorithms ‘black-box societies’.Footnote 74 If algorithms do things humans find hard to assess, it is unclear what would even count as relevant explanations. Such practices readily perpetuate past injustice. After all, data inevitably reflect how people have been faring so far. Thus, they reflect the biases, including racial biases, that have structured exercises of power.Footnote 75 Decades ago Donna Haraway’s ‘Cyborg Manifesto’, a classic at the intersection of feminist thought and the philosophy of technology, warned the digital age might sustain white capitalist patriarchy with ‘informatics of domination’.Footnote 76
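Why data that ‘reflect how people have been faring so far’ reproduce past injustice can be shown with a deliberately simplified sketch: if the historical decisions used as training labels were skewed against one group, a model fit to them inherits the skew. All numbers are invented, and real systems are far more complex.

```python
# Historical bias propagation in miniature: past approvals (the labels) were
# skewed against group 1, so the fitted model scores otherwise identical
# applicants from group 1 lower. Data are invented.
from sklearn.linear_model import LogisticRegression

# Features: [income_score, group]; labels: historical approval decisions.
X = [[0.9, 0], [0.8, 0], [0.7, 0], [0.9, 1], [0.8, 1], [0.7, 1]]
y = [1,        1,        1,        1,        0,        0]

model = LogisticRegression().fit(X, y)

# Two applicants with identical income scores, differing only in group:
print(model.predict_proba([[0.85, 0]])[0][1])  # approval probability, group 0
print(model.predict_proba([[0.85, 1]])[0][1])  # lower probability, group 1
```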

Of course, digital technologies can strengthen democracy. In 2011, Iceland produced the first-ever ‘crowdsourced’ constitutional proposal in the world. In Taiwan, negotiations among authorities, citizens, and companies like Uber and Airbnb were aided by an innovative digital process for deliberative governance called vTaiwan. France relied on digital technologies for the Great National Debate in early 2019 and the subsequent Convention on Climate Change between October 2019 and June 2020, experimenting with deliberation at the scale of a large nation.Footnote 77 Barcelona has become a global leader in the smart city movement, deploying digital technology for matters of municipal governance,Footnote 78 though it is Singapore, Helsinki, and Zurich that do best on the Smart City Index 2020 (speaking to how much innovation goes on in that domain).Footnote 79 An Australian non-profit eDemocracy project, openforum.com.au, invites politicians, senior administrators, academics, businesspeople, and other stakeholders to engage in policy debates. The California Report Card is a mobile-optimized web application promoting public involvement in state government. As the COVID-19 pandemic ravaged the world, democracies availed themselves of digital technologies to let people remain connected and to provide key components of public health surveillance. And while civil society organizations frequently are no match for abusive state power, there are remarkable examples of how even investigations limited to open internet sources can harvest the abundance of available data to pillory abuse of power. The best-known example is the British investigative journalism website Bellingcat, which specializes in fact-checking and open-source intelligence.Footnote 80

One striking fact specifically about the American version of modern democracy is that, when preferences of low- or middle-income Americans diverge from those of the affluent, there is virtually no correlation between policy outcomes and desires of the less advantaged groups.Footnote 81 As far as political power is concerned, the legitimacy of modern democracy is questionable indeed. Democracy could be strengthened considerably by well-designed AI. Analyzing databases would give politicians a more precise image of what citizens need. The bandwidth of communication between voters and politicians could increase immensely. Some forms of surveillance will be necessary, but democratic governance requires appropriate oversight. The digital public infrastructure discussed in the context of the public sphere can be enriched to include systems that deploy AI for improving citizen services. The relevant know-how exists.Footnote 82

3. Economic Power

Contemporary ideals of democracy include egalitarian empowerment of sorts. But economic inequality threatens any such empowerment. Contemporary democracies typically have capitalist economies. As French economist Thomas Piketty has argued, over time capitalism generates inequality because, roughly speaking, owners of shares of the economy benefit more from it than people living on the wages the owners willingly pay.Footnote 83 A worry about democracy across history (also much on the mind of Publius) has been that the masses would expropriate elites. But in capitalist democracies, we must worry about the opposite. It takes sustained policies around taxation, transportation, the design of cities, healthcare, digital infrastructure, pension and education systems, and macro-economic and monetary policy to curtail inequality.
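Piketty’s mechanism is often condensed into the inequality r > g: if the return on capital r exceeds the growth rate g of wages and output, owners pull away from wage earners simply by compounding. A toy sketch with assumed rates (not Piketty’s own figures) shows why even a modest gap matters over time.

```python
# Toy compounding comparison behind the r > g intuition. Rates are assumed
# for illustration only.
r = 0.05    # assumed annual return on capital
g = 0.015   # assumed annual growth of wages

capital, wage = 100.0, 100.0   # both start at the same index value
for year in range(30):
    capital *= 1 + r
    wage *= 1 + g

print(f"after 30 years: capital index {capital:.0f}, wage index {wage:.0f}")
# Capital roughly quadruples while wages grow by about half; the gap compounds.
```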

One concern about AI is that, generally, the ability to produce or use technology is one mechanism that drives inequality, enabling those with the requisite skills to advance – allowing them not only to become well-off but to become owners in the economy in ways that resonate across generations. Technology generally and AI specifically are integral parts of the inequality-enhancing mechanisms Piketty identifies. One question is how the inequality-increasing tendencies play out for those who are not among the clear winners. AI will profoundly transform jobs, not least because aspects of many jobs will be absorbed by AI or otherwise mechanized. These changes also create new jobs, including at the lower end, in the maintenance of hardware and in basic tasks around data gathering and analysis.Footnote 84 On the optimistic side of predictions about the future of work, we find visions of society with traditional jobs gradually transformed, some eliminated and new jobs added – in ways that create much more leisure time for average people, owing to increased societal wealth.

On the pessimistic side, many who are unqualified for meaningful roles in tech economies might be dispensable to the labor force. Their political relevance might eventually amount to little more than that they must be pacified if they cannot be excluded outright. Lest this standpoint be dismissed as Luddite alarmism (‘at the end of the tunnel, there have always been more jobs than before’), we should note that economies where data ownership becomes increasingly relevant and AI absorbs many tasks could differ critically from economies organized around ownership of land or factories. In those earlier economies, large numbers of people were needed to provide labor – and, in the case of factories, also to serve as consumers. Elites could not risk losing too many laborers. But this constraint might not apply in the future. To be sure, a lot here will depend on how questions around control over, or ownership of, data are resolved, questions whose relevance for our future economy cannot be overstated.Footnote 85

As recently argued by Shoshana Zuboff, the importance of data collection for the economy has become so immense that the term ‘surveillance capitalism’ characterizes the current stage of capitalism.Footnote 86 Surveillance capitalism as an economic model was developed by Google, which to surveillance capitalism is what Ford was to mass production. Later, the model was adopted by Facebook, Amazon, and others. Previously, data were collected largely to improve services. But subsequently, data generated as byproducts of interactions with multifarious devices were deployed to develop predictive products, designed to forecast what we will feel, think, or do, but ultimately also to control and change it, always for the sake of monetization. Karl Marx and Friedrich Engels identified increasing commodification as a basic mechanism of capitalism (though they did not use that very term). Large-scale data collection is its maximal version: It commodifies all our lived reality.

In the twentieth century, Hannah Arendt and others diagnosed mechanisms of ‘totalitarian’ power, the state’s all-encompassing power.Footnote 87 Its central metaphor is Big Brother, capturing the state’s omnipresence. Parallel to that, Zuboff talks about ‘instrumentarian’ power, exercised through use of electronic devices in social settings for harvesting profits. The central metaphor is the ‘Big Other’, the ever-present electronic device that knows just what to do. Big Brother aimed for total control, Big Other for predictive certainty.

Current changes are driven by relatively few companies, which futurist Amy Webb calls ‘the Big Nine’: in the US, Google, Microsoft, Amazon, Facebook, IBM, and Apple; in China, Tencent, Alibaba, and Baidu.Footnote 88 At least at the time of Webb’s writing, the Chinese companies were busy consolidating and mining massive amounts of data to serve the government’s ambitions, while the American ones implemented surveillance capitalism, embedded in a legal and political framework that, as of 2021, showed little interest in developing strategic plans for a democratic future – plans that would do for democracy what the Chinese Communist Party did for its system: upgrade it into this century. To be sure, the EU is more involved in such efforts. But none of the Big Nine are based there, and overall, the economic competition in the tech sector seems to be ever more between the United States and China.

The optimistic side of predictions about the future of work seems reachable. But to make that happen in ways that also strengthen democracy, both civil society and the state must step up, and the enormous power concentrated in Big Tech companies needs to be harnessed for democratic purposes.

VI. Conclusion

Eventually there might be a full-fledged Life 3.0, whose participants design not only their cultural context (as in Life 2.0, which sprang from the evolutionary Life 1.0), but also their physical shapes.Footnote 89 Life 3.0 might be populated by genetically enhanced humans, cyborgs, uploaded brains, as well as advanced algorithms embedded into any manner of physical device. If Life 3.0 ever emerges, new questions for governance arise. Would humans still exercise control? If so, would there be democracies, or would some people or countries subjugate everybody else? Would it be appropriate to involve new intelligent entities in governance, and what would they have to be like for the answer to be affirmative? If humans are not in control, what will governance be like? Would humans even be involved?Footnote 90

It is unclear when questions about democracy in Life 3.0 will become urgent. Meanwhile, as innovation keeps happening, societies will change. Innovation will increase awareness of human limitations and set in motion different ways for people to deal with them. As Norbert Wiener, whose invention of cybernetics inaugurated later work on AI, stated in 1964:

The world of the future will be an ever more demanding struggle against the limitation of our intelligence, not a comfortable hammock in which we can lie down to be waited upon by our robot slaves.Footnote 91

Maybe more and more individuals will want to adapt to technological change and perhaps deploy technology to morph into a transhuman stage.Footnote 92 Generally, what technology they use – the materiality of their lives – affects who people are and want to become. Technology mediates how we see ourselves in relation to our social and natural environment, how we engage with people, animals, and material objects, what we do with ourselves, how we spend our professional lives, etc. In short, technology critically bears on what forms of human life get realized or even imagined. For those that do get realized or imagined, what it means to be part of them cannot be grasped without comprehending the role of technology in them.Footnote 93

As we bring about the future, computer scientists will become ever more important, also as experts in designing specialized AI for democratic purposes. That raises its own challenges. Much as technology and democracy are no natural allies, technologists are no natural champions of, nor even qualified advisers to, democracy. Any scientific activity, as Arendt stated some years before Wiener wrote the words just cited, insofar as it acts into nature from the standpoint of the universe and not into the web of human relationships, lacks the revelatory character of action as well as the ability to produce stories and become historical, which together form the very source from which meaningfulness springs into and illuminates human existence.Footnote 94

Democracy is a way of life more than anything else, one that greatly benefits from the kind of action Arendt mentions. And yet modern democracy critically depends on technology to be the kind of actor-network that solves the distant-state and overbearing-executive problems. Without suitable technology, modern democracy cannot survive. Technology needs to be consciously harnessed to become like Winner’s inclusive traffic infrastructure, and both technologists and citizens generally need to engage with ethics and political thought to have the spirit and dedication to build and maintain that kind of infrastructure.

7 The New Regulation of the European Union on Artificial Intelligence
Fuzzy Ethics Diffuse into Domestic Law and Sideline International Law

Thomas Burri
I. Introduction

In the conventional picture, international law emanates from treaties states conclude or customs they observe. States comply with binding international law and ensure compliance in the domestic context. In this picture, states in a ‘top-top’ process agree on the law before it trickles down to the domestic legal order where it is implemented. Norms made in other ways are considered ‘soft’, meaning that they provide mere guidance but are technically not binding, or are considered irrelevant to international law.

Obviously, there is room for nuance in the conventional take on international law and its sources. Soft law, for instance, can acquire authority that comes close to binding character.Footnote 1 It can also serve to interpret binding law that would otherwise remain ambiguous.Footnote 2 However, traditional international law ignores that law is also created outside of its formal processes. Norms can notably consolidate independently from the will of states in speedy, subcutaneous processes. Norms can diffuse subliminally across the world into municipal laws which incorporate and make them binding domestically. In this informal process, international law enters the stage late, if at all. It can only retrace the law that has already been locked in domestically. This informal process resembles ‘bottom-up international law’,Footnote 3 though its character is more ‘bottom to bottom’ and ‘transnational’. The process shall be referred to as ‘norm diffusion’ in this chapter. It is illustrated through the creation of norms governing Artificial Intelligence (AI).

The informal process of law creation described above is far from ubiquitous. It can be hard to trace, for when international law codifies or crystallizes ‘new’ norms, it tends to obscure their origin in previous processes of law creation. It is also messy, for it does not adhere to the hierarchies that distinguish conventional international law. It is all the more worth discussing norm diffusion to complement the picture of international law and its sources.

The present chapter could have examined norm diffusion in the current global public health crisis. It seems that in the COVID-19 pandemic, behavioural norms informed by scientific expertise take shape rapidly, diffuse globally, and are incorporated into domestic law. In contrast, international lawyers are only now beginning to discuss a more suitable legal framework. However, rather than engaging with the ongoing chaotic normative process in public health, this chapter discusses a more mature and traceable occurrence of norm diffusion, namely that of the regulation of AI. The European Commission’s long-awaited proposal from April 2021 for a regulation on AI marks the perfect occasion to illustrate the diffusion of AI norms.

This chapter proceeds in three steps. First, it examines the creation of ethical norms designed to govern AI (Section II). Second, it investigates the diffusion of such norms into domestic law (Section III). This section examines the European Commission’s recent legislative proposal to show how it absorbs ethical norms on AI. This examination likewise sheds light on the substance of AI norms. Section III could also be read on its own, in other words, without regard to international law-making, if one wished to learn only about the origins and the substance of the European Union regulation in the offing. Third, the chapter discusses how the process of norm diffusion described in Sections II and III sidelines international law (Section IV). Section V concludes and offers an outlook.

II. The Creation of Ethical Norms on AI

The creation of ethical norms governing AI has taken many forms over a short period of time. It began with robotics. Roughly 50 years ago, Isaac Asimov’s science fiction showed how ambiguous certain ethical axioms were when applied to intelligent robots.Footnote 4 Since then, robotics has made so much progress that scientists have begun to take an interest in ethical principles for robotics. Such principles, which were prominently enunciated in the United Kingdom in 2010, addressed the potential harm caused by robots, responsibility for damage, fundamental rights in the context of robotics, and several other topics, including safety/security, deception, and transparency.Footnote 5 The same or similar aspects turned out to be relevant for AI after it had re-awakened from hibernation. Two initiatives were significant in this regard, namely the launch of the One Hundred Year Study on Artificial Intelligence at Stanford University in 2014Footnote 6 and an Open LetterFootnote 7 signed by researchers and entrepreneurs in 2015.Footnote 8 Both initiatives sought to guide research toward beneficial and robust AI.Footnote 9 In their wake, the IEEE, an organization of professional engineers, in 2015 embarked on a broad public initiative aimed at pinning down the ethics of autonomous systems;Footnote 10 a group of AI professionals gathered to generate the Asilomar principles for AI, which were published in 2017Footnote 11; and an association of experts put forward ethical principles for algorithms and programming.Footnote 12 This push to establish ethical norms occurred in lockstep with the significant technological advances in AI,Footnote 13 and it is against this background that it must be understood.

In parallel, a discussion began to take shape within the Convention on Certain Conventional Weapons (CCW)Footnote 14 in Geneva. This discussion soon shifted its focus to the use of force by means of autonomous systems.Footnote 15 It notably zeroed in on physically embodied weapons systems – a highly specialized type of robot – and refrained from considering disembodied weapons, sometimes called cyberweapons.Footnote 16 The focus on embodimentFootnote 17 had the effect of keeping AI out of the limelight in Geneva for a long time.Footnote 18 As a broader consequence, the international law community became fixated on an exclusive and exotic aspect – namely physical (‘kinetic’) autonomous weapons systems – while the technological development was more comprehensive. Despite their narrow focus, the seven years of discussions in Geneva have yielded few concrete results, other than a great deal of publicity.Footnote 19

At about the same time, autonomous cars also became the subject of ethical discussion. This discussion, however, soon got bogged down in largely theoretical, though fascinating, ethical dilemmas, such as the trolley problem.Footnote 20 However, unlike those gathered in Geneva to ponder autonomous weapons systems, those intent on putting autonomous cars on the road were pragmatic. They found ways of generating meaningful output that could be implemented.Footnote 21

In 2017, the broader public beyond academic and professional circles became aware of the promises and perils of AI. Civil society began to discuss the ethics of AI and soon produced tangible output.Footnote 22 Actionable principles were also proposed on behalf of womenFootnote 23 and labour,Footnote 24 and AlgorithmWatch, a now notable non-governmental organization, was founded.Footnote 25

In step with civil society, private companies adopted ethical principles concerning AI.Footnote 26 Such principles took different shapes depending on companies’ fields of business. The principles embody a certain degree of self-commitment, though one that is not subject to outside verification.Footnote 27 Parts of the private sector and the third sector have also joined forces, most prominently in the Partnership on AI and its tenets on AI.Footnote 28

The development has not come to a halt today. Various organizations continue to mull over ethical norms to govern AI.Footnote 29 However, most early proponents of such norms have moved from the formation stage to the implementation stage. Private companies are currently applying the principles to which they unilaterally subscribed. After having issued one of the first documents on ethical norms,Footnote 30 the IEEE is now developing concrete technical standards to be applied by developers to specific applications of AI.Footnote 31 ISO, another standard-setting organization, is currently developing such standards as well.Footnote 32 Domestic courts and authorities are adjudicating the first cases on AI.Footnote 33

At this point, it is worth pausing for a moment. The current section sketched a process in which multiple actors shaped and formed ethical norms on AI and are now implementing them. (As Section IV will explain, states have not been absent from this process.) This section could now go on to distil the essence of the ethical norms. This would make sense as the ethics remain unconsolidated and fuzzy. But much important work has already been done in this direction.Footnote 34 In fact, for present purposes, no further efforts are necessary because, while norms remain vague, they have now begun to merge into domestic law. However, the diffusion of ethical norms is far from being a linear and straightforward process with clear causes. Instead, it is multidirectional, multivariate, gradual, and open-ended, with plenty of back and forth. Hence, the next section, as it looks at norm diffusion from the incoming end, in other words, from the perspectives of states and domestic law, is best read as a continuation of the present section. The developments outlined have also occurred in parallel to those in municipal law, which are the topic of the next section.

III. Diffusion of Ethical Norms into Domestic Law: The New Regulation of the European Union on AI

A relevant sign of diffusion into domestic law is states’ first engagement with ethics and AI. For some states, including China, France, Germany, and the United States, such engagement began relatively early with the adoption of AI strategiesFootnote 35 in which ethical norms figured more or less prominently. The French president, for instance, stated a commitment to establish an ethics framework.Footnote 36 China, in its strategy, formulated the aim to ‘[d]evelop laws, regulations, and ethical norms that promote the development of AI’.Footnote 37 Germany’s strategy was to task a commission to come up with recommendations concerning ethics.Footnote 38 The US strategy, meanwhile, was largely silent on ethics.Footnote 39

Some state legislative organs also addressed the ethics of AI early on, most notably, the comprehensive report published by the United Kingdom House of Lords in 2018.Footnote 40 It, among other things, recommended elaborating an AI code to provide ethical guidance and a ‘basis for statutory regulation, if and when this is determined to be necessary’.Footnote 41 The UK report also suggested five ethical principles as a basis for further work.Footnote 42 In a similar vein, the Villani report, which had preceded the French presidential strategy, identified five ethical imperatives.Footnote 43

In the EU, a report drafted within the European Parliament in 2016 drew attention to the need to examine ethics further.Footnote 44 It dealt with robotics because AI was not yet a priority and included a code of rudimentary ethical principles to be observed by researchers. In 2017, the European Parliament adopted the report as a resolution,Footnote 45 putting pressure on the Commission to propose legislation.Footnote 46 In 2018, the Commission published a strategy on AI with three aims, one of which was to ensure ‘an appropriate legal and ethical framework’.Footnote 47 The Commission consequently mandated a group of experts who suggested guidelines for ‘trustworthy’ AI one year later.Footnote 48 These guidelines explicitly drew on work previously done within the institutions.Footnote 49 The guidelines refrained from interfering with the lex lata,Footnote 50 including the General Data Protection RegulationFootnote 51.

In 2020, following the guidelines for trustworthy AI, the Commission published a White Paper on AIFootnote 52, laying the foundation for the legislative proposal to be tabled a year later. The White Paper, which attracted much attention,Footnote 53 recommended a horizontal approach to AI with general principles included in a single legislative act applicable to any kind of AI, thus rejecting the alternative of adapting existing (or adopting several new) sectorial acts. The White Paper suggested regulating AI based on risk: the higher the risk of an AI application, the more regulation was necessary.Footnote 54

On 21 April 2021, based on the White Paper, the Commission presented a Proposal for a regulation on AIFootnote 55. The Commission’s Proposal marks a crucial moment, for it represents the first formal step – globally, it seems – in a process that will ultimately lead to binding domestic legislation on AI. It is a sign of the absorption of ethical norms on AI by domestic law – in other words, of norm diffusion. While the risk-based regulatory approach adopted from the White Paper was by and large absent in the ethics documents discussed in the previous section, many of the substantive obligations in the proposed regulation reflect the same ethical norms.

The Commission proposed distinguishing three categories of AI, namely: certain ‘practices’ of AI that the proposed regulation prohibits; high-risk AI, which it regulates in-depth; and low-risk AI required to be flagged.Footnote 56 While the prohibition against using AI in specific ways (banned ‘practices’)Footnote 57 attracts much attention, practically, the regulation of high-risk AI will be more relevant. Annexes II and III to the proposed regulation determine whether an AI qualifies as high-risk.Footnote 58 The proposed regulation imposes a series of duties on those who place such high-risk AI on the market.Footnote 59

The regulatory focus on risky AI has the consequence, on the flip side, that not all AI is subject to the same degree of regulation. Indeed, the vast majority of AI is subject merely to the duty to ensure some degree of transparency. However, an AI that now appears to qualify as low-risk under the proposed regulation could become high-risk after a minor change in its intended use. Hence, given the versatility of AI, the duties applicable to high-risk AI have to be factored in even in the development of AI in low-risk domains. One example is an image recognition algorithm that per se qualifies as low-risk under the regulation. However, if it were later used for facial recognition, the more onerous duties concerning high-risk AI would become applicable. Such a development must be anticipated at an early stage to ensure compliance with the regulation throughout the life cycle of AI. Regulatory spill-over from high-risk into low-risk domains of AI is therefore likely. Consequently, the proposed regulation exerts a broader compliance pull than one might expect at first glance, given its specific, narrow focus on high-risk AI.

Categorization aside, the substantive duties imposed on those who put high-risk AI on the market are most interesting from the perspective of ethical norm diffusion. The proposed regulation includes four bundles of obligations.

The first bundle concerns data and is laid down in Article 10 of the proposed regulation. When AI is trained with data (though not only thenFootnote 60), Article 10 of the proposed regulation requires ‘appropriate data governance and management practices’, in particular concerning design choices; data collection; data preparation; assumptions about what the data measure and represent; assessment of the availability, quantity, and suitability of data; ‘examination in view of possible bias’; and identification of gaps and shortcomings. In addition, the data itself must be relevant, representative, free of errors, and complete. It must also have ‘appropriate statistical properties’ regarding the persons on whom the AI is used. And it must take into account the ‘geographical, behavioural or functional setting’ in which the AI will be used.
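
To make the character of these duties more concrete, the following minimal sketch in Python illustrates one way in which an ‘examination in view of possible bias’ might be operationalised for a labelled training dataset. The toy data, the attribute names, and the four-fifths threshold are purely illustrative assumptions; they are not prescribed by Article 10 or by any of the ethics documents discussed here.

from collections import defaultdict

def positive_rate_by_group(records, group_key, label_key="label"):
    # Share of positive labels per group of a protected attribute.
    totals, positives = defaultdict(int), defaultdict(int)
    for record in records:
        totals[record[group_key]] += 1
        positives[record[group_key]] += int(record[label_key] == 1)
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_ratio(rates):
    # Lowest group rate divided by highest group rate (1.0 would indicate parity).
    return min(rates.values()) / max(rates.values())

# Hypothetical toy data: credit decisions labelled 1 (granted) or 0 (refused).
data = [
    {"group": "A", "label": 1}, {"group": "A", "label": 1},
    {"group": "A", "label": 0}, {"group": "B", "label": 1},
    {"group": "B", "label": 0}, {"group": "B", "label": 0},
]
rates = positive_rate_by_group(data, "group")
if disparate_impact_ratio(rates) < 0.8:  # the 'four-fifths' heuristic, assumed here
    print("Possible bias to document and investigate:", rates)

A check of this kind would, of course, be only one element of the documentation and examination practices Article 10 envisages; it is meant solely to indicate the sort of routine providers could build into their data preparation.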

The duties laid down in Article 10 on data mirror existing ethical norms, notably the imperative to avoid bias. The IEEE’s Charter discussed the issue of data bias.Footnote 61 In an early set of principles addressed to professionals, avoidance of bias featured prominently; it also recommended keeping a description of data provenance.Footnote 62 The Montreal Declaration recommended avoiding discrimination,Footnote 63 while the Toronto Declaration on human rights and machine learning had bias and discrimination squarely in view.Footnote 64 Likewise, some of the ethical norms the private sector had adopted addressed bias.Footnote 65 However, the ethical norms discussed in Section II generally refrained from addressing data and its governance as comprehensively as Article 10 of the proposed regulation. Instead, the ethical norms directly focused on avoidance of bias and discrimination.

The second bundle of obligations concerns transparency and is contained in Article 13 of the proposed regulation. The critical duty of Article 13 requires providers to ‘enable users to interpret [the] output’ of high-risk AI and ‘use it appropriately’Footnote 66. The article further stipulates that providers have to furnish information that is ‘concise, complete, correct and clear’Footnote 67, in particular regarding the ‘characteristics, capabilities and limitations of performance’ of a high-risk AI system.Footnote 68 These duties specifically relate to any known or foreseeable circumstance, including foreseeable misuse, which ‘may lead to risks to health and safety or fundamental rights’, and to performance on persons.Footnote 69
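
By way of illustration only, the following minimal sketch in Python shows one form such interpretability support could take for a simple linear scoring model; the feature names, weights, and threshold are hypothetical assumptions and are not drawn from the proposed regulation.

# Hypothetical weights of a simple linear credit-scoring model.
WEIGHTS = {"income": 0.4, "existing_debt": -0.7, "years_employed": 0.3}
THRESHOLD = 0.5  # assumed decision threshold

def score_with_explanation(applicant):
    # Per-factor contributions make the output interpretable to the user.
    contributions = {factor: WEIGHTS[factor] * applicant[factor] for factor in WEIGHTS}
    score = sum(contributions.values())
    decision = "granted" if score >= THRESHOLD else "refused"
    # Rank factors by their absolute influence on the score.
    ranked = sorted(contributions.items(), key=lambda item: -abs(item[1]))
    return {"decision": decision, "score": round(score, 2), "main_factors": ranked}

print(score_with_explanation({"income": 1.2, "existing_debt": 0.9, "years_employed": 2.0}))

Real high-risk systems will rarely be this simple, and for opaque models the ‘information on the factors’ would have to be approximated by dedicated explanation techniques; the sketch only indicates the kind of output-level information Article 13 appears to have in mind.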

Transparency is an equally important desideratum of ethical norms, though it is sometimes addressed in terms of explainability or explicability. The IEEE’s CharterFootnote 70 and the Asilomar principlesFootnote 71 emphasized transparency to different degrees. Other guidelines encourage the production of explanationsFootnote 72 or appropriate and sufficient information,Footnote 73 or call for extensive transparency, justifiability, and intelligibility.Footnote 74 These references make it evident that ethical norms, though they are heterogeneous and vague, are in the process of being absorbed by EU law (norm diffusion).

The third bundle of obligations is contained in Article 15 of the proposed regulation. It requires high-risk AI to have an ‘appropriate level’ of accuracy, robustness, and cybersecurity.Footnote 75 Article 15 refrains from adding much detail but states that the AI must be resilient to deleterious environmental influences or nefarious third parties’ attempts to game it.Footnote 76

As with the first and second bundles, the aspects of high-risk AI addressed by Article 15 can be traced back to various ethical norms. The high-level principles of effectiveness and awareness of misuse in the IEEE’s Charter covered similar aspects.Footnote 77 The Asilomar principles addressed ‘safety’, but in a rather generic fashion.Footnote 78 Other principles emphasized both the need for safety in all things related to AI and the importance of preventing misuse.Footnote 79 Others focused on prudence, which more or less includes the aspects covered by Article 15.Footnote 80 Parts of the private sector also committed themselves to safe AI.Footnote 81

The fourth bundle contains obligations of a procedural or managerial nature. The proposed regulation places confidence in procedure to cope with the high risks of AI. The trust in procedure goes so far that substantive issues are addressed procedurally only. One example is the duty to manage risks according to Article 9, one of the cardinal obligations of the proposed regulation. Article 9 obliges providers to maintain a comprehensive risk management system throughout the life cycle of high-risk AI. It aims at reducing the risks posed by the AI so that the risks are ‘judged acceptable’, even under conditions of foreseeable misuse.Footnote 82 The means to reduce the risks are design, development, testing, mitigation and control measures, and the provision of information to users. Instead of indicating which risks are to be ‘judged acceptable’, Article 9 trusts that risk reduction will result from a series of diligently executed, proper steps. However, procedural rules are not substantive rules. In and of themselves, they do not contain substantive guidance. In essence, Article 9 entrusts providers with the central ‘judgment’ of what is ‘acceptable’. Providers are granted liberty, while their obligations seem less onerous. At the same time, this liberty imposes a burden on them in that courts might not always validate their ‘judgment’ of what was ‘acceptable’ after harm has occurred. Would, for instance, private claims brought against the provider of an enormously beneficial AI be rejected after exceptionally high risks, which the provider managed and judged acceptable, have materialized?

Trust in procedure is also a mainstay of other provisions of the proposed regulation. An assessment of conformity with the proposed regulation has to be undertaken, but, here again, providers carry it out themselves in all but a few cases.Footnote 83 Providers have to register high-risk AI in a new EU-wide database.Footnote 84 Technical documentation and logs must be kept.Footnote 85 Human oversight is required – a notion that has a procedural connotation.Footnote 86 The regulation does not require substantive ‘human control’ as discussed within CCW for autonomous weapons systems.Footnote 87 Discrimination is not directly prohibited, but procedural transparency is supposed to contribute to preventing bias.Footnote 88 Such transparency may render high-risk AI interpretable, but a substantive right to explicable AI is missing.Footnote 89

The procedural and managerial obligations in the fourth bundle cannot easily be traced back to ethical norms. This is because of their procedural nature. Ethical norms are, in essence, substantive norms. Procedural obligations are geared towards implementation, yet implementation is not the standard domain of ethics (except for applied ethics, which has yet to reach AIFootnote 90). Hence, while certain aspects of the fourth bundle mirror ethical norms, for example, the requirement to keep logs,Footnote 91 no ethical norm has called for a comprehensive risk management system.

Overall, the proposed regulation offers compelling evidence of norm diffusion, at least to the extent that the regulation reflects ethical norms on AI. It addresses the three most pressing concerns related to AI of the machine learning type, namely bias due to input data, opacity that hampers predictability and explainability, and vulnerability to misuse (gaming, etc.).Footnote 92 In addressing these concerns, the proposed regulation remains relatively lean. It notably refrains from taking on broader concerns with which modern AI is often conflated, namely dominant market power,Footnote 93 highly stylized concepts,Footnote 94 and the general effects of technology.Footnote 95

However, the proposed regulation does not fully address the main concerns about AI, namely bias and opacity, head-on. It brings to bear a gentle, procedural approach on AI by addressing bias indirectly through data governance and transparency and remedying opacity through interpretability. It entrusts providers with the management of the risks posed by AI and with the judgment of what is tolerable. Providers consequently bear soft duties. In relying on soft duties, the regulation extends the life of ethical norms and continues their approach of indulgence. It thus incorporates the character of ethical norms that lack the commitment of hard law.

On the one hand, it may be unexpected that ethical norms live on to a certain extent, given that the new law on AI is laid down in a directly applicable, binding Union regulation. On the other hand, this is not all that surprising because a horizontal legislative act that regulates all kinds of AI in one go is necessarily less specific on substance than several sectorial acts addressing individual applications. (Though the adoption of several sectorial acts would have had other disadvantages.) Yet, this approach of the proposed regulation raises the question of whether it can serve as a basis for individual, private rights: will natural persons, market competitors, etc. be able to sue providers of high-risk AI for violation of the procedural, managerial obligations incumbent on them under the regulation?Footnote 96

IV. International Law Sidelined

It is not the case that international law has ignored the rise of AI while ethics filled the void and laid down the norms. International law – especially the soft type – and ethical principles overlap and are not always easily distinguishable. Yet, even international soft law has been lagging behind considerably. It took until late spring 2019 for the Organization for Economic Co-operation and Development (OECD) to adopt a resolution spelling out five highly abstract principles on AI.Footnote 97 While the principles address opacity (under transparency and explainability) and robustness (including security and safety), they ignore the risk of bias. Instead, they only generically refer to values and fairness. When the OECD was adopting its non-binding resolution, the European Commission’s White PaperFootnote 98 was already in the making. Like the White Paper, the OECD Resolution recommended a risk-based approach.Footnote 99 Additionally, the OECD hosts a recent political initiative, the Global Partnership on Artificial Intelligence,Footnote 100 which has produced a procedural report.Footnote 101

Regional organizations have been more alert to AI than universal organizations. Certain sub-entities of the Council of Europe notably examined AI in their specific purview. In late 2018, a commission within the Council of Europe adopted a set of principles governing AI in the judicial system;Footnote 102 in the framework of the Council of Europe’s data protection convention, certain principles focussing on data protection and human rights were approved in early 2019.Footnote 103 On the highest level of the Council of Europe, the Committee of Ministers recently adopted a recommendation,Footnote 104 which discussed AI (‘algorithmic systems’,Footnote 105 as it calls it) in depth from a human rights perspective. The recommendation drew the distinction between high-risk and low-risk AI that the proposed Union regulation also adopted.Footnote 106 In large parts, it mirrors the European Union’s approach developed in the White Paper and the proposed regulation. This is not surprising given the significant overlap in the two organizations’ membership.

On the universal level, processes to address AI have moved at a slower pace. The United Nations Educational, Scientific and Cultural Organization is only now discussing a resolution addressing values, principles, and fields of action on a highly abstract level.Footnote 107 The United Nations published a High-Level Report in 2019,Footnote 108 but it dealt with digital technology and its governance from a general perspective. Hence, the values it listsFootnote 109 and the recommendations it makesFootnote 110 appear exceedingly abstract from an AI point of view. The three models of governance suggested in the report, however, break new ground.Footnote 111

In a nutshell, most of the international law on AI arrives too late. Domestic implementation of ethical norms is already in full swing. Legislative acts, such as the proposed regulation of the EU, are already being adopted. Court and administrative cases are being decided. Meanwhile, standardization organizations are enacting the technical – and not-so-technical – details. Still, the international law on AI, all of which is soft (and hence not always distinguishable from ‘ethical norms’), is far from being useless. The Council of Europe’s recommendation on algorithmic systemsFootnote 112 added texture and granularity to the existing ethical norms. Instruments that may eventually be adopted on the universal level may spread norms on AI across the global south and shave off some of the Western edges the norms (and AI itself) currently still carry.Footnote 113

However, the impact of the ethical norms on AI is more substantial than international legal theory suggests. The ethical norms were consolidated outside of the traditional venues of international law. By now, they are diffusing into domestic law. International law is a bystander in this process. Even if the formation of formally binding international law on AI were attempted at some point,Footnote 114 a substantial treaty would be hard to achieve as domestic legislatures would have locked in legislation by then. A treaty could only re-enact a consensus established elsewhere, in other words, in ethical norms and domestic law, which would reduce its compliance pull.

V. Conclusion and Outlook

This chapter explained how ethical norms on AI came into being and are now absorbed by domestic law. The European Union’s new proposal for a regulation on AI illustrated this process of ‘bottom-to-bottom’ norm diffusion. While soft international law contributed to forming ethical norms, it neither created them nor formed their basis in a formal, strict legal sense.

This chapter by no means suggests that law always functions or is created in the way illustrated above. Undoubtedly, international law is mainly formed top-down through classical sources. In this case, it also exercises compliance pull. However, in domains such as AI, where private actors – including multinational companies and transnational or domestic non-governmental organizations – freely shape the landscape, a transnational process of law creation takes place. States in such cases tend to realize that ‘their values’ are at stake when it is already too late. Hence, states and their traditional way of making international law are sidelined. However, it is not ill will that drives the process of norm diffusion described in this chapter. States are not deliberately pushed out of the picture. Instead, ethical norms arise from the need of private companies and individuals for normative guidance – and international law is notoriously slow to deliver it. When international law finally delivers, it does not set the benchmark but only re-traces ethical norms. However, it does at least serve to make them more durable, if not inalterable.

The discussion about AI in international law has so far been about the international law that should, in a broad sense, govern AI. Answers were sought to the question of how bias, opacity, robustness, etc., of AI could be addressed and remedied through law. However, a different dimension of international law has been left out of the picture so far. Except for the narrow discussion about autonomous weapons systems within CCW, international lawyers have mainly neglected what AI means for international law itself and the concepts at its core.Footnote 115 Therefore, the next step to be taken has to include a re-assessment of central notions of international law in the light of AI. The notions of territoriality/jurisdiction, due diligence duties concerning private actors, the control that is central to responsibility of all types, and precaution should consequently be re-assessed and recalibrated accordingly.

8 Fostering the Common Good
An Adaptive Approach Regulating High-Risk AI-Driven Products and Services

Thorsten Schmidt and Silja Voeneky Footnote *
I. Introduction

The risks posed by AI-driven systems, products, and services are human-made, and we as humans are responsible if a certain risk materialises and damage is caused. This is one of the main reasons why States and the international community as a whole should prioritise governing and responsibly regulating these technologies, at least if high risks are plausibly linked to AI-based products or services.Footnote 1 As the development of new AI-driven systems, products, and services is based on the need of private actors to introduce new products and methods in order to survive as part of the current economic system,Footnote 2 a governance and regulative scheme should, at its core, not hinder responsible innovation by private actors, but minimize risks as far as possible for the common good and prevent violations of individual rights and values – especially of legally binding human rights. At least the protection of human rights that are part of customary international law is a core obligation for every StateFootnote 3 and is not dependent on the respective constitutional framework or on the answer as to which specific international human rights treaty binds a certain State.Footnote 4

In this chapter, we want to spell out core elements of a regulatory regime for high-risk AI-based products and such services that avoids the shortcomings of regimes relying primarily on preventive permit procedures (or similar preventive regulation) and that avoids, at the same time, the drawbacks of liability-centred approaches. In recent times, both regulative approaches have failed in different areas to provide a solid basis for fostering justified values, such as the right to life and bodily integrity, and for protecting common goods, such as the environment. This chapter will show that – similar to regulating risks that stem from the banking system – risks based on AI products and services can be diminished if the companies developing and selling the products or services have to pay a proportionate amount of money into a fund as a financial guarantee after developing the product or service but before market entry. We argue that it is reasonable for a society, a State, and also the international community to adopt rules that oblige companies to pay such financial guarantees to supplement preventive regulative approaches and liability norms. We will specify what amount of money, based on an ex-ante evaluation of the risks linked to the high-risk AI product or AI-based service, can be seen as proportionate in order to minimize risks while fostering responsible innovation and the common good. Lastly, we will analyse what kind of accompanying regulation is necessary to implement the approach we propose. Inter alia, we suggest that a group of independent experts should serve as an expert commission to assess the risks of AI-based products and services and collect data on the effects of the AI-driven technology in real-world settings.

Even though the EU Commission has recently drafted a regulation on AI (hereafter: Draft EU AIA),Footnote 5 it is not the purpose of this chapter to analyze this proposal in detail. Rather, we intend to spell out a new approach that could be implemented in various regulatory systems in order to close regulatory gaps and overcome disadvantages of other approaches. We argue that our proposed version of an ‘adaptive’ regulation is compatible with different legal systems and constitutional frameworks. Our proposal could further be used as a blueprint for an international treaty or international soft lawFootnote 6 declaration that can be implemented by every State, especially States with companies that are main actors in developing AI-driven products and services.

The term AI is broadly defined for this chapter, covering the most recent AI systems based on complex statistical models of the world and the method of machine learning, especially self-learning systems. It also includes systems of classical AI, namely AI systems based on software already programmed with basic physical concepts (preprogrammed reasoning),Footnote 7 such as a symbolic-reasoning engine.Footnote 8 AI in its various forms is a multi-purpose tool or general purpose technology and a rapidly evolving, innovative key element of many new and possibly disruptive technologies applied in many different areas.Footnote 9 A recent achievement, for instance, is the merger of biological research and AI, demonstrated by an AI-driven (deep-learning) programme that can be used to determine the 3D shapes of proteins.Footnote 10 Moreover, applications of AI products and AI-based services exist not only in the areas of speech recognition and robotics but also in the areas of medicine, finance, and (semi-)autonomous cars, ships, planes, or drones. AI-driven products and AI-driven services already shape areas as distinct as art and weapons development.

It is evident that potential risks accompany the use of AI-driven products and services and that the question of how to minimize these risks without impeding the benefits of such products and services poses great challenges for modern societies, States, and the international community. These risks can be caused by actors that are not linked to the company producing the AI system, as such actors might misuse an AI-driven technology.Footnote 11 But damage can also originate from the unpredictability of adverse outcomes (so-called off-target effectsFootnote 12), even if the AI-driven system is used for its originally intended purpose. Damage might also arise because of a malfunction, false or unclear input data, flawed programming, etc.Footnote 13 Furthermore, in some areas, AI services or products will enhance or create new systemic risks. For example, in financial applicationsFootnote 14 based on deep learning,Footnote 15 AI serves as a cost-saving and highly efficient tool and is applied on an increasingly large scale. The uncertainty of how the AI system reacts in an unforeseen and untested scenario, however, creates new risks, while the large-scale implementation of new algorithms or the improvement of existing ones additionally amplifies already existing risks. At the same time, algorithms have the potential to destabilize the whole financial system,Footnote 16 possibly leading to dramatic losses depending on the riskiness and the implementation of the relevant AI-driven system.

Even more, we should not ignore the risk posed by the development of so-called superhuman AI: because recent machine learning tools like reinforcement learning can improve themselves without human interaction and rule-based programming,Footnote 17 it seems possible – as argued by some scholars – for an AI system to create an improved AI system, which opens the door to producing some kind of artificial Superintelligence or superhuman AI (or ‘the Singularity’).Footnote 18 Superhuman AI might even pose a global catastrophic or existential risk to humanity.Footnote 19 Even if some call this a science-fiction scenario, other experts predict that AI of superhuman intelligence will emerge by 2050.Footnote 20 It is argued, as well, that an intelligence explosion might lead to dynamically unstable systems, as it becomes increasingly easy for smarter systems to make themselves smarter,Footnote 21 and that, finally, there can be a point beyond which it is impossible for us to make reliable predictions.Footnote 22 In the context of uncertainty and ‘uncertain futures’,Footnote 23 it is possible that predictions fail and risks arise from these developments faster than expected or in an unexpected fashion.Footnote 24 From this, we deduce that superhuman AI can be seen as a low probability, high impact scenario.Footnote 25 Because of the high impact, States and the international community should not ignore the risks of superhuman AI when drafting rules concerning AI governance.

II. Key Notions and Concepts

Before spelling out in more detail lacunae and drawbacks of the current specific regulation governing AI-based products and services, there is a need to define key notions and concepts relevant for this chapter, especially the notions of regulation, governance, and risk.

When speaking about governance and regulation, it is important to differentiate between legally binding rules at the national, European, and international level on the one hand, and non-binding soft law on the other hand. Only the former are part of the law and regulation stricto sensu.

The term international soft law is understood in this chapter to include rules that cannot be attributed to a formal legal source of public international law and that are, hence, not directly legally binding. However, as rules of international soft law have been agreed upon by subjects of international law (i.e. States and International Organizations (IOs)) that could, in principle, create international law,Footnote 26 these rules possess a specific normative force and can be seen as relevant in guiding the future conduct of States, as they promised not to violate them.Footnote 27 Therefore, rules of international soft law are part of top-down rulemaking (i.e. regulation) and must not be confused with (bottom-up) private rulemaking by corporations, including the many AI-related codes of conduct, such as the Google AI Principles.Footnote 28

In the following, regulation means only top-down lawmaking by States at the national and European level or by States and IOs at the international level. It will not encompass rulemaking by private actors, which is sometimes seen as an element of so-called self-regulation. The notion of governance, by contrast, will include both rules that are part of top-down lawmaking (e.g. international treaties and soft law) and rules, codes, and guidelines by private actors.Footnote 29

Another key notion for the adaptive governance framework we are proposing is the notion of risk. There are different meanings of ‘risk’, and in public international law there is no commonly accepted definition of the notion; it is unclear how and whether a ‘risk’ differs from a ‘threat’, a ‘danger’, or a ‘hazard’.Footnote 30 For the sake of this chapter, we will rely on the following broad definition, according to which a risk is an unwanted event that may or may not occur,Footnote 31 that is, an unwanted hypothetical future event. This definition includes situations of uncertainty, where no probabilities can be assigned for the occurrence of damage.Footnote 32 A global catastrophic risk shall be defined as a hypothetical future event that has the potential to cause the death of a large number of human beings and/or to cause the destruction of a major part of the earth; and an existential risk can be defined as a hypothetical future event that has the potential to cause the extinction of human beings on earth.Footnote 33

When linking AI-driven products and services to high risks, we understand high risks as those that have the potential to cause major damage to protected individual values and rights (such as life and bodily integrity) or common goods (such as the environment or the financial stability of a State).

The question of which AI systems, products, or services constitute such high-risk systems is being discussed in great detail. The regulation of high-risk AI systems is the core element of the proposal the EU Commission presented in 2021, the Draft EU AIA.Footnote 34 According to the Draft EU AIA, high-risk AI systems shall include, in particular, human-rights sensitive AI systems, such as AI systems intended to be used for the biometric identification and categorization of natural persons, AI systems intended to be used for the recruitment or selection of natural persons, AI systems intended to be used to evaluate the creditworthiness of natural persons, AI systems intended to be used by law enforcement authorities as polygraphs, and AI systems concerning the area of access to, and enjoyment of, essential private services, public services, and benefits as well as the area of administration of justice and democratic processes, thereby potentially affecting the rule of law in a State (Annex III Draft EU AIA). Nevertheless, it is open to debate whether high-risk AI products and services should also include, because of their potential to cause major damage, (semi-)autonomous cars, planes, drones, and ships, certain AI-driven medical products (such as brain–computer interfaces, discussed below), or AI-driven financial trading systems.Footnote 35

Additionally, autonomous weapons clearly fall under the notion of high-risk AI products. However, AI-driven autonomous weapon systems constitute a special case due to the highly controversial ethical implications and the international laws of war (ius in bello) governing their development and use.Footnote 36

Another particular case of high-risk AI systems is that of AI systems developed in order to be part of, or to constitute, superhuman AI – some even classify these AI systems as global catastrophic risks or existential risks.

III. Drawbacks of Current Regulatory Approaches of High-Risk AI Products and Services

To answer the most pressing regulative and governance questions concerning AI-driven high-risk products and such services, this chapter introduces an approach for responsible governance that shall supplement existing rules and regulations in different States. The approach, spelled out below in more detail, is neither dependent on, nor linked to, a specific legal system or constitutional framework of a specific State. It can be introduced and implemented in different legal cultures and States, notwithstanding the legal basis or the predominantly applied regulatory approach. This seems particularly important as AI-driven high-risk products and such services are already being used and will be used to an even greater extent on different continents in the near future, and yet the existing regulatory approaches differ.

For the sake of this chapter, the following simplifying picture might illustrate relevant general differences: some States rely primarily on a preventive approach and lay down permit procedures or similar preventive procedures to regulate emerging products and technologies;Footnote 37 they sometimes even include the rather risk-averse precautionary principle, as is the case under EU law in the area of EU environmental policy.Footnote 38 The precautionary principle intends to oblige States to protect the environment (and arguably other common goods) even in cases of scientific uncertainty.Footnote 39 Other States, such as the United States, in many sectors avoid strict permit procedures altogether, or procedures with high approval thresholds, or avoid strict implementation, and rather rely on liability rules that give the affected party, usually the consumer, the possibility to sue a company and obtain compensation if a product or service has caused damage.

Both regulative approaches – permit or similar preventive procedures for high-risk products or services in the field of emerging technologies, and liability regimes to compensate consumers and other actors after they have been harmed by a high-risk product – have major deficits even if they are combined, and have to be supplemented. On the one hand, preventive permit procedures are often difficult to implement and might be easy to circumvent, especially in an emerging technology field. This was illustrated in recent years in different fields, including emerging technologies, by the aircraft 737 MAX incidentsFootnote 40 or the motorcar ‘Dieselgate’Footnote 41 cases. Where this happens, damage caused by products after they have entered the market cannot be avoided. On the other hand, liability regimes that allow those actors and individuals who suffered damage from a product or service to claim compensation have the drawback that it is unclear how far they prevent companies from selling unsafe products or services.Footnote 42 Companies rather seem to be nudged to balance the (minor and unclear) risk of being sued by a consumer or another actor in the future against the chance to make (major) profits by using a risky technology or selling a risky product or service in the present.
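
A stylized numerical comparison makes this incentive problem explicit; the figures are purely hypothetical and serve only to illustrate the argument, not to describe any actual case. Suppose the probability of a successful claim is \(p_{\mathrm{suit}} = 0.1\), the damages awarded in that case amount to \(D = 50\) million, and the expected profit from marketing the risky product or service is \(\Pi = 20\) million. Then

\[
p_{\mathrm{suit}} \cdot D = 0.1 \times 50\,\text{million} = 5\,\text{million} \;<\; 20\,\text{million} = \Pi ,
\]

so that placing the product on the market remains economically rational despite the liability regime. As long as the expected liability stays below the expected profit, liability rules alone exert little deterrent force; this is one of the gaps that the financial guarantees proposed in this chapter are intended to help close.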

How standard regulatory approaches fail was shown, inter alia, by the opioid crisis casesFootnote 43 in the United States.Footnote 44 Even worse, the accountability gap widens if companies can, in the end, avoid or limit justified compensatory payments via settlements or by declaring bankruptcy.Footnote 45

IV. Specific Lacunae and Shortcomings of Current AI Regulation

If we take a closer look at the existing specific regulation and regulatory approaches to AI-driven products and (rarely) services, specific drawbacks become apparent at the national, supranational, and international level. It would be beyond the scope of this chapter to elaborate on this in detail,Footnote 46 but some loopholes and shortcomings of AI-specific rules and regulations shall be discussed below.Footnote 47

1. EU Regulation of AI-Driven Medical Devices

A first example is the EU Regulation on Medical Devices (MDR),Footnote 48 which governs certain AI-driven apps in the health sector and other AI-driven medical devices, such as devices in the area of neurotechnology.Footnote 49 The amended MDR was adopted in 2017 and became applicable in 2021.Footnote 50 It lays down a so-called scrutiny processFootnote 51 for high-risk products (certain class III devices) only, which is a consultation procedure prior to placing a device on the market. It regulates, inter alia, AI-driven brain stimulation products classified as medical devices, for example, brain–computer interfaces (BCIs). They are governed by the MDR even if there is no intended medical purpose;Footnote 52 thus, the MDR also governs consumer neurotechnology devices.

However, it is a major drawback that, although AI-driven neurotechnology devices are regulated by the MDR, this law does not lay down a permit procedure to ensure safety standards and only spells out the less strict scrutiny process. In this respect, the regulation of AI systems intended for brain stimulation in the EU differs significantly from the rules governing the development of drugs and vaccines in the EU, which lay down considerably higher safety thresholds, including requirements for clinical trials and human subjects research.Footnote 53 Considering the risks that the use of brain–computer interfaces poses to humans and to their health and integrity, it is unclear why the regulatory threshold differs from that governing the development and use of drugs. This is even more true if neurotechnology is used as a ‘pure’ consumer technology by individuals and does not have a particular justification for medical reasons. Besides, there is no regulation of neurotechnology at the international level, and so far, no international treaty obliges States to minimize or mitigate the risks linked to the use of AI-driven neurotechnology.Footnote 54

2. National Regulation of Semi-Autonomous Cars

A second example of sector-specific (top-down) regulation of AI-driven products with clear disadvantages that has already entered into force are the rules governing semi-autonomous cars in Germany. The relevant German law, the Straßenverkehrsgesetz (hereafter Road Traffic Act), was amended in 2017Footnote 55 to include new automated AI-based driving systems.Footnote 56 From a procedural point of view, it is striking that the law-making process was finalized before the federal ethics commission had published its report on this topic.Footnote 57 The relevant § 1a (1) Road Traffic Act states that the operation of a car employing a highly or fully automated (this means level 3, but not autonomous, that is, not levels 4 and 5)Footnote 58 driving function is permissible, provided that the function is used for its intended purpose:

Der Betrieb eines Kraftfahrzeugs mittels hoch- oder vollautomatisierter Fahrfunktion ist zulässig, wenn die Funktion bestimmungsgemäß verwendet wird.Footnote 59

It is striking that the meaning of the notion ‘intended purpose’ is not laid down by the Road Traffic Act itself or by an executive order but can be defined by the automotive company as a private actor producing and selling the cars.Footnote 60 The Road Traffic Act therefore legitimizes and, to this extent, introduces private standard-setting by corporations. This provision thus contains an ‘opening clause’ for self-regulation by private actors but is, as such, too vague.Footnote 61 This is an example of a regulatory approach that does not provide sufficient standards in the area of an AI-driven product that can be linked to high risks. Hence, it can be argued that § 1a (1) Road Traffic Act violates the Rechtsstaatsprinzip (rule of law) as part of the German Basic Law,Footnote 62 which requires that legal rules must be clear and understandable for those whom they govern.Footnote 63

3. General AI Rules and Principles: International Soft Law and the Draft EU AI Regulation

The question arises whether the lacunae in specific areas of AI regulation at the national and European level mentioned above can be closed by rules of international law (a) or by the future regulation at the European level, that is, the 2021 Draft EU AIA (b).

a. International Regulation? International Soft Law!

So far, there does not exist an international treaty regulating AI systems, products, or services. Nor is such a regulation being negotiated. The aims of the States, having their companies and national interests in mind, are still too divergent. This situation differs from the area of biotechnology, a comparably innovative and likewise potentially disruptive technology. Biotechnology is regulated internationally by the Cartagena Protocol, an international treaty, and this international biotech regulation is based on the rather risk-averse precautionary principle.Footnote 64 Since more than 170 States are parties to the Cartagena Protocol,Footnote 65 one can speak of an almost universal regulation, even if the United States, as a major player, is not a State party and not bound by the Cartagena Protocol. However, even in clear high-risk areas of AI development, such as the development and use of autonomous weapons, an international treaty is still lacking. This contrasts with other areas of high-risk weapons development, such as that of biological weapons.Footnote 66

Nevertheless, as a first step, international soft law rules have at least been agreed upon that spell out first general principles governing AI systems at the international level. The Organization for Economic Co-operation and Development (OECD) issued an AI Recommendation in 2019 (hereafter OECD AI Recommendation).Footnote 67 Over 50 States have agreed to adhere to these principles, including States especially relevant for AI research and development, such as the United States, the UK, Japan, and South Korea. The OECD AI Recommendation sets out five complementary value-based principles:Footnote 68 these are inclusive growth, sustainable development, and well-being (IV. 1.1.); human-centred values and fairness (IV. 1.2.); transparency and explainability (IV. 1.3.); robustness, security, and safety (IV. 1.4.); and accountability (IV. 1.5.). In addition, AI actors – meaning those who play an active role in the AI system lifecycle, including organizations and individuals that deploy or operate AIFootnote 69 – should respect the rule of law, human rights, and democratic values (IV. 1.2. lit. a). These include freedom, dignity and autonomy, privacy and data protection, non-discrimination and equality, diversity, fairness, social justice, and internationally recognized labor rights.

However, the wording of the OECD soft law principles is very soft (‘should respect’). Even the OECD AI Recommendation on transparency and explainability (IV. 1.3.) has little substance. It states that

[…] [AI Actors]Footnote 70 should provide meaningful information, appropriate to the context, and consistent with the state of art: […]

to enable those adversely affected by an AI system to challenge its outcome based on plain and easy-to-understand information on the factors, and the logic that served as the basis for the prediction, recommendation or decision.

Assuming that discrimination and unjustified biases are among the key problems of AI systems,Footnote 71 asking for a ‘systematic risk management approach’ to solve these problemsFootnote 72 seems insufficient as a standard of AI actors’ due diligence.

Moreover, the OECD AI Recommendation does not mention any legal liability or legal responsibility. AI actors ‘should be accountable’. This indicates that these actors should report and provide certain information about what they are doing to ensure ‘the proper functioning of AI systems’ and ‘for the respect of the above principles’ (IV. 1.5). This does not imply any legal obligation to achieve these standards or any legal liability if an actor fails to meet the threshold.

Finally, the OECD AI Recommendation does not stress the responsibility of governments to protect human rights in the area of AI. It includes only five recommendations to policymakers of States (‘adherents’, section 2) that shall be implemented in national policies and international cooperation consistent with the above-mentioned principles. These include investing in AI research and development (V. 2.1), fostering a digital ecosystem for AI (V. 2.2), shaping an enabling policy environment for AI (V. 2.3), building human capacity and preparing for labour market transformation (V. 2.4), and international cooperation for trustworthy AI (V. 2.5). Hence, even if an actor aims to rely on the OECD AI Recommendation, it remains unclear what State obligations follow from human rights with regard to the governance of AI.

Besides this, the problem of how to frame the low probability/high risk scenarios (or the low probability/catastrophic or existential risk challenges) linked to the possible development of superhuman AI is not even mentioned in the OECD AI Recommendation.Footnote 73

b. Draft EU AI Regulation

As mentioned above, the draft regulation issued by the European Commission, the Draft EU AIA, proposes harmonized rules on AI systems and spells out the framework for a general regulation of AI. It lays down criteria with regard to requirements for the design and development of high-risk AI systems, not limited to specific sectors. To this end, the regulation follows a risk-based regulatory approach – however, one not based on the precautionary principle – and, at its core, distinguishes between high-risk AI systems, on the one hand, and non-high-risk AI systems, on the other hand. The notion of an AI system is defined in broad terms (Article 3(1) Draft EU AIA).Footnote 74 Also, the regulation governs all providersFootnote 75 ‘placing on the market or putting into service AI systems in the EU’ and all users of AI systems in the EU (Article 2, Article 3(2) Draft EU AIA). Which AI systems qualify as high-risk AI systems is laid down in general terms in Articles 6–7 and listed in Annex II and Annex III Draft EU AIA. The Annex III list, mentioned above,Footnote 76 can be amended and modified by the EU Commission in the future, which suggests that the regulation need not be inflexible in regulating the fast-moving field of AI systems as an emerging technology.Footnote 77

The Draft EU AIA aims to limit the possible negative effects of the use of an AI system with regard to the protection of human rights, stressing core human rights such as the protection of human dignity, autonomy, and bodily integrity. Therefore, certain ‘AI practices’ are prohibited according to Article 5 Draft EU AIA, especially if used by State authorities. This includes, but is not limited to, the use of certain AI systems that ‘deploy[s] subliminal techniques beyond a person’s consciousness’ if this is likely to cause harm to a person. The same is true if AI practices cause harm to persons because they exploit the vulnerabilities of a specific group due to their age or disability, or if AI systems are used for law enforcement by means of real-time remote biometric identification. However, the latter prohibitions are not absolute, as exemptions are enshrined in Article 5 Draft EU AIA.

Transparency obligations shall also protect human rights, as it must be made transparent whether an AI system is intended to interact with natural persons (Article 52 Draft EU AIA). The same is true with regard to the duty to report ‘serious incidents or any malfunctioning (…) which constitutes a breach of obligations under Union law intended to protect fundamental rights’ (Article 62 Draft EU AIA).

Apart from these prohibitions and duties, every high-risk AI system must comply with specific requirements (Article 8 Draft EU AIA). This means that, inter alia, risk management systems must be established and maintained (Article 9 Draft EU AIA) and training data sets must meet quality criteria (Article 10 Draft EU AIA). Besides, the criteria for the technical documentation of high-risk AI systems are spelled out in the Draft EU AIA (Article 11 and Annex IV); operating high-risk AI systems shall be capable of automatically recording events, and their operation has to be ‘sufficiently transparent’ (Articles 12 and 13 Draft EU AIA). Finally, there must be human oversight (Article 14 Draft EU AIA); the latter could be interpreted as prohibiting the aim to develop and produce superhuman AI.

Another characteristic is that not only developing companies, providers of high-risk AI systems (Article 16 et seq. Draft EU AIA), and importers and distributors (Articles 26 and 27 Draft EU AIA), but also users are governed by the Draft EU AIA and have obligations. Users encompass companies, such as credit institutions, that are using high-risk AI systems (Article 3(4), together with Articles 28 and 29 Draft EU AIA). Obligations include, for instance, ensuring that ‘input data is relevant in view of the intended purpose of the high-risk AI system’, and the duty to monitor the operation and keep the logs (Article 29 Draft EU AIA).

As the Draft EU AIA includes no relevant liability rules, it is a clear example of a preventive regulatory approach.Footnote 78 However, the Draft EU AIA does not establish a permit procedure but only a so-called conformity assessment procedure (Article 48 and Annex V Draft EU AIA), which is either based on internal control (Annex VI Draft EU AIA) or involves a notified body (Articles 19 and 43, Annex VII Draft EU AIA). Notified bodies have to verify the conformity of high-risk AI systems (Article 33 Draft EU AIA). But it is up to the EU Member States to establish such a notifying authority (Article 30 Draft EU AIA) according to the requirements of the Draft EU AIA, and a notified body is allowed to subcontract specific tasks (Article 34 Draft EU AIA). By way of oversight, the EU Commission can investigate cases ‘where there are reasons to doubt’ whether a notified body fulfills the requirements (Article 37 Draft EU AIA).

It has to be mentioned that derogations from the conformity assessment procedure are part of the regulation; derogations exist ‘for exceptional reasons of public security or the protection of life and health of persons, environmental protection’ and even (sic!) ‘the protection of key industrial and infrastructure assets’ (Article 47 Draft EU AIA).

In the end, many obligations rest on the providers, such as the documentation obligations (Article 50 Draft EU AIA), the post-market monitoring (Article 61 Draft EU AIA), or the registration of the system as part of the EU database (Articles 51 and 60 Draft EU AIA). However, if one evaluates how effective an implementation might be, it is striking that the regulation lays down only fines ‘up to’ a certain amount of money, such as EUR 10,000,000 to 30,000,000, if the Draft EU AIA is violated, and it is up to the EU Member States to decide upon the severity of the penalties. Additionally, the administrative fines that could be imposed on Union institutions, agencies, and bodies are much lower (‘up to’ EUR 250,000 to 500,000 according to Article 72 Draft EU AIA).Footnote 79

It is beyond the scope of this chapter to assess the Draft EU AIA in more detail.Footnote 80 Nevertheless, one has to stress that no permit procedure is part of the regulation of high-risk AI systems. This means that this regulation establishes lower thresholds with regard to high-risk AI systems compared, for instance, with the regulation of the development of drugs and vaccines in the EU. It seems doubtful whether the justification provided in the explanatory notes is convincing; it states that a combination with strong ex-post enforcement is an effective and reasonable solution, given the early phase of the regulatory intervention and the fact that the AI sector is very innovative and expertise for auditing is only now being accumulated.Footnote 81

In the end, without a regulatory solution for liability issues, it seems doubtful whether the major risks of high-risk AI systems can be sufficiently mitigated on the basis of the Draft EU AIA. We therefore propose another approach, one that is compatible with the Draft EU AIA but complements it in order to fill in the loopholes.

4. Interim Conclusion

From what has been written above, one can conclude, firstly, that there are loopholes and drawbacks in the regulation of emerging technologies and especially AI systems, although there are rules in place in at least some areas of AI-driven products and services at the national, European, and international level. Secondly, there is no coherent, general, or universal international regulation of AI or AI-driven products and services.

Nevertheless, even outside the EU there is widespread agreement on the need to have proportional and robust regulation in place, at least for high-risk AI-driven products and services. If we look at the multiple fields where AI-driven systems are currently used and could be used in the future, and also look closely at the inherent benefits and risks linked to those systems and products, it seems less surprising that prominent heads of companies selling AI-driven products have emphasized the urgent need to regulate AI systems, products, and services as well.Footnote 82

The vulnerability of automated trading systems on the financial market may serve as an example highlighting the huge impact of intelligent systems: in the 2010 Flash Crash, a quickly executed order triggered automated selling, wiping out nearly $1,000 billion worth of US shares for a period of several minutes.Footnote 83

Therefore, we agree with those who argue that high-risk AI products and services are emerging and disruptive technologies that have to be regulated.Footnote 84 This is especially true with regard to high-risk AI services because these are often ignored. In our view, there is an urgent need for responsible (i.e. robust) and proportional regulation of high-risk AI products and services today, because if we only try to regulate these once major damages have already occurred, it will be too late.

V. A New Approach: Adaptive Regulation of AI-Driven High-Risk Products and Services
1. A New Approach

We argue that a new approach to regulating AI-driven products is important to avoid the shortfalls of the rules at the national, supranational, and international level mentioned earlier. Our aim is to establish a regulatory approach that can supplement preventive procedures and, at the same time, close the gaps of liability-based approaches of different legal systems. This approach shall be applicable universally and could be laid down in national, supranational, or international laws. Our proposal aims for a proactive, adaptive regulatory scheme that is flexible and risk-sensitive and that gives the companies that develop and sell high-risk AI-driven products and services an incentive to assess and lower risks. The proposal’s core is that an operator or company must pay a proportionate amount of money (called regulatory capital in the following) as a financial security for future damages before a high-risk, AI-based product or such a service enters the market. To avoid over-regulation, we focus on AI-based products belonging to a class of high-risk products and services which, accordingly, have the potential to cause major damages to protected individual values, rights, or interests, or to common goods, such as life and bodily integrity, the environment, or the financial stability of a State. A regulatory framework for the potential development of superhuman AI will be discussed as well.

The special case of autonomous weapons, also a high-risk product, has to be mentioned as well: with regard to the specific problems of the development of (semi-)autonomous weapons, many authors and States argue, based on convincing arguments, that a prohibition of these weapons is mandatory due to ethical and legal considerations.Footnote 85 This could be taken to mean that the kind of adaptive regulation suggested here should not even be discussed, as such regulation could serve as a safety net and thereby justify the market entry of such weapons. We agree with the former argument, that a prohibition of such weapons is justified, but disagree with the latter. Our argument for including (semi-)autonomous weapons in this discussion about responsible and adaptive regulation does not mean that we endorse the development, production, or selling of (semi-)autonomous weapons – quite the contrary. Currently, however, it seems unlikely that the relevant States that develop, produce, or sell such weapons will reach a consensus to sign an international treaty prohibiting or limiting these products in a meaningful way.Footnote 86 Therefore, this chapter’s proposed regulatory approach could, and should, at least close the responsibility gap that emerges if such weapons are developed and used. This seems urgently necessary as there are lacunae in the traditional rules of international humanitarian law,Footnote 87 international criminal law,Footnote 88 and the international rules on State responsibility.Footnote 89 There is the danger that, because of these lacunae, States do not even have to pay compensation if, for instance, an autonomous weapon attacks and kills civilians in clear violation of the rules of international law.

2. Key Elements of Adaptive Regulation of AI High-Risk Products and Services

We argue that adaptive regulation as a new regulatory scheme for AI-driven high-risk products and such services shall consist of the following core elements:

First, the riskiness of a specific AI-driven product or service should be evaluated by a commission of independent experts. Whether such an evaluation has to take place depends on whether the AI-based product or service falls into a high-risk category according to a prima facie classification of its riskiness that shall be laid down in legal rules.Footnote 90 Possible future scenarios, together with available data on past experiences (using the evaluated or similar products or services), will form the basis for the experts’ evaluation. If the evaluated product or service is newly developed, a certain number of test cases proposed by the expert commission should provide the data for the evaluation.

Second, after the expert commission has evaluated whether a specific AI-driven product or service is high-risk as defined above and falls under the new regulatory scheme, and these questions have been answered in the affirmative, the expert commission shall develop risk scenarios that specify possible losses and the associated likelihoods that these scenarios materialize.

Third, relying, in addition to the riskiness of the product, on the financial situation of the developing or producing company,Footnote 91 the experts will determine the specific regulatory capital that has to be paid. They shall also spell out an evaluation system that will allow measurement and assessment of future cases for damages due to the implementation or operation of the AI-driven product or service.

Fourth, the set-up of a fund is necessary, into which the regulatory capital has to be paid. This capital shall be used to cover damages caused by the AI-driven high-risk product or service if and when they occur. After a reasonable time, for instance 5–10 years, the capital shall be paid back to the company if the product or service has caused no losses or damages.

Fifth, as mentioned above, after a high-risk product or service has entered the market, the company selling the product or service has to monitor the performance and effects of the product or service by collecting data. This should be understood as a compulsory monitoring phase in which monitoring schemes are implemented. The data will serve as an important source for future evaluation of the riskiness of the product by the expert commission. In particular, if the product or service is new and data is scarce, the evaluation system is of utmost importance because it serves as a database for future decisions on the amount of the regulatory capital and on the need for future monitoring of the product or service.

Sixth, another element of the proposed governance scheme is that the company should be asked to develop appropriate test mechanisms. A testing mechanism is a valid and transparent procedure ensuring the safety of the AI-driven product. For instance, a self-driving vehicle must pass a sufficient number of test cases to ensure that these vehicles behave in a safe way, meeting a reasonable benchmark.Footnote 92 Such a benchmark and test mechanism should be determined by the expert commission. Market entry should not be possible without a test mechanism in place. Given the data from the monitoring phase, the expert commission will be able to evaluate the product; but an appropriate test mechanism has additional advantages, as the company itself can use it for the continuous evaluation of the product. It can support the re-evaluation explained in the next step. It will also help the regulator provide automated test mechanisms for monitoring and evaluating the technology, particularly in similar scenarios.

Seventh, the expert commission shall re-evaluate the AI-driven high-risk product or service on a regular basis, possibly every year. It can modify its decision on the proportionate amount of regulatory capital that is needed to match the risks by relying on new information and assessing the collected data. The established evaluation system mentioned above will provide reliable data for relevant decisions. (And, as mentioned earlier, after a reasonable time frame, the capital should be paid back to the company if the product or service has caused no losses or damages.)

3. Advantages of Adaptive Regulation

The following significant advantages follow from the adaptive approachFootnote 93 to the regulation of AI high-risk products and services: it avoids over-regulating the use of AI products and services, especially in cases where the AI technology is new and the associated risks are ex ante unclear. Current regulatory approaches that lay down preventive permit procedures can prevent a product’s market entry (if the threshold is too high) or allow the market entry of an unsafe product (if the threshold is too low or is not implemented). With the adaptive regulation approach, however, it will be possible to ensure that a new AI product or AI-based service enters the market while sufficient regulatory capital covers possible future damages. The capital will be paid back to the company if the product or service proves to be a low-risk product or service after an evaluation period, by using the data collected during this time according to the evaluation system.

a. Flexibility

The adaptive regulation approach allows for fast and flexible reactions to new technological developments in the field of AI. Since only the regulation’s core elements are legally fixed a priori, and details shall be adapted on a case-by-case basis by an expert commission, the specific framing for an AI (prima facie) high-risk product can be changed depending on the information and data available. A periodical re-evaluation of the product or service ensures that new information can be taken into account, and that the decision is based on the latest data.

b. Risk Sensitiveness

The approach is not only risk-sensitive with regard to the newly developed high-risk AI-based product or service; it also takes into account the different levels of risk accepted by different societies and legal cultures. It can be assumed that different States and societies are willing to accept different levels of risk linked to specific AI products and services, depending on the expected benefit. If, for instance, a society is particularly dependent on autonomous vehicles because of an ageing population and deficits in the public transport system, it might decide to accept higher risks linked to these vehicles to have the chance of an earlier market entry of the AI-based cars. According to these common aims, the threshold to enter the market laid down as part of a permit procedure could be lowered if, at the same time, the regulatory capital is paid into the fund and ensures that (at least) all damages will be compensated. The same is true, for instance, for AI-driven medical devices or other AI high-risk products that might be particularly important to the people of one State and the common good of a specific society due to certain circumstances.

c. Potential Universality and Possible Regionalization

Nevertheless, as AI systems are systems that could be used in every part of the world, the expert commission and its decisions should be based on international law. An international treaty, incorporating the adaptive regulation approach into international law, could counterbalance lacunae or hurdles arising from national admission procedures that might be ineffective or insufficient. The commission’s recommendations or decisions, once made public, could be implemented directly in different national legal orders if the risk sensitiveness of the State is the same, and could serve as a supplement to the national admission process.

If, however, different risk attitudes towards an AI-driven high-risk product or service exist in different States, a cultural bias of risk averseness (or risk proneness) can be taken into account when implementing the proposal for regulation spelled out in this chapter at the national or regional level. This allows a State the necessary flexibility to avoid insufficient regulation (or overregulation) whilst protecting individual rights, such as bodily integrity or health, or promoting common goods, such as the environment or the financial stability of a State or region. Such adjustments can be deemed necessary, especially in democratic societies, if the risk perception of the population changes over time, and lawmakers and governments have to react to the changed attitudes. In this vein, the German Constitutional Court (Bundesverfassungsgericht, BVerfG) has held that high-risk technologies (in the case at hand: nuclear energy) are particularly dependent on the acceptance of the population in a democratic society because of the potentially severe damages that might be caused if they are used. The Constitutional Court stressed that because of a change in the public’s perception of a high-risk technology, a reassessment of this technology by the national legislator was justified – even if no new facts were given.Footnote 94

d. Monitoring of Risks

It can be expected that, in most cases, a company producing a high-risk AI-driven product or service will be convinced a priori of the safety of its product or service and will argue that its AI-driven product or service can be used without relevant risks, while this opinion is possibly not shared by all experts in the field. Therefore, the collection of data on the product’s performance in real-world settings by the company’s evaluation systems is an important part of the adaptive regulation proposal introduced in this chapter. On the one hand, the data can help the company to show that its product or service is, as claimed, a low-risk product after a certain evaluation period and to justify that the regulatory capital can be reduced or paid back; on the other hand, if the AI-driven product causes damages, the collected data will help improve the product and remedy future problems of using the technology. The data can also serve as an important source of information when similar products have to be evaluated and their risks have to be estimated. Hence, a monitoring phase is an important element of the proposal, as reliable data are created on the product’s or service’s performance, which can be important at a later stage to prove that the technology is actually as riskless as claimed by the company at the beginning.

e. Democratic Legitimacy and Expert Commissions

The adaptive regulation approach spelled out in this chapter is not dependent on the constitution of a democratic, human rights-based State, but it is compatible with democracy and aims to protect core human and constitutional rights, such as life and health, as well as common goods, such as the environment. In order to have a sufficiently legitimized basis, the rules implemented by the expert commission, and the rules establishing the expert commission, should be based on an act of parliament. Legally enshrined expert commissions or panels already exist in different contexts as part of the regulation of disruptive, high-risk products or technologies. They are a decisive element of permit procedures during the development of new drugs, as laid down for instance in the German Medicinal Products Act (Arzneimittelgesetz).Footnote 95 Another example of an interdisciplinary commission based on an act of parliament exists in the area of biotechnology regulation in Germany.Footnote 96

As long as the commission’s key requirements, such as the procedure for the appointment of its members, the number of members, the scientific background of members, and the procedure for the drafting of recommendations and decisions, are based on an act of parliament, a sufficient degree of democratic legitimacy is given.Footnote 97 In a democracy, this will avoid the pitfalls of elitism and of an expert system, an expertocracy, that does not possess sufficient links to the legislature of a democratic State. A legal basis further complies with the requirements of human and constitutional rights-based constitutions, such as the German Basic Law, which demand that the main decisions relevant for constitutional rights be based on rules adopted by the legislature.Footnote 98

f. No Insurance Market Dependency

The adaptive regulation approach spelled out in this chapter avoids reliance on a commercial insurance scheme. An approach that obliges companies to procure insurance for their AI-based high-risk products or services would depend on the availability of such insurance from insurers. This could, however, fail for practical or structural reasons. Further, insurance might not be feasible for the development of new high-risk AI products and services if, and because, only a limited amount of data is available.Footnote 99 Besides, low probability/high-risk scenarios with unclear probability can hardly be covered adequately by insurance, as risk-sharing might be impossible or difficult to achieve by the insurer. Lastly, reliance on insurance would mean that higher costs have to be covered by a company producing AI-based products, as the insurance company needs to be compensated for its insurance product and aims to avoid financial drawbacks from understated risks.

At the national level, there is an example of an attempt to regulate a disruptive technology, in this case biotechnology, on the basis of a duty to obtain insurance that failed because this duty was not implemented by either the regulator or the insurance industry.Footnote 100 Even at the international level, the duty of operators to obtain insurance can be seen as a major roadblock for ratifying and implementing an international treaty on liability for environmental damage.Footnote 101

4. Challenges of an Adaptive Regulation Approach for AI-Driven High-Risk Products
a. No Financial Means?

A first argument against the adaptive regulation approach could be that (unlike financial institutions) the companies that develop and sell disruptive high-risk AI products or services do not have the capital to pay a certain amount as a guarantee for possible future damages caused by the products or services. This argument is, on the one hand, not convincing if we think about well-established big technology companies, such as Facebook, Google, or Apple, that develop AI products and services or outsource these developments to their subsidiaries.

On the other hand, start-ups and new companies might develop AI-driven products and services which fall within the high-risk area. However, these companies often receive funding capital from private investors to achieve their goals even if they generate profit only at a very late stage.Footnote 102 If an investor, often a venture capitalist, knows that the regulatory requirement is to pay a certain amount of capital into a fund that serves as security, but that this capital will be paid back to the company after a certain time if the product or service does not cause damages, this obligation would not impede or disincentivize the financing of the company compared to other requirements (for instance, as part of permit procedures). Quite the contrary: to lay down a threshold of a certain amount of regulatory capital as a necessary condition before market entry of an AI-based high-risk product (not for the stage of the research or development of the product) or AI-based service is an opportunity for the investor to take those risks into account that the company itself might downplay.

In the event that a State is convinced that a certain AI-driven product or service is fostering the common good of its society, and private investors are reluctant to finance the producing company because of major or unclear risks linked to the product or service, there is the possibility that the particular State may support the company with its financial means. Financial support has been given in different forms in other cases of the development of high-risk technology or products in the past and present.Footnote 103

b. Ambiguity and Overregulation?

Another argument one could envisage against the adaptive regulatory approach introduced in this chapter is that it is unclear which AI-driven products or services have to be seen as high-risk products or high-risk services, and that there might therefore be an inherent bias towards overregulation, as the category of high-risk products or services cannot be determined without grey areas and can be determined neither precisely nor narrowly enough. However, against this argument it can be brought forward that the category of high-risk AI products and services that the expert commission shall evaluate will be laid down in national, supranational, or international law after a process that includes discourse with the different relevant actors and stakeholders, such as companies, developers, and researchers.Footnote 104 The criterion for a classification of prima facie high-risk AI products or services should be the possible damage that can occur if a certain risk linked to the product or service materializes. In order to avoid overregulation, one should limit the group of AI-driven high-risk products and services to the most evident cases; this may depend on the risk proneness or risk awareness of a society as long as there is no international consensus.

c. Too Early to Regulate?

To regulate emerging technologies such as AI-based products and services is a challenge, and the argument is often brought forward that it is too early to regulate these technologies because the final product or service is unclear at the developmental stage. This is often linked to the argument that regulation of emerging technologies will inevitably mean overregulation of these technologies, as mentioned earlier. The answer to these arguments is that we as a society, every State, and the global community as a whole should avoid falling into the ‘it is too early to regulate until it is too late’ trap. Dynamic developments in a high-risk emerging technology sector, in particular, are characterized by the fact that sensible regulation might rather come too late, as legislative processes are, or can often be, lengthy. The advantage of the adaptive regulation proposed in this chapter is that, despite regulation, flexible standardization adapted to the specific case and to the development of the risk is possible.

d. No Independent Experts?

As mentioned earlier, the inclusion of expert commissions and other interdisciplinary bodies, such as independent ethics committees and Institutional Review Boards, has been established in various areas as an important element in the context of the regulation and assessment of disruptive, high-risk products or procedures. There is no reason to assume that expert commissions should not be a decisive and important element in the case of AI regulation as well. Transparency obligations can ensure that experts closely linked to certain companies are not part of such a commission or do not take part in a specific decision of such a commission. Moreover, a pluralistic and interdisciplinary composition of such a body is able to prevent biases as part of the regulative process.Footnote 105

e. Unacceptable Joint Liability of Companies?

Further, it is not an argument against the fund scheme that companies distributing AI-based products or services that later turn out to be low-risk would unduly be held co-liable for companies producing and distributing AI-based products or services that later turn out to be high-risk and cause damage. The aim of the fund’s establishment is that claims for damages against a certain company X are, after a harmful event, initially compensated from the fund, namely from the sum that the harm-causing company X has deposited precisely for these cases concerning its risky AI products and services; should the amount of damage exceed this, further damages should initially be paid by company X itself. Thus, unlike funds whose total capital is depleted when large damage payments are made, it would be ensured that, in principle, the fund continues to exist with the separate financial reserves of each company. If, by contrast, the entire fund were liable in the event of damage, the State of which a company Y producing low-risk AI products is a national would have to provide a default liability to guarantee the repayment of the capital to company Y: the State would be obliged to reimburse the paid-in regulatory capital to a company such as Y if, contrary to expert opinion, an AI product turns out to be low-risk and the regulatory capital has to be repaid to the company, but the fund does not have the financial means to do so due to other claims.

VI. Determining the Regulatory Capital

Central to the adaptive regulation proposed here is determining the level of regulatory capital. In this Section, we provide a formal setup, using probabilistic approaches.Footnote 106 In the first example, we consider a company that may invest in two competing emerging AI-driven products; one of the products is substantially riskier than the other. Even if we presume that the company is acting rationally (in the sense of a utility maximisingFootnote 107 company),Footnote 108 there are good reasons to claim that risks exceeding the assets of the company will not be taken fully into account in the decision process of this company, because once these risks materialize the company is bankrupt in any case. Although it seems prima facie rational that diminishing risks exceeding the assets of the company should be the priority for the management of a company, as these risks threaten this actor’s existence, the opposite behavior is incentivized. High or even existential risks will be neglected by the company if there is no regulation in place obliging the company to take them into account: the company will seek high-risk investments because the higher return is not sufficiently downweighed by expected losses, which are capped at the level of the initial endowment.Footnote 109

First Example: Two competing AI technologies or products

Consider a company with an initial endowment $w_0$. The company can decide to invest in two different AI-driven products or technologies offering (random) returns $r$ and $r'$ for the investment of 1 unit of currency. The first technology is the less risky one, while the second is riskier. We assume there are two scenarios: the first scenario (the best case, denoted by $+$) occurs if the risk does de facto not materialize. This scenario is associated with some probability $p$. In this scenario, the riskier strategy offers a higher return, i.e. $r^+ < r'^+$.

In the second scenario (the worst case, denoted by $-$ and having probability $1-p$), the riskier technology will lead to larger losses, such that we assume $0 > r^- > r'^-$, both values being negative (yielding losses).

Summarizing, when the company invests the initial endowment into a strategy, the wealth at the end of the considered period (say at time 1) will be $w_1 = w_0\, r$ on investing in the first technology, or $w_1' = w_0\, r'$ when investing in the second, riskier technology. Bankruptcy will occur when $w_1 < 0$ or $w_1' < 0$, respectively.

We assume that the company maximizes expected utility: the expected utility of the first strategy is given by the expectation of the utility of the wealth at time 1, $EU = E\big[u(w_1)\, 1_{w_1 > 0}\big]$ (or $EU' = E\big[u(w_1')\, 1_{w_1' > 0}\big]$, respectively, for the second strategy). Here $u$ is a utility functionFootnote 110 (we assume it is increasing), $E$ denotes the expectation operator, and $1_{w_1 > 0}$ is the indicator function, being equal to one if $w_1 > 0$ (no bankruptcy) and zero otherwise (and similarly $1_{w_1' > 0}$). The company chooses the strategy with the highest expected utility, namely the first one if $EU > EU'$ and the second one if $EU' > EU$. If both are equal, one looks for additional criteria to find the optimal choice. This is typically a rational strategy.

Up to now, we have considered a standard case with two scenarios, a best case and a worst case. In the case of emerging and disruptive technologies, failure of high-risk AI systems and AI-driven products might lead to immense losses, such that in the worst-case scenario ($-$) bankruptcy occurs. This changes the picture dramatically:

we obtain that $EU = p\, u(w_0\, r^+)$ for the first technology, and for the second, riskier technology $EU' = p\, u(w_0\, r'^+)$. Since the riskier technology’s return in the best-case scenario is higher, the company will prefer this technology. Most importantly, this depends neither on the worst case’s probability nor on the amount of the occurring losses. The company, by maximizing utility, will not consider losses beyond bankruptcy in its strategy.

Summarizing, the outcome of this analysis highlights the importance of regulation in providing incentives for the company to avoid overly risky strategies.

The first example highlights that a utility-maximising company will accept large risks surprisingly easily. In particular, the exact amount of losses does not influence the rational decision process, because losses are capped at the level of bankruptcy and the hypothetical losses are high enough to lead to bankruptcy regardless. It can be presumed that the company does not care about the particular amount of losses once bankruptcy occurs. This, in particular, encourages a high-risk strategy of companies since strategies with higher risk on average typically promise higher profits on average. However, the proposed adaptive regulation can promote the common good in aiming to avoid large losses. We will show below that the proposed regulation brings large losses back into the utility maximization procedure by penalizing high losses with high regulative costs, thus helping to avoid these.
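To make this incentive problem concrete, the following minimal Python sketch works through the first example with purely hypothetical numbers (the endowment, the probability, the returns, and the logarithmic utility are illustrative assumptions, not values taken from this chapter). It shows that, once the worst case leads to bankruptcy, the expected utility of each technology depends only on the best-case return, so the riskier technology is preferred no matter how large the potential losses are.

import math

# Hypothetical parameters for the first example (illustrative only)
w0 = 10.0                 # initial endowment
p = 0.95                  # probability of the best case (+)
u = math.log              # an increasing utility function

def expected_utility(r_plus, r_minus):
    # EU = E[u(w1) * 1_{w1 > 0}], with w1 = w0 * r in each scenario
    eu = 0.0
    if w0 * r_plus > 0:
        eu += p * u(w0 * r_plus)
    if w0 * r_minus > 0:
        eu += (1 - p) * u(w0 * r_minus)
    return eu

print(round(expected_utility(1.2, -0.5), 3))    # safer technology: ~2.361
print(round(expected_utility(1.5, -5.0), 3))    # riskier technology: ~2.573
print(round(expected_utility(1.5, -500.0), 3))  # still ~2.573: losses beyond
                                                # bankruptcy do not enter EU

However the worst-case loss is scaled, the ranking of the two technologies is unchanged; this is precisely the liability shortfall that the regulatory capital introduced below is meant to correct.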

Considering the problem of superhuman AI, a particular challenge arises: once a company develops superhuman AI, the realized utility will be huge. It is argued that a superhuman AI cannot be controlled; thus, it poses an existential threat not restricted to the company. Potential losses are clearly beyond any scale, yet any company will aim to develop such a superintelligent system, as the benefits will be similarly beyond any scale.

The example highlights the need for regulation that provides guidance for controlling the development of such AI systems in cases where high-risk AI products lead to large losses and damages. Even if the probability of this is low or very low, large losses, once they have occurred, have to be compensated for by the public, since the company will be bankrupt and no longer able to cover them. Hence, regulation is needed to prevent a liability shortfall.

The following example will show that a reasonable regulation fosters an efficient maximization of overall wealth in comparison to a setting without regulation.

Second Example: A stylized framework for regulation

In this second example, regulatory capital is introduced. Adaptive regulation can maximize the overall wealth, minimize relevant risks, avoid large losses and foster the common good by requiring suitable capital charges.

Consider $I$ companies: each company $i$ has an initial wealth $\bar{w}_0^i$, of which one part, $\bar{w}_0^i - w_0^i$, is consumed initially, and the other part, $w_0^i$, is invested (as in the above example), resulting in the random wealth $w_1^i$ at time 1. The company $i$ pays a regulatory capital $\rho^i$ and, therefore, aims at the following maximization:

$$\max \; c\big(\bar{w}_0^i - w_0^i - \rho^i\big) + E\big[u(w_1^i)\, 1_{w_1^i > 0}\big]$$

The relevant rules should aim to maximize overall wealth: in the case of bankruptcy of a company, say $i$, the public and other actors have to cover the losses. We assume that this is proportional to the occurred losses, $g\, w_1^i\, 1_{w_1^i < 0}$. The overall welfare function $P_1 + P_2$ consists of two parts: the first part is simply the sum of the utility of the companies,

$$P_1 = \sum_{i=1}^{I} \Big( c\big(\bar{w}_0^i - w_0^i - \rho^i\big) + E\big[u(w_1^i)\, 1_{w_1^i > 0}\big] \Big).$$

The second part,

$$P_2 = \sum_{i=1}^{I} E\big[g\, w_1^i\, 1_{w_1^i < 0}\big],$$

is the expected costs in case of bankruptcies of the companies. As scholars argue,Footnote 111 one obtains the efficient outcome, maximizing overall wealth or the common good, respectively, by choosing regulatory capital as

$$\rho^i = \frac{g}{c}\, P\big(w_1^i < 0\big)\, ES^i; \qquad (1)$$

here the expected shortfall is given by $ES^i = E\big[w_1^i\, 1_{w_1^i < 0}\big]$. Hence, by imposing this regulatory capital, the companies will take losses beyond bankruptcy into account, which will help to achieve maximal overall wealth.
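As a rough illustration of how such a capital requirement could be computed in practice, the following Python sketch estimates the bankruptcy probability and the expected shortfall of a company by Monte Carlo simulation of its end-of-period wealth. The simulated loss distribution, the cost factor g, the consumption weight c, and the sign convention (chosen so that the capital charge comes out positive) are all hypothetical assumptions for illustration and are not prescribed by this chapter.

import random

random.seed(0)

# Hypothetical assumptions for one company i (illustrative only)
g = 2.0       # cost borne by the public per unit of loss in bankruptcy
c = 1.0       # weight of forgone initial consumption
w0 = 10.0     # invested amount

def simulate_w1():
    # one realization of the end-of-period wealth w1 (toy model)
    if random.random() < 0.95:                  # best case
        return w0 * 1.5
    return w0 * random.uniform(-6.0, -1.0)      # worst case: large losses

n = 100_000
samples = [simulate_w1() for _ in range(n)]
losses = [w for w in samples if w < 0]

p_bankruptcy = len(losses) / n                       # estimate of P(w1 < 0)
es = sum(losses) / len(losses) if losses else 0.0    # average loss given bankruptcy

# Capital charge following the structure of equation (1), with the sign
# chosen so that rho is a positive amount to be deposited in the fund
rho = -(g / c) * p_bankruptcy * es
print(round(p_bankruptcy, 3), round(es, 2), round(rho, 2))

In such a scheme the capital charge grows both with the likelihood of bankruptcy and with the size of the losses the public would otherwise have to bear, which is exactly the mechanism that brings losses beyond bankruptcy back into the company’s calculus.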

As spelled out in the literature, one could incorporate systemic effects in addition, which we do not consider here for simplicity.Footnote 112

Here the adaptive regulatory approach relies on expectations and, therefore, assumes that probabilities can be assessed, even if they have to be estimatedFootnote 113 or suggested by a team of experts. In the case of high uncertainty, this might no longer be possible, and one can rely on non-linear expectations (drawing on Frank Knight’s concept of uncertainty or on the related discussion of ‘uncertain futures’). As already mentioned, the projection of unknown future risks can be formalized by relying on extreme value theory.Footnote 114 Therefore, it is central that adapted methods are used to incorporate incoming information resulting from the above-mentioned monitoring process or other sources. The relevant mathematical tools for this exist.Footnote 115

VII. Dissent and Expert Commission

With regard to the expert commission, one has to expect that a variety of opinions will arise. One possibility is to consider the worst-case opinion, that is, to take the most risk-averse view. An excellent alternative to taking best-/worst-case scenarios or similar estimates is to rely on the credibility of the underlying estimates. This approach is based on so-called credibility theory, which combines estimates – internal estimates and several expert opinions – in the actuarial context.Footnote 116 We show how and why this is relevant for the proposed regulation.

Third Example: Regulation relying on credibility theory

For simplicity, $i$ will be fixed, and we consider only two experts, one suggesting the probability $P_1$ and the other one $P_2$. The associated values of the regulatory capital computed using equation (1) are denoted by $\rho_1$ and $\rho_2$, respectively.

The idea is to mix $\rho_1$ and $\rho_2$ for the estimation of the regulatory capital as follows:

$$\rho^{\mathrm{credible}}(\theta) = \theta\, \rho_1 + (1-\theta)\, \rho_2,$$

where $\theta$ will be chosen optimally in an appropriate sense. If we suppose that there is already experience with the estimates of the two experts, we can obtain variances $v_1$ and $v_2$ estimated from their estimation histories. The estimator with minimal variance is obtained by choosing

$$\theta^{\mathrm{opt}} = \frac{v_2}{v_1 + v_2}.$$

When expert opinions differ, credibility theory can be used to provide a valid procedure for combining the proposed models. Systematic preference is given to experts who have provided better estimates in the past. Another alternative is to select the estimate with the highest (or lowest) capital; however, this would be easier to manipulate. More robust variants of this method based on quartiles, for example, also exist.
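The credibility weighting in the third example is straightforward to compute. The following short Python sketch combines two hypothetical expert estimates of the regulatory capital using the minimum-variance weight derived above; all numbers are illustrative assumptions, not figures from this chapter.

# Hypothetical expert estimates of the regulatory capital (e.g. in million EUR)
rho_1, rho_2 = 12.0, 20.0

# Variances of the two experts' past estimation errors (illustrative):
# expert 2 has been the more reliable estimator so far
v_1, v_2 = 4.0, 1.0

theta_opt = v_2 / (v_1 + v_2)                        # weight on expert 1 (= 0.2)
rho_credible = theta_opt * rho_1 + (1 - theta_opt) * rho_2

print(theta_opt, rho_credible)                       # 0.2, 18.4

The combined value of 18.4 lies much closer to the estimate of the historically more reliable expert, reflecting the systematic preference for better estimators described above.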

VIII. Summary

This chapter spells out an adaptive regulatory model for high-risk AI products and services that requires regulatory capital to be deposited into a fund on the basis of expert opinion. The model allows compensation for damage that may occur, while at the same time motivating companies to avoid major risks. It thereby contributes to the protection of individual rights of persons, such as life and health, and to the promotion of common goods, such as the protection of the environment. Because the regulatory capital is reimbursed to a company if an AI high-risk product or service is safe and risks do not materialize for years, we argue that this type of AI regulation will not create unnecessarily high barriers to the development, market entry, and use of new and important high-risk AI-based products and services. Besides, the model of adaptive regulation proposed in this chapter can become part of the law at the national, European, and international level.

9 China’s Normative Systems for Responsible AI: From Soft Law to Hard Law

Weixing Shen and Yun Liu
I. Introduction

Progress in Artificial Intelligence (AI) technology has brought us novel experiences in many fields and has profoundly changed industrial production, social governance, public services, business marketing, and consumer experience. Currently, a number of AI technology products or services have been successfully produced in the fields of industrial intelligence, smart cities, self-driving cars, smart courts, intelligent recommendations, facial recognition applications, smart investment consultants, and intelligent robots. At the same time, risks concerning the fairness, transparency, and stability of AI have raised widespread concerns among regulators and the public. We might have to endure security risks while enjoying the benefits brought by AI development, or else we must bridge the gap between innovation and security for the sustainable development of AI.

The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence declares that China is devoted to becoming one of the world’s major AI innovation centers. It lists four dimensions of construction goals: AI theory and technology systems, industry competitiveness, scientific innovation and talent cultivation, and governance norms and policy framework.Footnote 1 Specifically, by 2020, initial steps to build AI ethical norms and policies and legislation in related fields have been completed; by 2025, AI laws and regulations, ethical norms, and a policy framework shall initially be established, and AI security assessment and governance capabilities shall be developed; and by 2030, more complete AI laws and regulations, ethical norms, and policy systems shall be in place. Under the guidance of the plan, all relevant departments of the Chinese authorities are actively building a normative governance system with equal emphasis on soft and hard law.

This chapter focuses on China’s efforts in the area of responsible AI, mainly from the perspective of the evolution of the normative system, and it introduces some recent legislative actions. The chapter proceeds mainly in two parts. In the first part, we present the process of development from soft law to hard law through a comprehensive view of the normative system of responsible AI in China. In the second part, we set out a legal framework for responsible AI with four dimensions – data, algorithms, platforms, and application scenarios – based on the statutory requirements for responsible AI in China in terms of existing and developing laws and regulations. Finally, the chapter concludes by identifying the trend of building a regulatory system for responsible AI in China.

II. The Multiple Exploration of Responsible AI
1. The Impact of AI Applications Is Regarded As a Revolution

Science and technology are a kind of productive force. The innovation and application of new technologies often improve production efficiency and stimulate transformative changes in politics, economics, society, and culture. In China, the term ‘technological revolution’ came into use because of the widespread application of such technologies. It is well known that there have been three technological revolutions in the modern era. China missed these three historic developmental opportunities due to foreign invasion and internal turmoil. During the first and second industrial revolutions, which were powered by steam and electricity respectively, China was in its last imperial period, the Qing Dynasty, and missed the opportunity to participate in the creation of inventions because it was experiencing the century of humiliation in its five-thousand-year history. The Third Industrial Revolution, which began in the 1950s, was marked by the invention and application of atomic energy, electronic computers, space technology, and bioengineering. However, China missed most of it because it lacked the political environment to participate in international exchanges. China lagged behind for a long time. With the implementation of the reform and opening-up policy in 1978, China started to catch up and to learn from the West in the areas of science and technology, legal systems, and other fields.

In order to promote the development of science and technology, Article 12 of the Constitution of the People’s Republic of China (1978 revision) stipulates that the state shall vigorously develop scientific undertakings, strengthen scientific research, carry out technological innovation and technological revolution, and adopt advanced technology in all sectors of the national economy as far as possible. In September 1988, when Deng Xiaoping, the second-generation leader of the PRC, met with President Gustáv Husák of Czechoslovakia, he said, “Science and technology are the primary productive force,” a statement which has become a generally accepted consensus among the Chinese people.

China caught up with the new trend during the third flourishing period of AI. At the beginning of the twenty-first century, China’s science and technology policy began to plan the development of ‘next generation information technology’.Footnote 2 Since 2011, Chinese official documents have made extensive references to the development of ‘next generation information technology’. With the rapid development of global AI, China has the opportunity to stand at the same starting line in the next round of AI technology development and application. China is fully aware of the profound impact of AI technology, and some high-level documents already refer to the next round of technological development, represented by AI, as a ‘technological revolution’, which is considered to be similar to the aforementioned three technological revolutions. As AI is a revolutionary technology, the Chinese government does not see it only as a technology, but also realizes that it will play a key role in social governance, economic structure, the political environment, the international landscape, and other aspects.

On 31 October 2018, the Political Bureau of the Central Committee of the CPC held its ninth collective study on the current status and trends of AI development, and Xi Jinping particularly emphasized that

Artificial Intelligence is a strategic technology leading this round of scientific and technological revolution and industrial change, with a strong ‘head goose’ effect of spillover drive. It is necessary to strengthen the development of Artificial Intelligence potential risk research and prevention, to safeguard the interests of the people and national security, to ensure that Artificial Intelligence is safe, reliable and controllable. It is necessary to integrate multidisciplinary forces, strengthen research on legal, ethical and social issues related to AI, and establish and improve laws and regulations, institutional systems and ethics to safeguard the healthy development of AI.Footnote 3

Recognizing that AI can have such broad shaping power, China’s technology policy reflects the idea of balancing development and governance, considering both the promotion of positive social benefits from AI and the prevention of risks from AI applications as components of achieving responsible AI. On the one hand, China’s main goal since its reform and opening up has been to devote itself to economic development and the improvement of people’s living standards, and in recent years it has also put forward the reform goal of modernizing its governance system and capabilities.Footnote 4 Actively promoting the development of AI technology is conducive to improving the country’s economy, increasing people’s well-being, and improving the social governance system. On the other hand, AI replaces people or performs certain actions on their behalf through technical tools, and there is a risk of abuse or loss of control when the technical conditions and social situation are not yet mature. Measures for technology development and measures for risk governance are two quite different dimensions, and the discussion of responsible AI in the remainder of this chapter focuses on analyzing China’s normative system for responsible AI from the risk governance dimension.

2. The Social Consensus Established by Soft Law

Soft law is a common tool in the field of technology governance. Technical standards, ethics and morality, initiatives and guidelines, and other forms of soft law offer flexibility and inclusiveness, and they can fill in areas of social relationships that hard law fails to adjust in a timely manner, serving the dual goals of technological innovation and security prevention. In China’s AI governance framework, government opinions, technical standards, and industry self-regulatory initiatives are all governance tools. These soft laws have no mandatory effect; they are mainly adopted and enforced through voluntary adoption, references in contracts, industry self-governance, supervision by public opinion, and market competition, thereby forming a common social consciousness. Other tools, such as technical standards, may also indirectly obtain binding effect by means of legal references.

A government opinion is a kind of nonmandatory guidance document issued by the government. In November 2017, the Ministry of Science and Technology of the PRC led the establishment of the Office of the Development and Advancement of the New Generation of Artificial Intelligence, a coordinating body jointly composed of 15 relevant departments responsible for promoting the organization and implementation of the new generation AI development planning and major science and technology projects. In March 2019, the Office of the Development and Advancement of the New Generation of Artificial Intelligence established the Committee on Professional Governance, which was formed by the Ministry of Science and Technology of the PRC by inviting scholars from the fields of public administration, computer science, ethics, and others. On 17 June 2019, the Committee on Professional Governance of the New Generation of Artificial Intelligence released in its own name the Governance Principles of the New Generation of Artificial Intelligence – Developing Responsible AI.Footnote 5 According to these governance principles, in order to promote the healthy development of a new generation of AI; better coordinate the relationship between development and governance; ensure safe, reliable, and controllable AI; promote sustainable economic, social, and ecological development; and build a community of human destiny, all parties involved in the development of AI should follow eight principles: (1) harmony and friendliness, with the goal of promoting common human welfare; (2) fairness and justice, eliminating prejudice and discrimination; (3) inclusiveness and sharing, in line with environmental friendliness, promoting coordinated development, eliminating the digital divide, and encouraging open and orderly competition; (4) respect for privacy, setting behavioral boundaries for the collection, storage, processing, use, and other handling of personal information; (5) security and controllability, enhancing transparency, explainability, reliability, and controllability; (6) shared responsibility, clarifying the responsibilities of developers, users, and recipients; (7) open cooperation, encouraging interdisciplinary, cross-disciplinary, cross-regional, and cross-border exchanges and cooperation; and (8) agile governance, ensuring the timely detection and resolution of risks that may arise.Footnote 6 These principles establish the basic ethical framework for responsible AI in China.

China's technical standards include national standards, industry standards, and local standards, which are published by government agencies, as well as consortium standards and enterprise standards, which are published by nongovernmental bodies. According to the Standardization Law of the People's Republic of China (2017 Revision), technical standards are in principle implemented voluntarily, and mandatory standards may be set only under specific circumstances.Footnote 7 There are no mandatory standards for AI governance; those that have entered the drafting process are voluntary standards. In August 2020, the Standardization Administration of China and relevant departments released the Guide to the Construction of the National New Generation AI Standard System, which incorporates security and ethics into the national standards work plan and envisages the development of security and privacy protection standards, ethical standards, and other related standards.Footnote 8 In November 2020, the National Information Security Standardization Technical Committee issued the Guideline for Cyber Security Standards: Practice Guideline for Ethics of Artificial Intelligence (Draft), which lists five major types of ethical and moral risks of AI: (1) out-of-control risk, where the system operates beyond the scope predetermined, understood, and controllable by the developer, designer, and deployer; (2) social risk, where abuse or misuse endangers social values or creates other systemic risks; (3) infringement risk, where damage is caused to basic rights, the person, privacy, or property; (4) discrimination risk, where subjective or objective bias is generated against specific groups of people; (5) liability risk, where the boundaries of responsibility of the relevant parties are unclear.Footnote 9 The AI Risk Assessment Model, the AI Privacy Protection Machine Learning Technical Requirements, and other relevant technical standards have been released in draft versions and are expected to become technical guidelines for AI risk assessment and privacy protection in the form of voluntary standards.Footnote 10

Industry self-regulatory initiatives are nonbinding norms issued by social groups and research institutions in conjunction with stakeholders. The Beijing Zhiyuan Institute of Artificial Intelligence, jointly built by Beijing's research institutions in the field of AI, released the Beijing Consensus on Artificial Intelligence in May 2019, which addresses AI from the three aspects of research and development, use, and governance, and proposes 15 principles, beneficial to building a community with a shared future for humanity and to social development, which each participant should follow. In July 2021, the AI Forum, together with more than 20 universities and AI technology companies, released the Initiative for Promoting Trustworthy AI Development, putting forward four commitments: (1) insisting on technology for good, to ensure that trustworthy AI benefits humanity; (2) insisting on sharing rights and responsibilities, to promote the value concept of trustworthy AI; (3) insisting on a healthy and orderly approach, to promote trustworthy AI industry practices; and (4) insisting on pluralism and inclusion, to gather international consensus on trustworthy AI. In addition, there is a series of related initiative documents in areas such as facial recognition security.

III. The Ambition toward a Comprehensive Legal Framework

China does not currently have a unified AI law, although one has been under discussion. In contrast to soft law, the national legislature can promulgate 'hard law' with binding force, which can establish general and binding rules on the scope of application, management system, security measures, rights and remedies, and legal liabilities of AI technologies. Once these rules are confirmed by the legislator, the relevant actors within the scope of the law must implement a unified governance model. By enacting laws, legislators therefore select a definitive model of governance for society. To ensure that the right choice is made, legislators need a good grasp of the past and present of the technology, as well as a sound understanding of its future direction. At the same time, in the early stages of an emerging technology the technological level of different developers varies widely and the overall state of the technology iterates rapidly, while making and revising laws takes a long time. Legislators therefore worry that any law they make may soon become obsolete and lag behind the development of society, yet that, without such a law, society may face a series of new problems brought about by disruptive innovation that cannot be clearly addressed.

During the 'two sessions' of the National People's Congress in recent years, there have been many proposals and motions on AI governance. Several proposals on AI regulation were submitted between 2018 and 2021, including the Bill on Formulating the Law on the Development of Artificial Intelligence (2018), the Bill on Formulating the Law on the Administration of Artificial Intelligence Applications (2019), and the Bill on Formulating the Law on Artificial Intelligence Governance (2021). Other delegates proposed the Bill on the Enactment of a Law on Self-Driving Cars (2019). In accordance with the procedures of the two sessions, delegates' bills are referred to the relevant authorities for processing and response, mainly the Legislative Affairs Commission of the Standing Committee of the National People's Congress, the Ministry of Science and Technology, and the Cyberspace Administration of China. At present, most of the proposals have been referred to the legislative bodies or relevant industry authorities for research, and the prevailing attitude is that AI legislation should be pursued as a research project and has not yet been placed on the specific legislative agenda. For example, the Standing Committee of the National People's Congress (NPC) proposed in its 2020 legislative work plan to

pay attention to research on legal issues related to new technologies and fields such as Artificial Intelligence, block chain and gene editing. Continue to promote the normalization and mechanism of theoretical research work, play the role of scientific research institutions, think tanks and other ‘external brain’, strengthen the exchange and cooperation with relevant parties, and urgently form high-quality research results.Footnote 11

The legislative work on AI is also a task to which President Xi Jinping attaches importance. The Political Bureau of the CPC Central Committee held its ninth collective study session, on the current status and trends of AI development, on 31 October 2018. At this meeting, the General Secretary of the CPC Central Committee and President Xi Jinping stated clearly that China will 'strengthen research on legal, ethical, and social issues related to Artificial Intelligence, and establish sound laws and regulations, institutional systems, and ethics to safeguard the healthy development of Artificial Intelligence.'Footnote 12 Subsequently, in November 2018, members of the Standing Committee of the National People's Congress (NPC) held a special meeting in Beijing to discuss the topic of regulating the development of AI, and after discussion it was concluded that

the relevant special committees, working bodies and relevant parties of the NPC should take early action and act as soon as possible to conduct in-depth investigation and research on the legal issues involved in Artificial Intelligence, so as to provide relevant legislation work to lay a good foundation and make preparations to promote the healthy, standardized and orderly development of AI.Footnote 13

During the two national sessions held in March 2019, more representatives and members began to discuss how to build a future rule of law system for AI.Footnote 14 In addition, according to the timetable established in the State Council's Development Plan for a New Generation of Artificial Intelligence, China should initially establish an AI legal and regulatory system by 2025. To this end, China's legislature has begun to cooperate with experts from research institutions on supporting studies. In this context, the author of this chapter participated in the relevant discussions and undertook one of the research tasks: the author made suggestions on the legislative strategy for AI at the 45th biweekly consultation symposium of the 13th National Committee of the Chinese People's Political Consultative Conference (CPPCC) held in December 2020, undertook a 2021 project of the Ministry of Science and Technology, Research on Major Legislative Issues of AI, and participated in the research task of the Legislative Affairs Commission of the Standing Committee of the National People's Congress on the legislation of facial recognition regulation.

Although there is no comprehensive legislative outcome, China's solutions for responsible AI can be found across the relevant laws. For example, the E-Commerce Law of the People's Republic of China (E-Commerce Law) enacted in 2018 prohibits the use of personal information for big-data-driven price discrimination,Footnote 15 while the Personal Information Protection Law of the People's Republic of China (Personal Information Protection Law) and the Data Security Law of the People's Republic of China (Data Security Law), both enacted in 2021, set requirements for automated decision-making rules and data security. In July 2021, the Supreme People's Court promulgated the Provisions on Several Issues Concerning the Application of Law in Hearing Civil Cases Related to the Use of Facial Recognition Technology for Handling Personal Information, which is also an important governance regulation.Footnote 16 In addition, on 27 August 2021, the Cyberspace Administration of China issued the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the first national-level legislative document in China to comprehensively regulate AI from the perspective of algorithms. At the local level, the Shenzhen legislature used its special legislative power as a special economic zone to issue the Regulations on the Promotion of Artificial Intelligence Industry in the Shenzhen Special Economic Zone (Draft for Soliciting Public Comment) on 14 July 2021. Although the name of the law contains the word 'promotion', it includes a special chapter, 'Governance Principles and Measures', providing rules for responsible AI.

IV. The Legally Binding Method to Achieve Responsible AI

The new generation of AI is driven mainly by data and algorithms and exerts substantial social influence in different scenarios through various network platforms. Under the Chinese legal system, responsible AI can be implemented through governance along four dimensions: data, algorithms, platforms, and application scenarios.Footnote 17

1. Responsible AI Based on Data Governance

Data is the key factor driving the prosperous development of the new generation of AI. Big data resources increasingly have a significant impact on global production, circulation, distribution, and consumption, on economic and social systems, and on national governance capabilities.Footnote 18 The Cyber Security Law enacted in November 2016 sets requirements for the security of important data and of personal information respectively, and AI developers must comply with the relevant regulations when processing data. In particular, national security and the public interest must be safeguarded when dealing with important data, and the rights and interests of natural persons must be protected when dealing with personal information. In 2020, the newly released Civil Code of the People's Republic of China (Civil Code) protected privacy and personal information interests. In 2021, the Data Security Law and the Personal Information Protection Law (hereinafter 'PIPL') jointly provided a more comprehensive approach to data governance. Responsible AI is ensured through new legal rules in four major dimensions of data governance: giving individuals new civil rights, setting out obligations for processors, building a governance system for data security risks, and strengthening responsibility for data processing.

The new civil rights granted to individuals are mainly reflected in Chapter 4 of the PIPL. In addition, the Civil Code has laid down a 'privacy right' and a 'personal information right'. Privacy refers to a natural person's undisturbed private life and the private space, private activities, and private information that the person does not want others to know about, while personal information is information recorded electronically or by other means that can be used, by itself or in combination with other information, to identify a natural person.Footnote 19 This distinction is rarely drawn, and seldom discussed, in legal and academic research in Europe and the United States (US). Through these two systems, however, China constructs strict protection of privacy rights, protecting natural persons from being exposed or interfered with and giving them the right to prevent their personal information from being handled unlawfully. According to the Civil Code, private information that forms part of personal information is governed by the provisions on privacy; where there is no such provision, the provisions on the protection of personal information, such as the PIPL, apply. The PIPL provides a series of specific rights in Articles 44–55, the content of which is consistent with the corresponding articles of the European Union (EU) General Data Protection Regulation (GDPR),Footnote 20 and the fundamental purpose of which is to safeguard the rights of individuals in the data processing environment. Based on the protection of these rights, when AI handles personal information it is also necessary to fully respect human dignity and to ensure that personal information is not plundered by information technology. See Table 9.1 for the specific rights and their legal basis.

Table 9.1. Individuals’ Rights in Personal Information Processing Activities

No. | Name of right | Legal reference
1 | The right to be informed, to decide, and to restrict or refuse the processing | PIPL Art. 44
2 | The right to consult, duplicate, and transfer personal information | PIPL Art. 45
3 | The right to correct or supplement their personal information | PIPL Art. 46
4 | The right to delete | PIPL Art. 47
5 | The right to request that personal information processors explain their personal information processing rules | PIPL Art. 48
6 | The right to exercise the rights in the personal information of the deceased | PIPL Art. 49
7 | The right to obtain remedies | PIPL Art. 50
8 | The right to have privacy respected | CC Art. 1032

Note: PIPL refers to Personal Information Protection Law of the People’s Republic of China; CC refers to Civil Code of the People’s Republic of China; DSL refers to Data Security Law of the People’s Republic of China.

The obligations of processors serve not only to protect the personal information rights and interests of natural persons but also, more specifically, to strengthen regulatory measures of protection. AI developers and operators may be personal information processors, and they are required to comply with the nine major obligations under the PIPL and the Data Security Law shown in Table 9.2. These obligations cover the entire life cycle of personal information processing, ensuring the accountability of AI applications and reducing or eliminating the risk of damage to personal information.

Table 9.2. Obligations of Data Processors

No. | Name of obligation | Legal reference
1 | Acquire legal basis for processing personal information | PIPL Art. 13
2 | Truthfully, accurately, and completely notify individuals of the relevant matters in a conspicuous way and in clear and easily understood language | PIPL Art. 14 & 17
3 | Take corresponding security technical measures | PIPL Art. 51 & 59
4 | Appoint a person in charge of personal information protection | PIPL Art. 52 & 53
5 | Audit on a regular basis the compliance of their processing of personal information | PIPL Art. 54
6 | Conduct personal information protection impact assessment in advance, and record the processing information | PIPL Art. 55 & 56
7 | Immediately take remedial measures, and notify the authority performing personal information protection functions and the relevant individuals | PIPL Art. 57; DSL Art. 29 & 30
8 | Specific requirements for sharing data | PIPL Art. 23
9 | Specific requirements for important Internet platforms | PIPL Art. 58

Note: PIPL refers to Personal Information Protection Law of the People’s Republic of China; CC refers to Civil Code of the People’s Republic of China; DSL refers to Data Security Law of the People’s Republic of China.

Once data security risks in AI applications materialize, it is difficult to recover from the damage. In order to avoid different types of data security risks, concerning both personal information and important data, the Data Security Law and the PIPL establish a series of mechanisms to identify, eliminate, and resolve risks, thus ensuring data security in AI applications. Risk governance measures can be understood along different dimensions; eight important governance measures under the law are listed in Table 9.3.

Table 9.3. Risk Management System of Data Governance

No. | Risk management system | Legal reference
1 | Informed consent | PIPL Art. 13–17
2 | Data minimization | PIPL Art. 6 & 19
3 | Openness and transparency | PIPL Art. 7, 17, 48, 58
4 | Cross-border security management system | PIPL Art. 38–43; DSL Art. 31
5 | Sensitive personal information processing rules | PIPL Art. 28–32
6 | Categorized and hierarchical data protection system | DSL Art. 21
7 | Risk monitoring and security emergency response and disposition mechanism | DSL Art. 22 & 23
8 | Public supervision system for personal information | PIPL Art. 60–65; DSL Art. 40

Note: PIPL refers to Personal Information Protection Law of the People’s Republic of China; DSL refers to Data Security Law of the People’s Republic of China.

One of the basic principles of responsible AI is accountability, which also applies to data governance. The developers, controllers, and operators of AI systems can be regarded as personal information processors under the PIPL or data processors under the Data Security Law, and they must comply with the above obligations. If the relevant obligors of an AI system violate data security obligations, they are liable for the resulting damage. The liability includes civil liability for compensation, administrative penalties, and criminal liability. Chapter VI of the Data Security Law and Chapter VII of the PIPL provide a number of legal liabilities that ensure that individuals can obtain remedies and processors are punished in the event of data risks.

2. Responsible AI Based on Algorithm Governance

Responsible AI requires a combination of external and internal factors to play an active role: data is the external factor and the algorithm is the internal factor. Algorithms are the key components of intelligence, and a series of algorithms combined with data training can form an AI system. An example is the intelligent trial system developed by the author's research group for the Chinese courts in a research program on Intelligent Assistive Technology in the Context of Judicial Process Involving Concerned Civil and Commercial Cases. The actual workflow of this platform is represented in Figure 9.1.

Figure 9.1 A development process of AI application

This process is not the only way to develop an AI system, but it is an example of common practice. Through the development process just described, a computational function is implemented: data input, model calculation, and data output, where the quality of the model directly determines the performance of the AI system. In this process, pre-trained models are often selected to reduce the development workload. The algorithm combines these pre-trained models, and the new model structure formed during development changes some of their parameters. Algorithm governance as commonly practiced therefore mainly concerns the parameters in the model structure, and this chapter continues to use 'algorithmic governance' as a unified concept, in keeping with the academic terminology.
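To make this concrete, the following minimal sketch (assuming PyTorch and a recent version of torchvision, with a hypothetical two-class classification task and placeholder data) illustrates the pattern described above: a pre-trained model is reused, most of its parameters are frozen, and only a small set of newly added parameters is trained on the developer's own data.

```python
# Minimal sketch of the workflow in Figure 9.1: reuse a pre-trained model,
# replace its output head, and fine-tune only the new parameters.
# The dataset, class count, and task are hypothetical.
import torch
import torch.nn as nn
from torchvision import models

NUM_CLASSES = 2  # hypothetical binary classification task

# 1. Data input: a placeholder batch standing in for the developer's own data.
inputs = torch.randn(8, 3, 224, 224)          # 8 images, 3 channels, 224x224 pixels
labels = torch.randint(0, NUM_CLASSES, (8,))  # hypothetical ground-truth labels

# 2. Model: start from a pre-trained network and freeze its existing parameters.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False

# Replace the output head; only these newly added parameters will be updated.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

# 3. Model calculation and parameter update (one illustrative training step).
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
optimizer.zero_grad()
outputs = model(inputs)             # data output: class scores
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()

print(f"training loss after one step: {loss.item():.4f}")
```

In a system built this way, the behavior that regulators care about is largely encoded in the learned parameters rather than in hand-written rules, which is why the chapter treats the parameters of the model structure as the practical object of 'algorithmic governance'.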

The E-Commerce Law, the PIPL, and other related laws contain relevant provisions on algorithm governance. On 27 August 2021, the Cyberspace Administration of China released the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), which provides many algorithm governance requirements. In the E-Commerce Law, the representative concept of algorithm governance is 'personalized recommendation'.Footnote 21 In the PIPL, it is 'automated decision-making'.Footnote 22 In the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), it is 'algorithm recommendation technology', which refers to providing information by using algorithmic techniques such as generation and synthesis, personalized push, sorting and selection, retrieval and filtering, and scheduling decisions.Footnote 23 Although the name of this regulation appears to limit it to information services, information services can here be understood in a broad sense as information service technology.

The main principled requirements of algorithm governance are generally taken to be transparency, fairness, controllability, and accountability. The governance of algorithms in the relevant Chinese laws and regulations basically follows these principles, which are also reflected in this regulation. According to the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the use of algorithm recommendation should follow the principles of fairness, openness, transparency, reasonableness, and honesty.Footnote 24 Moreover, the regulation explicitly prohibits the use of algorithm recommendation services to engage in activities prohibited by laws and regulations, such as endangering national security, disrupting the economic and social order, and infringing on the legitimate rights and interests of others.Footnote 25

Data-driven AI has a certain degree of incomprehensibility, and algorithmic transparency can help us understand how AI systems work and ensure that users make well-informed choices about how they use them. According to the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the algorithm recommendation service provider should inform users of the algorithm recommendation service in a conspicuous manner and properly publicize its basic principle, purpose, and operating mechanism.Footnote 26 In addition, Articles 24 and 48 of the PIPL also impose a transparency requirement: individuals have the right to request that personal information processors explain their personal information processing rules, the right to request an explanation of decisions made through automated decision-making that significantly impact their rights and interests, and the right to refuse decisions made solely through automated decision-making.
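Purely as an illustration, the sketch below shows one way a processor might accompany an automated decision with a plain-language account of its main factors, in the spirit of the explanation rights just described. The scoring rule, feature names, weights, and threshold are all invented for this example and are not drawn from any law or deployed system.

```python
# Hypothetical sketch: attach a simple factor-level explanation to an
# automated decision so that it can be explained to the affected individual.
# All names, weights, and thresholds are invented for illustration.

WEIGHTS = {            # contribution of each (normalized) feature to the score
    "income_level": 0.5,
    "repayment_history": 0.4,
    "account_age_years": 0.1,
}
APPROVAL_THRESHOLD = 0.6

def decide_and_explain(applicant: dict) -> dict:
    """Return an automated decision together with its main contributing factors."""
    contributions = {
        name: WEIGHTS[name] * applicant.get(name, 0.0) for name in WEIGHTS
    }
    score = sum(contributions.values())
    decision = "approved" if score >= APPROVAL_THRESHOLD else "rejected"

    # Rank factors by the size of their contribution, largest first.
    ranked = sorted(contributions.items(), key=lambda kv: kv[1], reverse=True)
    explanation = [
        f"{name} contributed {value:.2f} to the overall score of {score:.2f}"
        for name, value in ranked
    ]
    return {"decision": decision, "score": round(score, 2), "explanation": explanation}

if __name__ == "__main__":
    applicant = {"income_level": 0.7, "repayment_history": 0.5, "account_age_years": 0.2}
    result = decide_and_explain(applicant)
    print(result["decision"], result["score"])
    for line in result["explanation"]:
        print(" -", line)
```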

Algorithmic bias is also a highly controversial issue in algorithm governance, and the central topic is how to ensure the fairness of AI. In China, setting higher prices for price-insensitive users through algorithms occasionally occurs in e-commerce; the main scenario is that cheaper prices are offered to new users while relatively higher prices are set for long-standing users who have developed a dependency on the platform. Article 18 of the E-Commerce Law and Article 21 of the PIPL already contain relevant provisions. Articles 10 and 18 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment) go further: providers of algorithm recommendation services shall strengthen the management of user models and user labels and improve the rules for the points of interest recorded in the user model; they shall not record illegal keywords or unhealthy information as points of interest or mark them as user labels for recommending information, and they shall not set discriminatory or prejudicial user labels. Algorithm recommendation service providers selling goods or providing services to consumers shall protect consumers' legitimate rights and interests and shall not, based on consumers' preferences, transaction habits, and other characteristics, use algorithms to impose unreasonable differential treatment in transaction prices and other transaction conditions, or commit other unlawful acts.

At present, Chinese e-commerce operators still hold differing opinions on whether such behavior constitutes algorithmic bias. However, with extensive news media coverage, general public opinion leans toward opposing algorithmic biases such as the differential treatment of long-standing users.
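A simple way to picture how such differential treatment might be detected, purely as an illustration with invented numbers rather than any legally prescribed test, is to compare the prices quoted to new and long-standing users for the same product:

```python
# Hypothetical audit sketch: compare prices quoted to new vs. returning users
# for the same product and flag a suspicious gap. The data and the tolerance
# threshold are invented for illustration.
from statistics import mean

quotes = [
    # (user_group, quoted_price) for one and the same product
    ("new", 98.0), ("new", 99.5), ("new", 97.0),
    ("returning", 109.0), ("returning", 111.5), ("returning", 108.0),
]

TOLERANCE = 0.05  # flag if returning users pay more than 5% above new users

new_avg = mean(p for g, p in quotes if g == "new")
returning_avg = mean(p for g, p in quotes if g == "returning")
gap = (returning_avg - new_avg) / new_avg

print(f"average price for new users:       {new_avg:.2f}")
print(f"average price for returning users: {returning_avg:.2f}")
print(f"relative gap: {gap:.1%}")
if gap > TOLERANCE:
    print("potential differential pricing detected; review the pricing algorithm")
```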

AI replaces some human behavior with automated machine behavior, and controllability is the essential requirement for ensuring the safety and stability of AI. In order to prevent the risk of loss of control, the Regulations on the Promotion of Artificial Intelligence Industry in the Shenzhen Special Economic Zone (Draft), released in July 2021, set out rules for agile governance, that is, organizing and conducting social experiments on AI; studying the comprehensive influence of AI development on the behavior patterns, social psychology, employment structure, income changes, and social equity of individuals and organizations; and accumulating data and practical experience.Footnote 27

At present, China focuses mainly on self-driving cars and so-called robo-advisors that give advice on investment decisions. The relevant departments of the State Council and some localities have issued a series of road-testing specifications for intelligent connected vehicles (ICV), making closed road testing a prerequisite for self-driving cars to be put on the market. At the same time, to further improve their controllability, cars tested on designated open roads must also carry drivers ready to take over.Footnote 28

In addition, the Guidance on Standardizing the Asset Management Business of Financial Institutions, issued by the People's Bank of China and other departments in 2018, sets out requirements for preventing uncontrolled risk in the field of smart investment advisers. It states that

Financial institutions should develop corresponding AI algorithms or program trading according to different product investment strategies to avoid algorithm homogeneity and increase pro-cyclicality of investment behavior, and for the resulting market volatility risk to develop a response plan. Due to algorithm homogenization, programming design errors, insufficient depth of data utilization and other Artificial Intelligence algorithm model defects or system anomalies, resulting in herding effects, affecting the stable operation of financial markets, financial institutions should promptly take manual intervention measures to force the adjustment or termination of the Artificial Intelligence business.Footnote 29
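The 'manual intervention measures' required in the quoted guidance can be pictured as a simple circuit breaker around a trading algorithm. The sketch below is only an illustration; the volatility indicator, threshold, and function names are hypothetical and are not taken from the Guidance or from any real trading system.

```python
# Hypothetical circuit-breaker sketch: halt an algorithmic trading strategy
# and hand control to a human operator when a risk indicator exceeds a limit.
# The indicator, threshold, and order flow are invented for illustration.
from statistics import pstdev

VOLATILITY_LIMIT = 0.03  # hypothetical limit on recent return volatility

def recent_volatility(returns: list) -> float:
    """Population standard deviation of recent returns as a crude risk indicator."""
    return pstdev(returns) if len(returns) > 1 else 0.0

def run_strategy_step(returns: list, trading_enabled: bool) -> bool:
    """Run one step of the strategy; return whether automated trading remains enabled."""
    if not trading_enabled:
        return False
    if recent_volatility(returns) > VOLATILITY_LIMIT:
        print("volatility limit breached: halting algorithm, notifying human operator")
        return False  # manual intervention: algorithm is stopped pending review
    print("risk indicator within limits: order generation continues")
    return True

if __name__ == "__main__":
    calm_market = [0.001, -0.002, 0.0015, -0.001]
    stressed_market = [0.04, -0.05, 0.06, -0.045]
    enabled = True
    enabled = run_strategy_step(calm_market, enabled)      # keeps running
    enabled = run_strategy_step(stressed_market, enabled)  # triggers the halt
    print("trading enabled:", enabled)
```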

The accountability of algorithms requires regulators and stakeholders to perform their respective duties to ensure that technological innovation is accompanied by effective risk mitigation. In China's legal system, the Civil Code, the Product Quality Law, and other related laws provide the basis for algorithmic accountability.

For example, the Product Quality Law requires producers who design and sell products to exercise the best (not the highest) degree of care; at the same time, it imposes strict liability for unreasonable risks and thereby pushes producers of AI system products to improve controllability. In the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment), the Cyberspace Administration of China proposed a new rule requiring providers of high-risk algorithm recommendation services to file a record within ten working days from the date on which the service is provided: the service provider's name, service form, application domain, algorithm type, algorithm self-evaluation report, and the content to be publicized must be submitted through the algorithm filing system for Internet information services.Footnote 30 In addition, algorithm recommendation service providers should accept social supervision, set up a convenient portal for complaints and reports, and handle public complaints and reports promptly. They should establish a channel and system for user complaints, handle complaints and feedback in a standardized and timely fashion, and protect users' legitimate rights and interests.Footnote 31

The content discussed above reflects the need for administrative authorities, algorithm developers, and other relevant parties to fulfill their corresponding responsibilities under the accountability requirements for algorithms.

3. Responsible AI Based on Platform Governance

The world's leading AI innovators are online platform companies. These online platforms have strong technological innovation capabilities and a wide range of AI application scenarios, and many new technologies and new business models derive from them. Platform governance is therefore also an important aspect of achieving responsible AI. In China, online platform governance is regulated mainly in the E-Commerce Law, competition law, and other relevant laws and regulations. In recent years, provisions on AI governance on online platforms have been adopted or promulgated through legislation or amendment.

A growing number of platforms in online transactions use AI to set flexible transaction rules. The E-Commerce Law issued in 2018 explicitly defines the platform as a regulated object and requires e-commerce platform operators to follow the principles of openness, fairness, and impartiality in formulating platform service agreements and transaction rules.Footnote 32 Article 18 of the E-Commerce Law also requires e-commerce operators to respect and equally protect the legitimate rights and interests of consumers when providing personalized recommendation services. In addition, the Interim Provisions on the Management of Online Tourism Operation Services, issued in August 2020, provide that online travel operators shall not abuse technical means such as big data analysis to set unfair trading conditions based on tourists' consumption records and travel preferences and thereby infringe the legitimate rights and interests of tourists.Footnote 33 The E-Commerce Law thus mainly requires that the application of AI not undermine the right of consumers and operators within the platform to be treated fairly.

Online platforms may also use AI to gain an unfair competitive advantage in the market. China's Anti-Monopoly Law, Anti-Unfair Competition Law, and other related regulations also address platform responsibilities in the application of AI.Footnote 34 With respect to horizontal monopoly agreements, the substantive coordination of conduct through data, algorithms, platform rules, or other means is regarded as an illegal monopoly. With respect to vertical monopoly agreements, it is likewise regarded as an illegal monopoly to exclude or restrict market competition by directly or indirectly fixing prices through data and algorithms, or by limiting other transaction conditions through technical means, platform rules, data, and algorithms.Footnote 35 The use of big data and algorithms to impose differential prices or other differential trading conditions, or to impose differentiated standards, rules, or algorithms based on the counterparty's ability to pay,Footnote 36 consumption preferences, and usage habits, is also considered an illegal monopolistic abuse of a dominant market position. The AI used to implement such monopolistic acts mainly involves large-scale Internet platforms with a dominant market position.

It is an act of unfair competition for a business operator to use data, algorithms, and other technical means to carry out traffic hijacking, interference, or malicious incompatibility in order to prevent or disrupt the normal operation of network products or services lawfully provided by other operators.Footnote 37 Similarly, operators that use data, algorithms, and other technical means to unreasonably provide different transaction information to counterparties under the same transaction conditions (by collecting and analysing transaction information, the content and time of Internet browsing, the brand and value of the terminal equipment used for the transaction, and so on) infringe the counterparties' right to know, right to choose, and right to fair trade, and disrupt the fair trading order of the market.Footnote 38 Those who use AI to carry out such unfair competition can be both large Internet platforms and participants in other platform markets.

4. Responsible AI under Specific Scenarios

Specific scenarios in the field of AI based on new technologies and new business models often attract special attention. As a result, AI-related regulations in different areas have been emerging. For example, China has special regulations to ensure responsible AI in areas such as labor and employment, facial recognition, autonomous driving, smart investment advisers, deepfakes, online travel, and online litigation. The regulations related to responsible AI in these special areas can be divided into two categories. The first category carries the provisions of the PIPL, the E-Commerce Law, and other relevant regulations into these specific areas, which increases the relevance of implementation but does not substantially introduce a new legal regime. The second category establishes additional legal obligations and rights based on the special circumstances of the specific scenario.

In the labor market, a widely circulated report on Chinese social media in September 2020 about the abuse of algorithms for performance management on online platforms revealed that a series of automated, algorithm-based practices, such as point rating systems, system 'upgrades' that shorten delivery times, and navigation instructions that violate traffic rules, were forcing couriers into high-intensity labor.Footnote 39 In July 2021, the State Administration for Market Regulation (SAMR) and relevant departments jointly issued binding opinions stating that online catering platforms and their third-party partners should set reasonable performance appraisal systems for delivery workers; adjustments to appraisal, reward and punishment, and other systems, as well as significant matters involving the direct interests of delivery workers, should be publicized in advance, and the views of delivery workers, trade unions, and other parties should be fully heard. Platforms should optimize their algorithm rules, should not use the 'strictest algorithm' as the appraisal requirement, and should instead reasonably determine the number of orders, the online rate, and other appraisal factors through a moderated 'middle-ground' algorithm and similar means, allowing appropriate flexibility in delivery time frames.Footnote 40 The Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment) released in August 2021 further states that algorithm recommendation service providers that provide work scheduling services to workers should establish and improve the algorithms relating to order distribution, remuneration and payment, working hours, rewards and punishments, and related matters, and should fulfill their obligations toward workers' rights and interests.Footnote 41 These requirements are a special case in the field of labor and employment and reflect the principle of inclusiveness, aiming to avoid the risk that AI polarizes society.

In the field of facial recognition, an associate professor at the law school of Zhejiang University of Technology sued Hangzhou Safari Park over its compulsory use of facial recognition equipment for admission, in what was regarded as China's first judicial case asserting rights against facial recognition. Subsequently, a professor at the law school of Tsinghua University published a criticism of the compulsory use of facial recognition equipment as a condition for entering an apartment complex. A series of facial recognition incidents has raised social concern about the right to choose whether facial recognition is applied. In December 2020, the Cyberspace Administration of China drafted Security Management Regulations for Commercial Applications of Facial Recognition Technology (Draft), but the draft has not yet been released. Meanwhile, the Supreme People's Court has published a judicial opinion, the Provisions on Several Issues Concerning the Application of Law in Hearing Civil Cases Relating to the Use of Facial Recognition Technology for Handling Personal Information, which states that if building managers use facial recognition systems as the only way to verify owners or property users (for entering or leaving the building), the people's court shall support owners or property users who disagree with using facial recognition and request other reasonable verification methods under the law.Footnote 42 In addition, the Legislative Affairs Commission of the Standing Committee of the National People's Congress is considering drafting special legal provisions on facial recognition. These moves reflect the requirements of protecting personal biometric information and tailor special rules to the specific scenarios of facial recognition.

Furthermore, in the field of autonomous driving, the relevant departments of the State Council have set specific and demanding conditions for testing on open roads, including inspection reports on autonomous driving functions issued by commissioned third-party testing agencies, testing programs, and certificates of compulsory traffic accident insurance.Footnote 43 In the field of smart investment advisers, financial institutions should report the main parameters of the AI model and the main logic of asset allocation to the financial supervision and administration authorities, set up separate smart management accounts for investors, fully disclose the inherent flaws and risks of using AI algorithms, clarify the transaction process, strengthen record-keeping, and strictly monitor the trading positions, risk limits, transaction types, and pricing authority of smart management accounts. Financial institutions that cause losses to investors through violations of law or mismanagement shall be liable for damages as prescribed by law.Footnote 44 In the field of deepfake governance, the law requires that no organization or individual may infringe the portrait rights of others by vilifying or defacing the image, or by using information technology to forge it; the relevant provisions on the protection of portrait rights apply, mutatis mutandis, to the protection of the voices of natural persons.Footnote 45 The dispersion of these provisions indicates that the degree to which AI is applied differs across fields, as do the risks and security needs arising from its use. In the absence of comprehensive AI legislation, adopting special binding provisions or guidance to address specific issues is a way to balance the pursuit of development with the value of security.

V. Conclusion

The culture of the legal profession is generally conservative, with the result that laws and regulations always lag in responding to innovation and new technologies. In the early phase of AI's rapid development, risk governance in China has been implemented mainly through moral codes, ethical guidelines, and technical standards. In contrast to such soft law, the national legislature can enact mandatory 'hard laws' that establish general and binding rules on the scope of application, management system, safety measures, rights and remedies, and legal liabilities of AI technologies. China has issued several soft law governance tools for responsible AI in different sectors but does not yet have comprehensive AI legislation. Nevertheless, China is still moving toward comprehensive AI legislation, as evidenced by President Xi Jinping's statement on AI legislation, the requirements in the national development plan for a new generation of AI, the attention paid to the topic by National People's Congress deputies, and local legislative initiatives such as Shenzhen's.

Law is, in a sense, out of date from the moment of its enactment. This does not mean, however, that the law can do nothing about problems that arise after its promulgation. In the codified tradition, the applicability of legal documents is often extensible, which enables new technologies and new business models to find corresponding applicable provisions. The Civil Code, the E-Commerce Law, the Product Quality Law, and other legislation already in force can serve as legal requirements for developing responsible AI. In addition, new laws and other binding documents enacted in recent years provide a substantial basis for AI governance, and the effective and draft documents released in 2021 show that responsible AI is increasingly a concrete goal to be enforced. Looking toward the future, two different legislative routes are open to countries including China, the EU, and the United States. One option is a foresighted legal design mindset that lays out an institutional track for developing emerging technologies as quickly as possible: once the basic pattern of AI applications has formed, lawmakers summarize and predict the various risks of AI based on the existing situation and on understanding obtained by reasoning. The other option is a 'wait and see' approach, which holds that it is still too early for lawmakers to see how the technology will affect citizens: lawmakers focus on the positive value of emerging technology, while the associated risks are identified, adjusted, regulated, and resolved through the free competition mechanism of the market itself.

Judging from current legislative dynamics in China, improving the regulations related to data, algorithms, platforms, and specific scenarios will provide a broad and effective basis for AI governance. The development of comprehensive AI legislation has not been formally included in the NPC Standing Committee's short-term work plan, but this does not prevent local legislatures from exploring the possibility of comprehensive legislation. If comprehensive AI legislation is enacted, its key elements will be to record the types of AI risks, design mechanisms for identifying AI risks, and construct mechanisms for resolving AI risks. The EU's proposed AI Act, released in 2021, has also been widely followed in China, and it can be expected that, after the proposal is adopted in Europe, similar legislation is likely to be enacted in China shortly thereafter.

10 Towards a Global Artificial Intelligence Charter

Thomas Metzinger
Footnote *
I. Introduction

The time has come to move the ongoing public debate on Artificial Intelligence (AI) into our political institutions. Many experts believe that during the next decade we will be confronted with an inflection point in history and that there is a closing window of opportunity for working out the applied ethics of AI. Political institutions must, therefore, produce and implement a minimal but sufficient set of ethical and legal constraints for the beneficial use and future development of AI. They must also create a rational, evidence-based process of critical discussion aimed at continuously updating, improving, and revising this first set of normative constraints. Given the current situation, the default outcome is that the values guiding AI development will be set by a very small number of human beings acting within large private corporations and military institutions. Therefore, one goal is to proactively integrate as many perspectives as possible – and in a timely manner. Many initiatives have already sprung up worldwide and are actively investigating recent advances in AI in relation to issues concerning applied ethics, including its legal aspects, future sociocultural implications, existential risks, and policymaking.Footnote 1 Public debate is heated, and some may even have the impression that major political institutions like the European Union (EU) are unable to react with adequate speed to new technological risks and to rising concern amongst the general public. We should, therefore, increase the agility, efficiency, and systematicity of current political efforts to implement rules by developing a more formal and institutionalised democratic process, and perhaps even new models of governance.

To initiate a more systematic and structured process, I will present a concise and non-exclusive list of the five most important problem domains, each with practical recommendations. The first problem domain to be examined is the one that, in my view, is made up of those issues that have the smallest chance of being solved. It should, therefore, be approached in a multilayered process, beginning in the EU itself.

II. The ‘Race-to-the-Bottom’ Problem

We need to develop and implement worldwide safety standards for AI research. A Global Charter for AI is necessary, because such safety standards can be effective only if they involve a binding commitment to certain rules by all countries participating and investing in the relevant type of research and development. Given the current competitive economic and military context, the safety of AI research will very likely be reduced in favour of more rapid progress and reduced cost, namely by moving it to countries with low safety standards and low political transparency (an obvious and strong analogy is the problem of tax evasion by corporations and trusts). If international cooperation and coordination succeeded, then a ‘race to the bottom’ in safety standards (through the relocation of scientific and industrial AI research) could, in principle, be avoided. However, the current landscape of incentives makes this a highly unlikely outcome. Non-democratic political actors, financiers, and industrial lobbyists will almost certainly prevent any more serious globalised approach to AI ethics.Footnote 2 I think that, for most of the goals I will sketch below, it would not be intellectually honest to assume that they can actually be realised, at least not in any realistic time frame and with the necessary speed (this is particularly true of Recommendations 2, 4, 6, 7, 10, 12, and 14). Nevertheless, it may be helpful to formulate a general set of desiderata to help structure future debates.

Recommendation 1

The EU should immediately develop a European AI Charter.

Recommendation 2

In parallel, the EU should initiate a political process steering the development of a Global AI Charter.

Recommendation 3

The EU should invest resources into systematic strengthening of international cooperation and coordination. Strategic mistrust should be minimised; commonalities can be defined via maximally negative scenarios.

The second problem domain to be examined is arguably constituted by the most urgent set of issues, and these also have a fairly small chance of being adequately resolved.

III. Prevention of an AI Arms Race

It is in the interests of the citizens of the EU that an AI arms race, for example between China and the United States (US), be halted before it gathers real momentum. Again, it may well be too late for this, and European influence is obviously limited. However, research into, and development of, offensive autonomous weapons should not be funded, and indeed should be outright banned, on EU territory. Autonomous weapons select and engage targets without human intervention, and they will act and react on ever shorter timescales, which in turn will make it seem reasonable to transfer more and more human autonomy into these systems themselves. They may, therefore, create military contexts in which relinquishing human control almost entirely seems like the rational choice. Autonomous weapon systems lower the threshold for entering a war, and if both warring parties possess intelligent, autonomous weapon systems there is an increased danger of fast escalation based exclusively on machine-made decisions. In this problem domain, the degree of complexity is even higher than in the context of preventing the development and proliferation of nuclear weapons, for example, because most of the relevant research does not take place in public universities. In addition, if humanity forces itself into an arms race on this new technological level, the historical process of an arms race itself may become autonomous and resist political interventions.

Recommendation 4

The EU should ban all research on offensive autonomous weapons on its territory and seek international agreements on such prohibitions.

Recommendation 5

For purely defensive military applications (if they are at all conceivable), the EU should fund research into the maximal degree of autonomy for intelligent systems that appears to be acceptable from an ethical and legal perspective.

Recommendation 6

On an international level, the EU should start a major initiative to prevent the emergence of an AI arms race, using all diplomatic and political instruments available.

The third problem domain to be examined is the one for which the predictive horizon is probably still quite distant, but where epistemic uncertainty is high and potential damage could be extremely large.

IV. A Moratorium on Synthetic Phenomenology

It is important that all politicians understand the difference between AI and artificial consciousness. The unintended or even intentional creation of artificial consciousness is highly problematic from an ethical perspective, because it may lead to artificial suffering and a consciously experienced sense of self in autonomous, intelligent systems. ‘Synthetic phenomenology’ (SP, a term coined in analogy to ‘synthetic biology’) refers to the possibility of creating not only general intelligence, but also consciousness or subjective experiences, in advanced artificial systems. Potential future artificial subjects of experience currently have no representation in the current political process, they have no legal status, and their interests are not represented in any ethics committee. To make ethical decisions, it is important to have an understanding of which natural and artificial systems have the capacity for producing consciousness, and in particular for experiencing negative states like suffering.Footnote 3 One potential risk is that of dramatically increasing the overall amount of suffering in the universe, for example via cascades of copies or the rapid duplication of conscious systems on a vast scale.

For this, I refer readers to an open-access publication of mine, titled ‘Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology’.Footnote 4 The risk that has to be minimised in a rational and evidence-based manner is the risk of an ‘explosion of negative phenomenology’ (ENP; or simply a ‘suffering explosion’) in advanced AI and other post-biotic systems. I will here define ‘negative phenomenology’ as any kind of conscious experience a conscious system would avoid or rather not go through if it had a choice.

On ethical grounds, we should not risk an explosion of conscious suffering – at the very least not before we have a much deeper scientific and philosophical understanding of what both consciousness and suffering really are. As we presently have no good theory of consciousness and no good, hardware-independent theory about what ‘suffering’ really is, the ENP risk is currently incalculable. It is unethical to run incalculable risks of this magnitude. Therefore, until 2050, there should be a global ban on all research that directly aims at, or indirectly and knowingly risks, the emergence of synthetic phenomenology.

Synthetic phenomenology is only one example of a type of risk to which political institutions have turned out to be systematically blind, typically dismissing such risks as ‘mere science fiction’. It is equally important that all politicians understand both the possible interactions amongst specific risks and – given the large number of ‘unknown unknowns’ in this domain – the fact that there is an ethics of risk-taking itself. This point relates to uncomprehended risks we currently label as ‘mid-term’, ‘long-term’, or ‘epistemically indeterminate’.

Recommendation 7

The EU should ban all research that risks or directly aims at the creation of synthetic phenomenology on its territory, and seek international agreements on such prohibitions.Footnote 5

Recommendation 8

Given the current level of uncertainty and disagreement within the nascent field of machine consciousness, there is a pressing need to promote, fund, and coordinate relevant interdisciplinary research projects (comprising fields such as philosophy, neuroscience, and computer science). Specific topics of relevance are evidence-based conceptual, neurobiological, and computational models of conscious experience, self-awareness, and suffering.

Recommendation 9

On the level of foundational research there is a need to promote, fund, and coordinate systematic research into the applied ethics of non-biological systems capable of conscious experience, self-awareness, and subjectively experienced suffering.

The next general problem domain to be examined is the most complex, and likely contains the largest number of unexpected problems and ‘unknown unknowns’.

V. Dangers to Social Cohesion

Advanced AI technology will clearly provide many possibilities for optimising the political process itself, including novel opportunities for rational, value-based social engineering and more efficient, evidence-based forms of governance. On the other hand, it is plausible to assume that there are many new, at present unknown, risks with the potential to undermine efforts to sustain social cohesion. It is also reasonable to assume the existence of a larger number of ‘unknown unknowns’, of AI-related risks that we will discover only by accident and late in the day. Therefore, the EU should allocate separate resources to prepare for situations in which such unexpected ‘unknown unknowns’ are suddenly discovered.

Many experts believe that the most proximal and well-defined risk is massive unemployment through automation.Footnote 6 The implementation of AI technology by financially potent stakeholders may lead to a steeper income gradient, increased inequality, and dangerous patterns of social stratification.Footnote 7 Concrete risks are extensive wage cuts, a collapse of income tax, plus an overload of social security systems. But AI poses many other risks for social cohesion, for example via privately owned and autonomously controlled social media aimed at harvesting human attention and ‘packaging’ it for further use by their customers, or in ‘engineering’ the formation of political will via Big Nudging strategies and AI-controlled choice architectures that are not transparent to the individual citizens whose behaviour is thus controlled.Footnote 8 Future AI technology will be extremely good at modelling and predictively controlling human behavior – for example by positive reinforcement and indirect suggestions, making compliance with certain norms or the emergence of ‘motives’ and decision outcomes appear entirely spontaneous and unforced. In combination with Big Nudging and predictive user control, intelligent surveillance technology could also increase global risks by locally helping to stabilise authoritarian regimes in an efficient manner. Again, most of these risks to social cohesion are still very likely unknown at present, and we may discover them only by accident. Policymakers must also understand that any technology that can purposefully optimise the intelligibility of its own action for human users can in principle also optimise for deception. Great care must therefore be taken to avoid accidental or even intended specification of the reward function of any AI in a way that might indirectly damage the common good.

AI technology is currently a private good. It is the duty of democratic political institutions to turn large portions of it into a well-protected common good, something that belongs to all of humanity. In the tragedy of the commons, everyone can often see what is coming, but if mechanisms for effectively counteracting the tragedy are not in place, it will unfold invisibly, for example in decentralised situations. The EU should proactively develop such preventative mechanisms.

Recommendation 10

Within the EU, AI-related productivity gains must be distributed in a socially just manner. Obviously, past practice and global trends point in the opposite direction: such a distribution has (almost) never been achieved, and existing financial incentives directly counteract this recommendation.

Recommendation 11

The EU should carefully research the potential for an unconditional basic income or a negative income tax on its territory.

Recommendation 12

Research programs are needed to investigate the feasibility of accurately timed initiatives for retraining threatened population strata towards creative and social skills.

The next problem domain is difficult to tackle because most of the cutting-edge research in AI has already moved out of publicly funded universities and research institutions. It is in the hands of private corporations, and, therefore, systematically non-transparent.

VI. Research Ethics

One of the most difficult theoretical problems in this area is that of defining the conditions under which it would be rational to relinquish specific AI research pathways altogether (for instance, those involving the emergence of synthetic phenomenology, or plausibly engendering an explosive evolution of autonomously self-optimising systems not reliably aligned with human values). What would be concrete, minimal scenarios justifying a moratorium on certain branches of research? How will democratic institutions deal with deliberately unethical actors in a situation where collective decision-making is unrealistic, and graded, non-global forms of ad hoc cooperation have to be created? Similar issues have already occurred in so-called gain-of-function research involving experimentation aiming at an increase in the transmissibility and/or virulence of pathogens, such as certain highly pathogenic H5N1 influenza virus strains, smallpox, or anthrax. Here, influenza researchers laudably imposed a voluntary and temporary moratorium on themselves.Footnote 9 In principle, this could happen in the AI research community as well. Therefore, the EU should certainly complement its AI charter with a concrete code of ethical conduct for researchers working in funded projects. However, the deeper goal would be to develop a more comprehensive culture of moral sensitivity within the relevant research communities themselves. Rational, evidence-based identification and minimisation of risks (including those pertaining to a distant future) ought to be a part of research itself, and scientists should cultivate a proactive attitude to risk, especially if they are likely to be the first to become aware of novel types of risk through their own work. Communication with the public, if needed, should be self-initiated, in the spirit of taking control and acting in advance of a possible future situation, rather than just reacting to criticism by non-experts with some set of pre-existing, formal rules. As Michael Madary and I note in our ethical code of conduct for virtual reality, which includes recommendations for good scientific practice: ‘Scientists must understand that following a code of ethics is not the same as being ethical. A domain-specific ethics code, however consistent, developed and fine-grained future versions of it may be, can never function as a substitute for ethical reasoning itself.’Footnote 10

Recommendation 13

Any AI Global Charter, or its European precursor, should always be complemented by a concrete Code of Ethical Conduct guiding researchers in their practical day-to-day work.

Recommendation 14

A new generation of applied ethicists specialised in problems of AI technology, autonomous systems, and related fields needs to be trained. The EU should systematically and immediately invest in developing the future expertise needed within the relevant political institutions, and it should do so aiming at an above-average level of academic excellence and professionalism.

VII. Meta-Governance and the Pacing Gap

As briefly pointed out in the introductory paragraph, the accelerating development of AI has perhaps become the paradigmatic example of an extreme mismatch between existing governmental approaches and what would be needed to optimise the risk/benefit ratio in a timely fashion. The growth of AI exemplifies how powerfully time pressure can constrain rational and evidence-based identification, assessment, and management of emerging risks; creation of ethical guidelines; and implementation of an enforceable set of legal rules. There is a ‘pacing problem’: Existing governance structures are simply unable to respond to the challenge fast enough; political oversight has already fallen far behind technological evolution.Footnote 11

I am drawing attention to the current situation not because I want to strike an alarmist tone or to end on a dystopian, pessimistic note. Rather, my point is that the adaptation of governance structures themselves is part of the problem landscape: In order to close or at least minimise the pacing gap, we have to invest resources into changing the structure of governance approaches themselves. ‘Meta-governance’ means just this: A governance of governance equal to facing the risks and potential benefits of an explosive growth in specific sectors of technological development. For example, Wendell Wallach has pointed out that the effective oversight of emerging technologies requires some combination of both hard regulations enforced by government agencies and expanded soft-governance mechanisms.Footnote 12 Gary Marchant and Wendell Wallach have, therefore, proposed so-called Governance Coordination Committees (GCCs), a new type of institution providing a mechanism for coordinating and synchronising what they aptly describe as an ‘explosion of governance strategies, actions, proposals, and institutions’Footnote 13 with existing work in established political institutions. A GCC for AI could act as an ‘issue manager’ for one specific, rapidly emerging technology; as an information clearinghouse, an early warning system, an analysis and monitoring instrument, and an international best-practice evaluator; and as an independent and trusted ‘go-to’ source for ethicists, media, scientists, and interested stakeholders. As Marchant and Wallach write: ‘The influence of a GCC in meeting the critical need for a central coordinating entity will depend on its ability to establish itself as an honest broker that is respected by all relevant stakeholders.’Footnote 14

Many other strategies and governance approaches are, of course, conceivable. However, this is not the place to discuss details. Here, the general point is simply that we can meet the challenge posed by rapid developments in AI and autonomous systems only if we put the question of meta-governance on top of our agenda right from the start. In Europe, the main obstacle to reaching this goal is, of course, ‘soft corruption’ through the Big Tech industrial lobby in Brussels: There are strong financial incentives and major actors involved in keeping the pacing gap as wide open as possible for as long as possible.Footnote 15

Recommendation 15

The EU should invest in researching and developing new governance structures that dramatically increase the speed at which established political institutions can respond to problems and actually enforce new regulations.

VIII. Conclusion

I have proposed that the European Union immediately begin working towards the development of a Global AI Charter, in a multilayered process starting with an AI Charter for the EU itself. To briefly illustrate some of the core issues from my own perspective as a philosopher, I have identified five major thematic domains and provided 15 general recommendations for critical discussion. Obviously, this contribution was not meant as an exclusive or exhaustive list of the relevant issues. On the contrary, at its core, the applied ethics of AI is not a field for grand theories or ideological debates at all, but mostly a problem of sober, rational risk management involving different predictive horizons under great uncertainty. However, an important part of the problem is that we cannot rely on intuitions, because we must satisfy counterintuitive rationality constraints. Therefore, we also need humility, intellectual honesty, and genuine open-mindedness.

Let me end by quoting from a recent policy paper titled ‘Artificial Intelligence: Opportunities and Risks’, published by the Effective Altruism Foundation in Berlin, Germany:

In decision situations where the stakes are very high, the following principles are of crucial importance:

  1. Expensive precautions can be worth the cost even for low-probability risks, provided there is enough to win/lose thereby.

  2. When there is little consensus in an area amongst experts, epistemic modesty is advisable. That is, one should not have too much confidence in the accuracy of one’s own opinion either way.Footnote 16

11 Intellectual Debt: With Great Power Comes Great Ignorance

Jonathan Zittrain
Footnote *

The boxes for prescription drugs typically include an insert of tissue-thin paper folded as tight as origami. For the bored or the preternaturally curious who unfurl it, there’s a sketch of the drug’s molecular structure using a notation that harks back to high school chemistry, along with ‘Precautions’ and ‘Dosage and Administration’ and ‘How Supplied’. And for many drugs, under ‘Clinical Pharmacology’, one finds a sentence like this one for the wakefulness drug Provigil, after the subheading ‘Mechanism of Action’: ‘The mechanism(s) through which modafinil promotes wakefulness is unknown.’Footnote 1 That sentence alone might provoke wakefulness without assistance from the drug. How is it that something could be so studied and scrutinized as to find its way to regulatory approval and widespread prescribing, while we still don’t know how it works?

The answer is that industrial drug discovery has long taken the form of trial-and-error testing of new substances in, say, mice. If the creatures’ condition is improved with no obvious downside, the drug may be suitable for human testing. Such a drug can then move through a trial process and earn approval. In some cases, its success might inspire new research to fill in the blanks on mechanism of action. For example, aspirin was discovered in 1897, and an explanation of how it works followed in 1995.Footnote 2 That, in turn, has spurred some research leads on making better pain relievers through something other than trial and error.

This kind of discovery – answers first, explanations later – accrues what I call ‘intellectual debt’. We gain insight into what works without knowing why it works. We can put that insight to use immediately, and then tell ourselves we’ll figure out the details later. Sometimes we pay off the debt quickly; sometimes, as with aspirin, it takes a century; and sometimes we never pay it off at all.

Be they of money or ideas, loans can offer great leverage. We can get the benefits of money – including use as investment to produce more wealth – before we’ve actually earned it, and we can deploy new ideas before having to plumb them to bedrock truth.

Indebtedness also carries risks. For intellectual debt, these risks can be quite profound, both because we are borrowing as a society, rather than individually, and because new technologies of Artificial Intelligence (AI) – specifically, machine learning – are bringing the old model of drug discovery to a seemingly unlimited number of new areas of inquiry. Humanity’s intellectual credit line is undergoing an extraordinary, unasked-for bump up in its limit.

To understand the problems with intellectual debt despite its boon, it helps first to consider a sibling: engineering’s phenomenon of technical debt.

In the summer of 2012, the Royal Bank of Scotland applied a routine patch to the software it used to process transactions. It went poorly. Millions of customers could not withdraw their money, make payments, or check their balances.Footnote 3 One man was held in jail over a weekend because he couldn’t make bail.Footnote 4 A couple was told to leave their newly-purchased home when their closing payment wasn’t recorded.Footnote 5 A family reported that a hospital threatened to remove life support from their gravely ill daughter after a charity’s transfer of thousands of dollars failed to materialize.Footnote 6 The problem persisted for days as the company tried to figure out what had gone wrong, reconstruct corrupted data, and replay transactions in the right order.

RBS had fallen victim to technical debt. Technical debt arises when systems are tweaked hastily, catering to an immediate need to save money or implement a new feature, while increasing long-term complexity. Anyone who has added a device every so often to a home entertainment system can attest to the way in which a series of seemingly sensible short-term improvements can produce an impenetrable rat’s nest of cables. When something stops working, this technical debt often needs to be paid down as an aggravating lump sum – likely by tearing the components out and rewiring them in a more coherent manner.

Banks are particularly susceptible to technical debt because they computerized early and invested heavily in mainframe systems that were, and are, costly and risky to replace. Their core systems still process trillions of dollars using software written in COBOL, a programming language from the 1960s that’s no longer taught in most universities.Footnote 7 Consulting firms like Accenture have charged banks like the Commonwealth Bank of Australia hundreds of millions of dollars to make a clean break.Footnote 8

Two crashes of Boeing’s new 737 Max 8 jets resulted in the worldwide grounding of its Max fleet. Analysis so far points to a problem of technical debt: The company raced to offer a more efficient jet by substituting in more powerful engines, while avoiding a comprehensive redesign in order to fit the Max into the original 737 genus.Footnote 9 That helped speed up production in a number of ways, including bypassing costly recertifications. But the new engines had a tendency to push the aircraft’s nose up, possibly causing it to stall. The quick patch was to alter the aircraft’s software to automatically push the nose down if it were too far up. Pilots were then expected to know what to do if the software itself acted wrongly for any reason, such as receiving the wrong information about nose position from the plane’s sensors. A small change occasioned another small change which in turn forced another awkward change, pushing an existing system into unpredictable behavior. While the needed overall redesign would have been costly and time consuming, and would have had its own kinks to work out, here the alternative of piling on debt contributed to catastrophe.

Enter a renaissance in long-sleepy areas of AI based on machine learning techniques. Like the complex systems of banks and aircraft makers, these techniques bear a quiet, compounding price that may not seem concerning at first, but will trouble us later. Machine learning has made remarkable strides thanks to theoretical breakthroughs, zippy new hardware, and unprecedented data availability. The distinct promise of machine learning lies in suggesting answers to fuzzy, open-ended questions by identifying patterns and making predictions. It can do this through, say, ‘supervised learning’, by training on a bunch of data associated with already-categorized conclusions. Provide enough labeled pictures of cats and non-cats, and an AI can soon distinguish cats from everything else. Provide enough telemetry about weather conditions over time, along with what notable weather events transpired, and an AI might predict tornadoes and blizzards. And with enough medical data and information about health outcomes, an AI can predict, better than the best physicians can, whether someone newly entering a doctor’s office with pulmonary hypertension will live to see another year.Footnote 10
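
As an illustration of the ‘supervised learning’ pattern just described, consider the following minimal Python sketch; it uses the scikit-learn library, and the feature vectors and labels are invented toy data standing in for images, not a real dataset. A classifier is fit to labeled examples and then asked to predict the label of a new, unlabeled one.

# A minimal supervised-learning sketch with toy data: each row is an invented
# feature vector standing in for an image; 1 means 'cat', 0 means 'not cat'.
from sklearn.ensemble import RandomForestClassifier

features = [
    [0.9, 0.1, 0.8],   # cat-like
    [0.8, 0.2, 0.7],   # cat-like
    [0.1, 0.9, 0.2],   # not cat-like
    [0.2, 0.8, 0.1],   # not cat-like
]
labels = [1, 1, 0, 0]

model = RandomForestClassifier(n_estimators=10, random_state=0)
model.fit(features, labels)                 # learn patterns from labeled examples
print(model.predict([[0.85, 0.15, 0.75]]))  # predict the label of a new example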

Researchers have pointed out thorny problems of technical debt afflicting AI systems that make it seem comparatively easy to find a retiree to decipher a bank system’s COBOL.Footnote 11 They describe how machine learning models become embedded in larger ones and can then be forgotten, even as their original training data goes stale and their accuracy declines.

But machine learning doesn’t merely implicate technical debt. There are some promising approaches to building machine learning systems that, in fact, can offer some explanationsFootnote 12 – sometimes at the cost of accuracy – but they are the rare exceptions. Otherwise, machine learning is fundamentally patterned like drug discovery, and it thus incurs intellectual debt. It stands to produce answers that work, without offering any underlying theory. While machine learning systems can surpass humans at pattern recognition and predictions, they generally cannot explain their answers in human-comprehensible terms. They are statistical correlation engines – they traffic in byzantine patterns with predictive utility, not neat articulations of relationships between cause and effect. Marrying power and inscrutability, they embody Arthur C. Clarke’s observation that any sufficiently advanced technology is indistinguishable from magic.Footnote 13

But here there is no David Copperfield or Ricky Jay who knows the secret behind the trick. No one does. Machine learning at its best gives us answers as succinct and impenetrable as those of a Magic 8-Ball – except they appear to be consistently right. When we accept those answers without independently trying to ascertain the theories that might animate them, we accrue intellectual debt.

Why is unpaid intellectual debt worrisome? There are at least three reasons, in increasing difficulty. First, when we don’t know how something works, it becomes hard to predict how well it will adjust to unusual situations. To be sure, if a system can be trained on a broad enough range of situations, nothing need ever be unusual to it. But malefactors can confront even these supposedly robust systems with specially-crafted inputs so rare that they’d never be encountered in the normal course of events. Those inputs – commonly referred to as ‘adversarial examples’ – can look normal to the human eye, while utterly confusing a trained AI.

For example, computers used to be very bad at recognizing what was in photos. That made categorization of billions of online images for a search engine like Google Images inaccurate. Fifteen years ago the brilliant computer scientist Luis von Ahn solved the problem by finding a way for people, instead of computers, to sort the photos for free. He did this by making the ‘ESP game’.Footnote 14 People were offered an online game in which they were shown images and asked to guess what other people might say was in them. When they were right, they earned points. They couldn’t cash the points in for anything, but thousands of people played the game anyway. And when they did, their successful guesses became the basis for labeling images. Google bought Luis’s game, and the field of human computation – employing human minds as computing power – took off.Footnote 15

Today, Google’s ‘Inception’ architecture – a specially-configured ‘neural network’ machine learning system – has become so good at image recognition that Luis’s game is no longer needed to get people to label photos. We know how Inception was built.Footnote 16 But even its builders don’t know how it gets a given image right. Inception produces answers, but not the kinds of explanations that the players of Luis’s game could offer if they were asked. Inception correctly identifies, say, cats. But it can’t provide an explanation for what distinguishes a picture of a cat from anything else. And in the absence of a theory of cathood, it turns out that Inception can be tricked by images that any human would still immediately recognize as pictures of a cat.

MIT undergraduates were able to digitally alter the pixels of a standard cat photo to leave it visibly unchanged – and yet fool Google’s state-of-the-art image detection engine into determining with ‘hundred percent confidence’ that it was looking at a picture of guacamole.Footnote 17 They then went a step further and painted a 3D-printed turtle in a way that looks entirely turtle-like to a human – and causes Google to classify it at every angle as a rifle.Footnote 18
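
One standard way such adversarial images are constructed is the ‘fast gradient sign method’: nudge every pixel a tiny step in whichever direction most increases the model’s error, so the change stays invisible to a human while the classifier’s answer flips. The following Python sketch, using the PyTorch library, shows the core idea; the model, image tensor, and label are placeholders, not the specific systems described above.

# Fast-gradient-sign sketch: perturb an image slightly so a classifier's loss
# on the true label rises, while the picture still looks unchanged to a person.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, true_label, epsilon=0.01):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), true_label)   # how wrong is the model now?
    loss.backward()                                    # gradient of the loss w.r.t. the pixels
    adversarial = image + epsilon * image.grad.sign()  # tiny step that increases the loss
    return adversarial.clamp(0, 1).detach()            # keep pixel values in a valid range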

A system that had a discernible theory of whiskers and ears for cats, or muzzles for rifles, would be harder to fool – or at least would only be foolable along the lines that humans could be. But systems without theory have any number of unknown gaps in their accuracy. This is not just a quirk of Google’s state-of-the-art image recognizer. In the realm of healthcare, systems trained to classify skin lesions as benign or malignant can be similarly tricked into flipping their previously-accurate judgments with an arbitrary amount of misplaced confidence,Footnote 19 and the prospect of triggering insurance reimbursements for such inaccurate findings could inspire the real world use of these techniques.Footnote 20

The consistent accuracy of a machine learning system does not defend it against these kinds of attacks; rather, it may serve only to lull us into the chicken’s sense that the kindly farmer comes every day with more feed – and will keep doing so. Charmed by its ready-to-hand predictive power, we will embed machine learning – like the asbestos of yesteryear – into larger systems, and forget about it. But it will remain susceptible to hijacking with no easy process for continuing to validate the answers it is producing, especially as we stand down the work of the human judges it will ultimately replace. Intellectual debt thus entails a trade-off that leaves us vulnerable, and it is one that is easy to drift into, just the way technical debt is.

There is a second reason to worry as AI’s intellectual debt piles up: the coming pervasiveness of machine learning models. Taken in isolation, oracular answers can generate consistently helpful results. But these systems won’t stay in isolation. As AI systems gather and ingest the world’s data, they’ll produce data of their own – much of which will be taken up by still other AI systems. The New York Subway system has its own old-fashioned technical debt, as trains run through tunnels and switches whose original installers and maintainers have long moved on. How much more complicated would it be if that system’s activities became synchronized with the train departures at Grand Central Terminal, and then new ‘smart city’ traffic lights throughout the five boroughs?

Even simple interactions can lead to trouble. In 2011, biologist Michael Eisen learned from one of his students that an unremarkable used book – The Making of a Fly: The Genetics of Animal Design – was being offered for sale on Amazon by the lowest-priced seller for just over $1.7 million, plus $3.99 shipping.Footnote 21 The next cheapest copy weighed in at $2.1 million. The respective sellers were well established; each had thousands of positive reviews. When Eisen visited the page the next day, the prices had gone up yet further. As each day brought new increases from the sellers, Eisen performed some simple math: Seller A’s price was consistently 99.83% that of Seller B. And Seller B’s price was, each day, adjusted to be 127.059% that of Seller A.

Eisen figured that Seller A had a copy of the book and, true to the principles of Economics 101, was seeking to offer the lowest price of all sellers by slightly undercutting the next cheapest price. He then surmised that Seller B did not have a copy of the book, so priced it higher – and was then waiting to see if anyone bought the more expensive copy anyway. If so, Seller B could always buy it from Seller A and have it delivered directly to the lazy buyer, pocketing a handsome profit without ever having to package or mail anything personally.

Each seller’s strategy was rational – and, while algorithmic, surely involved no sophisticated machine learning at all. Yet even those straightforward strategies collided to produce manifestly irrational results. The interaction of thousands of machine learning systems in the wild promises to be much more unpredictable.
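
The arithmetic of the two rules explains the runaway prices. A brief Python sketch makes the compounding visible; the starting prices below are hypothetical, while the two multipliers are the ones Eisen reported.

# Each round, Seller A undercuts Seller B slightly and Seller B marks up Seller A.
# Combined, prices grow by about 0.9983 * 1.27059, roughly 1.27, per day.
price_a, price_b = 40.00, 50.00          # hypothetical starting prices in dollars
for day in range(1, 21):
    price_a = 0.99830 * price_b          # A: just below B
    price_b = 1.27059 * price_a          # B: well above A
    print(day, round(price_a, 2), round(price_b, 2))
# At roughly 27 percent growth per day, a $40 listing passes $1 million in well under two months.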

The financial markets provide an obvious breeding ground for this type of problem – and one in which cutting-edge machine learning is already being deployed today. In 2010, a ‘flash crash’ driven by algorithmic trading wiped more than $1 trillion from the major US indices – for thirty-six minutes. Last fall, JPMorgan analyst Marko Kolanovic shared a short analysis within a 168-page market report that suggested it could readily happen again, as more investing becomes passive rather than active, and simply tied to indices.Footnote 22 Unlike technical debt, whose borrowing is typically attributable to a particular entity that is stewarding a system, intellectual debt can accumulate in the interstices where systems bump into each other without formally interconnecting.

A third, and most profound, issue with intellectual debt is the prospect that it represents a larger movement from basic science towards applied technology, one that threatens to either transform academia’s investigative rigors or bypass them entirely.Footnote 23 Unlike, say, particle accelerators, the tools of machine learning are as readily taken up by private industry as by universities. Indeed, the kind and volume of data that will produce useful predictions is more likely to be in Google and Facebook’s possession than at the MIT computer science department or Media Lab. Industry may be perfectly satisfied with answers that lack theory. But when those answers aren’t themselves well publicized, much less the AI tools that produce them, intellectual debt will build in societal pockets far away from the academics who would be most interested in backfilling the theory. And an obsession only with answers – represented by a shift in public fundingFootnote 24 of research to orient around them – can in turn steer even pure academics away from paying off the intellectual debt they might find in the world, and instead towards borrowing more.

One researcher in the abstruse but significant field of ‘protein folding’ recently wrote an essay exploring his ambivalence about what it means to be a scientist after a machine learning model was able to, well, fold proteins in ways that only humans had previously been able to achieve.Footnote 25 He told one publication: ‘We’ve had this tendency as a field to be very obsessed with data collection. The papers that end up in the most prestigious venues tend to be the ones that collect very large data sets. There’s far less prestige associated with conceptual papers or papers that provide some new analytical insight.’Footnote 26

It would be the consummate pedant who refused to take a life-saving drug simply because no one knew how it worked. At any given moment an intellectual loan can genuinely be worth taking out. But as more and more drugs with unknown mechanisms of action proliferate – none of them found in the wild – the number of tests to uncover untoward interactions must scale exponentially. In practice, these interactions are simply found once new drugs find their way to the market and bad things start happening, which partially accounts for the continuing cycle of introduction-and-abandonment of drugs. The proliferation of machine learning models and their fruits makes that problem escape the boundaries of one field.

So, what should we do? First, we need to know our exposure. As machine learning and its detached answers rightfully blossom, we should invest in a collective intellectual debt balance sheet. Debt is not only often tolerable, but often valuable – it leverages what we can do. Just as a little technical debt in a software system can help adapt it to new uses without having to continually rebuild it, a measure of considered intellectual debt can give us a Promethean knowledge boost, and then signpost a research agenda to discover the theory that could follow.

For that, we need the signposts. We must keep track of just where we’ve plugged in the answers of an alien system, rather than tossing crumpled IOUs into a file cabinet that could come due without our noticing. Not all debt is created equal. When the stakes are low, such as the use of machine learning to produce new pizza recipes,Footnote 27 it may make sense to shut up and enjoy the pizza, never fretting about the theory behind what makes peanut butter and banana toppings work so well together on a pie. But when the stakes are higher, such as the use of AI to make health predictions and recommendations, we walk on untested ice when we crib the answers to a test rather than ever learning the underlying material. That it is near-irresistible to use the answers makes pursuing an accompanying theory all the more important.

To achieve a balance sheet for intellectual debt, we must look at current practices around trade secrets and other intellectual property. Just as our patent system requires public disclosure of a novel technique in exchange for protection against its copying by others, or the city building department requires the public availability of renovation plans for private buildings, we should explore academic mirroring and escrow of otherwise-hidden data sets and algorithms that achieve a certain measure of public use. That gives us a hope for building a map of debt – and a rapid way to set a research agenda to pay off debt that appears to have become particularly precarious.

Most important, we should not deceive ourselves into thinking that answers alone are all that matters: Indeed, without theory, they may not be meaningful answers at all. As associational and predictive engines spread and inhale ever more data, the risk of spurious correlations itself skyrockets. Consider one brilliant amateur’s running list of very tight associations found,Footnote 28 not because of any genuine connection, but because with enough data, meaningless, evanescent patterns will emerge. The list includes almost perfect correlations between the divorce rate in Maine and the per capita consumption of margarine, and between US spending on science, space, and technology and suicides by hanging, strangulation, and suffocation. At just the time when statisticians and scientists are moving to de-mechanize the use of statistical correlations,Footnote 29 acknowledging that the production of correlations alone has led us astray, machine learning is enjoying the kind of success the asbestos industry once did, on the basis of exactly those kinds of correlations.
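
The point about scale can be made concrete with a short simulation. The following Python sketch uses arbitrary numbers and no real data: it draws a thousand unrelated random series and simply reports the best-correlated pair; with that many comparisons, a very strong correlation appears by chance alone.

# Spurious correlation by brute force: among many unrelated random series,
# some pair will correlate strongly purely by chance.
import numpy as np

rng = np.random.default_rng(0)
series = rng.normal(size=(1000, 10))     # 1,000 unrelated series, 10 observations each
corr = np.corrcoef(series)               # correlation of every series with every other
np.fill_diagonal(corr, 0)                # ignore each series' perfect self-correlation
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"best spurious pair: series {i} and {j}, r = {corr[i, j]:.2f}")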

Traditional debt shifts control, from borrower to lender, and from future to past, as later decisions are constrained by earlier bargains. Answers without theory – intellectual debt – also will shift control in subtle ways. Networked AI is moving decisions previously left by necessity to, say, a vehicle’s driver into the hands of those tasked with designing autonomous vehicles – hence the ongoing hand-wringing around ethical trolley problems.Footnote 30 Society, not the driver, can now directly decide whom a car that has lost its brakes should most put at risk, including its passengers. And the past can now decide for the future: Cars can be programmed well ahead of time with decisions to be actualized later.

A world of knowledge without understanding becomes, to those of us living in it, a world without discernible cause and effect, and thus a world where we might become dependent on our own digital concierges to tell us what to do and when. It’s a world where home insurance rates could rise or fall by the hour or the minute as new risks are accurately predicted for a given neighborhood or home. The only way to make sense of that world might be to employ our own AIs to try to best position us for success with renter’s insurance AIs (‘today’s a good day to stay home’); hiring AIs (‘consider wearing blue’); or admissions AIs (‘volunteer at an animal shelter instead of a homeless shelter’), each taking and processing inputs in inscrutable ways.

When we have a theory, we get advanced warning of trouble when the theory stops working well. We are called to come up with a new theory. Without the theory, we lose the autonomy that comes from knowing what we don’t know.

Philosopher David Weinberger has raised the fascinating prospect that machine learning could help us tap into natural phenomena that don’t avail themselves of any theory to begin with.Footnote 31 It’s possible that there are complex but – with enough computing power – predictable relationships in the universe that simply cannot be boiled down to an elegant formula like Newton’s account of gravity taught in high schools around the world, or Einstein’s famed insight about matter, energy, and the speed of light. But we are soon to beat nature to that complex punch: with AI, in the name of progress, we will build phenomena that can only be predicted, while never understood, by other AI.

That is, we will build models dependent on, and in turn creating, underlying logic so far beyond our grasp that they defy meaningful discussion and intervention. In a particularly fitting twist, the surgical procedure of electrical deep brain stimulation has advanced through trial-and-error – and is now considered for the augmentation of human thinking, ‘cosmetic neurosurgery’.Footnote 32

Much of the timely criticism of AI has rightly focused on ways in which it can go wrong: it can create or replicate bias; it can make mistakes; it can be put to evil ends. Alongside those worries belongs another one: what happens when AI gets it right, becoming an Oracle to which we cannot help but return and to which we therefore become bonded.

Footnotes

6 Artificial Intelligence and the Past, Present, and Future of Democracy

* I am grateful to audiences at University College London and at the University of Freiburg for helpful discussions during Zoom presentations of this material in June 2021. I also acknowledge helpful comments from Sushma Raman, Derya Honca, and Silja Voeneky.

1 L Winner, ‘Do Artifacts Have Politics?’ (1980) 109 Daedalus 121.

2 For current trends, see P Chojecki, Artificial Intelligence Business: How You Can Profit from AI (2020). For the state of the art, see M Mitchell, Artificial Intelligence: A Guide for Thinking Humans (2019); T Taulli, Artificial Intelligence Basics: A Non-Technical Introduction (2019); S Russell, Human Compatible: Artificial Intelligence and the Problem of Control (2019). See also The Future Today Institute, ‘14th Annual Tech Trends Report’ (2021); for musings on the future of AI, see J Brockman (ed), Possible Minds: Twenty-Five Ways of Looking at AI (2019).

3 For optimism about the occurrence of a singularity, see R Kurzweil, The Singularity Is Near: When Humans Transcend Biology (2006); for pessimism, see EJ Larson, The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do (2021); see also N Bostrom, Superintelligence: Paths, Dangers, Strategies (2016) (hereafter Bostrom, Superintelligence); M Tegmark, Life 3.0: Being Human in the Age of Artificial Intelligence (2017) (hereafter Tegmark, Life 3.0).

4 B Latour, Reassembling the Social: An Introduction to Actor-Network-Theory (2007); B Latour, We Have Never Been Modern (1993). To be sure, and notwithstanding the name of the theory, Latour speaks of actants rather than actors, to emphasize the role of non-human entities.

5 How to understand ‘technology’ is a non-trivial question in the philosophy of technology, as it affects how broad our focus is; see C Mitcham, Thinking through Technology: The Path between Engineering and Philosophy (1994); M Coeckelbergh, Introduction to Philosophy of Technology (2019). For AI one could just think of a set of tools in machine learning; alternatively, one could think of the whole set of devices in which these tools are implemented, and all productive activities that come with procurement and extraction of materials involved; see K Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021) (hereafter Crawford, Atlas of AI). While I mostly sideline these issues, I adopt an understanding of technology from W Bijker, ‘Why and How Technology Matters’ in RE Goodin and C Tilly (eds), The Oxford Handbook of Contextual Political Analysis (2006). At a basic level, ‘technology’ refers to sets of artefacts like computers, cars, or voting machines. At the next level, it also includes human activities, as in ‘the technology of e‐voting’. Thereby it refers also to the making and handling of such machines. Finally, and closest to its Greek origin, ‘technology’ refers to knowledge: It is about what people know as well as what they do with machines and related production processes.

6 For a good overview, see A Gutmann, ‘Democracy’ in RE Goodin, P Pettit, and TW Pogge (eds), A Companion to Contemporary Political Philosophy (2007).

7 D Stasavage, The Decline and Rise of Democracy: A Global History from Antiquity to Today (2020) (hereafter Stasavage, The Decline and Rise of Democracy).

8 To think of Greek democracy as a uniquely located innovation also contradicts the evolutionary story of early bands of humans who succeeded because they were good at cooperating and had brains that had evolved to serve cooperative purposes. See for example, C Boehm, Hierarchy in the Forest. The Evolution of Egalitarian Behavior (1999). To the extent that a demos separate from an aristocracy is the hallmark of democracy (a sensible view given the etymology), many cases covered by Stasavage do not count. Still, his account creates an illuminating contrast with autocracies. Also, in structures where consent is needed, internal dynamics over time typically demand broader inclusion.

9 Stasavage, The Decline and Rise of Democracy (Footnote n 7) 29; J Ober, The Rise and Fall of Classical Greece (2015) 123; J Thorley, Athenian Democracy (2004) 23.

10 O Höffe (ed), Aristotle. Politics (1998). Also see M Risse, ‘The Virtuous Group: Foundations for the ‘Argument from the Wisdom of the Multitude’’ (2001) 31 Canadian Journal of Philosophy 31, 53.

11 For the devices, I draw on J Dibbell, ‘Info Tech of Ancient Democracy’ (Alamut), www.alamut.com/subj/artiface/deadMedia/agoraMuseum.html, which explores museum literature on these artefacts displayed in Athens. See also S Dow, ‘Aristotle, the Kleroteria, and the Courts’ (1939) 50 Harvard Studies in Classical Philology 1. For the mechanics of Athenian democracy, see also MH Hansen, The Athenian Democracy in the Age of Demosthenes: Structure, Principles, and Ideology (1991).

12 Hélène Landemore has argued that modern democracy erred in focusing on representation. Instead, possibilities of small-scale decision making with appropriate connections to government should have been favored – which now is more doable through technology. See H Landemore, ‘Open Democracy and Digital Technologies’ in L Bernholz, H Landmore, and R Reich (eds), Digital Technology and Democratic Theory (2021) 62; H Landemore, Open Democracy: Reinventing Popular Rule for the Twenty-First Century (2020).

13 Howard Zinn has a rather negative take specifically on the founding of the United States that would make it unsurprising that these legitimacy problems arose: ‘Around 1776, certain important people in the English colonies […] found that by creating a nation, a symbol, a legal unity called the United States, they could take over land, profits, and political power from favorites of the British Empire. In the process, they could hold back a number of potential rebellions and create a consensus of popular support for the rule of a new, privileged leadership’; H Zinn, A People’s History of the United States (2015) 59.

14 JE Cooke, The Federalist (1961), 149 (hereafter Cooke, Federalist).

15 JS Young, The Washington Community 1800–1828 (1966) 32.

16 B Bimber, Information and American Democracy: Technology in the Evolution of Political Power (2003) 89. For the argument that, later, postal services were critical to the colonization of the American West (and thus have been thoroughly political throughout their existence), see C Blevins, Paper Trails: The US Post and the Making of the American West (2021).

17 Cooke, Federalist (Footnote n 14) 384.

18 I follow J Lepore, ‘Rock, Paper, Scissors: How We Used to Vote’ (The New Yorker, 13 October 2008). Some of those themes also appear in J Lepore, These Truths: A History of the United States (2019), especially chapter 9. See also RG Saltman, History and Politics of Voting Technology: In Quest of Integrity and Public Confidence. (2006). For the right to vote in the United States, see A Keyssar, The Right to Vote: The Contested History of Democracy in the United States (2009).

19 Stasavage, The Decline and Rise of Democracy (Footnote n 7) 296. For a political-theory idealization of modern democracy in terms of two ‘tracks’, see J Habermas, Between Facts and Norms: Contributions to a Discourse Theory of Law and Democracy (1996) Chapters 7–8. The first track is formal decision making (e.g., parliament, courts, agencies). The other is informal public deliberation, where public opinion is formed.

20 The success of the Chinese model has prompted some philosophers to defend features of that model, also in light of how democracies have suffered from the two legitimacy problems; see DA Bell, The China Model: Political Meritocracy and the Limits of Democracy (2016); T Bai, Against Political Equality: The Confucian Case (2019); J Chan, Confucian Perfectionism: A Political Philosophy for Modern Times (2015). For the view that China’s Communist Party will face a crisis that will force it to let China become democratic, see Ci, Democracy in China: The Coming Crisis (2019). For the argument that different governance models emerge for good reasons at different times, see F Fukuyama, The Origins of Political Order: From Pre-Human Times to the French Revolution (2012); F Fukuyama, Political Order and Political Decay: From the Industrial Revolution to the Globalization of Democracy (2014).

21 YN Harari, ‘Why Technology Favors Tyranny’ (The Atlantic, October 2018) www.theatlantic.com/magazine/archive/2018/10/yuval-noah-harari-technology-tyranny/568330/.

22 FA Hayek, The Road to Serfdom (2007).

23 Similarly, Cohen and Fung – reviewing deterministic viewpoints that see technology clearly favor or disfavor democracy – conclude that ‘the democratic exploitation of technological affordances is vastly more contingent, more difficult, and more dependent on ethical conviction, political engagement, and good design choices than the technological determinists appreciated’ A Fung and J Cohen, ‘Democracy and the Digital Public Sphere’ in L Bernholz, H Landemore, and R Reich (eds), Digital Technology and Democratic Theory (2021) 25 (hereafter Fung and Cohen, ‘Democracy and the Digital Public Sphere’). Or as computer scientist Nigel Shadbolt says, addressing worries that ‘machines might take over’: ‘[T]he problem is not that machines might wrest control of our lives from the elites. The problem is that most of us might never be able to wrest control of the machines from the people who occupy the command posts’, N Shadbolt and R Hampson, The Digital Ape: How to Live (in Peace) with Smart Machines (2019) 63.

24 M Coeckelbergh, Introduction to Philosophy of Technology (2019) Part II.

25 On Mumford, see DL Miller, Lewis Mumford: A Life (1989).

26 L Mumford, Technics and Civilization (2010).

28 Footnote Ibid, 12–18.

29 L Mumford, Myth of the Machine: Technics and Human Development (1967) (hereafter Mumford, Myth of the Machine); L Mumford, Pentagon of Power: The Myth of the Machine (1974) (hereafter Mumford, Pentagon of Power).

30 Mumford, Myth of the Machine (Footnote n 29) chapter 9.

31 The title of chapter 11 of Mumford, Pentagon of Power (Footnote n 29).

32 M Heidegger, The Question Concerning Technology, and Other Essays (1977) 3–35 (hereafter Heidegger, The Question Concerning Technology). On Heidegger, see J Richardson, Heidegger (2012); ME Zimmerman, Heidegger’s Confrontation with Modernity: Technology, Politics, and Art (1990).

33 Heidegger, The Question Concerning Technology (Footnote n 32) 17.

34 Quoted in Young, Heidegger’s Later Philosophy (2001) 46.

35 Heidegger, The Question Concerning Technology (Footnote n 32) 16.

38 Quoted in J Young, Heidegger’s Later Philosophy (2001) 50.

39 HL Dreyfus, On the Internet (2008).

40 W Benjamin, The Work of Art in the Age of Its Technological Reproducibility, and Other Writings on Media (2008).

41 H Marcuse, One-Dimensional Man: Studies in the Ideology of Advanced Industrial Society (1991) 1 (hereafter Marcuse, One-Dimensional Man).

42 Marcuse, One-Dimensional Man (Footnote n 41) 3.

44 J Ellul, The Technological Society (1964) (hereafter Ellul, The Technological Society). For recent discussions, see JP Greenman, Understanding Jacques Ellul (2012); HM Jerónimo, JL Garcia, and C Mitcham, Jacques Ellul and the Technological Society in the 21st Century (2013).

45 Ellul, The Technological Society (Footnote n 44) 133.

46 Ellul, The Technological Society (Footnote n 44) 258.

48 J Susskind, Future Politics: Living Together in a World Transformed by Tech (2018) Chapter 13; YN Harari, Homo Deus: A Brief History of Tomorrow (2018) Chapter 9.

49 J Lovelock, Novacene: The Coming Age of Hyperintelligence (2020).

50 See T Ord, The Precipice: Existential Risk and the Future of Humanity (2021) Chapter 5.

51 For a discussion of majority rule in the context of competing methods that process information differently, see also M Risse, ‘Arguing for Majority Rule’ (2004) 12 Journal of Political Philosophy 41.

52 RH Thaler and CR Sunstein, Nudge: Improving Decisions about Health, Wealth, and Happiness (2009).

53 See e.g., HE Gardner, Frames of Mind: The Theory of Multiple Intelligences (2011).

54 On this, see also D Helbing and others, ‘Will Democracy Survive Big Data and Artificial Intelligence?’ (Scientific American, 25 February 2017) www.scientificamerican.com/article/will-democracy-survive-big-data-and-artificial-intelligence/.

55 For a classic study of the emergence of public spheres, see J Habermas, The Structural Transformation of the Public Sphere: An Inquiry into a Category of Bourgeois Society (1991). For how information spread in different periods, see A Blair and others, Information: A Historical Companion (2021). For the development of media in recent centuries, see P Starr, The Creation of the Media: Political Origins of Modern Communications (2005).

56 This term has been attributed to Edmund Burke, and thus goes back to a time decades before media played that kind of role in the American version of modern democracy, see J Schultz, Reviving the Fourth Estate: Democracy, Accountability and the Media (1998) 49.

57 M McLuhan, Understanding Media: The Extensions of Man (1994); FA Kittler, Gramophone, Film, Typewriter (1999).

58 For the emergence of digital media and their role for democracy, see Fung and Cohen, ‘Democracy and the Digital Public Sphere’ (Footnote n 23). For the formulation I attributed to Fung, see for instance this podcast PolicyCast, ‘211 Post-expert Democracy: Why Nobody Trusts Elites Anymore’ (Harvard Kennedy School, 3 February 2020) www.hks.harvard.edu/more/policycast/post-expert-democracy-why-nobody-trusts-elites-anymore.

59 A Jungherr, G Rivero and D Gayo-Avello, Retooling Politics: How Digital Media Are Shaping Democracy (2020) Chapter 9; C Véliz, Privacy Is Power: Why and How You Should Take Back Control of Your Data (2021) Chapter 3 (hereafter Véliz, Privacy Is Power).

60 J Brennan, Against Democracy (2017); B Caplan, The Myth of the Rational Voter: Why Democracies Choose Bad Policies (2nd ed., 2008); I Somin, Democracy and Political Ignorance: Why Smaller Government Is Smarter (2nd ed., 2016).

61 CH Achen and LM Bartels, Democracy for Realists: Why Elections Do Not Produce Responsive Government (2017).

62 M Broussard, Artificial Unintelligence: How Computers Misunderstand the World (2019) (hereafter Broussard, Artificial Unintelligence).

63 R Rini, ‘Deepfakes and the Epistemic Backstop’ (2020) 20 Philosophers’ Imprint 1. See also C Kerner and M Risse, ‘Beyond Porn and Discreditation: Promises and Perils of Deepfake Technology in Digital Lifeworlds’ (2021) 8(1) Moral Philosophy and Politics 81.

64 For E Zuckerman’s work, see E Zuckerman, ‘What Is Digital Public Infrastructure’ (Center for Journalism & Liberty, 17 November 2020) www.journalismliberty.org/publications/what-is-digital-public-infrastructure#_edn3; and E Zuckerman, ‘The Case of Digital Public Infrastructure’ (Knight First Amendment Institute, 17 January 2020) https://knightcolumbia.org/content/the-case-for-digital-public-infrastructure; see also E Pariser and D Allen, ‘To Thrive Our Democracy Needs Digital Public Infrastructure’(Politico, 1 May 2021) www.politico.com/news/agenda/2021/01/05/to-thrive-our-democracy-needs-digital-public-infrastructure-455061.

65 S Zuboff, ‘The Coup We Are Not Talking About’ (New York Times, 29 January 2021) www.nytimes.com/2021/01/29/opinion/sunday/facebook-surveillance-society-technology.html; M Risse, ‘The Fourth Generation of Human Rights: Epistemic Rights in Digital Lifeworlds’ (2021) Moral Philosophy and Politics https://doi.org/10.1515/mopp-2020-0039.

66 On Taiwan, see A Leonard, ‘How Taiwan’s Unlikely Digital Minister Hacked the Pandemic’ (Wired, 23 July 2020) www.wired.com/story/how-taiwans-unlikely-digital-minister-hacked-the-pandemic/.

67 For a recent take, see J Reilly, M Lyu, and M Robertson ‘China’s Social Credit System: Speculation vs. Reality’ (The Diplomat, 30 March 2021) https://thediplomat.com/2021/03/chinas-social-credit-system-speculation-vs-reality/. See also B Dickson, The Party and the People: Chinese Politics in the 21st Century (2021).

68 RJ Deibert, Black Code: Surveillance, Privacy, and the Dark Side of the Internet (2013); RJ Deibert, Reset: Reclaiming the Internet for Civil Society (2020).

69 Fung and Cohen, ‘Democracy and the Digital Public Sphere’ (Footnote n 23).

70 For the speech, see DD Eisenhower, ‘Farewell Address’ (1961) www.ourdocuments.gov/doc.php?flash=false&doc=90&page=transcript.

71 Crawford, Atlas of AI (Footnote n 5) 184; Obviously in 1961, AI is not what Eisenhower had in mind.

72 Crawford, Atlas of AI (Footnote n 5) Chapter 6. See also Véliz, Privacy Is Power (Footnote n 59).

73 V Eubanks, Automating Inequality: How High-Tech Tools Profile, Police, and Punish the Poor (2018).

74 F Pasquale, The Black Box Society: The Secret Algorithms That Control Money and Information (2016). See also Broussard, Artificial Unintelligence (Footnote n 62); C O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy (2017).

75 R Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (2019); R Benjamin, Captivating Technology: Race, Carceral Technoscience, and Liberatory Imagination in Everyday Life (2019); SU Noble, Algorithms of Oppression: How Search Engines Reinforce Racism (2018). See also C D’Ignazio and LF Klein, Data Feminism (2020); S Costanza-Chock, Design Justice: Community-Led Practices to Build the Worlds We Need (2020).

76 D Haraway, Simians, Cyborgs, and Women: The Reinvention of Nature (2015) 149–182.

77 L Bernholz, H Landemore, and R Reich, Digital Technology and Democratic Theory (2021).

78 P Preville, ‘How Barcelona Is Leading a New Era of Digital Democracy’ (Medium, 13 November 2019) https://medium.com/sidewalk-talk/how-barcelona-is-leading-a-new-era-of-digital-democracy-4a033a98cf32.

79 IMD, ‘Smart City Index’ (IMD, 2020) www.imd.org/smart-city-observatory/smart-city-index/.

80 E Higgins, We Are Bellingcat: Global Crime, Online Sleuths, and the Bold Future of News (2021). See also M Webb, Coding Democracy: How Hackers Are Disrupting Power, Surveillance, and Authoritarianism (2020).

81 LM Bartels, Unequal Democracy: The Political Economy of the New Gilded Age (2018); M Gilens, Affluence and Influence: Economic Inequality and Political Power in America (2014).

82 On AI and citizen services, see H Mehr, ‘Artificial Intelligence for Citizen Services and Government’ (Harvard Ash Center Technology & Democracy Fellow, August 2017) https://ash.harvard.edu/files/ash/files/artificial_intelligence_for_citizen_services.pdf.

83 T Piketty, Capital in the Twenty First Century (2014).

84 On these topics, see e.g., D Susskind, A World Without Work: Technology, Automation, and How We Should Respond (2020); DM West, The Future of Work: Robots, AI, and Automation (2019).

85 On this, also see M Risse, ‘Data as Collectively Generated Patterns: Making Sense of Data Ownership’ (Carr Center for Human Rights Policy, 4 April 2021) https://carrcenter.hks.harvard.edu/publications/data-collectively-generated-patterns-making-sense-data-ownership.

86 S Zuboff, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019). See also Véliz, Privacy Is Power (Footnote n 59).

87 H Arendt, The Origins of Totalitarianism (1973).

88 A Webb, The Big Nine: How the Tech Titans and Their Thinking Machines Could Warp Humanity (2020).

89 For that term, see Tegmark, Life 3.0 (Footnote n 3).

90 Tegmark, Life 3.0 (Footnote n 3) Chapter 5.

91 N Wiener, God and Golem, Inc.; a Comment on Certain Points Where Cybernetics Impinges on Religion (1964) 69.

92 D Livingstone, Transhumanism: The History of a Dangerous Idea (2015); M More and N Vita-More, The Transhumanist Reader: Classical and Contemporary Essays on the Science, Technology, and Philosophy of the Human Future (2013); Bostrom, Superintelligence (Footnote n 3).

93 For some aspects of this, NC Carr, The Shallows: How the Internet Is Changing the Way We Think, Read and Remember (2011); S Turkle, Alone Together: Why We Expect More from Technology and Less from Each Other (2017). But the constitutive role of technology on human life is a central theme in the philosophy of technology and adjacent areas generally.

94 H Arendt, The Human Condition (1958) 324.

7 The New Regulation of the European Union on Artificial Intelligence: Fuzzy Ethics Diffuse into Domestic Law and Sideline International Law

1 See the treatment accorded to the ICJ, Legal Consequences of the Separation of the Chagos Archipelago from Mauritius in 1965, Advisory Opinion [2019] ICJ Rep 95, in Dispute concerning Delimitation of the Maritime Boundary between Mauritius and Maldives in the Indian Ocean, no. 28 (Mauritius/Maldives) (Preliminary Objections) ITLOS (2021) para. 203; see our discussion in T Burri and J Trinidad, ‘Introductory note’ (2021) 60(6) International Legal Materials 969–1037.

2 Article 32 Vienna Convention on the Law of Treaties, 1155 UNTS 331 (engl.) 27 March 1969.

3 J Koven Levit, ‘Bottom-Up International Lawmaking: Reflections on the New Haven School of International Law’ (2007) 32 The Yale Journal of International Law 393–420.

4 See A Winfield, ‘An Updated Round Up of Ethical Principles of Robotics and AI’ (Alan Winfield’s Web Log, 18 April 2019) https://alanwinfield.blogspot.com/2019/04/an-updated-round-up-of-ethical.html: ‘1. A robot may not injure a human being or, through inaction, allow a human being to come to harm. 2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law. 3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.’ The work of the present author has benefitted tremendously from Winfield’s collation of ethics principles on AI in his blog at a time when it was not yet easy to assemble the various sets of ethics principles. For the primary source of Asimov’s principles, see e.g. I Asimov, The Caves of Steel (1954) and I Asimov, The Naked Sun (1957); for a discussion of Asimov’s principles about fifty years after Asimov had begun writing about them, see RR Murphy and DD Woods, ‘Beyond Asimov: The Three Laws of Responsible Robotics’ (2009) July/August 2009 IEEE Intelligent Systems 14–20.

5 Drafted in the context of the Engineering and Physical Sciences Research Council and the Arts and Humanities Research Council (United Kingdom) in 2010, but published only in M Boden and others, ‘Principles of Robotics: Regulating Robots in the Real World’ (2017) 29 Connection Science (2) 124–129; see also A Winfield, ‘Roboethics – for Humans’ (2011) 17 May 2011 The New Scientist 32–33. Before that, ethicists and philosophers had already discussed robotics from various perspectives, see e.g. R Sparrow, ‘Killer Robots’ (2007) 24 Journal of Applied Philosophy (1) 62–77; RC Arkin, Governing Lethal Behavior in Autonomous Robots (2009); PW Singer, Wired for War: The Robotics Revolution and Conflict in the Twenty-First Century (2009); W Wallach and C Allen, Moral Machines: Teaching Robots Right from Wrong (2009).

6 See E Horvitz, One Hundred Year Study on Artificial Intelligence: Reflections and Framing (2014) https://ai100.stanford.edu/reflections-and-framing (hereafter Horvitz, ‘One Hundred Year Study’) also for the roots of this study (on p 1).

7 Future of Life Institute, ‘An Open Letter: Research Priorities for Robust and Beneficial Artificial Intelligence’ (Future of Life Institute) http://futureoflife.org/ai-open-letter/ (hereafter ‘Open Letter’); another important moment before the Open Letter was a newspaper article: S Hawking and others, ‘Transcendence Looks at the Implications of Artificial Intelligence – But Are We Taking AI Seriously Enough?’ The Independent (1 May 2014) www.independent.co.uk/news/science/stephen-hawking-transcendence-looks-at-the-implications-of-artificial-intelligence-but-are-we-taking-9313474.html.

8 Several research groups had addressed the law and ethics of robots in the meantime: see C Leroux and others, ‘Suggestion for a Green Paper on Legal Issues in Robotics’ (31 December 2012) www.researchgate.net/publication/310167745_A_green_paper_on_legal_issues_in_robotics; E Palmerini and others, ‘Guidelines on Regulating Robotics’ (Robo Law, 22 September 2014) www.robolaw.eu/RoboLaw_files/documents/robolaw_d6.2_guidelinesregulatingrobotics_20140922.pdf; other authors previously had prepared the ground, notably P Lin, K Abney, and GA Bekey (eds), Robot Ethics: The Ethical and Social Implications of Robotics (2012); U Pagallo, The Law of Robots: Crimes, Contracts, Torts (2013); N Bostrom, Superintelligence: Paths, Dangers, Strategies (2014) (hereafter Bostrom, ‘Superintelligence’); JF Weaver, Robots Are People Too: How Siri, Google Car, and Artificial Intelligence Will Force Us to Change Our Laws (2014); M Anderson and SL Anderson, ‘Towards Ensuring Ethical Behaviour from Autonomous Systems: A Case-Supported Principle-Based Paradigm’ (2015) 42 Industrial Robot: An International Journal (4) 324–331.

9 In the 100 Year Study, law and ethics figured prominently as a research topic (Horvitz, ‘One Hundred Year Study’ (Footnote n 6), topics 6 and 7), while the Open Letter (Footnote n 7) included a research agenda, parts of which were ‘law’ and ‘ethics’.

10 The first version of ‘Ethically Aligned Design’ was made public in 2016: Institute of Electrical and Electronics Engineers (IEEE), ‘Ethically Aligned Design, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems’ (13 December 2016) http://standards.ieee.org/develop/indconn/ec/ead_v1.pdf; meanwhile, a first edition has become available: Institute of Electrical and Electronics Engineers (IEEE), ‘Ethically Aligned Design, The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems’ (2019) https://ethicsinaction.ieee.org; in the following, reference is made to the latter, the first edition (hereafter, IEEE, ‘Ethically Aligned Design’). It contains a section on high-level ‘general principles’ which address human rights, well-being, data agency, effectiveness, transparency, accountability, awareness of misuse, and competence. Other sections of the Charter discuss classical ethics, well-being, affective computing, personal data and individual agency, methods to guide ethical research and design, sustainable development, embedding values, policy, and law. The last section on the ‘law’ focuses on fostering trust in autonomous and intelligent systems and the legal status of such systems. For full disclosure, the present author co-authored the section on law of Ethically Aligned Design.

11 Future of Life Institute, ‘Asilomar AI Principles’ (Future of Life Institute, 2017) https://futureoflife.org/ai-principles/ (hereafter Future of Life Institute, ‘Asilomar AI Principles’). The Asilomar principles address AI under three themes, namely ‘research’, ‘ethics and values’, and ‘longer term issues’. Several sub-topics are grouped under each theme, viz. goal, funding, science-policy link, culture, race avoidance (under ‘research’); safety, failure transparency, judicial transparency, responsibility, value alignment, human values, personal privacy, liberty and privacy, shared benefit, shared prosperity, human control, non-subversion, arms race (under ‘ethics and values’); and capability caution, importance, risks, recursive self-improvement, and common good (under ‘longer term issues’).

12 Association for Computing Machinery US Public Policy Council (USACM), ‘Statement on Algorithmic Transparency and Accountability’ (USACM, 12 January 2017) www.acm.org/binaries/content/assets/public-policy/2017_usacm_statement_algorithms.pdf (hereafter USACM, ‘Algorithmic Transparency’); the principles are part of a broader code of ethics: Association for Computing Machinery Committee on Professional Ethics, ‘ACM Code of Ethics and Professional Conduct’ (ACM Ethics, 22 June 2018) https://ethics.acm.org. Summed up, the principles are the following: 1. Be aware of bias; 2. Enable questioning and redress; 3. If you use algorithms, you are responsible even if not able to explain; 4. Produce explanations; 5. Describe the data collection process, while access may be restricted; 6. Record to enable audits; 7. Rigorously validate your model and make the test public. Compare also with the principles a professional organization outside of the anglophone sphere published relatively early: Japanese Society for Artificial Intelligence, ‘The Japanese Society for Artificial Intelligence Ethical Guidelines’ (2017) http://ai-elsi.org/wp-content/uploads/2017/05/JSAI-Ethical-Guidelines-1.pdf (hereafter Japanese Society for AI, ‘Guidelines’) in summary: 1. Contribute to humanity, respect human rights and diversity, eliminate threats to safety; 2. Abide by the law, do not use AI to harm others, directly or indirectly; 3. Respect privacy; 4. AI as a resource is to be used fairly and equally by humanity, avoid discrimination and inequality; 5. Be sure to maintain AI safe and under control; provide users with appropriate and sufficient information; 6. Act with integrity and so that society can trust you; 7. Verify performance and impact of AI, warn if necessary, prevent misuse; whistle blowers shall not be punished; 8. Improve society’s understanding of AI, maintain consistent and effective communication; 9. Have AI abide by these guidelines in order for it to become a quasi-member of society. Note, in particular, the Japanese twist of the last guideline.

13 See by way of example V Mnih and others, ‘Human-Level Control through Deep Reinforcement Learning’ (2015) 518 Nature (26 February 2015) 529–533; see also B Schölkopf, ‘Learning to See and Act’ (2015) 518 Nature (26 February 2015) 486–487; and D Silver and others, ‘Mastering the Game of Go with Deep Neural Networks and Tree Search’ (2016) 529 Nature (28 January 2016) 484–489. The DARPA Challenges also significantly pushed research forward, see T Burri, ‘The Politics of Robot Autonomy’ (2016) 7 European Journal of Risk Regulation (2) 341–360. In robotics, a certain amount of hysteria has been created by Boston Dynamics’ videos. An early example is the video about the Atlas robot: Boston Dynamics, ‘Atlas, the Next Generation’ (YouTube, 23 February 2016) www.youtube.com/watch?v=rVlhMGQgDkY&app=desktop. But it is not all hype and hysteria, see already GA Pratt, ‘Is a Cambrian Explosion Coming for Robotics?’ (2015) 29 Journal of Economic Perspectives (3) (Summer 2015) 51–60.

14 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be deemed to be Excessively Injurious or to have Indiscriminate Effects (with Protocols I, II, and III), 1342 UNTS 163 (English), 10 October 1980.

15 This discussion was spurred on by a report: Human Rights Watch and Harvard International Human Rights Clinic, ‘Losing Humanity: The Case against Killer Robots’ (HRW, 19 November 2012) www.hrw.org/report/2012/11/19/losing-humanity/case-against-killer-robots, and an international civil society campaign, the Campaign to Stop Killer Robots (see www.stopkillerrobots.org), in which from the beginning researchers such as P Asaro, R Sparrow, N Sharkey, and others were involved; the International Committee for Robot Arms Control (ICRAC, see www.icrac.net) also campaigned against Killer Robots. Much of the influential legal work within the context of the Campaign goes back to B Docherty, e.g. the report just mentioned or B Docherty, ‘Mind the Gap: The Lack of Accountability for Killer Robots’ (HRW, 9 April 2015) www.hrw.org/report/2015/04/09/mind-gap/lack-accountability-killer-robots; B Docherty, ‘Precedent for Preemption: The Ban on Blinding Lasers as a Model for a Killer Robots Prohibition’ (HRW, 8 November 2015) www.hrw.org/news/2015/11/08/precedent-preemption-ban-blinding-lasers-model-killer-robots-prohibition. The issue of autonomous weapons systems had previously been addressed by Philip Alston: UNCHR, ‘Interim Report by UN Special Rapporteur on extrajudicial, summary or arbitrary executions, Philip Alston’ (2010) UN Doc A/65/321; see also P Alston, ‘Lethal Robotic Technologies: The Implications for Human Rights and International Humanitarian Law’ (2011) 21 Journal of Law, Information and Science 35-60; and later by Christof Heyns: UNCHR, ‘Report of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, (2013) UN Doc A/HRC/23/47; for scholarship, see A Leveringhaus, Ethics and Autonomous Weapons (2016).

16 The discussion of cyber warfare took a different path. See most recently, D Trusilo and T Burri, ‘Ethical Artificial Intelligence: An Approach to Evaluating Disembodied Autonomous Systems’ in R Liivoja and A Väljataga (eds), Autonomous Cyber Capabilities under International Law (2021) 51–66 (hereafter Trusilo and Burri, ‘Ethical AI’).

17 For a discussion of embodiment from a philosophical perspective, see C Durt, ‘The Computation of Bodily, Embodied, and Virtual Reality’ (2020) 1 Phänomenologische Forschungen 25–39 www.durt.de/publications/bodily-embodied-and-virtual-reality/.

18 Defence has meanwhile gone beyond autonomy to consider also AI. Contrast the early US Department of Defence, ‘Directive on Autonomy in Weapon Systems’ (DoD, 21 November 2012, amended 8 May 2017) www.esd.whs.mil/Portals/54/Documents/DD/issuances/dodd/300009p.pdf with the recent Defense Innovation Board, ‘AI Principles: Recommendations on the Ethical Use of Artificial Intelligence by the Department of Defense’ (DoD, 24 February 2020) 12 https://media.defense.gov/2019/Oct/31/2002204458/-1/-1/0/DIB_AI_PRINCIPLES_PRIMARY_DOCUMENT.PDF : ‘The important thing to consider going forward is that however DoD integrates AI into autonomous systems, whether or not they are weapons systems, sharp ethical and technical distinctions between AI and autonomy may begin to blur, and the Department should consider the interaction between AI and autonomy to ensure that legal and ethical dimensions are considered and addressed.’ The Report addresses AI within the Department of Defense in general, not just in combat. It posits five key aspects which should inform the Department of Defense’s engagement with AI: Responsible, equitable, traceable, reliable, governable. (‘Equitable’ refers to what is in other documents often called ‘fairness’ or ‘avoidance of bias’, terms which, according to the report, may be misleading in defence, see p 31). See also HM Roff, ‘Artificial Intelligence: Power to the People’ (2019) 33 Ethics and International Affairs 127, 128–133, for a distinction between automation, autonomy, and AI.

19 The output consists of eleven high-level principles on autonomous weapons systems: Alliance for Multilateralism on Lethal Autonomous Weapons Systems (LAWS), ‘Eleven Guiding Principles on Lethal Autonomous Weapons Systems’ (Alliance for Multilateralism, 2020) https://multilateralism.org/wp-content/uploads/2020/04/declaration-on-lethal-autonomous-weapons-systems-laws.pdf (hereafter Eleven Guiding Principles on Lethal Autonomous Weapons); for the positions of states within CCW and the status quo of the discussions, see D Lewis, ‘An Enduring Impasse on Autonomous Weapons’ (Just Security, 28 September 2020) www.justsecurity.org/72610/an-enduring-impasse-on-autonomous-weapons/; for a thorough discussion of autonomous weapons systems and AI, see AL Schuller, ‘At the Crossroads of Control: The Intersection of Artificial Intelligence and Autonomous Weapons Systems with International Humanitarian Law’ (2017) 8 Harvard National Security Journal (2) 379–425; see also SS Hua, ‘Machine Learning Weapons and International Humanitarian Law: Rethinking Meaningful Human Control’ (2019) 51 Georgetown Journal of International Law 117–146.

20 See JF Bonnefon, A Shariff, and I Rahwan, ‘The Social Dilemma of Autonomous Vehicles’ (2016) 352 Science (6293) 1573–1576; E Awad and others, ‘The Moral Machine Experiment’ (2018) 563 Nature 59–64.

21 Note in particular, Ethics Commission of the Federal Ministry of Transport and Digital Infrastructure, ‘Automated and Connected Driving’ (BMVI, June 2017) www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.pdf?__blob=publicationFile. This report pinpointed 20 detailed principles. The principles stated clearly that autonomous driving was ethically justified under certain conditions, even if the result of autonomous driving was that persons may occasionally be killed (see principles 2, 8, and 9). See also A von Ungern-Sternberg, ‘Autonomous Driving: Regulatory Challenges Raised by Artificial Decision-Making and Tragic Choices’ in W Barfield and U Pagallo (eds), Research Handbook on the Law of Artificial Intelligence (2017) 251–278.

22 The Future Society in Policy Research, The Law & Society Initiative, ‘Principles for the Governance of AI’ (The Future Society, 15 July 2017) https://thefuturesociety.org/the-law-society-initiative/> (under ‘learn more’); University of Montreal, ‘Montreal Declaration for a Responsible Development of Artificial Intelligence’ (Montréal Declaration Responsible AI_, 2018) https://docs.wixstatic.com/ugd/ebc3a3_c5c1c196fc164756afb92466c081d7ae.pdf (hereafter ‘Montreal Declaration for AI’) was one of the first documents to examine the societal implications of AI, putting forward a very broad and largely aspirational set of principles, the gist being: 1. Increase well-being (with 5 sub-principles); 2. Respect people’s autonomy and increase their control over lives (6 sub-principles); 3. Protect privacy and intimacy (8); 4. Maintain bonds of solidarity between people and generations (6); 5. Democratic participation in AI: it must be intelligible, justifiable, and accessible, while subject to democratic scrutiny, debate, and control (10); 6. Contribute to just and equitable society (7); 7. Maintain diversity, do not restrict choice and experience (6); 8. Prudence: exercise caution in development, anticipate adverse consequences (5); 9. Do not lessen human responsibility (5); 10. Ensure sustainability of planet (4). Compare with: Amnesty International and Access Now, ‘The Toronto Declaration: Protecting the Right to Equality and Non-Discrimination in Machine Learning Systems’ (16 May 2018) www.accessnow.org/cms/assets/uploads/2018/08/The-Toronto-Declaration_ENG_08-2018.pdf (hereafter ‘Toronto Declaration’) which, although put together by non-governmental organizations, is more in the nature of an academic legal text and not easily summarized. It emphasizes the duties of states to identify risks, ensure transparency and accountability, enforce oversight, promote equality, and hold the private sector to account. Similar duties are incumbent on private actors, though they are less firm. The right to effective remedy is also emphasized. Compare also with The Public Voice, ‘Universal Guidelines for Artificial Intelligence’ (The Public Voice, 23 October 2018) https://thepublicvoice.org/ai-universal-guidelines/.

23 Women Leading in AI, ‘10 Principles of Responsible AI’ (Women Leading in AI, 2019) https://womenleadinginai.org/wp-content/uploads/2019/02/WLiAI-Report-2019.pdf. This initiative did not look at AI strictly from a gender perspective, but from a broader societal perspective. The 10 principles can be summarized as follows: 1. Mirror the regulatory approach for the pharmaceutical sector; 2. Establish an AI regulatory body with powers inter alia to: audit algorithms, investigate complaints, issue fines for breaches of the General Data Protection Regulation, the law and equality, and ensure algorithms are explainable; 3. Introduce ‘Certificate of Fairness for AI systems’; 4. Require ‘Algorithm Impact Assessment’ when AI is employed with impact on individuals; 5. In public sector, inform when decisions are made by machines; 6. Reduce liability when ‘Certificate of Fairness’ is given; 7. Compel companies to bring their workforce with them; 8. Establish digital skills funds to be fed by companies; 9. Carry out skills audit to identify relevant skills for transition; 10. Establish education and training programme, especially to encourage women and underrepresented sections of society.

24 UNI Global Union, ‘10 Principles for Ethical AI, UNI Global Union Future World of Work’ (The Future World of Work, 2017) www.thefutureworldofwork.org/media/35420/uni_ethical_ai.pdf; summarized: 1. Transparency; 2. Equip with black box; 3. Serve people/planet; 4. Humans must be in command, incl. responsibility, safety, compliance with privacy and law; 5. Avoid bias in AI; 6. Share benefits; 7. Just transition for workforce and support for human rights; 8. Establish global multi-stakeholder governance mechanism for work and AI; 9. Ban responsibility of robots; 10. Ban autonomous weapons.

25 See https://algorithmwatch.org/en/transparency/; AlgorithmWatch provides a useful database bringing together ethical guidelines on AI: https://inventory.algorithmwatch.org/. In 2017, the AI Now Institute at New York University, which conducts research on societal aspects of AI, was also established (see www.ainowinstitute.org). Various ‘research agendas’ have by now been published: J Whittlestone and others, ‘Ethical and Societal Implications of Algorithms, Data, and Artificial Intelligence: A Roadmap for Research’ (Nuffield Foundation, 2019) www.nuffieldfoundation.org/sites/default/files/files/Ethical-and-Societal-Implications-of-Data-and-AI-report-Nuffield-Foundat.pdf (with a useful literature review in appendix 1 and a review of select ethics principles in appendix 2); A Dafoe, ‘AI Governance: A Research Agenda’ (Future of Humanity Institute, 2018) www.fhi.ox.ac.uk/wp-content/uploads/GovAIAgenda.pdf, which broadly focuses on economics and political science research. Compare with OpenAI which is on a ‘mission’ to ensure that general AI will be beneficial. For this purpose, it conducts research on AI based on its own ethical Charter: OpenAI, OpenAI Charter (Open AI, 9 April 2018) https://openai.com/charter/ (hereafter OpenAI Charter); in brief, the principles of the Charter are: Ensure general AI benefits all, avoid uses that harm or concentrate power; primary duty to humanity, minimize conflicts of interest that compromise broad benefit; do the research that makes general AI safe; if late-stage development of general AI becomes a competitive race without time for precaution, stop competing and assist the other project; leadership in technology, policy, and safety advocacy is not enough; AI will impact before general AI, so lead there too; cooperate actively, create global community; provide public goods that help society navigate towards general AI; for now, publish most AI research, but later probably not for safety reasons.

26 See Intel, ‘AI Public Policy Opportunity’ (Intel, 2017) https://blogs.intel.com/policy/files/2017/10/Intel-Artificial-Intelligence-Public-Policy-White-Paper-2017.pdf summed up: 1. Foster innovation and open development; 2. Create new human employment and protect people’s welfare; 3. Liberate data responsibly; 4. Rethink privacy; 5. Require accountability for ethical design and implementation. Further examples include Sage, ‘The Ethics of Code: Developing AI for Business with Five Core Principles’ (Sage, 2017) www.sage.com/~/media/group/files/business-builders/business-builders-ethics-of-code.pdf?la=en&hash=CB4DF0EB6CCB15F55E72EBB3CD5D526B (hereafter Sage, ‘The Ethics of Code’), in brief: 1. Reflect diversity, avoid bias; 2. Accountable AI, but also accountable users; AI must not be too clever to be held accountable; 3. Reward AI for aligning with human values through reinforcement learning; 4. AI should level playing field: democratize access, especially for disabled persons; 5. AI replaces, but must also create work: humans should focus on what they are good at; Google, ‘Artificial Intelligence at Google: Our Principles’ (Google, 2018) https://ai.google/principles/ (hereafter Google, ‘AI Principles’); in brief: 1. Be socially beneficial and thoughtfully evaluate when to make technology available on non-commercial basis; 2. Avoid bias; 3. Build and test for safety; 4. Be accountable to people, i.e. offer feedback, explanation, and appeal; subject AI to human direction and control; 5. Incorporate privacy design principles; 6. Uphold high standard of scientific excellence; 7. Use of AI must accord with these principles; 8. No-go areas: technology likely to cause overall harm; weapons; technology for surveillance violating internationally accepted norms; technology whose purpose violates international law and human rights – though this ‘point 8’ may evolve; IBM, ‘Everyday Ethics for Artificial Intelligence’ (IBM, 2018) www.ibm.com/watson/assets/duo/pdf/everydayethics.pdf (hereafter IBM, ‘Ethics for AI’); in brief: 1. Be accountable, i.e. understand accountability, keep records, understand the law. 2. Align with user values, inter alia by bringing in policy makers and academics; 3. Keep it explainable, i.e., allow for user questions and make AI reviewable; 4. Minimize bias and promote inclusion. 5. Protect users’ data rights, adhere to national and international rights laws.

27 AI Now, ‘AI Now 2017 Report’ (AI Now Institute, 2017) https://ainowinstitute.org/AI_Now_2017_Report.pdf, recommendation no 10: ‘Ethical codes […] should be accompanied by strong oversight and accountability mechanisms.’ (p 2); see also AI Now, ‘AI Now 2018 Report’ (AI Now Institute, 2018) https://ainowinstitute.org/AI_Now_2018_Report.pdf, recommendation no 3: ‘The AI industry urgently needs new approaches to governance.’ (p 4).

28 Partnership on AI, ‘Tenets’ www.partnershiponai.org/tenets/ (hereafter Partnership on AI, ‘Tenets’), in summary: 1. Benefit and empower as many people as possible; 2. Educate and listen, inform; 3. Be committed to open research and dialogue on the ethical, social, economic, and legal implications of AI; 4. Research and development need to be actively engaged with, and accountable to, stakeholders; 5. Engage with, and have representation of, stakeholders in the business community; 6. Maximize benefits and address challenges by: protecting privacy and security; understanding and respecting interests of all parties impacted; ensuring that the AI community remains socially responsible, sensitive and engaged; ensuring that AI is robust, reliable, trustworthy, and secure; opposing AI that would violate international conventions and human rights; and promoting safeguards and technology that do no harm; 7. Be understandable and interpretable for people for purposes of explaining the technology; 8. Strive for a culture of cooperation, trust, and openness among AI scientists and engineers.

29 See, for instance, Pontifical Academy for Life, Microsoft, IBM, FAO and Ministry of Innovation (Italian Government), ‘Rome Call for AI Ethics’ (Rome Call, 28 February 2020) www.romecall.org.

30 IEEE, ‘Ethically Aligned Design’ (Footnote n 10).

31 See the IEEE P7000 standards series, e.g. IEEE SA, IEEE P7000 - Draft Model Process for Addressing Ethical Concerns During System Design (IEEE, 30 June 2016) https://standards.ieee.org/project/7000.html; The IEEE considers standard setting with regard to AI unprecedented: ‘This is the first series of standards in the history of the IEEE Standards Association that explicitly focuses on societal and ethical issues associated with a certain field of technology’; IEEE, ‘Ethically Aligned Design’ (Footnote n 10) 283; for the type of standard that is necessary, see D Danks, AJ London, ‘Regulating Autonomous Systems: Beyond Standards’, (2017) 32 IEEE Intelligent Systems 88.

32 See ISO, ‘Standards by ISO / IEC JTC 1 / SC 42. Artificial Intelligence’ www.iso.org/committee/6794475/x/catalogue/p/0/u/1/w/0/d/0.

33 See UK High Court, R (Bridges) v CCSWP and SSHD [2019] EWHC 2341 (Admin); UK Court of Appeal, R (Bridges) v CCSWP and SSHD [2020] EWCA Civ 1058; Tribunal Administratif de Marseille, La Quadrature du Net, No. 1901249 (27 Nov. 2020); Swedish Data Protection Authority, ‘Supervision pursuant to the General Data Protection Regulation (EU) 2016/679 – facial recognition used to monitor attendance of students’ (DI-2019-2221, 20 August 2019) <imy.se/globalassets/dokument/beslut/facial-recognition-used-to-monitor-the-attendance-of-students.pdf>; a number of non-governmental organisations are bringing an action against Clearview AI Inc., which sells facial recognition software, for violation of data protection law, see https://privacyinternational.org/legal-action/challenge-against-clearview-ai-europe. A global inventory listing incidents involving AI that have taken place so far includes more than 600 entries to date: AIAAIC repository: https://docs.google.com/spreadsheets/d/1Bn55B4xz21-_Rgdr8BBb2lt0n_4rzLGxFADMlVW0PYI/edit#gid=888071280; compare with AI Incident Database, ‘All Incident Reports’ (7 June 2021) https://incidentdatabase.ai/, which is run by the Partnership on AI and includes 100 incidents.

34 A Jobin, M Ienca, and E Vayena, ‘The Global Landscape of AI Ethics Guidelines’ (2019) 1 Nature Machine Intelligence 389–399; J Fjeld and others, ‘Principled Artificial Intelligence: Mapping Consensus in Ethical and Rights-Based Approaches to Principles for AI’ (Berkman Klein Center for Internet & Society, 2020) http://nrs.harvard.edu/urn-3:HUL.InstRepos:42160420.

35 State Council of the People’s Republic of China, ‘A Next Generation Artificial Intelligence Development Plan’ (New America, 20 July 2017) www.newamerica.org/documents/1959/translation-fulltext-8.1.17.pdf (hereafter China, ‘AI Development Plan’); President of the French Republic, ‘The President of the French Republic Presented His Vision and Strategy to Make France a Leader in AI at the Collège de France on 29 March 2018’ (AI for Humanity, 2018) www.aiforhumanity.fr/en/ (hereafter French Republic, ‘Strategy to Make France a Leader in AI’); Federal Government of Germany, ‘Artificial Intelligence Strategy’ (The Federal Government, November 2018) www.ki-strategie-deutschland.de/home.html?file=files/downloads/Nationale_KI-Strategie_engl.pdf (hereafter Germany, ‘AI Strategy’); US President, ‘Executive Order on Maintaining American Leadership in Artificial Intelligence’ (2019) E.O. 13859 of Feb 11, 2019, 84 FR 3967 (hereafter US President, ‘Executive Order on Leadership in AI’). According to T Dutton, ‘An Overview of National AI Strategies’ (Medium, 28 June 2018) https://medium.com/politics-ai/an-overview-of-national-ai-strategies-2a70ec6edfd which contains a useful list of national AI strategies, Canada was the first state to put forward such a national strategy in the year 2017. Yet it remains unclear what exactly constitutes a ‘strategy’. In any case, the documents published by the Obama Administration in 2016 (see Footnote n 38) already contained many elements of a ‘strategy’.

36 French Republic, ‘Strategy to Make France a Leader in AI’ (Footnote n 35) third commitment.

37 China, ‘AI Development Plan’ (Footnote n 35) Section V 1; the text accompanying this aim is more concrete. It recommends addressing traceability and accountability, launching research on AI behaviour science and ethics, and establishing ‘an ethical and moral multi-level judgment structure and human-computer collaboration ethical framework’. China is also committed to ‘actively participate in global governance of AI, strengthen the study of major international common problems such as robot alienation and safety supervision, deepen international cooperation on AI laws and regulations, international rules and so on, and jointly cope with global challenges’.

38 Germany, ‘AI Strategy’ (Footnote n 35) 4, 37, 38. The data ethics commission (‘Datenethikkommission’) in response published its report in October 2019: Datenethikkommission, ‘Gutachten’ (BMI, October 2019) www.bmi.bund.de/SharedDocs/downloads/DE/publikationen/themen/it-digitalpolitik/gutachten-datenethikkommission.pdf?__blob=publicationFile&v=4. The report deals comprehensively on 240 pages with ‘digitization’, not just AI, and includes 75 recommendations to move forward. An economic assessment of the proposals in the report would be necessary though. The report seems quite ‘big’ on regulation.

39 The US strategy merely stated as one of five guiding principles: ‘The United States must foster public trust and confidence in AI technologies and protect civil liberties, privacy, and American values in their application in order to fully realize the potential of AI technologies for the American people.’ (US President, ‘Executive Order on Leadership in AI’ (Footnote n 35) section 1(d); compare with National Science and Technology Council, ‘Preparing for the Future of Artificial Intelligence’ (The White House, President Barack Obama, October 2016) https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/preparing_for_the_future_of_ai.pdf which had been published before and addressed transparency, fairness, and efficacy of systems in recommendations nos 16 and 17 and ethics in education curricula in recommendation no 20, and National Science and Technology Council, ‘The National Artificial Intelligence Research and Development Strategic Plan’, (The White House, President Barack Obama, October 2016) https://obamawhitehouse.archives.gov/sites/default/files/whitehouse_files/microsites/ostp/NSTC/national_ai_rd_strategic_plan.pdf, which was published on the same day as Preparing for the Future of Artificial Intelligence, p. 3: ‘understand and address the ethical, legal, and societal implications of AI’ is a research priority according to strategy no. 3. See also the webpage of the US government on AI which has recently gone live: www.ai.gov/.

40 House of Lords (Select Committee on Artificial Intelligence), ‘AI in the UK: Ready, Willing and Able?’ (UK Parliament, 16 April 2018) https://publications.parliament.uk/pa/ld201719/ldselect/ldai/100/100.pdf (hereafter House of Lords, ‘AI in the UK’).

41 House of Lords, ‘AI in the UK’ (Footnote n 40) para 420.

42 House of Lords, ‘AI in the UK’ (Footnote n 40) para 417, in brief: 1. Development of AI for common good and humanity; 2. Intelligibility and fairness; 3. Use of AI should not diminish data rights or privacy; 4. Individuals’ right to be educated to flourish mentally, emotionally and economically alongside AI; 5. The autonomous power to hurt, destroy, or deceive human beings should never be vested in AI. In the United Kingdom, further work also addressed the use of facial recognition technology: Biometrics and Forensics Ethics Group (BFEG UK government), ‘Interim Report of BFEG Facial Recognition Working Group’ (OGL, February 2019) https://assets.publishing.service.gov.uk/government/uploads/system/uploads/attachment_data/file/781745/Facial_Recognition_Briefing_BFEG_February_2019.pdf. According to this report, facial recognition: 1. Is only permissible when in public interest; 2. Justifiable only if effective; 3. Should not involve or exhibit bias; 4. Should be deployed in even-handed ways: for example, not target certain events only (impartiality); 5. Should be a last resort: No other less invasive alternative, minimizing interference with lawful behaviour (necessity). Also, 6. Benefits must be proportionate to loss of liberty and privacy; 7. Humans must be impartial, accountable, oversighted, esp. when constructing watch lists; and 8. Public consultation and rationale are necessary for trust. Finally, 9. Could resources be used better elsewhere?

43 C Villani, ‘For a Meaningful Artificial Intelligence – Towards a French and European Strategy’ (AI for Humanity, March 2018) www.aiforhumanity.fr/pdfs/MissionVillani_Report_ENG-VF.pdf 113–114; in summary: 1. transparency and auditability; 2. Rights and freedoms need to be adapted in order to forestall potential abuse; 3. Responsibility; 4. Creation of a diverse and inclusive social forum for discussion; 5. Politicization of the issues linked to technology. Compare with D Dawson and others, ‘Artificial Intelligence – Australia’s Ethics Framework, A Discussion Paper’ (2019) https://consult.industry.gov.au/strategic-policy/artificial-intelligence-ethics-framework/supporting_documents/ArtificialIntelligenceethicsframeworkdiscussionpaper.pdf 6, which, in a nutshell, proposed the following ethics guidelines: 1. Generate net benefits; 2. Civilian systems should do no harm; 3. Regulatory and legal compliance; 4. Protection of privacy; 5. Fairness: no unfair discrimination, particular attention to be given to training data; 6. Transparency and explainability; 7. Contestability; 8. Accountability, even if harm was unintended.

44 Draft Report with recommendations to the Commission on Civil Law Rules on Robotics, 2015/2103(INL), 23 May 2016; the report was marked by an alarmist undertone.

45 Resolution with recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)), European Parliament, P8_TA (2017)0051, 16 February 2017.

46 Ibid, para 65.

47 Communication from the Commission to the European Parliament, the European Council, the Council, the European Economic and Social Committee and the Committee of the Regions, Artificial Intelligence for Europe, European Commission, 25 April 2018, section 1 toward the end.

48 High-Level Expert Group on Artificial Intelligence, ‘Ethics Guidelines for Trustworthy AI’ (8 April 2019) www.ai.bsa.org/wp-content/uploads/2019/09/AIHLEG_EthicsGuidelinesforTrustworthyAI-ENpdf.pdf (hereafter: ‘Ethics Guidelines for Trustworthy AI’). The Guidelines distinguish between foundations of trustworthy AI which include four ethical principles, namely 1. Respect for human autonomy, 2. Prevention of harm, 3. Fairness, 4. Explicability (12 et seq) and seven requirements for their realization, namely 1. Human agency and oversight, 2. Technical robustness and safety, 3. Privacy and data governance, 4. Transparency, 5. Diversity, non-discrimination, fairness, 6. Societal and environmental well-being and 7. Accountability.

49 Notably European Group on Ethics in Science and New Technologies (EGE), ‘Statement on Artificial Intelligence, Robotics and ‘Autonomous’ Systems’ (9 March 2018) https://op.europa.eu/en/publication-detail/-/publication/dfebe62e-4ce9-11e8-be1d-01aa75ed71a1/language-en/format-PDF/source-78120382>. Another initiative within the wider sphere of the EU worked in parallel with the Commission’s High-Level Expert Group and published a set of principles: L Floridi and others, ‘AI4People – An Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations’ (2018) 28 Minds and Machines 689.

50 See Ethics Guidelines for Trustworthy AI (Footnote n 48) 6: ‘The Guidelines do not explicitly deal with the first component of Trustworthy AI (lawful AI), but instead aim to offer guidance on fostering and securing the second and third components (ethical and robust AI).’ And 10: ‘Understood as legally enforceable rights, fundamental rights therefore fall under the first component of Trustworthy AI (lawful AI), which safeguards compliance with the law. Understood as the rights of everyone, rooted in the inherent moral status of human beings, they also underpin the second component of Trustworthy AI (ethical AI), dealing with ethical norms that are not necessarily legally binding yet crucial to ensure trustworthiness.’

51 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1 (GDPR). The General Data Protection Regulation, in Article 22 regulates automated decision making and therefore one aspect of AI; however, the effectiveness of the Article is limited by the scope of Regulation as well as loopholes in paragraph 2. Article 22 is entitled ‘Automated Individual Decision-Making, Including Profiling’ and reads as follows: ‘1. The data subject shall have the right not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning him or her or similarly significantly affects him or her. 2. Paragraph 1 shall not apply if the decision: (a) is necessary for entering into, or performance of, a contract between the data subject and a data controller; (b) is authorised by Union or Member State law to which the controller is subject and which also lays down suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests; or (c) is based on the data subject’s explicit consent. 3. In the cases referred to in points (a) and (c) of paragraph 2, the data controller shall implement suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests, at least the right to obtain human intervention on the part of the controller, to express his or her point of view and to contest the decision; 4. Decisions referred to in paragraph 2 shall not be based on special categories of personal data referred to in Article 9(1), unless point (a) or (g) of Article 9(2) applies and suitable measures to safeguard the data subject’s rights and freedoms and legitimate interests are in place.’ For an international legal perspective on the General Data Protection Regulation, see the Symposium on: ‘The GDPR in International Law’ (6 January 2020) AJIL Unbound 114.

52 European Commission, White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, European Commission (White Paper, COM(2020) 65 final, 2020) (hereafter White Paper on AI).

53 The public consultation on the White Paper on AI (Footnote n 52) attracted a wide range of comments, see e.g. Google, ‘Consultation on the White Paper on AI – a European Approach’ (Google, 28 May 2020) www.blog.google/documents/77/Googles_submission_to_EC_AI_consultation_1.pdf.

54 White Paper on AI (Footnote n 52) 17: an application of AI should be considered high-risk when it is situated in a sensitive domain, e.g. health care, and presents a concrete risk.

55 Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts, European Commission, COM (2021) 206 final, 21 April 2021, in the following: the Proposal or the proposed regulation.

56 See Article 52 of the proposed regulation, which lays down a relatively light transparency obligation with regard to AI not presenting high risks (‘certain AI systems’, according to Article 52).

57 The regulation proposes to ban the use of AI: a) to materially distort a person’s behaviour (a draft leaked earlier had called this ‘manipulation’); b) to exploit the vulnerabilities of a specific group of persons (‘targeting’ of vulnerable groups, according to the leaked draft); c) for social scoring by public authorities; and d) for live remote biometric identification in public places (see Article 5(1)(a)–(d) of the proposed regulation). The regulation does not preclude the development of AI, even if it could eventually be used in ways the regulation prohibits. In the case of letters a and b, a link to harm is required: the practices are only prohibited if they are at least likely to cause a person physical or psychological harm. The ban on biometric identification under letter d is subject to a public security exception pursuant to Article 5(2).

58 The definition of AI in annex I appears to be in accordance with how the term is understood in the computer sciences (compare S Russell and P Norvig, Artificial Intelligence: A Modern Approach (3rd ed., 2014)), but it is a broad definition that lawyers may read differently than computer scientists, and the elements added in Article 3(1) of the proposed regulation distort it to some degree. Annex II lists legislative acts of the Union; if an act listed applies (e.g., in case of medical devices or toys), any AI used in this context is to be considered high-risk. Annex III relies on domains in conjunction with concrete, intended uses. It lists the following domains: remote biometric identification systems (if not banned by Article 5), critical infrastructure, educational institutions, employment, essential public and private services, law enforcement and criminal law, management of migration, asylum, and border control, as well as assistance of judicial authorities. Specific uses listed under these domains are caught as high-risk AI. For instance, AI is considered high-risk when it is intended to be used for predictive policing (use) in law enforcement (domain). The Commission, jointly with the Parliament and the Council, is delegated the power to add further uses within the existing domains, which, in turn, could only be added to by means of a full legislative amendment; the Commission’s power is subject to an assessment of potential harm (see Articles 7 and 73 of the proposed regulation).
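To make the two-step logic described in the preceding footnote concrete, the following minimal sketch (in Python, purely illustrative and not part of the proposed regulation or its Annexes) shows how a classification by domain plus intended use could be expressed; the domain names follow the list above, while the mapping, the example uses, and the function name are hypothetical simplifications:

# Illustrative sketch only: the proposed regulation's Annex III pairs listed
# domains with specific intended uses; this toy mapping and the example uses
# are simplified assumptions, not the legal text.
ANNEX_III_HIGH_RISK = {
    "law enforcement": {"predictive policing", "individual risk assessment"},
    "employment": {"cv screening", "worker monitoring"},
    "essential public and private services": {"credit scoring"},
}

def is_high_risk(domain: str, intended_use: str) -> bool:
    """Return True if the (domain, intended use) pair is listed as high-risk."""
    return intended_use in ANNEX_III_HIGH_RISK.get(domain, set())

# Example from the footnote: predictive policing in law enforcement.
print(is_high_risk("law enforcement", "predictive policing"))  # True
print(is_high_risk("law enforcement", "traffic statistics"))   # False

On this simplified view, adding a new use within an existing domain amounts to extending the set for that domain, which mirrors the delegated power described above, whereas adding a new domain would require amending the mapping itself, mirroring the need for a full legislative amendment.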

59 Mostly the ‘provider’ will be the person who puts an AI on the market, according to Article 16 of the proposed regulation; sometimes it is the importer, the distributor or another third party, according to Articles 26–28; Article 3(2) defines a provider as ‘a natural or legal person, public authority, agency or other body that develops an AI system or that has an AI system developed with a view to placing it on the market or putting it into service under its own name or trademark, whether for payment or free of charge’.

60 Article 10(6) of the proposed regulation transposes some of the requirements applicable to trained AI to AI that has not been trained.

61 IEEE, ‘Ethically Aligned Design’ (Footnote n 10) 188, recommending careful assessment of bias and integration of potentially disadvantaged groups in the process; Future of Life Institute, ‘Asilomar AI Principles’ (Footnote n 11) did not yet address bias explicitly.

62 USACM, ‘Algorithmic Transparency’ (Footnote n 12), principle no 1: ‘1. Awareness: Owners, designers, builders, users, and other stakeholders of analytic systems should be aware of the possible biases involved in their design, implementation, and use and the potential harm that biases can cause to individuals and society.’ Principle no 5 addressed ‘data provenance’. Compare Japanese Society for AI, ‘Guidelines’ (Footnote n 12) principle no 5 with a slightly broader scope.

63 Montreal Declaration for AI (Footnote n 22) principle no 6.1: ‘AIS must be designed and trained so as not to create, reinforce, or reproduce discrimination based on – among other things – social, sexual, ethnic, cultural, or religious differences.’ See also principle no 7 concerning diversity; there are some data governance requirements in principle no 8 on prudence.

64 Toronto Declaration (Footnote n 22) for instance, no 16. Not all documents laying down ethics principles discuss bias; OpenAI Charter (Footnote n 25) for instance, leaves bias aside and focuses on the safety of general AI.

65 By way of example, Sage, ‘The Ethics of Code’ (Footnote n 26) principle no 1; Google, ‘AI Principles’ (Footnote n 26) principle no 2; IBM, ‘Ethics for AI’ (Footnote n 26) discusses fairness, including avoidance of bias, as one of five ethics principles (34–35); it also includes recommendations on how to handle data: ‘Your AI may be susceptible to different types of bias based on the type of data it ingests. Monitor training and results in order to quickly respond to issues. Test early and often.’ Partnership on AI, Tenets (Footnote n 28) on the other hand, only generically refers to human rights (see tenet no 6.e).

66 Article 13(1) of the proposed regulation.

67 Article 13(2) of the proposed regulation.

68 Article 13(3b) of the proposed regulation.

69 Article 13(3b)(iii and iv) of the proposed regulation.

70 IEEE, ‘Ethically Aligned Design’ (Footnote n 10) 11; transparency implies that the basis of a decision of an AI should ‘always be discoverable’.

71 Asilomar AI Principles (Footnote n 11): according to principle no 7, it must be possible to ascertain why an AI caused harm; according to principle no 8, any involvement in judicial decision making should be explainable and auditable.

72 USACM, ‘Algorithmic Transparency’ (Footnote n 12) principle no 4.

73 Japanese Society for AI, ‘Guidelines’ (Footnote n 12) principle no 5 (addressing security).

74 Montreal Declaration for AI (Footnote n 22) principle no 5, with 10 sub-principles addressing various aspects of transparency. See also The Toronto Declaration (Footnote n 22) which includes strong transparency obligations for states (para 32) and weaker obligations for the private sector (para 51).

75 Article 15(1) of the proposed regulation.

76 Article 15(3 and 4) of the proposed regulation.

77 IEEE, ‘Ethically Aligned Design’ (Footnote n 10) 11, principles nos 4 and 7.

78 Asilomar AI Principles (Footnote n 11) principle no 6.

79 Japanese Society for AI, ‘Guidelines’ (Footnote n 12) principles nos 5 and 7.

80 Montreal Declaration for AI (Footnote n 22) principle no 8; The Toronto Declaration (Footnote n 22) has a strong focus on non-discrimination and human rights; it does not address the topics covered by Article 15 of the proposed regulation directly. OpenAI Charter (Footnote n 25) stated a commitment to undertake the research to make AI safe in the long term.

81 E.g. Google, ‘AI Principles’ (Footnote n 26) principle no 3: ‘Be built and tested for safety’; IBM, ‘Ethics for AI’ (Footnote n 26) 42–45, addressed certain aspects of safety and misuse under ‘user data rights’. See also Partnership on AI, ‘Tenets’ (Footnote n 28) tenet no 6.d: ‘Ensuring that AI research and technology is robust, reliable, trustworthy, and operates within secure constraints.’

82 Article 9(4) of the proposed regulation.

83 Articles 19 and 43 of the proposed regulation.

84 Article 60(2) of the proposed regulation.

85 Articles 11–12 of the proposed regulation.

86 According to Article 14(3) of the proposed regulation, human oversight measures can either be built into AI or merely be identified, so that users can implement them appropriately. Oversight should enable users to understand and monitor AI, interpret its output, decide not to use it, intervene in its operation, and prevent automation bias (Article 14(4)).

87 See Eleven Guiding Principles on Lethal Autonomous Weapons (Footnote n 19); note that ‘meaningful human control’ is not mentioned as a requirement for autonomous weapons systems in these guiding principles.

88 See the discussion of bias above.

89 See the discussion of transparency above.

90 But see Trusilo and Burri, ‘Ethical AI’ (Footnote n 16).

91 See, for instance, USACM, ‘Algorithmic Transparency’ (Footnote n 12) principle no 6: ‘Auditability: Models, algorithms, data, and decisions should be recorded so that they can be audited in cases where harm is suspected.’ (Emphasis removed.)

92 The risk of a responsibility gap is not addressed by the proposed regulation, but by a revision of the relevant legislation on liability, see p 5 of the Explanatory Memorandum to the proposed regulation.

93 See A Ezrachi and ME Stucke, Virtual Competition: The Promise and Perils of the Algorithm-Driven Economy (2016).

94 Bostrom, ‘Superintelligence’ (Footnote n 8); J Dawes, ‘Speculative Human Rights: Artificial Intelligence and the Future of the Human’ (2020) 42 Human Rights Quarterly 573.

95 For a broader perspective on AI, see K Crawford, Atlas of AI: Power, Politics, and the Planetary Costs of Artificial Intelligence (2021).

96 Note the broad geographical scope of the proposed regulation. It applies when providers bring AI into circulation in the Union, but also when output produced outside of the Union is used in it (see Article 2(1)(a) and (c) of the proposed regulation). The substantive scope of the proposed regulation is not universal, though, since it largely excludes, for instance, weapons and cars (see Article 2(2) and (3) of the proposed regulation).

97 OECD Recommendation OECD/LEGAL/0449 of 22 May 2019 of the Council on Artificial Intelligence (hereafter OECD, ‘Recommendation on AI’); the five principles are the following: 1. Inclusive growth, sustainable development and well-being; 2. Human-centred values and fairness; 3. Transparency and explainability; 4. Robustness, security and safety; 5. Accountability. Another five implementing recommendations advise states specifically to: invest in AI research and development; foster a digital ecosystem; shape the policy environment for AI, including by way of experimentation; build human capacity and prepare for labour market transformation; and cooperate internationally, namely on principles, knowledge sharing, initiatives, technical standards, and metrics; see also S Voeneky, ‘Key Elements of Responsible Artificial Intelligence – Disruptive Technologies, Dynamic Law’ (2020) 1 Ordnung der Wissenschaft 9, 16.

98 White Paper on AI (Footnote n 52).

99 OECD, ‘Recommendation on AI’ (Footnote n 97) point 1.4.c.

100 OECD, ‘OECD to Host Secretariat of New Global Partnership on Artificial Intelligence’ (OECD, 15 June 2020) https://www.oecd.org/newsroom/oecd-to-host-secretariat-of-new-global-partnership-on-artificial-intelligence.htm; the idea behind this initiative may be to counterbalance China in AI: J Delcker, ‘Wary of China, the West Closes Ranks to Set Rules for Artificial Intelligence’ (Politico, 7 June 2021) www.politico.eu/article/artificial-intelligence-wary-of-china-the-west-closes-ranks-to-set-rules/. The OECD initiative is not to be confused with the Partnership on Artificial Intelligence, see Partnership on AI, ‘Tenets’ (Footnote n 28).

101 The Global Partnership on Artificial Intelligence, Responsible Development, Use and Governance of AI, Working Group Report (GPAI Summit Montreal, November 2020) www.gpai.ai/projects/responsible-ai/gpai-responsible-ai-wg-report-november-2020.pdf.

102 European Commission for the Efficiency of Justice (CEPEJ), ‘European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment’ (Council of Europe, 3-4 December 2018) https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c. In sum, it suggested the following guidelines: 1. Ensure compatibility with human rights; 2. Prevent discrimination; 3. Ensure quality and security; 4. Ensure transparency, impartiality, and fairness: make AI accessible, understandable, and auditable; 5. Ensure user control.

103 Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, ‘Guidelines on Artificial Intelligence and Data Protection (Council of Europe Convention 108)’ (25 January 2019) T-PD(2019)01. The guidelines distinguish between general principles (i), principles addressed to developers (ii), and principles addressed to legislators and policy makers (iii). In summary, the principles are the following: i) 1. Respect human rights and dignity; 2. Respect the principles of Convention 108+: lawfulness, fairness, purpose specification, proportionality of data processing, privacy-by-design and by default, responsibility and demonstration compliance (accountability), transparency, data security and risk management; 3. Avoid and mitigate potential risks; 4. Consider functioning of democracy and social/ethical values; 5. Respect the rights of data subjects; 6. Allow control by data subjects over data processing and related effects on individuals and society. ii) 1. Value-oriented design; 2. Assess, precautionary approach; 3. Human rights by design, avoid bias; 4. Assess data, use synthetic data; 5. Risk of decontextualised data and algorithms; 6. Independent committee of experts; 7. Participatory risk assessment; 8. Right not to be subject solely to automated decision making; 9. Safeguard user freedom of choice to foster trust, provide feasible alternatives to AI; 10. Vigilance during entire life-cycle; 11. Inform, right to obtain information; 12. Right to object. iii) 1. Accountability, risk assessment, certification to enhance trust; 2. In procurement: transparency, impact assessment, vigilance; 3. Sufficient resources for supervisors. 4. Preserve autonomy of human intervention; 5. Consultation of supervisory authorities; 6. Various supervisors (data, consumer protection, competition) should cooperate; 7. Independence of committee of experts in ii.6; 8. Inform and involve individuals; 9. Ensure literacy. See also Consultative Committee of the Convention for the Protection of Individuals with Regard to Automatic Processing of Personal Data, ‘Guidelines on Facial Recognition (Convention 108)’ T-PD(2020)03rev4.

104 Recommendation CM/Rec(2020)1 of 8 April 2020 of the Committee of Ministers to Member States on the human rights impacts of algorithmic systems, Council of Europe Committee of Ministers, (hereafter ‘Recommendation on the human rights impacts’). The recommendation is a detailed text that first addresses states and then private actors. After elaborating on scope and context (part A paras 1–15, discussing, for example, synthetic data [para 6], the fusion of the stages of development and implementation of AI [para 7], the presence of both private and public aspect in many algorithmic systems [para 12], and a precautionary approach [para 15]), it lists obligations of states in part B, including data management (para 2), testing (paras 3.3–5), transparency and remedies (para 4), and precautionary measures (para 5, including standards and oversight). These obligations are then tailored to the situation of private actors on the basis of the due diligence approach applicable to business. The obligations in this part are less stringent; see, for instance, the duty to prevent discrimination in para C.1.4.

105 Recommendation on the human rights impacts (Footnote n 104) para A.2.

106 Recommendation on the human rights impacts (Footnote n 104) para A.11.

107 See UNESCO, ‘Draft text of the Recommendation on the Ethics of Artificial Intelligence’ SHS/IGM-AIETHICS/2021/APR/4 (UNESCO Digital Library, 31 March 2021) https://unesdoc.unesco.org/ark:/48223/pf0000376713; see also UNESCO, ‘Artificial Intelligence for Sustainable Development: Challenges and Opportunities for UNESCO’s Science and Engineering Programmes’ SC/PCB/WP/2019/AI (UNESCO Digital Library, August 2019); see F Molnár-Gábor, Die Herausforderung der medizinischen Entwicklung für das internationale soft law am Beispiel der Totalsequenzierung des menschlichen Genoms, (2012) 72 Zeitschrift für ausländisches öffentliches Recht und Völkerrecht 695, for the role of soft law created by UNESCO.

108 UN High Level Panel on Digital Cooperation, ‘The Age of Digital Interdependence: Report of the UN Secretary-General’s High-Level Panel on Digital Cooperation’ (UN, June 2019) (hereafter ‘The Age of Digital Interdependence’).

109 The Age of Digital Interdependence (Footnote n 108) 7: Inclusiveness, respect, human-centredness, human flourishing, transparency, collaboration, accessibility, sustainability, and harmony. That ‘values’ are relative in AI becomes evident from the key governance principles the Report lays down in Section VI. The principles, each of which is explained in one sentence, are the following: Consensus-oriented; Polycentric; Customised; Subsidiarity; Accessible; Inclusive; Agile; Clarity in roles and responsibility; Accountable; Resilient; Open; Innovative; Tech-neutral; Equitable outcomes. Further key functions are added: Leadership; Deliberation; Ensuring inclusivity; Evidence and data; Norms and policy making; Implementation; Coordination; Partnerships; Support and Capacity development; Conflict resolution and crisis management. This long list, which appears to be the result of a brainstorming exercise, raises the question of how the ‘values’ of the Report on page 7 differ from the ‘principles’ (‘functions’) on page 39 and how they were categorized.

110 The Age of Digital Interdependence (Footnote n 108) 29–32; the recommendations include: 1B: Creation of a platform for sharing digital public goods; 1C: Full inclusion for women and marginalized groups; 2: Establishment of help desks; 3A: Finding out how to apply existing human rights instruments in the digital age; 3B: Calling on social media to work with governments; 3C: Autonomous systems: explainable and accountable, no life-and-death decisions, avoidance of bias; 4: Development of a Global Commitment on Digital Trust and Security; 5A: By 2020, create a Global Commitment for Digital Cooperation and welcome a UN Technology Envoy.

111 The Age of Digital Interdependence (Footnote n 108) 23–26: The three governance models proposed are the following: i) a beefed-up version of the existing Internet Governance Forum; ii) a distributed, multi-stakeholder network architecture, which to some extent resembles the status quo; and iii) a more government-driven architecture centred on the idea of ‘digital commons’.

112 Recommendation on the human rights impacts (Footnote n 104).

113 See the useful mapping of AI in emerging economies: ‘Global South Map of Emerging Areas of Artificial Intelligence’ (K4A, 9 June 2021) www.k4all.org/project/aiecosystem/; the foundation Knowledge for All conducts further projects on development and AI, see www.k4all.org/project/?type=international-development.

114 The Council of Europe is currently deliberating on whether to draft a treaty on AI: Feasibility Study, Council of Europe Ad Hoc Committee on Artificial Intelligence (CAHAI), CAHAI(2020)23.

115 A further dimension relates to the use of AI by international lawyers, see A Deeks, ‘High-Tech International Law’ (2020) 88(3) George Washington Law Review 574–653; M Scherer, ‘Artificial Intelligence and Legal Decision-Making: The Wide Open? A Study Examining International Arbitration’ (2019) 36(5) Journal of International Arbitration 539–574; for data analysis and international law, see W Alschner, ‘The Computational Analysis of International Law’ in R Deplano and N Tsagourias (eds), Research Methods in International Law: A Handbook (2021) 204–228.

8 Fostering the Common Good: An Adaptive Approach Regulating High-Risk AI-Driven Products and Services

* Thorsten Schmidt and Silja Voeneky are grateful for the support and enriching discussions at the Freiburg Institute for Advanced Studies (FRIAS). Thorsten Schmidt wants to thank Ernst Eberlein, and Silja Voeneky wants to thank all members of the interdisciplinary FRIAS Research Group Responsible AI, for valuable insights. In addition, Voeneky’s research has been financed as part of the interdisciplinary research project AI Trust by the Baden-Württemberg Stiftung (since 2020). Earlier versions of parts of Sections II–IV of this Chapter have been published in S Voeneky, ‘Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks’ in S Voeneky and G Neuman (eds), Human Rights, Democracy and Legitimacy in a World of Disorder (2018) 139 et seq. and S Voeneky, ‘Key Elements of Responsible Artificial Intelligence: Disruptive Technologies, Dynamic Law’ (2020) 1 OdW 9 et seq.

1 This approach is part of the concept of ‘Responsible AI’. In the following, we concentrate on a regulative approach for high-risk AI-driven products; we nevertheless include – for a regulation mutatis mutandis – AI-based high-risk services.

2 J Beckert and R Bronk, ‘An Introduction to Uncertain Futures’ in J Beckert and R Bronk (eds), Uncertain Futures: Imaginaries, Narratives, and Calculation in the Economy (2018), who link this to the capitalist system only, which seems too narrow an approach.

3 Human rights treaties do not oblige non-state actors, such as companies; however, States are obliged to respect, protect, and fulfill human rights and the due diligence framework can be applied in the field of human rights protection; cf. M Monnheimer, Due Diligence Obligations in International Human Rights Law (2021) 13 et seq., 49 et seq., 204 et seq. With regard to a human-rights based duty of States to avoid existential and catastrophic risks that are based on research and technological development, cf. S Voeneky, ‘Human Rights and Legitimate Governance of Existential and Global Catastrophic Risks’ in S Voeneky and G Neuman (eds), Human Rights, Democracy and Legitimacy in a World of Disorder (2018) 139, 151 et seq. (hereafter Voeneky, ‘Human Rights and Legitimate Governance’).

4 It is still disputed, however, whether there is an obligation for States to regulate extraterritorial corporate conduct, cf. M Monnheimer, Due Diligence Obligations in International Human Rights Law (2021) 307 et seq. For a positive answer Voeneky, ‘Human Rights and Legitimate Governance’ (Footnote n 3) 155 et seq.

5 Commission, ‘Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on AI (Artificial Intelligence Act) and amending certain Union Legislative Acts’ COM(2021) 206 final.

6 See Section II.

7 For a broad definition see as well the Draft EU AIA; according to Article 3(1), an ‘AI system’ “means software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environments they interact with.” Annex I Draft EU AIA reads: “(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning; (b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems; (c) Statistical approaches, Bayesian estimation, search and optimization methods.”

8 Cf. recently M Bhatt, J Suchan, and S Vardarajan, ‘Commonsense Visual Sensemaking for Autonomous Driving: On Generalised Neurosymbolic Online Abduction Integrating Vision and Semantics’ (2021) 299 Artificial Intelligence Journal https://doi.org/10.1016/j.artint.2021.103522. Here we are concerned with the concept of ‘object permanence’, in other words, the idea that “discrete objects continue to exist over time, that they have spatial relationships with one another (such as in-front-of and behind)”, the understanding that objects, such as cars, continue to exist even if they disappear behind an obstacle; see also ‘Is a Self-Driving Car Smarter Than a Seven-Month-Old?’ The Economist (2021) www.economist.com/science-and-technology/is-it-smarter-than-a-seven-month-old/21804141.

9 S Russell and P Norvig, Artificial Intelligence: A Modern Approach (3rd ed., 2016), 1. S Voeneky, ‘Key Elements of Responsible Artificial Intelligence – Disruptive Technologies, Dynamic Law’ (2020) 1 OdW 9, 10–11 with further references (hereafter Voeneky, ‘Key Elements of Responsible Artificial Intelligence’) https://ordnungderwissenschaft.de/wp-content/uploads/2020/03/2_2020_voeneky.pdf; I Rahwan and others, ‘Machine Behaviour’ (2019) 568 Nature 477–486 www.nature.com/articles/s41586-019-1138-y; for the various fields of application cf. also W Wendel, ‘The Promise and Limitations of Artificial Intelligence in the Practice of Law’ (2019) 72 Oklahoma Law Review 21, 21–24, https://digitalcommons.law.ou.edu/olr/vol72/iss1/3/.

10 This might be a tool to solve the so-called protein folding problem, cf. E Callaway, ‘“It Will Change Everything”: DeepMind’s AI Makes Gigantic Leap in Solving Protein Structures’ (2020) 588 Nature 203 www.nature.com/articles/d41586-020-03348-4.

11 M Brundage and others, ‘The Malicious Use of Artificial Intelligence’ (Malicious AI Report, 2018) https://maliciousaireport.com/ 17.

12 For this notion in the area of biotechnology, cf. XH Zhang and others, ‘Off-Target Effects in CRISPR/Cas9-Mediated Genome Engineering’ (2015) 4 Molecular Therapy: Nucleic Acids https://doi.org/10.1038/mtna.2015.37; WA Reh, Enhancing Gene Targeting in Mammalian Cells by the Transient Down-Regulation of DNA Repair Pathways (2010) 22.

13 Cf. C Wendehorst in this volume, Chapter 12.

14 Such as high-frequency trading, deep calibration, deep hedging and risk management. High-frequency trading means the automated trading of securities characterized by extremely high speeds and high turnover rates; deep calibration means the fitting of a model to observable data of derivatives (calibration) by deep neural networks and deep hedging means the derivation of hedging strategies by the use of deep neural networks. For details on the topic of AI and finance, cf. M Paul, Chapter 21, in this volume.
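To give a rough formal sketch of deep hedging as just described (a simplified illustration in our own notation, not the formulation used in the works cited in the following footnote): the hedging strategy is parameterized by a neural network f_θ, and the network parameters θ are chosen so that the hedged profit and loss of a liability Z is acceptable under a chosen risk measure ρ, for instance

\[ \min_{\theta}\; \rho\Big( -Z + \sum_{k} f_{\theta}(I_k) \cdot \big(S_{k+1} - S_k\big) \Big), \]

where S_k denotes the prices of the hedging instruments at trading time k and I_k the information available at that time (transaction costs are omitted here). Deep calibration analogously uses a neural network to map observed derivative prices to model parameters (or model parameters to prices), replacing a numerically expensive classical calibration step.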

15 To list a few examples of this rapidly growing field, cf. J Sirignano and R Cont, ‘Universal Features of Price Formation in Financial Markets: Perspectives from Deep Learning’ (2019) 19(9) Quantitative Finance 1449–1459; H Buehler and others, ‘Deep Hedging’ (2019) 19(8) Quantitative Finance 1271–1291; B Horvath, A Muguruza, and M Tomas, ‘Deep Learning Volatility: A Deep Neural Network Perspective on Pricing and Calibration in (Rough) Volatility Models’ (2021) 21(1) Quantitative Finance 11–27.

16 J Danielsson, R Macrae, and A Uthemann, ‘Artificial Intelligence and Systemic Risk’ (Systemic Risk Centre, 24 October 2019) www.systemicrisk.ac.uk/publications/special-papers/artificial-intelligence-and-systemic-risk.

17 See Y LeCun and others, ‘Deep Learning’ (2015) 521 Nature 436–444 www.nature.com/nature/journal/v521/n7553/full/nature14539.html.

18 The term ‘the Singularity’ was coined in 1993 by the computer scientist Vernon Vinge; he argued that “[w]ithin thirty years, we will have the technological means to create superhuman intelligence,” and he concluded: “I think it’s fair to call this event a singularity (‘the Singularity’ (…)).” See V Vinge, ‘The Coming Technological Singularity: How to Survive in the Post-Human Era’ in GA Landis (ed), Vision-21: Interdisciplinary Science and Engineering in the Era of Cyberspace (1993) 11, 12.

19 See also in this volume J Tallinn and R Ngo, Chapter 2. Cf. as well S Hawking, ‘Will Artificial Intelligence Outsmart Us?’ in S Hawking (ed), Brief Answers to the Big Questions (2018), 181; S Russell and P Norvig, Artificial Intelligence: A Modern Approach (3rd ed., 2016) 1036 et seq.; S Bringsjord and NS Govindarajulu, ‘Artificial Intelligence’ in EN Zalta (ed), The Stanford Encyclopedia of Philosophy (2020) https://plato.stanford.edu/entries/artificial-intelligence/ 9; A Eden and others, Singularity Hypotheses: A Scientific and Philosophical Assessment (2013); A Al-Imam, MA Motyka, and MZ Jędrzejko, ‘Conflicting Opinions in Connection with Digital Superintelligence’ (2020) 9(2) IAES IJ-AI 336–348; N Bostrom, Superintelligence (2014) esp. 75 (hereafter N Bostrom, Superintelligence); K Grace and others, ‘When Will AI Exceed Human Performance? Evidence from AI Experts’ (2018) 62 Journal of Artificial Intelligence Research 729–754 https://doi.org/10.1613/jair.1.11222.

20 See e.g., R Kurzweil, The Singularity Is Near (2005) 127; for more forecasts, see Bostrom, Superintelligence (Footnote n 19) 19–21.

21 E Yudkowsky, ‘Artificial Intelligence as a Positive and Negative Factor in Global Risk’ in N Bostrom, MM Ćirković (eds), Global Catastrophic Risks (2011) 341.

22 M Tegmark, ‘Will There Be a Singularity within Our Lifetime?’ in J Brockman (ed), What Should We Be Worried About? (2014) 30, 32.

23 See for this J Beckert and R Bronk, ‘An Introduction to Uncertain Futures’ in J Beckert and R Bronk (eds), Uncertain Futures: Imaginaries, Narratives, and Calculation in the Economy (2018) 1–38, 2 who argue that ‘actors in capitalist systems face an open and indeterminate future’.

24 As argued in E Yudkowsky, ‘There’s No Fire Alarm for Artificial General Intelligence’ (Machine Intelligence Research Institute, 13 October 2017) https://intelligence.org/2017/10/13/fire-alarm/.

25 Voeneky, ‘Human Rights and Legitimate Governance’ (Footnote n 3) 150.

26 For a similar definition, see D Thürer, ‘Soft Law’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2012) volume 9 271 para 8.

27 On the advantages and disadvantages of ‘standards’ compared to ‘regulation’ see J Tate and G Banda, ‘Proportionate and Adaptive Governance of Innovative Technologies: The Role of Regulations, Guidelines, Standards’ (BSI, 2016) www.bsigroup.com/localfiles/en-gb/bis/innovate%20uk%20and%20emerging%20technologies/summary%20report%20-%20adaptive%20governance%20-%20web.pdf 14 (hereafter Tate and Banda, ‘Proportionate and Adaptive Governance’).

28 AI Google, ‘Artificial Intelligence at Google: Our Principles’ https://ai.google/principles/.

29 It is beyond the scope of this chapter to discuss bottom up rules drafted by corporations or NGOs in the area of AI.

30 See G Wilson, ‘Minimizing Global Catastrophic and Existential Risks from Emerging Technologies through International Law’ (2013) 31 Va Envtl LJ 307, 310. Sometimes there is no differentiation made between threat, hazard, and risk, see OECD Recommendation OECD/LEGAL/040 of 6 May 2014 of the Council on the Governance of Critical Risks www.oecd.org/gov/risk/Critical-Risks-Recommendation.pdf. For details see Voeneky, ‘Human Rights and Legitimate Governance’ (Footnote n 3) 140 et seq.

31 See SO Hansson, ‘Risk’ in EN Zalta (ed), Stanford Encyclopedia of Philosophy https://plato.stanford.edu/entries/risk/. In a quantitative sense, risk can be defined through risk measures (be it relying on probabilities or without doing so). Typical examples specify risk in terms of the probability of an unwanted event that may or may not occur (value-at-risk), or as the expected loss caused by an unwanted event that may or may not occur (expected shortfall). The expected loss materialized by the unwanted event is obtained by weighting the size of the loss in the various scenarios with the probability of these scenarios; it thus specifies an average loss given the unwanted event. Many variants of risk measures exist, see for example AJ McNeil, R Frey, and P Embrechts, Quantitative Risk Management: Concepts, Techniques and Tools – Revised Edition (2015). Adaptive schemes rely on conditional probabilities whose theory goes back to T Bayes, ‘An Essay Towards Solving a Problem in the Doctrine of Chances’ (1764) 53 Phil Transactions 370. In the area of international law, the International Law Commission (ILC) stated that the ‘risk of causing significant transboundary harm’ refers to the combined effect of the probability of occurrence of an accident and the magnitude of its injurious impact, see ILC, ‘Draft Articles on Prevention of Transboundary Harm from Hazardous Activities’ (2001) 2(2) YB Int’l L Comm 152.
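To make the two standard examples mentioned in the preceding footnote concrete, a minimal formal sketch (in our own notation for a loss variable L and a confidence level α, neither of which appears in the cited sources) is:

\[ \mathrm{VaR}_{\alpha}(L) = \inf\{\ell \in \mathbb{R} : P(L > \ell) \leq 1 - \alpha\}, \qquad \mathrm{ES}_{\alpha}(L) = \mathbb{E}\big[L \mid L \geq \mathrm{VaR}_{\alpha}(L)\big]. \]

Value-at-risk is thus a loss threshold that is exceeded only with the small probability 1 − α, while expected shortfall averages the losses beyond that threshold and so corresponds to the ‘average loss given the unwanted event’ described above; the conditional-expectation form of expected shortfall given here assumes a continuous loss distribution.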

32 For a different, narrower notion of risk, excluding situations of uncertainty (‘uncertainty versus risk’), see CR Sunstein, Risk and Reason: Safety, Law and the Environment (2002) 129; CR Sunstein, Worst-Case Scenarios (2007) 146–147; RA Posner, Catastrophe (2004) 171. A judge of the International Court of Justice (ICJ), however, included ‘uncertain risks’ in the notion of risk, see ICJ, Pulp Mills on the River Uruguay (Argentina v Uruguay), Sep Op of Judge Cançado Trindade [2010] ICJ Rep 135, 159, 162; for a similar approach (risk as ‘unknown dangers’) see J Peel, Science and Risk Regulation in International Law (2010) 1.

33 For slightly different definitions, see N Bostrom, Superintelligence (Footnote n 19) 115 (stating that ‘[a]n existential risk is one that threatens to cause the extinction of Earth-originating intelligent life or to otherwise permanently and drastically destroy its potential for future desirable development’); and N Bostrom and MM Ćirković, ‘Introduction’ in N Bostrom and MM Ćirković (eds), Global Catastrophic Risks (2008), arguing that a global catastrophic risk is a hypothetical future event that has the potential ‘to inflict serious damage to human well-being on a global scale’.

34 Cf. Footnote n 5.

35 For a definition of high-risk AI products by the European Parliament (EP), cf. EP Resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics, and related technologies (2020/2012(INL)), para 14: ‘Considers, in that regard, that artificial intelligence, robotics and related technologies should be considered high-risk when their development, deployment and use entail a significant risk of causing injury or harm to individuals or society, in breach of fundamental rights and safety rules as laid down in Union law; considers that, for the purposes of assessing whether AI technologies entail such a risk, the sector where they are developed, deployed or used, their specific use or purpose and the severity of the injury or harm that can be expected to occur should be taken into account; the first and second criteria, namely the sector and the specific use or purpose, should be considered cumulatively.’ www.europarl.europa.eu/doceo/document/TA-9-2020-10-20_EN.html#sdocta9.

36 Autonomous weapons are expressly outside the scope of the Draft EU AIA, cf. Article 2(3).

37 The Draft AIA by the EU Commission spells out a preventive approach and does not include any relevant liability rules. However, the Commission has announced that it will propose EU rules addressing liability issues related to new technologies, including AI systems, in 2022; cf. C Wendehorst, Chapter 12, in this volume.

38 See for the precautionary principle as part of EU law: Article 191(2) Treaty on the Functioning of the European Union, OJ 2016 C202/47 as well as Commission, ‘Communication on the Precautionary Principle’ COM(2000) 1 final. The precautionary principle (or: approach) is reflected in international law in Principle 15 of the Rio Declaration which holds that: ‘In order to protect the environment, the precautionary approach shall be widely applied by States according to their capabilities. Where there are threats of serious or irreversible damage, lack of full scientific certainty shall not be used as a reason for postponing cost-effective measures to prevent environmental degradation.’ (Emphasis added), United Nations, ‘Rio Declaration on Environment and Development’ (UN Conference on Environment and Development, 14 June 1992) UN Doc A/CONF 151/26/Rev 1 Vol I, 3; cf. also M Schröder, ‘Precautionary Approach/Principle’ in R Wolfrum (ed), Max Planck Encyclopedia of Public International Law (2012) volume 8, 400, paras 1–5. In philosophy, there has been an in-depth analysis and defense of the principle in recent times, cf. D Steel, Philosophy and the Precautionary Principle: Science, Evidence, and Environmental Policy (2014).

39 It is also argued that this principle shall be applied in all cases of scientific uncertainty and not only in order to protect the environment, cf. C Phoenix and M Treder, ‘Applying the Precautionary Principle to Nanotechnology’ (CRN, January 2004) http://crnano.org/precautionary.htm; N Bostrom, ‘Ethical Issues in Advanced Artificial Intelligence’ (2003) https://nickbostrom.com/ethics/ai.html 2.

40 As shown in T Sgobba, ‘B-737 MAX and the Crash of the Regulatory System’ (2019) 6(4) Journal of Space Safety Engineering 299; D Scharper, ‘Congressional Inquiry Faults Boeing and FAA Failures for Deadly 737 Max Plane Crashes’ NPR News (16 September 2020) www.npr.org/2020/09/16/913426448/congressional-inquiry-faults-boeing-and-faa-failures-for-deadly-737-max-plane-cr, key mistakes in the regulatory process were: ‘excessive trust on quantitative performance requirements, inadequate risk-based design process, and lack of independent verification by experts.’ It is argued that similar failures can happen in many other places, see for example P Johnston and H Rozi, ‘The Boeing 737 MAX Saga: Lessons for Software Organizations’ (2019) 21(3) Software Quality Professional 4.

41 C Oliver and others, ‘Volkswagen Emissions Scandal Exposes EU Regulatory Failures’ Financial Times (30 September 2015) www.ft.com/content/03cdb23a-6758-11e5-a57f-21b88f7d973f; M Potter, ‘EU Seeks More Powers over National Car Regulations after VW Scandal’ Reuters (27 January 2017) www.reuters.com/article/us-volkswagen-emissions-eu-regulations-idUSKCN0V51IO.

42 With regard to the disadvantages of the US tort system, MU Scherer, ‘Regulating Artificial Intelligence’ (2016) 29 Harvard Journal of Law & Technology 353, 388, and 391.

43 The opioid crisis cases in the United States show in an alarming way that insufficient, low-threshold regulation that allows a high-risk product to be prescribed and sold without reasonable limits cannot be outweighed ex post by a liability regime, even if injured parties claim compensation and sue the companies that caused the damage, cf. District Court of Cleveland County, State of Oklahoma, ex rel. Hunter v Purdue Pharma LP, Case No CJ-2017-816 (2019).

44 Another example is the conduct of oil drilling companies, as oil drilling technology can be seen as a high-risk technology. In the so-called 2010 Deepwater Horizon incident, British Petroleum (BP) caused an enormous marine oil spill. In 2014, the US District Court for the Eastern District of Louisiana ruled that BP was guilty of gross negligence and willful misconduct under the US Clean Water Act (CWA). The Court found the company to have acted ‘recklessly’ (cf. US District Court for the Eastern District of Louisiana, Oil Spill by the Oil Rig ‘Deepwater Horizon’ in the Gulf of Mexico on April 20, 2010, Findings of Fact and Conclusion of Law, Phase One Trial, Case 2:10-md-02179-CJB-SS (4 September 2014) 121–122). In another case, Royal Dutch Shell (RDS) was sued because its subsidiary in Nigeria had caused massive environmental destruction; the Court of Appeal in The Hague ordered in 2021 that RDS had to pay compensation to residents of the region and begin the purification of contaminated waters (cf. Gerechtshof Den Haag, de Vereniging Milieudefensie v Royal Dutch Shell PLC and Shell Petroleum Development Company of Nigeria LTD/Shell Petroleum Development Company of Nigeria LTD v Friday Alfred Akpan, 29 January 2021); see E Peltier and C Moses, ‘A Victory for Farmers in a David-and-Goliath Environmental Case’ The New York Times (29 January 2021) www.nytimes.com/2021/01/29/world/europe/shell-nigeria-oil-spills.html.

45 The opioid crisis cases in the United States have shown this as well. Cf. J Hoffmann, ‘Purdue Pharma Tentatively Settles Thousands of Opioid Cases’ New York Times (11 September 2019) www.nytimes.com/2019/09/11/health/purdue-pharma-opioids-settlement.html: ‘Purdue Pharma (…) would file for bankruptcy under a tentative settlement. Its signature opioid, OxyContin, would be sold by a new company, with the proceeds going to plaintiffs’. In September 2021, a federal bankruptcy judge gave conditional approval to a settlement that devotes potentially $10 billion to fighting the opioid crisis but will shield the company’s former owners, members of the Sackler family, from any future lawsuits over opioids, see J Hoffmann, ‘Purdue Pharma Is Dissolved and Sacklers Pay $4.5 Billion to Settle Opioid Claims’ New York Times (1 September 2021) www.nytimes.com/2021/09/01/health/purdue-sacklers-opioids-settlement.html. Several US states opposed the deal and planned to appeal against it, cf. ‘What is the bankruptcy “loophole” used in the Purdue Pharma settlement?’ The Economist (3 September 2021) www.economist.com/the-economist-explains/2021/09/03/what-is-the-bankruptcy-loophole-used-in-the-purdue-pharma-settlement. See also the Attorney General of Washington’s statement of 1 September 2021: “This order lets the Sacklers off the hook by granting them permanent immunity from lawsuits in exchange for a fraction of the profits they made from the opioid epidemic — and sends a message that billionaires operate by a different set of rules than everybody else”.

46 For this section see Voeneky, ‘Key Elements of Responsible Artificial Intelligence’ (Footnote n 9) 9 et seq.

47 This does not include a discussion of AI and data protection regulations. However, the European General Data Protection Regulation (GDPR) aims to protect personal data of natural persons (Article 1(1) GDPR) and applies to the processing of this data even by wholly automated means (Article 2(1) GDPR). See Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC, in force since 25 May 2018, OJ 2016 L119/1. The GDPR spells out as well a ‘right to explanation’ regarding automated decision processes; cf. T Wischmeyer, ‘Artificial Intelligence and Transparency: Opening the Black Box’ in T Wischmeyer and T Rademacher (eds), Regulating Artificial Intelligence (2019) 75 and 89: Article 13(2)(f) and 14(2)(g) as well as Article 22 GDPR contain an obligation to inform the consumer about the ‘logic involved’ as well as ‘the significance and the envisaged consequences of such processing for the data subject’ but not a comprehensive right to explanation.

48 Regulation (EU) 2017/745 of the European Parliament and of the Council of 05 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC, OJ 2017 L117/1. Besides, AI-based medical devices fall within the scope of high-risk AI systems according to Article 6(1) in conjunction with Annex II (11) Draft EU AIA that explicitly refers to Regulation 2017/745, if such AI systems are safety components of a product or themselves products and subject to third party conformity assessment, cf. this Section 3(b).

49 According to Article 2 MDR ‘medical device’ ‘(…) means any instrument, apparatus, appliance, software, implant, reagent, material or other article intended by the manufacturer to be used, alone or in combination, for human beings for one or more of the following specific medical purposes: (…)’. For exemptions see, however, Article 1(6) MDR.

50 The amended MDR came into force in May 2017, but medical devices are subject to a transition period of three years to meet the new requirements. This transition period was extended until 2021 due to the COVID-19 pandemic, cf. Regulation (EU) 2020/561 of the European Parliament and of the Council of 23 April 2020 amending Regulation (EU) 2017/745 on medical devices, as regards the dates of application of certain of its provisions.

51 Cf. Articles 54, 55, and 106(3), Annex IX Section 5.1, and Annex X Section 6 MDR.

52 Annex XVI: ‘(…) 6. Equipment intended for brain stimulation that apply electrical currents or magnetic or electromagnetic fields that penetrate the cranium to modify neuronal activity in the brain. (…)’.

53 §§ 21 et seq. Arzneimittelgesetz (AMG, German Medicinal Products Act), BGBl 2005 I 3394; Article 3(1) Regulation (EC) 726/2004 of the European Parliament and of the Council of 31 March 2004 laying down Community procedures for the authorization and supervision of medicinal products for human and veterinary use and establishing a European Medicines Agency, OJ 2004 L 136/1.

54 The AI Recommendation drafted by the OECD (cf. OECD Recommendation OECD/LEGAL/0449 of 22 May 2019 of the Council on Artificial Intelligence https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449) is also insufficient in this respect due to its non-binding soft law character; in more detail Voeneky, ‘Key Elements of Responsible Artificial Intelligence’, 17 et seq. and this Section at 3 a. However, at least some States such as Chile and France are attempting to regulate this area of AI: as part of the Chilean constitutional reform, the current Article 19 of the Carta Fundamental is to be supplemented by a second paragraph that protects mental and physical integrity against technical manipulation; cf. on the current status of the legislative process: Cámara de Diputadas y Diputados, Boletín No 13827-19; for an English translation of the planned amendment see www.camara.cl/verDoc.aspx?prmID=14151&prmTIPO=INICIATIVA, Anexo 1, p. 14. Furthermore, the implementation of specific ‘neurorights’ is planned, cf. project Boletín No 13828-19. The French bioethics law (Loi n° 2021-1017 du 2 août 2021 relative à la bioéthique), which came into force at the beginning of August 2021, allows the use of brain-imaging techniques only for medical and research purposes (Articles 18 and 19), cf. www.legifrance.gouv.fr/jorf/id/JORFTEXT000043884384/.

55 Straßenverkehrsgesetz (StVG), cf. Article 1 Achtes Gesetz zur Änderung des Straßenverkehrsgesetzes (8. StVGÄndG), BGBl 2017 I 1648.

56 §§ 1a, 1b and § 63 Road Traffic Act. For an overview of the most relevant international, European, and national rules governing autonomous or automated vehicles, cf. E Böning and H Canny, ‘Easing the Brakes on Autonomous Driving’ (FIP 1/2021) www.jura.uni-freiburg.de/de/institute/ioeffr2/downloads/online-papers/FIP_2021_01_BoeningCanny_AutonomousDriving_Druck.pdf (hereafter Böning and Canny, ‘Easing the Brakes’).

57 Germany, Federal Ministry of Transport and Digital Infrastructure, Ethics Commission, ‘Automated and Connected Driving’ (BMVI, June 2017), www.bmvi.de/SharedDocs/EN/publications/report-ethics-commission.html.

58 An act regulating fully autonomous cars has been in force since 2021 and has amended the Road Traffic Act, see especially the new §§ 1d–1g Road Traffic Act. For the draft, cf. German Bundestag, ‘Entwurf eines Gesetzes zur Änderung des Straßenverkehrsgesetzes und des Pflichtversicherungsgesetzes (Gesetz zum autonomen Fahren)’ (Draft Act for Autonomous Driving) (9 March 2021), Drucksache 19/27439 https://dip21.bundestag.de/dip21/btd/19/274/1927439.pdf.

59 § 1a (1) Road Traffic Act.

60 Böning and Canny, ‘Easing the Brakes’ (Footnote n 56).

61 This seems true even if the description of the intended purpose and the level of automation shall be ‘unambiguous’ according to the rationale of the law maker, cf. German Bundestag, ‘Entwurf eines Gesetzes zur Änderung des Straßenverkehrsgesetzes’ (Draft Act for Amending the Road Traffic Act) (2017), Drucksache 18/11300 20 https://dip21.bundestag.de/dip21/btd/18/113/1811300.pdf: ‘Die Systembeschreibung des Fahrzeugs muss über die Art der Ausstattung mit automatisierter Fahrfunktion und über den Grad der Automatisierung unmissverständlich Auskunft geben, um den Fahrer über den Rahmen der bestimmungsgemäßen Verwendung zu informieren.’

62 Grundgesetz für die Bundesrepublik Deutschland (GG), BGBl 1949 I 1, last rev 29 September 2020, BGBl 2020 I 2048.

63 B Grzeszick, ‘Article 20’ in T Maunz and G Dürig (eds), Grundgesetz-Kommentar (August 2020), para 99. This is not the case, however, with regard to level 4 and 5 autonomous cars, as the rules enshrined in the 2021 §§ 1d–1g Road Traffic Act are more detailed, even including some norms for a solution of the so-called trolley problem, cf. § 1e para 2 (no 2).

64 Cf. Section III.

65 Cartagena Protocol on Biosafety to the Convention on Biological Diversity (adopted 29 January 2000, entered into force 11 September 2003) 2226 UNTS 208.

66 Convention on the prohibition of the development, production, and stockpiling of bacteriological (biological) and toxin weapons and on their destruction (adopted 10 April 1972, entered into force 26 March 1975) 1015 UNTS 163.

67 OECD AI Recommendation (Footnote n 54).

68 An AI system is defined as ‘a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.’ Cf. OECD AI Recommendation (Footnote n 54).

69 OECD AI Recommendation (Footnote n 54).

70 AI actors here are ‘those who play an active role in the AI system lifecycle, including organisations and individuals that deploy or operate AI’, see OECD AI Recommendation (Footnote n 54).

71 See Data Ethics Commission, Opinion of the Data Ethics Commission (BMJV, 2019), 194 www.bmjv.de/SharedDocs/Downloads/DE/Themen/Fokusthemen/Gutachten_DEK_EN_lang.pdf?__blob=publicationFile&v=3.

72 ‘AI actors should, based on their roles, the context, and their ability to act, apply a systematic risk management approach to each phase of the AI system lifecycle on a continuous basis to address risks related to AI systems, including privacy, digital security, safety and bias.’ Cf. IV. 1.4. c) OECD AI Recommendation (Footnote n 54).

74 See Section II.

75 Providers are not limited to private actors but every natural or legal person, including public authorities, agencies, and other bodies, cf. Article 3(2).

76 See Section II.

77 The European Commission is entitled in Article 7 to add new high-risk systems to Annex III if those systems pose a risk to fundamental rights and safety that is comparable to those systems that are already contained in Annex III. However, this flexibility means that there is only a very loose thread of democratic legitimacy for the future amendments of Annex III. It is beyond the scope of this chapter to discuss this in more detail, but it is unclear whether this disadvantage is sufficiently justified because of the benefit to achieve more flexibility with regard to the regulation of AI systems as a fast-moving technology.

78 For this differentiation, cf. Section III. For more details cf. C Wendehorst, Chapter 12, in this volume.

79 For enforcement details cf. Articles 63 et seq.; for penalties cf. Article 71.

80 For details cf. T Burri, Chapter 7, in this volume.

81 Critical on this as well C Wendehorst, Chapter 12, in this volume.

82 This is true, for example, of Bill Gates, Sundar Pichai, and Elon Musk, who have called for the regulation of AI. See S Pichai, ‘Why Google Thinks We Need to Regulate AI’ Financial Times (20 January 2020) www.ft.com/content/3467659a-386d-11ea-ac3c-f68c10993b04; E Mack, ‘Bill Gates Says You Should Worry About Artificial Intelligence’ (Forbes, 28 January 2015) www.forbes.com/sites/ericmack/2015/01/28/bill-gates-also-worries-artificial-intelligence-is-a-threat/; S Gibbs, ‘Elon Musk: Regulate AI to Combat ‘Existential Threat’ before It’s Too Late’ The Guardian (17 July 2017) www.theguardian.com/technology/2017/jul/17/elon-musk-regulation-ai-combat-existential-threat-tesla-spacex-ceo: Musk stated in July 2017, at a meeting of the US National Governors Association, that ‘AI is a fundamental risk to the existence of human civilization.’

83 Cf. M Mackenzie and A van Duyn, ‘“Flash Crash” was Sparked by Single Order’ Financial Times (1 October 2010) www.ft.com/content/8ee1a816-cd81-11df-9c82-00144feab49a. Cf. J Tallinn and R Ngo, Chapter 2, in this volume; M Paul, Chapter 21, in this volume.

84 Cf. House of Lords Select Committee on Artificial Intelligence, AI in the UK: Ready, Willing and Able? (Report of Session 2017–2019, 2018) HL Paper 100, 126 et seq.; MU Scherer ‘Regulating Artificial Intelligence: Risks, Challenges, Competencies, and Strategies’ (2016) 29(2) Harvard Journal of Law & Technology 353, 355; Perri 6, ‘Ethics, Regulation and the New Artificial Intelligence, Part I: Accountability and Power’ (2010) 4 INFO, COMM & SOC’Y 199, 203.

85 As, for instance, the government of Austria, cf. Group of Governmental Experts of the High Contracting Parties to the Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects, ‘Proposal for a Mandate to Negotiate a Legally-Binding Instrument that Addresses the Legal, Humanitarian and Ethical Concerns Posed by Emerging Technologies in the Area of Lethal Autonomous Weapons Systems (LAWS)’ (Working Paper Submitted to the Convention on Conventional Weapons Group of Governmental Experts on Lethal Autonomous Weapons Systems by Austria, Brazil, and Chile, 8 August 2018) CCW/GGE.2/2018/WP.7 https://undocs.org/CCW/GGE.2/2018/WP.7; and cf. the decision of the Österreichischen Nationalrat, Decision to Ban Killer Robots, 24 February 2021, www.parlament.gv.at/PAKT/VHG/XXVII/E/E_00136/index.shtml#.

86 For the different State positions, see Group of Governmental Experts on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, ‘Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW)’ (Report of the 2019 session of the GGE on Emerging Technologies in the Area of Lethal Autonomous Weapons Systems, 25 September 2019) CCW/GGE.1/2019/3 https://undocs.org/en/CCW/GGE.1/2019/3. On the discussion of these views cf. Voeneky, ‘Key Elements of Responsible Artificial Intelligence’ (Footnote n 9) 15–16. Cf. as well the resolution of the European Parliament, EP Resolution of 20 October 2020 with recommendations to the Commission on a framework for ethical aspects of artificial intelligence, robotics, and related technologies (2020/2012(INL)) www.europarl.europa.eu/doceo/document/TA-9-2020-10-20_DE.html#sdocta8.

87 See Geneva Conventions (adopted 12 August 1949, entered into force 21 October 1950) 75 UNTS 31, 85, 135, 287; Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of International Armed Conflicts (Protocol I) (adopted 8 June 1977, entered into force 7 December 1978) 1125 UNTS 3; Protocol Additional to the Geneva Conventions of 12 August 1949, and relating to the Protection of Victims of Non-International Armed Conflicts (Protocol II) (adopted 8 June 1977, entered into force 7 December 1978) 1125 UNTS 609.

88 Rome Statute of the International Criminal Court (adopted 17 July 1998, entered into force 1 July 2002) 2187 UNTS 3.

89 ILC, ‘Materials on the Responsibility of States for Internationally Wrongful Acts’ (United Nations, 2012) ST/LEG/SER.B/25.

90 For a proposal by the EU Commission, cf. Section II.

91 In contrast, the Draft EU AIA obliges ‘providers’ and ‘users’, see Section IV 3 b).

92 See, for example, T Menzel, G Bagschik, and M Maurer, ‘Scenarios for Development, Test and Validation of Automated Vehicles’ (2018) IEEE Intelligent Vehicles Symposium (IV).

93 For the notion of adaptive governance cf. Tate and Banda, ‘Proportionate and Adaptive Governance’ (Footnote n 27) 4 et seq., 20.

94 BVerfGE 143, 246–396 (BVerfG 1 BvR 2821/11) para 308. One of the questions in the proceedings was whether the lawmaker in Germany can justify the nuclear phase-out that was enacted after the reactor accident in Fukushima, Japan, took place. This was disputed as an ‘irrational’ change of German laws as the reactor accident in Fukushima did not, in itself, change the risk factors linked to nuclear reactors located in Germany.

95 §§ 40(1), 42(1) AMG (Footnote n 53). For details cf. S Voeneky, Recht, Moral und Ethik (2010) 584–635, esp. at 594–606 (hereafter S Voeneky, Recht, Moral und Ethik).

96 See the Central Committee on Biological Safety (ZKBS), an expert commission responsible for evaluating the risks concerning the development and use of genetically modified organisms (GMOs) www.zkbs-online.de/ZKBS/EN/Home/home_node.html. The commission is based on the German Genetic Engineering Act (Gentechnikgesetz (GenTG)), BGBl 1993 I 2066 (§ 4 GenTG), and the decree Verordnung über die Zentrale Kommission für die Biologische Sicherheit (ZKBS-Verordnung, ZKBSV) 30 October 1990 www.gesetze-im-internet.de/zkbsv/index.html.

97 S Voeneky, Recht, Moral und Ethik (Footnote n 95).

98 The so-called Wesentlichkeitsprinzip, which can be deduced from the German Basic Law, is dependent on the constitutional framing and is not a necessary element of every liberal human rights-based democracy. In the United States, for instance, it is constitutional that the US president issues Executive Orders that are highly relevant for the exercise of constitutional rights of individuals, without the need to have a specific regulation based on an act of parliament. For the ‘Wesentlichkeitsprinzip’ according to the German Basic Law cf. S Voeneky, Recht, Moral und Ethik (2010) 214–218 with further references; B Grzeszick, ‘Art. 20’ in T Maunz and G Dürig (eds), Grundgesetz-Kommentar (August 2020) para 105.

99 This problem exists with regard to the duty to obtain insurance for an operator that risks causing environmental emergencies in Antarctica, as laid down in the Liability Annex to the Antarctic Treaty (Annex VI to the Protocol on Environmental Protection to the Antarctic Treaty: Liability Arising from Environmental Emergencies (adopted on 14 June 2005, not yet entered into force)); cf. IGP&I Clubs, Annex VI to the Protocol on Environmental Protection to the Antarctic Treaty: Financial Security (2019), https://documents.ats.aq/ATCM42/ip/ATCM42_ip101_e.doc.

100 Pursuant to § 36 GenTG (Footnote n 96) the German Federal Government should implement the duty to get insurance with the approval of the Federal Council (Bundesrat) by means of a decree. Such a secondary legislation, however, has never been adopted, cf. Deutscher Ethikrat, Biosicherheit – Freiheit und Verantwortung in der Wissenschaft: Stellungnahme (2014) 264 www.ethikrat.org/fileadmin/Publikationen/Stellungnahmen/deutsch/stellungnahme-biosicherheit.pdf.

101 Cf. the so-called Liability Annex, an international treaty, not yet in force, that regulates the compensation of damages linked to environmental emergencies caused by an operator in the area of the Antarctic Treaty System, see Footnote n 99.

102 For example, Tesla, a car manufacturer trying to develop (semi-)autonomous cars, has only generated a profit since 2020, cf. ‘Tesla Has First Profitable Year but Competition Is Growing’ (The New York Times, 27 January 2021) www.nytimes.com/2021/01/27/business/tesla-earnings.html.

103 For instance, during the COVID-19 pandemic certain vaccine-developing companies in Germany have been supported by the federal government and the EU; for example, the Kreditanstalt für Wiederaufbau (KfW) has acquired a ‘minority interest in CureVac AG on behalf of the Federal Government’, cf. KfW, 6 August 2020. Also, the high-risk technology of nuclear power plants has been supported financially by different means in Germany since its establishment; inter alia, the liability of the operating company in the case of a maximum credible accident is capped under German law, and the German State is liable for compensation for the damages exceeding this cap, cf. §§ 25 et seq., 31, 34, and 38 German Atomic Energy Act (Atomgesetz (AtG)), BGBl 1985 I 1565.

104 Cf. above the proposals of the EU Parliament, Footnote n 35.

105 In the area of biotechnology cf., for instance, the German Central Committee on Biological Safety (ZKBS), Footnote n 96.

106 Cf. VV Acharya and others, ‘Measuring Systemic Risk’ (2017) 30(1) The Review of Financial Studies 247 (hereafter Acharya and others, ‘Measuring Systemic Risk’).

107 For this initial claim it is not necessary that utility is measured on a monetary scale. Later, when it comes to determining regulatory capital, we will, however, rely on measuring utility in terms of wealth.

108 This means that future profits and losses are weighted with a utility function and then averaged by expectation. See for example DM Kreps, A Course in Microeconomic Theory (1990) or A Mas-Colell, MD Whinston, and JR Green, Microeconomic Theory (1995) volume 1.
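Stated compactly (in our own notation, not that of the cited sources): for a random future profit and loss X taking the values x_i with probabilities p_i, and a utility function u, the expected-utility evaluation is

\[ \mathbb{E}[u(X)] = \sum_{i} p_i \, u(x_i), \]

with the corresponding integral for continuous outcomes; each possible outcome is first weighted by the utility function and then averaged with its probability, exactly as described in the preceding footnote.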

109 See E Eberlein and DB Madan, ‘Unbounded Liabilities, Capital Reserve Requirements and the Taxpayer Put Option’ (2012) 12(5) Quantitative Finance 709–724 and references therein.

110 A utility function associates a number (the utility) with each alternative. The higher the number, the more strongly the alternative is preferred. For example, 1 EUR has a different value to an individual who is a millionaire in comparison to a person who is poor. The utility function is able to capture such (and other) effects. See H Föllmer and A Schied, Stochastic Finance: An Introduction in Discrete Time (2011) Chapter 2 for further references.
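A standard textbook illustration of the effect mentioned in the preceding footnote (our example, not taken from the cited literature): with the concave logarithmic utility u(w) = ln(w), one additional EUR raises the utility of a person holding 10 EUR by ln(11) − ln(10) ≈ 0.095, but raises the utility of a person holding 1,000,000 EUR by only ln(1,000,001) − ln(1,000,000) ≈ 0.000001; concave utility functions thus capture the different value of the same euro for a poor person and a millionaire.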

111 Acharya and others, ‘Measuring Systemic Risk’ (Footnote n 106).

112 Acharya and others, ibid.

113 M Pitera and T Schmidt, ‘Unbiased Estimation of Risk’ (2018) 91 Journal of Banking & Finance 133–145.

114 See, for example L De Haan and A Ferreira, Extreme Value Theory: An Introduction (2007).

115 See, for example, AH Jazwinski, Stochastic Processes and Filtering Theory (1970); R Frey and T Schmidt, ‘Filtering and Incomplete Information’ in T Bielecki and D Brigo (eds), Credit Risk Frontiers (2011); T Fadina, A Neufeld, and T Schmidt, ‘Affine Processes under Parameter Uncertainty’ (2019) 4(1) Probability, Uncertainty and Quantitative Risk 135.

116 Credibility theory refers to a Bayesian approach to weight the history of expert opinions, see the recent survey by R Norberg (2015) ‘Credibility Theory’ in N Balakrishnan and others (eds) Wiley StatsRef: Statistics Reference Online or the highly influential work by H Bühlmann, ‘Experience Rating and Credibility Theory’ (1967) 4(3) ASTIN Bulletin 199.
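A minimal sketch of the classical Bühlmann form of such a credibility weighting (in our own notation, not that of the works cited in the preceding footnote): an individual risk with n observations and empirical mean \bar{X} is assessed by

\[ \hat{\mu} = Z\,\bar{X} + (1 - Z)\,\mu_0, \qquad Z = \frac{n}{n + k}, \]

where μ_0 is the collective (prior or expert-based) estimate and k is the ratio of the expected within-risk variance to the variance of the risk means across the collective; the credibility factor Z shifts weight from the prior toward the observed data as n grows, which is the Bayesian weighting of expert opinion and experience referred to here.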

9 China’s Normative Systems for Responsible AI: From Soft Law to Hard Law

1 State Council, The Notice of the State Council on Issuing the Development Plan on the New Generation of Artificial Intelligence (The State Council of the People’s Republic of China, 8 July 2017) www.gov.cn/zhengce/content/2017-07/20/content_5211996.htm.

2 Ministry of Science and Technology, ‘Notice of the Project Proposal of Application for the National Key Research and Development Plan’ (2001).

3 At the ninth collective study session of the Political Bureau of the CPC Central Committee, Xi Jinping stressed the importance of strengthening leadership, careful planning, clear tasks, and a solid foundation in order to promote the healthy development of a new generation of AI in China; Xinhua News Agency, ‘Xi Jinping Presided Over the Ninth Collective Study of the Political Bureau of the CPC Central Committee and Gave a Speech’ (The State Council, The People’s Republic of China, 31 October 2018) www.gov.cn/xinwen/2018-10/31/content_5336251.htm (hereafter Xi Jinping, ‘Ninth Study CPC Central Committee’).

4 In November 2013, the Third Plenary Session of the 18th CPC Central Committee took “promoting the modernization of national governance system and governance capacity” as the overall goal of comprehensively deepening reform, China.org.cn, ‘Communiqué of the Third Plenary Session of the 18th Central Committee of the Communist Party of China’ (China.org.cn, 15 January 2014) www.china.org.cn/china/third_plenary_session/2014-01/15/content_31203056.htm. On 31 October 2019, the Fourth Plenary Session of the 19th Central Committee of the Communist Party of China adopted the “decision of the Central Committee of the Communist Party of China on several major issues on adhering to and improving the socialist system with Chinese characteristics and promoting the modernization of national governance system and governance capacity”, which further put forward the requirements of national governance reform, Online Party School, ‘Communiqué of the Fourth Plenary Session of the 19th Central Committee of the Communist Party of China’ (Liaoning Urban and Rural Construction Planning Design Institute Co. LTD, 5 December 2019) http://lnupd.com/english/article/shows/377.

6 The Committee on Professional Governance of the New Generation of Artificial Intelligence, ‘Governance Principles of the New Generation of Artificial Intelligence – Developing Responsible AI’ (Catch the Wind, 17 June 2019) www.ucozon.com/news/59733737.html.

7 Article 10 of the Standardization Law of the People’s Republic of China (adopted 1988, effective 1989) stipulates that mandatory national standards shall be developed to address technical requirements for ensuring people’s health and the security of their lives and property, safeguarding national and eco-environmental security, and meeting the basic needs of economic and social management.

8 Standardization Administration of China, Cyberspace Administration of China, and other relevant departments, ‘Guide to the Construction of National New Generation AI Standard System’ (2020) 24–25.

9 National Information Security Standardization Technical Committee, ‘Guideline for Cyber Security Standards: Practice-Guideline for Ethics of Artificial Intelligence (Draft)’ (2020).

10 China Institute of Electronic Technology Standardization, ‘White Paper on Standardization of Artificial Intelligence (version 2021)’ (July 2021).

11 Chinese National People’s Congress, ‘The 2020 legislative work plan of the Standing Committee of the National People’s Congress (NPC)’ (The National People’s Congress of the People’s Republic of China, 20 June 2020) www.npc.gov.cn/npc/c30834/202006/b46fd4cbdbbb4b8faa9487da9e76e5f6.shtml.

12 Xi Jinping, ‘Ninth Study CPC Central Committee’ (Footnote n 3).

13 Li Zhanshu chaired the special study meeting of the members of the Standing Committee Chairman’s Meeting of the National People’s Congress and delivered a speech: Xinhua, ‘The Members of the NPC Standing Committee Chairman’s Meeting Conducted Special Studies and Li Zhanshu Chaired and Delivered a Speech’ (The National People’s Congress of the People’s Republic of China, 24 November 2018) www.npc.gov.cn/npc/c238/201811/e3883fb5618e4a2bbefa5d170fe7b02a.shtml.

14 Zhan Haifeng, Committee Members Discuss about the Development of Artificial Intelligence: Building the Future Legal System of AI (6th ed. 2019).

15 Article 18 E-Commerce Law (promulgated 31 August 2018, effective 1 January 2019): when providing a consumer with search results for commodities or services based on his or her hobbies, consumption habits, or any other traits, the e-commerce business shall also provide the consumer with options not targeting his/her identifiable traits, and shall respect and equally protect the lawful rights and interests of consumers.

16 Supreme People’s Court, Law Interpretation [2021] No. 15, Provisions on Several Issues Concerning the Application of Law in Hearing Civil Cases Related to the Use of Facial Recognition Technology for Handling Personal Information (Judgement of 8 June 2021, in force on 1 August 2021) (hereafter Supreme People’s Court, Provisions on Facial Recognition).

17 Weixing Shen and Yun Liu, ‘New Paradigm of Legal Research: Connotation, Category and Method of Computational Law’ (2020) 5 Chinese Journal of Law 323.

18 Notice of the State Council on Printing and Distributing the Action Platform for Promoting the Development of Big Data, Document No GF [2015] No 50, issued by the State Council on 31 August 2015.

19 Article 1032 and Article 1034 of the Civil Code of the People’s Republic of China (Adopted at the Third Session of the Thirteenth National People’s Congress on 28 May 2020), Order No 45 of the President of the People’s Republic of China (hereafter Civil Code of the People’s Republic of China).

20 Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation), OJ 2016 L 119/1.

21 Article 18 of the E-Commerce Law.

22 Article 73 of the Personal Information Protection Law (effective 1 November 2021).

23 Article 2 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

24 Article 4 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

25 Article 6 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

26 Article 14 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

27 Article 66 of the Regulations of Shenzhen Special Economic Zone on the Promotion of Artificial Intelligence Industry (Draft for Soliciting Public Comment) (14 July 2021).

28 The Ministry of Industry and Information Technology, the Ministry of Public Security and the Ministry of Transport, ‘Specifications for Road Test Management of Intelligent Networked Vehicles (for Trial Implementation)’ (3 April 2018) and The Ministry of Industry and Information Technology, the Ministry of Public Security and the Ministry of Transport, ‘Regulations on the Management of Intelligent Networked Vehicles in Shenzhen Special Economic Zone (Draft for Soliciting Public Comment)’ (23 March 2021).

29 Article 23 of Guiding Opinions on Regulating Asset Management Business of Financial Institutions (27 April 2018, revised 31 July 2020) No 16 [2018] People’s Bank of China (hereafter Guiding Opinions on Regulating Asset Management).

30 Article 20 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

31 Article 26 of the Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

32 Article 32 of the E-Commerce Law.

33 Article 15 of the Interim Provisions on the Management of Online Tourism Operation Services, Order No 4 of the Ministry of Culture and Tourism of the People’s Republic of China (2020).

34 Article 5 of the Anti-Monopoly Guidelines on Platform Economy.

35 Article 7 of the Anti-Monopoly Guidelines on Platform Economy.

36 Article 17 of the Anti-Monopoly Guidelines on Platform Economy.

37 Article 13 of the Notice from the State Administration for Market Regulation of the Provisions on Prohibited Acts of Unfair Competition Online (Draft for Soliciting Public Comment).

38 Article 21 of the Notice from the State Administration for Market Regulation of the Provisions on Prohibited Acts of Unfair Competition Online (Draft for Soliciting Public Comment).

39 Lai Youxuan, ‘Deliveries, Stuck in the System’ (People, September 2020) https://epaper.gmw.cn/wzb/html/2020-09/12/nw.D110000wzb_20200912_1-01.htm.

40 Guidance on the Implementation of The Responsibility of Online Catering Platforms to Effectively Safeguard the Rights and Interests of Food Delivery Workers, issued by SAMR on 16 July 2021.

41 Article 17 of Regulation on Internet Information Service Based on Algorithm Recommendation Technology (Draft for Soliciting Public Comment).

42 Article 10 of Supreme People’s Court, Provisions on Facial Recognition (Footnote n 16).

43 Article 9 of the Intelligent Networked Vehicle Road Test Management Specifications (for Trial Implementation) notice.

44 Guiding Opinions on Regulating Asset Management (Footnote n 29).

45 Article 1019 and Article 1023 of the Civil Code of the People’s Republic of China (Footnote n 19).

10 Towards a Global Artificial Intelligence Charter

* This is an updated and considerably expanded version of a chapter that goes back to a lecture I gave on 19 October 2017 at the European Parliament in Brussels (Belgium). Cf. T Metzinger, ‘Towards a Global Artificial Intelligence Charter’ in European Parliament (ed), Should We Fear Artificial Intelligence? (2018) PE 614.547 www.philosophie.fb05.uni-mainz.de/files/2018/03/Metzinger_2018_Global_Artificial_Intelligence_Charter_PE_614.547.pdf.

1 For an overview of existing initiatives, I recommend T Hagendorff, ‘The Ethics of AI Ethics: An Evaluation of Guidelines’ (2020) 30 Minds & Machines 99 https://doi.org/10.1007/s11023-020-09517-8; and the AI Ethics Guidelines Global Inventory created by Algorithm Watch, at https://inventory.algorithmwatch.org/. Other helpful overviews are S Baum, ‘A Survey of Artificial General Intelligence Projects for Ethics, Risk, and Policy’ (2017) Global Catastrophic Risk Institute Working Paper 17-1 https://ssrn.com/abstract=3070741; P Boddington, Towards a Code of Ethics for Artificial Intelligence (2017) 3. I have refrained from providing full documentation here, but useful entry points into the literature are A Mannino and others, ‘Artificial Intelligence. Opportunities and Risks’ (2015) 2 Policy Papers of the Effective Altruism Foundation https://ea-foundation.org/files/ai-opportunities-and-risks.pdf (hereafter Mannino et al., ‘Opportunities and Risks’); P Stone and others, ‘Artificial Intelligence and Life in 2030’ (2016) One Hundred Year Study on Artificial Intelligence: Report of the 2015–2016 Study Panel https://ai100.stanford.edu/2016-report; The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, ‘Ethically Aligned Design. A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems’ (IEEE, 2017) http://standards.ieee.org/develop/indconn/ec/auto_sys_form.html; N Bostrom, A Dafoe, and C Flynn, ‘Policy Desiderata in the Development of Machine Superintelligence’ (2017) Oxford University Working Paper www.nickbostrom.com/papers/aipolicy.pdf; M Madary and T Metzinger, ‘Real Virtuality. A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology’ (2016) 3 Frontiers in Robotics and AI 3 http://journal.frontiersin.org/article/10.3389/frobt.2016.00003/full.

2 T Metzinger, ‘Ethics Washing Made in Europe’ Tagesspiegel (8 April 2019) www.tagesspiegel.de/politik/eu-guidelines-ethics-washing-made-in-europe/24195496.html.

3 See T Metzinger, ‘Two Principles for Robot Ethics’ in E Hilgendorf and JP Günther (eds), Robotik und Gesetzgebung (2013); T Metzinger, ‘Suffering’ in K Almqvist and A Haag (eds), The Return of Consciousness (2017).

4 See T Metzinger, ‘Artificial Suffering: An Argument for a Global Moratorium on Synthetic Phenomenology’ (2021) 8(1) Journal of Artificial Intelligence and Consciousness 43–66 https://www.worldscientific.com/doi/pdf/10.1142/S270507852150003X.

5 This includes approaches that aim at a confluence of neuroscience and AI with the specific aim of fostering the development of machine consciousness. For recent examples see S Dehaene, H Lau, and S Kouider, ‘What Is Consciousness, and Could Machines Have It?’ (2017) 358(6362) Science 486; MSA Graziano, ‘The Attention Schema Theory. A Foundation for Engineering Artificial Consciousness’ (2017) 4 Frontiers in Robotics and AI; R Kanai, ‘We Need Conscious Robots. How Introspection and Imagination Make Robots Better’ (Nautilus, 27 April 2017) http://nautil.us/issue/47/consciousness/we-need-conscious-robots.

6 See European Parliamentary Research Service ‘The Ethics of Artificial Intelligence: Issues and Initiatives’ (European Parliamentary Research Service, 2020) 6–11.

7 A Smith and J Anderson, ‘AI, Robotics, and the Future of Jobs’ (Pew Research Center, 2014) www.pewresearch.org/internet/wp-content/uploads/sites/9/2014/08/Future-of-AI-Robotics-and-Jobs.pdf.

8 For a first set of references, see www.humanetech.com/brain-science.

9 See FS Collins and AS Fauci, ‘NIH Statement on H5N1’ (The NIH Director, 2012) www.nih.gov/about-nih/who-we-are/nih-director/statements/nih-statement-h5n1; and RAM Fouchier and others, ‘Pause on Avian Flu Transmission Studies’ (2012) Nature 443.

10 M Madary and T Metzinger, ‘Real Virtuality: A Code of Ethical Conduct. Recommendations for Good Scientific Practice and the Consumers of VR-Technology’ (2016) 3(3) Frontiers in Robotics and AI 1, 12.

11 GE Marchant, ‘The Growing Gap between Emerging Technologies and the Law’ in GE Marchant, BR Allenby, and JR Herkert (eds), The Growing Gap between Emerging Technologies and Legal-Ethical Oversight (2011), 19, puts the general point very clearly in the abstract of a recent book chapter: ‘Emerging technologies are developing at an ever-accelerating pace, whereas legal mechanisms for potential oversight are, if anything, slowing down. Legislation is often gridlocked, regulation is frequently ossified, and judicial proceedings are sometimes described as proceeding at a glacial pace. There are two consequences of this mismatch between the speeds of technology and law. First, some problems are overseen by regulatory frameworks that are increasingly obsolete and outdated. Second, other problems lack any meaningful oversight altogether. To address this growing gap between law and regulation, new legal tools, approaches, and mechanisms will be needed. Business as usual will not suffice’.

12 See W Wallach, A Dangerous Master. How to Keep Technology from Slipping Beyond Our Control (2015), 250.

13 This quote is taken from an unpublished, preliminary draft entitled ‘An Agile Ethical/Legal Model for the International and National Governance of AI and Robotics’; see also GE Marchant and W Wallach, ‘Coordinating Technology Governance’ (2015) 31 Issues in Science and Technology (hereafter Marchant and Wallach, ‘Technology Governance’).

14 Marchant and Wallach, ‘Technology Governance’ (Footnote n 13), 47.

15 For one recent report, see M Bank and others, ‘Die Lobbymacht von Big Tech: Wie Google & Co die EU beeinflussen’ (Corporate Europe Observatory und LobbyControl e.V., 2021) www.lobbycontrol.de/wp-content/uploads/Studie_de_Lobbymacht-Big-Tech_31.8.21.pdf.

16 Cf. Mannino and others, ‘Opportunities and Risks’ (Footnote n 1).

11 Intellectual Debt: With Great Power Comes Great Ignorance

* This chapter is based on an essay found at https://perma.cc/CN55-XLCW?type=image. A derivative version of it was published in The New Yorker, ‘The Hidden Costs of Automated Thinking’ (23 July 2019) www.newyorker.com/tech/annals-of-technology/the-hidden-costs-of-automated-thinking.

1 RxList, ‘Provigil’ (RxList, 16 June 2020) www.rxlist.com/provigil-drug.htm.

2 ‘How Aspirin Works’ (1995) 15(1) The University of Chicago Chronicle http://chronicle.uchicago.edu/950817/aspirin.shtml.

3 M Hickman, ‘NatWest and RBS Customers May Receive Compensation as ‘Computer Glitch’ Drags into Sixth Day’ Independent (26 June 2012) www.telegraph.co.uk/finance/personalfinance/bank-accounts/9352573/NatWest-customers-still-unable-to-see-bank-balances-on-sixth-day-of-glitch.html.

4 ‘RBS Computer Problems Kept Man in Prison’ (BBC News, 26 June 2012) www.bbc.com/news/uk-18589280.

5 L Bachelor, ‘NatWest Problems Stop Non-Customers Moving into New Home’ The Guardian (22 June 2012) www.theguardian.com/money/2012/jun/22/natwest-problems-stop-non-customers-home?newsfeed=true.

6 J Hall, ‘NatWest Computer Glitch: Payment to Keep Cancer Girl on Life Support Blocked’ The Telegraph (25 June 2012) www.telegraph.co.uk/finance/personalfinance/bank-accounts/9352532/NatWest-computer-glitch-payment-to-keep-cancer-girl-on-life-support-blocked.html.

7 A Irrera, ‘Banks Scramble to Fix Old Systems as IT ‘Cowboys’ Ride into Sunset’ Reuters (11 April 2017) www.reuters.com/article/us-usa-banks-cobol/banks-scramble-to-fix-old-systems-as-it-cowboys-ride-into-sunset-idUSKBN17C0D8.

9 N Rivero ‘A String of Missteps May Have Made the Boeing 737 Max Crash-Prone’ (Quartz, 18 March 2019) https://qz.com/1575509/what-went-wrong-with-the-boeing-737-max-8/.

10 TJW Dawes and others, ‘Machine Learning of Three-dimensional Right Ventricular Motion Enables Outcome Prediction in Pulmonary Hypertension: A Cardiac MR Imaging Study’ (2017) 283(2) Radiology https://pubmed.ncbi.nlm.nih.gov/28092203/.

11 D Sculley and others, ‘Hidden Technical Debt in Machine Learning Systems’ (2015) 2 NIPS’15: Proceedings of the 28th International Conference on Neural Information Processing Systems https://proceedings.neurips.cc/paper/2015/file/86df7dcfd896fcaf2674f757a2463eba-Paper.pdf.

12 C Rudin, ‘New Algorithms for Interpretable Machine Learning’ (2014) KDD’14: Proceedings of the 20th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining www.bu.edu/hic/2018/12/04/air-rudin/.

13 AC Clarke, ‘Hazards of Prophecy: The Failure of Imagination’ in AC Clarke, Profiles of the Future: An Enquiry into the Limits of the Possible (1962).

14 L Von Ahn and L Dabbish, ‘Labeling Images with a Computer Game’ (2004) CHI’04 Proceedings of the 2004 Conference on Human Factors in Computing Systems 319.

15 See J Zittrain, ‘Ubiquitous Human Computing’ (2008) Oxford Legal Studies Research Paper No. 32 https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1140445.

16 C Szegedy and others ‘Rethinking the Inception Architecture for Computer Vision’ (2016) 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR) 2818.

17 A Ilyas and others, ‘Black-box Adversarial Attacks with Limited Queries and Information’ (labsix, 23 April 2018) www.labsix.org/limited-information-adversarial-examples/.

18 A Athalye and others, ‘Fooling Neural Networks in the Physical World with 3D Adversarial Objects’ (labsix, 31 October 2017) www.labsix.org/physical-objects-that-fool-neural-nets/.

19 SG Finlayson and others, ‘Adversarial Attacks against Medical Deep Learning Systems’ (2019) arXiv:1804.05296v3 https://arxiv.org/pdf/1804.05296.pdf.

20 SG Finlayson and others, ‘Adversarial Attacks on Medical Machine Learning’ (2019) 363 Science 1287.

21 M Eisen, ‘Amazon’s $23,698,655.93 Book about Flies’ it is NOT junk (22 April 2011) www.michaeleisen.org/blog/?p=358.

22 T Heath, ‘The Warning from JPMorgan about Flash Crashes Ahead’ The Washington Post (5 September 2018) www.washingtonpost.com/business/economy/the-warning-from-jpmorgan-about-flash-crashes-ahead/2018/09/05/25b1f90a-b148-11e8-a20b-5f4f84429666_story.html.

23 K Birchard and J Lewington ‘Dispute Over the Future of Basic Research in Canada’ The New York Times (16 February 2014) www.nytimes.com/2014/02/17/world/americas/dispute-over-the-future-of-basic-research-in-canada.html.

24 T Caulfield ‘Should Scientists Have to Always Show the Commercial Benefits of Their Research?’ (Policy Options, 1 December 2012) https://policyoptions.irpp.org/magazines/talking-science/caulfield/.

25 M AlQuraishi, ‘AlphaFold @ CASP13: “What Just Happened?”’ (Some Thoughts on a Mysterious Universe, 9 December 2018) https://moalquraishi.wordpress.com/2018/12/09/alphafold-casp13-what-just-happened/#comment-26005.

26 S Samuel, ‘How One Scientist Coped When AI Beat Him at His Life’s Work’ (Vox, 15 February 2019) www.vox.com/future-perfect/2019/2/15/18226493/deepmind-alphafold-artificial-intelligence-protein-folding.

27 ‘Episode 2: Human-AI Collaborated Pizza’ How to Generate (Almost) Anything (30 August 2018) https://howtogeneratealmostanything.com/food/2018/08/30/episode2.html.

28 T Vigen, Spurious Correlations (2015).

29 RL Wasserstein, AL Schirm, and NA Lazar, ‘Moving to a World Beyond “p < 0.05”’ (2019) 73(S1) The American Statistician www.tandfonline.com/doi/pdf/10.1080/00031305.2019.1583913?needAccess=true.

30 G Marcus ‘Moral Machines’ The New Yorker (24 November 2012) www.newyorker.com/news/news-desk/moral-machines.

31 D Weinberger, ‘Optimization over Explanation’ (Berkman Klein Center, 28 January 2018) https://medium.com/berkman-klein-center/optimization-over-explanation-41ecb135763d.

32 N Lipsman and AM Lozano, ‘Cosmetic Neurosurgery, Ethics, and Enhancement’ (2015) 2 The Lancet Psychiatry 585.

Table 9.1. Individuals’ Rights in Personal Information Processing Activities

Table 9.2. Obligations of Data Processors

Table 9.3. Risk Management System of Data Governance

Figure 9.1. A development process of AI application
