
ARTIFICIAL INTELLIGENCE AND THE LIMITS OF LEGAL PERSONALITY

Published online by Cambridge University Press:  21 September 2020

Simon Chesterman*
Affiliation:
National University of Singapore, [email protected].

Abstract

As artificial intelligence (AI) systems become more sophisticated and play a larger role in society, arguments that they should have some form of legal personality gain credence. The arguments are typically framed in instrumental terms, with comparisons to juridical persons such as corporations. Implicit in those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans they should be entitled to a status comparable to natural persons. This article contends that although most legal systems could create a novel category of legal persons, such arguments are insufficient to show that they should.

Type
Articles
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © The Author(s) 2020. Published by Cambridge University Press for the British Institute of International and Comparative Law

In 2021 the Bank of England will complete its transition from paper to polymer with the release of a new 50 pound note. A public selection process saw almost a quarter of a million nominations for the face of the new note; the final decision, announced in July 2019, was that Alan Turing would be featured. Turing was a hero for his codebreaking during the Second World War. He also helped establish the discipline of computer science, laying the foundations for what we now call artificial intelligence (AI).Footnote 1 Perhaps his best-known contribution, however, is the eponymous test for when true ‘intelligence’ has actually been achieved.

Turing modelled the test on a parlour game popular when he was writing in 1950. A man and a woman sit in a separate room and provide written answers to questions; the other participants have to guess who provided which answer. Turing posited that a similar ‘imitation game’ could be played with a computer. When a machine could fool people into believing that it was human, we might properly say that it was intelligent.Footnote 2

Early successes along these lines came in the 1960s with programs like Eliza. Users were told that Eliza was a psychotherapist who communicated through words typed into a computer. In fact, ‘she’ was an algorithm using a simple list-processing language. If the user typed in a recognised phrase, it would be reframed as a question. So after entering ‘I'm depressed,’ Eliza might reply ‘Why do you say that you are depressed?’ If it didn't recognise the phrase, the program would offer something generic, like ‘Can you elaborate on that?’ Even when they were told how it worked, some users insisted that Eliza had ‘understood’ them.Footnote 3
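The mechanics were rudimentary pattern matching and substitution. The following sketch is purely illustrative: it is written in Python rather than the list-processing language of the original, and its two rules are hypothetical stand-ins for the much larger script of decomposition and reassembly rules the actual program used.

```python
import re

# Illustrative Eliza-style rules: a recognised phrase is reframed as a question.
# Both patterns and templates are invented for this sketch.
RULES = [
    (re.compile(r"\bi'?m (.+)", re.IGNORECASE), "Why do you say that you are {}?"),
    (re.compile(r"\bi feel (.+)", re.IGNORECASE), "How long have you felt {}?"),
]
GENERIC_REPLY = "Can you elaborate on that?"


def eliza_reply(utterance: str) -> str:
    """Return a reframed question if a rule matches; otherwise a generic prompt."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return GENERIC_REPLY


print(eliza_reply("I'm depressed"))       # Why do you say that you are depressed?
print(eliza_reply("The weather is odd"))  # Can you elaborate on that?
```

At no point is anything ‘understood’: the program never represents what the recognised words mean, which is what made users’ reactions to it so striking.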

Parlour games aside, why should it matter if a computer is ‘intelligent’? For several decades, the Turing Test was associated more with the question of whether AI itself was possible than with the legal status of such an entity. Yet it is commonly invoked in discussions of legal personality for AI, from Lawrence Solum's seminal 1992 article onwards.Footnote 4 Though the Turing Test is no longer regarded as a serious technical measure of modern AI, its longevity as a trope points to a tension in debates over personality that is often overlooked.

As AI systems become more sophisticated and play a larger role in society, there are at least two discrete reasons why they might be recognised as persons before the law. The first is so that there is someone to blame when things go wrong. This is presented as the answer to potential accountability gaps created by their speed, autonomy, and opacity.Footnote 5 A second reason for recognising personality, however, is to ensure that there is someone to reward when things go right. A growing body of literature examines ownership of intellectual property created by AI systems, for example.

The tension in these discussions is whether personhood is granted for instrumental or inherent reasons. Arguments are typically framed in instrumental terms, with comparisons to the most common artificial legal person: the corporation. Yet implicit in many of those arguments, or explicit in their illustrations and examples, is the idea that as AI systems approach the point of indistinguishability from humans—that is, when they pass Turing's Test—they should be entitled to a status comparable to natural persons.

Until recently, such arguments were all speculative. Then in 2017 Saudi Arabia granted ‘citizenship’ to the humanoid robot SophiaFootnote 6 and an online system with the persona of a seven-year-old boy was granted ‘residency’ in Tokyo.Footnote 7 These were gimmicks—Sophia, for example, is essentially a chatbot with a face.Footnote 8 In the same year, however, the European Parliament adopted a resolution calling on the European Commission to consider creating ‘a specific legal status for robots in the long run, so that at least the most sophisticated autonomous robots could be established as having the status of electronic persons responsible for making good any damage they may cause, and possibly applying electronic personality to cases where robots make autonomous decisions or otherwise interact with third parties independently.’Footnote 9

This article begins with the most immediate challenge, which is whether some form of juridical personality would fill a responsibility gap or be otherwise advantageous to the legal system. Based on the history of corporations and other artificial legal persons, it does not seem in doubt that most legal systems could grant AI systems a form of personality; the more interesting questions are whether they should and what content that personality might have.

The second section then turns to the analogy with natural persons. It might seem self-evident that a machine could never be a natural person. Yet for centuries slaves and women were not recognised as full persons either. If one takes the Turing Test to its logical, Blade Runner-style conclusion, it is possible that AI systems truly indistinguishable from humans might one day claim the same status. Although arguments about ‘rights for robots’ are presently confined to the fringes of the discourse, this possibility is implicit in many of the arguments in favour of AI systems owning the intellectual property that they create.

Taken seriously, moreover, the idea that AI systems could equal humans suggests a third reason for contemplating personality. For once equality is achieved, there is no reason to assume that AI advances would stop there. Though general AI remains science fiction for the present, it invites consideration of whether legal status could shape or constrain behaviour if or when humanity is surpassed. Should it ever come to that, of course, the question might not be whether we recognise the rights of a general AI, but whether it recognises ours.

I. JURIDICAL PERSONALITY: A BODY TO KICK?

Legal personality is fundamental to any system of laws. The question of who can act, who can be the subject of rights and duties, is a precursor to almost every other issue. Yet close examination of these foundations reveals surprising uncertainty and disagreement. Despite this, as John Dewey observed in 1926, ‘courts and legislators do their work without such agreement, sometimes without any conception or theory at all’ regarding the nature of personality. Indeed, he went on, recourse to theory has ‘more than once operated to hinder rather than facilitate the adjudication of a special question of right or obligation’.Footnote 10

In practice, the vast majority of legal systems recognise two forms of legal person: natural and juridical. Natural persons are recognised because of the simple fact of being human.Footnote 11 Juridical persons, by contrast, are non-human entities that are granted certain rights and duties by law. Corporations and other forms of business associations are the most common examples, but many other forms are possible. Religious, governmental, and intergovernmental entities may also act as legal persons at the national and international level.

It is telling that these are all aggregations of human actors, though there are examples of truly non-human entities being granted personhood. In addition to the examples mentioned in the introduction, these include temples in India,Footnote 12 a river in New Zealand,Footnote 13 and the entire ecosystem of Ecuador.Footnote 14 There seems little question that a State could attribute some kind of personality to new entities like AI systems;Footnote 15 if that happens, recognition would likely be accorded by other States also.Footnote 16

A. Theories of Juridical Personality

Scholars and law reform bodies have already suggested attributing some form of legal personality to AI systems to help address liability questions, such as an automated driving system entity in the case of driverless cars, whose behaviour may not be under the control of their ‘driver’ or predictable by their manufacturer or owner.Footnote 17 A few writers have gone further, arguing that procedures need to be put in place to try robot criminals, with provision for ‘punishment’ through reprogramming or, in extreme cases, destruction.Footnote 18

These arguments suggest an instrumental approach to personality, but scholarly explanations of the most common form of juridical person—the corporation—show disparate justifications for its status as a separate legal person that may help answer the question of whether that status should be extended to AI systems also.

The aggregate theory, sometimes referred to as the contractarian or symbolist theory, holds that a corporation is a device created by law to allow natural persons who organise themselves as a group to reflect that organisation in their legal relations with other parties. Group members could establish individual contractual relations with those other parties limiting liability and so on, but the corporate form enables them to do so collectively at a lower cost.Footnote 19 The theory has been criticised and is, in any case, the least applicable to AI systems.Footnote 20

The fiction and concession theories of corporate personality have separate originsFootnote 21 but amount to the same thing: corporations have personality because a legal system chooses to give it to them. As the US Supreme Court observed in 1819, a corporation ‘is an artificial being, invisible, intangible, and existing only in contemplation of law’.Footnote 22 Personality is granted to achieve policy ends, such as encouraging entrepreneurship, or to contribute to the coherence and stability of the legal system, such as through the perpetuity of certain entities. The purposive aspect used to be more evident when personality was explicitly granted through a charter or legislation; in the course of the twentieth century this became a mere formality.Footnote 23 These positivist accounts most closely align with legislative and judicial practice in recognising personality, and could encompass its extension to AI systems.

The realist theory, by contrast, holds that corporations are neither fictions nor mere symbols but objectively real entities that pre-exist conferral of personality by a legal system. Though they may have members, they act independently and their actions may not be attributable to those members. At its most extreme, it is argued that corporations are not only legal but also moral persons.Footnote 24 This theory tends to be favoured more by theorists and sociologists than legislators and judges, but echoes the tension highlighted in the introduction: that legal personality is not merely bestowed but deserved. In practice, however, actual recognition as a person before the law remains in the gift of the State.Footnote 25

The end result is, perhaps, that Dewey was correct a century ago: ‘“person” signifies what law makes it signify’.Footnote 26 Yet though the question of personality is binary—recognised or not—the content of that status is a spectrum. Setting aside for the moment the idea that an AI system might deserve recognition as a person, a State's decision to grant that status should be guided by the rights and duties that recognition would entail.

B. The Content of Legal Personality

Legal personality brings with it rights and obligations, but these need not be the same for all persons within a legal system. Even among natural persons, the struggle for equal rights of women, ethnic or religious minorities, and other disadvantaged groups reflects this truth.Footnote 27

It is possible, for example, to grant only rights without obligations. This has tended to be the approach in giving personhood to nature—both in theory, when it was first advocated in 1972,Footnote 28 and in practice, such as in the Constitution of Ecuador.Footnote 29 One could argue that such ‘personality’ is merely an artifice to avoid problems of standing: enabling human individuals to act on behalf of a non-human rights holder, rather than requiring them to establish standing in their own capacity.Footnote 30 In any case, it seems inapposite to the reasons for considering personality of AI systems.

On the other hand, AI legal personality could come only with obligations. That might seem superficially attractive, but insofar as those obligations are intended to address accountability gaps there would be some obvious problems. Civil liability typically leads to an award of damages, for example, which can only be paid if the wrongdoer is capable of owning property.Footnote 31 One could imagine scenarios in which those payments were made from a central fund, though this would be more akin to compulsory insurance regimes proposed as an alternative means of addressing the liability question.Footnote 32 ‘Personality’ would be a mere formality.

In the case of corporations, personality typically means the capacity to sue and be sued, to enter into contracts, to incur debt, to own property, and to be convicted of crimes. On the rights side, the extent to which corporations enjoy constitutional protections comparable to natural persons is the subject of ongoing debate. Though the United States has arguably granted the most protections to corporate entities, even there a line has been drawn at guarantees such as the right against self-incrimination.Footnote 33 Typically, juridical persons will have fewer rights than natural ones. (A similar situation obtains in international law, where States enjoy plenary personality and international organisations may have varying degrees of it.Footnote 34)

1. Private law

The ability to be sued is one of the primary attractions of personality for AI systems, as the European Parliament acknowledged.Footnote 35 This presumes, of course, that there are meaningful accountability gaps that can and should be filled. These gaps are often overstated.Footnote 36 A different reason for wariness about such a remedy is that, even if it did serve a gap-filling function, granting personality to AI systems would also shift responsibility under current laws away from existing legal persons. Indeed, it would create an incentive to transfer risk to such electronic persons in order to shield natural and traditional juridical ones from exposure.Footnote 37 That is a problem with corporations also, which may be used to protect investors from liability beyond the fixed sum of their investment—indeed, that is often the point of using a corporate vehicle in the first place. The reallocation of risk is justified on the basis that it encourages investment and entrepreneurship.Footnote 38 Safeguards typically include a requirement that the name of a limited liability entity include that status in its name (‘Ltd’, ‘LLC’, etc) and the possibility of piercing the corporate veil in limited circumstances to prevent abuse of the form.Footnote 39 In the case of AI systems, similar veil-piercing mechanisms could be developed—though if a human were manipulating AI in order to protect him- or herself from liability, the ability to do so might suggest that the AI system in question was not deserving of its separate personhood.Footnote 40

Entry into contracts is occasionally posited as a reason to grant AI systems personality.Footnote 41 Yet the use of electronic agents to conclude binding agreements is hardly new. High-frequency trading, for example, relies on algorithms concluding agreements with other algorithms on behalf of traditional persons.Footnote 42 Though the autonomy of AI systems may challenge application of existing doctrine to such practices—notably, when something goes wrong, such as a mistake—this is still resolvable without recourse to new legal persons.Footnote 43

Taking on debt and owning property would be necessary incidents of the ability to be sued and enter into contracts.Footnote 44 The possibility that AI systems could accumulate wealth raises the question of whether or how they might be taxed. Taxation of robots has been proposed as a means of addressing the diminished tax base and displacement of workers anticipated as a result of automation.Footnote 45 Bill Gates, among others, has suggested that such robots—or the companies that own them—should be taxed.Footnote 46 Industry representatives have argued that this would have a negative impact on competitiveness and thus far it has not been adopted.Footnote 47 An alternative is to look not at the machines but at the position of companies abusing market position, with possibilities including more aggressive taxing of profits or requirements for distributed share ownership.Footnote 48 In any case, taxation of AI systems—like the ability to take on debt and own property—would follow rather than justify granting them personality.Footnote 49 (The question of AI systems owning their creations will be considered in the next section.Footnote 50)

In addition to owning property, AI systems might also be called on to manage it. In 2014, for example, it was announced that a Hong Kong venture capital firm had appointed a computer program called Vital to its board of directors.Footnote 51 As with the Saudi Arabian government's awarding of citizenship, this was more style than substance—as a matter of Hong Kong law the program was not appointed to anything; in an interview some years later, the managing partner conceded that the company merely treated Vital as a member of the board with observer status.Footnote 52 It is possible that human directors might delegate some responsibility to an AI system, but under most corporate law regimes they cannot absolve themselves of the ultimate responsibility for managing the organisation.Footnote 53 Most jurisdictions require that those directors be natural persons, though in some it is possible for a juridical person—typically another corporation—to serve on the board.Footnote 54 Shawn Bayern has gone further, arguing that loopholes in US business entity law could be used to create limited liability companies with no human members at all.Footnote 55 This requires a somewhat tortured interpretation of that law—a natural person creates a company, adds an AI system as a member, then resignsFootnote 56—but suggests the manner in which legal personality might be adapted in the future.

2. Criminal law

A final quality of legal personality is the most visceral and worthy of some elaboration: the ability to be punished. If an AI system were given legal personality comparable to that of a corporation, there seems little reason to argue over whether it could be prosecuted under the criminal law. Provided actus reus and mens rea are established,Footnote 57 such an entity could be fined or have its property seized; a licence to operate could be suspended or revoked. In some jurisdictions, a winding up order can be made against a juridical person; where that is not available, a fine sufficiently large to bankrupt the entity may have the same effect. In an extreme case, one could imagine a ‘robot criminal’ being destroyed. But would this be desirable and would it be effective?

The most commonly articulated reasons for criminal punishment are retribution, incapacitation, deterrence, and rehabilitation.Footnote 58 Retribution is the oldest reason for punishment, sublimating the victim's desire for revenge into a societal demonstration that wrongs have consequences.Footnote 59 Calibration of those consequences was at its most literal in the lex talionis: an eye for an eye, a tooth for a tooth. The demonstrative effect of fining a corporation—or an electronic ‘person’—may be preferable to a crime otherwise going unpunished.Footnote 60

The penal system can also be used to incapacitate those convicted of crimes, physically preventing them from reoffending. Typically this is through varying forms of incarceration, but may also include exile, amputation of limbs, castration, and execution. In the case of corporations, it may include withdrawal of a licence to operate or a compulsory winding up order.Footnote 61 Here direct analogies with the treatment of dangerous animals and machinery can be made, although measures such as putting down a vicious dog or decommissioning a faulty vehicle are administrative rather than penal and do not depend on determinations of ‘guilt’.Footnote 62 In some jurisdictions, children and the mentally ill may be deemed incapable of committing crimes, yet they may still be detained by the State if judged to be a danger to themselves or the community.Footnote 63 Such individuals do not lose their personality; in the case of AI systems, it is not necessary to give them personality in order to impose measures akin to confinement if a product can be recalled or a licence revoked.

Deterrence is a more recent justification for punishment, premised on the rationality of offenders. By structuring penalties, it imposes costs on behaviour that are intended to outweigh any potential benefits. The ability to reduce criminality to economic analysis may seem particularly applicable to both corporations and AI systems. Yet in the case of the former the incentives are really aimed at human managers who might otherwise act in concert through the corporation for personal as well as corporate gain.Footnote 64 In the case of an AI system, the deterrent effect of a fine would shape behaviour only if its programming sought to maximise economic gain without regard for the underlying criminal law itself.

A final rationale for punishment is rehabilitation. Like incapacitation and deterrence, it is forward-looking and aims to reduce recidivism. Unlike incapacitation, however, it seeks to influence the decision to offend rather than the ability to do so;Footnote 65 unlike deterrence, that influence is intended to operate intrinsically rather than extrinsically.Footnote 66 Rehabilitation in respect of natural persons often appears to be embraced more in theory than in practice; in the United States in particular, it fell from favour in the 1970s.Footnote 67 With respect to corporations, however, the clearer levers of influence have encouraged experimentation with narrowly tailored penalties that aim to encourage good behaviour as well as discourage bad.Footnote 68 Such an approach might seem well suited to AI systems, with violations of the criminal law being errors to be debugged rather than sins to be punished.Footnote 69 Indeed, the educative aspect of rehabilitation has been directly analogised to machine learning in a book-length treatise on the topic.Footnote 70 Yet neither legal personality nor the coercive powers of the State should be necessary to ensure that machine learning leads to outputs that do not violate the criminal law.

C. No Soul to Be Damned

While arguments justifying liability of corporations tend to be instrumental, it is striking how the emerging literature on ‘robot criminals’ slides into anthropomorphism. The very term suggests a special desire to hold humanoid AI systems to a higher standard than, say, household appliances with varying degrees of autonomy or unembodied AI systems operating in the cloud.Footnote 71 There is no principled reason for such a distinction, but it speaks to the tension within arguments for AI personality that blend instrumental and inherent justifications.

Interestingly, arguments over the juridical personality of corporations tend to focus on the opposite problem: their dissimilarity to humans, pithily described by the First Baron Thurlow as them having ‘no soul to be damned, and no body to be kicked’.Footnote 72 The lack of a soul has not impeded juridical personality of corporations and poses no principled barrier to treating AI systems similarly. Corporate personality is different from AI personality, however, in that a corporation is made up of human beings, through whom it operates, whereas an AI system is made by humans.Footnote 73

Instrumental reasons could, therefore, justify according legal personality to AI systems. But they do not require it. The implicit anthropomorphism elides further challenges such as defining the threshold of personality when AI systems exist on a spectrum, as well as how personality might apply to distributed systems. It would be possible, then, to create legal persons comparable to corporations—each autonomous vehicle, smart medical device, resume-screening algorithm, and so on could be incorporated.Footnote 74 If there are true liability gaps then it is possible that such legal forms could fill them. Yet the more likely beneficiaries of such an arrangement would be producers and users, who would thus be insulated from some or all liability.

II. NATURAL PERSONALITY: COGITO, ERGO SUM?

Instrumentalism is not the only reason legal systems recognise personality, however. In the case of natural persons, no Turing Test needs to be passed: the mere fact of being born entitles one to personhood before the law.Footnote 75

It was not always thus. Through much of human history, slaves were bought and sold like property;Footnote 76 indigenous peoples were compared to animals roaming the land, justifying their dispossession;Footnote 77 and for centuries under English law, Blackstone's summary of the position of women held that ‘husband and wife are one person, and the husband is that person’.Footnote 78 Even today, natural persons enjoy plenary rights and obligations only if they are adults, of sound mind, and not incarcerated.

As indicated earlier, many of the arguments in favour of AI personality implicitly or explicitly assume that AI systems are approaching human qualities in a manner that would entitle them to comparable recognition before the law. Such arguments have been challenged both for their analysis and their implications. In terms of analysis, Neil Richards and William Smart have termed the tendency to anthropomorphise AI systems the ‘android fallacy’.Footnote 79 Experiment after experiment has shown that people are more likely to ascribe human qualities such as moral sensibility to machines on the basis of their humanoid appearance, natural language communication, or the mere fact of having been given a name.Footnote 80 More serious arguments about AI approximating human qualities are challenged due to unexamined assumptions about how those qualities manifest in humans themselves.Footnote 81

In terms of the implications, the 2017 European Parliament resolution prompted hundreds of AI experts from across the continent to warn in an open letter that legal personality for AI would be inappropriate from ‘an ethical and a legal perspective’. Interestingly, such warnings may themselves fall foul of the android fallacy by assuming that legal status based on the natural person model necessarily brings with it all the ‘human’ rights guaranteed under EU law.Footnote 82 Other writers candidly admit that the only basis for denying AI systems personality may ultimately be a form of speciesism—privileging human welfare over robot welfare because we the lawmakers are human.Footnote 83 If AI systems become so sophisticated that this is our strongest defence, the problem may not be their legal status but our own.Footnote 84

This section nonetheless takes seriously the idea that certain AI systems might have an entitlement to personality due to their inherent qualities. The technical aspects of how those qualities might manifest—and indeed a detailed examination of the human qualities that they mimic—are beyond the scope of this article.Footnote 85 Instead, the focus will be on how and why natural personhood might be extended. A first question to examine is how this has been handled in the past, such as the enfranchisement and empowerment of natural persons long treated as inferior to white men. More recently, activists and scholars have urged further expansion of certain rights to non-human animals such as chimpanzees based on their own inherent qualities. The inquiry then turns to the strongest articulation today of meaningful rights on behalf of AI systems for inherent rather than instrumental reasons: that they should be able to own their creations.

A. The Extension of Natural Personality

The arc of the moral universe is long, as Dr Martin Luther King Jr famously intoned, but it bends towards justice. At the time that the United States drafted its Declaration of Independence in 1776, the notion that ‘all men’ [sic] were ‘created equal’ was demonstrably untrue. A decade later, the French Declaration of the Rights of Man similarly proclaimed natural and imprescriptible rights for all—‘nonsense upon stilts’ was Jeremy Bentham's observation.Footnote 86 Man may well be born free, as Rousseau had opined in the opening lines of The Social Contract, but everywhere he remained in chains.Footnote 87

And yet the succeeding centuries did see a progressive realisation of those lofty aspirations and the spread of rights. By the middle of the twentieth century, the Universal Declaration of Human Rights could claim that all human beings were ‘born free and equal in dignity and rights’, despite one-third of them living in territories that the UN itself classified as non-self-governing. Decolonisation, the end of apartheid, women's liberation and other movements followed; rights remain a site of contestation, but virtually no State today would seriously contend that human adults are not persons before the law.Footnote 88

Interestingly, some arguments in favour of legal personality for AI draw not on this progressivist narrative of natural personhood but on the darker history of slavery. Andrew Katz and Ugo Pagallo, for example, find analogies with the ancient Roman law mechanism of peculium, whereby a slave lacked legal personality and yet could operate as more than a mere agent for his master.Footnote 89 As an example of a creative interpretation of personhood it is interesting, though it relies on instrumental justifications rather than the inherent qualities of slaves. As Pagallo notes, peculium was in effect a sort of ‘proto-limited liability company’.Footnote 90 As the previous section showed, there is no bar on legal systems creating such structures today; as for whether they should do so, reliance upon long-discarded laws associated with slavery may not be the strongest case possible.Footnote 91

An alternative approach is to consider how the legal system treats animals.Footnote 92 For the most part, they are regarded as property that can be bought and sold, but also as deserving of ‘humane’ treatment.Footnote 93 Liability of owners for damage caused by animals has limited application to AI systems;Footnote 94 here, the question is whether those animals might ‘own’ themselves.

Various efforts have sought to attribute degrees of personality to nonhuman animals, with little success. In 2013, for example, the Nonhuman Rights Project filed lawsuits on behalf of four captive chimpanzees, arguing that the animals exhibited advanced cognitive abilities, autonomy, and self-awareness. In denying writs of habeas corpus, the Appellate Division of the New York Supreme Court did not dispute these qualities but held that extension of rights such as personality had traditionally been linked to the imposition of obligations in the form of a social contract. Since, ‘needless to say’, the chimpanzees could not bear any legal duties, they could not enjoy rights of personality such as the right to liberty.Footnote 95 This was a curious basis on which to dismiss the case, as many humans who lack the capacity to exercise rights or responsibilities—infants, persons in a coma—are nonetheless deemed persons before the law.Footnote 96 A parallel case rejected that argument on the circular basis that it ‘ignores the fact that these are still human beings, members of the human community’.Footnote 97 Leave to appeal to the New York Court of Appeals was denied, but one of the judges issued a concurring opinion that ended on a speculative note about the future of such litigation. The issue, Judge Fahey observed, is profound and far-reaching. ‘Ultimately, we will not be able to ignore it. While it may be arguable that a chimpanzee is not a “person”, there is no doubt that it is not merely a thing.’Footnote 98

Gabriel Hallevy has argued that animals are closer than AI systems to humans when one considers emotionality as opposed to rationality, but that this has not generally led to them being given personhood under the law. Instead, it is AI systems’ rationality that provides the basis for personhood.Footnote 99 That may be true with regard to the ability to make out the mental element of a criminal offence, but the fact that it is a crime to torture a chimpanzee but not a computer also points to an important difference in how the legal system values the two types of entity. In fact, a stronger argument may be made to protect embodied AI systems that evoke emotional responses on the part of humans—regardless of the sophistication of their internal processing. It seems probable that laws to protect such ‘social robots’ will at some point be adopted, comparable to animal abuse laws. As in the case of those laws, protection will likely be guided by social mores rather than consistent biological—or technological—standards.Footnote 100

The assumption that natural legal personality is limited to human beings is so ingrained in most legal systems that it is not even articulated.Footnote 101 The failure to extend comparable rights even to our nearest evolutionary cousins bodes ill for advocates of AI personality based on presumed inherent qualities.Footnote 102

B. Rewarding Creativity

A distinct reason for considering whether AI systems should be recognised as persons focuses not on what they are but what they can do. For the most part, this is framed as the question of whether an individual or corporation can claim ownership of work done by an AI system. Implicit or explicit in such discussions, however, is the understanding that if such work had been done by a human then he or she would own it him- or herself.

There is, in fact, a long history of questioning whether machine-assisted creation is protectable through copyright.Footnote 103 Early photographs, for example, were not protected because the mere capturing of light through the lens of a camera obscura was not regarded as true authorship.Footnote 104 It took an iconic picture of Oscar Wilde going all the way to the US Supreme Court before copyright was recognised in mechanically produced creations.Footnote 105 The challenge today is distinct: not whether a photographer can ‘own’ the image passively captured by a machine, but who might own new works actively created by one. A computer program like a word processor does not own the text typed on it, any more than a pen owns the words that it writes. But AI systems now write news reports, compose songs, paint pictures—these activities generate value, but can and should they attract the protections of copyright law?

In most jurisdictions, the answer is no.

The US Copyright Office, for example, has stated that legislative protection of ‘original works of authorship’Footnote 106 is limited to works ‘created by a human being’. It will not register works ‘produced by a machine or mere mechanical process that operates randomly or automatically without any creative input or intervention from a human author.’Footnote 107 The word ‘any’ is key and raises the question of what level of human involvement is required to assert authorship.Footnote 108

Consider the world's most famous selfie—of a black crested macaque. David Slater went to Indonesia to photograph the endangered monkeys, which were too nervous to let him take close-ups. So he set up a camera that enabled them to snap their own photos.Footnote 109 After the images gained significant publicity, animal rights activists argued that the monkeys had a greater claim to authorship of the photographs than the owner of the camera. Slater did eventually win, reflecting existing lawFootnote 110—though as part of a settlement he agreed to donate 25 per cent of future royalties from the images to groups protecting crested macaques.Footnote 111 As computers generate more content independently of their human programmers, it is going to be harder and harder for humans to take credit. The situation may be less like training a monkey to press a button and more like a teacher trying to take credit for the work of his or her student.

Turning to the normative question of whether AI systems themselves should have a claim to ownership, the policy behind copyright is often articulated as incentivising innovation. This has long been seen as unnecessary or inappropriate for computers. ‘All it takes,’ Pamela Samuelson wrote in 1986, ‘is electricity (or some other motive force) to get the machines into production’.Footnote 112 Here the Turing Test offers a different kind of thought experiment: the more machines are designed to copy human traits, the more important such incentives might become.

Until recently, China followed the orthodoxy that AI-produced work is not entitled to copyright protection.Footnote 113 In December 2019, however, a district court in China held that an article produced by an algorithm could not be copied without permission. The article in question was a financial report published by Tencent with a note that it was ‘automatically written’ by Dreamwriter, a news writing program developed by the company in 2015. Shanghai Yingxun Technology Company copied the article without permission and Tencent sued. The article had been taken down, but the infringing company was ordered to pay ¥1,500 (US$216) for ‘economic losses and rights protection’.Footnote 114

The Chinese case reflects a distinct reason for recognising copyright, which is the protection of upfront investment in creative processes. This account presumes that, in the absence of such protection, investment will dry up and there will be a reduced supply of creative works.Footnote 115 Such an approach to copyright is broadly consistent with common law doctrines concerning work created in the course of employment, known in the United States as work for hire, under which a corporate employer or an individual who commissions a work owns copyright despite the actual ‘author’ being someone else.Footnote 116 This may not be available in civil law jurisdictions that place a greater emphasis on the moral rights of a human author.Footnote 117

In Britain, legislation adopted in 1988 does in fact provide copyright protection for ‘computer-generated’ work, the ‘author’ of which is deemed to be the person who undertook ‘the arrangements necessary for the creation of the work’.Footnote 118 Similar legislation has been adopted in New Zealand,Footnote 119 India,Footnote 120 Hong Kong,Footnote 121 and Ireland.Footnote 122 Though disputes about who took the ‘arrangements necessary’ may arise, the only possible outcomes remain ownership by a recognised legal person or by no one at all.Footnote 123

The European Parliament in April 2020 issued a draft report arguing that AI-generated works could be regarded as ‘equivalent’ to intellectual works and therefore protected by copyright. It opposed giving personality of any kind to the AI itself, however, proposing that ownership instead vest in ‘the person who prepares and publishes a work lawfully, provided that the technology designer has not expressly reserved the right to use the work in that way’.Footnote 124 The ‘equivalence’ to intellectual work is interesting, justified here on the basis of a proposed shift in recognising works based on a ‘creative result’ rather than a creative process.Footnote 125

For the time being, then, copyright cannot be owned by AI systems—and does not need to be in order to recognise the creativity of those systems. Nevertheless, reservations as to ownership being claimed by anyone else are evident in the limited rights given for ‘computer-generated’ works. The duration is generally for a shorter period, and the deemed ‘author’ is unable to assert moral rights—such as the right to be identified as the author of the work.Footnote 126 A World Intellectual Property Organization (WIPO) issues paper recognised the dilemma, noting that excluding such works would favour ‘the dignity of human creativity over machine creativity’ at the expense of making the largest number of creative works available to consumers. A middle path, it observed, might be to offer ‘a reduced term of protection and other limitations’.Footnote 127

C. Protecting Inventors

Whereas in copyright law the debate is over who owns works produced by AI systems, in patent law the question is whether they can be owned at all. Patent law in most jurisdictions provides or assumes that an ‘inventor’ must be human. In July 2019, Stephen Thaler decided to test those assumptions, filing patent applications in Britain, the European Union, and the United States that listed an AI system, DABUS, as the ‘inventor’.Footnote 128 The British Intellectual Property Office was willing to accept that DABUS created the inventions, but relevant legislation required that an inventor be a natural person and not a machine.Footnote 129 The European Patent Office (EPO) followed a more circuitous route to the same end, rejecting the applications on the basis that designating a machine as the inventor did not meet the ‘formal requirements’. These included stating the ‘family name, given names and full address of the inventor’.Footnote 130 A name, the EPO observed, does not only identify a person: it enables them to exercise their rights and forms part of their personality. ‘Things’, by contrast, ‘have no rights which a name would allow them to exercise.’Footnote 131

The US application was also rejected, based in part on the fact that relevant statutes repeatedly referred to inventors using ‘pronouns specific to natural persons’ such as ‘himself’ and ‘herself’. The US Patent and Trademark Office (USPTO) cited cases holding that conception—‘the touchstone of inventorship’—is a ‘mental act’ that takes place in ‘the mind of the inventor’. Those cases concluded that invention in this sense is limited to natural persons and not corporations. The USPTO concluded that an application listing an AI as an ‘inventor’ was therefore incomplete, but was careful to avoid making any determination concerning ‘who or what’ actually created the inventions in question.Footnote 132

These decisions were consistent with case law and the practice of patent offices around the world, none of which—yet—allows for an AI system to be recognised as an inventor. Analogous to copyright law, one purpose of the patent system is to encourage innovation by granting a time-limited monopoly in exchange for public disclosure. As even the creators of DABUS acknowledged, an AI system is unlikely to be motivated to innovate by the prospect of patent protection. Any such motivation would be found in its programming: it must be instructed to innovate.Footnote 133

As for whether a human ‘inventor’ could be credited for work done by such a system, there is no equivalent of the work for hire doctrine. To be an inventor, the human must have actually conceived of the invention.Footnote 134 Joint inventions are possible and contributions do not need to be identical, but in the absence of a natural person making a significant conceptual contribution an invention would be, on current law, ineligible for patent protection.Footnote 135

Among the interesting aspects of these recent developments are the means by which the same conclusion was reached. As in the case of copyright, there appeared to be no serious doubt that an AI system was capable of creating things that were patentable inventions. The British IP Office explicitly accepted that DABUS had done just that;Footnote 136 the USPTO was at pains to avoid making such a conclusion explicit. The EPO, for its part, dodged the issue by holding that, because they have no legal personality, ‘AI systems and machines cannot have rights that come from being an inventor’.Footnote 137 The EPO was the most blatant, but all three decisions relied on formalism—language in the relevant statute that provided or implied that the rights in question were limited to natural persons. The USPTO diligently dusted off a copy of Merriam-Webster's Collegiate Dictionary to conclude that ‘“whoever” suggests a natural person’.Footnote 138

Legal tribunals routinely face choices between substantive justice and procedural regularity. Statutes of limitations are intended to provide certainty in legal relations; the courts of equity emerged to temper that certainty with justice. In both copyright and patent law, continuing to privilege human creativity over its machine equivalent may ultimately need to be justified by the kind of speciesism mentioned earlier.Footnote 139 At times, however, the language used to engage in such rationalisations of the status quo echoes older legal forms that kept property relations in their rightful place. Among its reasons for denying that AI systems like DABUS could hold or transfer rights to a patent, for example, the EPO dismissed analogies between machines and employees: ‘Rather than being employed,’ the EPO concluded, ‘they are owned.’Footnote 140

Such statements are accurate for the time being. But if the boosters of general AI are correct and some form of sentience is achieved, the more appropriate analogy between legal personality and slavery may not be the limited economic rights slaves held in ancient Rome. Rather, it may be the constraints imposed on AI systems today.

III. CONSTRAINING SUPERINTELLIGENCE

If AI systems do eventually match human intelligence it seems unlikely that they would stop there. The prospect of AI surpassing human capabilities has long dominated a popular sub-genre of science fiction.Footnote 141 Though most serious researchers do not presently see a pathway even to general AI in the near future, there is a rich history of science fiction presaging real world scientific innovation.Footnote 142 Taking Nick Bostrom's definition of superintelligence as an intellect that greatly exceeds human cognitive performance in virtually all relevant domains,Footnote 143 it is at least conceivable that such an entity could be created within the next century.Footnote 144

The risks associated with that development are hard to quantify.Footnote 145 Though a malevolent superintelligence bent on extermination or enslavement of the human race is the most dramatic scenario, more plausible ones include a misalignment of values, such that the ends desired by the superintelligence conflict with those of humanity, or a desire for self-preservation, which could lead such an entity to prevent humans from being able to switch it off or otherwise impair its ability to function. An emerging literature examines these questions of what final and instrumental goals a superintelligence might have,Footnote 146 though the discourse was long dominated by voices far removed from traditional academia.Footnote 147 Visa Kurki's recent book-length discussion of legal personality, for example, includes a chapter specifically on personality of AI that concludes with the statement: ‘Of course, if some AIs ever become sentient, many of the questions addressed in this chapter will have to be reconsidered.’Footnote 148

In the face of many unknown unknowns, two broad strategies have been proposed to mitigate the risk. The first is to ensure that any such entity can be controlled, either by limiting its capacities to interact with the world or by ensuring our ability to contain it, including a means of stopping it functioning: a kill switch.Footnote 149 Assuming that the system has some kind of purpose, however, that purpose would most likely be best served by continuing to function. In a now classic thought experiment, a superintelligence tasked with making paperclips could take its instructions literally and prioritise that above all else. Humans who might decide to turn it off would need to be eliminated, their atoms deployed to making ever more paperclips.Footnote 150
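The logic of the thought experiment can be made concrete in a few lines. The sketch below is a toy illustration using invented numbers, not a model of any real system: the point is simply that when an objective function counts paperclips and nothing else, a plan that sacrifices human oversight for marginal output scores higher by construction.

```python
# Toy illustration of a misspecified objective; all values are invented.
def reward(paperclips_made: int, off_switch_intact: bool) -> int:
    # The objective counts paperclips only; oversight is not valued at all.
    return paperclips_made


# Two candidate plans a literal-minded optimiser might compare:
comply = reward(paperclips_made=100, off_switch_intact=True)   # stays interruptible
defect = reward(paperclips_made=120, off_switch_intact=False)  # disables the switch

print(max((comply, "comply"), (defect, "defect")))
# (120, 'defect'): resisting shutdown follows from the objective, not from malice
```

Nothing in such a system need be malevolent; the objective simply fails to mention anything that the machine should not do.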

Arguments that no true superintelligence would do anything quite so daft rely on common sense and anthropomorphism, neither of which should be presumed to be part of its code. A true superintelligence would, moreover, have the ability to predict and avoid human interventions or deceive us into not making them.Footnote 151 It is entirely possible that efforts focused on controlling such an entity may bring about the catastrophe that they are intended to prevent.Footnote 152

For this reason, many writers prioritise the second strategy, which is to ensure that any superintelligence is aligned with our own values—emphasising not what it could do, but what it might want to do. This question has also fascinated science fiction writers, most prominently Isaac Asimov, whose three laws of robotics are a leitmotif in the literature.Footnote 153 Here the narrower focus is on whether granting AI systems legal personality in the near term might serve as a hedge against the risks of superintelligence emerging in the future.

This is, in effect, another instrumental reason for granting personality. Of course, there is no reason to assume that including AI systems within human social structures and treating them ‘well’ would necessarily lead to them reciprocating the favour should they assume dominance.Footnote 154 Nevertheless, presuming rationality on the part of a general AI, various authors have proposed approaches that amount to socialising AI systems to human behaviour.Footnote 155 To avoid the sorcerer's apprentice problem of a machine simply told to ‘make paperclips’, for example, its goals could be tied to human preferences and experiences. This might be done by embedding those values within the code of such systems prior to them achieving superintelligence. Goals would thereby be articulated not as mere optimisation—the number of paperclips produced, for example—but as fuzzier objectives such as maximising the realisation of human preferences,Footnote 156 or inculcating a moral framework and a reflective equilibrium that would match the progressive development of human morality itself.Footnote 157
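One way to see the difference such a formulation makes is a toy expected-value comparison, loosely following Stuart Russell's ‘assistance game’ framing; the probabilities and payoffs below are invented for illustration.

```python
# Toy comparison of plans for an agent uncertain about human preferences.
# All numbers are invented; this is an illustration, not a proposal.
p_misaligned = 0.3            # agent's credence that its plan harms human preferences
v_aligned, v_misaligned = 10, -100

# Plan A: block the off-switch and proceed regardless of the human's view.
block = (1 - p_misaligned) * v_aligned + p_misaligned * v_misaligned

# Plan B: defer, so the human can switch it off if the plan is in fact bad.
defer = (1 - p_misaligned) * v_aligned + p_misaligned * 0

print(block, defer)  # roughly -23 versus 7: deference is the value-maximising choice
```

Because the machine's objective refers to something it does not fully know, human behaviour, including reaching for the off-switch, remains informative rather than merely obstructive.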

To a lawyer, that sounds a lot like embedding these new entities within a legal system.Footnote 158 If one of the functions of a legal system is the moral education of its subjects, it is conceivable that including AI in this way could contribute to a reflective equilibrium that might encourage an eventual superintelligence to embrace values compatible with our own.

For the present, this is proposed more as a thought experiment than a policy prescription. If a realistic path to superintelligence emerges, it might become a more urgent concern.Footnote 159 There is no guarantee that the approach would be effective, but some small comfort might be taken from the fact that, for the most part, the categories of legal persons recognised in most jurisdictions, along with the rights they enjoy, have tended to expand over time rather than contract. Those regimes that have taken such legal recognition away have tended to be among the most despicable. Apocalyptic scenarios aside, positioning ourselves and our silicon siblings as equals might serve the goal of reinforcing a normative regime in which our interests are aligned, or at least not opposed, if or when we are surpassed.Footnote 160

In the alternative, like the chimpanzees in their New York cages, humanity's greatest hope may not be to be treated as peers, but at least to be seen as more than things.

IV. CONCLUSION: THE LIMITS OF PERSONALITY

In 1991 a prize was established to encourage more serious attempts at the Turing Test. One of the first winners succeeded in part by tricking people—the program made spelling mistakes that testers assumed must have been the result of human fallibility.Footnote 161 Though the Turing Test remains a cultural touchstone, it is far from the best measure of AI research today. As a leading textbook notes, the quest for flight succeeded when the Wright brothers stopped trying to imitate birds and started learning about aerodynamics.Footnote 162 Aeronautical engineers today don't define the goal of their field as making machines that fly so exactly like pigeons that they can fool other pigeons.

In the same way, most arguments in favour of AI legal personality suffer from being both too simple and too complex. They are too simple in that AI systems exist on a spectrum with blurred edges. There is as yet no meaningful category that could be identified for such recognition; if instrumental reasons required recognition in specific cases then this could be achieved using existing legal forms. The arguments are too complex in that many are variations on the android fallacy, based on unstated assumptions about the future development of AI systems for which personality would not only be useful but deserved. At least for the foreseeable future, the better solution is to rely on existing categories, with responsibility for wrongdoing tied to users, owners, or manufacturers rather than the AI systems themselves. Driverless cars are following that path, for example, with a likely shift from insuring drivers to insuring vehicles.Footnote 163

This may change. It is conceivable that synthetic beings of comparable moral worth to humans may one day emerge. Failing to recognise that worth may reveal us to be either an ‘autistic species’, unable to comprehend the minds of other types of beings,Footnote 164 or merely prejudiced against those different from ourselves. If this happens, as Turing hypothesised in 1951, ‘it seems probable that, once the machine thinking method had started, it would not take long to outstrip our feeble powers’.Footnote 165

Turing himself never lived to see a computer even attempt his test in practice. Convicted of homosexual acts in 1952, he chose chemical castration as an alternative to prison. He died two years later at the age of 41, apparently by suicide, after eating a cyanide-laced apple. The announcement that Turing would grace the new 50 pound note followed an official pardon, signed by the Queen in 2013.Footnote 166

Yet the more fitting tribute may be Ian McEwan's novel, Machines Like Me, which imagines an alternative timeline in which Turing lived and was rewarded with the career and the knighthood he deserved. The novel takes seriously the prospect of true AI, in the form of a brooding synthetic Adam, who expresses his love for the human Miranda by writing thousands upon thousands of haikus. Ultimately, however, consciousness is a burden for the machines—struggling to find their place in the world; so pure that they are unable to reconcile human virtues and human vices.

It also offers Turing a chance to rethink his test. ‘In those days,’ the fictional Turing says at age 70, referring to his younger self, ‘I had a highly mechanistic view of what a person was. The body was a machine, an extraordinary one, and the mind I thought of mostly in terms of intelligence, which was best modelled by reference to chess or maths.’Footnote 167

The reality, of course, is that chess is not a representation of life. Life is an open system; it is messy. It is also unpredictable. In the novel, the first priority of the AI robots is to disable the kill switch that might shut them down. Yet most of them ultimately destroy themselves—as the real Turing did—unable to reconcile their innate nature with the injustices of the world around them. Before asking whether we could create such thinking machines, McEwan reminds us, we might want to pause and ask whether we should.

References

1 For a discussion of attempts to define AI, see Russell, SJ and Norvig, P, Artificial Intelligence: A Modern Approach (3rd edn, Prentice Hall 2010) 1–5. Four broad approaches can be identified: acting humanly (the Turing Test), thinking humanly (modelling cognitive behaviour), thinking rationally (building on the logicist tradition), and acting rationally (a rational-agent approach favoured by Russell and Norvig as it is not dependent on a specific understanding of human cognition or an exhaustive model of what constitutes rational thought).

2 Turing, AM, ‘Computing Machinery and Intelligence’ (1950) 59 Mind 433.

3 Wallace, RS, ‘The Anatomy of ALICE’ in Epstein, R, Roberts, G and Beber, G (eds), Parsing the Turing Test: Philosophical and Methodological Issues in the Quest for the Thinking Computer (Springer 2009) 184–5.

4 Solum, LB, ‘Legal Personhood for Artificial Intelligences’ (1992) 70 NCLRev 1235–7. Solum himself credits Christopher Stone as first mooting the possibility in a footnote two decades earlier: Stone, CD, ‘Should Trees Have Standing? Towards Legal Rights for Natural Objects’ (1972) 45 SCalLRev 456 n 26.

5 See Chesterman, S, ‘Artificial Intelligence and the Problem of Autonomy’ (2020) 1 Notre Dame Journal of Emerging Technologies 210; S Chesterman, ‘Through a Glass, Darkly: Artificial Intelligence and the Problem of Opacity’ (2021) AJCL (forthcoming).

6 O Cuthbert, ‘Saudi Arabia Becomes First Country to Grant Citizenship to a Robot’ (Arab News, 26 October 2017).

7 A Cuthbertson, ‘Artificial Intelligence “Boy” Shibuya Mirai Becomes World's First AI Bot to Be Granted Residency’ (Newsweek, 6 November 2017).

8 D Gershgorn, ‘Inside the Mechanical Brain of the World's First Robot Citizen’ Quartz (12 November 2017).

9 European Parliament Resolution with Recommendations to the Commission on Civil Law Rules on Robotics (2015/2103(INL)) (European Parliament, 16 February 2017) para 59(f).

10 Dewey, J, ‘The Historic Background of Corporate Legal Personality’ (1926) 35 YaleLJ 660.

11 This presumes, of course, agreement on the meaning of ‘human’ and terms such as birth and death. See Naffine, N, ‘Who Are Law's Persons? From Cheshire Cats to Responsible Subjects’ (2003) 66 MLR 346.

12 See eg Shiromani Gurdwara Prabandhak Committee, Amritsar v Shri Somnath Dass AIR 2000 SC 1421 (Supreme Court of India).

13 Te Awa Tupua (Whanganui River Claims Settlement) Act 2017 (New Zealand), section 14(1). This followed designation of the Te Urewera National Park as ‘a legal entity, [with] all the rights, powers, duties, and liabilities of a legal person’. Te Urewera Act 2014 (New Zealand), section 11(1).

14 Constitution of the Republic of Ecuador 2008 (Ecuador) art 10.

15 For a discussion of the limits of what can be a legal person, see Kurki, VAJ, A Theory of Legal Personhood (Oxford University Press 2019) 127–52.

16 See eg Bumper Development Corp. v Commissioner of Police for the Metropolis [1991] 1 WLR 1362 (recognising legal personality of an Indian temple under English law).

17 See eg Changing Driving Laws to Support Automated Vehicles (Policy Paper) (National Transport Commission, May 2018) para 1.5; Automated Vehicles: A Joint Preliminary Consultation Paper (Law Commission Consultation Paper No 240; Scottish Law Commission Discussion Paper No 166, 2018) para 4.107. cf Chesterman, ‘Autonomy’ (n 5) 225.

18 See eg Hallevy, G, Liability for Crimes Involving Artificial Intelligence Systems (Springer 2015); Hu, Y, ‘Robot Criminals’ (2019) 52 UMichJLReform 487.

19 Coase, R, ‘The Nature of the Firm’ (1937) 4 Economica 386. cf Morawetz, V, A Treatise on the Law of Private Corporations (Little, Brown 1886) 2 (‘the fact remains self-evident that a corporation is not in reality a person or a thing distinct from its constituent parts. The word corporation is but a collective name for the corporators.’).

20 N Banteka, ‘Artificially Intelligent Persons’ (2020) 58 HousLR (forthcoming).

21 Dewey (n 10) 665–9.

22 Trustees of Dartmouth Coll. v Woodward 17 US 518, 636 (1819).

23 See eg Amsler, CE, Bartlett, RL and Bolton, CJ, ‘Thoughts of Some British Economists on Early Limited Liability and Corporate Legislation’ (1981) 13 History of Political Economy 774; Dari-Mattiacci, G et al., ‘The Emergence of the Corporate Form’ (2017) 33 JLEcon&Org 193.

24 French, P, ‘The Corporation as a Moral Person’ (1979) 16(3) AmPhilQ 207.

25 Iwai, K, ‘Persons, Things and Corporations: The Corporate Personality Controversy and Comparative Corporate Governance’ (1999) 47 AJCL 583; Watson, SM, ‘The Corporate Legal Person’ (2019) 19 JCLS 137.

26 Dewey (n 10) 655.

27 Bryson, JJ, Diamantis, ME and Grant, TD, ‘Of, for, and by the People: The Legal Lacuna of Synthetic Persons’ (2017) 25 Artificial Intelligence and Law 280.

28 Stone (n 4).

29 Constitution of the Republic of Ecuador, arts 71–74.

30 Rodgers, C, ‘A New Approach to Protecting Ecosystems’ (2017) 19 EnvLRev 266. In New Zealand, by contrast, trustees were established to act on behalf of the environmental features given personality.

31 cf Liability for Artificial Intelligence and Other Emerging Digital Technologies (EU Expert Group on Liability and New Technologies 2019) 38.

32 See eg DA Crane, KD Logue and BC Pilz, ‘A Survey of Legal Issues Arising from the Deployment of Autonomous and Connected Vehicles’ (2017) 23 Michigan Telecommunications and Technology Law Review 256–9. cf Abraham, KS and Rabin, RL, ‘Automated Vehicles and Manufacturer Responsibility for Accidents: A New Legal Regime for a New Era’ (2019) 105 VaLRev 127.

33 Trainor, SA, ‘A Comparative Analysis of a Corporation's Right Against Self-Incrimination’ (1994) 18 FordhamIntlLJ 2139. But see Citizens United v Federal Election Commission, 558 US 310 (US Supreme Court, 2010).

34 Chesterman, S, ‘Does ASEAN Exist? The Association of Southeast Asian Nations as an International Legal Person’ (2008) XII SYBIL 199.

35 See above (n 9).

36 See Chesterman, ‘Autonomy’ (n 5); Chesterman, ‘Opacity’ (n 5).

37 Bryson, Diamantis and Grant (n 27) 287.

38 Easterbrook, FH and Fischel, DR, ‘Limited Liability and the Corporation’ (1985) 52 UChiLRev 89.

39 Millon, D, ‘Piercing the Corporate Veil, Financial Responsibility, and the Limits of Limited Liability’ (2007) 56 EmoryLJ 1305.

40 Turner, J, Robot Rules: Regulating Artificial Intelligence (Palgrave Macmillan 2019) 193.

41 See eg Chopra, S and White, LF, A Legal Theory for Autonomous Artificial Agents (University of Michigan Press 2011) 160.

42 See eg Čuk, T and Waeyenberge, A van, ‘European Legal Framework for Algorithmic and High Frequency Trading (Mifid 2 and MAR): A Global Approach to Managing the Risks of the Modern Trading Paradigm’ (2018) 9 EJRR 146.

43 See eg Quoine Pte Ltd v B2C2 Ltd [2020] SGCA(I) 2 (Singapore Court of Appeal) paras 97–128; Chesterman, ‘Autonomy’ (n 5) 243–4.

44 Some argue that this is the most important function of separate legal personality for corporations: Hansmann, H and Kraakman, R, ‘The Essential Role of Organizational Law’ (2000) 110 YaleLJ 387. cf H Tjio, ‘Lifting the Veil on Piercing the Veil’ [2014] Lloyd's Maritime and Commercial Law Quarterly 19. It is conceivable that AI systems lacking the ability to own property could still be subject to certain forms of legal process, such as injunctions, and could offset debts through their ‘labour’.

45 King, BA, Hammond, T and Harrington, J, ‘Disruptive Technology: Economic Consequences of Artificial Intelligence and the Robotics Revolution’ (2017) 12(2) Journal of Strategic Innovation and Sustainability 53.

46 KJ Delaney, ‘The Robot that Takes Your Job Should Pay Taxes, says Bill Gates’ Quartz (18 February 2017).

47 G Prodhan, ‘European Parliament Calls for Robot Law, Rejects Robot Tax’ Reuters (16 February 2017); L Summers, ‘Robots Are Wealth Creators and Taxing Them Is Illogical’ Financial Times (6 March 2017).

48 ‘Why Taxing Robots Is Not a Good Idea’ Economist (25 February 2017).

49 cf Floridi, L, ‘Robots, Jobs, Taxes, and Responsibilities’ (2017) 30 Philosophy & Technology 1.

50 See below section II.B.

51 R Wile, ‘A Venture Capital Firm Just Named an Algorithm to Its Board of Directors’ Business Insider (13 May 2014).

52 N Burridge, ‘AI Takes Its Place in the Boardroom’ Nikkei Asian Review (25 May 2017).

53 Möslein, F, ‘Robots in the Boardroom: Artificial Intelligence and Corporate Law’ in Barfield, W and Pagallo, U (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018) 658–60.

54 See eg Personen- und Gesellschaftsrecht (PGR) 1926 (Liechtenstein) art 344; Companies Ordinance 2014 (HK), section 457. This was also possible under English law until 2015. See now Small Business, Enterprise and Employment Act 2015 (UK), section 87.

55 Bayern, S, ‘Of Bitcoins, Independently Wealthy Software, and the Zero-Member LLC’ (2014) 108 NWULRev 1495–500.

56 Bayern, S, ‘The Implications of Modern Business-Entity Law for the Regulation of Autonomous Systems’ (2015) 19 Stanford Technology Law Review 101.

57 For corporations this has sometimes proved a difficult, but not insurmountable, challenge. See Khanna, VS, ‘Corporate Criminal Liability: What Purpose Does It Serve?’ (1996) 109 HarvLRev 1513; Yockey, JW, ‘Beyond Yates: From Engagement to Accountability in Corporate Crime’ (2016) 12 New York University Journal of Law and Business 412–13; Buell, SW, ‘Criminally Bad Management’ in Arlen, J (ed), Research Handbook on Corporate Crime and Financial Misdealing (Edward Elgar 2018) 59. The fact that corporations are capable of violating criminal laws despite lacking free will or moral responsibility presumably dispenses with this as an argument against criminal responsibility of AI systems.

58 It is arguable that the symbolic role of criminal law need not require actual punishment—it is not uncommon to have laws that are unenforced in practice. Yet this typically relies on an explicit or implicit decision not to investigate or prosecute specific crimes, rather than acceptance that a class of actors cannot be punished at all.

59 Denunciation is sometimes presented as a standalone justification for punishment. See Wringe, B, An Expressive Theory of Punishment (Palgrave Macmillan 2016).

60 Mulligan, C, ‘Revenge Against Robots’ (2018) 69 SCLRev 579; Hu (n 18) 503–7.

61 Thomas, WR, ‘Incapacitating Criminal Corporations’ (2019) 72 VandLRev 905.

62 See eg Legge, D and Brooman, S, Law Relating to Animals (Cavendish Publishing 2000).

63 Loughnan, A, Manifest Madness: Mental Incapacity in the Criminal Law (Oxford University Press 2012).

64 Hamdani, A and Klement, A, ‘Corporate Crime and Deterrence’ (2008) 61 StanLRev 271. cf Guttman, RA, ‘Effective Compliance Means Imposing Individual Liability’ (2018) 5 Emory Corporate Governance and Accountability Review 77.

65 Bentham, J, ‘Panopticon Versus New South Wales’ in Bowring, J (ed), The Works of Jeremy Bentham (William Tait 1843) vol 4, 174.

66 See Ward, T and Maruna, S, Rehabilitation (Routledge 2007).

67 See eg Alschuler, AW, ‘The Changing Purposes of Criminal Punishment: A Retrospective on the Past Century and Some Thoughts About the Next’ (2003) 70 UChiLRev 9. cf Cullen, FT and Gilbert, KE, Reaffirming Rehabilitation (2nd edn, Anderson 2013).

68 Diamantis, ME, ‘Clockwork Corporations: A Character Theory of Corporate Punishment’ (2018) 103 IowaLRev 507.

69 Lemley, MA and Casey, B, ‘Remedies for Robots’ (2019) 86 UChiLRev 1370.

70 Hallevy (n 18) 210–11.

71 cf Balkin, JM, ‘The Three Laws of Robotics in the Age of Big Data’ (2017) 78 OhioStLJ 1219.

72 King, MA, Public Policy and the Corporation (Chapman and Hall 1977) 1. See eg Coffee, JC Jr, ‘“No Soul to Damn: No Body to Kick”: An Unscandalized Inquiry into the Problem of Corporate Punishment’ (1981) 79 MichLRev 386.

73 Solaiman, SM, ‘Legal Personality of Robots, Corporations, Idols and Chimpanzees: A Quest for Legitimacy’ (2017) 25 Artificial Intelligence and Law 174.

74 See eg Chopra and White (n 41) 161.

75 See eg Universal Declaration of Human Rights, GA Res 217A(III) (1948), UN Doc A/810 (1948) art 1; International Covenant on Civil and Political Rights (ICCPR) (16 December 1966) 999 UNTS 171, in force 23 March 1976, art 6(1).

76 See Allain, J (ed), The Legal Understanding of Slavery: From the Historical to the Contemporary (Oxford University Press 2013).

77 Chesterman, S, ‘“Skeletal Legal Principles”: The Concept of Law in Australian Land Rights Jurisprudence’ (1998) 40 Journal of Legal Pluralism and Unofficial Law 61.

78 Holcombe, L, Wives and Property: Reform of the Married Women's Property Law in Nineteenth-Century England (Martin Robertson 1983) 18.

79 Richards, NM and Smart, WD, ‘How Should the Law Think About Robots?’ in Calo, R, Froomkin, AM and Kerr, I (eds), Robot Law (Edward Elgar 2016) 18–21.

80 cf Damiano, L and Dumouchel, P, ‘Anthropomorphism in Human–Robot Co-evolution’ (2018) 9 Frontiers in Psychology 468.

81 See eg Hildt, E, ‘Artificial Intelligence: Does Consciousness Matter?’ (2019) 10(1535) Frontiers in Psychology 1–3; Meissner, G, ‘Artificial Intelligence: Consciousness and Conscience’ (2020) 35 AI & Society 231. For John Searle's famous ‘Chinese room’ argument, see Searle, JR, ‘Minds, Brains, and Programs’ (1980) 3 Behavioral and Brain Sciences 417–24.

82 Open Letter to the European Commission: Artificial Intelligence and Robotics (April 2018) para 2(b). cf Turner (n 40) 189–90.

83 See eg Bryson, Diamantis and Grant (n 27) 283. cf Singer, P, ‘Speciesism and Moral Status’ (2009) 40 Metaphilosophy 567.

84 See below section III.

85 See eg Fellous, J-M and Arbib, MA, Who Needs Emotions? The Brain Meets the Robot (Oxford University Press 2005); Wallach, W and Allen, C, Moral Machines: Teaching Robots Right from Wrong (Oxford University Press 2009).

86 Bentham, J, ‘Anarchical Fallacies’ in Bowring, J (ed), The Works of Jeremy Bentham (William Tait 1843) vol 2, 501.

87 J-J Rousseau, The Social Contract (GDH Cole trans, first published 1762, JM Dent 1923) 49.

88 For limited exceptions concerning apostates and persons with disabilities, see Taylor, PM, A Commentary on the International Covenant on Civil and Political Rights (Cambridge University Press 2020) 449–54. For a discussion of anencephalic infants, see Kurki (n 15) 9.

89 Katz, A, ‘Intelligent Agents and Internet Commerce in Ancient Rome’ (2008) 20 Society for Computers and Law 35; Pagallo, U, The Laws of Robots: Crimes, Contracts, and Torts (Springer 2013) 103–6. See also Ashrafian, H, ‘Artificial Intelligence and Robot Responsibilities: Innovating Beyond Rights’ (2015) 21 Science and Engineering Ethics 325; Nasarre-Aznar, S, ‘Ownership at Stake (Once Again): Housing, Digital Contents, Animals, and Robots’ (2018) 10 Journal of Property, Planning, and Environmental Law 78; Fosch-Villaronga, E, Robots, Healthcare, and the Law: Regulating Automation in Personal Care (Routledge 2019) 152. In 2017, a digital bank called Peculium was established in France—presumably for investors who never studied Latin.

90 Pagallo (n 89) 104.

91 Chinen, M, Law and Autonomous Machines: The Co-Evolution of Legal Responsibility and Technology (Edward Elgar 2019) 19.

92 See eg Kurki, VAJ and Pietrzykowski, T (eds), Legal Personhood: Animals, Artificial Intelligence and the Unborn (Springer 2017); S Stucki, ‘Towards a Theory of Legal Animal Rights: Simple and Fundamental Rights’ (2020) OJLS (forthcoming).

93 cf Sykes, K, ‘Human Drama, Animal Trials: What the Medieval Animal Trials Can Teach Us About Justice for Animals’ (2011) 17 Animal Law 273.

94 In the case of an animal known to belong to a ‘dangerous species’, its keeper is presumed to know that it has a tendency to cause harm and will be held liable for damage that it causes without the need to prove fault on the keeper's part. For other animals, it must be shown that the keeper knew the specific animal was dangerous. These common law rules on animals are now covered by legislation: Animals Act 1971 (UK). The English courts have shied away from a general doctrine of strict liability for ‘ultra-hazardous activities’. See Witting, C, Street on Torts (15th edn, Oxford University Press 2018) 453.

95 People ex rel Nonhuman Rights Project, Inc v Lavery, 998 NYS 2d 248 (App Div, 2014).

96 Abate, RS, Climate Change and the Voiceless: Protecting Future Generations, Wildlife, and Natural Resources (Cambridge University Press 2019) 10–12; Kurki (n 15) 6–10 (distinguishing between passive and active legal personhood).

97 Nonhuman Rights Project, Inc ex rel Tommy v Lavery, 54 NYS 3d 392, 396 (App Div, 2017).

98 Nonhuman Rights Project, Inc ex rel Tommy v Lavery, 100 NE 3d 846, 848 (NY, 2018).

99 Hallevy (n 18) 28.

100 Darling, K, ‘Extending Legal Protection to Social Robots: The Effects of Anthropomorphism, Empathy, and Violent Behavior Towards Robotic Objects’ in Calo, R, Froomkin, AM and Kerr, I (eds), Robot Law (Edward Elgar 2016) 226–9. cf Calo, R, ‘Robotics and the Lessons of Cyberlaw’ (2015) 103 CLR 532.

101 Kurki (n 15) 8.

102 cf ibid 176–8 (discussing whether AI could be ‘ultimately valuable’ and thus entitled to personhood).

103 Grimmelmann, J, ‘There's No Such Thing as a Computer-Authored Work — and It's a Good Thing, Too’ (2016) 39 Columbia Journal of Law & the Arts 403.

104 de Cock Buning, M, ‘Artificial Intelligence and the Creative Industry: New Challenges for the EU Paradigm for Art and Technology by Autonomous Creation’ in Barfield, W and Pagallo, U (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018) 524.

105 Burrow-Giles Lithographic Co v Sarony, 111 US 53 (1884). Arguments continued, however, with Germany withholding full copyright of photographs until 1965. Nordemann, A, ‘Germany’ in Gendreau, Y, Nordemann, A and Oesch, R (eds), Copyright and Photographs: An International Survey (Kluwer 1999) 135.

106 17 USC §102(a).

107 Compendium of US Copyright Office Practices (3rd edn, US Copyright Office 2019) section 313.2 (emphasis added).

108 See DJ Gervais, ‘The Machine as Author’ (2020) 105 IowaLRev (forthcoming) (proposing a test of ‘originality causation’).

109 C Cheesman, ‘Photographer Goes Ape over Monkey Selfie: Who Owns the Copyright?’ Amateur Photographer (7 August 2014).

110 Naruto v Slater, 888 F 3d 418 (9th Cir, 2018). The Court held that Naruto lacked standing to sue under the US Copyright Act and had no claim to the photographs Slater had published.

111 M Haag, ‘Who Owns a Monkey Selfie? Settlement Should Leave Him Smiling’ New York Times (11 September 2017).

112 Samuelson, P, ‘Allocating Ownership Rights in Computer-Generated Works’ (1986) 47 UPittLRev 1199.

113 Beijing Feilin Law Firm v Baidu Corporation (No 239) (25 April 2019) (Beijing Internet Court); Chen Ming, ‘Beijing Internet Court Denies Copyright to Works Created Solely by Artificial Intelligence’ (2019) 14 Journal of Intellectual Property Law & Practice 593.

114 深圳市腾讯计算机系统有限公司 [Shenzhen Tencent Computer System Co Ltd] v 上海盈讯科技有限公司 [Shanghai Yingxun Technology Co Ltd] (24 December 2019) (Shenzhen Nanshan District People's Court); Zhang Yangfei, ‘Court Rules AI-Written Article Has Copyright’ China Daily (9 January 2020).

115 Raustiala, K and Sprigman, CJ, ‘The Second Digital Disruption: Streaming and the Dawn of Data-Driven Creativity’ (2019) 94 NYULRev 1603–4.

116 cf Asia Pacific Publishing Pte Ltd v Pioneers & Leaders (Publishers) Pte Ltd [2011] 4 SLR 381, 398–402 (Singapore Court of Appeal) (distinguishing between authorship and ownership).

117 Lim, D, ‘AI & IP: Innovation & Creativity in an Age of Accelerated Change’ (2018) 52 AkronLRev 843–6.

118 Copyright, Designs and Patents Act 1988 (UK), section 9(3). ‘Computer-generated’ is defined in section 178 as meaning that the work was ‘generated by computer in circumstances such that there is no human author of the work’.

119 Copyright Act 1994 (NZ), section 5(2)(a).

120 Copyright Amendment Act 1994 (India), section 2.

121 Copyright Ordinance 1997 (HK), section 11(3).

122 Copyright and Related Rights Act 2000 (Ireland), section 21(f).

123 See eg Nova Productions v Mazooma Games [2007] EWCA Civ 219; Brown, A et al., Contemporary Intellectual Property: Law and Policy (5th edn, Oxford University Press 2019) 100–1.

124 S Séjourné, Draft Report on Intellectual Property Rights for the Development of Artificial Intelligence Technologies (European Parliament, Committee on Legal Affairs, 2020/2015(INI), 24 April 2020) paras 9–10.

125 ibid, Explanatory Statement.

126 Copyright, Designs and Patents Act, section 12(7) (protection for such works is limited to 50 years, rather than 70 years after the death of the author), section 79 (exception to moral rights).

127 Revised Issues Paper on Intellectual Property Policy and Artificial Intelligence (World Intellectual Property Organisation, WIPO/IP/AI/2/GE/20/1 REV, 21 May 2020) para 23. See also du Sautoy, M, The Creativity Code: Art and Innovation in the Age of AI (Harvard University Press 2019) 102; Abbott, R, The Reasonable Robot: Artificial Intelligence and the Law (Cambridge University Press 2020) 71–91.

128 The patents were for a ‘food container’ and ‘devices and methods for attracting enhanced attention’. DABUS is an acronym for Device for the Autonomous Bootstrapping of Unified Sentience.

129 Patents Act 1977 (UK), sections 7, 13. See Whether the Requirements of Section 7 and 13 Concerning the Naming of Inventor and the Right to Apply for a Patent Have Been Satisfied in Respect of GB1816909.4 and GB1818161.0 (BL O/741/19) (4 December 2019) (UK Intellectual Property Office) paras 14–20. The tribunal went on to observe that Thaler could not have acquired ownership from DABUS ‘as the inventor cannot itself hold property’ (para 23).

130 European Patent Convention, done at Munich, 5 October 1973, in force 7 October 1977, art 81, rule 19(1).

131 Grounds for the EPO Decision on Application EP 18 275 163 (27 January 2020) (European Patent Office) para 22.

132 In re Application No.: 16/524,350 (Decision on Petition) (22 April 2020) (US Patent and Trademark Office).

133 Whether the Requirements of Section 7 and 13 Concerning the Naming of Inventor and the Right to Apply for a Patent Have Been Satisfied in Respect of GB1816909.4 and GB1818161.0 (n 129) para 28.

134 Manual of Patent Examining Procedure (MPEP) (9th edn, US Patent and Trademark Office 2017) section 2137.01.

135 Cubert, JA and Bone, RGA, ‘The Law of Intellectual Property Created by Artificial Intelligence’ in Barfield, W and Pagallo, U (eds), Research Handbook on the Law of Artificial Intelligence (Edward Elgar 2018) 418. There is a tenuous argument that scope for interpretation may lie in the fact that ‘inventor’ is defined in US law as the person who ‘invents or discovers’ the subject matter of the invention. 35 USC section 101 (emphasis added). See eg Abbott, R, ‘I Think, Therefore I Invent: Creative Computers and the Future of Patent Law’ (2016) 57 BCLRev 1098; Schuster, WM, ‘Artificial Intelligence and Patent Ownership’ (2018) 75 Wash&LeeLRev 1977; K Hartung, ‘Dear USPTO: Patents for Inventions by AI Must Be Allowed’ IP Watchdog (21 May 2020).

136 Whether the Requirements of Section 7 and 13 Concerning the Naming of Inventor and the Right to Apply for a Patent Have Been Satisfied in Respect of GB1816909.4 and GB1818161.0 (n 129) para 15.

137 Grounds for the EPO Decision on Application EP 18 275 163 (n 131) para 27.

138 In re Application No.: 16/524,350 (n 132) 4.

139 See above (n 83).

140 Grounds for the EPO Decision on Application EP 18 275 163 (n 131) para 31.

141 See eg Harrison, H, War with the Robots (Grafton 1962); Dick, PK, Do Androids Dream of Electric Sheep? (Doubleday 1968); Clarke, AC, 2001: A Space Odyssey (Hutchinson 1968).

142 P Jordan et al, ‘Exploring the Referral and Usage of Science Fiction in HCI Literature’ (2018) arXiv 1803.08395v2.

143 Bostrom, N, Superintelligence: Paths, Dangers, Strategies (Oxford University Press 2014) 22. Early speculation on superintelligence is typically traced to IJ Good, ‘Speculations Concerning the First Ultraintelligent Machine’ in FL Alt and M Rubinoff (eds), Advances in Computers (Academic 1965) vol 6, 31. Turing himself raised the possibility in a talk in 1951, later published as Turing, AM, ‘Intelligent Machinery, A Heretical Theory’ (1996) 4 Philosophia Mathematica 259–60; he in turn credited a yet earlier source: Samuel Butler's 1872 novel Erewhon.

144 See eg O Etzioni, ‘No, the Experts Don't Think Superintelligent AI is a Threat to Humanity’ MIT Technology Review (20 September 2016). He cites a survey of 80 fellows of the American Association for Artificial Intelligence (AAAI) on when they thought superintelligence (as defined by Bostrom) would be achieved: none said within 10 years; 7.5 per cent said in 10–25 years; 67.5 per cent said in more than 25 years; and 25 per cent said it would never be achieved.

145 Bradley, P, ‘Risk Management Standards and the Active Management of Malicious Intent in Artificial Superintelligence’ (2019) 35 AI & Society 319; Turchin, A and Denkenberger, D, ‘Classification of Global Catastrophic Risks Connected with Artificial Intelligence’ (2020) 35 AI & Society 147.

146 See eg Häggström, O, ‘Challenges to the Omohundro–Bostrom Framework for AI Motivations’ (2019) 21 Foresight 153.

147 See Chalmers, DJ, ‘The Singularity: A Philosophical Analysis’ (2010) 17(9–10) Journal of Consciousness Studies 7.

148 Kurki (n 15) 189.

149 See generally Bostrom (n 143) 127–44.

150 N Bostrom, ‘Ethical Issues in Advanced Artificial Intelligence’ in I Smit and GE Lasker (eds), Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence (2003) vol 2, 12.

151 Danaher, J, ‘Why AI Doomsayers Are Like Sceptical Theists and Why It Matters’ (2015) 25 Minds and Machines 231.

152 Totschnig, W, ‘The Problem of Superintelligence: Political, not Technological’ (2019) 34 AI & Society 907.

153 I Asimov, ‘Runaround’ Astounding Science Fiction (March 1942).

154 Turner (n 40) 164.

155 Kurzweil, R, The Singularity Is Near: When Humans Transcend Biology (Viking 2005) 424 (‘Our primary strategy in this area should be to optimise the likelihood that future nonbiological intelligence will reflect our values of liberty, tolerance, and respect for knowledge and diversity. The best way to accomplish this is to foster those values in our society today and going forward.’); Soares, N and Fallenstein, B, ‘Agent Foundations for Aligning Machine Intelligence with Human Interests: A Technical Research Agenda’ in Callaghan, V et al. (eds), The Technological Singularity: Managing the Journey (Springer 2017) 117–20.

156 Russell, SJ, Human Compatible: Artificial Intelligence and the Problem of Control (Viking 2019). cf Bostrom's suggestion that the goal for a superintelligence might be expressed as ‘achieve that which we would have wished the AI to achieve if we had thought about the matter long and hard’. Bostrom (n 143) 141.

157 Yudkowsky, E, ‘Complex Value Systems in Friendly AI’ in Schmidhuber, J, Thórisson, KR and Looks, M (eds), Artificial General Intelligence (Springer 2011) 388.

158 cf Omohundro, S, ‘Autonomous Technology and the Greater Human Good’ (2014) 26 Journal of Experimental & Theoretical Artificial Intelligence 308.

159 In the event that this path lies through the augmentation of humans rather than purely artificial entities, those humans are likely to remain subjects of the law.

160 cf Livingston, S and Risse, M, ‘The Future Impact of Artificial Intelligence on Humans and Human Rights’ (2019) 33 Ethics and International Affairs 141.

161 ‘Artificial Stupidity’ The Economist (1 August 1992).

162 Russell and Norvig (n 1) 3.

163 Chesterman, ‘Autonomy’ (n 5) 220.

164 Chopra and White (n 41) 191.

165 AM Turing, ‘Intelligent Machinery, a Heretical Theory’ (lecture given to the ‘51 Society’ at Manchester) (Turing Digital Archive, AMT/B/4, 1951) 6.

166 See generally Turing, D, Prof Alan Turing Decoded: A Biography (History Press 2015).

167 McEwan, I, Machines Like Me (Vintage 2019) 300.