A. Introduction
Before us sat the ultimate plaything, the dream of ages, the triumph of humanism—or its angel of death.Footnote 1
The early skeptics who feared that the development of autonomous agents could have highly destructive consequences were not, it turns out, unduly apocalyptic. It is increasingly acknowledged that non-human autonomous agents can commit not only garden-variety domestic crimes but also the most serious international crimes.
The best-known examples of autonomous agents committing crimes are drones killing civilians during armed conflict. There is a growing literature on the damage that drones and other autonomous weapons can inflict on civilians and other non-military targets.Footnote 2 But there is increasing recognition that autonomous agents can also commit other international crimes, including some of the most paradigmatic international crimes, namely genocide and torture. Robots are employed for interrogation purposes, which means they can inflict torture. And bots are used to spread hate speech and intensify discrimination, which has triggered ethnic violence and genocide.Footnote 3
But can a human be held accountable for the autonomous actions of a non-human machine? More complex yet: can an autonomous agent be held accountable for its actions? This article will consider the consequences of the ability of AI to commit international crimes and what the appropriate model of liability could be. It will consider these questions against the backdrop of the fundamental legal constraint that only natural persons have legal personality before the International Criminal Court (ICC). It will proceed from the view that it is not sufficient to hold the persons or programmers behind the autonomous agents liable but that it should be possible to hold the autonomous agents that commit international crimes liable.
The reference to “machines” in this article should be understood to include “agents” that are semi- or fully autonomous. The article acknowledges the various degrees to which actors can be autonomous or non-autonomous. It will be argued that whereas it is easier to attribute liability to autonomous agents that are controlled, to a greater or lesser extent, by humans, it is also possible to construct a framework for the liability of machines that operate in a “fully” autonomous way, in other words without human intervention.
The article will start by considering the kinds of international crimes that are committed by machines and then ask whether it follows that machines can be held accountable and, if so, what form liability should take given the current accountability gap. It will be argued that the requirement of fault does not present an insurmountable obstacle to finding liability in international criminal law (ICL). It will be suggested that strict liability could form an acceptable basis for holding machines accountable for international crimes. Strict liability, it will be argued, can serve some of the same purposes of international criminal law that are served by fault liability.
Awarding legal personality to machines and then attributing strict liability to machines in the context of international crimes will necessitate a rethinking of the centrality of individual criminal liability and the insistence on human agency in international criminal law. It will also require a rethinking of the purposes of international criminal law and the extent to which the preoccupation with the individual continues to serve these purposes. This holds dramatic consequences for legal subjectivity. It will be argued that if there is a continuum of potential subjects of ICL, then the argument for electronic personhood and liability of machines is as compelling as for other non-humans such as corporate entities and animals. Because autonomous warfare represents a paradigm shift from traditional forms of warfare, it needs a legal approach that is qualitatively different from what existed in the past, also with regard to how international criminal law accommodates such warfare.
B. Definitions
The definition of artificial intelligence (AI) has not remained stable; it has changed and evolved multiple times.Footnote 4 A definition formulated in 2018 holds that artificial intelligence is “a branch of computer science focused on programming machines to perform tasks that replicate or augment aspects of human cognition.”Footnote 5 Allen defines AI as the “ability of a computer system to solve problems and to perform tasks that would otherwise require human intelligence.”Footnote 6 A House of Lords Select Committee on Artificial Intelligence defined AI as “technologies with the ability to perform tasks that would otherwise require human intelligence.”Footnote 7
The absence of a single, agreed-upon definition of AI does not necessarily obstruct one from holding AI liable for committing or aiding and abetting international crimes. Most definitions of AI share certain common elements, such as the feature that the intelligence of AI actors resembles human intelligence.Footnote 8 The variance in definition does not affect the key questions of the criminal accountability of AI because all AI actors, regardless of definition, lack key elements such as fault or mens rea. Regardless of the definition of AI one chooses, one would not be able to insist on the traditional elements of criminal law and would have to construct a new model of electronic liability.
There is also no consensus on a single definition for the term “autonomous,” which is an element of most definitions of AI. In some cases, autonomy is taken to mean the ability of a system to operate successfully without human intervention. Christof Heyns, former Special Rapporteur on extrajudicial, summary, or arbitrary executions, described autonomous weapons as weapons “that, once activated, can select and engage targets without further human intervention.”Footnote 9 But stating that a system operates “without human intervention” does not fully capture the nuance and complexity of autonomy. In other cases, autonomy is conflated with the lack of human control. The French definition qualifies the absence of human intervention as the absence of “human supervision, meaning there is absolutely no link (communication or control) with the military chain of command.”Footnote 10 The French definition of autonomy still caters for the possible presence of human involvement: under this definition, autonomous weapons may even be deployed under some form of human control but essentially operate independently from humans.
C. How AI Commits International Crimes
Autonomous agents can commit core international crimes of genocide, war crimes, crimes against humanity and the crime of aggression (in other words, international crimes over which the ICC has subject matter jurisdiction)Footnote 11 as well as non-core international crimes such as human trafficking. Antonio Cassese describes the core crimes as “a category comprising the most heinous offences.”Footnote 12
The crimes committed by Autonomous Weapons Systems (AWS) have received a good deal of attention. AWS is understood as any weapon system that can independently select and attack targets, including some existing weapons.Footnote 13 AWS can commit war crimes such as the indiscriminate killing of civilians. Less attention has been paid to how the war crime of torture and other international crimes can be committed by autonomous agents. Autonomous agents have been used to commit human trafficking as well as to sell, buy, and possess banned drugs.Footnote 14
Autonomous agents can be used to commit several of the core international crimes. Most of the literature has focused on the harm that AWS can inflict in wartime or conflict. But autonomous agents can also “commit” international crimes outside the context of armed conflict. From the perspective of accountability, the ways in which the crimes of torture and genocide can be committed by autonomous agents are particularly interesting and important. By way of example, two contexts in which the actions of autonomous agents can become relevant under international criminal law are outlined in turn.
I. Torture
To start with, AI can potentially commit torture. This can occur either physically or psychologically. Autonomous agents can be said to “commit” torture when AI developers integrate AI capabilities into an interrogation system. Thomasen defines a robot interrogator as “any automated technology that examines an individual through questioning . . . for purposes of eliciting incriminating statements or confessions.”Footnote 15 McAllister describes the environment that makes torture by “robots” or autonomous systems possible: “increasing developments in human-computer interaction, research and physiological measurement devices, combined with the inability of humans to act as effective lie detectors and the traditional reliance on technology to enhance human capacities.”Footnote 16 Automated deception detectors have indeed been used for some time in the form of prototype robotic guards, for example, for border control in the United States.Footnote 17 McAllister, describing the reasons governments find autonomous interviewers attractive, stated, “Indeed, autonomous interviewers would permit governments and agencies reliable, low-intrusive, and cost-effective means of interviewing large groups of people quickly or detecting weapons at common security points.”Footnote 18
According to Thomasen, the robot interrogator is expected to be “a tireless, bias-free, faster, more effective and more accurate version of a human interrogator.”Footnote 19 Because sensor technologies can be streamlined into the interrogator, it is easier to conceal their ability to search suspects.Footnote 20 Using AI to inflict torture is further attractive because the deployer of the AI may be able to detach themselves both emotionally and physically.Footnote 21 The fact that the deployer can distance himself or herself physically means that they will not be performing the actus reus under current definitions of torture, which require the victim to be under the control or in the custody of the torturer.Footnote 22
As robots cannot experience pain or empathy and cannot feel compassion, the mere presence of an interrogation robot may cause the subject of interrogation to talk out of fear. What further adds to the effectiveness of robot interrogation is that the interrogatee knows that the robot cannot understand pain or experience empathy and is therefore unlikely to act with mercy and stop the interrogation.Footnote 23 But this does not mean that robots cannot be programmed to emulate human traits. What makes robots suitable interrogators is that they have the capacity to use personality traits such as flattery and intimidation to manipulate the interlocutor.Footnote 24
The idea of an autonomous robot in an interrogative space which self-sufficiently conducts an interrogation invokes a multitude of questions, including the question of what constitutes lawful interrogation, torture, and cruel, degrading and inhuman treatment.Footnote 25 Further, AI actors can be the direct perpetrators of international and transnational crimes. AI can also be used as a tool by humans when AI aids and abets criminal activities. Examples of AI aiding and abetting criminal activities include smuggling (by using unmanned vehicles, for example), as well as torture, sexual offences, fraud and theft. As Sehrawat puts it: “[T]hey can be held responsible under different modes of liability, such as for attempting, assisting, facilitating, aiding, abetting, planning, or instigating the commission of a war crime.”Footnote 26
II. Genocide
The second example of an international crime that might be committed, or assisted, by autonomous agents is genocide. AI tools used on social media can significantly exacerbate the stages of genocide. The first stage of genocide pertains to classifying groups,Footnote 27 that is the creation of an “us” versus “them” dichotomy between people of different race, ethnicity, religion or nationality. Such divisions may escalate into removing or denying a group’s citizenship and stripping them of their civil and human rights.Footnote 28 AI can exacerbate such divisions by facilitating and accelerating the spread of hatred and deepfakes. Through the use of bots, the content of social media messages can be tailored specifically to vilify targeted groups.Footnote 29
The case of international crimes committed against the Uighurs is an example of humans using technology to commit genocide. Millions of Uighurs are currently experiencing the consequences of the misuse of AI technologies. China has used an intrusive surveillance smartphone application named the “Integrated Joint Operations Platform” in order to track Uighurs.Footnote 30 In the region of Xinjiang, machines developed with AI technologies are able to check and scan IDs at checkpoints and alert the police or military in case surveillance cameras detect what Chinese officials call “suspicious activity.”Footnote 31 The use of facial recognition technology has also contributed to China’s ability to target Uighurs.
A further example is the genocide in Myanmar. Some believe that Facebook exacerbated this genocide.Footnote 32 Members of the Myanmar military used Facebook to systematically target the country’s Muslim minority, the Rohingya. In the view of human rights groups, the anti-Rohingya propaganda incited large-scale murder and rape.Footnote 33 For example, a Facebook post from 2018 showed a photograph of a boatload of Rohingya refugees with the words: “Pour fuel and set fire so that they can meet Allah faster.”Footnote 34 A report by the UN’s Independent International Fact-Finding Mission on Myanmar, which was established to make recommendations on genocide and other crimes committed in Myanmar, concluded that Facebook had been a useful instrument for those seeking to spread hate against the Rohingya.Footnote 35
D. The Need for Electronic Liability
The need to construct a model of electronic liability is not self-evident. In some cases where autonomous agents or machines directly or indirectly commit international crimes or aid and abet such crimes, it will be clear that there is an identifiable human being “behind” the machine, operating or programming the machine’s actions either directly or indirectly. However, it will more often be the case that the person behind the machine is not identifiable or very difficult to identify. The identity of the perpetrator is likely to be deliberately hidden or obfuscated. As a result, the human actors will escape international criminal liability.
Further, an approach that attaches liability to the human behind the machine is also not realistic given the current stage of development of autonomous agents and weapons. Machines or systems of machines have already reached such a level of autonomy that they can identify an enemy target and take lethal action without human input.Footnote 36 Examples of such autonomous machines include aerial vehicles and submersible vehicles, as well as ground-based vehicles with an attached lethal weapon.Footnote 37 And drones can in principle be controlled either autonomously or by a human controller.
This creates an accountability gap that can be filled by constructing electronic liability. This will prevent the perpetrators of crimes from escaping liability by relying on the anthropocentric features of criminal law. It is important not to create incentives for persons to hide behind autonomous agents or for persons to use autonomous agents to hide the true perpetrators of crimes. Electronic liability will allow the “unseen” manufacturer or programmer of an autonomous agent to still pay the price or suffer the consequences of the actions of the machine or autonomous agent. However, it should be noted that imposing electronic liability will not mean that the persons directing the machine, to the extent that they can be identified, cannot or should not be found liable in addition to the machine. The imposition of electronic liability for the commission of international crimes through machines will simply help fill the current accountability gaps in the law.
In addition, electronic liability will prevent perpetrators from escaping liability by relying on narrow definitions of crimes. In the case of torture, for example, the actus reus requires physical proximity: the torturer cannot detach themselves physically from the person being tortured. The Rome Statute defines torture as, “[T]he intentional infliction of severe pain or suffering, whether physical or mental, upon a person in the custody or under the control of the accused; except that torture shall not include pain or suffering arising only from, inherent in or incidental to lawful sanctions.”Footnote 38
The current definition of torture therefore requires the torture victim to be under the control or custody of the accused, implying physical proximity, and does not allow for torturing someone from a remote location.Footnote 39 Using machines to torture therefore makes it easier for torturers to obfuscate liability and circumvent the law. Former UN Special Rapporteur on Torture Nils Melzer has remarked that many countries have invested “significant resources towards developing methods of torture which can achieve purposes of coercion, intimidation, punishment, humiliation or discrimination without causing readily identifiable physical harm or traces.”Footnote 40
E. Overcoming the Obstacles to AI Criminal Liability
The most significant obstacles for electronic liability to be recognized are the following: First, the idea that international criminal law is—only—based on individual criminal responsibility (and that for the purpose of individual criminal responsibility, individuals are understood to be humans); second, the fact that machines are not considered to possess legal personality; and third, the requirement of fault in finding criminal liability. This section shows how these obstacles can be overcome.
I. Individual Criminal Responsibility
One of the most significant contributions the discipline of international criminal law has made to the field of international law generally was the attribution of criminal liability to individuals. Whereas international courts such as the International Court of Justice have attributed liability exclusively to states, the ad hoc international criminal tribunals revolutionized criminal responsibility in international law by not only allowing individual criminal liability but basing their jurisdiction on it.
As a result, humans are still the measure of all things in international criminal law. According to Lostal, the ICC has “clearly espoused the human exceptionalism theory as shown by the contextual, systemic interpretation and subsequent practice around the notion of ‘person.’”Footnote 41 For example, when the preamble of the Rome Statute mentions victims, it does so by making reference to “children, women and men.”Footnote 42 According to Lostal, other norms of the legal framework refer to persons as a shorthand to indicate “human being.” Article 1 of the Statute states that the Court “shall have the power to exercise its jurisdiction over persons for the most serious crimes of international concern”; Article 26 refers to jurisdiction over “persons”; and Rule 123 RPE relates to measures to ensure the presence of the “person” concerned at the confirmation hearing. The term “persons” is also omnipresent in the Rome Statute’s definitions of crimes.
Prosperi and Terrosi have similarly described existing international criminal law as “essentially anthropocentric.”Footnote 43 It can be said that international criminal law defines itself in terms of individual criminal responsibility. Article 25 of the Rome Statute states that the court shall have jurisdiction over natural persons and sets out the circumstances under which an individual shall be held responsible. Article 25(2) states: “A person who commits a crime within the jurisdiction of the Court shall be individually responsible and liable for punishment in accordance with this Statute.” Individual criminal responsibility is applicable where an individual directly commits a crime or directly contributes to it through ordering, planning, instigating, inciting, co-perpetration, joint criminal enterprise, aiding and abetting.Footnote 44
However, although the notion of “person” has so far been interpreted as “human person,” the wording of these provisions of the Rome Statute would be open to a more inclusive interpretation. As “person” is, in other (international) legal contexts, often interpreted as including both natural and legal persons, it would be possible to include artificial persons as well. The anthropocentrism of international criminal law could thus be overcome by adopting a broader interpretation of already existing norms. However, given the dramatic implications of extending this notion to AI actors, the Rome Statute should be amended to include AI explicitly.
II. Legal Personality
Even if the anthropocentrism of international criminal law can be overcome, liability can only be attributed to machines once machines have been assigned legal personality. Although non-humans do not currently possess legal personality before the ICC or in most domestic jurisdictions, the concept of legal personality has been altered many times throughout history to keep up with societal developments. Andrew Clapham reminds us that not that long ago a doctrinal debate was raging over the question whether individuals could be subjects of international law.Footnote 45
The proposal to recognize legal personality for machines or “electronic personhood” has been controversial. Such legal personality would mean that particular highly sophisticated robots and software agents are the addressees of legal duties and obligations and the holders of legal rights.Footnote 46 The debate has been fuelled by a resolution adopted by the European Parliament on Civil Law Rules on Robotics. This resolution made recommendations to the European Commission according to which, in the long run, legislators should award legal personality to some very advanced AI systems.Footnote 47 The underlying idea is that if it is increasingly difficult to trace harm triggered by AI back to any kind of human behaviour, it would follow that one can hold AI itself liable.Footnote 48
The idea that machines can hold legal personality has been heavily criticized. Much of the criticism and resistance has its roots in ethical considerations. It is believed that an “attempt to put machines on an equal footing with human beings and afford them the same or similar rights is apt to help delude the fundamental difference between human beings and things.”Footnote 49
One’s willingness to grant legal personality to autonomous agents will therefore largely depend on how one views the “person” in legal personhood. Those who believe that legal personality consists of nothing more than the formal capacity to bear a legal rightFootnote 50 will be more comfortable bestowing legal personality on autonomous agents than those who think that there is always a necessary connection between moral and legal persons and who attribute metaphysical qualities to personhood.Footnote 51
The definition of legal personality that separates morality from subjectivity is the most inclusive definition of personhood in that it can, potentially, be all-embracing. As Naffine argues, “anything can be a legal person because legal persons are stipulated as such or defined into existence.”Footnote 52 According to this understanding of personhood, legal persons can include animals, fetuses, the dead, the environment, corporations, or whatever else the law accepts into the community of persons.Footnote 53
According to Novelli, Bongiovanni and Sartor, many authors support the idea of conferring legal personality on AI. They mention not only the advanced cognitive abilities of AI actors but also other “new elements” introduced by AI:
Yet what seem to characterize AIs, to the point of introducing new elements into the debate on legal personality, are the sociotechnical profiles resulting from the deployment of artificial intelligence agents, e.g., the marked unpredictability of their decision-making processes and the impact (both positive and negative) that these processes may have on people’s lives, on society, and on the market; the ability of such systems to communicate and network; the involvement of different human players in the production and implementation of such systems, each with different potential responsibilities; and the difficulty, sometimes the impossibility, of tracking the relevant human players.Footnote 54
In some domestic jurisdictions, such as New Zealand, there have been developments relating to awarding legal personality to non-humans such as lands and rivers.Footnote 55 And in the context of animal rights, it has been argued that nonhuman species should be acknowledged as possessing limited legal personality.Footnote 56 Awarding legal personality to animals is extremely rare but not unheard of. In 2015, a judge in Argentina awarded legal personality to an orangutan.Footnote 57 But US courts have not been similarly adventurous. In June 2022, a US court refused to recognize the legal personality of an elephant called Happy, illustrating that the law on whether animals enjoy legal personality is still inconsistent.Footnote 58 These developments show that, despite existing controversies, the legal concept of personhood is currently broadening. Attributing legal personality to AI would thus be in line with this trend.
III. Fault as Requirement for Liability
The third apparent obstacle to AI criminal responsibility is the fault requirement for liability. Criminal liability requires both a physical element (actus reus) and a mental element (mens rea). Criminal law requires the presence of mens rea before it can be said that a crime has been committed.Footnote 59 Mens rea refers to a blameworthy mental state and can consist of intention or negligence. It is this mental element that presents the sticking point in terms of holding machines accountable under international criminal law.
To overcome this difficulty, the concept of strict liability constitutes one of the possible legal frameworks for AI accountability. Strict liability has been defined as follows: “crimes which do not require mens rea or even negligence as to one or more elements in the actus reus are known as offences of strict liability.”Footnote 60 As AI systems are not capable of meeting existing criminal law principlesFootnote 61 and the requirements of both factual and mental elements in particular, many believe strict liability is the most appropriate liability model in this context. The next section outlines this possibility in more detail.
However, strict liability would be a novelty in the criminal law context. So far, there has been much resistance to strict liability as a basis for criminal liability.Footnote 62 In the domestic context, courts have argued that strict liability should never be the basis for retributive punishment, and that it is also a weak basis for deterrence.Footnote 63 South African courts, for example, are hostile to strict liability and will only deviate from the principle of no liability without fault if there are clear and convincing indications.Footnote 64 Moreover, the more serious a crime, the less likely it is that domestic criminal law systems will allow for strict liability. As Grant states plainly, serious crimes require intention. Domestic systems such as that of the US thus quite comfortably attach strict liability to traffic offences,Footnote 65 but attaching strict liability to serious offences such as murder or homicide is highly controversial.
Because of the centrality of fault in our understanding of crime, some argue that the absence of fault means that machines are not capable of committing crimes, but that terms such as “malfunctioning” should be used instead.Footnote 66 But, as McAllister writes, to only attach liability to those things or beings philosophically capable of intent would defeat the state parties’ original intent in drafting and adopting the CAT: to prohibit all torture and cruel, inhuman and degrading treatment, not merely torture by those things.Footnote 67 This argument applies not only to torture but to international crimes more generally.
F. Models of Electronic Liability
There are two potential models of electronic liability: strict liability and command responsibility. Both are addressed in turn.
I. Strict liability
Strict liability does not fit easily into traditional understandings of international criminal law. In the Bemba case, the ICC Pre-Trial Chamber went as far as stating that the Rome Statute disapproves of strict liability. When the Pre-Trial Chamber examined the requirement that “the suspect either knew or, owing to the circumstances at the time, should have known” about the relevant crimes, the Chamber stated that “the Rome Statute does not endorse the concept of strict liability,”Footnote 68 meaning that “attribution of criminal responsibility for any of the crimes that fall within the jurisdiction of the Court depends on the existence of the relevant state of mind or degree of fault.”Footnote 69 This statement by the ICC can be interpreted as meaning that individuals will not be held strictly liable. In contrast, due to the individual-centredness of the Rome Statute, the court has so far been silent on how to deal with the liability of non-humans such as machines.
Fault-based liability is often retributory in its aims. Because strict liability cannot result in retribution, it is considered not fitting in the context of international criminal law which adopts, at least in part, a model of retributive justice. But retributive justice is just one of various models of justice, and retribution just one of various purposes served by international criminal justice. Apart from retribution, international criminal law is also believed to serve the purposes of deterrence, promoting peace and security, strengthening accountability, creating a historical record, and truth-telling.Footnote 70 Victims’ participation and victims’ protection can be added to the traditional list of purposes.Footnote 71 Strict liability can serve many of these purposes as well as fault-based liability does. And in some cases, the policy considerations that will be served by electronic liability, such as enhancing public trust, promoting legal certainty and risk control,Footnote 72 are as important as the more traditional purposes, such as deterrence.
A key benefit of introducing strict liability for AI would be its strong “symbolic” value and the fact that it is likely both to enhance public trust in the mass roll-out of AI and to put an end to legal uncertainty.Footnote 73 Further, although controversial,Footnote 74 the goal of deterrence is already a well-established goal of international criminal law, and strict liability can promote deterrence as much as fault liability can. In domestic law, deterrence as a public policy consideration has already been accepted as justification for applying a strict liability standard. In the context of product liability, courts have confirmed that public policy demands that liability for loss should fall or be placed where it will most effectively deter such loss from recurring. In the landmark US decision on product liability, Escola v. Coca-Cola Bottling Co., the court stated:
Even if there is no negligence, however, public policy demands that responsibility be fixed wherever it will most effectively reduce the hazards to life and health inherent in defective products that reach the market. It is evident that the manufacturer can anticipate some hazards and guard against the recurrence of others, as the public cannot. Those who suffer injury from defective products are unprepared to meet its consequences. The cost of an injury and the loss of time or health may be an overwhelming misfortune to the person injured, and a needless one, for the risk of injury can be insured by the manufacturer and distributed among the public as a cost of doing business.Footnote 75
In the context of autonomous agents with the potential to commit international crimes, Zech argues that strict liability can be an instrument for risk distribution.Footnote 76 The risk lies with the injurer. The risk controller must consider whether the expected benefit of an activity exceeds its risk.Footnote 77 Social media companies such as Facebook and Twitter that allow the proliferation of hateful content should be held liable rather than the end user, because the end user is not able to predict the harm or to protect himself or herself against it.
In the context of damage caused by machines, where fault is not a useful construct in finding liability, liability will nevertheless be anchored in causation. As Weinrib writes, under strict liability, causation is decisive to a defendant’s liability.Footnote 78 Essentially, the requirement of fault falls away, and causation becomes a more important requirement. The requirement of causation prevents strict liability from running rampant; it acts as a check or limitation on strict liability.
A strict liability approach would solve many of the problems attached to searching for human agents “behind” autonomous agents and holding them accountable. An approach that holds those who design, program, or create autonomous agents liable is not necessarily just. The complexity of the autonomous agent’s programming could make it possible that the designer, developer, or deployer would neither know nor be able to predict the AI’s criminal act or omission.Footnote 79 For this reason, liability should not rest on knowledge or intent because it might create an incentive for human agents to avoid finding out what exactly the machine learning system is doing.Footnote 80 It is also not true that robots will do only what they are programmed to do. As Grut writes, “[P]rograms with millions of lines of code are written by teams of programmers, none of whom knows the entire program; hence, no individual can predict the effect of a given command with absolute certainty, since portions of large programs may interact in unexpected, untested ways . . . .”Footnote 81
In addition, there remains the question of whether autonomous robots would even obey orders or be capable of recognizing a chain of command.Footnote 82
A final concern with attributing strict liability to machines is finding an appropriate sanction. When a criminal defendant is deemed strictly liable in criminal law or in tort law, he or she may be ordered to pay compensatory damages.Footnote 83 Autonomous agents, like corporations, cannot be imprisoned, but they can be made to feel the brunt of any misconduct through a panoply of sanctions.Footnote 84 In the absence of the option of imprisonment, a finding of “electronic liability” can be punished by imposing a fine. Yet, some will find it difficult to comprehend how a monetary fine could serve as an equitable punishment for the violation of a jus cogens norm.Footnote 85 Fines coupled with other remedies, such as guarantees of non-repetition, might thus be more appropriate than “mere” fines.
II. Command Responsibility
In addition to strict liability, command responsibility can potentially provide a framework to address the accountability gap caused by AI. The model of command responsibility is codified in Article 28 of the Rome Statute and was applied in Bemba. Command responsibility rests on the presumption of the negligence of the commander(s) who authorized the deployment of an autonomous weapon that commits an illegal act.
Command responsibility extends to actions committed by the forces under the commander’s “effective control.”Footnote 86 The superior’s liability is not so much active as it is a liability arising from violating the duty to prevent the illegal actions of a party over which the superior exercises professional control.Footnote 87 The vicarious criminal liability that results from command responsibility implicates a commander in many of the acts committed by subordinate forces that violate international law. On the battlefield, the subordinate forces may include AWS in their arsenal of capabilities.Footnote 88 In this context, command liability amounts to operator liability.Footnote 89
In addition, Corn has suggested that command responsibility could be extended to the procurement officials who bring AWS into a government’s inventory. This approach would ensure that “decision-making officials and not technicians or legal advisers,” that is, those who endorse the developing technological know-how of AWS, are the individuals held accountable should any unlawful outcomes result.Footnote 90
Although applying command responsibility to the AI context is thus a feasible option, it remains doubtful whether such an approach would be an appropriate solution to the accountability gap. For example, Human Rights Watch has expressed concern that it is “arguably unjust” to hold commanders to account for the actions of machines “over which they could not have sufficient control.”Footnote 91 Rather than command responsibility, strict liability appears to be the better option.
G. Conclusion
The ICC will only be able to meaningfully prosecute international crimes committed by autonomous agents if it is willing to accommodate strict liability and other faultless models of liability that have so far been anathema to international criminal justice. To do so, it would have to make the giant leap of moving away from fault as the central requirement for criminal liability. It would also have to open up its notion of legal subjectivity.
The Rome Statute follows the philosophy articulated at Nuremberg that men, not abstract entities, commit crimes against international law.Footnote 92 But the insistence on finding human agency behind international crimes will not serve the victims of drone attacks and international crimes committed by autonomous agents. Individuals will ultimately only be sufficiently or meaningfully protected if legal personality is not enjoyed only by individuals or humans.Footnote 93
Accepting and constructing “electronic liability” will necessitate an (undoubtedly seismic) shift away from the individual-centredness of ICL. But amending the Rome Statute is nothing new. It is now necessary to amend some of the provisions that have previously been considered foundational, even sacrosanct. According to David Luban, one of the legacies of Nuremberg was enlarging the reach of the law.Footnote 94 He writes that the lawmakers at Nuremberg “viewed their own words and deeds from the perspectives of a distant more pacific age.”Footnote 95 It can be asked whether the drafters of the Rome Statute were similarly prescient and forward-looking when they restricted legal personality to natural persons.
To accommodate electronic liability, the Rome Statute should be amended to explicitly extend its personal jurisdiction to legal persons. Articles 1 and 25(1) should be amended to include legal persons.Footnote 96 This might require extensive amendments to the Rome Statute, but the alternative is that the ICC becomes increasingly irrelevant in fighting impunity for the most serious crimes known to mankind.
If a thing without a tangible form, such as a corporation, can be a legal person, then it is no great conceptual leap to also confer legal personality on a thing that does have a physical existence. The roots of the insistence on confining personality to individuals should be re-examined. It should be asked whether the purposes of ICL—both its normative principles and its more policy-oriented aims—are best served by rigidly clinging to individuals as the only subjects of ICL.
Acknowledgement
Professor Swart would like to thank the editors of this special volume for helpful comments.
Competing Interests
The author declares no competing interest.
Funding Statement
No specific funding has been declared in relation to this article.