
Bridging the accountability gap of artificial intelligence – what can be learned from Roman law?

Published online by Cambridge University Press:  18 January 2023

Klaus Heine
Affiliation:
Erasmus School of Law, Erasmus Universiteit Rotterdam, Rotterdam, The Netherlands
Alberto Quintavalla*
Affiliation:
Erasmus School of Law, Erasmus Universiteit Rotterdam, Rotterdam, The Netherlands
*Corresponding author e-mail: [email protected]

Abstract

This paper discusses the accountability gap problem posed by artificial intelligence. After sketching out the accountability gap problem we turn to ancient Roman law and scrutinise how slave-run businesses dealt with the accountability gap through an indirect agency of slaves. Our analysis shows that Roman law developed a heterogeneous framework in which multiple legal remedies coexist to accommodate the various competing interests of owners and contracting third parties. Moreover, Roman law shows that addressing the various emerging interests had been a continuous and gradual process of allocating risks among different stakeholders. The paper concludes that these two findings are key for contemporary discussions on how to regulate artificial intelligence.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press on behalf of The Society of Legal Scholars

Introduction

In recent years hardly any topic in legal scholarship has attracted as much attention as artificial intelligence (AI). This has to do with a whole array of doctrinal legal issues, ethical challenges and socio-technical expectations, as well as politically charged industrial policy. Discussions about regulating AI are most of the time an amalgam of legal doctrines, legal methods and author-specific aspirations about what AI may mean for humans and the future applicability of law. The level of sophistication in some of these discussions has been quite high. At the same time, the debates are sometimes quite controversial. On the one hand, it is argued that AI is merely another new technology that may create some challenges but will eventually be integrated into and handled by the canon of incumbent law.Footnote 1 On the other hand, it is contended that the disruption of AI concerns not only technology but also established legal routines, so that new legal designs are needed for a successful integration of AI into law.Footnote 2 That the latter set of questions is on the doorstep has only recently started to become apparent. Although engineers point out that autonomous decisions by AI are not yet the norm, legal and ethical questions can rapidly materialise given the constant advancement and improvement of AI. It is in this context that opening a debate on how law can address the challenges posed by AI proves beneficial.

Putting all specificities and branchings of the legal discourse aside, the current debate on AI rests on two pillars. A first complex of questions can be traced back to what has been called the ‘responsibility gap’ or ‘accountability gap’.Footnote 3 The accountability gap refers to the problem of allocating responsibility to AI. If an AI entity takes autonomous decisions, then the AI may also be responsible for its own decisions. But how can an AI become responsible in a system of legal obligations that is tailor-made for humans and for corporate actors which assume human decision-makers? Can AI be liable for its own decisions? If so, what would such an allocation of liability look like? An even stronger case can be made for AIs that communicate with each other and coordinate their decisions, as happens, for example, with algorithmic collusion. Would those networks create a separate legal entity that can be held accountable and that creates legal consequences for its owners? These looming questions would not be easy to address, since they challenge the incumbent legal system. Admittedly, most AI systems currently have a human in the loop – the technology not yet being mature enough for autonomous decision making. The number of AI systems that may qualify for fully autonomous decision making is, however, likely to increase; one need only think of the rising number of autonomous robots in assistance and care.Footnote 4 Thus, legal scholars have begun to investigate how to address the accountability gap.

The second pillar of the legal debate is concerned with the consequences that different legal designs for AI may have. These consequences are either ethical or economic, or a mix of both. This is clearly apparent in the EU Commission White Paper on Artificial Intelligence and in the Proposal for a Regulation laying down harmonised rules on artificial intelligence, both of which endorse an approach to AI that is human-centric (ie fulfilling certain societal values) and that acts as a catalyst for economic growth (ie aiming to raise per capita income in the EU).Footnote 5 This approach is also mirrored in the recent EU Commission proposal for an update of the Product Liability Directive, aimed at covering AI-related harms.Footnote 6 In that sense, the rules governing AI are discussed not from the doctrinal angle of consistency within a system of norms, but as socio-technological tools to achieve certain ends.Footnote 7 In the case of the EU, the aim is to catch up with the US and China by providing a legal framework for AI that facilitates EU-based business models.

This contribution will deal with the accountability gap and the associated legal challenges arising from the deployment of AI by drawing inspiration from a particular instance in Roman law. An analogy has occasionally been drawn in the literature between how the ancient Romans regulated slaves and how AI might be regulated.Footnote 8 Slaves were allowed and expected to take autonomous decisions up to a certain degree, which implied that those decisions might entail failure and damage. This made it necessary for the law to balance the risks between the master, on the one hand, and third contracting parties, on the other. An effective governance of the contractual relations of slaves was necessary to realise the economic potential of slaves for their masters and to ensure relational trust for third parties.

In this paper, we are not arguing that Roman law provides the blueprint for dealing with today's AI problems, or that it assists in defining legal personhood for robots. That would be too far-fetched, for two reasons. First, slaves were actual persons endowed with the thinking (and sentient) capacities of any human being – something that AI entities lack. Secondly, and relatedly, the range of activities that slaves could carry out was vastly broader than those an AI system can currently perform autonomously. However, Roman law provides a stock of knowledge that can help sort out certain challenges that the deployment of AI systems has started to pose and will continue to pose. In other words, our contribution aims to give guidance on the direction in which solutions for the agency problem of AI can be found, bearing in mind that technological progress is a gradual process and that the accountability gap is only nascent, given a series of attempts by today's law-makers to address it.

The paper is organised as follows. Section 1 explains how the autonomy, association and network risks of autonomous decision making can lead to the accountability gap in contemporary law. Section 2 delves into Roman law and explores how it dealt with the autonomous decision making of slaves. It will become apparent that there are striking parallels between the legal problems that had to be solved then and those needing to be solved now. It will also become clear that the accumulated knowledge embodied in Roman law offers interesting suggestions on how to shape legal designs aimed at closing the accountability gap of AI. Section 3 puts the autonomous decision making of machines in a wider context by stressing that the law's function of solving conflicts and facilitating cooperation is intrinsically linked with how the allocation of risks is balanced among different stakeholders. The paper ends with a brief conclusion.

1. The triple-helix of the accountability gap

Damages, losses and disappointed expectations cannot be avoided in a world of uncertainty and fallible knowledge. Neither law nor any other institution can simply rule out losses and misfortune. A trivial example is traffic: terrible accidents happen at sea, on the streets and in the air, but one would hardly conclude from this that traffic and transportation should be stopped. The typical answer to risk is rather to identify the decision-making entity and to constrain its sphere of activity to a degree that accords with societal standards; this may also include an obligation to compensate victims. Hence, property, contract and tort law are about locating responsibility and agency, thereby facilitating human action and trade to the benefit of the parties involved. Where necessary, the public regulation of specific activities complements private law.

Private and public law aim at the same target from different angles: the resolution of conflict by identifying the accountable agent(s).Footnote 9 Conflict resolution should be efficient in the sense that the purposes of all agents affected by a conflict are considered. That means the conflict resolution mechanisms provided by law should be informed, purposeful and should prevent strategic action to the disadvantage of third parties. There should be no accountability gap. There may be dissent over the exact meaning of ‘informed’, ‘purposeful’ and ‘strategic’, but the root problem of the accountability gap is straightforward: it refers to a missing link between a law or regulation, on the one hand, and a responsible decision or action, on the other.

The accountability gap is not a severe problem when there are appropriate tools to repair it.Footnote 10 Judges often repair smaller accountability gaps by interpreting an existing law. But there are also larger accountability gaps that cannot easily be bridged by expanding an established law, because the result would not only be a doctrinal ‘overstretch’; the deficient legal design would also lead to dysfunctional decisions and actions.Footnote 11 In these latter cases new doctrinal solutions and tools are necessary to produce socially meaningful results. Such paradigmatic shifts in law have happened in the past and are in principle not a new phenomenon. Examples include the invention of the modern limited liability company as a reaction to the new capital-intensive production possibilities of the industrial revolution,Footnote 12 the legal definitions of and ways of dealing with electricity as a sort of intangible good,Footnote 13 and the emergence of enterprise liability.Footnote 14 A similar turning point is being reached with the advent of AIs and robots. Autonomous decision making seems destined to push doctrinal routine to its limit, whether in automated contracts, the liability of surgery robots in hospitals or algorithmic collusion creating hardcore cartels.

To better understand what principal legal problems would be involved if machines were to take decisions autonomously, it is worthwhile distinguishing between three different types of risk: (1) the autonomy risk; (2) the association risk; and (3) the network risk. These three risks constitute the triple-helix of the accountability gap and may require a recalibration of responsibility between human and artificial decision makers.Footnote 15

The autonomy risk. This sort of risk may emerge when AI entities have leeway to take their own decisions based on what they have learned from (big) data. It is this type of machine autonomy that we often have in mind when we think about robots doing the job of humans. For example, it is not unrealistic to imagine that, in the future, an AI could independently formulate the terms of a contract and sign it.Footnote 16 By doing so, the AI would create a valid obligation towards the contractual partner. This does not mean that this scenario is currently happening, nor that the AI would automatically become a self-standing legal person. However, such a situation would make the AI identifiable as a distinctive entity (legal representative) in the process of contracting, with the owner (employer) of the AI as the ultimate principal vouching for the fulfilment of the contractual obligations as well as for any possible damages. An even more common example is extra-contractual liability arising from autonomous healthcare robots, where one would reasonably ask for responsibility on the part of the AI and compensation of victims. In this regard, one should note that the liability of contemporary operators excludes compensation if the operator has maintained the AI according to the state of the art of safety standards.Footnote 17 Moreover, it is not yet clear whether the software code establishing an algorithm falls under the European Product Liability Directive.Footnote 18 While it can be assumed that consumers are at the moment still sufficiently protected by legal interpretations of liability laws, sector-specific regulations and insurance (eg car liability insurance), the progress of AI technology is likely to lead to more legal inconsistencies. In addition, this growing inconsistency in legal design has the side effect of thwarting the incentive to control the developmental risk of the AI, with detrimental effects on the usage of those advanced systems. A concrete example is autonomous vehicles, since technical experts hold out the prospect of full driving automation with no need for a human driver – the so-called levels 4 and 5.Footnote 19 To counteract this scenario, one may argue for a clearer attribution of responsibility. Similarly, and as discussed below, the introduction of the corporate form in the seventeenth century made it easier to find the locus of responsibility, allowing for a more rapid advancement of the industrial revolution.

That does not mean that AIs and robots should be legally treated like humans simply because they create and sign contracts. The machines come into the world as distinctive legal entities because humans would attribute decision-making power to them for pragmatic reasons. Accordingly, the deliberate attribution of decision-making power may create a distinct locus of responsibility that is not fully covered by human oversight, even though a human owner might be in the background as the principal.Footnote 20 This mismatch between responsibility and decision making comes strikingly to the fore in academic and policy discussions when one asks for ‘explainability’ of algorithmic decision making.Footnote 21 Yet it is a core feature of machine learning that the exact reasons leading to a decision remain in a black box. That makes it a deliberate and consequentialist decision of humans to attribute responsibility to AIs for the risks that they may cause, because this legal design yields advantages for society over designs that would simply expand the incumbent law. This does not mean that AIs’ autonomy would be unrestrained or that responsibility would become a shallow category. On the contrary, it means that a socially advantageous legal design becomes integrated into the conflict resolution mechanisms of doctrinal law.

To underscore the last point, it is worth remembering the introduction of the limited liability company some 200 years ago. It, too, was not a human but a corporate actor with its own legal personality, invented against the background of colonial trade and the need to raise financial capital for the new production possibilities of the industrial revolution. Hence, the introduction of legal personhood for companies was a deliberate act to reap the benefits of technological progress and the exploration of new parts of the world.Footnote 22 The introduction of new corporate forms was thus not ad hoc, but a process of legal experimentation that continued until adequate risk allocations between a company's stakeholders had been found. Moreover, the vast literature on regulatory competition between company laws indicates that legal experimentation to find the best legal designs never comes to an end.Footnote 23 In addition, the history of company law teaches us that there is a need not for one, but for very different corporate forms with very different levels of sophistication – a point to which this paper will return in the final section.

In summary, the autonomy risk may emerge when decision-making power is delegated to AIs. This delegation occurs for good reasons, because otherwise the benefits of AI cannot be reaped. But it may bring with it a need to recalibrate accountability between a human principal and the AI as agent. This recalibration must close the accountability gap in order to resolve conflicts in the case of failures of AIs as well as to re-establish doctrinal consistency. Moreover, the accountability gap must be closed in a smart way, meaning the legal design must fulfil its purpose effectively and should facilitate the application of algorithmic decision making.

The association risk. This type of risk may materialise in man-machine associations, that is, when humans and AIs collaborate and form an entity which interacts with other entities. An illustration is a surgeon who collaborates with a surgery robot to get the best result for a patient. This can be the case of an outside medical specialist who supervises the operation of the Smart Tissue Autonomous Robot – an AI that can autonomously perform laparoscopic surgery – which is owned and operated by the hospital.Footnote 24 This scenario makes it difficult to allocate responsibilities for compensation purposes – eg whether the doctor should be considered an operator or a user.Footnote 25 Another example is the decision over a mortgage for a family house made by a bank employee in conjunction with predictive analytics software that scores a high default risk for the couple asking for the mortgage due to a bias in the model used.Footnote 26 In this scenario, it is a tall order to prove voluntary or involuntary discrimination by the bank, which had partly relied on (an opaque) AI technology.Footnote 27 In man-machine associations, man and machine bring in their comparative advantages, which meld into one service. In the case of misfortune or damage, it is barely possible to trace back sequentially all decisions made either by the machine or by the human and to allocate responsibility accordingly.Footnote 28 Therefore, those associations of man and machine may be regarded as a symbiosis that creates its own legal entity, at least as a locus for responsibility in the case of contractual and non-contractual liability.Footnote 29 This would still leave the ethical obligation with the human but recognise that the decisions have been made in conjunction with a machine. Any regulations or legal obligations are then targeted at the hybrid and not only the human(s) involved.Footnote 30 This yields the advantage that potential victims of the hybrid know exactly whom to approach in case of damages or malperformance.

The network risk. This risk type points to a scenario in which decision making is located in a network of AIs. AIs in a network learn from each other and can coordinate their decisions. Such networked AIs can do a whole range of things. Surgery robots may learn from each other around the world and boost their capabilities.Footnote 31 That is especially relevant for complex surgery that does not happen very often at a single hospital, or where the gene sequencing for vaccines is largely done by AIs.Footnote 32 In a pandemic, networked AIs learn from each other worldwide. But networked AIs also analyse stock markets and may increase the correlation of risk and decrease diversification, thereby contributing to the worsening of a systemic event and financial crises.Footnote 33 Networked AIs are also able to collude with each other and to perform cartel strategies that have not been seen before; one only has to think of sophisticated price discrimination strategies of flight or hotel booking systems. Networked AIs open the door to a new world of possibilities in all aspects of life, such as health, business, education, sustainability or policing – for better or for worse.

The most important feature of AI networks is that they take decisions without human interference. This implies that there is basically no human who could be held accountable and to whom a decision could be traced back. A poignant example of the doctrinal problems which emerge is algorithmic collusion.Footnote 34 Think, for example, of flight booking systems which learn from each other how to coordinate price discriminatory tactics. Those systems can coordinate with each other, using collusive tactics better than any human could, because quantities, qualities and prices are automatically documented in the big data. Also, keeping a cartel stable is less of a problem for AIs, because relational trust is not a valid category for a machine. The networked AIs simply keep to the collusive tactics their algorithms have learned. As such, consumers and the public may suffer considerable damages from networked AIs. Hence, public authorities will certainly stop those activities when they detect them, possibly by simply pulling the plug. That means the public attaches a consequence to behaviour that is not in the public interest and is regarded as illegitimate. For economic and ethical reasons, society does not allow algorithmic collusion.

The problem with networked AI, however, is that traditional legal doctrine has major difficulties in solving the rising challenges within a consistent system of legal reasoning. This has to do not only with the lack of human responsibility in AI networks, but also with the lack of human moral judgement that could be addressed by legal norms. In other words, legal doctrine runs into problems because there is no human to whom its routines could be addressed. This becomes clear when one looks specifically at the case of algorithmic collusion.

Collusion through networked AIs has the evident effect of an anticompetitive agreement. But an agreement requires at least a meeting of minds – the will of someone to make an offer to collude or to accept such an offer. This implies some sort of communication and intent about the agreement. This carries even more weight if a legal order attaches criminal sanctions to collusive tactics and charges them with moral sentiment. It is therefore implicitly assumed that there is a human who is responsible and morally in charge of the collusion – typically the company and its management involved in the collusion. But with networked AI there is no human who could be morally targeted, or who would be deterred by the threat of a criminal sanction. Nor can the responsibility of a human for the actions of the AI easily be demonstrated when there is no evidence of collusive intent and no documentation or communication about it.Footnote 35 The AI remains a black box, although the call for ‘explainability’ is becoming louder.

In the end, it is the lack of legal personhood that makes it impossible to integrate the case of networked AIs into the incumbent doctrinal conflict-resolution system. Incumbent legal doctrine foresees that there is at least some anchoring of decision making with humans. But networked AIs fail in this respect. There is no human in the loop that could be made accountable without overstretching the incumbent law and running into doctrinal inconsistencies. Therefore, it is reasonable to conceive networked AIs and their actions as separate legal entities that create specific risks, for which they are accountable. Those risk pools are identifiable and can be regulated as well as be obliged to pay compensation.

The incumbent legal system is not fully equipped to close the accountability gap that can emerge from the three identified risks of AI. While some attempts have been successful, a general framework that would cover all possible instances is yet to be found. It is in this context that many scholars have started discussing possible alternatives. However, this is not an entirely novel problem in legal history. There have been other instances where law had to address lacunae in accountability. One such historical occurrence is the emergence of slave-run business models in ancient Rome. The expansion of social and economic activities through slaves led the praetors, the Roman magistrates with responsibility for litigation, to introduce new legal remedies – the actiones adiecticiae qualitatis. This ‘legal invention’ allowed the establishment of a sort of indirect agency for entities which did not have legal personality and were thus subject to others’ legal authority (alieni iuris). The paper now turns to this legal invention of the praetors and relates it to today's legal problems of conceiving AIs as legal entities.

2. Mind the gap: how Romans closed the accountability gap

(a) The slaves-AI analogy

The literature on AI has occasionally looked at how the ancient Romans dealt with the accountability gap problem created by assigning business activities to slaves.Footnote 36 In both AI and slave-run businesses, the underlying problem can be framed as one of (indirect) agency. Just as the user or operator of AI cannot fully predict or control how the AI will behave and decide, so the master did not know how his slave would behave. Of course, the slave was a human, unlike AI. This implies that slaves potentially had full freedom and autonomy in carrying out any (business) activity – something that is presently beyond the abilities of AI entities. However, what is remarkably interesting for the present contribution is how Roman law dealt with a scenario in which the slave, who was not granted legal personality, could take autonomous decisions which had an effect on the master. In other words, slave-run business in Roman times concerned a situation of agency under structural uncertainty, given that micro-management of the slave by the master was either impossible or not reasonable. Hence, the fact that the slave was a human may play a role in the detailing of the incentives of the governance system, but it is less relevant for solving the structural problem of agency under uncertainty. It is on this latter aspect that the present contribution, adopting a future-oriented outlook, focuses its attention.

The agency problem between master and slave emerged after the second century BC, when ancient Rome was in the early days of becoming a hegemonic power in the Mediterranean. Military success led to a sharp increase in the number of slaves. The traditional familia expanded, containing a relatively high number of slaves. Relatedly, the pater familias tended to delegate business activities to his slaves (and/or other persons-in-power such as filii).Footnote 37 Hence, the number of slaves who acted as managers of the family business and were supposed to carry out transactions and negotiate binding contracts on behalf of their masters increased considerably.Footnote 38

This shift in ancient management practice created a new problem for the Roman regulatory framework: how to deal with the accountability gap? According to the ius civile in force at the time, masters did not have to answer for their slaves’ business activities vis-à-vis third parties, ie suppliers and customers. The guiding principle was ‘alteri stipulari nemo potest’: obligations would bind only the parties which entered directly into an agreement, and not third parties – the so-called privity of contract.Footnote 39 This regulatory approach granted considerable protection to the pater familias, who could benefit from the slaves’ business activities without being accountable for their actions – the only exception being slaves committing delicts, which rendered their master noxally liable.Footnote 40 On the other hand, contracting third parties were in a weak position: since slaves did not have legal personality, they could not be brought to court, and thus the contractors of slaves would end up with insufficient compensation even though slaves were contractually liable. The situation just described from the early days of Roman slavery seems to mirror today's situation, in which employing AIs under the EU Product Liability Directive creates legal inconsistencies and produces economically wrong incentives to employ AI.Footnote 41 It is therefore no wonder that the EU, confronted with this problem, has initiated a debate about an adaptation of the Product Liability Directive and a more coherent integration of AI into private law.

In ancient Rome, the accountability gap made the risk allocation between the parties directly and indirectly involved so asymmetric, and the incentives for achieving efficient contractual outcomes so low, that the incumbent regulatory framework could hardly be a sustainable long-term solution. Contracting third parties were simply reluctant to do business with other masters’ slaves, given that there was no legal certainty that a master would honour the terms of the contract.Footnote 42 Hence, a change in the regulatory framework was necessary.

The so-called actiones adiecticiae qualitatis were progressively introduced.Footnote 43 These were a set of remedies granted by the praetor to contracting third parties to seek legal protection against the master of a slave with whom they carried out business transactions. One may understand this as a sort of ‘piercing the corporate veil’ from the slave to the legal entity of the master. The aim of these legal remedies was to ensure some additional responsibility for the master and, indirectly, to give some sort of incentive to oversee what the slaves were doing.Footnote 44

When looking at the Roman regulatory framework, however, the part that attracts most attention from scholars is the creation of a sort of corporate limited liability through the peculium and its associated actio de peculio.Footnote 45 The peculium was an asset notionally separate from the property owned by the master (res domini). Within the financial parameters of the peculium, the slave independently administered his business transactions. In other words, the slave was given a maximum capital that vouched for his transactions. Based on this historical experience, Pagallo considered the creation of a digital peculium for AI applications.Footnote 46 Whether this already entails the necessity of creating legal personhood for AI in a strict sense is a doctrinal question that need not be answered here. Drawing a parallel between the peculium and the liability of AI is a fascinating proposal. But one must acknowledge that the establishment of a peculium and its associated actio de peculio represents only one part of the more composite regulatory landscape offered by Roman law. Other legal solutions came into play and complemented the actio de peculio.

In fact, the praetors offered six legal remedies (the actiones adiecticiae qualitatis). It is possible to distinguish these remedies according to whether they established unlimited or limited liability for the master regarding the slave's business transactions vis-à-vis contracting third parties. As further discussed below, this can be seen as a direct consequence of the more differentiated legal needs of consumers and businesses in a growing society. The actio exercitoria, actio institoria and actio quod iussu belong to the remedies granting unlimited liability. The actio de peculio, actio de in rem verso and actio tributoria are, conversely, those legal remedies that ensured the master's limited liability. The paper now reviews these six legal remedies and uses the resulting accumulated knowledge to reflect on contemporary discussions on AI. While some more specific points for today's legal issues are raised in the following subsection, the next main section adopts a more encompassing view.

(b) The specific legal remedies in Roman law

The actio exercitoria and the actio institoria were two similar remedies aimed at protecting contracting third parties which had business transactions with a slave who was either a maritime or a commercial entrepreneur. The actio exercitoria was used whenever an exercitor (either the owner of the ship or the one who rented it)Footnote 47 entrusted the management of a ship to his slave, so that the latter became shipmaster (magister navis) and could purchase equipment or goods.Footnote 48 Evidently, the actio exercitoria was a kind of insurance for the remote contractors of slaves, allowing them to trust in the cooperation of the master even though the ship was hundreds of miles away from him. The actio institoria, on the other hand, referred to the institor,Footnote 49 who was the administrator of any commercial activity.Footnote 50 As Paulus defines it, ‘A manager is a person who is appointed to buy or sell in a shop or in some other place or even without any place being specified’.Footnote 51 Thus, the actio exercitoria and the actio institoria allowed contracting third parties to sue the master, who was called upon to fulfil the obligations undertaken by the slave.Footnote 52

Under both legal remedies, the master's responsibility was limited only by the praepositio, an explicit authorisation by the master for his slave to perform (only) certain activities.Footnote 53 Hence, the master would incur unlimited liability only for transactions falling within the scope of the activities mentioned in the praepositio. Transferring this idea to the employment of AI would mean making the owner of the AI accountable only for the tasks that the AI is supposed to perform within the activities that characterise the business of its owner. In other cases, the owner would not be held accountable and, at most, liability could be shifted to the producer or programmer of the AI. How the actual allocation of responsibilities across the value chain (eg producer, operator, owner) would look in practice would depend on factors such as the level of automation or the specific sector involved. However, Roman law shows that the potential of private law does not yet seem exhausted by the contemporary proposals for regulating AI. Moreover, it hints at the possibility of legal solutions more tailored to the challenges arising from an accountability gap, as prescribed by current scholarship.Footnote 54
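By way of illustration only, the scope-limited liability of the praepositio can be rendered as a simple decision rule. The following sketch is a hypothetical illustration under our own assumptions – the entity names and task categories are invented and do not correspond to any existing statute or proposal:

```python
# Illustrative sketch only: a praepositio-style scope check for AI liability.
# All names and task categories are hypothetical assumptions, not an existing rule.

from dataclasses import dataclass

@dataclass
class Authorisation:
    """The owner's explicit mandate: the tasks the AI may perform (the 'praepositio')."""
    authorised_tasks: frozenset[str]

def liable_party(authorisation: Authorisation, task: str) -> str:
    """Return who answers for a transaction the AI concluded while performing `task`."""
    if task in authorisation.authorised_tasks:
        # Within the mandate: the owner vouches for the transaction (unlimited liability).
        return "owner"
    # Outside the mandate: liability may shift up the value chain,
    # eg to the producer or programmer of the AI.
    return "producer_or_programmer"

# Usage: an AI authorised only to procure shipping supplies.
mandate = Authorisation(frozenset({"buy_equipment", "buy_goods"}))
print(liable_party(mandate, "buy_goods"))     # -> owner
print(liable_party(mandate, "grant_credit"))  # -> producer_or_programmer
```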

The third (and last) legal remedy establishing the master's unlimited liability was the actio quod iussu.Footnote 55 This remedy aimed to provide contracting third parties with legal protection for business transactions concluded with a slaveFootnote 56 who was delegated by the master (quod iussu) to fulfil those specific transactions.Footnote 57 In addition, this legal remedy could also be brought against a master who ratified what his slave did without having authorised him beforehand.Footnote 58 The appointment by command (iussum) had more formal requirements than the praepositio: it could only occur before witnesses, by letter, on oath, or through a messenger.Footnote 59 The extent of the activities encompassed by these two types of authorisation also differed: while the iussum could be limited to a specific act, the praepositio embraced several activities. This distinction has led scholars to argue that recourse to the legal remedies varied depending on the specific context.Footnote 60 The actio exercitoria and the actio institoria were usually applicable in contexts where the slave acted as a ‘manager’, whereas the actio quod iussu was usually used for slaves who performed a single order of the master.Footnote 61

The distinction between the praepositio and the iussum might appear at first glance to be only a procedural clarification between a general and a specific rule. Instead, the main difference is that each legal remedy confined the owner's liability to a specific function of the slave's autonomy. Looking at it this way, it is possible to find another parallel to recent debates. For example, in the EU there is an ongoing debate about whether to regulate AI according to a general standard, applicable to all industries indifferently, or according to the sector and technological specificities of AIs which create certain risk levels.Footnote 62

Roman law makes us aware that the latter solution is possible. In other words, a regulatory framework can accommodate a series of remedies, each one confining liability to specific functions of AI's autonomous nature. That way, it would be possible to develop a sort of regulatory experimentation, whereby different AI entities may be subject to different liability schemes, so that the rules for AI would better align the needs of business with those of society.Footnote 63 In its proposed regulatory framework for AI, the EU foresees, at least, so-called regulatory sandboxes that will allow regulatory opt-outs for certain AI applications for a certain time.

As previously mentioned, Roman law did not only foresee cases in which the master would incur unlimited liability. Other remedies allowed for the master's limited liability. Here, the peculium played a decisive role, because it was the only source from which contracting parties could satisfy their claims vis-à-vis the slave.

The most prominent legal remedy was the actio de peculio, which allowed a party to receive legal protection for business transactions contracted with the slave (or any other person in power).Footnote 64 The master would guarantee the contract within the limits of the peculium originally granted to the slave.Footnote 65 According to Roman law, the grant of free administration of the peculium (concessio liberae administrationis)Footnote 66 was equivalent to a general authorisation for the slave to do business within the parameters of the peculium. This legal design strongly supported the entrepreneurial activities of the slave and reduced the need for those activities to be monitored by the master. Because advanced AIs will become more entrepreneurial in the future and may conclude contracts that have not been foreseen, the legal design of the peculium may become an interesting starting point for a better integration of AI into private law.Footnote 67 Regulations which simply suppress the entrepreneurial activities of AI clearly lead to economic disadvantages by foreclosing many welfare-increasing opportunities. Therefore, identifying AIs as legal entities with a specified autonomy up to a certain amount of liability specified beforehand is a sensible proposal. This would not exclude accompanying liability insurance coming into play to compensate extra-contractual damages.
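To make the mechanism concrete, a digital peculium along Pagallo's lines can be thought of as a liability fund that is fixed in advance and from which claims against the AI's transactions are satisfied. The sketch below is a minimal illustration under hypothetical assumptions (the class name, figures and first-come, first-served payout rule are ours, not part of any proposal):

```python
# Minimal sketch of a 'digital peculium': a liability fund, fixed in advance,
# out of which claims against an AI's transactions are satisfied.
# Name, figures and payout rule are hypothetical assumptions for illustration.

class DigitalPeculium:
    def __init__(self, endowment: float):
        self.balance = endowment  # the owner's maximum exposure, set ex ante

    def settle(self, claim: float) -> float:
        """Pay a creditor's claim, but never beyond what is left in the fund."""
        paid = min(claim, self.balance)
        self.balance -= paid
        return paid

# Usage: the owner endows the AI with a capped fund of 10,000.
fund = DigitalPeculium(10_000)
print(fund.settle(6_000))  # -> 6000 (paid in full)
print(fund.settle(6_000))  # -> 4000 (fund exhausted; the remainder is unrecoverable)
```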

Another remedy offered by Roman law to protect contracting parties was the actio de in rem verso.Footnote 68 This remedy was applicable whenever the benefits arising from a contract concluded by the slave were incorporated in the master's assets.Footnote 69 In other words, a master who enjoyed the benefits of the slave's transaction implicitly had an obligation vis-à-vis the third party.Footnote 70 Because of this reciprocity, some scholars posit that the actio de in rem verso was usually applicable in contexts where slaves were not business managers ‘by profession’.Footnote 71 In those cases, contracting third parties would be more likely to refer to the actio de peculio. In addition, it is noteworthy that the main distinction between the actio de in rem verso and the actio quod iussu is that the former was applicable whenever the slave performed a business transaction that was useful to the master, but without his actual knowledge.Footnote 72

The actio de in rem verso can trigger complex liability cascades and therefore plays only a niche role in today's civil laws. However, it offers an interesting perspective on the regulation of the association risk, when a human co-works with an AI. There, the AI typically works for the financial interest of its master. At the same time, the collaboration might be so close and intertwined that it is not possible to decipher whether the AI or the human is accountable for a certain action. In those cases, the actio de in rem verso gives a clear hint: make the master of the AI contractually liable if she enjoyed the benefits of the commercial collaboration. In turn, the master may seek financial relief herself from the producer or programmer of the AI. But in any case, an injured third party could demand compensation from the owner of the AI if the latter enjoyed benefits from the human-AI association, even in cases in which it is not possible to identify who caused the breach of obligations. A similar approach could be advanced in the case of network risks: if the owners of an AI enjoy the benefits from a network of AIs, they will be obliged to compensate victims. This way, the owners of an AI have a strong incentive to oversee the behaviour of AIs in forming algorithmic collusions.
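Translated into a decision rule, this benefit-based logic might be sketched as follows. This is a purely hypothetical illustration – the function name and its boolean inputs are our assumptions – showing only that compensation need not depend on identifying who caused the breach:

```python
# Hypothetical sketch of a benefit-based liability rule in the spirit of the
# actio de in rem verso: the owner answers if she enjoyed the benefits of the
# human-AI association, regardless of whether causation can be attributed.

def compensating_party(owner_benefited: bool, cause_identifiable: bool) -> str:
    """Who must compensate an injured third party of a human-AI association?"""
    if owner_benefited:
        # The owner pays first and may then seek recourse against the
        # producer or programmer of the AI.
        return "owner (with possible recourse against producer/programmer)"
    if cause_identifiable:
        return "whoever caused the breach"
    return "no liability under this rule"

# Usage: the breach cannot be traced to either human or machine,
# but the owner profited from the collaboration.
print(compensating_party(owner_benefited=True, cause_identifiable=False))
```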

Finally, the last remedy offered by the praetor was the actio tributoria.Footnote 73 With this legal remedy, it was possible to ensure a par condicio creditorum between contracting third parties and the slave's master over the assets belonging to the peculium.Footnote 74 In fact, the contracting third parties’ receivables were traditionally paid only after deducting those of the master.Footnote 75 In that way, the master could allow the slave (or another person in power)Footnote 76 to continue several business transactions in parallel without worrying about repaying all the receivables, even within the peculium. As a result, there was the chance that the slave's business would become over-indebted and default when liquidity was lacking. The introduction of the actio tributoria aimed to prevent this behaviour by the master. The master, being aware of the various debts incurred by his slave, would become liable and be treated on the same footing as contracting third parties in the distribution of the stock of the peculium (merx peculiaris).Footnote 77 As Albanese points out, Roman law could have considered the knowledge and approval by the master of a transaction with the merx peculiaris in the same mould as a praepositio.Footnote 78 However, one must note that there is a strand of scholarship which disputes whether this remedy actually belongs to the actiones adiecticiae qualitatis.Footnote 79

From the actio tributoria, too, we can learn something for today's AI regulation. The owner of an AI may be negligent in the sense that she lets an AI perform too many and/or too risky business transactions (eg financial risks), whereby her gains would be secured while those of the whole pool of third parties would not be. An example is civil law liability in the case of algorithmic collusion between two or more AIs, when single AIs may not only perform their primary task but also interact with each other to gain further benefits by coordinating their actions. Today, it is not self-evident that a doctrinal link can be made between the collusion of AIs and the owners of the AIs.Footnote 80 Within the logic of the actio tributoria, the masters of all colluding AIs would be identified because of the benefits from collusion. A financial pool would be created from which the creditors are compensated according to quotas decided by a court. In this way Roman law may give a fresh idea of how to deal with the network risk of AI.
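The par condicio creditorum underlying the actio tributoria amounts to a pro-rata division of an insufficient fund, with the master ranking equally with the third-party creditors. A minimal sketch, again with hypothetical names and figures of our own choosing:

```python
# Illustrative sketch: pro-rata distribution of an insufficient fund (the merx
# peculiaris) among all creditors, the master included on an equal footing, as
# under the actio tributoria. All figures are hypothetical.

def distribute(fund: float, claims: dict[str, float]) -> dict[str, float]:
    """Pay each creditor the same fraction of their claim (par condicio creditorum)."""
    total = sum(claims.values())
    quota = min(1.0, fund / total)  # full payment if the fund suffices
    return {creditor: amount * quota for creditor, amount in claims.items()}

# Usage: a fund of 900 against total claims of 1,200; every creditor recovers 75%.
payouts = distribute(900, {"master": 400, "supplier": 500, "customer": 300})
print(payouts)  # -> {'master': 300.0, 'supplier': 375.0, 'customer': 225.0}
```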

3. Back to the future – legal differentiation and the timing of legal innovation

In Section 2 a link was made between how Roman law regulated the relation between a master, a slave and a third party in contract law, and what we can learn from it for today's challenges of AI regulation. Central to Roman law is the master's consent to the transactions of the slave. Hence, legal protection for third contracting parties was based either on the master's explicit authorisation (praepositio and iussum) or on the establishment of a peculium. The peculium can be considered an implicit authorisation for the slave to perform autonomous business transactions for the master.

Moreover, the master's type of authorisation played a prominent role in the kind of liability the master had to incur. For example, Miceli claims that unlimited liability was based on the existence of an explicit authorisation, owing to the stable and continued cooperation between master and slave.Footnote 81 The lack of an explicit authorisation, instead, could have been the reason why the master should incur only limited liability for the slave's transaction activities.Footnote 82

Roman law foresaw context-specific ways of closing the accountability gap between masters and slaves, depending on the kind of business, the frequency of business and the experience of the slave. And this is exactly what can be learned for closing the accountability gap that can emerge between the owner of an AI, the AI and contractual third parties: the context specificity in which AIs contract, and how this places obligations on the master and third parties. To put it differently, it is doubtful whether simple extensions of incumbent private law will be sufficient to lift the full economic potential of AI. The Roman law experience would lead one to expect a much more differentiated menu of legal options. However, legal differentiation is not the only lesson to be learned from Roman law. By delving into the academic controversy on the chronological order in which the legal remedies were introduced, it is possible to infer other related observations that may also become relevant for today's AI problems.

It has already been argued that the possibility of establishing a peculium and its associated actio de peculio can be interpreted as a proto-limited liability scheme. The peculium has therefore been applauded as the zenith of Roman law making. But this overlooks the fact that the actiones adiecticiae qualitatis were not granted all at once by the praetor, but were introduced consecutively over time as adaptations to the legal needs of Roman businessmen in a prospering society.Footnote 83 The legal development of the actiones adiecticiae qualitatis was a gradual process.

According to the Institutiones of Gaius, the order of the actiones adiecticiae qualitatis was the following: actio quod iussu, actio exercitoria, actio institoria, actio de peculio and actio de in rem verso. The same order can also be found in the Digest reporting the praetorian edict, except that the actio quod iussu comes last (and not first). Hence, most Romanists believe that the legal remedies establishing the master's limited liability were the last to be introduced.Footnote 84 This conventional view can be further divided into two camps. In fact, some authors argue that the correct order was the one reported in the Institutiones.Footnote 85 A second strand of scholarship believes that it was the Digest which reported the accurate chronological order by which legal remedies emerged over time.Footnote 86

However, de Ligt suggests that the master's limited liability was only an intermediary stage, before Roman law provided legal remedies that established unlimited liability for the master.Footnote 87 This is quite an interesting observation, because it takes into account that, when the Romans started having recourse to slaves for commercial transactions, the masters’ activities were strictly separated legally from those of the slaves, and it seems unreasonable to assume that Roman law immediately established a system of unlimited liability for masters.Footnote 88 Accordingly, the praetors would at first have devoted more attention to the needs of the pater familias (ie the master), and only later shifted attention to contracting third parties, making it necessary to arrive at a more elaborate liability regime.Footnote 89

This alternative interpretation is particularly relevant because it shows that the accountability gap problem is not a merely technical problem but depends on which factor the legislator deems more relevant. If the praetor thinks in terms of the pater familias, a limited liability scheme is the logical starting point; it would be unreasonable to believe that Roman law would immediately establish unlimited liability for the pater familias. But if the praetor thinks instead in terms of the problems created by the slaves’ lack of legal personality and the resulting reluctance of third parties to contract, then an unlimited liability scheme would be the logical starting point for law making.

The gist of this debate is the question of how the risks among the various parties involved in slaves’ business activities should be allocated and which incentives this allocation of risks sets for doing business. Adopting the conventional view means that a limited liability system was introduced only relatively late in Roman history, when the praetor had realised that business activities were inhibited by quasi-unlimited liability.Footnote 90 Adopting de Ligt's alternative view means, on the other hand, that the limited liability scheme granted by the actiones adiecticiae qualitatis was introduced relatively early in Roman law as an attempt to balance the master's interests with the opposing interests of third parties.Footnote 91 On this view, it was only later that unlimited liability was permitted, when the master gave explicit authorisations to slaves with professional business experience (ie iussum and praepositio).

The lack of sufficient evidence to corroborate one interpretation over the other makes this interpretative exercise, to a certain extent, speculative. However, regardless of which interpretation is historically correct, the controversy shows two important aspects which are valuable for today's legal assessment of AI. First, the ancient Romans did not resort to only one legal solution to address the accountability gap problem. Rather, they offered a series of different regulatory solutions depending on the contextual needs that emerged at specific points in time. The legal remedies adopted at a later stage did not change the incumbent legal system but complemented it.

Secondly, the accountability gap problem, together with the pursuit of different societal goals, is essentially a matter of allocating risks among different stakeholders and choosing a starting point for legal development. If the regulator prefers the master's view, then a limited liability system will be preferred as a starting point. The master can ‘experiment’ with new business models and technologies and learn how to deal with completely new and uncertain situations, without fearing immediate bankruptcy. If, instead, the regulator adopts the third parties’ view, then the legal evolution would run from unlimited towards limited liability. In this scenario, society would value the legitimate interests of third parties over the business interests of the master. Only when the need to innovate and to boost business activities becomes stronger over time will there be a shift to a limited liability system.

These two observations have concrete policy implications when contextualised to AI. For example, they suggest that fitting AIs with limited liability – thereby facilitating entrepreneurial ventures while inhibiting more balanced and complex transactions – is not as far-fetched as one may think at first glance. The more sophisticated liability regimes might be saved for the future, when AIs have many more faculties and have become more established in society. Moreover, the observations suggest that initially opting for a certain regulatory scheme would not necessarily foreclose other possible legal solutions, especially when certain needs materialise at a later stage and create a demand for change. Hence, a more heterogeneous legal framework, in which stakeholders can have recourse to multiple legal solutions and choose the one that comes closest to their interests, seems the more sensible solution given the inherently dynamic nature of AI technology. This more open approach makes it possible for the most effective legal solution to emerge over time, rather than enshrining one specific route of legal development in stone.Footnote 92

Conclusion

This paper has dealt with the accountability gap problem that may arise from the full deployment of AI. It has argued that the technical advancements of AI create new challenges for legal scholarship, which are likely to expand further due to the increasing role of autonomous decision making and its effect on the autonomy risk, the association risk and the network risk. Incumbent law does not always seem fit to address the accountability problem without overstretching the given doctrinal law. A somewhat similar problem existed in ancient Rome. At that time, the emergence of slave-run business models required regulatory action by the praetors and the establishment of new legal routines. The regulatory response of Roman law was context-specific and geared towards the actual needs of stakeholders, ie the pater familias and contracting third parties. Relatedly, Roman law did not establish a single and exclusive legal solution: the praetors allowed for the use of several legal remedies (ie the actiones adiecticiae qualitatis) which could be chosen by the concerned stakeholders depending on their needs.

Admittedly, the accountability gap problem posed by the deployment of AI in contemporary societies has some intrinsic features that may make the comparison with the use of slaves in ancient Rome less than straightforward. For instance, slaves had thinking and sentient capacities, which AI entities lack. Hence, Roman praetors could shape the legal remedies taking into consideration their possible incentive effects on slaves’ behaviour. Although the incentives associated with legal remedies can be discussed for producers or programmers of AI applications in order to deter machine failures, they would be of no use for AI entities themselves. Indeed, whether an AI system has a sensory output which can imitate a human, or whether it is fundamentally different and what that may mean, is an epistemological question that has not yet been answered and possibly never can be.Footnote 93 On the other hand, the invention of the corporate form is testimony that an effective allocation of risks and responsibilities is not bound to the human physis. Furthermore, masters were legally liable only insofar as the actions of their slaves had generated either contractual or delictual liability. In other words, under Roman law the existence of fault by slaves had to be proven. This legal requirement cannot be so easily fulfilled when it comes to holding AI entities liable. Nonetheless, while the concept of fault may be far-fetched for AI applications, it may be possible to refer to other notions such as ‘mistakes’ or ‘unpredictable behaviour’. Lastly, while the increasing use of slavery likely led to ‘technological stagnation’ in ancient Rome,Footnote 94 AI would have the opposite effect in present times due to the self-learning capabilities of AI systems.

Bearing in mind these caveats, two important things can be learned from the study of Roman law for shaping today's legal design on AI entities. First, the coexistence of multiple remedies to deal with the accountability gap is preferable for more effectively addressing context-specific issues. This consideration becomes even more relevant given that AI is a progressively developing technology characterised by a rising degree of autonomy. This implies that there will be a continuous need to have a flexible regulatory framework since it might not always be possible to anticipate the most suitable legal solution. Accordingly, today's regulatory discussions should not be focused on finding the one and only optimal solution for closing the accountability gap, but on devising a more heterogeneous framework in which different legal solutions coexist.Footnote 95

Secondly, the analysis of Roman law showed that multiple regulatory solutions are the outcome of a continuous and gradual process in which the functions of new law unfold over time in response to the actual needs society faces. However, unlike in Roman times, the legislator, academia and other stakeholders today seem to have a sufficiently clear picture of the various interests at stake. This makes it easier to develop – in the first instance – multiple legal solutions, from which the concerned parties could in most cases choose. The actual regulatory choice would then create a path for learning and legal development. In summary, this contribution has demonstrated that it is possible to draw lessons from legal history for the future design of law. This does not necessarily imply the re-enactment of old legal solutions, but rather treating past experience as a source of guidance and inspiration for modelling the regulation of artificial intelligence.

Footnotes

We would like to thank Tammo Wallinga, Shu Li and the participants of the ZiF conference ‘Economic and Legal Challenges in the Advent of Smart Products’ for their valuable comments. The usual disclaimer applies.

References

1 See, for example, the report of the Plattform Industrie 4.0, Kuenstliche Intelligenz und Recht im Kontext von Industrie 4.0, issued by the German Federal Ministry of Economic Affairs and Energy in 2019. See also the 2019 Report from the Expert Group on Liability and New Technologies ‘Liability for Artificial Intelligence and Other Emerging Digital Technologies’, which recognises some challenges for current liability law but is overall confident that staying within the perimeter of incumbent law will be sufficient for the future.

2 H Liu et al ‘Artificial intelligence and legal disruption: a new model for analysis’ (2020) 12(2) Law, Innovation and Technology 205; G D'Agostino et al (eds) Leading Legal Disruption: Artificial Intelligence and a Toolkit for Lawyers and the Law (Thomson Reuters, 2021).

3 See eg A Matthias ‘The responsibility gap: ascribing responsibility for the actions of learning automata’ (2004) 6(3) Ethics and Information Technology 175.

4 A Pirni et al ‘Robot care ethics between autonomy and vulnerability: coupling principles and practices in autonomous systems for care’ (2021) 8 Frontiers in Robotics and AI 184. Some practical examples of autonomous AI in healthcare are mobile servant robots, Hobbit mutual care robots for the elderly, and nursing robots for the elderly. See eg respectively https://www.care-o-bot.de/en/care-o-bot-4.html; http://hobbit.acin.tuwien.ac.at/#:~:text=HOBBIT%20is%20a%20research%20project,persons%20feel%20safe%20at%20home; and https://theindexproject.org/post/nursebot.

5 See respectively European Commission White Paper on Artificial Intelligence: A European Approach to Excellence and Trust, COM(2020) 65 final and European Commission Proposal for a Regulation laying down harmonised rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union legislative acts, COM(2021) 206 final.

6 See European Commission Proposal for a Directive on adapting non-contractual civil liability rules to artificial intelligence (AI Liability Directive), COM(2022) 496 final.

7 For a discussion of this approach see for example H Albert ‘Critical rationalism: the problem of method in social sciences and law’ (1988) 1(1) Ratio Juris 1.

8 U Pagallo ‘Killers, fridges, and slaves: a legal journey in robotics’ (2011) 26(4) AI & Society 347; P Čerka et al ‘Liability for damages caused by artificial intelligence’ (2015) 31(3) Computer Law & Security Review 376; T Izumo ‘Digital specific property of robots: a historical suggestion from Roman law’ (2018) 1 Delphi 14.

9 C Harlow ‘“Public” and “private” law: definition without distinction’ (1980) 43(3) The Modern Law Review 241.

10 A Scalia ‘Common-law courts in a civil-law system: the role of United States Federal Courts in interpreting the constitution and laws’ in A Gutmann (ed) A Matter of Interpretation (Princeton University Press, 2018) p 3.

11 G Teubner ‘Digital personhood: the status of autonomous software agents in private law’ (2018) Ancilla Juris 35.

12 JD Turner ‘The development of English company law before 1900’ in H Wells (ed) Research Handbook on the History of Corporate and Company Law (Edward Elgar, 2018).

13 K Pistor and C Xu ‘Incomplete law’ (2003) 35 NYU Journal of International Law and Politics 931.

14 GL Priest ‘The invention of enterprise liability: a critical history of the intellectual foundations of modern tort law’ (1985) 14 Journal of Legal Studies 461.

15 Teubner, above n 11.

16 For instance, the increasing use of software agents to initiate or mediate electronic transactions has captured the attention of the literature owing to the potentially disruptive legal issues it raises. See eg T Balke and T Eymann ‘The conclusion of contracts by software agents in the eyes of the law’ in L Padgham et al (eds) Proceedings of the 7th International Joint Conference on Autonomous Agents and Multiagent Systems – Volume 2 (AAMAS, 2008) p 771 and G Sartor ‘Cognitive automata and the law: electronic contracting and the intentionality of software agents’ (2009) 17(4) Artificial Intelligence and Law 253.

17 See eg Teubner, above n 11.

18 For a discussion, see also the 2019 Report from the Expert Group on Liability and New Technologies on Liability for Artificial Intelligence.

19 SAE International ‘SAE levels of driving automation™ refined for clarity and international audience’ (3 May 2021), available at https://www.sae.org/blog/sae-j3016-update.

20 This is, for example, the case with the European Commission's new AI liability proposal, which regulates those instances where causation cannot be proven due to the autonomous nature of AI, thereby acknowledging the possible existence of an accountability gap.

21 P Hacker et al ‘Explainable AI under contract and tort law: legal incentives and technical challenges’ (2020) 28 Artificial Intelligence and Law 1.

22 See eg G Dari-Mattiacci et al ‘The emergence of the corporate form’ (2017) 33 Journal of Law, Economics, and Organization 193.

23 See eg R Romano The Genius of American Corporate Law (American Enterprise Institute, 1993) and, with special reference to regulation of AI, U Pagallo ‘Apples, oranges, robots: four misunderstandings in today's debate on the legal status of AI systems’ (2018) 376(2133) Philosophical Transactions of the Royal Society A 20180168.

24 A Shademan et al ‘Supervised autonomous robotic soft tissue surgery’ (2016) 8 Science Translational Medicine 337 at 341.

25 K Prifti et al ‘Digging into the accountability gap: operator's civil liability in healthcare AI-systems’ in B Custers and E Fosch-Villaronga (eds) Law and Artificial Intelligence: Regulating AI and Applying AI in Legal Practice (Springer, 2022) p 279.

26 S De Conca ‘Bridging the liability gaps: why AI challenges the existing rules on liability and how to design human-empowering solutions’ in Custers and Fosch-Villaronga (eds), above n 25, p 239 at pp 246–247.

27 A possible mitigation for the couple would be to claim extra-contractual liability from the bank. However, this possibility would still present a series of limitations. On this, see ibid.

28 A related question is how far legal rules establish an association risk in the first place. For instance, Art 22 GDPR states that an individual can demand human intervention in solely automated processing. Accordingly, the scope of application of the said article would not encompass man-machine association. Moreover, the current regulatory framework has drawn strong criticism from scholars over (the lack of) a right to explanation. See eg S Wachter et al ‘Why a right to explanation of automated decision-making does not exist in the General Data Protection Regulation’ (2017) 7(2) International Data Privacy Law 76.

29 Teubner, above n 11, and see, for a more general discussion, A Fiebich et al ‘Cooperation with robots? A two-dimensional approach’ in C Misselhorn (ed) Collective Agency and Cooperation in Natural and Artificial Systems: Explanation, Implementation and Simulation (Springer, 2015) p 25.

30 Teubner, above n 11.

31 A Van Wynsberghe and L Shuhong ‘A paradigm shift for robot ethics: from HRI to human–robot–system interaction (HRSI)’ (2019) 9 Medicolegal and Bioethics 11.

32 S Bagabir et al ‘Covid-19 and artificial intelligence: genome sequencing, drug development and vaccine discovery’ (2022) 15(2) Journal of Infection and Public Health 289.

33 S Assad et al ‘Autonomous algorithmic collusion: economic research and policy implications’ (2021) 37(3) Oxford Review of Economic Policy 459.

34 SK Mehra ‘Antitrust and the robo-seller: competition in the time of algorithms’ (2016) 100 Minnesota Law Review 1323.

35 Ibid.

36 Pagallo, above n 8; Čerka et al, above n 8; Izumo, above n 8. Other scholars analysing slave-run business models in Roman law focused on the depersonalisation of business: see B Abatino et al ‘Depersonalization of business in ancient Rome’ (2011) 31(2) Oxford Journal of Legal Studies 365.

37 A Di Porto ‘Il diritto commerciale romano: una “zona d'ombra” nella storiografia romanistica e nelle riflessioni storico-comparative dei commercialisti’ in S Romano (ed) Nozione, formazione e interpretazione del diritto dall'età romana alle esperienze moderne, Ricerche dedicate al Professor Filippo Gallo. Vol 3 (Jovene, 1997) p 413 at p 413.

38 JJ Aubert Business Managers in Ancient Rome: A Social and Economic Study of Institores, 200 BC–AD 250 (Brill, 1994) p 3.

39 B Nicholas An Introduction to Roman Law (Oxford University Press, 1962) p 199.

40 F Serrao ‘Responsabilità per fatto altrui e nossalità’ (1970) 12 Bullettino dell'Istituto di Diritto Romano 125.

41 See eg A Guerra et al ‘Liability for robots I: legal challenges’ (2022) 18(3) Journal of Institutional Economics 331.

42 An additional problem of lesser magnitude arising from the incumbent regulatory framework was that slaves were not entitled to transfer property through mancipatio. See Aubert, above n 38, pp 3–4 and 48.

43 P Bonfante Istituzioni di diritto romano (Giuffrè, 1987) p 147.

44 M Marrone Istituzioni di diritto romano (Palumbo, 2nd edn, 1994) p 197; A Wacke ‘Alle origini della rappresentanza diretta: le azioni adiettizie’ in S Romano (ed) Nozione, formazione e interpretazione del diritto dall'età romana alle esperienze moderne, Ricerche dedicate al Professor Filippo Gallo. Vol 2 (Jovene, 1997) p 583 at p 585 ff.

45 See eg A Di Porto Impresa collettiva e schiavo ‘manager’ in Roma antica (II sec. a.C.–II sec. d.C.) (Giuffrè, 1984) pp 42–57; A Watson Roman Slave Law (Johns Hopkins University Press, 1987) p 95; F Serrao Impresa e responsabilità a Roma nell'età commerciale (Pacini Editore, 2002) pp 61–64.

46 U Pagallo The Laws of Robots: Crimes, Contracts, and Torts (Springer Science & Business Media, 2013) pp 103 and 132.

47 Dig 14.1.1.15 (Ulpian 28 ad ed).

48 Dig 14.1.1.1-3 (Ulpian 28 ad ed).

49 Dig 14.3.7.1-2 (Ulpian 28 ad ed) and Dig 14.3.8 (Gaius 9 provincial ed).

50 Dig 14.3.5.1-9 (Ulpian 29 ad ed).

51 Dig 14.3.18 (Paulus sing de var lect). The translation in English is provided by A Watson The Digest of Justinian, vols 1–4 (University of Pennsylvania Press, 1998).

52 For the actio exercitoria, see Dig 14.1.4.2 (Ulpian 29 ad ed). For the actio institoria, Dig 14.3.5.11-18 (Ulpian 28 ad ed). See also Gaius Inst 4.71.

53 See eg Dig 14.1.1.3; Dig 14.1.1.7; Dig 14.1.1.12 (Ulpian 28 ad ed); Dig 14.3.5.11 (Ulpian 28 ad ed).

54 S Li et al ‘Liability rules for AI-related harm: law and economics lessons for a European approach’ (2022) European Journal of Risk Regulation 1 at 11.

55 Dig 15.4 (Ulpian 29 ad ed). See also Gaius Inst 4.70.

56 Dig 15.4.1.9 (Ulpian 29 ad ed).

57 Gaius Inst 4.70.

58 Dig 15.4.1.6 (Ulpian 29 ad ed).

59 Dig 15.4.1.1 (Ulpian 29 ad ed).

60 P Cerami and A Petrucci Lezioni di diritto commerciale romano (Giappichelli, 2002) p 46.

61 Ibid.

62 European Commission White Paper, above n 5.

63 Pagallo, above n 23.

64 Dig 15.1.11.1pr (Ulpian 29 ad ed); Dig 15.1.27pr (Gaius 9 provincial ed). See also Gaius Inst 4.73.

65 Dig 15.1.3pr (Ulpian 29 ad ed).

66 Dig 15.1.7.1 (Ulpian 29 ad ed).

67 Pagallo, above n 23.

68 Dig 15.3.3.5 (Ulpian 29 ad ed).

69 Dig 15.3.5.3 (Ulpian 29 ad ed). See also Aubert, above n 38, p 64.

70 Dig 15.3.5pr (Ulpian 29 ad ed).

71 As to the question on the need to prove the peculium, see Dig 15.3.1pr (Ulpian 29 ad ed) and Gaius Inst 4.74.

72 Dig 15.3.5.2 (Ulpian 29 ad ed). See also Dig 15.3.5.1 (Ulpian 29 ad ed).

73 Gaius Inst 4.72.

74 Dig 14.4.5.6 (Ulpian 29 ad ed).

75 Dig 15.1.30pr (Ulpian 29 ad ed) and Dig 15.1.9.2-3 (Ulpian 29 ad ed).

76 Dig 14.4.1.5 (Ulpian 29 ad ed); Dig 14.4.5.3 (Ulpian 29 ad ed).

77 Dig 14.4.1 (Ulpian 29 ad ed) and Dig 14.4.5.5 (Ulpian 29 ad ed).

78 See also Dig 14.4.3pr (Ulpian 29 ad ed); B Albanese Le persone nel diritto privato romano (Montaina, 1979) p 160.

79 TJ Chiusi Contributo allo studio dell'editto De tributoria actione (Atti della Accademia nazionale dei Lincei 3(4), 1993) p 276 at pp 347–395; Aubert, above n 38, p 70.

80 See eg Mehra, above n 34.

81 M Miceli Sulla struttura formulare delle ‘actiones adiecticiae qualitatis’ (Giappichelli, 2001) p 207.

82 M Talamanca Istituzioni di diritto romano (Giuffrè, 1990) p 86.

83 Aubert, above n 38, p 70.

84 P Cerami et al Diritto commerciale romano (Giappichelli, 2nd edn, 2004) pp 11–41.

85 E Costa Le azioni exercitoria e institoria nel diritto romano (Battei, 1891) p 24; Albanese, above n 78, p 160; G Longo ‘Actio exercitoria – actio institoria – actio quasi institoria’ in Studi in onore di Gaetano Scherillo (Istituto Editoriale Cisalpino-La Goliardica, 1972) p 581 at p 582; Aubert, above n 38, p 76. Note that Aubert proposes that the actio institoria predates the actio exercitoria.

86 S Solazzi ‘L'età dell'actio exercitoria’ in S Solazzi Scritti di diritto romano. Vol IV: 1938–1947 (Jovene, 1963) pp 259–262; E Valiño ‘Las “actiones adiecticiae qualitatis” y sus relaciones básicas en derecho romano’ (1967) 37 Anuario de Historia del Derecho Español 339.

87 L de Ligt ‘Legal history and economic history: the case of the actiones adiecticiae qualitatis’ (1999) 67 Tijdschrift voor Rechtsgeschiedenis 205.

88 Ibid, at 213.

89 Ibid, at 212.

90 A Petrucci ‘Ulteriori osservazioni sulla protezione dei contraenti con gli institores ed i magistri navis nel diritto romano dell'età commerciale’ (2002) 53 IVRA 17.

91 de Ligt, above n 87.

92 Pagallo, above n 23.

93 For a recent review of this question and expanding it to moral philosophy and law see K Heine ‘Human rights, legal personality and artificial intelligence: what can epistemology and moral philosophy teach law?’ in A Quintavalla and J Temperman (eds) Artificial Intelligence and Human Rights (Oxford University Press, forthcoming).

94 A Schiavone The End of the Past: Ancient Rome and the Modern West (Harvard University Press, 2000) p 135.

95 Pagallo, above n 23.