Robots are not humans: they are “mere” machines that do as we tell them. They have no “will,” no “consciousness,” and no autonomy in the sense that humans do. As with dolls and diaries, we may be tempted to attribute a kind of agency to them, “recognizing” their inner mind, believing they understand our language and share our zest for life. As with dolls and diaries, they may trigger our imagination and help us generate new ideas while we interact with them, though, as with dolls and diaries, we need to emancipate ourselves from naïve beliefs that they are capable of suffering humiliation or experiencing joy. It is hard to steer clear of, on the one hand, the attribution of human agency to lifeless contraptions that execute complex, mathematically informed programs and, on the other hand, the idea that they are mere tools like hammers, mechanical cars, or newspapers. Unlike previous technologies, robots that thrive on machine learning can anticipate our behaviors and – depending on their program – pre-empt us by tweaking the choice architecture that channels our action potential. In that sense, robots are agents, though agents with “mindless minds.”
This is a new chapter in the history of our relationship with our environment. We must learn to deal with the fact that these new types of agents can diminish or enhance our own agency, based on upstream design decisions taken by engineers who are keen on modeling our user behavior, hoping to make their machines ever more effective in steering us in the direction chosen by whoever pays for their design. As data-driven design is fundamentally probabilistic, whoever develops, provides, or deploys these robots takes the risk of harm due to errors, misuse, or unforeseen behaviors, and such risk-taking raises notable questions of guilt, wrongfulness, and causality.
The release of ChatGPT has demonstrated how fluent our robot parrots have become and how easily they can convince us of the salience of their output. The release of large language models also reminds us of the extent to which these models are prone to producing what Harry Frankfurt famously analyzed as “bullshit.” Frankfurt distinguished bullshit from lying, explaining that whoever lies still cares about the truth, whereas those who bullshit have no interest in the truth, only in serving their own interests. Machines have no interests, not even in the truth. In that sense, their hallucinations are beyond both lying and bullshit. But when discussing criminal liability, the law of evidence, and criminal procedure, it is important to remember that even if positive law could very well attribute legal personhood to robots, there cannot be moral personhood for systems incapable of anything beyond the execution of – possibly highly complex and sophisticated – instructions.
The lack of moral personhood of robots highlights the well-known issues of who should be held liable for the harm caused by the potentially unpredictable behavior of these systems. These issues, in turn, confront us with the difference between criminal law, private law, and administrative and constitutional law. Whereas the attribution of private law liability to an AI system could at some point make sense, provided that those who took the risk of harming or diminishing others are not let off the hook, the attribution of criminal law liability is another matter. Blaming a system that has no intentionality in the sense of Brentano, i.e., intentionality as awareness of the world, would disrupt the foundational framework that has informed criminal law in constitutional democracies. Data-driven robots process data that serve as a proxy for the world they need to navigate, but they have no stake of their own in that world and no way of sensing, thinking, and acting as we do (which may raise some red flags regarding some of the definitions proposed in this volume). They have been programmed to model the distribution of the data, whether based on labeled examples (supervised learning), on pattern recognition (unsupervised learning), or on goals defined in a way that a machine can optimize (reinforcement learning). In the latter case, their output can be further “aligned” with the intended outcome by training on human preferences (reinforcement learning from human feedback). None of this, however, makes them aware of their environment. They can only process the data they are trained on, following the mathematics that defines their model construction. The ingenuity, imagination, and novelty of their operations and output are the result of human investment; it is the developers, providers, deployers, and end-users who create, shape, and reconfigure robotic systems.
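To make concrete what “modeling the distribution of the data” amounts to, the following minimal sketch (not drawn from the volume; the data, the single parameter, and the learning rate are hypothetical) shows a supervised learner adjusting one parameter to fit example pairs by gradient descent. The machine performs arithmetic on proxies and ends up mirroring its training data; at no point does it become aware of what the numbers stand for.

```python
# A minimal, illustrative sketch of supervised learning: fitting one
# parameter w to hypothetical examples drawn from y = 2x + noise.
import random

random.seed(0)
data = [(x, 2.0 * x + random.gauss(0, 0.1))
        for x in [0.1 * i for i in range(20)]]

w = 0.0    # the single parameter the machine "learns"
lr = 0.05  # learning rate

# Gradient descent on mean squared error: pure arithmetic over the
# training data, with no awareness of what x or y refer to in the world.
for epoch in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= lr * grad

print(f"learned w = {w:.3f}")  # close to 2.0: the model mirrors its data, nothing more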
This edited volume takes the challenge of mindless, data-driven agency seriously, seeking to reconsider key tenets of substantive and procedural criminal law. Moreover, this volume reaches beyond an inquiry into the fitness of doctrinal intricacies that were developed for another era, when law was, if anything, text-driven. The final part devotes keen attention to how we can explain to ourselves what the role of robots can and should be in the context of constitutional democracies and how this implicates the criminal law. All this engages the pivotal question of what world we want to live in, share, and reconstruct, turning the volume into a crucial intervention in the debate on how criminal law should respond to the integration of robots into everyday life. Thanks to a star line-up of authors who bring a diversity of perspectives to bear on the same pressing issue, readers will find themselves both enlightened and perplexed, on the verge of a better understanding of the complex underlying issues and real-world challenges posed by the design and the deployment of data-driven robots.