2.1 New Technologies and the Rise of the Algorithmic Society
New technologies offer human agents entirely new ways of doing things.Footnote 1 However, as history shows, ‘practical’ innovations always bring with them more significant changes. Each new option introduced by technological evolution, by allowing new forms, affects the substance, eventually changing the way humans think and relate to one another.Footnote 2 This is especially true of information and communication technologies (ICT); as Marshall McLuhan put it, ‘the medium is the message’.Footnote 3 Furthermore, this scenario has been accelerated by the appearance of artificial intelligence systems (AIS) based on the application of machine learning (ML).
These new technologies not only allow people to find information at an incredible speed; they also recast decision-making processes once in the exclusive remit of human beings.Footnote 4 By learning from vast amounts of data – the so-called Big Data – AIS offer predictions, evaluations, and hypotheses that go beyond the mere application of pre-existing rules or programs. They instead ‘induce’ their own rules of action from data analysis; in a word, they make autonomous decisions.Footnote 5
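To give a minimal, purely illustrative sense of what this ‘induction’ means in practice, the short Python sketch below (the loan data, the library call, and the resulting threshold are all invented or assumed for the example, not drawn from any system discussed in this chapter) lets a simple model derive a decision rule from a handful of labelled examples rather than from a programmer’s instructions.

```python
# Minimal, invented illustration of "inducing rules from data": nobody writes
# the decision rule; a model derives it from labelled examples.
from sklearn.tree import DecisionTreeClassifier, export_text

# Toy loan data: [income_in_thousands, number_of_existing_debts]; 1 = repaid, 0 = defaulted
X = [[20, 3], [25, 2], [60, 1], [80, 0], [30, 4], [90, 1]]
y = [0, 0, 1, 1, 0, 1]

model = DecisionTreeClassifier(max_depth=1).fit(X, y)
print(export_text(model, feature_names=["income", "debts"]))
# The printed tree contains a threshold (here something like "income <= 45")
# that no programmer specified: the rule was induced from the examples.
```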
We have entered a new era, in which big multinational firms (so-called ‘platforms’) use algorithms and artificial intelligence to govern vast communities of people.Footnote 6 In turn, the data generated on those platforms fuel the engine of the ‘Algorithmic Society’.Footnote 7
From this point of view, the Algorithmic Society is a distinctive evolution of the ‘Information Society’,Footnote 8 where a new kind of ‘mass-surveillance’ becomes possible.Footnote 9
This progress generates a mixture of excitement and anxiety.Footnote 10 Algorithms and artificial intelligence technologies are becoming ubiquitous and seemingly omnipotent. They promise to eliminate our errors and to make our decisions better suited to any purpose.Footnote 11
From this perspective, a relatively old prophecy, made by Herbert Marcuse in One-Dimensional Man, one of the ‘red books’ of the socio-political movement usually known as ‘1968’, becomes reality. Marcuse opens that seminal book as follows:
A comfortable, smooth, reasonable, democratic unfreedom prevails in advanced industrial civilization, a token of technical progress.
Indeed, what could be more rational than the suppression of individuality in the mechanization of socially necessary but painful performances; … That this technological order also involves a political and intellectual coordination may be a regrettable and yet promising development. The rights and liberties which were such vital factors in the origins and earlier stages of industrial society yield to a higher stage of this society: they are losing their traditional rationale and content. …
To the degree to which freedom from want, the concrete substance of all freedom, is becoming a real possibility, the liberties that pertain to a state of lower productivity are losing their former content. … In this respect, it seems to make little difference whether the increasing satisfaction of needs is accomplished by an authoritarian or a non-authoritarian system.Footnote 12
If technology replaces all ‘socially necessary but painful performances’ – work included – personal freedom reaches its final fulfilment (that is, its very end). In Marcuse’s eyes, this is how technological power will take over our freedom and political system: not through a bloody ‘coup’ but by inducing people – practically and happily – to give up all their responsibilities.
However, this dystopian perspective – a future of ‘digital slavery’, where men and women will lose their liberty and quietly reject all democratic principlesFootnote 13 – produces a reaction. It is no coincidence that the European Commission, while strategically endorsing the transformation of the EU into an AI-led economy, at the same time calls for great attention to people’s trust and a high level of fundamental rights protection.Footnote 14
One of the areas where we most commonly experience these concerns is public and private security.Footnote 15 From the 2010s onward, a large part of technological innovation has focused on safety and control; the consequence has been an alarming increase in public and private surveillance, coupled with growing threats to political and civil liberties.Footnote 16 In addition, the global COVID-19 pandemic has undoubtedly boosted the already fast-growing ‘surveillance capitalism’.Footnote 17
While at the beginning of the twenty-first century there was growing awareness of the risks of the new pervasive surveillance technologies, today – hit by the pandemic and searching for practical tools to enforce social distancing or control policies – the institutional and academic debate seems less worried about liberty-killing effects and more allured by health-preserving results.Footnote 18
Regardless, the most worrying challenges stem from the increasing power of algorithms, created through Big Data analytics such as machine learning and used to automate decision-making processes.Footnote 19 Their explicability,Footnote 20 liability, and culpability are still far from being clearly defined.Footnote 21 As a consequence, several scholars and policymakers argue, on the one hand, for aggressive regulation of tech firmsFootnote 22 (since classic antitrust law is unfit for this purpose) or, on the other, for procedural safeguards allowing people to challenge algorithmic decisions that can have significant consequences for their lives (such as credit scoring systems).Footnote 23
2.2 The Impact of the Algorithmic Society on Constitutional Law
As we know, at its very origin, constitutional theory wrestles with the problem of controlling power.Footnote 24 Scholars commonly consider constitutional law to be that part of the legal system whose function is to legallyFootnote 25 delimit power.Footnote 26 In the ‘modern sense’,Footnote 27 this discipline establishes rules or builds institutions capable of shielding personal freedoms from external constraints.Footnote 28 According to this idea, constitutionalism has historically always ‘adapted’ itself to the features of power; that is to say, the protection of freedoms in constitutions has been shaped by the evolving character of the threats to those same freedoms.Footnote 29
At the beginning of the modern era, the power to be feared was the king’s private force.Footnote 30 The idea of ‘sovereignty’, which appeared at the end of the Middle Ages, had its roots in the physical and military strength of the very person of the ‘Sovereign’.Footnote 31 Sovereignty evoked an ‘external power’Footnote 32 grounded on the monopoly (actual or potential) of the physical ‘force’Footnote 33 used against individuals or communities (e.g., ‘military force’ or the ‘force of law’).Footnote 34 Consequently, liberties were those dimensions of human life not subjected to that power (e.g., habeas corpus). As the offspring of the French and American Revolutions, the ‘rule of law’ doctrine was the main legal tool ‘invented’ by constitutional theory to delimit the king’s power and protect personal freedom and rights. To be ‘legitimate’, any power has to be subjected to the rule of law.
The other decisive turning point in the history of constitutionalism was World War II and the end of twentieth-century European totalitarian regimes. It may sound like a paradox, but those regimes showed that the ‘legislative state’, built on the supremacy of law and therefore exercising a ‘legitimate power’, can become another terrible threat to human freedom and dignity.
If the law itself has no limits, then whenever it ‘gives’ a right, it can also ‘withdraw’ it. This is the inhuman history of those European twentieth-century states that cancelled human dignity ‘through the law’.
With the end of World War II, the demolition of those regimes began and, learning from the American constitutional experience, Europe transformed ‘flexible’ constitutions – until then, mere ordinary laws – into ‘rigid’ constitutions,Footnote 35 which are effectively the ‘supreme law’ of the land.Footnote 36
In this new scenario, the power that instils fear is no longer the king’s private prerogative; the new limitless force is the public power of state laws, and the constitutional tool intended to effectively regulate that power is vested in the new ‘rigid’ constitution: a superior law, ‘stronger’ than ordinary statutes and thus truly able to protect freedoms, at least apparently, even against legal acts.
With the turn of the twenty-first century, we witness the rise of a new kind of power. With the advent of new digital technologies, as discussed previously, an unprecedented means of limiting and directing human freedom has appeared on the global stage; a means based not on an ‘external’ force (as in the two previous constitutional scenarios, the private force of the king or the public ‘force of the law’) but rather on an ‘internal’ force, able to affect and eventually replace our self-determination ‘from inside’.Footnote 37
This technological power is at the origin of ‘platform capitalism’,Footnote 38 the vast economic transformation induced by the exponentially growing markets for Internet-related goods and services – for example, smart devices (Apple, Samsung, Huawei, Xiaomi), web search engines (Google), social media corporations (Facebook, Instagram, Twitter), cloud service providers (Amazon, Microsoft, Google), e-commerce and streaming companies (Amazon, Netflix), and videoconferencing platforms (Zoom, Cisco Webex).
Consider that today,Footnote 39 the combined value of the S&P 500’s five most prominent companiesFootnote 40 stands at more than $7 trillion, accounting for almost 25 per cent of the market capitalization of the index and drawing a picture of what recent scholarship has accurately defined as a ‘moligopoly’.Footnote 41
These ‘moligopolists’Footnote 42 not only create communities and benefit from the network effects generated by users’ transactions; they also develop a de facto political authority and influence once reserved for legal and political institutions. More importantly, they are taking on configurations that are increasingly similar to those of the state and other public authorities.Footnote 43 Their structure reflects a fundamental shift in the political and legal systems of Western democracies – what has been called a new type of ‘functional sovereignty’.Footnote 44 Elsewhere we have used the term ‘cybernetic power’,Footnote 45 which perhaps sounds like an old-fashioned expression. Still, its etymology (‘cyber’, in its original ancient Greek meaning,Footnote 46 shares the same linguistic root as ‘govern’ and ‘governance’) makes it the more accurate term for identifying how automation and ICT have radically transformed our lives.
As algorithms begin to play a dominant role in the contemporary exercise of power,Footnote 47 it becomes increasingly important to examine the ‘phenomenology’ of this new sovereign power and its unique challenges to constitutional freedoms.
2.3 The ‘Algorithmic State’ versus Fundamental Rights: Some Critical Issues
As already stated, the main strength of algorithms is their practical convenience, so their interference with our freedom is not perceived as an ‘external’ constraint or a disturbing power. Instead, it is felt as evidence-based support for our decisions, capturing our autonomy by relieving us of the burden of deliberation.
Who would want to go back to searching for information in the volumes of an encyclopaedia? Who would want to filter their email for spam manually? Who would want to use a manual calculator instead of a spreadsheet for complex calculations? We are not just living in an increasingly automated world; we are increasingly enjoying the many advantages that come with it. Public administrations are using more and more algorithms to support public-sector functions, such as welfare, the labour market, tax administration, justice, crime prevention, and more. The use of algorithms in decision-making and adjudication promises more objectivity and lower costs.
However, as we said, algorithms have a darker side, and the following chapters of this section of the book illustrate some of the facets of the Algorithmic State phenomenology.
The fast-growing use of algorithms in the fields of justice, policing, public welfare, and the like could end in biased and erroneous decisions, boosting inequality, discrimination, and unfair outcomes, and undermining constitutional rights such as privacy, freedom of expression, and equality.Footnote 48
These uses raise considerable concerns not only for the specific policy area in which they are deployed but also for our society as a whole.Footnote 49 There is an increasing perception that humans do not have complete control over Algorithmic State decision-making processes.Footnote 50 Despite outperforming analogue tools in prediction, algorithmic decisions are difficult to understand and explain (the so-called black box effect).Footnote 51 While producing highly effective practical outcomes, algorithmic decisions could undermine procedural and substantive guarantees related to democracy and the rule of law.
The issues raised by the use of algorithms in decision-making processes are numerous and complex, and the debate is still at an early stage. A deeper understanding of how algorithms work when applied to legally sensitive decisions will therefore have to be developed soon.
In this section, we examine four aspects of the use of algorithmic decision-making: the relationship between automation and due process, so-called ‘emotional’ AI, algorithmic bureaucracy, and predictive policing.
Due Process in the Age of AI
In Chapter 3, entitled ‘Inalienable Due Process in an Age of AI: Limiting the Contractual Creep toward Automated Adjudication’, Frank Pasquale argues that robust legal values must constrain the current efforts by judges and agencies to ‘fast track’ cases via statistical methods, machine learning, or artificial intelligence. First, he identifies four core features that due process rights should include when algorithmic decisions are under consideration: the ‘ability to explain one’s case’, the ‘necessity of a judgment by a human decision-maker’, an ‘explanation for that judgment’, and an ‘ability to appeal’. As a second step, he argues that, given that legal automation threatens due process rights, we need proper countermeasures, such as explainability and algorithmic accountability. Despite all good intentions, courts should not accept legal automation, because it could be a hazard for vulnerable and marginalized persons. In the last part of his chapter, Pasquale traces a way to stem the tide of automation in the field of justice and administration, recalling the ‘unconstitutional conditions’ doctrine, as explicated by Daniel Farber, which sets out principles and procedures blocking governments from requiring the waiver of a constitutional right as a condition of receiving some governmental benefit.Footnote 52
Far from advocating a solution that brings us back to an ‘analogue’ world, we agree with Frank Pasquale. In his chapter, he calls for a more robust and durable theory of constitutionalism to pre-empt the problems that may arise from using automation. However, this is not sufficient: we also need a parallel theory and practice of computer science that takes into account the ethical values and constitutional rights involved in algorithmic reasoning and that empowers officials to understand when and how to develop and deploy the technology.Footnote 53 Besides, it is necessary to maintain a ‘human-centric’ process in judging for the sake of courts and citizens, who, as Pasquale warns, could be undone by the temptation to accelerate, abbreviate, and automate decisional processes.
Constitutional Challenges from ‘Empathic’ Media
Chapter 4, by Peggy Valcke, Damian Clifford, and Viltė Kristina Steponėnaitė, focuses on ‘Constitutional Challenges in the Emotional AI Era’. The emergence of ‘emotional AI’ – technologies capable of using computing and artificial intelligence techniques to sense, learn about, and interact with human emotional life (so-called ‘empathic media’)Footnote 54 – raises concerns and challenges for constitutional rights and values, particularly with respect to its use in the business-to-consumer context.Footnote 55
These technologies rely on various methods, including facial recognition, physiological measurement, voice analysis, body movement monitoring, and eye-tracking. The social media industry leverages several of these techniques to quantify, track, and manipulate emotions in order to increase its profits.
In addition to technical issues of ‘accuracy’, these technologies pose several concerns for the protection of the fundamental rights of consumers and of many other individuals, such as voters and ordinary people. As Peggy Valcke, Damian Clifford, and Viltė Kristina Steponėnaitė claim, emotional AI generates growing pressure on the whole range of fundamental rights implicated in protection against the misuse of AI, such as privacy, data protection, respect for private and family life, non-discrimination, and freedom of thought, conscience, and religion.
Although the authors argue for the necessity of constitutional protection against the possible impacts of emotional AI on existing constitutional freedoms, they also ask whether we need new rights in Europe in the light of the growing practices of manipulation by algorithms and emotional AI. By highlighting the legal and ethical challenges of manipulation by emotional AI tools, the three authors suggest a new research agenda that harnesses the academic scholarship on dignity, individual autonomy, and self-determination to inquire into the need for further constitutional rights capable of preventing or deterring emotional manipulation.
Algorithmic Surveillance as a New Bureaucracy
In Chapter 5, entitled ‘Algorithmic Surveillance as a New Bureaucracy: Law Production by Data or Data Production by Law?’, Mariavittoria Catanzariti explores the vast topic of algorithmic administration. Her argument deals with the legitimation of administrative power, questioning the rise of a ‘new bureaucracy’ in Weberian terms. Like bureaucracy, algorithms exercise a rational power that requires obedience and excludes non-predictable choices. Whereas many aspects of public administration could undoubtedly benefit from applying machine learning algorithms, their substitution for human decisions would ‘create a serious threat to democratic governance, conjuring images of unaccountable, computerized overlords’.Footnote 56
Catanzariti points out that, with the private sector increasingly relying on the power of machine learning, administrations and public authorities in general keep pace and make use of the same rationale, giving birth to an automated form of technological rationality. The massive use of classification and measurement techniques affects human activity, generating new forms of power that standardize behaviours in order to induce specific conduct. The social power of algorithms is currently visible in the operations of many governmental agencies in the United States.
While producing a faster administration, algorithmic decision-making is likely to generate multiple disputes. The effects of algorithmic administration are far from compliant with the same rationality that governs law and administrative procedures. Indeed, the use of algorithms produces results that are not fully ‘explainable’ and that are often accused of being ‘obscure, crazy, wrong, in short, incomprehensible’.Footnote 57
As Catanzariti explains, algorithms are not neutral, and technology is not merely a ‘proxy’ for human decisions. Whenever an automated decision-making technology is included in a deliberative or administrative procedure, it tends to ‘capture’ the process of deciding, or at least to make its output extremely difficult to ignore. Consequently, the author argues that law production by data ‘is not compatible with Weberian legal rationality’; or, as we have claimed, automation, far from being a mere ‘slave’, reveals itself, once employed, to be the ‘master’ of decision-making, thanks to its ‘practical appeal’.Footnote 58 Indeed, algorithms cast a subtle but potent spell on administrations: by using them, you save work and time and, above all, you are relieved of the burden of giving reasons. Yet is this type of algorithmic administration really accountable? Coming back to Frank Pasquale’s question, are ‘due process’ principles effectively applicable to this kind of decision?
Predictive Policing
Finally, Chapters 6 and 7, ‘Human Rights and Algorithmic Impact Assessment for Predictive Policing’ by Céline Castets-Renard and ‘Law Enforcement and Data-Driven Predictions at the National and EU Level: A Challenge to the Presumption of Innocence and Reasonable Suspicion?’ by Francesca Galli, touch upon the issue of law enforcement and technology.Footnote 59 The first addresses the dilemma of human rights challenged by ‘predictive policing’ and the use of new tools such as the ‘Algorithmic Impact Assessment’ to mitigate the risks of such systems. The second explores the potential transformation of core principles of criminal law and asks whether the techniques of a data-driven society may hamper the substance of legal protection. Both authors argue for the need to protect fundamental rights against the possible increase in coercive control of individuals, and for the development of a regulatory framework that adds new layers of fundamental rights protection based on ethical principles and other practical tools.
In some countries, police authorities have been granted sophisticated surveillance technologies and much more intrusive investigative powers to reduce crime by mapping the likely locations of future unlawful conduct so that the deployment of police resources can be more effective.Footnote 60
Here again, the problem concerns the soundness and sustainability of decisions made by intelligent machines and their consequences for the rights of individuals and groups.Footnote 61 Machine learning and other algorithmic tools can now correlate multiple variables in a data set and then predict behaviours. Such technologies open new scenarios for information gathering, monitoring, surveillance, and the profiling of criminal behaviour. The risk here is that predictive policing represents more than a simple shift in tools and could result in less effective and perhaps even discriminatory police interventions.Footnote 62
2.4 The Effects of the ‘Algorithmic State’ on the Practice of Liberty
If we try to synthesize the most critical issues that the advent of what we call the Algorithmic State raises for the practice of constitutional liberties, two main sensitive areas emerge: surveillance and freedom.
Surveillance
As we have already seen, the rise of the Algorithmic State has produced the change foreseen more than forty years ago by Herbert Marcuse. In general, technology is improving people’s lives. However, we know that this improvement comes at a ‘price’. We are increasingly dependent on big-tech platform services, even though it is clear that they make huge profits from our data. They promise to unchain humans from needs and necessities, but they themselves are becoming indispensable.
Therefore, we take for granted that the cost of gaining such benefits – security, efficiency, protection, rewards, and convenience – is consenting to our personal data being recorded, stored, retrieved, cross-referenced, traded, and exchanged through surveillance systems. Arguing that people usually have no reason to question surveillance (the ‘nothing to hide’ misconception)Footnote 63 strengthens the order built by the system, and people become ‘normalized’ (as Foucault would have said).Footnote 64
Because of this massive use of technology, we are now subject to a new form of surveillance, which profoundly impacts individual freedom, being both intrusive and invasive of private life.Footnote 65 Both explicit and non-explicit forms of surveillance extend to virtually all forms of human interaction.Footnote 66
As the EU Court of Justice pointed out, mass surveillance can be produced by both governments and private companies. This is likely to create ‘in the minds of the persons concerned the feeling that their private lives are the subject of constant surveillance’.Footnote 67 In both cases, we have a kind of intrusive surveillance on people’s lives, and this is evidence of individuals’ loss of control over their personal data.
Freedom
This process also affects the very idea of a causal link between individual or collective actions and their consequences, and therefore the core notion of our freedom. Replacing causation with correlation profoundly affects the fundamental distinction, embedded in our moral and legal theory, between instruments and ends.Footnote 68 Today’s cybernetic power is no longer just an instrument for achieving ends decided by human agents. Machines make decisions autonomously on behalf of the person, thus interfering with human freedom.
As is very clearly described in the following chapters, human agents (individual or collective) explicitly delegate to automated systems the power to make decisions or express assessments on their behalf (judicial support systems, algorithmic administration, emotional assessments, policing decisions). But we must be aware of another crucial dimension of that substitution.
There are two ways in which human freedom can be captured: the first, as we saw in the cases noted previously, occurs whenever we ask a technological system to decide directly on our behalf (we reduce our self-determination to the choice of our proxy); the second occurs when we ask automated machinery to provide the information upon which we base a course of action. Knowledge always shapes our freedom. One key factor (although not the only one) influencing our decisions is the information background we have. The decision to drive one route rather than another to reach our destination is usually affected by the information we have about traffic or roadworks; the choice to vote for one political candidate rather than another depends on the information we get about his or her campaign or ideas. If we ask ourselves which channel we will use today to get information about the world beyond our direct experience, the answer, in more than 80 per cent of cases, will be the Internet.Footnote 69
Automated technological systems increasingly provide knowledge.Footnote 70 Simultaneously, ‘individual and collective identities become conceivable as fluid, hybrid and constantly evolving’ as the result of ‘continuous processes bringing together humans, objects, energy flows, and technologies’.Footnote 71 This substitution profoundly impacts the very idea of autonomy as it emerged over the last two centuries and fundamentally alters the way people come to make decisions, form beliefs, or take action.
In this way, two distinctive elements of our idea of what counts as a violation of freedom seem to change or disappear in the Algorithmic Society. In the first case – when we explicitly ask technology to decide on our behalf – we cannot say that the restriction of our freedom is unwanted or involuntary, because we ourselves consented to it. We expressly ask those technologies to decide, assuming they are ‘evidence-based’, more effective, more neutral, science-oriented, and so forth. Therefore, we cannot say that our freedom has been violated against our will or self-determination, given that we expressly asked those systems to make our decisions.
On the other hand, when our decisions are taken on the informational basis provided by technology, we can no longer say that such threats to our liberty are ‘external’; as a matter of fact, when we trust information taken from the Internet (from web search engines, like Google, or from social media, like Facebook or Twitter), there is no apparent coercion, no violence. That information is simply welcomed as a sound and valid basis for our deliberations. Yet there is a critical point here. We trust web-sourced information provided by platforms, assuming it is scientifically accurate or at least trustworthy. However, this trust has nothing to do with science or education. Platforms simply use powerful algorithms that learn behavioural patterns from previous preferences in order to help individuals and groups filter the overwhelming alternatives of daily life. These algorithms appear accurate and helpful largely because their results confirm – feeding a ‘confirmation bias’Footnote 72 – our beliefs or, worse, our ideological positions (the ‘bubble effect’).Footnote 73
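To make the mechanism concrete, the following deliberately simplified sketch (the topics, items, and counting rule are invented for illustration; real recommender systems are far more sophisticated) shows how ranking content by similarity to past clicks tends to show users ever more of what they already agree with.

```python
# Toy "engagement-first" recommender: rank items by how often their topic
# appears in the user's click history. Topics, items, and the counting rule
# are invented; real platform systems are far more complex.
from collections import Counter

ITEMS = [
    {"id": 1, "topic": "politics-left"},
    {"id": 2, "topic": "politics-left"},
    {"id": 3, "topic": "politics-right"},
    {"id": 4, "topic": "sports"},
    {"id": 5, "topic": "politics-left"},
]

def recommend(click_history, items, k=3):
    """Rank items by how often their topic appears in past clicks."""
    topic_counts = Counter(item["topic"] for item in click_history)
    return sorted(items, key=lambda it: topic_counts[it["topic"]], reverse=True)[:k]

# A user who has clicked two 'politics-left' items...
history = [ITEMS[0], ITEMS[1]]
print([it["topic"] for it in recommend(history, ITEMS)])
# -> ['politics-left', 'politics-left', 'politics-left']
# Every recommended item repeats the topic already clicked; each further click
# narrows the feed again - the 'bubble effect' described above.
```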
There is something deeply problematic, philosophically and legally, about restricting people’s freedom on the basis of predictions about their conduct. For example, liberal and communitarian doctrines share, as essential requirements for a just society, not only the absence of coercion but also independence and capacity in acting; from this point of view, the new algorithmic decision-making affects the very basis of both liberal and communitarian theories. As Lawrence Lessig wrote, we have experienced, through cyberspace, a ‘displacement of a certain architecture of control and the substitution with an apparent freedom.’Footnote 74
Towards the Algorithmic State Constitution: A ‘Hybrid’ Constitutionalism
Surveillance capitalism and the new algorithmic threats to liberty share a common feature: by the time a new technology has appeared, it is often too late for the legal system to intervene. The gradual anticipation of protection in the field of privacy rights, from subsequent to preventive (from protection by regulation to protection ‘by design’ and finally ‘by default’), traces exactly this sort of ‘backwards’ trajectory. This is the main feature of Algorithmic State constitutionalism.
It is necessary to incorporate the values of constitutional rights into the ‘design stage’ of the machines; for this, we need what we would define as a ‘hybrid’ constitutional law – that is, a constitutional law that still aims to protect fundamental human rights and at the same time knows how to express this goal in the language of technology.Footnote 75 Here the space for effective dialogue remains largely unexplored and, consequently, the rate of ‘hybridization’ is still extraordinarily low.
We argue that, after the season of protection by design and by default, a new season ought to be opened – that of protection ‘by education’, in the sense that it is necessary to act while scientists and technologists are still studying and training, in order to communicate the fundamental reasons for general principles such as personal data protection, human dignity, and the protection of freedom, but also for more specific values such as the explainability of decision-making algorithms or the ‘human in the loop’ principle.
Technology is increasingly integrated with the life of the person, and this integration cannot realistically be stopped, nor would it be desirable to do so, given the huge importance that some new technologies have had for human progress.
The only possible way, therefore, is to ensure that the value (i.e., the meaning) of protecting the dignity of the person and his or her freedom becomes an integral part of the training of those who will then become technicians. Hence the decisive role of schools, universities, and other training institutions, of professional and academic associations, as well as the role of soft law.
3.1 Introduction
Automation is influencing ever more fields of law. The dream of disruption has permeated the US and British legal academies and is making inroads in Australia and Canada, as well as in civil law jurisdictions. The ideal here is law as a product, simultaneously mass producible and customizable, accessible to all and personalized, openly deprofessionalized.Footnote 1 This is the language of idealism, so common in discussions of legal technology – the Dr. Jekyll of legal automation.
But the shadow side of legal tech also lurks behind many initiatives. Legal disruption’s Mr. Hyde advances the cold economic imperative to shrink the state and its aid to the vulnerable. In Australia, the Robodebt system of automated benefit overpayment adjudication clawed back funds from beneficiaries on the basis of flawed data, false factual assumptions, and misguided assumptions about the law. In Michigan, in the United States, a similar program (aptly named “MIDAS,” for Michigan Integrated Data Automated System) “charged more than 40,000 people, billing them about five times the original benefits” – and it was later discovered that 93 percent of the charges were erroneous.Footnote 2 Meanwhile, global corporations are finding the automation of dispute settlement a convenient way to cut labor costs. This strategy is particularly tempting on platforms, which may facilitate millions of transactions each day.
When long-standing appeals to austerity and business necessity are behind “access to justice” initiatives to promote online dispute resolution, some skepticism is in order. At the limit, jurisdictions may be able to sell off their downtown real estate, setting up trusts to support a rump judicial system.Footnote 3 To be sure, even online courts require some staffing. But perhaps an avant-garde of legal cost cutters will find some inspiration from US corporations, which routinely decide buyer versus seller disputes in entirely opaque fashion.Footnote 4 In China, a large platform has charged “citizen juries” (who do not even earn money for their labor but, rather, reputation points) with deciding such disputes. Build up a large enough catalog of such encounters, and a machine learning system may even be entrusted with deciding disputes based on past markers of success.Footnote 5 A complainant may lose credibility points for nervous behavior, for example, or gain points on the basis of long-standing status as someone who buys a great deal of merchandise or pays taxes in a timely manner.
As these informal mechanisms become more common, they will test the limits of due process law. As anyone familiar with the diversity of administrative processes will realize, there is enormous variation at present in how much opportunity a person has to state their case, to demand a written explanation for a final (or intermediate) result, and to appeal. A black lung benefits case differs from a traffic violation, which in turn differs from an immigration case. Courts permit agencies a fair amount of flexibility to structure their own affairs. Agencies will, in all likelihood, continue to pursue an agenda of what Julie Cohen has called “neoliberal managerialism” as they reorder their processes of investigation, case development, and decision-making.Footnote 6 That will, in turn, bring in more automated and “streamlined” processes, which courts will be called upon to accommodate.
While judicial accommodations of new agency forms are common, they are not automatic. At some point, agencies will adopt automated processes that courts can only recognize as simulacra of justice. Think, for instance, of an anti-trespassing robot equipped with facial recognition, which could instantly identify and “adjudicate” a person overstepping a boundary and text that person a notice of a fine. Or a rail ticket monitoring system that would instantly convert notice of a judgment against a person into a yearlong ban on the person buying train tickets. Other examples might be less dramatic but also worrisome. For example, consider the possibility of “mass claims rejection” for private health care providers seeking government payment for services rendered to persons with government-sponsored health insurance. Such claims processing programs may simply compare a set of claims to a corpus of past denied claims, sort new claimants’ documents into categories, and then reject them without human review.
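To illustrate the kind of routine at issue, the following is a hypothetical, deliberately simplified sketch (the claim fields, similarity weights, and threshold are invented; no actual agency system is described) of a claims-processing program that rejects a new claim merely because it resembles previously denied claims, with no human in the loop.

```python
# Hypothetical sketch of "mass claims rejection": a new claim is denied whenever
# it closely resembles previously denied claims. Fields, weights, and the 0.9
# threshold are invented; no real agency system is being described.
from dataclasses import dataclass

@dataclass
class Claim:
    procedure_code: str
    diagnosis_code: str
    amount: float

def similarity(a: Claim, b: Claim) -> float:
    """Crude similarity score in [0, 1] based on matching codes and claim amounts."""
    score = 0.4 if a.procedure_code == b.procedure_code else 0.0
    score += 0.4 if a.diagnosis_code == b.diagnosis_code else 0.0
    score += 0.2 * (1 - min(abs(a.amount - b.amount) / max(a.amount, b.amount), 1))
    return score

def auto_adjudicate(new_claim: Claim, past_denials: list, threshold: float = 0.9) -> str:
    """Reject without human review if the claim closely matches any past denial."""
    if any(similarity(new_claim, d) >= threshold for d in past_denials):
        return "REJECTED (no human review)"
    return "ROUTED TO HUMAN REVIEWER"

past_denials = [Claim("99213", "E11.9", 120.0)]
print(auto_adjudicate(Claim("99213", "E11.9", 118.0), past_denials))
# -> REJECTED (no human review): the claim is denied purely because it resembles
#    an earlier denial, regardless of its individual merits.
```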
In past work, I have explained why legislators and courts should reject most of these systems, and should always be wary of claims that justice can be automated.Footnote 7 And some initial jurisprudential stirrings are confirming that normative recommendation. For example, there has been a backlash against red-light cameras, which automatically cite drivers for failing to obey traffic laws. And even some of those who have developed natural language processing for legal settings have cautioned that such tools should not be used in anything like a trial setting. These concessions are encouraging.
And yet there is another danger lurking on the horizon. Imagine a disability payment scheme that offered something like the following “contractual addendum” to beneficiaries immediately before they began receiving benefits:
The state has a duty to husband resources and to avoid inappropriate payments. By signing below, you agree to the following exchange. You will receive $20 per month extra in benefits, in addition to what you are statutorily eligible for. In exchange, you agree to permit the state (and any contractor it may choose to employ) to review all your social media accounts, in order to detect behavior indicating you are fit for work. If you are determined to be fit for work, your benefits will cease. This determination will be made by a machine learning program, and there will be no appeal.Footnote 8
There are two diametrically opposed ways of parsing such a contract. For many libertarians, the right to give up one’s rights (here, to a certain level of privacy and appeals) is effectively the most important right, since it enables contracting parties to eliminate certain forms of interference from their relationship. By contrast, for those who value legal regularity and due process, this “addendum” is anathema. Even if it is possible for the claimant to reapply after a machine learning system has stripped her of benefits, the process offends the dignity of the claimant. A human being must pass judgment on whether such a grave step is to be taken.
These divergent approaches are mirrored in two lines of US Supreme Court jurisprudence. On the libertarian side, the Court has handed down a number of rulings affirming the “right” of workers to sign away certain rights at work, or at least the ability to contest their denial in court.Footnote 9 Partisans of “disruptive innovation” may argue that startups need to be able to impose one-sided terms of service on customers, so that investors will not be deterred from financing them. Exculpatory clauses have spread like kudzu, beckoning employers with the jurisprudential equivalent of a neutron bomb: the ability to leave laws and regulations standing, without any person capable of enforcing them.
On the other side, the Supreme Court has also made clear that the state must be limited in the degree to which it can structure entitlements when it is seeking to avoid due process obligations. A state cannot simply define an entitlement to, say, disability benefits, by folding into the entitlement itself an understanding that it can be revoked for any reason, or no reason at all. On this dignity-centered approach, the “contractual addendum” posited above is not merely one innocuous add-on, a bit of risk the claimant must endure in order to engage in an arm’s-length exchange for $20. Rather, it undoes the basic structure of the entitlement, which included the ability to make one’s case to another person and to appeal an adverse decision.
If states begin to impose such contractual bargains for automated administrative determinations, the “immoveable object” of inalienable due process rights will clash with the “irresistible force” of legal automation and libertarian conceptions of contractual “freedom.” This chapter explains why legal values must cabin (and often trump) efforts to “fast track” cases via statistical methods, machine learning (ML), or artificial intelligence. Section 3.2 explains how due process rights, while flexible, should include four core features in all but the most trivial or routine cases: the ability to explain one’s case, a judgment by a human decision maker, an explanation for that judgment, and the ability to appeal. Section 3.3 demonstrates why legal automation often threatens those rights. Section 3.4 critiques potential bargains for legal automation and concludes that the courts should not accept them. Vulnerable and marginalized persons should not be induced to give up basic human rights, even if some capacious and abstract versions of utilitarianism project they would be “better off” by doing so.
3.2 Four Core Features of Due Process
Like the rule of law, “due process” is a multifaceted, complex, and perhaps even essentially contested concept.Footnote 10 As J. Roland Pennock has observed, the “roots of due process grow out of a blend of history and philosophy.”Footnote 11 While the term itself is a cornerstone of the US and UK legal systems, it has analogs in both public law and civil law systems around the world.
While many rights and immunities have been evoked as part of due process, it is important to identify a “core” conception of it that should be inalienable in all significant disputes between persons and governments. We can see this grasping for a “core” of due process in some US cases, where the interest at stake was relatively insignificant but the court still decided that the person affected by government action had to have some opportunity to explain him or herself and to contest the imposition of a punishment. For example, in Goss v. Lopez, students who were accused of misbehavior were suspended from school for ten days. The students claimed they were due some kind of hearing before suspension, and the Supreme Court agreed:
We do not believe that school authorities must be totally free from notice and hearing requirements if their schools are to operate with acceptable efficiency. Students facing temporary suspension have interests qualifying for protection of the Due Process Clause, and due process requires, in connection with a suspension of 10 days or less, that the student be given oral or written notice of the charges against him and, if he denies them, an explanation of the evidence the authorities have and an opportunity to present his side of the story.Footnote 12
This is a fair encapsulation of some core practices of due process, which may (as the stakes rise) become supplemented by all manner of additional procedures.Footnote 13
One of the great questions raised by the current age of artificial intelligence (AI) is whether the notice and explanation of the charges (as well as the opportunity to be heard) must be discharged by a human being. So far as I can discern, no ultimate judicial authority has addressed this particular issue in the due process context. However, given that the entire line of case law arises in the context of humans confronting other humans, it does not take a stretch of the imagination to see such a requirement as immanent in the enterprise of due process.
Moreover, legal scholars Kiel Brennan-Marquez and Stephen Henderson argue that “in a liberal democracy, there must be an aspect of ‘role-reversibility’ to judgment. Those who exercise judgment should be vulnerable, reciprocally, to its processes and effects.”Footnote 14 The problem with robot or AI judges is that they cannot experience punishment the way that a human being would. Role-reversibility is necessary for “decision-makers to take the process seriously, respecting the gravity of decision-making from the perspective of affected parties.” Brennan-Marquez and Henderson derive this principle from basic principles of self-governance:
In a democracy, citizens do not stand outside the process of judgment, as if responding, in awe or trepidation, to the proclamations of an oracle. Rather, we are collectively responsible for judgment. Thus, the party charged with exercising judgment – who could, after all, have been any of us – ought to be able to say: This decision reflects constraints that we have decided to impose on ourselves, and in this case, it just so happens that another person, rather than I, must answer to them. And the judged party – who could likewise have been any of us – ought to be able to say: This decision-making process is one that we exercise ourselves, and in this case, it just so happens that another person, rather than I, is executing it.
Thus, for Brennan-Marquez and Henderson, “even assuming role-reversibility will not improve the accuracy of decision-making; it still has intrinsic value.”
Brennan-Marquez and Henderson are building on a long tradition of scholarship that focuses on the intrinsic value of legal and deliberative processes, rather than their instrumental value. For example, applications of the US Supreme Court’s famous Mathews v. Eldridge calculus have frequently failed to take into account the effects of abbreviated procedures on claimants’ dignity.Footnote 15 Bureaucracies, including the judiciary, have enormous power. They owe litigants a chance to plead their case to someone who can understand and experience, on a visceral level, the boredom and violence portended by a prison stay, the “brutal need” resulting from the loss of benefits (as put in Goldberg v. Kelly), the sense of shame that liability for drunk driving or pollution can give rise to. And as the classic Morgan v. United States held, even in complex administrative processes, the one who decides must be the one who hears. It is not adequate for persons to play mere functionary roles in an automated judiciary, gathering data for more authoritative machines. Rather, humans must take responsibility for critical decisions made by the legal system.
This argument is consistent with other important research on the dangers of giving robots legal powers and responsibilities. For example, Joanna Bryson, Mihailis Diamantis, and Thomas D. Grant have warned that granting robots legal personality raises the disturbing possibility of corporations deploying “robots as liability shields.”Footnote 16 A “responsible robot” may deflect blame or liability from the business that set it into the world. This is dangerous because the robot cannot truly be punished: it lacks human sensations of regret or dismay at loss of liberty or assets. It may be programmed to look as if it is remorseful upon being hauled into jail, or to frown when any assets under its control are seized. But these are simulations of human emotion, not the thing itself. Emotional response is one of many fundamental aspects of human experience that is embodied. And what is true of the robot as an object of legal judgment is also true of robots or AI as potential producers of such judgments.
3.3 How Legal Automation and Contractual Surrender of Rights Threaten Core Due Process Values
There is increasing evidence that many functions of the legal system, as it exists now, are very difficult to automate.Footnote 17 However, as Cashwell and I warned in 2015, the legal system is far from a stable and defined set of tasks to complete. As various interest groups jostle to “reform” legal systems, the range of procedures needed to finalize legal determinations may shrink or expand.Footnote 18 There are many ways to limit existing legal processes, or simplify them, in order to make it easier for computation to replace or simulate them. The clauses mentioned previously – forswearing appeals of judgments generated or informed by machine learning or AI – would make non-explainable AI far easier to implement in legal systems.
This type of “moving the goalposts” may be accelerated by extant trends toward neoliberal managerialism in public administration.Footnote 19 This approach to public administration is focused on throughput, speed, case management, and efficiency. Neoliberal managerialists urge the public sector to learn from the successes of the private sector in limiting spending on disputes. One potential move here is simply to outsource determinations to private actors – a move widely criticized elsewhere.Footnote 20 I am more concerned here with a contractual option: offering beneficiaries of government programs more or quicker benefits in exchange for an agreement not to pursue appeals of termination decisions, or to accept their automated resolution.
I focus on the inducement of quicker or more benefits, because it appears to be settled law (at least in the US) that such restrictions of due process cannot be embedded into benefits themselves. A failed line of US Supreme Court decisions once attempted to restrict claimants’ due process rights by insisting that the government can create property entitlements with no due process rights attached. On this reasoning, a county might grant someone benefits with the explicit understanding that they could be terminated at any time without explanation: the “sweet” of the benefits could include the “bitter” of sudden, unreasoned denial of them. In Cleveland Board of Education v. Loudermill (1985), the Court finally discarded this line of reasoning, forcing some modicum of reasoned explanation and process for termination of property rights.
What is less clear now is whether side deals might undermine the delicate balance of rights struck by Loudermill. In the private sector, companies have successfully routed disputes with employees out of process-rich Article III courts, and into stripped-down arbitral forums, where one might even be skeptical of the impartiality of decision-makers.Footnote 21 Will the public sector follow suit? Given some current trends in the foreshortening of procedure and judgment occasioned by public sector automation, the temptation will be great.
These concerns are a logical outgrowth of a venerable literature critiquing rushed, shoddy, and otherwise improper automation of legal decision-making. In 2008, Danielle Keats Citron warned that states were cutting corners by deciding certain benefits (and other) claims automatically, on the basis of computer code that did not adequately reflect the complexity of the legal code it claimed to have reduced to computation.Footnote 22 Virginia Eubanks’s Automating Inequality has identified profound problems in governmental use of algorithmic sorting systems. Eubanks tells the stories of individuals who lose benefits, opportunities, and even custody of their children, thanks to algorithmic assessments that are inaccurate or biased. Eubanks argues that complex benefits determinations are not something well-meaning tech experts can “fix.” Instead, the system itself is deeply problematic, constantly shifting the goal line (in all too many states) to throw up barriers to access to care.
A growing movement for algorithmic accountability is both exposing and responding to these problems. For example, Citron and I coauthored work setting forth some basic procedural protections for those affected by governmental scoring systems.Footnote 23 The AI Now Institute has analyzed cases of improper algorithmic determinations of rights and opportunities.Footnote 24 And there is a growing body of scholarship internationally exploring the ramifications of computational dispute resolution.Footnote 25 As this work influences more agencies around the world, it is increasingly likely that responsible leadership will ensure that a certain baseline of due process values applies to automated decision-making.
Though they are generally optimistic about the role of automation and algorithms in agency decision-making, Coglianese and Lehr concede that one “due process question presented by automated adjudication stems from how such a system would affect an aggrieved party’s right to cross-examination. … Probably the only meaningful way to identify errors would be to conduct a proceeding in which an algorithm and its data are fully explored.”Footnote 26 This type of examination is at the core of Keats Citron’s concept of technological due process. It would require something like a right to an explanation of the automated profiling at the core of the decision.Footnote 27
3.4 Due Process, Deals, and Unraveling
However, all such protections could be undone. The ability to explain oneself, and to hear reasoned explanations in turn, is often framed as needlessly expensive. This expense of legal process (or administrative determinations) has helped fuel a turn to quantification, scoring, and algorithmic decision procedures.Footnote 28 A written evaluation of a person (or a comprehensive analysis of future scenarios) often requires subtle judgment, exactitude in wording, and ongoing revision in response to challenges and evolving situations. A pre-set formula based on limited, easily observable variables is far easier to calculate.Footnote 29 Moreover, even if individuals are due certain explanations and hearings as a matter of law, they may forgo them in some contexts.
This type of rights waiver has already been deployed in some contexts. Several states in the United States allow unions to waive the due process rights of public employees.Footnote 30 We can also interpret some Employee Retirement Income Security Act (ERISA) jurisprudence as an endorsement and approval of a relatively common situation in the United States: employees effectively signing away a right to a more substantive and searching review of adverse benefit scope and insurance coverage determinations via an agreement to participate in an employer-sponsored benefit plan. The US Supreme Court has gradually interpreted ERISA to require federal courts to defer to plan administrators, echoing the deference due to agency administrators, and sometimes going beyond it.Footnote 31
True, Loudermill casts doubt on arrangements for government benefits premised on the beneficiary’s sacrificing due process protections. However, a particularly innovative and disruptive state may decide that the opinion is silent as to the baseline of what constitutes the benefit in question, and leverage that ambiguity. Consider a state that guaranteed health care to a certain category of individuals, as a “health care benefit.” Enlightened legislators further propose that the disabled, or those without robust transport options, should also receive assistance with respect to transportation to care. Austerity-minded legislators counter with a proviso: to receive transport assistance in addition to health assistance, beneficiaries need to agree to automatic adjudication of a broad class of disputes that might arise out of their beneficiary status.
The automation “deal” may also arise out of long-standing delays in receiving benefits. For example, in the United States, there have been many complaints by disability rights groups about the delays encountered by applicants for Social Security Disability Benefits, even when they are clearly entitled to them. On the other side of the political spectrum, some complain that persons who are adjudicated as disabled and later regain the capacity to work are able to keep benefits for too long. This concern (and perhaps some mix of cruelty and indifference) motivated British policymakers who promoted “fit for work” reviews by private contractors.Footnote 32
It is not hard to see how the “baseline” of benefits might be defined narrowly, with all future benefits conditioned in this way. Nor are procedures the only constitution-level interest that may be “traded away” for faster access to more benefits. Privacy rights may be on the chopping block as well. In the United States, the Trump administration proposed reviews of the social media of persons receiving benefits.Footnote 33 The presumption of such review is that a picture of, say, a self-proclaimed depressed person smiling, or a self-proclaimed wheelchair-bound person walking, could alert authorities to potential benefits fraud. And such invasive surveillance could again feed into automated review, with claims flagged on the basis of such “suspicious activity” much as “suspicious activity reports” trigger investigations at US fusion centers.
What is even more troubling about these dynamics is the way in which “preferences” to avoid surveillance or to preserve procedural rights might themselves become new data points for suspicion or investigation. A policymaker may wonder about the persons who refuse to accept the new due-process-lite “deal” offered by the state: What have they got to hide? Why are they so eager to preserve access to a judge and the lengthy process that may entail? Do they know some discrediting fact about their own status that we do not, and are they acting accordingly? Reflected in the economics of information as an “adverse selection problem,” this kind of speculative suspicion may become widespread. It may also arise as a byproduct of machine learning: those who refuse to relinquish privacy or procedural rights may, empirically, turn out to be more likely to pose problems for the system, or to have their benefits not renewed, than those who trade away those rights. Black-boxed flagging systems may silently incorporate such data points into their own calculations.
The “what have you got to hide” rationale leads to a phenomenon deemed “unraveling” by economists of information. This dynamic has been extensively analyzed by the legal scholar Scott Peppet. The bottom line of Peppet’s analysis is that every individual decision to reveal something about himself or herself may also create social circumstances that pressure others to also disclose. For example, if only a few persons tout their grade point average (GPA) on their resumes, that disclosure may merely be an advantage for them in the job-seeking process. However, once 30 percent, 40 percent, 50 percent, or more of job-seekers include their GPAs, human resources personnel reviewing the applications may wonder about the motives of those who do not. If they assume the worst about non-revealers, it becomes a rationale for all but the very lowest GPA holders to reveal their GPA. Those at, say, the thirtieth percentile, reveal their GPA to avoid being confused with those in the twentieth or tenth percentile, and so on.
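For readers who prefer a formal illustration, the unraveling cascade can be sketched as a simple simulation. The code below is a minimal, hypothetical sketch (not drawn from Peppet’s work): it assumes that employers attribute to every non-revealer the average GPA of the remaining pool of non-revealers, so in each round everyone above that presumed average discloses, and the process repeats until only the lowest scorers stay silent.

```python
# Minimal sketch of the "unraveling" dynamic (hypothetical illustration).
# Assumption: employers treat every non-revealer as having the average GPA
# of the remaining pool of non-revealers, so anyone above that average
# gains by disclosing. Iterating this rule unravels almost all non-disclosure.

def unravel(gpas):
    hidden = sorted(gpas)          # candidates who have not yet revealed
    revealed = []
    while hidden:
        pool_avg = sum(hidden) / len(hidden)
        # Everyone strictly above the presumed pool average is better off revealing.
        movers = [g for g in hidden if g > pool_avg]
        if not movers:
            break                  # only the lowest scorers remain silent
        revealed.extend(movers)
        hidden = [g for g in hidden if g <= pool_avg]
    return revealed, hidden

if __name__ == "__main__":
    # A toy population of GPAs between 2.0 and 4.0.
    population = [2.0 + 0.1 * i for i in range(21)]
    revealed, silent = unravel(population)
    print(f"revealed: {len(revealed)} candidates, silent: {len(silent)}")
    # Typically only the very bottom of the distribution stays silent,
    # mirroring the point that non-disclosure itself becomes a signal.
```

On this toy population, all but the single lowest GPA end up disclosed, which is the sense in which the “choice” to stay silent collapses once enough others reveal.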
This model of unraveling parallels similar dynamics in feminist theory. For example, Catharine MacKinnon insisted that the “personal is political,” in part because any particular family’s division of labor helped either reinforce or challenge dominant patterns.Footnote 34 A mother may choose to quit work and stay home to raise her children, while her husband works fifty hours a week, and that may be an entirely ethical choice for her family. However, it also helps reinforce patterns of caregiving and expectations in that society which track women into unpaid work and men into paid work. The choice does not merely accommodate gendered patterns of labor; it also promotes them.Footnote 35 Like a path through a forest trod ever clearer of debris, it becomes the natural default.
This inevitably social dimension of personal choice also highlights the limits of liberalism in addressing due process trade-offs. Civil libertarians may fight the direct imposition of limitations of procedural or privacy rights by the state. However, “freedom of contract” may itself be framed as a civil liberties issue. If a person in great need wants immediate access to benefits, in exchange for letting the state monitor his social network feed (and automatically terminate benefits if suspect pictures are posted), the bare rhetoric of “freedom” also pulls in favor of permitting this deal. We need a more robust and durable theory of constitutionalism to preempt the problems that may arise here.
3.5 Backstopping the Slippery Slope toward Automated Justice
As the spread of plea bargaining in the United States shows, there is a clear and present danger of the state using its power to make an end-run around protections established in the constitution and guarded by courts. When a prosecutor confronts a defendant with a potential hundred-year sentence at trial, or a plea deal of five to eight years, the coercion is obvious. By comparison, given the sclerotic slowness of much of the US administrative state, giving up rights in order to accelerate receipt of benefits is likely to seem to many liberals a humane (if tough) compromise.
Nevertheless, scholars should resist this “deal” by further developing and expanding the “unconstitutional conditions” doctrine. Daniel Farber deftly explicates the basis and purpose of the doctrine:
[One] recondite area of legal doctrine [concerns] the constitutionality of requiring waiver of a constitutional right as a condition of receiving some governmental benefit. Under the unconstitutional conditions doctrine, the government is sometimes, but by no means always, blocked from imposing such conditions on grants. This doctrine has long been considered an intellectual and doctrinal swamp. As one recent author has said, “[t]he Supreme Court’s failure to provide coherent guidance on the subject is, alas, legendary.”Footnote 36
Farber gives several concrete examples of the types of waivers that have been allowed over time. “[I]n return for government funding, family planning clinics may lose their right to engage in abortion referrals”; a criminal defendant can trade away the right to a jury trial for a lighter sentence. Farber is generally open to the exercise of this right to trade one’s rights away.Footnote 37 However, even he acknowledges that courts need to block particularly oppressive or manipulative exchanges of rights for other benefits. He offers several rationales for such blockages, including one internal to contract theory and another based on public law grounds.Footnote 38 Each is applicable to many instances of “automated justice.”
Farber’s first normative ground for unconstitutional conditions challenges to waivers of constitutional rights is the classic behavioral economics concern about situations “where asymmetrical information, imperfect rationality, or other flaws make it likely that the bargain will not be in the interests of both parties.”Footnote 39 This rationale applies particularly well to scenarios where black-box algorithms (or secret data) are used.Footnote 40 No one should be permitted to accede to an abbreviated process when the foundations of its decision-making are not available for inspection. The problem of hyperbolic discounting also looms large. A benefits applicant in brutal need of help may not be capable of fully thinking through the implications of trading away due process rights. Bare concern for survival occludes such calculations.
The second normative foundation concerns the larger social impact of the rights-waiver bargain. For example, Farber observes, “when the agreement would adversely affect the interests of third parties in some tangible way,” courts should be wary of it. The unraveling dynamic described above offers one example of this type of adverse impact on third parties from rights sacrifices. Though the harm may not be immediately “tangible,” unraveling has occurred in so many other scenarios that it is critical for courts to consider whether particular bargains may pave the way to a future where the “choice” to trade away a right is effectively no choice at all, because the cost of retaining it is the high level of suspicion generated by exercising (or merely retaining the right to exercise) the right.
Under this second ground, Farber also mentions that we may “block exchanges that adversely affect the social meaning of constitutional rights, degrading society’s sense of its connection with personhood.” Here again, a drift toward automated determination of legal rights and duties seems a particularly apt target. The right of due process at its core means something more than a bare redetermination by automated systems. Rather, it requires some ability to identify a true human face of the state, as Henderson and Brennan-Marquez’s work (discussed previously) suggests. Soldiers at war may hide their faces, but police do not. We are not at war with the state; rather, it is supposed to be serving us in a humanly recognizable way. The same is true a fortiori of agencies dispensing benefits and other forms of support.
3.6 Conclusion: Writing, Thinking, and Automation in Administrative Processes
Claimants worried about the pressure to sign away rights to due process may have an ally within the administrative state: persons who now hear and decide cases. AI and ML may ease their workload, but could also be a prelude to full automation. Two contrasting cases help illuminate this possibility. In Albathani v. INS (2003), the First Circuit affirmed the Board of Immigration Appeals’ policy of “affirmance without opinion” (AWO) of certain rulings by immigration judges.Footnote 41 Though “the record of the hearing itself could not be reviewed” in the ten minutes which the Board member, on average, took to review each of more than fifty cases on the day in question, the court found it imperative to recognize “workload management devices that acknowledge the reality of high caseloads.” However, in a similar Australian administrative context, a judge ruled against a Minister in part due to the rapid disposition of two cases involving more than seven hundred pages of material. According to the judge, “43 minutes represents an insufficient time for the Minister to have engaged in the active intellectual process which the law required of him.”Footnote 42
In the short run, decision-makers at an agency may prefer the Albathani approach. As Chad Oldfather has observed in his article “Writing, Cognition, and the Nature of the Judicial Function,” unwritten, and even visceral, snap decisions have a place in our legal system.Footnote 43 They are far less tiring to generate than a written record and reasoned elaboration of how the decision-maker applied the law to the facts. However, in the long run, when thought and responsibility for review shrink toward a vanishing point, it is difficult for decision-makers to justify their own interposition in the legal process. A “cyberdelegation” to cheaper software may then seem proper.Footnote 44
We must connect current debates on the proper role of automation in agencies to requirements for reasoned decision-making. It is probably in administrators’ best interests for courts to actively ensure thoughtful decisions by responsible persons. Otherwise, administrators may ultimately be replaced by the types of software and AI now poised to take over so many other roles now performed by humans. The temptation to accelerate, abbreviate, and automate human processes is, all too often, a prelude to destroying them.Footnote 45
4.1 Introduction
Is a future in which our emotions are being detected in real time and tracked, both in private and public spaces, dawning? Looking at recent technological developments, studies, patents, and ongoing experiments, this may well be the case.Footnote 1 In its Declaration on the manipulative capabilities of algorithmic processes of February 2019, the Council of Europe’s Committee of Ministers alerts us to the growing capacity of contemporary machine learning tools not only to predict choices but also to influence emotions, thoughts, and even actions, sometimes subliminally.Footnote 2 This certainly adds a new dimension to existing computational means, which increasingly make it possible to infer intimate and detailed information about individuals from readily available data, facilitating the micro-targeting of individuals based on profiles in a way that may profoundly affect their lives.Footnote 3 Emotional artificial intelligence (further ‘emotional AI’) and empathic media are new buzzwords used to refer to the affective computing sub-discipline and, specifically, to the technologies that are claimed to be capable of detecting, classifying, and responding appropriately to users’ emotional lives, thereby appearing to understand their audience.Footnote 4 These technologies rely on a variety of methods, including facial expression analysis, physiological measurement, voice analysis, body-movement monitoring, and eye tracking.Footnote 5
Although there have been important debates as to their accuracy, the adoption of emotional AI technologies is increasingly widespread, in many areas and for various purposes, both in the public and private sectors.Footnote 6 It is well-known that advertising and marketing go hand in hand with an attempt to exploit emotions for commercial gain.Footnote 7 Emotional AI facilitates the systematic gathering of insightsFootnote 8 and allows for the further personalization of commercial communications and the optimization of marketing campaigns in real time.Footnote 9 Quantifying, tracking, and manipulating emotions is a growing part of the social media business model.Footnote 10 For example, Facebook is now infamous in this regard due to its emotional contagionFootnote 11 experiment, in which users’ newsfeeds were manipulated to assess whether posts with emotional content were more engaging.Footnote 12 A similar trend has been witnessed in the political sphere – think of the Cambridge Analytica scandalFootnote 13 (where data analytics was used to gauge the personalities of potential Trump voters).Footnote 14 The aforementioned Declaration of the Council of Europe, among others, points to the dangers for democratic societies that emanate from the possibility of employing algorithmic tools capable of manipulating and controlling not only economic choices but also social and political behaviours.Footnote 15
Do we need new (constitutional) rights, as suggested by some, in light of growing practices of manipulation by algorithms, in general, and the emergence of emotional AI, in particular? Or, is the current law capable of accommodating such developments adequately? This is undoubtedly one of the most fascinating debates for legal scholars in the coming years. It is also on the radar of CAHAI, the Council of Europe’s Ad Hoc Committee on Artificial Intelligence, set up on 11 September 2019, with the mission to examine the feasibility and potential elements of a legal framework for the development, design, and application of AI, based on the Council of Europe’s standards on human rights, democracy, and the rule of law.Footnote 16
In the light of these ongoing policy discussions, the ambition of this chapter is twofold. First, it will discuss certain legal-ethical challenges posed by the emergence of emotional AI and its manipulative capabilities. Second, it will present a number of responses, specifically those suggesting the introduction of new (constitutional) rights to mitigate the potential negative effects of such developments. Given the limited scope of the chapter, it does not seek to evaluate the appropriateness of the identified suggestions, but rather to provide the foundation for a future research agenda in that direction. The focus of the chapter lies on the European legal framework and on the use of emotions for commercial business-to-consumer purposes, although some observations are also valid in the context of other highly relevant uses of emotional AI,Footnote 17 such as implementations by the public sector, or for the purpose of political micro-targeting, or fake news. The chapter is based on a literature review, including recent academic scholarship and grey literature. Its methodology relies on a legal analysis of how the emergence of emotional AI raises concerns and challenges for ‘constitutional’ rights and values, through the lens of its use in the business-to-consumer context. By constitutional rights, we do not refer to national constitutions but, given the chapter’s focus on the European level, to the fundamental rights and values enshrined in the European Convention for the Protection of Human Rights and Fundamental Freedoms (‘ECHR’), on the one hand, and the EU Charter of Fundamental Rights (‘CFREU’) and Article 2 of the Treaty on European Union (‘TEU’), on the other.
4.2 Challenges to Constitutional Rights and Underlying Values
Protecting the Citizen-Consumer
Emotion has always been at the core of advertising and marketing, and emotion detection has been used in market research for several decades.Footnote 18 Consequently, in various areas of EU and national law, rules have been adopted to protect consumers and constrain forms of manipulative practices in business-to-consumer relations. Media and advertising laws have introduced prohibitions on false, misleading, deceptive, and surreptitious advertising, including an explicit ban on subliminal advertising.Footnote 19 Consumer protection law instruments shield consumers from aggressive, unfair, and deceptive trade practices.Footnote 20 Competition law prohibits exploitative abuses of market power.Footnote 21 Data protection law has set strict conditions under which consumers’ personal data can be collected and processed.Footnote 22 Under contract law, typical grounds for a contract being voidable include coercion, undue influence, misrepresentation, or fraud. The latter, fraud (i.e., intentional deception to secure an unfair or unlawful gain, or to deprive a victim of her legal right), is considered a criminal offence. In the remainder of the text, these rules are referred to as ‘consumer protection law in the broad sense’, as they protect citizens as economic actors.
Nevertheless, the employment of emotional AI may justify additional layers of protection. The growing effectiveness of the technology drew public attention following Facebook’s aforementioned emotional contagionFootnote 23 experiment manipulating users’ newsfeeds,Footnote 24 as well as the Cambridge Analytica scandalFootnote 25 and its profiling of potential Trump voters.Footnote 26 There are also data to suggest that Facebook had offered advertisers the ability to target advertisements to teenagers based on real-time extrapolation of their mood.Footnote 27 Yet Facebook is obviously not alone in exploiting emotional AI (and emotions) in similar ways.Footnote 28 As noted by Stark and Crawford, commenting on the fallout from the emotional contagion experiment, it is clear that quantifying, tracking, and ‘manipulating emotions’ is a growing part of the social media business model.Footnote 29 Researchers are documenting the emergence of what Zuboff calls ‘surveillance capitalism’Footnote 30 and, in particular, its reliance on behavioural tracking and manipulation.Footnote 31 Forms of ‘dark patterns’ are increasingly detected, exposed, and – to some extent – legally constrained. Dark patterns can be described as exploitative design choices, ‘features of interface design crafted to trick users into doing things that they might not want to do, but which benefit the business in question’.Footnote 32 In its 2018 report, the Norwegian Consumer Authority called the use of such dark patterns by large digital service providers (in particular Facebook, Google, and Microsoft) an ‘unethical’ attempt to push consumers towards the least privacy-friendly options of their services.Footnote 33 Moreover, it questioned whether such practices are in accordance with the principles of data protection by default and data protection by design, and whether consent given under these circumstances can be said to be explicit, informed, and freely given. It stated that ‘[w]hen digital services employ dark patterns to nudge users towards sharing more personal data, the financial incentive has taken precedence over respecting users’ right to choose. The practice of misleading consumers into making certain choices, which may put their privacy at risk, is unethical and exploitative.’ In 2019, the French data protection authority, CNIL, fined Google for violating its transparency and information obligations and for the lack of (valid) consent for the personalization of advertisements: in essence, users were not aware of the extent of personalization.Footnote 34 Notably, the Deceptive Experiences to Online Users Reduction Act, introduced by Senators Deb Fischer and Mark Warner in the United States (the so-called DETOUR Act), explicitly provided protection against the ‘manipulation of user interfaces’ and proposed prohibiting dark patterns when seeking consent to use personal information.Footnote 35
It is unlikely, though, that existing consumer protection law (in the broad sense) will be capable of providing a conclusive and exhaustive answer to the question of where to draw the line between forms of permissible persuasion and unacceptable manipulation in the case of emotional AI. On the one hand, there may be situations in which dubious practices escape the scope of application of existing laws. Think of the cameras installed at Piccadilly Lights in London, which are able to detect faces in the crowd around the Eros statue in Piccadilly Circus, and ‘when they identify a face the technology works out an approximate age, sex, mood (based on whether [it thinks] you are frowning or laughing) and notes some characteristics such as whether you wear glasses or whether you have a beard’.Footnote 36 The cameras were used for a period to optimize the advertising displayed on Piccadilly Lights.Footnote 37 Even if such practices of emotional AI in public spaces are not considered in violation of the EU General Data Protection Regulation (given the claimed immediate anonymization of the faces detected), they raise serious questions from an ethical perspective.Footnote 38 On the other hand, the massive scale at which certain practices are deployed may overwhelm the enforcement of individual rights. The Council of Europe’s Parliamentary Assembly expressed concerns that persuasive technologies enable ‘massive psychological experimentation and persuasion on the internet’.Footnote 39 Such practices seem to require a collective answer (e.g., by including them in the blacklist of commercial practices),Footnote 40 since enforcement in individual cases risks being ineffective in remedying harmful effects on society as a whole.
Moreover, emotional AI is arguably challenging the very rationality-based paradigm imbued in (especially, but not limited to) consumer protection law. Modern legality is characterized by a separation of rational thinking (or reason) from emotion, and consumer protection essentially relies on rationality.Footnote 41 As noted by Maroney, the law works from the perspective that rational thinking and emotion ‘belong to separate spheres of human existence; the sphere of law admits only of reason; and vigilant policing is required to keep emotion from creeping in where it does not belong’.Footnote 42 The law is traditionally weighted towards the protection of the verifiable propositional content of commercial communications; however, interdisciplinary research is increasingly recognizing the persuasive effect of unverifiable content (i.e., images, music)Footnote 43 and has long recognized that people interact with computers as social agents and not just tools.Footnote 44 It may be reasonably argued that the separation of rationality from affect in the law fails to take these interdisciplinary insights into account.Footnote 45 In relation to this, the capacity of the current legal framework to cope with these advancements is in doubt. In particular, since the development of emotion detection technology facilitates the creation of emotion-evolved consumer-facing interactions, it poses challenges to a framework which relies on rationality.Footnote 46 These developments arguably raise concerns about the continuing reliance on the rationality paradigm within consumer protection, and hence about consumer self-determination and individual autonomy as core underlying principles of the legal protections.
Motivating a Constitutional Debate
The need for guidance about how to apply and, where relevant, complement existing consumer protection laws (in the broad sense) in light of the rise of emotional AI motivates the need for a debate at a more fundamental level, looking at constitutional and ethical frameworks. The following paragraphs – revolving around three main observations – focus on the former of these frameworks, and will highlight how emotion detection and manipulation may pose threats to the effective enjoyment of constitutional rights and freedoms.
What’s in a Name?
By way of preliminary observation, it should be stressed that, as noted by Sunstein, manipulation has ‘many shades’ and is extremely difficult to define.Footnote 47 Is an advertising campaign by an automobile company showing a sleek, attractive couple exiting from a fancy car before going to a glamorous party ‘manipulation’? Do governments – in an effort to discourage smoking – engage in ‘manipulation’ when they require cigarette packages to contain graphic, frightening health warnings, depicting people with life-threatening illnesses? Is showing unflattering photographs of your opponent during a political campaign ‘manipulation’? Is setting an opt-out consent system for deceased organ donation as the legislative default ‘manipulation’? Ever since Nobel Prize winner Richard Thaler and Cass Sunstein published their influential book Nudge, a rich debate has ensued on the permissibility of deploying choice architectures for behavioural change.Footnote 48 The debate, albeit extremely relevant in the emotional AI context, exceeds the scope of this chapter, and is inherently linked to political-philosophical discussions. A key takeaway from Sunstein’s writing is that, in a social order that values free markets and is committed to freedom of expression, it is ‘exceptionally difficult to regulate manipulation as such’.Footnote 49 He suggests considering a statement or action manipulative to the extent that it does not sufficiently engage or appeal to people’s capacity for reflective and deliberative choice. This reminds us of the notions of consumer self-determination and individual autonomy, which we mentioned previously and which will be discussed further in this section.
From Manipulation over Surveillance to Profiling Errors
Second, it is important to understand that, in addition to the concerns over its manipulative capabilities, on which the chapter has focused so far, emotional AI and its employment equally require us to take into consideration potential harmful affective impacts, on the one hand, and potential profiling errors, on the other. In relation to the former (the latter are discussed later), it is well-known that surveillance may cause a chilling effect on behaviourFootnote 50 and, in this way, encroach on our rights to freedom of expression (Article 10 ECHR; Article 11 CFREU), freedom of assembly and association (Article 11 ECHR; Article 12 CFREU), and – to the extent that our moral integrity is at stake – our right to private life and personal identity (Article 8 ECHR; Article 7 CFREU).Footnote 51 Significantly, as noted by Calo, ‘[e]ven where we know intellectually that we are interacting with an image or a machine, our brains are hardwired to respond as though a person were actually there’.Footnote 52 The mere observation or perception of surveillance can have a chilling effect on behaviour.Footnote 53 As argued by Stanley (in the context of video analytics), one of the most worrisome concerns is ‘the possibility of widespread chilling effects as we all become highly aware that our actions are being not just recorded and stored, but scrutinized and evaluated on a second-by-second’ basis.Footnote 54 Moreover, such monitoring can also have an impact on an individual’s ability to ‘self-present’.Footnote 55 This refers to the ability of individuals to present multifaceted versions of themselves,Footnote 56 and thus to behave differently depending on the circumstances.Footnote 57 Emotion detection arguably adds a layer of intimacy-invasion through its capacity not only to detect emotions as expressed but also to detect underlying emotions that are being deliberately disguised. This is of particular significance, as it not only limits the capacity to self-present but potentially erodes this capacity entirely. This could become problematic if such technologies and the outlined technological capacity become commonplace.Footnote 58 In that regard, it is important to understand that emotional AI can have an impact on an individual’s capacity to self-present irrespective of its accuracy: what matters is the individual’s belief that they are being watched, since the mere observation or perception of surveillance can have a chilling effect on behaviour.Footnote 59
The lack of accuracy of emotional AI, resulting in profiling errors and incorrect inferences, presents additional risks of harm,Footnote 60 including inconvenience, embarrassment, or even material or physical harm.Footnote 61 In this context, it is particularly relevant that a frequently adopted approachFootnote 62 to emotion detection relies on the six basic emotions identified by Ekman (i.e., happiness, sadness, surprise, fear, anger, and disgust). However, this classification has been heavily criticized as not accurately reflecting the complex nature of an affective state.Footnote 63 The other major approaches to detecting emotions, namely the dimensional and appraisal-based approaches, present challenges of their own.Footnote 64 As Stanley puts it, emotion detection is an area where there is special reason to be sceptical, since many such efforts spiral into ‘a rabbit hole of naïve technocratic simplification based on dubious beliefs about emotions’.Footnote 65 The AI Now Institute at New York University warns (in the light of facial recognition) that new technologies reactivate ‘a long tradition of physiognomy – a pseudoscience that claims facial features can reveal innate aspects of our character and personality’ – and emphasizes that contextual, social, and cultural factors play a larger role in emotional expression than was believed by Ekman and his peers.Footnote 66 Leaving to one side the point that emotion detection through facial expressions is a pseudoscience, improving the accuracy of emotion detection more generally may arguably require more invasive surveillance to gather more contextual insights and signals, paradoxically creating additional difficulties from a privacy perspective. Against this backdrop, the risks associated with profiling are strongly related to the fact that the databases being mined for inferences are often ‘out-of-context, incomplete or partially polluted’, resulting in the risk of false positives and false negatives.Footnote 67 This risk remains unaddressed by the individual participation rights approach in the EU data protection framework. Indeed, while the rights of access, correction, and erasure in the EU General Data Protection Regulation may have theoretical significance, the practical operation of these rights requires significant effort and is becoming increasingly difficult.Footnote 68 This in turn may have a significant impact on the enjoyment of key fundamental rights and freedoms, such as inter alia the right to respect for private and family life and protection of personal data (Article 8 ECHR; Articles 7–8 CFREU); equality and non-discrimination (Article 14 ECHR; Articles 20–21 CFREU); and freedom of thought, conscience, and religion (Article 9 ECHR; Article 10 CFREU); but also – and this brings us to our third observation – the underlying key notions of autonomy and human dignity.
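To make the critique of the categorical approach more concrete, the following minimal, hypothetical sketch shows what a classifier built on Ekman’s six labels does in practice: whatever the underlying signal, and however ambiguous it is, the output is forced into one of six discrete bins. The label set is Ekman’s; everything else (the scores, the function) is an illustrative assumption, not a description of any deployed system.

```python
# Hypothetical sketch of the categorical ("basic emotions") approach criticized above.
# The upstream model's per-label scores are assumed; the point is only that an
# ambiguous affective state is still collapsed into a single discrete label.

EKMAN_LABELS = ["happiness", "sadness", "surprise", "fear", "anger", "disgust"]

def classify(scores):
    """scores: per-label confidence values produced by some upstream model."""
    assert len(scores) == len(EKMAN_LABELS)
    best = max(range(len(scores)), key=lambda i: scores[i])
    return EKMAN_LABELS[best]

# A low-confidence, near-uniform reading is nevertheless reported as one emotion.
print(classify([0.18, 0.17, 0.17, 0.16, 0.16, 0.16]))  # -> "happiness"
```

The example illustrates why context and ambiguity matter: the discrete output gives no hint that the underlying signal was barely distinguishable from noise.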
Getting to the Core Values: Autonomy and Human Dignity
Both at the EU and Council of Europe level, institutions have stressed that new technologies should be designed in such a way that they preserve human dignity and autonomy – both physical and psychological: ‘the design and use of persuasion software and of ICT or AI algorithms … must fully respect the dignity and human rights of all users’.Footnote 69 Manipulation of choice can inherently interfere with autonomy.Footnote 70 Although the notion of autonomy takes various meanings and conceptions, based on different philosophical, ethical, legal, and other theories,Footnote 71 for the purposes of this chapter, the Razian interpretation of autonomy is adopted, as it recognizes the need to facilitate an environment in which individuals can act autonomously.Footnote 72 According to Razian legal philosophy, rights are derivatives of autonomyFootnote 73 and, in contrast with the traditional liberal approach, autonomy requires more than simple non-interference. Raz’s conception of autonomy does not preclude the potential for positive regulatory intervention to protect individuals and enhance their freedom. In fact, such positive action is at the core of this conception of autonomy, as a correct interpretation must allow effective choice in reality, thus at times requiring regulatory intervention.Footnote 74 Raz argues that certain regulatory interventions which support certain activities and discourage those which are undesirable ‘are required to provide the conditions of autonomy’.Footnote 75 According to Raz, ‘[a]utonomy is opposed to a life of coerced choices. It contrasts with a life of no choices, or of drifting through life without ever exercising one’s capacity to choose. Evidently the autonomous life calls for a certain degree of self-awareness. To choose one must be aware of one’s options.’Footnote 76 Raz further asserts: ‘Manipulating people, for example, interferes with their autonomy, and does so in much the same way and to the same degree, as coercing them. Resort to manipulation should be subject to the same conditions as resort to coercion.’Footnote 77 Hence the manipulation of choice can inherently interfere with autonomy, and one can conclude that through this lens, excessive persuasion also runs afoul of autonomy.Footnote 78
Autonomy is inherent in the operation of the democratic values, which are protected at the foundational level by fundamental rights and freedoms. However, there is no express reference to a right to autonomy or self-determination in either the ECHR or the CFREU. Despite not being expressly recognized in a distinct ECHR provision, the European Court of Human Rights (further ‘ECtHR’) has ruled on several occasions that the protection of autonomy comes within the scope of Article 8 ECHR,Footnote 79 which specifies the right to respect for private and family life. This connection has been repeatedly illustrated in the ECtHR jurisprudence dealing with individuals’ fundamental life choices, including inter alia in relation to sexual preferences/orientation, and personal and social life (i.e., including a person’s interpersonal relationships). Such cases illustrate the role played by the right to privacy in the development of one’s personality through self-realization and autonomy (construed broadly).Footnote 80 The link between the right to privacy and autonomy is thus strong, and therefore, although privacy and autonomy are not synonyms,Footnote 81 it may be reasonably argued that the right to privacy currently offers an avenue for protection of autonomy (as evidenced by the ECtHR case law).Footnote 82 The emergence of emotional AI and the detection of emotions in real time through emotion surveillance challenges the two strands of the right simultaneously, namely (1) privacy as seclusion or intimacy through the detection of emotions and (2) privacy as freedom of action, self-determination, and autonomy via their monetization.Footnote 83
Dignity, similar to autonomy, cannot be defined easily. The meaning of the word is by no means straightforward, and its relationship with fundamental rights is unclear.Footnote 84 The Rathenau Institute has touched upon this issue, noting that technologies are likely to interfere with other rights if the use of technologies interferes with human dignity.Footnote 85 However, there is little or no consensus as to what the concept of human dignity demands of lawmakers and adjudicators, and as noted by O’Mahony, as a result, many commentators argue that it is at best meaningless or unhelpful, and at worst potentially damaging to the protection of human rights.Footnote 86 Whereas a full examination of the substantive content of the concept is outside the scope of this chapter, it can be noted that human dignity, despite being interpreted differently due to cultural differences,Footnote 87 is considered to be a central value underpinning the entirety of international human rights law,Footnote 88 one of the core principles of fundamental rights,Footnote 89 and the basis of most of the values emphasized in the ECHR.Footnote 90 Although the ECHR itself does not explicitly mention human dignity,Footnote 91 its importance has been highlighted in several legal sources related to the ECHR, including the case law of ECtHR and various documents of the CoE.Footnote 92 Human dignity is also explicitly recognized as the foundation of all fundamental rights guaranteed by the CFREU,Footnote 93 and its role was affirmed by the Court of Justice of the EU (further ‘CJEU’).Footnote 94
With regard to its substantive content, it can be noted that, as O’Mahony argues, perhaps the most universally recognized aspects of human dignity are equal treatment and respect.Footnote 95 In the context of emotional AI, it is particularly relevant that although human dignity should not be considered a right in itself,Footnote 96 it is the source of the right to personal autonomy and self-determination (i.e., the latter are derived from the underlying principle of human dignity).Footnote 97 As noted by Feldman, there is arguably no human right which is unconnected to human dignity; however, ‘some rights seem to have a particularly prominent role in upholding human dignity’, and these include the right to be free of inhuman or degrading treatment, the right to respect for private and family life, the right to freedom of conscience and belief, the right to freedom of association, the right to marry and found a family, and the right to be free of discriminatory treatment.Footnote 98 Feldman argues that, apart from freedom from inhuman and degrading treatment, these rights are ‘not principally directed to protecting dignity and they are more directly geared to protecting the interests in autonomy, equality and respect’.Footnote 99 However, it is argued that these interests – autonomy, equality, and respect – are important in providing circumstances in which ‘dignity can flourish’, and the rights which protect them usefully serve as a cornerstone of dignity.Footnote 100 In relation to this, since the employment of emotional AI may pose threats to these rights (e.g., to the right to respect for private and family life, as illustrated above, or to the right to be free of discriminatory treatment),Footnote 101 it may in essence also pose threats to human dignity. To illustrate, one may refer to the analysis of live facial recognition technologies by the EU Agency for Fundamental Rights (further ‘FRA’),Footnote 102 emphasizing that the processing of facial images may affect human dignity in different ways.Footnote 103 According to FRA, human dignity may be affected, for example, when people feel uncomfortable going to certain places or events, change their behaviours, or withdraw from social life. The ‘impact of what people may perceive as surveillance technologies on their lives may be so significant as to affect their capacity to live a dignified life’.Footnote 104 FRA argues that the use of facial recognition can have a negative impact on people’s dignity and, relatedly, may pose threats to the rights to privacy and data protection.Footnote 105
To summarize, the deployment of emotional AI in a business-to-consumer context necessitates a debate at a fundamental, constitutional level. Although it may benefit both businesses and consumers (e.g., by providing revenues and consumer satisfaction, respectively), it has functional weaknessesFootnote 106 and also raises the legal considerations outlined above. Aside from the obvious privacy and data protection concerns, from the consumer’s perspective, individual autonomy and human dignity as overarching values may be at risk. Influencing activities evidently interfere not only with an individual’s autonomy and self-determination, but also with the individual’s freedom of thought, conscience, and religion.Footnote 107 As the CoE’s Committee of Ministers has noted, in other contexts too (e.g., political campaigning), fine-grained, subconscious, and personalized levels of algorithmic persuasion may have significant effects on the cognitive autonomy of individuals and their right to form opinions and take independent decisions.Footnote 108 As a result, not only may the exercise and enjoyment of individual human rights be weakened, but democracy and the rule of law may also be threatened, as they are equally grounded in the fundamental belief in the equality and dignity of all humans as independent moral agents.Footnote 109
4.3 Suggestions to Introduce New (Constitutional) Rights
In the light of the previously noted factors, it comes as no surprise that some authors have discussed or suggested the introduction of novel rights in order to reinforce the existing legal arsenal.Footnote 110 Although both autonomy and dignity as relevant underlying values, and some relevant rights such as the right to privacy, freedom of thought, and freedom of expression, are protected by the ECHR, some scholars argue that the ECHR does not offer sufficient protection in the light of the manipulative capabilities of emotional AI.Footnote 111 The subsequent paragraphs portray, in a non-exhaustive manner, such responses that concern the introduction of new (constitutional) rights.
A first notable (American) scholar is Shoshana Zuboff, who has argued (in the broader context of surveillance capitalism)Footnote 112 for the ‘right to the future tense’. As noted by Zuboff, ‘we now face the moment in history when the elemental right to future tense is endangered’ by a digital architecture of behavioural modification owned and operated by ‘surveillance capital’.Footnote 113 According to Zuboff, current legal frameworks, mostly centred on privacy and antitrust, have not been sufficient to prevent undesirable practices,Footnote 114 including the exploitation of technologies for manipulative purposes. The author argues for laws that reject the fundamental legitimacy of certain practices,
including the illegitimate rendition of human experience as behavioral data; the use of behavioural surplus as free raw material; extreme concentrations of the new means of production; the manufacture of prediction products; trading in behavioral futures; the use of prediction products for third-party operations of modification, influence and control; the operations of the means of behavioural modification; the accumulation of private exclusive concentrations of knowledge (the shadow text); and the power that such concentrations confer.Footnote 115
While arguing about the rationale of the so-called right to the future tense, the author relies on the importance of free will (i.e., Zuboff argues that manipulation in essence eliminates the freedom to will). Consequently, there is no future without the freedom to will, and there are no subjects but only ‘objects’.Footnote 116 As the author puts it, ‘the assertion of freedom of will also asserts the right to the future tense as a condition of a fully human life’.Footnote 117 In arguing for the recognition of such a right as a human right, Zuboff relies on Searle, who argues that elemental rights are crystallized as formal human rights only at that moment in history when they come under systematic threat. Hence, given the development of surveillance capitalism, it is necessary to recognize the right to the future tense as a human right. To illustrate, Zuboff argues that no one asserts, for example, a right to breathe, because it is not under attack, which cannot be said of the right to the future tense.Footnote 118
German scholar Jan Christoph Bublitz argues for the ‘right to cognitive liberty’ (phrased alternatively a ‘right to mental self-determination’), relying in essence on the fact that the right to freedom of thought has been insignificant in practice, despite its theoretical importance.Footnote 119 Bublitz calls for the law to redefine the right to freedom of thought in terms of its theoretical significance in light of technological developments capable of altering thoughts.Footnote 120 The author argues that such technological developments require the setting of normative boundaries ‘to secure the freedom of the forum internum’.Footnote 121
In their report for the Council of Europe analyzing human rights in the robot age, Dutch scholars Rinie van Est and Joost Gerritsen from the Rathenau Institute suggest reflecting on two novel human rights, namely, the right to not be measured, analyzed or coached and the right to meaningful human contact.Footnote 122 They argue that such rights are indirectly related to and aim to elaborate on existing human rights, in particular, the classic privacy right to be let alone and the right to respect for family life (i.e., the right to establish and develop relationships with other human beings).Footnote 123 While discussing the rationale of a potential right not to be measured, analyzed, or coached, they rely on scholarly work revealing detrimental effects of ubiquitous monitoring, profiling or scoring, and persuasion.Footnote 124 They argue that what is at stake given the technological development is not only the risk of abuse but the right to remain anonymous and/or the right to be let alone, ‘which in the robot age could be phrased as the right to not be electronically measured, analyzed or coached’.Footnote 125 However, their report ultimately leaves it unclear whether they assume it is necessary to introduce the proposed rights as new formal human rights. Rather, it calls for the CoE to clarify how these rights – the right to not be measured, analyzed, or coached, and the right to meaningful human contact – could be included within the right to privacy and the right to family life respectively.Footnote 126 In addition to considering potential novel rights, the Rathenau report calls for developing fair persuasion principles, ‘such as enabling people to monitor the way in which information reaches them, and demanding that firms must be transparent about the persuasive methods they apply’.Footnote 127
According to UK scholar Karen Yeung, manipulation may threaten individual autonomy and the ‘right to cognitive sovereignty’.Footnote 128 In arguing about the rationale of such a right, Yeung relies on the importance of individual autonomy and on the Razian approach comparing manipulation to coercion,Footnote 129 as discussed previously. In addition, Yeung relies on Nissenbaum, who observes that the risks of manipulation are even more acute in a digital world involving ‘pervasive monitoring, data aggregation, unconstrained publication, profiling, and segregation’, because the manipulation that deprives us of autonomy is more subtle than in a world in which lifestyle choices are punished and explicitly blocked.Footnote 130 When it comes to arguing for the need to introduce a new formal human right, Yeung notes that human dignity and individual autonomy are not sufficiently protected by Articles 8, 9, and 10 of the ECHR; however, the study in question does not provide detailed arguments in that regard. The author also refrains from elaborating on the content of such a right.Footnote 131
Some novel rights are discussed at the institutional level as well. For example, the CoE’s Parliamentary Assembly has proposed working on guidelines which would cover, among other things, the recognition of some new rights, including the right not to be manipulated.Footnote 132
Further research is undoubtedly necessary to assess whether the current legal framework is already capable of accommodating these developments properly. While the introduction of novel constitutional rights may indeed contribute to defining normative beacons, we should at the same time be cautious not to dilute the significance of constitutional rights by introducing new ones that could, in fact, be considered manifestations of existing constitutional rights.Footnote 133 Hence, it is particularly important to delineate, as noted by Clifford, between primary and secondary law, and to assess the capabilities of the latter in particular.Footnote 134 In other words, it is necessary to exercise restraint, to consider what already exists, and to delineate between rights and the specific manifestations of these rights in their operation and/or in secondary law protections (i.e., derived sub-rights). For example, key data subject rights like the rights to erasure, objection, access, and portability are all manifestations of the aim of respecting the right to data protection as balanced with other rights and interests. Admittedly, while the right to data protection has been explicitly recognized as a distinct fundamental right in the CFREU, this is not the case in the context of the ECHR, where the ECtHR has interpreted the right to privacy in Article 8 ECHR as encompassing informational privacy.Footnote 135 The rich debate on the relation between the right to privacy and the right to data protection, and how this impacts secondary law like the GDPR and Convention 108+, clearly exceeds the scope of this chapter.Footnote 136
4.4 Blueprint for a Future Research Agenda
The field of affective computing, and more specifically the technologies capable of detecting, classifying, and responding to emotions – referred to in this chapter as emotional AI – holds promise in many application sectors, for instance for patient well-being in the health sector, for road safety, for consumer satisfaction in retail, and so forth. But, just like most (if not all) other forms of artificial intelligence, emotional AI brings with it a number of challenges and calls for assessing whether the existing legal frameworks are capable of accommodating these developments properly. Due to its manipulative capabilities, its potential harmful affective impact, and its potential profiling errors, emotional AI puts pressure on a whole range of constitutional rights, such as the right to respect for private and family life, non-discrimination, and freedom of thought, conscience, and religion. Moreover, the deployment of emotional AI poses challenges to individual autonomy and human dignity as values underpinning the entirety of international human rights law, as well as to the rationality-based paradigm imbued in law.
Despite the constitutional protection already offered at the European level, some scholars argue, in particular in the context of the ECHR, that this framework does not offer sufficient protection in light of the manipulative capabilities of emotional AI. They suggest contemplating or introducing novel rights such as the right to the future tense; the right to cognitive liberty (or, alternatively, the right to mental self-determination); the right to not be measured, analyzed, or coached; the right to cognitive sovereignty; and the right not to be manipulated.
At the same time, it should be noted that the field of constitutional law (in this chapter meant to cover the field of European human rights law) is a very dynamic area that is further shaped through case law, along with societal, economic, and technological developments. The way in which the ECtHR has given a multifaceted interpretation of the right to privacy in Article 8 ECHR is a good example of this.
This motivates the relevance of further research into the scope of existing constitutional rights and secondary sub-rights, in order to understand whether there is effectively a need to introduce new constitutional rights. A possible blueprint for IACL’s Research Group ‘Algorithmic State, Society and Market – Constitutional Dimensions’ could include
empirical research into the effects of fine-grained, subconscious, and personalised levels of algorithmic persuasion based on affective computing (in general or for specific categories of vulnerable groups, like childrenFootnote 137);
interdisciplinary research into the rise of new practices, such as the trading or renting of machine learning models for emotion classification, which may escape the traditional legal protection frameworks;Footnote 138
doctrinal research into the scope and limits of existing constitutional rights at European level in light of affective computing; Article 9 ECHR and Article 8 CFREU seem particularly interesting from that perspective;
comparative research, on the one hand, within the European context into constitutional law traditions and interpretations at the national level (think of Germany, where the right to human dignity is explicitly recognised in Article 1 Grundgesetz, versus Belgium or France, where this is not the case), and on the other hand, within the global context (comparing, for instance, the fundamental rights orientated approach to data protection in the EU and the more market-driven approach in other jurisdictions such as the US and AustraliaFootnote 139); and
policy research into the level of jurisdiction, and type of instrument, best suited to tackle the various challenges that emotional AI brings with it. (Is there, for instance, a need for a type of ‘Oviedo Convention’ in relation to (emotional) AI?)
At the beginning of this chapter, reference was made to the CoE’s Declaration on the Manipulative Capabilities of Algorithmic Processes of February 2019.Footnote 140 In that Declaration, the Committee of Ministers invites member States to
initiat[e], within appropriate institutional frameworks, open-ended, informed and inclusive public debates with a view to providing guidance on where to draw the line between forms of permissible persuasion and unacceptable manipulation. The latter may take the form of influence that is subliminal, exploits existing vulnerabilities or cognitive biases, and/or encroaches on the independence and authenticity of individual decision-making.
Aspiring to deliver a modest contribution to this much-needed debate, this chapter has set the scene and hopefully offers plenty of food for thought for future activities of the IACL Research Group on Algorithmic State Market & Society – Constitutional Dimensions.
5.1 Introduction
Online human interactions are a continuous matching of data that affects both our physical and virtual life. How data are coupled and aggregated is the result of what algorithms constantly do through a sequence of computational steps that transform the input into the output. In particular, machine learning techniques are based on algorithms that identify patterns in datasets. This chapter explores how algorithmic rationality may fit into Weber’s conceptualization of legal rationality. It questions the idea that technical disintermediation may achieve the goal of algorithmic neutrality and objective decision-making.Footnote 1 It argues that such rationality is represented by surveillance purposes in the broadest meaning. Algorithmic surveillance reduces the complexity of reality by calculating the probability that certain facts will happen on the basis of repeated actions. Algorithms shape human behaviour, codifying situations and facts, stigmatizing groups rather than individuals, and learning from the past: predictions may lead to static patterns that recall the idea of caste societies, in which the individual potential for change and development is far from preserved. The persuasive power of algorithms (so-called nudging) largely consists of small changes aimed at predicting social behaviours that are expected to be repeated over time. In the long run, this dynamic builds a model of anti-social mutation, in which actions are steered in advance. Against such a backdrop, the role of law and legal culture is relevant for individual emancipation and social change in order to frame a model of data production by law. This chapter is divided into four sections: the first describes commonalities and differences between legal bureaucracy and algorithms, the second examines the linkage between a data-driven model of law production and algorithmic rationality, the third shows the different perspective of the socio-legal approach to algorithmic regulation, and the fourth questions the idea of law production by data as a product of legal culture.
5.2 Bureaucratic Algorithms
‘On-life’ dimensions represent the threshold for a sustainable data-driven rationality.Footnote 2 As stated in the White Paper on AI, ‘today 80% of data processing and analysis that takes place in the cloud occurs in data centres and centralized computing facilities, and 20% in smart connected objects, such as cars, home appliances or manufacturing robots, and in computing facilities close to the user (“edge computing”)’. By means of an unceasing growth of categorizations and classifications, algorithms develop mechanisms of social control by connecting the dots. This entails that our actions mostly depend on, or are somehow affected by, the usable form in which the algorithmic code is rendered. In order to enhance their rational capability of calculating every possible action, algorithms aim at reducing human discretion and at structuring behaviours and decisions in a manner similar to bureaucratic organizations. Algorithms act as normative systems that formalize certain patterns. As pointed out by Max Weber, the modern capitalist enterprise is mainly based on calculation. For its existence, it requires justice and an administration whose operation can, at least in principle, be rationally calculated on the basis of general rules – in the same way in which the foreseeable performance of a machine is calculated.Footnote 3 This entails, on the one hand, that, like bureaucracy, algorithms use impersonal rules requiring obedience that impede free, unpredictable choices.Footnote 4 According to the Weberian bureaucratic ideal types, the separation between the administrative body and the material means of the bureaucratic enterprise is quintessential to the most perfect form of bureaucratic administration: political expropriation towards specialized civil servants.Footnote 5 Nonetheless, the impersonality of legal rules does not in any case entail a lack of responsibility, by virtue of the principle of the division of labour and the hierarchical order on which modern bureaucracy is based:Footnote 6 civil servants’ responsibility is to obey impersonal rules (or pretend they are impersonal), whereas exclusive and personal responsibility for his actions belongs to the political boss.Footnote 7 Bureaucracy is characterized by the objective fulfilment of duties, ‘regardless of the person’, based on foreseeable rules and independent of human considerations.Footnote 8
On the contrary, the risk of algorithmic decision-making is that no human actor takes responsibility for the decision.Footnote 9 The supervision and attribution of specialized competences from the highest bureaucratic levels down to the lowest ones (Weber uses the example of ‘procurement’)Footnote 10 ensure that the exercise of authority complies with precise competences and technical qualities.Footnote 11 Standardization, rationalization, and formalization are aspects common to both bureaucratic organizations and algorithms. Bureaucratic administration can be considered economical insofar as it is fast, precise, continuous, specialized, and avoids possible conflicts.Footnote 12 Testing algorithms as legal rational means raises a double question: (1) whether, through artificial intelligence and isocratic forms of administration, the explainability of algorithmic processes improves institutional processes, and in what respect, with regard to staff competence and individual participation; and (2) whether algorithms take on some of the role of processing institutional and policy complexity much more effectively than humans.Footnote 13
According to Aneesh, ‘bureaucracy represents an “efficient” ideal-typical apparatus characterized by an abstract regularity of the exercise of authority centred on formal rationality’.Footnote 14 In fact, algorithms ‘are trained to infer certain patterns based on a set of data. In such a way actions are determined in order to achieve a given goal’.Footnote 15 The socio-technical nature of public administration consists in the ability to share data: this is the enabler of artificial intelligence for rationalization. Like bureaucracy, algorithms would appear to be compatible with three Weberian rationales: the Zweckverein (purpose union), as an ideal type of voluntary associated action; the Anstalt (institution), as an ideal type of institution, a rational system achieved through coercive measures; and the Verband (social group), as an ideal type of common action that aims at an agreement for a common purpose.Footnote 16 According to the first rationale, algorithms are used to smoothly guide a predictable type of social behaviour through data extraction on an ‘induced’ and mostly accepted voluntary basis;Footnote 17 as for the second, the induction of needs is achieved through forms of ‘nudging’, such as the customization of contractual forms and services based on profiling techniques and without meaningful mechanisms of consent; finally, legitimacy is based on the social agreement on their utility to hasten and cheapen services (automation theory) or even improve them (augmentation system).Footnote 18
However, unlike bureaucracy, technology directly legitimizes action, presenting users with the bare option ‘can/cannot’. Legitimacy is embedded within the internal rationality of technology. As Pasquale observes, ‘authority is increasingly expressed algorithmically’.Footnote 19 Moreover, as with the rise of bureaucratic action, technologies have been thought to be subject to judicial review so as not to undermine civil liberties and equality. As a matter of fact, algorithmic systems are increasingly being used as part of the continuous process of Entzauberung der Welt (disenchantment of the world) – the achievement of rational goals through organizational measures – with potentially significant consequences for individuals, organizations, and societies as a whole.
There are essentially four algorithmic rational models of machine learning that are relevant for law-making: Neural Networks, algorithms that learn from examples through neurons organized in layers; Tree Ensemble methods, which combine more than one learning algorithm to improve the predictive power of any of the single learning algorithms that they combine; Support Vector Machines, which use a subset of the training data, called support vectors, to represent the decision boundary; and Deep Neural Networks, which can model complex non-linear relationships with multiple hidden layers.Footnote 20
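To make these four families concrete, the following sketch (not drawn from the chapter) shows how each is typically instantiated with the widely used scikit-learn library; the dataset and parameters are hypothetical placeholders chosen only to illustrate the taxonomy above, not any system discussed in this chapter.

```python
# Illustrative sketch: the four model families named above, instantiated with
# scikit-learn on a synthetic, hypothetical dataset. All names and parameters
# are placeholders for illustration only.
from sklearn.neural_network import MLPClassifier        # neural network: layered neurons
from sklearn.ensemble import RandomForestClassifier     # tree ensemble: combines many learners
from sklearn.svm import SVC                              # support vector machine: support vectors define the boundary
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

# Synthetic tabular data standing in for coded case features and outcomes.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

models = {
    "neural_network": MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0),
    "tree_ensemble": RandomForestClassifier(n_estimators=100, random_state=0),
    "support_vector_machine": SVC(kernel="rbf", random_state=0),
    # A "deep" network differs mainly in having multiple hidden layers:
    "deep_neural_network": MLPClassifier(hidden_layer_sizes=(64, 64, 32), max_iter=500, random_state=0),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(name, round(model.score(X_test, y_test), 3))
```

The point of the sketch is only that the four families share the same learn-from-examples workflow while differing in how the decision boundary is represented, which is what makes their outputs opaque to different degrees.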
Opaqueness and automation are their main common features, consisting of the secrecy of the algorithmic code and the very limited human input.Footnote 21 This typical rationality is blind: as Zuboff notes, algorithms inform operations through the interaction of these two aspects. Nonetheless, explainability and interpretability are also linked to the potential of algorithmic legal design as a rational means.Footnote 22 Rational algorithmic capability is linked to the most efficient use of data and of the inferences based on them. However, the development of data-driven techniques in the algorithmic architecture determines a triangulation among market, law, and technology. To unleash the full potential of data, rational means deployed to create wider data accessibility and sharing for private and public actors are now being devised in many areas of our lives. However, it should be borne in mind that the use of algorithms as a tool for speeding up the efficiency of the public sector cannot be examined separately from the risk of algorithmic surveillance based on indiscriminate access to private-sector data.Footnote 23 This is because the entire chain of services depends upon more or less overarching access to private-sector data. Access to those data requires a strong interaction between public actors’ political power and private actors’ economic and technological capability. This dynamic is so pervasive that it dominates our daily life, from market strategy to economic supply. Furthermore, once the ‘sovereigns’ of the nation-states and their borders have been trumped, data flows re-articulate space in an endless way. The paradox of creating space without having a territory is one of the rationales of the new computational culture that is building promises for the future.
5.3 Law Production by Data
Law production is increasingly subjected to a specialized rationality.Footnote 24 Quantitative knowledge feeds the aspiration of the state bureaucracy’s ‘rationality’, since it helps dress the exercise of public powers in an aura of technical neutrality and impersonality, apparently leaving no room for individual discretion.Footnote 25 Behind the appearance of the Weberian bureaucratic principle sine ira et studio – which refers to the exclusion of affective, personal, non-calculable, and non-rational factors in the fulfilment of civil servants’ dutiesFootnote 26 – the use of classification and measurement techniques affecting human activities generates new forms of power that standardize behaviours in order to forecast agents’ expectations, performances, and conduct.Footnote 27 As correctly highlighted by Zuboff, ‘instrumentarian power reduces the human experience to measurable observable behaviour while remaining steadfastly indifferent to the meaning of that experience’.Footnote 28 However, even though the production of the law through customized and tailored solutions can be a legitimate goal of computational law, it is not the whole story. The social context may change while the law is ruling, but technology reflects changing social needs in a more visible way than the law and apparently provides swifter answers.Footnote 29 The law, by contrast, should filter daily changes, including technological ones, into its own language while it is regulating a mutable framework. In competing with other prescriptive systems, the law may be used either as an element of computational rationality or as a tool that is itself computable for achieving specific results. In the first case, the law guides and constrains the action of rational agents through the legal design of algorithms, as an external constraint. In the second case, regulatory patterns are conveyed by different boosts that use the law in a predetermined way to achieve a given goal. Depending on which of those models is chosen, there is a potential risk for the autonomy of the law with respect to algorithmic rationality. Both the autonomy of the law and the principle of certainty applicable to individuals are at stake. This is an increasingly relevant challenge, since the whole of human existence is fragmented into data.
Against these two drawbacks, the law may develop its internal rationality in a third way: as the product of the legal culture that copes with social challenges and needs. Essentially, legal culture is a way in which society reflects upon itself, through the doctrinal and conceptual systems elaborated by lawyers, through interpretation, and through models of reasoning.Footnote 30 This entails the law being a rational means not only due to its technical linguistic potentialFootnote 31 but also due to its technical task of producing social order.Footnote 32 As Weber notes, the superiority of bureaucratic legal rationality over other rational systems is technical.
Nonetheless, not all eras produce a good legal culture, as legal culture can be strongly affected by political and social turmoil. In the age of datification, all fragments of daily life are translated into data, and it is technically possible to shape different realities on demand, including information, politics, and markets. The creation of propensities and assumptions through algorithms as the basis of a pre-packaged concept of the law – driven by colonizing factors – breaks off the spontaneous process through which legal culture surrounds the law. As a result, the effects of algorithmic legal predictions contrast with the goal of legal rationality, which is to establish certain hypotheses and to cluster factual situations into them. The production of legal culture entails the law being the outcome of specific knowledge and of normative meanings resulting from a contextual Weltanschauung. This aspect has nothing to do with either legitimacy or effectiveness; rather, it concerns the way in which the law relies on society. In particular, the capability to produce social consequences that are not directly addressed by the law, by suggesting certain social behaviours and by activating standardized decisions on a large scale, is such a powerful tool that it has been considered the core of algorithmic exception states.Footnote 33 The idea of exception is explained by the continuous confusion between the rule of causality and the rule of correlation.Footnote 34 Such a blurring between causes and effects, evidence and probabilities, causal inferences and variables affects database structures, administrative measures presented in the form of algorithmic code, and ultimately rules.Footnote 35 Algorithms lack adaptability because they are based on a causal model that cannot replicate the inferential process of humans to which the general character of the law refers. Human causal intuitions master uncertainty differently from machine learning techniques.Footnote 36
Data is disruptive for its capability to blur the threshold between what is inside and what is outside the law. The transformation of human existence into data is at the crossroads of the most relevant challenges for law and society. Data informs the functioning of legal patterns, but it can also be a component of law production. A reflection on the social function of the law in the context of algorithmic rationality is useful in order to understand what type of data connections are created for regulatory purposes within an ‘architecture of assumptions’, to quote McQuillan. Decoding algorithms sometimes allows one to interpret such results, even though the plurality and complexity of societal patterns cannot be reduced to the findings of data analysis or to the inferential interpretations generated by automated decision-making processes. The growing amount of data, despite being increasingly the engine of law production, does not reflect the complexity of social reality, which instead involves possible causal interactions between technology, reality, and regulatory patterns, and alternative compositions of them, depending upon uncertain variables. Datification, on which advanced technologies are generally based, has profoundly altered the mechanisms of production of legal culture, which cannot easily be reduced to what data aggregation or data analysis is. Relevant behaviours and social changes nourish the inferences that can be made from data streams: although they can be the output of the law, they will never be the input of legal culture. Between the dry facts and the causal explanation there is a very dense texture left to the elaboration of specialized jurists, legal scholars, and judges. Furthermore, globalization strongly shapes common characters across different legal traditions that are no longer identifiable with an archetypal idea of state sovereignty. This depends upon at least two factors: on the one hand, the increasing cooperation between private and public actors in data access and information management beyond national borders; on the other hand, the increasing production of data from different sources. Nonetheless, not much attention has been paid to the necessity of safeguarding the space of legal culture with respect to law overproduction by data. The regulation of technology combined with the legal design of technology tends to create a misleading overlap between the two, because technological feasibility is becoming the natural substitute for legal rationales. Instead, I argue that the autonomous function of legal culture should be reclaimed and preserved as the theoretical grid for data accumulation. What legal culture calls into question is the reflexive social function of the law, which data-driven law erases immediately by producing a computational output. In addition, the plurality of interconnected legal systems cannot be reduced to data. The increasing production of law resulting from data does not reflect the complexity of social reality. How data, and the technologies based on them, affect the rise of legal culture and the production of data-driven laws is not only a matter of data. According to a simple definition of legal culture as ‘one way of describing relatively stable patterns of legally oriented social behaviour and attitudes’,Footnote 37 one may think of data-driven law as a technologically oriented legal conduct.
‘Commodification of “reality” and its transformation into behavioural data for analysis and sales’,Footnote 38 defined by Zuboff as surveillance capitalism, has made private human experience a ‘free raw material’Footnote 39 that can be elaborated and transformed into behavioural predictions feeding production chains and business. Data extraction allows the capitalist system to know everything about everyone. It is a ‘one-way process, not a relationship’, which produces identity fragmentation and attributes an exchange value to single fragments of the identity itself.Footnote 40 Algorithmic surveillance indeed produces a twofold phenomenon: on the one hand, it forges the extraction process itself, which is predetermined to be predictive; on the other hand, it determines effects that are not totally explainable, despite all the accurate proxies input into the system. Those qualities are defined as operational variables, which are processed at such high speed that it is hard for humans to monitor them.Footnote 41
In the light of an unprecedented transformation that is radically shaping the development of personality as well as common values, the role of the law should be not only to guarantee ex post legal remedies but also to reconfigure the dimension of human beings, technology, and social action within integrated projects of coexistence with regulatory models. When an individual is subject to automated decision-making – which determines better or worse chances of well-being, greater or fewer opportunities to find a good job, or, in the case of predictive policing, a threat to the presumption of innocence – the social function of the law is necessary to cope with the increasing complexity of relevant variables and to safeguard freedom. Power relationships striving to impose subjugation vertically along command-and-obedience relationships are replaced by a new ‘axiomatic’ one: the ability to continuously un-code and re-code the lines along which information, communication, and production intertwine, combining differences rather than forcing unity.
5.4 The Socio-legal Approach
The current socio-legal debate on the application of algorithms to legal frameworks is very much focused on issues related to data-driven innovation. Whereas the internal approach is still dominant in many regulatory areas, the relationship between law and technology requires an external perspective that takes into account different possibilities. As the impact of artificial intelligence on the law produces social and cultural patterns, a purely internal legal approach cannot contribute to a comprehensive understanding. Moreover, whereas the law produces binding effects depending on whether certain facts happen or not, algorithms are performative in the sense that the effect they aim to produce is encompassed in the algorithmic code. The analysis of both the benefits and the risks of algorithmic rationality has societal relevance for the substantial well-being of individuals. On the one hand, the lack of an adequate sectoral regulatory framework requires a cross-cutting analysis to highlight potential shortcomings in the existing legal tools and their inter-relationships. In addition, operational solutions should be proactive in outlining concrete joined-up policy actions, which also consider the role of soft-law solutions. On the other hand, the potential negative impact of biased algorithms on rights protection and non-discrimination risks establishing a legal regime for algorithmic rationality that does not meet societal needs. In order to address the interplay between societal needs, rights, and algorithmic decision-making, it is relevant to pinpoint several filters on the use of AI technology in daily life.
For example, a social filter sets limits on the manner in which technology is applied, on the basis of the activities of people and organizations. A well-known recent example of a social filter is how taxi drivers and their backing organizations have opposed transport platforms and services. An institutional filter sets institutionally determined limits on the ways in which technology can be applied. This type of institutional system includes the corporate governance model, the education system, and the labour market system. A normative filter sets regulatory and statute-based limitations on the manner in which technology can be applied. For example, the adoption of self-driving vehicles in road traffic will be slow until the related issues regarding responsibilities have been conclusively determined in legislation. Last but not least, an ethical filter sets restrictions on the ways in which technology is applied.
A further step requires identifying a changing legal paradigm that progressively shifts attention from the idea of a right to a reasonable explanation of the algorithm, as a form of transparency, to the right to reasonable inferences (through an extensive interpretation of the notion of personal data so that it includes decisional inferences), or towards an evolutionary interpretation of the principle of good administration.Footnote 42 The evolutionary interpretation of the principle of good administration places the algorithmic ‘black box’ within a more fruitful path, oriented towards the legality and responsibility of the decision maker in the algorithmic decision-making process. This is particularly relevant in the field of preventive surveillance, for example, as it is mainly a public service whose technological methods can be interpreted in the light of the principle of good administration.
More broadly, the rationale of AI in the digital single market should inter alia guarantee: (1) better, cost-efficient services; (2) unified cross-border public services, with increased efficiency and improved transparency; (3) the participation of individuals in the decision-making process; and (4) improved use of AI in the private sector, as a potential means to enhance business and competitiveness.Footnote 43
In order to achieve these objectives, it is necessary to evaluate the social impact, as well as the risks and opportunities, entailed by the interaction between public and private actors in accessing data through the use of algorithmic rationality combined with legal rationality. However, the optimization of organizational processes in terms of efficiency, on the one hand, and the degree of users’ satisfaction, on the other, are not the relevant factors for facing the impact of algorithms on rights. The law, in preserving individual chances of emancipation, is at the centre of this interaction, constituting the beginning and the end of the causal chain, since both the production of law for protecting rights and the violation of rights significantly alter this relationship. This aspect is significant, for instance, in the field of machine learning carried out on the basis of the mass collection of data flows, from which algorithms are able to learn. The ability of machine learning techniques to model human behaviour, to codify reality, and to stigmatize groups increases the risk of entrenching static social situations, undermining the free and self-determined development of personality. Such a risk is real irrespective of whether algorithms are used to align a legal system to a predetermined market model or to reach a precise outcome of economic policy. In both cases, algorithms exceed the primary function of the law, which is to match the provision of general and abstract rules with concrete situations through adaptive solutions. Such an adaptation process is missing in the algorithmic logic, because the algorithmic code is unchangeable.
Law as a social construction is able to address specific situations and, at the same time, to change in its interpretation or according to social needs. Indeed, the law should advocate an emancipatory function for human beings, who are not subject to personal powers. If applied to algorithmic decision-making in the broadest context, the personality of laws may result in tailored and fragmented pictures corresponding to ‘social types’ on the basis of profiling techniques. This is the reason why law production by data processed through algorithms cannot be the outcome of any legal culture, as it would be a pre-packaged solution regardless of the institutional and political context surrounding causes and effects. The increasingly tailored production of data-driven law through algorithmic rationality cannot overcome such a threshold in a way that enables a decision-making process – at every level of daily life – to disregard autonomy, case-by-case evaluation, and freedom.
The alignment of legal requirements and algorithmic operational rules must always be demonstrated ex post both at a technical level and at a legal level in relation to the concrete case.
5.5 Data Production by Law
Against the backdrop of data-driven law, legal rationality should be able to frame a model based instead on data production by law. However, a real challenge that should be borne in mind is that algorithmic bureaucracy does not need a territory as legal bureaucracy does.Footnote 44 Algorithmic systems are ubiquitous, along with the data that feed machine learning techniques. Whereas the bureaucratic state is a way to organize and manage the distribution of power over and within a territory, algorithms are not limited by territory. Sovereignty’s fragmentation operated by data flows shows that virtual reality is a radical alternative to territorial sovereignty and cannot be understood as a mere assignment of sovereign powers over portions of data. The ubiquity of data requires a new description of regulatory patterns in the field of cross-border data governance, since data location – which would, under certain conditions, determine the application of one legal regime and the exclusion of another – is not necessarily a criterion meaningfully associated with the data flow. Data is borderless, as it can be scattered across different countries.Footnote 45 Although data can be accessed everywhere irrespective of where it is located, its regulation and legal effects are still anchored to the territoriality principle. Access to data does not depend on physical proximity; nor are regulatory schemes arising from data flows intrinsically or necessarily connected to any particular territory. Connection with territory must justify jurisdictional claims but does not have much to do with physical proximity. Such disconnection between information and territory potentially generates conflicts of law and may produce contrasting claims of sovereign powers.Footnote 46 This is magnified by algorithmic systems, which do not have a forum loci because they are valid formulations regardless of the geographical space where they are applied. Furthermore, they gather data sets irrespective of borders or jurisdictions. Bureaucracy’s functioning depends much upon borders, as it works only within a limited territory.Footnote 47 On the contrary, algorithms are unleashed from territories but can affect multiple jurisdictions, as the algorithmic code is territorially neutral. This may be potentially dangerous for two reasons: on the one hand, algorithms can transversally impact different jurisdictions, regardless of the legal systems and regulatory regimes involved; on the other hand, the disconnection of the algorithmic code from territory implies a law production that does not emerge from legal culture. Even though legal culture is not necessarily bound to the concept of state sovereignty,Footnote 48 it is inherent to a territory as a political and social space. Weber rejects the vision of the modern judge as a machine into which ‘documents are input together with expenses’ and which spits out the sentence together with the motives mechanically inferred from the paragraphs. Indeed, there is space for an individualizing assessment, in respect of which the general norms have a negative function in that they limit the official’s positive and creative activity.Footnote 49 This massive difference between legal rationality and algorithmic rationality imposes a rethinking of the relationship between law, technology, and legal culture. Data production by law can be a balanced response to reconnect algorithmic codes to the boundaries of jurisdictions.
Of course, many means of data production by law exist. A simple legal design of data production is not the optimal option. Matching the algorithmic production of data with legal compliance can be mechanically ensured through the application of certain patterns inserted in the algorithmic process. Instead, the impact of legal culture on the algorithmic production of data shapes a socio-legal context inspiring the legal application of rules on data production.
The experience of the Italian Administrative Supreme Court (Council of State) is noteworthy. After the leading case of 8 April 2019, n. 2270, which opened the path to administrative algorithmic decision-making, the Council of State confirmed its case law.Footnote 50 It upheld the lawfulness of automated decision-making in administrative law, setting out limits and criteria.Footnote 51 It extended for the first time automated decision-making to both the discretionary and the binding activities of public administration. The use of algorithmic administrative decision-making is encompassed by the principle of good performance of administration pursuant to article 97 of the Italian Constitution. The Council stated that the fundamental need for protection raised by the use of the so-called algorithmic IT tool is transparency, stemming from the principle of motivation of the decision.Footnote 52 It expressly denied algorithmic neutrality, holding that predictive models and criteria are the result of precise choices and values. Conversely, the danger associated with the instrument is not overcome by the rigid and mechanical application of all the detailed procedural rules of Law no. 241 of 1990 (such as, for example, the notice of initiation of the proceeding).
The underlying innovative rationale is that the ‘multidisciplinary character’ of the algorithm requires not only legal but also technical, IT, statistical, and administrative skills, and does not exempt the administration from the need to explain and translate the ‘technical formulation’ of the algorithm into the ‘legal rule’ in order to make it legible and understandable.
Since the algorithm becomes a modality of the authoritative decision, it is necessary to determine specific criteria for its use. Surprisingly, the Council performed an operation of legal blurring, affirming that knowability and transparency must be interpreted according to articles 13, 14, and 15 GDPR. In particular, the interested party must be informed of the possible execution of an automated decision-making process; in addition, the owner of the algorithm must provide meaningful information on the logic used, as well as the significance and the expected consequences of such processing for the interested party.
Additionally, the Council adopted three supranational principles: (1) the full knowability of the algorithm used and of the criteria applied, pursuant to article 41 of the EU Charter (‘Right to good administration’), according to which everyone has the right to know of the existence of automated decision-making processes concerning him or her and, in that case, to receive meaningful information on the logic used; (2) the non-exclusivity of automated decision-making, according to which everyone has the right not to be subjected to solely automated decision-making (similarly to article 22 GDPR); and (3) the non-discrimination principle, as a result of the application of the principle of non-exclusivity, plus data accuracy, minimization of the risk of errors, and data security.Footnote 53 In particular, the data controller must use appropriate mathematical or statistical procedures for profiling, implementing adequate technical and organizational measures in order to ensure the correction of factors that lead to data inaccuracy, thus minimizing the risk of errors.Footnote 54 Input data should be corrected to avoid discriminatory effects in the decision-making output. This operation requires the necessary cooperation of those who instruct the machines that produce these decisions. The goal of a legal design approach is to filter data production through the lens of potential algorithmic harms and the protection of individual rights, and to figure out which kinds of legal remedies are available and useful to individuals. The first shortcoming of such an endeavour is that – even taking for granted the logic of garbage in/garbage out, according to which inaccurate inputs produce wrong outputs – a legal input is not a sufficient condition for producing a lawful output. Instead, an integrated approach such as the one adopted by the Council of State relies on more complex criteria to assess the lawfulness of algorithmic decision-making, also in respect of the actors involved. First, it is necessary to ensure the traceability of the final decision to the competent body, pursuant to the law conferring the power of the authoritative decision on the civil servants in charge.Footnote 55 Second, the comprehensibility of algorithms must involve all aspects but cannot result in harm to IP rights. In fact, pursuant to art. 22, let. c, Law 241/90, holders of an IP right in software are considered counter-interested parties,Footnote 56 but the Consiglio di Stato does not specifically address the position of holders of trade secrets.
5.6 Conclusions: Towards Data Production of Law
While discussing similarities between bureaucratic and algorithmic rationality, I deliberately did not address the issue of secrecy. According to Weber, every power that aims at its own preservation is, in one of its features, a secret power. For all bureaucracies, secrecy is functional to the superiority of their technical tasks over other rational systems.Footnote 57 Secrecy is also the fuel of algorithmic reasoning, as its causal explanation is mostly secret. This common aspect, if taken for granted as a requirement of efficient rational decision-making, should be weighed very precisely in order to render algorithms compliant with the principle of legality.
This chapter has explored how algorithmic bureaucracy proves to be a valuable form of rationality insofar as it does not totally eliminate human intermediation in the form of imputability, responsibility, and control.Footnote 58 To be sure, this may happen only under certain conditions, which can be summarized as follows: (1) Technological neutrality for law production cannot be a space ‘where legal determinations are de-activated’Footnote 59 in such a way that externalizes control. (2) Law production by data is not compatible with Weberian legal rationality. (3) The translation of technical rules into legal rules needs to be filtered through legal culture. (4) Data production by law is the big challenge of algorithmic rationality. (5) Algorithmic disconnection from territory cannot be replaced by algorithmic global surveillance. (6) Legal design of algorithmic functioning is not an exhaustive solution. (7) The linkage of automated decision-making to the principle of good administration is a promising trajectory, along which concepts such as traceability, knowability, accessibility, readability, imputability, responsibility, and non-exclusivity of the automated decision have been developed in the public interest.
All these conditions underlie a regulatory idea that draws the role of lawyers from what Max Weber defined as die geistige Arbeit als Beruf (intellectual work as a vocation). In this respect, algorithmic rationality may be compatible with a legal creative activity as long as a society is well equipped with good lawyers.Footnote 60 The transformation of law production by data into data production by law is a complex challenge that lawyers can drive if they do not give up being humanists in order to be only specialized experts.Footnote 61 From this perspective, algorithmic bureaucratic power has a good chance of becoming an ‘intelligent humanism’.Footnote 62 To accomplish this task, the law should re-appropriate its own instruments of knowledge production. This does not mean developing a simplistic categorization of legal compliance requirements for machine-learning techniques. Nor does it rely only on the formal application of legal rationality to the algorithmic process. In the long run, it should lead towards increasing forms of data production of law. Data production of law defines the capability of the law to pick and choose those data that are relevant for elaborating new forms of legal culture. How the law autonomously creates knowledge from experiences that impact society is a reflexive process that needs institutions as well as individuals. The more this process is enshrined in a composite legal culture, the greater the law’s chances of recentring its own role in the development of democratic societies.
6.1 Introduction
Artificial intelligence (AI) constitutes a major form of scientific and technological progress. For the first time in human history, it is possible to create autonomous systems capable of performing complex tasks, such as processing large quantities of information, calculating and predicting, learning and adapting responses to changing situations, and recognizing and classifying objects.Footnote 1 For instance, algorithms, or so-called Algorithmic Decision Systems (ADS),Footnote 2 are increasingly involved in systems used to support decision-making in many fields,Footnote 3 such as child welfare, criminal justice, school assignment, teacher evaluation, fire risk assessment, homelessness prioritization, Medicaid benefit, immigration decision systems or risk assessment, and predictive policing, among other things.
An Automated Decision(-making/-support) System (ADS) is a system that uses automated reasoning to facilitate or replace a decision-making process that would otherwise be performed by humans.Footnote 4 These systems rely on the analysis of large amounts of data from which they derive useful information to make decisions and to inferFootnote 5 correlations,Footnote 6 with or without artificial intelligence techniques.Footnote 7
Law enforcement agencies are increasingly using algorithmic predictive policing systems to forecast criminal activity and allocate police resources. For instance, New York, Chicago, and Los Angeles use predictive policing systems built by private actors, such as PredPol, Palantir, and Hunchlab,Footnote 8 to assess crime risk and forecast its occurrence, in hope of mitigating it. More often, such systems predict the places where crimes are most likely to happen in a given time window (place-based) based on input data, such as the location and timing of previously reported crimes.Footnote 9 Other systems analyze who will be involved in a crime as either victim or perpetrator (person-based). Predictions can focus on variables such as places, people, groups, or incidents. The goal is also to better deploy officers in a time of declining budgets and staffing.Footnote 10 Such tools are mainly used in the United States, but European police forces have expressed an interest in using them to protect the largest cities.Footnote 11 Predictive policing systems and pilot projects have already been deployed,Footnote 12 such as PredPol, used by the Kent Police in the United Kingdom.
However, these predictive systems challenge fundamental rights and guarantees of the criminal procedure (Section 6.2). I will address these issues by taking into account the enactment of ethical norms to reinforce constitutional rights (Section 6.3),Footnote 13 as well as the use of a practical tool, namely Algorithmic Impact Assessment, to mitigate the risks of such systems (Section 6.4).
6.2 Human Rights Challenged by Predictive Policing Systems
In proactive policing, law enforcement uses data and analyzes patterns to understand the nature of a problem. Officers attempt to prevent crime and mitigate the risk of future harm. They rely on the power of information, geospatial technologies, and evidence-based intervention models to predict what is likely to happen and where, and then deploy resources accordingly.Footnote 14
6.2.1 Reasons for Predictive Policing in the United States
There are many reasons why predictive policing systems have been specifically deployed in the United States. First, the high level of urban gun violence pushed the police departments of Chicago,Footnote 15 New York, Los Angeles, and Miami, among others, to take preventative action.
Second, it is an opportunity for American tech companies to deploy, within the national territory, products that have previously been developed and put into practice within the framework of international US military operations.
Third, beginning in 2007, within the context of the financial and economic crisis and the ensuing budget cuts in police departments, predictive policing tools have been seen as a way ‘to do more with less’.Footnote 16 Concomitantly, the National Institute of Justice (NIJ), an agency of the US Department of Justice, awarded grants to several police departments to conduct research and trial these new technologies.Footnote 17
Fourth, the emergence of predictive policing tools has been spurred by a crisis of weakened public trust in law enforcement in numerous cities. Police violence, particularly towards young African Americans, has led to research on more ‘objective’ methods to improve the social climate and the conditions of law enforcement. Public outcry against the discrimination risks inherent in traditional methods has come from citizens, from social movements such as ‘Black Lives Matter’, and even in an official capacity from the US Department of Justice (DOJ) investigations into the actions of the Ferguson Police Department after the death of Michael Brown.Footnote 18 Following this incident, the goal was to find new and modern methods that are as unbiased toward African Americans as possible. The unconstitutionality of methods,Footnote 19 such as Stop-and-Frisk in New York and the Terry Stop,Footnote 20 based on the US Supreme Court’s decision in Terry v. Ohio, converged with the rise of new, seemingly perfect technologies. The Fourth Amendment of the US Constitution prohibits ‘unreasonable searches and seizures’, and states that ‘no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized’.
Fifth, privacy laws are less stringent in the United States than in the European Union, due to the sectoral approach to protection within the United States. Such a normative difference can explain why the deployment of predictive policing systems was easier in the United States.
6.2.2 Case Studies: PredPol and Palantir
Multiple methods and tools are available for predicting crime. I propose a closer analysis of two tools offered by the companies PredPol and Palantir.
6.2.2.1 PredPol
PredPol is commercial software offered by the American company PredPol Inc.; it was initially used in tests by the LAPDFootnote 21 and eventually deployed in Chicago and in Kent in the United Kingdom. The tool’s primary purpose is to predict, both accurately and in real time, the locations and times where crimes have the highest risk of occurring.Footnote 22 In other words, this tool identifies risk zones (hotspots) based on the same types of statistical models used in seismology. The input data include city and territorial police archives (reports, ensuing arrests, emergency calls), all applied in order to identify the locations where crimes occur most frequently, so as to ‘predict’ which locations should be prioritized. Here, the target is places, not people. The types of offenses can include robberies, automobile thefts, and thefts in public places. A US patent for the invention of an ‘Event Forecasting System’Footnote 23 was granted on 3 February 2015 by the US Patent and Trademark Office (USPTO). The PredPol company claims that its product assists in improving the allocation of resources in patrol deployment. Finally, the tool also incorporates the position of all patrols in real time, which allows departments not only to know where patrols are located but also to control their positions. Providing information on a variety of mobile devices such as tablets, smartphones, and laptops, in addition to desktop computers, was also a break from previously used methods.
The patent’s claims do not specify the manner in which data are used, calculated, or applied. The explanation provided in the patent is essentially based on the procedures used by the predictive policing system, particularly the organizational method (the three types of data (place, time, offense), the geographic division into cells, the transfer of information by a telecommunications system, the procedure for receiving historical data, access to GPS data, the link with legal information from penal codes, etc.), rather than on any explanation of the technical aspects. The patent focuses more particularly on the various graphic interfaces and features available to users, such as hotspot maps (heatmaps), which display spatio-temporal smoothing models of historical crime data. The patent covers the use of the method as a whole but does not disclose the predictive algorithm. The technical aspects are therefore not subject to ownership rights but are instead covered by trade secrets. Even if PredPol claims to provide transparency about its approach, the focus is on the procedure rather than on the algorithm and mathematical methods used, despite the publication of several articles by the inventors.Footnote 24 Some technical studiesFootnote 25 have been carried out by using publicly available data from cities such as Chicago and applying them to models similar to that of PredPol. However, this tool remains opaque.
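Because the algorithm itself is a trade secret, any reconstruction is necessarily speculative. The following minimal sketch only illustrates the kind of procedure the patent describes – gridding the city, weighting past incidents by recency, and ranking cells as hotspots – and should not be read as PredPol's actual method; all constants and coordinates are invented for illustration.

```python
# Hypothetical sketch of grid-based spatio-temporal smoothing of historical
# crime reports. NOT PredPol's proprietary algorithm (which is secret); it only
# illustrates the organizational procedure the patent describes: divide the city
# into cells, weight past incidents by recency, rank cells as "hotspots".
from collections import defaultdict
from math import exp

CELL_SIZE = 0.005      # grid resolution in degrees (placeholder value)
DECAY_DAYS = 30.0      # how quickly old incidents lose weight (placeholder value)

def cell_of(lat, lon):
    """Map a coordinate to a grid-cell identifier."""
    return (round(lat / CELL_SIZE), round(lon / CELL_SIZE))

def hotspot_scores(incidents, today):
    """incidents: list of (lat, lon, day_index); today: current day index.
    Returns a dict cell -> risk score with exponential temporal decay."""
    scores = defaultdict(float)
    for lat, lon, day in incidents:
        scores[cell_of(lat, lon)] += exp(-(today - day) / DECAY_DAYS)
    return scores

# Invented reported incidents: (latitude, longitude, day of report).
history = [(34.052, -118.243, 1), (34.052, -118.244, 20), (34.060, -118.250, 28)]
ranked = sorted(hotspot_scores(history, today=30).items(), key=lambda kv: -kv[1])
print(ranked[:2])   # the cells a patrol-allocation heatmap would highlight first
```

Even such a toy score makes the critique discussed below visible: by construction, it can only return the cells where incidents were most frequently recorded in the past.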
It is difficult to estimate the value that these forecasts add in comparison to historical hotspot maps. The few published works evaluating this approach do not concern the quality of the forecasting, but the crime statistics. Contrary to PredPol’s claims,Footnote 26 the difference in efficiency is ultimately modest, depending both on the quantity of data available over a given timescale and on the type of offense committed. The studies most often demonstrate that the predicted crimes are concentrated in the historically most criminogenic areas within the city. Consequently, the software teaches nothing to the most experienced police officers who may be using it. While the Kent Police was the first force to introduce ‘predictive policing’ in Europe in 2013, it has been officially recognized that it is difficult to prove whether the system has truly reduced crime. It was finally stopped in 2018Footnote 27 and replaced by a new internal tool, the NDAS (National Data Analytics Solution) project, to reduce costs and achieve higher efficiency. It is likely that a tool developed in one context will not necessarily be relevant in another criminogenic context, as the populations, the geographic configurations of cities, and the organization of criminal groups differ.
Moreover, the software tends to systematically send patrols into neighbourhoods that are considered more criminogenic, which in the United States are mainly inhabited by African American and Latino/a populations.Footnote 28 Historical data certainly show high risk in these neighbourhoods, but most of the data were collected in the age of policies such as the Terry Stop and Stop-and-Frisk, and were biased, discriminatory, and ultimately unconstitutional. The system, however, does not examine or question the trustworthiness of these data. Furthermore, the chosen types of offense, primarily property crime (burglaries, car thefts), are crimes more likely to be committed by the poorest and most vulnerable populations, which frequently comprise the aforementioned minority groups. The results would naturally be different if white-collar crimes were considered. These crimes are excluded from today’s predictive policing due to the difficulties of modelling them and the absence of significant data. The fact that law enforcement wants to prevent certain types of offenses rather than others via the use of automated tools is not socially neutral and discriminates against part of the population. The founders of PredPol and its developers responded to these critiques of bias in several articles published in 2017 and 2018, in which they largely emphasize the auditing of learning data.Footnote 29 High-quality learning data are essential to avoid and reduce bias. But if the data used by PredPol are biased, this demonstrates that society itself is biased as a whole; PredPol simply highlights this fact, without being the point of origin of the discrimination. Consequently, the bias present in the tool is no greater than the bias previously generated by the data collected by police officers on the ground.
6.2.2.2 Palantir
Crime Risk Forecasting is a patent held by the company Palantir Technologies Inc., based in California. The device has been deployed in Los Angeles, New York, and New Orleans, but the contracts are often kept secret.Footnote 30 Crime Risk Forecasting is an ensemble of software and hardware that constitutes an ‘invention’ outlined in a US patent granted on 8 September 2015.Footnote 31 The patent combines several components and features, including a database manager, visualization tools (notably interactive geographic cartography), and criminal forecasts. The goal is to assist police in predicting when and where crime will take place in the future. The forecasts of criminal risk are established within a geographic and temporal grid, for example of 250 square meters during an eight-hour police patrol.
The data include:
Crime history, classified by date, type, location, and more. The forecast can provide either a precise date and time, or a period of time over which risk is uniformly distributed. Similarly, the location can be more or less precise, either by address, GPS coordinates, or geographic zone. The offenses can be, for example, robberies, vehicle thefts (or thefts of belongings from within vehicles), and violence.
Historical information which is not directly connected to crime: weather, presence of patrols within the grid or in proximity, distribution of emergency service personnel.
Custody data indicating individuals who have been apprehended or who are in custody for certain types of crimes. These data can be used to decrease crime risk within a zone or to increase risk after the release of an accused or convicted person.
Complex algorithms can be developed by aggregating methods associating hot-spotting, histograms, criminology models, and learning algorithms. The combination possibilities and the aggregation of multiple models and algorithms, as well as the large numbers of variables, result in a highly complex system, with a considerable number of parameters to estimate and hyperparameters to optimize. The patent does not specify how these parameters are optimized, nor does it define the expected quality of the forecasts. It is difficult to imagine that any police force could actually use this tool regularly, without constant assistance from Palantir. Moreover, one can wonder: what are the risks of possible re-identification of victims from the historical data? What precautions are taken to anonymize and prevent re-identification? How about custody data, which are not only personal data, but are, in principle, only subject to treatment by law enforcement and government criminal justice services? Consequently, the features of these ADS remain opaque while the processed data are also unclear.
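As a purely hypothetical illustration of the kind of aggregation the patent gestures at – several independent per-cell risk estimates combined into a single forecast per grid cell and patrol window – consider the sketch below. The component models, weights, and numbers are invented placeholders, not Palantir's undisclosed method.

```python
# Hypothetical sketch: several per-cell risk estimates (historical hot-spotting,
# a weather adjustment, a custody-release adjustment) combined by fixed weights
# into one forecast per grid cell. Components and weights are placeholders only.
from typing import Callable, Dict, List, Tuple

Cell = Tuple[int, int]

def aggregate_risk(
    cells: List[Cell],
    components: List[Tuple[float, Callable[[Cell], float]]],
) -> Dict[Cell, float]:
    """Weighted sum of component risk estimates for each cell."""
    return {c: sum(w * f(c) for w, f in components) for c in cells}

# Invented component models.
historical_hotspot = lambda c: {(0, 0): 0.8, (0, 1): 0.2}.get(c, 0.0)
weather_adjustment = lambda c: 0.1                          # e.g. uniform uplift on a warm evening
custody_release = lambda c: 0.3 if c == (0, 1) else 0.0     # recent release near cell (0, 1)

forecast = aggregate_risk(
    cells=[(0, 0), (0, 1)],
    components=[(0.6, historical_hotspot), (0.1, weather_adjustment), (0.3, custody_release)],
)
print(forecast)   # {(0, 0): 0.49, (0, 1): 0.22}
```

Even this toy version shows why the real system is hard to audit: the forecast depends as much on which components and weights are chosen, and how their parameters are tuned, as on the data themselves.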
In this context, it would be a mistake to take predictive policing as a panacea to eradicate crime. Many concerns focus on inefficiency, risk of discrimination, as well as lack of transparency.
6.2.3 Fundamental Rights Issues
Algorithms are fallible human creations, and they are embedded with errors and bias, just like human processes. More precisely, an algorithm is not neutral and depends notably on the data used. Many legal scholars have revealed bias and racial discrimination in algorithmic systems,Footnote 32 as well as their opacity.Footnote 33 When algorithmic tools are adopted by governmental agencies without adequate transparency, accountability, and oversight, their use can threaten civil liberties and exacerbate existing issues within government agencies. Most often, the data used to train automated decision-making systems come from the agency’s own databases, and existing bias in an agency’s decisions will be carried over into new systems trained on biased agency data.Footnote 34 For instance, many of the data used by predictive policing systems come from the Stop-and-Frisk program in New York City and the Terry Stop policy. These historical data (‘dirty data’)Footnote 35 create a discriminatory pattern: data from 2004 to 2012 showed that 83 per cent of the stops were of black and Hispanic individuals, while only about 10 per cent were of white individuals. The overrepresentation of black and Hispanic people among those stopped may lead an algorithm to associate typically black and Hispanic traits with stops that lead to crime prevention.Footnote 36 Despite their over-inclusivity, inaccuracy, and disparate impact,Footnote 37 such data continue to be processed.Footnote 38 Consequently, the algorithms will consider African Americans a high-risk population (resulting in a ‘feedback loop’ or a self-fulfilling prophecy),Footnote 39 as greater rates of police inspection lead to a higher rate of reported crimes, thereby reinforcing disproportionate and discriminatory policing practices.Footnote 40 Obviously, these tools may violate human rights protections in the United States, as well as in the European Union, both before and after their deployment.
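The feedback loop can be illustrated with a deliberately simplified simulation (all numbers are hypothetical): two areas with the same underlying offence rate, one of which starts with more recorded incidents because it was historically over-patrolled; allocating patrols in proportion to recorded counts then keeps inflating that area's apparent risk.

```python
# Toy simulation (hypothetical numbers) of the feedback loop described above:
# two areas with the SAME underlying offence rate, but area "A" starts with more
# recorded incidents because it was historically over-patrolled. Allocating
# patrols by recorded counts then keeps inflating A's score.
import random

random.seed(0)
TRUE_RATE = 0.3                 # identical underlying offence rate in both areas
recorded = {"A": 50, "B": 10}   # historical "dirty data": A was over-policed

for week in range(20):
    total = sum(recorded.values())
    patrols = {area: 10 * recorded[area] / total for area in recorded}  # allocate by past counts
    for area, n_patrols in patrols.items():
        # A patrol can only record offences where it is actually present.
        detected = sum(random.random() < TRUE_RATE for _ in range(int(n_patrols)))
        recorded[area] += detected

print(recorded)   # area A ends with far more recorded crime despite equal true rates
```

Greater patrol presence in area A produces more recorded incidents there, which in turn justifies still more patrols: the recorded disparity grows even though the underlying behaviour in the two areas is identical.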
A priori, predictive policing activities can violate the fundamental rights of individuals if certain precautions are not taken. Though predictive policing tools are useful for the prevention of offenses and the management of police forces, they should not be accepted as sufficient grounds for stopping and/or questioning individuals. Several fundamental rights can be violated in the case of abusive, disproportionate, or unjustified use of predictive policing tools: the right to physical and mental integrity (Charter of Fundamental Rights of the European Union, art. 3); the right to liberty and security (CFREU, art. 6); the right to respect for private and family life, home, and communications (CFREU, art. 7); the right to freedom of assembly and of association (CFREU, art. 12); the right to equality before the law (CFREU, art. 20); and the right to non-discrimination (CFREU, art. 21). The risks of infringing on these rights are greater if predictive policing tools target people rather than places. The fact remains that the mere identification of a high-risk zone does not naturally confer more rights on the police, who, in principle, must continue to operate within the framework of crime prevention and the maintenance of order.
In the United States, due process (the Fifth and Fourteenth Amendments)Footnote 41 and equal treatment clauses (the Fourteenth Amendment) could be infringed. Moreover, predictive policing could constitute a breach of privacy or infringe on citizens’ rights to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures without a warrant based on a ‘probable cause’ (the Fourth Amendment). Similar provisions have been enacted in the State Constitutions. Despite the presence of these theoretical precautions, some infringements of fundamental rights have been revealed in practice.Footnote 42
A posteriori, these risks are higher when algorithms are involved in systems used to support decision-making by police departments. Law enforcement may need to account for the conditions of use of these tools on a case-by-case basis when decisions are reached involving individuals. For example, the NYPD was taken to court over its use of the Palantir Gotham tool and its technical features.Footnote 43 The lack of information on the existence and use of predictive tools, on the nature of the data in question, and on the conditions of application of algorithmic results based on automated processing was contested on the basis of a lack of transparency and the resulting impossibility of exercising the defence’s right to due process (the Fifth and Fourteenth Amendments).Footnote 44 Additionally, the media,Footnote 45 academics,Footnote 46 and civil rights defence organizationsFootnote 47 have called out the issues of bias and discrimination within these tools, which violate the Fourteenth Amendment principle of Equal Protection for all citizens under the law. In EU law, the Charter of Fundamental Rights also guarantees the right to an effective remedy and access to a fair trial (CFREU, art. 47), as well as the right to presumption of innocence and the right of defence (CFREU, art. 48). All of these rights can be threatened if the implementation of predictive policing tools is not coupled with sufficient legal and technical requirements.
The necessity of protecting fundamental rights has to be reiterated in the algorithmic society. To achieve this, adapted tools must be deployed to ensure the proper enforcement of fundamental rights. Some ethical principles need to be put in place in order to effectively protect fundamental rights and reinforce them. The goal is not to substitute ethical principles for human rights but to add new ethical considerations focused on the risks generated by ADS. These ethical principles must be accompanied by practical tools that provide designers and users with concrete information regarding what is expected when making or using automated decision-making tools. Algorithmic Impact Assessment (AIA) constitutes an interesting way to provide concrete governance of ADS. I argue that while the European constitutional and ethical framework is theoretically sufficient, other tools must be adopted to guarantee the enforcement of Fundamental Rights and Ethical Principles in practice and to provide a robust framework for putting human rights at the centre.
6.3 Human Rights Reinforced by Ethical Principles to Govern AI
Before considering the enactment of ethical principles to reinforce fundamental rights in the use of ADS, one needs to identify whether effective legal provisions have already been enacted.
6.3.1 Statutory Provisions in European Law
At this time, very few statutory provisions in European law are capable of reinforcing the respect and protection of fundamental rights in the use of ADS. ADS are algorithmic processes which require data in order to perform. Predictive policing systems do not automatically use personal data, but some of them do. In this case, if the personal data processed concern data subjects within the European Union, the General Data Protection Regulation (GDPR) may apply to the private companies involved, while police services are subject to the Data Protection Law Enforcement Directive. The GDPR provides for several rights in favour of the data subject, especially the right to receive ‘meaningful information concerning the logic involved’ (arts. 13–15) and the right ‘not to be subject to a decision based solely on automated processing, including profiling, which produces legal effects concerning one or similarly significantly affects one’ (art. 22),Footnote 48 in addition to a Data Protection Impact Assessment (DPIA) tool (art. 35).Footnote 49
However, these provisions fail to provide adequate protection against violations of human rights. First, several exceptions restrict the impact of these rights. Article 22 paragraph 1 is limited by paragraph 2, according to which the right ‘not to be subject to an automated decision’ is excluded when consent has been given or a contract concluded. This right is also excluded if exceptions have been enacted by the member states.Footnote 50 For instance, French lawFootnote 51 provides an exception in favour of the governmental use of ADS. Consequently, Article 22 is insufficient per se to protect data subjects. Second, ADS can produce biased decisions without processing personal data, especially when a group is targeted in the decision-making process. Even if the GDPR attempts to take into account the profiling of data subjects and decisions that affect groups of people, for instance through collective representation, such provisions are insufficient to prevent group discrimination.Footnote 52 Third, other risks to fundamental rights have to be considered, such as procedural guarantees related to the presumption of innocence and due process. The protection of such rights is not, or at least not directly, within the scope of the GDPR. Personal data protection regulations cannot address all the social and ethical risks associated with ADS. Consequently, such provisions are insufficient, and because other specific statutory provisions have not yet been enacted,Footnote 53 ethical guidelines could be helpful as a first step.Footnote 54
6.3.2 European Ethics Guidelines for Trustworthy AI
In the EU, the Ethics Guidelines for Trustworthy Artificial Intelligence (AI) are a document prepared by the High-Level Expert Group on Artificial Intelligence (AI HLEG). This group was set up by the European Commission in June 2018 as part of the AI strategy announced earlier that year. The AI HLEG presented a first draft of the Guidelines in December 2018. Following further deliberations, the Guidelines were revised and published in April 2019, on the same day as the European Commission Communication on Building Trust in Human-Centric Artificial Intelligence.Footnote 55
The Guidelines are based on the fundamental rights enshrined in the EU Treaties, with reference to dignity, freedoms, equality and solidarity, citizens’ rights, and justice, such as the right to a fair trial and the presumption of innocence. These fundamental rights sit at the top of the hierarchy of norms of many States and international texts. Consequently, they are non-negotiable, let alone optional. However, the concept of ‘fundamental rights’ is integrated with the concept of ‘ethical purpose’ in these Guidelines, which creates a normative confusion.Footnote 56 According to the Expert Group, while fundamental rights legislation is binding, it still does not provide comprehensive legal protection in the use of ADS. Therefore, the AI ethics principles have to be understood both within and beyond these fundamental rights. Consequently, trustworthy AI should be (1) lawful – respecting all applicable laws and regulations; (2) ethical – respecting ethical principles and values; and (3) robust – both from a technical perspective and taking into account its social environment.
The key principles are the principle of respect for human autonomy, the principle of prevention of harm, the principle of fairness, and the principle of explicability.Footnote 57 However, an explanation as to why a model has generated a particular output or decision (and what combination of input factors contributed to that) is not always possible.Footnote 58 These cases are referred to as ‘black box’ algorithms and require special attention. In those circumstances, other explicability measures (e.g., traceability, auditability, and transparent communication on system capabilities) may be required, provided that the system as a whole respects fundamental rights.
In addition to the four principles, the Expert Group established a set of seven key requirements that AI systems should meet in order to be deemed trustworthy: (1) Human Agency and Oversight; (2) Technical Robustness and Safety; (3) Privacy and Data Governance; (4) Transparency; (5) Diversity, Non-Discrimination, and Fairness; (6) Societal and Environmental Well-Being; and (7) Accountability.
Such principles and requirements certainly push us in the right direction, but they are not concrete enough to show ADS designers and users how they can ensure respect for fundamental rights and ethical principles. Returning to predictive policing, the risks to fundamental rights have been identified but not yet addressed. The recognition of ethical principles adapted to ADS is useful for highlighting specific risks, but nothing more. Such principles are insufficient to protect human rights and must be accompanied by practical tools to guarantee their respect on the ground.
6.4 Human Rights Reinforced by Practical Tools to Govern ADS
In order to identify solutions and practical tools, excluding instruments of self-regulation,Footnote 59 the ‘Trustworthy AI Assessment List’ proposed by the Expert Group can first be considered. Aiming to operationalize the ethical principles and requirements, the Guidelines present an assessment list that offers guidance on the practical implementation of each requirement. This assessment list will undergo a piloting process in which all interested stakeholders can participate in order to gather feedback for its improvement. In addition, a forum to exchange best practices for the implementation of Trustworthy AI has been created. However, the goal of the Guidelines and the List is to regulate activities linked with AI technologies via a general approach. Consequently, the measures proposed are broad enough to cover many situations and different applications of AI, such as climate action and sustainable infrastructure, health and well-being, quality education and digital transformation, tracking and scoring individuals, and lethal autonomous weapon systems (LAWS). Since our study concerns predictive policing activities, it is more relevant to consider specific, practical tools that regulate governmental activities and ADS.Footnote 60 In this sense, in February 2019 the Canadian government enacted a Directive on Automated Decision-MakingFootnote 61 and an AIA methodology.Footnote 62 These tools pursue the goal of offering governmental institutions a practical method to comply with fundamental rights, laws, and ethical principles. I argue that these methods are, in theory, relevant for assessing the activity of predictive policing.
6.4.1 Methods: The Canadian Directive on Automated Decision-Making and the Algorithmic Impact Assessment Tool
The Canadian government has announced its intention to make increasing use of artificial intelligence to make, or assist in making, administrative decisions so as to improve the delivery of social and governmental services. It is committed to doing so in a manner compatible with core administrative law principles such as transparency, accountability, legality, and procedural fairness, relying on the Directive and on an AIA. An AIA is a framework to help institutions better understand and reduce the risks associated with ADS and to provide the appropriate governance, oversight, and reporting/audit requirements that best match the type of application being designed. The Canadian AIA is a questionnaire designed to assist the administration in assessing and mitigating the risks associated with deploying an ADS. The AIA also helps identify the impact level of the ADS under the proposed Directive on Automated Decision-Making. The questions focus on the business processes, the data, and the systems used to make decisions.
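To make the mechanism concrete, the following minimal sketch shows how a questionnaire of this kind can translate yes/no answers into an impact level; the questions, weights, and thresholds are hypothetical and greatly simplified, not the official Canadian AIA instrument.

```python
# Minimal sketch of an AIA-style scoring questionnaire (illustrative only):
# the questions, weights, and thresholds are hypothetical, not the official
# Canadian AIA instrument.
from dataclasses import dataclass

@dataclass
class Question:
    text: str
    weight: int  # risk points added when the answer is "yes"

QUESTIONS = [
    Question("Does the system use personal data?", 3),
    Question("Does the system make (rather than merely support) the decision?", 4),
    Question("Are the impacts on affected persons difficult to reverse?", 4),
    Question("Was the model trained on historical enforcement data?", 2),
]

def impact_level(answers: list) -> int:
    """Map yes/no answers to an impact level from I (1) to IV (4)."""
    score = sum(q.weight for q, yes in zip(QUESTIONS, answers) if yes)
    if score <= 2:
        return 1
    if score <= 5:
        return 2
    if score <= 9:
        return 3
    return 4

# A person-focused predictive tool trained on historical data, with hard-to-reverse impacts:
print(impact_level([True, False, True, True]))  # -> 3 (Level III)
```

The point of such a tool is not the arithmetic itself but that the answers, the resulting level, and the obligations attached to that level are documented before the system is put into production.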
The Directive took effect on 1 April 2019, with compliance required no later than 1 April 2020. It applies to any ADS developed or procured after 1 April 2020 and to any system, tool, or statistical model used to recommend or make an administrative decision about a client (the recipient of a service). Consequently, it does not apply in the criminal justice system or in criminal proceedings. The Directive is divided into eleven parts – Purpose, Authorities, Definitions, Objectives and Expected Results, Scope, Requirements, Consequences, Roles and Responsibilities of the Treasury Board of Canada Secretariat, Application, References, and Enquiries – and three appendices concerning Definitions (Appendix A), Impact Assessment Levels (Appendix B), and Impact Level Requirements (Appendix C).
The objective of this Directive is to ensure that ADS are deployed in a manner that reduces risks to Canadians and federal institutions, leading to more efficient, accurate, consistent, and interpretable decisions made pursuant to Canadian law. The expected results of this Directive are as follows:
Decisions made by federal government departments are data-driven, responsible, and comply with procedural fairness and due process requirements.
Impacts of algorithms on administrative decisions are assessed, and negative outcomes are reduced, when encountered.
Data and information on the use of ADS in federal institutions are made available to the public, where appropriate.
Concerning the requirements, the Assistant Deputy Minister responsible for the program using the ADS, or any other person named by the Deputy Head, is responsible for the AIA, transparency, quality assurance, recourse, and reporting. This person must provide affected clients with any applicable recourse options available to challenge the administrative decision and must complete an AIA prior to the production of any ADS. The AIA tool, based on a questionnaire, can be used to assess and mitigate the risks associated with deploying an ADS.
6.4.2 Application of These Methods to Predictive Policing Activities
Though such measures specifically concern the Government of Canada and do not apply to criminal proceedings, I propose to use this method both abroad and more extensively. It can be relevant for any governmental decision-making, especially for predictive policing activities. I will consider the requirements that should be respected by people responsible for predictive policing programs. Those responsible should be appointed to perform their work on the ground, for each predictive tool used. This would be done using a case-by-case approach.
The first step is to assess the impact in consideration of the ‘impact assessment levels’ provided by appendix B of the Canadian Directive.
Appendix B: Impact Assessment Levels

| Level | Description |
|---|---|
| I | The decision will likely have little to no impact on the rights, health or well-being, or economic interests of individuals or communities. Level I decisions will often lead to impacts that are reversible and brief. |
| II | The decision will likely have moderate impacts on the rights, health or well-being, or economic interests of individuals or communities. Level II decisions will often lead to impacts that are likely reversible and short-term. |
| III | The decision will likely have high impacts on the rights, health or well-being, or economic interests of individuals or communities. Level III decisions will often lead to impacts that can be difficult to reverse and are ongoing. |
| IV | The decision will likely have very high impacts on the rights, health or well-being, or economic interests of individuals or communities. Level IV decisions will often lead to impacts that are irreversible and perpetual. |
At least level III would probably be reached for predictive policing activities, given the high impact on the freedoms and rights of individuals and communities highlighted previously.
With levels III and IV in mind, the second step is to identify the corresponding risks and requirements. Appendix C defines these requirements, concerning in particular notice, explanation, and the human-in-the-loop process. The notice requirements focus on greater transparency, which is particularly relevant for addressing the opacity problem of predictive policing systems.
Appendix C: Impact Level Requirements

| Requirement | Level I | Level II | Levels III and IV |
|---|---|---|---|
| Notice | None | Plain language notice posted on the program or service website. | Publish documentation on relevant websites about the automated decision system, in plain language, describing how the system works, whether it makes or supports the decision, and the data used, including the training data. |
These provisions allow one to know whether the algorithmic system makes or supports the decision at levels III and IV. They also inform the public about the data used, including from the start of the training process. This point is particularly relevant given the historical and biased data on which predictive policing systems largely rely. These requirements could help address the discrimination problem.
Moreover, AIAs usually provide a pre-procurement step that gives the public authority the opportunity to engage in a public debate and proactively identify concerns, establish expectations, and draw on expertise and understanding from relevant stakeholders. This is also when the public and elected officials can push back against deployment before potential harms occur. In implementing AIAs, authorities should consider incorporating them into the consultation procedures that they already use for procuring algorithmic systems or for assessing them before acquisition.Footnote 63 This would be one way to tackle the lack of transparency of predictive policing systems, which must be addressed at levels III and IV.
Besides, other requirements concern the ‘explanation’.
| Requirement | Levels I and II | Level III | Level IV |
|---|---|---|---|
| Explanation | In addition to any applicable legislative requirement, ensuring that a meaningful explanation is provided for common decision results. This can include providing the explanation via a Frequently Asked Questions section on a website. | In addition to any applicable legislative requirement, ensuring that a meaningful explanation is provided upon request for any decision that resulted in the denial of a benefit, a service, or other regulatory action. | In addition to any applicable legislative requirement, ensuring that a meaningful explanation is provided with any decision that resulted in the denial of a benefit, a service, or other regulatory action. |
At levels III and IV, each regulatory action that impacts a person or a group requires the provision of a meaningful explanation. Concretely, if these provisions were made applicable to police services, the police departments that use predictive policing tools should be able to explain the decisions made and the reasoning behind them, especially where personal data are used. The reasons why a place or a person is targeted by predictive policing should also be explained.
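As an illustration of what such a ‘meaningful explanation’ could look like in practice, the sketch below attaches per-factor contributions to a risk score, assuming a simple additive scoring model; the feature names and weights are invented for the example and do not correspond to any deployed system.

```python
# Minimal sketch of a 'meaningful explanation' attached to an algorithmic
# recommendation, assuming a simple additive scoring model. Feature names and
# weights are invented for the example.
WEIGHTS = {
    "recent_burglaries_in_area": 0.6,
    "night_time_factor": 0.3,
    "distance_to_last_incident_km": -0.2,
}

def score_and_explain(features: dict) -> tuple:
    # Each factor's contribution is listed so that the person affected by the
    # decision can see which inputs drove the recommendation.
    contributions = {name: WEIGHTS[name] * value for name, value in features.items()}
    score = sum(contributions.values())
    explanation = [
        f"{name}: contributed {value:+.2f} to the risk score"
        for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1]))
    ]
    return score, explanation

score, reasons = score_and_explain(
    {"recent_burglaries_in_area": 4, "night_time_factor": 2, "distance_to_last_incident_km": 1.5}
)
print(f"risk score: {score:.2f}")
print("\n".join(reasons))
```

For genuinely opaque (‘black box’) models such an itemised account is not available, which is precisely why the Guidelines fall back on traceability, auditability, and transparent communication in those cases.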
Concerning the ‘human-in-the-loop for decisions’ requirement, levels III and IV impose human intervention during the decision-making process. This is also relevant for predictive policing activities, which require that police officers retain their own free will and judgment. Moreover, the human decision has to prevail over the machine decision. That is crucial to preserving the legitimacy and autonomy of law enforcement authorities, as well as their responsibility.
| Requirement | Levels I and II | Levels III and IV |
|---|---|---|
| Human-in-the-loop for decisions | Decisions may be rendered without direct human involvement. | Decisions cannot be made without having specific human intervention points during the decision-making process, and the final decision must be made by a human. |
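A minimal sketch of how this requirement could be enforced in a decision pipeline is given below, assuming the impact level has already been assessed (the helper function is hypothetical, not part of the Canadian tool); at levels III and IV the machine output is treated only as a recommendation and the human decision prevails.

```python
# Minimal sketch of a human-in-the-loop gate, assuming the impact level has
# already been assessed (hypothetical helper, not part of the Canadian tool).
# At Levels III and IV the machine output is only a recommendation: a human
# intervention point is mandatory and the human decision prevails.
from typing import Optional

def finalise_decision(impact_level: int, machine_recommendation: str,
                      human_decision: Optional[str] = None) -> str:
    if impact_level <= 2:
        # Levels I-II: decisions may be rendered without direct human involvement.
        return machine_recommendation
    # Levels III-IV: the final decision must be made by a human.
    if human_decision is None:
        raise RuntimeError("Level III/IV decision requires human review before release")
    return human_decision

# A Level III recommendation cannot be released without a human decision;
# the human decision overrides the machine recommendation.
print(finalise_decision(3, "dispatch patrol to sector 12", "no action: insufficient grounds"))
```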
Furthermore, if infringements of human rights are to be prevented, additional requirements on testing, monitoring, and training have to be respected at all levels. Before going into production, the person in charge of the program has to develop appropriate processes to ensure that training data are tested for unintended biases and other factors that may unfairly affect the outcomes. Moreover, this person has to ensure that the data used by the ADS are routinely tested to verify that they remain relevant, accurate, and up to date, and has to monitor the outcomes of the ADS on an ongoing basis to safeguard against unintended outcomes and to ensure compliance with legislation.
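By way of illustration, the following sketch shows one very simple test that could be run routinely on training data before and after production, assuming each record carries an area label; it merely flags large gaps in recorded-incident rates between areas for human review and is in no way a complete bias audit.

```python
# Minimal sketch of a routine check on training data, assuming each record
# carries an area label. It flags large gaps in recorded-incident rates between
# areas for human review; it is not a complete bias audit.
from collections import defaultdict

def incident_rates(records: list) -> dict:
    counts, totals = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["area"]] += 1
        counts[r["area"]] += r["incident_recorded"]
    return {area: counts[area] / totals[area] for area in totals}

def flag_imbalance(records: list, max_ratio: float = 2.0) -> bool:
    """True if one area's recorded rate exceeds another's by more than max_ratio."""
    rates = incident_rates(records)
    return max(rates.values()) > max_ratio * min(rates.values())

training_set = [
    {"area": "north", "incident_recorded": 1},
    {"area": "north", "incident_recorded": 1},
    {"area": "south", "incident_recorded": 1},
    {"area": "south", "incident_recorded": 0},
    {"area": "south", "incident_recorded": 0},
    {"area": "south", "incident_recorded": 0},
]
print(flag_imbalance(training_set))  # True: 'north' is recorded at four times the rate of 'south'
```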
Finally, the ‘training’ requirement for level III concerns documentation on the design and functionality of the system. Training courses must be completed but, unlike at level IV, there is surprisingly no obligation to verify that this has been done.
Taken together, these requirements are relevant to mitigating the risks of opacity and discrimination. However, they do not address the problem of efficiency. Such a criterion should also be considered in the future, as the example of predictive policing reveals a weakness regarding the efficiency and social utility of this kind of algorithmic tool at this stage. It is important not to assume that an ADS is effective as a matter of principle; public authorities should provide evidence of it.
6.5 Conclusion
Human rights are a representation of the fundamental values of a society and are universal. However, in an algorithmic society, even if the European lawmaker claims to reinforce the protection of these rights through ethical principles, I have demonstrated that the current system is not good enough when it comes to guaranteeing their respect in practice. Constitutional rights must be reinforced not only by ethical principles but even more by specific practical tools that take into account the risks involved in ADS, especially when decision-making concerns sensitive issues such as predictive policing. Beyond the Ethics Guidelines for Trustworthy AI, I argue that the European lawmaker should consider enacting tools similar to the Canadian Directive on Automated Decision-Making and AIA policies, which must be made applicable to police services in order to make them accountable.Footnote 64 AIAs will not solve all of the problems that algorithmic systems might raise, but they do provide an important mechanism to inform the public and to engage policymakers and researchers in productive conversation.Footnote 65 Even if this tool is certainly not perfect, it constitutes a good starting point. Moreover, I argue that this policy should come from the European Union and not from its member states. The protection of human rights in an algorithmic society should be conceived globally, as a whole system. The final result would be a robust theoretical and practical framework in which human rights keep a central place.
7.1 Introduction
Technological progress could constitute a huge benefit for law enforcement and criminal justice more broadly.Footnote 1 In the security context,Footnote 2 the alleged opportunities and benefits of applying big data analytics are greater efficiency, effectiveness, and speed of law enforcement operations, as well as more precise risk analyses, including the discovery of unexpected correlations,Footnote 3 which can feed into profiles.Footnote 4
The concept of ‘big data’ refers to the growing ability of technology to capture, aggregate, and process an ever-greater volume and variety of data.Footnote 5 The combination of mass digitisation of information and the exponential growth of computational power allows for their increasing exploitation.Footnote 6
A number of new tools have been developed. An algorithm is merely an abstract and formal description of a computational procedure.Footnote 7 Besides, law enforcement can rely on artificial intelligence (i.e., the theory and development of computer systems capable of performing tasks which would normally require human intelligence), such as visual perception, speech recognition, decision-making, and translation between languages.Footnote 8 For the purpose of this contribution, these systems are relevant because they do not simply imitate the intelligence of human beings; they are meant to formulate and often execute decisions. The notion of an allegedly clever agent, capable of taking relatively autonomous decisions on the basis of its perception of the environment, is, in fact, pivotal to the current concept of artificial intelligence.Footnote 9 With machine learning, or ‘self-teaching’ algorithms, the knowledge in the system is the result of ‘data-driven predictions’: the automated discovery of correlations between variables in a data set, often in order to make estimates of some outcome.Footnote 10 Correlations are relationships or patterns, and are thus more closely related to the concept of ‘suspicion’ than to the concept of ‘evidence’ in criminal law.Footnote 11 Data mining, or ‘knowledge discovery from data’, refers to the process of discovering notable patterns in massive amounts of data.
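A trivial sketch, with invented figures, may help to fix the idea of correlation discovery as distinct from evidence: a strong statistical association between two variables says nothing, by itself, about causation or individual guilt.

```python
# Trivial sketch of correlation discovery, with invented figures: a strong
# statistical association between two variables says nothing, by itself,
# about causation or individual guilt.
def correlation(xs: list, ys: list) -> float:
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical records: hour of day vs. number of reported incidents.
hours     = [1, 2, 3, 18, 19, 20, 21, 22]
incidents = [0, 1, 0,  3,  4,  5,  4,  6]
print(round(correlation(hours, incidents), 2))  # strong positive correlation, not evidence
```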
Such tools entail new scenarios for information gathering, as well as the monitoring, profiling, and prediction of individual behaviours, thus allegedly facilitating crime prevention.Footnote 12 The underlying assumption is that data could change public policy, addressing biases and fostering a data-driven approach in policy-making. Clearer evidence could support both evaluations of existing policies and impact assessments of new proposals.Footnote 13
Law enforcement authorities have already embraced the assumed benefits of big data, irrespective of criticism questioning the validity of crucial assumptions underlying criminal profiling.Footnote 14 In a range of daily operations and surveillance activities, such as patrol, investigation, and crime analysis, the outcomes of computational risk assessment increasingly form the foundation of criminal justice policies.Footnote 15 Existing research on the implications of ‘big data’ has mostly focused on privacy and data protection concerns.Footnote 16 However, potential gains in security also come at the expense of accountabilityFootnote 17 and could lead to the erosion of fundamental rights, emphasising coercive control.Footnote 18
This contribution first addresses the so-called rise of the algorithmic society and the use of automated technologies in criminal justice to assess whether and how the gathering, analysis, and deployment of big data are changing law enforcement activities. It then examines the actual or potential transformation of core principles of criminal law and whether the substance of legal protectionFootnote 19 may be weakened in a ‘data-driven society’.Footnote 20
7.2 The Rise of the Algorithmic Society and the Use of Automated Technologies in Criminal Justice
7.2.1 A Shift in Tools Rather than Strategy?
One could argue that the development of predictive policing is more a shift in tools than strategy. Prediction has always been part of policing, as law enforcement authorities attempt to predict where criminal activities could take place and the individuals involved in order to deter such patterns.Footnote 21
Law enforcement has over time moved towards wide-ranging monitoring and ever more preventative approaches. Surveillance technologies introduced in relation to serious crimes (e.g., interception of telecommunications) are increasingly used for the purpose of preventing and investigating ‘minor’ offences; at the same time, surveillance technologies originally used for public order purposes in relation to minor offences (e.g., CCTV cameras) are gradually employed for the prevention and investigation of serious crime.Footnote 22 On the one side, serious crime, including terrorism, has had a catalysing effect on the criminal justice system, prompting increased use of surveillance techniques and technologies. The provisions subsequently introduced were first regarded as exceptional and limited in scope, initially to terrorism and then to organised crime. However, through a long-lasting normalisation process at the initiative of the legislator, specific measures have become institutionalised as part of the ordinary criminal justice system and tend to be applied beyond their original scope.Footnote 23 On the other side, a parallel shift has occurred in the opposite direction. Video surveillance technologies, one of the most obvious and widespread signs of the development of surveillance, were originally conceived by the private sector for security purposes. They were subsequently employed for public order purposes and finally for the prevention of minor offences and/or petty crimes (such as street crime or small-scale drug dealing), without any significant change in the level of judicial scrutiny and on the basis of a simple administrative authorisation. In such contexts, they were a tool to deter would-be criminals rather than an investigative means.Footnote 24 The terrorist threat has become an argument to justify an even more extensive deployment and use of video surveillance, as well as a broader use of the information gathered for the purposes of investigation.
Anticipative criminal investigations have a primarily preventive function, combined with evidence gathering for the purpose of eventual prosecution.Footnote 25 The extensive gathering, processing, and storage of data for criminal law purposes imply a significant departure from existing law enforcement strategies. Relentless storage combined with amplified memory capacity marks a quantitative and qualitative leap compared to traditional law enforcement activities. The growth of available data over the last two centuries has been substantial, but the present explosion in data size and variety is unprecedented.Footnote 26
First, the amount of data that are generated, processed, and stored has increased enormously (e.g., internet data) because of the direct and intentional seizure of information on people or objects; the automated collection of data by devices or systems; and the volunteered collection of data via the voluntary use of systems, devices, and platforms. Automated and volunteered collection have increased exponentially due to the widespread use of smart devices, social media, and digital transactions.Footnote 27 The ‘datafication’Footnote 28 of everyday activities, further driven by the ‘Internet of Things’,Footnote 29 leads to the virtually unnoticed gathering of data, often without the consent or even the awareness of the individual.
Second, new types of data have become available (e.g., location data). Irrespective of whether law enforcement authorities will eventually use these forms of data, much of the electronically available data reveals information about individuals which was not available in the past, and a vast amount of data on people’s behaviour is now available.Footnote 30 Moreover, because of the combination of digitisation and automated recognition, data have become increasingly accessible, and persons can easily be monitored at a distance.
Third, the growing availability of real-time data fosters real-time analyses. Thus the increased use of predictive data analytics is a major development. Their underlying rationale is the idea of predicting a possible future with a certain degree of probability.
7.2.2 Interoperable Databases: A New Challenge to Legal Protection?
Although police have always gathered information about suspects, data can now be stored in interoperable databases,Footnote 31 furthering the surveillance potential.Footnote 32 The possibility of linking data systems and networks, together with growing processing power and data storage capacity, fosters systematic analysis.
Interoperability challenges existing modes of cooperation and integration in the EU Area of Freedom, Security and Justice (AFSJ), as well as the existing distribution of competences between the EU and Member States, between law enforcement authorities and intelligence services, and between public and private actors, which are increasingly involved in information-management activities. Moreover, large-scale information exchanges via interoperable information systems have progressively eroded the boundaries between law enforcement and intelligence services. Besides, they have facilitated a reshuffling of responsibilities and tasks within the law enforcement community, for instance between security and migration actors. Furthermore, competent authorities have access to huge amounts of data in all types of public and private databases. Interoperable information systems function not only across national boundaries but also across the traditional public-private divide.
If, on the one hand, so-called big data policing partially constitutes a restatement of existing police practices, on the other hand, big data analytics bring along fundamental transformations in police activities. There has also been an evolution in the division of roles, competences, and technological capabilities between intelligence services and law enforcement authorities. The means at the disposal of each actor for the prevention and investigation of serious crime are evolving, so that the division of tasks and competences has become blurred. Nowadays the distinction is not always clear, and this leads to problematic coordination and overlap.Footnote 33 Intelligence services have also been given operational tasks. Law enforcement authorities have resorted to ever more sophisticated surveillance technologies and have been granted much more intrusive investigative powers to use them. Faith in technological solutions and the inherent expansionary tendency of surveillance tools partially explain this phenomenon. Surveillance technologies, in fact, are used in areas or for purposes for which they were not originally intended.Footnote 34
Information sharing and exchange do not in themselves blur the institutional barriers between different law enforcement authorities, but the nature of large-scale information-sharing activities does give a new standing to intelligence activities in the law enforcement domain. The resources spent on, and the knowledge developed by, such large-scale information gathering and analysis are de facto turning police officers into intelligence actors or users of intelligence material.
In addition, EU initiatives enhancing access to information by law enforcement authorities have a direct impact on the functional borders in the security domain. With the much-debated interoperability regulations,Footnote 35 the intention of the Commission has been to improve information exchanges not only between police authorities but also between customs authorities and financial intelligence units and in interactions with the judiciary, public prosecution services, and all other public bodies that participate in a process that ranges from the early detection of security threats and criminal offences to the conviction and punishment of suspects. The Commission has portrayed obstacles to the functional sharing of tasks as follows: ‘Compartmentalization of information and lack of a clear policy on information channels hinder information exchange’,Footnote 36 whereas there is, allegedly, a need to facilitate the free movement of information between competent authorities within Member States and across borders.
In this context, a controversial aspect of interoperability is that systems and processes are linked with information systems that do not serve law enforcement purposes, including other state-held databases and ones held by private actors. With reference to the first category, the issue to address concerns the blurring of tasks between different law enforcement actors. In fact, a key aspect of the EU strategy on databases and their interoperability is the aim to maximise access to personal data, including access by police authorities to immigration databases and to personal data related to identification. This blurring has an impact on the applicable legal regime (in terms of jurisdiction) and also in terms of legal procedure (e.g., administrative/criminal). In fact, the purpose for which data are gathered, processed, and accessed is crucial, not only because of data protection rules but because it links the information/data with a different stage of a procedure (either administrative or criminal) to which a set of guarantees are (or are not) attached, and thus has serious consequences for the rights of individuals (including access, appeal, and correction rights). Neither legal systems nor legal provisions are fully compatible, either because they belong to administrative or criminal law or because of a lack of approximation between Member State systems. Such differences also have an impact on the potential use of information: information used for identification purposes (the focus of customs officers at Frontex), or only for investigation purposes with no need to reach trial (the focus of intelligence actors), or for prosecution purposes (the focus of police authorities). Finally, of course, the actors involved in the process have different impacts on the potential secret use of data, with consequent transparency concerns.Footnote 37
7.2.3 A ‘Public-Private Partnership’
The information society has substantially changed the ways in which law enforcement authorities can obtain information and evidence. Beyond their own specialised databases, competent authorities have access to huge amounts of data in all types of public and private databases.Footnote 38
Nowadays the legal systems of most Western countries thus face significant changes in the politics of information control. The rise of advanced technologies has magnified the capability of new players to control both the means of communication and data flows. To an increasing extent, public authorities are sharing their regulatory competences with an indefinite number of actors by imposing preventive duties on the private sector, such as information gathering and sharing (e.g., on telecommunication companies for data retention purposes).Footnote 39 This trend is leading to a growing privatisation of surveillance practices. In this shift, key players in the private information society (producers, service providers, key consumers) are given law enforcement obligations.
Private actors are not just in charge of the operational enforcement of public authority decisions in security matters. They are often the only ones with the necessary expertise, and therefore they profoundly shape decision-making and policy implementation. Their choices are nevertheless guided by reasons such as commercial interest, and they are often unaccountable.
In the context of information sharing, and particularly in the area of interoperable information systems, technical platform integration (information hubs) functions across national boundaries and across the traditional public–private divide. Most of the web giants are established overseas, so that often private actors – voluntarily or compulsorily – transfer data to third countries. Companies do not just cooperate with public authorities but effectively and actively come to play a part in bulk collection and security practices. They identify, select, search, and interpret suspicious elements by means of ‘data selectors’. Private actors, in this sense, have become ‘security professionals’ in their own right.
Systematic government access to private sector data is carried out not only directly via access to private sector databases and networks but also through the cooperation of third parties, such as financial institutions, mobile phone operators, communication providers, and the companies that maintain the available databases or networks.
Personal data originally circulated in the EU for commercial purposes may be transferred by private intermediaries to public authorities, often also overseas, for other purposes, including detection, investigation, and prosecution. The significant blurring of purposes among the different layers of data-gathering – for instance, commercial profiling techniques and security – aims to exploit the ‘exchange value’ of individuals’ fragmented identities, as consumers, suspects of certain crimes, ‘good citizens’, or ‘others’.
In this context, some have argued that the most important shortcoming of the 2016 data protection reform is that it resulted in the adoption of two different instruments, a Regulation and a Directive.Footnote 40 This separation is a step backwards regarding the objective envisaged by Article 16 TFEU – which instead promotes a cross-sectoral approach potentially leading to a comprehensive instrument embracing different policy areas (including the AFSJ) in the same way. This is a weakness because the level of protection envisaged by the 2016 Police Data Protection Directive is de facto lower than in the Regulation, as data gathering for law enforcement and national security purposes is mostly exempted from general data protection laws or constitutes an exemption under those provisions even at the EU level.Footnote 41 Furthermore, what happens in practice mostly depends on terms and conditions in contractual clauses signed by individuals every time they subscribe as clients of service providers and media companies.
A further element of novelty is thus the linkage of separate databases, which increases their utility, as law enforcement authorities and private companies partially aggregate their data.Footnote 42 Such linkage of criminal justice data with private data potentially provides numerous insights about individuals. Law enforcement and private companies have therefore embraced the idea of networking and sharing personal information, and law enforcement benefits from the growth of private surveillance gathering of information.
The nature and origins of the data available for security purposes are thus further changing. Public and private data are increasingly mixed. Private data-gathering tools play a broader role in security analyses, complementing data from law enforcement authorities’ own sources.Footnote 43 An example is the use of social media analysis tools by the police together with intelligence services (e.g., in counter-terrorism matters). It is often not merely the data itself which is valuable but the fact of linking large amounts of data.
Having examined the use of surveillance technologies for preventive and investigative purposes, it would be interesting to focus on the next phase of criminal procedure – that is, the retention and use of information gathered via surveillance technologies for the prosecution during trials for serious crimes, including terrorism. In fact, a huge amount of information is nowadays retained by private companies such as network and service providers, but also by different CCTV operators. The question is under which circumstances such information can be accessed and used by different actors of criminal procedures (police officers, intelligence services, prosecutors, and judges) for the purposes of investigating and prosecuting serious crimes. The retention of data for investigation and prosecution purposes poses the question of the collaboration between public authorities and private companies and what kind of obligations one may impose upon the latter.
7.3 The Transformation of Core Principles of Criminal Law
7.3.1 Control People to Minimise Risk
Technology is pivotal in the development of regulatory legislation that seeks to control more and more areas of life.Footnote 44
In fact, predictive policing is grounded in, and further supports, a growing social desire to control people in order to minimise risk.Footnote 45 Sociologists such as Ulrich Beck have described the emergence of a ‘risk society’: industrial society produces a number of serious risks and conflicts – including those connected with terrorism and organised crime – and has thus modified the means and legitimisation of state intervention, putting risk and damage control at the centre of society as a response to the erosion of trust among people.Footnote 46
Along similar lines, Feeley and Simon have described a ‘new penology’ paradigm (or ‘actuarial justice’Footnote 47): a risk management strategy for the administration of criminal justice, aiming at securing at the lowest possible cost a dangerous class of individuals whose rehabilitation is deemed futile and impossible.Footnote 48 The focus is on targeting and classifying a suspect group of individuals and making assessments of their likelihood to offend in particular circumstances or when exposed to certain opportunities.
According to David Garland, the economic, technological, and social changes in our society during the past thirty years have reconfigured the response to crime and the sense of criminal justice leading to a ‘culture of control’ counterbalancing the expansion of personal freedom.Footnote 49 In his view, criminal justice policies thus develop from political actors’ desire to ‘do something’ – not necessarily something effective – to assuage public fear, shaped and mobilised as an electoral strategy.
The culture of control together with risk aversion sees technological developments as key enabling factors and is intimately linked to the rise of a surveillance society and the growth of surveillance technologies and infrastructures.
Koops has built upon pre-existing concepts of the culture of control and depicts the current emergence of what he calls a ‘crime society’, which combines risk aversion and surveillance tools with preventative and architectural approaches to crime prevention and investigation.Footnote 50 Technology supports and facilitates the crucial elements at the basis of a crime society, pushing a further shift towards prevention in the fight against crime.
Finally, the prediction of criminal behaviours is supposed to enable law enforcement authorities to reorganise and manage their presence more efficiently and effectively. However, there is very little evidence as to whether police have, in fact, increased efficiency and improved fairness in daily tasks, and the answer seems to depend very much on the type of predictive policing under evaluation.
7.3.2 Would Crime-Related Patterns Question Reasonable Suspicion and the Presumption of Innocence?
The emergence of the ‘data-driven society’Footnote 51 allows for the mining of both content and metadata, allegedly inferring crime-related patterns and thus enabling the pre-emption, prevention, or investigation of offences. In the view of law enforcement authorities and policymakers, by running algorithms on a massive amount of data, it is allegedly possible to predict the occurrence of criminal behaviours.Footnote 52 In fact, data-driven analysis differs from the traditional statistical method because its aim is not merely to test hypotheses but also to find relevant and unexpected correlations and patterns, which may be relevant for public order and security purposes.Footnote 53
For instance, a computer algorithm can be applied to data from past crimes, including crime types and locations, to forecast in which city areas criminal activities are most likely to develop.
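A minimal sketch of such a place-based forecast is given below, assuming incidents are recorded per grid cell and weighted by recency; real systems (near-repeat, epidemic-type models, and the like) are far more elaborate, but the example illustrates how past locations alone generate tomorrow’s ‘hotspot’ list.

```python
# Minimal sketch of place-based forecasting, assuming incidents are recorded
# per grid cell and weighted by recency. Real systems (near-repeat, epidemic-type
# models, etc.) are far more elaborate; this only shows how past locations alone
# generate tomorrow's 'hotspot' list.
from collections import Counter

def forecast_hotspots(incidents: list, today: int, top_n: int = 3) -> list:
    """incidents: dicts with a 'cell' (row, col) and a 'day'; recent events weigh more."""
    scores = Counter()
    for inc in incidents:
        age = today - inc["day"]
        scores[inc["cell"]] += 0.5 ** (age / 7)  # halve the weight every seven days
    return [cell for cell, _ in scores.most_common(top_n)]

past_incidents = [
    {"cell": (2, 3), "day": 28}, {"cell": (2, 3), "day": 29},
    {"cell": (5, 1), "day": 10}, {"cell": (0, 0), "day": 30},
]
print(forecast_hotspots(past_incidents, today=30))  # cells ranked by recency-weighted counts
```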
The underlying assumption of predictive policing is that certain aspects of the physical and social environment encourage acts of wrongdoing. Patterns emerging from the data could allow individuals to be identified predictively as suspects because past actions create suspicions about future criminal involvement. Moreover, there seems to be a belief, grounded in a general faith in predictive accuracy, that automated measures could provide better insight than traditional police practices.
Yet a number of limits are inherent in predictive policing. It can be hard to obtain usable and accurate data to integrate into predictive policing systems.Footnote 54 As a consequence, notwithstanding the perceived objectivity of big data, there is a risk of increased bias in the sampling process. Law enforcement authorities’ focus on a certain ethnic group or neighbourhood can lead to the systematic overrepresentation of those groups and neighbourhoods in data sets, so that the use of a biased sample to train an artificial intelligence system can be misleading. The predictive model may reproduce the same bias which poisoned the original data set.Footnote 55 Artificial intelligence predictions could even amplify biases, thus fostering profiling and discrimination patterns. The same could happen with the linkage of law enforcement databases and private companies’ data, which could increase errors exponentially, as the gathering of data for commercial purposes is surrounded by fewer procedural safeguards, leading to diminished data quality.Footnote 56 Existing data may thus be of limited value for predictive policing, possibly resulting in a technology-led version of racial profiling.
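The feedback loop can be illustrated with a toy simulation, assuming two areas with identical underlying offence rates but an initial difference in recorded counts; because patrols are sent wherever records are highest and patrolled areas generate more records, the initial imbalance is amplified rather than corrected. All figures are hypothetical.

```python
# Toy simulation of the feedback loop: two areas with identical true offence
# rates, but area A starts with more recorded incidents. Patrols go wherever
# records are highest, and patrolled areas record more, so the initial
# imbalance widens instead of being corrected. All numbers are hypothetical.
true_rate = {"A": 0.10, "B": 0.10}   # identical underlying offence rates
recorded = {"A": 12, "B": 8}         # historical records happen to favour A
detection_boost = 2.0                # patrolled areas record twice as much

for period in range(5):
    patrolled = max(recorded, key=recorded.get)            # send patrols to the 'hotspot'
    for area in recorded:
        boost = detection_boost if area == patrolled else 1.0
        recorded[area] += true_rate[area] * 100 * boost     # new records this period
    print(period, patrolled, recorded)
# Area A is patrolled every period and its lead over B keeps growing,
# even though the true rates never differed.
```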
Could big data analyses strengthen social stratification, reproducing and reinforcing the bias that is already present in data sets? Data are often extracted through observations, computations, experiments, and record-keeping. Thus the criteria used for gathering purposes can distort the results of data analyses because of their inherent partiality and selectivity. Over time, the bias may translate into discrimination and unfair treatment of particular ethnic or societal groups. Linked data sets and the combined results of big data analyses may then well feed on each other.
Datafication and the interconnection of computing systems which ground hyper-connectivity are transforming the concept of law, further interlinking it with other disciplines.Footnote 57 Moreover, the regulatory framework surrounding the use of big data analytics is underdeveloped compared with criminal law. Under extreme circumstances, big data analysis could unfortunately lead to judging individuals on the basis of correlations and inferences about what they might do, rather than what they actually have done.Footnote 58 The gathering, analysis, and deployment of big data are transforming not only law enforcement activities but also core principles of criminal law, such as reasonable suspicion and the presumption of innocence.
A reasonable suspicion of guilt is a precondition for processing information, which would eventually be used as evidence in court. Reasonable suspicion is, however, not relevant in big data analytics. Instead, in a ‘data-driven surveillance society’, criminal intent is somehow pre-empted, and this could, at least to a certain extent, erode the preconditions of criminal law in a constitutional democracy – especially when there is little transparency with reference to profiles inferred and matched with subjects’ data.Footnote 59
Such a major change goes even beyond the notorious ‘shift towards prevention’ in the fight against crime witnessed during the last decades.Footnote 60 First, the boundaries of what constitutes dangerous behaviour are highly contentious, and problems arise with the assessment of future harm.Footnote 61 Second, ‘suspicion’ has in most cases replaced an objective ‘reasonable belief’ as the justification for police intervention at an early stage, without the need to envisage evidence-gathering with a view to prosecution.Footnote 62 Traditionally, ‘reasonable grounds for suspicion’ depend on the circumstances in each case. There must be an objective basis for that suspicion based on facts, evidence, and/or intelligence which are relevant to the likelihood of finding an article of a certain kind. Reasonable suspicion should never be supported on the basis of personal factors. It must rely on intelligence or information about an individual or his/her particular behaviour. Facts on which suspicion is based must be specific, articulated, and objective. Suspicion must be related to a criminal activity and not simply to a supposed criminal or group of criminals.Footnote 63 The mere description of a suspect, his/her physical appearance, or the fact that the person is known to have a previous conviction cannot, alone or in combination, become grounds for searching such an individual. In its traditional conception, reasonable suspicion cannot be based on generalisations or stereotypical images of certain groups or categories of people as more likely to be involved in criminal activity. This has, at least partially, changed.
By virtue of the presumption of innocence, the burden of proof in criminal proceedings rests on the prosecutor and demands serious evidence, beyond reasonable doubt, that a criminal activity has been committed. Such a presumption presupposes that a person is innocent until proven guilty. By contrast, data-driven approaches push law enforcement in the opposite direction. The presumption of innocence comes along with the notion of equality of arms in criminal proceedings, as well as the safeguard of privacy against unwarranted investigative techniques, and with the right to non-discrimination as a way to protect individuals against prejudice and unfair bias.
Do algorithms in their current state amount to ‘risk forecasting’ rather than actual crime prediction?Footnote 64 Identifying the future location of criminal activities may be possible by studying where and why past patterns have developed over time. Forecasting the precise identity of future criminals, however, is far less evident.
If suspicion based on correlation, instead of evidence, could successfully lead to the identification of areas where crime is likely to be committed (on the basis of property and place-based predictive policing), it might be insufficient to point at the individual who is likely to commit such crime (on the basis of person-focused technology).Footnote 65
7.3.3 Preventive Justice
Predictive policing could be seen as a feature of preventive justice. Policy-making and crime-fighting strategies are increasingly concerned with the prediction and prevention of future risks (in order, at least, to minimise their consequences) rather than the prosecution of past offences.Footnote 66 Zedner describes a shift towards a society ‘in which the possibility of forestalling risks competes with and even takes precedence over responding to wrongs done’,Footnote 67 and where ‘the post-crime orientation of criminal justice is increasingly overshadowed by the pre-crime logic of security’.Footnote 68 Pre-crime is characterised by ‘calculation, risk and uncertainty, surveillance, precaution, prudentialism, moral hazard, prevention and, arching over all of these, there is the pursuit of security’.Footnote 69 An analogy has been drawn with the precautionary principle developed in environmental law in relation to the duties of public authorities in a context of scientific uncertainty, which cannot be accepted as an excuse for inaction where there is a threat of serious harm.Footnote 70
Although such trends certainly existed prior to September 11, the counter-terrorism legislation enacted since then has expanded all of them towards anticipating risks. The aim of current counter-terrorism measures is mostly the preventive identification, isolation, and control of individuals and groups who are regarded as dangerous and purportedly represent a threat to society.Footnote 71 The risk in terms of mass casualties resulting from a terrorist attack is thought to be so high that traditional due process safeguards are deemed unreasonable or unaffordable and prevention becomes a political imperative.Footnote 72
Current developments, combined with preventive justice, lead to so-called predictive reasonable suspicion. In a model of preventive justice, and specifically in the context of speculative security,Footnote 73 individuals are targets of public authorities’ measures; information is gathered irrespective of whether and how it could be used to charge the suspect with a criminal offence or be used in criminal proceedings and eventually at trial.
Law enforcement authorities can thus act not only in the absence of harm but even in the absence of suspicion. There is thus a grey area in the safeguarding of the rights of individuals who do not yet fall into an existing criminal law category but are already subject to a measure which could lead to criminal law-like consequences. At the same time, individual rights (e.g., within the realm of private or administrative law) are not fully actionable or enforceable unless a breach has been committed. However, in order for information to become evidence in court, its gathering, sharing, and processing should respect criminal procedure standards. This is often at odds with the use of technologies in predictive policing.
7.4 Concluding Remarks
Law enforcement authorities and intelligence services have already embraced the assumed benefits of big data analyses. It is as yet difficult to assess how and to what extent big data are applied in the field of security, let alone whether their use fosters efficiency or effectiveness. This is also because of the secrecy that often surrounds law enforcement operations, the experimental nature of new means, and authorities’ understandable reluctance to disclose their functioning to public opinion. ‘Algorithms are increasingly used in criminal proceedings for evidentiary purposes and for supporting decision-making. In a worrying trend, these tools are still concealed in secrecy and opacity preventing the possibility to understand how their specific output has been generated’,Footnote 74 argues Palmiotto, addressing the Exodus case,Footnote 75 while questioning whether opacity represents a threat to fair trial rights.
However, there is still a great need for an in-depth debate about the appropriateness of using algorithms and machine-learning techniques in law enforcement, and more broadly in criminal justice. In particular, there is a need to assess how the substance of legal protection may be weakened by the use of tools such as algorithms and artificial intelligence.Footnote 76
Moreover, given that big data, automation, and artificial intelligence remain largely under-regulated, the extent to which data-driven surveillance societies could erode core criminal law principles such as reasonable suspicion and the presumption of innocence ultimately depends on the design of the surveillance infrastructures. There is thus a need to develop a regulatory framework adding new layers of protection to fundamental rights and safeguards against their erroneous use.
There are some improvements which could be made to increase the procedural fairness of these tools. First, more transparent algorithms could increase their trustworthiness. Second, if designed to remove pre-existing biases in the original data sets, algorithms could also improve their neutrality. Third, when algorithms are in use, profiling and (semi-)automated decision-making should be regulated more tightly.Footnote 77
Most importantly, the ultimate decision should always be human. Careful implementation by the humans involved in the process could certainly mitigate the vulnerabilities of automated systems. It must remain for a human decision maker or law enforcement authority to decide how to act on any computationally suggested result.
For instance, correlation must not be erroneously interpreted as a causal link, so that ‘suspicion’ is not confused with ‘evidence’. Predictions made by big data analysis must never be sufficient for the purpose of initiating a criminal investigation.
Trust in algorithms, in both fully and partially automated decision processes, is grounded in their supposed infallibility. There is a tendency among law enforcement authorities (as has been the case with the use of experts in criminal casesFootnote 78) to follow them blindly. Rubberstamping algorithms’ advice could also become a way to minimise the responsibility of the decision maker.
Algorithm-based decisions require time, context, and skills to be adequate in each individual case. Yet, given the complexity of algorithms, judges and law enforcement authorities can at times hardly understand the underlying calculus, and it is thus difficult to question their accuracy, effectiveness, or fairness. This is linked with the transparency paradox surrounding the use of big data:Footnote 79 citizens become increasingly transparent to government, while the profiles, algorithms, and methods used by government organisations are hardly transparent or comprehensible to citizens.Footnote 80 This results in a shift in the balance of power between state and citizen, in favour of the state.Footnote 81