1. Introduction
The use of border management technologies driven by artificial intelligence (AI) is proliferating in the European Union (EU).Footnote 1 Underpinned by the recent turn to security, AI systems such as algorithmic decision-making and decision supportFootnote 2 and surveillance and forecasting tools such as drones,Footnote 3 facial recognition and emotion recognition systemsFootnote 4 are being tested and deployed in all aspects of border and migration management. While the digitalization of migration and border management is not in itself a new phenomenon,Footnote 5 being seen for example in tools to assist with the identification of travellers, the increasing use of AI shows a move away from mere automation and digitalization to the development of ‘smart’ digital border control, where border control is determined by data-driven systems.Footnote 6
There is a sizeable body of scholarship that examines the use of AI within border and migration contexts and its impact upon human rights.Footnote 7 This scholarship typically focuses on the threat that AI systems represent for the rights to privacy, non-discrimination and data protection.Footnote 8 This article goes further than noting tangible infringements of these discrete rights, examining the unseen impacts on human rights: not only the potential for the subtle erosion of other key rights, such as the freedom of thought, but also the unravelling of the conceptual and normative logic of the human rights framework.
The article examines three distinct, but related, questions concerning the use of AI in border management. First, it examines the potential impact of the use of AI on the freedom of thought. While technological interventions that pertain to the body (e.g. body scanners, surveillance cameras) are already commonplace, the continued push towards security seeks to introduce AI technology which has the potential to impact the human mind. Such technologies can attribute intentionality and criminality to individuals and have the potential to blur the boundary between mens rea and actus reus, i.e. between criminal intent and criminal behaviour. This potential undermines individuals’ autonomy and freedom to think without undue scrutiny or external judgment, thus impacting the right to freedom of thought. Whilst there is literature examining the human rights and rule of law impacts of specific AI technologies used within border and migration managementFootnote 9 as well as literature highlighting the problematic human rights impacts of border management technologies in general,Footnote 10 neither the potential impact of AI on the freedom of thought nor the role of the turn to securitization in facilitating the deployment of such systems in the border context has yet been analysed in detail, a gap which this article seeks to fill.
Second, beyond the risk to discrete rights, the use of AI in the context of borders and migration also presents a conceptual challenge, namely the disempowerment of the individual. Border and migration control is a sector defined by the vulnerability of individuals, increasing the potential for such technology to impact fundamental rights detrimentally.Footnote 11 As the individual is the focus of human rights protection, the disempowerment of individuals in this sector risks hollowing out the protection of the human rights framework from within.
Third, the increased deployment of AI-driven border management technologies risks exacerbating the inequality already present in human rights protection, in effect supporting a two-tier model of rights protection that offers a lesser degree of protection to refugees and migrants than to (EU) citizens, thus challenging the very foundational principle of human rights, namely human dignity.
In examining questions of freedom of thought, individual disempowerment and human dignity, the article aims to broaden the discourse surrounding the impact of AI, recommending a more holistic understanding that transcends conventional forms of analysis. The article thus expands the usual scope of human rights analysis in relation to technology in border control, reasoning that it is not only discrete rights that are being undermined but that the very foundational purpose of human rights protection is being unravelled and undermined.
This article approaches the analysis from the standpoint of the impact of AI border and migration management technologies and the inadequacy of existing human rights critiques. As such, even though specific AI technologies will be analysed as examples in the forthcoming sections, the technologies themselves are not the central locus of examination.
Some further caveats are necessary. First, not all AI-driven systems analysed in the article are currently in deployment; some are only at the testing stage. This includes controversial technologies such as emotion recognition systems, which, at the time of writing, have yet to be deployed.Footnote 12 Nonetheless, the article observes that the general trajectory of AI systems in border and migration management is one of placing increasing trust in the objectivity, speed and scale of AI in managing the increasingly complex nature of human migration, juxtaposed against the securitized lens of reinforcing and protecting borders. Second, although border and migration management technologies are in widespread use across different jurisdictions,Footnote 13 this article focuses primarily on the EU, although other examples are given for comparison purposes. Third, a holistic human rights analysis goes beyond jurisdiction-specific human rights frameworks and provisions. Although the European Convention on Human Rights (ECHR) is used as a point of departure, the analysis extends beyond the European context and questions the broader normative aims of human rights as a concept.
The article is structured as follows. Section 2 provides an introductory overview of AI systems and their increasing use within the border and migration context in the EU. Section 3 identifies and clarifies the conceptual and normative challenges for human rights presented by the use of AI within border and migration management, examining questions of freedom of thought, individual disempowerment and human dignity. The final section concludes, with a call for policymakers to take a more expansive view of the human rights impacts of the use of AI within the border and migration context in order to respect and protect human dignity.
2. AI systems in the EU's securitized border and migration context
The promise of AI as a general-purpose technology that is able to discern patterns from large datasets to aid in decision-making, recommendations and predictions has permeated the public and private sectors alike.Footnote 14 Similarly, AI is also seeing increased uptake and deployment in the border and migration context.Footnote 15 Before unpacking how AI systems in border and migration management challenge human rights, it is necessary to understand what is meant by AI. However, the concept of AI is complex and difficult to define, the term often seemingly acting as a ‘shorthand for what are deemed to be “new” or “emerging”’Footnote 16 technologies.
This article adopts the conception of AI systems as computational technologies that, for a given set of objectives, are able to produce decisions, recommendations, content and predictions that interact with or affect physical or virtual environments. This definition is close to those adopted by the Organisation for Economic Co-operation and Development (OECD)Footnote 17 and the EU, as it appears in its Artificial Intelligence Act of 2024 (AI Act).Footnote 18 AI systems, unlike their human counterparts, are able to treat ‘like cases alike’, at a scale and speed that far exceeds even earlier automated technologies.Footnote 19 However, the promise of AI has been tempered by the fact that such systems have been demonstrated to be biased, especially against minority and vulnerable groups.Footnote 20 This can be caused by a variety of factors, including a lack of diverse data representation, a lack of testing and a lack of diversity within the design process or organization itself.Footnote 21 Moreover, AI systems based on machine learning have been criticized as being ‘black boxes’,Footnote 22 meaning there is a lack of transparency in decision-making, as it is impossible to look beneath the surface to reveal the processes and rationale of a particular recommendation or decision. In this way, the outputs of an AI system can be incomprehensible to the affected person, as they lack a clear statement of reasoning, which is a necessary precondition for contesting results or decisions.Footnote 23 Thus it is clear that AI systems can potentially be discriminatory, affect access to social and economic rights and infringe the right to an effective remedy, among other potentially detrimental human rights impacts.Footnote 24 The deployment of AI within the border and migration context raises similar issues, namely the invasion of privacy through surveillance, and concerns of bias and disproportionate impacts on vulnerable and marginalized groups. However, as the rest of this article will argue, the potential harm to human rights extends beyond these well-traversed concerns.
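The ‘black box’ point can be made concrete with a deliberately simplified sketch of a trained scoring model: the affected person receives only a bare number, while the learned weights that drive it remain hidden. All feature names, weights and values below are hypothetical illustrations and are not drawn from any actual border system.

```python
# Minimal sketch of an opaque risk scorer: the affected person sees only
# the final score, not the learned weights or the features that drove it.
# All feature names and weights are hypothetical illustrations.
import math

LEARNED_WEIGHTS = {            # produced by training; never published
    "prior_visa_refusal": 1.8,
    "route_pattern_match": 0.9,
    "document_age_years": -0.2,
}

def risk_score(features: dict) -> float:
    """Logistic score in [0, 1]; no reasons are returned to the traveller."""
    z = sum(LEARNED_WEIGHTS[k] * v for k, v in features.items())
    return 1 / (1 + math.exp(-z))

traveller = {"prior_visa_refusal": 1, "route_pattern_match": 1, "document_age_years": 4}
print(f"risk = {risk_score(traveller):.2f}")  # a bare number, with nothing to contest
```

Even in this toy form, the asymmetry is visible: the deployer holds the weights, while the person scored receives only the output, which is precisely the condition that makes contesting a decision so difficult.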
Within the EU border and migration context, various AI systems are already being used, have been tested or will be deployed in the future. Forecasting tools have long been used in border control and involve an array of different technologies, some of which rely (increasingly) on AI, to enable the ‘forecasting and assessing [of] the direction and intensity of irregular migratory flows’, thereby supporting short-term or medium-term planning in managing migration flows.Footnote 25 Risk assessment predictive tools use AI to aggregate and detect patterns in data to identify and flag persons of interest to the border and migration authorities.Footnote 26 Facial recognition systems identify or verify people based upon biometrics, a ‘numerical representation of a biographic feature of an individual’ gleaned from the face, fingerprints or voice.Footnote 27 Facial recognition systems have long been commonplace, having been possible before the advent of AI using simple image processing techniques for pattern matching, but their effectiveness and scalability have dramatically increased since the incorporation of AI. Their more sophisticated relation, emotion recognition, has only been possible with the development of AI. Emotion recognition systems use different methods such as ‘the analysis of facial expressions, physiological measuring, analyzing voice, monitoring body movements, and eye tracking’ to detect and infer emotions and intentions,Footnote 28 and have, in the EU, only been tested.Footnote 29
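The mechanics of biometric verification can be illustrated schematically: a face image is reduced to a numeric vector (an ‘embedding’) and compared against a stored template, with a threshold deciding whether the two match. The sketch below uses invented embedding values and an assumed threshold; real systems use high-dimensional learned embeddings, but the underlying logic is the same.

```python
# Minimal sketch of biometric verification: a live capture is compared
# against an enrolled template by vector similarity. Vectors and threshold
# are illustrative, not drawn from any real system.
import math

def cosine_similarity(a: list, b: list) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

stored_template = [0.12, -0.48, 0.33, 0.80]   # enrolled biometric embedding
live_capture    = [0.10, -0.45, 0.36, 0.78]   # embedding from the camera

MATCH_THRESHOLD = 0.95  # a policy choice: where this line sits is consequential
is_match = cosine_similarity(stored_template, live_capture) >= MATCH_THRESHOLD
print("identity verified" if is_match else "flag for manual check")
```

Note that the threshold itself is a design decision: moving it trades false matches against false rejections, and that trade-off is made by the deployer, not the person being scanned.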
Border and migration management in the EU can be contextualized within the framework of the Single Market. The freedom of movement across internal EU borders is a key principle and one of the four freedoms that drive the European Single Market. However, the absence of internal borders within the EU necessitates the strengthening of its external borders.Footnote 30
The earliest framework was the Schengen Information System (SIS), introduced in 1995, which alerted the authorities to suspect travellers such as wanted persons or those with prior visa refusals.Footnote 31 The SIS framework was extended in 2013 and updated in 2018. The latter update significantly expanded its remit to better address counter-terrorism and irregular migration, including through the use of biometric data and an expansion of the categories triggering alerts.Footnote 32 While it is beyond the scope of this article to examine in detail the intricacies of every border and migration management system, other key ‘large-scale information technology (IT) systems’, as SIS and its counterparts are known, should be mentioned. Eurodac is another large-scale IT system, tasked with managing the storage and processing of the digitalized fingerprints of those seeking asylum in the EU. Intended to begin operation in 2025, the European Travel Information and Authorization System (ETIAS) is a system authorizing entry for visa-exempt third-country nationals to 30 European countries.Footnote 33 Although primarily aimed at enabling visa-free short-term travel to the EU, the same system will also be used prior to a traveller's arrival to assess whether they ‘pose a security, irregular migration or high epidemic risk’Footnote 34 to the EU. Other key systems within the European border and security architecture include the Visa Information System (VIS), a consolidated system that enables the exchange of visa data by linking the central system to national systems;Footnote 35 the Entry/Exit System (EES), which registers third-country travellers when crossing external EU borders;Footnote 36 and, finally, the European Criminal Record Information System for Third-Country Nationals (ECRIS-TCN), a centralized system that enables Member State authorities to check the criminal records of third-country nationals or Stateless persons.Footnote 37 In other words, successive measures have expanded the legal framework for border management, and ever-more complex technologies have been implemented to increase the sophistication of border management systems.
Beyond its widened application, AI is also seeing a deepened reach through the interoperability of these systems, enabling information flows across systems to manage EU borders effectively. In June 2019, Regulations (EU) 2019/817 and (EU) 2019/818 entered into force and established the interoperability of all six of the systems noted above, both those already in operation (SIS, VIS and Eurodac) and those yet to be implemented (EES, ETIAS and ECRIS-TCN). This merges these previously separate systems into ‘one single, overarching EU information system’,Footnote 38 creating an unprecedented information behemoth at the service of the border and migration authorities. This was highlighted in the 2021 Schengen Strategy, in which the European Commission envisioned ‘one of the world's most technologically advanced border management systems’, facilitated through increasing use of AI for the purposes of law enforcement.Footnote 39 It has been argued, however, that such interoperability poses human rights concerns, as it challenges the principles of necessity and proportionality.Footnote 40 The deepened reach enabled through AI also extends to distance. When data from AI systems is combined, the ‘visualising, registering, mapping, monitoring and profiling [of] mobile (sub)populations’ is facilitated.Footnote 41 This ecology of technological tools, processes and systems in turn constitutes the increasingly digitalized or ‘virtual’ border, in which border control can take place away from physical borders,Footnote 42 meaning that individuals may be treated as subjects of interest even while far from the physical borders of the destination country. Eklund argues that border controls increasingly rely on ‘automated, anticipatory and intelligence-based risk management tools which work more like a technological data-driven filter’.Footnote 43
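The structural effect of interoperability can be illustrated with a simplified sketch: a single query fans out across formerly siloed databases and aggregates any ‘hits’ into one consolidated view. The records, identifier and database names below are invented stand-ins for the far larger real systems.

```python
# Minimal sketch of interoperable lookup: one query spans previously
# separate databases and merges the results. All records are invented.

SIS_ALERTS  = {"ID-4711": "alert: prior entry ban"}
VIS_RECORDS = {"ID-4711": "visa refused 2022"}
EURODAC     = {}  # no asylum fingerprint record for this person

def cross_system_lookup(person_id: str) -> list:
    """Simulates a single search across formerly siloed systems."""
    hits = []
    for name, db in [("SIS", SIS_ALERTS), ("VIS", VIS_RECORDS), ("Eurodac", EURODAC)]:
        if person_id in db:
            hits.append(f"{name}: {db[person_id]}")
    return hits

print(cross_system_lookup("ID-4711"))
# ['SIS: alert: prior entry ban', 'VIS: visa refused 2022']
```

Once lookups work this way, the proportionality question shifts: data collected for one purpose (say, visa processing) becomes retrievable for another (say, policing), which is precisely the concern raised about necessity and proportionality above.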
The expansion of these various systems can be contextualized through the turn to securitization, which has become the main paradigm for addressing major problems in society, to the point that ‘migration, asylum, terrorism and drug traffic [have all] been handled through the exclusive lens of security’.Footnote 44 In this vein, crime prevention has undergone a significant transformation since the attacks of 11 September 2001 (9/11), as terrorism legislation has shifted its focus onto preparatory activities.Footnote 45 Anti-terrorist pre-crime measures (ATPCMs) have also become the norm in many democratic States such as France, Canada and the United Kingdom (UK).Footnote 46 This turn to prevention has occurred hand in hand with advances in technology. Passenger Pre-Screening Systems, closed-circuit television (CCTV), sensors, Global Positioning System (GPS) tracking, facial recognition devices and other technologies have all become part of the new apparatus used to screen for and detect potentially ill-intentioned individuals.Footnote 47
This turn has similarly gained traction within EU border and migration management, as forecasting tools are increasingly deployed for interdiction and pushbacks. AI-driven decision-making tools are used to gauge and profile suspicious persons and detect undesirable characteristics in the guise of ‘risk assessment’. Facial and emotion recognition is being used ostensibly to ‘read’ behaviour and, in the most concerning instances, to impute qualities of distrust and criminality upon persons.Footnote 48 The EU Parliamentary Research report on the use of AI in border and migration has noted the increasing ‘“securitisation of identity” and surveillance culture of the last two decades’.Footnote 49 Thus, the management of security risks, now enabled through the measurability afforded by datafication and AI, can be read as the continuation of a securitization trajectory that began with 9/11.Footnote 50 Further, the securitization lens means that the definition of a security threat varies, as it largely depends on the subjective judgments of States regarding what they consider a threat, thereby allowing the promotion of ‘highly securitised agendas’.Footnote 51
The next section examines how the deployment and testing of AI in the border and migration context poses novel challenges for human rights within the three critical frameworks outlined above.
3. The impacts of AI on human rights
3.1. The impact of AI on freedom of thought
To date, the mind remains largely terra incognita for the law. Although freedom of thought is a fundamental right enshrined in all major human rights texts, when it comes to defining what is actually meant by ‘thought’, much confusion persists.Footnote 52 As a direct corollary, the jurisprudence on the protection of the so-called forum internum, that is, the inner part of one's intellect, remains vague.Footnote 53 With the advance of medical technologies, specifically those used in the field of neuroscience, there has been a renewed interest in the meaning and scope of this right.Footnote 54 The use of new AI technologies that can detect facial expressions, biometric measurements and even human emotions directly calls into question the relevance of the mind. This is because such technologies are increasingly able to capture the mental processes that occur before an emotion is formulated and expressed in words.Footnote 55 Put differently, they can detect all those granular and erratic activities of the mind below the level of consciousness which have a direct effect on our behaviour.Footnote 56 Going beyond the usual concerns about privacy and data protection, these AI systems now pose a direct threat to the freedom of our inner existence.Footnote 57 In the words of the United Nations (UN) Special Rapporteur on Freedom of Religion and Belief:
surveillance technologies deployed in ‘counter-terrorism’ and national security apparatuses threaten freedom of thought, among other rights, where they purport to reveal one's thought through inference … rooted in the idea that one can identify ‘extremist thinking’ and intervene before it manifests … authorities prosecute individuals without proving their correspondingly grave and guilty act (actus reus) shifting seamlessly from the criminalization of acts of terrorism to the criminalization of extremist thoughts and beliefs.Footnote 58
On the one hand, the turn to security has led to a gradual blurring of the traditional mens rea/actus reus paradigm. Increasingly, (alleged) criminal intentions are flagged, leading to a culture of suspicion that closely resembles the concept of pre-crime.Footnote 59 On the other hand, new AI technologies deployed in border management are pushing that paradigm a step forward. By analysing our mental processes, they impute culpability, inferring it from a subtle movement of the face, a trembling in the tone of voice, a line of sweat, or a heartbeat.Footnote 60 The transition from criminal intention to criminal behaviour originates in the realm of our conscious and unconscious mental activities. Beyond traditional methods of profiling based on racial characteristics, the mind has become the last bastion to be conquered. But how is the mind protected within the European framework, and what is meant by ‘thought’ from a legal standpoint? The ECHR enshrines the protection of thought, conscience and religion in Article 9:
1. Everyone has the right to freedom of thought, conscience and religion; this right includes freedom to change her/his religion or belief and freedom, either alone or in community with others and in public or private, to manifest her/his religion or belief, in worship, teaching, practice and observance.
2. Freedom to manifest one's religion or beliefs shall be subject only to such limitations as are prescribed by law and are necessary in a democratic society in the interests of public safety, for the protection of public order, health or morals, or for the protection of the rights and freedoms of others.Footnote 61
The European Court of Human Rights (ECtHR) has long held that this right constitutes one of the foundations of a democratic society.Footnote 62 The restrictions outlined in the second paragraph of Article 9 refer only to the ‘freedom to manifest one's religion or belief’. In other words, the freedoms of thought, conscience and religion are themselves absolute, and limits can only be imposed on the external manifestations of such thoughts and beliefs.Footnote 63 This has been described as the protection of the so-called forum internum.Footnote 64
Significantly, the case law of the ECtHR on freedom of thought has mostly been confined to issues relating to the religious sphere.Footnote 65 Although Article 9 seems to distinguish between thought, conscience and religion, in practice case law has dealt mainly with the latter two, leaving the concept of ‘thought’ in an odd limbo.Footnote 66 Conscience has been interpreted almost exclusively in religious terms, with the terms freedom of conscience and individual conscience also being used to describe religious creed.Footnote 67 The danger posed by AI to this right is clear: AI technologies used in border management cannot distinguish between conscience, religion or other types of beliefs. The data collected is much less refined and yet more granular in its aggregation.Footnote 68 These technologies purport to detect a whole range of human reactions, as a direct consequence of conscious and unconscious thought processes.Footnote 69 Confining legal protection to personal beliefs and religious creeds is thus insufficient to address this scenario and represents a significant gap in the application of Article 9. What is the place of emotions and other mental processes affecting human behaviour in the scope of Article 9? Are they protected? And how can the potential impact of scrutiny by AI technology on a migrant's mental state be considered in this process? With the ECtHR's insistence on the absolute protection of the internal part of the mind, a discrepancy exists between the forum internum and the type of thoughts that enjoy legal protection, which will now be explored.
One of the earliest references to forum internum is found in the linked cases of X and C in the 1980s.Footnote 70 Both these Commission cases concerned Quakers who refused to contribute to military expenditure through taxation. Because they identified as pacifists, they considered military taxation contrary to their personal beliefs. In addressing the complaint, the Commission in C stated that: ‘Article 9 primarily protects the sphere of personal beliefs and religious creeds, i.e. the area which is sometimes called the forum internum’.Footnote 71 It subsequently added that: ‘in protecting this personal sphere, Article 9 of the Convention does not always guarantee the right to behave in the public sphere in a way which is dictated by such a belief’.Footnote 72 In both these cases, the obligation to pay taxes was seen as a neutral act due to the State, with no direct consequence for the inner beliefs of the applicant.
This basic formulation of forum internum is found throughout ECtHR jurisprudence.Footnote 73 It emphasizes that Article 9 protects people's innermost beliefs, but restrictions can be imposed the moment that such inner beliefs are translated into actions. However, the exact nature of what is meant by beliefs, other than those related to the religious sphere, remains an open question. As noted above, answering this question has become critical now that AI technology can detect a whole range of mental states. The position of the ECtHR is somewhat contradictory. This is reflected, for instance, in the Guide to Article 9: ‘[o]n the one hand, the scope of [the Article] is very wide, as it protects both religious and non-religious opinions and convictions. On the other hand, not all opinions or convictions necessarily fall within the scope of the provision.’Footnote 74
In one of its earliest formulations, the Commission adopted a restrictive approach, underlining that Article 9 ‘is essentially destined to protect religions, or theories on philosophical or ideological universal values’.Footnote 75 In Salonen, however, the Commission referred to ‘the comprehensiveness of the concept of thought’ in accepting the parents’ wish to give their child a particular name.Footnote 76 Whilst it remains ambiguous what freedom of thought means and what it encompasses, over the years the ECtHR has established a threshold for Article 9 protection. As formulated in İzzettin Doğan: ‘the right to freedom of thought, conscience and religion denotes only those views that attain a certain level of cogency, seriousness, cohesion, and importance’.Footnote 77 This relatively high bar potentially has a negative impact when it comes to new AI technologies. If only opinions that reach a certain level of cogency find full protection, there is ample room for abuse. Furthermore, it is unclear how the importance of thoughts can be determined and through which moral framework they ought to be examined. The greater the ability of such technologies to detect internal mental processes, the stronger the protection under Article 9 should become. Here it is important to highlight once more how the securitarian paradigm, whereby States seek to grasp the malicious intentions of individuals, has significantly increased the interference in people's inner lives.Footnote 78
It is therefore necessary to reconsider the core considerations underlying the original development of the freedom of thought. When introducing this right during the preparatory works of the ECHR, the French Rapporteur Pierre-Henri Teitgen stated:
in recommending a collective guarantee not only of freedom to express convictions, but also of thought, conscience, religion and opinion, the Committee wished to protect all nationals of any Member State, not only from ‘confessions’ imposed for reasons of State, but also from those abominable methods of police enquiry or judicial process which rob the suspect or accused person of control of his intellectual faculties and of his conscience.Footnote 79
This need to protect individuals’ thoughts is even greater in the case of border and migration control, an already heavily securitized field in which individuals’ vulnerabilities are starkly exposed. Facial and emotion recognition AI systems increase the potential risk by aggregating a whole series of biometrics and biomarkers related to mental state. Multiple types of thoughts and mental processes which form an individual's inner life are today susceptible to being flagged as dangerous. Society is on the threshold of transferring the analysis of a person's intellectual state from traditional, human-centred intelligence methods to myriad aggregated datapoints and algorithmic processes over which there is no real control. The perils seem even greater with border management systems, where migrants and asylum seekers are subject to fear, tiredness and the constant threat of rejection. It is therefore imperative that the protection of the mind be elevated to greater prominence and importance in the years to come.Footnote 80
A potential move towards greater protection can be noted in Sinan Işık, which established that the Turkish government could not force the applicant to disclose his faith. The ECtHR recognized that: ‘what is at stake is the right not to disclose one's religion or beliefs, which falls within the forum internum of each individual. This right is inherent in the notion of freedom of religion and conscience.’Footnote 81 While remaining within the sphere of religious creed, this formulation of forum internum takes an inverse perspective, the ECtHR noting that it would examine the case: ‘from the angle of the negative aspect of freedom of religion and conscience, namely the right of an individual not to be obliged to manifest his or her beliefs’.Footnote 82 This approach has been endorsed by several scholars who have emphasized the need to move away from an interpretation of freedom of thought in the traditional sense and instead to consider the negative aspects of the protection offered by Article 9, focusing on an individual's right to have certain thoughts or beliefs without fear of manipulation or punishment.Footnote 83
Some comparative insights might be usefully drawn with Article 18 of the Universal Declaration of Human Rights (UDHR), which largely inspired the text of Article 9. The preparatory material of the UDHR shows an emphasis on the importance of the forum internum in Article 18. In particular the French representative René Cassin took the stance that:
freedom of thought was the basis and the origin of all other rights. Freedom of thought differed from freedom of expression in that the latter was subject to certain restrictions for the sake of public order. It might be asked why freedom of inner thought should have to be protected even before it was expressed. That was because the opposite of inner freedom of thought was the outward obligation to profess a belief which was not held. Freedom of thought thus required to be formally protected in view of the fact that it was possible to attack it indirectly.Footnote 84
Cassin stressed the substantial difference between the expression of a thought and its protection before it was even articulated.Footnote 85 Even though the UDHR debates do not elucidate the meaning of thought itself, they do offer an indication of the importance given to this right (and to the meaning of forum internum) at the time of its inception. This understanding should be revisited in this era of facial recognition and AI-based border management technologies.
Indeed, the approach in Sinan Işık suggests that this distinction is highly relevant in relation to AI technologies and algorithmic screening, with the ECtHR taking a step forward in protecting inner beliefs from State intervention. This case has brought new attention to the idea that thought can be attacked ‘indirectly’, as highlighted more than half a century ago by Cassin. It is clear that AI technologies are capable of inferring mental processes and unconscious activity with ever greater precision, which in combination with the widespread datafication and securitization of border management represents a considerable cause for concern. The time has therefore come for the ECtHR to reconsider the scope of Article 9 and adopt a more holistic approach, extending beyond religious beliefs and encompassing protection of the forum internum.Footnote 86 It is argued that adopting this view for future cases is crucial and would generate positive spillover effects not only at the level of fundamental rights protection under the ECHR, but also in relation to migration and the status of refugees in Europe.Footnote 87
3.2. Individual disempowerment
This section will examine a second challenge posed by border and migration technology that has yet to be addressed squarely, namely the disempowerment of the individual. It is a truism that the international human rights law framework consists primarily of individual legal rights.Footnote 88 On this basis, it can be argued that the human rights framework is premised upon empowering the individual to challenge and address violations prohibited under the framework. This section begins by demonstrating the fundamental nature of individual empowerment before examining how it is being undermined through the deployment of AI systems for border and migration management.
First, history supports the assertion that the individual is the focus of human rights protection. One key theoretical underpinning of human rights centres on natural rights, arguing that the contemporary human rights framework mirrors pre-existing moral rights.Footnote 89 The theory can be traced to scholars such as Locke, who argued that various rights exist in the ‘state of nature’ and are meant to be protected by the constitution of government.Footnote 90 The contemporary interpretation of natural rights is generally reflected in the concept of human dignity that underpins the human rights framework.Footnote 91 In this sense, individuals are said to have inherent worth by virtue of their existence, independent of who they are, where they are from or other status markers. Since only individuals possess moral rights, it is uncontroversial to describe the international human rights protection framework as being centred on the empowerment of the moral rights holder.
A second historical rationale for individual empowerment stems from more recent history, namely the impetus for international human rights protection created after the Holocaust and World War II.Footnote 92 Buchanan argues that ‘radical collectivism’,Footnote 93 meaning collectivist ideas promoted by, for example, National Socialism, negated the worth and importance of the individual and subsumed this to forms of collective identity. The individual is said to have ‘no significant moral worth on his own account but rather derives whatever value he has by virtue of his usefulness to or membership in the nation’.Footnote 94 The subsequent adoption of the UDHR reaffirmed the inherent worth of the individual qua individual by guaranteeing their fundamental rights.Footnote 95
Third, the emphasis upon individual empowerment is also evident through the conferral of rights to the individual as a pushback against the sovereign power of the State.Footnote 96 While State excesses of power have resulted in gross rights violations, the individual is generally able to hold the State accountable through a wide array of human rights which extend protection beyond physical violations to more ‘intangible’ harms such as violations of the right to privacy.
Fourth, the international protection of human rights is said to enable the operationalization of equality, demonstrating a ‘robust commitment to affirming and protecting the equal basic moral status of all individuals’.Footnote 97 Thus, beyond reflecting the moral rights that individuals are said to possess inherently, ensuring equal worth in practice necessitates conferring individuals with equal legal rights. As Besson notes, ‘(h)uman rights are rights individuals have against the political community, i.e. against themselves collectively. They generate duties on the part of public authorities not only to protect equal individual interests, but also individuals' political status qua equal political actors.’Footnote 98
Finally, the very nature of human rights as enforceable and justiciable individual legal rights confirms that the framework was designed with the normative goal of empowering individuals.Footnote 99
It is thus clear that human rights aim to empower individuals by granting them a set of rights and ensuring that they can seek protection for those rights. The paradigm of individual empowerment is also observed in the digital age, including through the developing space of digital rights.Footnote 100 However, the deployment of AI-driven border and migration management may be challenging the idea of individual empowerment which lies at the core of the human rights protection framework in three ways.
3.2.1. Datafication
First, the use of AI systems makes it increasingly onerous, or even impossible, for individuals to be aware of how intentionality is imputed to them and to challenge that imputation. The use of risk classification systems and emotion and facial recognition systems by border security arguably makes it ever harder for the individual to understand why and how they have been deemed a risk and to challenge such classification. Risk classification systems work on the basis of a form of profiling, categorizing individuals by the risk that they potentially pose to the country of destination. Such systems are based on the data provided by the individual but also draw on other sources, such as other data-based risk profiles and data gleaned from other systems. Due to the interoperability of such systems, allowing for the cross-checking of data to find ‘hits’, and the imperative of security that underpins the use of AI systems, the individual is no longer the central figure in the maze of datapoints. The datapoints in turn pertain not to the individual, as historically and biologically situated,Footnote 101 but to profiles constructed for the purposes of informing decision-makers such as the border and migration authorities. Given that these systems operate in securitized settings, the individual is unlikely to be aware of the content of their profile or how intentionality has been imputed to them, let alone have access to the subsequent recommendations made by the system. Van der Sloot argues that ‘control is no longer feasible because of time and resources, but also because of information and power asymmetries: data is produced by data controllers and was thus never in the hands of an individual in the first place’.Footnote 102
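The mechanism described above can be illustrated with a simplified sketch of profile assembly: datapoints the traveller never supplied directly are merged with their declared data and converted into a label the traveller never sees. Every field, value and threshold below is hypothetical and serves only to show the mechanism.

```python
# Sketch: a risk profile assembled from datapoints the traveller never
# supplied directly (other systems, group-level inferences). All fields,
# values and the 0.3 cut-off are hypothetical illustrations.

declared_data = {"name": "A. Example", "nationality": "XX"}

# Datapoints pulled in from elsewhere, invisible to the traveller:
linked_data = {
    "travel_route_cluster": "cluster-17",     # inferred from carrier data
    "similar_profile_refusal_rate": 0.42,     # a group statistic, not a personal fact
}

def build_profile(declared: dict, linked: dict) -> dict:
    profile = {**declared, **linked}
    profile["risk_band"] = "high" if linked["similar_profile_refusal_rate"] > 0.3 else "low"
    return profile

profile = build_profile(declared_data, linked_data)
print(profile["risk_band"])  # the traveller sees neither the inputs nor this label
```

The decisive input here is a statistic about other people who resemble the traveller, which captures the asymmetry the paragraph describes: the profile is acted upon, but its contents and provenance remain outside the individual's reach.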
While machine learning AI systems have long been criticized as being ‘black boxes’, i.e. their internal processes are so complex or opaque that it is impossible to understand how outputs are reached, the individual's lack of knowledge is not (only) due to computational impossibility but is compounded by political impossibility in the face of endemic secrecy in securitized settings such as border and migration management. Transparency, even if available, is likely to be only superficial and insufficient to empower the individual, who is typically from an already vulnerable or marginalized group, to challenge a decision or seek accountability. Another justification offered for the lack of transparency around algorithmic systems, including those used within the field of border and migration, is that transparency could ostensibly facilitate misuse of the system by those seeking to exploit loopholes or the information provided.Footnote 103
3.2.2. Inference and construction
Second, the difficulties of finding out how a decision is made and of challenging it are exacerbated by the fact that it is not personal data per se that informs algorithmic decision-making, recommendations or forecasting, but rather algorithmic constructions of the individual's profile and the inferences drawn from that data. This ‘profile’ is thus by nature elusive and ever-changing, incorporating new datapoints as they are encountered, rendering it much harder to challenge. In effect, the individual is being judged not by their own personal data as such, but through recorded acts, group profiling and the inferences drawn therein. While data protection, privacy and human rights laws are generally applicable in the border and migration setting, the operationalization of these protections faces certain novel difficulties. The use of AI systems such as algorithmic assessments of the risk profiles of various travellers represents not only a novel way to ‘read’ subjects but, as Van den Meerssche observes, in effect a new form of subject creation.Footnote 104 Subject construction means that ‘data flows, bodies and scattered signatures of past passages or events are assembled as scores amenable to immediate institutional action’.Footnote 105 This ephemeral form of subject-making challenges the subject's capacity to know what data exists about them and how it is processed, and could be framed as a form of ‘hermeneutic injustice’, described by Milano and Prunkl as the ‘depletion of epistemic resources that are needed to interpret and evaluate certain experiences’.Footnote 106 Such disempowerment of the individual in relation to their creation as an AI ‘subject’ also contravenes the ‘emancipatory promises of collectivity, solidarity and equality’Footnote 107 of international law, including human rights law. The generation of a profile from many types and sources of data also does not fall squarely within the ambit of data protection law, which is concerned with the identifiability of existing subjects.
3.2.3. Algorithmic groupings
The third way that AI systems disempower individuals stems from the fact that algorithmic profiles pertain not to the individual at all, but rather to groups. Algorithmic predictions and recommendations, even if applied to the individual, are essentially the result of groups created by inference based upon shared algorithmic patterns. It has been argued that: ‘in an era of big data where analytics are being developed to operate at as broad a scale as possible, the individual is often incidental to the analysis’.Footnote 108 Algorithmic group-based correlations enable actionability based upon the insight they afford in relation to the population as a whole.Footnote 109 This is fundamentally at odds with the interpretation of human rights as being premised upon individual empowerment. While there are attempts to broaden human rights protections to include group privacy and expand the basis of non-discrimination law, there are still unresolved issues in relation to ‘group rights’, such as which groups are deserving of protection, how a group can be identified when its contours are constantly shiftingFootnote 110 and where the threshold for what constitutes a group should lie.Footnote 111
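A minimal sketch can make the nature of such algorithmic groupings concrete: the ‘group’ below is simply a region of feature space, defined by invented behavioural signals rather than any socially salient attribute, and its membership shifts whenever the data shifts. All names, features and the grouping rule are hypothetical.

```python
# Sketch: individuals grouped by an emergent behavioural pattern rather
# than any socially salient attribute. Data and grouping rule are invented.

travellers = [
    {"id": 1, "night_bookings": 0.9, "one_way_tickets": 0.8},
    {"id": 2, "night_bookings": 0.1, "one_way_tickets": 0.2},
    {"id": 3, "night_bookings": 0.8, "one_way_tickets": 0.9},
]

def algorithmic_group(t: dict) -> str:
    # The "group" is just a region of feature space: it has no name in law,
    # no membership list anyone consented to, and moves whenever data moves.
    score = (t["night_bookings"] + t["one_way_tickets"]) / 2
    return "pattern-A" if score > 0.5 else "pattern-B"

for t in travellers:
    print(t["id"], algorithmic_group(t))
# 1 pattern-A / 2 pattern-B / 3 pattern-A: a grouping with no social referent
```

Travellers 1 and 3 share a ‘group’ without knowing it exists, which is exactly why such groupings sit awkwardly within non-discrimination frameworks built around identifiable, stable categories.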
A general counterargument can be offered to the assertion that AI systems disempower the individual. It can be argued that the human rights framework is not centred upon the empowerment of the individual as such; rather, it puts in place a protection mechanism aimed at securing a minimum standard of protection to prevent the worst human rights excesses.Footnote 112 However, as will be examined in Section 3.3, the minimum level of protection offered by human rights law serves the underlying purpose of the protection and realization of human dignity. Individual empowerment is a necessary ingredient for the realization of human dignity, as individual autonomy is one of its key components.
In summary, it is clear that the use of AI systems within the border and migration context has the effect of disempowering the individual, an effect exacerbated by the turn to securitization. The lack of transparency is not only inherent to such systems, but is also deemed necessary for their proper functioning. This differentiates the border and migration context from many other contexts in which AI systems are deployed, where transparency has been hailed as a key element in empowering the individual to understand and challenge algorithmic decision-making: within public administration, for example, good governance principles are built upon transparency.
3.3. Politicizing human dignity
This section delves into how human dignity, a key foundational concept within human rights, is being politicized and undermined. The 1945 United Nations Charter recognized the ‘dignity and worth of the human person’Footnote 113 and this was subsequently reflected in the UDHR in 1948, which affirmed that ‘all human beings are born free and equal in dignity and rights’.Footnote 114 The concept itself is open-ended, and its philosophical and historical provenance has seen human dignity being interpreted variously as not treating humans as a means to an end,Footnote 115 as protection of certain vulnerable classes of personsFootnote 116 and as recognizing the distinct capacities of humanity, including the reasoning capacities of the human mind.Footnote 117 Human dignity has been described as ‘the foundation on which the superstructure of human rights is built’,Footnote 118 and the very reason why we protect human rights.
The openness of the concept might intuitively convey its correspondingly open and evolving utility, even in light of new challenges to human rights. Thus, even putatively novel challenges such as environmental harms have been couched within the language of human dignity.Footnote 119 Human dignity has also been a relevant concern when it comes to AI systems.Footnote 120 However, others have criticized the human rights discourse for being shortsighted in its response to new challenges. Rodríguez-Garavito argues that human rights responses have tended to ‘register the earthquake but lose sight of the tectonic plates that are shifting beneath the surface’,Footnote 121 pointing to foundational concerns that are either missed or neglected.
This section puts forward three arguments as to how the use of AI within border and migration management contexts poses a normative problem for the human rights framework by failing to address the politicization of human dignity and the inherent worth of the human being.
3.3.1. Exacerbated exclusion in the border and migration context
First, while acknowledging that migration is an inherently exclusionary context that engages the sovereign power of the State to determine who may or may not enter its territory, the use of AI systems in such determinations may exacerbate power inequalities and result in the disproportionate exclusion of certain ethnicities, races and nationalities.Footnote 122 While sovereign States have an almost exclusive power—barring international obligations such as protections afforded under refugee law and human rights law—to determine who they want to have within their borders,Footnote 123 this discretion is not unfettered. The use of AI in border and migration management can result in both direct and indirect discrimination, and at the same time impacts not only the individual but also effectively builds discriminatory structures and leaves them in place. Thus, even though the use of AI is pervasive within different segments of society and public administration, the border and migration context brings forth unique concerns.
To mitigate these concerns, the lens of human dignity has been deployed to cast a wider net of protection. For example, the European Data Protection Board (EDPB) Joint Opinion with the European Data Protection Supervisor (EDPS) on Artificial Intelligence called for a ban on ‘any use of AI for an automated recognition of human features in publicly accessible spaces – such as of faces but also of gait, fingerprints, DNA, voice, keystrokes and other biometric or behavioural signals’.Footnote 124 Similarly, the Opinion also called for a ban on the inference of emotions from natural persons such as through emotion AI systems. These systems were argued to impact human dignity detrimentally as individuals are computationally read and thereby may have their life opportunities ‘determined or classified by a computer as to future behaviour independent of one's own free will’.Footnote 125 The calls for banning such systems, including through civil society efforts, have not been taken up in the EU's groundbreaking AI Act.Footnote 126 However, the AI Act does classify AI systems used within the border and migration context as high risk, meaning operators will be subject to obligations on ensuring robustness, cybersecurity, data governance, data quality and bias, amongst others.Footnote 127
However, even the strong language of human dignity is unable to stop the systemic incursions of these technologies into human rights due to two evolving transformations. First, the lines between security and migration have been and continue to be increasingly blurred. The border and migration space is witnessing the combination of security-focused systems with migration-focused systems, including through the interoperability of large-scale IT systems. Blasi Casagran notes that border management systems such as EES, VIS, Eurodac and systems with distinct security logics such as ECRIS-TCN are now part of the interoperability framework. SIS, also part of the interoperable system, was the only system initially designed to straddle both border management and security. In effect, the enmeshing of these initially distinct objectives means that the EU can in effect ‘treat the objective of border management and the objective of police cooperation as one single general purpose’.Footnote 128 In criticizing the expanding reach of these interlinked databases, the EDPS has argued that surveillance capture is too wide as it ‘will put everyone trying to enter the EU under broad surveillance, when in fact they were designed to only catch a small minority of criminals’.Footnote 129 This leaves even the powerful language of human dignity unable to scale the high walls erected by the securitization lens.
In addition to the line between security and migration being blurred, the lines between asylum and refugee protection and migration management in general are also being blurred. Even though border and migration management falls under the high-risk designation in the AI Act, the categorization does not distinguish between the distinct elements at play, especially in relation to the heightened international legal obligations of the State when it comes to the protection needs of asylum seekers and refugees. In seeking to assess the risk of AI systems in different use cases and sectors, the EU ended up compressing distinct State obligations relating to borders and migration into the same risk bucket. Doing so inadvertently entwined two distinct concerns with separate governing mechanisms: for example, someone claiming refugee status has different needs and legal concerns from those of a third-country national attempting to visit the EU.
A key element of refugee law protection is the concept of non-refoulement which prohibits States from returning refugees to countries where they may face persecution or threats to their life or freedom.Footnote 130 The principle of non-refoulement is argued to have jus cogens status and cannot be overridden by a generalized (and ever-expanding) securitization imperative.Footnote 131 Refoulement can be facilitated through AI forecasting technologies where they have been used to interdict migratory flows and facilitate pushbacks, instead of enabling better planning of asylum assistance.Footnote 132 This facilitates a form of ‘digital refoulement’.Footnote 133 For dignity and human rights to be respected effectively, the non-derogability of jus cogens norms must be reinforced to prevent the use of certain intrusive AI technologies such as emotion recognition and biometric facial recognition in migration management that threaten the principle of non-refoulement, and the detriment to human dignity that such treatment entails.Footnote 134
3.3.2. Inherent worth and AI-determined abnormalities
The inherent worth of the human being is also being politicized through the AI-driven determination of the boundaries of normality versus abnormality. Border crossings and airport security checks involve intrusive forms of anomaly detection, including physicality-related anomalies. Security concerns are once again pertinent, in that these checks are deployed to ensure that no one is transporting banned or illegal items and substances that could endanger the security of many. However, at sites of border control, it has also been seen that bodies which do not fit into the binary male–female mould are singled out for scrutiny and undignified forms of examination.Footnote 135 In addition, disabled bodies have also triggered AI systems to suggest the need for human intervention, demonstrating a disproportionate impact on the rights of persons with disabilities.Footnote 136 The classification of bodies as normal or abnormal signifies that there is a range of normality in terms of what is acceptable within highly securitized settings, perpetuating ‘ableism, inequality, and other harms’.Footnote 137 As one critique puts it: ‘biometric technologies across the matrix are used to create baselines of what constitute “normal” behaviours and bodies, which further reinforces unequal treatment of people whose bodies and behaviours do not adhere to this normative frame’.Footnote 138
In addition to policing ‘normal’ ranges of external attributes, AI systems such as biometric facial and emotion recognition systems also create an algorithmically determined ‘acceptable’ range of emotions, micro-expressions and movements in order to analyse internal attributes. Those not falling within the acceptable range risk being singled out as displaying ‘biomarkers of deception’,Footnote 139 often without their knowledge. Thus, both bodies and intimate aspects of a person's existence such as emotions are ‘informatized’,Footnote 140 ostensibly revealing hidden intention.
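The logic of an algorithmically determined ‘acceptable’ range can be illustrated with a basic anomaly-detection sketch: a baseline of ‘normal’ values is learned from a sample, and anyone falling outside a chosen band around it is flagged. The measurements, the chosen signal and the threshold below are purely illustrative.

```python
# Sketch of anomaly detection against a learned "normal" baseline: values
# outside an acceptable band are flagged. All numbers are illustrative.
import statistics

baseline_blink_rates = [17, 19, 18, 20, 18, 19, 17, 18]  # sampled "normal" population
mean = statistics.mean(baseline_blink_rates)
stdev = statistics.stdev(baseline_blink_rates)

def is_anomalous(observed: float, k: float = 2.0) -> bool:
    """Flags anyone more than k standard deviations from the baseline mean."""
    return abs(observed - mean) > k * stdev

print(is_anomalous(18.5))  # False: inside the "acceptable" band
print(is_anomalous(26.0))  # True: singled out, without knowing why
```

The normative weight sits entirely in two design choices, the baseline sample and the parameter k, neither of which is visible or contestable to the person flagged.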
The ‘datafication’ of human movements, expressions and micro-expressions poses a significant challenge for human dignity as it reduces individuals to datapoints, potentially undermining their autonomy, privacy and the presumption of innocence. An individual who possesses dignity and autonomy should fundamentally be empowered to govern themselves and make choices within their own life. For this to be possible, data concerning them has to be accessible and knowable; instead, datafication places trust in ostensibly neutral technology, whereby ‘as a multiplicity of inscriptions are produced, migrants’ claims can be disqualified through circumscriptions of data and ascriptions of expertise’.Footnote 141 It is the datafied individual that is judged, rather than the actual individual, whose human dignity is detrimentally impacted by the inability to know how their micro-expressions, emotions or gestures are profiled, or how these are seen as security threats or otherwise, thus rendering it impossible to challenge such decisions.
3.3.3. Possibility of solidarity and resistance
The final challenge for human dignity presented by AI systems used within the border and migration management setting is the impact they have on dignity as human flourishing, enabled through practices such as social solidarity or resistance (towards practices deemed unjust).
First, even though the EU's AI Act sets the tone as the first comprehensive AI legislation, the large-scale IT systemsFootnote 142 used in border and migration control, including their interoperability components, are exempted from the initial coverage of the Act. Rather than being obliged to be brought into compliance by 2 August 2027 like other AI systems already in operation, these systems have until 31 December 2030.Footnote 143 This practice indirectly introduces a two-tiered application of human rights in relation to AI in the EU, with migrants’ rights apparently protected in the AI Act but that protection being limited in practice. Civil society groups have criticized this as it ‘reinforces the notion of a differential approach to fundamental rights when migration is the subject matter and people on the move are the right-holders’.Footnote 144
While the securitization logic is one element of this two-tiered rights application, the exclusion of these large-scale IT systems also reflects the view that trust in technology is essential within the field of border and migration, and a widespread acceptance that ‘AI common sense’Footnote 145 can better forecast, assist in decision-making and determine truth or falsity in the management of human mobility than the testimony of migrants themselves. This form of technological determinism does away with the notion of the primacy of the human being and their own agency in shaping their destinies, as technological insights gleaned from AI systems are seen as better indicators of trustworthiness, reliability or deceit.Footnote 146
Solidarity and resistance can also be curtailed through the generation of ‘invisibilities’ by AI systems. As shown in Section 3.2, the way AI systems operate in making generalizations and drawing inferences does not necessarily correspond to socially salient concepts (such as age or gender) or fall within the protections offered by the law, such as non-discrimination law.Footnote 147 Instead, the data-driven inferences and categorizations of individuals take place outside of the individual's frame of reference and are thus ‘invisible’, both as a result of this data-driven nature, but also because operational details are intentionally kept confidential to deter attempts to circumvent their mechanisms.Footnote 148 Mann and Matzner agree and argue that ‘emergent categories are also “invisible” from the point of view of existing anti-discrimination protection. It becomes an invisible production of invisibilities.’Footnote 149 These invisibilities can generate new vulnerabilities and vulnerable groupings, as opposed to merely falling within existing categories of vulnerable groups.Footnote 150 Beyond non-discrimination law, this creates a new challenge for individual autonomy that is central to human dignity. How does one resist, challenge and gather solidarity around invisibilities when these are neither made evident to the individual nor known to others also subjected to such algorithmic readings? Van der Sloot argues that the legal forms of resistance and accountability, through the right to privacy and data protection law, are ill-prepared to address the changing ways in which knowledge is now produced through AI systems. Thus, individual knowledge and control over data have now been overtaken by datafication that enables algorithmic groupings, and the individual self-narrative has been replaced by reliance upon observed data.Footnote 151
The generation of invisibilities in this manner can make solidarity through shared experiences, and the challenging of such experiences, much more onerous. Prior forms of solidarity building in relation to human rights concerns, such as the suffragette movement, the LGBTQI+ (lesbian, gay, bisexual, transgender, queer or questioning, intersex, and more) movement and others, all relied upon a shared sense of injustice and mobilization against an identifiable cause. As AI systems generate invisibilities, such forms of shared solidarity and resistance can no longer be taken for granted. The data-driven groupings and inferences created by AI systems are atomized to each individual, making it difficult to form alliances. Such invisibilities benefit the party deploying the AI system, which not only has exclusive control over knowledge about how the system functions but can also prevent others from effectively understanding how various datapoints are used to infer certain characteristics about individuals subjected to the system. Where individuals and communities have successfully challenged the experimentation with and use of technologies, these challenges were built upon effective knowledge, shared experiences and a shared sense of injustice, conditions that invisibilities render impossible, thus preventing the full realization of human dignity. Scholarship has suggested various means of addressing the generation of such invisibilities, including calls for more transparency, protections for new ‘artificial’ groupsFootnote 152 and shifting the burden of (dis)proving harms from the individual to the deployer.Footnote 153 Others have called for a priori solidarity, namely through refusal to be made algorithmically reducible and readable by AI systems.Footnote 154
The concept of human dignity is being relied upon to reassert the primacy of the human being and to protect the inherent worth of the individual person. As noted in Sections 3.3 and 3.3.1 above, the EDPB and EDPS used the language of human dignity to reassert the primacy of the individual when they called for a ban on the use of biometric facial recognition systems.Footnote 155 Human dignity can also be considered an implicit motivation for the ban on certain types of AI systems under the EU's risk-based approach to AI regulation. In Recital 28 of the AI Act, banned AI systems, such as those used for social scoring or those which manipulate or exploit individuals, are said to ‘contradict Union values of respect for human dignity, freedom, equality, democracy and the rule of law and fundamental rights’.Footnote 156 It can thus be reasoned that a red line is being drawn around AI systems whose particular harms threaten the foundational idea of human dignity. At the same time, however, the division into banned, high-risk and limited-risk categories has been criticized as unsustainable and a legal fiction.Footnote 157 For example, the fact that emotion recognition systems are banned if deployed within education and workplace settings, but not within even more consequential fields such as law enforcement and border and migration management (where they fall under the category of high-risk AI), warrants examination. If power disparities and the potential for abuse are the justifications given, there is even greater potential for these issues to arise in the law enforcement and border and migration contexts. Thus, while attempts have been made to draw a red line by banning some AI use cases, there are internal tensions and a lack of clarity as to why some systems are banned or classified as high risk for certain uses whilst others are not.Footnote 158
Although the drawing of a red line is commendable, policymaking should also be informed by the ‘hidden’ impacts of AI raised in Sections 3.1 to 3.3. Some legal developments indicate that there is hope on the horizon. The EDPS has criticized the use of forecasting technologies, including those drawing on social media data, in ways that contravene the purpose limitation principle within data protection law, whereby data gathered for specific purposes may not be put to unknown future uses.Footnote 159 The Court of Justice of the EU, in its judgment in the PNR case,Footnote 160 stated that automated decision-making for risk assessment purposes had to respect the individual's rights to privacy and data protection under the EU Charter of Fundamental Rights.Footnote 161 The Court held that the transfer, processing and retention of passenger name record (PNR) information under the PNR DirectiveFootnote 162 must be limited to what is strictly necessary, and rejected the use of self-learning systems either to determine the outcome of an application or to adjust the weighting of the criteria used for identification. In the UK, an algorithmic system used to allocate visa applicants to different ‘streams’ was challenged as racist and discriminatory towards minority populations and was subsequently scrapped.Footnote 163
These examples demonstrate that while the deployment of AI systems is increasing throughout the border and migration setting, legal challenges can successfully be mounted to halt certain problematic uses of such systems. However, scrutiny going beyond these discrete legal challenges is required in this field. Ongoing vigilance in relation to evolving harms and their ‘hidden’ impacts on human rights and human dignity can be maintained through tools such as human rights impact assessments. More thorough stakeholder engagement in the design and deployment of AI systems is also necessary, including with particularly affected groups such as refugees, migrants and asylum seekers, and with civil society representatives. In turn, systems posing a disproportionate threat to human dignity should be banned. Policymaking should thus be informed not only by the familiar concept of threats to human rights, but also by the deeper implications of these concerns for the foundational elements of the human rights framework.
4. Conclusion
The deployment of AI systems in border and migration control challenges not only the protection of specific human rights but also the foundational and normative principles of the human rights framework. This article has demonstrated that the use of AI systems within the field of border and migration management is challenging human rights in novel ways, going beyond the oft-cited concerns for privacy, data protection and non-discrimination. It has shown how the freedom of thought can be compromised in new ways by AI systems that read individuals and construct interpretations of them, including through biometric and emotion data, ostensibly to reveal suspicion or threat and thereby impute intentionality to the individual. The expanding use of AI within the border and migration context can also undermine the power of the individual to address disparities, and challenges even the wider concept of human dignity that is foundational to human rights discourse. In addition, a data-based reading of an already vulnerable person can generate new threats to solidarity and mobilization and pre-empt resistance. As AI systems transform physical borders into digital ones, and redraw the boundaries between biometrics, intentionality and criminality, the need to reinvigorate and protect human dignity in the age of AI has become pressing. This is essential to safeguard the foundational principles of human rights in an increasingly technologized (and mobile) world.
Acknowledgements
This paper was written within the framework of the Raoul Wallenberg Visiting Chair Project (2021–2025), The Future of Human Rights, a collaboration between the Faculty of Law at Lund University and the Raoul Wallenberg Institute of Human Rights and Humanitarian Law. The authors gratefully acknowledge the support provided by both institutions.