15.1 Introduction
This contribution examines the possibilities for individuals to access remedies against potential violations of their fundamental rights by EU actors, specifically violations arising from the EU agencies’ deployment of artificial intelligence (AI). Presenting the intricate landscape of the EU’s border surveillance, Section 15.2 sheds light on the prominent role of Frontex in developing and managing AI systems, including automated risk assessments and drone-based aerial surveillance. These two examples are used to illustrate how the EU’s AI-powered conduct endangers fundamental rights protected under the EU Charter of Fundamental Rights (CFR).Footnote 1 These risks emerge for privacy and data protection rights, non-discrimination, and other substantive rights, such as the right to asylum. In light of these concerns, Section 15.3 examines the possibilities for accessing remedies by first considering the impact of AI uses on the procedural rights to good administration and effective judicial protection, before clarifying the emerging remedial system under the proposed AI ActFootnote 2 in its interplay with the EU’s existing data protection framework. Lastly, the chapter sketches the evolving role of the European Data Protection Supervisor (EDPS) in this context, pointing out the key areas demanding further clarification in order to fill the remedial gaps (Section 15.4).
15.2 EU Border Surveillance and the Risks to Fundamental Rights
As European integration deepens, the need for enhanced security measures has led to the modernisation of the EU’s information systems and other border surveillance capabilities, increasingly involving tools that can be classified as AI systems. The latter term refers to ‘a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments’.Footnote 3 Among the AI tools explored for use in the EU’s border surveillance are tools to support the detection of forged travel documents and automated pre-processing of long-stay and residence permit applications for the Schengen Area, as well as the use of AI for risk assessments by way of identification of irregular travelling patterns, high security risks, or epidemic risks. The design, testing, deployment, and evaluation of these systems is principally entrusted to eu-LISA – the agency responsible for the operational management of the systems in the area of freedom, security, and justice.Footnote 4 This task comes with an important caveat: the design of the AI tools to be used in border control is delegated – through expensive tenders – to private developers.Footnote 5 For instance, a €300 million contract was agreed in 2020 with Idemia and Sopra Steria for the implementation of the new, sensitive data processing Biometric Matching System (BMS).Footnote 6 Similarly, EU agencies, such as the EU’s Border and Coast Guard Agency – Frontex,Footnote 7 invest heavily in developing AI-powered border surveillance systems, including aerial and other hardware tools.Footnote 8
Frontex is among the key EU actors whose tasks and powers are markedly enhanced by the use of AI systems. Several factors are driving the increasing interest in using AI in the EU’s border security context. These include the need to process large amounts of data, the drive for cost and resource efficiency, coupled with the decreasing costs of data storage and processing power, the political democratisation of AI technology, and the resulting influence of EU initiatives embracing the development and deployment of AI.Footnote 9 To illustrate the risks that AI uses pose to fundamental rights, the following discussion zooms in on Frontex’s AI-powered conduct (Section 15.2.1) before depicting the risks posed by these uses to fundamental rights (Section 15.2.2).
15.2.1 Frontex at the Forefront of Border Surveillance
Frontex is rapidly expanding its AI capabilities. Among those currently explored are AI tools for automated border control (e.g., e-gate hardware), document scanning tools, facial recognition and other biometric verification tools, maritime domain awareness capabilities, unmanned surveillance tools (i.e., ‘towers’ planted in border regions to detect illegal border crossings), and other forms of unmanned autonomous aerial surveillance systems.Footnote 10 Two examples deserve closer inspection to illustrate how AI uses by EU actors can give rise to fundamental rights violations: automated risk assessments and AI-powered aerial surveillance.
15.2.1.1 Automated Risk Assessments
Automated risk assessment (ARA) refers to a process of identifying potential risks by using computer systems, algorithms, or data analysis techniques to evaluate risks in a given context. The ARA relies on extensive datasets that are widely available in the digital age.Footnote 11 Increasing reliance on automated risk assessments in the EU’s border security is not new. It emerges from a long-standing practice of informational cooperation in the EU’s area of freedom, security, and justice based on large-scale automated matching of personal data.Footnote 12 Among others, the exchange of detailed alert files occurs among the national competent authorities and the EU agencies via the Schengen Information System (SIS), the Visa Information System (VIS), and Eurodac, and soon also the Entry/Exit System (EES), the EU Travel Information and Authorisation System (ETIAS), and the European Criminal Records Information System (ECRIS-TCN).Footnote 13 In addition, automated exchanges of personal data take place among the national authorities, EU agencies, and third parties, such as airline companies or online communication services, under specially set up frameworks, such as the PNR scheme.Footnote 14 These information exchange frameworks provide for automated assessments of gathered information in order to identify and locate potential security threats. The identification may rely on matches between the alerts containing purely alphanumeric data concerning a specific individual. Generally, however, the data collected within the alerts also include sensitive and genetic data,Footnote 15 such as DNA, fingerprints, or facial images, enabling advanced identification based on pre-defined algorithms embodying the characteristics of AI tools.Footnote 16
EU agencies, including Frontex, employ a range of automated risk assessment tools in the performance of their tasks. Specifically, Frontex will host the ETIAS Central Unit, managing the automated risk analyses in the ETIAS – the European Travel Information and Authorisation System.Footnote 17 From mid-2025, the system will undertake pre-screening of about 1.4 billion people from sixty visa-exempt countries for their travel to the Schengen states.Footnote 18 The pre-screening aims to contribute to a high level of security, prevent illegal immigration, prevent, detect, and investigate terrorist offences or other serious crimes, as well as protect public health.Footnote 19 Beyond the ETIAS Central Unit hosted by Frontex, ETIAS will rely on the National Units of the thirty participating European countries and on the system itself, which is developed and maintained by eu-LISA.Footnote 20 The fast processing of future travel applications will be guaranteed by an automated risk assessment performed by this ETIAS Central System.Footnote 21
The risk assessment will entail a threefold comparison of travel application data. First, the Central System automatically compares the information submitted by the travel applicant against the alerts stored within the above-mentioned EU information systems, namely the SIS, VIS, Eurodac, and EES, as well as against Europol data and Interpol databases.Footnote 22 Second, the traveller’s application will be compared against a set of risk criteria pre-determined by the ETIAS Central Unit – that is, Frontex.Footnote 23 Lastly, the comparisons will be made against the ETIAS ‘Watchlist’ of persons suspected of involvement in terrorist offences or other serious crimes.Footnote 24 While the first category of ARA in the ETIAS process places the responsibility on the Member States (which are primarily responsible for entering alerts into the EU large-scale databases), the latter two categories also directly involve EU agencies, namely Frontex and Europol (due to their role in setting up the ARA criteria or the ‘watchlist’). Given the focus on AI uses by EU actors in this chapter, only the Frontex-defined risk criteria encompassed within the pre-screening algorithm will be further discussed.
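To make the sequence of comparisons concrete, the following minimal sketch models the three stages in Python. It is purely illustrative: all identifiers, data structures, and matching logic are hypothetical simplifications, and the actual ETIAS Central System is neither public nor this simple.

```python
# Illustrative sketch of the threefold ETIAS comparison described
# above. All identifiers and data structures are hypothetical.
from dataclasses import dataclass, field


@dataclass
class Application:
    applicant_id: str
    data: dict                        # alphanumeric application data
    hits: list = field(default_factory=list)


def check_databases(app: Application, databases: dict) -> None:
    """Stage 1: compare against alerts in the EU information systems
    (SIS, VIS, Eurodac, EES), Europol data, and Interpol databases."""
    for name, alerts in databases.items():
        if app.applicant_id in alerts:
            app.hits.append(f"alert match in {name}")


def check_risk_criteria(app: Application, rules: list) -> None:
    """Stage 2: compare against the risk criteria pre-determined by
    the ETIAS Central Unit (hosted by Frontex)."""
    for rule in rules:
        if rule(app.data):
            app.hits.append("screening-rule match")


def check_watchlist(app: Application, watchlist: set) -> None:
    """Stage 3: compare against the ETIAS Watchlist of persons
    suspected of involvement in terrorist or other serious crimes."""
    if app.applicant_id in watchlist:
        app.hits.append("watchlist match")


def assess(app: Application, databases: dict, rules: list,
           watchlist: set) -> str:
    check_databases(app, databases)
    check_risk_criteria(app, rules)
    check_watchlist(app, watchlist)
    # A hit must never translate automatically into a refusal: it is
    # routed to manual verification by the Central and National Units.
    return "manual verification" if app.hits else "automatic authorisation"
```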
The Frontex-operated Central Unit is to define the risk criteria on the basis of risks identified by the EU Commission in corresponding implementing acts. The latter could be drawn from the EES and ETIAS statistics on abnormal rates of overstaying and refusals of entry for a specific group of travellers due to a security, illegal immigration, or high epidemic risk, based on the information provided by Member States as well as by the WHO.Footnote 25 Based on this information, the ETIAS Central Unit will define the final screening rules underlying the ETIAS Central System’s algorithm.Footnote 26 Pursuant to Article 33(1) ETIAS Regulation, ‘these screening rules shall be an algorithm enabling profiling’ based on a comparison of the application data with specific risk indicators.
The algorithm will be built on a combination of data concerning the age range, sex, nationality, country and city of residence, level of education (primary, secondary, higher, or none), and current occupation.Footnote 27 These data will serve to evaluate a person’s behaviour, location, or movements based on a detailed history of one’s travels, submitted in the ETIAS application form. This practice thus corresponds to profiling, which, pursuant to the EU data protection rules,Footnote 28 should be prohibited unless accompanied by strict safeguards.Footnote 29 Pursuant to the jurisprudence of the Court of Justice of the European Union (CJEU), the safeguards must ensure that the criteria used for profiling are targeted, proportionate, specific, and regularly reviewed, and must not be based solely on protected categories such as age or sex.Footnote 30 The ETIAS algorithm may, however, target a specific country of origin or nationality, which can give rise to concerns of discrimination, as discussed further below. In this respect, it is worth highlighting that ETIAS automated risk assessments will serve to select a rather small group of potential security threats from an ocean of otherwise innocent, law-abiding citizens. As the ETIAS explanatory website states, about 97% of applications are expected to be automatically approved, with the remaining 3% requiring further manual verification by the ETIAS Central Unit in cooperation with the National Units.Footnote 31
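Purely for illustration, a single screening rule over the data categories listed above might look like the hedged sketch below. The indicator values are invented placeholders – the real risk indicators are confidential – and the comments flag the safeguards the CJEU requires.

```python
# Hypothetical screening rule combining the Article 33 data
# categories (age range, sex, nationality, residence, education,
# occupation). All indicator values are invented placeholders.
def overstay_risk_rule(data: dict) -> bool:
    # Per the CJEU's case law, such criteria must be targeted,
    # specific, proportionate, and regularly reviewed, and must not
    # rely solely on protected characteristics such as age or sex.
    return (
        data.get("age_range") == "18-25"
        and data.get("education") == "none"
        and data.get("country_of_residence") in {"XX", "YY"}  # placeholder codes
    )
```

Even such a seemingly neutral combination illustrates the concern discussed below: residence or nationality indicators can operate as proxies for the protected grounds.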
Every refusal of travel authorisation in ETIAS will have to be notified to the applicant, explaining the reasons for the decision.Footnote 32 The notice email should include information on how the applicant may appeal this decision and details of the competent authorities and the relevant time limits.Footnote 33 The appeals will be handled by the Member State refusing the entry and hence in accordance with that state’s national law.Footnote 34 Individuals without an ETIAS authorisation will be refused boarding at international airports or will be stopped when trying to cross Schengen’s external borders by land. Accordingly, it is of the utmost importance that the system’s AI component embodied within the algorithmic risk assessments does not lead to disproportionate interferences with individuals’ fundamental rights, including the rights to privacy, data protection, and protection from discrimination. Equally, the ETIAS National Unit authorities must be sufficiently trained and equipped to ensure that refusal decisions are not based solely on the automated hit in the system.Footnote 35
15.2.1.2 Aerial Surveillance
In another vein, Frontex employs AI tools to improve situational awareness and early response in pre-frontier areas. This activity is essentially facilitated through the European Border Surveillance System (EUROSUR).Footnote 36 The system is a crucial information resource enabling Frontex to establish situational pictures of the land, sea, and air to identify potential illegal crossings and vessels in distress.Footnote 37 The system contains information collected through aerial surveillance (including by unmanned drones), automated vessel tracking and detection capabilities, software functionalities allowing complex calculations for detecting anomalies and predicting vessel positions, as well as precise weather and oceanographic forecasts enabled by the so-called EUROSUR fusion services deployed by Frontex.Footnote 38 With the help of the most advanced technology, Frontex is thus responsible for establishing the ‘European situational pictures’ and ‘specific situational pictures’ aimed at assisting the national coast guards of the EU and EU-associated states in the performance of border tasks.Footnote 39
The collection of information to be shared via EUROSUR increasingly relies on AI tools. Notably, in recent years, Frontex has significantly expanded its aerial surveillance arsenal.Footnote 40 This expansion required significant investments in advanced technology developed by private companies.Footnote 41 AI-powered drones or satellites enabling monitoring of the situation on land or sea do not directly pose risks to fundamental rights. However, reliance on such AI-powered surveillance tools gives the EU’s border authorities unmatched knowledge about the border situation, permitting the authorities to take actions that may put certain fundamental rights at risk, such as the right to asylum.
Furthermore, as the EU Fundamental Rights Agency states, the ongoing development of these technologies and the sharing of the gathered intelligence through EUROSUR is likely to involve algorithms used to track suspicious vessels and to extend to the processing of photographs and videos of ships carrying migrants taken by maritime surveillance aircraft.Footnote 42 In other words, the AI-powered information exchange will also directly implicate privacy and data protection rights. Therefore, the Frontex AI-powered border surveillance tools must also be subject to close legal scrutiny by independent supervisory authorities and potentially courts when risks to fundamental rights materialise.
The two examples of AI-powered information exchange frameworks examined here facilitate distinct types of border control conduct. On the one hand, the ETIAS automated risk assessments support decision-making by national authorities on whether or not to let someone into the Schengen area.Footnote 43 On the other hand, EUROSUR, accompanied by AI-powered land, sea, and air surveillance equipment, creates detailed situational pictures with clear instructions for actions to be taken in the context of joint operations between Frontex teams and national border guard authorities concerning identified vessels carrying individuals, primarily refugees in need of international protection. The two examples pose distinct risks to the fundamental rights of the individuals concerned.
15.2.2 The Diverse Nature of the Risks to Fundamental Rights
EU law requires that any use of AI, including by EU actors, must comply with fundamental rights enshrined in the EU Charter of Fundamental Rights and protected as general principles of EU law, irrespective of the area of AI use concerned.Footnote 44 This emerges from the requirements of the Union as a legal order based on the rule of law, which, under Article 2 TEU, rests, among other values, on respect for human dignity and human rights, including the rights of persons belonging to minorities.Footnote 45 With rapid technological progress, the use of AI as a system technologyFootnote 46 brings about an ever greater potential for misuse, which broadly impacts human dignity and various fundamental rights deeply connected to the inviolability of a human being.Footnote 47 This concern is broadly acknowledged within the international community and the EU,Footnote 48 which assert that the protection of human values, including fundamental freedoms, equality, fairness, the rule of law, social justice, data protection, and privacy, shall remain at the centre of the deployment of AI in modern democratic societies.
Preserving human dignity in the age of AI requires that individuals retain control over their lives, including over whether, when, and how they are subjected to the use of AI, rather than being subjected to it without their knowledge or informed consent. Putting humans and human dignity at the centre of the use of AI is necessary to ensure full respect for fundamental rights. It should thus be the starting point in every discussion on the development, deployment, and reliance on AI where human lives are at stake. However, as the Court of Justice has repeatedly held, fundamental rights ‘do not constitute unfettered prerogatives’.Footnote 49 They must be viewed in light of their function within society, and, if necessary, they may be limited as long as any interferences with the rights are duly justified.Footnote 50 Accordingly, the deployment of and reliance on AI shall be reviewed with the same set of considerations in mind: it must be legally authorised, respect the essence of specific rights, and be proportionate and necessary under the objectives of general interest recognised by the Union or the need to protect the rights and freedoms of others (Article 52(1) CFR).
The examples of AI use by Frontex examined above exhibit the breadth of cross-cutting fundamental rights concerns occurring in AI-powered border surveillance. Three key concerns can be highlighted: the risks to privacy and data protection (Articles 7 and 8 CFR), discrimination (Article 21 CFR), and risks to other substantive rights, such as the right to asylum (Article 18 CFR).
15.2.2.1 Privacy and Data Protection
Given that the functionality of AI relies on the wide availability of (personal) data, the discussions on the use of AI tend to revolve around the rights of privacy and personal data protection, enshrined in Articles 7 and 8 of the Charter. Although deeply interconnected, these rights are separate, embodying the more traditional right to privacy and the modern right to data protection.Footnote 51 They share a common goal of safeguarding individual autonomy and dignity by providing individuals with a personal space in which to freely develop their identities and thoughts, thus laying the foundation for exercising other fundamental rights, such as freedom of thought, expression, information, and association.Footnote 52
Privacy and data protection will generally be implicated in the examined uses of AI in border surveillance. On the one hand, data protection concerns arise extensively in the context of the large-scale processing of personal data for automated risk assessments. While it is far beyond the scope of this chapter to examine them all,Footnote 53 two risks are particularly worth mentioning. ARAs, such as those envisioned under the ETIAS, risk circumventing fundamental data protection principles, especially the purpose limitation principle and the related requirements of necessity and proportionality, as well as the prohibition on profiling, including profiling based on discriminatory grounds.Footnote 54 As explained above, the AI-powered ETIAS assessments will be based on a threefold comparison of personal data, including sensitive data, against existing EU databases, against risk criteria pre-defined by the ETIAS Central Unit operated by Frontex, and against the ETIAS Watchlist.
The EU systems’ interoperability will facilitate the comparisons against the EU large-scale databases.Footnote 55 Effectively, interoperabilityFootnote 56 will transform the border surveillance architecture by enabling far-reaching linking of personal information stored in silo-based alerts.Footnote 57 The interlinking of databases will blur the boundaries between law enforcement and intelligence services and between the tasks of the EU and national law enforcement and migration authorities, undermining data protection safeguards.Footnote 58 Specifically, in the ETIAS authorisation process, the purpose limitation principle as a critical data protection safeguard seems to disappear completely, for instance, due to the requirement that the ETIAS Central Unit, hosted by Frontex, shall have access to ‘any linked application files, as well as to all the hits triggered during automated processing’.Footnote 59
Furthermore, the comparison against screening criteria defined by Frontex will employ algorithms to evaluate the risk factor of a specific individual, akin to a practice of profiling, to facilitate decisions about individuals’ lives. According to Article 22 of the GDPR, such automated decisions are, in principle, prohibited unless accompanied by sufficient safeguards, including meaningful human intervention (Article 22(3)).Footnote 60 Since ETIAS assessments will lead to automatic authorisationsFootnote 61 and quasi-automated refusals of entry,Footnote 62 these decisions will have significant consequences for individuals. Automated risk assessments will not only interfere with the data protection right but also pose a distinct threat of discrimination while making access to remedies ever more difficult, as discussed below.
On the other hand, privacy concerns will feature, for instance, wherever surveillance measures are employed in public places. Aerial surveillance, such as with the help of aircraft or drones that record the situational pictures on land or at sea, is increasingly being used by Frontex and can interfere with individuals’ privacy by closely monitoring their location, behaviour, movements, and other aspects of personal activities without their knowledge or consent. The use of aerial surveillance technologies allows for gathering visual and sometimes audio information from above, which can capture private moments and sensitive information.Footnote 63 This intrusion can violate individuals’ right to privacy and data protection and potentially expose vulnerable persons to unwarranted conduct by surveillance authorities. It is of the utmost importance that whenever such technologies evolve into increasingly sophisticated people-monitoring tools, their deployment is limited to their original purposes, with strict legal safeguards in place and effective opportunities to seek redress in case of misconduct.
15.2.2.2 Risk of Discrimination
Article 21 of the Charter guarantees individuals protection against any form of discrimination based on the protected grounds, such as sex, race, colour, ethnic or social origin, genetic features, language, religion or belief, political or any other opinion, and others. Discrimination concerns are fundamental to the discussions on subjecting human lives to the uses of artificial intelligence. This is because the very purpose of any computational analysis through algorithmic data processing is to evaluate, categorise, or otherwise discover patterns in the analysed data, including personal data. However, human or machine bias may affect the algorithmic output in many ways.Footnote 64
Risk assessment systems, such as the ETIAS, rely on an algorithmic model, which processes large amounts of personal data to make decisions about individual lives. Pursuant to Article 14 of the ETIAS Regulation, any processing of personal data ‘shall not result in discrimination against third-country nationals’. However, these assessment algorithms are designed and trained on personal data, which includes the protected grounds under the right to non-discrimination, such as sex, age, place of birth, or nationality.Footnote 65 Therefore, to guarantee the right to non-discrimination, the criteria used to train the ETIAS algorithm to evaluate a certain risk behaviour of a specific individual need to be carefully designed to avoid perpetuating or even amplifying existing societal biases.Footnote 66 Indeed, discriminatory misconduct has been found to occur in other administrative contexts, such as in the infamous welfare allocation scandal in the Netherlands.Footnote 67 The quality and lawfulness of the data stored in the EU large-scale systems used for ETIAS comparisons, as well as the design of the risk criteria to be used in the ETIAS ARA-based authorisations, must cautiously balance the non-discrimination requirements of EU data protection rules with the requirements of the Charter right. Namely, the algorithm must be built to ensure that any AI-driven decision-making is not based solely on special categories of data reflecting the protected grounds under Article 21 of the Charter.Footnote 68 The safeguards, including meaningful transparency for the manual human review following an automated match, must be effective in practice. This includes effective enforcement of compliance with these safeguards, given that such AI tools are powerful in nudging the national competent authorities to decide in a certain way, which may lead to further violations.Footnote 69
Recently, the Court of Justice enumerated the essential guidelines for designing the risk criteria for algorithmic assessments in the security context. In the Ligue des droits humains judgment,Footnote 70 the Court interpreted the EU’s PNR Scheme as requiring that any comparison of passengers’ name records against pre-determined risk criteria demands that such criteria are defined in a way that keeps incorrect identifications to a minimum.Footnote 71 To achieve this aim, any match must be individually reviewed by non-automated means to highlight any false positives and identify discriminatory results.Footnote 72 Furthermore, such review will be effective only where it is clearly established as a requirement in the rules of conduct in the specific context, is well documented, and the officials are sufficiently trained, including to ‘give preference to the result of the individual review conducted by non-automated means’.Footnote 73
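A minimal sketch of that review discipline, assuming hypothetical function and field names, could look as follows; the point is that the documented, non-automated individual review always takes precedence over the automated match.

```python
# Sketch of the manual-review discipline required by Ligue des droits
# humains: every automated match is individually reviewed by a human,
# the review is documented, and its outcome prevails. All names are
# hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ara-review")


def final_decision(automated_hit: bool, human_review: str | None) -> str:
    """human_review is 'confirm' or 'override', recorded after an
    individual examination of the match by non-automated means."""
    if not automated_hit:
        return "authorise"
    if human_review is None:
        # No refusal may rest solely on the automated result.
        raise ValueError("automated hit requires individual human review")
    log.info("automated hit reviewed, outcome: %s", human_review)  # documented
    # Preference is given to the result of the non-automated review.
    return "refuse" if human_review == "confirm" else "authorise"
```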
This requirement of manual review is especially crucial since confronting direct or indirect discriminatory effects in AI-driven decision-making in legal proceedings is rather difficult for the affected individuals.Footnote 74 Indeed, in this respect, the Court also demands that the affected individuals are informed about the pre-determined assessment criteria so as to enable them to understand and defend themselves,Footnote 75 as discussed in the next section.
15.2.2.3 Risks to Other Substantive Rights
Beyond the rights to privacy, data protection, and non-discrimination, AI uses in border surveillance might directly or indirectly implicate other substantive fundamental rights. For instance, AI-powered border surveillance may lead to the detention of individuals presenting themselves at the land borders without a valid ETIAS authorisation, interfering with their right to liberty and security (Article 6 CFR). In another vein, AI-powered aerial surveillance enabling identification of migrants at sea might lead to wrongful actions being taken in Frontex-led operations, possibly leading to violations of individuals’ right to life (Article 2 CFR). Recent investigations by human rights organisations revealed evidence that information gathered from Frontex-operated aerial surveillance has been utilised in facilitating illegal pushbacks of refugees that may contravene their right to asylum (Article 18 CFR).Footnote 76
In conjunction with the use of drones, EU Member States have engaged in cooperative agreements with southern Mediterranean countries, such as Libya and Turkey, to intercept and return migrants, thereby externalising the responsibility for these actions.Footnote 77 This approach prevents other vessels from intervening or disembarking rescued individuals in supposedly safe harbours. EU Member States have justified these measures by claiming that search and rescue activities act as a ‘pull factor’ for migrants coming to EU countries. Frontex has often been viewed as a passive bystander in this context, given the division of responsibilities in the EU’s integrated border management (EIBM).Footnote 78 Under the EIBM, the final responsibility still lies with the Member States. Lately, this division has been criticised as it transpired from a classified EU reportFootnote 79 that Frontex knowingly contributed to illegal pushback practices.Footnote 80 These practices violate the right to asylum under Article 18 CFR and the cornerstone of international human rights law – the principle of non-refoulement.Footnote 81
As already mentioned above, with the continuing development of aerial surveillance, new risks to fundamental rights will emerge. These risks might arise from the processing of photographs and videos of vessels with migrants on board as well as the potential implications of the algorithms that will be used to track the vessels flagged as suspicious. All these types of AI-powered capacities of the EU’s Frontex-led border surveillance will expand the above risks to privacy, data protection, and discrimination and may continue to indirectly support unlawful practices, such as decisions on whether or not to save the lives of individuals in distress on the seas and those in need of international protection.Footnote 82
15.3 Exploring the Possibilities for Access to Remedies
In the EU legal order, when a person considers that EU actors have violated their rights, they have the right to seek an effective remedy (Article 47 CFR). The use of AI, however, brings considerable challenges to ensuring that AI-powered conduct is both non-arbitrary and sufficiently reviewable to fulfil the requirements of this constitutional guarantee, which constitutes ‘the essence of the rule of law’.Footnote 83 To assess the properties of the EU remedial architecture, it is therefore necessary to also consider the interrelated impacts of AI use on the exercise of procedural rights under the rights to good administration and effective judicial protection (Section 15.3.1). The discussion then turns to the construction of remedies based on the scope and interplay of the upcoming AI Act with the EU’s existing data protection framework (Section 15.3.2).
15.3.1 The Impact of AI Use on Individuals’ Access to Remedies
Article 41 CFR guarantees to everyone the right to good administration in decisions or other legal acts adopted by EU actors. Historically, the CJEU interpreted this right as a general principle of EU law,Footnote 84 which expanded its application wherever EU law applies. Under its umbrella,Footnote 85 the right to good administration enshrines rights and obligations which have at their core an enabling role for legal accountability in public conduct. On the one hand, the right demands that the authorities act fairly, impartially, and within a reasonable time. On the other hand, it obliges the authorities to present sufficient reasons substantiating their acts vis-à-vis the affected persons. In TUM, the Court formulated the interplay of good administration requirements as ‘the duty of the competent institution to examine carefully and impartially all the relevant aspects of the individual case’ prior to decision-making.Footnote 86 In Nölle, the Court further recognised this duty of care as an individual right arising from the clear, precise, and unconditional obligation in Article 41 CFR.Footnote 87 The authorities’ compliance with their duty of care obligations ensures that the affected person understands the evidentiary basis of the decision in order to decide whether or not to seek remedies against it. Since the duty of care constitutes an essential procedural requirement,Footnote 88 failure to comply with its obligations may lead to the annulment of the decision.Footnote 89
It is in its defence-enabling function that the right to good administration also becomes central to remedial possibilities against AI-driven EU conduct. In this context, compliance with the good administration requirements faces significant obstacles. The opacity of algorithmic risk assessments, exemplified in the ETIAS authorisation process, poses substantial challenges to the authorities’ ability to reason their decisions and ensure that these are based on factually correct, relevant, and complete information. As explained above, each of the 1.4 billion visa-exempt citizens who apply through the ETIAS website will be automatically screened for any suspicion of posing a serious threat to public security. This suspicion will be found to exist whenever automated processing of the traveller’s application results in a hit against pre-determined risk criteria in conjunction with an automatic comparison with millions of alerts stored in other EU information systems. If the process results in a hit, the competent authorities will have to manually review the data, ensuring the possibility to contradict the automated result, in view of their duty of care obligations. This requirement of human intervention ensures that each rejection of travel authorisation is not a decision based solely on automated processing of personal data (Article 22(3) GDPR).
However, to what extent will the manual review verify the correctness, relevance, and completeness of the information so as to uncover whether or not the hit was, for instance, due to discriminatory profiling by the pre-screening algorithm? This question does not permit an easy answer,Footnote 90 especially considering the context in which the manual reviews will take place: namely, under time pressure (the ETIAS rules estimate a response within a few days where manual verification is required), with officials lacking sufficient AI expertise, and facing other constraints, such as well-documented automation and confirmation biases in manual reviews,Footnote 91 coupled with limited access to the training data underpinning the risk assessment algorithm.
The diminished potential to meet the requirements of good administration in AI-powered decision-making will have direct implications for individuals’ access to effective remedies. In fact, in its jurisprudence, the CJEU often equates the requirements of reasoning under the right to good administration with the requirements of an effective remedy under Article 47 CFR.Footnote 92 Their interplay, according to the Court, lies in enabling the person to ascertain the reasons upon which the decision is based, ‘so as to make it possible for him or her to defend his or her rights in the best possible conditions’.Footnote 93 Individuals will only be able to defend themselves when it is indeed possible to understand the relevant decision and the process under which it was taken. Additionally, the Court recognises the significance of reasoning for the ability of judges and other supervisory authorities to exercise effective review. Indeed, these concerns are reflected in the regulation of AI – the AI Act, which brings about specific transparency requirements intended to facilitate AI users’ ability to exercise meaningful human control over AI-generated outputs.
Before turning to the AI Act, it remains to be stressed that judicial remedies are rather limited in the context of AI-powered conduct based on composite administrative procedures involving actors at EU and Member State levels.Footnote 94 The courts’ jurisdiction to review AI-driven decision-making is territorially limited and constrained by the narrow notion of what constitutes a reviewable act.Footnote 95 On the one hand, the former prevents individuals from challenging the conduct of EU actors directly before the Court of Justice when the responsibility lies with the national authorities, such as in the case of refusals of ETIAS applications or illegal pushbacks of migrants.Footnote 96 The staff of the competent ETIAS National Unit will need to manually review the automated refusals and hence exercise final discretion.Footnote 97 Also, where an ETIAS ARA results in a hit against the information entered by Europol, the ETIAS Regulation only establishes a consultation procedure between Europol and the responsible Member State.Footnote 98 Under this procedure, Europol must provide the responsible Member State with a ‘reasoned opinion on the application’.Footnote 99 Nonetheless, the final decision – hence final discretion – lies with the Member State concerned. Accordingly, complaints against potential discriminatory effects of the ETIAS algorithm in these circumstances might only be raised before the courts of that State, without the possibility of uncovering the factual basis of the information supplied by Europol and relied on in the refusal decision.
On the other hand, these potential discriminatory effects of the underlying risk criteria might not be deemed to produce legal effects sufficient to trigger justiciability of the ARA. Indeed, pursuant to the requirement of ‘direct and individual concern’, the impact of the discriminatory effect might not occur for each person whose application has been refused by the ETIAS ARA.Footnote 100 Accordingly, as argued elsewhere,Footnote 101 construing reviewability in a similar context needs to reflect the underlying automation and output biases if we expect EU law to guarantee sufficient legal protection to the affected individuals. Yet, for now, EU law does not recognise the impact of the screening algorithm on the final decisions taken by the Member States.Footnote 102 As such, there seems to be no possibility of a direct judicial remedy against the Frontex-based ETIAS Central Unit for its role in the development and deployment of the ETIAS risk criteria algorithm.
15.3.2 Unwrapping the Remedial Possibilities under the Upcoming AI Act
In 2018, EU legislators embarked on the process of designing specific rules governing the development and use of artificial intelligence systems in the EU that would ensure, inter alia, full respect for fundamental rights.Footnote 103 The efforts culminated in the proposal for a horizontal regulation of AI – the AI Act.Footnote 104 Central to this is the EU’s self-proclaimed ‘human-centric approach to AI’, which shall ‘strive to ensure that human values are central to the way in which AI systems are developed, deployed, used and monitored, by ensuring respect for fundamental rights’.Footnote 105
That the human-centric Act will mitigate the above-identified risks to fundamental rights should, however, not be taken for granted. With its legal basis set in Article 114 TFEU,Footnote 106 the AI Act will first and foremost ensure the safe operation of AI systems on the EU’s internal market (Article 1). The Act’s pledge to guarantee respect for fundamental rights is arguably manifested in its ‘risk-based approach’. On the most basic level, the Act differentiates between ‘prohibitions of certain [AI] practices’ and ‘specific requirements for high-risk AI systems’ (Article 1(2)). The former include systems using subliminal techniques beyond a person’s consciousness (manipulation), uses of AI that may exploit vulnerable individuals (Article 5(a) and (b)), and real-time biometric identification systems in publicly accessible spaces for law enforcement purposes, except where duly authorised (Article 5(c)).Footnote 107 According to the latest text, the list of high-risk systems in Annex III should be amended to consider the impact of the AI use on the relevant action or decision to be taken.Footnote 108 Ultimately, the concept of risk to fundamental rights underpinning the AI Act’s approach is defined as ‘the combination of the probability of an occurrence of harm and the severity of that harm’.Footnote 109
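As a toy rendering of that definition only, risk can be read as a function of probability and severity, as in the sketch below. The numeric scales and threshold are invented for illustration; the Act itself designates high-risk systems by use case (Annex III), not by a computed score.

```python
# Toy rendering of the AI Act's definition of risk as 'the combination
# of the probability of an occurrence of harm and the severity of that
# harm'. Scales and threshold are invented; the Act designates
# high-risk systems by use case, not by a computed score.
def risk_score(probability: float, severity: float) -> float:
    """probability in [0, 1]; severity on an assumed 0-10 scale."""
    return probability * severity


def classify(score: float, threshold: float = 5.0) -> str:
    return "high-risk" if score >= threshold else "lower-risk"
```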
Although the final elaboration of the prohibited and high-risk AI uses is far from settled, it will determine the extent to which the AI Act can provide some form of fundamental rights protection also in the context of AI uses by EU actors. Yet, as we await clarity on the AI Act’s final shape, one aspect is clear: the AI Act will need to be applied in conjunction with existing EU law, including the rules on remedies and existing data protection rules, wherever the system relies on, among others, the processing of personal data. Accordingly, the following discussion highlights two key aspects that will be determinative for the protection of fundamental rights from the risks posed by the EU’s AI uses: first, the scope of application of the AI Act to the EU actors’ use of AI and, second, the interplay and main discrepancies between the AI Act’s substantive rules and the data protection rules.
15.3.2.1 The Scope of Application of the AI Act to Border Surveillance
The Act emerges as the EU’s effort to establish general rules on the development, authorisation, and deployment of AI systems in the Union. Accordingly, its provisions will need to be complied with in their entirety. Pursuant to Article 2 of the AI Act, the rules apply both to ‘providers’ placing on the market or putting into service AI systems, irrespective of their location, and to ‘deployers’ of AI systems established in the Union.Footnote 110 The AI Act will also apply to EU actors when acting as a provider or deployer of an AI system. There are some exceptions, however. For instance, the AI Act will not apply to AI systems developed or used exclusively for military purposes or for purely research purposes.Footnote 111
More worryingly, the initial Commission Proposal excluded from its scope the AI systems that are ‘components of the large-scale IT systems’, such as the SIS, EES, or ETIAS, placed on the market or put into service before the entry into force of the regulation.Footnote 112 This has been revised to require that these systems comply with the AI Act by 31 December 2030.Footnote 113 Nevertheless, both solutions leave the expansive use of AI systems in the EU’s border surveillance without immediate rules that could address the above-identified risks of AI uses. As the EDPS and the European Data Protection Board (EDPB) highlighted in their joint opinion, such exclusion ‘risks circumventing the safeguards enshrined in the AI Act’.Footnote 114 It also undermines the broader exercise of powers by the competent supervisory authorities, such as the EDPS, when presented with complaints regarding AI uses and claims of violations of the data protection rules.
15.3.2.2 The Interplay between the AI Act and the EU Data Protection Rules
The new safeguards introduced under the AI Act might only contribute to enhanced fundamental rights protection if they are crafted carefully and their interaction with the existing data protection rules is properly considered.Footnote 115 However, a number of discrepancies appear between the two legal frameworks, which may pose difficulties for the EDPS, which acts as the first-instance avenue for addressing potential fundamental rights violations by EU actors. Three aspects of this interplay specifically affect individuals’ access to remedies.
First, the EU data protection framework is far from homogeneous. The framework essentially consists of the GDPR, Regulation (EU) 2018/1725 (the ‘EU DPR’) governing the processing of personal data by Union institutions, bodies, offices, and agencies, and the so-called Law Enforcement Directive (EU) 2016/680 (the ‘LED’) governing the processing of personal data by national law enforcement authorities.Footnote 116 While it is the EU DPR that governs the use of AI by the EU actors examined in this chapter, we also see that the final legal responsibility in the EU’s integrated border control rests with the national border authorities or law enforcement authorities.Footnote 117 The data processing activities of the EU agencies, such as Frontex, are furthermore governed by the agencies’ own founding regulations. These specialised legal instruments thus embody both the exceptions to the EU DPR and the lex specialis rules of the LED.Footnote 118 Accordingly, while the EU DPR, in conjunction with the Frontex and ETIAS Regulations, will apply to the AI-powered ETIAS system and the development of its algorithm by eu-LISA and Frontex in their distinct capacities, the LED and/or GDPR will govern the ETIAS searches and reliance on the generated output by the national border and law enforcement authorities.
Second, this fragmentation is problematic for access to remedies against potential violations of fundamental rights in the AI-driven conduct of EU actors, considering the remedial system under the EU data protection framework. The latter is essentially a twofold system. Individuals may lodge a complaint with the independent supervisory authority of the Member State or with the EDPS.Footnote 119 Furthermore, affected individuals enjoy the right to an effective judicial remedy against a decision of that supervisory authority or the EDPS, or against a decision of the controller or processor.Footnote 120 The GDPR also provides for the possibility of representative action by civil society organisations on behalf of the data subjects.Footnote 121 Research, however, shows that direct remedies are often not utilised, especially in the security context, where the collection of personal data within the alerts entered in the EU information systems is rarely known to the data subjects.Footnote 122
Instead, a person who is refused travel authorisation will in most cases only be able to appeal the final refusal decision before the supervisory authority of the refusing Member State. In this context, they will, for instance, be able to file a complaint against the Member State authority for non-compliance with the obligation to manually review the automated hit, pursuant to the requirements of Article 22 GDPR and Article 11 LED. The Member State authority will, however, lack jurisdiction to review the development and deployment of the ETIAS risk algorithm. Accordingly, the affected person will have to lodge a separate complaint with the EDPS, which is competent to review the acts of the ETIAS Central Unit based in the Frontex agency.
Lastly, in their complaint to the EDPS, the affected person will only be able to invoke their rights as a data subject.Footnote 123 The list of data subjects’ rights develops the substance of the autonomous fundamental right to personal data protection (Article 8 CFR). Via the remedial avenues under EU data protection law, individuals might nevertheless be able to bring claims concerning potential violations of other fundamental rights, including, for instance, non-discrimination or the right to an effective remedy. In other words, where the EU data protection rules provide specific safeguards regarding non-discrimination, such as in the context of processing special categories of data,Footnote 124 they integrate many of the Charter rights relevant to the digital context.Footnote 125 Overall, integrating the Charter’s rights in the secondary data subject rights could, however, lead to an inferior legal protection. This is because data protection law guarantees data subjects’ rights with a substantial number of exceptions and limitations, which is evident from the long list of exceptions to the general prohibition on processing special categories of personal data in Article 9(2) GDPR. Such a priori exceptions might not be subject to the same proportionality and necessity test as permissible limits to fundamental rights are under Article 52(1) CFR.Footnote 126 Although the Court of Justice does apply a strict review of proportionality and necessity in similar high-risk AI uses, as demonstrated in the PNR case, where it subjected the safeguards under the PNR scheme to very strict scrutiny, the same may not be the case for complaints addressed by national supervisory authorities. A strict proportionality review is especially necessary given that the affected persons might often not have sufficient possibilities to bring claims of violations of the Charter’s rights before the courts, since, as explained above, enforcement of data-specific rights primarily rests with independent data protection authorities (DPAs). The DPAs’ remedial competence, however, differs substantially from judicial competence. Yet, given their primary role in the digital age, these authorities increasingly perform quasi-judicial review of claims implicating Charter rights, beyond the requirements guaranteed under the EU data protection framework.
The last and key concern arising from the interplay between the AI Act and existing data protection remedies concerns precisely the designation of, and cooperation among, the variety of supervisory authorities with competences over different parts of an AI-driven conduct that may lead to potential fundamental rights violations. As a product safety regulation, the Commission’s original AI Act Proposal did not include any rights and remedies for the affected persons in relation to the uses of AI systems. Critics found this lacuna highly problematic,Footnote 127 given that the Act’s risk-based approach was envisioned to ensure full respect of fundamental rights and freedoms. Without a right to complain against the AI risks, individuals may be able to subsume their claims under their rights as data subjects. This would, however, prove to be only a partial remedy against the diverse and serious risks posed by the use of AI systems, as demonstrated in this chapter. It is therefore essential that individuals have meaningful access to redress mechanisms. The European Parliament proposed to fill this vacuum with the introduction of Chapter 3a to the AI Act Proposal.Footnote 128 The effort culminated in the addition of Section 4 in the final version of the Act, which, however, provides only a limited consolidation of the calls for enhancing access to justice against the risks of AI. Namely, the remedies under the AI Act are essentially twofold: (a) a product-related complaint mechanism before the designated market surveillance authorities; and (b) the right to an explanation of individual decision-making where a decision is made on the basis of a high-risk AI output.
Furthermore, the effective enforcement of remedies under the AI Act will be contingent on more substantive discrepancies between the two legal frameworks, which are, however, beyond the scope of this chapter.Footnote 129 For instance, such discrepancies surface with respect to the definitions. The original proposal lacked any recognition of the position of the affected private persons under the AI Act legal framework. Notably, the original notion of the ‘user’ within the AI Act Proposal has been changed to denote the ‘deployer’, meaning any natural or legal person, public authority, agency, or other body using an AI system under its authority, except where the AI system is used in the course of a personal non-professional activity.Footnote 130 The AI ‘users’, now called ‘deployers’, thus correspond to the data controllers or processors in the GDPR sense.Footnote 131 In another vein, discrepancies arise from the formulations of the scope of various corresponding rules within the two legal frameworks. For instance, pursuant to recital 63 of the AI Act Proposal, the classification of an AI system as high-risk, and hence the permission of its use, does not automatically mean that the use of that system is lawful under ‘other Union law’, including data protection rules. The Proposal further clarifies that ‘any such use should continue to occur solely in accordance with the applicable requirements resulting from the Charter and from the applicable acts of secondary Union law and national law’. Yet the AI Act provides its own legal basis, for instance, for the processing of special categories of personal data, which shall, however, not contradict the general prohibition under the data protection rules.Footnote 132 Assuming that these discrepancies will eventually be resolved, access to effective remedies against the AI-driven conduct of EU actors, such as Frontex, requires specific attention to the role and powers of the EDPS. In order to effectively safeguard the fundamental rights of individuals affected by the EU’s uses of AI, the EDPS will need to adapt its role and powers by carefully crafting the requirements under EU data protection rules in light of their potential interplay with the AI Act requirements.
15.4 Double-hatting the EDPS
The AI Act envisions a central role for the EDPS in overseeing AI uses by EU actors, including Frontex. To appraise the potential of this role, this section explores how the AI Act, in conjunction with the EU DPR, construes the EDPS’ competence and whether it does so with sufficient clarity so as to contribute to mitigating the above-identified risks to fundamental rights.
As explained above, under the EU DPR, the EDPS is responsible for ensuring that any processing of personal data by EU institutions, bodies, offices, and agencies respects the fundamental rights and freedoms of natural persons (Article 52(2)). To that end, the Supervisor, among other tasks, receives and handles complaints from data subjects (Article 63 EU DPR). Through this redress mechanism, individuals are given the possibility to take control over their data and seek remedies for any breaches of their rights as data subjects.Footnote 133 However, the mere existence of the possibility to complain about potential breaches of data subjects’ rights under the EU DPR does not necessarily guarantee the EU actors’ compliance with fundamental rights more broadly. Indeed, to that end, pursuant to Article 64 EU DPR, individuals also enjoy the right to an effective judicial remedy before the Court of Justice of the EU, including through direct claims for damages and appeals against the decisions of the EDPS.Footnote 134
Under the AI Act, the EDPS’ role in remedying potential violations of fundamental rights is, however, less clear. The definition of ‘national competent authority’ under Article 3(3) specifies that, for AI development and uses by EU actors, ‘references to national competent authorities or market surveillance authorities in this Regulation shall be construed as references to the European Data Protection Supervisor’. For the latter, Article 74(9) further specifies that wherever the AI Act applies to EU actors, the EDPS shall be the designated supervisory authority, ‘except in relation to the [CJEU] acting in its judicial capacity’. This, however, opens up the question: what are the powers and competences of the EDPS as a market surveillance authority in respect of the AI Act as a product safety regulation, and what does that mean for individuals’ access to effective remedies?
Under the AI Act, the EDPS will assume diverse tasks with respect to the enforcement of the AI Act’s obligations. There is at the moment, however, no clear procedural possibility to lodge complaints with the EDPS under the AI Act. In contrast to the envisioned right to lodge a complaint with the national market surveillance authority under Article 85 AI Act, the AI Act does not afford the same right to lodge a complaint with the EDPS, akin to Article 63 EU DPR. Nor, therefore, does the AI Act grant the right to an effective judicial remedy against the decisions of the EDPS concerning the compliance of EU actors’ deployment of AI systems with the requirements of the AI Act, akin to Article 64 EU DPR (and in light of the requirements of Article 47 of the Charter). Without direct procedural access to remedies, individuals will thus have to rely on their rights as data subjects under the EU DPR in seeking protection against potential violations by the EU actors’ use of AI, despite the existence of clear obligations falling on the latter.
Instead, under the AI Act framework, the EDPS will assume a threefold role: (a) as ‘a market surveillance authority’ (Article 74(9)); (b) as an ‘observer’ within the new European AI Board (Article 65(2)); and (c) as the designer of regulatory sandboxes for EU actors (Article 57(3)).
First, in its capacity as a market surveillance authority, the EDPS will undertake conformity assessments (a form of ex ante compliance mechanism) for the EU actors’ uses of AI and notify the outcomes of these assessments to the Commission.Footnote 135 While on the face of it a clear task, the AI Act also requires that deployers of high-risk AI systems that are bodies governed by public law, or private entities providing public services, as well as deployers of certain other high-risk AI systems, such as banking or insurance entities, carry out a fundamental rights impact assessment prior to putting the system into use (Article 27), without clearly encompassing this within the mandate of the EDPS. As a whole, the conformity assessment procedure with respect to the EU actors’ development and deployment of AI is rather underspecified. This perhaps calls into question the EU legislators’ choice of a single regulatory instrument as opposed to, for instance, a separate, more targeted regulation governing the EU actors’ obligations, akin to the EU DPR.
Second, the EDPS will play a further role within the newly established European Artificial Intelligence Board (hereafter the AI Board).Footnote 136 The AI Board is to advise and assist the Commission and the Member States in order to facilitate the consistent and effective application of the AI Act (Article 66), including by facilitating coordination and harmonisation of the practices of national competent authorities; collecting and sharing technical and regulatory expertise and best practices; issuing recommendations and written opinions on any relevant matters related to the implementation of the AI Act; and performing other advisory and coordinating tasks aimed at improving the implementation of the AI Act as a whole.Footnote 137 Perhaps akin to the advisory role of the European Data Protection Board,Footnote 138 this role will not by itself constitute a remedial avenue for individuals to seek an effective review of EU actors’ uses of AI systems, as the Board will not possess any direct enforcement powers.Footnote 139
Lastly, the EDPS will also participate in the organisation of regulatory sandboxes for the development, testing, and validation of innovative AI systems at the Union level before they are deployed. Regulatory sandboxes have emerged as an experimental regulatory method aimed at addressing the uncertainty of the AI industry and its associated knowledge gaps, with the intention of enabling smaller companies to prepare for the necessary conformity assessments.Footnote 140 Pursuant to Article 57, the sandboxes shall provide a ‘controlled environment that fosters innovation and facilitates the development, training, testing and validation of innovative AI systems for a limited time before their being placed on the market or put into service pursuant to a specific sandbox plan agreed between the providers or prospective providers and the competent authority’. Research in the field of AI and ethics has compared the reliance on regulatory sandboxes to ‘nurturing moral imaginations’.Footnote 141 Pursuant to Article 57(11), the AI regulatory sandboxes ‘shall not affect the supervisory and corrective powers of the competent authorities’, and ‘any significant risks to fundamental rights, democracy and rule of law, health and safety or the environment identified during the development and testing of such AI systems shall result in immediate and adequate mitigation’. The EDPS will be tasked with organising such sandboxes at the EU level (Article 57(3)). In this context, the EDPS shall provide guidance and supervision within the sandbox with respect to identifying risks, in particular to fundamental rights, and to demonstrating mitigation measures and their effectiveness in addressing the identified risks. A relative novelty in EU law, regulatory sandboxes emerge as a form of ‘experimental legal regime’, which, according to Ranchordas, can ‘waive [or] modify national regulatory requirements (or implementation)’ as a way of offering ‘safe testbeds for innovative products and services without putting the whole system at risk’.Footnote 142 Given that they remain a relatively new legal phenomenon, there is, however, a lack of empirical knowledge about their potential usefulness in improving fundamental rights protection.
In view of the many difficulties in lodging complaints in the digital context,Footnote 143 the fundamental rights–protecting role of the EDPS is much wider under the EU DPR rules.Footnote 144 Beyond ensuring the EU actors’ compliance with data subjects’ rights, the EDPS’ role entails promoting public awareness, conducting investigations, advising the EU institutions, adopting soft-law guidelines and clarifying the data protection requirements, authorising contractual clauses, and many other tasks. In this respect, the supervisory role of the EDPS is likely to continue in its existing fashion with respect to the EU’s uses of AI applications, now with direct reference to the AI-specific requirements enumerated under the AI Act in tandem with the data protection requirements. Indeed, the current EDPS has already taken a firm stance on the AI-driven data processing activities of the EU agencies, including Frontex and Europol.Footnote 145
Navigating the landscape of exceptions and derogations with respect to data uses, especially in the law enforcement context, will, however, continue to undermine the EU’s efforts to ensure a human-centred use and deployment of AI with full respect for fundamental rights. In light of the ongoing technological empowerment of the EU agencies, as exemplified by the expanding role of Frontex,Footnote 146 more structural adjustments of the EDPS’ powers and tasks vis-à-vis AI-powered EU conduct, rather than the mere introduction of further rights and obligations, might be necessary for the effective enforcement of rights under the fragmented legal frameworks.Footnote 147 For now, direct protection of fundamental rights against the uses of AI by EU actors will remain primarily within the power of the EDPS under the remedial avenues stemming from the EU DPR. Accordingly, the way the Supervisor applies the new AI-specific rules in conjunction with individuals’ rights as data subjects will be crucial to furthering the protection of fundamental rights in the AI-driven conduct of EU actors such as Frontex.
15.5 Conclusion
Illustrated by the case of EU agencies like Frontex, which have spearheaded the development and deployment of AI for border surveillance purposes, the chapter assessed the risks these uses pose to fundamental rights and the affected persons’ possibilities to remedy likely violations. By examining two examples of AI uses by Frontex – automated risk assessments under the new ETIAS system and AI-powered aerial surveillance for border response – the chapter demonstrated diverse risks to fundamental rights, including privacy, personal data protection, non-discrimination, and the right to asylum.
In light of these concerns, the chapter highlighted the challenges for access to remedies against AI uses by EU actors, including with respect to the procedural rights to good administration and to an effective judicial remedy, as well as within the current remedial set-up under the emerging framework for regulating AI – the AI Act. Examining the limits of the AI Act in determining a concrete role for the EDPS, the chapter called for a further restructuring of the EDPS’ powers with respect to fundamental rights protection in view of its combined mandate under the EU’s data protection and AI frameworks. With the identified gaps still in place, including the lack of a direct remedy against the EU actors’ use of AI under the AI Act, the EDPS will play a central role in guaranteeing the legal protection of fundamental rights against the emerging AI-powered conduct. To undertake this role effectively, the gaps identified in this chapter will need to be carefully addressed.