Published online by Cambridge University Press: 04 April 2024
ABSTRACT
The ubiquitous deployment of artificial intelligence (AI) technologies affects an array of human rights, raising concerns around issues of discrimination, privacy, freedom of expression, information and data protection. However, just as regulators and policy-makers call for technological design to respect existing human rights, others debate whether human rights are robust enough to counter new challenges posed by AI. This contribution takes a legal-philosophical approach, engaging the intersections of human rights law and theory, philosophy of technology, and law and technology in order to examine whether the theory and practice of the human rights law framework can address the systemic distortions afforded by AI systems. It identifies six systemic distortions, namely intangibility, ephemerality, modulation, the comparison deficit, the utilitarian logic, and, finally, the objectification of dividualised identities. These map onto (and challenge), respectively, the implicit observability, categorical legal groups, control, causality and foreseeability, the deontological motivation of human rights law, and the subjective sociality of individuals. These systemic distortions pose both a procedural challenge for individuals seeking to mount human rights claims, and a normative challenge to the formative aims of human rights law.
The contribution finds that these challenges pose a non-trivial threat to the human rights law framework, affecting its practical sustainability and relevance in the age of AI. However, a silver lining can be found within the normative foundation of the human rights framework itself, through the reinterpretation of human dignity as human vulnerability.
INTRODUCTION
The ubiquitous deployment of artificial intelligence (AI) technologies affects an array of human rights, raising concerns around issues of discrimination, privacy, freedom of expression, information and data protection. Issues of fairness, transparency and accountability have also been raised, encompassing wider ethical concerns in relation to the use of AI. On the one hand, the advent of technological innovations, including AI, has been credited with bringing immense progress to science and revolutionising diverse fields, such as health care, transportation and education. AI is also increasingly being deployed within public administration, determining eligibility for social welfare, unemployment and health care benefits, as well as access to education. On the other hand, the deployment of AI has threatened the sufficiency of the traditional tools of accountability, including those of the law, and of human rights law in particular.