The face, as a site of interpersonal interaction often linked to identity, has become a prime target for quantification, datafication and automated assessment by algorithmic systems marketed as AI. Facial recognition has seen a massive rise in use across high-stakes contexts such as policing and education, where the systems are frequently used to assess trust and its various proxies. These technologies update historical practices like physiognomy, which assigned values for personal qualities based on the shape and measurements of people's faces (or brains or other body parts).
Parallels between AI and physiognomy are ‘both disturbing and instructive’ (Stark and Hutson, 2022: 954). Both attempt to read into the body characteristics that are deemed innate, and therefore inescapable. Measuring for these attributes performs them as roles and identities for specific individuals and groups. They include characteristics like ‘employability, educability, attentiveness, criminality, emotional stability, sexuality, compatibility, and trustworthiness’ (2022: 954). Trustworthiness was a key target for physiognomy, which attempted to embed social biases within the narratives of a scientific discipline, performing legitimacy for discrimination. The parallels also include the framing of physiognomy as progressive, which aligns with the innovation narratives that hype up facial recognition ‘solutions’.
However, despite the sheer number of tools and academic papers being produced, there are serious questions about significant flaws in their methodology and assumptions. These include a frequent failure to define what concepts like trustworthiness actually are, part of a chain of proxy conflations of terms and ideas (Spanton and Guest, 2022). This critique highlights the ethical risks inherent even in ‘demonstration’ type academic papers, aligning with what Abeba Birhane describes as ‘cheap AI’ rooted in ‘prejudiced assumptions masquerading as objective enquiry’ (2021b: 45). Faulty demonstrations of the possibilities of machine learning technology are easily picked up and misinterpreted by the media to feed the myths of AI. But problematic papers also enable the same biased and decontextualized judgements to be transferred to policing, insurance, education and other areas where quantifying trust is directly, materially harmful.
Saving faces
The way facial recognition technologies are developed and deployed perpetuates specific injustices. If they work, they purposefully entrench power. When they do not work, which is often, then ‘the harm inflicted by these products is a direct consequence of shoddy craftsmanship, unacknowledged technical limits, and poor design’ (Raji, 2021: 57).