Humanity’s increasing reliance on AI and robotics is driven by compelling narratives of efficiency in which the human is a poor substitute for the extraordinary computational power of machine learning, the creative competences of generative AI, and the speed, accuracy and consistency of automation across so many spheres of human activity. Indeed, AI is increasingly becoming the core technological foundation of many contemporary societies. Most thinking on how to manage the downside risks to humanity of this seismic societal shift is framed in terms of direct, fault-based responsibility, as in the innovative EU AI Act, by far the most comprehensive political attempt to locate (or deter) those directly responsible for AI-generated harm. I argue that while such approaches are vital for combating injustice exacerbated by AI and robotics, too little thought goes into political approaches to the structural dynamics of AI’s impact on society. By way of example, I examine the UK ‘pro-innovation’ approach to AI governance and explore how it fails to address the structural injustices inherent in increasing AI usage.
What will our reproductive habits look like in the future, and why does it matter? Part of the answer to this question lies in the use of in vitro pre-implantation genetic technologies (PGTs). PGTs were originally designed to screen for a range of genetic conditions such as sickle cell disease or Huntington’s disease, but new markets are set to emerge in which prospective parents will be promised the opportunity to select the personality characteristics of their unborn children: this is what the political theorist Robert Nozick (1974) thought would result in a ‘genetic supermarket’. Unlike the case of AI, there is a long-standing tradition of regulating reproductive technology. The UK’s Human Fertilisation and Embryology Authority (HFEA) is a regulatory public body created in 1990 in light of a report authored by the philosopher Mary Warnock. It is widely regarded internationally as the gold standard of regulators and was the first to govern technologies as complex as gene-editing and cloning. However, though we might see some elements of promise in Warnock’s approach as a wider model of technology governance, I consider what I see as the general demise of regulatory landscapes, in line with the dominant US-based ‘state capture’ school of thought.