Personalisation can provide notable efficiencies and economic gains, but it can also produce unintended negative effects. Most accounts focus on potential negative impacts on individuals or categories of individuals rather than on the broader consequences or ripple effects of incorporating AI into existing social systems. This chapter explores such issues via an ‘AI ethics’ perspective, the dominant overarching discourse for ‘regulating’ AI for the good of society, commonly characterised as self-policing of AI system use by private corporate actors, sanctioned by government. The discussion critiques that self-policing by locating AI ethics within established traditions of corporate social responsibility and institutional ethical frameworks, whose shortcomings translate into a systemic inability to be truly Other-regarding. It shows, referencing the recent EU AI ethics initiative, that even well-intentioned initiatives may miss their target by assuming the desirability of AI applications regardless of their wider impacts. This approach simply tinkers with system details of minor consequence compared to the broader impacts of AI within social systems, captured by the idea of ‘algorithmic assemblage’.
Credit-score models provide one of the many contexts through which the big data micro-segmentation or ‘personalisation’ phenomenon can be analysed and critiqued. This chapter approaches the issue through the lens of anti-discrimination law, and in particular the concept of indirect discrimination. The argument presented is that, despite its initial promise based on its focus on impact, ‘indirect discrimination’ is after all unlikely to deliver a mechanism to intervene and curb the excesses of the personalised service model. The reason for its failure does not lie in its inherent weaknesses but rather in the ‘shortcomings’ (entrenched biases) of empirical reality itself, which any ‘accurate’ (or useful) statistical analysis cannot but reflect. Still, the anti-discrimination context offers insights that are valuable beyond its own disciplinary boundaries. For example, the opportunities for oversight and review based on correlations within outputs rather than analysis of inputs are fundamentally at odds with the current trend that demands greater transparency of AI, but may after all be more practical and realistic considering the ‘natural’ opacity of learning algorithms and businesses’ ‘natural’ secrecy. The credit risk score context also provides a low-key yet powerful illustration of the oppressive potential of a world in which individual behaviour from ANY sphere or domain may be used for ANY purpose; where a bank, insurance company, employer, health care provider, or indeed any government authority can tap into our social DNA to pre-judge us, should it be considered appropriate and necessary for their manifold objectives.
This is the introductory chapter to the edited collection 'Data-Driven Personalisation in Markets, Politics and Law' (Cambridge University Press, 2021), which explores the emergent pervasive phenomenon of algorithmic prediction of human preferences, responses and likely behaviours in numerous social domains – ranging from personalised advertising and political microtargeting to precision medicine, personalised pricing and predictive policing and sentencing. This chapter reflects on such human-focused use of predictive technology, first, by situating it within a general framework of profiling and defending data-driven individual and group profiling against some critiques of stereotyping, on the basis that our cognition of the external environment is necessarily reliant on relevant abstractions or non-universal generalisations. The second set of reflections centres on the philosophical tradition of empiricism as a basis of knowledge or truth production, and uses this tradition to critique data-driven profiling and personalisation practices in their numerous manifestations.
An online seller or platform is technically able to offer every consumer a different price for the same product, based on the information it holds about that customer. Such online price discrimination exacerbates concerns regarding the fairness and morality of price discrimination, and the possible need for regulation. In this chapter, we discuss the underlying basis of price discrimination in economic theory, and its popular perception. Our surveys show that consumers are critical and suspicious of online price discrimination. A majority consider it unacceptable and unfair, and are in favour of a ban. When stores apply online price discrimination, most consumers think they should be informed about it. We argue that the General Data Protection Regulation (GDPR) applies to the most controversial forms of online price discrimination, and not only requires companies to disclose their use of price discrimination, but also requires companies to ask customers for their prior consent. Industry practice, however, does not show any adoption of these two principles.
This conclusion weaves together the wide-ranging contributions of this volume by considering data-driven personalisation as an internally self-sustaining (autopoietic) system. It observes that, like other self-sufficient social systems, personalisation incorporates and processes new data and thereby redefines itself. In doing so it redefines the persons who participate in it, transforming them into ‘digital’ components of this new system, as well as influencing social arrangements more broadly. The control that elite corporate and governmental entities have over systems of personalisation – which have been diversely described by contributors to this volume – reveals challenges in the taming of personalisation, specifically the limits of the traditional means by which free persons address new phenomena: through consent as individuals, and through democratic process collectively.
The development of data-driven personalisation in medicine, as exemplified by the ‘P4’ approach formulated by Leroy Hood and colleagues, may be viewed as consistent with a particular understanding of law’s role in respect of health, and with the dominant ethical principle of autonomy which underpins this. This chapter maps the direction of travel of health law in the UK in recent times against the evolution of personalised medicine. It notes, however, that this offers merely a partial account of the function of law in this context, as well as of the reach of this sub-discipline as a scholarly endeavour.
In the European Union, regulatory analysis of artificial intelligence in general, and of personalisation in particular, often starts with data protection law, specifically the General Data Protection Regulation (GDPR). This is unsurprising, given that training data often contain personal data and that the output of these systems can also take the form of personal data. There are, however, limits to data protection’s ability to function as a general AI law. This chapter highlights the importance of being realistic about the GDPR’s opportunities and limitations in this respect. It examines the application of certain elements of the GDPR to data-driven personalisation and highlights that, whereas the Regulation indeed applies to the processing of personal data, it would be erroneous to frame it as a general ‘AI law’ capable of addressing all normative concerns around personalisation.
Data-driven personalisation is emerging as a central force in political communication. Political micro-targeting has the potential to enhance political engagement and to make it easier and more effective for political parties and movements to communicate with potential voters and supporters. However, the collection and use of personal information about voters also affects their privacy rights and can interfere with the personal autonomy that is essential for democracy.
This chapter argues that the rise of data-driven communications requires a re-evaluation of the role of information privacy in political campaigns. Data protection laws have an important role to play in limiting the processing of personal data and requiring data practices to be designed in a manner that balances privacy and competing rights. In our view, there is no longer a good case for the retention in data protection laws of exemptions for political parties or actors, or of overly broad provisions permitting data processing in political contexts.
Subjecting political parties and digital intermediaries to the general requirements of fair, transparent and lawful processing would go some way towards moderating political micro-targeting. The imposition of any privacy-based restrictions on political actors would enhance voter privacy, engender more trust in political communication and, ultimately, protect democratic discourse.
Drawing upon Foucauldian ideas, this chapter explores how the ‘datafication’ of modern life shifts the modes of power acting upon the individual and social body. Through a brief exploration of three banal everyday social practices (driving, health, gambling), it argues that the construction of the data-self marks an emergent algorithmic governmentality centred simultaneously upon intimate knowledge of the individual (subjectivities) and of the population. The intersection of technology, data and subjectivation reproduces a ‘neoliberal subject’ – one closely monitored and policed to freely perform ‘correct’ forms of action or behaviour, and one increasingly governed by the imperatives of private capital. This chapter explores how this nexus between power and knowledge is central to debates about the relocation (or appropriation) of personal and population data from state to non-state institutions, with private corporations increasingly managing the health and wellbeing of individuals and society. It makes an argument for critical engagement with the complex interactions, intersections, effects and unintended consequences of the multiple technologies that, through the use of data, make the simultaneous government of individuals and populations their targets of action.
Should we regulate artificial intelligence? Can we? From self-driving cars and high-speed trading to algorithmic decision-making, the way we live, work, and play is increasingly dependent on AI systems that operate with diminishing human intervention. These fast, autonomous, and opaque machines offer great benefits – and pose significant risks. This book examines how our laws are dealing with AI, as well as what additional rules and institutions are needed – including the role that AI might play in regulating itself. Drawing on diverse technologies and examples from around the world, the book offers lessons on how to manage risk, draw red lines, and preserve the legitimacy of public authority. Though the prospect of AI pushing beyond the limits of the law may seem remote, these measures are useful now – and will be essential if it ever does.
The most fascinating and profitable subject of predictive algorithms is the human actor. Analysing big data through learning algorithms to predict and pre-empt individual decisions gives a powerful tool to corporations, political parties and the state. Algorithmic analysis of digital footprints, as an omnipresent form of surveillance, has already been used in diverse contexts: behavioural advertising, personalised pricing, political micro-targeting, precision medicine, and predictive policing and prison sentencing. This volume brings together experts to offer philosophical, sociological, and legal perspectives on these personalised data practices. It explores common themes such as choice, personal autonomy, equality, privacy, and corporate and governmental efficiency against the normative frameworks of the market, democracy and the rule of law. By offering these insights, this collection on data-driven personalisation seeks to stimulate an interdisciplinary debate on one of the most pervasive, transformative, and insidious socio-technical developments of our time.
It is fitting that the last example we introduced in the book was about the Internet Research Agency’s (IRA) use of social media, analytics, and recommendation systems to wage disinformation campaigns and sow anger and social discord on the ground. At first glance, it seems odd to think of that as primarily an issue of technology. Disinformation campaigns are ancient, after all; the IRA’s tactics are old wine in new boxes. That, however, is the point. What matters most is not particular features of technologies. Rather, it is how a range of technologies affect things of value in overlapping ways. The core thesis of our book is that understanding the moral salience of algorithmic decision systems requires understanding how such systems relate to an important value, viz., persons’ autonomy. Hence, the primary through line of the book is the value itself, and we have organized it to emphasize distinct facets of autonomy and used algorithmic systems as case studies.
A little after 2 a.m. on February 11, 2013, Michael Vang sat in a stolen car and fired a shotgun twice into a house in La Crosse, Wisconsin. Shortly afterward, Vang and Eric Loomis crashed the car into a snowbank and fled on foot. They were soon caught, and police recovered spent shell casings, live ammunition, and the shotgun from the stolen and abandoned car. Vang pleaded no contest to operating a motor vehicle without the owner’s consent, attempting to flee or elude a traffic officer, and possession of methamphetamine. He was sentenced to ten years in prison.