This chapter connects our arguments about agency and autonomy in chapters 2-4 to conceptions of freedom and its value. We argue that freedom has two fundamental conditions: that persons be undominated by others and that they have an adequate degree of autonomy and agency. We then explain that algorithmic systems can threaten both the domination-based and the agency-based requirements, either by facilitating domination or by exploiting weaknesses in human agency. We explicate these types of threats as three sorts of challenges to freedom. The first are “affective challenges,” which involve the role of affective, nonconscious processes (such as fear, anger, and addiction) in human behavior and decision-making. These processes, we argue, interfere with our procedural independence, thereby threatening persons’ freedom by undermining autonomy. The second are “deliberative challenges.” These involve strategic exploitation of the fact that human cognition and decision-making are limited. These challenges also relate to our procedural independence, but they do not so much interfere with it as they exploit its natural limits. The third sort of challenge, which we describe as “social challenges,” involves toxic social and relational environments. These threaten our substantive independence and thus our freedom.
This chapter outlines the conception of autonomy that grounds the arguments throughout the book. We begin with a basic definition of autonomy as self-government, distinguish global and local autonomy, and explain how autonomy may be understood as a capacity, as the exercise of that capacity, as successful self-government, and as a right. We then describe a key split in the philosophical literature between psychological autonomy and personal autonomy. We offer an ecumenical view of autonomy that incorporates facets of both psychological and personal autonomy. Finally, we rehearse some key objections to traditional conceptions of autonomy, and explain how contemporary accounts address those criticisms.
In this chapter, we address some distinctively epistemic problems that algorithms pose in the context of social media and argue that in some cases these epistemic problems warrant paternalistic interventions. Our paternalistic response to these problems is compatible with respect for freedom and autonomy; in fact, we argue that freedom and autonomy demand some kinds of paternalistic interventions. The chapter proceeds as follows. First, we discuss an intervention that Facebook has run in hopes of demoting the spread of fake news on the site. We explain why the intervention is paternalistic and then, using the framework of this book, defend it. We argue that while Facebook’s intervention is defensible, it is limited: it may pop some epistemic bubbles but will likely be powerless against echo chambers. We then discuss heavier-handed interventions that might be effective enough to dismantle some echo chambers, and we argue that at least some heavier-handed epistemically paternalistic interventions are permissible.
When agents insert technological systems into their decision-making processes, they can obscure moral responsibility for the results. This can give rise to a distinct moral wrong, which we call “agency laundering.” At root, agency laundering involves obfuscating one’s moral responsibility by enlisting a technology or process to take some action and letting it forestall others from demanding an account for bad outcomes that result. We argue that the concept of agency laundering helps in understanding important moral problems in a number of recent cases involving automated, or algorithmic, decision-systems. We apply our conception of agency laundering to a series of examples, including Facebook’s automated advertising suggestions, Uber’s driver interfaces, algorithmic evaluation of K-12 teachers, and risk assessment in criminal sentencing. We distinguish agency laundering from several other critiques of information technology, including the so-called “responsibility gap,” “bias laundering,” and masking.
This chapter addresses autonomy’s role in democratic governance. Political authority may be justifiable or not. Whether it is justified, and how it can come to be justified, is a question of political legitimacy, which is in turn a function of autonomy. We begin, in section 8.1, by describing two uses of technology: crime-predicting technology used to drive policing practices and social media technology used to influence elections (including by Cambridge Analytica and by the Internet Research Agency). In section 8.2 we consider several views of legitimacy and argue for a hybrid version of normative legitimacy based on one recently offered by Fabienne Peter. In section 8.3 we explain that the connection between political legitimacy and autonomy is that legitimacy is grounded in legitimating processes, which are in turn based on autonomy. Algorithmic systems—among them PredPol and the Cambridge Analytica-Facebook-Internet Research Agency amalgam—can hinder that legitimation process and conflict with democratic legitimacy, as we argue in section 8.4. We conclude by returning to several cases that serve as through-lines of the book: Loomis, Wagner, and Houston Schools.
One important criticism of algorithmic systems is that they lack transparency, whether because they are complex, protected by intellectual property, or deliberately obscure. There is a debate about whether the EU’s General Data Protection Regulation (GDPR) contains a “right to explanation.” This chapter addresses the informational component of algorithmic systems. We argue that information access is integral to respecting autonomy, and that transparency policies should be tailored to advance autonomy. We distinguish two facets of agency (i.e., the capacity to act). The first is “practical agency,” or the ability to act effectively according to one’s values. The second is “cognitive agency,” which is the ability to exercise what Pamela Hieronymi calls “evaluative control.” We argue that respecting autonomy requires providing persons sufficient information to exercise evaluative control and to properly interpret the world and one’s place in it. We draw out this distinction by considering algorithmic systems used in background checks, and we apply the view to key cases involving risk assessment in criminal justice decisions and K-12 teacher evaluation.
Chapter 3 takes the conception of autonomy outlined in chapter 2 and explains how it grounds moral evaluation of algorithmic systems. It begins by offering a view of what it takes to respect autonomy and to respect persons in virtue of their autonomy, drawing on a number of different normative moral theories. The argument starts with a description of a K-12 teacher evaluation program from Washington, DC. It then considers several puzzles about the case. Next, the chapter provides an account of respecting autonomy and what that means for individuals’ moral claims. It explains how that account can help us understand the DC case, and we will offer a general account of the moral requirements of algorithmic systems. Specifically, we offer the Reasonable Endorsement Test, according to which an action is morally permissible only if it would be allowed by principles that each person subject to it could reasonably endorse. The chapter applies that test to the Loomis, Houston Schools, and Wagner cases. Finally, the chapter explains why the book does not focus directly on “fairness.”
Algorithms influence every facet of modern life: criminal justice, education, housing, entertainment, elections, social media, news feeds, work… the list goes on. Delegating important decisions to machines, however, gives rise to deep moral concerns about responsibility, transparency, freedom, fairness, and democracy. Algorithms and Autonomy connects these concerns to the core human value of autonomy in the contexts of algorithmic teacher evaluation, risk assessment in criminal sentencing, predictive policing, background checks, news feeds, ride-sharing platforms, social media, and election interference. Using these case studies, the authors provide a better understanding of machine fairness and algorithmic transparency. They explain why interventions in algorithmic systems are necessary to ensure that algorithms are not used to control citizens' participation in politics and undercut democracy. This title is also available as Open Access on Cambridge Core.
The mythology of the Market is strongly evident, indicated by the corporate camouflage of existential desire by the wide range of constructed desires. This mythology has materialised in the personalisation of the idea of the corporation. Its functioning is revealed by the commodification of individuals within models of regulatory capitalism and by the structural embedding of debt as credit. These trends have been promoted by the digitisation of corporate function, by algorithmic profiling of individuals as consumers and by the exploitation of Big Data. This has morphed into surveillance capitalism. The non-mythological way forward would start with focusing on all stakeholders, including all citizens whom the corporation impacts. This is the reimagining of corporations on purpose-based, fiduciary principles. This in turn would require the redrafting of competition and consumer protection law, as well as shifting the control of personal data to the individual. It would also require changes to employee relations strategies.
The State has been a mythological entity throughout its history, from its sovereign phase to its present dispersed, nodal, regulatory phase. This dispersal raises important questions about the gradual disappearance of public accountability. It also points to such other key issues as the dilution of personal responsibility, especially when considered in the context of the determinative implications of neuroscientific research. These trends are further emphasised by the increasingly avaricious, non-consensual digitisation of the State and the threat to democratic values posed by such trends as data brokering and algorithmic friendliness. The consequential move to a non-mythological State can be produced by reimagining agencies as purpose-based entities that operate on existential, fiduciary principles in a manner that avoids Pettit’s republicanism. How this transition can take place is evidenced by the difference between mythological and non-mythological criminal justice, a model for which is presented.
This work will address the problems of contemporary accounts of privacy by placing them in a new context: the mythological social dynamic that has constrained the West. This dynamic has driven a trajectory of failed mythological magnitudes – Deity, State, Market and now Technology – by which we have tried to avoid existential reality rather than embrace it. This avoidance is why privacy is vulnerable to the imminent impact of the latest form of that dynamic, neuroscience: while ‘privacy’ comes from early forms of this dynamic, neuroscience is now its most powerful form and will overwhelm that sense of privacy. Privacy needs to be removed from this dynamic and reconceived through existential, respectful self-responsibility. It will then survive this challenge and will, counterintuitively, embrace neuroscientific benefits, including their promotion of this new privacy through the technological control of the citizen. We will examine the dynamic, how it produced present notions of privacy through a singular form of normalisation, and how it is being re-formed by the mythological algorithms of neuroscience. To disengage privacy, we will need new ethical principles and a reimagined social infrastructure – law, State and Market – best understood by reconceiving regulation. That will provide the necessary support for self-responsibility.