Transparency has been in the crosshairs of recent writing about accountable algorithms. Its critics argue that releasing data can be harmful, and releasing source code won’t be useful.1 They claim individualized explanations of artificial intelligence (AI) decisions don’t empower people, and instead distract from more effective ways of governing.2 While criticizing transparency’s efficacy with one breath, with the next they defang it, claiming corporate secrecy exceptions will prevent useful information from getting out.3
This chapter’s thesis is simple: as a general matter, agreements are a functional and conceptually straightforward way for the law to recognize algorithms. In particular, using agreements to bring algorithms into the law is better than trying to use the law of agency to do so.1 Casual speech and conceptualism have led to the commonplace notion of “electronic agents,” but the law of agreements is a more functional entry point for algorithms to interact with the law than the concept of vicarious action. Algorithms need not involve any vicarious action, and most of the law of agency translates very poorly to algorithms that lack intent, reasonable understanding, and legal personality in their own right; instead, algorithms cause activity that may have contractual or other agreement-based legal significance. Recognizing the power (and perhaps the necessity) of addressing algorithms by means of the law governing agreements and other legal instruments can free us from formalistic attempts to shoehorn algorithms into a limited set of existing legal categories.
The United States’ transition from an economy built on form contracts to an economy built on algorithmic contracts has been as subtle as it has been thorough. In the form contract economy, many firms used standard order forms to make and receive orders. Firms purchased products and services under lengthy terms of service. Even negotiated agreements between fairly sophisticated businesses involved heavy incorporation of standard form terms selected by lawyers.
Online reputational injury can occur in a number of ways, and one way is through the algorithms that pervade the Internet. The Internet comprises complex technologies that enable information to be disseminated rapidly, reaching a global audience in a matter of seconds with the click of a button. The Internet hosts robust public discourse over a gamut of topics in real time and allows individuals from different parts of the world to interact with one another while preserving some sense of anonymity (if they so choose). Many online communications stem from one piece of content that is regurgitated and redistributed on multiple platforms. Consider, for example, the social application Twitter, which allows individuals to transmit bite-sized pieces of data among millions of users. Twitter has, surprisingly, become an outlet for a recent US President, and the direct access the public gains to a sitting President in this manner is remarkable. Further, the Internet emboldens individuals to act behind the shield of a screen. There is little cost to spreading information, ideas, and gossip online, with seemingly few ramifications. And though a few keystrokes may seem to have only an ephemeral impact, content on the Internet tends toward permanence.
The use of “algorithms” in criminal investigation, adjudication, and punishment is not a new phenomenon. That is, to the extent that “algorithms” are simply sets of rules capable of being executed by a machine, the criminal justice system has long incorporated their use. For example, the US sentencing guidelines are so mechanistic that they were, at least for a time, literally calculated by software. Likewise, the New York Police Department’s erstwhile “stop and frisk” program was a mechanistic means of deciding whom to search and when. And so-called “per se” impaired driving laws have for nearly half a century mechanistically imposed criminal liability based on a machine’s determination that a person’s blood-alcohol level is over a certain threshold, without a jury determination of dangerous impairment.
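To make the mechanistic character of such rules concrete, a per se threshold can be written out as executable code. The following Python sketch is purely illustrative (the 0.08 g/dL figure is the limit most US statutes use, but jurisdictions vary):

```python
# Minimal sketch: a "per se" impaired-driving rule as an executable rule.
# The 0.08 g/dL threshold is illustrative; statutory limits vary by jurisdiction.
PER_SE_BAC_LIMIT = 0.08

def per_se_violation(measured_bac: float) -> bool:
    """Return True if the measured blood-alcohol level triggers liability,
    regardless of any individualized assessment of impairment."""
    return measured_bac >= PER_SE_BAC_LIMIT

print(per_se_violation(0.09))  # True: over the threshold
print(per_se_violation(0.05))  # False: under the threshold
```

The rule requires no judgment at the point of application, which is precisely what makes it a set of instructions a machine can execute.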
The problem we address in this chapter is easy enough to state: Relatively simple algorithms, when duplicated many-fold and arrayed in parallel, produce systems capable of generating highly creative and nuanced solutions to real-world challenges. The catch is that the autonomy and architecture that make these systems so powerful also make them difficult to control or even understand.
The First Amendment’s freedom of speech, the Supreme Court said in 1943, protects our capacity to use words or non-verbal symbols to create a “short-cut from mind to mind.”1 But does it continue to do so when one of the “minds” on either end of such a short cut is an artificial one? Does it protect my right to receive words or symbols not from another person, but from artificial intelligence (AI) – that is, a computer program that can write, compose music, or perform other tasks that used to be the sole province of human intelligence? If so, what kind of First Amendment protection does computer speech receive – and how, if at all, does it differ from that which protects the speech of human persons?
One software architecture in particular, the neural network, not only takes advantage of the virtually perfect recollection and much faster processing speeds of any software, but also teaches itself and attains skills no human could directly program. We rely on these neural networks for medical diagnoses, financial decisions, weather forecasting, and many other crucial real-world tasks. In 2016, a program named AlphaGo beat the top-rated human player of the game of Go.3 Only a few years earlier, this had been considered impossible.4 High-level Go requires remarkable skills, not just of calculation, at which computers obviously excel, but, more critically, of judgment, intuition, pattern recognition, and the weighing of ineffable considerations such as positional balance.5 These skills cannot be directly programmed. Instead, AlphaGo’s neural network6 trained itself on many thousands and, later, millions of games – far more than any individual human could ever play7 – and now routinely beats all human challengers.8 Because it learns and concomitantly modifies itself in response to experience, such a network is termed adaptive.9
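AlphaGo’s internals are far more elaborate than anything reproduced here, so the following is only a schematic illustration of what it means for a network to modify itself in response to experience. The Python sketch below trains a toy network on a trivial task (XOR rather than Go): its behavior is never programmed directly; instead its weights are adjusted repeatedly as it sees examples.

```python
import numpy as np

# A toy adaptive network: its behavior is not directly programmed but is
# learned by repeatedly adjusting weights in response to "experience."
# The task (XOR) and the tiny architecture are illustrative stand-ins, not AlphaGo.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(5000):                      # repeated experience
    h = sigmoid(X @ W1 + b1)                  # hidden activations
    out = sigmoid(h @ W2 + b2)                # current prediction
    grad_out = out - y                        # error signal
    grad_h = (grad_out @ W2.T) * h * (1 - h)  # backpropagated error
    W2 -= 0.5 * h.T @ grad_out
    b2 -= 0.5 * grad_out.sum(axis=0)
    W1 -= 0.5 * X.T @ grad_h
    b1 -= 0.5 * grad_h.sum(axis=0)

print(np.round(sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2), 2))  # approaches [0, 1, 1, 0]
```

The same weight-update loop, scaled up enormously and fed with games rather than truth tables, is the sense in which an adaptive network “trains itself.”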
Digital information and communications technologies (ICT) have been enthusiastically adopted by individuals, businesses, and governments, altering the texture of commercial, social, and legal relationships in profound ways. In this decade, with the rapid development of “big data,” machine-learning tools, and the “Internet of Things,” it is clear that algorithms are becoming central elements of modern society and a significant factor to consider when crafting political or business strategies, developing new markets, or trying to solve problems.
In this chapter, we look at the global development of “people-scoring” and its implications. Unlike traditional credit scoring, which is used to evaluate individuals’ financial trustworthiness, social scoring seeks to comprehensively rank individuals based on social, reputational, and behavioral attributes. The implications of widespread social scoring are far-reaching and troubling. Bias and error, discrimination, manipulation, privacy violations, excessive market power, and social segregation are only some of the concerns we have discussed and elaborated on in previous works.1 In this chapter, we describe the global shift from financial scores to social credit, and show how, notwithstanding constitutional, statutory, and regulatory safeguards, the United States and other Western democracies are not as far from social credit as we seem to believe.
Automated systems that process vast amounts of data about individuals and communities have become a transformative force within contemporary societies and institutions. Governments and businesses, which adopt and develop new techniques of collecting and analyzing information, rely on algorithms in decision-making across various sectors, such as banking, political marketing, health, and criminal justice. Among the early adopters of automated systems are welfare agencies responsible for distributing welfare benefits and managing social policies. These new ways of using technology are promoted for their efficiency, standardization, and resource optimization. However, the debate about artificial intelligence (AI) and algorithms should not be limited to questions about their technical capabilities and functionalities. The creation and implementation of technological innovations also pose a significant normative and ethical challenge for our society. The decision to process data and use certain algorithms is structured and motivated by specific political and economic factors. Therefore, as Winner argued, technical artifacts possess political qualities and are far from neutral.
The debate over algorithmic decision-making has focused primarily on two things: legal accountability and bias. Legal accountability seeks to leverage the institutions of law and compliance to put guard rails around the use of artificial intelligence (AI). This literature insists that if a state is going to use an algorithm to evaluate teachers or if a bank is going to use AI to make loan application decisions, both should do so transparently, in accordance with fair procedure, and be subject to interrogation. Algorithmic fairness seeks to highlight the ways in which AI discriminates on the basis of race, gender, and ethnicity, among other protected characteristics. This literature calls for making technologies that use AI, whether search engines or digital cameras, more inclusive by better training AI on diverse inputs and improving automated systems that have been shown to have a “disparate impact” on marginalized populations.
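Neither strand of this literature reduces to a single formula, but the “disparate impact” inquiry is often approximated quantitatively by comparing favorable-outcome rates across groups, as in the “four-fifths” guideline familiar from employment law. The Python sketch below is only an illustration of that proxy, with invented numbers, not a test endorsed by the chapter:

```python
# Illustrative only: a selection-rate ("four-fifths") check sometimes used as a
# rough proxy for disparate impact; data below are hypothetical.
def selection_rate(decisions):
    """Share of favorable (True) outcomes among a group's decisions."""
    return sum(decisions) / len(decisions)

def disparate_impact_ratio(group_a, group_b):
    """Ratio of the lower selection rate to the higher one."""
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    return min(rate_a, rate_b) / max(rate_a, rate_b)

# Hypothetical loan approvals for two groups of applicants.
approved_a = [True, True, True, False, True]    # 80% approved
approved_b = [True, False, False, False, True]  # 40% approved
ratio = disparate_impact_ratio(approved_a, approved_b)
print(ratio, "-> flagged" if ratio < 0.8 else "-> within the 4/5 guideline")
```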
Our society in the twenty-first century is being shaped ever more by sets of instructions running at data centers spread around the world, commonly known as “algorithms.” Although algorithms are not a recent invention, they have become widely used to support decision systems, arguably triggering the emergence of an algorithmic society.1 These algorithmic decision systems (ADS) are deployed for purposes as disparate as pricing in online marketplaces,2 flying planes,3 generating credit scores,4 and predicting demand for electricity.5 Advanced ADS are characterized by two key features. First, they rely on the analysis of large amounts of data to make predictive inferences, such as the likelihood of default for a potential borrower or an increase in electricity demand. Second, they automate in whole or in part the execution of decisions, such as refusing a loan to a high-risk borrower or increasing energy prices during peak hours, respectively. ADS may also refer to less advanced systems implementing only one of these features. Although ADS have generally proven beneficial in improving the efficiency of decision-making, the underlying algorithms remain controversial because, among other issues, they are susceptible to discrimination, bias, and a loss of privacy – with the potential even to be used to manipulate the democratic processes and structures underpinning our society6 – alongside lacking effective means of control and accountability.
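A minimal, purely hypothetical sketch can make the two features concrete. In the Python fragment below, the data fields, scoring rule, and threshold are invented for illustration; the point is only the coupling of a predictive inference (the first feature) with an automated decision (the second):

```python
# Hypothetical sketch of the two ADS features described above:
# (1) a predictive inference from data, (2) automated execution of a decision.
from dataclasses import dataclass

@dataclass
class Applicant:
    income: float          # annual income
    debt: float            # outstanding debt
    missed_payments: int   # payments missed in the last year

def predicted_default_risk(a: Applicant) -> float:
    """Feature 1: a toy predictive inference (not a real credit model)."""
    debt_ratio = a.debt / max(a.income, 1.0)
    return min(1.0, 0.4 * debt_ratio + 0.15 * a.missed_payments)

def decide_loan(a: Applicant, risk_threshold: float = 0.5) -> str:
    """Feature 2: automated execution of the decision based on the inference."""
    risk = predicted_default_risk(a)
    return "refuse" if risk > risk_threshold else "approve"

print(decide_loan(Applicant(income=40_000, debt=35_000, missed_payments=3)))  # refuse
print(decide_loan(Applicant(income=60_000, debt=10_000, missed_payments=0)))  # approve
```

A system implementing only the first step (scoring without acting) or only the second (acting on a fixed rule without inference) would be the less advanced variant the chapter also counts as an ADS.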
In July 2014, Facebook ran a social experiment on emotional contagion2 by monitoring the emotional responses of 689,003 users to the omission of certain content containing positive and negative words. The project was severely criticized3 for manipulating emotions without the informed consent of the subjects, and it raised concerns about users’ privacy. Most importantly, it posed the question of respect for users’ autonomy in the era of automation.
Imagine: a FinTech lender, that is, a firm using computer programs to enable banking and financial services, introduces a new product based on algorithmic artificial intelligence (AI) underwriting. The lender combs through the entirety of an applicant’s financial records to review where the applicant shopped, what purchases she made, purchase volumes and frequency, how much credit and debt she had, and whether she made utility and rent payments on time. The lender also reviews her mobile phone usage to understand how much time she spent on her phone and what she was engaged in, whether she was at work or at home, her typical geographic areas of travel, the frequency of her text messages, and how many spelling errors she made. (We’ll leave her social media usage out of this for now.) Through this mix of financial and behavioral data, the FinTech lender underwrites her application. It does the same for millions of other customers with little to no credit history, but who have long lived within their means, shopped responsibly, paid rent and utilities on time, and spent many hours at work.
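The lender here is hypothetical, and no actual model is specified. As a rough sketch of how such financial and behavioral signals might be folded into a single underwriting score, the following Python fragment uses invented field names, weights, and cutoff:

```python
# Purely illustrative: combining financial and behavioral signals into one
# underwriting score. Field names, weights, and the cutoff are invented.
def underwriting_score(applicant: dict) -> float:
    financial = (
        0.30 * applicant["on_time_rent_and_utilities_rate"]     # 0..1
        + 0.25 * (1.0 - applicant["debt_to_income"])             # lower debt is better
        + 0.15 * applicant["spending_within_means_rate"]         # 0..1
    )
    behavioral = (
        0.15 * applicant["hours_at_work_consistency"]            # 0..1
        + 0.10 * (1.0 - applicant["text_spelling_error_rate"])   # 0..1
        + 0.05 * applicant["travel_pattern_stability"]           # 0..1
    )
    return financial + behavioral  # 0 (worst) .. 1 (best)

applicant = {
    "on_time_rent_and_utilities_rate": 0.98,
    "debt_to_income": 0.20,
    "spending_within_means_rate": 0.90,
    "hours_at_work_consistency": 0.85,
    "text_spelling_error_rate": 0.03,
    "travel_pattern_stability": 0.75,
}
score = underwriting_score(applicant)
print(round(score, 2), "approve" if score >= 0.6 else "decline")
```

The weighting of behavioral signals alongside financial ones is the design choice that lets the hypothetical lender reach applicants with thin credit files, and it is also what makes the model’s inputs so intrusive.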
If someone relies on algorithms1 to communicate to others, does that reliance change anything for First Amendment purposes?2 In this chapter I argue that, under the Supreme Court’s prevailing jurisprudence, the answer is no. Any words or pictures that would be speech under the First Amendment if produced entirely by a human are equally speech if produced via human-created algorithm. So long as humans are making the decisions that underlie the outputs, a human is sending whatever message is sent. Treatment as speech requires substantive editing by a human being, whether the speech is produced via algorithm or not. If such substantive editing exists, the resulting communication is speech under the current jurisprudence. Simply stated, if we accept Supreme Court jurisprudence, the First Amendment encompasses a great swath of algorithm-based decisions – specifically, algorithm-based outputs that entail a substantive communication.