In this chapter, I ask whether, and under what circumstances, the First Amendment should protect algorithms from regulation by government. This is a broad frame for discussion, and it is important to understand that constitutional “protection for algorithms” could take at least four forms that have little if anything to do with one another.
For more than sixty years, “obviousness” has set the bar for patentability. Under this standard, if a hypothetical “person having ordinary skill in the art” would find an invention obvious in light of existing relevant information, then the invention cannot be patented. This skilled person is defined as a non-innovative worker with a limited knowledge base. The more creative and informed the skilled person, the more likely an invention will be considered obvious. The standard has evolved since its introduction, and it is now on the verge of an evolutionary leap: inventive algorithms are increasingly being used in research, and once their use becomes standard, the person skilled in the art should be a person augmented by an algorithm, or simply an inventive algorithm. Unlike the skilled person, the inventive algorithm is capable of innovating and of considering the entire universe of prior art. As inventive algorithms continue to improve, they will progressively raise the bar to patentability, eventually rendering innovative activities obvious. The end of obviousness means the end of patents, at least as they are now.
To many people, a boundary exists between artificial intelligence (AI), sometimes referred to as an intelligent software agent, and the system that the AI controls, primarily through algorithms. One example of this dichotomy is robots, which have a physical form but whose behavior depends heavily on the “AI algorithms” that direct their actions. More specifically, we can think of a software agent as an entity directed by algorithms that perform many intellectual activities currently done by humans. The software agent can exist in a virtual world (for example, a bot) or can be embedded in the software controlling a machine (for example, a robot). Many current robots controlled by algorithms amount to semi-intelligent hardware that repetitively performs tasks in physical environments. This observation reflects the fact that most robotic applications for industrial use since the middle of the last century have been driven by algorithms that support repetitive machine motions. In many cases, industrial robots, which typically work in closed environments such as factory floors, do not need “advanced” AI techniques to function, because they perform daily routines with algorithms directing the repetitive motions of their end effectors. Lately, however, an emerging technological trend combining AI and robotics has produced robots that, using sophisticated algorithms, can adopt complex work styles and function socially in open environments. We may call these merged technological products “embodied AI,” or in a more general sense, “embodied algorithms.”
The (un)limited potential of algorithmic decision-making is increasingly embraced by numerous private sector actors, ranging from the pharmaceutical, banking, and transport industries to powerful Internet platforms. Celebratory narratives about the use of big data and machine-learning algorithms by private companies to simulate intelligence, improve society, and even save humanity are common and widespread. The deployment of algorithms to automate decision-making also promises to make governments not only more efficient, but also more accurate and fair. From welfare and criminal justice to healthcare, national security, and beyond, governments are increasingly relying on algorithms to automate decision-making – a development which has been met with concern by many activists, academics, and members of the general public.1 Yet it remains incredibly difficult to evaluate and measure the nature and impact of automated systems, even as empirical research has demonstrated their potential for bias and individual harm.2 These opaque and elusive systems are often not subject to the same accountability or oversight mechanisms as other public actors in our legal systems, which raises questions about their compatibility with fundamental principles of public law. It is thus not surprising that numerous scholars are increasingly calling for more attention to be paid to the use of algorithms in government decision-making.3
This chapter explores the legal protection afforded to algorithms and argues that, in the coming decade, as coding methods change, IP protection for algorithms may not prevail. Even today, machines controlled by algorithms are outsmarting humans in many areas. For example, advanced algorithms influence markets and affect finance, commerce, human resources, health, and transportation.
Risk assessment – measuring an individual’s potential for offending – has long been an important aspect of most legal systems, in a wide variety of contexts. In most countries, sentences are often heavily influenced by concerns about preventing reoffending. Correctional officials and parole boards routinely rely on risk assessments. Post-sentence commitment of “dangerous” offenders (particularly common in connection with sex offenders) is based almost entirely on determinations of risk, as is involuntary hospital commitment of people found not guilty by reason of insanity and of people who are not prosecuted but require treatment. Detention prior to trial is frequently authorized not only upon a finding that a suspect will otherwise flee the jurisdiction, but also when the individual is thought to pose a risk to society if left at large. And police on the streets have always been on the look-out for suspicious individuals who might be up to no good.
If law is to promote justice and welfare, it must respond to changes in society. In much the same way, the tools that government uses to make, implement, and enforce laws also need to adapt in the face of societal changes as well as in light of changes in technology. In this spirit, governments around the world increasingly look to the promise of one of the newest technological innovations made possible by modern computing power: machine-learning algorithms.
Technological advances continue to produce massive amounts of information from a variety of sources about our everyday lives. The simple use of a smartphone, for example, can generate data on individuals through telephone records (including location data), social media activity, Internet browsing, e-commerce transactions, and email communications. Much attention has been given to expectations of privacy in light of this data collection, especially consumer privacy, and to how and when government agencies collect and use such data to monitor the activities of individuals.
Software-related inventions have had an uneasy relationship with the patent-eligible subject matter requirement of Section 101 of the Patent Act. In applying the requirement, the Supreme Court has historically characterized mathematical algorithms and formulas simpliciter as sufficiently analogous to laws of nature to warrant judicial exclusion as abstract ideas. The Court has also found “the mere recitation of a generic computer” in a patent claim as tantamount to “adding the words ‘apply it with a computer,’” a mere drafting effort that does not relieve “the pre-emption concern that undergirds our § 101 jurisprudence.” Lower courts, patent counsel, and commentators have struggled to apply these broad principles to specific software-related inventions, a difficulty largely rooted in the many forms and levels of abstraction in which mathematical algorithms can be situated, both in the computing context and in the terms of a patent claim. Consequently, widely varying approaches to claiming inventions that involve algorithms have perennially complicated efforts to develop a coherent doctrine of unpatentable abstract ideas.
The development of a policy framework for the sustainable and ethical use of artificial intelligence (AI) techniques has gradually become one of the top policy priorities in developed countries as well as in the international context, including the G7 and G20 and work within the Organisation for Economic Co-operation and Development (OECD), the World Economic Forum, and the International Telecommunication Union. Interestingly, this mounting debate has taken place with very little attention to the definition of what AI is, its phenomenology in the real world, and its expected evolution. Politicians evoke the imminent takeover of smart autonomous robots; entrepreneurs announce the end of mankind, or the achievement of immortality through brain upload; and academics fight over the prospects of Artificial General Intelligence, which appears inevitable to some and preposterous to others. In all this turmoil, governments have developed the belief that, as both Vladimir Putin and Xi Jinping recently put it, the country that leads in AI will, as a consequence, come to dominate the world. As AI climbs the ranking of top government priorities, a digital arms race has also emerged, in particular between the United States and China. This race bears far-reaching consequences when it comes to earmarking funds for research, innovation, and investment in AI technologies: gradually, AI becomes an end, rather than a means, and military and domestic security applications are given priority over civilian use cases, which may contribute more extensively to social and environmental sustainability. As of today, one could argue that the top priority in US AI policy is countering the rise of China, and vice versa.1
Rapid, recent technological change has given rise to a new form of “algorithmic competition.” Firms can and do draw on supercharged connectivity, mass data collection, algorithmic processing, and automated pricing to engage in what can be called “robo-selling.” But algorithmic competition can also produce results that harm consumers. Notably, robo-selling may make anticompetitive collusion more likely, all things being equal. Additionally, new forms of algorithmic price discrimination may also cause consumers to suffer. There are no easy solutions, particularly because algorithmic competition also promises significant benefits to consumers. This chapter therefore sets forth some necessarily tentative approaches to each of these issues, to address the changes that algorithmic competition is likely to bring.
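To make the collusion concern concrete, consider a minimal, purely illustrative simulation (our own sketch, not drawn from the chapter): two price-matching bots, each programmed only to undercut its rival down to a self-imposed floor, end up sustaining a price well above marginal cost without ever communicating or agreeing on anything. All names and numbers here are assumptions chosen for illustration.

```python
# Hypothetical sketch: two "robo-sellers" whose only rule is to slightly
# undercut the rival's last price, but never below a self-imposed floor.
# Neither bot communicates with the other, yet prices stabilize well
# above cost - a collusion-like outcome with no explicit agreement.

COST = 10.0   # marginal cost per unit (assumed)
FLOOR = 18.0  # each bot's self-imposed minimum price (assumed)

def respond(rival_price: float) -> float:
    """Undercut the rival by one cent, but never go below the floor."""
    return max(FLOOR, rival_price - 0.01)

price_a, price_b = 25.0, 24.0  # arbitrary starting prices
for _ in range(1000):          # simulate repeated rounds of repricing
    price_a = respond(price_b)
    price_b = respond(price_a)

# Both prices settle at the floor (18.00), far above cost (10.00).
print(f"final prices: {price_a:.2f}, {price_b:.2f}; marginal cost: {COST:.2f}")
```

The point of the sketch is only that very simple reactive pricing rules can reproduce collusion-like outcomes; real pricing systems are far more complex, which is part of what makes such conduct difficult for antitrust enforcers to detect and characterize.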
This chapter addresses whether and when content generated by an algorithm should be considered “expression” deserving of legal protection. Free expression has been described as “the matrix, the indispensable condition, of nearly every other form of freedom.”1 It receives extensive protection in many countries through legislation, constitutional rights, and the common law.2 Despite its deep roots, however, freedom of expression has unsettled boundaries. At their cutting edge lies the problem of “speech” produced by algorithms, a phenomenon that challenges traditional accounts of freedom of expression and impacts the balance of power between producers and consumers of algorithmically generated content.3
As technology continues to advance, and specifically as algorithms proliferate throughout society, the law is increasingly confronted with the task of determining who is responsible when property is damaged or people are harmed. From a historical perspective, the Industrial Revolution produced machines that automated tasks previously performed manually by humans. Despite the superiority of these early automated machines, however, their use could cause physical damage due, for example, to machine malfunction, poor machine design, or misuse by their users. The legal framework traditionally applied to machine-induced damages comprises two doctrines: general negligence and product liability. In this chapter, I focus primarily on how the reasonable person standard for actors applies to algorithm-based entities.