People have long sought out the public realm because of a desire for transcendence. The ancient Greeks sought it out because they wanted more than the oikos, or the family home, had to offer. Accordingly, the private realm was long deemed ‘privative’ in some essential way – it deprived us of what it means to be uniquely or distinctly human.1 In Classical times, the oikos was the realm of function and hierarchy. It was hierarchical because of the task at hand, the business of survival. But things were otherwise in the public realm, where men were free – for those lucky enough to be citizens, that is.2 When they entered the public realm, the realm of politics where freedom was exercised, people were released somewhat, or temporarily, from the tyranny of necessity, and could entertain higher matters and higher concerns – uniquely human concerns.
The tax system incentivizes automation, even in cases where it is not otherwise efficient. This is because the vast majority of tax revenue is derived from labor income. When an AI replaces a person, the government loses a substantial amount of tax revenue, potentially hundreds of billions of dollars a year in the aggregate. All of this is the unintended result of a system designed to tax labor rather than capital. Such a system no longer works once labor is capital. Robots are not good taxpayers. The solution is to change the tax system to be more neutral between AI and human workers and to limit automation’s impact on tax revenue. This would be best achieved by reducing taxes on human workers and increasing corporate and capital taxes.
This chapter explains the need for AI legal neutrality and discusses its benefits and limitations. It then provides an overview of its application in tax, tort, intellectual property, and criminal law. Law is vitally important to the development of AI, and AI will have a transformative effect on the law given that many legal rules are based on standards of human behavior that will be automated. As AI increasingly steps into the shoes of people, it will need to be treated more like a person, and more importantly, sometimes people will need to be treated more like AI.
This chapter defines artificial intelligence and discusses its history and evolution, explains the differences between major types of AI (symbolic/classical and connectionist), and describes AI’s most recent advances, applications, and impact. It also weighs in on the question of whether AI can “think,” noting that the question is less relevant to regulatory efforts, which should focus on promoting behaviors that improve social outcomes.
AI has the potential to be substantially safer than people. Self-driving cars will cause accidents, but they will cause fewer accidents than people. Because automation will result in substantial safety benefits, tort law should encourage its adoption as a means of accident prevention. Under current laws, suppliers of AI tortfeasors are strictly responsible for their harms. A better system would hold them liable for harms caused by AI tortfeasors in negligence. Not only would this encourage the use of AI after it exceeds human performance, but also the liability test would focus on activity rather than design, which would be simpler to administer. More importantly, just as AI activity should be discouraged when it is less safe than a person, human activity should be discouraged when it is less safe than an AI. Once AI is safer than a person and automation is practicable, human tortfeasors should be held to the standard of AI behavior.
The impact of artificial inventors is only starting to be felt, but AI’s rapid improvement means that it may soon outdo people at solving problems in certain areas. This should revolutionize not only research and development but also patent law. The most important requirement for obtaining a patent is that an invention be nonobvious to a hypothetical skilled person who represents an average researcher. As AI increasingly augments average researchers, it should make them more knowledgeable and sophisticated. In turn, this should raise the bar to patentability. Once inventive AI moves from augmenting to automating average researchers, it should directly represent the skilled person in obviousness determinations. As inventive AI continues to improve, this should continue to raise the bar to patentability, eventually rendering innovative activities obvious. To a superintelligent AI, everything will be obvious.
This chapter concludes by responding to some of the controversies about artificial intelligence and possible criticisms of AI legal neutrality. It argues that AI legal neutrality is important regardless of whether AI broadly achieves superhuman performance, and that the law would not want to constrain AI development for protectionist reasons. It further argues that AI legal neutrality is a coherent principle for policymakers to apply, even though it allows the law to treat AI and people differently and will sometimes be at odds with other regulatory goals. Finally, it discusses some of the risks and dangers of AI and argues these are susceptible to management with appropriate legal frameworks.
Criminal law falls short in cases where an AI functionally commits a crime and there are no individuals who are criminally liable. This chapter explores potential solutions to this problem, with a focus on holding AI directly criminally liable where it is acting autonomously and irreducibly. Conventional wisdom holds that punishing AI is incongruous with basic criminal law principles such as the capacity for culpability and the requirement for a voluntary act. Drawing on analogies to corporate and strict criminal liability, the chapter shows AI punishment cannot be categorically ruled out with quick theoretical arguments. AI punishment could result in general deterrence and expressive benefits, and it need not run afoul of negative limitations such as punishing in excess of culpability. Ultimately, however, punishing AI is not justified, because it might entail significant costs and would certainly require radical legal changes. Modest changes to existing criminal laws that target persons, together with potentially expanded civil liability, are a better solution to AI crime.
AI is generating patentable inventions without a person involved who qualifies as an inventor. Yet there are no rules about whether such an invention could be patented, who or what could qualify as an inventor, and who could own the patents. There are laws that require inventors to be natural persons, but they predate inventive AI and were never intended to prohibit patents on AI-generated inventions. AI-generated inventions should be patentable because this will incentivize the development of inventive AI and result in more benefits for everyone. When an AI invents, it should be listed as an inventor because listing a person would be unfair to legitimate inventors. Finally, an AI’s owner should own any patents on its output in the same way that people own other types of machine output. The chapter proceeds to address a host of challenges that would result from AI inventorship, ranging from ownership of AI-generated inventions and displacement of human inventors to the need for consumer protection policies.
Donald Trump, the Arab Spring, Brexit: digital media have provided political actors and citizens with new tools to engage in politics. These tools are now routinely used by activists, candidates, non-governmental organizations, and parties to inform, mobilize, and persuade people. But what are the effects of this retooling of politics? Do digital media empower the powerless or are they breaking democracy? Have these new tools and practices fundamentally changed politics or is their impact just a matter of degree? This clear-eyed guide steps back from hyperbolic hopes and fears to offer a balanced account of what aspects of politics are being shaped by digital media and what remains unchanged. The authors discuss data-driven politics, the flow and reach of political information, the effects of communication interventions through digital tools, their use by citizens in coordinating political action, and what their impact is on political organizations and on democracy at large.
AI and people do not compete on a level playing field. Self-driving vehicles may be safer than human drivers, but laws often penalize such technology. People may provide superior customer service, but businesses are automating to reduce their taxes. AI may innovate more effectively, but an antiquated legal framework constrains inventive AI. In The Reasonable Robot, Ryan Abbott argues that the law should not discriminate between AI and human behavior and proposes a new legal principle that will ultimately improve human well-being. This work should be read by anyone interested in the rapidly evolving relationship between AI and the law.
Where did you last encounter a piece of political information? Chances are, you clicked on a link a friend sent you on a messaging app, read the preview to a piece on the Facebook wall of a colleague, or followed a retweet posted by an acquaintance on Twitter. Depending on your predilections for the ways of the ancients, you might also have picked up a printed newspaper or watched the news on a television set.
Two episodes from 2011 and 2016 bookend public expectations regarding the role of digital media in politics. In the wake of the protests and demonstrations in North Africa and the Middle East that we discussed in Chapter 5, the dominant public narrative portrayed social media as the keystone that enabled the opposition to coordinate a challenge to otherwise seemingly unwavering autocracies. Only social media offered disgruntled citizens the possibility of taking their discontent to the streets. Decentralized networks on top of real-time communication systems enabled activists to level the playing field against authoritarian regimes that previously had taken full advantage of their control over the official media and showed an unfettered capacity to repress any sign of dissent. It does not matter whether we see digital media as a causal factor; no account of the events in Egypt would be complete without a reference to the #jan25 hashtag on Twitter or the “We are all Khaled Said” site on Facebook (see Chapter 5).
It is June 2015 and the famous American reality-TV personality Donald Trump announces his bid for the Republican nomination in the 2016 race for the US presidency. Journalists, Republican donors, and prospective voters now have to decide if they should take his bid seriously. The history of American presidential campaigns is littered with celebrities and third-party candidates who tried to capitalize on their fame or success by entering politics. While some, like Ronald Reagan, Arnold Schwarzenegger, or Michael Bloomberg, proved to be successful, most celebrity candidacies turned out to be mere blips in the history of American politics. How should observers decide whether Donald Trump’s bid fell into the first or the second category? The Trump campaign portrayed their candidate as being in touch with the long-forgotten people lacking a voice in US politics (Green 2017), a group that the campaign of the Democratic frontrunner Hillary Clinton helpfully labeled “deplorables” (Chozick 2016). To assess the validity of Trump’s claims, journalists turned to social media to gauge how well his message resonated with the public.