Autonomous robots and Artificial Intelligence (AI) are increasingly involved in the commission of criminal offenses, resulting in questions as to who is accountable for such crimes and how human–machine interactions influence criminal responsibility. This chapter first elaborates upon the conditions of machine responsibility and explains why technical systems cannot be granted personhood under today’s criminal law. It then discusses the challenges that complex human–machine interactions bring in terms of the attribution of criminal responsibility, before outlining options regarding how they might be met. Human–machine interactions should be understood as a form of distributed agency. It is therefore essential to define what a legitimate delegation of agency to a machine is. As this chapter shows, this could lead to a modified understanding of the attribution of action under criminal law, or even new norms specifically designed to address automation. Ensuring accountability under criminal law for robots and AI means holding actors accountable who cede agency to such technologies in bad or unjustified ways.
Social norms, often described as the “cement” or “grammar” of society, guide human behavior and infuse it with social meaning. In this chapter, we offer a legal perspective on whether robots should be required to adhere to social norms in the context of human–robot interactions (HRIs). To do so, we examine how the mass introduction of social robots may affect existing norms in different contexts, develop a taxonomy of HRIs as they pertain to social norms, and offer a legal-normative framework of analysis to determine whether and to what extent robots should be legally required to adhere to social norms.
This chapter explores design guidelines and potential regulatory issues that could be associated with future baby–robot interaction. We coin the term “robot natives,” which we define as the first generation of humans regularly interacting with robots in domestic environments. This term includes babies (0–1 year old) and toddlers (1–3 years old) born in the 2020s. Drawing from the experience of other interactive technologies becoming widely available in the home and the positive and negative impacts they have had on humans, we propose some insights into the design of future scenarios for baby–robot interaction, aiming to influence future legislation regulating service robots and social robots used with robot natives. Similarly, we aim to inform designers and developers so as to inhibit robot designs that can negatively affect long-term interactions for robot natives. We conclude that a qualitative, multidisciplinary, ethical, human-centered design approach would be beneficial in guiding the design and use of robots in the home and around families, as this is currently not a common approach in the design of studies in child–robot interaction.
Romance between a human and robot will pose many questions for the laws that apply to human–robot interaction and, in particular, family law. Such questions include whether humans and robots can marry and what a subsequent divorce might look like. This chapter considers these issues, organized to track the seasons of romantic relationships, such as cohabitation, engagement, and marriage. Given that marriage is no longer devoid of the possibility of divorce, this chapter also considers issues of property division, alimony, child custody, and child support when a marriage between a human and robot dissolves. Even for skeptics of such a future, given rapid advances in robotics, the applicability of family law to relationships between a human and robot is nonetheless an increasingly relevant thought experiment and intersects with other emerging areas of law, technology, and robotics.
The purpose of this chapter is to contribute to the current discussion on what robot laws might look like once artificial intelligence (AI)-enabled robots become more widespread and integrated within society. The starting point of the discussion is Asimov’s three laws of robotics, and the realization that while Asimov’s laws are norms formulated in human language, the behavior of robots is fundamentally controlled by algorithms, that is, by code. Three conclusions can be drawn from this starting point in the discussion of what laws for robots might look like. One is that laws enacted for humans will be translated into laws for robots, which, as discussed here, will be a difficult challenge for legal scholars. The second conclusion is that due to the norms which exist within society, the same rules will be simultaneously present in the natural language version of laws for humans and the code version of laws for robots. And the third conclusion is that the translation of the robots’ actions and outputs back into human language will also be a challenging process. In addition, the chapter argues that the regulation of a robot’s behavior largely overlaps with the current discourse on providing explainable AI, but with the added difficulty of understanding how explaining legal decisions differs from explaining the outputs of AI and AI-enabled robots in general.
In previous research, we considered several novel problems posed by robot accidents and assessed related legal and economic approaches to the creation of optimal incentives for robot manufacturers, operators, and prospective victims. In this chapter, we synthesize our previous work in a unified analysis. We begin with a discussion about the problems and legal challenges posed by robot torts. Next, we describe the novel liability regime we proposed, that is, “manufacturer residual liability,” which blends negligence-based rules and strict manufacturer liability rules to create optimal incentives for robot torts. This regime makes operator and victim liability contingent upon their negligence (incentivizing them to act diligently) and makes manufacturers residually liable for nonnegligent accidents (incentivizing them to make optimal investments in researching safer technologies). This rule has the potential to drive unsafe technology out of the market and also to incentivize operators to adopt optimal activity levels in their use of automated technologies.
Humans categorize themselves and others on the basis of many attributes, forging a range of social groups. Such group identities can influence our perceptions, attitudes, beliefs, and behaviors toward others and ourselves. While decades of psychological research have examined how dividing the world into “us” and “them” impacts our attitudes, beliefs, and behaviors toward others, a new and emerging area of research considers how humans can ascribe social group memberships to humanoid robots. Specifically, our social perceptions and evaluations of humanoids can be shaped by subtle characteristics of the robot’s appearance or other features, particularly if these characteristics are interpreted through the lens of important human group identities. The current chapter reviews research on the psychology of intergroup relations to consider its manifestations and expressions in the context of human–robot interaction. We first consider how robots, despite being nonliving, can be ascribed certain identities (e.g., race, gender, and national origin). We then consider how this can in turn impact attitudes, beliefs, and behaviors toward such technology. Given the nascency of this field of study, we highlight existing gaps in our knowledge and point to important directions for future research. The chapter concludes by considering the societal, market, and legal implications of bias in the context of human–robot interaction.
In recent years there has been growing research interest in religion within the robotics community. Along these lines, this chapter provides a case study of the ‘religious robot’ SanTO, the world’s first robot designed to be ‘Catholic’. This robot was created with the aim of exploring the theoretical basis for the application of robot technology in the religious space. While the application of this technology has many potential benefits for users, the use and design of religious and other social robots raises a number of ethical, legal, and social issues (ELSI). This chapter, which is concerned with such issues, starts with a general introduction, offers an ELSI analysis, and finally develops conclusions from an ethical design perspective.
The First Amendment of the U.S. Constitution protects “the freedom of speech.” Courts have also said it protects “freedom of thought.” But does that mean we have a right to speak with or listen to the speech of robots? Does it mean we have a First Amendment right to recruit robots to help us think or change the way we think? By “robots” here, I mean the robots that are the primary focus of this book: Humanoid robots that might not only emulate the way we solve intellectual challenges or express ourselves, but also emulate us physically – by taking on a physical form similar to that of human beings and moving and acting in the physical space of our homes, workplaces, or other spaces, and not simply the virtual space on our computers.
As blockchain in general and NFTs in particular reshape operation logistics, data creation, and data management, these technologies bring forth many legal and ethical dilemmas. This handbook offers a comprehensive exploration of the impact of these technologies in different industries and sectors, including finance, anti-money laundering, taxation, campaign finance, and more. The book specifically provides insights and potential solutions for cutting-edge issues related to intellectual property rights, data privacy and strategy, information management, and ethical blockchain use, while offering insights, case studies, and recommendations to help anyone seeking to shape effective, balanced regulation that fosters innovation while safeguarding the interests of all stakeholders. This handbook offers readers an invaluable roadmap for navigating the dynamic and evolving landscape of these new technologies.
The chapter argues that creating consumer trust requires technologically neutral rules of consumer and data protection law. The limited impact of the established rules on digital markets raises the question of whether reform is needed. The chapter focuses on personalised advertising as its case study, and argues that regulation by design is the best method of protecting weaker parties.
Recommender systems (RSs) are one of the most important examples of how AI can be used to improve the experience of consumers, as well as to increase revenues for companies. The chapter presents a short survey of the main approaches. The manipulation of consumer behavior by RSs is less a legal issue than an ethical one, which should be considered when designing these types of systems.
This chapter argues that AI can be a positive force in consumer protection enforcement, although in its current form it has a limited range. If not used with adequate caution, safeguards, and understanding of its limitations, it could lead to under-enforcement. Enforcement authorities are encouraged not to reach for AI solutions first, but to reflect on the best strategy for including AI-enabled technology in their enforcement toolbox.
Regulators are increasingly aware of the practical implications of such phenomena as algorithmic bias, price discrimination, and black-box AI, as well as the misuse of the personal information of consumers. The dangers of algorithmic exploitation in the context of mass-market consumer transactions are underexplored. The chapter describes how technology-driven exploitative commercial practices have passed under the legal and regulatory radar, and examines the extent to which recent regulation addresses the impact of such practices.
Despite the benefits of the convergence of AI in ecommerce, it is necessary to address some concerns. The presence of AI-powered platforms raises significant challenges to consumer autonomy. This chapter discusses the overlap and interplay among three main legal regimes – the EU AI Act Proposal, the Digital Services Act (DSA), and EU Consumer Law. These laws will need to be amended with new articles to adequately address AI-specific concerns.
The chapter examines the issue of civil liability in the framework of damages resulting from the use of autonomous systems. A particular emphasis is placed on the importance of access to justice – of enhancing access for victims of harm to remedies and relief – rather than on more abstract or conceptual considerations of the appropriateness of a particular regulatory solution.