The risks emanating from algorithmic rule by law lie at the intersection of two regulatory domains: regulation pertaining to the protection of the rule of law (the EU’s rule of law agenda), and regulation pertaining to the protection of individuals against the risks of algorithmic systems (the EU’s digital agenda). Each of these domains consists of a broad range of legislation, including not only primary and secondary EU law, but also soft law. In what follows, I confine my investigation to those areas of legislation that are most relevant to the identified concerns. After addressing the EU’s competences to take legal action in this field (Section 5.1), I examine in turn the safeguards provided by regulation pertaining to the rule of law (Section 5.2), to personal data (Section 5.3), and to algorithmic systems (Section 5.4), before concluding (Section 5.5).
In this chapter, I first examine how the rule of law has been defined in legal theory, and how it has been distinguished from rule by law, which is a distortion thereof (Section 3.1). Second, I assess how the rule of law has been conceptualised in the context of the European Union, as this book focuses primarily on the EU legal order (Section 3.2). In this regard, I also draw on the acquis of the Council of Europe. The Council of Europe is a distinct jurisdictional order, yet it has heavily influenced the EU’s conceptualisation of the rule of law, and the EU regularly relies on Council of Europe sources in its own legal practices. Finally, I draw on these findings to identify the rule of law’s core principles and to distil the concrete requirements that public authorities must fulfil to comply with them (Section 3.3). Identifying these requirements – and the inherent challenges in achieving them – will subsequently allow me to build a normative analytical framework that I can use as a benchmark in Chapter 4 to assess how algorithmic regulation affects the rule of law.
This volume provides a unique perspective on an emerging area of scholarship and legislative concern: the law, policy, and regulation of human-robot interaction (HRI). The increasing intelligence and human-likeness of social robots point to a challenging future for determining appropriate laws, policies, and regulations related to the design and use of AI robots. Japan, China, South Korea, and the US, along with the European Union, Australia, and other countries, are beginning to determine how to regulate AI-enabled robots, a task that concerns not only the law but also issues of public policy and dilemmas of applied ethics raised by our personal interactions with social robots. The volume's interdisciplinary approach dissects both the specificities of multiple jurisdictions and the moral and legal challenges posed by human-like robots. As robots become more like us, so too will HRI raise issues triggered by human interactions with other people.
The use of care robots can reduce the demand for manpower in long-term care facilities. Moreover, care robots serve the needs of both the elders residing in such facilities and the facilities' staff. This chapter considers four issues. First, should long-term care robots be required to meet the high standards that current regulations impose on medical devices? Second, how should standards of use be developed for care robots based on the characteristics of the robots? On this question, I note that in Japan a public–private partnership has proven successful in regulating care robots. Third, given that the elderly may have reduced cognitive ability, how should we protect the privacy of elders and of the relatives or friends who come into contact with care robots? Finally, what legal and ethical concerns apply to the design of the interfaces between care robots and elders?
When a robot harms humans, are there any grounds for holding it criminally liable for its misconduct? Yes, provided that the robot has the ability to form, act on, and explain its moral decisions. If such a robot falls short of the basic moral standards expected by society, labeling it as a criminal can serve criminal law’s function of censuring wrongful conduct and ease the emotional harm suffered by human victims. Moreover, imposing criminal liability on robots could have significant instrumental value in certain cases, such as in identifying culpable humans. However, this does not exempt the manufacturers, trainers, or owners of the robots from any potential criminal liability.
Even the most market-oriented approaches to regulating AI-enabled robots presume some governmental regulator to collaborate in setting outcome goals. And the more advanced an AI-enabled robot becomes, the greater the need for oversight. For the past several decades, regulatory oversight boards have increasingly been used to promote the quality, transparency, and accountability of regulatory rules and policy. Recently, leading administrative law voices in the United States and the European Union have called for the creation of an oversight board to monitor regulators' use of AI entities. How do we determine whether these boards are worth the risks they create? To answer this question, this chapter uses the context of AI-enabled robots, which are increasingly prominent in homes, businesses, and education, to explain both when regulatory oversight boards are valuable and when they can frustrate society’s efforts to reduce the ill effects of emerging smart robots. Regulatory oversight boards create value in this context by conducting impact assessments of regulatory policies with an eye to the technological advancements and social context relevant to AI technologies such as robots, and they can promote regulatory foresight. However, oversight boards themselves pose risks: because they influence the methodological approach used by regulators, errors made by oversight boards can have outsized impacts. To determine whether any given oversight board is worth the risk it creates, this chapter sets out a simple cost-based approach for comparing the risks and benefits of regulatory oversight boards. This approach is then applied to emerging regulatory oversight boards concerned with robots entering society.
The chapter examines a classic subject at the intersection of HRI, social robotics, and the law: the design, manufacture, and use of humanoid AI robots for healthcare. The aim is to illustrate a new set of legal challenges that are unique to these systems when deployed in outer space. Such challenges may require the adoption of new legal standards, in particular either sui generis standards of space law, or stricter or more flexible standards for HRI in space missions, down to a new “principle of equality” between human standards and robotic standards in outer space. The chapter complements current discussions and initiatives on the development of standards for the use of humanoid AI robots in health law, consumer law, and cybersecurity by considering the realignment of terrestrial standards that we may expect with the increasing use of humanoid AI systems in space journeys. The assumption is that breathtaking advancements in AI and robotics, current trends in the privatization of space, and the evolution of current regulatory frameworks, in space law and beyond, will put the development of these new legal standards in the spotlight.
This chapter introduces the construct of anthropomorphism and highlights its relevance for human–robot interaction (HRI) research. It reviews existing definitions of anthropomorphism and distinguishes it from anthropomorphization. It further discusses established theoretical models of anthropomorphism and their respective empirical support (or lack thereof). Moreover, we address the consequences of anthropomorphism, especially for HRI. We shed light on different ways to measure anthropomorphism in HRI, discussing the advantages and disadvantages of each measurement approach. Finally, the present overview offers reflections on the added value of taking anthropomorphism and anthropomorphization into account in HRI research.