The integration of assisted living technologies in the home is rapidly accelerating. As socially assistive robots (SARs) often operate in the private sphere of life, sometimes in symbiotic relations with the people they assist, they may give rise to privacy concerns. This chapter investigates the potential privacy and data protection issues arising from the increasing deployment of ambient assisted living (AAL) technologies in general and SARs in particular. It addresses privacy concerns related to human–robot interactions, including conversational interfaces and audio- and video-based assistive technologies, and analyzes them within the European context. Since the wide range of privacy concerns resulting from the use of SARs raises particular challenges for the design process, the chapter zooms in on the Privacy by Design concept introduced in the European General Data Protection Regulation (GDPR). Because communication and interaction with robots in therapeutic and care contexts affect data protection, these privacy concerns pose challenges that must be considered across a life cycle that begins with robot design and ends with implementation in care settings, including home care.
Social robots present a novel category of socially interactive technology. There is increasing interest in how people behave toward social robots, how robots can change human behavior, and what factors influence this interaction. The relationship is complex, involving the robot’s physical embodiment, social behaviors, and capabilities, as well as human factors. Because people differ in how they behave toward robots, this chapter examines the role of an individual’s cultural background and the factors interwoven with what we generally define as culture, and how these contribute to a holistic understanding of how robots are perceived.
Robots are an increasingly common feature in public spaces. From regulations permitting broader drone use in public airspace and autonomous vehicle testing on public roads, to laws permitting or restricting the presence of delivery robots on sidewalks – law often precipitates the introduction of new robotic systems into shared spaces. Laws that permit, regulate, or prohibit robotic systems in public spaces will in many ways determine how this new technology affects public space and the people who inhabit that space. This raises the questions: How should regulators approach the task of regulating robots in public spaces? And should any special considerations apply to the regulation of robots because of the public nature of the spaces they occupy? With a focus on the Canadian legal system, and drawing upon insights from the interdisciplinary field of law and geography, this chapter argues that the laws that regulate robots deployed in public space will affect the public nature of that space, potentially to the benefit of some human inhabitants of the space over others. For this reason, special considerations should apply to the regulation of robots that will operate in public space. In particular, the entry of a robotic system into a public space should never be prioritized over communal access to and use of that space by people. And, where a robotic system serves to make a space more accessible, lawmakers should avoid permitting differential access to that space through the regulation of that robotic system.
We humans are biased – and our robotic creations are biased, too. Bias is a natural phenomenon that drives our perceptions and behavior, including when it comes to socially expressive robots that have humanlike features. Recognizing that we embed bias, knowingly or not, within the design of such robots is crucial to studying its implications for people in modern societies. In this chapter, I consider the multifaceted question of bias in the context of humanoid, AI-enabled, and expressive social robots: Where does bias arise, what does it look like, and what can (or should) we do about it? I offer observations on human–robot interaction (HRI) along two parallel tracks: (1) robots designed in bias-conscious ways and (2) robots that may help us tackle bias in the human world. I outline a curated selection of cases for each track drawn from the latest HRI research and positioned against social, legal, and ethical factors. I also propose a set of critical next steps to tackle the challenges and opportunities surrounding bias within HRI research and practice.
Recent robotics, AI, and human–robot interaction techniques are steadily improving the capabilities of social robots. Yet social robots still seem to lack capabilities that are important for their more substantial use. This chapter focuses specifically on mobile social robots intended for the service industry. We introduce some of the technical advances in basic social capabilities for mobile social robots, such as how robots should respect human personal space, approach people, pass human pedestrians, and move harmoniously within crowds. We then discuss moral issues related to social robots, particularly the problem of robot abuse. The fact that children treat social robots aggressively and even violently indicates a lack of peer respect for them. Here, we discuss what future social robots would need to be equipped with, and argue for the need for a “moral interaction” capability. Two capabilities, peer respect and peer pressure, are discussed: Social robots would need to lead people to consider them a kind of moral recipient (peer respect) and to encourage people to behave well (peer pressure), much as “human eyes” do. We introduce some recent research on moral interaction capabilities. Finally, we discuss the implications for law and regulation if social robots are equipped with far more extensive moral interaction capabilities in the future.
This chapter discusses the topic of ethics, law, and policy as related to human interaction with robots that are humanoid in appearance, expressive, and AI enabled. The term “robot ethics” (or roboethics) is generally concerned with ethical problems that occur when humans and robots interact in various social contexts – for example, whether robots pose a threat to humans in warfare, the use of robots as caregivers, or the use of robots that make decisions that could affect historically disadvantaged populations. In each case, the focus of the discussion is predominantly on how to design robots that act ethically toward humans (some refer to this issue as “machine ethics”). However, the topic of robot ethics could also refer to the ethical issues associated with human behavior toward robots, especially as robots become active members of society. It is this latter and less investigated view of robot ethics that the chapter focuses on, and specifically whether robots that are humanoid in appearance, AI enabled, and expressive will be the subject of discrimination based on the robot’s perceived race, gender, or ethnicity. This is an emerging topic of interest among scholars within law, robotics, and social science, and there is evidence to suggest that biases and other negative reactions that may be expressed toward people in social contexts may also be expressed toward robots that are entering society. For these and other reasons presented within the chapter, a discussion of the ethical treatment of robots is an important and timely topic for human–robot interaction.
This chapter introduces the problem of regulating human–robot interaction (HRI) according to the rule of law at the convergence of the Web of Data, the Internet of Things, and Industry 5.0. It explains some strategies fleshed out in the EU H2020 Project OPTIMAI, a data-driven platform for zero-defect manufacturing (ZDM) that deploys a smart industry ecosystem. The chapter defines the notions of legal governance and smart legal ecosystems as a mindset and as a toolkit to foster and regulate HRI in iterative cycles.
Recently there has been more attention to the cultural aspects of social robots. This chapter contributes to this effort by offering a philosophical, in particular Wittgensteinian framework for conceptualizing in what sense and how robots are related to culture and by exploring what it would mean to create an “Ubuntu Robot.” In addition, the chapter gestures toward a more culturally diverse and more relational approach to social robotics and emphasizes the role technology can play in addressing the challenges of modernity and in assisting cultural change: It argues that robots can help us to engage in cultural dialogue, reflect on our own culture, and change how we do things. In this way, the chapter contributes to the growing literature on cross-cultural approaches to social robotics.
This chapter examines, from a theoretical perspective, the role of the interface in the formation of trust in human–robot interaction, notably between humans and humanoid robotic agents. Interfaces are fundamental means of communication that not only allow interaction between different agents but also enable them to exchange information on the nature of the context in which the interaction takes place: This can facilitate the formation of a trust relationship between agents, whether human or robotic, within a more or less structured context. Drawing on the analysis presented in this contribution, the chapter concludes with a discussion of the pivotal role of human–robot interfaces as essential means of communication.
After years of researching the law and ethics of robotic systems in the security sector – that is, defense, law enforcement, and disaster relief – we present a set of eight recommendations bearing on human–robot interaction. These recommendations should guide developers, lawyers, ethicists, and policymakers who navigate a highly technical and dynamic field. Our eight recommendations boil down to the following: In circumstances where technology develops rapidly while normative discussions fail to keep up and uncertainty prevails, we suggest a pragmatic, practical, and dynamic form of applied ethics for those involved with robotics – an ethics that eschews high politics at first, but ultimately produces experience that can be fed back into higher-level normative discussions on robots and technology more generally and thus moves them forward.
The use of advanced robots interacting with humans, especially in the health and care sector, will interfere with the personal autonomy and privacy of users. A person can permit such interference by giving their consent. In law, the term “consent” has different connotations depending on the field of law and the jurisdiction. A recurrent topic in robotics research is how consent can be implemented in human–robot interaction (HRI). However, it is not always clear what consent actually means. The terms “informed consent” and “consent” are used interchangeably, with an emphasis on informational privacy, whereas the embodiment of robots also interferes with “physical” privacy. This lack of nuance leads to misconceptions about the role consent can play in HRI. In this chapter, I critically examine the perception and operationalization of consent in HRI and ask whether consent is an appropriate concept at all. To adapt consent to HRI, it is crucial to understand the requirements for valid consent and the implications consent has for the deployment of robots, in particular in health and care settings. I argue that valid consent to the use of AI-driven robots may be hard, if not impossible, to obtain due to the complexity of understanding the intentions and capabilities of the robot. In many instances, consent would not be appropriate in robotics because it would not be possible to have truly informed and free consent beyond a simple “yes” or “no.” This chapter is a critical analysis that builds on the extensive scholarly critique of consent. As the chapter shows, robotics is yet another field where the shortcomings of consent are salient.
This chapter assesses EU consumer law and policy in the light of consumers’ increasing interaction with humanoid robots. Amidst the plethora of benefits that humanoid robots can bring to consumers, they can also challenge the application of consumer protection principles. The biggest problems lie in the areas of protection of the weaker party, consumer autonomy, nondiscrimination, and privacy. Consumers may face difficulties ranging from exposure to unfair commercial practices to difficulties in exercising their rights under the Consumer Rights Directive, claiming damages for AI-related losses under the Product Liability Directive, exercising rights – for example, to consent, to be forgotten, to be informed, and not to be profiled – under the General Data Protection Regulation, and avoiding discrimination by credit scoring systems under the Consumer Credit Directive. The chapter explores the sufficiency of EU legislation to tackle this wide range of challenges and proposes targeted regulatory action. The rationale behind addressing these issues is to strike a balance between consumer protection and innovation in robotics.
When the idea of a robot butler or maid started to spread, Rosie from the Jetsons became a symbol for a vision shared by many roboticists: One day, each household would have a robotic helper that takes care of our chores and keeps us company. However, when we examine the realities of robots produced for the consumer market today (from autonomous cars to social robots), we discover that realizing such a future with robots is not all that rosy for consumers. From a property law perspective, we find that most consumers will only partially own the robots they purchase. The software that gives life, functionality, and character to the robots will likely be owned by the manufacturer, who can take it all away from the purchasers just as easily as they can change subscription fees and user terms and conditions. In this chapter, we draw analogies from the automotive industry and current business trends in the consumer robotics market to underscore seven issues that stem from this precarious ownership dynamic. Reflecting on our collective journey toward “a robot in every home,” we highlight how the status quo is not only unsustainable from an environmental perspective (i.e., generation of electronic waste) but also ineffective in generating a flourishing consumer robotics market. We find that the existing roboethics and AI ethics frameworks do not sufficiently protect consumers from these issues and call on the community to critically examine the complex ethical and societal implications of deploying social robots widely.
In this concluding chapter, we discuss future directions in law, policy, and regulations for robots that are expressive, humanoid in appearance, becoming smarter, and that are anthropomorphized by users. Given the wide range of skills shown by this emerging class of robots, legal scholars, legislators, and roboticists are beginning to discuss how law, policy, and regulations should be applied to robots that are becoming more like us in form and behavior. For such robots, we propose that human–robot interaction should be the focus of efforts to regulate increasingly smart, expressive, humanoid, and social robots. Therefore, in the context of human–robot interaction, this chapter summarizes our views on future directions of law and policy for robots that are becoming highly social and intelligent, displaying the ability to detect and express emotions, and controversially, in the view of some commentators, beginning to display a rudimentary level of self-awareness.
This chapter aims to present the potential challenges and opportunities brought forth by the integration of care robots within home- and healthcare services in the Nordic context, stressing the difference between home care and conventional or institutional care. In this chapter, we specifically focus on social and assistive robots. The chapter also discusses whether, and to what extent, these care robots need to be designed for a diversity of users. It suggests universal design and the accessibility of robots as a potential approach to making care robots inclusive. Moreover, the chapter argues that an inclusive robot approach is important in order to protect the right to health(care), especially if robots are integrated as part of home- and/or healthcare services. The chapter illustrates the need for universal design and accessible robots, as well as the idea of inclusive care robots, through empirical work based on a series of interview sessions. Finally, the chapter concludes that social and assistive robots need to be seen in context if they are integrated into home- and healthcare services, and that a universal design approach could offer an ethical design charter, a solution, or a guide for designing robots that cater to diverse populations with various situated abilities, considering rights such as the right to healthcare.