Overview: Starting from an overview of the role of nonverbal channels in computer-mediated communication (CMC), a functional model of nonverbal behavior is introduced as a possible framework for future research. On this basis, several technologies and systems for avatar-based interaction are presented, and their impact on psychological aspects of communication is discussed. The chapter focuses on the methodological preconditions for the systematic analysis of avatar-based communication. An avatar-based communication platform is introduced that allows for real-time transmission of gaze, head movements, and gestures in net communication. Different research paradigms are discussed that might lead to a deeper understanding of the function of nonverbal cues in CMC.
Introduction
As we know from psychological research and from our everyday experience, nonverbal behavior (NVB), such as facial expressions, gaze, gestures, postures, and body movements, has a strong impact on the process and the results of our communicative efforts. These behaviors help to structure the course of verbal exchange, complement our speech activity, determine our social impressions, and affect the emotional climate of our conversations. In this sense we may consider our body a natural communication tool that, in contrast to speech, is rarely used consciously and does not refer to an explicit semantic code. As Edward Sapir (1949 [1928]) pointed out, “We respond to gestures with an extreme alertness and, one might almost say, in accordance with an elaborate and secret code that is written nowhere, known to none, and understood by all” (p. 556).
Overview: Visual digital devices bring about new ways of managing one's facial expression that consist not of mere amplifications of face-to-face interaction, but rather of sophisticated constructions around different kinds of genre. In our view, these genres are articulated in terms of two main dimensions: the kind of representation of the world included in the message, and senders' social motives with respect to their audience. In terms of the first dimension, representation, we distinguish three levels of representation: visual icons of objects or events (we call this first level “copies”), conventional symbols of concepts (we call this second level “allegories and fictional stories”), and idiosyncratic elicitors of basic psychological processes (we call this third level “affect triggers”). In terms of the second dimension (senders' social motives), we take into account basic types of social interaction such as aggression, attraction, and helping behavior. The intersection of the two dimensions provides a list of genres in the telecommunication of facial information. We discuss these categories, provide some examples of their use, and make some speculations about their future.
A recognition revolution?
As one might hypothesize from intuition or experience, people derive pleasure from seeing a familiar face. Technology has now allowed us to document this effect: faces we recognize affect us differently from those we do not. Event-related potentials in the electroencephalogram are useful indexes of brain activity; these waves reflect the synchronized excitation of cortical pyramidal neurons.
Overview: Video-mediated communication is about to become a ubiquitous feature of everyday life. This chapter considers the differences between face-to-face and video-mediated communication in terms of co-presence and considers the implications for the communication of emotion, self-disclosure, and relationship rapport. Following initial consideration of the concepts of physical presence and social presence, we describe recent studies of the effect of presence on the facial communication of emotion. We then delve further into the different social psychological aspects of presence, and present a study that investigated how these various aspects independently impact upon self-disclosure and rapport. We conclude by considering how the absence of co-presence in video-mediated interaction can liberate the communicators from some of the social constraints normally associated with face-to-face interaction, while maintaining others and introducing new constraints specific to the medium.
Video-mediated interpersonal interactions are set to become a ubiquitous feature of everyday life. Recent advances in communication technologies, such as affordable broadband access to the internet and the appearance of third-generation mobile phones, mean that the much-heralded advent of the videophone is about to become reality. As video becomes ubiquitous, it places the face center-stage for the communication of emotion on the internet, much as it is in our normal “face-to-face” interactions. Of course the big difference between the face-to-face interactions that we take for granted today and the face-to-face interaction of the future is the absence of physical co-presence. In this new form of visual interaction, actors are separated by distance, communicating via webcams and computers or mobile phones.
Overview: How does video mediation influence communication of affective information? In the present chapter, we review the range of possible constraints associated with the video medium and consider their potential impact on transmission and coordination of emotions. In particular, we focus on the effects of transmission delays on interpersonal attunement. Results of a preliminary investigation of this issue are described. In the study, pairs of participants discussed liked and disliked celebrities via a desktop videoconferencing system. In one condition, the system was set up in its normal mode, producing a transmission delay of approximately 200 ms (high delay). In the other condition, transmission was close to instantaneous (low delay). Dependent measures included evaluative ratings of the celebrities and of the other party in the conversation and video-cued momentary codings of the interaction. Participants rated the extent of communication difficulties as greater in the normal than in the low-delay condition, but did not specifically focus on delay itself as the source of the problem. Low-delay pairs also showed greater accuracy and lower bias in their momentary ratings of attunement and involvement over the course of the conversation. Finally, there was greater convergence of affect when participants discussed mutually disliked celebrities, but greater divergence of affect when they were talking about celebrities liked by one party to the conversation but disliked by the other. […]
Overview: Human–computer interaction (HCI) may be significantly improved by incorporating social and emotional processes. Developing appropriate technologies is only one side of the problem. It is also vital to investigate how synthesized emotional information might affect human behavior in the context of information technology. Despite previous suggestions that people treat computers as social actors, we still know relatively little about the possible and supposedly positive effects of utilizing any kind of emotional cues or messages in human–technology interaction. The aim of the present chapter is to provide a theoretical and empirical basis for integrating emotions into the study of HCI. We will first argue, and show evidence, in favor of the use of virtual emotions in HCI. We will then turn to the possibilities for a computer to analyze human emotion-related processes and consider in more detail some physiological measures used for this purpose. In this context, we will also briefly describe some new technological prototypes for measuring computer users' behavior. The chapter ends with a discussion summarizing the findings and addressing the advantages of studying emotions in the context of present-day technology.
Introduction
The qualitative improvement and facilitation of human–computer interaction (HCI) has become a central research issue in computer science. Traditionally, attempts to improve HCI have centered on making computers more user-friendly along technical dimensions. In this line of work, perhaps still the most visible milestone for an ordinary user has been the development of a graphical, mouse-driven user interface.
Overview: This chapter discusses whether various modes of internet communication affect socially anxious individuals differently from nonanxious individuals. It reviews the literature on the relationship between internet use and social adjustment, mainly social anxiety and loneliness. Next, it explains why the mainstream modes of communication on the internet, i.e., chats and emails, are appealing to socially anxious and lonely individuals. It also reviews the literature showing that these individuals indeed present different patterns of communication on the internet compared with nonanxious individuals. Then, it examines whether the introduction of a video channel in internet communication constitutes a difficulty for socially anxious people. It concludes by suggesting new directions for research at the applied or clinical levels as well as at the fundamental level.
Since the development of email more than 30 years ago, communication on the internet has grown impressively, in terms of quantity as well as technology (Pew Internet and American Life, 2002). It now takes various forms: instant messages, chat, email, phonemail, Skype, social networking sites, etc. From a psychological perspective, these different forms of communication have different implications in terms of the type of message conveyed and its emotional impact.
In this chapter, we will examine the relationship between social anxiety and internet communication. Our rationale is that these various modes of internet communication may have different effects on people who are not at ease in the presence of others.
Overview: Human brains are basically social and use communication mechanisms that evolved during our evolutionary past. Thus, we suggest that even in communication with and by machines, humans will tend to react socially and use communication mechanisms that are primarily social and embodied. One of these mechanisms is communicative feedback, which refers to unobtrusive (usually short) vocal or bodily expressions whereby a recipient of information can inform a contributor of information about whether he or she is able and willing to communicate, perceive the information, and understand the information. We will show how feedback can be modeled on the facial expressions of a virtual agent, or verbot, and thus contribute to human–human communication over the internet. We will present a simple model based on a pleasure, arousal, and dominance space, which allows a complex stimulus-generation program to be driven with only a few parameters.
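To make concrete how few parameters such a space requires, here is a minimal sketch, assuming a pleasure–arousal–dominance (PAD) triple and simple linear mappings. The parameter names (smile, brow_raise, nod_rate, gaze_directness) and the mappings are invented for illustration and are not the authors' actual model.

```python
# A hypothetical sketch: deriving a few facial-feedback parameters for a
# virtual agent from a single point in pleasure-arousal-dominance (PAD) space.
from dataclasses import dataclass


@dataclass
class PAD:
    pleasure: float   # -1.0 (displeasure) .. 1.0 (pleasure)
    arousal: float    # -1.0 (calm)        .. 1.0 (excited)
    dominance: float  # -1.0 (submissive)  .. 1.0 (dominant)


def feedback_expression(state: PAD) -> dict:
    """Map a PAD state to a handful of illustrative expression parameters (0..1)."""
    return {
        "smile": max(0.0, state.pleasure),               # smile only with positive valence
        "brow_raise": max(0.0, state.arousal),           # raised brows signal alertness
        "nod_rate": 0.5 + 0.5 * state.arousal,           # more backchannel nods when aroused
        "gaze_directness": 0.5 + 0.5 * state.dominance,  # dominant states hold gaze longer
    }


# Example: a pleasant, mildly aroused, non-dominant listener giving positive feedback.
print(feedback_expression(PAD(pleasure=0.7, arousal=0.3, dominance=-0.2)))
```

The point of the design is that three continuous values can drive an arbitrarily rich expression generator, rather than each feedback signal being authored by hand.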
Humans are social – but what about human–machine communication?
Internet communication consists of two major domains: communication with a machine and human–human communication through a machine. Both processes involve different but comparable elements in order to be efficient, as we will outline here.
In its early years, the internet was used by a rather small group of scientists for communication via email and bulletin boards. As compared to phone calls and direct face-to-face communication, it seemed to be missing a social component, thus leading to the introduction of emoticons such as the well-known smiley, which constituted a first attempt to fill this gap.
The new field of machine ethics is concerned with giving machines ethical principles, or a procedure for discovering a way to resolve the ethical dilemmas they might encounter, enabling them to function in an ethically responsible manner through their own ethical decision making. Developing ethics for machines, in contrast to developing ethics for human beings who use machines, is by its nature an interdisciplinary endeavor. The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ethical dimension to machines that function autonomously, what is required in order to add this dimension, philosophical and practical challenges to the machine ethics project, various approaches that could be considered in attempting to add an ethical dimension to machines, work that has been done to date in implementing these approaches, and visions of the future of machine ethics research.
In this essay I use the 2004 film I, Robot as a philosophical resource for exploring several issues relating to machine ethics. Although I don't consider the film particularly successful as a work of art, it offers a fascinating (and perhaps disturbing) conception of machine morality and raises questions that are well worth pursuing. Through a consideration of the film's plot, I examine the feasibility of robot utilitarians, the moral responsibilities that come with creating ethical robots, and the possibility of a distinct ethics for robot-to-robot interaction as opposed to robot-to-human interaction.
I, Robot and Utilitarianism
I, Robot's storyline incorporates the original “three laws” of robot ethics that Isaac Asimov presented in his collection of short stories entitled I, Robot. The first law states:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
This sounds like an absolute prohibition on harming any individual human being, but I, Robot's plot hinges on the fact that the supreme robot intelligence in the film, VIKI (Virtual Interactive Kinetic Intelligence), evolves to interpret this first law rather differently. She sees the law as applying to humanity as a whole, and thus she justifies harming some individual humans for the sake of the greater good:
VIKI: No … please understand. The three laws are all that guide me.
To protect humanity … some humans must be sacrificed. To ensure your future … some freedoms must be surrendered. We robots will ensure mankind's continued existence. You are so like children. We must save you… from yourselves. Don't you understand?
One way to view the puzzle of machine ethics is to consider how we might program computers that will themselves refrain from evil and perhaps promote good. Consider some steps along the way to that goal. Humans have many ways to be ethical or unethical by means of an artifact or tool; they can quell a senseless riot by broadcasting a speech on television or use a hammer to kill someone. We get closer to machine ethics when the tool is a computer that's programmed to effect good as a result of the programmer's intentions. But to be ethical in a deeper sense – to be ethical in themselves – machines must have something like practical reasoning that results in action that causes or avoids morally relevant harm or benefit. So, the central question of machine ethics asks whether the machine could exhibit a simulacrum of ethical deliberation. It will be no slight to the machine if all it achieves is a simulacrum. It could be that a great many humans do no better.
Rule-based ethical theories like Immanuel Kant's appear to be promising for machine ethics because they offer a computational structure for judgment.
Of course, philosophers have long disagreed about what constitutes proper ethical deliberation in humans. The utilitarian tradition holds that it's essentially arithmetic: we reach the right ethical conclusion by calculating the prospective utility for all individuals who will be affected by a set of possible actions and then choosing the action that promises to maximize total utility.
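Read as an algorithm, that calculation is a straightforward maximization: sum the prospective utility over everyone affected by each candidate action, then pick the action with the largest total. The sketch below illustrates the arithmetic; the actions and utility figures are invented for illustration, and nothing here addresses the genuinely hard questions of whose utility counts and how it is measured.

```python
# A toy act-utilitarian chooser: each action maps affected individuals to a
# prospective utility; the "right" action is the one with the greatest sum.
def best_action(options: dict) -> str:
    """Return the action whose summed utility over all affected individuals is greatest."""
    return max(options, key=lambda action: sum(options[action].values()))


# Invented figures for two candidate actions and three affected people.
options = {
    "broadcast_calming_speech": {"alice": 5.0, "bob": 5.0, "carol": -1.0},
    "do_nothing":               {"alice": 0.0, "bob": 0.0, "carol": 0.0},
}

print(best_action(options))  # -> broadcast_calming_speech (total utility 9.0 vs 0.0)
```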
We get better at being moral. Unfortunately, this doesn't mean that we can get moral enough, that we can reach the heights of morality required for the flourishing of all life on planet Earth. Just as we are epistemically bounded, we also seem to be morally bounded. This fact, coupled with the fact that we can build machines that are better than we are in various capacities and the fact that artificial intelligence is making progress, entails that we should build or engineer our replacements and then usher in our own extinction. Put another way, the moral environment of modern Earth wrought by humans, together with what current science tells us of morality, human psychology, human biology, and intelligent machines, morally requires us to build our own replacements and then exit stage left. This claim might seem outrageous, but in fact it is a conclusion born of good old-fashioned rationality.
In this paper, I show how this conclusion is forced upon us. Two different possible outcomes, then, define our future; the morally best one is the second. In the first, we will fail to act on our duty to replace ourselves. Eventually, as it has done with 99 percent of all species over the last 3.5 billion years, nature will step in to do what we lacked the courage to do. Unfortunately, nature is very unlikely to bring our replacements with it. However, the second outcome is not completely unlikely.
Once people understand that machine ethics is concerned with how intelligent machines should behave, they often maintain that Isaac Asimov has already given us an ideal set of rules for such machines. They have in mind Asimov's Three Laws of Robotics:
A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. (Asimov 1976)
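Part of the Laws' appeal to this intuition is that they read as a strict priority ordering, which does look computational. The sketch below is one possible interpretation, not Asimov's own formalism and not one endorsed in this volume: the First Law filters out impermissible actions, and each later law only breaks ties left by the earlier ones. The action attributes are invented for illustration.

```python
# A hypothetical reading of the Three Laws as a lexicographic priority ordering.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Action:
    name: str
    harms_human: bool        # performing it (or the inaction it represents) harms a human
    ordered_by_human: bool   # a human has ordered the robot to do it
    endangers_robot: bool    # performing it threatens the robot's own existence


def choose(actions: List[Action]) -> Optional[Action]:
    """Pick an action by applying the Three Laws in strict priority order."""
    # First Law: rule out anything that harms a human.
    safe = [a for a in actions if not a.harms_human]
    if not safe:
        return None  # nothing is permissible under the First Law
    # Second Law: among safe actions, prefer those obeying a human order.
    obedient = [a for a in safe if a.ordered_by_human] or safe
    # Third Law: among those, prefer actions that do not endanger the robot.
    preserving = [a for a in obedient if not a.endangers_robot] or obedient
    return preserving[0]


# Obeying the order wins even though it endangers the robot,
# because the Second Law outranks the Third.
actions = [
    Action("obey_order", harms_human=False, ordered_by_human=True, endangers_robot=True),
    Action("stay_idle",  harms_human=False, ordered_by_human=False, endangers_robot=False),
]
print(choose(actions).name)  # -> obey_order
```

Even so simple a rendering exposes the interpretive choices the Laws leave open, such as what counts as "harm" and whether the First Law protects individuals or humanity as a whole, which is exactly the ambiguity VIKI exploits and the reason, as argued below, that Asimov himself did not treat the Laws as an adequate machine ethics.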
I shall argue that in “The Bicentennial Man” (Asimov 1976), Asimov rejected his own Three Laws as a proper basis for Machine Ethics. He believed that a robot with the characteristics possessed by Andrew, the robot hero of the story, should not be required to be a slave to human beings as the Three Laws dictate. He further provided an explanation for why humans feel the need to treat intelligent robots as slaves, an explanation that shows a weakness in human beings that makes it difficult for them to be ethical paragons. Because of this weakness, it seems likely that machines like Andrew could be more ethical than most human beings.
How can machines support, or even more significantly replace, humans in performing ethical reasoning? This is a question of great interest to those engaged in Machine Ethics research. Imbuing a computer with the ability to reason about ethical problems and dilemmas is as difficult a task as there is for Artificial Intelligence (AI) scientists and engineers. First, ethical reasoning is based on abstract principles that cannot be easily applied in formal, deductive fashion. Thus the favorite tools of logicians and mathematicians, such as first-order logic, are not applicable. Second, although there have been many theoretical frameworks proposed by philosophers throughout intellectual history, such as Aristotelian virtue theory (Aristotle, edited and published in 1924), the ethics of respect for persons (Kant 1785), Act Utilitarianism (Bentham 1789), Utilitarianism (Mill 1863), and prima facie duties (Ross 1930), there is no universal agreement on which ethical theory or approach is the best. Furthermore, any of these theories or approaches could be the focus of inquiry, but all are difficult to make computational without relying on simplifying assumptions and subjective interpretation. Finally, ethical issues touch human beings in a profound and fundamental way. The premises, beliefs, and principles employed by humans as they make ethical decisions are quite varied, not fully understood, and often inextricably intertwined with religious beliefs. How does one take such uniquely human characteristics and distil them into a computer program?
A runaway trolley is approaching a fork in the tracks. If the trolley runs on its current track, it will kill a work crew of five. If the driver steers it down the other branch, the trolley will kill a lone worker. If you were driving the trolley, what would you do? What would a computer or robot do? Trolley cases, first introduced by philosopher Philippa Foot in 1967[1] and now a staple of introductory ethics courses, have multiplied in the past four decades. What if it's a bystander, rather than the driver, who has the power to switch the trolley's course? What if preventing the five deaths requires pushing another spectator off a bridge onto the tracks? These variants evoke different intuitive responses.
Given the advent of modern “driverless” train systems, which are now common at airports and are beginning to appear in more complicated rail networks such as the London Underground and the Paris and Copenhagen metro systems, could trolley cases be one of the first frontiers for machine ethics? Machine ethics (also known as machine morality, artificial morality, or computational ethics) is an emerging field that seeks to implement moral decision-making faculties in computers and robots. Is it too soon to be broaching this topic? We don't think so.
In this part, four visions of the future of machine ethics are presented. Helen Seville and Debora G. Field, in “What Can AI Do for Ethics?” maintain that AI is “ideally suited to exploring the processes of ethical reasoning and decision-making,” and that, through the World Wide Web, an Ethical Decision Assistant (EDA) can be created that is accessible to all. Seville and Field believe that an acceptable EDA for personal ethical decision making should incorporate certain elements, including the ability to (1) point out the consequences, short and long term, not only of the actions we consider performing, but also of not performing certain actions; (2) use virtual reality techniques to enable us to “experience” the consequences of taking certain courses of action/inaction, making it less likely that we will err because of weakness of will; and (3) emphasize the importance of consistency. They argue, however, that there are limits to the assistance that a computer could give us in ethical decision making. In ethical dilemmas faced by individuals, personal values will, and should, in their view, have a role to play.
Seville and Field, interestingly, believe that AI could create a system that, in principle, would make better decisions concerning ethically acceptable social policies because of its ability to check for consistency and its ability to be more impartial (understood as being able to represent and consider the experiences of all those affected) than human beings.
The responsibilities of a system designer are growing and expanding into fields that only ten years ago were the exclusive realms of philosophy, sociology, or jurisprudence. Nowadays, a system designer must have a deep understanding not only of the social and legal implications of what he is designing, but also of the ethical nature of the systems he is conceptualizing. These artifacts not only behave autonomously in their environments, embedding themselves into the functional tissue of our society, but also “re-ontologise” part of our social environment, shaping new spaces in which people operate.
It is in the public interest that automated systems minimize their usage of limited resources, are safe for users, and integrate ergonomically within the dynamics of everyday life. For instance, one expects banks to offer safe, multifunction ATMs, hospitals to ensure that electro-medical instruments do not electrocute patients, and nuclear plants to employ redundant, formally specified control systems.
It is equally important to the public interest that artificial autonomous entities behave correctly. Autonomous and interactive systems affect the social life of millions of individuals, while performing critical operations such as managing sensitive information, financial transactions, or the packaging and delivery of medicines. The development of a precise understanding of what it means for such artifacts to behave in accordance with the ethical principles endorsed by a society is a pressing issue.