
When Pinocchio becomes a real boy: Capability and felicity in AI and interactive depictions

Published online by Cambridge University Press:  05 April 2023

John M. Carroll*
Affiliation:
College of Information Sciences and Technology, Pennsylvania State University, University Park, PA 16802, USA [email protected]; https://jcarroll.ist.psu.edu

Abstract

Clark and Fischer analyze social robots as interactive depictions, presenting characters that people can interact with in social settings. Unlike other types of depictions, the props for social robot depictions depend on emerging interactive technologies. This raises questions about how such depictions depict: They conflate character and prop in ways that delight, confuse, mistreat, and may become ordinary human–technology interactions.

Type: Open Peer Commentary
Copyright: © The Author(s), 2023. Published by Cambridge University Press

Clark and Fischer (C&F) characterize social robots as autonomous agents that nonetheless act on the authority of and are "ultimately controlled by principals" – defined as the other agents who designed, manufactured, or administer the social robot. They note that controls for social robots are typically not transparent, meaning that a robot's design does not convey to its human users that, or how, it is controlled by its principals.

Indeed, social robots seem deliberately designed to misrepresent their epistemic states and conceal their limitations. C&F cite this greeting from the robot Smooth: "I wonder if you would like something to drink." Is the robot empathetic about someone's thirst? Is it wondering what drinking is like? The resulting vagueness may be instrumental in encouraging imagination and emotional projection in humans, fueling what C&F call the "pretense" of depictions: people acting toward the depiction as if it were what it depicts.

The pretense of depiction can be fun: Chatty Cathy (the character depicted) says "I love you," and at the same place and time, the doll (the prop depicting the character) plays a recorded phrase when the "chatty ring" in its upper back is pulled. The pretense invites confusion about (apparently) autonomous behavior and the nature of authorities in the background, but it is moderated by the obvious ring in the doll's back. For Smooth the robot, the projected and imagined character autonomously approaches people and offers a drink, but the physical embodiment of the robot is a prop that executes instructions a human created. This is a subtle and evocative distinction.

Outside playful pretense, things are more complex. Facilitating human confusion about the actual capabilities of a robot creates ethical problems. Thus, if someone were to deduce from dialog interaction that Smooth can wonder about things, and that it understands the relationships and experiences inherent in thirst and drinking, they would have been deceived. If someone expects a robot math tutor to teach them math but it introduces erroneous definitions, they have been educationally harmed. C&F observe that in such misfire scenarios people would sue the manufacturer or programmer, not the robot. But whoever is sued, people would have been mistreated. Ethical problems of this sort are not new; Weizenbaum (1976) was shocked at how readily and tenaciously people experienced his mid-1960s script-based chatbot ELIZA as human.

In their discussion of agents acting on the authority of other, principal agents, C&F conflate cases where the agent acting on another agent's authority (called a rep-agent) is a social robot with cases in which the rep-agent is a human playing a job role. But these cases are different. Thus, Clark and his hotel booking agent briefly divert their phone call to high school reminiscence, which also allows them to pursue the enabling goals of building trust and common ground. The human agent can do this because she projects the character of the booking agent if and when she chooses. Social robots as rep-agents never act on their own authority; their capabilities for goals and actions are too limited. However much they conceal it, they must be controlled by principals.

In the future, challenges around interactive depictions may become more complex, arising not only from exaggeration of limited capabilities, but also from ethically presenting capabilities that rival or might even exceed those of humans, and that have only a tenuous causal connection to remote (human) authorities. For example, the language model GPT-3 (Generative Pre-trained Transformer 3) is a learning machine trained on a gargantuan text base; it responds broadly to language prompts (Dale, 2021). It can compose original sonnets in the style of Shakespeare, develop program code from natural language specifications, and create news articles and philosophy essays, among many other applications. It carries out these tasks at human levels.
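To make concrete how thin the causal connection between character and prop can be, consider a minimal sketch of driving a GPT-3-style completion model, assuming the pre-1.0 OpenAI Python client; the model name, prompt, and parameters below are illustrative assumptions rather than details from C&F or this commentary.

```python
# Minimal sketch (not a definitive implementation): the "character" a user
# converses with is produced by a prop that executes a human-authored prompt.
# Assumes the pre-1.0 OpenAI Python client; model name and parameters are
# illustrative assumptions.
import openai

openai.api_key = "YOUR_API_KEY"  # credential held by the human principals

response = openai.Completion.create(
    model="text-davinci-003",  # assumed GPT-3-family completion model
    prompt="Write a sonnet, in the style of Shakespeare, about offering a guest a drink.",
    max_tokens=200,
    temperature=0.7,  # sampling randomness chosen by the principals
)

print(response["choices"][0]["text"].strip())
```

Whatever autonomy users project onto the resulting text, the prop's behavior here is traceable to choices made by principals: the prompt, the parameters, and the credentials are all authored and controlled by humans.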

These capabilities raise many challenges already; GPT-3 might soon carry out the rep-agent depiction roles enumerated by C&F better than the humans currently employed to perform them. Like ELIZA and social robots, GPT-3 can convey self-awareness without necessarily having it. At the least, it is the best simulacrum yet (Tiku, 2022).

The rapid emergence of interactive possibilities for artificial intelligence (AI) applications has helped to highlight "explainable AI" (Holzinger et al., 2022). AI systems can be smart enough to carry out significant cognitive tasks yet lack the capability to insightfully and effectively explain how and why they do what they do. For interactive AI, the explanation required is not an execution trace or a design rationale; it is conversational explanation of the sort people expect from their responsible interlocutors (Carroll, 2022). C&F's human rep-agents provide a model for how agents work together to conversationally explain role-related conduct. Robots have a responsibility to explain themselves effectively, or to clearly convey that they cannot (cf. the chatty ring).

Hancock (2022) argues that humans and emerging AI systems could experience time very differently. He mocks the expression "real time" as indicative of how humans uncritically privilege a conception of time scaled to human perception and cognition. Emerging AI systems might think through and carry out complex courses of action without humans noticing that anything happened. The very transition from contemporary AI to quite autonomous and self-aware agents might occur in what humans would experience as "a single perceptual moment." Hancock worries this could entail a Skynet Armageddon, but it seems more likely to result in greater diversity among AI systems, some of whose capabilities are occasionally unclear, even to themselves. Humans are very skilled at building and coordinating common ground with others, as when Clark reminisces about high school with the hotel agent. This enables the development of trust and fluent interaction. Future robots must effectively coordinate common ground with humans, and humans must reciprocally depict themselves as responsible and empathetic.

In the 1883 children's novel, Pinocchio is a wooden marionette, a puppet depicting a boy (Collodi, 1883). Throughout the novel, Pinocchio encounters challenges and often behaves too reflexively, without much planning or empathy for others. Ultimately, though, he becomes more responsible and empathetic. Through the intervention of a fairy, Pinocchio becomes a real boy. The novel ends there, but we might think that is where the more interesting story begins.

Financial support

This work was supported by National Institutes of Health grant R01LM013330-03.

Competing interest

None.

References

Carroll, J. M. (2022). Why should humans trust AI? ACM Interactions, 29(4), 73–77. https://dl.acm.org/doi/10.1145/3538392
Collodi, C. (1883). The adventures of Pinocchio. Labero.
Dale, R. (2021). GPT-3: What's it good for? Natural Language Engineering, 27(1), 113–118.
Hancock, P. A. (2022). Avoiding adverse autonomous agent actions (with peer commentary). Human–Computer Interaction, 37(3), 211–236. https://doi.org/10.1080/07370024.2021.1970556
Holzinger, A., Goebel, R., Fong, R., Moon, T., Müller, K. R., & Samek, W. (Eds.) (2022). xxAI – Beyond explainable AI. Springer. https://doi.org/10.1007/978-3-031-04083-2
Tiku, N. (2022). The Google engineer who thinks the company's AI has come to life. The Washington Post, June 11. https://www.washingtonpost.com/technology/2022/06/11/google-ai-lamda-blake-lemoine/
Weizenbaum, J. (1976). Computer power and human reason: From judgment to calculation. W. H. Freeman.