
A neurocognitive view on the depiction of social robots

Published online by Cambridge University Press: 05 April 2023

Ruud Hortensius
Affiliation:
Department of Psychology, Utrecht University, 3584 CS Utrecht, The Netherlands [email protected] www.ruudhortensius.nl
Eva Wiese
Affiliation:
Cognitive Psychology & Ergonomics, Institute of Psychology and Ergonomics, School of Mechanical Engineering and Transport Systems, Berlin Institute of Technology, D-10587 Berlin, Germany [email protected] https://sites.google.com/view/gmuscilab

Abstract

While we applaud the careful breakdown by Clark and Fischer of the representation of social robots held by the human user, we emphasise that a neurocognitive perspective is crucial to fully capture how people perceive and construe social robots at the behavioural and brain levels.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2023. Published by Cambridge University Press

Within their framework, Clark and Fischer (C&F) focus on observable behaviour (e.g., language) and self-reported behaviour (e.g., ratings on a questionnaire). While these measures provide a first indication of how people perceive and interact with social robots, they do not paint a complete picture. We propose that perspectives and techniques from psychology and neuroscience will allow us to answer not only whether people indeed represent a social robot as three connected physical scenes, but also when and how they do so. More objective measures, such as neuroimaging, can peel apart the multiple layers of human–robot interaction by outlining the neural and behavioural mechanisms that support these interactions.

When observing the emotions expressed by a social robot, an individual could be asked what emotion the robot is displaying (i.e., an open-ended question) or whether the robot is happy or angry (i.e., a two-alternative forced choice). While these and similar subjective measures (e.g., questionnaires, in-depth interviews) provide a glimpse of how the individual views the robot, they provide just that: a glimpse. Naming an emotion does not mean that the robot is represented as a happy robot, nor does it mean that the same mechanisms are used to observe and understand the emotions expressed by the robot as when people observe and understand the emotions of other people. Behavioural and neural measures are vital to truly understand the social cognitive mechanisms at play during human–robot interaction (Wiese, Metta, & Wykowska, 2017). Advanced neuroimaging and brain stimulation techniques, paired with new analytic approaches as well as implicit measures, will provide a more detailed understanding of the representation held by the human user. For instance, fMRI studies indicate that some neurocognitive processes, such as person perception, show similar profiles across interactions with people and social robots, while other processes, such as theory-of-mind, show dissimilar profiles during these interactions (Hortensius & Cross, 2018; Hortensius, Hekele, & Cross, 2018; Wykowska, Chaminade, & Cheng, 2016). It is important to note that even in the presence of similar behavioural patterns or neural activity, the underlying mechanism might differ between these interactions. New analytic approaches derived from cognitive neuroscience, such as representational similarity analysis, can test whether behavioural reactions or activity within a neural network represent, for example, agent type (robot or human) or emotion (angry or happy) (Henschel, Hortensius, & Cross, 2020). These techniques and approaches could therefore be vital in outlining at what level and scene people represent the robot expressing emotions.
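
To make the logic of representational similarity analysis concrete, the following minimal sketch compares a neural dissimilarity matrix against model matrices coding agent type and emotion. It is written in Python with simulated data; the patterns, condition labels, and sizes are hypothetical stand-ins for real condition-wise fMRI estimates, not values from the studies cited above.

```python
# Minimal RSA sketch with simulated data; all values are illustrative.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical mean activity patterns (n_voxels features) for four conditions
# crossing agent type (human, robot) and emotion (happy, angry).
conditions = ["human_happy", "human_angry", "robot_happy", "robot_angry"]
patterns = rng.normal(size=(4, 200))  # stand-in for real fMRI patterns

# Neural representational dissimilarity matrix (RDM): 1 - Pearson correlation
# for every pair of conditions (6 pairs for 4 conditions).
neural_rdm = pdist(patterns, metric="correlation")

# Model RDMs: a pair is dissimilar (1) when it differs on the coded dimension.
agent = np.array([0, 0, 1, 1])    # human vs. robot
emotion = np.array([0, 1, 0, 1])  # happy vs. angry
agent_rdm = pdist(agent[:, None], metric="hamming")
emotion_rdm = pdist(emotion[:, None], metric="hamming")

# Rank-correlate each model RDM with the neural RDM: a higher correlation
# suggests the measured activity carries more information about that dimension.
for name, model_rdm in [("agent type", agent_rdm), ("emotion", emotion_rdm)]:
    rho, p = spearmanr(model_rdm, neural_rdm)
    print(f"{name}: Spearman rho = {rho:.2f} (p = {p:.2f})")
```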

This approach has been successful in distilling the multiple layers of human–robot interaction, for example in mind perception. Top-down effects of mental state attribution are widely observed in social perception (Teufel, Fletcher, & Davis, 2010), and this also holds for human–robot interaction. Besides the appearance of the robot, the beliefs and expectations held by the individual play a critical role in how they construe the social robot (Hortensius & Cross, 2018; Wykowska, Wiese, Prosser, & Müller, 2014). For example, if people believe that the action of a robot has a human origin, activity in a region of the theory-of-mind network is increased compared to when they believe the action has a pre-programmed origin (Özdem et al., 2017). When teasing apart mind perception even further, we can dissociate two distinct processes: theory-of-mind and anthropomorphism. Although these are often viewed or treated as similar, or even as a single form of mind perception, recent evidence from psychology and neuroscience suggests otherwise (Hortensius et al., 2021; Tahiroglu & Taylor, 2019). While the observed outcome, understanding the actions and hidden states of an agent, is the same, these forms of mind perception are likely supported by separate behavioural and neural mechanisms. For instance, activity within the theory-of-mind network did not relate to an individual's tendency to ascribe human characteristics to objects and nonhuman agents, such as robots (Hortensius et al., 2021). Even if the observer understands the gestures and motion of the robot as happy, it does not mean that they truly believe that the robot is happy, or that the same processes are used as when understanding the happiness of a friend.
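
The logic of such belief manipulations is a within-participant contrast: identical robot behaviour, different instructed origin. The sketch below illustrates that contrast in Python with simulated activation estimates; the region, effect size, and sample are hypothetical and are not values from Özdem et al. (2017).

```python
# Minimal sketch of a belief-manipulation contrast: simulated per-participant
# activation estimates (e.g., betas from a theory-of-mind region) under two
# instruction conditions. All numbers are illustrative.
import numpy as np
from scipy.stats import ttest_rel

rng = np.random.default_rng(2)
n = 30  # hypothetical number of participants

# Same robot behaviour, different belief about its origin.
human_origin = rng.normal(loc=0.6, scale=0.4, size=n)   # "a person controls it"
preprogrammed = rng.normal(loc=0.4, scale=0.4, size=n)  # "it runs a script"

t, p = ttest_rel(human_origin, preprogrammed)
print(f"paired t({n - 1}) = {t:.2f}, p = {p:.3f}")
# A reliable difference despite identical stimulation would indicate a
# top-down effect of the attributed mind, not of appearance or motion.
```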

Not only can this approach elucidate the neural and behavioural mechanisms supporting human–robot interactions, it can also indicate the reliance of these interactions on both social and non-social processes. Besides the possibility that the human user represents the robot as a depiction of a social agent, it is also possible that the robot is represented as a depiction of an object. Human–robot interaction research has mostly focused on whether robots activate the same neurocognitive processes as humans do. The reference or comparison category is thus always a human agent, thereby restricting the focus to neurocognitive processes that are social in nature. Considering to what extent human–robot interactions rely on non-social neurocognitive processes, or processes that extend across domains (e.g., attention, memory), is vital (Cross & Ramsey, 2021). Robust activation in object-specific brain regions has been observed across neuroimaging studies on the perception of, and interaction with, robots (Henschel et al., 2020). Extending the scope of both the neurocognitive mechanisms and the reference and comparison categories under study is needed to understand whether people construe these agents as (depictions of) objects or social agents (including animals), or as a completely new, standalone category. It is unlikely that one category fits all, as, for instance, not only the appearance and behaviour of the robot (Abubshait & Wiese, 2017; Waytz, Gray, Epley, & Wegner, 2010) but also the context (e.g., the lifelikeness of the interaction) influence social cognitive processes (Abubshait, Weis, & Wiese, 2021). Importantly, people can hold different views of a robot at the same time. For example, implicit and explicit measures of mind perception do not correlate (Li, Terfurth, Woller, & Wiese, 2022). It is therefore possible that people view a robot as an object while in parallel viewing it as a social entity ostensibly experiencing happiness.
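
The (non)relation between such measures can be quantified directly, for instance with a rank correlation across participants. The following sketch uses simulated scores and hypothetical measure names; a correlation near zero would be consistent with the dissociation reported by Li et al. (2022), though their analysis is not reproduced here.

```python
# Minimal sketch, assuming hypothetical per-participant scores: an explicit
# mind-perception questionnaire and an implicit, reaction-time-based
# association score. Data and scales are illustrative only.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
n = 60  # hypothetical sample size

explicit = rng.normal(loc=3.0, scale=1.0, size=n)           # 1-5 rating scale
implicit = 0.05 * explicit + rng.normal(scale=1.0, size=n)  # nearly unrelated

rho, p = spearmanr(explicit, implicit)
print(f"explicit vs. implicit: rho = {rho:.2f}, p = {p:.3f}")
# A rank correlation near zero would fit the view that people hold parallel,
# partly independent representations of the same robot.
```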

Together, approaches from psychology and from social and cognitive neuroscience to the study of human–robot interaction will provide a far more complete picture by supplying the necessary evidence for or against the framework put forward by C&F, and will ultimately tell us whether, when, and how people construe social robots as mere depictions of social agents.

Financial support

This work was supported by the Human-Centered AI focus area at Utrecht University (Embodied AI initiative).

Competing interest

None.

References

Abubshait, A., Weis, P. P., & Wiese, E. (2021). Does context matter? Effects of robot appearance and reliability on social attention differs based on lifelikeness of gaze task. International Journal of Social Robotics, 13(5), 863–876. https://doi.org/10.1007/s12369-020-00675-4
Abubshait, A., & Wiese, E. (2017). You look human, but act like a machine: Agent appearance and behavior modulate different aspects of human–robot interaction. Frontiers in Psychology, 8, 1393. https://www.frontiersin.org/article/10.3389/fpsyg.2017.01393
Cross, E. S., & Ramsey, R. (2021). Mind meets machine: Towards a cognitive science of human–machine interactions. Trends in Cognitive Sciences, 25(3), 200–212. https://doi.org/10.1016/j.tics.2020.11.009
Henschel, A., Hortensius, R., & Cross, E. S. (2020). Social cognition in the age of human–robot interaction. Trends in Neurosciences, 43(6), 373–384. https://doi.org/10.1016/j.tins.2020.03.013
Hortensius, R., & Cross, E. S. (2018). From automata to animate beings: The scope and limits of attributing socialness to artificial agents. Annals of the New York Academy of Sciences, 1426(1), 93–110. https://doi.org/10.1111/nyas.13727
Hortensius, R., Hekele, F., & Cross, E. S. (2018). The perception of emotion in artificial agents. IEEE Transactions on Cognitive and Developmental Systems, 1–1. https://doi.org/10.1109/TCDS.2018.2826921
Hortensius, R., Kent, M., Darda, K. M., Jastrzab, L., Koldewyn, K., Ramsey, R., & Cross, E. S. (2021). Exploring the relationship between anthropomorphism and theory-of-mind in brain and behaviour. Human Brain Mapping, 42(13), 4224–4241. https://doi.org/10.1002/hbm.25542
Li, Z., Terfurth, L., Woller, J. P., & Wiese, E. (2022). Mind the machines: Applying implicit measures of mind perception in social robotics. Proceedings of the 2022 ACM/IEEE International Conference on Human–Robot Interaction, Sapporo, Hokkaido, Japan, pp. 236–245.
Özdem, C., Wiese, E., Wykowska, A., Müller, H., Brass, M., & Overwalle, F. V. (2017). Believing androids – fMRI activation in the right temporo-parietal junction is modulated by ascribing intentions to non-human agents. Social Neuroscience, 12(5), 582–593. https://doi.org/10.1080/17470919.2016.1207702
Tahiroglu, D., & Taylor, M. (2019). Anthropomorphism, social understanding, and imaginary companions. British Journal of Developmental Psychology, 37(2), 284–299. https://doi.org/10.1111/bjdp.12272
Teufel, C., Fletcher, P. C., & Davis, G. (2010). Seeing other minds: Attributed mental states influence perception. Trends in Cognitive Sciences, 14(8), 376–382. https://doi.org/10.1016/j.tics.2010.05.005
Waytz, A., Gray, K., Epley, N., & Wegner, D. M. (2010). Causes and consequences of mind perception. Trends in Cognitive Sciences, 14(8), 383–388. https://doi.org/10.1016/j.tics.2010.05.006
Wiese, E., Metta, G., & Wykowska, A. (2017). Robots as intentional agents: Using neuroscientific methods to make robots appear more social. Frontiers in Psychology, 8, 1663. https://doi.org/10.3389/fpsyg.2017.01663
Wykowska, A., Chaminade, T., & Cheng, G. (2016). Embodied artificial agents for understanding human social cognition. Philosophical Transactions of the Royal Society B: Biological Sciences, 371(1693), 20150375. https://doi.org/10.1098/rstb.2015.0375
Wykowska, A., Wiese, E., Prosser, A., & Müller, H. J. (2014). Beliefs about the minds of others influence how we process sensory information. PLoS ONE, 9(4), e94339. https://doi.org/10.1371/journal.pone.0094339