Social environments are complex. To navigate them, we use simplified scaffolding information, called schemas, built from our past experiences (Macrae & Cloutier, 2009). Often, schemas focus on social identity categories, and contain stereotypes – simple, categorical, automatically arising predictions about what someone will be like (Hammond & Cimpian, 2017) – about those identities. Any given individual has many identities, each of which might be differently salient from context to context, and so different assumptions about the same individual will come to mind more readily depending on the situation (Oyserman, 2015; Shih, Pittinsky, & Ambady, 1999).
We propose that “robot” is an identity category comprising three subschemas, delineated in Clark and Fischer's (C&F's) work as three levels of depiction. Each subschema evokes different types of behavior, but which one is evoked as most relevant can fluctuate, just as one's perception of another person's most relevant identity might. If so, then individual variation in when and whether people approach robots as characters, depictions of social agents, or pieces of machinery likely arises for the same reasons that stereotypes about any identity are variably activated.
One underlying impetus for switching between these schemas, we contend, is the degree to which people perceive the robot as having a mind. Human beings assume things about each other's minds in order to communicate effectively – a task that is vital for social interaction, but very complex. Despite understandable objections to the overuse of stereotypes, particularly negative stereotypes of minority groups, stereotypes facilitate communication by providing quick, and often accurate, predictions about what someone else might be thinking (Hodges & Kezer, 2021; Lewis, Hodges, Laurent, Srivastava, & Biancarosa, 2012). If a social robot is perceived as having a mind, people are more likely to interact with it as a character rather than as a machine, with “robot(ic)” being simply one of the stereotypes activated to describe it as a social entity, much like “teenager” or “doctor.”
Robots are often not perceived as having a mind (Gray, Gray, & Wegner, 2007), and in these instances social stereotypes do not come into play. However, several factors can lead people to ascribe more mind to a robot, such as the robot behaving in unexpected ways, the robot possessing human-like features, or the perceiver feeling particularly lonely (Epley, Waytz, & Cacioppo, 2007; Waytz, Gray, Epley, & Wegner, 2010). Robots that attempt to copy human appearance too perfectly can be unsettling (Gray, Knobe, Sheskin, Bloom, & Barrett, 2011; Gray & Wegner, 2012), but features that allow a robot to express the things humans notice and communicate to each other – like attention and emotion – can facilitate perception of mind (Duffy, 2003). Given the right cues, anthropomorphism can occur automatically when the perceiver encounters a situation in which treating a robot as a social agent is contextually appropriate (Kim & Sundar, 2012).
The characteristics of the human perceivers, therefore, matter in addition to the features of the social robot itself. Qualities like willingness to suspend disbelief (Duffy & Zawieska, 2012; Muckler, 2017) and tendency to anthropomorphize (Waytz, Cacioppo, & Epley, 2014) vary between people, and may make them more or less inclined to treat a robot like a character or like a machine. As delineated in C&F's example of the three human interactants encountering Smooth the robot, some people will readily engage socially with the same robot that others will not. Such variation partly reflects stable individual differences, but past experience and mindset likely play a role, too: People who are distracted by novel aspects of a social robot, or focused on its non-humanness, may be impeded in depicting the robot as a character and, by extension, in applying the stereotypes that guide particular kinds of interactions with it. However, these effects are not unique to perceptions of robots. For example, encountering other humans in heavily scripted roles (e.g., flight attendant, nightclub bouncer) may evoke prop-like schemas that preclude character depictions. Cues that prompt thoughts of body counts or bodily actions may similarly interfere with character depiction and evoke more mechanical schemas (Mooijman & Stern, 2016; Small & Loewenstein, 2003).
Social robots might have difficulty being perceived as genuinely plausible interaction partners in part because their features fail to activate character-level stereotypes, leaving the robot stuck at depiction or machinery. Alternatively, some observers might be unwilling or unable to suspend disbelief in order to interact with the robot as a character (which would, in turn, create a social situation in which others who might otherwise treat the robot anthropomorphically are made more self-conscious by their peers' reluctance). Finally, even robots depicted as characters might evoke stereotypes of robots as less socially capable than humans (Chan et al., 2020) because, for example, their language is less fluid. As we further explore the factors that promote the willingness and ease with which humans interact with robots as social agents, we should also attend to cases in which robots mirror aspects of human agents with whom interactions are problematic.
Our suggestion that C&F's three levels of depiction provide three schemas for robots, each of which can be activated to bring different stereotypes to mind, offers a psychological explanation for how people can switch their focus fluidly between machinery, depiction, and character. As C&F note, humans have extensive experience engaging with depictions, which should help us construe social robots as depictions of social agents. Increasingly sophisticated robots should trigger stereotypes of a variety of social agents, providing humans with further cognitive scaffolding to guide and elaborate interactions with robots. Humans also have experience engaging with what C&F call “nonstandard” (i.e., not real) characters, from whom they seek and derive a number of very “human” yearnings (e.g., companionship, inspiration, perspective; see Gabriel & Young, 2011; Myers & Hodges, 2009; Taylor, Hodges, & Kohányi, 2003), suggesting a flexible, inclusive, and creative ability to connect with a wide range of social agents.
Financial support
This research received no specific grant from any funding agency, commercial or not-for-profit sectors.
Competing interest
None.