
Changing the incentive structure of social media may reduce online proxy failure and proliferation of negativity

Published online by Cambridge University Press:  13 May 2024

Claire E. Robertson*
Affiliation:
Department of Psychology, New York University, New York, NY, USA
Kareena del Rosario
Affiliation:
Department of Psychology, New York University, New York, NY, USA
Steve Rathje
Affiliation:
Department of Psychology, New York University, New York, NY, USA
Jay J. Van Bavel
Affiliation:
Department of Psychology, New York University, New York, NY, USA; Center for Neural Science, New York University, New York, NY, USA; Department of Strategy and Management, Norwegian School of Economics, Bergen, Norway
*
Corresponding author: Claire E. Robertson; Email: [email protected]

Abstract

Social media takes advantage of people's predisposition to attend to threatening stimuli by using algorithms that promote attention-grabbing content. However, this content is often not what people expressly state they would like to see. We propose that social media companies weigh users' expressed preferences more heavily in their algorithms, and we outline modest changes to user interfaces that could reduce the abundance of threatening content in the online environment.

Type
Open Peer Commentary
Copyright
Copyright © The Author(s), 2024. Published by Cambridge University Press

Through millennia of evolution, the brain has developed an attentional system that preferentially orients people toward physical, emotional, and social threats (Baumeister, Bratslavsky, Finkenauer, & Vohs, 2001; Öhman & Mineka, 2001; Rozin & Royzman, 2001). Negative or threatening stimuli thus serve as the proxy for threats to both the individual and their group. Indeed, people attend to physical threats such as snakes or heights (Dijksterhuis & Aarts, 2003; Öhman & Mineka, 2001), social threats such as anger or out-group animosity (Fox et al., 2000; Rathje, Van Bavel, & Van Der Linden, 2021), and moral threats such as outrage or disgust (Brady, Crockett, & Van Bavel, 2020; Brady, Gantman, & Van Bavel, 2020; Hutcherson & Gross, 2011), in part because these cues alert people to potential harm. Our attentional system acts as a regulator whose goal is to attend to stimuli that are relevant to people's wellbeing. Proxy failure occurs when people attend to false threats, even though no harm is imminent (John et al.).

Attention is thus biased toward threat proxies. It is therefore problematic that social media platforms use attention as a proxy for what people want to see and incorporate it into their algorithms. By using attention as a proxy for interest, social media companies motivate users to try to "hack" the innate threat-detection systems that drive people's attention (Brady, McLoughlin, Doan, & Crockett, 2021; Crockett, 2017). In this way, negative, divisive, and threatening content may be preferentially spread by algorithms because of lower-level proxy failure.

Critically, people do not necessarily want negative and divisive content in their social media feeds. Although people acknowledge that negative, false, and hateful content goes viral online, they do not want such content to go viral (Rathje, Robertson, Brady, & Van Bavel, in press). Rather, they want accurate, positive, and educational content to go viral. Proxy failure may be partially responsible for this outcome: relying on social media engagement (likes, shares, etc.) as a proxy for people's preferences promotes content that people say they do not like, such as false, negative, or hostile content (Brady, Wills, Jost, Tucker, & Van Bavel, 2017; Rathje et al., 2021; Robertson et al., 2023).

Furthermore, changing incentive structures on social media to favor more positive and constructive content may have positive downstream consequences. When the incentive structure of social media changes, people adjust the type of content they share. For instance, when social media platforms reward the veracity or trustworthiness of a post, people are far more likely to share true information (Globig, Holtz, & Sharot, 2023; Pretus et al., 2023). This can be achieved with a relatively small tweak to the design features of a social media site: adding "trust," "distrust," or "misleading" buttons to the standard "like" and "dislike" reactions led people to better discern true from false information and to share more accurate information.

When attention or watch time is used as the basis for news feed algorithms, the algorithms may become more likely to show harmful or negative content because of people's attentional bias toward threats and their engagement with divisive content (Milli, Carroll, Pandey, Wang, & Dragan, 2023). However, if people were given a simple way to curate their news feeds, they could align them more closely with their preferences. Thus, social media companies should use people's expressed preferences for what they want to see on social media (e.g., positive, nuanced, educational, or entertaining content) as the proxy for interest, rather than ambiguous metrics such as attention or engagement. This could be achieved with modest changes to existing social media platforms (Globig et al., 2023; Pretus et al., 2023).
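To make the proposal concrete, the following is a minimal sketch of what weighting expressed preferences more heavily than engagement might look like in a ranking score. The signal names (predicted_engagement, expressed_preference, flagged_dont_show) and the weights are hypothetical illustrations, not a description of any real platform's ranking system.

```python
# Minimal sketch of a feed-ranking score that weights users' expressed
# preferences more heavily than raw engagement signals.
# All signal names and weights are hypothetical illustrations, not a
# description of any real platform's ranking system.
from dataclasses import dataclass


@dataclass
class PostSignals:
    predicted_engagement: float  # e.g., predicted clicks or watch time, scaled 0-1
    expressed_preference: float  # e.g., survey or "show me more of this" signal, 0-1
    flagged_dont_show: bool      # user pressed "I don't want to see content like this"


def rank_score(signals: PostSignals,
               engagement_weight: float = 0.3,
               preference_weight: float = 0.7) -> float:
    """Combine signals, prioritizing expressed preferences over engagement."""
    if signals.flagged_dont_show:
        return 0.0  # respect explicit negative feedback outright
    return (engagement_weight * signals.predicted_engagement
            + preference_weight * signals.expressed_preference)


# Example: a divisive post that grabs attention but that the user says they do
# not want is ranked below a post the user actually prefers.
divisive = PostSignals(predicted_engagement=0.9, expressed_preference=0.2,
                       flagged_dont_show=False)
preferred = PostSignals(predicted_engagement=0.5, expressed_preference=0.9,
                        flagged_dont_show=False)
assert rank_score(preferred) > rank_score(divisive)
```

Under an attention-only proxy the divisive post would win; once expressed preferences carry more weight, the preferred post does.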

Adding an accessible mechanism that lets users say "I don't want to see content like this" could substantially reduce people's exposure to unwanted threatening stimuli that an attention-based proxy might otherwise promote. Most social media platforms already have this functionality, but it is often hidden in menus, cumbersome to activate, or overridden by defaults set to attention-based proxies. Positive results might be achieved simply by making such a feature the default or moving it to a more visible location in the platform design.
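As a rough illustration of how such one-tap feedback could feed back into curation, the sketch below down-weights topics a user has flagged. The topic labels and the penalty factor are hypothetical, and real systems would operate on richer content representations than topic strings.

```python
# Minimal sketch of applying "I don't want to see content like this" feedback
# by down-weighting topics the user has flagged. Topic labels and the penalty
# factor are hypothetical illustrations.
from collections import Counter


class FeedPreferences:
    def __init__(self, penalty: float = 0.25):
        self.dont_show = Counter()  # topic -> number of times flagged
        self.penalty = penalty      # multiplicative down-weight per flag

    def flag_dont_show(self, topic: str) -> None:
        """Record a one-tap 'don't show me this' action for a topic."""
        self.dont_show[topic] += 1

    def adjust(self, topic: str, base_score: float) -> float:
        """Shrink the ranking score of flagged topics toward zero."""
        return base_score * (self.penalty ** self.dont_show[topic])


prefs = FeedPreferences()
prefs.flag_dont_show("outrage-bait")
# A flagged topic is demoted relative to an unflagged topic with the same base score.
assert prefs.adjust("outrage-bait", 0.8) < prefs.adjust("gardening", 0.8)
```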

Overall, this type of intervention will only work if social media companies' superordinate goals include and prioritize positive user experience and societal outcomes over monetary gain. Negativity and toxicity grab people's attention, increase moral outrage, and spill over into offline behaviors such as hate speech and endorsement of antidemocratic action (Brady et al., 2021; Kim, Guess, Nyhan, & Reifler, 2021; Suhay, Bello-Pardo, & Maurer, 2018). Reducing such content may lead to greater user wellbeing and less negativity, but it also reduces people's use of social media (Beknazar-Yuzbashev, Jiménez Durán, McCrosky, & Stalinski, 2022). This may explain why whistleblowers have revealed that Meta considered implementing algorithms that downregulate content that is "bad for the world," but decided against it because it reduced the time users spent on its platforms (Roose, Isaac, & Frenkel, 2020). If the superordinate goal is simply making money, then social media companies may continue to "hack" people's attentional systems and promote content that draws users' attention through sensationalism and threat.

Acknowledgments

The authors acknowledge the Social Identity and Morality Lab for their helpful feedback.

Financial support

This work was supported by the Templeton World Charity Foundation (J. J. V. B., TWCF-2022-30561) and the Russell Sage Foundation (J. J. V. B. and S. R., G-2110-33990).

Competing interest

None.

References

Baumeister, R. F., Bratslavsky, E., Finkenauer, C., & Vohs, K. D. (2001). Bad is stronger than good. Review of General Psychology, 5(4), 323–370.
Beknazar-Yuzbashev, G., Jiménez Durán, R., McCrosky, J., & Stalinski, M. (2022). Toxic content and user engagement on social media: Evidence from a field experiment. Available at SSRN. https://doi.org/10.2139/ssrn.4307346
Brady, W. J., Crockett, M. J., & Van Bavel, J. J. (2020). The MAD model of moral contagion: The role of motivation, attention, and design in the spread of moralized content online. Perspectives on Psychological Science, 15(4), 978–1010.
Brady, W. J., Gantman, A. P., & Van Bavel, J. J. (2020). Attentional capture helps explain why moral and emotional content go viral. Journal of Experimental Psychology: General, 149(4), 746.
Brady, W. J., McLoughlin, K., Doan, T. N., & Crockett, M. J. (2021). How social learning amplifies moral outrage expression in online social networks. Science Advances, 7(33), eabe5641.
Brady, W. J., Wills, J. A., Jost, J. T., Tucker, J. A., & Van Bavel, J. J. (2017). Emotion shapes the diffusion of moralized content in social networks. Proceedings of the National Academy of Sciences of the United States of America, 114(28), 7313–7318.
Crockett, M. J. (2017). Moral outrage in the digital age. Nature Human Behaviour, 1(11), 769–771.
Dijksterhuis, A., & Aarts, H. (2003). On wildebeests and humans: The preferential detection of negative stimuli. Psychological Science, 14(1), 14–18.
Fox, E., Lester, V., Russo, R., Bowles, R. J., Pichler, A., & Dutton, K. (2000). Facial expressions of emotion: Are angry faces detected more efficiently? Cognition & Emotion, 14(1), 61–92.
Globig, L. K., Holtz, N., & Sharot, T. (2023). Changing the incentive structure of social media platforms to halt the spread of misinformation. eLife, 12, e85767.
Hutcherson, C. A., & Gross, J. J. (2011). The moral emotions: A social–functionalist account of anger, disgust, and contempt. Journal of Personality and Social Psychology, 100(4), 719.
Kim, J. W., Guess, A., Nyhan, B., & Reifler, J. (2021). The distorting prism of social media: How self-selection and exposure to incivility fuel online comment toxicity. Journal of Communication, 71(6), 922–946. https://doi.org/10.1093/joc/jqab034
Milli, S., Carroll, M., Pandey, S., Wang, Y., & Dragan, A. D. (2023). Engagement, user satisfaction, and the amplification of divisive content on social media. arXiv preprint. https://doi.org/10.48550/arXiv.2305.16941
Öhman, A., & Mineka, S. (2001). Fears, phobias, and preparedness: Toward an evolved module of fear and fear learning. Psychological Review, 108(3), 483.
Pretus, C., Servin-Barthet, C., Harris, E. A., Brady, W. J., Vilarroya, O., & Van Bavel, J. J. (2023). The role of political devotion in sharing partisan misinformation and resistance to fact-checking. Journal of Experimental Psychology: General, 152(11), 3116–3134. https://doi.org/10.1037/xge0001436
Rathje, S., Robertson, C., Brady, W. J., & Van Bavel, J. J. (in press). People think that social media platforms do (but should not) amplify divisive content. Perspectives on Psychological Science.
Rathje, S., Van Bavel, J. J., & Van Der Linden, S. (2021). Out-group animosity drives engagement on social media. Proceedings of the National Academy of Sciences of the United States of America, 118(26), e2024292118.
Robertson, C. E., Pröllochs, N., Schwarzenegger, K., Pärnamets, P., Van Bavel, J. J., & Feuerriegel, S. (2023). Negativity drives online news consumption. Nature Human Behaviour, 7(5), 812–822.
Roose, K., Isaac, M., & Frenkel, S. (2020). Facebook struggles to balance civility and growth. The New York Times, 24.
Rozin, P., & Royzman, E. B. (2001). Negativity bias, negativity dominance, and contagion. Personality and Social Psychology Review, 5(4), 296–320.
Suhay, E., Bello-Pardo, E., & Maurer, B. (2018). The polarizing effects of online partisan criticism: Evidence from two experiments. The International Journal of Press/Politics, 23(1), 95–115. https://doi.org/10.1177/1940161217740697