
All the News That’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation

Published online by Cambridge University Press: 20 November 2020

Sarah Kreps*
Affiliation:
Department of Government, Cornell University, Ithaca, New York, 14853, USA, Twitter: @sekreps
R. Miles McCain
Affiliation:
Stanford University, Stanford, California, 94305, USA, Twitter: @MilesMcCain
Miles Brundage
Affiliation:
OpenAI, San Francisco, California, 94110, USA, Twitter: @Miles_Brundage
*Corresponding author. Email: [email protected]

Abstract

Online misinformation has become a constant; only the way actors create and distribute that information is changing. Advances in artificial intelligence (AI) such as GPT-2 mean that actors can now synthetically generate text in ways that mimic the style and substance of human-created news stories. We carried out three original experiments to study whether these AI-generated texts are credible and can influence opinions on foreign policy. The first evaluated human perceptions of AI-generated text relative to an original story. The second investigated the interaction between partisanship and AI-generated news. The third examined the distributions of perceived credibility across different AI model sizes. We find that individuals are largely incapable of distinguishing between AI- and human-generated text; partisanship affects the perceived credibility of the story; and exposure to the text does little to change individuals’ policy views. The findings have important implications for understanding the role of AI in online misinformation campaigns.
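To make the generation step concrete: the authors worked with GPT-2 via early access from OpenAI, and their exact pipeline is not reproduced here. The following is a minimal illustrative sketch of comparable text generation using the later publicly released GPT-2 checkpoints and the Hugging Face transformers library; the checkpoint names, sampling settings, and prompt are assumptions for illustration, not the study's configuration.

    # Illustrative sketch only: not the authors' pipeline. Uses the public
    # GPT-2 checkpoints ("gpt2", "gpt2-medium", "gpt2-large", "gpt2-xl"),
    # which roughly parallel the model sizes the article compares.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    def generate_story(prompt: str, model_name: str = "gpt2-medium") -> str:
        """Continue a news-style prompt with a GPT-2 checkpoint."""
        tokenizer = GPT2Tokenizer.from_pretrained(model_name)
        model = GPT2LMHeadModel.from_pretrained(model_name)
        inputs = tokenizer(prompt, return_tensors="pt")
        outputs = model.generate(
            **inputs,
            max_length=200,        # short news-story continuation (assumed)
            do_sample=True,        # sample rather than greedy decode
            top_p=0.95,            # nucleus sampling (assumed setting)
            pad_token_id=tokenizer.eos_token_id,
        )
        return tokenizer.decode(outputs[0], skip_special_tokens=True)

    # Hypothetical usage: seed the model with a headline-style opening
    # sentence and let it continue in the register of a news story.
    print(generate_story("The Senate voted on Thursday to advance a bill that"))

Because each checkpoint name selects a different model size, varying model_name is one simple way to produce the kind of cross-size comparison the third experiment describes.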

Type
Research Article
Copyright
© The Author(s), 2020. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association


Footnotes

The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: doi:10.7910/DVN/1XVYU3. This research was conducted using Sarah Kreps’ personal research funds. Early access to GPT-2 was provided in-kind by OpenAI under a non-disclosure agreement. Sarah Kreps and Miles McCain otherwise have no relationships with interested parties. Miles Brundage is employed by OpenAI.

References

Arceneaux, K., Johnson, M., and Murphy, C. 2012. Polarized Political Communication, Oppositional Media Hostility, and Selective Exposure. The Journal of Politics 74(1): 174–86. https://www.jstor.org/stable/10.1017/s002238161100123x
Brenan, M. 2019. Americans’ Trust in Mass Media Edges Down to 41%. Retrieved September 18, 2020, from https://news.gallup.com/poll/267047/americans-trust-mass-media-edges-down.aspx
Brown, T. B., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D. M., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I., and Amodei, D. 2020. Language Models are Few-Shot Learners. arXiv:2005.14165 [cs]. Retrieved September 18, 2020, from http://arxiv.org/abs/2005.14165
Chong, D. and Druckman, J. N. 2013. Counterframing Effects. The Journal of Politics 75(1): 1–16. doi: 10.1017/S0022381612000837
Clayton, K., Blair, S., Busam, J. A., Forstner, S., Glance, J., Green, G., Kawata, A., Kovvuri, A., Martin, J., Morgan, E., Sandhu, M., Sang, R., Scholz-Bright, R., Welch, A. T., Wolff, A. G., Zhou, A., and Nyhan, B. 2019. Real Solutions for Fake News? Measuring the Effectiveness of General Warnings and Fact-Check Tags in Reducing Belief in False Stories on Social Media. Political Behavior, Forthcoming. doi: 10.1007/s11109-019-09533-0
Diakopoulos, N. 2019. Automating the News: How Algorithms are Rewriting the Media. Cambridge, MA: Harvard University Press.
Flanagin, A. J. and Metzger, M. J. 2000. Perceptions of Internet Information Credibility. Journalism & Mass Communication Quarterly 77(3): 515–40. doi: 10.1177/107769900007700304
Flynn, D., Nyhan, B., and Reifler, J. 2017. The Nature and Origins of Misperceptions: Understanding False and Unsupported Beliefs About Politics. Political Psychology 38: 127–50. doi: 10.1111/pops.12394
Garrett, R. K., Long, J. A., and Jeong, M. S. 2019. From Partisan Media to Misperception: Affective Polarization as Mediator. Journal of Communication 69(5): 490–512. doi: 10.1093/joc/jqz028
Grover: A State-of-the-Art Defense against Neural Fake News. n.d. Allen Institute for AI. Retrieved September 18, 2020, from https://grover.allenai.org/detect
Guess, A. M., Lerner, M., Lyons, B., Montgomery, J. M., Nyhan, B., Reifler, J., and Sircar, N. 2020. A Digital Media Literacy Intervention Increases Discernment Between Mainstream and False News in the United States and India. Proceedings of the National Academy of Sciences 117(27): 15536–45. doi: 10.1073/pnas.1920498117
Helmus, T. 2018. Russian Social Media Influence: Understanding Russian Propaganda in Eastern Europe: Addendum. RAND Corporation. doi: 10.7249/CT496.1
Iyengar, S. and Hahn, K. S. 2009. Red Media, Blue Media: Evidence of Ideological Selectivity in Media Use. Journal of Communication 59(1): 19–39. doi: 10.1111/j.1460-2466.2008.01402.x
Johnson, T. and Kaye, B. 2000. Using Is Believing: The Influence of Reliance on the Credibility of Online Political Information among Politically Interested Internet Users. Journalism & Mass Communication Quarterly 77: 865–79. doi: 10.1177/107769900007700409
Kreps, S. 2020. Replication Data for: All the News that’s Fit to Fabricate: AI-Generated Text as a Tool of Media Misinformation. Harvard Dataverse. doi: 10.7910/DVN/1XVYU3
Lazer, D. M. J., Baum, M. A., Benkler, Y., Berinsky, A. J., Greenhill, K. M., Menczer, F., Metzger, M. J., Nyhan, B., Pennycook, G., Rothschild, D., Schudson, M., Sloman, S. A., Sunstein, C. R., Thorson, E. A., Watts, D. J., and Zittrain, J. L. 2018. The Science of Fake News. Science 359(6380): 1094–96. doi: 10.1126/science.aao2998
MacFarquhar, N. 2018. Inside the Russian Troll Factory: Zombies and a Breakneck Pace. The New York Times. Retrieved September 18, 2020, from https://www.nytimes.com/2018/02/18/world/europe/russia-troll-factory.html
McLean, R. 2019. Facebook’s Mark Zuckerberg: Private Companies Should Not Censor Politicians. Retrieved September 18, 2020, from https://www.cnn.com/2019/10/17/tech/mark-zuckerberg-fox-news-interview/index.html
Meyer, P. 1988. Defining and Measuring Credibility of Newspapers: Developing an Index. Journalism Quarterly 65(3): 567–74. doi: 10.1177/107769908806500301
Miller, M. 2019. Senate Intel Report Urges Action to Prevent Russian Meddling in 2020 Election. The Hill, 8 October.
Mitchell, A., Gottfried, J., Kiley, J., and Matsa, K. E. 2014. Political Polarization & Media Habits. Retrieved September 18, 2020, from https://www.journalism.org/2014/10/21/political-polarization-media-habits/
Newport, F. 2018. Immigration Surges to Top of Most Important Problem List. Retrieved September 18, 2020, from https://news.gallup.com/poll/237389/immigration-surges-top-important-problem-list.aspx
Norris, A. 1996. Arendt, Kant, and the Politics of Common Sense. Polity 29(2): 165–91. doi: 10.2307/3235299
Nyhan, B. 2019. Why Fears of Fake News Are Overhyped. GEN. Retrieved September 18, 2020, from https://gen.medium.com/why-fears-of-fake-news-are-overhyped-2ed9ca0a52c9
Pennycook, G. and Rand, D. G. 2018. Lazy, Not Biased: Susceptibility to Partisan Fake News is Better Explained by Lack of Reasoning than by Motivated Reasoning. Cognition 188: 39–50.
Polyakova, A. 2018. Weapons of the Weak: Russia and AI-driven Asymmetric Warfare. Retrieved September 18, 2020, from https://www.brookings.edu/research/weapons-of-the-weak-russia-and-ai-driven-asymmetric-warfare/
Radford, A., Wu, J., Child, R., Luan, D., Amodei, D., and Sutskever, I. 2019. Language Models are Unsupervised Multitask Learners. OpenAI Blog 1(8): 9.
Roberts, A. and Raffel, C. 2020. Exploring Transfer Learning with T5: The Text-To-Text Transfer Transformer. http://ai.googleblog.com/2020/02/exploring-transfer-learning-with-t5.html
Sherman, J. 2019. The Fight Over Social Media’s Potent Political Ads Just Got More Interesting. Retrieved September 18, 2020, from https://thebulletin.org/2019/11/the-fight-over-social-medias-potent-political-ads-just-got-more-interesting/
Solaiman, I., Brundage, M., Clark, J., Askell, A., Herbert-Voss, A., Wu, J., Radford, A., Krueger, G., Kim, J. W., Kreps, S., McCain, M., Newhouse, A., Blazakis, J., McGuffie, K., and Wang, J. 2019. Release Strategies and the Social Impacts of Language Models. arXiv:1908.09203 [cs]. Retrieved September 18, 2020, from http://arxiv.org/abs/1908.09203
Tavernise, S. and Gardiner, A. 2019. ‘No One Believes Anything’: Voters Worn Out by a Fog of Political News. The New York Times. Retrieved September 18, 2020, from https://www.nytimes.com/2019/11/18/us/polls-media-fake-news.html
Vosoughi, S., Roy, D., and Aral, S. 2018. The Spread of True and False News Online. Science 359(6380): 1146–51. doi: 10.1126/science.aap9559
Weedon, J., Nuland, W., and Stamos, A. 2017. Information Operations and Facebook.
Wiggers, K. 2020. Facebook Open-Sources Blender, a Chatbot People Say ‘Feels More Human’. Retrieved September 18, 2020, from https://venturebeat.com/2020/04/29/facebook-open-sources-blender-a-chatbot-that-people-say-feels-more-human/
Zellers, R., Holtzman, A., Rashkin, H., Bisk, Y., Farhadi, A., Roesner, F., and Choi, Y. 2019. Defending Against Neural Fake News. In Advances in Neural Information Processing Systems.
Supplementary material

Kreps et al. Dataset (link)
Kreps et al. supplementary material 2 (PDF, 481.6 KB)
Kreps et al. supplementary material 1 (File, 208.1 KB)