
The Generalizability of Online Experiments Conducted During the COVID-19 Pandemic

Published online by Cambridge University Press: 02 July 2021

Kyle Peyton*
Institute for Humanities and Social Sciences, Australian Catholic University, Melbourne, Australia

Gregory A. Huber
Yale University, New Haven, CT, USA

Alexander Coppock
Yale University, New Haven, CT, USA

*Corresponding author. Email: [email protected]

Abstract

The COVID-19 pandemic imposed new constraints on empirical research, and online data collection by social scientists increased. Generalizing from experiments conducted during this period of persistent crisis may be challenging due to changes in how participants respond to treatments or the composition of online samples. We investigate the generalizability of COVID-era survey experiments with 33 replications of 12 pre-pandemic designs, fielded across 13 quota samples of Americans between March and July 2020. We find strong evidence that pre-pandemic experiments replicate in terms of sign and significance, but at somewhat reduced magnitudes. Indirect evidence suggests an increased share of inattentive subjects on online platforms during this period, which may have contributed to smaller estimated treatment effects. Overall, we conclude that the pandemic does not pose a fundamental threat to the generalizability of online experiments to other time periods.
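
The abstract's conclusion rests on comparing replication estimates to the original estimates in terms of sign, statistical significance, and relative magnitude. Below is a minimal sketch of how such comparisons might be computed; the effect estimates, standard errors, and 5% threshold are illustrative assumptions, not the authors' data or analysis code.

```python
# Minimal sketch (illustrative values, not the authors' data): comparing
# replication estimates to original estimates by sign, significance, and
# relative magnitude.
import numpy as np

# Hypothetical original and replication effect estimates with standard errors.
orig_est = np.array([0.40, 0.25, -0.30, 0.15])
repl_est = np.array([0.30, 0.20, -0.20, 0.05])
repl_se = np.array([0.08, 0.07, 0.09, 0.06])

# Sign agreement: does each replication share the original estimate's sign?
sign_match = np.sign(repl_est) == np.sign(orig_est)

# Significance: two-sided z-test at the 5% level for each replication estimate.
significant = np.abs(repl_est / repl_se) > 1.96

# Relative magnitude: average ratio of |replication| to |original| effect size.
relative_size = np.mean(np.abs(repl_est) / np.abs(orig_est))

print(f"Sign agreement: {sign_match.mean():.0%}")
print(f"Significant replications: {significant.mean():.0%}")
print(f"Average relative magnitude: {relative_size:.2f}")
```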

Type: Research Article

Copyright: © The Author(s), 2021. Published by Cambridge University Press on behalf of The Experimental Research Section of the American Political Science Association


Footnotes

Kyle Peyton is Research Fellow in Political Science, Institute for Humanities and Social Sciences, Australian Catholic University, Melbourne, Australia ([email protected], @pylekeyton); Gregory A. Huber is Forst Family Professor of Political Science, Yale University, New Haven, CT, USA ([email protected]); Alexander Coppock is Assistant Professor of Political Science, Yale University, New Haven, CT, USA ([email protected], @aecoppock).

This article has earned badges for transparent research practices: Open Data and Open Materials. For details see the Data Availability Statement.

Supplementary material: Peyton et al. supplementary material (PDF, 1.5 MB)