
Is There a Cost to Convenience? An Experimental Comparison of Data Quality in Laboratory and Online Studies

Published online by Cambridge University Press:  02 October 2014

Scott Clifford
Affiliation:
Department of Political Science, University of Houston, Houston, TX, USA; e-mail: [email protected]
Jennifer Jerit
Affiliation:
Department of Political Science, Stony Brook University, Stony Brook, NY, USA; e-mail: [email protected]

Abstract

Increasingly, experimental research is conducted on the Internet in addition to the laboratory. Online experiments are more convenient for subjects and researchers, but we know little about how the choice of study location affects data quality. To investigate whether respondent behavior differs across study locations, we randomly assign subjects to complete a study either in a laboratory or in an online setting. Contrary to our expectations, we find few differences between participants in levels of attention and socially desirable responding. However, we find significant differences in two areas: the degree of self-reported distraction while completing the questionnaire and the tendency to consult outside sources for answers to political knowledge questions. We conclude that when the greater convenience (and higher response rates) of online experiments outweighs these disadvantages, Internet administration of randomized experiments represents an alternative to laboratory administration.
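To make the design concrete, the sketch below shows the kind of complete random assignment to study location that the abstract describes. It is a minimal illustration only: the roster, seed, and function name are hypothetical and are not drawn from the authors' materials.

```python
import random

def assign_locations(subject_ids, conditions=("laboratory", "online"), seed=42):
    """Randomly assign each subject to a study location.

    Shuffles the roster, then alternates conditions down the shuffled
    list so the two groups stay balanced in size (complete randomization).
    """
    rng = random.Random(seed)  # fixed seed keeps the assignment reproducible
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    return {sid: conditions[i % len(conditions)] for i, sid in enumerate(shuffled)}

# Hypothetical roster of ten subjects.
assignments = assign_locations([f"S{n:02d}" for n in range(1, 11)])
print(assignments)  # e.g. {'S03': 'laboratory', 'S07': 'online', ...}
```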

Type
Research Article
Copyright
Copyright © The Experimental Research Section of the American Political Science Association 2014 


Supplementary material

Clifford and Jerit Supplementary Material: Appendix (PDF, 910 KB)