
Don't Throw the Baby Out With the Bathwater: Comparing Data Quality of Crowdsourcing, Online Panels, and Student Samples

Published online by Cambridge University Press: 28 July 2015

Nicolas Roulin
Affiliation: University of Manitoba
Correspondence concerning this article should be sent to Nicolas Roulin, Asper School of Business, Department of Business Administration, University of Manitoba, 406 Drake Center, Winnipeg, Manitoba, Canada R3T 5V4. E-mail: [email protected]

Extract

In their focal article, Landers and Behrend (2015) propose to reevaluate the legitimacy of using so-called convenience samples (e.g., crowdsourcing, online panels, and student samples) as compared with traditional organizational samples in industrial–organizational (I-O) psychology research. They argue that such sampling strategies should not be judged as inappropriate per se but that decisions to accept or reject such samples must be empirically or theoretically justified. I concur with Landers and Behrend's call for a more nuanced view of convenience samples. More precisely, I suggest that we should not "throw the baby out with the bathwater" but rather carefully and empirically examine the advantages and risks associated with each sampling strategy before classifying it as suitable or not.

Type
Commentaries
Copyright
Copyright © Society for Industrial and Organizational Psychology 2015 


References

Behrend, T. S., Sharek, D. J., Meade, A. W., & Wiebe, E. N. (2011). The viability of crowdsourcing for survey research. Behavior Research Methods, 43, 800–813. doi:10.3758/s13428-011-0081-0
Brandon, D. M., Long, J. H., Loraas, T. M., Mueller-Phillips, J., & Vansant, B. (2013). Online instrument delivery and participant recruitment services: Emerging opportunities for behavioral accounting research. Behavioral Research in Accounting, 26, 1–23.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3–5. doi:10.1177/1745691610393980
Crump, M. J. C., McDonnell, J. V., & Gureckis, T. M. (2013). Evaluating Amazon's Mechanical Turk as a tool for experimental behavioral research. PLoS ONE, 8, e57410. doi:10.1371/journal.pone.0057410
Duckitt, J., Wagner, C., Du Plessis, I., & Birum, I. (2002). The psychological bases of ideology and prejudice: Testing a dual process model. Journal of Personality and Social Psychology, 83, 75–93. doi:10.1037/0022-3514.83.1.75
Landers, R. N., & Behrend, T. S. (2015). An inconvenient truth: Arbitrary distinctions between organizational, Mechanical Turk, and other convenience samples. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8.
Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5, 411–419. doi:10.2139/ssrn.1626226
U.S. Bureau of Labor Statistics. (2014). Labor force characteristics by race and ethnicity, 2013. Retrieved from http://www.bls.gov/cps/cpsrace2013.pdf