
Is Mechanical Turk the Answer to Our Sampling Woes?

Published online by Cambridge University Press: 23 March 2016

Melissa G. Keith, Department of Psychological Sciences, Purdue University
Peter D. Harms, Department of Management, University of Alabama

Correspondence concerning this article should be addressed to Melissa G. Keith, Department of Psychological Sciences, Purdue University, 703 Third Street, West Lafayette, IN 47906. E-mail: [email protected]

Extract

Although we share Bergman and Jean's (2016) concerns about the representativeness of samples in the organizational sciences, we are mindful of the ever-changing nature of the job market. New jobs are created by technological innovation while others become obsolete and disappear or are functionally transformed. These shifts in employment patterns produce both opportunities and challenges for organizational researchers addressing the problem of representativeness in our working-population samples. On the one hand, whatever we do, we will always be playing catch-up with the market. On the other hand, we may be able to leverage new technologies to react to such changes more quickly. For example, Bergman and Jean suggested making use of crowdsourcing websites or Internet panels to gain access to undersampled populations. Although we agree there is an opportunity to conduct much research of interest to organizational scholars in these settings, we would also point out that such samples come with their own sampling challenges. To illustrate these challenges, we examine sampling issues for Amazon's Mechanical Turk (MTurk), currently the most widely used portal for psychologists and organizational scholars collecting human subjects data online. Specifically, we examine whether MTurk workers are "workers" as defined by Bergman and Jean, whether MTurk samples are WEIRD (Western, educated, industrialized, rich, and democratic; Henrich, Heine, & Norenzayan, 2010), and how researchers may creatively utilize the sample characteristics.
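
As a concrete illustration of the kind of sample-composition control MTurk affords researchers facing these challenges, the following is a minimal Python sketch using Amazon's boto3 requester client. It restricts a HIT to workers whose account locale is the United States via the built-in Locale qualification; the survey URL, title, reward, and sample size are hypothetical placeholders for illustration only, not details drawn from this commentary.

    import boto3

    # Connect to the MTurk requester sandbox (remove endpoint_url to post
    # to the live marketplace). Credentials come from the AWS environment.
    mturk = boto3.client(
        "mturk",
        region_name="us-east-1",
        endpoint_url="https://mturk-requester-sandbox.us-east-1.amazonaws.com",
    )

    # ExternalQuestion pointing at a (hypothetical) hosted survey.
    QUESTION_XML = """<?xml version="1.0" encoding="UTF-8"?>
    <ExternalQuestion xmlns="http://mechanicalturk.amazonaws.com/AWSMechanicalTurkDataSchemas/2006-07-14/ExternalQuestion.xsd">
      <ExternalURL>https://example.org/survey</ExternalURL>
      <FrameHeight>600</FrameHeight>
    </ExternalQuestion>"""

    response = mturk.create_hit(
        Title="Short survey about work experiences (placeholder)",
        Description="Answer a 10-minute questionnaire about your work.",
        Reward="1.00",                       # USD, as a string
        MaxAssignments=100,                  # target sample size (assumed)
        LifetimeInSeconds=7 * 24 * 60 * 60,  # HIT visible for one week
        AssignmentDurationInSeconds=30 * 60, # 30 minutes per worker
        Question=QUESTION_XML,
        QualificationRequirements=[
            {
                # Built-in Locale qualification: only workers whose Amazon
                # account locale is the United States may accept the HIT.
                "QualificationTypeId": "00000000000000000071",
                "Comparator": "EqualTo",
                "LocaleValues": [{"Country": "US"}],
            }
        ],
    )
    print(response["HIT"]["HITId"])

Note that such locale filters shape, rather than solve, the representativeness question raised above: they determine who can accept a HIT, not whether the resulting sample resembles the working population of interest.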

Type: Commentaries
Copyright: © Society for Industrial and Organizational Psychology 2016


References

Barger, P., Behrend, T. S., Sharek, D. J., & Sinar, E. F. (2011). I-O and the crowd: Frequently asked questions about using Mechanical Turk for research. The Industrial–Organizational Psychologist, 49(2), 11–17.
Behrend, T. S., Sharek, D. J., Meade, A. W., & Wiebe, E. N. (2011). The viability of crowdsourcing for survey research. Behavior Research Methods, 43(3), 1–14.
Bergman, M. E., & Jean, V. A. (2016). Where have all the "workers" gone? A critical analysis of the unrepresentativeness of our samples relative to the labor market in the industrial–organizational psychology literature. Industrial and Organizational Psychology: Perspectives on Science and Practice, 9, 84–113.
Buhrmester, M., Kwang, T., & Gosling, S. D. (2011). Amazon's Mechanical Turk: A new source of inexpensive, yet high-quality, data? Perspectives on Psychological Science, 6, 3–5.
Casler, K., Bickel, L., & Hackett, E. (2013). Separate but equal? A comparison of participants and data gathered via Amazon's MTurk, social media, and face-to-face behavioral testing. Computers in Human Behavior, 29, 2156–2160.
Chandler, J., Mueller, P., & Paolacci, G. (2014). Nonnaïveté among Amazon Mechanical Turk workers: Consequences and solutions for behavioral researchers. Behavior Research Methods, 46, 112–130. doi:10.3758/s13428-013-0365-7
Downs, J. S., Holbrook, M. B., Sheng, S., & Cranor, L. F. (2010). Are your participants gaming the system? Screening Mechanical Turk workers. In Proceedings of SIGCHI '10: The 28th International Conference on Human Factors in Computing Systems (pp. 2399–2402). New York, NY: ACM Press.
Feitosa, J., Joseph, D. L., & Newman, D. A. (2015). Crowdsourcing and personality measurement equivalence: A warning about countries whose primary language is not English. Personality and Individual Differences, 75, 47–52.
Harms, P. D., & DeSimone, J. A. (2015). Caution! MTurk workers ahead—Fines doubled. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(2), 183–190.
Henrich, J., Heine, S. J., & Norenzayan, A. (2010). The weirdest people in the world? Behavioral and Brain Sciences, 33(2/3), 61–83.
Ipeirotis, P. G. (2010). Demographics of Mechanical Turk (Technical Report CeDER-10-01). New York, NY: New York University.
Litman, L., Robinson, J., & Rosenzweig, C. (2015). The relationship between motivation, monetary compensation and data quality among US- and India-based workers on Mechanical Turk. Behavior Research Methods, 47, 519–528.
Paolacci, G., Chandler, J., & Ipeirotis, P. G. (2010). Running experiments on Amazon Mechanical Turk. Judgment and Decision Making, 5(5), 411–419.
Ross, J., Zaldivar, A., Irani, L., & Tomlinson, B. (2010). Who are the crowdworkers? Shifting demographics in Mechanical Turk. In Proceedings of CHI '10: Extended Abstracts on Human Factors in Computing Systems (pp. 2863–2872). Atlanta, GA: ACM Press.
Woo, S. E., Keith, M., & Thornton, M. A. (2015). Amazon Mechanical Turk for industrial and organizational psychology: Advantages, challenges, and practical recommendations. Industrial and Organizational Psychology: Perspectives on Science and Practice, 8(2), 171–179.