A Pairwise Comparison Framework for Fast, Flexible, and Reliable Human Coding of Political Texts

Published online by Cambridge University Press: 05 September 2017

David Carlson, Washington University in St. Louis
Jacob M. Montgomery, Washington University in St. Louis

David Carlson is a PhD candidate, Washington University in St. Louis, Department of Political Science, Campus Box 1063, One Brookings Drive, St. Louis, MO 63130-4899 ([email protected]).
Jacob M. Montgomery is an Associate Professor, Washington University in St. Louis, Department of Political Science, Campus Box 1063, One Brookings Drive, St. Louis, MO 63130-4899 ([email protected]).

Abstract

Scholars increasingly use online workforces to encode latent political concepts embedded in written or spoken records. In this letter, we build on past efforts by developing and validating a crowdsourced pairwise comparison framework for encoding political texts. The framework combines the human ability to understand natural language with the ability of computers to aggregate data into reliable measures, while ameliorating concerns about the biases and unreliability of non-expert human coders. We validate the method with advertisements for U.S. Senate candidates and with State Department reports on human rights. The framework is very general, and we provide free software that helps applied researchers interact easily with online workforces to extract meaningful measures from texts.
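The aggregation step at the heart of the framework is easy to illustrate. The sketch below, in R (the language of the accompanying software package), fits a simple Bradley-Terry-style model, a standard way to convert workers' pairwise judgments of which text ranks higher on some dimension into a continuous scale for each text. The data, document names, and model choice here are illustrative assumptions for exposition, not the paper's exact implementation.

# A minimal sketch, assuming a Bradley-Terry-style aggregation of
# crowdsourced pairwise judgments; all data and names are hypothetical.

# Each row records that workers judged `winner` as, say, more negative
# than `loser`.
comparisons <- data.frame(
  winner = c("ad1", "ad3", "ad1", "ad2"),
  loser  = c("ad2", "ad1", "ad3", "ad3"),
  stringsAsFactors = FALSE
)

docs <- sort(unique(c(comparisons$winner, comparisons$loser)))

# Negative log-likelihood: Pr(i chosen over j) = logistic(theta_i - theta_j).
nll <- function(theta) {
  names(theta) <- docs
  -sum(plogis(theta[comparisons$winner] - theta[comparisons$loser],
              log.p = TRUE))
}

# The latent scale is identified only up to an additive constant, so fix
# the first document's score at zero and estimate the rest.
fit <- optim(rep(0, length(docs) - 1),
             function(free) nll(c(0, free)),
             method = "BFGS")

theta_hat <- setNames(c(0, fit$par), docs)
print(sort(theta_hat))  # estimated positions of the texts on the coded dimension

A real application would collect many more comparisons per text, from many different workers; the sketch shows only the core logic of turning binary comparisons into a latent scale.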

Type: Research Article
Copyright: © American Political Science Association 2017


Footnotes

We thank Burt Monroe, John Freeman, and Brandon Stewart for providing comments on a previous version of this paper. We are indebted to Ryden Butler, Dominic Jarkey, Jon Rogowski, Erin Rossiter, and Michelle Torres for their assistance with this project. We particularly wish to thank Matt Dickenson for his programming assistance. We also thank David Flasterstein, Joseph Ludmir, and Taishi Muraoka for their assistance in developing the R package. We are grateful for the financial support provided by the Weidenbaum Center on the Economy, Government, and Public Policy. Finally, we wish to thank the partner-workers at Amazon’s Mechanical Turk who make this research possible.

Supplementary Material

Dataset: Carlson and Montgomery dataset (available via the publisher's link).
Supplementary material: Carlson and Montgomery supplementary material (PDF, 496 KB).