
Assessing the Impact of Non-Random Measurement Error on Inference: A Sensitivity Analysis Approach

Published online by Cambridge University Press: 16 January 2017

Abstract

Many commonly used data sources in the social sciences suffer from non-random measurement error, understood as mis-measurement of a variable that is systematically related to another variable. We argue that studies relying on potentially suspect data should take the threat this poses to inference seriously and address it routinely in a principled manner. In this article, we aid researchers in this task by introducing a sensitivity analysis approach to non-random measurement error. The method can be used with any type of data or statistical model, is simple to execute, and is straightforward to communicate. This makes it possible for researchers to routinely report the robustness of their inference to the presence of non-random measurement error. We demonstrate the sensitivity analysis approach by applying it to two recent studies.
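To make the logic concrete, below is a minimal sketch, in Python, of the kind of sensitivity analysis the abstract describes: inject a hypothesized non-random measurement error of increasing severity into a suspect variable, re-estimate the model, and track how the inference responds. The linear error model, the variable names, and the use of numpy and statsmodels are illustrative assumptions for exposition, not the authors' exact procedure.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)

# Simulated example: y depends on x; z is the variable suspected of
# driving the error (e.g., reporting capacity). Here z is independent
# of x and y, but in applications it can be any observed covariate.
n = 1000
z = rng.normal(size=n)
x = rng.normal(size=n)
y = 1.0 + 0.5 * x + rng.normal(size=n)

def refit_with_error(delta):
    """Contaminate x with error whose systematic component is
    proportional to z (severity delta), then re-estimate y ~ x."""
    x_err = x + delta * z  # hypothetical non-random error model
    res = sm.OLS(y, sm.add_constant(x_err)).fit()
    return res.params[1], res.pvalues[1]

# Sweep over severities: the sensitivity analysis reports how the
# estimate and its significance change as the assumed error grows.
for delta in (0.0, 0.25, 0.5, 1.0):
    beta, p = refit_with_error(delta)
    print(f"delta={delta:4.2f}  beta_hat={beta:6.3f}  p={p:.3g}")
```

In a sketch like this, a researcher would report the smallest error severity at which the substantive conclusion changes (for example, the coefficient losing significance or flipping sign), which is what makes the result straightforward to communicate.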

Type: Original Articles

Copyright: © The European Political Science Association 2017

Footnotes

* Max Gallop is a Lecturer, Department of Government and Public Policy, University of Strathclyde, 16 Richmond St., Glasgow G1 1XQ ([email protected]). Simon Weschle is a Junior Research Fellow, Carlos III-Juan March Institute, Calle Madrid 135, Building 18, 28903 Getafe, Madrid ([email protected]). The authors thank Florian Hollenbach, Kosuke Imai, Jack Paine, Jan Pierskalla, Michael Ward, Natalie Jackson, Nils Weidmann, participants of the 2014 Annual Summer Meeting of the Society for Political Methodology at the University of Georgia, and the PSRM reviewers and editors for their helpful comments and suggestions. To view supplementary material for this article, please visit https://doi.org/10.1017/psrm.2016.53

Supplementary material

Gallop and Weschle supplementary material: Online Appendix (PDF, 3.7 MB)

Gallop and Weschle Dataset: Link