
The Effects of Certain and Uncertain Incentives on Effort and Knowledge Accuracy

Published online by Cambridge University Press: 15 October 2019

Thomas Jamieson
Affiliation: School of Public Administration, University of Nebraska, Omaha, NE, USA, e-mail: [email protected]

Nicholas Weller
Affiliation: Department of Political Science, University of California, Riverside, CA, USA, e-mail: [email protected]

Abstract

In many situations, incentives exist to acquire knowledge and make correct political decisions. We conduct an experiment that contributes to a small but growing literature on incentives and political knowledge by testing the effect of certain and uncertain incentives on knowledge. Our experiment builds on the basic theoretical point that acquiring and using information is costly, so incentives for accurate answers should lead respondents to expend greater effort on the task and answer knowledge questions correctly more often. We find that both certain and uncertain incentives increase effort and accuracy relative to a control condition with no incentives for accuracy. Holding constant the expected benefit of knowledge, we do not observe behavioral differences associated with the probability of earning an incentive for knowledge accuracy. These results suggest that measures of subject performance in knowledge tasks are contingent on the incentives subjects face. Therefore, to preserve the validity of experimental tasks and the related behavioral measures, we need to ensure a correspondence between the context we are trying to learn about and our experimental design.
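The phrase "holding constant the expected benefit" can be illustrated with a simple expected-value calculation. The payment amount c and probability p below are hypothetical placeholders, since the abstract does not report the experimental parameters; this is only a sketch of the design logic, not the authors' actual payoffs.

\[
\underbrace{c}_{\text{certain bonus}}
\;=\;
\underbrace{p \cdot \frac{c}{p} + (1-p)\cdot 0}_{\text{uncertain bonus: win } c/p \text{ with probability } p}
\]
% Example with hypothetical values: a certain \$0.50 bonus and a 25\% chance
% of a \$2.00 bonus both have an expected value of \$0.50, so any behavioral
% difference between the two conditions cannot be attributed to expected payoff.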

Type: Research Article

Copyright: © The Experimental Research Section of the American Political Science Association 2019


Footnotes

The research design of the paper was presented at the 2017 ISA Annual Convention, at the EITM Summer Institute at the University of Houston, and in the Networked Democracy Lab at the University of Southern California. We would especially like to thank Pablo Barberá, A. Burcu Bayram, Harold Clarke, Gail Buttorff, Francisco Cantú, Dennis Chong, Douglas Dion, Nehemia Geva, Jim Granato, Patrick James, Brian Rathbun, Frank Scioli, Philip Seib, Rick Wilson, Sunny Wong, Jonathan Woon, participants in the panels, the anonymous reviewers and the Associate Editor for excellent comments and suggestions. Any errors that remain are our own responsibility. This research was supported by a USC Dornsife Gold Family Fellowship and the University of California, Riverside. The authors are aware of no conflicts of interest regarding this research. The data, code, and any additional materials required to replicate all analyses in this article are available at the Journal of Experimental Political Science Dataverse within the Harvard Dataverse Network, at: https://doi.org/10.7910/DVN/WVFZGE (Jamieson and Weller, 2019).

References


Barabas, Jason, Jerit, Jennifer, Pollock, William and Rainey, Carlisle. 2014. The Question(s) of Political Knowledge. American Political Science Review 108(4): 840–855.
Berinsky, Adam J., Huber, Gregory A. and Lenz, Gabriel S. 2012. Evaluating Online Labor Markets for Experimental Research: Amazon.com's Mechanical Turk. Political Analysis 20(3): 351–368.
Berinsky, Adam J., Margolis, Michele F. and Sances, Michael W. 2014. Separating the Shirkers from the Workers? Making Sure Respondents Pay Attention on Self-administered Surveys. American Journal of Political Science 58(3): 739–753.
Casler, Krista, Bickel, Lydia and Hackett, Elizabeth. 2013. Separate but Equal? A Comparison of Participants and Data Gathered via Amazon's MTurk, Social Media, and Face-to-Face Behavioral Testing. Computers in Human Behavior 29(6): 2156–2160.
Clifford, Scott and Jerit, Jennifer. 2016. Cheating on Political Knowledge Questions in Online Surveys: An Assessment of the Problem and Solutions. Public Opinion Quarterly 80(4): 858–887.
Converse, Philip E. 1964. The Nature of Belief Systems in Mass Publics. Critical Review 18(1–3): 1–74.
Delli Carpini, Michael X. and Keeter, Scott. 1996. What Americans Know About Politics and Why It Matters. New Haven: Yale University Press.
Edlin, Aaron, Gelman, Andrew and Kaplan, Noah. 2007. Voting as a Rational Choice: Why and How People Vote to Improve the Well-Being of Others. Rationality and Society 19(3): 293–314.
Feldman, Stanley, Huddy, Leonie and Marcus, George E. 2015. Going to War in Iraq: When Citizens and the Press Matter. Chicago: University of Chicago Press.
Hauser, David J. and Schwarz, Norbert. 2016. Attentive Turkers: MTurk Participants Perform Better on Online Attention Checks than Do Subject Pool Participants. Behavior Research Methods 48(1): 400–407.
Hicks, Raymond and Tingley, Dustin. 2011. Causal Mediation Analysis. Stata Journal 11(4): 605–619.
Hill, Seth J. 2017. Learning Together Slowly: Bayesian Learning about Political Facts. Journal of Politics 79(4): 1403–1418.
Holsti, Ole R. 1992. Public Opinion and Foreign Policy: Challenges to the Almond-Lippmann Consensus. International Studies Quarterly 36(4): 439–466.
Huff, Connor and Tingley, Dustin. 2015. 'Who Are These People?' Evaluating the Demographic Characteristics and Political Preferences of MTurk Survey Respondents. Research & Politics 2(3): 2053168015604648.
Imai, Kosuke, Keele, Luke, Tingley, Dustin and Yamamoto, Teppei. 2011. Unpacking the Black Box of Causality: Learning about Causal Mechanisms from Experimental and Observational Studies. American Political Science Review 105(4): 765–789.
Imai, Kosuke, Keele, Luke and Yamamoto, Teppei. 2010. Identification, Inference and Sensitivity Analysis for Causal Mediation Effects. Statistical Science 25(1): 51–71.
Jamieson, Thomas and Weller, Nicholas. 2019. Replication Data for: The Effects of Certain and Uncertain Incentives on Effort and Knowledge Accuracy. Journal of Experimental Political Science Harvard Dataverse, V1. doi: 10.7910/DVN/WVFZGE
Jann, Ben. 2014. Plotting Regression Coefficients and Other Estimates. Stata Journal 14(4): 708–737.
Kane, John V. and Barabas, Jason. 2019. No Harm in Checking: Using Factual Manipulation Checks to Assess Attentiveness in Experiments. American Journal of Political Science 63(1): 234–249.
Kinder, Donald R. and Sears, David O. 1985. Public Opinion and Political Action. In Handbook of Social Psychology, eds. Gilbert, Daniel and Lindzey, Gardner. New York: Random House, 659–741.
Krupnikov, Yanna, Levine, Adam Seth, Lupia, Arthur and Prior, Markus. 2006. Public Ignorance and Estate Tax Repeal: The Effect of Partisan Differences and Survey Incentives. National Tax Journal 59(3): 425–437. Retrieved from https://www.jstor.org/stable/41790333
Levay, Kevin E., Freese, Jeremy and Druckman, James N. 2016. The Demographic and Political Composition of Mechanical Turk Samples. SAGE Open 6(1): 2158244016636433.
Lupia, Arthur. 2015. Uninformed: Why People Seem to Know So Little about Politics and What We Can Do about It. New York: Oxford University Press.
Lupia, Arthur and McCubbins, Mathew D. 1998. The Democratic Dilemma: Can Citizens Learn What They Need to Know? Cambridge, UK; New York: Cambridge University Press.
Mellers, Barbara, Stone, Eric, Atanasov, Pavel, Rohrbaugh, Nick, Metz, S. Emlen, Ungar, Lyle, Bishop, Michael M., Horowitz, Michael, Merkle, Ed and Tetlock, Philip. 2015. The Psychology of Intelligence Analysis: Drivers of Prediction Accuracy in World Politics. Journal of Experimental Psychology: Applied 21(1): 1–14.
Mildenberger, Matto and Tingley, Dustin. 2019. Beliefs about Climate Beliefs: The Importance of Second-Order Opinions for Climate Politics. British Journal of Political Science 49(4): 1279–1307.
Morton, Rebecca B. and Williams, Kenneth C. 2010. Experimental Political Science and the Study of Causality. New York: Cambridge University Press.
Mullinix, Kevin J., Leeper, Thomas J., Druckman, James N. and Freese, Jeremy. 2015. The Generalizability of Survey Experiments. Journal of Experimental Political Science 2(2): 109–138.
Pforr, Klaus, Blohm, Michael, Blom, Annelies G., Erdel, Barbara, Felderer, Barbara, Fräßdorf, Mathis, Hajek, Kristin, Helmschrott, Susanne, Kleinert, Corinna, Koch, Achim, Krieger, Ulrich, Kroh, Martin, Martin, Silke, Saßenroth, Denise, Schmiedeberg, Claudia, Trüdinger, Eva-Maria and Rammstedt, Beatrice. 2015. Are Incentive Effects on Response Rates and Nonresponse Bias in Large-Scale, Face-to-Face Surveys Generalizable to Germany? Evidence from Ten Experiments. Public Opinion Quarterly 79(3): 740–768.
Prior, Markus and Lupia, Arthur. 2008. Money, Time, and Political Knowledge: Distinguishing Quick Recall and Political Learning Skills. American Journal of Political Science 52(1): 169–183.
Prior, Markus, Sood, Gaurav and Khanna, Kabir. 2015. You Cannot Be Serious: The Impact of Accuracy Incentives on Partisan Bias in Reports of Economic Perceptions. Quarterly Journal of Political Science 10(4): 489–518.
Roberts, Margaret E., Stewart, Brandon M. and Tingley, Dustin. 2018. stm: R Package for Structural Topic Models. R Package Version 1.3.3. Retrieved from http://www.structuraltopicmodel.com
Roberts, Margaret E., Stewart, Brandon M., Tingley, Dustin, Lucas, Christopher, Leder-Luis, Jetson, Gadarian, Shana Kushner, Albertson, Bethany and Rand, David G. 2014. Structural Topic Models for Open-Ended Survey Responses. American Journal of Political Science 58(4): 1064–1082.
Tetlock, Philip E. 1992. Good Judgment in International Politics: Three Psychological Perspectives. Political Psychology 13(3): 517–539.
Tetlock, Philip E. 1998. Close-Call Counterfactuals and Belief-System Defenses: I Was Not Almost Wrong but I Was Almost Right. Journal of Personality and Social Psychology 75: 639–652.
Tetlock, Philip E. 1999. Theory-Driven Reasoning About Plausible Pasts and Probable Futures in World Politics: Are We Prisoners of Our Preconceptions? American Journal of Political Science 43(2): 335–366.
Tetlock, Philip E. 2006. Expert Political Judgment: How Good Is It? How Can We Know? Princeton, NJ: Princeton University Press.
Tetlock, Philip E. and Gardner, Dan. 2016. Superforecasting: The Art and Science of Prediction. New York: Random House.
Tversky, Amos and Kahneman, Daniel. 1981. The Framing of Decisions and the Psychology of Choice. Science 211(4481): 453–458.
Warriner, Keith, Goyder, John, Gjertsen, Heidi, Hohner, Paula and McSpurren, Kathleen. 1996. Charities, No; Lotteries, No; Cash, Yes: Main Effects and Interactions in a Canadian Incentives Experiment. Public Opinion Quarterly 60(4): 542–562.
Zheng, Alvin, Gong, Jing and Pavlou, Paul. 2017. On Using the Lottery in Crowdfunding Platforms: 'Crowding in' the Masses but 'Crowding out' Success. Retrieved from https://papers.ssrn.com/abstract=2916807
Supplementary material

Jamieson and Weller supplementary material (PDF, 962.7 KB)

Jamieson and Weller Dataset (link)