
Do letters about conspiracy belief studies greatly exaggerate? A reply to Sutton and Douglas

Published online by Cambridge University Press: 30 July 2020

Daniel Freeman*, Felicity Waite, Laina Rosebrock, Ariane Petit, Emily Bold, Sophie Mulhall, Lydia Carr, Ashley-Louise Teale, Lucy Jenner, Anna East, Chiara Causier, Jessica C. Bird and Sinéad Lambe

Affiliation (all authors): Psychiatry, Oxford University, UK

*Author for correspondence: Daniel Freeman, E-mail: [email protected]

Type: Invited Letter Rejoinder

This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (CC BY 4.0; http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.

Copyright © The Author(s) 2020. Published by Cambridge University Press

In the design of our study about coronavirus conspiracy beliefs and the consequences for adherence to social distancing guidelines (Freeman et al., 2020), we thought very carefully about the content of the questionnaire items, and their associated scales, in order to test the primary hypothesis. We chose to develop item content that was unambiguous, extreme, and false (e.g. ‘Jews have created the virus to collapse the economy for financial gain’). We selected a response scale to assess any degree of endorsement (do not agree, agree a little, agree moderately, agree a lot, agree completely) in a manner easily understandable for participants and simple to interpret. Respondents were presented with stark beliefs and a clear decision to make about endorsement. Hence, we could test whether any countenance of the extreme beliefs (which might include a degree of acquiescence, though there was no ambiguity in the statements being endorsed) affects adherence to social distancing guidelines. It is a study about how belief may drive action, and any belief in an obvious conspiracy theory might be socially damaging. It should not be forgotten that there was a very simple, low cognitive load option for responding to the extreme beliefs: ‘do not agree’. There is also evidence that online surveys can be resistant to demand effects (Mummolo & Peterson, 2019). The measurement method for conspiracy beliefs was grounded in clinical studies assessing delusions, in which a single dimension of degree of conviction in unfounded beliefs is isolated [from do not believe (0%) to completely believe (100%)]. We deliberately avoided a single ‘completely disbelieve’ (−100%) to ‘completely believe’ (100%) scale, partly because of difficulties in interpreting such dimensions of disagreement and partly because of empirical evidence that the degree to which an individual believes a delusional belief is separate (to an extent) from the degree to which he or she thinks the belief could be mistaken (So et al., 2012).
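To make the scoring concrete, here is a minimal sketch of how the endorsement scale described above could be coded for analysis. The numeric coding and the dichotomisation into ‘any endorsement’ are our own illustrative assumptions, not the published analysis code.

```python
# Minimal sketch (assumed coding scheme, not the published analysis code).
# The five response options map onto a unipolar 0-4 scale; 'any endorsement'
# is then defined as any response above 'do not agree'.

RESPONSE_CODES = {
    "do not agree": 0,
    "agree a little": 1,
    "agree moderately": 2,
    "agree a lot": 3,
    "agree completely": 4,
}

def endorses(response: str) -> bool:
    """Return True if the respondent shows any degree of endorsement."""
    return RESPONSE_CODES[response] > 0

# Example: choosing 'agree a little' counts as endorsement; 'do not agree' does not.
assert endorses("agree a little")
assert not endorses("do not agree")
```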

Sutton and Douglas (2020) asked a convenience sample of 750 people to complete a small number of our conspiracy belief questions, but with different rating scales. We wish to note just three simple points in response. First, their result will surprise no one: using different rating scales produces a (somewhat) different pattern of answers. It does not identify which scale might be best. Second, the letter writers have overlooked the basic research design principle that items and their corresponding scales are chosen for the particular purpose of a study. A consequence is that they have missed a genuinely interesting methodological question: do different rating scales have differential sensitivity in assessing whether conspiracy beliefs affect adherence to social distancing guidelines? Finally, the letter writers omit consideration of the significant limitations of the scales they advocate. They think that there is a single continuum between strong disagreement and strong agreement and hence that simply adding disagree responses and a ‘neither agree nor disagree’ response solves issues of scaling. This is mistaken on all three counts. Degrees of agreement and disagreement are obviously negatively associated, but typically they are not genuine opposites on a single dimension, and it creates difficulties in interpretation when they are treated as such (Saris, Krosnick, Revilla, & Shae, 2010). The interpretative problems of introducing disagree options to a linear agree scale can be seen acutely if one pauses for a moment to consider our conspiracy theory study. For example, it would be plausible to think that a respondent who only ‘disagrees a little’ with the item ‘Jews have created the virus to collapse the economy for financial gain’ might also ‘agree a little’ with the extreme belief, but he or she would only be able to select one option. Sutton and Douglas add further imprecision with their use of the notoriously ambiguous midpoint response of ‘neither agree nor disagree’, known to be selected for many different reasons by respondents (Kulas & Stachowski, 2013). If we were to finesse our scale, we would consider adding a ‘Don't know’ response option, although this too is not without complications, since there is a decision to make about how to treat such responses in analyses.
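The sensitivity question raised above could, in principle, be explored by simulation. The sketch below is purely illustrative: the latent belief model, the cut points for each scale, and the use of a rank correlation with adherence as the criterion are all assumptions made for this example, not features of either study.

```python
# Illustrative simulation only: the latent model, cut points, and effect size
# below are assumptions made for this sketch, not values from either study.
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
n = 5_000

# A latent conspiracy belief score and an adherence outcome that is
# (by assumption) negatively related to it.
belief = rng.normal(size=n)
adherence = -0.3 * belief + rng.normal(size=n)

def unipolar_5(x):
    # 'Do not agree' (0) for the whole non-endorsing range, then four
    # increasing grades of agreement (1-4).
    return np.digitize(x, [0.0, 0.7, 1.4, 2.1])

def bipolar_7(x):
    # Seven categories from 'completely disagree' (0) to 'completely agree' (6).
    return np.digitize(x, [-2.0, -1.0, -0.3, 0.3, 1.0, 2.0])

# Compare how strongly each coded scale tracks the adherence outcome.
for name, coded in [("unipolar 5-point", unipolar_5(belief)),
                    ("bipolar 7-point", bipolar_7(belief))]:
    rho, _ = spearmanr(coded, adherence)
    print(f"{name}: Spearman rho with adherence = {rho:.3f}")
```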

No questionnaires are perfect¹, but our choice of item content and associated scaling was conceptually precise, easy to understand, and easy to interpret. If Sutton and Douglas are as fixed on introducing disagreement as they seem, then they should have added a second rating scale for disagreement for each item. Their letter concludes with an age-old lament about press releases purportedly stripping research coverage of nuance and caveats and introducing sensationalism; we hope such injudicious traits are equally guarded against in journal letters.

Footnotes

1 Except ours.

References

Freeman, D., Waite, F., Rosebrock, L., Petit, A., Causier, C., East, A., … Lambe, S. (2020). Coronavirus conspiracy beliefs, mistrust, and compliance with government guidelines in England. Psychological Medicine. doi:10.1017/S0033291720001890.
Kulas, J. T., & Stachowski, A. A. (2013). Respondent rationale for neither agreeing nor disagreeing: Person and item contributors to middle category endorsement intent on Likert personality indicators. Journal of Research in Personality, 47, 254–262.
Mummolo, J., & Peterson, E. (2019). Demand effects in survey experiments: An empirical assessment. American Political Science Review, 113, 517–529.
Saris, W. E., Krosnick, J. A., Revilla, M., & Shae, E. M. (2010). Comparing questions with agree/disagree response options to questions with item-specific response options. Survey Research Methods, 4, 61–79.
So, S., Freeman, D., Dunn, G., Kapur, S., Kuipers, E., Bebbington, P., … Garety, P. (2012). Jumping to conclusions, a lack of belief flexibility and delusional conviction in psychosis. Journal of Abnormal Psychology, 121, 129–139.
Sutton, R., & Douglas, K. (2020). Letter: Agreeing to disagree: Reports of the popularity of Covid-19 conspiracy theories are greatly exaggerated. Psychological Medicine.