
Validity testing of the conspiratorial thinking and anti-expert sentiment scales during the COVID-19 pandemic across 24 languages from a large-scale global dataset

Published online by Cambridge University Press:  12 September 2022

Hyemin Han*
Affiliation:
Educational Psychology Program, University of Alabama, Tuscaloosa, AL, USA
Angélique M. Blackburn
Affiliation:
Department of Psychology and Communication, Texas A&M International University, Laredo, TX, USA
Alma Jeftić
Affiliation:
Peace Research Institute, International Christian University, Tokyo, Japan
Thao Phuong Tran
Affiliation:
Department of Psychology, Colorado State University, Fort Collins, CO, USA
Sabrina Stöckli
Affiliation:
Department of Consumer Behavior, University of Bern, Bern, Switzerland
Jason Reifler
Affiliation:
Department of Politics, University of Exeter, Exeter, UK
Sara Vestergren
Affiliation:
School of Psychology, Keele University, Keele, UK
*
Author for correspondence: Hyemin Han, E-mail: [email protected]

Abstract

In this study, we tested the validity of two scales addressing conspiratorial thinking and anti-expert sentiment that may influence behaviours related to public health and the COVID-19 pandemic. Using the COVIDiSTRESSII Global Survey data from 12 261 participants, we validated the 4-item Conspiratorial Thinking Scale and the 3-item Anti-Expert Sentiment Scale across the 24 languages and dialects that were used by at least 100 participants each. We examined internal consistency and employed confirmatory factor analysis, a measurement invariance test and measurement alignment to assess cross-language validity. To test the convergent validity of the two scales, we assessed correlations with trust in seven agents related to government, science and public health. Although scalar invariance was not achieved when the measurement invariance test was initially conducted, we found that both scales can be employed in further international studies once measurement alignment is applied. Moreover, both conspiratorial thinking and anti-expert sentiments were significantly and negatively correlated with trust in all agents. Findings from this study provide supporting evidence for the validity of both scales across 24 languages for future large-scale international research.

Type
Original Paper
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Introduction

Before and throughout the COVID-19 pandemic, beliefs in conspiracy theories and negative attitudes towards experts have been on the rise. Conceptually, conspiratorial thinking is an increased tendency to view the world in conspiratorial terms and to attribute the causes of events to groups acting in secret for personal benefit against the common good [1, 2]. Anti-expert sentiment, a phenomenon often studied alongside conspiratorial thinking, is a form of anti-elitism and anti-intellectualism marked by distrust of individuals who claim to be experts or hold credentials on a topic [2, 3]. The recent rise in conspiratorial thinking and anti-expert sentiments may be due in part to the increased use of conspiracy theories for political gain [3–5], the rise of confirmation bias in social media circles [6], inconsistencies in public health information [7] or the fact that conspiracy theories proliferate during societal crises and times of uncertainty [8]. Given the potential harm caused by conspiratorial thinking and anti-expert sentiments, it is critical to have a rapid and effective global tool to assess both types of thinking in order to implement mitigation plans that improve science-driven public health and policy decisions.

Need for cross-language scale validity for rapid data collection during a global health crisis

Conspiracy theories can influence social and political behaviours [1, 3, 9, 10] and result in undesirable and even catastrophic social outcomes [3, 7, 11]. Of particular interest for an international health crisis such as the COVID-19 pandemic is that believing conspiracy theories has been linked to vaccine hesitancy [6], reduced compliance with containment measures [7, 12, 13] and reduced behaviours linked to civic and social responsibility [14]. Specifically, doubters and deniers of COVID-19 risk tended to believe conspiracy theories related to the pandemic, expressed anti-elitist sentiments and reported low compliance with measures to reduce the spread of the virus [12]. Low trust in institutions, including the scientific community, is also linked to vaccine hesitancy and to lower compliance with preventive measures in general [15–17]. Finally, conspiracy theories and negative attitudes towards experts have other detrimental effects, such as increasing uncertainty and discrimination against marginalised groups [9].

Overall, both conspiracy theories and negativity towards experts can have lasting impacts on the trajectory of a global (health) crisis. Therefore, a consistent method of measuring conspiratorial thinking and anti-expert sentiments across languages is needed, especially when considering political and public health events on a global scale. Reliable means to rapidly assess these beliefs across countries are necessary to implement mitigation strategies [3]. This is particularly critical, as interventions to reduce these beliefs and their accompanying behaviours may be fairly straightforward and rapidly implemented [16].

Countless conspiracy theories attract individuals across different demographics [e.g. 12, 18], so any single scale that measures belief in specific conspiracy theories is difficult to generalise. Uscinski et al. [1] developed the Conspiratorial Thinking Scale (CTS) to assess individuals' general disposition towards believing conspiracy theories. Previous work showed that individuals high in conspiratorial thinking are also more likely to report anti-expert beliefs, and vice versa [14, 19]. As such, the COVIDiSTRESSII Consortium developed an Anti-Expert Sentiment Scale (AESS) [20] to gauge individuals' levels of distrust in expert consensus.

However, these scales have yet to be validated across different languages. This is critical because a general CTS available in different languages provides a way to compare conspiracy theorising across political contexts that studying specific conspiracy theories cannot. Likewise, the AESS was designed to be generalisable across countries and contexts. The CTS and AESS are the shortest of the available scales and, once validated across languages, provide scholars with a cost-effective and efficient way of measuring conspiratorial thinking and anti-expert sentiment in multi-country studies.

Relationship between conspiratorial thinking and anti-expert beliefs, and trust as a means to validate the scales

Robust associations have been reported between general conspiratorial thinking and trust in government, science and public health institutions [2, 13]. Moreover, trust in an institution, whether political or scientific, is tightly coupled with conspiratorial thinking specifically related to that institution [2, 21, 22]. For instance, a strong correlation has been observed between belief in conspiracy theories related to vaccines and reduced trust in science and institutions [6]. Likewise, trust in government mediated the inverse relationship between conspiratorial thinking and compliance with social distancing behaviours to reduce the spread of disease [13]. The relationship between trust and conspiratorial thinking is so robust that mere exposure to a conspiracy claim has been shown to negatively affect trust in government institutions, even institutions that were not connected to the conspiracy theory [23].

The likelihood of believing a particular conspiracy theory appears to be driven to some degree by exposure to information related to the conspiracy (e.g. within one's social network), while also being heavily driven by a combination of general conspiratorial thinking and trust [1], which in turn can affect how one perceives the information one is exposed to. Also, studies that included diverse psychological constructs and demographics documented denialism of expert information, along with partisan and ideological motivations, as the strongest predictors of believing in COVID-19 conspiracy theories as measured by the CTS [14]. Partisanship appears to drive the direction of conspiratorial thinking in such a way that members of one political party are more inclined to believe conspiracy theories about another, and vice versa, even when the degree of general conspiratorial thinking does not differ between parties [1]. In other words, the degree of trust in an institution is linked to conspiratorial thinking related to that institution, and perhaps to other government institutions and services more broadly [23]. Hence, a negative association of conspiratorial thinking and anti-expert beliefs with trust could be expected.

This study

In this paper, we tested the validity of scales capturing conspiratorial thinking and anti-expert sentiments that may influence behaviours related to public health during an epidemic or pandemic. In particular, we used two scales, the 4-item CTS adapted from Uscinski et al. [1] and the 3-item AESS designed by the COVIDiSTRESSII Consortium [20], and tested their cross-language validity. While a number of conspiracy belief scales have been tested [24–27], we selected the CTS due to its face and content validity. Given that the CTS has been used in various previous studies examining conspiratorial thinking within the context of COVID-19 research, it is reasonable to assume that its validity is supported by findings from such studies. However, because the scale has so far been used primarily within the US context, it needs to be tested in diverse settings. The COVIDiSTRESSII Consortium opted to develop a short new scale that fully captured the concept of anti-expert sentiments using items created by a co-author; the scale comprises three questions about belief in expert knowledge relative to confidence in one's own knowledge.

Assuring the measurement validity of the two scales in different languages is the first step to take before conducting international research on the topic. In addition, during the survey process, participants were presented with survey forms in different languages depending on their first language. Hence, we focused on the measurement validity across different languages in the present study. We tested the measurement invariance and alignment of these scales across 24 languages and dialects using the COVIDiSTRESSII Global Survey dataset. In addition, we also examined whether the measurement model can be applied to individual language groups. If the measurement model is valid within each individual group, then researchers who intend to collect data from a single language group but do not intend to conduct international comparison would be able to use the measures written in their own language.

The measurement invariance test was conducted to examine whether the scales in different languages measure the same construct with the same measurement structure across languages [28]. The presence of scalar invariance, which assumes the same factor loadings and intercepts across groups, is essential to assure the quality of cross-national research using the scales [29]. Measurement alignment was performed, if needed, to address potential measurement non-invariance reported by the measurement invariance test, as was done in prior COVID-19-related international survey studies [15]. The measurement alignment process was expected to address non-invariance so that researchers would be able to conduct cross-national comparisons. Whether the measures written in a single language can be employed in studies focusing on one language group, rather than international comparison, was also examined during the invariance test process.

We then assessed the convergent validity of each scale by testing the expected correlations between the CTS and AESS and items measuring trust in institutions. We predicted negative correlations between both scales and the trust items. In particular, because trust in political entities is related to conspiratorial thinking [9], we predicted a negative correlation between the CTS and trust in one's national parliament or government. We also predicted negative correlations between the AESS and trust in the scientific community and the World Health Organization (WHO). Positive correlations between the CTS and AESS and negative correlations of each scale with trust, as demonstrated in previous literature, would indicate that the scales measure the intended constructs.

Methods

Dataset

The COVIDiSTRESSII Global Survey is a pre-registered, large-scale international survey dataset collected online by a consortium of over 150 international researchers who used local recruitment methods and snowball sampling to recruit anonymous volunteers from 137 countries across the globe [20]. The survey was administered online from 28 May to 29 August 2021. The data collection process was reviewed and approved by the Research, Enterprise and Engagement Ethical Approval Panel at the University of Salford (IRB number: 1632). The cleaned dataset included responses from 15 740 participants from 137 countries (see [20] for further details about the data collection and cleaning processes).

As the measurement invariance test and measurement alignment involve confirmatory factor analysis (CFA), following statistical guidelines, we analysed responses in language groups with n ≥ 100 [30, 31]. After excluding language groups with n < 100, we retained 12 261 responses in 24 language groups for further analysis. Demographics of the participants are presented in Table S1.
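For illustration, the language-based filtering can be sketched in base R as follows; the data frame d and the column name UserLanguage are hypothetical placeholders rather than the actual variable names in the published dataset and scripts.

```r
# Minimal sketch of the n >= 100 language filter; `d` is the cleaned survey
# data frame and `UserLanguage` a hypothetical name for the language column.
counts <- table(d$UserLanguage)          # responses per language
keep   <- names(counts)[counts >= 100]   # languages with at least 100 responses
d_sub  <- d[d$UserLanguage %in% keep, ]  # subset used for CFA and alignment

length(keep)  # 24 language groups in the present study
nrow(d_sub)   # 12 261 responses in the present study
```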

Materials

All items were first prepared in English. Then, the English version was translated and back translated into various languages by researchers with native language skills.

Conspiratorial Thinking Scale

At the beginning of the survey section addressing conspiratorial thinking and anti-expert sentiments, participants were presented with the following statement: ‘We will now present a few statements about the COVID-19 virus and about you. Please read the statements and indicate to what extent you agree with them’. Conspiratorial thinking was then measured with four items, slightly modified from Uscinski et al. [1]: ‘much of our lives are being controlled by plots hatched in secret places’, ‘even though we live in a democracy, a few people will always run things anyway’, ‘the people who really “run” the country are not known to the voter’ and ‘big events like wars, recessions, and the outcomes of elections are controlled by small groups of people who are working in secret against the rest of us’. Responses were anchored to a 7-point Likert scale (1: strongly disagree to 7: strongly agree).

Anti-Expert Sentiment Scale

Based on findings relating conspiratorial thinking and anti-expert sentiments [1, 14], the items for the AESS were formulated by experts in the COVIDiSTRESSII Consortium based on previous research [e.g. 1–3]. The items are: ‘I am more confident in my opinion than other people's facts’, ‘most of the time I know just as much as experts’ and ‘experts really don't know that much’. Answers were anchored to a 7-point Likert scale (1: strongly disagree to 7: strongly agree).

Trust

To test the convergent validity of the two scales, we also collected data about trust in agents addressing the COVID-19 pandemic. Following methods from Lieberoth et al. [32], seven items were used to survey trust in these seven agents: parliament/government; police; civil service; the health system; the WHO; the government's effort to handle coronavirus; and the scientific research community. Responses were anchored to an 11-point scale (10%: no trust to 90%: complete trust).

Analysis plan

Measurement invariance test

To examine whether the two scales are valid across different languages, we performed a measurement invariance test with lavaan [33]. Before examining the cross-language validity of the scales, their internal consistency was tested in terms of Cronbach's α. Following the internal consistency testing, the theoretical measurement model of each scale was tested with CFA with language as the grouping variable. Because responses to the items were anchored to a 7-point Likert scale, we employed the diagonally weighted least squares estimator as suggested by DiStefano et al. [34].
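As an illustration, the multigroup CFA step might look as follows with lavaan; the item names (cts_1 … aess_3) are hypothetical placeholders, and the authoritative analysis scripts are linked in the Data availability statement.

```r
library(lavaan)

# One-factor measurement models; item names are illustrative placeholders.
cts_model  <- 'CTS  =~ cts_1 + cts_2 + cts_3 + cts_4'
aess_model <- 'AESS =~ aess_1 + aess_2 + aess_3'

# Multigroup CFA with language as the grouping variable and the diagonally
# weighted least squares (DWLS) estimator for ordinal Likert-type items.
fit_cts <- cfa(cts_model, data = d_sub, group = "UserLanguage",
               estimator = "DWLS")
fitMeasures(fit_cts, c("rmsea", "srmr", "cfi"))
```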

Measurement invariance was examined in terms of whether the model fit indicators (i.e. RMSEA, SRMR and CFI) changed significantly when different levels of model constraints were applied [31]. We tested four levels of measurement invariance: configural, metric, scalar and residual invariance [29]. First, the most lenient level, configural invariance, only assumes an equal measurement structure across groups. The presence of configural invariance suggests that the examined factor structure can be validly applied across different groups [37, 38]. Thus, if configural invariance is achieved, the examined scale can be used within one specific group with the tested measurement model, provided cross-group comparison is not conducted. Second, metric invariance additionally assumes equal loadings. Third, achievement of scalar invariance requires equal intercepts. Fourth, the strictest level, residual invariance, assumes equal residuals. In general, scalar invariance is a minimum requirement for between-group comparison. In the case of metric invariance, we required ΔRMSEA < +0.015, ΔSRMR < +0.030 and ΔCFI > −0.01. For the other invariance levels, we examined whether ΔRMSEA < +0.015, ΔSRMR < +0.015 and ΔCFI > −0.01 [28].
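A sketch of the nested invariance models, under the same placeholder names as above; each level adds equality constraints through lavaan's group.equal argument, and fit indices are then compared against the cut-offs.

```r
# Configural: same structure, no equality constraints (fit_cts above).
fit_metric <- cfa(cts_model, data = d_sub, group = "UserLanguage",
                  estimator = "DWLS", group.equal = "loadings")
fit_scalar <- cfa(cts_model, data = d_sub, group = "UserLanguage",
                  estimator = "DWLS",
                  group.equal = c("loadings", "intercepts"))
fit_residual <- cfa(cts_model, data = d_sub, group = "UserLanguage",
                    estimator = "DWLS",
                    group.equal = c("loadings", "intercepts", "residuals"))

# Compare fit indices across levels; invariance holds at a level if the
# changes stay within the cut-offs (e.g. ΔCFI > -0.01).
sapply(list(configural = fit_cts, metric = fit_metric,
            scalar = fit_scalar, residual = fit_residual),
       fitMeasures, fit.measures = c("rmsea", "srmr", "cfi"))
```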

Measurement alignment

If at least scalar invariance was not achieved, we performed measurement alignment to address the measurement non-invariance between languages. Measurement alignment was performed with the sirt package [35]. It addresses non-invariance by adjusting factor loadings, intercepts and group means across groups [29].

After conducting measurement alignment, we examined whether the alignment process was successful with two R² indicators, R²loadings and R²intercepts. These R² values indicate the extent of non-invariance absorbed in factor loadings and intercepts, respectively [36]. A value of R² = 1.00 indicates that 100% of non-invariance was successfully absorbed through alignment, while R² = 0.00 means that none of the non-invariance was resolved. In general, whether less than 25% of non-invariance remains after alignment is regarded as a criterion for determining the success of alignment [36]. Thus, we examined whether both R² values were 0.75 or higher. If both values exceeded the cut-off, we assumed that non-invariance was successfully addressed, and thus that scalar invariance was achieved through alignment.
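For illustration, a minimal sketch of this step with sirt, assuming lambda_mat and nu_mat are group-by-item matrices of loadings and intercepts extracted from the multigroup CFA above; the exact interface of invariance.alignment and the layout of its output may differ across sirt versions.

```r
library(sirt)

# lambda_mat: 24 x 4 matrix of factor loadings (language groups x items)
# nu_mat:     24 x 4 matrix of intercepts     (language groups x items)
align_cts <- invariance.alignment(lambda = lambda_mat, nu = nu_mat)

# R² effect sizes of invariance absorption for loadings and intercepts;
# values of 0.75 or higher are taken here as evidence of successful alignment.
align_cts$es.invariance["R2", ]
```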

In addition, we examined whether there were any significant unique item parameters in the factor loadings and intercepts across language groups, i.e. loadings or intercepts that deviated significantly from those of the other groups. This was done with invariance_alignment_constraint implemented in sirt. The function adjusts factor loadings and intercepts across groups so that the aligned model can absorb non-invariance through measurement alignment. If more than 25% of item parameters reported significant unique parameters, we deemed that there was significant measurement non-invariance in either loadings or intercepts. The 25% cut-off value follows Asparouhov and Muthén [36].

Once measurement alignment was completed, we calculated factor scores with the adjusted factor loadings and intercepts for each language group and used these factor scores in further analyses. Furthermore, we tested whether measurement alignment was capable of producing consistent outcomes. For repetitive cross-validation, we employed a simulation test originally implemented as a Monte Carlo simulation for cross-validation of measurement alignment [39]. We generated simulation datasets with N = 100, 200 and 500 per group, performed measurement alignment on the generated datasets and examined whether it produced outcomes consistent with CFA. Consistency was quantified in terms of the Spearman correlation coefficient between factor mean scores estimated by alignment and by CFA (see the supplementary materials in Lieberoth et al. for further methodological details [32]). The same simulation process was performed 500 times with multiprocessing for cross-validation with improved computational power [40]. Following Muthén and Asparouhov, who employed the same procedure, we assumed that a mean correlation value ≥0.95 indicates good consistency and reliability of alignment [39]. For additional information, the correlation between factor variances estimated by measurement alignment and CFA was also examined.
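The repetitive simulation can be sketched as follows; simulate_groups, alignment_factor_means and cfa_factor_means are hypothetical wrappers around the data-generation, alignment and CFA steps above, not functions from any package.

```r
set.seed(2021)
n_reps <- 500

cor_means <- replicate(n_reps, {
  sim <- simulate_groups(n_groups = 24, n_per_group = 200)  # hypothetical generator
  m_align <- alignment_factor_means(sim)  # hypothetical wrapper around sirt
  m_cfa   <- cfa_factor_means(sim)        # hypothetical wrapper around lavaan
  cor(m_align, m_cfa, method = "spearman")
})

# Criterion from Muthén and Asparouhov: a mean correlation >= 0.95 indicates
# consistent and reliable alignment across repetitions.
mean(cor_means) >= 0.95
```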

Correlation analysis

We examined the correlations between conspiratorial thinking, anti-expert sentiments and the seven trust items to test the convergent validity of the two scales. Where measurement alignment was conducted, we employed factor scores calculated with the adjusted factor loadings and intercepts for the correlation analysis, to address the issue of measurement non-invariance [17]. For additional information, we also examined the correlation between factor scores estimated without alignment and the trust variables.
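A sketch of the convergent validity check, assuming the aligned factor scores have been merged into the data (here as a hypothetical column cts_score) and using illustrative names for the trust columns.

```r
trust_items <- c("trust_parliament", "trust_police", "trust_civil_service",
                 "trust_health_system", "trust_who", "trust_gov_covid",
                 "trust_science")

# Correlations of each trust item with the aligned CTS factor scores,
# collecting coefficients and p values, then applying FDR correction.
res <- sapply(trust_items, function(v) {
  ct <- cor.test(d_sub$cts_score, d_sub[[v]])
  c(r = unname(ct$estimate), p = ct$p.value)
})
p_fdr <- p.adjust(res["p", ], method = "fdr")  # false discovery rate correction
```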

Results

Measurement invariance test

When the internal consistency of each scale was examined in terms of Cronbach's α, both the CTS (α = 0.85) and AESS (α = 0.74) reported at least acceptable consistency. Findings from the measurement invariance test are presented in Table 1.

Table 1. Results from the measurement invariance test

As shown, although configural invariance, which supports an equal measurement structure across languages, was achieved for both scales, neither metric nor scalar invariance was achieved, because the changes in RMSEA, SRMR and CFI exceeded the cut-off values. Although the raw values of RMSEA (~0.08), SRMR (<0.08) and CFI (≥0.90) were seemingly acceptable per se, the changes exceeded the set thresholds (i.e. ΔRMSEA < +0.015, ΔSRMR < +0.030, ΔCFI > −0.01). Hence, we conducted measurement alignment to address the measurement non-invariance.

Measurement alignment

We performed measurement alignment for the two scales to address non-invariance and enable future cross-national investigations using the scales. First, when measurement alignment was performed for the CTS, the resultant R²loadings = 0.97 and R²intercepts = 0.99. Second, in the case of the AESS, R²loadings = 0.85 and R²intercepts = 0.99.

Furthermore, our inspection of item parameters showed that fewer than 25% of item parameters reported unique parameters. In the case of the CTS, 6.2% of factor loadings and 19.8% of intercepts reported significant unique item parameters (see Tables S2 and S3 for the groups that reported significant item parameters in CTS factor loadings and intercepts, respectively). When the AESS was examined, 6.9% of factor loadings and 19.4% of intercepts demonstrated significant unique item parameters (see Tables S4 and S5 for the groups that reported significant item parameters in AESS factor loadings and intercepts, respectively). In all cases, the proportions were below the 25% cut-off value. These findings support the point that measurement non-invariance in both factor loadings and intercepts was successfully addressed.

The simulation test for the consistency check showed that measurement alignment was capable of producing consistent and reliable outcomes across repetitions. In all cases (N = 100, 200 and 500), the mean correlation between the factor mean scores estimated by alignment and by the original CFA exceeded 0.95 (see Cor (mean) in Table 2). Following the criterion proposed by Muthén and Asparouhov, these high correlation coefficients suggest that measurement alignment produced consistent outcomes, in terms of factor loadings and intercepts, across trials.

Table 2. Repetitive simulation test results

Note: Cor (mean): correlation between factor mean scores estimated by measurement alignment and CFA across repetitions. Cor (var): correlation between factor variances estimated by measurement alignment and CFA across repetitions.

For additional information, factor loadings and intercepts per group before and after measurement alignment are reported in the Supplementary materials. Factor loadings and intercepts in each group estimated by multigroup CFA are reported in Tables S6 and S7, respectively; those resulting from measurement alignment are reported in Tables S8 and S9, respectively.

Correlation analysis

The results of the correlation analysis are presented in Table 3, in which CTS and AESS factor scores were estimated with factor loadings and intercepts adjusted through measurement alignment. The same correlation pattern between variables was also found when factor scores estimated without alignment were examined (see Table S10).

Table 3. Correlation between conspiratorial thinking and anti-expert sentiment with trust

Note: Conspiratorial thinking and anti-expert sentiment scores were calculated based on results from measurement alignment. In all cases, p < 0.001 after applying false discovery rate correction.

Discussion

When measurement invariance was tested, although both scales achieved configural invariance, they did not demonstrate metric invariance. Given that scalar invariance is required for multigroup comparison, the two scales cannot be used for such comparison without additional processing. The results of measurement alignment suggest that the process handled the measurement non-invariance satisfactorily for both the CTS and AESS. The majority of the non-invariance in loadings (≥85%) and intercepts (≥99%) across languages was absorbed by adjusting loadings and intercepts. Also, in all cases, fewer than 25% of item parameters demonstrated significant unique parameters. Hence, although scalar invariance was not achieved when the measurement invariance test was initially conducted, we found that both scales can be employed in further international studies once measurement alignment is applied. Furthermore, the repetitive simulation results suggest that measurement alignment was capable of producing consistent outcomes across trials in the present study.

One point to note is that configural invariance was achieved for both scales, so researchers who intend to collect data from one language group can use the scales as long as they do not compare scores across language groups. Given that the presence of configural invariance means the same factor structure is valid across groups [37, 38], using the scales for further analyses within one group is justifiable even without alignment. However, because scalar invariance was not achieved, if international comparison involving multiple languages becomes a goal, then measurement alignment may be required.

The results of the correlation analysis provide additional evidence supporting the validity of the two scales. Both conspiratorial thinking and anti-expert sentiments were significantly and negatively associated with trust in all agents. This finding is consistent with prior research on how conspiratorial thinking and objective vaccine knowledge within the context of COVID-19 (e.g. ‘the government is trying to cover up the link between vaccines and autism’) were associated with trust in science and institutions [6]. The pattern of effects was also consistent with previous literature, with the strongest correlations within institutions and significant correlations across all trust agents [23]. That is, the negative correlation between the CTS and trust in one's national parliament or government is consistent with previous literature indicating that trust in political entities is related to conspiratorial thinking [21]. Likewise, the negative correlations between the AESS and trust in experts (the scientific community and the WHO) are consistent with previous literature [19]. A similar correlation pattern was found when the correlation analysis was performed with factor scores estimated without alignment. This provides additional evidence that the two scales can be used within one language group, even without measurement alignment, when international comparison is not performed.

Conclusions

To summarise, we validated the 4-item CTS and the 3-item AESS across 24 languages and dialects using the COVIDiSTRESSII Global Survey dataset (N = 12 261). Although scalar invariance was not achieved when the measurement invariance test was initially conducted, we found that both scales can be employed in further international studies once measurement alignment is applied. For future studies focusing on only one language group, rather than international comparison, researchers may use the two scales composed in one language for their analyses, since configural invariance was achieved and the measurement model was validated across groups. Moreover, conspiratorial thinking and anti-expert sentiments were significantly correlated with each other and negatively correlated with trust in all agents. As both conspiratorial thinking and anti-expert sentiments have negative implications for political events and public health and safety, having a consistent measure across languages is critical for rapid data collection in the face of an international disaster or public health crisis. The findings from this study provide evidence supporting the validity of both scales across 24 languages, which can thus be used to measure these factors in future large-scale international research during a global health crisis such as the COVID-19 pandemic.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/S0950268822001443.

Acknowledgements

We appreciate the COVIDiSTRESSII Global Survey Consortium members for their contribution to the scale design and data collection.

Financial support

This research received no specific grant from any funding agency, commercial or not-for-profit sectors.

Conflict of interest

None.

Data availability statement

All data files and analysis scripts for this study can be found under this link: https://github.com/hyemin-han/COVIDiSTRESS2_belief_scales.

Footnotes

*

These authors contributed equally to this work.

References

1. Uscinski JE, Klofstad C and Atkinson MD (2016) What drives conspiratorial beliefs? The role of informational cues and predispositions. Political Research Quarterly 69, 57–71.
2. Lewandowsky S, Gignac GE and Oberauer K (2013) The role of conspiracist ideation and worldviews in predicting rejection of science. PLoS One 8, e75637.
3. Motta M (2018) The dynamics and political implications of anti-intellectualism in the United States. American Politics Research 46, 465–498.
4. Duplaga M (2020) The determinants of conspiracy beliefs related to the COVID-19 pandemic in a nationally representative sample of internet users. International Journal of Environmental Research and Public Health 17, 7818.
5. Gauchat G (2012) Politicization of science in the public sphere: a study of public trust in the United States, 1974 to 2010. American Sociological Review 77, 167–187.
6. Milošević Đorđević J et al. (2021) Links between conspiracy beliefs, vaccine knowledge, and trust: anti-vaccine behavior of Serbian adults. Social Science & Medicine 277, 113930.
7. Romer D and Jamieson KH (2020) Conspiracy theories as barriers to controlling the spread of COVID-19 in the U.S. Social Science & Medicine 263, 113356.
8. van Prooijen J-W (2020) An existential threat model of conspiracy theories. European Psychologist 25, 16–25.
9. Jolley D, Mari S and Douglas KM (2020) Consequences of conspiracy theories. In Butter M and Knight P (eds), Routledge Handbook of Conspiracy Theories, 1st Edn. Abingdon, Oxon; New York, NY: Routledge, pp. 231–241.
10. Jolley D and Douglas KM (2014) The social consequences of conspiracism: exposure to conspiracy theories decreases intentions to engage in politics and to reduce one's carbon footprint. British Journal of Psychology 105, 35–56.
11. Moore A (2018) Conspiracies, conspiracy theories and democracy. Political Studies Review 16, 2–12.
12. Rothmund T et al. (2020) Scientific trust, risk assessment, and conspiracy beliefs about COVID-19 – four patterns of consensus and disagreement between scientific experts and the German public. PsyArXiv preprint. doi: 10.31234/osf.io/p36w9.
13. Bruder M and Kunert L (2022) The conspiracy hoax? Testing key hypotheses about the correlates of generic beliefs in conspiracy theories during the COVID-19 pandemic. International Journal of Psychology 57, 43–48.
14. Uscinski JE et al. (2020) Why do people believe COVID-19 conspiracy theories? Harvard Kennedy School Misinformation Review. Published online: 28 April 2020. doi: 10.37016/mr-2020-015.
15. Han H (2022) Trust in the scientific research community predicts intent to comply with COVID-19 prevention measures: an analysis of a large-scale international survey dataset. Epidemiology and Infection 150, e36.
16. Yaqub O et al. (2014) Attitudes to vaccination: a critical review. Social Science & Medicine 112, 1–11.
17. Han H (2022) Testing the validity of the modified vaccine attitude question battery across 22 languages with a large-scale international survey dataset: within the context of COVID-19 vaccination. Human Vaccines & Immunotherapeutics 18, 2024066.
18. Shahsavari S et al. (2020) Conspiracy in the time of corona: automatic detection of emerging COVID-19 conspiracy theories in social media and the news. Journal of Computational Social Science 3, 279–317.
19. Merkley E (2020) Anti-intellectualism, populism, and motivated resistance to expert consensus. Public Opinion Quarterly 84, 24–48.
20. Blackburn AM, Vestergren S and COVIDiSTRESS II Consortium (2022) COVIDiSTRESS diverse dataset on psychological and behavioural outcomes one year into the COVID-19 pandemic. Scientific Data 9, 331.
21. Miller JM, Saunders KL and Farhart CE (2016) Conspiracy endorsement as motivated reasoning: the moderating roles of political knowledge and trust. American Journal of Political Science 60, 824–844.
22. Mari S et al. (2022) Conspiracy theories and institutional trust: examining the role of uncertainty avoidance and active social media use. Political Psychology 43, 277–296.
23. Einstein KL and Glick DM (2015) Do I think BLS data are BS? The consequences of conspiracy theories. Political Behavior 37, 679–701.
24. Brotherton R, French CC and Pickering AD (2013) Measuring belief in conspiracy theories: the generic conspiracist beliefs scale. Frontiers in Psychology 4, 279.
25. Lantian A et al. (2016) Measuring belief in conspiracy theories: validation of a French and English single-item scale. International Review of Social Psychology 29, 1.
26. Stojanov A and Halberstadt J (2019) The conspiracy mentality scale: distinguishing between irrational and rational suspicion. Social Psychology 50, 215–232.
27. Swami V (2012) Social psychological origins of conspiracy theories: the case of the Jewish conspiracy theory in Malaysia. Frontiers in Psychology 3, 280.
28. Putnick DL and Bornstein MH (2016) Measurement invariance conventions and reporting: the state of the art and future directions for psychological research. Developmental Review 41, 71–90.
29. Fischer R and Karl JA (2019) A primer to (cross-cultural) multi-group invariance testing possibilities in R. Frontiers in Psychology 10, 1507.
30. Han H (2021) Exploring the association between compliance with measures to prevent the spread of COVID-19 and big five traits with Bayesian generalized linear model. Personality and Individual Differences 176, 110787.
31. Rachev NR et al. (2021) Replicating the disease framing problem during the 2020 COVID-19 pandemic: a study of stress, worry, trust, and choice under risk. PLoS One 16, e0257151.
32. Lieberoth A et al. (2021) Stress and worry in the 2020 coronavirus pandemic: relationships to trust and compliance with preventive measures across 48 countries. Royal Society Open Science 8, 200589.
33. Rosseel Y (2012) lavaan: an R package for structural equation modeling. Journal of Statistical Software 48. doi: 10.18637/jss.v048.i02.
34. DiStefano C, Zhu M and Mîndrilă D (2009) Understanding and using factor scores: considerations for the applied researcher. Practical Assessment, Research, and Evaluation 14, 20.
35. Robitzsh A (2021) Package ‘sirt’. (Accessed 27 September 2021).
36. Asparouhov T and Muthén B (2014) Multiple-group factor analysis alignment. Structural Equation Modeling: A Multidisciplinary Journal 21, 495–508.
37. Vandenberg RJ and Lance CE (2000) A review and synthesis of the measurement invariance literature: suggestions, practices, and recommendations for organizational research. Organizational Research Methods 3, 4–70.
38. Xu H and Tracey TJG (2017) Use of multi-group confirmatory factor analysis in examining measurement invariance in counseling psychology research. The European Journal of Counselling Psychology 6, 75–82.
39. Muthén B and Asparouhov T (2018) Recent methods for the study of measurement invariance with many groups: alignment and random effects. Sociological Methods & Research 47, 637–664.
40. Han H et al. (2022) Exploring the association between character strengths and moral functioning. Ethics & Behavior. doi: 10.1080/10508422.2022.2063867.