“Threat perception” is frequently deployed as a causal variable in theories of international relations (IR) and foreign policy decision making.Footnote 1 Both rationalist and behavioral explanations of war onset,Footnote 2 crisis behavior,Footnote 3 coercion,Footnote 4 and alliance formationFootnote 5 assign “threat perception” a causal role. The term has appeared in approximately one thousand titles and abstracts of journal articles, books, and chaptersFootnote 6 since Robert Jervis highlighted the concept in Perception and Misperception in International Politics,Footnote 7 a work that inspired a body of scholarship devoted to integrating the study of human cognition into IR.Footnote 8
Yet the ubiquity of “threat perception” in the literature may owe more to ambiguity in its meaning than to consensus on its importance. The term suffers from two related problems. The first is inconsistent conceptualization of both “threat” and “perception.” The definition of “threat” shifts across situations—such as crisis bargaining,Footnote 9 the security dilemma,Footnote 10 and alliance formationFootnote 11—and across units of analysis, including leaders,Footnote 12 states,Footnote 13 and citizens.Footnote 14 Some scholarship constrains “threats” to the observable (for example, military capabilities or coercive communiqués), while other scholarship considers the ephemeral (for example, hostile intentions or environmental uncertainty). Some scholarship allows for intrinsic variation in how and whether threats are perceived, while other scholarship relies on a consistent correspondence between the existence of a threat and its apprehension. Ultimately, “threat perception” is a background concept with a “broad constellation of meanings and understandings.”Footnote 15
A second, related problem within IR theory is the reliance on untested psychological assumptions to link “threat perception” to outcomes of interest. Does the threat of existential harm consistently trump other considerations in decision making, as realist theory assumes?Footnote 16 Do people think about the potential for harmful outcomes of all kinds (for example, battle deaths or territorial losses) as if they were tallying potential monetary costs, as a number of rational-choice models require?Footnote 17 Do people assume that those who issue threats have inscrutable intentions that can only be inferred by estimating their strategic interestsFootnote 18 or their personal characteristics?Footnote 19 Each of these assumptions constitutes a link in a theoretical chain between “threat perception” and an outcome of interest. Yet rarely, if ever, are these links tested.
The combined effect of conceptual ambiguity and untested assumptions is that little is known about what “threat perception” is or how it works to influence outcomes in IR. In this article, I argue that both problems can be addressed by integrating the brain into the conceptual and empirical microfoundations of the study of threat perception in IR.Footnote 20 I make the argument for taking the brain into account in two stages. First, I carry out a conceptual ground-clearing exercise to demonstrate that the proliferation of customized definitions of “threat” and “perception” in the IR literature is unnecessary. The plain-language meaning of “perception” entails a role for the brain. Combining this with the plain-language definitions of “threat” yields two generalizable, interpretable, systematized concepts:Footnote 21 the brain's apprehension of any potential danger, subjectively defined (threat-as-danger perception); and the brain's detection of a socially communicated statement indicating the conditional intention to harm (threat-as-signal perception).Footnote 22 Most existing customized definitions of “threat perception” can fit within one of these two concepts.
My proposed disambiguation is not entirely novel. David Baldwin called for a similar distinction in the service of developing a “science of threat systems.”Footnote 23 But in the second stage of my argument, I show how explicitly acknowledging that perception is a brain-level phenomenon creates new opportunities to test assumptions within IR theories about how threat-as-danger perception and threat-as-signal perception work. These opportunities arise because both plain-language conceptualizations of “threat perception” are also topics of study in cognitive science.Footnote 24 This correspondence opens up new sources of data for IR scholars because the neuroscience literature contains a great deal of brain-level data on threat perception, in both senses of the term. Using two empirical examples, I demonstrate how these data can be used to test key assumptions of the IR literature. I also show how brain-level data can generate new insights and aid in developing new theories of threat perception's role in IR that rest on firmer microfoundations.
This article joins a small body of work linking neuroscientific evidence to IR theory.Footnote 25 Instead of relying on findings from single neuroimaging studies, as previous work has had to do, I introduce a new type of brain-level data: coordinate-based meta-analyses (CBMAs). CBMAs are one way to analyze large-scale neuroimaging data (that is, data from many individual neuroimaging studies). CBMAs did not exist when IR scholars initially explored the possibility of integrating neuroscience into the study of IR,Footnote 26 and I argue they are well suited to address some of the concerns raised about the viability of that project.
To demonstrate the utility of large-scale neuroimaging data, I analyze fifteen previously published, peer-reviewed CBMAs that represent the data collected by over 500 neuroimaging studies from more than 11,000 subjects. I use the data from these CBMAs to test two psychological assumptions found in theories in the IR literature that posit a causal role for “threat perception.” I first test an assumption found within theories of conflict decision making about how people reason when confronted with threats-as-dangers (such as adversarial states, rebel groups, or hostile leaders). Specifically, I consider a proposition common in rational-choice models: that people think about the harms associated with prospective conflict (for example, battle deaths or territorial losses) as if they were economic costs. Using statistical comparisons of patterns of brain activity across relevant tasks from 126 unique studies, I find no support for this proposition. Behavioral evidence further suggests that relying on this simplifying assumption risks interpreting rational, but complex, choices as irrational or as mistakes. Instead of a cost- or value-based approach, findings in the cognitive science literature suggest that heuristic models of harm evaluation may better explain conflict-related decision making.
The second assumption, common to both rationalist and behavioral theories of coercion, is that people who receive threats-as-signals treat the intentions of the issuer as fundamentally inscrutable. From this perspective, people forgo the futile task of “mind-reading” to determine whether or not the issuer will act on their threat and instead reason about the issuer's strategic interests (such as domestic audience costs or reputational concerns) or their characteristics (such as a “madman” personality or a fearful emotional state). Using data from 392 unique studies, I find that neither of these theorized workarounds engages the same brain-level architecture as directly reasoning about those who intend to do harm. Further, activation of the brain-level architecture engaged by reasoning about those who intend to do harm (particularly the amygdalae) is associated with distorted perceptions of harm magnitude and a heightened desire for blame and punishment. These perceptions and preferences could help explain documented patterns of coercion failure,Footnote 27 since they are associated with resistance and retaliation, rather than capitulation. These same brain-level mechanisms also suggest why costly signals that make threats more credible may be associated with coercion failure. Finally, a brain-level account of threats-as-signals suggests that misperception of intended harm is a feature of the brain's social cognitive systems, not a bug to be corrected by better information. Thus, while the process of intention inference is challenging to observe directly, assuming it away may limit our ability to understand the consequences of coercion, and coercion failure in particular.
This article makes several contributions to the literature. As Joshua Kertzer has shown, many IR theories rest on psychological assumptions that are rarely tested.Footnote 28 A primary contribution of this article is to conduct two such tests within the literatures on conflict decision making and on coercion. I show how this testing can be conducted with brain-level data and how those data can provide evidence for theory building in addition to theory testing. As a second contribution, I introduce a new (low-cost) type of brain-level data to the study of IR: CBMAs. This type of data can help answer the call for better integration of neuroscientific evidence into IR,Footnote 29 which has been hindered, in part, by the prohibitive cost of original data collection. The utility of CBMAs is not limited to the domain of threat perception. Much neuroimaging data related to decision making under risk and uncertaintyFootnote 30 and intergroup relationsFootnote 31 has already been meta-analyzed and could be leveraged by IR scholars.Footnote 32 Finally, this article provides conceptual clarity. “Threat perception” appears in many IR theories, and there is a sense that it matters. Raymond Cohen went so far as to call threat perception “the decisive intervening variable between action and reaction in international crisis.”Footnote 33 But advancing our understanding of conflict initiation, alliances, crises, coercion, and other contexts where “threat perception” might matter requires a clearer understanding of what it is and how it works.
The article proceeds in four sections. First, I conduct a conceptual ground-clearing exercise to show why taking the brain into account by reverting to plain-language definitions of “perception” and “threat” has value. I also highlight why—from a brain-level perspective—threats-as-dangers and threats-as-signals are distinct concepts. Second, I show how large-scale neuroimaging data represented by CBMAs can offer microfoundational evidence with which to test the psychological assumptions in the IR literature. In the third section, I analyze fifteen CBMAs to perform two such tests. The final section considers the implications of a closer connection between neuroscience and IR.
Threat Perception in International Relations
“Threat perception” means different things to different scholars of IR. David Baldwin identified the crux of the problem when he asked, “Precisely what is the phrase ‘A threatens B’ supposed to tell us?”Footnote 34 Baldwin showed that IR scholars use “threat” in at least two different ways. For one school, which Baldwin associated with game theorists, “threat … is an undertaking by A intended to change B's future behavior.”Footnote 35 For another school, which he associated with social psychology, “threat refers to [B's] anticipation of harm,” such that “B may be threatened by A regardless of what A is doing; B may even be threatened by A when A does not exist.”Footnote 36 Baldwin argued that scholars of these two types of threat “are simply not referring to the same thing when they say ‘A threatens B’.”Footnote 37
An Abundance of Conceptualizations
The problem of conceptual confusion Baldwin identified has only grown. IR scholars now variously define “threats” as a set of material properties, such as offensive military capabilities;Footnote 38 informational content, such as coercive communiqués;Footnote 39 intended bad outcomes, such as defection in the prisoner's dilemma;Footnote 40 unintentional vulnerabilities, such as environmental uncertainty;Footnote 41 and holistic impressions, such as enemy “images.”Footnote 42 Some of these definitions (for example, coercive communiqués) follow plain-language use, but most are customized to the study of IR and to specific use cases. Only by comparing definitions is it possible to see whether two scholars are talking about the same thing when they talk about “threats.”
How scholars link “threats” to “perception” adds a layer of ambiguity. From a mechanistic perspective, if a material danger exists (such as an enemy's tanks) then it will be perceived, though perhaps with some systemic uncertainty. This mechanistic perspective is most obvious in the survival-oriented logic of realism.Footnote 43 Other scholarship allows for individual-level variation in whether “threats” are “perceived.” Some argue that features of the individual perceiver (B in Baldwin's formulation) matter most. These include the perceiver's disposition,Footnote 44 emotional state,Footnote 45 or personal experiences.Footnote 46 When B is a state or collective, its own values may structure what is (or is not) perceived as threatening.Footnote 47 For other scholarship, features of the target (A in Baldwin's formulation) explain perception. These include the content of holistic “images,”Footnote 48 an attachment to democratic norms,Footnote 49 or A's estimated capabilities and intentions.Footnote 50 Finally, there is the surrounding context. For those emphasizing the perils of an anarchic world, for example, existential risk,Footnote 51 the distribution of power in the international system,Footnote 52 or the level of general uncertainty in the environmentFootnote 53 determine whether threats are perceived. These varied notions of “perception” only make it more difficult to pin down what is known about “threat perception” in IR.
Hindering knowledge accumulation further are the many untested psychological assumptions that underpin theories of how “threat perception” (however defined) affects outcomes. Is one kind of danger more important than all others, serving as a reliable, driving force for human behavior? Realist theory posits that the avoidance of existential harm motivates people above all other considerations,Footnote 54 but, observably, considerations of nonmaterial threats (for example, spiritual concerns) can dictate behavior that runs counter to physical harm avoidance.Footnote 55 Can the decision to initiate war be reduced to the net expected value of war's gains set against its harmful consequences? Rational-choice theories assume people can tally the losses of both “blood” and “treasure” using a common scale,Footnote 56 but the value of abstract lives often defies a consistent calculus.Footnote 57 Assumptions like these reflect IR scholars’ models of how the mind thinks about threats. The literature contains more models and assumptions than I can list. Testing them is challenging. But leaving assumptions about how threat perception works untested means that IR theories rest on fundamentally uncertain microfoundations.
Acknowledging the Brain and Returning to Plain Language
Without a clear understanding of what threat perception is, it is impossible to improve our theoretical models of how it works. What seems to have been lost in the conceptual proliferation of “threat perception” over time is that useful definitions of both “threat” and “perception” already exist in plain language. In plain language, “perception” means becoming aware of something through the senses and coming to understand it.Footnote 58 The integration and comprehension of sensory input occur in the brain. There is no escaping this at any level of analysis or aggregation. Institutions, groups, and systemic structures might affect the sensory input (for example, information a person is exposed to), but they do not negate the brain's role as the site where integration and comprehension occur.
Of what does the brain become aware? Sticking with plain language, “threat” has two meanings (in English) aligned with Baldwin's distinction: anything subjectively apprehended as dangerous or potentially damaging (threat-as-danger); or a statement of a conditional intention to do harm (threat-as-signal).Footnote 59 Combining these terms partitions “threat perception” into two distinct concepts: “threat-as-danger perception” and “threat-as-signal perception.” Both take place in the brain.
While it may seem that threat-as-danger perception is a macro-concept that simply subsumes the detection of threats-as-signals, a brain-level perspective highlights why the two concepts are distinct. As Baldwin noted, threats-as-dangers reside in the mind of the beholder and can include dangers with agency (for example, a rival leader), dangers without agency (for example, a deadly virus), and even dangers that do not exist (for example, a fire-breathing dragon). The mental exercises involved in thinking about threats-as-dangers do not inherently include thinking about other people. This can be because the danger in question is not a person (the deadly virus) or because the exercise does not require doing so (as with estimating the effect on trade of war with a rival state). But threats-as-signals are by definition social communications, even when they are misperceived. Reasoning about social signals requires reasoning about the content of another person's mind, which engages the brain's architecture for social cognition.Footnote 60
Because reasoning about other people is enabled by specific brain-level architecture, threat-as-signal perception constitutes a distinct collection of mental exercises.Footnote 61 Preserving this distinction is particularly important in the study of coercion, where communication (implicit or explicit) between two or more actors is a focal point of inquiry.
Constraining “threat perception” to mean either “threat-as-danger perception” or “threat-as-signal perception” in lieu of customized definitions has several advantages. First, these two concepts can accommodate most definitions used in the literature, though some scholarly usage spans both concepts.Footnote 62 Second, these two definitions of “threat perception” are interpretable to other social scientists and to policy-makers, because they align with the intuitions of nonspecialists. Third, the terminology is independent of any particular theory or use case, which enables consistency across scholarship. Finally, both systematized concepts of threat perception are compatible with other disciplines, including cognitive science.Footnote 63 This conceptual compatibility opens up new sources of brain-level data for the study of threat perception in IR. These data can be used to test theoretical models and to illuminate aspects of threat perception that other theories miss. Before I illustrate these points through two empirical examples, I introduce the analysis of large-scale neuroimaging data as a new source of microfoundational evidence for theory development and testing in IR.
Brain-Level Microfoundations and Large-Scale Neuroimaging Data
By specifying that perception is a process that occurs in the brain, I have made an explicit connection between two constructs relevant to IR theory (threat-as-danger perception and threat-as-signal perception) and brain activity. Scholars have long seen the value in linking brain data to the study of political science generallyFootnote 64 and IR in particular.Footnote 65 To date, work seeking to bridge neuroscience and IR has relied on insights from a handful of neuroscientific studies to inform theory building.Footnote 66 Here, I introduce a relatively new type of neuroscientific data that can be used for theory testing as well as theory building: large-scale neuroimaging data summarized by CBMAs. In this section, I connect the dots between functional neuroimaging, CBMAs, and how assumptions in IR theory can be tested.
Spatial Patterns of Brain Function as Data
Neuroscience encompasses the study of brain structure, function, and connectivity at various levels of granularity. For the purposes of this article, I focus on brain function, which refers to the connection between neuronal activity and mental or physical outputs (for example, cognition or behavior).Footnote 67 Functional neuroimaging data result from recording the brain's response to particular stimuli or during performance of a specific task.Footnote 68 When these functional data are collected over time using magnetic resonance imaging (MRI), the data are often summarized into a three-dimensional representation of average brain activity and stored as a single image (for a deeper discussion, see the online supplement).
The 3D images generated by functional MRI (fMRI) contain useful information because the brain is spatially organized. This means that collections of neurons in specific locations play consistent (replicable) roles in brain function, including responses to stimuli and other forms of cognition.Footnote 69 Most complex stimuli (such as watching people interact) and tasks (such as playing a strategic game) require the involvement of multiple brain regions that are spatially distributed.Footnote 70 When this spatial distribution of neural activity is consistent across people, it can be treated as a pattern.Footnote 71 Because brains are always “on,” these patterns are often expressed as a contrast, which is a comparison between the brain's activity while dealing with the stimuli of interest and its activity during a control condition or at rest (for example, feeling pain, contrasted with not feeling pain). This method cancels out much of the brain's background activity and isolates the effect of interest (that is, the brain's response to pain). The 3D images of contrasts are the basis of the popular notion that the brain “lights up” in response to certain stimuli. Figure A1 in the online supplement provides an example of this kind of canonical brain image, where brightly colored clusters indicate locations where there is a positive and significant difference in the brain's response between two experimental conditions (expressed as pain > no pain).
Contrast maps are a kind of basic data frequently produced (and published) as part of traditional neuroimaging studies. In the online supplement, I provide more detail on how contrast maps are calculated. Here, I will just note two aspects of these data that are important for what follows. First, each contrast image is made up of three-dimensional pixels, known as voxels. As with a pixel in a 2D image, each voxel has a location within the image (expressed in x, y, and z coordinates) and a value, which is often the result of a statistical test,Footnote 72 such as the t-statistics captured in Figure A1. Thus, the information shown in a contrast image resides in the values and spatial arrangement of statistically significant voxels. Second, because large collections of neurons are required for many cognitive functions, the activity associated with a particular contrast is captured by clusters of statistically significant voxels. The co-activation of these clusters constitutes the brain activity pattern associated with a given task.
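To make this data structure concrete, consider a minimal sketch in Python using the standard nibabel and SciPy libraries. The file name is hypothetical, and the threshold is purely illustrative; real studies derive thresholds from multiple-comparison corrections.

```python
import nibabel as nib   # standard library for reading NIfTI neuroimages
import numpy as np
from scipy import ndimage

# Load a published contrast map (the file name is hypothetical).
img = nib.load("pain_gt_no_pain_tmap.nii.gz")
tmap = img.get_fdata()  # 3D array holding one t-statistic per voxel

# Keep voxels whose t-statistic clears a threshold (illustrative value;
# real studies correct for multiple comparisons).
significant = tmap > 3.1

# Group contiguous significant voxels into clusters.
labels, n_clusters = ndimage.label(significant)
print(f"{int(significant.sum())} significant voxels in {n_clusters} clusters")

# A voxel's (i, j, k) array index maps to brain-space (x, y, z) millimeter
# coordinates through the image's affine transform.
ijk = np.argwhere(significant)[0]
xyz = nib.affines.apply_affine(img.affine, ijk)
print(f"voxel {tuple(ijk)} lies at {np.round(xyz, 1)} mm")
```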
A standard fMRI contrast map is simply a 3D image capturing a common pattern of activation across the brain for participants in a study. From that pattern, it is possible to identify the brain areas involved in the cognitive process of interest (for example, experiencing pain) using the image's coordinate system. Two images within the same coordinate system can be statistically compared for similarity in their cluster-level patterns and their voxel-level patterns. Weak (cluster-level) similarity implies involvement of the same brain areas, and thus possibly similar brain functions.Footnote 73 Strong (voxel-level) similarity implies that the same populations of neurons are involved, suggesting fundamentally similar brain-level representations.Footnote 74 I discuss how both types of similarity can be used for testing assumptions about how threat perception works, but first I introduce the concept of large-scale neuroimaging data analysis.
Coordinate-Based Meta-Analyses
A single neuroimaging study produces a single contrast map for a given mental state of interest, representing average effects for the study's participants (for example, pain > no pain, as in supplementary Figure A1). Interpreting a single neuroimaging study requires the same caveats as interpreting results produced by other stand-alone experiments. A single neuroimaging study usually has between fifteen and fifty subjects, and these small sample sizes can raise concerns about both statistical robustness and generalizability.Footnote 75 While social scientists have turned to replication as a way to address similar concerns,Footnote 76 neuroimaging is an expensive method of data collection, which makes replication for its own sake unlikely.Footnote 77
Neuroscientists have instead turned to data pooling as a means of establishing reliable and general patterns of activity across studies. Peer-reviewed, statistically driven CBMAs have become an increasingly common way to pool and report these results.Footnote 78 Where social scientists use meta-analyses to validate effect sizes, neuroscientists use CBMAs to validate effect sizes and locations across studies.Footnote 79 These effect- and location-based tests result in concordance maps, which are contrast maps indicating clusters where research shows some statistical consensus. CBMA concordance maps (hereafter, CBMA maps) are thus a form of aggregated large-scale neuroimaging data that summarizes the field's statistical consensus on the patterns of activity associated with a particular task (such as playing strategic games) or stimulus (such as experiencing pain) while masking one-off results.Footnote 80 This type of data is thus better suited to support general claims about the brain's responses than any single study with conventional sample sizes.
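The logic of pooling can be sketched schematically. The snippet below is a toy illustration in the spirit of activation likelihood estimation (ALE), a common CBMA algorithm: each study contributes only its reported peak coordinates, each peak is smoothed with a Gaussian kernel to reflect spatial uncertainty, and the per-study maps are combined so that high values mark locations where studies agree. All coordinates and parameters are invented, and real CBMAs derive thresholds from permutation-based null distributions rather than the fixed cutoff used here.

```python
import numpy as np
from scipy import ndimage

GRID = (20, 20, 20)  # toy brain grid; real maps are roughly 91 x 109 x 91 voxels

def study_map(peaks, sigma=1.5):
    """Smooth one study's reported peak coordinates into a modeled activation map."""
    m = np.zeros(GRID)
    for p in peaks:
        m[tuple(p)] = 1.0
    m = ndimage.gaussian_filter(m, sigma)  # spatial uncertainty around each peak
    return m / m.max()                     # rough per-voxel activation probability

# Hypothetical peak coordinates reported by three studies of the same task.
studies = [
    study_map([(5, 5, 5), (12, 8, 6)]),
    study_map([(6, 5, 5)]),
    study_map([(5, 6, 4), (15, 15, 15)]),
]

# Combine: probability that at least one study's modeled activation covers a voxel.
ale = 1.0 - np.prod([1.0 - s for s in studies], axis=0)

# Voxels with strong combined evidence form the concordance map (real CBMAs
# set this threshold from a permutation-based null distribution).
concordance = ale > 0.5
print(f"{int(concordance.sum())} concordant voxels")
```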
It is important to acknowledge that CBMAs cannot fix fundamental flaws in research design or analysis. “Garbage in, garbage out” still applies.Footnote 81 Researchers conducting meta-analyses also make several choices, beyond which studies to include, that affect a CBMA's stringency. I provide greater detail on CBMA construction and evaluation in the online supplement.
Conditional on reasonable research practices, then, CBMAs can address several concerns about generalizability that have limited the lessons IR scholars can draw from neuroimaging research. In Naoki Egami and Erin Hartman's terms, CBMAs improve the ability to generalize from the sample (sample validity), from the treatment (treatment validity), and sometimes from the outcome measure (outcome validity).Footnote 82 Sample validity improves because CBMAs derive results from data collected at different research sites, sometimes in multiple languages, increasing the geographic and cultural diversity of the subject pool. Age and socioeconomic status diversity are likely to be lower in neuroimaging studies than in survey research, however.Footnote 83 Treatment validity improves when the same effects are found using multiple implementations.Footnote 84 Most CBMAs aggregate studies with a variety of stimuli presented in different modalities (such as video, audio, and text), which means that concordance maps reflect a variety of treatment implementations. Outcome validity can improve for the same reason, though some CBMAs consider only a single outcome by design.
Limitations
The main limitations of CBMAs for IR scholarship are limitations shared by single-study neuroscience. Neuroimaging studies rarely have access to leaders or elites, who are often the targets of IR theories. Kertzer demonstrated via a meta-analysis that elites and members of the public respond similarly to the vast majority of manipulations in the IR experiments sampled, which included 162 paired treatments.Footnote 85 Nevertheless, a concern with elite/non-elite comparisons from a neuroscience perspective is that elites might “think differently” about relevant tasks than non-elites do, even if observable outcomes are similar. That is, the neural architecture used by elites might be different and so would not be represented in the neuroscientific literature.
To my knowledge, no studies have directly investigated whether elite/non-elite or leader/non-leader differences exist in any functional neuroimaging task. A few studies have considered the differences between experts and non-experts, however. In certain contexts, such as chess and clinical diagnoses, years of training do appear to alter the areas of the brain that process domain-relevant information for challenging problems (“thinking differently”).Footnote 86 Yet, in the case of financial decision making, expertise was correlated with differences in the magnitude of neuronal activation and decision speed (“thinking faster”), but not with different patterns of activation or different choice behavior (“thinking differently”).Footnote 87
In the domain of threat perception, there is no evidence of chess-like “thinking differently” among those who are more sensitive to threats—such as highly anxious individuals or those who have been in combat—but there is evidence of “thinking faster.”Footnote 88 This consistency is not surprising because the brain-level architecture for dealing with dangers has been largely conserved in evolutionary terms between humans and other mammals.Footnote 89 So it seems reasonable to provisionally extend the findings of threat-related neuroimaging studies to elite decision makers, but further research would be helpful for validation.
A second limitation of CBMA (and single-study) neuroimaging data is the context in which they are collected. Context validity is a challenge for experimentation in general,Footnote 90 but neuroimaging research takes place in a setting far removed from day-to-day life. Yet neuroscientists have started to demonstrate that how subjects think during fMRI tasks reflects how broader populations think when confronted with the same stimuli in the real world. For example, the “neural focus group” method pioneered by Emily Falk and colleagues demonstrates that it is possible to use patterns of neural response from small samples in neuroimaging studies to predict population-level behavioral responses to media stimuli (such as television ads and newspaper articles).Footnote 91 Thus, neuroimaging's unique data-collection context need not automatically invalidate the real-world applicability of its findings.
Testing Psychological Assumptions
A CBMA map condenses the neuroimaging literature's statistical consensus on how the brain responds during a particular mental exercise into a single 3D image (see Panel A of Figure 1 for two illustrations). When a theoretical assumption about how threat perception works in IR theory can be reframed as positing the equivalence of two mental exercises—for example, thinking about physical harm is like thinking about an economic cost; or, reasoning about others’ intentions is like reasoning about their strategic interests—then the assumption can be translated into a testable hypothesis. Specifically, the hypothesis is that the patterns of brain response associated with each mental exercise should be similar. The reason to use CBMAs instead of single studies for this comparison is that IR theories posit general models of how threat perception works, and only CBMAs offer sufficiently generalizable brain-level findings.
Similarity Analyses
To carry out tests of similarity, I use both cluster-level and voxel-level information from CBMA images. As a weak test, the functional similarity of two mental exercises compares the distribution of clusters of activity across major areas of the brain. Functional similarity does not require that two mental exercises activate the exact same voxels. Rather, it measures the extent to which two mental exercises activate a similar pattern of brain areas. I use Alejandro de la Vega and colleagues’ definition of these major brain areas as the basis for assessing functional similarity.Footnote 92 Following Huixin Tan and colleagues, I calculate the distributions of active voxels across the major brain areas and estimate functional similarity between two distributional vectors using Spearman's rho (ρ).Footnote 93
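In code, this weak test reduces to counting active voxels per brain area and rank-correlating the two resulting profiles. The sketch below assumes two binary CBMA maps and an integer-labeled atlas on the same grid; the random arrays are stand-ins for real images, and the per-area proportion is one plausible operationalization of the distributional vectors described above.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
atlas = rng.integers(1, 31, size=(20, 20, 20))  # stand-in atlas: 30 labeled areas
map_a = rng.random((20, 20, 20)) > 0.9          # stand-in binary CBMA maps
map_b = rng.random((20, 20, 20)) > 0.9

def functional_profile(binary_map, atlas, n_areas=30):
    """Share of each atlas area's voxels that are active in the CBMA map."""
    return np.array([binary_map[atlas == k].mean() for k in range(1, n_areas + 1)])

# Weak test: rank correlation between the two area-level activation profiles.
rho, p = spearmanr(functional_profile(map_a, atlas), functional_profile(map_b, atlas))
print(f"functional similarity: rho = {rho:.2f}, p = {p:.3f}")
```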
As a stronger test, I also calculate voxel-level similarity within certain brain areas. This representational similarity quantifies the extent to which two mental exercises rely on the same voxels and thus potentially share some neuronal architecture.Footnote 94 While two mental exercises might engage the same brain area, representing thousands of voxels and millions of neurons, they may not actually activate the same voxels/neurons. Functional similarity may thus overstate the extent to which two mental exercises have the same brain-level microfoundations. To calculate representational similarity, I compare the binary patterns of activation for all the voxels in a given brain area using Pearson's phi (ϕ) as the measure of similarity. I account for the number of tests run on each pair of CBMAs (one test for each shared area of activity) using a simple Bonferroni correction.Footnote 95
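The strong test is equally compact, because for two binary vectors Pearson's ϕ is numerically identical to the ordinary Pearson correlation of the 0/1 values. A minimal sketch, again with stand-in arrays:

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
atlas = rng.integers(1, 31, size=(20, 20, 20))  # stand-in atlas and binary maps
map_a = rng.random((20, 20, 20)) > 0.9
map_b = rng.random((20, 20, 20)) > 0.9

# Only areas where both maps show some activity enter the test.
shared = [k for k in range(1, 31)
          if map_a[atlas == k].any() and map_b[atlas == k].any()]

for k in shared:
    in_area = atlas == k
    a = map_a[in_area].astype(int)       # binary voxel pattern of map A in area k
    b = map_b[in_area].astype(int)
    phi, p = pearsonr(a, b)              # phi coefficient for two binary vectors
    p_adj = min(1.0, p * len(shared))    # simple Bonferroni correction
    print(f"area {k:2d}: phi = {phi:+.2f}, adjusted p = {p_adj:.3f}")
```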
Data Sets
In this article, I rely on the output of fifteen previously published, peer-reviewed CBMAs from the neuroimaging literature. Table 1 gives the full list of publications and the meta-analyses used as data. I also reference some single-study findings where no meta-analytic equivalent yet exists.
[Table 1. Coordinate-based meta-analyses used as data. Note: * Number of subjects is imputed.]
As the table indicates, many published meta-analytic papers contain more than one CBMA. Comparison of CBMAs is an increasingly common means of summarizing meta-analytic knowledge within neuroscience.Footnote 96 Most comparisons conducted by the original authors are not directly related to my questions of interest, however, which is why I conduct my own analyses of their data. Additional information about the meta-analyses is provided in Table S1 in the online supplement. Throughout the text, I provide links to the raw CBMA data stored in the freely available archive, Neurovault.org, at <https://neurovault.org/>.
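Because the maps are public, the analyses reported below can be reproduced from the archived data. One convenient route is nilearn's NeuroVault downloader; the image ID in this sketch is a placeholder rather than one of the maps analyzed here.

```python
from nilearn.datasets import fetch_neurovault_ids
from nilearn import image

# Download a statistical map by its NeuroVault image ID (placeholder ID;
# substitute the IDs linked throughout the text).
data = fetch_neurovault_ids(image_ids=[12345])
img = image.load_img(data.images[0])
print(img.shape)  # dimensions of the downloaded statistical map
```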
While each meta-analysis captures findings from a variety of tasks, many are analogous to the mental exercises described in the experimental and theoretical IR literature. The most direct analogues are the strategic gameplay CBMA, which comprises neuroimaging data collected while subjects play the prisoner's dilemma, the ultimatum game, chicken, or other adversarial games, and the visceral simulation CBMA, which asks participants to engage in the processes posited by some IR scholars to play a role in intention inference.Footnote 97 The least comparable set of tasks, relative to the IR experimental literature, is found in the physical harm CBMA. These tasks capture the response to genuine pain, a more direct test of IR's long-standing interest in both the threat and the experience of painFootnote 98 than the experiences IR scholars have generally been able to induce.Footnote 99 The other meta-analyses have tasks that are analogous to processes theorized in IR (for example, experiencing financial losses or reasoning about out-group members) but largely without explicit political overtones. For a deeper discussion of tasks included in the CBMAs, see the online supplement.
Applications
I next consider two assumptions from the IR literature about how threat-as-danger perception and threat-as-signal perception work. I first review the literature that leverages each assumption—conflict decision making in the first case and coercion in the second. I then translate each assumption into an expectation for patterns of similarity in brain-level responses. I next test these expectations empirically, using measures of functional (weak) and representational (strong) similarity. Finally, I discuss how these brain-level analyses affect our understanding of behavior and the implications for the study of threat-as-danger perception and threat-as-signal perception in IR.
Assumption 1: Thinking About Harms as Costs
Decisions surrounding violent conflict are some of the most scrutinized in the study of IR.Footnote 100 The perception of threats-as-dangers is considered an integral component of decisions to initiate conflictFootnote 101 and to form alliances.Footnote 102 Threats-as-dangers in this context could be adversarial states, rebel groups, hostile ideologies, or particular leaders, but in all cases, decision makers must evaluate the harms they could inflict. At minimum, two kinds of harm are relevant: losses of “blood” (lives) and losses of “treasure” (money). In the context of alliances, deciding whether to balance against, or bandwagon with, a powerful neighbor requires comparing losses of “blood” and “treasure” during a potential conflict to losses of political autonomy as well. Given the nature of these conflict-related decisions, it is unsurprising that scholars have assumed people have a way of evaluating a variety of harmful outcomes in a common space.
The language of “costs” is often used to characterize this comparison space.Footnote 103 This linguistic choice is reflected in the representation of harmful decision outcomes by either a quasi-numeric value (such as –10 in the double-defection cell in the prisoner's dilemma), an actual purported quantity (such as the monetary expenditure of equipping troops), or a variable that has ordinal properties, if not an exact value (such as c in a bargaining model of war). The core psychological assumption behind these implementations is that people reason about the nonmonetary harms associated with conflict, such as battle deaths or lost territory, as if they could be summarized in a single numeric (or quasi-numeric) value.
Assuming an equivalence between thinking about harms and thinking about costs is useful from a theoretical standpoint. Rational-choice models minimally require preferences for outcomes to be connected (that is, comparable) and transitive (that is, if A is preferred to B, and B to C, then A is preferred to C). When harmful outcomes can be summarized as a simple, one-dimensional quantity, like a cost, it is straightforward to maintain these two requirements.Footnote 104 Yet, constructs that are inherently subjective, such as privacy or well-being, are complex in that they are evaluated along multiple, sometimes incommensurable dimensions.Footnote 105 In these cases, researchers have found that preferences do not always maintain transitivity, though they may be comparable.Footnote 106 That is, the complexity of a concept has implications for how we think about it during decision making and for how we behave when we make choices.
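The transitivity requirement can be stated precisely enough to check mechanically: for every triple of outcomes, if A is preferred to B and B to C, then A must be preferred to C. The sketch below tests an observed set of pairwise choices for violations; the preference data are invented for illustration.

```python
from itertools import permutations

# Observed pairwise preferences (invented): prefers[(x, y)] is True if x was chosen over y.
prefers = {
    ("low casualties", "low cost"): True,
    ("low cost", "autonomy"): True,
    ("low casualties", "autonomy"): False,  # the intransitive cycle
}

def transitivity_violations(prefers):
    """Return triples (a, b, c) with a > b and b > c but not a > c."""
    outcomes = {x for pair in prefers for x in pair}
    return [(a, b, c) for a, b, c in permutations(outcomes, 3)
            if prefers.get((a, b)) and prefers.get((b, c))
            and prefers.get((a, c)) is False]

print(transitivity_violations(prefers))
# [('low casualties', 'low cost', 'autonomy')]
```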
If the brain simplifies physical and other nonmonetary harms such that they are processed similarly to monetary losses, we could lean on the one-dimensional perspective favored by rational-choice frameworks. If, however, other types of harm are processed substantially differently and with greater brain-level complexity, we need to ask how decision making involving harm should be modeled.
Comparing the mental exercises of thinking about physical harm (“blood”) and thinking about monetary losses (“treasure”) offers the most direct test of harm–cost equivalence. Tan and colleagues conduct two meta-analyses to directly compare how the brain represents physical harm (30 studies, 615 subjects) and economic losses (20 studies, 469 subjects).Footnote 107 I use the corresponding CBMA maps to calculate the functional (weak) and representational (strong) similarity of these two mental exercises.Footnote 108 Figure 1 illustrates that these two exercises are not especially similar. Panel A visualizes the two CBMA maps and the distribution of significant clusters throughout the brain. The response to physical pain is more widely distributed than the response to economic losses, comprises almost three times as many active voxels (3,052 versus 1,118), and engages twice as many functional brain areas (18 versus 9). Panel B illustrates the functional activation profiles across brain areas and quantifies their lack of similarity (Spearman's ρ = −0.28, p = 0.21). While the two exercises engage five of the same functional brain areas, they each engage a number of regions uniquely (physical harm, 13; monetary loss, 4), suggesting that the brain functions supporting each exercise are quite different. Panel C shows that, within their five shared brain areas, representations are significantly similar in only one (dorsal midcingulate cortex, dMCC) and significantly dissimilar in another (anterior medial prefrontal cortex, aMPFC).Footnote 109
Dorsal MCC is associated with integrating negative emotions and managing subsequent actions.Footnote 110 Similar patterns in this area may reflect the fact that both monetary losses and physical harm generate negative emotional experiences. Anterior MPFC, which shows significant dissimilar activation, is broadly associated with the encoding of subjective value, in addition to many other functions.Footnote 111 The significant negative correlation in activation across this region suggests that while both physical harm and monetary loss carry negative subjective value, the collections of neurons that represent this value are not the same. Thus, whether considering functional (weak) or representational (strong) similarity, the brain's responses to physical harm and economic losses are not much alike.
Physical pain may be too literal a translation of the harms associated with conflict, however. Excluding the experience of pain, Bartra and colleagues meta-analyzed 77 studies (1,371 subjects) of “negative subjective value” in which participants experienced a range of bad outcomes.Footnote 112 Harmful experiences used as stimuli included nonmaterial losses (such as losing a competition to a rival), material losses (such as losing money), and visceral unpleasantness for oneself (such as watching violent imagery) and for others (such as watching others experience fear).Footnote 113 I calculated the functional profile of negative subjective value and its similarity to Tan and colleagues' profile for economic losses. The functional profile of negative subjective value is represented across more brain areas (27) than monetary losses alone (9). As with physical pain, negative subjective value is not functionally similar to monetary losses (Spearman's ρ = 0.11, p = 0.57). Representational similarity analysis in aMPFC replicated the negative correlation found for physical harm (ϕ = −0.11, p < 0.001). Negative subjective value and monetary losses were positively correlated in two areas associated with emotion experience (anterior insula: ϕ = 0.09, p = 0.002) and emotion regulation (inferior frontal gyrus: ϕ = 0.11, p < 0.001), respectively.Footnote 114 This suggests that while the feeling of losing money is processed in a manner similar to the feeling of other negative outcomes, they have little else in common as far as brain-level architecture is concerned. This meta-analytic finding is consistent with a single neuroimaging study showing that a classifier trained to detect monetary losses on the basis of neural responses could not detect other types of negative outcomes (such as visceral unpleasantness) for oneself or for others.Footnote 115
Evidence from CBMAs thus indicates that nonmonetary harms engage a broader network of brain areas (18 for pain, 27 for other negative experiences) than the experience of monetary loss (9 areas), demonstrating a more complex brain-level representation.Footnote 116 The relative complexity of what is being valued in the brain has been linked to particular decision-making behavior, and to the intransitivity of preferences specifically. In such cases, intransitivity is not a matter of error; “instead, these data suggest that the irrationalities we observe in behavior reflect a fundamental irrationality in the neural representation of subjective value.”Footnote 117 As Benjamin Hayden and Yael Niv summarize, “The fact that the brain can compute values to compare apples and oranges does not mean that it routinely does so, or that valuation is the primary process underlying choice.”Footnote 118 Put differently, the brain may not assign one-dimensional values to “blood” and “treasure” to add them up, nor does it necessarily ever compute their cumulative “cost.”
While the complex neural representation of the harms associated with conflict-related decision making may violate the requirements of rational-choice models in some cases, this does not mean that people make irrational decisions where harm is concerned. Frameworks for decision making that do not require transitivity or even value-based computation (such as heuristic decision-making models) often perform better at predicting multi-attribute choices than rational-choice models and are themselves still rational.Footnote 119 For example, a single neuroimaging study evaluating threat-related decision making found that a heuristic policy best explained participants’ choices for their survival strategy.Footnote 120 An optimally rational, utility-maximizing policy was a second-best explanation, and each decision policy was supported by different brain areas. The optimal policy also maximized a monetary incentive, but many adopted a heuristic approach anyway.
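The contrast between the two decision policies can be made concrete. The sketch below compares a weighted-sum (value-based) rule with a lexicographic heuristic in the spirit of the take-the-best family, which examines attributes in order of importance and stops at the first one that discriminates. The options, attribute scores, weights, and tolerance are all invented for illustration; the point is only that the two rules can select different options from identical inputs.

```python
# Each option scored on three harm-related attributes (higher = worse); all values invented.
options = {
    "attack":  {"deaths": 0.8, "money": 0.3, "autonomy": 0.1},
    "balance": {"deaths": 0.4, "money": 0.6, "autonomy": 0.2},
    "concede": {"deaths": 0.1, "money": 0.2, "autonomy": 0.9},
}

def utility_choice(options, weights):
    """Value-based rule: collapse all harms into one weighted cost, pick the cheapest."""
    cost = lambda attrs: sum(weights[k] * v for k, v in attrs.items())
    return min(options, key=lambda o: cost(options[o]))

def lexicographic_choice(options, priority, tol=0.2):
    """Heuristic rule: compare on the most important attribute first, moving on
    only when the remaining options are roughly tied."""
    candidates = list(options)
    for attr in priority:
        best = min(options[o][attr] for o in candidates)
        candidates = [o for o in candidates if options[o][attr] - best <= tol]
        if len(candidates) == 1:
            break
    return candidates[0]

# The two rules can select different options from the same inputs.
print(utility_choice(options, {"deaths": 0.3, "money": 0.2, "autonomy": 0.5}))  # balance
print(lexicographic_choice(options, priority=["deaths", "money", "autonomy"]))  # concede
```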
In sum, the brain does not appear to reason about all bad outcomes using the brain-level architecture for thinking about economic losses. While the language of costs has a simplifying appeal, the microfoundational evidence suggests that harm evaluation is a more complex process, engaging a broader range of brain areas. And behavioral evidence suggests that this type of complex brain-level representation yields preference patterns that do not obey the assumptions required by rational-choice models. Other logics of preference structure (such as heuristic models) might better capture how people evaluate the threats they perceive and the choices they make. Research into such models thus offers an alternative method for theory building in the domain of conflict-related decision making.Footnote 121
Assumption 2: Treating Intentions as Inscrutable
The determinants of coercion's success or failure are a perennial topic of interest in IR.Footnote 122 Issuing threats-as-signals is one of the primary tools actors use to attempt to coerce others into doing something they would rather not.Footnote 123 Recent scholarship has demonstrated that coercion often fails in the real worldFootnote 124 and that states with greater material power are not necessarily more successful,Footnote 125 suggesting that coercion's outcomes are not merely determined by resources. This scholarship has sought to explain variation in coercive outcomes by focusing on how threats are processed by those on the receiving end using experimentalFootnote 126 and qualitativeFootnote 127 data. These works provide an important step forward for the study of coercion, and coercion failure in particular. But, by design, they can only speculate on the cognitive mechanisms involved.
How does the brain process threats-as-signals—that is, socially communicated statements of the conditional intention to harm? And do the cognitive processes involved affect how the recipients of threats respond to coercive attempts? Assuming that actors correctly identify that a threat-as-signal is meant for them,Footnote 128 scholarship on coercive threats often parses the first question into two problems: how do people assess capabilities (that is, can the issuer make good on their threat?) and how do people assess intentions (that is, will they?).Footnote 129 Scholarship often treats capabilities as somewhat uncertain but ultimately knowable, given sufficient information.Footnote 130 But intentions are generally treated as inscrutable: that is, they cannot be understood or investigated directly. This theoretical position is justified by a basic truth: we cannot know the content of another person's mind.Footnote 131
One consequence of assuming that intentions are inscrutable is that both rationalist and behavioral approaches to the study of coercion posit that people do not try to puzzle out intentions directly. Rather, scholars have proposed that people use workarounds to estimate others’ intentions, which then feeds into the assessment of the threat-as-signal they have received.Footnote 132 A theoretical advantage of this stance is that it allows scholars to substitute a process that has no visible inputs (intention inference) with processes that do (such as emotion inference using facial expressions or audience cost inference using speech acts).
One commonly proposed workaround is that people reason about the strategic interests of the threat's issuer. Understanding the issuer's interests would then reveal whether they “should” carry out their threat. These interests can include the domestic costs and benefits of making good on the threat versus backing down,Footnote 133 as well as their potential gains and losses in the international arena.Footnote 134 Some scholars use well-established strategic games (such as chicken or the prisoner's dilemma) to capture the mental exercise of reasoning through another's interests in a coercive situation.Footnote 135 Others use situation-specific game-theoretic frameworks.Footnote 136 In both cases, reasoning about interests and payoff structures provides a shortcut to inferring intentions because payoff structures dictate what the threat's issuer should do and, therefore, what they likely intend to do as long as they are rational actors.Footnote 137 As Wilson, Stevenson, and Potts put it, “Game theory relies very little on assessing the motives of others.”Footnote 138
A second category of workarounds assumes that the threat issuer's personal characteristics provide clues about whether they intend to carry out their threat. Characteristics hypothesized as being informative here include relevant personality types or traits,Footnote 139 beliefs,Footnote 140 and emotional states.Footnote 141 From this perspective, assigning a value to one of these characteristics (such as untrustworthy for a trait or afraid for an emotional state) allows the recipient of a threat to infer the issuer's intentions using observable information, either from the current situation or from prior interactions. Taking the consideration of characteristics even further, Marcus Holmes has argued that when people have the opportunity to interact face to face, they viscerally simulate others’ internal states and so gain insight into their intentions.Footnote 142 Seanon Wong makes a similar argument about the insights afforded by face-to-face interaction, but focuses on reading others’ emotional states.Footnote 143
Theories incorporating these indirect methods of intention inference generate the same testable implication: does the mental exercise of thinking about others’ intentions to do harm look like something else (that is, one of the hypothesized workaround exercises)? I answer this question using two comparisons. First, I evaluate the similarity between direct intention inference (that is, the brain's response when specifically tasked with inferring intentions) and five possible workaround exercises derived from the coercion literature. Second, I compare patterns of neural response for inferences about those who intend harm and about those who do not, to more precisely characterize how people might respond to receiving threats-as-signals.
In social cognitive neuroscience, the study of how we think about the thoughts and mental states of other people is often captured by the umbrella term “mentalizing,” which includes empathy, perspective taking, and reasoning about others’ minds in a variety of contexts.Footnote 144 Mentalizing activities are supported by an identifiable brain network required for social cognition, which Lynn Fehlbaum and colleagues summarize in a CBMA (204 studies, 4,786 subjects). The mentalizing network as represented in their CBMA spans many brain regions, including the bilateral temporoparietal junction, bilateral superior temporal sulci, precuneus, and portions of the medial and dorsolateral prefrontal cortex. As discussed, a crucial aspect of threats-as-signals is that receiving them requires mentalizing, even when considering strangers or relying on stereotypes.Footnote 145
Matthias Schurz and colleagues conducted meta-analyses to compare the patterns of brain activation associated with specific mentalizing tasks, including inferring others’ intentions (11 studies, 238 subjects), their traits (19 studies, 330 subjects), their factual beliefs (25 studies, 567 subjects), and their emotions (12 studies, 346 subjects), as well as simulating others’ emotional or physical states (12 studies, 288 subjects) and reasoning about their actions during strategic gameplay (13 studies, 236 subjects).Footnote 146 I use the corresponding CBMA maps to test the empirical similarity of directly reasoning about others’ intentions against the five workaround mental exercises derived from the coercion literature.Footnote 147
As illustrated in panels A and B of Figure 2, direct intention inference is functionally (weakly) similar to three of the five workaround mental exercises advanced in theories of coercion and signaling. Panel A depicts the functional profiles of directly inferring someone's intentions and the functional profiles generated by each of the five workaround exercises. Panel B shows the functional similarities between the activation profile of direct intention inference depicted in panel A and each of the workarounds. In functional-similarity terms, the closest workaround activity is reasoning about others’ emotional responses (Spearman's ρ = 0.61, p < 0.001), but belief inference (ρ = 0.58, p < 0.001) and trait inference (ρ = 0.56, p < 0.001) are also substantively similar. Neither strategic gameplay (ρ = 0.24, p = 0.19) nor visceral simulation (ρ = 0.15, p = 0.38) is even weakly similar to direct intention inference.
Much of the overlap in patterns of neural activation shown in panel A occurs within areas associated with the mentalizing network.Footnote 148 Panel C illustrates the representational (strong) similarity between direct intention inference and the three functionally similar mental exercises. Within mentalizing areas, the average representational similarity between direct intention inference and the three functionally similar workarounds is 0.40; within other brain areas, it is 0.18.Footnote 149 This suggests that inferring others’ intentions directly is functionally similar to inferring their emotional states, traits, or beliefs, in large part because all these tasks leverage the same basic brain-level architecture used for social cognition.
But the differences between direct intention inference and the theorized workarounds are also notable. The justification for proposing workaround mental exercises in theories of coercion is that people know they are not mind readers and therefore try some other method. Yet, if this substitution occurs because people know that intention inference is fundamentally futile, then we should see similar substitutions in other contexts, including neuroimaging studies. This would generate a high degree of representational similarity both inside and outside the mentalizing network, which is not what the comparison of CBMAs in panel C indicates. The implication for theories of coercion is that, while accurate intention inference may indeed be extremely challenging, people still try. Intentions may be uncertain, but they are not inscrutable. The presumption of workarounds is convenient for scholarship because it is difficult to study cognitive processes that have no visible inputs or indicators. Nevertheless, this approach may be misleading.
The direct-intention-inference exercises reflected in Schurz and colleagues' CBMA are still one step removed from the mental exercise at the heart of coercion theories: thinking about those who might intend to harm us. None of the CBMAs analyzed here focus on inferences about those who intend harm, with the exception of the CBMA for strategic games.Footnote 150 Crucially, the CBMA for direct intention inference is derived from studies where subjects reasoned about others’ actions when they did not result in serious harm.Footnote 151
But evidence from cognitive science suggests that thinking about harm has consequences for reasoning. For example, across many cultures, people do not reason about actions that could cause harm to themselves or to others purely on the basis of the action's outcome.Footnote 152 Ryan Carlson and colleagues note that “some actions, especially those involving direct physical harm, are judged to be worse than others, even when the outcome is the same.”Footnote 153 In particular, judgments about harm are influenced by intent. Intentional harms are often evaluated differently from accidents that cause identical bad outcomes, though not in all cultures.Footnote 154 When intent plays a role, it increases subjective perceptions of harm severity (sometimes called “harm magnification”),Footnote 155 even in damage-quantification tasks with objective answers and incentives for accuracy.Footnote 156 Attribution of intent to harm also increases the willingness to blame and punishFootnote 157 and the perception of moral wrongness.Footnote 158 Sandra Baez and colleagues observe a magnification effect of intent on perceived harm severity (but not punishment choices) in a study of judges and attorneys, suggesting motivation and expertise do not alter harm magnification as a perceptual phenomenon, but can attenuate its downstream behavioral consequences.Footnote 159
Neuroimaging meta-analyses provide insight into the brain-level architecture supporting this perceptual bias. In meta-analyses that consider others’ intentions, the mentalizing network is engaged when thinking about others’ intentions in both harm-related and neutral scenarios.Footnote 160 But harm-related scenarios are associated with additional areas of brain activation. Specifically, multiple CBMAs of reasoning about others in harm-related versus non-harm-related scenarios find that harm-related scenarios engage either the left amygdalaFootnote 161 or both the left and the right amygdalae.Footnote 162 A meta-analytic study of first- and third-person intentional harm showed amygdala activation only in the third-person case of observing others engaging in harmful behavior, not when engaging in harmful behavior oneself.Footnote 163 A meta-analysis of mentalizing directed at out-group members (50 studies, approximately 1,120 subjects) did not find amygdala activation,Footnote 164 suggesting that amygdala engagement is sensitive to harm-related context rather than to adversarial social relationships per se.Footnote 165 Dorottya Lantos and colleagues found similar effects in a single neuroimaging study comparing videos of an out-group member issuing threats with videos of the same out-group member making neutral statements; the left amygdala was more active in the threat-as-signal condition.Footnote 166 Similarly, Ronald Sladky and colleagues found amygdala activation during the trust game, but only when subjects interacted with people they knew to be untrustworthy—that is, those who might intentionally harm them.Footnote 167
Figure 3 visualizes the spatial relationship between the areas engaged in direct intention inference and the left and right amygdalae. Notably, the amygdalae are not active in any of the meta-analytic maps generated by Schurz and colleagues. The absence of amygdala activation in the workaround exercises is significant for two reasons. First, many of these CBMAs include subjects reasoning about negative outcomes, negative emotions, and others’ pain, so valence alone cannot explain the lack of amygdala activation. Second, several studies included in the CBMAs for strategic gamesFootnote 168 and visceral simulationFootnote 169 report looking specifically for amygdala activation but not finding it, which suggests the null result is not a statistical fluke of the meta-analytic procedure. That is, the theorized workarounds are missing a key feature supporting how the brain reasons about those who intend harm.
Single-study research has also demonstrated directly that the amygdalae's engagement in the evaluation of harm-related scenarios is related to intentionality and not purely to harm.Footnote 170 Eugenia Hesse and colleagues show, in a small study of patients with electrodes implanted in their left amygdala, that the region responded more to intentional harm than to unintentional harm and did so faster than other brain regions.Footnote 171 In studies where people judge the wrongness of harms, the strength of the amygdala's response positively correlates with the severity of preferred punishment responsesFootnote 172 and estimates of blameworthiness.Footnote 173
Matthew Ginther and colleagues argue that the amygdala acts as a “gate,” controlling whether emotional response information is passed along to brain regions engaged in decision making and complex social cognition (such as the medial prefrontal cortex), and that the “gate” is open when harm by others is interpreted as intentional.Footnote 174 A behavioral meta-analysis of twenty-five studies of perceptions of intentional and unintentional harms argued for a similar process: perceptions of intent directly influenced preferences for aggression but also triggered negative emotional responses that contributed additively to those preferences.Footnote 175 Consistent with these behavioral and brain-level findings, Hesse and colleagues posit that amygdala activity is related to harm magnification.Footnote 176 That is, perceptions of harm severity are distorted because of the amygdala's role in the cognitive process of evaluating intentional harms.
These findings regarding how the intention-inference piece of threat-as-signal perception is processed in the brain have several implications for the coercion literature. First, the findings regarding (fast) amygdala involvement in evaluating intentional harms suggest a brain-level bridge between theories of coercion and theories of emotion in IR. Recently, scholars have argued that receiving coercive threats-as-signals can prompt anger or even hatred, leading the recipient to resist the issuer's demands.Footnote 177 Todd Hall has proposed a similar dynamic for outrage.Footnote 178 To the extent that amygdala activity occurs in response to thinking about others’ intentions on receipt of a threat-as-signal, Ginther and co-authors' “gate” model provides an explanation for the behaviors that these scholars have associated with hatred and anger (retaliation and resistance, respectively) and with outrage (aggression). This brain-level mechanism is also consistent with emotional accounts of coercion failures,Footnote 179 as well as accounts that are agnostic regarding the precise mental exercises involved in the choice to retaliate rather than capitulate.Footnote 180
Second, these findings imply that efforts to make threats-as-signals more credible via costly signals may not work as the issuer intends. Costly signals provide additional information to the recipient of a threat about the issuer's intention to carry it out,Footnote 181 though the signal itself may be misinterpreted.Footnote 182 Greater credibility of a threat translates into greater certainty for the recipient that the issuer intends to harm them. In single neuroimaging studies, amygdala activation was greater during the anticipation of certain rather than uncertain harmFootnote 183 and when punishing identifiable rather than anonymous norm violators.Footnote 184 These findings suggest a positive relationship between (on the one hand) the confidence a threat's recipient has in the likelihood of harm at the hands of the issuer specifically and (on the other hand) amygdala-linked distortions and preferences (such as harm magnification or a desire to blame and punish). Given these distortions and preferences, resistance rather than capitulation is a plausible response to credible attempts at coercion.
Coercion failures may thus result from the same steps taken to improve coercion's chances of success (that is, costly signals). The potential for a null relationship between threat credibility established via costly signals and the probability of compliance has been identified experimentally.Footnote 185 In observational data, Todd Sechser has shown that compellent threats backed by demonstrations of military force (a costly signal) improve chances of success from a low baseline of 12.5 percent, but still fail half the time.Footnote 186 A brain-level account provides insight into the mechanisms behind these empirical findings.
Finally, prior theoretical treatments of overblown responses to threats-as-signals argue that these responses are the product of misperceptions that can be corrected with informationFootnote 187 or empathy,Footnote 188 or by addressing motivations for bias.Footnote 189 But this “correctable” account risks mistaking a feature of the brain's architecture for a bug in human reasoning. Baez and colleagues' study of judges and attorneys demonstrates that the behavioral effects of perceptual distortion can be mitigated, but not the misperception itself. A brain-level account thus suggests that, to avoid escalation, short-circuiting the consequences of perceptual distortion may be a more fruitful intervention than correcting the misperception.
Conclusion
Within IR, the study of “threat perception” has suffered from a proliferation of conceptualizations and a dependence on untested psychological assumptions. The result is that we know less about what “threat perception” is and how it matters for IR than the volume of literature would suggest.
I have argued that the solution to both of these problems is the same: take the brain into account. Conceptually, this is straightforward. Returning to the plain-language meaning of “perception” renders the brain an essential aspect of any understanding of “threat perception.” Pairing “perception” as a brain-based process with the two plain-language meanings of “threat” yields two systematized concepts grounded in the brain: threat-as-danger perception and threat-as-signal perception. These two definitions cover most custom formulations, generalize across situations, and are directly interpretable to non-experts. This conceptual ground-clearing also makes it easier to see why brain-level data offer a way to test assumptions about how threat-as-danger perception and threat-as-signal perception work in IR.
I argued that large-scale neuroimaging data are a particularly valuable resource for this kind of testing and introduced CBMAs as one way to leverage this type of data. CBMAs exist as peer-reviewed publications in their own right and thus are accessible to IR scholars. I argued that CBMAs are especially useful because they summarize the statistical concordance of many neuroimaging studies, offering generalizable insights and improving external validity in terms of samples, treatments, and outcomes. I then introduced pattern similarity analysis as a way of using CBMA data to test assumptions about psychological processes using the image-based outputs of fMRI.
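For readers who want a sense of what such an analysis involves in practice, the following sketch outlines the pipeline from published CBMA maps to a single similarity score. It is a hypothetical illustration, not the article's procedure: the file names are placeholders, the maps are assumed to already share a common voxel grid (for example, an MNI template), and the Jaccard overlap is just one simple pattern-similarity measure; the metric actually used in this article is documented in its replication files.

```python
# Hypothetical pipeline: load two published CBMA maps (placeholder file
# names), binarize them, and compute a simple overlap-based similarity.
import nibabel as nib  # standard Python library for reading NIfTI images
import numpy as np

def load_binary_map(path: str, threshold: float = 0.0) -> np.ndarray:
    """Load a NIfTI statistical map and binarize it at a threshold."""
    data = nib.load(path).get_fdata()
    return (data > threshold).ravel()

# Assumes both maps were resampled to the same template space beforehand.
map_a = load_binary_map("cbma_monetary_loss.nii.gz")   # placeholder file
map_b = load_binary_map("cbma_physical_harm.nii.gz")   # placeholder file

# Jaccard similarity: |A intersect B| / |A union B|, ranging from 0 to 1.
intersection = np.logical_and(map_a, map_b).sum()
union = np.logical_or(map_a, map_b).sum()
similarity = intersection / union if union else 0.0
print(f"Jaccard similarity: {similarity:.2f}")
```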
In an original analysis of fifteen previously published CBMAs, I demonstrated this type of brain-level assumption testing. I first considered whether people think about all types of harm associated with conflict as if they were economic costs, an assumption common in rational-choice models of conflict-related decision making. Analyzing the results of three CBMAs representing 126 unique studies, I found minimal similarity between the brain's processing of monetary and nonmonetary harms. Nonmonetary harms are represented more widely across the brain, indicating greater complexity. Single neuroimaging studies and behavioral data demonstrate that complex constructs like harm are not always evaluated in ways that satisfy a rational-choice framework's requirements. That is, “blood” and “treasure” may not be so easily compared, much less added together. For this reason, alternative approaches to decision making that grant this complexity while preserving rationality (such as heuristic models) may provide better models of choices about violent conflict.
In the second example, I considered whether people treat the intentions of those who might harm them as inscrutable. Theories of coercion, both rationalist and behavioral, often make this assumption based on the intuition that humans are not mind readers and yet must still assess the intentions of those who issue threats. Thus coercion theories often posit that people use mental workarounds, such as reasoning about the issuer's strategic interests or personal characteristics, to indirectly estimate intentions. Analyzing the results of twelve CBMAs representing 392 unique studies, I found that reasoning about those who intend harm does not look like the workarounds proposed in the literature in several important respects. Thinking about those who intend harm is characterized by amygdala activation and associated with magnified subjective perceptions of harm, heightened assessments of wrongness, and stronger preferences for blame and punishment. These perceptions and preferences shed light on several findings in the coercion literature, including high rates of coercion failure, the role of emotions, and the nature of misperceptions by those who receive threats-as-signals. There are also implications for the study of costly signaling, since brain-level processes suggest that rendering a threat-as-signal more credible may make a recipient less likely to capitulate.
Taken together, these tests illustrate the value of building an understanding of the brain into the study of “threat perception” in IR. The brain's architecture and functions are the ultimate arbiter of whether the psychological assumptions used in IR theories are valid, especially if these assumptions claim to characterize general cognitive processes underlying how most people think. Brain-level data also provide insights into processes that are unobservable and might otherwise remain subject to speculation (for example, intention inference).
While many scholars have discussed the potential for brain-level data to contribute to the study of IR, few have offered practical demonstrations of how this can be done. This article provides one such demonstration by introducing CBMAs as a source of accessible data for those looking to leverage neuroscientific evidence. This type of large-scale, cumulative data provides a more robust view of the field's findings than any single neuroimaging study. Moreover, with some training, any researcher with access to the neuroscientific literature can conduct a new meta-analysis without the cost of original data collection. Even the existing set of CBMAs represents a trove of data on a variety of IR topics beyond threat perception, including risk and uncertainty processingFootnote 190 and intergroup relations.Footnote 191
Skepticism toward neuroscientific data remains appropriate, however. CBMAs address only some of the concerns about statistical robustness and validity associated with brain-level data. Neuroscience as a field is also relatively young. Yet, relative to the state of psychology when Jervis published Perception and Misperception in International Politics, neuroscience is far more advanced and self-aware.Footnote 192 Jervis's book relied on single studies (some observational) for much of its theory building, while also acknowledging ongoing debates within psychology on many important fronts.Footnote 193 Even so, many of psychology's deepest issues were unknown or unacknowledged when it was published.Footnote 194 From the perspective of the field's “readiness” for outside use, integrating neuroscientific evidence into theory building and theory testing in IR today, when done carefully, represents a safer bet than the one Jervis made with cognitive and social psychology in the early 1970s.
Fundamentally, this article makes an argument for both conceptual and empirical consilience.Footnote 195 Aligning the study of threat perception across fields, including IR, is essential for the accumulation of knowledge. Not only does closer integration with cognitive science at a microfoundational level promise to advance the study of threat perception in IR, but it also enables IR scholarship to contribute to a broader understanding of human cognition and behavior when it comes to dealing with danger.Footnote 196
Data Availability Statement
Replication files for this article may be found at <https://doi.org/10.7910/DVN/TBAFE2>.
Supplementary Material
Supplementary material for this article is available at <https://doi.org/10.1017/S0020818324000328>.
Acknowledgments
I thank Ryan Brutger, Ron Hassner, Tyler Jost, Michaela Mattes, Spring Park, Rebecca Perlman, attendees at UC Berkeley's IR workshop and MIT's SSP seminar, and two anonymous reviewers for comments on this project.