
Radical uncertainty and pragmatism: Threat perception and response

Published online by Cambridge University Press:  04 December 2024

Janice Gross Stein*
Affiliation:
Munk School of Global Affairs and Public Policy, University of Toronto, Toronto, ON, Canada

Abstract

This article brings a pragmatic perspective to the analysis of threat perception in two important ways. First, as does the philosophical tradition of pragmatism, the article joins perception (or knowledge) together with response (action). Perception and response, knowledge and action, are inextricably linked as people learn through the creation of knowledge what is useful in the world. It is in this sense that pragmatists understand the perception of and response to threats as evolved practices that are conjoined. Second, the article explores threat perception and response under conditions of radical uncertainty. I explore the kinds of strategies leaders can use to respond to perceived threats in a context of radical uncertainty, the defining characteristic of contemporary world politics, and the advantages of pragmatic strategies that look for ‘what works in the world’ as provisional responses through iterative, ongoing experimental processes.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The British International Studies Association.

Drawing on scholarship on uncertainty in economics, in the social sciences, and in philosophy, this article challenges the conventional wisdom that puts risk at the centre of the analysis of decision-making on issues of international security and examines how leaders make decisions under conditions of radical uncertainty. While decision-making under conditions of risk is an important part of understanding some of the big challenges in international security, exploring what leaders do under conditions of uncertainty should receive far greater attention than it has.Footnote 1

The first part of this article explores how people perceive and respond to threats when they find themselves in a world of uncertainty rather than the more familiar one of risk. It examines how leaders hide uncertainty or hide from uncertainty to make problems more tractable.

The second section draws on well-established traditions within pragmatist philosophy to suggest strategies that are better suited to a world of radical uncertainty. Examination of the Biden administration’s strategy in the early years after Russia’s invasion of Ukraine illustrates how strategies drawing on the tenets of pragmatism are well suited to manage uncertainty. In the tradition of John Dewey, I call these strategies ‘learning by doing’.Footnote 2

The final section explores the kinds of strategies leaders should use to respond to perceived threats in a context of radical uncertainty, a defining characteristic of contemporary world politics.Footnote 3 It explores the advantages of pragmatic strategies that proceed to look for ‘what works in the world’ as provisional responses to perceived threats through iterative, ongoing processes of experimentation. I argue that these kinds of pragmatist strategies, which have not been commonly used in international politics, are especially appropriate as uncertainty, always present, becomes more important in contemporary international security and political economy.

A pragmatist perspective

Perceiving and responding to threats are evolved practices that have long interested evolutionary biologists, neuroscientists, psychologists, economists, sociologists, historians, political scientists, and philosophers. Different disciplines bring different perspectives, and within disciplines there are often big differences in what scholars think is provisionally adaptive, how they explain maladaptive behaviour, and what they see as remediable. What different disciplines mean by adaptive in world politics is not obvious, not fixed, and almost always subject to contestation.

Pragmatist scholarship contributes to these debates and shifts the analysis of threat perception in several important ways. It joins perception (or knowledge) together with response (action). It does so because pragmatism tells us that the test of knowledge, whether that knowledge is ‘objective’ or ‘subjective’ or a mixture of both, is whether it is useful when it meets the world. It follows that perception and response, knowledge and action, are inextricably linked as people learn through the creation of knowledge what is useful in the world. In a dynamic world, we cannot understand one without understanding the other. It is in this sense that pragmatists – and evolutionary psychologists as well – understand the perception of and response to threats as evolved practices that are conjoined.

Pragmatist perspectives are especially valuable under conditions of uncertainty. This article explores threat perception and response under the condition of radical uncertainty, the most intense kind of uncertainty. Pragmatists were and continue to be preoccupied with the challenges created by uncertainty, the opportunities and limits of probability, and the scope for subjective estimates and judgements.Footnote 4 They share a concern about inference and judgement with evolutionary biologists and psychologists as well as with social scientists who have studied threat perception, but they pay special attention to the condition of uncertainty and the responses that it requires.

The special case of radical uncertainty

Understanding the world and crafting strategic responses to the threats that leaders of states or non-governmental organisations perceive is always challenging. It is more difficult to do, however, under conditions of uncertainty than it is under risk. Uncertainty grows out of the incomplete knowledge that is inherent in a dynamic world and the need to make predictions about the future consequences of present actions. In world politics, which has been conceived both as anarchical and as deeply social, that prediction problem can be partly mitigated by well-established norms and deeply embedded practices. These norms and practices make it easier to separate what is unusual from what is routine. They help to create boundaries and structure problems and consequently to reduce some kinds of uncertainty that characterise strategic decision-making. When norms are contested, rules are thin, and practices range widely, however, uncertainty deepens. It becomes more difficult to connect present actions to future consequences and to identify unusual behaviour, much less its unintended consequences. Assessing whether behaviour is threatening becomes harder, and crafting strategic responses consequently becomes more challenging.

This article examines threat perception and strategic response in a world of radical uncertainty, a special case of uncertainty. Different disciplines conceive of radical uncertainty in different ways and deal differently with the challenges it creates. Here, I distinguish uncertainty from risk and then further separate radical uncertainty from other kinds of uncertainty and explore its impact on threat perception and response. My approach to the conceptualisation of radical uncertainty, as I have noted, falls broadly within the philosophical tradition of pragmatism.

As I have observed, uncertainty grows out of incomplete knowledge of a dynamic, ever-changing, interactive, complex world, and the need to make judgements about the connections between present action or inaction and their future consequences.Footnote 5 Uncertainty and risk are often conflated in everyday language, but philosophers and economists especially treat them as distinct. Decision-making under risk occurs when probability distributions are available to estimate the likelihoods of events or of known outcomes as consequences of known options. This is the easy world of ‘known knowns’, where the consequences of the principal options are known, and the occurrence of these consequences is known and predictable or can be estimated because the pattern of their occurrence more or less resembles some probability distribution. Even if the consequences are not known with a probability of 1, they are ‘knowable’ because their likelihood is knowable. Such problems are best suited to classical estimation of probability. Similarly, the likelihood of threatening events that occur with some regularity is easier for analysts to estimate with a reasonable degree of confidence.
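The distinction the paragraph draws can be sketched in a few lines of Python. The sketch is purely illustrative, an editorial addition rather than anything drawn from the article: the event, its probability, and the sampling are assumptions, and the point is only that classical estimation works precisely when a stable process generates repeated, observable outcomes.

```python
import random

# Illustrative sketch (an assumption-laden toy, not from the article):
# under *risk*, the process generating outcomes is stationary, so the
# observed frequency converges on the underlying likelihood.
random.seed(0)

TRUE_P = 0.3  # hypothetical, stable chance of a threatening event
trials = [random.random() < TRUE_P for _ in range(100_000)]
estimate = sum(trials) / len(trials)

# Under radical uncertainty no such stationary process exists to
# sample, and this estimation strategy has nothing to converge on.
print(f"estimated likelihood: {estimate:.2f}")
```

With enough repeated trials the estimate tracks the true likelihood closely; it is the absence of exactly this kind of repeatable, stable process that marks the move from risk to uncertainty.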

In many cases in international politics, the requirements to construct probability distributions (reliable data over time and large numbers of repeated trials) are simply not present.Footnote 6 Here, decision-making does not occur under conditions of risk, but under conditions of uncertainty. The point is elegantly made by John Maynard Keynes, writing in 1937:

By uncertain knowledge I do not mean merely to distinguish what is known for certain from what is only probable. The sense in which I am using the term is that in which the prospect of a European war is uncertain, or the price of copper, or the rate of interest twenty years hence, or the obsolescence of invention, or the position of private wealth owners in the social system in 1970. About these matters there is no scientific basis to form any calculable probability whatever. We simply do not know.Footnote 7

There is no more difficult sentence for advisers or analysts to say when they are asked by leaders about the likelihood of some future threatening event than ‘I simply do not know’. Even when they are asked in private, most are unwilling to say flat out that they do not know. That answer seems to convey either a lack of knowledge or an absence of confidence or both. Admitting that one does not know is psychologically deeply uncomfortable.

It is puzzling that it should be so difficult, because the ability to distinguish between a world of risk and a world of uncertainty itself requires considerable expertise. Analysts require deep knowledge about what is known or knowable and about what is unknowable, not only because what is unknowable stretches far into the future, but because it has occurred only rarely and under very different conditions in the past and does not fit within any probability distribution that would generate a meaningful estimate of risk. It also occurs in the context of a complex, interactive, dynamic world. Designing strategy in a world of uncertainty is consequently more difficult than crafting strategy in a world of risk. In all the examples cited by Keynes, the accurate answer is: ‘I don’t know’. In these kinds of situations, decision-makers have to estimate threat under uncertainty. How they do so when they acknowledge that they ‘do not know’ is the focus of this essay.

Like others, I distinguish here between uncertainty and radical uncertainty.Footnote 8 Decision-making under uncertainty occurs when the consequences of options are known, but there are no relevant probability distributions to estimate their likelihood – known unknowns. There is some structure to the problem – the options and their consequences are known or knowable – but the likelihood of these consequences is not.Footnote 9 Were leaders to ask their intelligence analysts the probability that Xi Jinping will attack Taiwan in the next decade, the honest answer would be: ‘I do not know’. Leaders face this kind of challenge with many events they would consider threatening.

Keynes was, of course, raising a far more difficult problem. In some worlds, we do not know what the consequences of options are, much less their probabilities. Decision-making under these conditions is even more challenging. Neither the consequences of options nor their likelihoods are known; we are in a world of unknown unknowns. In the large dynamic worlds of entanglement that Peter Katzenstein describes, worlds characterised by what quantum scientists call superpositionality, at the edges we may not even know the state of the world.Footnote 10 This is the condition of radical uncertainty, a condition where states of the world are changing and their attributes change in response to these dynamic movements. We cannot think about these states of the world in probabilistic terms because these states of the world are changing in dynamic and complex ways that we do not fully understand.Footnote 11

In a world of radical uncertainty, how can people estimate threats and the consequences of options? Almost 100 years ago, a vigorous debate about how to handle uncertainty developed between Keynes and his protégé, the young pragmatist philosopher, mathematician, and economist Frank Ramsey. They agreed on the importance of radical uncertainty. Keynes, as we have seen, insisted that the logic of probability could not be applied beyond the situations where its requirements were met. Ramsey, however, made a big and consequential leap. He proposed that ‘subjective probability’, or the subjective judgements that people make even in the face of radical uncertainty, could be calculated in the same way as probabilities. He cautioned, however, that this method, like frictionless planes, was not suited for real-world decision-makers. His followers ignored the caution.Footnote 12 In so doing, they transformed intractable uncertainty into tractable measures of risk and probabilistic reasoning. In a sense, they made uncertainty disappear by quantifying their subjective estimates – or beliefs or intuitions – and then treated the problem as one of risk. Talking about risk and risk management felt much more comfortable than talking about uncertainty.

This move by those who followed Ramsey, a very large one that he himself had warned against, enabled the contemporary ‘decision sciences’. As the century progressed, these sciences largely banished uncertainty from the discussion of threat perception and strategy. When people made obvious errors in either estimating threats or in choosing ways to respond, errors that were systematically different from the way probabilities ‘should be’ calculated, cognitive psychologists explained these errors as the result of evolved ‘biases’ or ‘heuristics’ that compromised human judgement.Footnote 13 Human beings, they found, were not intuitive estimators of probability. The explanation of failure inhered in the perceiver rather than in the environment. In the language of psychologists, the errors were dispositional rather than situational, and context did not matter much. This kind of argument was developed in controlled laboratory environments but was applied frequently in contexts that were radically uncertain. Doing so can be considered a category error. I return to this issue when I examine the many different ways leaders respond to uncertainty.

More recently, radical uncertainty has returned as a focus of concern, perhaps as a result of the series of ‘shocks’, or unexpected events, that in the last two decades destabilised expectations about how the world works. After the attacks on the World Trade Center in 2001 and Russia’s attacks on Ukraine in 2014 and 2022, it became increasingly difficult to argue that any of these threats, even retrospectively, could have been estimated with probabilistic reasoning.Footnote 14 They could not because there was no reliable and large set of frequency data for that class of event, nor were the processes that created them regular and stable. Converting uncertainty to risk came at a very high cost. Adding ‘subjective’ as an adjective before probability only served to hide the uncertainty from those analysts and decision-makers responsible for estimating the threat and choosing how to respond.

John Kay and Mervyn King, in their analysis of radical uncertainty, tell the story of President Obama struggling with whether or not to authorise the dispatch of Navy SEALs to Abbottabad to capture Osama bin Laden. He and his advisers had at best very limited information to work with. There were some ‘known’ consequences of the two options of raid or do not raid – the probability, for example, of mechanical failure in the helicopters that would be deployed, which could lead to the failure of the raid – but there were many more ‘unknowns’. And there was no basis for attaching probabilities to the estimation of an event – whether bin Laden was in that compound. Their description of the conversation around that decision is telling, not only because officials did not acknowledge uncertainty, but because the discussion conflated officials’ estimates of probability with the confidence they had in these estimates:

John, the CIA team leader, was 95% certain that bin Laden was in the compound. But others were less sure. Most placed their probability estimate at about 80%. Some were as low as 40% or even 30%.

The President summed up the discussion. ‘This is 50–50. Look guys, this is a flip of the coin. I can’t base this decision on the notion that we have any greater certainty than that.’ … His summary recognized that he had to make his decision without knowing whether the terrorist leader was in the compound or not. Obama would reflect on that discussion in a subsequent interview: ‘In this situation, what you started getting was probabilities that disguised uncertainty as opposed to actually providing you with more useful information.’Footnote 15

Obama identified a very important part of the problem. Leaders in the room were hiding their uncertainty. The subjective probabilities created false precision about whether a proposition was true or false and masked the radical uncertainty that he faced about the choices and their consequences. What he seems to have missed was that advisers did not distinguish their estimate of the probability that it was indeed bin Laden in the compound from their degree of confidence in those estimates.Footnote 16 Only by forcing an acknowledgement of uncertainty, something leaders do all too infrequently, could the president think through the consequences of authorising the raid that could then fail or succeed or of refusing to authorise the raid at all.
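The conflation Obama noticed can be made concrete with a toy calculation. The mix of estimates below is hypothetical, loosely patterned on the figures Kay and King report (95 per cent, around 80 per cent, 40 per cent, 30 per cent); the exact list is an assumption made for illustration only.

```python
from statistics import mean, stdev

# Hypothetical advisers' point estimates (an illustrative mix, not
# the actual distribution in the room).
estimates = [0.95, 0.80, 0.80, 0.80, 0.40, 0.30]

point = mean(estimates)          # the single summary number
disagreement = stdev(estimates)  # the spread that the summary discards

# A bare 'about 68%' reads like a measured risk; the wide spread is
# the information about (lack of) confidence that goes missing.
print(f"mean={point:.2f}  spread={disagreement:.2f}")
```

The single averaged number invites treating the question as one of risk; the spread of the estimates, which carries the information about how much confidence anyone had, is exactly what the summary throws away.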

What do leaders do in a world of uncertainty?

Ramsey identified a century ago the difficulties of precisely measuring degrees of belief and rightly located the problem in human psychology. The problem is not only one of quantification, but also the far deeper challenge that people tend to be deeply uncomfortable with uncertainty because it directly challenges their need for control and the illusion that they can predict and control the consequences of their actions. I identify three broad patterns in what leaders do. The first is the effort they make to mask or hide uncertainty, either by converting the problem to risk and preserving the illusion of control or by denying uncertainty through a repertoire of psychological strategies. In the second pattern, leaders who are faced with uncertainty avoid or delay decisions for as long as possible. In neither of these patterns do leaders engage with uncertainty and the challenges it creates. The third pattern, one that is consistent with pragmatism, is acknowledgement of uncertainty and a process of trial-and-error experimentation that proceeds over time to manage the uncertainty. Closely related is storytelling about possible futures.

Hiding uncertainty

The quantification of beliefs about the probability of the consequences of options created the field of decision sciences and opened the door to hiding uncertainty. When leaders are asked – ‘what is the likelihood of escalation if you block these shipping lanes?’ – they have beliefs about whether escalation will happen. These beliefs are not based on Gaussian probability distributions. They are a product of experience, salient historical memories, deeply embedded stories that they have heard and told over and over again, and intuition. Bayesian models of updating were developed to translate these beliefs into subjective estimates of likelihood that were quantified and then folded into decision sequences that updated ‘prior’ beliefs in response to new information in a continuous process over time.

There are important debates about how valid this elicitation of subjective probabilities is when answers like ‘very likely’ or ‘not at all’ are quantified using language scales.Footnote 17 What is not debatable is that these processes of quantification make uncertainty disappear. Leaders believe that they are now in a world of risk that can be managed and controlled. The decision can be modelled, the results quantified, and the optimising response can be identified. Under conditions of uncertainty, Bayesian models, like other models that rely on subjective expectations of probability, convert uncertainty into risk that can be managed and mitigated. They help decision-makers both to hide the uncertainty from themselves and, in so doing, to become overconfident.
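A minimal sketch of the updating machinery the paragraph describes may be useful here; every number in it is hypothetical. Bayes’ rule will happily revise any prior, which is precisely how a quantified subjective belief comes to look like a managed risk.

```python
# A toy Bayesian update (all numbers are illustrative assumptions):
# the machinery runs regardless of whether the prior ever deserved
# to be treated as a probability at all.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(hypothesis | evidence) by Bayes' rule."""
    numerator = p_evidence_if_true * prior
    marginal = numerator + p_evidence_if_false * (1 - prior)
    return numerator / marginal

belief = 0.30  # subjective prior: escalation is unlikely
for _ in range(3):  # three pieces of mildly alarming evidence
    belief = bayes_update(belief, 0.7, 0.4)

print(f"updated belief: {belief:.2f}")  # rises towards 0.70
```

The mechanics are impeccable; what the formalism cannot supply is any warrant for the prior or the likelihoods in a world where, as Keynes put it, ‘we simply do not know’.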

Overconfidence amplifies the impact of forecasting errors when people make subjective estimates of probability.Footnote 18 People who are overconfident tend, by definition, to be more confident than they are accurate; they exaggerate the true likelihood of an outcome if, of course, that ‘true’ likelihood can be known.Footnote 19 Overconfidence is a tendency towards optimism that neuroscientists locate in specific areas of the brain that block disconfirming evidence.Footnote 20 Accuracy is a function of the ability of people to make predictions and then get feedback so that they can adjust their future estimates.Footnote 21 In a world of dynamic uncertainty, feedback is generally slower and more ambiguous, leaving greater scope for people to reinforce rather than adjust their prior beliefs and avoid revising their estimates. Consequently, decision-makers are likely to remain overconfident far longer in an uncertain world than in worlds where feedback comes quickly and is indisputable.Footnote 22

Putting aside for the moment the challenge of quantifying beliefs, the challenge that Ramsey raised almost a century ago, Bayesian models can be helpful to decision-makers who face a choice under conditions of risk where the problem is well defined, the options and the consequences are known, and there are relevant probability distributions to estimate the likelihood of the consequences. Making intuitive estimates more explicit can be helpful as decision-makers consider how ‘diagnostic’ a new piece of information is and think through in a more reflective way how their estimates should change. That is not the case, however, in a world of uncertainty where none of these conditions is met. This kind of analytic tool then obscures rather than reveals. The usefulness of any tool in the toolkit, whether it works in the world, depends in large part on the world in which it is deployed.

Leaders also use a closely related set of cognitive and emotional strategies to discount and deny uncertainty. The most common is to simplify complex environments by building simple problem frames. President George H. W. Bush famously said when Iraq invaded Kuwait in 1990 that Saddam Hussein was ‘another Hitler’. That kind of simplified reasoning by analogy to frame a complex problem hides the uncertainty and, for the decision-maker, makes it simpler to identify options and their consequences. Leaders select analogies by their emotional valence.Footnote 23 Psychologists have identified what they call the ‘negativity bias’, the principle that ‘negative events are more salient, potent, dominant in combinations and generally more efficacious than positive events’.Footnote 24 People pay more attention to and give more weight to negative elements of their environment. There is evidence of variation by ideological predisposition. Compared to liberals, conservatives tend to register greater physiological responses when processing features of the environment that are negative. They are psychologically, physiologically, and neurologically more sensitive to threats in the environment.Footnote 25 To hide their uncertainty, leaders also choose frames through preferred policy paradigms or draw on their underlying assumptions about the state of the world.Footnote 26 Temporal sensitivity also affects the salience of frames; negative events become disproportionately salient the closer they are in time or distance.Footnote 27

Closely related, evidence suggests that people become uncomfortable when they come face to face with information that suggests a problem is more complex than they had thought. They consequently deny or discount inconsistent information to preserve their pre-existing frames. People have a strong tendency to see what they expect to see based on their existing beliefs and to avoid the discomfort that complexity and uncertainty can create.Footnote 28

Hiding from uncertainty

At times, in the face of uncertainty, leaders choose to do nothing. They can do so through two quite different processes. Doing nothing can be a conscious and reflective process. Aware that they do not know enough, and hoping that, with time, the problem will become clearer and the options better defined, decision-makers actively choose to postpone. The aphorism ‘don’t decide until you have to’ captures the common-sense understanding of waiting in the face of uncertainty. When leaders acknowledge that there is uncertainty about the state of the world, this kind of process can avoid a rush to judgement and serve leaders well, at least for a time.

At other times, in an evolved response driven unconsciously by emotions, leaders simply freeze. Overwhelmed by the fear or anxiety that is induced by the sense that they have lost control, they are unable to make a decision. They tend to use ambiguous language or veto action so that they do not have to make a choice.Footnote 29 They drift until catastrophe looms and forces a decision or, through sheer good luck, the problem goes away. This kind of response, largely involuntary and unreflective, generally does not serve leaders well.

What should leaders do? What might work better in a world of uncertainty?

How should leaders estimate threats and make choices in a world of radical uncertainty?

To answer these questions, I draw heavily on the rich traditions of pragmatism. There are important differences among pragmatist thinkers, but pragmatism can be defined broadly as a set of dispositions for making sense of the world and what works in the world.Footnote 30 The two are intimately connected. Pragmatists understand belief as a predisposition to act. The importance of an argument or proposition is in its practical consequences in the world. Pragmatism treats the world as relational and contingent, a world in which people adjust their habits to contingent circumstances, both to make sense of the world and to understand what works in it.Footnote 31 Making sense of the world can be understood as an answer to the question, what is going on here? What is the story? How can we arrive at a successful answer to the problem that confronts us?Footnote 32 That successful answer, what works in the world, can be understood as an incremental, iterative, and sequential process of experimentation with options that will go some way to addressing the problem, options that are ‘useful’ rather than ‘optimal’.

Making sense of the world and understanding what works in the world are closely connected. In pragmatist thinking, knowing cannot be separated from doing; pragmatism recognises the ways in which knowledge is embedded in practice. Knowing matters when it has consequences in the world. When we think about the problem of threat perception in global politics through this lens, we see that threat perception matters when the way the threat is perceived and the way leaders respond to it have practical consequences in the world. Threat perception and strategy are intimately linked, inseparable from one another. In much of the literature in international politics, the two are analytically separated. In this pragmatist analysis, they are conjoined.

Other disciplines provide context to the processes of threat perception. Evolutionary psychology is particularly helpful.Footnote 33 Perception is one of the oldest practices of human beings, one that evolved over time in adaptive ways to meet both individual and collective needs. Evolutionary psychologists situate the cognitive patterns and emotions that play such an important role in threat perception within a long time frame. They consider these patterns as evolved adaptations that respond to cues with minimal cognitive effort to solve recurrent problems. These patterns provided the quick adaptive solutions that were necessary to survive and reproduce. Natural selection enabled over time the ‘representational systems able to make predictions about the situation under informational uncertainty from indirect cues’.Footnote 34 In this sense, evolutionary processes are consistent with adaptive, trial-and-error processes of learning-by-doing that in iterative sequences look for ‘useful’ solutions, generate feedback, and start the process again.

Evolutionary arguments are intuitively appealing because they make sense of prevalent cognitive patterns and emotional triggers that are widely documented and considered as ‘errors’ or ‘biases’.Footnote 35 They explain why we are ‘wired’ the way we are, not as a default from a normative model of rationality, but as the consequence of natural selection to survive and reproduce over time. At the individual level, the process is fast, physiological, emotional, and only then conscious and cognitive. Evolutionary psychologists treat cognitive and emotional patterns not as bugs but as adaptive design features that may bring practical benefits even in our complex modern environment.Footnote 36

Pragmatism walks hand in hand with our understanding of these evolved processes of threat perception. To take only one example, some political psychologists suggest that conservative processes of threat perception that lead to ‘overestimating’ threat may be a design feature rather than a bug because, over time, they helped people to survive by avoiding the worst.Footnote 37 In the unforgiving world of some analysts of international politics, where the consequences of anarchy, rather than society, anchor many analyses, these evolutionary arguments resonate. Drawing on error management theory, Johnson argues that, in particular contexts, these evolutionary processes privilege certain kinds of problem frames and judgements to avoid the worst mistakes and are consequently adaptive.Footnote 38 Neville Chamberlain, the prime minister of the United Kingdom before World War II, made an almost-fatal mistake because he and many of his colleagues did not commit the fundamental attribution error. Rather, he underestimated the threat that Hitler’s Germany posed to the United Kingdom. There are better odds of survival, Johnson concludes, if leaders overestimate threats many times but avoid underestimating the one serious threat to their survival. Ours is an unforgiving world where the costs of one serious false negative, he argues, far exceed the costs of numerous false positives.Footnote 39

Pragmatists would reframe the question by asking whether that process of overestimation ‘worked in the world’ under that specific contingency. They would eschew the generalisation that overestimating threat is adaptive, pay a great deal of attention to the specific relationship, the contingencies, and the context, and be more comfortable with the radical uncertainty that precludes many generalisations as universal over time.Footnote 40 Much depended on the story Churchill as well as Chamberlain told and the narrative each constructed about Hitler and his intentions. And as history has shown, although Churchill was wrong so many times earlier in his career, he was right this one time that proved to be so overwhelmingly important. The individual narratives of the two leaders did not change much, but collective judgement evaluated the consistency and coherence of the competing narratives as the conditions and the contingencies shifted. Those judgements by leaders of the government in the United Kingdom moved as the context changed.

The shift in the threat perception in the United Kingdom from 1938 to 1940 illustrates the argument that threat perception in international politics is not only an individual, embodied process. It is also a deeply social process where stories are told and shared within communities. Threat perception at the individual level evolved as an adaptive process to enable the mobilisation of the physiological responses to react rapidly in the face of danger. As a community process, threat perceptions are shared through storytelling and narrative to allow communities to mobilise the collective resources they need to respond to what they think constitutes danger.Footnote 41

Storytelling becomes an important part of pragmatist thinking when the stories are about what works – or does not work – in the world.Footnote 42 Stories have narrative structures that often build in context, contingency, and relational processes that stretch over time in a narrative arc. Often, stories are the result of shared understandings of how the world works under these conditions in this context. Shared narratives that become deeply embedded over time can reinforce norms and social conventions, provide policy frames, highlight processes of experimentation over time, and generate practical knowledge. It is in this sense that storytelling helps individuals and communities to overcome their deeply rooted fear of loss of control and acknowledge that they are indeed in a world of uncertainty.

Pragmatists do not consider all stories as equivalent. Some stories are more responsive to experience and more open to challenge than others. These stories help make sense of a world that seems threatening, as opposed to simply offering a perspective (this is my story, this is my truth). In this respect, pragmatists differ from political psychologists who identify misperceptions against what is, inevitably, an unclear standard of objective or accurate perception. That standard becomes obvious only ex post and, given the multiple interpretations of threat that are often plausible, is very difficult to articulate ex ante. Stories can be one kind of evidence – narrative evidence, operating in tandem with qualitative and quantitative data. One can see our theories and models as overarching stories, sometimes put in axiomatic language, sometimes not. Theories, that is, are stories about the way the world works. This was Ramsey’s point in his famous paper ‘Theories’. Just as we might start a bedtime story with ‘There was a girl, who …’, our scientific theories start with an existential quantifier ‘There are electrons, which …’. We revise the theory as new evidence comes in.Footnote 43

Logical inference, consistency, and interrogation by peers in the community help to strengthen the internal coherence of stories and their believability. Ultimately their value lies in how useful and helpful they are, in the practical knowledge that they generate.Footnote 44 It is in this sense that widespread understanding of what is threatening is a deeply social and political process that mobilises resources for collective action that works in the world. And the various answers can be evaluated, after the fact, in terms of how well they met the world.

Stories about threats shape the strategies that leaders choose in response. Effective strategy under conditions of radical uncertainty requires experimentation and iterative processes of learning through trial and error and feedback over time and, at different times, the imagining of possible futures. That kind of strategy is uniquely suited to a process of discovery through learning that incrementally reduces uncertainty about what will work, at least for this problem under these conditions in this context. The next section looks at the stories told about Russia’s invasion of Ukraine and the strategy that the United States led in response. Although that kind of strategic process is not without significant risks, it is especially well suited to a world of radical uncertainty. I return to the imagining of possible futures in the conclusion.

Stories of World War III

Radical uncertainty as a strategic asset

Russia set the scene in February 2022 by using radical uncertainty as a strategic asset as it prepared to launch a large-scale invasion of Ukraine. It did so deliberately by raising the possibility that Moscow might use a tactical nuclear weapon. The use of a nuclear weapon could lead to escalation in unpredictable ways with unforeseeable consequences. I use the word ‘possibility’ rather than ‘probability’ deliberately, since that strategy is built on radical uncertainty that leaders then deliberately manipulate to shape the behaviour of their adversaries. It is precisely in its unpredictability that the advantage of the strategy lies.Footnote 45

Thomas Schelling first developed strategies to manipulate what he called risk at a time when both the Soviet Union and the United States had nuclear weapons. Each side worried that any use of a nuclear weapon could provoke a strike by the other that could escalate to all-out nuclear war. If nuclear weapons were not useful for warfighting, they nevertheless could be useful as instruments of coercion that might force the other side to back down. That strategy of coercion worked through the radical uncertainty it created for an adversary.

Schelling’s strategy is more accurately described as one that manipulates not risk but uncertainty. It does so through what he called ‘the threat that leaves something to chance’.Footnote 46 Schelling wrote a confidential memo in 1959 for the RAND Corporation that was only released in 2021.Footnote 47 He made clear that the key to ‘these threats is that, although one may or may not carry them out if the threatened party fails to comply, the final decision is not altogether under the threatener’s control. The threat … has an element of, “I may or I may not, and even I can’t be altogether sure.”’Footnote 48 The final decision is left to ‘chance’. Chance is random and unpredictable. Threats that leave something to chance therefore build in some danger by deliberately designing in uncertainty. Schelling understands the power of the illusion of control and recognises that it is that danger inherent in giving up some control that conveys commitment to an adversary. As we shall see, it is as though, in a changed strategic context, some elements of Schelling’s strategic thinking have found new life in Moscow.
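The coercive logic of a threat that leaves something to chance can be sketched in stylised expected-cost terms. The notation is mine and deliberately risk-based; the article’s point is precisely that, in practice, no well-defined probability of this kind is available to the adversary:

```latex
% Stylised rendering of 'the threat that leaves something to chance'
% (my notation, not Schelling's formalism):
%   p = probability that the partly uncontrolled process retaliates
%   D = catastrophic damage to the adversary if the threat is carried out
%   G = the adversary's gain from defiance
% Defiance is deterred when the expected cost of defying exceeds the gain:
\[
  p \cdot D \;>\; G .
\]
% Because p is generated by a process partly outside the threatener's
% control ('I may or I may not, and even I can't be altogether sure'),
% the commitment is credible even where deliberate retaliation would not be.
% When the adversary cannot even form an estimate of p, the situation is
% better described as radical uncertainty than as risk.
```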

A year and a half before he attacked Ukraine, Putin told an elaborate story of Russia and Ukraine as one nation that was steeped in historical grievance and a sense of betrayal by Western powers.Footnote 49 Then just before the invasion on 24 February 2022, Putin ordered an unknown level of alert of Russia’s strategic forces and issued a veiled nuclear threat should NATO intervene.Footnote 50 A threat that leaves something to chance, as Schelling developed the concept, derives its leverage principally from what leaders do and how much control they give up, not from what they say. Putin did not do much. He did not order the removal of any nuclear weapons from storage. He did not give up any control. Instead, he used ambiguous language about Russia’s nuclear infrastructure to weakly approximate, through fuzziness, a threat that left something to chance.

Reducing uncertainty: Learning by doing

President Biden and his advisers found themselves in a world of radical uncertainty as the evidence began to mount that Russia was about to invade Ukraine. An invasion would break all the norms and rules of the post–Cold War period. As it became clear that the invasion was going ahead, the uncertainty deepened. Putin’s behaviour was unpredictable, not only because no probability estimates were relevant, but because he was demonstrating his readiness to break all the constraining norms. Economists have long recognised that the usual sequence of preferences preceding and shaping choices may not always hold. At times, choice reveals preferences.Footnote 51 People do not always know their preferences. It is only after they make a choice that they discover them by reasoning backwards from their behaviour. Wartime presidents are no exception. As one astute observer remarked, ‘Putin himself likely does not know what he would do then [if he were losing the war]’.Footnote 52 It may well be that Putin is not a reliable predictor of what he will do because even he does not know how he will feel, should he face any of these contingencies. The United States had no alternative but to develop a strategy to try to manage escalation in the context of these deep uncertainties.

The president began by putting some structure into what was an undefined problem with almost no constraints. Biden first told a story of World War III, a story of a war that could erupt if US forces engaged in combat with Russian forces that could then escalate to nuclear war. Then, in an unprecedented step, he laid out in a published op-ed five boundary conditions to reduce some of the most important uncertainties. The president made public what the United States would do and what it would not do.Footnote 53 First, Biden made clear that ‘we do not seek a war between NATO and Russia’. Second, he made explicit that ‘so long as the United States or our allies are not attacked, we will not be directly engaged in this conflict, either by sending American troops to fight in Ukraine or by attacking Russian forces’. Third, Biden clarified that, despite his revulsion at Russia’s unjust attack, ‘the United States will not try to bring about his [Putin’s] ouster in Moscow’. While the United States would support Ukraine to the fullest extent possible, Biden continued, ‘We are not encouraging or enabling Ukraine to strike beyond its borders’. Biden established a final parameter when he warned Russia explicitly against the use of nuclear weapons: ‘Any use of nuclear weapons on any scale would be completely unacceptable to the United States as well as to the rest of the world and would entail severe consequences.’

Within the framing narrative of escalation that could lead to World War III and the structural parameters the president put in place to define the problem, the Biden administration then adopted a pragmatic strategy of ‘learning by doing’ that tried to further reduce uncertainty in response to Russia’s manipulation of uncertainty. The process of experimentation was incremental and inductive as officials struggled to figure out what would work. Biden early on ruled out NATO enforcement of a no-fly zone over Ukraine, despite brutal Russian attacks on Ukraine’s civilian infrastructure and desperate pleas from Volodymyr Zelensky, because he judged the risk of direct engagement between NATO and Russian pilots to be too high. After that, the administration began incrementally, and at times with considerable delays, to provide Ukraine with increasingly advanced military equipment. Officials then waited to assess Russia’s reaction. The Russian response was limited to verbal warnings and veiled nuclear threats but no military escalation outside the battlefield in Ukraine.

Early decisions to supply defensive Javelins and Stingers were followed, after Ukrainian civilian infrastructure came under relentless attack, by the decision to provide Kyiv with High Mobility Artillery Rocket Systems. Even though their range was short enough that they could not reach Russian territory, a pattern of probe and wait preceded the decision. Russia railed against the provision of these longer-range rocket systems but took no action against NATO members. Over time, the United States moved by increments, with some of its allies, to supply Bradley and Marder armoured vehicles, then advanced surface-to-air missiles in November and December of 2022. They then pledged to send batteries of the Patriot air defence system that arrived in Ukraine the following April. In January 2023, the United Kingdom, the United States, and Germany decided to supply Challenger, Abrams, and Leopard-2 tanks, and then Ground Launched Small Diameter Bombs, rocket-propelled guided bombs with a more extended range. Even though Russia had threatened in the past that any attack on Crimea would ‘ignite judgment day’, in May 2023 the United Kingdom, followed by France, sent missiles with a range long enough to reach Crimea. That same month, the Biden administration, acceding to a long-standing Ukrainian request, also authorised the training of Ukrainian pilots on F-16 fighter jets and promised that its allies would deliver these aircraft to Ukraine in the next several months. Consistent with the parameter condition that NATO not enable Ukraine to attack beyond its borders, Ukrainian Defense Minister Oleksiy Reznikov again confirmed Ukraine’s agreement not to use Western long-range weapons to strike Russian territory.Footnote 54

What had been off limits in February 2022 was on its way or promised to Ukraine 18 months later. So far, Russia has not put any strategic weapons on high alert, although it has moved tactical nuclear weapons to Belarus, where they remain under Russian control. The pragmatic strategy of incremental learning by doing seems to have worked, up until now, in controlling escalation and supplying Ukraine with most of what it needed, although far more slowly than Kyiv would have liked.

Pragmatism in the world

How do we know that what worked in the past is still working? The limits of trial-and-error learning in a world of uncertainty

The pragmatic strategy that the Biden administration used worked, at least in the first two years of the war. The United States and its NATO allies were able to shore up Ukraine’s capacity to defend itself without provoking escalation by Russia. That strategy has by now almost run its course, largely because almost all of Ukraine’s requests for more advanced weapons have been met. Little remains except for permission to use US longer-range missiles and artillery to strike at Russia’s forces that were attacking or preparing to attack Ukraine from inside Russian territory. The United States gave that permission, albeit in a limited and circumscribed way, on 30 May 2024 as Russian forces increased their bombardment of the area near Kharkiv.Footnote 55 And again, as they had in the past, Russian officials issued veiled nuclear threats and warned the United States against ‘miscalculations that could have fatal consequences’. Russia continued to threaten, at times obliquely and at times more explicitly, to retaliate against NATO forces and the United States should they intervene and allow Ukraine to use more advanced weapons on Russian soil. At the end of May, for example, Vladimir Putin issued an oblique warning to small European countries that they are ‘very densely populated’.Footnote 56

This description of how a pragmatic strategy worked, at least in the early phases, in the context of an attack by one nuclear power against a smaller neighbour that is supported by another nuclear power, raises important questions. That the strategy ‘worked’ matters. It matters because it avoided the first trap of converting the choice among options under conditions of radical uncertainty to one of risk and hiding the uncertainty. Nor did the Biden administration fall into the second trap of hiding from the uncertainty. But leaders could know that the strategy worked only after the fact, as time passed. Can they ‘know’ that the strategy is still working? And that it will continue to work?

Ukraine faces enormous challenges as the war goes on, and the United States and its allies will have to continue to craft new strategies to cope with a deepening crisis. The obvious danger is that leaders who use a pragmatic, incremental strategy of experimentation to explore what works, as the Biden administration has, may cross a threshold of escalation without knowing that they are doing so. Several problems stand out.

Although some leaders make their red lines clear, others, especially those who are manipulating uncertainty, deliberately will not do so. Leaders often have incentives to keep that kind of information private or to dissemble. The red lines of others, like their intentions, tend to be opaque. The opacity of an adversary’s intentions is compounded when that adversary has broken established norms and disrupted their past practices. Norm-shattering behaviour that is discontinuous with the past deepens radical uncertainty. In a radically uncertain world, one that is at the edge of danger, there is no reliable way of anticipating their future behaviour or estimating their red lines. We simply cannot know.

In a complex world of contingencies that are dynamic, an adversary’s leaders themselves may not know their own red lines until they face a situation and then learn from their response to the situation what their preferences are. Their preferences are ‘revealed’ to them through the choices they make in the moment. And if they themselves do not know their preferences, how can others know them with any confidence?

For all these reasons, knowing whether a strategy that was working in the world yesterday will continue to work today, much less tomorrow, is a wicked problem. Success in the past in pushing forward, pausing, and then pushing forward again when they meet no resistance, the kind of success the Biden administration has had, tells leaders little about a threshold that they might encounter if conditions change. Consequently, leaders are at risk of over-learning from the past. Nikolai Sokov, a former Russian diplomat, made clear how difficult it is to know in advance what a tipping point will be:

in the absence of very definitively drawn red lines. The trouble is that when you advance in these small kinds of steps, most likely you will not know that you have crossed the red line. So that’s the danger.Footnote 57

The United States devoted considerable effort in the first two years of the war in Ukraine to understanding Russia’s thresholds. But thresholds generally do not remain fixed. They are dynamic and develop as conditions change. Past success, unfortunately, is not a reliable predictor of future success.

How can leaders then manage the radical uncertainty they continue to face even when they use pragmatic, incremental strategies of experimentation? Antony Blinken, the US Secretary of State, when asked about the decision to allow Ukraine to use US missiles to strike Russian troops that were massing just across the border in preparation for attack, answered in language that was deeply pragmatist: ‘So with regard to the use of U.S. arms by Ukraine … the hallmark of our engagement, our support for Ukraine over these more than two years, has been to adapt and adjust as necessary to meet what’s actually going on.’Footnote 58

Reducing uncertainty by storytelling: Imagining possible futures

Storytelling about what works in the world, as I noted earlier, is an important part of pragmatist thinking and, more importantly, how we in fact think about a world only partly of our making. Stories do more than build in context, contingency, and social processes and conventions. They help us to think about the future by imagining different worlds. Whether they are called stories, narratives, or scenarios matters less than the individual and collective processes of imagining possible futures, the contexts and the contingencies that could combine to create these futures, and the pathways that would take us to them.Footnote 59 Imagining the pathways to these possibilities helps to shape understanding of what practical knowledge we need to build, how that knowledge might meet each of these worlds, the signposts to watch for along the pathways, and where and when to begin a process of experimentation, probing, and learning.

Living in a world of uncertainty

A central theme of this article is the importance of acknowledging that, at times, we live in a world of uncertainty. Acknowledging uncertainty is hard to do. It pushes against powerful psychological impulses. Yet only by acknowledging uncertainty, rather than hiding it by denying it or treating it as risk, or by hiding from it through avoidance, can leaders open the door to experimentation and learning and adaptation.

In a world of radical uncertainty, pragmatists as well as evolutionary biologists and psychologists can only know, after the fact, what has worked in the world. There are no easy or obvious solutions to that wicked problem. Leaders can only guard against false certainties from their advisers and overconfidence in themselves. They can do their best to remain open to new information, challenge assumptions and advice, ask what they have not heard, actively create an environment that supports dissent, consult people who are not usually in the room who might think about the problem differently, and engage in continuous thought experiments and storytelling to imagine possible futures and the proposed responses that might work in each of these worlds. All of these are easy to prescribe and hard to do.Footnote 60 And none of them may be enough. Leaders may still fail even with iterative processes of experimentation. In a world of radical uncertainty, however, without processes that are grounded in the search for practical knowledge, failure looms large.

Acknowledgements

I wish to thank Peter Katzenstein, Cheryl Misak, Caleb Pomeroy, and Brian Rathbun for their extraordinarily helpful comments.

Janice Gross Stein is Belzberg Professor of Conflict Management and Founding Director of the Munk School of Global Affairs and Public Policy at the University of Toronto. She is an honorary member of the American Academy of Arts and Sciences and Senior Scholar at the Kissinger Center at the School of Advanced International Studies at Johns Hopkins University.

References

1 See Richard Bradley and Mareile Drechsler, ‘Types of uncertainty’, Erkenntnis, 79:6 (2013), pp. 1225–48; R. Duncan Luce and Howard Raiffa, Games and Decisions: Introduction and Critical Survey (New York: Dover Publications, 1957); John Kay and Mervyn King, Radical Uncertainty: Decision-Making beyond the Numbers (New York: W. W. Norton, 2021).

2 John Dewey, Democracy and Education (New York: Macmillan, 1916), p. 192, used the concept of ‘learning by doing’ to describe the development of meaning through experience and reflection in a social context. ‘The knowledge which comes first to persons, and that remains most deeply ingrained, is knowledge of how to do … Recognition of the natural course of development … always sets out with situations which involve learning by doing.’ For an application of pragmatism to political decision-making, see Janice Gross Stein, ‘Political learning by doing: Gorbachev as uncommitted thinker and motivated learner’, International Organization, 48:2 (1994), pp. 155–83.

3 See especially Peter J. Katzenstein, Entanglements in World Politics: The Power of Uncertainty (Cambridge: Cambridge University Press, forthcoming, 2025) for a seminal analysis of large worlds of entanglement in world politics and the importance of uncertainty in and to these worlds.

4 See Frank P. R. Ramsey, ‘Truth and probability’, in D. H. Mellor (ed.), Philosophical Papers (Cambridge: Cambridge University Press, 1926), pp. 52–109.

5 See especially Kay and King, Radical Uncertainty.

6 Probabilistic reasoning is most suited to games of chance where all the parameters are specified, or those situations where processes are regular and there are extensive data over time as well as repeated trials so that probability distributions are known.

7 John Maynard Keynes, ‘The general theory of employment’, Quarterly Journal of Economics, 51:2 (1937), pp. 209–23 (pp. 213–14).

8 Kay and King, Radical Uncertainty.

9 Many theorists treat ‘known unknowns’ as shorthand for the world of risk rather than uncertainty, because the consequences of options are known, even if their probability distributions are not. See Frank H. Knight, Risk, Uncertainty and Profit (New York: Houghton Mifflin, 1921). I put these in the world of uncertainty, making a distinction between unknown probabilities and unknown probability distributions. When the underlying probability distributions are unknown, the probabilities cannot be derived. Beyond the formal logic, it is worth noting that there is the practical possibility of unintended consequences that are not ‘known’ at the time and were not considered in analyses produced by experts.

10 Katzenstein, Entanglements in World Politics.

11 Kay and King, Radical Uncertainty. Under some conditions, radical uncertainty can be considered as a kind of ontological uncertainty where propositions about relevant future consequences cannot be formulated because the entities and relations are not known at the time when the propositions would need to be formulated.

12 Cheryl Misak, Frank Ramsey: A Sheer Excess of Powers (Oxford: Oxford University Press, 2020); Ramsey was deeply sceptical about the capacity to measure degrees of belief with precision. That understanding of human psychology led to the pragmatic emphasis on experience and experimentation. Cheryl Misak, C. David Naylor, Mark Tonelli, Trisha Greenhalgh, and Graham Foster, ‘Case report: What – or who – killed Frank Ramsey? Some reflections on cause of death and the nature of medical reasoning’, Wellcome Open Research (23 May 2022).

13 This formulation of course ignores whether it is appropriate to calculate probabilities at all given the severity of uncertainty in the specific case under discussion.

14 I have deliberately excluded the global financial crises as example, largely because there are detailed historical case studies of global financial crises over time that include extensive compilations of data. It is arguable whether these qualitative and quantitative data can be used to estimate probabilities of a global financial crisis occurring within a specified time period in the future. See Carmen M. Reinhart and Kenneth S. Rogoff, This Time Is Different: Eight Centuries of Financial Folly (Princeton, NJ: Princeton University Press, 2009) and Katzenstein, Entanglements in World Politics.

15 Kay and King, Radical Uncertainty; Mark Bowden, The Finish: The Killing of Osama bin Laden (New York: Atlantic Monthly Press, 2012), p. 160 (emphasis added).

16 The first estimate – whether bin Laden was in the compound – has been called state uncertainty, or the true state of the world.

17 See Sherman Kent, ‘Words of estimative probability’, Studies in Intelligence, 8:4 (1964), pp. 49–65. More generally, psychologists have identified heuristics and cognitive patterns that people use in environments of risk and uncertainty that can impair processes of probability estimation. Representativeness refers to people’s proclivity to exaggerate similarities between one event and a prior class of events, typically leading to significant errors in probability judgements or estimates of frequency. See Daniel Kahneman and Amos Tversky, ‘Subjective probability: A judgment of representativeness’, Cognitive Psychology, 3:3 (1972), pp. 430–54; Daniel Kahneman and Amos Tversky, ‘On the psychology of prediction’, Psychological Review, 80:4 (1973), pp. 237–51; Daniel Kahneman, Paul Slovic, and Amos Tversky (eds), Judgment under Uncertainty: Heuristics and Biases (New York: Cambridge University Press, 1982). Anchoring refers to an estimation of magnitude or degree by comparing it with an ‘available’ initial value (often an inaccurate one) as a reference point and making a comparison. See Susan T. Fiske and Shelley E. Taylor, Social Cognition (Reading, MA: Addison-Wesley, 1984), pp. 250–6, 268–75; in a world of uncertainty, leaders search for the relevant reference classes to anchor their judgements (Amos Tversky and Daniel Kahneman, ‘Judgment under uncertainty: Heuristics and biases’, Science, 185 [1974], pp. 1124–31); initial judgements or prior beliefs serve as a conceptual anchor on the processing of new information and the revision of estimates.

18 Hal R. Arkes, ‘Overconfidence in judgmental forecasting’, in J. S. Armstrong (ed.), Principles of Forecasting (Boston, MA: Springer, 2001), pp. 495–515; Dominic D. P. Johnson and James H. Fowler, ‘The evolution of overconfidence’, Nature, 477:7364 (2011), pp. 317–20 (p. 317).

19 Dominic D. P. Johnson, Strategic Instincts: The Adaptive Advantages of Cognitive Biases in International Politics (Princeton, NJ: Princeton University Press, 2020), p. 51; Rose McDermott, ‘The influence of psychological factors in the search for strategic stability’, paper presented to the Stanford University Colloquium on Strategic Stability, November 2020; Philip Tetlock, Expert Political Judgment: How Good Is It? How Can We Know? (Princeton, NJ: Princeton University Press, 2005).

20 Johnson, Strategic Instincts; Tali Sharot, The Optimism Bias: A Tour of the Irrationally Positive Brain (New York: Pantheon, 2011).

21 Ajay Agrawal, Joshua Gans, and Avi Goldfarb, Prediction Machines: The Simple Economics of Artificial Intelligence (Boston, MA: Harvard Business Review Press, 2018); McDermott, ‘The influence of psychological factors’.

22 Overconfidence varies significantly across individuals, genders, domains, and cultures. Experimental war games find that men are more likely to be overconfident than women, and that overconfident men are especially likely to fight. See Johnson, Strategic Instincts; Dominic D. P. Johnson et al., ‘Overconfidence in wargames: Experimental evidence on expectations, aggression, gender, and testosterone’, Proceedings of the Royal Society B: Biological Sciences, 273:1600 (2006), pp. 2513–20.

23 Yuen Foong Khong, Analogies at War: Korea, Munich, Dien Bien Phu, and the Vietnam Decisions of 1965 (Princeton, NJ: Princeton University Press, 2020).

24 Paul Rozin and Edward B. Royzman, ‘Negativity bias, negativity dominance, and contagion’, Personality and Social Psychology Review, 5:4 (2001), pp. 296–320; Marika Landau-Wells and Rebecca Saxe, ‘Political preferences and threat perception: Opportunities for neuroimaging and developmental research’, Current Opinion in Behavioral Sciences, 34 (2019), pp. 58–63; for competing interpretations of the relationship between the predisposition to choose negative frames and ideological predispositions, see John R. Hibbing, Kevin B. Smith, and John R. Alford, ‘Differences in negativity bias underlie variations in political ideology’, Behavioral and Brain Sciences, 37 (2014), pp. 297–350; Daniel Kahneman and Jonathan Renshon, ‘Why hawks win’, Foreign Policy, 158 (2007), pp. 34–8 (p. 36); Stanley Feldman and Leonie Huddy, ‘Not so simple: The multidimensional nature and diverse origins of political ideology’, Behavioral and Brain Sciences, 37:3 (2014), pp. 312–13; evolutionary arguments may help to explain this difference. Michael Bang Petersen and Lene Aarøe, ‘Individual differences in political ideology are effects of adaptive error management’, Behavioral and Brain Sciences, 37 (2014), pp. 324–5; Andrew Edward White and Steven L. Neuberg, ‘Beyond the negative: Political attitudes and ideologies strategically manage opportunities too’, Behavioral and Brain Sciences, 37:3 (2014), pp. 332–3.

25 Dominic D. P. Johnson, Rose McDermott, Jon Cowden, and Dustin Tingley, ‘Dead certain: Confidence and conservatism predict aggression in simulated international crisis decision-making’, Human Nature, 23:1 (2012), pp. 98–126; a striking feature of this predisposition to negativity is that it varies individually and situationally and may be correlated with political orientations. Elizabeth Culotta, ‘Roots of racism’, Science, 336 (2012), pp. 825–7; Landau-Wells and Saxe, ‘Political preferences and threat perception’; Brian Rathbun, Reasoning of State: Realists, Romantics, and Rationality in International Relations (Cambridge: Cambridge University Press, 2019); Brian C. Rathbun, review of War and Chance: Assessing Uncertainty in International Politics, by Jeffrey A. Friedman, Perspectives on Politics, 18:2 (2020), pp. 693–5. Teasing out what causes what is not easy, but some evidence suggests political orientations are the consequence of psychological traits (Mark J. Brandt, Geoffrey Wetherell, and Christine Reyna, ‘Liberals and conservatives can show similarities in negativity bias’, Behavioral and Brain Sciences, 37 [2014], pp. 307–8; John T. Jost, Sharareh Noorbaloochi, and Jay J. Van Bavel, ‘The “chicken-and-egg” problem in political neuroscience’, Behavioral and Brain Sciences, 37 [2014], pp. 317–18).

26 Mark Blyth, ‘Ideas, uncertainty, and evolution’, in Robert Cox and Daniel Béland (eds), Ideas and Politics in Social Science Research (Oxford: Oxford University Press, 2010), pp. 83–101; Brian Rathbun, ‘Politics and paradigm preferences: The implicit ideology of International Relations scholars’, International Studies Quarterly, 56:3 (2012), pp. 607–22.

27 Rozin and Royzman, ‘Negativity bias, negativity dominance, and contagion’, pp. 304–5; Dominic D. P. Johnson and Dominic Tierney, ‘Bad world: The negativity bias in international politics’, International Security, 43:3 (2019), pp. 96–140 (p. 121).

28 Janice Gross Stein, ‘Escalation management in Ukraine: “Learning by doing” in response to the “threat that leaves something to chance”’, Texas National Security Review, 6:3 (2023), pp. 29–50. Tetlock, Expert Political Judgment, shows the remarkable lengths to which policymakers will go to defend forecasts gone wrong. He finds considerable variation across analysts and explains the heterogeneity by people’s curiosity and openness to new information.

29 Barbara Koremenos, Charles Lipson, and Duncan Snidal, ‘The rational design of international institutions’, International Organization, 55:4 (2001), pp. 761–99; Barbara Koremenos, ‘Contracting around international uncertainty’, American Political Science Review, 99:4 (2005), pp. 549–65.

30 There are differences between some of the important pragmatists such as C. S. Peirce, William James, John Dewey, and Frank Ramsey. See Cheryl Misak, Cambridge Pragmatism: From Peirce and James to Ramsey and Wittgenstein (Oxford: Oxford University Press, 2016).

31 For applications of pragmatist theories to international politics, see Molly Cochran, ‘A story of closure and opening’, European Journal of Pragmatism and American Philosophy, 2 (2012), pp. 1–23; Simon Frank Pratt, Sebastian Schmidt, and Deborah Avant, ‘Pragmatism in IR: The prospects for substantive theorizing’, International Studies Review, 23 (2021), pp. 1933–58.

32 Kay and King, Radical Uncertainty, write about ‘what’s going on here’ as diagnosis. I consider that as a form of storytelling or narrative construction.

33 Evolutionary psychology can be conceived either as foundational – as the ultimate cause – or as the proximate cause. Evolutionary psychologists consider that trial-and-error learning, which is at the proximate level, requires evolved learning mechanisms that are instantiated in the brain, and that these pay fitness dividends. In evolutionary accounts that are functional, trial-and-error learning would be grounded in physiological processes that played a role in enhancing survival. Such an account is consistent with intuitive processes of experimentation until a ‘good enough’ (rather than optimal) solution to an adaptive challenge is found. See Brian C. Rathbun, Right and Wronged in International Relations: Evolutionary Ethics, Moral Revolutions, and the Nature of Power Politics (Cambridge: Cambridge University Press, 2023), pp. 67–8; Jaime C. Confer, Judith A. Easton, and Diana S. Fleischman, ‘Evolutionary psychology: Controversies, questions, prospects, and limitations’, American Psychologist, 65:2 (2010), pp. 110–26 (pp. 116–18); Laith Al-Shawaf, David M. G. Lewis, Yzar S. Wehbe, and David M. Buss, ‘Context, environment, and learning in evolutionary psychology’, in Encyclopedia of Evolutionary Psychological Science (Cham: Springer International Publishing, 2021), pp. 1330–41; Samuel Bowles and Herbert Gintis, A Cooperative Species: Human Reciprocity and Its Evolution (Princeton, NJ: Princeton University Press, 2011) argue that we have evolved to pass on what we learn via experimentation through culture. Others think of evolutionary psychology as a useful analogy rather than foundational to processes of learning. I am indebted to Caleb Pomeroy for these arguments.

34 Michael Bang Petersen, ‘Evolutionary political psychology: On the origin and structure of heuristics and biases in politics’, Political Psychology, 36 (2015), pp. 45–78 (p. 54); see also James Davis and Rose McDermott, ‘The past, present, and future of behavioral IR’, International Organization, 75:1 (2021), pp. 147–77; Peter K. Hatemi and Rose McDermott (eds), Man Is by Nature a Political Animal: Evolution, Biology and Politics (Chicago: University of Chicago Press, 2011); Peter K. Hatemi and Rose McDermott, ‘A neurobiological approach to foreign policy analysis: Identifying individual differences in political violence’, Foreign Policy Analysis, 8:2 (2012), pp. 111–29; Johnson, Strategic Instincts; John Tooby and Leda Cosmides, ‘The psychological foundations of culture’, in Jerome Barkow, Leda Cosmides, and John Tooby (eds), The Adapted Mind: Evolutionary Psychology and the Generation of Culture (New York: Oxford University Press, 1992), pp. 19–136.

35 For a recent review of these heuristics, see Janice Gross Stein, ‘Perceiving threat: Cognition, emotion and judgment’, in Leonie Huddy, David O. Sears, and Jack Levy (eds), The Oxford Handbook of Political Psychology, 3rd ed. (Oxford: Oxford University Press, 2024), pp. 394–427.

36 Johnson, Strategic Instincts; Landau-Wells and Saxe, ‘Political preferences and threat perception’.

37 Gerd Gigerenzer, Adaptive Thinking: Rationality in the Real World (Oxford: Oxford University Press, 2002); Gerd Gigerenzer and Wolfgang Gaissmaier, ‘Heuristic decision making’, Annual Review of Psychology, 62 (2011), pp. 451–82; Gerd Gigerenzer and Peter M. Todd, Simple Heuristics That Make Us Smart (Oxford: Oxford University Press, 2000).

38 Martie G. Haselton, Daniel Nettle, and Paul Andrews, ‘The evolution of cognitive bias’, in David M. Buss (ed.), The Handbook of Evolutionary Psychology (New York: John Wiley & Sons, 2015), pp. 724–46; Martie G. Haselton and David M. Buss, ‘Error management theory: A new perspective on biases in cross-sex mind reading’, Journal of Personality and Social Psychology, 78:1 (2000), pp. 81–91; Dominic D. P. Johnson, Daniel Blumstein, James H. Fowler, and Martie G. Haselton, ‘The evolution of error: Error management, cognitive constraints and adaptive decision-making biases’, Trends in Ecology and Evolution, 28:8 (2013), pp. 474–81; Johnson, Strategic Instincts; Petersen and Aarøe, ‘Individual differences in political ideology are effects of adaptive error management’.

39 For similar arguments informed by evolutionary approaches, see Kay and King, Radical Uncertainty, pp. 154–6. It is hard to imagine a context where this understanding of the world would make sense. A government that always saw threats as intentional and developed consistently alarmist threat perceptions, armed itself to the outer limits of its capacity, and pre-emptively attacked at the first opportunity would likely sooner or later exhaust its material and human reserves and make itself vulnerable to an opportunity-seeking antagonist that also saw threatening behaviour as intentional. And other states would be likely to infer threatening intent from the growing military capabilities and aggressive behaviour. These kinds of dynamics give rise to what Jervis aptly called the ‘spiral model’ that leaves all parties more rather than less insecure. See Robert Jervis, Perception and Misperception in International Politics, rev. ed. (Princeton, NJ: Princeton University Press, 2017); the strategy also advantages the largest and the richest, who can sustain the cost of struggle the longest.

40 Pragmatists generally expect very few generalisations to be universal over time. One generalisation that does hold true over time and in all cases is that all humans are mortal. I am indebted to Cheryl Misak for this clarification.

41 A contemporary example is the rapid and discontinuous change in Canadian estimates of the threat posed by China. The change was dramatic over a short period from 2018 to 2022. China’s arbitrary arrest of two Canadians, in response to the arrest of Meng Wanzhou, the chief financial officer of Huawei, at the request of the United States, prompted massive swings in public opinion that reinforced leaders’ stories of a China led by an authoritarian leader who was prepared to lash out against smaller countries. Widespread acceptance of these narratives enabled governments to put in place policies restricting Chinese investment in strategic sectors of the economy to limit Canada’s vulnerability to future retaliation by China.

42 Kay and King, Radical Uncertainty.

43 I am indebted to Cheryl Misak for her contribution to this section on pragmatists’ understanding of the role of narratives. See Frank P. Ramsey, ‘Theories’ (1929), in D. H. Mellor (ed.), Philosophical Papers (Cambridge: Cambridge University Press, 1990), pp. 112–37, cited by Misak, Frank Ramsey, pp. 395–6. See also Cheryl Misak, ‘Experience, narrative, and ethical deliberation’, Ethics, 118:4 (2008), pp. 614–32, for the pragmatist argument that narratives can be sources of evidence, but only if they are open to challenge.

44 As Kay and King observe: ‘The selection of relevant narratives is problem- and context-specific, so that the choice of fictions, numbers and models requires the exercise of judgment in relation to both problem and context. The narratives we seek to construct are neither true nor false, but helpful or unhelpful. The exercise of judgment in the selection of narratives is eclectic and pragmatic.’ Radical Uncertainty, p. 397.

45 See Katzenstein, Entanglements in World Politics, for a compelling discussion of the protean power of uncertainty, conceived not as manipulation but rather as knowledge generated by experimentation and learning over time. See also Peter J. Katzenstein and Lucia A. Seybert, ‘Protean power and uncertainty: Exploring the unexpected in world politics’, International Studies Quarterly, 62:1 (2018), pp. 80–93.

46 Schelling converted uncertainty to risk when he described his strategy to make a threat to use nuclear weapons credible as ‘manipulation of risk’ rather than more accurately as ‘manipulation of uncertainty’. Throughout this paper, I use the term ‘manipulation of uncertainty’ to capture the unknowns that leaders are trying to manage and reserve ‘manipulation of risk’ to describe problems that analysts or leaders structure to make calculation possible.

47 Thomas Schelling, The Threat That Leaves Something to Chance, RAND Historical Document HD-A1631-1 (2021), available at: {https://www.rand.org/pubs/historical_documents/HDA1631-1.html}.

48 Schelling, The Threat That Leaves Something to Chance, p. 2 (emphasis in original).

49 Vladimir Putin, ‘On the historical unity of Russians and Ukrainians’, Moscow: Office of the President of Russia (12 July 2021). English translation available at: {http://en.kremlin.ru/events/president/news/66181}.

50 The alert proved to be no more than increased staffing of strategic command centres.

51 Paul A. Samuelson, ‘A note on the pure theory of consumer behavior’, Economica, 5 (1938), pp. 61–71, argued that people’s preferences could more easily be estimated from their behaviour than from concepts of utility that are abstract and difficult to calculate. Samuelson’s model assumes optimisation of choice. Economists and political psychologists have demonstrated that people do not know their preferences at all times and often discover them by making a choice and then inferring their preferences from their behaviour. For a discussion of ‘theory in action’, see Chris Argyris and Donald A. Schön, Organizational Learning (Reading, MA: Addison-Wesley, 1978), and for an examination of action bias, see Thomas Peters and Robert H. Waterman, In Search of Excellence (New York: Harper and Row, 1982).

52 Author interview with Pavel Podvig, 18 January 2023.

53 Joseph R. Biden Jr., ‘What America will and will not do in Ukraine’, New York Times (31 May 2022), available at: {https://www.nytimes.com/2022/05/31/opinion/biden-ukraine-strategy.html}.

54 Kateryna Stepanenko, Karolina Hird, Grace Mappes, et al., ‘Latest Russian offensive campaign assessment’, Institute for the Study of War (6 February 2023), available at: {https://www.understandingwar.org/backgrounder/russian-offensive-campaign-assessment-february-6-2023}.

55 Yehor Chernev, the deputy chairman of the Committee on National Security and Intelligence in Ukraine’s parliament, wrote on 4 June 2024 that the United States still limited the use of these missile systems and artillery to Russian territory near north-eastern Ukraine and for defensive purposes only. Cited in Maria Varenikova, Constant Méheut, and Aric Toler, ‘Kyiv used U.S.-made artillery to fire at Russia, Ukraine M.P. says’, New York Times (5 June 2024), A5, available at: {https://www.nytimes.com/2024/06/04/world/europe/ukraine-strikes-russia-western-weapons.html}.

56 It was Sergei A. Ryabkov, a Russian deputy foreign minister, who warned on 3 June 2024 of ‘fatal consequences’. His warning and that of Vladimir Putin are cited in Varenikova et al., ‘Kyiv used U.S.-made artillery to fire at Russia’.

57 Quoted in Cassandra Vinograd, ‘Ukraine says that recent pledges show that new weapons are no longer taboo for its allies’, New York Times (6 January 2023), available at: {https://www.nytimes.com/2023/01/06/world/europe/ukraine-weapons-allies-vehicles.html}.

58 Secretary Antony J. Blinken, remarks at a solo press availability, Hrzán Palace, Prague, Czechia, 31 May 2024, United States Department of State, available at: {https://www.state.gov}; emphasis added. A crisper version of his remark is cited in the New York Times of 1 June 2024, A4, where Blinken said: ‘Going forward, we’ll continue to do what we’ve been doing, which is: As necessary, adapt and adjust.’ Available at: {https://www.nytimes.com/2024/05/31/world/europe/ukraine-weapons-united-states-blinken.html}.

59 Some of these elements are captured in an applied process called ‘scenario planning’. Decades ago, Herman Kahn described possible futures in stories written as if by people living in the future. See Herman Kahn, Thinking about the Unthinkable (New York: Horizon Press, 1965); Herman Kahn and Anthony J. Wiener, ‘The next thirty-three years: A framework for speculation’, Daedalus, 96:3 (1967), pp. 705–32. Gaston Berger created ‘la prospective’ to develop normative scenarios of the future to guide public policy. See Gaston Berger, Phénoménologie du temps et prospective (Paris: Presses Universitaires de France, 1964). The earliest application of what today we call scenario planning was by Pierre Wack at Royal Dutch Shell. See Pierre Wack, ‘Scenarios: Uncharted waters ahead’, Harvard Business Review, 63:5 (1985), pp. 72–89.

60 The evidence shows that leaders who manage to do so, who remain open to new evidence and who challenge arguments, are far better forecasters than those who are confident that they know one thing very well. Tetlock, Expert Political Judgment.