
Conviction Narrative Theory: A theory of choice under radical uncertainty

Published online by Cambridge University Press:  30 May 2022

Samuel G. B. Johnson
Affiliation:
Department of Psychology, University of Warwick, Coventry CV4 7AL, UK; Centre for the Study of Decision-Making Uncertainty, University College London, London WC1E 6BT, UK; University of Bath School of Management, Bath BA2 7AY, UK; Department of Psychology, University of Waterloo, Waterloo, ON N2L 3G1, Canada
Avri Bilovich
Affiliation:
Centre for the Study of Decision-Making Uncertainty, University College London, London WC1E 6BT, UK
David Tuckett
Affiliation:
Centre for the Study of Decision-Making Uncertainty, University College London, London WC1E 6BT, UK; Blavatnik School of Government, University of Oxford, Oxford OX2 6GG, UK

Abstract

Conviction Narrative Theory (CNT) is a theory of choice under radical uncertainty – situations where outcomes cannot be enumerated and probabilities cannot be assigned. Whereas most theories of choice assume that people rely on (potentially biased) probabilistic judgments, such theories cannot account for adaptive decision-making when probabilities cannot be assigned. CNT proposes that people use narratives – structured representations of causal, temporal, analogical, and valence relationships – rather than probabilities, as the currency of thought that unifies our sense-making and decision-making faculties. According to CNT, narratives arise from the interplay between individual cognition and the social environment, with reasoners adopting a narrative that feels “right” to explain the available data; using that narrative to imagine plausible futures; and affectively evaluating those imagined futures to make a choice. Evidence from many areas of the cognitive, behavioral, and social sciences supports this basic model, including lab experiments, interview studies, and econometric analyses. We identify 12 propositions to explain how the mental representations (narratives) interact with four inter-related processes (explanation, simulation, affective evaluation, and communication), examining the theoretical and empirical basis for each. We conclude by discussing how CNT can provide a common vocabulary for researchers studying everyday choices across areas of the decision sciences.

Type
Target Article
Copyright
Copyright © The Author(s), 2022. Published by Cambridge University Press

Before the wheel was invented… no one could talk about the probability of the invention of the wheel, and afterwards there was no uncertainty to discuss…. To identify a probability of inventing the wheel is to invent the wheel.

John Kay and Mervyn King, Radical Uncertainty (2020)

1. Everyday decisions

A government amidst a public health lockdown debates exit strategy; a couple debates divorce. A university graduate considers her career options; the CEO of a toaster company considers expanding into blenders. A widow, awoken by a strange sound, contemplates whether to investigate its source; a burglar, outside, contemplates whether he is making a grave mistake.

We make such decisions, grand and petite, every day. This is remarkable because many everyday choices require us to solve six challenges – each daunting, together herculean:

  • Radical uncertainty. Our knowledge about the future often eludes quantification. (Experts give conflicting advice to the government; the bickering couple cannot know whether their past signals their future.)

  • Fuzzy evaluation. The criteria for evaluating the future are ambiguous and multidimensional. (The couple must consider their feelings, children, finances; careers bring different forms of satisfaction.)

  • Commitment. Decisions and outcomes are often separated in time, so we must manage our course of action as the situation evolves. (People must sustain career training and organize their plans for years on end.)

  • Sense-making. The right decision about the future depends on grasping the present. (The government considers which epidemiological models are most plausible; the widow makes her best guess about what caused the noise.)

  • Imagination. Since the future does not yet exist, we must imagine it to evaluate its desirability. (Decisions about love, appliances, intruders, and viruses require future forecasts.)

  • Social embeddedness. The decision depends both on our beliefs and values, and those of others. (The government persuades the public to implement its policies; beliefs about marriage are shaped by our culture and media diet.)

These challenges are ubiquitous, yet their solutions elude dominant theories of decision-making.

This paper presents Conviction Narrative Theory (CNT) – an account of choice under radical uncertainty. According to CNT, narratives – mental representations that summarize relevant causal, temporal, analogical, and valence information – are the psychological substrate underlying such decisions. Narratives support and link four processes – explanation (structuring evidence to understand the past and present, yielding emotional satisfaction), simulation (generating imagined futures by running the narrative forward), affective evaluation (appraising the desirability of imagined futures and managing commitment toward a course of action over time), and communication (transmitting decision-relevant knowledge across social networks to justify, persuade, and coordinate action). Narratives are why the above-mentioned properties so often co-occur: In contexts marked by radical uncertainty and fuzzy evaluation, we use narratives to make sense of the past, imagine the future, commit to action, and share these judgments and choices with others.

Narratives bubble beneath every example above. Governments debate whether a virus is more like flu or plague; these narratives yield very different explanations of the situation, hence predictions about the future, hence emotional reactions to particular options. The couple can interpret their fights as signaling differences in fundamental values or resulting from temporary stresses; either narrative can explain the fights, portending either a dark or rosy future. The toaster CEO might consider her company ossified, complacent, or innovative; these narratives have different implications about the risks and benefits of new ventures, motivating different decisions. In each case, the decision-maker's first task is to understand the current situation, which informs how they imagine a particular choice would go, which is deemed desirable or undesirable based on how the decision-maker would feel in that imagined future.

Narratives pervade decision-making. This article explains why and how.

2. The logic of decision

2.1. Two problems

Any theory of decision-making must account for how beliefs and values yield action. We divide this question into two problems – mediation and combination.

2.1.1. The mediation problem

Since data must be interpreted to be useful, decision-making requires a mental representation – a currency of thought – mediating between the external world and our actions (dashed lines in Fig. 1). When we face a decision, we form beliefs – based on prior knowledge and new data – to characterize what will likely happen given potential actions. Those beliefs must be represented in a format that can be combined with our values to guide action (Baumeister & Masicampo, 2010). Put differently, external data or raw facts do not become actionable information until interpreted in conjunction with our broader knowledge (Tuckett, Holmes, Pearson, & Chaplin, 2020). The burglar must consider, were he to burgle, the likely outcomes (beliefs) and their desirability (values). Some mental representation must simultaneously be the output of the reasoning process that judges what will happen and an input to the decision-making process that combines those beliefs with each outcome's desirability.

Figure 1. The logic of decision. Decisions reflect both data picked up from the external world – including the social environment – and internally derived goals. The mediation problem (dashed lines) reflects the need for an internal representation – a currency of thought – that can mediate between data from the external world and actions decided internally. The combination problem (gray lines) reflects the need for a process – a driver of action – that can combine beliefs and goals to yield actions. In classical decision theory, the currency of thought is probability and the driver of action is expected utility maximization. In CNT, the currency of thought is narratives, and the driver of action is affective evaluation.

In classical decision theory, the currency of thought is probability – continuous values that quantify risk. The burglar's decision depends on his perceived chance he will be caught (C) or not caught (NC). But this assessment depends potentially on many things – the police presence, burglar's skill, odds the inhabitants are home, etc. These data must be aggregated through Bayesian inference (Section 4.1), combining prior knowledge with new evidence.

The burglar weighs the evidence, assigning 0.2 probability to C and 0.8 to NC. These probabilities summarize all relevant data about the external world in a format used internally to combine these beliefs about outcomes with values about their desirability. Probabilities solve the mediation problem because a single representation can be the output of belief-formation and the input to decision-making.

2.1.2. The combination problem

Decision-making requires a process – a driver of action – that combines beliefs and values to yield action. The burglar must not only assess the likelihood of being caught or not, but how bad or good that would be. If the mediation problem has been solved, we have a suitable representation of likelihood to combine with value judgments. Yet a further principle must govern this combination.

In classical decision theory, the driver of action is utility-maximization: Disparate sources of value are aggregated into an outcome's utility, multiplied by each outcome's probability to yield an option's expected utility. The decision rule is simply to maximize this quantity. The burglar would consider the sources of (dis)utility associated with being caught (C) – social stigma, financial costs, prison – and with not being caught (NC) – newfound wealth, perhaps guilt. The utility of C and NC might be −8 and +3, respectively. Then, the expected utility is each state's utility, weighted by its probability:

$$U(C) \times P(C) + U(NC) \times P(NC) = (0.2)(-8) + (0.8)(3) = +0.8$$

Crime is expected to pay, so the rational burglar would attempt the burglary. Although expected utility maximization is not the only justifiable decision rule, philosophers and economists have marshalled powerful arguments for its rationality (Savage, 1954; Von Neumann & Morgenstern, 1944).
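To make the combination step concrete, here is a minimal sketch of the expected-utility calculation above. The probabilities and utilities for the burglary come from the text; the zero-utility "stay home" baseline is our own assumption, added purely for comparison.

```python
# Minimal sketch of the expected-utility calculation above (illustrative only).
# Probabilities and utilities for the burglary come from the text; the
# zero-utility "stay home" baseline is an assumption added for comparison.

def expected_utility(outcomes):
    """Sum of probability * utility over an exhaustive set of outcomes."""
    return sum(p * u for p, u in outcomes.values())

burgle = {"C": (0.2, -8), "NC": (0.8, 3)}   # caught vs. not caught
stay_home = {"status quo": (1.0, 0)}        # assumed baseline option

options = {"burgle": burgle, "stay home": stay_home}
for name, outcomes in options.items():
    print(name, round(expected_utility(outcomes), 2))
# burgle 0.8
# stay home 0.0

best = max(options, key=lambda name: expected_utility(options[name]))
print("Expected-utility maximizer chooses:", best)   # burgle
```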

2.2. Two puzzles

The reader may already feel uneasy about these admittedly ingenious solutions to the mediation and combination problems: Where do these numbers come from? Varieties of this puzzle afflict both probabilities and utilities.

2.2.1. Radical uncertainty

Radical uncertainty characterizes situations in which probabilities are unknowable (Kay & King, 2020; Knight, 1921; Volz & Gigerenzer, 2012), because we do not know the data-generating model or cannot list all possible outcomes. Debates over pandemic policy are riddled with uncertainty about the infection itself (contagiousness, lethality) and policy responses (efficacy, unintended consequences). We don't know the right model for any of these – without a model, how do we calculate probabilities? Moreover, we cannot enumerate the potential implications of each policy choice – without a list of outcomes, how do we assign probabilities to them? Similar problems afflict the other examples above – try assigning probabilities to the prospects of the bickering couple, the toaster CEO, or, indeed, the burglar, and it becomes clear that radical uncertainty haunts many everyday decisions.

Radical uncertainty has many sources. Some derive from aleatory uncertainty in the world itself (Kay & King, 2020):

  • Non-stationary distributions. Stationary processes have constant probability distributions over time, learnable over repeated observations. Many real-world processes are non-stationary. Each time a pathogen mutates, its previously observed properties – the severity of disease in population sub-groups, its responsiveness to treatments, and prevention by vaccines – change in unknowable ways. The question “What is the probability of dying of a mutating virus if I contract it in 6 months?” has no answer.

  • Agency. Human behavior is often unpredictable. This is especially obvious for pivotal historical events – the assassination of Caesar, Putin's invasion of Ukraine – but smaller forms of agency-driven uncertainty render foggy whole swaths of the future. Technological innovation depends on the insights and happenstance of individuals (Beckert, 2016; Knight, 1921; Ridley, 2020), yet produces profound discontinuities. Behavior emerging from interactions among collectives adds further uncertainty, as illustrated by waves of virality in social media. The sweeping effects of government policies often depend on the preferences of one person or unpredictable interactions among a group. The COVID-19 pandemic would have had a far different shape were it not for many unpredictable choices – the rapid development of vaccines by scientists, the often-haphazard decisions of politicians.

Radical uncertainty also results from the epistemic limitations of our finite minds:

  • Information limits. Often, we lack information to fully understand a situation. In the early days of a pandemic, we know little about how a pathogen is transmitted or who is afflicted. At other times, we have more information than we can process: An endless parade of potentially relevant data resides in our environment and the deepest trenches of memory. There is often an abundance of relevant information, if only we knew where to find it. But life is not an exam problem – information is not branded with “relevant” or “irrelevant.”

  • Specification limits. When we do not know the data-generating model, we often cannot rationally assign precise probabilities (Goodman, 1955). This means that Bayesian inference – combining precisely expressed prior beliefs with quantitative assessments of how well the data fit each hypothesis – is often mathematically ill-posed. Much thought is instead more qualitative (Fisher & Keil, 2018; Forbus, 1984); while this can create bias, it is often unclear even normatively how to assign precise values. To generate probabilities, data must be interpreted; interpretation requires a model; and our models, for all but the simplest situations, are incomplete.

  • Generation limits. Most realistic problems are open-ended, requiring us to generate our own hypotheses. There are endless reasons why a new cluster of virus cases can arise – a resident returned from abroad, a tourist brought the virus, a superspreader event happened, a new variant has arisen. Even if we can test individual explanations, we will never be able to list all possible explanations. Our imaginations are limited and so we cling to small numbers of especially plausible hypotheses – raising the question of where they come from.

  • Capacity limits. Our minds have limited attention, working memory, and inference capacity (Miller, 1956; Murphy & Ross, 1994). Bayesian calculations rapidly reach absurd calculational complexity. For each calculation of a posterior, we must separately calculate the prior and likelihood and combine them, and an inference may require posteriors for many plausible hypotheses. This is bad enough, but often our inferences are chained (Steiger & Gettys, 1972). In the case of a pandemic, we cannot generate reliable predictions of how death numbers will respond to policy interventions because the responses both of individuals (e.g., distancing behavior) and the virus (e.g., mutations) are uncertain and intertwined in feedback loops.

Probabilities, by definition, are inappropriate under radical uncertainty. Although uncertainty has long been a thorn in the side of economics (Camerer & Weber, 1992; Ellsberg, 1961; Knight, 1921), almost all economic models assume that outcomes can be enumerated and assigned probabilities. Even behavioral models typically replace optimal with biased probabilistic processing (Tversky & Kahneman, 1979). This can work when the underlying model really is known, as in gambling. But real-world decision-making often resembles poker more than roulette – probabilities only get you so far.

2.2.2. Fuzzy evaluation

Fuzzy evaluation characterizes situations in which utilities cannot be evaluated. Reasons include:

  • Incommensurable attributes. We rarely evaluate choice objects along a single dimension, but must somehow combine multiple dimensions into an overall summary judgment. Writing an academic article mingles the joy of intellectual work and the pride of completion with the frustration of slow progress and the angst of possible rejection. Filing for divorce merges the pain of leaving behind shared history with the prospect of turning over a new leaf. These potential options are difficult to evaluate because their attributes lie along almost totally unrelated dimensions that resist placement onto a common scale (Walasek & Brown, 2021).

  • Incomparable outcomes. When we compare objects along a single dimension, we can often simply rank them pairwise along that dimension. But so often, one object excels on one dimension while another object excels on another. When the attributes are incommensurable, trading off attributes across choice objects is necessary to make a choice, yet it is often unclear how to do so rationally (Walasek & Brown, 2021). For example, consider choosing between careers as a clarinetist or lawyer (Raz, 1986). Neither career is clearly better, nor are they equally good – they are good in different ways: One involves more self-expression, the other more opportunity to improve the world. The relative desirability of these attributes eludes quantification. Imagine increasing the clarinetist's salary by 1%. Although clearly better than the original clarinetist job, it is still not clearly better than the lawyer job, violating transitivity (Sinnott-Armstrong, 1985). Gaining further information is unlikely to help here, where there are good arguments for and against each choice – a recipe for ambivalence (Smelser, 1998).

  • Non-stationary values. Our values may be unstable over time, yet we often make decisions for our future selves. Innovators face the challenge that consumers may not know what they like until they actually experience it – as in Henry Ford's (apocryphal) remark that if he had asked customers what they wanted, they would have said “faster horses.” We decide whether to have a first child before the experience of parenthood radically alters our priorities (Paul, 2014). Just as beliefs are uncertain when probability distributions are non-stationary, so are values uncertain when they change unpredictably. Moreover, even if one could accurately predict one's future values, how can current decisions be governed by future preferences?

Just as neoclassical and behavioral models differ in their approach to uncertainty mainly in assuming optimal versus biased probabilistic processing, they differ in their approach to preferences mainly in whether additional sources of utility (e.g., social utility; Fehr & Schmidt, 1999) or biases (e.g., reference-dependent preferences; Tversky & Kahneman, 1979) are added. Such approaches are poorly suited to many everyday decisions where utilities are non-calculable and fuzzy evaluation reigns.

3. Conviction Narrative Theory

Conviction Narrative Theory (CNT) characterizes the social and informational context in which decision-making occurs and the cognitive and affective processes governing it. CNT provides alternative solutions to the mediation and combination problems that eschew probabilities and utilities.

Under radical uncertainty and fuzzy evaluation, decision-making requires us to extract relevant information by explaining the past, use that information to predict the future, and evaluate possible futures. CNT posits narratives as the key mental representation underpinning these processes: A narrative is selected that best explains the data, which is then used to imagine possible futures given potential choices, with emotional reactions to those imagined futures motivating choices – producing conviction to take sustained action (Tables 1 and 2). (For precursors, see Chong & Tuckett, 2015; Tuckett, in press; Tuckett et al., 2020; Tuckett & Nikolic, 2017.)

Table 1. Elements of Conviction Narrative Theory

Description of key aspects of the decision-making context, mental representations, and mental processes invoked by CNT.

Table 2. Propositions of Conviction Narrative Theory

These propositions are elaborated in Sections 5–9 with supporting evidence.

Context. Although not every decision is taken under radical uncertainty and fuzzy evaluation – probabilities and utilities are well-suited for studying gambles typical in risky-choice experiments – these properties are common in everyday decisions that do not wear numbers on their sleeves. Despite drawing on fewer resources by avoiding probabilities and utilities, CNT draws on more resources in another sense – decisions are typically socially embedded, with beliefs and values influenced by others and subject to cultural evolution (Section 9). This often permits reasonable decision-making in the absence of probabilities and utilities.

Representations. CNT posits narratives as structured, higher-order mental representations summarizing causal, temporal, analogical, and valence structure in a decision domain (Section 5). For example, the widow hearing the noise has different causal theories of why sounds occur at different times; draws analogies between the present case and similar situations; and keeps track of the nefarious or innocent intentions implied. This knowledge might be represented in “burglary” versus “noisy cat” narratives. Similarly, different individuals may hold sharply distinct narratives about a global pandemic by drawing on different causal and analogical theories (see Fig. 5 in Section 5).

Despite the ecumenical representational format, narratives are constrained by their functions: They explain and summarize data, facilitate predictions, and motivate and support action. These correspond to the three key processes underlying individual decision-making in CNT, which are intertwined with narratives (Fig. 2).

Figure 2. Representations and processes in Conviction Narrative Theory. Narratives, supplied in part by the social environment, are used to explain data. They can be run forward in time to simulate imagined futures, which are then evaluated affectively considering the decision-maker's goals. These appraisals of narratives then govern our choice to approach or avoid those imagined futures. The figure also depicts two feedback loops: Fragments of narratives that are successfully used may be communicated recursively back to the social context, evolving narratives socially, and our actions generate new data that can lead us to update narratives, evolving narratives individually. (Block arrows depict representations; rectangles depict processes; circles depict sources of beliefs and values, which are inputs to processes via thin arrows.)

Processes. Explanation makes sense of available data in a unified mental framework by evaluating potential narratives (Section 6). For example, the widow would consider the evidence – time of day, type and duration of noise – to adjudicate the burglar versus noisy cat narratives. Explanation draws on multiple sources of evidence, including prior beliefs, shared narratives, and new observations. Because probabilities are not available under radical uncertainty, heuristics – simple rules relying on small numbers of cues – are used to evaluate narratives, including those exploiting causal, analogical, and temporal structure embedded in narratives. These heuristics are often implemented through affect – which narrative feels right.

The narrative is then used to simulate the future (Section 7). Given the burglary narrative, the widow would consider the likely outcome if she were to investigate (being violently attacked), ignore the noise (losing possessions), or equip her investigation with a baseball bat (showing the burglar who's boss). Whereas explanation works by thinking across narratives and adopting the most plausible, simulation works by thinking within the adopted narrative and imagining the future. We project ourselves into a narrative and imagine what would happen if an action is taken by “running” causation forward. This process generates a representation we term an imagined future – a specific sequence of imagined events, represented iconically with a temporal dimension; unlike a narrative, it need not include detailed relational information except an ordered sequence of events. However, imagination has sharp limits – rather than imagining multiple potential futures and “averaging” them, we typically imagine only one future for each choice.
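As a toy illustration of this simulation step (not a formalism proposed by the authors), the sketch below runs a hypothetical adopted narrative forward from each candidate action to yield a single imagined future, rather than a distribution over outcomes; the actions and consequence chains are invented for the widow example.

```python
# Toy illustration (not CNT's formalism): once a narrative is adopted, each
# candidate action is "run forward" along the narrative's causal links to
# yield a single imagined future rather than a probability distribution.

# Hypothetical "burglary" narrative: action -> imagined consequence chain.
adopted_narrative = {
    "investigate unarmed": ["confront intruder", "be attacked"],
    "ignore the noise": ["intruder roams freely", "possessions are stolen"],
    "investigate with a bat": ["confront intruder", "intruder flees"],
}

def simulate(narrative, action):
    """Return the single imagined future implied by the adopted narrative."""
    return [action] + narrative[action]

for action in adopted_narrative:
    print(" -> ".join(simulate(adopted_narrative, action)))
```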

We then affectively evaluate that imagined future and take (sustained) action (Section 8). Emotional responses to that future combine beliefs and values. When emotions such as excitement or anxiety are triggered by contemplating an imagined future, we are motivated to approach or avoid that future (Elliot, 2006). The widow imagines an unpleasant future from ignoring the noise, and a more palatable one from a cautious investigation, motivating approach toward the latter future. CNT describes two ways emotions can appraise futures: A default strategy based on typical appraisal dimensions used by our affective systems, and an ad hoc strategy based on the active goal(s) in our goal hierarchy (Section 8.2).

Emotions are also needed for maintaining decisions (Section 8.3). Uncertainty breeds ambivalence, with good arguments for multiple options and the need to sustain decisions over time. When emotions become embedded in narratives, such conviction narratives can manage the incorporation of new information into decision-making while maintaining commitment. As the widow approaches the noise's source, it is natural to feel deeply ambivalent about her choice. Confidence in a stable narrative and imagined future helps to maintain a consistent course of action. When used adaptively, decision-makers incorporate new information from the world into their narratives, creating feedback loops and allowing us to improve repeated or sustained decisions over time.

Beyond these backbone operations of choice – explanation, simulation, evaluation – narratives underlie a fourth function: Communication (Section 9). Whereas narratives in our definition are mental representations embedded in individual minds, some elements are shared in common across a social group; we refer to this set of common elements as a shared narrative. Since these elements are shared only piecemeal (primarily through language), it is these narrative fragments that are communicated and which shape and maintain shared narratives. Communication is another way that narratives can participate in feedback loops, now at the collective level: Shared narratives propagate, adapt, and die according to the principles of cultural evolution, permitting learning not only at the individual but at the cultural level. Shared narratives facilitate coordination when used to persuade others and maintain reputation. They propagate when they are catchy enough to be shared, memorable enough to store, and relevant enough to guide decisions.

Nowhere in this proposed process do probabilities or utilities appear; instead, ecologically and cognitively available substitutes play leading roles. In lieu of probabilities to assess narratives, heuristics are used, with narratives arising from the social environment and subjected to cultural evolution; instead of probabilities assigned to imagined futures, the single likeliest imagined future is adopted and evaluated. Rather than utilities assigned to particular outcomes over many dimensions, emotions are felt in response to imagined futures, dependent on the decision-maker's goals.

4. Relationship to alternatives

Although we believe our model is the most comprehensive explanation of how and why narratives predominate in decision-making, CNT draws on several related approaches.

4.1. Rational approaches

Bayesian cognitive science models go beyond classical decision theory in showing how probabilities can be calculated and applied to many tasks (Tenenbaum, Kemp, Griffiths, & Goodman, 2011). First, such models specify the hypothesis space. For example, the CEO considering a new blender model might entertain three hypotheses: “We cannot engineer the blender,” “We engineer the blender but cannot successfully market it,” “The blender expansion succeeds.” The reasoner assigns prior probabilities to the hypotheses, evaluates each hypothesis's fit with the data, and combines these using Bayesian inference. For example, the CEO might assign priors of 0.4, 0.35, and 0.25 to these hypotheses. As evidence accumulates – engineers develop a prototype, marketers run focus groups – she considers the likelihood of this evidence under each hypothesis, using Bayes' theorem to combine likelihoods with priors. Despite the simplicity of this toy example, Bayesian cognitive science models make a range of substantive and interesting claims about how people think in a vast array of contexts.
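For readers who want the updating step spelled out, here is a minimal sketch of the prior-times-likelihood combination described above. The priors are the CEO's illustrative values from the text; the likelihoods of the evidence (a working prototype) under each hypothesis are hypothetical numbers chosen purely for illustration.

```python
# Sketch of the Bayesian updating step described above. Priors come from the
# text's toy example; the likelihoods of the evidence (a working prototype)
# under each hypothesis are hypothetical values chosen for illustration.

priors = {
    "cannot engineer the blender": 0.40,
    "engineer but cannot market it": 0.35,
    "blender expansion succeeds": 0.25,
}
likelihoods = {                    # P(working prototype | hypothesis), assumed
    "cannot engineer the blender": 0.05,
    "engineer but cannot market it": 0.60,
    "blender expansion succeeds": 0.70,
}

unnormalized = {h: priors[h] * likelihoods[h] for h in priors}
evidence = sum(unnormalized.values())               # P(evidence)
posteriors = {h: v / evidence for h, v in unnormalized.items()}

for h, p in posteriors.items():
    print(f"{h}: {p:.2f}")
# The prototype shifts belief sharply away from "cannot engineer the blender"
# (posterior ~0.05), toward the hypotheses consistent with the evidence.
```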

Many critiques have been written of these models (Bowers & Davis, 2012; Jones & Love, 2011; Marcus & Davis, 2013), and we do not endorse all points of criticism. From our perspective, Bayesian approaches potentially work, both in principle and practice, quite well under risk. But their inescapable limitation is the same as for classical decision theory – probabilities cannot be modeled under radical uncertainty, which can interfere with each modeling step:

  • Hypothesis space. Many problems resist an enumeration of possible outcomes. The CEO has neglected many other possibilities – competitors entering the blender market, engineers generating a prototype no better than competitors’, regulators blocking the expansion. This is due to both aleatory uncertainty (some possibilities cannot be imagined even in principle because the future is unknown – the “unknown unknowns”) and to generation limits (our finite cognitive capacity to imagine possibilities in open-ended problems).

  • Priors. It is often unclear even in principle how to rationally set priors. How did the CEO assign a 0.4 probability to her engineering team's failure? Why not 0.2, 0.35, or 0.7? Is it reasonable to use the base rate of engineering team failures given that this product has never before been designed? This is a specification limit – one cannot non-arbitrarily favor one value over another within a range of plausible values. This not only limits the psychological plausibility of these models, but can be problematic for the models themselves since the specific priors chosen sometimes drive the model's fit (Marcus & Davis, 2013).

  • Likelihoods. Likelihoods reflect the probability of the evidence conditional on each hypothesis. This raises three problems in realistic contexts. First, information limits: How do we know what evidence is relevant? The CEO may scan the newspaper, consumer research, and marketing reports to grasp the blender market, but will struggle to know which pieces of information bear on her hypotheses. Second, as for priors, specification limits: How do we assign probabilities non-arbitrarily? How is she supposed to estimate the likelihood of particular focus group feedback conditional on the product's future success? Third, capacity limits: With very many pieces of evidence and hypotheses, the amount of calculation rubs up against memory and attention limits.

  • Updating. Bayesian inference itself quickly approaches capacity limits as the number of hypotheses increases, since priors and likelihoods for each hypothesis must be stored and combined. This is especially problematic for the chains of inference that are common in real-world problems. The probability of successful roll-out depends on the quality of the product, the performance of the marketing team, word-of-mouth, predictions made by analysts and retailers, and countless other factors, often mutually dependent. Moreover, the problems surrounding hypothesis spaces, priors, and likelihoods compound at each step.

Rational approaches – both classical and Bayesian varieties – are nonetheless valuable. They can characterize “small-world” problems, such as risky decisions with enumerable outcomes and probabilities; sometimes provide normative benchmarks for assessing human decisions; provide valuable insights for designing artificial systems (Lake, Ullman, Tenenbaum, & Gershman, 2017); and provide insight at Marr's (1982) computational level by characterizing the goals of a cognitive system. And although probabilistic approaches cannot capture cognition under radical uncertainty, they have inspired some of the boundedly rational approaches discussed next.

4.2. Boundedly rational approaches

Both classical and Bayesian approaches have been criticized for their lack of psychological realism, leading to several varieties of bounded rationality as amendments. Their core insight is that, although our minds are limited and prone to error, we get quite far with these limited resources: We can be rational within the bounds of our finite minds.

The dominant theoretical style in behavioral economics is surprisingly continuous with neoclassical modeling. Traditional models assume that economic agents are rational, then specify the institutional environment through which the agents interact (e.g., firms and consumers in a competitive market with priced goods) and examine the resulting equilibrium. Behavioral models use the same steps, but tweak the agents to incorporate biases or non-standard preferences, as explained in Section 2.

Like classical approaches, we believe these models produce valuable insights, particularly how small changes to the assumed psychology of economic agents qualify the results of standard models. Yet such models struggle with radical uncertainty and fuzzy evaluation. The same problems that plague classical models with probabilities apply to behavioral models with “decision weights”: Such models may capture real psychological biases in how people process probabilities, yet assume probabilities exist to be processed. This makes sense for some formal models and laboratory tasks, but not when probabilities do not exist (Section 2.2.1). Likewise, models that stuff the utility function with goodies can capture genuine trends in preferences, but create an illusion of precision when evaluation is fuzzy and options are incommensurable (Section 2.2.2).

Several important principles of bounded rationality, however, do not depend on the intelligibility of optimization:

  • Resource rationality. In coining the term “bounded rationality,” Simon (1957) did not view humans as capriciously irrational, but as managing the best we can given our cognitive and environmental limitations. This approach has been refined in sophisticated models of resource rationality (Lieder & Griffiths, 2020), emphasizing the rationality of simplifying strategies such as sampling (Sanborn, Griffiths, & Navarro, 2010). Rationing limited resources is one reason to adopt simplifications – such as narrative thinking – in the face of the calculational difficulties of uncertainty. Equally importantly, probabilistic strategies under uncertainty are not always capable of giving any answer at all.

  • Heuristics. A heuristic is a fallible-but-useful shortcut. Heuristics are often discussed in contexts where correct answers exist but algorithmic approaches are infeasible or knowledge too limited to provide an optimal answer: Some researchers (“heuristics-and-biases”) emphasize the “fallible” part of “fallible-but-useful,” and others (“fast-and-frugal”) the “but-useful” part (Gigerenzer, 2008; Tversky & Kahneman, 1974). For our purposes, we note that heuristics also may be useful in situations where no objectively correct answer exists, yet some answers are more reasonable than others.

    A parallel debate has raged in normative and descriptive ethics. Utilitarianism emphasizes calculation (Bentham, 1907/1789), positing a duty to maximize social utility. Yet just as Bayesian calculations are often impossible in principle, utilitarian calculations often fail in real-world situations. Aristotle (1999/350 BCE) bemoans the impossibility of a complete theory of ethics, instead urging us to cultivate habit and virtue to do the right thing in particular situations. Indeed, people distinguish between “rational” and “reasonable” behaviors (Grossmann, Eibach, Koyama, & Sahi, 2020), with the former characterized by optimization and abstract universalism, the latter by pragmatism and context-sensitivity (Rawls, 2001; Sibley, 1953). This is likely why descriptive accounts of moral decision-making point to tools such as rules (Greene, Sommerville, Nystrom, Darley, & Cohen, 2001; Kant, 2002/1796; Mikhail, 2007), norms (Nichols, 2002), sacred or protected values (Baron & Spranca, 1997; Tetlock, 2003), and character virtues (Johnson & Ahn, 2021; Uhlmann, Pizarro, & Diermeier, 2015), which often act like heuristics (De Freitas & Johnson, 2018; Sunstein, 2005). For moral decisions, like many everyday choices, often no clearly correct option exists, yet some are more readily justifiable.

  • Ecological rationality. A crucial point made by some researchers from boundedly rational traditions is that decision-making is adapted to real environments, so seemingly irrational behaviors observed in atypical contexts may be manifestations of more deeply rational – or at least adaptive – behaviors (Todd & Gigerenzer, 2007). If most everyday decisions are taken under radical uncertainty, then behaviors that are adaptive in the real world may manifest as demonstrably suboptimal decisions in risky (often lab-based) contexts.

Narrative approaches to decision-making are compatible with these insights, and can be considered a species of bounded rationality – albeit, at least for CNT, one for which the appropriate benchmark is reasonableness rather than rationality.

4.3. Narrative approaches

Several researchers in both psychology and economics have argued that narratives guide decision-making.

From early days in the heuristics-and-biases tradition, causal thinking was thought to play a privileged role in judgment (Kahneman & Tversky, 1982), such as our ability to use base rates (Ajzen, 1977; Krynski & Tenenbaum, 2007; Tversky & Kahneman, 1980; cf. Barbey & Sloman, 2007; Gigerenzer & Hoffrage, 1995; Koehler, 1996). However, the first decision-making model to consider detailed cognitive mechanisms underlying narrative thought was the Story Model of Pennington and Hastie (1986, 1988, 1992, 1993), most famously applied to juror decisions. In their model, jurors reach verdicts by constructing causal stories and assigning the story to the most appropriate verdict category (e.g., manslaughter, not guilty). In their studies, participants generated verdicts based on realistic trial evidence. When describing their reasoning, participants overwhelmingly supported their verdicts with causal stories (describing intentions and behaviors) rather than unelaborated lists of facts, with these stories differing greatly across individuals depending on their verdict (Pennington & Hastie, 1986). Manipulating the ease of constructing coherent stories (by scrambling the order of evidence) dramatically shifted participants' verdicts (Pennington & Hastie, 1988, 1992). Although the Story Model's legal applications are best-known, it has also been applied to other contexts including economic decisions (Hastie & Pennington, 2000; Mulligan & Hastie, 2005).

Research since this seminal work has developed in two directions. In cognitive science, increasingly sophisticated theories model how people think about networks of causal relationships (Gopnik et al., 2004). For example, Sloman and Hagmayer (2006) argue that people conceptualize their decisions as interventions on a causal network – an idea in sympathy with CNT, wherein choice points in a narrative are opportunities to select among different imagined futures implied by the narrative. A separate but kindred line of work on the Theory of Narrative Thought (Beach, 2010; Beach, Bissell, & Wise, 2016) emphasizes the pervasive role of narratives in memory and cognition, and, like CNT, highlights the importance of narratives for forecasting (Beach, 2020).

A second direction (Abolafia, 2020; Akerlof & Shiller, 2009; Akerlof & Snower, 2016; Shiller, 2019) emphasizes the role of shared narratives in economic outcomes. Shiller argues that when shared narratives go viral, they influence expectations about the future, shaping macroeconomic activity. Shiller's view also provides a powerful role for contagious emotions, especially excitement and panic.

CNT develops these ideas in a third direction: Incorporating ideas about narrative decision-making into a broader framework that elaborates processes and mechanisms, explains how narratives and emotion combine to drive and support action, and accounts for the role of cultural evolution of narratives to render decision-making adaptive even under radical uncertainty. We see CNT as complementing rather than contradicting these perspectives, developing these approaches to the next stage in their evolution.

5. Narratives

Prior work has not coalesced around a single definition of “narrative,” much less a single notion of representation. For example, Beach (2010) defines “narrative” as “…a rich mixture of memories, of visual, auditory, and other cognitive images, all laced together by emotions to form a mixture that far surpasses mere words and visual images in their ability to capture context and meaning,” while Shiller (2019) follows the Oxford English Dictionary in defining it as “…a story or representation used to give an explanatory or justificatory account of a society, period, etc.” (quoted in Shiller, p. xi). Meanwhile, Pennington and Hastie (1992) say that stories “…could be described as a causal chain of events in which events are connected by causal relationships of necessity and sufficiency….” The hierarchical structure of stories – for instance, that events can be grouped into higher-order episodes – is also often noted as a common feature (Abbott, Black, & Smith, 1985; Pennington & Hastie, 1992). Across these conceptions, causation is central but not the only hallmark of narratives – they provide meaning by explaining events (Graesser, Singer, & Trabasso, 1994; Mandler & Johnson, 1977; Rumelhart, 1975).

In our view, ordinary causal models (Pearl, 2000; Sloman, 2005; Spirtes, Glymour, & Scheines, 1993) are a crucial starting point, yet not quite up to the task of representing narratives. (That said, some progress has been made toward formalizing some economic narratives in this way; Eliaz & Spiegler, 2018.) For our purposes, causal models have two shortcomings: They do not represent some information – such as analogies and valence – that will prove crucial to narrative thinking; and operations over causal models are usually assumed to be probabilistic – a non-starter under radical uncertainty.

We define narratives as structured, higher-order mental representations incorporating causal, temporal, analogical, and valence information about agents and events, which serve to explain data, imagine and evaluate possible futures, and motivate and support action over time. No doubt, this definition itself requires some explanation.

5.1. Narratives are structured, higher-order representations

In a structured mental representation, relations are represented among the objects it represents. For example, a feature list is an unstructured representation of a category, whereas a propositional, sentence-like representation with explicit predication (i.e., assigning attributes to specific elements and specifying relations among elements) is highly structured. Similarly, causal models are highly structured, as are representations of categories whose features are related to one another through analogy (Gentner, 1983).

Narratives may have been difficult to pin down in past work because they are structured, like causal models, but contain richer information that is not typically represented in causal models. Specifically, we argue that narratives can represent causal, temporal, analogical, and valence structure. Complicating things further, not all types of structure are necessarily invoked in a given narrative. Narratives are higher-order representations that flexibly include lower-order representations.

5.1.1. Causal structure

Narratives represent at least two types of causal relationships: event-causation and agent-causation. Event-causation can further be sub-divided based on the kind of event (individuals or categories) and how the events are connected (e.g., colliders, chains, or webs).

Event-causation refers to dependency relationships between either individual events (the Central Bank lowered interest rates, causing investment to increase) or event categories (lower interest rates cause increased investment). Both kinds of event-causal relations are important since narratives incorporate information both about individual events (e.g., the course of my marriage) and event categories (e.g., how relationships work generally), including analogical links between these knowledge types. We are agnostic about the representational format of event-causation, and indeed these representations may involve aspects of networks, icons, and schemas (Johnson & Ahn, 2017). For familiarity, the diagrams we use to depict narratives are elaborated from causal network conventions (Figs. 3–6).

Figure 3. Common causal structures in narratives. In panel A (a causal collider), multiple potential causes (A or B) could explain an event (C); a typical inference problem would be to evaluate A and B as potential explanations of observation C, which may in turn license other inferences about effects of A or B (not depicted here). In panel B (a causal chain), a sequence of causally related events (D, E, F, G) is posited; typical inference problems would be to evaluate whether the overall sequence of events is plausible, or whether an intermediate event (E) is plausible given that the other events (D, F, G) were observed. In panel C (a causal web), many event types (H–N) are thought to be related to one another, with some relationships positive and others negative, and some bidirectional; typical inference problems would be to evaluate the plausibility of individual links or to infer the value of one variable from the others. In panel D (agent-causation), an agent (Q) considers taking an action (P), based partly on reasons (O) and their judgment of the action itself (P); typical inference problems would be to predict the agent's action based on the available reasons, or infer the agent's reasons based on their actions. [Circles and squares depict events and agents, respectively; straight arrows depict causal relationships, which could be unidirectional or bidirectional, positive (default or with a “+” sign) or negative (with a “–” sign); curved, diamond-tipped arrows depict reasons. For causation among events and agents, but not event-types (panels A, B, and D), left–right orientation depicts temporal order.]

Patterns of event-causation also vary in their topology and inference patterns (Johnson, 2019). Three common types of causal patterns in narrative contexts are colliders, chains, and webs (Fig. 3A–C).

In a causal collider (Fig. 3A), we can observe evidence and seek an explanation for it, which may in turn generate further predictions. For example, if a central banker makes some statement, this licenses inferences about the banker's intention, which may yield predictions about the bank's future policies.

In a causal chain (Fig. 3B), a sequence of events is causally and temporally ordered. Mrs O'Leary went to milk her cow; the cow objected and kicked a lantern; the lantern started a fire; and so began the Great Chicago Fire (supposedly). A mysterious individual invents blockchain technology; it fills an important economic niche; it gains value; it becomes widely adopted. We can think about the plausibility of these sequences overall, fill in missing events from the chains, and predict what will happen next.

In a causal web (Fig. 3C), we ask how a set of variables relate to one another – an intuitive theory (Shtulman, 2017). Some intuitive theories probably have innate components (Carey, 2009a), but many decision-relevant intuitive theories are learned. For example, investors must have mental models of how macroeconomic variables such as exchange rates, inflation, unemployment, economic growth, and interest rates are linked (Leiser & Shemesh, 2018), and voters have intuitions about trade, money, and profits (Baron & Kemp, 2004; Bhattacharjee, Dana, & Baron, 2017; Johnson, Zhang, & Keil, 2018, 2019). These intuitions likely drive much political and economic behavior, yet differ strikingly from economists' consensus (Caplan, 2007; Leiser & Shemesh, 2018). This may reflect both divergences between the modern world and evolved intuitions (Boyer & Petersen, 2018), and the difficulty of correctly extracting causal structure from causal systems with more than a few variables (Steyvers, Tenenbaum, Wagenmakers, & Blum, 2003).

Narratives often center around the intentional actions of human agents, and such agent-causation appears to be a very different way of thinking about causation. Rather than representing events as causing one another (e.g., Alan Greenspan's forming the intention to decrease interest rates caused interest rates to decrease), people sometimes appear to think of agents as causing events directly (e.g., Alan Greenspan caused interest rates to decrease). This reflects the intuitive notion that agents have free will; that our choices, when construed as agent-causes rather than event-causes, are themselves uncaused (Hagmayer & Sloman, 2009; Nichols & Knobe, 2007). Agents can act for reasons (Malle, 1999); they make intentional choices based on their beliefs and desires, which are assumed to be rational (Gergely & Csibra, 2003; Jara-Ettinger, Gweon, Schulz, & Tenenbaum, 2016; Johnson & Rips, 2015). Complicating things further, reasons are often anticipations of the likely effects of one's action.
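As a purely illustrative data-structure sketch of the relations just described – event-causation among individual events plus agent-causation acting for reasons – one might encode a narrative as follows; this is not a formalization endorsed by CNT, and the example events come from the causal-chain illustration above.

```python
# Purely illustrative sketch of the causal relations described in this
# section (event-causation plus agent-causation acting for reasons).
# It is not a formalization proposed by CNT.

from dataclasses import dataclass, field

@dataclass
class Narrative:
    events: set = field(default_factory=set)
    agents: set = field(default_factory=set)
    causal_links: list = field(default_factory=list)  # (cause, effect, sign)
    agent_links: list = field(default_factory=list)   # (agent, reason, action)

# A causal chain (Fig. 3B): the (supposed) origin of the Great Chicago Fire.
chicago = Narrative(
    events={"cow kicks lantern", "lantern starts fire", "city burns"},
    agents={"Mrs O'Leary"},
    causal_links=[
        ("cow kicks lantern", "lantern starts fire", +1),
        ("lantern starts fire", "city burns", +1),
    ],
    agent_links=[("Mrs O'Leary", "wants milk", "goes to milk the cow")],
)

def effects_of(narrative, event):
    """Follow causal links one step forward from an event."""
    return [effect for cause, effect, sign in narrative.causal_links
            if cause == event]

print(effects_of(chicago, "lantern starts fire"))  # ['city burns']
```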

5.1.2. Temporal structure

Narratives often, but not always, include temporal information about the order, duration, and hierarchical structure of events. For example, causal chains are necessarily ordered sequences of events because causes occur before effects. At the opposite extreme, temporal order often is lacking entirely from causal webs that depict causality among event categories rather than individual events. That said, people can track the order, delay, and part–whole structure of causally related events and use these different types of temporal information to disambiguate causal structures (Lagnado & Sloman, 2006; Rottman & Keil, 2012). For example, sequences of events often are segmented into higher-order episodes, each containing lower-level sub-events (Zacks & Tversky, 2001). This part–whole organization affects causal representations, with higher-level events believed to be both causes and effects of other higher-level events, and low-level events from one high-level cluster believed to affect only low-level events from that same cluster (Johnson & Keil, 2014).

5.1.3. Analogical structure

Both the power and peril of narrative thinking compound when people perform inference not only by causal thinking within a single domain, but across different domains through analogies. Structure-mapping theory (Gentner, 1983) is a model of how people select analogies and use them to make inferences, emphasizing that matches in the relationships within a domain make it a good or bad analogy for another domain. Thus, analogy is especially powerful when combined with other relational systems such as causal systems (Holyoak, Lee, & Lu, 2010) (Fig. 4A).

Figure 4. Analogical, valence, and causal structure. In panel A (analogical structure), one causal chain (R1, S1, T1) is analogized to another (R2, S2, T2); typical inference patterns would be to reason from a known sequence (R1, S1, T1) of specific events or schematized depiction of a general causal mechanism to infer the causal–temporal order of a new sequence (R2, S2, T2) or to infer missing events (T2) given that all other events are observed. In panel B (valence structure), positive event types (U, V, W) are seen as bidirectionally and positively related to each other, negative event types (X, Y, Z) are seen as bidirectionally and positively related to each other, whereas negative and positive events are seen as bidirectionally and negatively related to each other (Leiser & Aroch, 2009). (Dashed lines represent analogical correspondences; white and black circles represent “good” and “bad” events or event types, respectively.)

Analogies are important to narratives for at least two reasons. First, they allow us to use familiar domains to make sense – if imperfectly – of less familiar domains. For example, people have highly impoverished mental models of central banking, but more detailed mental models of cars. In a car, stepping on the gas pedal causes more gasoline to enter the engine, increasing the car's speed. People often use this analogy for discussing and understanding central banking; the central bank prints more money, causing more money to enter the economy, causing the economy to go faster. Second, abstract and gist-like representations apply to a broader set of future situations, particularly when making decisions about the distant future (Schacter & Addis, Reference Schacter and Addis2007; Trope & Liberman, Reference Trope and Liberman2003). Thus, forming analogical links among specific past events and, ultimately, between specific events and more abstract event categories is crucial for generating generalizable knowledge. This is how our representational system incorporates some aspects of narratives’ hierarchical structure.

5.1.4. Valence structure

Stories involve good guys and bad guys, goals being achieved or objectives thwarted. Information about norms (right or wrong) and valence (good or bad) is processed rapidly and automatically (Moors & De Houwer, Reference Moors and De Houwer2010) and influences thinking in many domains including causation (Knobe, Reference Knobe2010). For example, people are likelier to identify norm-violations as causes and non-norm-violations as non-causes (Kominsky, Phillips, Gerstenberg, Lagnado, & Knobe, Reference Kominsky, Phillips, Gerstenberg, Lagnado and Knobe2015), reason differently about the potency of good versus bad causes (Sussman & Oppenheimer, Reference Sussman and Oppenheimer2020), and tend to think that good events cause other good events (LeBoeuf & Norton, Reference LeBoeuf and Norton2012). Macroeconomic understanding is dominated by a "good begets good" heuristic (Leiser & Aroch, Reference Leiser and Aroch2009), wherein "bad" events (inflation, unemployment, stagnation, inequality) are thought to be inter-related and negatively related to "good" events (price stability, full employment, economic growth, equality) (Fig. 4B). In reality, the opposite often holds: policies that reduce inflation, for example, tend to raise unemployment, at least in the short run.

5.1.5. Coherence principles

Given narratives' rich representational capacities, coherence principles are needed to constrain their vast possibility space (determining which narratives to entertain) and to draw inferences about missing information (filling in details within a narrative). For example, Thagard (Reference Thagard1989) develops a theory of how explanations and evidence cohere, Gentner (Reference Gentner1983) presents a theory of how analogical correspondences are drawn, and Rottman and Hastie (Reference Rottman and Hastie2014) summarize evidence about how people draw inferences on causal networks. In addition to principles governing each type of lower-level representation individually, we suggest three principles as starting points for how different types of lower-level representations cohere: (i) Causal, temporal, and valence structures are preserved across analogies; (ii) Causes occur before their effects; (iii) Causal relationships between agents and events with the same valence status ("good" or "bad") are positive, whereas they are negative for links between "good" and "bad" events.
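To make these principles concrete, the sketch below encodes a toy narrative as a set of causal links annotated with temporal order and valence and checks each principle over events (agents are omitted for brevity). The representation, the function names, and the example events are purely illustrative assumptions, not part of CNT's formal machinery.

```python
# Toy sketch of a narrative as causal links annotated with time and valence,
# with checks corresponding to the three coherence principles above.
# All names and example events are illustrative, not part of CNT itself.
from dataclasses import dataclass

@dataclass(frozen=True)
class Event:
    name: str
    time: int          # temporal position in the narrative
    valence: str       # "good" or "bad"

@dataclass(frozen=True)
class CausalLink:
    cause: Event
    effect: Event
    sign: int          # +1 = promotes, -1 = prevents

def causes_precede_effects(links):
    """Principle (ii): causes occur before their effects."""
    return all(link.cause.time < link.effect.time for link in links)

def valence_consistent(links):
    """Principle (iii): same-valence links are positive; mixed-valence links are negative."""
    return all((link.sign == +1) == (link.cause.valence == link.effect.valence)
               for link in links)

def structure_preserved(links, analog_links, mapping):
    """Principle (i): causal structure (including signs) is preserved across an analogy."""
    mapped = {(mapping[l.cause.name], mapping[l.effect.name], l.sign) for l in links}
    actual = {(l.cause.name, l.effect.name, l.sign) for l in analog_links}
    return mapped <= actual

# A "good begets good" narrative: growth (good) promotes employment (good).
growth = Event("growth", time=0, valence="good")
jobs = Event("employment", time=1, valence="good")
narrative = [CausalLink(growth, jobs, sign=+1)]

# The same structure mapped analogically onto another economic domain.
wages = Event("wage growth", time=0, valence="good")
spending = Event("consumer spending", time=1, valence="good")
analog = [CausalLink(wages, spending, sign=+1)]
mapping = {"growth": "wage growth", "employment": "consumer spending"}

print(causes_precede_effects(narrative),                 # True
      valence_consistent(narrative),                     # True
      structure_preserved(narrative, analog, mapping))   # True
```

A narrative violating any of these checks (e.g., a "good" cause promoting a "bad" effect) is not thereby impossible, but on this view it starts at a plausibility disadvantage.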

5.1.6. What narratives are not

Narratives are a flexible representational format, but they are not infinitely flexible. We (tentatively) suggest the following test for whether a representation is a narrative: It must (i) represent causal, temporal, analogical, or valence information, and (ii) for any of these it does not represent, it must be possible to incorporate such information.

This distinguishes narratives from several other kinds of representations, including probabilities, spatial maps, associative networks, images, categories, and logical relations: Such formats do not necessarily include any of the four structured information types. (However, elements of narratives may be linked to such other representations in memory. Indeed, this may be required for narrative simulation to generate iconic representations of imagined futures.)

Figure 5 provides additional examples of possible narratives that might underlie decision-making in the context of a global pandemic, as in one of our running examples.

Figure 5. Possible narratives around a global pandemic. Panel A depicts one possible individual's narrative around a global pandemic, which aligns largely with the mainstream view. Infections and deaths (which are bad) are negatively related to interventions such as social distancing, masks, and vaccines, which are themselves results of government action. The government chose these actions because they would have a preventative effect on deaths. The causal link between each intervention and infection is supported by an analogy to other diseases, such as influenza (i.e., staying home from work, covering one's mouth when coughing, and vaccines all help to prevent flu infections). Panel B depicts one possible conspiratorial narrative around a pandemic. In this narrative, global elites control the government and are acting to increase their profits, which can be accomplished through several channels including economic distress, population reduction, and mind control. These causal links to profitability are supported by their own analogies (e.g., the global financial crisis and subliminal messaging being ways that bankers, corporations, and other elites are thought to increase their profits), as is the idea that the government is captured by unelected elites such as lobbyists for big business. In this narrative, social distancing has little effect on the spread of disease but a strong link to intentional economic distress; masks and vaccines increase infection and death rather than preventing them. For this reason, interventions that are seen as good in the mainstream narrative (because they have a preventive relationship with death) are seen as bad in the conspiracy narrative. These hypothetical narratives will be supported by different social and informational environments, yield conflicting forecasts about the future, and motivate distinctive actions.

5.2. Narratives characterize real-world decisions under radical uncertainty

Lab experiments are ill-suited for testing the prevalence of narrative thinking in everyday decision-making. Thus, we bring linguistic and qualitative data to bear.

5.2.1. Linguistic data about macroeconomic narratives

Shiller (Reference Shiller2019) uses time-series data from Google Ngrams to track language linked to particular shared narratives. He emphasizes that "viral" narratives, even if false, can affect macroeconomic events. We would add that narratives held to be true within one's own social network (rather than narratives circulating elsewhere) may be, for many people, the only available way to make sense of complex macroeconomic causation.

The shared narrative that labor-saving machinery creates unemployment is perennial. Shiller traces it from Aristotle, through worker riots during the Industrial Revolution, to economic depressions, to present-day concerns about artificial intelligence displacing humans. (Most economists disagree: Since machines increase productivity, wages rise and labor is redeployed to higher-valued uses.) Shiller traces the frequency of "labor-saving machinery" in books from 1800 to 2008: the term peaks during the 1870s depression and again in the lead-up to the Great Depression, while the newer term "technological unemployment" reaches epidemic proportions throughout the 1930s. Plausibly, the fear produced by such narratives exacerbated the underlying problems causing the Depression. Shiller notes that this dovetails with the then-popular folk theory – never accepted by economists – that machines would produce so many products that we could never consume them all, generating unemployment. Accordingly, the term "underconsumption" skyrockets during the Depression. A simplified version of this narrative can be seen in Figure 6A.

Figure 6. Economic narratives from linguistic and interview data. Panels A–C depict simplified versions of three narratives drawn from Shiller's (Reference Shiller2019) linguistic studies of viral economic narratives and Tuckett's (Reference Tuckett2011) interview studies of money managers. In panel A, a generic causal mechanism of machinery generally leading to increased efficiency (Ma1 and Ef1) is analogized to machinery in one's particular industry leading to increased efficiency in that industry (Ma2 and Ef2). Efficiency is thought to cause unemployment (UN) directly by displacing human workers and indirectly through underconsumption (UC). Because unemployment is seen as bad, all other variables in the causal chain are inferred to be bad too. In panel B, greedy businessmen (GB) are inspired by the opportunity of World War I (W1) to increase prices (HP), which leads to inflation (Inf). A boycott (By) is thought to reduce demand (RD), which would in turn push prices back down (negative effect on HP). Since inflation and the greedy businessmen who cause it are bad, the countervailing boycott chain is perceived as good. In panel C, negative news about a company (NN) is thought to affect its stock price at an initial time (P1), but only the company's fundamentals (F) affect its stock price later (P2). Other investors (Other) are less observant and only act based on the negative news, but our investment firm (Us) is more observant and sees the fundamentals, creating a profit opportunity.

Another of Shiller's examples concerns boycotts following World War I (Fig. 6B). The US dollar experienced 100% inflation after the war, contributing to an anti-business shared narrative, with mentions of "profiteer" in newspapers peaking at the start of the subsequent depression. According to this narrative, businesses were raising prices to achieve "excess profits" during the war, explaining the inflation. (Although economists now reject this view of inflation, similar narratives were proposed by some American politicians during the inflation episode in 2022.) Protests ensued, resulting in boycotts on the theory that if consumers did not buy products beyond minimum necessities, the drop in demand would force prices back to "normalcy." Although prices never declined to pre-war levels, deflation did indeed occur during the 1920–1921 depression.

5.2.2. Interview data about microeconomic decision-making

In the same spirit, but using a very different method, Chong and Tuckett (Reference Chong and Tuckett2015) and Tuckett (Reference Tuckett2011, Reference Tuckett2012) interviewed 52 highly experienced money managers in 2007 and 2011, gathering accounts of decisions to buy or sell securities, using a non-schedule standardized interview approach (Richardson, Dohrenwend, & Klein, Reference Richardson, Dohrenwend and Klein1965; Tuckett, Boulton, & Olson, Reference Tuckett, Boulton and Olson1985).

These accounts consistently invoked narratives. For example, consider one of the respondents selected at random for detailed presentation in Tuckett (Reference Tuckett2011, p. 33). When interviewed, he directed a team of 20 and was personally responsible for allocating stocks into a $35 billion portfolio. His task was to try “to pierce through the smoke and emotion” surrounding market moves and “be contrary to the consensus notion of ‘let's wait for the smoke to clear’.” “I mean the problem with that philosophy” as he put it, is that “if you wait for everything to be clear you will miss most of the money to be made.” “Once everything's clear…it's easy, right?”

He described one stock his firm had bought the previous year (Fig. 6C), which had been experiencing issues with a main supplier, leading to very negative news. His team, however, "kicked the tyres and did a lot of work" to conclude that the situation was not so dire as widely perceived, taking a large stake that rose over 50% within weeks. After they exited their position, more negative news emerged, involving a very large shareholder selling the company's stock and causing apprehension in the market and a price drop; the team again concluded that the company was undervalued and "re-established the position." "It was somewhat controversial…. It was not easy going against consensus sentiment but that's…what distinguishes us." It worked out.

Respondents’ narratives coalesced around common themes (Tuckett, Reference Tuckett2011, p. 89). For instance, among the 165 “Buy” narratives from the 2007 interviews, themes included spotting some attractive features through the respondent's exceptional ability or effort (45% prevalence as rated by independent content-coders); the company/sector offering exceptional opportunities (39%); limits to downward surprise (27%); management as proven and reliable (26%); successful management of the respondent's own emotions (11%); and (temporary) monopoly or market power (10%).

In addition to providing examples of narrative thinking, these interviews illustrate five features of the radically uncertain decision context money managers face, which suggest why narratives are useful (Tuckett, Reference Tuckett2011, pp. 50–54). First, they acted in situations where they could only speculate about the outcome of their actions and doing nothing was an action. Second, the financial assets they traded had unknowable future values. Given their mandates to try to outperform each other, they sought opportunities that they thought wrongly priced. To establish mispricing, they imagined how various stories might influence future income streams of a firm and how others would react in those situations. The issue, in the several hundred decisions analyzed, was always the same: How to know and create confidence in their particular story about future prices. Third, the available data to help them form price expectations were effectively infinite – a massive range of public and privately available information, some of questionable provenance, from countless sources in numerous languages. Fourth, they made decisions in a social context. Most respondents talked with or explored their views of the future with others or had to justify their decisions (and reputations) to others. Fifth, decisions were never final and time horizons were always part of the context. For how long would prices go lower before rising? Had prices reached their peak? The money managers’ decisions had many of the features depicted in Section 1, which they managed using narratives.

Interviews do not provide conclusive evidence about cognitive processing, but they do underscore two key points: the respondents' decision context is characterized by radical uncertainty, and respondents frequently invoke narratives when reflecting on their choices. Narratives therefore plausibly played some role in many of these decisions, but finer-grained experimental evidence is needed to understand that role more precisely.

6. Explanation

Good decisions about the future often depend on how we understand the present. Explanation is how we create these understandings – how narratives are constructed and evaluated.

We have a fundamental motivation to make sense of things (Chater & Loewenstein, Reference Chater and Loewenstein2016) – a drive that pervades mental life. The world is not perceived directly, but must be interpreted (Fodor & Pylyshyn, Reference Fodor and Pylyshyn1981; Gregory, Reference Gregory1970; Von Helmholtz, Reference Von Helmholtz2005/1867). Light hits our two-dimensional retinas, and perception allows us to infer from this a three-dimensional world – to make sense of data under uncertainty. Much cognitive processing has a similar logic: Assembling relevant facts into useful models that can guide predictions and choices. Categories license predictions about individuals, using an object's observed features (evidence) to determine the appropriate category (hypothesis) (Murphy & Medin, Reference Murphy and Medin1985). Causal cognition explains events (evidence) in terms of hidden causes (hypotheses) (Lombrozo, Reference Lombrozo2016). Our memories use scattered strands of remembrance (evidence) to piece together a coherent story about what happened (hypothesis) (Bartlett, Reference Bartlett1932; Johnson & Sherman, Reference Johnson, Sherman, Higgins and Sorrentino1990). Theory-of-mind uses others’ observed behavior (evidence) to infer mental states (hypotheses) (Gergely & Csibra, Reference Gergely and Csibra2003).

Bayesian approaches to explanatory reasoning are popular in both cognitive science and philosophy of science. These approaches conceptualize the phenomenon to be explained (the explanandum) as evidence and potential explanations as hypotheses to be evaluated using Bayesian inference. Thus, rational explanation requires the reasoner to evaluate the prior probability of each explanation [P(H)] and the fit of the evidence with the explanation [P(E|H)].
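Written out explicitly (this is simply the standard statement of Bayes' rule, included to make the requirement concrete), the posterior credibility of a candidate explanation H given the explanandum E is

\[
P(H \mid E) \;=\; \frac{P(E \mid H)\,P(H)}{\sum_{H'} P(E \mid H')\,P(H')},
\]

which presupposes that the reasoner can enumerate the rival hypotheses H′ and assign each of them a prior and a likelihood.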

In two important respects, this is a successful theory. First, it uses the same mental machinery to understand many explanatory inferences. There is a common logical structure to explanatory inferences such as causal reasoning, theory-of-mind, and categorization; rather than invoking entirely separate mental mechanisms, it assumes common computational mechanisms. This is consistent with the empirical evidence and is theoretically parsimonious. Second, it accounts for why people are reasonably adept at explaining events. Enterprises such as science, technology, and commerce – not to mention everyday activities such as social interaction – depend on explanatory processes; to the extent that we are adept at these activities, we are necessarily adept at explaining things.

Yet this approach cannot be quite right. For the reasons above (Section 2.2.1), such calculations are often impossible in principle, let alone in practice for flesh-and-blood humans, given aleatory and epistemic limits on probabilities. Bayesian accounts can try to avoid this problem by claiming agnosticism about the actual processes used to reach the outputs of Bayesian theories (retreating to the computational level) or by invoking approximation mechanisms that do not require the full probabilistic machinery. Despite our sympathy with both moves, the theoretical problems are fully resolved only if these strategies avoid invoking probabilities altogether under radical uncertainty.

Even eschewing probabilities altogether – as in some Bayesian sampling approaches (Sanborn & Chater, Reference Sanborn and Chater2016) – cannot avoid a further problem: A dizzying array of empirical anomalies relative to Bayesian predictions. For instance, people often favor explanations that are simpler than merited by the evidence (Lombrozo, Reference Lombrozo2007), while in other contexts favoring overly complex explanations compared to rational models (Johnson, Jin, & Keil, Reference Johnson, Jin, Keil, Bello, Guarini, McShane and Scassellati2014). Although people reasonably prefer explanations that explain more observed evidence (Read & Marcus-Newhall, Reference Read and Marcus-Newhall1993), they also prefer explanations that make no unverified predictions (Khemlani, Sussman, & Oppenheimer, Reference Khemlani, Sussman and Oppenheimer2011), contradicting Bayes’ theorem. Yet these anomalies are not random: They point to systematic principles we use to evaluate explanations in the absence of precise probabilistic reasoning.

6.1. We use a suite of explanatory heuristics to evaluate narratives

Successful explanation under radical uncertainty requires strategies that circumvent probabilistic reasoning and instead exploit other forms of structure available in a given situation. People use a variety of heuristics and strategies satisfying this criterion (e.g., Horne, Muradoglu, & Cimpian, Reference Horne, Muradoglu and Cimpian2019; Lombrozo, Reference Lombrozo2016; Zemla, Sloman, Bechlivanidis, & Lagnado, Reference Zemla, Sloman, Bechlivanidis and Lagnado2017). These strategies are useful not because they are infallible or optimal (notions that may not even be meaningful under radical uncertainty), but because they avoid explicit probabilistic reasoning while exploiting regularities that make them broadly truth-tracking.

Above, we highlighted how temporal, analogical, and valence information are incorporated into narrative representations. Heuristics exploiting such information are valuable because they capitalize on regularities we naturally attend to, such as event structure and causal mechanisms, while powerfully constraining which explanations are deemed plausible (Einhorn & Hogarth, Reference Einhorn and Hogarth1986). For example, people use temporal order (Bramley, Gerstenberg, Mayrhofer, & Lagnado, Reference Bramley, Gerstenberg, Mayrhofer and Lagnado2018), the delay between cause and effect (Buehner & May, Reference Buehner and May2002), and the part–whole structure of events (Johnson & Keil, Reference Johnson and Keil2014) to disambiguate causal directionality, determine which events are causally relevant, and assign causal responsibility. Such strategies are adaptive because they exploit regularities in the environment that are less susceptible to the problems of radical uncertainty, even if they are not infallible.

Likewise, analogies allow us to extend hard-won knowledge of one domain into another. Much of our causal knowledge appears to be stored in the format of stereotyped causal schemas (Johnson & Ahn, Reference Johnson and Ahn2015, Reference Johnson, Ahn and Waldmann2017). We can analogically reason from a known story (e.g., my cousin's marriage; the Spanish Flu) to the current situation (e.g., my marriage; COVID); or at a more abstract level, we reason from known causal mechanisms when evaluating an unfamiliar domain, as when we compare the economy to a stalled car or a bureaucratized corporation to an arthritic giant. The notion that "good begets good" (Leiser & Aroch, Reference Leiser and Aroch2009) can be thought of as a highly generalized causal schema, with narratives failing to match this schema (e.g., actions that decrease inflation are likely to increase unemployment) fighting an uphill battle for plausibility. In other cases, we have domain-specific expectations (Johnston, Sheskin, Johnson, & Keil, Reference Johnston, Sheskin, Johnson and Keil2018), such as the belief that physical causation follows more linear causal pathways compared to the more web-like structure of social causation (Strickland, Silver, & Keil, Reference Strickland, Silver and Keil2016).

Other heuristics derive from causal structure itself, accounting for the anomalies above. People often substitute the more straightforward question of an explanation's simplicity for the vague and challenging question of its prior probability (Lombrozo, Reference Lombrozo2007). Yet, because simpler explanations often explain less of the data, people also use an opponent complexity heuristic to estimate an explanation's fit to the data or Bayesian likelihood (Johnson, Valenti, & Keil, Reference Johnson, Valenti and Keil2019). Likewise, people prioritize explanations that account for a wider range of observations (Read & Marcus-Newhall, Reference Read and Marcus-Newhall1993) and attempt to make inferences about evidence that has not actually been observed (Johnson, Kim, & Keil, Reference Johnson, Kim, Keil, Papafragou, Grodner, Mirman and Trueswell2016a; Johnson, Rajeev-Kumar, & Keil, Reference Johnson, Rajeev-Kumar and Keil2016b; Khemlani et al., Reference Khemlani, Sussman and Oppenheimer2011).

Explanatory heuristics are used widely across cognition – for problems such as categorization, causal attribution, theory-of-mind, stereotyping, and even some visual tasks (Johnson, Reference Johnson2016; Johnson et al., Reference Johnson, Jin, Keil, Bello, Guarini, McShane and Scassellati2014; Johnson, Merchant, & Keil, Reference Johnson, Merchant, Keil, Noelle, Dale, Warlaumont, Yoshimi, Matlock, Jennings and Maglio2015a; Johnson, Rajeev-Kumar, & Keil, Reference Johnson, Rajeev-Kumar, Keil, Noelle, Dale, Warlaumont, Yoshimi, Matlock, Jennings and Maglio2015b; Johnson et al., Reference Johnson, Kim, Keil, Papafragou, Grodner, Mirman and Trueswell2016a, Reference Johnson, Rajeev-Kumar and Keil2016b; Sussman, Khemlani, & Oppenheimer, Reference Sussman, Khemlani and Oppenheimer2014) and emerge early in development (Bonawitz & Lombrozo, Reference Bonawitz and Lombrozo2012; Cimpian & Steinberg, Reference Cimpian and Steinberg2014; Johnston, Johnson, Koven, & Keil, Reference Johnston, Johnson, Koven and Keil2017). They are also linked to action: People favor explanations that license task-relevant and high-utility actions, and which highlight stable causal relationships likely to apply across contexts (Johnson et al., Reference Johnson, Rajeev-Kumar, Keil, Noelle, Dale, Warlaumont, Yoshimi, Matlock, Jennings and Maglio2015b; Vasilyeva, Wilkenfeld, & Lombrozo, Reference Vasilyeva, Wilkenfeld and Lombrozo2017, Reference Vasilyeva, Blanchard and Lombrozo2018). Explanatory heuristics, though imperfect, aid people in circumventing specification limits and information limits (Section 2.2.1).

6.2. Explanatory fit is experienced affectively

Although these heuristics may function to push us toward more useful or probable portions of the hypothesis space, their phenomenology is often more affective than cognitive. Emotions rapidly summarize information not readily available to consciousness (Rolls, Reference Rolls2014; Rozin & Fallon, Reference Rozin and Fallon1987; Todorov, Reference Todorov2008). Because emotions have an intelligence of their own (Nussbaum, Reference Nussbaum2001), we make inferences from them (Cushman, Reference Cushman2020; Schwarz, Reference Schwarz, Sorrentino and Higgins1990) and often rely on “gut” feelings to assess situations (Klein, Reference Klein1998). This can be a broadly rational strategy despite leading to some mistakes.

In the case of explanations, we feel “satisfied” when we achieve a sense of understanding (Gopnik, Reference Gopnik1998; Lipton, Reference Lipton2004). Despite the adaptive basis of many of these heuristics, we do not feel like we are performing rational computations when using them. Instead, good explanations often are accompanied by positive emotions or even aesthetic beauty (Johnson & Steinerberger, Reference Johnson and Steinerberger2019), as when scientists and mathematicians claim to prioritize beauty in constructing their theories. Conversely, explanations that conflict with prior beliefs produce cognitive dissonance and may therefore be rejected (Festinger, Reference Festinger1962).

Although heuristics often lead to error (Kahneman, Reference Kahneman2002), they can be adaptive in solving problems given cognitive and environmental limits (Gigerenzer & Goldstein, Reference Gigerenzer and Goldstein1996; Simon, Reference Simon1955). Good thing, too: Under radical uncertainty, they often are all we have.

7. Simulation

Having selected a narrative through explanation, we project the narrative into the future through simulation. This is why sense-making and imagination are linked: We make sense of the past to imagine the future.

7.1. Imagined futures are simulated by projecting a narrative forward

The brain mechanisms involved in prospective thought about the future overlap with those used for episodic memory about the past (Schacter, Addis, & Buckner, Reference Schacter, Addis and Buckner2008) and may even be subsystems of a broader mental time travel faculty (Suddendorf & Corballis, Reference Suddendorf and Corballis1997, Reference Suddendorf and Corballis2007). This is consistent with our view that the same representations – narratives – underlie explanations of the past and simulations of the future (Aronowitz & Lombrozo, Reference Aronowitz and Lombrozo2020). Moreover, simulation can rely on step-by-step reasoning using causal mechanisms. For example, when shown a set of interlocked gears and asked which direction one gear will turn given that another gear is turned, people solve this problem by mentally turning one gear at a time (Hegarty, Reference Hegarty2004). Likewise, thoughts about how reality might be different – counterfactuals – operate over causal structures (Rips, Reference Rips2010; Sloman, Reference Sloman2005) and are central to imagination (Markman, Gavanski, Sherman, & McMullen, Reference Markman, Gavanski, Sherman and McMullen1993).
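To illustrate what step-by-step simulation looks like computationally (a toy example of our own, not a model drawn from the cited gear studies), the direction of the last gear in a chain can be found either by propagating the rotation one gear at a time, mirroring the mental strategy Hegarty describes, or by a parity shortcut:

```python
def propagate_gears(n_gears, first_direction="clockwise"):
    """Step-by-step simulation: turn each gear in sequence, reversing direction
    at every meshed pair (mirrors 'mentally turning one gear at a time')."""
    direction = first_direction
    for _ in range(n_gears - 1):
        direction = "counterclockwise" if direction == "clockwise" else "clockwise"
    return direction

def parity_shortcut(n_gears, first_direction="clockwise"):
    """Rule-based shortcut: odd-numbered gears match the first gear, even-numbered gears oppose it."""
    if n_gears % 2 == 1:
        return first_direction
    return "counterclockwise" if first_direction == "clockwise" else "clockwise"

assert propagate_gears(5) == parity_shortcut(5) == "clockwise"
assert propagate_gears(6) == parity_shortcut(6) == "counterclockwise"
```

The two routes give the same answer, but only the first resembles the effortful, serial process that people report; analogous serial projection, we suggest, is what narrative simulation does with causal structure.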

However, less work has looked at how particular features of narratives manifest in simulations or how these simulations manifest in choices. Our recent research program has studied the role of narrative-based simulation in financial decision-making.

Our strategy relies on the idea that cognitive processes have specific signatures associated with their limitations or biases. To use an example from a different area (Carey, Reference Carey2009b), a signature of analog magnitude representations is that the ability to discriminate two magnitudes depends on their ratio rather than on their absolute difference (discriminating 8 from 16 is about as easy as discriminating 16 from 32). Since discriminations of large (but not small) numbers have this property, this implies that representations of large numbers are analog.

Analogously, we look for signature limitations associated with using narrative representations for predictions and decisions under uncertainty. Since narratives can incorporate causal, valence, temporal, and analogical structure (Section 5.1), we designed experiments to examine whether introducing these structural features into forecasting problems produces signature biases relative to standard financial theory. Although any one study does not necessarily implicate narratives, their combination triangulates on the conclusion that people project narratives to simulate the future.

7.1.1. Causal structure: internal and external attributions

Narratives contain causal structure, which provides explanations. Therefore, if people use narratives to forecast the future, causal explanations should affect their forecasts. To test this, participants read about companies whose stock prices had recently changed (Johnson, Matiashvili, & Tuckett, Reference Johnson, Matiashvili and Tuckett2019a). According to financial theory, the reason for the price change is irrelevant to predicting future prices. For example, when a CEO retires, the firm's stock price often declines. But this decline occurs when the CEO's retirement becomes publicly known, after which it does not reliably produce further declines. Nonetheless, if people use stories to predict the future, it should be irresistible to ask why the price changed and to use these inferred causes to predict further price changes.

In one study, we compared three explanation types for a price change – no explanation, an internal attribution (relating to skill or quality, e.g., an ill-considered management change), or an external attribution (relating to factors outside the company's control, e.g., a market crash). Participants predicted more extreme future trends when an explanation was given rather than not given, and more extreme trends when the explanation was internal rather than external. This suggests that people look for causal stories to account for events and predict future ones, the most compelling stories invoking internal or inherent features (Cimpian & Steinberg, Reference Cimpian and Steinberg2014).

Financial markets are volatile: Prices often shift with little apparent cause (Shiller, Reference Shiller1981). Might people nonetheless supply causes of price changes by default, affecting downstream predictions? To find out, we compared the no-explanation and internal-explanation conditions to a noise condition, in which the price change was explained as random. Several conclusions followed. First, participants still projected more positive trends after a positive (vs. negative) price change, even if told that the change was random: People are "fooled by randomness" (Taleb, Reference Taleb2001; Tversky & Kahneman, Reference Tversky and Kahneman1971), even when randomness is noted explicitly. Second, the no-explanation condition was always more extreme than the noise condition, but less extreme than the internal-explanation condition: People consider unexplained price changes to contain some signal but not as much as explained changes. Third, this effect was asymmetric. For positive changes, the no-explanation condition was closer to the internal-explanation condition, whereas for negative changes, it was closer to the noise condition. Thus, unexplained price changes – accounting for most volatility – are treated more like signal when positive and like noise when negative. This could lead to bubble dynamics, wherein positive price trends build on themselves but the corresponding force in the negative direction is weaker.

7.1.2. Valence structure: approach and avoidance

Events in narratives often have positive or negative valences, motivating approach or avoidance behavior. To test the effect of valence structure on forecasts, participants read about fictitious companies that had experienced either good news (an increase in earnings) or bad news (a decrease in new oil discoveries), announced prior to the most recent stock price quotation (Johnson & Tuckett, Reference Johnson and Tuckett2022). According to financial theory, the news has no further effect on the stock price once the information is publicly revealed and priced in. However, if people use narratives to organize their beliefs, they should continue to rely on this valence information to predict future value.

They did. Without news, participants thought that a stock would increase in value by +4.3% in the following two weeks. These predictions were much more extreme when news was available (+10.1% and −5.9% for good vs. bad news). This implies that people believe that markets profoundly underreact to news, leading to price momentum (prices trending in a particular direction). Could participants have intuited the finding that financial markets do experience modest price momentum in the short- to medium-term after news announcements, which then reverts back toward the baseline trend (Shefrin, Reference Shefrin2002)? If so, they should predict smaller gaps between positive and negative news at longer intervals. In fact, the predicted gap is larger at a one-year interval (+16.1% vs. −5.9%). Price expectations seem to follow valenced stories about the companies’ underlying causal propensities – with companies categorized as “good” versus “bad” – rather than economic intuition.

7.1.3. Temporal structure: asymmetries between past and future

Narratives also contain temporal structure, which implies a boundary condition on the effect of valence: When information is related to the future (vs. the past), we should see more of an effect of its valence on predictions. Indeed, consistent with CNT but not standard financial theory, news about the future (e.g., a revision to next quarter's projected earnings; +17.5% predicted one-year change) stimulated more extreme predictions compared to news about the past (e.g., a revision to last quarter's actual earnings; +14.7%). Thus, not only the valence but also the temporal orientation of news affects forecasts, in line with narrative representations (Johnson & Tuckett, Reference Johnson and Tuckett2022).

Moreover, the valence- and time-based predictions described in Sections 7.1.2 and 7.1.3 manifested in emotions – more positive forecasts led to more approach emotions, more negative forecasts to more avoidance emotions – which in turn motivated investment decisions. This confirms a basic principle of CNT: People use narratives to imagine the future, react affectively to that future, and choose in line with their affect (Section 8.1).

7.1.4. Analogical structure: pattern detection and extrapolation

Analogies allow us to impose structure on problems by using our knowledge of one thing to understand another. Although some analogies compare radically different things (e.g., an atom is like the solar system), most analogies are much more prosaic: This dishwasher is like that dishwasher, this dog behaves like other dogs, this company's future resembles that other company's.

Our minds seem to hold multiple, conflicting analogies which can impose structure on time-series price data. On the one hand, it seems plausible that a series of prices trending in one direction should continue that trend – momentum. Many familiar variables have this property – successful people tend to become even more successful; objects in motion tend to stay in motion (as in the analogy of “momentum” itself). At the same time, it seems equally plausible that if prices have trended one way, it's only a matter of time before the trend reverses – mean reversion. Mean reversion too is common among familiar variables – extreme weather regresses toward the mean; objects thrown into the air come down eventually. The extent of momentum and mean reversion in real prices has been a matter of great debate in behavioral finance. Given that people can harness sophisticated intuitions for pattern detection and extrapolation (Bott & Heit, Reference Bott and Heit2004; DeLosh, Busemeyer, & McDaniel, Reference DeLosh, Busemeyer and McDaniel1997), how might we resolve these conflicting analogical intuitions when predicting future prices?
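The two frames imply different forecasting rules. As a deliberately simplified illustration (not the model tested in our studies, and with arbitrary numbers), the sketch below contrasts a momentum forecast that extrapolates the most recent change with a mean-reversion forecast that pulls the next price back toward the series average:

```python
def momentum_forecast(prices):
    """Momentum frame: extrapolate the most recent change ('trends continue')."""
    return prices[-1] + (prices[-1] - prices[-2])

def mean_reversion_forecast(prices, strength=0.5):
    """Mean-reversion frame: pull the next price part-way back toward the
    historical mean ('what goes up must come down')."""
    mean = sum(prices) / len(prices)
    return prices[-1] + strength * (mean - prices[-1])

trending = [100, 102, 104, 106, 108]        # a steady five-period upward trend
print(momentum_forecast(trending))           # 110  -> continue the trend
print(mean_reversion_forecast(trending))     # 106.0 -> pull back toward the mean (104)
```

The same price history thus yields opposite forecasts depending on which analogical frame is imposed; the question for CNT is what evidence triggers each frame.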

We anticipated that these analogies are triggered by different evidence, with these analogical frames imposed on time-series data to best explain it and project that explanation into the future. This signature bias would contrast with standard financial theory, which says that only the current price and its variance (risk) are relevant to future prices, as well as with existing behavioral models which assume that people are linear trend extrapolators (Barberis, Greenwood, Jin, & Shleifer, Reference Barberis, Greenwood, Jin and Shleifer2015).

In our studies, participants encountered price series in one of three patterns (Johnson, Matiashvili, & Tuckett, Reference Johnson, Matiashvili and Tuckett2019b). In the linear condition, the prices had been trending in either a positive or negative direction for five periods. Here, participants linearly extrapolate the trend, consistent with past work (Cutler, Poterba, & Summers, Reference Cutler, Poterba and Summers1991; De Bondt, Reference De Bondt1993; Hommes, Reference Hommes2011; Jegadeesh & Titman, Reference Jegadeesh and Titman1993). But in two other conditions, we find strikingly different results. In the reversion condition, prices had previously experienced a reversion during the past five periods – they had the same starting price, ending price, and mean as the linear condition, but experienced higher variance in the intervening periods. Participants had a greatly dampened tendency to project this trend linearly, with many believing that prices would reverse again. In the stable condition, prices had previously hovered around one price level before experiencing a sudden increase or decrease to the current price. This pattern too led many to predict reversion, toward the previously stable price.

These results suggest that people draw on analogies – such as momentum and mean reversion in other data series – to generate narratives to account for past price trends, projecting these narratives to forecast the future. We find similar pattern-based expectations for many other consumer and investment prices, and real stock prices in an incentive-compatible task. Beyond their theoretical implications, these results may be economically significant, as price expectations play key roles in asset pricing (Hommes, Sonnemans, Tuinstra, & van de Velden, Reference Hommes, Sonnemans, Tuinstra and van de Velden2005) and inflation (Carlson & Parkin, Reference Carlson and Parkin1975).

7.2. Imagined futures are simulated one at a time

We have focused so far on how narratives circumvent limitations on probabilistic thinking. Yet narrative thinking has limits of its own: Once we have adopted a particular narrative, we are often blind to alternative possibilities. We simulate narratives one at a time.

For example, different government policies lead to different predictions about market prices. If the central bank raises interest rates, this is likely to depress market prices. But central bankers often speak opaquely. If an investor assigns a 75% chance to the story that the banker plans to raise rates but a 25% chance to the story that the banker plans to leave them alone, does she account for both possibilities or rely on just one? A Bayesian would calculate the likely effect on markets if each story is true, taking a weighted average when estimating future prices. But an investor who “digitizes” and treats these stories as either certainly true or false would “round up” the 75% chance to 100% and “round down” the 25% chance to 0%. Only the dominant narrative resulting from explanatory reasoning would be retained for downstream computations such as forecasting.
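To make the contrast concrete (with made-up numbers, purely for illustration): suppose a rate rise would move the market by −4% and leaving rates alone would move it by +2%.

```python
# Illustrative numbers only: two policy "stories" and their assumed market effects.
p_raise, p_hold = 0.75, 0.25
effect_raise, effect_hold = -0.04, +0.02

# Bayesian forecaster: probability-weighted average over both stories.
bayesian_forecast = p_raise * effect_raise + p_hold * effect_hold       # about -0.025

# Digitizing forecaster: adopt the dominant story as if certain, ignore the rest.
digitized_forecast = effect_raise if p_raise > p_hold else effect_hold  # -0.04

print(bayesian_forecast, digitized_forecast)
```

The digitizer's forecast tracks only the dominant story, so changing the implications of the minority story should leave that forecast untouched; this is the signature the studies below look for.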

We found that investors are not Bayesians; instead, they digitize (Johnson & Hill, Reference Johnson, Hill, Gunzelmann, Howes, Tenbrink and Davelaar2017). In one study, participants were given information leading them to think that one government policy was likelier than another (in one variation, they were even given these probabilities directly). When conditions were identical except for the effect of the more-likely (75% chance) policy, predictions differed substantially. But when conditions were identical except for the effect of the less-likely (25% chance) policy, predictions did not differ at all. Investors take account of the implications of more-likely narratives but entirely ignore the implications of less-likely narratives: They adopt a single narrative as true, treating it as certain rather than probable.

Digitization is a broad feature of cognition; similar effects have been found in causal reasoning (Doherty, Chadwick, Caravan, Barr, & Mynatt, Reference Doherty, Chadwick, Caravan, Barr and Mynatt1996; Fernbach, Darlow, & Sloman, Reference Fernbach, Darlow and Sloman2010; Johnson, Merchant, & Keil, Reference Johnson, Merchant and Keil2020) and categorization (Lagnado & Shanks, Reference Lagnado and Shanks2003; Murphy & Ross, Reference Murphy and Ross1994). Yet it has boundary conditions: People do reason across multiple hypotheses in cases where one of the hypotheses invokes a moral violation (Johnson, Murphy, Rodrigues, & Keil, Reference Johnson, Murphy, Rodrigues, Keil, Goel, Seifert and Freksa2019) or danger (Zhu & Murphy, Reference Zhu and Murphy2013), and expertise in a domain may promote multiple-hypothesis use (Hayes & Chen, Reference Hayes and Chen2008).

Despite boundary conditions, simulations produce, by default, a single imagined future. Electrons may exist as probability clouds rather than in one definite state. But stories resist Heisenberg's principle – they take only one state at a time.

8. Affective evaluation

We have seen how narratives solve the mediation problem: They summarize available data (about the past) in a format used to predict what will happen (in the future) given a particular choice. But choices must somehow combine these visions of the future with our values and goals – the combination problem. Since emotions function to coordinate goals, plans, and actions (Damasio, Reference Damasio1994; Elliot, Reference Elliot2006; Fishbach & Dhar, Reference Fishbach, Dhar, Haugtvedt, Herr and Kardes2007; Ford, Reference Ford1992; Lewin, Reference Lewin1935; Oatley & Johnson-Laird, Reference Oatley and Johnson-Laird1987; Rolls, Reference Rolls2014), affective evaluation is tasked with solving the combination problem.

Let us first consider how existing theories of emotion address simpler choices. Suppose someone cuts into the supermarket queue and you must decide whether to assert your rightful place. According to the appraisal-tendency framework (Lerner & Keltner, Reference Lerner and Keltner2000), the decision-maker evaluates this situation along several dimensions – including certainty, pleasantness, controllability, and others’ responsibility – which jointly determine which emotion is felt (Smith & Ellsworth, Reference Smith and Ellsworth1985). In the queue-cutting case, one might perceive the event as unpleasant and the queue-cutter as responsible, but the situation as certain and under control – leading to anger; or instead, one might be less sure that the queue-cutter acted deliberately but perceive the situation as less controllable and certain because the queue-cutter appears big and mean – leading to fear. These emotions, in turn, motivate different actions (Frijda, Reference Frijda1988); anger is an approach emotion motivating aggression, whereas fear is an avoidance emotion motivating withdrawal. Finally, once these emotions are active, they shift attention to relevant dimensions for subsequent events; for example, fear leads one to perceive subsequent events as relatively uncontrollable compared to anger (Lerner & Keltner, Reference Lerner and Keltner2001).
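A minimal sketch of this appraisal logic (our own simplification, using only a subset of the framework's dimensions and a coarse mapping to emotions and action tendencies) shows how qualitative appraisals can jointly determine which emotion is felt and what it motivates:

```python
# Simplified illustration of appraisal -> emotion -> action tendency.
# The dimensions, branch order, and labels are a coarse sketch, not the full framework.
def appraise(unpleasant, other_responsible, in_control):
    if not unpleasant:
        return "contentment", "maintain course"
    if other_responsible and in_control:
        return "anger", "approach / confront"
    if not in_control:
        return "fear", "avoid / withdraw"
    return "sadness", "disengage"

# The queue-cutting example: the same event, appraised differently, yields different emotions.
print(appraise(unpleasant=True, other_responsible=True, in_control=True))    # ('anger', 'approach / confront')
print(appraise(unpleasant=True, other_responsible=False, in_control=False))  # ('fear', 'avoid / withdraw')
```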

CNT adds three modifications to account for challenges in complex, future-oriented decision-making. First, emotions are felt not only in response to actual events but also in response to imagined futures generated from narratives, motivating us to approach or avoid the associated choices. Second, appraisals of those futures can rely either on a default set of dimensions or on ad hoc evaluations relative to specific goals. Finally, since decisions must often be sustained over time, feelings of conviction in a narrative permit committed action in the face of uncertainty.

8.1. Affective evaluations of imagined futures motivate choices

We feel emotions in response not only to the present situation but also to situations we imagine (Loewenstein, Weber, Hsee, & Welch, Reference Loewenstein, Weber, Hsee and Welch2001; Richard, van der Pligt, & de Vries, Reference Richard, van der Pligt and de Vries1996). This is why we experience emotions when reading literature (Mar, Oatley, Dikic, & Mullin, Reference Mar, Oatley, Dikic and Mullin2010) or empathizing with others (Mitchell, Banaji, & Macrae, Reference Mitchell, Banaji and Macrae2005). CNT accords a central role to the emotional reactions we experience in response to imagined futures generated from narratives. These reactions to imagined narrative content, by motivating approach and avoidance behaviors, drive action in the real world.

Some emotions are inherently future-oriented: If one feels excited (or anxious) about a potential future, one acts to approach (or avoid) that future. But even past-oriented emotions, such as regret (Loomes & Sugden, Reference Loomes and Sugden1982), can influence our choices through simulations of how we would feel. For example, people anticipate more regret over not playing in postcode lotteries (where non-players can learn whether they would have won) than in traditional lotteries, and this anticipated regret motivates participation (Zeelenberg & Pieters, Reference Zeelenberg and Pieters2004). Many other anticipated emotions are known to guide choices, including guilt, sadness, anger (Baron, Reference Baron1992); pleasure (Wilson & Gilbert, Reference Wilson and Gilbert2005); dread and savoring (Dawson & Johnson, Reference Dawson and Johnson2021; Loewenstein, Reference Loewenstein1987); and envy (Loewenstein, Thompson, & Bazerman, Reference Loewenstein, Thompson and Bazerman1989). Emotions mediate between our predictions of the future and our decisions to approach or avoid that future, coloring narratives with affect (Beach, Reference Beach1998; Bruner, Reference Bruner1986).

8.2. Imagined futures can be appraised on default or ad hoc dimensions

The fuzzy evaluation problem (Section 2.2.2) results from the challenges of summarizing incommensurable attributes as a single utility, especially when our values may change over time. CNT proposes two computational simplification strategies that the affective system can use to address this problem – a default, bottom-up strategy and an ad hoc, top-down strategy.

This is analogous to the distinction between natural categories and ad hoc categories. Natural categories – such as BIRD, TABLE, and MOUNTAIN – roughly capture regularities in the external world (Rosch, Mervis, Gray, Johnson, & Boyes-Braem, Reference Rosch, Mervis, Gray, Johnson and Boyes-Braem1976); by default, objects are classified bottom-up, automatically and effortlessly, into natural categories (Greene & Fei-Fei, Reference Greene and Fei-Fei2014). In contrast, ad hoc categories (Barsalou, Reference Barsalou1983) – such as THINGS TO SELL IN A GARAGE SALE and WAYS TO ESCAPE THE MAFIA – are constructed on the fly to achieve specific goals, using effortful, top-down processes. Whereas bottom-up classification into natural categories proceeds by default and relies on predetermined dimensions, top-down classification into ad hoc categories requires effort and relies on spontaneously determined, goal-derived dimensions.

Analogously, in line with the appraisal tendency framework, one strategy for evaluating imagined futures relies on a default set of dimensions mirroring those for evaluating actually present situations (e.g., controllability, certainty, pleasantness), which determine which emotion is felt. That emotion, in turn, motivates action. Because these dimensions are thought to be evaluated automatically with minimal effort (Lazarus, Reference Lazarus1991), this default appraisal strategy is an appealing solution to the fuzzy evaluation problem. Specific emotions are felt in response to qualitative appraisals of predetermined dimensions. Because the dimensions are predetermined, the computational problem of identifying dimensions is avoided; because the appraisals are qualitative, the need to trade off these dimensions is circumvented. Moreover, although particular preferences may well change over time, our basic emotional architecture does not. Thus, the problems of incommensurable attributes and non-stationary values are averted.

However, these default dimensions often do not suffice when we have specific goals, leading to a second, ad hoc strategy based on the decision-maker's goal hierarchy. A decision-maker's attention will be deployed according to the active goal(s) at the time of decision-making (Van Osselaer & Janiszewski, Reference Van Osselaer and Janiszewski2012). Narratives can be used to generate imagined futures, on an ad hoc basis, that are evaluable with respect to these goals. The compatibility of those imagined futures with those goals produces approach and avoidance emotions that motivate action (Elliot, Reference Elliot2006; Oatley & Johnson-Laird, Reference Oatley and Johnson-Laird1987).

This ad hoc route depends on two claims. First, we assume that although we may have many goals, a small subset is typically active at any one time, because goals are triggered context-dependently (Panksepp, Reference Panksepp1998; Tuckett, Reference Tuckett, Kirman and Teschiin press) and organized hierarchically. For example, when basic physiological needs are not met, these are likely to supersede less immediately essential needs such as social belonging (Lavoie, Reference Lavoie1994; Maslow, Reference Maslow1943). This is not only helpful for survival, but also a crucial computational simplification: Goal hierarchies allow us to evaluate imagined futures over a much smaller number of dimensions. This is how ad hoc appraisals, like default appraisals, help to resolve the fuzzy evaluation problem. This insight also casts new light on multi-attribute choice strategies (Payne, Bettman, & Johnson, Reference Payne, Bettman and Johnson1988). A couple contemplating divorce faces a dizzying array of potential attributes. Yet, for many such couples, their children's well-being is paramount. If this dimension does not prove decisive, they may move on to other concerns, such as their financial well-being or sexual satisfaction. The hierarchical organization of goals can explain why particular situations call for particular decision rules, such as lexicographic rules (using a single attribute) or elimination-by-aspects (eliminating options beneath a minimum criterion on a key attribute, then iteratively considering other dimensions; Tversky, Reference Tversky1972), as sketched below.
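The sketch below illustrates these standard decision rules with hypothetical options, attributes, and cutoffs (all values are invented for illustration): a goal hierarchy drives elimination-by-aspects, discarding options that fall below a cutoff on the most important attribute before any lower-priority attribute is consulted.

```python
# Elimination-by-aspects over a goal hierarchy (illustrative attributes and cutoffs).
# Attributes are ordered by importance; options failing a cutoff are eliminated in turn.
def eliminate_by_aspects(options, goal_hierarchy):
    remaining = dict(options)
    for attribute, cutoff in goal_hierarchy:
        survivors = {name: attrs for name, attrs in remaining.items()
                     if attrs[attribute] >= cutoff}
        if survivors:                      # never eliminate every remaining option
            remaining = survivors
        if len(remaining) == 1:            # a single survivor ends the search early
            break
    return list(remaining)

options = {
    "stay together": {"child_wellbeing": 0.8, "finances": 0.7, "satisfaction": 0.3},
    "divorce":       {"child_wellbeing": 0.5, "finances": 0.4, "satisfaction": 0.6},
}
goal_hierarchy = [("child_wellbeing", 0.7), ("finances", 0.5), ("satisfaction", 0.5)]
print(eliminate_by_aspects(options, goal_hierarchy))   # ['stay together']
```

When the first attribute is decisive, as here, the rule collapses into a lexicographic choice; lower-priority dimensions are consulted only if earlier ones fail to settle the matter.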

Second, we assume that we can generate imagined futures containing details relevant to evaluating the required dimensions. For example, our married couple might first imagine their future with respect to their child's well-being, then elaborate this image to consider their romantic prospects, then their finances. Although these different imagined aspects may well emanate from a shared narrative of the couple's married life, this complete conception of their future is unlikely to emerge fully formed; instead, it must be simulated one piece at a time. Although we do not know of any research directly examining this proposed ability in the context of narratives, evidence from other domains suggests that it is possible; for example, people can fluidly reclassify objects depending on their goals (Barsalou, Reference Barsalou1983) and can manipulate their mental images depending on the query (Kosslyn, Reference Kosslyn1975).

8.3. Emotions are used to manage decisions extended over time

Hamlet's uncertainty paralyzed him for three acts; by the fifth act, it was too late. Hamlet learned the hard way that strong, conflicting arguments produce ambivalence that can stop action in its tracks (Armitage & Conner, Reference Armitage and Conner2000; Festinger, Reference Festinger1962; Rosner et al., Reference Rosner, Basieva, Barque-Duran, Glöckner, von Helversen, Khrennikov and Pothos2022; Rucker, Tormala, Pety, & Briñol, Reference Rucker, Tormala, Pety and Briñol2014; Smelser, Reference Smelser1998). Many a couple and many an investor have talked themselves in circles while romantic and financial opportunities slipped away; committing to one distinct course of action often yields better fortunes, even if one can never be certain a choice is truly "right." Moreover, high-stakes decisions often are extended through time, requiring commitment. Failure in this respect leads many novice investors to overtrade, eroding their gains through transaction costs (Barber & Odean, Reference Barber and Odean2000). Conviction bears dividends.

Yet conviction also bears risks. Blindly following a plan while ignoring new information is as much a recipe for disaster as Hamletian paralysis. Complex problems such as the COVID pandemic, Putin's invasion of Ukraine, or climate change require different approaches as events or our knowledge evolve. Emotions are instrumental in the inter-related processes of conviction management: gaining conviction (acting in the face of ambivalence), maintaining conviction (committing to a sustained course of action), and moderating conviction (taking account of new evidence and potentially changing course as the situation changes). To manage conviction is to manage our emotional attachments to a person, object, or course of action. Emotional attachment too weak to withstand the temporary vicissitudes of fortune yields indecision, yet an inability to reappraise a genuinely changed situation can yield calamity.

Cognition and affect are intertwined in generating conviction. Conviction-generating narratives integrate information about the past and expectations about the future to emotionally support a course of action. Experiments have probed this process. In a purely cognitive model, confidence in a decision is proportional to the strength of the arguments in its favor; in a purely affective model, confidence in a decision is proportional to its propensity to trigger approach emotions. Instead, cognition and emotion work together (Bilovich, Johnson, & Tuckett, Reference Bilovich, Johnson and Tuckett2020). When situations trigger approach emotions, investors find favorable arguments more relevant, with the converse for avoidance emotions. Perceived relevance in turn influences investors’ choices. Although emotions profoundly affect choice, they do so by influencing intermediate cognitive processing.

This interplay between cognition and affect is also illustrated by the conviction-generating strategies cited by the investment managers in Tuckett's (Reference Tuckett2011) interview study (Chong & Tuckett, Reference Chong and Tuckett2015). Numerous respondents (90%) referred to one or more “attractor” narratives, producing excitement over an investment due to an exceptional opportunity for gain. Attractor narratives typically cite either the investment's intrinsic properties (e.g., exceptional products) or the investor's own special skills (e.g., exceptional insight). A similar proportion of respondents (88%) referred to one or more “doubt-repelling” narratives, reducing anxiety over an investment. Doubt-repelling narratives typically raise and then counter a potential concern, placing bounds on either uncertainty (e.g., competent managers) or downside surprise (e.g., solid fundamentals).

Uncertainty can undermine conviction, but so can excessive certainty in dynamic situations. At a single time-point, people are more confident in investments described as having a specific, predictable return (8%) than in those whose return falls within a range (3–13%) (Batteux, Khon, Bilovich, Johnson, & Tuckett, Reference Batteux, Khon, Bilovich, Johnson and Tuckettin press; Du & Budescu, Reference Du and Budescu2005). However, building conviction by masking uncertainty is not sustainable: Once point forecasts are shown to be unreliable – an inevitable event under uncertainty – trust in the forecaster is reduced (Batteux, Bilovich, Johnson, & Tuckett, Reference Batteux, Bilovich, Johnson and Tuckett2021a). This has implications for risk communication: When uncertainty is communicated in vaccine announcements, trust in vaccines is buffered against subsequent negative outcomes (Batteux, Bilovich, Johnson, & Tuckett, Reference Batteux, Bilovich, Johnson and Tuckett2021b).

Conviction is not good or bad in itself. It is needed to overcome ambivalence and sustain commitment, but it is adaptive only if it does not preclude learning. When emotions are regulated well (Gross, Reference Gross1998), conviction buffers against the vicissitudes of the unfolding situation yet can still be moderated. In such an integrated state, we can be sensitive to new information and adjust decisions in an orderly way; we adopt a particular narrative but acknowledge the possibility of error and stay attuned to evidence for competing narratives. In contrast, in a divided state, ambivalence is hidden by attentional neglect of information inconsistent with the preferred narrative. Whereas new information in integrated states can trigger curiosity and evidence integration, incongruent information is rejected in a divided state (Tuckett, Reference Tuckett2011; Tuckett & Taffler, Reference Tuckett and Taffler2008). This is why ambivalence has been linked both to maladaptive responses, such as confirmation bias and behavioral paralysis, and to adaptive responses, such as broader attention and willingness to consider multiple perspectives (Rothman, Pratt, Rees, & Vogus, Reference Rothman, Pratt, Rees and Vogus2017). Decision-makers who experience balanced emotions are likelier to rely on “wise reasoning” strategies, such as epistemic humility and integrating diverse perspectives (Grossmann, Oakes, & Santos, Reference Grossmann, Oakes and Santos2019). Integrated conviction management is adaptive because it permits decision-makers to accept a narrative as provisionally true and act accordingly – a crucial characteristic when there is a cost to changing course – while accumulating evidence in the background and changing course when clearly merited.

Integrated conviction management is closely linked to one way that feedback loops help decisions become adaptive over time. Although under radical uncertainty we often have no choice but to make some decision without a clear sense of whether it is the best option, we can accumulate evidence about what does and does not work. Acting on one narrative yields information that can then be used to reappraise that same narrative, potentially leading to an iterative shift in narrative and decision. Extended decisions can thus often be treated as a series of experiments, each of which informs the next (Fenton O'Creevy & Tuckett, Reference Fenton O'Creevy and Tuckett2022).
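
To illustrate this iterative logic, here is a schematic sketch – our own toy model rather than an implementation from the CNT literature – in which hypothetical candidate narratives accumulate evidence only when acted upon, and the decision-maker changes course only when a rival narrative is clearly better supported, reflecting the cost of switching:

```python
import random

def conviction_loop(narratives, observe_outcome, steps=50, switch_margin=1.0):
    """Act on the currently favored narrative, accumulate evidence from the
    outcomes of acting on it, and switch only when a rival narrative is
    clearly better supported (switch_margin models the cost of changing course)."""
    evidence = {n: 0.0 for n in narratives}
    current = narratives[0]
    for _ in range(steps):
        outcome = observe_outcome(current)      # feedback from acting on the narrative
        evidence[current] += outcome            # reappraise the acted-on narrative
        best = max(evidence, key=evidence.get)
        if best != current and evidence[best] - evidence[current] > switch_margin:
            current = best                      # change course when clearly merited
    return current, evidence

# Hypothetical example: narrative "B" yields better outcomes on average,
# but the decision-maker starts out committed to "A".
random.seed(1)
mean_payoff = {"A": -0.2, "B": 0.5}
chosen, scores = conviction_loop(
    ["A", "B"],
    observe_outcome=lambda n: mean_payoff[n] + random.gauss(0, 1),
)
print(chosen, scores)   # typically ends up acting on "B"
```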

9. Communication

Many everyday decisions are inseparable from their social context. The communication processes through which narratives or narrative fragments are transmitted across minds are crucial for understanding decision-making at macro scales. Subjecting narratives to cultural evolution allows them to adapt over time, generating reasonably high-quality decisions in the absence of calculable probabilities and utilities.

9.1. Shared narratives facilitate social coordination

Decisions are socially embedded in part through their shared consequences – when decisions are taken collectively, as in political decision-making, or when one individual's decision affects others in their social group. Socially coordinated decisions can generate more value than the sum of their individual components (Chwe, Reference Chwe2001). However, coordination is challenging, due to both divergent interests and divergent information. Shared narratives help to coordinate both interests and information.

Reputation-tracking is key to aligning individual incentives with collective interest (Rand & Nowak, Reference Rand and Nowak2013; Tennie, Frith, & Frith, Reference Tennie, Frith and Frith2010). People are motivated to evaluate others’ reputations based on their actions, even actions that affect only third parties. For instance, people evaluate others’ moral character based on prosocial actions such as donation (Glazer & Konrad, Reference Glazer and Konrad1996; Johnson, Reference Johnson2020), volunteering (Johnson & Park, Reference Johnson and Park2021), and eco-friendly actions (Griskevicius, Tybur, & van den Bergh, Reference Griskevicius, Tybur and van den Bergh2010), generating incentives for apparent altruism. Conversely, bad reputations are costly: People sacrifice resources to punish free-riders (Jordan, Hoffman, Bloom, & Rand, Reference Jordan, Hoffman, Bloom and Rand2016). Because we are aware that others are tracking our reputations, we are motivated to take actions that bring reputational benefits and avoid actions that bring reputational harm.

An important means of reputation management is how we justify choices to others (Lerner & Tetlock, Reference Lerner and Tetlock1999; Mercier & Sperber, Reference Mercier and Sperber2017). Narratives often play this justificatory role, maintaining reputation in the face of disagreement and coordinating group activity when other stakeholders must adopt the same decision. For instance, stories shared by the Agta, hunter–gatherers in the Philippines, express messages promoting cooperation, allowing skilled storytellers to achieve greater cooperation (Smith et al., Reference Smith, Schlaepfer, Major, Dyble, Page, Thompson and Migliano2017). Closer to home, scientists often debate what “story” they will sell to readers and reviewers. Why are narratives so effective for reputation maintenance?

First, narratives contain causal structure that can generate reasons justifying a position. For example, when deciding which parent should be awarded custody of a child, people favor a parent with more extreme positive attributes (above-average income) and negative attributes (extensive work-related travel) over one with more neutral attributes (typical income and working hours), because the former offers more positive reasons favoring custody. But when asked instead which parent should be denied custody, people again choose the more extreme parent, because that parent also offers more negative reasons against custody (Shafir, Simonson, & Tversky, Reference Shafir, Simonson and Tversky1993). More extreme attributes support more causally potent explanations, generating both supporting and opposing narratives.
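
The reason-counting logic can be seen in a toy sketch (the attribute labels below are our own stand-ins, not the original stimuli): the “enriched” parent supplies more reasons both for awarding and for denying custody.

```python
# Toy illustration of reason-based choice with made-up attributes.
parents = {
    "impoverished": {"pro": ["reasonable rapport with child"],
                     "con": ["modest income"]},
    "enriched":     {"pro": ["very close bond with child", "above-average income"],
                     "con": ["extensive work travel", "minor health problems"]},
}

def choose(frame):
    """'award' tallies reasons in favor; 'deny' tallies reasons against."""
    key = "pro" if frame == "award" else "con"
    return max(parents, key=lambda p: len(parents[p][key]))

print(choose("award"))  # enriched: more reasons favoring custody
print(choose("deny"))   # enriched again: more reasons opposing custody
```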

Second, narratives not only justify decisions after the fact but also persuade others to adopt our perspective (Krause & Rucker, Reference Krause and Rucker2020). Arguments communicated with a narrative are often more persuasive than those communicated with facts alone (Chang, Reference Chang2008; De Wit, Das, & Vet, Reference De Wit, Das and Vet2008; Shen, Ahern, & Baker, Reference Shen, Ahern and Baker2014), in part because narratives are readily understood (Section 9.2). Narratives induce emotional engagement, mental imagery, and attention, creating “narrative transportation” that can lead reasoners to believe elements of the story (Adaval & Wyer, Reference Adaval and Wyer1998; Escalas, Reference Escalas2004; Green & Brock, Reference Green and Brock2000; Hamby, Brinberg, & Daniloski, Reference Hamby, Brinberg and Daniloski2017; Van Laer, de Ruyter, Visconti, & Wetzels, Reference Van Laer, de Ruyter, Visconti and Wetzels2014). Moreover, the broader narratives espoused by a communicator, such as moral and political worldviews, can lend additional credence to their claims (Johnson, Rodrigues, & Tuckett, Reference Johnson, Rodrigues and Tuckett2021; Marks, Copland, Loh, Sunstein, & Sharto, Reference Marks, Copland, Loh, Sunstein and Sharto2019). Persuasion is crucial in coordination because it allows group members to hold the same narrative in their heads, making narratives part of our collective or transactive memory (Boyd, Reference Boyd2009; Chwe, Reference Chwe2001; Hirst, Yamashiro, & Coman, Reference Hirst, Yamashiro and Coman2018; Wegner, Reference Wegner, Mullen and Goethals1987) and providing a shared plan for coordinated action.

9.2. Shared narratives shape social learning and evolve

Decision-making is also socially embedded through the informational environment. Human knowledge arises largely through our cumulative cultural heritage (Boyd, Richerson, & Henrich, Reference Boyd, Richerson and Henrich2011; Henrich, Reference Henrich2018). Indeed, because the ability to access external knowledge when needed is often more crucial for decision-making than internal knowledge itself, we frequently confuse knowledge inside and outside our heads (Sloman & Fernbach, Reference Sloman and Fernbach2017).

Communication of narratives is a crucial way we learn beyond immediate experience (Boyd, Reference Boyd2018), with some even suggesting that the adaptive advantage of sharing narratives is the main reason language evolved (Donald, Reference Donald1991). However, we do not suggest that narratives in the full form described in Section 5 migrate wholesale from one mind to another. The knowledge transferred from one mind and stored in another is relatively skeletal, or even a placeholder (Rozenblit & Keil, Reference Rozenblit and Keil2002), as when we store the source of a piece of information rather than the information itself (Sloman & Fernbach, Reference Sloman and Fernbach2017; Sparrow, Liu, & Wegner, Reference Sparrow, Liu and Wegner2011). Rather than full narrative representations, we assume that primitive elements – narrative fragments such as basic causal schemas, memorable analogies, and emotional color – are what is shared and what shapes social learning. These narrative fragments, communicated consistently within a social group, give rise to a set of elements common among the narratives represented in those group members’ individual minds – what we call a shared narrative.

Several approaches seek to model how ideas spread and evolve (Boyd & Richerson, Reference Boyd and Richerson1985; Dawkins, Reference Dawkins1976; Sperber, Reference Sperber1996). A common insight is that ideas spread when they pass through two sets of cognitive filters: Constraints on encoding (attention, memory, and trust) and constraints on communication (motivation and ability to share). But for a culturally transmitted idea to not only spread but be acted on, that idea must pass through a third filter – constraints on action. The idea must be perceived as actionable and produce motivation to act. Narratives can pass through all three filters – encoding, communication, and action – and therefore are likely to socially propagate.
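
One way to make the three-filter idea concrete is a simple propagation sketch; the fragments and filter probabilities below are arbitrary illustrations of our own, not estimates from any dataset.

```python
import random

random.seed(42)

# Each narrative fragment carries rough scores for how likely it is to survive
# each filter: being encoded (attention/memory/trust), being retold, and being acted on.
fragments = [
    {"name": "causal schema", "encode": 0.9, "share": 0.7, "act": 0.8},
    {"name": "vivid analogy", "encode": 0.8, "share": 0.9, "act": 0.4},
    {"name": "dry statistic", "encode": 0.3, "share": 0.2, "act": 0.3},
]

def transmit(fragment, hops=10):
    """Pass a fragment down a chain of minds; count how many encode it and act on it."""
    encoded = acted = 0
    for _ in range(hops):
        if random.random() > fragment["encode"]:   # fails the encoding filter
            break
        encoded += 1
        if random.random() < fragment["act"]:      # passes the action filter
            acted += 1
        if random.random() > fragment["share"]:    # not retold to the next mind
            break
    return {"encoded_by": encoded, "acted_on_by": acted}

for f in fragments:
    print(f["name"], transmit(f))
```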

First, narratives are easy to remember. People often represent information as scripts, or stereotyped sequences of events (Schank & Abelson, Reference Schank and Abelson1977). Because people so naturally represent information using causal–temporal structure, they are far better at remembering information organized as stories (Bartlett, Reference Bartlett1932; Kintsch, Mandel, & Kozminsky, Reference Kintsch, Mandel and Kozminsky1977; Thorndyke, Reference Thorndyke1977). Humans’ remarkable ability to remember information encoded as stories is perhaps most impressively attested by oral traditions, such as the transmission of the Greek epics and of Hindu and Buddhist historical texts over centuries purely through word of mouth (Rubin, Reference Rubin1995).

Second, people like to talk about narratives, making them susceptible to spreading through word-of-mouth (Berger, Reference Berger2013). This may be because narratives are well-suited to balancing novelty against comprehensibility (Berlyne, Reference Berlyne1960; Silvia, Reference Silvia2008), with the most contagious narratives including a small number of novel concepts against a larger background of familiar concepts (Norenzayan, Atran, Faulkner, & Schaller, Reference Norenzayan, Atran, Faulkner and Schaller2006). Narratives can convey new information in digestible form because they match our default causal–temporal representations of events (Schank & Abelson, Reference Schank and Abelson1977) and focus on the behavior of human actors, commandeering our natural tendency toward gossip (Dunbar, Reference Dunbar1996). Moreover, as discussed above, narratives are highly persuasive, and therefore commonly used when trying to convince others.

Finally, narratives lend themselves to action. Because narratives have a causal–temporal organization, and are often organized around the actions of individuals (Mandler & Johnson, Reference Mandler and Johnson1977), they provide a ready template for intervening on the world. Indeed, causal knowledge is crucial precisely because it can be used to bring about desired outcomes (Woodward, Reference Woodward2003). Culturally acquired knowledge of physical causation is embedded in physical tools, which can manipulate the physical world. Analogously, culturally acquired knowledge of social causation is embedded in narratives, which serve as templates for manipulating the social world. We suggest that narratives which lead to effective actions are particularly likely to survive this filter, just as effective institutions are likelier to survive social evolution (Hayek, Reference Hayek1958). Therefore, narratives that have survived this crucible of cultural evolution potentially lead to adaptive decision-making even under radical uncertainty.

Because narratives live and die by cultural evolution (Boyd & Richerson, Reference Boyd and Richerson1985; Henrich, Reference Henrich2018), their propagation depends on their social, economic, and physical environment. Narratives surrounding masks at the start of the COVID-19 pandemic differed profoundly across countries (Hahn & Bhadun, Reference Hahn and Bhadun2021), owing both to social norms and to differing experience with infectious disease. Henrich's (Reference Henrich2020) account of how the West became prosperous suggests that narratives originating with the Catholic Church altered norms around cousin marriage, generating new family structures and patterns of cooperation that eventually led to markets and science.

9.3. Shared narratives propagate through social networks

If shared narratives facilitate coordination and learning, then as they shift over time and propagate through social networks, they should be tied to large-scale outcomes. Since economic actors’ decisions are driven by narratives (Section 5.2.2), changes in the emotional content of socially available narratives may shift attitudes toward risk. If so, then measures of approach and avoidance emotions in economic narratives should predict the direction of economic aggregates – output, employment, GDP growth – that depend on investment. Nyman, Kapadia, and Tuckett (Reference Nyman, Kapadia and Tuckett2021) studied this claim using text-mining techniques on internal Bank of England commentary (2000–2010), broker research reports (2010–2013), and Reuters news articles (1996–2014).

First, relative sentiment is a leading indicator of economic volatility and consumer sentiment. This reflects the idea that a preponderance of approach over avoidance emotions is needed to produce the conviction to invest (Keynes, Reference Keynes1936). For each document, the proportion of words signaling approach (e.g., “excited,” “ideal”) and the proportion signaling avoidance (“threatening,” “eroding”) were calculated, with the difference between these indices constituting relative sentiment. Shocks to relative sentiment in the UK had negative effects on industrial production, employment, and the stock market, with these impacts lasting for nearly 20 months (Nyman et al., Reference Nyman, Kapadia and Tuckett2021). Tuckett and Nyman (Reference Tuckett and Nyman2018) also found that relative sentiment predicted changes in investment and employment in the UK, US, and Canada more than 12 months out (Tuckett, Reference Tuckett2017). For example, a plot of relative sentiment against major events in the lead-up to the global financial crisis shows a precipitous decline in relative sentiment in the year preceding the failure of Bear Stearns in March 2008. A similar analysis using 1920s data from the Wall Street Journal found that sentiment shocks, beyond economic fundamentals, affected production and stock values in the lead-up to the Great Depression (Kabiri, James, Landon-Lane, Tuckett, & Nyman, Reference Kabiri, James, Landon-Lane, Tuckett and Nyman2023); sentiment likewise appears to account for the slow recovery from the 2008 recession (Carlin & Soskice, Reference Carlin and Soskice2018).
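
As an illustration of the text-mining step, a minimal sketch of the relative-sentiment index follows; the word lists are tiny stand-ins for the approach and avoidance dictionaries used in the original studies.

```python
# Minimal sketch: for each document, the share of approach words minus the
# share of avoidance words, using toy stand-in dictionaries.
APPROACH = {"excited", "excitement", "ideal", "opportunity", "attractive"}
AVOIDANCE = {"threatening", "eroding", "anxious", "fear", "losses"}

def relative_sentiment(document: str) -> float:
    tokens = [t.strip(".,;:!?").lower() for t in document.split()]
    if not tokens:
        return 0.0
    approach = sum(t in APPROACH for t in tokens)
    avoidance = sum(t in AVOIDANCE for t in tokens)
    return (approach - avoidance) / len(tokens)

docs = [
    "Investors are excited about an ideal opportunity in emerging markets.",
    "Threatening conditions are eroding confidence and raising fear of losses.",
]
print([round(relative_sentiment(d), 2) for d in docs])  # [0.3, -0.4]
```

Aggregating such per-document scores within each time window yields the kind of sentiment series whose shocks are analyzed above.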

Second, excessive homogeneity around narratives portends trouble. Nyman et al. (Reference Nyman, Kapadia and Tuckett2021) used topic modeling to assign each article in the text database to a particular narrative and to compute the degree of narrative topic consensus at each time point. Prior to the crisis, homogeneity increased around narratives high in approach emotions (excitement) and lacking avoidance emotions (anxiety), which could have served as a warning sign of impending financial system distress (Nyman et al., Reference Nyman, Kapadia and Tuckett2021). This supports the idea of groupfeel, or emotional conformity, as a driver of booms and busts, and the notion that integrated emotional states that manage ambivalence are better suited to stable decision-making than divided states that ignore discordant information (Tuckett & Taffler, Reference Tuckett and Taffler2008).
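
One simple way to operationalize narrative-topic consensus – our choice of statistic for illustration, not necessarily the one used in the original analysis – is the share of documents assigned to the modal topic in each time window:

```python
from collections import Counter

def topic_consensus(topic_assignments):
    """Share of documents assigned to the most common topic in a time window:
    1.0 means every article tells the same story; low values mean topics are spread out."""
    counts = Counter(topic_assignments)
    return max(counts.values()) / len(topic_assignments)

# Hypothetical windows of per-article topic labels produced by a topic model.
pre_crisis  = ["housing boom"] * 8 + ["tech", "rates"]               # homogeneous
calmer_time = ["housing boom", "tech", "rates", "trade", "oil"] * 2  # diverse
print(topic_consensus(pre_crisis))   # 0.8
print(topic_consensus(calmer_time))  # 0.2
```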

Macroeconomic crises are necessarily rare and atypical, so no method will reveal definitive answers about their causes. But these techniques can also be used at a more micro scale. For example, Tuckett, Smith, and Nyman (Reference Tuckett, Smith and Nyman2014) studied relative sentiment in news articles about Fannie Mae. From 2005 to mid-2007, sentiment became increasingly exuberant, along with Fannie Mae's share price, and unmoored from the economic realities reflected in the Case–Shiller Home Price Index. Such states can result from the fetishization of some “phantastic object” (Tuckett & Taffler, Reference Tuckett and Taffler2008) – in this case, mortgage securitization. A similar analysis of Enron's internal emails in 2000–2002 revealed comparable emotional–narrative dynamics of build-up and collapse surrounding the deregulation of the California energy market and Enron's impending (ill-fated) entry into broadband. Overall, macro- and micro-level analyses converge to suggest that emotional–narrative sentiment spreads through social networks and may causally influence economic outcomes.

10. Conclusion

So often, decisions in economics textbooks and psychology laboratories alike are divorced from the need for sense-making and imagination, overtly quantified in their consequences and probabilities, taken at a single time-point, and stripped of social context. This reductionist tradition has yielded massive progress. But progress in understanding everyday decision-making also requires us to put back those elements that have been stripped away. CNT is our answer to this need.

To summarize CNT standing on one foot: We impose narrative structure on information to explain the past, imagine the future, appraise that future, take sustained action, and coordinate actions socially. The mediation problem – a mental representation that can mediate between the external world and our choices – is solved by narratives; the combination problem – a mental process that can drive action by combining beliefs and values – is solved by emotion. Decisions can be reasonably adaptive even without well-specified probabilities and utilities because individual narratives are influenced by culturally evolved shared narratives and by feedback from our own actions.

10.1. Meta-theoretical considerations

Scientific theories, like narratives, are evaluated partly on aesthetic grounds. In this spirit, we discuss two potential “meta-theoretical” objections to CNT.

First, some may view CNT as too grandiose. Philosophers have mostly given up on generating grand unified theories, and similar efforts such as behaviorism have fared little better in psychology. A skeptical reader may view CNT as a kaleidoscope of ideas – encompassing narratives, explanation, causation, analogy, forecasting, emotion, motivation, cultural evolution, and more – biting off too much theoretical meat to properly chew, much less digest.

Second, some may view CNT as too skeletal. We do not provide our own account of explanation or emotion or cultural evolution, but rather focus on how these processes fit together. Even the notion of narratives – the theoretical centerpiece of our account, as the mental substrate binding these processes – can be elusive, as it coordinates lower-level representations of causation, analogy, time, and valence. Perhaps we have not bitten off theoretical meat at all, but merely bones.

We are sympathetic to these concerns. Yet we believe that in this case, we have encountered a grand problem that requires a strong skeleton. CNT is not grand in the sense that it attempts to explain all of cognition; rather, it is attempting to explain decision-making under radical uncertainty – an important problem that has largely resisted theoretical progress. It is a grand problem precisely because, we contend, many parts of our mind cooperate to make such decisions. Approaches that ignore any component piece will lack the theoretical machinery required to understand the problem. If CNT is grand, it is not by choice but by necessity.

Thus, CNT is not a theory of explanation, analogy, causation, or emotion, but a theory of decision-making under radical uncertainty. We focus less on the details of these component processes because it is how the processes interact that is central to how we produce the conviction to act under uncertainty. We do not necessarily take a side where there is theoretical disagreement about one of these processes; rather, we specify how these processes relate to each other. CNT is skeletal because the skeleton is the theory; the meat, though delicious, belongs to other theories.

10.2. Contributions

We are not the first to highlight the importance of narratives to decision-making, nor (we hope!) the last. But we have aimed to provide an integrative framework that allows insights from several disciplines to be combined, contributing to the ongoing conversation in four main ways.

First, by explaining in detail how narrative decision-making works. We provide a representational framework that captures key information used in real-world examples of narrative-based decisions, and explain how these representations sustain processes of explanation, simulation, and affective evaluation, which jointly motivate action.

Second, by highlighting that narratives address important puzzles about everyday decision-making. Many ordinary decisions are plagued by radical uncertainty, fuzzy evaluation, and the need for sustained commitment; they involve sense-making and imagination; they are inextricably linked to social context. Such decisions – where optimality is ill-defined – resist dichotomization into “rational” versus “irrational.” Narratives not only help to solve these problems, but often do so adaptively. Feedback loops at both the individual level (managing conviction for decisions sustained over time) and the collective level (the cultural evolution of narratives) contribute to adaptive choice.

Third, by identifying how processes ordinarily studied in isolation work together. Recent advances in explanatory reasoning highlight the role of heuristics and affect in explanation under radical uncertainty – advances unknown at the time of Pennington and Hastie's (Reference Pennington and Hastie1986) seminal work. Causal and analogical processing have been studied extensively, with excellent cognitive models of each; yet these models have been integrated surprisingly little with one another or with decision-making models. The pivotal role of affect in solving the fuzzy evaluation problem has received less attention than it deserves in the decision-making literature, as has the role of cultural evolution in socially propagating narratives that can then guide individual choices. We hope to put these areas in dialogue.

Finally, by providing a common vocabulary – including our ideas around the structure of narrative representations, the information flow among narrative processes, and the set of problems to be addressed by a theory of everyday choice.

In keeping with this final point, we emphasize that a common vocabulary is needed precisely because CNT will not be the final word on this topic. Indeed, some of our proposals remain tentative even for us. First, although we believe that causal, temporal, analogical, and valence structure are the key lower-level forms of information included in narratives, there may be other forms of information we have not considered. Second, relatively little is known about how these kinds of information are coordinated; thus, our proposals around narrative coherence rules must remain tentative and likely incomplete. Third, although much has recently been learned about explanatory heuristics, we have not provided an exhaustive list of these heuristics but merely given a few examples to illustrate how they work; more will surely be discovered. Fourth, we know relatively little about which specific features of narratives make them more or less socially contagious and which of those features promote adaptive decisions; memes are selfish, as Dawkins (Reference Dawkins1976) noted, and not all features of catchy narratives are likely to be adaptive. Yet we would add that the preoccupation of much decision-making research with optimality – whether in assumption or subversion – might profitably yield some ground to the more basic question of how, under radical uncertainty and fuzzy evaluation, we gain conviction to act at all.

We are excited by the prospect that CNT might provide a fruitful platform for collaboration between researchers across the decision sciences – a rallying cry for all who aim to understand social, psychological, and economic aspects of decision-making in the real world.

Acknowledgements

We thank Liz Allison, Eleonore Batteux, Tim Besley, Gordon Brown, Christopher Dawson, Arman Eshraghi, Mark Fenton O'Creevy, Peter Fonagy, Aikaterini Fotopoulou, Karl Friston, Gerd Gigerenzer, Reid Hastie, Douglas Holmes, John Kay, Mervyn King, Dave Lagnado, Rickard Nyman, David Shanks, George Soros, Lukasz Walasek, and the members of UCL's Centre for the Study of Decision-Making Uncertainty, for useful comments, encouragement, and discussion.

Financial support

This research was supported by grants to DT and the UCL Centre for the Study of Decision-Making Uncertainty from the Institute of New Economic Thinking (IN011-00025, IN13-00051, and INO16-00011), the Eric Simenhauer Foundation of the Institute of Psychoanalysis (London), UKRI (EPSRC grant reference EP/P016847/1), and the ESRC-NIESR Rebuilding Macroeconomics network.

Competing Interest

None.

Footnotes

1. Excerpts can be found in Tuckett (Reference Tuckett2012). The full interview, along with the three others selected at random from this larger group, is available at https://www.macmillanihe.com/companion/Tuckett-Minding-The-Markets/study-resources/. More detailed analysis of all the decisions reported by the entire sample, also supported by randomly drawn examples from the coded interview data, has been reported elsewhere (Tuckett, Reference Tuckett2011).

References

Abbott, V., Black, J. B., & Smith, E. E. (1985). The representation of scripts in memory. Journal of Memory and Language, 24, 179199.CrossRefGoogle Scholar
Abolafia, M. Y. (2020). Stewards of the market: How the Federal Reserve made sense of the financial crisis. Harvard University Press.Google Scholar
Adaval, R., & Wyer, R. S. (1998). The role of narratives in consumer information processing. Journal of Consumer Psychology, 7, 207245.CrossRefGoogle Scholar
Ajzen, I. (1977). Intuitive theories of events and the effects of base-rate information on prediction. Journal of Personality and Social Psychology, 35, 303314.CrossRefGoogle Scholar
Akerlof, G. A., & Shiller, R. J. (2009). Animal spirits: How human psychology drives the economy, and why it matters for global capitalism. Princeton University Press.Google Scholar
Akerlof, G. A., & Snower, D. J. (2016). Bread and bullets. Journal of Economic Behavior & Organization, 126, 5871.CrossRefGoogle Scholar
Aristotle (1999). Nicomachean ethics (Irwin, T., Trans.). Hackett. (Original work published 350 BCE.).Google Scholar
Armitage, C. J., & Conner, M. (2000). Attitudinal ambivalence: A test of three key hypotheses. Personality and Social Psychology Bulletin, 26, 14211432.CrossRefGoogle Scholar
Aronowitz, S., & Lombrozo, T. (2020). Experiential explanation. Topics in Cognitive Science, 12, 13211336.CrossRefGoogle ScholarPubMed
Barber, B. M., & Odean, T. (2000). Trading is hazardous to your wealth: The common stock investment performance of individual investors. Journal of Finance, 55, 773806.CrossRefGoogle Scholar
Barberis, N., Greenwood, R., Jin, L., & Shleifer, A. (2015). X-CAPM: An extrapolative capital asset pricing model. Journal of Financial Economics, 115, 124.CrossRefGoogle Scholar
Barbey, A. K., & Sloman, S. A. (2007). Base-rate respect: From ecological rationality to dual processes. Behavioral and Brain Sciences, 30, 241297.CrossRefGoogle ScholarPubMed
Baron, J. (1992). The effect of normative beliefs on anticipated emotions. Journal of Personality and Social Psychology, 63, 320330.CrossRefGoogle ScholarPubMed
Baron, J., & Kemp, S. (2004). Support for trade restrictions, attitudes, and understanding of comparative advantage. Journal of Economic Psychology, 25, 565580.CrossRefGoogle Scholar
Baron, J., & Spranca, M. (1997). Protected values. Organizational Behavior & Human Decision Processes, 70, 116.CrossRefGoogle ScholarPubMed
Barsalou, L. W. (1983). Ad hoc categories. Memory & Cognition, 11, 211227.CrossRefGoogle ScholarPubMed
Bartlett, F. C. (1932). Remembering: An experimental and social study. Cambridge University Press.Google Scholar
Batteux, E., Bilovich, A., Johnson, S. G. B., & Tuckett, D. (2021b). Negative consequences of failing to communicate uncertainties during a pandemic: an online randomised controlled trial on COVID-19 vaccines. BMJ Open, 12, e051352.Google Scholar
Batteux, E., Khon, Z., Bilovich, A., Johnson, S. G. B., & Tuckett, D. (in press). When do consumers favor overly precise information about investment returns? Journal of Experimental Psychology: Applied.Google Scholar
Batteux, E., Bilovich, A., Johnson, S. G. B., & Tuckett, D. (2021a). When certainty backfires: The effects of unwarranted certainty on consumer loyalty. UCL Working Paper.Google Scholar
Baumeister, R. F., & Masicampo, E. J. (2010). Conscious thought is for facilitating social and cultural interactions: How mental simulations serve the animal–culture interface. Psychological Review, 117, 945971.CrossRefGoogle ScholarPubMed
Beach, L. R. (1998). Image theory: Theoretical and empirical foundations. Erlbaum.CrossRefGoogle Scholar
Beach, L. R. (2010). The psychology of narrative thought: How the stories we tell ourselves shape our lives. Xlibris.Google Scholar
Beach, L. R. (2020). Scenarios as narratives. Futures & Foresight Science, 3, e58.Google Scholar
Beach, L. R., Bissell, B. L., & Wise, J. A. (2016). A new theory of mind: The theory of narrative thought. Cambridge Scholars Publishing.Google Scholar
Beckert, J. (2016). Imagined futures: Fictional expectations and capitalist dynamics. Harvard University Press.CrossRefGoogle Scholar
Bentham, J. (1907/1789). An introduction to the principles of morals and legislation. Clarendon Press.Google Scholar
Berger, J. (2013). Contagious: Why things catch on. Simon & Schuster.Google Scholar
Berlyne, D. E. (1960). Conflict, arousal, and curiosity. McGraw-Hill.CrossRefGoogle Scholar
Bhattacharjee, A., Dana, J., & Baron, J. (2017). Anti-profit beliefs: How people neglect the societal benefits of profit. Journal of Personality and Social Psychology, 113, 671696.CrossRefGoogle ScholarPubMed
Bilovich, A., Johnson, S. G. B., & Tuckett, D. (2020). Perceived argument strength mediates the influence of emotions on confidence. UCL Working Paper.Google Scholar
Bonawitz, E. B., & Lombrozo, T. (2012). Occam's rattle: Children's use of simplicity and probability to constrain inference. Developmental Psychology, 48, 11561164.CrossRefGoogle ScholarPubMed
Bott, L., & Heit, E. (2004). Nonmonotonic extrapolation in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 30, 3850.Google ScholarPubMed
Bowers, J. S., & Davis, C. J. (2012). Bayesian just-so stories in psychology and neuroscience. Psychological Bulletin, 138, 389414.CrossRefGoogle ScholarPubMed
Boyd, B . (2009). On the origin of stories: Evolution, cognition, and fiction. Belknap.CrossRefGoogle Scholar
Boyd, B. (2018). The evolution of stories: From mimesis to language, from fact to fiction. Wiley Interdisciplinary Reviews Cognitive Science, 9, e1444.CrossRefGoogle ScholarPubMed
Boyd, R., & Richerson, P. (1985). Culture and evolutionary process. University of Chicago Press.Google Scholar
Boyd, R., Richerson, P. J., & Henrich, J. (2011). The cultural niche: Why social learning is essential for human adaptation. Proceedings of the National Academy of Sciences, 108, 1091810925.CrossRefGoogle ScholarPubMed
Boyer, P., & Petersen, M. B. (2018). Folk-economic beliefs: An evolutionary cognitive model. Behavioral and Brain Sciences, 41, e158.CrossRefGoogle Scholar
Bramley, N. R., Gerstenberg, T., Mayrhofer, R., & Lagnado, D. A. (2018). Time in causal structure learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 44, 18801910.Google ScholarPubMed
Bruner, J. (1986). Actual minds, possible worlds. Harvard University Press.CrossRefGoogle Scholar
Buehner, M. J., & May, J. (2002). Knowledge mediates the timeframe of covariation assessment in human causal induction. Thinking and Reasoning, 8, 269295.CrossRefGoogle Scholar
Camerer, C., & Weber, M. (1992). Recent developments in modeling preferences: Uncertainty and ambiguity. Journal of Risk and Uncertainty, 5, 325370.CrossRefGoogle Scholar
Caplan, B. (2007). The myth of the rational voter: Why democracies choose bad policies. Princeton University Press.Google Scholar
Carey, S. (2009a). The origin of concepts. Oxford University Press.CrossRefGoogle Scholar
Carey, S. (2009b). Where our number concepts come from. Journal of Philosophy, 106, 220254.CrossRefGoogle ScholarPubMed
Carlin, W., & Soskice, D. (2018). Stagnant productivity and low unemployment: Stuck in a Keynesian equilibrium. Oxford Review of Economic Policy, 34, 169194.CrossRefGoogle Scholar
Carlson, J. A., & Parkin, M. (1975). Inflation expectations. Economica, 42, 123138.CrossRefGoogle Scholar
Chang, C. (2008). Increasing mental health literacy via narrative advertising. Journal of Health Communication, 13, 3755.CrossRefGoogle ScholarPubMed
Chater, N., & Loewenstein, G. (2016). The under-appreciated drive for sense-making. Journal of Economic Behavior & Organization, 126, 137154.CrossRefGoogle Scholar
Chong, K., & Tuckett, D. (2015). Constructing conviction through action and narrative: How money managers manage uncertainty and the consequences for financial market functioning. Socio-Economic Review, 13, 126.CrossRefGoogle Scholar
Chwe, M. S. (2001). Rational ritual: Culture, coordination, and common knowledge. Princeton University Press.Google Scholar
Cimpian, A., & Steinberg, O. D. (2014). The inherence heuristic across development: Systematic differences between children's and adults’ explanations for everyday facts. Cognitive Psychology, 75, 130154.CrossRefGoogle ScholarPubMed
Cushman, F. (2020). Rationalization is rational. Behavioral and Brain Sciences, 43, e28.CrossRefGoogle Scholar
Cutler, D. M., Poterba, J. M., & Summers, L. H. (1991). Speculative dynamics. Review of Economic Studies, 58, 529546.CrossRefGoogle Scholar
Damasio, A. (1994). Descartes’ error: Emotion, reason, and the human brain. Putnam.Google Scholar
Dawkins, R. (1976). The selfish gene. Oxford University Press.Google Scholar
Dawson, C., & Johnson, S. G. B. (2021). Dread aversion and economic preferences. Available at PsyArXiv.CrossRefGoogle Scholar
De Bondt, W. F. M. (1993). Betting on trends: Intuitive forecasts of financial risk and return. International Journal of Forecasting, 9, 355371.CrossRefGoogle Scholar
De Freitas, J., & Johnson, S. G. B. (2018). Optimality bias in moral judgment. Journal of Experimental Social Psychology, 79, 149163.CrossRefGoogle Scholar
DeLosh, E. L., Busemeyer, J. R., & McDaniel, M. A. (1997). Extrapolation: The sine qua non for abstraction in function learning. Journal of Experimental Psychology: Learning, Memory, and Cognition, 23, 968986.Google ScholarPubMed
De Wit, J. B. F., Das, E., & Vet, R. (2008). What works best: Objective statistics or a personal testimonial? An assessment of the persuasive effects of different types of message evidence on risk perception. Health Psychology, 27, 110115.CrossRefGoogle ScholarPubMed
Doherty, M. E., Chadwick, R., Caravan, H., Barr, D., & Mynatt, C. R. (1996). On people's understanding of the diagnostic implications of probabilistic data. Memory & Cognition, 24, 644654.CrossRefGoogle ScholarPubMed
Donald, M. (1991). Origins of the modern mind: Three stages in the evolution of culture and cognition. Harvard University Press.Google Scholar
Du, N., & Budescu, D. V. (2005). The effects of imprecise probabilities and outcomes in evaluating investment options. Management Science, 51, 17911803.CrossRefGoogle Scholar
Dunbar, R. (1996). Grooming, gossip, and the evolution of language. Harvard University Press.Google Scholar
Einhorn, H. J., & Hogarth, R. M. (1986). Judging probable cause. Psychological Bulletin, 99, 319.CrossRefGoogle Scholar
Eliaz, K., & Spiegler, R. (2018). A model of competing narratives. Available at arXiv.Google Scholar
Elliot, A. J. (2006). The hierarchical model of approach-avoidance motivation. Motivation and Emotion, 30, 111116.CrossRefGoogle Scholar
Ellsberg, D. (1961). Risk, ambiguity, and the Savage axioms. Quarterly Journal of Economics, 75, 643669.CrossRefGoogle Scholar
Escalas, J. E. (2004). Imagine yourself in the product: Mental simulation, narrative transportation, and persuasion. Journal of Advertising, 33, 3748.CrossRefGoogle Scholar
Fehr, E., & Schmidt, K. M. (1999). A theory of fairness, competition, and cooperation. Quarterly Journal of Economics, 114, 817868.CrossRefGoogle Scholar
Fenton O'Creevy, M., & Tuckett, D. (2022). Selecting futures: The role of conviction, narratives, ambivalence, and constructive doubt. Futures & Foresight Science, 4, e111.CrossRefGoogle Scholar
Fernbach, P. M., Darlow, A., & Sloman, S. A. (2010). Neglect of alternative causes in predictive but not diagnostic reasoning. Psychological Science, 21, 329336.CrossRefGoogle Scholar
Festinger, L. (1962). Cognitive dissonance. Scientific American, 207, 93106.CrossRefGoogle ScholarPubMed
Fishbach, A., & Dhar, R. (2007). Dynamics of goal-based choice. In Haugtvedt, C. P., Herr, P. M., & Kardes, F. R. (Eds.), Handbook of consumer psychology (pp. 611637). Psychology Press.Google Scholar
Fisher, M., & Keil, F. C. (2018). The binary bias: A systematic distortion in the integration of information. Psychological Science, 29, 18461858.CrossRefGoogle Scholar
Fodor, J. A., & Pylyshyn, Z. (1981). How direct is visual perception? Some reflections on Gibson's “ecological approach”. Cognition, 9, 139196.CrossRefGoogle ScholarPubMed
Forbus, K. D. (1984). Qualitative process theory. Artificial Intelligence, 24, 85168.CrossRefGoogle Scholar
Ford, M. E. (1992). Motivating humans: Goals, emotions, and personal agency beliefs. Sage.CrossRefGoogle Scholar
Frijda, N. H. (1988). The laws of emotion. American Psychologist, 43, 449458.CrossRefGoogle ScholarPubMed
Gentner, D. (1983). Structure-mapping: A theoretical framework for analogy. Cognitive Science, 7, 155170.CrossRefGoogle Scholar
Gergely, G., & Csibra, G. (2003). Teleological reasoning in infancy: The naïve theory of rational action. Trends in Cognitive Sciences, 7, 287292.CrossRefGoogle Scholar
Gigerenzer, G. (2008). Why heuristics work. Perspectives on Psychological Science, 3, 2029.CrossRefGoogle ScholarPubMed
Gigerenzer, G., & Goldstein, D. G. (1996). Reasoning the fast and frugal way: Models of bounded rationality. Psychological Review, 103, 650669.CrossRefGoogle ScholarPubMed
Gigerenzer, G., & Hoffrage, U. (1995). How to improve Bayesian reasoning without instruction: Frequency formats. Psychological Review, 102, 684704.CrossRefGoogle Scholar
Glazer, A., & Konrad, K. A. (1996). A signaling explanation for charity. American Economic Review, 86, 10191028.Google Scholar
Goodman, N. (1955). Fact, fiction, and forecast. Harvard University Press.Google Scholar
Gopnik, A. (1998). Explanation as orgasm. Minds and Machines, 8, 101118.CrossRefGoogle Scholar
Gopnik, A., Glymour, C., Sobel, D. M., Schulz, L. E., Kushnir, T., & Danks, D. (2004). A theory of causal learning in children: Causal maps and Bayes nets. Psychological Review, 111, 332.CrossRefGoogle ScholarPubMed
Graesser, A. C., Singer, M., & Trabasso, T. (1994). Constructing inferences during narrative text comprehension. Psychological Review, 101, 371395.CrossRefGoogle ScholarPubMed
Green, M. C., & Brock, T. C. (2000). The role of transportation in the persuasiveness of public narratives. Journal of Personality and Social Psychology, 79, 701721.CrossRefGoogle ScholarPubMed
Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D. (2001). An fMRI investigation of emotional engagement in moral judgment. Science, 293, 21052108.CrossRefGoogle ScholarPubMed
Greene, M. R., & Fei-Fei, L. (2014). Visual categorization is automatic and obligatory: Evidence from Stroop-like paradigm. Journal of Vision, 14, 111.CrossRefGoogle ScholarPubMed
Gregory, R. (1970). The intelligent eye. Weidenfeld and Nicolson.Google Scholar
Griskevicius, V., Tybur, J. M., & van den Bergh, B. (2010). Going green to be seen: Status, reputation, and conspicuous conservation. Journal of Personality and Social Psychology, 98, 392404.CrossRefGoogle Scholar
Gross, J. J. (1998). The emerging field of emotion regulation: An integrative review. Review of General Psychology, 2, 271299.CrossRefGoogle Scholar
Grossmann, I., Eibach, R. P., Koyama, J., & Sahi, Q. B. (2020). Folk standards of sound judgment: Rationality versus reasonableness. Science Advances, 6, eaaz0289.CrossRefGoogle ScholarPubMed
Grossmann, I., Oakes, H., & Santos, H. C. (2019). Wise reasoning benefits from emodiversity, irrespective of emotional intensity. Journal of Experimental Psychology: General, 148, 805823.CrossRefGoogle ScholarPubMed
Hagmayer, Y., & Sloman, S. A. (2009). Decision makers conceive of their choices as interventions. Journal of Experimental Psychology: General, 138, 2238.CrossRefGoogle ScholarPubMed
Hahn, K. H. Y., & Bhadun, G. (2021). Mask up: Exploring cross-cultural influences on mask-making behavior during the COVID-19 pandemic. Clothing and Textiles Research Journal, 39, 297313.CrossRefGoogle Scholar
Hamby, A., Brinberg, D., & Daniloski, K. (2017). Reflecting on the journey: Mechanisms in narrative persuasion. Journal of Consumer Psychology, 27, 1122.CrossRefGoogle Scholar
Hastie, R., & Pennington, N. (2000). Explanation-based decision making. In Connolly, T., Arkes, H. R., & Hammond, K. R. (Eds.), Judgment and decision making: An interdisciplinary reader (pp. 212228). Cambridge University Press.Google Scholar
Hayek, F. A. (1958). Freedom, reason, and tradition. Ethics, 68, 229245.CrossRefGoogle Scholar
Hayes, B. K., & Chen, T. J. (2008). Clinical expertise and reasoning with uncertain categories. Psychonomic Bulletin & Review, 15, 10021007.CrossRefGoogle ScholarPubMed
Hegarty, M. (2004). Mechanical reasoning by mental simulation. Trends in Cognitive Sciences, 8, 280285.CrossRefGoogle ScholarPubMed
Henrich, J. (2018). The secret of our success: How culture is driving human evolution, domesticating our species, and making us smarter. Princeton University Press.Google Scholar
Henrich, J. (2020). The WEIRDest people in the world: How the West became psychologically peculiar and particularly prosperous. Penguin.Google Scholar
Hirst, W., Yamashiro, J. K., & Coman, A. (2018). Collective memory from a psychological perspective. Trends in Cognitive Sciences, 22, 438451.CrossRefGoogle ScholarPubMed
Holyoak, K. J., Lee, H. S., & Lu, H. (2010). Analogical and category-based inference: A theoretical integration with Bayesian causal models. Journal of Experimental Psychology: General, 139, 702727.CrossRefGoogle ScholarPubMed
Hommes, C. (2011). The heterogeneous expectations hypothesis: Some evidence from the lab. Journal of Economic Dynamics and Control, 35, 124.CrossRefGoogle Scholar
Hommes, C., Sonnemans, J., Tuinstra, J., & van de Velden, H. (2005). Coordination of expectations in asset pricing experiments. Review of Financial Studies, 18, 955980.CrossRefGoogle Scholar
Horne, Z., Muradoglu, M., & Cimpian, A. (2019). Explanation as a cognitive process. Trends in Cognitive Sciences, 23, 187199.CrossRefGoogle ScholarPubMed
Jara-Ettinger, J., Gweon, H., Schulz, L. E., & Tenenbaum, J. B. (2016). The naïve utility calculus: Computational principles underlying commonsense psychology. Trends in Cognitive Sciences, 20, 589604.CrossRefGoogle ScholarPubMed
Jegadeesh, N., & Titman, S. (1993). Returns to buying winners and selling losers: Implications for stock market efficiency. Journal of Finance, 48, 6591.CrossRefGoogle Scholar
Johnson, M. K., & Sherman, S. J. (1990). Constructing and reconstructing the past and future in the present. In Higgins, E. T. & Sorrentino, R. M. (Eds.), Handbook of motivation and cognition: Foundations of social behavior (Vol. 2, pp. 482526). Guilford Press.Google Scholar
Johnson, S. G. B., & Hill, F. (2017). Belief digitization in economic prediction. In Gunzelmann, A., Howes, A., Tenbrink, T., & Davelaar, E. J. (Eds.), Proceedings of the 39th annual conference of the Cognitive Science Society (pp. 23132319). Cognitive Science Society.Google Scholar
Johnson, S. G. B., Matiashvili, T., & Tuckett, D. (2019a). Explaining the past, predicting the future: How attributions for past price changes affect price expectations. Working Paper.Google Scholar
Johnson, S. G. B., & Tuckett, D. (2022). Narrative expectations in financial forecasting. Journal of Behavioral Decision Making, 35, e2245.CrossRefGoogle Scholar
Johnson, S. G. B. (2016). Cognition as sense-making. Unpublished doctoral dissertation, Yale University, New Haven, CT.Google Scholar
Johnson, S. G. B. (2019). Toward a cognitive science of markets: Economic agents as sense-makers. Economics, 13, 49.CrossRefGoogle Scholar
Johnson, S. G. B. (2020). Dimensions of altruism: Do evaluations of charitable behavior track prosocial benefit or personal sacrifice? Available at PsyArXiv.CrossRefGoogle Scholar
Johnson, S. G. B., & Ahn, J. (2021). Principles of moral accounting: How our intuitive moral sense balances rights and wrongs. Cognition, 206, 104467.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., & Ahn, W. (2015). Causal networks or causal islands? The representation of mechanisms and the transitivity of causal judgment. Cognitive Science, 39, 14681503.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., & Ahn, W. (2017). Causal mechanisms. In Waldmann, M. R. (Ed.), Oxford handbook of causal reasoning (pp. 127146). Oxford University Press.Google Scholar
Johnson, S. G. B., Jin, A., & Keil, F. C. (2014). Simplicity and goodness-of-fit in explanation: The case of intuitive curve-fitting. In Bello, P., Guarini, M., McShane, M. & Scassellati, B. (Eds.), Proceedings of the 36th annual conference of the Cognitive Science Society (pp. 701706). Cognitive Science Society.Google Scholar
Johnson, S. G. B., & Keil, F. C. (2014). Causal inference and the hierarchical structure of experience. Journal of Experimental Psychology: General, 143, 22232241.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., Kim, H. S., & Keil, F. C. (2016a). Explanatory biases in social categorization. In Papafragou, A., Grodner, D., Mirman, D., & Trueswell, J. C. (Eds.), Proceedings of the 38th annual conference of the Cognitive Science Society (pp. 776781). Cognitive Science Society.Google Scholar
Johnson, S. G. B., Matiashvili, T., & Tuckett, D. (2019b). Expectations based on past price patterns: An experimental study. UCL Working Paper.Google Scholar
Johnson, S. G. B., Merchant, T., & Keil, F. C. (2015a). Argument scope in inductive reasoning: Evidence for an abductive account of induction. In Noelle, D. C., Dale, R., Warlaumont, A. S., Yoshimi, J., Matlock, T., Jennings, C. D., & Maglio, P. P. (Eds.), Proceedings of the 37th annual conference of the Cognitive Science Society (pp. 10151020). Cognitive Science Society.Google Scholar
Johnson, S. G. B., Merchant, T., & Keil, F. C. (2020). Belief digitization: Do we treat uncertainty as probabilities or as bits? Journal of Experimental Psychology: General, 149, 14171434.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., Murphy, G. L., Rodrigues, M., & Keil, F. C. (2019). Predictions from uncertain moral character. In Goel, A. K., Seifert, C. M., & Freksa, C. (Eds.), Proceedings of the 41st annual conference of the Cognitive Science Society (pp. 506512). Cognitive Science Society.Google Scholar
Johnson, S. G. B., & Park, S. Y. (2021). Moral signaling through donations of money and time. Organizational Behavior & Human Decision Processes, 165, 183196.CrossRefGoogle Scholar
Johnson, S. G. B., Rajeev-Kumar, G., & Keil, F. C. (2015b). Belief utility as an explanatory virtue. In Noelle, D. C., Dale, R., Warlaumont, A. S., Yoshimi, J., Matlock, T., Jennings, C. D., & Maglio, P. P. (Eds.), Proceedings of the 37th annual conference of the Cognitive Science Society (pp. 10091014). Cognitive Science Society.Google Scholar
Johnson, S. G. B., Rajeev-Kumar, G., & Keil, F. C. (2016b). Sense-making under ignorance. Cognitive Psychology, 89, 3970.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., & Rips, L. J. (2015). Do the right thing: The assumption of optimality in lay decision theory and causal judgment. Cognitive Psychology, 77, 4276.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., Rodrigues, M., & Tuckett, D. (2021). Moral tribalism and its discontents: How intuitive theories of ethics shape consumers’ deference to experts. Journal of Behavioral Decision Making, 34, 4765.CrossRefGoogle Scholar
Johnson, S. G. B., & Steinerberger, S. (2019). Intuitions about mathematical beauty: A case study in the aesthetic experience of ideas. Cognition, 189, 242259.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., Valenti, J. J., & Keil, F. C. (2019). Simplicity and complexity preferences in causal explanation: An opponent heuristic account. Cognitive Psychology, 113, 101222.CrossRefGoogle ScholarPubMed
Johnson, S. G. B., Zhang, J., & Keil, F. C. (2018). Psychological underpinnings of zero-sum thinking. In Rogers, T. T., Rau, M., Zhu, X., & Kalish, C. W. (Eds.), Proceedings of the 40th annual conference of the Cognitive Science Society (pp. 566571). Cognitive Science Society.Google Scholar
Johnson, S. G. B., Zhang, J., & Keil, F. C. (2019). Consumers’ beliefs about the effects of trade. Available at SSRN.CrossRefGoogle Scholar
Johnston, A. M., Johnson, S. G. B., Koven, M. L., & Keil, F. C. (2017). Little Bayesians or little Einsteins? Probability and explanatory virtue in children's inferences. Developmental Science, 20, e12483.CrossRefGoogle ScholarPubMed
Johnston, A. M., Sheskin, M., Johnson, S. G. B., & Keil, F. C. (2018). Preferences for explanation generality develop early in biology, but not physics. Child Development, 89, 11101119.CrossRefGoogle Scholar
Jones, M., & Love, B. C. (2011). Bayesian fundamentalism or enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition. Behavioral and Brain Sciences, 34, 169188.CrossRefGoogle ScholarPubMed
Jordan, J. J., Hoffman, M., Bloom, P., & Rand, D. G. (2016). Third-party punishment as a costly signal of trustworthiness. Nature, 530, 473476.CrossRefGoogle ScholarPubMed
Kabiri, A., James, H., Landon-Lane, J., Tuckett, D., & Nyman, R. (2023). The role of sentiment in the US economy: 1920 to 1934. Economic History Review, 76, 330.CrossRefGoogle Scholar
Kahneman, D. (2002). Maps of bounded rationality: Psychology for behavioral economics. American Economic Review, 93, 14491475.CrossRefGoogle Scholar
Kahneman, D., & Tversky, A. (1982). The simulation heuristic. In Kahneman, D., Slovic, P., & Tversky, A. (Eds.), Judgment under uncertainty: Heuristics and biases (pp. 201208). Cambridge University Press.CrossRefGoogle Scholar
Kant, I. (2002). Groundwork for the metaphysics of morals (A. Zweig, Trans.). Oxford University Press. (Original work published 1796).Google Scholar
Kay, J., & King, M. (2020). Radical uncertainty: Decision-making for an unknowable future. Bridge Street Press.
Keynes, J. M. (1936). The general theory of employment, interest, and money. Palgrave Macmillan.
Khemlani, S. S., Sussman, A. B., & Oppenheimer, D. M. (2011). Harry Potter and the sorcerer's scope: Latent scope biases in explanatory reasoning. Memory & Cognition, 39, 527–535.
Kintsch, W., Mandel, T. S., & Kozminsky, E. (1977). Summarizing scrambled stories. Memory & Cognition, 5, 547–552.
Klein, G. (1998). Sources of power: How people make decisions. MIT Press.
Knight, F. (1921). Risk, uncertainty, and profit. Houghton-Mifflin.
Knobe, J. (2010). Person as scientist, person as moralist. Behavioral and Brain Sciences, 33, 315–329.
Koehler, J. J. (1996). The base rate fallacy reconsidered: Descriptive, normative, and methodological challenges. Behavioral and Brain Sciences, 19, 1–53.
Kominsky, J. F., Phillips, J., Gerstenberg, T., Lagnado, D., & Knobe, J. (2015). Causal superseding. Cognition, 137, 196–209.
Kosslyn, S. M. (1975). Information representation in visual images. Cognitive Psychology, 7, 341–370.
Krause, R. J., & Rucker, D. D. (2020). Strategic storytelling: When narratives help versus hurt the persuasive power of facts. Personality and Social Psychology Bulletin, 46, 216–227.
Krynski, T. R., & Tenenbaum, J. B. (2007). The role of causality in judgment under uncertainty. Journal of Experimental Psychology: General, 136, 430–450.
Lagnado, D. A., & Shanks, D. R. (2003). The influence of hierarchy on probability judgment. Cognition, 89, 157–178.
Lagnado, D. A., & Sloman, S. A. (2006). Time as a guide to cause. Journal of Experimental Psychology: Learning, Memory, and Cognition, 32, 451–460.
Lake, B. M., Ullman, T. D., Tenenbaum, J. B., & Gershman, S. J. (2017). Building machines that learn and think like people. Behavioral and Brain Sciences, 40, e253.
Lavoie, M. (1994). A post Keynesian approach to consumer choice. Journal of Post Keynesian Economics, 16, 539–562.
Lazarus, R. S. (1991). Emotion and adaptation. Oxford University Press.
LeBoeuf, R. A., & Norton, M. I. (2012). Consequence–cause matching: Looking to the consequences of events to infer their causes. Journal of Consumer Research, 39, 128–141.
Leiser, D., & Aroch, R. (2009). Lay understanding of macroeconomic causation: The good-begets-good heuristic. Applied Psychology, 58, 370–384.
Leiser, D., & Shemesh, Y. (2018). How we misunderstand economics and why it matters: The psychology of bias, distortion, and conspiracy. Routledge.
Lerner, J. S., & Keltner, D. (2000). Beyond valence: Toward a model of emotion-specific influences on judgement and choice. Cognition and Emotion, 14, 473–493.
Lerner, J. S., & Keltner, D. (2001). Fear, anger, and risk. Journal of Personality and Social Psychology, 81, 146–159.
Lerner, J. S., & Tetlock, P. E. (1999). Accounting for the effects of accountability. Psychological Bulletin, 125, 255–275.
Lewin, K. (1935). A dynamic theory of personality. McGraw-Hill.
Lieder, F., & Griffiths, T. L. (2020). Resource-rational analysis: Understanding human cognition as the optimal use of limited computational resources. Behavioral and Brain Sciences, 43, e1.
Lipton, P. (2004). Inference to the best explanation (2nd ed.). Routledge.
Loewenstein, G. (1987). Anticipation and the valuation of delayed consumption. The Economic Journal, 97, 666–684.
Loewenstein, G. F., Thompson, L., & Bazerman, M. H. (1989). Social utility and decision making in interpersonal contexts. Journal of Personality and Social Psychology, 57, 426–441.
Loewenstein, G. F., Weber, E. U., Hsee, C. K., & Welch, N. (2001). Risk as feelings. Psychological Bulletin, 127, 267–286.
Lombrozo, T. (2007). Simplicity and probability in causal explanation. Cognitive Psychology, 55, 232–257.
Lombrozo, T. (2016). Explanatory preferences shape learning and inference. Trends in Cognitive Sciences, 20, 748–759.
Loomes, G., & Sugden, R. (1982). Regret theory: An alternative theory of rational choice under uncertainty. The Economic Journal, 92, 805–824.
Malle, B. F. (1999). How people explain behavior: A new theoretical framework. Personality and Social Psychology Review, 3, 23–48.
Mandler, J. M., & Johnson, N. S. (1977). Remembrance of things parsed: Story structure and recall. Cognitive Psychology, 9, 111–151.
Mar, R. A., Oatley, K., Djikic, M., & Mullin, J. (2010). Emotion and narrative fiction: Interactive influences before, during, and after reading. Cognition and Emotion, 25, 818–833.
Marcus, G. F., & Davis, E. (2013). How robust are probabilistic models of higher-level cognition? Psychological Science, 24, 2351–2360.
Markman, K. D., Gavanski, I., Sherman, S. J., & McMullen, M. N. (1993). The mental simulation of better and worse possible worlds. Journal of Experimental Social Psychology, 29, 87–109.
Marks, J., Copland, E., Loh, E., Sunstein, C. R., & Sharot, T. (2019). Epistemic spillovers: Learning others' political views reduces the ability to assess and use their expertise in nonpolitical domains. Cognition, 188, 74–84.
Marr, D. (1982). Vision. Freeman.
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50, 370–396.
Mercier, H., & Sperber, D. (2017). The enigma of reason. Harvard University Press.
Mikhail, J. (2007). Universal moral grammar: Theory, evidence, and the future. Trends in Cognitive Sciences, 11, 143–152.
Miller, G. A. (1956). The magical number seven, plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63, 81–97.
Mitchell, J. P., Banaji, M. R., & Macrae, C. N. (2005). The link between social cognition and self-referential thought in the medial prefrontal cortex. Journal of Cognitive Neuroscience, 17, 1306–1315.
Moors, A., & De Houwer, J. (2010). Automatic appraisal of motivational valence: Motivational affective priming and Simon effects. Cognition and Emotion, 15, 749–766.
Mulligan, E. J., & Hastie, R. (2005). Explanations determine the impact of information on financial investment judgments. Journal of Behavioral Decision Making, 18, 145–156.
Murphy, G. L., & Medin, D. L. (1985). The role of theories in conceptual coherence. Psychological Review, 92, 289–316.
Murphy, G. L., & Ross, B. H. (1994). Predictions from uncertain categorizations. Cognitive Psychology, 27, 148–193.
Nichols, S. (2002). Norms with feeling: Towards a psychological account of moral judgment. Cognition, 84, 221–236.
Nichols, S., & Knobe, J. (2007). Moral responsibility and determinism: The cognitive science of folk intuitions. Noûs, 41, 663–685.
Norenzayan, A., Atran, S., Faulkner, J., & Schaller, M. (2006). Memory and mystery: The cultural selection of minimally counterintuitive narratives. Cognitive Science, 30, 531–553.
Nussbaum, M. C. (2001). Upheavals of thought: The intelligence of emotions. Cambridge University Press.
Nyman, R., Kapadia, S., & Tuckett, D. (2021). News and narratives in financial systems: Exploiting big data for systemic risk assessment. Journal of Economic Dynamics and Control, 127, 104119.
Oatley, K., & Johnson-Laird, P. N. (1987). Towards a cognitive theory of emotions. Cognition and Emotion, 1, 29–50.
Panksepp, J. (1998). The periconscious substrates of consciousness: Affective states and the evolutionary origins of the self. Journal of Consciousness Studies, 5, 566–582.
Paul, L. A. (2014). Transformative experience. Oxford University Press.
Payne, J. W., Bettman, J. R., & Johnson, E. J. (1988). Adaptive strategy selection in decision making. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 534–552.
Pearl, J. (2000). Causality: Models, reasoning, and inference. Cambridge University Press.
Pennington, N., & Hastie, R. (1986). Evidence evaluation in complex decision making. Journal of Personality and Social Psychology, 51, 242–258.
Pennington, N., & Hastie, R. (1988). Explanation-based decision making: Effects of memory structure on judgment. Journal of Experimental Psychology: Learning, Memory, and Cognition, 14, 521–533.
Pennington, N., & Hastie, R. (1992). Explaining the evidence: Tests of the Story Model for juror decision making. Journal of Personality and Social Psychology, 62, 189–206.
Pennington, N., & Hastie, R. (1993). Reasoning in explanation-based decision making. Cognition, 49, 123–163.
Rand, D. G., & Nowak, M. A. (2013). Human cooperation. Trends in Cognitive Sciences, 17, 413–425.
Rawls, J. (2001). Justice as fairness: A restatement (Kelly, E., Ed.). Harvard University Press.
Raz, J. (1986). The morality of freedom. Clarendon Press.
Read, S. J., & Marcus-Newhall, A. (1993). Explanatory coherence in social explanations: A parallel distributed processing account. Journal of Personality and Social Psychology, 65, 429–447.
Richard, R., van der Pligt, J., & de Vries, N. (1996). Anticipated affect and behavioral choice. Basic and Applied Social Psychology, 18, 111–129.
Richardson, S., Dohrenwend, B. S., & Klein, D. (1965). Interviewing: Its forms and functions. Basic Books.
Ridley, M. (2020). How innovation works. Fourth Estate.
Rips, L. J. (2010). Two causal theories of counterfactual conditionals. Cognitive Science, 34, 175–221.
Rolls, E. T. (2014). Emotion and decision-making explained. Oxford University Press.
Rosch, E., Mervis, C. B., Gray, W. D., Johnson, D. M., & Boyes-Braem, P. (1976). Basic objects in natural categories. Cognitive Psychology, 8, 382–439.
Rosner, A., Basieva, I., Barque-Duran, A., Glöckner, A., von Helversen, B., Khrennikov, A., & Pothos, E. M. (2022). Ambivalence in decision making: An eye tracking study. Cognitive Psychology, 134, 101464.
Rothman, N. B., Pratt, M. G., Rees, L., & Vogus, T. J. (2017). Understanding the dual nature of ambivalence: Why and when ambivalence leads to good and bad outcomes. Academy of Management Annals, 11, 33–72.
Rottman, B. M., & Hastie, R. (2014). Reasoning about causal relationships: Inferences on causal networks. Psychological Bulletin, 140, 109–139.
Rottman, B. M., & Keil, F. C. (2012). Causal structure learning over time: Observations and interventions. Cognitive Psychology, 64, 93–125.
Rozenblit, L., & Keil, F. C. (2002). The misunderstood limits of folk science: An illusion of explanatory depth. Cognitive Science, 26, 521–562.
Rozin, P., & Fallon, A. E. (1987). A perspective on disgust. Psychological Review, 94, 23–41.
Rubin, D. C. (1995). Memory in oral traditions: The cognitive psychology of epic, ballads, and counting-out rhymes. Oxford University Press.
Rucker, D. D., Tormala, Z. L., Petty, R. E., & Briñol, P. (2014). Consumer conviction and commitment: An appraisal-based framework for attitude certainty. Journal of Consumer Psychology, 24, 119–136.
Rumelhart, D. E. (1975). Notes on a schema for stories. In Bobrow, D. G. & Collins, A. (Eds.), Representation and understanding: Studies in cognitive science (pp. 211–236). Academic Press.
Sanborn, A. N., & Chater, N. (2016). Bayesian brains without probabilities. Trends in Cognitive Sciences, 20, 883–893.
Sanborn, A. N., Griffiths, T. L., & Navarro, D. J. (2010). Rational approximations to rational models: Alternative algorithms for category learning. Psychological Review, 117, 1144–1167.
Savage, L. J. (1954). The foundations of statistics. Wiley.
Schacter, D. L., & Addis, D. R. (2007). The ghosts of past and future: A memory that works by piecing together bits of the past may be better suited to simulating future events than one that is a store of perfect records. Nature, 445, 27.
Schacter, D. L., Addis, D. R., & Buckner, R. L. (2008). Episodic simulation of future events: Concepts, data, and applications. Annals of the New York Academy of Sciences, 1124, 39–60.
Schank, R. C., & Abelson, R. P. (1977). Scripts, plans, goals, and understanding: An inquiry into human knowledge structures. Erlbaum.
Schwarz, N. (1990). Feelings as information: Informational and motivational functions of affective states. In Sorrentino, R. M. & Higgins, E. T. (Eds.), Handbook of motivation and cognition: Cognitive foundations of social psychology (Vol. 2, pp. 527–561). Guilford Press.
Shafir, E., Simonson, I., & Tversky, A. (1993). Reason-based choice. Cognition, 49, 11–36.
Shefrin, H. (2002). Beyond greed and fear: Understanding behavioral finance and the psychology of investing (2nd ed.). Oxford University Press.
Shen, F., Ahern, L., & Baker, M. (2014). Stories that count: Influence of news narratives on issue attitudes. Journalism and Mass Communication Quarterly, 91, 98–117.
Shiller, R. J. (1981). Do stock prices move too much to be justified by subsequent changes in dividends? American Economic Review, 71, 421–436.
Shiller, R. J. (2019). Narrative economics: How stories go viral and drive major economic events. Princeton University Press.
Shtulman, A. (2017). Scienceblind: Why our intuitive theories about the world are so often wrong. Basic Books.
Sibley, W. M. (1953). The rational and the reasonable. Philosophical Review, 62, 554–560.
Silvia, P. J. (2008). Interest – the curious emotion. Current Directions in Psychological Science, 17, 57–60.
Simon, H. A. (1955). A behavioral model of rational choice. Quarterly Journal of Economics, 69, 99–118.
Simon, H. A. (1957). Models of man. Wiley.
Sinnott-Armstrong, W. (1985). Moral dilemmas and incomparability. American Philosophical Quarterly, 22, 321–329.
Sloman, S. (2005). Causal models. Oxford University Press.
Sloman, S., & Fernbach, P. (2017). The knowledge illusion: Why we never think alone. Penguin Random House.
Sloman, S. A., & Hagmayer, Y. (2006). The causal psycho-logic of choice. Trends in Cognitive Sciences, 10, 407–412.
Smelser, N. J. (1998). The rational and the ambivalent in the social sciences. American Sociological Review, 63, 1–16.
Smith, C. A., & Ellsworth, P. C. (1985). Patterns of cognitive appraisal in emotion. Journal of Personality and Social Psychology, 48, 813–838.
Smith, D., Schlaepfer, P., Major, K., Dyble, M., Page, A. E., Thompson, J., … Migliano, B. (2017). Cooperation and the evolution of hunter–gatherer storytelling. Nature Communications, 8, 1853.
Sparrow, B., Liu, J., & Wegner, D. M. (2011). Google effects on memory: Cognitive consequences of having information at our fingertips. Science, 333, 776–778.
Sperber, D. (1996). Explaining culture: A naturalistic approach. Blackwell.
Spirtes, P., Glymour, C., & Scheines, R. (1993). Causation, prediction, and search. Springer.
Steiger, J. H., & Gettys, C. F. (1972). Best-guess errors in multistage inference. Journal of Experimental Psychology, 92, 1–7.
Steyvers, M., Tenenbaum, J. B., Wagenmakers, E., & Blum, B. (2003). Inferring causal networks from observations and interventions. Cognitive Science, 27, 453–489.
Strickland, B., Silver, I., & Keil, F. C. (2016). The texture of causal construals: Domain-specific biases shape causal inferences from discourse. Memory & Cognition, 45, 442–455.
Suddendorf, T., & Corballis, M. C. (1997). Mental time travel and the evolution of the human mind. Genetic, Social, and General Psychology Monographs, 123, 133–167.
Suddendorf, T., & Corballis, M. C. (2007). The evolution of foresight: What is mental time travel, and is it unique to humans? Behavioral and Brain Sciences, 30, 299–313.
Sunstein, C. R. (2005). Moral heuristics. Behavioral and Brain Sciences, 28, 531–573.
Sussman, A. B., Khemlani, S. S., & Oppenheimer, D. M. (2014). Latent scope bias in categorization. Journal of Experimental Social Psychology, 52, 1–8.
Sussman, A. B., & Oppenheimer, D. M. (2020). The effect of effects on effectiveness: A boon-bane asymmetry. Cognition, 199, 104240.
Taleb, N. N. (2001). Fooled by randomness: The hidden role of chance in life and in the markets. Random House.
Tenenbaum, J. B., Kemp, C., Griffiths, T. L., & Goodman, N. D. (2011). How to grow a mind: Statistics, structure, and abstraction. Science, 331, 1379–1385.
Tennie, C., Frith, U., & Frith, C. D. (2010). Reputation management in the age of the world-wide web. Trends in Cognitive Sciences, 14, 482–488.
Tetlock, P. E. (2003). Thinking the unthinkable: Sacred values and taboo cognitions. Trends in Cognitive Sciences, 7, 320–324.
Thagard, P. (1989). Explanatory coherence. Behavioral and Brain Sciences, 12, 435–502.
Thorndyke, P. W. (1977). Cognitive structures in comprehension and memory of narrative discourse. Cognitive Psychology, 9, 77–110.
Todd, P. M., & Gigerenzer, G. (2007). Environments that make us smart: Ecological rationality. Current Directions in Psychological Science, 16, 167–171.
Todorov, A. (2008). Evaluating faces on trustworthiness: An extension of systems for recognition of emotions signaling approach/avoidance behaviors. Annals of the New York Academy of Sciences, 1124, 208–224.
Trope, Y., & Liberman, N. (2003). Temporal construal. Psychological Review, 110, 403–421.
Tuckett, D. (2011). Minding the markets: An emotional finance view of financial instability. Palgrave Macmillan.
Tuckett, D. (2012). Financial markets are markets in stories: Some possible advantages of using interviews to supplement existing economic data sources. Journal of Economic Dynamics and Control, 36, 1077–1087.
Tuckett, D. (2017). Why observation of the behaviour of human actors and how they combine within the economy is an important next step. Paper presented at the Institute of New Economic Thinking: Future of Macroeconomics Conference, Edinburgh, UK.
Tuckett, D., Boulton, M., & Olson, C. (1985). A new approach to the measurement of patients' understanding of what they are told in medical consultations. Journal of Health and Social Behavior, 26, 27–38.
Tuckett, D., Holmes, D., Pearson, A., & Chaplin, G. (2020). Monetary policy and the management of uncertainty: A narrative approach. Bank of England Working Paper.
Tuckett, D. (in press). Feelings, narratives, and mental states: How neuroeconomics can shift the paradigm. In Kirman, A. & Teschi, M. (Eds.), The state of mind in economics. Cambridge University Press.
Tuckett, D., & Nikolic, M. (2017). The role of conviction and narrative in decision-making under radical uncertainty. Theory & Psychology, 27, 501–523.
Tuckett, D., & Nyman, R. (2018). The relative sentiment shift series for tracking the economy. UCL Working Paper.
Tuckett, D., Smith, R. E., & Nyman, R. (2014). Tracking phantastic objects: A computer algorithmic investigation of narrative evolution in unstructured data sources. Social Networks, 38, 121–133.
Tuckett, D., & Taffler, R. (2008). Phantastic objects and the financial market's sense of reality: A psychoanalytic contribution to the understanding of stock market instability. International Journal of Psychoanalysis, 89, 389–412.
Tversky, A. (1972). Elimination by aspects: A theory of choice. Psychological Review, 79, 281–299.
Tversky, A., & Kahneman, D. (1971). Belief in the law of small numbers. Psychological Bulletin, 76, 105–110.
Tversky, A., & Kahneman, D. (1974). Judgment under uncertainty: Heuristics and biases. Science, 185, 1124–1131.
Tversky, A., & Kahneman, D. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Tversky, A., & Kahneman, D. (1980). Causal schemas in judgments under uncertainty. In Fishbein, M. (Ed.), Progress in social psychology (pp. 49–72). Erlbaum.
Uhlmann, E. L., Pizarro, D. A., & Diermeier, D. (2015). A person-centered approach to moral judgment. Perspectives on Psychological Science, 10, 72–81.
Van Laer, T., de Ruyter, K., Visconti, L. M., & Wetzels, M. (2014). The extended transportation-imagery model: A meta-analysis of the antecedents and consequences of consumers' narrative transportation. Journal of Consumer Research, 40, 797–817.
Van Osselaer, S. M. J., & Janiszewski, C. (2012). A goal-based model of product evaluation and choice. Journal of Consumer Research, 39, 260–292.
Vasilyeva, N., Blanchard, T., & Lombrozo, T. (2018). Stable causal relationships are better causal relationships. Cognitive Science, 42, 1265–1296.
Vasilyeva, N., Wilkenfeld, D., & Lombrozo, T. (2017). Contextual utility affects the perceived quality of explanations. Psychonomic Bulletin & Review, 24, 1436–1450.
Volz, K. G., & Gigerenzer, G. (2012). Cognitive processes in decisions under risk are not the same as in decisions under uncertainty. Frontiers in Neuroscience, 6, 105.
Von Helmholtz, H. (2005). Treatise on physiological optics (Vol. III). Dover. (Original work published 1867).
Von Neumann, J., & Morgenstern, O. (1944). Theory of games and economic behavior. Princeton University Press.
Walasek, L., & Brown, G. D. A. (2021). Incomparability and incommensurability in choice: No common currency of value? Warwick Working Paper.
Wegner, D. M. (1987). Transactive memory: A contemporary analysis of the group mind. In Mullen, B. & Goethals, G. R. (Eds.), Theories of group behavior (pp. 185–208). Springer-Verlag.
Wilson, T. D., & Gilbert, D. T. (2005). Affective forecasting: Knowing what to want. Current Directions in Psychological Science, 14, 131–134.
Woodward, J. (2003). Making things happen: A theory of causal explanation. Oxford University Press.
Zacks, J. M., & Tversky, B. (2001). Event structure in perception and conception. Psychological Bulletin, 127, 3–21.
Zeelenberg, M., & Pieters, R. (2004). Consequences of regret aversion in real life: The case of the Dutch postcode lottery. Organizational Behavior and Human Decision Processes, 93, 155–168.
Zemla, J. C., Sloman, S., Bechlivanidis, C., & Lagnado, D. A. (2017). Evaluating everyday explanations. Psychonomic Bulletin & Review, 24, 1488–1500.
Zhu, J., & Murphy, G. L. (2013). Influence of emotionally charged information on category-based induction. PLoS ONE, 8, e54286.

Figure 1. The logic of decision. Decisions reflect both data picked up from the external world – including the social environment – and internally derived goals. The mediation problem (dashed lines) reflects the need for an internal representation – a currency of thought – that can mediate between data from the external world and actions decided internally. The combination problem (gray lines) reflects the need for a process – a driver of action – that can combine beliefs and goals to yield actions. In classical decision theory, the currency of thought is probability and the driver of action is expected utility maximization. In CNT, the currency of thought is narratives, and the driver of action is affective evaluation.

Table 1. Elements of Conviction Narrative Theory

Table 2. Propositions of Conviction Narrative Theory

Figure 2. Representations and processes in Conviction Narrative Theory. Narratives, supplied in part by the social environment, are used to explain data. They can be run forward in time to simulate imagined futures, which are then evaluated affectively considering the decision-maker's goals. These appraisals of narratives then govern our choice to approach or avoid those imagined futures. The figure also depicts two feedback loops: Fragments of narratives that are successfully used may be communicated recursively back to the social context, evolving narratives socially, and our actions generate new data that can lead us to update narratives, evolving narratives individually. (Block arrows depict representations; rectangles depict processes; circles depict sources of beliefs and values, which are inputs to processes via thin arrows.)
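For readers who find pseudocode helpful, the loop depicted in Figure 2 can be sketched schematically as follows. This is a reading aid only, not part of the original article: the candidate narratives, fit scores, and affective values are invented placeholders standing in for the explanation, simulation, and affective-evaluation processes in the figure.

```python
# Toy, self-contained sketch of the Figure 2 cycle (illustrative assumptions only).
candidate_narratives = [
    {"name": "recovery",   "fit_to_data": 0.7, "imagined_future": "growth",  "affect": +0.6},
    {"name": "stagnation", "fit_to_data": 0.5, "imagined_future": "decline", "affect": -0.4},
]

def conviction_cycle(narratives):
    # Explanation: adopt the narrative that best accounts for the available data.
    adopted = max(narratives, key=lambda n: n["fit_to_data"])
    # Simulation: run the adopted narrative forward to an imagined future.
    future = adopted["imagined_future"]
    # Affective evaluation: appraise how that future feels relative to one's goals,
    # then approach futures that feel good and avoid ones that feel bad.
    action = "approach" if adopted["affect"] > 0 else "avoid"
    return adopted["name"], future, action

print(conviction_cycle(candidate_narratives))  # ('recovery', 'growth', 'approach')
```

The feedback loops in the figure (communication back to the social environment and updating on new data) would wrap around repeated calls to such a cycle; they are omitted here for brevity.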

Figure 3. Common causal structures in narratives. In panel A (a causal collider), multiple potential causes (A or B) could explain an event (C); a typical inference problem would be to evaluate A and B as potential explanations of observation C, which may in turn license other inferences about effects of A or B (not depicted here). In panel B (a causal chain), a sequence of causally related events (D, E, F, G) is posited; typical inference problems would be to evaluate whether the overall sequence of events is plausible, or whether an intermediate event (E) is plausible given that the other events (D, F, G) were observed. In panel C (a causal web), many event types (H–N) are thought to be related to one another, with some relationships positive and others negative, and some bidirectional; typical inference problems would be to evaluate the plausibility of individual links or to infer the value of one variable from the others. In panel D (agent-causation), an agent (Q) considers taking an action (P), based partly on reasons (O) and their judgment of the action itself (P); typical inference problems would be to predict the agent's action based on the available reasons, or infer the agent's reasons based on their actions. [Circles and squares depict events and agents, respectively; straight arrows depict causal relationships, which could be unidirectional or bidirectional, positive (default or with a “+” sign) or negative (with a “–” sign); curved, diamond-tipped arrows depict reasons. For causation among events and agents, but not event-types (panels A, B, and D), left–right orientation depicts temporal order.]
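The structures in panels A and B can also be written down as simple directed graphs. The sketch below is ours, not the authors': node labels follow the figure, and the two helper functions merely illustrate the kinds of queries described above (finding candidate explanations of an event, and tracing what an event's occurrence implies downstream); CNT does not prescribe any particular encoding.

```python
# Minimal sketch: the collider (panel A) and chain (panel B) as adjacency lists.
collider = {"A": ["C"], "B": ["C"], "C": []}               # A -> C <- B
chain = {"D": ["E"], "E": ["F"], "F": ["G"], "G": []}      # D -> E -> F -> G

def potential_causes(graph, event):
    """Nodes with a direct causal arrow into `event` (candidate explanations)."""
    return [node for node, effects in graph.items() if event in effects]

def downstream(graph, event):
    """Every event reachable from `event` by following causal arrows forward."""
    frontier, seen = list(graph[event]), set()
    while frontier:
        node = frontier.pop()
        if node not in seen:
            seen.add(node)
            frontier.extend(graph[node])
    return sorted(seen)

print(potential_causes(collider, "C"))  # ['A', 'B']: rival explanations of C
print(downstream(chain, "D"))           # ['E', 'F', 'G']: what D's occurrence implies
```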

Figure 4. Analogical, valence, and causal structure. In panel A (analogical structure), one causal chain (R1, S1, T1) is analogized to another (R2, S2, T2); typical inference patterns would be to reason from a known sequence of specific events (R1, S1, T1), or from a schematized depiction of a general causal mechanism, to infer the causal–temporal order of a new sequence (R2, S2, T2), or to infer missing events (T2) given that all other events are observed. In panel B (valence structure), positive event types (U, V, W) are seen as bidirectionally and positively related to each other, negative event types (X, Y, Z) are seen as bidirectionally and positively related to each other, whereas negative and positive events are seen as bidirectionally and negatively related to each other (Leiser & Aroch, 2009). (Dashed lines represent analogical correspondences; white and black circles represent “good” and “bad” events or event types, respectively.)

Figure 5. Possible narratives around a global pandemic. Panel A depicts one possible individual's narrative around a global pandemic, which aligns largely with the mainstream view. Infections and deaths (which are bad) are negatively related to interventions such as social distancing, masks, and vaccines, which are themselves results of government action. The government chose these actions for the reason that they would have a preventive effect on deaths. The causal link between each intervention and infection is supported by an analogy to other diseases, such as influenza (i.e., staying home from work, covering one's mouth when coughing, and vaccines all help to prevent flu infections). Panel B depicts one possible conspiratorial narrative around a pandemic. In this narrative, global elites control the government and are acting to increase their profits, which can be accomplished through several channels, including economic distress, population reduction, and mind control. These causal links to profitability are supported by their own analogies (e.g., the global financial crisis and subliminal messaging being ways that bankers, corporations, and other elites are thought to increase their profits), as is the idea that the government is captured by unelected elites such as lobbyists for big business. In this narrative, social distancing has little effect on the spread of disease but a strong link to intentional economic distress; masks and vaccines increase infection and death rather than preventing them. For this reason, interventions that are seen as good in the mainstream narrative (because they have a preventive relationship with death) are seen as bad in the conspiracy narrative. These hypothetical narratives will be supported by different social and informational environments, yield conflicting forecasts about the future, and motivate distinctive actions.

Figure 6. Economic narratives from linguistic and interview data. Panels A–C depict simplified versions of three narratives drawn from Shiller's (2019) linguistic studies of viral economic narratives and Tuckett's (2011) interview studies of money managers. In panel A, a generic causal mechanism of machinery generally leading to increased efficiency (Ma1 and Ef1) is analogized to machinery in one's particular industry leading to increased efficiency in that industry (Ma2 and Ef2). Efficiency is thought to cause unemployment (UN) directly by displacing human workers and indirectly through underconsumption (UC). Because unemployment is seen as bad, all other variables in the causal chain are inferred to be bad too. In panel B, greedy businessmen (GB) are inspired by the opportunity of World War I (W1) to increase prices (HP), which leads to inflation (Inf). A boycott (By) is thought to reduce demand (RD), which would in turn push prices back down (negative effect on HP). Since inflation and the greedy businessmen who cause it are bad, the countervailing boycott chain is perceived as good. In panel C, negative news about a company (NN) is thought to affect its stock price at an initial time (P1), but only the company's fundamentals (F) affect its stock price later (P2). Other investors (Other) are less observant and only act based on the negative news, but our investment firm (Us) is more observant and sees the fundamentals, creating a profit opportunity.