
Industrial Distraction

Published online by Cambridge University Press:  16 January 2025

Cailin O’Connor
Affiliation:
Department of Logic and Philosophy of Science, University of California, Irvine, CA, USA
David Peter Wallis Freeborn*
Affiliation:
Department of Philosophy, Northeastern University, London, UK
Corresponding author: David Peter Wallis Freeborn; Email: [email protected]

Abstract

There are myriad techniques industry actors use to shape the public understanding of science. While a naive view might assume these techniques typically involve fraud or outright deception, the truth is more nuanced. This paper analyzes industrial distraction, a common technique where industry actors fund and share research that is accurate, often high-quality, but nonetheless misleading on important matters of fact. This involves reshaping causal understanding of phenomena with distracting information. Using case studies and causal models, we illustrate how this impacts belief and decision making even for rational learners, informing science policy and debates about misleading content.

Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of the Philosophy of Science Association

1. Introduction

Over the past few decades the Coca-Cola company has engaged in an extensive campaign to fund and share research on the benefits of exercise to health, and especially its impacts on weight and diet-related diseases (Serodio et al. 2020; Wood et al. 2020; Nestle 2015; O’Connor 2015; Greenhalgh 2024; Carpenter 2025). In response, scientists have raised the alarm about the potential for negative health effects from this campaign. For example, in 2017 the Union of Concerned Scientists published a report documenting Coca-Cola’s influence on the sciences of sugar, obesity, and exercise (Union of Concerned Scientists 2017). Notably, though, these scientists made no accusations of fraud, questionable research practices, or lying. Neither did they suggest that the research funded by Coca-Cola was itself bad or inaccurate. What, we might ask, is wrong with a company giving money to otherwise independent scientists to do research on a topic of interest to public health?

The worry is that even good science on exercise can shift blame for public health problems away from Coca-Cola products, and towards sedentary lifestyles. This type of technique—funding and sharing accurate, often high-quality, often independent, research with the goal of distraction—is one that has been used extensively in the history of industry influence on science (Proctor 1995, 2012; Oreskes and Conway 2011). In this paper we analyze this sort of technique, which we call industrial distraction.Footnote 1 We use both case studies and causal models to show how and why industrial distraction works, and to identify a few variations of the technique.

At its heart, industrial distraction involves changing how targets understand some causal system in the world. Typically it shifts public understanding towards some distracting potential cause of a public harm, and away from a known industrial cause of the same harm. A second variation uses inaccurate information to introduce distracting mitigants of industrial harms. And a last variant shifts public beliefs about downstream effects of policies to focus on distracting harms they may cause.

One reason it is important to understand and analyze industrial distraction is that it does not fit with a naive understanding of how industry influences public opinion about science. A typical picture focuses on the production of fraudulent or influenced research, and/or the sharing of inaccurate, false, or deceptive scientific claims. While this does happen, it is far from the only method of industry influence (Lesser et al. 2007; Oreskes and Conway 2011; Bes-Rastrollo et al. 2013; Proctor 2012, 1995; O’Connor and Weatherall 2019b). Industrial distraction does not work this way. Nonetheless, as our models will illustrate, it can shift public belief in harmful ways, and, as a result, shift policy decisions in harmful ways. As our models also show, this sort of harm need not depend on human fallibility—even fully rational learners and decision makers can err in the presence of industrial distraction.

Recent research has highlighted a suite of industry techniques that avoid moral and legal censure by technically “playing by the rules” (Oreskes and Conway 2011; Holman 2015; Holman and Bruner 2017; Weatherall et al. 2020; Greenhalgh 2024). In order to properly regulate industry influence, then, policy makers must be able to recognize how industrial actors can skirt current norms and regulations and nonetheless influence policy outcomes. Industrial distraction is one more technique in this vein. We argue that, given the presence of these techniques, policies are needed to more stringently separate industry from science, and to regulate how industry communicates with the public about science.

This paper will also be relevant to both philosophical and policy debates about how to understand misinformation, disinformation, and misleading content. While this kind of content is often defined as “false” or “inaccurate,” it is increasingly recognized that true and accurate content can mislead, industrial distraction arguably providing one example (Fallis 2015; Wardle and Derakhshan 2017). The ubiquity of accurate but misleading content online leads to thorny questions about how best to regulate both social and traditional media. Relatedly, our analysis will be relevant to philosophical debates about how to characterize and identify illegitimate scientific dissent.

On one last note, there has been a great deal of excellent historical investigation into the details of industrial influence on public health.Footnote 2 Many of these investigations carefully outline various details of industrial strategy. What philosophers of science and social epistemologists have added to this research are systemic analyses of the epistemic impacts of industrial propaganda. These are formal and theoretical understandings of just how and why propaganda of various sorts can impact belief. This paper follows in this vein.

The paper will proceed as follows. Section 2 will introduce Bayesian causal models, giving the background information necessary to model various types of industrial distraction. Section 3 will discuss cases where industry shifts beliefs about causes of an industrial harm, and develop causal models that illustrate how this sort of industrial distraction works. The next section, 4, looks at cases where industry introduces spurious mitigants of industry harms. And section 5 analyzes cases where industry shifts understandings of the effects of policy. As will become clear, these three varieties of industrial distraction all work differently, though they all can be effective. In section 6 we discuss what this means for policy regulation of industry influence on science and public belief, and for thinking about misleading content more generally.

2. Causal models

Causal models provide a useful framework for analyzing the various techniques of industrial distraction both because they illuminate the logic of these strategies, and because they make clear how even rational learners are misled by them. In fact, recent work in philosophy and the social sciences has demonstrated how this sort of model is useful to understanding a suite of phenomena related to false belief, propaganda, and polarization (Freeborn 2023, 2024; Eliaz et al. 2022; Jern et al. 2014; Eliaz and Spiegler 2024; Spiegler 2020).

Causal models offer formal representations of systems with multiple stochastic variables and causal relationships between them. For example, when studying obesity in humans these variables could represent the events that some population (i) drinks sugary drinks, (ii) has high rates of sedentary lifestyles, and (iii) exhibits high levels of obesity. Causal models allow us to reason about cause-and-effect relationships between these variables, to predict how changes in one variable might influence others, and to estimate the effects of specific interventions. In addition, as we will see, they allow us to represent how an ideal learner might update their beliefs about such a causal system in light of new evidence.

2.1. Causal Bayesian networks

Bayesian networks are one popular type of causal model, which allow for consistent probabilistic reasoning (Pearl 2009; Spirtes et al. 2000). A Bayesian network represents a probabilistic system using a directed acyclic graph. These graphs consist of nodes and directed edges (arrows) between them. (They are “acyclic” because these arrows never form closed loops between the nodes, as will become clear shortly.) We can fully specify a Bayesian network by:

  • A set of $n$ random variables ${\bf{X}} = \left\{ {{X_1}, \ldots, {X_n}} \right\}$ . For example, these variables could be obesity, a sedentary lifestyle, and intake of sugar. Each variable is associated with a node on the graph.

  • A set of directed edges, ${\bf{E}}$ , between nodes. Each edge represents a probabilistic relationship between the variables. For example, if sedentary lifestyles increase the probability of obesity, then there could be an edge pointing from sedentary lifestyles to obesity. If there is a directed edge from node ${X_i}$ to node ${X_j}$ , we call ${X_i}$ a “parent” of ${X_j}$ , and ${X_j}$ a “child” of ${X_i}$ .

  • Conditional probability distributions ${\rm{P}}({X_i}\ |\ {\rm{Pa}}\left( {{X_i}} \right))$ for each random variable ${X_i}$ , where ${\rm{Pa}}\left( {{X_i}} \right)$ denotes the parents of ${X_i}$ .Footnote 3

These probability distributions determine how nodes are probabilistically related to each other. For example, they might specify a strong link between sugar intake and obesity, or else a weak one. Together, these conditional distributions must be probabilistically consistent with each other.Footnote 4 Note that in the following we will label the two possible values for any binary variables true or false. For instance, ${\rm{P}}(X = {\rm{true}}\ |\ Y = {\rm{true}})$ will give the probability that variable $X$ is true conditional on $Y$ being true. Occasionally, it will be convenient to omit the values of variables, for instance when discussing independence. For example, ${\rm{P}}\left( X \right) = {\rm{P}}(X\ |\ Y)$ means that variable $X$ is independent of variable $Y$ .
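
To make this specification concrete, the following minimal Python sketch encodes a small network of this kind: a dictionary giving each node’s parents, and one conditional probability table per node. The variables follow the running obesity example; the numerical entries are illustrative assumptions of ours, not values taken from any figure.

```python
# Encode the three-variable network from the bullet points above:
# I (sugar intake) -> U (obesity) <- D (sedentary lifestyle).
parents = {"I": [], "D": [], "U": ["I", "D"]}

# One conditional probability table per node, keyed by the values of the
# node's parents. Each entry gives P(node = true | parent values).
# These particular numbers are illustrative assumptions only.
cpt = {
    "I": {(): 0.8},
    "D": {(): 0.8},
    "U": {(False, False): 0.1, (True, False): 0.8,
          (False, True): 0.8, (True, True): 0.9},
}

def joint(assignment):
    """P(X1, ..., Xn) as the product of P(Xi | Pa(Xi)) (see footnote 4)."""
    p = 1.0
    for node, pa in parents.items():
        p_true = cpt[node][tuple(assignment[q] for q in pa)]
        p *= p_true if assignment[node] else 1 - p_true
    return p

print(joint({"I": True, "D": True, "U": True}))  # 0.8 * 0.8 * 0.9 = 0.576
```

The joint function implements the factorization from footnote 4, so any marginal or conditional probability can be computed by summing it over the relevant assignments; the later sketches below do exactly that.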

When we learn some new piece of information, $E$ , the probabilities in the network can remain consistent by updating through Bayesian conditionalization, ${{\rm{P}}_{{\rm{new}}}}\left( {{X_i}} \right) = {\rm{P}}({X_i}\ |\ E = {\rm{true}})$ . As such, Bayesian networks can provide a model of rational learning. The nodes represent events that might hold, the edges their probabilistic relationships, and the constraints of the model specify how a rational agent should update their beliefs about all these events.

For example, suppose that high pollen count ( $P$ ) and colds ( $C$ ) are two independent causes of a bout of sneezing ( $S$ ). Then, we can represent this situation with the Bayesian network in figure 1. Both variables increase the probability that one experiences a bout of sneezing according to the conditional probabilities given in the corresponding table.Footnote 5 Then, learning either that the pollen count is high or that I have caught a cold should increase my credence that I will have a bout of sneezing today. Alternatively, experiencing a bout of sneezing should increase my credences that the pollen count is high and that I have a cold.

Figure 1. A causal graph and associated conditional probability table representing two possible causes, high pollen count ( $P$ ) or a cold ( $C$ ), of sneezing ( $S$ ). We assume that these two causes are independent.

To give an example, according to this Bayesian network, if I start with a prior belief of $0.5$ that the pollen count is high, and a prior belief of $0.5$ that I have a cold, then my prior degree of belief that I will experience a bout of sneezing should be $0.65$ . Suppose that I do start experiencing such a bout of sneezing. Then I can use this observation, plus Bayesian inference, to update my degree of belief that I have a cold, ${{\rm{P}}_{{\rm{new}}}}\left( {C = {\rm{true}}} \right) \approx 0.65$ .

We are often interested in knowing which variables are statistically dependent or independent of others. We say that variables $X$ and $Y$ are independent of each other, conditional on a set of variables ${\bf{Z}}$ , if ${\rm{P}}(X\ |\ {\bf{Z}}) = {\rm{P}}(X\ |\ Y,{\bf{Z}})$ , or equivalently ${\rm{P}}(Y\ |\ {\bf{Z}}) = {\rm{P}}(Y\ |\ X,{\bf{Z}})$ .Footnote 6 For example, in the graph in figure 1 the two possible causes, high pollen count $P$ and a cold $C$ , are independent of each other. Although they are connected by the path $P \to S \leftarrow C$ , it is blocked by a “collider” at $S$ . Roughly, we can understand this as saying that whilst both $P$ and $C$ might inform us about $S$ , they do not inform us about each other. However, $P$ and $C$ are not independent conditional on $S$ .Footnote 7 If we assume that a sneezing bout is taking place, then each of the two causes can inform us about the other. For instance, if the pollen count is high, that might explain the sneezing, so it is less likely that I have a cold. Or if I know I have a cold, this can already explain the sneezing, so it is less likely that the pollen count is high. This sort of conditional dependence will be relevant to cases we discuss below.
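
A short script makes these claims concrete. Footnote 5 fixes ${\rm{P}}(S = {\rm{true}}\ |\ P = {\rm{false}},C = {\rm{false}}) = 0.1$ ; the other three table entries below are assumptions of ours, chosen so that the worked numbers in the text (a prior of 0.65 for sneezing, and a posterior of roughly 0.65 for a cold once sneezing is observed) come out correctly. The script also checks that $P$ and $C$ are marginally independent but become dependent once we condition on the collider $S$ .

```python
from itertools import product

p_P = 0.5   # prior that the pollen count is high
p_C = 0.5   # prior that I have a cold

# P(S = true | P, C). Footnote 5 fixes the (false, false) entry at 0.1;
# the other three entries are assumptions chosen to reproduce the
# worked numbers in the text.
p_S = {(False, False): 0.1, (True, False): 0.8,
       (False, True): 0.8, (True, True): 0.9}

def joint(P, C, S):
    """Factorized joint probability P(P) * P(C) * P(S | P, C)."""
    pr = (p_P if P else 1 - p_P) * (p_C if C else 1 - p_C)
    ps = p_S[(P, C)]
    return pr * (ps if S else 1 - ps)

def prob(query, given=lambda P, C, S: True):
    """P(query | given), by summing the joint over all eight worlds."""
    worlds = list(product([False, True], repeat=3))
    den = sum(joint(*w) for w in worlds if given(*w))
    num = sum(joint(*w) for w in worlds if given(*w) and query(*w))
    return num / den

print(prob(lambda P, C, S: S))                            # P(S) = 0.65
print(prob(lambda P, C, S: C, lambda P, C, S: S))         # P(C | S) ~ 0.65
print(prob(lambda P, C, S: C, lambda P, C, S: P))         # P(C | P) = 0.5 = P(C)
print(prob(lambda P, C, S: C, lambda P, C, S: S and P))   # P(C | S, P) ~ 0.53
```

The last line is the explaining away effect described above: once a high pollen count partially accounts for the sneezing, the posterior for a cold drops from roughly 0.65 to roughly 0.53.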

3. Distracting causes

As noted, industrial distraction involves attempts to reshape the way targets understand causal relations in the world, and thus avoid undesirable outcomes for industry. We divide these attempts into several sorts—those aimed at shifting beliefs about causes of some harmful phenomenon, those aimed at (falsely) shifting beliefs about factors mitigating harmful effects, and those aimed at shifting beliefs about effects of policy interventions.

The Coca-Cola case described above is an excellent example of the first sort of industrial distraction. We have an undesirable phenomenon from the point of view of public health—obesity and obesity-related disease.Footnote 8 We have clear scientific evidence connecting the consumption of sugar-sweetened beverages, such as sodas, to weight gain, diabetes, and heart disease (Ludwig et al. 2001; Malik et al. 2006, 2010; Schulze et al. 2004; Yang et al. 2014). We have increasing public attention to this connection, and increasing action by policy makers to regulate soda (Greenhalgh 2024; Carpenter 2025).

These events create pressure on industries producing soda to disrupt public belief about its health effects, and prevent policy regulation. However, in a case like this, enough scientific evidence has accumulated to make it difficult for Coca-Cola to outright deny the causal connection between soda consumption and obesity. One way forward is to distract the public and policy makers from this connection by focusing on some other causal factor that contributes to obesity—in this case, sedentary lifestyle. By strengthening beliefs about the connection between a distraction ( $D$ ) and an undesirable outcome ( $U$ ), propagandists decrease beliefs that industry ( $I$ ) is a relevant or important cause of $U$ .

There are several ways that Coca-Cola emphasized this distracting causal pathway. First, they funded research into exercise, for example through the Global Energy Balance Network—a Coca-Cola-funded research group promoting the idea that the best way to lose weight is through exercise. Second, they widely shared research on exercise and obesity, whether or not they had funded that research. The variations in how they fund, and promote, this sort of research are many and complicated. They go beyond the scope of this paper, but interested readers can learn more in Greenhalgh (2024) or Carpenter (2025).

It is important to recognize that industrial distraction as used by Coca-Cola is very far from an isolated case. Another notable case involved the tobacco industry, which spent enormous resources sowing doubt about the connection between tobacco and diseases like lung cancer and emphysema. (As Oreskes and Conway (2011) convincingly show, tobacco pioneered many industry techniques for influencing scientific belief, so this is, in fact, an early and important example of industrial distraction.) Notably, they promoted research about alternative causes of lung disease, including asbestos exposure, air pollution, coal smoke, and even early marriage (O’Connor and Weatherall 2019b).Footnote 9 Later, when fighting consensus on the dangers of second-hand smoke, tobacco publicized alternative causes for lung disease in spouses of smokers such as “microorganisms, allergens, pesticides, herbicides, household chemicals, insect and rodent products, nitrogen and sulfur dioxides, ozone, formaldehyde, respirable dusts, radon.”Footnote 10

The sugar industry has been criticized, similarly, for funding research on the link between dietary fat and heart health in the mid-twentieth century (Kearns et al. 2016). Ironically, at the same time various industry groups connected to fatty foods, like the British Egg Marketing Board and the National Dairy Council, were funding research into the link between sugar and heart disease, and thus also attempting industrial distraction (Johns and Oppenheimer 2018).

Industrial distraction sometimes involves poor science, but not necessarily so. For example, Johns and Oppenheimer (2018) argue that in the sugar case, the industry funded mainstream researchers doing high-quality work. They argue there is little evidence that the nutrition research itself was directly impacted by industry funding. Notably, there is often no need, in industrial distraction, to promote low-quality work. There are typically multiple, real causes of some undesirable outcome, and revealing these links constitutes important research. It is just when this research is funded and communicated cynically as a distraction strategy that it tends to harm public belief.

With these cases in hand, we now turn to causal models to illuminate how this sort of technique works generally, and to illustrate how learners updating on accurate and relevant data can be misled by it.

3.1. Distracting causes model

As noted, this version of industrial distraction involves promoting an alternative cause ( $D$ ) to distract from the industry’s own causal role ( $I$ ) in an undesirable outcome ( $U$ ). Let us use the Coca-Cola case to ground our analysis. If we regard the two possible causes (e.g., a sedentary lifestyle and intake of sugary sodas) as statistically independent, one way to represent this type of distraction is with a simple causal network like the one shown in figure 2 (note that this has the same structure as the sneezing example in figure 1).

Figure 2. A causal graph in which the effect $U$ has two independent possible causes, an industrial product $I$ and a distracting cause $D$ .

Suppose that we encounter evidence that the distraction $D$ is a cause of $U$ . How should that affect our beliefs about the industrial cause, $I$ ? Well, although the variables $I$ and $D$ are marginally independent (i.e., ${\rm{P}}\left( I \right) = {\rm{P}}(I\ |\ D)$ ), they are not conditionally independent given $U$ (i.e., ${\rm{P}}(I|U) \ne {\rm{P}}(I\ |\ D,U)$ ).Footnote 11 In many instances we might already know that the undesirable effect $U$ is taking place. Or alternatively, we might acquire evidence about the causes that does not alter our beliefs about whether the effect is taking place. In either case, if $D$ can account for some or all of the effect of $U$ , then $I$ does not need to account for as much. Thus we should often rationally lower our degree of belief in $I$ being a cause of $U$ .

There are at least two different ways we could model this effect using the Bayesian network structure. In the first approach, we use the conditional probabilities to represent changes in beliefs about the causal effect of one variable on another. In other words we change the strength of the “edges” between nodes, i.e., the entries in our conditional probability tables. In the second approach, we assume a change in our marginal probabilities (the “node” itself), whilst keeping the conditional probabilities fixed. Mathematically, we can achieve the same effect either way. However, each modeling choice will require slightly different interpretations of each of the variables. Different choices will be more natural in different cases. We explore both options in turn.

3.1.1. Updating only the conditional probabilities

Suppose we use the Bayesian network and conditional probabilities in figure 2. We use the following variables to represent these events:

  • $I$ : The population has a high intake of sugary drinks.

  • $D$ : The population has high rates of sedentary lifestyles.

  • $U$ : There is an increase in obesity levels.

Suppose we begin with the prior probabilities ${\rm{P}}\left( {I = {\rm{true}}} \right) = {\rm{P}}\left( {D = {\rm{true}}} \right) = 0.8$ . Then, from the conditional probability tables, it follows that ${\rm{P}}\left( {U = {\rm{true}}} \right) \approx 0.836$ . Now suppose that we learn new information that increases our credence that sedentary lifestyles cause obesity,

$${\rm{P}}(U = {\rm{true}}\ |\ D = {\rm{true}},I = {\rm{false}}) = 0.9,$$
$${\rm{P}}(U = {\rm{true}}\ |\ D = {\rm{true}},I = {\rm{true}}) = 0.95,$$

but which does not alter our beliefs in the marginal probabilities ( ${\rm{P}}\left( I \right)$ , ${\rm{P}}\left( D \right)$ , and ${\rm{P}}\left( U \right)$ ) regarding whether obesity, rates of sugary drinks, and sedentary lifestyles are high. Furthermore, we assume that it does not alter the probability that obesity arises if neither the intake of sugary drinks nor rates of sedentary lifestyles are high, ${\rm{P}}(U = {\rm{true}}\ |\ I = {\rm{false}},D = {\rm{false}})$ .Footnote 12 Then, in order to keep the probabilities consistent, we are forced to revise our beliefs about whether sugary drinks cause obesity to arise (if sedentary lifestyles are not at high rates). Now, ${\rm{P}}(U = {\rm{true}}\ |\ I = {\rm{true}},D = {\rm{false}}) = 0.5$ , which is substantially lower than our prior belief.
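
The arithmetic here can be verified directly. In the sketch below, the table entries other than the fixed ${\rm{P}}(U = {\rm{true}}\ |\ I = {\rm{false}},D = {\rm{false}}) = 0.1$ are reconstructions of ours, chosen to be consistent with the reported marginal ${\rm{P}}\left( {U = {\rm{true}}} \right) \approx 0.836$ ; holding the marginals and that entry fixed then forces the revised value of 0.5.

```python
# Priors from the text, and the one table entry the text holds fixed.
p_I, p_D = 0.8, 0.8
fixed_00 = 0.1    # P(U = true | I = false, D = false)

# Reconstructed original table, consistent with P(U = true) ~ 0.836.
p_U = {(False, False): fixed_00, (True, False): 0.8,
       (False, True): 0.8, (True, True): 0.9}

# Mixture weights for the four parent configurations (I, D independent).
w = {(I, D): (p_I if I else 1 - p_I) * (p_D if D else 1 - p_D)
     for I in (False, True) for D in (False, True)}

p_U_marginal = sum(w[k] * p_U[k] for k in w)
print(p_U_marginal)   # ~0.836

# New evidence raises the sedentary-lifestyle entries...
new_01, new_11 = 0.90, 0.95   # P(U | I=false, D=true), P(U | I=true, D=true)
# ...so, holding P(U), P(I), P(D) and the (false, false) entry fixed,
# consistency forces the remaining entry P(U = true | I = true, D = false):
revised_10 = (p_U_marginal - w[(False, False)] * fixed_00
              - w[(False, True)] * new_01
              - w[(True, True)] * new_11) / w[(True, False)]
print(revised_10)     # 0.5, down from the original 0.8
```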

Note that this is a rational case of consistently updating beliefs in the light of evidence. Thus, if we become more persuaded that the distracting cause ( $D$ ) can explain some or all of the undesirable outcome ( $U$ ), we have less reason to ascribe some of that effect to the industrial product ( $I$ ). The result is that we rationally decrease our degree of belief that the industrial product, $I$ , causes the undesirable effect, $U$ . This is sometimes known as the explaining away effect in Bayesian epistemology (Kim and Pearl 1983; Wellman and Henrion 1993).

3.1.2. Updating only the marginal probabilities

In a causal modeling framework, it is often more mathematically natural to update the marginal probabilities, whilst leaving conditional probabilities fixed. This provides an alternative way to model the distracting causes scenario; however, it necessitates a different, less straightforward interpretation of the variables: we include causal effects within the variables themselves.

For example, we might use the variables to represent the following propositions:

  • $I$ : High sugary drink intake leads to obesity.

  • $D$ : High rates of sedentary lifestyles lead to obesity.

  • $U$ : There is an increase in obesity levels.

Let us suppose that at first, we treat the two causes as independent, and we believe that sugar-sweetened beverages are the most likely cause, whilst sedentary lifestyles are less likely, adopting the prior probabilities ${\rm{P}}\left( {I = {\rm{true}}} \right) = 0.6$ , ${\rm{P}}\left( {D = {\rm{true}}} \right) = 0.4$ . If we are sure that there really is an increase in obesity, i.e., ${\rm{P}}\left( {U = {\rm{true}}} \right) = 1$ , then by Bayesian conditionalization we should increase our degree of belief in each of these two possible causes: ${{\rm{P}}_{{\rm{new}}}}\left( {I = {\rm{true}}} \right) = {\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}}) \approx 0.77$ and ${{\rm{P}}_{{\rm{new}}}}\left( {D = {\rm{true}}} \right) = {\rm{P}}(D = {\rm{true}}\ |\ U = {\rm{true}}) \approx 0.52$ . However, these conditional probabilities are not independent: if sedentary lifestyles can explain some of the known effect, $U$ , then sugary drinks need to explain less. If we then learn that the distracting cause is true, i.e., that ${\rm{P}}\left( {D = {\rm{true}}} \right) = 1$ , then we should decrease our degree of belief in $I$ : ${{\rm{P}}_{{\rm{new}}}}\left( {I = {\rm{true}}} \right) = {\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}},D = {\rm{true}}) \approx 0.63$ .
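
These posteriors can be reproduced by brute-force enumeration over the eight possible worlds. The conditional table below is again a reconstruction of ours (the same entries as in the earlier sketches), chosen to be consistent with every posterior reported in this subsection.

```python
from itertools import product

p_I, p_D = 0.6, 0.4   # priors from the text
# Reconstructed table for P(U = true | I, D), consistent with every
# posterior reported in this subsection.
p_U = {(False, False): 0.1, (True, False): 0.8,
       (False, True): 0.8, (True, True): 0.9}

def joint(I, D, U):
    p = (p_I if I else 1 - p_I) * (p_D if D else 1 - p_D)
    pu = p_U[(I, D)]
    return p * (pu if U else 1 - pu)

def prob(query, given):
    worlds = list(product([False, True], repeat=3))
    den = sum(joint(*w) for w in worlds if given(*w))
    return sum(joint(*w) for w in worlds if given(*w) and query(*w)) / den

print(prob(lambda I, D, U: I, lambda I, D, U: U))        # P(I | U) ~ 0.77
print(prob(lambda I, D, U: D, lambda I, D, U: U))        # P(D | U) ~ 0.52
print(prob(lambda I, D, U: I, lambda I, D, U: U and D))  # P(I | U, D) ~ 0.63
```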

Once again, we can think of this as a case of the explaining away effect: ${\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}},D = {\rm{true}}) \lt {\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}})$ . This effect will arise in the simple model as long as the two possible causes, $I$ and $D$ , are probabilistically independent, are the only two possible causes, and both always positively increase the probability of $U$ being true.Footnote 13

3.2. Accurate sharing and inaccurate beliefs?

Before continuing to the next version of industrial distraction, we will take a moment to address a possible worry here. One might think that if industry is actually sharing accurate scientific data, recipients will develop accurate causal pictures of the world. In other words, although they might strengthen beliefs in a distracting cause, they will only do so in an accurate way, and thus are not harmed.

There are a few things to note here. First, as we will emphasize later, industry is often supporting and spreading real scientific information but in a cherry-picked way. Targets are receiving too much information about distracting causes, and not enough information about relevant industry causes. Even rational learners can develop inaccurate pictures of the world on the basis of good data that is cherry-picked or curated (Mohseni et al. 2022).

Second, industry is often picking distracting causes to highlight that are not currently a public focus. In other words, they cynically select distracting causes where accurate information can decrease beliefs in the strength of industry causes. It is in this sort of context that the sharing of such distracting information functions as a type of misleading content (even if it improves beliefs about a distracting cause). It misleads by shaping beliefs in such a way as to purposefully prevent effective policy.Footnote 14

Third, although we are emphasizing the role that accurate scientific information can play in industrial distraction, there is no reason that inaccurate, false, hyperbolic, or fraudulent information cannot play the same role. Furthermore, it is often the case that media coverage of science overstates the strength of results, meaning that the public may get an inaccurate picture of the strength of a distracting cause.

4. Distracting mitigations

The next sort of case occurs when industry promotes distracting mitigations to some industrial harm. To give some examples, the sugar industry promoted and publicized research into enzymes that would disrupt dental plaque, and into a tooth decay vaccine (Kearns et al. 2015). The plastic industry widely shared false claims about the effectiveness of plastic recycling (Singla 2022; Allen et al. 2024). Tobacco invented “healthier cigarettes,” like those with filters (Cummings et al. 2007).

This kind of technique again reworks the public’s causal picture. Instead of thinking that industrial product ( $I$ ) is necessarily connected to undesirable effect ( $U$ ), the public now thinks there is some mitigating factor ( $M$ ) that interrupts that causal connection. Unlike the last technique, though, this one typically must involve sharing spurious or false claims. If some mitigating factor actually could prevent industrial harms, then no industrial propaganda would be needed. Instead, because no such mitigating factors exist, industry must mislead observers as to their abilities to prevent harm. (Filters do not prevent harms from smoking, plastic recycling is mostly a myth, and there is no tooth decay vaccine.)

There are some similar cases where industry over-emphasizes the potential mitigating impacts of future technologies. In these cases, it may turn out that these technologies actually can disrupt the link between an industrial product and harms. For example, it is possible that carbon capture technologies might someday greatly mitigate the harms of fossil fuel use. But even in these cases industrial communication about these benefits should be understood as a harmful distraction technique. The benefits of these technologies are not yet clear, and they are being shared cynically to shape policy with little regard for public health.

4.1. Distracting mitigations model

To model distracting mitigation we can use a network with the same structure as in section 3.1. Here, the undesirable effect ( $U$ ) may be causally influenced by two variables, one representing the presence of an industrial product ( $I$ ), the other representing the presence of a mitigating factor ( $M$ ). For example, we could interpret the variables as follows:

  • $I$ : High sugary drink intake leads to tooth decay.

  • $M$ : There is an effective tooth decay vaccine.

  • $U$ : There is an increase in tooth decay levels.

A Bayesian network representation and possible conditional probability table are shown in figure 3. The main difference here is in the conditional probabilities.

Figure 3. A causal graph in which the effect $U$ is influenced by two causal factors, the industrial product $I$ and a mitigating factor $M$ . The conditional probability table for $U$ shows that $M$ reduces the causal effect of $I$ on $U$ .

Without the mitigating factor in play, the presence of the industrial product (e.g., sugar) increases the probability that the undesirable effect (tooth decay) will arise. However, if the mitigating factor is in play, the effects of the industrial product on the undesirable effect are greatly reduced. For instance, suppose we hold the prior probabilities ${\rm{P}}\left( {I = {\rm{true}}} \right) = 0.6$ , ${\rm{P}}\left( {M = {\rm{true}}} \right) = 0.1$ . If, say, we learn that the undesirable effect is taking place, then we should rationally update our credence in the industrial product being the cause, ${{\rm{P}}_{{\rm{new}}}}\left( {I = {\rm{true}}} \right) = {\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}}) \approx 0.93$ . After all, with this setup, the industrial product is our only likely (and therefore best) explanation of the undesirable effect. As such, the existence of the undesirable effect is itself good evidence that the industrial product is causing it.

However, suppose that we also come to believe that the mitigating variable is true (i.e., the mitigating factor is present). Now, the industrial product is a much weaker explanation. In this case, we should rationally alter our credences, ${{\rm{P}}_{{\rm{new}}}}\left( {I = {\rm{true}}} \right) = {\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}},M = {\rm{true}}) \approx 0.75$ . The industrial product may still be a cause, in spite of the mitigating factor, but it is a less convincing one. (Alternatively, in this case, we might be unsure about whether $U$ will occur in the future as a result of $I$ . If we learn that $M$ is true we decrease our belief in $U$ .)
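
The sketch below reproduces these numbers by enumeration. Since figure 3’s table is not reproduced here, the entries are assumptions of ours chosen to match the reported posteriors: the industrial product raises the probability of the undesirable effect from 0.1 to 0.9 when the mitigating factor is absent, but only from 0.1 to 0.2 when it is present.

```python
from itertools import product

p_I, p_M = 0.6, 0.1   # priors from the text
# Assumed table for P(U = true | I, M): I raises the probability of U
# from 0.1 to 0.9 without the mitigating factor, but only from 0.1 to
# 0.2 with it. Entries chosen to match the reported posteriors.
p_U = {(False, False): 0.1, (True, False): 0.9,
       (False, True): 0.1, (True, True): 0.2}

def joint(I, M, U):
    p = (p_I if I else 1 - p_I) * (p_M if M else 1 - p_M)
    pu = p_U[(I, M)]
    return p * (pu if U else 1 - pu)

def prob(query, given):
    worlds = list(product([False, True], repeat=3))
    den = sum(joint(*w) for w in worlds if given(*w))
    return sum(joint(*w) for w in worlds if given(*w) and query(*w)) / den

print(prob(lambda I, M, U: I, lambda I, M, U: U))        # P(I | U) ~ 0.93
print(prob(lambda I, M, U: I, lambda I, M, U: U and M))  # P(I | U, M) = 0.75
```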

This effect is highly analogous to the explaining away effect discussed in section 3.1. Once again, the mitigating factor and the industrial cause are no longer statistically independent once the undesirable effect is known. However, in this case, the mitigating factor serves to reduce some of the explanatory strength of the industrial product, rather than serving as a separate explanation in itself.

4.2. Distracting causes and mitigations

The effect of the mitigating factor was quite weak in this example, because we had no alternative good explanations of the undesirable effect. Notice, though, that in some of the cases above industry introduced both distracting causes and distracting mitigants. The tobacco industry emphasized the harms of asbestos, and also the mitigating hope of filters, with respect to lung cancer, for example.

Assume that a distracting explanation $D$ and a mitigating factor $M$ are both in place. Now the undesirable effect $U$ is influenced by three causal factors: the presence of the industrial product, $I$ , the mitigating factor, $M$ , and the distracting cause, $D$ . Then the false mitigating factor might cause us to further rationally reduce our degree of belief that the industrial product $I$ is responsible for the effect, analogous to the shifting causes model in section 3.1. We can represent this in a hybrid model, shown in figure 4.

Figure 4. A causal graph in which the effect $U$ is influenced by three causal factors: the industrial product $I$ , a false mitigating factor $M$ , and a distracting cause $D$ . The conditional probability table for $U$ shows that $M$ reduces the causal effect of $I$ on $U$ .

For example, suppose that we adopt the initial probabilities ${\rm{P}}\left( {I = {\rm{true}}} \right) = 0.6$ , ${\rm{P}}\left( {M = {\rm{true}}} \right) = 0.1$ , ${\rm{P}}\left( {D = {\rm{true}}} \right) = 0.4$ . These lead to a prior expectation of the undesirable effect of ${\rm{P}}\left( {U = {\rm{true}}} \right) \approx 0.63$ . Suppose we learn that the undesirable effect does take place and there is a public harm to worry about, i.e., ${\rm{P}}\left( {U = {\rm{true}}} \right) = 1$ . Then, by Bayesian conditionalization, we should update our degrees of belief as follows:

$${{\rm{P}}_{{\rm{new}}}}\left( {I = {\rm{true}}} \right) = {\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}}) = 0.76,$$
$${{\rm{P}}_{{\rm{new}}}}\left( {M = {\rm{true}}} \right) = {\rm{P}}(M = {\rm{true}}\ |\ U = {\rm{true}}) = 0.066,$$
$${{\rm{P}}_{{\rm{new}}}}\left( {D = {\rm{true}}} \right) = {\rm{P}}(D = {\rm{true}}\ |\ U = {\rm{true}}) = 0.54.$$

Now we think that both causes are more likely to be acting to produce $U$ . However, suppose we then come to believe the mitigating variable is true (i.e., the mitigating factor is present), ${\rm{P}}\left( {M = {\rm{true}}} \right) = 1$ . Then the industrial cause is less able to explain the effect of $U$ . Consequently, we should rationally increase our degree of belief in the alternative explanation, $D$ , as a likely cause of the undesirable effect, ${{\rm{P}}_{{\rm{new}}}}\left( {D = {\rm{true}}} \right) = {\rm{P}}(D = {\rm{true}}\ |\ U = {\rm{true}},M = {\rm{true}}) \approx 0.77$ . Likewise, we should rationally decrease our degree of belief in the industrial product, $I$ , as the cause, ${{\rm{P}}_{{\rm{new}}}}\left( {I = {\rm{true}}} \right) = {\rm{P}}(I = {\rm{true}}\ |\ U = {\rm{true}},M = {\rm{true}}) \approx 0.63$ . In this case the false mitigating factor works to reduce our rational credence that the industrial product causes the undesirable effect. This is again similar to the explaining away effect.
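
A final enumeration sketch checks all of these numbers at once. The eight-entry table for $U$ is a reconstruction of ours: without the mitigating factor it matches the distracting causes table used earlier, and with the mitigating factor the effect of the industrial product is sharply reduced.

```python
from itertools import product

p_I, p_M, p_D = 0.6, 0.1, 0.4   # priors from the text
# Reconstructed table for P(U = true | I, M, D): without M it matches the
# distracting-causes table above; with M the effect of I is sharply cut.
# These entries reproduce every number reported in this subsection.
p_U = {(False, False, False): 0.1, (True, False, False): 0.8,
       (False, False, True): 0.8,  (True, False, True): 0.9,
       (False, True, False): 0.1,  (True, True, False): 0.2,
       (False, True, True): 0.8,   (True, True, True): 0.8}

def joint(I, M, D, U):
    p = ((p_I if I else 1 - p_I) * (p_M if M else 1 - p_M)
         * (p_D if D else 1 - p_D))
    pu = p_U[(I, M, D)]
    return p * (pu if U else 1 - pu)

def prob(query, given=lambda I, M, D, U: True):
    worlds = list(product([False, True], repeat=4))
    den = sum(joint(*w) for w in worlds if given(*w))
    return sum(joint(*w) for w in worlds if given(*w) and query(*w)) / den

print(prob(lambda I, M, D, U: U))                              # P(U) ~ 0.63
print(prob(lambda I, M, D, U: I, lambda I, M, D, U: U))        # ~0.76
print(prob(lambda I, M, D, U: M, lambda I, M, D, U: U))        # ~0.066
print(prob(lambda I, M, D, U: D, lambda I, M, D, U: U))        # ~0.54
print(prob(lambda I, M, D, U: D, lambda I, M, D, U: U and M))  # ~0.77
print(prob(lambda I, M, D, U: I, lambda I, M, D, U: U and M))  # ~0.63
```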

5. Distracting effects

The last variety of industrial distraction involves influencing beliefs about distracting effects of policy interventions. Compared to the first two variants, this one is more straightforward to understand. But it, too, involves industry using accurate data to shape a target’s causal understanding of the world, to their own benefit. And it has been an important technique employed in real cases of industrial distraction. For these reasons, we analyze it here.

There are typically multiple downstream effects of policy given the complexity of many social, natural, and economic systems. When industry propagandists wish to counter policy proposals, and when they cannot plausibly deny the relevance of such proposals to mitigating the harms of their products, one solution is to emphasize negative causal outcomes instead.

Consider the recent transition from fossil fuels to wind power, intended to prevent the harms of global warming. The oil and gas industry spent decades obfuscating the link between fossil fuels and global warming, but their ability to plausibly do so is waning (Oreskes and Conway 2011). Instead, a number of prominent Republican lawmakers in the United States—backed by powerful oil and gas interests—have blamed offshore wind turbines for the deaths of whales (Hu 2023). Legitimate scientists are indeed worried about impacts of these installations on cetaceans, and have produced studies of these impacts (Quintana-Rizzo et al. 2021; Thompson et al. 2010). But their worries are being shared cynically to distract from the more important benefits of wind energy. Others connected to the Republican party, and funded by oil and gas, have emphasized the impacts of wind turbines on birds, despite evidence of fossil fuel’s much more serious impacts on bird life (Katovich 2023; Sovacool 2013; Bateman et al. 2020). Republicans have also focused on wind power as a cause of power outages and shortages, even in cases where it is a less important cause than outages in traditional energy sources (Benshoff 2022).

In a similar case, a 2017 report by the US Chamber of Commerce—produced with money from companies like Exxon Mobil—seriously overstated the economic impacts to the US from complying with the Paris Agreement (Bernstein et al. 2017; Negin 2020). The report was debunked, but was used by politicians like then US President Donald Trump to justify inaction on climate change (Greenberg 2017; Biesecker and Wiseman 2017).

In all of these cases industry, and their political allies, introduce and/or emphasize distracting downstream effects of unwanted policy. In other words, they argue that policy ( $P$ ), while causing a desirable outcome ( $O$ ), also causes some other harmful outcome ( $H$ ). Once again, this involves reworking the causal picture policy makers have of the world, using data that may be perfectly good. Now, in assessing some policy proposal, their causal picture involves a harmful outcome, as well as a desirable one.

5.1. Distracting effects model

This type of case is easy to understand even without a model, so we keep this section brief. Unlike the previous case, it is natural to assume that the two outcomes, $O$ and $H$ , are not independent: they both have a common cause, the policy or product, $P$ . However, we assume $O$ and $H$ are independent, conditional on $P$ . If we already know for certain that a policy intervention is happening, learning about one effect does not give us further information about the other effect. In that case, we can represent the situation with the causal graph model in figure 5.

Figure 5. A causal graph in which the common cause, policy $P$ , has two possible effects, a desirable outcome $O$ and a harmful outcome $H$ , which are independent conditional on $P$ .

This technique works by changing our overall estimates of the likelihoods of positive and negative effects of the policy. The key is that changing our beliefs about one outcome does not directly affect our beliefs about the other outcome once we know that $P$ is happening. Suppose that we learn that the negative outcome is more likely (perhaps ${\rm{P}}(H = {\rm{true}}\ |\ P = {\rm{true}})$ increases to $0.9$ ) while our beliefs about the positive outcome remain unchanged (i.e., ${\rm{P}}(O = {\rm{true}}\ |\ P = {\rm{true}})$ stays fixed). Then this should decrease how positively we feel about the policy intervention overall.Footnote 15 And, importantly, our shift in beliefs about effects can impact our subsequent decision making.

For example, we might interpret the variables as the following events:

  • $P$ : There is increased use of wind power.

  • $O$ : There are reduced effects of global warming.

  • $H$ : There are harmful effects for birds.

We might initially be focused on the positive effects of preventing the harms of global warming ( $O$ ). Upon learning evidence that harmful effects to birds ( $H$ ) can be caused by wind farms (i.e., that ${\rm{P}}(H = {\rm{true}}\ |\ P = {\rm{true}})$ is high), we now think that $H$ is a more likely outcome of $P$ , although the information does not alter how likely it is that $O$ will occur. We might thus revise our overall judgment of whether we should expand wind power.Footnote 16
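
Since the logic here reduces to a simple expected-value comparison, a few lines suffice to illustrate it. Every number below except the 0.9 from the text is an illustrative assumption of ours, as are the toy utilities; the point is only that pushing up ${\rm{P}}(H = {\rm{true}}\ |\ P = {\rm{true}})$ flips the overall assessment of the policy while leaving beliefs about $O$ untouched.

```python
# Conditional beliefs about the two effects of the policy P (figure 5).
# All numbers except the 0.9 from the text are illustrative assumptions.
p_O_given_P = 0.8   # P(O = true | P = true): wind power reduces warming harms
p_H_given_P = 0.3   # P(H = true | P = true): wind power harms birds

def policy_assessment(p_H, p_O=p_O_given_P, value_O=1.0, cost_H=1.0):
    """Toy expected value of enacting P: expected benefit minus expected
    harm. Because O and H are independent given P, raising p_H leaves
    p_O untouched."""
    return p_O * value_O - p_H * cost_H

print(policy_assessment(p_H_given_P))  # before the campaign: 0.8 - 0.3 = 0.5
print(policy_assessment(0.9))          # after P(H | P) rises to 0.9: -0.1
```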

6. Discussion

As we have seen, there are a series of related techniques where industry can use distracting information to reshape causal beliefs to their benefit. One of these variants (distracting mitigations) relies on false or misleading information. Notably, though, the two others (causes and effects) can function perfectly well with accurate or true information (though they do not always do so). And all three techniques are perfectly capable of impacting decision making. If, for example, we do not think Coca-Cola is an important cause of public health problems, we should not work to regulate it. If filters prevent tobacco deaths, we do not need to decrease smoking.

One thing to note is that while all our models represented rational learners, real-world learners may sometimes be even more vulnerable to industrial distraction. For example, humans are known to strengthen their beliefs upon repeated exposure to a claim, even when it is not reasonable to do so, and even when that claim is known to be false (Hassan and Barber 2021; Fazio et al. 2015; Udry and Barber 2023). In cases where industry can flood media, advertisements, and social media with some claim—say that wind farms kill birds, or that filtered cigarettes are safe—repeated exposure to these claims may have a stronger impact than our Bayesian models would predict.

One upshot of our analysis is that policy aimed at protecting public belief should not be limited to industrial propaganda that promotes scientific fraud or shares false information. Such policy misses the harms of techniques like industrial distraction. In thinking about science policy, a nuanced understanding of the many and subtle ways industry influences belief and decision making is necessary to prevent harms from this influence. This is especially true because industrial distraction is far from the only subtle influence technique used by industry.

Holman and Bruner (2017) use a model to illustrate what they call industrial selection, where industry promotes researchers who happen to already be producing favorable research. Doing so involves taking advantage of natural variation in the background beliefs, assumptions, focus, or methodology of different scientists, and then, through funding and other amplification methods, making some subset of work more productive or more salient. Notably, many instances of industrial distraction are also instances of industrial selection. In these cases, industry is selecting researchers to fund or promote based on the fact that they are working on a causal connection favorable to industry. For example, as Serodio et al. (2020) point out, Coca-Cola promoted the careers of many academic researchers already friendly to their “energy balance” message.Footnote 17 Whether industrial selection uses distraction or not, though, it is another technique where industry technically plays by the rules, but can nonetheless seriously impact the course of science.

Others have emphasized the role of cherry picking in industry misinformation. This involves selecting just some biased subset of independent research to share and promote. For example, the tobacco industry widely shared studies that happened to spuriously find no link between tobacco and disease (Oreskes and Conway 2011). Both Weatherall et al. (2020) and Lewandowsky et al. (2019) use models to show how this sort of selection can influence rational learners to form false beliefs favorable to some propagandist.Footnote 18 As noted, industrial distraction can involve a form of cherry picking when only research relevant to a limited part of a full causal picture is shared. When engaged in industrial distraction, propagandists cynically select just some areas of research to promote, and in doing so distort the importance of causes and effects, thus distorting the beliefs of their targets. But again, whether or not cherry picking involves distracting information or straightforwardly misleading information, this sort of industrial technique works within the rules of science and policy to impact decision making in ways that harm public health.

Given these influence techniques, what should the policy response be? We think it necessary to create a greater separation between industry and science funding, especially in cases where there is a potential conflict of interest between industry incentives and public health concerns.Footnote 19 It is clear that as long as industry is incentivized to get around the rules, they will find ways to do so. Relatedly, Holman (2015) describes the arms race occurring between pharmaceutical companies and officials seeking to regulate their influence on science. In this history, policy aimed at protecting public health was repeatedly, creatively dodged by industry. Industry is an important funder of new science, but it is clear that current policy to prevent harms from industry funding of science is inadequate given these creative techniques.

One solution could be centralized bodies, under public control, which funnel industry money for some research area to the scientists and labs deemed best given public interest. In such a case, industry cannot choose which labs to fund based on their methods, and cannot dramatically over-fund just some part of the causal picture. We are not the first to suggest something along these lines (O’Connor and Weatherall 2019b; Pinto and Pinto 2023). This is not necessarily an easy policy to implement given the complex involvement of industry in current research funding. Furthermore, Holman and Bruner (2017) suggest that in the presence of industrial funding, centralized funding can sometimes exacerbate industry influence because it often rewards those who have already been rewarded. To work, such an agency would have to itself avoid significant influence from industry, which may not be easy given the (discussed) industry incentives to find creative ways to influence science. Pinto and Pinto (2023) suggest a greater reliance on lottery funding as a way to avoid industrial selection in such cases, which may be a useful tool.Footnote 20

Another relevant policy area concerns industry communication about science. In some cases industrial distraction functions mostly via communication rather than funding. Given free speech protections, it is tricky to regulate industry sharing of accurate scientific information. Relevant laws, though, could require sharing appropriate context along with distracting information. Under this policy Coca-Cola could share information about sedentary lifestyles only when also sharing information about the relationship between soda and diabetes. This proposal is related to journalistic “balance” norms—that reporters should share information with context and balance. The idea is to apply similar balance rules to industry-publicized science.

There is a related debate in philosophy of science. The question is when and whether it is right to suppress inappropriate scientific dissent—dissent that seems to be grounded in industrial or political interests rather than scientific doubt. Some authors argue that it is too difficult to delineate appropriate from inappropriate dissent, and that to suppress dissent without a clear delineation is too risky (de Melo-Martín and Intemann 2014, 2018; Coates forthcoming). On the other side are those who think it appropriate to identify and suppress this sort of dissent (Nash 2018; Oreskes 2017; Cook 2017; Biddle and Leuschner 2015; Biddle et al. 2017; Leuschner 2018). Analyses like ours, and those described above, which look into specific industry techniques, do highlight difficulties for this sort of delineation. For example, as noted, Coca-Cola often funds legitimate scientists who are doing important work on exercise. It can be hard to say whether such work is either propaganda or normal science—it straddles the fence. On the other hand, though, understanding these techniques gives us a deeper ability to identify and fight them. Given the clear harms of industrial manipulation, and a track record of researchers successfully identifying and analyzing this manipulation, there will be many cases where inappropriate dissent can be identified and managed.

Recently, a great deal of work in philosophy and the social sciences has sought to define or delineate various sorts of misleading content, including misinformation, disinformation, malinformation, and fake news (Fallis 2016; Weatherall and O’Connor 2024). A typical claim, especially earlier in this literature, was to define terms like misinformation and disinformation as involving false or inaccurate content (Floridi 1996, 2011; Fetzer 2004). But increasingly it is recognized that much content is true or accurate, but nonetheless misleading (Fallis 2015; Wardle and Derakhshan 2017). And, in addition, misinformation and disinformation take many, varied forms, and can have many different sorts of impacts on belief and decision making (Harris 2023; Simion 2023; Habgood-Coote 2019). Analysis of industrial propaganda can helpfully inform this discussion (O’Connor and Weatherall 2019b). Techniques used by industry, as noted, mislead in a variety of creative ways, not all of which involve falsehoods. Ultimately, it is unlikely that it will be possible to derive definitions capturing all the types of content we might like to label as misinformation, disinformation, or industrial propaganda. Instead, specific analyses, like the one here, can help us better understand the variety of misleading content out there. And a thorough understanding of this variety can guide and shape successful policy aimed at regulating misleading content.

Before finishing, one last note. We focus in this paper on purposeful attempts to reshape causal understandings of the world, with the goal of shaping public behavior and policy. But there are going to be many similar cases where other sorts of factors bias (i) the list of causes and effects the public is aware of and (ii) their understanding of the relative strengths of these causes and effects. For example, it is widely recognized that the values scientists hold end up shaping what they choose to study and thus, often, what results exist on which topics (Haraway 1991; Longino 1990). The values of science journalists, as well as incentives they face, shape what they communicate and when (Mohseni et al. 2022). Algorithms on social media, and the public values and cognitive tendencies that shape these algorithms, determine who sees what scientific results. All these factors determine what evidence members of the public and policy makers see, and thus what their causal picture of the world looks like. The sorts of effects we outline here can happen as an accidental result of endogenous social forces, rather than the purposeful results of propaganda. This means that in thinking about promoting good public belief, attention is needed not just to the quality of information shared, but to its distribution and frequency.

Altogether, we take it to be very important to provide clear analyses of industrial propaganda techniques like industrial distraction. Doing so makes clear how and when industry harms public belief, and how and when industry can sway policy in their favor. As is clear, this analysis illuminates the workings of industrial distraction, highlights its relevance to current discussions in philosophy and the social sciences, and suggests policy responses.

Acknowledgements

Thanks to Ben Genta, Chris Torsell, Tori Cotton, Matthew Coates, Rebecca Korf, and Jim Weatherall for comments on this manuscript. Thanks to participants in the SKAT workshop at Columbia University for comments and feedback. Thanks to commentary from attendees at the PSA 2024 meeting in New Orleans, and to anonymous referees.

Funding information and declarations

The authors have nothing to declare. No funding sources were used in the preparation of this work.

Footnotes

1 Elsewhere, Robert Proctor has referred to this technique as “distraction science,” but we wish to emphasize the role industry plays in it (Proctor 1995, 2012; Kourany and Carrier 2020).

2 See, for example, Proctor (1995, 2012); Brownell and Warner (2009); Oreskes and Conway (2011); Greenhalgh (2024); Carpenter (2025).

3 We assume the causal Markov assumption: each variable ${X_i}$ is conditionally independent of its non-descendants given its parents, ${\rm{Pa}}\left( {{X_i}} \right)$ .

4 They will form a factorized representation of a joint probability distribution, ${\rm{P}}\left( {\bf{X}} \right) = \prod\nolimits_{i = 1}^n {\rm{P}}({X_i}\ |\ {\rm{Pa}}\left( {{X_i}} \right))$ .

5 The table should be read as follows: If $P$ is false and $C$ is false, the probability of sneezing given both of these facts, ${\rm{P}}(S = {\rm{true}}|P,C)$ , is 0.1, and so on.

6 In a Bayesian network, a structural property of the graph, d-separation, lets us determine whether two variables must be statistically independent. If two variables are d-separated relative to a set of variables $\mathbf{Z}$ in a directed acyclic graph, then they are statistically independent conditional on $\mathbf{Z}$ in every probability distribution the graph can represent. The reverse does not hold: two variables that are not d-separated may still happen to be numerically independent given some other variables in a particular distribution. See Pearl (2009) for further details. We say that $X$ and $Y$ are d-separated by $\mathbf{Z}$ if there are no unblocked undirected paths through the graph that connect them. An undirected path between two nodes $X_1$ and $X_n$ is a sequence of nodes $(X_1, X_2, \ldots, X_n)$ such that, for each pair of consecutive nodes $X_i$ and $X_{i+1}$, there is an edge between them in either direction. An undirected path is blocked by a set of nodes $\mathbf{Z}$ if the path contains a collider that is not in $\mathbf{Z}$ and has no descendants in $\mathbf{Z}$, or if the path contains a non-collider that is in $\mathbf{Z}$. A node $X_i$ on an undirected path $(X = X_1, X_2, \ldots, X_n = Y)$ is a collider if it has two incoming edges from its neighbors on the path, i.e., $X_{i-1} \rightarrow X_i \leftarrow X_{i+1}$.
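To see these definitions at work, here is a minimal, self-contained Python sketch of the network in figure 1. Only $\mathrm{P}(S = \mathrm{true} \mid P = \mathrm{false}, C = \mathrm{false}) = 0.1$ is given in the text; the priors and the remaining table entries are hypothetical, chosen purely for illustration.

```python
from itertools import product

# Numerical illustration of d-separation and colliders, using the
# pollen/cold/sneezing network of figure 1 (P -> S <- C). Only
# P(S=true | P=false, C=false) = 0.1 comes from the text; the priors
# and remaining table entries below are assumed for illustration.
p_pollen = 0.3                       # assumed prior P(P = true)
p_cold = 0.2                         # assumed prior P(C = true)
p_sneeze = {                         # P(S = true | P, C)
    (False, False): 0.1, (False, True): 0.8,
    (True, False): 0.7, (True, True): 0.95,
}

# Build the full joint distribution via the factorization in footnote 4:
# P(P, C, S) = P(P) * P(C) * P(S | P, C).
joint = {}
for pollen, cold, sneeze in product([False, True], repeat=3):
    pr = (p_pollen if pollen else 1 - p_pollen)
    pr *= (p_cold if cold else 1 - p_cold)
    p_s = p_sneeze[(pollen, cold)]
    pr *= p_s if sneeze else 1 - p_s
    joint[(pollen, cold, sneeze)] = pr

def prob(event):
    """Total probability of the outcomes (pollen, cold, sneeze) satisfying event."""
    return sum(pr for outcome, pr in joint.items() if event(*outcome))

# P and C are d-separated by the empty set (S is an unconditioned collider),
# so they are marginally independent: P(P | C) = P(P).
print(prob(lambda p, c, s: p))                                      # 0.300
print(prob(lambda p, c, s: p and c) / prob(lambda p, c, s: c))      # 0.300

# Conditioning on the collider S d-connects them: P(P | S, C) != P(P | S).
print(prob(lambda p, c, s: p and s) / prob(lambda p, c, s: s))             # ~0.573
print(prob(lambda p, c, s: p and c and s) / prob(lambda p, c, s: c and s)) # ~0.337
```

With these assumed numbers, $\mathrm{P}(P \mid S) \approx 0.57$ while $\mathrm{P}(P \mid S, C) \approx 0.34$: once the collider is observed, the two causes become dependent.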

7 They are d-connected given $S$, as the collider is now in the set of variables on which we are conditioning.

8 We, the authors, are not making or supporting any claims about the desirability of fatness, but are describing here the way it has been understood by policy makers and the general public.

9 For example, the Tobacco Industry Research Committee, a propaganda body funded by major US tobacco firms, publicized the work of Wilhelm Hueper, a cancer researcher who appeared regularly as an expert witness arguing that patients' lung illnesses were caused by asbestos rather than smoking (Oreskes and Conway 2011).

10 See the pamphlet "Environmental Tobacco Smoke and Health," available at UCSF's Truth Tobacco Industry archive (The Tobacco Institute 1986).

11 Although $I$ and $D$ are d-separated (i.e., they are independent), they are not d-separated given the outcome $U$ . In other words, $I$ and $D$ become d-connected when conditioned on $U$ .

12 Note that, without such assumptions about which beliefs the evidence does or does not affect, the problem would be unconstrained. It is also important to note again that this model assumes statistical independence between the industrial cause ($I$) and the distracting cause ($D$). In reality, these causes might be correlated, which would require a more complex model.

13 That is, if the condition $$\frac{\mathrm{P}(U = \mathrm{true} \mid I = \mathrm{true},\, D = \mathrm{true})}{\mathrm{P}(U = \mathrm{true} \mid I = \mathrm{true},\, D = \mathrm{false})} < \frac{\mathrm{P}(U = \mathrm{true} \mid I = \mathrm{false},\, D = \mathrm{true})}{\mathrm{P}(U = \mathrm{true} \mid I = \mathrm{false},\, D = \mathrm{false})}$$ holds (Wellman and Henrion 1993).
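For illustration, the following Python sketch checks this ratio condition for a wholly hypothetical conditional probability table over the figure 2 network, and then confirms that when the inequality holds, learning that the distracting cause $D$ is present lowers the probability of the industrial cause $I$ given the outcome $U$ (the phenomenon of footnote 11); nothing in the paper fixes these particular values.

```python
# Checking footnote 13's ratio condition for explaining away on the
# figure 2 network (I -> U <- D). All numbers are hypothetical, chosen
# only so that the condition is satisfied.
p_i, p_d = 0.5, 0.5                  # assumed priors P(I = true), P(D = true)
p_u = {                              # assumed P(U = true | I, D)
    (True, True): 0.9, (True, False): 0.6,
    (False, True): 0.5, (False, False): 0.1,
}

# Wellman and Henrion's condition: the left ratio is smaller than the right.
assert (p_u[(True, True)] / p_u[(True, False)]
        < p_u[(False, True)] / p_u[(False, False)])   # 1.5 < 5.0

def p_i_given_u(d_known=None):
    """P(I = true | U = true), optionally conditioning on a known value of D."""
    def term(i, d):
        prior = (p_i if i else 1 - p_i) * (p_d if d else 1 - p_d)
        return prior * p_u[(i, d)]
    ds = [False, True] if d_known is None else [d_known]
    numerator = sum(term(True, d) for d in ds)
    denominator = sum(term(i, d) for i in (False, True) for d in ds)
    return numerator / denominator

# Learning that the distracting cause is present "explains away" I:
print(f"P(I | U) = {p_i_given_u():.3f}")          # 0.714
print(f"P(I | U, D) = {p_i_given_u(True):.3f}")   # 0.643
```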

14 There are accounts in formal epistemology and philosophy of science of what accurate beliefs consist in, and of what counts as deception. Here we do not ground claims about what is "misleading" in any such account. Instead, we argue that whatever notion of "misleading" we develop should be broad enough to include cases like this one.

15 This is a case of what Kim and Pearl (1983) term "inter-causes independence."

16 To make this point more clearly, we could adopt a decision-theoretic framework with explicit utilities or payoffs for these outcomes. Suppose we assign a utility of $1$ to $O = \mathrm{true}$ and $-1$ to $H = \mathrm{true}$. With $\mathrm{P}(O = \mathrm{true} \mid P = \mathrm{true}) = 0.8$ and $\mathrm{P}(H = \mathrm{true} \mid P = \mathrm{true}) = 0.5$, the expected utility of implementing $P$ is $0.3$. If new evidence increases $\mathrm{P}(H = \mathrm{true} \mid P = \mathrm{true})$ to $0.9$ (while $\mathrm{P}(O = \mathrm{true} \mid P = \mathrm{true})$ remains at $0.8$), then the expected utility of implementing $P$ falls to $-0.1$, making the policy much less desirable. In other words, increasing the probability that a harm follows from some policy can shift the expected utility of implementing that policy.
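The footnote's arithmetic can be reproduced directly; a minimal Python sketch, using only the utilities and probabilities stated above:

```python
# Reproducing footnote 16's expected-utility calculation for policy P
# (figure 5), with utility 1 for O = true and -1 for H = true.
U_O, U_H = 1.0, -1.0

def expected_utility(p_o_given_p, p_h_given_p):
    """Expected utility of implementing P: P(O|P)*U_O + P(H|P)*U_H."""
    return p_o_given_p * U_O + p_h_given_p * U_H

print(round(expected_utility(0.8, 0.5), 2))   # 0.3: the policy looks worthwhile
print(round(expected_utility(0.8, 0.9), 2))   # -0.1: after distracting evidence
```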

17 Earlier on, the sugar industry funded independent researchers already looking at the link between fat and heart disease, while the fat industry funded researchers already looking at sugar as a cause of heart disease (Johns and Oppenheimer 2018; O'Connor and Weatherall 2019a).

18 See also Eliaz and Spiegler (2024) and Mohseni et al. (2022) for models of how news media, by sharing only some accurate content, can likewise mislead.

19 Both Resnik and Elliott (2013) and Elliott (2014) discuss how cases where industry is incentivized to fund and share accurate science differ from the sort of cases we focus on here.

20 Others have argued in favor of lottery funding for different potential benefits (Avin 2015, 2019; Gross and Bergstrom 2019; Smaldino et al. 2019; Shaw 2023; Wu and O'Connor 2023).

References

Allen, David, Linsley, Chelsea, Spoelman, Naomi, and Johl, Alyssa. 2024. "The Fraud of Plastic Recycling". Technical report, Center for Climate Integrity. Washington, DC. https://climateintegrity.org/uploads/media/Fraud-of-Plastic-Recycling-2024.pdf.
Avin, Shahar. 2015. "Funding Science By Lottery". In Recent Developments in the Philosophy of Science: EPSA13 Helsinki, edited by Uskali Mäki, Ioannis Votsis, Stéphanie Ruphy, and Gerhard Schurz, 111–26. New York: Springer.
Avin, Shahar. 2019. "Centralized Funding and Epistemic Exploration". The British Journal for the Philosophy of Science 70 (3):629–56. https://doi.org/10.1093/bjps/axx059.
Bateman, Brooke L., Chad Wilsey, Lotem Taylor, Joanna Wu, Geoffrey S. LeBaron, and Gary Langham. 2020. "North American Birds Require Mitigation and Adaptation to Reduce Vulnerability to Climate Change". Conservation Science and Practice 2 (8):e242. https://doi.org/10.1111/csp2.242.
Benshoff, Laura. 2022. "Renewable Energy Is Maligned by Misinformation. It's a Distraction, Experts Say". Report, National Public Radio. https://www.npr.org/2022/08/24/1110850169/misinformation-renewable-energy-gop-climate.
Bernstein, Paul, Montgomery, David, Ramkrishnan, Barat, and Tuladar, Sughanda. 2017. "Impacts of Greenhouse Gas Regulations On the Industrial Sector". Technical report, NERA Consulting. https://accf.org/wp-content/uploads/2017/03/170316-NERA-ACCF-Full-Report.pdf.
Bes-Rastrollo, Maira, Schulze, Matthias B., Ruiz-Canela, Miguel, and Martinez-Gonzalez, Miguel A. 2013. "Financial Conflicts of Interest and Reporting Bias Regarding the Association Between Sugar-Sweetened Beverages and Weight Gain: A Systematic Review of Systematic Reviews". PLoS Medicine 10 (12):e1001578. https://doi.org/10.1371/journal.pmed.1001578.
Biddle, Justin B., Kidd, Ian James, and Leuschner, Anna. 2017. "Epistemic Corruption and Manufactured Doubt: The Case of Climate Science". Public Affairs Quarterly 31 (3):165–87. https://doi.org/10.2307/44732791.
Biddle, Justin B. and Leuschner, Anna. 2015. "Climate Skepticism and the Manufacture of Doubt: Can Dissent in Science be Epistemically Detrimental?" European Journal for Philosophy of Science 5:261–78. https://doi.org/10.1007/s13194-014-0101-x.
Biesecker, Michael and Wiseman, Paul. 2017. "AP Fact Check: Trump's Shaky Claims on Climate Accord". Report, Associated Press. New York. https://apnews.com/article/d4836217fa7b4d3eadea33dd20ceff3c.
Brownell, Kelly D. and Warner, Kenneth E. 2009. "The Perils of Ignoring History: Big Tobacco Played Dirty and Millions Died. How Similar Is Big Food?" The Milbank Quarterly 87 (1):259–94. https://doi.org/10.1111/j.1468-0009.2009.00555.x.
Carpenter, Murray. 2025. Sweet and Deadly: How Coca-Cola Spread Disinformation and Makes Us Sick. Cambridge, MA: MIT Press.
Coates, Matthew. Forthcoming. "Does it Harm Science to Suppress Dissenting Evidence?" Philosophy of Science. https://philsci-archive.pitt.edu/23472/.
Cook, John. 2017. "Response by Cook to 'Beyond Counting Climate Consensus'". Environmental Communication 11 (6):733–35. https://doi.org/10.1080/17524032.2017.1377095.
Cummings, K. Michael, Brown, Anthony, and O'Connor, Richard. 2007. "The Cigarette Controversy". Cancer Epidemiology, Biomarkers & Prevention 16 (6):1070–76. https://doi.org/10.1158/1055-9965.epi-06-0912.
de Melo-Martín, Inmaculada and Intemann, Kristen. 2014. "Who's Afraid of Dissent? Addressing Concerns about Undermining Scientific Consensus in Public Policy Developments". Perspectives on Science 22 (4):593–615. https://doi.org/10.1162/POSC_a_00151.
de Melo-Martín, Inmaculada and Intemann, Kristen. 2018. The Fight Against Doubt: How to Bridge the Gap Between Scientists and the Public. Oxford: Oxford University Press. https://doi.org/10.1093/oso/9780190869229.001.0001.
Eliaz, Kfir, Galperti, Simone, and Spiegler, Ran. 2022. "False Narratives and Political Mobilization". Preprint, arXiv:2206.12621.
Eliaz, Kfir and Spiegler, Ran. 2024. "News Media as Suppliers of Narratives (and Information)". Preprint, arXiv:2403.09155.
Elliott, Kevin C. 2014. "Financial Conflicts of Interest and Criteria for Research Credibility". Erkenntnis 79 (Suppl 5):917–37. https://doi.org/10.1007/s10670-013-9536-2.
Fallis, Don. 2015. "What is Disinformation?" Library Trends 63 (3):401–26. https://doi.org/10.1353/lib.2015.0014.
Fallis, Don. 2016. "Mis- and Dis- Information". In The Routledge Handbook of Philosophy of Information, edited by Luciano Floridi, 332–46. New York: Routledge.
Fazio, Lisa K., Brashier, Nadia M., Keith Payne, B., and Marsh, Elizabeth J. 2015. "Knowledge Does Not Protect Against Illusory Truth". Journal of Experimental Psychology: General 144 (5):993. https://doi.org/10.1037/xge0000098.
Fetzer, James H. 2004. "Disinformation: The Use of False Information". Minds and Machines 2 (14):231–40. https://doi.org/10.1023/B:MIND.0000021683.28604.5b.
Floridi, Luciano. 1996. "Brave.net.world: The Internet as a Disinformation Superhighway?" Electronic Library 14:509–14. https://doi.org/10.1108/eb045517.
Floridi, Luciano. 2011. The Philosophy of Information. Oxford: Oxford University Press.
Freeborn, David Peter Wallis. 2023. "Polarization and Factionalization for Agents with Multiple, Related Beliefs". PhD diss., University of California, Irvine.
Freeborn, David Peter Wallis. 2024. "Rational Factionalization for Agents with Probabilistically Related Beliefs". Synthese 203 (2):46. https://doi.org/10.1007/s11229-024-04491-5.
Greenberg, Jon. 2017. "Fact-Checking Donald Trump's Statement Withdrawing from the Paris Climate Agreement". Politifact report, The Poynter Institute. Washington, DC. https://www.politifact.com/article/2017/jun/01/fact-checking-donald-trumps-statement-withdrawing-/.
Greenhalgh, Susan. 2024. Soda Science: Making the World Safe for Coca-Cola. Chicago, IL: University of Chicago Press.
Gross, Kevin and Bergstrom, Carl T. 2019. "Contest Models Highlight Inherent Inefficiencies of Scientific Funding Competitions". PLoS Biology 17 (1):e3000065. https://doi.org/10.1371/journal.pbio.3000065.
Habgood-Coote, Joshua. 2019. "Stop Talking About Fake News!" Inquiry 62 (9–10):1033–65. https://doi.org/10.1080/0020174X.2018.1508363.
Haraway, Donna. 1991. Simians, Cyborgs, and Women: The Reinvention of Nature. New York: Routledge.
Harris, Keith Raymond. 2023. "Beyond Belief: On Disinformation and Manipulation". Erkenntnis. https://doi.org/10.1007/s10670-023-00710-6.
Hassan, Aumyo and Barber, Sarah J. 2021. "The Effects of Repetition Frequency on the Illusory Truth Effect". Cognitive Research: Principles and Implications 6 (1):38. https://doi.org/10.1186/s41235-021-00301-5.
Holman, Bennett. 2015. "The Fundamental Antagonism". PhD diss., University of California, Irvine.
Holman, Bennett and Bruner, Justin. 2017. "Experimentation by Industrial Selection". Philosophy of Science 84 (5):1008–19. https://doi.org/10.1086/694037.
Hu, Akielly. 2023. "Republican Donors are Funding Misinformation About Offshore Wind". Report, Canary Media. https://www.canarymedia.com/articles/wind/the-gop-donors-behind-a-growing-misinformation-campaign-to-stop-offshore-wind.
Jern, Alan, Chang, Kai-Min K., and Kemp, Charles. 2014. "Belief Polarization Is Not Always Irrational". Psychological Review 121 (2):206. https://doi.org/10.1037/a0035941.
Johns, David Merritt and Oppenheimer, Gerald M. 2018. "Was There Ever Really A 'Sugar Conspiracy'?" Science 359 (6377):747–50. https://doi.org/10.1126/science.aaq1618.
Katovich, Erik. 2023. "Quantifying the Effects of Energy Infrastructure on Bird Populations and Biodiversity". Environmental Science & Technology 58 (1):323–32.
Kearns, Cristin E., Glantz, Stanton A., and Schmidt, Laura A. 2015. "Sugar Industry Influence on the Scientific Agenda of the National Institute of Dental Research's 1971 National Caries Program: A Historical Analysis of Internal Documents". PLoS Medicine 12 (3):e1001798. https://doi.org/10.1371/journal.pmed.1001798.
Kearns, Cristin E., Schmidt, Laura A., and Glantz, Stanton A. 2016. "Sugar Industry and Coronary Heart Disease Research: A Historical Analysis of Internal Industry Documents". JAMA Internal Medicine 176 (11):1680–85. https://doi.org/10.1001/jamainternmed.2016.5394.
Kim, J. H. and Pearl, Judea. 1983. "A Computational Model for Combined Causal and Diagnostic Reasoning in Inference Systems". In Proceedings of the Eighth International Joint Conference on Artificial Intelligence (IJCAI-83), 190–93. Karlsruhe, Germany.
Kourany, Janet and Carrier, Martin. 2020. Science and the Production of Ignorance: When the Quest for Knowledge Is Thwarted. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/12146.001.0001.
Lesser, Lenard I., Ebbeling, Cara B., Goozner, Merrill, Wypij, David, and Ludwig, David S. 2007. "Relationship Between Funding Source and Conclusion Among Nutrition-Related Scientific Articles". PLoS Medicine 4 (1):e5. https://doi.org/10.1371/journal.pmed.0040005.
Leuschner, Anna. 2018. "Is It Appropriate To 'Target' Inappropriate Dissent? On the Normative Consequences of Climate Skepticism". Synthese 195:1255–71. https://doi.org/10.1007/s11229-016-1267-x.
Lewandowsky, Stephan, Pilditch, Toby D., Madsen, Jens K., Oreskes, Naomi, and Risbey, James S. 2019. "Influence and Seepage: An Evidence-Resistant Minority Can Affect Public Opinion and Scientific Belief Formation". Cognition 188:124–39. https://doi.org/10.1016/j.cognition.2019.01.011.
Longino, Helen E. 1990. Science As Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press. https://doi.org/10.2307/j.ctvx5wbfz.
Ludwig, David S., Peterson, Karen E., and Gortmaker, Steven L. 2001. "Relation Between Consumption of Sugar-Sweetened Drinks and Childhood Obesity: A Prospective, Observational Analysis". The Lancet 357 (9255):505–8. https://doi.org/10.1016/S0140-6736(00)04041-1.
Malik, Vasanti S., Popkin, Barry M., Bray, George A., Després, Jean-Pierre, and Hu, Frank B. 2010. "Sugar-Sweetened Beverages, Obesity, Type 2 Diabetes Mellitus, and Cardiovascular Disease Risk". Circulation 121 (11):1356–64. https://doi.org/10.1161/CIRCULATIONAHA.109.876185.
Malik, Vasanti S., Schulze, Matthias B., and Hu, Frank B. 2006. "Intake of Sugar-Sweetened Beverages and Weight Gain: A Systematic Review". The American Journal of Clinical Nutrition 84 (2):274–88. https://doi.org/10.1093/ajcn/84.2.274.
Mohseni, Aydin, O'Connor, Cailin, and Weatherall, James Owen. 2022. "The Best Paper You'll Read Today". Philosophical Topics 50 (2):127–53. https://doi.org/10.5840/philtopics202250220.
Nash, Erin J. 2018. "In Defense of 'Targeting' Some Dissent about Science". Perspectives on Science 26 (3):325–59. https://doi.org/10.1162/posc_a_00277.
Negin, Elliott. 2020. "ExxonMobil Claims Shift on Climate But Continues to Fund Climate Science Deniers". Report, The Equation, Union of Concerned Scientists. https://blog.ucsusa.org/elliott-negin/exxonmobil-claims-shift-on-climate-continues-to-fund-climate-deniers.
Nestle, Marion. 2015. Soda Politics: Taking on Big Soda (and Winning). New York: Oxford University Press.
O'Connor, Anahad. 2015. "Coca-Cola Funds Scientists Who Shift Blame for Obesity Away From Bad Diets". The New York Times, August 9. https://archive.nytimes.com/well.blogs.nytimes.com/2015/08/09/coca-cola-funds-scientists-who-shift-blame-for-obesity-away-from-bad-diets/.
O'Connor, Cailin and Weatherall, James. 2019a. "How Powerful Interests Use Science to Sway Public Opinion". Article, Zocalo Public Square. https://www.zocalopublicsquare.org/2019/09/05/how-powerful-interests-use-science-to-sway-public-opinion/ideas/essay/.
O'Connor, Cailin and Weatherall, James Owen. 2019b. The Misinformation Age: How False Beliefs Spread. New Haven, CT: Yale University Press.
Oreskes, Naomi. 2017. "Response by Oreskes to 'Beyond Counting Climate Consensus'". Environmental Communication 11 (6):731–32. https://doi.org/10.1080/17524032.2017.1377094.
Oreskes, Naomi and Conway, Erik M. 2011. Merchants of Doubt: How a Handful of Scientists Obscured the Truth on Issues from Tobacco Smoke to Global Warming. New York: Bloomsbury Publishing.
Pearl, Judea. 2009. Causality: Models, Reasoning and Inference, 2nd ed. New York: Cambridge University Press.
Pinto, Manuela Fernández and Fernández Pinto, Daniel. 2023. "Epistemic Diversity and Industrial Selection Bias". Synthese 201 (5):182. https://doi.org/10.1007/s11229-023-04158-7.
Proctor, Robert N. 1995. Cancer Wars: How Politics Shapes What We Know and Don't Know About Cancer. New York: Basic Books.
Proctor, Robert N. 2012. Golden Holocaust: Origins of the Cigarette Catastrophe and the Case for Abolition. Berkeley, CA: University of California Press.
Quintana-Rizzo, Ester, Leiter, S., Cole, T. V. N., Hagbloom, M. N., Knowlton, A. R., Nagelkirk, P., O'Brien, O., Khan, C. B., Henry, A. G., Duley, P. A., Crowe, L. M., Mayo, C. A., and Kraus, S. D. 2021. "Residency, Demographics, and Movement Patterns of North Atlantic Right Whales Eubalaena glacialis in an Offshore Wind Energy Development Area in Southern New England, USA". Endangered Species Research 45:251–68. https://doi.org/10.3354/esr01137.
Resnik, David B. and Elliott, Kevin C. 2013. "Taking Financial Relationships into Account when Assessing Research". Accountability in Research 20 (3):184–205. https://doi.org/10.1080/08989621.2013.788383.
Schulze, Matthias B., Manson, JoAnn E., Ludwig, David S., Colditz, Graham A., Stampfer, Meir J., Willett, Walter C., and Hu, Frank B. 2004. "Sugar-Sweetened Beverages, Weight Gain, and Incidence of Type 2 Diabetes in Young and Middle-Aged Women". JAMA 292 (8):927–34. https://doi.org/10.1001/jama.292.8.927.
Serodio, Paulo, Ruskin, Gary, McKee, Martin, and Stuckler, David. 2020. "Evaluating Coca-Cola's Attempts to Influence Public Health 'In Their Own Words': Analysis of Coca-Cola Emails with Public Health Academics Leading the Global Energy Balance Network". Public Health Nutrition 23 (14):2647–53. https://doi.org/10.1017/S1368980020002098.
Shaw, Jamie. 2023. "Peer Review in Funding-By-Lottery: A Systematic Overview and Expansion". Research Evaluation 32 (1):86–100. https://doi.org/10.1093/reseval/rvac022.
Simion, Mona. 2023. "Knowledge and Disinformation". Episteme. https://doi.org/10.1017/epi.2023.25.
Singla, Veena. 2022. "Recycling Lies: 'Chemical Recycling' of Plastic Is Just Greenwashing Incineration". Briefing report, NRDC. New York. https://www.nrdc.org/resources/recycling-lies-chemical-recycling-plastic-just-greenwashing-incineration.
Smaldino, Paul E., Turner, Matthew A., and Contreras Kallens, Pablo A. 2019. "Open Science and Modified Funding Lotteries Can Impede the Natural Selection of Bad Science". Royal Society Open Science 6 (7):190194. https://doi.org/10.1098/rsos.190194.
Sovacool, Benjamin K. 2013. "The Avian Benefits of Wind Energy: A 2009 Update". Renewable Energy 49:19–24. https://doi.org/10.1016/j.renene.2012.01.074.
Spiegler, Ran. 2020. "Can Agents with Causal Misperceptions Be Systematically Fooled?" Journal of the European Economic Association 18 (2):583–617. https://doi.org/10.1093/jeea/jvy057.
Spirtes, Peter, Glymour, Clark, and Scheines, Richard. 2000. Causation, Prediction, and Search, 2nd ed. Cambridge, MA: MIT Press. https://doi.org/10.7551/mitpress/1754.001.0001.
The Tobacco Institute. 1986. "Environmental Tobacco Smoke and Health: The Consensus". Pamphlet, The Tobacco Institute. Washington, DC. https://www.industrydocuments.ucsf.edu/tobacco/docs/#id=nnxy0137.
Thompson, Paul M., Lusseau, David, Barton, Tim, Simmons, Dave, Rusin, Jan, and Bailey, Helen. 2010. "Assessing the Responses of Coastal Cetaceans to the Construction of Offshore Wind Turbines". Marine Pollution Bulletin 60 (8):1200–1208. https://doi.org/10.1016/j.marpolbul.2010.03.030.
Udry, Jessica and Barber, Sarah J. 2023. "The Illusory Truth Effect: A Review of How Repetition Increases Belief in Misinformation". Current Opinion in Psychology 101736. https://doi.org/10.1016/j.copsyc.2023.101736.
Union of Concerned Scientists. 2017. "How Coca Cola Disguised its Influence on Science about Sugar and Health". Case study, Union of Concerned Scientists. https://www.ucsusa.org/resources/how-coca-cola-disguised-its-influence-science-about-sugar-and-health.
Wardle, Claire and Derakhshan, Hossein. 2017. "Information Disorder: Toward an Interdisciplinary Framework for Research and Policy Making". Technical report, Council of Europe. https://www.coe.int/en/web/freedom-expression/information-disorder.
Weatherall, James Owen and O'Connor, Cailin. 2024. "Fake News!" Philosopher's Imprint. https://doi.org/10.1111/phc3.13005.
Weatherall, James Owen, O'Connor, Cailin, and Bruner, Justin P. 2020. "How to Beat Science and Influence People: Policymakers and Propaganda in Epistemic Networks". The British Journal for the Philosophy of Science. https://doi.org/10.1093/bjps/axy062.
Wellman, Michael P. and Henrion, Max. 1993. "Explaining 'Explaining Away'". IEEE Transactions on Pattern Analysis and Machine Intelligence 15 (3):287–92. https://doi.org/10.1109/34.204911.
Wood, Benjamin, Ruskin, Gary, and Sacks, Gary. 2020. "How Coca-Cola Shaped the International Congress on Physical Activity and Public Health: An Analysis of Email Exchanges Between 2012 and 2014". International Journal of Environmental Research and Public Health 17 (23):8996. https://doi.org/10.3390/ijerph17238996.
Wu, Jingyi and O'Connor, Cailin. 2023. "How Should We Promote Transient Diversity in Science?" Synthese 201 (2):37. https://doi.org/10.1007/s11229-023-04037-1.
Yang, Quanhe, Zhang, Zefeng, Gregg, Edward W., Dana Flanders, W., Merritt, Robert, and Hu, Frank B. 2014. "Added Sugar Intake and Cardiovascular Diseases Mortality Among US Adults". JAMA Internal Medicine 174 (4):516–24. https://doi.org/10.1001/jamainternmed.2013.13563.
Figure 1. A causal graph and associated conditional probability table representing two possible causes, high pollen count ($P$) or a cold ($C$), of sneezing ($S$). We assume that these two causes are independent.

Figure 2. A causal graph in which the effect $U$ has two independent possible causes, an industrial product $I$ and a distracting cause $D$.

Figure 3. A causal graph in which the effect $U$ is influenced by two causal factors, the industrial product $I$ and a mitigating factor $M$. The conditional probability table for $U$ shows that $M$ reduces the causal effect of $I$ on $U$.

Figure 4. A causal graph in which the effect $U$ is influenced by three causal factors: the industrial product $I$, a false mitigating factor $M$, and a distracting cause $D$. The conditional probability table for $U$ shows that $M$ reduces the causal effect of $I$ on $U$.

Figure 5. A causal graph in which the common cause, policy $P$, has two possible effects, a desirable outcome $O$ and a harmful outcome $H$, which are independent conditional on $P$.