
COVID-19 and the Paradox of Scientific Advice

Published online by Cambridge University Press:  29 June 2021


Abstract

The scientific advisory committee is a neglected political institution whose importance became clear during the COVID-19 pandemic. What I call “the paradox of scientific advice” consists in that the two basic expectations from scientific advisory committees—neutrality and usefulness—are inherently in tension. To be useful, advisers must help governments set and attain their goals. Judgments about values and ends are necessary for useful advice, as are subjective judgments in the face of uncertainty and disagreement. This puts the committee in a double bind: if it tries to be more useful, it compromises the neutrality that is the source of its authority and legitimacy; if it tries to remain neutral, it sacrifices usefulness. I argue that this dilemma cannot be solved within the committee but that broader democratic scrutiny could mitigate its force. Advisory committees, in turn, should be structured to facilitate this scrutiny.

Type
Special Issue Articles: Pandemic Politics
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2021. Published by Cambridge University Press

In February 1976, there was a small outbreak of swine flu among army recruits at Fort Dix, New Jersey. One soldier died. The United States had not experienced a swine flu outbreak since 1918–1919, and the possibility of another pandemic raised alarm. The Centers for Disease Control called a special meeting of its Advisory Committee on Immunization Practices to consider the evidence. There was no sign of outbreaks elsewhere in the country, and scientists could not rule out the possibility that small-scale outbreaks had been occurring undetected without leading to a pandemic (Boffey 1976, 638). It was also unclear how virulent this strain of swine flu really was (ibid., 637). Predictions about the likelihood and severity of a possible pandemic were highly conjectural (Neustadt and Fineberg 1978, 8). The committee initially recommended the production of a vaccine, but not its administration. The CDC director, however, was persuaded of the need to go ahead with a mass immunization program. He conducted a telephone poll of committee members to determine whether they would oppose the recommendation that all citizens be immunized within three months. He gained the assent or acquiescence of all members, and wrote an action-memo that moved swiftly through the bureaucracy and was accepted by President Gerald Ford (Boffey 1976, 640; Neustadt and Fineberg 1978, 12). By the end of the year, 40 million Americans had been vaccinated at a cost of $135 million. However, the pandemic never materialized, and the vaccine turned out to be associated with increased risk of Guillain-Barré syndrome—a rare nervous system disease. The incident was widely regarded as a fiasco.

Compare this with the United Kingdom’s response to COVID-19. In December 2019, the novel coronavirus disease erupted in Wuhan, China, causing thousands of deaths in two months. The UK’s top medical journal, The Lancet, published warnings about a global pandemic (Wu, Leung, and Leung 2020), while the government’s modeling advisers predicted sustained transmission in the UK, with a death toll between 250,000 and 500,000 (SPI-M 2020). Until late March, however, the government’s main scientific advisory body—the Scientific Advisory Group for Emergencies (SAGE)—did not recommend any strict measures. Interviews with committee members later revealed that the committee did not seriously consider the possibility of a lockdown in part because it believed extreme measures would not be politically acceptable (Grey and MacAskill 2020). Meeting minutes from the behavioral experts advisory subgroup likewise showed that scientific advisers were unsure about how the public would respond to different measures and disagreed about whether efforts to isolate the more vulnerable would be acceptable (SPI-B 2020). For weeks, the government followed a widely criticized mitigation strategy rather than attempting suppression. Although it is impossible to ascertain the exact cost of this delay and the role of scientific advisers in shaping policy, the UK’s initial COVID response was widely regarded as a failure (Horton 2020).

Pandemics offer a particularly dramatic illustration of the dependence of modern societies on scientific advice. They also expose the difficulties of making decisions on the basis of scientific knowledge that is almost always uncertain and subject to disagreement. While COVID-19 has thrust formerly unknown scientific advisory bodies into the spotlight, scientific advice has long played a central role in policy making as an authoritative source of knowledge for the modern state. On issues ranging from climate change to nuclear weapons, environmental protection to biotechnology and artificial intelligence, scientific advice has defined new problems and offered solutions. But despite its crucial role in the most pressing problems of our time, the study of scientific advice in politics remains marginal to both political theory and political science.Footnote 1 This has meant that some of the central theoretical dilemmas of the democratic role of scientific advice remain unexplored.

My purpose is to identify and analyze an inherent contradiction between the two basic expectations from scientific advice in a democracy: neutrality with respect to moral and political values and usefulness for democratic purposes. The expectation that advisory committees stay neutral is not fully compatible with their basic task of providing useful advice to inform policy. This is what I call the paradox of scientific advice. To be useful, advice must be scientifically sound, first and foremost, but it must also be designed to help the government set and attain democratic goals. Judgments about ends and values are necessary for giving useful scientific advice, as are subjective judgments in the face of uncertainty and disagreement. It is not only very difficult to keep value judgments out of deliberations but doing so renders the advice less useful for its recipient. This puts experts in a double bind: if they try to be more useful, they risk making controversial judgments and compromising the neutrality that is the source of their authority and legitimacy; if they try to remain neutral, they sacrifice their usefulness and inadvertently block important courses of action. No matter what they do, they can end up being blamed: for unhelpfulness or activism—and sometimes both at once. This is a difficult if not impossible charge for scientific committees and renders the democratic role of scientific advice fundamentally unstable.

Existing accounts of scientific advice have revolved around the question of whether neutral or value-free expertise is possible (Douglas 2009; Longino 1990; Jasanoff 2004). While the impossibility of value-free advice is now widely recognized, scholars disagree about what follows from this fact. Empirical studies have refrained from offering normative arguments about the desirability of holding onto neutrality as an ideal (Jasanoff 1990, 2004), while normative works are divided on whether scientists should still try to be as neutral as possible (Betz 2013; Collins and Evans 2017; Lacey 2013) or take on the responsibility of making necessary value judgments (Douglas 2009; Elliott 2011; John 2015). Meanwhile, many scientists continue to subscribe to neutrality as an ideal and believe that the legitimacy of their political role depends on it (Oppenheimer et al. 2019; Shapin 2009). I aim to raise a new line of criticism against the aspiration to neutrality by examining its inverse relationship to usefulness, while also showing the democratic dangers of moving away from neutrality. I conclude that this fundamental dilemma is unlikely to be resolved at the committee level, but that expanding the scope of the problem beyond the committee could mitigate its force. Structuring scientific advice to facilitate broader democratic scrutiny could reduce the impact of this problem without sacrificing democratic decision making or scientific knowledge.

The paper is organized as follows. The first section briefly describes the persistence of neutrality as an ideal in scientific advice. The second section examines the interplay of evidentiary and value-based considerations in scientific advice and illustrates the relationship between value judgments and usefulness. The third and fourth sections discuss the limitations of responses that favor one horn of the dilemma over the other. The fifth section argues for the need to restructure scientific advice to allow democratic scrutiny and judgment. The sixth section traces the implications of this argument for the use of scientific advice during the COVID-19 pandemic, and the last section responds to objections.

The Neutrality Ideal

Independent bodies of scientific advice came to occupy a prominent role in public policy after World War II. The provision of large amounts of public funding for science in the postwar United States increased the scope and power of scientific research and cemented the mutual dependence of scientists and the state. The size, complexity, and institutionalization of expert bodies offering scientific advice to the government grew rapidly in this period, and scientists advising the government became a permanent feature of the institutional landscape of both democracy and science (Oppenheimer et al. 2019, 9). Over time, producing advice for policy became such a highly structured and complicated activity that it is recognized as a distinct form of scientific work today.

Scientific advisory bodies have an unusual status in the institutional landscape of politics because they are composed of independent scientists, rather than elected politicians or appointed bureaucrats. Their members combine high-quality expertise with a plausible degree of detachment from politics. Unlike experts embedded in traditional bureaucracies, scientific advisers operate independently of clear structures of delegation and oversight. These features are the source of the authority and credibility of these bodies, but also complicate their democratic status. Independent scientific bodies present a rival source of authority in a democracy, which can at once strengthen and threaten democratic rule. On the one hand, reliance on the knowledge and competence of scientists can improve democratic outcomes and enhance the welfare of citizens, thus playing a key role in the success of democratic governments. On the other hand, the superior knowledge of scientists and the complexity of the science can result in more and more decisions being left in their hands, thus diminishing the scope of democratic decision making and possibly triggering a backlash against expertise.

Current institutional arrangements respond to this familiar dilemma of expertise by reverting to the traditional Weberian solution of a division of labor between scientists and laypeople: scientists are meant to handle the facts, based on an analysis of the evidence, while citizens and their representatives decide on the ends to pursue based on their values and preferences. Despite the well-known challenges of drawing the boundaries between science and politics (Gieryn 1983; Jasanoff 1990, 2004), this basic model still informs the formal mandate of many scientific advisory committees as well as the self-understanding of the scientists who serve on them. The National Academies of Sciences website describes its mission as providing a neutral assessment of the latest scientific evidence that Congress or the administration may need before it makes policy decisions. The Intergovernmental Panel on Climate Change, the most well-known and visible scientific advisory body of the past three decades, has likewise embodied this division of labor logic (Brown and Havstad 2017). Its self-stated aim is to assess the scientific literature relevant to understanding climate change in a way that is “policy-relevant, and yet policy-neutral, never policy-prescriptive” (IPCC 2020).

Oppenheimer et al. (2019, 184-87) show that scientists who participated in scientific assessments on issues such as acid rain, ozone depletion, and sea-level rise believed it was crucial for them to be seen as neutral in order for the assessment to be effective. Scientists who had publicly expressed their political views were not invited to participate in assessments lest they make the committee appear biased, even if these scientists were the most competent researchers working in the relevant area (13). Of the hundreds of scientists interviewed, the majority reported believing that reliably informing policy while remaining neutral was possible and indeed necessary and desirable (172). Some scientists saw neutrality as crucial for the public credibility of science, especially given the declining levels of trust in science. Others gave a democratic justification of the division of labor, arguing that making political judgments is neither the right nor the responsibility of scientists and that their private opinions as citizens are irrelevant to their public responsibility for informing policy.Footnote 2

For the purposes of this paper, I define neutrality as a stance that requires scientists to refrain from making judgments about moral and political values. It is an attitude that advisers take in their deliberations and reports vis-à-vis the values and ends of those they advise. The aim of neutrality is to ensure that scientific advice can serve different value outlooks evenhandedly and does not privilege some over others (Lacey 2013). Neutrality in this sense does not suggest that science itself is or should be value free, but requires advisers to adopt an attitude of restraint and leave aside moral and political judgments during the advisory process. Scientists can be more or less neutral, even if absolute neutrality is not attainable. Neutrality is different from objectivity, which I take to refer to the empirical reliability of scientific claims. While some conceptions of objectivity may require neutrality, others do not (Douglas 2004). This understanding of neutrality should also be distinguished from two nearby alternatives. The first is neutrality as the active balancing of different values and interests with the aim of treating them all equally. Douglas (2004) calls this stance “reflectively centrist.” The problem with this is that centrism is itself a moral and political attitude that must be justified. Some values may be objectionable, and balancing even unobjectionable values may be worse than selecting some over others. There is no reason to think balancing is desirable as a rule. The second alternative is to define neutrality as the position that emerges from critical interaction and negotiation among different values. I classify this later as one of the useful stances that a scientific advisory committee can take, but it would be conceptual stretching to call it neutrality, unless we treat neutrality entirely as a constructed pose.

My goal in this paper is to take a critical look at the stability and desirability of the aspiration to neutrality by examining the mechanics of decision making in advisory committees. Note that I will not be offering an empirical account of how well scientific committees live up to the charge of neutrality. Studies in the sociology of science have shown that scientific advisory bodies are never fully neutral or value-free in practice. Jasanoff (1990, 2004) persuasively demonstrates that science and politics are “co-produced” in advisory contexts, and Gieryn (1983) argues that the boundary separating science from politics is actively constructed, negotiated, and defended by scientists and politicians working at the intersection of these two spheres. However, these arguments take neutrality as a pose and examine how it is constructed in advisory contexts. I will instead evaluate it as an ideal that could be approximated more or less successfully. It might well be that neutrality should also be discarded as an ideal, but this conclusion requires more normative argumentation.

Facts and Values

The advisory process is different from research in important ways. First, advisory bodies usually do not undertake or commission new research; they evaluate existing peer-reviewed literature. Second, their advice is oriented toward practical goals and intended to produce an action or decision. The final product of an advisory committee must be a set of claims that a decision maker can accept as true in deliberations and planning toward solving the problem. What constitutes appropriate scientific advice therefore depends on the values and purposes of the decision maker. Scientific advice that is right for one person or in one context will not be so for another person or in another context. Advice will be more useful—in the instrumental sense of helping the government to set and attain democratic goals reliably—insofar as it can incorporate the values and purposes of citizens and decision makers. This does not require advisers to make policy recommendations, but involves close engagement with users’ values in describing, simplifying, and assessing the evidence. I will illustrate this claim through a discussion of some key ways in which the determination of scientific advice requires assuming specific values and purposes, and show that taking a neutral stance makes the advice incomplete or misleading. The following examples are not meant to be exhaustive, but to illustrate the logic of the dilemma with respect to some key advisory tasks.

One of the main ways in which useful advice requires practical judgments is in the determination that the available evidence is sufficient for an action or decision under uncertainty. Scientific inference always requires a judgment about the sufficiency of evidence for accepting a hypothesis (Churchman 1948, 1956; Douglas 2009; Rudner 1953).Footnote 3 Inductive inference is inherently open ended; no amount of empirical observation can guarantee the truth of an inductive generalization. Scientists must therefore judge whether the evidence is sufficient to accept a hypothesis. This judgment must be relative to an assumed purpose: sufficient for what? This gap in inductive inference forces the scientist to consider the purposes for which the accepted hypothesis might be used and decide on the basis of the potential consequences of making a mistake. It is always possible to accept a wrong hypothesis or fail to accept a true one; this is the inherent risk of induction.

One of the main tasks of advisory committees is to evaluate the strength of the available evidence with respect to possible real-world consequences. In fact, the charge of advisory committees is often expressed in terms of assessing the sufficiency of evidence on a particular question. An advisory committee asked to assess whether children transmit SARS-CoV-2 to adults or whether mask use reduces transmission rates is essentially asked to decide whether the evidence can be considered sufficient to reach these conclusions. This requires considering the potential consequences of these judgments and making normative judgments about the relative badness of false positives and false negatives. These judgments depend on the assumed purposes and perspectives. People with different interests will demand different levels of evidence in order to accept a scientific claim to be sufficiently reliable to act upon. Any choice of evidentiary threshold implicitly favors one set of interests or priorities over another.
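
The structure of this judgment can be made explicit with a minimal decision-theoretic sketch; the notation and framing are mine and purely illustrative. Let p be the committee’s credence that a hypothesis H is true (say, that masks reduce transmission), let C_FP be the cost of acting on H if it is false, and let C_FN be the cost of failing to act on H if it is true. Treating the evidence as sufficient minimizes expected loss only when

\[ (1 - p)\,C_{FP} < p\,C_{FN}, \qquad \text{that is, when} \qquad p > \frac{C_{FP}}{C_{FP} + C_{FN}}. \]

The evidential threshold therefore depends on the assumed ratio of error costs: those who regard false alarms as far worse than missed dangers will demand near-certainty, while those with the opposite valuation will act on much weaker evidence. The evidence fixes p; only a value judgment fixes the threshold.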

Advisory committees often express their conclusions using terms that tread a fine line between scientific observation and normative judgment. Descriptions of environmental or biological changes as “damage” or “harm” and the characterization of certain possibilities as “risks” or “dangers” exemplify this (de Melo-Martín and Intemann 2016). These terms can be viewed as “thick concepts,” following Williams’s (2006) usage of the phrase to describe words that contain both factual and ethical content. In the scientific context, these thick concepts combine an observational quality, which is derived from scientific research, with an evaluative quality, which classifies observations as good or bad according to an implicit normative framework. This judgment cannot be made neutrally or without reference to a subject or purpose. Good for whom or for what? The answer will depend on whose perspective is adopted and how this perspective is represented.

The use of a thick concept to describe a natural phenomenon entails a commitment to the underlying normative framework and its practical implications. To claim that a disease has spread in a forest is not only to describe a change in a natural system and not only to signal that this change is bad, but also to imply that commonly held normative views about how to respond to disease are appropriately invoked in this context. The use of thick concepts has implications for the neutrality of an advisory committee. If the committee’s assessment is accepted as the factual background of subsequent political deliberations about whether and how to act, the normative link between certain natural changes and a specific quality of badness will have been established without debate. Scientists, policy makers, or citizens who want to argue against policy action will either have to question the scientific basis for the claims or argue that the costs or side effects make it undesirable to act in response to them.

Defenders of the neutrality ideal for scientific advice might argue that scientific committees should try to disentangle the descriptive from the normative and stick to the former. But even if it were possible for scientists to avoid using thick concepts, doing so would come at a cost to the usefulness of their advice. If scientists simply described the effects of acid deposition on the biological and chemical makeup of an aquatic system, for instance, laypeople would be unable to understand whether these changes are good or bad, safe or unsafe. These normative judgments are among the crucial pieces of information expected from scientific advice. We thus arrive at the same paradox: if scientists perform their advisory role helpfully, they will have to compromise their neutrality.

Another important function of an expert committee is to simplify complex information for decision makers and the public. Simplification allows policy makers to make decisions on the basis of technical information that they might otherwise be unable to understand. There is no neutral way of aggregating, summarizing, and simplifying information. The choice about what to include and what to leave out is usually made with reference to what is considered significant and relevant, which necessarily requires purpose-relative judgments. Insofar as an advisory committee ignores considerations about what is politically significant and relevant, its advice will be less useful. At the same time, the more a committee evaluates the significance and relevance of scientific claims, the more it will move away from neutrality.

Additionally, the most useful summary for a decision maker will not necessarily consist of claims that are evidentially best supported, especially under conditions of uncertainty and complexity. Accuracy can stand in tension with other features of scientific findings that are important for the attainment of practical goals. Since scientific advice is oriented toward action rather than just truth, it will be improved by attention to these practical considerations rather than just the quality and strength of the evidence. One way to simplify scientific advice is to focus on the evidence alone and report the findings that are scientifically best supported. However, this is not always helpful for decision makers because trading off accuracy against other values may in fact increase their chances of attaining their goals under uncertainty. Disease models that provide more information on variables and timescales of interest to policy makers and the public will be more useful even when their predictions are less accurate than alternatives. The decision maker’s risk aversion should also play a role in the selection of models. A risk-averse decision maker would have a better chance of attaining her ends by acting on models that give greater weight to bad outcomes, whereas a risk-loving decision maker would be rational to choose more optimistic models. Just how much more evidence one would need to have before accuracy would trump all other considerations about risk and payoffs depends in part on the risk aversion of the decision maker and the values she assigns to different outcomes. These examples show how an advisory committee’s decision about which findings or models to report could be improved by considering the ends to which the knowledge would be put to use, as well as the values and preferences of the user.
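
A stylized calculation, with invented probabilities and payoffs, illustrates why reporting only the best-supported model can ill serve a decision maker. Suppose model A is judged more likely to be correct (probability 0.7) and predicts a moderate outbreak, while model B is judged less likely (probability 0.3) but predicts a severe one. Planning on A costs nothing if A is right but produces a catastrophic loss of -100 if B is right; planning on B requires precautionary spending of -10 either way:

\[ EU(\text{plan on A}) = 0.7(0) + 0.3(-100) = -30, \qquad EU(\text{plan on B}) = 0.7(-10) + 0.3(-10) = -10. \]

Acting on the less probable, more pessimistic model is the better bet even for a risk-neutral decision maker, and a risk-averse one has still stronger reasons to do so. A report limited to the best-supported model would have withheld exactly the information this comparison turns on.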

The fact that these practical judgments depend crucially on an understanding of the strength and uncertainty of the evidence creates a prima facie epistemic (though not yet moral or political) case for scientific committees to make these judgments for decision makers, on the grounds that they understand the evidence best. To be clear, this would not mean prescribing or advocating for specific policy decisions, but making the value judgments that arise in the advisory context by anticipating the needs and aims of citizens. This underscores the trade-off between the neutrality of a committee, which could be fulfilled by refraining from making these normative judgments, and the usefulness of its advice, which could be enhanced by making them.

Aiming for Neutrality

There are two ways to respond to the tension between neutrality and usefulness, each favoring one horn of the dilemma over the other: the first insists that scientists must still try to remain as neutral as possible, while the second endorses abandoning neutrality in various ways. Both approaches might concede that neutrality is impossible to achieve fully, but the first sees it as a valuable regulative ideal whose close approximation is possible and desirable, whereas the second sees approximation as undesirable. These approaches can be seen as corresponding to Pielke Jr.’s (2007) classification of advisory styles: what he calls “pure scientists” and “science arbiters” are examples of neutral advisers who simply provide information. His “honest brokers” and “issue advocates,” by contrast, favor usefulness and engage closely with the values and choices of the decision maker by making one or a few policy recommendations.Footnote 4 Not just individual advisers but different kinds of scientific advisory committees can be categorized by their choice between neutrality and usefulness. For instance, the National Academies of Sciences committees typically favor neutrality, while vaccination advisory committees favor usefulness and consider social and political issues alongside the science.

One way to approximate neutrality as closely as possible is to keep the discussion to evidentiary matters wherever possible and to use scientific values and purposes instead of ethical and political ones when values and purposes are unavoidable. Betz (2013) argues that scientific advisory bodies could avoid making value-laden judgments about the sufficiency of evidence by carefully reporting the uncertainty of different hypotheses. Where uncertainty forces a choice between different types of error, scientists should weaken their language to such an extent that the available evidence confirms their conclusions beyond reasonable doubt. This would minimize the risk of error and avoid moral judgments about the relative desirability of different types of error. The result of these efforts would not be perfect neutrality, but it could plausibly be described as an approximation. Indeed, there is evidence that scientists often respond to the charge of neutrality precisely in this way. For instance, Hauray and Urfalino (2009) argue that scientific advisers to the EU have responded to the increasing pressure to remain neutral toward competing national interests by making decisions based on scientific arguments alone. Neutrality across countries was achieved at the expense of nation-specific socioeconomic priorities.

One problem with this approach is that it can end up masking the ways in which advice is in fact non-neutral. Scientists could envision the downstream political implications of different scientific claims and tailor scientific arguments to advance their political preferences. This is one of the standard fears about expert committees.Footnote 5 But even if we set aside strategic abuses of neutrality and focus on sincere attempts to remain neutral, this approach would be undesirable because it renders the advice less useful. Scientists might fail to give due weight to findings with important practical implications because their scientific merits are more tenuous. Valuable information might be lost in the process and important courses of action inadvertently ruled out. Moreover, advice that focuses on purely scientific issues without addressing practical concerns that matter to decision makers and citizens might simply be ignored.

There is a paradoxical tradeoff here: introducing non-evidentiary considerations into advisory committee deliberations can increase the chances of scientific error, even while it increases the chances of attaining desired practical goals. Thinking decision-theoretically can help us see why. The best course of action for an individual is determined by the values that she attaches to different outcomes, along with the probability of bringing them about. If a person attaches a large positive or negative value to an outcome, this outcome will acquire substantial weight in her decision calculus, even if it has a small probability of occurring. Possible outcomes that scientists leave out of a report on the grounds that the evidence for them is not strong can be crucial for someone who attaches a sufficiently large positive or negative value to them. Similarly, it can be rational for a person to bracket scientifically well-supported theories in her deliberations if they make no difference to her practical options.

Conceptualizing the challenges of advice through this lens can change our interpretation of some well-known public controversies around science by revealing the possibility that nonexperts who appear to be resistant to accepting scientific advice may in fact be attaching extremely high values to low-likelihood outcomes. Some parents’ refusal to vaccinate their children can be reinterpreted in this light. The probability that a vaccine will harm an individual child is usually so small that public health officials advising the government and the public may discard it as negligible and emphasize the safety of vaccines. Parents who refuse to vaccinate their children are then portrayed as denying sound scientific advice. However, an alternative interpretation is that parents are acting on the very small risk of injury that a vaccine always poses and assigning extremely large negative utility to the rare outcome of their child suffering a severe reaction to the vaccine. Since the benefits of vaccination accrue at the population level, this calculus may be rational for the individual parent (Kirkland 2016). What the refusal to vaccinate betrays, more than anything, is a failure of solidarity and care for the well-being of other members of society, and in particular the most vulnerable who are most likely to suffer from a potential outbreak.Footnote 6
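
The logic of this reinterpretation can be displayed with a deliberately simplified calculation; the figures are hypothetical and chosen only to exhibit the structure, not to describe any actual vaccine. Suppose the parent accepts the experts’ estimate that a severe reaction has a probability of one in a million, but assigns that outcome a utility of -10,000,000 on some personal scale, while valuing the child’s expected individual benefit from vaccination at only +5 because high coverage among others already keeps the risk of infection low:

\[ EU(\text{vaccinate}) = \tfrac{1}{1{,}000{,}000}(-10{,}000{,}000) + 5 = -5, \qquad EU(\text{refuse}) \approx 0. \]

On these valuations, refusal maximizes the parent’s individual expected utility even though universal refusal would be collectively disastrous. The disagreement with the advisers lies not in the probabilities, which the parent may accept, but in the utilities attached to the outcomes.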

Seeing the case this way may not change our verdict on the rationality and ethics of failing to vaccinate, but it crucially alters our understanding of the causes. This view implies that trying to change the values and preferences of parents would be a more effective strategy for changing behavior than claiming that they fail to understand the science. It also explains why advisory committees’ purportedly neutral attention to the scientific merits of claims about vaccine safety without attention to the concerns and priorities of the public can go badly wrong. While the pressure for simplification and the balance of the scientific evidence may justify a committee’s decision not to emphasize certain small possibilities, it can also lead to a loss of information that others would regard as important, which, in a hostile environment, can turn into distrust of the committee’s motives.

Useful Advice

The alternative solution is to move away from neutrality and aim for usefulness by conceptualizing scientific advisory committees as appropriate sites for deliberation about values and ends. On this view, advisory committees are expected to consider the practical implications of their advice and make necessary value judgments rather than simply reporting facts. They should describe scientific findings with an eye to their implications for different stakeholders, select information based on its moral and political significance rather than just evidentiary strength, tailor their reports to political priorities, and determine evidentiary standards in light of the consequences of different kinds of error. Depending on the context, they might also offer one or more policy options, but useful advice does not require making policy proposals and certainly need not involve advocating for a single course of action. What is crucial is that advisers show close engagement with the values and aims of different citizens during deliberations on the kinds of key advisory judgments I have highlighted. Scholars who study advisory bodies empirically have noted that expert committees do engage in this kind of political work in practice (Jasanoff 1990; Owens 2015), but this has rarely been spelled out and defended as an ideal.

It should be clear by now that advice that considers the ends and values of the decision maker will be more useful. The challenge is to specify how expert advisers should come up with the appropriate ends and values. The advisory relationship would be straightforward if there were a single adviser and a single decision maker, and the content of the advice were fully accessible to the decision maker. The decision maker could communicate her values, priorities, and goals to the adviser; express her attitudes toward risks and her valuation of different outcomes; and the adviser could offer perfectly customized advice. Unfortunately, the scientific advisory process in politics is unlike this simple model in several ways: there are several advisers, several stakeholders with conflicting interests and values, and the science itself is often inaccessible to non-scientists. How should scientists make these value judgments in delivering scientific advice? Whose values should they use? These are the questions I turn to next.

Ethical Scientists

One solution is for scientists to try to make these judgments ethically. Douglas (2009) argues that scientific advisers have a moral obligation to consider the social and political consequences of their advice, and they should focus especially on the potential consequences of being mistaken before deciding what advice to give. For instance, if a toxic air pollutant is associated with spikes in respiratory deaths but the evidence for causality remains uncertain, scientists should consider the moral consequences of emphasizing the dangers versus emphasizing the uncertainty (Douglas 2009, 81-82). Since the consequences of failing to emphasize real dangers would be worse than raising a false alarm, scientific advisers have a moral obligation to emphasize the dangers.

However, the plausibility of this example rests on the obviousness of the moral case, which is likely due to the widespread acceptance of the precautionary paradigm in environmental regulation. Against the background of a clear and politically negotiated agreement about the appropriate value trade-off between lives saved and economic interests, it is not problematic for scientists to rely on moral judgments that follow from the existing consensus. In many other cases, there will be no existing social consensus or politically negotiated compromise. Especially if the science is new and its effects not fully understood, the moral dilemmas and distributive problems involved will be largely speculative and subject to disagreement. In the absence of a social consensus, it is not clear how scientists should discharge this task. They could either follow their own best judgment or try to act as representatives and channel what specific groups or a majority of citizens might prefer.

Douglas (2009) argues that it would be preferable for scientists to reflect the values of the public in making such judgments. Scientists should try to discern the values that the public and stakeholders might hold and make the trade-offs others would prefer. Several scholars have followed her in arguing that scientists in advisory contexts ought to use “the values of the public and its representatives” (Schroeder 2017, 1052) or even “our values” (Elliott 2017, 160) when they need to make value judgments (see also Resnik 2017; Havstad and Brown 2017a; Plutynski 2017). While the suggestion that scientists should use so-called public values is more democratic than experts relying on their personal judgments, these approaches assume that the right values—whether moral or democratic/public—are easily discernible and that scientists can determine them through reflection and deliberation. The wider political processes of contestation, criticism, conflict, and compromise that construct and articulate different values and ends, as well as revealing and resolving disagreements, are absent from these accounts. Instead of offering a defense of why it is appropriate for scientists to act as representatives of public values, these views fail to acknowledge the problem of representation altogether, even while entrusting scientists to make essentially political judgments.

Scientists as Representatives

Brown (2009) has written most extensively about representation in contexts of expertise, so he offers a more direct response to this problem. He argues that scientific advisory committees should be conceived as sites where social and political representation can be achieved through a careful balancing of perspectives (236-44). Brown stresses that experts and laypeople should not be conceived as representing professional and social interests, respectively—that would replicate a problematic division of labor—but that both should be conceived as representing different social perspectives. The goal is to strike a fine balance between ensuring that the composition of advisory committees includes a range of perspectives and avoiding the two extremes of purely partisan alignments and purely scientific representation. He discusses the example of the President’s Council on Bioethics (245-50), which moved away over time from its original mandate of providing neutral advice to representing a variety of professional and social perspectives.

One problem with this view is that it is difficult to specify in advance the perspectives that will be relevant in composing a committee. Should perspectives be understood in terms of demographics, geographic location, nationality, or professional commitments? Brown suggests that the answer will be given by the purpose of the committee, but it is unlikely that identifying a purpose will be sufficiently determinate, especially if the purpose is to provide advice on a new scientific development and its practical implications. The task of scientific advisory committees is precisely to clarify the implications of the science. This creates a chicken-and-egg problem: without proper representation on the committee, the resulting advice may be biased; without a clear and unbiased sense of the issue, advisory committees cannot be composed in the properly representative way. The power to define the purpose then becomes the most crucial part of the process—one that will determine the direction of subsequent debates before the committee work has even begun. The problem of deciding on the purpose and membership of a committee is particularly acute for scientific advisory committees that work at the forefront of scientific research. Without a social consensus about the main perspectives on an issue, the appropriate composition of committees will be indeterminate, and any attempts to settle the issue will be open to the charge of arbitrariness.

But there is a deeper problem: as Brown is well aware, even the careful balancing of perspectives within a scientific committee is not adequate for democratic representation. There are intrinsic limits to how representative a committee can be due to its small size and special composition of experts. Members of the scientific community are quite different from the rest of the citizenry. They are highly educated, disproportionately male, white, and from a high socioeconomic background (Guterl 2014). Moreover, belonging to the same demographic, geographic, or professional group does not mean one will be a representative member of that group. Brown admits the impossibility of replicating the full diversity of society within a single committee, so he suggests that committee members should rely on their own judgment in deliberation, rather than thinking of themselves as direct representatives of a group. While the point is reasonable enough, it also reveals the limits of political representation through expert committees, and falls back on something like the ethical scientist model, relying on competent reflection and high-quality deliberation by scientists.

Brown’s argument that expert committees must be understood as sites of political representation opens up a new and productive way of thinking about scientific advice, but also reveals its limits. Ultimately, we must accept that a small committee can never be sufficiently representative, given its size and its special composition of experts, and think about how democracies can respond to this fact.

Scientific Dissent and Public Scrutiny

The discussion of recent proposals for scientific advice reinforces our main dilemma: proposals that emphasize neutrality sacrifice usefulness, while those that aim for more useful advice end up giving scientists a political role that they are ill-equipped to fulfill. There might simply be a limit to how satisfactorily this dilemma can be solved at the committee level. I therefore propose that we try to mitigate the problem by changing its scope: instead of focusing only on how scientists could respond to these contradictory demands within a committee, we should ask how the inherent limitations of advice might be addressed through broader political processes. Scientific advisory committees should be conceived as initiating and guiding a democratic debate over science, rather than settling the science for policy makers. This would remove the pressure on committees to artificially separate the facts from the values, while reducing the stakes on their necessarily limited attempts at representing diverse societal interests. A more inclusive public debate, with participation from the rest of the scientific community as well as from affected citizens, NGOs, interest groups, and activists, would examine the judgments and assumptions of the committee, while articulating a broader range of values, perspectives, and interests.

My argument for broadening the scope rests primarily on democratic grounds, rather than on the claim that doing so would ensure better outcomes defined independently from democratic procedures. Scientific advice is usually handled within elite channels and accepted (or dismissed) without much scrutiny and debate. Its internal dynamics are typically examined independently of broader political processes. For instance, for many years, the Intergovernmental Panel on Climate Change responded to pressures for public accountability mainly in terms of more effective communication strategies, failing to consider how its decision procedures could be revised to engage with the concerns of democratic publics around the world (Beck 2012). It remains unusual to think of advisory committee practices as addressed to a large audience that is expected to take on an active and critical role rather than passively following advice. The suggestion for opening up the advisory process to a broader audience is similar to Moore’s (2017) argument for more public scrutiny of expertise, but my argument for this proposal is rooted in the tension between neutrality and usefulness and the difficulties it creates for the provision of good scientific advice, whereas Moore offers it as a way to legitimate epistemic authority in politics broadly speaking.

The suggestion to submit scientific advice to public scrutiny is not meant to be a solution in itself; it is a reframing of the problem to allow for more productive solutions. It directs our attention to the question of how scientific advice could be structured to facilitate public debate and scrutiny under conditions of asymmetric knowledge. For democratic scrutiny to be possible, nonexperts must acquire a sense of the committee’s assumptions and priorities, as well as the role of uncertainty, disagreement, and value trade-offs in its reports. However, scientists may not be able to identify their own value judgments as value judgments and may believe that their conclusions follow directly from scientific findings. It is particularly challenging for nonexperts to determine how relying on other values or making different assumptions would change the substance of the advice. The standard practice of aiming for consensus within scientific committees exacerbates this problem (Guston 2006; Moore 2017, 134-36; Urfalino 2012). The presentation of a single committee position at the end hides the process from view and erases the alternative viewpoints that were considered but ultimately discarded. This makes it difficult for outsiders to appreciate the weaknesses of the committee’s advice and to envision objections and alternatives. The fact that there were different views on a committee and that the consensus was the result of a decision procedure rather than of uncoordinated scientific convergence is a crucial piece of information that should be visible to the public.

Moore (2017) proposes that committees should disclose the results of their votes in order to signal that there were different views. This proposal shows the right logic for holding experts accountable but does not go far enough. A record of committee votes does not reveal much about the sources of disagreement and how reasonable or significant they were. I propose that we take this idea one step further and enhance the accountability of scientific committees by adopting a practice from the U.S. Supreme Court: the writing of dissenting opinions. These opinions not only record disagreement with the decision, but also enhance the broader social and democratic role of the court’s decisions (Guinier 2008). If court decisions embody the authority and finality of law, dissenting opinions open these up to scrutiny in democratic processes that rest on the idea that the possibility for revision and change remains open. Since the court’s verdict is binding on the litigants, the real impact of a dissent is on these democratic processes outside the courtroom.

The increasing prevalence of Supreme Court dissents in the United States over the past century reflects a change in the understanding of the meaning of the court’s decisions, from statements of fixed and immutable principles to flexible and revisable decisions for a particular time and social purpose (Post 2000). This closely resembles the view of scientific advice I have defended here: as uncertain, fallible, and contested statements intended for a particular purpose, rather than certain and acontextual scientific facts. It thus makes sense for scientific advisory committees to adopt the court’s practice of offering one or more dissenting opinions, explaining and defending alternatives that were rejected in the committee. Such dissenting opinions on scientific advisory committees would have both epistemic and democratic value.

The main epistemic value would lie in recording and keeping alive the views that lost out. The pressure for simplicity and agreement pushes committees toward settling on evidentially well-supported options that might be highly diluted; committees tend to converge on the lowest common denominator (Oppenheimer et al. 2019, 16).Footnote 7 I mentioned earlier that this tendency creates an informational loss that might increase the chances of error and rule out the pursuit of certain ends. It is possible that other citizens will find alternatives discarded by a committee more significant and useful. The awareness that dissenters might write separate opinions would also improve the majority view by encouraging more attention to the limits and uncertainty of arguments and evidence, and more careful consideration of the assumptions underlying its conclusions. After all, scientists on the committee would have the clearest understanding of the assumptions, uncertainty, and possible error of its conclusions. The possibility that these would be publicly exposed in a dissenting opinion would be a disciplining force that would ensure that committee reports are well supported and refrain from overstating or understating the uncertainty of the evidence. This would improve the committee’s advice.

The democratic value of dissent would likewise be significant. The expression of divergent views from the committee would facilitate critical scrutiny in the public sphere. Nonexperts would have a better chance of examining expert views if they had guidance from experts themselves. Having several opinions from a committee would support dissenting views in society and provide stronger scientific grounds for dissent where such grounds can be found. It would also provide assurance that important alternatives have not been suppressed. The depth and breadth of the written dissent would reveal crucial information about how settled the scientific opinion is, which would be conveyed more persuasively through a dissent than through a single report. This would give policy makers more choice, while putting the emphasis on the limits of different views. Policy makers would still have to decide which actions to take and whether to follow the majority or minority opinion. The main difference is that this approach would draw attention to the limits of different scientific views and force policy makers to take responsibility for possible mistakes. Opposition parties, journalists, activists, and social movements could also find arguments and support from accessibly written dissenting views and use these to hold decision makers accountable. Minority reports would also allow the pursuit of different policy strategies at different levels of decision-making, thus facilitating experimentation at different scales.

The presentation of majority and minority opinions would strike a balance between neutrality and usefulness at the committee level, even if it would not fully resolve the tension between the two. This approach would offer more useful information than the presentation of purely scientific information since it requires committees to make and communicate value judgments about the sufficiency, significance, and relevance of the evidence. The fact that conclusions are presented as majority and minority opinions would also give a useful signal about the distribution of opinion on the committee. At the same time, this model would retain some advantages of neutrality: it would clarify the limitations of each view and offer alternative mappings of facts and values without making prescriptions.

The presentation of different expert views initiates and guides broader processes of debate and questioning. These include formal processes of review by authorized officials as well as informal processes of opinion formation, debate, pressure, and resistance in civil society. The government bears primary responsibility for questioning scientific advice, testing its limits, and ensuring its compatibility with public aims and values. However, the task of scrutiny cannot be entrusted entirely to government officials, as this would make it difficult for citizens to hold them accountable. The public must also have a sense of the reliability and certainty of expert advice, the implications of different courses of action, and the strength of the evidence in order to determine whether the government is furthering their interests and making sound judgments.

For scientific advisory committees to reach a mass audience and influence public opinion, their advice will be mediated through the press, as well as through digital and social media. This gives a critical role to journalists, bloggers, and science communicators in ensuring the success of scientific advice in a democracy and highlights the importance of effective communication strategies, especially for conveying uncertainty without creating distrust (e.g., Van der Bles et al. 2020). The widespread dissemination of misinformation in the media landscape today poses a challenge for these efforts and emphasizes the urgency of developing strategies to counteract misinformation. To this end, it might be best for scientific organizations themselves to develop online strategies and platforms, for instance by setting up operations to monitor networks and websites that spread false scientific information and respond through rebuttal campaigns on social media (Iyengar and Massey 2019).

COVID-19

I now turn to the question of how this theoretical framework helps us evaluate the role of scientific advice during the COVID-19 pandemic. Social scientists and public health experts will study the merits of different countries’ COVID-19 responses for years to come; my aim here is not to assess these responses systematically but to discuss three cases of scientific advice that illustrate my arguments particularly well. I will show that my theory would have recommended a different approach to scientific advice at some critical junctures and also highlight some advisory processes that involved the kinds of critical democratic scrutiny I have argued for, with seemingly good results. However, since good scientific advice is just one of many variables that interact in complicated ways to produce all-things-considered outcomes, it is hard to predict how the overall number of cases or deaths would have changed under the approach I recommend.

The first case concerns scientific advice about mask use in the United States in the early months of the pandemic. There is now widespread agreement that masks, social distancing, and hand washing are the most reliable measures for reducing the spread of COVID-19. In the early months of the pandemic, however, there was uncertainty and disagreement within the scientific community about whether a mask-based strategy would be effective in reducing community spread. The uncertainty was in large part due to the lack of evidence on two questions: whether the virus spread via aerosols and whether asymptomatic transmission was possible (Peeples 2020; Wright 2021). If the answer to both questions was negative, then masks would not be as important or effective.

The dilemma that scientific advisers faced at the time can be formulated as an inductive risk problem, where judgments about the sufficiency of evidence had to be weighed alongside the consequences of false positives and false negatives. If advisers erred in the direction of understating the effectiveness of masks, disease transmission might increase significantly. If they erred in the direction of overstating their effectiveness, the supplies for health workers might be unnecessarily depleted. Following the paradox of advice, scientists could either describe the (lack of) evidence and the major unknowns as neutrally as possible, or they could evaluate the consequences of different types of errors based on their own best judgment of the public interest. Scientific advisers in the United States took the second route. Not only did they understate the effectiveness of masks in describing the evidence, but they went even further and weighed in decisively and unanimously against mask use. CDC director Robert Redfield, Surgeon General Jerome Adams, NIAID director Anthony Fauci, and White House COVID Response Coordinator Deborah Birx presented a united front throughout February and March 2020, emphasizing that masks were not effective in protecting against the disease and imploring Americans not to buy them (Wright 2021). “In the United States, there is absolutely no reason whatsoever to wear a mask,” Fauci said (O'Donnell 2020). These public health messages did not mention the uncertainty and disagreement in the scientific community, nor the fact-value calculation that advisers had made on behalf of the public. When the same experts reversed course in April and recommended mask use, the new message did not stick. The problem was exacerbated by President Donald Trump's continued insistence that mask wearing was voluntary.

To be clear, scientific advice has to be updated in light of changing evidence, especially during a new crisis. However, scientists in this case did not have sufficient evidence that masks were not effective against COVID-19, and there was at least some evidence that countries with mask requirements were dealing more effectively with the disease. My argument suggests that under these circumstances, scientific advice should have been delivered in a way that facilitated public scrutiny of its evidentiary grounds and background assumptions, and that offered a fair representation of the strongest alternative view considered and dismissed—namely, that it would be preferable to err in the direction of overstating the effectiveness of masks, given the uncertainty and the stakes. This approach would have given policy makers and citizens the opportunity to assess the balancing of facts and values, allowed individuals to decide for themselves whether to wear masks, and made it more likely that the later reversal in messaging would be effective. In fact, dissenting opinions from experts could generally make it less damaging for governments to change course, by signaling the existence of good scientific reasons for taking a different approach.

A second example concerns the recommendations of the CDC's Advisory Committee on Immunization Practices (ACIP) for the allocation of COVID-19 vaccines in the US. Like the mask example, this case illustrates the problems that can arise when a scientific advisory committee makes moral and political judgments, but it also demonstrates the advantages of public scrutiny and criticism. Vaccine advisory committees are distinctive in that they are not neutral about the aim of vaccinating as many people as safely as possible, and they consider moral and social issues alongside scientific ones (Kirkland 2016). In the case of COVID-19, for instance, ACIP took the desirability of widespread vaccination for granted. The committee's aim was to advise the federal government and state governments on the best way to allocate a limited supply of vaccines, weighing the evidence about mortality and hospitalization rates across different social groups alongside ethical and feasibility considerations. What distinguished ACIP's advisory approach was that it did not simply document the expected consequences of different vaccine allocation schemes and present the numbers neutrally to the government, but made a clear recommendation based on the committee's view of what would constitute a just distribution.

ACIP's preliminary recommendation was to prioritize healthcare workers first and then all essential workers ahead of older age groups. This was justified on the grounds that essential workers had a higher proportion of minorities, whose communities were hardest hit by the pandemic, whereas older Americans tended to be mostly white (Dooling 2020). While this would lead to a higher overall death count, the committee agreed that the difference in expected death counts was “minimal,” and that the allocation should therefore be determined by social justice considerations. But when these recommendations were publicized, public health officials, journalists, and others strongly criticized them. The data used by the CDC itself showed that thousands more lives would be lost under this proposal than under one that prioritized the elderly (Bubar et al. 2021). Critics took issue with the committee's characterization of thousands of deaths as minimal. The proposal would also mean a higher number of deaths among elderly people of color—the most vulnerable group in the population all told (Bubar et al. 2021).

My aim here is not to criticize ACIP's interpretation of justice but to draw attention to the decision process. Unlike many advisory committees offering scientific advice during the pandemic, ACIP was open to the public. Its meetings were recorded and posted online. It stood out among scientific advisory committees in the extent to which it solicited and incorporated public input. This openness to scrutiny and participation played a crucial role in the vaccine allocation process. In the face of public criticism, ACIP quickly revised its initial proposal and moved up the priority of those over 75. Furthermore, the openness of ACIP's data and reasoning and the public criticism of its initial recommendations helped state governments set their own guidelines. Some states convened their own advisory groups to rethink the advice in light of their local needs (Chotiner 2021).

My third example does not involve a particular advisory decision or recommendation but a new institution that aligned most closely with my argument that minority opinions from experts themselves are crucial in organizing and facilitating the democratic scrutiny of expert advice. While I did not come across any instances of expert committees offering written dissents during COVID-19, a group of prominent UK scientists established an alternative scientific advisory group in response to the perceived failures of the government's official Scientific Advisory Group for Emergencies (SAGE). Members of the rival group, called Independent SAGE, did not single out a specific scientific claim or policy position to challenge, but aimed more generally to counteract official SAGE's lack of accountability and the government's mishandling of the pandemic response, addressing issues ranging from the government's lockdown policies and the timing of school closings to the inadequacies of test-and-trace programs and the challenges of combating vaccine hesitancy.

The most important difference between SAGE and Independent SAGE was their visibility and accessibility to the public. The latter emphasized the importance of putting scientific advice in the public domain to ensure that citizens could engage with alternative scientific views. Some of its meetings were livestreamed on YouTube, and all of its advice was shared openly with the government and the public. The advisory reports of Independent SAGE supplied valuable scientific analyses for the opposition parties, which they could use to criticize the government's response and suggest alternatives. Finally, the pressure and publicity generated by Independent SAGE forced the government's official SAGE to become more open, sharing the evidentiary basis of its advice more readily with the public in the later months of the pandemic. Independent SAGE institutionalized and publicized the provision of dissenting advisory views and thus displayed exactly the spirit of my argument.

Objections

One objection I anticipate is that a system that encourages written dissents and facilitates public scrutiny would undermine trust in scientific advice and give politicians more leeway to do whatever furthers their political agenda. Politicians would get away more easily with choosing a dissent over the majority opinion than with ignoring a consensus report. Scientists could prevent the abuse of their advice if they resolved their technical disagreements internally and presented a consensus view to policy makers and the public.

It is difficult to refute this objection completely without empirical evidence on how policy makers and scientists behave under different advisory arrangements. Such evidence is hard to obtain because scientific advisory committees rarely offer public dissents and because it is difficult to establish when politicians follow or reject advice in good faith and when they do so for ulterior motives. It is generally difficult to make all-things-considered assessments of the desirability of an institutional recommendation without empirical evidence of its consequences, which we cannot acquire without testing the proposal. In the absence of this evidence, I will offer some reasons why this objection is not convincing.

The plausibility of the objection rests on a specific set of assumptions about science, experts, and politicians, several of which I have already argued against. The claim that it would be best for scientists to present consensus views would be most persuasive if the consensus of the committee were likely to be true and if it either relied on no assumptions about moral and political matters or else relied on the “right” assumptions (see footnote 8). I have been arguing that these cannot be assumed because of the intrinsic uncertainty and incompleteness of science and the role of scientists' own values in shaping scientific advice. Might we have reason to prefer consensual arrangements and discourage dissent even once we grant the uncertainty, incompleteness, and value-ladenness of science? Perhaps, under a set of very particular conditions: if we assume that politicians don't care about the science or the interests of the citizenry, that scientists could be trusted to know and be motivated to advance the right political aims, and that the consensus view of the committee is unlikely to be mistaken. However, these assumptions are unrealistically asymmetrical in their level of idealization. They assume the worst of politicians and the best of science and scientists. If we grant pessimism about politicians, we should also resist idealizing science and scientists.

To shake off the intuition that consensual advisory processes would be more desirable as a rule, it is helpful to consider how things can go wrong when scientific advice is unanimous, authoritative, and mistaken. The 1976 swine flu case that I started out with provides a good example. Recall that this episode was marked by highly uncertain scientific knowledge and that the best scientific advisers, apparently in good faith, made a mistaken assessment on its basis, with disastrous consequences. Scientists were overconfident, and dissent within the committee was suppressed to produce a consensus view. Several scientists who served on the advisory committee later admitted that they had thought the chances of a pandemic were small (Boffey 1976). But their beliefs were not conveyed to policy makers. The CDC director's contrived telephone polling is a reminder that outsiders know very little about how unanimity is reached on a committee. Perhaps the greatest damage in this case was the loss of public trust in the value and safety of public immunization programs and government public health initiatives (Neustadt and Fineberg 1978, 81–84). Efforts to manage public trust through strategic disclosures will backfire when scientific advice is mistaken.

Still, it is worth considering whether the critical mechanisms I propose on democratic grounds would have such negative effects on the credibility of scientists that they would trump any possible democratic gains. This worry assumes that trust in science is bolstered by authoritative and certain scientific assertions and diminished by admissions of uncertainty and disagreement. However, this common view of an inverse relationship between uncertainty and trust is not supported by empirical evidence. Recent work on communicating scientific uncertainty and risk has shown that people do not reduce their trust in scientific findings if uncertainty is reported, especially if the uncertainty is expressed numerically rather than verbally (Van der Bles et al. 2020). These findings are robust across different sources and types of uncertainty.

Another objection concerns the temporality of expert advice. Minority reports from experts and wider public scrutiny may make expertise more democratic, but they might also slow down decision making. In an urgent crisis such as COVID-19, wouldn't it be preferable to just follow a consensus opinion from scientists? To the contrary, I think the pressure to act quickly in an emergency is all the more reason to submit science to careful scrutiny first. Scientific research ordinarily moves slowly and passes through many quality-control mechanisms. In the midst of a new crisis, however, there is not enough time for these mechanisms to work; scientists are expected to provide advice based on uncertain, incomplete, and often poor-quality evidence. Scientific advice in light of such evidence is more likely to involve controversial assumptions, and any consensus under these circumstances is more likely to reflect a desire to present a united front than the fact that many studies confirm a conclusion. Moreover, the effectiveness of decisions on issues such as mask use or vaccine allocation depends on significant public uptake and behavioral change. Democratic processes that allow citizens and their representatives to examine and challenge scientists' interpretations of their interests and needs are more likely to secure public buy-in.

Another question (if not quite objection) I anticipate is whether the proposal for advisory committees to write public dissents is nothing more than a call for greater transparency. On the surface, this proposal is of course related to transparency; it is a demand for committees to share more information about their beliefs and disagreements with the public. This would be valuable for all the reasons that transparency is valuable: it would prevent the misrepresentation of advice to the public, make shifting blame to experts more difficult, and make it easier to expose government officials' false claims of following the science. But on a more nuanced level, my proposal requires both less and more than transparency understood as the disclosure of information. It does not require complete transparency because it does not involve the disclosure of internal deliberations or the immediate release of meeting minutes. Political theorists have shown that secrecy can be valuable in small-group deliberations, allowing participants to air more controversial claims and offer candid opinions about the weaknesses of their own positions (Bruno 2017; Chambers 2004). These can improve the quality of the resulting advice. The proposal for dissenting scientific opinions requires the disclosure only of unresolved disagreements. Moreover, since dissents would be explicitly directed at a public audience, they would include only the information that dissenters think the public and policy makers ought to know.

At the same time, this proposal also requires something that cannot be reduced to transparency: a culture of criticism and dissent within a group of advisers, and a willingness to speak to the public about disagreements. These require the cultivation of professional norms that make it acceptable to express disagreement, as well as the adoption of formal rules that permit majoritarian decision making and written dissents in committees. These norms and rules, in turn, would be meaningful only if a diversity of viewpoints could be found on the committee in the first place. The swine flu case illustrates that transparency alone would not provide valuable information about the limits of the committee's advice if dissent were suppressed or discouraged internally. My proposal is for a model of public criticism and contestation around scientific advice, which goes beyond mere transparency without requiring full disclosure.

Conclusion

I have argued that there is an inevitable trade-off between the neutrality and usefulness of scientific advice and that advisory committees must favor one or the other. I pointed out the serious limitations of trying to approximate neutrality and argued that committees should make some value judgments. However, this move opens up a Pandora's box of concerns around democratic representation, which an expert committee cannot address satisfactorily. To be useful, scientists must deliberate about matters that fall outside their areas of competence and on which they will be no better informed or qualified than nonexperts. Since the spectrum of political viewpoints cannot be adequately represented on an expert committee, scientific advice will always be open to the charge of bias and narrowness. I concluded that this dilemma cannot be solved within a committee and that we must rely on ex post accountability mechanisms to scrutinize scientific advice, contesting and revising it if necessary. Scientific advisory committees, in turn, should be conceived as aiming to facilitate such scrutiny, rather than settling the science for policy makers through opaque political channels. This argument is consistent with theoretical and practical efforts to democratize expert-driven areas of policy. My main contribution to this effort is in exploring the dynamics of a less studied but increasingly important expert institution—the scientific advisory committee—and offering precise answers to the questions of why it ought to be democratized and what it might take to democratize it.

Acknowledgements

The author is very grateful to Samuel Bagg, Eric Beerbohm, Daniel Butt, Sean Ingham, Tae-Yeoun Keum, Cécile Laborde, Maxime Lepoutre, David Miller, Daniel O'Neill, Élise Rouméas, Zofia Stemplowska, David Wiens, and four anonymous reviewers for this journal for helpful comments and suggestions. Earlier versions of this article were presented at the University of California, San Diego political science department, the Irish Philosophical Association Annual Conference, and the Oxford Centre for the Study of Social Justice. She thanks the audiences for their feedback.

Footnotes

1 Exceptions include Brown 2009; Jasanoff 1990; Moore 2017; and Pielke 2007.

2 Shapin (2009, 70–71) traces this back to an implicit contract struck during the Cold War: scientists were granted autonomy and vast resources from the state as long as they bracketed their opinions on moral, political, and military issues.

3 For responses to and extensions of Douglas's argument, see Elliott and Richards 2017.

4 One important difference between my argument and Pielke Jr.'s is that he closely associates these stances with the provision of one or a few policy recommendations. In my argument, by contrast, neutrality and usefulness are defined not with respect to policy recommendations but with respect to advisers' direct engagement with the values of others on the kinds of advisory judgments described in the section on facts and values.

5 An alternative defense of the neutrality ideal by Collins and Evans (2017) recognizes that scientists may end up smuggling in their value judgments and offers the solution of a committee of social science experts—“owls”—tasked with assessing the strength and substance of the scientific consensus impartially. However, this solution cannot avoid the neutrality-usefulness dilemma, since the owls would likewise face the challenges of determining the sufficiency of evidence, reporting a simplified summary, selecting relevant models, etc. It would also introduce an additional layer of uncertainty, disagreement, and possible value judgments, this time among the owls.

6 For a similar argument, see Goldenberg 2016.

7 Recall that Betz (2013) has promoted this as an effective way to protect scientists' neutrality.

8 Stephen John's (2018) provocative argument against honesty in science communication, for instance, assumes that the nonexpert ought to defer to claims that meet standards of scientific acceptance, even if the scientific consensus is artificial (i.e., reached through a vote). On this view, the aim of scientific communication is to secure the deference of the nonexpert. I disagree with this basic assumption, especially given the role of value judgments, uncertainty, and disagreement in science.

References

Beck, Silke. 2012. “Between Tribalism and Trust: The IPCC under the ‘Public Microscope.’” Nature and Culture 7(2): 151–73.
Betz, Gregor. 2013. “In Defence of the Value Free Ideal.” European Journal for Philosophy of Science 3(2): 207–20.
Boffey, Philip M. 1976. “Anatomy of a Decision: How the Nation Declared War on Swine Flu.” Science 192(4240): 636–41.
Brown, Mark B. 2009. Science in Democracy: Expertise, Institutions, and Representation. Cambridge: MIT Press.
Brown, Matthew J., and Havstad, Joyce C. 2017. “The Disconnect Problem, Scientific Authority, and Climate Policy.” Perspectives on Science 25(1): 67–94.
Bruno, Jonathan. 2017. “Democracy beyond Disclosure: Secrecy, Transparency, and the Logic of Self-Government.” PhD diss., Harvard University.
Bubar, Kate M., Reinholt, Kyle, Kissler, Stephen M., Lipsitch, Marc, Cobey, Sara, Grad, Yonatan H., and Larremore, Daniel B. 2021. “Model-informed COVID-19 Vaccine Prioritization Strategies by Age and Serostatus.” Science, published online, doi: 10.1126/science.abe6959.
Chambers, Simone. 2004. “Behind Closed Doors: Publicity, Secrecy, and the Quality of Deliberation.” Journal of Political Philosophy 12(4): 389–410.
Chotiner, Isaac. 2021. “Deciding Who Should Be Vaccinated First.” The New Yorker, January 11.
Churchman, C. West. 1948. “Statistics, Pragmatics, Induction.” Philosophy of Science 15(3): 249–68.
Churchman, C. West. 1956. “Science and Decision Making.” Philosophy of Science 22(3): 247–9.
Collins, Harry, and Evans, Robert. 2017. Why Democracies Need Science. Cambridge: Polity Press.
de Melo-Martín, Inmaculada, and Intemann, Kristen. 2016. “The Risk of Using Inductive Risk to Challenge the Value-Free Ideal.” Philosophy of Science 83(4): 500–20.
Dooling, Kathleen. 2020. “Phased Allocation of COVID-19 Vaccines.” Presentation to Advisory Committee on Immunization Practices, November 23.
Douglas, Heather. 2004. “The Irreducible Complexity of Objectivity.” Synthese 138(3): 453–73.
Douglas, Heather. 2009. Science, Policy and the Value-Free Ideal. Pittsburgh: University of Pittsburgh Press.
Elliott, Kevin. 2011. Is a Little Pollution Good for You? Incorporating Societal Values in Environmental Research. New York: Oxford University Press.
Elliott, Kevin. 2017. A Tapestry of Values: An Introduction to Values in Science. New York: Oxford University Press.
Elliott, Kevin, and Richards, Ted, eds. 2017. Exploring Inductive Risk: Case Studies of Values in Science. New York: Oxford University Press.
Gieryn, Thomas F. 1983. “Boundary-Work and the Demarcation of Science from Non-Science: Strains and Interests in Professional Ideologies of Scientists.” American Sociological Review 48(6): 781–95.
Goldenberg, Maya J. 2016. “Public Misunderstanding of Science? Reframing the Problem of Vaccine Hesitancy.” Perspectives on Science 24(5): 552–81.
Grey, Stephen, and MacAskill, Andrew. 2020. “Special Report: Johnson Listened to His Scientists about Coronavirus—but They Were Slow to Sound the Alarm.” Reuters, April 7.
Guinier, Lani. 2008. “Demosprudence through Dissent.” Harvard Law Review 122: 6137.
Guston, David. 2006. “On Consensus and Voting in Science: From Asilomar to the National Toxicology Program.” In The New Political Sociology of Science: Institutions, Networks, and Power, ed. Frickel, Scott and Moore, Kelly, 378–404. Madison: University of Wisconsin Press.
Guterl, Fred. 2014. “Diversity in Science: Where Are the Data?” Scientific American, October 1.
Hauray, Boris, and Urfalino, Philippe. 2009. “Mutual Transformation and the Development of European Policy Spaces: The Case of Medicines Licensing.” Journal of European Public Policy 16(3): 431–49.
Havstad, Joyce C., and Brown, Matthew J. 2017. “Inductive Risk, Deferred Decisions, and Climate Science Advising.” In Exploring Inductive Risk: Case Studies of Values in Science, ed. Elliott, Kevin and Richards, Ted, 101–25. New York: Oxford University Press.
Horton, Richard. 2020. The COVID-19 Catastrophe: What’s Gone Wrong and How to Stop It Happening Again. Cambridge: Polity Press.
Intergovernmental Panel on Climate Change (IPCC). 2020. “Organization.” Retrieved June 5, 2020 (https://archive.ipcc.ch/organization/organization.shtml).
Iyengar, Shanto, and Massey, Douglas S. 2019. “Scientific Communication in a Post-Truth Society.” PNAS 116(16): 7656–61.
Jasanoff, Sheila. 1990. The Fifth Branch: Science Advisers as Policymakers. Cambridge: Harvard University Press.
Jasanoff, Sheila, ed. 2004. States of Knowledge: The Co-production of Science and Social Order. London: Routledge.
John, Stephen. 2015. “Inductive Risk and the Contexts of Communication.” Synthese 192(1): 79–96.
John, Stephen. 2018. “Epistemic Trust and the Ethics of Science Communication: Against Transparency, Openness, Sincerity and Honesty.” Social Epistemology 32(2): 75–87.
Kirkland, Anna. 2016. Vaccine Court: The Law and Politics of Injury. New York: New York University Press.
Lacey, Hugh. 2013. “Rehabilitating Neutrality.” Philosophical Studies 163(1): 783.
Longino, Helen. 1990. Science as Social Knowledge: Values and Objectivity in Scientific Inquiry. Princeton, NJ: Princeton University Press.
Moore, Alfred. 2017. Critical Elitism: Deliberation, Democracy, and the Problem of Expertise. Cambridge: Cambridge University Press.
Neustadt, Richard E., and Fineberg, Harvey V. 1978. The Swine Flu Affair: Decision-Making on a Slippery Disease. Washington, DC: National Academies Press.
O’Donnell, Jayne. 2020. “Top Disease Official: Risk of Coronavirus in USA is ‘Miniscule’; Skip Mask and Wash Hands.” USA Today, February 17.
Oppenheimer, Michael, Oreskes, Naomi, Jamieson, Dale, Brysse, Keynyn, O’Reilly, Jessica, Shindell, Matthew, and Wazeck, Milena. 2019. Discerning Experts: The Practices of Scientific Assessment for Environmental Policy. Chicago: University of Chicago Press.
Owens, Susan. 2015. Knowledge, Policy, and Expertise: The UK Royal Commission on Environmental Pollution 1970–2011. Oxford: Oxford University Press.
Peeples, Lynne. 2020. “Face Masks: What the Data Say.” Nature 586: 86–9. doi: https://doi.org/10.1038/d41586-020-02801-8.
Pielke, Roger A. Jr. 2007. The Honest Broker: Making Sense of Science in Policy and Politics. Cambridge: Cambridge University Press.
Plutynski, Anya. 2017. “Safe or Sorry? Cancer Screening and Inductive Risk.” In Exploring Inductive Risk: Case Studies of Values in Science, ed. Elliott, Kevin and Richards, Ted, 149–70. New York: Oxford University Press.
Post, Robert. 2000. “The Supreme Court Opinion as Institutional Practice: Dissent, Legal Scholarship, and Decisionmaking in the Taft Court.” Minnesota Law Review 85: 1267–389.
Resnik, David B. 2017. “Dual-Use Research and Inductive Risk.” In Exploring Inductive Risk: Case Studies of Values in Science, ed. Elliott, Kevin and Richards, Ted, 59–77. New York: Oxford University Press.
Rudner, Richard. 1953. “The Scientist qua Scientist Makes Value Judgments.” Philosophy of Science 20(1): 1–6.
Schroeder, S. Andrew. 2017. “Using Democratic Values in Science: An Objection and (Partial) Response.” Philosophy of Science 84(5): 1044–54.
Scientific Pandemic Influenza Group on Behaviour (SPI-B). 2020. Insights on Public Gatherings. March 16.
Scientific Pandemic Influenza Group on Modeling (SPI-M). 2020. Consensus Statement on 2019 Coronavirus. March 2.
Shapin, Steven. 2009. The Scientific Life: A Moral History of a Late Modern Vocation. Chicago: University of Chicago Press.
Urfalino, Philippe. 2012. “Reasons and Preferences in Medicine Evaluation Committees.” In Collective Wisdom: Principles and Mechanisms, ed. Landemore, Hélène and Elster, Jon, 173–202. Cambridge: Cambridge University Press.
Van der Bles, Anne Marthe, van der Linden, Sander, Freeman, Alexandra L. J., and Spiegelhalter, David J. 2020. “The Effects of Communicating Uncertainty on Public Trust in Facts and Numbers.” Proceedings of the National Academy of Sciences 117(14): 7672–83.
Williams, Bernard. 2006. Ethics and the Limits of Philosophy. London: Routledge.
Wright, Lawrence. 2021. “The Plague Year.” New Yorker, January 4 and 11.
Wu, Joseph T., Leung, Kathy, and Leung, Gabriel M. 2020. “Nowcasting and Forecasting the Potential Domestic and International Spread of the 2019-nCoV Outbreak Originating in Wuhan, China: A Modelling Study.” The Lancet 395(10225): 689–97.