
Understanding the Problem of “Hype”: Exaggeration, Values, and Trust in Science

Published online by Cambridge University Press: 07 December 2020

Kristen Intemann*
Affiliation:
Department of History and Philosophy, Montana State University, Bozeman, Montana, USA

Abstract

Several science studies scholars report instances of scientific “hype,” or sensationalized exaggeration, in journal articles, institutional press releases, and science journalism in a variety of fields (e.g., Caulfield and Condit 2012). Yet, how “hype” is being conceived varies. I will argue that hype is best understood as a particular kind of exaggeration, one that explicitly or implicitly exaggerates various positive aspects of science in ways that undermine the goals of science communication in a particular context. This account also makes clear the ways that value judgments play a role in judgments of “hype,” which has implications for detecting and addressing this problem.

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© The Author(s), 2020. Published by Canadian Journal of Philosophy

1. Introduction

Many science studies scholars have claimed that “hype” is prevalent in science communication and that it is problematic (Caulfield 2004; Bubela 2006; Bubela et al. 2009; Besley and Tanner 2011; Caulfield and Condit 2012; Rinaldi 2012; Master and Resnik 2013; Weingart 2017; Medvecky and Leach 2019). Hype, broadly speaking, involves an exaggeration, such as exaggerations about the significance or certainty of research findings; the promise, safety, or future application of research programs or technological products; or the state of evidence about theories or models. Numerous empirical studies report finding evidence of hype in a variety of fields, including stem cell research (Mason and Manzotti 2009; Caulfield 2010; Kamenova and Caulfield 2015), artificial intelligence (Hopgood 2003; Brennen, Howard, and Nielsen 2019), neuroimaging (Caulfield et al. 2010), nanotechnology (Maynard 2007), genetics and genomics research (Evans et al. 2011; Caulfield 2018), biobanking and personalized medicine (Petersen 2009; Marcon, Bieber, and Caulfield 2018), and nutrition (Garza et al. 2019). Research suggests that science hype occurs not only in popular news media, but also in grant applications (Chubb and Watermeyer 2017), human subject recruitment (Toole, Zarzeczny, and Caulfield 2012), conference presentations, peer-reviewed journal articles and reports of clinical trials (Millar, Salager-Meyer, and Budgell 2019), institutional press releases (Bratton et al. 2019), and advertising (Caulfield and Condit 2012). The charge of hype has also been extended to ethicists, who may be guilty of exaggerating the promise and perils of particular studies or emerging technologies (Caulfield 2016).

A central concern about the prevalence of hype is that it might inflate public expectations such that when science falls short of these expectations, public trust in science, as well as enthusiasm for particular technologies, is undermined (Brown 2003; Caulfield 2004; Cunningham-Burley 2006; Downey and Geransar 2008; Bubela et al. 2009). Hype may also lead various agents to have false beliefs about the efficacy or safety of treatments, the likely benefits or risks of particular technologies, or the promise of a particular area of research. Communicating in a way that invites these misperceptions infringes on autonomous or well-grounded decision-making because it deprives decision-makers of information that is reliable or relevant. In turn, this can lead to poor health decisions, misdirected investments and resources, and a failure to pursue alternatives that might better address the problems at hand. In response to such concerns, some urge adopting new professional conduct codes and guidelines for science communication so as to avoid hype (Caulfield 2018; Vallor and Greene 2018; Garza et al. 2019).

Despite claims that hype is prevalent and raises both epistemological and ethical concerns, the concept of hype appears to be under-theorized and has been relatively neglected by philosophers. Those scholars who have examined hype often rely on different definitions that appear to conflict. For example, some have claimed that hype is best understood as an instance of scientific fraud (Begley 1992; Wilson 2019). Others insist that hype is distinct from fraud and requires additional professional norms or codes of ethics (Weingart 2017). In addition, some STS scholars rely on conceptions that are overly vague. For example, exaggeration is often taken to be the distinguishing characteristic of hype (Caulfield and Condit 2012; Weingart 2017), but more elaboration is needed about what constitutes an “exaggeration” or when it is problematic. After all, virtually all scientific claims involve going beyond existing evidence to make generalizations and predictions, but presumably not all involve the sort of “exaggeration” that would constitute hype. Finally, empirical studies also seem to employ methodologies that may be either overly broad or overly narrow in identifying instances of hype. For example, some compare the frequency at which “potential benefits” of technologies are mentioned in media reports to the frequency at which “potential risks” or uncertainties are mentioned in order to assess whether hype is occurring (e.g., Bubela and Caulfield 2004; Caulfield and Bubela 2004; Partridge et al. 2011; Caulfield and McGuire 2012; Marcon, Bieber, and Caulfield 2018; Marcon et al. 2019). But this may include cases where the risks are less probable or less significant than the potential benefits and, thus, perhaps worthy of less emphasis. It may also fail to catch cases where there is no explicit exaggeration, but communication is distorted as the result of omissions or a failure to contextualize risks or benefits (Bubela et al. 2009; Rachul et al. 2017). Thus, we need a clearer account of what hype is in order to ensure that the methodology being used to detect it is accurate.
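
To see why this worry has bite, consider a deliberately simplified sketch of the frequency-counting approach in code. The keyword lists, threshold, and function name below are hypothetical illustrations of the general method, not drawn from any of the studies cited:

```python
# A deliberately naive caricature of the benefit/risk frequency-counting
# method used in some media audits of hype. The keyword lists, threshold,
# and function name are invented for illustration; they are not drawn from
# any of the studies cited above.

BENEFIT_TERMS = ["breakthrough", "promising", "cure", "revolutionary"]
RISK_TERMS = ["risk", "side effect", "uncertain", "unproven"]

def looks_like_hype(article_text: str, ratio_threshold: float = 2.0) -> bool:
    """Flag an article as 'hype' when benefit mentions outnumber risk mentions."""
    text = article_text.lower()
    benefit_count = sum(text.count(term) for term in BENEFIT_TERMS)
    risk_count = sum(text.count(term) for term in RISK_TERMS)
    return benefit_count > ratio_threshold * max(risk_count, 1)

# Overly broad: an article may legitimately emphasize benefits when they are
# better evidenced or more probable than the risks, yet it is still flagged.
# Overly narrow: an article may contain no exaggerated benefit language at
# all and still mislead by omitting contextualizing evidence (implicit hype),
# so it is never flagged.
```

Both failure modes correspond to the concerns just raised: a raw count is blind to whether an emphasis on benefits is evidentially warranted, and it cannot detect exaggeration that operates by omission rather than by explicit claims.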

This conceptual vagueness is concerning from a philosophical perspective for a variety of reasons. First, we may be treating things as hype that are not normatively equivalent from either an epistemological or ethical perspective. If the concept of hype is masking distinct problems, it obscures the fact that different solutions may be needed. Conversely, studies on hype may be excluding instances of communication that are epistemically or ethically equivalent to instances of hype. Moreover, such confusion may prevent a successful normative account of why hype is problematic (either from an epistemological or ethical perspective). Finally, insofar as different conceptions of hype are operating in empirical studies on the prevalence of hype, it may be that such studies are not measuring the same things (which might lead to false conclusions about the prevalence of the problem).

The aim of this paper is to clarify the concept of hype in science communication. I will argue that hype is best understood as a particular kind of exaggeration, one that explicitly or implicitly exaggerates various positive or beneficial aspects of science in communicating with specific audiences. Insofar as hype involves an inappropriate exaggeration, it is ultimately a value-laden concept that depends on two sorts of value judgments: (1) judgments about the proper goals of science communication in specific contexts, and (2) judgments about what constitutes an “exaggeration” in that context. Because hype depends on value judgments, it is not likely to be accurately identified by empirical methods that ignore its normative nature. Moreover, understanding the sorts of value judgments involved is important for developing solutions that prevent or minimize hype. I will begin by considering different goals of science communication, which will help motivate and clarify the concept of hype. I will then argue that determining whether something is an instance of hype requires making certain value judgments. Finally, I will show what implications this has for identifying and preventing “hype” in science communication.

2. Goals of science communication

To clarify the concept of hype and what it intends to capture, it is first useful to consider the goals of science communication and what exactly we are trying to prevent in identifying something as “hype.” The aims of science communication may differ depending on the audience and context. One set of goals concerns empowering decision-makers (individual laypersons, policymakers, funding agencies, or even other scientists) to make well-grounded decisions with respect to science and technology (Priest 2013; see also Elliott 2020). Having accurate information is important to empowering decision-makers for intertwined epistemic and ethical reasons. Accurate information is necessary for grounding decisions about what research to fund and pursue, whether certain theories or interventions are well justified, what personal choices and actions one might take (e.g., with respect to one’s health), and in some cases public policy. Accuracy is also important insofar as it helps increase autonomy in decision-making and potentially helps facilitate trust between scientists and those who rely on them to guide their decisions.

Practices that are currently considered scientific misconduct, including fabrication or misrepresentation of data, plagiarism, or irresponsible authorship, typically involve making false claims in some way (about data or methodology, the source of ideas, or a contribution to a manuscript). Presumably these are problematic because (among other reasons) they hinder the reliability of information in ways that might also undermine trust between scientists and stakeholders.

Yet, for many audiences, accurate or reliable information is not all that is needed. There are several ways in which “accurate” information can nonetheless fail to result in well-grounded decision-making. When communicating with laypersons, a further goal is to present information in a manner that is widely accessible and understandable. Focusing narrowly on accuracy may lead science communicators to use precise, subtle, and overly technical language that fails to convey the significance of what is being communicated in a way that could rationally guide action (McKaughan and Elliott 2013; McKaughan and Elliott 2018, 206–7). This is problematic when communicating with laypersons, who presumably also want information that is accessible and understandable.

In addition, decision-makers are not just concerned about what we know so far, or the state of a particular field of science or technological innovation. They also want to be able to make predictions about what will happen in the future (given what we know so far). They want to know how safe or effective various interventions might be, what trade-offs they might involve, and whether there are promising alternatives. Investors and funding agencies need to know where to devote limited research dollars. Policymakers need to know what regulations might be required to minimize risks, maximize benefits, or ensure that the benefits of a science or technological product are fairly distributed. Thus, some scientific communication, by its very nature, is aimed at enabling reliable predictions of various sorts. While this obviously requires accurate and accessible information, it also requires information that is relevant to the audience’s decision-making (Rowan 1991; Weingart, Engels, and Pansegrau 2000). Thus, science communication must also often aim to provide predictive relevancy.

Not all “accurate” information provides the sort of relevant information that enables reliable predictions. For instance, during the first six months of the COVID-19 pandemic, the New York Times reported the total number of confirmed virus cases for every state and county, updated daily (Almukhtar et al. 2020). But this information by itself would not help either public officials or individuals assess how to minimize risks. Other information would be particularly relevant to these decisions, such as what percentage of the population in a given place is infected, how many cases are currently active, the positivity rate of testing (and whether it is increasing or decreasing), the number of active hospitalizations, and the hospital capacity of each county. Much of this information was conspicuously missing, not only from the New York Times, but also from some state and county public health websites. Thus, a third goal of science communication is to present decision-makers with predictively relevant information, or information that is relevant to the kinds of decisions they must make.

In addition, some communication aims to generate interest in and enthusiasm about science, a particular area of research, or a particular emerging technology. The development of science or technology requires that new researchers be drawn to a field and invest their time, and it requires resources for carrying out the research. Generating excitement and enthusiasm about a particular area of science or emerging technology may be necessary for securing the resources needed for additional research, development, or testing (Master and Resnik 2013, 2; Master and Ozdemir 2008; Schrage 2004). Generating interest in science is also important for increasing and maintaining scientific literacy within communities. We want a populace that is interested in being scientifically informed, not only so they can make responsible decisions themselves, but also so that they understand the options facing policymakers and why some may be more justified than others. Generating enthusiasm and provoking interest in science helps engage laypersons to be scientifically informed.

Finally, in many contexts, a goal of science communication is to facilitate trust between scientists and various publics, which may require being attentive to transparency and consistency, sharing data, or ensuring that scientific results are communicated broadly (Nisbet and Scheufele 2009; Weingart and Guenther 2016). In this sense, a goal of science communication is to enhance and facilitate trust in general, as opposed to just conveying particular information.

Thus, there are several goals of science communication, some of which may be more or less important depending on audience and context. To summarize, these are: (1) accuracy, (2) understandability, (3) predictive relevancy, (4) generating enthusiasm/interest, and (5) facilitating trust. While this is not an exhaustive list of the possible goals of science communication, it demonstrates that the goals of science communication are multiple and interrelated. Conveying accurate information may also be important to predictive relevancy and facilitating trust, but it may not be sufficient for achieving the other goals (McKaughan and Elliott 2018). Indeed, the goals of science communication can conflict. Accuracy may be at odds with understandability and may not be sufficient to generate excitement. Generating excitement may work against facilitating trust and, in some cases, may conflict with accuracy (at least narrowly understood). Even when it is important to generate excitement and interest in research and new technologies, there are recognized dangers when tangible results for the public fail to align with their expectations (Mason and Manzotti 2009). My aim in laying out these distinct goals of science communication is to help clarify different ways in which communication can go wrong by thwarting one or more of these goals and to pinpoint more precisely what kind of error “hype” might involve.

3. What is the category of “hype” trying to identify?

Some have treated hype as an instance of scientific misconduct or fraud (Begley 1992; Wilson 2019). For example, Wilson (2019) characterizes as hype the claims made by Elizabeth Holmes’s company Theranos, which purported to provide technology that could perform multiple diagnostic tests from a single drop of blood. While claims about the early promise of this technology were certainly exaggerated, it also seems like a rather clear case of scientific fraud, where claims about the state of the company’s technology and what it could do were completely fabricated. Scientific misconduct is already prohibited by existing regulations and codes of ethics; thus, if hype is essentially the same as something in this category, it is not clear that it requires additional policy changes. What, then, is the concern about hype that is distinct from more general concerns about scientific misconduct?

Instances of scientific misconduct, such as fabrication or falsification of data, compromise the integrity of science itself, such that data or findings are not reliable. The concept of hype, however, is intended to capture an error in the communication of science (even in cases where the underlying science itself is sound). While scientific misconduct involves a violation of scientific norms, hype specifically involves a violation of the norms of communication. Of course, scientists might violate norms of communication in ways that also violate scientific norms (as in the case of plagiarism or fabrication). Hype, however, can be perpetuated by nonscientists and can come in varying degrees (not all of which may rise to the level of scientific misconduct).

One motivation for the concept of hype is that cases of scientific misconduct generally involve some willful or deliberate falsehood, for instance, making false claims about data, methodology, one’s own work, or the work of others. Yet, the goals of science communication can be thwarted even without the deliberate intention to deceive or mislead anyone and even without making statements that are, strictly speaking, false.

Consider the following case. On June 2, 2016, the Washington Post published a story reporting the findings of a phase I single-arm clinical trial of stem cell therapy in stroke patients, under the headline “Stanford researchers ‘stunned’ by stem cell experiment that helped stroke patient walk” (Cha 2016). While the article itself was careful to point out that the study involved only eighteen patients and was designed only to test for safety (not efficacy in improving clinical outcomes for stroke patients), it emphasized that the study was creating “a buzz” among neuroscientists, that some patients had “significant” improvement in motor functions and speech, and that this could have implications for an “understanding of an array of disorders including traumatic brain injury, spinal cord injury, and Alzheimer’s” (Cha 2016). The lead author of the study, Gary Steinberg, urged caution about “overselling” the results, but was also quoted as saying that the patient did not just experience “minimal recovery like someone who couldn’t move a thumb now being able to wiggle it. It was much more meaningful. One seventy-one-year-old wheelchair-bound patient was walking again” (Cha 2016). Moreover, both the article in the Post and the actual study published in the journal Stroke emphasized the safety of the procedure. Yet, the study itself reported that six of the eighteen patients suffered “Serious Treatment-Emergent Adverse Effects,” including one patient who suffered a subdural hematoma that was classified as “definitely” the result of the treatment and another who experienced a “life threatening” seizure that was classified as “probably” the result of the procedure (Steinberg et al. 2016). Thus, the emphasis on the safety of the procedure may have been overly optimistic given these adverse outcomes and the small number of patients involved.

It is debatable whether the scientific article about the phase I clinical trial, or the press coverage of it, deliberately made false claims. One might think that the headline of the Post story was indeed false (because the experiment was incapable of producing the sort of data that would have warranted the claim that the stem cell therapy was what caused the stroke patient to regain mobility). Yet, it is not just the headline that is problematic, but also the ways in which the experiment, the findings, and the significance of those findings were represented. The reporting attributed far greater significance to the findings than the limited evidence supported. Outcomes were sensationalized in a way likely to lead readers, especially readers without expertise in stem cell research or even general familiarity with how clinical trials work, to make unsupported inferences about the efficacy or safety of this treatment. That is, the research was represented in a way that potentially deceives or misleads readers into thinking there is more or better evidence for this treatment than there is. It also obscured the potential risks of these therapies in ways that might prevent other scientists and policymakers from accurately identifying those risks in order to address or minimize them. Finally, it exaggerated the extent to which there is evidence of potential benefits in ways that might mislead patients, policymakers, and investors.

It is not clear, then, that this is best described as a case of scientific misconduct since the underlying study was presumably adequate for its modest purposes. Rather, it seems better understood as a case of irresponsible scientific communication, where both the researchers and the journalist exaggerated the findings and significance of the study in a way that invites laypersons to make unjustified inferences and potentially poor predictions about whether to invest in or pursue such treatments. The problem of hype is a problem about how science is communicated by a range of potential actors, including scientists, science journalists, academic institutions, funding agencies, and companies. In particular, it seems to occur when, even unintentionally, the other important goals of communication are sacrificed for the sake of generating enthusiasm about some particular research field or innovation.

While definitions of hype have varied slightly within the literature, a central feature of the concept is exaggeration, which may not strictly be false but has the potential to be misleading or deceptive. Caulfield et al. (2016) state that hype occurs when “the state of scientific progress, the degree of certainty in models or bench results, or the potential applications of research are exaggerated” (776). As this definition captures, there are several different sorts of claims and assumptions that might be exaggerated, including the promise of a research area or treatment, the certainty of specific models or methods, the statistical power of a particular study, the inferences drawn from research findings, translational timelines for treatments, or the potential applications of new techniques, interventions, or technologies.

Exaggeration can also result from the selective communication of some claims and the omission of other contextualizing facts or evidence. Rachul, Rasko, and Caulfield (2017) have examined the extent to which this has happened in reporting on platelet-rich plasma (PRP) injections. Despite the fact that there is little scientific evidence that PRP is effective in treating most acute or chronic musculoskeletal injuries, and even though it has only been approved for use in assisting surgical procedures involving bone grafts, there has been an unusually high number of news stories about the use of PRP by elite athletes and celebrities (Rachul, Rasko, and Caulfield 2017). While the majority of these articles did not claim that there was scientific evidence that PRP was effective in treating any sports-related injury, they also did not point out the lack of any evidence that it was effective (despite multiple clinical trials that had attempted to generate such evidence). Most of the articles did provide anecdotal evidence from celebrities and elite athletes testifying that it had helped them a great deal. Rachul et al. argue that framing PRP as a routine treatment, using anecdotal evidence from elite athletes (likely to invoke cognitive biases in the public), and failing to mention that existing evidence suggests PRP offers no benefit for a variety of sports-related injuries constitutes a sort of implicit hype. Thus, hype can be implicit, when an exaggeration (such as of the benefits of a treatment) is accomplished by omitting information relevant to assessing benefits and risks, or the state and strength of the evidence regarding efficacy and safety (Bubela et al. 2009). Implicit exaggeration occurs when the goal of providing relevant information is unduly suppressed in order to promote interest or enthusiasm. Moreover, this is likely to happen in cases where hype reinforces existing cognitive biases, such as the desire to believe things that align with our interests (wishful thinking). In such cases, what is communicated is likely to cause readers to make an unjustified inference based on the omission of contextualizing evidence.

Hype, then, occurs when science communication exaggerates, either explicitly or implicitly, the benefits of particular theories, interventions, or technological products, or the evidence for those benefits, in a way that either (a) obscures the risks presented by a technology (and thereby prevents us from developing it in a way attentive to those risks) or (b) invites unwarranted inferences about its promise or benefits given the evidence we have so far.

Yet, not all exaggeration is hype. As Nerlich (2013) has pointed out, some degree of exaggeration in science communication (whether by scientists, science writers, or institutions) is unavoidable. Scientific reasoning, as an inductive enterprise, always goes “beyond the current evidence” in some sense, drawing more general conclusions about what is likely to be the case despite some inevitable degree of uncertainty. Particularly when scientific results are being communicated to various publics, there is an expectation not just that the current “scientific facts” be reported, but that there is some analysis of the significance of these facts for various social interests, practices, and policies. As Nerlich argues, science communication involves saying something about the future when reporting on current science:

What promises for the future does a drug or other technology hold? When will it become available? What social or bodily ailments will it cure and when? What social impact will it have? How will the general public perceive its risks or benefits? Or, relating more to warning rather than promises: How bad will the future be under conditions of global warming? Is global warming dangerous, catastrophic or, indeed, inevitable? What social impact will it have? (Nerlich 2013, 44)

As noted in section 2, stakeholders want to be able to make predictions about which sciences or innovations are the most promising to invest in, and about the risks and benefits of different possible solutions, treatments, actions, or policies. Conducting these sorts of analyses may necessarily involve going “beyond” current scientific evidence to make predictions. Of course, we might think that there is a spectrum along which such conjecture may be more or less supported by the current evidence (Master and Resnik 2013, 4).

In addition, emphasizing some benefits, risks, or aspects of evidence over others is both desirable and unavoidable when evaluating particular technologies or interventions. From the choice of which stories or studies to publish (and which to neglect), to which benefits, risks, or uncertainties to discuss, which features of the evidence to highlight as salient, or which alternative explanations to explore, decisions must be made about which aspects of science or innovation to communicate. Science communication occurs in a practical context with limited space and resources, and often to audiences with varying degrees of expertise. Thus, the mere fact that some things are emphasized over others, or that complexities are neglected, or uncertainties underemphasized may not, by itself, constitute hype. This suggests that the subset of exaggeration we are interested in identifying is problematic or inappropriate exaggeration. What makes for “inappropriate” exaggeration will be considered in more detail in section 4.

Yet, it is not clear that even all inappropriate exaggeration should be considered “hype.” Exaggeration can be overly pessimistic, as well as overly optimistic. Just as it is possible to overstate the potential benefits, certainty, or significance of research findings, it is also possible to exaggerate the potential risks, uncertainties, or unknowns related to research.

Some STS scholars have been tempted to treat any case of exaggeration (even pessimistic exaggeration of risks, uncertainties, or significance) as a case of hype. For example, Caulfield (2016) has noted that ethicists have sometimes been guilty of exaggerating the risks of technologies in addition to exaggerating the benefits. Weingart (2017) also points to the ways in which hype might involve a pessimistic sort of exaggeration. In 1995, newswire services (on the basis of a press release put out by the American Heart Association) ran a story announcing that the “60 million people in the United States who were receiving calcium channel blockers for treatment of hypertension might be increasing their risk for a heart attack by 60%” (Mittler 1995, 72). While both the news story and the press release correctly reported findings of studies on a few calcium channel blockers, they failed to mention that the risks applied primarily to patients with existing heart disease, and that the studies in question had been restricted to short-acting versions of the treatment (e.g., verapamil, nifedipine) at higher-than-normal doses (Massie 1998). Nonetheless, many patients being treated for hypertension became alarmed by the reported risks and in some cases even stopped taking their medication (Horton 1995). Weingart (2017) gives this example as an instance of hype.

Should hype be understood as any sort of exaggeration, whether overly pessimistic or overly optimistic? Both optimistic and pessimistic exaggeration are likely to lead to similar consequences. Both may lead to false beliefs or unjustified inferences. Both can hinder reliable predictions or obscure relevant information that is important to well-grounded decision-making. While optimistic exaggeration may obscure the risks associated with particular technologies or interventions, pessimistic exaggeration may obscure potential benefits and overplay uncertainties. Both have the potential to undermine trust in science or science communicators. Yet, in the case of optimistic exaggeration, these goals of science communication are undermined for the sake of another: that of generating enthusiasm and interest. In the case of pessimistic exaggeration, the goal is not to generate enthusiasm, but to generate caution, skepticism, or fear. Sometimes generating caution can also be an appropriate goal of science communication but, as we shall see in the next section, whether enthusiasm or caution is warranted will depend on distinct value judgments and may involve different evidentiary standards. Thus, it may be useful to think of pessimistic exaggeration as an error closely analogous to hype, namely alarmism. Hype occurs when the promise or benefits of research, particular interventions, or new technologies are exaggerated. When the exaggeration is inappropriately pessimistic, exaggerating risks and uncertainties, we might conceive of the error as alarmism. Insofar as alarmism may be as problematic as hype, scientists and science journalists must not only avoid being overly optimistic in reporting results, but also ensure that they are not being overly pessimistic or cautious. And, as we shall see, striking that balance can be difficult.

4. Hype and value judgments: When is exaggeration “inappropriate”?

What constitutes inappropriate exaggeration? Whether or not an exaggeration is inappropriate depends on (1) the goals of communication in a particular context, and (2) how much evidence is sufficient to warrant particular claims or inferences. Each of these assessments involves value judgments.

First, exaggerations are inappropriate when they are likely to thwart the goals of science communication that are important in particular contexts. As we saw in section 2, the goals of science communication can depend on the audience and can sometimes conflict. With respect to hype, of particular concern are cases where exaggeration is likely to hinder an audience’s ability to make well-grounded predictions about the benefits or risks of particular research programs, technologies, behaviors, policies, or scientific interventions. This depends on the audience. When researchers submit grant proposals to funding agencies, they know that they are communicating with other experts in the field. In this context, the aim of generating enthusiasm and interest might have more weight, particularly because the experts evaluating the science behind the promises being made are in a position to assess whether those claims are reasonable or sufficiently supported. In grant proposals, it might be reasonable to spend more time emphasizing the promises and potential benefits of a research area. Thus, some exaggerations in this context might not be inappropriate, so long as accuracy and predictive relevance are not neglected.

On the other hand, communicating science to the public or policymakers might require another standard. Accuracy, predictive relevance, and facilitating trust are perhaps even more important because the stakes of thwarting these goals may be higher. When communicating with nonexperts, there is greater risk that both explicit and implicit exaggeration will go undetected. Moreover, various cognitive biases may increase the likelihood that individuals will make unwarranted inferences. For example, exaggeration can be particularly dangerous when it reinforces things that people “want to believe” or that align with their interests. In other words, it can exacerbate “wishful thinking.”

Thus, exaggerations are inappropriate when they hinder the goals of science communication that are most important in particular contexts. Which goals are “most important” depends not only on the interests of science communicators, but also on the needs and interests of particular audiences, as well as other ethical constraints that may hold. For example, scientists may have an ethical obligation to communicate about the public health risks of certain behaviors in ways that will prompt rapid collective action. Assessing which goals are most important involves evaluating and weighing these diverse needs and other ethical considerations. Exaggeration becomes inappropriate when it imposes undue risks on audiences by neglecting the goals of science communication that are important given the needs and ethical considerations at stake. This requires an ethical value judgment that may also involve a political value judgment about how to weigh the competing interests at stake when there is reasonable disagreement about risks. Andrew Schroeder (2020) has shown that being attentive to these different kinds of value judgments may be important for how we resolve such conflicts.

Exaggerations are also inappropriate when they are not sufficiently supported by evidence. Yet, whether a predictive claim or an inference is sufficiently supported also depends on a value judgment about whether the evidence is “good enough” to accept or believe a claim. As Heather Douglas (2009) has argued, judgments about whether there is sufficient evidence for a claim involve a certain degree of “inductive risk,” or the risk of being wrong. It is virtually never the case that the evidence for a claim makes it 100 percent certain. There is always a chance of accepting a claim that is false or rejecting a claim that is true. How much evidence is needed depends on what sorts of risks of error we think are acceptable. In turn, this is determined not only by the probability of error, but also by how bad the consequences of different types of error would be. In other words, it depends on value judgments about the risks of being wrong.
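
Douglas’s point can be given a simple decision-theoretic gloss (the formalization below is an illustrative rendering of inductive risk, not Douglas’s own notation). Suppose a communicator must decide whether to assert a claim H, such as “this treatment is safe,” on evidence that supports H to degree p. Let C_FP be the cost of asserting H when it is false, and C_FN the cost of withholding H when it is true. Asserting has lower expected cost than withholding only when

$$(1 - p)\, C_{\mathrm{FP}} < p\, C_{\mathrm{FN}} \quad \Longleftrightarrow \quad p > \frac{C_{\mathrm{FP}}}{C_{\mathrm{FP}} + C_{\mathrm{FN}}}.$$

The evidential threshold thus rises with the cost of a false positive: when touting an unsafe or ineffective intervention would be far more costly than underselling an effective one, much more evidence is required before enthusiasm is warranted. The case that follows illustrates a situation in which the cost of a false positive was plainly high while the evidential support was low.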

In some cases, the social or ethical consequences of communicating particular claims are so grave that a much higher bar for justification is required (Havstad 2020). Consider the communication that occurred around the promise of hydroxychloroquine in treating or preventing COVID-19. Based on a few small studies (Gautret et al. 2020; Chen et al. 2020), US President Trump, French President Emmanuel Macron, and Brazilian President Jair Bolsonaro, among others, touted the promise of the malaria drug hydroxychloroquine for both preventing COVID-19 infections and treating the disease (so as to avoid severe illness or death) (e.g., Ledford 2020; Warraich 2020; Cunningham and Firozi 2020). Trump even encouraged people to take the drug as a prophylactic and to request it from their doctors. Yet, both early studies involved fewer than forty patients, and neither was a randomized controlled clinical trial. The Chinese study had not even been peer reviewed. Neither study was designed to determine whether the drug was safe or effective in preventing infection. Presumably, leaders who touted the potential promise of this drug were trying to reassure members of the public. Moreover, they may have thought that hopeful enthusiasm about the treatment was warranted because it occurred at a moment when there were no proven treatments and little was known about what else might work. No doubt it seemed reasonable to hope that a drug already approved for treating other illnesses might at least be safe to try, particularly given what was known about its anti-inflammatory effects. Yet, the communication around hydroxychloroquine also neglected several potential risks and limitations of the evidence for this treatment in relation to COVID-19. The drug (and particularly the chloroquine version) was known to have harmful side effects. It was approved for conditions such as malaria and lupus because its proven efficacy in treating those diseases was found to be beneficial enough to outweigh any potential side effects. But whether it could have serious side effects in COVID-19 patients, or whether it would provide any benefit in preventing or treating serious complications from COVID-19 so as to offset those risks, was unknown. The evidence for the promise of this drug was insufficient, particularly given the risks. Partly as the result of hype from world leaders, many countries devoted significant resources to acquiring and testing the efficacy of hydroxychloroquine (Goodman and Giles 2020). As a result, patients who needed these drugs for their approved uses (treating lupus and rheumatoid arthritis) faced a shortage (Mehta, Salmon, and Ibrahim 2020). Communication that hydroxychloroquine was so promising also potentially derailed research into other potentially effective treatments (Ledford 2020). Patients with COVID-19 began refusing to enroll in clinical trials for other treatments because they wrongly assumed hydroxychloroquine would be the most effective. The perception that hydroxychloroquine was safe and effective in treating COVID-19 even led some patients with access to the drug to overdose (Chai et al. 2020). There simply was not sufficient evidence that either chloroquine or hydroxychloroquine was safe or effective in treating or preventing COVID-19, particularly given the risks of error. The risks of error in this case were high given both the probability of error and the potential risks imposed on the public. The encouragement to pursue this as a treatment was disproportionate to the state of the science, given the consequences of being wrong.

Thus, whether an instance of communication constitutes hype depends on two sorts of value judgments. First, it depends on a value judgment about the most important goals of communicating with the audience in a particular context. Exaggeration is inappropriate when it hinders the goals most important to that audience, such as predictive relevance or accuracy, for the sake of excitement, reassurance, or interest. Second, exaggeration is inappropriate to the extent that the implicit and explicit inferences it invites are insufficiently supported by the existing evidence, given the risks of being wrong. When the promises, benefits, or state of the evidence about a particular technology or intervention are communicated, communicators should consider how much evidence is needed to support those inferences given the risks of error. In both of these ways, hype imposes problematic risks on laypersons, either because it hinders important goals of communication or because it invites risky inferences that do not meet appropriate evidentiary standards.

As in the case of optimistic exaggeration, whether pessimistic exaggeration is justified depends on value judgments about the goals of communication and the risks of error. Consider Patricia Hunt’s research on Bisphenol A (BPA). In 1998, Hunt was a reproductive biologist trying to understand why egg quality deteriorates with age (Hinterhuer 2008). While conducting experiments with mice, she noticed a sharp, unexpected deterioration in the eggs of female mice in her control group. After significant sleuthing, she discovered that the mice’s plastic cages and water bottles had been accidentally washed with the wrong detergent, causing them to leach BPA, which in turn caused several chromosomal abnormalities in the mice and their offspring. Hunt, along with others, spoke out aggressively about the potential dangers of BPA and became a leader in warning the public and regulatory bodies (Hinterhuer 2008; see also vom Saal and Hughes 2005). Industry representatives accused them of being alarmists and contributing to unnecessary regulation. At the time Hunt made her discovery, her study was extremely limited, and many pointed to the potential problems of extrapolating conclusions about health risks to humans from mouse models. The plastics industry was quick to point to studies showing that BPA was only toxic to humans at very high doses (although these studies focused on endpoints of toxicity that did not include changes to cells and the reproductive system). Even today, industry representatives argue that there is no documented case of BPA harming humans (Hinterhuer 2008). Nonetheless, in the late 1990s and early 2000s, the use of BPA in plastics was widespread, including in items like baby bottles and sippy cups. It was in part because of her concern for babies and children, groups more likely to be affected by BPA in virtue of their size and development, that Hunt thought it was better to risk being alarmist than to risk serious and widespread health and reproductive problems.

Was Hunt guilty of alarmism, of inappropriately and pessimistically exaggerating her findings? On the one hand, it might seem that the conclusions she drew were only weakly supported by the evidence available at the time. On the other hand, the initial alarm sounding might seem reasonable if we think that, particularly in the case of babies and children, the possibility that BPA might cause grave harm is sufficient to warrant extreme caution in allowing its continued widespread use in products for infants and toddlers. Thus, whether or not the pessimistic exaggeration was appropriate in this case depends on a value judgment about how much evidence is sufficient for such claims. One might reasonably think that when there is a potential threat to public health, and particularly to the health of children, the evidentiary bar for claims of potential harm is lower. Making inferences about, for example, whether to regulate baby bottles might be warranted even if there is only limited evidence that they are harmful. This is also obviously related to questions about whether, in the case of toxicity or possible harms to humans or the environment, we should adopt a precautionary principle with respect to substances. It depends on whether we think the burden should be to sufficiently prove safety or to sufficiently prove harm. Insofar as we think even limited evidence of harm is sufficient reason to sound the alarm, the researchers in this case are not guilty of alarmism.

Thus, both hype and alarmism involve inappropriate exaggeration, but whether or not an exaggeration is inappropriate depends on value judgments about (1) the most important goals of science communication in a particular context and (2) what counts as “sufficient” evidence or reason for a claim or an omission, which depends partly on the consequences of error. In cases of hype and alarmism, inappropriate exaggeration occurs because the communication either implicitly or explicitly invites audiences to make insufficiently supported inferences, failing to give appropriate weight to the needs of those audiences and the risks imposed on them.

Understanding hype in this way also reveals why hype is problematic even if audiences are not always “taken in” by it when it occurs. Indeed, a few studies suggest that audiences at least believe they can identify and disregard hype (e.g., Chubb and Watermeyer 2017; Peddie et al. 2009). But even if no one is actually taken in, hype can undermine or erode warranted epistemic trust in scientists (or even scientific institutions). Epistemic trust is vital for interactions between scientists and various publics (Scheman 2001; Wilholt 2013; Grasswick 2010). Typically, trust is thought to involve epistemic reliability and competency, but it is also thought to have an ethical dimension involving traits such as honesty, integrity, and benevolence (Hardwig 1991; Resnik 2011). When researchers or other science communicators engage in hype, they signal that they are willing to impose risks on or disregard the needs of the public in ways that undermine their reliability, honesty, and benevolence (traits necessary for trust), even if the public does not believe the hype. Thus, hype can undermine warranted epistemic trust. That is, it gives people good reason to distrust the claims made by those who hype. At best, it sends the message that science communicators are not completely honest. At worst, it suggests that they do not have sufficient regard for the public good.

5. Implications for identifying and preventing hype

Now that the concept of hype has been clarified, it is important to see how this account bears on both empirical studies that purport to identify hype and policies aimed at preventing it. Insofar as they involve inappropriate exaggerations, both hype and alarmism are normative concepts that attribute recklessness to science communicators. They involve misleading science communication that is inappropriate or unjustified, and identifying when they occur depends on value judgments about how much evidence is sufficient for making particular claims, or what sorts of risks are acceptable. Understood in this way, it is not clear that some empirical studies aiming to examine the prevalence of hype succeed in identifying instances of hype.

Some empirical studies on hype examine written science communication (journal articles, press releases, and news reports) to measure how frequently the potential benefits of particular treatments, research areas, or new technologies are mentioned relative to their potential risks. Yet, this empirical approach tends to neglect the value judgments involved in categorizing certain instances as hype. A recent study by Marcon, Bieber, and Caulfield (2018) examined the portrayal of “personalized” and “precision” medicine (PM) in the North American news. They found that media publications that discussed PM between 2015 and 2016 overwhelmingly highlighted the potential benefits and expressed significant optimism about PM, while largely ignoring or failing to mention any of the numerous concerns that have been raised. They concluded that this coverage serves as an example of science hype.

While much of the press around PM may indeed constitute hype, it is important to note that it is not just a question of whether benefits are more frequently mentioned than risks. Whether these are instances of hype depends on whether the claims about benefits were insufficiently supported by the evidence, given the consequences of error. It also depends on whether the risks of PM are probable or supported by sufficient evidence given those consequences. Moreover, it depends on value judgments about the most important goals of communication around PM in the context of the popular press.

When empirical studies are not attentive to the normative dimensions of identifying hype, they may include instances that are not actually hype or fail to capture instances that are. For example, a similar study by Marcon et al. (2019) on the representation of CRISPR gene editing in the North American popular press found that while nearly all of the articles mentioned the benefits and promise of the technology, many (61.4%) also included discussion of the risks, concerns, and uncertainties surrounding CRISPR. The authors this time did not conclude that CRISPR was being hyped and instead argued that this “media portrayal of CRISPR might help facilitate more sophisticated and balanced policy responses, where the scientific potential of the technology is highlighted alongside broader social considerations” (2019, 1).

Should we conclude that CRISPR is not being hyped in the North American media? Whether communication constitutes hype depends not only on the extent to which risks and uncertainties are discussed but also on how they are discussed. For example, a high percentage of the articles (83%) discussed the use of CRISPR in the context of medicine and health, while far fewer discussed the benefits and risks related to animals (26.3%) or plants (20.2%). Yet, successful applications of CRISPR in medicine are likely to be further off than applications already happening in agriculture. Moreover, as the authors note, a significant number of the articles singled out “designer babies,” or the use of CRISPR for genetic enhancement of personality and nonhealth-related traits, as one of the main concerns. Yet, some of these concerns are speculative and not based on actual applications, which may present other, more pressing ethical concerns, such as the possibility of off-target effects, concerns about informed consent, or worries about the consequences of synthetic gene drives. Such coverage might thus encourage people to believe that the most immediate ethical concern about CRISPR is designer babies, which is dubious given the current state of the science. In some cases, the possibility of designer babies was even discussed as a potential benefit, while other articles portrayed it as a risk or as leading to worries about, for example, eugenics. Thus, even what constitutes a risk or a benefit may vary. Moreover, 78.6% of the articles examined depicted CRISPR as “an improvement” or as “better” than current alternatives in gene editing, which may also lead readers to think that the concerns and risks raised are not substantial or new. And even when articles discuss uncertainties and risks related to CRISPR, those risks are often normalized or discounted (e.g., “all new technology has risks”) while the benefits are not similarly tempered. This suggests that empirical methods for identifying hype need to be attentive to the normative dimensions of hype, and in particular to whether implicit or explicit exaggerations are inappropriate given the goals of the communication context, the nature of the risks and benefits discussed, and the evidentiary standards that apply given the consequences of error.

Identifying hype, and particularly implicit hype, requires determining whether the benefits and risks of a particular technology or intervention are explored in ways that are not likely to invite or encourage false inferences or unreliable predictions. This is no doubt a more complicated assessment. Whether or not something is a case of hype will depend on additional judgments about what constitutes a benefit or risk, which benefits and risks are worth discussing, how they should be discussed, and so on.

Yet even if hype is difficult to identify by purely empirical methods, this does not mean we should abandon efforts to cultivate responsible science communication that avoids or minimizes hype. In response to concerns about hype, the International Society for Stem Cell Research (ISSCR) adopted new guidelines aimed at addressing hype in science communication (Caulfield 2018). The new guidelines require researchers to “promote accurate, balanced, and responsive public representations of their work … to monitor how their work is represented in the public sphere … and to create information resources that do not underplay risks and uncertainties” (Daley et al. 2016, 28).

Yet the CRISPR example shows that mere “balance” in discussing the risks and benefits of a new technology is not sufficient to avoid hype. Ensuring that risks are not underplayed involves making value judgments about which risks and benefits are most important to decision-making, assessing how serious or imminent those risks are, and determining how much evidence is needed to tout particular benefits. Avoiding hype (and alarmism) requires training science communicators to be attentive to all the goals of science communication, including an assessment of which goals are most important to their audiences. It requires communicating not just accurate claims but also ensuring that information relevant to making accurate predictions and decisions is not omitted, and that the claims reported are supported by sufficient evidence given the risks of error. Because value judgments figure in each of these tasks, science communicators need to be attentive to those judgments and to the needs of their audiences. The goals of facilitating trust and providing predictively relevant information are particularly important when communicating with members of the public and policymakers.

While more work is needed to determine how hype might be avoided, some of these concerns might be addressed by the ISSCR’s urging that researchers strive to present responsive public representations of their work. That is, communication ought to be responsive to the needs and interests of the public or of the particular audiences being addressed. Insofar as hype (and alarmism) involve reckless science communication, such recklessness might be avoided by paying careful attention to audiences’ needs and goals and to the consequences of being wrong.

6. Conclusion

I have attempted to provide a more precise conceptual analysis of hype, distinguishing it from scientific misconduct. While scientific misconduct involves a transgression of the norms of research and typically involves a deliberate falsehood, hype involves a transgression of the norms of science communication and can be committed unintentionally by a range of science communicators. Specifically, hype is an inappropriately optimistic exaggeration, which can be communicated explicitly or implicitly. Exaggerations become inappropriate when they impose risks on audiences by sacrificing the goals most important in a particular communication context for the sake of generating enthusiasm or interest. I have also identified the ways in which hype is closely related to alarmism in science communication, which involves an inappropriately pessimistic exaggeration. While both hype and alarmism involve exaggeration, not all exaggeration is inappropriate or constitutes a communicative error. Whether exaggerations are inappropriate depends on value judgments about (1) what the most important goals of communication in a particular context are, and (2) whether there is sufficient evidence given the risks of error. This also suggests that hype may come in degrees (some instances may be more reckless than others), and there may even be disagreement over whether a particular instance is a case of hype. While it is tempting to think that preventing either hype or alarmism requires a sort of “balance,” the kind of balance required is one that adjudicates the potentially conflicting goals of science communication, not merely a balance in the discussion of the risks and benefits associated with, for example, a particular technology. While there are challenges in identifying and addressing hype, it is important to recognize this phenomenon as a significant error in science communication that we should strive to avoid.

Hype has the potential to undermine warranted trust in science communicators, denying individuals the resources they need not only to evaluate scientific claims but also to make important decisions about their health and about public policy.

Acknowledgments

This manuscript has benefited significantly from the thoughtful and thorough feedback of several people who heard or read earlier versions of this material, including Ingo Brigandt, Kevin Elliott, Joyce Havstad, Inmaculada de Melo-Martín, Andrew Schroeder, two anonymous reviewers, and other participants in the “Engaging with Science, Values, and Society” workshop at the University of Alberta. I am grateful for their helpful comments and suggestions.

Kristen Intemann is a professor of philosophy in the Department of History and Philosophy and director of the Center for Science, Technology, Ethics and Society at Montana State University. Her research focuses on values in science, science communication, inclusivity in STEM, and research ethics.

References

Almukhtar, Sarah, Aufrichtig, Aliza, Bloch, Matthew, Calderone, Julia, Collins, Keith, Conlen, Matthew, Cook, Lindsey, et al. 2020. “Coronavirus in the U.S.: Latest Map and Case Count.” New York Times. https://www.nytimes.com/interactive/2020/us/coronavirus-us-cases.html.
Begley, Sharon. 1992. “Fraud and Hype in Science.” Bulletin of Science, Technology & Society 12 (2): 69–71.
Besley, John C., and Tanner, Andrea H. 2011. “What Science Communication Scholars Think about Training Scientists to Communicate.” Science Communication 33 (2): 239–63.
Bratton, Luke, Adams, Rachel C., Challenger, Aimée, Boivin, Jacky, Bott, Lewis, Chambers, Christopher D., and Sumner, Petroc. 2019. “Science News and Academic Press Releases: A Replication Study.” Wellcome Open Research. https://doi.org/10.12688/wellcomeopenres.15486.2.
Brennen, J. Scott, Schulz, Anne, Howard, Philip N., and Nielsen, Rasmus Kleis. 2019. “Industry, Experts, or Industry Experts? Academic Sourcing in News Coverage of AI.” Reuters Institute for the Study of Journalism. https://reutersinstitute.politics.ox.ac.uk/industry-experts-or-industry-experts-academic-sourcing-news-coverage-ai.
Brown, Nik. 2003. “Hope against Hype—Accountability in Biopasts, Presents and Futures.” Science Studies 16 (2): 3–21.
Bubela, Tania M. 2006. “Science Communication in Transition: Genomics Hype, Public Engagement, Education and Commercialization Pressures.” Clinical Genetics 70 (5): 445–50.
Bubela, Tania M., and Caulfield, Timothy. 2004. “Do the Print Media ‘Hype’ Genetics Research? A Comparison of Newspaper Stories and Peer-Reviewed Research Papers.” Canadian Medical Association Journal 170 (9): 1399–1407.
Bubela, Tania, Nisbet, Matthew C., Borchelt, Rick, Brunger, Fern, Critchley, Christine, Einsiedel, Edna, Geller, Gail, et al. 2009. “Science Communication Reconsidered.” Nature Biotechnology 27 (6): 514–18.
Caulfield, Timothy. 2004. “Biotechnology and the Popular Press: Hype and the Selling of Science.” Trends in Biotechnology 22 (7): 337–39.
Caulfield, Timothy. 2010. “Stem Cell Research and Economic Promises.” Journal of Law, Medicine and Ethics 38 (2): 303–13.
Caulfield, Timothy. 2016. “Ethics Hype?” Hastings Center Report 46 (5): 13–16.
Caulfield, Timothy. 2018. “Spinning the Genome: Why Science Hype Matters.” Perspectives in Biology and Medicine 61 (4): 560–71.
Caulfield, Timothy, and Bubela, Tania M. 2004. “Media Representations of Genetic Discoveries: Hype in the Headlines.” Health Law Review 12 (2): 53–61.
Caulfield, Timothy, and Condit, Celeste. 2012. “Science and the Sources of Hype.” Public Health Genomics 15 (3–4): 209–17.
Caulfield, Timothy, and McGuire, Amy. 2012. “Athletes’ Use of Unproven Stem Cell Therapies: Adding to Inappropriate Media Hype?” Molecular Therapy 20 (9): 1656–58.
Caulfield, Timothy, Rachul, Christen, Zarzeczny, Amy, and Walter, Henrik. 2010. “Mapping the Coverage of Neuroimaging Research.” SCRIPTed 7: 421–28.
Caulfield, Timothy, Sipp, Douglas, Murry, Charles E., Daley, George Q., and Kimmelman, Jonathan. 2016. “Confronting Stem Cell Hype.” Science 352 (6287): 776–77.
Cha, Ariana Eunjung. 2016. “Stanford Researchers ‘Stunned’ by Stem Cell Experiment That Helped Stroke Patients Walk.” Washington Post, June 2. https://www.washingtonpost.com/news/to-your-health/wp/2016/06/02/stanford-researchers-stunned-by-stem-cell-experiment-that-helped-stroke-patient-walk.
Chai, Peter R., Ferro, Enrico G., Kirshenbaum, James M., Hayes, Brian D., Culbreth, Sarah E., Boyer, Edward W., and Erickson, Timothy B. 2020. “Intentional Hydroxychloroquine Overdose Treated with High-Dose Diazepam: An Increasing Concern in the COVID-19 Pandemic.” Journal of Medical Toxicology 16: 314–20.
Chen, Zhaowei, Hu, Jijia, Jiang, Shan, Han, Shoumeng, Yan, Dandan, Zhuang, Ruhong, and Ben, Hu. 2020. “Efficacy of Hydroxychloroquine in Patients with COVID-19: Results of a Randomized Clinical Trial.” https://doi.org/10.1101/2020.03.22.20040758.
Chubb, Jennifer, and Watermeyer, Richard. 2017. “Artifice or Integrity in the Marketization of Research Impact? Investigating the Moral Economy of (Pathways to) Impact Statements within Research Funding Proposals in the UK and Australia.” Studies in Higher Education 42 (12): 2360–72.
Cunningham, Paige Winfield, and Firozi, Paulina. 2020. “The Health 202: The Hydroxychloroquine Hype Is Over.” Washington Post, June 16. https://www.washingtonpost.com/news/powerpost/paloma/the-health-202/2020/06/16/the-health-202-the-hydroxychloroquine-hype-is-over/5ee7b5f888e0fa32f823ac92.
Cunningham-Burley, Sarah. 2006. “Public Knowledge and Public Trust.” Community Genetics 9 (3): 204–10.
Daley, George Q., Hyun, Insoo, Apperley, Jane F., Barker, Roger A., Benvenisty, Nissim, Bredenoord, Annelien L., Breuer, Christopher K., Caulfield, Timothy, Cedars, Marcelle I., Frey-Vasconcells, Joyce, Heslop, Helen E., Jin, Ying, Lee, Richard T., McCabe, Christopher, Munsie, Megan, Murry, Charles E., Piantadosi, Steven, Rao, Mahendra, and Kimmelman, Jonathan. 2016. “Setting Global Standards for Stem Cell Research and Clinical Translation: The 2016 ISSCR Guidelines.” Stem Cell Reports 6 (6): 787–97.
Douglas, Heather E. 2009. Science, Policy, and the Value-Free Ideal. Pittsburgh, PA: University of Pittsburgh Press.
Downey, Robin, and Geransar, Rose. 2008. “Stem Cell Research, Publics’ and Stakeholder Views.” Health Law Review 16 (2): 69–85.
Elliott, Kevin C. 2020. “A Taxonomy of Transparency in Science.” Canadian Journal of Philosophy. https://doi.org/10.1017/can.2020.21.
Evans, James P., Meslin, Eric M., Marteau, Theresa M., and Caulfield, Timothy. 2011. “Deflating the Genomics Bubble.” Science 331 (6019): 861–62.
Garza, Cutberto, Stover, Patrick J., Ohlhorst, Sarah D., Field, Martha S., Steinbrook, Robert, Rowe, Sylvia, Woteki, Catherine, and Campbell, Eric. 2019. “Best Practices in Nutrition Science to Earn and Keep the Public’s Trust.” American Journal of Clinical Nutrition 109 (1): 225–43.
Gautret, Philippe, Lagier, Jean-Christophe, Parola, Philippe, Hoang, Van Thuan, Meddeb, Line, Mailhe, Morgane, Doudier, Barbara, Courjon, Johan, et al. 2020. “Hydroxychloroquine and Azithromycin as a Treatment of COVID-19: Results of an Open-Label Non-randomized Clinical Trial.” International Journal of Antimicrobial Agents. https://doi.org/10.1016/j.ijantimicag.2020.105949.
Goodman, Jack, and Giles, Christopher. 2020. “Coronavirus and Hydroxychloroquine: What Do We Know?” BBC News. https://www.bbc.com/news/51980731.
Grasswick, Heidi E. 2010. “Scientific and Lay Communities: Earning Epistemic Trust through Knowledge Sharing.” Synthese 177 (3): 387–409.
Hardwig, John. 1991. “The Role of Trust in Knowledge.” Journal of Philosophy 88 (12): 693–708.
Havstad, Joyce C. 2020. “Archaic Hominin Genetics and Amplified Inductive Risk.” Canadian Journal of Philosophy.
Hinterhuer, Adam. 2008. “Just How Harmful Are Bisphenol A Plastics?” Scientific American. https://www.scientificamerican.com/article/just-how-harmful-are-bisphenol-a-plastics.
Hopgood, Adrian A. 2003. “Artificial Intelligence: Hype or Reality?” Computer 36 (5): 24–28.
Horton, Richard. 1995. “Spinning the Risks and Benefits of Calcium Antagonists.” Lancet 346: 586–87.
Kamenova, Kalina, and Caulfield, Timothy. 2015. “Stem Cell Hype: Media Portrayal of Therapy Translation.” Science Translational Medicine 7 (278): 278ps4.
Ledford, Heidi. 2020. “Chloroquine Hype Is Derailing the Search for Coronavirus Treatments.” Nature. https://www.nature.com/articles/d41586-020-01165-3.
Marcon, Alessandro R., Bieber, Mark, and Caulfield, Timothy. 2018. “Representing a ‘Revolution’: How the Popular Press Has Portrayed Personalized Medicine.” Genetics in Medicine 20 (9): 950–56.
Marcon, Alessandro, Master, Zubin, Ravitsky, Vardit, and Caulfield, Timothy. 2019. “CRISPR in the North American Popular Press.” Genetics in Medicine 21 (10): 2184–89.
Mason, Chris, and Manzotti, Elisa. 2009. “Induced Pluripotent Stem Cells: An Emerging Technology Platform and the Gartner Hype Cycle.” Regenerative Medicine 4: 329–31.
Massie, Barry M. 1998. “The Safety of Calcium-Channel Blockers.” Clinical Cardiology 21 (12, suppl. 2): II12–17.
Master, Zubin, and Ozdemir, Vural. 2008. “Selling Translational Research: Is Science a Value-Neutral Autonomous Enterprise?” American Journal of Bioethics 8 (3): 52–54.
Master, Zubin, and Resnik, David B. 2013. “Hype and Public Trust in Science.” Science and Engineering Ethics 19 (2): 321–35.
Maynard, Andrew D. 2007. “Nanotechnology: The Next Big Thing, or Much Ado about Nothing?” Annals of Occupational Hygiene 51: 1–12.
McKaughan, Daniel J., and Elliott, Kevin C. 2013. “Backtracking and the Ethics of Framing: Lessons from Voles and Vasopressin.” Accountability in Research 20 (3): 206–26.
McKaughan, Daniel J., and Elliott, Kevin C. 2018. “Just the Facts or Expert Opinion? The Backtracking Approach to Socially Responsible Science Communication.” In Ethics and Practice in Science Communication, edited by Priest, Susanna, Goodwin, Jean, and Dahlstrom, Michael F., 197–213. Chicago: University of Chicago Press.
Medvecky, Fabien, and Leach, Joan. 2019. An Ethics of Science Communication. New York, NY: Springer Nature.
Mehta, Bella, Salmon, Jane, and Ibrahim, Said. 2020. “Potential Shortages of Hydroxychloroquine for Patients with Lupus During the Coronavirus Disease 2019 Pandemic.” JAMA Health Forum. https://jamanetwork.com/channels/health-forum/fullarticle/2764607.
Millar, Neil, Salager-Meyer, Francoise, and Budgell, Brian. 2019. “‘It Is Important to Reinforce the Importance of …’: ‘Hype’ in Reports of Randomized Controlled Trials.” English for Specific Purposes 54: 139–51.
Mittler, Brant S. 1995. “Dangerous Medicine.” Forbes MediaCritic 36: 72–78.
Nerlich, Brigitte. 2013. “Moderation Impossible? On Hype, Honesty and Trust in the Context of Modern Academic Life.” The Sociological Review 61: 43–57.
Nisbet, Matthew C., and Scheufele, Dietram A. 2009. “What’s Next for Science Communication? Promising Directions and Lingering Distractions.” American Journal of Botany 96 (10): 1767–78.
Partridge, Bradley J., Bell, Stephanie K., Lucke, Jayne C., Yeates, Sarah, and Hall, Wayne D. 2011. “Smart Drugs ‘As Common as Coffee’: Media Hype about Neuroenhancement.” PLoS ONE 6 (11): e28416.
Peddie, Valerie L., Porter, Marya, Counsell, Carl, Caie, Liu, Pearson, Donald, and Bhattacharya, Siladitya. 2009. “‘Not Taken in by Media Hype’: How Potential Donors, Recipients and Members of the General Public Perceive Stem Cell Research.” Human Reproduction 24 (5): 1106–13.
Petersen, Alan. 2009. “The Ethics of Expectations: Biobanks and the Promise of Personalised Medicine.” Monash Bioethics Review 28: 05.1–05.12.
Priest, Susanna. 2013. “Can Strategic and Democratic Goals Coexist in Communicating Science? Nanotechnology as a Case Study in the Ethics of Science Communication and the Need for ‘Critical’ Science Literacy.” In Ethical Issues in Science Communication: A Theory-Based Approach. https://doi.org/10.31274/sciencecommunication-180809-45.
Rachul, Christen, Rasko, John E., and Caulfield, Timothy. 2017. “Implicit Hype? Representations of Platelet Rich Plasma in the News Media.” PLoS ONE 12 (8): e0182496.
Resnik, David B. 2011. “Scientific Research and the Public Trust.” Science and Engineering Ethics 17 (3): 399–409.
Rinaldi, Andrea. 2012. “To Hype, or Not To(o) Hype.” EMBO Reports 13 (4): 303–7.
Rowan, Katherine E. 1991. “Goals, Obstacles, and Strategies in Risk Communication: A Problem-Solving Approach to Improving Communication about Risks.” Journal of Applied Communication Research 19 (4): 300–29.
Scheman, Naomi. 2001. “Epistemology Resuscitated: Objectivity and Trustworthiness.” In Engendering Rationalities, edited by Tuana, Nancy, and Morgen, Sandra, 23–52. Albany, NY: SUNY Press.
Schrage, Michael. 2004. “Great Expectations.” Technology Review 107 (8): 21.
Schroeder, S. Andrew. 2020. “Values in Science: Ethical vs. Political Approaches.” Canadian Journal of Philosophy. https://doi.org/10.1017/can.2020.41.
Steinberg, Gary K., Kondziolka, Douglas, Wechsler, Lawrence R., Dade Lunsford, L., Coburn, Maria L., Billigen, Julia B., Kim, Anthony S., et al. 2016. “Clinical Outcomes of Transplanted Modified Bone Marrow–Derived Mesenchymal Stem Cells in Stroke: A Phase 1/2a Study.” Stroke 47 (7): 1817–24.
Toole, Ciara, Zarzeczny, Amy, and Caulfield, Timothy. 2012. “Research Ethics Challenges in Neuroimaging Research: A Canadian Perspective.” In International Neurolaw, edited by Spranger, Tade Matthias, 89–101. Heidelberg: Springer.
Vallor, Shannon, and Greene, Brian. 2018. “Best Ethical Practices in Technology.” Markkula Center for Applied Ethics at Santa Clara University. https://www.scu.edu/ethics-in-technology-practice/best-ethical-practices-in-technology.
vom Saal, Frederick S., and Hughes, Claude. 2005. “An Extensive New Literature Concerning Low-Dose Effects of Bisphenol A Shows the Need for a New Risk Assessment.” Environmental Health Perspectives 113 (8): 926–33.
Warraich, Haider J. 2020. “The Risks of Trump’s Hydroxychloroquine Hype.” New York Times, May 19. https://www.nytimes.com/2020/05/19/opinion/trump-hydroxychloroquine-coronavirus.html.
Weingart, Peter. 2017. “Is There a Hype Problem in Science? If So, How Is It Addressed?” In The Oxford Handbook of the Science of Science Communication, edited by Jamieson, Kathleen Hall, Kahan, Dan, and Scheufele, Dietram A., 111–31. New York: Oxford University Press.
Weingart, Peter, Engels, Anita, and Pansegrau, Petra. 2000. “Risks of Communication: Discourses on Climate Change in Science, Politics, and the Mass Media.” Public Understanding of Science 9 (3): 261–84.
Weingart, Peter, and Guenther, Lars. 2016. “Science Communication and the Issue of Trust.” Journal of Science Communication 15 (5): C01.
Wilholt, Torsten. 2013. “Epistemic Trust in Science.” British Journal for the Philosophy of Science 64 (2): 233–53.
Wilson, Steven Ray. 2019. “Theranos: What Did Analytical Chemists Have to Say about the Hype?” Journal of Separation Science 42 (11): 1960–61.