
Industry Funding by Itself is Not a Reason for Rating Down Studies for Risk of Bias

Published online by Cambridge University Press:  16 December 2024

João Pedro Lima
Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada

Arnav Agarwal
Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, McMaster University, Hamilton, Ontario, Canada; MAGIC Evidence Ecosystem Foundation, Oslo, Norway

Gordon H. Guyatt
Department of Health Research Methods, Evidence and Impact, McMaster University, Hamilton, Ontario, Canada; Department of Medicine, McMaster University, Hamilton, Ontario, Canada; MAGIC Evidence Ecosystem Foundation, Oslo, Norway


Type
Commentary
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of American Society of Law, Medicine & Ethics

To evaluate how study characteristics and methodological aspects compare based on the presence or absence of industry funding, Hughes et al. conducted a systematic survey of randomized controlled trials (RCTs) published in three major medical journals. The authors found that industry-funded RCTs were more likely to be blinded, to post results on a clinical trials registration database (ClinicalTrials.gov), and to accrue high citation counts.1 Conversely, industry-funded trials had smaller sample sizes and more frequently used placebo as the comparator, used a surrogate as their primary outcome, and had positive results.

Some individuals and teams conducting systematic reviews believe that industry funding per se always puts such trials at high risk of bias and that one should rate down certainty in them accordingly. We believe this is misguided. Indeed, industry-sponsored trials are typically far better funded than investigator-initiated RCTs, allowing much greater scrutiny of issues that include concealment of randomization, integrity of blinding procedures, adherence to protocol, and measures to minimize loss to follow-up.

Industry sponsors typically engage expensive contract research organizations that allow far more detailed oversight of procedures at individual centers than trials funded through public agencies. This is particularly true of RCTs conducted to achieve regulatory approval, in which industry sponsors are aware of the close scrutiny that regulatory agencies are likely to apply.

These considerations suggest that industry-funded trials should fare as well as, or better than, investigator-initiated RCTs in mitigating risk of bias. Indeed, this is the case. A prior systematic survey comparing industry-funded and non-industry-funded studies demonstrated similar performance in sequence generation, allocation concealment, follow-up, and selective outcome reporting; further, the study demonstrated that industry-funded trials are more often protected against bias through blinding procedures.2 A second survey, of RCTs of drug therapies for rheumatoid arthritis, found that industry-funded trials were more frequently blinded, provided an adequate description of participant flow, and incorporated an intention-to-treat analysis.3 These two surveys substantiate the inference that resource and oversight considerations will, in general, result in industry-funded studies doing as well as or better than investigator-initiated studies on several aspects of methodological rigor.

However, numerous evidence syntheses, including the linked study by Hughes and colleagues, have found that industry-sponsored studies are associated with disproportionately positive findings relative to studies funded by other sources.4 If it isn’t risk of bias, what explains the phenomenon?

Vested intellectual and financial interests pose the largest threat to the trustworthiness of industry-funded studies. For-profit organizations may be inherently prone to designing studies of their own therapeutic interventions and interpreting the findings overly optimistically, and to over-emphasizing the importance of those findings. Indeed, the evidence suggests that this is very much the case: industry-funded studies are more likely to be enthusiastic about treatments under investigation.5

Numerous strategies may produce this phenomenon, referred to as “spin,” and lead to mischaracterized or misleading results. These include inappropriate interpretation of results for a given study design (e.g., interpreting non-significant results in a superiority trial as showing treatments are “equally good”), inappropriate extrapolations unsupported by the results, selective reporting (including omission of non-significant outcomes and over-emphasis of significant but less important surrogate outcomes and secondary analyses), misleading or unduly favorable data presentation, inappropriately undermining certainty in results, and shifting the framing of the abstract or conclusions toward a different objective.6 Moreover, some methodological features can also favor investigated interventions, including the use of suboptimal comparators (such as placebo where therapies with proven efficacy exist, or suboptimal active controls)7 and composite outcomes (in which outcomes with variable patient importance, incidence, and treatment effects are combined).8 Meta-epidemiological studies have consistently demonstrated that these design and interpretation features, which extend beyond standard risk of bias criteria, lead to disproportionately favorable results in industry-funded trials relative to their non-industry-funded counterparts, and to overly sanguine interpretations of results when drawing conclusions.9


Such inappropriate conclusions have led — perhaps understandably — to claims that evidence-based medicine has been hijacked to serve the agendas of conflicted parties, including for-profit organizations, rather than primarily focusing on scientific inquiry.10 Indeed, evidence-based medicine must defend the integrity of the research that provides the basis for clinical decision-making. Fortunately, leaders in evidence-based medicine have worked hard to do so.11

Publication bias represents a second mechanism of evidence distortion whereby study results influence their likelihood of being published. Studies with positive or statistically significant findings are more likely to be published than their “negative study” counterparts, leading to overestimated treatment effects and threatening the validity and overall certainty in a body of evidence.12 Selective publication has been evident in industry-sponsored research for over two decades.13

When faced with an industry-funded trial (or any study in which vested intellectual or financial interests may be present), readers — clinicians, patients, and fellow researchers — should maintain a healthy skepticism to avoid being led astray by misleading claims and biased inferences.14 In addition to considering methodological quality based on traditional risk of bias criteria,15 readers should: (1) focus on the methods and results of studies to guide their interpretations, rather than relying on the authors’ interpretation presented in the discussion; (2) beware of faulty comparators and composite end-points; (3) exercise caution when interpreting small treatment effects and subgroup analyses;16 and (4) ascertain the extent to which spin may influence results and, when considering studies together, the extent to which positive treatment effects may be over-represented (or negative or non-significant effects under-represented). Alternatively, pre-appraised evidence resources such as the ACP Journal Club,17 trustworthy practice guidelines such as the BMJ Rapid Recommendations series,18 and other evidence-based point-of-care clinical resources such as UpToDate19 and DynaMed20 offer balanced and methodologically sound interpretations of published studies.21 Such resources may be particularly helpful to those with no training in health research methodology.

Note

The authors have no conflicts of interest to disclose.

References

1. Devereaux, P.J. et al., “Physician Interpretations and Textbook Definitions of Blinding Terminology in Randomized Controlled Trials,” JAMA 285, no. 15 (2001): 2000–3.
2. Lundh, A. et al., “Industry Sponsorship and Research Outcome,” Cochrane Database of Systematic Reviews 2, no. 2 (2017): MR000033.
3. Khan, N.A. et al., “Association of Industry Funding with the Outcome and Quality of Randomized Controlled Trials of Drug Therapy for Rheumatoid Arthritis,” Arthritis & Rheumatology 64, no. 7 (2012): 2059–67.
4. Id.; Bhandari, M. et al., “Association Between Industry Funding and Statistically Significant Pro-Industry Findings in Medical and Surgical Randomized Trials,” Canadian Medical Association Journal 170, no. 4 (2004): 477–80; J. Yaphe et al., “The Association Between Funding by Commercial Interests and Study Outcome in Randomized Controlled Drug Trials,” Family Practice 18, no. 6 (2001): 565–8; B. Als-Nielsen et al., “Association of Funding and Conclusions in Randomized Drug Trials: A Reflection of Treatment Effect or Adverse Events?” JAMA 290, no. 7 (2003): 921–8; J.E. Bekelman, Y. Li, and C.P. Gross, “Scope and Impact of Financial Conflicts of Interest in Biomedical Research: A Systematic Review,” JAMA 289, no. 4 (2003): 454–65; J. Lexchin et al., “Pharmaceutical Industry Sponsorship and Research Outcome and Quality: Systematic Review,” BMJ 326, no. 7400 (2003): 1167–70; R.A. Davidson, “Source of Funding and Outcome of Clinical Trials,” Journal of General Internal Medicine 1, no. 3 (1986): 155–8; B. Djulbegovic et al., “The Uncertainty Principle and Industry-Sponsored Research,” Lancet 356, no. 9230 (2000): 635–8.
5. B. Als-Nielsen, supra note 4; Caulfield, T. and Ogbogu, U., “The Commercialization of University-Based Research: Balancing Risks and Benefits,” BMC Medical Ethics 16, no. 1 (2015): 70; I. Boutron et al., “Reporting and Interpretation of Randomized Controlled Trials with Statistically Nonsignificant Results for Primary Outcomes,” JAMA 303, no. 20 (2010): 2058–64; L.L. Kjaergard and B. Als-Nielsen, “Association Between Competing Interests and Authors’ Conclusions: Epidemiological Study of Randomised Clinical Trials Published in the BMJ,” BMJ 325, no. 7358 (2002): 249.
6. Boutron, id.; Chiu, K., Grundy, Q., and Bero, L., “‘Spin’ in Published Biomedical Literature: A Methodological Systematic Review,” PLOS Biology 15, no. 9 (2017): e2002173.
7. Safer, D.J., “Design and Reporting Modifications in Industry-Sponsored Comparative Psychopharmacology Trials,” The Journal of Nervous and Mental Disease 190, no. 9 (2002): 583–92; H. Mann and B. Djulbegovic, “Comparator Bias: Why Comparisons Must Address Genuine Uncertainties,” Journal of the Royal Society of Medicine 106, no. 1 (2013): 30–3.
8. Montori, V.M. et al., “Users’ Guide to Detecting Misleading Claims in Clinical Research Reports,” BMJ 329, no. 7474 (2004): 1093–6; V.M. Montori et al., “Validity of Composite End Points in Clinical Trials,” BMJ 330, no. 7491 (2005): 594–6.
9. Lundh, supra note 2; Lexchin, supra note 4.
10. Ioannidis, J.P., “Evidence-Based Medicine Has Been Hijacked: A Report to David Sackett,” Journal of Clinical Epidemiology 73 (2016): 82–6.
11. Montori, “Users’ Guide,” supra note 8.
12. Guyatt, G.H. et al., “GRADE Guidelines: 5. Rating the Quality of Evidence — Publication Bias,” Journal of Clinical Epidemiology 64, no. 12 (2011): 1277–82; S. Hopewell et al., “Publication Bias in Clinical Trials Due to Statistical Significance or Direction of Trial Results,” Cochrane Database of Systematic Reviews 2009, no. 1 (2009): MR000006; I. Chalmers, “Underreporting Research is Scientific Misconduct,” JAMA 263, no. 10 (1990): 1405–8.
13. Melander, H. et al., “Evidence B(i)ased Medicine — Selective Reporting From Studies Sponsored by Pharmaceutical Industry: Review of Studies in New Drug Applications,” BMJ 326, no. 7400 (2003): 1171–3; E.H. Turner et al., “Selective Publication of Antidepressant Trials and its Influence on Apparent Efficacy,” New England Journal of Medicine 358, no. 3 (2008): 252–60; C.W. Jones et al., “Non-Publication of Large Randomized Clinical Trials: Cross Sectional Analysis,” BMJ 347 (2013): f6104.
14. Montori, “Users’ Guide,” supra note 8.
15. Sterne, J.A.C. et al., “RoB 2: A Revised Tool for Assessing Risk of Bias in Randomised Trials,” BMJ 366 (2019): l4898; J.P. Higgins et al., “The Cochrane Collaboration’s Tool for Assessing Risk of Bias in Randomised Trials,” BMJ 343 (2011): d5928.
16. Montori, “Users’ Guide,” supra note 8.
17. Haynes, R.B., “ACP Journal Club: The Best New Evidence for Patient Care,” ACP Journal Club 148, no. 3 (2008): 2.
18. Siemieniuk, R.A. et al., “Introduction to BMJ Rapid Recommendations,” BMJ 354 (2016): i5191; E. Guerra-Farfan et al., “Clinical Practice Guidelines: The Good, The Bad, and The Ugly,” Injury 54, no. 3 (supp) (2023): S26–S9.
19. See Wolters Kluwer, “UpToDate,” available at <https://www.wolterskluwer.com/en-ca/solutions/uptodate> (last visited September 12, 2024).
20. See DynaMed, available at <https://www.dynamed.com/> (last visited September 12, 2024).
21. Agoritsas, T. et al., Users’ Guides to the Medical Literature: A Manual for Evidence-Based Clinical Practice, Chapter 5: Finding Current Best Evidence (Chicago: McGraw-Hill, 2014).