To evaluate how study characteristics and methodological aspects compare based on the presence or absence of industry funding, Hughes et al. conducted a systematic survey of randomized controlled trials (RCTs) published in three major medical journals. The authors found that industry-funded RCTs were more likely to be blinded, to post results on a clinical trials registration database (ClinicalTrials.gov), and to accrue high citation counts [1]. Conversely, industry-funded trials had smaller sample sizes and more frequently used placebo as the comparator, used a surrogate as the primary outcome, and reported positive results.
Prompted by the replication crisis in psychology, this paper examines the replicability of studies in design research. It highlights the importance of replicating studies to ensure the robustness of research results and examines whether the description in a publication is sufficient to enable replication. To this end, the publication of a reference study was analysed and a replication study was conducted. The design of the replication study appears similar to that of the reference study, but the results differ. Possible reasons for the differences and implications for replication studies are discussed.
Instrumental variable (IV) strategies are widely used in political science to establish causal relationships, but the identifying assumptions required by an IV design are demanding, and assessing their validity remains challenging. In this paper, we replicate 67 articles published in three top political science journals from 2010 to 2022 and identify several concerning patterns. First, researchers often overestimate the strength of their instruments due to non-i.i.d. error structures such as clustering. Second, IV estimates are often highly uncertain, and the commonly used t-test for two-stage-least-squares (2SLS) estimates frequently underestimates this uncertainty. Third, in most replicated studies, 2SLS estimates are significantly larger in magnitude than ordinary-least-squares estimates, and their absolute ratio is inversely related to the strength of the instrument in observational studies—a pattern not observed in experimental ones—suggesting potential violations of unconfoundedness or the exclusion restriction in the former. We provide a checklist and software to help researchers avoid these pitfalls and improve their practice.
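As a minimal, self-contained illustration of the quantities discussed in the abstract above, the sketch below computes a just-identified 2SLS estimate and a classical first-stage F statistic by hand with NumPy. The simulated variables (z, d, y) and their coefficients are hypothetical, not the paper's data; with clustered or otherwise non-i.i.d. errors this classical F can overstate instrument strength, which is the first pattern the authors document, so a cluster-robust (effective) F would be reported instead.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulated data: instrument z shifts the endogenous treatment d,
# while an unobserved confounder u affects both d and the outcome y.
n = 1_000
z = rng.normal(size=n)                     # instrument
u = rng.normal(size=n)                     # unobserved confounder
d = 0.5 * z + u + rng.normal(size=n)       # endogenous treatment
y = 1.0 * d + u + rng.normal(size=n)       # outcome (true effect = 1.0)

X = np.column_stack([np.ones(n), d])       # second-stage regressors
Z = np.column_stack([np.ones(n), z])       # instruments (incl. intercept)

# Just-identified 2SLS: beta = (Z'X)^{-1} Z'y
beta_2sls = np.linalg.solve(Z.T @ X, Z.T @ y)

# First-stage regression d ~ z and its classical (homoskedastic) F statistic.
gamma = np.linalg.lstsq(Z, d, rcond=None)[0]
rss = np.sum((d - Z @ gamma) ** 2)
tss = np.sum((d - d.mean()) ** 2)
f_classical = ((tss - rss) / 1) / (rss / (n - 2))

print("2SLS estimate of the treatment effect:", round(beta_2sls[1], 3))
print("Classical first-stage F:", round(f_classical, 1))
```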
To test for publication bias with alprazolam, the most widely prescribed benzodiazepine, by comparing its efficacy for panic disorder using trial results from (1) the published literature and (2) the US Food and Drug Administration (FDA).
Methods
From FDA reviews, we included data from all phase 2/3 efficacy trials of alprazolam extended-release (Xanax XR) for the treatment of panic disorder. A search for matching publications was performed using PubMed and Google Scholar. Publication bias was examined by comparing: (1) overall trial results (positive or not) according to the FDA v. corresponding publications; (2) effect size (Hedges's g) based on FDA data v. published data.
Results
The FDA review showed that five trials were conducted, only one of which (20%) was positive. Of the four not-positive trials, two were published conveying a positive outcome; the other two were not published. Thus, according to the published literature, three trials were conducted and all (100%) were positive. Alprazolam's effect size calculated using FDA data was 0.33 (CI95% 0.07–0.60) v. 0.47 (CI95% 0.30–0.65) using published data, an increase of 0.14, or 42%.
Conclusions
Publication bias substantially inflates the apparent efficacy of alprazolam XR.
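For readers unfamiliar with the effect-size metric used above, the following sketch shows how Hedges's g is computed for a single two-arm trial and how the reported 42% relative inflation follows from the two pooled estimates in the abstract. The trial means, standard deviations, and sample sizes in the example are hypothetical.

```python
import math

def hedges_g(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Bias-corrected standardized mean difference for one two-arm trial."""
    sp = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sp                 # Cohen's d
    j = 1 - 3 / (4 * (n_t + n_c) - 9)          # small-sample correction factor
    return j * d

# Hypothetical treatment and placebo arms.
print(round(hedges_g(6.0, 4.5, 3.0, 3.2, 100, 100), 2))

# Relative inflation of the pooled estimate, as reported in the abstract:
fda, published = 0.33, 0.47
print(f"inflation: {(published - fda) / fda:.0%}")   # ~42%
```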
Objectives:
Assess the extent to which the clinical trial registration and reporting policies of 25 of the world’s largest public and philanthropic medical research funders meet best practice benchmarks as stipulated by the 2017 WHO Joint Statement, and document changes in the policies and monitoring systems of 19 European funders over the past year.
Design, Setting, Participants:
Cross-sectional study, based on assessments of each funder’s publicly available documentation plus validation of results by funders. Our cohort includes 25 of the largest medical research funders in Europe, Oceania, South Asia, and Canada.
Interventions:
Scoring all 25 funders using an 11-item assessment tool based on WHO best practice benchmarks, grouped into three primary categories: trial registries, academic publication, and monitoring, plus validation of results by funders.
Main outcome measures:
How many of the 11 WHO best practice items each of the 25 funders has put into place, and changes in the performance of 19 previously assessed funders over the preceding year.
Results:
The 25 funders we assessed had put into place an average of 5/11 (49%) WHO best practices. Only 6/25 funders (24%) took the principal investigator’s past reporting record into account during grant application reviews. Funders’ performance varied widely, from 0/11 to 11/11 WHO best practices adopted. Of the 19 funders for which 2021 baseline data were available (2), 10/19 (53%) had strengthened their policies over the preceding year.
Conclusions:
Most medical research funders need to do more to curb research waste and publication bias by strengthening their clinical trial policies.
It has long been known that the preferential publication of statistically significant results (publication bias) may lead to incorrect estimates of the true effects being investigated. Even though other research areas (e.g., medicine, biology) are aware of the problem and have identified strong publication biases, researchers in judgment and decision making (JDM) largely ignore it. We reanalyzed two recent meta-analyses in this area. Both showed evidence of publication biases that may have led to a substantial overestimation of the true effects they investigated. A review of additional JDM meta-analyses shows that most conducted no or only insufficient analyses of publication bias. However, given our results and the rarity of non-significant effects in the literature, we suspect that such biases occur quite often. These findings suggest that (a) conclusions based on meta-analyses without reported tests of publication bias should be interpreted with caution and (b) publication policies and standard research practices should be revised to overcome the problem.
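A common way to probe for the publication bias described above is an Egger-type funnel-plot asymmetry test: study-level effects standardized by their standard errors are regressed on precision, and an intercept far from zero signals small-study effects consistent with publication bias. The sketch below runs this test on hypothetical study-level data; it illustrates the general technique and is not a reconstruction of the reanalyses in the abstract.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes and standard errors from a meta-analysis.
effects = np.array([0.52, 0.41, 0.35, 0.60, 0.18, 0.15, 0.48, 0.30])
ses     = np.array([0.30, 0.25, 0.20, 0.35, 0.10, 0.08, 0.28, 0.15])

# Egger's regression: standard normal deviates regressed on precision.
snd = effects / ses
precision = 1.0 / ses
fit = sm.OLS(snd, sm.add_constant(precision)).fit()

print("Egger intercept:", round(fit.params[0], 2), "p =", round(fit.pvalues[0], 3))
```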
Are difficult decisions best made after a momentary diversion of thought? Previous research addressing this important question has yielded dozens of experiments in which participants were asked to choose the best of several options (e.g., cars or apartments) either after conscious deliberation, or after a momentary diversion of thought induced by an unrelated task. The results of these studies were mixed. Some found that participants who had first performed the unrelated task were more likely to choose the best option, whereas others found no evidence for this so-called unconscious thought advantage (UTA). The current study examined two accounts of this inconsistency in previous findings. According to the reliability account, the UTA does not exist and previous reports of this effect concern nothing but spurious effects obtained with an unreliable paradigm. In contrast, the moderator account proposes that the UTA is a real effect that occurs only when certain conditions are met in the choice task. To test these accounts, we conducted a meta-analysis and a large-scale replication study (N = 399) that met the conditions deemed optimal for replicating the UTA. Consistent with the reliability account, the large-scale replication study yielded no evidence for the UTA, and the meta-analysis showed that previous reports of the UTA were confined to underpowered studies that used relatively small sample sizes. Furthermore, the results of the large-scale study also dispelled the recent suggestion that the UTA might be gender-specific. Accordingly, we conclude that there exists no reliable support for the claim that a momentary diversion of thought leads to better decision making than a period of deliberation.
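The claim that earlier UTA reports came from underpowered studies can be made concrete with a routine power calculation. The sketch below, assuming a small-to-medium standardized effect of d = 0.3, uses statsmodels to compute the per-group sample size needed for 80% power in a two-sample comparison and the power actually achieved with a typical small sample; the effect size and sample figures are illustrative assumptions, not values taken from the study.

```python
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# Sample size per group needed to detect d = 0.3 at alpha = 0.05 with 80% power.
n_needed = power_calc.solve_power(effect_size=0.3, alpha=0.05, power=0.8)
print(f"per-group n for 80% power: {n_needed:.0f}")      # roughly 175

# Power actually achieved with ~30 participants per group.
achieved = power_calc.solve_power(effect_size=0.3, alpha=0.05, nobs1=30)
print(f"power with n = 30 per group: {achieved:.2f}")     # roughly 0.2
```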
Publication bias has the potential to adversely impact clinical decision making and patient health if alternative decisions would have been made had there been complete publication of evidence.
Methods
The objective of our analysis was to determine whether earlier publication of the complete evidence on rosiglitazone’s risk of myocardial infarction (MI) would have changed clinical decision making at an earlier point in time. We tested several methods of adjustment for publication bias to assess the impact of potential time delays in identifying the MI effect. We performed a cumulative meta-analysis (CMA) for both published studies (published-only data set) and all studies performed (comprehensive data set). We then created an adjusted data set by applying existing methods of adjustment for publication bias (Harbord regression, Peters’ regression, and the nonparametric trim-and-fill method) to the published-only data set. Finally, we compared the time to the decision threshold for each data set using CMA.
Results
Although the published-only and comprehensive data sets did not yield notably different final summary estimates (OR = 1.4, 95% confidence interval (CI) 0.95–2.05, v. OR = 1.42, 95% CI 1.03–1.97), the comprehensive data set reached the decision threshold 36 months earlier than the published-only data set. None of the three adjustment methods tested showed a differential time to the decision threshold compared with the published-only data set.
Conclusions
Complete access to studies capturing MI risk for rosiglitazone would have led to the evidence reaching a clinically meaningful decision threshold 3 years earlier.
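The decision-threshold comparison above rests on cumulative meta-analysis: trials are added in chronological order and the pooled estimate is recomputed after each addition. Below is a minimal fixed-effect (inverse-variance) version applied to hypothetical log odds ratios; it illustrates the mechanics only and does not reproduce the rosiglitazone data sets or the adjustment methods.

```python
import numpy as np

def cumulative_meta(log_ors, ses):
    """Fixed-effect inverse-variance pooling after each successive study."""
    w = 1.0 / np.asarray(ses, dtype=float) ** 2
    y = np.asarray(log_ors, dtype=float)
    pooled = np.cumsum(w * y) / np.cumsum(w)
    pooled_se = 1.0 / np.sqrt(np.cumsum(w))
    return pooled, pooled_se

# Hypothetical trials in chronological order (log odds ratios and their SEs).
log_ors = [0.55, 0.20, 0.42, 0.35, 0.30]
ses     = [0.40, 0.35, 0.30, 0.20, 0.15]

est, se = cumulative_meta(log_ors, ses)
for i, (e, s) in enumerate(zip(est, se), start=1):
    lo, hi = np.exp(e - 1.96 * s), np.exp(e + 1.96 * s)
    print(f"after {i} trials: OR = {np.exp(e):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```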
A critical barrier to generating cumulative knowledge in political science and related disciplines is the inability of researchers to observe the results from the full set of research designs that scholars have conceptualized, implemented, and analyzed. For a variety of reasons, studies that produce null findings are especially likely to be unobserved, creating biases in publicly accessible research. While several approaches have been suggested to overcome this problem, none has yet proven adequate. We call for the establishment of a new discipline-wide norm in which scholars post short “null results reports” online that summarize their research designs, findings, and interpretations. To address the inevitable incentive problems that earlier proposals for reform were unable to overcome, we argue that decentralized research communities can spur the broader disciplinary norm change that would benefit scientific advancement. To facilitate this change, we offer a template for these reports that incorporates evaluation of the possible explanations for the null findings, including statistical power, measurement strategy, implementation issues, spillover/contamination, and flaws in theoretical priors. We illustrate the template’s utility with two experimental studies focused on the naturalization of immigrants in the United States and attitudes toward Syrian refugees in Jordan.
Behavioural studies aim to discover scientific truths. True facts should be replicable, meaning that the same conclusions are reached if the same data are analysed, if the same methods are applied to collect a new dataset and if different methodological approaches are used to address the same general hypothesis. The replication crisis refers to a widespread failure to replicate published findings in the biological and social sciences. The causes of the replication crisis include the presence of uncontrolled moderators of behaviour, low statistical power and dubious research practices. Various sources of information can help to distinguish good research from bad. An evidence pyramid ranks different study types according to the quality of evidence produced. The Open Science movement encourages replication, preregistration and transparency over materials, methods and data, all of which should improve the quality of science and the likelihood that findings will be replicated.
Publication bias and p-hacking are threats to the scientific credibility of experiments. If positive results are more likely to be published than null results conditional on the quality of the study design, then effect sizes in meta-analyses will be inflated and false positives will be more likely. Publication bias also has other corrosive effects as it creates incentives to engage in questionable research practices such as p-hacking. How can these issues be addressed such that the credibility of experiments is improved in political science? This chapter discusses seven specific solutions, which can be enforced by both formal institutions and informal norms.
One of the strongest findings across the sciences is that publication bias occurs. Of particular note is a “file drawer bias” where statistically significant results are privileged over nonsignificant results. Recognition of this bias, along with increased calls for “open science,” has led to an emphasis on replication studies. Yet, few have explored publication bias and its consequences in replication studies. We offer a model of the publication process involving an initial study and a replication. We use the model to describe three types of publication biases: (1) file drawer bias, (2) a “repeat study” bias against the publication of replication studies, and (3) a “gotcha bias” where replication results that run contrary to a prior study are more likely to be published. We estimate the model’s parameters with a vignette experiment conducted with political science professors teaching at Ph.D.-granting institutions in the United States. We find evidence of all three types of bias, although those explicitly involving replication studies are notably smaller. This bodes well for the replication movement. That said, the aggregation of all of these biases increases the number of false positives in a literature. We conclude by discussing a path for future work on publication biases.
Federal agencies invest taxpayer dollars every year in conservation programs that are focused on improving a suite of ecosystem services produced on private lands. A better understanding of the public benefits generated by federal conservation programs could help improve governmental efficiency and economic welfare by providing science-based evidence for use in policy decision-making regarding targeting of federal conservation investments. Of specific concern here are conservation investments made by the U.S. Department of Agriculture (USDA). While previous research has shown that efficiency gains are possible using cost-benefit analysis for targeting conservation investments, agency-wide implementation of this approach by policy makers has been constrained by the limited availability of location-specific information regarding conservation benefits. Cost-effective opportunities for integrating location-specific ecosystem service valuation research with USDA conservation decision-making include: (1) institutionalizing funding of comparable studies suitable for benefit transfer, (2) utilizing non-traditional data sources for research complementing benefit transfer, and (3) creating a state-of-the-art program for developing and communicating research in ecosystem service valuation exemplifying the highest standards of scientific conduct.
Systematic reviews in mental health have become useful tools for health professionals in view of the massive amount and heterogeneous nature of biomedical information available today. It is therefore important to use a very strict methodology in systematic reviews, both to determine the risk of bias in the studies evaluated and to avoid bias when generalizing conclusions from the reviews. One bias which may affect the generalization of results is publication bias, which is determined by the nature and direction of the study results. To control or minimize this type of bias, the authors of systematic reviews undertake comprehensive searches of medical databases and expand their searches to grey literature (material which is not formally published). This paper attempts to show the consequences (and risks) of generalizing the role of grey literature in the control of publication bias, as was proposed in a recent systematic review. By repeating the analyses for the same outcome from three different systematic reviews that included both published and grey literature, we show that conflating grey literature with publication bias may affect the results of a given meta-analysis.
Meta-analysis is a well-established approach to integrating research findings, with a long history in the sciences and in psychology in particular. Its use in summarizing research findings has special significance given increasing concerns about scientific replicability, but it has other important uses as well, such as integrating information across studies to examine models that might otherwise be too difficult to study in a single sample. This chapter discusses different forms and purposes of meta-analyses, typical elements of meta-analyses, and basic statistical and analytic issues that arise, such as choice of meta-analytic model and different sources of variability and bias in estimates. The chapter closes with discussion of emerging issues in meta-analysis and directions for future research.
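One of the basic analytic choices the chapter refers to is between fixed-effect and random-effects models. The sketch below implements the classic DerSimonian-Laird random-effects estimator on hypothetical effect sizes, purely to show how the between-study variance (tau-squared) enters the pooling weights.

```python
import numpy as np

def dersimonian_laird(effects, ses):
    """Random-effects pooled estimate using the DerSimonian-Laird tau^2."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(ses, dtype=float) ** 2
    w = 1.0 / v                                   # fixed-effect weights
    mu_fe = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - mu_fe) ** 2)              # Cochran's Q
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c)       # between-study variance
    w_re = 1.0 / (v + tau2)                       # random-effects weights
    mu_re = np.sum(w_re * y) / np.sum(w_re)
    se_re = np.sqrt(1.0 / np.sum(w_re))
    return mu_re, se_re, tau2

# Hypothetical standardized mean differences and their standard errors.
mu, se, tau2 = dersimonian_laird([0.30, 0.10, 0.45, 0.25, 0.05],
                                 [0.12, 0.15, 0.20, 0.10, 0.18])
print(f"pooled effect = {mu:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```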
This chapter reviews a broad emerging literature on research transparency and reproducibility. This recent literature finds that problems with publication bias, specification searching, and an inability to reproduce empirical findings create clear deviations from the scientific pillars of openness and transparency of research. These failings can also result in incorrect inferences.
Amidst rising concern about publication bias, pre-registration and results-blind review have grown rapidly in use. Yet discussion of both the problem of publication bias and of potential solutions has been remarkably narrow in scope: publication bias has been understood largely as a problem afflicting quantitative studies, while pre-registration and results-blind review have been almost exclusively applied to experimental or otherwise prospective research. This chapter examines the potential contributions of pre-registration and results-blind review to qualitative and quantitative retrospective research. First, the chapter provides an empirical assessment of the degree of publication bias in qualitative political science research. Second, it elaborates a general analytic framework for evaluating the feasibility and utility of pre-registration and results-blind review for confirmatory studies. Third, through a review of published studies, the chapter demonstrates that much observational—and, especially, qualitative—political science research displays features that would make for credible pre-registration. The chapter concludes that pre-registration and results-blind review have the potential to enhance the validity of confirmatory research across a range of empirical methods, while elevating exploratory work by making it harder to disguise discovery as testing.
Disseminating our findings is part of the scientific process, so that others know what we found. Not making our results available leads to duplication of effort, because other researchers don’t know we did the work. Publication bias arises when researchers don’t publish findings because they are non-significant. We may need to publish to advance our careers, but this is not the purpose of scientific articles, and confusing these two aims can lead to questionable research practices. This chapter goes through the process of submitting a manuscript to a peer-reviewed journal. Peer review involves the scrutiny and evaluation of our work by experts. I begin with how to choose a journal and things to consider before you submit, then I explain the cover letter, submission, and the review process. I explain the editor’s decision, what to do if your manuscript is rejected, and how to revise and resubmit your manuscript. Finally, I cover what happens after your manuscript is accepted.
Introduction: Non-publication of trial findings results in research waste and compromises medical evidence and the safety of interventions in child health. The objectives of this study were to replicate, compare and contrast the findings of a previous study (Klassen et al., 2002) to determine the impact of ethical and editorial mandates to register and publish findings. Methods: Abstracts accepted to the Pediatric Academic Societies meetings (2008–2011) were screened in duplicate to identify Phase III RCTs enrolling pediatric populations. Subsequent publication was ascertained through a search of electronic databases. Study internal validity was measured using the Cochrane Risk of Bias tool and the Jadad Scale, and key variables (e.g., trial design, study stage) were extracted. Pearson chi-square tests, t-tests and Wilcoxon rank sum tests were used to examine associations between variables and publication status. Logistic regression was used to assess predictors of publication, log-rank tests to assess time to publication, and rank correlation and Egger regression to assess publication bias. Results: Compared to our previous study, fewer studies remained unpublished (27.9% vs 40.9%, p = 0.007). Abstracts with larger sample sizes (p = 0.01) and those registered in ClinicalTrials.gov were more likely to be published (p < 0.0001). There were no differences in quality measures/risk of bias or in preference for positive results (p = 0.36) between published and unpublished studies. Mean time to publication was 26.5 months, and published manuscripts appeared most frequently in Pediatrics, the Journal of Pediatrics, and Pediatric Emergency Care. The funnel plot (p = 0.04) suggests a reduced but ongoing presence of publication bias among published studies. Overall, we observed a reduction in publication bias and in the preference for positive findings, and an increase in study size and publication rates over time. Conclusion: Despite heightened safeguards and editorial policy changes in recent decades, publication bias remains commonplace and presents a threat to assessing the efficacy and effectiveness of interventions in child health. Our results suggest a promising trend towards a reduction in publication bias over time and positive impacts of trial registration. Further efforts are needed to ensure the entirety of evidence can be accessed when assessing treatment effectiveness.
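As a rough illustration of the kind of publication-predictor analysis described in the Methods above, the sketch below fits a logistic regression of publication status on simulated trial size and registration status with statsmodels. All variable names and data are hypothetical and are not drawn from the study.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical abstract-level data: log sample size, registration status, published?
n = 300
log_n = rng.normal(loc=4.5, scale=1.0, size=n)          # log of trial sample size
registered = rng.integers(0, 2, size=n)                 # 1 if on ClinicalTrials.gov
logit_p = -4.0 + 0.6 * log_n + 1.0 * registered         # assumed data-generating model
published = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))

X = sm.add_constant(np.column_stack([log_n, registered]))
fit = sm.Logit(published, X).fit(disp=False)
print(fit.params)   # larger and registered trials come out more likely to be published
```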