
PEER-REVIEWED JOURNAL EDITORS’ VIEWS ON REAL-WORLD EVIDENCE

Published online by Cambridge University Press:  08 February 2018

Elisabeth M. Oehrlein, University of Maryland School of Pharmacy
Jennifer S. Graff, National Pharmaceutical Council
Eleanor M. Perfetto, University of Maryland School of Pharmacy; National Health Council
C. Daniel Mullins, University of Maryland School of Pharmacy
Robert W. Dubois, National Pharmaceutical Council
Chinenye Anyanwu, University of Maryland School of Pharmacy
Eberechukwu Onukwugha, University of Maryland School of Pharmacy

Abstract

Objectives: Peer-reviewed publication is a critical step in the translation and dissemination of research results into clinical practice guidelines, health technology assessment (HTA) and payment policies, and clinical care. The objective of this study was to examine the current views of journal editors regarding: (i) the value of real-world evidence (RWE) and how it compares with other types of studies; and (ii) the education and/or resources journal editors provide to their peer reviewers, or perceive as needed, for authors, reviewers, and editors related to RWE.

Methods: Journal editors’ views on the value of RWE and editorial procedures for RWE manuscripts were obtained through telephone interviews, a survey, and in-person, roundtable discussion.

Results: In total, seventy-nine journals were approached, resulting in fifteen telephone interviews, seventeen survey responses, and eight roundtable participants. RWE was considered valuable by all interviewed editors (n = 15). Characteristics of high-quality RWE manuscripts included novelty/relevance, rigorous methodology, and alignment of data with the research question. Editors experience challenges finding peer reviewers; however, these challenges persist across all study designs. Journals generally do not provide guidance, assistance, or training for reviewers, including for RWE studies. Health policy/health services research (HSR) editors were more likely than specialty or general medicine editors to participate in this study, potentially indicating that HSR researchers are more comfortable with or interested in RWE.

Conclusions: Editors report favorable views of RWE studies, provided the studies examine important questions and are methodologically rigorous. Improving peer-review processes across all study designs has the potential to improve the evidence base for decision making, including HTA.

Type
Policies
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
Copyright © Cambridge University Press 2018

Spurred by a proliferation of data sources and published guidelines supporting the conduct of rigorous real-world studies, the past decade has likely seen increasing submissions of real-world evidence (RWE) manuscripts to peer-reviewed journals. Sherman and colleagues (1) define RWE as “information on health care that is derived from multiple sources outside typical clinical research settings, including electronic health records (EHRs), claims and billing data, product and disease registries, and data gathered through personal devices and health applications.” Emerging real-world data (RWD) sources are often used to quickly answer research questions that may never have been studied in randomized trials, assess different outcomes than those studied in trials, use routinely collected information among more generalizable populations, and allow for analyses of subpopulations (2;3). Recognizing the advantages of assessing patient experiences to evaluate effectiveness, safety, and quality of care, several large-scale investments in RWD infrastructure are under way (4–6).

However, infrastructure investments alone are insufficient for RWE to impact clinical practice. Research results are typically translated and disseminated through clinical practice guidelines, reimbursement and payment policies, and other healthcare policies and protocols (7). These mechanisms often rely heavily upon evidence from peer-reviewed publications to inform their recommendations. Thus, journal editors serve as gatekeepers to the translation of evidence, including RWE, into practice. Skeptics of RWE studies assert that lack of randomization may produce results prone to error or with larger treatment effects than seen in randomized controlled trials (RCTs), and that use should therefore be limited (8). However, reviews comparing treatment effects between RCTs and RWE studies found few differences based on randomization alone (9–11).

Given improvements in data collection and in statistical methods to address potential differences between comparison groups, many believe that, when conducted with high-quality data and methods, RWE can contribute to the body of best available evidence (11;12). Proponents of RWE have criticized reliance solely on RCTs to inform evidence-based medicine, contending that if evidence emerging from RCTs is generalizable only to relatively small, homogeneous populations, then providers will be unable to apply evidence-based approaches when treating the majority of their patients (13). For example, a recent assessment of Cochrane Reviews found that 44 percent concluded there is “insufficient evidence for clinical practice” (14).

Health technology assessment (HTA) bodies likewise rely heavily upon evidence from peer-reviewed publications to inform their recommendations. Because journal editors’ attitudes likely influence which study designs make it through the peer-review process and on to publication, their perceptions of, and possible biases toward, RWE are important to understand, as these may impact dissemination and, therefore, uptake of research findings. The objective of this study was to examine the current views of journal editors regarding: (i) the value of RWE and how it compares with other types of studies such as RCTs; and (ii) the education and/or resources journal editors provide to their peer reviewers, or perceive as needed, for authors, reviewers, and editors related to RWE.

METHODS

A mixed methods approach was used to gather peer-reviewed journal editors’ perceptions by means of telephone interview, survey, and in-person roundtable (RT) discussion. The protocol for this study was given exempt status by the University of Maryland, Baltimore Institutional Review Board.

Journal and Editor Selection and Recruitment

Journals were selected and sampled from three topic areas: general medicine (GM), specialty medicine (SM), and health policy/health services research (HSR), to provide broad representation of relevant journal content and readership audiences. Journals were identified using Thomson Reuters InCites, and inclusion and exclusion criteria were informed by prior studies (15–17). Journals were eligible if they were indexed in Medline, had an impact factor ≥2, and were published in English. Journals were ineligible if their instructions to authors stated that only lab-based (bench) research is accepted, or if the journal was solely dedicated to research on technology, education, or informatics. Contact information for editors of eligible journals was located on each journal's Web site, and editors were contacted by email and telephone in descending order of impact factor, with replacement following refusal to participate or nonresponse after three contact attempts (see Figure 1).

Figure 1. Journal editor recruitment process.

A standard definition of RWE and RWD was used to ensure consistency (see Supplementary Material 1).

Data Collection and Analysis

Semi-structured interviews were conducted using a 20-item interview guide (Supplementary Material 1). Before the interviews, the guide was pilot-tested with a convenience sample of four editors of peer-reviewed journals (topic areas: health economics, outcomes research, or health policy) who were not invited to participate in the study. Interviews were conducted between August 2015 and January 2016 and transcribed verbatim, and a thematic analysis was performed using NVivo software.

Editors were asked their reaction to the working definitions of RWE and RWD presented and about the value of RWE. To assess manuscript volume, review, and acceptance policies for RWE studies compared with other study designs (e.g., RCTs), a nine-item survey was developed and deployed using SurveyMonkey software (Supplementary Table 1). Editors participating in the interviews, as well as those who could not participate, were asked to respond to the survey. The email survey was open for participation from August 2015 to February 2016.

Finally, to identify key challenges to the review and publication of RWE manuscripts, all interview participants were invited to join a roundtable discussion on March 9, 2016. At the in-person roundtable, the interview and survey findings were presented, discussed, and examined. All interview participants received an honorarium, and roundtable attendees received travel reimbursement.

RESULTS

Sample

In total, seventy-nine journals (n = 30 with an Editor-in-Chief/Co-Editor-in-Chief based outside the United States) were contacted, resulting in a final sample of fifteen journal editors who completed both the telephone interview and the survey, plus two additional journals that completed the survey only (n = 17 surveys; 7/17 had an international Editor-in-Chief or Co-Editor-in-Chief) (see Table 1). Among the seventy-nine journals contacted, thirty responded but declined to participate, while thirty-four did not respond at all. Reasons offered by those who specifically declined included: busy schedule/no time (n = 11); conflicts with journal policy (n = 2); not relevant to journal subject (n = 4); no reason provided (n = 12); and referred to RWE as “garbage in-garbage out” (n = 1). Among the fifteen journal editors who participated in the telephone interview, eight attended the in-person roundtable. Among journals participating in the interview, 8/15 had an international Editor-in-Chief or Co-Editor-in-Chief. The impact factors of the journals invited to participate ranged from 2.8 to 54.4; for those participating, from 2.8 to 14.7.

Table 1. Study Sample by Type of Journal

RT, roundtable.

Interviewed editors varied in experience, with years of service as an editor ranging from 6 months to 24 years (mean = 8 years; median = 6.5 years). Time spent on editorial responsibilities averaged 19 hours per week (range, 2–70 hours/week), with some editors working full-time (40+ hours per week) and others part-time.

RWE Studies Have Value and Are Complementary to RCTs

In general, interviewed editors agreed with the definitions of RWE and RWD used in this study, providing additional detail or, in some cases, reasons for disagreeing (Supplementary Table 2). Interviewed editors reported RWE, in general, to be valuable. Advantages included the ability to complement RCT evidence, assess the impact of interventions in the real world, and understand treatment effects among more diverse, representative patient populations. However, editors also cited disadvantages of RWE, such as the lack of high-quality data and less-established methodological standards compared with other study designs.

Characteristics Defining High-Quality RWE Are Similar to Those Defining High-Quality RCTs

Editors reported that the value of a submitted RWE manuscript is grounded in whether the study asks important research question(s), fills a research gap, is generalizable, and uses a data source well aligned with the question. High-quality RWE manuscripts were characterized as including: impactful, meaningful, hypothesis-driven questions; generalizable subject populations; high-quality, clearly described statistical analyses and other methods; reported efforts to address selection bias; and subject matter appropriate to the specific journal (Table 2).

Table 2. Comments from Interviews and RT Discussion on Value of RWE

RT, roundtable; RWE, real-world evidence; RCT, randomized controlled trials.

These same features were described for high-quality and low-quality intervention studies, with one caveat. Editors highlighted specific features associated with high-quality RCTs, such as the approach to investigator blinding and recruitment, protocol violations, and follow-up periods. Attributes specific to high-quality RWE, such as methods to ensure similar patient populations, confirm treatment exposure, or address missing data, went unmentioned.

All Manuscripts Are Treated Equally – Almost…

Editors noted that manuscripts with high-quality attributes have a higher likelihood of being sent for peer review, regardless of study design. Survey results indicate that the majority of participating journal editors (n = 11/17) receive between 500 and 1,999 manuscripts annually. For the majority of sampled journals (n = 11), manuscripts reporting on RCTs make up 25 percent or fewer of submissions, RWE manuscripts 50 percent or more, and other manuscript types roughly 25 percent. Overall, the majority of these journals (n = 14) send over 50 percent of submitted manuscripts, both RCT and RWE, for peer review.

Participants reported that they assess the quality of all manuscripts in the same way, irrespective of study design. However, editors noted a prestige factor and comfort level with RCT designs that appear to give them advantages over RWE studies. RWE encounters more skepticism and may face a higher bar for demonstrating quality than RCTs do. For example, one editor explained that, given the volume of RWD sources and the relative speed of performing analyses using these data, “In some ways, it's too easy to do RWE studies.” During the roundtable, editors also reported that publishing RCTs is considered more prestigious than publishing RWE, and that they may therefore not scrutinize RCTs to the same extent.

“We always get fewer RCTs than we want, so maybe we have a lower bar. But for RWE, we know we will get enough papers, so ‘was there an interesting question’ becomes more important.” -RT participant

Peer-Review Challenges Exist but Are Not Unique to RWE

Interviewed editors reported that manuscript acceptance is, in part, predicated on finding competent reviewers who can recognize high-quality papers. Editors reported difficulty finding reviewers in general, regardless of study design. These challenges stem from the limited time and availability of peer reviewers, reviewer skills incompatible with the manuscript topic, and competition among journals for good reviewers. During the RT discussion, one editor highlighted the need for RWE manuscript peer reviewers to scrutinize the appropriateness of data sources and to be acquainted with the RWD source used. Unfortunately, editors report that searching for identifiable characteristics of peer reviewers (e.g., experience with specific data sources) using existing editorial systems is not feasible.

Current editorial computer systems often do not allow peer reviewers to be stratified by attributes highly relevant to their ability to review a particular manuscript, such as experience with specific RWD sources. More granular systems enabling editorial staff to stratify potential peer reviewers by interests, expertise, and knowledge of specific data sets could identify reviewers capable of adequately scrutinizing RWE study designs and thereby improve the quality of published studies.

“If they don't know the datasets, can reviewers really evaluate those studies?” -RT participant

Tools and Training for RWE for Authors, Peer-Reviewers, and Editors

Editors report infrequent use of training, tools, or checklists to assist authors, peer reviewers, and/or editors in the reporting and review of manuscripts of all study types. For example, seven of the seventeen journals surveyed did not have specific recommendations for authors to follow when conducting, reporting, or submitting RWE manuscripts. Among the nine journals that do provide recommendations, two respondents cited the STROBE guidelines, one recommended the “ISPOR Good Practices for Outcomes Research,” and the remainder pointed to their general author guidelines Web page.

In the roundtable, editors had mixed reactions regarding the benefits of specific guidelines and checklists (Table 3). Roundtable participants wished to avoid being “too prescriptive” or impeding methodological innovation should authors use methods not included on a reviewer checklist. Other editors expressed concern that a “one-size-fits-all” RWE checklist is inappropriate given the variety of study designs and data sources that fall into the RWE category. Finally, other research has questioned whether the adoption of checklists for authors translates into increased methodological transparency or quality (18).

Table 3. Comments from Interviews and the RT Discussion on Editorial Decision Making

RT, roundtable; RWE, real-world evidence.

Editors May Welcome a Checklist to Improve the Efficiency of Manuscript Review

While editors did not support checklists aimed at authors or peer reviewers, they did seek tools that would help them make manuscript rejection decisions sooner and provide constructive feedback to authors. Provided with the right tools, such as a comprehensive checklist, editors could identify high-quality RWE studies and manuscripts and more quickly reject low-quality ones, allowing them to spend time on high-quality manuscripts that can contribute to science and are more likely to be accepted for publication.

Editors Are Interested in Improving the Rigor and Transparency of RWE

During the roundtable, editors debated three opportunities to improve the rigor and transparency of RWE studies. First, because word limits on methods sections may not accommodate detailed methodologies, editors recommended using online supplementary materials to facilitate research transparency. Second, some, but not all, editors highlighted the need for an a priori protocol and hypothesis-driven analysis plan; this would allow editors and reviewers to distinguish results derived from prespecified analyses from results selected simply for an impressive, statistically significant odds ratio. Third, the roundtable debated the benefits and challenges of open-source data and publication of analytic code to improve transparency and enable readers and peer reviewers to replicate study methods.

Editors Rely on Existing Peer Reviewer Knowledge and Provide Little in the Way of Formal Training

According to the survey, 10/17 journals did not provide formal training for peer reviewers, while 7/17 did. Journals providing training referenced “online materials” or “Web courses” (3/7); written guidance or an occasional lecture (3/7); or an in-person meeting (2/7). However, only one journal noted RWE-specific training for its peer reviewers, which was described as “general instructions.”

In addition, 9/17 journals reported providing tools (checklists, etc.) to aid the review process. Among those providing tools, these documents included “ratable issues and then open-ended review,” “online checklists,” and “ratings on specific dimensions”; as one respondent put it, “mostly, the tools focus on structure and formatting of reviews rather than content domains.” None included tools or checklists specific to RWE. Editors were also asked about training and resources for other research designs or statistical analyses. Among respondents, 13/17 do not provide such training. Editors described their resources as “for early career researchers,” “general instructions,” “sometimes within an online newsletter,” and “directed to guidance.” However, one stated they “require statistical review for clinical manuscripts.”

As explained by one editor, “the publisher wishes to have the same guidelines for all its journals, so field-specific information is discouraged.” In general, editors reported hesitancy to burden reviewers further with additional tools or training.

“We don't impose a lot of structure. I mean we're reliant on the generosity of our reviewers, and they're contributing their time for free. So, we hate to impose a lot of rules on them. But, we also don't hesitate to drop them if they're not being helpful.”-Interview participant - HSR

DISCUSSION

Participating journal editors reported being receptive to RWE, provided studies ask important research question(s), fill a research gap, are generalizable, and use a data source well aligned with the question. They believe that RWE is complementary to RCT evidence and that the characteristics of high-value RWE overlap with those of high-value RCTs. These beliefs are consistent with Price and colleagues’ (19) view that the emergence of RWE is a natural evolution from traditional evidence, not a revolution. Interviewed editors identified several challenges to peer review but also offered suggestions that may help improve the quality of the evidence base.

Improving Rigor, Transparency, and Relevance of RWE

One inherent challenge is the need for authors to balance transparency and methodological complexity with a clear story that readers can follow (20). The availability of virtually unlimited word counts through online supplementary material may help editors, peer reviewers, and readers alike evaluate the quality of study designs and analyses. RT participants suggested that registering RWE studies a priori may improve the rigor and transparency of RWE. Similarly, a 2010 article by the editors of the journal Epidemiology suggested that preregistering studies could reduce publication bias, improve transparency, and improve ethical behavior among researchers (21). Kreis and colleagues (22) found that approximately half of sampled HTA agencies search trial registries to identify unpublished trial data; analogous approaches could be adopted for RWE. In a recent editorial, White suggested that increasing the transparency of methodology and the selection/appropriateness of data sources, registering protocols, implementing corporate policies, and adhering to reporting guidelines can help researchers build trust in RWE (2).

As research is increasingly designed to answer specific research or stakeholder questions, a study-quality, “fit-for-purpose” approach may prevail over historical study hierarchies in which RCTs are regarded as the “highest” level of evidence (23). Several initiatives are shifting research culture by engaging stakeholders at the outset of research to identify useful and important study questions and endpoints (24;25). These efforts may result in more meaningful research rather than studies that are “too easy to do.”

Improving Peer Review

Journals provide little in the way of training for their peer reviewers, in part because they do not wish to burden them further. However, concern has been reported in the literature regarding reliance on peer reviewers’ existing knowledge to identify “fatal flaws” in manuscripts. An assessment of British Medical Journal peer reviewers’ ability to identify major errors found that, of nine major flaws, reviewers, even those randomized to receive training, identified approximately three (26). This is particularly concerning given editors’ comments that editorial software makes it difficult to sort potential peer reviewers by expertise and experience. This limitation may make editors more reliant on authors recommending their own peer reviewers, a practice linked to several recent high-profile fabricated peer reviews (27).

High-quality review is increasingly important to maintaining both the public's and researchers’ trust in the legitimacy of published research findings. For example, pay-to-publish, open-access journals, which often feature studies that undergo little or no rigorous peer review, have recently received negative attention in both the peer-reviewed and lay press (28). To ensure a more systematic peer-review approach, the European Association of Science Editors (EASE) suggests providing instructions openly on the journal Web site, linking to guidelines and checklists, and requesting that reviewers use these sources (29).

However, access to checklists alone may be insufficient. For example, a recent study found that reporting of confounders in peer-reviewed cohort and case-control studies remained insufficient despite the adoption and endorsement of the STROBE guidelines (18). In addition to tools, corresponding education may be helpful: education and tools have been shown to improve healthcare decision makers’ awareness, confidence, and skills in evaluating and applying findings from RWE studies in practice (30).

Checklists to Improve Communication between Editors and Authors

Checklists could also serve as a communication tool, allowing editors to work more efficiently with authors and to suggest improvements to manuscripts reporting methodologically sound, innovative, and impactful RWE studies. This recommendation contrasts with a report by MacMahon and Weiss (31), who believe that developing tools “to support editors and reviewers when considering such articles for publication” in fact “smacks of condescension,” because journal editors are typically selected for their expertise and experience. However, efforts to accelerate the peer-review process would likely be met with praise from authors submitting manuscripts to peer-reviewed journals, who have been critical of the length of time to receive a review decision (32;33).

Policy Implications

Interviewed editors are interested in improving peer review and the quality of the evidence base, including RWE. HTA bodies may find these insights useful as they consider the role of RWE and peer review in their decision making. At present, use of RWE still varies across and within stakeholder groups. For example, in the United States, the Academy of Managed Care Pharmacy Foundation's Format for Formulary Submissions recommends using the best available evidence, including RWE, yet use remains limited among individual health plans (34–36). Similarly, Rangarao et al. found that 85 percent of clinical practice groups regularly use RWE in some aspect of guideline development, but the use is inconsistent (37). And synthesis tools such as the American Society of Clinical Oncology's Value Framework use only RCT evidence for determining “value” (38).

Internationally, policies on the use of RWE in decision making are inconsistent (39). However, the Innovative Medicines Initiative's GetReal Consortium identified three areas where RWD is currently being used to guide relative effectiveness analyses: as supplementary input for initial HTAs, as inputs to pharmacoeconomic models, and for re-assessment as part of conditional reimbursement schemes (39).

Limitations and Challenges

The limited sample size and relative homogeneity of journals willing to participate in this study may reduce generalizability to the larger population of journal editors. Importantly, the study sample over-represented HSR journals, which were more likely to agree to participate: while HSR journals represented only 20.3 percent of the journals solicited, they represented 53 percent of interview participants, 56 percent of survey respondents, and 63 percent of RT participants. HSR journal editors may be more comfortable with, and place greater value on, RWE studies than editors of journals in other fields, which may drive many of this study's results. Editors who declined to participate may have done so because they found little value in RWE; for example, one declining editor referred to RWE as “garbage in, garbage out.” Thus, divergent points of view, important for grounded theory, are absent.

Furthermore, while the leadership of sampled journals was approximately 50 percent non–United States-based, participating editors, particularly those who attended the RT, were generally United States-based. Thus, our findings may primarily represent an American perspective. Lastly, the survey may have been completed by a member of a journal's editorial staff who was less familiar with, or held different views toward, RWE than the editor participating in the interview and roundtable.

CONCLUSION

Journal editors play a critical role in the translation of research findings into clinical practice. Editors suggest that, with rigorous, transparent methodology and improved data sources, RWE can contribute to the evidence base. Tools facilitating communication and the peer-review process may be useful for researchers, reviewers, and editors. Improving peer-review processes across all study designs has the potential to improve the evidence base for decision making, including HTA.

SUPPLEMENTARY MATERIAL

Supplementary Material 1: https://doi.org/10.1017/S0266462317004408

Supplementary Table 1: https://doi.org/10.1017/S0266462317004408

Supplementary Table 2: https://doi.org/10.1017/S0266462317004408

CONFLICTS OF INTEREST

This project was funded by the National Pharmaceutical Council. Ms. Oehrlein reports grants from National Pharmaceutical Council, during the conduct of the study; grants from Pfizer, and the PhRMA Foundation outside the submitted work. Dr. Graff reports grants from National Pharmaceutical Council, during the conduct of the study. She is currently employed by the National Pharmaceutical Council and owns stock in Pfizer Inc. Dr. Perfetto is an employee of the National Health Council, a not-for-profit multi-stakeholder membership organization. As such it receives membership and sponsorship funds from a variety of organizations and businesses. The lists of members and sponsors can be found at www.nationalhealthcouncil.org. Dr. Perfetto discloses that while at the University of Maryland she had grants and contracts from the Patient-Centered Outcomes Research Institute, the Pharmaceutical Research Manufacturers Association Foundation, the National Pharmaceutical Council, and the Academy of Managed Care Pharmacy Foundation. Dr. Mullins reports grants from National Pharmaceutical Council, during the conduct of the study; grants and personal fees from Bayer, grants and personal fees from Pfizer, personal fees from Boehringer-Ingelheim, personal fees from Janssen/J&J, personal fees from Regeneron, from Sanofi, outside the submitted work. Dr. Dubois is employed by the National Pharmaceutical Council. Dr. Anyanwu reports grants from National Pharmaceutical Council, during the conduct of the study. She completed work while at University of Maryland School of Pharmacy, and is currently employed by the Patient-Centered Outcomes Research Institute. Dr. Onukwugha reports grants from National Pharmaceutical Council, during the conduct of the study; and grants from Bayer Healthcare and Pfizer, outside the submitted work.

REFERENCES

1. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence — What is it and what can it tell us? N Engl J Med. 2016;375:2293-2297.
2. White R. Building trust in real-world evidence and comparative effectiveness research: The need for transparency. J Comp Eff Res. 2017;6:5-7.
3. Garrison LP Jr, Neumann PJ, Erickson P, Marshall D, Mullins CD. Using real-world data for coverage and payment decisions: The ISPOR Real-World Data Task Force report. Value Health. 2007;10:326-335.
4. Collins FS, Varmus H. A new initiative on precision medicine. N Engl J Med. 2015;372:793-795.
5. Health Policy Brief: The FDA's Sentinel Initiative. June 4, 2015. http://www.healthaffairs.org/healthpolicybriefs/brief.php?brief_id=139 (accessed June 1, 2016).
6. PCORnet, the National Patient-Centered Clinical Research Network. October 19, 2017. http://www.pcornet.org/ (accessed November 13, 2017).
7. Gibson TB, Ehrlich ED, Graff J, et al. Real-world impact of comparative effectiveness research findings on clinical practice. Am J Manag Care. 2014;20:e208-e220.
8. Ioannidis J, Haidich A, Pappa M, et al. Comparison of evidence of treatment effects in randomized and nonrandomized studies. JAMA. 2001;286:821-829.
9. Concato J, Shah N, Horwitz RI. Randomized, controlled trials, observational studies, and the hierarchy of research designs. N Engl J Med. 2000;342:1887-1892.
10. MacLehose RR, Reeves BC, Harvey IM, Sheldon TA, Russell IT, Black AM. A systematic review of comparisons of effect sizes derived from randomised and non-randomised studies. Health Technol Assess. 2000;4:1-154.
11. Anglemyer A, Horvath HT, Bero L. Healthcare outcomes assessed with observational study designs compared with those assessed in randomized trials. Cochrane Database Syst Rev. 2014;4:MR000034.
12. Starks H, Diehr P, Curtis J. The challenge of selection bias and confounding in palliative care research. J Palliat Med. 2009;12:181-187.
13. Greenfield S, Kaplan SH. Building useful evidence: Changing the clinical research paradigm to account for comparative effectiveness research. J Comp Eff Res. 2012;1:263-270.
14. Villas Boas PJ, Spagnuolo RS, Kamegasawa A, et al. Systematic reviews showed insufficient evidence for clinical practice in 2004: What about in 2011? The next appeal for the evidence-based medicine age. J Eval Clin Pract. 2013;19:633-637.
15. Wager E, Williams P; Project Overcome failure to Publish nEgative fiNdings Consortium. “Hardly worth the effort”? Medical journals' policies and their editors' and publishers' views on trial registration and publication bias: Quantitative and qualitative study. BMJ. 2013;347:f5248.
16. Cals JW, Mallen CD, Glynn LG, Kotz D. Should authors submit previous peer-review reports when submitting research papers? Views of general medical journal editors. Ann Fam Med. 2013;11:179-181.
17. Hing CB, Higgs D, Hooper L, Donell ST, Song F. A survey of orthopaedic journal editors determining the criteria of manuscript selection for publication. J Orthop Surg Res. 2011;6:19.
18. Pouwels KB, Widyakusuma NN, Groenwold RH, Hak E. Quality of reporting of confounding remained suboptimal after the STROBE guideline. J Clin Epidemiol. 2016;69:217-224.
19. Price D, Bateman ED, Chisholm A, et al. Complementing the randomized controlled trial evidence base. Evolution not revolution. Ann Am Thorac Soc. 2014;11:S92-S98.
20. Pain E. Your data, warts and all. October 4, 2013. http://www.sciencemag.org/careers/2013/10/your-data-warts-and-all#.Uk9ffEYHu3c.twitter (accessed April 20, 2016).
21. The Editors. The registration of observational studies–When metaphors go bad. Epidemiology. 2010;21:607-609.
22. Kreis J, Panteli D, Busse R. How health technology assessment agencies address the issue of unpublished data. Int J Technol Assess Health Care. 2014;30:34-43.
23. Sabharwal RK, Graff JS, Holve E, Dubois RW. Developing evidence that is fit for purpose: A framework for payer and research dialogue. Am J Manag Care. 2015;21:e545-e551.
24. Mullins CD, Abdulhalim AM, Lavallee DC. Continuous patient engagement in comparative effectiveness research. JAMA. 2012;307:1587-1588.
25. Perfetto EM, Burke L, Oehrlein EM, Epstein RS. Patient-focused drug development: A new direction for collaboration. Med Care. 2015;53:9-17.
26. Schroter S, Black N, Evans S, Godlee F, Osorio L, Smith R. What errors do peer reviewers detect, and does training improve their ability to detect them? J R Soc Med. 2008;101:507-514.
27. Haug CJ. Peer-review fraud — Hacking the scientific publication process. N Engl J Med. 2015;373:2393-2395.
28. Sorokowski P, Kulczycki E, Sorokowska A, Pisanski K. Predatory journals recruit fake editor. Nature. 2017;543:481-483.
29. EASE Guidelines for Authors and Translators of Scientific Articles to be Published in English. Acta Inform Med. 2014;22:210-217.
30. Perfetto EM, Anyanwu C, Pickering MK, Zaghab RW, Graff JS, Eichelberger B. Got CER? Educating pharmacists for practice in the future: New tools for new challenges. J Manag Care Spec Pharm. 2016;22:609-616.
31. MacMahon B, Weiss NS. Is there a dark phase of this STROBE? Epidemiology. 2007;18:791.
32. Shattell MM, Chinn P, Thomas SP, Cowling WR III. Authors' and editors' perspectives on peer review quality in three scholarly nursing journals. J Nurs Scholarsh. 2010;42:58-65.
33. Weber EJ, Katz PP, Waeckerle JF, Callaham ML. Author perception of peer review: Impact of review quality and acceptance on satisfaction. JAMA. 2002;287:2790-2793.
34. Academy of Managed Care Pharmacy. The AMCP Format for Formulary Submissions Version 4.0. April 21, 2016. http://www.amcp.org/FormatV4/ (accessed November 13, 2017).
35. Weissman JS, Westrich K, Hargraves JL, et al. Translating comparative effectiveness research into Medicaid payment policy: Views from medical and pharmacy directors. J Comp Eff Res. 2015;4:79-88.
36. Hurwitz JT, Brown M, Graff JS, Peters L, Malone DC. Is real-world evidence used in P&T monographs and therapeutic class reviews? J Manag Care Spec Pharm. 2017;23:613-620.
37. Rangarao Peck S, Kleiner H, Graff JS, Lustig A, Dubois RW, Wallace P. Are clinical practice guidelines being informed by real-world data? BMC Health Serv Res. In press.
38. Schnipper LE, Davidson NE, Wollins DS, et al. American Society of Clinical Oncology Statement: A conceptual framework to assess the value of cancer treatment options. J Clin Oncol. 2015;33:2563-2577.
39. Makady A, Ham RT, de Boer A, et al. Policies for use of real-world data in health technology assessment (HTA): A comparative study of six HTA agencies. Value Health. 2017;20:520-532.
