Introduction
Practice-based research networks (PBRNs) offer unique advantages to clinical research and quality improvement [Reference Davis, Gunn and Kenzie1-Reference Peterson, Lipman, Lange, Cohen and Durako6], primarily because they bring practice-relevant topics onto the research agenda and are conducted in "real-world," nonacademic clinical settings where almost all of the population receives its health care. Historically, most PBRNs have focused on medical care [7], but a recent review documented the growth in the number of dental PBRNs [Reference Canceill, Monsarrat, Faure-Clement, Tohme, Vergnes and Grosgogeat8]. This review concluded that the largest dental PBRN globally is the "National Dental PBRN" ("Network") and relied on the Network's most recent publication about these topics (from 2013) [Reference Gilbert, Williams and Korelitz9]. From 2005 to 2012, the Network operated as a multiregional PBRN in four USA regions and one region that comprised three Scandinavian countries. From 2012 to 2019, the Network operated nationwide throughout the USA as six regions and no longer included a Scandinavian region. Beginning in 2019, new organizational changes were made, the description of which comprises one purpose of this article.
Journal impact factor and the h-index [Reference Hirsch10] are commonly used measures of scientific impact. However, these metrics have important limitations, such as obscuring large differences in the influence of individual articles or undervaluing some fields of research by failing to normalize raw citation counts. In an effort to address the limitations of these and other measures, the Relative Citation Ratio (RCR) was developed as an article-level measure of a research article's influence that is independent of scientific field [Reference Hutchins, Yuan, Anderson and Santangelo11]. To facilitate its public use, a National Institutes of Health (NIH) PubMed site was established [12]. All peer-reviewed articles from NIH-funded studies are required to be publicly available in this database [13].
The aforementioned review [Reference Canceill, Monsarrat, Faure-Clement, Tohme, Vergnes and Grosgogeat8] compared scientific productivity between PBRNs and independent investigative teams. These comparisons included number of publications, range of clinical topics studied, and number of practitioners and patients enrolled. Only limited information was available to the authors about network-specific productivity. Indeed, reports of PBRN performance metrics are rare for both medical and dental PBRNs, and have primarily focused on the number of studies completed and the number of participants involved [Reference Peterson, Lipman, Lange, Cohen and Durako6,Reference Werner14,Reference Green and Hickner15]. There is no broad consensus about which metrics should be used to quantify productivity across PBRNs and other research networks. The National Dental PBRN has reported productivity based on the metrics of practitioner engagement (e.g., practitioner participation in studies, webinars, network activities that provide continuing education credit, presentations and publications, and practitioner meetings) [Reference Mungia, Funkhouser and Makhija16,Reference Mungia, Funkhouser and Buchberg Trejo17], but has not reported other measures of overall network scientific productivity, such as the number of peer-reviewed publications and impact as measured by RCR. Therefore, this article aims to: (1) describe the new organizational structure and function of the Network and (2) quantify the Network's scientific productivity.
Methods
The Network’s mission is “To improve oral health by conducting dental practice-based research and by serving dental professionals through education and collegiality.” It seeks to maximize the practicality of conducting research in everyday clinical practice across geographically dispersed regions and diverse practice types. Its structure is designed to focus some activities at the regional level (e.g., interactions with practitioners), while managing other activities centrally (e.g., study development).
Overall Network Structure and Oversight
The overall structure of the Network was revised in 2019 [18,19], as depicted in Fig. 1. The Network's main funder is the NIH. An Administrative and Resource Center (ARC) and a Network Coordinating Center (NCC) support the infrastructure for study development and implementation. The NCC is located in Oregon and provides both scientific and administrative functions. NCC biostatisticians support study development and analysis plans, and NCC management staff provide support for study operations and data management. The NCC also designs and implements technology for the network "Hub," which supports the practitioner and participation databases; houses key network-wide documents; and implements and tracks study data collection, data quality management, study monitoring procedures, and data analysis. The ARC is the national administrative base (located in Alabama) for six regional nodes and one specialty node that span all 50 US states and territories. Nodes are administratively based in Alabama, Florida, Minnesota, New York, Oregon, Illinois, and Texas. All enrolled practitioners are associated with one of these regional centers. The ARC also directs "Components" focused on specific administrative tasks: a Communications and Dissemination Component; a Practitioner Recruitment and Engagement Component; a Practitioner Training Component; and a Practitioner and Patient Compensation System, which are based either in Alabama or Florida. Each Component Director and Node Director reports to the National Network Director, who is responsible for overall scientific, technical, and administrative leadership and who has primary responsibility for planning and directing Network infrastructure and managing Network operations and fiscal resources. As also shown in Fig. 1, the ARC manages the Practitioner Executive Committee (PEC) and Network interactions with the Central Institutional Review Board (CIRB; ethics committee). The PEC comprises practitioner representatives from each Network region, who provide input about the design, feasibility, and clinical interest of studies and research topics. The Network's CIRB has been in operation since 2014 and enables the Network to comply with the NIH policy that requires a single IRB review for multi-site studies involving nonexempt human subjects research in which each site conducts the same protocol.
Fig. 1 also depicts a Data and Safety Monitoring Board (DSMB). The DSMB is an independent group of experts that advises the NIH and study investigators on clinical studies, especially studies that involve an intervention. Its responsibilities include monitoring human subject safety; evaluating study data; reviewing study conduct and progress; and making recommendations to NIH concerning a study’s continuation, modification, or termination. NIH may appoint a “medical monitor” instead of a DSMB for minimal-risk studies.
Overall Committee Operations
Committees manage the bulk of Network operations. The main committee operational structure is depicted in Fig. 2. All committees have both ARC and NCC representation, and one of these two entities takes main responsibility for each committee. Study Principal Investigators lead "Study Teams," which also include assigned ARC and NCC staff. Committees and Study Teams meet weekly, biweekly, monthly, or quarterly, depending on the committee or team.
The Directors Committee provides primary operational oversight. This committee is responsible for optimizing and monitoring overall Network performance, prioritizing tasks, and approving study administration policies and procedures. It also reviews study coordination across nodes and makes decisions about practitioner recruitment, training, and engagement.
The ARC/NCC Leadership Team acts on behalf of the Directors Committee to facilitate its work. This group manages the operational and business components of the Network on a weekly to biweekly basis.
The Coordination Committee discusses node coordination and study implementation issues, shares best practices, and provides a forum for node coordinators and NCC Study Managers to collaborate throughout study development/implementation.
The IRB Committee implements the policies and operations of the CIRB. Because the CIRB has been in operation for several years, it currently meets on a rare, ad hoc basis only.
The Publications and Presentations Committee implements and ensures compliance with the Network’s publications policies document [20].
The Data Committee develops and implements strategies to standardize, collect, manage, analyze, and share Network data; provides guidance on data collection tools; and prioritizes data quality measures.
The Technology and Infrastructure Committee identifies, prioritizes, and evaluates Network “Hub” needs and functionality.
Study Teams are responsible for developing study-specific documents and procedures to ensure efficient implementation in the Network, providing input regarding data management systems for data collection and quality management activities, preparing documents for CIRB submission, and adhering to NIH policies. During the implementation phase, study teams are responsible for meeting enrollment and retention targets, implementing quality management processes, reporting, and responding to requests regarding study oversight.
Other Operations
Other key operations of the Network have remained very similar to those reported in our earlier publication [Reference Gilbert, Williams and Korelitz9]. These operations include recruitment and retention activities, enrollment processes, practitioner engagement activities, and the benefits of participating as communicated by network practitioners. Previous publications have reported the number and characteristics of Network practitioners, and these numbers are also almost always reported in study-specific publications. Appendix A provides descriptive information about currently enrolled numbers.
Measures of Study Characteristics
A broad range of study types is conducted in the Network. These include national clinical observational studies, national experimental (randomized clinical trial) studies, national questionnaire studies, pilot clinical studies, clinical simulation studies, and qualitative studies. Most studies are conducted nationwide, but some involve only one to three regions. Many different funding mechanisms have been used since the Network's inception in 2005, but since 2019 the Network has conducted only investigator-initiated studies; these are usually funded through specific NIH mechanisms [21,22]. Study length has ranged from less than a year to 3 years.
Clinical topics were categorized into one or more treatment classifications based on the American Dental Association Current Dental Terminology codes [23], with these classifications: Diagnostic; Preventive; Restorative; Endodontics; Periodontics; Prosthodontics (combined categories of Removable Prosthodontics, Maxillofacial Prosthetics, Implant Services, and Fixed Prosthodontics); Oral and Maxillofacial Surgery; Orthodontics; Adjunctive General Services. Clinical topics based on treatment classification were chosen instead of a diagnostic classification system because a single treatment may be the result of different diagnoses, and because in our experience Network clinicians often conceptualize clinical topics in treatment terms. We acknowledge that diagnostic codes, if they were widely recorded in everyday clinical dental practice, would enable a better linkage to health outcomes [Reference Yansane, Tokede and White24]. The Network also has had an impact internationally by either advising about the formation of new networks in other countries, such as in Japan and Brazil, or collaborating in research studies with investigators in the United Kingdom.
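To make this classification step concrete, the sketch below maps a single CDT procedure code to one of the treatment classifications listed above. It is a minimal illustration rather than the Network's actual coding procedure: the numeric ranges follow the standard CDT code series as we understand them (e.g., D0100-D0999 for Diagnostic), and the function name and the collapsing of the removable prosthodontics, maxillofacial prosthetics, implant services, and fixed prosthodontics series into a single "Prosthodontics" bucket are our own simplifications.

```python
# Minimal sketch (not the Network's actual tooling): map a CDT procedure code
# to one of the treatment classifications described in the text. The numeric
# ranges follow the standard CDT code series as we understand them; the single
# "Prosthodontics" bucket mirrors the combined grouping described above.
CDT_CLASSES = [
    ((100, 999),   "Diagnostic"),
    ((1000, 1999), "Preventive"),
    ((2000, 2999), "Restorative"),
    ((3000, 3999), "Endodontics"),
    ((4000, 4999), "Periodontics"),
    ((5000, 6999), "Prosthodontics"),  # removable, maxillofacial, implant, fixed
    ((7000, 7999), "Oral and Maxillofacial Surgery"),
    ((8000, 8999), "Orthodontics"),
    ((9000, 9999), "Adjunctive General Services"),
]

def classify_cdt(code: str) -> str:
    """Return the treatment classification for a CDT code such as 'D2740'."""
    number = int(code.lstrip("Dd"))
    for (low, high), label in CDT_CLASSES:
        if low <= number <= high:
            return label
    raise ValueError(f"Unrecognized CDT code: {code}")

print(classify_cdt("D2740"))  # prints "Restorative"
```

In practice a study may touch several such categories, which is why studies were assigned to one or more classifications rather than a single one.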
Measures of Scientific Productivity
Given that a key aspect of the Network's mission is to conduct research studies and impact the field of clinical research and clinical care, we quantified metrics related to study characteristics and publications: (1) number of studies conducted; (2) breadth of clinical topics investigated; (3) number of practitioners and patients enrolled; (4) number of peer-reviewed scientific publications produced; (5) scientific influence as measured by number of citations, RCRs, and weighted RCRs for Network peer-reviewed publications with a publication date of 2020 or earlier (n = 167); and (6) number of different journal titles.
All Network publications are included in the NIH PubMed database in compliance with NIH policy, so their PubMed identification numbers ("PMIDs") are publicly available and can be entered directly into the NIH iCite website for rapid calculation. We used the iCite tool [12] for citation, RCR, and weighted RCR calculations.
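For readers who wish to reproduce this step programmatically, the sketch below retrieves citation counts and RCRs for a list of PMIDs from the public iCite web service rather than the website form. The endpoint and field names (e.g., relative_citation_ratio, citation_count) reflect our understanding of the iCite API and should be verified against its current documentation; the PMIDs shown are placeholders, not a list of Network publications.

```python
# Sketch: retrieve citation metrics for a set of PMIDs from the public iCite API.
# The endpoint and field names reflect our understanding of the service and may
# change; verify against the iCite documentation. PMIDs below are placeholders.
import requests

def fetch_icite(pmids):
    """Return one record per PMID with iCite metrics (citations, RCR, etc.)."""
    url = "https://icite.od.nih.gov/api/pubs"
    resp = requests.get(url, params={"pmids": ",".join(str(p) for p in pmids)})
    resp.raise_for_status()
    return resp.json().get("data", [])

records = fetch_icite([12345678, 23456789])  # placeholder PMIDs
for rec in records:
    print(rec.get("pmid"), rec.get("citation_count"), rec.get("relative_citation_ratio"))
```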
RCR represents the field-normalized and time-normalized citation rate. Article citation rates are divided by an expected citation rate derived from the performance of articles in the same field and benchmarked to a peer comparison group. Fields are defined for each article by using its co-citation network. The RCR is benchmarked to 1.0 for a typical (median) NIH-funded paper in the corresponding year of publication, so a paper with an RCR of 1.0 has received the same number of citations per year as the median NIH-funded paper in its field, while a paper with an RCR of 2.0 has received twice as many citations per year as the median NIH-funded paper in its field. RCR data are available for articles that are at least one calendar year old. The database contains many papers that were not NIH funded, and the same RCR value translates to a lower percentile ranking for papers that are not NIH funded. The RCR methodology was validated using citation data from about 90,000 published papers emanating from NIH-funded research and by comparing calculated RCRs to the opinions of recognized experts in selected fields regarding manuscript reach. The weighted RCR is the sum of the RCRs for Network articles, which weights the article count by the articles' influence relative to NIH-funded papers. A highly influential set of articles will have a weighted RCR that is higher than the number of total publications, while a set of articles with below-average influence will have a weighted RCR that is lower than the number of total publications.
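In symbols, and simplified from the full regression-based definition in Hutchins et al. [Reference Hutchins, Yuan, Anderson and Santangelo11], the RCR of article i and the weighted RCR of a set of N articles can be written as:

\[
\mathrm{RCR}_i \;=\; \frac{\text{citations per year received by article } i}{\text{expected citations per year for articles in the same field and year}},
\qquad
\text{weighted RCR} \;=\; \sum_{i=1}^{N} \mathrm{RCR}_i .
\]

The denominator is derived from the article's co-citation network and benchmarked so that the median NIH-funded paper of the same publication year has an RCR of 1.0.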
Results
Study Characteristics
Table 1 lists the characteristics of Network studies that are completed, in data collection, or in development. Fifty-six studies have been completed or are in development.
Table 1 footnotes: (1) one network region only; (2) two network regions only; (3) three network regions only; (4) one network region only; the study is a collaboration with five other sites at academic health center (i.e., non-PBRN) clinics in the USA, funded by U01-FD-005938, and the planned enrollment of 1,500 patients across the entire collaboration stood at 1,235 as of June 9, 2022.
Abbreviations: EDR, electronic dental record; HPV, human papillomavirus; ONJ, osteonecrosis of the jaw; PPE, personal protective equipment; Qx, questionnaire; RCT, randomized clinical trial; TMD, temporomandibular disorders.
Detailed study information can be accessed at http://nationaldentalpbrn.org/studies.php. An early version of this table was published in 2018 [Reference Mungia, Funkhouser and Buchberg Trejo17].
Across the studies that have completed enrollment (studies 1–50), 19,827 practitioners have been enrolled (some participated in multiple studies), along with 70,665 patient participants (excluding data-only studies). Two data-only studies (studies 7 and 31) examined electronic health records for 790,493 patients.
The 58 studies listed in Table 1 comprise 22 questionnaire studies, 38 clinical studies, and 3 clinical simulation studies. Study designs have included 28 cross-sectional designs, 1 case-control design, 1 nested case series design, 18 prospective cohort studies, 3 retrospective cohort studies, 1 non-randomized feasibility controlled clinical trial, 5 randomized controlled clinical trials, and 3 clinical simulations.
The clinical topics investigated comprised 14 Diagnostic topics, 4 Preventive topics, 14 Restorative topics, 8 Endodontics topics, 4 Periodontics topics, 6 Prosthodontics topics, 7 Oral and Maxillofacial Surgery topics, 1 Orthodontics topic, and 17 Adjunctive General Services topics.
Publication Productivity
The Network has produced 193 peer-reviewed publications to date [20]. A total of 167 had publication dates of 2020 or earlier. Peer-reviewed publications have appeared in a total of 62 different journal titles so far, which comprise a broad range of scientific disciplines.
Fig. 3 shows results from the iCite analysis of the 167 publications with publication dates of 2020 or earlier, displaying the number of publications per year, the weighted RCR by year, and total citations by year cited. Not shown in the figure are these metrics: 11.13 publications per year; 2927 total citations; a mean (SE) of 17.53 (1.79) and a median of 11.0 citations per publication; and a mean (SE) of 2.09 (0.17) citations per publication per year. The mean (SE) RCR was 1.40 (0.11), the median RCR was 1.07, and the weighted RCR was 233.84.
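Summary statistics of this kind can be reproduced from an iCite export with a few lines of code. The sketch below assumes a list of per-article records like those returned by the API call illustrated earlier; the field names are illustrative, and articles without an RCR (e.g., those published too recently) are simply excluded from the RCR summaries.

```python
# Sketch: summarize citation counts and RCRs for a set of articles, producing the
# kinds of statistics reported above (totals, means, standard errors, medians,
# and the weighted RCR, i.e., the sum of article RCRs). Field names are illustrative.
import math
import statistics

def summarize(records):
    citations = [r["citation_count"] for r in records]
    rcrs = [r["relative_citation_ratio"] for r in records
            if r.get("relative_citation_ratio") is not None]
    n = len(citations)
    return {
        "publications": n,
        "total_citations": sum(citations),
        "mean_citations": statistics.mean(citations),
        "se_citations": statistics.stdev(citations) / math.sqrt(n),
        "median_citations": statistics.median(citations),
        "mean_rcr": statistics.mean(rcrs),
        "median_rcr": statistics.median(rcrs),
        "weighted_rcr": sum(rcrs),
    }
```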
We compared the percentile ranking of the 167 publications to the corpus of PubMed publications based on RCRs; nine of the 167 publications were ranked above the 90th percentile and 15 articles were ranked between the 80th and 90th percentiles. An article-specific report is publicly available [25].
The weighted RCR (234 in this analysis) was considerably higher than the total number of publications (167), indicating that, on average, this set of publications was substantially more influential than the median NIH-funded paper in its field.
Discussion
Having described the Network’s organizational changes in 2019 in the context of a Network history that began in 2005, we have documented that a PBRN can productively engage community practitioners, patients, and clinical research investigators over a sustained multi-year period. This adds to the evidence and conclusions made in a recent review of PBRNs [Reference Canceill, Monsarrat, Faure-Clement, Tohme, Vergnes and Grosgogeat8]. Our report focuses on one particular network to document its productivity as measured by recruiting and engaging practitioners in research, completing many studies, and disseminating research findings through peer-reviewed publications. These results suggest that everyday practitioners can be crucial partners in developing, implementing, and disseminating scientific research, and that community practices can be an effective venue for clinical research. Network studies have generated numerous, timely, and influential publications, in a broad range of clinical topic areas.
That the median RCR of Network publications exceeded 1.0, and that the weighted RCR substantially exceeded the total number of publications on which the analysis was based, demonstrates that the Network's publications have had a greater-than-average influence in their fields [Reference Hutchins, Yuan, Anderson and Santangelo11]. Because of its mission, the Network attempts to balance its interest in communicating with and disseminating to a clinical audience against its desire to maintain strong scientific credibility. Consequently, the Network often decides to target a particular peer-reviewed scientific journal because it has a heavily clinical readership, even though that journal's impact factor is lower than those of other journals that we believe would accept the manuscript. For this reason, our a priori goal before conducting the iCite analysis was much lower (25th percentile) than the actual percentile obtained (higher than the 50th percentile). As one of several methods to quantify publication quality or its impact on the field, the RCR has performed better than journal impact factor, citations per year, or the Thomson Reuters ratio [Reference Hutchins, Yuan, Anderson and Santangelo11,Reference Hutchins, Hoppe, Meseroll, Anderson and Santangelo26,Reference Janssens, Goodman, Powell and Gwinn27]. Establishing consensus about methods to quantify the success and productivity of large research groups may be important for justifying continued funding or establishing new networks [Reference Yu, Van and Patel28,Reference Bragg, Marchand, Hiplert and Cumming29]. Self-citation and publication frequency among network coauthors have the potential to inflate RCRs, but the effect is minimized by the wide range of study topics, which limits the extent of self-citation (numerator), while articles overly benefitting from self-citation would increase the field citation rate (denominator). A limitation is that a high RCR demonstrates prominence among scientific peers yet does not capture the Network's influence on everyday clinical practice and patient health, which is the ultimate goal of the Network. Impact on everyday practice can be the key research question for some studies [Reference Kakudate, Yokoyama and Sumida30-Reference Gilbert, Gordan and Funkhouser32], although the typical goal is to contribute to the evidence base in a manner similar to individual "R01" studies or clinical trials. PBRNs offer recruitment sites (community practices) that complement or supplement academic health center sites with geographic and patient demographic diversity, while also providing an infrastructure to engage simultaneously both academic clinical scientists and the "end-users" (community practitioners) of study results at each step of the study development and implementation process [Reference Gilbert, Williams and Korelitz9,Reference Gilbert, Gordan and Funkhouser32].
The National Dental PBRN has operated at a high level of scientific productivity and has demonstrated the feasibility and effectiveness of PBRNs as a research context. The network seeks to foster a future in which research and quality improvement are done routinely in everyday clinical practice – just because that is what health care providers do as a profession. The ultimate goal is to advance the delivery of evidence-based care into daily clinical practice for the benefit of patients.
Supplementary material
To view supplementary material for this article, please visit https://doi.org/10.1017/cts.2022.421
Acknowledgments
Opinions and assertions contained herein are those of the authors and are not to be construed as necessarily representing the views of the respective organizations or the National Institutes of Health. An Internet site devoted to details about the network is located at http://NationalDentalPBRN.org.
This work was supported by NIH grants U19-DE-28717 (GHG), U01-DE-28727 (MAM), U19-DE-22516 (GHG), U01-DE-16747 (GHG), and UL1-TR-003096 (Robert Kimberly).
We gratefully acknowledge the current practitioner members of the network's Practitioner Executive Committee, all of whom are in private practice: Julie Ann Barna, DDS, Lewisburg, PA; Theresa E. Madden, DDS, MS, PhD, Olympia, WA; Tara R. Rios, DDS, Brownsville, TX; Peggy Richardson, DDS, MS, Tinley Park, IL; Diedra J. Snell, DDS, Port Gibson, MS; Joseph C. Spoto, III, DMD, Apollo Beach, FL. We also acknowledge excellent suggestions for an earlier version of this manuscript by Robert Kimberly, MD, Director of the University of Alabama at Birmingham Center for Clinical and Translational Research.
We also acknowledge the network’s Administrative and Resource Center, National Coordinating Center, and Regional Coordination staff: in Birmingham, AL: Muna Anabtawi, DDS, MS, CCRP, Ellen Funkhouser, DrPH, Yolanda “Terri” Jones, Shermetria Massengale, MPH, CHES, and Ellen Sowell, BA; in Portland, OR: Kim Funkhouser, BS, Suzanne Gillespie, MA, MS, Chalinya Ingphakorn, BS, Reesa Laws, BS, Michael Leo, PhD, Celeste Machen, BS, Sweta Mathur, BSD, MPH, PhD, Sacha Reich, BS, PMP, Kim Stewart, MPH, Natalia Tommasi, MA, LPC, Lisa Waiwaiole, MS; in Gainesville, FL: James “Danny” Johnson, Brenda E. Thacker, RDH, CCRP; in Minneapolis, MN: Sarah Basile, RDH, MPH, Chris Enstad, BS, Kimberly Johnson, RDH, MPH, Heather Weidner, RDH; in Rochester, NY: Kathy Bohn, AAS, Rita Cacciato, RDH, MS, Pat Regusa, BFA, Kathy Scott-Anne, BS, BA; in San Antonio, TX: Marissa Mexquitic, MS.
Disclosures
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this article. Authors’ institutions receive funding from NIH to support network activities.
Statement of Institutional Review Board Approval
The informed consent of all human subjects who participated in this investigation was obtained after the nature of the procedures had been explained fully. All Network studies and operations receive approval from the National Dental PBRN Central IRB or regional IRBs, depending on the activity.