Introduction
Experts in medical and health policy recognized in the early 2000s that, despite advances in basic biomedical research, the movement of laboratory innovations into patient-relevant treatments and public health arenas was lagging. This realization stimulated a variety of efforts, particularly within the National Institutes of Health (NIH), to identify and overcome barriers to the transfer of scientific knowledge in the health realm [Reference Reis, Berglund, Bernard, Califf, FitzGerald and Johnson1]. One significant initiative continues to direct funding to states with relatively low levels of current NIH support through the Institutional Development Award (IDeA) program [2]. To enhance competitiveness for clinical and translational research (CTR), grants have been awarded to build infrastructure and human resources in the form of pilot project funding, support for biostatistical and clinical research services, and training and mentorship opportunities. The Advance-CTR program provides awards, services, and training to Rhode Island-based medical and health researchers. The program began in 2016 with support from an IDeA-CTR award through the National Institute of General Medical Sciences (NIGMS) [Reference Reis, Berglund and Bernard3]. Its scope includes the state’s two major research universities, Brown University and the University of Rhode Island; the Rhode Island Quality Institute, which houses the state’s healthcare information exchange; and the three major hospital systems in Rhode Island: Lifespan, Care New England, and the Providence Veterans Affairs (VA) Medical Center.
Evaluators have been called upon to inform these NIH initiatives and to develop methods that are responsive to the context while bringing in standards and principles from the field of evaluation science [Reference Trochim, Rubio and Thomas4]. A designated team of internal evaluators is responsible for tracking progress and evaluating the program’s effectiveness. A CTR program’s resources must be aligned with the needs of researchers, research administrators, and a range of institutional contexts to best address the foremost challenges of translating clinical research from discovery to utilization throughout the state. Program theory for this project must encompass multiple organizations, often across considerable distances, with varying organizational and professional cultures, bureaucratic processes and hurdles, and staff priorities. It also must account for complex and dynamic chains of causality that are difficult to evaluate rigorously [Reference Forss, Marra and Schwartz5]. Finding effective strategies to overcome challenges to stakeholder engagement, and to make evaluation valid, responsive, and influential both for examining outcomes and for ongoing program planning, is difficult, and the field has not agreed upon explicit processes for doing so [Reference Cole, Wells and Frazer6–Reference Klein8].
From the outset, we have taken a participatory approach, employing various strategies to engage with the full range of program stakeholder groups as we developed and conducted our evaluation plan [Reference Shula, Whitmore, Cousins, Gilbert and al Hudib9]. This paper examines one important strategy we have used to engage our community of researchers and clinicians as active participants and informants of their perceived challenges, barriers and needs for support.
Group Concept Mapping
Group Concept Mapping (GCM) is a validated approach to inform decision making and is especially useful in contexts involving diverse stakeholders and complex issues. It is a mixed methods approach that combines qualitative idea generation in response to an open-ended focus prompt with quantitative clustering and rating of the generated ideas. In this way, it is a stakeholder-authored approach that corroborates input across varied perspectives and stakeholder priorities. Further, the process yields highly intuitive visualizations of results that are easily comprehended and therefore more likely to be used [Reference Trochim10,Reference Trochim and Kane11]. GCM has been applied within Clinical and Translational Science contexts; for example, to map a research agenda for evaluating team science [Reference Falk-Krzesinski, Contractor and Fiore12], and to conceptualize trust in community-academic research partnerships [Reference Dave, Frerichs and Jones13]. Earlier work by Robinson and Trochim [Reference Robinson and Trochim14] employed this approach to examine barriers to participation of minority populations in clinical trials of medical interventions. Additionally, other efforts have suggested that for diverse and complex multi-organizational contexts, GCM works well to guide logic model development [Reference Trochim, Marcus, Masse, Moser and Weld15].
We used GCM as a follow-up to a prior online needs assessment survey to gain a better understanding of how Advance-CTR can accelerate clinical and translational research in Rhode Island, particularly during the initial years of the program [Reference Willey, Fede and Stevenson16]. This paper describes our application of the GCM methodology to inform the leadership of our CTR and our partner institutions about priorities and opportunities for enhancing clinical and translational research in our state, as perceived by the communities of researchers, clinicians, and institutional administrators served by Advance-CTR. The findings have also informed our evolving logic model for this program and planning and prioritizing of future initiatives. Investigation of the generalizability of these specific results to other CTR and CTSA settings will be a productive extension of this work.
The illustrative results reported here include a description of the ideas and themes rated by stakeholders according to their importance and feasibility and comparisons of these ratings across key subgroups. We also discuss how our results were disseminated and applied and share lessons learned regarding the advantages and limitations of GCM as an evaluation component for an IDeA-CTR program and for other similarly large-scale and complex organizational contexts.
Methodology
Participants
There is no minimum threshold of statements required for Group Concept Mapping; however, a minimum of 40 participants at each stage of the process, and at least 10 participants per cell for any between-group comparisons, is needed [Reference Trochim10]. To recruit for this project, clinical and translational researchers, together with relevant staff and administrators from all the affiliated institutions of the project, were identified by program administrative staff and contacted by email with an invitation to participate. The email solicitation was sent to approximately 3000 recipients through email listservs. Of those, 119 participated in online brainstorming and 57 participated in in-person sorting and rating at a statewide retreat.
Table 1 presents the sex, affiliation, position/role, and research experience of participants in the online idea generation and the on-site sorting and rating tasks. The two samples illustrate the range of participants and show that the two groups were sufficiently diverse to represent our stakeholders adequately. There were more female than male respondents among idea generators (57.1% versus 25.2%, respectively) and sorter/raters (50.9% versus 29.8%, respectively), while Assistant Professor was the modal rank of idea generators (23.5%) and sorter/raters (26.3%). Hospital-based staff were less represented at the sorting-rating task (19.3% of participants). Of note, 41.2% of idea generators and 36.8% of sorter/raters had no previous federal funding as principal investigator.
* Staff were included with Administrators in analyses due to small cell size and similarity in roles.
Materials and Methods
The Group Concept Mapping process begins with the identification of a “focus prompt,” that is, a key question to which participants respond, generating the ideas that then drive all subsequent steps of the process. Ideas generated in response to the focus prompt are reviewed and synthesized, and then presented for sorting and rating, followed by analysis and presentation of results [Reference Kane and Trochim17]. These steps were facilitated using the CS Global Max™ [18] online tools developed by Concept Systems Incorporated. The idea brainstorming process was performed online during the two months that preceded an in-person retreat, during which the sorting and rating tasks were completed using the generated ideas. Sorters and raters included both individuals who had participated in brainstorming and other retreat attendees who had not. Analyses and presentation of results occurred in the months following the retreat, with an emphasis on getting the results considered and used by stakeholders having the power to effect change, as described in the discussion section of this report.
Our focus prompt: “To increase the quality and quantity of clinical and translational research in Rhode Island, we should…” was developed by the Tracking and Evaluation team in consultation with experienced clinical researchers from the Advance-CTR program, and with guidance from consultants at Concept Systems Incorporated. This prompt aligned closely with the goals of the Advance-CTR program, which aims to extend resources for enhancing translational research according to the needs of researchers in Rhode Island. Researchers’ needs were previously elucidated during the project’s participatory kick-off retreat and from a needs assessment survey conducted during the prior year.
In Group Concept Mapping, the study outcome is determined by the populations selected to participate. Therefore, we aimed to distribute our focus prompt by email as widely as possible across our participating institutions. We anticipated that respondents would self-select as stakeholders in translational science, and we specifically included all levels of research administrators as a key stakeholder group responsible for decisions ranging from resource allocation to policies, communications and support delivery.
Upon opening the link, participants were presented with a series of demographic questions (i.e., institutional affiliation, academic rank or role, prior research funding experience, sex, and role within or use of Advance-CTR services and programs). The demographic indicators were selected to represent factors that could theoretically influence perception of barriers to research. Once demographic questions were submitted, respondents then anonymously shared their ideas for how the quality and quantity of CTR could be improved in their setting. Participants were encouraged to submit as many ideas as they could think of, and no boundaries were placed on the range, specificity, domain, length, or generalizability of an idea to other researchers. The brainstorming period took place from the middle of October through early November of 2017, with participation encouraged through the Advance-CTR newsletter and periodic email reminders.
The ideas generated during brainstorming were then reviewed by the project team in preparation for the sorting and rating activities. Duplicate themes were consolidated into single items, ideas that were considered too specialized or individualized for broad consideration were deleted, and complex statements were broken into single ideas (e.g., the statement “we should have more networking events and get more protected time for research” would become two separate ideas). At a retreat in December of 2017, participants working individually on laptops completed GCM’s sorting and rating activities. During the sorting task, participants grouped all unique ideas into clusters as they saw fit, according to similarity in meaning alone. This process was carried out using the software program, which allows the generated ideas to be dragged and dropped into thematic clusters. At least two clusters had to be formed (i.e., one-cluster models are not allowed), but there was no upper limit to how many clusters could be created. In addition, participants were required to name each of their created clusters based upon their conceptualization of how the clustered ideas were related. In a separate activity, retreat participants were then asked to rate each idea according to its perceived importance, and then according to its perceived feasibility, using a 5-point Likert scale ranging from “not at all important/feasible” to “extremely important/feasible.” When the rating task was completed, the retreat attendees were thanked for their participation and told that the results of the group concept mapping activity would be shared with them at a later time.
The GCM software application uses the clustered and rated ideas to create a point map through multidimensional scaling [Reference Trochim10,Reference Kane and Trochim17], which illustrates the spatial relationship of items; items that were closer in proximity on the point map had been more frequently sorted together (i.e., placed in the same cluster) by participants. The software, using hierarchical cluster analysis [Reference Trochim10,Reference Kane and Trochim17], then analyzed the point map to identify a range of possible cluster structure solutions, and the evaluation team balanced parsimony with interpretability to choose a cluster structure for further analyses. The participants’ ratings of the items within these clusters were averaged to determine the overall importance and feasibility of clusters, as well as the average rated importance and feasibility for each individual statement. Statements were also plotted on the dimensions of importance and feasibility to determine which statements, on average, were rated as the most important and the most feasible by the research community. Follow-up analyses used the ratings to compare perceptions of different subgroups of participants. Graphic representations were produced by the software with the intention to stimulate discussion of project priorities.
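The analysis pipeline described above can be sketched in miniature. The following pure-Python example uses a hypothetical five-item, three-participant dataset: it builds the co-sorting similarity matrix, applies average-linkage agglomerative clustering as a stand-in for the CS Global Max implementation (which also performs multidimensional scaling to produce the point map), and averages importance ratings per cluster. All data and function names are illustrative, not part of the study.

```python
from itertools import combinations

# Hypothetical toy data: each participant's sort is a list of groups (sets of item ids)
sorts = [
    [{0, 1}, {2, 3, 4}],
    [{0, 1, 2}, {3, 4}],
    [{0, 1}, {2, 4}, {3}],
]
n_items = 5

def cooccurrence(sorts, n):
    """Similarity[i][j] = fraction of participants who sorted items i and j together."""
    sim = [[0.0] * n for _ in range(n)]
    for groups in sorts:
        for g in groups:
            for i, j in combinations(sorted(g), 2):
                sim[i][j] += 1
                sim[j][i] += 1
    p = len(sorts)
    return [[c / p for c in row] for row in sim]

def cluster(sim, k):
    """Average-linkage agglomerative clustering on distance = 1 - similarity."""
    clusters = [{i} for i in range(len(sim))]
    while len(clusters) > k:
        best = None
        for a, b in combinations(range(len(clusters)), 2):
            # average pairwise distance between clusters a and b
            d = sum(1 - sim[i][j] for i in clusters[a] for j in clusters[b])
            d /= len(clusters[a]) * len(clusters[b])
            if best is None or d < best[0]:
                best = (d, a, b)
        _, a, b = best
        clusters[a] |= clusters[b]
        del clusters[b]
    return clusters

sim = cooccurrence(sorts, n_items)
themes = cluster(sim, k=2)

# Mean importance rating per cluster (ratings are hypothetical 1-5 scores)
ratings = [4.2, 4.5, 3.1, 3.8, 2.9]
means = [sum(ratings[i] for i in c) / len(c) for c in themes]
print(themes, means)
```

In the real study this step operated on 78 statements sorted by 57 participants, and the team inspected several candidate cluster solutions before settling on the nine-theme model.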
This study was approved by the Brown University and University of Rhode Island Institutional Review Boards.
Results
In response to our emailed focus prompt, we received 150 brainstormed statements from 75 respondents for ideas to enhance CTR in Rhode Island. Following the procedure described above, the 150 statements were then culled by our project team to yield 78 unique items. These ideas were then sorted and rated by the 57 self-selected participants who chose to attend a half-day retreat. Table 1 shows participant characteristics at both the brainstorming and sorting and rating stages of the project.
Figure 1 presents the cluster model chosen as best representing the point map of generated ideas. This model contains 9 themes, with titles generated as: internal financial support for research; grant writing support; administration; training; minority and special populations; large databases; institutional collaboration; community engagement; and connections between researchers. Table 2 provides the statements grouped by theme to clarify the content represented by theme titles.
Note: M imp. and M feas. represent the mean importance rating and mean feasibility rating, respectively.
A “pattern match” diagram (Fig. 2) illustrates our use of this graphic, showing differences between researchers’ and administrators’ ratings of the perceived importance of ideas within each theme. We made several similar comparisons between subgroups of the retreat participants, also useful for prompting follow-up consideration, as discussed below. Both groups ranked inter-institutional collaboration as the top priority, followed by building connections among researchers. Some differences did emerge: administrators generally rated the importance of these aids to CTR higher than researchers did. In particular, administrators ranked community engagement substantially higher than researchers did, while administration and grant writing support were seen as relatively more important by researchers than administrators. Nevertheless, cluster item rankings between researchers and administrators were strongly correlated (r = 0.74).
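Numerically, the pattern-match comparison reduces to correlating the two subgroups’ mean cluster ratings. A minimal sketch in pure Python, using hypothetical ratings for the nine themes (the actual subgroup means are not reproduced here):

```python
# Hypothetical mean importance ratings of the nine themes by two subgroups
researchers = [3.9, 3.7, 3.5, 3.4, 3.3, 3.6, 4.1, 3.2, 4.0]
administrators = [4.1, 3.5, 3.3, 3.6, 3.8, 3.7, 4.3, 3.9, 4.2]

def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length rating vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

r = pearson_r(researchers, administrators)
```

A high r, as in the reported 0.74, indicates that the two groups ordered the themes similarly even where their absolute ratings differed.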
Another useful graphic presentation of our data was “go zone” quadrant mapping (Fig. 3). This chart identifies items that were rated higher and lower according to the dimensions of importance and feasibility. Feasibility is depicted on the vertical axis and importance is depicted on the horizontal axis. The items within the upper right quadrant were those rated highest on both dimensions. Particularly important and feasible items are circled, and include providing seed grants to encourage new collaborations across basic science and clinical faculty, and connecting researchers with common interests. Grant writing support and mentoring were also highly rated. The lower right quadrant comprises items that were considered highly important but thought to be less feasible. These include reducing administrative barriers to collaboration and having a common Institutional Review Board (IRB) across institutions.
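The go-zone classification itself is conceptually simple: each item is assigned to a quadrant according to whether its mean importance and mean feasibility fall above or below the grand means across all items. A sketch with entirely hypothetical items and ratings:

```python
# Hypothetical items with (mean importance, mean feasibility) ratings on a 1-5 scale
items = {
    "seed grants for cross-institution collaborations": (4.6, 4.0),
    "connect researchers with common interests": (4.3, 4.2),
    "create a common IRB across institutions": (4.4, 2.1),
    "expand the program newsletter": (2.8, 4.5),
    "build a new research hospital": (3.0, 1.5),
}

def go_zones(items):
    """Assign each item to a quadrant relative to the grand mean on each dimension."""
    imp_mean = sum(i for i, _ in items.values()) / len(items)
    feas_mean = sum(f for _, f in items.values()) / len(items)
    zones = {}
    for name, (imp, feas) in items.items():
        if imp >= imp_mean and feas >= feas_mean:
            zones[name] = "go zone (important and feasible)"
        elif imp >= imp_mean:
            zones[name] = "important but less feasible"
        elif feas >= feas_mean:
            zones[name] = "feasible but less important"
        else:
            zones[name] = "lower priority"
    return zones

zones = go_zones(items)
```

Items landing in the upper-right “go zone” are the natural candidates for near-term action, while important-but-infeasible items (such as a common IRB in our results) flag longer-term structural work.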
These results were also analyzed according to subgroups of interest, and we provide salient highlights here (supplementary figures available upon request). We considered a comparison across the participating organizations to be of particular interest given the concern for inter-institutional collaboration. At our public university, support for grant writing and the availability of seed grants were deemed most important and feasible. We can contrast these findings with the results specific to our private university, whose participants rated items pertaining to collaboration and networking as most important and feasible. Participants from the hospital systems also identified seed grants for new collaborations as most important and feasible, and for this group, mentoring of junior faculty also appeared at the top. Faculty participants from academic institutions rated reduction in administrative barriers to collaboration higher in importance, while administrators and staff viewed the promotion of technical and analytic resources as most important. Those without prior research funding rated the need to improve pre-award support for grant applications as a most important item. All of these contrasts were presented for consideration by stakeholders.
A final means for organizing the findings with an aim to promote action is presented in Table 3, which lists the ten ideas that participants rated as most important to enhancing CTR in Rhode Island, regardless of feasibility ratings. The most important items were providing seed grants, creating a common IRB, and developing blanket collaboration agreements between institutions in the Rhode Island CTR network.
Note: CTR, clinical translational research; RI, Rhode Island.
Discussion
The purpose of Group Concept Mapping is to represent the diverse perspectives of multiple stakeholders through a participant-authored, engaging approach [Reference Trochim10]. For an evaluation with a participatory orientation, this method promised to go beyond providing information by stimulating discussion and informing action. Our results are specific to and discussed in the context of RI; however, it is important to note that the GCM process and the action steps we took following the study are all likely achievable in a range of national, state, local, and institutional environments. As translational research continues to strive to understand assets and strengths and to address health needs across a diverse spectrum of stakeholders, communities, and populations, engaging highly accessible participatory methods such as GCM will be critical.
The self-selection that followed the broad dissemination of our focus prompt is undeniable, but also speaks to the very nature of the task to increase investment and engagement of our stakeholders. The GCM activity included representatives from all of the participating organizations who comprised the demographic characteristics we thought most relevant to our purpose. The sorting exercise helped us to understand the functional and structural similarities of the generated ideas, and it provided a collective viewpoint of participants. Once clusters had been identified, ratings of importance and feasibility facilitated a better-informed discussion among stakeholders of means to accomplish the aims of our grant and potential action steps to follow. All of the 78 ideas were rated at least somewhat important (M > 2.85 out of 5.00) by our cross-section of stakeholders. Items that were rated most important were generally affirming of our project’s programmatic thrusts (e.g., more seed funding, ways to facilitate collaboration among individual researchers, better mentoring opportunities), and added a priority for breaking down barriers to inter-institutional collaboration and IRB agreements. Ideas supporting access to large datasets, statistical software, and support for research design and analyses were also perceived to be important, and those too are linked to major components of the project.
The particular circumstances of Rhode Island must be considered in the interpretation of results. These include a geographically compact base in which the participating institutions are independent. The two universities answer to different authorities, and the three health care organizations sometimes compete, and are only loosely affiliated with the universities. These circumstances likely contributed to the high degree of agreement that institutional collaboration is the most desired component of translational science enhancements in the state of Rhode Island. Though key differences in relative rankings were identified, there was agreement across institutions, researcher funding experience, and academic rank or role, that funding, networking and building collaborative connections across researchers and institutions were the most important ways to improve clinical and translational research in our state. However, differences were observed in terms of the relative ranking of the importance of community engagement by researchers and administrators. The different priority assigned to community engagement by the researchers and administrators is interesting, and we can only speculate on the viewpoints that might have contributed to this difference. The administrators may have understood that access to larger patient populations and enhanced connections to community groups should, in the longer term, strengthen institutional research capacity, while the researchers may have felt that they already had access to the patient populations they needed. Administrators may also be more attuned to NIH priorities for more effective connection of resources to the needs and concerns of diverse communities and the lack of such connection based on their reporting to NIH. The administrators’ perspectives seem prescient, given the emphasis that NIGMS has placed on community engagement in their current iteration of the CTR program. 
This finding may well extend to other CTR-supporting settings, and we encourage evaluators in other programs to examine the generalizability of this contrast in priorities as well as our more general picture of means for enhancing CTR.
The participatory nature of GCM lends itself to a high utility of results, and the evaluation team has also taken a series of steps to promote acting upon what has been learned. Graphic presentations made possible by the software (i.e., pattern match diagrams and go zone maps) have been featured, and we have also added a “top 10” chart ranking ideas by degree of importance alone, regardless of feasibility. After presenting these results to our operations committee, which represents the internal leadership across program components, we then discussed the findings with our internal advisory committee, which brings together leadership from across the state, calling for suggestions of the best ways to move forward, particularly on those top ten items. The internal advisory committee consists of influential representatives of the institutions involved with the program (e.g., college deans, research directors) who typically have more power to implement change. They generated a number of suggestions for action, both orally and in writing, recognizing the need for work within and between their home institutions to foster collaboration. Following those meetings, we disseminated the results widely to GCM participants and others, in an electronic newsletter and in a webinar presentation promoted through the Advance-CTR website (webinar available here: https://www.youtube.com/watch?v=nJ9yAj-u9NY).
Those who viewed that presentation (n = 311 as of December 2020) were also asked to participate in a follow-up survey calling for their own suggestions and feedback based on the results presented in the webinar. We have also incorporated the GCM-generated ideas into our biannual reporting to project component leadership, identifying particular ideas contributed by GCM participants that seemed most relevant for each component activity. We then assisted these leaders in addressing these statements in their planning and programming. In many cases, the ideas validated activities that had already been prioritized, while other ideas added new possibilities for extending component missions and reach, and for refining current programming.
Several examples are provided here to highlight the direction of changes and initiatives. The top-ranked idea, to provide seed grants to encourage new collaborations across basic science and clinical faculty, was already embedded in the project through several types of pilot award programs that prioritized and often mandated cross-institution investigators, as were “team science” workshops to facilitate interdisciplinary collaborations. However, building and maintaining those connections across institutions remains a challenge. Institutional leaders represented on the internal advisory committee have recognized that they play a key role in facilitating interdisciplinary and inter-institution collaboration and networking. State leadership reported a high degree of confidence that they could make progress on this front, and the evaluation team will plan how to record and measure such efforts as they are implemented.
The desire to have a common IRB between institutions was also rated as highly important. Participants perceived this item to have low feasibility, indicating that there is recognition among researchers and administrators that this is a complex issue, despite widespread agreement that a common IRB would expedite and enhance clinical translation of research. Challenges regarding collaboration agreements across the project’s member organizations, as well as administrative barriers, were also considered important to address. We engaged our internal advisory committee to help us identify the most problematic administrative aspects to inter-institutional collaboration. The need for enhanced pre-award support, including grant writing, caused us to ask how Advance-CTR services and educational offerings can best be integrated with existing institutional resources for research supports and development. To promote networking opportunities, we are creating a unified statewide directory of researcher expertise and projects and have also started to track inter-institutional collaboration.
As illustrated by the contrasts between administrators and researchers in the pattern-matching rankings (Fig. 2), both the overlap and the differences in priorities aid in our understanding of which priorities should be considered for action. Administrators, for example, might recognize a need to clarify the importance of community engagement, and devise more accessible mechanisms for helping researchers to achieve this, while researchers may begin to perceive community engagement as more important if they receive explicit communication that federal funding agencies are prioritizing and even mandating these efforts. Effective mentoring was also rated as highly important to increase the quality and quantity of CTR in RI, particularly in the provision of experienced mentors with a track record of funding. Contrasting these ratings across institutions helped to highlight where mentoring needs were most acute. This led to several follow-up questions: what are the expectations of institutions, principal investigators (PIs), and junior investigators regarding mentorship? How can Advance-CTR help to facilitate better mentoring relationships that help lead to a more sustainable research infrastructure in the state? How feasible are opportunities for cross-institution mentoring? Addressing these questions aided our evaluation team in devising key indicators of successful mentorship, which is an important aspect of the CTR infrastructure.
Conclusions and Lessons Learned
Our use of the GCM method benefited from the geographic proximity of the participants. Travel to the retreat to participate in the sorting and rating tasks in person was relatively easy, and personal relationships were leveraged to yield ample participation in the idea generation task. Key decisions in our approach involved devising a suitable focus prompt, determining the duration of the idea generation period, and developing an itinerary for the retreat that included additional participatory activities to complement the individualized sorting and rating tasks, for which we allotted 90 minutes. In retrospect, providing additional weeks for idea generation might have yielded a greater number of unique ideas, and an augmented recruitment effort for the retreat might have increased the level of participation and organizational representation. Furthermore, knowing now that the ideas generated were all rated by our community as at least somewhat important, we might have opted for a more specific rating scale to better discriminate among degrees of “importance.” Yet overall, we considered the GCM activity to be highly productive, yielding important insights that continue to contribute substantially to the direction of the Advance-CTR program. It is not uncommon to be in meetings with project leadership who cite the GCM results as support for or against priorities, ideas, and programs, and that is extremely powerful.
Applying GCM to our state’s CTR network has enabled our project leadership to better understand stakeholder-perceived priorities, and to act on ideas and aims most relevant to researchers in the state. The affiliated organizations have been stimulated to consider their own contributions to the problems and the solutions, recognizing the challenges of policy change and budgetary stumbling blocks. Aspects of our program’s logic model have been reinforced and in some cases elaborated (e.g., the outcomes associated with inter-institutional collaboration). We found the GCM methodology to be highly participatory and effective at engaging our research community, and gained valuable insights that affirmed current services, programs, and initiatives; generated new ideas; and highlighted research barriers and needs. Furthermore, we have been able to address challenges unique to specific institutions in the state.
We have presented these results to project leadership, deans and research administrators at our state’s academic institutions, leadership within the state’s hospital systems, the research community in Rhode Island, the national network of CTR and CTSA evaluators, program officers at NIH, and a national conference of evaluators. It is important to note that while some of these stakeholder groups generally have a very good understanding of research methods and statistical processes, others have little or no training in these areas. The nature of the results presented and the accessibility of the visuals generated foster comprehension among very diverse audiences. It is hard to act on results one cannot access or understand, and GCM reduces that research- and statistics-based language barrier to action. Within our own program, we are pleased with the degree of enthusiasm expressed for the use of this method, and we continually seek feedback on ways we may be able to address ideas generated and deemed important by our Rhode Island health research community. We strongly encourage evaluators from other translational research programs to apply the GCM approach to determine how their programs can better address the needs of their constituents. Broader application of this approach will also help to determine whether our specific findings are generalizable or unique, and will provide important data to inform national leadership and promote greater successes in research that improve population health.
Acknowledgements
We gratefully acknowledge the time, thought, and effort of those who contributed the ideas presented here, and those who have responded to those ideas with new policies and practices to enhance CTR in Rhode Island.
The project described was supported by Institutional Development Award Number U54GM115677 from the National Institute of General Medical Sciences of the National Institutes of Health, which funds Advance Clinical and Translational Research (Advance-CTR). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.
Disclosures
The authors have no conflict of interests to report.