
What research participants say about their research experiences in Empowering the Participant Voice: Outcomes and actionable data

Published online by Cambridge University Press:  10 January 2025

Rhonda G. Kost*
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
Joseph Andrews
Affiliation:
Clinical and Translational Science Institute, Wake Forest School of Medicine, Winston-Salem, NC, USA
Ranee Chatterjee
Affiliation:
Department of Medicine; Duke University School of Medicine; Duke Clinical Translational Science Institute; Durham, NC, USA
Alex C. Cheng
Affiliation:
Department of Biomedical Informatics, Vanderbilt University, Nashville, TN, USA
Lisa Connally
Affiliation:
Michigan Institute for Clinical & Health Research (MICHR), University of Michigan, MI, USA
Ann Dozier
Affiliation:
Department of Public Health Sciences, School of Medicine and Dentistry, University of Rochester, Rochester, NY, USA
Carrie Dykes
Affiliation:
Clinical and Translational Science Institute, University of Rochester, Rochester, NY, USA
Daniel Ford
Affiliation:
Johns Hopkins University Institute for Clinical and Translational Research, Baltimore, MD, USA
Nancy S. Green
Affiliation:
Dept. of Pediatrics, Columbia University Irving Medical Center, New York, NY, USA
Caroline Jiang
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
Sana Khoury-Shakour
Affiliation:
Human Research Protection Program, University of Michigan, MI, USA Office of Research Compliance Administration, University of California, Santa Cruz, CA, USA
Sierra Lindo
Affiliation:
Duke Clinical Translational Science Institute, Durham, NC, USA
Karen Marder
Affiliation:
Dept.of Neurology, Columbia University Irving Medical Center, New York, NY, USA
Liz Martinez
Affiliation:
Johns Hopkins University Institute for Clinical and Translational Research, Baltimore, MD, USA
Adam Qureshi
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
Jamie Roberts
Affiliation:
Duke Cancer Institute, Durham, NC, USA
Natalie Schlesinger
Affiliation:
The Rockefeller University Center for Clinical and Translational Science, New York, NY, USA
*
Corresponding author: R. G. Kost; Email: [email protected]

Abstract

Background:

Research participants' feedback about their participation experiences offers critical insights for improving programs. A shared Empowering the Participant Voice (EPV) infrastructure enabled a multiorganization collaborative to collect, analyze, and act on participants' feedback using validated participant-centered measures.

Methods:

A consortium of academic research organizations with Clinical and Translational Science Awards (CTSA) programs administered the Research Participant Perception Survey (RPPS) to active or recent research participants. Local response data were also aggregated into a Consortium database, facilitating analysis of feedback overall and for subgroups.

Results:

From February 2022 to June 2024, participating organizations sent surveys to 28,096 participants and received 5045 responses (18%). Respondents were 60% female, 80% White, 13% Black, 2% Asian, and 6% Latino/x. Most respondents (85–95%) felt respected and listened to by study staff; 68% gave their overall experience the top rating. Only 60% felt fully prepared by the consent process. Consent, feeling valued, language assistance, age, study demands, and other factors were significantly associated with overall experience ratings. Sixty-three percent of participants said that receiving a summary of the study results would be very important to joining a future study. Intersite scores differed significantly for some measures; initiatives piloted in response to local findings raised experience scores.

Conclusion:

RPPS results from 5045 participants from seven CTSAs provide a valuable evidence base for evaluating participants’ research experiences and using participant feedback to improve research programs. Analyses revealed opportunities for improving research practices. Sites piloting local change initiatives based on RPPS findings demonstrated measurable positive impact.

Type
Research Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-ShareAlike licence (https://creativecommons.org/licenses/by-nc-sa/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the same Creative Commons licence is used to distribute the re-used or adapted article and the original article is properly cited. The written permission of Cambridge University Press must be obtained prior to any commercial use.
Copyright
© The Author(s), 2025. Published by Cambridge University Press on behalf of Association for Clinical and Translational Science

Background

Research participants' experiences during study participation influence how they perceive research, whether they feel valued and respected, and whether they will enroll again or recommend participation to others [Reference Verheggen, Nieman, Reerink and Kok1–Reference Adler, Otado and Kwagyan4]. When queried, clinical trial participants say they would like to be asked for their feedback about research participation [Reference Kost, Lee and Yessis2,Reference Boyd, Sternke, Tite and Morgan5]. Academic medical centers and research organizations routinely seek feedback about patient care, employee satisfaction, and institutional reputation, yet neglect to systematically and routinely seek feedback from their research participants.

The lack of large-scale validated outcome data about participants' research experiences presents a barrier to translational science [6]. Persistent challenges include recruiting and retaining participants, addressing health disparities, and ensuring diverse and representative enrollment in clinical studies [Reference Smedley, Stith and Nelson7–9]. Federal agencies and others have long encouraged diverse enrollment in clinical trials, and recent changes to federal regulations have increased transparency and focus on representative enrollment [Reference Menikoff, Kaneshiro and Pritchard10,11]. However, minoritized populations, women, individuals living with health disparities and disabilities, and older adults continue to be underrepresented in clinical trials [Reference Hlávka12,Reference Flores, Frontera and Andrasik13]. Underrepresentation of specific groups arises from cultural and language barriers, potential mistrust or distrust of institutions or study staff, lack of awareness of research opportunities, and limited access to research; as a result, addressing these challenging issues requires layered and tailored participant-facing solutions [9,Reference Cunningham-Erves, Kusnoor and Villalta-Gil14–Reference Smirnoff, Wilets and Ragin20]. When members of these groups do participate in research, little information is gathered about their perspectives on specific experiences. Systematically evaluating which practices best facilitate and sustain equitable research participation is important for advancing research equity and reducing health disparities.

Informed consent is the keystone of ethical research [Reference Grant21–23]. Yet achieving effective informed consent for participants is challenging given its complex requirements and forms, cultural and linguistic barriers to communication, and varying provider and study staff skills [Reference Kadam24,25,Reference Cohn and Larson26,Reference Flory and Emanuel27]. An emerging consensus recommends a person-centered consent process, made available using multiple modalities, sensitive to individual context, and developed with community engagement [Reference Cunningham-Erves, Kusnoor and Villalta-Gil14,Reference Bona, Utecht and Kemp28–Reference Spinner and Araojo32]. To this end, plain language glossaries [Reference Baedorf Kassis, White and Bierer29,Reference Krol, Kim, Gao, Amiri-Kordestani, Beaver and Kluetz33] and multiple other tools have been developed to simplify and personalize the informed consent process [Reference Bona, Utecht and Kemp28,Reference Lawrence, Dunkel and McEver31,Reference Landi, Mimouni and Giannuzzi34]. However, there is little evidence demonstrating the broad and effective adoption of these tools or their generalizability and impact. A recent review of informed consent forms in the ClinicalTrials.gov registry found that most included language that exceeded the recommended reading level [Reference Gelinas, Morrell, Tse, Glazier, Zarin and Bierer35]. The Association for Accreditation of Human Research Protection Programs (AAHRPP) requires that organizations seek and respond to participant feedback [36], and the federal Office for Human Research Protections recently issued new training to make consent more participant-centered [37]. Although AAHRPP's Standards (1.4, 1.4B) concern informed consent procedures, there are currently no widely recognized outcome measures or benchmarks to evaluate or enforce whether procedures for informed consent are effective. A recent study of how IRBs measured the effectiveness of their practices found an overreliance on process measures and few outcome measures [Reference Fernandez Lynch and Taylor38]. An independent research ethics consortium recommended incorporating participant-centered measures into evaluating consent effectiveness [39]. In their 2023 report, the US Government Accountability Office recommended that the Department of Health and Human Services consider surveys of research participants as a potential measure of the effectiveness of human research protections [40]. Thus, a systematic participant-centered approach is needed to evaluate whether participant-directed practices are achieving their intended impact, that is, do they enhance trust and fairness, are they effective and respectful, and do they ensure that discoveries are developed to benefit all affected populations [Reference Stallings, Cunningham-Erves and Frazier19,Reference Smirnoff, Wilets and Ragin20].

To address the need for patient-centered outcomes measures of the clinical research enterprise, we developed the Research Participant Perception Survey (RPPS) using rigorous mixed methods and a highly participatory process to capture what participants identified as important aspects of positive or negative research experiences [Reference Kost, Lee, Yessis, Coller and Henderson41,Reference Yessis, Kost, Lee, Coller and Henderson42]. In a large-scale validation study designed to minimize participation bias, outcomes from 4960 survey responses at 15 NIH-funded research institutions revealed one-time benchmarks for research experiences and a range of scores across sites [Reference Kost, Lee and Yessis2]. A shorter RPPS version, designed to be easier to deploy, was validated, produced higher response rates, and reinforced prior findings [Reference Kost and de Rosa43,Reference Kelly-Pumarol, Henderson, Rushing, Andrews, Kost and Wagenknecht44]. Several organizations adopted the RPPS, routinely using results for internal quality improvement, but did not publish outcomes, and broader adoption was limited. In 2020–2024, through the Empowering the Participant Voice (EPV) initiative funded by the National Center for Advancing Translational Sciences (NCATS), we built, tested, and freely shared tools, infrastructure, and standards to streamline institutional adoption of the RPPS for eliciting participants' feedback. We also developed a dashboard for quick scoring, filtering outcome data by participant and study characteristics, and supporting data sharing [Reference Kost, Cheng and Andrews45,Reference Cheng, Moragas and Ellis46]. We implemented the RPPS and related infrastructure in locally customized use cases at five CTSA institutions to collect participant experience feedback and aggregate results to a multisite dashboard as previously reported [Reference Kost, Cheng and Andrews45]. Here, we report the survey results, outcomes, and benchmarks from the RPPS responses of 5045 participants across seven CTSAs, and preliminary impacts from responsive initiatives. These data build on the existing participant experience evidence base to inform local and consortium action to improve research.

Methods

The Rockefeller University Institutional Review Board ruled the analysis of deidentified survey data Exempt from review. Sites conducted local surveys under the approval or exemption of their local IRB.

The RPPS survey teams included a project PI (research faculty, human protections, or research leadership), a project manager with recruitment, outreach or evaluation expertise, and informatics specialists.

Research experience survey

The validated RPPS-Short EPV survey questions were provided to sites in English and Spanish through a downloadable .xml file accessed through the EPV website and configured for implementation in the REDCap electronic data capture system [Reference Kost, Cheng and Andrews45,47,Reference Obeid, McGraw and Minor48]. Actionable survey questions are presented with a Likert scale of response options including the optimal (Topbox) option (e.g., always) and non-Topbox options (e.g., usually, sometimes, never), and, if appropriate, a "Not Applicable" option. RPPS asks about the consent process, interpersonal interactions with the study team related to respect, listening, feeling valued, and how well language, privacy, and other needs were met. The survey includes two overall questions, "Rate your overall research experience (0 (worst) to 10 (best))" and "Would you recommend joining research to friends and family," and collects information about what the study required of the participant and basic demographics. Space is provided for free-text comments for unstructured feedback. The RPPS-Short EPV survey is shown in Supplemental Appendix S1. The average time to complete the survey is 5 minutes. Sites engaged iteratively with institutional and community stakeholders throughout survey planning, fielding, analysis, and dissemination of findings. Feedback was incorporated into local and overall project design and throughout implementation [Reference Kost, Cheng and Andrews45].

Survey infrastructure

The collaborating sites implemented the RPPS-Short EPV survey and related REDCap-based [Reference Harris, Taylor, Thielke, Payne, Gonzalez and Conde49] informatics infrastructure as previously described [Reference Kost, Cheng and Andrews45,Reference Cheng, Moragas and Ellis46,Reference Obeid, McGraw and Minor48]. Briefly, the infrastructure includes: a survey data dictionary containing the questions, associated variables, and settings for survey fielding; translation files to enable fielding the survey in English or Spanish using the REDCap multilanguage management application; defined variables and a data collection framework for participant and study descriptors (e.g., contact information, demographics, study characteristics) and locally defined variables (e.g., investigator, department); defined sampling approaches (census, random, targeted); and specified timing for selecting participants eligible to receive the RPPS survey during or after study participation.

Metadata capture the survey timing, sampling, study characteristics, mode of survey distribution, and other variables tracked at the individual survey level. These data are used to evaluate data quality, comparability, the impact of variation on outcomes, and to minimize bias.
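To make the descriptor and metadata framework concrete, the sketch below shows, under stated assumptions, what a single survey-fielding record might look like. The field names are hypothetical stand-ins chosen for illustration; they are not the actual EPV data dictionary variables.

```python
# Illustrative only: hypothetical stand-ins for the participant descriptors, study
# descriptors, and survey-level metadata described above; the actual EPV data
# dictionary defines its own variable names.
from dataclasses import dataclass, asdict

@dataclass
class SurveyFieldingRecord:
    record_id: str
    email: str               # contact information used to send the survey link
    age_group: str           # participant descriptor, e.g., "65-74"
    race: str
    ethnicity: str
    study_id: str            # study descriptor
    study_demands: str       # e.g., "minimal", "moderate", "intense"
    department: str          # locally defined variable (site-specific)
    sampling_approach: str   # "census", "random", or "targeted"
    survey_timing: str       # e.g., "post-consent", "annual", "end-of-study"
    distribution_mode: str   # "email", "sms", "portal", or "in-person tablet"

record = SurveyFieldingRecord(
    record_id="12345", email="participant@example.org", age_group="65-74",
    race="White", ethnicity="Not Latino/x", study_id="STU-001",
    study_demands="moderate", department="Cardiology",
    sampling_approach="census", survey_timing="end-of-study",
    distribution_mode="email",
)
print(asdict(record))  # one row ready for import into the local survey project
```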

Selection of participants

Institutions surveyed participants at point(s) during their study participation as defined in the project. Informatics professionals located the data elements within their institutional databases (electronic medical record (EMR) or clinical trial management system (CTMS)) that mapped to the relevant participant and study descriptors, including those variables needed to determine eligibility such as on-study status, date of study registration, and contact information needed to send the survey. They identified eligible participants, extracted and transformed the relevant institutional data and imported them into their EPV REDCap project [Reference Kost, Cheng and Andrews45].
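A minimal sketch of this extract-transform-import step is shown below, assuming the standard REDCap API "import records" call. The URL, token, eligibility rule, and field names are placeholders; the sites' actual ETL pipelines differ.

```python
# A minimal sketch (not the sites' production ETL): pull candidate participants from an
# institutional EMR/CTMS export, keep only records meeting a hypothetical eligibility
# window, and import them into a REDCap project via the standard REDCap API.
import json
from datetime import date, timedelta
import requests

REDCAP_URL = "https://redcap.example.edu/api/"    # placeholder
API_TOKEN = "REPLACE_WITH_PROJECT_TOKEN"          # placeholder

def eligible(row: dict, window_days: int = 60) -> bool:
    """Keep participants who are on-study and registered within the survey window."""
    registered = date.fromisoformat(row["registration_date"])
    return row["on_study"] == "yes" and date.today() - registered <= timedelta(days=window_days)

def import_records(rows: list[dict]) -> int:
    """Import transformed rows into the local survey REDCap project; returns count imported."""
    payload = {
        "token": API_TOKEN,
        "content": "record",
        "format": "json",
        "type": "flat",
        "data": json.dumps(rows),
    }
    response = requests.post(REDCAP_URL, data=payload, timeout=30)
    response.raise_for_status()
    return response.json().get("count", 0)

# In practice these rows come from an EMR/CTMS extract; shown here as a stub.
ctms_rows = [{"record_id": "1", "on_study": "yes", "registration_date": "2024-05-01",
              "email": "participant@example.org"}]
print(import_records([r for r in ctms_rows if eligible(r)]))
```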

Mode of distribution

Survey invitations with personalized links were sent using REDCap via email or SMS/text (or both), or personalized survey links were exported from REDCap and implemented through the patient portal [Reference Kost, Cheng and Andrews45]. One site piloted sending surveys in person using hand-held electronic tablets after a study visit.

Scope of distribution (sampling)

Five sites administered surveys at an enterprise level across the organization, aiming to reach all participants (census) or a random sample thereof (random). The scope of enterprise fielding depended on the degree to which research participant data were centralized. Three sites had centralized listings of research participants and were able to include participants from all studies across the organization. Two sites had access to participants in the subset of studies managed in the CTMS/EMR systems, which excluded some social/behavioral studies. The sixth site fielded surveys on a study-by-study basis, aggregating data locally, and the seventh site surveyed only participants in studies in three designated departments or units (targeted sampling).

Data collection

Survey response data flow automatically and in real time to a local custom external module dashboard with built-in analytics that provide Topbox scores, response rates, filters for univariate analysis by participant and study descriptors, and other analyses in custom reports. Deidentified local survey data sync nightly to a central Consortium database and dashboard through an application programming interface (API) governed by a data use agreement (DUA) between each site and the Data Coordinating Center (DCC) [Reference Kost, Cheng and Andrews45,Reference Kost50]. Details of the fielding standards, instructions for operational and technical implementation of the software, formulas, and survey administration are available in a detailed EPV Implementation Guide [Reference Kost50].
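The sketch below illustrates, in simplified form, the nightly deidentified sync described above using generic REDCap API export and import calls. In practice the EPV external module and the DUA-governed API handle this automatically; the URLs, tokens, and identifier field list here are placeholders.

```python
# A minimal sketch of a local-to-consortium sync, assuming standard REDCap API calls;
# not the EPV external module's actual implementation.
import json
import requests

LOCAL_URL, LOCAL_TOKEN = "https://redcap.site.edu/api/", "LOCAL_TOKEN"        # placeholders
CENTRAL_URL, CENTRAL_TOKEN = "https://redcap.dcc.edu/api/", "CENTRAL_TOKEN"   # placeholders
IDENTIFIER_FIELDS = {"email", "phone", "name"}  # hypothetical fields stripped before sharing

def export_local_responses() -> list[dict]:
    """Export survey response records from the local EPV project."""
    resp = requests.post(LOCAL_URL, data={
        "token": LOCAL_TOKEN, "content": "record", "format": "json", "type": "flat",
    }, timeout=60)
    resp.raise_for_status()
    return resp.json()

def deidentify(rows: list[dict]) -> list[dict]:
    """Drop direct identifiers before sending data to the consortium database."""
    return [{k: v for k, v in row.items() if k not in IDENTIFIER_FIELDS} for row in rows]

def push_to_consortium(rows: list[dict]) -> int:
    """Import deidentified rows into the central Consortium project."""
    resp = requests.post(CENTRAL_URL, data={
        "token": CENTRAL_TOKEN, "content": "record", "format": "json",
        "type": "flat", "data": json.dumps(rows),
    }, timeout=60)
    resp.raise_for_status()
    return resp.json().get("count", 0)

if __name__ == "__main__":
    print(push_to_consortium(deidentify(export_local_responses())))
```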

Statistical methods

Questions with an optimal response, called “actionable” for the RPPS, are scored for Topbox responses, reported as a percent of respondents giving the optimal answer (e.g., “always” felt respected, or “never” felt pressured). Overall rating of the research experience (0 – worst to 10 – best) is scored using the Top Two Box scores (9 and 10). For questions offering a “not applicable” response option, descriptive frequencies for all responses were reported, and “not applicable” responses were removed from the denominator for the calculation of Topbox scores. For each question, we report the mean Topbox score across all sites (aggregate) and the range of site scores. Responses to qualitative questions without an optimal response are reported descriptively [Reference Yessis, Kost, Lee, Coller and Henderson42].
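A minimal sketch of these scoring rules is shown below; it is illustrative only and is not the dashboard's exact implementation.

```python
# Topbox scoring as described above: "not applicable" responses are removed from the
# denominator, and the score is the percent of remaining respondents choosing the
# optimal answer. The overall 0-10 rating uses a Top Two Box rule (9 or 10).
def topbox_score(responses: list[str], optimal: str = "always",
                 not_applicable: str = "not applicable") -> float | None:
    """Percent of applicable responses giving the optimal (Topbox) answer."""
    applicable = [r for r in responses if r != not_applicable]
    if not applicable:
        return None  # no applicable responses; the score is undefined
    return 100.0 * sum(r == optimal for r in applicable) / len(applicable)

def top_two_box_overall(ratings: list[int]) -> float:
    """Top Two Box score for the 0-10 overall rating: percent choosing 9 or 10."""
    return 100.0 * sum(r >= 9 for r in ratings) / len(ratings)

print(topbox_score(["always", "usually", "always", "not applicable"]))  # ~66.7
print(top_two_box_overall([10, 9, 8, 7, 10]))                           # 60.0
```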

Survey response and completion rates were calculated using criteria applied to 14 core experience questions including Overall rating, Would recommend, and 12 other actionable items, not including demographic and study characteristic questions. Surveys that included responses to 80–100% (12–14) of core questions were classified as “complete,” 50–79% (7–11) as “partial,” 1–49% (1–6) as “break-off,” and those answering >0 questions (1–14) as “any” response [51]. Group-specific response rates were calculated by comparing the number of survey respondents with a given characteristic to the number of all survey recipients with that characteristic, including nonresponders. Several institutions did not provide demographic variables for some survey recipients; those records were analyzed as “variable not reported” for group-specific response rates.
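The completion categories can be expressed as a simple threshold rule over the 14 core questions, as in the illustrative sketch below.

```python
# Completion categories following the counts in the text: 12-14 answered = complete,
# 7-11 = partial, 1-6 = break-off, 0 = nonresponse.
def completion_category(n_core_answered: int) -> str:
    if n_core_answered >= 12:
        return "complete"
    if n_core_answered >= 7:
        return "partial"
    if n_core_answered >= 1:
        return "break-off"
    return "nonresponse"

print([completion_category(n) for n in (14, 9, 3, 0)])
# ['complete', 'partial', 'break-off', 'nonresponse']
```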

Measures of significance

To compare Topbox scores across sites, or between cohorts defined by time or by participant and study characteristics, a chi-squared test of independence was used. In these analyses, the Topbox score proportions were compared across levels of the categorical variable of interest (i.e., site). Statistical significance was determined by comparing the computed chi-squared statistic to the known distribution on (r−1)×(c−1) degrees of freedom, assuming a Type I error of 5%. When cell counts were low (e.g., due to removal of many "not applicable" responses), Fisher's exact test was used.
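For readers who want to reproduce this style of comparison outside SAS, the sketch below shows an analogous intersite test using SciPy; the counts are invented for illustration and are not study data.

```python
# Illustrative analog of the intersite chi-squared test and the Fisher's exact fallback;
# the paper's analyses were run in SAS, and these counts are simulated.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# rows = sites, columns = (Topbox, non-Topbox) counts for one question
table = np.array([
    [320, 180],   # Site 1
    [250, 250],   # Site 2
    [410,  90],   # Site 3
])
chi2, p, dof, _ = chi2_contingency(table)   # dof = (r-1)*(c-1) = 2
print(f"chi2={chi2:.1f}, dof={dof}, p={p:.3g}")

# With sparse cells (e.g., after removing many "not applicable" responses),
# fall back to Fisher's exact test on a 2x2 table.
_, p_exact = fisher_exact([[8, 2], [1, 9]])
print(f"Fisher's exact p={p_exact:.3g}")
```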

A binary logistic regression was used to measure the association between research participants' overall experience response and the responses to other individual questions in the RPPS. Overall research experience was dichotomized as Topbox response (selection of 9 or 10) and non-Topbox response (selection of ≤ 8), while individual questions as predictors retained their original ordinal scaling. For questions that contained a "not applicable" response option, those responses were excluded in order to preserve the comparisons between the actionable response options. Statistical analyses were run in SAS v9.04 using PROC LOGISTIC, assuming a Type I error of 5%.
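The sketch below shows a Python analog of this per-question model (the analyses reported here were run in SAS PROC LOGISTIC); the data are simulated and the variable names are illustrative.

```python
# Illustrative analog only: dichotomized overall rating (9-10 vs. <=8) modeled against
# one ordinal predictor question, using statsmodels; simulated data, not study data.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 500
felt_valued = rng.integers(1, 5, size=n)               # 1=never ... 4=always
p_top = 1 / (1 + np.exp(-(-3.0 + 1.0 * felt_valued)))  # simulated association
df = pd.DataFrame({
    "overall_topbox": rng.binomial(1, p_top),           # 1 if overall rating was 9 or 10
    "felt_valued": felt_valued,
})

model = smf.logit("overall_topbox ~ felt_valued", data=df).fit(disp=False)
print(model.summary().tables[1])                         # odds ratio = exp(coefficient)
```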

Two mixed-effects logistic regression models were used to examine research participants' overall experience and whether participants felt fully prepared by the information and discussions prior to joining the study (informed consent discussions). Overall experience and informed consent discussions were modeled separately. For both models, the variables included were a mix of demographic variables (age, gender, education, race, ethnicity) and study characteristics (demands of the study). These variables were chosen based on univariate prescreening using the consortium RPPS dashboard. Overall research experience was dichotomized as Topbox response (selection of 9 or 10) and non-Topbox response (selection of ≤ 8). Informed consent discussions were dichotomized similarly as Topbox response (selection of 4, "always") and non-Topbox response (selection of ≤ 3: usually, sometimes, never). Site was included as a random effect for both models. Statistical analyses were run in SAS v9.04 using PROC GLIMMIX, assuming a Type I error of 5%.
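As a rough analog of the PROC GLIMMIX models, the sketch below fits a binomial mixed model with a random site intercept using statsmodels' Bayesian approximation. It is an approximation for illustration only, with simulated data and hypothetical variable names, not the analysis reported here.

```python
# Rough Python analog of a mixed-effects logistic model with site as a random effect;
# statsmodels provides a variational Bayes approximation for binomial GLMMs.
# Simulated data, not study data.
import numpy as np
import pandas as pd
from statsmodels.genmod.bayes_mixed_glm import BinomialBayesMixedGLM

rng = np.random.default_rng(1)
n = 1200
df = pd.DataFrame({
    "site": rng.integers(0, 7, size=n).astype(str),
    "age_group": rng.choice(["18-34", "35-64", "65+"], size=n),
    "study_demands": rng.choice(["minimal", "moderate", "intense"], size=n),
})
site_effect = df["site"].astype(int) * 0.1
logit = (-0.5 + site_effect
         - 0.6 * (df["age_group"] == "18-34")
         - 0.4 * (df["study_demands"] == "intense"))
df["overall_topbox"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

model = BinomialBayesMixedGLM.from_formula(
    "overall_topbox ~ C(age_group) + C(study_demands)",  # fixed effects
    {"site": "0 + C(site)"},                              # random intercept per site
    data=df,
)
print(model.fit_vb().summary())
```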

Results

From February 2, 2022 to June 15, 2024, seven CTSA sites sent the RPPS survey to 28,096 research participants and received 5045 responses.

Timing and sampling

Surveys were deployed at several time points at sites. Overall, surveys were sent within 2 months of signing consent (17.6%), at year's end for ongoing studies (8.4%), at the end of study participation (51.9%), or with unspecified timing (28%), and were returned in approximately the same proportions. Similarly, the mode of sampling (census-all, random sample-all, or targeted) did not significantly affect response rates.

Response and representativeness

The overall survey response rate was 18%, ranging from 12% to 53% across sites. The highest response rates came from institutions fielding targeted surveys on a study-by-study level, or in-person at point-of-care. The lowest response rate came from the site utilizing the patient portal to deliver invitations. Group-specific response rates varied by age (ranging from 7% for ages 18–34 to 24% for ages 65–74), by ethnicity (11% for Latino/x, to 41% when no ethnicity was reported), by race (9% for American Indian to 18% for White), and by gender (6% for nonbinary individuals, 16% for women, 20% for men). Of surveys started (at least one question answered), 97% were completed.

The characteristics of survey respondents, survey recipients (including nonresponders), and the US population are shown in Table 1. Compared to all survey recipients, Black individuals and participants ages 18–54 were underrepresented among survey respondents. Latino/x participants and individuals over age 65 were overrepresented compared to all survey recipients, though they are underrepresented in research overall compared to the census. Upon review of the demographics of the first ~2500 responses (through May 2023) [Reference Kost, Cheng and Andrews45] and in consultation with stakeholders, sites worked to increase survey awareness and expand the reach of the survey. In approximately equal cohorts from the first year compared to the subsequent year of survey fielding, the representativeness of Black participants increased from 11% to 15.6% and that of Latino/x participants increased from 3.6% to 9.8%. To a lesser extent, representativeness increased for the youngest participants, Asians, individuals with less educational attainment, and disease-affected individuals (as opposed to healthy volunteers) (Table 1). Individuals with 4 or more years of college education (61%) were overrepresented among survey respondents compared to the general population (38%) [52]. This reflects a disparity in research participation that has been well described by others [Reference Baquet, Commiskey, Daniel Mullins and Mishra53,Reference Scanlon, Wofford, Fair and Philippi54].

Table 1. Characteristics of research participants who returned the Research Participant Perception Survey (RPPS) by year compared to all recipients of the RPPS survey and the US 2020 Census

M = million; * February 2022–May 2023; ** May 2023–June 2024; *** includes non-responders; † adjusted to reflect % of all adults (age 18 or older) and exclude minors.

Respondents were drawn from studies with a range of characteristics: two-thirds required a diagnosis or disease to enroll and one-third involved a drug, device, or lifestyle intervention (Table 1). Sixty percent of responses (n = 3190) included the institutional variable, “cancer center study (yes or no),” of which 49% were studies conducted in a cancer center. Consent for 20% of the respondents was conducted remotely, 47% in person, and 28% through a hybrid approach.

Research participation experiences

Topbox scores for the research experience questions with an optimal response (e.g., always), so-called “actionable” questions, are shown as mean scores for aggregated data, with site range, in Table 2. Most respondents (>90%) replied that they “always” felt treated with courtesy and respect, free from pressure to join a study, culturally respected, and had sufficient privacy. Slightly fewer participants (80–89%) gave the Topbox answer, “always,” for feeling listened to and free from pressure to stay in the study (if they said they had considered withdrawing). About three-quarters of respondents felt “always” valued as a partner in research, “always” knew how to reach the study team, and received language assistance that “completely” met their needs (if they needed it). Only two-thirds (64–65%) of respondents reported feeling “completely” prepared for the research experience by the consent form or the consent discussions or could “always” reach the team when they needed to. About two-thirds (68%) gave the overall experience the highest rating (9 or 10) or said they would “definitely” recommend research participation to others (61%).

Table 2. Multisite Topbox scores for RPPS questions in aggregate with range across sites, and chi-squared / Fisher's exact test (February 2022 – June 2024)

* Significant at 0.05 level;

** Significant at 0.01 level; 1 Fisher's exact test used; 2 One site did not have data for this question; chi-square was performed with df = 4.

Research experience scores varied across sites, sometimes by 20 percentage points or more. For 10 of the 14 actionable questions, intersite differences were statistically significant (p < .001). The lowest site scores were observed in questions that also showed the broadest intersite range, specifically for feeling completely prepared by the consent discussion for the research experience (54–74%) and always being able to reach the research team when a participant needed to (55–84%) (Table 2). One site had consistently higher scores than other sites across most of the questions. When the highest scoring site was removed from the analysis, all but two of the significant intersite findings among the remaining six sites retained their significance; the intersite differences in scores for "would recommend research to friends and family" and for "feeling like a valued partner in the research process" were no longer significant.

In a crosstab analysis, we compared the ratings of each of the actionable research experience questions to the overall experience rating of 9 or 10 (Topbox) to evaluate how top-rated experiences in other aspects of the research experience correlated with a "best" overall experience (Table 3). For all the questions, the top-rated experiences were significantly, but not exclusively, associated with conferring a top overall rating. When ratings for aspects of the research experience were lower, the overall experience rating fell dramatically in parallel.

Table 3. Respondents' overall ratings of their research experiences compared to their responses to questions about their research experiences (February 2022 – June 2024) (n = 5045)

* P-values were obtained via logistic regression models with the outcome being overall experience vs. each actionable question. Questions with "not applicable" response options did not include those who responded to the "not applicable" option. ** For questions that include the response option "not applicable," responses that are not applicable are not included in the denominator for scoring the actionable response options.

We sought to understand the characteristics of participants most closely associated with how they answered two questions: the overall experience rating and feeling like a valued partner in research. The results of the two mixed logistic regression analyses are shown in Table 4. For the overall experience rating, age, gender, educational attainment, ethnicity, and the level of study demand were significantly associated with conferring a Topbox score. Participants who were 18–34 years old, of nonbinary/other gender, had higher educational attainment, or were enrolled in a study with moderate or intense study demands gave significantly lower Topbox ratings for the overall research experience. Latino/x ethnicity was significantly associated with conferring a very positive overall rating. For the question about "feeling like a valued partner in research," age, gender, and study demands were significantly associated with the experience rating. Participants who were 18–34 years old, of nonbinary/other gender, or in studies with moderate or intense demands were significantly less likely to select the Topbox rating for whether they always felt valued.

Table 4. Mixed-effects logistic regression models for Topbox scores in overall rating and feeling fully prepared by the informed consent discussions

* Site is included as a random effect.

In a univariate analysis using the filter and visual analytics display of the dashboard, it appeared that individuals who had completed education at the level of a high school diploma or GED or less returned significantly lower scores (>10 percentage points) for their consent experiences. However, when controlled for the demands of the study, this finding was no longer significant, highlighting the need to examine multiple variables before acting on preliminary findings.

Timing and outcomes

Topbox scores differed significantly (p < .05) for some questions depending on survey timing. Feeling prepared by the consent discussion, receiving language assistance, and successfully reaching the study team when needed scored lower in surveys after a year of participation (long studies) compared to surveys collected postconsent or at end-of-study.

Qualitative responses

Six of the seven sites included the standard survey question asking participants to rate a list of factors for their importance when considering joining a future study, resulting in data from 90% of all respondents. The factors most frequently rated as "Very Important" were the "Return of a summary of the overall results of the study" (62.5%), followed by the "Return of personal test results" (45.2%), "A more flexible visit schedule" (28.4%), "Accessible parking" (27.6%), and "Payment/More payment" (24.7%). The "Return of a summary of the overall study results" was the most highly rated choice for participants at five of the six sites fielding this question, and second highest at the remaining site, and was among the top two choices when results were filtered by participant or study characteristics.

Open text responses

To preserve site confidentiality, open text responses were not aggregated centrally, though sites shared anecdotes with the steering committee and indicated that free text provided valuable feedback data. Typical feedback included praise for specific staff, identification of specific actionable issues for sites tracking study-specific response data, and feedback about issues useful to drive broader institutional change.

Language

Few respondents (83, 1.6%) completed the survey in Spanish, of whom 97.5% identified as Latino/x, 34.4% identified as Black or African American, 70.5% identified as White, 1.6% as Asian, 4.9% as Native American, and 3.3% as Native Hawaiian or Pacific Islander; 60% were age 65 or older; 75% identified their gender as a "woman," 7.5% as a "man," 2.5% as nonbinary, and 15.2% said none of these terms described them or preferred not to say. In the Spanish-language cohort, 37.8% said they "always" received adequate language assistance, 41% said they had no language issues, and 4.6% said their language needs were "never" met during participation; 31% had not graduated from high school, 23% had graduated high school, 23% had completed some college, and 22% had completed college or graduate education. In contrast, of participants who completed the survey in English, only 1% said their language assistance needs were "never" met and 63% had graduated from college.

Acting on findings

Sites evaluated local findings and piloted interventions or innovations to test whether they could improve specific experience scores. Leadership called this impact return on investment (ROI) (Table 5). Acting on findings, sites were able to enhance accrual and satisfaction with communication (Site A), leverage actionable participant-centered feedback to accelerate an institutional decision and drive the design of clinical translational science (Site B), and use incentives to improve response rates (Site C). Upon reviewing their findings, the Cancer Center leadership at one site (Site C) requested a new variable to compare results among cancer center studies across sites, which the EPV steering committee was able to implement for existing and future data. As of September 2024, the variable describing "cancer center study (yes/no)" was available for >80% of responses in the aggregate dataset.

Table 5. Local research experience findings, actions, and impacts

Site D has a history with the survey that predates the EPV project, affording a longitudinal view of impact. In response to the finding that only 65% of participants reported "always" feeling like a valued partner in research, the institution initiated and sustained a campaign to communicate to participants that they are valued partners in the research enterprise, achieving significantly and sustainably increased scores for always feeling valued (Figure 1). Within the same time period, a study team undertook an initiative to improve the consent process for a study with a complex intervention. They engaged prior study participants to help develop a video explaining the study intervention. After implementing the video, scores rose for feeling prepared by consent, as scores for feeling valued and other experience ratings continued to rise.

Figure 1. Topbox scores for three Research Participant Perception Survey (RPPS) experience questions from 2013 to 2024 at Site D, where the RPPS has been fielded for a decade. In 2013, the site began an initiative to communicate directly to research volunteers that they were valued by researchers and the institution as partners in the research process (blue arrow). Initially communicated through brochures, pins, and banners, over time the messaging was also incorporated into institutional values through training, teaching, and policy. In 2017–2018, a research team with many RPPS respondents enlisted participants to help develop a new informed consent video and began using it in a Phase I–II study (orange arrow). In 2020–2022, the COVID pandemic disrupted many clinical operations (green arrow), including in-person consent, with full recovery of in-person activities by 2023.

Site E noticed a trend of lower scores regarding receipt of language assistance that tracked with higher participant age. In free text, several participants mentioned using hearing aids and having difficulty understanding telephone assessments or consent conversations conducted with masks on. The team consulted with their Center for Healthy Aging to design and test innovations to make written and verbal research communication more accessible for participants with hearing or vision impairments. These and other results spearheaded the creation of a new permanent Committee for Equity in Research, consisting of stakeholders who participate in survey analysis and the design of institutional responses.

Site F is leveraging its strength in attending to issues of literacy (and its significantly higher site score regarding language assistance) and participant feedback about receiving results to design a research program for returning study results that fulfills participants' preferences.

Discussion

Seven CTSAs used a common collaborative platform, survey, and standards to collect valid, comparable data about their participants' experiences during research participation. Defining and tracking variations in timing, sampling, study characteristics, and survey populations preserved data comparability and enabled subgroup comparisons. Most participants rated their experiences very positively and were "very likely" to recommend research participation to others. Overwhelmingly, survey respondents felt well treated, respected, and listened to by study staff. The crosstab analysis (Table 3) shows clearly that the many aspects of a participant's experience each contribute importantly to a very positive (Topbox) overall experience. Understanding the nuances of the experiences of the one-third of participants who did not have a uniformly positive experience and who have reservations about recommending research is where the richest opportunities for performance improvement will be found. Sites differed significantly in some aspects of the research experience, suggesting that better practices exist among them to be elucidated and shared.

Top overall experiences are significantly associated with responses about the likelihood of recommending research participation to others. Though not proven here, it is plausible that a high likelihood of recommending research to others predicts re-enrollment by the same individual and the likelihood that others can be recruited to the study. Site A demonstrated that the preferences of enrolled participants for flexible scheduling were an accurate predictor of the preferences of volunteers not yet enrolled, as evidenced by the 60% increase in enrollment when a more flexible schedule was offered. While only 61% of respondents said they would recommend research to others, the data suggest that improving participants' research experiences will increase the likelihood that they will recommend research to others. Whether increasing scores for recommending research will positively impact study accrual rates is a testable hypothesis, and a potentially high-impact opportunity, especially within underrepresented communities.

Notably, one-third of respondents did not feel completely prepared for their research experience by the consent process they underwent (Table 2). This finding is concerning in light of the breadth of research and published work devoted to enhancing the consent process [Reference Cohn and Larson26Reference Baedorf Kassis, White and Bierer29,Reference Lawrence, Dunkel and McEver31,37,Reference Hadden, Prince, Moore, James, Holland and Trudeau55] and the priority afforded consent by IRBs, OHRP, and AAHRPP in their policies. Data suggest that despite publication of effective consent innovations, few have been broadly and effectively implemented or tailored to specific groups, e.g., by age, education, etc. [Reference Fernandez Lynch and Taylor38,39]. It is notable that the ratings for consent preparedness decline as the demands of studies increase. While some stakeholders have questioned whether it is ever possible to raise scores for challenging studies, the ethical mandate is not to make the study less demanding but to ensure that the participant was effectively prepared for what to expect before agreeing to participate. The study characteristics (study demands) and the participant characteristics that correlated with lower consent scores in the aggregate dataset (e.g., youngest and oldest age groups) are readily identifiable and present opportunities to study the root causes and design interventions. Importantly, at the organizational level, local RPPS data may point to other characteristics (e.g., race, educational level) that identify populations whose research experiences demand specific actions. RPPS data can inform the design of clinical translational science to test which interventions improve the participants’ perception of consent, which implementations work best for specific populations, and what are the downstream impacts on recruitment, retention, representativeness, data quality, and research equity.

That some scores were higher postconsent than at any other survey timepoint merits local investigation. Causes may vary by setting: participants' recall of the consent might fade with time, their initial understanding may have been incomplete, or the likelihood that teams reinforce consent, provide language assistance, or express appreciation may wane over time. Having explored root causes, organizations can test interventions to enhance research experiences throughout study participation as part of data-driven performance improvement.

Participants who self-identified as Latino/x were significantly more likely to give their overall experience a Topbox score (Table 4). Positive response bias among Latino/x survey respondents has been described [Reference Zolopa, Leon and Rasmussen56]. However, there is no simple correction, and the impact may be construct-dependent and nuanced [Reference Baquet, Commiskey, Daniel Mullins and Mishra53]. For RPPS respondents, aside from the overall rating score, there were no differences in scores from Latino/x versus non-Latino/x respondents.

As NCATS, the Patient-Centered Outcomes Research Institute (PCORI), the FDA, and sponsors urge investigators to engage participants and communities throughout the life of the protocol, another finding is notable: only two-thirds of respondents felt fully valued as partners in the research process (Table 2). The crosstab (Table 3) shows how well correlated feeling valued is with the overall experience. Site D has demonstrated across a decade that attention to this value with both research staff and participants can meaningfully increase participants' perceptions of being valued and, in parallel, enhance the overall experience (Figure 1). Given that most participants want to receive a summary of the study's overall results, organizations can convey that they value participants by responding to this strong preference. The RPPS measures can be used to assess the impact of returning results to participants and to tailor approaches for specific groups.

This study has several limitations. (1) The response rate was modest, which may result in response bias. Two approaches, study-level surveys and point-of-care electronic tablet surveys, yielded much higher response rates, as did survey completion incentives. The trade-off with these approaches is that they require much greater investment of resources to implement and sustain, and the point-of-care deployment competes with other priorities in the clinical setting. (2) There is bias in the response cohort, which also reflects the bias inherent in the sites' enrolled populations, with Latino/x populations and younger adults especially affected. Black participants were underrepresented compared to their research participation at sites overall, though representative of the population at large. Mangal et al. demonstrated that sharing information and transparency around collected health information increased trust among Black/African American participants [Reference Mangal, Park and Reading Turchioe57]. We hope that returning the results of RPPS initiatives to the participant community could create a virtuous cycle of increasing trust and increased survey response rates. (3) Response rates for some minoritized groups were low relative to their share of the current U.S. population, e.g., American Indian, Latino/x, Asian, Native Hawaiian/Pacific Islander, risking perpetuation of existing disparities in research results. However, Black participants (13.6%) were represented similarly to the U.S. Census (14%). At the urban study sites, Black respondents made up 16–24% of respondents, suggesting these sites are high performers that may have successful practices to share.

The scores in our project are similar to those in the RPPS validation study [Reference Kost, Lee and Yessis2] and to those from a very large dataset from the United Kingdom, where the National Institute for Health and Care Research has developed a very similar research experience survey, which it deployed in 2023 to >30,000 participants [58]. Thus, notwithstanding these limitations, the data from EPV form a valuable set of benchmarks that enable a study team, department, or institution using the RPPS to take the pulse of its own participant population, evaluate how its participants' experiences compare to those at peer institutions, and then use participant-centered data to inform decisions about improvement actions and evaluate their impact, fulfilling the key elements of continuous quality improvement.

Conclusion

Feedback from 5045 research participants in the EPV project contributed to a growing evidence base describing the experience of research participation and enabled comparisons among subgroups and sites. Most participants felt respected and listened to; the majority said that receiving a summary of the overall study results would be very important to future decisions about research participation. There is significant room for improvement in informed consent, in conveying to participants that they are valued partners in research, and in improving the aspects of the experience that contribute to a high overall experience rating. Benchmarks from aggregated data enabled sites to compare their local RPPS findings to multisite norms. Sites that implemented actions in response to local RPPS findings were able to increase participants' scores for feelings of being valued, increase speed of enrollment, improve communication with the study team, and design practices more responsive to participant preferences. RPPS experience data can be used to effectively drive quality improvement in clinical translational research.

Supplementary material

The supplementary material for this article can be found at https://doi.org/10.1017/cts.2025.3.

Acknowledgments

The authors would like to thank members of the Empowering the Participant Voice team who contributed to project implementation and discussions throughout the execution of the project, including: at Duke University, Michael Musty; at Johns Hopkins, Scott Carey; at Rockefeller University, Cameron Coffran, Ummey Johra, Roger Vaughan, and Barry Coller; at University of Michigan, Megan Haymart; at University of Rochester, Pavithra Punjala; and at Vanderbilt University, Eva Bascompte-Moragas, Ellis Thomas, Lindsay O'Neil, and Paul Harris. The authors also thank their institutional and community stakeholder committees for their engagement and feedback.

Author contributions

RGK conceived the project, designed, led, conducted, and analyzed the multisite research project, and wrote the first draft of the manuscript; JA, RC, AD, DF, NSG, KM, and SK-S led the local configuration of conduct and data collection at their respective sites, contributed to project design, data collection, and local analysis, and contributed to writing; NS, LC, CD, SL, LM, and JR contributed to project design, local implementation, data collection, analysis, and writing; ACC provided technical infrastructure and support for data aggregation, support for analytics, and contributed to writing; AQ and CJ provided statistical support to project design and analysis, and contributed to writing. RGK takes responsibility for the manuscript as a whole.

Funding statement

This work was supported in part by a Collaborative Innovation Award from the National Center for Advancing Translational Sciences U01TR003206 to the Rockefeller University, and Clinical Translational Science Awards: UL1TR001866 (Rockefeller University), UL1TR002553 (Duke University), UL1TR003098 (Johns Hopkins University), UL1TR002001 (University of Rochester), UL1TR002243 (Vanderbilt University), UL1TR001420 (Wake Forest Health Sciences University), UM1TR004404 (University of Michigan), and UL1TR001873 (Columbia University). The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.

Competing interests

The authors have no competing interests or conflicts to disclose.

Footnotes

Authors are listed alphabetically after the first author.

References

Verheggen, FW, Nieman, FH, Reerink, E, Kok, GJ. Patient satisfaction with clinical trial participation. Int J Qual Health Care. 1998;10(4):319330.Google Scholar
Kost, RG, Lee, LN, Yessis, JL, et al. Research participant-centered outcomes at NIH-supported clinical research centers. Clin Transl Sci. 2014;7(6):430440.Google Scholar
Smailes, P, Reider, C, Hallarn, RK, Hafer, L, Wallace, L, Miser, WF. Implementation of a research participant satisfaction survey at an academic medical center. Clin Res. 2016;30(3):4247.Google Scholar
Adler, P, Otado, J, Kwagyan, J. Satisfaction and perceptions of research participants in clinical and translational studies: an urban multi-institution with CTSA. J Clin Transl Sci. 2020;4(4):317322.Google Scholar
Boyd, P, Sternke, EA, Tite, DJ, Morgan, K. There was no opportunity to express good or bad”: perspectives from patient focus groups on patient experience in clinical trials. J Patient Exp. 2024;11:23743735241237684.Google Scholar
Institute of Medicine (US) Committee on Understanding and Eliminating Racial and Ethnic Disparities in Health Care. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care.Smedley, BD, Stith, AY, Nelson, AR, eds. National Academies Press (US), 2003.Google Scholar
Lavizzo-Mourey, RJ, Besser, RE, Williams, DR. Understanding and mitigating health inequities - past, current, and future directions. N Engl J Med. 2021;384(18):16811684.Google Scholar
National Academies of Sciences, Engineering, and Medicine, Policy and Global Affairs, Committee on Women in Science, Engineering, and Medicine, Committee on Improving the Representation of Women and Underrepresented Minorities in Clinical Trials and Research. Improving Representation in Clinical Trials and Research: Building Research Equity for Women and Underrepresented Groups, National Academies Press; 2022.Google Scholar
Menikoff, J, Kaneshiro, J, Pritchard, I. The Common Rule, Updated, N Engl J Med, 2017;376(7):613615.Google Scholar
The National Library of Medicine. FDAAA 801 and the Final Rule. Clinicaltrials.gov. updated April 30 2024, Accessed Junae 25 2024, https://clinicaltrials.gov/policy/fdaaa-801-final-rule Google Scholar
Hlávka, J. Key trends in demographic diversity in clinical trials. In: BDKH, eds. National Academies of Sciences, Engineering, and Medicine; Policy and Global Affairs; Committee on Women in Science, Engineering, and Medicine; Committee on Improving the Representation of Women and Underrepresented Minorities in Clinical Trials and Research, National Academies Press (US), 2022, Appendices BGoogle Scholar
Flores, LE, Frontera, WR, Andrasik, MP, et al. Assessment of the inclusion of racial/Ethnic minority, female, and older individuals in vaccine clinical trials. JAMA Netw Open. 2021;4(2):e2037640.Google Scholar
Cunningham-Erves, J, Kusnoor, SV, Villalta-Gil, V, et al. Development and pilot implementation of guidelines for culturally tailored research recruitment materials for African Americans and latinos. BMC Med Res Methodol. 2022;22(1):248.Google Scholar
Kusnoor, SV, Villalta-Gil, V, Michaels, M, et al. Design and implementation of a massive open online course on enhancing the recruitment of minorities in clinical trials - faster together. BMC Med Res Methodol. 2021;21(1):44.Google Scholar
Otado, J, Kwagyan, J, Edwards, D, Ukaegbu, A, Rockcliffe, F, Osafo, N. Culturally competent strategies for recruitment and retention of African American populations into clinical trials. Clin Transl Sci. 2015;8(5):460466.Google Scholar
Heller, C, Balls-Berry, JE, Nery, JD, et al. Strategies addressing barriers to clinical trial enrollment of underrepresented populations: a systematic review. Contemp Clin Trials. 2014;39(2):169182.Google Scholar
McElfish, PA, Long, CR, Selig, JP, et al. Health research participation, opportunity, and willingness among minority and rural communities of arkansas. Clin Transl Sci. 2018;11(5):487497.Google Scholar
Stallings, SC, Cunningham-Erves, J, Frazier, C, et al. Development and validation of the perceptions of research trustworthiness scale to measure trust among minoritized racial and ethnic groups in biomedical research in the US. JAMA Netw Open. 2022;5(12):e2248812.Google Scholar
Smirnoff, M, Wilets, I, Ragin, DF, et al. A paradigm for understanding trust and mistrust in medical research: the community VOICES study. AJOB Empir Bioeth. 2018;9(1):3947.Google Scholar
Grant, SC. Informed consent-we can and should do better. JAMA Netw Open. 2021;4(4):e2110848.Google Scholar
Tendler, C, Hong, PS, Kane, C, Kopaczynski, C, Terry, W, Emanuel, EJ. Academic and private partnership to improve informed consent forms using a data driven approach. Am J Bioeth. Published online September. 2023;22:13.Google Scholar
United States. National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research. The Belmont Report: Ethical Principles and Guidelines for the Protection of Human Subjects of Research, The Commission, 1978.Google Scholar
Kadam, RA. Informed consent process: a step further towards making it meaningful!, Perspect Clin Res, 2017;8(3):107112.Google Scholar
National Academies of Sciences, Engineering, and Medicine, Health and Medicine Division. Board on population health and public health practice, roundtable on health literacy. In: Health Literacy in Clinical Research: Practice and Impact: Proceedings of a Workshop, National Academies Press; 2020.Google Scholar
Cohn, E, Larson, E. Improving participant comprehension in the informed consent process. J Nurs Scholarsh. 2007;39(3):273280.Google Scholar
Flory, J, Emanuel, E. Interventions to improve research participants’ understanding in informed consent for research: a systematic review. JAMA. 2004;292(13):15931601.Google Scholar
Bona, JP, Utecht, J, Kemp, AS, et al. The informed consent form navigator: a tool for producing readable and compliant consent documents. J Clin Transl Sci. 2023;7(1):e3.Google Scholar
Baedorf Kassis, S, White, SA, Bierer, BE. Developing a consensus-driven, plain-language clinical research glossary for study participants and the clinical research community. J Clin Transl Sci. 2022;6(1):e50.Google Scholar
Wilkins, CH, Edwards, TL, Stroud, M, et al. The recruitment innovation center: developing novel, person-centered strategies for clinical trial recruitment and retention. J Clin Transl Sci. 2021;5(1):e194.Google Scholar
Lawrence, CE, Dunkel, L, McEver, M, et al. A REDCap-based model for electronic consent (eConsent): moving toward a more personalized consent. J Clin Transl Sci. 2020;4(4):345353.Google Scholar
Spinner, J, Araojo, RR. FDA’s strategies to close the health equity gap among diverse populations. J Prim Care Community Health. 2021;12:21501327211000232.
Krol, D, Kim, J, Gao, J, Amiri-Kordestani, L, Beaver, JA, Kluetz, P. The development of the oncology center of excellence patient-friendly language glossary of oncology clinical trial terms. Oncologist. 2023;28(5):379–382.
Landi, A, Mimouni, Y, Giannuzzi, V, et al. The creation of an adaptable informed consent form for research purposes to overcome national and institutional bottlenecks in ethics review: experience from rare disease registries. Front Med. 2024;11:1384026.
Gelinas, L, Morrell, W, Tse, T, Glazier, A, Zarin, DA, Bierer, BE. Characterization of key information sections in informed consent forms posted on ClinicalTrials.gov. J Clin Transl Sci. 2023;7(1):e185.
Association for Accreditation of Human Research Protection Programs (AAHRPP). 2024. https://www.aahrpp.org/resources/for-accreditation/instruments/evaluation-instrument-for-accreditation/Domain-I-Organization/standard-i-4. Accessed January 25, 2025.
U.S. Department of Health and Human Services, Office for Human Research Protections. Participant-centered informed consent training. April 29, 2024. https://www.hhs.gov/ohrp/education-and-outreach/human-research-protection-training/participant-centered-informed-consent-training/index.html. Accessed June 17, 2024.
Fernandez Lynch, H, Taylor, HA. How do accredited organizations evaluate the quality and effectiveness of their human research protection programs? AJOB Empir Bioeth. 2023;14(1):23–37.
AEREO Consortium to Advance Research Ethics Oversight. Recommendations for HRPPs and IRBs aiming to improve quality and effectiveness. December 17, 2023. https://www.med.upenn.edu/aereo/recommendations.html. Accessed June 17, 2024.
US Government Accountability Office. Institutional review boards: actions needed to improve federal oversight and examine effectiveness (GAO-23-104721). January 17, 2023. https://www.gao.gov/products/gao-23-104721. Accessed October 5, 2024.
Kost, RG, Lee, LM, Yessis, J, Coller, BS, Henderson, DK; Research Participant Perception Survey Focus Group Subcommittee. Assessing research participants’ perceptions of their clinical research experiences. Clin Transl Sci. 2011;4(6):403–413.
Yessis, JL, Kost, RG, Lee, LM, Coller, BS, Henderson, DK. Development of a research participants’ perception survey to improve clinical research. Clin Transl Sci. 2012;5(6):452–460.
Kost, RG, de Rosa, JC. Impact of survey length and compensation on validity, reliability, and sample characteristics for ultrashort-, short-, and long-research participant perception surveys. J Clin Transl Sci. 2018;2(1):31–37.
Kelly-Pumarol, IJ, Henderson, PQ, Rushing, JT, Andrews, JE, Kost, RG, Wagenknecht, LE. Delivery of the research participant perception survey through the patient portal. J Clin Transl Sci. 2018;2(3):163–168.
Kost, RG, Cheng, A, Andrews, J, et al. Empowering the participant voice (EPV): design and implementation of collaborative infrastructure to collect research participant experience feedback at scale. J Clin Transl Sci. 2024;8(1):e40.
Cheng, AC, Moragas, EB, Ellis, T, et al. Standards and infrastructure for multisite deployment of the research participant perception survey. JAMIA Open. https://doi.org/10.1093/jamiaopen/ooaf017.
Empowering the Participant Voice. January 19, 2021. http://www.rockefeller.edu/research/epv. Accessed October 12, 2023.
Obeid, JS, McGraw, CA, Minor, BL, et al. Procurement of shared data instruments for research electronic data capture (REDCap). J Biomed Inform. 2013;46(2):259–265.
Harris, PA, Taylor, R, Thielke, R, Payne, J, Gonzalez, N, Conde, JG. Research electronic data capture (REDCap) – a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–381.
Kost, RG; EPV Team. Exploring EPV for your institution: EPV implementation guide. Empowering the Participant Voice (EPV). 2022;15. https://www.rockefeller.edu/research/epv/joining-epv/. Accessed May 31, 2023.
AAPOR. American Association for Public Opinion Research. 2025. https://aapor.org/response-rates/. Accessed January 25, 2025.
US Census Bureau. Census Bureau releases new educational attainment data. 2022. https://www.census.gov/newsroom/press-releases/2022/educational-attainment.html. Accessed December 7, 2024.
Baquet, CR, Commiskey, P, Daniel Mullins, C, Mishra, SI. Recruitment and participation in clinical trials: socio-demographic, rural/urban, and health care access predictors. Cancer Detect Prev. 2006;30(1):24–33.
Scanlon, JK, Wofford, L, Fair, A, Philippi, D. Predictors of participation in clinical research. Nurs Res. 2021;70(4):289–297.
Hadden, KB, Prince, LY, Moore, TD, James, LP, Holland, JR, Trudeau, CR. Improving readability of informed consents for research at an academic medical institution. J Clin Transl Sci. 2017;1(6):361–365.
Zolopa, C, Leon, M, Rasmussen, A. A systematic review of response styles among Latinx populations. Assessment. 2024;31(4):947–962.
Mangal, S, Park, L, Reading Turchioe, M, et al. Building trust in research through information and intent transparency with health information: representative cross-sectional survey of 502 US adults. J Am Med Inform Assoc. 2022;29(9):1535–1545.
Department of Health and Social Care (UK). Participant in research experience survey (PRES). NIHR National Institute for Health and Care Research. October 11, 2021. https://www.nihr.ac.uk/patients-carers-and-the-public/participant-in-research-experience-survey.htm. Accessed June 22, 2024.
US Census 2020. The United States Census Bureau. https://data.census.gov/. Accessed December 28, 2023.
Table 1. Characteristics of research participants who returned the Research Participant Perception Survey (RPPS), by year, compared with all survey recipients and the US 2020 Census

Table 2. Multisite Topbox scores for RPPS questions in aggregate, with range across sites and chi-squared/Fisher’s exact tests (February 2022 – June 2024)

Table 3. Respondents’ overall ratings of their research experiences compared with their responses to questions about their research experiences (February 2022 – June 2024) (n = 5045)

Table 4. Mixed-effects logistic regression models for Topbox scores for overall rating and for feeling fully prepared by the informed consent discussions

Table 5. Local research experience findings, actions, and impacts
Figure 1. Topbox scores for three Research Participant Perception Survey (RPPS) experience questions from 2013 to 2024 at Site D, where the RPPS has been fielded for a decade. In 2013, the site began an initiative to communicate directly to research volunteers that researchers and the institution valued them as partners in the research process (blue arrow). The message was initially delivered through brochures, pins, and banners; over time it was also embedded in institutional values through training, teaching, and policy. In 2017–2018, a research team that contributed many RPPS respondents enlisted participants to help develop a new informed consent video and began using it in a Phase I–II study (orange arrow). In 2020–2022, the COVID-19 pandemic disrupted many clinical operations, including in-person consent (green arrow); in-person activities recovered fully by 2023.
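For readers who want to explore similar trend data at their own site, the sketch below shows one way yearly Topbox scores (the share of respondents giving the most favorable answer) could be computed and two periods compared with a chi-squared test. It is a minimal illustration under stated assumptions, not the authors’ analysis code: the column names (survey_year, overall_rating) and the top response value of 10 are hypothetical.

```python
# Minimal sketch (not the authors' code): yearly Topbox scores for one RPPS
# item and a chi-squared comparison between two periods. Column names and the
# top response value are assumptions for illustration only.
import pandas as pd
from scipy.stats import chi2_contingency


def topbox_by_year(df: pd.DataFrame, item: str, top_value: int = 10) -> pd.Series:
    """Share of respondents per survey year whose answer equals the top value."""
    answered = df.dropna(subset=[item])
    return (answered[item] == top_value).groupby(answered["survey_year"]).mean()


def compare_periods(df: pd.DataFrame, item: str, years_a, years_b, top_value: int = 10):
    """Chi-squared test of Topbox vs. non-Topbox counts between two sets of years."""
    counts = []
    for years in (years_a, years_b):
        subset = df[df["survey_year"].isin(years)].dropna(subset=[item])
        top = int((subset[item] == top_value).sum())
        counts.append([top, len(subset) - top])  # [Topbox, non-Topbox]
    chi2, p, _, _ = chi2_contingency(counts)
    return chi2, p


# Example usage with a hypothetical response table:
# responses = pd.DataFrame({"survey_year": [2013, 2013, 2023, 2023],
#                           "overall_rating": [10, 8, 10, 10]})
# print(topbox_by_year(responses, "overall_rating"))
# print(compare_periods(responses, "overall_rating", [2013], [2023]))
```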

Supplementary material: Kost et al. supplementary material 1 (File, 1.1 MB)
Supplementary material: Kost et al. supplementary material 2 (File, 1.1 MB)