
180 Building an evaluation platform to capture the impact of Frontiers CTSI activities

Published online by Cambridge University Press:  03 April 2024

Maggie Padek Kalman, Shellie Ellis, Mary Penne Mays, Sam Pepper and Dinesh Pal Mudaranthakam
Affiliation: University of Kansas Medical Center

Abstract


OBJECTIVES/GOALS: In 2021, Frontiers CTSI revamped its evaluation infrastructure to be comprehensive, efficient, and transparent in demonstrating outputs and outcomes. We sought to build a platform that standardizes measures across program areas, integrates continuous improvement processes into operations, and reduces the data entry burden for investigators.

METHODS/STUDY POPULATION: To identify useful metrics, we facilitated each Core's creation of a logic model, in which they identified all planned activities, expected outputs, and anticipated outcomes for the 5-year cycle and beyond. We identified appropriate metrics based on the logic models and aligned metrics across programs against extant administrative data. We then built a data collection and evaluation platform within REDCap to capture user requests, staff completion of requests, and, ultimately, request outcomes. We built a similar system to track events, attendance, and outcomes. Aligning with other hubs, we also transitioned to a membership model. Membership serves as the backbone of the evaluation platform and allows us to tailor communication, capture demographic information, and reduce the data entry burden for members.

RESULTS/ANTICIPATED RESULTS: The Frontiers Evaluation Platform consists of 9 REDCap projects with distinct functions and uses throughout the Institute. Point-of-service collection forms include Consultation Request and Event Tracking. Annual forms include a Study Outcome, Impact, and Member Assessment Survey. Set time-point collections include K & T applications, Mock Study Section, and pilot grant application submission, review, and outcomes. Flight Tracker is used to collect scientific outcomes and is integrated with the platform. Using SQL, the membership module has been integrated into all forms to check and collect membership before service access and to provide relevant member data to navigators. All relevant data are then synced into a dashboard that allows program leadership and management to track outputs and outcomes in real time.

DISCUSSION/SIGNIFICANCE: Since the launch of the evaluation platform in Fall 2022, Frontiers has increased its workflow efficiency and streamlined continuous improvement communication. The platform can serve as a template for other hubs to build efficient processes and create comprehensive and transparent evaluation plans.
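The data flow described in the results, in which point-of-service REDCap forms are keyed to a membership registry and synced into a leadership dashboard, can be illustrated with a minimal sketch. The code below is not the authors' implementation: the API URL, project tokens, and field names (member_id, request_status, core) are hypothetical placeholders, and the only assumed interface is the standard REDCap API record export.

```python
# Minimal sketch of pulling records from two hypothetical REDCap projects
# (a membership registry and a consultation-request tracker) and joining
# them on a member ID for a simple dashboard count. Illustrative only.
import requests
import pandas as pd

REDCAP_URL = "https://redcap.example.edu/api/"  # placeholder endpoint


def export_records(token: str) -> pd.DataFrame:
    """Export all records from one REDCap project as a flat table."""
    payload = {
        "token": token,       # project-specific API token
        "content": "record",  # standard REDCap record export
        "format": "json",
        "type": "flat",
    }
    response = requests.post(REDCAP_URL, data=payload, timeout=30)
    response.raise_for_status()
    return pd.DataFrame(response.json())


# Tokens for the hypothetical membership and consultation-request projects.
members = export_records(token="MEMBERSHIP_PROJECT_TOKEN")
consults = export_records(token="CONSULT_REQUEST_PROJECT_TOKEN")

# Join service requests to member records so navigators see member data
# without re-entry, then count open requests per core for a dashboard view.
merged = consults.merge(members, on="member_id", how="left", suffixes=("", "_member"))
open_by_core = merged[merged["request_status"] == "open"].groupby("core").size()
print(open_by_core)
```

In practice the same export could be scheduled to refresh a dashboard, which is one way the real-time tracking described above might be achieved.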

Type
Evaluation
Creative Commons
CC BY-NC-ND 4.0
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (https://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s), 2024. Published by Cambridge University Press on behalf of The Association for Clinical and Translational Science