
OP70 Gaps In The Evaluation Of Clinical Decision Support Software (CDSS): Interviews With Australian Policymakers

Published online by Cambridge University Press:  23 December 2022


Abstract

Introduction

Clinical Decision Support Software (CDSS) can improve the quality and safety of care by providing patient-specific diagnostic and treatment recommendations. However, robust evaluation is required to ensure that these recommendations are clinically valid, up to date, and relevant to a specific clinical context. Most evaluation studies assess CDSS performance from the perspective of end-user requirements, and CDSS are only occasionally subject to stringent pre- and post-market evaluation, making it difficult to determine their safety and quality in practice. This study aimed to assess CDSS evaluation in Australia in order to identify gaps in evaluation approaches.

Methods

We conducted eleven semi-structured interviews with policymakers from committees involved in digital health activities in Australia. Data were thematically analyzed using both theory-based (deductive) and data-driven (inductive) approaches.

Results

Our findings indicated that evaluating CDSS as a purely technical intervention has overly narrowed the assessment of benefits and risks by inadequately capturing the sociotechnical environment. Existing evaluation methods, which adopt a static view of the implemented system, cannot discern the impact of the dynamic clinical environment and rapidly evolving technology on CDSS performance. The timeframes of evaluation studies are also incongruent with fast software upgrade cycles: clinical practices and the software itself may have changed by the time an evaluation is complete. The regulation of software as a medical device depends on its intended use. CDSS are exempt from regulation because they only 'produce advice'; however, this ignores the fact that a software update can shift a CDSS from advising to specifying a diagnosis and treatment. There is no framework for continuous post-market monitoring, which is especially important when a CDSS algorithm can change and affect patient management.

Conclusions

The sociotechnical environment is a significant factor influencing the impact of CDSS on clinical practice; evaluation approaches must therefore acknowledge the dynamic nature of clinical and organizational contexts. CDSS evaluation requires pragmatic, data-driven methodologies that account for the evolving landscape of clinical practice and its relationship to technology.

Type
Oral Presentations
Copyright
© The Author(s), 2022. Published by Cambridge University Press