
Assessment of technical errors and validation processes in economic models submitted by the company for NICE technology appraisals

Published online by Cambridge University Press:  03 July 2020

Demi Radeva
Affiliation:
Department of Health Policy, London School of Economics and Political Science, London, UK; United Health Group, Eden Prairie, Minnesota, USA
Gareth Hopkin*
Affiliation:
Department of Health Policy, London School of Economics and Political Science, London, UK; Institute of Health Economics, Edmonton, Alberta, Canada
Elias Mossialos
Affiliation:
Department of Health Policy, London School of Economics and Political Science, London, UK
John Borrill
Affiliation:
Bristol-Myers Squibb, Uxbridge, London, UK
Leeza Osipenko
Affiliation:
Department of Health Policy, London School of Economics and Political Science, London, UK
Huseyin Naci
Affiliation:
Department of Health Policy, London School of Economics and Political Science, London, UK
Author for correspondence: Gareth Hopkin, E-mail: [email protected]

Abstract

Background

Economic models play a central role in the decision-making process of the National Institute for Health and Care Excellence (NICE). Inadequate validation methods allow errors to remain in economic models. These errors may alter the final recommendations and have a significant impact on outcomes for stakeholders.

Objective

To describe the patterns of technical errors found in NICE submissions and to provide insight into the validation exercises carried out by companies prior to submission.

Methods

All forty-one single technology appraisals (STAs) completed by NICE in 2017, all of which concerned medicines, were reviewed. The frequency of errors and information on their type, magnitude, and impact were extracted from publicly available NICE documentation, along with details of the model validation methods used.

Results

Two STAs (5 percent) had no reported errors, nineteen (46 percent) had between one and four errors, sixteen (39 percent) had between five and nine errors, and four (10 percent) had more than ten errors. The most common errors were transcription errors (29 percent), logic errors (29 percent), and computational errors (25 percent). All STAs went through at least one type of validation. Errors considered notable enough to be reported in the final appraisal document (FAD) occurred in eight (20 percent) of the STAs assessed, yet each of these eight STAs received a positive recommendation.

Conclusions

Technical errors are common in the economic models submitted to NICE. Some errors were considered important enough to be reported in the FAD. Improvements are needed in the model development process to ensure technical errors are kept to a minimum.
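The kinds of improvements called for here can include automated sanity checks run before submission. As a purely illustrative sketch (not taken from the paper, and using hypothetical function names), the snippet below checks a Markov cohort model's transition matrix for two common technical errors: probabilities outside [0, 1] and rows that do not sum to one.

```python
import numpy as np

def validate_transition_matrix(P, tol=1e-9):
    """Basic technical checks for a Markov cohort model's transition matrix.

    Returns a list of error descriptions (empty if no issues are found).
    Hypothetical helper for illustration; not part of any NICE process.
    """
    errors = []
    P = np.asarray(P, dtype=float)
    if P.ndim != 2 or P.shape[0] != P.shape[1]:
        errors.append("matrix is not square")
        return errors
    # Each entry must be a valid probability.
    if (P < -tol).any() or (P > 1 + tol).any():
        errors.append("probabilities outside [0, 1]")
    # Each row (transitions out of one state) must sum to exactly one.
    if not np.allclose(P.sum(axis=1), 1.0, atol=tol):
        errors.append("rows do not sum to 1")
    return errors

# A deliberately faulty matrix: the second row sums to 1.1.
faulty = [[0.9, 0.1, 0.0],
          [0.0, 0.8, 0.3],
          [0.0, 0.0, 1.0]]
print(validate_transition_matrix(faulty))  # ['rows do not sum to 1']
```

Checks of this kind catch transcription and computational errors mechanically, which is cheaper than finding them during appraisal.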

Type: Method
Copyright © The Author(s), 2020. Published by Cambridge University Press


Supplementary material: Radeva et al. supplementary material (File, 16.3 KB)