
Healthcare-associated infections and conditions in the era of digital measurement

Published online by Cambridge University Press: 25 September 2023

David C. Classen*
Affiliation:
Division of Epidemiology, University of Utah School of Medicine and IDEAS Center VA Salt Lake City Health System, Salt Lake City, UT, USA
Chanu Rhee
Affiliation:
Department of Population Medicine, Harvard Medical School and Harvard Pilgrim Health Care Institute, Boston, MA, USA; Division of Infectious Diseases, Brigham and Women’s Hospital, Boston, MA, USA
Raymund B. Dantes
Affiliation:
Division of Hospital Medicine, Emory University School of Medicine, Atlanta, GA, USA; Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, GA, USA
Andrea L. Benin
Affiliation:
Division of Healthcare Quality Promotion, Centers for Disease Control and Prevention, Atlanta, GA, USA
*
Corresponding author: David Classen; Emails: [email protected], [email protected]

Abstract

As the third edition of the Compendium of Strategies to Prevent Healthcare-Associated Infections in Acute Care Hospitals is released with the latest recommendations for the prevention and management of healthcare-associated infections (HAIs), a new approach to reporting HAIs is just beginning to unfold. This next generation of HAI reporting will be fully electronic and based largely on existing data in electronic health record (EHR) systems and other electronic data sources. It will be a significant change in how hospitals report HAIs and how the Centers for Disease Control and Prevention (CDC) and other agencies receive this information. This paper outlines what that future electronic reporting system will look like and how it will impact HAI reporting.

Type
SHEA/IDSA/APIC Commentary
Creative Commons
CC BY
This is a work of the US Government and is not subject to copyright protection within the United States. Published by Cambridge University Press on behalf of The Society for Healthcare Epidemiology of America.
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (https://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution, and reproduction in any medium, provided the original work is properly cited.
Copyright
© University of Utah School of Medicine, 2023

History of HAI reporting

Surveillance and reporting of healthcare-associated infections (HAIs) in the United States have evolved substantially over the past half-century. Although some hospitals began to establish infection control programs with active surveillance for HAIs in the 1960s, a national reporting system was absent until 1970, when the Centers for Disease Control and Prevention (CDC) established the National Nosocomial Infections Surveillance System (the precursor to the current National Healthcare Safety Network [NHSN]).1 This system initially had just 62 participating hospitals but expanded rapidly in the ensuing years as HAI surveillance continued to gain momentum, propelled by the addition of infection surveillance and control programs to The Joint Commission’s hospital accreditation requirements in 1976 as well as by evidence that such programs were associated with lower facility-level HAI rates.2,3

The release of the Institute of Medicine’s “To Err is Human” report in 1999, documenting that HAIs were the leading cause of preventable harm in hospitals, spurred even greater attention and catalyzed legislative mandates for public HAI reporting in several states beginning in 2002.4 The Deficit Reduction Act of 2005 resulted in a series of Centers for Medicare & Medicaid Services (CMS) payment reforms, codified in the 2008 Medicare Hospital-Acquired Conditions program and the subsequent 2012 Medicaid Healthcare-Acquired Conditions program. These programs ceased hospital payments for care resulting from preventable conditions acquired in the hospital (including several HAIs) as identified by International Classification of Diseases, Ninth Revision, Clinical Modification (ICD-9-CM) codes.5,6

Administrative codes have long been used for quality-of-care metrics given their availability, ease of use, and low cost. However, administrative billing data are known to have low sensitivity and low positive predictive values for certain HAIs.7–11 Because billing staff determine ICD codes based on medical record documentation, variability in HAI reporting may also occur.12,13 Furthermore, changes in documentation and billing practices can easily affect apparent HAI rates, particularly when financial incentives are at stake. This issue is underscored by studies demonstrating sudden declines in billing rates for certain HAIs targeted by Medicare after implementation of the Hospital-Acquired Conditions program, without corresponding changes in infection rates as determined by standardized NHSN criteria.14,15

In 2010, CMS incorporated HAI prevention into the Hospital Value-Based Purchasing Program, established under the Affordable Care Act and implemented in fiscal year 2013, such that HAI rates are now used to benchmark hospitals and inform pay-for-performance programs. Recognizing the limitations of administrative codes for HAI surveillance, CMS shifted to using NHSN definitions, beginning with central-line–associated bloodstream infections (CLABSIs) and followed by catheter-associated urinary tract infections (CAUTIs), Clostridioides difficile infections, methicillin-resistant Staphylococcus aureus (MRSA) bacteremia, and select surgical site infections (SSIs).16 Using NHSN to determine HAI rates improves accuracy, clinical relevance, and credibility compared with administrative data.

As the breadth and importance of HAI reporting to NHSN have evolved over time, so have the methods of surveillance. Traditionally, HAI surveillance has been performed by infection preventionists, who conduct manual medical record reviews of all patients at risk for the specified HAI, apply standardized case definitions, and then manually enter data into the NHSN database. However, this process is highly resource intensive and risks underdetection of HAIs, depending on the rigor and breadth of the case-finding approach.17 In addition, some HAI definitions are complex and leave room for subjective interpretation, confounding attempts to use them for hospital benchmarking.18–24

To overcome the drawbacks of traditional surveillance methods for HAIs, particularly their resource intensiveness and imperfect interrater reliability, electronic surveillance methods have been increasingly explored and utilized. “Semiautomated surveillance” involves algorithms that select patients who are highly likely to have an HAI based on clinical data extractable from EHRs, followed by manual confirmation; numerous studies have demonstrated how electronic screening based on microbiology results, antibiotic exposures, and discharge diagnosis codes (alone or in combination) can increase the efficiency—and potentially the interrater reliability—of HAI surveillance, including for CLABSIs, CAUTIs, and SSIs.25–29 Fully automated systems, in contrast, apply standardized definitions entirely using electronic data and eschew any manual medical record review. The potential to allow fully standardized and efficient assessment of large populations has led the CDC to increasingly explore this pathway, including a shift away from ventilator-associated pneumonia toward ventilator-associated events and the recent development of a simple and automatable definition of hospital-onset bacteremia.30,31
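To make the semiautomated approach concrete, the following is a minimal sketch of electronic screening that flags candidate CLABSI events for manual review. The data model, field names, and the 2-calendar-day hospital-onset cutoff are illustrative assumptions, not NHSN’s actual surveillance logic.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class Patient:
    mrn: str
    admit_date: date
    central_line_days: set[date]         # calendar days a central line was documented
    positive_blood_cultures: list[date]  # collection dates of positive blood cultures

def candidate_clabsi_events(p: Patient) -> list[date]:
    """Flag positive blood cultures collected 2 or more calendar days after
    admission while a central line was documented; each flagged event still
    requires manual confirmation against the full surveillance definition."""
    flagged = []
    for culture_date in p.positive_blood_cultures:
        hospital_onset = culture_date >= p.admit_date + timedelta(days=2)
        line_in_place = culture_date in p.central_line_days
        if hospital_onset and line_in_place:
            flagged.append(culture_date)
    return flagged

# Example: the admission-day culture is excluded; the day-5 culture is flagged.
p = Patient(
    mrn="12345",
    admit_date=date(2023, 6, 1),
    central_line_days={date(2023, 6, d) for d in range(1, 10)},
    positive_blood_cultures=[date(2023, 6, 1), date(2023, 6, 5)],
)
print(candidate_clabsi_events(p))  # [datetime.date(2023, 6, 5)]
```

The screening step reduces the review workload to the flagged events only; the manual confirmation step is what makes the approach semiautomated rather than fully automated.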

Evolution of digital quality measures

Measuring the quality of care is a critical component of assessing the overall value of care.32 Payors increasingly require hospitals and ambulatory clinics to measure and report a growing number of quality measures.33 These quality measures can be used to identify opportunities to improve care and to report performance, and they are increasingly tied by payors to reimbursement of healthcare personnel (HCP). Before the electronic health record (EHR) era, these quality measures were manually abstracted and reported, constituting a significant burden on hospitals and clinics. This burden increased as the number of quality measures required for reporting expanded exponentially. In a 2013 analysis, a major academic medical center was required to report >120 quality measures to regulators or payors, and the cost of measure collection and analysis consumed ∼1% of net patient-service revenue.34 Preliminary results from a survey of leadership in 20 healthcare systems, ranging in size from 180 to 3,000 beds, revealed that measurement activities required the equivalent of 50–100 full-time employees at each system, at estimated costs ranging from $3.5 million to $12 million per year.35

Interest in using electronic means such as EHRs to collect and report these quality metrics has grown because of the potential to reduce the burden of collection and reporting. The Health Information Technology for Economic and Clinical Health (HITECH) Act, passed in 2009 as part of the American Recovery and Reinvestment Act, specified general guidelines for the development and implementation of a “nationwide health information technology infrastructure.”36 Through the HITECH Act initiatives, the federal government invested resources to promote widespread adoption of EHRs, which were intended to improve the quality, safety, efficiency, coordination, and equity of healthcare in the United States.37 A key feature of this program was the use of clinical quality reporting measures that would be collected and reported using certified EHR systems.38 These reports use the standardized quality measures envisioned in the HITECH Act, known as electronic clinical quality measures (eCQMs), to report hospital performance to healthcare payors. An eCQM is a clinical quality measure that is specified in a standard electronic format and is designed to use structured, encoded data present in the EHR.38 As EHR adoption has risen dramatically over the past decade, many HCP now must generate and submit these eCQMs through their EHR or another certified technology.
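As a simple illustration of what computing a measure from “structured, encoded data” means in practice, the sketch below derives a hypothetical eCQM-style performance rate from coded encounter data. The measure logic and the medication code are invented for illustration and do not correspond to any actual CMS measure specification.

```python
from dataclasses import dataclass

@dataclass
class Encounter:
    patient_id: str
    age: int
    diagnosis_codes: set[str]   # structured, encoded data (eg, ICD-10-CM)
    medication_codes: set[str]  # structured, encoded data (eg, RxNorm)

def in_denominator(e: Encounter) -> bool:
    # Denominator: adults with a qualifying coded diagnosis
    # (I10 = essential hypertension in ICD-10-CM).
    return e.age >= 18 and "I10" in e.diagnosis_codes

def in_numerator(e: Encounter) -> bool:
    # Numerator: denominator encounters where the recommended therapy
    # was coded. "RX-123" is a placeholder, not a real RxNorm code.
    return "RX-123" in e.medication_codes

def ecqm_performance_rate(encounters: list[Encounter]) -> float:
    """Performance rate = numerator count / denominator count."""
    denominator = [e for e in encounters if in_denominator(e)]
    numerator = [e for e in denominator if in_numerator(e)]
    return len(numerator) / len(denominator) if denominator else 0.0
```

The point of the standard electronic format is that this numerator/denominator logic is specified once, centrally, and evaluated against coded data rather than abstracted by hand from each chart.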

The initial rollout of eCQMs was challenging. For many hospitals and clinics, collecting and reporting quality measures as eCQMs proved more burdensome than prior manual collection. This occurred for many reasons, including poor and contradictory measure specifications; EHR vendor software that was unable to collect and report the eCQMs; inaccurate calculation of measures by vendor software; and lack of key measure documentation in the EHR system, often requiring clinicians to add more documentation to their EHR workflows.35 In a 2015 study, the estimated cost of EHR quality reporting for general internists, family physicians, cardiologists, and orthopedists was $15.4 billion annually.39 The accuracy of EHR-based quality reporting has also shown mixed performance compared with manual chart review.37 Electronic quality reporting has frequently been inaccurate due to challenges in data completeness, accuracy, and terminology use; gaps between structured fields and available free text; and inconsistency in measure-logic implementation and EHR certification.40

In 2015, as these challenges were coming to light, the Institute of Medicine (IOM) released Vital Signs, its report on national quality measurement and reporting, which called for a significant reduction in the volume of different quality metrics reported and recommended moving to eCQMs for all quality reporting.41 CMS responded to this report, and to numerous stakeholder concerns about its eCQM quality reporting programs, by developing a new digital quality measurement strategy.42,43 As part of this new digital measures program, CMS is updating and improving its existing eCQMs through the Centers for Medicare & Medicaid Services Electronic Clinical Quality Measure (eCQM) Strategy Project. This project began with outreach to working partners to identify burdens and recommendations related to the current eCQM implementation and reporting program, and it then developed a series of initiatives to address these burdens and recommendations from frontline users.43 Six themes emerged from the stakeholder feedback. Better Alignment addressed the perceived lack of alignment across CMS programs, other payors, and regulatory agencies. Improved Value addressed the burdens of current reporting on eCQMs that HCP believe may not be relevant, do not contribute to their quality initiatives, or do not accurately represent the care they provide to patients. Future Development Processes addressed the burdens related to the length and lack of clarity of the eCQM development process; measure developers and health IT vendors recommended a collaborative eCQM development environment to improve transparency regarding future measure needs and access to testing data to streamline the eCQM development lifecycle. Integrated Implementation and Reporting Processes addressed the complex eCQM workflows resulting from: 1) difficulty interpreting the eCQM specifications and discerning which data elements must be captured in the clinician workflow; 2) documentation required for eCQM reporting that does not directly support patient care; and 3) multiple submission mechanisms and formats resulting in delays, poor submitter feedback, and lack of usability and consistency. Finally, an enhanced EHR Certification Process outlined hospitals’ and clinicians’ expectation that the EHR certification process ensure accurate and successful eCQM calculation, reporting, and submission to CMS. In total, CMS and its partners addressed more than 100 recommendations to improve the eCQM development, implementation, and reporting experience by creating multiple implementation strategies.43

These eCQM changes are part of the larger CMS plan to move all quality reporting to digital quality measures (dQMs). CMS’s new digital quality framework is built around 4 key domains: advancing technology; enabling measure alignment; improving the quality of data, such as through standardized data elements and validation programs; and optimizing data aggregation.42,43 The dQMs are an expansion of eCQMs that allows data to be collected not only from EHRs but also from an array of other electronic sources. CMS defines dQMs as “quality measures, organized as self-contained measure specifications and code packages, that use one or more sources of health information that are captured and can be transmitted electronically via interoperable systems.” These data sources may include administrative systems, electronically submitted clinical assessment data, case management systems, EHRs, laboratory systems, prescription drug monitoring programs (PDMPs), instruments (eg, medical devices and wearable devices), patient portals or applications (eg, for collection of patient-generated data such as from a home blood pressure monitor, or patient-reported health data), Health Information Exchange Organizations (HIEOs) or registries, and other sources.42,43

The increasing use of structured, Fast Healthcare Interoperability Resources (FHIR)–formatted EHR data exchanged through FHIR application programming interfaces (APIs) can greatly reduce longstanding challenges to quality measurement. Currently, implementing individual EHR-based measures requires HCP to install and adapt measure calculation software in their respective EHR systems, which often use variable or proprietary data models and structures.42,43 This process is burdensome and costly, and it is difficult to reliably obtain high-quality data across EHR instances. FHIR is a recent set of health IT standards optimized for secure and rapid data interoperability, based on widely used internet standards common across other industries. Once HCP map their EHR data (structured using a uniform FHIR standard) to a FHIR API to meet the 21st Century Cures Act requirements, it will be possible to exchange much of the foundational data needed for measures without significant additional HCP investment or effort.43 Lessons from these activities can be leveraged and applied to other digital data that live outside the clinical EHR, enhancing and expanding the use of data such as patient-reported outcomes (PROs) and patient-generated health data (PGHD) for quality measurement in the future.42 These advances in interoperability are also expected to enable the development of measure calculation tools (MCTs) for dQMs that use EHR data alone, so that HCP will no longer need to install measures one by one and update them annually in their unique EHR systems. MCTs can be self-contained tools executed on-site by HCP and by multiple other key actors in measurement, including CMS, other payors, clinical registries, and data aggregators.42 In the future, improved interoperability of EHR and other digital health data can fuel a revolution in healthcare delivery and quality measurement that leverages healthcare data across all settings and data sources.
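As a rough sketch of the exchange pattern FHIR enables, the example below retrieves laboratory Observation resources for one patient through a standard FHIR R4 search. The base URL is a placeholder, and real deployments add authentication and paging; this illustrates the pattern rather than any specific NHSN or CMS interface.

```python
# A rough sketch of data retrieval through a FHIR R4 API. The base URL is a
# placeholder; real deployments add authentication, error handling, and
# paging through the Bundle's "link" entries.
import requests

FHIR_BASE = "https://fhir.example-hospital.org/r4"  # placeholder endpoint

def fetch_lab_observations(patient_id: str) -> list[dict]:
    """Search for laboratory Observations for one patient and return the
    resources contained in the response Bundle."""
    response = requests.get(
        f"{FHIR_BASE}/Observation",
        params={"patient": patient_id, "category": "laboratory"},
        headers={"Accept": "application/fhir+json"},
        timeout=30,
    )
    response.raise_for_status()
    bundle = response.json()
    return [entry["resource"] for entry in bundle.get("entry", [])]
```

Because the search parameters and the Bundle response format are standardized, the same client code can, in principle, run against any conformant FHIR server regardless of the underlying EHR vendor.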

Current NHSN HAI surveillance: Strengths and challenges

NHSN has grown into the primary and most widely used tracking system for HAIs in the United States, with enrollment increasing from 300 hospitals in 2006 to >38,000 healthcare facilities today across all 50 states, the District of Columbia, and Puerto Rico. Surveillance for HAIs remains a national priority,44 as HAIs still affect 1 of every 31 patients and lead to $28.4 billion in excess healthcare costs annually.45,46 Fundamentally, HAIs violate the central medical oath of “first, do no harm,” and their prevention is a bedrock of national efforts to improve the safety of healthcare.

NHSN has been instrumental in driving successful reductions in key HAIs over the past decade by supporting the 3 pillars required for measurable improvement: actionable outcome data; evidence-based, concrete interventions (eg, standardized central-line insertion practices); and an empowered healthcare workforce to enact prevention strategies, spanning bedside clinicians, infection preventionists, hospital epidemiologists, clinical leaders, and facility administrators. Stress on this third pillar during the COVID-19 pandemic, which overextended the healthcare workforce, resulted in the rise of several key HAIs.47 The well-established NHSN surveillance network was able to document and draw attention to the impact of the COVID-19 pandemic on patient safety and has been an essential part of reinvigorating efforts to improve patient safety outcomes in the pandemic era.48

The use of NHSN quality measures in accountability programs has been instrumental in incentivizing healthcare facilities to drive improvements in HAIs. To support these programs, NHSN relies on benchmarking healthcare facilities against each other using risk-adjusted standardized infection ratios.49 To date, risk adjustment has relied on facility-level factors because of the practicalities of data collection. Multiple authors have called for patient-level risk adjustment to benchmark facilities more fairly in accountability programs.50 However, risk adjustment using patient-level factors requires collecting detailed patient-level data on all patients in the relevant cohorts—virtually all patients in healthcare facilities nationally. Before the current era of digital measures, such data collection was entirely impractical because requiring detailed data collection for so many patients would be too burdensome. As we move into the era of electronic data collection and digital measurement, such an approach becomes feasible, and patient-level risk adjustment becomes plausible.
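For context, the standardized infection ratio (SIR) underpinning this benchmarking divides the number of observed events by the number predicted from national baseline models; a brief worked illustration with invented numbers:

```latex
\[
\mathrm{SIR} \;=\; \frac{\text{observed HAIs}}{\text{predicted HAIs}}
            \;=\; \frac{8}{10} \;=\; 0.8
\]
```

An SIR below 1 indicates fewer events than predicted by the national baseline; the values 8 and 10 here are hypothetical. Patient-level risk adjustment would refine the predicted denominator using characteristics of the individual patients served rather than facility-level factors alone.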

The greatest challenge NHSN has faced in growing and evolving surveillance and quality measurement has been the high burden that properly rigorous data collection places on healthcare facilities. Several established NHSN HAI measures track device- or procedure-related infections (CLABSI, CAUTI, and SSI) using detailed, comprehensive surveillance definitions designed to emulate a clinical diagnosis of infection. As a result, these surveillance definitions may contain patient signs or symptoms (eg, chills in CLABSI, costovertebral tenderness in CAUTI, or purulent drainage in SSI) that are relevant for clinical diagnosis but are documented in nonstandardized ways and are challenging to locate in structured EHR data. Despite advances in EHRs over more than a decade, these HAIs require some level of manual adjudication, and the resulting reporting burden has been well documented.51,52 Furthermore, manual adjudication opens surveillance definitions to potential interrater variation and subjectivity.53

In the ever-advancing era of digital measurement, NHSN has committed to addressing these challenges by leveraging developments in data interoperability to improve and evolve surveillance for HAIs, as well as to offer rigorously defined and measured surveillance for patient safety and quality outside the realm of HAIs.

HAI reporting in the era of digital quality measures

Currently, NHSN has several quality measures that utilize the Clinical Document Architecture (CDA) framework for entirely electronic data transfer, including the Antimicrobial Use/Antimicrobial Resistance, Late-Onset Sepsis and Meningitis, and Multidrug-Resistant Organism (eg, methicillin-resistant S. aureus [MRSA] bacteremia) and C. difficile Infection (ie, LabID) modules. The CDA submission process, although reliable and well established, relies on facilities to package data collected by healthcare personnel into electronic submissions.

The next stage of NHSN electronic measurement will utilize both semiautomated and fully automated electronic surveillance for dQMs (Fig. 1). NHSN is developing a series of dQMs designed for data exchange via FHIR. This approach will facilitate more robust, rapid, and efficient transfer of detailed healthcare data. Furthermore, because the approach enables the collection of patient-level data, NHSN will be able to evolve methods for risk adjustment while minimizing the burden of data collection. Importantly, NHSN will also be able to calculate balancing metrics that serve several functions: they can identify whether the pressures created by measurement are causing unintended consequences, and they can monitor key process measures to evaluate the impact of key prevention practices. For example, alongside the planned measurement of healthcare-facility onset, antibiotic-treated C. difficile infection (HT-CDI), NHSN plans to track instances in which treatment for C. difficile infection was administered without a diagnostic test, which may indicate potential attempts to avoid recording HT-CDI events. A minimal sketch of such a balancing metric appears after Figure 1.

Fig. 1. Continuum of electronic measurement in NHSN.
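To make the balancing-metric idea concrete, here is a minimal sketch that counts C. difficile treatment courses started without a preceding diagnostic test. The data model and the 2-day look-back window are illustrative assumptions, not NHSN specifications.

```python
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class TreatmentCourse:
    patient_id: str
    start_date: date

@dataclass
class CdiffTest:
    patient_id: str
    test_date: date

def untested_treatment_count(courses: list[TreatmentCourse],
                             tests: list[CdiffTest],
                             lookback_days: int = 2) -> int:
    """Count treatment courses with no C. difficile test in the look-back
    window; a rising count may signal attempts to avoid recording events."""
    count = 0
    for course in courses:
        window_start = course.start_date - timedelta(days=lookback_days)
        tested = any(
            t.patient_id == course.patient_id
            and window_start <= t.test_date <= course.start_date
            for t in tests
        )
        if not tested:
            count += 1
    return count
```

Reported alongside the HT-CDI outcome itself, a metric like this helps distinguish genuine improvement from measurement-driven changes in testing behavior.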

The first NHSN digital measures that leverage FHIR-based reporting are planned for release in 2023 and will address glycemic control, mirroring reporting that is currently specified as a CMS eCQM. Later in 2023, NHSN will add the ability to collect HT-CDI and hospital-onset bacteremia and fungemia (HOB) fully digitally. Additional HAI, condition, and patient-safety measures are under development, addressing topics such as sepsis, respiratory infections, venous thromboembolism, and emergency preparedness.

Opportunities and challenges as HAIs become digital quality measures

The progression of digital measurement presents new opportunities not only for patient-level risk adjustment and less burdensome data collection but also for more timely collection and reporting of surveillance metrics. The rapid data exchange enabled by tools like FHIR raises the question of how frequently HAI and other measure reporting to NHSN will take place, and how much closer NHSN can move to “real-time” reporting. Although FHIR can enable nearly continuous data exchange, this cadence is likely not needed for most NHSN measures, for several reasons. First, creating risk-adjusted quality measures may require data elements such as administrative codes (which may also be used to exclude patients from a measure) that are typically not finalized until after an encounter or admission is completed. Second, a higher frequency of data exchange with NHSN would need to be balanced against the computing burden on users’ FHIR servers. Facilities and users could benefit from more frequent feedback on preliminary events, such as a daily feed of new preliminary HOB events. However, on-site EHR tools or vendor applications that leverage additional contextual local data may be better suited to provide near real-time feedback or clinical decision-making tools for improving the quality of care. For example, a local CDI prevention application could alert an infection preventionist to a daily list of preliminary HT-CDI events, which could be analyzed to assess the physical proximity of patients to each other, shared HCP, and records of specific cleaning practices, all of which go beyond what NHSN might need for HAI reporting.

Although FHIR is a transformative tool for data interoperability and may facilitate the movement to dQMs with streamlined data exchange, it is important to acknowledge that advances in data interoperability cannot by themselves improve the quality of data documentation. For example, accurate electronic documentation of central lines is challenging.23 Until documentation practices improve, it will be challenging to bring these kinds of data elements into new dQMs and to transition legacy NHSN measures into dQMs. Standardized, comparable measurement of patient safety and quality will depend on the details of how each facility maps EHR data elements into FHIR resources, so that the resulting exchanged data are consistent and standardized across all EHR implementations. Leveraging evolving FHIR standards and standardized terminologies will be essential to the success of FHIR-based dQMs.
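The sketch below illustrates the kind of mapping this paragraph describes: a local EHR lab code is translated into a FHIR Observation carrying a standardized LOINC code, so that the same result is represented identically regardless of the source system. The local code table and the example values are illustrative assumptions; any LOINC code used in practice should be verified against a current release.

```python
# A sketch of mapping a local EHR lab code to a FHIR R4 Observation with a
# standardized LOINC code. The local-code table and values are illustrative.
LOCAL_TO_LOINC = {
    # local EHR lab code -> (LOINC code, display); verify before use
    "BLDCX": ("600-7", "Bacteria identified in Blood by Culture"),
}

def to_fhir_observation(local_code: str, patient_id: str,
                        effective: str, value_text: str) -> dict:
    """Build a FHIR Observation so the same result is represented
    identically regardless of the source EHR's internal coding."""
    loinc_code, display = LOCAL_TO_LOINC[local_code]
    return {
        "resourceType": "Observation",
        "status": "final",
        "code": {
            "coding": [{
                "system": "http://loinc.org",
                "code": loinc_code,
                "display": display,
            }]
        },
        "subject": {"reference": f"Patient/{patient_id}"},
        "effectiveDateTime": effective,  # eg, "2023-06-01T08:30:00Z"
        "valueString": value_text,       # eg, organism name as free text
    }
```

If two facilities map the same local concept to different standard codes, or fail to map it at all, the exchanged data will not be comparable, which is why terminology mapping is as consequential as the transport standard itself.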

NHSN is currently testing its FHIR gateway (called “NHSNLink”) with partner sites to validate and evaluate the process of data exchange. Efforts such as the US Core Data for Interoperability (USCDI) have been essential in prioritizing the first set of health data classes and elements for interoperable data exchange using FHIR, moving the field toward fully automated reporting. However, data submissions will require initial and ongoing validation to ensure that data are not missing or otherwise biased by any number of potential nuances in the EHR mappings at each facility. NHSN is developing a validation structure that will encompass both comprehensive validation prior to the release of a measure and durable, ongoing validation as part of routine postimplementation activities.
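As an example of the kind of automated check that ongoing validation might include, the sketch below scans submitted resources for missing required fields. The required-field list and data shapes are illustrative assumptions, not NHSN’s validation specification.

```python
# A sketch of an automated completeness check on submitted FHIR resources.
# The required-field list is an illustrative assumption.
REQUIRED_OBSERVATION_FIELDS = ("status", "code", "subject", "effectiveDateTime")

def missing_fields(resource: dict) -> list[str]:
    """Return required fields absent from a FHIR Observation resource."""
    return [f for f in REQUIRED_OBSERVATION_FIELDS if f not in resource]

def validate_submission(resources: list[dict]) -> dict[str, list[str]]:
    """Map each incomplete resource's id to its missing fields so that gaps
    introduced by local EHR-to-FHIR mappings can be detected and corrected."""
    problems: dict[str, list[str]] = {}
    for resource in resources:
        gaps = missing_fields(resource)
        if gaps:
            problems[resource.get("id", "<no id>")] = gaps
    return problems
```

Completeness checks like this catch only structural gaps; detecting systematic bias in the submitted values still requires comparison against manually validated reference data.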

Finally, there is a need to monitor how digital measurement leveraging new tools such as FHIR affects the engagement of frontline HCP. Current data abstraction by infection preventionists is labor intensive, but it often involves collaborations with frontline staff that are critical to driving improvement. Digital measurement should not reduce the important need for education on how to use and interpret quality measures to improve care in individual units and facilities. The NHSN program is committed to continuing to provide a comprehensive system for evaluating healthcare quality that includes data feedback and education.

In conclusion, advances in data interoperability have provided new opportunities for more robust digital quality measures while shifting the burden of reporting onto electronic systems and away from manual data collection. These new digital quality metrics include new HAI surveillance that will soon be incorporated into the CDC NHSN portfolio as well as measurement of patient safety beyond the realm of infection prevention. As the field continues to move forward, lessons learned from earlier efforts to digitize and automate quality measurement will be leveraged to achieve the vision of seamless, automated measurement that is also scientifically rigorous and valid and promotes the best possible care of patients.

Disclaimer

The findings and conclusions in this report are those of the authors and do not necessarily represent the views of the Centers for Disease Control and Prevention (CDC).

References

1. Dixon RE. Control of healthcare-associated infections, 1961–2011. MMWR Suppl 2011;60:58–63.
2. Haley RW, Culver DH, White JW, et al. The efficacy of infection surveillance and control programs in preventing nosocomial infections in US hospitals. Am J Epidemiol 1985;121:182–205.
3. Tokars JI, Richards C, Andrus M, et al. The changing face of surveillance for health care-associated infections. Clin Infect Dis 2004;39:1347–1352.
4. Kohn L. To err is human: an interview with the Institute of Medicine’s Linda Kohn. Jt Comm J Qual Improv 2000;26:227–234.
5. Centers for Medicare & Medicaid Services. Medicare program; changes to the hospital inpatient prospective payment systems and fiscal year 2008 rates. Fed Regist 2007;72:47129–48175.
6. Centers for Medicare & Medicaid Services. HHS Medicaid program: payment adjustment for provider-preventable conditions including health care-acquired conditions. Final rule. Fed Regist 2011;76:32816–32838.
7. Centers for Medicare & Medicaid Services. HHS Medicare program: hospital inpatient prospective payment systems for acute-care hospitals and the long-term-care hospital prospective payment system and fiscal year 2014 rates; quality reporting requirements for specific providers; hospital conditions of participation; payment policies related to patient status. Final rules. Fed Regist 2013;78:50495–51040.
8. Jhung MA, Banerjee SN. Administrative coding data and healthcare-associated infections. Clin Infect Dis 2009;49:949–955.
9. Landers T, Apte M, Hyman S, Furuya Y, Glied S, Larson E. A comparison of methods to detect urinary tract infections using electronic data. Jt Comm J Qual Patient Saf 2010;36:411–417.
10. Stevenson KB, Khan Y, Dickman J, et al. Administrative coding data, compared with CDC/NHSN criteria, are poor indicators of healthcare-associated infections. Am J Infect Control 2008;36:155–164.
11. Tehrani DM, Russell D, Brown J, et al. Discord among performance measures for central-line–associated bloodstream infection. Infect Control Hosp Epidemiol 2013;34:176–183.
12. Hartmann CW, Hoff T, Palmer JA, Wroe P, Dutta-Linn MM, Lee G. The Medicare policy of payment adjustment for healthcare-associated infections: perspectives on potential unintended consequences. Med Care Res Rev 2012;69:45–61.
13. Lee GM, Hartmann CW, Graham D, et al. Perceived impact of the Medicare policy to adjust payment for healthcare-associated infections. Am J Infect Control 2012;40:314–319.
14. Calderwood MS, Kleinman K, Soumerai SB, et al. Impact of Medicare’s payment policy on mediastinitis following coronary artery bypass graft surgery in US hospitals. Infect Control Hosp Epidemiol 2014;35:144–151.
15. Lee GM, Kleinman K, Soumerai SB, et al. Effect of nonpayment for preventable infections in US hospitals. N Engl J Med 2012;367:1428–1437.
16. Page B, Klompas M, Chan C, et al. Surveillance for healthcare-associated infections: hospital-onset adult sepsis events versus current reportable conditions. Clin Infect Dis 2021;73:1013–1019.
17. Haut ER, Pronovost PJ. Surveillance bias in outcomes reporting. JAMA 2011;305:2462–2463.
18. Backman LA, Melchreit R, Rodriguez R. Validation of the surveillance and reporting of central line-associated bloodstream infection data to a state health department. Am J Infect Control 2010;38:832–838.
19. Backman LA, Nobert G, Melchreit R, Fekieta R, Dembry LM. Validation of the surveillance and reporting of central-line–associated bloodstream infection denominator data. Am J Infect Control 2014;42:28–33.
20. Haley VB, DiRienzo AG, Lutterloh EC, Stricof RL. Quantifying sources of bias in National Healthcare Safety Network laboratory-identified Clostridium difficile infection rates. Infect Control Hosp Epidemiol 2014;35:1–7.
21. Hazamy PA, Van Antwerpen C, Tserenpuntsag B, et al. Trends in validity of central-line–associated bloodstream infection surveillance data, New York State, 2007–2010. Am J Infect Control 2013;41:1200–1204.
22. Keller SC, Linkin DR, Fishman NO, Lautenbach E. Variations in identification of healthcare-associated infections. Infect Control Hosp Epidemiol 2013;34:678–686.
23. Tejedor SC, Garrett G, Jacob JT, et al. Electronic documentation of central venous catheter days: validation is essential. Infect Control Hosp Epidemiol 2013;34:900–907.
24. Thompson DL, Makvandi M, Baumbach J. Validation of central-line–associated bloodstream infection data in a voluntary reporting state: New Mexico. Am J Infect Control 2013;41:122–125.
25. Pokorny L, Rovira A, Martin-Baranera M, Gimeno C, Alonso-Tarres C, Vilarasau J. Automatic detection of patients with nosocomial infection by a computer-based surveillance system: a validation study in a general hospital. Infect Control Hosp Epidemiol 2006;27:500–503.
26. Trick WE, Zagorski BM, Tokars JI, et al. Computer algorithms to detect bloodstream infections. Emerg Infect Dis 2004;10:1612–1620.
27. Woeltje KF, Butler AM, Goris AJ, et al. Automated surveillance for central-line–associated bloodstream infection in intensive care units. Infect Control Hosp Epidemiol 2008;29:842–846.
28. Yokoe DS, Khan Y, Olsen MA, et al. Enhanced surgical-site infection surveillance following hysterectomy, vascular, and colorectal surgery. Infect Control Hosp Epidemiol 2012;33:768–773.
29. Yokoe DS, Noskin GA, Cunningham SM, et al. Enhanced identification of postoperative infections among inpatients. Emerg Infect Dis 2004;10:1924–1930.
30. Dantes RB, Rock C, Milstone AM, et al. Preventability of hospital onset bacteremia and fungemia: a pilot study of a potential healthcare-associated infection outcome measure. Infect Control Hosp Epidemiol 2019;40:358–361.
31. Magill SS, Klompas M, Balk R, et al. Developing a new, national approach to surveillance for ventilator-associated events: executive summary. Infect Control Hosp Epidemiol 2013;34:1239–1243.
32. MacLean CH, Kerr EA, Qaseem A. Time out—charting a path for improving performance measurement. N Engl J Med 2018;378:1757–1761.
33. Berwick DM, Cassel CK. The NAM and the quality of health care—inflecting a field. N Engl J Med 2020;383:505–508.
34. Meyer GS, Nelson EC, Pryor DB, et al. More quality measures versus measuring what matters: a call for balance and parsimony. BMJ Qual Saf 2012;21:964–968.
35. Dunlap NE, Ballard DJ, Cherry RA, et al. Observations from the Field: Reporting Quality Metrics in Health Care. NAM Perspectives. Discussion Paper. Washington, DC: National Academy of Medicine; 2016.
36. Blumenthal D, Tavenner M. The “meaningful use” regulation for electronic health records. N Engl J Med 2010;363:501–504.
37. Kern LM, Malhotra S, Barron Y, et al. Accuracy of electronically reported “meaningful use” clinical quality measures: a cross-sectional study. Ann Intern Med 2013;158:77–83.
38. Kern LM, Barron Y, Dhopeshwarkar RV, et al. Electronic health records and ambulatory quality of care. J Gen Intern Med 2013;28:496–503.
39. Casalino LP, Gans D, Weber R, et al. US physician practices spend more than $15.4 billion annually to report quality measures. Health Aff (Millwood) 2016;35:401–406.
40. Chan KS, Fowles JB, Weiner JP. Review: electronic health records and the reliability and validity of quality measures: a review of the literature. Med Care Res Rev 2010;67:503–527.
41. Blumenthal D, McGinnis JM. Measuring vital signs: an IOM report on core metrics for health and healthcare progress. JAMA 2015;313:1901–1902.
42. Centers for Medicare & Medicaid Services. Digital Quality Measurement Strategic Roadmap, 2022. https://ecqi.healthit.gov/sites/default/files/CMSdQMStrategicRoadmap_032822.pdf. Published 2022. Accessed July 13, 2023.
43. Schreiber M, Krauss D, Blake B, Boone E, Almonte R. Balancing value and burden: the Centers for Medicare & Medicaid Services electronic clinical quality measure (eCQM) strategy project. J Am Med Inform Assoc 2021;28:2475–2482.
44. US Department of Health and Human Services. HAI National Action Plan. https://www.hhs.gov/oidp/topics/health-care-associated-infections/hai-action-plan/index.html. Accessed November 4, 2022.
45. Magill SS, O’Leary E, Janelle SJ, et al. Changes in prevalence of healthcare-associated infections in US hospitals. N Engl J Med 2018;379:1732–1744.
46. Scott RD. The direct medical costs of healthcare-associated infections in US hospitals and the benefits of prevention. Centers for Disease Control and Prevention website. https://www.cdc.gov/hai/pdfs/hai/scott_costpaper.pdf. Published 2009. Accessed July 15, 2023.
47. Fleisher LA, Schreiber M, Cardo D, Srinivasan A. Healthcare safety during the pandemic and beyond—building a system that ensures resilience. N Engl J Med 2022;386:609–611.
48. Sapiano MRP, Dudeck MA, Soe M, et al. Impact of coronavirus disease 2019 (COVID-19) on US hospitals and patients, April–July 2020. Infect Control Hosp Epidemiol 2022;43:32–39.
49. Centers for Disease Control and Prevention, Division of Healthcare Quality Promotion. The NHSN Standardized Infection Ratio (SIR). https://www.cdc.gov/nhsn/pdfs/ps-analysis-resources/nhsn-sir-guide.pdf. Accessed November 4, 2022.
50. Jackson SS, Leekha S, Pineles L, et al. Improving risk adjustment above current Centers for Disease Control and Prevention methodology using electronically available comorbid conditions. Infect Control Hosp Epidemiol 2016;37:1173–1178.
51. Bryant KA, Harris AD, Gould CV, et al. Necessary infrastructure of infection prevention and healthcare epidemiology programs: a review. Infect Control Hosp Epidemiol 2016;37:371–380.
52. Parrillo SL. The burden of National Healthcare Safety Network (NHSN) reporting on the infection preventionist: a community hospital perspective. Am J Infect Control 2015;43:S17.
53. Mayer J, Greene T, Howell J, et al. Agreement in classifying bloodstream infections among multiple reviewers conducting surveillance. Clin Infect Dis 2012;55:364–370.