
An Examination of Longitudinal CAUTI, SSI, and CDI Rates from Key HHS Data Systems

Weinberg, Daniel A. PhD*; Kahn, Katherine L. MD†,‡

doi: 10.1097/MLR.0000000000000027
Original Articles

Background: In response to the growing concern about healthcare-associated infections (HAIs), the US Department of Health and Human Services (HHS) developed the National Action Plan to Prevent Healthcare-associated Infections. A key focus of the Action Plan is the setting of HAI metrics and targets and the enhancement and development of data systems to support HAI surveillance.

Objectives: To identify and assess the strengths and weaknesses of HHS data systems available for surveillance of catheter-associated urinary tract infections, surgical site infections, and Clostridium difficile infections. To present national data from each of the data systems and assess concordance in trends over time.

Research Design: Literature review on data system characteristics and HAI measurement. Graphical and descriptive analyses of longitudinal HAI rates from HHS data systems.

Measures: HAI rate information expressed as prevalence rates or standardized infection ratios.

Results: We identified 4 HHS data systems—Medicare claims data, the Healthcare Cost and Utilization Project, the Medicare Patient Safety Monitoring System, and the National Healthcare Safety Network—capable of surveillance of at least 1 of the HAIs under study. Surgical site infection and Clostridium difficile infection rates display concordance in trends, whereas there is no evidence of concordance in catheter-associated urinary tract infection rates. We identified a number of desirable HAI data system characteristics: clinical validity; information on a broad range of HAIs; a sample size large enough to support statistical inference; representativeness of the United States; and consistency in cohort, surveillance protocols, and data collection methodology.

Conclusions: Although the data systems included in this study vary along the desirable data system dimensions we identified, trends in HAI rates are generally concordant across the data systems. This increases confidence in observed trends.

Supplemental Digital Content is available in the text.

*IMPAQ International, LLC, Columbia, MD

†RAND Health, Santa Monica, CA

‡Division of General Internal Medicine and Health Services Research, David Geffen School of Medicine at University of California, Los Angeles

Supplemental Digital Content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website.

This paper was prepared for a special issue of Medical Care, based upon an evaluation funded by the Agency for Healthcare Research and Quality (AHRQ). Some of the information included in this manuscript was presented as a poster at the 2012 Academy Health Annual Research Meeting.

The authors declare no conflict of interest.

Reprints: Daniel A. Weinberg, PhD, IMPAQ International, LLC, 10420 Little Patuxent Parkway, Columbia, MD 21044 (e-mail:

A 2008 US Government Accountability Office (GAO) report recommended that the US Department of Health and Human Services (HHS)1 provide stronger leadership to prevent and track healthcare–associated infections (HAIs). A major criticism noted in the report was that, even though there are multiple HHS data systems capable of tracking HAIs, they are limited and remain siloed.2 The GAO recommended that the Secretary of HHS work to improve data on HAIs to enhance their utility in generating reliable national HAI estimates.

In response, HHS developed the National Action Plan to Prevent Healthcare-associated Infections (Action Plan). Among the achievements of the Action Plan were the development of national targets and metrics to measure progress on HAI prevention and the enhancement of coordination among HHS agencies in improving HAI data.3 The most recent version of the Action Plan was released in April 2012. As part of the IMPAQ-RAND evaluation of the HHS Action Plan, we profiled the HHS data systems capable of HAI surveillance, assessed their strengths and weaknesses, and gathered and presented HAI rates from these data systems to external and HHS stakeholders. On the basis of these activities, we identified a number of features that an “ideal” HAI surveillance system would possess: clinical validity; capacity to provide information on a broad range of HAIs; large sample size; representativeness of the United States; and consistency over time in cohort, surveillance definitions, and data system function, which includes all other aspects of the data system’s operations, such as data collection tools.

This article first reviews the features of HHS data systems capable of HAI surveillance to assess their adherence to the desirable features listed above. In addition, we present data from these systems to examine longitudinal trends in rates of catheter-associated urinary tract infections (CAUTI), surgical site infections (SSI), and Clostridium difficile infections (CDI). These infections are among the 6 that are the foci of the Action Plan’s first phase. Another article considers 2 of the other HAIs from the Action Plan [central line-associated bloodstream infection (CLABSI) and methicillin-resistant Staphylococcus aureus]. Ventilator-associated pneumonia (PNEU), although an important HAI and a focus of the Action Plan, is excluded from this study because of difficulties associated with surveillance of this infection. This research reflects the growing importance of public reporting2,4 and value-based purchasing in incentivizing providers to prevent HAIs.

Methods

We first identified the HHS data systems capable of CAUTI, SSI, and CDI surveillance. We found that all 3 infections can be tracked using the Centers for Medicare and Medicaid Services (CMS) Medicare claims files and the Agency for Healthcare Research and Quality’s (AHRQ) Healthcare Cost and Utilization Project (HCUP). In addition, CAUTI and CDI are captured by AHRQ’s Medicare Patient Safety Monitoring System (MPSMS), and CAUTI and SSI data are available from the Centers for Disease Control and Prevention’s (CDC) National Healthcare Safety Network (NHSN).

We collected publicly available information on these data sources and also met with HHS agency data experts to learn more about the features of each data system. National NHSN data were publicly available on the CDC’s website, whereas Medicare claims, HCUP, and MPSMS data required special requests to the data-holding agencies. For Medicare claims and HCUP, we developed analysis plans for HAI surveillance (eg, ICD-9-CM code specifications, numerator and denominator inclusion and exclusion criteria) in consultation with agency experts, who performed the analyses.
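To illustrate the general form of such an analysis plan, the sketch below applies an ICD-9-CM-based specification to a handful of discharge records. It is illustrative only: the diagnosis codes, field names, and inclusion criteria are placeholders (code 996.64 is shown simply as an example of a catheter-related infection code), not the specifications actually used in this study.

```python
# Illustrative sketch only: the codes, field names, and inclusion criteria are
# placeholders, not the specifications developed with the agencies.
from dataclasses import dataclass

@dataclass
class Discharge:
    age: int
    diagnosis_codes: list          # ICD-9-CM codes recorded on the claim
    maternal_stay: bool = False

# Hypothetical CAUTI-style numerator codes
NUMERATOR_CODES = {"996.64"}       # eg, infection due to indwelling urinary catheter

def in_denominator(d: Discharge) -> bool:
    """Denominator: nonmaternal adult discharges (illustrative criteria)."""
    return d.age >= 18 and not d.maternal_stay

def in_numerator(d: Discharge) -> bool:
    """Numerator: denominator cases carrying a qualifying ICD-9-CM code."""
    return in_denominator(d) and bool(NUMERATOR_CODES & set(d.diagnosis_codes))

def hai_rate_per_1000(discharges) -> float:
    denom = [d for d in discharges if in_denominator(d)]
    numer = [d for d in denom if in_numerator(d)]
    return 1000.0 * len(numer) / len(denom) if denom else float("nan")

claims = [
    Discharge(age=72, diagnosis_codes=["599.0", "996.64"]),
    Discharge(age=65, diagnosis_codes=["428.0"]),
    Discharge(age=30, diagnosis_codes=["650"], maternal_stay=True),
]
print(hai_rate_per_1000(claims))   # 500.0 per 1,000 eligible discharges in this toy sample
```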

We analyzed national-level data from 2005 through 2010. Each data system/HAI combination had between 2 and 6 longitudinal data points, making our effective sample size per data system very small. Sample years varied by data system so that there was imperfect overlap in the years of data available for each system and even fewer common data points among the systems. This prevented us from conducting statistical tests of similarity in trends across data systems over time, although we do assess concordance across datasets. Data systems are concordant over a specified period of time if changes in those data systems’ HAI rates are in the same direction. However, inference regarding time trends within data systems was feasible. Throughout this study we use P<0.05 as the threshold for statistical significance. At several points throughout our analysis, we discussed results with CDC, AHRQ, ASPE, and CMS data experts, who provided feedback and suggestions for additional research.
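To make the concordance criterion concrete, the following minimal sketch checks whether two data systems' year-over-year rate changes move in the same direction over their overlapping years. The rate values are placeholders, not results from this study.

```python
# Illustrative sketch of the concordance criterion; rate values are placeholders.

def yearly_changes(rates: dict) -> dict:
    """Map each consecutive (year, next year) pair to the sign of the rate change."""
    years = sorted(rates)
    return {
        (y0, y1): (rates[y1] > rates[y0]) - (rates[y1] < rates[y0])  # +1, 0, or -1
        for y0, y1 in zip(years, years[1:])
    }

def concordant(system_a: dict, system_b: dict) -> bool:
    """Two systems are concordant if every overlapping year-over-year change
    has the same direction in both."""
    a, b = yearly_changes(system_a), yearly_changes(system_b)
    shared = a.keys() & b.keys()
    return bool(shared) and all(a[k] == b[k] for k in shared)

medicare = {2007: 1.2, 2008: 1.4, 2009: 1.5, 2010: 1.6}   # hypothetical rates
hcup     = {2007: 1.1, 2008: 1.3, 2009: 1.4}
print(concordant(medicare, hcup))   # True: both increase over the shared year pairs
```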

Results

This section first provides general results from our research into the HHS data systems, followed by a brief discussion of the strengths and shortcomings of the data systems, and then a discussion of HAI rates for CAUTI, SSI, and CDI in turn. For each infection, we present findings on differences in surveillance definitions used by each data system and provide longitudinal HAI rates from each data source.

Discussion of Individual HAI Surveillance Data Systems

NHSN

NHSN was established in 2005, replacing 3 prior CDC surveillance systems: National Nosocomial Infections Surveillance, Dialysis Surveillance Network, and National Surveillance of Healthcare Workers.5 NHSN uses an active surveillance methodology, in which trained hospital personnel use standard definitions and draw on multiple data sources (eg, observation in patient care areas, laboratory results, chart review) to monitor adverse events prospectively (ie, while patients are still in the institution).6 Historically, NHSN has been a voluntary reporting system; however, several states have mandated reporting of HAIs using NHSN, and the Medicare inpatient prospective payment system (IPPS) final rules for FY 2011 through FY 2013 provide a strong incentive for hospitals to report data through NHSN in the future. In particular, hospitals failing to report according to the IPPS rule have their annual payment update decreased by 2 percentage points. Surveillance of HAIs is conducted by the hospitals’ infection preventionists, who submit data collection protocols electronically to CDC monthly. The number of hospitals participating in NHSN grew from 211 in 2006,5 to 1749 in 2009,7 and 3472 in 2011.8 The 2006 figure reflects the number of facilities reporting any device-associated (DA) module data; the 2009 figure reflects the number of hospitals submitting at least denominator data for some patient cohorts under surveillance in 2009; and the 2011 figure reflects the number of facilities reporting CLABSI data.

MPSMS

MPSMS, a collaborative effort across several HHS agencies including CMS, AHRQ, and CDC, was designed to retrospectively monitor the prevalence of adverse events (ie, surveillance occurs after the patient leaves the institution) at the national level. State-level HAI rate estimates are not feasible using MPSMS because of the small number of inpatient records per state abstracted each year. Moreover, changes in the system’s data collection methodology and patient cohort present challenges for longitudinal analysis of rates. MPSMS used the same data collection software during the 2002–2006 and 2009–2010 periods; in 2007, however, it adopted a new data collection tool, which may have affected its consistency over time. In 2009, MPSMS abandoned the 2007 data collection tool and reverted to the tool used before 2007. Also in 2009, MPSMS experienced a change in the cohort under study: before 2009, MPSMS drew random samples of Medicare fee-for-service (FFS) beneficiaries from each state. In 2009 and later years, the sample shifted to include care paid for by all payers, but only for individuals who were diagnosed with acute myocardial infarction, heart failure, or PNEU, or who received a procedure included in the Surgical Care Improvement Project, which aims to reduce surgical complications.9 According to a presentation given by Dr Bill Munier, Director of AHRQ’s Center for Quality Improvement and Patient Safety, during the 2012 AHRQ Annual Conference, MPSMS is currently being redesigned in terms of both its adverse event surveillance definitions and the software tool that will be used for data collection.

HCUP and Medicare FFS Claims Data

Two administrative data sources capable of detecting HAIs are AHRQ’s HCUP databases and Medicare FFS claims data. The HCUP State Inpatient Databases (SID) contain all-payer information on the census of discharges from community hospitals in the participating states (44 states as of 2011) and account for over 90% of inpatient discharges.10 HCUP’s National Inpatient Sample (NIS) is a weighted probability sample of the SID and is nationally representative. Medicare FFS claims capture all billing claims for beneficiaries in the traditional Medicare FFS program. Researchers are able to develop HAI surveillance specifications using ICD-9-CM codes, which are then applied to the claims databases to detect HAIs. All surveillance based on administrative claims data is retrospective because claims capture information based on discharges.

Discussion Across HAI Data Systems

Below we describe 5 data system features that are desirable for HAI surveillance. These characteristics are based on a literature scan and iterative discussions with HAI data experts.

Clinical Validity

Clinical validity refers to a data system’s capability to accurately capture a specific clinical phenomenon. This requires that the data source contain sufficiently granular data and that the accompanying surveillance definitions reflect accurate clinical understanding.

Consensus has emerged regarding the strength of NHSN on the clinical validity dimension. Hospital staff gathering NHSN data are able to interact both with the patient and with the physicians caring for that patient, thus enabling more precise differentiation between infections that are healthcare-associated and those that are not. The Society for Healthcare Epidemiology of America and the Association for Professionals in Infection Control and Epidemiology support the use of NHSN data for public reporting, and several NHSN measures are NQF-endorsed.11 NHSN and its definitions have also been adopted by states requiring mandatory HAI reporting, and, in the private sector, the Leapfrog Group’s annual survey of hospital quality collects information on hospital CLABSI rates based on the CDC surveillance definition. In addition, NHSN has been used as a “gold standard” in studies.12–14

On the basis of the clinical algorithms documented in the MPSMS annual reports,9 MPSMS surveillance methodologies also incorporate rich clinical detail. However, MPSMS data are based on chart abstraction and are thus limited to documentation existing in the medical record. Claims data are limited to information captured for billing purposes (ie, ICD-9-CM codes) and are thus the least clinically detailed. A number of studies suggest that HAI surveillance using administrative data produces rates of questionable accuracy.12–22

Broad Range of HAIs

Holding all other data system features constant, a data system with the capacity to provide information on many HAIs is preferable to a system having more restricted scope. In general, subject to the limitations described above, administrative claims can be used for surveillance of any in-hospital event recorded using billing codes: finalized claims data are stored for years, and researchers need only develop additional protocols for identifying HAIs to expand the group of HAIs under surveillance. Medical records also remain available for chart abstraction, such as that used by MPSMS, long after hospital discharge. In comparison, as an active system, NHSN requires data collection during the index hospitalization (or, for SSI, during the index hospitalization or during a readmission), thus making it difficult and sometimes impossible to retrospectively apply new surveillance protocols to the data after the patient’s discharge.

Large Sample Size

Large samples permit the use of statistical inference to detect trends over time in HAI rates and also to compare rates across geography and subpopulations of interest. Medicare claims and HCUP data provide very large samples: Medicare claims are available for the census of FFS Medicare beneficiaries in the United States, and HCUP data are available for every hospital discharge in participating states. MPSMS and NHSN, although richer in clinical detail, place more burden on those collecting the data and thus require substantial resources to maintain large samples. MPSMS data are collected from medical records by CMS’s Clinical Data Abstraction Center contractor. The MPSMS sample is much smaller than that afforded by administrative claims data. For example, 17,975 records were abstracted in 2009.9 In practice, this relatively small sample reduces statistical power, making it difficult to detect statistical significance for small effect sizes. As described previously, for most of its life, NHSN has been a voluntary system. Over time, use of NHSN has increased as states have mandated reporting through NHSN and as CMS has implemented pay-for-reporting as part of the annual payment update process outlined in the Medicare IPPS rules since FY 2011. The 2006 DA module report indicates that hospitals reported approximately 982,000 central line days. The corresponding number in 2010 was approximately 9.3 million.5,23 It is important to note, however, that different NHSN modules have different participation rates, with the DA module being the most widespread and CLABSI being the most-reported infection.
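To make the sample-size point concrete, the sketch below compares the width of a simple normal-approximation 95% confidence interval for a hypothetical 2% adverse event rate at the 2009 MPSMS sample size (17,975 records) and at a claims-scale sample of 5 million records. Both the event rate and the claims-scale n are assumptions for illustration, and the calculation ignores survey weighting, clustering, and design effects; it is meant only to show the order-of-magnitude difference in precision.

```python
# Simple binomial normal-approximation CI half-widths; ignores survey weights,
# clustering, and design effects. The 2% event rate and claims-scale n are hypothetical.
from math import sqrt

def ci_half_width(p: float, n: int, z: float = 1.96) -> float:
    return z * sqrt(p * (1 - p) / n)

p = 0.02                                # hypothetical adverse event rate
for n in (17_975, 5_000_000):           # MPSMS-scale vs. claims-scale sample
    hw = ci_half_width(p, n)
    print(f"n={n:>9,}: 95% CI roughly {p - hw:.4f} to {p + hw:.4f} "
          f"(half-width {hw:.4f})")
# The half-width at n=17,975 is about 0.0020 (0.2 percentage points), roughly
# 17 times wider than at n=5,000,000, so only larger rate changes are detectable.
```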

Representativeness

From policy and research perspectives, it is desirable to understand how HAI rates are changing over time for the entire United States and also at the state level. The MPSMS sample draws the same number of medical records from each state and thus oversamples small states and undersamples large states. Qualidigm adjusts the rate estimates accordingly to make MPSMS adverse event rates nationally representative. The Medicare claims data are clearly representative of the Medicare population in the United States. HCUP’s SID, which includes all inpatient stays in participating states (over 40 states in 2011), is representative of individuals in participating states. The NIS is a stratified sample of the SID, weighted to be nationally representative. AHRQ data experts have indicated that as states’ participation in the SID has increased over time, the data have become increasingly representative of the US population. As described previously, NHSN participation has increased over the last several years, broadening its reach to a more diverse set of hospitals and all 50 states, Washington, DC, and Puerto Rico.8 This is in contrast to the system’s early periods, during which participating hospitals tended to be larger, more likely to be affiliated with academic institutions, and more likely to be located in the mid-Atlantic and south-Atlantic regions of the United States.24

Consistency in Cohort, Surveillance Definitions, and System Function

To conduct longitudinal research and to gauge progress on HAI prevention, it is necessary to have stability in HAI rate data over time. Cohort stability refers to consistency in the population under study; surveillance definition stability requires that the data system use consistent specifications to identify HAIs; and system function stability requires that all other aspects of the data infrastructure (eg, data collection tool, data collection methodology) remain constant. Medicare claims and HCUP have remained stable in terms of cohort because these datasets include all FFS Medicare beneficiaries and all inpatient claims in participating states, respectively. In addition, because researchers apply surveillance protocols to existing data, maintaining consistent surveillance definitions over time requires only that researchers apply the same protocols in each year. Exceptions occur when ICD-9 codes are introduced or retired. In addition, the future shift to ICD-10 is likely to cause complications.

Unlike claims data, both NHSN and MPSMS have experienced major changes. The group of hospitals participating in NHSN has changed over time; of particular concern is the fact that the cohort of hospitals under surveillance during the 2006–2008 period that NHSN uses as its referent sample for calculation of standardized infection ratios (SIR) differs from the much broader and more representative group reporting in recent years. Although the SIR methodology adjusts for case-mix differences across cohorts by stratifying rates by care location, care location bed size, and medical school affiliation (in the case of CAUTI) and procedure-level risk factors (in the case of SSI), it is likely that the quality of surveillance conducted by hospitals reporting to NHSN during the baseline period differs from that of hospitals that have been incentivized by the Medicare IPPS rules to begin reporting.8 If this is the case, then the cohort reporting during the referent period differs in important ways from the cohort reporting in more recent years, threatening the validity of observed trends in HAI rates reported through NHSN. In addition, NHSN surveillance protocols have changed (the cases of CAUTI and SSI are discussed in the following sections) and there is evidence of variation in the application of NHSN surveillance definitions.25
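For reference, the standardized infection ratio calculation can be summarized as follows. This is a simplified rendering rather than the agency's full specification, with s indexing the reporting strata named in the text (eg, care location, bed size, medical school affiliation, or procedure-level risk categories):

```latex
\mathrm{SIR} = \frac{O}{E}, \qquad E = \sum_{s} \lambda_{s}^{\mathrm{referent}} \, t_{s}
```

Here O is the number of infections observed in the reporting cohort, \lambda_s^{referent} is the infection rate observed in stratum s during the referent (baseline) period, and t_s is the reporting cohort's exposure in stratum s (eg, device-days or procedures); an SIR below 1 therefore indicates fewer infections than the referent experience would predict for the same exposure.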

Similarly, although MPSMS algorithms have remained stable, the system has experienced changes in the cohort of patients under surveillance and the data collection tool used for abstraction.

In summary, there are advantages and disadvantages to all existing data systems. Administrative data (HCUP and Medicare claims) are limited to information captured for the purpose of billing and thus lack clinical detail. However, claims data are available over time, inexpensive to gather and use, and have large sample sizes, consistent cohorts, and broad geographical coverage. NHSN and MPSMS offer rich clinical detail but have not consistently had the other positive attributes listed above. MPSMS, in particular, has a small sample size, which limits the ability to use statistical inference to detect longitudinal changes in HAI rates, and has experienced changes in its data collection methodology and cohort, making longitudinal comparisons more difficult.

HAI Rate Results

CAUTI Results

Table 1 provides an overview of CAUTI surveillance for each of the data systems included in this study. In 2009, the NHSN CAUTI criteria underwent major changes, and the 2009 definition is presented in the exhibit. The 2009 changes updated the symptomatic urinary tract infection (UTI) criteria by removing symptoms related to the use of catheters alone, removed asymptomatic bacteriuria from the numerator, and added asymptomatic bacteremic UTI to the numerator. Since CAUTI highlights infections among patients who have been instrumented with a catheter, all 4 data systems have denominator specifications that seek to limit included cases to patients with a urinary catheter during the hospital stay. However, the HCUP CAUTI specifications include all nonmaternal adult discharges, which include many cases never exposed to a urinary catheter. With the exception of NHSN, all data systems use discharges as the denominator unit.



There are also important differences between the numerators used by MPSMS and by NHSN. MPSMS requires that the UTI be diagnosed during the inpatient stay and at least 1 day following catheter insertion. NHSN, however, requires that a catheter must have been present within the 48 hours before UTI onset, and there is no minimum amount of time during which the catheter must have been in place before the UTI. In addition, to be counted as a CAUTI, MPSMS requires that antibiotics be prescribed; NHSN does not require this. Although NHSN requires laboratory confirmation of the UTI, MPSMS relies on physician diagnosis, which, according to the MPSMS technical expert panel, generally requires laboratory evidence in the acute care hospital setting (Table 1).
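To make the timing differences concrete, the sketch below paraphrases the two numerator rules as simple predicates. The field names and the reduction of the full criteria to timing, antibiotic, and laboratory conditions are illustrative simplifications, not the complete surveillance definitions.

```python
# Illustrative paraphrase of the timing rules described above; field names and
# the reduction to a few conditions are simplifications, not the full definitions.
from datetime import date, timedelta
from typing import Optional

def mpsms_cauti(catheter_inserted: date, uti_diagnosed: date,
                discharged: date, antibiotics_prescribed: bool) -> bool:
    """MPSMS-style rule: UTI diagnosed during the stay, at least 1 day after
    catheter insertion, with antibiotics prescribed (physician diagnosis)."""
    return (uti_diagnosed <= discharged
            and uti_diagnosed >= catheter_inserted + timedelta(days=1)
            and antibiotics_prescribed)

def nhsn_cauti(catheter_removed: Optional[date], uti_onset: date,
               lab_confirmed: bool) -> bool:
    """NHSN-style rule: a catheter in place at UTI onset or removed within the
    preceding 48 hours, no minimum dwell time, laboratory confirmation required."""
    still_in_place = catheter_removed is None or catheter_removed >= uti_onset
    removed_recently = (catheter_removed is not None
                        and uti_onset - catheter_removed <= timedelta(hours=48))
    return lab_confirmed and (still_in_place or removed_recently)

# A UTI diagnosed 1 day after insertion, before discharge, satisfies both
# illustrative rules.
print(mpsms_cauti(date(2010, 3, 1), date(2010, 3, 2), date(2010, 3, 5), True))  # True
print(nhsn_cauti(None, date(2010, 3, 2), True))                                 # True: catheter still in place
```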

Figure 1 combines in the same diagram the CAUTI rates from Medicare claims, HCUP, MPSMS, and NHSN. Present on admission (POA) indicators are available in the Medicare claims beginning on October 1, 2007. Although it would be possible to incorporate the POA indicator into the analysis, doing so would make longitudinal comparisons difficult because the POA indicator was introduced in the middle of our sample period. Thus, we do not use the POA indicator and label the series as “regardless of POA.” The HCUP measure does not account for infections that were POA either. However, the surveillance protocols used by MPSMS and NHSN do account for infections existing at the time of admission. Although the introduction of the POA indicator has the potential to improve the validity of HAI surveillance using administrative data by providing a means to differentiate between infections that are present on admission versus those developed during the hospital stay, the limited observation window (post-October 2007) for rates that use the POA indicator limits the ability to track HAI longitudinally. In the future, however, after several years of demonstrated stability in the use of the POA indicator, we recommend using the “excluding POA” specification for comparisons with the nonadministrative data systems. Figure 1 shows that CAUTI rates from Medicare claims, HCUP, and MPSMS differ substantially in their levels due to the differences in cohort and surveillance definitions and methodologies discussed earlier; NHSN data are expressed as SIRs so are not comparable in terms of levels to the other systems’ prevalence rates.8 The exhibit also indicates that CAUTI rates from both Medicare claims and HCUP increased over 2007–2009, and the Medicare claims rate continued to increase through 2010, although at a slower rate. Similarly, the HCUP rate increased more slowly in 2009 than in 2008; changes in rates based on the HCUP NIS are statistically significant. MPSMS rates were stable over time and do not display any statistically significant changes. Although the NHSN SIR decreased by 7% in 2010 (a statistically significant change), the Medicare claims rate increased in 2010, indicating discordance in CAUTI rates for these 2 data sources.



SSI Results

Table 2 presents important features of SSI surveillance for Medicare claims, HCUP, and NHSN. The NHSN SSI definition changed beginning in 2013 and the exhibit reflects the prior definition, which is applicable to the sample period included here (more information on the new surveillance specifications is available from CDC27). The surgical procedures included in the Medicare claims and HCUP denominators differ somewhat from those included in the NHSN definition. The procedures in bold listed under Medicare claims and HCUP are included in those systems’ definitions but not in NHSN’s definition. The item in bold under NHSN is not included under the Medicare and HCUP definitions. HCUP and NHSN use surgeries as the denominator unit, whereas for Medicare claims the denominator is expressed in terms of discharges where one of the listed surgeries was delivered during the hospital stay.



An important difference among the data systems is the use of a follow-up period. One of the Medicare claims specifications (the “augmented” specification) uses a follow-up period of 30 days if the procedure did not involve an implant, and 1 year if the procedure involved an implant. In this context, implant refers to: “A non-human–derived object, material, or tissue that is permanently placed in a patient during an operative procedure and is not routinely manipulated for diagnostic or therapeutic purposes.”28 For Medicare claims, the follow-up period captures any inpatient stays, regardless of whether the hospital in which the follow-up occurs differs from that of the index stay. We do not present the augmented specification for 3 reasons: (i) the 1-year follow-up period is incomplete for 2010 because Medicare claims data were not yet fully available for the 2011 follow-up period. This results in an artificial drop in rates in the final sample year, which is an artifact of data availability; (ii) we found that the movement of the augmented and nonaugmented rates is similar; and (iii) using the nonaugmented specification enhances the comparability of the Medicare rates with those of HCUP. NHSN uses the same follow-up periods, although the NHSN follow-up period is only for the hospital at which the initial surgery took place. Both NHSN and Medicare claims include only those follow-up infections detected during inpatient hospital stays. A follow-up period is not feasible using HCUP data, because the HCUP databases have only limited identifiers that allow researchers to follow patients across years, and the identifiers are only available for the SID and not the NIS.29 Thus, for HCUP, the numerator includes only SSIs occurring during the hospital stay in which the surgery takes place (Table 2).
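A minimal sketch of the follow-up window logic described for the "augmented" claims specification and for NHSN appears below. The dates and field names are hypothetical, and the hospital-matching nuance (NHSN restricting follow-up to the index hospital) is reduced to a single flag.

```python
# Illustrative follow-up window check; dates, field names, and the reduction of
# the surveillance rules to a window test are simplifications of the text above.
from datetime import date

def ssi_window_days(procedure_involved_implant: bool) -> int:
    """30-day window without an implant, 1-year window with an implant."""
    return 365 if procedure_involved_implant else 30

def counts_as_ssi(surgery_date: date, infection_admission_date: date,
                  procedure_involved_implant: bool,
                  same_hospital: bool, restrict_to_index_hospital: bool) -> bool:
    """Count an inpatient-detected infection if it falls inside the follow-up
    window; optionally require readmission to the index hospital (NHSN-style)."""
    days_elapsed = (infection_admission_date - surgery_date).days
    in_window = 0 <= days_elapsed <= ssi_window_days(procedure_involved_implant)
    return in_window and (same_hospital or not restrict_to_index_hospital)

# A readmission with infection 45 days after an implant procedure is counted
# under the 1-year window but would fall outside a 30-day window.
print(counts_as_ssi(date(2009, 5, 1), date(2009, 6, 15), True, True, True))    # True
print(counts_as_ssi(date(2009, 5, 1), date(2009, 6, 15), False, True, True))   # False
```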

Figure 2 provides SSI rates from Medicare claims, HCUP, and NHSN in the same diagram. Rates based on Medicare claims and HCUP are similar in their magnitudes. As noted earlier, NHSN data are expressed as SIRs. Although their levels cannot be compared with prevalence rates, we can analyze trends over time. All 3 data sources indicate that SSIs have decreased over the sample period. The observed concordance across the data sources, particularly following 2008, may be the result of more accurate coding in the administrative data systems: there are a limited number of postsurgical infections and of ways in which a surgical patient can develop an infection, and this limited scope may improve coding accuracy.



C. difficile Results

Table 3 highlights the important features of CDI surveillance for the systems included in this study. There are important differences between MPSMS surveillance and those of the administrative data sources. MPSMS requires that denominator cases have a C. difficile assay ordered and an antibiotic administered during the hospital stay because antibiotic administration is a major risk factor for developing CDI. Thus, the MPSMS denominator is substantially smaller than those of Medicare claims and HCUP (in 2009, the CDI denominator for MPSMS was 303; for Medicare claims it was 234,677; and for HCUP it was 541,850). In addition, whereas MPSMS incorporates the timing of antibiotic administration into its numerator criteria, the administrative data do not have any antibiotic administration requirement. MPSMS requires that antibiotics be received at least 1 day before the C. difficile assay (Table 3).



Figure 3 provides CDI rates from Medicare claims, HCUP, and MPSMS. Rates from HCUP and Medicare claims are similar in terms of their levels, whereas those from MPSMS are lower. This difference is due to the varying case finding protocols and surveillance methodologies, as described previously. MPSMS rates did not show any statistically significant changes over time. The 2 administrative data systems, Medicare claims and HCUP, show that CDI rates increased in 2008, decreased in 2009, and then increased in 2010. In addition, the HCUP projection (which imputes full SID estimates using the SAS Time Series Forecasting System applied to quarterly inpatient data for states that reported early) indicates that CDI rates will increase in 2011. CDI rates based on Medicare claims and HCUP are concordant in terms of their longitudinal trends.



Discussion

This study identifies desirable features of data systems for HAI surveillance, examines 4 HHS data systems capable of CAUTI, SSI, and CDI surveillance, and presents and compares HAI rates over time for each system and infection type. We find that trends over time for SSI and CDI are concordant, whereas those for CAUTI are not. Also, as a result of important differences in case finding methodology and cohort, the infection rates differ substantially across data systems in terms of their levels. The HAIs studied here are among the 6 included in phase 1 of the National Action Plan effort.

CAUTI

According to the Action Plan’s NHSN metric, CAUTI rates decreased by 7% in 2010 in comparison with the 2009 baseline period.30 Our analysis indicates that CAUTI rates based on HCUP and Medicare claims increased from 2007 through 2009, and that the Medicare claims rate continued to increase in 2010. There is no evidence of concordance among data systems for CAUTI.

SSI

The Action Plan’s NHSN metric indicates that SSI rates decreased by 2% and 10% in 2009 and 2010, respectively, in comparison with the 2006–2008 baseline period.30 MPSMS did not exhibit statistically significant changes in SSI over this time period. However, Medicare claims and HCUP did show decreases. The data systems examined in this study are generally concordant. This observed concordance in SSI trends among data systems may be related to the better accuracy of ICD-9 coding associated with the narrower set of codes pertinent to postsurgical patients.

CDI

The Action Plan’s HCUP CDI metric shows that CDI rates decreased slightly in 2010 in comparison with the 2008 baseline and are projected to increase by approximately 6% in 2011 in comparison with the 2008 baseline. Similarly, Medicare claims indicate that CDI rates decreased slightly in 2010 in comparison with 2008, and MPSMS rates do not display any statistically significant changes over time. The HCUP metric and the Medicare claims measures are concordant regarding their 2010 result.

Conclusions

Under the best circumstances, researchers and policy makers would have access to an “ideal” data system to track progress on HAI prevention efforts. Such a data source would be clinically valid; provide information on a broad range of HAIs; have large sample size to support statistical inference; be representative of the United States; and display consistency in cohort, surveillance protocols, and data collection methodology.

In practice, however, the HHS data systems discussed here vary along these desirable dimensions and no one data system displays all of these features. NHSN possesses clinical validity but historically has not provided broad coverage of the United States for the range of HAIs of interest; moreover, the group of hospitals reporting has changed over time, and the data are not fully validated.31 Although it possesses clinical detail, MPSMS has a small sample, reducing researchers’ ability to identify statistical significance; it is limited to information captured in the medical record; and it has undergone changes in data collection methodology and cohort. Although claims provide large samples, broad coverage, and consistency over time, they lack important clinical detail.

Despite these differences, the measures from each data system included in this study are intended to capture similar information (ie, national HAI rates). Our analyses indicate that SSI and CDI rate trends are generally concordant among data systems. Thus, although all data systems have flaws with respect to surveillance quality, we observe concordance across data sources over a multiyear period for SSI and CDI. In comparison with HAI trend analyses where only 1 (imperfect) data system is available or multiple data systems are discordant, the concordance in SSI and CDI rates among multiple data systems increases our confidence in observed trends.

References

1. National Action Plan to Prevent Healthcare-Associated Infections: Roadmap to Elimination. 2012. Available at: Accessed July 8, 2012.
2. Health-care-associated infections in hospitals: an overview of state reporting programs and individual hospital initiatives to reduce certain infections. GAO-08-808. 2008. Available at: Accessed August 28, 2012.
3. Action Plan to Prevent Healthcare-Associated Infections. 2009.
5. Edwards JR, Peterson KD, Andrus ML, et al. National Healthcare Safety Network (NHSN) Report, Data Summary for 2006 through 2007, issued November 2008. Am J Infect Control. 2008;36:609–626.
6. Overview of the Patient Safety Component. 2010. Available at: Accessed August 8, 2012.
7. Dudeck MA, Horan TC, Petersen KD, et al. National Healthcare Safety Network (NHSN) Report, Data Summary for 2009, device-associated module. 2011. Available at: Accessed September 7, 2012.
8. Malpiedi PJ, Peterson KD, Soe MM, et al. 2011 National and State Healthcare-Associated Infection Standardized Infection Ratio Report. Available at: Accessed March 28, 2013.
9. Medicare Patient Safety Monitoring System 2009 Annual Data Report. 2010. Project report submitted to the Agency for Healthcare Research and Quality under contract HHSA290200910018.
10. Introduction to the HCUP State Inpatient Databases (SID). 2012. Available at: Accessed August 8, 2012.
11. Farber M, Patterson JE. Comment Letter from APIC and SHEA Regarding CMS-1588-P: Medicare Program; Hospital Inpatient Prospective Payment Systems. 2012. Available at: Accessed March 27, 2013.
12. Sherman ER, Heydon KH, St. John KH, et al. Administrative data fail to accurately identify cases of healthcare-associated infection. Infect Control Hosp Epidemiol. 2006;27:332–337.
13. Stevenson KB, Khan Y, Dickman J, et al. Administrative coding data, compared with CDC/NHSN criteria, are poor indicators of health care-associated infections. Am J Infect Control. 2008;36:155–164.
14. Stone PW, Horan TC, Shih HC, et al. Comparisons of health care-associated infections identification using two mechanisms for public reporting. Am J Infect Control. 2007;35:145–149.
15. Best WR, Khuri SF, Phelan M, et al. Identifying patient preoperative risk factors and postoperative adverse events in administrative databases: results from the Department of Veterans Affairs National Surgical Quality Improvement Program. J Am Coll Surg. 2002;194:257–266.
16. Cima RR, Lackore KA, Nehring S, et al. How best to measure surgical quality? Comparison of the Agency for Healthcare Research and Quality Patient Safety Indicators (AHRQ-PSI) and the American College of Surgeons National Surgical Quality Improvement Program (ACS-NSQIP) postoperative adverse events at a single institution. Surgery. 2011;150:943–949.
17. Jhung MA, Banerjee SN. Administrative coding data and health care-associated infections. Clin Infect Dis. 2009;49:949–955.
18. Julian KG, Brumbach AM, Chicora MK, et al. First year of mandatory reporting of healthcare-associated infections, Pennsylvania: an infection control-chart abstractor collaboration. Infect Control Hosp Epidemiol. 2006;27:928–930.
19. Lawson EH, Louie R, Zingmond DS, et al. A comparison of clinical registry versus administrative claims data for reporting of 30-day surgical complications. Ann Surg. 2012;256:973–981.
20. Romano PS, Chan BK, Schembri ME, et al. Can administrative data be used to compare postoperative complication rates across hospitals? Med Care. 2002;40:856–867.
21. Romano PS, Mull HJ, Rivard PE, et al. Health Serv Res. 2009;44:182–204.
22. Zhan C, Miller MR. Administrative data based patient safety research: a critical review. Qual Saf Health Care. 2003;12:ii58–ii63.
23. Dudeck MA, Horan TC, Petersen KD, et al. National Healthcare Safety Network (NHSN) Report, Data Summary for 2010, device-associated module. 2011. Available at: Accessed September 7, 2012.
24. Klevens RM, Edwards JR, Richards CL, et al. Estimating health care-associated infections and deaths in US hospitals, 2002. Public Health Rep. 2007;122:160–166.
25. Lin MY, Hota B, Khan YM, et al. Quality of traditional surveillance for public reporting of nosocomial bloodstream infection rates. J Am Med Assoc. 2010;304:2035–2041.
26. Zhan C, Elixhauser A, Richards CL, et al. Identification of hospital-acquired catheter-associated urinary tract infections from Medicare claims: sensitivity and positive predictive value. Med Care. 2009;47:364–369.
27. NHSN Members Meeting at APIC, San Antonio. 2012. Available at: Accessed August 29, 2012.
28. NHSN E-News. Winter 2008, 3(4). Available at: Accessed August 23, 2013.
29. HCUP Supplemental Files for Revisit Analyses. Rockville, MD: Agency for Healthcare Research and Quality; 2013. Available at: Accessed March 28, 2013.
30. National targets and metrics, monitoring progress toward action plan goals: a mid-term assessment. 2012. Available at: Accessed August 22, 2012.
31. Perla RJ, Peden CJ, Goldmann D, et al. Health care-associated infection reporting: the need for ongoing reliability and validity assessment. Am J Infect Control. 2009;37:615–618.

Key Words: healthcare-associated infections (HAIs); data and monitoring; surveillance; HCUP; NHSN; MPSMS; Medicare claims; information technology

© 2014 by Lippincott Williams & Wilkins.