Original Articles

Redefining the Standardized Infection Ratio to Aid in Consumer Value Purchasing

Saman, Daniel M. DrPH, MPH, CPH*†; Kavanagh, Kevin T. MD, MS, FACS*; Abusalem, Said K. PhD, RN*‡

Journal of Patient Safety 9(2):p 55-58, June 2013. | DOI: 10.1097/PTS.0b013e3182809f31


The reporting of health care–associated infections in a consumer-understandable and meaningful format has become a priority for many health care agencies. Various formats have been used, including the total number of infections, total infections per 1000 discharges, total infections per 1000 catheter days, and a risk-adjusted rate of infections per 1000 catheter days.1

One method of risk adjustment is the use of the standardized infection ratio (SIR). The SIR is used on consumer Web sites such as Hospital Compare, which are designed to promote consumer value purchasing.1 Currently, the SIR is used as one of the most important metrics in evaluating the rate of health care–associated infections in hospitals reporting to the National Healthcare Safety Network (NHSN).

It is the purpose of this short report to review the derivation of the SIR for infections and to suggest improvements in its definition. Two widely varying methods are used to calculate SIRs for central line–associated blood stream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), and surgical site infections (SSIs).2

CLABSIs and CAUTIs are adjusted by a ratio depending on the type of hospital or acute care ward in which the patient developed an infection.2,3 An infection ratio is calculated by dividing the number of observed infections by the number of catheter days and multiplying by 1000.

The SIR is calculated by dividing the infection ratio by a correction factor based upon the type of hospital ward or ICU. The correction factor is the predicted infection ratio and is derived from the report by Edwards et al (2009).3 For example, if a medical, nonmajor teaching ICU had an infection ratio of 2.6 (infections per 1000 catheter days), this number would be divided by the correction factor, or predicted infection ratio, of 1.9, yielding an SIR of 1.37. If the ICU were part of a major teaching facility, the infection ratio would be divided by 2.6, yielding an SIR of 1.0. An SIR of 1.0 is taken as the National Benchmark for determining a facility’s performance.
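As a minimal sketch of this arithmetic (using the example values above; the predicted ratios are those cited from the Edwards et al report):

```python
def infection_ratio(observed_infections, catheter_days):
    """Infections per 1000 catheter days."""
    return observed_infections / catheter_days * 1000

def standardized_infection_ratio(observed_ratio, predicted_ratio):
    """SIR: observed infection ratio divided by the predicted (benchmark) ratio."""
    return observed_ratio / predicted_ratio

# Example from the text: an ICU with 2.6 infections per 1000 catheter days
observed = 2.6
print(round(standardized_infection_ratio(observed, 1.9), 2))  # nonmajor teaching ICU: 1.37
print(round(standardized_infection_ratio(observed, 2.6), 2))  # major teaching ICU: 1.0
```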

The SIRs for SSI rates are adjusted for a wide variety of risk factors including but not limited to age, American Society of Anesthesiologists (ASA) Class, duration, hospital bed size, body mass index (BMI), emergency, sex, trauma, and medical school affiliation.2,4,5 The risk factors used vary with the type of procedure.2,4 SIRs for SSIs are adjusted by the use of a logistic regression model, which adjusts for a varying number and type of variables depending upon the surgical procedure.2

The purpose of this report is to visually display through histograms and graphs the distribution of the SIR across all reporting hospitals in the United States and to provide recommendations meant to improve the functionality of the SIR. The SIR for CLABSIs was selected for study because there is adequate available data, and many health-care facilities have been slow to adopt protocols, such as checklists, which have been shown to be effective in preventing these life-threatening infections.


Two data sets from the Centers for Disease Control and Prevention (CDC) National Healthcare Safety Network (NHSN) were used in the analysis: a data set of state aggregate SIRs for CLABSIs that occurred in intensive care units (ICUs), and a data set containing individual facility SIRs for CLABSIs that occurred in the ICU. Both data sets are available on Hospital Compare for the data collection period from January 1, 2011, to December 31, 2011. These data sets were derived from the Centers for Medicare and Medicaid Services mandatory reporting initiative for CLABSIs in ICU locations. Of the 3639 hospitals in the facility data set, data were available for analysis from 1988 facilities from all 50 states, Puerto Rico, and the District of Columbia. No data were available from 1085 facilities, 531 facilities did not have an ICU location, and 35 facilities had an ICU but did not report any central line days. To be included in either of the NHSN data sets, a facility’s predicted number of CLABSIs had to be 1 or greater.

To demonstrate the variability of the SIR among states, a line graph of the point estimate SIR from lowest to highest was plotted using the first data set of aggregate state SIRs.

To estimate an “obtainable SIR,” individual facility SIRs from the Hospital Compare data set were analyzed. Standardized infection ratios from individual hospitals were plotted in a histogram with the width of each data bin equal to an SIR of 0.05.
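A sketch of this binning, assuming each SIR falls into the half-open bin [k·0.05, (k+1)·0.05); the facility values below are hypothetical illustrations, not the NHSN data:

```python
from collections import Counter

def bin_index(sir, width=0.05):
    """Map an SIR to its histogram bin index; bin k covers [k*width, (k+1)*width)."""
    return int(sir / width)

# Hypothetical facility SIRs for illustration only
sample_sirs = [0.0, 0.33, 0.36, 0.37, 0.54, 1.23, 2.41]
histogram = Counter(bin_index(s) for s in sample_sirs)

# The modal bin approximates the peak of the distribution
peak_bin = max(histogram, key=histogram.get)
peak_sir = peak_bin * 0.05  # left edge of the most populated bin
```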


The aggregate state data varied widely, with a mean SIR of 0.562 and a median of 0.540; excluding Puerto Rico, there is more than an 8-fold difference between the best and worst performing states (Fig. 1). Yet the National Benchmark (expected) SIR is set such that all of the states fall into an “as expected” category (SIR ≤ 1); the only exception is the territory of Puerto Rico (Fig. 1).

State-specific SIRs, central line–associated blood stream infections (CLABSIs)—Hospital Compare data collection period from January 1, 2011, to December 31, 2011.

The histogram plot of the SIR from individual reporting facilities in the Hospital Compare data set produced a skewed distribution, with a peak approximating an SIR of 0.35. In 465 facilities, the SIR equaled zero, and in 58 facilities, the SIR was greater than 2 (Fig. 2). The mean SIR was 0.568.

Histogram of the number of hospitals by CLABSI SIR—Hospital Compare data collection period from January 1, 2011, to December 31, 2011.


Analysis of both the state aggregate data from the CDC (Fig. 1) and individual facility data (Fig. 2) found that the current SIR level of 1.0 is not reflective of what can be achieved by the majority of facilities and that there is also large variation between facilities. A major problem with the SIR for CLABSIs is that the SIR National Benchmark is not well defined or useful for the consumer. As defined in a December 2010 CDC publication, an SIR of 1.0 is described as the “expected” SIR.2 Currently, Hospital Compare describes an SIR of 1.0 as the National Benchmark. This may create a considerable amount of confusion for the consumer. The Hospital Compare Web site describes the expected SIR as derived from 2006 to 2008 data and that “A score of 1 means the hospital’s CLABSI score was no different than hospitals of similar type and size.”1

It can be argued that an obtainable or a redefined expected SIR needs to be calculated to present data on consumer Web sites in a more meaningful way. The current expected SIR is based upon data from 2006 to 2008, during the period when the rate of CLABSIs was unacceptably high and before checklists and other interventions were widely recognized as effective preventive measures. As illustrated in Figure 2, SIR data obtained from 1988 facilities produced a skewed curve with a peak at approximately 0.35. It is our contention that the position of this peak should be set as the obtainable level. In actuality, facilities can obtain an even lower level, as evidenced by the large numbers of facilities that reported an SIR of zero.

The adjustment for being a major teaching institution also needs to be questioned. This adjustment for CLABSIs can be quite large and effectively erases more than 1 in 5 infections in major teaching ICUs.3 Although it can be argued that patients served in these ICUs may differ from those in other institutions, there are also concerns regarding resident supervision and fatigue from long work hours, which may adversely affect patient outcomes.6

A redacted June 2012 chart report from Hospital Compare, publicly available online, is shown in Figure 3. As illustrated, the sample state’s average SIR was 0.54. This SIR approximates the average hospital performance reported in the Hospital Compare data set but is greater than the peak of the distribution curve; the curve’s peak is more reflective of higher performing facilities. However, the state’s SIR (in Fig. 3) is much better than Hospital Compare’s National Benchmark SIR of 1.0. The SIR for the sample university hospital in Figure 3 is 0.79. Again, it could be stated that it beats the National Benchmark of 1.0 and, to the consumer, may appear to be better than expected. However, this level is in fact worse than the expected facility performance as calculated from the Hospital Compare data set. In addition, because this facility was a major medical teaching institution, it was given a further adjustment in the calculation of its SIR. If one assumes all of the reported CLABSIs occurred in the medical ICU and the adjustment for a major teaching institution was not taken, the institution’s SIR would be calculated to be 1.08. If the institution’s SIR were then reset in relation to the average rate observed for all facilities (high and low performers) in the Hospital Compare data set, the SIR for this facility would be approximately 1.90 (1.08 × [1.0/0.568]). If one performs this same analysis using the position of the distribution curve’s peak as an SIR of 1.0, the facility’s SIR would be approximately 3.09 (1.08 × [1.0/0.35]).
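The rescaling arithmetic in this example reduces to multiplying the facility’s SIR by the ratio of the old benchmark (1.0) to the proposed one; a minimal sketch with the values from the text:

```python
def rescale_sir(sir, new_benchmark):
    """Re-express an SIR so that the chosen benchmark value maps to 1.0."""
    return sir * (1.0 / new_benchmark)

unadjusted_sir = 1.08  # sample facility SIR without the teaching-hospital adjustment

# Benchmark set to the mean SIR of all reporting facilities (0.568)
print(round(rescale_sir(unadjusted_sir, 0.568), 2))  # ~1.9

# Benchmark set to the peak of the facility SIR distribution (0.35)
print(round(rescale_sir(unadjusted_sir, 0.35), 2))   # ~3.09
```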

Hospital Compare Reporting of CLABSI as of June 2012—Redacted.

Numerical rankings of the data are important. Nonquantitative reporting using the descriptors “Better than,” “No Different than,” or “Worse than” the U.S. national average may be easy for the consumer to understand but may not provide adequate differentiation between high- and low-performing facilities, as illustrated by a recent report in Health Affairs. The report described a mortality rate data set as being “unlikely to influence patients’ or hospitals’ behavior,” because 99.5% of the facilities were reported as having a normal mortality rate during the majority of the study period.7 Both numerical data and descriptors can be used concomitantly on reports intended for the health-care consumer.

The limitations of available data prevented a similar analysis for the SIRs used in SSIs. However, two observations can be made. First, overlapping risk factors that can be highly correlated, such as age and ASA class, should be used together only with caution. Researchers have recommended incorporating both factors into a combined grading system.8 Second, a major adjustment factor used in the calculation of some of the SIRs for surgical site infections is affiliation with a medical school.5

Adjustments for medical school affiliation could be interpreted to mean that there is a significant risk of infection if a patient is admitted to a university hospital. For example, the largest risk factor adjustment for the vaginal hysterectomy SIR is affiliation with a medical school with an odds ratio greater than 2.0.4 Such a large adjustment may be questionable for simply being affiliated with a medical school especially when the SIR for abdominal hysterectomy is not adjusted for this variable.2

It is our contention that only metrics for patient risks should be factored into risk adjustments. Metrics that primarily reflect facility factors associated with a high risk of infection should not be used to adjust the data. One of the purposes of public reporting is to identify differences between facilities, not to adjust for those differences. Thus, the use of facility adjustment factors (e.g., hospital bed size, bed size of patient care location, and medical school affiliation) in calculating an SIR should be questioned and further studied to best understand what influences the increased risk associated with these facility characteristics.


To have the highest utility in the consumer’s quest to obtain high value health care, the National Benchmark SIR should be easily understandable and reflect what is obtainable, not be based on a benchmark or expected rate derived from outdated data. Factors used for risk adjustment should relate to the characteristics of the patient population and not be based upon risks associated with facility type, size, or unit bed size.

A suggested methodology to estimate the obtainable SIR is to set this value at the peak of the distribution curve. The curve’s peak is more reflective of higher performing facilities. It is suggested that this type of analysis should be used to calculate an obtainable SIR whose value is set to 1.0. The remainder of the facility SIRs can then be adjusted accordingly to promote meaningful use of the data.

Moreover, the obtainable SIR should be calculated every other year using data from the most recent 3 years. This enables the SIR to be reset as the control of health care–associated infections progressively improves. The data used to calculate the SIR need to reflect the outcomes obtainable from higher performing facilities, which are using current standards of care.

Finally, although we question the use of adjustment of the SIR for facility characteristics and contend it is a serious limitation of the SIR, we recognize that an operationalized metric accounting for patient characteristics presents great challenges along with its own limitations. As illustrated in Figure 2, great strides have been made in the control of CLABSIs, based upon the groundbreaking work of Berenholtz et al (2004).9 In a significant percentage of institutions, CLABSIs are becoming a rare event, which may mitigate the need for facility adjustments.

Pending the creation of additional metrics that account for patient population risk characteristics, we stop short of suggesting that facility adjustments be completely dismissed. Just as institutions of higher education are often ranked together in groups of facility type, we would recommend that health-care facilities of similar types be compared side by side. If facility adjustments are made, they should be readily and fully transparent to the health-care consumer.


1. U.S. Department of Health and Human Services. Hospital Compare. June 2012. Accessed June 1, 2012.
2. Centers for Disease Control and Prevention. NHSN e-News: SIRs Special Edition. October 2010; updated December 10, 2010. Accessed November 1, 2012.
3. Edwards JR, Peterson KD, Mu Y, et al. National Healthcare Safety Network (NHSN) report: data summary for 2006 through 2008, issued December 2009. Am J Infect Control. 2009;37:783–805.
4. Mu Y, Edwards JR, Horan TC, et al. Improving risk-adjusted measures of surgical site infection for the National Healthcare Safety Network. Infect Control Hosp Epidemiol. 2011;32:970–986. Epub September 1, 2011. Accessed November 1, 2012.
5. Centers for Disease Control and Prevention. National and State Healthcare-Associated Infections Standardized Infection Ratio Report, January–December 2010. April 19, 2012. Accessed November 1, 2012.
6. Jagsi R, Kitch BT, Weinstein DF, et al. Residents report on adverse events and their causes. Arch Intern Med. 2005;165:2607–2613. PMID: 16344418.
7. Ryan AM, Nallamothu BK, Dimick JB. Medicare’s public reporting initiative on hospital quality had modest or no impact on mortality from three key conditions. Health Aff. 2012;31:585–592.
8. Van der Walt P, Nizami H. The correlation between the AA grading system, length of hospital stay and complication rates after total hip and knee arthroplasty. Orthopaedic Proceedings. J Bone Joint Surg Br. 2012;94-B(Supp XII):6. Accessed November 1, 2012.
9. Berenholtz SM, Pronovost PJ, Lipsett PA, et al. Eliminating catheter-related bloodstream infections in the intensive care unit. Crit Care Med. 2004; 32: 2014–2020.

SIR; standardized infection ratio; CLABSI; central line associated blood stream infections; SSI; surgical site infections; hospital acquired infection; healthcare associated infections; HAI; HAC; medical school affiliation; unit bed size

© 2013 by Lippincott Williams & Wilkins