Hospital-acquired infections (HAIs) are associated with significant morbidity and mortality. It is estimated that 1.7 million infections are acquired during hospital stays in the United States annually, resulting in nearly 100,000 deaths and $20 billion in costs.1 The Centers for Medicare and Medicaid Services (CMS) has implemented several steps to limit HAIs and reduce the associated financial costs.
The 2005 Deficit Reduction Act required that the Secretary of Health and Human Services use evidence-based medicine to identify preventable conditions, including hospital-acquired conditions (HACs).2 Then, on October 1, 2008, CMS started denying payments to hospitals for the treatment of 10 of those HACs, including three HAIs: central-line-associated bloodstream infections (CLABSIs), catheter-associated urinary tract infections (CAUTIs), and surgical site infections (SSIs), which make up about half of all reported HAIs.3,4 Additionally, Medicare no longer pays the extra cost of treating patients who develop a “never event” that is considered an avoidable medical error.5
Under the Patient Protection and Affordable Care Act, the Hospital-Acquired Condition Reduction Program (HACRP) was established. Medicare is now required by law to penalize hospitals that are in the lowest-performing quartile (by HACRP score) by reducing their payment by 1%.6 This program complements two other Medicare programs (the Hospital Value-Based Purchasing program and the Hospital Readmissions Reduction Program) that have as their goal improving the quality of health care through financial rewards and penalties.7
The HACRP began in fiscal year (FY) 2015 and focused on three measures: (1) patient safety indicators (PSIs), (2) CLABSIs, and (3) CAUTIs. The program added SSIs after colon surgeries and abdominal hysterectomies in FY16 and methicillin-resistant Staphylococcus aureus infections and Clostridium difficile infections in FY17.8 Additionally, CLABSI and CAUTI monitoring was expanded from adult intensive care units only to pediatric and adult medical and surgical wards in FY17.9 The HACRP includes acute care hospitals but does not apply to other facilities such as long-term acute care hospitals, cancer hospitals, inpatient rehabilitation facilities, inpatient psychiatric facilities, and critical access hospitals.8
Under the HACRP, hospitals are given a score made up of two domains. Domain 1 takes into account several PSIs, as suggested by the Agency for Healthcare Research and Quality. Domain 2 takes into account several HAIs, as suggested by the National Healthcare Safety Network. In FY15–FY17, hospitals received a score of 1 to 10 (a higher score is worse) for each domain. In FY18, CMS replaced this decile-based scoring method with a continuous scoring method (Winsorized z score) that relies on actual measure values and is meant to improve precision and reduce ties between the included measures.10 A hospital’s total HACRP score is calculated by adding the two weighted domain scores together. Domain 2 had a 65% weight in FY15, a 75% weight in FY16, and an 85% weight in FY17. CMS also changed the HACRP score cutoff for penalties; hospitals with scores above 7 in FY15, 6.75 in FY16, and 6.57 in FY17 were penalized.6 Figure 1 shows the HACRP domain weighting and included measures for FY17–FY19.
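To make the scoring mechanics described above concrete, the FY15–FY17 weighted-sum calculation can be sketched as follows. This is an illustrative sketch only: the Domain 1 weight is assumed to be the complement of the stated Domain 2 weight, and CMS’s actual rounding and tie-handling rules are not reproduced.

```python
# Illustrative sketch of the FY15-FY17 total HACRP score calculation.
# Domain 2 weights and penalty cutoffs are those stated in the text;
# the Domain 1 weight is assumed (not stated by CMS here) to be the
# complement of the Domain 2 weight.

DOMAIN2_WEIGHT = {2015: 0.65, 2016: 0.75, 2017: 0.85}
PENALTY_CUTOFF = {2015: 7.00, 2016: 6.75, 2017: 6.57}

def total_hacrp_score(domain1, domain2, fiscal_year):
    """Weighted sum of the two domain scores (each 1-10; higher is worse)."""
    w2 = DOMAIN2_WEIGHT[fiscal_year]
    return (1 - w2) * domain1 + w2 * domain2

def is_penalized(score, fiscal_year):
    """Scores above the fiscal year's cutoff trigger the 1% payment reduction."""
    return score > PENALTY_CUTOFF[fiscal_year]

# Example: a hospital scoring 6 on Domain 1 and 8 on Domain 2 in FY17
score = total_hacrp_score(domain1=6, domain2=8, fiscal_year=2017)  # 0.15*6 + 0.85*8 = 7.7
```

Note how the rising Domain 2 weight makes the HAI measures increasingly dominant: by FY17, Domain 1 contributes only 15% of the total.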
The HACRP has had a prominent financial impact on hospitals. Medicare reduced its payment by 1% to the lowest-performing quartile of hospitals. This penalty affected 724 hospitals in FY15 and 758 hospitals in FY16.11,12 In FY16, Medicare expected to save $365 million from this reduction in payment. More than half of the hospitals (53.7%) that were penalized in 2016 were also penalized in 2015.12
Early summary results from the first year of the HACRP suggest that teaching hospitals are being penalized a greater amount than other hospitals.7 This finding raised concerns that, in the absence of risk adjustment or other mechanisms for normalization, the HACRP is “biased” against larger hospitals and those with higher-acuity patients. This concern is coupled with others about the overlap in penalties in Domains 1 and 2 (e.g., bloodstream infections were included in both domains until FY18, when they were removed from Domain 1; see Figure 1).
Now that three years of data, which incorporate the changes made by CMS to the parameters and cutoff scores for penalties, are available, we evaluated the HACRP in detail, with a particular focus on trends over time. We asked whether hospitals were able to improve their HACRP scores, and if so (or not), what characteristics were associated with that improvement (or lack thereof).
The primary objective of this study was to identify independent variables associated with the mean total HACRP scores for U.S. hospitals over a three-year period (FY15–FY17). Our secondary objective was to identify independent variables associated with receiving CMS penalties. We included both objectives because the HACRP cutoff score to receive a CMS penalty changed between FY15 and FY17. We evaluated the following variables: (1) type of hospital (teaching vs. nonteaching; we considered hospitals that received direct graduate medical education or indirect medical education payments as teaching hospitals), (2) disproportionate patient percentage (DPP) (based on the percentage of patients who were entitled to Medicaid or both Medicare Part A and Supplemental Security Income), (3) case mix index (CMI), (4) number of staffed beds, (5) length of stay (LOS), (6) gross patient revenue, and (7) region.
We obtained data regarding the hospital name, type of hospital, DPP, CMI, county, state, HACRP scores, and CMS penalties for FY15–FY17 from CMS.6,13–15 We obtained the number of staffed beds, total number of discharges, number of patient days (total number of days for all admitted patients), gross patient revenue, hospital name, city, state, and ZIP code from the American Hospital Directory website. LOS was calculated by dividing the number of patient days by the total number of discharges.16 These data are publicly available from CMS and the American Hospital Directory in the form of a list of hospital names with other identifiers (city, county, ZIP code, and/or CMS certification number). We obtained ZIP code, county, and city data for each hospital, which enabled us to merge the CMS and American Hospital Directory databases using ZIP code and hospital name.17 We used a fuzzy matching procedure when the information did not match exactly between databases; this process allowed us to identify similar text strings across the two databases. We used the “COMPGED” function in SAS (version 9.41, SAS Institute, Inc., Cary, North Carolina) for these comparisons. This study was completed at the University of Arizona and Baylor College of Medicine. We did not seek institutional review board approval because our study did not involve human subjects, and all data were publicly reported.
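The merge logic described above can be sketched conceptually. The study used SAS’s COMPGED generalized edit distance; in the sketch below, Python’s difflib similarity ratio stands in as the string-comparison measure, and the record fields and threshold are illustrative assumptions, not the actual CMS or American Hospital Directory schema.

```python
# Conceptual sketch of the ZIP-code-plus-fuzzy-name merge. The study
# used SAS's COMPGED edit distance; difflib's similarity ratio is a
# stand-in here. Field names and the 0.85 threshold are illustrative.
from difflib import SequenceMatcher

def similarity(a, b):
    """Case-insensitive similarity in [0, 1]; 1.0 means identical strings."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_hospital(cms_name, cms_zip, ahd_records, threshold=0.85):
    """Return the most similar AHD record sharing the ZIP code, if any."""
    candidates = [r for r in ahd_records if r["zip"] == cms_zip]
    if not candidates:
        return None
    best = max(candidates, key=lambda r: similarity(cms_name, r["name"]))
    return best if similarity(cms_name, best["name"]) >= threshold else None

# Example: slightly different spellings of the same hospital still match
ahd = [{"name": "St. Mary's Medical Center", "zip": "85701"},
       {"name": "Desert Valley Hospital", "zip": "85701"}]
best = match_hospital("Saint Marys Medical Center", "85701", ahd)
```

Restricting candidates to the same ZIP code before comparing names keeps the fuzzy step from pairing similarly named hospitals in different cities.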
We conducted a univariate analysis to identify variables associated with total HACRP scores and CMS penalties for the period FY15–FY17. Variables associated with HACRP scores with P values less than .005 were then included in the multivariate linear regression. All variables associated with receiving CMS penalties were included in the multivariate logistic regression. Factors with P values less than .005 were considered significant. Logarithmic values were used for number of staffed beds, LOS, total number of discharges, and gross patient revenue. We assessed for trends over time by adding an interaction with time for each of the independent variables in both the multivariate linear regression and the multivariate logistic regression. We used SAS to perform the statistical analysis.
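A minimal sketch of the modeling step, under stated assumptions: skewed covariates are log10-transformed and a linear model of total HACRP score is fit by ordinary least squares. The data below are fabricated for illustration, and the categorical terms from the actual model (region, year, teaching status) are omitted for brevity; the study itself used SAS rather than Python.

```python
# Illustrative sketch only: log10-transform skewed covariates and fit a
# linear model of total HACRP score by ordinary least squares. All data
# values are fabricated; categorical covariates are omitted.
import numpy as np

beds  = np.array([90, 250, 600, 120, 400], dtype=float)  # staffed beds
los   = np.array([3.8, 4.5, 5.2, 4.0, 4.9])              # length of stay, days
cmi   = np.array([1.1, 1.4, 1.8, 1.2, 1.6])              # case mix index
score = np.array([4.2, 5.1, 6.3, 4.5, 5.8])              # total HACRP score

# Design matrix: intercept, log10(beds), log10(LOS), CMI
X = np.column_stack([np.ones_like(beds), np.log10(beds), np.log10(los), cmi])
coef, residuals, rank, _ = np.linalg.lstsq(X, score, rcond=None)
fitted = X @ coef
```

The log10 transform mirrors the study’s handling of right-skewed variables such as staffed beds and gross patient revenue, so that a tenfold difference in size contributes a constant increment to the predicted score.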
A total of 2,249 hospitals were included in our analysis. The majority were nonteaching hospitals (1,717/2,249; 76.3%) and had a high DPP, defined as DPP > 15% (1,801/2,208; 81.6%). The mean CMI was 1.36. The average number of staffed beds, total number of discharges, and number of patient days were 202; 8,620 per year; and 40,130 per year, respectively. The average LOS was 4.3 days. The average gross patient revenue was $812.3 million.
The mean total HACRP scores across hospitals for FY15, FY16, and FY17 were 5.38, 5.35, and 5.18, respectively. There was a significant improvement in scores between FY15 and FY17 (difference of −0.199; 95% confidence interval −0.330 to −0.067). Independent variables associated with total HACRP scores are presented in Tables 1 and 2. All independent variables were significantly associated with total HACRP scores according to our univariate analysis.
We found a strong correlation between gross patient revenue and number of staffed beds, so we chose not to include gross patient revenue in our model building. The final multivariate linear regression model showed that total HACRP scores were significantly associated with type of hospital (teaching vs. nonteaching), number of staffed beds (common logarithm base 10 [log10]), LOS (log10), CMI, and region (P < .001). Hospitals that were teaching hospitals, had more staffed beds, had longer LOS, or had higher CMI were more likely to have higher total HACRP scores. Hospitals in regions 1, 2, 8, and 10 were more likely to have higher total HACRP scores than hospitals in regions 4 and 6 (see Table 2 for the states included in each region).
In FY15, 21.2% (476/2,249) of hospitals received a CMS penalty compared with 22.6% (508/2,249) in FY16 and 31.3% (704/2,249) in FY17 (P < .001). Independent variables associated with receiving a CMS penalty are presented in Tables 1 and 2. All independent variables were significantly associated with receiving a CMS penalty according to our univariate analysis.
Our multivariate logistic regression model showed that receiving a CMS penalty was significantly associated with year, type of hospital (teaching vs. nonteaching), number of staffed beds (log10), LOS (log10), CMI, and region (P < .001). Hospitals that were teaching hospitals, had more staffed beds, had longer LOS, or had higher CMI were more likely to receive a CMS penalty (see Figure 2). Hospitals in regions 1, 2, 8, 9, and 10 were more likely to receive a penalty than hospitals in regions 4 and 6 (see Figure 2).
Trends in HACRP scores and CMS penalties
Only type of hospital (teaching vs. nonteaching) and number of staffed beds (log10) showed an interaction with time in terms of total HACRP scores (P < .001; see Figure 3). Nonteaching hospitals and hospitals with fewer than 100 staffed beds improved their HACRP scores over the three-year period.
The odds ratio of receiving a CMS penalty in FY17 compared with receiving one in FY15 was 1.70 (95% confidence interval 1.47–1.96). However, this effect was not influenced by type of hospital (teaching vs. nonteaching), number of staffed beds, or any other variable (see Figure 3).
Our findings extend those of Roberson and Reid7 regarding the burden of HACRP penalties on teaching hospitals. We looked at HACRP scores over a three-year time period, allowing for an analysis of trends over time. Also, unlike Roberson and Reid, we used the actual CMS penalty data rather than the estimated CMS penalty data; performed our analysis on both HACRP score data and CMS penalty data for FY15, FY16, and FY17; and adjusted for confounders.
Similar to Roberson and Reid,7 we found that the HACRP leads to significant inequalities as it is currently applied. Our data showed that teaching hospitals (which are generally large and have high patient acuity) were significantly more likely to receive the CMS penalty compared with small and nonteaching hospitals. Hospitals in the Northeast (regions 1 and 2) and West (regions 8, 9, and 10) were more likely to receive the CMS penalty compared with hospitals in the South (regions 4 and 6). Of note, hospitals also face other payment adjustments from CMS, including the Merit-based Incentive Payment System, which focuses on quality improvement activities, advancing care information, and cost.18 This system went into effect in 2017 but is in an extended transition phase until 2022.19 It is unclear whether teaching hospitals will face additional inequalities in payment adjustments from this program.
A striking finding from our study is the trend in HACRP scores over time when large hospitals and teaching hospitals are compared with small hospitals (< 100 staffed beds) and nonteaching hospitals. In particular, we found no improvement in HACRP scores for the former group (large/teaching hospitals) but a significant improvement for the latter group (small/nonteaching hospitals). We considered multiple explanations for this difference. It is highly unlikely that the large/teaching hospitals were less aware of or attentive to the HACRP than the small/nonteaching hospitals. We even saw this trend when we accounted for the closure of small hospitals during this time period (data not shown). We have no mechanism to assess whether our findings are due to a change in reporting, documentation, or the transfer of patients with high acuity from small/nonteaching hospitals to large/teaching hospitals. One of us (K.A.J.) has shown recently that the adoption of electronic health records increases documentation of HACs, but there is no reason to assume that electronic health record capabilities declined over this time period for smaller hospitals.20 The most likely explanation is that it is easier to prevent HACs in small/nonteaching hospitals. Although staffing ratios, organizational structure, facility layout, and a myriad of other factors may be responsible for this difference, the most likely explanation is that small/nonteaching hospitals see less complex patient cases (reflected by a lower CMI) than large/teaching hospitals, making HACs less likely.
Although small/nonteaching hospitals were more likely to see improvements in their HACRP scores between FY15 and FY17, we did not find any difference in the odds of them receiving a CMS penalty over this time period. This finding could be explained by changes to the cutoff score for receiving a penalty, which went from 7 in FY15 to 6.57 in FY17.
The Infectious Diseases Society of America expressed concerns to CMS regarding the all-or-nothing approach of the HACRP. They argued that infection rates cannot be decreased to zero for certain HAIs even when guidelines are strictly followed.21 Also, the CMS penalty system does not take into consideration whether optimal care was provided. Additionally, many hospitals were penalized despite an improvement in their total HACRP scores because of the change in the penalty cutoff score and in the criteria used to calculate the domain scores. All of these factors would be minimized if CMS stratified hospitals into homogeneous categories (e.g., teaching vs. nonteaching, large vs. small) and applied penalties to those with the worst scores in each category.
In light of our findings, we recommend that teaching hospitals be proactive to avoid penalties under the HACRP. They should actively look out for HACs and refine their processes to improve the quality of the care they provide. Creating a culture of safety is also crucial to achieving these results. Finally, teaching hospitals should focus on coding episodes of care to document patient acuity and whether a HAC diagnosis was present at the time of a patient’s inpatient admission.22
Our study has several limitations. We are still in the early stages of research on the impact of the Patient Protection and Affordable Care Act and resulting CMS programs on health care quality and patient outcomes. In addition, outcomes data for the entire hospital system in the United States are not available yet. We used CMI to account for patient acuity, but it is a more global measure than using diagnosis-related groups. Also, we matched two separate databases together (CMS and American Hospital Directory) by fuzzy matching procedures using hospital ZIP code and name; this process could have resulted in mismatching a small number of hospitals with very similar names. Finally, it is possible that we found a false significance due to multiple comparisons and a large data set. We attempted to minimize this effect by using a smaller P value (< .005) to define statistical significance.
In conclusion, the HACRP has had a serious financial impact on CMS hospital payments. We found significant differences in penalties between hospitals. We recommend that careful consideration be given to the conditions under which the program performs best, how it can improve patient outcomes, and how to address the unintended consequences of its penalties. Further studies are also needed to design a new methodology for calculating scores to limit the inequalities of the HACRP.
3. Stone PW, Glied SA, McNair PD, et al. CMS changes in reimbursement for HAIs: Setting a research agenda. Med Care. 2010;48:433–439.
20. Gowrisankaran G, Joiner KA, Lin J. Does health IT adoption lead to better information or worse incentives? NBER working paper no. 22873. http://www.nber.org/papers/w22873.pdf. Accessed July 18, 2018.
21. Teufack SG, Campbell P, Jabbour P, Maltenfort M, Evans J, Ratliff JK. Potential financial impact of restriction in “never event” and periprocedural hospital-acquired condition reimbursement at a tertiary neurosurgical center: A single-institution prospective study. J Neurosurg. 2010;112:249–256.
© 2018 by the Association of American Medical Colleges