Sepsis is a common cause of morbidity and mortality, affecting over 1.5 million individuals annually in the United States alone (1). Among hospitalized patients, sepsis is the leading cause of death (2). Beyond the human toll of morbidity and mortality, sepsis imposes substantial financial costs, accounting for over $20 billion in annual U.S. hospital spending (3). Identifying and treating sepsis early in its course can reduce sepsis-related morbidity and mortality, but many patients with sepsis do not receive early and potentially life-saving therapy (4–7).
One approach to improve the early recognition and treatment of sepsis across the health system is to use regulatory mandates for sepsis care (8). In New York State, where hospitals are required to report their compliance with guideline-based sepsis treatment bundles, adherence to these bundles is associated with lower sepsis mortality (4). At the federal level, the Centers for Medicare and Medicaid Services (CMS) instituted a sepsis quality measure as part of its Hospital Inpatient Quality Reporting Program (IQRP) in October 2015 (9). This program, known as “SEP-1”, requires hospitals to report their compliance with a multicomponent 3- and 6-hour treatment and resuscitation bundle for patients with sepsis, which includes antibiotic and fluid administration, blood culture and lactate measurement, the use of vasopressors for fluid-refractory hypotension, and the bedside evaluation of a patient’s response to treatment.
Although there is consensus on the importance of sepsis-focused quality improvement in general, the SEP-1 reporting program itself has generated considerable controversy related to the administrative burden of data abstraction and reporting, the potential to result in financial penalties for hospitals, and concerns about the program’s ultimate impact on patient care and outcomes (10–15). We sought to inform this debate by evaluating national reporting patterns from the first year of the program. Specifically, we sought to answer three questions critical to understanding the SEP-1 program: 1) what characteristics are associated with whether a hospital reports SEP-1 data, 2) what hospital characteristics are associated with SEP-1 performance among reporting hospitals, and 3) is SEP-1 performance associated with hospital performance on other quality measures related to time-sensitive healthcare?
MATERIALS AND METHODS
Study Design and Data
We performed a cross-sectional study of U.S. hospitals participating in Medicare's IQRP, which is a requirement for all nonfederal hospitals that provide care for Medicare beneficiaries. IQRP data, including data from the SEP-1 reporting program, were obtained from CMS's Hospital Compare website, which publicly reports performance data for participating hospitals. Hospital organizational data, including hospital size, ownership, and academic status, were obtained from Medicare's Healthcare Cost Reporting Information System (HCRIS). Hospital Compare and HCRIS were linked using unique hospital identifiers. The Hospital Compare data were from the fiscal year 2017 reporting period, running from October 1, 2016, to September 30, 2017. We used HCRIS data from 2016, the most recent year for which reliable data were available. We restricted the analysis to general, short-stay, acute-care hospitals because these are the hospitals to which the SEP-1 program applies. We excluded all other hospitals, including critical access hospitals, long-term acute-care hospitals, and specialty hospitals. We also excluded hospitals with data in the Hospital Compare dataset but not the HCRIS dataset.
Using the Hospital Compare and HCRIS datasets, we identified four sets of variables: 1) whether a hospital reported SEP-1 data, 2) SEP-1 performance, 3) general hospital characteristics, and 4) performance on other quality measures related to time-sensitive medical conditions.
We used the Hospital Compare data to identify whether a hospital reported any SEP-1 compliance data. For hospitals that did not report SEP-1 data, we used text fields in the Hospital Compare data to identify the reasons cited for not reporting.
The Hospital Compare data contain the percent compliance with the SEP-1 bundle among eligible patients, as reported by hospitals. Since the SEP-1 measure is an “all-or-none” measure, this is the percentage of patients with severe sepsis or septic shock who received every required element of the SEP-1 bundle. We also identified the number of SEP-1 cases each hospital reported to CMS, which is included in the Hospital Compare data. This reported case volume does not necessarily represent total annual sepsis case volume for each hospital because the SEP-1 measure excludes patients transferred from other hospitals, and it allows hospitals with very high case volumes to report data on a subsample of patients (i.e., 60 per quarter).
General Hospital Characteristics.
Using 2016 HCRIS data, we categorized hospitals according to ownership (nonprofit, for-profit, government), teaching status using the resident-to-bed ratio (nonteaching if no residents, small teaching if ratio < 0.2, large teaching if ratio 0.2 or greater), hospital bed totals (small < 100 beds, medium 100–249 beds, and large 250 beds or more), and ICU bed totals (0 beds, < 5 beds, 5–14 beds, 15–29 beds, and 30 or more beds), as performed previously (16, 17).
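The categorization scheme above can be made concrete in code. This is an illustrative sketch, not the authors' Stata code; the function name and inputs are hypothetical, but the cutoffs match those described in the text.

```python
def categorize_hospital(residents: int, beds: int, icu_beds: int) -> dict:
    """Apply the HCRIS-based hospital categories described in the text.
    Inputs (resident count, total beds, ICU beds) are hypothetical field
    names; the cutoffs follow the article."""
    # Teaching status from the resident-to-bed ratio
    if residents == 0:
        teaching = "nonteaching"
    elif residents / beds < 0.2:
        teaching = "small teaching"
    else:
        teaching = "large teaching"

    # Hospital size from total bed count
    if beds < 100:
        size = "small"
    elif beds < 250:
        size = "medium"
    else:
        size = "large"

    # ICU size categories
    if icu_beds == 0:
        icu = "0 beds"
    elif icu_beds < 5:
        icu = "< 5 beds"
    elif icu_beds < 15:
        icu = "5-14 beds"
    elif icu_beds < 30:
        icu = "15-29 beds"
    else:
        icu = "30 or more beds"

    return {"teaching": teaching, "size": size, "icu": icu}
```

For example, a hospital with no residents, 80 beds, and 4 ICU beds would be classified as small, nonteaching, with fewer than 5 ICU beds.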
Performance for Other Time-Sensitive Conditions.
We used Hospital Compare data to assess hospital performance on other core measures related to time-sensitive conditions. We focused our analyses on quality measures from the "Timely and Effective Care" domain, which includes SEP-1. In addition to SEP-1, we included three other "Timely and Effective Care" measures that were reported by hospitals that also reported SEP-1 data and might provide insight into how a hospital performs in treatment of time-sensitive conditions: OP-4, which is the proportion of patients presenting with chest pain and acute myocardial infarction (AMI) who receive aspirin in the emergency department (ED); OP-20, which is the proportion of patients with stroke or intracranial hemorrhage for whom the interpretation of a head CT scan is available within 45 minutes of ED arrival; and OP-5, which is the median time to obtaining an electrocardiogram (ECG) for ED patients with chest pain or myocardial infarction. We hypothesized that these measures reflect a hospital's underlying quality of care for time-sensitive conditions. To the degree that SEP-1 performance also reflects the quality of care for time-sensitive conditions, it should correlate with these measures.
Statistical Analysis.
To understand the hospital factors associated with SEP-1 reporting, we compared characteristics of hospitals that reported SEP-1 data with those that did not, using chi-square statistics. We also summarized the frequency of different reasons cited for not reporting data.
To understand variation in SEP-1 performance, we first dropped hospitals that did not report on SEP-1. Among the remaining hospitals, we calculated the mean and SD of the reported SEP-1 performance rates. To visually illustrate the variation in SEP-1 performance, we created a caterpillar plot of the reported SEP-1 performance rates. We calculated the 95% CIs for these rates using binomial standard errors.
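The article says only that binomial standard errors were used, not which interval construction; a minimal sketch, assuming the common normal-approximation (Wald) interval, is:

```python
import math

def binomial_ci(compliant: int, total: int, z: float = 1.96) -> tuple:
    """95% CI for a reported compliance rate using a normal-approximation
    (Wald) binomial standard error. The specific interval type is an
    assumption; the article states only 'binomial SEs'."""
    p = compliant / total
    se = math.sqrt(p * (1 - p) / total)  # binomial standard error
    # Clamp to the [0, 1] range of a proportion
    return max(0.0, p - z * se), min(1.0, p + z * se)
```

For a hospital reporting 50 compliant cases of 100, this yields roughly (0.40, 0.60); intervals widen substantially at the low case volumes typical of small hospitals.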
To understand the association between hospital characteristics and SEP-1 performance, we fit a series of linear regression models, with SEP-1 performance as the dependent variable and categorical hospital characteristics as independent variables. We first fit univariable models for each categorical variable: categorized reported SEP-1 case volume, hospital ownership, teaching status, hospital size, and ICU size. Next, to understand which characteristics were independently associated with SEP-1 performance, we fit a multivariable linear regression model including all hospital characteristic variables. Using this multivariable model and Stata's postestimation margins command (StataCorp, College Station, TX), which generates population-averaged estimates, we created graphs illustrating the relationship between adjusted SEP-1 performance and reported SEP-1 case volume, hospital ownership, and hospital size.
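For a single dummy-coded categorical predictor, the univariable OLS fit reduces to per-category means, which is one way to read unadjusted estimates of this kind. A stdlib-only sketch with a hypothetical data layout (the original analysis used Stata):

```python
from collections import defaultdict

def group_means(records):
    """Fitted values from a univariable OLS of performance on a
    dummy-coded categorical predictor equal the per-category means;
    coefficients are differences from the reference category.
    `records` is a hypothetical list of (category, performance) pairs."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for category, performance in records:
        sums[category] += performance
        counts[category] += 1
    return {c: sums[c] / counts[c] for c in sums}
```

The multivariable model and the margins-style adjusted estimates cannot be recovered this simply; they require fitting all categorical predictors jointly.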
Finally, we evaluated whether hospital performance on SEP-1 was associated with performance on other measures of timely and effective care: timely head CT interpretation in stroke, and aspirin administration and time to ECG for patients with chest pain or AMI. We first excluded hospitals with performance on these measures above the 99th percentile or below the first percentile to improve the visual interpretability of the comparisons. We then calculated the Spearman rank correlation coefficient (ρ) for pairwise comparisons between SEP-1 performance and each of the other performance measures. Because not all hospitals reported on all measures, the number of hospitals varied across these pairwise comparisons. To visually represent each comparison, we created scatterplots with lines of best fit.
We conducted all analyses using Stata Version 15.1 (StataCorp, College Station, TX). We defined statistically significant associations using a p value of less than 0.05. This research was reviewed by the University of Pittsburgh Human Research Protection Office and determined not to constitute human subjects research because it used only publicly available hospital-level data.
RESULTS
A total of 3,283 general, short-stay, acute-care hospitals participated in the IQRP and could be linked to HCRIS data. Of these hospitals, 2,851 (86.8%) reported SEP-1 performance data in Hospital Compare (Table 1). Compared with hospitals that did not report, hospitals reporting SEP-1 data were more likely to be large, nonprofit, teaching institutions. The most common reason for not reporting SEP-1 data was having no eligible cases or too few eligible cases to report (366 hospitals; 11% of total). A small minority of hospitals (66 hospitals; 2% of total) cited no reason or other reasons for not reporting SEP-1.
Among hospitals reporting SEP-1 data, SEP-1 performance was highly variable, with a mean of 48.9% ± 19.4% bundle compliance and a range from 0% to 100% (Fig. 1). The median number of reported SEP-1 cases per hospital was 87 (range, 11–1,117; interquartile range, 59–133).
Table 2 displays the results of linear regression models relating SEP-1 performance to hospital characteristics. In univariable models, higher SEP-1 performance was associated with larger reported SEP-1 case volumes, for-profit ownership, nonteaching status, smaller hospital size, and intermediate ICU size. In the multivariable model, case volume, hospital ownership, and hospital size were most strongly associated with SEP-1 performance. Figure 2 depicts the relationship between selected hospital characteristics and SEP-1 performance, adjusted for the other hospital characteristics in the model from Table 2.
Performance on SEP-1 was statistically significantly associated with all three timely and effective care measures (Fig. 3). Higher rates of SEP-1 bundle compliance were associated with higher rates of timely head CT interpretation for stroke patients (ρ = 0.16; p < 0.001; 1,365 hospitals), more frequent aspirin administration for patients with chest pain or AMI (ρ = 0.24; p < 0.001; 1,771 hospitals), and shorter median time to ECG for patients with chest pain or AMI (ρ = –0.12; p < 0.001; 1,794 hospitals).
DISCUSSION
In a national study of hospital-level SEP-1 reporting and performance, we found that the vast majority of eligible hospitals reported SEP-1 data. Among reporting hospitals, the average SEP-1 bundle compliance rate was only around 50%, confirming prior work demonstrating that many patients may not be receiving care consistent with current sepsis guidelines (7, 18, 19). In addition, performance varied widely across hospitals: smaller, for-profit, nonteaching hospitals reported higher SEP-1 bundle completion rates, as did hospitals caring for greater numbers of patients with sepsis.
The finding that nearly all eligible hospitals reported SEP-1 data is reassuring, since SEP-1 reporting requires a major financial and organizational investment (10, 12). Despite these costs, it appears that the vast majority of hospitals were able to successfully collect and report SEP-1 data. This finding suggests that the SEP-1 measure will not force large numbers of hospitals to face financial penalties for nonreporting, at least so long as SEP-1 remains only a "pay-for-reporting" measure rather than a "pay-for-performance" measure. At the same time, the opportunity costs of investments in data reporting may impose indirect effects even when hospitals do not receive financial penalties for nonreporting. Future work should be devoted to developing and testing ways to reduce the burden of SEP-1 data collection and reporting, perhaps through the development of tools in the electronic health record.
Our finding that SEP-1 bundle compliance was higher in smaller, for-profit hospitals provides preliminary insight into the organizational determinants of variation in sepsis performance. One possible explanation for these findings is variation in case-mix. Sepsis case-mix differs across hospitals (1), and the increased complexity of the SEP-1 bundle for patients with septic shock may drive lower compliance in hospitals with a greater proportion of patients in shock (20). In addition, patients with comorbid cardiac or renal disease may be less likely to receive fluid volumes consistent with the SEP-1 bundle (21). Greater concentrations of patients with more severe sepsis or comorbid illnesses in larger, nonprofit hospitals could contribute to our findings.
Another consideration is that EDs at smaller, for-profit hospitals may be less crowded, facilitating earlier identification and treatment of patients with sepsis and other time-sensitive conditions (22–24). This explanation would be consistent with our observation that better SEP-1 performance is associated with other measures of hospitals’ ability to provide time-sensitive care. A prior study of sepsis resuscitation bundles also identified smaller hospitals as providing more bundle-compliant care (4). Ultimately, understanding the mechanisms by which some hospitals achieve more rapid sepsis identification and treatment is a prerequisite to expanding these strategies to other hospitals, which would improve sepsis care and outcomes broadly.
Our findings provide some mechanistic insight into known volume-outcome relationships and point to time-sensitive care processes as potential targets for quality improvement. Previous studies consistently report a volume-outcome relationship in sepsis, whereby patients admitted to hospitals caring for higher volumes of patients with sepsis experience greater survival (25). We found that SEP-1 compliance was lowest in hospitals with very low case volumes, but that the effect of higher case volumes leveled off at around 75 annual reported cases. The absence of a consistent volume-performance relationship at higher case volumes may reflect the fact that the SEP-1 program excludes patients transferred between hospitals, for whom care may differ from those directly admitted (26) and allows hospitals with very high case volumes to report data on a subsample of patients. Nevertheless, our findings support a conceptual model of the sepsis volume-outcome relationship in which worse outcomes at the lowest volume hospitals are explained in part by less timely sepsis care at these hospitals. Under this model of the volume-outcome relationship, SEP-1 bundle compliance is at least a marker, if not the defining feature, of timely sepsis care. Understanding which strategies allow higher volume hospitals to excel at delivering time-sensitive care, so that these practices can be disseminated to lower volume hospitals, could thereby improve sepsis care across the health system.
These results support the overall value of the SEP-1 measure by providing additional construct validity (27). Specifically, our analysis demonstrates that performance on the SEP-1 measure tracks with multiple other established quality measures for time-sensitive conditions, as would be expected if hospital quality for such conditions stems from shared factors underlying timely and effective emergency care. Examples of such factors may be an organizational commitment to communication and coordination among care groups (28) or the use of written protocols for the recognition and treatment of acute illness (29). There is a robust body of evidence demonstrating that early identification and treatment of sepsis saves lives (4, 5). Our findings suggest that hospitals that comply with the SEP-1 bundle also implement time-sensitive diagnostic and treatment processes for other emergency medical conditions.
Our study has several limitations. First, SEP-1 compliance and other process data are self-reported and have not undergone external audit, creating the potential for inaccuracies. For example, variability between data abstractors in how they define sepsis "time zero" relative to the bundle components could lead to artificially better SEP-1 performance (30); if this occurred systematically in smaller, for-profit hospitals, it could contribute to our findings. This vulnerability is particularly challenging given the complex measure specification and concerns about reconciling differing clinical definitions of sepsis (10).
Second, we analyzed only overall SEP-1 performance, and it is possible that the reasons for SEP-1 failure differed across hospitals. A hospital with low SEP-1 performance due to a lack of documentation, which is not inherently linked to patient outcomes, likely differs from a hospital with low SEP-1 performance due to widespread delays in antibiotic administration, which correlates strongly with higher sepsis mortality (4, 5). SEP-1 is collected and reported as an "all-or-none" measure, which necessarily limits the ability of hospitals and investigators to use the data to understand the mechanisms behind low and high performance. Although some hospitals track the individual components of SEP-1 performance, not all hospitals have the resources to do so (10). Many have argued for allowing hospitals more flexibility to focus on aspects of care that are most tightly linked to better patient-centered outcomes (10, 15, 20). Indeed, since the original release of SEP-1, CMS simplified the component required for the reassessment of a patient's response to therapy, which may allow hospitals to concentrate their efforts beyond documentation. Ongoing changes to the SEP-1 reporting requirements that yield more granular insight into reasons for success or failure and increase the flexibility of the measure might facilitate both process improvement and research insights across the health system.
Finally, although our results provide evidence supporting construct validity of the SEP-1 measure in general, its impact on patient outcomes remains uncertain. The overall magnitude of the associations between SEP-1 and other performance measures was weak, tempering the strength of our conclusions. The evidence base for sepsis diagnosis and treatment is dynamic, and all quality measures should incorporate ongoing evidence as it accumulates. Perhaps more importantly, protocolized sepsis bundles have not improved outcomes in randomized trials and may in fact incur excess costs (31). We therefore need to understand how SEP-1 implementation has affected outcomes of patients with and without sepsis, including intended benefits like earlier sepsis recognition and treatment, and unintended harms such as excessive fluid administration or adverse effects of the widespread application of broad-spectrum antibiotics across the health system.
CONCLUSIONS
In a national study of U.S. hospitals' SEP-1 reporting and performance, we found that the primary reason for nonreporting was an inadequate case volume and that SEP-1 performance was higher in smaller, for-profit hospitals and in those with higher case volumes. SEP-1 performance was also associated with other ED-based process measures for time-sensitive care, providing a preliminary signal that compliance with the SEP-1 bundle is a marker of a hospital's ability to deliver timely sepsis care. Future work will need to evaluate the link between these hospital-level observations and patient-level data on sepsis treatment processes and outcomes associated with the SEP-1 reporting program.
REFERENCES
1. Rhee C, Dantes R, Epstein L, et al; CDC Prevention Epicenter Program: Incidence and trends of sepsis in US hospitals using clinical vs claims data, 2009-2014. JAMA 2017; 318:1241–1249
2. Liu V, Escobar GJ, Greene JD, et al: Hospital deaths in patients with sepsis from 2 independent cohorts. JAMA 2014; 312:90–92
3. Torio CM, Moore BJ: National Inpatient Hospital Costs: The Most Expensive Conditions by Payer, 2013. Statistical Brief No. 204. Healthcare Cost and Utilization Project (HCUP). Rockville, MD, Agency for Healthcare Research and Quality, 2016
4. Seymour CW, Gesten F, Prescott HC, et al: Time to treatment and mortality during mandated emergency care for sepsis. N Engl J Med 2017; 376:2235–2244
5. Liu VX, Fielding-Singh V, Greene JD, et al: The timing of early antibiotics and hospital mortality in sepsis. Am J Respir Crit Care Med 2017; 196:856–863
6. Han X, Edelson DP, Snyder A, et al: Implications of Centers for Medicare & Medicaid Services severe sepsis and septic shock early management bundle and initial lactate measurement on the management of sepsis. Chest 2018; 154:302–308
7. Levy MM, Rhodes A, Phillips GS, et al: Surviving Sepsis Campaign: Association between performance metrics and outcomes in a 7.5-year study. Crit Care Med 2015; 43:3–12
8. Cooke CR, Iwashyna TJ: Sepsis mandates: Improving inpatient care while advancing quality improvement. JAMA 2014; 312:1397–1398
10. Barbash IJ, Rak KJ, Kuza CC, et al: Hospital perceptions of Medicare's sepsis quality reporting initiative. J Hosp Med 2017; 12:963–968
11. Barbash IJ, Kahn JM, Thompson BT: Opening the debate on the new sepsis reporting program: Two steps forward, one step back. Am J Respir Crit Care Med 2016; 194:139–141
12. Wall MJ, Howell MD: Variation and cost-effectiveness of quality measurement programs. The case of sepsis bundles. Ann Am Thorac Soc 2015; 12:1597–1599
13. Pepper DJ, Jaswal D, Sun J, et al: Evidence underpinning the Centers for Medicare & Medicaid Services' severe sepsis and septic shock management bundle (SEP-1): A systematic review. Ann Intern Med 2018; 168:558–568
14. Faust JS, Weingart SD: The past, present, and future of the Centers for Medicare and Medicaid Services quality measure SEP-1: The early management bundle for severe sepsis/septic shock. Emerg Med Clin North Am 2017; 35:219–231
15. Klompas M, Rhee C: The CMS sepsis mandate: Right disease, wrong measure. Ann Intern Med 2016; 165:517–518
16. Wallace DJ, Seymour CW, Kahn JM: Hospital-level changes in adult ICU bed supply in the United States. Crit Care Med 2017; 45:e67–e76
17. Wallace DJ, Angus DC, Seymour CW, et al: Critical care bed growth in the United States. A comparison of regional and national trends. Am J Respir Crit Care Med 2015; 191:410–416
18. Ferrer R, Artigas A, Levy MM, et al; Edusepsis Study Group: Improvement in process of care and outcome after a multicenter severe sepsis educational program in Spain. JAMA 2008; 299:2294–2303
19. Levy MM, Dellinger RP, Townsend SR, et al; Surviving Sepsis Campaign: The Surviving Sepsis Campaign: Results of an international guideline-based performance improvement program targeting severe sepsis. Crit Care Med 2010; 38:367–374
20. Rhee C, Filbin MR, Massaro AF, et al: Compliance with the national SEP-1 quality measure and association with sepsis outcomes. Crit Care Med 2018; 1
21. Liu VX, Morehouse JW, Marelich GP, et al: Multicenter implementation of a treatment bundle for patients with sepsis and intermediate lactate values. Am J Respir Crit Care Med 2016; 193:1264–1270
22. Pines JM, Decker SL, Hu T: Exogenous predictors of national performance measures for emergency department crowding. Ann Emerg Med 2012; 60:293–298
23. Mullins PM, Pines JM: National ED crowding and hospital quality: Results from the 2013 Hospital Compare data. Am J Emerg Med 2014; 32:634–639
24. Gaieski DF, Agarwal AK, Mikkelsen ME, et al: The impact of ED crowding on early interventions and mortality in patients with severe sepsis. Am J Emerg Med 2017; 35:953–960
25. Walkey AJ, Wiener RS: Hospital case volume and outcomes among patients hospitalized with severe sepsis. Am J Respir Crit Care Med 2014; 189:548–555
26. Barbash IJ, Zhang H, Angus DC, et al: Differences in hospital risk-standardized mortality rates for acute myocardial infarction when assessed using transferred and nontransferred patients. Med Care 2017; 55:476–482
27. Bagozzi RP, Yi Y, Phillips LW: Assessing construct validity in organizational research. Adm Sci Q 1991; 36:421
28. Curry LA, Spatz E, Cherlin E, et al: What distinguishes top-performing hospitals in acute myocardial infarction mortality rates? A qualitative study. Ann Intern Med 2011; 154:384–390
29. Fonarow GC, Smith EE, Saver JL, et al: Timeliness of tissue-type plasminogen activator therapy in acute ischemic stroke: Patient characteristics, hospital factors, and outcomes associated with door-to-needle times within 60 minutes. Circulation 2011; 123:750–758
30. Rhee C, Brown SR, Jones TM, et al; CDC Prevention Epicenters Program: Variability in determining sepsis time zero and bundle compliance rates for the Centers for Medicare and Medicaid Services SEP-1 measure. Infect Control Hosp Epidemiol 2018; 39:994–996
31. PRISM Investigators: Early, goal-directed therapy for septic shock. A patient-level meta-analysis. N Engl J Med 2017; 376:2223–2234
Keywords: critical care; healthcare quality indicators; health policy; health services research; Medicare; sepsis
Copyright © 2018 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.