Sepsis afflicts approximately 1.7 million adults each year in the United States and may contribute to as many as 270,000 in-hospital deaths (1). Previous studies have reported that risk-adjusted sepsis mortality rates derived from claims data vary substantially between hospitals, suggesting that there is ample room to improve sepsis care in many institutions (2–5). Local and national initiatives consequently seek to encourage best practices and benchmark the quality of sepsis care provided by hospitals.
Comparing trends and differences in hospitals’ sepsis rates and outcomes could help drive quality improvement efforts. Administrative claims data are commonly used for sepsis surveillance and comparisons, but recent analyses suggest that temporal trends in incidence and mortality using claims are biased by rising sepsis awareness and more diligent coding of sepsis and organ dysfunction over time (1, 6–12). It is unknown, however, whether claims data can be used to compare hospital sepsis rates and outcomes in the same time period and reliably identify low- or high-performing hospitals, or if variability in diagnosis, documentation, and coding practices limits their use for this purpose as well. Variability in diagnosing organ dysfunction could also impact sepsis case-finding strategies that use combinations of infection and organ dysfunction codes as well as risk-adjustment methods that incorporate present-on-admission organ dysfunction codes (4, 5).
The aim of this study was to evaluate variation in the sensitivity of claims data for sepsis and acute organ dysfunction in U.S. acute care hospitals relative to detailed clinical criteria derived from electronic health records (EHRs). We further examined the correlation between claims versus clinical data for comparing hospital sepsis prevalence and mortality rates and the degree to which hospitals’ relative mortality rankings differed using either method.
Study Design, Data Sources, and Population
This was a retrospective cohort study using EHR and administrative data from adult patients (age ≥ 20 yr) admitted as inpatients in calendar years 2013 or 2014 at 193 U.S. acute care hospitals. These hospitals were drawn from six datasets: Brigham and Women’s Hospital, Cerner HealthFacts, Emory Healthcare, HCA Healthcare, the Institute of Health Metrics, and University of Pittsburgh Medical Center. These datasets, which have previously been described in detail, together include a diverse mix of academic and community hospitals (1). The study was approved with a waiver of informed consent by the Institutional Review Boards at Harvard Pilgrim Health Care Institute, Partners HealthCare, University of Pittsburgh, and Emory University.
Claims Versus Clinical Criteria for Sepsis and Organ Dysfunction
Our primary claims-based method for identifying sepsis was an “explicit” definition that requires International Classification of Diseases, 9th revision, Clinical Modification (ICD-9-CM) codes for severe sepsis (995.92) or septic shock (785.52). Secondarily, we also examined an “implicit” definition that requires at least one infection code and one organ dysfunction code (or explicit severe sepsis or septic shock codes alone) because this method is more sensitive than explicit codes and is commonly used to characterize sepsis epidemiology and compare hospitals (2, 3, 13, 14).
Our primary EHR-based method for identifying sepsis was a validated surveillance definition that requires clinical indicators of organ dysfunction concurrent with presumed serious infection (1). Presumed serious infection was defined as greater than or equal to one blood culture order and initiation of a new systemic antibiotic within the window from 2 days before to 2 days after the blood culture order, with continuation of antibiotics for greater than or equal to 4 consecutive days (or fewer if death or discharge to hospice or another acute care hospital occurred before 4 d). Concurrent organ dysfunction was defined as initiation of vasopressors, initiation of mechanical ventilation, doubling of creatinine from baseline, doubling of total bilirubin to greater than or equal to 2.0 mg/dL, or decrease in platelet count to less than 100 × 10³ cells/µL within ± 2 days relative to the blood culture order date (Supplemental Table 1, Supplemental Digital Content 1, http://links.lww.com/CCM/E181). These thresholds were selected to mirror Sequential Organ Failure Assessment (SOFA) organ dysfunction scores greater than or equal to 2 as well as the Risk, Injury, Failure, Loss of kidney function, and End-stage kidney disease (RIFLE) criteria for acute kidney injury, but adapted for automated implementation using routine EHR data (1, 11, 15, 16). We also included a criterion for elevated lactate because it was part of the consensus definitions for severe sepsis in use during the study time period (17–19). We identified mechanical ventilation using ICD-9-CM procedure codes (96.7, 96.71, 96.72) or Current Procedural Terminology codes (94002–94004, 94656–94657) because other structured clinical indicators of respiratory failure were unavailable in our datasets.
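For illustration, the core surveillance logic can be sketched as a small function over encounter-day data. This is a hedged sketch, not the study's actual implementation: the integer day indexing, the input structures, and the helper names (`presumed_serious_infection`, `sepsis_event`) are assumptions for clarity, and the lactate and code-based mechanical ventilation criteria are omitted.

```python
def presumed_serious_infection(culture_day, abx_days, last_hospital_day=None):
    """Presumed serious infection: a blood culture order plus a new systemic
    antibiotic started within +/- 2 days of the culture and continued for
    >= 4 consecutive days, or until death/discharge if that came first."""
    abx = set(abx_days)
    for start in abx:
        if abs(start - culture_day) > 2:
            continue
        # Count the consecutive run of antibiotic days beginning at `start`.
        end = start
        while end + 1 in abx:
            end += 1
        if end - start + 1 >= 4:
            return True
        # Shorter courses qualify if antibiotics continued through the last
        # hospital day (death, hospice, or transfer before day 4).
        if last_hospital_day is not None and end >= last_hospital_day:
            return True
    return False


def sepsis_event(culture_day, abx_days, dysfunction_days, last_hospital_day=None):
    """EHR surveillance event: presumed serious infection with any organ
    dysfunction indicator within +/- 2 days of the blood culture order."""
    if not presumed_serious_infection(culture_day, abx_days, last_hospital_day):
        return False
    return any(abs(d - culture_day) <= 2 for d in dysfunction_days)
```

For example, an encounter with a culture on day 2, antibiotics on days 2–5, and vasopressor initiation on day 3 would qualify, whereas the same encounter with only a 2-day antibiotic course (and no early death or discharge) would not.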
We also assessed the sensitivity of explicit and implicit claims-based sepsis definitions for bacteremic shock because this is a rare but unambiguous event that all clinicians would agree constitutes “sepsis” (20). Bacteremic shock was defined as the presence of positive blood cultures (excluding common skin contaminants) with concurrent vasopressors within ± 2 days (21).
Finally, we evaluated the sensitivity of claims codes for organ dysfunction relative to EHR data using similar thresholds as above (Supplemental Table 2, Supplemental Digital Content 1, http://links.lww.com/CCM/E181). We focused on acute kidney injury, hepatic injury, and thrombocytopenia because these can be objectively defined using routine laboratory data. We also examined hypotension/shock codes relative to vasopressors. We did not examine the sensitivity of respiratory failure codes because our only clinical measures of respiratory dysfunction were procedure codes for mechanical ventilation. Our organ dysfunction comparisons were conducted in all patients with one or more blood cultures obtained during hospitalization, with or without antibiotics, in order to maintain a large denominator while still focusing on encounters where there was some suspicion of infection.
Outcomes for Hospital Comparisons and Reliability Adjustment
We calculated the sensitivity of sepsis and organ dysfunction codes relative to EHR clinical criteria at the hospital level. We also calculated sepsis prevalence rates for each hospital (using all adult hospitalizations as the denominator) and sepsis in-hospital mortality rates using claims versus the primary clinical sepsis surveillance definition. We did not examine the specificity of codes because we selected high clinical thresholds for organ dysfunction to provide unambiguous reference comparisons for sensitivity; in this scenario, specificity is less meaningful (e.g., patients can have hypotension without requiring vasopressors).
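As a minimal sketch of the hospital-level sensitivity calculation (the encounter-ID representation is an assumption for illustration, not the study's data model):

```python
def coding_sensitivity(clinical_ids, coded_ids):
    """Sensitivity of claims for one hospital: the fraction of encounters
    meeting EHR clinical criteria that also carry the claims code(s)."""
    clinical, coded = set(clinical_ids), set(coded_ids)
    return len(clinical & coded) / len(clinical)
```

For example, if two of four clinically identified encounters also carry the relevant code, the hospital's coding sensitivity is 0.5; encounters coded without meeting clinical criteria do not enter the calculation.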
To minimize random statistical noise from hospitals with small numbers of sepsis and organ dysfunction cases, we only included study hospitals that had 50 or more hospitalizations with explicit sepsis codes and 50 or more cases meeting clinical criteria for sepsis during the 2-year study period. After excluding hospitals that did not meet these minimum case counts, we performed reliability adjustment using random-effects logistic regression models to generate empirical Bayes estimates of the sensitivity of each hospital's claims data and of sepsis prevalence and mortality rates (22, 23). This method is used in hospital comparisons to account for variations in denominators; when denominators are small, very low or very high outcome rates are more likely to be due to chance than to true differences in hospital performance (2). The model shrinks point estimates toward the average rate, with greater shrinkage for hospitals with smaller sample sizes and when true variation across hospitals is smaller. We used the normality assumption to generate 95% CIs for reliability-adjusted outcomes.
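The shrinkage idea can be illustrated on the proportion scale. This is an illustrative simplification, not the study's method: the study fit random-effects logistic models on the log-odds scale, and the between-hospital variance `tau2` here is an assumed input rather than an estimated quantity.

```python
def shrink_rates(events, denominators, tau2):
    """Empirical Bayes-style shrinkage of hospital event rates toward the
    overall mean. Each hospital's reliability weight grows with its volume,
    so small hospitals are pulled furthest toward the mean."""
    overall = sum(events) / sum(denominators)
    shrunk = []
    for e, n in zip(events, denominators):
        p = e / n
        sampling_var = overall * (1 - overall) / n   # within-hospital noise
        w = tau2 / (tau2 + sampling_var)             # reliability in [0, 1]
        shrunk.append(overall + w * (p - overall))
    return shrunk
```

Under these assumptions, a 10-admission hospital with a 30% raw rate is pulled most of the way back toward the overall mean, while a 5,000-admission hospital's estimate barely moves.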
We calculated the reliability-adjusted median, minimum and maximum, and interquartile range (IQR) of hospital coding sensitivity rates for sepsis and each organ dysfunction. We quantified the correlation between claims versus EHR-based clinical estimates of hospitals’ reliability-adjusted sepsis prevalence and mortality rates using Pearson correlation coefficients (r). We ranked all hospitals by reliability-adjusted sepsis mortality rates using claims and clinical criteria and examined how hospitals’ relative rankings within quartiles differed using either method. We did not perform any risk adjustment because our goal was to examine whether and how observed mortality varies among hospitals using claims versus clinical data rather than attempting to gauge true differences in the quality of care delivered by hospitals.
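The two comparison metrics above can be sketched as follows. This is illustrative only; the tie-handling and exact quartile-assignment conventions are assumptions, and the function names are ours.

```python
from statistics import mean, pstdev

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / (pstdev(x) * pstdev(y))

def mortality_quartiles(rates):
    """Assign each hospital a quartile (1 = lowest mortality) by rank."""
    order = sorted(range(len(rates)), key=lambda i: rates[i])
    quartile = [0] * len(rates)
    for rank, i in enumerate(order):
        quartile[i] = 1 + (4 * rank) // len(rates)
    return quartile
```

Comparing `mortality_quartiles(claims_rates)` with `mortality_quartiles(clinical_rates)` element-wise shows how many hospitals shift quartiles between the two measurement methods.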
The comparisons of sepsis prevalence rates and mortality by claims versus EHR clinical data were conducted using the entire cohort of 193 hospitals drawn from all six datasets. Based on data availability, the analyses of the sensitivity of organ dysfunction and sepsis codes were only conducted in the 178 hospitals in the Cerner, HCA Healthcare, and Institute of Health Metrics datasets.
Analyses were conducted using SAS version 9.4 (SAS Institute, Cary, NC) and R version 3.3.1 (r-project.org). All tests of significance were two-sided, with p values of less than or equal to 0.05 considered significant.
The study cohort included 4,323,303 adult hospitalizations in 193 hospitals during calendar years 2013 and 2014. There were 1,154,061 hospitalizations with blood culture draws, 505,738 with implicit sepsis codes, 117,293 with explicit sepsis codes, and 266,383 meeting EHR clinical criteria for sepsis. The characteristics of study hospitals and case counts are summarized in Table 1.
Sensitivity of Claims Data Relative to EHR Clinical Criteria
Among hospitalizations meeting EHR-based clinical criteria for sepsis, the sensitivity of hospitals’ claims data ranged from 5% to 54% for explicit sepsis codes (median, 30%; IQR, 25–35%) and from 42% to 80% for implicit codes (median, 65%; IQR, 61–70%) (Fig. 1A). Among hospitalizations with bacteremic shock, the sensitivity of explicit sepsis codes ranged from 8% to 94% (median, 80%; IQR, 68–85%) and from 48% to 97% for implicit codes (median, 92%; IQR, 88–94%).
Among hospitalizations with blood culture draws meeting EHR-based clinical criteria for organ dysfunction, the sensitivity of hospitals’ claims data ranged from 26% to 84% for acute kidney injury (median, 66%; IQR, 58–72%), 16% to 60% for thrombocytopenia codes (median, 39%; IQR, 35–43%), 29% to 44% for hepatic injury codes (median, 36%; IQR, 34–37%), and 29% to 84% for hypotension/shock codes (median, 66%; IQR, 58–72%) (Fig. 1B).
Only one of the low-sensitivity hospital outliers in Figure 1A was an outlier for both explicit and implicit sepsis codes relative to both clinical criteria and bacteremic shock. None of the outlier hospitals in Figure 1B were outliers for more than one organ dysfunction type.
Claims Versus EHR Clinical Criteria for Determining Hospital Sepsis Rates and Outcomes
The reliability-adjusted prevalence of sepsis among hospitals ranged from 0.3% to 9.4% (median, 2.6%; IQR, 2.0–3.2%) using explicit sepsis codes, from 3.2% to 29.9% (median, 11.6%; IQR, 9.8–13.4%) using implicit sepsis codes, and from 1.1% to 17.1% (median, 5.9%; IQR, 4.7–7.4%) using clinical criteria. There was also substantial variation in hospitals’ sepsis mortality rates using all definitions, ranging from 13.7% to 36.8% using explicit sepsis codes (median, 24.8%; IQR, 21.1–27.9%), from 5.5% to 18.0% using implicit codes (median, 9.8%; IQR, 8.0–11.2%), and from 6.5% to 30.8% using clinical criteria (median, 15.5%; IQR, 13.1–17.4%). The distribution of hospital sepsis mortality rates using explicit sepsis codes versus clinical criteria is shown in Figure 2.
There was moderate correlation between sepsis prevalence rates of hospitals measured using explicit sepsis codes versus clinical criteria (r = 0.64; 95% CI, 0.54–0.71) and using implicit codes versus clinical criteria (r = 0.64; 95% CI, 0.55–0.72). Similarly, there was moderate correlation between sepsis mortality rates of hospitals using explicit sepsis codes (r = 0.61; 95% CI, 0.51–0.69) and implicit sepsis codes (r = 0.69; 95% CI, 0.61–0.76) versus clinical criteria. The relationship between sepsis mortality rates of hospitals by clinical versus claims data when ranked according to clinical criteria is shown in Figure 3.
After ranking all 193 hospitals by sepsis mortality, 22 of 48 hospitals (46%) ranked in the lowest mortality quartile by explicit sepsis codes shifted into the second, third, or fourth quartile when using clinical criteria; 10 of 48 (21%) shifted by two or three quartiles (Fig. 4A). Similarly, 17 of 48 hospitals (35%) in the lowest mortality rate quartile by implicit sepsis codes shifted into higher mortality quartiles when using clinical criteria, including five of 48 (10%) that shifted into quartiles 3 or 4 (Fig. 4B). Twenty-eight of the 193 hospitals (15%) had sepsis mortality rankings by explicit sepsis codes that differed by two or more quartiles relative to clinical criteria; 12 of these 28 hospitals (43%) also differed by two or more quartiles by implicit sepsis codes relative to clinical criteria.
Using EHR clinical data from a large cohort of U.S. hospitals, we found that the sensitivity of claims data for identifying sepsis and organ dysfunction was highly variable and the correlation between hospitals’ sepsis prevalence and mortality rates using claims versus clinical criteria was only moderate. The relative rankings of hospitals for sepsis mortality rates differed substantially when using claims data versus clinical data; almost half the hospitals in the lowest mortality quartile according to claims shifted to higher mortality quartiles when using clinical data.
Substantial variations in sepsis outcomes across hospitals and regions have previously been reported using claims data, even after adjusting for severity of illness (2–5). Our findings suggest, however, that caution should be exercised when using claims data to measure variation in hospitals’ sepsis outcomes. Our results are consistent with a recent analysis demonstrating that agreement between septic shock codes and clinical criteria based on IV antibiotics, blood cultures, and vasopressors for identification of outlier hospitals in septic shock mortality was only moderate (4). Our study expands on this analysis to include the full spectrum of sepsis and demonstrates the variable sensitivity of claims data for each type of sepsis-associated organ dysfunction. Our findings also underscore the differences in sepsis cohorts identified by different surveillance methods. In particular, explicit sepsis codes have high specificity but low sensitivity and capture the most severely ill patients, whereas implicit codes have better sensitivity but lower specificity and identify a cohort with lower mortality rates (1, 14, 24, 25).
Surveys of healthcare information managers have shown that general coding practices are variable (26, 27), but this is likely even worse with sepsis due to the subjectivity inherent in making the diagnosis (20). Sepsis has no pathologic gold standard, and it is often unclear whether a patient is infected and whether organ dysfunction is due to infection or some other inflammatory condition (28). This subjectivity is compounded by variable thresholds for defining organ dysfunction. For example, there are multiple definitions for acute kidney injury (16, 29), which differ from thresholds used in the SOFA score and other ICU-based organ dysfunction scores (15, 30, 31).
Currently, the Centers for Medicare and Medicaid Services’ Severe Sepsis/Septic Shock Early Management Bundle (SEP-1) requires manual review of patients diagnosed with sepsis in order to benchmark hospital adherence to sepsis bundles (32). Our findings suggest that there may be substantial variation between hospitals in the cases selected for SEP-1 review. The EHR-based clinical surveillance definition used in this study may present a more objective option for identifying sepsis compared with claims data. Tracking sepsis incidence and outcomes using more consistent criteria could drive further innovation and improvements in care, similar to how national comparative data helped spur widespread efforts to decrease central line-associated bloodstream infection rates and how objective surveillance for ventilator-associated events has improved knowledge around best practices for mechanically ventilated patients (33–36). The U.S. Centers for Disease Control and Prevention recently released an “Adult Sepsis Event” toolkit to help hospitals implement the surveillance definition used in this study (21). Although its primary purpose is to help hospitals better track their sepsis rates and outcomes, it could also potentially serve as a starting point for hospital comparisons if coupled with rigorous risk-adjustment tools.
The need for risk adjustment when comparing hospitals is underscored by the substantial variation in sepsis mortality rates observed in our cohort even when using clinical criteria. The extent to which this variation represents differences in sepsis case mix across hospitals, differences in sepsis care, or both is unclear (37). Several sepsis risk-adjustment scores have already been developed, but most rely on administrative claims data and thus may be susceptible to the variability observed in this study (4, 5, 38, 39). In particular, the variation in sensitivity of organ dysfunction codes we observed may confound risk-adjustment methods that incorporate these codes when present on admission.
Our study has several limitations. First, the study cohort was a convenience sample of hospitals with overrepresentation of hospitals in the South and may not be generalizable to the rest of the country. However, the hospital cohort was diverse with respect to teaching status and bed size. Second, we did not review medical records to confirm the accuracy of our EHR-based organ dysfunction criteria. Indeed, multiple definitions for organ dysfunction exist with no single consensus standard. However, we used high clinical thresholds that most clinicians would likely agree indicate organ dysfunction, and our primary goal was to examine the relative variability that exists across hospitals when using a consistent definition rather than to describe the absolute sensitivity of these codes relative to a single reference standard. Third, we cannot separate the degree to which variability in claims data was due to differences in coding practices versus physician diagnosis and documentation practices (which govern hospital coding). Fourth, for our EHR-based sepsis criteria, we used procedure codes for mechanical ventilation because EHRs do not consistently contain reliable clinical indicators of respiratory dysfunction (blood gases are not always obtained in patients with respiratory failure, and when obtained, FIO2 and venous vs arterial source are variably documented). However, previous work has demonstrated that administrative definitions for mechanical ventilation are reasonably accurate in identifying patients with respiratory failure (40). Fifth, our data sources used ICD-9-CM codes, and future studies will need to determine the extent to which our findings hold true in the current International Classification of Diseases, 10th Revision (ICD-10), era.
Last, there is no true gold standard for sepsis (28); our clinical criteria for sepsis relied on physician judgments to draw blood cultures, initiate and continue antibiotics, and to diagnose and manage organ dysfunction and are thus an imperfect reference standard for comparison. However, these clinical actions are the cornerstones of sepsis management and our approach has the merit of using consistent criteria based on EHR data that are measured and reported in a relatively uniform manner across hospitals (7). The validity of our findings is also strengthened by the substantial variability we observed in the sensitivity of sepsis codes for bacteremic shock, a rare but unambiguous form of sepsis.
In conclusion, hospitals varied significantly in the sensitivity of their claims data for organ dysfunction and sepsis, with only moderate correlation between sepsis prevalence and mortality rates measured by claims versus clinical data and substantial differences in hospitals’ sepsis mortality rankings. Variations in diagnosis, documentation, and coding practices may confound efforts to risk adjust and benchmark hospital sepsis performance. Objective sepsis surveillance using clinical data from EHRs, with rigorous adjustments for severity of illness, may facilitate more meaningful comparisons among hospitals in the future.
We thank Richard Platt, MD, MS, Harvard Medical School/Harvard Pilgrim Health Care Institute, and Jonathan B. Perlin, MD, HCA Healthcare, for their support and review of the article.
1. Rhee C, Dantes R, Epstein L, et al. Incidence and trends of sepsis in US hospitals using clinical vs claims data, 2009–2014. JAMA 2017; 318:1241–1249
2. Prescott HC, Kepreos KM, Wiitala WL, et al. Temporal changes in the influence of hospitals and regional healthcare networks on severe sepsis mortality. Crit Care Med 2015; 43:1368–1374
3. Wang HE, Donnelly JP, Shapiro NI, et al. Hospital variations in severe sepsis mortality. Am J Med Qual 2015; 30:328–336
4. Walkey AJ, Shieh MS, Liu VX, et al. Mortality measures to profile hospital performance for patients with septic shock. Crit Care Med 2018; 46:1247–1254
5. Hatfield KM, Dantes RB, Baggs J, et al. Assessing variability in hospital-level mortality among U.S. Medicare beneficiaries with hospitalizations for severe sepsis and septic shock. Crit Care Med 2018; 46:1753–1760
6. Rhee C, Gohil S, Klompas M. Regulatory mandates for sepsis care–reasons for caution. N Engl J Med 2014; 370:1673–1676
7. Klompas M, Rhee C. We need better tools for sepsis surveillance. Crit Care Med 2016; 44:1441–1442
8. Rudd KE, Delaney A, Finfer S. Counting sepsis, an imprecise but improving science. JAMA 2017; 318:1228–1229
9. Rhee C, Kadri S, Huang SS, et al. Objective sepsis surveillance using electronic clinical data. Infect Control Hosp Epidemiol 2016; 37:163–171
10. Kadri SS, Rhee C, Strich JR, et al. Estimating ten-year trends in septic shock incidence and mortality in United States Academic Medical Centers using clinical data. Chest 2017; 151:278–285
11. Rhee C, Murphy MV, Li L, et al; Centers for Disease Control and Prevention Epicenters Program: Improving documentation and coding for acute organ dysfunction biases estimates of changing sepsis severity and burden: A retrospective study. Crit Care 2015; 19:338
12. Jafarzadeh SR, Thomas BS, Marschall J, et al. Quantifying the improvement in sepsis diagnosis, documentation, and coding: The marginal causal effect of year of hospitalization on sepsis diagnosis. Ann Epidemiol 2016; 26:66–70
13. Angus DC, Linde-Zwirble WT, Lidicker J, et al. Epidemiology of severe sepsis in the United States: Analysis of incidence, outcome, and associated costs of care. Crit Care Med 2001; 29:1303–1310
14. Iwashyna TJ, Odden A, Rohde J, et al. Identifying patients with severe sepsis using administrative claims: Patient-level validation of the Angus implementation of the international consensus conference definition of severe sepsis. Med Care 2014; 52:e39–e43
15. Vincent JL, Moreno R, Takala J, et al. The SOFA (Sepsis-related Organ Failure Assessment) score to describe organ dysfunction/failure. On behalf of the Working Group on Sepsis-Related Problems of the European Society of Intensive Care Medicine. Intensive Care Med 1996; 22:707–710
16. Bellomo R, Ronco C, Kellum JA, et al; Acute Dialysis Quality Initiative Work Group: Acute renal failure - definition, outcome measures, animal models, fluid therapy and information technology needs: The Second International Consensus Conference of the Acute Dialysis Quality Initiative (ADQI) Group. Crit Care 2004; 8:R204–R212
17. Bone RC, Balk RA, Cerra FB, et al. Definitions for sepsis and organ failure and guidelines for the use of innovative therapies in sepsis. The ACCP/SCCM Consensus Conference Committee. American College of Chest Physicians/Society of Critical Care Medicine. Chest 1992; 101:1644–1655
18. Levy MM, Fink MP, Marshall JC, et al. 2001 SCCM/ESICM/ACCP/ATS/SIS International Sepsis Definitions Conference. Intensive Care Med 2003; 29:530–538
19. Dellinger RP, Levy MM, Rhodes A, et al. Surviving sepsis campaign: International guidelines for management of severe sepsis and septic shock: 2012. Crit Care Med 2013; 41:580–637
20. Rhee C, Kadri SS, Danner RL, et al. Diagnosing sepsis is subjective and highly variable: A survey of intensivists using case vignettes. Crit Care 2016; 20:89
22. Dimick JB, Staiger DO, Birkmeyer JD. Ranking hospitals on surgical mortality: The importance of reliability adjustment. Health Serv Res 2010; 45:1614–1629
23. MacKenzie TA, Grunkemeier GL, Grunwald GK, et al. A primer on using shrinkage to compare in-hospital mortality between centers. Ann Thorac Surg 2015; 99:757–761
24. Whittaker SA, Mikkelsen ME, Gaieski DF, et al. Severe sepsis cohorts derived from claims-based strategies appear to be biased toward a more severely ill patient population. Crit Care Med 2013; 41:945–953
25. Gaieski DF, Edwards JM, Kallan MJ, et al. Benchmarking the incidence and mortality of severe sepsis in the United States. Crit Care Med 2013; 41:1167–1174
26. Lorence D. Regional variation in medical classification agreement: Benchmarking the coding gap. J Med Syst 2003; 27:435–443
27. Lorence DP, Ibrahim IA. Benchmarking variation in coding accuracy across the United States. J Health Care Finance 2003; 29:29–42
28. Angus DC, Seymour CW, Coopersmith CM, et al. A framework for the development and interpretation of different sepsis definitions and clinical criteria. Crit Care Med 2016; 44:e113–e121
29. Mehta RL, Kellum JA, Shah SV, et al; Acute Kidney Injury Network: Acute Kidney Injury Network: Report of an initiative to improve outcomes in acute kidney injury. Crit Care 2007; 11:R31
30. Marshall JC, Cook DJ, Christou NV, et al. Multiple Organ Dysfunction Score: A reliable descriptor of a complex clinical outcome. Crit Care Med 1995; 23:1638–1652
31. Le Gall JR, Klar J, Lemeshow S, et al. The logistic organ dysfunction system. A new way to assess organ dysfunction in the intensive care unit. ICU Scoring Group. JAMA 1996; 276:802–810
32. Centers for Medicare and Medicaid Services: QualityNet - Inpatient Hospitals Specifications Manual. Available at: https://www.qualitynet.org. Accessed October 5, 2018
33. Klompas M, Rhee C. The CMS sepsis mandate: Right disease, wrong measure. Ann Intern Med 2016; 165:517–518
34. Wise ME, Scott RD II, Baggs JM, et al. National estimates of central line-associated bloodstream infections in critical care patients. Infect Control Hosp Epidemiol 2013; 34:547–554
35. Klompas M, Anderson D, Trick W, et al; CDC Prevention Epicenters: The preventability of ventilator-associated events. The CDC Prevention Epicenters Wake Up and Breathe Collaborative. Am J Respir Crit Care Med 2015; 191:292–301
36. Klompas M, Li L, Kleinman K, et al. Associations between ventilator bundle components and outcomes. JAMA Intern Med 2016; 176:1277–1283
37. Kempker JA, Martin GS. Does sepsis case mix heterogeneity prevent outcome comparisons? Crit Care Med 2016; 44:2288–2289
38. Phillips GS, Osborn TM, Terry KM, et al. The New York sepsis severity score: Development of a risk-adjusted severity model for sepsis. Crit Care Med 2018; 46:674–683
39. Ford DW, Goodwin AJ, Simpson AN, et al. A severe sepsis mortality prediction model and score for use with administrative data. Crit Care Med 2016; 44:319–327
40. Kerlin MP, Weissman GE, Wonneberger KA, et al. Validation of administrative definitions of invasive mechanical ventilation across 30 intensive care units. Am J Respir Crit Care Med 2016; 194:1548–1552