Sepsis, a life-threatening organ dysfunction caused by a dysregulated host response to infection (1), is a leading cause of mortality and high healthcare costs (2,3). Until recently, sepsis was defined as a systemic inflammatory response syndrome (SIRS) to infection, with severe sepsis indicating organ dysfunction using arbitrary thresholds and septic shock indicating cardiovascular dysfunction (4). In 2016, Sepsis-3 reframed adult sepsis as an infection causing an acute increase of greater than or equal to 2 points on the Sequential Organ Failure Assessment (SOFA) organ dysfunction score and septic shock as sepsis with a vasopressor requirement and hyperlactatemia (1). A similar framework for pediatric sepsis has been proposed, but not yet established (5–7).
The practical application of sepsis consensus criteria across clinical, research, quality improvement (QI), and epidemiologic efforts is challenging. For example, because the 2005 International Pediatric Sepsis Consensus Conference (IPSCC) focused on criteria to aid in standardizing observational studies and clinical trials (4), existing consensus criteria fail to include many children diagnosed and treated for sepsis, particularly those with early or mild organ dysfunction (8,9). In addition, tracking cases of sepsis within and across hospitals using consensus criteria requires laborious chart reviews, with variable inter-rater reliability (9,10). Other data sources, particularly large administrative datasets commonly leveraged for research or epidemiologic surveillance, do not contain sufficient clinical data to apply consensus criteria. As a result, claims data have become a popular substitute. However, numerous studies have shown that claims data are neither reliable nor accurate in identifying episodes of sepsis (9,11).
The wide availability of electronically recorded clinical data provides an opportunity to apply a surveillance algorithm to identify episodes of sepsis within and across healthcare systems. Using electronic health record (EHR) data from 409 hospitals, Rhee et al (12) adapted clinical criteria from Sepsis-3 and the SOFA score to estimate the incidence of adult sepsis in the United States. Their surveillance approach yielded 69.7% sensitivity and 98.1% specificity and was overall more accurate than claims data. A similar algorithm has not been developed for pediatric sepsis. Therefore, we sought to derive and validate a surveillance algorithm to identify episodes of pediatric sepsis using routine clinical data available within the EHR and then apply the algorithm to study longitudinal trends in sepsis epidemiology.
MATERIALS AND METHODS
Study Design and Population
We performed a retrospective study using routine clinical data recorded in the EHR at a single academic children’s hospital. Patients treated in the emergency department (ED) or admitted to an inpatient ward at the Children’s Hospital of Philadelphia (CHOP) between January 1, 2011, and January 31, 2019, were eligible. To ensure an algorithm inclusive of the entire age spectrum cared for within the scope of pediatrics, we did not apply an upper age limit. We excluded patients in the neonatal ICU and cardiac center (including cardiology ward and cardiac ICU) because these populations were anticipated to have unique physiology that may require a more tailored surveillance approach. However, patients with cardiac conditions treated in other hospital locations were included. This study was approved by the CHOP Institutional Review Board (IRB) with a waiver of informed consent and assent (CHOP IRB number 18-015197).
Derivation of the Pediatric Sepsis Surveillance Algorithm
A team of medical informatics experts in the Department of Biomedical and Health Informatics Arcus program at CHOP worked with clinical sepsis experts to develop a surveillance algorithm. Our goal was to include criteria indicative of a sepsis computable phenotype (13). The Arcus program is a strategic initiative of the CHOP Research Institute created to link clinical and research data and offers secure data archives, patient privacy protection, and access to clinical data on greater than 2 million patients. Although the IPSCC defined pediatric severe sepsis as: 1) greater than or equal to 2 SIRS criteria, 2) suspected or confirmed invasive infection, and 3) cardiovascular dysfunction, acute respiratory distress syndrome, or greater than or equal to 2 other organ dysfunctions (4), we focused on the Sepsis-3 framework of infection with acute organ dysfunction without the qualifying term “severe sepsis” or the need to include SIRS criteria. This approach is consistent with calls for a revised definition of pediatric sepsis to align with Sepsis-3 (14).
We derived our surveillance algorithm in a cohort of patients identified as suspected sepsis by our sepsis QI program via electronic flags for antibiotics, blood culture, and/or fever (eTable 1, Supplemental Digital Content 1, http://links.lww.com/PCC/B124). As part of this QI work, starting September 1, 2017, all patients flagged for suspected sepsis were also manually adjudicated through chart review by trained clinicians to confirm or reject the presence of severe sepsis or septic shock based on IPSCC criteria (eTable 2, Supplemental Digital Content 1, http://links.lww.com/PCC/B124). Patients with nonbacterial causes of infection-mediated organ dysfunction (i.e., viral and fungal sepsis) were included as having severe sepsis or septic shock, as were patients in whom infection was suspected and treated by the clinical team in the absence of a definitive source of infection. Patients adjudicated as having severe sepsis or septic shock based on the 2005 IPSCC criteria between September 1, 2017, and June 30, 2018, served as the reference-standard for the derivation cohort.
For our surveillance algorithm, we a priori defined infection as a blood culture (ordered or collected) and sustained administration of new antibiotics, similar to Rhee et al (12). In order to capture patients for whom a blood culture may have been obtained during initial resuscitation at a referring facility, we allowed for transfer to CHOP from a referring facility to substitute for a blood culture. Patients were required to receive at least 4 consecutive days of antibiotics (although not necessarily the same antibiotics on all days), starting within ± 2 calendar days of the blood culture or transfer, with at least one dose given parenterally. The first antibiotic was required to be “new,” defined as not having been administered within the prior 7 days. Four days of antibiotics were required in order to exclude patients for whom bacterial infection was suspected but not confirmed. Fewer than 4 days of antibiotics were allowed for patients who died or were discharged to hospice before 4 days.
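The infection criterion above reduces to a small rule over calendar dates. The sketch below is an illustrative reimplementation, not the study's Arcus code; the function name and input representation are our own assumptions:

```python
from datetime import date, timedelta

def meets_infection_criterion(index_date, abx_days, first_abx_new,
                              any_parenteral, end_of_stay=None):
    """Sketch of the surveillance infection criterion.

    index_date: calendar date of the blood culture (or hospital arrival,
        for transfers, which may substitute for a culture).
    abx_days: sorted calendar dates on which any antibiotic was given.
    first_abx_new: True if the first antibiotic was not given in the prior 7 days.
    any_parenteral: True if at least one dose was parenteral.
    end_of_stay: date of death or hospice discharge, if any (permits < 4 days).
    """
    if not abx_days or not first_abx_new or not any_parenteral:
        return False
    start = abx_days[0]
    # Antibiotics must start within +/- 2 calendar days of the index event.
    if abs((start - index_date).days) > 2:
        return False
    # Count consecutive calendar days of antibiotics from the start date
    # (the drugs need not be identical day to day, only the days consecutive).
    run = 1
    for prev, cur in zip(abx_days, abx_days[1:]):
        if (cur - prev).days == 1:
            run += 1
        else:
            break
    if run >= 4:
        return True
    # Fewer than 4 days allowed if the patient died or went to hospice first.
    return end_of_stay is not None and (end_of_stay - start).days < 4
```

A patient with antibiotics on 4 consecutive days starting the day of the culture qualifies; one stopped after 2 days qualifies only if death or hospice discharge intervened.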
We defined organ dysfunction based on criteria from the pediatric SOFA (pSOFA) score reported by Matics and Sanchez-Pinto (5), with modifications derived from the adult sepsis surveillance criteria (12), another well-validated pediatric organ dysfunction score (Pediatric Logistic Organ Dysfunction 2 score) (15,16), and three iterative revisions among our local sepsis experts. Sepsis-3 requires an increase in SOFA of greater than or equal to 2 points, which equates to moderate dysfunction in a single organ system or mild dysfunction in two or more organ systems (1). Similar to Rhee et al (12), we required at least one moderate organ dysfunction within ± 2 calendar days of blood culture or transfer and selected criteria readily available within the EHR that corresponded to pSOFA greater than or equal to 2 points in each organ system (except renal; see Table 1). Although not in SOFA or pSOFA, we included hyperlactatemia because it is a common manifestation of cardiovascular dysfunction in septic shock, is associated with mortality, and demonstrated utility in the adult sepsis surveillance definition (12,17,18). We selectively included clinical variables likely to be available in the EHR for patients across a variety of hospitals. When data were not available in the EHR for a criterion, the criterion was assumed to be normal.
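The organ dysfunction screen amounts to a windowed lookup: any day on which a moderate-dysfunction criterion was met, within ± 2 calendar days of the index event, qualifies that organ system. A minimal sketch, with a data representation assumed for illustration (not taken from the study):

```python
from datetime import date

def qualifying_organ_dysfunctions(index_date, dysfunction_days):
    """dysfunction_days: (calendar_date, organ_system) pairs, one per day a
    moderate-dysfunction criterion (equivalent to pSOFA >= 2 points) was met.
    Criteria with no EHR data contribute no pairs, mirroring the rule that
    missing data are assumed normal. Returns the organ systems that qualify
    within the +/- 2 calendar-day surveillance window around the index event.
    """
    return {organ for day, organ in dysfunction_days
            if abs((day - index_date).days) <= 2}
```

Under the surveillance algorithm, an episode requires the infection criterion plus at least one organ system in this set.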
Each iterative review focused on optimizing both sensitivity and specificity of our surveillance criteria relative to the IPSCC reference-standard, with changes to organ dysfunction criteria based on manual review of the reasons for false-negatives and false-positives. For example, after the second iterative review, we excluded bilirubin because isolated hyperbilirubinemia as the defining organ dysfunction is rare in pediatric sepsis (19,20), and this criterion led to substantial false-positives from nonseptic hepatic dysfunction. After the third iterative review, we added fluid resuscitation greater than 60 mL/kg to capture children with fluid-responsive shock who were otherwise classified as “false-negatives” (21). We ceased iterative changes to the surveillance algorithm once it was deemed unlikely that we could further reduce the number of false-positives or false-negatives without including criteria targeted to single patients.
Validation of the Pediatric Sepsis Surveillance Algorithm
Once derived, the surveillance algorithm was tested in a separate validation cohort that included patients adjudicated as having severe sepsis or septic shock based on the 2005 IPSCC criteria between July 1, 2018, and January 31, 2019. We did not a priori specify targets for the performance of the surveillance algorithm in the validation cohort.
Data were collected about demographics, chronic comorbid conditions (CCCs) (22), admission to the PICU, and mortality. Onset of sepsis was identified as the earliest of either the blood culture order or collection time, time of hospital arrival (for transfers), or first new qualifying antibiotic order. Sepsis was deemed community-acquired if onset occurred within 2 days of initial hospitalization or hospital-acquired if onset occurred on or after day 3. A patient could account for greater than 1 sepsis episode, either within the same or across different hospital encounters. However, to differentiate clinical decompensation from a new episode of sepsis, we required at least 14 days to elapse from sepsis onset before classifying a new sepsis episode. Adjudicated cases confirmed to have sepsis that were not identified by the surveillance algorithm (false-negatives) and a random subset of the false-positives in the derivation cohort were manually reviewed by two authors to determine the cause of misclassification, with a third-party adjudication for discrepancies.
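The episode rules above (earliest qualifying timestamp for onset, a 2-day cutoff for community- vs hospital-acquired, and a 14-day washout before a new episode) can be sketched as follows; the function name and data shapes are illustrative assumptions, not the study's code:

```python
from datetime import datetime, timedelta

def classify_episodes(candidate_onsets, admission_time):
    """candidate_onsets: datetimes, each the earliest of blood-culture
    order/collection, hospital arrival (for transfers), or first new
    qualifying antibiotic order for a candidate event. Onsets within
    14 days of a counted onset are treated as decompensation of the
    same episode rather than a new episode."""
    episodes = []
    for onset in sorted(candidate_onsets):
        if episodes and onset - episodes[-1]["onset"] < timedelta(days=14):
            continue  # within the 14-day washout: same episode
        hosp_day = (onset.date() - admission_time.date()).days + 1
        episodes.append({
            "onset": onset,
            # onset within the first 2 hospital days -> community-acquired;
            # on or after day 3 -> hospital-acquired
            "origin": "community" if hosp_day <= 2 else "hospital",
        })
    return episodes
```

For example, candidate onsets on hospital days 1, 6, and 21 yield two episodes: a community-acquired episode on day 1 (day 6 falls inside its washout) and a hospital-acquired episode on day 21.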
Analyses were performed using R Version 3.3.3 (R Foundation, Vienna, Austria) and STATA 12.0 (StataCorp, College Station, TX). Descriptive data are summarized as medians (interquartile range) or percentages. Test characteristics of the surveillance algorithm were calculated for derivation and validation cohorts with 95% CIs using Wald estimates. We then determined the incidence and mortality of pediatric sepsis episodes across the 8-year study period using the surveillance algorithm. We used multivariable Poisson and logistic regression models to assess for longitudinal changes in incidence and mortality over time, respectively, adjusting for available patient-level characteristics likely to influence changes in sepsis epidemiology over time. Statistical significance was defined as a p value of less than 0.05.
RESULTS
Across the entire 8-year study period, derivation period, and validation period, there were 832,550, 93,897, and 64,388 eligible hospital encounters, respectively. The characteristics for the entire study population and derivation and validation periods were similar (eTable 3, Supplemental Digital Content 1, http://links.lww.com/PCC/B124). The derivation and validation cohorts were derived from episodes flagged as suspected sepsis using the QI electronic criteria within mutually exclusive time periods (eFig. 1, Supplemental Digital Content 2, http://links.lww.com/PCC/B125; legend, Supplemental Digital Content 1, http://links.lww.com/PCC/B124).
Within the derivation period, there were 1,065 suspected sepsis episodes flagged by the QI electronic criteria, of which 187 were confirmed sepsis episodes after manual adjudication. Using these flagged/adjudicated episodes as the reference-standard, the surveillance algorithm yielded a sensitivity of 78% (95% CI, 72–84%), specificity of 76% (95% CI, 74–79%), positive predictive value (PPV) of 41% (95% CI, 36–46%), and negative predictive value (NPV) of 94% (95% CI, 92–96%) (eFig. 2a, Supplemental Digital Content 3, http://links.lww.com/PCC/B126; legend, Supplemental Digital Content 1, http://links.lww.com/PCC/B124).
Within the validation period, there were 361 suspected sepsis episodes flagged by the QI electronic criteria, of which 85 were confirmed sepsis episodes after manual adjudication. Using these flagged/adjudicated episodes as the reference-standard, the surveillance algorithm was validated to have a sensitivity of 84% (95% CI, 77–92%), specificity of 65% (95% CI, 59–70%), PPV of 43% (95% CI, 35–50%), and NPV of 93% (95% CI, 90–97%) (eFig. 2b, Supplemental Digital Content 3, http://links.lww.com/PCC/B126; legend, Supplemental Digital Content 1, http://links.lww.com/PCC/B124).
All 42 episodes adjudicated as sepsis in the derivation cohort that were missed by the surveillance algorithm (“false-negatives”) and a random selection of 50 (24%) of the 207 episodes adjudicated as not sepsis that were identified as having sepsis by the surveillance algorithm (“false-positives”) were manually reviewed (Table 2). The most common reasons for surveillance algorithm “false-negatives” were less than 4 days of antibiotics (usually due to viral etiology), slow fluid resuscitation over greater than 7 hours, and incorrect adjudication (“misclassification error”). The “false-positives” were largely due to single organ dysfunction that met surveillance criteria, which were more inclusive than the IPSCC criteria. That some patients were identified with infection and at least one acute organ dysfunction but not adjudicated as having severe sepsis or septic shock by the 2005 IPSCC criteria indicates that the surveillance algorithm did not completely align with the IPSCC definition of sepsis.
The hospital-wide incidence of pediatric sepsis episodes, including all 832,550 ED and inpatient visits, across the entire 8-year study period using the final surveillance algorithm was 6.9 episodes per 1,000 hospital encounters (0.69%; 95% CI, 0.67–0.71%). Among 207,368 hospital admissions (excluding ED visits that did not result in admission), the incidence of sepsis was 27.8 episodes per 1,000 hospital admissions (2.8%; 95% CI, 2.7–2.9%). The incidence of sepsis among all hospital encounters increased over time after controlling for age, sex, race, and number of CCC categories (adjusted incidence rate ratio per year 1.07; 95% CI, 1.06–1.08; p < 0.001) (Fig. 1; and eTable 4, Supplemental Digital Content 1, http://links.lww.com/PCC/B124). Although lactate measurement was increasingly emphasized during the study period, there was no change in the proportion of sepsis episodes identified by the hyperlactatemia criterion over time (eFig. 3, Supplemental Digital Content 4, http://links.lww.com/PCC/B127; legend, Supplemental Digital Content 1, http://links.lww.com/PCC/B124). Patient characteristics for all sepsis episodes are shown in Table 3.
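For scale, an incidence rate ratio per year compounds multiplicatively across the study window. The quick calculation below is an illustration of that interpretation only, not an adjusted estimate from the study's models; treating the 2011–2018 calendar years as 7 year-over-year steps is our assumption:

```python
irr_per_year = 1.07  # reported adjusted incidence rate ratio per year (point estimate)

# Cumulative multiplier implied across 7 year-over-year steps (assumed window).
cumulative = irr_per_year ** 7
print(f"Implied cumulative incidence multiplier: {cumulative:.2f}")
```

In other words, a 7% annual rise, sustained over the study period, implies roughly a 60% higher incidence by the final year relative to the first.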
Overall mortality was 6.7% (95% CI, 6.1–7.3%). Mortality did not change over time after accounting for age, sex, race, community- versus hospital-acquired sepsis onset, presence of CCC, or number of organ dysfunctions (adjusted odds ratio per year 0.98; 95% CI, 0.93–1.03; p = 0.38) (Fig. 1; and eTable 5, Supplemental Digital Content 1, http://links.lww.com/PCC/B124). The distribution and number of organ dysfunctions, with associated mortality, among sepsis episodes identified using the surveillance algorithm are shown in Figure 2.
DISCUSSION
We developed a surveillance algorithm to identify episodes of pediatric sepsis using routine clinical data available within the EHR. In both derivation and validation cohorts, the surveillance algorithm achieved comparable sensitivity (pediatric derivation 78%, validation 84% vs adult 69.7%) to a recent electronic surveillance definition for adult sepsis (12). Although our specificity was lower than that reported for adults (pediatric derivation 76%, validation 65% vs adult 98.1%), our reference-standard focused on suspected cases of sepsis rather than randomly selected hospitalizations with a much lower likelihood of including “sepsis-like” cases. Our surveillance algorithm could be applied retrospectively to generate 8 years of epidemiological data about pediatric sepsis episodes using consistent clinical criteria, without relying on laborious and expensive manual chart review or on claims data that suffer from variability across providers and time.
A reliable method to identify episodes of pediatric sepsis has been elusive. We, and others, have relied on a combination of clinician self-report, medical record review, “home-grown” electronic flags, and billing codes to identify children with sepsis (3,23–25). None of these methods provides an objective, efficient, and reliable approach, making epidemiologic comparisons across time and location difficult. For example, a review of 94 pediatric sepsis studies found substantial variability in patient characteristics between studies (26). The extent to which this variability reflected differences in local case-mix versus heterogeneity in the criteria used to identify sepsis was unclear. Furthermore, there is a known disconnect between clinician diagnosis, fulfillment of consensus criteria, and assignment of billing codes for pediatric sepsis such that none of these methods provides a clear standard (8,9,11).
We chose an approach that parallels the Sepsis-3 framework rather than the 2005 IPSCC to ensure relevance to forthcoming pediatric sepsis updates (14) and to be consistent with the adult sepsis surveillance algorithm (12). Nonetheless, the best reference-standard for pediatric sepsis is currently the 2005 IPSCC criteria (4). Key differences are that the IPSCC criteria include: 1) clinician suspicion of infection irrespective of antibiotics, 2) greater than or equal to 2 SIRS criteria, and 3) a complex set of criteria defining dysfunction across six organ systems. However, the 2005 IPSCC criteria were intended to identify a subset of children with sepsis at high risk of mortality for clinical trials, while our surveillance algorithm was designed for retrospective surveillance of pediatric sepsis episodes with high reliability and efficiency (27). Because we compared a novel surveillance approach based on Sepsis-3 to an IPSCC-based reference-standard with a different framework, we expected differences in the patients identified with sepsis. Furthermore, we applied an assumption that clinically relevant sepsis can occur in children without meeting the 2005 IPSCC consensus definitions. For example, we deemed it acceptable that children treated with greater than or equal to 4 days of antibiotics with isolated hyperlactatemia or respiratory dysfunction requiring invasive or noninvasive mechanical ventilation be included as having sepsis for the purposes of epidemiologic surveillance. This approach is consistent with clinical guidelines outlining less stringent criteria to clinically diagnose pediatric sepsis than IPSCC criteria, as well as prior research showing that clinicians commonly diagnose and treat sepsis that has not met IPSCC criteria (8,9,21).
Ours is not the first attempt at a medical informatics-based approach to identify pediatric sepsis episodes. Matics and Sanchez-Pinto (5) also applied the Sepsis-3 framework to 8,711 PICU encounters from a single institution, substituting age-based values into a pSOFA score. Notably, their pSOFA score was not developed or validated in children prior to utilization; rather, it simply adopted the same criteria used in adults. However, some features of SOFA may not be useful in children. For example, the Glasgow Coma Score is not reliably scored or consistently recorded in pediatric hospitalizations (28), and hepatic dysfunction is rarely the only organ dysfunction in pediatric sepsis (19,20). Furthermore, Matics and Sanchez-Pinto (5) required hyperlactatemia to define septic shock—similar to Sepsis-3—but lactate is not always measured. Finally, it is not clear how accurately pSOFA would identify sepsis in patients outside the PICU, such as those with fluid-responsive septic shock. Our efforts expanded the work of Matics and Sanchez-Pinto (5) to derive and validate an algorithm applicable to children in emergency, inpatient, and intensive care settings.
Applying our surveillance algorithm to 8 years of EHR data revealed a rising incidence of sepsis that is less sensitive to changes in diagnosis and cannot be attributed to variation in billing practices. This distinction is critical because, to this point, it has not been possible to disentangle the extent to which the reported rise in pediatric sepsis reflects intensified sepsis recognition, changes to coding practices, or a true increase in disease (3,23,24). Notably, the 2.8% incidence of sepsis among admissions was between prior estimates that used different claims-based strategies to identify sepsis (23,29). Because strategies combining billing codes for infection and organ dysfunction have demonstrated trouble with specificity while sepsis-specific codes lack sensitivity (11,30,31), our surveillance approach seeks to achieve overall better accuracy than claims data. Rhee et al (12) demonstrated that a similar EHR-based surveillance approach more accurately identified adult sepsis episodes than claims data; a similar comparison is warranted for our pediatric sepsis surveillance algorithm. Finally, the stable mortality we observed over time also contrasts with claims-based reports that mortality has trended lower over time (3,23,24). Rather, our data indicate that the temporal rise in sepsis incidence translates into a higher absolute number of children dying, a finding with key public health implications.
There are several limitations. First, the surveillance algorithm excluded children not treated with 4 days of inpatient antibiotics, such as those with viral sepsis. Given that children with viral sepsis without bacterial co-infection have a low risk of death (32), we felt this omission was acceptable. Second, the applicability of this algorithm to the neonatal ICU and cardiac center requires further study. Third, we did not include hepatic or neurologic dysfunction. Hepatic dysfunction was excluded because 1) this criterion identified many children with primary liver disease treated for mild infections and 2) most children with sepsis-associated hepatic dysfunction had other qualifying organ dysfunctions. Neurologic dysfunction was excluded because a reliable indicator was not ascertainable from the EHR. Fourth, because our surveillance algorithm was based on Sepsis-3 but compared with the 2005 IPSCC criteria, and because the number of suspected sepsis episodes flagged by the QI electronic criteria varied between the derivation and validation cohorts, the test characteristics should be interpreted with caution. Thus, an independent validation of our algorithm is needed at other institutions and in comparison to other criteria, such as claims data. Fifth, although the incidence was measured independently from clinician diagnosis and claims data, we cannot exclude the possibility that clinicians used organ dysfunction-defining therapies (e.g., vasoactives, mechanical ventilation) more aggressively over time. Sixth, this algorithm is limited to retrospective data only, and thus is not intended to assist in real-time clinical recognition of children with sepsis. Last, the epidemiologic estimates reflect the experience at a single institution in one geographic and healthcare setting.
However, broader application of this algorithm, if validated in a multicenter study, could provide an objective, efficient, and reliable method for epidemiologic surveillance of pediatric sepsis.
CONCLUSIONS
An algorithm that uses routine clinical data available within the EHR can provide an objective, efficient, and reliable method for pediatric sepsis surveillance across emergency and inpatient hospital settings. Applying this algorithm to 8 years of data from a single center demonstrated an increase in sepsis incidence and stable mortality, free from the influence of changes in diagnosis or billing practices.
REFERENCES
1. Singer M, Deutschman CS, Seymour CW, et al. The third international consensus definitions for sepsis and septic shock (Sepsis-3). JAMA 2016; 315:801–810
2. Angus DC, Linde-Zwirble WT, Lidicker J, et al. Epidemiology of severe sepsis in the United States: Analysis of incidence, outcome, and associated costs of care. Crit Care Med 2001; 29:1303–1310
3. Hartman ME, Linde-Zwirble WT, Angus DC, et al. Trends in the epidemiology of pediatric severe sepsis. Pediatr Crit Care Med 2013; 14:686–693
4. Goldstein B, Giroir B, Randolph A; International Consensus Conference on Pediatric Sepsis: International pediatric sepsis consensus conference: Definitions for sepsis and organ dysfunction in pediatrics. Pediatr Crit Care Med 2005; 6:2–8
5. Matics TJ, Sanchez-Pinto LN. Adaptation and validation of a pediatric sequential organ failure assessment score and evaluation of the Sepsis-3 definitions in critically ill children. JAMA Pediatr 2017; 171:e172352
6. Schlapbach LJ, Straney L, Bellomo R, et al. Prognostic accuracy of age-adapted SOFA, SIRS, PELOD-2, and qSOFA for in-hospital mortality among children with suspected infection admitted to the intensive care unit. Intensive Care Med 2018; 44:179–188
7. Weiss SL, Deutschman CS. Are septic children really just “septic little adults”? Intensive Care Med 2018; 44:392–394
8. Weiss SL, Fitzgerald JC, Maffei FA, et al.; SPROUT Study Investigators and Pediatric Acute Lung Injury and Sepsis Investigators Network: Discordant identification of pediatric severe sepsis by research and clinical definitions in the SPROUT international point prevalence study. Crit Care 2015; 19:325
9. Weiss SL, Parker B, Bullock ME, et al. Defining pediatric sepsis by different criteria: Discrepancies in populations and implications for clinical practice. Pediatr Crit Care Med 2012; 13:e219–e226
10. Evans IVR, Phillips GS, Alpern ER, et al. Association between the New York sepsis care mandate and in-hospital mortality for pediatric sepsis. JAMA 2018; 320:358–367
11. Balamuth F, Weiss SL, Hall M, et al. Identifying pediatric severe sepsis and septic shock: Accuracy of diagnosis codes. J Pediatr 2015; 167:1295–1300.e4
12. Rhee C, Dantes R, Epstein L, et al.; CDC Prevention Epicenter Program: Incidence and trends of sepsis in US hospitals using clinical vs claims data, 2009–2014. JAMA 2017; 318:1241–1249
13. Martin B, Bennett TD. Sepsis in the service of observational research. Crit Care Med 2019; 47:303–305
14. Schlapbach LJ. Time for Sepsis-3 in children? Pediatr Crit Care Med 2017; 18:805–806
15. Leclerc F, Duhamel A, Deken V, et al.; Groupe Francophone de Réanimation et Urgences Pédiatriques (GFRUP): Can the Pediatric Logistic Organ Dysfunction-2 score on day 1 be used in clinical criteria for sepsis in children? Pediatr Crit Care Med 2017; 18:758–763
16. Leteurtre S, Duhamel A, Salleron J, et al.; Groupe Francophone de Réanimation et d’Urgences Pédiatriques (GFRUP): PELOD-2: An update of the PEdiatric logistic organ dysfunction score. Crit Care Med 2013; 41:1761–1773
17. Scott HF, Brou L, Deakyne SJ, et al. Lactate clearance and normalization and prolonged organ dysfunction in pediatric sepsis. J Pediatr 2016; 170:149–55.e1
18. Scott HF, Brou L, Deakyne SJ, et al. Association between early lactate levels and 30-day mortality in clinically suspected sepsis in children. JAMA Pediatr 2017; 171:249–255
19. Lin JC, Spinella PC, Fitzgerald JC, et al.; Sepsis Prevalence, Outcomes, and Therapy Study Investigators: New or progressive multiple organ dysfunction syndrome in pediatric severe sepsis: A sepsis phenotype with higher morbidity and mortality. Pediatr Crit Care Med 2017; 18:8–16
20. Tantaleán JA, León RJ, Santos AA, et al. Multiple organ dysfunction syndrome in children. Pediatr Crit Care Med 2003; 4:181–185
21. Davis AL, Carcillo JA, Aneja RK, et al. The American College of Critical Care Medicine clinical practice parameters for hemodynamic support of pediatric and neonatal septic shock: Executive summary. Pediatr Crit Care Med 2017; 18:884–890
22. Feudtner C, Feinstein JA, Zhong W, et al. Pediatric complex chronic conditions classification system version 2: Updated for ICD-10 and complex medical technology dependence and transplantation. BMC Pediatr 2014; 14:199
23. Balamuth F, Weiss SL, Neuman MI, et al. Pediatric severe sepsis in U.S. children’s hospitals. Pediatr Crit Care Med 2014; 15:798–805
24. Ruth A, McCracken CE, Fortenberry JD, et al. Pediatric severe sepsis: Current trends and outcomes from the pediatric health information systems database. Pediatr Crit Care Med 2014; 15:828–838
25. Weiss SL, Balamuth F, Hensley J, et al. The epidemiology of hospital death following pediatric severe sepsis: When, why, and how children with sepsis die. Pediatr Crit Care Med 2017; 18:823–830
26. Tan B, Wong JJ, Sultana R, et al. Global case-fatality rates in pediatric severe sepsis and septic shock: A systematic review and meta-analysis. JAMA Pediatr 2019; 173:352–362
27. Seymour CW, Coopersmith CM, Deutschman CS, et al. Application of a framework to assess the usefulness of alternative sepsis criteria. Crit Care Med 2016; 44:e122–e130
28. Kirschen MP, Snyder M, Winters M, et al. Survey of bedside clinical neurologic assessments in U.S. PICUs. Pediatr Crit Care Med 2018; 19:339–344
29. Odetola FO, Gebremariam A, Freed GL. Patient and hospital correlates of clinical outcomes and resource utilization in severe pediatric sepsis
30. Iwashyna TJ, Odden A, Rohde J, et al. Identifying patients with severe sepsis using administrative claims: Patient-level validation of the Angus implementation of the International Consensus Conference Definition of Severe Sepsis. Med Care 2014; 52:e39–e43
31. Whittaker SA, Mikkelsen ME, Gaieski DF, et al. Severe sepsis cohorts derived from claims-based strategies appear to be biased toward a more severely ill patient population. Crit Care Med 2013; 41:945–953
32. Hall MW, Geyer SM, Guo CY, et al.; Pediatric Acute Lung Injury and Sepsis Investigators (PALISI) Network PICFlu Study Investigators: Innate immune function and mortality in critically ill children with influenza: A multicenter study. Crit Care Med 2013; 41:224–236