Background: In 1992, the first consensus definition of severe sepsis was published. Subsequent epidemiologic estimates were collected using administrative data, but ongoing discrepancies in the definition of severe sepsis produced large differences in estimates.
Objectives: We sought to describe the variation in incidence and mortality of severe sepsis in the United States using four methods of database abstraction. We hypothesized that different methodologies of capturing cases of severe sepsis would result in disparate estimates of incidence and mortality.
Design, Setting, Participants: Using a nationally representative sample, four previously published methods (Angus et al, Martin et al, Dombrovskiy et al, and Wang et al) were used to identify cases of severe sepsis over a 6-year period (2004–2009). In addition, the use of the newer International Classification of Diseases, 9th Revision (ICD-9), sepsis codes was compared with the previous methods.
Measurements: Annual national incidence and in-hospital mortality of severe sepsis.
Results: The average annual incidence varied by as much as 3.5-fold depending on the method used, ranging from 894,013 (300/100,000 population) to 3,110,630 (1,031/100,000) using the methods of Dombrovskiy et al and Wang et al, respectively. The average annual increase in the incidence of severe sepsis was similar (13.0% to 13.3%) across all methods. In-hospital mortality ranged from 14.7% to 29.9% using the abstraction methods of Wang et al and Dombrovskiy et al, respectively. With all methods, in-hospital mortality decreased across the 6-year period (35.2% to 25.6% [Dombrovskiy et al] and 17.8% to 12.1% [Wang et al]). Use of ICD-9 sepsis codes more than doubled over the 6-year period (158,722 to 489,632 [995.92, severe sepsis]; 131,719 to 303,615 [785.52, septic shock]).
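The population-based rates reported above are crude annual case counts divided by a population denominator. A minimal sketch of that arithmetic, assuming (for illustration only; the abstract does not state the denominator) a US population of roughly 298 million:

```python
# Illustrative sketch: deriving a crude incidence rate per 100,000 population
# from an annual case count. The case count below is from the abstract
# (Dombrovskiy et al method); the ~298 million population denominator is an
# assumption chosen to reproduce the reported rate of ~300/100,000.

def incidence_per_100k(cases: int, population: float) -> float:
    """Crude annual incidence rate per 100,000 population."""
    return cases / population * 100_000

rate = incidence_per_100k(894_013, 298_000_000)
print(round(rate))  # ~300 per 100,000, matching the reported estimate
```

The same arithmetic applied to the Wang et al count (3,110,630 cases) yields the roughly 3.5-fold higher rate of about 1,031/100,000 reported above.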
Conclusion: There is substantial variability in incidence and mortality of severe sepsis depending on the method of database abstraction used. A uniform, consistent method is needed for use in national registries to facilitate accurate assessment of clinical interventions and outcome comparisons between hospitals and regions.
1Department of Emergency Medicine, University of Pennsylvania, Philadelphia, PA.
2Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, PA.
3The Leonard Davis Institute of Health Economics, University of Pennsylvania, Philadelphia, PA.
*See also p. 1361.
Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal).
Supported, in part, by an unrestricted educational and research grant from the Beatrice Wind Gift Fund to Dr. Gaieski and by a grant from the National Institutes of Health to Dr. Carr.
Dr. Gaieski had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. Dr. Gaieski, Dr. Edwards, Mr. Kallan, and Dr. Carr helped in study concept and design. Mr. Kallan and Dr. Carr helped in data acquisition. Dr. Gaieski, Dr. Edwards, Mr. Kallan, and Dr. Carr performed the analysis and interpretation of data. Drs. Gaieski and Edwards drafted the manuscript. Drs. Gaieski and Carr performed critical revision of the manuscript for important intellectual content. Mr. Kallan performed statistical analysis. Dr. Gaieski obtained funding. Dr. Gaieski, Mr. Kallan, and Dr. Carr provided administrative, technical, or material support. Dr. Gaieski performed study supervision.
The Beatrice Wind Gift Fund had no role in the conception of this study, the collection or analysis of the data, or the writing, revision, and submission of the manuscript. No one from the Beatrice Wind Gift Fund has reviewed the manuscript prior to submission.
The authors have not disclosed any potential conflicts of interest.
For information regarding this article, E-mail: email@example.com