Linking Big Data and Prediction Strategies

Tools, Pitfalls, and Lessons Learned

Yang, Shiming, PhD1,2; Stansbury, Lynn G., MD, MPH2; Rock, Peter, MD, MBA1,2; Scalea, Thomas, MD2,3; Hu, Peter F., PhD1,2,3

doi: 10.1097/CCM.0000000000003739
Concise Definitive Review

Objectives: Modern critical care amasses unprecedented amounts of clinical data—so-called “big data”—on a minute-by-minute basis. Innovative processing of these data has the potential to revolutionize clinical prognostics and decision support in the care of the critically ill but also forces clinicians to depend on new and complex tools of which they may have limited understanding and over which they have little control. This concise review aims to provide bedside clinicians with ways to think about common methods being used to extract information from clinical big datasets and to judge the quality and utility of that information.

Data Sources: We searched the free-access search engines PubMed and Google Scholar using the MeSH terms “big data,” “prediction,” and “intensive care,” with iterations of a range of additional potentially associated factors, along with published bibliographies, to find papers illustrating key points in the structuring and analysis of clinical “big data,” with special focus on outcomes prediction and major clinical concerns in critical care.

Study Selection: Three reviewers independently screened preliminary citation lists.

Data Extraction: Summary data were tabulated for review.

Data Synthesis: To date, most relevant big data research has focused on development of and attempts to validate patient outcome scoring systems and has yet to fully make use of the potential for automation and novel uses of continuous data streams such as those available from clinical care monitoring devices.

Conclusions: Realizing the potential for big data to improve critical care patient outcomes will require unprecedented team building across disparate competencies. It will also require clinicians to develop statistical awareness and thinking as yet another critical judgment skill they bring to their patients’ bedsides and to the array of evidence presented to them about their patients over the course of care.

1Shock Trauma and Anesthesiology Research Center, University of Maryland School of Medicine, Baltimore, MD.

2Shock Trauma and Anesthesiology Research Center, University of Maryland School of Medicine, Baltimore, MD.

3R Adams Cowley Shock Trauma Center, Baltimore, MD.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website (http://journals.lww.com/ccmjournal).

Dr. Rock’s institution received funding from the National Institutes of Health, Zygood, and Johns Hopkins University, and he received other support from the American Board of Anesthesiology and the Department of Defense. The remaining authors have disclosed that they do not have any potential conflicts of interest.

For information regarding this article, E-mail: phu@som.umaryland.edu

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

Modern ICUs generate patient care data with sufficient volume, speed, and diversity to overwhelm non–electronic management and analysis systems. In response, “big data” research is expanding so rapidly that its scope is almost too fast-moving to capture and assess. PubMed shows geometrically accelerating increases in big data articles over the last 10 years; Google Scholar evolves even as one searches it. The phenomenon is perhaps best summed up by Sanchez-Pinto et al (1): “Like most emerging technologies, the products of data science research in critical care will undoubtedly go through a series of hype and disillusionment cycles before becoming accepted, proven assets in the study and care of critically ill patients.”

Many excellent recent reviews provide useful perspectives for clinicians trying to sort through this deluge (1–7). Our aim in this review is to add our experience and thinking about the epidemiologic and statistical aspects of big data research as a window through which clinicians can make critical judgments about and best use of this novel evidence stream.

We started with a broad look at peer-reviewed scientific and clinical literature accessible via free-access search engines. Our objective was to identify areas of thought and concern, not to identify work that would support meta-analysis (8,9). This search was augmented by exploration of selected bibliographies from these references. To the roughly 300 articles thus identified as of interest, we brought our mutual and personal histories of practice and research in advanced analytics and computer science (S.Y.), primary care and trauma epidemiology (L.G.S.), anesthesia and critical care (P.R.), trauma surgery and critical care (T.S.), and critical care electronics systems engineering and computer science (P.F.H.) (see also Supplemental Fig. 1 [Supplemental Digital Content 1, http://links.lww.com/CCM/E496] and Supplemental References [Supplemental Digital Content 2, http://links.lww.com/CCM/E497] for search terms, review criteria, and primary review bibliography).

The “big” of big data is some combination of acquisition speed, source diversity, and/or large input volume that must be stored, collated, and processed (10). It is therefore inseparable from electronic data gathering, record keeping, and advanced computational support. Most of this activity takes place beyond the direct control of clinicians and, increasingly, in algorithms and/or other proprietary systems opaque to their users. However, our premise in this review is that the basic tenets and vulnerabilities of these systems are knowable and can be used by bedside clinicians to support an informed skepticism when assessing published results and recommendations or themselves assembling interdisciplinary clinical or research teams.

The first healthcare science benefit of big data research is the prospect of “precision medicine…the right treatment for the right patient at the right time” (11). The second is the possibility of unbiased epidemiology, that is, assessment of patterns of illness and efficacy of intervention in cohorts so large that they truly represent populations (12). The unprecedented size of big data pools has important potential to support clinical outcomes prediction models (13). However, understanding the challenges posed by these hitherto unimaginably large inputs is essential to making informed judgments about the quality of information being presented (14).

DATA POOLS: THE TWO FACES OF BIG DATA

All aggregated clinical data—even the solo case report—have two faces, two data pools whose interactions must be assessed in systematic ways if we are to derive valid generalizable or decision support information from them (15). These pools are the number of individual patients and the array of particular patient variables or features available. As detailed below and in Table 1, for the purposes of this review, we use the uppercase abbreviations N and P for a relatively large number of individual patients (N) or particular patient variables (P) in a given data pool and the lowercase abbreviations n and p for relatively small examples of those respective data pools. For example, in a solo case report, the number of patients is 1, and the particular patient variables being described are usually relatively few. We would describe this as an np data pool. But a case report on an individual with a genomic variant may describe particular variable features numbering in the millions. This would be an nP data pool, and the most recent reporting on -omics data follows this nP pattern. As yet, few published studies can be classified as NP, that is, population-based studies that also include, for example, -omics data or continuous electronic monitoring input data. However, as we discuss later, the development of machine learning (ML) tools will facilitate such work.

Table 1 illustrates the interplay of the four elements that create the two faces of big data: a matrix of large and small numbers of patients (one face) versus large and small numbers of particular patient variables (the other face). Most extant clinical literature is np, that is, patient cohorts of fewer than a thousand individuals (n) and particular variables numbering at best in the several hundred (p). Currently, most of what is being reported as big data research sorts into the other two boxes, that is, Np (typically registry or other electronic medical record [EMR] data) or nP (commonly -omics but also some of the newer work analyzing continuous digital or waveform monitoring inputs). The potentials and vulnerabilities of the two (i.e., Np vs nP) are distinctive and will be discussed in greater detail below.
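
To make the taxonomy concrete, the short sketch below (our own illustration, not drawn from the cited literature; the cutoffs of 1,000 patients and 1,000 variables are arbitrary assumptions, not published definitions) labels a data pool by its two faces from its row and column counts.

```python
# A minimal sketch (our own illustration; the cutoffs of 1,000 patients and
# 1,000 variables are arbitrary assumptions, not published definitions)
# labeling a data pool's "two faces" as np, Np, nP, or NP.

def data_pool_class(n_patients: int, n_variables: int,
                    patient_cutoff: int = 1_000,
                    variable_cutoff: int = 1_000) -> str:
    """Return the Table 1 cell for a data pool of the given shape."""
    patients = "N" if n_patients >= patient_cutoff else "n"
    variables = "P" if n_variables >= variable_cutoff else "p"
    return patients + variables

print(data_pool_class(1, 20))            # np: solo case report
print(data_pool_class(100_000, 50))      # Np: registry/EMR study
print(data_pool_class(30, 6_900_000))    # nP: -omics microarray study
print(data_pool_class(100_000, 50_000))  # NP: the emerging ideal
```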

Beginnings of Big Data Critical Care Research: Np and ICU Scoring Systems

The unique gift that physicians have always brought to the bedside is prognosis, the ability to tell patients and families what to expect from an illness (16). A primary goal of big data research is still prognosis or, as more usually put, outcomes prediction. (“Prognosis” and “outcomes prediction” actually describe rather different things when applied to ML; we will return to this issue.) The dawn of desktop computing in the 1970s introduced the possibility of comparing outcomes across patient groups of hitherto unimaginable size—N data pools. In turn, this demanded ways to standardize patient load and illness severity across comparison groups. Since then, an array of critical illness scoring systems, drawing on a range of clinical data sources (although not all those with potential for electronic deconstruction and analysis), has been developed, published, and come into routine use (Tables 2 and 3) (17–29).

As originally published, all of these scoring systems were proportional hazards estimates derived from conventional regression analyses of np data from American clinical data pools. Much recent publication on ICU prediction focuses on calibrating these and other scoring systems using increasingly sophisticated automated data-gathering systems to increase population size (N) and improve validity for populations outside the United States (30–38).

The earliest and most familiar of these scores, the Acute Physiology And Chronic Health Evaluation (APACHE), published in 1981, is based on a classic np dataset: 833 adult admissions to two U.S. hospital ICUs (17). APACHE II expanded to 5,815 patients from 13 hospitals but had even fewer patient variables—Np (18). Much of the international work cited above showed the relative insularity—statistically speaking, the poor calibration—of even this amended system. More recent versions of APACHE use newer big data tools to include more than 100,000 patients and many more variables—although still within the Np framework—and can be embedded into EMR systems to facilitate use in clinical care (3, 20).
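
To show the shape of that analytic core, here is a minimal sketch (ours; the variables, coefficients, and data are synthetic illustrative assumptions, not actual APACHE covariates) of a conventional regression-based mortality model of the kind these Np scoring systems rest on.

```python
# Schematic of an APACHE-style Np analysis: conventional logistic regression
# of hospital mortality on a few physiologic variables. All data are
# simulated; the variables and coefficients are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5_000                                  # many patients (the N face)
age = rng.normal(60, 15, n)
worst_map = rng.normal(70, 12, n)          # worst mean arterial pressure
worst_lactate = rng.lognormal(0.5, 0.5, n)

# Simulate mortality from a known (made-up) log-odds relationship.
true_logit = -6 + 0.04 * age - 0.02 * worst_map + 0.5 * worst_lactate
died = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(int)

X = sm.add_constant(np.column_stack([age, worst_map, worst_lactate]))
model = sm.Logit(died, X).fit(disp=0)
print(model.params)          # coefficients recovered from the data
print(model.predict(X)[:5])  # per-patient predicted probability of death
```

A severity “score” is, in essence, a repackaging of such a model’s predicted probabilities into a bedside-usable number.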

Even as these scoring systems expand in scope and dimensionality, they still raise key issues about the vulnerabilities of model-driven N data pool research: missing data, bias, and attributable risk. Regression models are exceptionally vulnerable to missing data. In a system built around determining the shape of a line, for every value y, there must be an x, and there are essentially two options when one or the other is missing: drop the entry with the missing datum (listwise or pairwise deletion) or assume a value (imputation) (39–41). Listwise deletion—dropping the patient entry altogether—is the easiest solution, may happen automatically in some analytic systems, and is the most likely to introduce significant and systematic selection biases in the consideration of relatively uncurated EMR-based registry populations of acute care patients (42). Imputation can be done based on various assumptions. The substitution of a calculated mean or modal value has probably the least potential for biasing results, but maximum likelihood and multiple imputation methods have been used and are plausible (38, 43, 44).
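
The contrast among these options is easy to see on a toy table (ours; the values are arbitrary). The sketch below applies listwise deletion, mean substitution, and scikit-learn’s model-based IterativeImputer, which works in the spirit of the multiple imputation (MICE) methods cited above.

```python
# Three common responses to missing data in a regression-ready table:
# listwise deletion, mean imputation, and model-based imputation.
# Toy values are our own; IterativeImputer is scikit-learn's MICE-style tool.
import numpy as np
import pandas as pd
from sklearn.experimental import enable_iterative_imputer  # noqa: F401
from sklearn.impute import SimpleImputer, IterativeImputer

df = pd.DataFrame({"age": [54, 61, np.nan, 70],
                   "map": [65, np.nan, 72, 58],
                   "lactate": [2.1, 4.0, 3.2, np.nan]})

dropped = df.dropna()  # listwise deletion: only 1 of 4 patients survives it
mean_filled = pd.DataFrame(SimpleImputer(strategy="mean").fit_transform(df),
                           columns=df.columns)
model_filled = pd.DataFrame(IterativeImputer(random_state=0).fit_transform(df),
                            columns=df.columns)

print(len(dropped), "row(s) left after listwise deletion")
print(mean_filled.round(1))
print(model_filled.round(1))
```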

In sum, the newer iterations of older ICU scoring systems are good examples of Np analytics using classic regression modeling and assumptions of statistical normality. They also exemplify the kinds of data mining techniques that are increasingly used to propose novel insights about population patterns of disease while being stretched to what may be the limits of their capabilities (13, 14). An example of the latter is the concern recently raised by the American Statistical Association regarding the validity of conventional notions of statistical probability in the face of huge numbers (15, 45–47).
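
The concern is easy to demonstrate: with a large enough N, a clinically trivial difference yields a “significant” p value. In the simulation below (ours; the 0.2 mm Hg effect is an arbitrary assumption), p collapses toward zero as n grows while the effect stays clinically meaningless.

```python
# Why conventional p-value thinking strains under big data: a trivial
# difference in mean arterial pressure becomes "significant" at large n.
# Entirely simulated; the effect size is our assumption.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
effect = 0.2  # a 0.2 mm Hg difference in MAP: clinically trivial
for n in (100, 10_000, 1_000_000):
    a = rng.normal(70, 12, n)
    b = rng.normal(70 + effect, 12, n)
    t, p = stats.ttest_ind(a, b)
    print(f"n={n:>9,}  p={p:.2g}")
# p shrinks toward zero as n grows; the effect size never changes.
```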

nP Data Pools: -Omics Data and “Precision Medicine”

-Omics data are a commonly cited example of precision medicine and a good example of big data analyses based on nP data pools. Most reports describe relatively few individuals (n) for whom the input features of the particular patient variables being analyzed may number in the millions (P) (2, 48–54). In an innovative effort to expand the scope of P data in critical care, the National Institutes of Health Inflammation and Host Response to Injury—Genomics in Trauma (“Glue Grant”) collaborative explored the dynamic, age-related genomics of the immune response to severe injury, particularly regarding sepsis (55). With data from 22 U.S. trauma centers (roughly 3,000 individual patient records with 1,200 data fields and 5,000 microarrays involving more than 6.9 million input features), the Glue Grant data pools approach the NP ideal and required wide-ranging and innovative solutions to data collation and analysis that are worth exploring as examples of dealing with dimensionality and diversity in big data research (55–64).

NP Data Pools for Big Data Research: Digital and Waveform Data

In contrast to the relatively static and costly nature of -omics data, critical care is rich in dynamic, high-throughput data from patient monitoring and clinicians’ sequential decisions and actions, all now being recorded and stored electronically. The complexity of human systems and their interactions create enormous pools of variables that can provide many features (P) during the stages of data exploration. The potential for predictive continuous electroencephalogram waveform analysis exists (65, 66) and has been explored for possible incorporation into mobile monitoring devices (67) and in tracking an association with acute liver failure in the neuro-ICU (68). Decreased heart rate variability derived from electrocardiogram waveform analysis has been shown to be a useful ICU metric in critical illness (69), sepsis detection (70), multiple organ dysfunction (71), neuroworsening (72, 73), and cardiovascular mortality (74) (a sketch of two such metrics follows this paragraph). Our own and other centers have investigated waveform analysis from continuous automated electronic arterial blood pressure monitoring and intracranial pressure monitoring to predict short- and long-term outcomes in critical care patients (75, 76). Whether N, P, or both, big data by definition involves machines to collect, collate, store, and manipulate those data. ML is the scientific discipline that explores how computers learn from data, but the degree to which that learning capacity can be harnessed to improve critical care outcomes—where the costs of wrong decisions are high—is not yet clear. In the next section, we review basic assumptions that underlie ML and consider how those assumptions affect how ML is being deployed in critical care medicine.
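
As one concrete example of a waveform-derived feature, the sketch below (our illustration, run on simulated beat-to-beat intervals rather than real ECG data) computes two standard time-domain heart rate variability metrics, SDNN and RMSSD.

```python
# Heart rate variability is computed from beat-to-beat (RR) intervals
# extracted from the ECG waveform. A sketch of two standard time-domain
# metrics; the RR series here is simulated, not a real recording.
import numpy as np

rng = np.random.default_rng(2)
rr_ms = rng.normal(800, 40, 300)  # 300 simulated RR intervals (ms)

sdnn = np.std(rr_ms, ddof=1)                   # overall variability
rmssd = np.sqrt(np.mean(np.diff(rr_ms) ** 2))  # beat-to-beat variability

print(f"SDNN  = {sdnn:.1f} ms")
print(f"RMSSD = {rmssd:.1f} ms")
# Falling values of such metrics are the "decreased heart rate variability"
# associated with sepsis onset and organ dysfunction in the studies cited.
```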

ML: NOVEL ANALYTIC SYSTEMS TO COPE WITH NOVEL DATA POOLS

The size and complexity of ML processes mean that they are often summarized in metaphors—visual imagery like “tree-based” or “support vector,” physical actions like “lasso” or “elastic net,” or physiologic processes like “neural networks”—that only minimally convey the reality of the mathematical and technical steps involved (77–79). Among the most common metaphors for summarizing ML methods are “model-driven” versus “data-driven” or “top-down” versus “bottom-up.” These are informal terms for deduction versus induction, that is, moving from a hypothesis to some kind of guaranteed conclusion versus moving from a collection of observations to generalizations that may or may not be true. Top-down analysis starts with a theory and hypothesis, then uses observations (data) to confirm or reject the hypothesis via established statistical methods. Bottom-up analysis discovers patterns in data, forms hypotheses from these patterns, then distills an overarching theory. ML methods mostly do inductive learning from a given data pool, that is, are mostly data driven. However, ML also depends on initial assessment of the structure of the data pool using conventional, hypothesis-driven statistical tools. So even the most advanced current forms of artificial intelligence, the latter stages of which are entirely inductive, start as model driven (80). Beam and Kohane (81) describe ML as:

...existing along a continuum between fully human-guided vs fully machine-guided data analysis. To understand the degree to which a predictive or diagnostic algorithm can be said to be an instance of machine learning requires understanding how much of its structure or parameters were pre-determined by humans.

This continuum is often summarized using two other familiar terms: “supervised” and “unsupervised.” In supervised ML, what could be called the more human end of the continuum, knowledge of the outcome is provided to the model. Results provide statistical estimations of likelihood and translate comfortably to prognoses of outcomes like mortality. As analysis progresses, preestablished criteria are used to select those features in ever-shrinking (as less “useful” features are discarded) pools of data most closely associated with the outcomes of interest (82). Basic statistical tools like chi-square analysis, regression modeling, and Bayesian analysis are often used in these stages, and several clinical journals are now publishing series of short reviews of statistical thinking and methodology in recognition that some familiarity with these tools has become a key asset for bedside critical care physicians (83–89).
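
A minimal sketch of that supervised shrinking of the feature pool (synthetic data; the planted association and the choice of five retained features are our assumptions) using chi-square feature selection:

```python
# The kind of basic statistical filtering used early in supervised ML:
# chi-square scoring of candidate features against a known outcome label.
# Synthetic data; scikit-learn's SelectKBest does the feature-pool shrinking.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2

rng = np.random.default_rng(3)
X = rng.integers(0, 2, size=(500, 20))  # 20 binary candidate features
y = rng.integers(0, 2, size=500)        # known outcome (hence "supervised")
X[:, 3] |= y                            # plant a real association in feature 3

selector = SelectKBest(chi2, k=5).fit(X, y)
print("kept features:", np.flatnonzero(selector.get_support()))
# Feature 3 should survive; most others are discarded as uninformative.
```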

Newer methods that manage outliers without distorting results and that are good at assessing the relative importance of sequential sets of variables—such as Random Forest (a way of assessing successive informational branching patterns) or Bayesian ensemble (based on Bayes’s analysis of sequentially revised prior estimations of likelihood depending on sequentially incoming information)—have been useful in arrhythmia alarm classification (90, 91). Early neurocritical care work used conventional regression techniques and then support vector machines (92) and Boosting algorithms (93) to select features of continuous electronic vital signs monitor data (intracranial pressure, heart rate, systolic blood pressure) that predicted specified outcomes after severe traumatic brain injury (94, 95). Luyt et al (96) used similar techniques to describe analysis of the magnetic resonance imagery of brain tissue molecular water flow to predict outcome in comatose ICU patients during therapeutic hypothermia after cardiac arrest. Lajnef et al (77) describe a Decision Trees approach to sleep staging using continuous electrocardiographic, electroencephalographic, electrooculographic, and electromyographic data. Despite the unfamiliar terminology, all of the above work provided clinically useful information to its patient care teams based on a much more sophisticated and granular examination of available physiologic data than has been hitherto possible.
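
For a flavor of the Random Forest pattern (a sketch on simulated vital-sign summaries; the feature names and outcome rule are our assumptions, not those of the cited studies), the model ranks continuous-monitoring features by their contribution to an outcome classifier:

```python
# Random Forest feature ranking on simulated vital-sign summaries.
# Data, feature names, and outcome rule are illustrative assumptions only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(4)
n = 2_000
features = pd.DataFrame({
    "icp_mean": rng.normal(15, 6, n),   # intracranial pressure
    "hr_mean": rng.normal(85, 15, n),   # heart rate
    "sbp_mean": rng.normal(120, 18, n), # systolic blood pressure
})
# Simulated outcome: risk rises with intracranial pressure.
poor_outcome = (features["icp_mean"] + rng.normal(0, 6, n)) > 20

forest = RandomForestClassifier(n_estimators=200, random_state=0)
forest.fit(features, poor_outcome)
for name, imp in zip(features.columns, forest.feature_importances_):
    print(f"{name:10s} importance {imp:.2f}")  # icp_mean should dominate
```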

In contrast, unsupervised ML approaches data pools without outcome data being provided to the model, deriving outcomes probabilities via patterns of association. Veloso et al (91) describe clustering techniques as potential tools to predict ICU readmission. The University of California, San Francisco and University of California, Berkeley clinical research team has used both hierarchical clustering and correlation network analysis to describe the complex and dynamic metabolic states of critically ill trauma patients (97) and responses to intervention (98). Ghosh et al (99) used hidden Markov models, a Bayesian approach to nonlinear recursive filtering of data (100), to describe changes in sequential patterns of blood pressure and heart rate. These inductive bottom-up analyses provide numerical assessments of association as “outcome predictions” that are important hypothesis generators but are not structured to support clinical decision-making in ways that have been widely validated (1, 101, 102).
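
A minimal sketch of the unsupervised pattern (simulated physiologic “states” of our own devising; the choice of two clusters is an assumption): hierarchical clustering assigns patients to groups without ever seeing an outcome label.

```python
# Unsupervised pattern finding in the spirit of the clustering studies cited
# above: agglomerative (hierarchical) clustering of patients by physiologic
# features, with no outcome label supplied. Synthetic data throughout.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(5)
# Two simulated states (HR, SBP, lactate): compensated vs shock-like.
stable = rng.normal([80, 120, 1.5], [10, 15, 0.4], size=(150, 3))
shocky = rng.normal([115, 85, 4.5], [12, 12, 1.0], size=(50, 3))
X = StandardScaler().fit_transform(np.vstack([stable, shocky]))

labels = AgglomerativeClustering(n_clusters=2).fit_predict(X)
print(np.bincount(labels))  # cluster sizes; no outcomes were ever used
```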

As we consider attempts to move big data systems to the ICU bedside, both the strengths and the weaknesses of these analytic approaches must be kept in mind.

BIG DATA AT THE BEDSIDE

In general, bedside big data clinical applications involve moving beyond prognosis into decision assist. As an example, “intelligent” telemedicine systems like the Philips eICU (Philips, Amsterdam, The Netherlands) can integrate and interpret remote location EMR and ICU monitoring data via algorithms derived from N databases to provide advanced-care monitoring and prognostic capabilities in support of consultative clinical decision-making (103–106).

As yet, however, systems advances based on big data research have proved more useful in supporting and documenting improvements in critical care processes than improvement in critical care patient care outcomes (107, 108). Two exceptions are in critical care of the newborn and trauma resuscitation. The Kaiser algorithm for calculating newborn sepsis risk, an open-access multivariable risk model derived from analysis of 200,000 perinatal admissions to a single Kaiser Permanente hospital, 2010–2015, demonstrably reduces perinatal laboratory testing and antibiotic use without adverse effects and is currently in use in children’s hospitals in the United States (109, 110). In trauma resuscitation, electronic registry databases supported the recognition of an acute coagulopathy of trauma (111, 112) and the need for rapid hemostatic resuscitation (113–115).

Over the last decade, our research group at Maryland Shock Trauma has harnessed automated electronic vital signs monitoring data collection systems to a variety of critical care prediction tasks, including assessing subtle physiologic effects like those of intracranial pressure variations—missed by conventional recording—on long-term outcome after severe neurotrauma (76, 94, 95, 116–121). The analyses at the core of this work employ a range of ML techniques (94, 95, 110) as well as evolving computer engineering solutions (122). The close links among critical care bedside data sources, data networking systems, and the computer science team have allowed development and deployment of translational instrumentation now being tested in fixed facility and U.S. Air Force airborne ICUs (123).

INTO THE FUTURE

Optimizing the critical care potential of big data research will require solutions to two tightly intertwined problems: the nature of the data at the heart of big data research and the nature of clinical decision-making. Put very simply, ML is essentially an extension of existing mathematical concepts and statistical tools to take advantage of the data manipulation capabilities of advanced electronics. However, unlike the N databases assembled to assess epidemiologic patterns in the past—infant diarrhea in the 1960s (124); asbestos exposure and cancer in the 1970s (125)—the retrospective databases that are the sources for many current N studies comprise data originally collected for other uses. Anathema to the traditionally trained epidemiologist, secondary analyses are now accepted as inevitable but still require alert judgment to avoid the worst effects of selection and ascertainment bias (12). As an example, when it was released in 2008, Google Flu Trends was hailed as a rapid-response public health disaster-preparedness breakthrough but then fell victim to overfitting and concept drift—the ML manifestation of classic selection bias and resulting spurious association. That is, in the particularly severe 2012 flu season, the worried well were as likely to Google-search flu-related questions as those who were truly ill (5, 126, 127).
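
Concept drift can be simulated in a few lines (entirely synthetic; the drift in the search-to-illness relationship is our assumption): a model fit when searches tracked illness over-predicts once the worried well change that relationship.

```python
# Concept drift in miniature: a model fit when search volume tracked true
# illness degrades when the relationship shifts, as with Google Flu Trends.
# All numbers are simulated assumptions for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(6)
searches = rng.normal(100, 20, 500)
flu_2010 = 0.5 * searches + rng.normal(0, 5, 500)  # searches track illness
model = LinearRegression().fit(searches.reshape(-1, 1), flu_2010)

# 2012: the "worried well" search too, so the old mapping over-predicts.
searches_2012 = rng.normal(160, 25, 500)                # much more searching
flu_2012 = 0.3 * searches_2012 + rng.normal(0, 5, 500)  # weaker true link
pred = model.predict(searches_2012.reshape(-1, 1))
print("mean predicted:", pred.mean().round(1))
print("mean observed: ", flu_2012.mean().round(1))
```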

The Multiparameter Intelligent Monitoring in Intensive Care (MIMIC) database has evolved in part in response to this problem of inappropriate secondary data mining. MIMIC began as a vision of precision medicine at Beth Israel Deaconess and the Massachusetts Institute of Technology: a dynamic pool of individual and collective patient data that could be queried in real time on rounds to provide immediate answers to patient care questions (128–131). Greatly expanded, updated, and deidentified to allow public access, MIMIC II has supported studies in cardiovascular time series dynamics, modeling intracranial pressure for noninvasive estimation, and mortality prediction (132–134). Medical Information Mart for Intensive Care (MIMIC) III, released in 2016, integrates increasingly granular access to high-quality physiologic measurements, including potential for user visualization to increase non-expert access (135) and, in a subset of patients, access to P data pools in the form of continuous electronic monitoring and -omics data. The commitment to open-access peer-reviewed science that MIMIC represents is encouraging but will require an equal commitment to quality control, calibration, and open access on the part of its users (101).
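
For readers who obtain credentialed access, MIMIC-III is distributed as relational tables that load directly into standard analytic tools. A minimal sketch (the local file path is hypothetical; ADMISSION_TYPE and HOSPITAL_EXPIRE_FLAG are real MIMIC-III columns, but verify against the release you download):

```python
# First steps with MIMIC-III (135) after credentialed access is granted.
# The path below is a hypothetical local location for the downloaded CSVs.
import pandas as pd

admissions = pd.read_csv("mimic-iii/ADMISSIONS.csv")  # hypothetical path
mortality = admissions.groupby("ADMISSION_TYPE")["HOSPITAL_EXPIRE_FLAG"].mean()
print(mortality.round(3))  # in-hospital mortality by admission type
```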

The Google Trends algorithm was quickly amended by its developers, and the game, in a sense, goes on. This raises the second major problem with the current status of big data research. So far, the ML applications where the algorithms clearly outperform humans are in low-risk settings—games, etc.—where the cost of a wrong decision made on the basis of the information provided is relatively trivial. Similarly structured medical applications now exist that can perform better than expert humans on image-based pattern recognition diagnosis (136, 137) but do not do as well when presented with the range of bedside uncertainties that are the routine fare of the senior attending physician (1, 101, 138–140). These have as yet (despite enthusiastic predictions [141]) defied ML and other forms of artificial intelligence deconstruction and codification (1).

The challenge—and potential—of big data research is the integration of big data pools and novel methodologies to identify that individual patient likely to have the unexpected—not the expected—response to a particular illness or treatment (6), and to do so soon enough that cost-effective intervention is possible and individual and population outcomes demonstrably improve. The challenge to clinicians is to be able to incorporate statistical thinking into their daily practice as readily as they do visual, tactile, and auditory information and to be willing to exercise the same degree of critical judgment about the evidence provided by big data methods and instrumentation as they do about other evidence.

ACKNOWLEDGMENTS

We thank Dr. Zaka Ahmad for his efforts in assembling the first round of citations for review. We also thank Drs. John R. Hess and Aaron S. Hess for their patience and support in reviewing drafts of this article in various stages.

REFERENCES

1. Sanchez-Pinto LN, Luo Y, Churpek MM. Big data and data science in critical care. Chest 2018; 154:1239–1248
2. Wu PY, Cheng CW, Kaddi CD, et al. -Omic and electronic health record big data analytics for precision medicine. IEEE Trans Biomed Eng 2017; 64:263–273
3. Johnson AE, Ghassemi MM, Nemati S, et al. Machine learning and decision support in critical care. Proc IEEE 2016; 104:444–466
4. Zimmerman JE, Kramer AA. A history of outcome prediction in the ICU. Curr Opin Crit Care 2014; 20:550–556
5. Ghassemi M, Celi LA, Stone DJ. State of the art review: The data revolution in critical care. Crit Care 2015; 19:118
6. Maslove DM, Lamontagne F, Marshall JC, et al. A path to precision in the ICU. Crit Care 2017; 21:79
7. Seymour CW, Gomez H, Chang CH, et al. Precision medicine for all? Challenges and opportunities for a precision medicine approach to critical illness. Crit Care 2017; 21:257
8. Moher D, Liberati A, Tetzlaff J, et al; PRISMA Group: Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA statement. PLoS Med 2009; 6:e1000097
9. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: Explanation and elaboration. PLoS Med 2009; 6:e1000100
10. Laney D; META Group: 3D Data Management: Controlling Data Volume, Velocity and Variety. February 2001. Available at: http://blogs.gartner.com/doug-laney/files/2012/01/ad949-3D-Data-Management-Controlling-Data-Volume-Velocity-and-Variety.pdf. Accessed April 18, 2018
    11. Buchman TG, Billiar TR, Elster E, et al. Precision medicine for critical illness and injury. Crit Care Med 2016; 44:1635–1638
    12. Olsen J. Using secondary data. In: Rothman KJ, Greenland S, Lash TL (Eds). Modern Epidemiology. Third Edition. New York, NY, Lippincott Williams & Wilkins, 2008, pp 481–491
    13. Badawi O, Liu X, Hassan E, et al. Evaluation of ICU risk models adapted for use as continuous markers of severity of illness throughout the ICU stay. Crit Care Med 2018; 46:361–367
    14. Maslove DM. With severity scores updated on the hour, data science inches closer to the bedside. Crit Care Med 2018; 46:480–481
    15. Lee CH, Yoon HJ. Medical big data: Promise and challenges. Kidney Res Clin Pract 2017; 36:3–11
    16. Thomas L. The Youngest Science. New York, NY, Penguin Random House, 1995
    17. Knaus WA, Zimmerman JE, Wagner DP, et al. APACHE-Acute Physiology and Chronic Health Evaluation: A physiologically based classification system. Crit Care Med 1981; 9:591–597
    18. Knaus WA, Draper EA, Wagner DP, et al. APACHE II: A severity of disease classification system. Crit Care Med 1985; 13:818–829
    19. Knaus WA, Wagner DP, Draper EA, et al. The APACHE III prognostic system. Risk prediction of hospital mortality for critically ill hospitalized adults. Chest 1991; 100:1619–1636
    20. Zimmerman JE, Kramer AA, McNair DS, et al. Acute Physiology and Chronic Health Evaluation (APACHE) IV: Hospital mortality assessment for today’s critically ill patients. Crit Care Med 2006; 34:1297–1310
    21. Le Gall JR, Loirat P, Alperovitch A, et al. A simplified acute physiology score for ICU patients. Crit Care Med 1984; 12:975–977
    22. Le Gall JR, Lemeshow S, Saulnier F. A new Simplified Acute Physiology Score (SAPS II) based on a European/North American multicenter study. JAMA 1993; 270:2957–2963
    23. Moreno RP, Metnitz PG, Almeida E, et al; SAPS 3 Investigators: SAPS 3–From evaluation of the patient to evaluation of the intensive care unit. Part 2: Development of a prognostic model for hospital mortality at ICU admission. Intensive Care Med 2005; 31:1345–1355
    24. Lemeshow S, Teres D, Klar J, et al. Mortality Probability Models (MPM II) based on an international cohort of intensive care unit patients. JAMA 1993; 270:2478–2486
    25. Le Gall JR, Klar J, Lemeshow S, et al. The Logistic Organ Dysfunction system. A new way to assess organ dysfunction in the intensive care unit. ICU scoring group. JAMA 1996; 276:802–810
    26. Vincent JL, Moreno R, Takala J, et al. The SOFA (Sepsis-related Organ Failure Assessment) score to describe organ dysfunction/failure. Intensive Care Med 1996; 22:707–710
    27. Marshall JC, Cook DJ, Christou NV, et al. Multiple Organ Dysfunction Score: A reliable descriptor of a complex clinical outcome. Crit Care Med 1995; 23:1638–1652
    28. Moreno R. Organ failure scoring. In: Webb A, Gattinoni L (Eds). Oxford Textbook of Critical Care. Oxford, United Kingdom, Oxford University Press, 2016, pp 130–132
    29. Moreno RP, Metnitz PGH. Severity scoring systems: Tools for the evaluation of patients and intensive care units. In: Parrillo JE, Dellinger RP (Eds). Critical Care Medicine: Principles of Diagnosis and Management in the Adult. Third Edition. Philadelphia, PA, Mosby Elsevier, 2008, pp 1547–1565
    30. Paul E, Bailey M, Van Lint A, et al. Performance of APACHE III over time in Australia and New Zealand: A retrospective cohort study. Anaesth Intensive Care 2012; 40:980–994
    31. Paul E, Bailey M, Pilcher D. Risk prediction of hospital mortality for adult patients admitted to Australian and New Zealand intensive care units: Development and validation of the Australian and New Zealand Risk of Death model. J Crit Care 2013; 28:935–941
    32. Aktuerk D, McNulty D, Ray D, et al. National administrative data produces an accurate and stable risk prediction model for short-term and 1-year mortality following cardiac surgery. Int J Cardiol 2016; 203:196–203
    33. Ferrando-Vivas P, Jones A, Rowan KM, et al. Development and validation of the new ICNARC model for prediction of acute hospital mortality in adult critical care. J Crit Care 2017; 38:335–339
    34. Gillies MA, Harrison EM, Pearse RM, et al. Intensive care utilization and outcomes after high-risk surgery in Scotland: A population-based cohort study. Br J Anaesth 2017; 118:123–131
    35. Engerström L, Kramer AA, Nolin T, et al. Comparing time-fixed mortality prediction models and their effect on ICU performance metrics using the simplified acute physiology score 3. Crit Care Med 2016; 44:e1038–e1044
    36. Fang X, Wang Z, Yang J, et al. Clinical evaluation of sepsis-1 and sepsis-3 in the ICU. Chest 2018; 153:1169–1176
    37. Akar AR, Kurtcephe M, Sener E, et al; Group for the Turkish Society of Cardiovascular Surgery and Turkish Ministry of Health: Validation of the EuroSCORE risk models in Turkish adult cardiac surgical population. Eur J Cardiothorac Surg 2011; 40:730–735
    38. Haaland OA, Lindemark F, Flaatten H, et al. A calibration study of SAPS II with Norwegian intensive care registry data. Acta Anaesthesiol Scand 2014; 58:701–708
    39. Allison PD. Missing Data. Sage University Papers Series on Quantitative Applications in the Social Sciences. Thousand Oaks, CA, SAGE, 2001
    40. Enders C. Applied Missing Data Analysis. New York, NY, Guilford Press, 2010
    41. Little RJ, Rubin D. Statistical Analysis with Missing Data. Hoboken, NJ, John Wiley & Sons, 2002
    42. Ondeck NT, Fu MC, Skrip LA, et al. Missing data treatments matter: An analysis of multiple imputation for anterior cervical discectomy and fusion procedures. Spine J 2018; 18:2009–2017
    43. Schafer JL, Graham JW. Missing data: Our view of the state of the art. Psychol Methods 2002; 7:147–177
    44. Lee KJ, Simpson JA. Introduction to multiple imputation for dealing with missing data. Respirology 2014; 19:162–167
    45. Ioannidis JPA. The proposal to lower P value thresholds to .005. JAMA 2018; 319:1429–1430
    46. Ioannidis JP. Why most published research findings are false. PLoS Med 2005; 2:e124
    47. Wasserstein RL, Lazar NA. The ASA’s statement on P-values: Context, process, and purpose. Am Stat 2016; 70:129–133
    48. Shimada T, Oda S, Sadahiro T, et al. Outcome prediction in sepsis combined use of genetic polymorphisms: A study in Japanese population. Cytokine 2011; 54:79–84
    49. Swanson JM, Wood GC, Xu L, et al. Developing a gene expression model for predicting ventilator-associated pneumonia in trauma patients: A pilot study. PLoS One 2012; 7:e42065
    50. Blaise BJ, Gouel-Chéron A, Floccard B, et al. Metabolic phenotyping of traumatized patients reveals a susceptibility to sepsis. Anal Chem 2013; 85:10850–10855
    51. Mickiewicz B, Vogel HJ, Wong HR, et al. Metabolomics as a novel approach for early diagnosis of pediatric septic shock and its mortality. Am J Respir Crit Care Med 2013; 187:967–976
    52. Finnerty CC, Jeschke MG, Qian WJ, et al; Investigators of the Inflammation and the Host Response Glue Grant: Determination of burn patient outcome by large-scale quantitative discovery proteomics. Crit Care Med 2013; 41:1421–1434
    53. Vanzant EL, Hilton RE, Lopez CM, et al; Inflammation and Host Response to Injury Investigators: Advanced age is associated with worsened outcomes and a unique genomic response in severely injured patients with hemorrhagic shock. Crit Care 2015; 19:77
    54. Garcia-Simon M, Morales JM, Modesto-Alapont V, et al. Prognosis biomarkers of severe sepsis and septic shock by 1H NMR urine metabolomics in the intensive care unit. PLoS One 2015; 10:e0140993
    55. Tompkins RG. Genomics of injury: The glue grant experience. J Trauma Acute Care Surg 2015; 78:671–686
    56. Ferrario M, Cambiaghi A, Brunelli L, et al. Mortality prediction in patients with severe septic shock: A pilot study using a target metabolomics approach. Sci Rep 2016; 6:20391
    57. Calvano SE, Xiao W, Richards DR, et al; Inflammation and Host Response to Injury Large Scale Collaborative Research Program: A network-based analysis of systemic inflammation in humans. Nature 2005; 437:1032–1037
    58. Zhou B, Xu W, Herndon D, et al; Inflammation and Host Response to Injury Program: Analysis of factorial time-course microarrays with application to a clinical study of burn injury. Proc Natl Acad Sci U S A 2010; 107:9923–9928
    59. Storey JD, Xiao W, Leek JT, et al. Significance analysis of time course microarray experiments. Proc Natl Acad Sci U S A 2005; 102:12837–12842
    60. Rajicic N, Finkelstein DM, Schoenfeld DA; Inflammation and Host Response to Injury Research Program Investigators: Analysis of the relationship between longitudinal gene expressions and ordered categorical event data. Stat Med 2009; 28:2817–2832
    61. Qian WJ, Liu T, Petyuk VA, et al; Inflammation and the Host Response to Injury Large Scale Collaborative Research Program: Large-scale multiplexed quantitative discovery proteomics enabled by the use of an (18)O-labeled “universal” reference sample. J Proteome Res 2009; 8:290–299
    62. Hayden D, Lazar P, Schoenfeld D; Inflammation and the Host Response to Injury Investigators: Assessing statistical significance in microarray experiments using the distance between microarrays. PLoS One 2009; 4:e5838
    63. Desai KH, Tan CS, Leek JT, et al; Inflammation and the Host Response to Injury Large-Scale Collaborative Research Program: Dissecting inflammatory complications in critically injured patients by within-patient gene expression changes: A longitudinal clinical genomics study. PLoS Med 2011; 8:e1001093
    64. Cuenca AG, Gentile LF, Lopez MC, et al; Inflammation and Host Response to Injury Collaborative Research Program: Development of a genomic metric that can be rapidly used to predict clinical outcome in severely injured trauma patients. Crit Care Med 2013; 41:1175–1185
    65. Seale C. Real-Time Processing of EEG Signals for Mobile Detection of Seizures. Galway, Ireland, National University of Ireland Galway, 2012
    66. Juan E, Kaplan PW, Oddo M, et al. EEG as an indicator of cerebral functioning in postanoxic coma. J Clin Neurophysiol 2015; 32:465–471
    67. Serhani MA, Menshawy ME, Benharref A, et al. New algorithms for processing time-series big EEG data within mobile health monitoring systems. Comput Methods Programs Biomed 2017; 149:79–94
    68. Stewart J, Särkelä M, Koivusalo AM, et al. Frontal electroencephalogram variables are associated with the outcome and stage of hepatic encephalopathy in acute liver failure. Liver Transpl 2014; 20:1256–1265
    69. Ernst G. Heart Rate Variability. London, United Kingdom, Springer, 2014
    70. Ahmad S, Ramsay T, Huebsch L, et al. Continuous multi-parameter heart rate variability analysis heralds onset of sepsis in adults. PLoS One 2009; 4:e6642
    71. Pontet J, Contreras P, Curbelo A, et al. Heart rate variability as early marker of multiple organ dysfunction syndrome in septic patients. J Crit Care 2003; 18:156–163
    72. Rajendra Acharya U, Paul Joseph K, Kannathal N, et al. Heart rate variability: A review. Med Biol Eng Comput 2006; 44:1031–1051
    73. Malik M, Bigger JT, Camm AJ, et al: Heart rate variability. Standards of measurement, physiological interpretation, and clinical use. Task Force of the European Society of Cardiology and the North American Society of Pacing and Electrophysiology. Eur Heart J 1996; 17:354–381
    74. Bates DW, Saria S, Ohno-Machado L, et al. Big data in health care: Using analytics to identify and manage high-risk and high-cost patients. Health Aff (Millwood) 2014; 33:1123–1131
    75. Melinosky C, Yang S, Hu P, et al. Continuous vital sign analysis to predict secondary neurological decline after traumatic brain injury. Front Neurol 2018; 9:761
    76. Hatib F, Jian Z, Buddi S, et al. Machine-learning algorithm to predict hypotension based on high-fidelity arterial pressure waveform analysis. Anesthesiology 2018; 129:663–674
    77. Lajnef T, Chaibi S, Ruby P, et al. Learning machines and sleeping brains: Automatic sleep stage classification using decision-tree multi-class support vector machines. J Neurosci Methods 2015; 250:94–105
    78. Lee CK, Hofer I, Gabel E, et al. Development and validation of a deep neural network model for prediction of postoperative in-hospital mortality. Anesthesiology 2018; 129:649–662
    79. Hinton G. Deep learning-A technology with the potential to transform health care. JAMA 2018; 320:1101–1102
    80. Silver D, Hubert T, Schrittwieser J, et al. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 2018; 362:1140–1144
    81. Beam AL, Kohane IS. Big data and machine learning in health care. JAMA 2018; 319:1317–1318
    82. Naylor CD. On the prospects for a (Deep) learning health care system. JAMA 2018; 320:1099–1100
    83. Hess AS, Hess JR. Linear regression and correlation. Transfusion 2017; 57:9–11
    84. Hess AS, Hess JR. Understanding tests of the association of categorical variables: The Pearson chi-square test and Fisher’s exact test. Transfusion 2017; 57:877–879
    85. Hess AS, Hess JR. Principal component analysis. Transfusion 2018; 58:1580–1582
    86. Hess AS, Hess JR. Analysis of variance. Transfusion 2018; 58:2255–2256
    87. Tolles J, Meurer WJ. Logistic regression: Relating patient characteristics to outcomes. JAMA 2016; 316:533–534
    88. Agoritsas T, Merglen A, Shah ND, et al. Adjusted analyses in studies addressing therapy and harm: Users’ guides to the medical literature. JAMA 2017; 317:748–759
    89. Quintana M, Viele K, Lewis RJ. Bayesian analysis: Using prior information to interpret the results of clinical trials. JAMA 2017; 318:1605–1606
    90. Johnson AE, Dunkley N, Mayaud L, et al. Patient specific predictions in the intensive care unit using a Bayesian ensemble. Comput Cardiol 2012; 39:249–252
    91. Veloso R, Portela F, Santos MF, et al. A clustering approach for predicting readmissions in intensive medicine. Procedia Technology 2014; 16:1307–1316
    92. Dudoit S, Fridlyand J, Speed TP. Comparison of discrimination methods for the classification of tumors using gene expression data. J Am Stat Assoc 2002; 97:77–87
    93. Kulkarni S, Harman G. An Elementary Introduction to Statistical Learning Theory. Vol 853. Hoboken, NJ, John Wiley & Sons, 2011
    94. Stein DM, Hu PF, Chen HH, et al. Computational gene mapping to analyze continuous automated physiologic monitoring data in neuro-trauma intensive care. J Trauma Acute Care Surg 2012; 73:419–424
    95. Stein DM, Brenner M, Hu PF, et al. Timing of intracranial hypertension following severe traumatic brain injury. Neurocrit Care 2013; 18:332–340
    96. Luyt CE, Galanaud D, Perlbarg V, et al; Neuro Imaging for Coma Emergence and Recovery Consortium: Diffusion tensor imaging to predict long-term outcome after cardiac arrest: A bicentric pilot study. Anesthesiology 2012; 117:1311–1321
    97. Cohen MJ, Grossman AD, Morabito D, et al. Identification of complex metabolic states in critically injured patients using bioinformatic cluster analysis. Crit Care 2010; 14:R10
    98. Grossman AD, Cohen MJ, Manley GT, et al. Altering physiological networks using drugs: Steps towards personalized physiology. BMC Med Genomics 2013; 6(Suppl 2):S7
    99. Ghosh S, Li J, Cao L, et al. Septic shock prediction for ICU patients via coupled HMM walking on sequential contrast patterns. J Biomed Inform 2017; 66:19–31
    100. Baum LE, Petrie T, Soules G, et al. A maximization technique occurring in the statistical analysis of probabilistic functions of Markov chains. Ann Math Stat 1970; 41:164–171
      101. Mathis MR, Kheterpal S, Najarian K. Artificial intelligence for anesthesia: What the practicing clinician needs to know: More than black magic for the art of the dark. Anesthesiology 2018; 129:619–622
      102. Shah ND, Steyerberg EW, Kent DM. Big data and predictive analytics: Recalibrating expectations. JAMA 2018; 320:27–28
      103. Breslow MJ. Remote ICU care programs: Current status. J Crit Care 2007; 22:66–76
      104. Ries M. Telemedicine application to progressive care units: A new role for telemedicine. Crit Care Med 2018; 46:816–817
      105. McShea M, Holl R, Badawi O, et al. The eICU research institute: A collaboration between industry, health-care providers, and academia. IEEE Eng Med Biol Mag 2010; 29:18–25
      106. Ries M. Evaluating tele-ICU cost: An imperfect science. Crit Care Med 2016; 44:441–442
      107. Kheterpal S, Shanks A, Tremper KK. Impact of a novel multiparameter decision support system on intraoperative processes of care and postoperative outcomes. Anesthesiology 2018; 128:272–282
      108. Kendale S, Kulkarni P, Rosenberg AD, et al. Supervised machine-learning predictive analytics for prediction of postinduction hypotension. Anesthesiology 2018; 129:675–688
      109. Kuzniewicz MW, Puopolo KM, Fischer A, et al. A quantitative, risk-based approach to the management of neonatal early-onset sepsis. JAMA Pediatr 2017; 171:365–371
      110. Dhudasia MB, Mukhopadhyay S, Puopolo KM. Implementation of the sepsis risk calculator at an academic birth hospital. Hosp Pediatr 2018; 8:243–250
      111. Brohi K, Singh J, Heron M, et al. Acute traumatic coagulopathy. J Trauma 2003; 54:1127–1130
      112. MacLeod JB, Lynn M, McKenney MG, et al. Early coagulopathy predicts mortality in trauma. J Trauma 2003; 55:39–44
      113. Dutton RP, Stansbury LG, Leone S, et al. Trauma mortality in mature trauma systems: Are we doing better? An analysis of trauma mortality patterns, 1997-2008. J Trauma 2010; 69:620–626
      114. de Biasi AR, Stansbury LG, Dutton RP, et al. Blood product use in trauma resuscitation: Plasma deficit versus plasma ratio as predictors of mortality in trauma (CME). Transfusion 2011; 51:1925–1932
      115. Kotwal RS, Howard JT, Orman JA, et al. The effect of a golden hour policy on the morbidity and mortality of combat casualties. JAMA Surg 2016; 151:15–24
      116. Kahraman S, Hu P, Stein DM, et al. Dynamic three-dimensional scoring of cerebral perfusion pressure and intracranial pressure provides a brain trauma index that predicts outcome in patients with severe traumatic brain injury. J Trauma 2011; 70:547–553
      117. Bonds BW, Yang S, Hu PF, et al. Predicting secondary insults after severe traumatic brain injury. J Trauma Acute Care Surg 2015; 79:85–90
      118. Hu PF, Mackenzie CF, Dutton R, et al. Real-time patient vital sign data collection network for trauma care. Telemedicine and e-Health 2008; 14
      119. Kahraman S, Dutton RP, Hu P, et al. Automated measurement of “pressure times time dose” of intracranial hypertension best predicts outcome after severe traumatic brain injury. J Trauma 2010; 69:110–118
      120. Kahraman S, Dutton RP, Hu P, et al. Heart rate and pulse pressure variability are associated with intractable intracranial hypertension after severe traumatic brain injury. J Neurosurg Anesthesiol 2010; 22:296–302
      121. Kalpakis K, Yang S, Hu PF, et al. Permutation entropy analysis of vital signs data for outcome prediction of patients with severe traumatic brain injury. Comput Biol Med 2015; 56:167–174
      122. Hu PF, Yang S, Li HC, et al. Reliable collection of real-time patient physiologic data from less reliable networks: A “Monitor of Monitors” system (MoMs). J Med Syst 2017; 41:3
      123. Beninati W, Meyer MT, Carter TE. The critical care air transport program. Crit Care Med 2008; 36:S370–S376
      124. Gordon JE, Behar M, Scrimshaw NS. Acute diarrhoeal disease in less developed countries. I. An epidemiological basis for control. Bull World Health Organ 1964; 31:1–7
      125. Kolonel LN, Yoshizawa CN, Hirohata T, et al. Cancer occurrence in shipyard workers exposed to asbestos in Hawaii. Cancer Res 1985; 45:3924–3928
      126. Lazer D, Kennedy R, King G, et al. Big data. The parable of Google Flu: Traps in big data analysis. Science 2014; 343:1203–1205
      127. Butler D. When Google got flu wrong. Nature 2013; 494:155–156
      128. Celi LA, Mark RG, Lee J, et al. Collective experience: A database-fuelled, inter-disciplinary team-led learning system. J Comput Sci Eng 2012; 6:51–59
      129. Celi LA, Mark RG, Stone DJ, et al. “Big data” in the intensive care unit. Closing the data loop. Am J Respir Crit Care Med 2013; 187:1157–1160
      130. Sorani MD, Hemphill JC III, Morabito D, et al. New approaches to physiological informatics in neurocritical care. Neurocrit Care 2007; 7:45–52
      131. Goldberger AL, Amaral LA, Glass L, et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a new research resource for complex physiologic signals. Circulation 2000; 101:E215–E220
      132. Kashif FM, Verghese GC, Novak V, et al. Model-based noninvasive estimation of intracranial pressure from cerebral blood flow velocity and arterial pressure. Sci Transl Med 2012; 4:129ra44
      133. Lehman LW, Adams RP, Mayaud L, et al. A physiological time series dynamics-based approach to patient monitoring and outcome prediction. IEEE J Biomed Health Inform 2015; 19:1068–1076
      134. Saeed M, Villarroel M, Reisner AT, et al. Multiparameter intelligent monitoring in intensive care II: A public-access intensive care unit database. Crit Care Med 2011; 39:952–960
      135. Johnson AE, Pollard TJ, Shen L, et al. MIMIC-III, a freely accessible critical care database. Sci Data 2016; 3:160035
      136. Marchetti MA, Codella NCF, Dusza SW, et al; International Skin Imaging Collaboration: Results of the 2016 International Skin Imaging Collaboration International Symposium on biomedical imaging challenge: Comparison of the accuracy of computer algorithms to dermatologists for the diagnosis of melanoma from dermoscopic images. J Am Acad Dermatol 2018; 78:270–277.e1
      137. González G, Ash SY, Vegas-Sánchez-Ferrero G, et al; COPDGene and ECLIPSE Investigators: Disease staging and prognosis in smokers using deep learning in chest computed tomography. Am J Respir Crit Care Med 2018; 197:193–203
      138. Cohen MJ. Use of models in identification and prediction of physiology in critically ill surgical patients. Br J Surg 2012; 99:487–493
      139. Verghese A, Shah NH, Harrington RA. What this computer needs is a physician: Humanism and artificial intelligence. JAMA 2018; 319:19–20
      140. Stead WW. Clinical implications and challenges of artificial intelligence and deep learning. JAMA 2018; 320:1107–1108
      141. Gambus P, Shafer SL. Artificial intelligence for everyone. Anesthesiology 2018; 128:431–433
Keywords: big data; critical care prediction; decision support; intensive care unit prediction; prediction strategies

Copyright © 2019 by the Society of Critical Care Medicine and Wolters Kluwer Health, Inc. All Rights Reserved.