Review Articles

Machine Learning for Predicting Outcomes in Trauma

Liu, Nehemiah T.; Salinas, Jose

doi: 10.1097/SHK.0000000000000898



Injury is the leading cause of mortality for persons ages 1 to 44 in the United States (1); it is also a leading cause of mortality worldwide (2–4). In the United States alone in 2013, deaths from injuries among persons 1 to 44 years of age totaled 59% of all deaths, more than that of non-communicable diseases and infectious diseases combined (1). Furthermore, there were more than 192,900 deaths from injury, an estimated 31 million emergency department visits, and an estimated 2.5 million people hospitalized with injury (1).

Due to the increasing burden of injuries, many risk-prediction models have been developed to aid trauma centers and providers, not only retrospectively but also prospectively as triage/risk-assessment tools (5–7). In recent years especially, due to technological advances, greater data availability, and the need to analyze large amounts of data, machine learning has been applied increasingly to help develop more accurate predictive models (5–9).


Machine learning (ML), a field of computer science and a part of artificial intelligence, refers to the science and engineering by which machines (i.e., computer systems) can analyze data and “learn” information from those data. “Learned” information is then used to explain processes and gain additional knowledge from data (e.g., prediction of outcomes) (5). Alternatively, ML denotes an algorithm or set of algorithms, often implemented in software, that takes features extracted from data as inputs and returns acquired knowledge as an output.

Unlike conventional statistics, ML can help develop much more sophisticated data models using advanced mathematical techniques. Moreover, ML can handle complex data sets and perform well on nonlinear data, even in the presence of missing data. Importantly, the success of an ML algorithm depends on which features are extracted and on the criteria used for developing (training) a data model. These features and criteria, as well as the final performance of the algorithms, are likely to vary across ML applications. Furthermore, model validation usually follows model development. In many cases, cross-validation is employed, in which a subset of the data is used to build/train the model and a different subset is used to test/validate the model. This cycle can be repeated to improve model performance.
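To make the cross-validation cycle described above concrete, the following is a minimal sketch of a k-fold split in Python. This is a generic illustration of the idea, not the protocol of any study in this review; the function name and fold-assignment scheme are the author's own assumptions.

```python
# A minimal k-fold cross-validation split (pure Python): the n samples
# are partitioned into k disjoint folds; each fold serves once as the
# test/validation set while the remaining folds form the training set.
def kfold_splits(n, k):
    indices = list(range(n))
    folds = [indices[i::k] for i in range(k)]  # round-robin fold assignment
    for i in range(k):
        test = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        yield train, test

# Example: 10 samples, 5 folds -> every sample is held out exactly once.
tested = []
for train, test in kfold_splits(10, 5):
    assert set(train).isdisjoint(test)  # no train/test leakage
    tested.extend(test)
assert sorted(tested) == list(range(10))
```

In practice, libraries such as scikit-learn provide shuffled and stratified variants of this split, but the underlying idea is the same: every sample is used for validation exactly once across the k iterations.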

While ML has greatly developed from the theory and application of artificial neural networks (ANNs), it now encompasses a much more diverse set of algorithms and techniques such as support vector machines (SVMs), k-nearest neighbor algorithms (KNNs), decision trees (DTs), naive Bayes classifiers (NBs), and Bayesian belief networks (BBNs), to name a few.

Simply put, ANNs model data via a network of processing nodes that weight extracted features differently, thereby attempting to simulate human intelligence like that of neurons in the human brain. ANNs yield predictions as probabilities by summing the weighted data. SVMs are similar to neural networks but add preprocessing layers, such as nonlinear weighting, to the data. KNNs cluster the data into distinct subsets; in other words, KNNs divide datasets into groups of similar data using a chosen distance metric and then make predictions based on these groups. DTs model data according to a tree-like structure and make predictions in a branch-traversal manner, whereas random forests employ multiple DTs. NBs and BBNs model data using a probabilistic approach and predict using this a priori knowledge.
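As a concrete illustration of one of these techniques, the following is a minimal k-nearest-neighbor classifier. The feature vectors, labels, value of k, and choice of squared Euclidean distance are all invented for illustration; no study in this review used this exact code.

```python
from collections import Counter

# A minimal k-nearest-neighbor classifier (pure Python): predictions are
# made by majority vote among the k stored examples closest to the query
# under a chosen metric (squared Euclidean distance here).
def knn_predict(train_X, train_y, query, k=3):
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(point, query)), label)
        for point, label in zip(train_X, train_y)
    )
    nearest_labels = [label for _, label in dists[:k]]
    return Counter(nearest_labels).most_common(1)[0][0]  # majority vote

# Toy example: two clusters standing in for two outcome classes.
X = [(0.0, 0.0), (0.1, 0.2), (0.2, 0.1), (1.0, 1.0), (0.9, 1.1), (1.1, 0.9)]
y = ["survived", "survived", "survived", "died", "died", "died"]
print(knn_predict(X, y, (0.15, 0.1)))  # -> survived
```

The same train-then-query pattern applies to the other classifier families: only the internal model (network weights, tree splits, conditional probabilities) changes.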


One of the first to employ ML to predict probability of survival in trauma patients was McGonigal et al. (10), who determined that ANNs provided a more sensitive technique for survival scoring than did existing trauma scores such as the Trauma and Injury Severity Score (TRISS) and A Severity Characterization Of Trauma (ASCOT). Importantly, this translated into a mortality reduction and the potential for more efficient use of quality assurance resources (10). Today, ML models are capable of predicting more than just mortality or survival in patients with trauma; they can predict diverse outcomes ranging from hemorrhagic shock and sepsis to the need for life-saving interventions (LSIs).

Because of the vastness of ML applications in medicine, few literature reviews have been exhaustive. In general, reviews in the field have established the potential benefits of ANN-based prediction models as tools for assisting clinicians, stratifying patients, improving processes, and, to a lesser extent, reducing hospital costs and improving patient outcomes in the intensive care unit (6). One of the first reviews to cover a wider range of ML techniques was the 2001 review by Hanson and Marshall (7), but their application scope was broader than risk-prediction models. In addition, they differentiated the term “machine learning” from well-known ML techniques (7). Other reviews have taken an altogether different focus, examples being on intervention (11), physical medicine and rehabilitation (12), clinical biomechanics (13), and thoracic surgery (14). Almost all well-known reviews have dealt primarily with ANNs, despite the growth of ML theory and development.

To date, there are no reviews on ML for predicting outcomes in trauma. Consequently, it remains unclear how ML-based prediction models and applications compare (e.g., in design, features used, and performance) when triaging and assessing traumatically injured patients. Therefore, considering the growth of ML applications in medicine and the complexities and challenges of trauma care, this review focuses on risk analysis using ML for trauma care and research. This work followed many of the consensus recommendations of the Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group (15). The objective of this review was to survey and identify relevant studies involving ML for predicting outcomes in trauma, with the hypothesis that models predicting similar outcomes may share common features but that the performance of ML in these studies will differ greatly.


Identification of trials

This meta-analysis was restricted to those studies that involved ML, prediction of outcomes, and trauma patients. The aim was to identify all relevant observational studies that reported features and the impact of ML for predicting outcomes such as mortality and/or morbidity in trauma patients. Outcomes also included the need for LSIs, since LSIs are a resource-based endpoint useful for prehospital triage and may identify more patients requiring attention from providers and resources of a trauma center than mortality (16, 17). A multimethod approach was used to identify relevant studies. The National Library of Medicine's MEDLINE database was searched for relevant studies in English published before December 2016 using the following medical subject headings and keywords: trauma, burns, AND machine learning, artificial neural networks, support vector machines, decision trees, naive Bayes classifiers, and nearest-neighbor algorithms. In addition, the Cochrane Database of Systematic Reviews and ScienceDirect (Elsevier Inc, Waltham, Mass) were searched. Bibliographies of all selected articles and review articles that included information on ML and trauma were reviewed for other relevant articles.

Data extraction and analysis

Data were abstracted from all studies using a standardized form and consisted of article title, authors, journal, study design, study size, year, application, ML technique(s), key features, and algorithm performance. When reported, the numerical results of studies were used to quantitate the impact of ML on the endpoints of interest. In addition, the discussion sections of studies were examined for comparisons between ML and conventional statistics and for further insight into validation approaches and future work. The overall benefits of ML on patient outcome in each study were also noted.

Performance metrics included the accuracy (ACC), sensitivity (Se), specificity (Sp), and/or area under the receiver-operating characteristic (ROC) curve (AUC) for achieving targets of interest. Se reports the percentage of true positives that were correctly predicted by the algorithm, i.e., the true positive rate, whereas Sp reports the true negative rate. Because of anticipated heterogeneity among studies, available Se-Sp values were plotted in a gap plot to examine the variability in performance among prediction models. This plot provided a simple visual tool for displaying the disparities in performance across studies/ML models. (A ROC curve of the individual Se and Sp values was not useful for this study.) Corresponding gap values G were defined by the following equation: G = (1 − Se) + (1 − Sp).
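A minimal sketch of these metrics in Python follows, assuming the gap definition G = (1 − Se) + (1 − Sp), which is consistent with the Se, Sp, and G values reported in this review. The confusion counts below are invented for illustration; they are chosen so the resulting Se/Sp/G match the best-performing study quoted later (Se 1.000, Sp 0.965, G 0.035).

```python
# Sensitivity (Se), specificity (Sp), accuracy (ACC), and the gap value G
# computed from confusion counts (tp/fn/tn/fp are hypothetical).
def performance(tp, fn, tn, fp):
    se = tp / (tp + fn)               # true positive rate
    sp = tn / (tn + fp)               # true negative rate
    acc = (tp + tn) / (tp + fn + tn + fp)
    g = (1 - se) + (1 - sp)           # narrower gap -> better performance
    return se, sp, acc, g

se, sp, acc, g = performance(tp=30, fn=0, tn=193, fp=7)
print(round(se, 3), round(sp, 3), round(g, 3))  # -> 1.0 0.965 0.035
```

Note that G ranges from 0 (perfect Se and Sp) to 2 (both zero), so it penalizes a deficit in either metric equally.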

Excel (Microsoft Corporation, Redmond, Wash) was used for abstracting the data. MATLAB (The MathWorks Inc, Natick, Mass) and Excel were used for data analysis. Narrower gaps indicated higher Se and/or Sp, whereas wider gaps indicated lower Se and/or Sp.
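As a rough illustration of the gap-plot idea, the following text-based mock-up draws each study as a horizontal segment from Se to Sp on a 0–1 axis. The rendering style is an assumption (the actual figure was produced in MATLAB/Excel); the two studies and their Se/Sp values are the extremes quoted later in this review.

```python
# A text mock-up of the Se-Sp gap plot: each study is a horizontal
# segment spanning its Se and Sp values, annotated with
# G = (1 - Se) + (1 - Sp).
def gap_row(name, se, sp, width=50):
    lo, hi = sorted((se, sp))
    i, j = round(lo * (width - 1)), round(hi * (width - 1))
    axis = [" "] * width
    for pos in range(i, j + 1):
        axis[pos] = "-"
    axis[i] = axis[j] = "|"           # endpoints mark Se and Sp
    g = (1 - se) + (1 - sp)
    return f"{name:<18}{''.join(axis)}  G={g:.3f}"

for row in (gap_row("Marble & Healy", 1.000, 0.965),
            gap_row("Paetz", 0.150, 0.923)):
    print(row)
```

A narrow segment near the right edge thus signals a model that is both sensitive and specific, while a long segment exposes a trade-off between the two.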


The search strategy of this meta-analysis generated 1,453 potentially relevant articles. Of those, 525 focused on the wrong topic (e.g., robotics) and were excluded from the analysis. Afterwards, titles and abstracts of the remaining 928 articles were screened, and a total of 65 studies were identified and included in the analysis (10, 18–81). Four different pairs of studies (19, 20, 40, 43, 52, 66, 68, 74) used the same datasets. The number of studies evaluated at each stage of the search process is illustrated in Figure 1. A summary of the studies is listed in Supplemental Digital Content, Table 1. In total, 2,433,180 patients with trauma were included in the studies. The studies focused on prediction of the following outcome measures: survival/mortality (n = 34), morbidity/shock/hemorrhage (n = 12), hospital length of stay (n = 7), hospital admission/triage (n = 6), traumatic brain injury (n = 4), life-saving interventions (n = 5), post-traumatic stress disorder (n = 4), and transfusion (n = 1).

Fig. 1:
Process of study inclusion for the meta-analysis.

Furthermore, six studies (25, 50, 56, 57, 60, 77) were prospective observational studies involving trauma patients, and the remaining studies were all retrospective. Of the 65 studies, 33 (10, 18–46, 48, 73, 75) used ANNs for prediction or classification purposes, 11 studies leveraged decision trees (20, 21, 31, 39, 40, 42, 48, 49, 63, 64, 72, 80), 15 employed support vector machines (46, 48, 49, 51, 53, 54, 56, 58, 66, 71, 74, 75, 77, 78, 79, 81), 8 used naive Bayes classifiers or Bayesian belief networks (31, 42, 52, 53, 63, 64, 75, 79), 4 used clustering or K-nearest neighbor algorithms (58, 59, 62, 76), and 3 used random forest techniques (65, 70, 75). Other techniques included SuperLearner (combination of some of the aforementioned ML techniques) (61, 65) and linear or ensemble classifiers (similar to linear regression models) (41, 44, 57, 73). Although ML algorithms are often tested using a cross-validation approach, with 10-fold cross-validation being commonly applied during development, only 11 studies reported this (18, 20, 21, 42, 45, 49, 52, 53, 65, 78, 79). All other studies used non-conventional validation approaches. Unfortunately, only eight studies (25, 28, 30, 37, 39, 60, 63, 64) reported calibration or goodness of fit, with all using the Hosmer–Lemeshow statistic.

Importantly, most studies demonstrated the benefits of ML models for predicting outcomes in trauma. However, datasets differed greatly in population size (training/testing), and algorithm performance was assessed differently by different authors, as shown in Supplemental Digital Content, Table 1. Moreover, even when studies used the same metrics (e.g., Se and Sp), values varied greatly, as shown in the Se-Sp gap plot in Figure 2. Here, all available Se-Sp values are shown for studies involving ML for predicting outcomes in trauma to depict the variability in performance among prediction models. Se-Sp gaps were drawn as red horizontal lines, with the left endpoint of each line denoting Se and the right endpoint denoting Sp. Markers at 20% intervals were drawn as blue vertical dashed lines. Narrower gaps between Se-Sp values indicated higher algorithmic performance, whereas wider gaps indicated poorer performance. Where studies reported multiple values, notes were displayed to the right of gaps. Coordinates on the x-axis represented Se or Sp values, whereas coordinates on the y-axis represented the available studies. For this analysis, gap values G varied from 0.035 to 0.927 and were displayed to the far right of the gap plot.

Fig. 2:
Sensitivity-specificity gaps.

When ML algorithms were compared against traditional statistical methods such as logistic regression, the former almost always outperformed the latter (statistical significance, P < 0.05); only one study (52) reported otherwise. That study involved a small sample of 32 combat casualties, and its authors concluded that further development and validation in other patient groups were needed. Another study (75) reported only marginal benefits of ML over logistic regression models. Five studies (20, 21, 30, 37, 73) indicated that both ML and logistic regression had strengths and weaknesses, whereas one study (39) concluded that neither was able to meet performance criteria. Among studies employing and comparing multiple ML techniques, there was no consensus on which technique was best.

Notably, as shown in Supplemental Digital Content, Table 1, studies shared many features for model development, including age, Glasgow Coma Scale score, trauma scores, Injury Severity Score, various blood pressures, respiratory rate, and heart rate. For studies involving burn patients, key features included age, gender, presence of inhalation injury, and total body surface area burned. Three studies (45, 46, 67) demonstrated enhanced predictive capability in ML models when using electrocardiogram-derived features, such as heart-rate variability and heart-rate complexity; likewise, the same was found for studies using transcranial Doppler (46), computed tomography (56), electroencephalogram (57), and ultrasound (81).


This meta-analysis investigated the potential of ML for predicting outcomes in traumatically injured patients. As indicated by the studies in this review, many aspects of patient care and assessment are “pattern recognition” tasks and could benefit from ML-based prediction models and applications. ML models have several advantages over other methods. By relying less on clinical knowledge and experience and more on standard, measured, physiological variables, ML models are more objective than other methods and thus limit intrarater variability (7). Unlike conventional statistical approaches, ML can capture nonlinearities that exist among independent variables such as age, severity scores, and physiologic information such as vital signs. Furthermore, ML techniques may be optimized and combined into a multimethod solution given the right parameters and performance criteria. At least three studies (23, 48, 69) have demonstrated the capability of a combined approach to model development.

By listing key features that may be useful for developing future ML-based prediction models and algorithms, this work also suggested that a common set of ML features (a feature base) may be determined for predicting outcomes in trauma. These features may be employed not only for triage and assessment of patients but also for retrospective process improvement of trauma systems. Although many studies employed demographics and severity scores, and some even laboratory values, it is important to point out why real-time models should rely less on these features and more on noninvasive physiologic information such as blood pressure, respiratory rate, heart rate, and waveform-derived variables. Demographic information and scores, when available, are convenient but do not always support the concepts of automation and continuous data analysis, especially within a prehospital or battlefield environment. In other words, demographics often require communication with the patient or access to the patient's background and history, while scores require physical examination of the patient. Neither may always be available when basing treatment decisions solely on electronic data (e.g., during evacuation). Automation and continuous data analysis have many potential implications for prehospital and battlefield trauma care. Constant physiologic observations and data could enhance the medic's ability to assess and treat civilian and battlefield injuries (67, 68).

In addition, through the Se-Sp gap plot, this review reiterated the need for common performance criteria. Apart from Wolfe et al. (39), who concluded that ML techniques did not meet performance criteria, all other studies concluded that ML models offered potential benefits to clinicians. Yet, as shown above, Se and Sp values differed across studies over a broad spectrum. Marble and Healy (27) reported the highest performance (Se: 1.000, Sp: 0.965, G: 0.035), as shown in the gap plot, whereas Paetz (34) reported the lowest performance (Se: 0.150, Sp: 0.923, G: 0.927). This wide variability among Se-Sp gaps underscores that common performance criteria must be established before widespread clinical acceptance of ML in trauma care can be achieved.

This may be one reason why ML-based prediction models have not pervaded trauma care and clinical practice. Another reason may be that many clinicians and researchers in medicine do not understand the principles of ML algorithms and their potential to change practice; hence, there is a need for further education in these fields, as well as in health information technology and informatics. Because many different techniques, methodologies, and features exist for designing ML models, variability among ML models is common and may lead clinicians to believe that models are more subjective than objective tools. Wolfe et al. (39), for example, reported negative results for logistic regression and ML techniques (decision tree, ANN), but the separate models in that study were developed by several different authors. Hence, parametric details and tools for developing algorithms must be shared within the trauma care/research community, and a uniform set of performance criteria must be agreed upon. Lastly, there are not enough prospective observational studies, let alone randomized trials, for validating potential applications. ML will require further validation before widespread clinical acceptance can be achieved. Only limited clinical evidence has been available; thus, more definitive clinical trials are necessary to convince clinicians of the efficacy of ML in trauma care.

Although many different ML techniques were identified in this work, it was not clear whether one ML technique or class of techniques was superior in performance to the others, nor whether one technique or class of techniques was superior for predicting a particular outcome. Historically, ANNs have been popular since the inception of ML and artificial intelligence. Due to the problem of overfitting (making a model too complex) and other limitations, ANNs came to be considered inferior to newer ML techniques such as SVMs and BBNs. In the past few years, however, deep learning and convolutional neural networks have emerged outside of medicine and have demonstrated potential in medical applications. This may further prove that ML is a constantly growing field, in which different ML techniques have their usefulness for different applications. Future studies will be needed to explore the concepts of deep learning in medicine.

This review had a number of limitations and was not exhaustive due to the enormity of the task. Despite intensive efforts, the collection of studies was likely incomplete: some studies were not available in the public domain, others were published outside the peer-reviewed academic literature, and others were published in languages other than English. Of the available studies, it was sometimes difficult to assess whether a study involved only trauma patients and whether it involved ML for prediction of trauma outcomes. Moreover, the pooled studies varied widely in model design, study size (machine learning experience), and performance assessment. Nevertheless, this review was unique in that it surveyed studies involving ML-based models for predicting outcomes in trauma and contributed to the literature by detailing various ML applications and studies targeting trauma patients. The most consistently effective approaches used the key features identified in this review, a finding consistent with other literature. Unfortunately, there was insufficient evidence to draw conclusions about the overall impact of ML models in trauma care/research.

In summary, ML has great potential in trauma care. This review supported the potential efficacy of ML for predicting outcomes in trauma patients and showed that a common ML feature base may be determined for predicting trauma outcomes. However, before ML-based prediction models can be widely accepted in practice, ML will require further validation in prospective observational studies and randomized clinical trials, the establishment of common performance criteria, and high-quality evidence of clinical and economic impact.


1. Centers for Disease Control and Prevention: Injury prevention and control: leading causes of death. Available at:, 2013. Accessed April 9, 2015.
2. World Health Organization: The Global Burden of Disease: 2004 Update. World Health Organization Press, Geneva, Switzerland, 2008.
3. Centers for Disease Control and Prevention: 10 leading causes of death, United States. 1999–2006, all races, both sexes. Available at:, 2010. Accessed March 14, 2015.
4. Mathers CD, Boerma T, Ma Fat D. Global and regional causes of death. Br Med Bull 2009; 92:7–32.
5. Alpaydin E: Introduction to Machine Learning. 1st ed. Cambridge, MA: MIT Press, 1–16, 2004.
6. Rosenberg AL. Recent innovations in intensive care unit risk-prediction models. Curr Opin Crit Care 2002; 8 4:321–330.
7. Hanson CW, Marshall BE. Artificial intelligence applications in the intensive care unit. Crit Care Med 2001; 29 2:427–435.
8. Baxt WG. Application of artificial neural networks to clinical medicine. Lancet 1995; 346 8983:1135–1138.
9. Dybowski R, Gant V. Artificial neural networks in pathology and medical laboratories. Lancet 1995; 346 8984:1203–1207.
10. McGonigal MD, Cole J, Schwab CW, Kauder DR, Rotondo MF, Angood PB. A new approach to probability of survival scoring for trauma quality assurance. J Trauma 1993; 34 6:863–868.
11. Lisboa PJ. A review of evidence of health benefit from artificial neural networks in medical intervention. Neural Netw 2002; 15 1:11–39.
12. Ohno-Machado L, Rowland T. Neural network applications in physical medicine and rehabilitation. Am J Phys Med Rehabil 1999; 78 4:392–398.
13. Schöllhorn WI. Applications of artificial neural nets in clinical biomechanics. Clin Biomech (Bristol, Avon) 2004; 19 9:876–898.
14. Esteva H, Núñez TG, Rodríguez RO. Neural networks and artificial intelligence in thoracic surgery. Thorac Surg Clin 2007; 17 3:359–367.
15. Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, Moher D, Becker BJ, Sipe TA, Thacker SB. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 2000; 283 15:2008–2012.
16. Garner A, Lee A, Harrison K, Schultz CH. Comparative analysis of multiple-casualty incident triage algorithms. Ann Emerg Med 2001; 38 5:541–548.
17. Holcomb JB, Niles SE, Miller CC, Hinds D, Duke JH, Moore FA. Prehospital physiologic data and lifesaving interventions in trauma patients. Mil Med 2005; 170:7–13.
18. Frye KE, Izenberg SD, Williams MD, Luterman A. Simulated biologic intelligence used to predict length of stay and survival of burns. J Burn Care Rehabil 1996; 17 (6 pt 1):540–546.
19. Rutledge R. Injury severity and probability of survival assessment in trauma patients using a predictive hierarchical network model derived from ICD-9 codes. J Trauma 1995; 38 4:590–597.
20. Hadzikadic M, Hakenewerth A, Bohren B, Norton J, Mehta B, Andrews C. Concept formation vs. logistic regression: predicting death in trauma patients. Proc Annu Symp Comput Appl Med Care 1995; 1:198–202.
21. Hadzikadic M, Hakenewerth A, Bohren B, Norton J, Mehta B, Andrews C. Concept formation vs. logistic regression: predicting death in trauma patients. Artif Intell Med 1996; 8 5:493–504.
22. Dybowski R, Weller P, Chang R, Gant V. Prediction of outcome in critically ill patients using artificial neural network synthesised by genetic algorithm. Lancet 1996; 347 9009:1146–1150.
23. Lim CP, Harrison RF, Kennedy RL. Application of autonomous neural network systems to medical pattern classification tasks. Artif Intell Med 1997; 11 3:215–239.
24. Izenberg SD, Williams MD, Luterman A. Prediction of trauma mortality using a neural network. Am Surg 1997; 63 3:275–281.
25. Rutledge R, Osler T, Emery S, Kromhout-Schiro S. The end of the Injury Severity Score (ISS) and the Trauma and Injury Severity Score (TRISS): ICISS, an International Classification of Diseases, ninth revision-based prediction tool, outperforms both ISS and TRISS as predictors of trauma patient survival, hospital charges, and hospital length of stay. J Trauma 1998; 44 1:41–49.
26. Edwards DF, Hollingsworth H, Zazulia AR, Diringer MN. Artificial neural networks improve the prediction of mortality in intracerebral hemorrhage. Neurology 1999; 53 2:351–357.
27. Marble RP, Healy JC. A neural network approach to the diagnosis of morbidity outcomes in trauma care. Artif Intell Med 1999; 15 3:299–307.
28. DiRusso SM, Sullivan T, Holly C, Cuff SN, Savino J. An artificial neural network as a model for prediction of survival in trauma patients: validation for a regional trauma area. J Trauma 2000; 49 2:212–220.
29. Hunter A, Kennedy L, Henry J, Ferguson I. Application of neural networks and sensitivity analysis to improved prediction of trauma survival. Comput Methods Programs Biomed 2000; 62 1:11–19.
30. Becalick DC, Coats TJ. Comparison of artificial intelligence techniques with UKTRISS for estimating probability of survival after trauma. UK Trauma and Injury Severity Score. J Trauma 2001; 51 1:123–133.
31. Demsar J, Zupan B, Aoki N, Wall MJ, Granchi TH, Beck JR. Feature mining and predictive model construction from severe trauma patient's data. Int J Med Inform 2001; 63 (1–2):41–50.
32. Estahbanati HK, Bouduhi N. Role of artificial neural networks in prediction of survival of burn patients-a new approach. Burns 2002; 28 6:579–586.
33. DiRusso SM, Chahine AA, Sullivan T, Risucci D, Nealon P, Cuff S, Savino J, Slim M. Development of a model for prediction of survival in pediatric trauma patients: comparison of artificial neural networks and logistic regression. J Pediatr Surg 2002; 37 7:1098–1104.
34. Paetz J. Knowledge-based approach to septic shock patient data using a neural network with trapezoidal activation functions. Artif Intell Med 2003; 28 2:207–230.
35. Walczak S. Artificial neural network medical decision support tool: predicting transfusion requirements of ER patients. IEEE Trans Inf Technol Biomed 2005; 9 3:468–474.
36. Fuller JJ, Emmett M, Kessel JW, Price PD, Forsythe JH. A comparison of neural networks for computing predicted probability of survival for trauma victims. W V Med J 2005; 101 3:120–125.
37. Eftekhar B, Mohammad K, Ardebili HE, Ghodsi M, Ketabchi E. Comparison of artificial neural network and logistic regression models for prediction of mortality in head trauma based on initial clinical data. BMC Med Inform Decis Mak 2005; 5:3.
38. Pearl A, Caspi R, Bar-Or D. Artificial neural network versus subjective scoring in predicting mortality in trauma patients. Stud Health Technol Inform 2006; 124:1019–1024.
39. Wolfe R, McKenzie DP, Black J, Simpson P, Gabbe BJ, Cameron PA. Models developed by three techniques did not achieve acceptable prediction of binary trauma outcomes. J Clin Epidemiol 2006; 59 1:26–35.
40. Talbert S, Talbert DA. A comparison of a decision tree induction algorithm with the ACS guidelines for trauma triage. AMIA Annu Symp Proc 2007; 1127.
41. Chen L, Reisner AT, McKenna TM, Gribok A, Reifman J. Diagnosis of hemorrhage in a prehospital trauma population using linear and nonlinear multiparameter analysis of vital signs. Conf Proc IEEE Eng Med Biol Soc 2007; 2007:3748–3751.
42. Pang BC, Kuralmani V, Joshi R, Hongli Y, Lee KK, Ang BT, Li J, Leong TY, Ng I. Hybrid outcome prediction model for severe traumatic brain injury. J Neurotrauma 2007; 24 1:136–146.
43. Pearl A, Bar-Or R, Bar-Or D. An artificial neural network derived trauma outcome prediction score as an aid to triage for non-clinicians. Stud Health Technol Inform 2008; 136:253–258.
44. Chen L, McKenna TM, Reisner AT, Gribok A, Reifman J. Decision tool for the early diagnosis of trauma patient hypovolemia. J Biomed Inform 2008; 41 3:469–478.
45. Batchinsky AI, Salinas J, Jones JA, Necsoiu C, Cancio LC. Predicting the need to perform life-saving interventions in trauma patients using new vital signs and artificial neural networks. Lect Notes Comput Sc 2009; 5651:390–394.
46. Najarian K, Hakimzadeh R, Ward K, Daneshvar K, Ji SY. Combining predictive capabilities of transcranial doppler with electrocardiogram to predict hemorrhagic shock. Conf Proc IEEE Eng Med Biol Soc 2009; 2009:2621–2624.
47. Pearl A, Bar-Or D. Using artificial neural networks to predict potential complications during trauma patients’ hospitalization period. Stud Health Technol Inform 2009; 150:610–614.
48. Ji SY, Smith R, Huynh T, Najarian K. A comparative analysis of multi-level computer-assisted decision making systems for traumatic injuries. BMC Med Inform Decis Mak 2009; 9:2.
49. Yang CS, Wei CP, Yuan CC, Schoung JY. Predicting the length of hospital stay of burn patients: comparisons of prediction accuracy among different clinical stages. Decision Support Systems 2010; 50 1:325–335.
50. Rughani AI, Dumont TM, Lu Z, Bongard J, Horgan MA, Penar PL, Tranmer BI. Use of an artificial neural network to predict head injury outcome. J Neurosurg 2010; 113 3:585–590.
51. Tang CH, Middleton PM, Savkin AV, Chan GS, Bishop S, Lovell NH. Non-invasive classification of severe sepsis and systemic inflammatory response syndrome using a nonlinear support vector machine: a preliminary study. Physiol Meas 2010; 31 6:775–793.
52. Stojadinovic A, Eberhardt J, Brown TS, Hawksworth JS, Gage F, Tadaki DK, Forsberg JA, Davis TA, Potter BK, Dunne JR, et al. Development of a Bayesian model to estimate health care outcomes in the severely wounded. J Multidiscip Healthc 2010; 3:125–135.
53. Patil BM, Joshi RC, Toshniwal D, Biradar S. A new approach: role of data mining in prediction of survival of burn patients. J Med Syst 2011; 35 6:1531–1542.
54. Ribas VJ, López JC, Ruiz-Sanmartin A, Ruiz-Rodríguez JC, Rello J, Wojdel A, Vellido A. Severe sepsis mortality prediction with relevance vector machines. Conf Proc IEEE Eng Med Biol Soc 2011; 2011:100–103.
55. Hanisch E, Brause R, Paetz J, Arlt B. Review of a large clinical series: predicting death for patients with abdominal septic shock. J Intensive Care Med 2011; 26 1:27–33.
56. Davuluri P, Wu J, Tang Y, Cockrell CH, Ward KR, Najarian K, Hargraves RH. Hemorrhage detection and segmentation in traumatic pelvic injuries. Comput Math Methods Med 2012; 2012:898430.
57. Prichep LS, Jacquin A, Filipenko J, Dastidar SG, Zabele S, Vodencarevic A, Rothman NS. Classification of traumatic brain injury severity using informed data reduction in a series of binary classifier algorithms. IEEE Trans Neural Syst Rehabil Eng 2012; 20 6:806–822.
58. Stein DM, Hu PF, Chen HH, Yang S, Stansbury LG, Scalea TM. Computational gene mapping to analyze continuous automated physiologic monitoring data in neuro-trauma intensive care. J Trauma Acute Care Surg 2012; 73 2:419–424.
59. Moulton SL, Mulligan J, Grudic GZ, Convertino VA. Running on empty? The compensatory reserve index. J Trauma Acute Care Surg 2013; 75 6:1053–1059.
60. Shi HY, Hwang SL, Lee KT, Lin CL. In-hospital mortality after traumatic brain injury surgery: a nationwide population-based comparison of mortality predictors used in artificial neural network and logistic regression models. J Neurosurg 2013; 118 4:746–752.
61. Hubbard A, Munoz ID, Decker A, Holcomb JB, Schreiber MA, Bulger EM, Brasel KJ, Fox EE, del Junco DJ, Wade CE, et al. Time-dependent prediction and evaluation of variable importance using superlearning in high-dimensional clinical data. J Trauma Acute Care Surg 2013; 75 (1 suppl 1):S53–S60.
62. Convertino VA, Grudic G, Mulligan J, Moulton S. Estimation of individual-specific progression to impending cardiovascular instability using arterial waveforms. J Appl Physiol (1985) 2013; 115(8):1196–1202.
63. Schetinin V, Jakaite L, Jakaitis J, Krzanowski W. Bayesian Decision Trees for predicting survival of patients: a study on the US National Trauma Data Bank. Comput Methods Programs Biomed 2013; 111(3):602–612.
64. Schetinin V, Jakaite L, Krzanowski W. Prediction of survival probabilities with Bayesian Decision Trees. Expert Syst Appl 2013; 40:5466–5476.
65. Kessler RC, Rose S, Koenen KC, Karam EG, Stang PE, Stein DJ, Heeringa SG, Hill ED, Liberzon I, McLaughlin KA, et al. How well can post-traumatic stress disorder be predicted from pre-trauma risk factors? An exploratory study in the WHO World Mental Health Surveys. World Psychiatry 2014; 13(3):265–274.
66. Galatzer-Levy IR, Karstoft KI, Statnikov A, Shalev AY. Quantitative forecasting of PTSD from early trauma responses: a machine learning application. J Psychiatr Res 2014; 59:68–76.
67. Liu NT, Holcomb JB, Wade CE, Darrah MI, Salinas J. Utility of vital signs, heart-rate variability and complexity, and machine learning for identifying the need for life-saving interventions in trauma patients. Shock 2014; 42:108–114.
68. Liu NT, Holcomb JB, Wade CE, Batchinsky AI, Cancio LC, Darrah MI, Salinas J. Development and validation of a machine learning algorithm and hybrid system to predict the need for life-saving interventions in trauma patients. Med Biol Eng Comput 2014; 52:193–203.
69. Jiménez F, Sánchez G, Juárez JM. Multi-objective evolutionary algorithms for fuzzy classification in survival prediction. Artif Intell Med 2014; 60(3):197–219.
70. Scerbo M, Radhakrishnan H, Cotton B, Dua A, Del Junco D, Wade C, Holcomb JB. Prehospital triage of trauma patients using the Random Forest computer algorithm. J Surg Res 2014; 187(2):371–376.
71. Ribas Ripoll VJ, Vellido A, Romero E, Ruiz-Rodríguez JC. Sepsis mortality prediction with the Quotient Basis Kernel. Artif Intell Med 2014; 61(1):45–52.
72. Chapman MP, Moore EE, Burneikis D, Moore HB, Gonzalez E, Anderson KC, Ramos CR, Banerjee A. Thrombelastographic pattern recognition in renal disease and trauma. J Surg Res 2015; 194(1):1–7.
73. Chong SL, Liu N, Barbier S, Ong ME. Predictive modeling in pediatric traumatic brain injury using machine learning. BMC Med Res Methodol 2015; 15:22.
74. Karstoft KI, Galatzer-Levy IR, Statnikov A, Li Z, Shalev AY; members of the Jerusalem Trauma Outreach and Prevention Study (J-TOPS) group. Bridging a translational gap: using machine learning to improve the prediction of PTSD. BMC Psychiatry 2015; 15:30.
75. Stylianou N, Akbarov A, Kontopantelis E, Buchan I, Dunn KW. Mortality risk prediction in burn injury: comparison of logistic regression with machine learning approaches. Burns 2015; 41(5):925–934.
76. Bonds BW, Yang S, Hu PF, Kalpakis K, Stansbury LG, Scalea TM, Stein DM. Predicting secondary insults after severe traumatic brain injury. J Trauma Acute Care Surg 2015; 79(1):85–90.
77. Karstoft KI, Statnikov A, Andersen SB, Madsen T, Galatzer-Levy IR. Early identification of posttraumatic stress following military deployment: application of machine learning methods to a prospective study of Danish soldiers. J Affect Disord 2015; 184:170–175.
78. Chen G, Han N, Li G, Li X, Li G, Liu Y, Wu W, Wang Y, Chen Y, Sun G, et al. Prediction of feature genes in trauma patients with the TNF rs1800629 A allele using support vector machine. Comput Biol Med 2015; 64:24–29.
79. Mossadegh S, He S, Parker P. Bayesian scoring systems for military pelvic and perineal blast injuries: is it time to take a new approach? Mil Med 2016; 181 (5 suppl):127–131.
80. Follin A, Jacqmin S, Chhor V, Bellenfant F, Robin S, Guinvarc’h A, Thomas F, Loeb T, Mantz J, Pirracchio R. Tree-based algorithm for prehospital triage of polytrauma patients. Injury 2016; 47(7):1555–1561.
81. Sjogren AR, Leo MM, Feldman J, Gwin JT. Image segmentation and machine learning for detection of abdominal free fluid in focused assessment with sonography for trauma examinations: a pilot study. J Ultrasound Med 2016; 35(11):2501–2509.

Keywords: Machine learning; neural networks; prediction models; trauma

Abbreviations: ACC, accuracy; ANN, artificial neural network; APACHE, acute physiology and chronic health evaluation; BBN, Bayesian belief network; CT, computed tomography; CVP, central venous pressure; DBP, diastolic blood pressure; DT, decision tree; ECG, electrocardiogram; EEG, electroencephalogram; ER, emergency room; GCS, Glasgow coma score; HR, heart rate; HRC, heart-rate complexity; HRV, heart-rate variability; ICD, International Classification of Diseases; ICP, intracranial pressure; ISS, injury severity score; KNN, k-nearest neighbor algorithm; LOS, hospital length of stay; LSIs, life-saving interventions; MAE, mean absolute error; MAP, mean arterial pressure; NB, naive Bayes classifier; PP, pulse pressure; PTSD, post-traumatic stress disorder; R2, correlation coefficient; RF, random forest; ROC AUC, area under the receiver-operating characteristic curve; RR, respiratory rate; RTS, revised trauma score; SaO2, oxygen saturation; SAPS, simplified acute physiology score; SBP, systolic blood pressure; Se, sensitivity; SI, shock index; SOFA, sequential organ failure assessment; Sp, specificity; SVM, support vector machine; TBI, traumatic brain injury; TBSA, total body surface area burned; TCD, transcranial Doppler; Temp, temperature


Copyright © 2017 by the Shock Society