ABSTRACT: Currently available triage and monitoring tools are often late to detect life-threatening, clinically significant physiological aberrations and provide limited data for prioritizing bleeding patients for treatment and evacuation. The Compensatory Reserve Index (CRI) is a novel means of assessing physiologic reserve, shown to correlate with central blood volume loss under laboratory conditions. The purpose of this study was to compare the noninvasive CRI device with currently available vital signs in detecting blood loss. Study subjects were soldiers volunteering for blood donation (n = 230), and the control group was composed of soldiers who did not donate blood (n = 34). Data collected before and after blood donation were compared, receiver operator characteristic curves were generated after either donation or the appropriate time interval, and areas under the curves (AUCs) were compared. Compared with predonation values, blood donation resulted in a mean reduction of systolic blood pressure of 3% (before, 123 mmHg; after, 119 mmHg; P < 0.01), whereas the CRI demonstrated a 16% reduction (before, 0.74; after, 0.62; P < 0.01). Heart rate, diastolic blood pressure, and oxygen saturation remained unchanged. The AUCs for change were 0.81 for CRI, 0.56 for heart rate, 0.53 for systolic blood pressure, and 0.55 and 0.58 for pulse pressure and shock index, respectively. The AUCs for detecting mild blood loss at a single measurement were 0.73 for heart rate, 0.60 for systolic blood pressure, 0.62 for diastolic blood pressure, 0.45 for pulse oximetry, and 0.84 for CRI. The CRI was better than standard indices in detecting mild blood loss. A single measurement of CRI may enable more accurate triage, and CRI monitoring may allow for earlier detection of casualty deterioration.
*The Trauma and Combat Medicine Branch, Surgeon General’s Headquarters, Israeli Defense Forces, Ramat Gan, Israel; †US Army Institute of Surgical Research, Fort Sam Houston, Texas; ‡Department of Emergency Medicine, Rambam Health Care Campus, Haifa, Israel; and §Department of Pediatric Cardiology, Schneider Children’s Medical Center of Israel, Petach Tikvah, Israel
Received 7 Jan 2014; first review completed 22 Jan 2014; accepted in final form 17 Mar 2014
Address reprint requests to Elon Glassberg, MD, MHA, The Trauma and Combat Medicine Branch, Medical Corps, Surgeon General’s Headquarters, Military POB 02149, Israeli Defense Forces, Ramat Gan, Israel. E-mail: firstname.lastname@example.org.
E.G. conceived the study. E.G., R.N., and A.M.L. designed the trial. E.G. and R.N. supervised the conduct of the trial and data collection. G.L., R.N., and S.G. undertook recruitment of participating centers and patients and managed the data, including quality control. R.N., A.M.L., V.A.C., S.C., and A.L. provided statistical advice on study design and analyzed the data. R.N. and E.G. drafted the manuscript, and all authors contributed substantially to its revision. R.N. takes responsibility for the article as a whole.
Assessment of a trauma casualty in the emergency department can be a challenging process. This is particularly true in prehospital settings, where limited evaluation tools present a substantial challenge even for the most experienced caregiver. The assessment includes collecting data regarding injury mechanism, vital signs, and physical examination findings. These data are then integrated in an attempt to form an accurate picture of patient status and to determine the urgency of treatment and of evacuation to the next echelon of care when necessary. When caring for multiple casualties, this process takes on even greater importance because the care of one patient can delay the care of others.
Field triage and monitoring of casualties usually consist of several vital sign measurements, including blood pressure, oxygen saturation, and heart rate. These measurements show varying correlation with patient survival, Injury Severity Score, and the need for life-saving intervention (1–6). The most significant disadvantage of their use in patient triage is that they are all retrospective by nature: changes in these indices occur only after substantial hemodynamic compromise and failure of compensatory mechanisms (7), when life-saving interventions might be too late (1, 8).
Because of the limitations inherent to these vital signs, several calculated indices have been suggested in an attempt to integrate a few vital signs into more sensitive metrics for prediction of patient outcomes. The most frequently described metric is shock index (SI), which is calculated as the ratio between heart rate and systolic blood pressure (normal values, 0.5 – 0.7), and has demonstrated superiority over other indices (9–11). Heart rate variability has also been frequently suggested as a calculated vital sign, but its clinical utility in the acute blood loss setting has proven to be limited as a result of its high interpatient and intrapatient variability (12).
The Compensatory Reserve Index (CRI) represents a new paradigm for measuring the physiological reserve of the integrated cardiopulmonary mechanisms (e.g., tachycardia, vasoconstriction, breathing) that compensate for reduced central blood volume (13, 14). Advanced sensor technologies such as photoplethysmography enable noninvasive recording of analog arterial waveforms (15). Using a model that induces stepwise reduction of central blood volume in young, healthy volunteers through application of negative pressure to the lower body (lower body negative pressure [LBNP]), feature-extraction and machine-learning techniques were used to reveal subtle changes in waveform features associated with declining volume. This approach enables simultaneous abstraction and normalization of various characteristics of the arterial waveform. As such, the CRI aims to reflect the capacity of all factors contributing to physiological compensation, including compensatory reflexes, various muscle contractions, and respiration, among others. Compensatory Reserve Index values range from 0 (complete decompensation) to 1 (full compensatory reserve available). The device itself is compact and light, can be placed on the patient’s finger, and performs the test within 30 s, making the measurement of CRI theoretically feasible in almost any setting. The approach was designed to prospectively identify ongoing loss of central blood volume and thus estimate the point at which an individual will experience hemodynamic decompensation (onset of shock), well in advance of changes in standard or “legacy” vital signs (14). The CRI has been shown to correlate with central blood volume changes in human subjects under laboratory conditions (14, 16, 17); however, few published data exist regarding its use in other experimental models or its ability to detect actual blood loss.
The current investigation represents the first effort to apply a small pulse oximeter unit to test the CRI during controlled blood loss scenarios in human subjects.
The purpose of the study was to test the hypothesis that a novel noninvasive CRI monitoring algorithm would demonstrate greater sensitivity and specificity compared with standard vital signs for identifying patients with minimal blood loss, thus enabling appropriate measures to be taken before more substantial blood loss has occurred.
The protocol for this study was reviewed and approved by the institutional review board of the Israeli Defense Forces and conducted in accordance with the approved protocol.
Voluntary blood donation sessions performed by the Israeli National Blood Service at different military bases were attended by the researchers. Subjects eligible for blood donation according to the National Blood Service’s criteria were approached by the researchers, received an oral and written explanation of the study, and, if they expressed a wish to participate, provided informed consent.
Test subjects underwent blood loss of 450 mL after completion of a medical questionnaire as part of routine blood donation under the auspices of the Israeli National Blood Service. Heart rate, blood pressure, pulse oximetry, and CRI were collected by the researchers before and immediately after donation.
Control subjects were a randomly selected group of similar age, military status, and profession who did not intend to donate blood. They were approached by the researchers, received an oral and written explanation of the study, and, if they expressed a wish to participate, provided informed consent. Data for control subjects were collected in a similar setting and across similar time intervals without concomitant blood donation.
Sample size calculations were complicated by the absence of an accepted variance for the CRI measurement. Based on experimentation with the device and the available literature, our best estimate was that the SD would be roughly 0.2 CRI units, equal across the before and after groups. Our minimum detectable difference of interest was 0.1 CRI units. For reasons of convenience, we planned an unbalanced study with 15% of the subjects in the non–blood donation arm. Given these criteria, we estimated that we would need 280 subjects, with approximately 243 in the donation group and 37 in the nondonation group.
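A calculation of this kind can be sketched with the standard normal-approximation formula for an unbalanced two-sample comparison of means. The significance level (0.05, two-sided) and power (80%) below are assumptions for illustration, since the text does not state them:

```python
from math import ceil
from statistics import NormalDist

def two_sample_n(delta, sd, ratio, alpha=0.05, power=0.80):
    """Normal-approximation sample sizes for an unbalanced two-sample
    comparison of means; returns (n_large, n_small) with
    n_large ~= ratio * n_small."""
    z = NormalDist()
    z_a = z.inv_cdf(1 - alpha / 2)   # two-sided significance quantile
    z_b = z.inv_cdf(power)           # power quantile
    # Variance of the difference in means is sd^2 * (1/n_large + 1/n_small)
    n_small = ceil((1 + 1 / ratio) * (sd * (z_a + z_b) / delta) ** 2)
    return ceil(ratio * n_small), n_small

# 0.1 CRI-unit difference, SD 0.2, ~85:15 allocation as in the study
n_donors, n_controls = two_sample_n(delta=0.1, sd=0.2, ratio=0.85 / 0.15)
```

With these assumed inputs the formula gives a nondonation arm of about 37 subjects, in line with the study's planned control group.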
Vital signs and CRI collection
Blood pressure and heart rate were measured using an M2 Automatic Blood Pressure Monitor (Omron Healthcare Co. Ltd, Lake Forest, Ill). Pulse pressure was calculated as the difference between systolic and diastolic blood pressures, and SI was calculated as heart rate divided by systolic blood pressure. Each subject was assigned a personal data sheet on which all data were recorded.
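The two derived indices are simple arithmetic on the measured vitals; a minimal sketch with illustrative values:

```python
def pulse_pressure(sbp, dbp):
    """Pulse pressure (mmHg): systolic minus diastolic blood pressure."""
    return sbp - dbp

def shock_index(hr, sbp):
    """Shock index: heart rate divided by systolic blood pressure
    (normal range roughly 0.5-0.7)."""
    return hr / sbp

# Illustrative normal vitals: HR 80 beats/min, BP 120/80 mmHg
pp = pulse_pressure(120, 80)   # 40 mmHg
si = shock_index(80, 120)      # ~0.67, within the normal range
```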
The CRI was measured using a single CipherSensor pulse oximeter device (Flashback Technologies Inc, Boulder, Colo). As detailed previously (14, 16, 17), state-of-the-art feature-extraction and machine-learning techniques were used to collectively process the arterial waveforms. The CRI algorithm estimates the remaining proportion of physiological reserve available to compensate for changes in effective circulating blood volume by comparing individual waveforms with a large library of reference waveforms previously obtained from more than 200 healthy humans (aged 18–55 years) who underwent progressive central hypovolemia to the point of presyncope induced by LBNP. These LBNP protocols, carried to hemodynamic decompensation, formed the framework for a reference CRI that provides a linear estimate of LBNP during progressive stepwise central hypovolemia (16). Robust CRI estimates require assessment of the entire waveform and comparison with the reference waveforms. The estimated CRI value corresponds to the CRI value of the most similar reference waveform in the library set (Fig. 1). For clinical simplicity, the CRI was normalized on a scale of 1 to 0 (100% to 0%), where 1 reflects the maximum capacity of physiological mechanisms (e.g., baroreflexes, respiration) to compensate for reduced central blood volume, and 0 represents imminent cardiovascular instability and decompensation. Values between 1 and 0 indicate the proportion of compensatory reserve remaining. Conceptually, CRI is the following quantity:
CRI = 1 - (BLV / BLVHDD), where:
* BLV is the current blood loss volume of the patient
* BLVHDD is the blood loss volume at which the patient will enter hemodynamic decompensation
The model calculates the first CRI value estimate after 30 heartbeats of initialization and then, in real time, provides a new CRI estimate after every subsequent heartbeat. The model has been shown to be 96.5% accurate in predicting the estimated amount of reduced central blood volume (14), with a correlation coefficient of 0.89 between predicted and actual LBNP level for hemodynamic decompensation generated from 184 human subjects (17).
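The library-matching step described above can be illustrated with a toy nearest-neighbor lookup. The reference library, the feature representation, and the Euclidean distance metric here are illustrative assumptions, not the proprietary algorithm:

```python
import math

# Hypothetical reference library: each entry pairs a normalized waveform
# feature vector with the CRI value recorded when it was observed.
REFERENCE_LIBRARY = [
    ([0.90, 0.40, 0.10], 1.00),  # full compensatory reserve
    ([0.70, 0.55, 0.25], 0.60),
    ([0.45, 0.70, 0.50], 0.25),
    ([0.20, 0.85, 0.75], 0.00),  # imminent decompensation
]

def estimate_cri(waveform_features):
    """Return the CRI of the most similar reference waveform,
    judged by Euclidean distance over the feature vector."""
    def dist(entry):
        return math.dist(entry[0], waveform_features)
    return min(REFERENCE_LIBRARY, key=dist)[1]
```

In the real device this comparison runs against the full waveform library after every heartbeat, once the 30-beat initialization window has filled.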
Monitoring versus triage capabilities
Two separate analyses were performed. The first examined differences between two consecutive measurements in the test group, as well as the percentage of change in the test and control groups (monitoring). The second compared a single point-in-time measurement between the test and control groups (triage).
Summary statistics are presented as means and 95% confidence intervals (95% CIs). The chi-square test was used to compare categorical data. The Student t test for independent groups was used to compare means for data with a normal distribution, whereas the Mann-Whitney-Wilcoxon test was used to compare oxygen saturation values because of their highly skewed distribution.
Receiver operator characteristic (ROC) curves were plotted using the data collected after either donation or the appropriate control time interval, and areas under the curves (AUCs) were compared. An ROC curve was also plotted using the differences between CRI measured before and after blood donation or the appropriate time interval. Several cutoff points were chosen post hoc to compare sensitivity, specificity, positive predictive value, negative predictive value, and positive and negative likelihood ratios. The probabilities of observing chance effects on the dependent variables of interest are presented as exact P values.
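The AUC comparison can be reproduced in miniature: the AUC below is computed with the rank-sum (Mann-Whitney) formulation, which equals the area under the empirical ROC curve. The data are synthetic, chosen only to show the mechanics:

```python
def roc_auc(positives, negatives):
    """AUC via the Mann-Whitney U statistic: the probability that a
    randomly chosen positive scores higher than a randomly chosen
    negative (ties count one half)."""
    wins = sum(
        1.0 if p > n else 0.5 if p == n else 0.0
        for p in positives
        for n in negatives
    )
    return wins / (len(positives) * len(negatives))

# Synthetic percentage decline in CRI: donors vs. controls
donor_decline   = [22, 18, 15, 30, 5, 25]   # blood loss group
control_decline = [3, -4, 6, 1]             # no blood loss

auc = roc_auc(donor_decline, control_decline)
```

A score that completely separates the groups yields an AUC of 1.0; one that carries no information yields 0.5.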
Statistical analyses were performed using R: A Language and Environment for Statistical Computing (R Foundation for Statistical Computing, Vienna, Austria) (18, 19).
Complete data were collected from 230 test subjects after blood donation and 34 control subjects. Demographic data were similar between the test and control groups (Table 1). Measurements taken before and after blood donation in the test group are presented in Table 2. Mean CRI in the test group was 0.74 before blood donation. After blood donation, the mean CRI dropped by 16% to 0.61 (P < 0.01), and SI rose by 7% (before donation, 0.67; after donation, 0.71; P < 0.01). Similarly, systolic blood pressure decreased by 3% (before donation, 124 mmHg; after donation, 119 mmHg; P < 0.01). Changes in diastolic blood pressure, heart rate, and oxygen saturation were not statistically significant.
Analysis of the difference between measurements before and after blood donation (Table 3) revealed CRI to be the only index that displayed a greater change in the test group than in the control group (test group, 16% decrease; control group, 4% increase; P < 0.01). The 3% reduction in systolic blood pressure in the test group was statistically indistinguishable from the 2% reduction measured in the control group (P = 0.33). The ROC curves for the percentage of change in CRI, heart rate, systolic and diastolic blood pressure, SI, and pulse pressure were plotted (Fig. 2). The AUCs were 0.81 (95% CI, 0.74–0.89) for the difference in CRI, 0.56 (95% CI, 0.48–0.65) for the difference in heart rate, 0.53 (95% CI, 0.44–0.61) for the difference in systolic blood pressure, and 0.55 (95% CI, 0.45–0.65) and 0.58 (95% CI, 0.49–0.66) for the differences in pulse pressure and SI, respectively.
To determine the potential value of CRI as a triage adjunct (i.e., when taken once, at a single point in time), a separate analysis was performed of the data collected only after blood donation or after a suitable time interval had elapsed. Receiver operator characteristic curves were plotted for all indices with regard to blood loss (donation) (Fig. 3). The AUCs for detecting mild blood loss were 0.84 (95% CI, 0.77–0.9) for CRI, 0.73 (95% CI, 0.65–0.82) for heart rate, 0.64 (95% CI, 0.54–0.74) for SI, 0.60 (95% CI, 0.5–0.7) for systolic blood pressure, 0.62 (95% CI, 0.52–0.71) for diastolic blood pressure, 0.51 (95% CI, 0.41–0.62) for pulse pressure, and 0.55 (95% CI, 0.45–0.65) for pulse oximetry. The AUC for CRI was higher than those of heart rate (P = 0.02), systolic blood pressure, SI, diastolic blood pressure, pulse pressure, and oxygen saturation (all P < 0.01). The AUC for heart rate was also higher than that for blood pressure (P < 0.05).
Table 4 presents several optional triage cutoff points and their respective yielded sensitivity, specificity, positive and negative predictive values, and likelihood ratios. For example, a CRI triage cutoff of 0.6 (the manufacturer’s yellow warning line) yielded a sensitivity of 0.43 and a specificity of 0.97.
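The likelihood ratios implied by a cutoff follow directly from its sensitivity and specificity; using the 0.6 cutoff reported above:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec); LR- = (1 - sens) / spec."""
    return sensitivity / (1 - specificity), (1 - sensitivity) / specificity

# CRI triage cutoff of 0.6: sensitivity 0.43, specificity 0.97 (Table 4)
lr_pos, lr_neg = likelihood_ratios(0.43, 0.97)
# lr_pos ~ 14.3: a CRI below the cutoff strongly supports blood loss.
# lr_neg ~ 0.59: a CRI above the cutoff only weakly argues against it.
```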
Hemorrhage is the direct cause of approximately 40% of all trauma-related deaths. It is the most prevalent cause of death from injuries sustained on the battlefield, accounting for approximately 90% of “potentially survivable” deaths (20). Although many advances in point-of-injury care have been achieved in recent years (21, 22), the mainstay of therapy for truncal exsanguination remains rapid evacuation to a facility with surgical capabilities. This scenario leaves the point-of-injury caregiver charged with the difficult task of promptly identifying bleeding casualties in need of rapid evacuation, without reliable differences in their clinical presentations or standard vital signs and without the benefit of imaging. The situation is further complicated in military settings, where scant resources and delayed evacuation times make accurate patient triage all the more important, while relatively inexperienced caregivers and hostile conditions make the technical aspects of triage more difficult. The main disadvantage of currently used vital signs is that they are relatively insensitive to small amounts or rates of blood loss (10, 23). In fact, changes in these indices appear only after substantial decompensation has occurred, when lifesaving interventions may be too late (1).
The American College of Surgeons Committee on Trauma sets the benchmark for field triage at 95% or greater sensitivity and 50% or greater specificity (24). Several models have been suggested in an attempt to improve patient triage and to allow for more informed decisions regarding urgency of care. However, current triage models are inadequate with respect to these requirements (4, 25). For example, Newgard et al. (24) correlated field triage data with Injury Severity Score (ISS), demonstrating that the best currently available triage model offers a maximal sensitivity of 90% with a corresponding specificity of 63%. This model was limited by the fact that attempts to increase sensitivity resulted in an unacceptable reduction in specificity (i.e., a sensitivity level of 95% reduced specificity to 19%). The outcome variable of our study was blood loss rather than ISS, making it difficult to compare our results with those of Newgard and coworkers. However, it is not unexpected that patients with readily recognizable severe trauma, translating to a high ISS, would be prioritized correctly by a triage model. In contrast, the most significant challenge to caregivers, particularly first responders, is the internally bleeding patient who is initially effectively compensating for blood loss. In this regard, the results of the present study are remarkable in that the CRI was relatively effective (demonstrating a sensitivity of 95% with a specificity of 20%, or a sensitivity of 90% with a specificity of 45%) in providing early recognition of relatively mild blood loss compared with standard vital signs. Although the loss of 450 mL of blood by itself has little clinical significance, it can serve as an early indicator of ongoing hemorrhage. Similar to existing triage models, CRI in the present study failed to reach the threshold set by the American College of Surgeons Committee on Trauma.
Nevertheless, one should bear in mind that these results describe a single measurement rather than a complex triage algorithm.
The hemorrhage model used in this study was intended to test the ability of CRI to detect minor blood loss. According to classic trauma care guidelines, changes in heart rate appear only after 750 mL of total blood loss, and changes in blood pressure appear only after more than 1,500 mL has been lost (26). Our results support the notion that heart rate, blood pressure, and the calculated SI fail to provide early, sensitive, and specific indices capable of detecting a 450-mL blood loss. In our study, CRI demonstrated a mean decline of 16% compared with predonation values, whereas changes in conventional vital signs were much less distinguishable. The SI rose by 7%, translating to only a slight deviation from the upper limit of clinical normality. Systolic blood pressure declined by only 3% (4 mmHg), which has little, if any, clinical significance. Analysis of the ROC curves for changes in conventional vital signs and CRI reveals changes in conventional vital signs (AUC, 0.53–0.58) to be substantially less indicative of blood loss than changes in CRI (AUC, 0.81) (Fig. 2). Table 4 provides several relevant values for the percentage of CRI decline and their corresponding sensitivity and specificity, with the optimal cutoff determined by local needs and capabilities. Describing the vital signs of prehospital trauma patients, Cooke and coworkers (2) reported that pulse pressure was lower in patients who died, whereas pulse rate, arterial pressure, and SpO2 were similar to those of patients who lived. In the present study, pulse pressure displayed the lowest ROC AUC (0.51) among our subjects. Taken together, these findings support the recent observation that pulse pressure and other standard vital signs are insensitive during the early compensatory phase of mild blood loss, whereas the CRI provides an early indication that bleeding is taking place (16).
As expected, analysis of data collected after blood donation (or after the appropriate time period) reveals higher blood pressure (systolic and diastolic) and heart rate in the test group compared with the control group, probably reflecting a higher sympathetic tone because of the utilization of compensatory mechanisms and consumption of physiologic reserves. Although statistically significant, these differences are of no clinical value.
Identifying a threshold below which a patient will be deemed in need of urgent care remains a question of clinical judgment. Selecting a higher threshold will ensure high sensitivity and lower the chance of delaying a patient in need of urgent care. However, such overtriage places more burden on evacuation and treatment capabilities and, given the lower specificity, will be less likely to assist in differentiating between several patients in need of urgent care. This becomes paramount when the care of one patient is likely to affect the care of other patients, as is expected to occur on the battlefield or following a multicasualty incident. Table 4 provides several tradeoff points between sensitivity, specificity, negative and positive predictive values, and likelihood ratios for different thresholds of CRI. An appropriate threshold will depend on the prevalence of significant injury in the population of interest and should not necessarily be based on our laboratory-like study conditions.
Our study had several limitations. The CRI is an experimental technology that has not been validated in bleeding patients. Our experiment was conducted under optimal conditions on young healthy volunteers in a (generally) stress- and pain-free environment. The effect of stress levels, inherent to injury, on CRI measurements is not addressed in this study. Our study did not include field use of CRI and thus cannot indicate whether this measurement is feasible or reproducible in field conditions. As such, extrapolating our results to more common clinical comorbidities (e.g., diabetes, hypertension, beta-blocker therapy, arteriosclerosis) requires further study.
The CRI was better than standard indices in detecting mild blood loss. Compensatory Reserve Index monitoring may allow for early detection of casualty deterioration, whereas a single measurement of CRI may enable more accurate triage in comparison with the other indices obtained from standard vital signs.
As a non–operator-dependent, easily obtainable, rapid measurement, the CRI may prove an important tool when monitoring a patient or performing casualty triage. Further studies are required to determine whether CRI measurements are feasible and accurate in actual trauma patients.
1. Convertino VA, Ryan KL, Rickards CA, Salinas J, McManus JG, Cooke WH, Holcomb JB: Physiological and medical monitoring for en route care of combat casualties. J Trauma 64(Suppl 4):S342–S353, 2008.
2. Cooke WH, Salinas J, Convertino VA, Ludwig DA, Hinds D, Duke JH, Moore FA, Holcomb JB: Heart rate variability and its association with mortality in prehospital trauma patients. J Trauma 60(2):363–370, 2006.
3. King DR, Ogilvie MP, Pereira BM, Chang Y, Manning RJ, Conner JA, Schulman CI, McKenney MG, Proctor KG: Heart rate variability as a triage tool in patients with trauma during prehospital helicopter transport. J Trauma 67(3):436–440, 2009.
4. Lin G, Becker A, Lynn M: Do pre-hospital trauma alert criteria predict the severity of injury and a need for an emergent surgical intervention? Injury 43(9):1381–1385, 2012.
5. Mackenzie CF, Hu P, Sen A, Dutton R, Seebode S, Floccare D, Scalea T: Automatic pre-hospital vital signs waveform and trend data capture fills quality management, triage and outcome prediction gaps. AMIA Annu Symp Proc (Nov 9):318–322, 2008.
6. Chen L, Reisner AT, Gribok A, McKenna TM, Reifman J: Can we improve the clinical utility of respiratory rate as a monitored vital sign? Shock 31(6):574–580, 2009.
7. McDonough KH, Giaimo M, Quinn M, Miller H: Intrinsic myocardial function in hemorrhagic shock. Shock 11(3):205–210, 1999.
8. Soller BR, Zou F, Ryan KL, Rickards CA, Ward K, Convertino VA: Lightweight noninvasive trauma monitor for early indication of central hypovolemia and tissue acidosis: a review. J Trauma Acute Care Surg 73(2 Suppl 1):S106–S111, 2012.
9. Bruijns SR, Guly HR, Bouamra O, Lecky F, Lee WA: The value of traditional vital signs, shock index, and age-based markers in predicting trauma mortality. J Trauma Acute Care Surg 74(6):1432–1437, 2013.
10. Birkhahn RH, Gaeta TJ, Terry D, Bove JJ, Tloczkowski J: Shock index in diagnosing early acute hypovolemia. Am J Emerg Med 23(3):323–326, 2005.
11. Apodaca AN, Morrison JJ, Spott MA, Lira JJ, Bailey J, Eastridge BJ, Mabry RL: Improvements in the hemodynamic stability of combat casualties during en route care. Shock 40(1):5–10, 2013.
12. Ryan KL, Rickards CA, Ludwig DA, Convertino VA: Tracking central hypovolemia with ECG in humans: cautions for the use of heart period variability in patient monitoring. Shock 33(6):583–589, 2010.
13. Convertino VA: Blood pressure measurement for accurate assessment of patient status in emergency medical settings. Aviat Space Environ Med 83(6):614–619, 2012.
14. Convertino VA, Moulton SL, Grudic GZ, Rickards CA, Hinojosa-Laborde C, Gerhardt RT, Blackbourne LH, Ryan KL: Use of advanced machine-learning techniques for noninvasive monitoring of hemorrhage. J Trauma 71(Suppl 1):S25–S32, 2011.
15. Chen L, Reisner AT, Gribok A, Reifman J: Is respiration-induced variation in the photoplethysmogram associated with major hypovolemia in patients with acute traumatic injuries? Shock 34(5):455–460, 2010.
16. Convertino VA, Grudic GZ, Mulligan J, Moulton S: Estimation of individual-specific progression to impending cardiovascular instability using arterial waveforms. J Appl Physiol 115(8):1196–1202, 2013.
17. Moulton SL, Mulligan J, Grudic GZ, Convertino VA: Running on empty? The compensatory reserve index. J Trauma Acute Care Surg 75(6):1053–1059, 2013.
18. R Core Team: R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing, 2013.
19. Robin X, Turck N, Hainard A, Tiberti N, Lisacek F, Sanchez JC, Muller M: pROC: an open-source package for R and S+ to analyze and compare ROC curves. BMC Bioinformatics 12:77, 2011.
20. Eastridge BJ, Mabry RL, Seguin P, Cantrell J, Tops T, Uribe P, Mallett O, Zubko T, Oetjen-Gerdes L, Rasmussen TE, et al.: Death on the battlefield (2001–2011): implications for the future of combat casualty care. J Trauma Acute Care Surg 73(6 Suppl 5):S431–S437, 2012.
21. Blackbourne LH, Baer DG, Eastridge BJ, Kheirabadi B, Bagley S, Kragh JF Jr, Cap AP, Dubick MA, Morrison JJ, Midwinter MJ, et al.: Military medical revolution: prehospital combat casualty care. J Trauma Acute Care Surg 73(6 Suppl 5):S372–S377, 2012.
22. Hooper T, Nadler R, Badloe J, Butler F, Glassberg E: Implementation and execution of Military Forward Resuscitation Programs. Shock 41(Suppl 1):90–97, 2014.
23. Scully CG, Selvaraj N, Romberg FW, Wardhan R, Ryan J, Florian JP, Silverman DG, Shelley KH, Chon KH: Using time-frequency analysis of the photoplethysmographic waveform to detect the withdrawal of 900 mL of blood. Anesth Analg 115(1):74–81, 2012.
24. Newgard CD, Hsia RY, Mann NC, Schmidt T, Sahni R, Bulger EM, Wang NE, Holmes JF, Fleischman R, Zive D, et al.: The trade-offs in field trauma triage: a multiregion assessment of accuracy metrics and volume shifts associated with different triage strategies. J Trauma Acute Care Surg 74(5):1298–1306, 2013.
25. Horne S, Vassallo J, Read J, Ball S: UK triage–an improved tool for an evolving threat. Injury 44(1):23–28, 2013.
26. American College of Surgeons: Advanced Trauma Life Support for Doctors, 9th ed. Chicago, IL: American College of Surgeons, 2012.
Keywords: Compensatory reserve index; prehospital; triage; hemorrhage
© 2014 by the Shock Society