
A Novel Mobile Phone Application for Pulse Pressure Variation Monitoring Based on Feature Extraction Technology: A Method Comparison Study in a Simulated Environment

Desebbe, Olivier MD; Joosten, Alexandre MD; Suehiro, Koichi MD, PhD; Lahham, Sari BS; Essiet, Mfonobong MS; Rinehart, Joseph MD; Cannesson, Maxime MD, PhD

doi: 10.1213/ANE.0000000000001282
Technology, Computing, and Simulation: Research Report

BACKGROUND: Pulse pressure variation (PPV) can be used to assess fluid status in the operating room. This measurement, however, is time consuming when done manually and unreliable through visual assessment. Moreover, its continuous monitoring requires the use of expensive devices. Capstesia™ is a novel Android™/iOS™ application, which calculates PPV from a digital picture of the arterial pressure waveform obtained from any monitor. The application identifies the peaks and troughs of the arterial curve, determines maximum and minimum pulse pressures, and computes PPV. In this study, we compared the accuracy of PPV generated with the smartphone application Capstesia (PPVapp) against the reference method that is the manual determination of PPV (PPVman).

METHODS: The Capstesia application was loaded onto a Samsung Galaxy S4™ phone. A physiologic simulator including PPV was used to display arterial waveforms on a computer screen. Data were obtained with different sweep speeds (6 and 12 mm/s) and randomly generated PPV values (from 2% to 24%), pulse pressures (30, 45, and 60 mm Hg), heart rates (60–80 bpm), and respiratory rates (10–15 breaths/min) on the simulator. Each metric was recorded 5 times at an arterial height scale X1 (PPV5appX1) and 5 times at an arterial height scale X3 (PPV5appX3). Reproducibility of PPVapp and PPVman was determined from the 5 pictures of the same hemodynamic profile. The effect of sweep speed, arterial waveform scale (X1 or X3), and number of images captured was assessed by Bland-Altman analysis. The measurement error (ME) was calculated for each pair of data. A receiver operating characteristic curve analysis determined the ability of PPVapp to discriminate a PPVman > 13%.

RESULTS: Four hundred eight pairs of PPVapp and PPVman were analyzed. The reproducibility of PPVapp and PPVman was 10% (interquartile range, 7%–14%) and 6% (interquartile range, 3%–10%), respectively, allowing a threshold ME of 12%. The overall mean bias for PPVappX1 was 1.1% within limits of −1.4% (95% confidence interval [CI], −1.7 to −1.1) to +3.5% (95% CI, +3.2 to +3.8). Averaging 5 values of PPVappX1 with a sweep speed of 12 mm/s resulted in the smallest bias (+0.6%) and the best limits of agreement (±1.3%). ME of PPVapp was <12% whenever 3, 4, or 5 pictures were taken to average PPVapp. The best predictive value for PPVapp to detect a PPVman > 13% was obtained for PPVappX1 by averaging 5 pictures showing a PPVapp threshold of 13.5% (95% CI, 12.9–15.2) and a receiver operating characteristic curve area of 0.989 (95% CI, 0.963–0.998) with a sensitivity of 97% and a specificity of 94%.

CONCLUSIONS: Our findings show that the Capstesia PPV calculation is a dependable substitute for standard manual PPV determination in a highly controlled environment (simulator study). Further studies are warranted to validate this mobile feature extraction technology to predict fluid responsiveness in real conditions.

Supplemental Digital Content is available in the text. Published ahead of print May 3, 2016.

From the *Department of Anesthesiology, University of California at Irvine School of Medicine, Orange, California; Department of Anesthesiology and Intensive Care, Clinique de la Sauvegarde, Lyon, France; Department of Anesthesiology and Perioperative Care, CUB ERASME, Free University of Brussels, Brussels, Belgium; §Department of Anesthesiology and Perioperative Care, University of California Irvine, Orange, California; and Department of Anesthesiology and Perioperative Medicine, David Geffen School of Medicine at UCLA, Los Angeles, California.

Koichi Suehiro, MD, PhD, is currently affiliated with the Department of Anesthesiology, Osaka City University Graduate School of Medicine, Osaka, Japan.

Accepted for publication February 17, 2016.


Funding: None.

Conflict of Interest: See Disclosures at the end of the article.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website.

Reprints will not be available from the authors.

Address correspondence to Olivier Desebbe, MD, Department of Anesthesiology and Intensive Care, Clinique de la Sauvegarde, 29 av des Sources, 69009 Lyon, France. Address e-mail to oldesebbe@yahoo.com.

Invasive arterial pulse pressure variation (PPV) is based on the cardiopulmonary interactions in mechanically ventilated patients under general anesthesia.1 In specific settings,2 PPV can predict fluid responsiveness (FR).3 Although various techniques can be used to measure PPV at the bedside without specialized monitoring devices, these techniques cannot discriminate between responders and nonresponders to volume expansion.4 Moreover, visual determination of PPV is not reliable,5 and thus, PPV measurement requires manual calculation, a dedicated monitoring system, or additional devices. These approaches are often time consuming, not always available, and costly. A new Android™/iOS™ application called Capstesia™ (GalenicApp, Vitoria, Spain), running on commercially available mobile phones, calculates PPV using a digital photograph of the arterial waveform displayed by any monitor. The application measures PPV (pulse pressure variation calculated by the Capstesia application [PPVapp]) by detecting peaks and troughs of the arterial curve, but it has not been tested against manual calculation of PPV (PPVman).

The goal of our study was to assess reproducibility, accuracy, and precision of PPVapp when compared with PPVman in a simulation environment by altering hemodynamic values, components of the arterial waveform (sweep speed and arterial scale), and the number of values to average the final PPVapp value.


METHODS

Description of the Smart Phone Application

Figure 1

Figure 2

Capstesia is a new smart phone application (version 1.1.1) running on iOS or Android. Launching the application opens the camera in focus mode. The entire monitor screen is photographed, and a green box prompts the user to crop the image so that it includes only the arterial pressure wave. The cropped picture of the arterial waveform is adjusted to exclude any other trace, and 10 arterial peaks are selected. The picture is then sent to proprietary software through WiFi, and PPV is determined by digitalization of the arterial waveform. Of note, the application's PPV calculation does not require other hemodynamic data. The result is displayed as a PPV value (PPVapp). The corresponding file on the phone provides access to the screen picture, the cropped image, and the scanned arterial pressure waveform (Fig. 1). This scan displays circles placed on peak arterial values and arrows placed on minimal arterial values. Figure 2 shows the different steps required to obtain a PPVapp value (Supplemental Digital Content, Video).


Description of the PPV Generated by the Hemodynamic Simulator

The arterial waveforms were displayed by a hemodynamic simulator on a computer screen (Dell™, Round Rock, TX). This simulator has been described elsewhere6,7 and was previously used as a reference for visual estimation.8 The display mode allows setting of the following hemodynamic values: systolic and diastolic arterial pressure (SAP and DAP, respectively), heart rate (HR), central venous pressure, systolic and diastolic pulmonary pressure, respiratory rate (RR), tidal volume, and PPV. Briefly, the PPV appearance of the waveform is generated in 3 steps. First, the length of the respiratory cycle (equal to 60/RR) is determined in seconds. Second, once the cycle length is known, a sine wave is extrapolated over the length of the cycle, going from 0 to 1 and back; the sine value at any point on the wave is subtracted from 1, this value is multiplied by the percent PPV output by the simulator, and the waveform height at that point is reduced by the resulting proportion, creating systolic pressure variation. Finally, the baseline of the waveform is modulated in the same way, but at only 20% of the height effect. The net modifications result in a smooth graphical waveform that, when measured, yields the PPV dictated by the simulator.
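The 3-step modulation described above can be sketched in Python. This is a minimal illustration under our own reading of the description; the function names, the time-based sampling convention, and the exact form of the baseline modulation are our assumptions, not the simulator's actual code:

```python
import math

def respiratory_scale(t, rr, ppv_percent):
    """Height-scaling factor applied to the arterial waveform at time t (s).

    Sketch of the 3 steps described in the text: the respiratory cycle
    lasts 60/RR seconds (step 1); a sine envelope rises 0 -> 1 -> 0 over
    the cycle, and the waveform height is reduced by (1 - sine) x PPV%
    (step 2).
    """
    cycle_s = 60.0 / rr                    # step 1: cycle length in seconds
    phase = (t % cycle_s) / cycle_s        # position within the cycle, 0..1
    envelope = math.sin(math.pi * phase)   # step 2: sine going 0 -> 1 -> 0
    return 1.0 - (1.0 - envelope) * ppv_percent / 100.0

def baseline_scale(t, rr, ppv_percent):
    """Step 3 (our interpretation): the baseline is modulated the same
    way, but at only 20% of the height effect."""
    return 1.0 - (1.0 - respiratory_scale(t, rr, ppv_percent)) * 0.20
```

For example, with RR = 10/min and a 20% PPV setting, the height scale swings between 0.80 (end of cycle, envelope 0) and 1.00 (mid-cycle, envelope 1), while the baseline swings only between 0.96 and 1.00.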

The arterial waveform in the display mode is specifically dependent on the following variable settings: SAP, DAP, HR, RR, and PPV. The sweep speed and scale of the arterial pressure waveform can also be adjusted. Because the arterial waveform is generated by the simulator software, there is no time variability of the waveform shape. A sample screen from the hemodynamic simulator is presented in Appendix 1.


Study Protocol

The study protocol was devised to assess the repeatability, accuracy, and precision of PPVapp compared with PPVman. The smart phone was fastened to a tripod at a fixed distance (0.6 m) from the simulator screen. A Samsung Galaxy S4™ (Daegu, South Korea) with Android version 4.4.2 and a camera resolution of 13 megapixels was used. A luminosity of 105 lux was maintained throughout the experiment, without use of the camera zoom function. A Dell monitor (14.5 inches wide × 12 inches high) displayed hemodynamic data from the simulator. Twenty-four series of measurements were tested across 17 PPV values predefined by the simulator (2%, 4%, 5%, 7%, 8%, 9%, 10%, 11%, 12%, 13%, 14%, 15%, 16%, 17%, 19%, 21%, and 24%). For each series, a combination of the following hemodynamic variables and arterial waveform settings was applied on the simulator: SAP (90 or 120 mm Hg), DAP (45 or 60 mm Hg), HR (60 or 80 bpm), RR (10 or 15 per minute), sweep speed of the arterial waveform (6 or 12 mm/s, to obtain at least 10 arterial peaks on the computer screen), and height of the arterial waveform (nonoptimized scale [X1] or optimized scale [X3]).

Nonoptimized (PPVappX1) and optimized scale PPVapp (PPVappX3) were recorded as either 1 reading (PPV1app) or the average of 2, 3, 4, or 5 values (PPV5app) within the same hemodynamic profile. The number of attempts required to obtain an acceptable PPVapp value (defined as a scan showing circles placed on peak values and arrows placed on minimal arterial values) and, for the first 200 PPVapp determinations, the time between the snapshot and the displayed value were also recorded. PPVman was considered the reference method against which PPVapp was compared. PPVman was calculated by measuring the amplitude of the maximum and the minimum pulse pressures during a respiratory cycle on the screen capture from the monitor, obtained immediately after the photograph was taken to generate the PPVapp. Therefore, 1 PPVman was calculated for each PPVapp; for example, PPVman was calculated 5 times for PPV5app. This calculation was done off-line by an observer (AJ) blinded to the results of PPVapp.
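For illustration, PPVman as described above corresponds to the standard pulse pressure variation formula (the maximal minus the minimal pulse pressure within one respiratory cycle, divided by their mean), sketched here in Python; the function and variable names are ours:

```python
def ppv_manual(pp_max, pp_min):
    """Manual PPV (%): the difference between the maximal and minimal
    pulse pressures over one respiratory cycle, divided by their mean.
    This is the standard PPV definition; pp_max and pp_min are in mm Hg."""
    return 100.0 * (pp_max - pp_min) / ((pp_max + pp_min) / 2.0)
```

For example, a maximal pulse pressure of 60 mm Hg and a minimal pulse pressure of 50 mm Hg yield a PPV of about 18.2%.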


Statistical Analyses

Distributions of values were evaluated by a Kolmogorov-Smirnov test. Values were expressed as mean (±SD) or median (interquartile range) according to their distribution.


Repeatability of PPVapp and PPVman

Repeatability was assessed as precision error, measured by calculating the variation of 5 PPV values within the same hemodynamic profile. Precision error (%) at each time point was calculated as:

precision error = (2 × CV/√n) × 100 (Equation 1)

where CV is the coefficient of variation of each measurement (CV = SD/mean) and n is the number of replications kept for each measurement.9 To evaluate the maximal variation of PPVapp and PPVman (i.e., the maximal change because of random error with a probability of 95%), we calculated the least significant change (LSC) of PPV proposed by Cecconi et al.,9 where:

LSC = precision error × √2

Mean and SD or median and interquartile range of precision error and least significant change were then calculated according to the mean values of each time plot.
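The precision error and least significant change described above can be sketched in Python. We assume the standard forms from Cecconi et al. (precision error = 2 × CV/√n; LSC = precision error × √2), since the typeset equations are not reproduced in this version of the article:

```python
import math
import statistics

def precision_error(values):
    """Precision error (%) of n replicate measurements: 2 x CV / sqrt(n),
    where CV = SD / mean (the form proposed by Cecconi et al., assumed
    here)."""
    cv = statistics.stdev(values) / statistics.mean(values)
    return 100.0 * 2.0 * cv / math.sqrt(len(values))

def least_significant_change(values):
    """LSC (%): the smallest change exceeding random error with a
    probability of 95%, i.e. precision error x sqrt(2)."""
    return precision_error(values) * math.sqrt(2.0)
```

For 5 replicate PPV readings of 9, 10, 11, 10, and 10%, this gives a precision error of about 6.3% and an LSC of about 8.9%.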


Agreement and Responsiveness

The agreement between the measurements obtained with PPVapp and those obtained with PPVman was determined using the coefficient of determination (R2) and the Bland-Altman method.10 If the mean difference between PPVapp and PPVman (bias) was normally distributed, the mean bias and limits of agreement (LOA; ±1.96 × SD of the bias) were calculated.11 In addition, the measurement error (ME) was computed for each set of data as follows12:

ME (%) = |PPVapp − PPVman| / [(PPVapp + PPVman) × 0.5] × 100

This calculation of ME is possible regardless of the distribution of the bias (PPVapp − PPVman), and ME is also affected by the range of mean PPV [(PPVapp + PPVman) × 0.5].13 Distribution of the ME was expressed as median (95% confidence interval [CI]). Because the ME depends on the precision of each technique,9 we calculated a posteriori the threshold ME value to accept a good agreement between PPVapp and PPVman according to the formula13:

threshold ME = √(precision PPVman² + precision PPVapp²) (Equation 2)

ME was calculated for each component of PPVapp: scale optimization (X1 or X3), number of PPVapp values averaged (mean of 2, 3, 4, or 5 PPVapp values), and sweep speed (6 or 12 mm/s). In addition, the relationship between PPVman and ME was assessed with the coefficient of determination R2.
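A minimal Python sketch of the per-pair ME and the Bland-Altman statistics described above. The exact typeset ME equation is not reproduced in this version of the article, so the per-pair form below (absolute difference over the pair mean) is our reading of the surrounding text, not a verified transcription:

```python
import statistics

def measurement_error(ppv_app, ppv_man):
    """Per-pair ME (%): the absolute difference between the two methods
    divided by the pair mean (PPVapp + PPVman) x 0.5 (assumed form)."""
    return 100.0 * abs(ppv_app - ppv_man) / (0.5 * (ppv_app + ppv_man))

def bland_altman(app_values, man_values):
    """Mean bias and 95% limits of agreement (bias +/- 1.96 x SD of the
    pairwise differences)."""
    diffs = [a - m for a, m in zip(app_values, man_values)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

For example, a pair with PPVapp = 11% and PPVman = 9% has a per-pair ME of 20%.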


Ability of PPVapp to Discriminate a PPVman > 13%

To assess the ability of PPVapp to identify a PPVman > 13%, receiver operating characteristic (ROC) curves were generated. The areas under the ROC curves were calculated for 3 different components of determining PPVapp (scale, sweep speed, and average) and compared as described previously.14 The Youden Index was determined for each ROC curve (the maximum difference between sensitivity and 1 − specificity).15 Ninety-five percent CI of the threshold PPVapp value was considered the gray zone.16
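The Youden index selection can be sketched in Python as follows. This is an illustrative re-implementation, not the MedCalc procedure actually used in the study; splitting the PPVapp readings into positives and negatives by the 13% PPVman cutoff is our assumption:

```python
def youden_cutoff(positives, negatives):
    """Return the cutoff maximizing Youden's J = sensitivity + specificity - 1.

    positives: PPVapp values from profiles where PPVman > 13%
    negatives: PPVapp values from profiles where PPVman <= 13%
    """
    best_t, best_j = None, -1.0
    for t in sorted(set(positives) | set(negatives)):
        sens = sum(v >= t for v in positives) / len(positives)  # true-positive rate
        spec = sum(v < t for v in negatives) / len(negatives)   # true-negative rate
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j
```

With perfectly separated groups (e.g., positives of 14, 15, 16 and negatives of 10, 11, 12), the optimal cutoff is 14 with J = 1.0.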


Sample Size Estimation

Because the precision of each PPV calculation (PPVman and PPVapp) was unknown before performing the experiment, we estimated a precision error of 20% (Equation 1) for both PPV calculations for sample size determination. Therefore, an acceptable ME would have an upper limit of the 95% CI of 28% according to Equation 2. Considering a potentially large distribution of the ME (SD of 25%) and a mean ME of 20%, we calculated that 41 pairs of data were needed per subgroup analysis (sweep speed, height of scale), where the required number of data pairs was derived from the expected mean ME and the SD of the ME. Because PPVapp was also averaged from 5 values, we needed at least 205 pictures per subgroup analysis.

All analyses were performed using MedCalc Statistical Software version 14.8.1 (MedCalc® Software bvba, Ostend, Belgium). P < 0.05 was considered as significant.


RESULTS

Two thousand one hundred pictures were recorded, of which 60 (3%) were excluded because of a scan processing error in the application. Eight hundred sixteen pairs of data (PPVapp versus PPVman) were ultimately evaluated.

Table 1

Table 2

Figure 3

Figure 4

Figure 5

The median time to obtain a PPVapp value in the app was 24 seconds (21–28 seconds). Figure 3 shows the number of PPVapp values according to the simulator settings. The precision error of PPVapp and PPVman was 10% (7%–14%) and 6% (3%–10%), respectively (Table 1). An acceptable threshold value for the ME between PPVapp and PPVman was then calculated at 12%. Mean values of PPVapp and the coefficient of determination between PPVapp and PPVman are presented in Table 2. The bias of PPVappX3 was not normally distributed. Figure 4 displays the Bland-Altman analysis for 1 value (Fig. 4A) and for 5 averaged values (Fig. 4B) of PPVapp at scale X1. The lowest ME was obtained with a sweep speed of 12 mm/s and the average of 5 values (ME = 6%; 95% CI, 5–10; Table 2). The upper limit of the 95% CI of the ME was <12% when 3, 4, or 5 pictures were obtained to average PPVapp at scale X1 (Appendix 2). There was a significant relationship between ME and PPVman (R2 = 0.38, P < 0.001; Fig. 5). Areas under the ROC curves for each type of PPVapp are presented in Appendix 3. The greatest area was obtained with PPV5appX1.


DISCUSSION

The main finding of this pilot study is that, in a highly controlled environment, PPVapp shows acceptable agreement with PPVman when at least 3 pictures are taken to average PPVapp at scale X1 (upper limit of the 95% CI of the ME <12%). The best accuracy is obtained with a sweep speed of 12 mm/s and 5 averaged PPVapp values. Second, with a low rate of unsuccessful scans (3%) and a short time to obtain a PPV value (24 seconds), PPVapp determination is feasible. Finally, a PPVapp threshold of 13.5% (gray zone, 12.9%–15.2%) is potentially able to discriminate FR (PPVman > 13%).

Cardiac output (CO) optimization has the potential to decrease postoperative complications17; however, CO measurement lacks reliability and is expensive.18 By predicting FR, PPV is an acceptable surrogate for CO optimization.19 Therefore, the promise of an easy-to-use pocket application capable of guiding fluid therapy is valuable. Also, by providing other advanced hemodynamic variables (CO and inotropy), this application calls into question the need to buy supplemental equipment for advanced monitoring; however, the accuracy of these advanced hemodynamic variables was not evaluated in this study. More generally, feature extraction technologies are becoming readily available in health care delivery and could soon be an essential tool for the anesthesiologist.20–22

Nevertheless, PPVapp determination requires a WiFi connection, and the picture needs careful attention: avoiding light glare, holding the smart phone parallel to the screen (otherwise the ratio between maximal and minimal pulse pressure may be distorted), and preventing image obstruction by other artifacts in the selected box. A confirmatory visualization of the processed scan also ensures that there are no misinterpreted or erroneous data (Fig. 2). Contrary to our assumptions, increasing the scale and decreasing the sweep speed worsened the LOA. These 2 modifications may have decreased the contrast between red pixels (representing the arterial waveform) and the black screen, decreased the definition of the waveform, and consequently altered the scan process. Averaging a greater number of PPVapp values increased the accuracy of the application, consistent with a study demonstrating that determinations of PPV averaged over 3 respiratory cycles were better than over 1.23 In particular, it has recently been shown that the ability to predict FR depends on the period over which PPV is averaged, with a greater interval worsening the results.4

In a 1999 meta-analysis on CO measurement, Critchley and Critchley13 introduced the notion of percentage error (PE) to propose an acceptable threshold derived from the LOA of the Bland-Altman analysis. The PE is based on the 95% confidence interval of the bias and on the mean CO (of both methods) of each data set. This value is therefore calculated after all the comparisons have been made. It provides a rough estimate against which other clinical study results can be compared at a level that most clinicians can use. However, PE has some intrinsic limits. First, PE does not consider the range of CO.24 Second, PE is based on the LOA, thus assuming that the bias between the 2 assessed methods is normally distributed. Notably, numerous recent method comparison studies did not test the distribution of the bias, with a risk of using inappropriate statistical tools (LOA, 95% CI, PE).25–27 Finally, PE is a value with no dispersion dimension. To overcome these 3 limits (considering the range of CO, nonnormal distribution of the bias, and dispersion of the measurement error between 2 techniques), we calculated the ME based on each individual set of data, as used in 199212 and also described in the meta-analysis by Critchley and Critchley.13 Noteworthy is their statement that one should calculate the "percentage error for each set of data rather than calculating a single percentage error from the averaged data."13 Interestingly, the ME depends on the value of PPVman (Fig. 5), indicating that unacceptably high ME values involved low PPVman values, for which the exact value has limited clinical relevance. It has also been proposed that the LOA be defined a priori.28 However, the LOA and the ME depend on the precision error of both methods. Therefore, if the precision error of each technique can be quantified, the interchangeability of 2 methods should be accepted if the 95% CI of the ME is equal to or less than the square root of the sum of both squared precisions [√(precision PPVman² + precision PPVapp²)].13
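As a worked check in Python: with the precision errors reported in this study (6% for PPVman, 10% for PPVapp), this interchangeability criterion reproduces the 12% threshold ME used in the analysis. The function name is ours:

```python
import math

def me_threshold(precision_ref, precision_test):
    """Acceptable ME limit for interchangeability of two methods: the
    square root of the sum of their squared precision errors (%)."""
    return math.sqrt(precision_ref**2 + precision_test**2)

# With the study's values: sqrt(6^2 + 10^2) = sqrt(136) ~ 11.7, rounded to 12%.
```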


LIMITATIONS OF THE STUDY

We tested PPVapp in an ideal simulation environment. The arterial waveform displayed by the simulator is consistent over time, without the effect of other physiologic variables (sympathetic tone, HR variability).29 We did not choose the PPV displayed by the simulator (PPVsim) as the reference method for 2 main reasons. First, PPVsim has never been validated. Second, PPVsim is determined according to a 20% variation between the top and the bottom of the arterial waveform; therefore, it can differ according to the dimensions of the screen on which it is displayed or the height of the arterial scale. To avoid any confounding bias between the generated PPVsim and the one actually displayed, recalculating PPVman for each displayed screenshot allowed us to achieve a robust comparison. However, because PPVman was calculated within 1 respiratory cycle, we did not specifically select 10 heart beats, contrary to the determination of PPVapp. These differences in the generation of PPVman and PPVapp may have altered the agreement between the 2 PPV determinations. Accordingly, the PPVapp manufacturer recommends selecting only 7 to 8 peaks to avoid pollution by low arterial frequencies, which could alter the peak and trough arterial values. Furthermore, PPVapp applicability may vary in an operating room environment, where light reflection is greater and the conditions of use may not be respected (e.g., holding the smart phone parallel to the screen). Finally, other computer monitors may produce different results because of variability in definition and resolution.


CONCLUSIONS

With a low precision error and narrow LOA compared with manual PPV, PPVapp could predict FR. We demonstrated that an arterial scale of X1 combined with an average of at least 3 pictures of the same screen provided the best conditions for obtaining a reliable PPVapp, especially at a sweep speed of 12 mm/s. Nonetheless, this application should be tested under real conditions with these settings.

Testing PPVapp during general anesthesia, and testing its ability to predict FR, is the next step to validate this tool before its wider use can be recommended.


APPENDIX 1

Example of a Screen Displaying a PPV Value of 16%

Figure

Notes: The sweep speed is 12 mm/s, allowing 10 peaks to be cropped to determine the PPVapp. PPVapp = pulse pressure variation calculated by the Capstesia application.

APPENDIX 2


APPENDIX 3

ROC Curves Comparing the Ability of PPVapp to Discriminate PPVman Below or Above 13%

Figure

CI = confidence interval; PPVapp = pulse pressure variation calculated by the Capstesia™ application; PPV1appX1 = pulse pressure variation displayed by the smart phone application from 1 value at scale X1; PPV5appX1 = average of 5 pulse pressure variation values displayed by the smartphone application at scale X1; PPV1appX3 = pulse pressure variation displayed by the smart phone application from 1 value at scale X3; PPV5appX3 = average of 5 pulse pressure variation values displayed by the smartphone application at scale X3; ROC = receiver operating characteristic curve.


DISCLOSURES

Name: Olivier Desebbe, MD.

Contribution: This author was the first author and helped design the study, conduct the study, collect and analyze the data, and prepare the manuscript.

Attestation: Olivier Desebbe approved the final manuscript, attests to the integrity of the original data and the analysis reported in this manuscript, and is the archival author.

Conflicts of Interest: Olivier Desebbe declares no conflicts of interest.

Name: Alexandre Joosten, MD.

Contribution: This author helped design the study, conduct the study, collect and analyze the data, and prepare the manuscript.

Attestation: Alexandre Joosten approved the final manuscript and attests to the integrity of the original data and the analysis reported in this manuscript.

Conflicts of Interest: Alexandre Joosten is a consultant for Edwards Lifesciences™.

Name: Koichi Suehiro, MD, PhD.

Contribution: This author helped design the study, conduct the study, analyze the data, and write the manuscript.

Attestation: Koichi Suehiro has seen the original study data, reviewed the analysis of the data, approved the final manuscript, and is the author responsible for archiving the study files.

Conflicts of Interest: Koichi Suehiro declares no conflicts of interest.

Name: Sari Lahham, BS.

Contribution: This author helped conduct the study and write the manuscript.

Attestation: Sari Lahham approved the final manuscript.

Conflicts of Interest: Sari Lahham declares no conflicts of interest.

Name: Mfonobong Essiet, MS.

Contribution: This author helped conduct the study and write the manuscript.

Attestation: Mfonobong Essiet approved the final manuscript.

Conflicts of Interest: Mfonobong Essiet declares no conflicts of interest.

Name: Joseph Rinehart, MD.

Contribution: This author helped conduct the study and write the manuscript.

Attestation: Joseph Rinehart approved the final manuscript.

Conflicts of Interest: Joseph Rinehart has an ownership interest in Sironis™ and is a consultant for Edwards Lifesciences™.

Name: Maxime Cannesson, MD, PhD.

Contribution: This author helped conduct the study and write the manuscript.

Attestation: Maxime Cannesson approved the final manuscript.

Conflicts of Interest: Maxime Cannesson is a consultant for Masimo™, Edwards Lifesciences™, and Covidien™. He has an ownership interest in Sironis™ and Gauss Surgical™. He receives research funding from Masimo™ and Edwards™. The Galenic App Company provided the Capstesia™ application to this author without any control over the use, implementation, or conduct of this study, or over its results.


RECUSE NOTE

Dr. Maxime Cannesson is the Section Editor for Technology, Computing, and Simulation for Anesthesia & Analgesia. This manuscript was handled by Dr. Steven L. Shafer, Editor-in-Chief, and Dr. Cannesson was not involved in any way with the editorial process or decision.


REFERENCES

1. Michard F, Boussat S, Chemla D, Anguel N, Mercat A, Lecarpentier Y, Richard C, Pinsky MR, Teboul JL. Relation between respiratory changes in arterial pulse pressure and fluid responsiveness in septic patients with acute circulatory failure. Am J Respir Crit Care Med 2000;162:134–8.
2. Maguire S, Rinehart J, Vakharia S, Cannesson M. Technical communication: respiratory variation in pulse pressure and plethysmographic waveforms: intraoperative applicability in a North American academic center. Anesth Analg 2011;112:94–6.
3. Yang X, Du B. Does pulse pressure variation predict fluid responsiveness in critically ill patients? A systematic review and meta-analysis. Crit Care 2014;18:650.
4. Lansdorp B, Lemson J, van Putten MJ, de Keijzer A, van der Hoeven JG, Pickkers P. Dynamic indices do not predict volume responsiveness in routine clinical practice. Br J Anaesth 2012;108:395–401.
5. Cannesson M, Le Manach Y, Hofer CK, Goarin JP, Lehot JJ, Vallet B, Tavernier B. Assessing the diagnostic accuracy of pulse pressure variations for the prediction of fluid responsiveness: a "gray zone" approach. Anesthesiology 2011;115:231–41.
6. Rinehart J, Alexander B, Le Manach Y, Hofer C, Tavernier B, Kain ZN, Cannesson M. Evaluation of a novel closed-loop fluid-administration system based on dynamic predictors of fluid responsiveness: an in silico simulation study. Crit Care 2011;15:R278.
7. Rinehart J, Lee C, Cannesson M, Dumont G. Closed-loop fluid resuscitation: robustness against weight and cardiac contractility variations. Anesth Analg 2013;117:1110–8.
8. Rinehart J, Islam T, Boud R, Nguyen A, Alexander B, Canales C, Cannesson M. Visual estimation of pulse pressure variation is not reliable: a randomized simulation study. J Clin Monit Comput 2012;26:191–6.
9. Cecconi M, Rhodes A, Poloniecki J, Della Rocca G, Grounds RM. Bench-to-bedside review: the importance of the precision of the reference technique in method comparison studies—with specific reference to the measurement of cardiac output. Crit Care 2009;13:201.
10. Bland JM, Altman DG. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986;1:307–10.
11. Bland JM, Altman DG. Agreement between methods of measurement with multiple observations per individual. J Biopharm Stat 2007;17:571–82.
12. Shoemaker WC, Wo CC, Bishop MH, Appel PL, Van de Water JM, Harrington GR, Wang X, Patil RS. Multicenter trial of a new thoracic electrical bioimpedance device for cardiac output estimation. Crit Care Med 1994;22:1907–12.
13. Critchley LA, Critchley JA. A meta-analysis of studies using bias and precision statistics to compare cardiac output measurement techniques. J Clin Monit Comput 1999;15:85–91.
14. Hanley JA, McNeil BJ. A method of comparing the areas under receiver operating characteristic curves derived from the same cases. Radiology 1983;148:839–43.
15. Youden WJ. Index for rating diagnostic tests. Cancer 1950;3:32–5.
16. Cannesson M, Aboy M, Hofer CK, Rehman M. Pulse pressure variation: where are we today? J Clin Monit Comput 2011;25:45–56.
17. Pearse RM, Harrison DA, MacDonald N, Gillies MA, Blunt M, Ackland G, Grocott MP, Ahern A, Griggs K, Scott R, Hinds C, Rowan K; OPTIMISE Study Group. Effect of a perioperative, cardiac output-guided hemodynamic therapy algorithm on outcomes following major gastrointestinal surgery: a randomized clinical trial and systematic review. JAMA 2014;311:2181–90.
18. Peyton PJ, Chong SW. Minimally invasive measurement of cardiac output during surgery and critical care: a meta-analysis of accuracy and precision. Anesthesiology 2010;113:1220–35.
19. Cannesson M. Arterial pressure variation and goal-directed fluid therapy. J Cardiothorac Vasc Anesth 2010;24:487–97.
20. Konig G, Holmes AA, Garcia R, Mendoza JM, Javidroozi M, Satish S, Waters JH. In vitro evaluation of a novel system for monitoring surgical hemoglobin loss. Anesth Analg 2014;119:595–600.
21. Pronovost PJ, Bo-Linn GW, Sapirstein A. From heroism to safe design: leveraging technology. Anesthesiology 2014;120:526–9.
22. Cannesson M, Tanabe M, Suffoletto MS, McNamara DM, Madan S, Lacomis JM, Gorcsan J III. A novel two-dimensional echocardiographic image analysis system using artificial intelligence-learned pattern recognition for rapid automated ejection fraction. J Am Coll Cardiol 2007;49:217–26.
23. Kim HK, Pinsky MR. Effect of tidal volume, sampling duration, and cardiac contractility on pulse pressure and stroke volume variation during positive-pressure ventilation. Crit Care Med 2008;36:2858–62.
24. Preiss D, Fisher J. A measure of confidence in Bland-Altman analysis for the interchangeability of two methods of measurement. J Clin Monit Comput 2008;22:257–9.
25. Yamada T, Tsutsui M, Sugo Y, Sato T, Akazawa T, Sato N, Yamashita K, Ishihara H, Takeda J. Multicenter study verifying a method of noninvasive continuous cardiac output measurement using pulse wave transit time: a comparison with intermittent bolus thermodilution cardiac output. Anesth Analg 2012;115:82–7.
26. Bubenek-Turconi SI, Craciun M, Miclea I, Perel A. Noninvasive continuous cardiac output by the Nexfin before and after preload-modifying maneuvers: a comparison with intermittent thermodilution cardiac output. Anesth Analg 2013;117:366–72.
27. Wagner JY, Sarwari H, Schön G, Kubik M, Kluge S, Reichenspurner H, Reuter DA, Saugel B. Radial artery applanation tonometry for continuous noninvasive cardiac output measurement: a comparison with intermittent pulmonary artery thermodilution in patients after cardiothoracic surgery. Crit Care Med 2015;43:1423–8.
28. Mantha S, Roizen MF, Fleisher LA, Thisted R, Foss J. Comparing methods of clinical measurement: reporting standards for Bland and Altman analysis. Anesth Analg 2000;90:593–602.
29. Akselrod S, Gordon D, Ubel FA, Shannon DC, Berger AC, Cohen RJ. Power spectrum analysis of heart rate fluctuation: a quantitative probe of beat-to-beat cardiovascular control. Science 1981;213:220–2.
© 2016 International Anesthesia Research Society