Patients undergoing anesthesia are monitored directly by observation of clinical signs and indirectly by viewing a display of the patient's physiological variables. The anesthesiologist is required to make rapid clinical decisions on the basis of complex information gained from different sources within the operating room (OR). This reliance on monitoring has increased as technology has progressed and as an increasing number of physiological variables are measured. The measured physiological variables are presented on the monitor display in a combination of visual formats (numeric, waveform, and trend) along with an auditory display of heart rate and oxygen saturation. Visual and auditory alarms denote when critical thresholds of physiological variables are breached. Despite the increase in the number of variables measured, very little information is available on how anesthesiologists interact with the monitor display. Humans have a limited ability to detect visual changes in the environment (change blindness); therefore, increasing the amount of information displayed can reduce the ability to detect changes.1 Understanding how anesthesiologists interact with these displays could inform monitor design, improve information transfer, and ultimately improve patient safety.
The tasks performed by anesthesiologists in the OR, including monitor observation, have been reported.2,3 These studies used a trained observer in the OR assessing anesthesiologist subjects while they performed routine anesthesia and vigilance tests, potentially affecting subject behavior. Video techniques to observe clinician behavior in the OR have also been reported, but again the observer/operator either was present in the OR4 or subjects were required to wear eye-tracking equipment, potentially disrupting the work environment.5
The purpose of this study was to covertly observe the natural behavior of anesthesiologists, to determine how frequently, and for how long, they observed the patient monitoring display during different times in the maintenance phase of anesthesia. The impact of activities—such as more than 1 anesthesia provider present in the OR, reading, and charting—was evaluated with respect to monitor use. This was a pilot study to test a new methodology and provide information on looking behavior to appropriately power future studies.
Approval for this study was obtained from the University of British Columbia Clinical Research Ethics Board and the Children's and Women's Health Centre of British Columbia Research Review Committee. Written informed consent was obtained from staff and resident anesthesiologists, who were recruited through departmental seminars and a departmental recruitment poster. Subjects consented to being videotaped with sound recording while conducting anesthesia in a designated OR during the study period. Other members of the OR staff were aware of the study (notification by e-mail and a sign displayed in the OR) but were not consented. Informed consent was not obtained from the patients because they were specifically excluded from the video field of view. OR staff and the subjects were permitted to terminate recording at any time by unplugging the video camera or covering it.
The study took place in 1 OR at British Columbia Children's Hospital over a 7-month period. For the first 2 months of the study, the video cameras were mounted in the OR, but no recordings were performed. This allowed for a period of acclimation to the equipment. Cases scheduled to last >40 minutes were then recorded on sequential days when a subject performed routine anesthesia in the designated OR on ASA 1 or 2 patients. An equal number of cases was recorded with the anesthesiologist working alone (solo-provider cases) and with a trainee or fellow present (dual-provider cases).
Two webcams were positioned in the OR, 1 (1.3 MP webcam, Logitech Rightlight™ Technology, Fremont, California) above the anesthesia monitor display panel (S/5™, Datex-Ohmeda, Madison, Wisconsin) and the other (Umax Astrapix 230, Techville, Inc., Dallas, Texas) fixed to the ceiling to provide a bird's eye view of the anesthesia machine and the subject. Indicator lights on the cameras were covered to ensure that subjects would not be aware of when they were being recorded. The bird's eye camera provided video recording only, and the anesthesia monitor camera provided video and audio. The audio and video data were captured on computers (MA7, Gateway Inc., Irvine, California, and MiniPC, AOpen, Taipei, Taiwan) positioned behind the anesthesia machine using video capture software (SuperVideoCap, V5.3, MySuperSoft.com). This software provided a continuous time stamp on both video recordings to facilitate synchronization. Recording was initiated remotely using an ad hoc wireless connection from outside the OR via a laptop computer (HP Omnibook 6000, Hewlett Packard, Palo Alto, California) and remote network access software (TightVNC, www.tightvnc.com). The recorded data were downloaded to a portable external hard drive from the 2 OR computers for further analysis after the end of the cases.
Video Processing and Coding
Three segments of video and audio recording, each 10 minutes long, were selected from each OR case (n = 60 total segments). The first segment (early maintenance) started 5 minutes after intubation or the completion of any regional anesthetic procedure. The last 10-minute segment (late maintenance) was taken immediately before the drapes were taken down and the middle 10-minute segment (mid-maintenance) was taken exactly half-way between the early and late maintenance segments. Video segmentation was performed using Ulead VideoStudio 9.0.SE (Ulead Systems Inc., Taipei, Taiwan) by an experienced anesthesiologist (Simon Ford) not involved in the coding. Coding was performed by an investigator who was not involved in the collection of data (Ashlee King). Segments were coded using Noldus Observer XT (Version 6.0, Noldus Information Technology, Wageningen, The Netherlands), for 3 clinician tasks (looking at the anesthesia monitor, charting [manual paper record], or reading other written material). The segments were viewed at normal playback speed, and when an instance of a particular behavior was detected, the playback was paused and reduced to frame-by-frame advancement (at a resolution of 30 frames per second). The coder advanced or reversed the video 1 frame at a time to find the first and last frames of the behavior. A second coder (Elina Birmingham) coded a subset of cases (6 in total) for monitor look durations, broken down by case and segment to measure consistency of coding. The short glances seen in monitor-viewing behavior were most likely to lead to interobserver error and therefore were the focus of our reliability assessment.
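The frame-accurate coding described above reduces to simple arithmetic: each glance, marked by its first and last frame at 30 frames per second, yields a duration, and the per-segment durations yield total look time, glance frequency, and mean glance duration. A minimal Python sketch; the function name and data layout are illustrative assumptions, not the study's actual software:

```python
FPS = 30  # video was coded at a resolution of 30 frames per second

def glance_stats(glances):
    """Summarize monitor glances for one 10-minute segment.

    `glances` is a list of (first_frame, last_frame) pairs marked by
    the coder; a duration includes both its first and last frame.
    Returns (total look time in s, glance count, mean glance in s).
    """
    durations = [(last - first + 1) / FPS for first, last in glances]
    total_look_time = sum(durations)
    glance_frequency = len(durations)
    mean_glance = total_look_time / glance_frequency if glance_frequency else 0.0
    return total_look_time, glance_frequency, mean_glance
```

For example, a 60-frame glance and a 30-frame glance give a total look time of 3 seconds over 2 glances, with a mean glance duration of 1.5 seconds.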
Intercoder reliability was assessed for monitor-looking behavior. Total look times were divided into 5-second intervals, and Krippendorff's alpha coefficient of reliability and percentage agreement were calculated for the 2 coders (http://dfreelon.org/utils/recalfront/recal2/).
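Both reliability statistics can be reproduced directly from the coders' interval-by-interval judgments. A minimal Python sketch for 2 coders with nominal (looking vs. not looking) codes and no missing data, using the standard coincidence-matrix form of Krippendorff's alpha; this is an illustrative reimplementation, not the web calculator cited above:

```python
from collections import Counter

def percent_agreement(a, b):
    """Proportion of intervals on which the 2 coders agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

def krippendorff_alpha_nominal(a, b):
    """Krippendorff's alpha for 2 coders, nominal codes, no missing
    data: alpha = 1 - Do/De (observed vs. expected disagreement).
    Assumes at least 2 distinct code values occur in the data."""
    n = 2 * len(a)                    # total number of pairable values
    values = Counter(a) + Counter(b)  # marginal value frequencies
    # Observed disagreement: proportion of mismatched coincidences;
    # each unit's pair is counted in both orders (coincidence matrix).
    Do = 2 * sum(x != y for x, y in zip(a, b)) / n
    # Expected disagreement from the marginal value frequencies.
    De = sum(values[x] * values[y]
             for x in values for y in values if x != y) / (n * (n - 1))
    return 1 - Do / De
```

Perfect agreement yields alpha = 1; agreement no better than chance yields alpha near 0.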
Monitor-looking behavior was analyzed by comparing total look time (the total time spent looking at the monitor per segment), individual glance duration (the average duration of individual glances at the monitor per segment), and glance frequency (the number of glances at the monitor per segment) over early-, mid-, and late-maintenance segments using mixed analysis of variance (ANOVA) (NCSS Statistical Systems, Kaysville, Utah). Mixed ANOVAs were also used to compare monitor-looking behavior with looking at other written reading material and the anesthesia chart. The regions of observation (anesthesia chart, other reading material, and monitor) and the effect of a dual anesthesia provider were compared using a mixed ANOVA. If a significant difference was detected, a Bonferroni correction for multiple pairwise comparisons was applied. A P value <0.05 was considered to be statistically significant.
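The Bonferroni step divides the significance threshold by the number of pairwise comparisons made (equivalently, it multiplies each raw P value by that number). A minimal Python sketch of the correction, for illustration only; the study performed its analyses in NCSS, not with this code:

```python
def bonferroni(p_values, alpha=0.05):
    """Bonferroni correction for multiple pairwise comparisons.

    Returns the adjusted P values (each multiplied by the number of
    comparisons and capped at 1.0) and the per-comparison threshold
    against which the raw P values may equivalently be judged.
    """
    m = len(p_values)
    adjusted = [min(1.0, p * m) for p in p_values]
    return adjusted, alpha / m
```

With 3 pairwise comparisons (early vs. mid, early vs. late, mid vs. late), a raw P value must fall below 0.05/3 ≈ 0.017 to remain significant after correction.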
Five staff anesthesiologists, 2 anesthesia fellows, 3 anesthesia residents, and 2 medical students consented to take part in the study. The 10 recorded solo cases were performed by 4 staff and 2 fellows. The dual-provider cases were all supervised by staff anesthesiologists and included 1 of the residents, fellows, or medical students (Table 1). Some subjects performed cases in both the dual-provider and solo-provider groups, and others performed cases in only 1 of the groups, preventing paired analysis.
Ten cases were recorded, segmented, and coded for dual-provider cases and 10 for solo-provider cases. Two segments on 2 separate cases were too short (<6 minutes) because of technical difficulties with the video capture. Both of these segments were from the early-maintenance segment recorded in the solo-provider group. These 2 segments were removed from further analysis, although the rest of these 2 cases were analyzed (mid- and late-maintenance segments).
Percentage agreement between coders for monitor-looking behavior was 76.5%, and Krippendorff's alpha coefficient of reliability was 0.73.
Subjects spent little time observing the monitoring display (32.3 ± 4.5 seconds during a 10-minute segment [mean ± SD]), and this did not change across the 3 segments of the maintenance phase of anesthesia (P = 0.88). Monitor observation occurred in frequent, brief glances. These glances were on average between 1.5 and 2.1 seconds in duration and occurred between 15 and 20 times during each 10-minute segment (Table 2).
The trainees in the dual-provider cases exhibited very similar behavior to their supervisors in terms of total look time (P = 0.57), individual glance durations (P = 0.41), and glance frequencies (P = 0.37) (Table 2). There was no difference in any of these measures across the segments of this study, nor was there an interaction between role (senior vs. trainee) and segment of maintenance anesthesia.
Overall Looking Behavior
The total time spent looking at the monitor, anesthesia chart, or other reading material as a function of provider number and segment of maintenance anesthesia is shown in Table 3. Subjects spent more time looking at other reading material during the mid-maintenance segment than during the early- or late-maintenance segments of anesthesia (P < 0.01) and more time looking at other reading material than they did looking at the monitor during the mid-maintenance segment (P < 0.05). The anesthesia chart was viewed for longer during the early-maintenance segment than during the late-maintenance segment (P < 0.05).
Effect of Anesthesia Provider Number
Subjects in the dual-provider group spent significantly less time looking at other reading material (P < 0.05) during the mid-maintenance segment. Time spent looking at the anesthesia chart was not influenced by the number of anesthesia providers. The total time spent looking at the monitors by staff in the solo- and dual-provider groups did not differ significantly.
Observing the monitoring behavior of the anesthesiologist in the OR with a covert observation technique is both feasible and practical. Anesthesiologists look at the monitoring display in short, frequent glances; this is consistent across different segments of the maintenance phase of anesthesia and is not affected by the number of anesthesia providers or by provider role. Viewing behavior during the maintenance phase of anesthesia is made up of multiple brief glances, generally <2 seconds, performed on average every 30 to 45 seconds. This amounts to a small proportion of overall time (approximately 5%) spent looking at the patient monitor.
The use of monitoring to assist the anesthesiologist in providing safe anesthesia is mandatory in the OR.a,b Although monitoring has developed rapidly with the advent of technology, little is understood about the clinician–monitor interaction and application of this interaction in guiding monitor design. To accurately describe this behavior, it is important that the clinician be unaware of when he or she is being observed to minimize the effects of observation. Our principal objective was to produce a reliable method for assessing the interaction of anesthesiologists and their monitoring displays. Previous studies using overt observation in the OR suggested that the monitors are rarely looked at during induction and emergence from anesthesia.2,3 During these periods the clinician is likely to be mobile in the OR, performing regional anesthesia techniques or transferring the patient from the OR table to a bed. We therefore chose the segments of early-, mid-, and late-maintenance anesthesia to measure a range of monitor observation behaviors to inform future design, while facilitating the evaluation of this new methodology.
In comparison with previous studies, the subjects spent less time looking at the monitor.2,3 These studies reported monitor viewing for 12.7% of the maintenance phase of anesthesia2 and 20.3% of the maintenance and extubation phases of anesthesia.3 Both are longer than the 5% observed in this study for the maintenance phase of anesthesia. An early study by Boquet et al.6 did report a lower percentage of monitor observation of only 3%. It is possible that the reduced monitor observation behavior we observed was due to our sampling methodology: we may have inadvertently selected periods of low monitor-looking behavior. However, the duration of cases varied, providing a variable sampling period for the mid-maintenance segment, which still showed consistent monitoring behavior. Case type and the experience of the anesthesiologist affect monitor-viewing behavior via changes in cognitive workload. Our study was not designed to assess workload, which may therefore be a factor in the discrepancy in monitor-viewing times.
The subjects in previous studies were aware that they were being assessed and were often required to complete vigilance tests as part of a workload assessment. This may have increased their awareness and altered their behavior, increasing the time that they spent observing the monitor. Despite the difference in the duration of monitor viewing, these studies reported a similar pattern of short, frequent glances at the monitor.
Monitoring equipment has progressed from electrocardiogram and invasive arterial blood pressure monitoring in the 1980s to equipment that provides automated noninvasive arterial blood pressure monitoring, pulse oximetry, analysis of inspired and expired airway gases, airway pressure, and temperature, all in real time. Additional sensors (such as processed electroencephalogram) and additional information derived from current sensors (such as pressure or volume variation) will increasingly be introduced into routine clinical practice. Despite this progression in the amount of information displayed, the relative amount of time spent viewing these monitors has not increased pro rata.7,8 This level of monitor usage is consistent with the 1994 observations of Loeb2 and Weinger et al.3 If monitor displays are becoming more complex, an increase in monitor observation would be expected as clinicians seek the additional information. Loeb reported mid-maintenance monitor viewing in short glances of 1 to 2 seconds with a median frequency of 24 glances per 15-minute period, behavior similar to that in our study.2 That study found a greater range of glance frequency and duration, producing the longer overall percentage of time spent viewing monitors. Because the monitor is viewed in short glances, only limited information can be obtained at any given time. It would be anticipated that increasing the amount of information displayed would increase the time spent looking at the monitor. The absence of such an increase in monitor-looking behavior may reflect improved display design.
However, it is known that anesthesiologists direct their attention to portions of the display that are relevant to their ongoing cognitive processing.2 Rapid task-related glances at the patient monitor are consistent with the types of eye movements described in eye-tracking studies by subjects performing active tasks.9 Subjects fixate exclusively on objects that are directly relevant to the task at hand. When engaged in everyday activities, subjects rarely look at task-irrelevant objects (5% of all fixations); instead, they look at the object that is about to be acted upon.9 Thus, it is very likely that the anesthesiologist only glances at the portions of the display that contain specific physiological information pertinent to the current cognitive model of the patient's state. If this is the case, then extra displayed physiological variables that are not immediately relevant will not change clinician looking behavior.
Dual-provider anesthesia did not have a significant effect on the measures of monitoring behavior. Research by Weinger et al. showed workload to be higher during teaching cases and monitor vigilance to be reduced in comparison with nonteaching cases.10 We are unable to comment on the content of dual-provider interactions or teaching using the current methodology, but we did not see any reduction in the duration or frequency of monitor observation by senior subjects between the solo- and dual-provider cases. It may be that the increased mental workload of teaching does not affect intrinsic monitor observation, i.e., glance duration or frequency, but does affect the amount of information gained and understood from the monitors. An appropriately powered larger study is required to investigate what effect dual anesthesia providers and teaching have on monitor-looking behavior and whether there is a discrepancy between looking behavior and vigilance, because this would have significant implications for the teaching of anesthesia.
This study had a number of limitations. The limited sample size of this pilot study restricts the conclusions that can be drawn. Future studies would ideally use more subjects and a paired study design between the dual- and solo-provider groups, reducing the potential bias of some subjects participating in multiple cases. The cases and segments were not randomly selected, which may have introduced some bias. A variety of case types were analyzed; however, all were healthy children undergoing elective surgery. The results should not be extrapolated to other clinical settings. The control of recording and segment selection was performed by investigators not involved in the video analysis phase of the project to reduce coding bias. The investigators analyzing the segments did not know the clinicians in the video, and moderate agreement was found with the subset of recoded cases.
The covert nature of the observations meant that we were limited to coding looking behavior from the video footage. There was no way to know precisely where the anesthesiologists were looking (e.g., exact locations on the patient monitor), and the subjects were on occasion located outside the field of view of the camera. We were unable to evaluate the significance of auditory representation of physiological variables (heart rate and oxygen saturation) or exactly how important auditory information might be.11 A corroboration of our covert observations with eye-tracking devices would be very informative.12
In conclusion, the present study used a covert observation technique to quantify the natural looking behavior of anesthesiologists in the OR. We found that anesthesiologists during different segments of maintenance-phase anesthesia viewed the primary monitor display in frequent short glances. The monitors were observed for about 5% of the time, which is shorter than that reported in previous overt observational studies. This pattern of behavior did not change across the maintenance phase of anesthesia and did not seem to be affected by the presence of >1 anesthesia provider, charting, or the use of other reading material. The presence of “at-a-glance monitoring” has implications for the design of patient monitoring displays and potentially for anesthesiologist training.
a Committee of origin: Standards and Practice Parameters. Standards for basic anesthetic monitoring. http://www.asahq.org/publicationsAndServices/standards/02.pdf, 2005. Approved by the ASA House of Delegates on October 21, 1986, and last amended on October 25, 2005.
b Canadian Anesthesiologists' Society guidelines for patient monitoring. http://www.cas.ca/members/sign_in/guidelines/practice_of_anesthesia/default.asp?load=patient_monitoring, 2008.
1. Rensink RA. Change detection. Annu Rev Psychol 2002;53:245–77
2. Loeb RG. Monitor surveillance and vigilance of anesthesia residents. Anesthesiology 1994;80:527–33
3. Weinger MB, Herndon OW, Zornow MH, Paulus MP, Gaba DM, Dallen LT. An objective methodology for task analysis and workload assessment in anesthesia providers. Anesthesiology 1994;80:77–92
4. Weinger MB, Gonzales DC, Slagle J, Syeed M. Video capture of clinical care to enhance patient safety. Qual Saf Health Care 2004;13:136–44
5. Mackenzie CF, Hu FM, Xiao Y, Seagull FJ. Video acquisition and audio system network (VAASNET) for analysis of workplace safety performance. Biomed Instrum Technol 2003;37:285–91
6. Boquet G, Bushman JA, Davenport HT. The anaesthetic machine—a study of function and design. Br J Anaesth 1980;52:61–7
7. McDonald JS, Dzwonczyk RR. A time and motion study of the anaesthetist's intraoperative time. Br J Anaesth 1988;61:738–42
8. McDonald JS, Dzwonczyk R, Gupta B, Dahl M. A second time-study of the anaesthetist's intraoperative period. Br J Anaesth 1990;64:582–5
9. Land M, Mennie N, Rusted J. The roles of vision and eye movements in the control of activities of daily living. Perception 1999;28:1311–28
10. Weinger MB, Reddy SB, Slagle JM. Multiple measures of anesthesia workload during teaching and nonteaching cases. Anesth Analg 2004;98:1419–25
11. Sanderson PM, Watson MO, Russell WJ, Jenkins S, Liu D, Green N, Llewelyn K, Cole P, Shek V, Krupenia SS. Advanced auditory displays and head-mounted displays: advantages and disadvantages for monitoring by the distracted anesthesiologist. Anesth Analg 2008;106:1787–97
12. Schulz-Stubner S, Jungk A, Kunitz O, Rossaint R. [Analysis of the anesthesiologist's vigilance with an eye-tracking device: a pilot study for evaluation of the method under the conditions of a modern operating theatre]. Anaesthesist 2002;51:180–6