Because of its extensive resource utilization and revenue production, the operating room (OR) is a focus of intense administrative efforts to increase efficiency, reduce costs, and improve patient safety (1–8). To optimize this economic benefit, intense “production-line” pressure has been placed on all caregivers in the OR (9–13). Anesthesiologists are particularly exposed to production pressure, because they are evaluated not only on the quality of their clinical care but also on improvements in productivity and reductions in scheduling delays. In academic medical centers, this issue is further compounded by the expectation to provide perioperative teaching (14–16).
The preincision period, defined as patient-on-table to surgical incision, is an interval of intense effort by anesthesiologists, nursing staff, and surgeons (10–12). Perhaps the most critical part of this period is induction of anesthesia, which is affected by variables such as the surgical procedure, ASA physical status (ASA PS), anesthetic technique and monitoring, coexisting diseases, and resident teaching. To reduce variability and improve efficiency, administrators have targeted this period for standardizing the times required for anesthesia tasks (8,17–19). This period is also of major significance in producing an accurate OR schedule: adding the appropriate amount of preincision time to an accurate estimate of surgical case duration will increase the likelihood of accurate scheduling (19,20). Although recent studies have reported data on the preincision period, that information was obtained from OR information systems (ORIS), in which reporting biases can frequently occur because data are entered by OR nursing and/or anesthesia staff engaged in concurrent clinical responsibilities (2,8,21–23); no independently collected data are currently available to indicate the actual time required to complete these tasks. Using this strategy, Overdyk et al. (8) reported only a 60% compliance rate for data collection. Sandberg et al. (22), using a validation subset of patients to compare direct observation with a Nursing Perioperative Record, reported that although the mean difference was small, the greatest variability (standard deviation) occurred in the data for “OR anesthesia time” (induction of anesthesia).
To address this limitation, we designed an independent observer-based study to evaluate the time required to complete various tasks in the preincision period by the anesthesiologist, surgeon, and nursing staff. We submit that these data are important, as the time required by the anesthesiologist could be added to the accurate time estimates required for surgery and thus allow better prediction of case duration for OR scheduling.
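The scheduling argument above can be made concrete with a small sketch. The function and sample values below are purely illustrative (they are not a method from this study): a room-scheduling estimate simply adds a preincision-time estimate to the surgical-duration estimate.

```python
# Hypothetical sketch: adding a preincision-time estimate to a surgical
# duration estimate to predict total room occupancy for scheduling.
# Function name and inputs are illustrative, not taken from the study.

def predicted_case_duration(preincision_min: float, surgical_min: float,
                            closing_min: float = 0.0) -> float:
    """Patient-on-table to dressing-complete estimate, in minutes."""
    return preincision_min + surgical_min + closing_min

# Example using the study's overall means (ART 21.1 min + SPT 22.1 min)
# and an assumed 150-min surgical estimate:
preincision = 21.1 + 22.1
total = predicted_case_duration(preincision, surgical_min=150.0)
print(round(total, 1))   # 193.2
```

A per-service or per-ASA-PS preincision estimate (rather than a single constant) would follow the same pattern.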
After protocol review by the Human Investigation Committee, an exemption from informed consent was granted, and all guidelines for confidentiality were followed. This prospective, observer-based study was performed in the South Pavilion OR suite at Yale New Haven Hospital from September 24, 2001 to July 22, 2002. The South Pavilion includes 19 ORs and 2 cystoscopy suites; although a few outpatient procedures are performed in this location, it is predominantly an adult inpatient OR suite in a tertiary academic medical center. Inclusion criteria were ASA PS I–IV and a scheduled elective surgical procedure. Patients who arrived in the OR with a regional anesthetic already in place (nerve block placed in the “block room” or Postanesthesia Care Unit) and then received only minimal to moderate sedation for the surgical procedure were grouped with the monitored anesthesia care cases. Exclusion criteria were emergency procedures, ASA PS V, and an artificial airway in place on arrival in the OR.
All data were recorded on a standardized form by trained observers who were not involved in patient care. The observers were trained by two of the authors (AE, ED) using a formal syllabus and a 2-wk instructional period in the OR. After the training period, the observers completed practice data-collection sessions, and these data were compared with those simultaneously collected by one of the authors (AE or ED). Two observers were then assigned to the same OR case, and interobserver agreement was examined. Observers were allowed to collect data independently only if, in each of the training sessions, their results were within 1 min of the anesthesia release time (ART) and surgical preparation time (SPT) recorded by the trainers (2.5% interobserver error) and their recorded interval data were within one unit (1 U = 5 min) of the trainers’ values.
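The certification rule above can be sketched as a simple tolerance check. This is an illustrative reconstruction, not the authors' actual procedure; the data structure and sample values are assumptions.

```python
# Illustrative sketch of the observer-certification rule described above:
# a trainee passes a session only if ART and SPT agree with the trainer's
# values to within 1 min and every interval datum agrees to within
# 1 unit (1 U = 5 min). Field names are hypothetical.

def session_passes(trainee: dict, trainer: dict,
                   time_tol_min: float = 1.0, unit_tol: int = 1) -> bool:
    """trainee/trainer: {'ART': min, 'SPT': min, 'intervals': [units, ...]}"""
    if abs(trainee['ART'] - trainer['ART']) > time_tol_min:
        return False
    if abs(trainee['SPT'] - trainer['SPT']) > time_tol_min:
        return False
    return all(abs(a - b) <= unit_tol
               for a, b in zip(trainee['intervals'], trainer['intervals']))

trainee = {'ART': 20.5, 'SPT': 22.0, 'intervals': [4, 1, 2]}
trainer = {'ART': 21.0, 'SPT': 22.5, 'intervals': [4, 2, 2]}
print(session_passes(trainee, trainer))   # True
```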
To prevent bias, all observers were rotated through each of the 21 ORs in an assignment sequence based on a random number generator. This ensured that all observers sampled all ORs and were randomly exposed to all combinations of cases, as well as to a variety of anesthesia and surgical teams. The observers were in place before the patient entered the OR and left the OR after skin incision. They stationed themselves so as not to be obtrusive, yet close enough to observe and hear all OR events. OR staff were not identified by name on the data-collection instrument. A pilot study (n = 125 patients) was conducted to determine whether any logistical or data-recording methods required revision; no pilot data were used in the compilation and analysis of the study data.
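The randomized rotation could be implemented along the following lines. This is a hypothetical sketch of the assignment scheme described above, not the investigators' actual code; all names are illustrative.

```python
# Hypothetical sketch of the randomized observer rotation: each observer
# is assigned to every OR, with the visiting order shuffled per observer
# by a random number generator so that exposure to cases and teams is
# not systematic.
import random

def rotation_schedule(observers, n_rooms=21, seed=None):
    rng = random.Random(seed)
    schedule = {}
    for obs in observers:
        order = list(range(1, n_rooms + 1))
        rng.shuffle(order)            # random sequence through all 21 ORs
        schedule[obs] = order
    return schedule

sched = rotation_schedule(['A', 'B', 'C'], seed=42)
# Every observer samples every OR exactly once per rotation:
assert all(sorted(rooms) == list(range(1, 22)) for rooms in sched.values())
```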
Overall, definitions of clinical practice were based on the standardized definitions of the Association of Anesthesia Clinical Directors (24). The study period was defined as the point at which the patient was placed on the OR table (time zero) until the skin incision was made or a procedure was started by the surgeon (e.g., endoscopy). This period was subdivided into two phases: ART and SPT. ART is defined as the time at which the patient had a sufficient level of anesthesia established to begin the surgical preparation and the remaining anesthesia tasks did not preclude positioning and surgical preparation. Where possible, objective end-points (e.g., endotracheal intubation, placement of the pulmonary artery catheter) were used to determine the completion of this phase. SPT is defined as the time from the completion of ART to skin incision or to the institution of a painful stimulus, such as introduction of a cystoscope. ART, SPT, and incision time were measured with a stopwatch and are reported in minutes, with the start equal to time zero. Other specific time-related details of the study period, such as induction and tracheal intubation times, times for placement of regional anesthetics and invasive monitors, teaching time, and delays, were recorded on the data sheet in units (1 U = 5-min time interval). Total case duration is defined as patient-on-table to dressing completed. Delays are defined as nonprogression of patient care for ≥5 min as the result of practitioner-related problems (e.g., absence of the anesthesia or surgical attending) or system-related problems (e.g., appropriate equipment unavailable). The causes of delays were categorized using the Yale New Haven Hospital OR Definitions of Delays.
Delays attributable to surgeons included: surgeon late, surgeon in another case running late, patient without medical clearance, incorrect booking by surgeon, surgeon unavailable, change in attending surgeon, additions to booked procedure, no consent in the chart, and other causes of delay related to surgeon. Delays attributed to anesthesiologists included: attending in another room, anesthesia team not ready, attending performing a preoperative interview.
The sample size met standards for simple random sampling as recommended by the Joint Commission on Accreditation of Healthcare Organizations (25). Data were read by an optical scanner and stored in Microsoft Access 2000. All data were reviewed sequentially by two investigators before entry, and inquiries regarding errors or outliers were resolved on a weekly basis. Data were analyzed using SAS statistical software (Version 8.12; SAS Institute, Cary, NC). Except where noted, data are expressed as mean ± sd. The entire data set was initially explored with descriptive statistics, including Pearson correlation coefficients, and simple and multiple linear regression analyses were performed based on these results. Analysis of variance with Tukey’s HSD was used to define between-group significance. P < 0.05 was considered significant.
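For readers unfamiliar with the analytic steps named above, the core of the approach (correlation followed by least-squares regression) can be sketched in a few lines. The data below are synthetic, the analysis here uses NumPy rather than SAS, and the ANOVA/Tukey post hoc step is omitted for brevity.

```python
# Minimal sketch of the descriptive/inferential steps named above
# (Pearson correlation and linear regression) on synthetic data.
import numpy as np

rng = np.random.default_rng(0)
art = rng.normal(21.1, 15.7, size=200).clip(1)   # synthetic ART, min
spt = 0.8 * art + rng.normal(5, 4, size=200)     # synthetic SPT, min

# Pearson correlation coefficient between ART and SPT
r = np.corrcoef(art, spt)[0, 1]

# Simple linear regression of SPT on ART (ordinary least squares)
X = np.column_stack([np.ones_like(art), art])    # intercept + slope design
beta, *_ = np.linalg.lstsq(X, spt, rcond=None)

print(f"r = {r:.2f}, intercept = {beta[0]:.2f}, slope = {beta[1]:.2f}")
```

A multiple regression (e.g., adding ASA PS and monitoring indicators as columns of `X`) and a between-group ANOVA would extend the same design-matrix pattern.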
A total of 1559 patients were enrolled in the study. One patient, who had a difficult tracheal intubation and an ART (206 min) more than 12 sd above the mean, was considered an extreme outlier and excluded from data analysis. Thus, 1558 surgical cases were observed, encompassing 15,506 time units (1 U = 5 min), or 1292 h of observation. The mean age of the patients was 57.4 ± 17.2 yr (range, 16–92 yr). The distribution of ASA PS was I (9%), II (36%), III (31%), and IV (24%). The mean Medicare case-mix complexity was 2.98 ± 2.0. An anesthesia attending and a resident were the caregivers in 92.7% of cases, and an anesthesia attending and a Certified Registered Nurse Anesthetist in 7.3%. General anesthesia was administered in 72% of cases, monitored anesthesia care in 18%, and regional anesthesia in 10%.
The overall mean ART was 21.1 ± 15.7 min (range, 1–115 min), the mean SPT 22.1 ± 13.4 min (range, 1–130 min), and the mean case duration 206.7 ± 123.2 min. ART and SPT represented 10.2% and 10.7% of total case duration, respectively. When analyzed by anesthetic technique, ART for monitored anesthesia care was significantly shorter (7.2 ± 9.2 min) than ART for general anesthesia (23.0 ± 15.0 min) or regional anesthesia (26.9 ± 16.8 min) (P < 0.01). ASA PS had a significant effect on ART across ASA PS I–IV: for ASA PS IV patients who received general or regional anesthesia, ART was 2.0 and 1.4 times longer, respectively, than that required for ASA PS ≤ III (P < 0.05) (Table 1). Also, as shown in Table 2, placement of various invasive hemodynamic monitors significantly prolonged ART.
Overall, ART was significantly related to the year of anesthesiology resident training: the ART of CA-1 residents was 17.9 ± 13.9 min, compared with 22.8 ± 16.4 min for CA-2 residents and 25.4 ± 16.8 min for CA-3 residents (P < 0.005). However, this observation reflects increasing case complexity by training year, as evidenced by the fact that mean case length, mean Medicare case weight, and ASA PS all increased with year of training. In a subset of general anesthetics in ASA PS IV patients with comprehensive monitoring (arterial and central venous pressure/pulmonary artery catheters and transesophageal echocardiography), there was no significant difference by resident year (P = 0.12). Backward stepwise linear regression demonstrated that ASA PS, level of resident training, invasive monitoring, case length, and case number in the room (case 1 versus case 2, etc.) were positive predictors of ART length (F = 57.85, P = 0.001). In contrast, gender, body mass index (BMI), the number of anesthesia personnel concurrently in the room, and the number of rooms covered per anesthesia attending (1 or 2) were not predictors of ART.
As with ART, there was also significant variation in SPT (Fig. 1 A–B). ART and SPT were strongly correlated; for most surgical services, SPT was associated with ART (r = 0.77, P < 0.036).
Delays were encountered in 24.5% (383/1558) of all procedures. Surgeons were responsible for 66.8% (256/383), anesthesiologists for 21.7% (83/383), and other services (e.g., nursing) for 11.5% (44/383). Surgeons, however, were responsible for a disproportionate share of the delay time, accounting for 77.4% (3690/4770 min) of delay minutes, versus 14.3% (680/4770 min) for anesthesiologists and 8.4% (400/4770 min) for others.
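The divergence between delay frequency and delay minutes can be recomputed directly from the counts reported above; the snippet below simply reproduces that arithmetic.

```python
# Recomputing the delay shares reported above from the raw counts and
# minutes, to show how frequency and total delay time diverge by service.
counts = {'surgeon': 256, 'anesthesia': 83, 'other': 44}       # of 383 delays
minutes = {'surgeon': 3690, 'anesthesia': 680, 'other': 400}   # of 4770 min

share_of_delays = {k: v / 383 for k, v in counts.items()}
share_of_minutes = {k: v / 4770 for k, v in minutes.items()}

print(f"surgeon: {share_of_delays['surgeon']:.1%} of delays, "
      f"{share_of_minutes['surgeon']:.1%} of delay minutes")
# surgeon: 66.8% of delays, 77.4% of delay minutes
```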
This independent observer-based study documents significant variability in the time required for both anesthetic and surgical activities in the preincision period. Higher ASA PS resulted in a significant increase in ART, which was further prolonged by the need to insert invasive hemodynamic monitors. Case length and case number in the room also influenced ART: in general, longer and more complex cases were scheduled first, followed by progressively shorter cases.
Overall, the level of training of the anesthesia resident had a significant impact on ART, with more senior residents having longer ARTs; this was related to increasing case complexity. The teaching aspect of the study is covered in greater detail in a parallel report focusing on resident education (26). Briefly, we documented an average increase in time to incision of 4.5 ± 3.2 min (range, 1–20 min) for teaching cases; that is, each increase in percent teaching added 0.18 × (ART + SPT) to the time to incision. Using different methodologies, other investigators have reported similar results (23,27).
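One possible reading of the quoted relation is sketched below. Both the interpretation (treating "percent teaching" as a fraction of the preincision period) and the sample values are assumptions for illustration; they are not taken verbatim from reference 26.

```python
# A hedged reading of the teaching relation quoted above: if a fraction p
# of the preincision period involved teaching, the added time to incision
# is modeled as 0.18 * p * (ART + SPT). This interpretation and the
# sample values are illustrative only.

def teaching_increment(p_teaching: float, art_min: float, spt_min: float) -> float:
    """Estimated extra minutes to incision attributable to teaching."""
    return 0.18 * p_teaching * (art_min + spt_min)

# With the study's mean ART (21.1 min) and SPT (22.1 min), teaching
# throughout the preincision period (p = 1.0) would add about 7.8 min:
print(round(teaching_increment(1.0, 21.1, 22.1), 1))   # 7.8
```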
Some of the nonpredictors of ART are of particular interest. We had hypothesized that BMI would correlate significantly with ART for both general and regional anesthesia; this hypothesis, however, was not confirmed. Indeed, there are conflicting reports in the medical literature on the impact of BMI on procedural aspects of anesthetic management (28–32). Two other factors that did not predict ART are of interest to OR managers. First, the number of personnel in the room (e.g., additional residents, technicians) did not appear to reduce ART. Second, the room-coverage ratio of the attending anesthesiologist had no relationship to ART duration; that is, whether an attending supervised residents in one room or two did not affect ART. One should consider, however, that an attending would typically be assigned either two uncomplicated rooms or one complicated room.
Although one would expect differences in ART (based on, e.g., type of surgical procedure or ASA PS), we were surprised by the large variability in SPT, even for the same surgical procedure. Theoretically, surgical preparation for similar procedures should be independent of variables such as ASA PS or coexisting diseases; in cardiac surgical patients, for example, SPT varied by a factor of three. Interestingly, we found that ART in many instances was closely matched by SPT. Overdyk et al. (8) and Mazzei (33) reported similar data regarding SPT and showed the variability encountered when trying to select a target time for anesthesia induction (ART). Strum et al. (21) also observed significant variability for surgical procedures, primarily attributable to the surgeon.
Delays in the start time of surgical procedures can have a significant effect on OR time management. Delays were encountered in 25% of the cases studied, and most were attributed to the surgeons. Further, surgeons accounted for a disproportionate percentage of delay minutes. Although nursing and equipment delays are often blamed for OR inefficiencies, we found that these constitute “background” noise. There was also a difference in the pattern of delay between anesthesiologists and surgeons: for surgeons the range of delay was 5–95 min, whereas anesthesiologists had a much narrower range (5–15 min). In an era of intense focus on OR efficiency, the patterns of delay found in this study can increase OR costs and reduce efficiency and patient throughput.
Finally, several methodological issues related to this study are relevant to any conclusions drawn from the results. Four techniques of data collection were considered: self-reporting by OR personnel, data from an ORIS, video recording in each OR, and use of trained observers. Recording of study data by individuals involved in direct clinical care has been shown to produce numerous inaccuracies (8,21,34), and because small periods of time were being examined, even small recording errors could invalidate the data. Overdyk et al. (8) stated that self-reporting was associated with a 23% rate of incorrectly or partially completed forms and that the overall compliance rate for completion of the forms was only 60%. Similarly, Sandberg et al. (22), comparing a time-stamp methodology with direct observation, reported the highest variability for the time period designated “OR anesthesia time,” which is similar to the ART period in the present study. Although videotaping has many advantages over direct observation, it might be deemed too intrusive in an OR and may actually be less accurate than an observer-based system (35). The use of an OR database was considered; this is, in essence, data collected by caregivers, as addressed above. In addition, a new ORIS was being implemented at the time of the study, and data collection may have been compromised by the staff’s lack of experience with the new system. We felt that using trained independent observers, combined with a standard curriculum, practice observation sessions, and interobserver comparisons, was the best way to ensure the validity of the data collected (36). We also took further actions to remove bias: assigning observers at random to a given OR ensured equal rotation of all observers through all ORs, so that an observer would not become a specialist in, for example, cardiac surgery or neurosurgery.
In addition, the investigators had no control over the case assignments of the attending anesthesiologists, residents, or observers. A further limitation of this study is the potential for a “Hawthorne” effect on the OR personnel; although possible, the length of the study (10 mo) and the consistency of the collected data on attending presence do not support this notion. Further, the number of regional anesthesia cases was too small to permit conclusions comparable to those for the general anesthesia or monitored anesthesia care cases; this reflects the fact that most regional anesthetics are administered in the ambulatory ORs, an area not covered by the study protocol. Finally, because the data were collected predominantly in an inpatient OR at one institution, applicability to other settings may be limited.
In conclusion, significant variability in ART was observed over a wide range of anesthetic techniques, surgical procedures, and case lengths; similar variability was noted in SPT. For general anesthesia and regional anesthesia, ASA PS IV patients had an ART 2.0 and 1.4 times longer, respectively, than that required for ASA PS I–III. The placement of invasive monitors also increased ART. Nonpredictors of ART included gender, BMI, the number of anesthesia personnel concurrently in the room, and the number of rooms covered per anesthesia attending. Delays significantly contributed to the duration of the overall preincision period (ART + SPT) in 25% of surgeries, and the majority of the delays were surgically based. This study demonstrates that, for OR scheduling purposes, a constant fixed duration for anesthetic induction is inappropriate; more important, such an approach will create erroneous administrative expectations.
The authors wish to acknowledge the continuing support of Mr. Norman Roth, Senior Vice President, Yale-New Haven Hospital. In addition, we wish to recognize the assistance of our operating room colleagues in the Departments of Anesthesiology, Nursing, and Surgery.
1. Dexter F, Epstein RH, Ippolito GV. Practical application of research on operating room efficiency and utilization. Adv Anesth 2004;22:29–59.
2. Schuster M, Standl T, Wagner JA, et al. Effect of different cost drivers on cost per anesthesia minutes in different anesthesia subspecialties. Anesthesiology 2004;101:1435–43.
3. Udelsman R. The operating room: war results in casualties. Anesth Analg 2003;97:936–7.
4. Dexter F, Abouleish AE, Epstein RH, et al. Use of operating room information system data to predict the impact of reducing turnover times on staffing costs. Anesth Analg 2003;97:1119–26.
5. Macario A, Dexter F, Traub RD. Hospital profitability per hour of operating room time can vary among surgeons. Anesth Analg 2001;93:669–75.
6. Macario A, Vitez TS, Dunn B, et al. Hospital costs and severity of illness in three types of elective surgery. Anesthesiology 1997;86:92–100.
7. Macario A, Vitez TS, Dunn B, McDonald T. Where are the costs in perioperative care? Analysis of hospital costs and charges for inpatient surgical care. Anesthesiology 1995;83:1138–44.
8. Overdyk FJ, Harvey SC, Fishman RL, Shippey F. Successful strategies for improving operating room efficiency at academic institutions. Anesth Analg 1998;86:896–906.
9. Gaba DM, Howard SK, Jump B. Production pressure in the work environment. Anesthesiology 1994;81:488–500.
10. Vredenburgh AG, Weinger MB, Williams KJ, et al. Developing a technique to measure anesthesiologists’ real-time workload. Proceedings of the International Ergonomics Association (IEA)/Human Factors and Ergonomics Society (HFES) Congress 2000;44:241–4.
11. Weinger MB, Reddy SB, Slagle JM. Multiple measures of anesthesia workload during teaching and nonteaching cases. Anesth Analg 2004;98:1419–25.
12. Weinger MB, Vredenburgh AG, Schumann CM, et al. Quantitative description of the workload associated with airway management procedures. J Clin Anesth 2000;12:273–82.
13. Kain ZN, Chan KM, Katz JD, et al. Anesthesiologists and acute perioperative stress: a cohort study. Anesth Analg 2002;95:177–83.
14. Abouleish AE, Apfelbaum JL, Prough DS, et al. The prevalence and characteristics of incentive plans for clinical productivity among academic anesthesiology programs. Anesth Analg 2005;100:493–501.
15. Miller RD. Academic anesthesia faculty salaries: incentives, availability and productivity. Anesth Analg 2005;100:487–9.
16. Miller RD, Cohen NH. The impact of productivity-based incentives on faculty salary-based compensation. Anesth Analg 2005;101:195–9.
17. Fernandopulle R. Six lessons from the research. Lesson #3: No “lockbox,” putting everything on the table. In: Fernandopulle R, The Clinical Advisory Board, eds. Surgical services reform: executive briefing for clinical leaders. Washington, DC: The Advisory Board Company, 2001:74–5.
18. Britton P, Miller C. Front-loaded anesthesia prep. In: Britton P, Miller C, and The Clinical Advisory Board, eds. Clockwork surgery: hardwiring efficiency into the perioperative process. Washington, DC: The Advisory Board Company, 2001:57–61.
19. Vitez TS, Macario A. Setting performance standards for an anesthesia department. J Clin Anesth 1998;10:166–75.
20. Dexter F, Macario A. Applications of information systems to operating room scheduling. Anesthesiology 1996;85:1232–4.
21. Strum DP, Sampson AR, May JH, Vargas LG. Surgeon and type of anesthesia predict variability in surgical procedure times. Anesthesiology 2000;92:1454–66.
22. Sandberg WS, Daily B, Egan M, et al. Deliberate perioperative systems design improves operating room throughput. Anesthesiology 2005;103:406–18.
23. Eappen S, Flanagan H, Bhattacharyya N. Introduction of anesthesia resident trainees to the operating room does not lead to changes in anesthesia controlled times for efficiency measures. Anesthesiology 2004;101:1210–4.
24. Donham RT. Defining measurable OR-PR scheduling, efficiency and utilization data elements: the Association of Anesthesia Clinical Directors Procedural Times Glossary. Int Anesthesiol Clin 1998;36:15–29.
25. Specifications manual for national implementation of hospital core measures (v. 2.0), Section 4:1–6. 2003. Available at: http://www.jcaho.org/pms/core+measures/0btableofcontents1.pdf. Accessed September 26, 2005.
26. Davis EA, Escobar A, Ehrenwerth J, et al. Resident teaching versus the operating room schedule: an independent observer based study of 1558 cases. Anesth Analg 2006;103:932–7.
27. St. Jacques P, James H, Higgins M. Level of training of anesthesiology residents or nurse anesthetist affects anesthesiology controlled intraoperative time periods [abstract]. J Clin Anesth 2003;15:77–8.
28. Nielsen KC, Guller U, Steele SM, et al. Influence of obesity on surgical regional anesthesia in the ambulatory setting: an analysis of 9,038 blocks. Anesthesiology 2005;102:181–7.
29. Juvin P, Blarel A, Bruno F, Desmonts JM. Is peripheral line placement more difficult in obese than in lean patients? Anesth Analg 2003;96:1218.
30. Juvin P, Lavaut E, Dupont H, et al. Difficult tracheal intubation is more common in obese than in lean patients. Anesth Analg 2003;97:595–600.
31. Carles M, Pulcini A, Macchi P, et al. An evaluation of the brachial plexus block at the humeral canal using a neurostimulator (1417 patients): the efficacy, safety and predictive criteria of failure. Anesth Analg 2001;92:194–8.
32. Conn RA, Cofield RH, Byer DE, Linstromberg JW. Interscalene block anesthesia for shoulder surgery. Clin Orthop 1987;216:94–8.
33. Mazzei WJ. Operating rooms start times and turnover time in a university hospital. J Clin Anesth 1994;6:405–8.
34. Sanborn KV, Castro J, Kuroda M, Thys DM. Detection of intraoperative incidents by electronic scanning of computerized anesthesia records: comparison with voluntary reporting. Anesthesiology 1996;85:977–87.
35. Weinger MB, Gonzalez DC, Slagle J, Syeed M. Video capture of clinical care to enhance patient safety. Qual Saf Health Care 2004;13:136–44.
36. Slagle J, Weinger MB, Dinh M-TT, et al. Assessment of the intrarater and interrater reliability of an established clinical task analysis methodology. Anesthesiology 2002;96:1129–39.