Economics, Education, and Policy: Research Report

Quality of Supervision as an Independent Contributor to an Anesthesiologist’s Individual Clinical Value

Dexter, Franklin MD, PhD; Hindman, Bradley J. MD

doi: 10.1213/ANE.0000000000000843


The clinical production of individual anesthesiologists has been measured in multiple related ways, including by clinical days, American Society of Anesthesiologists’ Relative Value Guide Units, and hours of direct clinical care.1–4 These measures are highly correlated with one another over long periods (e.g., 6 months). However, none measures the quality of that effort.

In this article, we consider the quality of clinical supervision provided by anesthesiologists who are supervising anesthesia residents and nurse anesthetists.5–14 By supervision, we mean all clinical oversight functions directed toward assuring the quality of clinical care whenever the anesthesiologist is not the sole anesthesia care provider. Thus, as used in this article, the term “supervision” is distinct from US billing nomenclature (“medical supervision”). When residents or nurse anesthetists are the direct providers of anesthesia care, the value added by the anesthesiologist includes their supervision.

The quality of daily supervision provided to both resident physicians and nurse anesthetists by individual anesthesiologists can be measured reliably and validly using the scale developed by de Oliveira Filho et al. (Table 1).5,7,9,10,13 The quality of anesthesiologists’ supervision might be positively correlated with clinical production (e.g., hours of clinical care). For example, people tend to do what they are good at and “practice makes perfect.” In contrast, anesthesiologists who work fewer hours in operating rooms (ORs), work part time, work more in the pain clinic or the intensive care units, and/or perform administrative functions15 may “get rusty” and exhibit diminished quality performance in the OR. Thus, measures of clinical productivity (e.g., hours of clinical care) and measures of the quality of supervision might be positively correlated.

Table 1: de Oliveira Filho et al.’s Instrument5 for Measuring Faculty Anesthesiologists’ Supervision of Anesthesia Residents During Operating Room Care

If there were moderate or strong positive correlation between clinical production (hours) and quality (supervision), then, from a purely administrative perspective, there would be little reason to make the effort needed to measure the quality of clinical supervision. However, if clinical production and supervision quality were not positively correlated, then it would be important for departments to measure the quality of clinical supervision. Essentially, the clinical value provided by a supervising anesthesiologist would be correlated with, but not proportional to, their clinical hours. The absence of a positive correlation between clinical hours and supervision scores would also provide some assurance that anesthesiologists who work in ORs relatively infrequently can provide effective clinical supervision.


The University of Iowa IRB determined that our study did not meet the regulatory definition of human subjects research. Analyses were performed with deidentified data.

As described previously, since July 1, 2013, our department has sent daily e-mail requests to anesthesia residents and nurse anesthetists to evaluate the supervision provided by each anesthesiologist with whom they worked the previous day.9,10,13 Evaluation requests were sent automatically by e-mail when, on any given day, either a resident or nurse anesthetist worked with an anesthesiologist for ≥1 hour in an operative setting, determined from the anesthesia information management system.15,16 For details, see the Appendix of our recent article, including its analyses and summary of the processes of case assignment.9 The locations were our hospital’s main surgical suite (32 ORs), ambulatory surgery center (8 ORs), urology suite, labor and delivery suite, electroconvulsive therapy suite, and pediatric catheterization laboratory. Residents and nurse anesthetists evaluated anesthesiologists’ supervision using a secure webpage.9 They answered the 9 questions developed by de Oliveira Filho et al.5 using a 4-point Likert scale (Table 1). The evaluation could only be submitted when all 9 questions were answered and could not be revised after submission.9

We have established the reliability and validity of the question set (Table 1) when used both by residents7,9,12 and nurse anesthetists.8,9,13 When residents evaluated anesthesiologists, the Cronbach α coefficient = 0.885.7 When nurse anesthetists evaluated anesthesiologists, the α = 0.895 (SE = 0.003).13 Individual anesthesiologists’ supervision scores provided by residents and nurse anesthetists are correlated (i.e., the anesthesiologists who receive high mean scores from residents are the same as those who receive high mean scores from nurse anesthetists, P < 0.0001).9,10,13 However, we have demonstrated that (1) supervision scores provided by residents are significantly greater than scores provided by nurse anesthetists,9,10,13 and (2) supervision score differences between residents and nurse anesthetists are heterogeneous among anesthesiologists.13 Consequently, there must be separate consideration of supervision scores provided by residents and nurse anesthetists.8–10,13
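As a minimal illustrative sketch (using a small hypothetical ratings matrix, not the study's data), Cronbach's α for a question set such as that in Table 1 can be computed from a raters × items matrix of Likert answers:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for a (raters x items) matrix of Likert answers."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                         # number of items (questions)
    item_vars = ratings.var(axis=0, ddof=1)      # variance of each item
    total_var = ratings.sum(axis=1).var(ddof=1)  # variance of raters' totals
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical evaluations: 5 raters answering 3 items (NOT the study's data;
# the study used 9 items scored 1-4)
demo = [[4, 4, 3],
        [3, 3, 3],
        [4, 4, 4],
        [2, 3, 2],
        [4, 3, 4]]
alpha = cronbach_alpha(demo)  # internal-consistency coefficient
```

The reported coefficients (α = 0.885 for residents, 0.895 for nurse anesthetists) indicate high internal consistency among the 9 questions.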

The current project used all anesthesiologists’ supervision scores for dates of service between July 1, 2013 and December 31, 2014 (i.e., 18 months) (Table 2). We compared anesthesiologists’ clinical OR activity (total hours) and supervision scores obtained during the first (July 1, 2013 to December 31, 2013) and last (July 1, 2014 to December 31, 2014) of the three 6-month periods. We used 6-month periods because we showed previously that 6 months was a sufficient duration at our department for there to be an adequate number of raters (for both residents and nurse anesthetists) to differentiate individual anesthesiologists’ mean supervision scores from those of other anesthesiologists in the department.9,10 Among residents during the first and last 6-month periods, there were 3011 evaluations with all 9 questions answered among 3202 occasions (94.0% completed) and 2699 evaluations among 2898 occasions (93.1% completed), respectively. Among nurse anesthetists, there were 3532 evaluations among 3910 occasions (90.3% completed) and 3812 evaluations among 4375 occasions (87.1% completed), respectively.a We used the first and last 6-month periods because they include the same months of the year (i.e., level of resident training).

Table 2: Clinical Supervision Scores and Hours by Period Among Anesthesiologists

During the first 6 months, anesthesiologists received no feedback regarding the supervision scores. The data from the first 6 months were included in our previous studies of resident and nurse anesthetist evaluations of faculty supervision.9,10,13 During the middle 6 months, all anesthesiologists received individual feedback regarding supervision scores and comments provided by the residents from the preceding 6 months, but they did not receive feedback regarding scores provided by nurse anesthetists.b Consequently, this period was (and could be) included in our previous report of the reliability and validity of nurse anesthetists’ evaluations of faculty supervision.13 During the last 6 months, there was feedback to all anesthesiologists regarding their individual supervision scores and comments provided by residents (during the preceding 6 months) and nurse anesthetists (during the preceding 12 months).b In addition, at 12 months, the importance of high-quality supervision of nurse anesthetists was discussed at a faculty meeting, and, also starting at 12 months, anesthesiologists with the lowest nurse anesthetist supervision scores (see Discussion) were individually counseled by the Vice-Chair for Faculty Development (BJH).10

Anesthesiologists’ mean supervision scores were calculated as described previously.5,9 For each individual evaluation, the supervision score was the mean of the Likert scale answers to the 9 questions (Table 1). Mean supervision scores for each anesthesiologist were calculated with each rater (resident or nurse anesthetist) given equal weight (i.e., the mean of each rater’s scores for the anesthesiologist was taken first, and then the mean among those rater means was used; meanequal).9,10 We showed previously that, in our department, when evaluations are obtained during a 6-month period, the differences between the simple arithmetic means of all evaluations (in which each evaluation is given equal weight; meanpooled) and the weighted means (meanequal) are negligibly small because most (>90%) raters work with individual anesthesiologists on only 1 or 2 occasions.9,10 Nevertheless, for the current article, the weighted means (meanequal) were used for greater psychometric reliability and dependability.7,9,10 Specifically, the SEMs could be calculated accurately.9,10
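The distinction between meanpooled and meanequal can be sketched numerically. Assuming hypothetical evaluation scores for one anesthesiologist (not the study's data), where each evaluation score is already the mean of the 9 Likert answers:

```python
import numpy as np

# Hypothetical scores for one anesthesiologist (NOT the study's data):
# each inner list holds one rater's evaluation scores.
scores_by_rater = [[3.6, 3.8],   # rater A evaluated this faculty member twice
                   [4.0],        # rater B evaluated once
                   [3.2],        # rater C evaluated once
                   [3.9, 3.7]]   # rater D evaluated twice

# mean_pooled: every evaluation given equal weight
all_scores = [s for rater in scores_by_rater for s in rater]
mean_pooled = np.mean(all_scores)

# mean_equal: average within each rater first, then average across raters
rater_means = np.array([np.mean(r) for r in scores_by_rater])
mean_equal = rater_means.mean()

# SEM of mean_equal across raters (used later for precision weighting)
sem = rater_means.std(ddof=1) / np.sqrt(len(rater_means))
```

With most raters contributing only 1 or 2 evaluations, the two means differ little (here 3.700 versus 3.675), consistent with the negligible differences the authors report.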

We tested for positive (1-sided) Pearson correlation between each anesthesiologist’s mean supervision score during each 6-month period and the anesthesiologist’s hours of OR supervision during the same 6-month period. Analyses were limited to anesthesiologists with ≥9 different resident raters or ≥9 different nurse anesthetist raters during the period (Table 2).7 The correlations were recalculated with weighting by the precision of the estimate of each anesthesiologist’s mean supervision score. The precision was estimated as the inverse of the square of the SEM. The anesthesiologist’s hours of supervision during the 6-month period were reported as hours per week for ease of interpretation (Table 2, Fig. 1). These hours are the sum of hours providing clinical supervision for both residents and nurse anesthetists because anesthesiologists in our department often supervise a resident and a nurse anesthetist at the same time.9c The upper 95% confidence limit for the correlation is reported using the less than symbol (<).
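A minimal sketch of this analysis follows, using simulated (hypothetical) data. The 1-sided P value and the upper 95% confidence limit via the Fisher z transform are standard formulas; the precision-weighted correlation shown is one plausible implementation, not necessarily the authors' code:

```python
import numpy as np
from scipy import stats

# Hypothetical per-anesthesiologist values (NOT the study's data): mean
# supervision score and weekly hours of clinical supervision in one period.
rng = np.random.default_rng(0)
hours = rng.uniform(10, 50, size=40)
score = 3.7 - 0.002 * hours + rng.normal(0, 0.15, size=40)

# 1-sided test for POSITIVE Pearson correlation
r, p_two = stats.pearsonr(score, hours)
p_one = p_two / 2 if r > 0 else 1 - p_two / 2

# Upper 95% 1-sided confidence limit via the Fisher z transform
n = len(hours)
upper = np.tanh(np.arctanh(r) + stats.norm.ppf(0.95) / np.sqrt(n - 3))

# Precision-weighted correlation: weights = 1 / SEM^2 (hypothetical SEMs)
def weighted_corr(x, y, w):
    mx, my = np.average(x, weights=w), np.average(y, weights=w)
    cov = np.average((x - mx) * (y - my), weights=w)
    return cov / np.sqrt(np.average((x - mx) ** 2, weights=w)
                         * np.average((y - my) ** 2, weights=w))

sems = rng.uniform(0.05, 0.20, size=40)
r_weighted = weighted_corr(score, hours, 1.0 / sems ** 2)
```

Reporting the upper confidence limit (rather than the point estimate alone) lets a reader bound how large any positive correlation could plausibly be.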

Figure 1: Anesthesiologists’ mean supervision scores provided by residents during the last 6-month period plotted versus the anesthesiologists’ hours per week of clinical activity during the 6-month period. The figure shows the absence of a positive correlation. The red line is a LOWESS smoothing line with tension of 0.7. The error bars show the SEM, equally weighting the mean scores from each resident for the anesthesiologist. Because of overlap of data points, horizontal jitter from −0.25 to +0.25 hours has been added to some data points; overall, the jitter averages 0.0 hours.

The primary hypothesis considers whether the quality of clinical (OR) supervision represents a unique dimension of the value of anesthesiologists versus whether there is moderate or strong positive correlation between clinical hours and quality of supervision. However, finding an absence of positive correlation would not necessarily indicate that monitoring of the quality of supervision is useful. Monitoring would be useful if, because of monitoring, change is possible. Because our department uses monitoring and feedback,10,16 and the expectations of residents and nurse anesthetists for supervision have been published,8,12,13 we quantified the numbers of anesthesiologists meeting residents’ and nurse anesthetists’ expectations for supervision during the first and last 6 months of the study (i.e., without feedback versus with feedback). We also quantified pairwise changes in each anesthesiologist’s meanequal supervision scores between the 2 periods. This was done using Student 1-group 2-sided t tests. They were repeated weighting each difference of means by the precision of the difference. The precision was estimated as the inverse of the squared SE of the difference (i.e., the sum of the squares of the anesthesiologist’s SEMs for each of the 2 periods).
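The paired comparison can be sketched as follows, again with simulated (hypothetical) data standing in for the study's paired anesthesiologist scores; the precision weights follow the formula described above:

```python
import numpy as np
from scipy import stats

# Hypothetical paired mean_equal scores for the same anesthesiologists in the
# first and last 6-month periods (NOT the study's data).
rng = np.random.default_rng(1)
first = rng.uniform(2.8, 3.9, size=30)
last = np.clip(first + rng.normal(0.25, 0.10, size=30), None, 4.0)  # 4.0 ceiling
diff = last - first

# Unweighted: Student 1-group, 2-sided t test on the pairwise differences
t_stat, p_two = stats.ttest_1samp(diff, 0.0)

# Precision weighting: each anesthesiologist's difference weighted by
# 1 / SE(diff)^2, where SE(diff)^2 = SEM_first^2 + SEM_last^2
sem_first = rng.uniform(0.03, 0.10, size=30)  # hypothetical per-period SEMs
sem_last = rng.uniform(0.03, 0.10, size=30)
weights = 1.0 / (sem_first ** 2 + sem_last ** 2)
weighted_mean_diff = np.average(diff, weights=weights)
```

The precision weighting downweights anesthesiologists whose period means were estimated from fewer or more variable evaluations.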


Anesthesiologists’ mean supervision scores were not positively correlated with hours of clinical activity. For the first 6 months of the studied period (i.e., when no feedback was provided to anesthesiologists), the correlations were r = −0.184 among scores provided by residents (P = 0.92, 95% upper r < −0.01, N = 57 anesthesiologists) and r = −0.038 among scores provided by nurse anesthetists (P = 0.70, r < 0.12, N = 61 anesthesiologists). For the last 6 months of the studied period (i.e., when feedback was provided to anesthesiologists regarding supervision scores provided by both residents and nurse anesthetists), the correlations were r = −0.283 among scores provided by residents (P = 0.98, r < −0.12, N = 51) and r = −0.095 among scores provided by nurse anesthetists (P = 0.79, r < 0.08, N = 62). With anesthesiologists’ mean supervision scores weighted by the precisions of estimates, the correlation coefficients were r = −0.54, r = −0.05, r = −0.76, and r = −0.06, respectively (all P > 0.65).d Figure 1 shows (graphically) the absence of significant positive correlation between anesthesiologists’ mean supervision scores and hours of direct clinical activity, among residents providing scores during the last 6 months.

Previous studies have shown that, among residents, anesthesiologists’ supervision scores ≥3.40 meet expectations (scoring range 1.00–4.00; Table 1).e The means of supervision scores provided by residents were ≥3.40 among 56 of 57 anesthesiologists (98%) during the first 6 months and 51 of 51 (100%) during the last 6 months. Pairwise by anesthesiologist (N = 44),f the mean supervision scores provided by residents increased by 0.08 ± 0.01 points when equally weighting each anesthesiologist (P < 0.0001) and by 0.04 ± 0.02 points weighting by the precision of the difference (P = 0.0011) (Table 2).

Previous studies have shown that, among nurse anesthetists, anesthesiologists’ supervision scores ≥3.14 meet expectations.8,13 The means of supervision scores provided by nurse anesthetists were ≥3.14 among 31 of 61 anesthesiologists (51%) during the first 6 months and 57 of 62 (92%) during the last 6 months. Pairwise by anesthesiologist (N = 49), the mean supervision scores provided by nurse anesthetists increased by 0.28 ± 0.02 points when equally weighting each anesthesiologist (P < 0.0001) and by 0.27 ± 0.02 points weighting by the precision of the difference (P < 0.0001) (Tables 2 and 3). The pairwise increases were greater for anesthesiologists with lesser means during the first 6 months (P < 0.0001).

Table 3: Change in Nurse Anesthetists’ Answers to the 9 Questions of Table 1 Between the First and Last 6-Month Periods


Previously, we showed that daily evaluations of faculty clinical supervision are only weakly (in fact, negligibly) influenced by daily variations in faculty clinical assignments (number of rooms, case acuity, hours worked together, or hours worked with others).9 In this study, we show that long-term (6 months) faculty clinical supervision scores also are affected negligibly by long-term (6 months) direct clinical activity. Stated simply, in our department, anesthesiologists’ effectiveness in clinical supervision is not positively correlated with the amount of direct clinical care that each provides weekly. The observation that all correlation coefficients were negative indicates that, in our department, anesthesiologists who provide care less frequently in the surgical suites may tend to provide (very) slightly more effective supervision than anesthesiologists who provide care in the surgical suites more often. Thus, the amount of clinical work performed by an anesthesiologist and the quality of the supervision they provide do not necessarily follow one another. A very active clinician can provide ineffective supervision, and a less frequent clinician can be very effective. Thus, when the role of the anesthesiologist is to supervise anesthesia residents and/or nurse anesthetists, monitoring faculty supervision scores makes sense. Supervision serves as an independent contributor to the value that an individual anesthesiologist adds to the care of the patient.

Our study also indicates that providing clinical supervision scores to anesthesiologists can change behaviors. There was, for our department, an increase in anesthesiologists’ clinical supervision scores provided by residents (Table 2). However, the increase in scores was very small, both on an absolute scale and when compared with the increase in supervision scores provided by nurse anesthetists. We do not know why. It could be that the supervision scores provided by residents during the first 6 months were sufficiently favorable that there was little room for improvement. However, we know that it was not due to lesser feedback to the anesthesiologists regarding resident scores because the duration and frequency of feedback regarding residents’ scores were greater than that provided regarding nurse anesthetists’ scores (see below).

The value of supervision, specific to anesthesia residents, is somewhat understood. First, there is the influence on clinical care. A recent systematic review concluded that enhanced faculty supervision of residents (of multiple specialties) favorably affects (1) procedural complications, (2) assessment of patient acuity, (3) diagnostic and treatment plans, and (4) resident adherence to quality of care guidelines.17 In a survey of US anesthesia residents using the same de Oliveira Filho et al. supervision question set (Table 1), residents who reported mean (department wide) supervision scores <3.0 (“frequent”) reported significantly greater frequencies of occurrences of mistakes with negative consequences to patients as well as medication errors.6 In a second survey study by the same investigators asking about anesthesia residents’ rotations, again it was found that lesser rotation-wide supervision scores were associated with more resident errors.12 Indices of resident burnout were also associated with more resident errors.12,18 When both supervision scores and resident burnout were included in the model, resident burnout was no longer associated with errors.12 This suggests that faculty supervision may, at least in part, compensate for variations in anesthesia residents’ performance and contribute to better patient outcomes. Second, the supervision scores provided by residents reflect their perceptions of the rotation and/or department in which they work. Anesthesiologists with greater individual supervision scores contribute to greater departmental supervision scores.11 Anesthesiologists with greater individual scores are considered by residents to be more suitable to care for their family members.7 Rotations with greater supervision scores are considered to have stronger safety culture, in all 5 of its dimensions.12

When we did not provide feedback to the anesthesiologists regarding supervision scores provided by the nurse anesthetists, approximately half of the anesthesiologists did not have average supervision scores greater than the minimum value for nurse anesthetists’ expectations.8e Providing individual scores as feedback was associated with a significant positive change in anesthesiologists’ behaviors13 and supervision scores, with nearly all (92%) anesthesiologists meeting nurse anesthetists’ minimum expectations for the scores.8 Although each of the 9 questions had an increase in mean score among the nurse anesthetists, the largest increases were for the 2 questions with the lowest scores during the first and last periods: questions 3 and 7, the 2 questions associated with teaching and safety (Tables 1 and 3). Thus, the anesthesiologists increased their in-OR (bedside) teaching and emphasis on safe practice.

Previous studies8,12,13 and our current results provide some insight into perceptions of how supervision influences the work environment. In a national survey of anesthesia residents, supervision quality was closely associated with the safety-culture variable: “Teamwork within [the rotation].”12 Thus, although residents’ and nurse anesthetists’ expectations for supervision have been studied,8,11 an equivalent assessment would likely have been of expectations for teamwork. For example, the teamwork dimension includes: “When one area in this rotation gets busy, others help out,” and “When a lot of work needs to be done quickly, we work together as a team to get the work done.”12 Consistent with this, in a prior qualitative analysis of written comments from our department (based on data from the first year [i.e., no feedback to anesthesiologists]), nurse anesthetists assigned low supervision scores more often when there was perceived to be limited physical presence of the anesthesiologist (odds ratio = 74, P = 0.0003).13 Thus, a supervising anesthesiologist may communicate what they want to have done and yet receive a low supervision score by failing to be perceived as present as a team member.13 Our current study (Table 3) supplements those findings because anesthesiologists cannot have increased their scores for teaching in the clinical setting without having been perceived as present as team members.

Our secondary findings (increased supervision scores between the first and last 6-month periods) are limited in part because we had no control group (i.e., our findings are simply before/after observations paired by anesthesiologist). However, our finding that providing feedback to anesthesiologists was associated with greater quality of supervision (Tables 2 and 3) was expected. Hastings and Rickard19 recently reviewed evidence that “providing instructors with feedback from their pupils is an effective method to promote the quality of clinical teaching.” Finding associations between teaching and supervision, even for nurse anesthetists, also matched previous findings from our department.7,13 Although the level of supervision among nurse anesthetists is less than that among residents, and especially so for the questions related to teaching, a score of 4 (“always”) for all 9 questions (including the teaching ones) was (and remains) more common than even the next most common combination of scores.13

All of our findings regarding nurse anesthetists should be interpreted in terms of State of Iowa law and University of Iowa policy. Because Iowa is an “opt-out” state, when anesthesia care is provided by nurse anesthetists in our department, the degree of supervision provided by anesthesiologists is based on the judgment of the anesthesiologist and the nurse anesthetist, not billing directives of the US Centers for Medicare and Medicaid Services.g This allowed us previously to study a broad continuum of anesthesiologist supervision13 and of anesthesiologist to nurse anesthetist supervisory ratios (e.g., at most 1-to-3, often 1-to-2, and sometimes just 1-to-1, such as on weekends).9,15 We have shown that supervision scores of anesthesiologists are (highly) insensitive to supervisory ratios within this range.9 However, our results are limited in that they do not apply to supervisory ratios beyond this range (e.g., 1-to-4). Using data from electronic communications logs and electronic medical record data, studies from other institutions have shown that anesthesiologists cannot supervise >3 ORs and also attend to patients’ critical events, unless ORs frequently wait for the anesthesiologists’ presence.20–23 In addition, we know that low supervision scores are more common when the anesthesiologist is not physically present and team activity is expected.12,13

In conclusion, we found strong evidence for a lack of positive correlation between the quantity of anesthesiologists’ clinical work and the quality of anesthesiologists’ clinical supervision. Consequently, departments should measure not only the clinical hours (or equivalent days or other units) of anesthesiologists as indexes of their individual values in the OR setting but also the quality of their supervision. Our secondary observations show that when supervision quality is monitored and feedback is provided to anesthesiologists, behavior changes and supervision quality can increase. In our opinion, the results suggest that anesthesiology department managers should (1) monitor (and perhaps report) the quality of their departments’ supervision,14 and (2) establish processes so that individual anesthesiologists can learn about the quality of the supervision that they are providing.24–35h


Name: Franklin Dexter, MD, PhD.

Contribution: This author helped design the study, analyze the data, and write the manuscript.

Attestation: Franklin Dexter has approved the final manuscript.

Name: Bradley J. Hindman, MD.

Contribution: This author helped conduct the study and write the manuscript.

Attestation: Bradley J. Hindman has approved the final manuscript.


Dr. Franklin Dexter is the Statistical Editor and Section Editor for Economics, Education, and Policy for Anesthesia & Analgesia. This manuscript was handled by Dr. Steven L. Shafer, Editor-in-Chief, and Dr. Dexter was not involved in any way with the editorial process or decision.


a The values for the first 6 months differ slightly from the previous report.9 In the current study, working together for ≥1 hour was based on electronic medical record data. For the previous report, billing data were used so that associations could be studied, including quantification of intensity of clinical care.9 We reported responses for 92.9% of 3196 resident occasions (<3202) and for 90.5% of 3858 nurse anesthetist occasions (<3910).9

b All comments were reviewed, word for word, by the Vice-Chair for Faculty Development (BJH), and then redacted to protect the identity of the raters and the confidentiality of the patients.

c The reported hours do not include the hours of clinical care that were provided by anesthesiologists working with student registered nurse anesthetists, providing care directly (personally), and/or providing other forms of clinical care (e.g., Pain Clinic or Critical Care).

d With weighting, the large mean supervision scores achieved by some anesthesiologists have a substantial and likely disproportionate influence because of the boundary of 4.0 (“always”).

e Surveyed anesthesiology residents and nurse anesthetists at Mayo Clinic gave their impression “of the hypothetical supervising anesthesiologist who meets … expectations … not … who exceeds expectations or whose activity is below … expectations.”8 Among N = 47 residents, the differences from the boundary of 4.0 were 0.604 ± 0.305 (mean ± SD). Among N = 153 nurse anesthetists, the differences were 0.865 ± 0.424. The coefficients of variation were 49.0% and 50.4%, respectively. Thus, residents have expectations for more supervision than do nurse anesthetists, but the heterogeneity among individuals in expectations is indistinguishable between groups.

f The number of anesthesiologists is less because the N = 44 were the anesthesiologists with ≥9 different resident raters for both the first and last 6 months.

g Available at: Accessed March 6, 2015.

h Stepaniak and Dexter recently summarized and referenced multiple other measures of the quality of an anesthesia group’s managerial decisions (e.g., reducing how long patients wait for surgery).24–35


1. Abouleish AE, Zornow MH, Levy RS, Abate J, Prough DS. Measurement of individual clinical productivity in an academic anesthesiology department. Anesthesiology. 2000;93:1509–16
2. Abouleish AE, Apfelbaum JL, Prough DS, Williams JP, Roskoph JA, Johnston WE, Whitten CW. The prevalence and characteristics of incentive plans for clinical productivity among academic anesthesiology programs. Anesth Analg. 2005;100:493–501
3. Reich DL, Galati M, Krol M, Bodian CA, Kahn RA. A mission-based productivity compensation model for an academic anesthesiology department. Anesth Analg. 2008;107:1981–8
4. Abouleish AE. Productivity-based compensations versus incentive plans. Anesth Analg. 2008;107:1765–7
5. de Oliveira Filho GR, Dal Mago AJ, Garcia JH, Goldschmidt R. An instrument designed for faculty supervision evaluation by anesthesia residents and its psychometric properties. Anesth Analg. 2008;107:1316–22
6. De Oliveira GS Jr, Rahmani R, Fitzgerald PC, Chang R, McCarthy RJ. The association between frequency of self-reported medical errors and anesthesia trainee supervision: a survey of United States anesthesiology residents-in-training. Anesth Analg. 2013;116:892–7
7. Hindman BJ, Dexter F, Kreiter CD, Wachtel RE. Determinants, associations, and psychometric properties of resident assessments of faculty operating room supervision in a US anesthesia residency program. Anesth Analg. 2013;116:1342–51
8. Dexter F, Logvinov II, Brull SJ. Anesthesiology residents’ and nurse anesthetists’ perceptions of effective clinical faculty supervision by anesthesiologists. Anesth Analg. 2013;116:1352–5
9. Dexter F, Ledolter J, Smith TC, Griffiths D, Hindman BJ. Influence of provider type (nurse anesthetist or resident physician), staff assignments, and other covariates on daily evaluations of anesthesiologists’ quality of supervision. Anesth Analg. 2014;119:670–8
10. Dexter F, Ledolter J, Hindman BJ. Bernoulli Cumulative Sum (CUSUM) control charts for monitoring of anesthesiologists’ performance in supervising anesthesia residents and nurse anesthetists. Anesth Analg. 2014;119:679–85
11. Hindman BJ, Dexter F, Smith TC. Anesthesia residents’ global (departmental) evaluation of faculty anesthesiologists’ supervision can be less than their average evaluations of individual anesthesiologists. Anesth Analg. 2015;120:204–8
12. De Oliveira GS Jr., Dexter F, Bialek JM, McCarthy RJ. Reliability and validity of assessing subspecialty level of faculty anesthesiologists’ supervision of anesthesiology residents. Anesth Analg. 2015;120:209–13
13. Dexter F, Masursky D, Hindman BJ. Reliability and validity of the anesthesiologist supervision instrument when certified registered nurse anesthetists provide scores. Anesth Analg. 2015;120:214–9
14. de Oliveira Filho GR, Dexter F. Interpretation of the association between frequency of self-reported medical errors and faculty supervision of anesthesiology residents. Anesth Analg. 2013;116:752–3
15. Dexter F, Wachtel RE, Todd MM, Hindman BJ. The “fourth mission:” the time commitment of anesthesiology faculty for management is comparable to their time commitments to education, research, and indirect patient care. A&A Case Reports. 2015 in press
16. Epstein RH, Dexter F, Patel N. Influencing anesthesia provider behavior using anesthesia information management system data for near real-time alerts and post hoc reports. Anesth Analg. 2015 in press
17. Farnan JM, Petty LA, Georgitis E, Martin S, Chiu E, Prochaska M, Arora VM. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87:428–42
18. de Oliveira GS Jr, Chang R, Fitzgerald PC, Almeida MD, Castro-Alves LS, Ahmad S, McCarthy RJ. The prevalence of burnout and depression and their association with adherence to safety and practice standards: a survey of United States anesthesiology trainees. Anesth Analg. 2013;117:182–93
19. Hastings RH, Rickard TC. Deliberate practice for achieving and maintaining expertise in anesthesiology. Anesth Analg. 2015;120:449–59
20. Epstein RH, Dexter F. Influence of supervision ratios by anesthesiologists on first-case starts and critical portions of anesthetics. Anesthesiology. 2012;116:683–91
21. Epstein RH, Dexter F. Implications of resolved hypoxemia on the utility of desaturation alerts sent from an anesthesia decision support system to supervising anesthesiologists. Anesth Analg. 2012;115:929–33
22. Smallman B, Dexter F, Masursky D, Li F, Gorji R, George D, Epstein RH. Role of communication systems in coordinating supervising anesthesiologists’ activities outside of operating rooms. Anesth Analg. 2013;116:898–903
23. Epstein RH, Dexter F, Lopez MG, Ehrenfeld JM. Anesthesiologist staffing considerations consequent to the temporal distribution of hypoxemic episodes in the postanesthesia care unit. Anesth Analg. 2014;119:1322–33
24. Stepaniak PS, Dexter F. Monitoring anesthesiologists’ and anesthesiology departments’ managerial performance. Anesth Analg. 2013;116:1198–200
25. Dexter F, Willemsen-Dunlap A, Lee JD. Operating room managerial decision-making on the day of surgery with and without computer recommendations and status displays. Anesth Analg. 2007;105:419–29
26. Stepaniak PS, Mannaerts GH, de Quelerij M, de Vries G. The effect of the Operating Room Coordinator’s risk appreciation on operating room efficiency. Anesth Analg. 2009;108:1249–56
27. Ledolter J, Dexter F, Wachtel RE. Control chart monitoring of the numbers of cases waiting when anesthesiologists do not bring in members of call team. Anesth Analg. 2010;111:196–203
28. Wang J, Dexter F, Yang K. A behavioral study of daily mean turnover times and first case of the day start tardiness. Anesth Analg. 2013;116:1333–41
29. McIntosh C, Dexter F, Epstein RH. The impact of service-specific staffing, case scheduling, turnovers, and first-case starts on anesthesia group and operating room productivity: a tutorial using data from an Australian hospital. Anesth Analg. 2006;103:1499–516
30. Pandit JJ, Dexter F. Lack of sensitivity of staffing for 8-hour sessions to standard deviation in daily actual hours of operating room time used for surgeons with long queues. Anesth Analg. 2009;108:1910–5
31. van Oostrum JM, Van Houdenhoven M, Vrielink MM, Klein J, Hans EW, Klimek M, Wullink G, Steyerberg EW, Kazemier G. A simulation model for determining the optimal size of emergency teams on call in the operating room at night. Anesth Analg. 2008;107:1655–62
32. Kynes JM, Schildcrout JS, Hickson GB, Pichert JW, Han X, Ehrenfeld JM, Westlake MW, Catron T, Jacques PS. An analysis of risk factors for patient complaints about ambulatory anesthesiology care. Anesth Analg. 2013;116:1325–32
33. Dexter F, Wachtel RE, Epstein RH, Ledolter J, Todd MM. Analysis of operating room allocations to optimize scheduling of specialty rotations for anesthesia trainees. Anesth Analg. 2010;111:520–4
34. Xiao Y, Jones A, Zhang BB, Bennett M, Mears SC, Mabrey JD, Kennerly D. Team consistency and occurrences of prolonged operative time, prolonged hospital stay, and hospital readmission: a retrospective analysis. World J Surg. 2015;39:890–6
35. Bayman EO, Dexter F, Todd MM. Assessing and comparing anesthesiologists’ performance on mandated metrics using a Bayesian approach. Anesthesiology. 2015;123:101–15
© 2015 International Anesthesia Research Society