Economics, Education, and Policy: Research Report

Anesthesiology Residents’ and Nurse Anesthetists’ Perceptions of Effective Clinical Faculty Supervision by Anesthesiologists

Dexter, Franklin MD, PhD*; Logvinov, Ilana I. RN, MSN, CCRP; Brull, Sorin J. MD, FCARCSI (Hon)

doi: 10.1213/ANE.0b013e318286dc01

At academic hospitals, anesthesia care in operating rooms (ORs) is provided principally by anesthesia trainees (e.g., anesthesiology residents and fellows [“residents”]) and certified registered nurse anesthetists (CRNAs), who are guided, both clinically and educationally, by faculty anesthesiologists. Performance appraisals of faculty anesthesiologists in clinical practice should therefore evaluate their principal activity, which is such supervision.1

In the present article, by supervision we are not referring to a regulatory requirement (e.g., the Centers for Medicare & Medicaid Services’ billing phrases “medical supervision” and “medical direction”). Rather, when we use the term “supervision,” we refer to an activity with multiple attributes, including being physically present during critical portions of cases, participating in perianesthesia planning, providing clinical and educational guidance during the anesthetic, and providing autonomy with feedback to supervised nonconsultant anesthesia providers (Table 1).2

Table 1: de Oliveira Filho et al.’s Instrument2 for Measuring Faculty Anesthesiologists’ Supervision of Anesthesia Residents During Clinical Operating Room Care

de Oliveira Filho et al.2 developed a valid and reliable instrument for measuring faculty supervision of anesthesiology residents. The instrument assesses 9 attributes of supervision,2 each scored on a 4-point (1–4) scale, with 3 being “frequent.” The overall mean across attributes is the final supervision score. The attributes have high internal consistency when the instrument is used to evaluate faculty (Cronbach α = 0.93), showing that the questions evaluate attributes of one common dimension: the quality of faculty supervision.2,3

De Oliveira Jr. et al.4 surveyed anesthesiology residents from >100 US programs. Less than “frequent” supervision was associated with reported “mistakes that [had] negative consequences for the patient” with an accuracy (area under the curve) of approximately 89%.4,5 Supervision less than “frequent” also predicted “medication errors (dose or incorrect drug) in the last year” with an accuracy of approximately 93%.4,5

In this article, we report a survey performed to evaluate the applicability of the instrument for measuring faculty clinical and educational guidance (supervision) at hospitals with both residents and CRNAs. We evaluate whether CRNAs, like residents,2–4 perceive that supervision meeting their expectations is at least “frequent.” We also assess the sensitivity of the CRNA findings to years in practice.

METHODS

This survey study was considered “to be exempt from IRB review” on April 19, 2012, by the Mayo Clinic IRB (IRB application #12-003670). The survey was performed using REDCap Survey Software, Version 1.3.9 (© 2012 Vanderbilt University, Nashville, TN).

E-mail invitations to participate in the survey were sent on Friday, April 20, 2012, to all residents (N = 80), CRNAs (N = 300), and student registered nurse anesthetists (SRNAs; N = 54) at 3 US teaching hospitals. No statistical power analysis was performed because the entire population of trainees and CRNAs was invited to participate. However, we had calculated that, even with only a 40% response rate among the CRNAs (n = 120), there would be 95% statistical power to detect 65% (vs 50%) answering with a score of at least “frequent.” The SRNA data are considered only as secondary information, as explained later. The final survey date (May 18) was 4 weeks after the initial e-mail invitation and was specified in that invitation. A planned e-mail reminder was sent 14 days after the initial e-mail.
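
For readers who wish to verify the reported power, the following is a minimal simulation sketch in Python with SciPy (the authors’ software was StatXact/SYSTAT). The sample size of 120 is 40% of the 300 invited CRNAs; the one-sided exact binomial test at α = 0.05 is our assumption, chosen because it reproduces the approximately 95% figure in the text.

```python
# Simulated power for detecting 65% (vs a null of 50%) answering
# with a score of at least "frequent," with n = 120 respondents
# (40% of 300 CRNAs). Assumption (not stated in the article):
# a one-sided exact binomial test at alpha = 0.05.
import numpy as np
from scipy.stats import binomtest

rng = np.random.default_rng(0)
n, p_true, p_null, alpha, trials = 120, 0.65, 0.50, 0.05, 10_000

successes = rng.binomial(n, p_true, size=trials)
power = np.mean([
    binomtest(int(k), n, p_null, alternative="greater").pvalue < alpha
    for k in successes
])
print(f"Estimated power: {power:.2f}")  # approximately 0.95
```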

The survey started with 2 demographic questions: “You are being asked to participate in a survey that seeks to evaluate the attributes that define medical supervision at” <hospital name> “operating rooms. This is NOT a survey of current practice. What is your current role as an anesthesia provider: resident, SRNA, CRNA, or Fellow?” The second question asked the participant to “Enter the month and year when you started your clinical anesthesia training.” If an SRNA or CRNA, the participant was asked to “Enter the month and year when you started nurse anesthesia school.” If a resident physician or fellow, the participant was asked to “Enter the month and year when you started your CA-1 year.”

The following instructions were then provided: “Limit your responses to operating room anesthesia, not anesthesia provided outside the OR, such as remote locations. This survey is not meant to rate individual anesthesiologists. Instead, give your impression of 9 attributes of the hypothetical supervising anesthesiologist who MEETS your expectations. Do not provide scores based on an anesthesiologist who EXCEEDS your expectations or whose activity is BELOW your expectations. Also, please do not provide scores based on an anesthesiologist working with other anesthesia providers, just what would be meeting expectations when working with you. For each of the 9 questions, enter a number between 1 and 4 including decimals (e.g., numbers such as 1.0, 1.5 or 3.5).” The subsequent 9 questions were taken verbatim from the original study by de Oliveira Filho et al., with the 4-point scale’s bounds also taken verbatim (Table 1).2 The bounds were listed below each of the 9 questions in the instrument. We used a continuous (analog) scale for each question, rather than a literal 4-point scale, because we were concerned about how we might analyze the data if the internal consistency were low for the CRNAs.

Responses to all 9 questions of the instrument were required. Participants could choose to close their Web browser without answering, but to submit the survey all questions had to be completed. On submission, the following instructions were given: “Thank you for completing the survey. Please do not discuss your survey response with others. All anesthesia providers have been invited to participate and we need to obtain unbiased responses from each individual.”

Because one hospital was much smaller than the others (N = 62 of the 434 providers invited), and survey replies were collected by provider type and years of training, data were, by design, not identified by hospital, to assure anonymity. Furthermore, a plot of score versus years by group was deliberately not included because the data were sparse for residents with many years of training (i.e., fellows) and for CRNAs with many years of experience.

The statistical analysis was performed using StatXact-9 (Cytel Software Corporation, Cambridge, MA) and SYSTAT 13.1 (Systat Software, Inc., Chicago, IL). Percentages were compared with ½ (testing “most”) using the binomial test. The month and year of the start of training were converted into a decimal number, with training considered to have started on the first day of the entered month. The association between years from the start of training and the score provided was tested using the Kendall τb. Differences in means between groups were tested using the Welch t test (i.e., Student t test with unequal variances). The residuals followed a normal distribution (Shapiro-Wilk test, P = 0.19). All P values shown are 2-sided. The 95% confidence interval (CI) for the Cronbach α was calculated using 1000 bootstrap samples (percentile method).
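
As a minimal sketch of these analyses (not the authors’ actual StatXact/SYSTAT code), the following Python fragment applies the same three tests to simulated stand-in data drawn with the sample sizes, means, and SDs reported in the Results:

```python
# Sketch of the three core tests, using SciPy in place of
# StatXact/SYSTAT. The data are simulated stand-ins with the
# sample sizes, means, and SDs reported in the Results.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
scores_crna = rng.normal(3.14, 0.42, 153).clip(1, 4)
scores_res = rng.normal(3.40, 0.30, 47).clip(1, 4)
years_crna = rng.normal(15.1, 10.3, 153).clip(0, None)

# 1. Binomial test that most (>50%) CRNAs expect at least
#    "frequent" supervision (mean score >= 3.0).
k = int((scores_crna >= 3.0).sum())
print(stats.binomtest(k, len(scores_crna), 0.5).pvalue)

# 2. Kendall tau-b: years since start of training vs score.
tau, p_tau = stats.kendalltau(years_crna, scores_crna)

# 3. Welch t test (unequal variances) comparing group means.
t, p_t = stats.ttest_ind(scores_crna, scores_res, equal_var=False)
print(tau, p_tau, t, p_t)
```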

RESULTS

Most (>50%) CRNAs (67%, N = 103/153) and residents (94%, N = 44/47) perceived that the mean faculty supervision score meeting their expectations was at least “frequent,” i.e., a mean score ≥3.0 (Table 1; both P < 0.0001). There was no association between years since starting training and the perceived score for supervision meeting expectations among CRNAs (Kendall τb = 0.01; 95% CI, −0.13 to +0.10; P = 0.90) or residents (τb = 0.03; 95% CI, −0.16 to +0.23; P = 0.77).

The mean ± SD score for a level of supervision that met expectations was 3.14 ± 0.42 for CRNAs and 3.40 ± 0.30 for residents. The CRNAs’ mean level of expected supervision was 0.26 less than that of the residents (P < 0.0001; 95% CI, 0.15 to 0.37 less). However, 30% of individual CRNAs had expectation scores greater than the residents’ mean of 3.40, and 23% of individual residents had expectation scores less than the CRNAs’ mean of 3.14.
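
As a check, the Welch-type 95% CI for this difference can be reconstructed from the summary statistics alone. The sketch below is our reconstruction, not the authors’ code, and it reproduces the reported interval of 0.15 to 0.37:

```python
# Welch 95% CI for the resident-minus-CRNA difference in mean
# expected-supervision score, reconstructed from the summary
# statistics reported in the Results.
import math
from scipy.stats import t

n1, m1, s1 = 47, 3.40, 0.30    # residents
n2, m2, s2 = 153, 3.14, 0.42   # CRNAs

se = math.sqrt(s1**2 / n1 + s2**2 / n2)
# Welch-Satterthwaite degrees of freedom
df = se**4 / ((s1**2 / n1)**2 / (n1 - 1) + (s2**2 / n2)**2 / (n2 - 1))
half_width = t.ppf(0.975, df) * se
diff = m1 - m2
print(f"{diff:.2f} (95% CI, {diff - half_width:.2f}"
      f" to {diff + half_width:.2f})")
# prints 0.26 (95% CI, 0.15 to 0.37), matching the text
```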

Secondary Observations

The following additional results are secondary observations (e.g., for future investigations).

The participation rate was 51% among CRNAs and 58% among resident physicians. Only 1 additional participant was added during the final survey week. The mean ± SD years since the start of training were 15.1 ± 10.3 for CRNAs and 2.2 ± 1.1 for residents.

Because we designed the study to evaluate “meeting expectations,” the design was expected to (and did) result in the questions having lower internal consistency than when the instrument is used to assess faculty.2,3 The Cronbach α was 0.83 (95% CI, 0.80 to 0.86; N = 227). When the calculation was repeated excluding the SRNAs, the Cronbach α was unchanged (N = 200).
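
A minimal sketch of the Cronbach α calculation with a 1000-sample bootstrap percentile CI, as described in the Methods, is shown below. The N × 9 response matrix is a simulated stand-in (a shared factor plus noise, with the noise level tuned so that α lands near the reported 0.83):

```python
# Cronbach alpha with a 1000-sample bootstrap percentile 95% CI,
# mirroring the method described in the Methods; `responses` is a
# simulated 227 x 9 stand-in for the survey data.
import numpy as np

def cronbach_alpha(x):
    """x: 2-D array, rows = respondents, columns = the 9 items."""
    k = x.shape[1]
    item_vars = x.var(axis=0, ddof=1).sum()
    total_var = x.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
latent = rng.normal(3.2, 0.35, (227, 1))            # shared factor
responses = (latent + rng.normal(0, 0.48, (227, 9))).clip(1, 4)

alpha = cronbach_alpha(responses)
boot = [
    cronbach_alpha(responses[rng.integers(0, 227, 227)])
    for _ in range(1000)
]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"alpha = {alpha:.2f} (95% CI, {lo:.2f} to {hi:.2f})")
```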

The month and year of starting training could be left blank and were left blank by 4 CRNAs (resulting N = 149), 1 resident (resulting N = 46), and 1 SRNA (resulting N = 26). Among SRNAs, there was an association between years since starting training and score (τb = −0.37; P = 0.023; 95% CI, −0.65 to −0.08). The 11 SRNAs with <1 year of training had been principally in their didactic (classroom) period, with approximately 4 months of supervised clinical care. They gave mean ± SD scores of 3.10 ± 0.44, whereas the other SRNAs gave scores of 3.42 ± 0.37. Given the small sample size and heterogeneity, SRNA results are limited to these secondary data.

There was no effect of days since the start of the survey on scores, as assessed by analysis of covariance (P = 0.58). There were 155 responses before the reminder and 45 afterward.

Differences between resident and CRNA responses were calculated for each question. The following Dunn-Sidak adjusted P values for mean differences were <0.05: “Stimulate … learning” (P = 0.0018; mean difference Resident – CRNA = 0.37), “Discusses … management … prior to starting an anesthetic” (P = 0.0015; mean difference = 0.35), “Present during the critical moments” (P = 0.0031; mean difference = 0.28), and “Available” (P = 0.0040; mean difference = 0.27). The smallest observed mean difference was for “ethical behavior” (P = 0.94; mean difference = 0.10).
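
The Dunn-Sidak adjustment itself is simple to reproduce: with 9 comparisons, an unadjusted P value p becomes 1 − (1 − p)^9. In the sketch below, the raw P values are hypothetical, back-solved so that the adjusted values match the four reported above:

```python
# Dunn-Sidak adjustment for the 9 per-question comparisons:
# adjusted p = 1 - (1 - p)^9. The raw P values below are
# hypothetical placeholders back-solved from the adjusted
# values reported in the text, not the study's actual data.
raw_p = [0.0002, 0.00017, 0.00034, 0.00045]
adjusted = [1 - (1 - p) ** 9 for p in raw_p]
print([f"{p:.4f}" for p in adjusted])
# ['0.0018', '0.0015', '0.0031', '0.0040']
```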

DISCUSSION

The finding that most CRNAs and residents considered faculty supervision that meets expectations to be at least “frequent” (i.e., a mean supervision score ≥3.0; Table 1) suggests that ongoing appraisal of anesthesiologists’ effectiveness of supervision may have policy implications. As stated previously, by supervision we are not referring to a regulatory billing term. Instead, we refer to faculty educational and clinical guidance (Table 1). As such, supervision is time consuming and involves discussing the case before the start of the anesthetic, being present with the nonfaculty provider in the OR, altering management as necessary, providing feedback afterwards, etc. As the number of nonfaculty providers who are supervised simultaneously by faculty increases (i.e., as the ratio of nonfaculty to faculty providers increases), the ability of faculty anesthesiologists to meet expectations for supervision may decrease, because the faculty anesthesiologist cannot be present in 2 (or more) ORs simultaneously.6–8 At the studied hospitals, for the vast majority of cases, the anesthesiologists supervised 2 providers simultaneously, including when working with CRNAs.

There was a small difference of 0.26 units (on a 1–4-point scale) between CRNAs and residents in their perceptions of faculty supervision meeting expectations. Thus, if performance appraisals of faculty anesthesiologists include evaluation of the adequacy of supervision (e.g., using Table 1),2,3 supervision scores probably should be considered separately for residents and CRNAs. However, the differences between these 2 groups are less important quantitatively than the differences among individual nonfaculty providers. There is substantial heterogeneity among nonfaculty anesthesia providers (both residents and CRNAs) in their expectations for what constitutes adequate supervision (i.e., supervision that met their expectations). To reduce the effect of interindividual heterogeneity, each faculty anesthesiologist should be evaluated by several raters to obtain mean supervision scores with sufficient reliability and dependability.2,3 Details are in the Results and Discussion of our companion paper.3

We know little of the economic and/or safety implications (i.e., cost utility) of supervision.4,5 Future research can investigate whether anesthesiologists with higher supervision scores have fewer cases with rare critical events (e.g., 10-minute intervals with no blood pressure monitored).1,9,10 Furthermore, correlation can be made with knowledge and skills to respond appropriately in crisis scenarios.11–13

The finding that differences in perceptions of supervision between CRNAs and residents were not a function of years of practice suggests that the instrument works for providers with a broad range of ages. However, the survey was from just 3 US hospitals. At these 3 hospitals, all nonfaculty providers know that they will be working in an anesthesia care team. How expectations for supervision would differ at facilities that do not have a long history of collaborative work in a care team model is unknown.14 However, as a practical matter, this limitation may be unimportant because, nationwide, >85% of CRNAs practice as part of an anesthesia care team.15 In addition, >80% of both CRNAs and anesthesiologists report that at least half of their “practice involves nurse anesthetists and anesthesiologists working together.”16

At the hospital in Brazil where the instrument was developed, resident scores for faculty supervision that “met expectations” were 3.4 ± 0.4. Our scores for residents were virtually identical, 3.40 ± 0.30. Thus, comparison of resident assessments of supervision among departments and countries appears appropriate. However, it may not be useful. For example, in the United Kingdom, there was substantial heterogeneity in 2003 among hospitals in the supervision of trainee anesthetists.17 More than half of consultants reported not knowing daily which trainee elective lists they were supervising.17 In Canada, where anesthesiologists often personally perform anesthetics, performance appraisal of anesthesiologists can easily include multisource feedback (e.g., surgeons or OR nurses evaluating anesthesiologists).18 Surgeon perceptions are often extrapolated from the activity they observe when present,19 and, yet, most supervisory activity (Table 1) is not observable to most surgeons. In Finland, anesthesia nurses have different training and substantially different scopes of practice than those in the United States.15,20

In conclusion, most CRNAs and residents considered faculty guidance that meets expectations to be at least “frequent,” regardless of years in practice.

RECUSE NOTE

Dr. Franklin Dexter is the Statistical Editor and Section Editor for Economics, Education, and Policy for the Journal. Dr. Sorin J. Brull is the Section Editor for Patient Safety for the Journal. This manuscript was handled by Dr. Steven L. Shafer, Editor-in-Chief, and Drs. Dexter and Brull were not involved in any way with the editorial process or decision.

DISCLOSURES

Name: Franklin Dexter, MD, PhD.

Contribution: This author helped design the study, analyze the data, write the manuscript, and is the archival author.

Attestation: Franklin Dexter has approved the final manuscript. Franklin Dexter reviewed the data analysis.

Name: Ilana I. Logvinov, RN, MSN, CCRP.

Contribution: This author helped design the study and conduct the study.

Attestation: Ilana Logvinov has approved the final manuscript.

Name: Sorin J. Brull, MD, FCARCSI (Hon).

Contribution: This author helped design the study and write the manuscript.

Attestation: Sorin Brull has approved the final manuscript.

REFERENCES

1. Ehrenfeld JM, Henneman JP, Peterfreund RA, Sheehan TD, Xue F, Spring S, Sandberg WS. Ongoing professional performance evaluation (OPPE) using automatically captured electronic anesthesia data. Jt Comm J Qual Patient Saf. 2012;38:73–80
2. de Oliveira Filho GR, Dal Mago AJ, Garcia JH, Goldschmidt R. An instrument designed for faculty supervision evaluation by anesthesia residents and its psychometric properties. Anesth Analg. 2008;107:1316–22
3. Hindman BJ, Dexter F, Kreiter CD, Wachtel RE. Determinants, associations, and psychometric properties of resident assessments of faculty operating room supervision in a U.S. Anesthesia residency program. Anesth Analg. 2013;116:1342–51
4. De Oliveira GS Jr, Rahmani R, Fitzgerald PC, Chang R, McCarthy RJ. The association between frequency of self-reported medical errors and anesthesia trainee supervision: a survey of United States anesthesiology residents-in-training. Anesth Analg. 2013;116:892–7
5. de Oliveira Filho GR, Dexter F. Interpretation of the association between frequency of self-reported medical errors and faculty supervision of anesthesiology residents. Anesth Analg. 2013;116:752–3
6. Paoletti X, Marty J. Consequences of running more operating theatres than anaesthetists to staff them: a stochastic simulation study. Br J Anaesth. 2007;98:462–9
7. Epstein RH, Dexter F. Influence of supervision ratios by anesthesiologists on first-case starts and critical portions of anesthetics. Anesthesiology. 2012;116:683–91
8. Smallman B, Dexter F, Masursky D, Li F, Gorji R, George D, Epstein RH. Role of communication systems in coordinating supervising anesthesiologists’ activities outside of operating rooms. Anesth Analg. 2013;116:898–903
9. Ehrenfeld JM, Epstein RH, Bader S, Kheterpal S, Sandberg WS. Automatic notifications mediated by anesthesia information management systems reduce the frequency of prolonged gaps in blood pressure documentation. Anesth Analg. 2011;113:356–63
10. Epstein RH, Dexter F. Mean arterial pressures bracketing prolonged monitoring interruptions have negligible systematic differences from matched controls without such gaps. Anesth Analg. 2011;113:267–71
11. Murray DJ, Boulet JR, Avidan M, Kras JF, Henrichs B, Woodhouse J, Evers AS. Performance of residents and anesthesiologists in a simulation-based skill assessment. Anesthesiology. 2007;107:705–13
12. Henrichs BM, Avidan MS, Murray DJ, Boulet JR, Kras J, Krause B, Snider R, Evers AS. Performance of certified registered nurse anesthetists and anesthesiologists in a simulation-based skills assessment. Anesth Analg. 2009;108:255–62
13. McIntosh CA. Lake Wobegon for anesthesia…where everyone is above average except those who aren’t: variability in the management of simulated intraoperative critical incidents. Anesth Analg. 2009;108:6–9
14. Bacon DR, Lema MJ. Anaesthetic team and the role of nurses–North American perspective. Best Pract Res Clin Anaesthesiol. 2002;16:401–8
15. Shumway SH, Del Risco J. A comparison of nurse anesthesia practice types. AANA J. 2000;68:452–62
16. Taylor CL. Attitudes toward physician-nurse collaboration in anesthesia. AANA J. 2009;77:343–8
17. McHugh GA, Thoms GM. Supervision and responsibility: The Royal College of Anaesthetists National Audit. Br J Anaesth. 2005;95:124–9
18. Lockyer JM, Violato C, Fidler H. A multi source feedback program for anesthesiologists. Can J Anaesth. 2006;53:33–9
19. Masursky D, Dexter F, Isaacson SA, Nussmeier NA. Surgeons’ and anesthesiologists’ perceptions of turnover times. Anesth Analg. 2011;112:440–4
20. Vakkuri A, Niskanen M, Meretoja OA, Alahuhta S. Allocation of tasks between anesthesiologists and anesthesia nurses in Finland. Acta Anaesthesiol Scand. 2006;50:659–63