Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare, December 2010 - Volume 5 - Issue 6
doi: 10.1097/SIH.0b013e3181e602b3
Empirical Investigations

A Randomized Controlled Trial of the Impact of Simulation-Based Training on Resident Performance During a Simulated Obstetric Anesthesia Emergency

Scavone, Barbara M. MD; Toledo, Paloma MD; Higgins, Nicole MD; Wojciechowski, Kyle MD; McCarthy, Robert J. PharmD


Author Information

From the Department of Anesthesiology, Northwestern University Feinberg School of Medicine, Chicago, IL.

Reprints: Barbara M. Scavone, MD, Department of Anesthesiology, Northwestern University Feinberg School of Medicine, 251 E. Huron Street, F5-704, Chicago, IL 60611 (e-mail: bimscavone@aol.com).


Abstract

Introduction: The percentage of patients having cesarean delivery (CD) under general anesthesia has decreased, which may have implications for residency training in anesthesiology. We undertook this study to assess the effect of focused simulation-based training on resident performance during a simulated general anesthetic for emergency CD.

Methods: Thirty-two second-year anesthesiology resident volunteers were randomly assigned to one of two groups: a group trained on the patient simulator performing general anesthesia for emergency CD (CD group) and a control group trained on the simulator using a general anesthetic scenario unrelated to obstetric anesthesia (SHAM group). Six to nine weeks later, all residents performed the emergency CD scenario on the simulator and were videotaped. Two observers, blinded to group assignment and to each other's scores, independently scored the videotaped performances using a previously validated, reliable scoring system. The time interval from the start of the scenario until the simulated surgical incision was noted. Total scores, component scores in six subcategories, and the start-to-incision time interval were compared between resident groups.

Results: Residents in the CD group had higher total scores and higher scores in the preoperative assessment, equipment availability check, and intraoperative management before delivery subcategories than residents in the SHAM group. The start to incision time interval did not differ between the groups.

Conclusions: Anesthesiology residents who underwent focused training on a simulator that included performance of a general anesthetic for emergency CD exhibited improved performance during a subsequent simulated anesthetic scenario compared with trainees who did not undergo such instruction.

The percentage of patients undergoing cesarean delivery (CD) under general anesthesia has decreased,1 most likely as a result of concerns with general anesthesia-related maternal morbidity and mortality.2 This has important implications for residency training in obstetric anesthesia, because residents may not gain adequate exposure to this technique during their training. Therefore, educators recommend the increased use of surrogate training modalities such as simulation-based training.3

We have previously reported4 the development of a scenario for general anesthesia for emergency CD on our patient simulator and the use of a modified Delphi technique5 to garner consensus among several experts so as to develop a standardized scoring system to objectively evaluate resident performance of this simulated obstetric anesthetic emergency. The scoring system, which was found to be both valid and reliable, consists of a checklist of 48 tasks, each weighted on a scale of importance from 1 to 5, with a total possible score of 198.5 points. The 48 tasks are organized into six subcategories: preoperative assessment, preoperative patient care, equipment availability check, induction/intubation, intraoperative management before delivery, and intraoperative management after delivery (Appendix).
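
To illustrate how such a weighted checklist yields a total score and six component scores, the following sketch awards each completed task its importance weight and sums within subcategories. This is an illustration only; the task names, weights, and data structure shown are placeholders, not the published 48-item instrument.

# Illustrative sketch only: task names and weights are placeholders,
# not the actual 48-item checklist.
checklist = [
    # (subcategory, task, importance weight 1-5)
    ("preoperative assessment", "obtain pertinent obstetric history", 5),
    ("equipment availability check", "verify laryngoscope function", 4),
    ("induction/intubation", "perform rapid sequence induction", 5),
]

def score_performance(performed_tasks, checklist):
    """Return the weighted total and per-subcategory component scores."""
    total = 0.0
    components = {}
    for subcategory, task, weight in checklist:
        credit = weight if task in performed_tasks else 0.0
        total += credit
        components[subcategory] = components.get(subcategory, 0.0) + credit
    return total, components

total, components = score_performance({"obtain pertinent obstetric history"}, checklist)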

We undertook this study to assess the effect of simulation-based training on resident performance of a general anesthetic for emergency CD, hypothesizing that such training would improve residents' ability to properly perform a simulated general anesthetic for emergency CD.


METHODS

This study was approved by the Institutional Review Board of the Office for the Protection of Research Subjects of Northwestern University. Flyers soliciting second-year anesthesiology (CA2) resident volunteers for the study were posted in the anesthesiology department; similar flyers were sent to each CA2 resident the day she/he began her/his 2-month obstetric anesthesia rotation. All CA2 residents on their 2-month obstetric anesthesia rotation were eligible to volunteer for the study. Informed consent was obtained from all participants. The details of the Northwestern University Feinberg School of Medicine Department of Anesthesiology Simulator Center and the emergency CD under general anesthesia simulation scenario have previously been described.4

At the beginning of their 2-month obstetric anesthesia rotation, all CA2 residents were routinely given reading material covering CD, including emergency CD under general anesthesia. They also received a lecture on anesthesia for CD, including emergency cesarean under general anesthesia. Resident volunteers were divided into two groups by computer-generated randomization: one group trained on the simulator during a scenario consisting of an emergency CD under general anesthesia (CD group) and a control group trained on the simulator using a different general anesthetic scenario unrelated to obstetric care (SHAM group).

Within 2 weeks of starting their 2-month obstetric anesthesia rotation, the residents in the CD group observed one of the faculty anesthesiologist authors (B.M.S.) perform the emergency CD under general anesthesia scenario in person on the simulator. Each step was explained as it was performed. The residents themselves then performed the simulation while being videotaped. The SHAM group residents were videotaped while performing one of the nonobstetric simulation scenarios, which had previously been developed by the Department of Anesthesiology (eg, scenarios for anaphylaxis or shock) and included areas related to preoperative assessment and care of the patient. After the simulation, the same attending anesthesiologist debriefed the resident while watching the videotape, in an effort to maximize the educational effect. Each resident was debriefed regarding only her/his own performance and scenario. This first session was considered the teaching phase.

Within 2 weeks of the end of the 2-month obstetric anesthesia rotation, all the residents (CD and SHAM groups) performed the emergency CD scenario on the simulator and were videotaped. This second session was considered the evaluation phase. Immediately before the evaluation session, each resident reported how many actual cesarean deliveries she/he had performed under general anesthesia during her/his training and how many times she/he had worked on the simulator during the course of her/his residency. Each resident also reported her/his degree of comfort with performing emergency CD under general anesthesia using a visual analog scale (VAS) score on a 100-mm line, with end points labeled “not at all confident” and “extremely confident.”

Two faculty anesthesiologists (N.H. and K.W.) blinded to resident group viewed and scored each videotape according to the weighted scoring system. (The previous reliability study demonstrated that two reviewers are sufficient to explain 96% of the variance in scores.4) In an effort to blind the reviewers to the identity of the residents, the residents wore surgical hoods, masks, goggles, gowns, and gloves. Raters viewed and scored the videotapes separately and were blinded to each other's scores.

The time interval from the start of the scenario (resident “paged” by an audible sound emitted by the simulation technician) until the simulated surgical incision (resident informs the surgeon that the incision may be made) was recorded by the simulator computer technician, who noted these events on the simulator computer using its internal clock. The study ended 15 minutes after the simulated delivery of the neonate.

The residents knew that they would be performing on the simulator but did not know the purpose of the study or what the scenarios involved until immediately before performing them. They were asked not to discuss the study with any other residents. At the end of the study, any of the residents who performed the CD scenario only one time (the SHAM group residents) were offered the opportunity to repeat the CD scenario, so that all residents in the study had similar obstetric anesthesia educational opportunities. The residents' individual scores were not used in any formal or informal evaluation of their clinical competence.

Statistics

The primary outcome variable was the total score on the weighted scoring tool during the evaluation phase. In our previous study, the mean score of CA3 residents (compared with CA1 residents) was 151 ± 7 points. We estimated that this focused training would increase the mean score by approximately 10 points with the same SD. Based on these assumptions, a sample size of 14 per group would achieve 95% power to detect a 10-point difference between the groups at a significance level (alpha) of 0.05, using a two-sided two-sample t test. The number of CA2 residents available over the 2.5-year study period was 42, and all residents who volunteered to participate were enrolled in the study.
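
As a check on the reported sample-size estimate, the following sketch reproduces the calculation (a 10-point difference with an SD of 7 points, alpha of 0.05, 95% power, two-sided two-sample t test) using Python's statsmodels package; the choice of software is our assumption, as the authors do not state what they used.

from statsmodels.stats.power import TTestIndPower

# Standardized effect size: expected mean difference divided by the SD.
effect_size = 10 / 7

# Solve for the per-group sample size at alpha = 0.05 and 95% power.
n_per_group = TTestIndPower().solve_power(
    effect_size=effect_size, alpha=0.05, power=0.95, alternative="two-sided"
)
print(round(n_per_group))  # approximately 14, matching the reported estimate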

Total score on the weighted scoring tool, component scores in the six subcategories, and the time interval between the start of the simulation and the simulated surgical incision were compared between the CD and SHAM resident groups. In addition, comparisons were made between the resident groups regarding the number of actual cesarean deliveries performed under general anesthesia, the number of times the resident had performed on the simulator, and subjects' self-assessments of confidence. Comparisons were also made across time: scores of residents whose 2-month obstetric anesthesia rotations began in July versus September versus November, etc., through May, were compared, and residents' scores were compared between years (July of the first year vs. July of the second year, September of the first year vs. September of the second year, etc.). Comparisons were made using the Mann-Whitney U test. Interrater reliability was assessed using Cronbach's alpha. A P value <0.05 was required to reject the null hypothesis.
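
The two analyses described above can be sketched as follows, assuming numpy and scipy as tooling and using hypothetical score vectors for illustration (not study data): a Mann-Whitney U test comparing total scores between groups, and Cronbach's alpha computed over the two raters' totals.

import numpy as np
from scipy.stats import mannwhitneyu

def cronbach_alpha(ratings):
    """Cronbach's alpha for a subjects-by-raters matrix of total scores."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]                          # number of raters
    item_variances = ratings.var(axis=0, ddof=1)  # variance of each rater's scores
    total_variance = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical scores for illustration only (not study results).
cd_scores = [162, 158, 171, 155, 149, 167]
sham_scores = [140, 152, 133, 147, 138, 144]
u_statistic, p_value = mannwhitneyu(cd_scores, sham_scores, alternative="two-sided")

rater1 = [162, 140, 158, 152, 171, 133]
rater2 = [160, 143, 155, 150, 168, 135]
alpha = cronbach_alpha(np.column_stack([rater1, rater2]))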


RESULTS

A total of 42 CA2 residents rotated through the obstetric anesthesiology division over 2.5 years; of those, 32 volunteered to participate and completed the study. The number of actual cesarean deliveries performed under general anesthesia and the number of times the resident had previously performed on the simulator did not differ between resident groups (Table 1).


Residents in the CD group achieved higher total scores than residents in the SHAM group. Analysis of component scores in the six subcategories demonstrated that the CD residents scored higher in the preoperative assessment, equipment availability check, and intraoperative management before delivery subcategories (Table 2). Regarding specific steps within these three subcategories, the CD residents more often obtained the pertinent obstetric history and were more likely to elicit the patient's medication, allergy, and anesthetic histories. When checking the equipment, they more often verified that the laryngoscope was functional. During intraoperative management before delivery, the CD residents placed esophageal stethoscopes and monitored temperature more often.


The time interval from the start of the scenario to the simulated surgical incision did not differ between the two groups, nor did the subjects' self-assessment of confidence. There were no differences in scores across time: residents had similar scores regardless of the month in which their obstetric anesthesia rotation started, and scores did not differ from one year to the next. Reliability analysis demonstrated high agreement between the two scorers, with a Cronbach's alpha of 0.91.


DISCUSSION

Focused training on a patient simulator improved resident performance of this scenario involving general anesthesia for emergency CD. The use of general anesthesia for CD continues to decrease in the United States,1 raising concerns about the adequacy of anesthesiology residents' training with respect to this anesthetic scenario. As a result, educators recommend investigating alternative training modalities, such as simulation-based training, to teach residents this skill set.3 To the authors' knowledge, this is the first report of improvement in resident performance of this obstetric anesthetic emergency after focused simulation-based training.

The use of simulation scenarios to train anesthesiology residents in the management of both common and rare clinical problems has increased in recent years, and the role of simulation-based techniques in both formative and summative evaluations is still evolving.6 Evidence is accumulating that simulation in anesthesiology may improve learner performance. Chopra et al7 randomized participants to simulation-based training involving either malignant hyperthermia or anaphylactic shock and then evaluated their performance on the malignant hyperthermia scenario 4 months later; the malignant hyperthermia-trained participants undertook the appropriate steps more thoroughly and more quickly than participants in the other group. Anesthesiology trainees introduced to a new anesthesia delivery system via training on a patient simulator resolved emergencies successfully more quickly than trainees introduced to the new equipment with a standard introductory in-service.8 Conversely, medical students taught the proper responses to myocardial ischemia, anaphylaxis, or hypoxemia via video-based versus simulation-based curricula performed similarly, although the simulator-trained students rated their sessions as more valuable than did the video-trained students.9 Likewise, Nyssen et al10 compared checklist-type scores and time to diagnosis of anaphylactic shock between anesthesia trainees taught via a computer screen-based simulation versus a full-scale mannequin-based simulation and demonstrated similar performance results.

It remains unclear how conduct in a simulated environment reflects conduct during a live emergency; thus, we cannot necessarily extrapolate that demonstrations of competence in the simulated operating room correspond to the same level of expertise in the actual operating room. Study in this area is difficult because of constraints on assessing actual clinical performance; however, some recent publications lend credence to the theory that simulation performance does indeed translate into real clinical performance. Residents who received simulation-based teaching regarding advanced cardiac life support were more likely to adhere to American Heart Association guidelines when responding to hospital emergencies than residents who received traditional training.11 Barsuk et al12 reported that residents who learned central venous catheterization techniques on a simulator required fewer needle passes to accomplish central line insertion in real patients compared with residents who did not undergo simulation training. In an additional report, the same authors demonstrated not only better resident technique but also better patient outcomes: central venous catheters placed by simulator-trained residents resulted in fewer bloodstream infections than catheters placed by residents without simulator training.13

In this study, not only did performance during the simulation improve, but so did efficiency. The CD-trained residents successfully completed more tasks on the checklist in the same amount of time, thus demonstrating greater efficiency than the SHAM-trained residents. Furthermore, performance improved in areas related to preoperative assessment (quickly and efficiently obtaining pertinent history) and the equipment availability check; performance in these areas may have implications for patient safety. Competence in anesthetic management before delivery of the neonate also differed between groups. This area of anesthetic care is specific to general anesthesia for obstetrics and therefore was affected by focused obstetric anesthesia training more than an area such as induction and intubation. Although proficiency in anesthetic care after delivery of the neonate is also obstetric related, the tasks listed under that subcategory (administering oxytocin and providing supplemental sedation/analgesia) are not as specific to general anesthesia and may have been learned during administration of neuraxial anesthetics for CD during the obstetric anesthesia rotation. All the trainees were in the second year of anesthesia training. At the time this study took place, the first year of anesthesia training consisted of 12 months of general operating room rotations (anesthesia for general surgery, anesthesia for gynecologic surgery, anesthesia for genitourinary surgery, etc.), and residents did not rotate through subspecialty rotations (obstetric anesthesia, cardiac anesthesia, critical care medicine, etc.) until their second year. Our institution performs in excess of 30,000 operating room cases per year; therefore, it is likely that all the subjects in this study had extensive experience administering general anesthesia, including performing rapid sequence induction and intubation.

Gaba et al14 described a scoring checklist for use during simulated anesthesia crises that included certain “essential” items that, if missed, resulted in a failing score even if all other aspects of the simulated anesthetic were performed perfectly. Our checklist did not include such items, but one might classify the administration of the induction agent and succinylcholine and performance of laryngoscopy as essential, and none of our study subjects neglected to perform these three tasks. Preoxygenation and confirmation of the presence of end-tidal CO2 might also be considered to have increased importance. No resident failed to preoxygenate. Three residents did not confirm end-tidal CO2 presence, but two of the three did confirm proper endotracheal tube placement with other maneuvers including listening to bilateral breath sounds; the third did neither and was in the control group.

Because there was no difference in the number of times the CD versus the SHAM residents had participated in actual cesarean deliveries under general anesthesia or had worked on the simulator, we were likely not measuring differences in clinical or simulator experience but rather differences in abilities that resulted from the focused training per se. Because scores did not change between rotations that started early versus late in the academic year, there did not seem to be any measurable effect of increased clinical experience during the CA2 year. Of interest, the residents themselves did not perceive any differences in their capabilities: SHAM residents reported a degree of confidence in their ability to properly execute the procedure similar to that of CD residents. Others have similarly demonstrated trainees' limited ability to accurately assess their own competence.8

There are potential limitations to this study. Although participating residents were asked not to discuss the study, it is possible that some discourse took place, which could have introduced bias, such as a resident “studying” and thereby improving her/his score. The fact that scores did not change over time, or from one year to the next, gives us more confidence that such extra-scenario learning did not take place. Because our intervention occurred in two steps (watching the instructor perform the anesthetic and then performing it oneself with a debriefing), it remains unclear whether either one of those interventions alone would have been sufficient to augment the residents' scores; it is nonetheless of interest that this two-step process does seem to impart learning. In addition, it remains unknown how the residents would have performed if they had been exposed to an alternate teaching modality regarding general anesthetic administration for emergency CD, such as a video simulation or a case-based learning discussion. It is possible that residents trained on a mannequin fare no better than those trained with alternate educational techniques. Also, we may have been observing a repetition effect, such that residents who performed a CD scenario once scored more highly when they performed such a scenario a second time. Finally, as stated above, the validity of extrapolating results from the simulated environment to the clinical environment is still being investigated, and therefore it is unknown whether the observed 22-point difference in median scores is clinically relevant.

In summary, anesthesiology resident trainees who underwent focused training on a patient simulator regarding performance of a general anesthetic for emergency CD exhibited improved performance of this obstetric anesthesia emergency as compared with similar trainees who did not undergo such instruction. The role of simulation-based teaching in anesthesiology residency training programs, particularly for uncommon events, warrants further study.


ACKNOWLEDGMENTS

The authors thank Leonard Wade, MS, and Rozanna Chester, MS, for their help administering the simulation scenarios in the simulation laboratory.


REFERENCES

1. Bucklin BA, Hawkins JL, Anderson JR, Ullrich FA. Obstetric anesthesia workforce survey: twenty-year update. Anesthesiology 2005;103:645–653.

2. Hawkins JL, Koonin LM, Palmer SK, Gibbs CP. Anesthesia-related deaths during obstetric delivery in the United States, 1979–1990. Anesthesiology 1997;86:277–284.

3. Lipman S, Carvalho B, Brock-Utne J. The demise of general anesthesia in obstetrics revisited: prescription for a cure. Int J Obstet Anesth 2005;14:2–4.

4. Scavone BM, Sproviero MT, McCarthy RJ, et al. Development of an objective scoring system for measurement of resident performance on the human patient simulator. Anesthesiology 2006;105:260–266.

5. Clayton MJ. Delphi: a technique to harness expert opinion for critical decision-making tasks in education. Educ Psychol 1997;17:373–386.

6. Wong AK. Full scale computer simulators in anesthesia training and evaluation. Can J Anesth 2004;51:455–464.

7. Chopra V, Gesink BJ, de Jong J, Bovill JG, Spierdijk J, Brand R. Does training on an anaesthesia simulator lead to improvement in performance? Br J Anaesth 1994;73:293–297.

8. Dalley P, Robinson B, Weller J, Caldwell C. The use of high-fidelity human patient simulation and the introduction of new anesthesia delivery systems. Anesth Analg 2004;99:1737–1741.

9. Morgan PJ, Cleave-Hogg D, McIlroy J, Devitt JH. Simulation technology: a comparison of experiential and visual learning for undergraduate medical students. Anesthesiology 2002;96:10–16.

10. Nyssen AS, Larbuisson R, Janssens M, Pendeville P, Mayne A. A comparison of the training value of two types of anesthesia simulators: computer screen-based and mannequin-based simulators. Anesth Analg 2002;94:1560–1565.

11. Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest 2008;133:56–61.

12. Barsuk JH, McGaghie WC, Cohen ER, Balachandran JS, Wayne DB. Use of simulation-based mastery learning to improve the quality of central venous catheter placement in a medical intensive care unit. J Hosp Med 2009;4:397–403.

13. Barsuk JH, Cohen ER, Feinglass J, McGaghie WC, Wayne DB. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med 2009;169:1420–1423.

14. Gaba DM, Howard SK, Flanagan B, Smith BE, Fish KJ, Botney R. Assessment of clinical performance during simulated crises using both technical and behavioral ratings. Anesthesiology 1998;89:8–18.

Keywords:

General anesthesia; Obstetric anesthesia; Education; Simulation

Appendix. Valid and ...

© 2010 Lippincott Williams & Wilkins, Inc.
