ANESTHESIOLOGISTS pioneered simulation, as a technology and an educational methodology, in the late 1980s.1,2
Yet, its adoption has been limited largely to residency training programs and medical schools. A number of institutions and researchers have reported the benefits of simulation for teaching medical students, anesthesiology residents, and trainees in other disciplines. The benefit of simulation for teaching procedural skills has been shown to transfer to the bedside. Although simulation-based training of cognitive and behavioral skills has been shown to improve performance in the simulation laboratory, similar improvements are harder to measure in a clinical setting. Appropriately, measuring outcomes in the simulated setting raises concerns over whether the performance improvement is a result of “teaching to the test,” and whether performance gains are transferable to patients. In this issue of Anesthesiology, the link between simulation and clinical care has finally been made.3
Bruppacher et al. show that 2 h of simulation training, when compared with a similar duration of training via an interactive seminar, results in improved patient care during weaning from cardiopulmonary bypass 2 and 5 weeks after the training. The primary and secondary outcome measures were “nontechnical” and “technical” skills, respectively. The nontechnical skills, defined as behavioral skills such as teamwork and decision making, were measured with the Anesthesiologists' Non-Technical Skills (ANTS) scale, a previously described and widely used rating instrument. The technical skills, or discrete tasks, were measured using a checklist developed for this study using the Delphi method. It is unlikely that the two scoring techniques are entirely independent, yet the two scales viewed in aggregate probably represent global performance. Regardless, the scores of the simulation group were 20% higher than those of the seminar group at 2 and 5 weeks after the training.
A common measure of training effectiveness is the effect size of an intervention. An effect size of 1 standardized mean is considered large, whereas 0.6–0.8 is moderate and 0.2–0.4 is slight.4
Although Bruppacher et al. reported standard error rather than standard deviation, the effect size for the two interventions can be easily ascertained and is nearly 2 for the seminar group and twice that for the simulation group. This illustrates that although the seminar had a sizable impact, the gains of the simulation group are even more impressive by comparison.
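The arithmetic behind these figures is straightforward: the effect size is the standardized mean difference (Cohen's d), and a standard deviation can be recovered from a reported standard error of the mean. A minimal sketch, using hypothetical numbers chosen only for illustration (not the study's actual data):

```python
import math

def sd_from_se(se, n):
    """Recover a standard deviation from a reported standard error of the mean."""
    return se * math.sqrt(n)

def effect_size(mean_post, mean_pre, sd):
    """Standardized mean difference (Cohen's d): improvement in SD units."""
    return (mean_post - mean_pre) / sd

# Hypothetical illustration: a reported SE of 0.4 with n = 10 subjects
# implies SD ~ 1.26, so a 2.5-point gain yields an effect size of about 2.
sd = sd_from_se(0.4, 10)
d = effect_size(10.0, 7.5, sd)
```

By the conventions cited above, any value of d near 2 would be an unusually large training effect, which is what motivates the plausibility discussion that follows.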
Are these improvements so large that they are unreasonable? A scoring bias may have lowered pretest scores. Raters were not aware of the group assignment of the subjects but were aware that none of the subjects had received training and, therefore, may have been reluctant to give high scores. Similarly, all scores could have been inflated after the intervention; however, this would not explain the larger improvements in the simulation group compared with the seminar group, which occurred in each of the four categories in the ANTS scale (as shown in table 3).3
The checklist, less susceptible to subjective interpretation, supports the main findings. Interestingly, the interrater reliability of the checklist was lower than that of the ANTS scale. This may reflect the difficulty of observing a number of discrete actions without the benefit of videotape, or ambiguities in the descriptions of the items; experience with the checklist could have minimized these difficulties. In favor of the plausibility of the large magnitude of the training effect is the setting chosen by the authors. Weaning from cardiopulmonary bypass occurs in a clinical environment that is complex from both a technological and a physiologic standpoint, and it is certainly conceivable that large improvements in patient management would occur as a result of training novices.
Are the outcomes of the study sufficiently convincing? Is a better ANTS score something patients, or healthcare providers, would consider compelling? Perhaps not at first glance, but after reading the descriptors for the anchors in the ANTS categories, these attributes are certainly ones that every patient seeks during their care. Avoiding rescue interventions by an attending cardiac anesthesiologist is not identical to improved survival, but given the obvious gravity of the clinical setting, it is easy to understand why process matters in creating optimal outcomes. Furthermore, no 2-h training intervention has been shown to improve patient survival. This is too much to ask from resident education or a single continuing medical education activity.*
What makes simulation-based training better than a seminar? The authors provide their perspective, suggesting that contextual learning improves information retrieval. Reflection, an activity accompanying simulation during a facilitated debriefing, helps assimilate and integrate the experience. Reflection can occur as a result of an interactive seminar, but the emotional aspect of simulation—making a firm commitment and seeing the consequences—does not occur to the same degree during an interactive seminar.
Reality, actual patients in clinical settings, has served as the training ground for generations of anesthesiologists. Yet, this method is intrinsically inefficient. The focus of the clinical encounter is determined by the needs of the patient, rather than the needs of the trainee. This delays training through redundancy (for common conditions) and experience gaps (for uncommon conditions). Because many life-threatening conditions occur only rarely within the number of cases residents encounter during their training, it is not difficult to appreciate why well-rounded training requires supplementation. Public concerns over safety arise when trainees make decisions or perform procedures for the first time, further emphasizing the need for newer educational methods.
With growing evidence supporting its use, what is next for simulation? The American Board of Anesthesiology has implemented a simulation requirement for Maintenance of Certification in Anesthesiology®, which will undoubtedly disseminate simulation training to the majority of practicing anesthesiologists. Yet, the optimal frequency of simulation training is unknown, as is the optimal clinical subject matter. The American Society of Anesthesiologists Committee on Simulation Education is interested in the content of the simulation-based component of Maintenance of Certification in Anesthesiology®. Interestingly, during this past year, the committee agreed that training in hemodynamic events, hypoxemic events, and teamwork are obvious areas in which to focus. The results from the study by Bruppacher et al. are relevant in that they concentrate on two of these areas: hemodynamic management and teamwork. Weaning from cardiopulmonary bypass is an innovative extension of this focus on complex decision making in an acute setting. Bruppacher et al. provided considerable reinforcement to the foundation supporting the use of simulation. The clinical setting is no longer the only place to train physicians. Reality is not the only option.
Randolph H. Steadman, M.D.,
Department of Anesthesiology, David Geffen School of Medicine at the University of California, Los Angeles, Los Angeles, California. firstname.lastname@example.org
1. Good ML, Gravenstein JS: Anesthesia simulators and training devices. Int Anesth Clin 1989; 27:161
2. Gaba DM: Improving anesthesiologists' performance by simulating reality. Anesthesiology 1992; 76:491–4
3. Bruppacher HR, Alam SK, LeBlanc VR, Latter D, Naik VN, Savoldelli GL, Mazer CD, Kurrek MM, Joo HS: Simulation-based training improves physicians' performance in patient care in high-stakes clinical setting of cardiac surgery. Anesthesiology 2010; 112:985–92
4. Colliver JA: Effectiveness of problem-based learning curricula: Research and theory. Acad Med 2000; 75:259–66
* Marinopoulos SS, Dorman T, Ratanawongsa N, Wilson LM, Ashar BH, Magaziner JL, Miller RG, Thomas PA, Prokopowicz GP, Qayyum R, Bass EB: Effectiveness of Continuing Medical Education. Evidence Report/Technology Assessment No. 149 (Prepared by the Johns Hopkins Evidence-based Practice Center, under Contract No. 290-02-0018.) AHRQ Publication No. 07-E006. Rockville, MD: Agency for Healthcare Research and Quality, January 2007. Available at: http://www.ahrq.gov/downloads//pub/evidence/pdf/cme/cme.pdf. Accessed January 4, 2009.
© 2010 American Society of Anesthesiologists, Inc.