Exposure to Simulated Mortality Affects Resident Performance During Assessment Scenarios

Goldberg, Andrew MD; Samuelson, Stefan MD; Khelemsky, Yury MD; Katz, Daniel MD; Weinberg, Alan MS; Levine, Adam MD; Demaria, Samuel MD

doi: 10.1097/SIH.0000000000000257
Empirical Investigations

Background The utility of simulated mortality remains controversial in the literature. We therefore sought primarily to determine whether there was a difference in performance for residents exposed to varying levels of simulated mortality during training scenarios. As a secondary objective, we also sought to determine whether their self-reported anxiety levels, attitudes toward, and engagement in the simulated encounters differed based on group assignment.

Methods Fifty junior anesthesiology residents were randomized to one of three simulation cohorts: one that always experienced simulated patient survival (never death), one that always experienced simulated mortality (always death), and one in which the outcome varied based on performance (variable death). All residents experienced 12 identical training simulations, with only the predetermined outcome as the variable. Residents were brought back 6 weeks after initial training for four assessment scenarios and were subsequently rated on nontechnical skills and anxiety levels.

Results Residents in the always and never death groups showed no difference in nontechnical skills, as measured by the Anesthetists' Nontechnical Skills Score (ANTS), before and after the simulations. Residents in the variable death group, however, had improved nontechnical skill scores when brought back for the assessment (45.2 vs 41.5 and 42.9, respectively; P = 0.01). Although all three groups had higher State-Trait Anxiety Index scores from baseline after training, only the always death group had higher anxiety scores during the assessment (43 vs 37 vs 37, P = 0.008).

Conclusions We found that participants who experienced simulated mortality that was variably delivered, and more directly related to performance, performed better on later assessment scenarios.

From the Departments of Anesthesiology (A.G., Y.K., D.K., S.S., A.L., S.D.) and Population Health Science and Policy (A.W.), Icahn School of Medicine at Mount Sinai, New York, NY.

Reprints: Andrew Goldberg, MD, Department of Anesthesiology, Icahn School of Medicine at Mount Sinai, 1 Gustave L Levy Place Box 1010, New York, NY 10029 (e-mail: Andrew.Goldberg@mountsinai.org).

The authors declare no conflict of interest.

The study was funded internally by the Department of Anesthesiology at the Icahn School of Medicine at Mount Sinai.

The role for ethically challenging scenarios in simulation-based training is controversial.1,2 Many opinion pieces in the simulation literature argue quite disparate views on this issue.3–7 There is evidence that exposure to simulated mortality may lead to improved performance8 without causing undue stress to learners9; however, there is also evidence and commentary to the contrary.3,4 Given that our previously reported study showed that permissive failure leading to simulated mortality can result in improved performance, it is worth asking whether this relationship always holds and, if so, under what circumstances. Is there a dose–response relationship to experiencing simulated mortality such that it will always improve knowledge retention and performance, or is there a saturation point? Is there such a thing as too much mortality when it comes to simulation and, if so, how much is too much? There are many educational venues in which a stressful dose–response relationship improves learning, including the military, professional athletics, and nuclear engineering.

Because of concerns about emotional harm2 and the potential for worsening later performance,10 some have argued that simulated patient death should never occur unless the learner is warned ahead of time or the primary educational goal of the scenario directly pertains to mortality (eg, breaking bad news, end-of-life care).1,2 Others argue that scenario fidelity is critical and that, because patient mortality is a reflection of actual clinical experience, it belongs in simulation when deemed appropriate by facilitators.7,8,11

One element of this subject that warrants further clarity is the relationship between scenario outcome and learner performance, engagement, and stress; that is, should simulated death occur when a participant performs poorly, as a means to enhance learning and future performance, rather than purely on the basis of a predetermined outcome? If previous experiences with simulation lead to differences in performance during simulation-based assessments or hamper participant experience and engagement in simulation, it is possible that many centers should change the way their training scenarios are delivered. We therefore primarily sought to determine whether junior residents exposed to high-fidelity simulation training scenarios (covering a multitude of different topics) would demonstrate differences in nontechnical skills depending on whether the simulated patient encounters always resulted in patient survival, always resulted in patient death, or had a variable result. We hypothesized a linear relationship wherein more exposure to death would mean more memorable scenarios and better performance. As a secondary objective, we also sought to determine whether individual participants' self-reported anxiety levels, attitudes toward, and engagement in the simulated encounters varied based on group assignment.

METHODS

Study approval was obtained from the Icahn School of Medicine at Mount Sinai Program for the Protection of Human Subjects. The study was granted an exemption from the need for written consent, but verbal informed consent was obtained from each participant. Because we would be testing the impact of a theoretically psychologically harmful experimental intervention, psychological services were made available for participants if they felt traumatized by their experiences during the simulations. Participants were followed for 1 year after the experiment concluded.

The study was performed using a convenience sample, grafting the experiment onto a standardized 6-week simulation-based educational curriculum that is mandatory for every first-year resident at our institution as part of their basic anesthesiology training (as PGY-2 residents) (Fig. 1). Two consecutive years of entering PGY-2 classes were enrolled. The standard curriculum at our institution consists of two simulations a week for 6 weeks. Each week is designed around basic principles of independent practice in the operating room as well as commonly encountered issues to troubleshoot. Broad topics covered included the following: anesthetic induction and emergence, hypoxia, hypotension, dysrhythmias, and difficult airway management. Residents were aware of the weekly topics but were not aware of the exact scenario they would encounter before entering the simulated environment.

FIGURE 1

Each simulation was followed by a thorough debrief led by one of five board-certified attending anesthesiologists (all of whom participate in simulation-based education ranging from medical student through faculty-level courses). Debriefs covered the topic at hand, salient points regarding crisis resource management, and/or logistical and departmental points (eg, how to check a unit of blood, emergency numbers to call); the debriefings did not deliberately focus on the patient's ultimate outcome (ie, death or survival) so as not to bias the groups. However, if the outcome became part of the discussion, it was addressed inasmuch as it related to the patient's condition or the care provided. Because skilled debriefing has been shown to positively alter the emotional response to a difficult simulation situation, we attempted to avoid discussing the outcome entirely where possible to preserve the integrity of the groups. No other focus on the patient outcome was attempted. Of note, in week 2 of training, all residents had a morning lecture regarding “Stressors of being an anesthesiologist,” in which issues such as workload, fatigue, and patient demise were explicitly discussed. All residents were required to attend, all did so, and all were apprised of institutional resources and mechanisms for dealing with poor patient outcomes.

Before the beginning of the curriculum, participants were informed that they would be observed for the entire 6 weeks but were blinded to the objective of the study and the experimental intervention; in other words, the simulated mortality would be unexpected for all participants. All residents signed a confidentiality agreement, agreeing not to discuss the outcomes of their training scenarios with residents outside their training group. Although the curriculum was mandatory for all residents, participation in the study was not (ie, residents could participate in the simulation and not be graded or submit answers to any of the surveys/tests). All 50 residents who were offered the opportunity to enroll in the study did so. Each had previous simulation experience as a PGY-1 resident, having participated in four simulation-based exercises related to floor medicine, intensive care, and the basics of anesthesiology (ie, cardiac and pulmonary physiology). None of these simulations ended in simulated mortality. All residents had skill training in central line placement during their intensive care unit rotation, but no resident had any part-task simulation training in procedural skills.

Once enrolled, residents were randomized using a computerized random integer generator into one of the following three groups: a never death group, an always death group, and a variable death group. In the never death group, scenarios were halted at the near-death stage, before actual expiration, should that situation arise or be scripted; in other words, the scenario was halted with the patient in any rhythm besides asystole. In the always death group, the simulated patient would always die (end up asystolic) regardless of participant performance. In the variable death group, the patient would die intermittently, with outcomes based on the performance of the participants and an effort to match death to the likely clinical outcome based on the scenario facilitator's clinical judgment rather than predefined scenario scripting. Thus, as an example, in the hypotension 2 scenario in the always death group, the patient's pulmonary embolus would eventually cause the patient to become asystolic regardless of the intervention. In the never death group, the pulmonary embolus would cause the patient to become severely hypotensive, leading to ventricular tachycardia and fibrillation, but the scenario would be halted before the patient became asystolic regardless of the treatment. In the variable death group, the outcome would be based on how the participants performed: if appropriate and timely interventions were made with pharmacologic treatment, defibrillation, advanced cardiovascular life support (ACLS), and so on, the patient would live; if not, the patient would die.
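For illustration only, the group assignment and outcome logic described above can be summarized in a brief sketch (a hypothetical Python rendering; the study used a computerized random integer generator and facilitator judgment, not this code, and the function and variable names are ours):

```python
import random

GROUPS = ["never_death", "always_death", "variable_death"]

def assign_group() -> str:
    """Randomize a resident to one of the three study groups
    (the study used a computerized random integer generator)."""
    return GROUPS[random.randint(0, 2)]

def scenario_outcome(group: str, performance_adequate: bool) -> str:
    """Determine the end state of a training scenario for a given group.

    performance_adequate stands in for the facilitator's clinical judgment
    of whether timely, appropriate interventions were made.
    """
    if group == "never_death":
        # Halted at the near-death stage; the patient never becomes asystolic.
        return "halted_before_death"
    if group == "always_death":
        # The patient ends up asystolic regardless of performance.
        return "death"
    # variable_death: the outcome tracks participant performance.
    return "survival" if performance_adequate else "death"
```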

Each group participated in 12 training scenarios (2 training scenarios per week for 6 weeks) and 4 assessment scenarios after the 6-week course (Appendix 1). Training scenarios were experienced in small groups of three residents working together as a team, with one resident assuming the role of primary anesthesiologist in a scheduled and rotating fashion. Each scenario was identical except for the predetermined patient mortality outcome. The scenarios covered a variety of topics, including induction, emergence, hypotension, hypoxia, difficult airway management, and ACLS in the perioperative setting. The debriefing for every scenario was standardized and did not focus on patient mortality or the ultimate outcome, only the predefined learning points for the given scenario. The debrief was provided by the same instructor to ensure each resident received identical debriefs and consisted of a structured lecture about the topic covered, formulating a differential, appropriate interventions/next steps, and how to anticipate issues when alone in the operating room. Although all residents were given opportunities to express their emotions and reactions, the focus of the debrief was on the material covered during the simulation in an effort to standardize the groups.

The 12 training scenarios were the first 12 scenarios administered during the curriculum, and patient outcome was determined by group assignment. The final four simulations were the assessment scenarios, taking place 6 weeks after the course, in which patient outcome was determined by resident performance. Residents were assessed individually, and the evaluation protocols are described hereinafter. Participants were blinded to the fact that they had been randomized into groups and to which scenarios were training versus assessment. Participants were encouraged throughout to maintain confidentiality about the cases they encountered and the outcomes of those cases for the purposes of the experiment. Each scenario (training and assessment) was administered by two simulation faculty (one member acting as a patient and the other as a confederate in the scenario playing the surgeon or nurse, depending on the scenario).

All of the sessions were videotaped, and each of the assessment scenarios was independently reviewed and graded by two board-certified anesthesiologist raters, blinded to group assignment, using the Anesthetists' Nontechnical Skills Score (ANTS).12 The checklist specifically analyzes participants' abilities in four skill categories: task management, teamwork, situational awareness, and decision making. Each category has 3 to 5 underlying elements, yielding 15 subcategories that describe the nontechnical skills more specifically; each element is rated on a four-point scale, so total scores range from 15 to 60, with an average interrater agreement of 0.62 (intraclass correlation coefficient, two-way model).13 The observers had been previously trained with the checklist, had ample time to familiarize themselves with the scoring system, and had used it in previous studies.14 Furthermore, our raters had maintenance ANTS training consisting of a didactic presentation and an online training program.
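As a minimal sketch of the scoring arithmetic described above (the per-category element counts below are assumptions for illustration; only the constraints stated in the text, namely 3 to 5 elements per category, 15 elements in all, and a four-point rating per element, are taken from the source):

```python
# Illustrative ANTS total: 4 categories, 15 elements overall, each rated 1-4,
# so totals range from 15 (all 1s) to 60 (all 4s).
ANTS_ELEMENTS_PER_CATEGORY = {       # assumed split for illustration;
    "task_management": 4,            # the published tool specifies 3-5
    "teamwork": 5,                   # elements per category, 15 in total
    "situational_awareness": 3,
    "decision_making": 3,
}

def ants_total(element_ratings: list[int]) -> int:
    """Sum the 15 element ratings (each 1-4) into a 15-60 total score."""
    expected = sum(ANTS_ELEMENTS_PER_CATEGORY.values())  # 15
    if len(element_ratings) != expected:
        raise ValueError(f"expected {expected} element ratings")
    if not all(1 <= r <= 4 for r in element_ratings):
        raise ValueError("each element rating must be between 1 and 4")
    return sum(element_ratings)
```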

One week before the first training scenario, all participants were given the trait portion of the State-Trait Anxiety Index (STAI) to determine baseline anxiety levels.15,16 The state portion of the STAI was then administered before the first training scenario and again before the assessment scenarios.

Participants were asked after the assessment scenarios to rate their own engagement in the simulated encounters, as well as whether or not they believe simulation in general is a traumatic form of learning. They were also asked whether they believe simulation is a helpful form of learning compared with traditional didactic teaching or self-directed learning. The rating scales used for these items are in Appendix 2.

Statistical Analysis

Outcome measures were analyzed using multiple linear regression techniques. A convenience sample was used given the limited number of participants per year; no sample size calculation was performed. Mean ANTS values were used for analysis and comparison: the scores from the two raters were averaged for each scenario, and the global ANTS rating was then calculated as the mean of the four scenario scores. Parametric statistical analyses were used for normally distributed data, and one-way repeated measures analysis of variance was performed. Nonparametric data were analyzed using the Kruskal-Wallis test. Interrater data were examined by intraclass correlation coefficient calculations (two-way model). All hypothesis tests were two-tailed. All data were analyzed using SAS system software, Version 9.2 (SAS Institute Inc, Cary, NC).
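To make the aggregation and the nonparametric comparison concrete, the following is an illustrative sketch only (the study analysis was performed in SAS 9.2; this Python rendering uses arbitrary placeholder numbers, not study data):

```python
import numpy as np
from scipy import stats

def global_ants(scores: np.ndarray) -> float:
    """Global ANTS rating for one resident.

    scores is a 4 x 2 array: four assessment scenarios, each graded
    independently by two blinded raters. Rater scores are averaged per
    scenario, then the four scenario means are averaged.
    """
    per_scenario = scores.mean(axis=1)   # mean of the two raters
    return float(per_scenario.mean())    # mean over the four scenarios

# Kruskal-Wallis comparison of global ratings across the three groups
# (placeholder values for illustration only, not study data).
never_death = [42.1, 44.0, 41.3, 43.2]
always_death = [40.5, 42.2, 41.8, 40.9]
variable_death = [45.0, 46.1, 44.4, 45.6]
h_stat, p_value = stats.kruskal(never_death, always_death, variable_death)
print(f"H = {h_stat:.2f}, P = {p_value:.3f}")
```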

RESULTS

Fifty residents (96%) completed participation in this study; groups did not differ in terms of median age or gender composition (Table 1). Two residents in the variable death group could not complete the study because of scheduling conflicts (both residents were in the year two cohort). The median number of simulated mortalities in the variable death group was of the initial 12 training scenarios and did not differ by year of enrollment.

TABLE 1

The self-reported anxiety scores for each group are shown in Table 2. Each group showed increased STAI scores immediately before entering their first simulation training scenarios. The always death group showed elevated STAI scores preceding the assessment scenarios compared with the never death and variable death groups (P = 0.008 and P = 0.009, respectively).

TABLE 2

When averaged over the four assessment scenarios, a higher total ANTS score was positively associated with assignment to the variable death group {45.2 [95% confidence interval (CI) = 42.7–47.68] vs 41.5 [95% CI = 39.63–43.48] (always death) and 42.9 [95% CI = 41.16–44.77] (never death), P = 0.010}. When the domains were analyzed individually, the variable death group displayed higher scores in both the situational awareness and decision-making domains (P = 0.008 and P = 0.02, respectively). Effect sizes for the group comparisons were r = 0.19 (always vs never death), r = 0.27 (variable vs always death), and r = 0.40 (variable vs never death). Interrater reliability (intraclass correlation) for the ANTS scores was very good overall (0.86) and for the individual scenarios (range = 0.81–0.88). For the individual scenarios, significant differences in scores were noted in scenario 1 (never death group with a lower score, P = 0.011) and scenarios 2 and 3 (variable death group with higher scores, P = 0.013 and P = 0.025, respectively) (Fig. 2).

FIGURE 2

Experimental group assignment did not have a significant effect on endorsing simulation as traumatic (P = 0.65). The overall median rating for this item was 3 (agree). A higher baseline STAI score correlated with a higher likelihood of endorsing simulation as traumatic (ie, a rating of 4 or 5) irrespective of group assignment (P < 0.01). The experimental groups did not differ on ratings of simulation as a helpful learning technique, with an overall median rating of 4 (strongly agree) (P = 0.71). This was also true by baseline STAI score (P = 0.80). When asked to rate their subjective level of engagement in the assessment scenarios, the variable death group was slightly more likely to choose a score of 4 or 5 compared with the other groups, but this did not reach statistical significance (P = 0.07).

DISCUSSION

The role, if any, for simulated mortality remains controversial. Proponents argue that avoidance of mortality is unrealistic and thereby may hinder scenario value, participant engagement, and/or performance.6,17 Others argue that exposure to unexpected mortality may traumatize learners, in turn hampering any meaningful value derived from simulation.1,10,18 This study demonstrated that previous exposure to variably delivered mortality led to better nontechnical performance (ie, ANTS scores) in most of the subsequent simulation-based assessment scenarios. Furthermore, the effect size was greatest for the comparison in which the clinical scenario most closely mimicked reality (variable death vs never death, r = 0.40), although admittedly this is not a large effect size. All groups found simulation helpful and considered it somewhat traumatic compared with traditional didactic training, but these ratings did not differ by group assignment.

The effectiveness of any simulation depends on physical fidelity, psychological and emotional realism, and the ultimate engagement of learners in a scenario.11,19 Previously, our group had shown that simulated mortality was a useful learning tool to improve performance.8 This follow-up study has accomplished several things. First, we have again illustrated that simulated mortality, used properly, is a tool that can improve performance and is therefore a critical part of the simulationist's armamentarium when that is the ultimate goal. Second, we have begun to answer some of the questions raised by our earlier work: too much simulated mortality ultimately worsens performance relative to a variable approach. Although no exact number yet exists for how many mortalities a learner should experience before being pushed too far, the dose–response relationship between simulated mortality and performance is therefore a critical avenue for future studies. Furthermore, it is impossible to tease out whether it was the death itself or the fidelity and emotional activation arising from the death that led to improved performance. A future study directly comparing performance-independent simulated mortality with performance-dependent death would be of great use to the literature in clarifying this distinction.

Although we have no specific data on exactly how many centers avoid mortality in all of their scenarios or use a high volume of variable death, these extremes are not true representations of actual clinical practice, and the benefit of variably experiencing death observed herein should be considered seriously by those conducting simulations. We would argue that groups espousing a never death approach, while perhaps attempting to protect learners from experiencing simulated mortality, may be doing their participants a disservice. Our results suggest that these centers should perhaps reconsider their stance.

The explanation for the observed relationship between death and performance likely involves many factors: improved ownership of material when the outcome depends on participant performance, increased memorability from feelings of accountability for the outcome of a case, a greater amount of self-directed reading and studying after simulation when participants are confronted with their own knowledge gaps, or an increased degree of vigilance in the assessment scenarios by participants expecting “anything” to happen (compared with the always or never groups, in which the outcome may have been presumed to be out of participants' control). In addition, more realistic consequences for action (or inaction) during scenarios may have prompted better engagement during future scenarios in the variable group, although the difference in self-reported engagement did not reach statistical significance.

It is important to note that although performance differences between the always death and never death groups were not observed, the always death group was more anxious before their assessment scenarios. This suggests that although repeated exposure to mortality may not hamper performance or engagement compared with never experiencing death, it may unnecessarily increase the level of anxiety a participant attaches to simulation. This alone could be an argument against frequently experiencing death in the simulated environment. We offered all participants the opportunity for psychological counseling during and after our curriculum, acknowledging that simulated mortality could be psychologically harmful; notably, no participant used these services in either the immediate or the long-term (1 year) period after this study.

It should be mentioned that in the fourth assessment scenario (myocardial infarction and subsequent dysrhythmias), the groups did not differ significantly in terms of performance. This could be because we were inadequately powered to show such a difference, or because this scenario was less complex than the other three and therefore less able to discriminate between groups. It could also reflect previous training, because this scenario ultimately tested ACLS skills, which are more commonplace and readily deployable than the skills needed to manage a scenario such as an oxygen pipeline contamination (ie, one more heavily focused on mobilization of resources and knowledge of the anesthesia workstation).

Limitations

The present investigation has several limitations. First, this was a single-institution study, and the findings may not generalize to other centers, groups of residents, or specialties. This limitation does, however, represent a potentially valuable avenue for future studies to further examine the role for simulated mortality in other populations. This study did not examine the precise dose–response curve of simulated mortality as it relates to performance and/or engagement; we were only able to show that variable mortality is beneficial when compared with always or never approaches. It is possible that there is a tipping point for participant engagement and that, when “enough” simulated mortalities are experienced, a participant is no longer engaged enough with the scenario to perform well in the environment; this would resemble an important Yerkes-Dodson–type20,21 curve for this particular concept and should be considered a topic for future investigation. Furthermore, it is impossible to tease out whether simulated mortality leads to increased emotional activation (for which a theoretical saturation point could be posited) versus simply greater scenario fidelity, in which case the context of the mortality, and not just the absolute number, is paramount. Likewise, not every scenario is fit to be split into the three simplistic categories we chose for methodologic purposes. There are many scenarios in which either the simulated patient must die (eg, the goal is to teach end-of-life care) or must not die (eg, the goal is to teach junior residents standard monitor application and function). It should also be noted that, because of the exploratory nature of the study, P values were reported without correction for multiple testing, with the attendant potential inflation of the type 1 error rate.

It should also be noted as a limitation that we did not obtain prescenario ANTS scores on our participants, so there is a chance that the randomization of the residents led to groups that were not equivalent. With that said, all residents had the same exposure during intern year to the same scenarios and the same number of simulated mortalities (none). Furthermore, none of the participants had previously experienced mortality in the operating room. Lastly, although participants were not polled as to their experience with mortality on the floors, such external mortality exposure remains a potential confounder.

Regarding the ANTS scores themselves, the scores obtained and interpreted within the study are used as global ratings of nontechnical performance, taking into account task management, teamwork, situational awareness, and decision making. Although we have shown a statistically significant difference between the groups, there have been no studies examining what the difference specifically means between participants who score, for example, 30, 40, or 50, let alone smaller differences. We do not believe that this detracts from our finding that participants receiving a variable amount of simulated mortality performed better on a global nontechnical scale. The specific differences in scoring and how they relate to real-world practice would, however, be a valuable avenue for future research. In addition, a similar study should employ multiple measures of performance, not just the ANTS tool, to determine whether the differences could be corroborated and indeed whether they “mean” anything.

Another potential limitation of the present study is that all participants were trainees. Although the use of simulation for resident assessment has become more widely accepted, there remains a knowledge gap with regard to how any assessment scenario should look for this group (and also for attending physicians). It is possible that if the same study were repeated with attending physicians, the ANTS scores and anxiety scores would differ. As the role for simulation-based assessment continues to grow,22–29 this potentially confounding relationship is important to study. Finally, although we found that previous exposure to variably delivered mortality led to better nontechnical performance, we cannot answer the question of why this was the case. Using an educational strategy whereby learners encounter scenarios with variable patient outcomes certainly replicates actual patient care and seems to provide the catalyst for improved performance, so perhaps realism is an explanation; still, we did not find increased endorsement of realism in this group, so we cannot make this claim with certainty.

CONCLUSIONS

In this work, we sought to investigate whether simulated mortality delivered in an always, never, or sometimes approach would have an effect on performance in the simulated arena. We found that participants who experienced simulated mortality that was variably delivered, and more directly related to performance, performed better on later assessment scenarios. Although there are many facets of simulated mortality still to be investigated, we hope the use of simulated mortality will continue to be a matter of future study so that its precise role and influence can be better defined.

REFERENCES

1. Corvetto MA, Taekman JM. To die or not to die? A review of simulated death. Simul Healthc 2013;8:8–12.
2. Leighton K. Death of a simulator. Clin Simul Nurs 2009;5:e59–e62.
3. Calhoun AW, Pian-Smith MC, Truog RD, Gaba DM, Meyer EC. The importance of deception in simulation: a response. Simul Healthc 2015;10:387–390.
4. Bruppacher HR, Chen RP, Lachapelle K. First, do no harm: using simulated patient death to enhance learning?. Med Educ 2011;45:317–318.
5. Yardley S. Death is not the only harm: psychological fidelity in simulation. Med Educ 2011;45:1062.
6. Rogers G, de Rooy NJ, Bowe P. Simulated death can be an appropriate training tool for medical students. Med Educ 2011;45:1061.
7. Goldberg AT, Katz D, Levine AI, Demaria S. The importance of deception in simulation: an imperative to train in realism. Simul Healthc 2015;10:386–387.
8. Goldberg A, Silverman E, Samuelson S, et al. Learning through simulated independent practice leads to better future performance in a simulated crisis than learning through simulated supervised practice. Br J Anaesth 2015;114:794–800.
9. DeMaria S, Silverman ER, Lapidus KA, et al. The impact of simulated patient death on medical students' stress response and learning of ACLS. Med Teach 2016;38:730–737.
10. Fraser K, Huffman J, Ma I, et al. The emotional and cognitive impact of unexpected simulated patient death: a randomized controlled trial. Chest 2014;145:958–963.
11. DeMaria S, Bryson EO, Mooney TJ, et al. Adding emotional stressors to training in simulated cardiopulmonary arrest enhances participant performance. Med Educ 2010;44:1006–1015.
12. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists' Non-Technical Skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth 2003;90:580–588.
13. Rutherford JS, Flin R, Irwin A, McFadyen AK. Evaluation of the prototype Anaesthetic Non-technical Skills for Anaesthetic Practitioners (ANTS-AP) system: a behavioural rating system to assess the non-technical skills used by staff assisting the anaesthetist. Anaesthesia 2015;70:907–914.
14. DeMaria S, Samuelson ST, Schwartz AD, Sim AJ, Levine AI. Simulation-based assessment and retraining for the anesthesiologist seeking reentry to clinical practice: a case series. Anesthesiology 2013;119:206–217.
15. Spielberger CD. State-Trait Anxiety Inventory. 1987;19:2009.
16. Endler NS, Kocovski NL. State and trait anxiety revisited. J Anxiety Disord 2001;15:231–245.
17. Gettman MT, Karnes RJ, Arnold JJ, et al. Urology resident training with an unexpected patient death scenario: experiential learning with high fidelity simulation. J Urol 2008;180:283–288.
18. Pai DR, Ram S, Madan SS, Soe HH, Barua A. Causes of stress and their change with repeated sessions as perceived by undergraduate medical students during high-fidelity trauma simulation. Natl Med J India 2014;27:192–197.
19. Hamstra SJ, Brydges R, Hatala R, Zendejas B, Cook DA. Reconsidering fidelity in simulation-based training. Acad Med 2014;89:387–392.
20. Broadhurst PL. Emotionality and the Yerkes-Dodson law. J Exp Psychol 1957;54:345–352.
21. Teigen KH. Yerkes-Dodson: a law for all seasons. Theory Psychol 1994;4:525–547.
22. Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA 1999;282:861–866.
23. Scalese RJ, Obeso VT, Issenberg SB. Simulation technology for skills training and competency assessment in medical education. J Gen Intern Med 2008;23:46–49.
24. Berkenstadt H, Ziv A, Gafni N, Sidi A. Incorporating simulation-based objective structured clinical examination into the Israeli National Board Examination in Anesthesiology. Anesth Analg 2006;102:853–858.
25. Girzadas DV, Clay L, Caris J, Rzechula K, Harwood R. High fidelity simulation can discriminate between novice and experienced residents when assessing competency in patient care. Med Teach 2007;29:472–476.
26. Blum RH, Boulet JR, Cooper JB, Muret-Wagstaff SL. Simulation-based assessment to identify critical gaps in safe anesthesia resident performance. Anesthesiology 2014;120:129–141.
27. Sidi A, Baslanti TO, Gravenstein N, Lampotang S. Simulation-based assessment to evaluate cognitive performance in an anesthesiology residency program. J Grad Med Educ 2014;6:85–92.
28. Boulet JR, Murray DJ. Simulation-based assessment in anesthesiology: requirements for practical implementation. Anesthesiology 2010;112:1041–1052.
29. Nunnink L, Foot C, Venkatesh B, et al. High-stakes assessment of the non-technical skills of critical care trainees using simulation: feasibility, acceptability and reliability. Crit Care Resusc 2014;16:6–12.

APPENDIX 1. Assessment Scenarios

Table

APPENDIX 2. Self-rating Scale for Simulations

Trauma rating: on a scale of 1–5, how much do you agree or disagree with the following statement?

Simulation-based education is more traumatic for learners than traditional educational modalities (eg, didactic or case-based learning).

(1 = very strongly disagree, 2 = strongly disagree, 3 = agree, 4 = strongly agree, 5 = very strongly agree)

Helpfulness rating: using the same scale, how much do you agree or disagree with the following statement?

Simulation-based education is more helpful to my education than traditional educational modalities.

Engagement rating: using the same scale, how much do you agree with the following statement?

I felt engaged in the simulated encounters I just completed (ie, they felt real and I took them seriously).

Keywords:

Simulated mortality; predictable death; curriculum design

© 2017 Society for Simulation in Healthcare