Van Heukelom, Jon N. MD; Begaz, Tomer MD; Treat, Robert PhD
Debriefing is the aspect of the simulation experience during which the learners are given the opportunity to reflect on the simulation, and the instructor is given the opportunity to teach and provide feedback to the participants. Debriefing is an essential component of all simulation experiences.1–7 A review of the medical simulation literature has identified debriefing as the most critical feature of a simulation.3 Studies have indicated that in the absence of structured feedback, no learning of clinically relevant parameters occurs8 and that groups who receive external feedback have higher posttest performance scores compared with those who do not.9
Common elements of any effective debriefing include the debriefer, the participants to be debriefed, the simulation experience, the impact of the experience, recollection of the experience, reporting the experience, and the timing of the debriefing session in relation to the experience.10 Several different approaches to debriefing have been noted in the literature,11,12 different models have been proposed,13–15 and different styles described.2,16
Despite the consensus that debriefing is an essential component of a simulation experience, there are surprising deficiencies in the literature describing debriefing and assessing its effectiveness.10,17 Fanning and Gaba2 state that “there are surprisingly few papers in the peer-reviewed literature to illustrate how to debrief, how to teach or learn to debrief, what methods of debriefing exist and how effective they are at achieving learning objectives and goals.” In addition, after review of the literature, we were unable to find any studies that directly compared differing styles against each other. Specialty organizations have increasingly called for such research to be performed.1
The goal of this study was to directly compare the effectiveness of two styles of debriefing that differ in their timing. Some educators prefer to hold a formal debriefing session after the simulator session,18 whereas others use suspension of the simulation to instruct and allow reflection throughout the simulation experience.3 This suspension of the simulation has been referred to as “in-simulation” debriefing.2
Retrospective pre-post assessment was made through a survey, in an attempt to determine whether any differences in perception of the simulation existed between the two groups receiving differing debriefing styles. Student perceptions of the simulation, as measured using the survey, were the primary outcome. The secondary outcomes included pretest and posttest analysis of students' subjective confidence in their ability to perform a medical resuscitation both before and after completing the simulation.
This investigation was an observational study using a retrospective pre-post survey of student confidence levels in managing a simulator session, in which participants were randomly distributed into one of two groups. One group received postsimulation debriefing, and the other group received in-simulation debriefing.
Study Setting and Population
Participants consisted of third year medical students enrolled in the “Clinical Procedures Rotation” at the Medical College of Wisconsin from September 2007 to June 2008. This is a required course for third year medical students. A component of this rotation is intensive training in medical resuscitation, including Advanced Cardiac Life Support (ACLS), taught by Emergency Medicine faculty and residents. Following instruction in the ACLS algorithms, the students undergo a resuscitation simulation that is facilitated by Emergency Medicine faculty. The students enrolled in this course had no previous training in ACLS or in medical simulation. The participants were randomly assigned to receive either postsimulation debriefing or in-simulation debriefing. Randomization was performed using the website www.random.org, which assigned each participant an odd or even number corresponding to one of the two study groups. This project was determined to be exempt from review by the Medical College of Wisconsin Institutional Review Board. No informed consent was required, as this was part of an established educational practice and represented minimal risk to the participants. The participants were aware that they were participating in a research study, but they were unaware of the objectives of the study.
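The odd/even assignment scheme described above can be sketched as follows. This is an illustrative stand-in only: the study drew its numbers from www.random.org, whereas the sketch below uses Python's standard `random` module, and the function name, seed parameter, and group labels are hypothetical.

```python
import random

def assign_groups(participants, seed=None):
    """Assign each participant to one of two debriefing groups.

    Mimics the odd/even scheme: each participant draws a random
    number; an odd draw maps to in-simulation debriefing and an
    even draw to postsimulation debriefing. The optional seed is
    only for reproducible illustration.
    """
    rng = random.Random(seed)
    assignments = {}
    for participant in participants:
        number = rng.randint(1, 1000)
        group = "in-simulation" if number % 2 == 1 else "postsimulation"
        assignments[participant] = group
    return assignments
```

Because each draw is independently odd or even with equal probability, the two groups need not be exactly equal in size, which is consistent with the unequal group sizes reported in the Results (84 vs. 77).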
All the simulations were performed using the Laerdal SimMan® (SimMan SW version 3.3.1, Laerdal Medical, Stavanger, Norway). The simulation scenarios included a patient with an ST-elevation myocardial infarction that deteriorated into ventricular fibrillation and a patient with a third degree atrioventricular block who required cardiac pacing. After randomization, the students were divided into three-person “teams” to complete the simulation. Before the start of the simulation, the participants were oriented to the simulation, with the type of orientation dependent on the type of debriefing that they were to receive. The in-simulation group was instructed that all the teaching would occur during the simulation and that there would not be any postsimulation teaching, whereas the postsimulation group was told that no questions would be answered and no teaching would occur until the completion of the simulation. The first team would complete the simulation with a second team observing the simulation in the same room. Observers were provided an outline of the simulation case along with the critical action steps to follow along. After the completion of the first simulation, the observation team completed the second case with the first team observing. Both teams received the same type of debriefing. The same two individuals facilitated the simulations and provided debriefing throughout the course of the study. The facilitators were in the same room with the participants during the simulation.
Critical actions, as well as primary and secondary learning objectives, for each of the cases were identified by the facilitator prior to the start of the simulation. It was determined that these critical actions and objectives would be the main topics discussed for both the in-simulation and the postsimulation debriefing groups. In this way, our debriefing was objective oriented.16 In addition to these critical actions and objectives, any other errors or specific questions that the learners had during the simulation could be reviewed during the debriefing sessions as well. See Addendums 1 and 2 for outlines of the two simulation cases used and the list of critical actions and learning objectives for each case.
In the in-simulation debriefing group, the simulation was suspended at any point when the students made an error in management, when it was apparent to the facilitator that they were unsure of further management (a 30-second interval without any action taken by the students was set as the indicator), or when the learners failed to perform a critical action. At this point, the facilitator would tell the participants the consequences of the error or inaction. The facilitator would then instruct the participants in the correct action. For example, if the learners failed to defibrillate a patient in ventricular fibrillation, the facilitator would say something like, “The patient was in ventricular fibrillation, which requires both defibrillation and vasoactive medications. Ventricular fibrillation is one of the two shockable rhythms, the other being pulseless ventricular tachycardia. Using our biphasic device the energy level is 200 Joules. Immediately following defibrillation you must resume cardiopulmonary resuscitation (CPR) for 5 cycles, or approximately 2 minutes, prior to rechecking for a pulse.” The facilitator would also review the identification of ventricular fibrillation and differentiate it from other rhythms. Following the instruction, the simulation scenario would be restarted. These steps were repeated as often as needed until the completion of the simulation. In this way, instruction in the correct case management and correction of errors in management were made immediately and throughout the case. After the simulation, the facilitators were allowed to answer any specific questions that the students had, but it was emphasized that this was to be kept to a minimum and that the majority of the teaching was to occur during the simulation suspensions.
In the postsimulation debriefing group, at any point during the simulation when the students made an error in management or it was apparent to the facilitator that the students were unsure of further management (again using the 30-second time limit), the consequences of the error or of the students' inaction were allowed to occur. If the error caused the learners to significantly deviate from appropriate management, the facilitator would redirect the learners by stating the correct action, and the simulation would be restarted. No further teaching or explanation was provided during the simulation. In contrast to the example above, if the learners failed to defibrillate a patient in ventricular fibrillation, the facilitator would simply state, “defibrillate the patient now.” These steps were repeated as often as needed until the completion of the simulation. The occasional redirection was necessary because our learners were an inexperienced group with little prior knowledge of ACLS and thus made several errors; without this redirection, the simulation would not have been able to progress. See Table 1 for further examples of some of the common types of error and how debriefing addressed these errors for both in-simulation debriefing and postsimulation debriefing.
The postsimulation debriefing used a structured, three-step method informed by previously described models of debriefing2 (Fig. 1). In step 1, students were allowed to “decompress” from the emotionally charged simulation and reflect on their experience. The facilitator asked the students the following questions: “How do you think this went? What did you think went well and what were you challenged by?” Students were allowed to discuss with little to no participation from the facilitator. In step 2, the facilitator clarified the facts, concepts, and principles involved in the case. The facilitator reviewed the stepwise ideal management of the case and along the way systematically pointed out the correct and incorrect actions that the students performed in relation to this ideal management. Questions about the case were answered at this time. In step 3, students were given supportive encouragement and asked to generalize what they learned from the experience. In this step, the students were given a “pep talk” and asked how they would apply the things they learned to a real case. The script for this step was along the lines of “That was a challenging case. You did a really good job! As you see, you achieved many of the critical actions. You may have had the experience that resuscitations can feel stressful and chaotic. That is because they often are. That is why it is important to be well prepared. We would like you to think about how you will be a team leader when the time comes for you to save someone's life.” At this point in the debriefing, time constraints generally led us to end the experience with the students reflecting without further group discussion. In both of the groups, the total time for the simulation and debriefing was limited to 20 minutes.
Two days after the completion of the simulation experience, anonymous surveys were distributed to the students. The initial portion of the survey consisted of retrospective pretest and posttest questions related to the students' subjective self-reported confidence in their abilities to perform the needed skills during a medical resuscitation. The students were asked to rate the statements using a seven-point Likert-scale, which ranged from 1 (no confidence) to 7 (very confident). The second portion of the survey consisted of Likert-scale questions related to the teaching effectiveness of the facilitator, the effectiveness of the debriefing strategy used, and the realism of the simulation. The students were asked to rate these statements on a seven-point Likert-scale of 1 (strongly disagree) to 7 (strongly agree).
The anonymous survey data was analyzed using SPSS 15.0. The internal consistency of the data was assessed through Cronbach α. Statistically significant differences in Likert-scale survey responses for the two debriefing groups were ascertained through Mann-Whitney U tests. Retrospective pretest/posttest differences were determined through Wilcoxon signed-rank tests with effect sizes reported though Spearman ρ correlations. Descriptive statistics include medians and interquartile ranges.
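The analysis pipeline described above can be sketched in Python with SciPy, which provides the Mann-Whitney U, Wilcoxon signed-rank, and Spearman correlation tests; Cronbach's α has no SciPy built-in, so it is computed directly from its standard definition. The data below are randomly generated for illustration only and are not the study's data; all names and sample sizes are assumptions.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) array:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Illustrative 7-point Likert responses, NOT the study's data.
rng = np.random.default_rng(0)
post_group = rng.integers(5, 8, size=(30, 3))  # postsimulation debriefing ratings
in_group = rng.integers(3, 7, size=(30, 3))    # in-simulation debriefing ratings

# Between-group comparison of one survey item (Mann-Whitney U test).
u_stat, p_between = stats.mannwhitneyu(post_group[:, 0], in_group[:, 0])

# Retrospective pretest vs. posttest confidence on one item
# (Wilcoxon signed-rank test), with the pretest-posttest association
# reported as a Spearman rho correlation.
pre = rng.integers(1, 5, size=30)
post = pre + rng.integers(1, 4, size=30)  # simulated uniform improvement
w_stat, p_paired = stats.wilcoxon(pre, post)
rho, p_rho = stats.spearmanr(pre, post)
```

Nonparametric tests are the natural choice here because Likert responses are ordinal, so rank-based comparisons avoid assuming interval-scaled, normally distributed data.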
One hundred sixty-one third year medical students enrolled in the Medical College of Wisconsin “Clinical Procedures Rotation” were randomized to receive either in-simulation debriefing (N = 84) or postsimulation debriefing (N = 77). All students enrolled in the course were included in the analysis, as they all completed the required simulation and the subsequent optional questionnaire.
Comparative Analysis of Groups Receiving In-Simulation Debriefing Versus Postsimulation Debriefing
The internal consistency for the survey data from the in-simulation debriefing group was α = 0.69, and for the postsimulation debriefing group was α = 0.60, demonstrating a moderate level of reliability for the interrelated debriefing/feedback and session items in the data for both groups. The analysis reported in Table 2 showed statistically significant differences in rank scores between the two groups for the following three statements: “the debriefing helped me learn effectively,” “the debriefing helped me to understand the correct and incorrect actions,” and “the debriefing style was effective” (all P = 0.001), with the group that received postsimulation debriefing rating these measures significantly higher. Furthermore, the high median scores (≥6.0) for these three items suggest that students in both groups agreed that the debriefing was effective, helped them learn effectively, and helped them understand the correct and incorrect actions to take, with the postsimulation group rating these items significantly higher than the in-simulation group (Table 2).
The internal consistency of the retrospective pretest/posttest data was α = 0.91, which demonstrated a high level of reliability in the survey data of the interrelated student confidence items. This indicated that the items were consistently measuring the same construct (presumably of student confidence) and that students could reliably self-assess their confidence in their knowledge and in their abilities to manage, lead, and adapt to a changing/novel medical resuscitation. Analysis of the pretest-posttest items showed statistically significant increases in median scores (all P ≤ 0.001) and moderate associations through Spearman ρ correlations (all ρ ∼ 0.5 with P ≤ 0.001) between the pretest ratings and the posttest ratings for all the statements related to the students' self-reported confidence and knowledge (Table 3).
Collectively, the reliable results from these four items indicated that students were confident in their knowledge and in their abilities to manage, lead, and adapt to a changing/novel medical resuscitation. When the group receiving in-simulation debriefing was compared with the group receiving postsimulation debriefing, no differences were found between their results (ie, all four pretest/posttest differences remained statistically significant for both groups, but there were no between-group differences for either pretest or posttest items).
Feedback about the results of one's actions is one of the features of simulation that can most facilitate learning.3 Van Ments19 states that “the debriefing session is the most important part of the activity. It is here that the meaning of the enactment is clarified; the lessons to be learnt are underlined; and the connections are made to what the students already know and what they need for the future.”
There are potential advantages and disadvantages of each of the approaches that were used. In postsimulation debriefing, the participants are allowed to actually experience the consequences of their mistakes, as these are allowed to play out in real time during the simulation, producing a high level of clinical realism.1 Some educators believe that this type of uninterrupted simulation is essential to the simulation experience.20 In contrast, when debriefing is given throughout a simulation, the simulation is suspended at the time of error, and the participants are told what the consequences of the error would have been. Actually experiencing the consequences of their actions can have a greater impact and leave a greater imprint on the participants than simply being told what the results would have been. Being allowed to complete a simulation without interruption tends to produce a higher level of clinical and emotional realism for the participants, which may be lost with the repeated interruptions that occur when debriefing takes place throughout a simulation.21
A significant concern regarding simulation is the potential for negative learning to occur. Negative learning occurs if the student learns something incorrectly because of an imperfect simulation. Examples of imperfections that may lead to negative learning include time acceleration, technological limitations, learner errors, and others.1 When negative learning occurs, the participants retain the mistake that was made without retaining the correct action. Negative learning may be effectively mitigated by the facilitator of the simulation. There is greater opportunity to mitigate negative learning when the debriefing occurs throughout the simulation as opposed to only giving it after the completion of the simulation.
Results of previous work have shown that learners perceive simulation to be an effective way to teach skills and medical knowledge and that they enjoy simulation-based learning.22,23 The initial portion of the survey contained retrospective pretest and posttest questions, which were selected to indicate how the students perceived their own knowledge base, skills, and confidence levels regarding medical resuscitation both before and after the simulation. This means that students completed a single survey, after the experience, which asked them to estimate their knowledge, skills, and confidence both before and after the simulation. Regardless of the type of debriefing that the participants received, there were significant increases in their responses to all the questions asked. Although both groups of participants rated their posttest confidence levels significantly higher, there was no difference between the group that received postsimulation debriefing and the group that received in-simulation debriefing. Our participants also rated their enjoyment of the simulation experience highly. These results indicate that our participants have similar attitudes toward simulation as those in previous studies.
Participants in the postsimulation group did not rate the realism of the simulation, whether the debriefing interfered with the simulation, or whether the facilitator was disruptive any differently from those in the in-simulation group, who experienced repeated interruptions during the simulation. Although correcting learners' mistakes in real time does help to reinforce the correct information, there are concerns that repeated interruptions during the simulation may detract from its realism and lead to less effective learning.21 Regardless of whether the debriefing occurred during the simulation or was given after its completion, the learners rated the realism of the simulation highly. This is important because even if a postsimulation debriefing is planned, the facilitator may occasionally need to redirect the learners. Also, occasional pauses to redirect learners may help to negate any potential negative learning that may occur. Based on our data, pauses can occur during a simulation without negatively affecting the perceived realism of the simulation.
Participants perceived that limited feedback during the simulation followed by a comprehensive debriefing session helped them to better learn the subject matter, to better understand the correct versus incorrect actions, and that the session was overall more effective compared with in-simulation debriefing. This may have been the result of allowing them to complete the simulation without interruption and to experience the simulation in a more realistic environment. However, as previously mentioned, they did not rate the realism any differently. Another explanation for the differences is that the opportunity for a comprehensive review of the case after the simulation gave the participants an opportunity to learn and ask questions without the stress involved in an ongoing simulation scenario. Also, the complete review allowed the participants to place all the actions in the greater context of the entire case, as opposed to the intermittent teaching that occurred throughout the case in the in-simulation debriefing group.
There are several limitations to this study. First, the participants in the study were limited to third year medical students with limited experience in medical resuscitation. It is not possible to generalize the results to all simulation participants. Residents or faculty who are involved in simulation experiences, and who likely have more experience with the subject material, may benefit from a different type of debriefing than medical students. It seems probable that the more experience a group of learners has with the simulation subject matter, the more they would benefit from being allowed to complete the simulation without interruption. It would be of interest in future studies to see whether a group of learners such as medical residents or faculty would have different responses to the survey. Second, medical simulation is used across a wide variety of clinical scenarios including trauma resuscitation, procedural skills, and others. Our study only included medical resuscitations, and the results may not be applicable to other types of simulation experiences. Future research could be directed to evaluate whether the results presented here can be generalized to different groups of learners in varying medical situations. Third, although our study examined student perceptions of the simulation, it did not include any clinical outcome data. No data regarding test scores or subsequent performance in simulation or actual clinical practice were included. Although participant perceptions of a simulation experience are important, changes in participants' subsequent performance are the overall goal for any educational intervention.
Studies using standardized patient encounters, pre-post test questionnaires on the subject matter, or repeat simulation experiences to evaluate future performance and long-term knowledge retention are possible and would help to evaluate any changes in clinical practice that are the result of simulation training and the different styles of debriefing. Fourth, although the sessions were limited to 20 minutes in overall duration (including debriefing time in the postsimulation debriefing group), the specific amount of time spent performing the simulation versus coaching/debriefing was not recorded. This could be controlled for in future studies with strict time keeping requirements.
Third year medical students feel that simulation is an effective learning tool that significantly increases their confidence in their ability to perform critical care skills. Limited feedback during a medical simulation followed by a comprehensive debriefing session helped the students to learn the subject matter more effectively, to better understand the correct versus the incorrect actions, and was rated as more effective overall when compared with in-simulation debriefing. Interruptions during a simulation to redirect students can be done without significantly altering the realism of the simulation.
1. Bond WF, Lammers RL, Spillane LL, et al. The use of simulation in emergency medicine: a research agenda. Acad Emerg Med 2007;14:353–364.
2. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc 2007;2:115–125.
3. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27:10–28.
4. Kaufman HH, Wiegand RL, Tunick RH. Teaching surgeons to operate: principles of psychomotor skills training. Acta Neurochir 1987;87:1–7.
5. Rall M, Manser T, Howard S. Key elements of debriefing for simulator training. Eur J Anaesthesiol 2000;17:516–517.
6. Small SD, Wuerz RC, Simon R, Shapiro N, Conn A, Setnik G. Demonstration of high-fidelity simulation team training for emergency medicine. Acad Emerg Med 1999;6:312–323.
7. Stafford F. The significance of de-roling and debriefing in training medical students using simulation to train medical students. Med Educ 2005;39:1083–1085.
8. Mahmood T, Darzi A. The learning curve for a colonoscopy simulator in the absence of any feedback. Surg Endosc 2004;18:1224–1230.
9. Rogers DA, Regehr G, Howdieshell TR, Yeh KA, Palm E. The impact of computer-assisted learning for surgical technical skill training. Am J Surg 2000;179:341–343.
10. Lederman LC. Debriefing: toward a systematic assessment of theory and practice. Simul Gaming 1992;23:145–160.
11. Petranek CF. Written debriefing: the next vital step in learning with simulations. Simul Gaming 2000;31:108–118.
12. Steinwachs B. How to facilitate a debrief. Simul Gaming 1992;23:186–195.
13. Lederman LC. Differences that Make a Difference: Intercultural Communication, Simulation, and the Debriefing Process in Diverse Interaction. Presented at the Annual Conference of the International Simulation and Gaming Association. Kyoto, Japan, July 15–19, 1991.
14. Petranek C. Maturation in experiential learning: principles of simulation and gaming. Simul Gaming 1994:513–522.
15. Thatcher DC, Robinson MJ. An Introduction to Games and Simulation in Education. Hants: Solent Simulations; 1985.
16. Wallin CJ, Meurligh L, Hedren L, Hedegård J, Felländer-Tsai L. Target-focused medical emergency team training using a human patient simulator: effects on behavior and attitude. Med Educ 2007;41:173–180.
17. Holmes DS. Debriefing after psychological experiments. I. Effectiveness of postdeception dehoaxing. Am Psychol 1976;31:858–867.
18. Dunn WF. Simulators in Critical Care and Beyond. Des Plaines, IL: Society of Critical Care Medicine; 2004.
19. Van Ments M. The Effective Use of Role-Play. 2nd ed. London: Kogan Page Ltd; 1999.
20. Flanagan B, Nestel D, Joseph M. Making patient safety the focus: crisis resource management in the undergraduate curriculum. Med Educ 2004;38:56–66.
21. McLaughlin SA, Doezema D, Sklar DP. Human simulation in emergency medicine training: a model curriculum. Acad Emerg Med 2002;9:1310–1318.
22. Cook DA, Dupras DM, Thompson WG, Pankratz VS. Web-based learning in residents' continuity clinics: a randomized, controlled trial. Acad Med 2005;80:90–97.
23. Osman LM, Muir AL. Computer skills and attitudes to computer-aided learning among medical students. Med Educ 1994;28:381–385.
© 2010 Lippincott Williams & Wilkins, Inc.