Advances in resuscitation science, educational efficiency, and local implementation are needed to maximize survival from cardiac arrest.1 Well-designed education programs can narrow the gap between the ideal and actual performance of providers.2 For this purpose, simulation has been actively applied in advanced life support training, and there have been several reports on improvements in the quality of care and survival as a result of this training.3–5
In conventional advanced cardiovascular life support (ACLS) courses, instructors were advised to provide feedback through guided reflection by asking learners to summarize the roles of team members, identify areas for improvement, and critique the case.6 However, the instructor was generally expected to address approximately 20 objectives within 4 to 6 minutes. Given this time constraint, the instructor chose specific actions observed during a simulation that could be corrected or reinforced and provided feedback to learners with guidance and direction. This type of focused and corrective feedback (FCF) is an efficient but instructor-centered debriefing method. With increasing recognition that learner-centered debriefing is the most important component of simulation-based learning, the American Heart Association (AHA) recommended that debriefing, as a technique to facilitate learning, be included in all ACLS courses.7 In 2009, the AHA implemented the Structured and Supported Debriefing (SSD) model, which was developed by leaders (O'Donnell and Phrampus) at the WISER institute and was applicable to ACLS and Pediatric Advanced Life Support (PALS) courses.8–10 In SSD, a learner-centered debriefing method, the instructor invites learners, in a safe learning environment, to think about and describe what they were thinking, what they did, why and how they implemented certain actions, and how they can improve.9 SSD can help learners address gaps in their cognitive and behavioral performance through postsimulation debriefing.10 The SSD model follows a three-step format [gather-analyze-summarize (GAS)] to achieve comprehensive and effective debriefing and uses a practical debriefing script to standardize the instructor-student interaction.10,11
In simulation training, active participation and reflection on an event are the cornerstones of experiential learning. Through simulations, the gap between desired performance, which is a predetermined objective, and the performance actually observed during simulation is exposed.12 In practice, learners cannot thoroughly reflect on their own experiences and the performance gap by themselves. Thus, debriefing, a postexperience analysis or guided reflection with an instructor, is paramount to identifying and closing the performance gap.13 Debriefing, as a method of maximizing learning effectiveness, is considered one of the priorities in simulation-based educational studies.14 With learner-centered debriefing, the use of a standardized script by novice instructors in PALS scenario simulations may improve learning outcomes.15 However, there are no studies comparing learner-centered debriefing with instructor-centered debriefing in a healthcare simulation. We do not know whether SSD, a newly implemented, learner-centered debriefing method, is superior to FCF, a conventional, instructor-centered debriefing method, for cardiac arrest team training.
In aviation simulation training, learner-centered debriefing increased discussion and self-analysis of crew resource management performance.16 In light of experiential learning theory, reflection by the learner is thought to provide deeper learning and enhanced performance. In a recently suggested integrated conceptual framework for a blended approach to debriefing, focused facilitation, a learner-centered debriefing method, was suggested as a better fit for the behavioral performance domain than directive feedback and teaching.17 Thus, we hypothesized that those randomized to the SSD group would have a greater improvement in team dynamics scores, which reflect nontechnical skills such as crew resource management, than those in the FCF group. The aim of our study was to compare the educational impact of two different methods of postsimulation debriefing (FCF vs. SSD) on team dynamics in a simulation-based cardiac arrest team training for fourth-year medical students.
This was a pilot randomized controlled study conducted at a simulation center in a medical school. The local institutional review board approved this study (CUMC11U088), and all students provided written informed consent to participate.
A convenience sample of 100 fourth-year medical students who participated in the emergency medicine clerkship over a 3-month period was recruited for this study. Within the fourth-year curriculum of the medical graduate school, the emergency medicine clerkship was a 1-week mandatory program. All students held a basic life support (BLS) certificate from the AHA. They had participated in a 2-hour simulation-based team training experience for cardiac arrest and multiple trauma during their introduction to clinical medicine course in their third year. Early in their fourth year, they passed an objective structured clinical examination (OSCE) on basic ACLS skills, such as bag-valve-mask ventilation and defibrillation. Medical students who did not want to participate in the study were excluded.
After a written test, a topic review, instruction on effective team dynamics, and a pretraining survey, the students were randomly assigned to two groups (FCF or SSD), with each team composed of six students. Each student drew a number from an opaque box. For randomization, a statistician who did not participate in the study design generated a table of random numbers assigned to each group (See Appendix 1. Additional study method details).
The students assigned to either group entered their respective simulation rooms and received a 10-minute standard orientation to the simulator and simulation environment. Each group experienced two cardiac arrest scenarios, each followed by the assigned type of postsimulation debriefing for 20 minutes: (1) baseline scenario, (2) assigned type of postsimulation debriefing (FCF vs. SSD), (3) exercise scenario, and (4) assigned type of postsimulation debriefing (FCF vs. SSD). Three 10-minute cardiac arrest scenarios including the test scenario were standardized and programmed equally into each simulator (See Appendix 2. Simulation scenario development and running, and Text documents, Supplemental Digital Content 1, http://links.lww.com/SIH/A323, which demonstrate simulation scenarios and critical action checklists for debriefing).
After running each scenario and evaluating the student team's performance using the critical action checklist for debriefing, the instructors began the postsimulation, non-video-assisted debriefing based on the evaluation results. Two AHA-certified, experienced ACLS instructors, one in each room, conducted the simulation scenarios and postsimulation debriefings with the groups. The instructor in charge of SSD received SSD training in the postgraduate course provided at the 2009 International Meeting on Simulation in Healthcare and subsequently completed the AHA online module.8 After that training, the instructor conducted SSD with healthcare providers several times in official ACLS provider courses. The other instructor, in charge of FCF, did not receive any form of SSD training but had received formal training in providing feedback as an AHA-certified ACLS instructor and had substantial experience with FCF in conventional ACLS provider courses. Before the study, an investigator (course director) oriented each instructor to the assigned debriefing method.
On the basis of the evaluation results, each instructor discussed the student teams' performances and sought to allocate the debriefing time equally between team dynamics and team clinical performance, that is, high-quality cardiopulmonary resuscitation (CPR) and management of shockable and nonshockable arrest (See Text documents, Supplemental Digital Content 2, http://links.lww.com/SIH/A324, which demonstrate the postsimulation debriefing outline). In FCF, a type of instructor-centered debriefing, the instructor gave direct feedback on the student team's dynamics and clinical performance without using a script; guidance and direction were provided by correcting errors in performance or by reinforcing appropriate performance. In contrast, in SSD, a type of learner-centered debriefing, the instructor focused on why certain decisions were made and why certain actions were implemented through an interactive instructor-student discussion using the scripted GAS debriefing tool.8 Based on the recommendation of Raemer et al,14 the graphical framework of the present study is shown in Figure 1. During the postsimulation debriefing, an investigator (course director) monitored adherence to the SSD script from the control room and provided feedback to the instructor in charge of SSD. A bell rang halfway through the debriefing time to facilitate equal discussion of the content and rang again at the end of the debriefing. If a debriefing extended past 20 minutes, an investigator informed the group and intervened to stop it.
After completing these two cardiac arrest scenarios and postsimulation debriefings, each group participated in the same test scenario. After the test scenario, the students evaluated the postsimulation debriefings that they had received and their satisfaction with the training. A posttraining survey was conducted to evaluate the changes in their comprehension of and confidence in cardiac arrest management and team dynamics. After the posttraining survey, all of the students participated in an SSD of the test scenario with the facilitation of the course director.
Outcomes and Measurements
The primary outcome was the improvement in team dynamics scores between baseline and test simulation. The secondary outcomes were improvements in team clinical performance scores, self-assessed comprehension and confidence in cardiac arrest management and team dynamics before and after training, as well as postsimulation debriefing evaluations.
We collected data on sex, age, ACLS knowledge test scores, OSCE scores for defibrillation and BLS, and students' experience with ACLS skills to compare the baseline characteristics of the participants between the two groups. To assess the team dynamics and team clinical performance, all of the recorded videos of the three simulation sessions for each group, excluding the postsimulation debriefing, were edited and coded. The videos were sent to two AHA-certified ACLS instructors who did not participate in the study and were blinded to the assigned group. They evaluated the videos independently using checklists that were developed through several steps and validated in a previous study18 (See Text documents, Supplemental Digital Content 3, http://links.lww.com/SIH/A325, which demonstrate team dynamics and team clinical performance checklists).
The team dynamics checklist consisted of 10 items on the relationships between the team leader and team members, such as closed-loop communication and mutual respect, and the team clinical performance checklist consisted of 10 items on critical resuscitation actions, such as rhythm analysis and high-quality CPR.18 Discrepant scores between the two raters were averaged. Self-assessed comprehension of and confidence in cardiac arrest management and team dynamics, as well as evaluations of the postsimulation debriefing, were examined through pretraining and posttraining surveys using a 10-point scale.
Depending on the normality of the data distribution, descriptive data were presented as frequencies and percentages, means and standard deviations (SDs), or medians and ranges and were compared using χ2 tests, t tests, or Mann–Whitney U tests as appropriate.
We calculated the differences in each team's team dynamics score between the baseline and test simulations, as well as in the team clinical performance score, using the team data. In addition, we compared the differences between the two groups for each assessment tool using Mann–Whitney U tests. The interrater reliability between the two assessors for team dynamics and team clinical performance was measured using the intraclass correlation coefficient (ICC).
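For illustration, the interrater reliability calculation can be sketched in a few lines. The paper does not state which form of the intraclass correlation was used, so the sketch below assumes the one-way random-effects form, ICC(1,1), computed from a one-way ANOVA decomposition; it is a minimal example, not the study's actual analysis code.

```python
def icc_oneway(ratings):
    """One-way random-effects intraclass correlation, ICC(1,1).

    ratings: one row per rated team, one column per rater,
    e.g. [[rater1_score, rater2_score], ...].
    """
    n = len(ratings)        # number of rated teams
    k = len(ratings[0])     # number of raters
    grand = sum(sum(row) for row in ratings) / (n * k)
    means = [sum(row) / k for row in ratings]
    # Between-team and within-team mean squares from one-way ANOVA
    msb = k * sum((m - grand) ** 2 for m in means) / (n - 1)
    msw = sum((x - means[i]) ** 2
              for i, row in enumerate(ratings) for x in row) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Perfect agreement between two raters gives an ICC of 1.0;
# disagreement lowers the coefficient toward 0.
print(icc_oneway([[70, 70], [80, 80], [90, 90]]))  # 1.0
print(icc_oneway([[70, 74], [80, 76], [90, 88]]))  # about 0.92
```

Coefficients near the study's reported 0.815 and 0.703 would indicate, under this form, that most score variance comes from true differences among teams rather than from rater disagreement.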
Improvements in self-assessed comprehension and confidence levels for cardiac arrest management and team dynamics before and after training were also tested using the individual data as described previously. The pretraining and posttraining survey data were described as the mean and SD or median and range and were compared using t tests or Mann–Whitney U tests according to the normality of data distribution.
The analyses were conducted with SAS software (Version 9.12; SAS Institute, Inc, Cary, NC), and a P value of less than 0.05 was considered statistically significant.
Of the 100 fourth-year medical school students, 95 participated in the study and five were excluded because they did not attend the clerkship orientation. Forty-seven and forty-eight students were randomly assigned to the FCF and SSD groups, respectively, constituting eight teams in each group (Fig. 2). All of the participating students finished the study with no cases of dropout. There were no significant differences in age, sex, BLS OSCE scores, and ACLS knowledge test scores between the two groups, although the defibrillation OSCE score of the FCF group was higher than that of the SSD group (P = 0.007, Table 1). In the pretraining survey on experience, only experience with defibrillation on mannequins was higher in the FCF group than in the SSD group (P = 0.042, Table 1).
There were no significant differences between the two groups in team dynamics or team clinical performance during the baseline simulation (Table 2). In the SSD group, the team dynamics score during the test simulation improved compared with the score during the baseline simulation [baseline: 74.5 (65.9–80.9), test: 85.0 (71.9–87.6), P = 0.035], but the team clinical performance score did not improve [baseline: 76.0 (57.3–83.9), test: 77.5 (74.5–87.6), P = 0.123]. There were no improvements in the FCF group between the baseline and test simulations regarding the team dynamics or team clinical performance scores [baseline: 70.8 (67.4–75.3), test: 71.5 (63.3–86.5), P = 0.326; baseline: 73.0 (63.3–77.1), test: 73.8 (67.8–83.9), P = 0.482, respectively].
The improvement in team dynamics and team clinical performance scores during the test simulation compared with the baseline simulation, which was the study's primary outcome, did not differ between the two groups (P = 0.328, Table 2). The intraclass correlation between the two raters for the team dynamics and team clinical performance scores showed a high reliability, with coefficients of 0.815 and 0.703, respectively.
Self-assessed comprehension and confidence levels regarding cardiac arrest management and team dynamics increased after training compared with before training (P < 0.001 for each), but there was no difference between the two groups. In the evaluation of the postsimulation debriefing, the students in the SSD group rated the following aspects higher than those in the FCF group: enough time for discussion, opportunity to participate in the discussion, understanding of cardiac arrest management, understanding of team dynamics, and helpfulness in achieving the learning objectives.
In this pilot randomized controlled study comparing the educational impact of two different methods of postsimulation debriefing (FCF or SSD) in a simulation-based cardiac arrest team training for fourth-year medical students, we found that there were no differences in the improvements in team dynamics and team clinical performance scores between the two debriefing groups. However, the team dynamics score of the SSD group during the test simulation, representing the behavioral performance of the team, was significantly improved compared with the score at baseline. To the best of our knowledge, this is the first study to compare the SSD, a newly implemented, learner-centered ACLS postsimulation debriefing method, with FCF, a conventional, instructor-centered ACLS postsimulation debriefing method, both applied by experienced ACLS instructors.
In conventional ACLS courses, instructors assess the critical actions of trainees during the simulation with checklists and provide corrective feedback to trainees after simulations. Although an experienced instructor may be able to handle a large amount of content in a short period, this type of postsimulation debriefing, FCF, may be less effective for adult learning because it provides directive feedback from the instructor rather than inducing self-reflection and analysis by trainees. In simulation-based medical education, debriefing is a purposeful and structured period that involves a review and discussion of the participant's experiences during simulation.19 Debriefing aims to provide learners with opportunities to identify responses that induce distress and explore and learn from their own and others' practices as well as to provide emotional support. It functions as a bridge that can close the natural gap between experience and making sense of experiential learning, in which reflection and subsequent analysis of events or activities are cornerstones.13
The Structured and Supported Debriefing model focuses on learners' self-reflection on their own performance during simulations and can stimulate learning facilitated by instructors using a scripted debriefing tool based on the learning objectives.11 The structured element of this debriefing model comprises three phases, Gather, Analyze, and Summarize (GAS), each with specific goals, actions, and time estimates.8 The goal of the Gather phase is to listen to what participants say to understand what they are thinking and feeling. The Analyze phase aims to facilitate the participants' reflection on and thoughtful analysis of their actions, and the Summarize phase targets the identification and review of lessons learned from the scenarios. The supported element includes interpersonal support as well as the use of protocols, algorithms, and best evidence.8
Various postsimulation debriefing methods have positively influenced trainees' performances in simulated crises or cardiac arrests.20–22 Eppich and Cheng17 suggested that focused facilitation, a learner-centered debriefing method, is optimal for the behavioral performance domain of an integrated conceptual framework for a blended approach to debriefing. In a recent multicenter study examining whether scripted debriefing by novice instructors affected knowledge and performance in simulated cardiopulmonary arrest, the behavioral performance of the team leader and the team clinical performance improved after learner-centered debriefing, regardless of whether the debriefing script tool was used.15 However, in our study, the team clinical performance scores of the SSD group, reflecting the critical action steps of the student teams, did not improve compared with baseline, whereas the team dynamics score, reflecting leadership and teamwork, did improve. Although the team clinical performance score of the SSD group did not improve, we believe that strengthening components of the intervention, such as allowing more time for debriefing and more sessions before the test simulation, might also improve team clinical performance scores.
In our study, there were no differences between the FCF and SSD groups in the improvements in team dynamics or clinical performance scores at the test simulation compared with baseline. Although the debriefing method was the main factor affecting the scores in our study, several other factors can influence the team dynamics and clinical performance of resuscitation teams composed of fourth-year medical students. First, the cardiac arrest events may not have been relevant enough to the students' everyday activities to have an impact.23 Although the participants learned the ACLS core knowledge and practiced the resuscitation skills on mannequins and simulators, the teams might have shown little initiative or responded only superficially to cardiac arrest management.13 Furthermore, abstract conceptualization based on the medical students' previous experiences could have been inadequate because of their limited experience. Second, cultural aspects might also have affected our results. Because debriefing is a facilitated or guided reflection in the cycle of experiential learning, its success relies on active student-to-student and facilitator-to-student communication.13 However, compared with Western cultures, the social framework in Asian countries is more hierarchical and influenced by Confucian traditions, which emphasize authority and respect for the teacher in the teacher-student relationship.24 Because of these cultural differences, the Korean medical students participating in this study may have felt uncomfortable communicating with the facilitators during the SSD.25 Although the students in the SSD group reported that they had enough time for discussion and more opportunities to participate in the discussion, the discussion may still have been insufficient for self-reflection on team dynamics because of the 20-minute time constraint of the postsimulation debriefing.
Debriefings with Korean medical students may require more time to elicit their values and perspectives or to engage them in discussion because they are typically hesitant to participate in discussion.25 Furthermore, didactic lectures in which teachers convey knowledge to students and students listen passively constitute the main approach used by the conventional Korean educational system. Therefore, the students may be more familiar with instructor-centered methods, such as FCF, than with learner-centered methods, such as SSD.
In a recent concept article, Cheng et al26 suggested several key variables, including the amount of time available, the knowledge and experience of learners, and national culture, that should be considered to manage the delicate balance between learner-centered and instructor-centered debriefing. These concepts may explain our results. The educational impact of SSD on team dynamics and team clinical performance may have been similar to that of FCF in this time-limited, simulation-based cardiac arrest team training for fourth-year medical students in Korea, a culture with high power distance and high uncertainty avoidance.
Our study has several limitations. First, the debriefing intervention may not have been sufficiently powerful or standardized. In ACLS provider courses, the AHA recommends that debriefing last 4 to 6 minutes per scenario unless more time is needed, even with SSD.9 When less time is available, it is easy for an instructor to default to teaching learners through instructor-centered debriefing.26 To ensure relatively ample time for both debriefing methods, we allotted 20 minutes for debriefing, as in a multicenter debriefing study.15 Although the postsimulation debriefing time was increased to 20 minutes, it may still have been too short to debrief both the performance gaps and the exceptional performance with fourth-year medical students, who have limited clinical experience. It is also unclear how well the instructor adhered to SSD. The instructor in charge of SSD received SSD training, had substantial experience in learner-centered debriefing, including SSD, and used the debriefing script tool recommended by the AHA. Although an investigator (course director) closely monitored the debriefing sessions, gave feedback to the instructor, and managed the debriefing time, we did not measure compliance in each debriefing session. Thus, measuring compliance with the intervention is necessary in future debriefing studies. Similarly, it is unclear how many simulations are needed; more than two simulations before the test scenario may be required to intensify the intervention. Second, the team leader differed in all scenarios. Because this study was conducted within a training program during an emergency medicine clerkship, we had to give every student the opportunity to work as a team leader in the simulations.
Although the team-based simulation and postsimulation debriefing might have indirectly influenced the effects of leadership on learning, changing the team leader may have confounded the outcome because team leader performance often strongly influences overall team performance. Third, despite the random assignment, the baseline OSCE scores and experience with defibrillation performed on a mannequin differed between the groups. However, we presume that these differences in a single skill did not significantly affect overall team clinical performance because there were no differences in the scores during the baseline simulation. Fourth, because of the small number of teams, the statistical power to detect improvements in scores might have been inadequate. We conducted a post hoc power analysis with data from the total sample of 16 teams to determine the statistical power of our study. Assuming a 15-point difference in mean improvement in team dynamics scores between the two groups, the statistical power of this study was 0.87 with a type 1 error rate of 5% (two-sided α); assuming a 10-point difference, the power was 0.54. A total of 28 teams would have been needed for the study to have 80% power to detect a 10-point difference in mean improvement in the team dynamics score between the two groups. Thus, although power was adequate for a 15-point mean difference in team dynamics, it was inadequate for a 10-point mean difference. This study should therefore be considered a pilot, and an adequately powered subsequent study considering both scores should be conducted to determine the effectiveness of the two debriefing methods. Fifth, although we used validated checklists to assess team clinical performance and team dynamics, the scores showed limited sensitivity and a restricted range.
This can mean that there were very few differences among the teams or that the raters rarely used the full range of the scale. Sixth, the cardiac arrest management teams included only medical students because this study was conducted within a predetermined curriculum at a medical college. An interprofessional team composition would be more appropriate for evaluating the educational impact of debriefing methods on team dynamics. Seventh, only one instructor participated in each group. Although both instructors were trained in their respective postsimulation debriefing methods and using one instructor per method provided consistency for comparison, the quality of the postsimulation debriefing may have depended on each instructor's facilitation skills. Thus, further studies in which multiple instructors participate and are randomly assigned to groups are needed. Eighth, because the participants were medical students with limited clinical experience, we could not evaluate the performance of team leaders or individual team members. Because team leaders' behaviors are an important element of team dynamics, it would be meaningful to evaluate team leaders' performances with a specific tool in future studies with healthcare providers. Finally, because the participants were fourth-year medical students in an Asian country, the results may not generalize to students in other countries with different cultural backgrounds or to healthcare providers. Thus, further studies of healthcare providers reflecting cultural differences are also needed.
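The post hoc power figures discussed in the limitations can be reproduced approximately with a short, standard-library-only script. The between-group SD of the improvement score is not reported in the paper; the value of 9 points used below is an assumption, back-solved so the reported powers are roughly matched. The normal approximation used here returns slightly higher values than an exact noncentral-t calculation (about 0.92 vs. 0.87, 0.60 vs. 0.54, and 13 vs. 14 teams per group), so this is a sanity-check sketch rather than a reproduction of the study's analysis.

```python
from math import erf, sqrt, ceil

Z_ALPHA = 1.959964  # critical z for two-sided alpha = 0.05
Z_BETA = 0.841621   # z for target power = 0.80

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_sample(diff, sd, n_per_group):
    """Normal-approximation power of a two-sided, two-sample comparison."""
    d = diff / sd  # standardized effect size
    return phi(d * sqrt(n_per_group / 2.0) - Z_ALPHA)

def n_per_group_needed(diff, sd):
    """Per-group sample size for 80% power (normal approximation)."""
    d = diff / sd
    return ceil(2.0 * ((Z_ALPHA + Z_BETA) / d) ** 2)

SD = 9.0  # assumed SD of team dynamics improvement (not reported)
print(power_two_sample(15, SD, 8))  # about 0.92 under these assumptions
print(power_two_sample(10, SD, 8))  # about 0.60 under these assumptions
print(n_per_group_needed(10, SD))   # about 13 teams per group
```

The script shows the same qualitative conclusion as the paper: with 8 teams per group, a 15-point difference is detectable with good power, but a 10-point difference is not, and roughly twice as many teams would be needed.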
In this pilot randomized study, there was no significant difference between the two postsimulation debriefing methods (SSD vs. FCF) in the improvement in team dynamics from baseline to test simulations in a simulation-based cardiac arrest team training for fourth-year Korean medical students. Further adequately powered studies with healthcare providers in formal ACLS courses are needed.
The authors thank Dr. Paul E. Phrampus and Professor John O'Donnell of the Winter Institute for Simulation, Education and Research at the University of Pittsburgh Medical Center for their advice on conception of the study and their mentorship on simulation-based educational research. The authors also thank Dr. Sung Phil Chung for his permission to use the checklists on team dynamics and team clinical performance and Dr. Hyun Woo Yim and Professor Seung-Hee Jeong of the Clinical Research Coordinating Center at the Catholic Medical Center for their statistical assistance.
1. Chamberlain DA, Hazinski MF. Education
in resuscitation: an ILCOR symposium: Utstein Abbey: Stavanger, Norway: June 22–24, 2001. Circulation
2. Bhanji F, Finn JC, Lockey A, et al. Part 8: Education
, Implementation, and Teams: 2015 International Consensus on Cardiopulmonary Resuscitation and Emergency Cardiovascular Care Science With Treatment Recommendations. Circulation
3. Moretti MA, Cesar LA, Nusbacher A, Kern KB, Timerman S, Ramires JA. Advanced cardiac life support
training improves long-term survival from in-hospital cardiac arrest
4. Perkins GD. Simulation in resuscitation training. Resuscitation
5. Wayne DB, Didwania A, Feinglass J, Fudala MJ, Barsuk JH, McGaghie WC. Simulation-based education
improves quality of care during cardiac arrest
team responses at an academic teaching hospital: a case-control study. Chest
6. Field J, Doto F. Advanced Cardiac Life Support Instructor Manual
. Dallas, TX: American Heart Association; 2006.
7. Bhanji F, Donoghue AJ, Wolff MS, et al. Part 14: Education
: 2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation
8. O'Donnell J, Rodgers D, Lee W, et al. Structured and Supported Debriefing [Interactive Multimedia Program]
. Dallas, TX: American Heart Association; 2009.
9. Sinz E, Navarro K. Advanced Cardiac Life Support Instructor Manual
. Dallas, TX: American Heart Association; 2011.
10. Cheng A, Rodgers DL, van der Jagt E, Eppich W, O'Donnell J. Evolution of the Pediatric Advanced Life Support course: enhanced learning with a new debriefing tool and Web-based module for Pediatric Advanced Life Support instructors. Pediatr Crit Care Med
11. Phrampus P, O'Donnell J. Debriefing using a structured and supported approach. In: Levine AI, DeMaria S Jr, Schwartz AD, Sim AJ, eds. The Comprehensive Textbook of Healthcare Simulation
. New York: Springer; 2013:73–84.
12. Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing performance gaps in medical education
. Acad Emerg Med
13. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc
14. Raemer D, Anderson M, Cheng A, Fanning R, Nadkarni V, Savoldelli G. Research regarding debriefing as part of the learning process. Simul Healthc
15. Cheng A, Hunt EA, Donoghue A, et al. Examining pediatric resuscitation education
using simulation and scripted debriefing: a multicenter randomized trial. JAMA Pediatr
16. Dismukes RK, Jobe KK, McDonnell LK. LOFT debriefings: An Analysis of Instructor Techniques and Crew Participation (NASA Tech. Memo. No. 110442). Moffett Field, CA: NASA Ames Research Center; 1997.
17. Eppich W, Cheng A. Promoting Excellence and Reflective Learning in Simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc
18. Chung SP, Cho J, Park YS, et al. Development of assessment tools for performance and leadership of a cardiopulmonary resuscitation team. Korean J Crit Care Med
19. Flanagan B. Debriefing: theory and techniques. In: Riley RH, ed. Manual of Simulation in Healthcare
. New York: Oxford University Press; 2008:155–170.
20. Dine CJ, Gersh RE, Leary M, Riegel BJ, Bellini LM, Abella BS. Improving cardiopulmonary resuscitation quality and resuscitation training by combining audiovisual feedback and debriefing. Crit Care Med
21. Morgan PJ, Tarshis J, LeBlanc V, et al. Efficacy of high-fidelity simulation debriefing on the performance of practicing anaesthetists in simulated scenarios. Br J Anaesth
22. Savoldelli GL, Naik VN, Park J, Joo HS, Chow R, Hamstra SJ. Value of debriefing during simulated crisis management: oral versus video-assisted oral feedback. Anesthesiology
23. Lederman LC. Debriefing: toward a systematic assessment of theory and practice. Simul Gaming
24. Kim KJ, Kee C. Reform of medical education in Korea. Med Teach
25. Chung HS, Dieckmann P, Issenberg SB. It is time to consider cultural differences in debriefing. Simul Healthc
26. Cheng A, Morse KJ, Rudolph J, Arab AA, Runnacles J, Eppich W. Learner-centered debriefing for health care simulation education: lessons for faculty development. Simul Healthc
APPENDIX 1. ADDITIONAL STUDY METHOD DETAILS
We prepared two separate simulation rooms equipped with the same patient monitor, defibrillator, patient simulator, and emergency cart. The patient simulator, a SimMan (Laerdal, Stavanger, Norway), was set up on a hospital bed with a relatively hard surface and was controlled by an instructor in each room. Three ceiling-mounted video cameras in each simulation room simultaneously recorded the students' behavior and communication from multiple angles during the simulation scenarios: a bird's-eye view from the foot of the bed and overhead views of the simulator from both sides of the room. Audio and visual information from the three video cameras and mannequin data, including vital signs from the patient monitor, were combined into a quad-screen display for video review.
On the first day of every week, 12 students participated in a 5-hour simulation-based cardiac arrest team training program. After a written test (10 multiple-choice questions) on cardiac arrest management and team dynamics, the program director reviewed the test questions with the students. The students then learned the basic concepts of effective resuscitation team dynamics by watching and discussing the AHA ACLS team dynamics video. After a pretraining survey asking about their experience performing basic ACLS skills and their comprehension of, and confidence in, cardiac arrest management and team dynamics concepts, the students were randomly assigned to one of two groups (FCF or SSD).
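The random group allocation can be illustrated with a minimal sketch. This is a hypothetical reconstruction, not the study's actual randomization procedure; the function name, seeding, and even-split rule are assumptions.

```python
import random

def allocate(students, seed=None):
    """Shuffle the cohort and split it evenly into the two debriefing groups.

    Hypothetical sketch: the paper states only that students were randomly
    assigned to the FCF or SSD group, not how the allocation was generated.
    """
    rng = random.Random(seed)  # seed allows a reproducible allocation
    shuffled = list(students)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"FCF": shuffled[:half], "SSD": shuffled[half:]}

# A weekly cohort of 12 students yields two groups of 6.
groups = allocate(list(range(12)), seed=0)
```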
Because conventional ACLS courses are designed with a ratio of six students per instructor, mannequin, and station, each team was composed of six students. The students chose a team leader for each simulation, and the team leader assigned the remaining roles (airway, compressions, intravenous access/medications, monitor/defibrillator, and recorder), following the team model in the AHA ACLS team dynamics video. The same team members went through all simulations, but the roles were rotated between simulations to prevent an excellent student from serving as team leader in every simulation.
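One way the role rotation could work is a simple cyclic shift, sketched below. This is an illustrative assumption: the study let students choose the leader and did not specify a rotation rule, and the student labels are hypothetical. The six roles follow the AHA ACLS team model.

```python
# Six fixed team members cycle through the six AHA team roles, so no
# student holds the team-leader role in more than one simulation.
ROLES = ["team leader", "airway", "compressions",
         "IV/medications", "monitor/defibrillator", "recorder"]

def assign_roles(members, simulation_index):
    """Shift each member's role by one position per successive simulation."""
    n = len(ROLES)
    return {member: ROLES[(i + simulation_index) % n]
            for i, member in enumerate(members)}

team = ["S1", "S2", "S3", "S4", "S5", "S6"]
# Collect who leads each of the three simulations (baseline, exercise, test).
leaders = [member
           for sim in range(3)
           for member, role in assign_roles(team, sim).items()
           if role == "team leader"]
```

With a cyclic shift, three simulations always produce three distinct leaders, which satisfies the rotation goal stated above.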
A 2-hour rater-training workshop was conducted to help two other AHA-certified ACLS instructors, who served as the assessors, understand the checklist evaluation criteria and to minimize gaps in their judgment criteria. For the first 30 minutes, an investigator explained the checklists, followed by a question-and-answer session. Using the two checklists, the raters then watched four 8-minute simulated CPR videos of student teams twice each and evaluated them independently. The four training videos were selected from previous recordings of student teams resuscitating a simulated cardiac arrest and featured a variety of team responses to the scenario. The investigator and the two raters then discussed the ratings for each checklist item and agreed on detailed evaluation criteria.
APPENDIX 2. SIMULATION SCENARIO DEVELOPMENT AND RUNNING
The cardiac arrest scenarios and checklists for critical actions were developed in advance from the megacode cases and testing checklists used in the ACLS course. Each group experienced three cardiac arrest scenarios. The first was a baseline scenario in which a patient admitted with chest pain developed ventricular fibrillation, the rhythm changed to asystole, and the patient then had a return of spontaneous circulation (ROSC). The second was an exercise scenario involving a patient with pulseless electrical activity due to hyperkalemia, followed by asystole and ROSC. The last was a test scenario in which a patient admitted after a sudden collapse had a sequence of cardiac arrest rhythms (asystole, ventricular fibrillation, asystole) followed by ROSC.
All three cardiac arrest scenarios were designed to run for 10 minutes, with the cardiac arrest rhythm changing every minute. During the orientation to the simulation environment, an investigator informed the students that the rhythm would change every minute and established a fiction contract with them to regard each change as occurring every 2 minutes. All scenarios were preprogrammed into the simulators so that they ran identically for each group. One confederate participated in each simulation scenario: a nurse in the baseline scenario and an emergency medical technician in the exercise and test scenarios. When asked, the confederate delivered patient information (e.g., chief complaint, present illness, medical history) based on a standardized script.
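A preprogrammed scenario of this kind can be represented as a timed rhythm schedule, sketched below for the test scenario. The specific minute marks are illustrative assumptions (the text states only that rhythms changed each minute over a 10-minute run), and the time-doubling rule models the fiction contract described above.

```python
# Illustrative schedule for the test scenario: asystole, then ventricular
# fibrillation, then asystole, then ROSC. Minute marks are assumptions.
TEST_SCHEDULE = [          # (real-time minute, rhythm)
    (0, "asystole"),
    (3, "ventricular fibrillation"),
    (6, "asystole"),
    (9, "ROSC"),
]

def rhythm_at(schedule, real_minute):
    """Return the rhythm active at a given real-time minute."""
    active = schedule[0][1]
    for start, rhythm in schedule:
        if real_minute >= start:
            active = rhythm
    return active

def scenario_minute(real_minute):
    # Fiction contract: each real minute is regarded as 2 scenario minutes,
    # so a 10-minute run covers 20 minutes of resuscitation time.
    return real_minute * 2
```

Preprogramming the schedule this way is what lets the scenario "run equally in each group": the rhythm transitions are driven by the clock, not by instructor discretion.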
Education; Cardiac arrest; Patient simulation; Advanced cardiac life support; Patient care team
Supplemental Digital Content
© 2017 Society for Simulation in Healthcare