A key question in simulation is how to design courses to maximize learning within time and resource limits.1–3 For cardiopulmonary resuscitation skills, certain practices show strong evidence of educational effectiveness, such as those outlined in the recent American Heart Association Resuscitation Education Science Statement.4 These include 2 practices of interest in the present study: use of deliberate practice toward mastery standards and high-quality feedback and debriefing. Because training time is limited, one must balance maximizing hands-on practice opportunities against allowing time for debriefing and reflection. Some research suggests maximizing hands-on practice—even at the expense of reflective debriefing—is preferable.5–7 Conversely, there is wide appreciation for the value of reflective, learner-centered debriefing in simulation.8–11
It seems reasonable that a course's learning objectives might inform an optimal balance. For example, the PEARLS framework12 proposes that for “technical/cognitive” learning objectives, brief directive feedback and coaching are suitable, permitting more hands-on practice. Conversely, for teamwork and communication, those authors suggest that time for more elaborate “focused facilitation” during lengthier debriefings is often preferable. However, in a professional activity such as resuscitation, performance elements may not fit cleanly into such categories. For instance, although “chest compression” may seem to be a purely technical/cognitive task, it also requires team situation monitoring, communication, and backup. Similarly, because resuscitation guidelines are relatively prescriptive, leadership during resuscitation may be characterized by the performance of certain prescribed behaviors (eg, assigning particular roles and using scripted action-linked phrases to guide performance),7 rather than manifesting in a wide variety of leadership behaviors. As such, perhaps resuscitation leadership could be “drilled” much as if it were a technical/cognitive skill.
Given these issues, our team wondered how maximal deliberate practice of component skills would compare with repetitions of a full cardiopulmonary arrest, followed by debriefing and reflection. This question suggested 2 coherent conditions to be directly compared. We hypothesized that “drill”-style training, maximizing hands-on practice with direct coaching, would best improve taskwork (eg, compression rate and depth), whereas “scrimmage”-style practice, emphasizing full rehearsals of resuscitations with reflective debriefing, would best improve teamwork (eg, role assignment and clarity).
This study was a randomized comparison trial with pretest and posttest. One hundred thirty-one first-year residents from 19 specialties in a preresidency “boot camp” consented to participate in the University of Kansas Medical Center Institutional Review Board–approved study. All participants held current American Heart Association Basic Life Support certification; approximately 96% held Advanced Cardiac Life Support certification. Twenty-six teams of 5 to 6 learners were formed and then randomly assigned to “drill” or “scrimmage” training conditions for the 2-hour course.
Learners were welcomed with introductions, a discussion of psychological safety in simulation,13 and a scripted tour of the simulation environment. Each group then completed a pretest scenario in which the simulated patient promptly deteriorated into cardiac arrest with ventricular tachycardia. One learner was randomly assigned to be the first to enter the room, present at arrest. When help was called for, 4 to 5 additional learners plus an embedded simulation person acting as a nurse joined for the 5- to 7-minute scenario. The embedded simulation person followed a script to ensure minimal and consistent impact on team performance and was primarily present to ensure safety. At the end of the training, all teams completed a posttest resuscitation scenario with a slightly different case stem, in which the cardiac rhythm was ventricular fibrillation followed (if shocked) by pulseless electrical activity.
The learning objectives for the 2-hour course were to prepare interns to provide high-quality cardiopulmonary resuscitation (CPR), timely defibrillation, and to work as a coordinated initial response team during the first 5 minutes of an in-hospital cardiac arrest (IHCA). All teams had 75 minutes to train between pretest and posttest.
“Drill” condition teams debriefed after the pretest and then rotated through 3 approximately 25-minute stations representing chronological resuscitation “phases.” Each of the 3 stations aimed to maximize deliberate practice opportunities for individual and team skills needed for IHCA response: (1) first response, involving “assess, distress, compress” and initial role assignment, choreography, and communication7,14,15; (2) compression quality, practicing 2 minutes of 30:2 CPR with objective compression data and coaching on rate, depth, recoil, and continuity, plus coordination of step stool and backboard use; and (3) defibrillation, practicing preparing the defibrillator, coordinating team action (eg, assigning pulse check and next compressor), determining shockability, and delivering a shock. Learners rotated to ensure equivalent practice time. Total hands-on practice time was approximately 60 minutes, with the remainder spent on orientation/demonstrations, coaching, and transitions.
“Scrimmage” condition teams debriefed after the pretest and then completed 3 additional full scenarios, each with a unique case stem and an assortment of shockable and nonshockable rhythms. After each approximately 5-minute scenario, teams spent approximately 15 minutes debriefing, structured according to the principles of debriefing with good judgment, which aims to close high-priority performance gaps by identifying and addressing the “root cause” of observed individual and team actions.10,11 Each debrief was led by 2 facilitators, at least one of whom had completed advanced training in debriefing and had 2 or more years of experience facilitating debriefs, and all of whom attended a course-specific debrief training workshop. Total hands-on practice time for learners was approximately 18 minutes, with 50 minutes of reflective debriefing and 7 minutes of transitions.
Video recordings of team pretest and posttest performances were coded for key observable behaviors (Appendix 1) applying the event-based approach to training (EBAT) assessment development process.16,17 As with other EBAT assessments, the set of items represents not a comprehensive or global set of general concepts (eg, “communication”) but rather an inventory of specific behaviors that are clinically important based on current resuscitation science and international guidelines,18–21 deliberately triggered by cues within the simulation scenarios, and aligned with the learning objectives of our course. As such, the assessment items are specific to resuscitation and to our context; eg, they emphasize the first critical minutes rather than the entirety of a prolonged code response. The criterion standard inventory of behaviors for the team-based care of patients in cardiopulmonary arrest at our institution was developed over a period of 3 years by approximately 15 senior members of the Health System Code Blue Response Team. These physicians, nurses, and respiratory therapists gathered in the simulation center to develop a specific “script” for coordinated management of the first 5 minutes of cardiopulmonary arrest. Once approved, this served as the foundation for curriculum design and assessment in the health system–wide resuscitation courses. For this course adaptation, designed to train residents very early in their tenure, 3 members of the broader team (a pulmonary critical care physician [E.D.], a critical care nurse [M.B.], and an industrial and organizational psychologist with extensive psychometric training [M.L.]) selected the behaviors for observation from the large set already developed and added detail to support trained observers in objective observation of behavior, thus minimizing the need for subjective assessment and inference. Coders were not blind to condition but were blind to study hypotheses. Coders worked collaboratively to ensure accurate coding.
We secured a Zoll Stat Padz accelerometer beneath a Laerdal SimMan 3G simulator's skin, anterior to the sternum, connected to a hidden Zoll R-Series defibrillator to log compression data. We used per-compression data to derive the following 3 statistics: percentage of compressions where rate was between 100 and 120 compressions per minute (“percent correct rate”), percentage of compressions with depth greater than 2 in (“percent correct depth”), and “compression fraction,” defined as (elapsed time since start of compressions − sum of compression pauses >1 second) / (elapsed time since start of compressions). From these, we computed an overall “compression quality” score, equal to the average of percent correct rate and depth, multiplied by compression fraction.
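As a concrete illustration, the scoring just described can be sketched as follows. This is a minimal sketch under stated assumptions: the function name, field units, and list-based inputs are illustrative, not the actual Zoll per-compression export format.

```python
def compression_quality(rates_cpm, depths_in, elapsed_s, pauses_s):
    """Score one CPR bout from per-compression data.

    rates_cpm: per-compression rates (compressions per minute)
    depths_in: per-compression depths (inches)
    elapsed_s: elapsed time since start of compressions (seconds)
    pauses_s:  durations of all compression pauses (seconds)
    """
    n = len(rates_cpm)
    pct_rate = sum(100 <= r <= 120 for r in rates_cpm) / n   # percent correct rate
    pct_depth = sum(d > 2.0 for d in depths_in) / n          # percent correct depth
    long_pauses = sum(p for p in pauses_s if p > 1.0)        # only pauses >1 s count
    fraction = (elapsed_s - long_pauses) / elapsed_s         # compression fraction
    # Overall quality: mean of rate and depth percentages, scaled by fraction
    return ((pct_rate + pct_depth) / 2) * fraction
```

For example, a bout with half its compressions at a correct rate, three-quarters at correct depth, and a 2-second pause within 10 seconds of compressions would score (0.5 + 0.75)/2 × 0.8 = 0.5.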
For sets of individual items that naturally cluster (grouped under a single numeral in Appendix 1), we computed composite scores as the sum of those items divided by the number of items. For instance, the composite “Airway Management” is defined as the proportion of correct performance on “Held BVM (bag valve mask) correctly,” “Connected BVM to oxygen at 15 L per minute,” and “Coordinated breaths with compressor.” These composite measures should be thought of as formative rather than reflective constructs22; that is, unlike imagining a construct such as “leadership” and then trying to write items that reflect that construct, we defined the construct by its constituent items, meaning that those composite score labels should not be interpreted too broadly.
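Concretely, composite scoring reduces to the proportion of constituent items performed. In this sketch, the 0/1 item coding and variable names are illustrative; item labels paraphrase Appendix 1.

```python
# Hypothetical example: each coded behavior is scored 0 (absent) or 1 (observed);
# the composite is the mean of its constituent items.
airway_items = {
    "held_bvm_correctly": 1,
    "connected_bvm_to_o2_at_15_lpm": 1,
    "coordinated_breaths_with_compressor": 0,
}
airway_composite = sum(airway_items.values()) / len(airway_items)  # proportion correct
```

Here 2 of 3 items were performed, so the “Airway Management” composite would be approximately 0.67.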
For each behavior, we first statistically tested for differences in performance at pretest versus posttest, selecting tests according to the data distribution. If a significant difference was found, we then statistically tested for an interaction effect, ie, a difference in improvement between the “drill” versus “scrimmage” experimental conditions. Consistent with recent guidance,23 this study compares active interventions rather than using an educationally impoverished control condition; as such, we expect lower effect sizes for the comparison. Furthermore, statistical testing should balance the risks of false positives and false negatives, particularly for early studies in a new line of research.24 Therefore, we set alpha (ie, the nominal false positive rate) at 0.05 for each test, ie, we did not apply a correction for multiple comparisons.
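The first stage of this two-step analysis can be sketched in pure Python. The data below are illustrative only; the study's actual analyses used paired t tests, McNemar tests, ANCOVA, or logistic regression as appropriate to each measure, and the second stage (the ANCOVA of posttest on pretest and condition) is noted only in the docstring.

```python
import math
from statistics import mean, stdev


def paired_t(pre, post):
    """Paired-samples t statistic for pretest vs posttest team scores.

    A significant result here triggers the second stage: a model of
    posttest score on pretest score and condition, testing whether
    improvement differed between the drill and scrimmage conditions.
    """
    diffs = [b - a for a, b in zip(pre, post)]
    # t = mean difference / standard error of the differences
    return mean(diffs) / (stdev(diffs) / math.sqrt(len(diffs)))
```

For instance, paired pretest scores [1, 2, 3] and posttest scores [2, 4, 6] yield differences [1, 2, 3] and a t statistic of 2√3 ≈ 3.46 on 2 degrees of freedom.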
First Response Postarrest
Nearly all teams performed initial pulse checks at both pretest and posttest, with no discernible difference by time (89% and 96%, respectively; related-samples McNemar test, P = 0.63). However, far fewer teams exposed the wrist where a DNAR (Do Not Attempt Resuscitation) wristband would be. “Drill” condition teams improved discernibly more than did “scrimmage” teams, in a logistic regression predicting posttest by both pretest and condition (odds ratio = 14.75, P = 0.03; drill: 23% pretest, 58% posttest; scrimmage: 8% pretest, 9% posttest).
Time from arrest to start of compressions improved from pretest to posttest (related-samples t(22) = 7.63, P < 0.001, from mean 37 to 11 seconds). This improvement was not discernibly different by condition [analysis of covariance (ANCOVA), F(1,20) = 0.20, P = 0.66].
From pretest to posttest, average team compression quality scores improved discernibly (paired-sample t(21) = 3.08, P = 0.01) (Fig. 1A). Of the 3 compression parameters, only percent correct depth improved discernibly (P = 0.01; 62%–81%); no discernible differences were found for compression rate (P = 0.54; 53%–57%) or compression fraction (P = 0.10; 71%–75%).
An ANCOVA showed no discernible effect of condition on posttest compression quality scores, controlling for pretest scores (F(1,19) = 1.24, P = 0.28), nor was the effect of condition significant for any of the 3 compression parameters (compression rate: F(1,19) = 1.24, P = 0.28; compression depth: F(1,19) = 1.69, P = 0.21; compression fraction: F(1,19) = 2.59, P = 0.12).
Use of Compression Adjuncts
The percentage of 3 key compression adjuncts used (lowered bed rail, backboard, and stepstool) improved from pretest to posttest (paired-sample t(22) = 3.89, P < 0.01), from 41% to 71%. An ANCOVA showed a discernible effect of condition on posttest use of compression adjuncts, controlling for pretest, favoring the “drill” condition (F(1,20) = 0.66, P = 0.03; estimated marginal means = 75% drill, 67% scrimmage).
Minutes elapsed from arrest to first defibrillation improved markedly from pretest to posttest (t(22) = 5.14, P < 0.001) (Fig. 1B); pretest and posttest means were 4′10″ and 1′57″, respectively. Analysis of covariance showed no discernible effect of condition on this improvement (F(1,20) = 0.20, P = 0.66).
Composite airway management scores discernibly improved from pretest to posttest (paired-samples t(22) = 2.11, P = 0.047). Teams' mean percentage of performance of the 3 airway performance criteria (proper hand position, proper connection to oxygen, and coordination with compressions) increased from 79% to 95%. Analysis of covariance showed no discernible effect of condition on this improvement (F(1,20) = 0.34, P = 0.57).
Behaviors Establishing Leadership
No discernible improvement was seen from pretest to posttest in overt verbal declarations of leadership (related-samples McNemar test, P = 0.11). A leader was declared for 12% of teams at pretest and 39% at posttest. Logistic regression showed no discernible effect of condition on posttest performance, controlling for pretest (P = 0.36). However, leaders did improve in staying at the foot of the bed (paired-samples t(22) = 3.59, P < 0.001) (Fig. 1C); pretest and posttest means were 1′58″ and 3′47″, respectively. Condition did not discernibly affect this improvement (F(1,22) = 1.44, P = 0.24). Leaders also improved in avoiding taskwork (paired-samples t(22) = −4.56, P < 0.001); pretest and posttest means were 1′36″ and 0′13″ of taskwork, respectively. Condition did not discernibly affect this improvement (F(1,20) = 0.25, P = 0.62).
Role Assignment and Clarification by Leader
Role leadership scores improved discernibly from pretest to posttest (t(22) = 4.17, P < 0.001) (Fig. 1D), from an average of 47% of the 4 key roles (ie, compressions, defibrillation, airway, and medications) assigned and/or clarified at pretest to 70% at posttest. This improvement was not discernibly different by condition (F(1,20) = 0.33, P = 0.57).
Rhythm Check Coordination by Leader
Rhythm check coordination (reflecting explicit calls for compressor rotation, pulse check, and fast resumption of compressions) did not improve discernibly from pretest to posttest (paired-samples t(22) = 1.63, P = 0.12), with average scores of 33% and 50%, respectively. Condition did not discernibly affect this improvement (F(1,20) = 2.14, P = 0.16).
This study compared repetitive practice of contextualized skills with coaching (“drill”) versus repeated rehearsal of team resuscitation scenarios with debriefing and reflection (“scrimmage”). Both led to marked improvements in nearly all key performance parameters. Only attention to DNAR status and use of compression adjuncts improved discernibly more under “drill” conditions.
The striking improvements seen overall mirror previous reports.6,7,14 Our study builds upon previous work by directly comparing instructional strategies, each incorporating evidence-based design features paired with a complementary feedback and debriefing strategy. The “drill” condition was designed to maximize time for repetitive practice, and it incorporated several key principles of deliberate practice generally and rapid cycle deliberate practice specifically,25,26 including baseline performance assessment, explicit “criterion standard choreography” of a contextually relevant IHCA,14,15 and real-time objective feedback and direct coaching.5 It was surprising that the improvements for notionally “technical” tasks in the “drill” condition were not discernibly greater than those in the “scrimmage” condition. For example, during the “drill”-style training, each participant practiced 2 minutes of CPR with coaching related to real-time data on rate, depth, and recoil. On the other hand, during the “scrimmage” condition, each member of the 5- to 6-person team rotated through the role of compressor, likely several times. However, classically structured deliberate practice with objective CPR feedback was not provided. Furthermore, although the importance of high-quality CPR was a standard debrief objective, it was discussed amid a broader conversation that afforded time to explore learner frames and reflections across many themes and topics.
Equally unexpected were the similar improvements in the teamwork variables. We predicted teams that repeatedly rehearsed full resuscitations together would realize greater improvements in teamwork, especially because teamwork and communication were prominently covered in the reflective debriefings. When “criterion standard” resuscitation teamwork behaviors were objectively defined and observed, repetitive practice of key team behaviors in a contextualized “drill” station with directive coaching was similarly likely to improve performance. Possible explanations for the relatively similar learning outcomes across the rather disparate instructional strategies include vicarious learning of specific skills by individuals participating in the 5 full resuscitations in the “scrimmage” condition. Alternatively, perhaps individuals in the “drill” condition complemented the repetitive practice and coaching by performing their own internal reflection and debriefing.
This course targeted new medical school graduates, likely on the early, “steep” portion of the learning curve in coordinating a cardiopulmonary resuscitation. It may be that the impact of structured practice opportunities designed upon sound principles is strong enough that the strengths and weaknesses of particular design features are more difficult to detect, at least when assessed immediately after training and at the team level. Future study may reveal whether the durability of learning or the consistency of learning across individuals varies across “scrimmage” versus “drill” conditions.
Strengths of this study include the direct comparison of 2 strong instructional interventions (rather than comparison against an educationally impoverished control or pretest), measurement of a broad range of observable behaviors, pretraining and posttraining performance assessment with a team sized to cover all main resuscitation roles, and rigor in experimental design. In comparison trials, effect sizes are likely to be lower than in treatment-control studies, so statistical power is of course a concern; we can only say that, for a reasonably large data collection, few large effects were observed (ie, we cannot say that there are no effects).24
In this study, dramatically different allocations of time in hands-on practice versus reflection led to similar learning outcomes in taskwork and teamwork aspects of cardiopulmonary resuscitation. As the science of simulation pursues more comparative design effectiveness research, we will have an opportunity to begin to untangle multiple pathways to learning and the complex interplay between individual and team performance.
The authors thank the members of the Code Blue Committee and Code Blue Response Team at The University of Kansas Health System for years of work in developing an institutional best practice for care during cardiopulmonary arrest. The authors also thank the course faculty and the simulation education team for their role in planning and implementing the course this research is based on. The authors thank Amy Follmer and the simulation delivery team as well as Melissa Nickel and the administrative team at ZIEL for their tremendous work in supporting the delivery of the course. The authors also thank Dr. Betsy Hunt and her team for her mentoring and guidance as they developed this program of education and research. The authors thank Drs. David and Mary Zamierowski for their support and guidance.
1. Cook DA, Hatala R, Brydges R, Zendejas B, Szostek JH, Wang AT, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA
2. Cook DA. How much evidence does it take? A cumulative meta-analysis of outcomes of simulation-based education. Med Educ
3. Issenberg SB, Ringsted C, Østergaard D, Dieckmann P. Setting a research agenda for simulation-based healthcare education: a synthesis of the outcome from an Utstein style meeting. Simul Healthc
4. Cheng A, Nadkarni VM, Mancini MB, Hunt EA, Sinz EH, Merchant RM, et al. Resuscitation education science: educational strategies to improve outcomes from cardiac arrest: a scientific statement from the American Heart Association. Circulation
5. Eppich WJ, Hunt EA, Duval-Arnould JM, Siddall VJ, Cheng A. Structuring feedback and debriefing to achieve mastery learning goals. Acad Med
6. Wayne DB, Butter J, Siddall VJ, Fudala MJ, Wade LD, Feinglass J, et al. Mastery learning of advanced cardiac life support skills by internal medicine residents using simulation technology and deliberate practice. J Gen Intern Med
7. Hunt EA, Duval-Arnould JM, Nelson-McMillan KL, Bradshaw JH, Diener-West M, Perretta JS, et al. Pediatric resident resuscitation skills improve after “Rapid Cycle Deliberate Practice” training. Resuscitation
8. Cheng A, Morse KJ, Rudolph J, Arab AA, Runnacles J, Eppich W. Learner-centered debriefing for health care simulation education: lessons for faculty development. Simul Healthc
9. Salas E, Klein C, King H, Salisbury M, Augenstein JS, Birnbach DJ, et al. Debriefing medical teams: 12 evidence-based best practices and tips. Jt Comm J Qual Patient Saf
10. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There's no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc
11. Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med
12. Eppich W, Cheng A. Promoting Excellence and Reflective Learning in Simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc
13. Rudolph JW, Raemer DB, Simon R. Establishing a safe container for learning in simulation: the role of the presimulation briefing. Simul Healthc
14. Hunt EA, Duval-Arnould JM, Chime NO, Jones K, Rosen M, Hollingsworth M, et al. Integration of in-hospital cardiac arrest contextual curriculum into a basic life support course: a randomized, controlled simulation study. Resuscitation
15. Hunt EA, Cruz-Eng H, Bradshaw JH, Hodge M, Bortner T, Mulvey CL, et al. A novel approach to life support training using “action-linked phrases”. Resuscitation
16. Fowlkes J, Dwyer DJ, Oser RL, Salas E. Event-based approach to training (EBAT). Int J Aviat Psychol
17. Rosen MA, Salas E, Wu TS, Silvestri S, Lazzara EH, Lyons R, et al. Promoting teamwork: an event-based approach to simulation-based teamwork training for emergency medicine residents. Acad Emerg Med
18. Link MS, Berkow LC, Kudenchuk PJ, Halperin HR, Hess EP, Moitra VK, et al. Part 7: Adult Advanced Cardiovascular Life Support: 2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation 2015;132(18 Suppl 2):S444–S464.
19. Kleinman ME, Brennan EE, Goldberger ZD, Swor RA, Terry M, Bobrow BJ, et al. Part 5: Adult Basic Life Support and Cardiopulmonary Resuscitation Quality: 2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation 2015;132(18 Suppl 2):S414–S435.
20. Bhanji F, Donoghue AJ, Wolff MS, Flores GE, Halamek LP, Berman JM, et al. Part 14: Education: 2015 American Heart Association Guidelines Update for Cardiopulmonary Resuscitation and Emergency Cardiovascular Care. Circulation 2015;132(18 Suppl 2):S561–S573.
21. Meaney PA, Bobrow BJ, Mancini ME, Christenson J, de Caen AR, Bhanji F, et al. Cardiopulmonary resuscitation quality: [corrected] improving cardiac resuscitation outcomes both inside and outside the hospital: a consensus statement from the American Heart Association. Circulation
22. Edwards JR, Bagozzi RP. On the nature and direction of relationships between constructs and measures. Psychol Methods
23. Cook DA. One drop at a time: research to advance the science of simulation. Simul Healthc
24. Lineberry M, Walwanis M, Reni J. Comparative research on training simulators in emergency medicine: a methodological review. Simul Healthc
25. McGaghie WC. Research opportunities in simulation-based medical education using deliberate practice. Acad Emerg Med
26. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev