Extracorporeal membrane oxygenation (ECMO) is a high-risk, complex therapy used for pulmonary and/or cardiac failure in patients with an expected mortality of 90%.1 Although we are a large children's hospital, our ECMO utilization averages only 20 cases annually. Thus, there are few opportunities for our ECMO circuit providers to develop the teamwork skills and expertise needed to mitigate the risks associated with ECMO.
Training for ECMO has relied on didactic education, hands-on water drills, and animal laboratory testing.2,3 These methods may overemphasize cognitive skills, underemphasize technical skills, and completely ignore behavioral skills.3 They are also static, lacking the time pressure, typical alarms, and sense of urgency inherent to actual critical ECMO scenarios.3
Simulation-based training provides an opportunity for staff to develop and maintain technical proficiency in high-risk, infrequent events without fear of harming patients. In addition, it provides opportunities for interdisciplinary training and improved communication among team members.4 A previous investigation evaluating ECMO simulation assessed participant satisfaction with training and demonstrated the feasibility of such training.5 A relatively small number of subjects indicated that simulation-based ECMO training was more realistic and challenging than traditional training. Results also demonstrated some improvement in technical proficiencies in the simulation laboratory after training but lacked robust assessment or follow-up.
We hypothesized that multidisciplinary, simulation-based training would improve technical proficiencies and teamwork behaviors, leading to improvements during the ECMO cannulation process. Our objective was to assess whether simulation would improve technical and nontechnical proficiencies of our circuit providers in managing emergencies of a simulated ECMO patient and allow transfer of skills learned within the simulated setting to the clinical environment.
Simulation-based training consisted of high-risk ECMO scenarios performed on an infant simulator using a fully functional ECMO circuit. Subjects performed scenarios in a simulation laboratory that closely resembled their clinical environment. Technical and nontechnical skills were assessed, as well as translation of skills into the clinical environment surrounding the cannulation process.
Subjects were registered nurses and respiratory therapists trained to manage the ECMO system. This protocol was approved by our Institutional Review Board with a waiver of informed consent. Subjects were asked to sign video consents and confidentiality agreements. Participation in the research aspects of the course was voluntary, but training was mandatory as part of ongoing skill maintenance.
A mannequin, the ECMO system, and one room within our simulation center were adapted to mimic the clinical setting. Modifications were made to a neonatal mannequin using polyvinyl chloride (PVC) tubing plus arterial and venous cannulas to allow for circuit flow.3 Standard Laerdal SimBaby software was used to project the patient's vital signs. Screenshots of circuit blood gas and mixed venous saturation were projected to replicate Terumo CDI Blood Parameter Monitoring System readings.
Modifications made to the ECMO system allowed simulation of poor venous return, arterial air, tubing rupture, and pump failure. Tubing added to the circuit allowed for manipulation of fluid and introduction of arterial air remotely. For emergent cannulation, the Laerdal SimBaby simulator was used, and teams cared for a neonate in septic shock while the circuit was being prepared and cannula placement was simulated.
Each session was 4 hours in length and consisted of four scenarios with three or four subjects rotating as lead provider, back-up provider, patient registered nurse (RN), and code assist. The order of the four scenarios within each session was determined by convenience. Poor venous return, emergent cannulation, and arterial air were conducted during all sessions. Tubing rupture and pump failure, which occur less frequently in the clinical setting, were each conducted during half the sessions. Scenarios started with report to the patient RN and lead provider. Additional help could be called to the bedside as needed. Phone and bedside consultation occurred between subjects and a designated physician who was not a study subject. This physician was consistent throughout enrollment and acted as the ECMO physician. Simulation personnel augmented care teams as code responders.
Training was divided into quarters, defined as a group of six consecutive simulation sessions. During initial training (quarter 1), each subject participated in one session. During maintenance training (quarters 2–4), subjects participated in three additional sessions on average once per quarter. Team composition changed for each session based on subject availability.
To provide clinical expertise during simulations and debriefings, several providers were trained as facilitators before the study. This four-member group completed a 1-day simulation-based debriefing course, followed by ongoing experience cofacilitating other courses before study enrollment. During ECMO training, facilitators co-led debriefings with simulation center educators. Video-assisted debriefings, which occurred immediately after scenario completion, used a standardized format to elicit communication, teamwork, and safety discussions. A key component of debriefings was the identification of latent safety threats (LSTs), which have been defined as system-based threats to patient safety that can materialize at any time and are previously unrecognized by healthcare providers, unit directors, or hospital administration.6 Debriefings also reviewed management decisions, problem solving, and technical proficiencies.
Simulation Laboratory Technical Measures
During scenario development, specific trigger events and expected interventions were predetermined. Technical team-level measures were (1) time intervals and (2) percentage of correct actions performed. For pump failure, arterial air, and raceway rupture, the interval measured was the time the patient was off ECMO support. For poor venous return, the interval was time from circuit alarm to re-establishment of full flow. For emergent cannulation, the interval was time from notification until the circuit was crystalloid primed and ready to connect to cannulas. Correct actions were defined as recognition of the need for an action and performance according to institutional protocol. During emergent cannulation, compliance with our institution's ECMO initiation checklist (initiation checklist) was one of the actions assessed. Time intervals and percentage of correct actions were assessed by video review.
Simulation Laboratory Nontechnical Measures
Individual-level measures were provider knowledge of and attitudes toward patient safety. Team-level measures were teamwork performance during the simulations and identification of LSTs during the simulations and debriefings. Knowledge of patient safety principles was assessed through a 20-question test previously developed at our institution and used in prior simulation-based training programs. The test foci were epidemiology of medical error, specific techniques to improve team communication, and known obstacles to teamwork. Subjects completed this test before (baseline) and after (first reassessment) their first training session and after each subsequent session (second, third, and final reassessments). Each test contained the same questions; however, question order varied for each reassessment. To maintain integrity of the instrument, tests were not scored until enrollment was completed. This ensured that subjects were neither given their scores after any test nor made aware of their incorrect answers. Attitudes toward safety, teamwork, and communication were assessed using the Safety Attitude Questionnaire (SAQ).7 Subjects completed the SAQ before (baseline) and after their initial training session (first reassessment) and after their final session (final reassessment). The SAQ consists of two subscales: teamwork climate, measured with 14 items, and safety climate, measured with 13 items. Questions use a 5-point scale ranging from 1 = "disagree strongly" to 5 = "agree strongly." Negatively worded questions were reverse coded such that a scale value of 5 denoted the highest level of teamwork. Ratings were averaged across questions, resulting in an average teamwork score that ranged from 1 to 5.
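The reverse-coding and averaging described above can be sketched as follows. This is a minimal illustration of the scoring arithmetic, not the SAQ instrument itself; the item identifiers and the set of reverse-coded items are hypothetical.

```python
def score_saq(responses, reverse_items):
    """Average an SAQ subscale on its 1-5 Likert scale.

    responses: dict mapping item id -> rating (1 = disagree strongly,
    5 = agree strongly); reverse_items: ids of negatively worded items.
    Returns the mean item score, where 5 denotes the most favorable attitude.
    """
    scored = []
    for item, rating in responses.items():
        assert 1 <= rating <= 5
        # Reverse-code negatively worded items so 5 is always "best": 1<->5, 2<->4
        scored.append(6 - rating if item in reverse_items else rating)
    return sum(scored) / len(scored)

# Hypothetical four-item subscale with two reverse-coded items
ratings = {"t1": 5, "t2": 4, "t3": 2, "t4": 1}
print(score_saq(ratings, reverse_items={"t3", "t4"}))  # → 4.5
```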
Team performance was evaluated using the Mayo High Performance Teamwork Scale (MHPTS).8 This validated scale assesses teamwork and communication in simulation-based training and was designed for subjects to retrospectively score team performance.8 The MHPTS consists of 16 behaviors focused on crew resource management training, each eligible for 0, 1, or 2 points. In this study, video reviewers scored teamwork performance. Each reviewer was trained through a three-step process: detailed review of the original publication, a didactic session reviewing each scored behavior, and group video review and discussion of scoring. Two reviewers scored each scenario and were blinded to each other's results. Scenario assignments to reviewers were randomized within each quarter.
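The team-level score used in the analysis (the mean of the two blinded reviewers' MHPTS totals, as described in the statistics section) can be sketched as follows; the item ratings below are hypothetical.

```python
def mhpts_total(item_scores):
    """Total MHPTS score: 16 behaviors, each rated 0, 1, or 2 (range 0-32)."""
    assert len(item_scores) == 16 and all(s in (0, 1, 2) for s in item_scores)
    return sum(item_scores)

def team_score(reviewer_a, reviewer_b):
    """Team performance for one scenario: mean of the two reviewers' totals."""
    return (mhpts_total(reviewer_a) + mhpts_total(reviewer_b)) / 2

# Hypothetical ratings from two blinded reviewers of the same scenario
a = [2] * 10 + [1] * 6
b = [2] * 8 + [1] * 8
print(team_score(a, b))  # → 25.0
```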
LSTs were classified as knowledge deficits or threats related to care environment (resources, medications, equipment, or personnel). Knowledge deficits were assessed at the team level and subclassified as cognitive deficits, omission of necessary actions, inappropriate actions, or incorrectly performed procedures.
The primary clinical measure was time from ECMO circuit blood available at bedside until circuit was primed with blood and ready to connect to cannulas (time to circuit ready). Time intervals were obtained from standardized documentation recorded during cannulations. The second clinical measure was compliance with the initiation checklist, a tool maintained in the patient's medical record that includes 24 key steps in the process for patients undergoing cannulation. Compliance was defined as >90% of steps performed. In addition, occurrence and frequency of serious safety events were tracked.
Translation of simulation-based education to the clinical setting was assessed by different measures than in the simulation laboratory for several reasons. Clinically, our standard is to institute blood-primed ECMO; however, this requires the presence of a primer (provider with specialized circuit preparation training). As primers are not always in-house, there is a potential for emergent need of crystalloid-primed ECMO. Therefore, the goal in the laboratory was to train subjects to emergently institute crystalloid-primed ECMO in case the primer is not available. Second, videotaping of cannulations and ECMO emergencies does not occur during clinical care. Third, study personnel were not available at cannulations, prohibiting use of the MHPTS and facilitated debriefings. To assess teamwork, communication, and LSTs in the clinical setting, all healthcare providers involved were surveyed anonymously within 24 hours of cannulation using a seven-item online survey developed for this study.
Descriptive statistics were generated for each outcome variable. Three general linear models were developed to test for differences over time for team-level measures. We used the mean of the two reviewers' MHPTS scores as the team performance for each scenario. The dependent variables were the MHPTS score, time intervals, and percentage of correct actions performed. Independent variables included an indicator variable denoting quarter, which was used to detect significant changes over time. We controlled for scenario, whether the team had a trained facilitator as an active member, and the order in which the scenario was conducted within the session by including these as independent variables in the model. Finally, to determine whether the type of scenario and the scenario order during the session interacted with one another, we included scenario type by scenario order as an interaction term in the model. Coding of data was used to classify identified LSTs.
Mixed models were developed to test for differences over time for individual-level measures to account for the repeated measures within clinicians. An autoregressive covariance structure for the repeated measures model was chosen as it resulted in the smallest value for Akaike's criterion. When applicable, contrast statements were used to test for significant differences in team and individual measures to determine between which periods significant changes occurred. For subjects lost to attrition, team-level data were included in the analysis; however, provider-level data were excluded.
The data from the two clinical measures, time to circuit readiness and percent compliance with the initiation checklist, were summarized and analyzed using graphical methods (ie, run charts) and t test, respectively. Survey responses were collected electronically, and results were generated as descriptive frequencies.
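The pre/post comparison of checklist compliance can be illustrated with a two-sample t statistic. Below is a stdlib-only sketch using Welch's formulation (unequal variances); the step counts are invented for illustration and are not the study data.

```python
from math import sqrt
from statistics import mean, stdev

def welch_t(x, y):
    """Welch's two-sample t statistic (unequal variances assumed).

    Uses the sample standard deviation (n - 1 denominator) for each group.
    """
    vx, vy = stdev(x) ** 2, stdev(y) ** 2
    return (mean(x) - mean(y)) / sqrt(vx / len(x) + vy / len(y))

# Hypothetical checklist steps completed: pretraining vs. during training
pre = [17, 16, 18, 18]
post = [23, 24, 22, 24]
print(round(welch_t(pre, post), 2))  # → -8.86
```

In practice one would obtain the P value from the t distribution with Welch-Satterthwaite degrees of freedom (e.g., via `scipy.stats.ttest_ind(..., equal_var=False)`).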
Ninety-six simulations (24 sessions) were performed from May 2008 to August 2009. Twenty-three subjects were enrolled. Four subjects were lost to attrition, thus complete data reflect 19 subjects (Table 1). Of these 19, 4 also served as facilitators during the investigation.
Simulation Laboratory Technical Outcomes
Within quarter 1, a mean of 86% (SD 12%) correct actions were performed across all scenarios. Over the following three quarters, this mean was 86% (SD 17%), 86% (SD 14%), and 92% (SD 10%), respectively. Controlling for scenario, scenario order, and presence of a facilitator, a trend toward improvement occurred during training but did not reach statistical significance (P = 0.078). For the emergent cannulation scenario, subjects completed a mean of 21.4 (SD 2.16) of 24 possible steps of the initiation checklist in quarter 1. Over the following three quarters, they completed 23.0 (SD 1.68), 22.3 (SD 1.92), and 23.8 (SD 0.72) steps, respectively. No statistically significant improvement in checklist compliance was shown in the simulation laboratory. In addition, no significant improvements were shown for time intervals during the study period.
Simulation Laboratory Nontechnical Outcomes
For the MHPTS, we computed the correlation between reviewers. The resulting Pearson's correlation coefficient was 0.41 (P < 0.001), indicating a moderate level of correlation between reviewers. Frequency distributions of scores by quarter are displayed in Figure 1. Scores improved from baseline (quarter 1) to each subsequent quarter (P = 0.001, 0.001, and <0.001, respectively). Scores were similar from quarter 2 to 3 (P = 0.506) and from quarter 3 to 4 (P = 0.506). In addition, scenario and the scenario-by-order interaction were significant (P = 0.037 and 0.021, respectively). Overall, the average MHPTS score increased with order number, from 16.2 for order 1 (first scenario run during a session) to 17.4 for order 4 (final scenario run during a session). Average MHPTS scores were highest for pump failure (18.71) and emergent cannulation (17.17). Because of the significant interaction effects, however, it is important to consider the change in scores from order 1 to 4 within the context of each scenario. The largest change occurred in the arterial air scenario, which increased from 16.2 (order 1) to 21.5 (order 4).
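The inter-reviewer agreement reported here is Pearson's r computed over the paired scenario scores; a minimal stdlib sketch follows (the paired score lists are hypothetical, not the study data).

```python
from math import sqrt
from statistics import mean

def pearson_r(x, y):
    """Pearson's correlation coefficient between two reviewers' paired scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    return cov / sqrt(sum((a - mx) ** 2 for a in x) *
                      sum((b - my) ** 2 for b in y))

# Hypothetical MHPTS totals for the same five scenarios from two reviewers
reviewer_1 = [24, 26, 20, 28, 22]
reviewer_2 = [22, 27, 21, 26, 24]
print(round(pearson_r(reviewer_1, reviewer_2), 2))
```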
Compared with the baseline mean of 18.1 (SD 1.5) of 20 questions, safety test scores were significantly higher at the third (mean 19.2, SD 1.0, P = 0.010) and final (mean 19, SD 0.9, P = 0.033) reassessments. At baseline, SAQ scores showed a mean of 4.16 (SD 0.31). Scores did not significantly change after one session of training (mean 4.23, SD 0.36, P = 0.204) but were significantly higher compared with baseline at study completion (mean 4.41, SD 0.44, P = 0.001).
Ninety-nine occurrences of LSTs were identified during simulation and debriefing sessions: 69 related to knowledge deficits (Table 2) and 30 related to the care environment (Table 3). Debriefings focused on these outcomes, including team problem solving for etiologies of performance gaps and, if applicable, demonstration of correct performance. When identified, the teams discussed potential solutions and the study team reported both threats and solutions to ECMO leadership. Despite these discussions, 35% of LSTs occurred more than once.
Twenty-six patients underwent cannulation during the study period. Eighty-nine percent of subjects participated in at least one of these cannulations. Time from blood available to circuit ready ranged from 5 to 95 minutes, with a median of 17 minutes, with no improvement over the study period. Subjects complied with the 24-item initiation checklist during 25 (96.2%) cannulations. They completed a mean of 23.23 (SD 1.61) steps, an improvement compared with a pretraining baseline of 17.14 items (P < 0.0001). No ECMO patient experienced a serious safety event.
After cannulations, the survey instrument had a 69% (N = 180) response rate. Eighty-three percent of physicians felt that the circuit was ready in a timely fashion. Briefings and debriefings occurred 81% and 84% of the time, respectively. Physicians and nonphysicians indicated that the briefings were useful (97% and 100%, respectively), as were the debriefings (100% and 94%, respectively).
We used simulation as the strategy for providing frequent, deliberate practice and reinforcement of technical and nontechnical aspects of ECMO care. Our curriculum emphasized effective communication and teamwork in critical situations and pursued the development of personal investment in safety. This investigation was unique in attempting to assess whether simulation positively impacts the ECMO clinical environment.
Benefits of Training
Outcomes that showed improvement during the study were provider safety knowledge, provider attitudes toward safety, teamwork behaviors in the simulation laboratory, and initiation checklist compliance within the clinical environment. In a previous study assessing knowledge and practical skills acquired through simulation, self-reporting scales were used in which subjects rated or described their own performances.9 Other educational models indicated that learning, although varied, tends to decrease with time if reinforcement of skills or knowledge is not continued.10,11 We addressed these issues through objective pre- and posttests and measurement at multiple points throughout enrollment. Subject knowledge regarding safety was high at baseline, similar to previous courses with participants from other high-risk units.12 This likely reflects the environment at our institution, where patient safety has been a focus since 2005 and all employees are required to take two web-based safety modules annually. Our results indicated an improvement after three sessions, which was maintained through the end of the training. Limitations include the lack of external validation of the test and the reuse of the same 20 questions on each test. However, individual questions were based on the best available evidence in the literature, the test had been used in several previous simulation-based courses, and tests were not scored until enrollment was completed. Despite the small sample size and relatively low number of questions, statistical improvement was shown.
The SAQ results reflected subject attitudes regarding teamwork climate and safety climate within their respective clinical environments, thus lack of improvement after one training session was not surprising. However, improvement by the end of enrollment was evident. A strength of simulation is the ability to perform immediate debriefings. Our facilitators, who were invested in the safety aspects of the curriculum, established an atmosphere during debriefings where errors were discussed openly. This was aided by their participation as subjects, as they could freely admit their errors or knowledge gaps during debriefings and relate personal strategies to prevent repetition of these errors. We feel that the combination of improved knowledge and attitudes were important factors in changing teamwork behaviors.
The MHPTS was designed to be sufficiently brief, behavioral, and understandable, allowing it to be used practically by naive subjects in training and other settings to rate key behaviors of high-performance teams.8 Although not exhaustive in describing all possible behaviors that characterize high-performance teams, the MHPTS items provide a representative sample of the range of key teamwork behaviors. In this investigation, there was significant improvement in teamwork during each quarter compared with baseline. In addition, teamwork scores from second to third quarter and third to fourth quarter remained stable. This plateau was possibly due to the rotation of subjects within scenarios and between sessions. Scores peaked in the high 20s, leaving room for improvement. We feel that if the teams were kept intact during training, scores may have continued to improve. However, our rotation best replicated the clinical schedule and used a strength of simulation-based training: forcing teams to communicate and make decisions in a pressurized environment that mirrors the clinical setting. Traditional training methods do not achieve this level of suspension of disbelief, which limits deliberate practice of teamwork principles.
Trained reviewers, not subjects, rated team behaviors after video review of the simulations. Although our reviewers were more knowledgeable about crew resource management principles than typical subjects, we felt that this behavioral tool was applicable to our investigation. Our application of the MHPTS by trained reviewers is novel, and we were able to show a correlation, although moderate, between such reviewers. One limitation is that we did not have subjects apply the MHPTS, as was done by Malec et al.8 We acknowledge that this would have strengthened our study and suggest that future investigations apply the MHPTS with both subjects and reviewers. Malec et al8 also noted that ratings from multiple perspectives might be optimal.
Historically, subjects did not initiate preparation of the ECMO circuit; instead, they waited for a primer to arrive. In the year before enrollment, all subjects received nonsimulation-based training on preparing the circuit for blood prime. In the clinical cannulations during that year, subjects started initial preparation of the circuit using the initiation checklist, and compliance was attained during 57.1% of cannulations (unpublished data). In contrast, during enrollment in simulation-based training, clinical compliance was attained during 96.2% of cannulations. A portion of this improvement may be attributed to adjusting to the checklist in the clinical environment. However, during laboratory training, subjects increased the number of steps completed from baseline through maintenance training, highlighting the utility of the hands-on, pressurized simulation-based environment.
In addition, during debriefings, previously unrecognized errors in the initiation checklist were identified and corrected. Ensuring that the checklist reflects current practice is key to achieving its benefit. Use of checklists has been associated with a decrease in complications and rates of death in adult surgical patients.13 We are not aware of any previous literature surrounding the use of checklists in preparation for ECMO cannulations, although it is unlikely we are the only ECMO center to use a checklist. Clinically, we now see that when ECMO is activated, the subjects start preparing the circuit and have everything ready so that the primer can begin the blood prime on arrival.
LSTs were addressed in debriefings and, when necessary, increased education for the subjects was given. We feel that this was an effective strategy as the majority of the deficits occurred only once. When inappropriate actions continued despite discussion in debriefings, the focus shifted from discussion to hands-on reenactment and deliberate practice. We worked with ECMO leadership to address identified LSTs through ongoing education, modifications of initiation checklist, and clarification of protocols.
Outcomes Without Demonstrable Improvement
Team technical skills (percent of correct actions and time interval to task completion) in the simulation laboratory and time to circuit readiness in the clinical setting did not show improvement. Technical skill expectations were based on protocols designed to promote patient safety. Some teams achieved 100% of correct actions; therefore, the measures were attainable. Role assignments for lead and backup rotated during training, so subjects infrequently led the same scenario each quarter. This rotation may have contributed to the lack of improvement in the laboratory. Our sample sizes were small, making it difficult to demonstrate significant improvement. For correct actions, there was a trend toward improvement (P = 0.078) that may have reached significance with a greater number of simulations. In addition, time intervals within the arterial air, pump failure, and poor venous return scenarios appeared to trend toward improvement.
For emergent cannulation, the increased times during quarter 4 likely reflected a scenario manipulation. We required connecting the crystalloid-primed circuit to the cannulas during this period, forcing the communication piece of subject-to-surgeon handoff of cannulas and potentially interrupting circuit preparation. The manipulation was made to further stress the team and better replicate a high-risk portion of the cannulation process; however, this may have biased our results. The foci of debriefings were multifactorial and included teamwork, communication, and LSTs. This may have hampered improvement in technical skills through subject interpretation of what was most important to practice in the simulation laboratory or by not thoroughly discussing technical errors.
The most commonly omitted action was the removal of sweep gas during the time the simulated patient was off ECMO. Although this is of lesser concern for brief times off the circuit, it can become problematic when the patient remains off for longer periods. Subjects readily and frequently self-identified this missed action during the debriefing process. However, despite two different educational strategies, the problem persisted over the course of the project, accounting for 38% of the knowledge deficits. ECMO leadership is exploring additional strategies to address this deficit, such as a reminder tag attached to the bridge, which will be trialed in simulation before implementation.
While poor venous return is a common occurrence, the other simulated emergencies are rare and unpredictable; thus, we were only able to assess for clinical improvement during cannulations. The data demonstrated no improvement in circuit readiness. One explanation is that improvements in teamwork and safety attitudes from laboratory training led to a safer, more deliberate approach to preparing the circuit that did not shorten preparation time. In addition, the simulation laboratory focused on circuit preparation without performing a blood prime, whereas all clinical cannulations were blood primed. Laboratory training likely omitted important blood-priming steps, possibly preventing improvement in clinical circuit readiness. In recognition of this limitation, subjects now train in the laboratory using both crystalloid-priming and blood-priming scenarios. Another potential explanation is that blood may have arrived before the subject was ready to add it to the circuit, artificially lengthening time to circuit readiness. Although this may have affected outcomes, in the principal investigator's experience during actual cannulations, the subjects usually have the circuit ready and are waiting for blood.
A limitation of the overall study design was the inclusion of facilitators as subjects. They were aware of the scenarios, which potentially affected technical performance. In addition, given their facilitator training, one can assume a higher performance level with regard to teamwork and communication, potentially influencing teamwork scores within the sessions in which they participated. However, given that there were only 19 subjects and training was mandatory, we felt that it was appropriate to use facilitator data, as the majority of the outcomes were team-level measures. We attempted to address this limitation by controlling for the presence of facilitators within the analysis.
ECMO leadership values simulation and has continued monthly training for all providers, including the addition of neonatal and pediatric critical care physicians. New staff hired as ECMO circuit providers are required to participate in simulation-based training. In addition, we have incorporated training on a second checklist focused on unit preparation.
Simulation-based training is an effective method to improve safety knowledge, attitudes, and teamwork surrounding ECMO emergencies. On-going training is feasible and allows identification of LSTs. Further work is needed to assess translation of learned skills and behaviors into the clinical environment.
The authors acknowledge the contributions from members of the Center for Simulation and Research, ECMO facilitators, and statistician Martin Levy from the University of Cincinnati.