
Empirical Investigations

Improving Safety Recommendations Before Implementation: A Simulation-Based Event Analysis to Optimize Interventions Designed to Prevent Recurrence of Adverse Events

Langevin, Mélissa MD; Ward, Natalie PhD, CE; Fitzgibbons, Colleen RN; Ramsay, Christa RRT; Hogue, Melanie RN; Lobos, Anna-Theresa MD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: February 2022 - Volume 17 - Issue 1 - p e51-e58
doi: 10.1097/SIH.0000000000000585

INTRODUCTION

Children admitted to hospital have documented rates of adverse events (AEs) ranging from 2% to 9%.1–3 Given evidence that up to 60% of these events are preventable, hospital administrators have identified safety risks and preventable AEs as a priority for quality improvement.1–4 Health care teams traditionally use root cause analysis (RCA) to analyze safety events and identify recommendations for change to prevent the event from happening again.5–7

Although RCAs are used widely in hospitals around the world, studies highlight limitations of the RCA process and the recommendations it generates.8–10 Despite the frequent use of RCA, the rate of AEs has remained essentially unchanged, with many AEs recurring.10 The reasons for this are multifactorial. The RCA process often fixates on a single error and is subject to hindsight bias.5,11 Kellogg et al10 reported that most recommendations from RCAs are unlikely to result in effective and sustained change. Because the rate of AEs has not fallen, many authors suggest that the recommendations and proposed interventions are ineffective and of poor quality.5,10

Our research group created a simulation-based event analysis (SBEA) protocol12 to address limitations of the RCA process. The SBEA systematically reviews AEs by recreating them with in situ simulated patients to understand clinician decision making, improve error discovery, and, through guided sequential debriefing, recommend interventions for error prevention. At the time of our study, the RCA recommendations generated by the traditional event analysis for the studied AEs had already been disseminated in our hospital. Despite this, our first study showed that when SBEA was used to analyze AEs in hospitalized pediatric patients, the AEs recurred, and debriefing revealed unique causes of error and generated new recommendations compared with the traditional event analysis.12

Studies show that recommendations are rarely tested before widespread implementation, and many hospitals lack a systematic way to analyze whether interventions had the desired impact.5,8–10 Our first study showed that the traditional RCA recommendations had not prevented the original AEs from recurring.12 Consistent with the current literature, we felt that even if new recommendations were generated using SBEA, they needed to be tested and optimized.

The current study investigates the use of simulation to test and optimize recommendations generated from SBEA in the clinical environment before widespread implementation.

METHODS

Setting

This study was conducted as part of a 2-phase research project at the Children's Hospital of Eastern Ontario (CHEO) in Ottawa, Canada. The CHEO is a free-standing tertiary pediatric hospital affiliated with the University of Ottawa, Faculty of Medicine. The CHEO has more than 6700 admissions annually. Ethics approval was obtained through our hospital ethics board.

Event Identification

As described in our first study, 2 cases based on real-life AEs were chosen from the hospital's safety reporting system to be replicated using simulation.12 Event A (an error of commission) involved an inpatient with known food allergies who experienced anaphylaxis and received the correct medication (epinephrine) but the wrong dose and wrong route [intravenous (IV) instead of intramuscular (IM) administration]. Event B (an error of detection) involved detecting an error that had already occurred in a patient in the pediatric intensive care unit (PICU) with traumatic brain injury and diabetes insipidus (DI). This patient received the wrong concentration and wrong dose of vasopressin, complicated by communication concerns during patient handoff.12 The simulation scenarios have been previously published and can be found in Supplemental Digital Content 1 (see document, Supplemental Digital Content 1, intervention SBEA scenario setup: https://links.lww.com/SIH/A684).12

Simulation-Based Event Analysis Recommendations for Interventions

We previously published a detailed description of the first phase of this study.12 In brief, during the first phase, we repeated each simulation scenario 5 times with different volunteer clinicians and then debriefed participants to understand their decision-making processes. After allowing time for reflection on why they did or did not make the same or similar errors, the debriefing focused on developing recommendations for error prevention. Table 1 compares the traditional event analysis recommendations with the final SBEA recommendations. Educational interventions were deferred. At the beginning of our study, only the traditional event analysis recommendations had been implemented at our hospital. Removal of 1 mg/mL epinephrine vials from the ward medication cabinets had already occurred by the time the last 3 scenario A simulations were conducted. The SBEA recommendations generated from our first study12 were not disseminated outside of our research team.

TABLE 1 - Comparison of RCA Versus SBEA Recommendations for Scenarios A (Epinephrine) and B (Vasopressin)

Design changes/product labeling
- A: Traditional event analysis: none. SBEA: anaphylaxis drug kits (clear labeling, IM needle); add indication on the epinephrine vials and boxes.
- B: Traditional event analysis: none. SBEA: new infusion drug labeling [matching colors for syringe (concentration) and pump (indication)].

Guarding against harm
- A: Traditional event analysis: pharmacy to review the number of epinephrine vials available on the ward and determine whether this can be reduced. SBEA: alert signs with clear messaging in patient rooms; removal of 1 mg/mL epinephrine vials from the ward.*
- B: Traditional event analysis: none. SBEA: alert signs on infusion pumps for high-alert medications, ie, “vasopressin for DI U/kg per hour” or “vasopressin for shock U/kg per minute.”

Transfer of knowledge
- A: Traditional event analysis: develop (and educate on) a new anaphylaxis order set. SBEA: update hospital resuscitation medication sheets (list medications alphabetically, bold the IM epinephrine dose, list the indication first).
- B: Traditional event analysis: none. SBEA: checklists (RN handoff checklist; MD checklist for assessing acute physiological changes in patients).

Education
- A: Traditional event analysis: update for RN/MD (anaphylaxis education and review of documentation of allergies; hospital resuscitation medication sheets). SBEA: simulation session on the 5 common mistakes; standardized laminated management card for common emergencies.
- B: Traditional event analysis: update for RN/MD in PICU (DI/vasopressin protocol and preparation of medications; update resident binder). SBEA: none.

*The recommendation marked with an asterisk was not assessed/evaluated during the study.

Recruitment of Participants

We recruited participants with training and skill sets similar to the clinicians involved in the real-life events (nurses, pediatric trainees) from within our institution using posters, meetings, and newsletters. All data were deidentified, retaining only summary team performance results and debriefing discussions regarding the interventions. Individual performance was not tracked. Gift cards and continuing education credits were offered to participants in appreciation of their time.

Intervention Testing Using Simulation Scenarios, Prebriefing, and Debriefing

The goal of this study was to test the interventions developed during SBEA in the first phase of our study. As in the first phase, both scenarios were set up in the clinical environment. Scenario A involved 2 participants (nurse, pediatric resident), whereas scenario B involved 2 to 4 participants (up to 2 PICU nurses, 1 senior pediatric resident, and 1 PICU fellow if requested). Participants had access to all hospital references (drug manuals) and standard tools (online references). Participants received a standardized prebriefing before the scenario (see document, Supplemental Digital Content, for the simulation prebriefing) and provided signed consent. The interventions generated from the SBEA described previously were embedded within the simulation scenarios for events A and B. For scenario A, participants were not told about these new interventions before participating in the simulation. For scenario B, the confederate nurse told participants only about the nursing handover checklist, and residents were given the physician checklist to review during the prebriefing. Participants were not told about the alert signs, medication kits, resuscitation sheets, or new medication labels.

We determined a priori that scenarios and debriefing sessions would be repeated until data saturation was achieved. For the purpose of this part of the study, saturation was defined as the point at which participants in debriefing sessions were no longer suggesting new intervention improvement opportunities and the critical error no longer occurred after at least 2 further testing scenarios.
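
To state this stopping rule procedurally, the following is a minimal illustrative sketch in Python (our own illustration; no such code was part of the study protocol, and the field names are hypothetical) of the saturation criterion defined above:

```python
# Illustrative sketch only: the a priori saturation rule described above.
# Repetition stops once debriefings raise no new intervention-improvement
# suggestions AND the critical error has not occurred in at least 2
# consecutive further testing scenarios. Field names are hypothetical.

def saturation_reached(runs):
    """runs: list of dicts, one per completed simulation, e.g.,
    {"new_suggestions": 1, "critical_error": False}."""
    if len(runs) < 2:
        return False
    return all(r["new_suggestions"] == 0 and not r["critical_error"]
               for r in runs[-2:])

# Example mirroring this study, where saturation was reached after
# 6 simulations: runs 5 and 6 yield no new suggestions and no error.
runs = [
    {"new_suggestions": 3, "critical_error": True},
    {"new_suggestions": 2, "critical_error": True},
    {"new_suggestions": 2, "critical_error": False},
    {"new_suggestions": 1, "critical_error": False},
    {"new_suggestions": 0, "critical_error": False},
    {"new_suggestions": 0, "critical_error": False},
]
print(saturation_reached(runs))  # True
```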

A total of 8 simulations were completed for each scenario. For scenario A, the manikin's vital signs were preset to mimic anaphylaxis, and clinicians were shown a picture of a typical urticarial rash. The scenario ended when epinephrine was given, regardless of the dose or route chosen. For scenario B, the manikin's vital signs were preset to show evidence of a vasopressin overdose. The scenario ended after the error was discovered (wrong concentration and rate of vasopressin infusion). After each simulation, a debriefing was led by trained facilitators and transcribed by a study coordinator.13–15 Video recordings of the debriefings were used to ensure accuracy of the transcripts. The debriefing occurred in the clinical environment to allow participants to reflect on their surroundings and the interventions. During the debriefing, facilitators used a debriefing framework similar to the Promoting Excellence and Reflective Learning in Simulation (PEARLS) for Systems Integration framework described by Dube et al.15 Using advocacy inquiry, the facilitators explored participants' decision-making processes and used standard open-ended questions to explore their performance.13–15 Midway through the debriefing, participants were informed about the outcome of the actual case, allowing further reflection on their decisions. Using the PEARLS for Systems Integration framework, the debriefing then focused on the interventions for error prevention, specifically each intervention's efficacy and how it could be improved.14 Interventions were modified in subsequent simulations until data saturation was reached.

Data Analysis

Once the team identified potential interventions, the scenarios were repeated with the specific goal of ensuring that the interventions would prevent the error. Debriefing data included notes taken by the investigators attending the session and a transcript of the debriefing. Once transcribed, debriefings were deidentified, with participants coded by discipline to facilitate analysis. Transcripts were analyzed using a deductive content analysis approach before the next scenario to determine what could be modified to further improve the intervention.16 Because some scenarios were conducted on the same day, some of the analysis took place without transcription so that the intervention could be modified before the next scenario. The modifications were then put into practice in the next scenario with the goal of repeatedly deterring the error using the intervention. Video analysis was used retrospectively to retrieve qualitative data (quotes, reflections) and quantitative data (time to medication administration/error discovery).
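
As a concrete illustration of the deidentification step described above, here is a minimal sketch in Python (hypothetical names and fields; the study's coding of transcripts was performed manually by the research team):

```python
# Hypothetical sketch of transcript deidentification as described above:
# participant names are replaced with discipline-based codes (RN1, MD1, ...)
# so that analysis retains discipline but not identity.

def deidentify(transcript, roster):
    """transcript: list of (speaker_name, utterance) pairs.
    roster: dict mapping speaker_name -> discipline, e.g., "RN" or "MD"."""
    counts, codes, output = {}, {}, []
    for name, utterance in transcript:
        if name not in codes:
            discipline = roster[name]
            counts[discipline] = counts.get(discipline, 0) + 1
            codes[name] = f"{discipline}{counts[discipline]}"
        output.append((codes[name], utterance))
    return output

transcript = [("Jane Doe", "I checked the pump."),
              ("John Roe", "What rate was it running at?")]
roster = {"Jane Doe": "RN", "John Roe": "MD"}
print(deidentify(transcript, roster))
# [('RN1', 'I checked the pump.'), ('MD1', 'What rate was it running at?')]
```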

RESULTS

Each scenario was repeated 8 times, and data saturation was reached after 6 simulations for both scenarios.

Scenario A

Scenario A ran, on average, for 13 minutes. Erroneous IV administration of epinephrine (wrong concentration: 0.1 mg/mL in simulation 1 and 1 mg/mL in simulation 2) instead of IM occurred during the first 2 simulations using the initial interventions (1 mg/mL epinephrine kit, edited hospital resuscitation sheets, alert signs). After modification of the interventions, errors were either corrected or mitigated, and epinephrine was administered correctly via the IM route at the correct concentration in the remaining 6 simulations.

Three interventions were created for the study: (1) specialized 1 mg/mL epinephrine drug kits, (2) alert signs, and (3) revision of the hospital resuscitation sheet. Our previous study found that when participants were asked to treat anaphylaxis, they repeatedly described being confused by the route, concentration, and labeling of epinephrine.12 To address this, a specialized drug kit and an anaphylaxis sign were created to guard against harm and provide exactly what providers needed to treat the first stages of anaphylaxis. Despite previous suggestions for intervention design (graphics, color) in our original study, the participants in this study used the new kit but preferred a version that was simple and contained only clinical information, with no photos (Table 2). Figure 1 summarizes the comments and evolution of suggestions that led to the changes to the epinephrine kit. Similarly, for the anaphylaxis poster, the clinicians in this study sought key medication information rather than an esthetically pleasing poster. Participants provided comments such as "it needs to look more like a hospital problem," suggesting that the bright colors and graphics made it look more like a patient pamphlet. Participants felt strongly that the "Do not give IV" message was a key piece of information on the poster and led them to perform the correct action. Debriefing revealed that some participants were challenged when a patient already had an IV, as they instinctively wanted to use this route to avoid causing the child pain. Participants highlighted the importance of the "Do not give IV" message as a clear direction, delivered in a simple design. A summary of the changes and comments on the epinephrine poster is provided in Figure 2.

TABLE 2 - Epinephrine Drug Kit Intervention Debriefing
Simulation 3: “Pics are not necessary. I get why it is there, but I read it first. By the time I get there I have already read it and I know. I can see it being helpful for some people.”
Simulation 4: “Clear with wording. Do not need pictures. I was hoping it would show me blue to the sky, orange to the thigh.”
Simulation 5: “Pictures are good. Do not have to read anything. Matches my sign as a second reminder and I have everything I need in it.”
Simulation 6: “Very helpful. Everything I needed was in it. Was nice to have it all there. I feel bad that I did not even notice the pictures on the zip lock baggie.”
Simulation 7: “It is clear. Do not give IV. And, supplies are all IM stuff, so I would not even consider IV.”

FIGURE 1: Evolution of epinephrine drug kit signage.
FIGURE 2: Evolution of epinephrine poster.

Our prior study12 recommended modifying 3 elements regarding epinephrine dosing on our hospital's standardized resuscitation sheet, which is printed and made available at the bedside for each admitted patient: (1) list medications alphabetically, (2) bold IM epinephrine, and (3) prioritize indications for use. The edited hospital resuscitation sheet was used in only 2 simulations, by clinicians who wanted to clarify specific epinephrine dosing orders. They described it as "all together it is more clear" and "way better… I saw the bolded and the indication." However, debriefing revealed that the resuscitation sheet was helpful only if participants knew to look for it. In all but one of the scenarios in which participants did not access the sheets, they reported that they did not notice the sheet at the end of the bed or did not know that it existed. Participants' comments included: "When I look at it, it is familiar, but I would not know where to find it" and "I did not clue in there was a [resuscitation] sheet. I was just focused on the patient and was not looking for other resources." One of the 2 clinicians who did use the resuscitation sheet said that they initially thought about using the IV route for epinephrine but quickly changed their mind after reading the newly revised sheet. When asked to compare versions during the debriefing, all the participants said that the revised sheets provided clearer instructions that led them to the correct route and dose. However, many participants highlighted a lack of awareness of the resuscitation sheet's placement and recommended different locations where it could be kept. Some participants preferred other cognitive aids, such as personal pocket cards they were familiar with or the clear and concise anaphylaxis sign: "Seeing the red sign was reassuring, especially that it said do not give IV epinephrine."

Time to administration of epinephrine (TAE), defined as the time from the physician's epinephrine order to its administration to the patient, fluctuated over the course of both parts of our study. Our preintervention and postintervention averages were 226 and 471 seconds, respectively.
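
As a worked illustration of how a mean TAE can be computed from video time stamps, here is a minimal sketch in Python (the individual time stamps below are invented; only the 226- and 471-second means are reported study values):

```python
# Hypothetical sketch: TAE is the interval from the physician's epinephrine
# order to its administration, read off the debriefing videos. The runs
# below are invented so that the means land on the reported averages.

def mean_tae(events):
    """events: list of (order_time_s, administration_time_s) per simulation."""
    return sum(admin - order for order, admin in events) / len(events)

pre = [(30, 250), (42, 260), (25, 265)]   # invented preintervention runs
post = [(20, 480), (35, 520), (28, 496)]  # invented postintervention runs
print(round(mean_tae(pre)), round(mean_tae(post)))  # 226 471
```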

Scenario B

Scenario B ran, on average, for 14 minutes. Four interventions were created: (1) drug labeling (matching color coding between syringe and pump for each concentration and dose of vasopressin), (2) alert signs on infusion pumps for high-alert medications, (3) a nursing checklist for break handover, and (4) a physician checklist for evaluating an acute patient change. In the first phase of our study, it took participants an average of 15 minutes to discover the error.12 The nursing checklist had the highest impact, resulting in an average time to error detection of 6 minutes.

The first 2 interventions comprised signs for high-alert medications and medication labeling. Although most participants commented that the alert signs and labels were "good" and "helpful" when the resources were reviewed at the end of the debriefing, the signs and labels were not regularly noticed during the simulation. Participant comments included: "I looked at that signage and… I do not know. I just did not connect the two. That's all it was. But, I think those signs are great. I think when we are using vasopressin, it [alert sign] would be a really good thing to have in there." Participants also revealed that when they lacked knowledge (ie, did not understand the difference in units and rate for vasopressin in DI versus shock), the alert signs were unhelpful, with one participant remarking, "At the moment, ‘hours’ and ‘minutes’ were just words, not triggering anything for me." During the debriefing, participants suggested many changes to the signs and labels to increase effectiveness. However, even with multiple changes tested across 8 simulations, only 1 clinician (a physician with a nursing background) identified the medication error using the alert sign intervention, after noting the discrepancy between the alert sign and the clinical setup. Supplemental Digital Content 2 summarizes the comments of the study participants and the evolution of the alert signs and product labeling for the vasopressin intervention (see document, Supplemental Digital Content 2, evolution of vasopressin alert signs and product labeling: https://links.lww.com/SIH/A685).

All of the participants highlighted the benefits of the nursing checklist (Table 3). The first debriefing provided insightful comments about nursing handover culture. The participants (both RNs and MDs) suggested a checklist design that led both nurses (the one leaving for break and the one taking over) to review the pumps together at the bedside. The revised checklist allowed early identification of the medication infusion error in all subsequent simulations in which it was used. All other debriefings included themes suggesting that the checklist empowered nurses to raise concerns in a safe, nonthreatening way. The changes and comments are summarized in Table 3.

TABLE 3 - Nursing Handover Checklist Evolution

Initial: Nursing break coverage checklist
- Date
- Time
- Age, weight, allergies
- Diagnosis and current issues
- Current state and targets (eg, VS, ICP, comfort)
- Goals for break
- Critical safety info (equipment, alarms, etc)
- Airway secure?
- Infusions checked? (medication, concentration, rate)
- PRNs reviewed?
- Initials

Final: Covering RN checks
- Initials
- Age, weight, allergies
- Diagnosis and current issues
- Current state and targets (eg, VS, ICP, comfort)
- Critical safety info (equipment, alarms, etc)
- Airway secure?
- PRNs reviewed?
Final: Nurses check together
- Initials
- Check infusions and pumps together (medication, concentration, rate)
- When do I call the MD?

Nursing checklist participant comments
Simulation 1: “I assumed she checked that [the pumps], and we do that a lot here. Something like this is great. Certain nurses would give you a look if you asked to go and double check their pumps.” “I would modify the checklist to say: physically go to pump and check infusions.”
Simulation 2: “Without the checklist, it would be pretty condescending of me to say ‘RN, is your MIE [equipment] working?’”
Simulation 3: “But you also feel that—I'm starving and want to leave. So, I am glad I had the checklist to be like ‘No, we are doing this.’”
Simulation 4: “If I went to that [checklist], people would just have to deal with me using it.”
Simulation 8: “I like it [checklist]. People often run off without giving enough information. Culture is one of the hardest things to change.”
ICP, intracranial pressure; PRNs, as-needed medications; VS, vital signs.

A physician checklist (see document, Supplemental Digital Content 3, physician acute clinical change assessment checklist: https://links.lww.com/SIH/A686) was created in parallel to the nursing checklist to address the knowledge gap that physicians had identified regarding nursing pumps and medication administration. However, once the nursing checklist was optimized and used in the scenarios, the medication error was discovered independently by the nurse and did not require the involvement of the resident. Of the 4 scenarios that involved a physician, only one physician referred to the checklist during the simulations: "To be honest, it totally left my mind once I got to the situation. I totally forgot about this sheet, and you told me 2 seconds before to use it. And, it would have been helpful. If I had gone through that checklist, it probably would have helped me think about a differential there." The physician who did use the checklist noted that "we usually go through all the assessment, but we usually do not think of the pumps and the lines. We rely on the nurses checking that and making sure everything is fine." Although residents said that they liked the checklist, they focused immediately on the patient examination. One physician even added, "… if I have a checklist, I have a crutch that prevents me from becoming an independent critical thinker." One physician had previous training as a nurse and described in the debriefing that she was used to checking pumps when she was a nurse, which led to the error being discovered. Ultimately, the physician checklist was not used enough to be modified or to have an impact on our simulated cases. During the debriefing, a few physicians remarked that education would be an important intervention, although they agreed it is often difficult to implement broadly. Their comments included: "... it feels better to have this engrained in my mind rather than having this piece of paper [checklist] as a crutch" and "… what resonates... is to train me to have medications on my radar."

DISCUSSION

To our knowledge, this is the first study using simulation to test and optimize interventions generated through SBEA in pediatrics. Simulation and debriefing allowed different clinicians to test the interventions in their real-life clinical environment and provide immediate feedback for modification. This direct feedback provided a deeper understanding of the practical use of the interventions and allowed prompt editing and retesting of each intervention. Although the initial interventions were developed through SBEA, almost all interventions still needed modification.

Unfortunately, although significant effort goes into understanding why AEs occur, the rates of AEs in hospitalized patients have remained essentially unchanged.9 Recent literature questions whether the RCA process can truly address all of the underlying causes. Worse, the interventions generated from RCAs are rarely tested before implementation.5,8–10 To address this, our study tested interventions before widespread implementation in the safety of the simulation environment. Using simulation to test interventions allows direct observation of clinicians using the new intervention in their own environment. Through multiple repeated SBEA sessions, clinicians were highly engaged in modifying the interventions and helping us design the tools in real time.

Intervention SBEA not only allowed us to optimize each tool but also provided another look into the causes of error. As we discovered in our first study, the debriefings provide a safe space for participants to share the mental models behind their choices and behavior, giving them a forum to openly express their ideas for change.12 The debriefings during this study revealed new causes of error in scenario B, including hospital climate and culture. During the debriefing of scenario B, nurses openly discussed the discomfort they felt when questioning colleagues or asking for clarification. Studies suggest that barriers to reporting medical errors include the organizational climate, such as differences in status/position between the individual who made the error and the person reporting it.17,18 During the debriefing, nurses suggested that the nursing checklist was a helpful tool to counter the perceived consequences of challenging another nurse, even with no training before implementation. Embedding checklists and standard work into the hospital culture improves communication and patient safety by empowering health care providers to speak up.18–20 Gawande19 has shown how the implementation of a checklist can improve communication across disciplines, especially where there is a perceived hierarchy (ie, surgical nurses and surgeons).

Research suggests that the most effective interventions generated through the RCA process consist of guarding against or changing the hazard.5 Although all of the interventions tested in our study focused on system changes and guarding against harm, we found that each intervention required repeated testing and thoughtful modification in the clinical environment to achieve optimal success. This suggests that even when an improved technique of event analysis, such as SBEA, is used, strong recommendations that guard against or change the hazard should not be implemented without testing. For example, during our original study, participants frequently suggested using alert signs to decrease error.12 In this study, for scenario A, participants felt that the sign was important but suggested modifications to improve clarity by limiting graphics and colors. Their comments also revealed insights into cognitive load in challenging situations and allowed the study team to make the intervention more appropriate to the situation by simplifying the epinephrine kit (removal of graphics), the alert sign, and the hospital resuscitation sheet accordingly. For scenario B, although teams said that they liked the medication alert signs and thought that they "were good," the signs did not translate into error prevention during the simulation. Instead of implementing an intervention (ie, high-alert medication signs) across an institution on the basis of an RCA recommendation, administrators should consider using simulation with clinicians in their environment to ensure that the intervention provides the right information, addresses the targeted problem, and sits in the appropriate clinical environment. In scenario A, guarding against harm by changing the hazard (supplying an epinephrine kit with an IM needle only) was important in preventing error. However, even the epinephrine kit required modification (fewer graphics, plainer type) to optimally support the correct dose-related choices.

For scenario A, despite our interventions improving decision making, video time data analysis showed that TAE did not improve over the course of the study and was significantly longer than the TAE in a recently published simulation study (114 seconds).21 Given the real-life AE at our hospital (a patient received the wrong dose, route, and concentration of epinephrine), we were specifically focused on improving key aspects of anaphylaxis treatment: ensuring the right concentration, right dose, and right route. TAE had not been flagged as a concern and, although of significant clinical importance, only became apparent after collating time-based video data. Had we also focused on TAE and provided that information to the participants in the debriefing after each simulation, there would likely have been opportunities for further modifications and an engaging conversation about how to improve time to administration. As described, during the debriefings, participants were told only whether they made the critical error (defined as wrong route, dose, or concentration) and were not informed about the TAE. Clearly, based on a recently published study, TAE is also an essential component, and delayed TAE should be considered an error.21 After this study was completed and our results were shared, our hospital's early response and resuscitation committee recommended that the epinephrine kits be placed inside the ward resuscitation carts (unlocked, on wheels). Of interest, this new location of the kits has been tested in simulation more than 20 times since our study and has proven very effective for cases of anaphylaxis in patients not previously known to be at risk of anaphylaxis.

The SBEA for intervention testing before implementation is efficient and efficacious. Hospital administrators and educators are often resource limited and find it challenging to implement hospital-wide education and roll out new protocols and procedures.22 In this study, intervention SBEA was completed without training or education before implementation. Although hospitals would be unlikely to implement new interventions without first providing education to staff, we decided to test the interventions blindly to understand their baseline effectiveness and assess their impact without staff preparation. This provided key information on which interventions were most likely to succeed without training and which needed more education or culture change before implementation. For example, although edits to the hospital resuscitation sheet made the tool easier to understand, participants were either not aware of it or did not know where to find it. Furthermore, the checklist for resident physicians was introduced as an intervention to help structure their approach to an acute clinical change in a critical care patient. Although, in the debriefing, some residents said that they liked the checklist, it was not used during the scenario. This suggests that the introduction of physician checklists may require more training and education before implementation. A study of pediatric critical care physicians found that physicians may resist implementation of a cognitive aid if there is minimal physician input before implementation and if physicians are asked to use the aid before demonstration of benefit.20 Our study showed that introducing a physician checklist without their input and without demonstration of benefit did not improve use or acceptance of the checklist, even during a simulation in which residents were told to use it. Ensuring that the right intervention is implemented is important, as it saves time and resources and affects clinician buy-in and interest in adopting new policies. Furthermore, using simulation to test interventions could save crucial health care dollars by allocating resources to the highest-yield interventions.

Lessons Learned

Our initial study12 showed that SBEA required a significant investment in person-hours. When applying the protocol to intervention testing, the process became more efficient and could build on work already done. The cases, setup, and procedure did not change, and our study protocol was integrated into voluntary nursing education days, which decreased participant recruitment time. The time allotted to intervention design and modification is difficult to quantify, as interventions were modified both "on the fly" between simulations occurring on the same day and retrospectively, after reviewing the videos; this represented most of the "new" dedicated person-hours. In situ simulation was key to the continued success of this protocol: it is inexpensive and critical in AE review because it provides situational awareness of the environment. Given that all of the testing occurred in the simulation environment, at the very least, SBEA gave us situational awareness of how interventions might perform and of the effort required to optimize them. Lastly, after review of recent literature,21 it is clear that we should have included TAE as one of our elements of a critical error. Although TAE was not an original study objective, the data highlight the key role that simulation plays in recreating clinical scenarios and allowing deliberate, time-based outcome analysis.

Limitations

Our study has limitations. We tested the interventions in separate clinical contexts (the PICU and the ward) but not in all areas of the hospital (ie, emergency department, operating room), and we did not control for individual clinician experience or seniority. Practice patterns and comfort change with the environment, and the recommendations obtained in these clinical settings may not be generalizable to other areas. Testing in all clinical areas may not be feasible given the person-hours required to reproduce these simulations. In addition, simulating an intervention before formal education or implementation strategies have been applied may limit our ability to measure the effectiveness of specific interventions. The physician checklist was ineffective; it is unclear whether this was because residents were not familiar with the new tool, because they had not bought into its effectiveness, or because the tool itself was not useful. Lastly, for scenario A, our study focused on analyzing the critical error made during a real-life AE. In doing so, our key end points and debriefing emphasized interventions specific to that AE. In the future, we would ensure that published clinical standard-of-care end points were included in data collection, both to address the original AE and to further clinical excellence at our institution.

CONCLUSIONS

We recommend that hospitals use simulation to test, modify, and optimize interventions recommended through event analysis. Simulation optimizes interventions and provides the opportunity to assess efficacy in real-life settings with clinicians, before valuable resources are dedicated to system-wide implementation.

REFERENCES

1. Matlow AG, Baker GR, Flintoft V, et al. Adverse events among children in Canadian hospitals: the Canadian Paediatric Adverse Events Study. CMAJ 2012;184(13):E709–E718.
2. Krug SE, Frush K; Committee on Pediatric Emergency Medicine, American Academy of Pediatrics. Patient safety in the pediatric emergency care setting. Pediatrics 2007;120(6):1367–1375.
3. Berchialla P, Scaioli G, Passi S, Gianino MM. Adverse events in hospitalized paediatric patients: a systematic review and a meta-regression analysis. J Eval Clin Pract 2014;20(5):551–558.
4. Woods D, Thomas E, Holl J, Altman S, Brennan T. Adverse events and preventable adverse events in children. Pediatrics 2005;115(1):155–160.
5. Wu AW, Lipshutz AK, Pronovost PJ. Effectiveness and efficiency of root cause analysis in medicine. JAMA 2008;299(6):685–687.
6. Charles R, Hood B, Derosier JM, et al. How to perform a root cause analysis for workup and future prevention of medical errors: a review. Patient Saf Surg 2016;10:20.
7. Joint Commission Resources. Root Cause Analysis in Health Care: Tools and Techniques. 5th ed. Oak Brook, IL: Joint Commission Resources; 2015. Available at: https://www.jcrinc.com/assets/1/14/EBRCA15Sample.pdf. Accessed January 3, 2019.
8. Percarpio KB, Watts BV, Weeks WB. The effectiveness of root cause analysis: what does the literature tell us? Jt Comm J Qual Patient Saf 2008;34:391–398.
9. Peerally MF, Carr S, Waring J, Dixon-Woods M. The problem with root cause analysis. BMJ Qual Saf 2017;26(5):417–422.
10. Kellogg KM, Hettinger Z, Shah M, et al. Our current approach to root cause analysis: is it contributing to our failure to improve patient safety? BMJ Qual Saf 2017;26(5):381–387.
11. Simms ER, Slakey DP, Garstka ME, Tersigni SA, Korndorffer JR. Can simulation improve the traditional method of root cause analysis: a preliminary investigation. Surgery 2012;152(3):489–497.
12. Lobos AT, Ward N, Farion KJ, et al. Simulation-based event analysis improves error discovery and generates improved strategies for error prevention. Simul Healthc 2019;14(4):209–216.
13. Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: closing performance gaps in medical education. Acad Emerg Med 2008;15(11):1010–1016.
14. Eppich W, Cheng A. Promoting Excellence and Reflective Learning in Simulation (PEARLS): development and rationale for a blended approach to health care simulation debriefing. Simul Healthc 2015;10:106–115.
15. Dube M, Reid J, Kaba A, et al. PEARLS for Systems Integration: a modified PEARLS framework for debriefing systems-focused simulations. Simul Healthc 2019;14(5):333–342.
16. Creswell J, Plano Clark V. Designing and Conducting Mixed Methods Research. Thousand Oaks: Sage; 2007.
17. Levine KJ, Carmody M, Silk KJ. The influence of organizational culture, climate and commitment on speaking up about medical errors. J Nurs Manag 2020;28(1):130–138.
18. Reese J, Simmons R, Barnard J. Assertion practices and beliefs among nurses and physicians on an inpatient pediatric medical unit. Hosp Pediatr 2016;6(5):275–281.
19. Gawande A. The Checklist Manifesto: How to Get Things Right. New York: Metropolitan Books; 2010.
20. Weiss MJ, Kramer C, Tremblay S, Côté L. Attitudes of pediatric intensive care unit physicians towards the use of cognitive aids: a qualitative study. BMC Med Inform Decis Mak 2016;16:53.
21. Maa T, Scherzer DJ, Harwayne-Gidansky I, et al. Prevalence of errors in anaphylaxis in kids (PEAK): a multicenter simulation-based study. J Allergy Clin Immunol Pract 2020;8(4):1239–1246.e3.
22. Geerligs L, Rankin NM, Shepherd HL, Butow P. Hospital-based interventions: a systematic review of staff-reported barriers and facilitators to implementation processes. Implement Sci 2018;13:36.
Keywords:

Simulation; pediatrics; root cause analysis; adverse event; patient safety


Copyright © 2021 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the Society for Simulation in Healthcare.