BACKGROUND
SARS-CoV-2 forced drastic changes in hospital workflows, including the transformation of nonclinical areas into clinical care spaces and medical staff working in unfamiliar units.1,2 Our community hospital had a median mortality rate of 22.9% among admitted patients between March and June 2020, with the risk of mortality decreasing by 49% each month, a decline attributed to the institution's ability to respond dynamically to a novel viral illness.2–4 Debriefs held by Emergency Department (ED) leadership with interprofessional staff identified systems-based latent safety threats (LSTs), particularly in newly developed SARS-CoV-2 clinical areas, related to infection control precautions, equipment availability, and interdisciplinary team communication.2 The simulation department at the hospital had a successful history of running quality improvement-based simulation programs3,5–7 and played a critical role in preparing the hospital during the first wave of the pandemic.2,3 In June 2020, hospital leadership provided prediction models to medical staff signaling an anticipated second wave in the early fall. Armed with this knowledge, the hospital's simulation team designed a program, informed by the concerns that surfaced during the department-wide debriefs after the first wave, with the goal of improving the hospital's response to the predicted second wave and beyond.
In situ simulation recreates complex care environments and seeks to identify healthcare gaps by using the actual clinical space in which medical staff function.8,9 Through systems-focused debriefing, the aim is to uncover LSTs that could predispose to medical error, address those threats, improve patient safety, and enhance quality.8 In situ simulation has established itself as an effective tool for the identification of LSTs,10–12 but the existing literature is underdeveloped as to how it can be leveraged in a structured, data-driven, integrated quality improvement initiative.
This study leverages 3 frameworks: (1) the Model for Improvement,13 (2) the Donabedian model,14 and (3) PEARLS for System Integration.8 The Model for Improvement, used by the Institute for Healthcare Improvement, is a tool for accelerating improvement. It highlights the Plan-Do-Study-Act (PDSA) cycle, one of the most well-known experience-based learning models for quality improvement in healthcare,15 for testing changes on a small scale. The Donabedian model is a conceptual framework that evaluates the quality of medical care, focusing on methods rather than findings. In this model, the interplay among the central concepts of structure, process, and outcome is critical to measuring and improving quality.16 Structure-based measures are defined as characteristics of the space where care occurs, including architecture and availability of equipment, whereas process-based measures include the delivery of care to patients and the workflows encompassed therein.2 Finally, the PEARLS for System Integration framework is the only standardized tool for systems-focused debriefing designed to maximize the identification of LSTs. This framework includes: (1) participant assessment of predetermined objectives, (2) facilitated discussion on systems issues identification, and (3) obtaining information and background through direct feedback.8
This study longitudinally followed a single ED as it used in situ simulation as the primary tool to identify and track LSTs. Using the 3 frameworks above, interventions were put into place and their impact evaluated based on changes in LSTs over time. The study addressed the following research question: "Will a structured in situ simulation/quality improvement (QI) program result in the reduction of LSTs over the course of 3 PDSA cycles in a nonacademic emergency department?" The Survey Analysis For Evaluating Risk (SAFER) Matrix from the Joint Commission was used to evaluate and track trends in threats in real time based on their potential risk of harm and their scope in the department.17 The primary outcome of interest was the shift in SAFER score over each PDSA cycle. Secondary outcomes of interest were (1) identified threats per cycle and (2) the impact of interventions on primary drivers per cycle. The team hypothesized that observing the actual environment in which clinical care occurs would provide an in-depth understanding of the structure and process of clinical management, identify opportunities for improvement, and develop an understanding of the relationship between structure and process measures as they relate to the above outcomes, with the goal of creating a safer department ahead of, and during, anticipated future waves of the SARS-CoV-2 pandemic.
METHODS
Context
This was a single-center prospective interventional study conducted in a community hospital in Westchester County between June 2020 and March 2021. Our community hospital's ED is the highest-volume ED in Westchester County, surpassing 65,000 visits in 2019. Our ED served as the pilot site for the Merging In Situ Simulation and Quality (MISS-Q) collaborative, a QI initiative to improve preparedness for SARS-CoV-2 airway management across 5 EDs in the New York Tri-State Area, for which the first and second authors serve as principal investigators. This report follows the Standards for Quality Improvement Reporting Excellence (SQUIRE 2.0) guidelines.18
Intervention
This report provides a detailed description of the first 3 PDSA cycles of this QI initiative. The first cycle (June–November 2020) served to gather baseline data; the second cycle (November 2020–February 2021) measured the impact of the first round of interventions; and the third cycle (February–March 2021) measured the impact of the interventions from the second cycle, along with sustainability. The study team purposefully published 3 cycles to include a review of both the impact of the initial round of interventions and its sustainability. The organization of our PDSA format is visualized in Figure 1.
FIGURE 1: Pictorial representation of a PDSA cycle, conducted 3 times during the study.
Inclusion criteria for participants in this program included all medical staff expected to assist in a critical airway intervention in the ED (respiratory therapists, nursing technicians, registered nurses, advanced practice providers, attending physicians). Exclusion criteria included students and visitors. Leaders of the ED quality team were also engaged throughout the program. This group consisted of 10 individuals: physicians, registered nurses, and advanced practice providers who are leaders in the department, work clinically, review 10% of all ED cases, and track key quality metrics for the department.
In the planning stage (May 2020), internal surveys and virtual department-wide debriefs were held after the first wave of SARS-CoV-2 to identify areas that would benefit from systems-based improvement. Three predetermined categories of threats were decided upon: (1) infection control measures, including cross-contamination; (2) knowledge and availability of equipment; and (3) team communication during intubation.2 In meetings with key hospital stakeholders (administration, department leaders, frontline staff), an in situ program focused on these predetermined categories was proposed and widely supported. The categories were also determined to be of consequence to the larger MISS-Q collaborative and were subsequently adapted for the larger multicenter quality initiative.
For the "Do" in PDSA, 10 simulations were performed in 2- to 3-week block intervals to consistently identify unique LSTs during each cycle. Video recording was purposefully not performed, as an additional safety measure recommended by Jafri et al (2020),19 to limit bringing external equipment into closed units. In addition, our prior work with in situ simulation required numerous cameras to capture multiple angles, along with additional staffing, which would have violated our safety precaution of using a single-debriefer model for a quick, minimally disruptive program.5,6,19 Once an LST was identified, it was studied rigorously, followed by implementation of change concepts and tracking of primary drivers. The LSTs identified therefore served to measure the impact of the interventions put into place.
All simulations and debriefs were performed by a single simulation educator, the primary author, who also served as lead designer and trainer of the simulations and debriefs for the larger MISS-Q collaborative.19 Simulations were performed either in a scheduled manner (at the end of a night shift or the start of a day shift) or in the middle of a shift during a less busy moment, at the debriefer's discretion when working clinically. Simulations were conducted in one of the empty patient rooms in the ED, with the usual unit staff participating in the same roles they would normally assume for an acute resuscitation. Participants were not excluded if they had already attended a prior simulation, but prior participation was documented to control for this variable. The ED under study is experienced with simulation-based programs, as most medical staff have rotated through both laboratory-based and in situ simulation programs over the past 4 years.2,5,7
"No-Go" criteria for canceling in situ simulation, from Bajaj et al,20 were incorporated and documented. Simulations, including the debrief, lasted approximately 10 to 15 minutes and abided by stringent safety checks built by the current authors for this program.19
The case scenario was kept simple to focus the debrief on systems-based threats rather than complex medical knowledge. Although the simulations were scheduled around when the debriefer was working clinically, their timing depended on the state of the ED, and staff were unaware of the simulation until just before the case was run. The case was a 70-year-old woman, previously seen in the ED with a positive polymerase chain reaction test for SARS-CoV-2, arriving by ambulance on bilevel positive airway pressure (BiPAP) in a hypoxic and altered state. The primary airway would fail, requiring the team to have a secondary device ready. The case ended when the secondary device was brought to the head of the bed. The program used only low-fidelity task trainers and a portable storage bag for ease of cleaning, portability, and storage in contaminated zones.19
Data collected from each debrief were transcribed into an electronic database, which included: (1) location, (2) shift (day vs night), (3) number of participants, (4) roles in the department of participants, (5) perceived workload of that shift (from the ED unit leader), (6) threats identified for each predetermined category, and (7) if the debrief was cut short. No individual identifying information was collected.
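For illustration, a single debrief record might be captured as sketched below. This is a hedged sketch only; all field names and values are hypothetical stand-ins for the study's actual electronic form.

```r
# Hypothetical sketch of one debrief record; field names are illustrative,
# not the study's actual database schema.
debrief_record <- data.frame(
  location                  = "ED resuscitation room 4",
  shift                     = "night",  # day vs night
  n_participants            = 6,
  roles                     = "2 RN, 1 MD, 1 APP, 1 RT, 1 tech",
  workload                  = 3,        # perceived workload (1-5), per ED unit leader
  threats_infection_control = 2,        # threats per predetermined category
  threats_equipment         = 1,
  threats_communication     = 1,
  cut_short                 = FALSE     # whether the debrief was cut short
)
```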
The systems-focused debrief began with a description phase as outlined by Dubé et al.8 The purpose was to reinforce a shared mental model among the facilitator and participants, including a statement of purpose in identifying LSTs that threaten the quality of care and staff/patient safety. After the simulation, during the analysis phase, the debriefer used the plus/delta model to cover each predetermined objective: a plus is an area where the team performed well, whereas a delta is an area that could be improved upon. Upon conclusion of the analysis phase and before the summary phase, the debriefer asked for any additional open comments. Finally, in the summary phase, the data captured during the debrief were summarized aloud and cross-checked by the team. This provided an opportunity to cross-validate participant contributions while providing an accurate, comprehensive, and transparent summary. After cross-validation, information was entered into an electronic form in the room upon conclusion of the debrief. The transcripts from the debriefing sessions were organized into 3 categories: (1) infection control, (2) equipment availability, and (3) communication.
Study of Interventions
The "study" phase of the PDSA process occurred through evaluation of threats. Threats were evaluated through the development of subcategories to group the large number of threats identified. Subcategorization was performed by group consensus of the research team upon conclusion of each cycle. All subcategories and interventions were subsequently labeled by 2 of the authors (F. J., A. K.), based on the Donabedian model, as primarily structure or process. Each subcategory's status as structure or process was included as a dummy variable in the primary outcome regression analysis.
The 10 ED quality team members were surveyed individually after each PDSA cycle through an anonymous online form, evaluating the subcategories of the threats identified in preparation for plotting on the SAFER Matrix. Respondents assigned each subcategory a likelihood of harm (low, moderate, or high) along with their estimation of the scope of the threat (limited, pattern, or widespread) during clinical practice, as opposed to during the simulation. Subsequently, the primary author met with the ED quality team, developed cause-and-effect and driver diagrams focused on key interventions, and coordinated implementation of change concepts between each cycle. Any identified LST needing immediate action was acted upon and documented. To establish whether observed outcomes (the change in LSTs over each cycle) were due to the interventions put into place, the primary drivers from the created diagrams were tracked across each cycle.
Measures
The primary outcome of interest was the shift in SAFER score over each cycle; secondary outcomes of interest were (1) the LST count per cycle and (2) the change in impact of the interventions on primary drivers per cycle. Each outcome was analyzed with a separate data set and measures.
The unit of analysis for the primary outcome was the subcategory of threat identified. A longitudinal data set was prepared in which each observation was a subcategory of threat (n = 35), measured once per PDSA cycle; therefore, each subcategory appears 3 times in the data set (n = 105). The dependent variable, SAFER score, was measured once per PDSA cycle. After each cycle, threats were grouped into larger subcategories. The ED QI team members then completed a survey for each subcategory of threat so that it could be mapped on the SAFER Matrix. To do so, weighted means were taken of the scores designated to each subcategory, with values of 1 (low likelihood; limited scope), 2 (moderate; pattern), or 3 (high; widespread). The SAFER score was created by multiplying the weighted mean likelihood of harm for the subcategory, as determined by the QI expert team, by the weighted mean scope, which was then multiplied by the number of occurrences of that subcategory in that specific PDSA cycle. This method is analogous to a risk priority number in a failure modes and effects analysis.21
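As a worked illustration of this scoring, the sketch below reproduces the arithmetic as we read it from Table 1 (occurrences × weighted harm × weighted scope). All object names and the survey responses are hypothetical.

```r
# Minimal sketch of the SAFER score arithmetic; all names and values
# are hypothetical illustrations, not the study's actual data.

# Hypothetical ratings from the 10 ED quality team members for one
# subcategory of threat, each coded 1 (low; limited), 2 (moderate;
# pattern), or 3 (high; widespread).
harm  <- c(3, 3, 3, 3, 3, 3, 3, 3, 3, 3)  # likelihood of harm
scope <- c(2, 2, 2, 2, 2, 2, 2, 2, 2, 2)  # scope of the threat

harm_wt  <- mean(harm)   # weighted mean likelihood of harm -> 3 (high)
scope_wt <- mean(scope)  # weighted mean scope -> 2 (pattern)

lst_count <- 10          # occurrences of this subcategory in the cycle

# SAFER score: occurrences x harm x scope, matching the Table 1 example
# where 10 x 3 (high) x 2 (pattern) = 60 for PDSA cycle 1.
safer_score <- lst_count * harm_wt * scope_wt
safer_score  # 60
```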
The unit of analysis for the secondary outcome evaluating the LST count data was the simulation. Primary drivers, developed upon conclusion of the cause-and-effect diagrams with the ED quality team, were subsequently tracked across each cycle as a measure of the change concepts put into place. A detailed review of cause-and-effect and driver diagrams, hospital images, and threats identified is available for review (see Supplemental Digital Content 1, all raw data along with cause-and-effect and driver diagrams, https://links.lww.com/SIH/A782). Also available are debriefing scripts, the case summary, SAFER Matrix forms, interventions put into place, and a comprehensive and transparent review of our data analysis (see Supplemental Digital Content 2, including case summary, all scripting, and data analysis, https://links.lww.com/SIH/A783).
Analysis
All statistical analyses were performed in R.22
Primary Outcome
A bivariate analysis was performed to evaluate the association between PDSA cycle and SAFER score. To understand the relationship between SAFER score and PDSA cycle over time, a negative binomial regression model with random intercepts for each subcategory of threat was used. The glmmTMB function from the glmmTMB package23 in R was used with the family argument set to nbinom2, the most appropriate model for the distribution of our dependent variable. Independent variables in this model were (a) dummy variables for 2 of the 3 category types and (b) an interaction term between PDSA cycle number and the structure/process dummy variable (to determine whether the relationship between SAFER score and PDSA cycle differed for structure and process subcategories of threats). Assumption testing was performed using the DHARMa package24 in R, testing for dispersion, uniformity, and outliers; all assessments passed (shown in the SDC).
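A minimal sketch of this model specification follows, assuming a long-format data frame. The object and column names (dat, safer_score, cycle, structure, cat_equipment, cat_communication, subcategory) are hypothetical stand-ins, not the study's actual code.

```r
# Hedged sketch of the mixed-effects model described above;
# all data frame and column names are hypothetical.
library(glmmTMB)
library(DHARMa)

fit <- glmmTMB(
  safer_score ~ cycle * structure   # interaction: PDSA cycle x structure/process
              + cat_equipment       # dummies for 2 of the 3 threat categories
              + cat_communication   # (infection control as the reference)
              + (1 | subcategory),  # random intercept per subcategory of threat
  family = nbinom2,                 # negative binomial distribution
  data   = dat
)
summary(fit)

# Residual diagnostics as described: dispersion, uniformity, outliers
res <- simulateResiduals(fit)
testDispersion(res)
testUniformity(res)
testOutliers(res)
```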
Secondary Outcomes
The LST count was reviewed using a run chart followed by an ordinary least squares (OLS) regression analysis. The change in primary drivers was monitored through descriptive statistics.
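The sketch below illustrates one way to produce the run chart and fit the OLS model described. The data frame and column names are hypothetical; the covariates mirror those named in the Results.

```r
# Hypothetical sketch of the secondary analysis; `sims` holds one row per
# simulation with illustrative column names.

# Run chart of total LSTs per simulation, in chronological order
plot(sims$sim_order, sims$lst_total, type = "b",
     xlab = "Simulation number", ylab = "Total LSTs identified",
     main = "Run chart of LSTs across simulations")
abline(h = median(sims$lst_total), lty = 2)  # median line, per run chart convention

# OLS regression controlling for the covariates named in the Results
ols <- lm(lst_total ~ sim_order + night_shift + census + workload +
            team_size + prior_attendance + cut_short, data = sims)
summary(ols)
```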
Ethical Considerations
The study was reviewed by the institutional review board and considered exempt as a QI program.
RESULTS
In total, 30 simulations were run over 3 cycles (10 per cycle), with 50% run during night shifts, 7:00 pm to 7:00 am (cycle 1: 60%, cycle 2: 50%, cycle 3: 40%). Average census at the time of the simulations was 16.14% (cycle 1: 20%, cycle 2: 15.4%, cycle 3: 13%). Perceived workload for the medical staff, on a linear scale from 1 to 5, averaged 2.4 (cycle 1: 3.3, cycle 2: 2.2, cycle 3: 1.7). In total, 156 participants went through the program, including 34 physicians, 5 advanced practice providers, 81 registered nurses, 5 respiratory therapists, and 31 nursing technicians. Cycle 1 had no participants who had attended a prior simulation. In cycle 2, 50% of the simulations (14.8% of all participants) included a team member who had attended a prior simulation; in cycle 3, 60% of the simulations (35% of all participants) did. In addition to the 30 simulations that were successfully run, 7 (18.9% of those attempted) were canceled because of the preplanned "No-Go" criteria, primarily because of clinical load and department acuity. In total, we identified 35 subcategories of threats and 172 total LSTs. Examples of LSTs, subcategories and threats, and assigned likelihood of harm are available in Table 1.
TABLE 1 - Examples of Categories, Assigned Subcategories, Recorded LSTs, and Associated Likelihood of Harm, Along With a Sample SAFER Score Over Three Cycles for a Single Subcategory of Threat

Category | Assigned Subcategory | Recorded LST | Likelihood of Harm (Weighted Mean for Subcategory)
Infection control | PPE gowns are difficult to remove, increasing risk of self-contamination | "The laundered gowns are difficult to remove, risk of self-contamination. (We) have tight knots at the end making it difficult to remove." | Moderate
Infection control | Location to dispose of dirty gowns is unclear | "No one knows where the dirty gowns go and where the hamper is." | Moderate
Equipment | Airway cart missing, not easily found, or difficult to access | "Airway cart looks like the infection control cart and is confusing." | High
Equipment | Missing/unclear location of cric tray; cart with tray is difficult to access | "(There was a) significant delay in time to obtain a trach tray. (The unit) the trach tray was located in was locked, (and then) got jammed making it difficult to obtain." | High
Communication | Lack of discussion of a backup plan for difficult airway | "The doctor had a plan for intubation but did not share it with the team, and the NP felt she had to 'force it out of you' to get the plan on intubation so that the equipment was ready. If there was a less experienced nurse, it would have been difficult for the team to get the supplies." | Moderate

SAFER Matrix Scoring: Difficult to Find Critical Airway Equipment

Cycle | LST Count | Harm Score | Scope | SAFER Score
PDSA 1 | 10 | 3 (High) | 2 (Pattern) | 60
PDSA 2 | 2 | 3 (High) | 2 (Pattern) | 12
PDSA 3 | 0 | 3 (High) | 1 (Limited) | 0
Primary Outcome: SAFER Score
The median SAFER score decreased from 10.94 in PDSA cycle 1 to 6.77 in PDSA cycle 2 to 4.71 in PDSA cycle 3. Bivariate regression analysis for the SAFER score demonstrated a slope of −3.114 per additional PDSA cycle (P = 0.0167). Among threats identified as primarily structure based, the SAFER score decreased by 1.28 per additional PDSA cycle (P = 0.001).
Secondary Outcome: Total LST
Our secondary outcome of LST count data was visualized through a run chart (Fig. 2). The run chart demonstrates a gradual decrease in total threats over time and per PDSA cycle. Through an OLS regression, there was a decrease in total LSTs of 0.20 per additional simulation run (P = 0.02) after controlling for shift type, census, perceived workload, team size, prior attendance, and debriefs that were cut short. Prior attendance by team members was not associated with a decrease in threats identified (P = 0.59). Our ability to draw conclusions from this regression is limited, as it may violate the OLS assumption of no serial autocorrelation among observations. Autocorrelation is plausible because the simulations took place serially with participants drawn from the same department, some of whom were involved in multiple simulations. Furthermore, interventions occurred between PDSA cycles that affected all simulations in the subsequent cycle, further rendering simulations nonindependent of each other. While this regression analysis shows a decrease in LSTs across simulations and cycles, our ability to make confident inferences from this result is limited.
FIGURE 2: Run charts demonstrating (1) total LSTs, (2) infection control, (3) equipment, and (4) communication across all 3 PDSA cycles. Time gap between PDSA 1 and 2 was 5 months and between PDSA 2 and 3 was 4 months.
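One hedged way to probe the serial-autocorrelation concern raised above is sketched below, reusing the hypothetical ols object from the Analysis section. This is illustrative only, not an analysis the study reports.

```r
# Illustrative checks for serial autocorrelation in the OLS residuals;
# assumes the hypothetical `ols` model object sketched earlier.
library(lmtest)

dwtest(ols)  # Durbin-Watson test for first-order autocorrelation

# Visual check of residual autocorrelation across lags
acf(resid(ols), main = "ACF of OLS residuals")
```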
Upon evaluation of the bar graph for primary drivers (Fig. 3), we noted an improvement in LSTs related to infection control, equipment, and access to personal protective equipment (PPE), which were primarily structure based, but no improvement in threats related to workflow changes, which were primarily process based.
FIGURE 3: Bar graphs of primary drivers for all 3 categories (1) infection control, (2) airway equipment, and (3) communication across all 3 PDSA cycles.
Primary Drivers
A detailed review of all primary drivers and interventions related to infection control, equipment, and communication is available in Table 2. Equipment interventions were primarily structure based, whereas communication interventions were primarily process based. In cycle 2, a structured huddle system was implemented that used an already established communication platform to send messages to team members' handheld devices, reminding them to huddle and preventing them from receiving new patients for 7 minutes when activated. This also allowed continued surveillance by tracking huddles for the communication subgroup. Infection control had a mix of process measures (workflow updates) and structural measures (physical changes in PPE placement and changes in laundered gowns).
TABLE 2 - Detailed Review of Primary Drivers, Structure, and Process Interventions Placed After PDSA Cycles 1 and 2

Category: Infection control
Primary driver: Problems attributed to intubation workflow
After PDSA 1:
• Updated workflow for intubation with clear role designations and number of staff in the room, consisting solely of ED staff along with respiratory therapists (process)
• Intubation and PPE training added to the new-hires program, along with continued dedicated training for all medical staff through departmental meetings (process)
After PDSA 2:
• Disposal areas for laundered PPE in every room (structure)
• Continued PPE training (process)

Primary driver: Threats attributed to access to available PPE
After PDSA 1:
• Dedicated infection control carts placed in each zone with clear signage and bright yellow floor decals (structure)
• PAPR system removed (gowns, face shields, N95 masks only) (structure)
After PDSA 2:
• PPE gowns placed on top of airway cart (structure)

Primary driver: Infection control-related equipment concerns
After PDSA 1:
• New laundered gowns with Velcro for easy doffing developed, tested, and sent to tailor for design (structure)
• Restocking system put into place by the unit leader in the ED (process)
After PDSA 2:
• 20,000 new laundered Velcro gowns implemented throughout the hospital (structure)

Category: Equipment
Primary driver: No. of difficult-to-find equipment
After PDSA 1:
• New orange airway carts implemented (structure)
• Improved signage, including floor decals (structure)
After PDSA 2:
• Additional stylet added to airway cart (structure)

Primary driver: No. of missing or expired equipment found
After PDSA 1:
• Airway cart restocking outsourced to transport department (structure)
• Additional safety seal implemented (structure)
After PDSA 2:
• Unit clerk and staff education for contacting the transport department for airway cart management (process)

Primary driver: Equipment that was present, but staff unfamiliar with the location or item
After PDSA 1:
• "Airway menu" with images and locations for each drawer placed on airway cart (structure)
• Prior airway carts and "to-go" bags removed (structure)
• Dedicated training for all medical staff (departmental and in-person) to familiarize staff with medical carts (process)
After PDSA 2:
• Signage and markers placed on the floors for the carts for ease of identification (structure)
• Continued training of medical staff using "just-in-time" training and departmental meetings (process)

Category: Communication
Primary drivers: General lack of role designation; lack of a clear team leader or presence of shared leadership; lack of a designated runner outside of the resuscitation room
After PDSA 1:
• Departmental education and reviewing threats from the simulation (process)
• Review and training of clear workflows (process)
• Role labels placed on top of airway carts (structure)
After PDSA 2:
• Dedicated communication subcommittee developed (process)
• Artificial intelligence structured huddle system implemented (structure)
Of particular interest were LSTs related to infection control, given its increased focus during the pandemic. Based on the run chart, there were no clear shifts or trends demonstrating a decrease in these LSTs over time. As discussed earlier, LSTs that were primarily structure based improved longitudinally compared with those that were primarily process based. The primary drivers offer further insight into this distinction. LSTs related to access to PPE (ie, location and storage) and infection control equipment (ie, gowns difficult to remove) decreased longitudinally; both were addressed primarily through structure (ie, updated infection control carts, new laundered gowns with Velcro attachments) with support from process interventions (ie, stocking plan for new carts, phased removal of old gowns). Our primary driver attributed to infection control workflows was primarily process related (ie, crash carts being brought into the room, improper donning and doffing on entering or leaving a room) without corresponding structure. Our teams noted an increase in workflow-related LSTs as changes were made. As an example, in PDSA 1, the powered air purifying respirator (PAPR) system, a fixture during our initial peak that had been trained through laboratory-based simulation, was removed. Changes in PPE and updated airway management strategies were developed and trained during staff meetings. However, during PDSA 2, our team noted that this change in workflow resulted in increased confusion and threats related to which PPE to wear.
DISCUSSION
This report provides a detailed description of a comprehensive in situ simulation-based quality improvement initiative developed in response to concerns raised by frontline staff about their experiences during the first wave of the pandemic. The identification and mitigation of threats resulted in department-wide changes in how airway management is practiced and how teams communicate, and improved hospital-wide PPE.
Our QI team noted that improving structure-based threats exposed LSTs derived from poorly defined or poorly followed processes. This study illustrated that interventions had a larger impact when they involved both structure and process. When process alone was updated, without corresponding structure-based changes, we found no improvement in LSTs across cycles. Our ED has focused heavily on teamwork training, including yearly simulation training on crisis resource management, which has been primarily process based in nature.2,25 In this study, communication workflows did not improve until addressed by a structure-based intervention that supported the process-based interventions. This suggests that structure-based interventions are needed to impact process-based threats in the ED under investigation. Structure focuses primarily on changing the environment of care, whereas process focuses on behavior modification, a much more challenging aspect of change that benefitted from the addition of structure.
The program resulted from staff concerns over perceived threats during the first wave of the pandemic. The ability to perform in situ simulation allowed the hospital to evaluate these threats at a more granular level. Incorporating a structured QI approach provided the ability to track these threats over time and understand the impact of the interventions administered. While designed by and for the ED, the program impacted the entire hospital. In addition, many of the interventions came from direct feedback from staff during the debriefs (including the design of new laundered PPE with easy donning/doffing, an orange airway cart to match prior "orange" airway boxes, and creation of an "airway menu" for carts to recognize and obtain equipment, among others). The program changed the environment of care and allowed medical staff to become active participants in that change. The overall cost of the program was minimal, mainly instructor time, with simulations performed primarily while the instructor was on shift. The low-fidelity task trainer was already in the hospital's simulation center.
Limitations
This study is a single-center trial and evaluates the workflow and processes of that institution. The program has since expanded to 4 other EDs, both academic and nonacademic centers, and a pediatric hospital, while also expanding to inpatient units, including intensive care units, across 5 hospitals in the New York Tri-State Area. The details of the larger multisite trial, along with the generalizability of such a program, will be available on conclusion of that study. The results presented here depend on several assumptions; one is that the observed changes were caused by this study's interventions rather than external factors. This assumption could have been avoided by including a control group, which was not feasible in this single-institution study. As part of the QI process, we did our best to monitor as many external factors as possible and are not aware of any unforeseen influences that could have impacted the SAFER score or the LSTs identified in subsequent cycles.
Our prior in situ QI program leveraged video recordings,5,6 but video was not used in the current study. Video review would have allowed us to measure our outcomes of interest more objectively; it was purposefully omitted as an added safety precaution, to avoid unnecessary staff entering closed units, to restrict simulations to when the debriefer was working clinically, and to limit unnecessary equipment coming into and out of the closed unit.19
In addition, we did not exclude staff who had attended prior simulations. We designed the program to limit confounding from prior participation by focusing on threat identification and keeping the case simple, using the same case in all 3 cycles. Furthermore, our purpose was to improve the environment of care by mitigating structure- and process-related threats longitudinally, rather than to train individual healthcare workers. Interestingly, the areas where one would expect confounding, primarily communication (knowing that role designation was being monitored) and processes related to infection control (knowing not to bring extra equipment into the room and having a designated runner), did not demonstrate any improvement based on prior attendance.
CONCLUSIONS
Before initiation of this program and during the first wave of the SARS-CoV-2 pandemic, our hospital attributed its month-by-month drop in mortality rate to its ability to respond dynamically by implementing structure and process measures, including early tracheostomy and the development and training of acute airway teams.2,3 Building on our hospital's focus on the Donabedian model, this report, including the supplementary content, provides a comprehensive "road map" for how a single institution leveraged in situ simulation to identify threats, used it as a vehicle to develop interventions, and monitored the impact of those interventions over subsequent cycles.
The program will now be run quarterly with the goal of sustained and continued tracking of airway-related threats and will be altered once SARS-CoV-2 decreases in the community to evaluate other airway-related concerns. Future research should consider clinical outcome measures such as postintubation mortality, first-pass success, and intubation-related complications. The merging of in situ simulation with key QI tools has since expanded to other key quality initiatives both within and outside the ED, including airway management between the intensive care unit, interventional cardiology, and anesthesia in the cardiac catheterization laboratory; pediatric seizure management in the ED and pediatric units; and mass casualty drills. External validation of this program in diverse hospitals is currently underway as part of the larger MISS-Q collaborative.
ACKNOWLEDGMENTS
The authors thank Susan O'Boyle, RN; Dean J. Straff, MD; Jean Lesko, MD; Matthew Colantoni, DO; Michael Palumbo, MD; Cairenn Binder, RN; Janice C. Palaganas, PhD; and Kenay Johnson, MA, for mentorship with QI interventions and simulation planning. The authors also thank Marife Reyes, Michael Gelormino, Sarina Colarusso, and Michael Conroy for assistance with QI implementation. The authors thank Andrew Yoon, MD; Andrew Restivo, MD; Maninder Singh, MD; Hillary Moss, MD; Sharan Shah, MD; and Molly Bourke, MD, from the Merging In Situ Simulation and Quality (MISS-Q) collaborative for monthly planning and collaboration and Nicholas Dadario, BS; and Brennan Cook, BA, for editing and statistical assistance. Dr. Christina Yang is a Clinical Research Training Program scholar supported by NIH/National Center for Advancing Translational Science (NCATS) Einstein Montefiore CTSA Grant Number UL1TR001073.
REFERENCES
1. Locke CJ, Koo B, Baron SW, Shapiro J, Pacifico J. Creation of a medical ward from non-clinical space amidst the Covid-19 pandemic. J Eval Clin Pract 2021;27:992–995. doi:10.1111/jep.13560.
2. Binder C, Torres RE, Elwell D. Use of the Donabedian model as a framework for COVID-19 response at a hospital in suburban Westchester County, New York: a facility-level case report. J Emerg Nurs 2021;47(2):239–255. doi:10.1016/j.jen.2020.10.008.
3. Sammartino D, Jafri F, Cook B, et al. Predictors for inpatient mortality during the first wave of the SARS-CoV-2 pandemic: a retrospective analysis. PLoS One 2021;16(5):e0251262. doi:10.1371/journal.pone.0251262.
4. Cardasis JJ, Rasamny JK, Berzofsky CE, Bello JA, Multz AS. Outcomes after tracheostomy for patients with respiratory failure due to COVID-19. Ear Nose Throat J 2021. doi:10.1177/0145561321993567.
5. Shah SJ, Cusumano C, Ahmed S, Ma A, Jafri FN, Yang CJ. In situ simulation to assess pediatric tracheostomy care safety: a novel multicenter quality improvement program. Otolaryngol Head Neck Surg 2020;163(2):250–258. doi:10.1177/0194599820923659.
6. Ahmed ST, Cusumano C, Shah SJ, Ma A, Jafri FN, Yang CJ. Response to "Mitigating tracheostomy-related latent safety threats through in situ simulation: catch them before they fall". Otolaryngol Head Neck Surg 2021;164(6). doi:10.1177/0194599820977193.
7. Dadario NB, Bellido S, Restivo A, et al. Using a logic model to enable and evaluate long-term outcomes of a mass casualty training program: a single center case study. Disaster Med Public Health Prep 2021;28:1–7. doi:10.1017/dmp.2021.66.
8. Dubé MM, Reid J, Kaba A, et al. PEARLS for systems integration: a modified PEARLS framework for debriefing systems-focused simulations. Simul Healthc 2019;14(5):333–342. doi:10.1097/SIH.0000000000000381.
9. Burton KS, Pendergrass TL, Byczkowski TL, et al. Impact of simulation-based extracorporeal membrane oxygenation training in the simulation laboratory and clinical environment. Simul Healthc 2011;6(5):284–291. doi:10.1097/SIH.0b013e31821dfcea.
10. Patterson MD, Geis GL, Falcone RA, LeMaster T, Wears RL. In situ simulation: detection of safety threats and teamwork training in a high risk emergency department. BMJ Qual Saf 2013;22(6):468–477. doi:10.1136/bmjqs-2012-000942.
11. Knight P, MacGloin H, Lane M, et al. Mitigating latent threats identified through an embedded in situ simulation program and their comparison to patient safety incidents: a retrospective review. Front Pediatr 2018;5:281. doi:10.3389/fped.2017.00281.
12. Couto TB, Barreto JKS, Marcon FC, Mafra ACCN, Accorsi TAD. Detecting latent safety threats in an interprofessional training that combines in situ simulation with task training in an emergency department. Adv Simul (Lond) 2018;3:23. doi:10.1186/s41077-018-0083-4.
13. Langley GJ, Moen RD, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. Jossey-Bass; 2009.
14. Ayanian JZ, Markel H. Donabedian's lasting framework for health care quality. N Engl J Med 2016;375(3):205–207. doi:10.1056/nejmp1605101.
15. Taylor MJ, McNicholas C, Nicolay C, Darzi A, Bell D, Reed JE. Systematic review of the application of the plan-do-study-act method to improve quality in healthcare. BMJ Qual Saf 2014;23(4):290–298. doi:10.1136/bmjqs-2013-001862.
16. Berwick D, Fox DM. Evaluating the quality of medical care: Donabedian's classic article 50 years later. Milbank Q 2016;94(2):237–241. doi:10.1111/1468-0009.12189.
17. The SAFER matrix: a new scoring methodology. Jt Comm Perspect 2016;36(5):1.
18. Ogrinc G, Davies L, Goodman D, Batalden P, Davidoff F, Stevens D. SQUIRE 2.0 (Standards for QUality Improvement Reporting Excellence): revised publication guidelines from a detailed consensus process. BMJ Qual Saf 2016;25(12):986–992. doi:10.1136/bmjqs-2015-004411.
19. Jafri FN, Shah S, Yang CJ, et al. Safety considerations for in situ simulation in closed SARS-CoV-2 units. Simul Healthc 2020. doi:10.1097/SIH.0000000000000542.
20. Bajaj K, Minors A, Walker K, Meguerdichian M, Patterson M. "No-Go considerations" for in situ simulation safety. Simul Healthc 2018;13(3):221–224. doi:10.1097/SIH.0000000000000301.
21. Kiran DR. Total Quality Management: Key Concepts and Case Studies. Butterworth-Heinemann; 2016. Chapter 26, Failure modes and effects analysis.
22. R Core Team. R: A Language and Environment for Statistical Computing. R Foundation for Statistical Computing; 2019.
23. Brooks ME, Kristensen K, van Benthem KJ, et al. glmmTMB balances speed and flexibility among packages for zero-inflated generalized linear mixed modeling. R Journal 2017;9(2). doi:10.32614/rj-2017-066.
24. Hartig F. DHARMa: residual diagnostics for hierarchical (multi-level/mixed) regression models. R package version 0.2.0. Available at: https://CRAN.R-project.org/package=DHARMa. Published online 2018. Accessed September 15, 2021.
25. Jafri FN, Mirante D, Ellsworth K, et al. A microdebriefing crisis resource management program for simulated pediatric resuscitation in a community hospital: a feasibility study. Simul Healthc 2021;16(3):163–169. doi:10.1097/SIH.0000000000000480.