
Clinical Macrosystem Simulation Translates Between Organizations

Bender, G. Jesse, MD, FAAP, CHSE; Maryman, James A., BSN, RN

doi: 10.1097/SIH.0000000000000263
Empirical Investigations

Introduction: Simulation has become an integral tool in healthcare facility redesign. Immersing clinical experts in their future environment has demonstrated benefits for transition planning. This study evaluates translation of a proven macrosystems testing protocol, TESTPILOT, to an organization with limited simulation experience.

Methods: An experienced TESTPILOT team guided Woman's Hospital Baton Rouge's simulation preparation for their new neonatal intensive care unit. Metrics included participant evaluations, latent safety threats (LSTs), and clinician surveys. LSTs recorded during debriefings were addressed by workflow committees. Clinicians were surveyed at four time points for readiness and preparedness on 24 key processes.

Results: The local team invested nearly 750 hours learning and implementing seven simulations that participants rated positively. Most of the 305 issues identified were minor. Surveys at baseline (42% of staff), postsimulation (18%), pretransition (26%), and postmove (29%) demonstrated strong internal consistency. System readiness lagged behind staff preparedness (P < 0.007); both were significantly higher after simulations (P ≤ 0.001) but did not differ significantly across other intervals. Critical laboratory notification, rounding structure, team coverage, and feedback were still evolving as of move day (P < 0.02).

Discussion: Macrosystems testing using simulation identifies LSTs, improves processes, and prepares staff. The methodology is implementable in organizations with limited prior exposure. Woman's Hospital Baton Rouge accrued essential skills to model and orchestrate an immersive neonatal intensive care unit and then drive effective multidisciplinary debriefings. Staff immersed in the new environment began to articulate their jobs before moving in. The trajectory of system readiness improvement corroborated LST correction. Future research is needed to determine the extent of simulation required for different organizational structures.

From the Alpert Medical School, Brown University (J.B.); Department of Pediatrics (J.B.), Women & Infants' Hospital, CNE Simulation Program, Providence, RI; and Woman's Hospital Baton Rouge (J.A.M.), Baton Rouge, LA.

Reprints: G. Jesse Bender, MD, Department of Pediatrics, Mission Health System, 509 Biltmore Ave, Asheville NC 28801 (e-mail: George.Bender@msj.org).

The authors declare no conflict of interest.

Transitions to new healthcare environments necessitate cultural transformation. This is particularly true in acute multidisciplinary care environments such as the neonatal intensive care unit (NICU). Every year, more NICUs transition from a traditional open bay to hybrid or single-family room (SFR) designs.1 Although the debate over the ideal NICU model2,3 continues, all hope to positively impact medical and neurodevelopmental outcomes by reducing infections,4 controlling light and sound exposure,5 and improving family-centered care,6,7 with better bonding and breastfeeding.8 Intensive care environments in which neonates, infants, and medically fragile children are cared for differ from those for adults in the frequency of life-threatening apnea, bradycardia, and labile oxygenation. This physiology dictates considerable interdependence between care providers, who rely heavily on cross monitoring, interdisciplinary communication, and efficient response team activation. With transition to hybrid or SFR models, routines that were robust and efficient after decades of practice in the open bay model become disjointed and unreliable. Uncertainty with procedural arrangements in the unfamiliar environment increases the risk of preventable errors. Fragile patients may be at risk until new practice patterns emerge. However, sufficient attention to transforming care processes and staff attitudes before transition may maintain previously demonstrated levels of care excellence.

Clinical simulation is a transformative tool. The mainstreaming of simulation into medical education and hospital cultures creates new opportunities for process improvement. For decades, perinatal patient safety programs have simulated microsystems to improve the functioning of small units of care delivery (Berwick9 level B). Macrosystems simulation involving interconnected service lines within an organization (Berwick level C) requires considerably more resources. Historically limited by high cost and uncertain return,10 macrosystems simulation is now becoming more widely integrated into human factors research,11 transforming care processes,12 and assessing staff attitudes.13 One such macrosystem testing protocol, Transportable Enhanced Simulation Technologies for Pre-Implementation Limited Operations Testing (TESTPILOT),14 facilitated our transition to a new 80-bed NICU at Women & Infants' Hospital of Rhode Island (WIHRI). Our 5-year-old WIHRI simulation program created a functional NICU within the new space with the explicit intent of enhancing patient safety at transition. We aimed to identify latent safety threats (LSTs), dormant weaknesses in technology and processes that could potentially harm patients or staff. Clinicians participated in two progressive 30-minute simulations, each followed by a 60-minute debriefing, identifying numerous LSTs without exposing any neonates to risk. Overall system readiness improved as LSTs were corrected and processes were refined. The details of practice in the new NICU did not fall into place until simulation highlighted existing process deficiencies and gave opportunities for correction. The results were so salutary that we speculated this approach could become standard practice for any transitioning acute care environment.

However, successful implementation of the TESTPILOT process at one institution does not imply its feasibility at other centers, nor could it be used to accurately measure readiness of newly constructed care environments outside the context of the original study. Woman's Hospital Baton Rouge (WHBR)15 was motivated to explore their new 72-bed level III SFR NICU. The new space was double the size of the previous open bay design, with longer travel distances to labor and delivery, operating rooms, and the emergency assessment center. With an average NICU census of 50 and approximately 700 admissions per year, the previous nurse-to-patient ratios of 1:3 to 1:4 could be a challenge in the SFR design. Some technologies were new during this transition (eg, handheld Cisco phones), whereas others (eg, the electronic medical record) had been implemented 1 year earlier. However, implementation could still be thwarted by an insufficient organizational safety culture or inadequate simulation experience to model a functional intensive care unit. The WHBR patient safety systems included routine multidisciplinary root cause analysis and high-volume near-miss error reporting. Preparation for the new unit had included whiteboards and walk-through scenarios. Across the institution, simulation was generally perceived as helpful for mock codes and team training. The local delivery of the neonatal resuscitation program16 integrated simulation skills practice without video playback or in-depth exploration of behaviors. Measuring change was also not a given. Even if the simulations were successful, the extent to which LSTs would be discovered and corrected was unknown. As LSTs were corrected, it was unclear whether the original WIHRI survey instruments would corroborate improved system readiness. We hypothesized that (1) instruments refined with serial cognitive interviews, applied to (2) an effective TESTPILOT implementation in another organization, would (3) measurably demonstrate improved system readiness and staff preparedness.


METHODS

This study was a quasi-experimental mixed-methods design without a control group. The WIHRI guided WHBR TESTPILOT development. Figure 1 shows an overall timeline. After engaging senior leadership and recruiting the core team, we determined simulation scope, developed scenarios, prepared the core team and set the stage, orchestrated simulations, and facilitated debriefings. The findings were integrated with education programs, and data were collected with revised instruments.

FIGURE 1


Engagement of Senior Leadership and Recruitment of the Core Team

A partnership was formed with WHBR to replicate TESTPILOT. Hospital administrative and NICU clinical leadership buy-in were essential first steps for enabling the most effective exploration. Administrators committed to support staff involvement and mobilize resources for process improvement. Nursing and physician front-line leaders committed to participate in the simulations, creating a safe learning space for participants. A core team of WHBR staff and educators was selected to prepare for and facilitate the simulations. Nursing and physician champions who had volunteered for transition preparation more than 1 year earlier brought their strengths to core team roles. Roles included scenario preparation, scheduling, supplies and equipment, environment preparation, information technology liaison, orchestration and debriefing, and research. Each role had a lead with multiple contributors. The local lead for research obtained expedited institutional review board approval and informed consent from all participants.


Determination of Scope

We explored the optimal portion of the new SFR NICU to simulate, balancing the expected yield with the effort required to develop scenarios and orchestrate simulations. The minimum space would enable routine care team interactions among RNs, MDs, secretaries, and respiratory therapists when responding to important clinical situations. A neighborhood cluster of six rooms plus interfaces with the labor and delivery room (LDR) and magnetic resonance imaging (MRI) suite became the simulation arena. The level of staffing needed to cover these rooms defined participant throughput and facilitator requirements. In situ simulations were scheduled between construction completion and the patient move date. Three simulation block sessions were initially scheduled at intervals before the expected move date. A consulting firm separately assisted with move day logistics.


Scenario Development

The core team developed scenarios with guidance from our WIHRI team. Learning objectives were refined from local high-acuity concerns and organizational priorities (Table 1). The goal was to emulate a busy NICU day without overwhelming available resources. Four patient vignettes were developed to progress across two 30-minute simulations. Alternate versions tested code response from different locations. Simulation 1, “Bob Perry,” whose opening vignette described hypoxic-ischemic encephalopathy, was designed to evaluate cooling blanket and dirty utility workflows as well as documentation and communication protocols. His scenario progressed to explore order entry for MRI, medication retrieval from Pyxis, interdepartmental communication, organizing the transport team, and physically transporting him to the radiology suite with respiratory support. Simulation 2, “Wanda,” was a 3-week-old ex–26-week gestation infant recovering from respiratory distress syndrome on high-flow nasal cannula, whose intolerance to continuous feeds triggered testing of laboratory and radiology technologist recruitment, the pneumatic tube system, ventilator changes, new intravenous fluids, and consultation to neonatology and surgery. In the progression, she acutely decompensated, needing an oscillator and preparation for a bedside surgical procedure.

TABLE 1

Simulation 3, “The Simon Triplets,” occupied the next rooms and had hypervigilant parents who inadvertently activated the code button, commented on nasogastric tube insertion, and pushed behavioral boundaries. One triplet was hypoglycemic, with intravenous access issues and discrepant glucometer and laboratory values. The other two were feeding and growing. Simulation 4, “Minnie,” had alternative versions to simulate code response from various locations: resuscitation rooms, assessment center, or labor and delivery. Handheld phone alerts triggered code team responses to each location, finding equipment and supplies, stabilizing a 28-week preterm infant on a mobile warmer with shuttle, intubation, and ventilation, routing back to the NICU, progressing to scanning a medication request to pharmacy, placing lines, and settling in. Simulation 5, “Charles Green,” was a 3-week-old 29-week gestation infant with a distended abdomen. This simulation tested aspects of basic workflow such as intubation and ventilator management, obtaining x-rays and laboratory work, and emergent operating room transport. Simulation 6, “Henrietta Wells,” was a 42-week infant readmitted from home for hernia repair. This simulation tested clinical pathways for outpatient surgical patients: check-in at the front desk, admission to the hospital, parent orientation to the department, and patient documentation, as well as moving the patient to and from the operating room suite.


Setting the Stage and Preparation of the Core Team

The WHBR simulation coordinator solicited the core team for script development, arena preparation, parent actor training, and simulation orchestration coordination. Rooms were stocked with supplies, equipment, props, and mannequins. Familiar and crucial environmental cues were placed, such as vital sign monitors, handheld communication devices, and an active electronic medical record. For equipment not yet available in the new space, solutions were created. For example, a box mounted on a cart with the same footprint simulated the x-ray machine. The core team was coached on facilitating immersion at the bedside and techniques to guide without directing participants. They reviewed the session learning objectives and scripting for their individual vignettes. These clinical educators were coached on identifying LSTs, disseminating solutions, and iterative improvement. Staff from all shifts volunteered 4-hour blocks during nonclinical time. Although participation did not replace mandatory training requirements, nursing staff earned credits toward their professional development program “Career Ladder.”


Orchestration of the Simulation and Facilitation of the Debriefing

Each session integrated principles designed to enhance learning effectiveness in clinical simulation.17 After orientation to the simulation arena, staff cared for patients in their usual capacity for 30 minutes, followed by an hour of facilitated group debriefing. During each simulation, one or more patients would have events refocusing staff attention or compelling a shift of resources. The second simulation was a progression of the same vignette to quickly immerse participants. Handheld camcorder devices captured part of the unfolding scenarios. Immediately after each simulation, all participants gathered in a nearby room to discuss the progression of their scenarios. One experienced and one novice facilitator co-debriefed each session. Debriefers created a safe learning environment with a reminder to focus constructive critique on care processes instead of individual performances. Brief video segments were played back during debriefing to rein in conversation or clarify an event. Facilitators structured the conversation with a scripted debriefing agenda, a blend of open-ended themes and scenario-specific learning objectives. Participants solved problems in real time and then brought their experiences back to pre-existing NICU process improvement workgroups. The workgroups refined processes within a 7- to 10-day turnaround period. Subsequent simulation runs using the same scenario scripting and learning objectives retested the newly defined processes. Summary information was shared with hospital administrators to substantiate corrective measures and enhance educational programs. Key organizational decision makers, including a Clinical Vice President and the NICU Medical Director, directly observed most debriefings.


Integration With Existing Educational Programs

The WHBR integrated simulation-based systems testing with the broader hospital education program. A small group of NICU clinicians had previously designed a three-phased education program to serve overall organizational goals and safety requirements. Phase 1, which focused on general wayfinding and fire safety within the unit, took place 3 to 2 months before the move. In phase 2, tours and scavenger hunts aligned staff to NICU-specific processes. This phase included orientation to patient rooms and processes, new medication rooms, Pyxis locations, syringe pump libraries, nourishment preparation areas, equipment storage and cleaning, nurse station setup, location of supplies, the surgery admission area and postanesthesia care unit, parent space and patient space, emergency response pathways, code packs, and crash carts. Phase 3, from 5 to 2 weeks before the move, centered on new systems and contingency planning. This included familiarization with the pneumatic tube system, Rauland nurse call system, high-census overflow plan, hospital code responses (fire, security, weather, etc.), elevator override function, procedures for infant transport to MRI, and infant code blue responses in various hospital areas. The TESTPILOT simulations began while the phase 2 education program was in progress. Simulation results were available in real time to members of hospital leadership and relevant workgroup members. Debriefing summaries were shared to inform key decisions and bolster problem-solving sessions. As changes in policy and procedures emerged, solutions were integrated into phases 2 and 3 of the hospital education programs. Staff members who had previously completed their training were given supplemental training to bring them up to speed in the final 2 weeks before the move.


Data Collection

The mixed-methods approach included three sources of qualitative and quantitative data: participant evaluations, LSTs, and the system readiness/staff preparedness surveys. Participants evaluated the impact, organization, content, and facilitation at the end of each simulation session, whereas multiple scribes recorded LSTs during each debriefing. Facilitators reviewed themes immediately after each simulation session, generating transcript summaries with action items that became drivers for process improvement and revisions to subsequent scenarios, as well as a catalogue of LSTs. A grounded theory approach was used to clarify recurrent patterns across the spectrum of LSTs using the qualitative analysis software NVivo (QSR, Melbourne, Australia). Two clinical researchers who personally implemented TESTPILOT-WHBR categorized LSTs post hoc. The researcher (J.B.) involved in the previous TESTPILOT provided initial coding themes and definitions. The researcher (J.A.M.) who had guided each WHBR debriefing clarified specific issues. Joint analysis with WIHRI and WHBR staff ensured consistent interpretation of both the node definitions and the local issue context. They sorted LSTs into themes, debated interpretations, and iteratively refined category definitions for bridging issues. Saturation was attained when no further themes emerged. Node definitions evolved, with categories emerging in recruitment, scripting, written communication, family-centered care, staffing, supplies/equipment, training, provider workflow, code blue/staff assist, ergonomics, wayfinding, communication device, and facilities. For the summary lists, issues occurring in repeated simulations were counted only once. After all issues were associated with a descriptive category, the authors together re-examined each one for degree of severity: hazard, LST, or minor issue. Active hazards were defined a priori as those likely to cause significant harm if not addressed before caring for patients in the new NICU. Latent safety threats had the potential to cause harm to patients, family, or staff. Minor issues were inefficiencies or annoyances that were unlikely to cause harm. The third set of data came from the system readiness and staff preparedness surveys. These were quantitatively captured by serially surveying the process experts: bedside nurses, respiratory therapists, and physicians. Anonymous surveys were distributed on paper at baseline, postsimulation, 1 week before the move, and between 1 and 2 months after the move. Professional education hours incentivized completion.
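
To make the deduplication and severity tallying concrete, the sketch below shows how coded debriefing findings can be counted once per unique issue and summarized by theme and severity. It is illustrative only: the example findings and data structure are hypothetical, and the study itself coded issues in NVivo rather than in code.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical coded debriefing findings; the actual coding was done in NVivo.
@dataclass(frozen=True)
class Finding:
    description: str   # short description recorded by the scribe
    theme: str         # e.g., "provider workflow", "code blue/staff assist"
    severity: str      # "hazard", "LST", or "minor issue"
    simulation: int    # session in which it was observed

findings = [
    Finding("code button location unclear to parents", "family-centered care", "LST", 3),
    Finding("code button location unclear to parents", "family-centered care", "LST", 5),  # repeat
    Finding("pneumatic tube misroutes STAT labs", "facilities", "hazard", 2),
    Finding("supply cart missing suction catheters", "supplies/equipment", "minor issue", 1),
]

# Issues observed in repeated simulations are counted only once
unique = {(f.description, f.theme, f.severity): f for f in findings}.values()

by_severity = Counter(f.severity for f in unique)
by_theme = Counter(f.theme for f in unique)
print("By severity:", dict(by_severity))
print("By theme:", dict(by_theme))
```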


Survey Validity

The validity of the system readiness and staff preparedness survey instrument was assessed under the Messick framework, including content (theoretical framework), response process (item-specific Cronbach α and factor analysis), and internal structure (internal consistency).18 A more rigorous instrument was developed and tested before the WHBR implementation. The original instrument was burdensome, with 71 questions assessing interrelated measures of system readiness and staff preparedness. Staff bias modulators were considered, including measures of emotional exhaustion, personal accomplishment, control over practice, handling conflict, staff relationships, and internal work motivation adapted from other instruments.19–22 An expert panel independently scored and classified all items. Items with low factor loadings or Cronbach α were removed. Components of Oreg's Resistance to Change23 and Reaction to Change24 were added to illuminate biases related to professional change in the NICU environment. The WIHRI NICU staff talked through the questions with think-aloud responses and offered suggestions over serial cognitive interviews. The instrument was restructured into parallel system readiness and staff preparedness items, with a six-point Likert scale plus the option “I don't know.” Another round of cognitive interviews retested these modifications. The WHBR surveys served as an independent second-stage pilot for the instrument. The resulting instrument (see Appendix 1) assessed local expert opinion of system readiness and staff preparedness for each process using a snapshot of key clinical care processes expected to evolve in the new environment. Examples included negotiating decentralized communication, defining admissions workflows, recruiting correct staff and running codes, mobilizing rapid response teams, getting supplies and equipment, staffing and training issues, and scripting staff for family-centered care. Each question focused on individual or team functioning in the new environment. Staff preparedness was simultaneously assessed for each process, because without knowledgeable bedside staff, system readiness would not benefit patients.
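
For readers less familiar with the internal consistency statistic used throughout this study, a minimal sketch of computing Cronbach α from Likert-scale responses follows. The item names and data are hypothetical and do not reproduce the study instrument; the formula itself is the standard one.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents-by-items matrix of Likert scores.

    alpha = (k / (k - 1)) * (1 - sum(item variances) / variance of summed scale)
    """
    k = items.shape[1]                         # number of items in the scale
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical example: 8 respondents rating 4 readiness items on a 1-6 scale
responses = np.array([
    [5, 5, 4, 5],
    [4, 4, 4, 3],
    [6, 5, 6, 6],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 3, 2, 2],
    [4, 5, 4, 4],
    [6, 6, 5, 6],
])
print(f"Cronbach alpha = {cronbach_alpha(responses):.2f}")
```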


Statistical Analysis

The trajectories of system readiness and staff preparedness were summarized as means and standard deviations across all 24 process items. One-way analysis of variance and Tukey post hoc tests determined differences between study periods and between system readiness and staff preparedness. Internal consistency among respondents in each survey period was assessed with Cronbach α. Subgroup analysis was performed on the staff most vested in process improvement, defined as respondents who answered at least 20 process items at baseline without selecting “I don't know.” Among well-informed individuals completing serial surveys, paired interval change was reported as Cohen effect size. Each of the 24 key clinical care processes was evaluated for readiness and preparedness with the Friedman test to determine whether there was a difference between study periods, and with the Wilcoxon signed-rank test with a Bonferroni correction to compare baseline and post-TESTPILOT surveys (to determine whether simulation made a difference) and pretransition and postmove surveys (to determine whether patients were safe at transition).
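
As an illustration of the nonparametric comparisons described above, the sketch below applies the Friedman test and Wilcoxon signed-rank tests with a Bonferroni correction to hypothetical paired ratings for a single process, and reports a paired Cohen effect size. The data, sample size, and variable names are invented for demonstration; this is not the study's actual analysis code.

```python
import numpy as np
from scipy import stats

# Hypothetical paired ratings (1-6 Likert) for one process from 10 respondents
# who completed all four surveys: baseline, post-TESTPILOT, pretransition, postmove.
baseline       = np.array([2, 3, 2, 3, 2, 4, 3, 2, 3, 2])
post_testpilot = np.array([4, 4, 3, 5, 4, 5, 4, 4, 4, 3])
pretransition  = np.array([4, 5, 4, 5, 4, 5, 5, 4, 4, 4])
postmove       = np.array([5, 5, 4, 5, 5, 6, 5, 5, 4, 5])

# Friedman test: does the rating for this process differ across the four study periods?
chi2, p_friedman = stats.friedmanchisquare(baseline, post_testpilot, pretransition, postmove)
print(f"Friedman chi2 = {chi2:.2f}, p = {p_friedman:.4f}")

# Planned pairwise comparisons with a Bonferroni correction
comparisons = {
    "baseline vs post-TESTPILOT": (baseline, post_testpilot),  # did simulation make a difference?
    "pretransition vs postmove": (pretransition, postmove),    # were patients safe at transition?
}
alpha = 0.05 / len(comparisons)  # Bonferroni-adjusted significance threshold
for label, (a, b) in comparisons.items():
    stat, p = stats.wilcoxon(a, b)
    d = (b - a).mean() / (b - a).std(ddof=1)  # paired Cohen effect size (d_z)
    verdict = "significant" if p < alpha else "not significant"
    print(f"{label}: W = {stat:.1f}, p = {p:.4f} ({verdict} at Bonferroni alpha), d = {d:.2f}")
```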


RESULTS

Demographics

The local simulation team estimated that 750 hours were dedicated to TESTPILOT-WHBR implementation in addition to ongoing preparations. Their success in learning the premise and implementing the simulations was evidenced by participant evaluations (Table 2), which were on a par with TESTPILOT-WIHRI. The 148 of 233 (63%) WHBR staff participating in simulation were similar to WIHRI staff in demographics (95% white, 4% African American, 94% female) and move committee work (37% involved in 1 or 2 committees, 23% involved in 3 or more). Nurses, respiratory therapists, care technicians, physicians, nurse practitioners, parents, radiology technologists, social workers, and administrators participated in seven simulations over 4 days. Staff pushed for the fourth day of simulations after experiencing its high impact. Debriefings unveiled 305 issues (Fig. 2), many of which were unexpected. Many of the hazards and LSTs involved remote monitoring, nurse call integration, and the mobile communication technologies that enable the new geography of a SFR NICU. New technology rollouts invariably require technical troubleshooting. Clinical staff also needed to frame surrounding processes that integrated the new technologies with their care routines.

TABLE 2

FIGURE 2


Survey Validation

Regarding survey content, response process, and internal structure, content is one of the most important sources of validity evidence. The content of the original instrument was assessed and made more rigorous based on expert panel feedback. For response process, the expert panel independently scored and classified all items, and items with low factor loadings or Cronbach α were removed. Clarity and face validity were enhanced with cognitive interviews. The resulting instrument assessed local expert opinion of system readiness and preparedness for each process using a snapshot of key clinical care processes expected to evolve in the new environment. Regarding internal structure, system readiness surveys were completed by 42% of all staff at baseline, 18% post-TESTPILOT, 26% premove, and 29% postmove. There was strong internal consistency in the system readiness scale (αbaseline = 0.97, αpost-TESTPILOT = 0.95, αpretransition = 0.95, αpostmove = 0.94) and the staff preparedness scale (αbaseline = 0.98, αpost-TESTPILOT = 0.96, αpretransition = 0.96, αpostmove = 0.95).


Effects of TESTPILOT

The revised instruments demonstrated the overall trajectory of system readiness (Fig. 3). System readiness (F = 30.03, P ≤ 0.001) and staff preparedness (F = 10.97, P ≤ 0.001) differed between study periods, being higher after TESTPILOT but not significantly different between other intervals. System readiness lagged behind staff preparedness at each survey period (P < 0.007). Pairwise temporal analysis was performed for the 64% of 143 respondents who completed more than one survey. Participants corroborated improving overall system readiness from baseline to post-TESTPILOT (t22 = 4.20, P < 0.001, Cohen d = 0.88), from baseline to pretransition (t37 = 7.00, P < 0.001, Cohen d = 1.14), and from baseline to postmove (t41 = 7.11, P < 0.001, Cohen d = 1.10). Participants identified modestly improving overall staff preparedness from baseline to post-TESTPILOT (t20 = −1.89, P = 0.07, Cohen d = 0.41), from baseline to pretransition (t33 = −3.3, P = 0.002, Cohen d = 0.57), and from baseline to postmove (t39 = −4.26, P < 0.001, Cohen d = 0.67). The proportion of participants responding “I don't know” on readiness items was 51% at baseline (38%–61%), 13% post-TESTPILOT (2%–26%), 21% pretransition (5%–33%), and 6% postmove (0%–19%). Subgroup analysis of the 31 most vested staff showed similar effect sizes (data not shown).

FIGURE 3

Interval change was examined for 22 specific processes (Fig. 4). “Identify provider” and “suggestions/improvements” were excluded because a paper survey photocopying error caused more than 20% dropout. Readiness was significantly different between baseline and post-TESTPILOT for 17 processes, and preparedness was significantly different for six processes (Appendices 2, 3). Readiness was significantly different between pretransition and postmove for the following six processes: equipment access, critical laboratory results, verbal protocols, written protocols, team rounds structure, and medical model. Preparedness was significantly different between pretransition and postmove for the same six processes plus signage and buddy coverage.

FIGURE 4


DISCUSSION

Some of the earliest institutions transitioning to SFR NICUs suggested that a systems approach may help resolve both the microergonomic and macroergonomic challenges that the new design inevitably introduces.25 This study evaluates the translation of a proven simulation-based systems testing protocol to another healthcare organization. The WHBR successfully replicated TESTPILOT before transitioning into their new NICU. They motivated their staff, executed the key preparations, and immersed participants in simulations that were realistic, relevant, and impactful. Participant evaluations validate that they accrued the requisite simulation and debriefing expertise. Latent safety threats discovered during the debriefing process provided the greatest immediate value to the institution.


Simulation Implementations

The WHBR demonstrated that the methodology was effective, despite subtle variations in both implementation structure and scenarios. The WHBR held seven simulation sessions between 3 and 12 weeks before the move, whereas WIHRI held six sessions between 6 and 9 weeks before the move. When building occupancy certification delays extended the timeline, the WHBR staff added an extra simulation day. Scenarios were built on locally relevant learning objectives to help participants vest, immerse, and optimize discovery. The WHBR learning objectives and scenarios (eg, wayfinding to MRI on the loading dock) differed somewhat from those at WIHRI (eg, simultaneous code blues, rapid response). Both WHBR and WIHRI created variants of baseline scenarios to address these session learning objectives. Exposures were similar between WIHRI and WHBR: two progressive simulations with 60-minute debriefings. Participants tended to immerse more readily in the second simulations. After the first sessions demonstrated the excitement of exploring the new environment and the lack of personal risk, a wave of staff volunteered to get involved in subsequent simulations at each institution. At each institution, involving department leadership and workgroup committee members in debriefings catalyzed process refinement around new technology. The TESTPILOT participants channeled safety threats directly to their workflow committees in hours, formulated corrections in days, and retested in weeks, instead of a more typical organizational process that may take months to years. Staff at each institution were extensively oriented to policy and procedure changes in the new space. More than 400 clinical staff participated in scenario-based WIHRI orientation workshops and WHBR phase 3 education sessions.


Strengths

One of the key strengths of TESTPILOT is its ability to uncover safety concerns via human factors using in situ experts. It accomplishes beta testing without live patients. Local experts' systems knowledge is crucial to discovery—in this case, more important than previous simulation experience. Developing learning objectives from a wide swath of clinical staff concerns made scenarios relevant and facilitated buy-in, which had been a significant hurdle. A priori testing enabled correction of errors without adverse consequences to patients or staff. Immersing staff in situ performing their usual duties in the actual environment revealed actual threats. One unmeasured yet significant byproduct of the simulation sessions, also noted in the WIHRI implementation, was a shifting from anticipatory anxiety to positive anticipation about the move. A discordant gestalt mellowed as the critical mass grasped what their new workplace would be like.


Lessons Learned

This second implementation highlighted a number of opportunities to improve subsequent implementations: preparation time, new technology, video use, and debriefing.

1. Avoid underestimating preparation time. Both administrative planning and simulation team commitments benefit from clear expectations. Effective delegation to distinct simulation team roles reduces the risk of one simulation coordinator being overwhelmed.
2. Preemptively introduce new technologies that amplify SFR NICU culture change. After troubleshooting and process development, all staff need time to practice the mechanics and learn the surrounding process. Any technology introduced near the move date amplifies the stress of transition.
3. Streamline video use during debriefing. Time spent downloading recordings to a playback device in the debriefing arena wastes valuable engagement time and degrades participant input. Portable recording equipment with cued, annotated playback ameliorates this issue.
4. Enhance debriefing preparation. Our novice facilitators had a 2-hour introduction to debriefing just before the first simulations, focusing on the safe learning environment, rules of engagement, and debriefing structure. An experienced facilitator co-debriefed with each novice facilitator on the first simulation day. Participants rated the overall simulation experience and debriefing quality lower in the second session than in the first. These scores recovered in later sessions. This suggests that more extensive debriefing practice, remote from final simulation preparations, may be beneficial before facilitating LST discovery with such a dynamic multidisciplinary crowd.

Qualitative LSTs

Simulation again proved effective in revealing unidentified LSTs. Multidisciplinary planning committees identify many latent threats, but not all can be forecast. Launching from WIHRI lessons learned, TESTPILOT-WHBR followed a similar trajectory of LST discovery as TESTPILOT-WIHRI. Similar to WIHRI, most issues (66%) were minor, with 29% LSTs and 5% active hazards. Many that were solved before transition would not have been anticipated without TESTPILOT and, more importantly, could have led to patient harm. Participants channeled these debriefing discoveries, amplified by their direct simulation experience, to their 13 respective workflow committees. The most frequent LST categories were workflow, code blue/staff assist, and family-centered care. The TESTPILOT-WIHRI LSTs, conversely, centered around communication, minor facilities issues, and mobilizing rapid response. Recurrent themes in simulations at both institutions included staff isolation, handheld communication device usage, and crisis response times. However, interinstitutional comparison of cumulative safety threats may be misleading. The potential for deterioration in care delivery depends on pre-existing institutional variables, the state of transition planning, and the scope of culture change. Institutions vary in their starting points and in their preparations. Tripler Army Medical Center and Texas Children's Hospital report positive experiences using simulation before their SFR NICU changes. In the absence of standardized process measures, the relative strength of their implementations is unknown. Similarly, one cannot predict whether immersive simulations would have identified all LSTs before the move at Canberra Hospital.26 Each discovered LST adds value, as a threat undiscovered cannot be corrected. Each corrected LST adds direct value by not exposing patients as well as indirect value to the organization. To illustrate the latter, one WHBR simulation examined transport staffing for transferring a moderately ill neonate from the NICU to the MRI temporarily located on the loading dock. The consensus during debriefing was that a complete advanced skills team would be the safest practice. This action plan was brought to their workgroup to work through logistics, and procedures were subsequently retested with simulation. In the end, equipment placement and procedures were thoroughly vetted, resulting in a smooth inspection from the Department of Health.


Quantitative System Readiness and Staff Preparedness

The quantitative trajectory of key process improvement corroborated the correction of qualitatively discovered LSTs. This mixed-methods approach was drawn from the social sciences, as adapted to organizational decision-making in healthcare27 and intensive care unit safety culture.28 The Agency for Healthcare Research and Quality has endorsed mixed-methods approaches to patient safety improvement implementations.29 The WIHRI TESTPILOT documented a trajectory of improvement with specialty-specific survey instruments. How well this insight would translate across organizations was unknown. Although the revisions hindered direct comparison with our WIHRI experience, the resulting instruments successfully demonstrated a disparity between WHBR readiness and preparedness. Instrument validity was demonstrated for content, response process, and internal structure. Given strong internal consistency within the system readiness and staff preparedness scales, overall scale and individual item differences could be examined over time. Three observations can be made regarding the overall trajectory of improvement (Fig. 3): step up, lag, and scale. First, the significant step up in system readiness and staff preparedness after simulations was not matched over subsequent intervals. This suggests that the WHBR NICU achieved a steady state in process change and training before the move. Second, care providers' perceived readiness lagged behind preparedness at each assessment. This could be interpreted as personal pride in handling any problem that comes their way. Alternatively, it could be construed as some providers being prepared for the wrong process. Providers not all being on the same page is itself an LST. Finally, the overall scale of responses was not overwhelmingly positive even after the move, which raises the question of whether more preparation would have been beneficial for their transition. Simulation improved system readiness and staff preparedness for individual processes except where the technology was not yet available. Some processes were still evolving as of transition day.


Patient Safety

Safe patient care on day 1 is an objective for every complex healthcare transition. Simulation accelerates the evolution of hazard resolution, provided that the scenarios are realistic, the debriefings constructive, the latent threats corrected, and the processes refined. The TESTPILOT enhances safety by integrating specific care practices into the raw environment. It reinforces the protective layer of redundant checks and balances that reduce the risk of harm reaching the patient. Uniform staff preparedness for working in the new NICU also reduces the risk of human error at transition. Preparedness is an essential component of maintaining patient safety, whether the culture change is around adopting an electronic medical record30 or transitioning to SFRs.31 Preparedness spreads across NICU staff over time. Few staff members understand how their workday will unfold 1 year before transition. Simulation participants develop more precise expectations of the environment. Clinicians working within their comfort zone are less distracted and more efficient. The exploratory simulations also gave specific fodder for the final wave of preparedness, the WHBR phase 3 education. Their staff were ready to perform routine and emergent duties as of move day. Seventeen days after the move, on August 28, Hurricane Isaac made landfall in southeast Louisiana as a Category 1 storm. The storm maintained tropical storm-strength winds through August 29, when the center of circulation passed near Baton Rouge. Clinicians were prepared.


Limitations

The integrity of WHBR system readiness and staff preparedness was evident in their robust care provision under hurricane response conditions. However, the study does not distinguish whether they would have been prepared in the absence of simulation. There is no control group. Designing an internal control would be problematic because implementing large-scale interventions changes the underlying conditions.32 Designing an external control would require sufficient equipoise to randomize to transition preparation without simulation. The results may be confounded by differences between complex medical systems. Future research is needed to determine the extent of simulation most beneficial for different organizational structures. Other study limitations include low survey completion rates and a loosely defined process improvement strategy. The workflow committee turnaround period could be tightened within a formalized Plan-Do-Study-Act construct. Cycling iterative process improvements between simulations may prove more important than the overall time before the move.


CONCLUSIONS

Immersive in situ simulation augments healthcare transitions. Staffing and equipping an inpatient hospital setting before such a transition require a serious commitment of time and resources. The investment pays off in system readiness, staff preparedness, and presumably patient safety on transition day. The methodology is implementable in organizations with limited prior simulation exposure. The WHBR accrued essential skills to model and orchestrate an immersive NICU. They reproduced key TESTPILOT elements: local objectives driving challenging scenarios, in situ testing with staff in usual roles immersed in a safe learning environment, and structured multidisciplinary debriefing followed by iterative improvement. System readiness improved as LSTs were corrected. Staff immersed in the new environment began to articulate their jobs before moving in. Any healthcare organization preparing for major culture change should consider the power of simulation in its preparations. For organizations still developing their simulation expertise, collaboration with experienced simulationists may improve efficiency and return on investment.


REFERENCES

1. Horbar JD. Vermont Oxford Network Survey. Unpublished data; 2014.
2. Lester BM, Hawes K, Abar B, et al. Single-family room care and neurobehavioral and medical outcomes in preterm infants. Pediatrics 2014;134:754–760.
3. Pineda RG, Neil J, Dierker D, et al. Alterations in brain structure and neurodevelopmental outcome in preterm infants hospitalized in different neonatal intensive care unit environments. J Pediatr 2014;164:52–60.e52.
4. Walsh WF, McCullough KL, White RD. Room for improvement: nurses' perceptions of providing care in a single room newborn intensive care setting. Adv Neonatal Care 2006;6:261–270.
5. Graven S. Sound and the developing infant in the NICU: conclusions and recommendations for care. J Perinatol 2000;20(8 Pt 2):88–93.
6. White R, Graven S. New concepts, science, experiences drive innovation designs: the changing face of the newborn ICU. Adv Fam Centered Care 2000;9:7–10.
7. Altimier L, Phillips R. The neonatal integrative developmental care model: advanced clinical applications of the seven core measures for neuroprotective family-centered developmental care. Newborn Infant Nurs Rev 2016;16:230–244.
8. Domanico R, Davis D, Coleman F, Davis B. Documenting the NICU design dilemma: comparative patient progress in open-ward and single family room units. J Perinatol 2011;31(4):281–288.
9. Berwick DM. A user's manual for the IOM's ‘Quality Chasm’ report. Health Aff (Millwood) 2002;21:80–90.
10. Small S. Simulation applications for human factors and systems evaluation. Anesthesiol Clin 2007;25:237–259.
11. Deutsch ES, Dong Y, Halamek LP, Rosen MA, Taekman JM, Rice J. Leveraging Health Care Simulation Technology for human factors research: closing the gap between lab and bedside. Hum Factors 2016;58(7):1082–1095.
12. Preston P, Lopez C, Corbett N. How to integrate findings from simulation exercises to improve Obstetrics Care in the Institution. Semin Perinatol 2011;35:84–88.
13. Gardner AK, Ahmed RA, George RL, Frey JA. In situ simulation to assess workplace attitudes and effectiveness in a new facility. Simul Healthc 2013;8:351–358.
14. Bender J, Shields R, Kennally K. Transportable enhanced simulation technologies for pre-implementation limited operations testing: neonatal intensive care unit. Simul Healthc 2011;4:204–212.
15. Eagle A. A hospital reborn: replacement facility gives organization a new beginning. Health Facilities Management. November 1, 2012. Available at: http://www.hfmmagazine.com/articles/211-a-hospital-reborn. Accessed October 6, 2016.
16. Kattwinkel J. Textbook of Neonatal Resuscitation. 6th ed. Elk Grove Village, IL: American Academy of Pediatrics and American Heart Association; 2011.
17. Issenberg BS, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27:10–28.
18. Downing SM. Validity: on meaningful interpretation of assessment data. Med Educ 2003;37:830–837.
19. Massachusetts General Hospital. Professional Practice Environment Scale (PPE). MGH: Center for Clinical and Professional Development, Boston; 1999.
20. Maslach C, Jackson SE. MBI Human Services Survey. Palo Alto, CA: Consulting Psychologists Press, Inc.; 1986.
21. Shields L, Tanner A. Pilot study of a tool to investigate perceptions of family-centered care in different care settings. Pediatr Nurs 2004;30(3):189–197.
22. French SE, Lenton R, Walters V, Eyles J. An empirical evaluation of an expanded nursing stress scale. J Nurs Meas 2000;8(2):161–178.
23. Oreg S. Resistance to change: developing an individual differences measure. J Appl Psychol 2003;88(4):680–693.
24. Oreg S, Sverdlik N. Ambivalence toward imposed change: the conflict between dispositional resistance to change and the orientation toward the change agent. J Appl Psychol 2011;96(2):337–349.
25. Smith T, Schoenbeck K, Clayton S. Staff perceptions of work quality of a neonatal intensive care unit before and after transition from an open bay to a private room design. Work 2009;33(2):211–227.
26. Jean P. Baby in near miss at new hospital. The Canberra Times. October 5, 2012. Available at: http://www.canberratimes.com.au/act-news/baby-in-near-miss-at-new-hospital-20121004-272oz.html. Accessed October 16, 2016.
27. Shoemaker LK, Kazley AS, White A. Making the case for evidence-based design in healthcare: a descriptive case study of organizational decision making. HERD 2010;4(1):56–88.
28. Manojlovich M, Saint S, Forman J, Fletcher CE, Keith R, Krein S. Developing and testing a tool to measure nurse/physician communication in the intensive care unit. J Patient Saf 2011;7(2):80–84.
29. Harris D, Westfall J, Fernald D. Mixed methods analysis of medical error event reports: a report from the ASIPS collaborative. In: Henriksen K, Battles J, Marks E, Lewin D, eds. Advances in Patient Safety: From Research to Implementation. Rockville, MD: Agency for Healthcare Research and Quality; 2005.
30. Rikli J, Huizinga B, Schafer D, Atwater A, Coker K, Sikora C. Implementation of an electronic documentation system using microsystem and quality improvement concepts. Adv Neonatal Care 2009;9(2):53–60.
31. Milford C, Zapalo B, Davis G. Transition to an individual-room NICU design: process and outcome measures. Neonatal Netw 2008;27(5):299–305.
32. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review—a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy 2005;10(Suppl 1):21–34.

APPENDIX 1

Survey Instrument



APPENDIX 2

System Readiness for 24 Processes



APPENDIX 3

Staff Preparedness for 24 Processes


Keywords:

Macrosystem; simulation; immersion; patient safety; NICU; neonatal; intensive care; SFR; single family room; transition; preparation; human factors

© 2018 Society for Simulation in Healthcare