Our institution recently opened a satellite emergency department (SED) staffed by teams that include nurses, respiratory therapists, paramedics, and pediatric emergency physicians. No residents, fellows, or subspecialists are available in this facility, a major difference compared with our main hospital academic emergency department (ED). In addition, at the SED, only one emergency medicine-trained physician is present at any time. This mandates a different team model (one physician, fewer nurses, and no pharmacist) in the SED resuscitation bay compared with the main ED.
The importance of developing optimal health care teams cannot be overstated. The Institute of Medicine report To Err is Human stated, “Most care delivered today is done by teams of people, yet training often remains focused on individual responsibilities leaving practitioners inadequately prepared to enter complex settings.”1 Qualitative human factors methods have been effective in evaluating the technical and nontechnical skills of medical care teams. Moorthy et al2 used human factors methods to evaluate nontechnical skills among surgical (physician) trainees within formed surgical teams, including piloting the use of a nontechnical skills assessment scale. The authors showed no differences between trainees at different experience levels except in leadership; however, they did not assess the nonphysicians, nor did they attempt to design and assess a new team structure, as we aimed to do in this project.
Providers in the SED practice in an environment that differs in physical arrangement, has fewer resources, and is both a receiving facility for ambulances and a transporting facility to definitive care. In addition, the satellite facility has a low-acuity observation unit where pediatric patients are admitted if their management is expected to require <23 hours of care. A hospitalist manages these children; however, as he/she is not always in house, patients admitted to the observation unit who acutely worsen and require resuscitation are brought to the SED. This again is substantially different from the main hospital.
A specific concern in a new facility is the existence of unrecognized or latent threats to safety that could affect actual patients once the facility opens, such as missing equipment, inefficient setup, or insufficient space for procedures.3 This concern was heightened by the new team structure and the differences in setting described above. Latent safety threats (LSTs) have been defined as system-based threats to patient safety that can materialize at any time and are unrecognized by healthcare providers.4 “Aside from their use as a training tool, in situ simulation-based teaching sessions have the ability to identify potential systems issues and equipment problems that are likely to arise during a genuine emergency.”5 One published report and an earlier simulation-based investigation within our main ED demonstrated that simulation can identify latent hazards in new and established ED settings, respectively.3 Blike et al6 used simulation and video review to identify latent hazards associated with pediatric sedation. Villamaria et al7 used simulation to orient code teams to a new facility and also identified potential safety concerns during debriefing sessions. A multidisciplinary group used simulation to identify hazards before implementation of an intraoperative radiation protocol.8
When new clinical systems, such as computer systems or resident duty hour regulations, are implemented, the potential for unintended consequences should be considered.9 In human factors and systems engineering, unintended consequences reflect the fact that system modifications, although intended to be beneficial, may also result in negative, unanticipated outcomes. For example, Han et al10 demonstrated that implementation of a computerized physician order entry system in a pediatric intensive care unit resulted in an unexpected increase in mortality, mostly attributed to changed clinician workflow patterns, even when controlling for patient acuity.
In this pilot project, our objective was to define optimal health care team roles and responsibilities, identify LSTs within the new environment, and screen for unintended consequences of proposed solutions. Our hypotheses were as follows: (1) simulation-based evaluation can help define and optimize team composition, responsibilities, and scope of practice; and (2) in situ simulation can uncover latent threats to patient safety that may exist in the new clinical environment.
This study was a prospective pilot investigation using laboratory (phase 1) and in situ (phase 2) simulations totaling 24 critical patient scenarios (Table 1) conducted over four sessions before the new facility opening. Scenarios were based on predicted SED cases. As this ED is not a trauma center, a higher percentage of medical scenarios were developed. The scenarios used in the laboratory and in situ settings incorporated similar levels of medical complexity, including need for rapid assessment, performance of at least one procedure, and need to administer multiple medications. An example was a child who sustained blunt trauma; suffered pelvic fractures resulting in hemorrhagic shock; and required pelvis stabilization, intraosseous (IO) and/or central venous access (as we did not allow peripheral access to be successful), fluid resuscitation, and early transport to a trauma center. One-third of the scenarios were run as multiple-patient simulations, meaning one scenario was ongoing when a second patient presented to the resuscitation bay, requiring the team to divide itself and/or recruit more help.
The two 4-hour laboratory sessions (phase 1) were separated by 10 days, allowing SED leadership to make changes based on recommendations from the first session. The laboratory sessions used four simulated scenarios, each followed immediately by video-assisted debriefing. Two 8-hour in situ sessions (phase 2) were conducted on site 1 month later and were separated by 3 weeks, providing leadership time to react to the initial in situ session recommendations. The in situ sessions used eight simulated scenarios, each followed by video-assisted debriefing. The second round of simulations within each phase focused on evaluating the quality of initial solutions and identifying any unintended consequences that occurred as a result of applying the developed solutions.
Participants were health care providers scheduled to work at the new hospital in the SED and observation unit. Providers from the observation unit were involved in the multiple-patient simulations in the resuscitation bay, in a code in the observation unit, and in an adult ventricular fibrillation arrest scenario in the hallway.
As part of the facility's orientation process, simulation training was required for all staff. Participation occurred during scheduled work hours and before hospital opening. There was no patient involvement or risk. This protocol was approved by our Institutional Review Board with a waiver of informed consent. Participants were asked to sign video and confidentiality consents.
Phase 1—Simulation Laboratory
Table 2 summarizes the study, outlining the elements investigated and the tools used at each stage of the methods.
The goal of the laboratory sessions was to define strengths and weaknesses of individual roles within the health care team, to characterize provider responsibilities, and to define scope of practice of the health care providers. The laboratory was configured as an ED resuscitation bay. After each scenario, participants and ED leadership were debriefed by trained facilitators (G.L.G. and M.D.P.) using video recordings of the simulation. Debriefing was performed using a standardized format that reviewed the positives and negatives of performance (including errors in clinical proficiency), discussed teamwork concepts, and identified LSTs. A human factors expert was present during all simulations to evaluate, provide feedback, and help develop solutions. Debriefing feedback was entered into a password-protected database at the completion of each session.
The National Aeronautics and Space Administration-Task Load Index (NASA-TLX) was completed by participants to assess perceived workload after each simulation and before debriefing. The NASA-TLX assesses workload on six separate domains: mental demand, physical demand, temporal demand, performance, effort, and frustration.11 The first three domains relate to the demands imposed on the participant, whereas the other three focus on the interaction of the subject with the task.12 Each domain scale has 21 gradations, grouped into low, medium, and high regions, and is scored from 0 (very low) to 100 (very high). NASA-TLX scores in the 30s and below are considered low workload; moderate workload ranges between approximately 40 and 60; and scores >60 signify high workload. The NASA-TLX is regarded as a strong tool for reporting perceptions of workload and has been used to evaluate workload among anesthetists and cardiovascular critical care nurses.13–15
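To make the scoring arithmetic concrete, the following Python sketch computes a raw (unweighted) NASA-TLX score as the mean of the six domain ratings and applies the workload bands cited above. The domain names match the instrument, but the example ratings are hypothetical, not study data.

```python
# Illustrative sketch of raw (unweighted) NASA-TLX scoring.
# The example ratings below are hypothetical, not study data.

DOMAINS = ["mental", "physical", "temporal", "performance", "effort", "frustration"]

def raw_tlx(ratings):
    """Average the six 0-100 domain ratings into an overall raw score."""
    if set(ratings) != set(DOMAINS):
        raise ValueError("expected one rating per NASA-TLX domain")
    return sum(ratings[d] for d in DOMAINS) / len(DOMAINS)

def workload_band(score):
    """Classify a raw score using the approximate bands cited in the text."""
    if score < 40:
        return "low"
    if score <= 60:
        return "moderate"
    return "high"

example = {"mental": 75, "physical": 40, "temporal": 70,
           "performance": 55, "effort": 80, "frustration": 60}
score = raw_tlx(example)
print(round(score, 1), workload_band(score))  # 63.3 high
```

Raw scoring simply averages the domains; the weighted variant (pairwise comparisons) was not needed here, consistent with the high raw-weighted correlations cited in the analysis section.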
A video of each simulation was scored for team behaviors by two (of four) trained reviewers, blinded to each other's results, using the Mayo High Performance Teamwork Scale (MHPTS), which was developed within simulation-based training.16 Each reviewer was trained in a three-step process: detailed review of the original publication, a didactic session reviewing each scored behavior, and group video review with discussion of scoring. The MHPTS consists of 16 items that focus on crew resource management (CRM) training, with each item eligible for a score of 0, 1, or 2, yielding possible totals from 0 to 32.16 Within this scale, some behaviors are more commonplace, and thus easier to observe, than others. The more difficult behaviors require a team member to recognize disagreements, conflicts, or potential errors and act on them; these behaviors are seen less frequently and therefore indicate a higher level of teamwork.16 Higher scores generally indicate better adherence to CRM principles and better team performance.
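As an illustration of the scale arithmetic described above (16 items, each scored 0, 1, or 2, for a total of 0 to 32), a minimal Python sketch follows. The item ratings are hypothetical, not study data, and item wording is omitted.

```python
# Illustrative MHPTS total: 16 items, each scored 0, 1, or 2 (range 0-32).
# The item ratings below are hypothetical, not study data.

def mhpts_total(item_scores):
    """Sum the 16 MHPTS item scores into a team total."""
    if len(item_scores) != 16:
        raise ValueError("MHPTS has 16 items")
    if any(s not in (0, 1, 2) for s in item_scores):
        raise ValueError("each item is scored 0, 1, or 2")
    return sum(item_scores)

ratings = [2, 1, 2, 0, 1, 2, 1, 1, 2, 0, 1, 1, 2, 1, 0, 1]
print(mhpts_total(ratings))  # 18
```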
Phase 2—In Situ
In situ simulations were performed in the actual care environment using the personnel, equipment, medications, and resources intended for clinical care. The majority were conducted in the resuscitation bay of the SED. After each simulation, debriefing and documentation of data were performed as described in phase 1. The goals of the in situ phase were to identify LSTs and to screen for any unintended consequences of the changes implemented after phase 1.
The NASA-TLX was completed by participants, and videotapes of the simulations were scored for team behaviors using the MHPTS, as described in phase 1. Coding of data was used to classify LSTs and unintended consequences identified during debriefing sessions.
Follow-up email surveys were sent to each provider 1 week after completion of training. Participants rated the value of simulation training, realism of simulations, effect on confidence, impact on readiness, and overall experience using a 5-point Likert scale. Open-ended questions were used to allow participants to give general feedback, identify LSTs not discussed during debriefing sessions, inquire about benefit of future simulations, and provide suggestions for improvement. Survey answers were recorded electronically by the survey instrument and added to the protected database.
Qualitative debriefing feedback was pulled from the database by category, discussed in detail by study personnel, and used to structure the formal reports given to ED leadership after each session. Recommendations on team composition, roles and responsibilities, and scope of practice were determined by a combination of the debriefing feedback and NASA-TLX workload scores.
Descriptive statistics were computed for the NASA-TLX and MHPTS. Raw NASA-TLX scores were used, as high correlations have been shown between weighted and raw (unweighted) scores.17,18 We compared subjective workload between teams and among team roles using the NASA-TLX questionnaire. Mean workload scores, calculated for each role, served as the dependent variable in an analysis of variance, with team role as the independent variable. Correlations were examined between the NASA-TLX means of the team roles and the qualitative feedback from the simulations.
The team was the unit of analysis for the MHPTS. To test for significant changes in teamwork behaviors over the course of the four sessions, the mean MHPTS score for each session was used to represent an overall team score, and session scores were compared using a Student t test. Agreement between the independent reviewers' MHPTS scores was assessed with a Pearson correlation; a t test assessed whether the mean difference between reviewers' scores equaled 0 (mean = 2.0, SE = 1.76, P = 0.29), and a z test assessed whether the slope of the regression line equaled 1.
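For readers unfamiliar with the inter-rater statistic, the Pearson correlation used to compare the two reviewers' scores can be sketched in a few lines of Python. The paired MHPTS scores below are hypothetical, not the study's data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between paired scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical paired MHPTS totals from two independent reviewers.
rater_a = [18, 20, 15, 12, 22, 17, 14, 19]
rater_b = [16, 21, 13, 14, 20, 18, 12, 17]
print(round(pearson_r(rater_a, rater_b), 2))
```

A coefficient near 0.6, as reported in the results, is conventionally read as moderate agreement; values near 1.0 would indicate near-perfect agreement between reviewers.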
Eighty-one health care providers participated in the study. Nurses, paramedics, and physicians made up the majority; however, all disciplines participated (Fig. 1). The leadership group from the SED, including the physician clinical director, nursing director, and registration manager, were also present during the simulations and debriefing sessions. Leadership were the only staff present during multiple sessions, and the physician director was the only subject to participate in multiple training sessions.
Simulation laboratory debriefing identified needs to reexamine the scope of practice around initial patient assessment, IO placement, and endotracheal tube placement (Table 2). Early in the study, the lone physician repeatedly tried to assess the patient, lead the team, and perform procedures simultaneously. In debriefing sessions and reports to leadership, physicians were encouraged to follow CRM principles, allowing them to lead the team rather than becoming fixated on procedures. It was recommended that the single physician position him/herself at the foot of the patient's stretcher, allow nursing and respiratory therapy to perform the primary survey, and maintain situational awareness. To prevent task fixation resulting from physicians performing multiple procedures, we recommended that paramedics and/or nurses place IO needles and that respiratory therapists perform endotracheal intubations that the team classified as uncomplicated airways. The development of early transport protocols was deemed vital, as no subspecialists are available to deliver definitive care.
NASA-TLX means for simulation laboratory and in situ environments showed mental and effort workloads in the high range but with no statistical differences between settings (Table 3). When simultaneous patient encounters occurred, mean workloads increased in the majority of domains, often into the high range (Table 4). Domain mean scores by role are illustrated in Figure 2. The overall mean for the medication nurse was 69 (SD 17.1), compared with the remaining team role means, which ranged from 45.2 to 57.7 (P = 0.088) (Fig. 3). During debriefings, medication nurses qualitatively described frustrations with the number of medications ordered and the time pressures within their role, which correlated with the NASA-TLX results. Participants voiced the need to develop a system around delivery of medications, with the inclusion of pharmacy to ensure independent double checks (IDCs) of critical medications.
MHPTS raw scores are displayed in Figure 4, plotted as a distribution of scores grouped in 4-point ranges. The assessment of reliability showed a Pearson correlation of 0.59, indicating moderate correlation between reviewers. MHPTS means were calculated for each phase of training. Simulation laboratory teamwork scores showed a mean of 18.1 for the first session and 18.9 for the second session (P = 0.68). In situ teamwork scores showed a mean of 12.3 for the first session and 15.0 for the second session (P = 0.25). The overall laboratory mean was 18.5 (SD 2.31) compared with an overall in situ mean of 13.7 (SD 4.40), indicating worse teamwork during in situ simulation (P = 0.008). In addition, MHPTS means were calculated based on whether a single simulation or simultaneous multiple simulations occurred. When two simultaneous scenarios were conducted and the formed team needed to redefine its roles and responsibilities dynamically, the mean teamwork score was 13.8, compared with 15.8 for single simulations (P = 0.37).
In situ debriefing sessions identified 37 LSTs, the majority involving equipment or resources (Table 5). The most significant threats present after both in situ sessions were lack of defibrillators, inadequate oxygen flow to support bag-mask ventilation for a concurrent second patient resuscitation, and persistent use of one medication station when presented with two critical patients. These were addressed by leadership (Table 2); however, 5 (14%) of the 37 could not be corrected before the facility opened. After the initial in situ session, the resuscitation room setup and responsibilities for supplies were altered. These modifications, although intended to assist the team with patient care, resulted in negative and unintended consequences involving room access, oxygen accessibility, and delivery of vital products (Table 2).
Forty-six errors in clinical proficiency were identified during the four sessions (Table 6). The two categories with the highest number of errors were knowledge gaps—procedure performed incorrectly, and systems/resources overwhelmed—necessary action omitted. More clinical errors were identified during in situ simulation than in the laboratory. Of interest, certain errors occurred during multiple scenarios in the same session despite discussion of the particular error during the debriefings between scenarios. In particular, failure to attempt or perform IDCs of critical medications, failure to perform cardiopulmonary resuscitation correctly, and failure to address apnea or perform bag-valve-mask ventilation occurred repetitively (Table 2).
Only 14 (17%) of the participants responded to the survey (despite the anonymity of the instrument), limiting our ability to draw conclusions due to response bias. On the open-ended questions, nurses repeatedly commented on the difficulties faced by the medication nurse. Specific comments included “We have repeatedly expressed our concerns of only have [sic] one person doing medications,” “Need for individual medication nurse and cart for each patient,” and “As a medication nurse, I think it is a safety issue not having a double check, drawing up meds by yourself and calling pharmacy on the phone … too much for one person to do … too much room for error.”
Although it may seem intuitive that certain roles on resuscitation teams are more heavily tasked, the combination of video-assisted debriefing and the NASA-TLX scales provided clear evidence of the need to reconfigure specific responsibilities. The majority of simulation laboratory findings and recommendations surrounded scope of practice. Availability of only one physician for resuscitations was novel for the majority of providers, and CRM fundamentals were compromised when the physician became task fixated performing procedures. CRM training develops situation awareness, communication skills, anticipation of error chains, and error containment and management strategies.19,20 Realigning procedure responsibilities to nonphysicians made the most sense to address these issues in this setting. In situ simulation allowed testing of this realignment in the actual care environment and provided deliberate practice of communication between providers in assignment of “new” tasks.
The other major finding regarding team composition was the persistent high scores medication nurses reported on the NASA-TLX instrument and their inability to perform IDCs on high-risk medications, especially when two patients presented simultaneously. The identification of LSTs around the use of one medication bench and anonymous comments from providers in the follow-up survey supported this finding. Ensuring resources to have two providers double check high-risk medications was one of our recommendations and is supported by the Joint Commission. Research shows that people find about 95% of all mistakes when checking the work of others.21
The MHPTS was designed to be brief and easy to understand, allowing it to be used by naive participants (trainees) to rate key behaviors of high-performance teams.16 Although not exhaustive in describing all possible behaviors, the MHPTS items provide a representative sample of the range of key teamwork behaviors. The scenarios used in the simulation laboratory and in situ settings incorporated similar levels of medical complexity and demanded communication among multiple providers. Despite similar scenarios, there was no improvement in teamwork over the course of the training. Conversely, teamwork was worse in the in situ setting. This is the first investigation to compare teamwork between these settings with the MHPTS. One possible reason for deviation from CRM principles in the in situ environment is increased provider anxiety or a greater “suspension of disbelief.” Missing equipment, which was not an intended part of scenario design, may have added stress and contributed to lower levels of teamwork. Finally, the evolution of roles and responsibilities in response to previous session feedback may have affected teamwork as providers adjusted to new expectations.
In our study, trained reviewers, not participants, rated team behaviors after videotape review of the simulations. We are the first to report application of the MHPTS by trained reviewers and to show correlation, although moderate, between such reviewers. One limitation of our study is that we did not have study subjects who applied the MHPTS. We acknowledge that this would have strengthened our study and that future investigations should attempt to apply the MHPTS by both subjects and reviewers. Malec et al16 also noted that ratings from multiple perspectives may be optimal. Those authors felt that use of naive raters was a potential limitation of their study and suggested that “reliability and validity would be expected to improve on any measure by using well-trained, expert raters.”
The identification of LSTs before the use of this space by patients is an important benefit. Detection allowed correction before the threats could reach and harm patients. SED leadership was able to correct 86% of identified threats before the facility opened. Although it was believed that defibrillators had been ordered and delivered to the new facility, they could not be located within the new building. The lack of this essential piece of resuscitation equipment put anyone visiting or seeking care at the facility at risk. The missing defibrillators were particularly concerning because the building was scheduled to open <10 days after completion of training. Once it was identified that the defibrillators had not been delivered to the patient care areas, they were located and placed for use throughout the hospital.
It was not until the resuscitation room was used for two simultaneous simulation patients that inadequate oxygen flow was discovered. The room was designed with one tower containing suction, medical gases, and electrical outlets. The tower was designed to serve multiple patients, with all of the needed equipment on each side. However, the oxygen flow was not adequate to support bag-mask ventilation of more than one patient at a time. To address the limited oxygen flow, remodeling was performed to establish independent oxygen flow for each patient's bed space.
The presence of one medication cart for multiple patients was particularly concerning because debriefing and NASA-TLX scores demonstrated that the medication nurse trended toward the greatest workload during resuscitations. Preparing medications for two patients in one small space and from the same cart puts patients at risk of receiving the wrong medication and/or wrong dose. As pediatric medication dosing is weight-based, the potential dosing error can be significant if both an infant and adolescent patient receive care simultaneously. In response to this LST, portable medication carts were developed, which could be taken directly to the patient's bed space. Preparing medications closer to the team allows the medication nurse to be a part of the resuscitation, better anticipate needs, and eliminate potential for the patient receiving medications intended for another patient.
Although clinical proficiency was not the primary focus of this project, a relatively large number of errors were identified. It is interesting to note the large number of recurrent errors despite feedback that addressed these issues. Remarkably, errors in the performance of cardiopulmonary resuscitation and in adequate bag-valve-mask ventilation accounted for the largest single types of error. Healthcare providers are often assumed to be competent in these basic skills, and in the chaos of an actual clinical crisis, deficiencies in the performance of these skills may not be noticed.
Failure to perform IDCs of critical medications was the other large category of clinical errors. This failure occurred despite multiple reminders of the importance and need for IDCs, particularly in a stressful resuscitation. When the nurses were questioned about the omission of this action, it was repeatedly stated that there was no one available who could assist with the IDC. This demonstrates that despite the recognized need for the IDC and the ongoing reminders during debriefing of the expectation to perform IDCs, the lack of resources made it unlikely that this would occur without changes in the system. A number of approaches were subsequently trialed, including using the pharmacist and adding a nurse to the team.
In this project, simulation was used to identify unintended consequences that developed as a result of the suggested initial solutions. Simulation provided an opportunity to “fine tune” the environment and role responsibilities before any actual patient interaction. It allowed care providers and leadership to test solutions of perceived issues or inefficiencies within a new system without compromising patient care. From these findings, optimal room layout and personnel assignments were defined.
SED leadership valued the outcomes of this project and has continued monthly in situ training for all providers, including physicians, using 2-hour sessions. Given the distance from our simulation center to the SED, we have begun using web-based software to provide facilitation and debriefing of these simulations. This allows us to send only one member of our staff to their ED, decreasing costs and providing greater scheduling flexibility. Future investigation is needed regarding the efficacy of on-site versus distance-based training. In addition, although we have used simulation to identify LSTs within our academic center, we have not applied the NASA-TLX to those care teams, either within simulation-based training or after actual resuscitations. We could assume that medication nurses face similar workloads in many clinical units; however, future research in an academic setting may result in different outcomes. Future areas of investigation could include the inclusion of simulation earlier in the process of designing and building new facilities and systems. At the point that we began this project, much of the structure was fixed. We anticipate that even greater benefits would be possible if simulation techniques were included earlier in the design and development process.
In conclusion, simulation provides a method to determine provider workload, refine team responsibilities, assess team behaviors, and identify LSTs in the clinical environment. This project provides a template for evaluating team configurations, scope of practice, and clinical settings before patients are exposed to the risks of a new system and environment. Although the use of human factors methodology is still novel in healthcare, this type of simulation-based assessment and reassessment can become part of a standard approach to the implementation of new clinical teams, units, and facilities.
The authors acknowledge the contributions of other members of the Center for Simulation and Research and Ami Becker to this project.
1. To Err is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
2. Moorthy K, Munz Y, Adams S, Pandey V, Darzi A. A human factors analysis of technical and team skills among surgical trainees during procedural simulations in a simulated operating theatre. Ann Surg
3. Kobayashi L, Shapiro MJ, Sucov A, et al. Portable advanced medical simulation for new emergency department testing and orientation. Acad Emerg Med
4. Alfredsdottir H, Bjornsdottir K. Nursing and patient safety in the operating room. J Adv Nurs
5. Nunnink L, Welsh AM, Abbey M, Buschel C. In situ simulation-based team training for post-cardiac surgical emergency chest reopen in the intensive care unit. Anaesth Intensive Care
6. Blike GT, Christoffersen K, Cravero JP, Andeweg SK, Jensen J. A method for measuring system safety and latent errors associated with pediatric procedural sedation. Anesth Analg
7. Villamaria FJ, Pliego JF, Wehbe-Janek H, et al. Using simulation to orient code blue teams to a new hospital facility. Simul Healthc
8. Rodriguez-Paz JM, Mark LJ, Herzer KR, et al. A novel process for introducing a new intraoperative program: a multidisciplinary paradigm for mitigating hazards and improving patient safety. Anesth Analg
9. Wachter RM, Shojania KG. The unintended consequences of measuring quality on the quality of medical care. N Engl J Med
10. Han YY, Carcillo JA, Venkataraman ST, et al. Unexpected increased mortality after implementation of a commercially sold computerized physician order entry system. Pediatrics
11. Hart SG, Staveland LE. Development of the NASA-TLX (Task Load Index): results of empirical and theoretical research. In: Hancock PA, Meshkati N, eds. Human Mental Workload. Amsterdam: Elsevier; 1988:139–183.
12. Cao A, Chintamani KK, Pandya AK, Ellis RD. NASA TLX: software for assessing mental workload. Behav Res Methods
13. Becker AB, Warm JS, Dember WN, Hancock PA. Effects of jet engine noise and performance feedback on perceived workload in a monitoring task. Int J Aviat Psychol
14. Weinger MB, Reddy SB, Slagle JM. Multiple measures of anesthesia workload during teaching and nonteaching cases. Anesth Analg
15. Gregg A. Relationship among subjective mental workload, experience, and education of cardiovascular critical care registered nurses. Doctoral dissertation, University of Alabama at Birmingham; 1993:178.
16. Malec JF, Torsher LC, Dunn WF, et al. The Mayo High Performance Teamwork Scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc
17. Byers JC, Bittner AC, Hill SG. Traditional and raw task load index (TLX) correlations: are paired comparisons necessary? In: Mital A, ed. Advances in Industrial Ergonomics & Safety. London: Taylor & Francis; 1989:481–485.
18. Eggemeier FT. Properties of workload assessment techniques. In: Hancock PA, Meshkati N, eds. Human Mental Workload. Amsterdam: Elsevier; 1988:41–62.
19. Helmreich RL, Wilhelm JA, Klinect JR, Merritt AC. Culture, error, and crew resource management. In: Salas E, Bowers CA, Edens E, eds. Improving Teamwork in Organizations. Hillsdale, NJ: Erlbaum; 2001:305–331.
20. Reason J. Human Error. New York, NY: Cambridge University Press; 1990.
21. Grasha AF, Reilley S, Schell KL, Tranum D, Filburn J. Process and Delayed Verification Errors in Community Pharmacy: Implications for Improving Accuracy and Patient Safety. Technical Report 112101. Cincinnati, OH: Cognitive-Systems Performance Laboratory; 2001.