
Empirical Investigations

Teamwork Skills in Actual, In Situ, and In-Center Pediatric Emergencies

Performance Levels Across Settings and Perceptions of Comparative Educational Impact

Couto, Thomaz Bittencourt MD, CHSE; Kerrey, Benjamin T. MD, MS; Taylor, Regina G. MA, CCRP; FitzGerald, Michael PhD; Geis, Gary L. MD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: April 2015 - Volume 10 - Issue 2 - p 76-84
doi: 10.1097/SIH.0000000000000081

INTRODUCTION

For providers in any acute care setting, there are numerous barriers to achieving and maintaining team-level expertise in the resuscitation of critically ill or injured patients.1 An example is the pediatric setting, where perhaps the greatest obstacle is the relative rarity of resuscitations.2–4 Because of the infrequency of events, care providers risk losing, or never obtaining, the skills associated with excellent care.3,5 In addition, given the nature of shift work in the emergency department (ED), especially in an academic setting, it is uncommon for the members of a resuscitation team to be consistent. Given these barriers, there is a clear need to establish effective methods of teamwork training for providers responding to emergencies. Simulation provides an ideal opportunity for the deliberate practice of critical, nontechnical skills such as teamwork.6,7 Through simulation, care teams can be exposed to rare, high-risk patient encounters8 and can practice teamwork behaviors.

Traditionally, simulation-based teamwork training is conducted in a laboratory (in-center) setting. This approach has several advantages: providers are removed from clinical duties, scenarios and debriefing sessions can be longer, and interference with patient care is minimized. Disadvantages include an unfamiliar environment, dissimilar equipment, and missing team member roles due to scheduling constraints.9 As an alternative, in situ simulations are conducted in the actual care environment, which allows a care team to train in their typical roles in a familiar setting and to use the equipment, resources, and system processes involved in actual patient care.10–13 In situ simulations can also minimize the space requirements and travel time incurred during in-center training.14 However, in situ simulation has several drawbacks that make sessions logistically difficult to organize and schedule: limited room availability and decreased provider staffing due to call offs, the potential to interrupt patient care, frequent interruptions of the simulation itself, higher cancellation rates (empty patient areas may not be available when needed, and simulations may need to be aborted to free the site for patient care), the cost of real supplies consumed during simulation, the relative difficulty of arranging audiovisual recording, less time for didactic teaching and debriefing, and the difficulty of reaching providers on all shifts.15,16 In addition, as noted by Raemer,14 in situ simulation may amplify the safety hazards of simulation itself, such as maintaining control of simulated medications and equipment; mitigation requires labeling and securing simulation supplies and developing consistent policies and procedures for in situ training.

In the health care setting, simulation-based training has become a primary strategy to improve teamwork skills.16–19 Teamwork is an essential component of safe care in an emergent situation; in one study of closed ED claims, teamwork failures accounted for more than 40% of cases.20 In-center training has been shown to improve provider knowledge of crew resource management principles, safety attitudes, recognition of latent threats to patient safety, and patient safety within the ED of our institution.17 In situ training in the ED of our institution has been shown to be a valuable strategy for continuing teamwork training and has demonstrated a higher rate of identified latent threats than in-center simulation.16 However, an empirical comparison of in-center versus in situ teamwork training has not been performed. In our experience, when comparing team performance before opening a new satellite hospital, ED providers demonstrated higher teamwork scores during in-center than in situ simulations; however, these teamwork scores were a secondary outcome, and the study was not powered to show a difference between environments.21

As educators, we hope that the safe and realistic environment provided by simulation-based training accelerates the development of teamwork skills and creates consistency in their application. Once developed, it is imperative that these learned behaviors be demonstrated and further refined during actual patient care. Thus, an important question is how teamwork skills developed through in-center or in situ simulation training translate to actual practice. Several studies have examined teamwork behaviors during the care of actual trauma patients, but these did not compare teamwork between simulated and actual emergencies; typically, simulation training is the intervention, and teamwork is assessed during the care of actual patients.22–24 No studies have directly compared team behaviors across all 3 settings. Within our ED, team members have roles and responsibilities assigned by the pager they carry on each shift. This differs from in-center simulations, where roles are often assigned just before or during a scenario. In addition, there are potential environment-specific effects on teamwork, such as familiarity, the location of equipment and medications, and higher expectations when caring for actual patients. If simulations produce a reasonable representation of actual provider performance, and if in situ simulation in particular better represents reality, one might expect a high correlation between scores in simulated and actual patient care. We therefore hypothesized that, within a pediatric ED where simulation-based teamwork training is an existing part of the culture, teamwork behaviors measured by a valid scoring tool would be higher for actual emergencies and in situ simulations than for in-center simulations. Our objectives were to establish baseline teamwork behaviors among pediatric emergency medicine providers during actual and simulated emergencies; to evaluate differences in teamwork behaviors among in situ simulations, in-center simulations, and actual emergencies; and to understand learners’ perspectives on the strengths and weaknesses of in situ and in-center simulation-based teamwork training.

METHODS

This study was a retrospective, video-based comparison of ED provider teamwork during actual pediatric emergencies, in-center simulations, and in situ simulations. In addition, a cross-sectional survey of providers was performed. Our institutional review board approved this investigation (study ID 2013–6275) before initiation.

The study population was health care providers who work in our ED and care for critically ill and injured children in the ED resuscitation area. Providers included physicians (faculty and fellows), nurses, respiratory therapists, paramedics, patient care assistants, and pharmacists, all of whom are Pediatric Advanced Life Support certified. Other ED provider disciplines, such as child life specialists, chaplains, and registration personnel, were excluded from assessment because they do not provide bedside care. Resident physicians were excluded from the survey-based assessment because they were not involved in in-center ED simulation-based teamwork training and because they participate in numerous in-center (eg, resident resuscitation course, resident code conference) and in situ (eg, critical care units, floor mock codes) simulation-based training programs outside of the ED. We felt that including them in the survey aspect of this study would affect their responses when asked to compare in-center versus in situ training for ED emergencies.

This study was conducted at an academic pediatric institution. Assessment was performed in 2 settings: the simulation center and the ED. In-center simulations took place in 1 of 2 rooms set up to mirror the ED resuscitation areas, including similar spacing, equipment, monitors, and medications. Since 2005, the simulation center has enrolled ED providers in multidisciplinary teamwork training. Video-based debriefing has been standard for in-center simulations since 2001. All providers are asked to sign video consent and confidentiality agreements before training. These videos were available for review for the current study using a proprietary program (SimCapture, B-Line Medical, Washington, DC). In the simulation center, 2 standard camera views are recorded, one from the foot of the bed and one from the head; both views were available and played simultaneously during video review. In situ simulations and actual patient resuscitations occurred in the ED of a pediatric Level I trauma center with more than 85,000 visits annually, of which 4200 patients are classified as critical and are initially managed in 1 of 4 resuscitation bays. In situ simulation was implemented in the ED in 2007, and an average of 60 in situ simulations are performed annually. These sessions are unannounced, scheduled based on feasibility, and not linked to individual provider schedules. For more than 15 years, clinical care in each resuscitation bay has been video recorded for quality assurance, peer review, and education. All 4 bays in the resuscitation area are audio and video recorded continuously, 24 hours a day, providing one view from ceiling-mounted digital video cameras. All actual patient care and in situ simulations were available for review using a proprietary software program (VideoSphere, March Networks, Ottawa, Ontario, Canada). The view available during review is from above the foot of the patient, similar to that obtained in the laboratory.

During the study period, in-center simulation-based training for the ED providers was offered in 2 courses, “ED patient safety” and “trauma team.” Scenarios delivered during each course were developed based on actual patient encounters, were written using a standard design template, and were piloted before use. As part of the design template, scenarios included a flow sheet describing the expected, the alternate, and the incorrect paths; the timing of simulator changes (ie, vital signs, mental status, and respiratory pattern); and the management needs for the simulator to stabilize/improve. These details improve scenario consistency across sessions and help mitigate threats to validity for research purposes.25 The learning objectives for each scenario included technical (assessment of patient, medical decision making, procedural skills, delivery of care) and nontechnical (communication, teamwork) skills. In terms of severity, all scenarios represented critically ill or injured children who required intervention to stabilize the patient. These sessions were multidisciplinary, to mirror care teams in the ED. The exception was resident physicians, as noted earlier. Given work hour restrictions and the participation of this same resident pool in multiple other simulation-based training courses (4 other resident-based courses were offered during the study period, in addition to any in situ training they were exposed to), we were unable to have residents attend these sessions. For each session, 6 to 10 providers from multiple disciplines were scheduled and participated, depending on availability. During in-center training, each scenario generally lasted 45 to 50 minutes (15–20 minutes for simulation, 30–35 minutes for debriefing). Debriefing sessions were facilitated by trained content experts from the ED and trauma services. Discussions were based on the written objectives for each scenario using a standard debriefing template.16 Video review was available to the facilitators for all sessions but not necessarily used. Eight 4-hour ED patient safety courses and twenty-four 2-hour trauma team courses were offered annually during the study period.
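The design template described above functions as structured scenario metadata. The following sketch illustrates one way such a template might be represented; the field names are hypothetical and do not reproduce the authors' actual template.

```python
from dataclasses import dataclass
from typing import Dict, List

# Hypothetical representation of the scenario design template described above;
# field names are illustrative, not the authors' published template.
@dataclass
class ScenarioTemplate:
    title: str
    technical_objectives: List[str]      # eg, assessment, decision making, procedures
    nontechnical_objectives: List[str]   # eg, communication, teamwork
    expected_path: List[str]             # anticipated management steps
    alternate_path: List[str]            # acceptable variations
    incorrect_path: List[str]            # errors that trigger deterioration
    simulator_changes: Dict[str, str]    # time point -> change (vitals, mental status)
    stabilization_needs: List[str]       # interventions required for improvement
```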

During in situ simulations, the medical/trauma team pagers were activated and the care team reported to the resuscitation bay expecting an actual patient. The care team then participated in a 10-minute simulation, followed by a 10-minute team-level debriefing. The scenario content was directed by the ED’s medical resuscitation committee, a standing group of multidisciplinary providers also trained in simulation facilitation. Similar to the in-center simulations, scenarios were based on actual patient encounters, were written using the standard design template, included technical and nontechnical objectives, and required providers to manage critical illnesses or injuries. Debriefing was similar to the in-center simulations in both content and flow, although video review was not available to the facilitators and the length of debriefing was shorter. During the study period, we scheduled 8 in situ simulations per month across all shifts (day, evening, night) and avoided shift change. In addition, strict cancellation criteria were used, including an actual patient in the resuscitation area, more than 70 patients in the ED, more than 15 high-acuity patients in the ED, or inadequate nurse staffing as defined by the charge nurse.

The Team Emergency Assessment Measure (TEAM) tool was used for video-based evaluation of teamwork behaviors. The TEAM tool (see TEAM tool figure, Supplemental Digital Content 1, http://links.lww.com/SIH/A193, which was used for assessing TEAM scores) is a nontechnical skills instrument that measures team leadership, teamwork, and task management and was designed for both simulated and actual emergencies.26–28 The TEAM tool is composed of 11 individual items (0 to 4 scale) and a global rating (1 to 10 scale). Individual items include the following elements: leadership control, communication, cooperation and coordination, team climate, adaptability, situation awareness (perception and projection), prioritization, and clinical standards. The TEAM tool was developed in 5 stages: (1) review of the literature for teamwork instruments; (2) development of a draft instrument with an expert clinical team; (3) review by an international team of 7 independent experts for face and content validity; (4) instrument testing on 56 video-recorded events (3 actual emergencies and 53 simulations) for construct validity, consistency, concurrent validity, and reliability; and (5) a final set of ratings for feasibility on 15 simulated events, which were rated in real time (as opposed to video review). The tool had a high Total Content Validity Index (0.96) and internal consistency (0.89), and concurrent validity was demonstrated by high correlation of the individual items with the global rating item. Intraclass correlation coefficients were 0.60 for interrater reliability and 0.80 for retest reliability. The scale is a reliable and feasible observational tool, although it had not been validated on pediatric simulated or actual clinical events.26,28
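To make the tool's structure concrete, the following minimal sketch (with hypothetical field names) encodes one observation as 11 item scores plus a global rating and computes the total score used later as the primary outcome:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class TeamObservation:
    """One video's TEAM ratings (hypothetical field names)."""
    item_scores: List[int]  # 11 individual items, each rated 0-4
    global_rating: int      # single overall rating, 1-10

    def total_score(self) -> int:
        """Total TEAM score: sum of the 11 individual items (range 0-44)."""
        assert len(self.item_scores) == 11
        assert all(0 <= s <= 4 for s in self.item_scores)
        assert 1 <= self.global_rating <= 10
        return sum(self.item_scores)

# A team scoring 3 on every item totals 33 of a possible 44.
obs = TeamObservation(item_scores=[3] * 11, global_rating=7)
print(obs.total_score())  # 33
```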

We used Qualtrics (Provo, UT), a Web-based survey application, to design and administer an anonymous survey (see Teamwork in Simulation survey, Supplemental Digital Content 2, http://links.lww.com/SIH/A194, which is a print copy of the survey questions). Nonresponse and measurement errors were minimized through features such as nonlinear display logic, question flow logic, and item/response randomization. To ensure content validity, the survey went through a number of revisions based on input from content experts in simulation, survey design, and pediatric resuscitation. The survey was pretested by our medical resuscitation committee, an 11-member multidisciplinary group of ED providers, to identify and address content issues, to ensure that questions were clearly written and provided optimal response options, and to assess whether content was organized coherently and logically.

The sampling frame for the survey was all ED providers who work in the resuscitation area. Potential respondents received an invitation e-mail, which described the study’s purpose and contained a link to the survey. A weekly reminder was also e-mailed to nonresponders. The survey was administered over a 3-week period in December 2013, which overlapped with a portion of the videos selected for review (discussed later).

Videos of simulations and actual cases were consecutively screened and selected for review until the required sample size was obtained. Simulations were included if the providers were from the ED and the teamwork training was multidisciplinary; simulations were excluded if portions of the recording were inaudible. As stated earlier, each scenario developed for in-center and in situ simulations was based on actual cases of critically ill or injured children and required emergent management. Actual case videos were included if a multidisciplinary team was involved in patient care in the trauma bay, and the same recording-quality exclusion criterion was applied. One investigator (G.L.G.) wrote all the simulation scenarios. To create a uniform comparison, this investigator also screened all actual patient videos and excluded those in which the patient was not critically ill or injured, defined as not requiring either respiratory support (at minimum, application of oxygen) or intravenous access in the resuscitation area; this screening for illness and injury severity decreased the confounding effects of severity across settings. All videos were from calendar year 2013, with in-center videos covering January to December, in situ videos May to November, and actual case videos September to December. The period for in situ simulations and actual case videos was shorter for 2 reasons. First, actual emergencies occur more frequently than simulations; thus, retrospective screening identified videos meeting inclusion and exclusion criteria more quickly. Second and more importantly, in accordance with our video recording policy for actual emergencies, videos are saved on the encrypted, password-protected server for only 90 days; at the time of video abstraction, we were therefore unable to obtain actual videos from before September 2013. Recordings of in situ simulations, which do not fall under the same clinical policy, were available farther back into 2013. Once all videos were selected, they were block randomized for order of review.
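The paper states only that the selected videos were block randomized for order of review. As an illustration, the sketch below assumes blocks containing one video from each setting, shuffled within each block; the actual blocking scheme is not described.

```python
import random

def block_randomize(videos_by_setting, seed=0):
    """Return a review order: blocks of one video per setting,
    randomly ordered within each block (assumed blocking scheme)."""
    rng = random.Random(seed)
    # Shuffle each setting's pool independently.
    pools = [rng.sample(v, len(v)) for v in videos_by_setting.values()]
    order = []
    for i in range(max(len(p) for p in pools)):
        block = [p[i] for p in pools if i < len(p)]
        rng.shuffle(block)  # randomize setting order within the block
        order.extend(block)
    return order

videos = {"in_center": [f"IC{i:02d}" for i in range(44)],
          "in_situ":   [f"IS{i:02d}" for i in range(44)],
          "actual":    [f"AC{i:02d}" for i in range(44)]}
print(block_randomize(videos)[:6])  # first 2 blocks of 3
```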

To optimize the reliability of data collection, 2 investigators (T.B.C. and G.L.G.) piloted the TEAM tool using videos of actual emergencies (from 2013) and simulation training sessions (from 2012); these videos were not included in the study sample. Both investigators are attending pediatric emergency medicine physicians, Pediatric Advanced Life Support and Advanced Cardiac Life Support instructors, and Advanced Trauma Life Support certified. One investigator (G.L.G.) is also an Advanced Trauma Life Support instructor, and the other (T.B.C.) is a Certified Healthcare Simulation Educator. Each reviewer independently reviewed a video and applied the tool; the reviewers then discussed scoring together, specifically where differences occurred, to understand discrepancies. This process was repeated until the reviewers were applying the tool consistently, defined as both reviewers scoring within 1 point of each other on each item. After consistency was achieved, which required 7 videos (3 in-center simulations, 2 in situ simulations, and 2 actual cases), a primary reviewer (T.B.C.) was chosen. Because he did not work clinically in the ED and did not personally know the providers, we felt his selection decreased scoring bias. After the primary reviewer had scored 10 videos, the secondary reviewer (G.L.G.) independently scored the 10th video. The reviewers compared scores and discussed reasons for any discrepancies; these discussions were intended to help limit scoring “drift.”29 If scoring was consistent, the primary reviewer continued with the next 10 videos. If scoring was inconsistent, both reviewers scored the following video independently, and results were compared and discussed; this process was repeated until scoring was consistent. The score of the primary reviewer was reported for consistent videos, and the mean score of both reviewers was reported for inconsistent videos. This check was performed for every 10th video throughout the study.
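The consistency rule and the reported-score convention described above can be summarized in a short sketch; the item-wise averaging for inconsistent videos is our reading of "mean score of both reviewers":

```python
# "Consistent" = every co-scored item differs by at most 1 point (12 items:
# the 11 individual items plus the global rating).
def consistent(primary, secondary, tolerance=1):
    assert len(primary) == len(secondary)
    return all(abs(p - s) <= tolerance for p, s in zip(primary, secondary))

def reported_scores(primary, secondary):
    """Primary reviewer's scores if consistent; item-wise mean otherwise
    (an assumption about how the mean was taken)."""
    if consistent(primary, secondary):
        return list(primary)
    return [(p + s) / 2 for p, s in zip(primary, secondary)]

# Example: a 2-point difference on one item makes the video inconsistent.
primary   = [3, 3, 4, 3, 3, 2, 3, 3, 3, 3, 3, 7]
secondary = [3, 3, 4, 3, 3, 4, 3, 3, 3, 3, 3, 7]
print(consistent(primary, secondary))          # False
print(reported_scores(primary, secondary)[5])  # 3.0
```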

The primary outcome was the overall assessment of teamwork, measured by the sum of the 11 individual items of the TEAM tool. In the derivation study, the authors noted that the tool can be used in this fashion, rather than reporting each item individually.26 Additional outcomes were the TEAM global rating score and survey responses.

Statistical Analysis

Descriptive statistics were generated for all data. For the total TEAM and global rating scores, mean differences among the 3 settings were analyzed using 1-way between-subjects analysis of variance (ANOVA). Potential interaction effects between setting and type of team leader (faculty/third-year pediatric emergency medicine fellows vs. first-year and second-year fellows/residents) and between setting and scenario type were tested using 2-way ANOVA. For ordinal survey-based outcomes, we assessed for differences between groups using a paired t test. For nominal questions, we used the related-samples Wilcoxon signed rank test. Open-ended responses were thematically coded and reported in summary format.
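For illustration, the following sketch reproduces the two ANOVA designs on simulated data (not the study's data) using scipy and statsmodels; the variable names, noise SD, and leader assignments are assumptions:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats
from statsmodels.formula.api import ols

# Simulated (not actual) total TEAM scores: 44 videos per setting, SD assumed.
rng = np.random.default_rng(2013)
actual = rng.normal(31.2, 5, 44)
in_situ = rng.normal(31.1, 5, 44)
in_center = rng.normal(32.3, 5, 44)

# 1-way between-subjects ANOVA across the 3 settings.
f_stat, p_value = stats.f_oneway(actual, in_situ, in_center)
print(f"1-way ANOVA: F = {f_stat:.2f}, P = {p_value:.2f}")

# 2-way ANOVA testing a setting x team-leader-type interaction.
df = pd.DataFrame({
    "score": np.concatenate([actual, in_situ, in_center]),
    "setting": np.repeat(["actual", "in_situ", "in_center"], 44),
    "leader": rng.choice(["senior", "junior"], size=132),
})
model = ols("score ~ C(setting) * C(leader)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # interaction row: C(setting):C(leader)
```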

To estimate the sample size for our primary hypothesis, we made the following assumptions: an α error of 5%, a desired power of 80%, a mean total TEAM score of 27 for in-center simulations and 33 for both actual emergencies and in situ simulations, and an SD of 10 for each mean. After derivation of the TEAM tool, Cooper et al26 performed real-time, in-center testing with multidisciplinary groups across 15 simulations, with a mean score of 27.2. We based the 6-point improvement (from 27 to 33) on estimates of where providers would show improved performance in the actual environment (compared with in center) on items 1 to 11; these estimates were based on our simulation and clinical experience of teamwork among ED providers and teams. Under these assumptions, we estimated that we would need to review 44 videos from each setting to detect a 6-point difference in mean total score.
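The authors do not report the formula behind the estimate of 44 videos per setting; a standard normal-approximation two-group sample size calculation (an assumption about their method) reproduces it:

```latex
% Two-group normal-approximation sample size per group
% (alpha = 0.05 two-sided, power = 0.80, sigma = 10, Delta = 33 - 27 = 6):
n = \frac{2\,(z_{1-\alpha/2} + z_{1-\beta})^{2}\,\sigma^{2}}{\Delta^{2}}
  = \frac{2\,(1.96 + 0.84)^{2}\,(10)^{2}}{6^{2}}
  \approx 43.6 \;\Longrightarrow\; 44 \text{ videos per setting}
```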

RESULTS

Video Review

A total of 132 videos were reviewed, 44 for each setting. Table 1 shows the characteristics of the videos assessed. For all settings, a majority of physician team leaders were faculty/third-year fellows. Consistent with the required team composition for actual resuscitations, all teams were multidisciplinary and composed of at least 1 physician in the role of team leader, bedside nurses, an airway provider (respiratory therapist or physician), and medication nurses. Independent of setting, a mean of 8 providers was present. Physician trainees were present in all of the actual cases and in situ simulations but not in all of the in-center simulations; in center, trainees were either ED fellows or surgical residents (trauma team simulations). The majority of trauma resuscitations reviewed were in-center simulations; scenario types were more evenly distributed in the other settings.

TABLE 1: Characteristics of Actual Emergencies, In Situ, and In-Center Simulations

Twenty-one videos (16%) were reviewed by a second investigator. In 12 (57%), the reviewers were consistent across all items on the TEAM tool, including the last 4 coreviews. In the other 9, reviewers were inconsistent (a difference of >1 point) on a total of 10 items (1 item on each of 8 videos; 2 items on 1 video). Six of the 10 inconsistencies were on item 12 (global assessment). Thus, of the 252 items coscored by reviewers (21 videos, 12 items per video), the investigators were consistent on 96%. Additional details of video reviewer retraining are provided as supplemental material (see Video Reviewer Retraining Results, Supplemental Digital Content 3, http://links.lww.com/SIH/A195, for detailed results of video reviewer retraining).

Primary Outcome

Mean total TEAM scores were similar among the 3 settings in 1-way ANOVA (31.2 actual, 31.1 in situ, and 32.3 in center; P = 0.39) (Table 2); these scores were higher than the values reported in other articles using this tool.26,27,30–32 A secondary outcome, the global rating score, also showed no difference among settings in 1-way ANOVA (P = 0.34). In 2-way ANOVA, there was no interaction effect between setting and type of team leader (ie, faculty/third-year pediatric emergency medicine fellows vs. first-year and second-year fellows/residents) for either total TEAM scores (P = 0.18) or global rating scores (P = 0.47), and no interaction effect between setting and type of scenario (ie, trauma vs. nontrauma) for total TEAM scores (P = 0.08) or global rating scores (P = 0.09).

TABLE 2: Total TEAM Scores and Global Rating Scores for 3 Settings

Survey

At the time of video review, the study survey was sent by e-mail to 236 individual providers; 154 (65%) responded (Table 3). Ninety-nine percent of the respondents reported participation in simulation; the 2 respondents who reported no simulation participation were routed out of the remainder of the survey. The response rate was at least 65% for each discipline, except patient care assistants (43%). The mean (SD) time working in the ED was 5.9 (6.2) years, with a range of 0 to 35 years. Most respondents (84%) reported participation in both simulation settings and were presented with all questions; those reporting participation in only 1 setting (11% in center, 4% in situ) were not presented with questions that referred to the other setting.

TABLE 3: Representation by Role (Invited vs. Responded)

Respondents considered simulations to have a positive impact on the teamwork skills of both individual providers (63% rated in situ as having a very or extremely positive impact vs. 66% for in center; P = 1.0) and the entire care team (62% for in situ vs. 57% for in center; P = 0.33). For both settings, thematic coding suggested that the most frequently identified benefits of simulation-based training were improvements in communication, team interaction, shared mental models, clarification of roles and responsibilities, and task management. When comparing settings, respondents rated in situ simulation as more effective in training for leadership, team interaction, task management, and roles/responsibilities (Fig. 1). When respondents identified one setting as “much more effective,” they were prompted to provide examples; thematic coding of their open-ended responses is shown in Tables 4 (in situ) and 5 (in center).

TABLE 4: Thematic Frequencies on the 1 to 2 Aspects That Make In Situ Much More Effective Than In Center for Optimizing Teamwork Skills
FIGURE 1: Providers’ survey responses regarding effectiveness of in situ versus in-center teamwork training by teamwork skill area (n = 128).

Fifty-nine percent of the respondents rated in situ simulation as more realistic than in center; 10% rated in situ as less realistic. Thirty percent rated in situ simulation as more negatively impacting ED workload, whereas 17% rated in situ as less negatively impacting workload. For both settings, respondents rated the appropriateness of simulation frequency, the duration of the simulations, and the duration of the debriefings as “about right.” For in situ simulations, however, a higher proportion requested more simulations overall (31% in situ vs. 19% in center; P = 0.03), longer scenario duration (18% in situ, 12% in center; P = 0.02), and longer duration of debriefings (13% in situ, 6% in center; P = 0.03).

DISCUSSION

In a video-based study from an academic, pediatric ED and simulation center, where simulation-based education is an established part of the learning culture, we found a similarly high level of teamwork among actual and simulated settings. Survey respondents reported a positive perception of the impact of simulation on teamwork, with in situ simulation preferred over in-center simulation.

A large proportion of published studies evaluating simulation-based training have been targeted, time-limited investigations using preassessment and postassessment. In studies where ED providers were enrolled and teamwork was assessed, trauma-related scenarios were used most often.22–24,33 In studies that showed improvement in teamwork and procedural skills and followed outcomes after training, measured skills often decayed or were lost.23,34–36 In contrast, the “culture” in the studied ED is one where simulation-based teamwork training is used as an ongoing intervention for both medical and trauma emergencies; thus, decay or loss of attained teamwork skill is theoretically mitigated. All emergency and trauma physicians and the “core” resuscitation nurses are required to train regularly in center, and all ED providers who work in the resuscitation bays are expected to participate in in situ simulations. A high level of teamwork was achieved in each setting, with mean scores ranging from 31.1 to 32.3 of a possible 44, including during actual resuscitations. These mean total TEAM scores are higher than those reported in any previous study in which the TEAM tool was used.26,27,30–32 For the limited scope of clinical events we studied, the type of simulation setting was not associated with the quality of teamwork.

These results refute our hypothesis that teams would score higher in the actual ED setting than in center. This study challenges our belief that complete teams with preassigned roles and responsibilities, necessary equipment at the bedside, and a familiar working environment would translate into better teamwork. Conversely, additional training time without coexisting patient responsibilities, which had previously been shown to favor in-center teamwork performance,35 also did not result in better teamwork. It may be that there are differences in aspects of clinical performance that we did not measure, such as technical skills and changes in processes and systems, or that such a high level of teamwork was achieved that behaviors are reproduced regardless of setting. The latter is a strong possibility in our setting, given the amount of simulation-based team training our ED providers are exposed to and the degree to which patient safety and communication are emphasized by our institution; it may also limit the generalizability of our results. Our results suggest that, where simulation-based training is an established aspect of the learning and improvement cultures, simulation setting does not have strong, specific effects on the quality of teamwork.

To our knowledge, this study is the first to assess the perceived benefits and risks of simulation-based teamwork training from the learner’s perspective across in situ and in-center settings, as well as the translation of simulation training in nontechnical skills to a clinical setting. The providers participating in our study reported a high level of experience with both in situ and in-center simulation, which supports our statement that simulation is part of the existing culture in our institution. Although the impact of the training on teamwork was considered high for both settings, providers favored in situ simulation, rating it more effective for developing teamwork skills. In their free-text comments, respondents stated that training with all roles present better represented the actual team, that training in the resuscitation area was more realistic, and that direct transfer of teamwork training was enabled (Table 4). Notably, the survey results and the thematic comments paralleled our reasons for hypothesizing that teamwork performance would score higher in the actual environment.

TABLE 5: Thematic Frequencies on the 1 or 2 Aspects That Make In Center Much More Effective Than In Situ for Optimizing Teamwork Skills

The providers’ preference for in situ simulation in our setting is somewhat surprising because these sessions are performed unannounced within a high-volume ED. During in situ simulations, the medical/trauma team pagers are activated, and the staff reports to the resuscitation bay expecting an actual patient; instead, they participate in a 10-minute simulation, followed by a 10-minute team-level debriefing. As demonstrated in the results, a mean of approximately 8 providers participated in each in situ simulation, and 89% of staff responding to the survey had participated in in situ simulations; thus, providers are not avoiding participation or leaving the simulation to get back to clinical responsibilities. Their preference may be due to our strategy for scheduling and canceling in situ simulations. We schedule 8 in situ simulations per month across all shifts (day, evening, and night), avoiding shift change. In addition, we have strict cancellation criteria, including an actual patient in the resuscitation area, more than 70 patients in the ED, more than 15 high-acuity patients in the ED, or inadequate nurse staffing as defined by the charge nurse. This generates trust that we will not put real patients at risk and provides a safe atmosphere for learning.

This study has some limitations. First, although a high level of teamwork behaviors (Kirkpatrick level 3, behavior) was found, we did not attempt to measure patient-level outcomes (Kirkpatrick level 4, results) and thus could not evaluate whether a high level of teamwork behaviors translated into improved patient outcomes.37 Failures of teamwork and communication have been linked with adverse events and malpractice claims,38,39 which is a strong argument for continuous teamwork training and supports the need to understand how simulation-based teamwork training translates to actual clinical care. Second, we applied the TEAM tool in a novel way, using it to compare simulations and actual resuscitations. We were able to extract all elements of the TEAM tool from the videos during reviewer training, so we feel this limitation is minimal; in addition, the tool was designed for use in emergencies and has been shown to be valid in simulation-based assessments.27 Third, we did not attempt to demonstrate interrater reliability. We chose instead to have 1 video reviewer (T.B.C.) abstract data from all the videos and a second reviewer (G.L.G.) review a portion of the videos to prevent “drift” in scoring and ensure consensus. Our primary video reviewer was chosen to limit bias because he did not personally know or work with any of the providers in the videos; we feel this lessened the chance that he would “overscore” or “underscore” teams, as he was less subject to authority gradients, halo effects, or previous interpersonal conflicts. Moreover, as demonstrated in the results, consistency between reviewers was good (96% of items scored consistently), and it improved during the course of the study. Fourth, the relative heterogeneity of scenarios, care team providers, and team leaders might have introduced bias, especially in a retrospective review. To help mitigate threats to validity, we followed multiple strategies suggested by Cheng et al25 for scenario design (conceptual realism), scripting of confederate roles (emotional realism), and designing the in-center environment to mirror the ED (physical realism), and we standardized debriefings by training all facilitators and using a standard debriefing template.16 The potential bias of scenario variability is also minimized by the TEAM tool itself, which has only 1 item in which specific standards/guidelines are considered.26 Fifth, this was a single-center study conducted in an institution where all staff are familiar with simulation; thus, our results cannot be easily extrapolated. Finally, this was a retrospective study using video recordings of simulations from all of 2013, whereas the survey was conducted over 3 weeks in December 2013. Thus, in this cross-sectional design, we could not guarantee that survey respondents were also participants in the simulations reviewed. We attempted to minimize this issue by excluding from the data analysis the responses of the 2 providers with no simulation experience.

The results of our video-based study support simulation as an effective method to sustain high levels of teamwork in practice when included as part of ongoing training programs, and the results of our survey suggest that the majority of these simulations should be in situ. In light of the study limitations, however, an appropriate next step would be to prospectively evaluate team behaviors by matching actual cases with simulation scenarios, including matching severity of illness and injury, thus decreasing heterogeneity and allowing more specific measurement of teamwork within subsets of illness or injury. Such a study could identify behaviors that are difficult to perform across specific provider groups (eg, cardiology and emergency medicine) and mitigate other limitations of a retrospective design. Second, and likely more importantly, we need to evaluate translational improvements in care by assessing specific patient-level outcomes, especially those thought to be influenced by teamwork training.

CONCLUSIONS

In a video-based study at an academic pediatric institution, ratings of teamwork were relatively high and similar among actual resuscitations and 2 simulation settings, supporting the influence of simulation-based training on instilling a culture of high-quality communication and teamwork in the ED. Based on survey results, providers favored the in situ setting for teamwork training and suggested an expansion of our existing in situ simulation program. Prospective studies, potentially across multiple institutions, are needed to explore these results further.

REFERENCES

1. Weller J, Boyd M, Cumin D. Teams, tribes and patient safety: overcoming barriers to effective teamwork in healthcare. Postgrad Med J 2014; 90: 149–154.
2. Chen EH, Cho CS, Shofer FS, Mills AM, Baren JM. Resident exposure to critical patients in a pediatric emergency department. Pediatr Emerg Care 2007; 23: 774–778.
3. Mittiga MR, Geis GL, Kerrey BT, Rinderknecht AS. The spectrum and frequency of critical procedures performed in a pediatric emergency department: implications of a provider-level view. Ann Emerg Med 2013; 61: 263–270.
4. Hansen M, Fleischman R, Meckler G, Newgard CD. The association between hospital type and mortality among critically ill children in US EDs. Resuscitation 2013; 84: 488–491.
5. Overly FL, Sudikoff SN, Shapiro MJ. High-fidelity medical simulation as an assessment tool for pediatric residents’ airway management skills. Pediatr Emerg Care 2007; 23: 11–15.
6. Cheng A, Duff J, Grant E, Kissoon N, Grant VJ. Simulation in paediatrics: an educational revolution. Paediatr Child Health 2007; 12: 465–468.
7. Eppich WJ, Adler MD, McGaghie WC. Emergency and critical care pediatrics: use of medical simulation for training in acute pediatric emergencies. Curr Opin Pediatr 2006; 18: 266–271.
8. Chiniara G, Cole G, Brisbin K, et al. Simulation in healthcare: a taxonomy and a conceptual framework for instructional design and media selection. Med Teach 2013; 35: e1380–e1395.
9. Kobayashi L, Patterson MD, Overly FL, Shapiro MJ, Williams KA, Jay GD. Educational and research implications of portable human patient simulation in acute care medicine. Acad Emerg Med 2008; 15: 1166–1174.
10. Miller KK, Riley W, Davis S, Hansen HE. In situ simulation: a method of experiential learning to promote safety and team behavior. J Perinat Neonatal Nurs 2008; 22: 105–113.
11. Nunnink L, Welsh AM, Abbey M, Buschel C. In situ simulation-based team training for post-cardiac surgical emergency chest reopen in the intensive care unit. Anaesth Intensive Care 2009; 37: 74–78.
12. Weinstock PH, Kappus LJ, Garden A, Burns JP. Simulation at the point of care: reduced-cost, in situ training via a mobile cart. Pediatr Crit Care Med 2009; 10: 176–181.
13. Allan CK, Thiagarajan RR, Beke D, et al. Simulation-based training delivered directly to the pediatric cardiac intensive care unit engenders preparedness, comfort, and decreased anxiety among multidisciplinary resuscitation teams. J Thorac Cardiovasc Surg 2010; 140: 646–652.
14. Raemer DB. Ignaz Semmelweis redux? Simul Healthc 2014; 9: 153–155.
15. Deering S, Johnston LC, Colacchio K. Multidisciplinary teamwork and communication training. Semin Perinatol 2011; 35: 89–96.
16. Patterson MD, Geis GL, Falcone RA, LeMaster T, Wears RL. In situ simulation: detection of safety threats and teamwork training in a high risk emergency department. BMJ Qual Saf 2013; 22: 468–477.
17. Patterson MD, Geis GL, LeMaster T, Wears RL. Impact of multidisciplinary simulation-based training on patient safety in a paediatric emergency department. BMJ Qual Saf 2013; 22: 383–393.
18. Ziesmann MT, Widder S, Park J, et al. S.T.A.R.T.T.: development of a national, multidisciplinary trauma crisis resource management curriculum—results from the pilot course. J Trauma Acute Care Surg 2013; 75: 753–758.
19. Theilen U, Leonard P, Jones P, et al. Regular in situ simulation training of paediatric medical emergency team improves hospital response to deteriorating patients. Resuscitation 2013; 84: 218–222.
20. Croskerry P, Chisholm C, Vinen J, Perina D. Quality and education. Acad Emerg Med 2002; 9: 1108–1115.
21. Geis GL, Pio B, Pendergrass TL, Moyer MR, Patterson MD. Simulation to assess the safety of new healthcare teams and new facilities. Simul Healthc 2011; 6: 125–133.
22. Steinemann S, Berg B, Skinner A, et al. In situ, multidisciplinary, simulation-based teamwork training improves early trauma care. J Surg Educ 2011; 68: 472–477.
23. Miller D, Crandall C, Washington C 3rd, McLaughlin S. Improving teamwork and communication in trauma care through in situ simulations. Acad Emerg Med 2012; 19: 608–612.
24. Capella J, Smith S, Philp A, et al. Teamwork training improves the clinical care of trauma patients. J Surg Educ 2010; 67: 439–443.
25. Cheng A, Auerbach M, Hunt EA, et al. Designing and conducting simulation-based research. Pediatrics 2014; 133: 1091–1101.
26. Cooper S, Cant R, Porter J, et al. Rating medical emergency teamwork performance: development of the Team Emergency Assessment Measure (TEAM). Resuscitation 2010; 81: 446–452.
27. Cooper S, Cant RP. Measuring non-technical skills of medical emergency teams: an update on the validity and reliability of the Team Emergency Assessment Measure (TEAM). Resuscitation 2014; 85: 31–33.
28. TEAM Web site. Available at: http://medicalemergencyteam.com. Accessed May 15, 2014.
29. Feldman M, Lazzara EH, Vanderbilt AA, DiazGranados D. Rater training to support high-stakes simulation-based assessments. J Contin Educ Health Prof 2012; 32: 279–286.
30. Cooper S, Cant R, Porter J, et al. Managing patient deterioration: assessing teamwork and individual performance. Emerg Med J 2013; 30: 377–381.
31. Bogossian F, Cooper S, Cant R, et al. Undergraduate nursing students’ performance in recognising and responding to sudden patient deterioration in high psychological fidelity simulated environments: an Australian multi-centre study. Nurse Educ Today 2014; 34: 691–696.
32. Boet S, Bould MD, Sharma B, et al. Within-team debriefing versus instructor-led debriefing for simulation-based education: a randomized controlled trial. Ann Surg 2013; 258: 53–58.
33. Steinemann S, Berg B, DiTullio A, et al. Assessing teamwork in the trauma bay: introduction of a modified “NOTECHS” scale for trauma. Am J Surg 2012; 203: 69–75.
34. Thomas SM, Burch W, Kuehnle SE, Flood RG, Scalzo AJ, Gerard JM. Simulation training for pediatric residents on central venous catheter placement: a pilot study. Pediatr Crit Care Med 2013; 14: e416–e423.
35. Ahya SN, Barsuk JH, Cohen ER, Tuazon J, McGaghie WC, Wayne DB. Clinical performance and skill retention after simulation-based education for nephrology fellows. Semin Dial 2012; 25: 470–473.
36. Roberts NK, Williams RG, Schwind CJ, et al. The impact of brief team communication, leadership and team behavior training on ad hoc team performance in trauma care settings. Am J Surg 2014; 207: 170–178.
37. Kirkpatrick DL. Evaluating Training Programs: The Four Levels. 2nd ed. San Francisco, CA: Berrett-Koehler Publishers; 1998.
38. Wahr JA, Prager RL, Abernathy JH 3rd, et al. Patient safety in the cardiac operating room: human factors and teamwork: a scientific statement from the American Heart Association. Circulation 2013; 128: 1139–1169.
39. Croskerry P, Wears RL, Binder LS. Setting the educational agenda and curriculum for error prevention in emergency medicine. Acad Emerg Med 2000; 7: 1194–1200.
Keywords:

Simulation; In situ; Simulation center; High fidelity; Video review; Pediatric emergencies; Teamwork; Team Emergency Assessment Measure; TEAM


© 2015 Society for Simulation in Healthcare