Simulation-based training for teams is playing an increasingly critical role in the development of competent medical teams.1 However, to optimize the educational value of this training technique, it is critical that we borrow relevant theories and methodologies from team researchers in other domains.2 Inclusion of validated team-based concepts can inform training design and evaluation efforts to enrich trainee competency and ultimately patient safety.
One characteristic of highly effective teams that has been documented across a number of high-risk industries is situation awareness.3–6 Situation awareness (SA) describes the ability to perceive environmental elements, comprehend their meaning, and anticipate future events.7 Specifically, SA is broken into 3 distinct levels: (1) perception of the environment; (2) comprehension of the meaning of this information; and (3) projection of future events and actions based on perception and comprehension. Understandably, SA is a critical component of decision making for medical teams. In settings in which large amounts of information exist in a complex and dynamic environment, developing and maintaining SA can be an extremely difficult task.
Measuring SA is itself a difficult task. Fortunately, a number of tools have been created for this purpose, including the Situational Awareness Rating Technique,8 the Cranfield Situation Awareness Scale,9 the Situation Awareness Behavioral Rating Scale,10,11 the Situation Awareness Global Assessment Technique (SAGAT),12 and brief measures consisting of 1 to 2 items placed on a global performance tool. Of these, the SAGAT is the most widely used, likely because it provides a direct, objective, real-time assessment of SA rather than the retrospective, subjective opinions of trainees or observers that the other tools offer. The SAGAT was originally developed in nonmedical domains as a way to measure SA and has documented validation evidence above and beyond these other tools.12 The SAGAT involves periodically freezing a training task and administering multiple-choice and/or open-ended questions assessing each of the 3 aforementioned levels of SA, so that responses can be objectively scored as correct or incorrect. For example, pilots-in-training might experience a “freeze” during a simulated take-off in which they are asked what types of warning lights were activated during the take-off (level 1), how serious the warnings are with regard to immediate airworthiness and the amount of runway remaining (level 2), and what their next action should be (level 3). The feasibility and value of this approach have been extensively documented in aviation simulation-based training.13,14
Unfortunately, parallel assessment tools and curricula to assess and enhance SA among medical trainees are lacking. It has been noted that SA can build up and change over time as trainees gain more exposure to situations and their outcomes,15 but it remains unclear how best to design educational curricula to enhance SA in traditional medical teams. Simulation-based training is an ideal setting for the development and assessment of SA, as it offers trainees opportunities to practice and demonstrate their skills in a safe learning environment with no risk to patients. Furthermore, simulation provides a unique opportunity to measure team SA during the course of tasks and activities. For example, a trauma simulation-based scenario can be “frozen” (i.e., simulators paused and vitals monitor “turned off”) immediately after an x-ray is shown, but before the team communicates or reacts to the new information that the x-ray harbors. During this “freeze,” the team members are then independently asked to answer questions assessing what the x-ray just displayed (e.g., radiolucency of left lung field with absent peripheral lung markings), what it means (e.g., pneumothorax with collapsed lung), and what will happen next (e.g., patient will continue to decompensate, tube thoracostomy will be needed).
The goal of this study was to examine the feasibility and predictive utility of the SAGAT technique for simulation-based training of newly formed medical teams. Scholars have noted that novice teams have substantial problems in building SA.2,12,16 Compared with expert teams, novices are unable to take in key information, manage distractions and high workload, monitor effectively, and project future events.13 Thus, exploring this technique among newly formed teams, rather than already established medical teams, offers the greatest value for expanding our knowledge base of how SA can be assessed and trained.
Two scenarios were developed conforming to the Advanced Cardiac Life Support (ACLS) experiences and training of the learners according to an institutional review board–approved protocol. The first scenario presented a postoperative complication of a myocardial infarction (MI), including assessment of the patient’s new complaint of chest pain, identification of an ST segment elevation MI on electrocardiogram, management of a ventricular fibrillation cardiac arrest, and ultimate disposition of the patient to interventional cardiology. The second scenario presented a postprocedural complication from central line placement for a preoperative patient, including assessment of the patient’s new complaint of shortness of breath, identification of a pneumothorax on a chest radiograph, management of eventual tension pneumothorax physiology with needle decompression, and ultimate disposition of the patient with placement of a thoracostomy tube. Scenario order was randomized by drawing team numbers from a hat to ensure that half of the teams saw the MI scenario first and half saw the pneumothorax scenario first. The primary objective in both scenarios was to perform a standard ACLS evaluation with the appropriate interventions to identify and resolve clinical and task management issues arising during the care of the simulated patients. SAGAT queries representing levels 1, 2, and 3 SA were developed based on recommendations by Endsley using Goal-Directed Task Analysis.17 Specifically, key decisions for the overall goals of each scenario and the SA requirements (divided by level) were identified. Clinicians identified the highest-level goals associated with the task and the requisite subgoals, and listed the SA requirements (at each of the 3 levels) needed to make those decisions. These SA requirements formed the basis for determining the queries to be used within each scenario freeze.
For example, the goals for the pneumothorax scenario were to recognize the pneumothorax, understand the physiological basis for the pneumothorax, and anticipate future deterioration and interventions. Within each scenario, 3 SAGAT freezes were designed to focus on specific clinical issues and decisions that arose within the scenario. For each individual SAGAT freeze, participants had to provide responses to 3 queries representing each level of SA as identified above. These queries took a more specific form of “what just happened?” (level 1), “what does it mean?” (level 2), and “what should/will happen next?” (level 3) after predefined event-based points in the scenario. For example, after the x-ray was shown for the second scenario, the scenario was frozen and trainees were asked: (1) “What did the chest x-ray show?”, (2) “What is the cause of the pathophysiological problem?”, and (3) “What do you expect to happen to the patient in the next couple of minutes?”
Both scenarios were programmed into a human patient simulator with an initial standardized script that became dynamic as participants took action with the patient. Scenarios took place in the simulation laboratory, fully equipped with all necessary instruments and materials, reflecting the appearance of a postanesthesia care unit. This room was equipped with cameras and overhead microphones for video and audio recording.
Participants engaged in a 15-minute orientation to the simulation space and equipment before the training sessions. The scenarios were designed to be approximately 15 minutes in length with 3 SAGAT freezes per session. Before the simulation, team members were assigned team roles (team leader, airway, shock, compressions) that remained constant throughout both simulations. Upon entering the room, teams were provided with a situation, background, assessment, recommendation (SBAR) report by the nurse confederate. The nurse was instructed to aid in the resuscitations only upon request from participants.
At 3 predefined moments throughout each scenario, the scenario was “frozen” (i.e., simulators turned off) and participants were turned away from the patient and monitors. An instructor entered the scene and trainees independently answered identical SA assessments, which consisted of 3 questions assessing levels 1 to 3. Trainees had approximately 1 minute to respond to each of the 3 questions and were unaware of other team members’ responses, as they completed their responses on individual answer sheets away from other team members. As suggested by Endsley,17 no stops were scheduled for the first 3 minutes of the scenario, and all stops were at least 1 minute apart and at random times during the scenario.
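The timing constraints described above (no stops in the first 3 minutes, stops at least 1 minute apart, otherwise random placement) can be expressed as a simple rejection-sampling routine. The sketch below is illustrative only; the function name and parameters are our own, and in the actual study the freezes were anchored to predefined scenario events rather than drawn purely at random.

```python
import random

def schedule_freezes(scenario_min=15, n_freezes=3, no_freeze_before=3,
                     min_gap=1, seed=None):
    """Draw candidate SAGAT freeze times (in minutes) satisfying
    Endsley's constraints: no freeze in the first `no_freeze_before`
    minutes and at least `min_gap` minutes between consecutive freezes."""
    rng = random.Random(seed)
    while True:
        # sample freeze times uniformly in the allowable window
        times = sorted(rng.uniform(no_freeze_before, scenario_min)
                       for _ in range(n_freezes))
        # accept the draw only if consecutive freezes are spaced apart
        if all(b - a >= min_gap for a, b in zip(times, times[1:])):
            return [round(t, 1) for t in times]
```

In practice, the sampled times would be mapped onto the nearest suitable event-based decision points in the scenario script.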
After completion of each scenario, teams were debriefed using the 4E method18 by faculty debriefers in surgery and emergency medicine (EM), both with formal backgrounds in simulation and debriefing. Additionally, the advocacy and inquiry approach19 was used to ascertain teams’ perceptions of their performance through a debriefing-with-good-judgment process. Trainees were asked to reflect on and discuss aspects of the scenario that prompted them to consider others’ expertise, trust one another, communicate, access information from others, specialize, share tasks, coordinate behaviors, delegate, and set team goals. These topics were chosen to encourage participants to reflect on core teamwork principles.
Situation awareness was assessed by evaluating the correctness of each team member’s responses to the queries, as determined by expert judgment of the correct answer for each freeze. Specifically, before the scenario, clinical faculty (surgery and EM) determined the correct response to each query (responses were fairly objective in nature), such that a nonexpert could subsequently grade each trainee’s answers. Each correct item was allotted 1 point, with 0.5 point given to responses deserving partial credit (e.g., correctly identifying a tension pneumothorax, but on the wrong side). The EM faculty debriefer graded all items for correctness. Although each team member independently completed the SA assessments, team-level SA was computed by summing individual responses, similar to previous studies.20 Summing individual scores to create team scores reflects the extent to which team members are on the same page.21,22
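The scoring scheme above (1 point per correct response, 0.5 for partial credit, team score as the sum of individual scores) can be sketched as follows. This is a minimal illustration under our own naming; the actual grading of each free-text response against the faculty answer key was done by hand.

```python
# Credit values per the scoring rubric: full credit, partial credit, none.
CREDIT = {"correct": 1.0, "partial": 0.5, "incorrect": 0.0}

def member_score(grades):
    """Total SAGAT points for one trainee across all graded queries."""
    return sum(CREDIT[g] for g in grades)

def team_sa_score(team_grades):
    """Team-level SA: sum of individual member scores (cf. ref 20)."""
    return sum(member_score(g) for g in team_grades.values())

def team_sa_percent(team_grades, n_queries):
    """Express the team score as a percentage of the points available."""
    max_points = n_queries * len(team_grades)
    return 100 * team_sa_score(team_grades) / max_points
```

Expressing team scores as percentages of available points is what allows teams of different sizes, and scenarios with different numbers of queries, to be compared on a common scale.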
Additionally, teamwork was evaluated using a previously validated 8-item teamwork scale23 frequently used in other surgical team simulation training sessions.24,25 Consensus teamwork ratings were obtained from 2 simulation faculty (A.K.G. and J.M.) trained in simulation, assessment, and team training, as inclusion of multiple sources has been identified as a best practice for team training evaluation.26 Specifically, raters reviewed each video together, independently rated the videos, and discussed any discrepancies to achieve one consensus rating.
At the conclusion of both scenarios, participants completed a questionnaire designed to assess basic demographics and overall satisfaction with the training program using 5 items (e.g., “These simulations were effective for promoting teamwork skills,” “I’d be interested in participating in more simulations like these,” etc.) on a 1 (strongly disagree) to 5 (strongly agree) scale.
Descriptive statistics were analyzed using SPSS version 21 (SPSS Inc, Chicago, IL), and a significance level of P<0.05 was chosen. Cronbach alpha was used to assess the reliability of the teamwork scale. Paired sample t tests were used to compare changes in SA and teamwork scores from scenario 1 to scenario 2 among all teams. Hierarchical regression was used to examine whether SA predicted teamwork ratings.
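Two of these analyses, Cronbach alpha for internal consistency and the paired-samples t statistic, are simple enough to compute directly. The sketch below is an illustrative reimplementation (not the SPSS procedures used in the study), with function names of our own choosing:

```python
from math import sqrt
from statistics import mean, stdev, variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency.
    `items[i][j]` is the score on scale item i for observation j."""
    k = len(items)                                  # number of scale items
    totals = [sum(col) for col in zip(*items)]      # total score per observation
    item_var = sum(variance(item) for item in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

def paired_t(x, y):
    """t statistic for a paired-samples t test (e.g., scenario 1 vs 2).
    The p-value would then come from a t distribution with len(x)-1 df."""
    d = [b - a for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / sqrt(len(d)))
```

With only 1 predictor entered at the final step, the hierarchical regression reported below reduces to a simple linear regression of teamwork ratings on team SA, so its R² is the squared correlation between the two.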
Forty-three third-year medical students enrolled in surgical rotations participated in the training sessions. Students reported a mean±SD age of 26±2.21 years, and 59% were women. Participants reported overall satisfaction with the course as 4.40±0.63.
The mean SA score for the MI scenarios was 61±11. The mean score for the pneumothorax scenarios was 63±18. The mean teamwork scores were 11.4±3.27 and 10.2±4.10 for the MI and pneumothorax scenarios, respectively. The mean SA score for the first scenario encountered was 59±19 and 64±20 for the second scenario. The mean teamwork scores were 9.89±4.21 and 11.86±2.50 for the first and second scenarios, respectively.
Overall, there was high variability in team SA, ranging from 45% to 79% and from 46% to 97% for the first and second scenarios, respectively. As shown in Table 1, team SA differed from one scenario to the other, with some teams demonstrating higher or lower SA, whereas one team showed no difference. Average SA across all teams increased from 59% in the first scenario to 64% in the second.
Reliability of the Mayo High-Performance Teamwork scale for both scenarios was 0.91 using Cronbach alpha. Mean teamwork ratings were 9.89±4.21 (range, 4–16) and 11.86±2.39 (range, 8–15) for the first scenario and second scenario, respectively. Paired sample t tests indicate no difference between the first scenario and the second scenario (P=0.14).
Hierarchical regression indicated that team SA significantly predicted teamwork ratings for both the first scenario (F(1,9)=8.02; P<0.05; R²=0.50) and the second scenario (F(1,9)=9.94; P<0.01; R²=0.55).
The results of this study indicate that measuring situation awareness in a simulated team setting is feasible. Although surgical educators agree that SA is a critical component of teams,1,2 we have yet to identify optimal methods to assess and improve it. Instead, we frequently rely on observer ratings, which can only be based on behaviors and/or verbalizations of the participants.23 By directly asking participants in the moment of the simulation, without other intervening events to disrupt memory, the SAGAT method allows for gathering real-time data that are not prone to the many problems associated with retrospective reports of past events. Additionally, from our experiences in this pilot study, we did not find that the temporary freezes to collect the SAGAT data affected the fidelity or “flow” of the simulation itself, as might be expected. Instead, we found that having insight into what the team was thinking throughout the simulation actually enhanced our educational endeavors, as we could focus our debriefing discussion on specific areas of need. Thus, implementing the SAGAT methodology may not only help surgical educators quickly identify specific deficiencies among trainees but also inform training design and customized interventions. For example, common incorrect responses or decisions may be indicative of additional training needs at both the individual and programmatic levels.
Although this study was not sufficiently powered to determine whether training enhanced team SA over time, future research with a larger sample size should examine the extent to which simulation training enhances team SA. If such an improvement can be demonstrated, it would suggest that simulation may be an effective approach to catalyzing the formation of expert surgical teams. Future research is also needed to investigate how SA among teams develops and changes over time.
Additionally, these results provide initial validity evidence for the use of the SAGAT in team training sessions among surgical trainees, as SA predicted teamwork ratings. Our work aligns with propositions that SA is a crucial component of effective teamwork.2 Even among newly formed teams, we were able to demonstrate that teams with heightened awareness of their environment work more effectively together. Examining these relationships across levels of learners will be instrumental to better understanding these team processes. Future work should also examine other relevant variables and outcomes; exploring other team characteristics and how they relate to the SA–teamwork relationship might help expand our existing knowledge base. For example, a deeper examination of team composition is likely warranted to better understand whether certain team members’ SA is more critical to effective teamwork or whether a baseline of SA is needed among all team members.
Finally, because this work suggests that the SAGAT tool may prove a feasible and valid method to assess SA among trainees, a discussion of specific interventions to enhance SA is warranted. As other work has shown that novices struggle to take in key information, manage distractions and high workload, monitor effectively, and project future events,13 interventions focused on equipping trainees with appropriate task management, situation monitoring, and planning strategies may be a reasonable next step. Excitingly, as validity evidence for the SAGAT method continues to grow, it can also be used to evaluate the effectiveness of such interventions.
Although this study provides just a snapshot of how the SAGAT method can be implemented, we believe it provides a crucial first step toward understanding and developing simulation-based team training programs for medical trainees. Despite this, we must note a number of limitations of our study. For example, although our learners were all of the same training level and had previous experience with ACLS, there are likely limits to the extent to which they were adept at the roles (e.g., airway, team leader) to which they were assigned. Similarly, because trainees were drawn from the same cohort, their previous experiences with one another and the extent to which they knew one another may have affected these teamwork outcomes. However, varied experiences and relationships with other health care providers likely reflect the reality of medical teams in the clinical environment.
Additionally, the choice of measurement tools is another area that warrants further refinement. In our study, we chose to focus on teamwork performance (rather than team task performance) and to use the Mayo High-Performance Teamwork scale. The former decision reflects our curriculum’s balance of nontechnical and technical skills, in which the team training scenarios provide a nonthreatening environment for trainees to interact with one another and polish teamwork behaviors; task and clinical performance is measured by a number of other components of the curriculum. Because of this decision, we cannot say with certainty that teams who score higher on SA metrics will actually perform better on clinical metrics. Additionally, we used the Mayo High-Performance Teamwork scale in a way slightly different from how it was originally validated: we had external observers use the tool rather than relying on self-ratings, with the intention of reducing any biases that may emerge from self-report data.
Finally, further exploration of the way in which team SA is measured is warranted. We had only one clinical faculty member rate SA responses; future work should examine the use of multiple raters to reduce potential biases. In addition, theoretical work suggests that high team SA may occur only when all individual members of the team possess the baseline level of SA required for their respective roles.22 In other words, a team may have deficient SA if one member possesses and understands all relevant information while the other team members remain unaware. Thus, although we summed individual team member SA responses to form an overall team score in line with other studies,20 this may not be the most appropriate reflection of team SA. Without further exploration of this methodology and topic, best practices in its assessment will remain unknown.
For teams to be maximally effective, team members need to be aware of the actions and activities going on around them (e.g., changing patient status, treatment interventions), to understand their meaning, and to be able to anticipate what might happen in the next few moments as a result of those activities and interpretations. Our work suggests that this cognitive understanding among teams, situation awareness, can be assessed in simulation-based training settings using the SAGAT technique. Our findings indicate that this team-based competency may increase from one simulation to the next, and that teams who have heightened SA also demonstrate more effective teamwork.
1. Stefanidis D, Sevdalis N, Paige J, Zevin B, Aggarwal R. Simulation in surgery. What’s needed next? Ann Surg.
2. Gardner AK, Scott DJ. Important concepts for developing expert surgical teams using simulation. Surg Clin N Am.
3. Endsley MR. Expertise and situation awareness. In: Ericsson KA, Charness N, Feltovich P, Hoffman R, eds. The Cambridge Handbook of Expertise and Expert Performance. New York, NY: Cambridge University Press; 2006.
4. Paige JT, Kozmenko V, Yang T, et al. High-fidelity, simulation-based interdisciplinary operating room team training at the point of care. Surgery.
5. Burke CS, Salas E, Wilson-Donnelly K, Priest H. How to turn a team of experts into an expert medical team: guidance from the aviation and military communities. BMJ Qual Saf Health Care.
6. Stout RJ, Cannon-Bowers JA, Salas E. The role of shared mental models in developing shared situational awareness. In: Gilson RD, Garland DJ, Koonce JM, eds. Situational Awareness in Complex Environments. Daytona Beach, FL: Embry-Riddle Aeronautical University Press; 1994:297–304.
7. Endsley MR. Design and evaluation for situation awareness enhancement. In: Proceedings of the Human Factors Society 32nd Annual Meeting. Santa Monica, CA: Human Factors Society; 1988:97–101.
8. Taylor RM. Situational awareness rating technique (SART): the development of a tool for aircrew systems design. In: Situational Awareness in Aerospace Operations (AGARD-CP-478). Neuilly-sur-Seine, France: NATO-AGARD; 1990:3/1–3/17.
9. Dennehy K. Cranfield—Situation Awareness Scale: User Manual. COA Report No. 9702. Bedford, England: Applied Psychology Unit, College of Aeronautics, Cranfield University; January 1997.
10. Matthews MD, Beal SA. Assessing Situation Awareness in Field Training Exercises. Research Report 1795. Alexandria, VA: U.S. Army Research Institute for the Behavioral Sciences; 2002.
11. Matthews MD, Pleban RJ, Endsley MR, Strater LD. Measures of infantry situation awareness for a virtual MOUT environment. In: Proceedings of the Human Performance, Situation Awareness and Automation: User-Centred Design for the New Millennium Conference. Savannah, GA: SA Technologies; October 2002.
12. Salmon P, Stanton N, Walker G, Green D. Situation awareness measurement: a review of applicability for C4i environments. Appl Ergon.
13. Endsley MR, Garland DJ, Shook RWC, et al. Situation Awareness Problems in General Aviation. Marietta, GA: SA Technologies; 2000.
14. Endsley MR, Mogford R, Allendoerfer K, et al. Effect of Free Flight Conditions on Controller Performance, Workload, and Situation Awareness: A Preliminary Investigation of Changes in Locus of Control Using Existing Technology. Atlantic City, NJ: Federal Aviation Administration William J Hughes Technical Center; 1997.
15. Mohammed S, Hamilton K, Lim A. The incorporation of time in team research: past, current, and future. In: Salas E, Goodwin GF, Burke CS, eds. Team Effectiveness in Complex Organizations: Cross-Disciplinary Perspectives and Approaches. New York, NY: Taylor & Francis/Routledge; 2009.
16. Hogan MP, Pace DE, Hapgood J, Boone DC. Use of human patient simulation and the situation awareness global assessment technique in practical trauma skills assessment. J Trauma.
17. Endsley MR, Garland DJ, eds. Situation Awareness Analysis and Measurement. Mahwah, NJ: Lawrence Erlbaum Associates; 2000.
18. Mort TC, Donahue SP. Debriefing: the basics. In: Dunn WF, ed. Simulators in Critical Care and Beyond. Des Plaines, IL: Society of Critical Care Medicine; 2004.
19. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There’s no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc.
20. Crozier MS, Ting HY, Boone DC, et al. Use of human patient simulation and validation of the Team Situation Awareness Global Assessment Technique (TSAGAT): a multidisciplinary team assessment tool in trauma education. J Surg Educ.
21. Wright MC, Endsley MR. Building shared situation awareness in healthcare settings. In: Nemeth CP, ed. Improving Healthcare Team Communication: Building on Lessons from Aviation and Aerospace. Burlington, VT: Ashgate; 2008:97–116.
22. Wright MC, Taekman JM, Endsley MR. Objective measures of situation awareness in a simulated medical environment. Qual Saf Health Care.
23. Malec JF, Torsher LC, Dunn WF, et al. The Mayo high performance teamwork scale: reliability and validity for evaluating key crew resource management skills. Simul Healthc.
24. Garbee DD, Paige JT, Barrier K, et al. Interprofessional teamwork among students in simulated codes: a quasi-experimental study. Nurs Educ Perspect.
25. Garbee DD, Paige JT, Bonanno LS, et al. Effectiveness of teamwork and communication education using an interprofessional high-fidelity human patient simulation critical care code. J Nurs Educ Pract.
26. Rosen MA, Salas E, Wilson KA, et al. Measuring team performance in simulation-based training: adopting best practices for healthcare. Simul Healthc.