In acute trauma care, effective leadership has been identified as one of the key contributors to timely patient assessment and management and the reduction of preventable errors.1–4 Having a designated trauma leader is an important strategy of the multidisciplinary trauma team for accessing and synchronizing the different types of expertise. Trauma leaders hold a central position in team communication5 and facilitate “macro-cognitive” team processes, such as managing attention, coordination, detecting problems, and maintaining common ground.6,7
These tasks require specialized nontechnical skills.8,9 Leadership training is therefore increasingly being incorporated into trauma courses and simulation-based training.9–11 This places increasing demands on the medical staff providing such training, as they must adequately understand the behaviors by which the trauma leader can advance team processes and how leadership relates to quality, patient safety, and efficiency.12 Building on that understanding, they must foster trainees' learning cycles through guided reflection and sound recommendations for targeted practice. However, observing and reflecting on nontechnical skill performance can be complex,13–15 and few resources are available to help trauma care instructors systematically select and carefully attend to the relevant elements of leadership performance.1 Without systematic guidance, vital aspects of the performance may be missed, and subsequent learning conversations with trainees may lack the detail required for deliberate practice toward expertise.
To address the challenge of nontechnical skill evaluation, behavioral marker tools have been developed in a number of areas, including anesthesia and surgery.13,16,17 These tools support training and evaluation by detailing specific, observable nontechnical behaviors that contribute to superior or substandard performance. Surprisingly, no tool exists that is specific to the trauma environment and focused on leadership skills.18 A specific tool is important because, although generic leadership skills have been identified across healthcare settings,18 trauma leaders may display a slightly different pattern of behavior than, for instance, resuscitation leaders (given different degrees to which procedures are protocolized) or surgeons (given different levels of hands-on involvement). Two tools that have been specifically designed for trauma assessment, the nontechnical skills scale for trauma19 and the trauma team performance observation tool,2 may also be less appropriate for detailed leadership evaluation, as they focus on skills for the entire team rather than the trauma leader alone. They thereby miss the granularity needed to support targeted practice of a variety of leadership strategies.
In this article, we therefore present the development of a novel tool that specifically targets trauma leadership performance. We aimed for a tool that serves as a cognitive aid to set performance expectations, direct attention to key behaviors, and support quick note taking and memorization of thoughts, concerns, and appraisals until the debriefing. We based the tool on our previous work, in which we conducted a thorough task analysis of trauma leadership and from which we developed a granular skill taxonomy, called the "Taxonomy of Trauma Leadership Skills" (TTLS; see Leenstra et al20 and its online supplementary content for full details). The TTLS contains 5 skill categories (ie, information coordination, action coordination, decision making, communication management, and coaching and team development), capturing a total of 37 skill elements, which in turn are further specified by 67 examples of excellent behavior. With its 3-level structure (category, element, and example level), the TTLS is a comprehensive resource meant for research, course development, and skill benchmarks. Its coverage of all phases in trauma care (ie, briefing, handovers, patient handling, debriefing) provides a broad scope so that its users can select the specific aspects of trauma leadership they wish to study, teach, or practice. In the present study, we selected those phases that are of explicit interest in trauma simulation: the briefing phase and the patient handling phase (see Table 1 for the selected categories and elements).
TABLE 1 - Comparison of Skill Elements in the Original Skill Taxonomy and the Final Prototype After the Video-Based Testing and Live Testing

Briefing Phase*
- Elements in the original taxonomy: exchanging prehospital information; discussing strategy and tasks; setting positive team climate
- Elements in the final prototype: discusses plan based on preannouncement; discusses tasks and responsibilities; discusses "what-if" plans; knows names and individual competences

Patient Handling Phase
- Original: collecting patient information. Final prototype: asks and shares findings; points out changes in patient condition; thinks aloud (eg, diagnosis, concerns, expectations); involves team in sense making
- Original: planning and prioritizing care; monitoring actions/protocol adherence; updating about progress; providing action/correction instructions; anticipating/responding to members' task needs. Final prototype: prioritizes and plans; facilitates efficient task sequencing; gives concrete instructions; limits number of instructions
- Original: selecting and communicating option. Final prototype: thinks aloud (eg, because of X we might need to Y); verifies team consent; explores options/risks with team
- Original: handling communication environment; applying communication standards. Final prototype: concise, loud, and clear; timing based on workload and relevance; handles noise or interruptions; closes communication loop
- Original: coaching and team development (recognizing limits of own competence; stimulating concern reporting; stimulating positive cooperative atmosphere). Final prototype: recognizes own limits; encourages input and responds constructively; balances inclusiveness and directness; anticipates members' needs

*In the original skill taxonomy, each skill element in the briefing phase was also associated with 1 of the 5 skill categories. The specification of categories was excluded from the observation tool as it was perceived as redundant.
†Removed after the final evaluation round.
During the development of the TTLS,20 our focus was on establishing a taxonomy of theoretically sound constructs and hierarchy. However, once developed, skill taxonomies need to be subjected to practical evaluation.21 It remained to be tested whether the TTLS skill elements—when used as standalone items in a cognitive aid—were sufficiently instructive to the clinicians providing scenario training and whether they supported targeted observation and feedback. It also needed to be evaluated whether the tool was sufficiently easy to use in high-fidelity simulation training, as instructors generally balance multiple tasks, such as simulator operation and communicating scenario-related information, while also tracking, processing, and memorizing the leader's actions.
The importance of optimizing the ease of use and usefulness of behavioral marker tools is emphasized by findings that they may require extensive background knowledge and rater training and thus seem usable only by expert raters.14,15,22 It has been suggested that their interface should be better tailored to the clinician and the clinical setting.22 It is recommended that tools be well organized and fit onto one page, but this inherently limits the number of elements and the room for explanations.16 However, too-generic skill descriptions may offer insufficient direction for targeted observations and in-depth feedback.13,23,24 It thus seems that a balance must be struck between a tool's conciseness and its specificity.23 To strike this balance, we subjected the TTLS to practical evaluation in a user-centered, iterative approach. This resulted in the TTLS–Shortened for Observation and Reflection in Training (SHORT): a cognitive aid for observing and debriefing trauma leadership performances, specifically tailored to the vocabulary of clinicians and the workflow and workload during simulation-based training.
The modification of the TTLS into an easy-to-use observation tool for trauma scenario training was carried out in 2 phases of iterative testing. In the first phase, we aimed to improve the skill elements' mutual exclusivity and observability, as well as their clarity to both expert and less-experienced instructors. To achieve this, 3 different testing panels of various trauma care experts performed a behavior coding task using the list of elements—or subsequent iterations—and brief excerpts from videotaped simulation scenarios, after which they provided suggestions for improving the elements. In the second phase, we aimed to further improve the tool's ease of use and usefulness for observations and debriefings by live testing the tool in actual simulation training, taking into account instructors' actual work demands. The live testing took place in multiple advanced trauma life support (ATLS) refresher courses in the Netherlands.25 Both the video-based and live-testing phases consisted of iterative cycles of testing, each cycle involving a new testing panel of subject matter experts and a modified version of the observation tool. The methods used in both phases are explained in further detail below.
Phase 1: Video-Based Testing
To establish a tool that is easy to use by both experienced and less-experienced instructors, we purposefully sampled instructors with varying levels of experience in training nontechnical skills. Both novice and expert instructors were asked to improve the clarity of elements. In addition, the experts were included to safeguard the tool's construct validity during the modifications, whereas the novices were included to minimize expert vocabulary that might be unclear to them. Furthermore, our panelists were selected from differing specialties, to reflect the broad population of clinicians teaching leadership skills. All panelists in the video-based testing were selected from our teaching hospital. Table 2 displays their background and years of experience as an instructor.
TABLE 2 - Demographics of the Testing Panel Members (Instructor Experience, y)

Phase 1. Video-based testing (experience in simulation scenario training)
- Panel #1: 1 anesthetist (author O.J.); 1 nurse anesthetist; 1 emergency nurse; 1 psychologist (author N.L.)
- Panel #2: 2 emergency physicians; 1 trauma surgeon
- Panel #3: 1 surgical resident; 1 trauma surgeon

Phase 2. Live testing (experience in ATLS courses / in simulation scenario training)
- Panel #4: 1 trauma surgeon; 1 emergency physician; 2 anesthetist intensivists; 1 trauma surgeon
- Panel #5: 5 emergency physicians (ATLS courses: 3, 5, 8, 10, 15; simulation scenario training: 0, 0, 10, 10, 11); 6 trauma surgeons (ATLS courses: 5, 6, 10, 15, 15, 22; simulation scenario training: 0, 5, 9, 10, 15, 20); 1 surgeon intensivist; 1 anesthetist intensivist
With an online survey (http://www.qualtrics.com), the panelists were shown 29 short excerpts (60–120 seconds) from 2 videotaped simulation scenarios from the local trauma team training. The videos were originally used to debrief the teams' performances, and the participants shown in them consented to their use for this study. The videotaped scenarios involved 2 actual performances of 2 different teams handling 2 different trauma cases. In the videos, a second-year and a fifth-year surgical resident were the respective team leaders in an 8-person trauma team. The testing panels indicated from the list of 23 skill elements (see the original elements in Table 1) which behavior or behaviors they felt were being shown by the trauma leader. The vignettes included the teams' briefing (6 vignettes) and patient handling (23 vignettes) and covered all the skill elements from the TTLS. Multiple answers per vignette were allowed to ensure that overlapping or ambiguous items would be revealed. Elements that were found to be unobservable or ambiguous were noted. After the task, N.L. asked the panel members what they felt was meant by the elements, to assess their clarity. Elements were discussed among the panel members and combined, rephrased, or split up into more concrete subcomponents. With each iteration of the list of elements, feedback was also collected on whether any salient or exemplary leadership behaviors were missing. After round 3, the modified elements were arranged into a first prototype observation tool.
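The coding task above lends itself to a simple co-selection analysis: element pairs that panelists frequently tick together for the same vignette are candidates for merging or rephrasing. A minimal sketch of such a tally follows; the responses and the flagging threshold are invented for illustration and are not data from the study.

```python
from collections import Counter
from itertools import combinations

# Each entry: the set of skill elements one panelist ticked for one vignette.
# These responses are hypothetical, for illustration only.
responses = [
    {"updating about progress", "thinks aloud"},
    {"updating about progress", "thinks aloud"},
    {"monitoring actions"},
    {"thinks aloud", "involves team in sense making"},
    {"updating about progress", "thinks aloud"},
]

# Count how often each pair of elements was ticked together.
pair_counts = Counter()
for ticked in responses:
    for pair in combinations(sorted(ticked), 2):
        pair_counts[pair] += 1

# Flag pairs co-selected in a sizeable share of responses as potentially
# overlapping or ambiguous (the 50% cutoff is chosen arbitrarily here).
threshold = 0.5 * len(responses)
ambiguous = [pair for pair, n in pair_counts.items() if n >= threshold]
print(ambiguous)  # [('thinks aloud', 'updating about progress')]
```

Such a tally would only flag candidates; the panels' discussions remain the basis for deciding whether flagged elements are combined, rephrased, or split.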
Phase 2: Live Testing and Finalizing the Tool
Phase 2 was conducted in the ATLS refresher course in the Netherlands. The course included a focus on nontechnical skill use in a variety of trauma cases (eg, hypothermia, intoxication, burns, and injuries to the head, neck, spine, chest, abdomen, and extremities). The instructors were consultants of differing specialties from hospitals across the Netherlands (Table 2). Brief simulation scenarios were run by dyads of instructors, with trainees taking turns to act as the standardized patient. The simulation room contained a trauma table, a cart, a tablet-controlled patient monitor, and procedure packs. Imaging results (eg, FAST ultrasound) were displayed on the patient monitor. The trainees (ie, physicians from various specialties) practiced in teams of 5, rotating the roles of team leader, consultant, nurse, and scribe.
In the first live-testing round, 6 ATLS instructors used the sheet during 3 consecutive scenarios to collect impressions of the trauma leaders' performances and to debrief the scenarios. The instructors received the prototype and instructions in advance, plus an additional 30-minute verbal instruction before the course. They were instructed to collect impressions on as many items as possible; note taking was optional. They were not instructed to debrief in any particular way but were asked to use the sheet for reference. At the end of the 1-day course, the 6 instructors filled out a questionnaire regarding the tool's clarity, completeness, ease of use, usefulness, and impact on their workload (Figs. 1 and 2 display the items). Answers were given on a 3-point scale (no, moderately, or yes). In a subsequent group discussion, N.L. asked them about their experiences and recommendations. Their feedback was used to further improve the tool's usability.
The procedure was repeated for a new iteration of the tool by an additional 16 instructors. They observed and debriefed between 3 and 8 live performances on a single day. They, too, filled out the evaluation questionnaire described previously. We strove for 70% positive evaluations ("moderately" and "yes") per item, given that the instructors had only limited opportunity to practice with the tool. As this round yielded no new suggestions for reformulation or clarification, it was the final evaluation round.
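The 70% criterion amounts to simple per-item arithmetic: the share of "moderately" and "yes" answers must reach the cutoff. A sketch of that computation, with invented responses rather than the study's actual questionnaire data:

```python
# Per-item responses on the 3-point scale (no / moderately / yes).
# These responses are hypothetical, for illustration only.
item_responses = {
    "clarity":     ["yes", "yes", "moderately", "no", "yes"],
    "ease of use": ["moderately", "no", "no", "yes", "moderately"],
}

POSITIVE = {"moderately", "yes"}

def meets_criterion(responses, cutoff=0.70):
    """Return True if the share of positive answers reaches the cutoff."""
    share = sum(r in POSITIVE for r in responses) / len(responses)
    return share >= cutoff

for item, responses in item_responses.items():
    print(item, meets_criterion(responses))
# clarity: 4/5 = 80% -> True; ease of use: 3/5 = 60% -> False
```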
Overview of Modifications After Video-Based and Live Testing
Based on the feedback, the observation tool was given a simpler structure: instead of the original 3 levels (ie, specifying skills at category, element, and example level), we adopted a 2-level structure by omitting the third, example level. This significantly reduced the amount of written text on the sheet. To ensure that the remaining elements on the sheet were sufficiently instructive without the example level, they were made more specific. For instance, "handling communication environment" was changed to "handles noise, distractions, or interruptions." In addition, a number of elements were replaced by the more concrete examples from the example level. For instance, the original "applying communication standards" was replaced by its examples "concise, loud, and clear," "timing based on workload and relevance," and "closes communication loop." It was also decided to have the sheet support feedback at the category level: note-taking fields were provided for entire categories, encouraging users to note the most important observations per category.
Three elements were added: "limits number of instructions" reflects an important aspect of managing workloads, whereas "verifies consent" and "explores options/risks with team" reflect that the requirements of team decision making may vary with the context. Table 1 compares the skill elements in the original taxonomy and the feedback sheet. The final prototype is displayed in Table 3.
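The resulting 2-level sheet can be pictured as categories that group concrete elements, with one free-text note field per category. A minimal, abridged sketch of that structure (category and element names are taken from Table 1; the note text is invented):

```python
from dataclasses import dataclass

@dataclass
class Category:
    """One skill category on the sheet, with its concrete elements."""
    name: str
    elements: list[str]
    notes: str = ""  # one free-text note field per category

# Abridged sketch of the 2-level structure (2 of the categories shown).
sheet = [
    Category("Communication management", [
        "concise, loud, and clear",
        "timing based on workload and relevance",
        "handles noise or interruptions",
        "closes communication loop",
    ]),
    Category("Coaching and team development", [
        "recognizes own limits",
        "encourages input and responds constructively",
        "balances inclusiveness and directness",
    ]),
]

# An instructor jots one key observation per category, not per element.
sheet[0].notes = "Loop closure missed twice during primary survey"
print(sum(len(c.elements) for c in sheet))  # 7
```

The design choice this mirrors: notes attach to categories rather than to individual elements, which keeps writing effort low while the element lists still direct the observer's attention.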
Evaluation of the Final Prototype
Overall, the final prototype of the TTLS-SHORT was evaluated as containing the most important nontechnical skills for a trauma leader, as indicated by almost all (14 of 16) ATLS instructors. It was perceived as helpful at different stages of simulation-based training: in advance, by helping set performance expectations; during the scenario, by guiding observations and identifying feedback points; and during the debriefing, by offering a structure and reminders of key observations. Comments regarding the utility of the TTLS-SHORT included that it offered excellent example behaviors to look out for and that it helped with putting a name to observations and with being more critical and precise in evaluating performances. Interestingly, 9 of 15 ATLS instructors indicated that the tool provided them with behaviors to look out for that they had not explicitly considered before. Furthermore, half of the ATLS instructors indicated that the tool was easy to use, commenting that it was concise and clear and offered instructive references to the behaviors of interest. The remaining instructors were moderately positive and remarked that its practicality could be further improved by reducing the number of elements, as this would provide an even leaner overview of key elements. Some instructors suggested omitting "applies guidelines," which we did. Figure 1 summarizes the ATLS instructors' evaluations of the final tool.
With regard to the question of whether using the TTLS-SHORT as a cognitive aid would influence instructors' job demands, 8 of 15 responses indicated that it made their job easier (Fig. 2). The other 7 respondents experienced neither positive nor negative effects. An explanation might be that our instructions were too brief for some instructors to become sufficiently familiar with the tool. This notion seems supported by the fact that all respondents indicated a desire to use the tool more often, with comments that further practice would likely increase familiarity and ease of use.
We also asked the instructors their opinion regarding the tool's potential to grade performances. Four instructors felt that giving grades would make their job easier, whereas an equal number felt that it would make their job harder, as it added a task that did not contribute to the current debriefing practice (Fig. 2). Seven instructors felt that giving grades would not change work demands.
In this article, we addressed the need for an observation tool that focuses specifically on trauma leadership skills and that supports the conduct of performance observations and feedback during simulation-based scenario training. We used the content and structure of our previously developed TTLS as a valid starting point for a tool optimized for "in-action" observations. After multiple practical testing rounds by trauma instructors, we adopted a simpler structure that could be consulted more easily during in-action training situations. The original skill elements were also translated into more concrete and self-explanatory descriptions that better align with clinicians' vocabulary and increase the specificity of feedback.
Because of its specific focus on trauma leadership, the TTLS-SHORT is an important addition to other trauma assessment tools, such as the nontechnical skills scale for trauma19 and the trauma team performance observation tool.2 It shares with previous trauma team assessment tools an emphasis on the leader's tasks in structuring and briefing the team, coordinating actions and information, and facilitating team problem solving. Importantly, however, the TTLS-SHORT adds a level of specificity by providing a number of supplemental, concrete descriptions of how the team leader can fulfill these tasks (eg, "summarizes regularly," "thinks aloud"). Our tool further emphasizes responsibilities in managing the effectiveness of communication and in maintaining a supportive team climate. Because this level of specificity supports concrete directions for targeted practice and offers a plain vocabulary to share with trainees, the TTLS-SHORT can facilitate the training of expert trauma leaders.
The validity of the included elements is supported by previous studies, which have shown that, for instance, sharing and assessing information out loud enhance teams' coordination, sense making, and decision making.26–28 Moreover, multiple elements can be seen to reflect “inclusive” leadership behaviors (eg, “involves team in sense making,” “explores options/risks with team,” “verifies team consent”). This is defined as the “words and deeds by a leader that indicate an invitation and appreciation for others' contributions.”29 It promotes psychological safety, speaking up, team learning, and engagement in quality improvement.29–32 However, it has also been noted that more directive leadership can be a complementary strategy under specific circumstances, such as when trauma cases involve severe injuries.33 The TTLS-SHORT's element “balances inclusiveness and directness” encourages joint reflections during debriefings on how to strike a balance when, for instance, after a team discussion, the team remained indecisive regarding the weighing of risks.
Decisions in the Development of the TTLS-SHORT
During the modification of the TTLS into the TTLS-SHORT, one challenge was striking a balance between specificity and conciseness: specificity (ie, breaking skill elements down into more specific behaviors) was needed to instill in the instructors concrete representations of what to look out for, whereas conciseness (ie, maintaining the more generic skill descriptions) was needed to achieve a lean overview. In the final TTLS-SHORT, the number of elements increased from 23 to 31 (although the overall amount of text was significantly reduced). Our testing panels valued concrete descriptions over generic ones, even if this entailed an increase in the number of elements. This resonates with Tavares and Eva's23 notion that evaluation items should invoke clear images of the behaviors they represent, or risk that a significant amount of evaluators' processing capacity is spent on retrieving the items' meaning and benchmarks from memory. There are, however, limits to the number of performance dimensions that can be attended to accurately in one performance.23 It might be that our panelists deemed the increase in the number of elements acceptable because they were not asked to address them all individually, but rather to view them as examples of the categories.
Interestingly, we could not achieve full consensus among our testing panels regarding the length of the tool, with some suggesting that the number of items could be further reduced. This might reflect differing expectations regarding the use of the tool. Some instructors might prefer the tool to be highly instructive, for instance, to facilitate their personal learning process in evaluating nontechnical skills or to enable detailed observations of a specific skill category. Others might prefer more generic items that function as quick references to the more detailed behaviors with which they are already familiar. We decided to align the tool slightly more with those who prefer specificity, to preserve its ability to "instruct the instructor" and to allow more flexibility in prioritizing observation points.
Our initial aim was to include concrete behavioral descriptions and clear norms for good leadership. However, a number of included elements do not entirely meet these qualifications (eg, "recognizes own limits," "balances inclusiveness and directness"). The TTLS-SHORT was intended as a cognitive aid to support debriefings, referring to debriefing practices wherein both the instructor and the trainees are involved in evaluating the performance and deriving lessons from it.34,35 Although these conversations certainly benefit from concrete behavioral descriptions and clear norms, important discussions might not take place if salient but less-observable elements were omitted. Debriefings provide a platform to explore trainees' considerations underlying their performances, which can be particularly helpful for constructs that are not necessarily observable (eg, "anticipates members' needs"), do not involve a clear norm (eg, "number of instructions"), or may be experienced differently by team members (eg, "balances inclusiveness and directness"). Although these elements may be less appropriate for objectively grading performances, they can serve as valuable prompts in learning conversations.
The TTLS-SHORT distinguishes between skills for the briefing phase and the patient handling phase. This is an important distinction from most other marker systems, which provide overall evaluations of the entire performance.36 Including the briefing as a distinct phase is important because trainees' level of performance (eg, task coordination) can vary across phases, and recording specific examples can help prevent recall bias or the diluting effect of overall impressions.37 In addition, explicating when to focus on which behaviors helps reduce instructors' cognitive load.36
Strengths, Limitations, and Future Research
Previous evaluation studies of assessment tools vary in the number of practice opportunities offered, ranging from none to multiple practice sessions.16,19,38–41 We purposely restricted the amount of training with the tool before testing, to reflect the reality that practitioners generally have received limited training in nontechnical skill evaluation. Based on our testing panels' positive evaluations, we conclude that our tool can be applied relatively intuitively.
We did not instruct the instructors during the live testing on how exactly to integrate the tool into their debriefing practices, so as to interfere minimally with the usual proceedings of the training. Consequently, we observed that some instructors incorporated their observations and notes into their usual style of debriefing, whereas others structured the debriefing around the skill categories. This may have led to differing perceptions of the tool's usability. We suggest that instructors maintain their established debriefing techniques, such as advocacy/inquiry or plus/delta, and use the TTLS-SHORT to aid the formulation of feedback or inquiries. This is best achieved when instructors mark the skill elements or categories they wish to cover in the debriefing and keep at hand the notes that will help them remember specific details.
The evaluation of the TTLS-SHORT focused on our testing panels' perceptions of clarity, ease of use, and usefulness, as these data were critical in aligning the tool with clinicians' vocabulary and the workload demands during simulation training. In addition, we primarily used qualitative feedback from our panelists, as this granted us the most specific information for identifying areas in which to improve the tool. Subsequent work should focus on observable changes in instructors' ability to identify and reflect on trainees' learning points. Areas of interest include whether using the tool enhances the specificity of recommended or appreciated behaviors in debriefings. Furthermore, our present focus was on the tool's application in conversational debriefing practices in simulation-based training. There is also a growing need for valid and reliable measurement of leadership performance,36 for instance, to benefit research and the formal measurement of progress within educational programs. Future work could explore the extent to which the TTLS-SHORT offers a basis for a sensitive grading tool that facilitates reliable performance measurement.
A comparison of the TTLS-SHORT's items with those summarized in a review of leadership assessment tools across various health care action teams18 shows that we included almost all elements identified in the review. This suggests not only that leadership serves similar functions across contexts but also, more importantly, that the TTLS-SHORT's skill categories capture those functions well. We foresee that the TTLS-SHORT will be useful for developing targeted training interventions, and we believe that it also offers a valid starting point for further research into the similarities and differences of leadership requirements across healthcare domains. As the current variety of terminology and definitions of leadership hampers a more systematic analysis of leadership across healthcare domains,36 such work would be valuable from both a theoretical and a practical (training) perspective.
Applying the TTLS-SHORT
With the TTLS-SHORT, we laid the foundation for targeted observations and feedback, but it is advised that instructors receive training in the use of the tool, as this could further their ability to reflect on nontechnical performance.37 In addition, the ease of use of the TTLS-SHORT can be enhanced by prioritizing skill categories or elements for evaluation per scenario. This would lower workload and heighten the specificity of feedback.
The TTLS-SHORT was specifically designed to support instructors in conducting simulation-based trauma leadership training. The tool can be consulted to set performance expectations at the onset of training—preferably together with the trainees. It directs attention, supports note taking, and provides a helpful framework to discuss performances. The positive evaluations of the tool's content validity, ease of use, and usefulness suggest that the TTLS-SHORT is a valid tool for raising the quality of trauma leadership training.
The authors thank the participants who took part in this study and the Dutch Advanced Life Support Group for granting their support in testing and improving the TTLS-SHORT.
1. Arora S, Menchine M, Demetriades D, et al. Leadership and teamwork in trauma and resuscitation. West J Emerg Med
2. Capella J, Smith S, Philp A, et al. Teamwork training improves the clinical care of trauma patients. J Surg Educ
3. Cooper S. Developing leaders for advanced life support: evaluation of a training programme. Resuscitation
4. Manser T. Teamwork and patient safety in dynamic domains of healthcare: a review of the literature. Acta Anaesthesiol Scand
5. Cole E, Crichton N. The culture of a trauma team in relation to human factors. J Clin Nurs
6. Klein KJ, Ziegert JC, Knight AP, Xiao Y. Dynamic delegation: shared, hierarchical, and deindividualized leadership in extreme action teams. Adm Sci Q
7. Künzle B, Kolbe M, Grote G. Ensuring patient safety through effective leadership behaviour: a literature review. Saf Sci
8. Hjortdahl M, Ringen AH, Naess AC, Wisborg T. Leadership is the essential non-technical skill in the trauma team - results of a qualitative study. Scand J Trauma Resusc Emerg Med
9. Larsen T, Beier-Holgersen R, Meelby J, Dieckmann P, Østergaard D. A search for training of practising leadership in emergency medicine: a systematic review. Heliyon
10. Bond WF, Lammers RL, Spillane LL, et al. The use of simulation in emergency medicine: a research agenda. Acad Emerg Med
11. Ringen AH, Hjortdahl M, Wisborg T. Norwegian trauma team leaders—training and experience: a national point prevalence study. Scand J Trauma Resusc Emerg Med
12. Hull L, Arora S, Symons NRA, et al. Training faculty in nontechnical skill assessment. Ann Surg
13. Flin R, Patey R, Glavin R, Maran N. Anaesthetists' non-technical skills. Br J Anaesth
14. Graham J, Hocking G, Giles E. Anaesthesia non-technical skills: can anaesthetists be trained to reliably use this behavioural marker system in 1 day? Br J Anaesth
15. Yule S, Rowley D, Flin R, et al. Experience matters: comparing novice and expert ratings of non-technical skills using the NOTSS system. ANZ J Surg
16. Yule S, Flin R, Paterson-Brown S, Maran N, Rowley D. Development of a rating system for surgeons' non-technical skills. Med Educ
17. Flowerdew L, Brown R, Vincent C, Woloshynowych M. Development and validation of a tool to assess emergency physicians' nontechnical skills. Ann Emerg Med
18. Rosenman ED, Ilgen JS, Shandro JR, Harper AL, Fernandez R. A systematic review of tools used to assess team leadership in health care action teams. Acad Med
19. Steinemann S, Berg B, DiTullio A, et al. Assessing teamwork in the trauma bay: introduction of a modified “NOTECHS” scale for trauma. Am J Surg
20. Leenstra NF, Jung OC, Johnson A, Wendt KW, Tulleken JE. Taxonomy of trauma leadership skills: a framework for leadership training and assessment. Acad Med
21. Henrickson Parker S, Flin R, McKinley A, Yule S. The Surgeons' leadership inventory (SLI): a taxonomy and rating system for surgeons' intraoperative leadership skills. Am J Surg
22. Watkins SC, Roberts DA, Boulet JR, McEvoy MD, Weinger MB. Evaluation of a simpler tool to assess nontechnical skills during simulated critical events. Simul Healthc
23. Tavares W, Eva KW. Exploring the impact of mental workload on rater-based assessments. Adv Health Sci Educ
24. Kolbe M, Burtscher MJ, Manser T. Co-ACT - a framework for observing coordination behaviour in acute care teams. BMJ Qual Saf
25. Advanced Life Support Group Netherlands. Available at: https://atls.nl. Accessed May 10, 2019.
26. Larson JR, Christensen C, Abbott A, Franz TM. Diagnosing groups: charting the flow of information in medical decision-making teams. J Pers Soc Psychol
27. Tschan F, Semmer NK, Gurtner A, et al. Explicit reasoning, confirmation bias, and illusory transactive memory: a simulation study of group medical decision making. Small Group Res
28. Tschan F, Semmer NK, Gautschi D, Hunziker P, Spychiger M, Marsch SU. Leading to recovery: group performance and coordinative activities in medical emergency driven groups. Hum Perform
29. Nembhard IM, Edmondson AC. Making it safe: the effects of leader inclusiveness and professional status on psychological safety and improvement efforts in health care teams. J Organ Behav
30. Edmondson AC. Speaking up in the operating room: how team leaders promote learning in interdisciplinary action teams. J Manag Stud
31. Nacioglu A. As a critical behavior to improve quality and patient safety in health care: speaking up! Saf Health
32. Hu YY, Parker SH, Lipsitz SR, et al. Surgeons' leadership styles and team behavior in the operating room. J Am Coll Surg
33. Yun S, Faraj S, Sims HP. Contingent leadership and effectiveness of trauma resuscitation teams. J Appl Psychol
34. Rudolph JW, Simon R, Rivard P, Dufresne RL, Raemer DB. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin
35. Sawyer T, Eppich W, Brett-Fleegler M, Grant V, Cheng A. More than one way to debrief. Simul Healthc
36. Dietz AS, Pronovost PJ, Benson KN, et al. A systematic review of behavioural marker systems in healthcare: what do we know about their attributes, validity and application? BMJ Qual Saf
37. Jepsen RMHG, Østergaard D, Dieckmann P. Development of instruments for assessment of individuals' and teams' non-technical skills in healthcare: a critical review. Cogn Technol Work
38. Jepsen RMHG, Dieckmann P, Spanager L, et al. Evaluating structured assessment of anaesthesiologists' non-technical skills. Acta Anaesthesiol Scand
39. Fletcher G. Anaesthetists' non-technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth
40. Cooper S, Cant R, Connell C, et al. Measuring teamwork performance: validity testing of the TEAM emergency assessment measure (TEAM) with clinical resuscitation teams. Resuscitation
41. Walker S, Brett S, McKay A, Lambden S, Vincent C, Sevdalis N. Observational skill-based clinical assessment tool for resuscitation (OSCAR): development and validation. Resuscitation