Simulation in Interprofessional Clinical Education: Exploring Validated Nontechnical Skills Measurement Tools

von Wendt, Carl Eugene Alexander, MD; Niemi-Murola, Leila, MD, PhD, MME

doi: 10.1097/SIH.0000000000000261
Review Article

Summary Statement: The research literature regarding interprofessional simulation-based medical education has grown substantially and continues to explore new aspects of this educational modality. The aim of this study was to explore the validation evidence of tools used to assess teamwork and nontechnical skills in interprofessional simulation-based clinical education. This systematic review included original studies that assessed participants’ teamwork and nontechnical skills, using a measurement tool, in an interprofessional simulated setting. We assessed the validity of each assessment tool using Kane’s framework. Medical Education Research Study Quality Instrument scores for the studies ranged from 8.5 to 17.0. Across the 22 included studies, there were 20 different assessment strategies, of which only the Team Emergency Assessment Measure, Anesthetists’ Nontechnical Skills, and Nontechnical Skills for Surgeons were used more than once. Most assessment tools have been validated for the scoring and generalization inferences. Fewer tools have been validated for the extrapolation inference, such as through expert-novice analysis or factor analysis.

From the Department of Anaesthesiology and Intensive Care Medicine, University of Helsinki and Helsinki University Hospital, Helsinki, Finland.

Reprints: Carl Eugene Alexander von Wendt, MD, Department of Anaesthesiology and Intensive Care Medicine, University of Helsinki, Helsinki, Finland (e-mail: alexander.vonwendt@gmail.com).

This work is attributed to the Department of Anaesthesiology and Intensive Care Medicine, University of Helsinki.

The authors declare no conflict of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s Web site (www.simulationinhealthcare.com).

The research literature regarding simulation-based medical education (SBME) has grown substantially, and interprofessional simulation-based clinical education in particular has become widely used. Full-scale medical simulation has been shown to be an effective educational modality that provides a safe and realistic learning environment without compromising patient safety, and it has been used successfully in various medical specialties to teach medical content and important clinical skills.1–13

Interest in interprofessional education (IPE) has been growing in the field of medicine, and it has become an attractive form of SBME. We used the IPE definition provided by the Centre For The Advancement Of Interprofessional Education,14 with the addition of simulation. It defines IPE as “educational opportunities where two or more professions learn with, about and from each other to improve the collaboration and quality of care.” Simulation IPE allows multiple professions to come together and, through simulation, learn collaboration and teamwork. It has been claimed that this helps interprofessional healthcare teams provide more effective patient care and allows professionals to reflect on their typical collaboration.15 A growing body of literature has shown that effective teamwork results in better patient outcomes and is important to achieving optimal patient care.16–24 This also applies to medical crises.24–26 The factors that make teamwork effective have been documented in earlier studies.27–29 These aspects include structured communication,30 effective assertion,31 active information sharing and a shared understanding (“a shared mental model”),32,33 psychological safety,34,35 situational awareness,36 and effective leadership behaviors.37

Although SBME has been shown to be an effective educational modality, some systematic reviews have found the scientific quality of earlier studies to be suboptimal. This was usually due to suboptimal validation reporting of the included studies.5–7,12,13 The Medical Education Research Study Quality Instrument (MERSQI) was created to assess the scientific quality of medical education studies.38 This measurement tool has been used in earlier systematic reviews5,6,12 and has been found to be a valid and reliable tool for scientific quality measurement.38 In this study, we systematically reviewed the available literature regarding interprofessional SBME with a special focus on teamwork and nontechnical skills (NTSs). The purpose of this article was to determine how the included studies measured teamwork and NTSs and to analyze the validation evidence of the identified assessment tools, with the aim of informing future research. We also assessed the scientific quality of the included studies using the MERSQI tool.


Objectives

This systematic review aims to answer the following questions: (1) What assessment strategies have been used to assess teamwork in interprofessional, simulation-based clinical education? (2) How have these assessment strategies been validated?


METHODS

This article was reported in adherence to the “Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement.”39

We included research articles published in English that studied how high-fidelity simulation education was used to teach and assess teamwork and NTSs in interprofessional medical teams. The participants could be at any stage of training or practice, and the outcome of interest was teamwork and NTS (corresponding to knowledge and skills, ie, the second level of educational outcomes, learning)40 as objectively assessed with a measurement tool.

We systematically searched the literature databases freely available to the Helsinki University National Library of Health Sciences (Ovid MEDLINE, 1946–December 2015; Web of Science, 1945–December 2015; and the Cochrane Library, 2015). The last search was run in December 2015. We used the search terms “interprofessional,” “interdisciplinary,” “multiprofessional,” “multidisciplinary,” “team,” “medical education,” “nursing education,” “high fidelity simulation,” “high-fidelity simulation,” and “simulation based medical education.” We then combined these search terms using the Boolean operators “OR” and “AND” to form the final search query. We used RefWorks to store our abstracts and to remove duplicates using the program’s built-in duplicate removal function. We also manually searched issues of Simulation in Healthcare from the past 5 years. During this search, we found 1 relevant article41 that had already been included at an earlier stage, and it was therefore removed as a duplicate.
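For illustration only, one plausible grouping of these terms (the exact nesting applied in each database is not detailed here, so this combined string is an assumption rather than the exact query we ran) would be:

    ("interprofessional" OR "interdisciplinary" OR "multiprofessional" OR "multidisciplinary" OR "team")
    AND ("medical education" OR "nursing education")
    AND ("high fidelity simulation" OR "high-fidelity simulation" OR "simulation based medical education")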

After duplicate removal, we had a final set of 522 abstracts that we analyzed for inclusion based on an exclusion algorithm (Fig. 1). This algorithm included 10 criteria for exclusion: “no access,” “duplicate,” “article not in English,” “article not a medical science study,” “study did not include simulation,” “overviews and/or descriptions,” “congress and/or meeting abstracts,” “reviews,” “not interprofessional,” and “study did not evaluate NTSs objectively using an assessment tool.” The last criterion excluded studies that measured NTS only through self-assessment or questionnaire forms. Two assessors (C.E.A.v.W., L.N.M.) read the articles, and differences were resolved by discussion. After this process, we were left with 22 studies that were included for review (see Document, Supplementary Digital Content 1, http://links.lww.com/SIH/A344, bibliography of included studies).

FIGURE 1


Analysis

The data extracted were the medical topic and medical field, the study’s country of origin, the NTS assessment strategy, the different domains of MERSQI (study design, sampling, type of data, validity reporting, data analysis, and outcomes), the aim of the study, and the study’s main outcome. The last 2 items (study aim and main outcome) were gleaned directly from each article and simplified into a single sentence for ease of use. It was also determined whether the assessment tool used was a previously established tool (ie, conceived and described/used in earlier studies) or a new tool developed for the study in question. If a previously established tool was used but modified for the study in which it was used, it was counted as a new tool.

After identifying the assessment tools used, we also collected information on their validity, either from the included studies themselves or from previous validation studies. We used Kane's42,43 framework for validity evidence to provide structure to our collection and analysis. Only studies that reported specific validity values scored any MERSQI points for validity. The different tools and their validation were then compared with one another.

The MERSQI tool was used to assess the quality of the included studies. Our usage of MERSQI corresponds closely to that of Reed et al38 in their original article, with a few exceptions as explained hereinafter. Earlier reviews have used different groupings for the total MERSQI score.5,6,12 We chose 12 as our primary cutoff point and categorized studies as “methodologically good” (MERSQI score >12) and “methodologically great” (MERSQI score >14). Although the MERSQI tool allows for the use of “not applicable (N/A),” we opted instead to use the “not reported” option where applicable. This was done to obtain a numerical score across all the items included in MERSQI. In addition, we defined item 8 (“appropriateness of analysis”) and item 9 (“complexity of analysis”) as “comparison between results and study question” and “study result in relation to the scientific literature,” respectively. After the data were extracted and compiled, descriptive statistical values (“mean,” “standard deviation,” “variance,” and “median”) were calculated for the MERSQI scores.
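As a minimal sketch of how these descriptive values and quality categories were derived, the following Python snippet uses a hypothetical vector of per-study total MERSQI scores (placeholder values only; the extracted scores for each study are reported in Supplemental Table 1, so the printed output will only approximate the published summary statistics):

    import statistics

    # Placeholder total MERSQI scores for 22 studies (illustrative only, not the extracted data).
    mersqi_scores = [8.5, 8.5, 9.0, 9.5, 9.5, 10.0, 10.5, 11.0, 11.0, 11.0, 11.0,
                     11.5, 11.5, 12.0, 12.5, 13.0, 13.5, 13.5, 14.5, 15.0, 16.0, 17.0]

    mean = statistics.mean(mersqi_scores)          # published value: 11.8
    sd = statistics.stdev(mersqi_scores)           # published value: 2.5
    variance = statistics.variance(mersqi_scores)  # published value: 6.3
    median = statistics.median(mersqi_scores)      # published value: 11.25

    # Quality categories used in this review.
    good = sum(score > 12 for score in mersqi_scores)   # "methodologically good"
    great = sum(score > 14 for score in mersqi_scores)  # "methodologically great"

    print(f"mean={mean:.1f}, SD={sd:.1f}, variance={variance:.1f}, median={median}")
    print(f"good (>12): {good} studies, great (>14): {great} studies")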


RESULTS

The data extracted from each individual study are summarized in Supplemental Table 1 (see Table, Supplementary Digital Content 2, main results; http://links.lww.com/SIH/A372) and Supplemental Table 2 (see Table, Supplementary Digital Content 3, validation results; http://links.lww.com/SIH/A373). Our results are presented in Supplemental Table 2 and Figure 2, and the main results are summarized hereinafter. Twenty-two studies in total were included for review; 5 (~23%) of these were randomized controlled trials (RCTs), 11 (50%) were posttest-only descriptive studies, 4 (~18%) used pre- and posttest designs, and 2 (~9.1%) were nonrandomized 2-group studies. The majority of the included studies were single-institution studies (17, ~77%); only 5 studies (~23%) were multicenter studies involving 2 or more institutions. Because we only included studies in which a facilitator or a faculty member assessed NTS using a measurement strategy, all studies scored 3 on “type of data.” Validity of the measurement tool was reported in 13 studies (~59%); 8 of these studies reported all validity items (internal structure, content, and relationship to other variables).

FIGURE 2

Because we included studies that evaluated teamwork and NTS, most studies (18, ~82%) scored 1.5 on “outcomes.” Two studies41,44 examined NTS in the workplace and therefore scored “2” on outcomes. One study21 examined the impact on “patient/healthcare outcomes” and thus scored “3” on outcomes. The total MERSQI score for the included studies ranged from 8.5 to 17.0 with a mean score of 11.8 (SD, 2.5; variance, 6.3) and a median of 11.25. Because the total MERSQI scores showed moderate variance, we also report the median, which is less affected by outlying results.

Across the 22 included studies, there were 20 different strategies used to assess NTS and teamwork. A comprehensive list of all the tools is presented in Supplemental Table 2. Twenty of the included studies used quantitative assessment strategies. Of these, 16 studies used a single assessment tool in their assessment of teamwork and NTSs, whereas 4 studies45–48 used 2 or more different assessment tools. Three tools (Team Emergency Assessment Measure [TEAM], Anesthetists’ Nontechnical Skills [ANTS], and Nontechnical Skills for Surgeons [NOTSS]) were used more than once; the other assessment tools were used only once. Twelve (~55%) of the included studies used previously established NTS assessment tools, whereas the other 10 studies used newly developed or modified assessment strategies.

Two studies49,50 used a qualitative methodology to assess NTS and teamwork, one being a “template analysis approach” and the other “language patterns analysis.” The scientific rigor of these studies was reported in different ways. One of these studies50 reported arguments for their sampling method, whereas the other49 reported a “high degree of consistency” between the 2 independent raters.

Most of the included assessment tools have been validated for the scoring and generalization inferences. Fewer tools have been validated for the extrapolation inference, such as through expert-novice analysis or factor analysis. For scoring, the process of item development was explained for 12 of the assessment tools, details about the raters were reported for 11, and evidence for content validation was reported for 7. For generalization, rater agreement was reported for 13 of the assessment tools and internal consistency for 9. The reported rater agreement and internal consistency values were all above 0.7 (acceptable), with most values above 0.8 (good). For extrapolation, expert-novice analysis was reported for 5 of the assessment tools and factor analysis for 4. Many authors also provided descriptive details about their simulators to argue for the simulation’s authenticity.
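As an illustration of one of these generalization-inference statistics, the sketch below computes Cronbach’s alpha for a small, invented rating matrix (rows are rated teams, columns are items of a hypothetical NTS tool); neither the data nor the item structure come from any included study:

    import numpy as np

    def cronbach_alpha(ratings: np.ndarray) -> float:
        """Cronbach's alpha for a (teams x items) matrix of ratings."""
        k = ratings.shape[1]                          # number of items on the tool
        item_vars = ratings.var(axis=0, ddof=1)       # variance of each item across teams
        total_var = ratings.sum(axis=1).var(ddof=1)   # variance of the total score
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Invented ratings: 6 simulated teams scored on a 4-item tool (1-5 scale).
    ratings = np.array([
        [4, 4, 3, 4],
        [5, 4, 4, 5],
        [2, 3, 2, 2],
        [3, 3, 3, 4],
        [5, 5, 4, 5],
        [3, 2, 3, 3],
    ])

    # Values above 0.7 are conventionally read as acceptable and above 0.8 as good.
    print(f"Cronbach alpha = {cronbach_alpha(ratings):.2f}")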

With the exception of 2 studies that provided limited validation evidence,21,51 the included studies reported validation evidence for their quantitative assessment strategy within the same article or cited an associated article describing it. The 2 exceptions did not report the validation of their assessment tool in the included study, and we could not find, or could not gain access to, their validation studies. The Kramer and Schmalenberg Nurse-Physician Scale52,53 (KSNPS) was also reported48 to be reliable and validated, but we were unable to gain access to the validation study. One study45 referred to the validation of 1 of its assessment tools (Team Performance During Simulated Crisis Instrument [TPDSCI]) but not the other (Crisis Resource Management [CRM] checklist).

There were also other validation parameters reported across the included assessment strategies. These parameters were used infrequently and were therefore not included in Supplemental Table 2. The Pearson correlation coefficient was reported for 4 assessment strategies,44,46,54,55 and the TEAM assessment tool has been validated in earlier studies56,57 using the Content Validity Index (CVI) and Spearman ρ.


DISCUSSION

The 22 included studies used 20 different assessment tools. These tools were used across a wide range of medical fields, including surgery, anesthesia, internal medicine, obstetrics and gynecology, pediatrics, and emergency medicine. A small majority of the assessment tools were well-established tools; the others were newly created or modified for the specific study. Of all the assessment tools, TEAM,56 NOTSS,58 and ANTS59 were the most frequently used (each was used in more than 1 study). The wide variety of assessment tools is possibly a result of the numerous medical fields represented in the included studies and the authors’ desire to use a modified assessment strategy specifically tailored to their medical field of interest.

The study44 with the highest MERSQI score used the Clinical Teamwork Scale (CTS) to assess the NTS of its participants. The CTS was developed to assess the teamwork performance of interprofessional teams in high-fidelity simulated obstetric emergency scenarios. The validity of the CTS60 has been analyzed through a wide range of validity parameters. The process of item development, and details about the raters and their training, are presented by Guise et al.60 Rater agreement, measured by κ statistics, was substantial. Reliability of the tool’s ratings was also examined by estimating the variance of each component based on generalizability theory. The tool was originally validated60 in a simulated scenario performed by scripted actors, and therefore the authenticity of the context could be challenged; the performance observed in that setting might not reflect desired real-life clinical performance. Based on this evidence, the CTS has only been validated in an obstetric emergency setting, and more evidence is needed to support the assumption that CTS scores translate into meaningful real-life performance.

The TEAM was used in 2 of the included studies. It was developed to assess teamwork performance by interprofessional teams in real and simulated emergencies. The validity and reliability of TEAM56,57 have been analyzed through many different parameters (Cronbach α, intraclass correlation coefficient [ICC], CVI, Cohen’s κ, and Spearman ρ, among others). The TEAM has been analyzed for face and content validity by an international panel of resuscitation experts and has been shown to have good internal consistency and fair interrater reliability. The authors intended TEAM for use in emergency situations in general; however, the tool was only validated in pediatric emergencies, and therefore more validation evidence is needed to verify its use in other emergency settings. The CTS differs from TEAM in that it specifically assesses the overall teamwork performance of the whole group,60 whereas TEAM also assesses individual performance (specifically that of the group’s leader).56 In our review, 2 studies41,61 used TEAM as their assessment tool, and both were “methodologically good” (MERSQI >12).

The ANTS tool was also used in 2 of the included studies. Previous validation evidence59,62 supports the use of this tool when assessing the NTS and teamwork skills of anesthesiologists. However, ANTS was originally developed by, and rated within, this single profession, and aspects of interprofessional teamwork relevant to other professions might not be addressed by ANTS because of this development process. Jankouskas et al62 argue for the use of ANTS to assess interprofessional teams, but more evidence is needed to support this.

Our results echo those of earlier reviews in that many of the included studies had methodological limitations.5–7,12,13 Onwochei et al63 examined NTS assessment tools used to assess the teamwork of interprofessional obstetrical teams in obstetrical emergencies. In their systematic review, the assessment tools identified were analyzed for reliability and validity, and they concluded that many of the NTS assessment tools need more work to establish their validity evidence. This is consistent with our results, because many of the tools in our work were found lacking in validity evidence. Onwochei et al also concluded, as we have done, that there is a need for more research examining higher levels of educational outcomes. The moderate variance (6.3) of the MERSQI scores of our included studies reflects some inconsistency between studies. This is not surprising, because the studies used different methods and different measurement tools and addressed different clinical specialties, but it makes between-study comparison challenging. As can be expected, we found that good methodological structure was associated with higher study quality (ie, higher MERSQI scores). The majority of included studies could have benefited from better methodological rigor and a better study design.

The most common methodological flaws we encountered in the included studies were poor study design and poor validation reporting. A majority of studies used posttest-only designs, followed by roughly equal numbers of pretest/posttest designs and RCTs. Most studies were single-institution studies, resulting in lower sampling scores. Samples were often small and poorly reported. This is expected, because larger, multi-institutional, and randomized studies are more challenging to conduct. However, stronger study designs with larger samples grant higher scientific quality and are therefore worth striving for. Not all included studies reported values for the validity of their assessment tools, which contributed to lower total MERSQI scores. Although it is perfectly acceptable to refer to earlier validation of an assessment tool, it would be helpful to report the validation values. A majority of studies reported the “teamwork” and “nontechnical skills” of study participants. This was to be expected because we only included studies assessing teamwork using a measurement tool. Only 1 study21 also reported higher, downstream patient care and healthcare outcomes. The good studies (MERSQI score >12, 8 studies) were characterized by good validity reporting (except 1 study61). There was some heterogeneity between the studies in this group, mostly in “study design” and “sampling.” The great studies (MERSQI score >14, 4 studies) were all RCTs and received maximum scores for study validity (Fig. 2).

Although only broad and general comparisons could be drawn between the included studies because of their heterogeneity, individual study results bring forth many interesting points. Calhoun et al64 looked at communication and self-insight in an interprofessional team. They showed how to support reflective learning using multirater feedback and gap analysis. Because self-insight has been shown to be stable over time and unlikely to be altered without specific external intervention,65 the use of multirater feedback and gap analysis offered a valuable opportunity to promote the learner’s strengths and to intervene in the learner’s perceptual deficits.64,66 Team training is essential to achieve effective teamwork, as “a team of experts does not make an expert team.”67,68 Sigalet et al69 bring forth an interesting point: most teamwork assessment tools have been used and validated in studies conducted on postgraduate participants. Our results reflect this, because most of the included studies had postgraduate participants from different medical and nursing fields. Only 4 of our studies54,69–71 included medical and nursing students.

Calhoun et al45 illuminate the problem of interprofessional hierarchy and how it relates to healthcare error, specifically errors in communication. They showed through their simulated scenario that participants usually did not challenge authority, even when it would have been merited. Although not statistically significant, owing to the small sample size, there was a trend toward lower NTS scores among participants who failed to challenge authority where merited. This relates to earlier work on communication, especially that of Dr Lorelei Lingard72,73 on interprofessional communication. Her work gives us valuable knowledge about the different kinds of errors in communication and offers insight into how to safeguard against them. Errors of communication have also frequently been cited in other studies as a significant contributor to medical errors,74–77 with some studies suggesting that up to 60% to 70% of medical errors are related to communication.20,78


Limitations

We acknowledge the limitations of our systematic review. First, our search strategy used the 3 research databases mentioned earlier. Although these databases are broad and cover a wide variety of studies, it would be hubris on our part to believe that this search strategy gives us full coverage of all the relevant research. We also only included articles in English, because translation of potentially relevant articles would have been outside the scope of this study. Furthermore, our database search revealed a wide variety of different journals, making a wider manual search impractical; we therefore chose to manually search issues of Simulation in Healthcare from the past 5 years, because we found that many abstracts from our database search had been published in this journal.

For the purpose of this review, we used the term “high-fidelity” to describe the technologically advanced simulators used in the included studies. This term has been used extensively in the research literature, but it is not without problems.79 Different professions perceive fidelity and realism differently, which makes the term ambiguous and may limit the generalizability of the examined situations. The term “fidelity” can be separated into “engineering fidelity” and “psychological fidelity”80,81 and into “physical fidelity” and “functional fidelity.”82 The inconsistent use of the term has led to much confusion. Moreover, the educational efficacy of a simulation might not be uniquely attributable to its fidelity.83 Other, perhaps preferable, terms are “high-technology” or “technology-enhanced.” These terms describe the build of the simulator, rather than make assumptions about the participants’ perception of “realism” or their level of engagement or immersion in the simulation.


CONCLUSIONS

Future research should strive for good research methodology. The study design should be, at the very least, a “pre/post-test” model for evaluating educational impact. We feel that it is of paramount importance to explicitly report the validity of any measurement tools used. There is still too little research in interprofessional simulation-based clinical education regarding higher levels of educational outcomes, which warrants further research, as patient and/or healthcare outcomes should be the highest priority.

The assessment tools CTS, TEAM, and ANTS have been thoroughly tested and have been found to be reliable and of high validity. Future researchers would do well to assess previously established and validated tools when designing their research. Using a previously established NTS assessment tool, instead of creating a new one, will not only lend credibility to the study results but also make between-study comparisons easier, thereby giving more structure to this growing research field. No NTS assessment tool is without flaw, however, and it is especially important to make sure that the chosen tool is suited for the field and setting of the study. The tools in this review have been found to be reliable and of high validity, when used in a specific profession or context, and more validation evidence is needed if one wishes to use them in other settings.

It is clear that using a measurement tool allows researchers to evaluate educational impact on a team’s performance in relation to a baseline and in relation to each individual team member. However, there is still very little research on the different educational needs of the different professions in an interprofessional team. More research is needed to understand how to design interprofessional simulation for maximum educational benefit. This also applies to undergraduates, who would benefit from effective and meaningful interprofessional simulations facilitating their future practice.


REFERENCES

1. Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: an ethical imperative. Acad Med 2003;78(8):783–788.
2. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27(1):10–28.
3. Hammond J. Simulation in critical care and trauma education and training. Curr Opin Crit Care 2004;10(5):325–329.
4. Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA 1999;282(9):861–866.
5. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA 2011;306(9):978–988.
6. Cook DA, Brydges R, Hamstra SJ, et al. Comparative effectiveness of technology-enhanced simulation versus other instructional methods: a systematic review and meta-analysis. Simul Healthc 2012;7(5):308–320.
7. McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med 2011;86(6):706–711.
8. Gordon JA, Shaffer DW, Raemer DB, Pawlowski J, Hurford WE, Cooper JB. A randomized controlled trial of simulation-based teaching versus traditional instruction in medicine: a pilot study among clinical medical students. Adv Health Sci Educ Theory Pract 2006;11(1):33–39.
9. Steadman RH, Coates WC, Huang YM, et al. Simulation-based training is superior to problem-based learning for the acquisition of critical assessment and management skills. Crit Care Med 2006;34(1):151–157.
10. Schwartz LR, Fernandez R, Kouyoumjian SR, Jones KA, Compton S. A randomized comparison trial of case-based learning versus human patient simulation in medical student education. Acad Emerg Med 2007;14(2):130–137.
11. Ten Eyck RP, Tews M, Ballester JM. Improved medical student satisfaction and test performance with a simulation-based emergency medicine curriculum: a randomized controlled trial. Ann Emerg Med 2009;54(5):684–691.
12. Cook DA, Hamstra SJ, Brydges R, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach 2013;35(1):e867–e898.
13. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. A critical review of simulation-based medical education research: 2003–2009. Med Educ 2010;44(1):50–63.
14. CAIPE - Centre For The Advancement Of Interprofessional Education. Interprofessional Education—The Definition. Available at: http://caipe.org.uk/resources/defining-ipe/. Accessed February 4, 2016.
15. van Soeren M, Devlin-Cop S, MacMillan K, Baker L, Egan-Lee E, Reeves S. Simulated interprofessional education: an analysis of teaching and learning processes. J Interprof Care 2011;25(6):434–440.
16. Lingard L, Espin S, Evans C, Hawryluck L. The rules of the game: interprofessional collaboration on the intensive care unit team. Crit Care 2004;8(6):R403–R408.
17. Piquette D, Reeves S, Leblanc VR. Interprofessional intensive care unit team interactions and medical crises: a qualitative study. J Interprof Care 2009;23(3):273–285.
18. Piquette D, Reeves S, LeBlanc VR. Stressful intensive care unit medical crises: how individual responses impact on team performance. Crit Care Med 2009;37(4):1251–1255.
19. Weller JM, Janssen AL, Merry AF, Robinson B. Interdisciplinary team interactions: a qualitative study of perceptions of team function in simulated anaesthesia crises. Med Educ 2008;42(4):382–388.
20. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human: Building a Safer Health System. Vol 6. Washington, DC: National Academies Press; 2000.
21. Capella J, Smith S, Philp A, et al. Teamwork training improves the clinical care of trauma patients. J Surg Educ 2010;67(6):439–443.
22. Baker GR, Norton PG, Flintoft V, et al. The Canadian adverse events study: the incidence of adverse events among hospital patients in Canada. CMAJ 2004;170(11):1678–1686.
23. Baker DP, Day R, Salas E. Teamwork as an essential component of high-reliability organizations. Health Serv Res 2006;41(4 Pt 2):1576–1598.
24. Manser T. Teamwork and patient safety in dynamic domains of healthcare: a review of the literature. Acta Anaesthesiol Scand 2009;53(2):143–151.
25. Bogner MS. Human Error in Medicine. Mahwah, NJ: Lawrence Erlbaum Associates; 1994.
26. Helmreich RL. Threat and error in aviation and medicine: similar and different. Innovation and Consolidation in Aviation 2003:99–108.
27. Sargeant J, Loney E, Murphy G. Effective interprofessional teams:“contact is not enough” to build a team. J Contin Educ Health Prof 2008;28(4):228–234.
28. Salas E, Cooke NJ, Rosen MA. On teams, teamwork, and team performance: discoveries and developments. Hum Factors 2008;50(3):540–547.
29. Leonard MW, Frankel AS. Role of effective teamwork and communication in delivering safe, high-quality care. Mt Sinai J Med 2011;78(6):820–826.
30. Leonard MW, Graham S, Bonacum D. The human factor: the critical importance of effective teamwork and communication in providing safe care. Qual Saf Health Care 2004;13(Suppl 1):i85–i90.
31. Bognár A, Barach P, Johnson JK, et al. Errors and the burden of errors: attitudes, perceptions, and the culture of safety in pediatric cardiac surgical teams. Ann Thorac Surg 2008;85(4):1374–1381.
32. Burtscher MJ, Kolbe M, Wacker J, Manser T. Interactions of team mental models and monitoring behaviors predict team performance in simulated anesthesia inductions. J Exp Psychol Appl 2011;17(3):257.
33. Westli HK, Johnsen BH, Eid J, Rasten I, Brattebø G. Teamwork skills, shared mental models, and performance in simulated trauma teams: an independent group design. Scand J Trauma Resusc Emerg Med 2010;18(1):47.
34. Edmondson A. Psychological safety and learning behavior in work teams. Adm Sci Q 1999;44(2):350–383.
35. Edmondson AC. Managing the Risk of Learning: Psychological Safety in Work Teams. Boston, MA: Division of Research, Harvard Business School, 2002.
36. Carthey J, de Leval MR, Reason JT. The human factor in cardiac surgery: errors and near misses in a high technology medical domain. Ann Thorac Surg 2001;72(1):300–305.
37. Krause TR. Leading With Safety. Hoboken, NJ: John Wiley & Sons; 2005.
38. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA 2007;298(9):1002–1009.
39. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009;151(4):264–269.
40. Kirkpatrick D, Kirkpatrick J. Evaluating Training Programs: The Four Levels. 3rd ed. Oakland, CA: Berrett-Koehler Publishers; 2006.
41. Couto TB, Kerrey BT, Taylor RG, FitzGerald M, Geis GL. Teamwork skills in actual, in situ, and in-center pediatric emergencies: performance levels across settings and perceptions of comparative educational impact. Simul Healthc 2015;10(2):76–84.
42. Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: a practical guide to Kane's framework. Med Educ 2015;49(6):560–575.
43. Kane MT. An argument-based approach to validity. Psychol Bull 1992;112(3):527.
44. Fransen AF, van de Ven J, Merien AE, et al. Effect of obstetric team training on team performance and medical technical skills: a randomised controlled trial. BJOG 2012;119(11):1387–1393.
45. Calhoun AW, Boone MC, Porter MB, Miller KH. Using simulation to address hierarchy-related errors in medical practice. Perm J 2014;18(2):14–20.
46. Lee JY, Mucksavage P, Canales C, McDougall EM, Lin S. High fidelity simulation based team training in urology: a preliminary interdisciplinary study of technical and nontechnical skills in laparoscopic complications management. J Urol 2012;187(4):1385–1391.
47. Pascual JL, Holena DN, Vella MA, et al. Short simulation training improves objective skills in established advanced practitioners managing emergencies on the ward and surgical intensive care unit. J Trauma 2011;71(2):330–337.
48. Messmer PR. Enhancing nurse-physician collaboration using pediatric simulation. J Contin Educ Nurs 2008;39(7):319–327.
49. Minehart RD, Pian-Smith MC, Walzer TB, et al. Speaking across the drapes: communication strategies of anesthesiologists and obstetricians during a simulated maternal crisis. Simul Healthc 2012;7(3):166–170.
50. Muller-Juge V, Cullati S, Blondon KS, et al. Interprofessional collaboration between residents and nurses in general internal medicine: a qualitative study on behaviours enhancing teamwork quality. PLoS One 2014;9(4):e96160.
51. Daniels K, Lipman S, Harney K, Arafeh J, Druzin M. Use of simulation based team training for obstetric crises in resident education. Simul Healthc 2008;3(3):154–160.
52. Kramer M, Schmalenberg C. Staff nurses identify essentials of magnetism. Magnet Hospitals Revisited: Attraction and Retention of Professional Nurses 2002:25–59.
53. Kramer M, Schmalenberg C. Staff nurses identify essentials of magnetism: what are “good” RN/MD relationships? Nursing Management; 2003.
54. Jankouskas TS, Haidet KK, Hupcey JE, Kolanowski A, Murray WB. Targeted crisis resource management training improves performance among randomized nursing and medical students. Simul Healthc 2011;6(6):316–326.
55. Hogan MP, Pace DE, Hapgood J, Boone DC. Use of human patient simulation and the situation awareness global assessment technique in practical trauma skills assessment. J Trauma 2006;61(5):1047–1052.
56. Cooper S, Cant R, Porter J, et al. Rating medical emergency teamwork performance: development of the team emergency assessment measure (TEAM). Resuscitation 2010;81(4):446–452.
57. Cooper SJ, Cant RP. Measuring non-technical skills of medical emergency teams: an update on the validity and reliability of the team emergency assessment measure (TEAM). Resuscitation 2014;85(1):31–33.
58. Yule S, Flin R, Maran N, Rowley D, Youngson G, Paterson-Brown S. Surgeons' non-technical skills in the operating room: reliability testing of the NOTSS behavior rating system. World J Surg 2008;32(4):548–556.
59. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists' non-technical skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth 2003;90(5):580–588.
60. Guise JM, Deering SH, Kanki BG, et al. Validation of a tool to measure and promote clinical teamwork. Simul Healthc 2008;3(4):217–223.
61. Rubio-Gurung S, Putet G, Touzet S, et al. In situ simulation training for neonatal resuscitation: an RCT. Pediatrics 2014;134(3):E790–E797.
62. Jankouskas T, Bush MC, Murray B, et al. Crisis resource management: evaluating outcomes of a multidisciplinary team. Simul Healthc 2007;2(2):96–101.
63. Onwochei DN, Halpern S, Balki M. Teamwork assessment tools in obstetric emergencies: a systematic review. Simul Healthc 2017;12(3):165–176.
64. Calhoun AW, Rider EA, Peterson E, Meyer EC. Multi-rater feedback with gap analysis: an innovative means to assess communication skill and self-insight. Patient Educ Couns 2010;80(3):321–326.
65. Lockyer JM, Violato C, Fidler HM. What multisource feedback factors influence physician self-assessments? A five-year longitudinal study. Acad Med 2007;82(10 Suppl):S77–S80.
66. Calhoun AW, Rider EA, Meyer EC, Lamiani G, Truog RD. Assessment of communication skills and self-appraisal in the simulated environment: feasibility of multirater feedback with gap analysis. Simul Healthc 2009;4(1):22–29.
67. Burke CS, Salas E, Wilson-Donnelly K, Priest H. How to turn a team of experts into an expert medical team: guidance from the aviation and military communities. Qual Saf Health Care 2004;13(Suppl 1):i96–i104.
68. Wiener EL. Cockpit Resource Management. Oxford, United Kingdom: Gulf Professional Publishing; 1995.
69. Sigalet E, Donnon T, Cheng A, et al. Development of a team performance scale to assess undergraduate health professionals. Acad Med 2013;88(7):989–996.
70. Paige JT, Garbee DD, Kozmenko V, et al. Getting a head start: high-fidelity, simulation-based operating room team training of interprofessional students. J Am Coll Surg 2014;218(1):140–149.
71. Fernandez R, Pearce M, Grand JA, et al. Evaluation of a computer-based educational intervention to improve medical teamwork and performance during simulated patient resuscitations. Crit Care Med 2013;41(11):2551–2562.
72. Lingard L, Espin S, Whyte S, et al. Communication failures in the operating room: an observational classification of recurrent types and effects. Qual Saf Health Care 2004;13(5):330–334.
73. Lingard L, Whyte S, Espin S, Ross Baker G, Orser B, Doran D. Towards safer interprofessional communication: constructing a model of “utility” from preoperative team briefings. J Interprof Care 2006;20(5):471–483.
74. Reader TW, Flin R, Cuthbertson BH. Communication skills and error in the intensive care unit. Curr Opin Crit Care 2007;13(6):732–736.
75. Halverson AL, Casey JT, Andersson J, et al. Communication failure in the operating room. Surgery 2011;149(3):305–310.
76. Christian CK, Gustafson ML, Roth EM, et al. A prospective study of patient safety in the operating room. Surgery 2006;139(2):159–173.
77. Singh H, Thomas EJ, Petersen LA, Studdert DM. Medical errors involving trainees: a study of closed malpractice claims from 5 insurers. Arch Intern Med 2007;167(19):2030–2036.
78. Pham JC, Story JL, Hicks RW, et al. National study on the frequency, types, causes, and consequences of voluntarily reported emergency department medication errors. J Emerg Med 2011;40(5):485–492.
79. Schoenherr JR, Hamstra SJ. Beyond fidelity: deconstructing the seductive simplicity of fidelity in simulator-based education in the health care professions. Simul Healthc 2017;12(2):117–123.
80. Maran NJ, Glavin RJ. Low- to high-fidelity simulation—a continuum of medical education? Med Educ 2003;37(Suppl 1):22–28.
81. Miller RB. Psychological Considerations in the Design of Training Equipment. Fort Belvoir, VA: Defense Technical Information Center (DTIC); 1954.
82. Allen JA, Hays RT, Buffardi LC. Maintenance training simulator fidelity and individual differences in transfer of training. Hum Factors 1986;28(5):497–509.
83. Norman G, Dore K, Grierson L. The minimal relationship between simulation fidelity and transfer of learning. Med Educ 2012;46(7):636–647.
Keywords:

Simulation-based medical education; Interprofessional education; Team training; Teamwork; Nontechnical skill


© 2018 Society for Simulation in Healthcare