The use of simulation-based education (SBE) is now ubiquitous across all healthcare professions, among learners and practicing professionals.1,2 Effective SBE requires educators to have an understanding of the learning theories through which it can be applied.3–7 Learning theories are coherent frameworks of ideas about how people learn; they “permit educators to identify teaching approaches that can optimize the opportunity afforded by a simulation encounter thereby assisting learners to acquire new knowledge or skills” (p.34).8 The prominent learning theories used in early SBE included experiential learning and Ericsson's theory of deliberate practice.9,10 Both of these theories emphasize learning by physically interacting with one's surroundings and other participants followed by reflection to assimilate new knowledge with existing beliefs.9,10 These theoretical underpinnings were influenced by the goal of improving patient safety by allowing providers an opportunity to practice their skills in simulated environments before performing them on real patients.9,11
More contemporary understandings of the factors that determine successful transfer of training to real-life situations have led educators to examine the opportunities offered by observation in SBE.12 The potential benefits of using observation in simulation can be understood through several learning theories that have grown in use in SBE.13 The social learning theory proposed by Albert Bandura in 1977 incorporates observation for learning in a 4-part process and has been adapted to SBE.13–15 This process includes observation of simulated behaviors, debriefing, practice, and motivation.13 Observation can also be followed by reflection for professional development, known as reflection-on-action as described in Schon's Reflective Practitioner.16,17 Observation may be complementary, or even superior, to active participation when integrated using appropriate learning theories that are aligned with the training's learning objectives.18 For these reasons, there is a growing need to assess the most effective conditions in which observation can be incorporated into the field of SBE. This is an important area of research that has the potential to further expand the flexibility and use of SBE in healthcare.
To synthesize a comprehensive understanding of the literature on observation versus active participation in SBE, we conducted a systematic review to identify, critically appraise, and meta-analyze data from randomized trials comparing participants' reactions, learning outcomes, and behavior changes as well as patient outcomes. Until recently, the evidence supporting the effectiveness of the observer role compared with the active role in SBE consisted of nonrandomized studies or small, single-center randomized trials, limiting our ability to make meaningful comparisons across contexts.19–22 One systematic review assessed the various types of observer roles and found that assigning a task to guide observation resulted in better outcomes than nondirected observation, but this study did not include a meta-analysis comparing actively engaged participants to observers.23 Furthermore, the effectiveness of the different learning theories used by educators to incorporate observation remains unclear. The aggregate learning benefits of the observer role in simulation require investigation to advance the development of SBE in healthcare.
MATERIAL AND METHODS
A systematic review was conducted adherent to the Methodological Expectations of Cochrane Intervention Reviews Framework.24 Reporting is consistent with the criteria outlined in the Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines.25 Ethics approval was not required for this meta-analysis. The protocol was prospectively registered in PROSPERO (Registration Number CRD42018098735).
Population, Interventions, Comparators, Outcome Measures, Settings, and Study Designs
The aim of this study was to address the following question, “Compared to active simulation, is observed simulation as effective in healthcare training for improving patient outcomes and participant behavior, learning and reactions?” The Society for Simulation in Healthcare's definition of simulation was used, which describes simulation as “a technique -not a technology- that creates a situation or environment to allow persons to experience a representation of a real event for the purpose of practice, learning, evaluation, testing, or to gain understanding of systems or human actions.”26 This definition includes simulations that occur with and without debriefings. The observer role was defined as encompassing 2 broad types using O'Regan et al.'s classification.23 The first distinction is between in-scenario versus external observers. An observer is considered an in-scenario observer when they are given a passive role, such as a family member, compared with an external observer, who is watching but not participating in the simulation. The second distinction made by O'Regan et al.23 is between directed versus nondirected observation. Directed observation is when the observer is provided with an instructional briefing or observer tool that contains information on specific learning objectives, behaviors, or activities to consider. Observers may or may not take part in the debriefing.
Prospective, cross-over, parallel, and quasi-experimental randomized controlled trials including any type of healthcare professional or trainee and any type of simulation training (ie, technical and nontechnical skills) were eligible for inclusion. Only data before cross-over were considered for inclusion due to potential cross-over effects. Virtual reality-based simulation training, cluster randomized controlled trials, cohort, case-control, and case studies were excluded. Cluster randomized controlled trials were excluded because of statistical challenges around combining data with different levels of analysis (individual- vs. facility-level).27
Learning effectiveness was evaluated using Kirkpatrick's 4-level training evaluation model because it is a widely used method of evaluating educational programs in the medical education literature.28 Kirkpatrick level 1 measures how participants react to the training. Level 2 measures the learning that occurs because of the training, where learning is defined as an increase in knowledge, skills, or attitudes. Level 3 examines whether the knowledge, skills, and attitudes that are learned are transferred to the actual work environment outside the context of training. Level 4 determines whether the training had an impact on patient outcomes. The primary outcome was defined by Kirkpatrick level 4 because we felt that this is the most important reason for finding more effective ways of training and represents the top priority for leaders and policy makers. Secondary outcomes included Kirkpatrick levels 1, 2, and 3. If an outcome was identified as a Kirkpatrick level by the study's authors, this designation was used; otherwise, outcomes were coded into their appropriate Kirkpatrick level and agreed upon by all members of the research team. Self-reported outcomes were included.
Adverse outcomes included any reported by patients and/or participants. Outcomes measured immediately after training and at longest follow-up were included for all Kirkpatrick levels.
Search Strategy for Identification of Studies
MEDLINE (Ovid), CENTRAL (The Cochrane Library – Wiley), EMBASE, CINAHL, Scopus, Web of Science, PsycINFO, and ERIC were searched from inception to April 2018 using individualized search strategies developed and peer reviewed by independent information professionals (see Table, Supplemental Digital Content 1, http://links.lww.com/SIH/A433, which contains the search strategies for all databases).29 A backward search in Scopus was performed to identify all references of included studies and a forward search of included articles was performed to identify relevant additional citations. The World Health Organization's International Clinical Trials Registry Platform and ClinicalTrials.gov were searched for unpublished or ongoing trials using the key words “simulation” and “education.” EndNote (X8.2; Thomson Reuters, Carlsbad, CA) was used for reference management.
Data Abstraction and Management
Citations were imported into a spreadsheet (Microsoft Excel 2017; Microsoft Corporation, Redmond, WA) and 2 reviewers independently screened the titles and abstracts of each citation in duplicate to identify studies meeting inclusion criteria. After pilot testing, each screened citation was categorized as follows: “include,” “exclude,” “unsure,” or “duplicate.” Full-text reports of all citations categorized as “include” or “unsure” by either reviewer were retrieved for review independently and in duplicate to determine whether the study satisfied the inclusion or exclusion criteria. Disagreements were resolved by discussion between the 2 reviewers or by third-party adjudication if consensus could not be achieved.
Data were extracted using a standardized form and entered into a spreadsheet (Microsoft Excel 2017; Microsoft Corporation, Redmond, WA). The form was piloted on a sample of studies. Data from study reports were extracted by 2 blinded reviewers independently and in duplicate with disagreements resolved through consensus and with the assistance of a third party if consensus could not be achieved.
Risk of Bias Assessment
The internal validity was assessed in duplicate using the Cochrane Collaboration Risk of Bias tool.27,30 Disagreements were resolved by discussion between the 2 reviewers or by third-party adjudication if consensus could not be achieved. The Cochrane Collaboration Risk of Bias tool consists of 6 domains (sequence generation, allocation concealment, blinding, incomplete outcome data, selective outcome reporting, and other sources of bias) and a categorization of the overall risk of bias. Each separate domain is rated “high risk,” “unclear risk,” or “low risk.” Overall risk of bias was considered low only if all components were rated as having a low risk of bias. If one or more individual domains were assessed as having a high risk of bias, the overall judgement was rated as having a high risk of bias. Studies with 2 or more individual domains assessed as high risk were considered overall very high risk.
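The overall judgement rule described above can be expressed as a simple decision function. This is an illustrative sketch of the rating logic used in this review, not part of the Cochrane tool itself:

```python
def overall_risk(domain_ratings):
    """Derive the overall risk-of-bias judgement from the 6 per-domain
    ratings ('low', 'unclear', or 'high'), per the rule in the text."""
    highs = domain_ratings.count("high")
    if highs >= 2:
        return "very high"   # 2 or more high-risk domains
    if highs == 1:
        return "high"        # any single high-risk domain
    if all(r == "low" for r in domain_ratings):
        return "low"         # low only if every domain is low
    return "unclear"         # otherwise at least one unclear domain
```

For example, a study rated low in every domain except one unclear domain would not be judged overall low risk under this rule.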
Measures of Treatment Effect
Data were analyzed using Review Manager (Version 5.3.5; Copenhagen: The Nordic Cochrane Centre, The Cochrane Collaboration, 2014). Outcomes measured immediately after training and at longest follow-up were analyzed separately. Data that were statistically and clinically homogeneous were meta-analyzed. Statistical homogeneity was determined based on how the data were reported. Only mean postintervention scores were used. Authors were contacted to obtain means and standard deviations (SDs) when these were not reported. If the authors did not respond after 2 attempts, conversion to mean and SD was done, when possible, using accepted calculations.31 The directionality of all scales was standardized so that higher scores represented better outcomes. Clinical homogeneity was defined as outcomes that corresponded to the same Kirkpatrick level. When multiple outcomes for the same Kirkpatrick level were reported in a single study, they were combined using the formula described in the Cochrane Handbook for Systematic Reviews of Interventions only if they were on the same scale.27 If they were reported on different scales, the total sample size was divided by the total number of scales for that Kirkpatrick level from that study. These 2 processes were performed before pooling the data to ensure that studies were not overrepresented and were given their appropriate weight.
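When only medians and interquartile ranges were available, means and SDs were approximated using accepted calculations.31 One commonly used large-sample approximation (mean estimated from the three quartiles; SD estimated from the width of the interquartile range) can be sketched as follows; this is for illustration only, and the exact formulas applied in the review are those of the cited reference:

```python
def mean_sd_from_median_iqr(q1, median, q3):
    """Approximate mean and SD from the median and quartiles, assuming
    an approximately normal distribution and a large sample:
    mean ~ (q1 + median + q3) / 3;  SD ~ IQR / 1.35."""
    mean = (q1 + median + q3) / 3.0
    sd = (q3 - q1) / 1.35
    return mean, sd
```

For instance, a reported median of 10 with quartiles 8 and 12 converts to an approximate mean of 10 and SD of about 3.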
A random-effects model was used given the expected variability of the included studies. Inverse variance weighting was used to pool data across studies. Pooled continuous data were expressed as standardized mean differences (SMDs) with 95% confidence intervals (CIs) because multiple different scales were used. The I2 and τ2 statistics were used to quantify heterogeneity across studies. Publication bias was assessed using funnel plots when approximately 10 or more studies were included.
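The pooling described above can be sketched as follows. This is a generic DerSimonian–Laird random-effects implementation (inverse-variance weights, τ2 estimated from Cochran's Q, I2 as the excess-heterogeneity fraction) shown purely for illustration; it is not the Review Manager code used in the analysis:

```python
import math

def pool_random_effects(effects, variances):
    """Pool per-study effect sizes (e.g., SMDs) under a DerSimonian-Laird
    random-effects model; returns (pooled effect, 95% CI, tau^2, I^2 %)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - fixed) ** 2 for wi, y in zip(w, effects))   # Cochran's Q
    df = len(effects) - 1
    c = sum(w) - sum(wi * wi for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0 # heterogeneity, %
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se), tau2, i2
```

With two hypothetical studies (SMDs of 0.0 and 0.4, each with variance 0.04), this yields a pooled SMD of 0.2 with τ2 = 0.04 and I2 = 50%.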
A priori subgroup and sensitivity analyses that could be performed with the available data included industry funding versus no industry funding, presence versus absence of debriefing, and technical versus nontechnical skills training.32 Other a priori subgroup and sensitivity analyses that could not be performed because of limited data included published articles versus conference proceedings, expert versus peer-led debriefing, immediate versus delayed debriefing, synchronous versus asynchronous debriefing, and high/unclear versus low risk of bias. Synchronous debriefing was defined as observers and active participants being debriefed together, whereas asynchronous debriefing was defined as the groups being debriefed separately.
A posteriori subgroup analyses were performed to compare directed versus nondirected observation for Kirkpatrick level 1 and 2 outcomes and for learning of knowledge versus skills for Kirkpatrick level 2 outcomes. A posteriori sensitivity analyses were performed excluding the only trial reporting participant anxiety and stress as a Kirkpatrick level 1 outcome,33 studies where mean and SD were converted from median and interquartile range,34 and studies at very high risk of bias because all studies were considered high risk.
RESULTS
Of the 5040 unique citations identified from electronic and hand searches, 13 trials were included, with publication dates ranging from 2010 to 2018 (Fig. 1; Table 1).33–46 A total of 768 participants were included, of which 426 were active participants and 374 were observers (Table 2). Practicing professionals accounted for 5% (n = 21) of active participants and 5% (n = 18) of observers.37,45 The mean (SD) age was reported in 7 trials and ranged from 27.2 (2) to 42.6 (12.2) years.34,37,40,42–45 Sex was reported by 5 trials, and the proportion of men ranged from 22% to 64%.34,40,42–44 Interventions lasted between 1 and 2 sessions. The longest follow-up ranged from 3 weeks to 3 months, with loss to follow-up ranging from 0% to 58%.34,37,38,42,45 All debriefs were led by an expert and conducted immediately after the intervention. There were no multicenter trials. McCoy et al.41 conducted a randomized cross-over trial; because they did not measure outcomes before cross-over, their results could not be included in the meta-analyzed data.41
All trials were classified as having a high risk of bias because of the inherent inability to blind participants to their allocation resulting in high risk for performance bias (Fig. 2). Seven studies were considered very high risk because they had an additional reason for risk of bias.33,34,37,38,42,43,46
Kirkpatrick Level 4 – Patient Outcomes
No studies reported Kirkpatrick level 4 outcomes.
Kirkpatrick Level 1 – Reactions
Kirkpatrick level 1 was reported by 6 trials immediately after training (see Table, Supplemental Digital Content 2, http://links.lww.com/SIH/A434, which contains all outcome measures reported by each study).33,34,38,41,44,45 Pooled results included a total of 119 observers and 134 active participants. There was no significant difference in reactions between observers and active participants (SMD = −0.03, 95% CI = −0.48 to 0.43, P = 0.91, I2 = 67%, τ2 = 0.18). McCoy et al.41 was not included in this pooled analysis because they did not report outcomes before cross-over, but the authors also found no significant difference in self-reported reactions to training between groups.
A posteriori sensitivity analysis was performed excluding the study by Bong et al.33 because it was the only one that measured negative outcomes for Kirkpatrick level 1 (ie, stress and anxiety). The remaining studies assessed positive outcomes, such as satisfaction and interest. Without this study, observers had significantly worse reactions to training compared with active participants (SMD = −0.28, 95% CI = −0.55 to −0.01, P = 0.04, I2 = 0%, τ2 = 0.00). A posteriori sensitivity analysis excluding studies at very high risk of bias found no significant difference between observers and active participants (SMD = 0, 95% CI = −0.46 to 0.47, P = 0.99, I2 = 0%, τ2 = 0.00).44,45 A posteriori sensitivity analysis excluding Blanie et al.34 was performed because the authors reported medians and interquartile ranges. This exclusion resulted in no significant difference between the groups (SMD = 0.10, 95% CI = −0.45 to 0.65, P = 0.72, I2 = 65%, τ2 = 0.21).
Observers reported significantly better reactions to nontechnical skills training compared with those actively participating in the simulation (Fig. 3). There was no significant difference in reactions between groups undergoing technical skills training or trainings that included both technical and nontechnical skills training (Fig. 3). There was no difference between observers' and active participants' reactions in the presence or absence of debriefing or with directed versus nondirected observation (Figs. 4, 5, respectively).
Only one study reported a Kirkpatrick level 1 outcome at longest follow-up. A total of 42% of observers (n = 19/45) and 53% of active participants (n = 31/59) were included at longest follow-up.34 There was no significant difference in reactions between observers and active participants.
Kirkpatrick Level 2 – Learning
Kirkpatrick level 2 outcomes were reported in 12 trials immediately after the intervention.34–41,43–46 Hobgood et al.39 compared observed versus active participation in high- and low-fidelity simulations. Pooled results included a total of 323 observers and 374 active participants. Active participants had significantly better learning compared with observers (SMD = −0.196, 95% CI = −0.371 to −0.022, P = 0.028, I2 = 0%, τ2 = 0.00). There was no evidence of publication bias (Fig. 6).
Nilsson et al.43 reported the change in test scores before versus after Advanced Life Support in Obstetrics training, and the authors did not respond to 2 requests for posttraining test scores; therefore, their results could not be included in the meta-analysis. They found no significant difference between observers and active participants. As mentioned previously, the data from McCoy et al.41 were not included in the pooled analysis because outcomes were not reported before cross-over. They also found no significant difference in medical knowledge between active participants and observers.
A priori sensitivity analyses were performed to assess causes of heterogeneity. There was no longer any significant difference between groups when the only study reporting industry funding was excluded (SMD = −0.16, 95% CI = −0.36 to 0.04, P = 0.117, I2 = 0%, τ2 = 0.00).39 A second sensitivity analysis was performed excluding the study by Blanie et al.34 because the authors reported medians and interquartile ranges, so means and SDs had to be calculated.31 There was no longer a significant difference between groups (SMD = −0.18, 95% CI = −0.375 to 0.015, P = 0.07, I2 = 0%, τ2 = 0.00). A final sensitivity analysis was performed removing studies at very high risk of bias; again, there was no longer a significant difference between groups (SMD = −0.233, 95% CI = −0.481 to 0.016, P = 0.067, I2 = 0%, τ2 = 0.00).
Blanie et al.34 reported both technical and nontechnical skills outcomes. Prespecified subgroup analyses found no difference in learning between observers and active participants undergoing technical or nontechnical skills training (Fig. 7). On the other hand, active participants learned significantly better than observers when debriefing was present, but there was no difference between groups when debriefing was absent (Fig. 8). Hobgood et al.39 reported learning outcomes for knowledge and skills. Active participants learned skills significantly better than observers, but there was no difference in knowledge acquisition (Fig. 9). There was no difference between directed observers and active participants, but active participants learned significantly better than nondirected observers (Fig. 10).
Five studies reported learning at follow-up.34,37,38,42,45 The follow-up rate was 66% (n = 71/108) among observers and 67% (n = 81/121) among active participants. There was no significant difference in learning retention based on simulation role (SMD = −0.104, 95% CI = −0.387 to 0.36, P = 0.942, I2 = 18%, τ2 = 0.03). A sensitivity analysis was performed removing the studies by Blanie et al.34 and Brydges et al.38 because of more than 10% loss to follow-up, leaving 41 observers and 38 active participants. There was no significant change in the effect estimate (SMD = −0.063, 95% CI = −0.667 to 0.47, P = 0.837, I2 = 38%, τ2 = 0.11).34,38
Kirkpatrick Level 3 – Behavior Change
Blanie et al.34 defined perceived learning transfer as a Kirkpatrick level 3 outcome. They found no significant difference between the 2 cohorts.
No adverse outcomes were reported by participants or patients.
DISCUSSION
The primary results of this systematic review and meta-analysis suggest no significant differences in Kirkpatrick level 1 reactions to training between observers and active participants. For Kirkpatrick level 2 outcomes, active participants seemed to learn significantly better than observers. Only one study reported Kirkpatrick level 3 outcomes, and it found no significant difference in behavior change between groups.34 Kirkpatrick level 4 outcomes were not reported by any of the studies despite being arguably the most important training outcomes for patient care. Several subgroup analyses were planned, but because of limitations related to the available data, comparisons were only possible between technical versus nontechnical skills training, presence versus absence of debriefing, directed versus nondirected observation, and skills versus knowledge learning. The results of these subgroup analyses are important to understanding the circumstances in which observation can be most effectively incorporated into SBE.
Observers and active participants seem to have similar Kirkpatrick level 1 reactions overall. However, subgroup analyses comparing outcomes between nontechnical and technical skills suggest the main benefit may be related to decreasing participant anxiety and stress. Observers had significantly better reactions in nontechnical skills training but not technical skills training. This finding must be interpreted with caution because only one study was included in the nontechnical skills category. As previously mentioned, the reactions included in this study were self-reported anxiety and stress, which the authors correlated with salivary cortisol.33 Classically described Kirkpatrick level 1 reactions include interest and satisfaction. The role of participants' anxiety and stress has been demonstrated to be an important determinant of the effectiveness of nontechnical skills training.47 For example, Naber et al.47 found team trainings had decreased effectiveness when a greater number of participants experienced high levels of social anxiety (p.164). Anxiety and stress may also have degrading effects for novices because of the additional cognitive load. It has been recommended that SBE designed for novices should have “no noise (eg, collegial interactions, competing clinical tasks or symptoms)” until participants achieve a certain level of mastery after which contextual fidelity stressors can be introduced to facilitate transfer.18,48 Thus, the observer role may be beneficial for some personality types and for novices depending on the learning objectives.
For Kirkpatrick level 2 outcomes, active participants seemed to do better than observers. However, subgroup analyses demonstrated that when directed observation was used, there was no difference between observers and active participants. This aligns with social learning theory and suggests that directed observation is an important part of the 4-step process, along with debriefing, motivation, and an opportunity for practice.13 Motivation in this context refers to participants' understanding of the potential benefits of incorporating the observed behaviors into their own lives. In our systematic review, some elements of social learning theory were included in the studies that used directed observation, but none used all 4 components. For example, of the studies that used directed observation, only 3 included an opportunity to debrief and none included an opportunity for practice. Another limitation to assessing the true potential of social learning theory in SBE is that the best method of providing direction in the observer role remains unclear. O'Regan et al.23 found that observer tools currently used in healthcare simulation include assignment of a single task to focus on,49 open-ended written questions,50 and checklists.19,23,51 The studies that used directed observation in our review exclusively used orientation to the learning objectives, limiting our ability to make comparisons between observer tools. Future studies seeking to incorporate directed observation should aim to identify the most effective observer tools while incorporating all 4 essential elements of social learning theory.
Another subgroup analysis of Kirkpatrick level 2 outcomes found that active participation was significantly better than observation when debriefing was present. Reflective practice requires participants to have prior experiences so they can examine the underlying beliefs and values that gave rise to their prior behaviors (ie, "double loop learning").16 This is referred to as reflection-on-action in Schon's Reflective Practitioner.17 However, our results suggest active participants may benefit more from debriefing than observers. Observation followed by debriefing may have been less effective than active participation given the homogeneity of our sample, which consisted mostly of trainees. Trainees may require active participation in simulation to provide them with content and material to reflect on. Observation followed by reflection-on-action, as described by Schon,17 may be more effective in participants who have acquired a certain degree of professional experience. Practicing professionals made up only 5% of participants in our review. These results highlight the need for further research on the use of observation in SBE at different stages along the learning continuum, including continuing professional education.
Observation in SBE may also play an increasingly important role in helping spread this effective learning modality. Although there are more than 600 simulation centers worldwide providing active participatory SBE, access to SBE remains variable, with small surveys demonstrating that between 60% and 100% of respondents have access to simulation.52–56 The extent of SBE use among the majority of healthcare providers worldwide, who have not been surveyed, is unknown. We know that an important barrier to establishing a successful simulation program is access to trained facilitators to design and deliver simulation curricula.57–59 Observed simulation may help reduce this barrier because it can be used through telesimulation, defined as "a process by which telecommunication and simulation resources are utilized to provide education, training, and/or assessment to learners at an off-site location."60 Telesimulation may be a way to provide high-quality SBE until a more scalable method of training new facilitators is developed.
The strengths of this systematic review and meta-analysis include the formulation of a focused question pertaining to a novel technique increasing in use; use of a widely used educational evaluation model, allowing us to combine multiple different interventions reporting variously measured outcomes; implementation of a comprehensive, peer-reviewed search strategy with no language restriction; and appraisal of internal validity using the Cochrane Risk of Bias tool. All meta-analyzed data were extracted from trials published within the last 8 years, so the relevance of our findings to current practice is high. Limitations of our study include the small number of included trials, the difference in sample size between groups, the lack of multicenter trials, the relatively short duration of interventions, and high loss to follow-up. The difference in sample size is a particularly important limitation because the number of learners in any training context is inversely correlated with outcomes at all Kirkpatrick levels (ie, higher learner numbers lead to reduced outcomes). The sparse reporting of baseline scores, Kirkpatrick level 3 and 4 outcomes, and safety measures represents a further potential limitation, and the predominantly North American, medical, trainee-based population limits the generalizability of the conclusions. Finally, none of the studies included in this review mentioned the theories used to inform their interventions, and more than half were at very high risk of bias. This is consistent with the findings of a prior Best Evidence in Medical Education review, which found only 109 of 670 articles on SBE to be of robust methodological quality.6,61 This limits the strength of any conclusions we can make about the use of observation in SBE based on the currently available studies.
Our study provides important insights into where further research is needed to better understand the use of observation in SBE. Observation seems to reduce participant anxiety and stress and may be better when directed and for knowledge rather than skills learning. Participants may benefit more from observation when they have attained a certain degree of experience on the job to be able to meaningfully reflect on the relevance of what is being taught. Future interventions seeking to incorporate observation in SBE should do so using appropriate learning theories, such as Bandura's Social Learning Theory15 or Schon's Reflective Practitioner.17
CONCLUSIONS
There does not seem to be a significant difference in reactions or behavior change between observers and active participants undergoing SBE. However, active participation in SBE may improve learning outcomes. Observation may be more successful in certain circumstances, but many questions remain unanswered. More research is needed, particularly on Kirkpatrick level 3 and 4 outcomes, to better understand under what circumstances the observer role is most effective and how it can be optimized in theoretically sound educational interventions.
ACKNOWLEDGMENTS
We would like to acknowledge Janet Rothney, a health science information specialist at the University of Manitoba, for her assistance in the systematic review. We would also like to acknowledge Dr. Ahmed Abou-Setta, Dr. Rasheda Rabbani, and Dr. Ryan Zarychanski from the University of Manitoba for their assistance in ensuring a high-quality methodological and statistical approach. Finally, we would like to thank Dr. Steven Yule from the STRATUS Center for Medical Simulation at Harvard Medical School for his content expertise.
REFERENCES
1. Steadman RH, Huang YM. Simulation for quality assurance in training, credentialing and maintenance of certification. Best Pract Res Clin Anaesthesiol
2. Chiniara G, Cole G, Brisbin K, et al. Simulation in healthcare: a taxonomy and a conceptual framework for instructional design and media selection. Med Teach
3. Bradley P, Postlethwaite K. Simulation in clinical learning. Med Educ
4. Clapper TC. Beyond Knowles: what those conducting simulation need to know about adult learning theory. Clin Simul Nurs
5. Rosen KR. The history of medical simulation. J Crit Care
6. Bradley P. The history of simulation in medical education and possible future directions. Med Educ
7. Persky AM, Robinson JD. Moving from novice to expertise and its implications for instruction. Am J Pharm Educ
8. Nestel D, Jolly B, Watson M, Kelly M. Healthcare Simulation Education: Evidence, Theory & Practice. West Sussex: John Wiley & Sons; 2018.
9. Wang EE. Simulation and adult learning. Dis Mon
10. Ferrari M. The Pursuit of Excellence through Education. London: Lawrence Erlbaum Associates Publishers; 2002:22–55.
11. Ziv A, Ben-David S, Ziv M. Simulation based medical education: an opportunity to learn from errors. Med Teach
12. Issurin VB. Training transfer: scientific background and insights for practical application. Sports Med
13. Bethards ML. Applying social learning theory to the observer role in simulation. Clin Simul Nurs
14. Gordon M. Building a theoretically grounded model to support the design of effective non-technical skills training in healthcare: the SECTORS model. J Contemp Med Educ
15. Bandura A. Social learning theory. J Commun
16. Sandars J. The use of reflection in medical education: AMEE Guide No. 44. Med Teach
17. Schon DA. Educating the Reflective Practitioner: Toward a New Design for Teaching and Learning in the Professions. San Francisco: Jossey-Bass Publishers; 1987.
18. Roussin CJ, Weinstock P. SimZones: an organizational innovation for simulation programs and centers. Acad Med
19. Stegmann K, Pilz F, Siebeck M, Fischer F. Vicarious learning during simulations: is it more effective than hands-on training? Med Educ
20. Bloch SA, Bloch AJ. Simulation training based on observation with minimal participation improves paediatric emergency medicine knowledge, skills and confidence. Emerg Med J
21. Zulkosky KD, White KA, Price AL, Pretz JE. Effect of simulation role on clinical decision-making accuracy. Clin Simul Nurs
22. Stroben F, Schroder T, Dannenberg KA, Thomas A, Exadaktylos A, Hautz WE. A simulated night shift in the emergency room increases students' self-efficacy independent of role taking over during simulation. BMC Med Educ
23. O'Regan S, Molloy E, Watterson L, Nestel D. Observer roles that optimise learning in healthcare simulation education: a systematic review. Adv Simul
24. Chandler J, Churchill R, Lasserson T, Tovey D, Higgins J. Methodological standards for the conduct of new Cochrane Intervention Reviews. Available at: http://editorial-unit.cochrane.org/sites/editorial-unit.cochrane.org/files/uploads/MECIR_conduct_standards%2023%2002122013.pdf. Accessed November 1, 2018.
25. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol
26. Lopreiato JO, Downing D, Gammon W, et al; Society for Simulation in Healthcare. Healthcare Simulation Dictionary. 2016. Available at: https://www.ssih.org/Dictionary. Accessed April 10, 2019.
27. Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0. 2011; Table 7.7.a: formulae for combining groups. Available at: http://handbook-5-1.cochrane.org/. Accessed April 10, 2019.
28. Cook DA, Hamstra SJ, et al. Comparative effectiveness of instructional design features in simulation-based education: systematic review and meta-analysis. Med Teach
29. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 Guideline Statement. J Clin Epidemiol
30. Higgins JP, Altman DG, Gotzsche PC, et al. The Cochrane Collaboration's tool for assessing risk of bias in randomised trials. BMJ
31. Borenstein M, Hedges LV, Higgins JPT, Rothstein HR. Introduction to Meta-Analysis. Psychother Res J Soc Psychother Res
32. Hughes AM, Gregory ME, Joseph DL, et al. Saving lives: a meta-analysis of team training in healthcare. J Appl Psychol
33. Bong CL, Lee S, Ng ASB, Allen JC, Lim EHL, Vidyarthi A. The effects of active (hot-seat) versus observer roles during simulation-based training on stress levels and non-technical performance: a randomized trial. Adv Simul
34. Blanie A, Gorse S, Roulleau P, Figueiredo S, Benhamou D. Impact of learners' role (active participant-observer or observer only) on learning outcomes during high-fidelity simulation sessions in anaesthesia: a single center, prospective and randomised study. Anaesth Crit Care Pain Med
35. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med
36. Baxter P, Akhtar-Danesh N, Landeen J, Norman G. Teaching critical management skills to senior nursing students: videotaped or interactive hands-on instruction? Nurs Educ Perspect
37. Boncyk CS, Schroeder KM, Anderson B, Galgon RE. Two methods for teaching basic upper airway sonography. J Clin Anesth
38. Brydges R, Nair P, Ma I, Shanks D, Hatala R. Directed self-regulated learning versus instructor-regulated learning in simulation training. Med Educ
39. Hobgood C, Sherwood G, Frush K, et al. Teamwork training with nursing and medical students: does the method matter? Results of an interinstitutional, interdisciplinary collaboration. Qual Saf Health Care
40. Lai A, Haligua A, Dylan Bould M, et al. Learning crisis resource management: practicing versus an observational role in simulation training – a randomized controlled trial. Anaesth Crit Care Pain Med
41. McCoy CE, Sayegh J, Rahman A, Langdorf MI, Anderson C, Lotfipour S. Prospective randomized crossover study of telesimulation versus standard simulation for teaching medical students the management of critically ill patients. AEM Educ Train
42. Nasr Esfahani M, Behzadipour M, Jalali Nadoushan A, Shariat SV. A pilot randomized controlled trial on the effectiveness of inclusion of a distant learning component into empathy training. Med J Islam Repub Iran
43. Nilsson C, Sørensen BL, Sørensen JL. Comparing hands-on and video training for postpartum hemorrhage management. Acta Obstet Gynecol Scand
44. Semler MW, Keriwala RD, Clune JK, et al. A randomized trial comparing didactics, demonstration, and simulation for teaching teamwork to medical residents. Ann Am Thorac Soc
45. VanderWielen BA, Harris R, Galgon RE, VanderWielen LM, Schroeder KM. Teaching sonoanatomy to anesthesia faculty and residents: utility of hands-on gel phantom and instructional video training models. J Clin Anesth
46. Weiler DT, Gibson AL, Saleem JJ. The effect of role assignment in high fidelity patient simulation on nursing students: an experimental research study. Nurse Educ Today
47. Naber AM, McDonald JN, Asenuga OA, Arthur W. Team members' interaction anxiety and team-training effectiveness: a catastrophic relationship? Hum Factors
48. Klein MI, Warm JS, Riley MA, et al. Mental workload and stress perceived by novice operators in the laparoscopic and robotic minimally invasive surgical interfaces. J Endourol
49. Hober CL. Student perceptions of the observer role play experiences in the implementation of a high fidelity patient simulation in bachelor's degree nursing programs [dissertation]. ProQuest Dissertations and Theses. Kansas City, Kansas: University of Kansas; 2012.
50. Lau KC, Stewart SM, Fielding R. Preliminary evaluation of “interpreter” role plays in teaching communication skills to medical undergraduates. Med Educ
51. Kaplan BG, Abraham C, Gary R. Effects of participation vs. observation of a simulation experience on testing outcomes: implications for logistical planning for a school of nursing. Int J Nurs Educ Scholarsh
52. Society for Simulation in Healthcare. SIM Center Directory. Available at: https://www.ssih.org/Home/SIM-Center-Directory. Accessed November 28, 2018.
53. Harper MG, Gilbert GE, Gilbert M, Markey L, Anderson K. Simulation use in acute care hospitals in the United States. J Nurses Prof Dev
54. Norman G, Dore K, Grierson L. The minimal relationship between simulation fidelity and transfer of learning. Med Educ
55. Zhao Z, Niu P, Ji X, Sweet RM. State of simulation in healthcare education: an initial survey in Beijing. JSLS
56. Wagner M, Heimberg E, Mileder LP, Staffler A, Paulun A, Löllgen RM. Status quo in pediatric and neonatal simulation in four central European regions: The DACHS Survey. Simul Healthc
57. Blazeck A. Simulation anxiety syndrome: presentation and treatment. Clin Simul Nurs
58. Jones AL, Hegge M. Faculty comfort levels with simulation. Clin Simul Nurs
59. Cheng A, Grant V, Dieckmann P, Arora S, Robinson T, Eppich W. Faculty development for simulation programs: five issues for the future of debriefing training. Simul Healthc
60. McCoy CE, Sayegh J, Alrabah R, Yarris LM. Telesimulation: an innovative tool for health professions education. AEM Educ Train
61. Issenberg SB, McGaghie WC, Petrusa ER, Gordon DL, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach