Simulation-based training (SBT) in health care has grown rapidly over the past decade.1–9 Providing simulation training of the highest quality requires skilled educators who can facilitate SBT sessions and debriefings.10–19 The demand for experts and leaders in the field has driven the development of simulation fellowship programs.20 The extent to which simulation fellowship programs achieve the primary goal of creating leaders and educators in the field of health care simulation is yet to be determined.
To date, we are aware of no standardized curricula for simulation fellowship programs. Further, U.S. fellowship programs are not accredited by the Accreditation Council for Graduate Medical Education (ACGME). Like many other nonaccredited programs, simulation fellowships provide an opportunity for fellows to acquire the skills needed to develop, deliver, and evaluate simulation curricula for medical trainees.21 SBT fellowship programs achieve their objectives by providing fellows with the knowledge, experience, skills, and behaviors required to use simulation as an effective educational tool.
As in other pioneering fields, many fellowship programs in simulation have been started by faculty without formal training who built expertise through practice. As such, variation in the structure and content of fellowship training programs is probable. Furthermore, graduates of different SBT fellowship programs may complete their training with heterogeneous skills, knowledge, and expertise; that is, different programs likely emphasize different topic areas, and a few programs may not address some topics at all. Currently, no organization outside the field of surgery offers an accreditation process for fellowships in simulation. The American College of Surgeons accredits fellowships in surgical simulation through its Accredited Education Institutes program.22
We aimed to describe the current status and future goals of English-language-based simulation fellowship programs around the world. The objectives of this study were as follows: (1) to identify the composition and infrastructure of existing simulation fellowship programs, (2) to describe current training practices of these programs, (3) to disclose existing barriers and their impact on programs, and (4) to highlight opportunities for standardizing fellowship training.
Study design and approval
A team of investigators conducted this cross-sectional study electronically using a Web-based survey administered from September 2014 through September 2015. The Rutgers New Jersey Medical School Institutional Review Board approved this study.
Study setting and population
The team of investigators developed an initial data set of English-language-based programs through a review of national and international institutional and academic society Web sites, a review of health care simulation literature, and a Web-based search for training programs. They defined “simulation fellowship” as “a training program offered by a simulation center, hospital, and/or university whereby trainees are engaged in simulation-specific learning, teaching, training, and/or research for a pre-defined period of time ranging from a minimum of 6 months to 2 years.” In addition, the team used peer snowball sampling to maximize the likelihood of finding programs not identified through the literature and Internet searches.
The investigators developed a survey instrument based on existing literature, knowledge of current simulation fellowship training, discussions with experts in the field, and their own experience in simulation fellowship training. Four investigators (D.S., Z.B., J.P., and A.C.) reviewed the survey tool to provide feedback on the relevance and clarity of questions and on the time needed to complete the questionnaire. The team incorporated these reviewers’ feedback into the final draft and then distributed the questionnaire through the commercial survey service provider Survey Monkey (Survey Monkey, Inc., Seattle, WA). Simulation fellowship program directors (hereafter, simply program directors) received an e-mail invitation to participate in the Web-based survey. Participants acknowledged consent before completing the survey and did not receive an incentive for participating. The investigators sent monthly follow-up e-mails for 3 months after the initial contact, and followed up by e-mail and phone at 6, 9, and 12 months to optimize the response rate. The team deidentified respondents after receiving their completed surveys.
The survey consisted of 50 open- and closed-ended questions (using skip logic) that assessed the following:
- fellowship demographics,
- educational objectives and expected outcomes,
- training resources and instructional methods,
- additional training (e.g., a master’s degree) included,
- program administration and support,
- fellow assessment and program evaluation, and
- recommendations for best practices.
The complete survey is available as Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A438.
The team of investigators exported the survey results into Microsoft Excel (Microsoft Corp., Redmond, WA) and summarized data by question. They analyzed responses using traditional, descriptive analyses and have reported summary statistics as means and/or number and percent where appropriate. They analyzed each question that used a five-point Likert scale (i.e., where 1 = strongly disagree and 5 = strongly agree) individually as an ordinal variable, and they collapsed categories into binary groups to calculate proportions (i.e., agree [a score of 4 or 5] or neutral/disagree [scores of 1, 2, or 3]). They analyzed responses to all other questions as categorical variables. They summarized free-text comments and grouped them by theme.
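The Likert-scale handling described above (treating each item as ordinal, then collapsing scores into a binary agree vs. neutral/disagree grouping) can be reproduced in a few lines. The sketch below is a minimal illustration only; the function name and the response data are hypothetical and do not come from the study.

```python
# Sketch of the Likert-collapsing step: each 5-point response
# (1 = strongly disagree ... 5 = strongly agree) is dichotomized into
# "agree" (a score of 4 or 5) vs. "neutral/disagree" (1, 2, or 3).

def collapse_likert(responses):
    """Return (agree count, total, proportion agreeing) for 5-point scores."""
    agree = sum(1 for score in responses if score >= 4)
    total = len(responses)
    return agree, total, agree / total

# Hypothetical responses from 29 directors (not the study's actual data):
scores = [5, 4, 2, 3, 4, 1, 5, 3, 4, 2, 4, 5, 3, 3, 4,
          2, 5, 4, 3, 1, 4, 3, 5, 2, 4, 3, 4, 5, 3]
agree, total, prop = collapse_likert(scores)
print(f"{agree}/{total} agreed ({prop:.0%})")  # prints: 15/29 agreed (52%)
```

The binary proportions reported throughout the Results (e.g., "10 [34%] agreed") follow this pattern: a numerator of scores of 4 or 5 over the number of directors answering that item.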
Forty-nine programs met the inclusion criteria. From these, 32 fellowship directors (65%) participated in the survey. Of these 32 respondents, 21 (66%) were from the United States, 8 (25%) from Canada, 2 (6%) from England, and 1 (3%) from Australia. Nonrespondents comprised 10 from the United States, 2 from Canada, 3 from England, and 2 from Australia. The question completion rate was variable. We have included results for all completed questions.
We have presented program characteristics in Table 1. The first simulation fellowship program began in 1998, but the majority (18 of 32 [56%]) started between 2010 and 2014. Programs were sponsored by and/or affiliated primarily with either hospitals (19 [59%]) or medical schools (12 [37.5%]). More than half the simulation centers (n = 19 [59%]) were accredited by governing bodies, including the Society for Simulation in Healthcare (SSH), the Royal College of Physicians and Surgeons of Canada (RCPSC), the American College of Surgeons (ACS), the American Society of Anesthesiologists, and the Australasian College for Emergency Medicine.
Fellowship program length varied; most of the programs (n = 21 [66%]) lasted one to two years. The number of fellows starting in each program differed considerably by program, ranging from 1 to 7 per year. At the time of the survey, across the 32 programs, 186 fellows had graduated from simulation fellowship programs since 1998; the number of graduates ranged from 1 to 36 per program.
Simulation resources varied across programs. All 31 program directors responding to the relevant question reported that they had simulation facility space. Almost all (n = 29 [94%]) reported that their programs had task trainers, and a large majority (n = 26 [84%]) also had human patient simulators. More than three-quarters (n = 24 [77%]) reported that they had audiovisual equipment for simulation, and most (n = 23 [74%]) also involved standardized patients. Computerized simulation technologies, data management systems, and virtual reality simulators were uncommon.
Fellow and fellowship director characteristics
Of the 32 SBT fellowship programs, most (28 [87.5%]) matriculated fellows who had completed their residency programs; 9 (28%) matriculated fellows who had completed only medical school. Programs also matriculated nurses (5 [16%]), nonclinical educators (2 [6%]), and “others” (6 [19%]), such as paramedics. Most training programs (n = 21 [66%]) enrolled fellows as full-time trainees; fellows in the other 11 programs (34%) were part-time and concurrently enrolled in a clinical training program. The number of clinical duty hours required of fellows varied, but most programs (n = 20 [62.5%]) required 12 to 20 hours weekly. Simulation fellows worked primarily in clinical roles—as attendings/consultants at 22 programs (69%), as clinical fellows at 11 programs (34%), and as nurses at 4 programs (12.5%). We have provided additional characteristics of fellows in Supplemental Digital Table 1 at http://links.lww.com/ACADMED/A439.
Fellowship directors were physicians in 27 of 29 programs (93%). Of the 29 directors who answered the relevant questions, many had additional simulation training in the form of a master’s degree in education or equivalent (10 [34%]), a simulation certificate (7 [24%]), or a simulation fellowship (7 [24%]). All of the 29 responding directors reported that their programs were supported by additional faculty members, and the additional faculty members at 22 of the programs (76%) were considered full-time. Additional faculty at most of the programs had some formal simulation training—most often in the form of local workshops (22 [76%]), national courses (21 [72%]), simulation certificates (18 [62%]), or simulation fellowships (13 [45%]).
Funding for fellows and directors alike came primarily from their departments (respectively, 17/29 [59%] and 7/14 [50%]; see Figure 1). A plurality of fellows (12/25 [48%]) earned an annual salary of $80,000; however, 4 of 25 (16%) earned $70,000 to $79,999, and 9 of 25 (36%) earned $50,000 to $69,999. The majority of fellows (16/26 [62%]) did not pay tuition, but of those who did, 6/26 (23%) paid on average $1,000 to $5,000 yearly, 3/26 (12%) paid over $20,000 yearly, and a minority (1/26 [4%]) paid $10,000 to $14,999 a year. Additional benefits for fellows participating in SBT programs included funding to attend regional and/or national conferences or lectures and the availability of faculty development workshops/courses.
Training and curriculum
Programs employed a wide range of instructional methodologies. The approaches used most frequently (by at least 90% of the programs) included attendance at conferences (n = 31 [100%]), experiential learning (n = 30 [97%]), research (n = 29 [94%]), mentorship (n = 29 [94%]), participation (as a learner) in instructor training courses (n = 28 [90%]), and informal contributions to simulation lab teaching (n = 28 [90%]). According to the 31 directors who responded to the questions, other common modalities included participation in a journal club (n = 26 [84%]), attendance at lectures (n = 26 [84%]), and participation (as an instructor) in instructor training courses (n = 26 [84%]). We have reported the most frequently used instructional methods in Supplemental Digital Figure 1 at http://links.lww.com/ACADMED/A439. Of 30 responding directors, 15 (50%) reported that their fellowship programs were combined with a certificate and/or advanced degree program—primarily (10 of 15 [67%]) with a master’s of education program. Of the 15 directors who responded to the question about whether these additional degree/certificate programs were mandatory or optional, 9 (60%) said they were optional.
According to the 31 program directors who answered the question, 15 programs (48%) provided an average of 10 to 19 hours of simulation training for fellows per week (range: 10–39 hours), and fellows performed 10 to 49 simulation sessions per month. (We defined a simulation session as a period of time devoted to a simulation-based activity followed by debriefing.) Figure 2, showing the top five activities fellows engaged in during their training, highlights that according to our respondents, fellows spent most of their simulation training time (24%) as an instructor.
Over 90% of the 31 responding programs covered four core objectives: (1) professional values and capabilities (i.e., leadership, advocacy) (n = 28 [90%]); (2) educational principles, practice, and methodology in simulation (n = 29 [93%]); (3) implementation, assessment, and management of simulation-based activities (n = 31 [100%]); and (4) scholarship (n = 29 [93%]). The program directors considered a number of educational outcomes to be important to a successful simulation fellowship program. Table 2 shows the relative importance of these educational outcomes.
Program barriers and impacts
Of the 29 program directors who responded to the relevant questions, few considered the variables listed to be barriers to their programs (see Supplemental Digital Figure 2 at http://links.lww.com/ACADMED/A439). Only 10 (34%) agreed that lack of funding was a barrier to their program’s success; the remaining 19 (66%) disagreed. Similarly, only 7 (24%) agreed that the lack of a standardized curriculum was a barrier. Despite the small number of program directors agreeing that a lack of standardization was a barrier, 16 (55%) of them felt their programs would benefit from a structured curriculum. Only 10 program directors (34%) agreed that more didactic time would be a benefit, while far more—23 (79%)—agreed that additional faculty development would be most beneficial. We have presented additional elements that directors thought would add value in Supplemental Digital Figure 3 (http://links.lww.com/ACADMED/A439).
Standardization, accreditation, and certification
Of 28 program directors, 18 (64%) felt that there should be standardized fellowship guidelines, and 17 of 25 directors (68%) felt that these guidelines should be standardized at a national level. Only 1 director (of 28 [4%]) felt that salary needed to be standardized. At the other extreme, more than three-quarters of directors (22 of 28 [79%]) felt that core objectives needed to be standardized (Figure 3). Of 28 program directors, 5 (18%) felt that if there had to be standardized guidelines then the SSH would be a good governing body for accreditation, and 3 (11%) felt that some other organization should oversee accreditation. The directors from U.S. programs advocated the ACGME and American Board of Emergency Medicine as potential accrediting agencies. Similarly, program directors from outside of the United States identified their national organizations for possible oversight (e.g., RCPSC for Canada).
Twenty-nine directors answered questions about board certification. Of these, the majority (24 [83%]) did not agree with board certification. Most felt that board certification would add an unnecessary level of “bureaucratic burden” due to “U.S.-style board certification programs not relevant to international programs” and should be left to “clinical entities.” Others felt certification could potentially “hamper innovation and creativity” inherent to simulation and, ultimately, “limit the growth” of this “ever-developing” specialty.
Twenty-three program directors responded to a question about best practices for simulation fellowships, and many shared similar ideas. Eight (35%) advocated standardization, 8 (35%) promoted research, and 6 (26%) endorsed formative assessments as a part of any list of best practices. Five (22%) supported incorporating educational experiences and theory; 3 (13%) stressed teaching with different simulation modalities; and 2 (9%) emphasized each of the following: the need for debriefing tools, additional lectures, and requiring a master’s degree in education as part of the training.
Our study represents, to our knowledge, the first international survey of simulation fellowship training programs. The majority of programs were based in the United States, and over half had begun between 2010 and 2014. Programs were primarily affiliated with hospitals and/or medical schools, and many of the sponsoring centers had been accredited by governing bodies. Simulation fellows were mostly medical trainees, and fellowship directors were mostly physicians. Fellows and their directors were primarily funded by an associated department in the hospital or university, and instructors applied a range of instructional modalities. Surprisingly, most programs did not identify any major barriers, perhaps reflecting the developed nature of fellowship training. We identified various elements of fellowship training that could be standardized across programs; specifically, directors identified core objectives as the element most suited for standardization.
To our knowledge, no prior work has described the scope and objectives of existing simulation fellowship programs on an international scale. One previous study sought to identify postgraduate medical simulation training programs in the United States.20 The authors identified 17 different simulation fellowship programs, the majority of which were sponsored by emergency medicine, anesthesiology, and/or interdisciplinary units. According to this previous study,20 most programs required fellows to do clinical work as part of their training; our finding—that the majority of fellows were required to work in the clinic between 12 and 20 hours a week—aligns with these results. We found, just as Kotal and colleagues20 had previously, that the majority of training programs ranged from one to two years in length. The U.S.-based study did not explore curricular content and instructional methods, nor did the authors discuss program barriers and opportunities for standardization. Our study offers unique information that can shape the content, structure, and instructional methods of simulation fellowship programs around the world.
Because simulation-based education in health care training is growing rapidly,1,2,17,23 we were not surprised to find a similar trend with simulation fellowship training programs. Although the characteristics of training programs varied, the majority of program directors indicated they had access to simulation facility space, simulators, and task trainers. The availability of these resources, along with secured financial support for most programs, suggests that fellowship training is typically offered by simulation programs that are highly developed and well resourced. We expect the number of fellowship programs to grow in the coming years, thus signaling an ever-growing need to set standards for training.
More than half of the program directors surveyed indicated that their simulation centers were accredited by a governing body. Accreditation systems offered by different governing bodies (e.g., SSH, the RCPSC) were driven by the pressing need for standards amongst simulation centers and programs.24,25 Depending on the governing body, accreditation may take on different forms; some agencies may offer partial accreditation to the centers for specific areas (e.g., patient safety), and others may offer accreditation for simulation as a whole. Although some standards for simulation center accreditation (e.g., curriculum development, program evaluation) may relate to fellowship training, we are aware of no attempt to standardize the content of fellowship training programs, especially nationally. Aside from the ACS, no governing bodies offer overall accreditation of multiple health care simulation fellowship training programs.
Most of the fellowships identified in our survey grew out of clinical training programs. In the future, we believe simulation fellowship programs may find themselves attached to or affiliated with clinical training programs, graduate training programs (e.g., a master’s in medical education), fellowships in medical education, and/or individual simulation programs. Ultimately, the best “home” for a specific simulation fellowship program may depend on program focus and goals (e.g., education, research, patient safety), local policies, and funding source, amongst other factors.
Through the results of our survey, we identified a large number of educational outcomes or objectives that were common to all programs. As the majority of program directors indicated a desire to standardize educational content across training programs at the national level, we wonder if this goal could be achieved through a consensus-building process (e.g., a consensus conference) amongst key stakeholders and program directors. With key competencies for medical26 and simulation educators11 in mind, a consensus conference would provide an opportunity both to review existing curricula and, importantly, to identify core content that should be delivered by all simulation fellowship training programs. Most programs seemed to be achieving their goals through the application of multiple instructional methods, including experiential learning, research, mentorship, formal courses, journal clubs, lectures, and conference opportunities. Many of these instructional methods have been found to be effective for faculty development.27,28 Unfortunately, other effective methods of faculty development, such as expert feedback,29 peer feedback,30–32 self-assessment,33 or group discussion, were not explicitly described by the program directors surveyed, thus indicating potential areas for growth in the future.10,28
Of note, a minority of training programs included simulation-based research as a fellowship training activity. Given the importance of simulation-based research as a means of advancing simulation in health care,1,34 we encourage program directors to consider integrating simulation-based research as a key component in fellowship training. Doing so will ensure that the next generation of simulation leaders in health care are versed in the application of simulation for research purposes. Furthermore, the use of simulation to address patient safety issues in a targeted fashion was not identified as a fellowship training activity. We encourage the thoughtful integration of simulation-based patient safety and quality improvement initiatives into fellowship training programs to maximize the potential effect on patient outcomes.
Our study has several limitations. No centralized directory lists all the simulation fellowship training programs around the world, so, despite our efforts, we may have missed some programs in our search. For example, our sample included primarily programs in the United States and Canada, even though SBT is very well developed in other areas of the world, such as Australia and Europe. Additionally, we limited our search to English-language-based programs, thus likely excluding any programs for which English is not the primary language. We defined “simulation fellowship” as ranging in length from 6 months to 2 years, so we excluded programs of shorter or longer duration. Finally, as simulation is a growing field, we may have missed new fellowship training programs that began after we completed data collection.
Paralleling the fast growth and integration of SBT, fellowship training opportunities have grown rapidly in the United States, Canada, and beyond. This period of growth provides an opportune time to galvanize energy and support for training simulation leaders and to intentionally develop a nascent field into a scientific discipline. A consensus conference (and any accompanying consensus statement) could lead to standardization and accreditation of fellowship training programs that develop measurable, comparable competencies in graduates. Alongside instructor certification and research requirements, these changes could help define the requirements and characteristics of effective simulation fellowship programs.
Acknowledgments: The authors would like to thank and acknowledge the contributions of the following individuals who constitute the International Simulation Fellowship Training Investigators team: Thomas Nowicki, MD; Joel M. Clingenpeel, MD, MPH, MS, MEDL; Mike Falk, MD; Vinay M. Nadkarni, MD; Ernest Wang, MD; David Salzman, MD, MEd; Rami Ahmed, DO; Charles Pozner, MD; Jordan Tarshis, MD; Ian M. Julie, MD, MAS; and Paul E. Phrampus, MD. The principal investigator, Dr. Brenda Natal, had full access to all the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
1. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. JAMA. 2011;306:978–988.
2. Cheng A, Lang TR, Starr SR, Pusic M, Cook DA. Technology-enhanced simulation and pediatric education: A meta-analysis. Pediatrics. 2014;133:e1313–e1323.
3. Cheng A, Lockey A, Bhanji F, Lin Y, Hunt EA, Lang E. The use of high-fidelity manikins for advanced life support training—A systematic review and meta-analysis. Resuscitation. 2015;93:142–149.
4. Cheng A, Eppich W, Grant V, Sherbino J, Zendejas B, Cook DA. Debriefing for technology-enhanced simulation: A systematic review and meta-analysis. Med Educ. 2014;48:657–666.
5. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10–28.
6. Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: An ethical imperative. Acad Med. 2003;78:783–788.
7. Cheng A, Duff J, Grant E, Kissoon N, Grant VJ. Simulation in paediatrics: An educational revolution. Paediatr Child Health. 2007;12:465–468.
8. Cheng A, Goldman RD, Aish MA, Kissoon N. A simulation-based acute care curriculum for pediatric emergency medicine fellowship training programs. Pediatr Emerg Care. 2010;26:475–480.
9. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. A critical review of simulation-based medical education research: 2003–2009. Med Educ. 2010;44:50–63.
10. Cheng A, Grant V, Dieckmann P, Arora S, Robinson T, Eppich W. Faculty development for simulation programs: Five issues for the future of debriefing training. Simul Healthc. 2015;10:217–222.
11. Eppich W, Cheng A. Competency-based simulation education: Should competency standards apply to simulation educators? BMJ Simul Technol Enhanc Learn. 2015;1:3–4.
13. Eppich W, Cheng A. Promoting excellence and reflective learning in simulation (PEARLS): Development and rationale for a blended approach to health care simulation debriefing. Simul Healthc. 2015;10:106–115.
14. Cheng A, Palaganas J, Eppich W, Rudolph J, Robinson T, Grant V. Co-debriefing for simulation-based education: A primer for facilitators. Simul Healthc. 2015;10:69–75.
15. Cheng A, Rodgers DL, van der Jagt É, Eppich W, O’Donnell J. Evolution of the pediatric advanced life support course: Enhanced learning with a new debriefing tool and Web-based module for pediatric advanced life support instructors. Pediatr Crit Care Med. 2012;13:589–595.
16. Cheng A, Hunt EA, Donoghue A, et al; EXPRESS Investigators. Examining pediatric resuscitation education using simulation and scripted debriefing: A multicenter randomized trial. JAMA Pediatr. 2013;167:528–536.
17. Cook DA, Hamstra SJ, Brydges R, et al. Comparative effectiveness of instructional design features in simulation-based education: Systematic review and meta-analysis. Med Teach. 2013;35:e867–e898.
18. Raemer D, Anderson M, Cheng A, Fanning R, Nadkarni V, Savoldelli G. Research regarding debriefing as part of the learning process. Simul Healthc. 2011;6(suppl):S52–S57.
19. Rudolph JW, Simon R, Raemer DB, Eppich WJ. Debriefing as formative assessment: Closing performance gaps in medical education. Acad Emerg Med. 2008;15:1010–1016.
20. Kotal ER, Sivertson RM, Wolfe SP, Lammers RL, Overton DT. A survey of simulation fellowship programs. J Emerg Med. 2015;48:351–355.
21. Hayden EM, Gordon JA. Fellowship training in education. In: Levine AI, DeMaria S Jr, Schwartz AD, Sim AJ, eds. The Comprehensive Textbook of Healthcare Simulation. New York, NY: Springer; 2013:587–592.
23. Qayumi K, Pachev G, Zheng B, et al. Status of simulation in health care education: An international survey. Adv Med Educ Pract. 2014;5:457–467.
24. Society for Simulation in Healthcare. SSH accreditation of healthcare simulation programs. www.ssih.org/Accreditation. Published 2017. Accessed February 8, 2017.
26. Srinivasan M, Li ST, Meyers FJ, et al. “Teaching as a competency”: Competencies for medical educators. Acad Med. 2011;86:1211–1220.
27. Steinert Y, Mann K, Centeno A, et al. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME guide no. 8. Med Teach. 2006;28:497–526.
28. Leslie K, Baker L, Egan-Lee E, Esdaile M, Reeves S. Advancing faculty development in medical education: A systematic review. Acad Med. 2013;88:1038–1045.
29. Archer JC. State of the science in health professional education: Effective feedback. Med Educ. 2010;44:101–108.
30. Sullivan PB, Buckle A, Nicky G, Atkinson SH. Peer observation of teaching as a faculty development tool. BMC Med Educ. 2012;12:26.
31. Finn K, Chiappa V, Puig A, Hunt DP. How to become a better clinical teacher: A collaborative peer observation process. Med Teach. 2011;33:151–155.
32. Adshead L, White PT, Stephenson A. Introducing peer observation of teaching to GP teachers: A questionnaire study. Med Teach. 2006;28:e68–e73.
33. Skeff KM. Evaluation of a method for improving the teaching performance of attending physicians. Am J Med. 1983;75:465–470.
34. Cheng A, Auerbach M, Hunt EA, et al. Designing and conducting simulation-based research. Pediatrics. 2014;133:1091–1101.