Health care simulation training in the United States and worldwide has made impressive inroads during the past 15 years, given the number of schools of medicine and nursing that have initiated simulation centers and programs.1 It is readily acknowledged that simulation technology has noticeably improved, leading to greater technical functionality and a broader range of clinical issues addressed. Other laudable hallmarks include the recognition that performance competency is just as important as acquisition of knowledge, support from funding agencies and foundations to help establish an evidence base, the numbers of energetic professionals who flock to annual simulation conferences, and heightened interest from accreditation and certification entities. Yet, as fine as these trends are, there may be less to cheer about when considering whether we are learning something new about the effective use of health care simulation in relation to improving patient safety and quality of care.
Are We Pursuing the Right Questions?
Systematic reviews of simulation as an educational intervention typically find that, compared with no intervention or with traditional training, simulation-based interventions are efficacious.2,3 Beneficial effects range from moderate to more noteworthy.4–6 Authors of the reviews have been diligent in pointing out methodological shortcomings and limitations in the reviews and the studies themselves. Much depends on the nature of the interventions, the prevailing skill level of subjects, and how the reviews are conducted—the specific areas of clinical or surgical focus, the eligibility and selection methods that rule some studies “in” and others “out,” how the studies are grouped for data synthesis, the unit of analysis and measures used, and the integrity of the studies and statistical methods. At the same time, investigators have raised questions about how much evidence is needed and have suggested that it is time to retire the efficacy question. A cumulative meta-analysis that was applied to nearly 600 comparative studies asked, “How much evidence does it take?”—a question so pressing that the author saw fit to place it in the paper’s title.7 For some investigators, the evidence is sufficient when, across a body of cumulative studies, “the effect is statistically significant, stable in size and varies little in value.”8
To provide some context, our position stems from our experience as program officers at a U.S. government research agency, the Agency for Healthcare Research and Quality, who have the responsibility of formulating funding announcements for simulation research, examining applications that are received, and making recommendations for funding that are informed by an external panel of peer reviewers. We have been funding simulation research from a portion of our patient safety budget for the past decade.9 Recurring questions we periodically ask ourselves include the following: Are we improving the safety and quality of care provided to patients; are we making a difference; and what are we learning? These questions are asked not only of simulation projects but of other patient safety projects as well. In brief, we have a keen interest in the type of research questions that are proposed.
Many applications that are received have the primary aim of showing that a particular cohort of residents or nurses can learn a set of clinical specialty skills with the aid of simulation training or that, compared with traditional instructional methods, the simulation group performs better. Are we likely to learn something new when this is the primary aim of the research? Pursuing research questions that are simply repeated demonstrations of what has already been established would not be using scarce research dollars wisely. Nor would it help us learn something new about which features of simulation are most beneficial to clinical performance or most relevant to patient safety outcomes. In seeking immediate scientific gratification with control group comparisons, are we running the risk of forsaking essential questions that would actually generate new knowledge? Moreover, educators and directors of simulation centers who are looking for useful evidence-based guidance regarding the most effective deployment of simulation are not likely to find much if research proceeds in a redundant fashion. More useful for educators and early-career researchers are reviews of simulation-based health care research that identify relevant educational issues and where the gaps in our understanding lie (e.g., what are the factors that influence the type of feedback given; what is the best combination of massed versus distributed practice for tasks to be trained; how does one best match simulation fidelity to the readiness level of the learner?). With a firmer evidence base, practical guidance for users of simulation could be offered with greater confidence. While review authors have made progress in addressing these goals,10–12 researchers need to be heedful of the gaps that are identified.
Clearly, research questions need to be asked that have the potential to generate new information and knowledge, if gaps are to be filled. Health care simulation can be a great tool for learning something new about skill acquisition and maintenance; establishing acceptable levels of competency and proficiency with respect to critical tasks; and determining the best ways to structure deliberate practice, provide feedback, and assess performance. Outcome measurement remains a challenge. More needs to be learned about how knowledge and skills acquired during simulation training transfer to the clinical setting and, in turn, to improved quality and safety for patients. Improved understanding is needed in how best to integrate simulation-based learning with clinical setting learning experiences; how to take advantage of simulation as a usability test bed for new procedures, protocols, and technologies in early stages of development; and how best to use simulations to enhance the resilient and adaptive capacity of teams in responding to the unexpected. To learn something new, the question needs to shift from “Is simulation effective?” to “How can it be most effective?”13
Providing More Than an Interesting Educational Experience
Because simulation enables hands-on practice and affords a measure of concordance between the training environment and clinical practice, it generally enjoys good face validity and acceptance. Participants tend to be those just starting their careers and typically report positive experiences in the surveys they take.14,15 However, investigators in the procedural and surgical domains of health care simulation have been vocal in urging a shift in the conversation, in seeking more than an interesting experience, and in focusing more attentively on a systematic range of questions.16–18 For these investigators, other fundamental questions need to be addressed for simulation to be deployed most effectively. What type of task analysis needs to be performed for assessing the skills and knowledge of entering participants? What performance records need to be kept so that participants reenter the skill progression ladder at appropriate points on return visits to the simulation lab? What aspects of clinical performance are most subject to skill decay? What learning strategies can be deployed to resist skill decay and restore performance to an earlier established level of competency or proficiency? To realize desired levels of cost-effectiveness, is there consideration of an integrated use of desktop forms of instruction, part-task and procedure trainers, virtual reality, and fuller-scale simulations that are purposefully matched to the tasks to be trained? How does simulation more meaningfully become part of a well-rounded curriculum instead of standing alone as a detached experience?19 At a continuing educational level, how can simulation serve as an integral component of lifelong learning that spans a clinician’s career—from entry into practice to the later years?20 These questions point to essential foundational work that is still needed.
How Are Skill Acquisition, Maintenance, and Progression Managed?
While simulation technology has steadily improved, there is not much evidence that similar gains have been made in the use and effective management of simulation across the skill progression continuum. The less glamorous management aspects receive scant mention in the literature. If there is to be a shift from “is simulation effective” to “how can it be most effective,” the question of how it can be most effectively managed needs to be addressed. In those high-risk industries (e.g., aerospace, petrochemical, power generation, military, transportation) where simulation is used effectively, it is taken very seriously. There is a high premium placed on the safety of people, equipment, and the environment. Adequate resources are provided for research and development. Improvements are based on extended periods of develop–test–revise iterations. Beyond the core simulation, other technological, work system, human, organizational, and external variables are recognized as important because they can converge in ways that either facilitate or impede the effectiveness of any targeted intervention. Technological variables could refer to differing hardware and software features of the medical equipment. Work system characteristics might involve varying procedures followed for performing clinical work or working with or without a coworker. Human factors refer to differences in human strengths and limitations in terms of physical, sensory, and cognitive capabilities as individuals interact with devices and their work environments. Organizational issues could refer to variations in staff availability, location of supplies, and management practices. External factors that might indirectly influence the simulation could include shifting demographics or changes in health care policy.
While not always appreciated (and dependent on the specific aims of the research), these other system factors, in addition to the core simulation, need to be represented and implemented as part of the simulation. In addressing the management challenge, a systems development perspective provides an opportunity to purposefully integrate, align, and test the various socio-technical factors with the core simulation components so that they work together to truly optimize the simulation’s effectiveness and achieve a greater impact.21–23
Are We Taking Our Own Medicine?
Rather than placing responsibility for an expanded vision of the questions that need to be asked solely on researchers, we need funders of research, journal reviewers, and anyone else serving in a gatekeeping role also to be open-minded. With respect to patient safety, useful simulation evidence is not limited to any one type of research design (such as randomized controlled, quasi-experimental, or time series trials) but depends on the questions that are raised, the ingenuity of the people pursuing them, and the open-mindedness of those serving in gatekeeping roles. Such an open and purposeful pursuit of essential questions is just as important in the so-called nontechnical group of social and cognitive skills—those that involve teamwork, leadership and management, situation awareness, and decision making. Given the importance of teamwork in critical care settings, the most reliable methods for assessing and building teamwork require continued exploration. The work moving in this direction is encouraging.24 Likewise, for those investigators interested in educational applications of simulation for assessing and teaching clinical or surgical decision making,25 or in exploring strategies to minimize cognitive bias in diagnostic work,26 progress is likely to be most evident by addressing fundamental issues.
A Final Note
As program officers with a responsibility to help the Agency for Healthcare Research and Quality fulfill its patient safety mission, our comments are best understood in light of this circumscribed yet very important mission. The occurrence of preventable patient harms in the U.S. health care delivery system continues to be a serious public health problem. After nearly two decades of efforts to reduce harms and improve safety, both inside and outside of government, the agency’s efforts have contributed to national trends of reduced hospital-acquired harms, lives saved, and cost savings.27 But nobody is completely satisfied because of the numerous harms and loss of lives that persist. In our complex health care system, patient safety remains a multifaceted challenge that requires multifaceted approaches. Simulation training in health professions education is but one of these approaches, and in our view, it can be a very promising approach, assuming we start to ask multifaceted questions that improve our knowledge regarding its optimal use.
We recognize that we could be perceived as expecting a lot from simulation as one of these promising approaches. After all, progress in both the basic and applied sciences does not proceed in a neat, incremental, linear fashion but is characterized by periods of limited progress at times and encouraging leaps at other times. Indeed, it takes time for any new approach to gain momentum, undergo refinement, and become part of clinical culture. It may also be the case that, independent of having direct effects on patient safety outcomes, simulation funding as well as funding for other patient safety concerns may be having broader and cumulative effects over the years. These outcomes help keep patient safety and associated interventions such as simulation, education, and training on the national radar; this, in turn, may serve as a catalyst for change and improvement. At the same time, each of the hazardous settings—aerospace, maritime, rail, power generation, and military—that has achieved noteworthy safety gains by incorporating simulation has done so by learning how to optimize the use of simulation, given its own unique circumstances. This also will be true for health care, and with further examination, open discussion, and purposeful action, collectively we should be able to nudge health care simulation beyond the beginning stages of its own learning curve.
1. Gordon JA, Oriol NE, Cooper JB. Bringing good teaching cases “to life”: A simulator-based medical education service. Acad Med. 2004;79:23–27.
2. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: A systematic review and meta-analysis. JAMA. 2011;306:978–988.
3. McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 2011;86:706–711.
4. Cook DA, Brydges R, Hamstra SJ, et al. Comparative effectiveness of technology-enhanced simulation versus other instructional methods: A systematic review and meta-analysis. Simul Healthc. 2012;7:308–320.
5. Zendejas B, Brydges R, Hamstra SJ, Cook DA. State of the evidence on simulation-based training for laparoscopic surgery: A systematic review. Ann Surg. 2013;257:586–593.
6. Khanduja PK, Bould MD, Naik VN, Hladkowicz E, Boet S. The role of simulation in continuing medical education for acute care physicians: A systematic review. Crit Care Med. 2015;43:186–193.
7. Cook DA. How much evidence does it take? A cumulative meta-analysis of outcomes of simulation-based education. Med Educ. 2014;48:750–760.
8. Colliver JA, Cianciolo AT. When is enough enough? Judging the sufficiency of evidence in medical education. Med Educ. 2014;48:740–741.
10. McGaghie WC, Issenberg SB, Petrusa ER, Scalese RJ. A critical review of simulation-based medical education research: 2003–2009. Med Educ. 2010;44:50–63.
11. Barsuk JH, Cohen ER, Wayne DB, Siddall VJ, McGaghie WC. Developing a simulation-based mastery learning curriculum: Lessons from 11 years of advanced cardiac life support. Simul Healthc. 2016;11:52–59.
12. Sawyer T, Eppich W, Brett-Fleegler M, Grant V, Cheng A. More than one way to debrief: A critical review of healthcare simulation debriefing methods. Simul Healthc. 2016;11:209–217.
13. Selzer DJ, Dunnington GL. Surgical skills simulation: A shift in the conversation. Ann Surg. 2013;257:594–595.
14. Morgan PJ, Cleave-Hogg D. A Canadian simulation experience: Faculty and student opinions of a performance evaluation study. Br J Anaesth. 2000;85:779–781.
15. Brazzi L, Lissoni A, Panigada M, et al. Simulation-based training of extracorporeal membrane oxygenation during H1N1 influenza pandemic: The Italian experience. Simul Healthc. 2012;7:32–34.
16. Gallagher AG, Satava RM. Surgical simulation: Seeing the bigger picture and asking the right questions. Ann Surg. 2015;262:e50–e51.
17. Gallagher AG. Metric-based simulation training to proficiency in medical education: What it is and how to do it. Ulster Med J. 2012;81:107–113.
18. Satava RM, Gallagher AG, Pellegrini CA. Surgical competence and surgical proficiency: Definitions, taxonomy, and metrics. J Am Coll Surg. 2003;196:933–937.
19. Henriksen K, Patterson MD. Simulation in health care: Setting realistic expectations. J Patient Saf. 2007;3:127–134.
20. Sachdeva AK, Blair PG, Lupi LK. Education and training to address specific needs during the career progression of surgeons. Surg Clin North Am. 2016;96:115–128.
21. Tropello SP, Ravitz AD, Romig M, Pronovost PJ, Sapirstein A. Enhancing the quality of care in the intensive care unit: A systems engineering approach. Crit Care Clin. 2013;29:113–124.
22. Pronovost PJ, Bo-Linn GW. Preventing patient harms through systems of care. JAMA. 2012;308:769–770.
23. Scerbo MW, Murray WB, Alinier G, et al. A path to better healthcare simulation systems: Leveraging the integrated systems design approach. Simul Healthc. 2011;6(suppl):S20–S23.
24. Steinemann S, Berg B, DiTullio A, et al. Assessing teamwork in the trauma bay: Introduction of a modified “NOTECHS” scale for trauma. Am J Surg. 2012;203:69–75.
25. Andersen DK. How can educators use simulation applications to teach and assess surgical judgment? Acad Med. 2012;87:934–941.
26. Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780.