There is growing evidence that health care simulation contributes to increased quality of care and survival through skills training and team training.1–3 The use of health care simulation, however, reaches beyond education and can probably impact patient safety beyond improving the skills, knowledge, and attitudes of health care professionals.4–7 One example is the use of in situ simulation in risk assessment and management to detect system flaws so that they can be mitigated before they combine to cause errors.8
Previous consensus meetings have proposed strategies for improving research in health care simulation by suggesting research areas and providing guidance for the conduct of such research.4,5,9 The 2008 Academic Emergency Medicine Consensus Conference also produced an extensive set of articles with recommendations on the use of simulation-based education and evaluation in medical education.9 Similarly, we think it is necessary to define more clearly the ways in which health care simulation can contribute most effectively to improving patient safety. Some other reviews have touched on this issue, although they have not addressed it directly. For example, a recent review describes how simulation can promote the 6 core competencies of medical practitioners as described by the Accreditation Council for Graduate Medical Education in the United States and thus contribute to improved medical care.10 The Patient Safety Curriculum Guide by the World Health Organization emphasizes simulation as an integral part of any patient safety curriculum.11
Health care simulation, however, has many facets, and no one has yet attempted to delineate its most effective use in improving patient safety. When a scholarly field in health care has little existing evidence available, a consensus statement by subject experts can help to define priorities in education, clinical practice, organization, planning, and research. Commonly used methods for establishing consensus are the Delphi method and the nominal group technique (NGT).12
We determined that the time is right for creating such a consensus statement about simulation and patient safety to provide guidance to patient safety and simulation communities and to suggest research activities that are most likely to enhance or demonstrate the effect of simulation on patient safety. We initiated a consensus process to accomplish this goal. This article reports the methods and results from the consensus process and discusses its findings.
The consensus process was based on a 4-stage modified NGT13,14 similar to that performed in other studies.15 An international expert panel, selected on the basis of reputation and experience in simulation and patient safety, was invited to take part in the process. The experts were identified through the professional network of the project group, through searches of relevant scientific databases (PubMed, CINAHL, ERIC), and through suggestions from other group members. We aimed for a group representing different countries and a mix of professions and disciplines. Some of these experts had participated in previous consensus meetings about simulation.4,5
In stage 1, the expert group was briefed on the process via e-mail, and each individual was asked to propose 5 topics in health care simulation that would contribute the most to improve patient safety. All topics were sent to the project coordinator (S.J.M.S.), who collected them in a worksheet (Microsoft Excel for Mac 2011, Microsoft Corporation; Redmond, WA) without ranking, categorizing, or editing them.
In stage 2, the expert group received the worksheet with all topics provided in stage 1. They were asked to consider all topics and propose a prioritized list of 10. By asking for 10 topics, we prevented the experts from simply renominating the same 5 topics they had proposed in stage 1. We also encouraged the experts to combine topics they considered to be similar, and they were asked to attach notes or comments to the topics they suggested.
In stage 3, the prioritized topics from round 2 were combined by the project leader (S.J.M.S.) to create an overall list, using a modified version of a system described by Delbecq and Van de Ven in 197113,14 and recently used in a similar consensus process.15 In this system, the suggested topics were first awarded points depending on their rank in the individual top 10 lists from each expert: a first-place ranking was awarded 10 points, a second place 9 points, and so forth. Two extra points were awarded to a topic each time it appeared in an expert's top 10 list, allowing for a maximum of 44 extra points if suggested by all the experts. This point-awarding system based the ranking not only on the individual ranking of each topic by the experts but also on the frequency with which it was suggested, thus allowing frequently suggested topics with low rank scores to be considered too. The overall top 5 topics and the next 5 runners-up identified in this stage were then presented to the expert group 2 weeks before the consensus meeting.
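The scoring scheme described above can be sketched in a few lines of code. This is an illustrative reconstruction, not the actual tool used by the project leader; the function name, variable names, and example topic lists are all hypothetical.

```python
from collections import defaultdict

def score_topics(top10_lists):
    """Score topics from experts' ranked top-10 lists.

    Rank 1 earns 10 points, rank 2 earns 9, ... rank 10 earns 1.
    Each nomination also earns 2 bonus points, so a topic named by
    every expert gains 2 * len(top10_lists) extra points in total.
    """
    scores = defaultdict(int)
    for ranked_list in top10_lists:
        for rank, topic in enumerate(ranked_list, start=1):
            scores[topic] += (11 - rank) + 2  # rank points + nomination bonus
    # Highest total score first
    return sorted(scores.items(), key=lambda kv: -kv[1])

# Hypothetical example: 3 experts, each submitting a short ranked list
lists = [
    ["team training", "technical skills", "system probing"],
    ["technical skills", "team training"],
    ["system probing", "team training"],
]
ranking = score_topics(lists)
# "team training" wins: ranked by all 3 experts, so it collects both
# rank points (10 + 9 + 9) and 3 nomination bonuses (3 * 2 = 6).
```

The example shows how the nomination bonus lets a topic suggested by many experts outrank a topic that one expert placed first.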
Stage 4 was performed as a 2-day consensus meeting in June 2012 at the Utstein Abbey outside Stavanger, Norway. Four of the experts, who were part of the project group, acted as group leaders and facilitated the discussions. We defined the following 4 goals for the meeting:
- Goal 1: Agree on the final list of 5 topics in health care simulation that contribute the most to improving patient safety.
- Goal 2: Define the patient safety problems related to these topics, that is, problems underlying each topic or problems that ideas within the topic might solve.
- Goal 3: Describe possible solutions to the problems identified in goal 2, when considering the topics from goal 1.
- Goal 4: Suggest implementation strategies for the solutions identified as part of goal 3.
The goals were addressed in small groups, and the results were presented with a brief plenary discussion and final summary. Goals 1 and 2 were discussed in 2 groups on day 1 with the summary and conclusion at the beginning of day 2. Goals 3 and 4 were discussed in 4 groups on day 2 and summarized and concluded at the end of the meeting on day 2.
Of the 24 invited experts, 22 accepted the invitation to take part in the consensus process (Table 1). Two experts were not able to physically attend the consensus meeting in stage 4 but contributed to the discussion via e-mail and videoconference.
A total of 108 topics were suggested in the first stage. This list was reduced to 68 topics in stage 2 (Appendix 1), and points were awarded to the topics on this list in stage 3 as previously described. A challenge in this process was that several topics were similar in theme (e.g., “use of simulation to test new equipment before introduction to clinical practice” and “using simulation for design, redesign, and testing of clinical practice protocols”) but worded differently. Such nuances made it difficult to combine the topics, and some were therefore awarded points as separate topics although they could thematically be considered 1 topic. It was then left to the expert group to discuss and decide in stage 4 whether some of these topics could be combined, which then changed the relative ranking of the topics. By leaving this to the experts, we hoped to avoid bias by the project leader in combining these topics. A further rationale was that, having suggested the topics themselves, the experts would better understand any similarities that allowed for combination.
The 10 topics that received the highest score in stage 3 were as follows:
- Technical skills training
- Credentialing/simulation-based high-stakes assessment of health professionals' readiness for practice at well-selected junctures
- Translational science/studies on cost-effectiveness and evidence of effect
- Interprofessional training and collaboration
- Nontechnical skills simulation
- Team training
- Simulation as a required assessment for health care professionals in all acute care settings, with pass/fail consequences, at all levels of a career
- Top-down simulation-based training of faculty to improve their debriefing skills
- Use of simulation to test new equipment before introduction to clinical practice
Goal 1 of Stage 4
The final list of 5 topics in health care simulation that contribute the most to improving patient safety, as agreed on by the expert group, contained technical skills, nontechnical skills, system probing, assessment, and effectiveness. The expert group found it difficult, however, to rank the 5 topics according to their importance, especially because 2 of the topics, assessment and effectiveness, can be considered part of the other 3. The final 5 topics are therefore presented as equally important, and Figure 1 illustrates how the expert group felt they interconnect.
Goal 2 of Stage 4
Based on the discussion in small groups, the experts proposed a list of up to 5 patient safety problems that each topic addresses or may contribute to solving. Table 2 lists the patient safety problems for each of the 5 topics, unedited as proposed by the groups to the rest of the expert group.
Goals 3 and 4 of Stage 4
The expert group decided to combine the last 2 tasks into 1. Three groups each worked with one of the topics technical skills, nontechnical skills, and system probing, respectively, and the fourth group worked with the topics effectiveness and assessment. The fourth group quickly established that it was difficult to suggest and describe solutions and implementation strategies for effectiveness and assessment. The group was able to suggest solutions for effectiveness but felt that the task could not be completed within the given time frame. In the case of assessment, the group decided that there were too many unanswered questions regarding the use of assessment in health care simulation to suggest solutions and implementation strategies. There is, for example, little evidence available on metrics and methodology and on how and when assessment should be performed. The group therefore recommended that both effectiveness and assessment be the theme of a separate Utstein-style meeting to discuss and agree on solutions for how to demonstrate the effectiveness of health care simulation on patient safety and how to implement these solutions.
The expert group discussed the results and challenges of tasks 3 and 4 in plenum and agreed on the results before the meeting was adjourned. Table 3 lists the solutions proposed by the groups to solve the patient safety problems previously identified for the topics technical skills, nontechnical skills, and system probing. Table 4 lists the implementation strategies for these solutions within the same topics. Both tables present the unedited content as presented by the groups to the rest of the expert group.
The expert group was able to identify and agree on 5 topics in health care simulation that contribute the most to improving patient safety. These topics are technical skills, nontechnical skills, assessment, effectiveness, and system probing. As stated earlier, the expert group agreed that the topics could not be ranked or sequenced. The expert group recommends that these 5 topics be the focus of the use of simulation in patient safety initiatives. In the following, we put these topics into context, describe their connections, and discuss how they can be used to guide future patient safety initiatives.
Technical and Nontechnical Skills
Health care providers must possess a combination of technical and nontechnical skills, as reflected in the 7 Canadian Medical Education Directions for Specialists (CanMEDS) roles: medical expert, communicator, collaborator, manager, health advocate, scholar, and professional.16 Technical skills are traditionally the core element of patient management and have always been part of health care training and the competence-as-performance discourse.17 In recent years, nontechnical skills training has received increased attention and is now an integral part of many training programs in health care. One example is the TeamSTEPPS program, which aims to improve the quality of patient care through team training.18 Both technical and nontechnical skills training are important; training one without the other is unlikely to be effective for patient safety outcomes. Good nontechnical skills can allow health care providers to concentrate more on the technical side of a task: by, for example, delegating subtasks to team members and reducing one's own workload, an additional safety layer is introduced into the technical performance. Conversely, good technical performance might free the mental resources needed to actually draw on the help of others. The labels of nontechnical skills are similar across systems and contexts and are as such generic; the extent to which they are brought to life, however, is very much context specific. Surgeons will, for example, do different things to maintain situational awareness than anesthesiologists; they need to understand different types of information. It therefore seems likely that certain combinations of nontechnical skills are more effective than others across contexts. This context dependency of nontechnical skills combinations in improving technical skills performance remains to be studied.
Skills training using simulation plays a central part in this discourse, and testing of skills is widespread in health care education, for example, using Objective Structured Clinical Examinations (OSCEs). Thus, there seems to be a well-defined mutual understanding of what good technical performance is. The nontechnical side only recently became part of the competence-as-performance discourse,17 and consequently some nontechnical skills terms and definitions are still ill defined and not widely accepted.19,20 This poses challenges for the design, implementation, and evaluation of patient safety improvement interventions.
Assessment and Effectiveness
Measuring the effect of simulation on patient safety in practice demands approaches that measure both technical and nontechnical skills in their mutual interdependencies. One way of doing this is to develop behavioral marker systems or checklists. Validity evidence for these tools must be collected, and content expert raters need to be trained to use them. Importantly, content expertise in this context encompasses technical (medical) as well as nontechnical (human factors) expertise; this is how the interdependency of technical and nontechnical skills is taken into account.
Assessment tools can be used for formative and summative assessment. Assessment of health care providers has moved from summative assessment only at the end of an education toward continuous formative assessment (feedback) that facilitates learning and improves health professionals' reflective skills, which they need to become lifelong learners. Assessment would be facilitated by multiple formative assessments during training and by aligning the tools used in educational (simulation) settings and clinical settings. The combination of all the formative assessments performed during training can have a summative function. Beyond using assessment strategies for individual learning, tools are needed to assess the effects of simulation training on patient safety. Existing educational and clinical registries or outcome databases can be used,21 but we need to decide which data points are relevant to extract. This decision probably depends on the nature of the intervention.
Since the early days of simulation-based training in health care, evaluation was conducted at the reaction level. Later studies aimed at demonstrating an effect at the other Kirkpatrick levels: learning,22–25 application,26,27 and outcome.1,3,28 There is increasing interest in measuring the effect of simulation-based training on patient outcome. This is difficult if the training addresses only individuals and neglects system-based influences and context.29 For example, if not all members of a team are trained in using a standardized terminology, the training of some members might actually hinder good communication flow and introduce barriers between the trained and nontrained subgroups, potentially decreasing patient safety.29
To understand why some initiatives work and some do not, it is important to investigate the concrete implementation more closely at a systemic level. The Helping Babies Breathe program in Tanzania is a good example: overall mortality was reduced by implementing simple simulation-based training.2 However, the improvement in mortality differed greatly between individual hospitals because the training was implemented differently. Hospitals with high-frequency, low-dose in situ training had better outcomes than hospitals where only initial training was conducted.30 A recent study on the retention of cardiopulmonary resuscitation skills supports this and indicates a retention optimum when training is repeated every 3 months.31
Although patient safety problems have been known for a long time32 and many initiatives, including health care simulation, have been launched, patient safety remains a problem.33 System probing can identify patient safety problems that can be remedied by training or by system changes; the latter might include missing equipment or an inefficient layout of the department.34 Such system probing can serve as a needs assessment and help define learning objectives and educational interventions. At the level of learning objectives, system probing can reveal that knowledge-oriented objectives may not lead to improvement when the real challenge is rooted in attitudes and culture. It seems reasonable to assume that bulk training of an entire organization in a single topic has a stronger impact on patient safety than superficial training in many topics in isolated spots in an organization. One example is the study by Neily et al3 showing an 18% decrease in annual mortality rates after introducing team training in an organization. System probing also allows for defining which parts of the system need to be investigated to evaluate the value of any intervention. For example, the training of a certain target group might lose its effect because other groups that interact with the target group are not trained, or the introduction of a safety measure might not improve safety because it is counterbalanced by risk behavior.
To assess how effective simulation really is, relevant assessment strategies must be established at the individual, team, and organizational levels. By taking system probing into account, it will also be easier to identify and control for the confounding variables that can make or break the success of simulation-based interventions. For example, where an intervention is provided to only parts of a department, what is measured is the degree of implementation and not the effect of the training itself, as was demonstrated in the Helping Babies Breathe program.30 The full effect of a simulation-based intervention is probably released only when the entire system is integrated into the intervention; otherwise, the intervention might remain below the threshold for impacting patient safety.
In line with standard risk management models, for example, the ISO 31000:2009 standard for risk management, simulation-based system probing should be an integral part of any program aiming to improve patient safety. In this context, simulation-based system probing will not only serve as a tool for needs assessment and hazard identification but also provide feedback on the effect of the simulation program by comparing preintervention and postintervention simulations. This notion is consistent with recommendations made by previous consensus meetings addressing system probing.35
Although the expert group decided not to rank the 5 topics by importance, we propose that they can be assigned a sequential priority. Each topic probably contributes to improved patient safety on its own, but together they have the potential for a synergistic impact beyond the sum of the individual topics. As mentioned earlier, system probing can be a good starting point, serving as a needs assessment and identifying areas that need improvement. Technical and nontechnical simulation programs can then be developed to meet the needs and solve the problems identified. Assessment should then determine to what extent the simulation program has contributed to solving those problems. Future studies should explore whether this model of using simulation to improve patient safety is effective. However, challenges encountered during training can also be interpreted on a systemic basis, indicating structural challenges in an educational system rather than individual ones. Thus, assessment of training results, further investigation of the actual work system, and refinement of training and assessment could be another sequence in which the topics are linked.
The limitations of using NGT to answer scientific questions, and more specifically the modified NGT used in this study, have been discussed by others.15 Our greatest challenge with the technique occurred when we attempted to condense the suggestions from stages 1 and 2 and to combine suggested topics with similar content. The suggestions were often formulated as complete or partially complete sentences, in some cases including a subpremise that made them difficult to combine with other similar suggestions.
For future attempts at using the same technique, we suggest making the instructions to the expert group clearer: suggested topics must be clear and concise, without subpremises. A brief explanation with examples might help the experts, but there is a risk that examples could cause further confusion or even suggest additional subpremises. The potential danger for the final results is that a topic might receive fewer votes/points because it ends up being counted as 2 different topics (i.e., “diluting” the vote). We tried to address this potential process error by allowing the expert group to review and revise the topic list in the initial part of stage 4.
The members of the expert group all came from North American or European countries. For a more global perspective, it would have been desirable to have representatives from other continents too; this might have surfaced cultural differences that could influence the top 5 list. The experts did represent different professional groups within health care. Some of the experts also came from non–health care backgrounds, and they helped broaden the perspective from health care itself to the more general issues of simulation methodology and patient safety theory.
An international expert group identified 5 topics in health care simulation that contribute the most to improving patient safety: technical and nontechnical skills training, effectiveness, assessment, and system probing. For each topic, the expert group identified the problems that the topic addresses and discussed and suggested strategies for implementing solutions to these problems. Based on the discussion in the expert group, we have suggested a framework for implementing the 5 topics. We acknowledge that consensus regarding assessment in simulation is still lacking and warrants further experience, research, and consensus work. Proof of effectiveness is emerging, but research is still needed to support the systematic effect of health care simulation on patient safety.
1. Draycott TJ, Crofts JF, Ash JP, et al. Improving neonatal outcome through practical shoulder dystocia training. Obstet Gynecol
2. Msemo G, Massawe A, Mmbando D, et al. Newborn mortality and fresh stillbirth rates in Tanzania after helping babies breathe training. Pediatrics
3. Neily J, Mills PD, Young-Xu Y, et al. Association between implementation of a medical team training program and surgical mortality. JAMA
4. Issenberg SB, Ringsted C, Ostergaard D, et al. Setting a research agenda for simulation-based healthcare education: a synthesis of the outcome from an Utstein style meeting. Simul Healthc
5. Dieckmann P, Phero JC, Issenberg SB, et al. The first Research Consensus Summit of the Society for Simulation in Healthcare: conduction and a synthesis of the results. Simul Healthc. 2011;6(suppl):S1–S9.
6. Cook DA, Brydges R, Hamstra SJ, et al. Comparative effectiveness of technology-enhanced simulation versus other instructional methods: a systematic review and meta-analysis. Simul Healthc
7. Hamstra SJ, Dubrowski A, Backstein D. Teaching technical skills to surgical residents: a survey of empirical research. Clin Orthop Relat Res
8. Riley W, Davis S, Miller KM, et al. Detecting breaches in defensive barriers using in situ simulation for obstetric emergencies. Qual Saf Health Care. 2010;19(suppl 3):i53–i56.
9. Gordon JA, Vozenilek JA. 2008 Academic Emergency Medicine Consensus Conference. Acad Emerg Med
10. Issenberg SB, Chung HS, Devine LA. Patient safety training simulations based on competency criteria of the Accreditation Council for Graduate Medical Education. Mt Sinai J Med
11. World Health Organization. Patient Safety Curriculum Guide: Multi-professional Edition. Geneva, Switzerland: WHO Press; 2011.
12. Jones J, Hunter D. Qualitative research: consensus methods for medical and health services research. BMJ
13. Delbecq AL, Van de Ven AH. A group process model for problem identification and program planning. J Appl Behav Sci
14. Van de Ven AH, Delbecq AL. The nominal group as a research instrument for exploratory health studies. Am J Public Health
15. Fevang E, Lockey D, Thompson J, et al. The top five research priorities in physician-provided pre-hospital critical care: a consensus report from a European research collaboration. Scand J Trauma Resusc Emerg Med
16. Frank JR. The CanMEDS 2005 Physician Competency Framework. Better Standards. Better Physicians. Better Care. Ottawa, Canada: The Royal College of Physicians and Surgeons of Canada; 2005.
17. Hodges B. Medical education and the maintenance of incompetence. Med Teach
18. Guimond ME, Sole ML, Salas E. TeamSTEPPS. Am J Nurs
19. Nestel D, Walker K, Simon R, et al. Nontechnical skills: an inaccurate and unhelpful descriptor? Simul Healthc
20. Glavin RJ. Skills, training, and education. Simul Healthc
21. Cook DA, Andriole DA, Durning SJ, et al. Longitudinal research databases in medical education: facilitating the study of educational outcomes over time and across institutions. Acad Med
22. Aggarwal R, Ward J, Balasundaram I, et al. Proving the effectiveness of virtual reality simulation for training in laparoscopic surgery. Ann Surg
23. Crofts JF, Ellis D, Draycott TJ, et al. Change in knowledge of midwives and obstetricians following obstetric emergency training: a randomised controlled trial of local hospital, simulation centre and teamwork training. BJOG
24. Ellis D, Crofts JF, Hunt LP, et al. Hospital, simulation center, and teamwork training for eclampsia management: a randomized controlled trial. Obstet Gynecol
25. Larsen CR, Soerensen JL, Grantcharov TP, et al. Effect of virtual reality training on laparoscopic surgery: randomised controlled trial. BMJ
26. Barsuk JH, Cohen ER, Feinglass J, et al. Use of simulation-based education to reduce catheter-related bloodstream infections. Arch Intern Med
27. Wayne DB, Didwania A, Feinglass J, et al. Simulation-based education improves quality of care during cardiac arrest team responses at an academic teaching hospital: a case-control study. Chest
28. Schmidt E, Goldhaber-Fiebert SN, Ho LA, et al. Simulation exercises as a patient safety strategy: a systematic review. Ann Intern Med
29. Rasmussen MB, Dieckmann P, Barry Issenberg S, et al. Long-term intended and unintended experiences after Advanced Life Support training. Resuscitation
30. Mduma ER, Ersdal HL, Svensen E, et al. Low-Dose High Frequency (LDHF) Helping Babies Breathe (HBB) Training Reduces Early Neonatal Mortality (ENM) Within 24 Hours in a Rural African Hospital. Washington, DC: Pediatric Academic Societies; 2013.
31. Sullivan NJ, Duval-Arnould J, Twilley M, et al. Simulation exercise to improve retention of cardiopulmonary resuscitation priorities for in-hospital cardiac arrests: a randomized controlled trial. Resuscitation
32. Kohn LT, Corrigan J, Donaldson MS; Institute of Medicine (US) Committee on Quality of Health Care in America. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 2000.
33. Landrigan CP, Parry GJ, Bones CB, et al. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med
34. Kobayashi L, Shapiro MJ, Sucov A, et al. Portable advanced medical simulation for new emergency department testing and orientation. Acad Emerg Med
35. Kobayashi L, Overly FL, Fairbanks RJ, et al. Advanced medical simulation applications for emergency medicine microsystems evaluation and training. Acad Emerg Med