Preventable patient harm costs an estimated 15,000 lives and $350 million per month for Medicare and Medicaid patients in the United States.1 Simulation can reduce the risks of preventable patient harm by enabling health care professionals (1) to build technical2 and nontechnical skills,3 (2) to identify and mitigate local work process defects and latent errors through in situ simulations and systems debriefing,4 and (3) to evaluate medical devices, information systems, and other performance shaping factors.5,6 However, to date, there are few applications of this approach to developing leadership for patient safety and quality.7,8 Formal and informal organizational leaders play critical roles in managing safety improvement efforts,9–12 yet large gaps exist in the available methods for developing safety- and quality-related leadership competencies.13 Senior leaders in other industries commonly engage in simulation,14 but only limited examples exist in health care, including both tabletop management15 and human patient simulation sessions.16,17
We describe results of an organizational simulation in which 55 senior health care leaders practiced skills in 3 areas: (1) analyzing the impact of direct and delegated executive involvement in patient safety initiatives, (2) developing accountability structures and processes related to patient safety across levels of an organization, and (3) identifying and prioritizing patient safety improvement goals. We outline key challenges to applying organizational simulations for patient safety leadership, lessons learned, and future directions for this potentially high-impact application of simulation.
We developed this simulation as part of a larger project involving hospital leaders participating in the Centers for Medicare and Medicaid Services–funded Partnership for Patients Program. This simulation and the broader leadership network complemented traditional microsystem or frontline clinician-driven safety improvement efforts by actively engaging senior leaders and developing strategic safety leadership skills. The simulation content drew from theoretical models of shared leadership18 and organizational accountability19 as well as practical safety and quality leadership guidance.20 Design and delivery of the activity used instructional design methods for practice-based learning.21 The simulation lasted for 4 hours and required teams of leaders to function collectively as a new CEO hired to rescue a hospital system failing in patient safety performance. This project was judged to be exempt by the Johns Hopkins Institutional Review Board (IRB) for human subject research.
Senior leaders from 38 hospitals participating in the Centers for Medicare and Medicaid Services Partnership for Patients Program were invited to participate based on their registration for a national leadership conference. A total of 55 participants registered and completed the simulation. Participants included 6 CEOs or presidents; 14 chief nursing officers; 8 chief medical officers or vice presidents of medical affairs; 19 chief operating, compliance, quality, or safety officers (or vice presidents of quality, compliance, or risk); and 8 directors of safety or quality. This role diversity complicates evaluation but represents realistic conditions as leadership in organizations typically happens in teams. We organized participants into 8 teams, varying from 6 to 8 members each. One team consisted of only CEOs, and the others blended executive suite members and other leaders. All teams were composed of members from different organizations to foster psychological safety and knowledge sharing across organizations.
The simulation focused on organizational issues of governance priority, culture of continuous improvement, and internal transparency and feedback. These were chosen from the Institute of Medicine’s CEO’s Checklist for High Value Care20 because strong evidence supports their relationship with hospital-level safety performance. We operationalized these 3 broad concepts within the simulation as goal setting and communication (governance priority), feedback on progress toward goals (internal transparency and feedback), and consequences for meeting or failing to meet goals (culture of continuous improvement), yielding 3 learning objectives:
- Analyze the impact of direct versus delegated responsibility for patient safety on the effectiveness of improvement efforts;
- Develop accountability structures and processes related to patient safety across levels of an organization; and
- Identify and prioritize strategic goals specific to patient safety.
Scenario development ensured that learners had the opportunity to practice or experience situations linked to the learning objectives.22,23 For example, by presenting participants with an organization that had no existing process for communicating goals, we created an opportunity to recognize and correct this issue, addressing learning objective 2. We describe connections between learning objectives and scenario design in a table (see Table, Supplementary Digital Content 1, http://links.lww.com/SIH/A251, for the simulation domains/scenarios).
All teams had a modifiable simulation board representing the current dysfunctional organizational structure for the simulated hospital. The simulation board represented levels of the organization (the board, CEO, CEO’s direct reports, directors, unit-level leaders, frontline staff, and patients and families) and the relationships between them in terms of organizational goal setting, communication of goals, feedback on progress toward goals, and consequences when goals were or were not met within the organization. A figure (see Figure, Supplementary Digital Content 2, http://links.lww.com/SIH/A252, for the simulation board) illustrates the initial states of the simulation board, which are further described in a table (see Table, Supplementary Digital Content 3, http://links.lww.com/SIH/A253, for the simulation board summary). Participants could modify the simulation board as a means of changing the simulated hospital’s organizational structure and interaction processes as they saw fit. A figure (see Figure, Supplementary Digital Content 4, http://links.lww.com/SIH/A254, photograph) shows a simulation board setup before the session.
Simulation Design Elements
Throughout the simulation, participants had access to 2 types of information: (1) basic hospital information available to all participants in handouts and introductory comments and (2) limited information resources gained through choices about what further information they would seek out. Specifically, the limited information resources involved visiting 1 of 4 different simulated encounter sessions where participants interviewed personnel from within and outside the simulated hospital. Time-limited decision making and competing priorities in the simulation represented real-world pressures faced by executive leaders.
Simulation Session Overview
Table 1 details the timeline and overall scenario flow. Participants were assigned to their teams with 1 facilitator at each table, briefed on the ground rules and scenario components, and given a scenario packet. The simulation began with a video message from the chairman of the board of directors congratulating the participants on their appointment as the new CEO and reminding them of the recent bad press, poor performance on safety indices, low staff and patient satisfaction, and the goal to become a regional leader in patient safety within 1 year. Participants in each group then worked collectively to decide on their team’s initial goal(s) for rescuing their simulated hospital.
Teams then had access to limited information sources in 1 of 4 encounter sessions in 2 separate rounds, for a total of 2 of the 4 possible encounters. The teams could interact with (1) a struggling internal frontline process improvement team working on safety in the simulated hospital; (2) a successful “expert” external process improvement team; (3) a simulated executive “walk round” in a simulated unit; and (4) an external management consultant. Each team collectively decided which information they wanted to access and how to use the information. These resources required participants to balance an internal focus (ie, executive walk rounds and/or meeting with an internal improvement team) and an external focus (ie, management consultant and/or external expert improvement team)—a known challenge in high-reliability organizations. By selecting internal sources, teams could learn more about barriers and breakdowns in the simulated hospital. Choosing external sources provided teams with general guidance on best practices and emerging trends.
During each round of the simulation, teams visited 1 of the 4 encounter sessions for 5 minutes. Team members then returned to their tables to discuss and plan. Team members completed each round with group planning and final decisions on their organizational goals and implementation priorities. They recorded their thoughts using goal setting forms and denoted any structural changes to the simulated hospital by manipulating the roles and connections between roles on the simulation board. Teams modified goal setting, goal communication, performance feedback, and consequences of performance between the different levels of the organization.
Debriefing is a critical component of learning.24–26 During the 2-hour debrief, each team described the goals it set and its overall strategy for rescuing the failing hospital. Subsequently, we presented a visual depiction of the changes made to the chain of accountability in each redesigned organization, illustrating how those changes could alter goal setting, communication, feedback, or consequences across levels of the organization. A facilitated discussion connected choices in the simulation with the learning objectives.
Data Collection and Analysis
Formative evaluation data include participant reaction surveys and debriefs. Although a useful source of information about engagement, participant reactions beyond perceived utility have limited relationships with learning outcomes.27,28 To that end, the postsession evaluation form captured perceptions of simulation design, execution, and utility as well as reactions to the program in which the simulation was embedded. We collected the teams’ changes to organizational structures within the scenario using a template and presented the variation in strategies across teams visually during the debrief.
Table 2 shows the variation in strategies developed across the teams in terms of what tactics they implemented at which levels of the organization. None of the teams addressed accountability across all levels. Only 2 teams involved patients in creating a culture of continuous improvement, and 1 of the teams did not address accountability structures at all. Overall, solutions varied dramatically, with no 2 teams having the same strategy.
We distributed the postsession evaluation via e-mail several days after the session concluded, and 28 participants completed it (51% response rate, Table 3). This low response rate likely stemmed from the time lag in administration and the time constraints on senior leaders. The majority of respondents felt that the simulation met core learning objectives and that the session enabled them to discuss new strategies to enhance patient safety. In general, the participants were satisfied with the simulation materials. Although the majority (64%) of respondents reported that the goals of the simulation were clearly defined, only 39% responded that the simulation was easy to understand. We explored this discrepancy between perceived clarity of the simulation goals and of the scenario itself through qualitative analysis of open-ended questions. Some participants felt the simulation was rushed and overly compressed, and several suggested that the simulation board’s purpose and function could be communicated more clearly during the introduction.
This article provides an example of simulation-based learning methods applied to senior leader development for patient safety. Findings are preliminary but promising. We discuss results, lessons learned, and future directions for safety leadership simulations.
Simulation Learning Objectives
For each learning objective, we discuss opportunities to practice targeted skills in the simulation, how the teams responded, and how these were addressed in debriefing.
Direct Versus Delegated Responsibility for Patient Safety
The initial conditions presented in the simulation either explicitly delegated all safety and quality work to midlevel management and frontline staff or omitted focus on these issues altogether. Teams tended to devise strategies at the top management levels (ie, CEO, CEO direct reports, and department heads), with efforts tapering off at the unit-leader, frontline, and patient levels, as well as with the board. This is not surprising given team composition (ie, participants focused on their own roles) and the literature indicating breakdowns in accountability across levels of organizations. This pattern prompted debrief discussions around how delegation or omission of safety responsibilities would impact performance, whether this pattern reflected participants’ own organizations, and what could be done to improve. Consistent with the literature,29 there was consensus that direct senior leader involvement is necessary to maintain focus on safety as a priority, to ensure decisions are made about which interventions to pursue, and to ensure that needed resources are available. Most participants acknowledged that their organization could improve direct leader involvement in safety. Common learner preferences for achieving this included executives chairing or cochairing safety committees and rounding.
Develop Accountability Structures
The scenario presented a hospital with an almost completely dysfunctional accountability system. Teams tended to focus their solutions on governance priority (ie, goal setting and communication) and internal transparency and feedback (ie, performance feedback) strategies, with less emphasis on culture of continuous improvement (ie, establishing consequences for reaching or not reaching goals). This focus on goals and feedback is consistent with the recent and widespread uptake of data-driven management tools (eg, dashboards) in health care. Debriefs uncovered 2 main reasons leaders avoided establishing contingencies for goals: lack of trust in current data quality and uncertainty as to whether safety improvement goals are achievable. Specifically, leaders were not comfortable holding people accountable to performance metrics if they did not believe the data quality was high. Similarly, they hesitated to establish rewards or penalties for a target level of performance if they were uncertain that goal was realistic. Methods discussed to address these barriers included developing safety and quality data assurance processes and obtaining appropriate benchmarks to ensure targeted performance levels are achievable.
Identify and Prioritize Strategic Patient Safety Goals
Teams were provided with a very ambiguous goal: become a top performer in safety and quality. To be effective, they needed to define specifically what this meant. Three of the teams made no modification to this ambiguous goal. Four teams revised the goal but in equally ambiguous ways (eg, make safety a top priority, build a culture of safety/reliability/teamwork). Only 1 team defined specific and measurable goals, including rates for falls, readmissions, and health care–acquired infections. That team stated that they chose these goals because of alignment with national efforts, opportunity to improve based on simulated hospital data, and ties to public reporting and revenue risk. Debrief discussion focused on the value of goal specificity and approaches to balancing a variety of risks (eg, patient harm, regulatory, financial, reputational) in strategically defining goals.
Broader Lessons Learned and Future Directions
Participant reactions are encouraging but clearly indicate opportunities to improve. We generated 6 lessons learned through project team debriefs and informal follow-up with participants and other subject matter experts.
There Is No Competency Model for Patient Safety Leadership
The absence of a well-defined competency model for patient safety leadership complicates the development of effective simulations. We created learning objectives from the Institute of Medicine’s CEO’s Checklist.20 However, this tool outlines organizational characteristics and interventions for improving patient safety, not the individual competencies of effective patient safety leadership. Health care management must follow medical and nursing education in defining a competency model for its profession.
Leadership Happens in Teams, and It Should Be Trained in Teams
Our simulation required teams of leaders to play the role of 1 individual (ie, a new CEO). This choice reduced the complexity of the simulation but became problematic for teams composed of members from lower levels of leadership. Emerging research suggests that many organizational failures related to poor safety performance result from breakdowns in leadership teams and multiteam systems distributed across an organization.29 Consequently, one critical focus for leadership simulations involves targeting entire teams of leaders from across the organization.
Delivery Strategies Must Evolve to Meet Practical Constraints
We designed this simulation as a streamlined introduction to complex issues of accountability and patient safety, yet it took 4 hours to complete. This duration may be unrealistic for many health care leaders. Participants requested replicating the activity with an intact team from their organization on a more compressed time scale (eg, from 60 to 30 minutes), even though some participants felt the activity was rushed at 4 hours. Finding a balance between exploring complex issues in a meaningful way and accommodating executive time constraints is a challenge. Potential solutions may include online and distributed simulations to minimize travel and individual, self-paced simulations that adapt to learners’ availability. These individual-level simulations could lead up to the team simulations described earlier.
Organizational Simulations Have Unique Fidelity Requirements
This study used a relatively low-technology simulation (ie, no computer-generated components) with deviations from real-time scales (ie, months were compressed into hours) and other fidelity compromises (ie, an abstract simulation board to represent accountability structures). Clearly, physical fidelity (ie, correspondence of the sensory input between real and simulated tasks) is a challenge for organizational simulations, and a better understanding of functional fidelity (ie, the degree to which learners engage in the same performance processes in real and simulated tasks)30 requirements is necessary.
Safety Leadership Simulations Should Balance Training and Knowledge Sharing or Discovery Objectives
Simulation can serve many aims. As outlined earlier, clearly defined patient safety leadership competencies will enable more effective senior leadership training simulations. However, just as the definitiveness of the underlying evidence base for various aspects of medical care impacts learning and assessment practices for clinicians,31 the current quality of evidence for patient safety management strategies is variable. Learning strategies should accommodate this variability. Simulations can focus on training for empirically supported leadership and management practices and on knowledge discovery and sharing for areas of innovation and emerging best practices. This can include simulations that connect individuals across organizations to share locally innovative practices (as in the present study) as well as simulations that serve as a means to analyze problems and plan collaboratively. For example, this study presented a fictitious hospital, but the same approach could use real hospital data to explore system weaknesses and interventions in a safe environment.
New Approaches to Evaluating Patient Safety Leadership Are Needed
Transfer of training to on-the-job behavior is critically important but difficult to measure in many settings. For patient safety leadership simulations, program evaluators require novel and practical approaches. For example, situational judgment tests present a situation to respondents (eg, safety-related failures at a hospital) and ask them to respond (what would or should you do?). Situational judgment tests predict on-the-job performance,32 and leadership situational judgment tests do so particularly well.33 Situational judgment tests have been applied to medical school admissions testing34 but not to health care safety leadership. These and other novel job performance proxies can advance safety leadership simulation evaluations.
Simulation is a core method for improving patient safety. As leadership and organizational factors determine the success of safety improvement efforts, applying simulation to help senior leaders understand the dynamic and complex relationships between layers of an organization may provide a multiplying effect to the patient safety and simulation efforts targeting frontline staff and unit-level learning.
1. Adverse Events in Hospitals: National Incidence Among Medicare Beneficiaries. Washington, DC: Department of Health and Human Services, Office of Inspector General; 2010. Report No. OEI-06-09-00090.
2. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA 2011;306(9):978–988.
3. Eppich W, Howard V, Vozenilek J, Curran I. Simulation-based team training in healthcare. Simul Healthc 2011;6(Suppl):S14–S19.
4. Rosen MA, Hunt EA, Pronovost PJ, Federowicz MA, Weaver SJ. In situ simulation in continuing education for the health care professions: a systematic review. J Contin Educ Health Prof 2012;32(4):243–254.
5. Scerbo MW, Murray WB, Alinier G, et al. A path to better healthcare simulation systems: leveraging the integrated systems design approach. Simul Healthc 2011;6(Suppl):S20–S23.
6. LeBlanc VR, Manser T, Weinger MB, Musson D, Kutzin J, Howard SK. The study of factors affecting human and systems performance in healthcare using simulation. Simul Healthc 2011;6(Suppl):S24–S29.
7. Aggarwal R, Mytton OT, Derbrew M, et al. Training and simulation for patient safety. Qual Saf Health Care 2010;19(Suppl 2):i34–i43.
8. Salas E, Paige JT, Rosen MA. Creating new realities in healthcare: the status of simulation-based training as a patient safety improvement strategy. BMJ Qual Saf 2013;22(6):449–452.
9. Taylor SL, Dy S, Foy R, et al. What context features might be important determinants of the effectiveness of patient safety practice interventions? BMJ Qual Saf 2011;20(7):611–617.
10. Salas E, Almeida SA, Salisbury M, et al. What are the critical success factors for team training in health care? Jt Comm J Qual Patient Saf 2009;35(8):398–405.
11. Keroack MA, Youngberg BJ, Cerese JL, Krsek C, Prellwitz LW, Trevelyan EW. Organizational factors associated with high performance in quality and safety in academic medical centers. Acad Med 2007;82(12):1178–1186.
12. Paull DE, Mazzia LM, Izu BS, Neily J, Mills PD, Bagian JP. Predictors of successful implementation of preoperative briefings and postoperative debriefings after medical team training. Am J Surg 2009;198(5):675–678.
13. Wachter RM. Patient safety at ten: unmistakable progress, troubling gaps. Health Aff (Millwood) 2010;29(1):165–173.
14. Thornton GC, Cleveland JN. Developing managerial talent through simulation. Am Psychol 1990;45(2):190. Bury E, Horn J, Meredith D. How to use war games as a strategic tool in healthcare. Health International 2011;11:28–37.
15. Wood CJ, Foster HD, Hardy NE. Crisis simulation and health care systems. Simulation Gaming 1997;28(2):198–216.
16. Cooper JB, Singer SJ, Hayes J, et al. Design and evaluation of simulation scenarios for a program introducing patient safety, teamwork, safety leadership, and simulation to healthcare leaders and managers. Simul Healthc 2011;6(4):231–238.
17. Denham CR, Guilloteau FR. The cost of harm and savings through safety: using simulated patients for leadership decision support. J Patient Saf 2012;8(3):89–96.
18. Conger JA, Pearce CL. A landscape of opportunities: future research in shared leadership. In: Pearce CL, Conger JA, eds. Shared Leadership: Reframing the Hows and Whys of Leadership. Thousand Oaks, CA: Sage; 2003:285–303.
19. Emanuel EJ, Emanuel LL. What is accountability in health care? Ann Intern Med 1996;124(2):229–239.
20. Cosgrove D, Fisher M, Gabow P, et al. A CEO Checklist for High-Value Health Care. Discussion Paper. Washington, DC: Institute of Medicine; 2012. Available at: http://www.iom.edu/CEOChecklist
21. Rosen MA, Salas E, Tannenbaum SI, Pronovost P, King HB. Simulation-based training for teams in health care: designing scenarios, measuring performance, and providing feedback. 2011:571–592.
22. Rosen MA, Salas E, Wu TS, et al. Promoting teamwork: an event-based approach to simulation-based teamwork training for emergency medicine residents. Acad Emerg Med 2008;15(11):1190–1198.
23. Fowlkes JE, Dwyer DJ, Oser RL, Salas E. Event-based approach to training (EBAT). Int J Aviat Psychol 1998;8:209–221.
24. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc 2007;2(2):115–125.
25. Savoldelli GL, Naik VN, Park J, et al. The value of debriefing in simulation-based education: oral versus video-assisted feedback. Simul Healthc 2006;1(2).
26. Kriz WC. A systemic-constructivist approach to the facilitation and debriefing of simulations and games. Simulation Gaming 2010;41(5):663–680.
27. Aguinis H, Kraiger K. Benefits of training and development for individuals and teams, organizations, and society. Annu Rev Psychol 2009;60:451–474.
28. Alliger GM, Tannenbaum SI, Bennett W, Traver H, Shotland A. A meta-analysis of the relations among training criteria. Personnel Psychology 1997;50:341–358.
29. Weaver SJ, Che X, Pronovost PJ, Goeschel CA, Kosel KC, Rosen MA. Improving patient safety and care quality: a multi-team system perspective. In: Salas E, Rico R, Shuffler M, eds. Pushing the Boundaries: Multiteam Systems in Research & Practice. In press.
30. Curtis MT, DiazGranados D, Feldman M. Judicious use of simulation technology in continuing medical education. J Contin Educ Health Prof 2012;32(4):255–260.
31. Rosen MA, Pronovost PJ. Advancing the use of checklists for evaluating performance in health care. Acad Med 2014;89(7):963–965.
32. Chan D, Schmitt N. Situational judgment and job performance. Human Performance 2002;15(3):233–254.
33. Christian MS, Edwards BD, Bradley JC. Situational judgment tests: constructs assessed and a meta-analysis of their criterion-related validities. Personnel Psychology 2010;63(1):83–117.
34. Lievens F, Buyse T, Sackett PR. The operational validity of a video-based situational judgment test for medical college admissions: illustrating the importance of matching predictor and criterion construct domains. J Appl Psychol 2005;90(3):442–452.