
Special Article

Applying Principles From Aviation Safety Investigations to Root Cause Analysis of a Critical Incident During a Simulated Emergency

Imach, Sebastian MD; Eppich, Walter MD, PhD; Zech, Alexandra PhD; Kohlmann, Thorsten MD; Prückner, Stephan MD; Trentzsch, Heiko MD

Author Information
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: June 2020 - Volume 15 - Issue 3 - p 193-198
doi: 10.1097/SIH.0000000000000457


Based on the best available estimates, medical errors are a frequent cause of death in modern healthcare, independent of other factors.1 As in other high-risk industries, human factors often contribute to adverse events. Although human error is inevitable, members of healthcare teams should work to enhance the safety of the entire system. However, healthcare teams often lack a comprehensive understanding of how error chains form. An explicit analysis of error causation uncovers error chains, and knowledge of these linkages enables teams to provide safe patient care.

Since the late 1990s, structured root cause analysis (RCA) has been a commonly used tool to identify causes of errors and minimize future errors in healthcare.2 Such investigations are resource intensive and retrospective in their structure. Using RCA methodology, an investigator team applies a structured approach to answer the key questions underlying each RCA: what exactly happened, how did it happen, why did it happen, and what can be done to avoid another incident?3

The use of structured approaches to the analysis of critical incidents represents an excellent example of the lessons healthcare can learn from aviation. National bodies in civil aviation (eg, Federal Air Force Inspection Office, Braunschweig, Germany) work according to international civil aviation agreements4 and use flight data and cockpit voice recorders to analyze incidents.5 Investigators then turn conclusions from such analyses into sound actions to avoid repeat mistakes.

Multiple factors, such as medicolegal concerns, make such data acquisition rare in daily medical practice. However, healthcare simulation allows the collection of valid and unbiased data through audiovisual (A/V) recordings and performance metrics captured by full-scale simulators. Possible causes of failure can be identified prospectively, without directly causing patient harm. Once a cause has been identified, modifying the simulation setting offers a valuable way to validate the observations made.

We report a case study from our simulation center in which a critical incident occurred during a simulated cardiac arrest managed by a professional emergency medical services (EMS) team. An unexpected, yet technically correct, voice prompt from an automated external defibrillator (AED) led to significant delays, including delayed defibrillation of ventricular fibrillation. In our estimation, such a critical event meets the wider definition of a “Sentinel Event” of the Joint Commission on Accreditation of Healthcare Organizations, which defines a sentinel event as “an unexpected occurrence or variation (during medical care) involving death or serious physical or psychological injury, or the risk thereof.”6 In our simulated case, full A/V recordings of the incident were available and allowed us to conduct an in-depth root cause analysis of the possible related causes of error.



The critical incident occurred during a full-scale medical simulation conducted as part of a randomized controlled study focusing on checklists in prehospital settings (German Clinical Trials Register Study ID 00005156). The ethics committee of the Faculty of Medicine of the Ludwig-Maximilians-Universität in Munich approved the study (ID 475-12). All participants provided written informed consent before participation. The simulation scenario took place at the Human Simulation Center (HSC) at University Hospital Munich's Institut für Notfallmedizin und Medizinmanagement. The study participants were professional EMS providers who had completed training according to federal regulations and had an active assignment with an EMS.

The team consisted of 2 paramedics, in accordance with legal EMS requirements in the state of Bavaria (Germany). Before the start of the scenario, an HSC instructor gave study participants a standardized introduction to the patient simulator (SimMan 3G; Laerdal Medical AS, Stavanger, Norway), the simulation environment, and equipment. The scenario took place inside a mock-up of a standard ambulance vehicle located at the HSC. Of note, the guidelines in effect at the time of the study were those issued by the European Resuscitation Council in 2010. The simulation scenario represented an adult patient after a witnessed collapse with cardiac arrest due to ventricular fibrillation (VF).

The scenario began with the patient unresponsive and in VF. The patient had just received shock #1, and the participants of the scenario immediately resumed cardiopulmonary resuscitation (CPR) as per guidelines. The first shock was administered using an AED in semiautomatic mode (Lifepak 15 defibrillator; Physio-Control, Redmond, WA). The device was set to cprMAX mode, a technology intended to minimize hands-off times while charging. The team indicated that they used this configuration in their daily routine.

Data Collection

We captured the following data: (a) continuous A/V recordings of the simulation scenario from different viewing angles and (b) real-time vital sign data from the patient's monitor using picture-in-picture technology. I.S. and T.H. ran the scenario from the control room. T.H. conducted the subsequent debriefing, which explored relevant parts of the case and gave participants an opportunity to share their thinking at various points during the case. All comments were recorded in writing.

The participants also completed a standardized questionnaire with demographic data (before scenario) and rated the significance of the simulation scenario for their daily practice (right after scenario).

Data Analysis

Data were analyzed by a team of 3 physicians with expertise in resuscitation (I.S., K.T., T.H.) and 1 occupational psychologist (Z.A.) with particular expertise in RCA methodology. All team members had experience performing prior RCAs at the Institut für Notfallmedizin und Medizinmanagement. The researchers deliberately performed the formal analysis of the A/V recordings of the case weeks after the training to gain some detachment from the initial impressions of the hot debriefing.

To perform data analysis, the team followed several steps based on RCA methodology:

  • First, each research team member independently reviewed the video recordings to determine what happened and how it happened without receiving additional instructions. Each researcher noted each event and its timing to the full second by referring to the video timeline.
  • Second, the research team met to compile a graphical transcript of the incident based on their observations. The team achieved consensus about conflicting observations through joint video review of the events in question. The final transcript depicts this consensus.
  • Third, the research team evaluated all measures and dialogs noted in the transcript to identify causes and possible implications for prevention or mitigation in the future,7 using Sakichi Toyoda's 5-Why technique for exploring cause-and-effect relationships.8 Posing 5 consecutive “why” questions promotes deep reflection on events. Discussion of the first “why” question occurred in the debriefing room immediately after the scenario. Following the rules of RCA, the findings were displayed in a cause-and-effect diagram (CED; fishbone or Ishikawa diagram; Fig. 1). CEDs are time-effective and reliable for RCAs.9,10
A CED (fishbone or Ishikawa diagram); points critical to success are marked in red.
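To make the method in step 3 concrete, its two building blocks (the CED categories and the 5-Why chain) can be sketched as plain data structures. This is an illustrative sketch only, not part of the published analysis: the category labels come from the case, while the function name and the example answers are paraphrased from the discussion.

```python
# Illustrative sketch of the RCA building blocks used in step 3.
# The CED maps each Ishikawa category to contributing causes observed
# in this case; the 5-Why chain links one effect to successive causes.

ced = {
    "equipment": ["push button electrode lost contact with the manikin"],
    "machine": ["unexpected cprMAX prompts during defibrillator charging"],
    "people": ["team awaited AED prompts instead of acting"],
    "management": ["responsibility for the shock was shifted, not shared"],
    "training": ["shock button pressed without charging the device"],
}

def five_whys(effect: str, answers: list[str]) -> list[str]:
    """Build a cause chain by appending up to 5 consecutive 'why' answers."""
    return [effect] + answers[:5]

chain = five_whys(
    "defibrillation of VF was delayed",
    [
        "the team followed AED prompts rather than their training",
        "the team trusted the AED as a de facto leader",
        "prior positive experiences fostered automation bias",
        "training used a stereotypically responding AED",
    ],
)

# Print the chain with indentation to show each deeper "why".
for depth, step in enumerate(chain):
    print("  " * depth + step)
```

Each level of indentation in the output corresponds to one more “why” question posed against the preceding answer.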


The incident occurred in a simulation with 2 paramedics with varying degrees of work experience: RA1 (male, 20 years) and RA2 (female, 1 year). Both reported several similar missions in actual practice (RA1: 7–9 times, RA2: 1–3 times) during the past year, most recently less than 6 months ago for both.

At the outset of the scenario, the patient received the first shock with the AED (time 0). With RA1 as leader, the team then immediately established high-quality basic life support. At time 1:50 minutes, the team prepared to deliver the second shock if needed. At time 2:03 minutes, the AED prompted a rhythm check and the team paused chest compressions. However, the rhythm analysis failed because the push button electrode had lost contact with the manikin. Thus, this relevant equipment failure defined the beginning of the incident (CED, equipment).

Next, the AED correctly prompted “Insert electrode cable,” but RA1 reconnected the cable only 28 seconds later. Without a properly functioning electrode cable, the rhythm check was not possible, so the AED terminated this action and emitted a constant beep. By default, the beep indicated a 10-second delay before the analysis would automatically repeat (CED, machine). However, this beeping did not lead to an immediate rhythm analysis: RA1 verbally insisted on defibrillation (Fig. 2, bubble 1) but took no further action, not even resuming chest compressions (CED, people).

Transcript of the incident. On the far left, the actual duration of the scenario is given according to the video time track. The scale is not divided into even intervals, to allow better “temporal” resolution in some phases of the incident. The column labeled “Event” describes what goes on on-scene; the dialog between RA1 and RA2 is captured in the next 2 columns, 1 column per person. The central axis is the column labeled “ongoing chest compressions.” It shows the progression of chest compressions, color coded for easy visual detection: green, chest compressions performed; red, chest compressions paused or halted. The next column depicts the heart rhythm at the given time as displayed on the LP15's monitor. Automated external defibrillator voice prompts are noted in a separate column opposite the central axis. A second column, marked with an hourglass symbol, indicates separate steps in the AED programming routine.

At time 2:54 (51 seconds after the onset of the incident), the AED automatically initiated and completed the previously aborted heart rhythm analysis. Being in cprMAX mode, the AED prompted the team to perform chest compressions during defibrillator charging. RA1 was clearly irritated by this announcement (CED, people/machine). Apparently, he expected the AED to prompt shock delivery and repeatedly announced the diagnosis of VF (Fig. 2, bubbles 2 and 3). Presumably, RA1 did not register the visual cprMAX cues indicating the need for chest compressions during charging and repeatedly stated “we need to defibrillate the VF.” However, he did not hesitate to resume chest compressions immediately when the AED prompted “start resuscitation.” At that moment, chest compressions had been on hold for 56 seconds.

At time 3:27 (1:24 minutes after onset of the incident), RA1, who did not respond to numerous voice prompts, decided to perform manual defibrillation and ordered RA2 to defibrillate. Interestingly, RA2 seemed to doubt the correctness of this measure and urged RA1 to take full responsibility for her delivery of the shock (CED, management). More surprisingly, she did not wait for confirmation but immediately pressed the shock button without charging the device (CED, training). Consequently, no shock was delivered; operator failure marked her manual defibrillation attempt. RA2 seemed to recognize her mistake (saying to herself, “you have to press ‘charge’ first”) but neither shared this realization directly with RA1 nor reported that the shock had not been delivered successfully (CED, people).

Because there was no perceptible effect, RA1 resumed chest compressions, which caused another disconnection of the cable from the electrode to the AED. Both paramedics then attempted to reconnect the cable to the manikin. Immediately after reconnection, the AED prompted another rhythm check, and RA2 responded by pressing the button. Again, the AED (in cprMAX mode) prompted chest compressions during defibrillator charging. RA1 and RA2 missed this prompt because they were administering medication, although according to the European Resuscitation Council guidelines current at the time, no medications were indicated at that point (CED, training). Another possible interpretation is displacement activity under pressure. Again, chest compressions were on hold, which allowed the AED to complete the analysis sequence and issue the prompt “shock recommended.” In response, RA2 delivered the shock at time 4:20 (2:17 minutes after the beginning of the incident). She immediately recommended that RA1 resume chest compressions; however, RA1 deliberately waited until the AED prompted for chest compressions. This behavior seemed to indicate RA1's rigid adherence to AED prompts. CPR then continued without incident (CED, machine, people).

In total, the incident lasted 2:21 minutes, during which the cumulative hands-off time was 1:39 minutes, or 70% of the total duration. Fifty-six seconds was the longest single interruption of CPR. The required defibrillation for this patient in VF was delayed by 2:17 minutes from the beginning of the incident until successful shock delivery.
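These timing figures can be recomputed with simple arithmetic from the transcript. The following minimal sketch (the helper function is ours, not part of the study protocol) converts the reported mm:ss values and derives the hands-off fraction:

```python
def mmss_to_seconds(t: str) -> int:
    """Convert an 'm:ss' timestamp, as used in the transcript, to seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + int(seconds)

incident_duration = mmss_to_seconds("2:21")  # total incident length: 141 s
hands_off_time = mmss_to_seconds("1:39")     # cumulative compression pause: 99 s

# Fraction of the incident spent without chest compressions.
hands_off_fraction = hands_off_time / incident_duration
print(f"hands-off fraction: {hands_off_fraction:.0%}")  # → 70%
```

The same helper can be used to express the defibrillation delay (2:17 minutes, ie, 137 seconds) in seconds for comparison across cases.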

In the subsequent debriefing, both team members believed that the AED had not malfunctioned. Upon questioning, they stated that despite the incident, they saw no need to change their behavior since the patient was unharmed.


During the simulation scenario, we observed a relevant critical incident due to equipment failure, which would likely have affected patient outcome because of delayed defibrillation and prolonged no-flow time.11,12 Determining the underlying mechanisms could help avoid patient harm in actual clinical practice.

Much like civil aviation accident investigators use flight recorders, we used A/V recordings of the simulated resuscitation scenario to analyze the critical incident and identify possible root causes. The 3 key questions that form the basis of RCA methodology are considered below from both a medical and a psychological point of view: (a) what happened? (b) why did it happen? and (c) what can be done to prevent it from happening again?

What Happened?

The incident was a technical issue triggered by the loss of contact between a push button electrode and the cable that connected the AED to the manikin. One may argue that this simulation artifact does not exist in reality because defibrillator pads are typically applied directly to the patient. However, this incident, induced by flawed AED decision-making, strongly influenced the behavior of a professional EMS team. According to Calle et al,13 “wrongful” heart rhythm analyses of shockable rhythms by AEDs must be expected in up to 16% of cases.

The team was without a doubt aware of the correct diagnosis and the necessary therapeutic steps to control the situation, namely, VF requiring timely defibrillation. The team recognized the need for another defibrillation 37 seconds after the beginning of the incident, a need verbalized 4 times by the team leader, RA1. Moreover, the team made one single manual defibrillation attempt 1:28 minutes after the onset of the incident, which in fact was not performed correctly, so no shock was actually delivered. These actions indicate that the team knew exactly what needed to happen to solve the problem. However, RA1 failed to declare the critical situation and deviated from the anticipated algorithm.

We suspect that these errors arose from the complicated interaction with the AED. Notably, the team seemed to neglect high-quality basic life support, including chest compressions, early in the case. In aviation, the first step in controlling a critical situation is to keep flying the airplane (ie, to stabilize the aircraft in its flight attitude). Similarly, in healthcare, providers must continue basic life support, no matter what. We were quite surprised to see that the team did not adhere to this basic yet potentially life-saving measure.

Why Did It Happen?

For more than 80% of the time, the team literally awaited instructions from the AED instead of following their training as EMS professionals. This behavior underscores great trust in the AED. Lee and See14 defined “trust” as the fundamental belief in an “agent” that will help achieve one's individual goal safely in vague and vulnerable situations. The strong reliance on the AED most likely stemmed from previous positive experiences with the device.15 From this point of view, we must critically question resuscitation training that uses a stereotypically responding AED. In this case, trust in the AED was so profound that RA1 abandoned his role as team leader and the team mostly followed the lead of the AED. We postulate that the team essentially recognized the AED as an authority. Obviously, however, the AED cannot fulfill this rather complex responsibility.

Mosier et al16 described “automation bias” as an important factor in the use of automated decision aids that may lead to 2 different types of error: (a) failure by omission (ie, information is withheld by the automated system and thus is not incorporated into decision-making) and (b) error by commission (ie, inappropriate instructions of the automated system are executed, although they contradict other available information, personal experience, or training). The latter plays a central role in this case: although the team correctly diagnosed VF (at times 2:27, 4:59, and 5:25) and initiated the correct treatment (at times 4:45 and 5:34), they followed the misleading instructions of the automated system until the AED finally initiated and delivered the defibrillation (at times 6:15 and 6:23, respectively). This resulted in a 1:38-minute delay between human recognition of the correct diagnosis and defibrillation by the machine. RA1 repeatedly called for defibrillation at several points. His intention was most likely based on his personal experience and training and, not least, on his recognition of VF on the monitor. However, automation bias prevented him from taking the correct actions. One manual defibrillation attempt was initiated 47 seconds before the AED recommended defibrillation after analysis. It is unclear why no additional defibrillation attempt was made, especially since RA2 recognized that the manual attempt had failed. Eventually, a second defibrillation attempt was withheld because the de facto team leader, the AED, did not provide the appropriate prompt. Thus, in addition to automation bias, we observed additional cognitive errors:

  • fixation error (losing track of how the situation evolves)17
  • premature closure (incomplete decision-making secondary to aborting further analysis of situation once a decision has been made).18

What Can Be Done to Prevent It From Happening Again?

In this case, post hoc analysis allowed us to fully understand how the critical situation evolved and how such insights can help prevent similar errors in the future. We should therefore expect possible errors when using AEDs, and appropriate troubleshooting measures should be part of AED training for all health professionals.

High-quality simulation-based training at the interface between man and machine should follow a goal-oriented approach to mitigate potential error from automation bias during critical situations. Changes in AED design and programming may help reduce human error from automation bias. For example, visual or acoustic warnings may indicate that the device is working outside routine parameters and thus prompt the operator to take over responsibility for control and decision-making. Moreover, this case illustrates that even professional EMS providers may depend on voice prompts that remind them to continue chest compressions. Our observation supports the call for further research on targeted design of voice prompts.19–21

Our findings reveal that professional EMS personnel can be susceptible to errors resulting from unexpected AED prompts that impede high-quality basic life support measures. Automation bias most likely accounts for this adverse outcome and may have led to flawed decision-making and loss of leadership. Presumably, this root cause can be transferred to similar situations. However, we draw our conclusions from a single case, which restricts generalizability. The incident took place in a simulation center, and one could argue that the team's performance was affected by the unfamiliar and unusual working environment. Our conclusions cannot be transferred to actual practice without further research.

Root cause analysis methodology can also be used in healthcare simulation to develop a deep understanding of critical incidents. We achieved this understanding after 12 hours of analysis, compared with an estimated 20 to 90 person-hours for a traditional RCA.2 Such an in-depth analysis would be unfeasible during a postevent team debriefing immediately after a critical incident. Therefore, we propose a 2-step debriefing process with the team involved to ensure a closed loop with frontline providers.


The combination of a standardized full-scale simulation with RCA methodology has great potential to understand the basis of critical situations observed in simulation-based training sessions. Professional observers with backgrounds in both healthcare and psychology can enhance the depth of analysis.


1. Makary MA, Daniel M. Medical error - the third leading cause of death in the US. BMJ 2016;353:i2139.
2. Wu AW, Lipshutz AK, Pronovost PJ. Effectiveness and efficiency of root cause analysis in medicine. JAMA 2008;299(6):685–687.
3. Charles R, Hood B, Derosier JM, et al. How to perform a root cause analysis for workup and future prevention of medical errors: a review. Patient Saf Surg 2016;10:20.
4. International Civil Aviation Organisation (ICAO). Chapter 5 Investigation. In: Annex 13 to the Convention on International Civil Aviation - Aircraft Accident and Incident Investigation. 10th ed. Quebec, Canada: ICAO; 2010.
5. European Aviation Safety Agency (EASA). Flight Recorder CS 25.1459. In: Certification Specification for Large Aeroplanes CS-25 Amendment 3 Book 1, ed. Cologne, Germany: EASA; 2007.
6. Wilf-Miron R, Lewenhoff I, Benyamini Z, Aviram A. From aviation to medicine: applying concepts of aviation safety to risk management in ambulatory care. Qual Saf Health Care 2003;12(1):35–39.
7. Chang A, Schyve PM, Croteau RJ, O'Leary DS, Loeb JM. The JCAHO patient safety event taxonomy: a standardized terminology and classification schema for near misses and adverse events. Int J Qual Health Care 2005;17(2):95–105.
8. Lina LR, Ullah H. The Concept and Implementation of Kaizen in an Organization. Global J Manag Bus Res 2019.
9. Ishikawa K, Lu DJ. What Is Total Quality Control?: The Japanese Way. Vol 215. Englewood Cliffs, NJ: Prentice-Hall; 1985.
10. Doggett AM. A statistical comparison of three root cause analysis tools. J Ind Technol 2004;20(2):2–9.
11. Christenson J, Andrusiek D, Everson-Stewart S, et al. Chest compression fraction determines survival in patients with out-of-hospital ventricular fibrillation. Circulation 2009;120(13):1241–1247.
12. Cheskes S, Schmicker RH, Christenson J, et al. Perishock pause: an independent predictor of survival from out-of-hospital shockable cardiac arrest. Circulation 2011;124(1):58–66.
13. Calle PA, Mpotos N, Calle SP, Monsieurs KG. Inaccurate treatment decisions of automated external defibrillators used by emergency medical services personnel: incidence, cause and impact on outcome. Resuscitation 2015;88:68–74.
14. Lee JD, See KA. Trust in automation: designing for appropriate reliance. Hum Factors 2004;46(1):50–80.
15. Hoff KA, Bashir M. Trust in automation: integrating empirical evidence on factors that influence trust. Hum Factors 2015;57(3):407–434.
16. Mosier KL, Skitka LJ, Heers S, Burdick M. Automation bias: decision making and performance in high-tech cockpits. Int J Aviat Psychol 1997;8(1):47–63.
17. Cook RI, Woods DD. Operating at the sharp end: the complexity of human error. Hum Error Med 1994;13:225–310.
18. Keinan G. Decision making under stress: scanning of alternatives under controllable and uncontrollable threats. J Pers Soc Psychol 1987;52(3):639–644.
19. Fleischhackl R, Losert H, Haugk M, et al. Differing operational outcomes with six commercially available automated external defibrillators. Resuscitation 2004;62(2):167–174.
20. Müller MP, Poenicke C, Kurth M, et al. Quality of basic life support when using different commercially available public access defibrillators. Scand J Trauma Resusc Emerg Med 2015;23:48.
21. Plattner R, Schabauer W, Baubin MA, Lederer W. Hands-off-Zeiten durch AED-Sprachanweisungen. Notfall Rettungsmed 2013;16(6):449–453.

Simulation in healthcare; incident root cause analysis; aviation safety investigations; cardiopulmonary resuscitation; crew resource management - CRM; EMS training

Copyright © 2020 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the Society for Simulation in Healthcare.