
Special Article

Augmenting Health Care Failure Modes and Effects Analysis With Simulation

Nielsen, Ditte S. MD; Dieckmann, Peter PhD; Mohr, Marlene MD; Mitchell, Anja U. MD; Østergaard, Doris MD, DMSc, MHPE

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: February 2014 - Volume 9 - Issue 1 - p 48-55
doi: 10.1097/SIH.0b013e3182a3defd


Health care failure modes and effects analysis (HFMEA) is a risk analysis technique designed to identify and analyze failure modes, causes, and effects in a system or process before actual sentinel events or near misses occur (Fig. 1).1 Failure modes are defined as areas in which the process can break down and cause poor outcomes. When conducting an HFMEA, a defined process is broken down into steps, and each step is analyzed for potential failure modes. A hazard analysis is conducted for every failure mode to identify those failure modes that warrant deeper analysis. Potential causes for every failure mode are identified, and another hazard analysis is conducted for the causes. Finally, actions and outcome measures are identified for those causes that need further attention, according to scoring. The goal is to improve and optimize the given process. The prospective nature of HFMEA is seen as a special advantage of the method.1,2

FIGURE 1: Overview of steps in an HFMEA.

A key step in the HFMEA technique is identifying the failure modes. Different methods have been used to accomplish this identification, such as brainstorming, sentinel event databases, direct observation, focus groups, or information from organizations that collect data on health care systems.1,2 Davis et al2 combined simulation and HFMEA and found that simulation allowed for collecting data that uniquely helped in understanding and prioritizing failure modes that might have gone unnoticed and unrecognized without the simulations. These failure modes included the need for training, role clarification, and consistent communication regarding the timing of treatment steps.

Patient safety depends on the functional interplay among human beings, technology, and organizations.3 Simulation has increasingly been used to train health care professionals, with a focus largely on the human side and less on analysis and improvement at the technical, organizational, or systemic level; recognition of its potential in this regard is growing.3–9 Simulation has also played a role in usability testing of medical devices, which is increasingly required (eg, IEC 60601).10,11

To our knowledge, no studies have compared the results found when combining simulation and HFMEA with the results found during a traditional HFMEA (in other words, without simulation). The assumption is that augmenting a traditional HFMEA with simulation will identify additional safety issues relevant for organizational improvements, as compared with a traditional HFMEA alone.12,13

Purpose of the Study

The purpose of this study was to explore whether a higher number of failure modes, causes, and effects in a health care process could be identified when a group of process experts actively simulates the process than when they brainstorm about it. The second aim was to determine whether process experts would score additional failure modes identified after simulation as worthy of further exploration.

MATERIALS AND METHODS

Research Team and Study Participants

A multidisciplinary research team consisting of a medical student (D.S.N.), a psychologist (P.D.), 2 consultant anesthesiologists (A.U.M., D.O.), and 1 consultant obstetrician (M.M.) developed and conducted this study. The study was performed at the Danish Institute for Medical Simulation at Herlev Hospital, Capital Region of Denmark. The research team designed the study, assembled 2 multidisciplinary teams as study participants, facilitated the brainstorming and simulation sessions, and collected, processed, and analyzed the data. Two different multidisciplinary teams performed the brainstorming and simulation sessions, as described in step 2 below.

Modifications to the HFMEA Method

The study followed the HFMEA method in steps 1 to 4b, as described by the Department of Veterans Affairs National Center for Patient Safety1 (Fig. 1), with some adaptations (explained after each description, where necessary).

  • Step 1: The research team defined the process to be studied.
  • Step 2: The research team assembled 2 multidisciplinary teams of process experts to identify failure modes, causes, and effects (as opposed to 1 suggested team).
  • Step 3: The research team (as opposed to process experts) developed a graphic description of the process under study.
  • Step 4a: Both multidisciplinary teams identified failure modes through brainstorming and simulation (as opposed to only brainstorming).
  • Step 4b: A consultant anesthesiologist (A.U.M.) and a consultant obstetrician (M.M.) from the research team (as opposed to a multidisciplinary team) scored the failure modes using the Hazard Scoring Matrix and the HFMEA Decision Tree.

Step 1: Process to be Studied

The process under study was an obstructed breech delivery because the guideline for treating this specific situation had recently been changed at the hospital. Furthermore, the risk of potential failures in perinatal units was addressed in Joint Commission Sentinel Event Alert #30; breech deliveries represented 6% of 47 cases of perinatal death or permanent disability reported from 1996 to 2004.14 The situation is high risk and low frequency, one that hospital staff seldom experience, which makes it relevant to optimize the process by means other than clinical experience alone.

In the clinical practice at the hospital, a breech delivery takes place in a delivery room, where forceps, Bricanyl (Terbutalin), and sublingual nitroglycerin are available. All women having a breech delivery have prophylactic intravenous access established. In the event of an obstructed breech delivery, once the obstetric crew decides to apply forceps, an emergency anesthesia crew is called to the delivery room. The crew brings a bag containing emergency intubation equipment and monitors for both mother and neonate. The woman’s airway and a short relevant history are assessed. If the baby cannot be delivered by forceps, the woman is preoxygenated and anesthetized, and rapid-sequence intubation is performed in the delivery room. Intravenous anesthesia is maintained until the baby is born.

Step 2: Assembling the Multidisciplinary Teams

Both multidisciplinary teams of process experts consisted of several crews15: 1 obstetric crew with 4 people (1 experienced nurse assistant from the delivery unit, 1 experienced midwife, 1 resident obstetrician, and 1 consultant obstetrician), 1 anesthesia crew with 3 people (1 nurse anesthetist, 1 resident anesthesiologist, and 1 consultant anesthesiologist), and 1 consultant pediatrician with neonatal experience. The multidisciplinary teams were typical for the clinical practice in the hospital, in terms of length of education and clinical experience. The participants’ respective departments funded their participation.

Step 3: Graphic Description of the Process

The graphic description of the process (process flow diagram) was performed in advance by 1 consultant anesthesiologist (A.U.M.) and 1 consultant obstetrician (M.M.) from the research team because time with the 2 multidisciplinary teams of process experts was limited. The process flow diagram was based on existing hospital guidelines and included 6 process steps (Fig. 2). Taking into account the project’s resources, we concentrated on step 6, in which 6 subprocess steps (6a–f) were identified. These 6 subprocess steps were used to create a subprocess flow diagram (Fig. 2). The flow diagrams were validated by observations during visits to the authentic worksite and by discussions with the head of the Department of Gynecology and Obstetrics, who affirmed that the diagrams represented the current process. Figure 2 describes how the process under study should be handled, according to existing hospital guidelines.

FIGURE 2: Steps in the health care process under study, “obstructed UK delivery” (breech delivery) (steps 1–6), with subprocess steps from calling the anesthesia crew to initiating emergency treatment (steps 6a–f).

Step 4a: Identification of Failure Modes

The study was performed on 2 consecutive days, with 1 multidisciplinary team on each day. Study participants received a letter before participation, which described the HFMEA and the purpose of the study and emphasized that their focus should be on analyzing the concrete process and that neither the brainstorming nor the simulations were intended to test their knowledge or skills.

Two members from the research team (D.S.N., P.D.) showed participants the 2 flow diagrams (Fig. 2). Participants were asked to complete a brainstorming session, during which they were to identify areas where the process of managing the breech delivery could break down and cause poor outcomes. They were asked to concentrate on subprocess steps 6a to 6f, taking one subprocess step at a time. Participants were allowed 3 hours for the study, and the time was strictly scheduled. Brainstorming was limited to 50 minutes, so that each subprocess step was allocated approximately 8 minutes (see Fig. 3 for the overall timing). The brainstorming sessions were facilitated by 2 members from the research team (D.S.N., P.D.). All data the multidisciplinary teams identified were written on flipcharts by a third research team member (D.O.). Facilitation included keeping participants focused on the data collection, answering questions if necessary, managing the time, and guiding the participants to the next subprocess step every 8 minutes.

FIGURE 3: Flow of the study across the 2 days of data collection. Each team did its work on 1 day, and the teams worked on different days.

After the brainstorming session, 2 members from the research team (P.D., M.M.) familiarized the multidisciplinary teams with the simulation environment and introduced the teams to the concrete start of the simulation scenario and their respective roles.

We worked with a hybrid model to create relevant simulation scenarios. The consultant obstetrician on the research team (M.M.) played the role of the mother and held a part-task trainer against her body that was designed for delivery simulation. It consisted of a pelvis with cervix, vagina, and perineum. A baby manikin in the trainer helped simulate a breech delivery. The delivering mother’s vital signs were displayed on relevant monitors using the SimMan software. If other signs (including stethoscopy sounds or laboratory results) were needed, one member of the research team experienced in running simulation scenarios (D.O.) provided them orally to the participants.

To help participants into the simulated situation, the scenario began at the point at which the baby’s body presented, approximately 1 minute before the subprocesses under study started. The obstetric crew was asked to deliver the baby’s shoulders and arms. When the crew recognized an obstruction of the breech delivery and decided to apply forceps (step 4, Fig. 2), they called the anesthesia nurse and resident anesthesiologist by telephone. From that point, the simulation was completed by following the same 6 subprocess steps addressed in the brainstorming session (Fig. 2). The obstetric crew started timekeeping and prepared the working area while waiting for the anesthesia crew. The resident anesthesiologist called their consultant. The anesthesiologists were asked to wait for 3 minutes before entering the simulation room, mimicking the expected minimum travel time to the delivery room. The obstetric crew provided a short briefing to the anesthesia crew when they arrived. Thereafter, the 2 crews worked simultaneously. The anesthesia crew assessed the patient for anesthesia, and the obstetric consultant applied the forceps and attempted delivery. The scenario stopped when the attempted delivery by forceps did not succeed, and the anesthesia crew prepared for emergency intubation.

Each multidisciplinary team performed the simulations differently on its study day. On the first day, the simulation was followed by a second brainstorming session in the simulation room, during which study participants were asked to mention any new failure modes that came to mind after the simulation. With this variant, we wanted to explore what influence simulation would have if the overall scenario was kept as one whole episode. On the second day, the simulation was followed by another simulation that included a stop procedure developed by the research team. With this variant, we aimed to explore the degree to which simulation could help us dig deeper into the details of smaller pieces of the scenario, for example, by reducing memory load compared with the simulation use on the first day. The stop procedure was a combination of simulation and brainstorming: the multidisciplinary team was asked to stop the simulation whenever they discovered a new failure mode or after every subprocess step. After the failure modes were registered on a flipchart, the simulation resumed at the point where it had stopped. During the stop procedure, data were thus collected whenever the participants discovered a relevant detail and stopped themselves, or when they were stopped after every subprocess step. When participants passed subprocess step 6b without any stops, the research team interpreted this as their becoming too involved in the simulation to remember the study task. Therefore, the research team stopped the simulation after each subprocess step.

Both days ended with an open discussion. This allowed study participants to comment on the study conditions and provide further information regarding failure modes, causes, and effects. All simulation and brainstorming sessions were facilitated by the same members from the research team (D.S.N., P.D.). Data that the multidisciplinary teams identified were again registered on a flipchart by a third member of the research team (D.O.).

Step 4b: Scoring of the Failure Modes

All data identified by both multidisciplinary teams were transcribed from the flipcharts to computer files and edited for readability. Only data identified by the 2 multidisciplinary teams during the actual discussion and simulation sessions were included; members of the research team did not add more data. Each quote was transferred to its own piece of paper and sorted into clusters based on similarity of content, a recognized method for handling this type of data.16 The clusters were created inductively in several steps. If a single data point thematically matched a previously created cluster, it was put into that cluster; if it did not match, a new cluster was created. After an initial sorting, the clusters were revised as needed, until no more changes seemed necessary. After the clusters were created, the research team analyzed all the data, one cluster at a time, by sorting them into the categories “failure modes,” “causes,” or “effects.” The raw data the 2 multidisciplinary teams produced were a mix of these 3 categories. The clusters formed the basis for Table 1 (Table, Supplemental Digital Content 1, https://links.lww.com/SIH/A89, which shows all details of the data regarding clusters, failure modes, causes, and effects). Each cluster occupies a horizontal row, and the failure modes, causes, and effects are arranged in vertical columns.
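The inductive, single-pass sorting described above can be summarized schematically. The following is a minimal sketch, not software used in the study (the sorting was done manually with paper quotes); the function names and the keyword-overlap stand-in for human judgment of thematic similarity are assumptions made for illustration only.

```python
# Illustrative sketch of the single-pass, inductive clustering described above.
# "matches" stands in for the human judgment of thematic similarity used in the study.

def cluster_quotes(quotes, matches):
    """Assign each quote to the first cluster it thematically matches,
    or open a new cluster if none matches."""
    clusters = []  # each cluster is a list of quotes
    for quote in quotes:
        for cluster in clusters:
            if matches(quote, cluster):
                cluster.append(quote)
                break
        else:
            clusters.append([quote])
    return clusters

# Trivial keyword-overlap stand-in for thematic matching (hypothetical):
def matches(quote, cluster):
    return any(set(quote.lower().split()) & set(member.lower().split())
               for member in cluster)

data = ["delay calling the anesthesia crew",
        "anesthesia crew arrives without monitor",
        "forceps not prepared in time"]
print(cluster_quotes(data, matches))
# [['delay calling the anesthesia crew', 'anesthesia crew arrives without monitor'],
#  ['forceps not prepared in time']]
```

After this initial pass, clusters were revised iteratively, which has no direct analogue in the sketch.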

TABLE 1: Coding of the Results

Two members from the research team (A.U.M., M.M.) independently scored all failure modes using the scoring tools for HFMEA described by the Department of Veterans Affairs National Center for Patient Safety.1 Failure modes were first scored by using a hazard analysis: every failure mode received a score from 1 to 16, based on severity and probability. All failure modes with a score of 8 or greater were then analyzed using Decision Tree Analysis, which determined whether a given failure mode was to “proceed,” that is, whether it was worthy of further analysis.1 The Decision Tree Analysis examined whether a failure mode was a single point of weakness, whether control measures already existed, and its degree of detectability. The raters took the mother’s and the baby’s perspectives into account separately when individually scoring each failure mode during the hazard analysis; for example, “delay in the process” is more hazardous for the baby than for the mother. A failure mode was categorized as “worth further analysis” if at least 1 of the 2 raters, from either the mother’s or the baby’s perspective, independently flagged it as “proceed.” The list on which the failure modes were scored did not include information on how they had been collected (via brainstorming before the simulations or by which of the simulations). Although the 2 raters might recall the mode of data collection, it was not immediately obvious to them while rating.
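For readers less familiar with these tools, the scoring logic can be condensed into a short sketch. This is an illustrative paraphrase of the VA National Center for Patient Safety hazard scoring matrix and HFMEA Decision Tree1 under the usual 4-point severity and probability scales, not code used in the study; the function names and the example values are assumptions.

```python
# Illustrative paraphrase of HFMEA hazard scoring and the Decision Tree (after ref. 1).
# Assumes 4-point severity and probability scales; not study code.

def hazard_score(severity: int, probability: int) -> int:
    """Hazard score = severity (1-4) x probability (1-4), yielding 1-16."""
    assert 1 <= severity <= 4 and 1 <= probability <= 4
    return severity * probability

def proceed(severity: int, probability: int, single_point_weakness: bool,
            effective_control_exists: bool, hazard_obvious: bool) -> bool:
    """Return True if the failure mode warrants further analysis ('proceed')."""
    if hazard_score(severity, probability) < 8 and not single_point_weakness:
        return False   # neither hazardous enough nor a single point of weakness
    if effective_control_exists:
        return False   # an existing control measure already addresses the hazard
    if hazard_obvious:
        return False   # so readily detectable that no further control is warranted
    return True

# Example: a catastrophic (4) but uncommon (2) failure mode with no existing
# control measure and poor detectability scores 8 and is flagged to proceed.
print(proceed(4, 2, single_point_weakness=False,
              effective_control_exists=False, hazard_obvious=False))  # True
```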

During data collection, participants noted that there were 2 ways to initiate the alarm chain: using the telephone to call the relevant people and pushing 1 of 2 alarm buttons in the delivery unit office that automatically sent a message to a predefined set of people. One button was labeled “critically ill child” and the other was labeled “critically ill mother.” The guideline stated that the anesthesia crew should be called by telephone. However, participants on day 2 also focused on the alarm button. Data on both calling methods were collected. For clarity, we analyzed the data about the button separately from the telephone method of calling for help because of the smaller volume of data concerning the alarm button (Table, Supplemental Digital Content 2, https://links.lww.com/SIH/A90, which shows details regarding number of data items detected by brainstorms and simulations via subprocess steps and for the alarm button).

Ethical Considerations

Because this study did not involve patients, it required no ethical approval. Participants were informed about the nature of the study and data handling and gave their oral consent for processing and publication of the anonymous data. They could withdraw from the study at any point without consequence.

RESULTS

General Findings

Two different multidisciplinary teams carried out this HFMEA process, and each time the addition of simulation identified more failure modes, causes, and effects. Both simulation methods produced relevant results, but the comparison between them is inconclusive. Table 2 shows that the first brainstorming sessions on day 1 and day 2 identified 30 failure modes, 49 causes, and 15 effects. The simulations on day 1 and day 2 identified an additional 10 failure modes, 32 causes, and 22 effects. Overall, the study identified 40 failure modes, 81 causes, and 37 effects across all data collection points. Relatively speaking, simulations were more effective in identifying effects (22 of 37, or 60%) and causes (32 of 81, or 40%) than failure modes (10 of 40, or 25%).

TABLE 2: Total Number of Failure Modes, Failure Modes Scored to Proceed, Causes, and Effects Identified on Days 1 and 2 by the First Brainstorms and After Simulation

Of the 40 failure modes identified, 13 were scored as hazardous enough to proceed. Of these 13 failure modes, 8 were identified after brainstorming, whereas an additional 5 were identified after simulation. Simulation thus identified a relatively higher proportion of the more hazardous, “proceed” failure modes than did brainstorming: 5 of the 10 failure modes identified after simulation were scored to proceed, compared with 8 of the 30 identified by brainstorming. No relevant differences were found between the 2 ways of using simulation regarding the number of failure modes scored to proceed or the total number of identified failure modes and their causes and effects (failure modes: simulation day 1, 5; simulation day 2, 5; causes: simulation day 1, 14; simulation day 2, 18; effects: simulation day 1, 10; simulation day 2, 12; Table, Supplemental Digital Content 2, https://links.lww.com/SIH/A90, which shows details regarding the number of data items detected via brainstorms and simulations).
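The proportions reported here and in the preceding paragraph follow directly from the counts in Table 2; the snippet below simply recomputes them as a check and is illustrative only, not study code (the variable names are ours).

```python
# Recomputation of the proportions reported in the Results; counts are those
# given in the text and Table 2. Illustrative only.
brainstorm = {"failure_modes": 30, "causes": 49, "effects": 15, "proceed": 8}
simulation = {"failure_modes": 10, "causes": 32, "effects": 22, "proceed": 5}

for key in ("failure_modes", "causes", "effects"):
    total = brainstorm[key] + simulation[key]
    print(f"{key}: {total} in total, {simulation[key] / total:.0%} identified by simulation")
# failure_modes: 40 in total, 25% identified by simulation
# causes: 81 in total, 40% identified by simulation
# effects: 37 in total, 59% identified by simulation (the text rounds this to 60%)

# Share of identified failure modes scored to "proceed", by collection method:
print(f"simulation: {simulation['proceed'] / simulation['failure_modes']:.0%}")     # 50%
print(f"brainstorming: {brainstorm['proceed'] / brainstorm['failure_modes']:.0%}")  # 27%
```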

The main interest of this study was the number of failure modes. However, simulation also helped identify different types of failure modes in those subprocess steps (6d and 6e) in which the anesthesia and obstetric crews worked simultaneously. For these steps, brainstorming primarily produced data regarding communication between the 2 crews about each crew’s needs, as well as data regarding communication with the patient (for example, communication may be difficult because of different languages, time pressure, or labor pain). Data from the simulations additionally described practical coordination between the crews, which was not identified during the brainstorming sessions: team spirit might be missing, one crew might not understand the other crew’s procedures, and the crews might have different foci (the anesthesia crew on the mother, the obstetric crew on the baby), making it difficult to prioritize the tasks on which to focus.

During the open discussion rounds, participants mentioned that the results might change if the order of brainstorming and simulation changed. They also pointed out that in situ simulations might be advantageous. On day 2, participants labeled data identified through brainstorming as 90% theoretical and 10% practical, whereas they perceived the relationship as reversed during simulation.

DISCUSSION

Our assumption in this study was that simulating a process would add more data to an HFMEA than brainstorming alone. The nature of simulation allows for a detailed examination of every single step in the process, revealing vulnerabilities that may be missed when just brainstorming. Although both ways of using simulation produced relevant, additional failure modes, causes, and effects, the comparison between the two methods is not conclusive.

Our study showed that a higher number of failure modes, causes, and effects were found when augmenting HFMEA with simulation after an initial brainstorming session. This finding is in line with the study by Davis et al,2 who combined HFMEA with in situ simulation and found that simulation provided information that routine HFMEA might have missed. However, Davis et al did not compare the results found by combining HFMEA and simulation with results from an HFMEA without simulation. Another difference is that in their study, data detection was performed by experts viewing videos of the simulations, whereas participants in our study were active during data collection in the brainstorming and simulation sessions. In this sense, the studies supplement each other.

Simulation also helped identify additional types of failure modes, particularly regarding practical coordination between the crews, which brainstorming did not detect. Brainstorming did not reveal the complexity of crew coordination when both crews were cooperating face-to-face around the same patient. This seems plausible because a crew in a brainstorming session may find it easier to think and talk about the case from its own perspective. In principle, everybody involved might offer ideas in parallel without linking them to one another during brainstorming; one might pick up ideas and develop them further without being aware of the other crew’s challenges. Simulation, on the other hand, forced participants to experience the consequences when the process did not proceed optimally, such as waiting time, uncertainty over what the other crew was doing, and prioritizing tasks between the crews. Brainstorming may make it difficult to account fully for such practical challenges because it only entails talking about actions, not actually carrying them out. This indicates that HFMEA can benefit from augmentation by simulation, particularly for more complex processes involving a higher number of participants from different professions and specialties. Certainly, the additional required resources need to be weighed against any potential benefit, but this study supports our earlier theoretical considerations on augmenting HFMEA with simulation12 and indicates that further exploring this connection would be beneficial.

The overall lower volume of data identified after simulation, compared with that identified by the first brainstorming session, may be explained by the method, because participants were instructed not to repeat data already mentioned during brainstorming. It might be beneficial in further investigations to systematically vary the order of the different parts of the study. In particular, simulation helps identify additional causes and effects. Identifying causes might help health professionals obtain a deeper understanding of patient safety issues and may therefore be more important than identifying additional failure modes. An extensive discussion of this aspect is beyond the scope of this article.

This study included only some of the steps (1–4b, Fig. 1) of the HFMEA framework developed by the Veterans Affairs National Center for Patient Safety.1 In a complete HFMEA study, the causes of the failure modes scored to proceed should be scored in the same way as the failure modes, and action and outcome measures should be developed. These steps were beyond the scope of our project.

The results from steps 1 to 4b indicate that simulation is a valuable tool for identifying data in HFMEA studies. The simulations identified failure modes that the experts found relevant for the involved departments and that were scored as warranting further analysis. Simulation also identified a relatively higher proportion of the more hazardous, “proceed” failure modes than did brainstorming (simulation, 5 of 10, 50%; brainstorming, 8 of 30, 27%), and it was more effective in identifying causes and effects of failure modes, possibly because of the more concrete experience with the identified issues and the more practical orientation of the findings after simulation (as expressed by the 90%/10% comment of one study participant).

Simulation seems to be a valuable supplement in an HFMEA study because it is a realistic and action-oriented representation of the process under study.13 These concrete representations and behaviors discourage relying on “standard memories” and filling potential knowledge gaps about the process with assumptions.17 For example, people usually know how to use the technical devices at their workplace, but these devices might function in slightly different ways, and such differences are more easily forgotten when HFMEA data collection methods are based on recall. People actively enact the process during simulation and are therefore less prone to certain memory biases and confabulations. Visual, auditory, and other cues experienced during simulation might help promote a full, concrete understanding. It is certainly possible, as with any method, that simulation will introduce different biases; therefore, a combination of methods may be beneficial.17,18

In this study, 2 different simulation methods were tried: a simulation without interruptions followed by a second brainstorming session, and a simulation without interruptions followed by a second simulation using a stop procedure. Both methods were suitable for detecting vulnerabilities in a process, and no differences were seen in the number or severity of the data points detected. Our data were not sufficient to allow in-depth comparisons between the two methods but suggest that both are worth exploring further.

This study used simulation in the context of system improvements. An interesting observation was that this type of simulation created some very open discussions, even though the task was to detect failures, including participants’ individual action errors and those of their colleagues. The participants were very open-minded and seemingly worked with few inhibitions, possibly because of how the simulations were framed: as a method to identify system failures, not personal mistakes. This observation may be interesting from a learning perspective because such a focus may also be considered during simulation-based training.

This study took place in the simulation center at the hospital, and one could speculate whether in situ simulation would have provided more information. A more realistic time perspective may have been possible, as participants would have had to move around the actual hospital rather than only wait in the hallway for 2 minutes, which would be an extremely short transport time in any hospital. In addition, participants might have been able to identify a greater volume of data, such as in the category “missing technical resources,” if the simulation had been performed in the actual surroundings. Another advantage is that team members might recall more potentially relevant data because the authentic environment would more readily remind them of previous experiences.

Limitations

The study design does not allow us to conclude that it was the simulation itself that helped identify the additional failure modes. Given the lack of a control group, one might argue that additional failure modes may have been found with more brainstorming alone or that the sequence of the data collection points biased the findings. The study’s limited budget did not allow for a more robust design, and it was also not possible to study the implementation of consequent changes in the actual work system. Another limitation, also related to budget constraints, was the study’s short duration; many HFMEAs are done over weeks with many subteams. Nevertheless, this study supports the assumption that simulation is a valuable supplement to HFMEA. This finding does not suggest that simulation alone is preferable, but it is promising enough to suggest augmenting HFMEA with simulation. To draw stronger conclusions, a larger experiment with a more robust design would be needed, for example, with varying orders of brainstorming and simulations and more multidisciplinary teams, cases, and repetitions. Our results suggest that it is worth running such studies.

Practical Considerations When Augmenting HFMEA With Simulation

This study asked participants from the multidisciplinary teams to mention everything they thought might go wrong in the given process. This open instruction resulted in a mix of failure modes, causes, and effects. One might learn more about the process in this way, but additional sorting work is needed afterward. It is important to check the given instructions against the precise goals of the session. A benefit of this open instruction is that participants do not have to spend time and energy checking that their contribution is actually a failure mode.

In this study’s stop procedure, we stopped participants after every step if they did not stop by themselves. Our experience was that participants in the stop procedure became so immersed in, and engaged by, the simulation that they forgot to stop when discovering new data. Future studies may consider scheduling stops in advance to avoid improvising during the simulation.

Simulation was found to be especially useful during the steps of the process in which more crews had to cooperate. The use of simulation as a supplement to traditional HFMEA is recommended whenever a process includes a larger number of participants from different professions and specialties.

CONCLUSIONS

This study demonstrated that simulation can effectively identify additional, potentially critical failure modes, causes, and effects when added to brainstorming in a traditional HFMEA. Simulation shows system shortcomings in an experiential way, thus avoiding potential biases that occur when just talking about a process.

ACKNOWLEDGMENTS

The authors acknowledge the contributions of the 2 multidisciplinary teams participating in the study and the time and efforts they took for the study. They also thank the reviewers of this article, the associate editor, and Editor-in-Chief Dr. Gaba, for the valuable comments that greatly helped us improve the manuscript.

REFERENCES

1. DeRosier J, Stalhandske E, Bagian JP, Nudell T. Using health care failure mode and effect analysis: the VA National Center for Patient Safety’s prospective risk analysis system. Jt Comm J Qual Improv 2002; 28 (5): 248–267, 209.
2. Davis S, Riley W, Gurses AP, et al. Failure modes and effects analysis based on in-situ simulations: a methodology to improve understanding of risks and failures. 2008. Available at: http://www.ncbi.nlm.nih.gov/books/NBK43662/pdf/advances-davis_60.pdf.
3. Rall M. Human performance and patient safety. In: Miller’s Anesthesia. 7th ed. London: Elsevier Churchill Livingstone; 2009.
4. Daniels K, Lipman S, Harney K, et al. Use of simulation-based team training for obstetric crises in resident education. Simul Healthc 2008; 3 (3): 154–160.
5. Dieckmann P, Reddersen S, Wehner T, Rall M. Prospective memory failures as an unexplored threat to patient safety: results from a pilot study using patient simulators to investigate the missed execution of intentions. Ergonomics 2006; 49 (5–6): 526–543.
6. Dieckmann P, Phero JC, Issenberg SB, et al. The first Research Consensus Summit of the Society for Simulation in Healthcare: conduction and a synthesis of the results. Simul Healthc 2011; (Suppl 6): S1–S9.
7. Gaca AM, Lerner CB, Frush DP. The radiology perspective: needs and tools for management of life-threatening events. Pediatr Radiol 2008; 38 (Suppl 4): S714–S719.
8. LeBlanc VR, Manser T, Weinger MB, et al. The study of factors affecting human and systems performance in healthcare using simulation. Simul Healthc 2011; (Suppl 6): S24–S29.
9. Small SD. Simulation applications for human factors and systems evaluation. Anesthesiol Clin 2007; 25 (2): 237–259.
10. Dieckmann P, Rall M, Ostergaard D. The role of patient simulation and incident reporting in the development and evaluation of medical devices and the training of their users. Work 2009; 33 (2): 135–143.
11. IEC 60601-1 (2012-08) Ed. 3.1 Medical electrical equipment—part 1: general requirements for basic safety and essential performance. Available at: http://www.iec-normen.de/219099/iec-60601-1-2012-08-ed-3-1-englisch.html. Accessed November 12, 2012.
12. Dieckmann P, Rall M. Simulators in anaesthetic training to enhance patient safety. In: Cashman JN, Grounds RM, eds. Recent Advances in Anaesthesia & Intensive Care. Cambridge, UK: Cambridge University Press; 2007; 24: 211–232.
13. Dieckmann P. Prospektive Simulation: Ein Konzept zur methodischen Ergänzung von medizinischen Simulatorsettings [Prospective simulation: a concept for the methodological complementation of medical simulator settings]. Zeitschrift für Arbeitswissenschaft 2005; 59 (2): 172–180.
14. Joint Commission. Preventing infant death and injury during delivery. Sentinel Event Alert #30. 2004. Available at: http://www.jointcommission.org/sentinel_event_alert_issue_30_preventing_infant_death_and_injury_during_delivery/.
15. Gaba DM, Howard SK, Fish KJ, Smith BE, Sowb YA. Simulation-based training in anesthesia crisis resource management (ACRM): a decade of experience. Simul Gaming 2001; 32 (2): 175–193.
16. Ryan GW, Bernard HR. Techniques to identify themes. Field Methods 2003; 15 (1): 85–109.
17. Pohl RF. Cognitive Illusions. A Handbook on Fallacies and Biases in Thinking, Judgment and Memory. New York, NY: Psychology Press; 2004.
18. Chabris CF, Simons DJ. The Invisible Gorilla: How Our Intuitions Deceive Us. 1st ed. New York: Crown; 2010.
Keywords:

Simulation; Patient safety; Health care failure mode and effects analysis


© 2014 Society for Simulation in Healthcare