KEY POINTS
- Question: When, how, how often, and with what impacts are emergency manuals used and not used clinically?
- Findings: Emergency manuals were used during the majority of applicable clinical crises for many different event types, clinicians perceived emergency manual use to add value to crisis management, and the biggest reason for nonuse was nobody suggesting it despite sufficient time.
- Meaning: This study shows that emergency manuals are being used clinically in some settings, with perceived benefits for delivering patient care, extending the previous simulation-based evidence for emergency manuals as a patient safety tool.
See Article, p 1812
Anesthesiologists’ provision of optimal patient care during clinical crises can be lifesaving, yet even expert clinicians omit or delay key actions.1 The use of emergency manuals (EMs; also known as crisis checklists) has been mandated in military and commercial aviation for over 80 years.2–4 In health care, development of sets of cognitive aids for crises began several decades ago. However, robust design processes with simulation testing5 and widespread dissemination beyond Advanced Cardiac Life Support began only in the last decade.6 Globally, more than a half-million clinicians have downloaded perioperative EMs.6–8 Simulation-based studies showed that correct performance of key actions during crises increases dramatically when EMs are used, decreasing omission errors by up to 75%.4,9–14 Yet, there is a large gap between having EMs and using them effectively during actual crises.
There are few data about actual clinical EM uses and even fewer about how often they are not used when available and applicable. National surveys have assessed the success factors for EM implementation among institutions; single-institution surveys and case studies have investigated factors impacting their clinical use.4,15–21 These studies have not measured in what fraction of applicable cases EMs are used, nor probed in depth for when, why, or how they are or are not used. Understanding these factors should help translate the apparent benefits seen in simulation-based studies to clinical settings.
To address these issues, we performed a mixed-methods study of clinical crises from 2 early adopting academic medical centers, examining both use and nonuse of EMs.
METHODS
This study was conducted at Massachusetts General Hospital (MGH) and Stanford Hospital; each had begun implementing the Stanford Emergency Manual: Cognitive Aids for Perioperative Critical Events22 1 and 3 years before interviews, respectively. Both institutions had already provided systematic EM training, although neither had formal policies mandating EM use.23 These sites were chosen using critical case sampling, which is useful for studying in depth a small number of relevant sites.24 The study was approved by each site’s institutional review board (IRB), with only verbal consent required. We interviewed clinicians about their experiences in using or not using an EM during applicable clinical crises. This was a convergent mixed-methods design involving quantitative descriptive data (counts derived from structured, close-ended questions) and qualitative thematic analyses from interview transcripts (themes derived from semistructured, open-ended questions).25
Selection of Events
Using quality assurance reports, logs of emergency help calls, and crises noted in electronic medical records, we identified cases that involved any of the 25 crises included in the Stanford EM (criterion-based sampling) occurring from October 2014 to May 2015 at MGH and from July 2015 to May 2016 at Stanford.22,24 We aimed for ≥60 events to ensure thematic saturation, including within subgroups.26,27
Participant Interviews
Relevant clinicians were invited to participate by email, and those consenting underwent audio-recorded interviews lasting 30–60 minutes. Most interviews were in-person (3 from MGH were by phone due to scheduling conflicts). Two Stanford investigators jointly conducted all interviews: a primary interviewer (S.B.M., qualitative researcher) and a secondary interviewer (S.N.G.-F., anesthesiologist). They performed interviews within 4 weeks of identified events at Stanford and within 3 months at MGH.25,28 If a participant was at multiple events, each was discussed serially in the same session.
Interview Guide
We used expert consensus to develop the semistructured Interview Guide, drawing upon previously published work, authors’ expertise in qualitative research and implementation science, and input from the steering committee of the Emergency Manuals Implementation Collaborative.4,16,24,29 The Guide (Supplemental Digital Content, Appendix A, https://links.lww.com/AA/D123) was expanded from our previously published single-case qualitative study.21 The Guide was tested iteratively for understandability, completeness, and neutrality of questions via pilot interviews with 8 nonparticipant anesthesia professionals. Questions differed based on whether the EM was used in the event. When it was not used, participants also completed a retrospective step-by-step exercise, reading aloud from the applicable EM event and commenting on each item to identify actions performed, applicability, and any omissions or delays.
Coding Interview Data
Interview recordings were transcribed verbatim. The research team developed and iteratively refined a codebook, using a combined approach: initially deductive, that is, based upon previously existing theory, then building upon this framework with inductive additions of new categories, that is, derived from interview data.21,28 Once the codebook was stable, 2 researchers (1 author, M.M.D.L.C., and 1 research assistant [RA], J. Whittemore) tested inter-rater reliability and consistency (Cohen’s κ = 0.80).28 One researcher (M.M.D.L.C. or the RA) then coded each transcript using this codebook and searched each transcript for statements on potential negatives, including the words “negative,” “distract(ed),” and “harm(ful).” Our qualitative team’s analysis process explicitly sought disconfirming and “negative” data, brought different perspectives, resolved discrepancies, derived themes, and categorized findings.28–32
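For readers less familiar with the statistic, Cohen’s κ compares the observed agreement between 2 coders against the agreement expected by chance from each coder’s label frequencies. The sketch below is illustrative only; the labels are hypothetical and are not the study’s actual codebook categories.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa for two raters assigning one label per item."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement: sum over labels of the product of marginal frequencies.
    c1, c2 = Counter(rater1), Counter(rater2)
    p_e = sum(c1[label] * c2[label] for label in c1) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical coding of 4 transcript excerpts into 2 categories:
# observed agreement 0.75, chance agreement 0.5, so kappa = 0.5
print(cohens_kappa(["A", "A", "B", "B"], ["A", "B", "B", "B"]))  # → 0.5
```

Values near 0.80, as reported here, are conventionally interpreted as substantial to near-perfect agreement.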
Data Analysis
We counted (i) applicable events, (ii) event types, and (iii) EM uses and nonuses. Because some participants described multiple events, we also counted an independent subset of unique cases using only each participant’s first-described event.
Using codebook categories, we counted:
When the EM “was” used, its impacts on:
- Clinicians: “helpful” (improved clinician confidence, teamwork, or atmosphere), “neutral” (did not impact clinicians), and “unhelpful” (distracting or confusing).
- Delivery of patient care: “specific action improvement” (helped catch ≥1 omissions/errors or provided key details for diagnostic/treatment actions), “process improvement” (used early in guiding actions or later as a double-check), and “impediment” (prevented or delayed key actions).
When the EM was “not” used:
- Crisis too brief for EM to be used, as identified by the participant.
- Nobody suggested EM use even with sufficient time, as identified by the participant, subcategorized by whether the retrospective exercise (Supplemental Digital Content, Appendix A, https://links.lww.com/AA/D123) found omissions or delays in care.
RESULTS
Crisis Events From Participant Interviews
We conducted 80 crisis event interviews involving 69 unique patient cases with crises. Eleven cases were “duplicates,” where different participants discussed the same case (Figure 1). Of the 69 unique cases, 90% were in perioperative locations, including 47 (68%) in operating rooms (ORs), 2 (3%) in the preoperative area, 3 (4%) in the postanesthesia care unit, and 10 (14%) in out-of-OR anesthetizing locations. The remaining 7 (10%) were in other locations in the hospital, including labor and delivery and intensive care units. Several patients had evolving crises (eg, anaphylaxis to cardiac arrest), yielding a total of 83 categorized events described from these 69 unique cases (Figure 2).22
Figure 1.: Summary of participants and applicable crises. Summary of participants and patient cases with applicable crises, across 2 institutions, MGH and Stanford Hospital. MGH indicates Massachusetts General Hospital.
Figure 2.: Emergency manual use and nonuse by event type, categorized by ACLS and non-ACLS events. EM was used in 37 of the 69 unique cases (54%; 95% CI [41–66]). Some crises progressed to multiple event types, resulting in more events by type than unique cases. Other applicable cognitive aids were used during 4 events, such as ACLS cards or LAST. ACLS indicates Advanced Cardiac Life Support; CI, confidence interval; EM, emergency manual; LAST, Local Anesthetic Systemic Toxicity; PEA, pulseless electrical activity; SVT, supraventricular tachycardia; VF, ventricular fibrillation; VT, ventricular tachycardia.
We interviewed 57 of 62 identified anesthesia professionals with applicable crises (92% participation); some participants described multiple cases. Five anesthesia professionals declined to participate (8%), each stating they were too busy. Of those interviewed, 4 did not have applicable crises (6%), resulting in eligible interviews with a total of 53 anesthesia professionals: 23 from MGH and 30 from Stanford (27 attendings, 22 residents, and 4 certified registered nurse anesthetists) (Figure 1).
Mixed Quantitative and Qualitative Findings
The EM was used during 37 of the 69 unique cases (54%; 95% confidence interval [CI], 41–66). Pulseless electrical activity (PEA)/asystole had the highest EM use (67%, 14/21) (Figure 2). Calculating EM use within the independent subset of first-described events from each participant yielded 27 uses of 48 unique cases (56%; 95% CI, 41–71).
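The article does not state which binomial interval method produced the reported CIs. As a sketch only, the widely used Wilson score interval gives bounds close to the reported 41–66 for 37 EM uses in 69 unique cases:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """Wilson score 95% CI for a binomial proportion (z = 1.96 for 95%)."""
    p = successes / n
    denom = 1 + z * z / n
    center = (p + z * z / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
    return center - half, center + half

# EM use in 37 of 69 unique cases (54%); Wilson interval ≈ 42%–65%,
# close to the reported 95% CI of 41–66 (exact method not stated in the article).
low, high = wilson_ci(37, 69)
print(f"{low:.0%}-{high:.0%}")  # → 42%-65%
```

An exact (Clopper–Pearson) interval, another common choice, is slightly wider and matches the reported 41–66 more closely for these counts.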
The primary themes of impact were as follows: (a) on clinicians and (b) on delivery of patient care. For all subcategories below, thematic saturation was achieved within the 80 crisis event descriptions; that is, no novel themes emerged by the end of the analysis. For each theme, we present first the quantitative counts by high-level category and then organize qualitative descriptions within subthemes.
Impacts on Clinicians
Figure 3.: Impacts of EM uses on clinicians. A joint display of quantitative counts and qualitative descriptions with categorized impacts of EM uses on clinicians. Participant responses were categorized as “helpful” (improved clinician confidence, teamwork, or calm atmosphere), “neutral” (did not impact clinicians), and “unhelpful” (distracting or confusing). CRNA indicates certified registered nurse anesthetist; EM, emergency manual.
Using the EM helped anesthesia professionals by reducing individual stress or improving confidence in 35 (95%) events, improving teamwork in 27 (73%), and improving atmosphere in 17 (46%). Specific qualitative themes for each subcategory are described below and are presented jointly with quantitative data in Figure 3.
Stress and Confidence.
EM use was described as “a reboot for the brain,” “reduc[ing] the stress level,” and a reassuring “double check” of decisions and actions, all of which increased “confidence of treatment” for clinicians. The lower cognitive load was noted to allow participants, as event leaders, “more mental capacity” for situational awareness and team coordination, rather than attending to specific management details. Participants noted that teams were calmer and more efficient because the EM helped them prioritize, organize, and “remind about key tasks.”
Teamwork, Communication, and Potential Distraction.
Using the EM allowed for more effective teamwork and clearer communication. Specific ways EMs supported teamwork included a “reader” role and enabling “speaking up.” When a designated reader worked with a leader by reading aloud to the team, participants described the team being literally and figuratively “on the same page”: naming the event, establishing a shared mental model for priorities, and enabling the leader to coordinate their implementation. EM use enabled voicing of relevant ideas by team members. Residents and certified registered nurse anesthetists (CRNAs) especially expressed that EM use helped overcome perceived hierarchical barriers to speaking up, noting that having a locally approved reference validated their suggestions, adding importance and detail to their management ideas.
Of note, in 3 (8%) of 37 events, participants identified that use of an EM may have distracted from some aspects of team communication or prioritization, while noting that other aspects of use were helpful in these same cases (eg, for “teamwork overall”). Nonetheless, in Figure 3, these were all conservatively categorized as “unhelpful” for teamwork. Some details of these 3 cases are informative.
- i. In a case of delayed emergence, later attributed to stroke, the EM may have briefly distracted the team from the actual etiology, initially focusing them on ruling out common causes using laboratory tests and medication reversal. However, the participant noted that any delay was short, and the EM use helped them organize their full response, including stroke code activation. The computed tomography scan was performed <30 minutes after recognition of delayed emergence.
- ii. In a case of asystole in an out-of-OR setting, with rapid initiation of cardiopulmonary resuscitation (CPR), a junior anesthesia attending simultaneously led the code and read from the EM despite the presence of numerous helpers. This may have distracted from big-picture leader tasks. Dividing leader and reader roles was suggested by the participant as a better future approach. The participant also noted positives from EM use, including (a) specific task instructions to team members and (b) identifying change to shockable rhythm during pulse check, switching event pages, and sharing new priorities with team members. In particular, the participant emphasized that naming event changes aloud to the team was useful for both teamwork and delivery of patient care, and rare during previous crises.
- iii. In a case of PEA cardiac arrest during the prebypass portion of an open-heart case, use of the EM was initially helpful for task organization, medications, and diagnoses to consider (“the Hs & Ts”). When bypass began, management of “cardiac arrest” became moot, yet the EM reader continued to read aloud. The event leader (participant) then felt distracted and stopped the reader.
In each “distracting” case, as in all other uses of EMs, participants said they would use the EM in the future and added that they could now do so more effectively.
Atmosphere.
Participants reported that the crisis event atmosphere was calmer when using the EM. They emphasized that typically crises are chaotic, with multiple people working in a disorganized manner and speaking over each other. When using the EM, the chaos and noise in the room decreased, with team members sharing a “mental model,” knowing which event type was suspected, and going through priorities in a shared order. This calm, organized process was noted especially when they were listening to a designated “reader.” The EM reader and leader combination provided a structure for discussing and assigning both diagnostic and treatment actions, quieting the room.
Impacts on Delivery of Patient Care
Participants reported that EM use had positive impacts on delivery of patient care (Figure 4). They described specific action improvements (22/37, 59%), including catching omissions/errors or providing key details for diagnostic/treatment actions, and process improvements (15/37, 41%), including early use of EM to guide crisis management or later double-checking that actions were completed. Participants described no impediments to delivery of patient care.
Figure 4.: Impacts of EM uses on delivery of patient care. A joint display of quantitative counts and qualitative descriptions about the categorized impacts of EM uses on delivery of patient care. Participant responses were categorized as “specific action improvement” (helped catch ≥1 omissions/errors or provided key details for diagnostic or treatment actions), “process improvement” (used early in guiding correct actions or later as a double-check), and “impediment” (prevented or delayed key actions). CRNA indicates certified registered nurse anesthetist; EM, emergency manual; PEA, pulseless electrical activity; STEMI, ST-segment elevation myocardial infarction.
Regarding design, participants mentioned that consistent layouts, phrasings, and other design aspects supported effective use. For example, inclusion of conditional statements (“If ___, Go to ___”) enabled rapid cognitive and page switching for dynamic situations. Bolding of key medications facilitated easy scanning. Some participants suggested improving tabbing to get to a relevant event more quickly (and that was incorporated into the next EM version).
Specific Action Improvements.
Participants described that their team began immediate actions, then consulted the EM, realized that important diagnostic and/or therapeutic actions had been missed, and then performed them. Specifically, this included (1) broadening differential diagnoses, (2) guiding key treatment details, and/or (3) mobilizing necessary help. (1) For diagnoses, participants reported avoiding fixation errors, when treatment for the initial diagnosis proved ineffective and the EM provided other relevant diagnoses to consider, with details about how to rule out or treat them. (2) For treatments, EM use helped teams to identify unintentionally omitted or incorrect actions or forgotten details for known actions (eg, medication dosages), each noted as difficult to remember when under stress even when known—what is termed inert knowledge.33 (3) For mobilizing help, the EM phone list of local emergency contacts was described as useful in some cases to facilitate rapidly mobilizing the right response teams.
Process Improvements.
These were identified when participants described potentially helpful EM use but without any specific actions identified as missing in that event. Early use of the EM included guiding crisis management; for example, in a PEA cardiac arrest, after CPR was initiated, the EM guided review of the relevant Hs & Ts, with details to rule out or treat each. Late use of the EM included serving as a double-check after the diagnosis was made and treatment administered, with all actions already completed.
Impediments.
No participants reported impediments to patient care from EM use, even in the 3 potential “distraction” events. Some participants responded to explicit questions seeking negative impacts by stating “just the opposite,” adding that EM use “saved” their patient’s life.
Nonuse of EM
Figure 5.: Categorizing emergency manual nonuses. A joint display of quantitative counts and qualitative descriptions categorizing emergency manual (EM) nonuses. Subcategories are as identified by participants. EM indicates emergency manual; N/A, not applicable; STEMI, ST-segment elevation myocardial infarction; TEE, transesophageal echocardiogram.
The EM was not used in 32 unique patient cases (46%; 95% CI, 34–59). Stated reasons for nonuse comprised 2 categories: (1) crisis too brief for EM to be used (10 crises) and (2) nobody suggested EM use, even with sufficient time (22 crises). For the first category of “too brief” crises, all clinicians were involved in necessary immediate actions, with the crisis resolved before additional helpers arrived. For the second category of longer nonuse crises, participants discussed multiple, often overlapping, reasons, including: the event leader did not assign a reader role, no other team member spoke up to suggest the EM, and/or the event leader (participant) said she/he knew all key management points. However, in 18 of the 22 “nobody suggested EM use” crises, the retrospective exercise led the participant to self-identify additional relevant diagnostic and/or treatment detail(s) that could have been helpful during the crisis (Figure 5).
Impacts on Intended Future Use
Of those who used an EM, all reported wanting to use it again for future applicable events. Some participants who initially considered themselves “against EM” explicitly stated that their views had changed after observing EM impacts during a clinical use. Many participants commented that each use, simulated or clinical, helps the next use.
DISCUSSION
Amidst mounting simulation-based evidence for positive impacts of EMs on diagnostic and treatment actions, a common concern has been “Will these tools actually get used in clinical settings?”4,9–14 Survey studies provide some evidence of EM clinical use15,16 but have limitations that prevent determination of frequency of use. Because we minimized participation bias and collected the denominator of applicable crises, our study provides a reasonable data point for frequency of EM use during applicable crises at 2 early adopting institutions. This study also identifies characteristics of EMs that made their use valuable to clinicians. Our analysis of clinical nonuses for applicable crises addresses some significant gaps in the understanding of barriers to EM use during actual crises.
EMs were used in about two-thirds of relevant crises (37/59, 63%; excludes 10 crises that resolved too quickly for EM use to be relevant). While 100% use during relevant crises might be ideal, we do not yet know the conditions required to achieve such adherence (nor whether complete adherence is indeed desirable, as there will be some situations where EM use could detract from optimal performance, with both when and how being vital factors). Despite significant previous implementation efforts at the 2 study institutions, the magnitude of behavioral, political, and cultural change necessary to enable close to uniform EM use is large, takes time, and is probably not fully in place even at these sites.16
Distractions from teamwork were found for only 3 of 37 EM uses, even with explicit probing in both data collection and analysis phases for any negative or distracting impacts. In these 3 events, the negative impacts on communication were small, were often offset by other EM benefits, and had no reports of impediments to patient care.
Previous studies suggest a positive impact of EM and cognitive aid use on clinician decision-making and teamwork, in contexts of simulated events13,34 and in several case reports.18–21 Our data support and extend these findings in a large series of clinical crises at 2 academic medical centers. This study builds upon our framework for understanding impacts of EM use, including supporting and synergizing with robust teamwork skills.21 No checklist, cognitive aid, or EM will ever replace robust clinical and leadership skills. Rather, our data and previous literature show that using EMs effectively is one of multiple important aspects of teamwork and dynamic decision-making that can synergistically enable better care.35–41
Our data also support previous publications in suggesting that institutions should provide training in why, when, and how to use EMs.4,16,17 Experiencing EM use, whether simulated or clinical, was described as powerful for demonstrating EM value and understanding how best to use them. Even skeptical anesthesia professionals reported their intent in the future to use EMs when applicable, supporting that “use begets use.”21
These 2 academic centers made extensive implementation efforts before this study; previous publications address the various implementation science aspects of dissemination, adoption, and effective use of EMs.4,17,23,35 In this study, 7 uses of the EM during crises were described from clinical settings other than those specified for EM implementation. Without dedicated EMs deployed in these settings, clinicians successfully used pocket or mobile electronic versions, often even without teammates trained in EM use. Such clinician-initiated spread beyond the intended implementation contexts is an indication of diffusion, adoption, and, by inference, benefit.
Some subthemes in our results warrant further study, for example, “how” EMs enable improved teamwork in clinical settings, and how best to spread these practices. Three examples are as follows:
- A “reader” role was perceived as helpful, supporting and extending previous simulation-based literature.11 How can the use of a reader be widely adopted?
- Using an EM promoted speaking up about safety concerns in some cases. Speaking up is a behavior associated with improved outcomes across industries, with diverse efforts to encourage voicing concerns.42–44 EMs, like surgical time-outs, may especially empower team members of lower power status. We do not know how strong that effect is, nor how it interacts with other aspects of nontechnical crisis management skills and team cultures.35,36,38–40
- Why did nobody suggest EM use in 22 of 32 EM nonuse events? When prompted by the retrospective exercise for crises with EM nonuse, participants, including those who originally did not think the EM would have been useful in their event, often identified additional tips they thought could have been helpful (18/22; Figure 5). No participants reported a team member suggesting EM use and the leader declining; in contrast, when team members asked about using the EM, leaders integrated EM use into crisis management. Several participants emphasized that it is hard for leaders to remember to assign an EM reader, or even to think to use an EM, when they are under stress and used to jumping directly into crisis management. These 3 examples are relevant for supporting impactful future EM use.
Resulting from this project, MGH developed a rule of thumb that the first crisis helper who is not needed for immediate hands-on tasks should ask the leader “Do you want me to get the emergency manual?” and clarify which event applies. Both institutions have since expanded interprofessional team trainings with integrated EM modules. Quality improvement leaders at each institution have begun to ask during various formal discussions of crises, “Was an EM used (and if not, why not)?” This question may help raise EM awareness and encourage use. Neither institution has formal policies regarding the use of EMs during crises. We, the authors, do not advocate “mandatory use” because we believe that EMs will never include all possible crises; there is value in maintaining clinician judgment on when, how, and whether to use an EM even while raising awareness and enabling use; and creating behavior change is more likely through culture and training than by policy mandates.
Many institutions have begun implementing EMs, enabled recently by a cost-free implementation toolkit created through research funded by the Agency for Healthcare Research and Quality (AHRQ).23,45,46 The current study fills knowledge gaps by providing more robust data on clinical use rates with denominators, drivers of nonuse, and deeper qualitative understanding of how clinical EM use enables improved crisis management. We did not attempt to gather information about patient outcomes given many confounders and an expected small sample size, which together made meaningful analyses of patient outcomes unlikely. Future large EM studies might consider a stepped-wedge design to directly assess impacts on patient morbidity/mortality, with staggered-start institutional implementations.
LIMITATIONS
First, due to interviewer travel constraints, for some MGH crises, up to 3 months passed between crisis and interview, with potential for memory decay or shaping.47 Considering this potential downside, we opted for in-person interviews to enhance participation and richness of interviews.28 Despite the time passed, participants vividly described the events and EM use details.48 There is evidence that recollections of salient, basic facts, such as whether an EM was used, are generally accurate, even months later; this is in contrast to memory of details like a crime suspect’s appearance, which tends toward existing cultural biases.48,49 We maximized accuracy of descriptions and minimized biases by interviewing in low-stakes contexts, focusing on both use and nonuse of EMs during crises, and using neutral language.50,51
Second, because some participants described >1 crisis, there is potential for nonindependence of results. To address this, we calculated EM use within the independent subset of first-described events from each participant, yielding almost the same proportion of EM use as for all events (56% vs 54%, with similar 95% CIs). Because a few participants described >1 EM use (none >2), there remains a small potential for bias due to nonindependence of described impacts.
Third, because S.N.G.-F. was known to be involved in the development of the EM used, social-acceptability bias, whereby participants provide more “positive” recollections of EM use, is possible. To minimize this impact, S.B.M., a nonclinician qualitative researcher, was the primary interviewer for all events. We explicitly encouraged reporting of EM nonuse and of EM faults or impediments to optimal patient care.
Fourth, the sites of this study were large academic hospitals, one of which is the source of the EM used. Both study sites had systematically implemented EMs, including training for clinicians about why and how to use them. Both were identified in a previous implementation study as “high success implementers,” making them “bright spots” to learn from, although not necessarily typical.17 The detailed implementation process used at MGH and a similar process at Mayo Clinic have been described.23,45 One might conjecture that academic centers are more inclined to adopt innovations. It is perhaps equally likely that their many expert faculty would be less likely to welcome and use cognitive aids. Indeed, there is evidence from a large US implementation study that EM implementation successes spanned institutional size, type, and geography.17 There was a direct dose–response effect between the extensiveness of the implementation (number of implementation steps done) and the likelihood of EM use during applicable crises, across institutional sizes. Smaller institutions, many with private anesthesia practices, actually had greater EM implementation successes, potentially because of nimbler and less formal processes for adoption. This suggests potential for effective clinical EM use beyond large academic medical centers.
CONCLUSIONS
This study provides evidence that EMs are being used in ORs and beyond, during many applicable crises at 2 large academic medical centers. The interviewed anesthesia professionals perceived EMs to add value, with the perceived positive impacts strongly outweighing the perceived negative impacts in every category. Information from clinical nonuses of EMs during applicable crises in this study enhances the field’s overall understanding of barriers to use. Together, these data support greater use of EMs in perioperative settings.
ACKNOWLEDGMENTS
We thank all of the anesthesia professionals at both institutions who participated in this study, as well as the leadership at Massachusetts General Hospital and Stanford Hospital for being open to having these processes studied. Specifically, for enabling this study, we thank Vanessa Kurzweil, Kelsey McRichards, and Vanessa Rao (all at study time from Massachusetts General Hospital [MGH] quality improvement team), all MGH and Stanford OR leaders and quality teams, and Dr Elena Brandford (Stanford medical student, now anesthesiology resident) for their help with applicable event identification; Professor Andrea Nevedal, Stanford qualitative research expert, for her input on interview guide and review of qualitative research design; and Jessica Whittemore (research assistant) for qualitative coding work.
DISCLOSURES
Name: Sara N. Goldhaber-Fiebert, MD.
Contribution: This author helped lead the study design, collect and analyze the data, and prepare the manuscript.
Conflicts of Interest: S. N. Goldhaber-Fiebert is a founding member of the Emergency Manuals Implementation Collaborative (EMIC) steering committee, and of the Stanford Anesthesia Cognitive Aid Group, that developed the Stanford Emergency Manual: Cognitive Aids for Perioperative Critical Events.
Name: Sylvia Bereknyei Merrell, DrPH.
Contribution: This author helped design the study, collect and analyze the data, and prepare the manuscript.
Conflicts of Interest: None.
Name: Aalok V. Agarwala, MD, MBA.
Contribution: This author helped collect the data and significantly edit the manuscript.
Conflicts of Interest: A. V. Agarwala is a member of EMIC steering committee.
Name: Monica M. De La Cruz, MPH.
Contribution: This author helped design the study, analyze the data, and prepare the manuscript.
Conflicts of Interest: None.
Name: Jeffrey B. Cooper, PhD.
Contribution: This author helped design the study, analyze the data, and prepare the manuscript.
Conflicts of Interest: None.
Name: Steven K. Howard, MD.
Contribution: This author helped design the study, analyze the data, and significantly edit the manuscript.
Conflicts of Interest: S. K. Howard is a founding member of the EMIC steering committee, and of the Stanford Anesthesia Cognitive Aid Group, which developed the Stanford Emergency Manual: Cognitive Aids for Perioperative Critical Events.
Name: Steven M. Asch, MD.
Contribution: This author helped design the study, analyze the data, and significantly edit the manuscript.
Conflicts of Interest: None.
Name: David M. Gaba, MD.
Contribution: This author helped design the study, analyze the data, and prepare the manuscript.
Conflicts of Interest: D. M. Gaba is a founding member of the EMIC steering committee, and of the Stanford Anesthesia Cognitive Aid Group, which developed the Stanford Emergency Manual: Cognitive Aids for Perioperative Critical Events.
This manuscript was handled by: Richard C. Prielipp, MD, MBA.
REFERENCES
1. McEvoy MD, Field LC, Moore HE, Smalley JC, Nietert PJ, Scarbrough SH. The effect of adherence to ACLS protocols on survival of event in the setting of in-hospital cardiac arrest. Resuscitation. 2014;85:82–87.
2. Dismukes RK, Goldsmith TE, Kochan JA. Effects of Acute Stress on Aircrew Performance: Literature Review and Analysis of Operational Aspects. Moffett Field, CA: National Aeronautics and Space Administration Ames Research Center; 2015.
3. Helmreich RL, Merritt AC, Wilhelm JA. The evolution of Crew Resource Management training in commercial aviation. Int J Aviat Psychol. 1999;9:19–32.
4. Goldhaber-Fiebert SN, Howard SK. Implementing emergency manuals: can cognitive aids help translate best practices for patient care during acute events? Anesth Analg. 2013;117:1149–1161.
5. Burian BK, Clebone A, Dismukes K, Ruskin KJ. More than a tick box: medical checklist development, design, and use. Anesth Analg. 2018;126:223–232.
6. Hepner DL, Arriaga AF, Cooper JB, et al. Operating room crisis checklists and emergency manuals. Anesthesiology. 2017;127:384–392.
7. Huang J. Implementation of emergency manuals in China. Anesth Patient Saf Found APSF Newsl. 2016;31:43–45.
8. Emergency Manuals Implementation Collaborative (EMIC). Available at: www.emergencymanuals.org/. Accessed May 18, 2020.
9. Harrison TK, Manser T, Howard SK, Gaba DM. Use of cognitive aids in a simulated anesthetic crisis. Anesth Analg. 2006;103:551–556.
10. Neal JM, Hsiung RL, Mulroy MF, Halpern BB, Dragnich AD, Slee AE. ASRA checklist improves trainee performance during a simulated episode of local anesthetic systemic toxicity. Reg Anesth Pain Med. 2012;37:8–15.
11. Burden AR, Carr ZJ, Staman GW, Littman JJ, Torjman MC. Does every code need a “reader?” improvement of rare event management with a cognitive aid “reader” during a simulated emergency: a pilot study. Simul Healthc. 2012;7:1–9.
12. Arriaga AF, Bader AM, Wong JM, et al. Simulation-based trial of surgical-crisis checklists. N Engl J Med. 2013;368:246–253.
13. Marshall SD, Mehra R. The effects of a displayed cognitive aid on non-technical skills in a simulated “can’t intubate, can’t oxygenate” crisis. Anaesthesia. 2014;69:669–677.
14. Marshall S. The use of cognitive aids during emergencies in anesthesia: a review of the literature. Anesth Analg. 2013;117:1162–1171.
15. Neily J, DeRosier JM, Mills PD, Bishop MJ, Weeks WB, Bagian JP. Awareness and use of a cognitive aid for anesthesiology. Jt Comm J Qual Patient Saf. 2007;33:502–511.
16. Goldhaber-Fiebert SN, Pollock J, Howard SK, Bereknyei Merrell S. Emergency manual uses during actual critical events and changes in safety culture from the perspective of anesthesia residents: a pilot study. Anesth Analg. 2016;123:641–649.
17. Alidina S, Goldhaber-Fiebert SN, Hannenberg AA, et al. Factors associated with the use of cognitive aids in operating room crises: a cross-sectional study of US hospitals and ambulatory surgical centers. Implement Sci. 2018;13:50.
18. Ramirez M, Grantham C. Crisis checklists for the operating room, not with a simulator. J Am Coll Surg. 2012;215:302–303.
19. Ranganathan P, Phillips JH, Attaallah AF, Vallejo MC. The use of cognitive aid checklist leading to successful treatment of malignant hyperthermia in an infant undergoing cranioplasty. Anesth Analg. 2014;118:1387.
20. Eberl S, Koers L, Van Haperen M, Preckel B. Cognitive aids: “a must” for procedures performed by multidisciplinary sedation teams outside the operation room? BMJ Case Rep. 2017;2017:bcr-2017-221645.
21. Bereknyei Merrell S, Gaba DM, Agarwala AV, et al. Use of an emergency manual during an intraoperative cardiac arrest by an interprofessional team: a Positive-Exemplar Case Study of a new patient safety tool. Jt Comm J Qual Patient Saf. 2018;44:477–484.
22. Stanford Anesthesia Cognitive Aid Group*. Emergency Manual: Cognitive Aids for Perioperative Critical Events. Creative Commons BY-NC-ND. V3.1. 2016. *Core contributors in random order: Howard SK, Chu LK, Goldhaber-Fiebert SN, Gaba DM, Harrison TK. Available at: http://emergencymanual.stanford.edu. Accessed May 18, 2020.
23. Agarwala AV, McRichards LK, Rao V, Kurzweil V, Goldhaber-Fiebert SN. Bringing perioperative emergency manuals to your institution: a “How To” from concept to implementation in 10 steps. Jt Comm J Qual Patient Saf. 2019;45:170–179.
24. Patton MQ. Qualitative Research &amp; Evaluation Methods: Integrating Theory and Practice. 4th ed. Los Angeles, CA: SAGE Publications; 2014.
25. Creswell JW. Research Design: Qualitative, Quantitative, and Mixed Methods Approaches. 4th ed. Thousand Oaks, CA: SAGE Publications; 2014.
26. Guest G, Bunce A, Johnson L. How many interviews are enough? An experiment with data saturation and variability. Field Methods. 2006;18:59–82.
27. Mason M. Sample size and saturation in PhD studies using qualitative interviews. Forum Qual Soc Res. 2010;11. Available at: http://www.qualitative-research.net/index.php/fqs/article/view/1428. Accessed May 18, 2020.
28. Creswell JW. 30 Essential Skills for the Qualitative Researcher. Thousand Oaks, CA: SAGE Publications; 2015.
29. Rossman GB, Rallis SF. An Introduction to Qualitative Research: Learning in the Field. Thousand Oaks, CA: SAGE Publications; 2016.
30. MacQueen KM, McLellan-Lemal E, Bartholow K, Milstein B. Team-based codebook development: structure, process, and agreement. In: Guest G, MacQueen KM, eds. Handbook for Team-Based Qualitative Research. Lanham, MD: AltaMira Press; 2008:119–136.
31. Miles MB, Huberman AM, Saldaña J. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA: SAGE Publications; 2014.
32. Sawatsky AP, Ratelle JT, Beckman TJ. Qualitative research methods in medical education. Anesthesiology. 2019;131:14–22.
33. Renkl A, Mandl H. Inert knowledge: analyses and remedies. Educ Psychol. 1996;31:115.
34. Marshall S. The Effects of Cognitive Aids on Formation and Functioning of Teams in Medical Emergencies [PhD thesis]. University of Queensland; 2015. Available at: https://core.ac.uk/download/pdf/43365470.pdf. Accessed May 18, 2020.
35. Goldhaber-Fiebert SN, Macrae C. Emergency manuals: how quality improvement and implementation science can enable better perioperative management during crises. Anesthesiol Clin. 2018;36:45–62.
36. Leape LL. The checklist conundrum. N Engl J Med. 2014;370:1063–1064.
37. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
38. Luedi MM, Doll D, Boggs SD, Stueber F. Successful personalities in anesthesiology and acute care medicine: are we selecting, training, and supporting the best? Anesth Analg. 2017;124:359–361.
39. Gaba DM, Fish KJ, Howard SK, Burden A. Crisis Management in Anesthesiology. 2nd ed. Philadelphia, PA: Elsevier Saunders; 2015.
40. Marshall SD, Touzell A. Human factors and the safety of surgical and anaesthetic care. Anaesthesia. 2020;75(suppl 1):e34–e38.
41. Dixon-Woods M, Baker R, Charles K, et al. Culture and behaviour in the English National Health Service: overview of lessons from a large multimethod study. BMJ Qual Saf. 2014;23:106–115.
42. Nembhard IM, Edmondson AC. Making it safe: the effects of leader inclusiveness and professional status on psychological safety and improvement efforts in health care teams. J Organ Behav. 2006;27:941–966.
43. Morrison EW, Rothman NB. Silence and the dynamics of power. Voice Silence Organ. 2009;6:111–134.
44. Edmondson AC. The Fearless Organization: Creating Psychological Safety in the Workplace for Learning, Innovation, and Growth. Hoboken, NJ: John Wiley &amp; Sons; 2018.
45. Gleich SJ, Pearson ACS, Lindeen KC, et al. Emergency manual implementation in a large academic anesthesia practice: strategy and improvement in performance on critical steps. Anesth Analg. 2019;128:335–341.
46. Ariadne Labs, Emergency Manuals Implementation Collaborative (EMIC), and Stanford Anesthesia Cognitive Aid Group. The Operating Room Emergency Checklist Implementation Toolkit. 2017. Available at: https://www.implementingemergencychecklists.org/. Accessed May 18, 2020.
47. Bernstein DM, Loftus EF. How to tell if a particular memory is true or false. Perspect Psychol Sci. 2009;4:370–374.
48. Brown R, Kulik J. Flashbulb memories. Cognition. 1977;5:73–99.
49. Sara SJ, Bouret S. Orienting and reorienting: the locus coeruleus mediates cognition through arousal. Neuron. 2012;76:130–141.
50. Baddeley AD. Human Memory: Theory and Practice. East Sussex, UK: Psychology Press Ltd; 1997.
51. Pinker S. How the mind works. Ann N Y Acad Sci. 1999;882:119–127.