Emergency Manual Uses During Actual Critical Events and Changes in Safety Culture From the Perspective of Anesthesia Residents: A Pilot Study

Goldhaber-Fiebert, Sara N. MD; Pollock, Justin MD; Howard, Steven K. MD; Bereknyei Merrell, Sylvia DrPH, MS

doi: 10.1213/ANE.0000000000001445
Patient Safety: Original Clinical Research Report

BACKGROUND: Emergency manuals (EMs), context-relevant sets of cognitive aids or crisis checklists, have been used in high-hazard industries for decades, although this is a nascent field in health care. In the fall of 2012, Stanford clinically implemented EMs, including hanging physical copies in all Stanford operating rooms (ORs) and training OR clinicians on the use of, and rationale for, EMs. Although simulation studies have shown the effectiveness of EMs and similar tools when used by OR teams during crises, there are few data on clinical implementations and uses. In a subset of clinical users (ie, anesthesia residents), the objectives of this pilot study were (1) to assess perspectives on local OR safety culture regarding cognitive aid use before and after a systematic clinical implementation of EMs, albeit in the context of long-standing resident simulation trainings; and (2) to describe early clinical uses of EMs during critical events.

METHODS: Surveys collecting both quantitative and qualitative data were used to assess clinical adoption of EMs in the OR. A pre-implementation survey was e-mailed to Stanford anesthesia residents in mid-2011, followed by a post-implementation survey to a new cohort of residents in early 2014. The post-implementation survey included pre-implementation survey questions for exploratory comparison and additional questions for mixed-methods descriptive analyses regarding EM implementation, training, and clinical use during critical events since implementation.

RESULTS: Response rates were similar for the pre- and post-implementation surveys, 52% and 57%, respectively. Comparing post- versus pre-implementation surveys in this pilot study, more residents agreed or strongly agreed that “the culture in the ORs where I work supports consulting a cognitive aid when appropriate” (73.8%, n = 31 vs 52.9%, n = 18, P = .0017), and residents chose more types of anesthesia professionals that “should use cognitive aids in some way,” including fully trained anesthesiologists (z = −2.151, P = .0315). Fifteen months after clinical implementation of EMs, 19 respondents (45%) had used an EM during an actual critical event, and 15 of these (78.9%) agreed or strongly agreed that “the EM helped the team deliver better care to the patient” during that event, with the rest neutral. We present qualitative data for 16 of the 19 EM clinical use reports from free-text responses within the following domains: (1) triggering EM use, (2) reader role, (3) diagnosis and treatment, (4) patient care impact, and (5) barriers to EM use.

CONCLUSIONS: Since Stanford’s clinical implementation of EMs in 2012, many residents self-report successful use of EMs during clinical critical events. Although these reports all come from a pilot study at a single institution, they serve as an early proof of concept for the feasibility of clinical EM implementation and use. Larger, mixed-methods studies will be needed to better understand emerging facilitators and barriers and to determine generalizability.

From the Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, Stanford, California; Anesthesiology and Perioperative Care Service, VA Palo Alto Health Care System, Palo Alto, California; and Department of Medicine, Stanford University School of Medicine, Stanford, California.

Accepted for publication May 6, 2016.

Funding: Departmental.

The authors declare no conflicts of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal’s website.

This report was previously presented, in part, as an abstract at the American Society of Anesthesiologists (ASA) annual meeting, 2014.

Reprints will not be available from the authors.

Address correspondence to Sara N. Goldhaber-Fiebert, MD, Department of Anesthesiology, Perioperative and Pain Medicine, Stanford University School of Medicine, 300 Pasteur Dr, Room H3589, Stanford, CA 94305. Address e-mail to saragf@stanford.edu.

Simulation studies show that operating room (OR) teams deliver known best practices during critical events more often and more efficiently when using emergency manuals (EMs), context-relevant sets of cognitive aids such as crisis checklists.1–5 These studies resonate with the human factors observation that, regardless of experience, humans do not retrieve information optimally from long-term memory during high-stress situations.6 High-hazard industries, such as aviation, integrate EMs along with other crisis resource management teamwork and decision-making skills into their professional trainings and require appropriate use of EMs during critical events.7 There is early evidence suggesting that the team communication aspects of crisis management are also positively impacted by the use of EMs.8,9

In health care, critical events can be lethal, and rapid performance of key actions can save lives.10 Even when professionals possess the correct knowledge, however, this knowledge can remain “inert” when needed,11,12 and stress can increase the incidence of omission errors.13,14 Over time, EMs are increasingly being adopted and implemented in health care settings as valuable resources.15 Targeting the gaps in delivering evidence-based management during critical events, simulation-based crisis training at Stanford and elsewhere emphasizes the effective use of cognitive aids integrated with other crisis resource management skills.16 Yet, before institutional clinical implementation of EMs, resident simulation training did not necessarily translate to resident clinical use of such resources during real critical events.12 Although they were convinced that EMs positively impacted team management actions during simulated critical events, multiple residents had expressed reluctance to use even personal “pocket” copies of an EM during clinical care, out of concern about being judged negatively by some anesthesia attendings or OR team members (unpublished data from simulation debriefs and personal communications with multiple residents, 2009–2011). Previous safety culture research suggests that clinicians’ (or, more broadly, professionals’) behavior, such as speaking up, is influenced by their own perceptions of safety culture and that frontline workers express more concerns than senior managers,17 so it is relevant to study resident perceptions even when their attendings’ perceptions may be discordant.

The Stanford Anesthesia Cognitive Aid Group designed, simulation-tested, and clinically implemented the Stanford Emergency Manual for perioperative critical events.18 This EM has been downloaded more than 20,000 times and is available cost-free. Iterative versions integrate clinical user feedback, as well as ongoing simulation testing and literature reviews. More broadly, EM use in health care is increasing, with several other perioperative EMs also freely available. Links to various EMs, as well as training and implementation resources, are available from the Emergency Manuals Implementation Collaborative.19

As explained in the Consolidated Framework for Implementation Research,20 many factors influence whether effective health care innovations actually reach patients.21 For EMs, there already are considerable data supporting their effectiveness in improving clinician actions during simulated crises. From our review of the literature to date, however, published reports of clinical EM implementation or use are limited to the following: a 2007 national survey of EM use by Neily et al22 at Veterans Affairs (VA) hospitals, curricular materials, 2 case reports, and the impact of an OR staff training program on familiarity with EMs.23–27

To begin to address this gap, we surveyed anesthesia residents at our institution: (1) to assess local OR safety culture regarding cognitive aid use before and after a systematic clinical implementation of perioperative EMs; and (2) to describe early clinical uses of EMs. Terminology: throughout this article, we use 2 overlapping terms. The broader term, cognitive aid, has been used for decades, locally and in the health care simulation literature, to denote anything that helps professionals to remember, determine, or act upon key information. An emergency manual (EM) is a context-relevant set of cognitive aids or crisis checklists, a term derived from a similar aviation term. The term EM has spread within health care since the formation of the Emergency Manuals Implementation Collaborative in 2012.

METHODS

The Stanford University School of Medicine Institutional Review Board approved this study. Answering the survey indicated consent to participate. To protect the confidentiality of residents, we did not collect identifying information from respondents, nor were any patient identifiers collected for the described critical events.

In the fall of 2011, before Stanford clinical implementation of EMs, we sent surveys to all residents in the Department of Anesthesiology, Perioperative and Pain Medicine. The survey was created by content experts (S.N.G.-F., S.K.H.) and survey-design experts, using a modified Delphi method and consensus process, following survey development principles.28,29 The survey underwent multiple iterations and pre-launch pilot testing with nonparticipant anesthesiologists.

The intent of conducting pre- and post-implementation study surveys at 1 early-adopting institution was to capture pilot data for hypothesis generation about interactions between EM clinical implementation and patient safety culture, and therefore, we did not perform an a priori sample size calculation. Rather, we used a convenience sample, targeting all current anesthesia residents at each survey time point.

The surveys were distributed to resident e-mail lists using SurveyMonkey software (SurveyMonkey Inc, Palo Alto, CA), with 2 reminders at 1-week intervals. The pre-implementation survey included 15 questions assessing attitudes regarding: (1) the general value of cognitive aids for critical events; (2) use of cognitive aids during critical events by categories of anesthesia professionals, including self-use; and (3) acceptability of cognitive aid use within local OR safety culture.

At the time of the 2011 pre-implementation survey, Stanford anesthesia residents were already familiar with and were encouraged to use cognitive aids during simulation sessions. In fall 2012, Stanford clinical implementation of EMs included leadership buy-in and interdisciplinary planning, physically hanging EMs in all Stanford ORs, ensuring electronic accessibility, and training other OR clinicians on the use of, and rationale for, EMs. Implementation and training concepts, as well as practical details, are described further in previous publications, including the 4 vital elements shown in Figure 1.4,25–27

Figure 1

In early 2014, 15 months after clinical implementation of EMs, we distributed a post-implementation survey to a new cohort of Stanford anesthesia residents. We used Qualtrics software (Qualtrics, Provo, UT) to electronically distribute the survey to all subjects with 2 reminders sent at 1-week intervals. For comparison purposes across the 2 time points, question order and wording were retained as much as possible and only modified to account for survey time-point context post-implementation. In addition, we added 11 questions (26 questions total) to the post-implementation survey to assess EM training effectiveness, barriers to EM use, types of EM use, and EM use during actual critical events. See Figure 2 for implementation and survey timeline, and Supplemental Digital Content 1, Appendix A, http://links.lww.com/AA/B445 and Supplemental Digital Content 2, Appendix B, http://links.lww.com/AA/B446 for complete surveys.

Figure 2

Pre- and post-implementation survey questions used restricted response formats, including 5-point Likert scales (strongly disagree to strongly agree), binary (no, yes), and “select all that apply.” In addition, for respondents who reported using the EMs during actual critical events, free-text questions asked how the EM was used, facilitators and barriers to EM use, and any impact the EM had on patient management.

Although this was a hypothesis-generating pilot study, we sought to determine whether differences existed in respondents’ perceptions before and after clinical implementation to inform future study designs. For statistical comparison of the pre- and post-implementation survey responses, we used STATA SE 12.1 software (StataCorp, College Station, TX). To assess statistical significance in post- versus pre-implementation survey comparisons, we used the Wilcoxon rank sum test for independent samples for Likert scale responses, as well as for composite scores calculated from binary responses within a single question domain. For additional post-implementation descriptive data, we report numbers and percentages of survey respondents and confidence intervals. We performed Spearman rank correlations and post hoc bootstrapping (1000 replications) with 99% confidence intervals to assess the relationships between residents’ EM use during a clinical critical event and the following other types of use, each rated on an ordinal scale (never, yearly, monthly, weekly, daily): self-review, intraoperative educational resource, “just in time” review before a patient case, and reviewing the EM after a critical event.
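As an illustration of this analysis pipeline, the sketch below shows how the same tests could be run in Python with NumPy and SciPy (the study itself used Stata); the response arrays, sample sizes, and variable names are hypothetical placeholders rather than study data.

```python
# Minimal analysis sketch, assuming Likert responses coded 1-5 and ordinal exposure
# frequencies coded 0 (never) to 4 (daily). All arrays below are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Wilcoxon rank-sum test for independent samples: post- vs pre-implementation Likert responses.
pre_likert = np.array([3, 4, 2, 4, 3, 5, 4, 3])
post_likert = np.array([4, 5, 4, 5, 3, 5, 4, 4])
z_stat, p_val = stats.ranksums(post_likert, pre_likert)
print(f"Wilcoxon rank-sum: z = {z_stat:.4f}, P = {p_val:.4f}")

# Spearman rank correlation: EM use during a critical event (0/1) vs ordinal self-review frequency.
used_in_event = np.array([0, 1, 1, 0, 1, 0, 1, 0, 1, 1])
self_review_freq = np.array([0, 2, 3, 1, 2, 0, 3, 1, 4, 2])
rho, p_rho = stats.spearmanr(used_in_event, self_review_freq)
print(f"Spearman rs = {rho:.4f}, P = {p_rho:.4f}")

# Post hoc bootstrap (1000 replications): 99% percentile confidence interval for rho.
n = len(used_in_event)
boot_rhos = []
for _ in range(1000):
    idx = rng.integers(0, n, size=n)  # resample respondents with replacement
    r, _ = stats.spearmanr(used_in_event[idx], self_review_freq[idx])
    boot_rhos.append(r)
ci_low, ci_high = np.nanpercentile(boot_rhos, [0.5, 99.5])  # nan-safe for degenerate resamples
print(f"99% bootstrap CI for rs: [{ci_low:.4f}, {ci_high:.4f}]")
```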

To better understand the factors associated with EM use, we collected 3 free-text responses from residents’ self-reporting EM use regarding their most recent critical event for the following: how the EM was used, if at all; facilitators and barriers to effective EM use; and the perceived impact the EM had on patient care (see Supplemental Digital Content 2, Appendix B, http://links.lww.com/AA/B446 for survey questions). The qualitative responses were then iteratively thematically categorized (J.P., S.B.M.) using an inductive approach and adjudicated (S.N.G.-F.) until 100% inter-rater agreement was reached.30–33

RESULTS

Table 1

Response rates among anesthesia residents were similar for both surveys, with 34 (51.5%) pre-implementation and 42 (56.8%) post-implementation (Table 1). Between survey years, there were no statistically significant differences in residents’ year of training or sex.

Pre- Versus Post-Implementation Exploratory Inferential Survey Comparison

On the post- versus pre-implementation survey, more residents agreed or strongly agreed with “The culture in the ORs where I work supports consulting a cognitive aid when appropriate” (73.8%, n = 31 vs 52.9%, n = 18, z = −3.1300, P = .0017). For the same question, there was also a positive shift from “agree” toward “strongly agree” from pre- to post-clinical implementation, suggesting a safety culture shift (Figure 3). Post- versus pre-implementation, more residents chose more of the 8 types of anesthesia professionals in response to the question “Which of the following anesthesia professionals should use cognitive aids in some way?” (z = −2.151, P = .0315, using a composite score of binary answers for types of anesthesia professionals by Wilcoxon rank sum test). Anesthesia professional types were listed as “check all that apply,” with 8 total choices: medical students, each year of anesthesia residents, fellows, faculty, private practice anesthesiologists, and certified registered nurse anesthetists.

Figure 3

For the remaining survey questions (Supplemental Digital Content 1, Appendix A, http://links.lww.com/AA/B445), rates of support for the use of cognitive aids were already high before clinical implementation and were maintained after clinical implementation.

Post-Implementation Descriptive Data

Clinical Uses

Nearly half of the respondents (45.2%, n = 19) reported using an EM at least once during a clinical critical event, and 11.9% (n = 5) reported using an EM ≥3 times, over the 15 months since implementation. Figure 4 shows the numbers of resident-reported uses of EMs during clinical events, by event type.

Figure 4

In this pilot study, 78.9% (n = 15) of the respondents who had used EMs during a perioperative critical event agreed or strongly agreed “the EM helped the team deliver better care to the patient,” with the rest neutral and none disagreeing. All respondents who had used an EM (100%, n = 19) agreed or strongly agreed “Having Emergency Manuals in our operating rooms improves patient care.” All respondents (100%, n = 19) who had used an EM also reported at least monthly use of an EM for self-review or as an intraoperative educational resource, with many using it more frequently.

Training Facilitating EM Use

Figure 5

Training is one of multiple elements that may facilitate effective EM use, with each element described in more detail elsewhere.4 Immersive simulation of critical events (95.2%, n = 40) and self-review of the EM (73.8%, n = 31) were the training modalities most often rated agree or strongly agree for “positively influenced my emergency manual use.” No respondents disagreed for either category. See Figure 5 for full training data.

Barriers to EM Use

Figure 6

“Events in the operating room happen too quickly” (40.5%, n = 17) and “Insufficient people to help (eg, nobody available as reader)” (23.8%, n = 10) were the barriers selected by the most respondents as moderate to significant. In contrast, “My colleagues may not approve” (4.8%, n = 2) and “Lack of sufficient training programs” (7.1%, n = 3) were the barriers selected by the fewest as moderate to significant. See Figure 6 for barrier response data.

Post-Implementation Exploratory Inferential Data

Relationship Between Various EM Exposures and EM Use During a Critical Event

EM use during an event was positively correlated with increasing personal frequency of EM use in each of the following categories: self-review (rs = 0.4506, P = .0031, 99% CI, 0.1004–0.8008), intraoperative educational resource (rs = 0.4076, P = .0082, 99% CI, 0.0518–0.7634), “just in time” review of a relevant EM event before a patient’s case (rs = 0.4398, P = .0040, 99% CI, 0.0532–0.8265), and post-event EM review after a critical event (rs = 0.5873, P = .0001, 99% CI, 0.2313–0.9433).

Reader Role

Post-implementation, 69% (n = 29) of respondents answered that an EM would be helpful “during an event, once there are enough people that someone could be READING IT OUT LOUD for the team.” This was significantly fewer than pre-implementation (94%, n = 32, P = .0082).

Qualitative Themes From EM Use During Clinical Critical Events

Table 2

Of the 19 participants who self-reported using the EM during clinical critical events, 84.2% (n = 16) described in free-text how the EM was used, any facilitators, barriers, or limitations, and the perceived impact of EM use on patient care. In Table 2, we present qualitative data from free-text responses in the following major domains: (1) triggers for EM use, (2) significance of the reader role, (3) differential diagnosis and treatment plan support, (4) patient care impacts, and (5) barriers to EM use. In particular, we identified specific themes related to barriers to EM use: competing priorities with limited time, remembering to trigger EM use, challenges with reader role, and cultural acceptance of EM use.

DISCUSSION

This study assesses pre- to post-clinical implementation changes of resident perspectives regarding local patient safety culture and presents 19 resident reports of EM use for multiple types of critical events at a single institution. Here, we integrate the discussion of complementary quantitative and qualitative results33,34 for EM uses during clinical events. Particularly worthy of further exploration are the barriers and facilitators for EM clinical use, which will be relevant in guiding clinical implementations and future research.

Pre- Versus Post-Implementation Exploratory Inferential Survey Comparison

This pilot study presents changes in perceived safety culture. Despite long-standing, enthusiastic acceptance of cognitive aid use by residents during simulation courses, before clinical EM implementation residents may have been hesitant to use those same tools in the OR because of “safety culture” concerns about how others viewed use of a cognitive aid. Relevant implementation factors that we suspect helped residents become more comfortable using EMs as clinical tools include the following: (1) Highly visible accessibility of EMs in all ORs helped make EMs an accepted institutional “norm.” (2) OR leadership and respected senior anesthesiologists vocally endorsed EM use during departmental meetings and through their own modeling behavior. Although faculty adoption of clinical EM use was a process, purposeful faculty engagement, such as inviting input, alleviated many initial concerns. (3) Multimodal education for anesthesia and other OR team members provided the rationale and practical strategies for appropriate use of EMs. (4) Success stories of effective EM use during critical events spread quickly and informally, and were sometimes also presented formally at educational sessions.

Safety culture is an important factor impacting the implementation of any new intervention into a health care system but should not be viewed as an insurmountable hurdle.21 For institutions considering EM implementation, some internal dissent should inspire sharing of evidence and an engaged, healthy debate with key stakeholders to express concerns regarding appropriate use, but it should not discourage local champions from embarking on clinical implementation. Although this was a hypothesis-generating pilot study, after clinical implementation of EMs, we did find a meaningful rightward shift toward “strongly agree” in the residents’ perceptions of patient safety culture regarding cognitive aid use.

Importantly, there may be broader patient safety implications that are clinically and organizationally significant when safety culture is influenced positively.17,21,35 A potentially generally applicable lesson is that the implementation process itself may help to positively nudge the perceived safety culture regarding cognitive aid use via mechanisms including training for interprofessional clinicians, institutional “approval” of EM use implied by accessibility, and informal or formal sharing of local effective uses.

Larger, in-depth, mixed-methods studies will be needed to more deeply understand the relationships between clinical EM implementation and safety culture.

For most survey questions, resident attitudes toward cognitive aid use were already very positive pre-implementation and these levels were maintained post-implementation. There was not much room for improvement here given the long-standing support and use of cognitive aids during resident simulation trainings at this institution. What was previously unknown was whether clinical implementation of EMs would facilitate the clinical use of EMs during critical events, within the complex context of use barriers, including resident perception of the surrounding safety culture—that is, do trainees interpret the safety culture as sufficiently supporting use of EMs?

Post-Implementation Descriptive Data

Clinical Uses

Residents self-reported EM use over the 15-month period spanning 3 broad event types, as previously described: rare events (eg, malignant hyperthermia), slower refractory events (eg, refractory hypoxemia or hypotension), and complicated, rapidly evolving events (eg, pulseless electrical activity [PEA] cardiac arrest).4

With 45.2% of all respondents using the EM during at least 1 critical event, these data show the potential for broad adoption and applicability, particularly given the rarity of these events. In previously published US Department of Veterans Affairs (VA) data by Neily et al,22 7% of respondents reported use during at least 1 critical event, although VA anesthesia professionals were surveyed nationally only 6 months post-implementation and levels of EM training varied by local institution. Both the VA study and our study found perceived positive EM impacts on team delivery of patient care, even during rare critical events, suggesting that EM use may also address an important teamwork need that deserves further exploration in future studies.

The value of EM use during clinical critical events was further illuminated via free-text responses from 16 anesthesia residents. These qualitative data indicate that in many cases, an EM helped the team during a critical event to provide patient care in a more organized manner, act upon real-time reminders of omitted actions, access detailed therapeutic information, broaden or confirm differential diagnoses, and reaffirm appropriately performed actions. Residents also described barriers to EM use, which are grouped into themes in Table 2: competing priorities with limited time, remembering to trigger EM use, challenges with reader role, and cultural acceptance of EM use. These themes and specific comments indicate opportunities to refine EM design, training, and implementation processes for even more effective use at appropriate times during critical events.

Although multiple previous studies have demonstrated that EMs and similar cognitive aids can significantly increase appropriate clinician actions during simulated critical events, this study adds significantly to the previously scant published data on clinical EM implementations and uses for the delivery of actual patient care.22–26 For EMs to be most effective, they require a thoughtful process of implementation into our organizationally complex work environments. Mere presence of any EM does not guarantee use.

Trainings Facilitating EM Use

Anesthesia residents in this study predominantly identified use of EMs during immersive simulation trainings and self-review as positively influencing their subsequent effective EM use. These results should not be misinterpreted to mean that self-review alone would be sufficient without immersive simulation, given that the far rarer but powerful simulation experiences likely spur the more frequent self-review synergistically.

Barriers to EM Use

Anesthesia residents identified the biggest barriers to further local EM use as events happening too quickly and insufficient people available to help, and they assessed barriers related to safety culture and training as not nearly as important in the local context. Although some events truly happen and resolve “too fast,” other critical events may be amenable to appropriate EM use after immediate actions are begun, given the negative impact of stress on memory recall.13,14 Of note, institutions at early stages of implementation will likely face different priority barriers (eg, safety culture and training of staff) that were no longer major barriers at Stanford, likely because of a more mature stage of implementation and long-standing resident exposure in simulation-based trainings.

Post-Implementation Exploratory Inferential Data

Relationship Between Various EM Exposures and EM Use During a Critical Event

Given the positive correlations between EM use during an event and frequency of various resident EM exposures, this study provides pilot data for future exploration of a “dose-response effect” between training or exposures and subsequent use, with potential support for the role of training with such tools. Although in the current pilot study this is a correlation and not necessarily causally related, it is promising that multiple previous EM exposures may prompt EM use during clinical critical events.

Reader Role

The role of a dedicated EM “reader” during a critical event has been shown to significantly increase the rate of completion of vital actions during simulated crises36 and was described in a case report of successful clinical EM use.24 Interestingly, after implementation, significantly fewer respondents, although still a majority, thought that an EM reader role would be helpful. This may reflect that, after clinical implementation, residents had a more nuanced understanding of barriers to an effective reader role, particularly given that several explicitly described reader role challenges in free-text responses. These findings suggest that the reader role in clinical care, while valuable when successful, is complex and reliant on other factors, such as enough people to perform vital actions and previous familiarity with the role. To trigger a reader role in clinical critical events, either a leader must delegate or another clinician must volunteer to be a reader. Although EM readers can be effective clinically, many questions and challenges remain regarding how to implement a reader role broadly.

Limitations

EMs are designed for use in rare and critical situations, the nature of which limits prospective or controlled clinical studies. Many limitations of this study are intrinsic to its design as an institutional survey at a single academic center with one study population.

Because of the exploratory nature of this hypothesis-generating pilot study, the survey questions were not formally validated using psychometric analyses. However, as described in the Methods, multiple experts developed the questions through a rigorous, consensus-building, modified Delphi process, and the survey did undergo pilot testing.

All anesthesia residents were solicited to complete the survey, with efforts to increase response rates, including repeated e-mail reminders at each time point. One criticism common to many surveys is low response rate, although survey research suggests that low response rates may reflect survey fatigue rather than necessarily indicating nonresponse error.28,37 An element of selection bias is still possible, given that residents who chose to complete the survey may have felt more strongly for or against the topic than nonresponders. However, even assuming the extreme case that all nonresponders had never used an EM, over one-quarter of all residents (19 of approximately 74 residents surveyed) would have used an EM during at least one clinical critical event, indicating significant applicability and adoption, particularly given the rarity of events.

This study surveyed anesthesia residents who make up only part of the OR team. However, the survey revealed that use of the EM in real clinical work did affect the team’s management of patients during critical events. Moreover, residents are key stakeholders to monitor for institutional change and perception of safety culture. All trainees are potential early adopters of EMs, can receive significant training in the use of such safety tools, and may be appropriate messengers to disseminate cultural acceptance and use both within and across institutions.21

These data represent only 2 cross-sectional time points. Future inquiries should include additional data collection across multiple time points and perspectives of multiple clinician groups to further assess key factors associated with successful implementation.

The qualitative data in our survey only begin to suggest relevant themes, as surveys are limited by the inability to ask relevant follow-up questions and by respondents’ tendency to enter only brief free-text responses. Future implementation studies should delve more deeply into the rich qualitative data of EM clinical uses and also assess dissemination and implementation of EMs more broadly across multiple institutions, using mixed methods. Particularly for studying a tool used during rare critical events, rigorous qualitative interviews will be necessary to better understand barriers and facilitators to use, how teams effectively or ineffectively use EMs, and any positive or negative impacts on the patient care delivered. The themes seen in this early study (see Table 2) will help shape appropriate interview guides to collect more robust and broadly applicable qualitative data in the future.

CONCLUSIONS

Although these reports all come from a pilot study at a single institution, they serve as an early proof of concept for feasibility of clinical EM implementation and use. For anesthesia departments considering implementing EMs into clinical practice, this study offers insights regarding safety culture, types of training, and mixed-methods data on clinical uses during critical events. Larger, mixed-methods studies will be needed to better understand emerging facilitators and barriers and to determine generalizability.

ACKNOWLEDGMENTS

The authors thank Stanford Hospital and Clinics and specifically the Department of Anesthesiology, Perioperative and Pain Medicine for their implementation efforts and departmental support of this study. They also thank Dr. David Gaba (anesthesiologist; patient safety/simulation expert) for his pioneering work in cognitive aids, EM simulation testing, advising for clinical implementation efforts, and feedback for this study along with all members of the Stanford Anesthesia Cognitive Aid Group for their design, simulation testing, and iterative improvements; Dr. Kelley Skeff (internal medicine; education expert) and Professor Wendy Mackay (human factors expert) for survey input; and Professor Rita Popat (biostatistician) for her statistical expertise. They appreciate the participation and input from all the Stanford resident respondents. S.N.G.-F. is grateful for a Research in Education Grant from the Foundation for Anesthesia Education and Research, which later in this project supported career development time relevant for data analysis, manuscript writing, and pursuing subsequent mixed-methods studies.

DISCLOSURES

Name: Sara N. Goldhaber-Fiebert, MD.

Contribution: This author helped design the study, conduct the study, analyze the data, and write the manuscript.

Name: Justin Pollock, MD.

Contribution: This author helped design the study, conduct the study, analyze the data, and write the manuscript.

Name: Steven K. Howard, MD.

Contribution: This author helped design the study, conduct the study, analyze the data, and write the manuscript.

Name: Sylvia Bereknyei Merrell, DrPH, MS.

Contribution: This author helped design the study, conduct the study, analyze the data, and write the manuscript.

This manuscript was handled by: Sorin J. Brull, MD.

REFERENCES

1. Harrison TK, Manser T, Howard SK, Gaba DM. Use of cognitive aids in a simulated anesthetic crisis. Anesth Analg. 2006;103:551–556.
2. Neal JM, Hsiung RL, Mulroy MF, Halpern BB, Dragnich AD, Slee AE. ASRA checklist improves trainee performance during a simulated episode of local anesthetic systemic toxicity. Reg Anesth Pain Med. 2012;37:8–15.
3. Arriaga AF, Bader AM, Wong JM, et al. Simulation-based trial of surgical-crisis checklists. N Engl J Med. 2013;368:246–253.
4. Goldhaber-Fiebert SN, Howard SK. Implementing emergency manuals: can cognitive aids help translate best practices for patient care during acute events? Anesth Analg. 2013;117:1149–1161.
5. Marshall S. The use of cognitive aids during emergencies in anesthesia: a review of the literature. Anesth Analg. 2013;117:1162–1171.
6. Driskell JE, Salas E, Johnston J. Does stress lead to a loss of team perspective? Group Dyn. 1999;3:291–302.
7. Gaba DM, Fish KJ, Howard SK, Burden AR. Crisis Management in Anesthesiology. 2nd ed. Philadelphia, PA: Elsevier Health Sciences; 2014 (1st ed, 1994, Churchill Livingstone).
8. Marshall SD, Mehra R. The effects of a displayed cognitive aid on non-technical skills in a simulated ‘can’t intubate, can’t oxygenate’ crisis. Anaesthesia. 2014;69:669–677.
9. Anesthesia Patient Safety Foundation (APSF) Experts’ Conference. Implementing and Using Emergency Manuals and Checklists to Improve Patient Safety. Phoenix, AZ; 2015. Available at: http://www.apsf.org/newsletters/html/2016/February/08_EmerManuals.htm. Accessed June 16, 2016.
10. Ghaferi AA, Birkmeyer JD, Dimick JB. Variation in hospital mortality associated with inpatient surgery. N Engl J Med. 2009;361:1368–1375.
11. Renkl A, Mandl H, Gruber H. Inert knowledge: analyses and remedies. Educ Psychol. 1996;31:115–121.
12. Gaba DM. Perioperative cognitive aids in anesthesia: what, who, how, and why bother? Anesth Analg. 2013;117:1033–1036.
13. Bourne LE Jr, Yaroush RA. Stress and cognition: a cognitive psychological perspective. NASA technical report; 2003. Available at: http://ntrs.nasa.gov/search.jsp?R=20040034070. Accessed June 16, 2016.
14. Staal MA. Stress, cognition, and human performance: a literature review and conceptual framework. NASA Technical Memorandum 212824; 2004.
15. Gaba DM. Pierce EC Jr Patient Safety Memorial Lecture: Competence and Teamwork Are Not Enough: The Value of Cognitive Aids. American Society of Anesthesiologists Annual Meeting; New Orleans, LA; 2014.
16. Howard SK, Gaba DM, Fish KJ, Yang G, Sarnquist FH. Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med. 1992;63:763–770.
17. Singer SJ, Rosen A, Zhao S, Ciavarelli AP, Gaba DM. Comparing safety climate in naval aviation and hospitals: implications for improving patient safety. Health Care Manage Rev. 2010;35:134–146.
18. Stanford Anesthesia Cognitive Aid Group. Stanford Emergency Manual for perioperative critical events. Available at: http://emergencymanual.stanford.edu. Accessed June 12, 2016.
19. Emergency Manuals Implementation Collaborative. Available at: www.emergencymanuals.org. Accessed March 21, 2016.
20. Consolidated Framework for Implementation Research (CFIR). Available at: http://cfirguide.org. Accessed March 21, 2016.
21. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
22. Neily J, DeRosier JM, Mills PD, Bishop MJ, Weeks WB, Bagian JP. Awareness and use of a cognitive aid for anesthesiology. Jt Comm J Qual Patient Saf. 2007;33:502–511.
23. Ramirez M, Grantham C. Crisis checklists for the operating room, not with a simulator. J Am Coll Surg. 2012;215:302–303.
24. Ranganathan P, Phillips JH, Attaallah AF, Vallejo MC. The use of cognitive aid checklist leading to successful treatment of malignant hyperthermia in an infant undergoing cranioplasty. Anesth Analg. 2014;118:1387.
25. Goldhaber-Fiebert SN, Lei V, Jackson ML, McCowan K. Simulation-based Team Training: Crisis Resource Management and the Use of Emergency Manuals in the OR. MedEdPORTAL Publications; 2014. Available at: https://www.mededportal.org/publication/9992. Accessed March 21, 2016.
26. Goldhaber-Fiebert SN, Lei V, Bereknyei Merrell S, Nandagopal K. Perioperative Emergency Manuals in Clinical Clerkships: Curricula on “Why, How, and When to Use” for Teaching Medical Students. MedEdPORTAL Publications; 2015. Available at: https://www.mededportal.org/publication/10056. Accessed March 21, 2016.
27. Goldhaber-Fiebert SN, Lei V, Nandagopal K, Bereknyei S. Emergency manual implementation: can brief simulation-based or staff trainings increase familiarity and planned clinical use? Jt Comm J Qual Patient Saf. 2015;41:212–220.
28. Krosnick JA. Survey research. Annu Rev Psychol. 1999;50:537–567.
29. Kelley K, Clark B, Brown V, Sitzia J. Good practice in the conduct and reporting of survey research. Int J Qual Health Care. 2003;15:261–266.
30. Malterud K. Qualitative research: standards, challenges, and guidelines. Lancet. 2001;358:483–488.
31. Fereday J, Muir-Cochrane E. Demonstrating rigor using thematic analysis: a hybrid approach of inductive and deductive coding and theme development. Int J Qual Methods. 2008;5:80–92.
32. Saldaña J. The Coding Manual for Qualitative Researchers. 2nd ed. Thousand Oaks, CA: SAGE Publications, Inc; 2013.
33. Miles MB, Huberman AM, Saldaña J. Qualitative Data Analysis: A Methods Sourcebook. 3rd ed. Thousand Oaks, CA: SAGE Publications, Inc; 2014.
34. Creswell JW, Plano Clark VL. Designing and Conducting Mixed Methods Research. 2nd ed. Thousand Oaks, CA: SAGE Publications, Inc; 2011.
35. Weaver SJ, Lubomksi LH, Wilson RF, Pfoh ER, Martinez KA, Dy SM. Promoting a culture of safety as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158:369–375.
36. Burden AR, Carr ZJ, Staman GW, Littman JJ, Torjman MC. Does every code need a “reader”? Improvement of rare event management with a cognitive aid “reader” during a simulated emergency: a pilot study. Simul Healthc. 2012;7:1–9.
37. Visser PS, Krosnick JA, Lavrakas PJ. Handbook of Research Methods in Social and Personality Psychology. New York, NY: Cambridge University Press; 2000.

© 2016 International Anesthesia Research Society