Economics, Education, and Policy: Special Article

Differences in Safety Climate Among Hospital Anesthesia Departments and the Effect of a Realistic Simulation-Based Training Program

Cooper, Jeffrey B. PhD*†; Blum, Richard H. MD†‡; Carroll, John S. PhD§; Dershwitz, Mark MD, PhD; Feinstein, David M. MD†¶; Gaba, David M. MD#; Morey, John C. PhD**; Singla, Aneesh K. MD, MPH*†

doi: 10.1213/01.ane.0000296462.39953.d3

Organizational culture is a primary determinant of the safety of organizations. Culture is the shared expectations, values, and assumptions that underlie “the way we do things” in an organization.1,2 Whereas culture contains deep content that is often below the surface and difficult to measure simply,2 climate refers to the more evident and more easily measured positive or negative attitudes and perceptions toward aspects of culture.3 Safety climates have been measured in various industries, such as aviation, oil exploration and, more recently, health care settings.4–11 However, there have been few measures of safety climate specifically in anesthesia and no comparisons among anesthesia groups.12 In this article, we examine some aspects of safety climate in six anesthesia departments and suggest how this information can be used to identify characteristics that may be changed to enhance safety culture.

This study was motivated in part by the introduction of an intervention expected to improve aspects of patient safety in anesthesia. In November 2000, the captive insurance organization of the Harvard Medical School (HMS) Institutions announced a two-tier rate structure for anesthesiologists. The premium for faculty anesthesiologists who had participated in simulation-based Crisis Resource Management (CRM) training would be 6% less than for those who had not participated, effective January 1, 2001. Accordingly, the Center for Medical Simulation* developed and deployed a one-day training intervention specifically for anesthesia faculty.

We saw this intervention as an opportunity to assess the overall safety climate of the different organizations, to learn if there were meaningful differences among them, and to measure any overall changes in climate that might occur in association with the new training program. Previous CRM training had been offered almost exclusively to residents, whereas these new sessions focused on teamwork for their supervisory staff physicians and included a scenario on debriefing after important clinical events. We wondered if the introduction of training to all faculty might generally lift the safety climate by virtue of everyone experiencing training directed at safety. Specifically, we expected that some aspects of safety culture that are addressed directly or indirectly in the course (e.g., calling for help, ability to manage a crisis) might improve.


Research Design

We conducted a two-phase (before-after) study of safety climate comparing four experimental hospitals, where most of the anesthesia faculty participated in the one-day simulation-based CRM faculty training program, to two control hospitals that did not receive the training intervention. The first phase began in February 2001, with distribution of the safety climate survey to all 610 faculty, residents, and nurse anesthetists (CRNAs) of the four experimental hospitals. To increase the typically low response rate in physician populations, a second wave was sent 6 wk later only to those who had not returned a postcard identifying them, via an identification number, as having returned a survey form. After a decision to add the control hospitals to the study design, two waves of surveys were sent, 6 wk apart, to all 98 anesthesia clinicians in the two control hospitals, beginning in April 2002. The second phase began in April 2004, when the same survey was administered in two waves to all 674 anesthesia providers then at the four experimental hospitals and all 98 in the two control hospitals.


The four experimental hospitals have academic anesthesia programs affiliated with HMS. Both control hospitals are located in Massachusetts and also have residency programs. Between February 15, 2001 and April 30, 2004, 272 (82%) of the approximately 330 clinical faculty from the anesthesia departments of the experimental hospitals participated in the new CRM training program. From information provided by the insurance carrier, on the basis of their survey a year before, we estimated that approximately 37% of all anesthesia faculty in the HMS cohort had previously participated in anesthesia CRM sessions either as residents or in an earlier pilot study in 1992. The proportion of faculty respondents from the experimental group reporting previous simulation-based training on our survey was 57% in Phase 1, which is higher than we had anticipated, and 92% in Phase 2. In the two control hospitals, some residents and faculty had participated in a previous simulation-based CRM program, but no faculty participated in the new faculty-training program. Of those in the control group, 29% reported having been trained in Phase 1 and 20% in Phase 2.

The usable response rate on the safety climate survey was 43.6% (309/708) in Phase 1 and significantly lower at 38.0% (293/772) in Phase 2 (Table 1; χ2(1) = 4.95, P < 0.05). We label these rates “usable” because we excluded 3 returned surveys in Phase 1 and 30 in Phase 2 from respondents who indicated that they were currently nonclinical. There were wide variations in response rates between hospitals and provider groups. Response rate by hospital varied from 34% to 63% in Phase 1 (χ2(5) = 26.05, P < 0.001) and from 21% to 59% in Phase 2 (χ2(5) = 30.36, P < 0.001) (Table 1). However, there was no significant difference in response rate by treatment (the four experimental hospitals combined versus the two control hospitals) in either phase. There was no significant difference in response rate by provider type in Phase 1 (χ2(2) = 2.75, n.s.), but there was a significant difference in Phase 2 (Table 2; χ2(2) = 14.68, P < 0.001), with 32.6% of residents, 40.6% of faculty, and 61.5% of CRNAs providing usable responses.
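The phase-to-phase comparison of usable response rates can be reproduced from the counts above with a standard 2 × 2 chi-square test of independence. The following pure-Python sketch is illustrative only and is not the study's analysis code; the nonresponder counts are derived by subtraction from 309/708 and 293/772.

```python
# 2x2 chi-square test of independence on usable response rates:
# Phase 1 (309 of 708 usable) vs Phase 2 (293 of 772 usable).
def chi_square_2x2(table):
    """table = [[a, b], [c, d]]; returns the chi-square statistic (df = 1)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    # Shortcut formula for 2x2 tables (no continuity correction):
    # chi2 = N * (ad - bc)^2 / (row1 * row2 * col1 * col2)
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

phase1 = [309, 708 - 309]   # [usable, not usable]
phase2 = [293, 772 - 293]
chi2 = chi_square_2x2([phase1, phase2])
print(f"chi2(1) = {chi2:.2f}")
```

The statistic works out to roughly the 4.95 reported in the text, which with 1 degree of freedom gives P < 0.05.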

Table 1:
Response Rates by Hospital and Phase
Table 2:
Response Rate by Provider Type and Phase

Simulation Training

The one-day simulation training for faculty was based on the anesthesia CRM model format described by Howard et al. and was similar to the courses used for residents in our hospitals since 1994.13–15 More detail is given in Appendix 1.

Safety Climate Survey

A 54-question survey was assembled with questions primarily drawn from the 100-question survey developed by the Veterans Administration Palo Alto Patient Safety Center of Inquiry, which had been developed for a broad hospital audience.5 A subset of questions was selected intuitively for this survey to include some with a relationship to the CRM training intervention but most with relevance to overall safety attitudes, although not inclusive of all issues. Nine new questions were developed to query two important elements of the CRM training program: teamwork and debriefing of residents. Three were asked only of supervisors (faculty) and six only of supervisees (residents and fellows). Thirty-seven questions were worded positively (agreement with the statement indicated a safe attitude) and 22 were worded negatively. Responses were on 5-point Likert scales. Two final questions prompted for free-text comments regarding suggestions for safety improvements in the local departments and “additional comments.” The complete questionnaire is available at (

Methods of Analysis

Given the large number of survey items, factor analysis was used on the 45 questions asked of all subjects to identify multi-item scales that could be used for comparing the experimental and control hospitals across phases.

We also computed the responses in terms of “positive safety climate,” i.e., the percentage of responses to each question that agreed or strongly agreed with statements indicative of a positive safety climate, with negatively worded questions reverse coded.
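As a concrete illustration of this scoring, reverse coding on a 5-point Likert scale maps a response r to 6 − r, after which a score of 4 (“agree”) or 5 (“strongly agree”) always indicates the positive safety attitude. The sketch below uses invented example responses and is not the study's code.

```python
# Percent "positive safety climate" responses for a single question.
# On a 5-point Likert scale, reverse coding maps r -> 6 - r, so that
# 4 ("agree") and 5 ("strongly agree") always denote the safe attitude.
def percent_positive(responses, negatively_worded=False):
    scored = [6 - r if negatively_worded else r for r in responses]
    return 100.0 * sum(1 for s in scored if s >= 4) / len(scored)

# Hypothetical responses to a negatively worded item, e.g.
# "The anesthesia attendings in this department are overworked."
answers = [1, 2, 2, 4, 5, 3, 2, 1]   # 1 = strongly disagree ... 5 = strongly agree
print(percent_positive(answers, negatively_worded=True))  # disagreement counts as positive
```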

All statistical analyses were conducted using SPSS (Version 14). Unweighted-means multivariate and univariate analyses of variance were conducted to minimize the effect of differences in provider response rates, using the general linear model procedure and Type III sums of squares for unbalanced designs. The statistical significance level was P < 0.05.

Two anesthesiologists not associated with the study design or implementation reviewed and classified the free text comments. The anesthesiologists (co-author AKS and one other) were blind to the phase and identity of hospitals. They created a set of 29 categories, based on a set of dimensions identified in the literature,4 and independently placed each comment in one category. Disagreements were settled by consensus.

The appropriate IRBs approved the study. Return of a survey was deemed sufficient consent from subjects with the exception of one hospital in which a separate written consent was required and obtained.


Results

The factor analysis (Appendix 2) led to eight scales comprising 36 questions (Table 3). Scores for each scale were calculated by averaging the questions on the scale, with negatively worded items reverse coded so that high scores indicated a stronger safety climate. Cronbach’s α for the eight scales ranged from 0.66 to 0.84, which is typical for surveys under development.16 The research team gave each scale a descriptive label based on the content of the scale items. In addition, a total safety climate score was computed as the average of the eight scale scores.
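Cronbach's α for a k-item scale can be computed from the item variances and the variance of the summed scores. The sketch below, with made-up ratings, illustrates the standard formula; it is not the study's actual computation.

```python
# Cronbach's alpha: internal-consistency reliability of a multi-item scale.
# alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list of respondent scores per question."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    item_var = sum(pvariance(scores) for scores in items)
    return k / (k - 1) * (1 - item_var / pvariance(totals))

# Three hypothetical Likert items answered by five respondents:
scale = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [5, 3, 4, 2, 5],
]
print(round(cronbach_alpha(scale), 2))
```

Values in the 0.66 to 0.84 range reported here indicate acceptable but not high internal consistency, which is why the text characterizes them as typical for surveys still under development.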

Table 3:
Item and Scale Statistics and Cronbach’s α for Phase 1 Data

The Effect of Treatment on Safety Climate

We tested the hypothesis that safety climate would improve more in the treatment hospitals than in the control hospitals (a treatment by phase interaction). We tested for this effect overall using both a multivariate analysis of variance on the eight scale-scores and analysis of variance on the total safety climate score controlling for provider group (attendings, residents, CRNAs). These analyses tested two treatment groups × two phases × three provider groups. Both analyses failed to find an effect for treatment or for treatment by phase. To illustrate the relative stability of the individual scales overall in the experimental hospitals, the means and 95% confidence intervals are shown for each scale in Figure 1. However, there was a significant difference between phases in the multivariate analysis (Wilks’ lambda = 0.93, F(8, 601) = 5.01, P < 0.001) with small increases in six of the eight scales. Individual analysis of variance on each scale revealed that only the “safe workload” scale was significantly different between phases (F(1, 608) = 36.95, P < 0.001) with an improvement in workload over time (Phase 1 M = 2.83, Phase 2 M = 3.23). Viewing the perceptions in terms of positive safety climate score (Table 3), on average, 33% of respondents exhibited positive responses to the three questions in the safe workload factor in Phase 1. That increased to about 47% in Phase 2 (accounting for reverse coding of negatively worded questions).

Figure 1:
Means for the eight scale scores for the hospitals with training, for each phase (see Table 3 for scale labels; error bars are 95% confidence intervals).

This effect found in the multivariate analyses was mirrored more weakly in the analysis of the total climate score, which showed a marginally significant improvement in total score over time (Phase 1 M = 3.65, Phase 2 M = 3.72, t(608) = 1.94, P = 0.052). Both analyses also showed a significant difference across hospitals, but not by treatment condition. (See also the next section).

Seven questions were directly related to the actual training intervention. Four made up the factor “asking for help” (which did not change by treatment as tested above) and three were in other scales. We tested each of those three questions independently via a Student’s t-test (the most liberal but not the most valid test) and none was significant for the experimental hospitals at the 0.05 level. Thus, none of the climate issues that might be expected to improve directly from training did so.

Although none of the provider types (attendings, residents/fellows, CRNAs) showed significant differences between phases, significant differences were shown among these three groups for the phases combined. When compared to the attending mean total score (M = 3.71, sd = 0.43, n = 304), the mean total score for the residents/fellows (M = 3.62, sd = 0.40, n = 231) was significantly lower (t(533) = −2.49, P = 0.013). The CRNA mean total score (M = 3.89, sd = 0.43, n = 42) was significantly higher than the mean for the attendings (t(334) = 2.54, P = 0.011). These differences are shown in Figure 2. Individual ANOVAs (three provider groups × two phases) for each hospital revealed only two effects, a significantly higher mean for attendings than residents/fellows in one hospital (F(2, 154) = 5.17, P = 0.007) and a significantly higher mean total score in Phase 2 in another hospital (F(1, 51) = 8.59, P = 0.005). From a 6-hospitals × 3-provider groups ANOVA, we computed generalized omega squared parameters to compare the amounts of total score variance associated with providers and hospitals.17 Providers accounted for a considerably smaller percentage of the variability of total scores (0.84%) than hospitals (2.02%).

Figure 2:
Climate Survey Mean Total Scale by Provider Group and Phase (error bars are 95% confidence intervals).

Hospital Comparisons

Figure 3 shows the mean total scale score of the six hospitals in each phase. Although there was no overall effect of treatment, there were significant, but not large, differences among hospitals in each phase and in both phases aggregated. The multivariate analysis of variance (six hospitals × two phases × three provider groups) of the eight scales showed a significant hospital effect (Wilks’ lambda = 0.74, F(40, 2527) = 4.63, P < 0.001), as did the analysis of variance on the total scores (F(5, 586) = 4.25, P < 0.001). The multivariate test for the hospital by phase interaction approached but did not reach significance (Wilks’ lambda = 0.91, F(40, 2527) = 1.35, P = 0.07). Individual analyses of variance of each of the eight subscales revealed that only the “safe workload” scale showed a significant hospital by phase interaction (F(5, 586) = 3.98, P = 0.001). The safe workload scale scores by hospital and phase are shown in Figure 4. A post hoc test of differences in overall safety climate among the six hospitals (averaging across phases) showed that the best and worst hospitals differed significantly, with the other four grouped in the middle. Figure 3 offers suggestive evidence that two hospitals increased in safety climate, three were relatively stable, and one decreased, but the hospital by phase interaction was only marginally significant.

Figure 3:
Climate Survey Mean Total Scale by Phase and by Hospital. Hospitals U and Z are controls; the others are experimental (error bars are 95% confidence intervals).
Figure 4:
Safe Workload Scale by Hospital and Phase (error bars are 95% confidence intervals).

Debriefing Questions

The CRM course included specific content around debriefing after incidents as well as general content on teamwork and safety. Because the questions asked of attendings and residents were worded differently, these were not included in the safety climate scores and analyses above. We expected both attendings and residents to report more debriefing of clinical events in Phase 2 compared with Phase 1 in the experimental hospitals but, consistent with our results on safety climate, there were no statistically significant differences by treatment, phase, or treatment by phase. However, in both phases, the percentage of attendings who agreed or strongly agreed that they “always debrief” their resident after a “difficult clinical situation with a patient” was significantly higher (χ2(2) = 173, P < 0.001) than the percentage of residents who agreed or strongly agreed that “my attending always debriefs with me” after such a situation (Figure 5). There was no statistically significant change in the perceptions of debriefing by either residents or attendings from Phase 1 to 2, and no difference in the change between experimental and control hospitals.

Figure 5:
Response to questions about debriefing. Asked of residents and fellows: “After I’ve been involved in a difficult clinical situation with a patient, my attending staff always debriefs with me about my performance in managing the situation.” Asked of attendings: “After my resident has been involved in a difficult clinical situation with a patient, I always debrief him/her about his/her performance in managing the situation.”

Positive Climate Responses

We also calculated the percentage of positive responses (agree or agree strongly with positively worded questions, disagree or disagree strongly with negatively worded questions) across all questions in the eight scales, and then averaged questions within the eight scales and averaged these eight scores for a total positive climate response. Table 3 shows that the climate scores (all hospitals, Phase 1) were most positive for “In my department, we believe that safety is an essential part of patient care” (95% positive among all respondents) and “Staff are genuinely concerned about patient safety” (94%) and most problematic for “Senior management rewards people who report the mistakes they have made” (6%) and “The anesthesia attendings in this department are overworked” (32%; reverse coded).


One hundred thirty-six surveys (40%) in Phase 1 and 123 surveys (36%) in Phase 2 included comments in response to at least one of the two open-ended questions. Answers to the question about safety improvement suggestions were classified into 248 and 159 comments in Phases 1 and 2, respectively. The most frequently noted categories of comments in each phase are shown in Table 4. The most frequently suggested category of improvement involved “production pressure” (e.g., “Production pressure often forces us to go ahead with cases that should be delayed;” “Decrease pressure to increase production”), with “staffing” (e.g., “Institute a more organized call team, especially in PM;” “Attending coverage is stretched too thin. Need more staff”) and “organizational leadership actions regarding safety” (heterogeneous comments suggesting the need for leadership to take actions about perceived safety issues) nearly as frequent. Although overall the suggestions were less frequent in Phase 2, the relative reductions in suggestions about staffing levels and feedback of information were most noticeable. A relatively large increase was found in the rate of comments about “organizational leadership actions regarding safety.”

Table 4:
Summary of Comments from Phases 1 and 2 to Question “What Can be Done to Improve Patient Safety in Your Department?”


Discussion

This study measured elements of safety climate (attitudes) in six academic anesthesia departments in close geographical proximity and of varying size. Although we were motivated by an interest in testing a hypothesis about a possible impact of crisis management training on overall safety climate, in retrospect, that was not a reasonable expectation for several reasons explained below. Although the specific finding of a lack of change probably says little about the impact of that single intervention, it does suggest useful reminders about the challenges in effecting behavior change. What is of greater interest is that we identified a set of safety climate indicators for which there are statistically significant differences among the hospitals and across time, but not likely related to this intervention. This is a reminder of how safety climate varies over place and time. Moreover, we see that the different elements of safety climate vary considerably as well, which suggests that some need more attention than others. We consider below how to interpret and use the data and compare the survey and results to those reported elsewhere. We examine the impact of the anesthesia CRM faculty training, the implications for safety culture, and the limitations of the research.

Safety Climate Dimensions and Meaning

The eight factors are similar to the kinds of factors that have been identified in other health care climate surveys, for example the Operating Room Management Attitudes Questionnaire/Safety Attitudes Questionnaire (ORMAQ/SAQ), originally developed in aviation and then applied to health care, including a survey of anesthesia departments in the United Kingdom,12,18 and a survey developed by the Agency for Healthcare Research and Quality.19 These surveys have similar intent but differ in content and wording from each other and from our survey.

The existence of eight factors in our study demonstrates that safety climate is not a unitary concept. Respondents distinguished dimensions that varied in content as well as level or locale. For example, the second factor that we labeled “report mistakes” contains items specific to reporting or revealing mistakes or problems and is at an interpersonal level regarding how supervisors will discipline someone or how coworkers will lose respect. The seventh factor, “ask for help,” and the eighth factor, “reveal mistakes,” are even more personal, reporting how the respondent feels about these behaviors and whether they perform them. The first factor, “safety priority,” the third factor, “safety valued,” and the fifth factor, “management support,” have content about departmental values and senior management behaviors. The sixth factor, “safe workload,” is about overwork at both personal and departmental levels. The fourth factor, “emergency teamwork,” is specific to emergency situations and team cooperation. At this time, there is no consensus on the content of safety climate, and therefore different health care organizations are likely to focus on different dimensions in their efforts to assess themselves and improve.

The open-ended suggestions are most meaningful to each department for identifying issues of concern to its staff. That about one-third of all respondents made at least one suggestion is, we think, indicative of a strong interest in the need for more improvements in safety in anesthesia, despite the substantial improvements that many perceive for the specialty overall. It seems consistent that the “production pressure” and “staffing” response rates both decreased considerably as a percentage of total responses and that it was only the safe workload scale that improved between Phase 1 and Phase 2. Although we cannot identify any specific actions taken by the departments as a whole, perhaps increases in staffing over this time frame contributed to these responses to the specific and open-ended questions. The 80-hr work week limitation was also introduced in this time frame (see below).

Overall, the open-ended suggestions for safety improvements focus on an overlapping but somewhat different set of safety characteristics than are captured in our eight factors. Practitioners have a strong interest in improving their immediate working conditions by redesigning equipment, creating specific safety actions, and reporting problems in a blame-free environment. Much of the content of the safety climate survey deals with attitudes and values, including senior management support, but has less focus “on the ground” in specific work practices and relationships. This suggests that additional items could be written to capture more of these proximate elements of safety climate.

Impact of CRM Faculty Training

Contrary to our expectations, the study gives no evidence that CRM faculty training produced any overall improvement in safety climate in the experimental hospitals, compared to the control hospitals. This is despite the very positive faculty evaluations of the training immediately after the course and about 1 yr later.14 Other researchers have had similar disappointment in seeking evidence for the impact of interventions based on safety climate measures.20 There are two possibilities for this result: either no such change occurred, or the limitations of our study precluded our finding it. The limitations are discussed in a later section. But, we believe that our expectation that a single day of training for faculty would impact safety climate measured up to 3 yr later was, in retrospect, naïve. Most of the individual questions and scales in the survey had no direct relationship to the training so expecting an indirect effect was unwarranted.

The fact that so many faculty who completed the survey had participated in simulation-based CRM training before this new program was introduced was unexpected when the study was conceived. Training had been piloted in the early 1990s in these academic programs, and some residents who joined the faculty during those years had participated in training during their residency. Thus, the increase in exposure to this kind of training was only from 57% to 92% of respondents between Phase 1 and Phase 2 (approximately 3 yr), far less than expected, which could have contributed to the lack of impact of the intervention. Despite the relatively high response rate, this was a biased sample in this respect.

We did find some overall improvement in safety climate across all hospitals during the time of our study, against a background of considerable variation among hospitals. We did not attempt to track other potential organizational interventions or events that may have influenced safety attitudes during the interval between surveys. We retrospectively sought to identify possible confounds but found almost no candidates. In one hospital, a series of adverse events (not anesthesia related) led to strong hospital-wide safety actions. One chair left and moved to one of the other hospitals, but the leadership changes were not traumatic. The Accreditation Council for Graduate Medical Education rule limiting resident work hours went into effect in July 2003, 1 yr before the second survey, but generally did not affect the working hours of residents. Yet it curiously coincides with the improvement in the “safe workload” scale, as well as the relatively fewer comments about production pressure and staffing, although we draw no cause-and-effect relationship.

Perceptions of one important aspect of climate that was addressed in the training, asking for help, also did not change. A possible reason is that both the scale score and scores of the individual questions making up the scale started at a relatively high level. Even for climate characteristics that are addressed in any training program, it should be expected that changing attitudes and behaviors requires a more comprehensive intervention that is periodically reinforced. Thus, because of the strong positive perceptions of the CRM training for anesthesia faculty, it is being continued on a regular basis at the departments involved in the study.

Of particular interest is the wide disparity in perceptions of debriefing by residents and attendings. Residents are substantially less likely than attending staff to perceive that they are always debriefed after a difficult clinical situation. That is, the attendings believe they do it and the residents largely do not agree. The simulation training intervention did not succeed in motivating attendings to conduct such debriefings with sufficient depth or regularity to impress their residents, nor was there any follow-up after training to reinforce this behavior. Unfortunately, the baseline phase of survey data was not analyzed quickly enough to recognize the preexisting disparity between residents and attendings on this point; had we seen that difference we might have enhanced our training and follow-up. Again, such behavior changes typically require more practice than this single intervention provided and periodic reinforcement as dictated by change management precepts.21

Safety Climate Variability by Hospital and Provider Type

We consistently found significant differences in safety climate across hospitals, which is consistent with other studies.5,11 Studies by Flin et al. and by Sexton et al. reported differences across hospitals specific to anesthesia and the overall operating room respectively.11,12 In our study, the highest and lowest scoring hospitals had participated in training, yet they exhibited considerable differences. These findings further demonstrate the ability of a safety climate measurement process to find differences that could identify clinical and management issues that might be addressed with specific interventions.

Attendings, residents, and CRNAs responded differently to the safety climate survey, with residents having the most negative view and CRNAs the most positive view. The number of CRNAs was relatively small so this finding should be interpreted cautiously. Dramatic differences in some safety climate measures have been identified in other health care settings. Some studies have found differences between nurses and physicians,11,22,23 whereas other studies of more diverse clinician populations have found few nurse-physician differences.5

These differences between hospital anesthesia departments and provider types suggest that safety climate can be measured well enough to distinguish among hospitals, provider groups, and time periods. The multidimensionality of safety climate and the variation in how it is measured in the literature suggest that climate measures need to be developed carefully, considering both accepted, common measures and the particular issues at the local site. The differences among provider types, smaller in magnitude than the differences among hospital departments, are noteworthy but seem of secondary priority. If a particular hospital were to measure large differences in perceived climate across provider groups, this would be a signal for concern.

Limitations and Future Research

Since we began this study in 2001, many safety climate dimensions have been described and several measures came into common use.4,8,24 There remains no standard way to measure safety climate or absolute measure of what constitutes an acceptably good climate. Most instruments are used to compare across institutions rather than to understand culture in any detail. Additional work is needed to establish a broad and useful safety climate questionnaire for anesthesia and/or to ensure that the surveys that are used include anesthesia issues.

Surveys are always limited by the degree of candor in responses and any biases in the sample. We have no reason to believe that the respondents were not candid, since the survey was anonymous. Our response rate was approximately 40%, which is relatively good for such surveys involving physicians,5,25 although much higher response rates have been reported.11,22,23 The significant differences in response rates across hospitals (but not comparing experimental versus control conditions), provider groups, and phase present further challenges for interpretation. It is not obvious whether hospitals or provider groups with higher response rates have more helpful people, less busy people, or more safety-conscious people. Even if they were more safety conscious, would that produce higher or lower ratings of safety climate (e.g., would their expectations be higher and therefore their ratings lower)? The lower response rate in Phase 2 may have been due to fatigue with questions about safety culture or any number of other factors. Overall, it seems difficult to use response rate as an explanation for the lack of effect of the treatment, or for the differences in ratings of workload across time, or for the differences across hospitals. Our sample size per hospital was modest, giving us little statistical power to examine complex interrelationships among these variables. Multicenter trials of interventions may be required.

Ultimately, the value of surveys of safety climate is in how the information is used to drive change. Although climate indicators suggest places to improve attitudes and behaviors, they do not provide a recipe for culture change. The uneven level of positive safety climate response indicates that an optimum safety climate is not yet in force in the anesthesia departments we surveyed. Health care, even anesthesiology, still has a significant way to go to create and sustain the type of safety culture that is believed necessary to field a truly high reliability organization.26


The authors gratefully acknowledge the participation of the anesthesia providers of the six hospitals involved in this study. We thank the several people who advised and assisted at various stages of the project or contributed in other ways, including Scott Segal, MD; Jennifer Daley, MD; Allan Frankel, MD; Christine Vogeli, PhD; and Manju Gookale.


Simulation-Based Training Program for Anesthesia Faculty

The Anesthesia Crisis Resource Management for Faculty (ACRMF) training sessions were designed for four to five participants, although as few as three and as many as six participated in a single session. Session length varied between 6 and 7 h depending on the number of participants. Because the insurance premium was reduced for those with prior training, initial preference was given to those without prior training. Sessions were run between March 2001 and March 2004 with schedules constrained by the available capacity of the Center for Medical Simulation (CMS) and the anesthesia faculty clinical responsibilities.

ACRMF session content was based on the ACRM model described by Howard et al. and Gaba et al.13,27 Sessions were conducted by CMS faculty and staff, who since 1994 have regularly conducted variants of simulation-based CRM sessions for emergency medicine, intensive care, interventional radiology, in-hospital cardiac arrest teams, and other groups and teams.14,15** The CMS Crisis Resource Management (CRM) curriculum is based on a set of five principles aimed at improving management of critical events by individuals and teams: Role Clarity, Communication, Resources, Support, and Global Assessment.

Simulations were conducted in a highly realistic recreation of an operating room or radiology suite setting. Scenarios were designed to address situations that plausibly would be encountered by faculty in an academic setting and to elicit discussion about several CRM principles. One scenario included a situation that evoked discussion about debriefing of residents by faculty after a significant clinical event. Depending on the scenario, the patient was one of two different models of high-fidelity mannequin simulator (Medsim Patient Simulator or METI HPS®), sometimes supplemented with standardized patients. CMS staff acted in the roles of surgeon, nurses, radiologist, and other personnel and gave voice to the patient (from the simulation control room) as required for each scenario. Trainees not participating in a simulation scenario observed from behind a one-way mirror. One trainee typically remained in the conference room and was called into the procedure room if and when the anesthesiologist in the “hot seat” role requested another anesthesiologist to assist.


Factor Analysis to Develop Safety Climate Scales

A principal axis factor analysis was conducted on the 45 questions given to all respondents. Results revealed 11 initial factors explaining 60.7% of variance, and an acceptable Kaiser-Meyer-Olkin measure of sampling adequacy (KMO = 0.896). The scree plot showed an inflection point at seven factors, accounting for 50.6% of the variance. Using an oblimin rotation, we assigned each question to one factor based on its highest factor loading, excluding four questions that loaded below 0.30 on all factors. Questions associated with each factor were then examined for content and reliability (Cronbach’s α); 10 questions that made their scale less meaningful or less reliable were removed, and items were reverse-scored as appropriate so that high scores indicated a stronger safety culture. Finally, the 14 questions that did not appear on any of these seven scales were factor analyzed separately, and 5 questions that appeared to constitute an additional scale with acceptable reliability were included as an eighth scale.
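The scale-reliability step above can be illustrated with a minimal computation of Cronbach’s α, the coefficient used to screen the candidate scales. The sketch below is illustrative only (the item data are hypothetical, not the study’s survey responses) and uses the standard formula α = (k/(k−1))·(1 − Σ item variances / variance of respondent totals):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a candidate scale.

    items: list of k lists, each holding one item's scores
    across the same n respondents (columns of a score matrix).
    """
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    # Total score per respondent, summed across the k items.
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Hypothetical 5-item scale rated by 6 respondents on a 1-5 Likert scale.
scale = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 4],
    [5, 5, 2, 4, 3, 5],
    [3, 4, 3, 4, 2, 4],
    [4, 5, 3, 5, 2, 5],
]
print(round(cronbach_alpha(scale), 2))  # → 0.95, a highly internally consistent scale
```

In practice an item whose removal raises α (or whose content does not fit the factor) would be dropped, which is the kind of pruning that removed 10 questions from the seven scales.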


1. Carroll JS, Quijada MA. Redirecting traditional professional values to support safety: changing organisational culture in health care. Qual Saf Health Care 2004;13 Suppl 2:ii16–21
2. Schein EH. Organizational culture and leadership. 2nd ed. San Francisco: Jossey-Bass, 1992
3. Flin R, Mearns K, O’Connor P, Bryden R. Measuring safety climate: identifying the common features. Saf Sci 2000;34:177–92
4. Singla AK, Kitch BT, Weissman JS, Campbell EG. Assessing patient safety culture: a review and synthesis of the measurement tools. J Patient Saf 2006;2:105–15
5. Singer SJ, Gaba DM, Geppert JJ, Sinaiko AD, Howard SK, Park KC. The culture of safety: results of an organization-wide survey in 15 California hospitals. Qual Saf Health Care 2003;12:112–8
6. Gershon RR, Stone PW, Bakken S, Larson E. Measurement of organizational culture and climate in healthcare. J Nurs Adm 2004;34:33–40
7. Makary MA, Sexton JB, Freischlag JA, Millman EA, Pryor D, Holzmueller C, Pronovost PJ. Patient safety in surgery. Ann Surg 2006;243:628–32, discussion 32–5
8. Colla JB, Bracken AC, Kinney LM, Weeks WB. Measuring patient safety climate: a review of surveys. Qual Saf Health Care 2005;14:364–6
9. Sexton JB, Helmreich RL, Neilands TB, Rowan K, Vella K, Boyden J, Roberts PR, Thomas EJ. The safety attitudes questionnaire: psychometric properties, benchmarking data, and emerging research. BMC Health Serv Res 2006;6:44
10. Sexton JB, Holzmueller CG, Pronovost PJ, Thomas EJ, McFerran S, Nunes J, Thompson DA, Knight AP, Penning DH, Fox HE. Variation in caregiver perceptions of teamwork climate in labor and delivery units. J Perinatol 2006;26:463–70
11. Sexton JB, Makary MA, Tersigni AR, Pryor D, Hendrich A, Thomas EJ, Holzmueller CG, Knight AP, Wu Y, Pronovost PJ. Teamwork in the operating room: frontline perspectives among hospitals and operating room personnel. Anesthesiology 2006;105:877–84
12. Flin R, Fletcher G, McGeorge P, Sutherland A, Patey R. Anaesthetists’ attitudes to teamwork and safety. Anaesthesia 2003;58:233–42
13. Howard S, Gaba D, Fish K, Yang G, Sarnquist FH. Anesthesia crisis resource management training: teaching anesthesiologists to handle critical incidents. Aviat Space Environ Med 1992;63:763–70
14. Blum RH, Raemer DB, Carroll JS, Sunder N, Feinstein DM, Cooper JB. Crisis resource management training for anaesthesia faculty: a new approach to continuing education. Med Educ 2004;38:45–55
15. Holzman R, Cooper J, Gaba D, Philip JH, Small SD, Feinstein D. Anesthesia crisis resource management: real-life simulation training in operating room crises. J Clin Anesth 1995;7:675–87
16. Nunnally JC. Psychometric theory. 2nd ed. New York: McGraw-Hill, 1978
17. Olejnik S, Algina J. Generalized eta and omega squared statistics: measures of effect size for some common research designs. Psychol Methods 2003;8:434–47
18. Sexton JB, Thomas EJ, Helmreich RL. Error, stress, and teamwork in medicine and aviation: cross sectional surveys. BMJ 2000;320:745–9
19. Nieva VF, Sorra J. Safety culture assessment: a tool for improving patient safety in healthcare organizations. Qual Saf Health Care 2003;12 Suppl 2:ii17–23
20. Alvarado CJ, Carayon P, Hundt AS. Patient safety climate (PSC) in outpatient surgery centers—Part Two, Proceedings of the Human Factors and Ergonomics Society 49th Meeting, 2005, Orlando, FL
21. Kotter JP. Leading change. Boston: Harvard Business School Press, 1996
22. Pronovost PJ, Weast B, Holzmueller CG, Rosenstein BJ, Kidwell RP, Haller KB, Feroli ER, Sexton JB, Rubin HR. Evaluation of the culture of safety: survey of clinicians and managers in an academic medical center. Qual Saf Health Care 2003;12:405–10
23. Makary MA, Sexton JB, Freischlag JA, Holzmueller CG, Millman EA, Rowen L, Pronovost PJ. Operating room teamwork among physicians and nurses: teamwork in the eye of the beholder. J Am Coll Surg 2006;202:746–52
24. Kho ME, Carbone JM, Lucas J, Cook DJ. Safety climate survey: reliability of results from a multicenter ICU survey. Qual Saf Health Care 2005;14:273–8
25. Thomas EJ, Sexton JB, Helmreich RL. Discrepant attitudes about teamwork among critical care nurses and physicians. Crit Care Med 2003;31:956–9
26. Gaba DM, Singer SJ, Sinaiko AD, Bowen JD, Ciavarelli AP. Differences in safety climate between hospital personnel and naval aviators. Hum Factors 2003;45:173–85
27. Gaba D, Fish K, Howard S. Crisis Management in Anesthesiology. Philadelphia: Churchill Livingstone, 1994

*A non-profit educational organization originally formed in 1993 by the anesthesia departments affiliated with Harvard Medical School.
© 2008 International Anesthesia Research Society