Comparing Trainee and Staff Perceptions of Patient Safety Culture

Bump, Gregory M. MD; Coots, Nordisha MHA; Liberi, Cindy A.; Minnier, Tamra E. RN, MSN; Phrampus, Paul E. MD; Gosman, Gabriella MD; Metro, David G. MD; McCausland, Julie B. MD, MS; Buchert, Andrew MD

doi: 10.1097/ACM.0000000000001255
Research Reports

Purpose The Accreditation Council for Graduate Medical Education implemented the Clinical Learning Environment Review (CLER) program to evaluate and improve the learning environment in teaching hospitals. After a CLER visit, hospitals receive a report with observations about patient safety, among other domains; the accuracy of these observations is unknown. Thus, the authors set out to identify complementary measures of trainees’ patient safety experience.

Method In 2014, they administered the Hospital Survey on Patient Safety Culture to residents and fellows and general staff at 10 hospitals in an integrated health system. The survey measured perceptions of patient safety in 12 domains and incorporated two outcome measures (number of medical errors reported and overall patient safety). Domain scores were calculated and compared between trainees and staff.

Results Of 1,426 trainees, 926 responded (65% response rate). Of 18,815 staff, 12,015 responded (64% response rate). Trainees and staff scored five domains similarly—communication openness, facility management support for patient safety, organizational learning/continuous improvement, teamwork across units, and handoffs/transitions of care. Trainees scored four domains higher than staff—nonpunitive response to error, staffing, supervisor/manager expectations and actions promoting patient safety, and teamwork within units. Trainees scored three domains lower than staff—feedback and communication about error, frequency of event reporting, and overall perceptions of patient safety.

Conclusions Generally, trainees’ perceptions of patient safety culture were comparable to, or more favorable than, those of staff, although trainees also identified opportunities for improvement. Hospitals can use measures of patient safety culture to complement CLER visit reports and improve patient safety.

G.M. Bump is associate professor of medicine, University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

N. Coots is administrative fellow, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

C.A. Liberi is director of patient safety, Wolff Center for Quality, Safety, and Innovation, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

T.E. Minnier is chief quality officer, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

P.E. Phrampus is associate professor of emergency medicine and anesthesiology, University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

G. Gosman is associate professor, Department of Obstetrics, Gynecology, and Reproductive Sciences, University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

D.G. Metro is professor of anesthesiology, University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

J.B. McCausland is associate professor, Departments of Medicine and Emergency Medicine, University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

A. Buchert is assistant professor of pediatrics, University of Pittsburgh School of Medicine and University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: The University of Pittsburgh institutional review board deemed this study exempt from review.

Correspondence should be addressed to Gregory M. Bump, University of Pittsburgh Medical Center, UPMC Montefiore, 9 South, 200 Lothrop St., Pittsburgh, PA 15213-2582; telephone: (412) 802-6648; e-mail: bumpgm@upmc.edu.

U.S. hospitals have attained only negligible to modest improvements in patient safety, despite years of trying, and medical errors are still common.1,2 Evidence from the field of obstetrics suggests that completing graduate medical education (GME) training in hospitals with lower rates of complications is associated with achieving better patient outcomes throughout a physician’s career. This effect is independent of a physician’s board certification and licensing exam scores.3 Prior efforts to improve patient safety may have been unsuccessful because the interventions were oriented toward practicing physicians with entrenched practice styles. Focusing on younger physicians whose practice styles are still developing may be more successful. Accordingly, the Accreditation Council for Graduate Medical Education (ACGME) initiated the Clinical Learning Environment Review (CLER) program to evaluate the clinical learning environment in GME settings.4 The goal of this program is to assess and improve the educational environment, including its focus on patient safety and quality improvement, so trainees develop better practice patterns and achieve better patient outcomes as they move into independent practice.

During CLER visits, ACGME representatives tour the clinical environment and conduct impromptu interviews with staff, patients, and providers. Interspersed are semistructured group interviews with residents, program directors, faculty, and hospital leaders. Anonymous audience response technology is used to understand trainees’, faculty members’, and program directors’ perceptions about trainees’ integration and knowledge of their hospital’s support system for patient safety and the other CLER program focus areas. At the conclusion of the three-day CLER visit, the ACGME provides a report of their observations to the program’s sponsoring institution and hospital leaders. This report allows institutions to identify the strengths and opportunities for improvement in their learning environment.

Because CLER visits are still new, no national standards exist against which institutions can compare their outcomes. In addition, CLER reports do not identify which domains of patient safety are most pressing for institutions to address. Thus, institutions must interpret the report and set their priorities without external benchmarks or mandates. Furthermore, the accuracy of the report’s observations about the learning environment has not been verified. For these reasons, hospital leaders may be hesitant to rapidly enact changes in response to their CLER report. We were therefore interested in identifying complementary measures of trainees’ patient safety experience.

Patient safety culture (PSC) measurement is a valid assessment of an institution’s safety climate. PSC represents the investment of a health care institution and its workers in identifying and improving processes that avoid harm to patients. Institutions with a robust PSC cultivate an environment that encourages individuals to report errors and hazards without fear of reprimand while empowering frontline staff to improve care. Institutions with a better PSC encourage collaboration across ranks and disciplines to seek solutions to patient safety problems.5 Creating a positive PSC is consistent with ACGME goals and is endorsed in the CLER Pathways to Excellence document.6 Currently, trainees’ experience with PSC measurement is limited, so we do not know how they react to PSC compared with other frontline health care workers.

We set out to complete a comparative cross-sectional analysis of PSC at a large GME-sponsoring institution that includes diverse hospital types. In particular, we compared trainees’ perceptions of the PSC with general staff members’ perceptions within a large integrated health system and at individual hospitals. We believe that PSC measurement can be a useful tool to complement CLER visit reports to improve the clinical learning environment as it relates to patient safety and quality improvement.

Method

Settings and participants

In May and June of 2014, we administered the Agency for Healthcare Research and Quality Hospital Survey on Patient Safety Culture (HSOPSC) to 1,426 residents and fellows in 73 GME training programs at 10 hospitals within our integrated health system. The 10 hospitals included a tertiary care center, a psychiatric hospital, a women’s and infants’ hospital, a pediatric hospital, and six community hospitals. To ensure confidentiality, we included only GME programs with at least four trainees. In June and July of 2014, we administered the survey to 18,815 general staff members at the same 10 hospitals (see Table 1 for the types of staff included). The earlier administration dates for trainees were chosen to minimize conflict with other compulsory trainee surveys. All surveys were administered anonymously.

Table 1

We used SurveyMonkey to administer the surveys, which were sent by e-mail up to three times during the study period. Program directors, program coordinators, and clinical leaders encouraged trainees to complete the survey. No incentives were offered to trainees for survey completion. Two randomly awarded $50 gift certificates were offered to staff for survey completion.

Survey

The HSOPSC is a validated instrument that measures perceptions of patient safety in 12 domains and incorporates two outcome measures.7 Previous psychometric analyses have shown that all 12 dimensions are reliable and valid measures of PSC at the individual, unit, and hospital levels.8,9 The survey was designed for health care workers from varied disciplines, including physicians, nurses, pharmacists, technicians, and administrators. It includes three to four statements for each of the 12 domains (see List 1). Respondents use five-point Likert agreement or frequency scales (“strongly disagree” to “strongly agree” or “never” to “always”) to respond to each statement. The single-item outcome measures ask about (1) the number of medical errors a respondent reported in the preceding 12 months (defined as errors of any type, regardless of whether they resulted in patient harm); and (2) overall patient safety.
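
As a concrete, simplified illustration of the instrument’s shape, the sketch below models such a survey in Python. Only the domain names follow List 1; the statement wording, per-domain item counts, and middle scale labels are hypothetical placeholders, not the actual HSOPSC items.

```python
# Illustrative model of an HSOPSC-like instrument: 12 domains of three to four
# Likert statements each, plus two single-item outcome measures. Statement
# wording and item counts are placeholders, not the real survey.
from dataclasses import dataclass

AGREEMENT_SCALE = ["strongly disagree", "disagree", "neutral", "agree", "strongly agree"]
FREQUENCY_SCALE = ["never", "rarely", "sometimes", "most of the time", "always"]

@dataclass
class Statement:
    text: str
    negatively_worded: bool  # reverse-scored when responses are classified
    frequency_scale: bool    # answered on the frequency scale, not agreement

DOMAINS: dict[str, list[Statement]] = {
    "Teamwork within units": [
        Statement("Staff in this unit support one another.", False, False),
        # ... remaining items elided
    ],
    "Frequency of event reporting": [
        Statement("Mistakes are reported even when caught before harm.", False, True),
        # ... remaining items elided
    ],
    # ... 10 further domains as named in List 1
}

OUTCOME_MEASURES = [
    "Overall patient safety grade (1 = excellent to 5 = failing)",
    "Number of medical errors reported in the preceding 12 months (none to 21 or more)",
]
```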

Unlike nurses, pharmacists, and patient care technicians who generally work on one defined unit with a clearly identified supervisor, residents and fellows often work in multiple units with multiple supervising faculty members. Therefore, we used an adapted version of the HSOPSC for trainees. These adaptations have been described previously.10,11 To summarize, we added a definition of “event reporting” and replaced the word “staff” throughout the survey with the phrase “resident/fellow” and the words “hospital work area” and “unit” with “hospital.” Because trainees often work in several hospitals, we defined their hospital as the hospital in which they spend the majority of their time. In addition, we clarified that the phrase “agency/temporary staff” meant moonlighters or cross-covering physicians and changed the term “manager” to “program director.” Otherwise, we maintained the statement format, statement order, and response options from the original HSOPSC in the survey that trainees received. We made no changes to the original HSOPSC in the survey that staff received.

Analysis

Scoring the HSOPSC consisted of several steps.7 First, to calculate response rates, we divided the number of respondents by the total number of possible respondents and then multiplied by 100. Next, we classified individual responses to the positively worded survey statements as “positive” if the response was “agree/strongly agree” or “most of the time/always.” As the survey also incorporated several negatively worded statements, we scored responses to those statements as “positive” if respondents answered “disagree/strongly disagree” or “rarely/never.” For each domain, we calculated a domain score by dividing the number of positive responses by the number of statements in that domain and then multiplying by 100. Domain scores ranged from 0 to 100, and higher scores represented a better PSC. The HSOPSC also asked about two outcome measures—overall patient safety and the number of medical errors reported in the preceding 12 months. The overall patient safety grade responses ranged from 1 (excellent) to 5 (failing), and the numbers of medical errors were categorized into six groups, from “none” to “21 or more.”
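
To make the scoring steps concrete, here is a minimal sketch in Python of the response classification and domain-score calculation described above. The responses shown are hypothetical, and the study’s actual analysis was performed in Stata.

```python
# Minimal sketch of the HSOPSC scoring steps described above.
# The response data below are hypothetical.

POSITIVE_AGREEMENT = {"agree", "strongly agree"}
POSITIVE_FREQUENCY = {"most of the time", "always"}
NEGATIVE_AGREEMENT = {"disagree", "strongly disagree"}
NEGATIVE_FREQUENCY = {"rarely", "never"}

def is_positive(response: str, negatively_worded: bool, frequency_scale: bool) -> bool:
    """Classify one Likert response as 'positive' per the rules above:
    agreement (or high frequency) on positively worded statements,
    disagreement (or low frequency) on negatively worded ones."""
    response = response.lower()
    if negatively_worded:
        return response in (NEGATIVE_FREQUENCY if frequency_scale else NEGATIVE_AGREEMENT)
    return response in (POSITIVE_FREQUENCY if frequency_scale else POSITIVE_AGREEMENT)

def domain_score(responses) -> float:
    """Percent positive across all responses to a domain's statements (0-100)."""
    positives = sum(is_positive(r, neg, freq) for r, neg, freq in responses)
    return 100 * positives / len(responses)

def response_rate(respondents: int, possible: int) -> float:
    """Response rate as a percentage of possible respondents."""
    return 100 * respondents / possible

# Hypothetical responses for one domain: (response, negatively worded?, frequency scale?)
pooled = [
    ("strongly agree", False, False),
    ("agree", False, False),
    ("disagree", True, False),  # disagreement with a negatively worded item is positive
    ("neutral", False, False),
]
print(f"Response rate: {response_rate(926, 1426):.0f}%")  # 65%
print(f"Domain score: {domain_score(pooled):.0f}")        # 75
```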

When interpreting the HSOPSC domain scores, the Agency for Healthcare Research and Quality recommends using an absolute five-percentage-point difference to signify meaningful differences in PSC results. Thus, a PSC score that is five percentage points higher than that of a comparator signifies a better PSC in that domain. Similarly, a PSC score that is five percentage points lower than that of a comparator signifies a worse PSC in that domain.12 These recommendations pertain to comparisons between individual hospitals and national norms published annually. We used this same convention when comparing trainees’ perceptions and staff members’ perceptions. For statistical purposes, we also used binomial logistic regression to correct for missing responses in all comparisons. For all statistical analyses, we used Stata 13.0 (StataCorp LP, College Station, Texas).
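
Applying this convention reduces to a simple threshold comparison. The sketch below illustrates it with invented domain scores; these are not the study’s results.

```python
# Classifying trainee vs. staff domain scores with the five-percentage-point
# convention described above. The scores below are invented for illustration.

MEANINGFUL_DIFFERENCE = 5.0  # absolute percentage points

def compare(trainee_score: float, staff_score: float) -> str:
    """Label a domain comparison per the AHRQ convention."""
    diff = trainee_score - staff_score
    if diff >= MEANINGFUL_DIFFERENCE:
        return "trainees higher"
    if diff <= -MEANINGFUL_DIFFERENCE:
        return "trainees lower"
    return "similar"

hypothetical = {
    "Teamwork within units": (82.0, 74.0),         # trainees higher
    "Handoffs/transitions of care": (48.0, 46.0),  # similar
    "Frequency of event reporting": (52.0, 60.0),  # trainees lower
}
for domain, (trainee, staff) in hypothetical.items():
    print(f"{domain}: {compare(trainee, staff)}")
```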

The University of Pittsburgh institutional review board deemed this study exempt from review.

Results

The HSOPSC was administered to 1,426 residents and fellows in 73 GME training programs at 10 hospitals within the University of Pittsburgh Medical Center Health System. A total of 926 trainees responded from 10 hospitals for a response rate of 65% (range 37%–86% by hospital). Similarly, the survey was administered to 18,815 staff. A total of 12,015 staff responded for a response rate of 64% (range 49%–88% by hospital). Respondent demographics are displayed in Table 1. Staff respondents were mostly nonphysicians; nurses were most common (39%; 4,737/12,015).

A comparison of trainees’ and staff members’ aggregate PSC scores is shown in Figure 1. Trainees’ scores were similar to staff members’ scores in five domains: communication openness, facility management support for patient safety, organizational learning/continuous improvement, teamwork across units, and handoffs/transitions of care. Trainees had higher PSC scores in four domains: nonpunitive response to error, staffing, supervisor/manager expectations and actions promoting patient safety, and teamwork within units. Trainees had lower PSC scores in the remaining three domains: feedback and communication about error, frequency of event reporting, and overall perceptions of patient safety.

Figure 1

Recognizing that PSC varies by hospital (and that the HSOPSC has been validated at the hospital level but not the health system level), we also compared results at each individual hospital. Because several of our community hospitals support a limited number of GME programs, we only report the findings from our largest hospital, where 511 trainees and 2,111 staff responded. At this hospital, we found nearly identical results to those at the health system level (see Figure 2 for hospital-level results). In addition, we noted several trends at our smaller hospitals. Smaller hospitals tended to have higher PSC scores. In our health system, the smaller hospitals are also community based. It is unclear whether these higher PSC scores were a product of hospital size or setting. Across hospitals, handoffs/transitions of care and nonpunitive response to error were the domains with the lowest PSC scores according to trainees.

Figure 2

Trainees gave their hospital an overall patient safety grade of good (mean 2.2 [standard deviation 0.9]), while staff gave their hospital a nearly identical score (mean 2.2 [standard deviation 0.7]) (P = .54). The majority of trainees (51%; 466/913) responded that they had not reported a medical error in the preceding 12 months, compared with 53% (5,768/10,883) of staff who had not reported a patient safety event in the same time frame. This difference was not statistically significant (P = .17).
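
For illustration only, the comparison of reporting proportions quoted above can be approximated with a two-proportion z-test. This is not the authors’ method (they used binomial logistic regression in Stata), so the resulting P value will differ from the reported P = .17.

```python
# Illustrative two-proportion z-test on the reporting figures quoted above.
# NOT the authors' method (binomial logistic regression in Stata), so the
# P value here will differ from the reported P = .17.
from statsmodels.stats.proportion import proportions_ztest

no_report = [466, 5768]   # trainees, staff reporting no error in 12 months
answered = [913, 10883]   # respondents who answered the item

z_stat, p_value = proportions_ztest(no_report, answered)
print(f"trainees: {no_report[0]/answered[0]:.1%}, "
      f"staff: {no_report[1]/answered[1]:.1%}, p = {p_value:.2f}")
```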

Discussion

Physicians who train in high-performing hospitals provide higher-quality care later in their careers. Presumably, when immersed in a high-quality care environment, trainees adopt practices, independent of medical knowledge acquisition, that enable them to provide better care later in their careers.3 The ACGME’s CLER visits are one intervention designed to assess clinical environments and give feedback to institutions about their suitability as learning environments. Following a CLER visit, the ACGME provides a report on the institution’s learning environment but does not provide comparative analytics of any kind. This lack of comparative data makes it challenging for a hospital (or training program) to prioritize areas of improvement and leads to several important questions. How should academic medical centers go about understanding their weaknesses related to patient safety and quality improvement? What aspects of the clinical environment discourage positive safety attitudes and practices? An assessment of PSC could be useful to supplement the CLER report findings and make priority decisions about safety and quality interventions.

Using a validated survey, we demonstrated that trainees’ perceptions of PSC were congruent with, or more favorable than, those of staff in a large integrated health system, as well as at a large tertiary care academic hospital. Trainees’ perceptions of PSC in the learning environment have not been widely studied. A previous analysis compared trainees’ perceptions with national data derived mostly from independent (i.e., established) practitioners. The researchers did not know whether the differences between the trainee results and the national data in that study reflected the trainees’ inexperience or the local culture.10 The observed differences may have been attributable to the fact that trainees had less exposure to the PSC of their institutions because they were in their work environments for fewer years. We believe that, with the addition of our current findings, it is clearer that trainees can accurately and fairly assess PSC.

Although trainees and staff generally provided congruent PSC ratings, we did observe some differences. For example, trainees scored four domains higher than staff did. First, to decrease survey ambiguity, we defined supervisors/managers in our survey of trainees as program directors. This change shifted the focus of the survey from hospital leaders in general to specific individuals. Given that residents and fellows have personal relationships with, and interact more often with, their program directors, whereas other health care providers may identify any number of individuals as supervisors, we were not surprised that trainees rated this domain higher than staff did. Second, in the domain of staffing, the higher PSC scores reported by trainees could reflect work hours restrictions and caps on the number of patients on resident/fellow services. Other staff members likely do not experience these same protective limitations, which could have lowered their scores. Third, physicians have previously rated teamwork within units highly in labor and delivery units,13 in a pediatric hospital,14 in operating rooms,15 and in intensive care units.16,17 Physicians seem to evaluate teamwork more favorably than nonphysicians, perhaps because of their position in the team hierarchy. Alternatively, just as physicians overestimate the clarity of their communication,18 they likely also overestimate how well their team members evaluate their leadership. Finally, trainees reported higher PSC scores with regard to nonpunitive response to error compared with staff. Faculty supervisors often use training situations as opportunities for formative feedback and learning, so trainees may accept such circumstances as routine aspects of learning. Conversely, staff may see medical errors as opportunities for blame to be assigned to an individual; thus, their ratings in this domain were lower.

We also noted that trainees’ PSC scores were lower in three domains compared with staff. These domains were feedback and communication about error, frequency of event reporting, and overall perceptions of patient safety. These observations indicate a need for changes in GME. We believe that these findings represent a disconnect between trainees and the hospital systems that support patient safety. Improving such connections should be a priority for institutions. In our study, many staff members had worked in their unit for more than five years. With longer employment, more knowledge and closer personal relationships develop between the employee and the administration. Such familiarity facilitates error reporting and discussions of actions taken to improve safety. We postulate that a closer relationship between frontline care providers and the hospital administration that supports safety is one explanation for the higher PSC scores at the smaller, community-based hospitals in our study and nationally.19 This finding raises an important question: How do we most effectively engage trainees to more closely collaborate with hospital leaders?

Measuring and discussing PSC is an important process for GME programs and hospitals. Previous studies have shown that PSC can be improved. For example, using a multidisciplinary teamwork and communication intervention, Blegen and colleagues showed improvement in multiple domains of PSC.20 Similarly, making multiple enhancements simultaneously led to improvements in the PSC at a large academic medical center.21 However, whether improvements in the PSC are associated with better clinical outcomes is controversial. Two multicenter studies showed that higher PSC scores were associated with better clinical care.22,23 Yet, other work suggests no relationship.24 A recent systematic review concluded that interventions can improve perceptions of PSC and potentially reduce patient harm.25

At our institution, we have made numerous changes to improve the PSC. For example, each GME program is given a report of their HSOPSC results. National benchmarks and data from other local GME programs are also provided for comparison. Programs are encouraged to discuss their results in group meetings with trainees and faculty members to understand trainees’ perspectives. They also are encouraged to identify at least one area for improvement. The GME office offers guidance on this process on an as-needed basis. In addition, the results of the staff and trainee PSC surveys are shared with hospital leaders. Since we started measuring and discussing PSC, more trainees have joined and made substantive contributions to committees that work on quality and safety. In particular, our health system has prioritized venous thromboembolism, nosocomial infections, and procedural complications as areas for improvement. Trainees participate on committees across the health system to share best practices to achieve these goals. Our expectation is that trainees will discuss these projects with their peers to increase their familiarity and comfort with safety practices.

Our study has a number of limitations that warrant comment. First, to improve clarity and ensure that the survey was relevant to trainees, we made minor modifications to the version of the survey that trainees received. These changes may have decreased the survey’s validity because the trainee version did not undergo psychometric analysis. Although we believe that the modifications were minor, they could have affected our results. Second, we administered the survey in only one health system. As PSC varies geographically, we cannot generalize our results beyond our institution.26

In summary, we found that trainees and staff rate the PSC of their institutions similarly. Measuring how these two groups rate PSC can serve multiple goals: monitoring the learning and clinical environments, identifying clinical and educational opportunities for improvement, and longitudinally tracking the impact of quality improvement and patient safety education.

Acknowledgments: The authors would like to thank Andrew Bilderback, MS, for his assistance with all statistical comparisons.

References

1. Landrigan CP, Parry GJ, Bones CB, Hackbarth AD, Goldmann DA, Sharek PJ. Temporal trends in rates of patient harm resulting from medical care. N Engl J Med. 2010;363:2124–2134.
2. Neily J, Mills PD, Eldridge N, et al. Incorrect surgical procedures within and outside of the operating room: A follow-up report. Arch Surg. 2011;146:1235–1239.
3. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA. 2009;302:1277–1283.
4. Weiss KB, Wagner R, Nasca TJ. Development, testing, and implementation of the ACGME Clinical Learning Environment Review (CLER) program. J Grad Med Educ. 2012;4:396–398.
5. Agency for Healthcare Research and Quality. Patient Safety Network. Safety culture. http://psnet.ahrq.gov/primer.aspx?primerID=5. Accessed April 11, 2016.
6. Accreditation Council for Graduate Medical Education. CLER pathways to excellence: Expectations for an optimal clinical learning environment to achieve safe and high quality patient care. 2014. http://www.acgme.org/acgmeweb/Portals/0/PDFs/CLER/CLER_Brochure.pdf. Accessed April 11, 2016.
7. Sorra J, Gray L, Streagle S, Famolaro T, Yount N, Behm J. AHRQ Hospital Survey on Patient Safety Culture: User’s Guide. 2016. Rockville, Md: Agency for Healthcare Research and Quality; AHRQ publication no. 15-0049-EF. http://www.ahrq.gov/sites/default/files/wysiwyg/professionals/quality-patient-safety/patientsafetyculture/hospital/userguide/hospcult.pdf. Accessed April 11, 2016.
8. Blegen MA, Gearhart S, O’Brien R, Sehgal NL, Alldredge BK. AHRQ’s hospital survey on patient safety culture: Psychometric analyses. J Patient Saf. 2009;5:139–144.
9. Sorra JS, Dyer N. Multilevel psychometric properties of the AHRQ hospital survey on patient safety culture. BMC Health Serv Res. 2010;10:199.
10. Bump GM, Calabria J, Gosman G, et al. Evaluating the clinical learning environment: Resident and fellow perceptions of patient safety culture. J Grad Med Educ. 2015;7:109–112.
11. Jasti H, Sheth H, Verrico M, et al. Assessing patient safety culture of internal medicine house staff in an academic teaching hospital. J Grad Med Educ. 2009;1:139–145.
12. Agency for Healthcare Research and Quality. Chapter 6: Comparing your results. In: 2014 User Comparative Database Report. March 2014. http://www.ahrq.gov/professionals/quality-patient-safety/patientsafetyculture/hospital/2014/hosp14ch6.html. Accessed April 11, 2016.
13. Sexton JB, Holzmueller CG, Pronovost PJ, et al. Variation in caregiver perceptions of teamwork climate in labor and delivery units. J Perinatol. 2006;26:463–470.
14. Grant MJ, Donaldson AE, Larsen GY. The safety culture in a children’s hospital. J Nurs Care Qual. 2006;21:223–229.
15. Carney BT, West P, Neily J, Mills PD, Bagian JP. Differences in nurse and surgeon perceptions of teamwork: Implications for use of a briefing checklist in the OR. AORN J. 2010;91:722–729.
16. Chaboyer W, Chamberlain D, Hewson-Conroy K, et al. CNE article: Safety culture in Australian intensive care units: Establishing a baseline for quality improvement. Am J Crit Care. 2013;22:93–102.
17. Thomas EJ, Sexton JB, Helmreich RL. Discrepant attitudes about teamwork among critical care nurses and physicians. Crit Care Med. 2003;31:956–959.
18. Ha JF, Longnecker N. Doctor–patient communication: A review. Ochsner J. 2010;10:38–43.
19. Agency for Healthcare Research and Quality. 2014 User Comparative Database Report. Hospital Survey on Patient Safety Culture. March 2014. http://www.ahrq.gov/professionals/quality-patient-safety/patientsafetyculture/hospital/2014/hosp14tablea-1.html. Accessed April 11, 2016.
20. Blegen MA, Sehgal NL, Alldredge BK, Gearhart S, Auerbach AA, Wachter RM. Improving safety culture on adult medical units through multidisciplinary teamwork and communication interventions: The TOPS project. Qual Saf Health Care. 2010;19:346–350.
21. Paine LA, Rosenstein BJ, Sexton JB, Kent P, Holzmueller CG, Pronovost PJ. Assessing and improving safety culture throughout an academic medical centre: A prospective cohort study. Qual Saf Health Care. 2010;19:547–554.
22. Hansen LO, Williams MV, Singer SJ. Perceptions of hospital safety climate and incidence of readmission. Health Serv Res. 2011;46:596–616.
23. Singer S, Lin S, Falwell A, Gaba D, Baker L. Relationship of safety climate and safety performance in hospitals. Health Serv Res. 2009;44(2 pt 1):399–421.
24. Rosen AK, Singer S, Shibei Zhao, Shokeen P, Meterko M, Gaba D. Hospital safety climate and safety outcomes: Is there a relationship in the VA? Med Care Res Rev. 2010;67:590–608.
25. Weaver SJ, Lubomski LH, Wilson RF, Pfoh ER, Martinez KA, Dy SM. Promoting a culture of safety as a patient safety strategy: A systematic review. Ann Intern Med. 2013;158(5 pt 2):369–374.
26. Agency for Healthcare Research and Quality. Table A-9: Composite-level average percent positive response by geographic region. In: 2012 User Comparative Database Report. December 2012. http://www.ahrq.gov/professionals/quality-patient-safety/patientsafetyculture/hospital/2012/hosp12taba9.html. Accessed April 11, 2016.
Copyright © 2016 by the Association of American Medical Colleges