Driven by rising costs and increasing demands from both consumers and third-party payers, the health care industry has been under significant pressure to reform. Improving patient safety has emerged as one of the most pressing health care challenges. Despite the unprecedented focus on patient safety over the last 10 years, there is little measurable evidence of progress in reducing the occurrence of medical errors (Pronovost et al., 2009). Over 10 years ago, the Institute of Medicine (IOM) report “To Err is Human” estimated that medical errors cause between 44,000 and 98,000 patient deaths annually, making medical errors more deadly than AIDS, motor vehicle accidents, or breast cancer. Moreover, the cost of medical errors has been estimated at around $38 billion a year (IOM, 2000). Unfortunately, medical errors remain a widespread problem today. HealthGrades, a leading independent health care rating organization, reported nearly one million errors among Medicare patients over the years 2006–2008, a figure virtually unchanged from its prior study. Medicare no longer reimburses hospitals for expenses when certain types of medical errors occur during treatment, and other insurers are expected to follow Medicare’s lead. In addition, under Medicare rules that went into effect in 2011, hospitals are required to report hospital-acquired infections or pay a fine. It is becoming increasingly clear that patient safety is an urgent national concern, given not only the magnitude of unnecessary errors but also their high cost.
Some studies position patient safety within the research domain of quality, likening medical errors to product defects in manufacturing (Stock, McFadden, & Gowen, 2010) or to operational failures in services (Tucker, 2007). Conversely, other research describes quality and patient safety as separate and distinct issues (Pronovost, Nolan, Zeger, Miller, & Rubin, 2004). The debate about the possible distinction between safety and quality began after the publication of the first IOM report (Kazandjian, Wicker, Matthes, & Ogunbo, 2008). A later IOM (2001) report describes the relationship between patient safety and quality in that patient safety is one dimension of quality.
According to the IOM, patient safety is defined as “freedom from accidental injury” (IOM, 2000, p. 4), and medical error is “the failure of a planned action to be completed as intended (i.e., an error of execution) or the use of a wrong plan to achieve an aim (i.e., an error of planning)” (IOM, 2000, p. 28). The IOM defines health care quality as “the degree to which health services for individuals and populations increase the likelihood of desired health outcomes and are consistent with current professional knowledge” (IOM, 2001, p. 46). Our understanding of the connection between quality and safety has evolved since the time of these reports. It seems logical that improvements in safety could be achieved by implementing quality initiatives, especially within a safety climate.
This article builds on the work of McFadden, Henagan, and Gowen (2009) and investigates how general continuous quality improvement (CQI) initiatives relate to both quality and patient safety outcome measures. McFadden et al. focused only on specific patient safety initiatives (not general quality initiatives) and patient safety outcomes (not quality outcomes). This study also adds new understanding by going beyond perceptual measures and incorporating secondary objective outcome data from the Centers for Medicare and Medicaid Services (CMS).
Patient Safety Climate (PSC) and Transformational Leadership (TFL)
There is some distinction in the literature between safety culture and safety climate. Singer et al. (2009, p. 300) explain that “safety climate refers to shared perceptions of what an organization is like with regard to safety, whereas safety culture refers to employees’ fundamental ideology and orientation and explains why safety is pursued in the manner exhibited within a particular organization.” Safety climates are more readily measurable aspects of safety culture (Sexton et al., 2006); therefore, the term “safety climate” will be used in this study.
Research evidence suggests that executive leadership directly affects perceived safety climate (Kelloway, Mullen, & Francis, 2006). For example, when executive leaders show commitment and provide the necessary resources, incentives, and rewards to promote and improve safety, employees’ perceptions of safety are enhanced (Barling, Loughlin, & Kelloway, 2002). Because of the important role senior leadership plays in achieving organizational safety goals, it follows that we explore effective leadership styles that align with safety reliability. To that end, the multifactor leadership theory (Avolio & Bass, 2004) is an approach that characterizes much of the literature. The theory consists of three alternative leadership styles: TFL (based on charisma and inspiration), transactional leadership (based on rewards and punishment), and laissez-faire leadership (based on the absence of leadership). TFL has been recognized as the most effective of the three leadership styles (Avolio & Bass, 2004), including among health care professionals (Savič, Pagon, & Robida, 2007). Transformational leaders are said to transform the organization by empowering followers through inspirational motivation and by encouraging innovation, both of which are required for creating a safety climate and implementing new CQI initiatives.
This study focuses on the charisma-inspiration dimension of TFL, which emphasizes leadership behaviors that provide “followers with a clear sense of purpose that is energizing and a role model for ethical conduct, which builds identification with the leader and her/his articulated vision” (Avolio & Bass, 2004, p. 29). This dimension also fosters organizational change, which is required for creating a safety climate and for implementing quality initiatives (Kotter, 1990). Empirical findings indicate that the TFL style involves displaying a strong level of commitment to safety, employing safety practices and procedures, and placing safety as a top priority (Vogus, Sutcliffe, & Weick, 2010). Therefore, prior research (Barling et al., 2002; Kelloway et al., 2006; McFadden et al., 2009) supports the following hypothesis: (H1) TFL will be positively associated with patient safety climate.
CQI initiatives have been successfully applied in many different types of manufacturing and service organizations. Hospitals have been using CQI since the 1980s, but it has not been consistently implemented (Westphal, Gulati, & Shortell, 1997). CQI employs statistical tools to analyze processes and systems. The use of teams for solving quality problems is also an important part of CQI, as is organization-wide involvement, with all employees trained in basic quality tools. Popular CQI techniques used in health care include statistical process control; the plan, do, check, act cycle; and competitive benchmarking.
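To make one of the CQI tools named above concrete, the sketch below computes 3-sigma control limits for a c-chart, a standard statistical process control chart for counts of defects (here, hypothetical monthly medication-error counts; the data and the medication-error framing are illustrative, not from the study).

```python
import math

def c_chart_limits(counts):
    """Center line and 3-sigma control limits for a c-chart (count of defects).
    For a Poisson-distributed count, the variance equals the mean, so the
    limits are c_bar +/- 3*sqrt(c_bar), with the lower limit floored at 0."""
    c_bar = sum(counts) / len(counts)
    ucl = c_bar + 3 * math.sqrt(c_bar)
    lcl = max(0.0, c_bar - 3 * math.sqrt(c_bar))
    return lcl, c_bar, ucl

# Hypothetical medication-error counts for eight consecutive months
monthly_errors = [4, 6, 3, 5, 7, 4, 5, 6]
lcl, center, ucl = c_chart_limits(monthly_errors)

# Points outside the limits would signal special-cause variation
out_of_control = [c for c in monthly_errors if c > ucl or c < lcl]
print(out_of_control)  # -> []: the process is in statistical control
```

In CQI practice, a signal outside the limits would trigger root-cause analysis by a quality team rather than ad hoc blame, consistent with the team-based problem solving described above.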
Scholars argue that, to achieve the full benefit of CQI implementation, organizations must possess a strong internal organizational climate (Douglas & Judge, 2001). Growing empirical evidence in the literature suggests that organizational climate supports the implementation of quality and safety initiatives. Prior research (Detert, Schroeder, & Mauriel, 2000; McFadden et al., 2009; Shortell et al., 1995) therefore leads to the following hypothesis regarding the implementation of quality initiatives in hospitals: (H2) Patient safety climate will be positively associated with CQI initiatives.
The recent national initiative to improve patient safety is exemplified by practices to reduce hospital-acquired condition (HAC) rates (Pronovost et al., 2009). HACs are “preventable” events, including hospital-acquired urinary tract infections, hospital falls/traumas, and catheter-associated infections. The lower the HAC rate, the better the patient safety outcome. Similarly, quality outcomes are measured by using process quality scores (PQSs), where the higher the score, the better the quality. HAC rates are influenced by a number of operational practices performed by many workers with an emphasis on vigilance. PQSs, on the other hand, measure a hospital’s ability to execute a defined set of tasks for patients with particular conditions. More about these two objective outcome measures of safety and quality will be explained in the methodology section. Studies (Gowen, McFadden, Hoobler, & Tallon, 2006; McFadden et al., 2009; Shortell et al., 1995) suggest that the deployment of quality practices and patient safety initiatives is associated with positive quality and patient safety outcomes. The research hypotheses can be stated as follows: (H3) CQI initiatives will be negatively associated with HAC rates, a patient safety outcome measure; and (H4) CQI initiatives will be positively associated with PQSs, a process quality measure.
This study employed a survey methodology, using the hospital organization as the unit of analysis. An initial questionnaire was tested in a pilot survey sent to several quality directors in local hospitals. Telephone interviews were also initially conducted to improve clarity and reduce ambiguity. A directory of medical organizations posted on Hospitallink.com was used to obtain a list of U.S. hospitals, and telephone numbers for the organizations listed on this Web site were accessed. After eliminating nonhospital organizations (i.e., clinics, mammography centers, and associations), multiple attempts were made to contact via telephone the chief quality officer, patient safety director, quality director, risk management director, and director of nursing for the remaining hospitals. Calls were made with the intent of receiving multiple responses from each hospital. Only those 626 hospitals where someone was personally contacted received the survey.
During the telephone call, it was explained to respondents that the focus of the study was patient safety, quality, and the reduction of medical errors. Calling the personnel directly ensured that the surveys were emailed to the appropriate individuals and that the email addresses were accurate. The contacts received the survey via an email attachment along with a cover letter outlining the research focus. Three rounds of email reminders spaced about 3 weeks apart were also sent.
The survey sample consisted of hospitals from 48 of the 50 U.S. states. The regional distribution of the respondent hospitals included 19% Western, 20% Midwestern, 27% Southern, 14% Southwestern, and 21% Eastern hospitals. This distribution was not statistically different from the regional distribution of the hospitals included in the Hospitallink.com directory (χ2 = 8.89, df = 4, p = .06). Completed surveys were received from 371 hospitals resulting in a response rate of 59.3%. This response rate compares favorably with the recommended range of 50%–60% required to reasonably assure generalizability (Flynn, Sakakibara, Schroeder, Bates, & Flynn, 1990). There were no outliers or missing values in the survey data.
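The regional-representativeness check above is a chi-square goodness-of-fit test. The sketch below shows the calculation in pure Python; the regional counts and directory proportions are invented for illustration and are not the study's raw data.

```python
def chi_square_stat(observed, expected):
    """Pearson chi-square goodness-of-fit statistic: sum((O - E)^2 / E)."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical respondent counts by region (n = 371 total)
observed = [70, 74, 100, 52, 75]           # West, Midwest, South, Southwest, East
props = [0.22, 0.21, 0.24, 0.13, 0.20]     # assumed directory proportions
n = sum(observed)
expected = [p * n for p in props]

stat = chi_square_stat(observed, expected)
# With df = 4 (five regions minus one), the .05 critical value is 9.488.
# A statistic below it (like the study's 8.89) means the respondent and
# directory distributions do not differ significantly.
print(stat < 9.488)  # -> True
```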
The dependent variables, HAC rate and PQS, were computed using data collected from the Hospital Compare Web site (www.hospitalcompare.hhs.gov) operated by CMS and the Hospital Quality Alliance. The data for HAC rate and PQS were retrieved in 2011. This data set has the advantage of providing evidence of actual patient safety outcomes (HAC rate) and quality outcomes (PQS). In addition, these data were collected independently and a few years after the collection of our survey data, so there should be no common-method bias in the sample. However, reporting of these data is voluntary, and many of the hospitals that completed our survey questionnaire did not provide complete data for the measures used to compute HAC rate and PQS. Because many reporting hospitals did not report all of the constituent items, we computed the HAC rate and PQS variables from the subset of individual measures with the highest response rates.
We matched hospitals in our original responding sample (n = 371) with hospitals that reported Hospital Compare data. When the matching process was completed, the merged sample was reduced to 205 hospitals. We then removed one very small hospital that had only six beds, yielding a final sample of 204 hospitals and a response rate of 33%. Of the final sample, 124 hospitals provided multiple responses for the independent variables. Although the final sample includes urban and rural locations, private and government ownership, for-profit and not-for-profit status, and system and nonsystem membership, it did not include any Veterans Administration, psychiatric, or rehabilitation hospitals.
For those hospitals with multiple respondents, we assessed whether the multiple respondents exhibited consensus in their perceptions about leadership practices, organizational climate, and CQI practices within their organizations. In particular, we assessed the level of interrater agreement using the rwg measure (James, Demaree, & Wolf, 1984). The average rwg values were as follows: TFL (0.90), PSC (0.75), and CQI (0.77). All values of rwg were above recommended values of 0.70. Therefore, there was acceptable agreement between respondents, and the responses received from multiple raters of each hospital were averaged for each item. These averaged values were then used in the data analysis. In addition, t tests revealed no significant differences (p < .05, level of significance) between questionnaire responses from hospitals with multiple and single respondents.
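The single-item form of the rwg index described above (James, Demaree, & Wolf, 1984) compares the observed variance among raters with the variance expected under a uniform (no-agreement) null distribution. The sketch below is a minimal illustration; the ratings are hypothetical, and the multi-item rwg(j) used for full scales involves an additional aggregation step not shown here.

```python
def rwg(ratings, num_options):
    """Single-item r_wg: 1 - (observed variance / uniform-null variance).
    num_options is the number of points on the response scale (A), and the
    uniform-null variance is (A^2 - 1) / 12."""
    n = len(ratings)
    mean = sum(ratings) / n
    s2 = sum((x - mean) ** 2 for x in ratings) / (n - 1)  # sample variance
    sigma_eu2 = (num_options ** 2 - 1) / 12.0
    return 1 - s2 / sigma_eu2

# Three hypothetical raters scoring one TFL item on the 5-point scale
print(round(rwg([4, 4, 5], num_options=5), 2))  # -> 0.83, above the 0.70 cutoff
```

Values of 1.0 indicate perfect agreement (zero observed variance); values above the conventional 0.70 cutoff, as in the study, justify averaging raters within a hospital.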
Skewness and kurtosis levels were checked for every item of all constructs and did not show a significant deviation from normality. However, a test of multivariate normality did indicate a significant departure from normality. As a result, a bootstrap approach was used to estimate the measurement and structural models in the analysis. More details about the bootstrapping approach used are provided in the Results section.
Comparing characteristics of the sample of multiple-respondent hospitals with the hospitals with only one respondent, there were no statistically significant differences in terms of number of beds (mean = 162.82 vs. 140.31, respectively; t = −1.09, p > .05) or number of full-time equivalent employees dedicated to error prevention (mean = 3.33 vs. 2.15, respectively; t = −1.91, p > .05).
The possibility of nonresponse bias was examined in two different ways. First, we took a random sample of 161 nonresponding hospitals in the original survey sample and compared the sizes (measured by number of beds) of the two groups. There was no statistically significant difference in the average number of beds between the two groups (mean = 216 for respondent hospitals vs. 210 for nonrespondent hospitals, t = 0.15, p > .05). We also compared those hospitals in the original sample of 371 hospitals that did and did not report Hospital Compare data by assessing the differences in means of several different variables. There was no significant difference in PSC score (t = 1.38, p > .05). There were significant differences in the means of TFL (t = 2.98, p < .01), CQI (t = 7.32, p < .001), and beds (t = 7.67, p < .001). These results suggest that there is no difference between respondents and nonrespondents of the original survey. However, there appear to be some systematic differences between hospitals that do and do not report Hospital Compare data.
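The nonresponse-bias checks above rest on two-sample t tests. The sketch below computes the pooled-variance t statistic in pure Python; the bed counts are made up (chosen only so the group means echo the 216 vs. 210 comparison), and the p value is approximated by comparing |t| with the familiar two-tailed .05 critical region.

```python
import math

def t_stat(a, b):
    """Pooled-variance two-sample t statistic."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    return (ma - mb) / math.sqrt(sp2 * (1 / na + 1 / nb))

respondents = [180, 240, 210, 260, 190]      # hypothetical bed counts, mean 216
nonrespondents = [200, 230, 170, 250, 210]   # hypothetical bed counts, mean 212

t = t_stat(respondents, nonrespondents)
# |t| well below ~2 (the approximate .05 two-tailed cutoff) is read as
# no significant size difference between the groups
print(abs(t) < 2.0)  # -> True
```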
Independent and Dependent Variables
The questionnaire includes items based on prior research relating to TFL, safety climate, and CQI. TFL was measured by eight items asking respondents to assess the frequency with which each item described the top leadership (chief executive officer [CEO]) at their hospitals. These items were measured using the published Multifactor Leadership Questionnaire 5-point scale (Avolio & Bass, 2004). The PSC construct was measured on a 6-point scale using six items taken from a published Safety Climate Survey (Sexton & Thomas, 2003). Although the original survey included 19 items, we extracted a subset that included only those items that measured safety at the organizational level and were most closely aligned with the components of a safety climate drawn from high reliability organization theory. The survey items covered the major patient safety topics outlined in Gaba, Singer, Sinaiko, Bowen, and Ciavarelli (2003). CQI was measured using three items (statistical quality/process control using control charts, process competitive benchmarking of best-in-class processes, and quality teams of employees) adapted from past studies (Goldstein & Schweikhart, 2002; Gowen et al., 2006). The model also included seven control variables. Four were collected from the survey data: hospital size (measured by the natural log of the number of beds), teaching status (teaching vs. nonteaching hospital), system or nonsystem membership, and a variable that indicated whether there were multiple respondents. The other three were indicator variables collected from CMS Cost Reports data: urban or rural location, private or government ownership, and for-profit or not-for-profit status.
Patient safety outcomes were measured using HAC data obtained from Hospital Compare, as discussed. This is an aggregated rate of eight different HACs per 1,000 discharges. Process quality performance outcomes were measured using the PQS data, also obtained from Hospital Compare. This measure is an aggregated score of the percentage of patients who receive the best practice treatments (recommended process of care) across three different diseases (eight treatments for heart attacks, four for heart failure, and six for pneumonia). Hospitals voluntarily submit data from medical records about the treatments their patients receive for these conditions. A comparison of mean PQS values between the sample hospitals and the nonsample hospitals in the overall population reporting data to Hospital Compare revealed a significant difference (t = 4.77, p < .001), with the sample hospitals exhibiting a higher mean PQS (94.08 vs. 92.02). A similar comparison of HAC rate between sample and nonsample hospitals did not show a significant difference (t = 0.29, p > .05).
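The two outcome variables above can be sketched as simple aggregations. The functions and figures below are illustrative of the logic only, not CMS's exact computation: PQS as a mean percentage across reported process-of-care measures, and HAC rate as total events per 1,000 discharges.

```python
def pqs(percentages):
    """PQS sketch: mean percentage of patients receiving recommended care
    across the reported process-of-care measures (higher is better)."""
    return sum(percentages) / len(percentages)

def hac_rate(hac_counts, discharges):
    """HAC-rate sketch: total hospital-acquired conditions per 1,000
    discharges (lower is better)."""
    return sum(hac_counts) * 1000 / discharges

# One hypothetical hospital
treatment_scores = [97.0, 95.5, 92.0, 94.5]   # % receiving best practice per measure
print(pqs(treatment_scores))                  # -> 94.75
print(hac_rate([3, 1, 2], discharges=4000))   # -> 1.5 HACs per 1,000 discharges
```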
To test the research hypotheses, we used structural equation modeling (SEM) as the primary analytical technique. We employed the two-step procedure recommended by Hair, Black, Babin, Anderson, and Tatham (2006). First, confirmatory factor analysis was used to verify the measures. Then, SEM tested the hypothesized relationships among TFL, PSC, CQI, HAC rates, and PQS. AMOS version 19.0 was used to estimate the measurement and structural models.
Table 1 shows the minimums, maximums, means, standard deviations (SD), Cronbach’s alpha, composite reliabilities, correlations, and average variance extracted for each of the constructs. Values below the diagonal show the correlations between the latent constructs, and the bold values on the diagonal show the average variance extracted for each latent construct. Discriminant validity is assessed by comparing the average variance extracted for each latent construct with the square of its correlations with the other latent constructs; because each average variance extracted is greater than the corresponding squared correlations, there is evidence of discriminant validity (Fornell & Larcker, 1981). All factor loadings were significant at p < .001 and ranged from .52 to .89, indicating that each indicator reflected the construct it was intended to measure. The Cronbach’s alpha reliability values for the three latent constructs range from .70 to .93, exceeding the acceptable levels of .60 for exploratory research (Flynn et al., 1990) and .70 for general research (Nunnally & Bernstein, 1994); the measures therefore show acceptable reliability. In addition, all indicator variable loadings were significant (at p < .001), which provides evidence of convergent validity (Anderson & Gerbing, 1988).
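The Fornell and Larcker (1981) criterion described above reduces to a simple comparison. The sketch below checks it programmatically; the AVE and correlation values are placeholders, not the numbers from Table 1.

```python
def discriminant_valid(ave, corr):
    """Fornell-Larcker check: every squared inter-construct correlation must
    be below the average variance extracted (AVE) of both constructs.
    ave: dict mapping construct name -> AVE.
    corr: dict mapping frozenset({a, b}) -> correlation between a and b."""
    for pair, r in corr.items():
        a, b = tuple(pair)
        if r ** 2 >= min(ave[a], ave[b]):
            return False
    return True

ave = {"TFL": 0.68, "PSC": 0.55, "CQI": 0.52}   # hypothetical AVEs
corr = {
    frozenset({"TFL", "PSC"}): 0.45,
    frozenset({"TFL", "CQI"}): 0.30,
    frozenset({"PSC", "CQI"}): 0.40,
}
print(discriminant_valid(ave, corr))  # -> True: every AVE exceeds the squared r
```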
As noted above, the data are not multivariate normal. Therefore, a bootstrapping technique was employed in estimating the measurement and structural models. Bootstrapping is a nonparametric method that is recommended for SEM with nonnormal data (Bollen & Stine, 1992). To assess the overall fit of a model, the Bollen–Stine bootstrap chi-square p value is computed; if it is greater than .05, the model is usually judged to fit the data well. In keeping with recently published research that has employed this approach, this study reports both the traditional fit statistics and the Bollen–Stine p value (Demi, Coleman-Jensen, & Snyder, 2010). The measurement model showed a good fit to the data (χ2 = 177.54, 116 df, p < .001, χ2/df = 1.53, comparative fit index [CFI] = 0.97, incremental fit index [IFI] = 0.97, Tucker–Lewis index [TLI] = 0.96, root mean square error of approximation [RMSEA] = 0.05, Bollen–Stine p = .11). Standardized factor loadings for the measurement model are shown in Table 2.
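A heavily stripped-down sketch of the Bollen–Stine idea: the bootstrap p value is the share of resampled fit statistics that meet or exceed the original chi-square. In real use (as in AMOS) the data are first transformed so the model fits exactly before resampling; here that refitting is replaced by made-up chi-square draws purely to illustrate the p-value step.

```python
import random

random.seed(42)
observed_chi2 = 177.54  # the measurement model's chi-square reported above

# Stand-ins for chi-squares from refitting the model to 1,000 bootstrap
# resamples of the (transformed) data; these draws are illustrative only
bootstrap_chi2 = [random.gauss(160, 25) for _ in range(1000)]

p_value = sum(c >= observed_chi2 for c in bootstrap_chi2) / len(bootstrap_chi2)
# A Bollen-Stine p value above .05 (the article reports .11 for the
# measurement model) is taken as evidence of adequate fit
print(p_value > 0.05)
```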
In the conceptual model, this study hypothesized that TFL is associated with PSC. PSC was hypothesized to relate to CQI implementation, and CQI implementation was predicted to link to both HAC rate and PQS. Statistical support was found for most of the hypothesized paths in the structural model. The structural model also included the control variables described previously.
To test the hypotheses, we estimated the structural model with the hypothesized relationships shown in Figure 1. As with the measurement model, bootstrapping was employed to deal with the nonnormal data. We first computed the Bollen–Stine bootstrap-adjusted p value to assess overall model fit. We then employed bias-corrected bootstrapping to adjust standard error values of the estimates. For some variables, the bootstrapped standard errors were larger than the regular maximum likelihood standard errors. However, all regression coefficient estimates that were statistically significant in the maximum likelihood model were also significant in the bootstrapped model.
Significant paths are shown by solid lines along with their coefficients, and nonsignificant paths are shown by dashed lines. The same indicator variables used in the measurement model are used in the structural model. Fit indices showed an acceptable fit between the structural model and the data (χ2 = 356.46, 248 df, p < .001, χ2/df = 1.44, CFI = 0.95, IFI = 0.95, TLI = 0.94, RMSEA = 0.05, Bollen–Stine p = .08). As shown in Figure 1, we find support for H1, H2, and H4. H3, which predicted a negative relationship between CQI initiatives and HAC rate, was not supported.
To test alternative models, we estimated a series of nested models, starting with the fully mediated hypothesized model (see Figure 1) and then two successively less constrained models that freed additional paths. To compare the fit of the nested models, we computed and tested the significance of the χ2 difference for each successive model. We also compared the Bollen–Stine bootstrap-adjusted p values, where higher values indicate better fit (e.g., Demi et al., 2010). The best-fitting model adds direct paths from PSC to HAC rate and to PQS; freeing further paths did not improve overall fit beyond this final model.
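The nested-model comparison just described rests on a chi-square difference test. The sketch below applies it to the fit statistics reported in this article for the hypothesized and final models (the one-df difference between them comes from those reported values); the small table of .05 critical values is standard.

```python
# Chi-square .05 critical values for small df differences (standard table)
CRIT_05 = {1: 3.841, 2: 5.991, 3: 7.815}

def chi2_diff_significant(chi2_a, df_a, chi2_b, df_b):
    """True if the less constrained model (b) fits significantly better
    than the more constrained model (a) at the .05 level."""
    d_chi2 = chi2_a - chi2_b
    d_df = df_a - df_b
    return d_chi2 > CRIT_05[d_df]

# Hypothesized model (356.46, 248 df) vs. final model (352.32, 247 df):
# delta chi-square = 4.14 with 1 df exceeds 3.841, so the final model
# fits significantly better
print(chi2_diff_significant(356.46, 248, 352.32, 247))  # -> True
```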
Figure 2 shows the final estimated structural model along with path coefficients of the significant paths. Fit indices showed an acceptable fit between the structural model and the data (χ2 = 352.32, 247 df, p < .001, χ2/df = 1.43, CFI = 0.95, IFI = 0.96, TLI = 0.94, RMSEA = 0.05, Bollen–Stine p = .09). The significant paths in the structural model support most of the hypothesized relationships. In particular, we find support for H1, H2, and H4; these relationships were also statistically significant in the hypothesized model (shown in Figure 1). H3 was not supported because, although significant, the relationship between CQI initiatives and HAC rates was positive rather than negative as predicted. One additional path that was not included in Model 1 was statistically significant in Model 2: PSC was negatively related to HAC rate. There were also some significant relationships with the control variables. Specifically, size was positively related to CQI; urban location was positively related to PQS; teaching status was negatively related to PQS; and private ownership was negatively related to HAC rate and positively related to PQS. All other control-variable relationships were nonsignificant. Notably, the multirespondent control variable was not significantly related to any variable in the model, indicating that having one versus several respondents did not affect the results.
This research focuses on the relationship among TFL, safety climate, and CQI implementation on both process quality (PQS) and patient safety outcomes (HAC rates) in hospitals. In the final model (as well as in the hypothesized model), H1, H2, and H4 were fully supported and in the expected direction. H3 was not supported because, although it was significant in the final model, the relationship between CQI and HAC rates was positive and not negative as predicted.
It is interesting that CQI was associated with higher HAC rates, rather than lower rates as hypothesized (H3), and with higher PQSs (H4). These findings are not completely unexpected, given earlier studies (Anderson, Rungtusanatham, Schroeder, & Devaraj, 1995; Douglas & Fredendall, 2004; Rungtusanatham, Forza, Filippini, & Anderson, 1998) that found mixed results on the relationship between CQI and service performance outcomes. The CQI initiatives measured in this study focus on controlling variation through statistical process control, benchmarking best practices, and employing quality teams. This emphasis on managing the quality process has a much broader scope than the specific safety objectives needed to improve patient safety outcomes such as HAC rates. It therefore makes sense that CQI could have a positive influence on PQS.
To help explain why CQI may be negatively related to patient safety outcomes, we look to organizational control theory and total quality management theory. Organizational control theory (Ouchi, 1979; Turner & Makhija, 2006) supports the distinction between outcome and process controls, where outcome controls deal with the evaluation of desired outcomes and process controls center on appropriate behavior. CQI would be considered a process control. Total quality management theory teaches that process quality centers on ensuring employees do things in the right way, at the right time, and at the right cost, while incorporating best practices; its focus is on reducing defect rates. The focus of patient safety, on the other hand, is on individual events: what happens to patients, where the goal is to avoid negative patient safety outcomes. Safety reliability assumes a specific set of characteristics that often infringe upon organizational process outcomes and run counter to efficiency, as organizations expand rather than contract resources (Vogus & Welbourne, 2003). In fact, the focus on controlling variation and on standardizing or reducing process steps could actually have a negative effect on patient safety outcomes. Rijsdijk and van den Ende (2011) note that different controls have different consequences and find that both synergies and trade-offs exist among them; specifically, their empirical findings show that some process quality controls can work against each other in attaining individual outcomes. Thus, it follows that managerial actions that promote process quality may actually harm patient safety outcomes, and vice versa.
The final model includes a significant negative path from PSC to HAC rates, a patient safety outcome measure. In other words, PSC is directly related to lower HAC rates. This provides further information regarding the interplay between process quality and patient safety outcomes. The key finding is that, whereas CQI implementation is directly associated with higher PQSs and a higher rate of HACs, PSC is directly linked to lower HAC rates. Therefore, the overall pattern of findings of this study supports the need for both PSC and CQI initiatives to improve both HAC rates and PQSs in hospitals.
The notion that PSC and CQI initiatives are not interchangeable or universally beneficial is an important contribution to the literature. Furthermore, the framework in this study differs in several important ways from previous models presented in the literature. First, whereas other patient safety and quality studies may have included some of the same constructs, no other study has simultaneously tested the interrelationships among all of the variables in this model. Second, the literature lacks research about the direct effect of CQI initiatives on both patient safety and quality outcomes. Third, to supplement the survey data, this study incorporates objective data on hospitals’ patient safety and quality performance measures collected by Hospital Compare; most health care studies have relied on perceptual data for their dependent variables. The current literature offers limited guidance on identifying the most effective initiatives for improving both the safety and quality of patient care (Pronovost et al., 2009). Therefore, this study tests a framework that will help to focus future patient safety and quality efforts.
Conclusions and Managerial Implications
Hospitals are struggling to prioritize and improve patient safety and quality outcomes. The findings from this research provide some guidance to hospital administrators. First, this study offers important insights into the relationship between safety and quality. The findings suggest that quality and safety are not the same and may actually work against each other, perhaps because quality and safety focus on different objectives. Current CQI implementation in hospitals may serve the interests of improving business processes and quality results more than the interests of patient safety. It may also be that PQS and HAC rates measure different phenomena: HAC is an outcome variable measuring how many patients actually experienced an error, whereas PQS can be thought of as a proxy for quality processes, not necessarily an outcome. Preventing HACs requires a broader vigilance and safety net than improving PQS. Given the breadth of the PSC measures, it seems fitting that there is a significant relationship between PSC and HAC rates; conversely, the CQI measures are more specific and map onto the more narrowly defined PQS. Patient safety initiatives such as reporting errors without blame, redesigning systems, and openly discussing errors are grounded in quality principles (McFadden et al., 2009). The findings indicate that combining these types of patient safety initiatives with traditional CQI initiatives could be more effective in addressing safety outcomes.
Second, this study shows the importance of leadership, highlighted by the initial path to PSC. Specifically, the hospital CEO’s TFL style was directly related to employees’ perception of a strong safety climate. Moreover, hospitals with a strong PSC were more likely to successfully implement CQI, and CQI initiatives were in turn associated with higher PQSs. This group of findings suggests that hospital executives’ TFL style is significantly related to employees’ perceptions of a safety climate and, through it, indirectly related to both CQI implementation and PQSs. Consequently, executive leadership should play an active role in creating a safety climate in which employees feel comfortable voicing safety concerns and in ensuring the implementation of quality and safety practices.
In summary, the findings of this study suggest that, to achieve gains in quality outcomes, hospital leaders should focus on implementing quality initiatives, and that, to improve patient safety outcomes, they should implement safety initiatives. The overall pattern of findings indicates that simultaneous implementation of CQI initiatives and PSC produces greater combined benefits.
Future Research and Limitations
This research should spur a host of new ideas for future work examining the relationship between safety and quality in health care more deeply. It would be interesting to develop a conceptual model that hypothesizes two distinct climates or cultures, one for safety and one for quality, to test whether a culture of quality leads to a culture of safety, or whether implementing a broad quality initiative leads to implementing patient safety initiatives. In addition, more empirical research is needed to clarify the impact of CQI initiatives on patient safety outcomes, possibly by including other constructs in the model. Future research is also needed to examine whether hospital units can achieve improvements in patient safety and quality in the absence of an organization-wide cultural commitment. Although TFL was shown to be an important construct, additional research could investigate other leadership styles (such as transactional leadership) to determine their possible relationships with safety climate, CQI, and outcomes. Because no leader is entirely transformational or transactional, examining different levels of each style could help determine whether what is required for a safety climate differs from what is required for process quality. Future research might also explore the leadership styles of middle managers and their possible relationships to safety climate, CQI, and outcomes. Finally, it would be of value to examine the costs and benefits of patient safety and quality initiatives and how and why they may conflict with one another.
As with any empirical research, this study has limitations. One shortcoming is the use of perceptual data for the independent variables. Using multiple respondents helps to address the reliability and validity issues that can result from perceptual measures, and the use of more objective, publicly available data to construct the outcome measures addresses at least some of the remaining concerns. A second limitation is that other managerial or organizational variables not formally addressed in this study might be related to quality and safety performance. For example, hospital size may affect CQI and PQS, as larger hospitals may have more resources to devote to CQI implementation. Future research should explore theoretical models that include variables such as size and resource allocation.
To improve the safety and quality of care in the U.S. hospital system, hospitals must implement effective solutions that decrease the frequency, severity, and impact of medical errors. This study provides a three-pronged approach to improving the quality of care in hospitals that involves leadership, safety climate, and CQI initiatives. Specifically, the findings suggest that improved process quality outcomes are related to the implementation of CQI initiatives, which is linked to a safety climate and, in turn, to a CEO’s leadership style. Similarly, the findings indicate that improved patient safety is associated with the presence of a strong PSC, which is also related to a CEO’s TFL style. Safety must be purposeful, and the findings confirm that improving patient safety is a process that requires a well-orchestrated and systematic approach.
Anderson J. C., Gerbing D. W. (1988). Structural equation modeling in practice: A review and recommended two-step approach. Psychological Bulletin, 103 (3), 411–423.
Anderson J. C., Rungtusanatham M., Schroeder R. G., Devaraj J. S. (1995). A path analytic model of a theory of quality management underlying the Deming management method: Preliminary empirical findings. Decision Sciences, 26 (5), 637–658.
Avolio B. J., Bass B. M. (2004). Multifactor leadership questionnaire: Manual and sampler set (form 5x-short) (3rd ed.). Redwood City, CA: Mind Garden.
Barling J., Loughlin C., Kelloway E. (2002). Development and test of a model linking safety-specific transformational leadership and occupational safety. Journal of Applied Psychology, 87 (3), 488–496.
Bollen K. A., Stine R. A. (1992). Bootstrapping goodness-of-fit measures in structural equation models. Sociological Methods & Research, 21 (2), 205–229.
Demi M. A., Coleman-Jensen A., Snyder A. R. (2010). The rural context and post-secondary school enrollment: An ecological systems approach. Journal of Research in Rural Education, 25 (7), 1–26.
Detert J. R., Schroeder R. G., Mauriel J. J. (2000). A framework for linking culture and improvement initiatives in organizations. Academy of Management Review, 25 (4), 850–863.
Douglas T. J., Fredendall L. D. (2004). Evaluating the Deming Management Model of Total Quality in Services. Decision Sciences, 35 (3), 393–422.
Douglas T. J., Judge W. Q. (2001). Total quality management implementation and competitive advantage: The role of structural control and exploration. Academy of Management Journal, 44 (1), 158–169.
Flynn B. B., Sakakibara S., Schroeder R. G., Bates K. A., Flynn E. J. (1990). Empirical research methods in operations management. Journal of Operations Management, 9 (2), 250–284.
Fornell C., Larcker D. F. (1981). Evaluating structural equation models with unobservable variables and measurement error. Journal of Marketing Research, 18, 39–50.
Gaba D. M., Singer S. J., Sinaiko A. D., Bowen J. D., Ciavarelli A. P. (2003). Differences in safety climate between hospital personnel and naval aviators. Human Factors, 45 (2), 173–185.
Goldstein S. M., Schweikhart S. B. (2002). Empirical support for the Baldrige award framework in U.S. hospitals. Health Care Management Review, 27 (1), 62–75.
Gowen C. R. III, McFadden K. L., Hoobler J. M., Tallon W. J. (2006). Exploring the efficacy of healthcare quality practices, employee commitment, and employee control. Journal of Operations Management, 24 (6), 765–778.
Hair J. F., Black W. C., Babin B. J., Anderson R. E., Tatham R. L. (2006). Multivariate data analysis (6th ed.). Upper Saddle River, NJ: Prentice Hall.
Institute of Medicine. (2000). To err is human: Building a safer health system. Washington, DC: National Academies Press.
Institute of Medicine. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.
James L. R., Demaree R. G., Wolf G. (1984). Estimating within group interrater reliability with and without response bias. Journal of Applied Psychology, 69 (1), 85–98.
Kazandjian V. A., Wicker K. G., Matthes N., Ogunbo S. (2008). Safety is part of quality: A proposal for a continuum in performance measurement. Journal of Evaluation in Clinical Practice, 14 (2), 354–359.
Kelloway E. K., Mullen J., Francis L. (2006). Divergent effects of transformational and passive leadership on employee safety. Journal of Occupational Health Psychology, 11 (1), 76–86.
Kotter J. P. (1990). A force for change: How leadership differs from management. New York, NY: The Free Press.
McFadden K. L., Henagan S. C., Gowen C. R. III (2009). The patient safety chain: Transformational leadership’s effect on patient safety culture, initiatives, and outcomes. Journal of Operations Management, 27 (5), 390–404.
Nunnally J. C., Bernstein I. H. (1994). Psychometric theory (3rd ed.). New York, NY: McGraw-Hill.
Ouchi W. G. (1979). A conceptual framework for the design of organizational control mechanisms. Management Science, 25 (9), 833–848.
Pronovost P. J., Goeschel C. A., Marsteller J. A., Sexton J. B., Pham J. C., Berenholtz S. M. (2009). Framework for patient safety research and improvement. Circulation, 119 (2), 330–337.
Pronovost P. J., Nolan T., Zeger S., Miller M., Rubin H. (2004). How can clinicians measure safety and quality in acute care? Lancet, 363 (9414), 1061–1067.
Rijsdijk S. A., van den Ende J. (2011). Control combination in new product development projects. Journal of Product Innovation Management, 28, 868–880.
Rungtusanatham M., Forza C., Filippini R., Anderson J. (1998). A replication study of a theory of quality management underlying the Deming management method: Insights from an Italian context. Journal of Operations Management, 17, 77–95.
Savič S. B., Pagon M., Robida A. (2007). Predictors of the level of personal involvement in an organization: A study of Slovene hospitals. Health Care Management Review, 32 (3), 271–283.
Sexton J. B., Helmreich R. L., Neilands T. B., Rowan K., Vella K., Boyden J., Roberts P. R., Thomas E. J. (2006). The safety attitudes questionnaire: Psychometric properties, benchmarking data and emerging research. BMC Health Services Research, 6, 44.
Sexton J. B., Thomas E. J. (2003). The Safety Climate Survey: Psychometric and benchmarking properties. Technical Report 03-03. The University of Texas Center of Excellence for Patient Safety Research and Practice (AHRQ grant # 1PO1HS1154401 and U18HS1116401).
Shortell S. M., O’Brien J. L., Carman J. M., Foster R. W., Hughes E. F. X., Boerstler H., O’Connor E. J. (1995). Assessing the impact of continuous quality improvement/total quality management: Concept versus implementation. Health Services Research, 30 (2), 377–401.
Singer S. J., Falwell A., Gaba D. M., Meterko M., Rosen A., Hartmann C. W., Baker C. L. (2009). Identifying organizational cultures that promote patient safety. Health Care Management Review, 34 (4), 300–311.
Stock G. N., McFadden K. L., Gowen C. R. III (2010). Organizational culture, knowledge management and patient safety in U.S. hospitals. Quality Management Journal, 17 (2), 7–26.
Tucker A. L. (2007). An empirical study of system improvement by frontline employees in hospital units. Manufacturing and Service Operations Management, 9 (4), 492–505.
Turner K. L., Makhija M. V. (2006). The role of organizational controls in managing knowledge. Academy of Management Review, 31 (1), 197–217.
Vogus T. J., Sutcliffe K. M., Weick K. E. (2010). Doing no harm: Enabling, enacting, and elaborating a culture of safety in health care. Academy of Management Perspectives, 24 (4), 60–77.
Vogus T. J., Welbourne T. M. (2003). Structuring for high reliability: HR practices and mindful processes in reliability-seeking organizations. Journal of Organizational Behavior, 24, 877–903.
Westphal J. D., Gulati R., Shortell S. M. (1997). Customization or conformity? An institutional and network perspective on the content and consequences of TQM adoption. Administrative Science Quarterly, 42 (2), 366–394.