Multifocal Clinical Performance Improvement Across 21 Hospitals

Crawford, Barbara; Skeath, Melinda; Whippy, Alan

The Journal for Healthcare Quality (JHQ): March/April 2015 - Volume 37 - Issue 2 - p 117–125
doi: 10.1111/jhq.12039
Original Article

Abstract: Improving quality and safety across an entire healthcare system in multiple clinical areas within a short time frame is challenging. We describe our experience with improving inpatient quality and safety at Kaiser Permanente Northern California. The foundations of performance improvement are a “four-wheel drive” approach and a comprehensive driver diagram linking improvement goals to focal areas. By the end of 2011, substantial improvements occurred in hospital-acquired infections (central-line–associated bloodstream infections and Clostridium difficile infections); falls; hospital-acquired pressure ulcers; high-alert medication and surgical safety; sepsis care; critical care; and The Joint Commission core measures.

For more information on this article, contact Barbara Crawford at .

The authors declare no conflicts of interest.

This is an open access article under the terms of the Creative Commons Attribution-NonCommercial License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited and is not used for commercial purposes.

More than 10 years ago, the Institute of Medicine called for improved healthcare quality and safety. Numerous national quality initiatives were subsequently implemented, such as the 100,000 Lives Campaign of the Institute for Healthcare Improvement (IHI), The Joint Commission's (TJC) introduction of accountability measures, and public reporting of care process and outcome measures. Many reports exist of improvements within clinical focal areas and across settings.

However, few reports exist of systematic improvement spanning multiple quality and safety issues and large multihospital systems (Pryor, Hendrich, Henkel, Beckmann, & Tersigni, 2011; Schilling et al., 2010, 2011; Whippy et al., 2011). Our objective is to describe the performance improvement framework at Kaiser Permanente Northern California (KPNC) that supported multiple substantial quality improvements across 21 hospitals over a short period of time.



Setting: Disrupting the Status Quo

KPNC arranges for the total continuum of care for 3.25 million members. Its integrated healthcare delivery system includes 21 hospitals and 226 clinics and employs approximately 64,000 staff. The Permanente Medical Group, including more than 7,000 primary care and specialist physicians, contracts with the Kaiser Foundation Health Plan to provide comprehensive care to members in all settings.

Before 2005, clinical performance improvement at KPNC was characterized by obstacles that can impede performance improvement throughout healthcare: competing leadership priorities, inconsistent spread of effective practices, and highly variable performance across medical centers. However, in response to a high-alert medication (HAM) error, regional leaders launched a highly prescriptive initiative to reduce HAM errors to zero.

KPNC leadership, physicians, nurses, pharmacists, quality leaders, and labor unions worked with regional and local medication safety committees to (1) standardize HAM handling practices; (2) enhance and standardize related education and annual core competencies; and (3) develop regional and local monitoring to support sustainability and ongoing improvements (Graham, Clopp, Kostek, & Crawford, 2008). The program was implemented in December 2005. Within a few months, overall regional compliance of 95% exceeded initial goals. KPNC continued to refine oversight, metrics, equipment, procedures, and cross-site collaboration. Since February 2006, no HAM-related errors at KPNC have resulted in major injury or death.


Framework for Improvement

HAM-related performance improvement highlighted the speed with which quality gains could be achieved and created new levels of acceptance of and expertise in regionally standardized care processes. Regional leaders vigorously committed to the goal of improving quality by reducing unnecessary care variation. They developed a framework for performance improvement called the “four-wheel drive” approach (Figure 1).

Figure 1

A compelling need to change is a necessary starting point. Harm, heart, and heat fuel organizational motivation to improve care. Harm refers to recharacterizing quality issues to increase the visibility of their impact on patients: using patient stories or images, for instance, or quantifying the frequency with which patients experience adverse events. “Every other day” paints a different picture of harm than does “180 per year.” Heart refers to emotional engagement in performance improvement; patient involvement in performance improvement activities engages staff in a way that data alone cannot. For instance, at the regional kickoff for a sepsis care initiative, a vibrant and articulate patient told the story of receiving life-saving care (Whippy et al., 2011). Heat refers to external forces for change, such as those arising from public performance reporting and designation of nonreimbursable conditions. In addition to the Centers for Medicare and Medicaid Services (CMS) cessation of reimbursement for certain preventable conditions, public reporting initiatives in California also create external forces for quality improvement (Rosenthal, 2007). In 2007, the California Hospital Assessment and Reporting Taskforce (CHART) introduced an Internet-based public report card for quality and safety measures related to cardiac, surgical, intensive, and maternity care, and pneumonia, as well as utilization and patient experiences (Rating Hospital Quality in California, 2012). In addition, state legislation mandates public reporting of and fines for serious reportable events (SREs) (California Senate Bill SB 1301, 2006).
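The “every other day” versus “180 per year” reframing is simple arithmetic. As an illustrative sketch (not from the article), converting an annual adverse-event count into a frequency framing might look like this:

```python
def reframe_annual_count(events_per_year: int) -> str:
    """Reframe an annual adverse-event count as an interval between events.

    Illustrative only: the article notes that "180 per year" lands
    differently when stated as roughly one event every other day
    (365 / 180 is about 2 days between events).
    """
    days_between = 365 / events_per_year
    return f"about one event every {days_between:.0f} days"

print(reframe_annual_count(180))  # about one event every 2 days
```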

Each wheel of the vehicle for performance improvement represents a force necessary for moving forward. Leadership alignment at all levels, from region to medical center department, makes priorities consistent throughout the organization and clarifies accountability; local champions for initiatives provide peer leadership. Standardization of care and systematization of processes reduce unnecessary variations across settings. Designated project management, typically provided by improvement advisors, keeps performance improvement a priority in daily operations.

Lastly, data that are actionable and drive performance improvement are critical. Metrics track outcomes, processes of care, and implementation. Hospitals monitor their performance through many metrics, including a monthly scorecard displaying the results for all regional medical centers. It includes process, outcome, and balancing measures: these relate to, for example, TJC core measures, stroke care, breastfeeding, elective deliveries, sepsis 6-hour bundle completion and improvement in intermediate lactate levels, immunizations, as well as a subset of patient safety metrics: hospital-acquired infections (HAIs) and pressure ulcers, patient falls, surgical safety, and safety climate as measured by the Safety Attitudes Questionnaire. Balancing measures are included to highlight potential unintended consequences of interventions. An example of a balancing measure for sepsis that appears on the scorecard is complications from central line insertion. The metrics on the scorecard are tied to organizational goals and, in many instances, to executive and manager compensation. Many performance metrics are also embedded as goals in annual performance reviews of nonunion staff and physicians.
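The article does not specify how the monthly scorecard is implemented, only that it mixes process, outcome, and balancing measures tied to organizational goals. A minimal, hypothetical sketch of such a scorecard entry (all names and values are our own, for illustration) could be:

```python
from dataclasses import dataclass

@dataclass
class ScorecardMeasure:
    """One hypothetical entry on a monthly quality scorecard."""
    name: str
    kind: str     # "process", "outcome", or "balancing"
    value: float
    goal: float

    def meets_goal(self, higher_is_better: bool = True) -> bool:
        # For balancing measures such as complication rates,
        # lower values are better, so the comparison flips.
        if higher_is_better:
            return self.value >= self.goal
        return self.value <= self.goal

measures = [
    ScorecardMeasure("Sepsis 6-hour bundle completion (%)", "process", 92.0, 90.0),
    ScorecardMeasure("Central-line insertion complications (per 1,000 lines)",
                     "balancing", 0.8, 1.0),
]
print(measures[0].meets_goal())                        # True: 92 >= 90
print(measures[1].meets_goal(higher_is_better=False))  # True: 0.8 <= 1.0
```

The balancing measure here mirrors the article's example of central line insertion complications as a check on unintended consequences of the sepsis initiative.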

KPNC, like all Kaiser Permanente regions, employs an integrated electronic health record (EHR), KP HealthConnect™. Some data can be gleaned from it, but providing timely, relevant data to guide performance improvement also requires tools that are not—or not yet—integrated into the EHR. For instance, tracking the prevalence of hospital-acquired pressure ulcers (HAPUs) is accomplished by a spreadsheet outside the EHR.

The goal or destination of the four-wheel drive vehicle is performance improvement that is owned by all stakeholders from front-line clinicians to senior regional leaders, reliably implemented, uniform across settings, and sustained. To support this goal across the region, a quality leader with expertise in implementing the IHI Breakthrough Collaborative model took a regional leadership position in 2006, spearheading collaboratives to reduce variation across medical centers in TJC core (now accountability) measure performance (The Joint Commission, 2010; The Breakthrough Series: IHI's Collaborative Model for Achieving Breakthrough Improvement, 2003). Participation in the 100,000 Lives Campaign introduced multiple evidence-based practices: reducing ventilator-associated pneumonia, central-line–associated bloodstream infections, surgical site infections, and adverse drug events; implementing evidence-based care for acute myocardial infarctions (AMIs); and deploying rapid response teams (Overview of the 100,000 Lives Campaign, 2006).


Engaged Facilities

Use of the collaborative model garners engagement from all medical centers, each of which hosts, on a rotating basis, a monthly collaborative call for all facilities. A member of the regional quality staff facilitates the call, and the hosting facility shares its experiences with performance improvement in greater detail, focusing on recent efforts, successes, and challenges. All other facilities provide more succinct updates on their performance improvement progress. Discussions often involve collaborative problem solving around implementation barriers; barriers that require senior leadership attention are brought to the appropriate party by the regional facilitator.

Dynamic “summits” convening content experts, representatives from all medical centers, and patients and families kick off improvement initiatives and provide annual forums for exchanging ideas and best practices. Regional quality staff gathers best practices from collaborative calls and invites facility representatives to present at the forum. In addition, regional leadership scans best practices at external organizations and invites outside speakers to present and participate. Local champions and improvement teams conduct small tests of change while adhering to standardized evidence-based practices. Quality area-specific steering or faculty committees meet on an ongoing basis to assess the current state of care, review new evidence regarding best practices, and identify internal and external top-performing sites. Site visits to top performers help fine-tune effective practices.

With evidence-based care bundles being systematically implemented in all medical centers, regional leaders added internal data to the evidence base for performance improvement. In 2008, the region undertook a hospital mortality review, in which the 50 most recent inpatient deaths at each medical center were examined; the process has been described elsewhere (Lau & Litman, 2011). Based on the results, regional leaders developed a driver diagram associating underlying system components and processes with system-level improvement goals (Figure 2). Driver diagrams provide a logic model for clinical performance improvement and support developing a portfolio of improvement projects (Nolan, 2007).

Figure 2

For example, HAIs emerged as a priority area. The review established that approximately 500 deaths and 46,000 hospital days could be collectively attributed to hospital-acquired pneumonia, surgical site infections, Clostridium difficile infections, central-line–associated bloodstream infections, and catheter-associated urinary tract infections. Each HAI category became a performance improvement initiative. Some are still in pilot stages; others have been in place for 2–5 years. Additional focal areas for performance improvement include falls and HAPU prevention, sepsis care, critical care, and stroke care.


Spreading Effective Practices

Where and how to spread effective practices is typically determined by regional leadership on the basis of facilities’ readiness to accept an additional performance improvement initiative. Across individual facilities, this is a function of several factors, including the presence of a fully staffed leadership team with skill in facilitating local spread, the labor environment, and the overall performance improvement context. Occasionally, it seems likely that introducing a new initiative in a particular facility may cause improvement to slip in another area; in that case, implementation is delayed until previous improvement activity is consolidated and stabilized. In addition, if implementing an initiative will be highly complex, regional leadership may rely on phased spread. For instance, an initiative related to hospital-acquired pneumonia was initially spread to only two units at each facility. Each facility chose the units and had another 6 months to roll the initiative out to all care units. Regional leadership assessed the initial roll out to determine if more support was needed for full implementation.

Sustaining Improvements


Over time, performance improvement initiatives are refined and reinforced by regional quality leaders. Sustainability is accomplished by careful attention to ongoing performance monitoring at all levels of the organization. Unit managers and assistant managers receive daily performance reports on relevant metrics, such as TJC core measures, breastfeeding, sepsis care, falls, and HAPUs. Unit managers are accountable to departmental leaders who also review performance frequently and identify any concerns. Hospital, health plan, and medical group executives and regional quality staff review performance monthly or quarterly, depending on the measure, routinely scanning for changes in performance. The frequency of monitoring is determined by the confidence of regional leadership that improvement will be sustained, which is a function of the maturity of the initiative and the nature of the underlying condition and associated care processes. For example, although sepsis care improvement was initiated in 2009, process and outcome measures continue to be monitored monthly. A recent example of accountability in action occurred when HAPU rates increased slightly during winter months. Regional quality leaders contacted the facility leaders responsible for HAPU performance and asked them to address the issue. HAPU rates subsequently normalized.

Results


Improvements occurred in each area over time periods ranging from 2 to 6 years. While the extent of improvement varied across hospitals, relative changes in region-wide performance on process and outcomes measures ranged from 4% to 700% (Table 1). The greatest improvements occurred in sepsis and central line care processes. Improvements typically represent steady and sustained quality gains. For instance, execution of the TJC AMI bundle gradually increased from 91% to 100% between 2005 and 2011; concurrently, AMI mortality decreased by 0.1–1.1% annually.

Table 1

Discussion


Using an internally developed improvement framework and IHI-developed driver diagrams and collaborative methods, KPNC improved clinical performance across multiple areas. Our experience suggests that large-scale improvements in inpatient care are possible within a compressed time frame, and our quality results compare favorably with available estimates of comparable state and national rates and recently published data (Pryor et al., 2011).

Improvement occurred within an integrated healthcare system, which can lead to questions about the generalizability of our experience. However, a survey of 45 multihospital health systems found that high performance was not associated with system characteristics (Yonek, Hines, & Joshi, 2010). The same report found associations between high performance and (1) establishing a system-wide plan with measurable goals; (2) creating alignment across the system with goals and incentives; (3) leveraging data and measurement; and (4) standardizing and spreading best practices across the system. These factors are highly aligned with the “wheels” in our approach, with the exception of project management. In our experience, the latter clarifies accountability and ensures that adequate time and attention are devoted to performance improvement.

Our performance improvement initiatives benefited from the availability of an integrated EHR. However, as noted, we also conducted parallel data collection processes, particularly to track execution of the bundle for nursing care practices and before metrics were built into KP HealthConnect. In the report by Yonek et al. noted above, the presence of an EHR was not associated with high performance; however, the frequent and internally transparent use of dashboards, which can be generated through data mining of existing information systems, by hospital leaders and staff was identified as a best practice (Yonek et al., 2010).

Benchmarking our results is somewhat challenging, because few peer-reviewed reports supply contemporaneous rates. In a national project to reduce central-line–associated bloodstream infections, among 350 hospitals in 22 states, infection rates in intensive care units declined from a baseline of 1.8 infections per 1,000 central line days to 1.17 per 1,000 central line days over 12–15 months of participation (Eliminating CLABSI, A National Patient Safety Imperative: A Progress Report on the National On the CUSP: Stop BSI Project, 2011). A similar rate of 1.1 per 1,000 line days in intensive care units was reported in California (Healthcare Associated Infections Program, 2012).
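The infection rates cited above follow the standard convention of infections per 1,000 central-line days. As a quick illustrative sketch (the counts below are our own made-up example, not data from the cited project), the calculation is:

```python
def infections_per_1000_line_days(infections: int, line_days: int) -> float:
    """Standard device-associated infection rate: events per 1,000 device-days."""
    return 1000 * infections / line_days

# Hypothetical counts chosen to reproduce the cited post-intervention
# rate of 1.17 per 1,000 line days: 117 infections over 100,000 line days.
print(infections_per_1000_line_days(117, 100_000))  # 1.17
```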

Baseline data on C. difficile infection rates are sparse (Emerging Infections Program, 2012). However, the U.S. Department of Health and Human Services set a target of 30% reduction in facility-onset C. difficile infections between 2009–2010 and 2013 (National Targets and Metrics: Monitoring Progress Toward Action Plan Goals: A Mid-Term Assessment, 2011). Our reduction of 50% exceeds this target.

In the state of California, average risk-adjusted ICU mortality is 11.67% (Rating Hospital Quality in California, 2012). Although our rate compares favorably, we note that comparing mortality rates is complicated, and hospital type introduces bias into ICU mortality rates (Kuzniewicz et al., 2008; Reineck et al., 2012). We were unable to locate comparable benchmarks for the care processes of elevating the head of the bed and preventing deep vein thrombosis and peptic ulcer disease in ICUs.

The Collaborative Alliance for Nursing Outcomes (CALNOC) reported fall rates for 2007 and 2008 from 196 hospitals; the mean rate of injury falls across all types of hospital units was 0.10 per 1,000 patient-days (Brown, Donaldson, Burnes Bolton, & Aydin, 2010). Our baseline rate of 0.10 per 1,000 patient-days in 2008 was equivalent but subsequently decreased by 50%. Similarly, the rates of HAPUs across all KPNC hospitals are below those reported elsewhere (Brown et al., 2010; Pryor et al., 2011).

We previously reported on our experience at improving sepsis care and reducing mortality (Whippy et al., 2011). Inpatient stroke mortality in the state of California in 2008 and 2009 was 10.4–10.6%; we were unable to identify comparable stroke care process benchmarks. California and national benchmarks for TJC core measures are available on the CMS hospital comparison website; our performance on core bundles exceeds both by one to seven percentage points (U.S. Department of Health and Human Services, 2012).

In some areas, such as critical care and surgical safety, the results in Table 1 represent a portion of ongoing improvement work, but methodological issues precluded reporting other improvements. For instance, identifying retained foreign objects in the surgical setting evolved after the implementation of state law, particularly with respect to obstetric procedures. Ongoing work to improve glucose control among critical care patients includes evolving internal measurement reporting systems.


Lessons Learned

Many of our lessons reinforce existing understanding of performance improvement. Leadership support at the highest levels is critical. However, performance improvement must be a top priority for leaders at all levels; KPNC medical center leaders know exactly how their team is performing and where opportunities exist to improve further. A robust performance improvement program provides leadership training; the equivalent of Six Sigma green- and black-belt training is available. Each facility has a performance improvement director who works closely with executive and clinical leadership and links staff members with off-site improvement training appropriate to their performance improvement responsibilities. Senior leaders receive training as needed for their level of governance and oversight.

In fact, we can now relate the primary obstacles to performance improvement to the quality of leadership. Stable and complete leadership teams at facilities consistently demonstrated the ability to overcome competing priorities, closely monitored performance relative to their peers, and pressed for the implementation of effective practices to close any gaps. Leadership gaps can be partially compensated for by the use of implementation toolkits, assigning regional performance improvement mentors to support facilities through leadership transition, and asking facilities to assign project managers or improvement advisors to specific initiatives—but these measures cannot replace a visible and committed senior leader.

Clinical performance improvement begins with a compelling understanding for all stakeholders of the potential for harm of the current state of care. In the four-wheel drive approach, we refer to this as “standing on an unstable rock”; as a system, we must move our patients and staff to a more solid place. We rely on those who outperform us, as our clinical and quality leaders draw on the expertise of their professional colleagues within Kaiser Permanente and around the world.

Dedicated resources must be invested. Clear and ambitious goals are pivotal, as is transparency about performance data. Sustainability requires continuing organizational energy, and we judiciously select ongoing metrics to track sustained performance and unintended consequences as efficiently as possible.


Next Steps

As improvements stabilize and are sustained, we are expanding into new areas. We are currently piloting performance improvement initiatives related to hospital-acquired pneumonia, unplanned transfers, and critical care. The latter includes improving sedation practices, reducing delirium, and improving mobility and sleep among ICU patients, as well as improving ventilator management and fine-tuning blood glucose control. We are also increasingly focusing on care transitions and advanced illness care planning to more consistently provide care in the most appropriate setting (Figure 2).

Conclusions


Using a breakthrough collaborative approach and a four-wheel drive model of change, KPNC successfully instituted multiple clinical improvement initiatives in 21 hospitals. High-quality performance resulted on a broad set of metrics.

References


Brown D.S., Donaldson N., Burnes Bolton L., Aydin C.E. Nursing-sensitive benchmarks for hospitals to gauge high-reliability performance. Journal for Healthcare Quality 2010;32:9–17. doi: 10.1111/j.1945-1474.2010.00083.x.
California Senate Bill SB 1301. 2006.
Eliminating CLABSI, A National Patient Safety Imperative: A Progress Report on the National On the CUSP: Stop BSI Project. Rockville, MD: Agency for Healthcare Research and Quality;2011.
Emerging Infections Program. Measuring the scope of Clostridium difficile infection in the United States. 2012. Retrieved August 10, 2012, from
Graham S., Clopp M.P., Kostek N.E., Crawford B. Implementation of a high-alert medication program. Permanente Journal 2008;12:15–22.
Healthcare Associated Infections Program. Central line-associated bloodstream infections in California hospitals, April 2010 through March 2011. Sacramento, CA: California Department of Public Health;2012.
Kuzniewicz M.W., Vasilevskis E.E., Lane R., Dean M.L., Trivedi N.G., Rennie D.J., et al. Variation in ICU risk-adjusted mortality: Impact of methods of assessment and potential confounders. Chest 2008;133:1319–1327. doi: 10.1378/chest.07-3061.
Lau H., Litman K.C. Saving lives by studying deaths: Using standardized mortality reviews to improve inpatient safety. Joint Commission Journal on Quality and Patient Safety 2011;37:400–408.
National Targets and Metrics: Monitoring Progress Toward Action Plan Goals: A Mid-Term Assessment. 2011. Retrieved August 10, 2012, from
Nolan T.W. Execution of strategic improvement initiatives to produce system-level results. Cambridge, MA: Institute for Healthcare Improvement;2007.
Pryor D., Hendrich A., Henkel R.J., Beckmann J.K., Tersigni A.R. The quality ‘journey’ at Ascension Health: How we've prevented at least 1,500 avoidable deaths a year–and aim to do even better. Health Affairs (Millwood) 2011;30:604–611. doi: 10.1377/hlthaff.2010.1276.
Rating Hospital Quality in California. 2012. Retrieved August 10, 2012, from
Reineck L., Pike F., Le T., Cicero B., Iwashyna T., Kahn J. Bias in quality measurement resulting from in-hospital mortality and an ICU quality measure. American Thoracic Society 2012. San Francisco, CA;2012.
Rosenthal M.B. Nonpayment for performance? Medicare's new reimbursement rule. New England Journal of Medicine 2007;357:1573–1575. doi: 10.1056/NEJMp078184.
Schilling L., Dearing J.W., Staley P., Harvey P., Fahey L., Kuruppu F. Kaiser Permanente's performance improvement system, part 4: Creating a learning organization. Joint Commission Journal on Quality and Patient Safety 2011;37:532–543.
Schilling L., Deas D., Jedlinsky M., Aronoff D., Fershtman J., Wali A. Kaiser Permanente's performance improvement system, part 2: Developing a value framework. Joint Commission Journal on Quality and Patient Safety 2010;36:552–560.
The Breakthrough Series: IHI's Collaborative Model for Achieving Breakthrough Improvement. Cambridge, MA: Institute for Healthcare Improvement;2003.
The Joint Commission. Accountability Measures: A New Concept to Promote Quality Improvement. 2010. Retrieved August 10, 2012, from
U.S. Department of Health and Human Services. Hospital compare. 2012. Retrieved August 10, 2012, from
Whippy A., Skeath M., Crawford B., Adams C., Marelich G., Alamshahi M., et al. Kaiser Permanente's performance improvement system, part 3: Multisite improvements in care for patients with sepsis. Joint Commission Journal on Quality and Patient Safety 2011;37:483–493.
Yonek J., Hines S., Joshi M. A guide to achieving high performance in multi-hospital health systems. Chicago, IL: Health Research & Educational Trust;2010.

Authors' Biographies

Barbara Crawford, RN, MS, is Vice President of Quality and Regulatory Services for Kaiser Foundation Hospitals and Health Plan in Northern California. Based in Oakland, California, she oversees a department that works in partnership with regional and medical center leadership to support programs related to quality, patient safety, risk management, licensing, accreditation, and regulatory agency compliance throughout Kaiser Permanente Northern California.

Melinda Skeath, RN, CNS, is Executive Director of Quality and Regulatory Services for Kaiser Permanente Northern California, based in Oakland. She works with regional and medical center leadership to support programs related to quality, accreditation, regulatory agency compliance, and professional licensing.

Alan Whippy, MD, is the Medical Director for Quality and Safety for Kaiser Permanente Northern California. In this role, she helps set the strategy and create the infrastructure for successful execution on system‐wide ambulatory and hospital quality and safety programs.


Keywords: integrated delivery system; organizational change; patient safety; performance improvement models; quality improvement

© 2015 National Association for Healthcare Quality