Reducing preventable mortality in hospitals is an important patient safety goal. Recently, the Institute for Healthcare Improvement launched a campaign to save 100,000 lives by reducing inpatient mortality in the United States. The primary intervention of the campaign was an effort to increase the implementation of six evidence-based practices. Although the impact of this campaign has been debated,1,2 it has caught the attention of hospital leaders as well as physicians.
According to national data from the Healthcare Cost and Utilization Project of the Agency for Healthcare Research and Quality, inpatient mortality in acute care hospitals in the United States declined steadily from 1993 to 2006. Every four years during that period, crude inpatient mortality declined by about 8% (Figure 1).3 A community hospital in the Ascension Health system reported a 19.2% reduction in observed mortality in the two years after implementation of eight specific interventions.4 A large acute care hospital in the United Kingdom reported 905 fewer deaths than expected during the three-year period when an improvement program was in place.5
Whereas several studies have identified factors affecting patient survival, such as access to intensive care units,6 delays in initiation of antibiotics,7 and medical errors,8 and have assessed the impact of goal-directed, multidisciplinary protocols and quality improvement programs on reductions in mortality,9 we found no studies that described the key contributors to mortality in academic medical centers (i.e., teaching hospitals). The engagement of academic medical centers in national quality programs is sometimes questioned.10 A recent study based on outcomes including mortality identified organizational characteristics of high-performing academic medical centers.11 In the current report, we identify common factors contributing to potentially preventable mortality and describe the approaches by which a number of academic medical centers achieved a reduction in risk-adjusted mortality. The imperative for faculty and staff involvement in improving quality will also be discussed.
Between 2001 and 2007, 30 U.S. academic medical centers asked the University HealthSystem Consortium (UHC), a member-owned alliance of 103 university teaching hospitals and 207 affiliated hospitals, to perform assessments of their institutions' quality infrastructure, patient safety, and clinical outcomes. The same team, made up of the authors of this report (R.B. and J.F.), conducted each assessment. Of these 30 medical centers, 16 focused on improving outcomes—particularly, mortality. Nine of the 16 hospitals also requested an evaluation of their documentation and coding practices; these evaluations were subcontracted to a third-party consultant.
We obtained the data from the UHC, which has developed a risk-adjustment methodology based on logistic regression that assigns an expected mortality value to each patient. Expected mortality is defined as the probability of death in the hospital; determining the probability of death requires taking into account, for each patient, the severity of illness (assigned by the All Patient Refined Diagnosis-Related Groups software),12 age, gender, race, socioeconomic status (patients with a payer status listed as charity care, self-pay, or Medicaid are classified as having low socioeconomic status), any transfer from another hospital, and comorbidities. The UHC database can be used to determine an observed mortality, defined as the number of in-hospital deaths divided by the total number of discharges, as well as an expected mortality for clinical populations. A mortality index, defined as the ratio of observed to expected mortality, is used to determine whether a clinical program (or a hospital) has excess mortality. A mortality index of 1.0 indicates that mortality is as expected. The database allows open comparisons with other academic medical centers under a data-sharing agreement. None of the data are publicly reported. Each of the hospitals represented in the current analysis had formally requested the UHC consultation with the primary purpose of improving the quality of the hospital's performance. As defined by our agreement with the hospitals, the scope of our activity was to perform a systematic assessment and provide recommendations to the hospital leadership. We did not actively participate in the treatment of human subjects and did not maintain quantitative data on the contributors to mortality. We did, and do, maintain the confidentiality of the hospitals that engaged in these consultations.
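The arithmetic behind these definitions can be sketched briefly. The following is an illustrative example only; the patient counts and risk probabilities are hypothetical, and in practice the expected-mortality probabilities come from the UHC risk-adjustment model, not from a flat rate as shown here.

```python
# Sketch of the mortality metrics defined above: observed mortality,
# expected mortality, and the mortality index (observed / expected).
# All numbers below are hypothetical, for illustration only.

def mortality_metrics(deaths, discharges, expected_probabilities):
    """Return (observed, expected, index) for a clinical population."""
    observed = deaths / discharges                       # in-hospital deaths / total discharges
    expected = sum(expected_probabilities) / discharges  # mean predicted probability of death
    index = observed / expected                          # > 1.0 indicates excess mortality
    return observed, expected, index

# Example: 30 deaths among 1,000 discharges, with risk-model
# probabilities summing to 25 expected deaths.
obs, exp, idx = mortality_metrics(30, 1000, [0.025] * 1000)
print(round(obs, 3), round(exp, 3), round(idx, 2))  # 0.03 0.025 1.2
```

A hospital with this profile would show a mortality index of 1.2, i.e., 20% more deaths than the risk model predicts for its patient mix.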
Getting the physicians on board
Our experience in guiding change suggested that interventions such as best practices would not be effective unless the clinicians and administrators acknowledged that there was a problem. The Kubler-Ross13 model of grief is sometimes used to describe physicians' responses to data; the stages are denial (“The data are wrong.”), anger (“Why are you looking at my practice?”), bargaining (“Can we adjust it for additional risk factors?”), depression (“Nothing can be done now.”), and acceptance (“Fine; the data may be correct.”). The Kubler-Ross model stops at acceptance and does not describe what happens next. We chose to use the stages-of-change framework originally developed by Prochaska and colleagues to assess and guide change in patient behaviors such as tobacco smoking.14 These five stages are precontemplation, contemplation, preparation, action, and maintenance. For a brief description of each stage, see Figure 2. We found that most of the physicians in these institutions were in the precontemplation stage with respect to their readiness for change—that is, they were simply unaware that a problem of excess mortality existed at their hospital. When presented with data showing higher-than-expected mortality, physicians had a wide range of responses. Whereas some questioned the use of administrative data to measure mortality, others welcomed an opportunity to examine the data. Some suggested that the patients who died all had do-not-resuscitate status, and others said that these patients were expected to be near the end of their life. Many of these physicians considered their hospital to be the last resort for their patients, especially for the patients received as transfers from community hospitals. More than a few suggested that mortality rates cannot be reduced.
We recognized these responses as symptomatic of deeper issues. Further dialogue began to reveal many of the root causes of resistance to change: earlier negative experiences in working with the hospital administration; pressure to grow clinical practice, procedural volume, and research; a need to be respected; and a desire to protect reputation. Some of these physicians exhibited distrust (“Why are you focusing on me? What's the real reason you are looking at my outcomes?”), whereas others offered different solutions (“We just need to hire more nurse practitioners to help in the unit.”). A few were defiant in their role as “rainmakers” (“My program brings a lot of revenue to the hospital.”) and wanted to be left alone. There was anxiety about changes that might result from acknowledging a problem with outcomes. The resistant physicians were generally few in number, but their group had a large influence. However, most of the physicians welcomed this attention to clinical outcomes. Junior faculty members readily acknowledged the day-to-day problems and expressed a strong desire to work on those problems. They offered insights into broken care-delivery processes and dysfunctional systems and into what needed to be done—but they felt their participation required approval from the chairs or section chiefs.
We worked from the assumptions that everyone cares about patient care and that, if we could demonstrate objectively, using data, that the status quo was hurting patients, the key stakeholders would accept that there was a problem with outcomes. We recognized that the assessment must answer two key questions. First, are documentation and coding appropriately capturing the severity and complexity of patients' conditions? Second, are there problems with quality of care that are related to evidence-based practices or systems and processes?
Documentation and coding problems were prevalent in each of the nine hospitals at which an external consultant performed the reviews (Table 1). The other seven hospitals did not request an external review of coding, because these hospitals had ongoing initiatives to improve clinical documentation and coding. The documentation and coding errors in the sample of charts reviewed at nine hospitals showed an underestimation of expected mortality by 5% to 20%. Even after adjustment for coding problems, the gap between observed and expected mortality persisted. Once the questions about data were answered, the focus shifted to quality-of-care issues. Many of these issues were already noted in the departmental morbidity and mortality conferences; we simply identified the patterns within and across departments. At that point, most of the physicians acknowledged that these were real problems, and most were surprised by the extent of the problems. However, they did not yet commit to taking action. Such hesitation is part of the second stage of change in the model of Prochaska and colleagues—contemplation. Nevertheless, at one hospital, a prominent department chair stood up in a medical staff meeting and said, “The arguments about data stop now. We've got a problem, and we need to fix it.” At that point, some of the more vocal resisters came on board as champions and engaged their colleagues. Goals were set, and areas of accountability were established. Once the physicians moved into the preparation stage, the provision of evidence-based recommendations and best practices from other hospitals proved effective in continuing the momentum. The action stage required support from the hospital quality infrastructure. Teams were formed, protocols were reviewed or developed, methods for improvement were identified, additional data were collected, and the real work of improvement began.
At the first signs of improvement, positive feedback was provided to the teams and the clinical leaders. From that point, the task, building on the early gains, was to maintain the results.
Key contributors to inpatient mortality
The clinical areas and diagnoses that required examination because of high mortality (mortality indices > 1) in these hospitals were bowel surgery, cardiac surgery, neurosurgery, liver transplantation, respiratory failure, sepsis, and stroke. Although we found gaps in evidence-based practices, the key contributors to mortality were the same systems and process issues found across all of the hospitals under assessment. Most of the cases we reviewed had multiple contributory factors (Table 2).
“We have to fix this”
So what did the hospitals do to reduce mortality? Once the findings and recommendations were accepted, physician-driven improvement teams were put in place. A typical portfolio of improvement initiatives would include early goal-directed treatment of sepsis, implementation of central-line and ventilator bundles to prevent infections, deployment of a rapid response team, and development of a patient safety program. In addition, hospitals began focused, evidence-based initiatives to reduce mortality associated with specific procedures such as cardiac surgery and diagnoses such as stroke. A few hospitals began examining their transfer population and providing feedback on outcomes to the referring hospitals. Every hospital developed a program to improve clinical documentation and coding. Each hospital's leadership improved their hospital's resources by actions such as replacing defibrillators, improving nurse staffing in the intensive care units, recruiting hospitalists or intensivists, implementing a hospice-within-the-hospital program, and investing in the quality infrastructure. Clinical chairs and section chiefs began to join work rounds and started asking questions when a complication occurred or prophylactic interventions were not administered.
Did the mortality rates improve?
We used performance in calendar year 2002 as the baseline; in that year, the 16 hospitals had rates of observed mortality that were higher than predicted (n = 11) or no different than predicted (n = 5). We considered improvement in performance a success if the hospital reduced the mortality index as well as the observed mortality. The reduction in observed mortality had to meet or exceed the overall trend across U.S. hospitals, which was estimated as 8% during a period of four years.3 The primary reason to include reduction in observed mortality as a criterion was to ensure that the reduction in mortality index was not solely due to better documentation and coding. Of the 16 hospitals with two to five years of follow-up, 12 were able to reduce the mortality index. Of these 12 hospitals, 9 also reduced observed mortality by at least 8% (Figure 3). Hospitals 1 to 5 demonstrated the greatest improvements in observed mortality and the risk-adjusted index. We deemed these same five hospitals to be “the most receptive” to the assessment and the findings that we presented. They were also the only hospitals where the hospital chief executive officer, the dean of the college of medicine, the chief medical officer, and at least some of the department chairs were “around the table” when the findings were presented. These hospitals had a sharp focus on mortality as the specific outcome they wanted to improve. Among the hospitals in which performance did not improve (or worsened), the department chairs were most conspicuous by their absence during planning and implementation. Hospitals 13 to 16 showed a worsening of performance in the time frame of the assessment; hospital 16 had shown early improvement, but it was followed by deterioration during a time of turmoil in the institutional organization and changes in leadership.
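The two-part success criterion described above can be sketched as a simple check. This is an illustrative example only; the hospital figures below are hypothetical, not data from the study.

```python
# Sketch of the success criterion: a hospital "improved" only if its
# mortality index fell AND its observed mortality fell by at least the
# national secular trend (estimated at ~8% over four years).
# All numbers below are hypothetical.

def improved(baseline_obs, follow_obs, baseline_index, follow_index,
             secular_decline=0.08):
    """True if both parts of the success criterion are met."""
    index_fell = follow_index < baseline_index
    obs_reduction = (baseline_obs - follow_obs) / baseline_obs
    return index_fell and obs_reduction >= secular_decline

# A hospital moving from 2.5% to 2.2% observed mortality (a 12%
# reduction) while its index falls from 1.15 to 1.00 counts as improved;
# a 4% reduction in observed mortality would not, even if the index fell.
print(improved(0.025, 0.022, 1.15, 1.00))  # True
print(improved(0.025, 0.024, 1.15, 1.00))  # False
```

The second condition is what guards against crediting a hospital whose index fell purely because better documentation and coding raised its expected mortality.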
Hospital 12 has had the shortest amount of time since the assessment in which to show improvement, but the level of interest in mortality reduction and the engagement of its physician leaders were high. Hospitals 7 and 10 improved their mortality index, even though observed mortality increased; there was a desire to improve outcomes, but these hospitals' initial focus was on improving documentation and coding.
U.S. academic medical centers are actively involved in benchmarking outcomes data, reviewing and improving systems and processes, and sharing best practices. This study found that the academic medical centers that focused on mortality reduction and that had broad-based engagement of physicians were able to achieve meaningful reductions in hospital mortality. This report provides a unique view of longitudinal experiences of a cross-section of academic medical centers engaged in quality improvement. The improvements were sustained for a period of two to five years, and they led to as many as 190 fewer deaths in the final year of follow-up in one of the hospitals assessed.
Our identification of the key contributors to potentially preventable mortality is consistent with the key contributors reported by others. In a study of unexpected cardiac arrests, Buist and colleagues found that most of the patients had signs of deterioration several hours before cardiac arrest and were examined more than once by junior housestaff.15 In our reviews, we sometimes noted suboptimal initial decision making by the housestaff. In most cases, these knowledge-based mistakes were later corrected by more-senior physicians, albeit after a delay in initiation of appropriate treatment. A rather common “7:00 am phenomenon” was noted in several of the hospitals—that is, the on-call housestaff would wait until the morning to notify the attending physician of clinically significant events that had occurred overnight. A few members of the housestaff whom we interviewed considered it a sign of weakness to wake up the attending (or a more senior resident) in the middle of the night.
In our analyses, better-performing hospitals had a similar rate of complications but a much lower associated mortality. This finding suggests that some hospitals are better able than other hospitals to “rescue” patients from complications. The better performance of those hospitals may be due to better recognition of clinical perturbations by nursing staff and residents, faster notification of attending physicians about such concerns, better availability of system resources, or the existence of standardized care protocols for management of complications. Our reviews showed that technical errors, such as perforation of the superior vena cava during an attempted insertion of a dialysis catheter into the subclavian vein, and knowledge-based errors, such as failure to consider retroperitoneal hemorrhage as a cause of hemodynamic instability, were associated with greater morbidity and mortality. Communication errors, including problems with handoffs, were frequently cited as root causes in departmental quality reviews. In one particular case of poor communication, physician members of highly dysfunctional teams simply could not work together. We found that the hospital at night was generally perceived by the staff as less safe than the hospital during daytime hours. Care during off-hours—nights, weekends, and holidays—was characterized by the presence in-house of fewer senior clinical staff and a limited availability of specialized diagnostics and services. Similar findings were noted by Peberdy and colleagues in a study of survival from cardiopulmonary arrests16; they found significantly lower adjusted survival among persons experiencing such an arrest at night and on weekends.
One of the characteristics distinguishing medical centers that succeeded in lowering mortality from those that did not was the engagement of the clinical department chairs. Disengagement was a component of what we describe as an “organizational metabolic syndrome” that is prevalent in many hospitals. This syndrome is characterized by the detachment of clinical department chairs and the medical staff, a distrust of data, a culture in which the residents run the hospital, broken front-line processes, poor hospital–physician relations, a disconnect between middle managers and senior leadership, and a culture of “learned helplessness” on the part of the quality improvement staff. In our experience, organizational metabolic syndrome was the main reason that a hospital was unable to recognize that it had a problem with outcomes and that prior attempts at improvement failed.
We successfully used the model of Prochaska and colleagues in a novel way to recognize and promote stages of organizational readiness for change. It is noteworthy that none of the interventions directly targeted a hospital's culture, which was often cited as a barrier to improvement, and yet the culture was later reported by the hospital staff as being much improved. On the basis of our observation of this phenomenon at several hospitals, we surmise that a hospital's culture began to change when early improvements in quality of care were made. A more subtle finding was that cultural shifts started when departments and disciplines that had been “ships passing in the night” began to work together on a common agenda. A shared problem also brought hospital and physician leadership around the same table to formulate a shared goal to improve outcomes.
Our study had several limitations. The hospitals we assessed in this study were self-selected for a consultation. Although such a step could mean that they were more likely to make changes, we found that the staff at these hospitals initially did not believe they had a problem with outcomes. Moreover, not every hospital in this self-selected group was able to make changes and reduce mortality. It is also possible that some of the hospitals showing reduced mortality could have done so by avoiding higher-risk patients, but we did not notice a decline in case mix that might indicate that they were systematically doing that. It would be difficult for academic medical centers that routinely take transfers from community hospitals to avoid taking sicker patients. We did notice that some surgical programs established evidence-based criteria for operating on patients who are at extremely high risk. This practice may affect the mortality rate for one program but not for the hospital overall. More recently, patients with extremely low probability of recovery are increasingly given the option of palliative or hospice care. We expect the increasing availability of the option of hospice care to be one of the major determinants of in-hospital mortality in the next several years. Furthermore, the factors associated with success or with lack of success in reducing mortality were based on our observations. Given the relatively small number of hospitals in the two groups (successful and not successful), a quantitative analysis to identify statistically significant factors was not feasible. Future studies should build on these findings and test them in a larger group of medical centers. Finally, one can argue that, in a nonrandomized longitudinal study without matched controls, the improvements in mortality are simply due to regression to the mean. 
This concern cannot be fully allayed, but it should be reasonably mitigated by the fact that we defined improvement as a reduction in observed mortality greater than that seen across all U.S. hospitals in the same period. In addition, not every hospital in this study improved—in fact, some worsened.
In conclusion, the necessary ingredients for achieving meaningful improvement in clinical outcomes include good data, a sound approach to change, and physician leadership. Active participation by medical staff and senior physician leaders can positively catalyze performance improvement efforts. A good place for physicians to start is in developing a habit of data-driven introspection—taking a periodic “time-out” to evaluate one's own practices and patient outcomes. It is imperative that physician leaders inculcate this habit in themselves and encourage its inculcation among their hospital's faculty.
1 Wachter RM, Pronovost PJ. The 100,000 Lives Campaign: A scientific and policy review. Jt Comm J Qual Patient Saf. 2006;32:621–627.
2 Berwick DM, Hackbarth AD. IHI replies to “The 100,000 Lives Campaign: A scientific and policy review.” Jt Comm J Qual Patient Saf. 2006;32:628–633.
3 Agency for Healthcare Research and Quality. HCUPnet, Healthcare Cost and Utilization Project. Available at: http://hcupnet.ahrq.gov. Accessed October 8, 2008.
4 Tolchin S, Brush R, Lange P, Bates P, Garbo JJ. Eliminating preventable death at Ascension Health. Jt Comm J Qual Patient Saf. 2007;33:145–154.
5 Wright J, Dugdale B, Hammond I, et al. Learning from death: A hospital mortality reduction programme. J R Soc Med. 2006;99:303–308.
6 Simchen E, Sprung CL, Galai N, et al. Survival of critically ill patients hospitalized in and out of intensive care. Crit Care Med. 2007;35:449–457.
7 Iregui M, Ward S, Sherman G, Fraser VJ, Kollef MH. Clinical importance of delays in the initiation of appropriate antibiotic treatment for ventilator-associated pneumonia. Chest. 2002;122:262–268.
8 Wilson RM, Runciman WB, Gibberd RW, Harrison BT, Newby L, Hamilton JD. The Quality in Australian Health Care Study. Med J Aust. 1995;163:458–471.
9 Stamou SC, Camp SL, Stiegel RM, et al. Quality improvement program decreases mortality after cardiac surgery. J Thorac Cardiovasc Surg. 2008;136:494–499.e8.
10 The Academic Medical Center Working Group of the Institute for Healthcare Improvement. The imperative for quality: A call for action to medical schools and teaching hospitals. Acad Med. 2003;78:1085–1089.
11 Keroack MA, Youngberg BJ, Cerese JL, Krsek C, Prellwitz LW, Trevelyan EW. Organizational factors associated with high performance in quality and safety in academic medical centers. Acad Med. 2007;82:1178–1186.
12 Averill RF, Goldfield NI, Muldoon J, Steinbeck BA, Grant TM. A closer look at all patient refined DRGs. J AHIMA. 2002;73:46–50.
13 Kubler-Ross E. On Death and Dying. New York, NY: Macmillan; 1969.
14 Prochaska JO, DiClemente CC, Norcross JC. In search of how people change: Applications to addictive behaviors. Am Psychol. 1992;47:1102–1114.
15 Buist MD, Moore CE, Bernard SA, Waxman BP, Anderson JN, Nguyen TV. Effects of a medical emergency team on reduction of incidence of and mortality from unexpected cardiac arrests in hospital: Preliminary study. BMJ. 2002;324:387–390.
16 Peberdy MA, Ornato JP, Larkin GL, et al. Survival from in-hospital cardiac arrest during nights and weekends. JAMA. 2008;299:785–792.