Hospital Responses to Mortality Measures

A Survey of Hospital Administrative Leaders

Soelch, Luke A., BS; Repp, Allen B., MD, MSc

Quality Management in Healthcare: April/June 2019 - Volume 28 - Issue 2 - p 78–83
doi: 10.1097/QMH.0000000000000209
Original Research

Background: Standardized hospital mortality ratios (SHMRs) are widely used for quality improvement, hospital ratings, and health care payment.

Objectives: (1) To characterize the programs implemented at hospitals in response to SHMRs.

(2) To describe hospital leaders' perceptions of SHMRs as indicators of care quality.

Methods: Electronic survey of administrative leaders at US academic medical centers who subscribed to Vizient leadership networks.

Results: Forty-seven administrative leaders from 37 US academic medical centers completed the survey. Respondents reported that SHMRs had the largest role in the decision to implement inpatient hospice programs, electronic early warning systems, and clinical documentation specialist programs at their institution. Respondents perceived clinical documentation specialist programs and condition-specific care pathways as the most effective programs to improve performance on SHMRs. Only 29% of respondents agreed that SHMRs accurately reflect the number of preventable deaths in hospitals, but 78% agreed that SHMRs have helped their hospital reduce preventable deaths.

Conclusions: Hospitals have employed various strategies in response to SHMRs—including clinical programs that focus on reducing preventable deaths and other programs that target improvement in SHMR performance without reducing preventable deaths. Hospital administrative leaders identify significant benefits and flaws of SHMRs as quality indicators.

Larner College of Medicine, The University of Vermont, Burlington (Mr Soelch); Department of Medicine, Larner College of Medicine, The University of Vermont, Burlington (Dr Repp); and University of Vermont Medical Center, Burlington (Dr Repp).

Correspondence: Allen B. Repp, MD, MSc, University of Vermont Medical Center, 111 Colchester Ave, Burlington, VT 05401 (allen.repp@uvm.edu).

The authors thank Kate FitzPatrick, DNP, RN, NEA-BC, FAAN, Stephen Leffler, MD, and Anna Noonan, MS, BSN, RN, for their assistance in testing and refining the survey instrument.

The authors declare no conflicts of interest.

Supplemental digital content is available for this article. Direct URL citation appears in the printed text and is provided in the HTML and PDF versions of this article on the journal's Web site (www.qmhcjournal.com).

Hospital mortality measures are used by payers, nonprofit organizations, and governmental agencies for the purposes of quality improvement, ratings, purchasing, and payment.1–8 Risk adjustment methodologies produce standardized hospital mortality ratios (SHMRs) that compare the number of observed deaths in a hospital with an expected number of deaths calculated by statistical models.9 SHMRs are intended to allow comparison of mortality rates across institutions.10
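The ratio described above can be sketched with a minimal calculation using hypothetical counts; real risk adjustment models derive the expected number of deaths from patient-level statistical modeling, which this sketch does not attempt:

```python
def shmr(observed_deaths: int, expected_deaths: float) -> float:
    """Standardized hospital mortality ratio: observed deaths divided by
    the model-derived expected deaths. A value below 1.0 indicates fewer
    deaths than the risk adjustment model predicted."""
    return observed_deaths / expected_deaths

# Hypothetical example: 120 observed deaths against 150 model-expected deaths.
print(shmr(120, 150.0))  # 0.8
```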

As the most prominent example in the United States, the Centers for Medicare & Medicaid Services (CMS) began public reporting of risk-adjusted hospital mortality rates in 2008. Through the Hospital Compare program, CMS now provides public comparisons of hospital 30-day mortality rates for numerous conditions: acute myocardial infarction, heart failure, pneumonia, chronic obstructive pulmonary disease, stroke, and coronary artery bypass surgery.4 Performance on these mortality measures influences a hospital's CMS “star rating” and its payments from CMS.

The use of SHMRs as indicators of clinical quality has generated numerous criticisms. Different risk adjustment models have low levels of agreement, sometimes resulting in dramatically different performance assessments of the same hospital.2 SHMRs correlate poorly with other measures of hospital quality.11 CMS public reporting of mortality measures has not been associated with improvements in patient outcomes.12 Furthermore, SHMRs may not achieve their true goal: detecting preventable deaths.9,13–15

While the potential advantages, limitations, and risks of using SHMRs as quality indicators have been reported in the literature, there are few data on hospitals' actual organizational responses to SHMRs or on hospital administrative leaders' perceptions of SHMRs as quality indicators. We conducted a survey of hospital administrative leaders at academic medical centers (AMCs) in the United States to characterize hospital responses to SHMRs and leaders' perceptions of SHMRs as quality indicators.

METHODS

Design

We conducted an online survey of hospital administrative leaders at AMCs who subscribed to the Vizient leadership networks. Most US nonprofit AMCs and nearly 300 affiliated hospitals are members of Vizient (formerly known as the University HealthSystem Consortium), whose purpose is collaboration on performance improvement. The survey sought to (a) characterize the policies and programs implemented at hospitals in response to mortality measures and (b) describe hospital leaders' perceptions of mortality measures as indicators of care quality. The study was deemed exempt from review by the University of Vermont institutional review board.

Recruitment and participation

After permission from Vizient was obtained, an electronic announcement of the survey study was sent to members of the Chief Quality Officers Network, Chief Nursing Officers Network, and Medical Leadership Network listservs hosted by Vizient. Subsequently, individual e-mail invitations containing an electronic link to the survey were sent to members of each listserv. Up to 3 reminder e-mails were sent to nonrespondents over the 4-week survey administration period (December 2016-January 2017). Survey participation was voluntary, and no incentives were offered for survey completion.

Survey development and administration

Following literature review and face-to-face interviews with 2 hospital administrative leaders, we developed a draft survey instrument, which was then piloted with a group of 3 hospital administrative leaders. The instrument was iteratively refined to optimize face and content validity. The final survey (included in Supplemental Digital Content Appendix A, available at: http://links.lww.com/QMH/A19) was a 62-item instrument that contained 5 sections:

  1. Respondent characteristics
  2. Hospital programs and perceived influence of mortality measures
  3. Other hospital responses to mortality measures
  4. Perceived effectiveness of institutional responses to SHMRs
  5. Perceptions of SHMRs as indicators of care quality

Data were collected and managed using REDCap (Research Electronic Data Capture)—a secure, Web-based application designed to support data capture for research studies—hosted at The University of Vermont.16

Analysis

Surveys that contained no responses to questions outside of demographic information were excluded; otherwise, partially completed surveys were included in data analysis. Descriptive statistics were used to characterize multiple-choice and Likert scale responses. Deductive thematic analysis was used to map free-text responses regarding participants' opinions about using mortality measures as quality indicators into 3 predefined categories (favorable, mixed, and unfavorable).

RESULTS

Characteristics of respondents

A total of 265 e-mail invitations were sent (144, 52, and 69 e-mail invitations to Chief Quality Officers Network, Medical Leadership Network, and Chief Nursing Officers Network listserv members, respectively). Forty-nine questionnaires were received (18%). Two were excluded because they contained no responses to questions outside of respondent characteristics. The 47 included questionnaires represented 37 separate AMCs. The characteristics of respondents are summarized in Table 1. The most frequently identified respondent roles were chief quality officer (n = 17; 36.2%), chief medical officer (n = 15; 31.9%), and chief nursing officer (n = 4; 8.5%). Most respondents (n = 45; 95.8%) stated they were moderately or extremely familiar with hospital mortality measures.

Table 1

Hospital programs and perceived influence of mortality ratios

Figure 1 summarizes the reported presence of programs at respondents' hospitals and the perceived influence of mortality measures in the decision to start each program. All 45 respondents to the question indicated that their hospital has a clinical documentation specialist program to help capture severity of illness, an inpatient palliative care service, and a protocol for managing patients with acute stroke. In contrast, only 4.4% of respondents reported that their hospital has a telemedicine palliative care program, and 20% reported that their hospital has hospital-at-home services. The greatest influence of mortality measures was reported for inpatient hospice programs, electronic early warning systems, and clinical documentation specialist programs.

Figure 1

Other hospital responses to mortality measures

Ten respondents (22.7%) reported that their hospital changed protocols for accepting transfer patients from outside hospitals in response to mortality measures, and 8 respondents (18.2%) reported that their hospital changed surgical case selection protocols (Table 2). Only one participant indicated that their hospital had employed external consultants in response to mortality measures. In an open-response free-text field, the most commonly described other actions taken in response to mortality measures were condition-specific care protocols (53.3%) and systematic mortality case review processes (20%). Respondents (n = 44) estimated that a mean of 56.4% (SD = 27.1) of their hospital's efforts to improve performance on mortality measures had focused on reducing preventable deaths.

Table 2

Perceived effectiveness of programs

Figure 2 presents the perceived effectiveness of the hospital programs assessed. Hospital administrative leaders perceived clinical documentation specialist programs and condition-specific care pathways as the most effective programs for improving hospital performance on mortality measures, with 90.7% and 90.5% of respondents, respectively, rating these programs as “effective” or “very effective.”

Figure 2

Perceptions of mortality measures as indicators of care quality

Table 3 presents respondents' agreement with specific statements about mortality measures as indicators of care quality. While 78% of respondents agreed or strongly agreed that SHMRs have helped their hospital reduce preventable deaths, only 29% agreed that SHMRs accurately reflect the number of preventable deaths in hospitals.

Table 3

Twenty-two participants entered free-text responses describing their opinion regarding SHMRs as quality indicators. The majority (n = 12; 54.5%) of these responses identified both favorable and unfavorable attributes of mortality measures and were categorized as “mixed.” One representative response is provided:

[Mortality measures] assist an organization in identifying opportunities for improvement; however, they should not be weighed as heavily or presented to be as compelling as they currently are. Several limitations [are] due to variability in processes across hospitals that have nothing to do with actual care or service provided.

Among the remaining responses, 5 (22.7%) were categorized as favorable and 5 (22.7%) as unfavorable.

DISCUSSION

Despite methodological flaws, SHMRs are used widely to compare the quality of care delivered at hospitals for ratings and payment—requiring hospital leaders to respond to SHMRs. Organizational efforts to improve performance on SHMRs could focus on activities that decrease preventable deaths, such as improving systems of care. Alternatively, efforts could focus on activities that increase performance on SHMRs without reducing the number of actual deaths, such as augmenting clinical documentation to increase expected mortality or reassigning the status of certain hospitalized patients so that their deaths are excluded from the observed mortality rate.
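The distinction drawn above can be made concrete with a small numerical sketch (hypothetical figures): an SHMR improves equally whether a hospital actually reduces observed deaths or simply raises its model-expected mortality through more thorough documentation, leaving actual deaths unchanged:

```python
def shmr(observed: float, expected: float) -> float:
    """Observed deaths divided by model-expected deaths."""
    return observed / expected

baseline = shmr(120, 100)               # 1.20: more deaths than expected
fewer_deaths = shmr(90, 100)            # 0.90: genuine reduction in observed deaths
higher_documented_risk = shmr(120, 140) # ~0.86: same deaths, higher expected mortality

# Both strategies move the ratio below 1.0, but only one changes outcomes.
print(round(baseline, 2), round(fewer_deaths, 2), round(higher_documented_risk, 2))
```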

This study reveals that hospitals have adopted multiple programs that could impact SHMRs. Some commonly reported clinical programs such as condition-specific care pathways might yield a reduction in avoidable deaths. Other prevalent programs, such as clinical documentation specialists or inpatient hospice programs, might increase expected mortality or decrease observed mortality without reducing avoidable deaths. The most prevalent programs—clinical documentation specialists, condition-specific care pathways, and inpatient hospice programs—were also perceived to be the most effective in improving SHMR performance. In assessing their hospital's overall efforts to improve performance on SHMRs, hospital leaders estimated 56% of their hospital's efforts focused on reducing preventable deaths and, by inference, 44% did not focus on reducing deaths.

Approximately one-fifth of respondents reported that their hospital changed protocols for accepting transfer patients and selecting surgical cases in response to SHMRs. While these changes may promote appropriate care—for example, patients with a high risk of death who are unlikely to benefit from transfer to an AMC might remain at their local hospital—these practices also raise concerns that hospitals might deny appropriate care to high-risk patients to improve performance on SHMRs.

Hospital leaders' perceptions of mortality measures reflect the complexity of SHMRs. Although most leaders perceived that mortality measures helped reduce preventable deaths at their hospital, few agreed that mortality measures accurately reflect the quality of care delivered and the number of preventable deaths in hospitals. Free-text responses reflected this apparent paradox, with many hospital leaders expressing mixed views of SHMRs. Close inspection of these perceptions suggests a consistent theme: SHMRs are poor surrogates for care quality and preventable death, but when used internally, scrutinized critically, and monitored longitudinally, SHMRs can help individual hospitals uncover opportunities for care improvement. As one respondent observed:

[SHMRs] can be helpful. The main challenge is completely inadequate risk adjustment or cohort explanation or any means to describe major differences that impact mortality. That said, for internal longitudinal use, every hospital can learn from mortality measures.

There are several notable limitations to this study. Most respondents represented AMCs, reducing generalizability. Because of the small sample and low response rate, the results may not reflect the distribution of hospital leaders' perspectives on SHMRs. In addition, chief quality officers and chief medical officers appeared to have higher response rates than chief nursing officers. We speculate this may represent differences in the roles and responsibilities of these positions vis-à-vis mortality measures. The questionnaire was not an externally validated instrument, and the length of the questionnaire may have deterred participation. E-mail invitations were generated from the Web-based survey application, and it is possible that filters or firewall security systems interfered with delivery or accessibility of the e-mailed link. The questionnaire asked about hospital activities that may be perceived as socially unacceptable, such as efforts to improve SHMR performance without improving patient care, potentially leading to underreporting of these activities.

Despite these limitations, this study offers new insights into the real-life impact of SHMRs on hospital programs, policies, and actions. In addition, it advances the understanding of hospital leaders' perceptions regarding the effectiveness of specific programs in improving performance on SHMRs and the utility of SHMRs as indicators of care quality. Further studies including a broader range of hospital types and objective hospital data may help elucidate hospital responses to SHMRs, their cost, and their effectiveness in improving care quality and preventing deaths.

REFERENCES

1. Higgins A, Veselovskiy G, McKown L. Provider performance measures in private and public programs: achieving meaningful alignment with flexibility to innovate. Health Aff (Millwood). 2013;32(8):1453–1461.
2. Shahian DM, Wolf RE, Iezzoni LI, Kirle L, Normand SL. Variability in the measurement of hospital-wide mortality rates. N Engl J Med. 2010;363:2530–2539.
3. National Quality Forum. Composite measure evaluation framework and national voluntary consensus standards for mortality and safety—composite measures: a consensus report. http://www.qualityforum.org/Publications/2009/08/Composite_Measure_Evaluation_Framework_and_National_Voluntary_Consensus_Standards_for_Mortality_and_Safety%E2%80%94Composite_Measures.aspx. Published 2009. Accessed August 17, 2017.
4. Medicare.gov. Measures and current data collection periods. https://www.medicare.gov/hospitalCompare/Data/data-updated.html#MG17. Accessed August 17, 2017.
5. The Leapfrog Group. http://www.leapfroggroup.org/about. Accessed August 17, 2017.
6. Canadian Institute for Health Information. HSHMR: a new approach for measuring hospital mortality trends in Canada. https://secure.cihi.ca/estore/productSeries.htm?pc=PCC374. Published 2007. Accessed August 17, 2017.
7. National Health Service (NHS Digital). Summary hospital-level mortality indicator. http://content.digital.nhs.uk/SHMI. Published 2017. Accessed August 17, 2017.
8. Jarman B, Pieter D, van der Veen AA, et al. The hospital standardised mortality ratio: a powerful tool for Dutch hospitals to assess their quality of care? Qual Saf Health Care. 2010;19(1):9–13.
9. Shojania KG, Forster AJ. Hospital mortality: when failure is not a good measure of success. CMAJ. 2008;179(2):153–157.
10. Scott I. Where Does Risk-Adjusted Mortality Fit Into a Safety Measurement Program? Rockville, MD: Agency for Healthcare Research and Quality, Patient Safety Network; 2015. https://psnet.ahrq.gov/perspectives/perspective/173/where-does-risk-adjusted-mortality-fit-into-a-safety-measurement-program. Accessed August 17, 2017.
11. Dubois RW, Rogers WH, Moxley JH III, Draper D, Brook RH. Hospital inpatient mortality. Is it a predictor of quality? N Engl J Med. 1987;317(26):1674–1680.
12. Joynt KE, Orav EJ, Zheng J, Jha AK. Public reporting of mortality rates for hospitalized Medicare patients and trends in mortality for reported conditions. Ann Intern Med. 2016;165(3):153–160.
13. Lilford R, Pronovost P. Using hospital mortality rates to judge hospital performance: a bad idea that just won't go away. BMJ. 2010;340:c2016.
14. Hogan H. The problem with preventable deaths. BMJ Qual Saf. 2016;25:320–323.
15. Hofer TP, Hayward RA. Identifying poor-quality hospitals. Can hospital mortality rates detect quality problems for medical diagnoses? Med Care. 1996;34(8):737–753.
16. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)—a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–381.
Keywords:

hospital mortality; leadership; quality improvement

© 2019 Wolters Kluwer Health | Lippincott Williams & Wilkins