
Feature Articles

Cultivating Quality: Meeting Effective Care Measures

Hemman, Eileen A., EdD, RN-BC

AJN The American Journal of Nursing: December 2011 - Volume 111 - Issue 12 - p 54-60
doi: 10.1097/01.NAJ.0000408187.67511.f0

Even before the Institute of Medicine (IOM) cited the astounding estimate that as many as 98,000 people die each year in our nation's hospitals from medical errors, patient safety had moved to the forefront of America's health care agenda.1 In 1987, the Joint Commission launched the Agenda for Change, which integrated effective care measurement into the accreditation process.2 By 1997, these activities had evolved into the ORYX initiative, through which the Joint Commission developed a set of effective care measures for hospitals (known by such names as performance measurement systems, core measure sets, process of care measures, and accountability measures) that reflected the best evidence-based protocols for a wide variety of conditions, including acute myocardial infarction, heart failure, pneumonia, and pregnancy.2-7

On July 1, 2002, hospitals began collecting data on how well they met these measures for submission to the Joint Commission.3 Working together, the Joint Commission and the Centers for Medicare and Medicaid Services (CMS) standardized National Hospital Quality Measures and, in 2004, began publicly reporting hospital data.3

Today all effective care measures common to the Joint Commission and the CMS are reviewed and endorsed by the nonprofit National Quality Forum.3 The Joint Commission requires hospitals to select and report at least four core measure sets for accreditation,8 and the CMS provides reimbursement incentives, known as "pay for performance" and "value-based purchasing," to hospitals and providers for meeting or exceeding national standards.9

Since the inception of effective care measures, American hospital performance has improved significantly, resulting in saved lives, better patient health, and lowered health care costs.10-12 This article clarifies how effective care is defined and gauged. It describes the procedures for collecting and reporting data, explains exceptions to and limitations of effective care measures, and discusses the central role nurses play in achieving quality improvement.

DEFINING AND MEASURING EFFECTIVE CARE

The IOM's Committee on the Quality of Health Care in America defined effective care as evidence based and closely linked to positive patient outcomes; the committee also identified six qualities essential to any system that strives to meet patients' health care needs.13

  • Safe—Patient is not harmed or injured during care.
  • Effective—Care is based on scientific research and best evidence, avoiding overuse and underuse of services.
  • Patient centered—Care is respectful of and responsive to patient preferences, needs, and values.
  • Timely—Harmful delays are avoided.
  • Efficient—Care is cost-effective, not wasting equipment, supplies, energy, or ideas.
  • Equitable—Patients are provided the same level of care regardless of geographical location or socioeconomic status.

For reporting purposes, effective care is measured by calculating the percentage of patients with a specific diagnosis whose treatment conforms to a set of practice standards shown in clinical studies to improve that particular condition. Because these data are obtained using the methods specified in the appropriate specifications manual14 and are collected nationally, they can be used to compare the quality of care coordination and delivery among state and national health care organizations, information that is especially useful to consumers (see Consumer Access to Performance Information).

The Specifications Manual for National Hospital Quality Measures (http://bit.ly/riqdjc) is a collaborative publication of the Joint Commission and the CMS.14 It provides the criteria for a uniform set of effective care measures for U.S. hospitals and includes a data dictionary; scientific rationale for each measure; algorithms to use in selecting appropriate patient records to audit; and requirements for data collection, reporting, and documentation. For example, to meet the specification guidelines, some effective care measures must be documented by licensed independent practitioners, namely physicians, NPs, or physician assistants; other measures, which fall within the scope and practice of nursing, can be documented by RNs.

The specifications manual lists each major category of effective care, along with a set of measurable indicators. For example, in version 4.0b, which covers January through June 2012 (the current version, 3.3b, ends this month), the measure set called heart failure (HF) has three indicators: HF-1, discharge instructions; HF-2, evaluation of left ventricular systolic (LVS) function; and HF-3, angiotensin-converting enzyme inhibitors or angiotensin receptor blockers for LVS dysfunction. Discharge instructions are defined as "written instructions or educational material given to patient or caregiver at discharge or during the hospital stay addressing all of the following: activity level, diet, discharge medications, follow-up appointment, weight monitoring, and what to do if symptoms worsen."14 The stated rationale acknowledges that patient nonadherence to diet and medication instruction is often responsible for "changes in clinical status" and urges health care professionals to "ensure that patients and their families understand their dietary restrictions, activity recommendations, prescribed medication regimen, and the signs and symptoms of worsening heart failure."14 This indicator is said to improve when there's a rise in the rate of patients with heart failure for whom documentation exists that written discharge instructions, addressing all six criteria listed above, were provided. If there's no documentation that all six of the instructional criteria were addressed, the care standard hasn't been met.
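To make the all-or-nothing nature of this indicator concrete, the following minimal sketch in Python checks whether a chart documents every required discharge instruction topic. The topic names and chart representation are illustrative assumptions; the actual abstraction rules are those of the specifications manual.

```python
# Minimal sketch of an HF-1-style documentation check.
# The topic names and chart representation are hypothetical.

REQUIRED_TOPICS = {
    "activity level",
    "diet",
    "discharge medications",
    "follow-up appointment",
    "weight monitoring",
    "worsening symptoms",
}

def meets_hf1(documented_topics):
    """A chart meets the indicator only if every required topic is documented."""
    return REQUIRED_TOPICS.issubset(set(documented_topics))

# Example: one topic missing, so the chart fails the indicator.
chart_topics = {"activity level", "diet", "discharge medications",
                "follow-up appointment", "weight monitoring"}
print(meets_hf1(chart_topics))  # False
```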

How to calculate a hospital's performance. Performance is calculated by dividing the "numerator statement," or number of patient charts meeting the standard (in this case, the number of patients with heart failure whose charts include appropriate documentation) by the "denominator statement," or total number of charts in the eligible patient population (in this case, patients with heart failure). The quotient, expressed as a percentage, represents the hospital's performance and can be compared with the average performance of all hospitals in the nation and with the average performance of top performing hospitals in the nation.
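For illustration only, here is that arithmetic as a short Python sketch; the counts are hypothetical.

```python
# Minimal sketch of the performance calculation described above.
def performance_rate(numerator, denominator):
    """Percentage of eligible charts that met the care standard."""
    if denominator == 0:
        raise ValueError("no eligible charts in the denominator")
    return 100.0 * numerator / denominator

# Hypothetical example: 87 of 100 eligible heart failure charts documented
# all six discharge instruction criteria.
print(f"Hospital performance: {performance_rate(87, 100):.1f}%")  # 87.0%
```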

As the scientific evidence changes, the indicators of effective care are reevaluated and new measures may be introduced. For this reason, different versions of the specifications manual, each corresponding to a specific hospital discharge and data collection period, are available online at the QualityNet or Joint Commission Web sites.14 It's important to select the version that covers the discharge period under review so that performance chart audits follow the correct set of guidelines.

In addition, some measure sets undergo substantial revision. For example, a new category of measures for perinatal care was introduced in 2009 to replace the "pregnancy and related conditions" measure set. Data collection for the new measures began with April 1, 2010, discharges.15 The perinatal care measures are now part of 10 core measure sets from which hospitals can select and report a minimum of four (or, based on the patient population they serve, a combination of applicable core measure sets and noncore measures) to the Joint Commission in compliance with the ORYX performance measurement initiative.3 Measures concerning sudden cardiac arrest are also under consideration for future development.16

Exceptions to effective care measures. If a treating clinician determines that a patient's medical condition contraindicates established effective care measures, the rationale for the exception must be documented in the patient's medical record. For example, effective care measures for acute myocardial infarction specify that aspirin be given at arrival and prescribed at discharge.14 If a patient with an acute myocardial infarction is allergic to aspirin, the treating clinician must document allergy as the reason for not prescribing it. Charts with appropriately documented exceptions are excluded from the denominator statement.14 To meet the effective care standard, documentation must be performed by the appropriate party, as stipulated in the specification guidelines.
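A brief sketch of how documented exceptions affect the calculation follows, assuming a simplified chart record with two hypothetical fields; in practice, the abstraction rules in the specifications manual determine which charts are excluded.

```python
# Charts with an appropriately documented exception are removed from the
# denominator before the performance rate is calculated. Records are hypothetical.
charts = [
    {"met_standard": True,  "documented_exception": False},
    {"met_standard": False, "documented_exception": True},   # e.g., aspirin allergy
    {"met_standard": False, "documented_exception": False},
]

eligible = [c for c in charts if not c["documented_exception"]]
numerator = sum(c["met_standard"] for c in eligible)
print(f"{100.0 * numerator / len(eligible):.0f}% of eligible charts met the standard")  # 50%
```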

Collecting and reporting data. Effective care measurement data are obtained through chart audits, but not all charts are audited. Charts are randomly selected from among those that meet the inclusion criteria for the discharge period under review. Chart selection is based primarily on codes from the ninth revision of the International Statistical Classification of Diseases and Related Health Problems (ICD-9). For every condition audited, the facility's average monthly number of patients discharged with that diagnosis determines the number of charts selected for audit. For example, for acute myocardial infarction, the specifications manual (version 4.0) gives the following parameters: health care facilities treating a monthly average of 516 or more patients with relevant ICD-9 codes must include at least 104 charts in the sample; if the monthly average falls between 131 and 515 patients, 20% must be audited; if it falls between 26 and 130, 26 charts must be audited; and facilities treating, on average, fewer than 26 patients per month with acute myocardial infarction may not sample charts but must audit all of them.14 Once the chart audits are complete, data are reported through vendors that the Joint Commission has evaluated and certified to collect data according to its specifications. Interrater reliability checks must be performed periodically to ensure data credibility.
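The sampling rule paraphrased above can be expressed as a short Python sketch. The thresholds follow the text of this paragraph; rounding the 20% sample up to a whole chart is an assumption for illustration rather than a rule quoted from the manual.

```python
# Sketch of the acute myocardial infarction sampling parameters described above
# (specifications manual, version 4.0); consult the manual for the authoritative tables.
import math

def monthly_sample_size(avg_monthly_cases):
    """Number of charts to audit for a given average monthly volume of
    patients discharged with qualifying ICD-9 codes."""
    if avg_monthly_cases >= 516:
        return 104
    if avg_monthly_cases >= 131:
        return math.ceil(0.20 * avg_monthly_cases)  # rounding up is an assumption
    if avg_monthly_cases >= 26:
        return 26
    return avg_monthly_cases  # fewer than 26: audit every chart

for volume in (600, 300, 80, 12):
    print(volume, "->", monthly_sample_size(volume))
```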

LIMITATIONS OF EFFECTIVE CARE MEASURES

Every measurement system has limitations that should be considered when evaluating data. One of the problems with effective care measures is that, depending on the measure, it may be possible to "game" the system.7 For example, consider the first indicator in the heart failure measure set, "discharge instructions." Charts may be deemed compliant with the standard if a box is checked on a carefully worded discharge instruction form or if a written nursing note addresses all six criteria for effective care. Such documentation, however, doesn't measure whether care is being delivered effectively enough to improve patient outcomes.7 What we really want to measure is whether the patient with heart failure consistently weighs in every morning, follows dietary and activity restrictions, sees medical providers as often as is recommended, adheres to the prescribed medication regimen, and recognizes when worsening symptoms warrant medical follow-up—and in each case understands why.

To address this type of limitation, the Joint Commission has proposed, as a framework for effective care measures, four accountability criteria that take process into account.17 These include

  • research: a strong evidence base that establishes a link between provided care and improved outcomes.
  • proximity: a close connection to the evidence-based care process.
  • accuracy: the ability to demonstrate whether the evidence-based process has been provided.
  • safety: little risk of producing adverse effects.

The Joint Commission plans to assess whether current measures meet these accountability criteria after January 1, 2012.17 Of the 28 effective care measures for 2010, the Joint Commission classified 22 as accountability measures,7 although measures not deemed accountability measures may still provide valuable information to hospitals initiating quality improvement efforts.

A second limitation facing effective care measures is the timing of the chart audits: a chart isn't audited until after the patient has been discharged and ICD-9 codes have been applied, a process that may take months. While such retrospective audits help organizations identify opportunities to improve future care, they do nothing to improve care as it's being delivered. To address this problem, real-time chart audits, called "concurrent reviews," should be performed. Results communicated to nurses and physicians while a patient is still in their care can help ensure that the patient receives the best care possible.

STRATEGIES FOR ENSURING EFFECTIVE CARE

Hospitals have traditionally relied on organizational leaders or nurse and physician "champions" to drive improvements in effective care. Unfortunately, organizational leaders may not possess the clinical knowledge to advance quality improvement, and clinician champions may not have the authority to effect change at the bedside in more than one area.18 Real change tends to start with the organizational structure, goals, mission, and vision, which are typically established by the board of directors and senior leadership.

Altering expectations and goals. Expectations for improvement in effective care performance may be linked to personnel performance evaluations, or incentives may be provided in the form of employee gain-sharing, through which employees are financially compensated when performance goals are met. Changing the goal from that of complying with a standard to providing "perfect care" for all patients has also been successful.18

Promoting awareness. Educational campaigns to promote awareness of performance measures are essential. Once health care teams understand the significance of effective care measures and become experts on the topic, practice improves. Because nurses coordinate patient care, they play a central role in ensuring that care is effective—at the bedside and throughout the organization.19

For example, it may be possible for nurses to redesign processes to facilitate improvement initiatives, as they did in one hospital with an electronically based protocol for providing pneumococcal vaccination to inpatients.20 Or they may partner with physicians and quality improvement personnel to reinforce effective care standards by using abbreviated checklists during daily rounds.18

Employing nurse leaders. To drive quality improvements and promote patient safety, the American Association of Colleges of Nursing has proposed the creation of a new nursing position, the clinical nurse leader, who would lead innovation in the area of health care quality, while reducing costs at the point of care.21, 22 Other important leadership positions for RNs include quality management consultant, core measure abstractor, quality specialist, clinical documentation specialist, and quality improvement project coordinator. Whatever organizational strategies are adopted to promote effective care, accountability and clearly defined roles and responsibilities are imperative.

GOOD USE OF DATA

The common thread in successful endeavors to improve effective care is the ability to provide meaningful, real-time results to health care providers. Ideally, multiple forms of communication are used to increase transparency: messages and formal presentations prioritize the organization's top measures of success and improvement initiatives, while data are made available electronically, as well as on posters, flyers, and quality assurance bulletin boards. Data are most influential when tailored to the intended audience. For example, labor and delivery personnel need to see perinatal measure results, whereas surgical personnel need to see the Surgical Care Improvement Project (SCIP) measures. Two major ways of presenting data to drive performance improvement are dashboards and run charts.23, 24

Dashboards provide quick snapshots of a hospital's or patient care unit's performance, much like the dashboard of a car presents information at a glance about the fuel supply, speed of travel, and any mechanical malfunctions (see Figure 1). Effective care measures highlighted in red are areas of opportunity for improvement, whereas measures highlighted in blue represent performance among the top 10% of reporting hospitals. Dashboards provide abbreviated performance information, representing the "pulse" of the organization, but only for a specified period of time. To visualize hospital or unit improvement, run charts are used.
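As a rough sketch of this color coding, the function below assigns blue to performance at or above the top-decile cutoff, as described above, and red to performance below the national average; the red threshold is an assumption chosen for illustration, not a rule taken from any specification.

```python
# Hypothetical dashboard color coding. The "red" rule (below the national
# average) is an assumption; "blue" follows the top-10% convention named above.
def dashboard_color(hospital_rate, national_avg, top_decile_cutoff):
    if hospital_rate >= top_decile_cutoff:
        return "blue"   # among the top 10% of reporting hospitals
    if hospital_rate < national_avg:
        return "red"    # opportunity for improvement
    return "neutral"    # meets the average but not top-decile performance

print(dashboard_color(98.5, 93.0, 97.0))  # blue
print(dashboard_color(88.0, 93.0, 97.0))  # red
```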

Figure 1. Sample Hospital Core Measure Dashboard

Run charts graphically depict the percentage of patients receiving effective care over time. Usually the time period consists of consecutive months or quarters. In this way, upward and downward trends and large deviations can be noticed immediately and investigated further. Run charts can also be used to gauge whether system changes yield improvements and to compare one hospital's performance with national averages. Small upward or downward variations over the short term may be normal, so data on a run chart should cover a long enough period of time to show shifts, trends, and patterns, which can be analyzed in light of process changes that might have improved or worsened performance. If you have 25 points or more in your data series, a more sophisticated run chart, called a statistical control chart, can be used to detect performance changes quickly and accurately.25
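A minimal sketch of a run chart follows, using hypothetical quarterly rates and the matplotlib library (assumed to be available); plotting the series median as a reference line is one common convention for reading shifts and trends.

```python
# Run chart sketch: quarterly performance (hypothetical values) over time,
# with the series median drawn as a reference line.
import statistics
import matplotlib.pyplot as plt

quarters = ["2010 Q1", "Q2", "Q3", "Q4", "2011 Q1", "Q2", "Q3", "Q4"]
rates = [82, 85, 84, 88, 90, 89, 93, 95]  # percentage of charts meeting the measure

plt.plot(range(len(rates)), rates, marker="o", label="Hospital performance")
plt.axhline(statistics.median(rates), linestyle="--", label="Median")
plt.xticks(range(len(rates)), quarters, rotation=45)
plt.ylabel("Charts meeting measure (%)")
plt.title("Run chart (hypothetical data)")
plt.legend()
plt.tight_layout()
plt.show()
```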

BRINGING DATA TO LIFE WITH PATIENT STORIES

The major problem with dashboards and run charts is that the data they present remove the patient from the picture. To make the data more meaningful, it's necessary to include patient stories that describe a patient safety risk or a harmful event. When hearing a patient's story, providers become engaged and often begin to spontaneously discuss their experiences, offering firsthand insight into system issues. Effective leadership allows such insight to shape process decisions that may be appropriate for system-wide implementation. A case study from a hospital at which I once worked illustrates how patient stories may be used in conjunction with patient data to drive effective care improvements.

One case study. The quality improvement nurse at the hospital identified a downward trend over three quarters on one of the SCIP effective care measures, SCIP-Card-2, called "Surgery Patients on Beta-Blocker Therapy Prior to Arrival Who Received a Beta-Blocker During the Perioperative Period" (see Figure 2).14 To meet the effective care standard, patients scheduled for surgery who'd been receiving β-blocker therapy prior to admission had to receive the regular dose of their prescribed β-blocker within the perioperative period, defined as 24 hours before surgical incision through discharge from the postanesthesia care unit (PACU) or recovery unit. The measure is supported by research showing that patients whose β-blocker therapy is discontinued during the perioperative period have a significantly higher risk of death than patients for whom β-blocker therapy is continued. In addition, the American College of Cardiology and the American Heart Association cite continuation of β-blocker therapy in the perioperative period as a class I indication and suggest that the β-blocker be titrated to maintain tight control of the patient's heart rate.26 To calculate this measure, the patient's admission chart must reflect the date and time of the last β-blocker dose the patient received prior to surgery.
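The timing requirement can be illustrated with a short Python sketch; the window boundaries follow the definition above, while the function, variable names, and example times are hypothetical.

```python
# Does a documented dose fall within the perioperative window, defined above as
# 24 hours before surgical incision through discharge from the PACU?
from datetime import datetime, timedelta

def dose_in_perioperative_window(dose_time, incision_time, pacu_discharge_time):
    return incision_time - timedelta(hours=24) <= dose_time <= pacu_discharge_time

incision = datetime(2011, 3, 14, 9, 0)
pacu_discharge = datetime(2011, 3, 14, 13, 30)
last_dose = datetime(2011, 3, 13, 21, 0)  # the evening before surgery
print(dose_in_perioperative_window(last_dose, incision, pacu_discharge))  # True
```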

Figure 2. Percentage of Patient Charts That Met Perioperative β-Blocker Guidelines

The quality consultant reviewed the charts that didn't meet the standard and selected the following case for analysis and review at core measure meetings by pharmacy, preoperative, perioperative, anesthesia, and medical–surgical nursing staff (identifying patient characteristics have been omitted to protect patient privacy).

An elderly woman with a history of atrial fibrillation and stroke presented to the ED with severe vomiting and abdominal cramps. The ED admissions form indicated that the patient's current medications included a β-blocker, but the timing of the patient's last dose wasn't noted. The patient was diagnosed with a bowel obstruction secondary to adhesions and was taken immediately from the ED to surgery. Surgery was uneventful and she was transferred to the surgical unit for postoperative care. Unfortunately, her β-blocker wasn't ordered.

On the second postoperative day, the patient developed sudden-onset atrial fibrillation with a fast ventricular response (a pulse rate between 180 and 200 beats per minute). She was taken to the ICU, where her heart rate was normalized with successive doses of an intravenous antiarrhythmic drug. Her oral β-blocker was then restarted, and she was closely monitored for two more days in the ICU before being transferred to a surgical unit and, eventually, discharged home.

Upon review, the patient's chart failed to meet effective care standards because her β-blocker wasn't ordered postoperatively until after her ICU admission (on postoperative day two). Although atrial fibrillation, in and of itself, is usually not life threatening, it can be a medical emergency. Failing to administer β-blocker therapy in the perioperative period clearly put this woman at risk for such severe and life-threatening complications as stroke or heart failure. Furthermore, her hospital stay was prolonged for at least one more day, increasing the cost of her health care, and she was unnecessarily subjected to additional physical and emotional stress. She hadn't received the best care, and she'd also been put at elevated risk for adverse events.

Addressing process gaps and obstacles. To prevent similar events from occurring in the future, all accountable clinical partners collaborated to identify process gaps and obstacles to care. We first identified the various ways patients are admitted for surgery, and we found three: directly from the ED, scheduled from the perioperative clinic, and transferred from the inpatient nursing units. Clinical leaders in each of the five areas (pharmacy, preoperative, perioperative, anesthesia, and medical–surgical nursing staff) were consulted to determine current system processes and ways to improve the system. Based on our findings, the following changes were agreed to and implemented.

  • The pharmacy technicians responsible for medication reconciliation in the ED agreed to take responsibility for noting in the chart the date and time of the last β-blocker dose. The surgeon or admitting physician then reviews and signs the medication reconciliation list.
  • The preoperative clinic staff developed a β-blocker protocol, which includes highlighting β-blocker use in yellow on the chart to make it immediately visible to the perioperative nurse.
  • The perioperative nursing staff now notifies the surgeon and the anesthesiologist if a β-blocker was included on the preoperative clinic medication list but wasn't ordered prior to discharge from the PACU.
  • The anesthesia department agreed to make discharge rounds in the PACU and write orders for β-blocker administration when appropriate.
  • The nurses provided just-in-time staff training (a one-time in-service for all units with adult patients taking β-blockers) on the importance of continuing β-blocker therapy during the perioperative period. Part of the training emphasized the need to contact the physician if β-blocker therapy was stopped for more than 24 hours or if a dose was missed. If the patient was ordered to be kept NPO (nil per os, or nothing by mouth), the β-blocker would be administered intravenously.
  • The policy concerning β-blocker therapy was rewritten and guidelines for intravenous β-blocker administration were clarified. Staff training on the new policy was required for all affected patient care staff.
  • The education department staff included β-blocker training in their annual training and competency fair.

After these system changes were implemented, the effective care measure improved dramatically, and most important, no further evidence of patient risk or harm was found in the next two quarters.

Box. Consumer Access to Performance Information

The central role of nurses. Effective care measures play an integral part in ensuring patient safety. Linking effective care measures to accreditation and financial reimbursement has given organizations an added incentive to implement the best care for patients. It has taken many years to implement national quality measures, and these measures will undoubtedly continue to evolve and rely upon an increasing number of nursing-sensitive indicators. Nurses will continue to have varied and vital roles in ensuring effective patient care, implementing effective care measures, and improving health care systems. As patient advocates, we are best positioned to tell our patient stories and to ensure that patient safety remains at the center of our health care systems.

REFERENCES

1. Kohn LT, et al., editors. To err is human: building a safer health system. Washington, DC: National Academy Press; 2000.
2. The Joint Commission. A comprehensive review of development and testing for national implementation of hospital core measures. Oakbrook Terrace, IL; 2010 Nov 3. http://www.jointcommission.org/assets/1/18/A_Comprehensive_Review_of_Development_for_Core_Measures.pdf.
3. The Joint Commission. Facts about ORYX® for hospitals (National Hospital Quality Measures). Oakbrook Terrace, IL; 2011. http://www.jointcommission.org/facts_about_oryx_for_hospitals.
4. The Joint Commission. Ongoing activities [introduction and initial use of standardized core measures]. 2010. http://www.jointcommission.org/ongoing_activities_.
5. The Joint Commission. Evolution of performance measurement at the Joint Commission. 2010. http://www.jointcommission.org/Evolution_of_Performance_Measurement_at_the_Joint_Commission_.
6. Centers for Medicare and Medicaid Services. Hospital quality initiatives. Process of care measures. U.S. Department of Health and Human Services. 2011. https://www.cms.gov/HospitalQualityInits/18_HospitalProcessOfCareMeasures.asp.
7. Chassin MR, et al. Accountability measures—using measurement to promote quality improvement. N Engl J Med 2010;363(7):683-8.
8. The Joint Commission. 2011 communication guidelines for ORYX® Performance Measurement Systems. Oakbrook Terrace, IL; 2010. http://www.jointcommission.org/assets/1/6/2010_Communication_Guidlines_%20PMS_10_09.pdf.
9. Egol A, et al. Pay for performance in critical care: an executive summary of the position paper by the Society of Critical Care Medicine. Crit Care Med 2009;37(9):2625-31.
10. The Joint Commission. Improving America's hospitals: the Joint Commission's annual report on quality and safety—2009. Oakbrook Terrace, IL; 2010. http://www.jointcommission.org/Improving_Americas_Hospitals_The_Joint_Commissions_Annual_Report_on_Quality_and_Safety_-_2009.
11. The Joint Commission. Improving America's hospitals: the Joint Commission's annual report on quality and safety—2010. Oakbrook Terrace, IL; 2010. http://www.jointcommission.org/improving_america%E2%80%99s_hospitals_-_the_joint_commission%E2%80%99s_annual_report_on_quality_and_safety_-_2010.
12. Williams SC, et al. Quality of care in U.S. hospitals as reflected by standardized measures, 2002-2004. N Engl J Med 2005;353(3):255-64.
13. Committee on Quality Health Care in America, Institute of Medicine. Crossing the quality chasm: a new health system for the 21st century. Washington, DC: National Academy Press; 2001.
14. QualityNet. Specifications manual for national hospital inpatient quality measures. Centers for Medicare and Medicaid Services; the Joint Commission; 2011. http://www.qualitynet.org.
15. The Joint Commission. Perinatal care core measure set. 2010. http://www.jointcommission.org/perinatal_care.
16. The Joint Commission. Sudden cardiac arrest initiatives. 2011. http://www.jointcommission.org/sudden_cardiac_arrest_initiatives.
17. The Joint Commission. Joint Commission FAQ page: hospital-accountability measures. 2011. http://www.jointcommission.org/about/JointCommissionFaqs.aspx?CategoryId=31.
18. Pardini-Kiely K, et al. Improving and sustaining core measure performance through effective accountability of clinical microsystems in an academic medical center. Jt Comm J Qual Patient Saf 2010;36(9):387-98.
19. Hilton N. Innovation profile: implementing clinical nurse leader role improves core measures performance, patient and physician satisfaction, and reduces nurse turnover. Rockville, MD: Agency for Healthcare Research and Quality; 2008. http://www.innovations.ahrq.gov/content.aspx?id=2566.
20. Kishel JJ, et al. Implementing an electronically based, nurse-driven pneumococcal vaccination protocol for inpatients. Am J Health Syst Pharm 2009;66(14):1304-8.
21. Stanley JM, et al. The clinical nurse leader: a catalyst for improving quality and patient safety. J Nurs Manag 2008;16(5):614-22.
22. Gabuat J, et al. Implementing the clinical nurse leader role in a for-profit environment: a case study. J Nurs Adm 2008;38(6):302-7.
23. Donaldson N, et al. Leveraging nurse-related dashboard benchmarks to expedite performance improvement and document excellence. J Nurs Adm 2005;35(4):163-72.
24. Carey RG, Lloyd RC. Measuring quality improvement in healthcare: a guide to statistical process control applications. Milwaukee, WI: American Society for Quality Press; 2001.
25. Benneyan JC. Design, use, and performance of statistical control charts for clinical process improvement. Boston: Northeastern University; 2001. http://www1.coe.neu.edu/~benneyan/papers/intro_spc.pdf.
26. Fleisher LA, et al. ACC/AHA 2006 guideline update on perioperative cardiovascular evaluation for noncardiac surgery: focused update on perioperative β-blocker therapy: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to Update the 2002 Guidelines on Perioperative Cardiovascular Evaluation for Noncardiac Surgery). J Am Coll Cardiol 2006;47(11):2343-55.