Szent-Gyorgyi, Lara E. MPA; Coblyn, Jonathan MD; Turchin, Alexander MD, MS; Loscalzo, Joseph MD, PhD; Kachalia, Allen MD, JD
Quality efforts in health care today demand increasingly comprehensive and rigorous approaches to measurement and improvement.1 The drive to improve quality is shared by those within the profession, as well as patients and payers. In addition, the U.S. government seeks to spur further improvements in quality as part of its health care reform efforts.2,3 The focus of improvement efforts often resides in three aspects of care delivery: organizational infrastructure, process of care, and clinical outcomes.4
Despite the push for higher-quality care, many hurdles to effective quality efforts remain.5–9 Too many external mandates may dilute the importance of each area selected for improvement, and yet, paradoxically, only address a limited patient population or a small part of a patient's care. Physicians may be skeptical of the clinical value of externally selected measures, consistent with a belief that public reporting of poorly designed quality measures can have a negative impact on care.10–13 Furthermore, physicians and other health care professionals often regard collection and reporting of quality measures to be a burden on time and financial resources.
With these challenges in mind, the Department of Medicine (DOM) at Brigham and Women's Hospital (BWH) sought to build a grassroots quality program that would fit within an academic medical center's (AMC's) tripartite mission of excellence in clinical care, research, and education. We were mindful of AMC providers' multiple responsibilities, but we were also aware of their common goal: to deliver the highest-quality care. In developing the program within the existing departmental infrastructure, we were guided by four main principles: (1) improvement should be led by frontline clinicians selecting measures most important to their patients, (2) performance measurement should be automated and accurate, (3) appropriate resources are necessary to support quality efforts, and (4) interventions should be system-, and not provider-, focused. We also informed providers that financial incentives would not be tied to these improvement efforts unless they opted to do so.
In this article, we describe the structure of the BWH DOM quality program, our progress to date, and next steps. We share the challenges we encountered during initial implementation, how we have addressed them, and the insights we have gained. We hope that our program can offer others a departmental model that builds on clinicians' inherent interest in the quality of patient care by providing data collection and intervention assistance through a central team without the use of any external mandates or financial incentives.
Building a Quality Program
BWH is a teaching affiliate of Harvard Medical School and one of the two founding AMCs of Partners HealthCare System in Boston, Massachusetts. The DOM, which has about 1,200 physician faculty members, is the largest clinical department at BWH.
DOM faculty members are typically involved in all facets of the AMC mission of excellence in clinical care, research, and education. Accordingly, we developed the quality program to allow for their engagement in each aspect. Although the program's primary focus is on improving the quality of care, its structure facilitates study and evaluation of reasons for gaps in quality as well as the efficacy of interventions. The program's structure also lends itself to, and encourages, hands-on trainee education and involvement in quality projects.
Although specialty-based pay-for-performance (P4P) contracting is not an intrinsic element of the program, it is another factor in the AMC's environment. The quality program's structure is flexible so that each division's clinical leadership can determine if and when it is appropriate to use the program's resources to support P4P efforts.
Program development and design
We started the DOM quality program, which is funded solely by the department, in 2007. The quality program is structured to provide centralized coordination, monitoring, support, and reporting services for the department's clinical divisions. The central quality team staff, which initially included a part-time medical director (20% effort) and a full-time program manager, leads the design, evaluation, and analysis of the metrics and facilitates process improvements within each division. The DOM's vice chair for integrated clinical services serves as the program's executive sponsor.
Since its inception, the DOM quality program has been anchored by its four principles:
1. The program is driven by clinicians focused on patient care. To maximize clinical value (and to avoid potential reservations related to “mandated” metrics), each division's leadership selects metrics they believe are important to the care of their patients. Ideally, the metrics are evidence based and/or tied to national guidelines. The initially selected metrics are provided in Table 1.
2. Measurement is automated and accurate. To allow for quick and repeated measurement in support of rapid improvement cycles, all metric data elements are captured electronically. When necessary, work flow and/or system changes are made to enable electronic data capture.
3. Appropriate resources are provided. The central quality team was created to provide the divisions with resources and expertise. This enables the central team to build expertise in metric development and data collection and reporting while division leadership concentrates on clinical issues.
4. There is a system focus. Because the quality program's focus is on care provided to patients, the metrics are designed to measure a practice's, rather than an individual physician's, performance. There are no physician-level incentives or penalties. This system focus is consistent with our philosophy that providers are inherently interested in the quality of the care they provide and that better support systems will optimize improvements.
The central quality team works collaboratively with division representatives throughout the measurement and improvement process with the goal of minimizing the investment of clinical providers' time in administrative activities. The process contains several design elements that take metrics from conception to execution via a four-part implementation plan. Figure 1 illustrates the phases of metric development and deployment.
Electronic data capture
The quality program's requirement that all data abstraction be automated eliminates the need for chart review, with the goals of reducing the time and resources needed for data collection, improving the consistency of collected data, and increasing the ability to report data regularly. We anticipate that the resulting "push button" access to reports will have multiple downstream effects. Easy access to the data without chart reviews should increase division confidence in and use of data, as well as lead to deeper involvement and greater ownership of the metric.
The reliance on electronic data capture necessitates the development of accurate, precise, and reliable specifications for each metric. Detailed documentation of the metrics helps ensure that the data are extracted to the expectations of the division, are consistent across reporting periods, and can be adopted by other divisions or institutions.
Using the metric specification, the central quality team generates preliminary data and then vets those data with clinical leadership for technical (i.e., that the data were abstracted as specified) and clinical accuracy. The metric specifications are then adjusted per the division's suggestions; ultimate approval rests with the division. Once the final specifications are approved, the central team obtains baseline performance data that the divisional leadership uses to set performance goals. (An example of a metric specification is provided in Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A42.)
To produce percentage-based performance data, the central quality team needed to generate a numerator and denominator for each metric. The numerator represents the number of patients receiving or achieving a desired treatment or outcome, and the denominator represents the population targeted for that treatment or goal. Most of the numerator data—whether for a clinical process (e.g., vaccine administration) or an outcome (e.g., patient blood pressure)—are readily available within the BWH electronic health record (EHR) data repository. For some metrics for which numerator data were not reliably recorded, the central team has worked with divisional leadership to gain commitment to adjust work flows to record data in a standardized and extractable location in the EHR.
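The numerator/denominator logic described above can be sketched in a few lines of code. This is an illustrative example only, under assumed data structures (sets of patient identifiers); the function and field names are hypothetical and do not reflect the actual BWH specifications.

```python
def metric_rate(eligible_patients, received_treatment):
    """Return a percentage-based performance rate for a metric.

    eligible_patients: set of patient IDs in the target population (denominator)
    received_treatment: set of patient IDs documented as receiving the desired
        treatment or achieving the desired outcome (numerator)
    """
    # Count only treated patients who are also in the eligible population,
    # so stray records cannot inflate the numerator.
    numerator = len(received_treatment & eligible_patients)
    denominator = len(eligible_patients)
    if denominator == 0:
        return None  # no eligible patients in this reporting period
    return 100.0 * numerator / denominator

# Example: 3 of 4 eligible patients received the treatment -> 75.0%
rate = metric_rate({"p1", "p2", "p3", "p4"}, {"p1", "p2", "p4", "p9"})
print(rate)  # 75.0
```

Note that the rate is computed over the intersection of the two sets, reflecting the requirement that numerator patients must belong to the defined denominator population.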
Identifying the population for the denominator (i.e., patients eligible for the metric) proved more challenging because not all qualifying clinical diagnoses were listed in an electronically extractable location within the EHR. To address this problem, we developed a method to combine billing data with available clinical data. With assurance from clinical leadership that physicians bill very accurately for their specialty-specific diseases, the central team identified the standard CPT and ICD-9 billing codes that could be used to identify the correct patient populations and then confirmed with division representatives that the codes were correct. To maintain accuracy, use of the billing data is restricted to the billing coding within the relevant division (e.g., to identify the correct population of patients with chronic kidney disease, we use only the renal clinic's billing).
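The division-restricted use of billing data described above can be illustrated with a short sketch. The record layout, division label, and diagnosis codes here are hypothetical examples (ICD-9 585.x codes are commonly used for chronic kidney disease stages), not the actual BWH metric specification.

```python
# Example chronic kidney disease (CKD) stage codes; illustrative only.
CKD_ICD9_CODES = {"585.3", "585.4", "585.5"}

def ckd_denominator(billing_records):
    """Return the set of patient IDs eligible for a CKD metric.

    billing_records: iterable of dicts with keys
        'patient_id', 'division', and 'icd9'.
    Only renal-division billing is used, per the accuracy restriction
    that each division's codes identify its own patient population.
    """
    return {
        r["patient_id"]
        for r in billing_records
        if r["division"] == "renal" and r["icd9"] in CKD_ICD9_CODES
    }

records = [
    {"patient_id": "p1", "division": "renal", "icd9": "585.4"},
    {"patient_id": "p2", "division": "cardiology", "icd9": "585.4"},  # wrong division
    {"patient_id": "p3", "division": "renal", "icd9": "401.9"},       # non-CKD code
]
print(sorted(ckd_denominator(records)))  # ['p1']
```

Restricting the query to the relevant division's billing, as in the filter above, is what preserves the accuracy assurance that clinicians bill reliably for their own specialty-specific diseases.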
Collaboration with Information Systems
After identifying the required data elements, the quality team worked with Information Systems (IS) to ensure the feasibility of electronic data capture. A fundamental first step was to facilitate an agreement between the Brigham and Women's Physicians Organization (BWPO), which houses the physician billing data, and the Quality Data Management group (QDM), which oversees the Partners HealthCare data warehouse that houses the EHR clinical data, to combine billing and clinical data for our improvement purposes. The BWPO now supplies necessary billing data to QDM, which combines the billing data with clinical data to generate the requested data reports. The transfer of billing data from the BWPO to QDM has been automated to support routine measurement.
After divisions finalize their metric specifications, they develop and implement quality improvement strategies. Improvement opportunities exist on two levels, documentation and clinical performance, each of which is critical to the delivery of high-quality care. Improved documentation can affect performance rates by simply demonstrating (and allowing for electronic measurement) that the necessary care was indeed provided. Clinical performance improvement efforts rely on administrative and clinical staff at all levels. The central quality team meets regularly with each division to review data and to assist with the development and monitoring of improvement strategies.
Challenges and Lessons
In the DOM quality program's first two years of operation (2007–2009), we were able to address many of the challenges that emerged; most were related to electronic data capture. Table 2 highlights the program's milestones and describes strategies taken to overcome obstacles and lessons learned. Here, we describe our findings with regard to our four core principles.
Driven by clinical leadership
Engaging physicians is critical for the success of quality efforts. Developing measures that physicians value may encourage their engagement.14–16 We addressed this potential barrier by building a program that is led by local clinical leadership. Divisional leaders identified metrics they deemed clinically important, of value to patients, measurable, amenable to improvement, and (ideally) tied to evidence-based standards. Once possible metrics were identified, metric selection was generally finalized after discussion at a divisional faculty meeting.
Each participating clinical division quickly selected a metric and created an initial definition of the specifications. Given the flexibility to define their own metrics according to their patients' needs, most divisions selected measures related to prevention and wellness that had the potential to control costs and minimize inpatient hospital stays; selected metrics often were measures for which there were national standards (Table 1). Providers also have been very engaged with metric refinement and development of improvement strategies. With easy measurement now at their fingertips, several divisions are studying the effects of their intervention efforts.
Ease of measurement
Historically, a common source of quality data has been insurance claims.17 Much progress has been made in the use of claims data, but they are frequently limited in scope and application. The advent of EHRs has created expectations of greater data availability for quality efforts.18 Although research on extending the use of EHR data to specific patient populations is still in the early stages, adjusting either the data input processes or the clinic work flow to allow for more comprehensive data capture may have a positive impact on the ability to use an EHR for quality improvement.8,19
With assistance from IS, QDM, and the BWPO, our central quality team has successfully identified ways to capture data for almost all of the identified metrics, but this effort has not been without its challenges. Achieving quick and easy measurement has required flexibility in data capture. Furthermore, the data required by the various metrics span the entirety of the hospital's multileveled IS structure. The quality team worked through many obstacles, including QDM's limited access to certain data elements and difficulties in the systematic recording of data. For some metrics, these obstacles affected timely availability of initial data.
Our reliance on electronic data capture is challenged by how data are entered and extracted. Data entry challenges arise when no logical field exists in the EHR for a provider to record and store relevant information (e.g., asthma control test scores, externally collected lab results) or when an electronic field is available for use but has not been routinely used (e.g., vaccinations administered at another facility, blood pressure measurement). Solving the data entry problem requires collaboration with IS to create new fields or methods for data entry. In cases in which an electronic field exists but is not used, clinical work flow redesign is required. Data extraction challenges occur when data are not available for reporting because they are entered into the record in a nonstructured field or are not available to QDM within the current IS infrastructure. We have requested IS enhancements to address the data extraction problem. Through this experience, we have witnessed the critical importance of specifying how data will be captured and accessed for quality reporting during (and not after) the design of any new clinical process or electronic data system, and now advocate accordingly.
Appropriate resources

Despite increased recognition of the need for quality improvement initiatives, such efforts can be underresourced, particularly at the staff and data management levels. Some institutions regard appointing physician leadership as sufficient support for a quality program, but that approach can result in programs without appropriate structures or the dedicated resources necessary to develop, implement, and evaluate interventions.20
Our program's structure allows the centralized program manager to bring the expertise and resources available throughout the DOM to the division level. One marker of success has been divisions reporting that they have selected metrics they had previously considered administratively too challenging to pursue. To date, one of the program manager's main functions has been to obtain the necessary data, capitalizing on the department-level focus on quality to effect results. The effort invested in obtaining required data for one division can pay off in economies of scale when that division's metric or improvement solution can also be applied to other divisions.
Time and patience have also been essential to the program's growth. The central team's initial investment over the first two years of the program has been critical to building the foundation. That time has allowed the divisions to establish meaningful metrics with actionable data and has given the program manager the freedom to pursue all avenues to overcome obstacles, especially with regard to data acquisition.
System focus

Our system focus encourages divisions to develop quality improvement efforts within the existing framework of their clinical resources. We encourage divisions to primarily target work flows and processes to minimize large expenditures on additional staffing. We have found that this approach has created initiatives that do not place sole responsibility for improvement on any single provider. For example, in divisions seeking to improve vaccination rates, the clinics have set up a process using standing orders so that nurses and medical assistants can now automatically provide vaccinations to patients who come for a visit.
Even though data can be generated at the individual provider level, the DOM quality program focuses on solutions at the clinic level. In addition, no individual incentives or penalties are tied to the metrics. We recognize that P4P has emerged throughout the health care industry as a popular path to achieving quality improvement and cost control.21–23 Although some evidence has shown improved outcomes, P4P may not be without its pitfalls.24–27 Our quality program garners the desired attention from our clinicians without the need to provide additional financial incentives, consistent with our belief that operational support can go a long way toward improving quality.
Educating the next generation of providers to lead future quality improvement efforts is crucial.28–32 A natural extension of our program has been to invite residents and fellows interested in quality to participate in defined projects. They are able to take a clinical or administrative leadership role in helping to execute many of the program's activities and work closely with the divisions throughout the cycle of a metric. One resident took charge of data analysis and distribution as well as presentations to the division faculty. He also successfully applied for an internal grant to support the quality efforts. These collaborations have been very successful, and we look forward to working with trainees on a regular basis and providing them with practical experience in quality improvement.
Next Steps

We continue to pursue IS enhancements. Although we have created some temporary solutions to support our quality efforts, sustainable long-term solutions to improve work flow and data collection require IS adjustments. Unsurprisingly, IS-related challenges will likely persist as divisions continue to select new metrics and new, previously unanticipated measurement needs arise.
In the longer term, we anticipate tracking the results of implemented improvement strategies and sharing them across the DOM. Once performance goals for individual metrics have been met, we will work with divisions to identify iterations of current or new metrics for the next cycle of improvement.
We also look forward to taking proven solutions and implementing them across the DOM where applicable (e.g., blood pressure or vaccination goals). In addition, one division's successful approach may affect another division's selection of future metrics, providing further opportunity to demonstrate the effectiveness of an improvement strategy.
Our ultimate goal is to expand the program across the Partners system, to the DOM's counterparts at other Partners hospitals. This collaboration would strengthen the program as a whole, improve patient care, and provide more data for setting benchmarks in selected metrics.
Producing meaningful, actionable data remains an essential goal as AMCs prepare for a future in which greater emphasis is placed on quality measurement and improvement. The BWH DOM quality program has been designed to put quality management principles into action, specifically by selecting metrics that are clinically relevant to providers, facilitating electronic data collection, involving all clinic staff in improvement activities while providers focus on patient care, and setting performance goals without financial incentives. Now that we have established a program that enjoys provider engagement and employs a measurement infrastructure, the divisions will continue to design and implement improvement efforts aimed at improving the system in which our physicians practice.
The authors thank David McCready and Christine Imperato for their valuable support of the DOM Quality Program.
References

1. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. Washington, DC: National Academy Press; 2001.

3. Evans DB, Tan-Torres ET, Lauer J, Frenk J, Murray CJL. Measuring quality: From the system to the provider. Int J Qual Health Care. 2001;13:439–446.

4. Glickman SW, Baggett KA, Krubert CG, Peterson ED, Schulman KA. Promoting quality: The health-care organization from a management perspective. Int J Qual Health Care. 2007;19:341–348.

5. McNeil BJ. Hidden barriers to improvement in the quality of care. N Engl J Med. 2001;345:1612–1620.

6. Chassin MR, Galvin RW; National Roundtable on Health Care Quality. The urgent need to improve health care quality: Institute of Medicine National Roundtable on Health Care Quality. JAMA. 1998;280:1000–1005.

7. Amalberti R, Auroy Y, Berwick D, Barach P. Five system barriers to achieving ultrasafe health care. Ann Intern Med. 2005;142:756–764.

9. Auerbach AD, Landefeld CS, Shojania KG. The tension between needing to improve care and knowing how to do it. N Engl J Med. 2007;357:608–613.

10. Blumenthal D, Epstein AM. Quality of health care: Part six of six. N Engl J Med. 1996;335:1328–1331.

11. Casalino LP, Alexander GC, Jin L, Konetzka RT. General internists' views on pay-for-performance and public reporting of quality scores: A national survey. Health Aff (Millwood). 2007;26:492–499.

12. Casalino LP, Elster A, Eisenberg A, Lewis E, Montgomery J, Ramos D. Will pay-for-performance and quality reporting affect health care disparities? Health Aff (Millwood). 2007;26:405–414.

13. Snyder L, Neubauer RL. Pay-for-performance principles that promote patient-centered care: An ethics manifesto. Ann Intern Med. 2007;147:792–794.

14. Caverzagie KJ, Bernabeo EC, Reddy SG, Holmboe ES. The role of physician engagement on the impact of the hospital-based practice improvement module (PIM). J Hosp Med. 2009;4:466–470.

15. Gosfield AG, Reinertsen JL. Finding common cause in quality: Confronting the physician engagement challenge. Physician Exec. March–April 2008;34:26–28, 30–31.

16. Audet A-M, Doty MM, Shamasdin J, Schoenbaum SC. Measure, learn, and improve: Physicians' involvement in quality improvement. Health Aff (Millwood). 2005;24:843–853.

18. Blumenthal D, Glaser JP. Information technology comes to medicine. N Engl J Med. 2007;356:2527–2534.

19. Miller RH, Sim I. Physicians' use of electronic medical records: Barriers and solutions. Health Aff (Millwood). 2004;23:116–126.

21. Rosenthal MB, Frank RG, Li Z, Epstein AM. Early experience with pay-for-performance. JAMA. 2005;294:1788–1793.

22. Campbell SM, Reeves D, Kontopantelis E, Sibbald B, Roland M. Effects of pay for performance on the quality of primary care in England. N Engl J Med. 2009;361:368–378.

23. Epstein AM, Lee TH, Hamel MB. Paying physicians for high-quality care. N Engl J Med. 2004;350:406–410.

24. Petersen LA, Woodard LD, Urech T, Daw C, Sookanan S. Does pay-for-performance improve the quality of health care? Ann Intern Med. 2006;145:265–272.

25. Glickman SW, Ou F-S, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297:2373–2380.

26. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486–496.

27. Rosenthal MB. What works in market-oriented health policy? N Engl J Med. 2009;360:2157–2160.
References Cited in Tables Only
34. Krumholz HM, Anderson JL, Bachelder BL, et al. ACC/AHA 2008 performance measures for adults with ST-elevation and non-ST-elevation myocardial infarction: A report of the American College of Cardiology/American Heart Association Task Force on Performance Measures. Circulation. 2008;118:2596–2648.

36. Ghany MG, Strader DB, Thomas DL, Seeff LB; American Association for the Study of Liver Diseases. Diagnosis, management, and treatment of hepatitis C: An update. Hepatology. 2009;49:1335–1374.

38. Geerts WH, Bergqvist D, Pineo GF, et al. Prevention of venous thromboembolism. Chest. 2008;133:381S–453S.

39. Advisory Committee on Immunization Practices. Recommended adult immunization schedule: United States, 2010. Ann Intern Med. 2010;152:36–39.

40. Smith RA, Saslow D, Sawyer KA, et al. American Cancer Society guidelines for breast cancer screening: Update 2003. CA Cancer J Clin. 2003;53:141–169.

42. US Preventive Services Task Force. Screening for breast cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2009;151:716–726.

44. Saag K, Teng G, Patkar N, et al. American College of Rheumatology 2008 recommendations for the use of nonbiologic and biologic disease-modifying antirheumatic drugs in rheumatoid arthritis. Arthritis Rheum. 2008;59:762–768.