In 1998, the Accreditation Council for Graduate Medical Education (ACGME) developed the Outcomes Project to move residency training accreditation away from process-based assessment and toward measuring educational and patient care outcomes.1 More recently, the ACGME’s Next Accreditation System (NAS)2 has explicitly required training programs to connect resident-physician education to improved patient care outcomes. In addition, the NAS and its associated Clinical Learning Environment Review visit program place the onus on the larger institution to engage and monitor trainees in quality improvement (QI) and patient safety. Core competencies such as systems-based practice and practice-based learning and improvement require residents to incorporate methods of improving care into their daily workflow. Residency programs often accomplish this through QI initiatives in the ambulatory setting.3 Significant variation exists in the roles and responsibilities of residents in these projects, and most published literature in this area shows improvement mainly in process measures of care, with little change in outcome measures such as diabetes or blood pressure control.4,5

Numerous cultural and organizational barriers have been proposed to explain the limited success of these endeavors,6 and several frameworks have been suggested to help training programs overcome them.7,8 Armstrong and colleagues7 state that QI educators should combine didactic and project-based work, link the health system with improvement efforts, assess educational outcomes, and role model QI in the educational process. Headrick and colleagues8 have described exemplary care and learning sites (ECLS) marked by leadership, faculty expertise and mentorship, data management, learner buy-in, and patient engagement.
We describe the evolution of our QI educational program using these frameworks and discuss lessons learned along the way.
At the University of Cincinnati (UC) Medical Center, internal medicine residents rotate through a yearlong Ambulatory Long Block (LB) from the 17th to the 29th month of residency (November postgraduate year 2 through October postgraduate year 3).9 LB residents follow 120 to 150 patients, have office hours three half-days per week, and respond to patient needs daily (phone messages, medication refills, etc.). Otherwise, LB residents rotate on electives and research experiences with minimal overnight call. The residents fill out a feedback survey regarding their learning during LB every four months during the year.
The ambulatory center is located adjacent to the main UC Medical Center and is a safety-net practice with roughly 19,000 patient visits per year. Approximately 32% of the patients have diabetes. The resident-physician practice is separate from the faculty practice. Different faculty members function as mentors for resident clinics. The ambulatory team is divided into six mini-teams with four LB residents and one nurse (RN or LPN) per team. Additional support within the practice includes a nurse practitioner, a social worker, a pharmacotherapy clinic, an anticoagulation clinic, and an on-site pharmacy and lab. The team uses a disease registry through the electronic medical record (EMR) to track quality measures. The entire team, including the residents and other ancillary staff, meets weekly to review and plan care delivery. In 2010, the practice received National Committee for Quality Assurance Level III Patient-Centered Medical Home recognition.10
Evolution of QI Teaching
Prior to the development of the LB, QI education in the UC residency program was largely elective and not tied to actual patient care outcomes. A limited number of residents and faculty participated in national initiatives to develop QI skills, including the Academic Chronic Care Collaborative (ACCC, 2005–2007) and the Robert Wood Johnson Foundation–funded Achieving Competence Today (2003–2008), but their experiences had not yet impacted the residency as a whole.
The creation of the LB provided the opportunity to expand resident education in QI and to tie this training to improved ambulatory care outcomes. Our efforts to do so can be described in several phases (Table 1).
In preparation for the first LB in 2006, a core team of two associate program directors, a QI expert, and two organizational development experts designed an intensely scripted, yearlong curriculum. The curriculum included weekly readings from a core QI textbook11 and key publications with an associated weekly learning session. Most sessions included learning through facilitated discussion and skill development (e.g., self-management support, process mapping).
The curriculum was not tied to patient care outcomes or systems changes in the resident practice. As a result, residents did not perceive the curriculum as relevant to their clinical role. We needed to combine didactic and project-based work, link improvement efforts to the health system, and include allied health team members in our QI programming.7,8 One success of this phase was that we did role model QI in the educational process7; building on this groundwork, we created the subsequent phases of the project.
In 2007–2010, we embedded residents’ QI education within the work of our weekly interprofessional practice team meetings. First through use of a disease registry, PECSYS,12 and then through the EMR, we expanded our initial focus on diabetes with the ACCC to include 22 measures of care in diabetes, hypertension, and prevention. The evidence supporting the measures was reassessed each year in a “defense of the measures” exercise designed to achieve buy-in of team members on the clinical importance of the measures selected. During this exercise, small teams of residents performed a literature-based review of each QI measure, presented the findings to the practice team, and then the team came to consensus on how the measurement would be carried out going forward. Practice-wide and individual-provider-level data were shared twice monthly with residents, staff, and faculty. Provider-level outcomes and their improvement over the LB were included in resident summative evaluations.13 We adopted the Model for Improvement14 as a guiding principle and trained our interprofessional team in this methodology in a daylong retreat at the start of each LB. Following this introduction, QI work happened largely through the weekly team meetings, guided by several faculty members who had developed QI expertise and focused on just-in-time solutions for immediate improvement challenges.
Although several specific system changes occurred because of key partnerships (e.g., same-day mammograms and podiatry referrals), we still did not link the wider health system with our improvement efforts. Resident involvement in applying QI methods was variable. To this point, none of the residents had led a project or developed a deep understanding of QI principles. The focus on data without an organized approach to wider systems change placed responsibility for changing outcomes heavily at the individual rather than the team level, despite our focus on teamwork in day-to-day operations. We measured some educational outcomes in tying resident assessment to patient care outcomes,13 but missed opportunities to study others (e.g., evidence that the defense of the measures exercise developed learner buy-in remained largely anecdotal). We did begin to develop some aspects of ECLSs, including leadership, faculty expertise, and data management, but failed to include patient engagement as a major component. This phase of our work accelerated improvement in process measures, but outcome measures such as diabetes or blood pressure control did not improve to the same degree. We have maintained the “defense of the measures” exercise, as informal feedback obtained from residents, faculty, and allied team members indicated that it added to overall team engagement.
In LB 2010–2011, we created a structured approach to improving specific quality outcomes through small, interprofessional, resident-led teams. We divided the clinic staff and residents into five improvement groups. A faculty member with special interest and expertise in QI met with each group and formally introduced the Model for Improvement14 and QI tools through a series of interactive sessions. These sessions were held twice a month for the first two months of the year. Each resident group then designed and conducted a QI project in the ambulatory practice. The same single faculty member served as the mentor for all groups. Projects were selected on the basis of resident interest from areas of priority identified by the medical director. A resident leader was selected by the members of each group to spearhead each project. All groups shared progress with the whole residency class during the weekly team meetings. This educational framework is summarized in Figure 1.
Here, we share the experience of one successful group. Presentation of these data was approved by the UC institutional review board.
Diabetes QI Group
The group identified its global aim as reducing hyperglycemia-related complications and health costs.
The group identified its SMART (specific, measurable, actionable, relevant, time-bound) aim as reducing the percentage of patients with diabetes who had uncontrolled diabetes mellitus (UDM), defined as an HbA1c greater than 9% at three-month follow-up, by more than 25% within six months.
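The SMART aim sets a relative, not absolute, reduction target. As a hypothetical worked example (the patient counts below are invented for illustration and are not the practice's actual figures), the target threshold implied by the aim can be computed as follows:

```python
# Hypothetical illustration of the SMART aim arithmetic; the baseline count
# is invented for demonstration and is not the practice's actual figure.

def udm_target(baseline_udm_count, relative_reduction=0.25):
    """Maximum UDM count that still meets a 25% relative reduction."""
    return baseline_udm_count * (1 - relative_reduction)

baseline = 60  # hypothetical number of patients with HbA1c > 9% at baseline
print(udm_target(baseline))  # 45.0 -> fewer than 45 UDM patients meets the aim
```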
Identification of key drivers
We expanded the resident-nurse QI mini-team to include administrators, additional nurses, diabetes educators, and pharmacotherapists. The team used a homegrown data registry, developed with data warehouse technology support, to identify patients in the EMR (Centricity 9.2, General Electric Inc.) with UDM. Patient factors contributing to UDM were identified through a survey of resident physicians about the causes of UDM in their patients (Figure 2). Using observational learning and a review of the literature on ambulatory control of diabetes, a key driver diagram was created (Supplemental Digital Figure 1, https://links.lww.com/ACADMED/A226). The use of these tools helped the team generate testable ideas to improve UDM.
Design and execute PDSA cycles
During the project period, the team used plan–do–study–act (PDSA) cycles to trial multiple interventions, including use of checklists for decision support during clinical encounters, a UDM-focused clinic for involved residents, and triggered referrals to diabetes education, pharmacotherapy, and endocrinology for patients with persistently uncontrolled diabetes. Some interventions were abandoned while others were adopted, based on their effects on balancing measures including practice resources and resident/staff work burden. Eventually, a three-step model of “active surveillance with need-based monthly intervention” was devised (Figure 3).
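The triage step of a surveillance model like this can be sketched in a few lines of code. The sketch below is illustrative only: the field names, the 90-day contact window, and the patient records are assumptions for demonstration, while the HbA1c > 9% definition of UDM comes from the group's SMART aim; the actual registry lived in the EMR.

```python
from datetime import date, timedelta

# Hypothetical sketch of an "active surveillance" triage step: flag patients
# with UDM (HbA1c > 9%) who have had no clinic contact within the window.
# Field names, the 90-day window, and the records are illustrative.

UDM_THRESHOLD = 9.0
CONTACT_WINDOW = timedelta(days=90)

def needs_outreach(patients, today):
    """Return patients with UDM and no contact within the surveillance window."""
    return [
        p for p in patients
        if p["hba1c"] > UDM_THRESHOLD
        and today - p["last_contact"] > CONTACT_WINDOW
    ]

registry = [
    {"name": "A", "hba1c": 10.2, "last_contact": date(2014, 1, 5)},
    {"name": "B", "hba1c": 8.1,  "last_contact": date(2014, 1, 5)},
    {"name": "C", "hba1c": 9.5,  "last_contact": date(2014, 4, 20)},
]

flagged = needs_outreach(registry, today=date(2014, 5, 1))
print([p["name"] for p in flagged])  # ['A']
```

A monthly run of such a query would feed the "need-based monthly intervention" step, for example by routing flagged patients to nurse outreach or triggered referrals.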
Implementation of the model
The model was implemented by two mini-teams consisting of six residents and two clinic nurses. The patients followed by these residents are referred to as the intervention group (IG). The outcomes were compared with the control group (CG) made up of patients followed by nine residents belonging to the three other mini-teams. Conventional management of diabetes mellitus was continued in the CG. Data were collected at baseline, three months (first follow-up), and six months (second follow-up).
All quantitative variables were described using appropriate summary statistics (mean, median, standard deviation [SD], and range). Categorical variables were presented using frequencies and proportions. The IG and CG were compared using the unpaired Student t test. All statistical analyses were performed using Statistical Package for the Social Sciences (SPSS) v16 software (IBM SPSS Inc.).
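For readers without SPSS, the pooled-variance unpaired Student t statistic can be sketched directly; the HbA1c values below are invented purely for illustration (the actual analysis was run in SPSS):

```python
from math import sqrt
from statistics import mean, variance

# Sketch of the unpaired (pooled-variance) Student t statistic used to
# compare two independent groups. The sample values are invented for
# illustration; they are not the study's data.

def pooled_t(a, b):
    """t statistic for two independent samples, assuming equal variances."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

ig = [1.0, 2.0, 3.0]  # hypothetical HbA1c reductions, intervention group
cg = [2.0, 3.0, 4.0]  # hypothetical HbA1c reductions, control group
print(round(pooled_t(ig, cg), 3))  # -1.225
```

The p value then comes from the t distribution with na + nb − 2 degrees of freedom, which statistical packages compute automatically.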
The primary outcome was the reduction in the number of patients with UDM in each group from baseline. Other predetermined secondary outcomes were the percentage of patients who converted from UDM to controlled diabetes with HbA1c < 9% (conversion rate) at first and second follow-up, the number of new patients diagnosed with UDM, and the number of patients with UDM who had not been seen or contacted by the clinic for more than three months and six months.
At baseline, there was no difference between the IG and CG (Supplemental Digital Table 1, https://links.lww.com/ACADMED/A226), with both groups having 10 patients with UDM per resident. The total number of patients with UDM decreased by 9% for the IG and 10% for the CG at first follow-up. At second follow-up there was a 38% decrease for the IG and 14% for the CG in patients with UDM (P = .07) from baseline. The conversion rate at the first follow-up was 58.6% for the IG and 27.8% for the CG (P = .002). At the second follow-up the conversion rate was 46% in the IG and 13.6% in the CG (P < .001) (Figure 4). A significant increase in the number of new patients with UDM was seen for the IG (4.6 patients per resident) compared with the CG (1.8 patients per resident) at first follow-up (P = .006). The number of new patients with UDM increased by the same proportion in both groups during the time period between the first and the second follow-up (P = .33).
At the second follow-up, the number of patients with UDM who had not been seen or contacted by the clinic in more than three months decreased by 77.7% for the IG and 35.1% for the CG (P = .2) from baseline. The number of patients with UDM who had not been seen or contacted by the clinic in more than six months decreased by 93.2% for the IG and did not change (0%) for the CG (P = .01).
The model of “active surveillance with need-based monthly intervention” was effective in reducing the rate of UDM in the IG of the resident ambulatory practice. This represented the largest decrease in UDM seen since the advent of the LB. Overall, this model was easy to implement within the small group, distributed the workload among different members of the clinic team by more active involvement of the clinic nurses in diabetes management and more frequent patient contact with pharmacotherapists, and used resources already available in the practice.
In this phase of our effort, we seamlessly combined didactic and project work, developed significant leadership and expertise, and incorporated at least some patient engagement in our efforts. Despite this success, we observed two challenging issues. First, the group was unable to spread its work across the practice before the end of their LB because of limited time, and the work of the project did not transition into the succeeding LB resident class (2011–2012). Resident learners are transient, and large projects such as the one described above require a long arc of time. Although the diabetes IG had an advantage over traditional resident practices given the yearlong structure of the LB, project development took time, and implementation of successfully tested interventions was just being spread to the other groups as the year ended. It took four months to implement the final model after multiple PDSA cycles, followed by six months to see a positive change in HbA1c. Unfortunately, without proper team engagement the successful diabetes IG processes were not sustained across LB classes. Second, residents in the other QI groups did not match the success of the diabetes IG in their respective projects. Not all residents felt that intense QI education was important, even when coupled with actual clinical work. The faculty felt there was enough time in the LB schedule to do QI work, but many residents did not. In addition, although faculty mentorship was present, the groups had difficulty organizing meetings because of conflicting schedules, and there was a mismatch between faculty capacity and resident need. These concerns were voiced by residents in feedback surveys during the LB, as well as in many team meetings.
Experience has taught us that QI projects are best led by interested residents, and although variable involvement by residents is acceptable, all residents need at least a familiarity with QI techniques. Over the subsequent two years, we developed a Quality Improvement Leadership Team (QuILT), modeled on the successful diabetes IG described above and anchored by more frequent meetings for QI coaching. In LBs 2011–2012 and 2012–2013, the QuILT teams focused on chronic pain management in the practice, with an increased focus on patient engagement in the process of QI work and a successful handover of the project from one year to the next. At the start of LB 2013–2014, we asked residents to self-select into one of four QuILT teams. This increased number of projects was possible because of an expansion in the number of faculty with QI expertise, participation of nursing and administrative leadership, and a structure of weekly QuILT team meetings for more direct and focused mentorship. Each QuILT team selects a project with an aim to improve outcome, not just process, measures. Current teams are focusing on four projects: improving chronic pain management, reducing interprovider variation in QI measures, developing structures that promote patient engagement (such as a patient advisory council), and improving the safety of inpatient handoffs. QuILT team members provide updates at weekly team meetings, and all team members participate in project-specific PDSA cycles. Special attention is being paid to sustainability and spread within and across LB classes. For example, a redesigned opioid refill process created by the pain management QuILT from LB 2012–2013 was carefully transitioned, in its spread phase, to the current LB. Interestingly, our efforts have served as a role model for our faculty practice, as well as for the entire primary care network within our health system, and many of our processes and data streams have been adopted by the institution as a whole.
Residency training programs face multiple obstacles in connecting QI education to patient care outcomes. Although we did not have a road map when we started in 2006, our many programmatic innovations have led us to do as Armstrong and colleagues7 have recently suggested—combine didactic and project-based work, link the health system with QI efforts, assess educational outcomes, and role model QI in the educational process. And, although we did not specifically intend to do so, we have developed the five core elements that Headrick and colleagues8 described with ECLSs: leadership, faculty expertise and mentorship, data management, learner buy-in, and patient engagement. The success and then failure of the diabetes IG project served to increase our efforts in subsequent phases to sustain hard-earned improvements across LB classes. We hope our example of operationalizing the improvement frameworks pioneered by earlier scholarship can help others as they attempt to improve care and education simultaneously. Further research and innovation are needed in this area, including optimizing strategies for strengthening resident-driven projects through partnership with nursing, allied health, and longitudinally engaged faculty members.
References
2. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—rationale and benefits. N Engl J Med. 2012;366:1051–1056
3. Chase SM, Miller WL, Shaw E, Looney A, Crabtree BF. Meeting the challenge of practice quality improvement: A study of seven family medicine residency training practices. Acad Med. 2011;86:1583–1589
4. Patow CA, Karpovich K, Riesenberg LA, et al. Residents’ engagement in quality improvement: A systematic review of the literature. Acad Med. 2009;84:1757–1764
5. Diaz VA, Carek PJ, Dickerson LM, Steyer TE. Teaching quality improvement in a primary care residency. Jt Comm J Qual Patient Saf. 2010;36:454–460
6. Fernald DH, Deaner N, O’Neill C, Jortberg BT, deGruy FV 3rd, Dickinson WP. Overcoming early barriers to PCMH practice improvement in family medicine residencies. Fam Med. 2011;43:503–509
7. Armstrong G, Headrick L, Madigosky W, Ogrinc G. Designing education to improve care. Jt Comm J Qual Patient Saf. 2012;38:5–14
8. Headrick LA, Shalaby M, Baum KD, et al. Exemplary care and learning sites: Linking the continual improvement of learning and the continual improvement of care. Acad Med. 2011;86:e6–e7
9. Warm EJ, Schauer DP, Diers T, et al. The ambulatory long-block: An Accreditation Council for Graduate Medical Education (ACGME) educational innovations project (EIP). J Gen Intern Med. 2008;23:921–926
10. National Committee for Quality Assurance. Patient-Centered Medical Home Recognition. http://www.ncqa.org/Programs/Recognition/PatientCenteredMedicalHomePCMH.aspx. Accessed May 20, 2014
11. Nash DB, Goldfarb NI. The Quality Solution: The Stakeholder’s Guide to Improving Health Care. Sudbury, Mass: Jones & Bartlett; 2005
12. Aristos. PECSYS. http://www.aristos.com/pecsys.shtml. Accessed May 20, 2014
13. Warm EJ, Schauer D, Revis B, Boex JR. Multisource feedback in the ambulatory setting. J Grad Med Educ. 2010;2:269–277
14. Langley GL, Nolan KM, Nolan TW, Norman CL, Provost LP. The Improvement Guide: A Practical Approach to Enhancing Organizational Performance. 2nd ed. San Francisco, Calif: Jossey-Bass Publishers; 2009