Teaching and Improving Quality of Care in a Primary Care Internal Medicine Residency Clinic

Holmboe, Eric S., MD; Prince, Leslie, MPH; Green, Michael, MD, MSc

Research Report

Purpose Learning and applying quality of care principles are essential to practice-based learning and improvement. The authors investigated the feasibility and effects of a self-directed curriculum in quality of care for residents.

Method In 2001–02, 13 second-year residents at two community-based outpatient clinics in the Yale University primary care internal medicine residency program were asked to participate in a trial of a quality improvement curriculum (intervention group). Thirteen third-year residents in the same residency program served as the comparison group. The curriculum consisted of readings in quality of care, weekly self-reflection with a faculty member, completion of a commitment-to-change survey, and medical record audits. Study outcome measures were patient-level quality-of-care measures for diabetes, residents' satisfaction with the curriculum, and self-reported behavioral changes.

Results In the follow-up period, patients of the intervention group were significantly more likely to have received a monofilament foot examination and a baseline electrocardiogram than were patients of the comparison group. Comparing the change between baseline and follow-up, patients of the second-year residents showed significantly more improvement in hemoglobin A1c and LDL cholesterol levels and in Pneumovax administration than did patients of the comparison group. All residents in the intervention group were highly satisfied with the curriculum. Thirty-five of 54 residents' personal commitments to change were either partially or fully implemented six months after the curriculum.

Conclusions A multifaceted curriculum in quality improvement led to modest improvements in the care of diabetic patients and meaningful changes in self-reported practice behaviors. Future research should include more focus on the microsystems of residency outpatient experiences.

Dr. Holmboe is vice president for evaluation research and director of clinical performance services, American Board of Internal Medicine, Philadelphia, Pennsylvania. At the time of this study, Dr. Holmboe was associate professor of medicine, Department of Medicine, Yale University School of Medicine, New Haven, Connecticut.

Mr. Prince is a second-year medical student, University of the West Indies, Kingston, Jamaica. At the time of this study, Mr. Prince was a student in the Masters of Public Health Program, School of Public Health, University of Connecticut, Farmington, Connecticut.

Dr. Green is associate professor of medicine, Department of Medicine, Yale University School of Medicine, New Haven, Connecticut.

Correspondence should be addressed to Dr. Holmboe, American Board of Internal Medicine, Suite 1700, 510 Walnut Street, Philadelphia, PA 19160; telephone: (215) 446-3609; fax: (215) 446-3636; e-mail: 〈〉.

Recent reports have demonstrated that the quality of health care in the United States is suboptimal.1,2 In response to the clear need to change residency education, the Accreditation Council for Graduate Medical Education (ACGME) developed six general competencies that are required of all residency programs.3 One of these general competencies, practice-based learning and improvement (PBLI), directly addresses the need to teach and evaluate residents' ability to apply quality improvement in their medical practice. Similarly, the Institute of Medicine (IOM) recently listed the application of quality improvement as one of its five core competencies for all health care providers.4

To meet this challenge, new approaches to teaching and evaluating residents' quality of care are needed. Ogrinc and colleagues5 recently proposed criteria for benchmarks of progression in PBLI that included collecting and reacting to data from the physician's own practice. Other recent studies have shown the value of quality improvement electives and rotations for resident satisfaction and quality improvement knowledge.6,7 However, to our knowledge, few studies have incorporated self-audit of practice and structured reflection or attempted to measure the effects of a curriculum on actual patient care.8–13 Furthermore, skills in audit and reflection are important lifelong professional skills, as acknowledged by the American Board of Medical Specialties' plan for maintenance of certification, which requires an assessment of practice performance.14 In this study, we sought to investigate the effects of a multifaceted, experiential, self-directed curriculum on quality improvement that involved residents' self-audit of and reflection on their practice performance.

Method

Participants and setting

The participants in our study were 13 second-year residents in the Yale University primary care internal medicine residency program. All residents have a half-day continuity clinic at one of two urban hospital-based clinics in Waterbury, Connecticut. Both clinics care predominantly for underserved Medicaid or uninsured patients. The study was approved by the Institutional Review Boards at both hospitals.

Study design

This was a one-year prospective observational trial of a multifaceted educational intervention. Beginning in August of 2001, we assigned one or two second-year residents rotating on a three-month ambulatory rotation to a new four-week quality-of-care rotation. Third-year residents, who did not participate in the curriculum, served as the comparison group. We matched one randomly selected third-year resident with each second-year resident, based on each resident's continuity clinic location and the timing of his or her ambulatory block. At the end of each three-month ambulatory block, one of us (ESH) gave a one-hour didactic conference on quality to the second- and third-year residents that presented combined aggregate data on diabetes care for both clinics.

The curriculum

The curriculum had several components (see Table 1). First, we gave each resident a syllabus that included selected chapters of the IOM reports To Err is Human and Crossing the Quality Chasm, selected chapters from the Audit Handbook (a brief guide on how to conduct clinical care audits), clinical care guidelines for diabetes from the American Diabetes Association, and optional readings on specific quality improvement methods.1,15–24

Table 1

In the second component of the curriculum, the second-year residents audited the medical records of their own diabetic patients, as well as the records of their peers' diabetic patients in the residency outpatient clinic. They used a modified version of a chart abstraction tool developed by Qualidigm, a private, nonprofit quality improvement organization.25 This instrument had been extensively tested for reliability in practicing physicians' offices in Connecticut. The audit tool collected patients' basic demographic information, comorbidities, medications, number of visits, and a number of process metrics for diabetes care. Over three half-days, each resident was encouraged to abstract ten medical records, going back one year before the resident's first day on the quality improvement rotation to define the baseline review period.

The third component of the curriculum was weekly half-hour “academic detailing” meetings between the residents and a designated faculty member (ESH).26 At this meeting, ESH reviewed key points from the readings and facilitated self-reflection on residents’ medical record audit. Residents were also encouraged to develop solutions to identified deficiencies and problems. At the end of the four-week rotation, residents completed a commitment-to-change (CTC) questionnaire in which they identified “up to five concrete, measurable changes that they would employ in their own clinical practice.”27–31 For each commitment, residents used a five-point scale to rate their motivation to make the change and the perceived difficulty in making the change (1 = not difficult, 3 = somewhat difficult, 5 = difficult).

Outcomes assessment

The outcomes for this study were whether the curricular intervention led to improved care for diabetic patients, the feasibility of and residents' satisfaction with the curriculum, and whether residents' quality improvement behaviors changed. The diabetes quality metrics were the proportion of patients receiving a yearly hemoglobin A1c measurement, the hemoglobin A1c level at the last visit in the baseline and follow-up periods, receipt of a Pneumovax ever, performance of at least one baseline electrocardiogram, performance of at least one foot and monofilament examination in each study year, mean blood pressure control in each study year, and yearly cholesterol measurements and LDL levels.
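To make the audit metrics concrete, the quality measures above can be sketched as a simple per-patient checklist from which compliance proportions are computed. This is an illustrative sketch only: the field names and example charts are hypothetical and do not reproduce the Qualidigm abstraction instrument.

```python
# Hypothetical sketch of the diabetes process metrics as a per-patient
# checklist (the actual Qualidigm chart abstraction tool is not shown here).
DIABETES_PROCESS_METRICS = [
    "yearly_hba1c",
    "pneumovax_ever",
    "baseline_ecg",
    "yearly_foot_monofilament_exam",
    "yearly_cholesterol",
]

def compliance(charts, metric):
    """Proportion of audited charts meeting a given process metric."""
    met = sum(1 for chart in charts if chart.get(metric, False))
    return met / len(charts)

# Two illustrative audited charts.
charts = [
    {"yearly_hba1c": True, "pneumovax_ever": False, "baseline_ecg": True},
    {"yearly_hba1c": True, "pneumovax_ever": True, "baseline_ecg": False},
]
print(compliance(charts, "yearly_hba1c"))  # 1.0
```

In this framing, the baseline and follow-up audit periods would simply yield two such compliance proportions per resident per metric.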

The second-year residents in the intervention group received a follow-up questionnaire six months after participating in the curriculum. The follow-up contained their responses to the first CTC questionnaire. For each commitment to change, residents indicated if they had implemented the change fully, partially, or not at all. If they did not fully implement a change, they identified the barriers to implementation.

Analysis

For changes in processes of diabetic care, one of us (LP), a trained investigator, independently abstracted the records of the residents in the comparison group. LP performed audits for the baseline and follow-up periods. The baseline period was the year prior to the start of the second-year residents' quality improvement rotation. The follow-up period was the year after the quality improvement training, starting on day 1 of the rotation. Two of us (ESH and LP) independently abstracted a subsample of charts to ensure reliability. The analysis was performed using the intention-to-treat principle. Patients were included if they made at least one visit during the one-year follow-up period. The analysis was performed in two steps. First, the second-year residents in the intervention group and the third-year residents in the comparison group were compared in the follow-up period using the Wilcoxon rank-sum test for nonparametric data, the Pearson chi-square test for proportional data, and the t test for dimensional data. Second, to evaluate the impact of the intervention on improving quality of care, we calculated the change (delta) in performance between the follow-up and baseline periods for each resident in each group and compared the differences using the nonparametric median test.32 All statistical analyses were conducted using STATA version 8.0 (STATA Corporation, College Station, TX).
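As an illustration of the two-step analysis described above, the sketch below first compares follow-up performance between groups and then compares per-resident deltas with a nonparametric median test. All data, resident IDs, and group sizes are invented for illustration; the original analysis was run in STATA 8.0 on the full audit data, and `scipy.stats.mannwhitneyu` stands in for the Wilcoxon rank-sum test (the two tests are equivalent).

```python
# Illustrative two-step analysis (hypothetical per-resident data).
from scipy.stats import mannwhitneyu, median_test

# Per-resident proportion of patients with a yearly HbA1c measurement,
# as (baseline, follow-up) pairs. Values are invented for illustration.
intervention = {"R1": (0.60, 0.85), "R2": (0.55, 0.80), "R3": (0.70, 0.90)}
comparison = {"R4": (0.65, 0.70), "R5": (0.60, 0.60), "R6": (0.72, 0.75)}

def deltas(group):
    # Change in performance per resident: follow-up minus baseline.
    return [follow - base for base, follow in group.values()]

d_int, d_cmp = deltas(intervention), deltas(comparison)

# Step 1: compare follow-up performance directly.
# mannwhitneyu implements the Mann-Whitney U / Wilcoxon rank-sum test.
follow_int = [f for _, f in intervention.values()]
follow_cmp = [f for _, f in comparison.values()]
_, p_follow = mannwhitneyu(follow_int, follow_cmp, alternative="two-sided")

# Step 2: compare the per-resident deltas with Mood's median test,
# a nonparametric test on group medians.
_, p_delta, grand_median, _ = median_test(d_int, d_cmp)

print(f"follow-up comparison p = {p_follow:.3f}")
print(f"delta comparison p = {p_delta:.3f}")
```

With real data there would be one delta per resident per metric, and the chi-square and t tests mentioned above would apply to the proportional and dimensional measures, respectively.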

To evaluate the curriculum, we asked residents to write in free text whether they found the curriculum valuable, why or why not, and what was most helpful to them personally.33 The content of the CTC questionnaires and the residents’ qualitative evaluation of the curriculum were analyzed using content analysis. Each of us first reviewed the CTC questionnaires independently to generate a list of themes and items. We then met to develop an initial taxonomy. Using the initial taxonomy, two of us (ESH and LP) independently reviewed the CTC questionnaires to categorize the statements and ensure no new theme or items emerged. ESH and MG then finalized the taxonomy and categorized each statement during a final review.34

Results

Respondents and patients

Fourteen second-year residents completed the curriculum, but one of these residents did not see at least one of his or her own diabetic patients during both the baseline and follow-up periods and was therefore dropped from the analysis. The mean ages were 32.2 years for the second-year residents (intervention group) and 34.3 years for the third-year residents (comparison group). Seven second-year residents were women, compared with four third-year residents. Table 2 shows the characteristics of the patients of the two resident groups. The two patient groups were similar in age, gender, and comorbidities but differed in race/ethnicity.

Table 2

Patient quality of care

Table 3 shows the process-of-care metrics and Table 4 shows the outcome metrics for the patients of the intervention and comparison groups in the baseline and follow-up periods. Comparing the two groups at follow-up, patients of the intervention group were significantly more likely to have undergone a yearly foot monofilament examination and to have received their one-time ECG. Although the differences were not statistically significant, patients of the intervention group were also more likely to have received a urine microalbumin test and at least one Pneumovax. In the regression analysis, the change (delta) in several metrics reached statistical significance in favor of the intervention group: change in hemoglobin A1c (p = .001), change in LDL cholesterol (p = .01), and receipt of at least one Pneumovax (p = .001).

Table 3

Table 4

Residents’ satisfaction and commitment to change

All 13 residents found the overall experience and the medical record audits to be valuable. Twelve residents (92%) completed the CTC questionnaire at the end of the rotation. In total, they listed 54 commitments, a median of 4.5 commitments per resident.

We classified each commitment to change into one of three categories: individual or self change, patient-specific change, and systems-specific change. Individual or self changes are those the resident can make on his or her own, without major reliance on the actions of either the patient or other health care personnel. Patient-specific changes require at least some independent action by the patient. Systems-specific changes require the assistance of or reliance on other health care personnel and/or authorization from administrative personnel. List 1 provides examples of each type of change.

The majority of the changes documented by the residents were individual/self changes (see Table 5). Patient-specific and systems-specific changes made up just 5.6% and 22.2%, respectively, of the proposed changes. All residents indicated that they were motivated to make the changes. Fifty-nine percent of the individual/self changes were perceived as not difficult to implement, compared with 50% of the systems-specific changes.

Table 5

In the six-month follow-up to the CTC questionnaire, residents indicated that over half (54%) of all changes were fully implemented, 32% partially implemented, and 15% not implemented. The majority (90%) of the individual changes were either partially or fully implemented. However, only 67% of the systems-based changes were partially or fully implemented. For the 23 changes partially or not implemented, systems issues were listed as the most common barrier to implementing individual/self and systems-based change combined (43%). Only 18% of changes partially or not implemented were attributable to residents’ lack of time to make the change. No resident indicated lack of knowledge or skill as a primary barrier to implementing any change. Approximately a third of the barriers were grouped into other categories. Examples of other barriers reported by the residents include “patient compliance with med[ications] [was] unpredictable” and “sometimes I forgot to check [the patients'] feet.”

Discussion

In this study, we found that a self-directed curriculum in quality improvement led to modest improvements in patient care and meaningful changes in residents’ self-reported behavior. Given the time pressures faculty face to learn and teach PBLI, developing effective and efficient curricula will be essential. In addition to the self-directed aspects of this curriculum, other strengths include actual experience and training with medical record audit and the opportunity for self-reflection.35 This multifaceted curriculum does not require a substantial time commitment by either the resident or the faculty member. For the resident, it is just four half-days over the course of four weeks with a modest amount of time needed for outside reading. The time commitment for a faculty member is just three hours spread over four weeks once the curriculum is in place.

The curriculum satisfies several key components of the Ogrinc framework for teaching PBLI to residents: “measure and describe processes and outcomes of care for their own patients, identify places in their own practice that can be changed, apply improvement to their own panel (of patients), and use balanced measures to show changes have improved care.”5, p. 753–4 This curriculum also has other strengths. To our knowledge, it is one of the first to examine the impact of medical record self-audit. Audit turned out to be the most valued aspect of the curriculum because of the epiphanies residents experienced when reviewing their own charts. Second, the longitudinal nature of the curriculum helps to ensure its sustainability. Several recent works have demonstrated the value of quality improvement electives or curricula in which residents design a quality improvement project.6,7 However, such quality improvement projects may be more difficult to sustain over time as the residents rotate to other experiences. Beyond sustainability, our curriculum creates an ongoing residency-level project with the potential to improve the care of patients over the long term. Perhaps the optimal combination would be our curriculum in the first or second year of residency, followed the next year by the opportunity to create a quality improvement project.

Finally, and perhaps most importantly, as our study suggests, curricular innovations can actually produce modest improvements in residents’ quality of patient care. This is the goal of patient-centered education: curricular interventions that benefit both learner and patient. Although not a randomized trial, ours was a controlled trial in which the comparison group consisted of more senior residents who theoretically should possess more experience and knowledge. The intention-to-treat approach also reduces the potential bias in favor of the intervention group of second-year residents.

Our study does demonstrate the limitations of curriculum alone to improve quality of care in a residency program. It is interesting to note that the areas of largest difference between the second- and third-year residents involved interventions mostly under the control of the physician, such as monofilament examinations, electrocardiograms, and Pneumovax immunization, all procedures performed by the residents themselves. The delta values between the baseline and follow-up periods for the outcome metrics hemoglobin A1c and LDL cholesterol were also better among the second-year residents. Other studies have shown that for chronic illness, disease management programs are more effective approaches to care.1,36 However, disease management requires personnel and a system not present in either of the clinics involved in our study. Caring for patients with chronic disease in a residency clinic setting thus presents substantial challenges for training programs. Residents will need to be educated in disease management approaches, which requires nonphysician resources and time. Many current residency programs may not be able to provide such resources, and those that can may face substantial barriers to incorporating residents into a disease management program. One barrier is the limited amount of time residents spend in a longitudinal outpatient clinic experience; currently, the ACGME's Internal Medicine Residency Review Committee requires only 36 half-day sessions per year, which amounts to less than three full months over the course of an entire residency.37

We should note several other limitations of our study. The unexpectedly high proportion of patients who did not see their assigned primary resident at least once in the baseline and follow-up periods substantially reduced the statistical power of the study. It is also possible that the lack of continuity may have diminished the effects of the intervention by not giving the intervention group of second-year residents a sufficient opportunity to intervene on behalf of their own patients. In this study, we unintentionally highlighted the substantial challenges and limitations of this residency's clinic microsystems.38 We also cannot account for the possible confounding effects of the faculty preceptors. However, the faculty preceptors were not aware that the curriculum was being formally studied and they provided guidance to both the second- and third-year residents during the baseline and follow-up periods. Finally, the lecture on quality of care where one of us presented aggregate patient data may have produced some improvement in the comparison group of third-year residents, but this effect, if present, would actually lessen the magnitude of benefit from the curriculum. A previous study also strongly suggests that such lectures are unlikely to produce any substantial behavioral changes.39

In conclusion, our study demonstrates that a practical self-directed curriculum in quality of care appears to produce modest but meaningful changes in self-reported behaviors and modest but clinically important improvements in patient care. Future work should investigate the feasibility of this program in alternative settings. In addition, researchers should ask how such a curriculum would fit into different outpatient systems of care, how the microsystems of residency outpatient clinics affect both quality of care and learning, and how self-audit and reflection can be combined into interdisciplinary quality improvement projects in the training setting.

References

1 Institute of Medicine. Crossing the Quality Chasm. Washington, DC: National Academy Press; 2001.
2 McGlynn EA, Asch SM, Adams J, Keesey J, Hicks J, DeCristofaro A, Kerr EA. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–45.
3 Accreditation Council for Graduate Medical Education. The Outcomes Project 〈〉. Accessed 25 February 2005.
4 Institute of Medicine. Educating Health Professionals: A Bridge to Quality. Washington, DC: National Academy Press; 2003.
5 Ogrinc G, Headrick LA, Mutha S, Coleman MT, O'Donnell JO, Miles PV. A framework for teaching medical students and residents about practice-based learning and improvement, synthesized from a literature review. Acad Med. 2003;78:748–56.
6 Ogrinc G, Headrick LA, Morrison LJ, Foster T. Teaching and assessing resident competence in practice-based learning and improvement. J Gen Intern Med. 2004;19:496–500.
7 Weingart SN, Tess A, Driver J, Aronson MD, Sands K. Creating a quality improvement elective for medical house officers. J Gen Intern Med. 2004;19:861–7.
8 Ziegelstein RC, Fiebach NH. “The mirror” and “the village”: a new method for teaching practice-based learning and improvement and systems-based practice. Acad Med. 2004;79:83–8.
9 Holmboe ES, Scranton R, Sumption K, Hawkins RE. Effect of medical record audit and feedback on residents’ compliance with preventive health care guidelines. Acad Med. 1998;73:901–3.
10 Sutherland JE, Hoehns JD, O'Donnell B, Wiblin RT. Diabetes management quality improvement in a family practice residency program. J Am Board Fam Pract. 2001;14:243–51.
11 Callahan M, Fein O, Battelman D. A practice profiling system for residents. Acad Med. 2002;77:34–9.
12 Gould BE, Grey MR, Huntington CG, et al. Improving patient care outcomes by teaching quality improvement to medical students in community-based practices. Acad Med. 2002;77:1011–8.
13 Kern DE, Harris WL, Boekeloo BO, Barker LR, Hogeland P. Use of an outpatient medical record audit to achieve educational objectives: changes in residents’ performances over six years. J Gen Intern Med. 1990;5:218–24.
14 American Board of Medical Specialties. Maintenance of Certification 〈〉. Accessed 30 August 2004.
15 Kassirer JP. The quality of care and the quality of measuring it. N Engl J Med. 1993;329:1263–5.
16 O'Brien T, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Audit and feedback versus alternative strategies: effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2000;2:1–15.
17 Crombie IK. The Audit Handbook: Improving Healthcare through Clinical Audit. Chichester, UK: John Wiley and Sons; 1993.
18 Brook RH, McGlynn EA, Cleary PD. Quality of health care. Part 2: measuring quality of care. N Engl J Med. 1996;335:966–70.
19 Soumerai SB, McLaughlin TJ, Gurwitz JH, et al. Effect of local medical opinion leaders on quality of care for acute myocardial infarction. JAMA. 1998;279:1358–63.
20 O'Brien T, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Local opinion leaders: effects on professional practice and health care outcomes. In: The Cochrane Library, Issue 2, 2004. Chichester, UK: John Wiley & Sons, 2004.
21 Davis DA, Taylor-Vaisey A. Translating guidelines into practice. CMAJ. 1997;157:408.
22 Institute of Medicine. To Err is Human. Building a Safer Health System. Washington, DC: National Academy Press; 1999.
23 Pearson SD, Goulart-Fisher D, Lee TH. Critical pathways as a strategy for improving care: problems and potential. Ann Intern Med. 1995;123:941–8.
24 Expert Committee on the Diagnosis and Classification of Diabetes Mellitus. Report of the Expert Committee on the Diagnosis and Classification of Diabetes Mellitus. Diabetes Care. 2003;26(1 suppl):S4–20.
25 Qualidigm 〈〉. Accessed 25 February 2005.
26 Thomson O'Brien MA, Oxman AD, Davis DA, Haynes RB, Freemantle N, Harvey EL. Educational outreach visits: effects on professional practice and health care outcomes (Cochrane Review). In: The Cochrane Library, Issue 2, 2004. Chichester, UK: John Wiley & Sons, 2004.
27 Mazmanian PE, Mazmanian PM. Commitment to change: theoretical foundations, methods, and outcomes. J Cont Educ Health Prof. 1999;19:200–7.
28 Jones DL. Viability of the commitment-for-change evaluation strategy in continuing medical education. Acad Med. 1990;65:S37–8.
29 Pereles L, Gondocz T, Lockyer JM, Parboosingh J, Hogan D. Effectiveness of commitment contracts in facilitating change in continuing medical education intervention. J Contin Educ Health Prof. 1997;17:27–31.
30 Mazmanian PE, Johnson RE, Zhang A, Boothby J, Yeatts EJ. Effects of a signature on rates of change: a randomized controlled trial involving continuing education and the commitment-to-change model. Acad Med. 2001;76:642–6.
31 Mazmanian PE, Daffron SR, Johnson RE, Davis DA, Kangtrowitz MP. Information about barriers to planned change: a randomized controlled trial involving continuing medical education lectures and commitment to change. Acad Med. 1998;73:882–6.
32 Wilcoxon F. Individual comparisons by ranking methods. Biometrics. 1945;1:80–3.
33 Feinstein AR. Clinimetrics. New Haven, CT: Yale University Press; 1987.
34 Crabtree BF, Miller WL, eds. Doing Qualitative Research. Thousand Oaks, CA: Sage Publishing; 1999.
35 Schon DA. The Reflective Practitioner: How Professionals Think in Action. New York: Basic Books; 1983.
36 Wagner EH, Austin BT, Von Korff M. Organizing care for patients with chronic illness. Milbank Q. 1996;74:511–42.
37 Accreditation Council for Graduate Medical Education. Internal medicine program requirements 〈〉. Accessed 2 March 2005.
38 Godfrey MM, Nelson EC, Wasson EH, Mohr JJ, Batalden PB. Microsystems in health care. Part 3: planning patient centered services. Jt Comm J Qual Safety. 2003;29:159–70.
39 Thomson O'Brien MA, Freemantle N, Oxman AD, Wolf F, Davis DA, Herrin J. Continuing education meetings and workshops. In: The Cochrane Library, Issue 2, 2004. Chichester, UK: John Wiley & Sons, 2004.
© 2005 Association of American Medical Colleges