Kalet, Adina L. MD, MPH; Gillespie, Colleen C. PhD; Schwartz, Mark D. MD; Holmboe, Eric S. MD; Ark, Tavinder K. MSc; Jay, Melanie MD, MS; Paik, Steve MD, EdM; Truncali, Andrea MD, MPH; Hyland Bruno, Julia; Zabar, Sondra R. MD; Gourevitch, Marc N. MD, MPH
Dr. Kalet is associate professor of medicine and surgery, director, Medical Education, Section of Primary Care, Division of General Internal Medicine, and director, Educational Research, Division of Educational Informatics, Department of Medicine, NYU School of Medicine, New York, New York, as well as director of the ROMEO Unit at NYU.
Dr. Gillespie is assistant professor, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, New York, New York, as well as a member of the ROMEO Unit at NYU.
Dr. Holmboe is senior vice president and chief medical officer, American Board of Internal Medicine, Philadelphia, Pennsylvania.
Dr. Schwartz is associate professor, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, and Manhattan Veterans Administration Medical Center, New York, New York, as well as a member of the ROMEO Unit at NYU.
Ms. Ark is a doctoral student in measurement, evaluation, and research methodology, University of British Columbia, Vancouver, BC, Canada. At the time of writing, she was a research associate, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, New York, New York, and a member of the ROMEO Unit at NYU.
Dr. Jay is clinical assistant professor, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, New York, New York, as well as a member of the ROMEO Unit at NYU.
Dr. Paik is assistant professor, Department of Pediatrics, NYU School of Medicine, New York, New York, as well as a member of the ROMEO Unit at NYU.
Dr. Truncali is instructor, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, New York, New York, as well as a member of the ROMEO Unit at NYU.
Ms. Hyland Bruno is coordinator, Program for Medical Education Innovations and Research, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, New York, New York, as well as a member of the ROMEO Unit at NYU.
Dr. Zabar is associate professor, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, New York, New York, as well as a member of the ROMEO Unit at NYU.
Dr. Gourevitch is professor of medicine and psychiatry and director, Division of General Internal Medicine, Department of Medicine, NYU School of Medicine, New York, New York, as well as a member of the ROMEO Unit at NYU.
Correspondence should be addressed to Dr. Kalet, Department of Medicine, Division of General Internal Medicine, Section of Primary Care, 550 First Avenue, BCD D401, New York, NY 10016; telephone: (212) 263-1137; fax: (212) 263-8234; e-mail: email@example.com.
Recent calls for accountability and benchmarks to measure return on investment for medical education highlight the need to examine connections between medical education and clinical outcomes. However, only a few studies have directly linked educational interventions with patient outcomes beyond superficial (e.g., learner satisfaction) or short-term (e.g., knowledge gain) markers.1–4 Although there is a general belief that “the quality of care that the public receives is determined to some extent by the quality of medical education students and residents receive,”5 the traditional structure and content of health professions curricula lack a supporting evidence base.6
Critiques of medical education research emphasize that such research often is not guided by theory, is conducted in single institutions with small, homogeneous samples, is predominantly cross-sectional, and is focused on questions of marginal significance for actual practice (e.g., measuring short-term retention of facts).1,7,8 Researchers have justly attributed many of these shortcomings to insufficient funding.9,10 We believe that highlighting and strengthening the links between educational processes and meaningful patient outcomes would dramatically improve the relevance and impact of medical education research. Establishing prospective, cross-institutional cohorts could markedly increase our understanding of the mechanisms through which physician competence translates into measurable patient health.
Medical education research is a translational science. Just as clinical research must be translated from the bench to the bedside and from the bedside to the community to fully realize its potential impact, so too must educational research be translated from the educational and social sciences to the classroom and from the classroom to the clinic through associations with meaningful patient outcomes. A major barrier to addressing the lack of hard evidence linking medical education to patient outcomes is that medical education and health services researchers, who draw on different research paradigms, have not joined their methodological expertise, experience, and resources.1 Further, the definition and measurement of quality outcomes in medical practice are an emerging science.
In this article, we offer a theoretical framework to guide outcomes-based medical education research that addresses critical gaps in educational translation. We propose that the medical education community adopt more sensitive measures and more sophisticated methodology, such as the standardized educational data warehouse discussed below.
We begin with an orientation to a new concept in benchmark patient outcomes that we propose medical educators adopt to evaluate the effectiveness of medical education. We argue that there is a need for a theoretical framework to guide medical education research. We close by describing early local initiatives in medical education outcomes research, including our Database for Research in Education in Academic Medicine (DREAM), and by recommending future directions for the medical education community.
Introducing Educationally Sensitive Patient Outcomes
Ambulatory care sensitive conditions11 are medical problems for which hospitalizations are preventable if patients have adequate access to primary care services. These conditions were first articulated and studied in the late 1980s by Billings and Teicholz12 and others, and they are now established benchmarks of primary care quality.13,14 Inspired by this important movement toward identifying benchmarks to establish evidence of effective care, we are developing and adopting educationally sensitive patient outcomes (ESPOs) and advocate that medical educators and researchers likewise use these measures as true end points for benchmarking medical education quality.
To identify meaningful ESPOs, educators and researchers must consider which patient outcomes are both important and sensitive to provider education. Clinical data—for example, related to blood pressure and glycemic control, vaccine-preventable conditions, weight management, and mortality—are important indicators of health care quality and merit greater scrutiny. However, clinical conditions depend on so many other public health and contextual factors that such data cannot provide a complete picture of medical education quality. The same is true for measures of important health-related patient behavior changes, such as reduced alcohol intake, smoking cessation, uptake of exercise, and adherence to prescriptions.
Educators and researchers must also consider that the process of becoming a doctor is lengthy, often discontinuous, and probably nonlinear. In designing studies, they must remember that medical students have little direct responsibility for individual patients, resident patient panels are small and heterogeneous, continuity is rarely achieved, and the modest size of some anticipated changes in outcome would make achieving statistical power difficult. Moreover, learning during medical training is deeply situated in complex systems. Educational interventions will always be tempered by patient (e.g., health status, health literacy, sociodemographics, self-efficacy), visit (e.g., first visit with physician, length of visit, trainee supervision), and system (e.g., electronic medical record prompts, access to services, team structure) factors.
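The power problem noted above can be made concrete with a standard two-arm sample size calculation. The sketch below uses the normal-approximation formula; the effect sizes and significance thresholds are illustrative assumptions, not figures from this article.

```python
import math
from statistics import NormalDist

def n_per_group(delta, sigma=1.0, alpha=0.05, power=0.80):
    """Patients needed per arm to detect a mean difference `delta`
    (in units of the outcome's standard deviation `sigma`) with a
    two-sided test, via the normal-approximation formula."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for significance
    z_beta = z.inv_cdf(power)           # critical value for power
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# A modest educational effect (0.2 SD) needs more than six times the
# sample of a moderate one (0.5 SD):
print(n_per_group(0.2))  # 393 patients per arm
print(n_per_group(0.5))  # 63 patients per arm
```

With trainees carrying small, discontinuous patient panels, accruing several hundred comparable patients per study arm is rarely feasible within a single program, one argument for the cross-institutional cohorts proposed here.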
We envision ESPOs as pathways that link educational interventions to patient health outcomes. As we developed the concept of ESPOs, we considered the many determinants of both patient health and medical education and identified two important and interdependent ESPOs: patient activation (PA), a strong component of healthful behavior change; and clinical microsystem activation (CMSA), a major influence on patient safety and health care quality. Each is likely to be educationally sensitive, can be measured accurately and reliably, and has significant influence on important clinical outcomes. Figure 1 presents our conceptual model of how PA and CMSA conjointly describe the relationship between physician education and patient-level outcomes.
Patient activation (PA)
Patients who are informed and active participants in their own care have better health outcomes and engage in healthier behaviors. Practicing physicians appreciate that in order to maximize the impact of their efforts, they must often encourage patients to change some aspect of their behavior and/or engage in self-management. Health outcomes depend to a great extent on the degree to which the patient takes personal responsibility for understanding his or her condition, adheres to medication, makes needed lifestyle changes, obtains preventive screening, goes to follow-up visits, follows wound care instructions, and so on—in other words, the degree to which the person assumes the role of an “activated” patient.
Investigators have demonstrated that PA has progressive stages and is amenable to intervention.15 Measurement of PA in the context of chronic disease outcomes has been facilitated by the development of the valid and highly reliable Patient Activation Measure (PAM).16,17 The PAM identifies four stages through which patients progress:
1. coming to believe that their role in their own care is important;
2. learning and developing enough confidence to act on their own behalf;
3. acting on their own behalf; and
4. reaching the point where they can act even under stress.
PAM stages have been associated with self-management of chronic diseases (e.g., foot care in diabetes, weight management in heart disease, peak flow monitoring in asthma), patient satisfaction, and medication adherence18–21; hemoglobin A1c and low-density lipoprotein levels among patients from a diabetes registry22; health services use such as emergency room visits21; and inpatient admissions among patients with chronic disease.23
We propose PA as an ESPO. We hypothesize that educating physicians to employ strategies, such as motivational interviewing, that measurably increase PA will have a measurable impact on the PAM and be strongly linked to other meaningful patient outcomes.24
Clinical microsystem activation (CMSA)
Associations among educational efforts, physician practices, and patient outcomes are mediated and moderated by the broader systems of care within which both physicians and patients operate. Many researchers view the health care clinical microsystem (CMS) as an appropriate unit of analysis for understanding and improving these complex patient, provider, and system interactions. A CMS is a small group of people who work together regularly—or assemble around a patient as needed—to provide care.25 The CMS shares processes of care, information, and needs, and it generates measurable performance outcomes. The CMS is the building block of larger health care organizations, the locus of both clinical work and patients' experience of care, and the context in which most medical education occurs beyond the first two years of medical school.
Our hypothesis is that learners' (i.e., medical students', residents', fellows') effectiveness in “activating” aspects of the CMS in which they are working to serve patients' specific needs is an important, patient-centered outcome of medical education. This is the core concept behind the Accreditation Council for Graduate Medical Education–mandated competency of systems-based practice.26 Much work has been done to identify the characteristics of health care systems that produce the best patient outcomes.27 This research is leading to the development of feasible, reliable, and valid measures of CMS-activating physician practice, which should include
* knowledge and attitudes reflecting CMS awareness;28
* performance, as assessed via clinical documentation or direct observation of patient encounters (actual or simulated), showing effective activation of the CMS in support of the patient; and
* team-level success, as evidenced by completed referrals, effective handoffs, and use of information systems supports and performance feedback.
Ultimate patient outcomes are the result of the interactions between physician, patient, and microsystem competencies (Figure 2).29 The PAM includes a person's own “competence” as a patient, that is, his or her ability to interact effectively with the CMS in which he or she is a patient. In traditional outcomes research, analyses typically adjust out this critical patient factor. We argue that physicians bear responsibility for helping patients navigate the CMS. As Figure 2 illustrates, a second ESPO is CMSA: the physician's contribution to the CMS to ensure that it serves the needs of its patients—in other words, the physician's ability to activate the CMS (which includes other professionals who support and extend the physician's role, such as physician assistants and nursing and social work staff) to provide patients with optimal experiences and outcomes.
Establishing CMSA as a vital pathway through which physicians contribute to better patient outcomes presents several difficulties. Research conducted on systems has identified a number of key provider characteristics that enable a CMS to be successful but has not yet quantified the relative importance of each for patient outcomes.30 Also, these provider characteristics take a long time to develop and depend on a critical amount of experience working within a system, which presents a challenge for educational research because most clinical trainees rotate from one CMS to another at four- to eight-week intervals. The current training structure thus affords trainees scant opportunity to learn how to effectively interact with and work within a CMS. The literature31 and our experience as educators suggest that trainees often resort to workaround processes to complete their work. As a result, trainees may indeed “activate” the CMS but do so in a way that may not be efficient and may actually create unsafe conditions for others. Specific educational models aimed at improving care handoffs through best sign-out practices, maximizing supervision, or improving interdisciplinary communication target CMSA. There is promising evidence that such education and skills development do improve patient outcomes.32
A Theoretical Framework to Address Methodological Challenges
In theory-driven assessment and evaluation, a well-specified theoretical framework is used to identify and describe the complex set of factors and processes hypothesized to lead to outcomes. Such an approach helps researchers overcome existing methodological limitations by facilitating the choice of study methods appropriate to the specific links delineated in the framework. While a robust conceptual model explicating the causal links behind patient outcomes would allow researchers to make reasonable assumptions and reach valid, generalizable conclusions, there are many complex methodological challenges involved in creating such a model. For instance, clinical training environments are organized through successive layers of supervision to protect patients being cared for by inexperienced trainees. Although essential for patient safety, this setup creates major barriers to investigating the links between educational interventions and patient outcomes. First, researchers are limited in their ability to directly measure the impact of addressing gaps in medical education at the individual physician level. Second, one trainee rarely cares for a sufficient number of patients with enough continuity to reliably measure the impact of care. Clearly, more sensitive and sophisticated research designs are needed.
To identify ESPOs, medical education researchers must begin to isolate the causal links connecting physician education and patient outcomes. We must identify points at which changes in physician skills are likely to produce effects on public health and quality of life large enough in magnitude to be measured at various, often distal, points in time; develop measures that can reliably and validly assess intermediate outcomes in this causal chain; and understand and adjust for the “noise” of the clinical processes and systems that surround physicians as well as for physicians' abilities to influence and improve the functioning of those systems.
The critical issue of unit of analysis must be addressed. Whereas interventions to enhance PA target the individual patient,23 most medical educational research focuses on the physician trainee as the unit of analysis. It is the rare educational intervention that targets clinical teams including trainees as the unit of allocation and analysis.32 Educational interventions are typically delivered across institutions to groups of learners with varying individual characteristics. Research seeking to link all these levels must consider the nested nature of subjects and use appropriate multilevel modeling to account for clustering. In designing cross-institutional studies, researchers must borrow heavily from disciplines with experience investigating outcomes of interventions across complex and heterogeneous systems, such as economics and organizational psychology.
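The cost of ignoring this nesting can be sketched with the standard Kish design effect, which quantifies how clustering inflates variance. The panel size and intraclass correlation below are assumed values for illustration, not estimates from this article.

```python
def design_effect(cluster_size, icc):
    """Kish design effect: variance inflation from sampling patients
    in clusters (e.g., one trainee's panel) rather than independently."""
    return 1 + (cluster_size - 1) * icc

def effective_n(total_n, cluster_size, icc):
    """Number of independent observations a clustered sample is worth."""
    return total_n / design_effect(cluster_size, icc)

# 1,000 patients seen by trainees in panels of 20, with a modest
# within-trainee correlation (ICC = 0.05), carry the information of
# only about 513 independent patients:
print(round(effective_n(1000, 20, 0.05)))
```

Multilevel models recover the correct standard errors in such designs, but only if the trainee- and institution-level clustering is specified at the design stage.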
Defining highly reliable, valid, and feasible measures of important patient outcomes is an important initial step that likely requires a mixed-methods approach because no single measurement strategy is adequate to the task. For example, validated chart audits and patient exit interviews are standard practice in health services research but have limitations. As prior studies report, at best 50% of patient education provided by physicians is evident on chart review,33 while systematic patient exit interviews are logistically challenging and resource intensive.
A possible solution is the use of standardized patients (SPs), whose assessments allow objective comparisons among physicians and thus effectively control for the “messiness” of real patients' personalities, clinical conditions, and psychosocial situations. A single postvisit SP rating is more reliable than a single real patient report.34 Further, the literature supports SPs as the reference standard for assessing physician skills.35,36 More research is needed, however, to solidify the link between trainees' performance on SP exams and actual patient-care outcomes. Finally, care quality indicators already in use for physician performance (e.g., referrals made, revisit rates, pharmacy records as adherence markers) present an excellent resource for medical education researchers to calibrate learners' SP performance to patient-level clinical outcomes, but the data are generally not subtle enough to identify the independent contributions of physician trainees.
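The reliability advantage of SP ratings can be illustrated with the Spearman-Brown prophecy formula, which relates single-rating reliability to the reliability of an average of several ratings. The single-rating reliabilities below are assumptions chosen for illustration, not values from the cited studies.

```python
import math

def spearman_brown(single_r, k):
    """Reliability of the mean of k ratings, each with reliability single_r."""
    return k * single_r / (1 + (k - 1) * single_r)

def ratings_needed(single_r, target_r):
    """Smallest number of ratings whose average reaches target_r."""
    k = target_r * (1 - single_r) / (single_r * (1 - target_r))
    return math.ceil(k)

# If one real-patient report has reliability 0.30 while one trained SP
# rating reaches 0.60 (illustrative values), then to hit 0.80:
print(ratings_needed(0.30, 0.80))  # 10 patient reports
print(ratings_needed(0.60, 0.80))  # 3 SP ratings
```

The design choice follows directly: a handful of standardized encounters can substitute for a much larger, harder-to-collect set of real-patient reports when the goal is a reliable physician-level score.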
The patient satisfaction literature provides additional opportunities for evaluating the patient CMS experience and the physician–patient interaction, but these instruments are subject to ceiling effects and have yet to be linked to clinical outcomes. Even so, some patient experience-of-care measures (e.g., provider interpersonal and communication skills) are promising as benchmarks for physician training.37
Finally, ESPO research must account for the inevitable time lag between trainees' learning of clinical skills and the impact of their learning on patient outcomes. We imagine a training-staged approach to outcomes measurement (Figure 3) to address this challenge. This work will require new methods and a broader interdisciplinary view of research methodologies. In the following section we briefly describe our own efforts to build a research infrastructure to begin the work of linking medical education to patient outcomes.
Early Local Initiatives in Medical Education Outcomes Research
With funding from the Health Resources and Services Administration, in 2005 we established the Research on Medical Education Outcomes (ROMEO) initiative, an interdisciplinary research group comprising medical education and health services researchers who are internists, pediatricians, emergency medicine physicians, and psychologists at the New York University School of Medicine. ROMEO's mission is to generate the evidence base for primary care medical education by linking educational interventions with specific patient outcomes in underserved populations. Our initial projects target professionalism,38–41 obesity prevention and treatment,42–46 and screening and brief intervention for problem alcohol use.47,48 In each case we are using PA and CMSA measures as early tests of the ESPO model we propose in this article.
The ROMEO group is also designing a comprehensive, longitudinal educational data warehouse, the DREAM, which will allow us to conduct prospective studies that link educational processes to patient outcome measures. We are enrolling all undergraduate and graduate trainees at our institution in DREAM through an IRB-approved research registry. Similar to hospitals' continuity of care records and electronic data repositories that have the potential to improve continuity of patient care and ensure portability of health information across facilities, the DREAM registry integrates educational data from sources that are otherwise fragmented. Students, housestaff, and physicians have myriad assessment data points collected from multiple sources, and mechanisms are lacking to integrate such complex and heterogeneous information across the continuum of their educational lifetimes. In building the DREAM registry, we have been addressing the problems of quality, incompatible formats, poor labeling and coding, and inconsistent measurement inherent in these multiple datasets.
Although these technical and conceptual obstacles have slowed our progress in creating a fully functional DREAM, we have had the opportunity to describe some interesting phenomena. For example, we are tracking the development of trainees' patient education and counseling skills over time, in context, and in response to educational intervention, by linking the following data:
* patient education and counseling skills as measured in a high-stakes exam at the end of the third year of medical school;
* faculty ratings of interns' communication competence in clinical rotations;
* health care team members' and patients' satisfaction with second-year residents' counseling skills;
* unannounced SPs' and actual patients' reports of second- and third-year residents' patient education, counseling, and PA practices;
* chart abstraction of second- and third-year residents' documentation of the above practices with their actual patients along with key variables describing patients' health status;
* senior residents' views on the degree to which patient education and counseling skills were adequately modeled in their clinical learning environments;
* senior residents' self-assessments as well as health care team members' assessments of each resident's ability to work effectively within the CMS;
* unannounced SP observation of the working of that CMS; and
* the PA level of a resident's panel of patients.
Data sources like DREAM are critical in the quest to link medical training to ESPOs. Only through multimethod, longitudinal research can researchers study central questions in medical education while embracing the complexity of the educational milieu.
Work of this magnitude will require unprecedented cooperation among U.S. academic health centers. This work is of great importance to medical educators across the continuum of training. It also aligns wholly with the fundamental interests of leaders in and funders of clinical medicine. In addition, it requires the public to provide encouragement, incentives, and infrastructure support for cross-institutional collaborations adequate to the scope of the work. This investment in the future of medical education is essential.
There is still foundational work to be done in building the necessary tools to allow educational epidemiology. Significant resources will be required to achieve the goal of linking medical education to patient outcomes, a goal of great importance to large health systems, health services research, and quality-management organizations.49 The tremendous societal investment in medical training in the United States and the value Americans place on high-quality medical care mandate an energetic approach. We suggest the following four steps:
1. Convene a conference of stakeholders (including medical educators and trainees, educational and health services researchers, leaders of health care systems, and patient representatives) to define a shared agenda in medical education outcomes research, to move toward consensus in further delineating the conceptual model and measurable ESPOs, and to inspire funders to support the large-scale studies required.
2. Encourage multiinstitutional DREAM-like collaborations, which are critical to amassing sample sizes large enough to control for the impact of institutional variables and regional variations and to increase the generalizability of findings.
3. Develop new measurement strategies (e.g., unannounced SPs), research designs (e.g., time series for understanding development over time, generalizability theory for understanding sources of variation, and complex intervention research strategies for linking inputs to outcomes50), and highly efficient statistical models and techniques to foster generation of highly reliable and valid measures and models of important patient outcomes.
4. Lobby for significant investment in this work. We suggest proponents leverage the fact that new measures and methodological approaches will serve as an asset for all graduate medical education programs as they struggle to meet increasingly rigorous accreditation requirements. We have attracted interest in our DREAM registry by arguing that medical education outcomes research will allow us to replace poorly designed and wasteful evaluation efforts with efficient, meaningful measures that justify such investment.
These steps, if undertaken thoughtfully, could yield common assessment and analytic strategies that enhance validity and promote standardization across curricula and institutions. Ultimately, a well-maintained, multiinstitutional DREAM would allow increasingly large-scale research capable of using high-quality measures for studies that assess the relative impact of education on distant outcomes, while controlling for individual-, institutional-, and system-level factors. In this way, medical education research may follow the developmental pathway traveled by health services research: from small descriptive studies to larger-scale studies with more highly validated measures and richer data sources that allow multimethod research approaches. A longitudinal database would support medical education scholars (who are often busy, underfunded clinical teachers) as they conduct the critical studies needed to assess and track the impact of innovative and traditional (yet untested) educational approaches.
Ultimately, ESPOs may prove to be increasingly sophisticated end points for assessing the medical education learners receive and for guiding early medical education toward a focus on outcomes of particular importance to public health and the population served.
Medical Educators' Role in Identifying Clear Benchmarks
Medical educators must ensure that medical students, residents, and practicing physicians will be prepared to meet the health care challenges of the 21st century. The aging U.S. population, the increasing burden of chronic illness, and rapid changes in biomedical technology and care delivery all highlight the need to better understand the processes and outcomes of medical education. Well-funded, rigorously designed studies that allow researchers to use patient outcomes data to improve the delivery of medical education are needed. By identifying clear, measurable, and meaningful benchmarks for medical education, researchers can delineate how medical education contributes directly to the health of individuals and the public.
Funding for this work was obtained from the Health Resources and Services Administration, Bureau of Health Professions, Title VII grant program, Academic Administrative Units grant (PI: Gourevitch, 2005–2008 and 2008–2011).
1 Whitcomb M. Research in medical education: What do we know about the link between what doctors are taught and what they do? Acad Med. 2002;77:1067–1068.
2 Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–960.
3 Gruppen LD. Improving medical education research. Teach Learn Med. 2007;19:331–335.
4 Kalet A. The state of medical education research. Virtual Mentor. 2007;9:285–289.
5 The Commonwealth Fund Task Force on Academic Health Centers. Training Tomorrow's Doctors: The Medical Education Mission of Academic Health Centers. New York, NY: The Commonwealth Fund; 2002.
6 Institute of Medicine. IOM report: Improving medical education—Enhancing the behavioral and social science content of medical school curricula. Acad Emerg Med. 2006;13:230–231.
7 Prystowsky JB, Bordage G. An outcomes research perspective on medical education: The predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–336.
8 Norman G. Research in medical education: Three decades of progress. BMJ. 2002;324:1560–1562.
9 Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA. 2007;298:1002–1009.
10 Jamshidi HR, Cook DA. Some thoughts on medical education in the twenty-first century. Med Teach. 2003;25:229–238.
11 Bindman A, Grumbach K, Osmond D, et al. Preventable hospitalizations and access to health care. JAMA. 1995;274:305–311.
12 Billings J, Teicholz N. Uninsured patients in District of Columbia hospitals. Health Aff (Millwood). 1990;9:158–165.
13 Fitch K, Iwasaki K. Ambulatory-Care-Sensitive Admission Rates: A Key Metric in Evaluating Health Plan Medical-Management Effectiveness. Seattle, Wash: Milliman; January 2009.
14 AHRQ Quality Indicators. Guide to Prevention Quality Indicators: Hospital Admission for Ambulatory Care Sensitive Conditions. Version 3.1. Rockville, Md: Agency for Healthcare Research and Quality; March 2007. AHRQ Publication 02-R0203.
15 Hibbard JH, Tusler M. Assessing activation stage and employing a “next steps” approach to supporting patient self-management. J Ambul Care Manage. 2007;30:2–8.
16 Hibbard JH, Mahoney ER, Stockard J, Tusler M. Development of the Patient Activation Measure (PAM): Conceptualizing and measuring activation in patients and consumers. Health Serv Res. 2004;39:1005–1026.
17 Hibbard JH, Mahoney ER, Stockard J, Tusler M. Development and testing of a short form of the Patient Activation Measure. Health Serv Res. 2005;40:1918–1930.
18 Mosen DM, Schmittdiel J, Hibbard J, Sobel D, Remmers C, Bellows J. Is patient activation associated with outcomes of care for adults with chronic conditions? J Ambul Care Manage. 2007;30:21–29.
19 Hibbard JH, Mahoney ER, Stock R, Tusler M. Do increases in patient activation result in improved self-management behaviors? Health Serv Res. 2006;42:1443–1463.
20 Mosen DM, Schmittdiel J, Hibbard J, Sobel D, Remmers C, Bellows J. Is patient activation associated with outcomes of care for adults with chronic conditions? J Ambul Care Manage. 2007;30:21–29.
21 Kaiser Permanente Care Management Institute. Unpublished national outcomes reports: Asthma, diabetes, cardiovascular disease, heart failure, and chronic pain. Oakland, Calif: 2003.
22 Remmers C, Hibbard J, Mosen DM, Wagenfield M, Hoye RE, Jones C. Is patient activation associated with future health outcomes and healthcare utilization among patients with diabetes? J Ambul Care Manage. 2009;32:320–327.
23 Hibbard JH, Greene J, Tusler M. Improving the outcomes of disease management by tailoring care to the patient's level of activation. Am J Manag Care. 2009;15:353–360.
24 Ogedegbe G, Chaplin W, Schoenthaler A, et al. A practice-based trial of motivational interviewing and adherence in hypertensive African Americans. Am J Hypertens. 2008;21:1137–1143.
25 Mohr JJ, Batalden PB. Improving safety on the front lines: The role of clinical microsystems. Qual Saf Health Care. 2002;11:45–50.
27 Batalden P, Davidoff F. Teaching quality improvement: The devil is in the details. JAMA. 2007;298:1059–1061.
28 Epstein RM. Mindful practice. JAMA. 1999;282:833–839.
29 Holmboe ES. Assessment of the practicing physician: Challenges and opportunities. J Contin Educ Health Prof. 2008;28(suppl 1):S4–S10.
30 Nelson EC, Batalden PB, Huber TP, et al. Microsystems in health care: Part 1. Learning from high-performing front-line clinical units. Jt Comm J Qual Improv. 2002;28:472–492.
31 Spear S, Schmidhofer M. Ambiguity and workarounds as contributors to medical error. Ann Intern Med. 2005;142:627–630.
32 McMahon GT, Thorndike ME, Coit ME, Laing M, Katz JT. Restructuring residency education improves the quality of inpatient care. J Gen Intern Med. 2009;24(suppl 1):S163.
33 Norman GR, Neufeld VR, Walsh A, Woodward CA, McConvey GA. Measuring physicians' performances by using simulated patients. J Med Educ. 1985;60:925–934.
34 Thompson HC, Osborne CE. Development of criteria for quality assurance of ambulatory child health care. Med Care. 1974;12:807–827.
35 Epstein RM, Levenkron JC, Frarey L, Thompson J, Anderson K, Franks P. Improving physicians' HIV risk-assessment skills using announced and unannounced standardized patients. J Gen Intern Med. 2001;16:176–180.
36 Tamblyn RM. Use of standardized patients in the assessment of medical practice. CMAJ. 1998;158:205–207.
37 Evans RG, Edwards A, Evans S, Elwyn B, Elwyn G. Assessing the practising physician using patient surveys: A systematic review of instruments and feedback methods. Fam Pract. 2007;24:117–127.
38 Paik S, Gillespie C, Ark T, et al. What pediatric residents perceive as barriers to quality patient care. Poster presented at: PAS Annual Meeting; May 2–6, 2008; Honolulu, Hawaii. Publication 5806.4.
39 Paik S, Ark TK, Gillespie C, et al. Using unannounced standardized patients to assess residents' communication and professionalism skills in managing pediatric phone calls. Platform session presented at: PAS Annual Meeting; May 2–5, 2009; Baltimore, Md. Publication 4535.1.
40 Paik S, Gillespie C, Ark TK, et al. The reliability and validity of 360 degree evaluations for assessing communication and professionalism skills in pediatric residents. Platform session presented at: PAS Annual Meeting; May 2–5, 2009; Baltimore, Md. Publication 3685.4.
41 Gillespie C, Paik S, Ark T, Zabar S, Kalet A. Residents' perceptions of their own professionalism and the professionalism of their learning environment. J Grad Med Educ. 2009;1:208–215.
42 Jay M, Gillespie C, Ark T, et al. Do internists, pediatricians, and psychiatrists feel competent in obesity care? Using a needs assessment to drive curriculum design. J Gen Intern Med. 2008;23:1066–1070.
43 Jay M, Schlair S, Gillespie C, et al. Using patient exit interviews to assess residents' quality of counseling after an obesity curriculum. J Gen Intern Med. 2009;24(suppl 1):S212.
44 Jay M, Schlair S, Gillespie C, et al. Is there an association between quality of obesity counseling and patients' motivation and intention to change their behaviors? J Gen Intern Med. 2009;24(suppl 1):S111.
45 Jay M, Kalet A, Ark TK, et al. Physicians' attitudes about obesity and their associations with competency and specialty: A cross-sectional survey. BMC Health Serv Res. 2009;9:106.
46 Jay M, Schlair S, Caldwell R, Kalet A, Sherman S, Gillespie CC. From the patients' perspective: The impact of training on resident physicians' obesity counseling. J Gen Intern Med. In press.
47 Truncali A, Gillespie C, Ark TK, Lee J, Zabar S, Kalet A. Need for targeted training in substance abuse prevention and treatment competencies. J Gen Intern Med. 2008;23(suppl 2):350.
48 Lee J, Triola M, Gillespie C, et al. Working with patients with alcohol problems: A controlled trial of the impact of a rich media Web module on medical student performance. J Gen Intern Med. 2008;23:1006–1009.
50 Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: The new Medical Research Council guidance. BMJ. 2008;337:a1655.