In this issue of Academic Medicine, Edward Krupat has laid out some thoughtful criticisms of entrustable professional activities (EPAs) in undergraduate medical education (UME) and of competency-based medical education (CBME) in general.1 In his Perspective, he notes that some view EPAs as “paradigm shifting.” Others have raised the question as to whether CBME is also a paradigm shift.2 Krupat writes, however, that “perhaps now is the right time to pause, take a deep breath, and ask ourselves whether we are taking a major step in the right direction rather than a step back or to the side.”1 Before exploring whether current change efforts in medical education indeed represent a paradigm shift and whether the medical education community should slow these efforts, it is important to briefly reflect on some of the drivers behind CBME and EPAs.
One of the primary drivers for outcomes-based medical education was the growing recognition during the late 20th and early 21st centuries that graduates of medical schools were insufficiently prepared for residency and that graduates of residency programs were insufficiently prepared for practice.3–5 Concomitantly, a growing body of research, culminating in the seminal Institute of Medicine reports To Err Is Human6 and Crossing the Quality Chasm,7 demonstrated significant and widespread deficiencies in the quality and safety of care received by patients. Subsequent reports have highlighted agonizingly slow progress in addressing many of the gaps in the triple aim: lowering health care costs, improving population health, and improving care quality and safety.8–10
Arnold Milstein11 noted in his 2010 address at the American Board of Internal Medicine Foundation’s forum on medical education that
[s]ince physician graduates of American medical education organizations typically lead or heavily influence U.S. health care delivery, one source of indirect, broad, outcome-based evidence [of the effectiveness of the medical education enterprise] is the overall performance of the U.S. health care system. The width of the performance gaps on the aims of effectiveness, safety and efficiency understandably reduces society’s confidence that physicians are adequately honoring their Hippocratic promises.
On top of these system concerns are a growing number of concerns about our individual learners: Too many continue to experience a toxic mix of substantial debt, dysfunctional clinical learning environments, significant work compression, and burnout, as well as fears of medical unemployment that lead students to apply to an astounding number of residency programs. Let’s be honest: Medical students and residents did not create these dysfunctional conditions, despite a penchant among some educators to complain about learners in generational (e.g., millennial) terms. Can those of us who serve as educational leaders really say these conditions support a humanistic educational process? Do all of us involved in education need to spend more time looking in the mirror and reflecting on the conditions we helped create?
It is against this challenging backdrop that interest in educational outcomes, or outcomes-based education, has begun to take root. The shift from a structure-and-process approach to an outcomes-based approach is the actual paradigm shift being attempted. Logically, and especially from a public perspective, UME and graduate medical education (GME) programs should be able to conclude with some degree of accurate confidence that graduates can actually perform the tasks and activities appropriate for their stage of development. No longer is it appropriate to rely simply on completing a course of study and passing an exam as sufficient evidence of competence. If education were that straightforward, would we be experiencing the current levels of quality and safety problems and medical errors?
Creating programs that maximize both meaningful educational and meaningful clinical opportunities for students and residents to help them acquire and apply needed competencies is a moral obligation of the medical education system. These opportunities must include an emphasis on competencies beyond medical knowledge and basic clinical skills, such as systems thinking, quality improvement, interprofessional teamwork, and patient safety, while concomitantly attending to identity formation, wellness, and resilience.12,13 The challenge now is to rebalance medical education by decreasing the emphasis on structure and process and increasing the focus on outcomes that include these competencies in a high-functioning learning environment.
Competencies were created as one mechanism to help facilitate the transformation to educational outcomes by providing frameworks and mental models of the essential abilities physicians need in a complex health care system.12 Love them or hate them, the six core competencies call attention to critical abilities that previously were either ignored or at best taught implicitly and rarely assessed, such as systems-based practice (SBP) and practice-based learning and improvement.2,12,13 For example, in my residency training, care coordination (a key component of SBP) was defined as “please be sure to discharge the patient by 11 am and schedule their follow-up visit.” Yet discharge is a complex process and a high-risk period for patients that receives insufficient attention.14 It pains me that, despite data highlighting ongoing deficiencies in these competency domains, words like “patient,” “quality,” and “safety” are rarely used in most of the literature critical of CBME, and different solutions or suggestions are rarely offered. It also troubles me to still see in some current public forums comments such as “Systems-based practice doesn’t really apply to our specialty,” or “We need to protect residents from this quality and safety stuff so they can learn medicine.”
EPAs, and also Milestones,15 can be conceptualized as educational technologies to guide the application and implementation of the competencies, and I fully recognize that they must be rigorously studied and refined through iterative cycles so that they do not become yet another “shiny bright object.”16 Those of us working in this space must not hold on too tightly to new ideas if they are ultimately shown not to be effective. It may be that EPAs and Milestones are simply bridging technologies to some other approach not yet realized. Recent work, however, suggests that they show some promise with meaningful operationalization of the competencies into educational practice.17–19
The journey to change takes time, and the experiences with Milestones and EPAs are no different. The first Milestone work began in 2007, and the Accreditation Council for Graduate Medical Education is currently planning for the next round of Milestone revisions, a process that will begin in 2018 and take several years. The Milestones have always been viewed as a dynamic framework that will change over time as experience is gained and lessons are learned on how best to use them.18–20 For EPAs in UME, Lomis and colleagues17 recently published early lessons from an ongoing pilot studying the Core Entrustable Professional Activities for Entering Residency. As we continue to study and grapple with transformation, I hope members of the community will ground the debates over their specific positions in key tenets of philosophy and science and avoid ideologically driven discourse, which is too often polarizing and fails to provide recommendations or suggestions for improvement. In this regard, Krupat’s article is a welcome addition to important debates around EPAs and CBME. I agree with Krupat that more work on how best to integrate these educational technologies across the UME and GME continuum is definitely needed as part of the struggle with the paradigm shift to outcomes.
By definition, paradigm shifts are messy and threatening, both for those wanting to remain with the older paradigm and for those pushing the newer one. As Thomas Kuhn21(p5) noted in 1962,
Normal science, the activity in which most scientists inevitably spend almost all of their time, is predicated on the assumption that the scientific community knows what the world is like. Much of the success of the enterprise derives from the community’s willingness to defend that assumption, if necessary at considerable cost. [emphasis added]
Assessment and program evaluation methodologies in medical education currently sit at the tense interface of this paradigmatic transition. Medical education can benefit by embracing lessons from the health and social services research worlds, which have developed methodologies incorporating the critical role of complexity in research and evaluation design.22,23
On the assessment side, the balance needs to shift from the 20th century’s heavy reliance on high-stakes “proxy” assessments, such as multiple-choice question (MCQ) examinations and objective structured clinical examinations (OSCEs), to work-based assessment in the clinical space, especially in GME. That is not to say that MCQ exams, for example, do not have value—they do—but at some point we in the educational community need to recognize that despite decades of advances in high-stakes MCQ exams and OSCEs, we have the health system performance we have. From the patient perspective, there is little comfort in seeing yet another study that demonstrates a correlation between lower passing scores on a licensing or certification exam and poorer care, especially if you are the patient receiving the poorer care from a licensed and certified physician. Too often, debates around the role of various assessment approaches get bogged down in unhelpful ideological arguments.
We need to embrace the reality that most of what we do in medical education involves implementing “complex interventions,” simply defined as interventions possessing multiple interacting and interdependent components.22 Through this lens, EPAs in UME and GME, and Milestones in GME, are viewed as truly complex interventions with multiple interacting components embedded in variable and evolving contexts. EPAs and Milestones, when treated as interventions, are not therapeutic pills that can be easily randomized. While randomized controlled trials (RCTs) should continue to play a role where appropriate, research in medical education also needs to use methodological approaches that incorporate the effects of complexity on the implementation and outcomes of educational interventions.23 For example, while the Flexibility in Duty Hour Requirements for Surgical Trainees (FIRST) trial24 provided very helpful policy guidance, it did nothing to help us understand how best to use time (i.e., duty hours) to maximize learning and professional development or to improve quality of care and education. Furthermore, its participating institutions were stratified prior to randomization by their baseline clinical performance measures. Given that an institution’s level of performance in quality and safety is associated with the future practice of its individual graduates, this latter point is particularly salient.25,26
Implementations of complex interventions have long success journeys, are fragile, mutate along the way, depend heavily on context, and feed back on themselves.23,27 Randomization does not provide any insights into the mechanisms and contexts that produce success or failure, and control groups contribute nothing to understanding what makes implementation successful.27 Other methodologies are needed to address these challenges. Trying to answer for EPAs and Milestones the fundamental questions of what works, for whom, under what circumstances, and why is a logical step before any attempt at an RCT.27
Finally, as we all work to change medical education across the continuum, we need to rethink how we handle polarities in our debates. At the current time, too much of the educational discourse revolves around many “either–or” polarities or dichotomies, such as high-stakes tests versus work-based assessments, quantitative versus qualitative approaches to assessment, reductionism versus holism, process versus outcome, and so forth. Such either–or arguments are typically unhelpful—engaging in them often makes for a fun ideological exercise but in the end does not move the field forward or ultimately help patients. Johnson’s28 model of polarity thinking may be a better approach to leverage tensions and challenges more productively. In this model, polarities are viewed as “both–and,” encouraging groups and individuals to assess the benefits and the limitations of each pole and thus leverage the strengths of the polarity while being attentive to the limitations. If not EPAs and Milestones, then what sits at the other pole when the status quo is simply not sufficient? What can most effectively help us attain the outcomes we all ultimately want and care about? These are important questions for continued study and debate. I believe using lenses like polarity thinking can help advance the debate and transform medical education to meet the needs of the public. I also hope that the medical education community debates and performs this challenging work collaboratively through humanistic and civil discourse, especially with our learners.
Acknowledgments: The text from A. Milstein’s 2010 address is quoted with the author’s permission.
1. Krupat E. Critical thoughts about the Core Entrustable Professional Activities in undergraduate medical education. Acad Med. 2018;93:371–376.
2. Holmboe ES, Sherbino J, Englander R, Snell L, Frank JR; ICBME Collaborators. A call to action: The controversy of and rationale for competency-based medical education. Med Teach. 2017;39:574–581.
3. Crosson FJ, Leu J, Roemer BM, Ross MN. Gaps in residency training should be addressed to better prepare doctors for a twenty-first-century delivery system. Health Aff (Millwood). 2011;30:2142–2148.
4. Cooke M, Irby DM, O’Brien BC. Educating Physicians: A Call for Reform of Medical School and Residency. 2010. San Francisco, CA: Jossey-Bass.
5. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376:1923–1958.
6. Institute of Medicine. To Err Is Human: Building a Safer Health System. 1999. Washington, DC: National Academy Press.
7. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. 2001. Washington, DC: National Academy Press.
8. National Patient Safety Foundation. Free From Harm: Accelerating Patient Safety Improvement Fifteen Years After To Err Is Human. 2015. Boston, MA: National Patient Safety Foundation; http://www.npsf.org/?page=freefromharm. Accessed June 5, 2017.
9. Berwick DM, Nolan TW, Whittington J. The triple aim: Care, health, and cost. Health Aff (Millwood). 2008;27:759–769.
10. Mossialos E, Wenzl M, Osborn R, Anderson C. International Profiles of Health Care Systems, 2014: Australia, Canada, Denmark, England, France, Germany, Italy, Japan, The Netherlands, New Zealand, Norway, Singapore, Sweden, Switzerland, and the United States. January 2015. Washington, DC: Commonwealth Fund; http://www.commonwealthfund.org/publications/fund-reports/2015/jan/international-profiles-2014. Accessed June 5, 2016.
11. Milstein A. Trailing winds and personal risk tolerance: Transforming physician education to meet society’s needs. Paper presented at: American Board of Internal Medicine Foundation Forum; August 2010; Vancouver, Canada.
12. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood). 2002;21:103–111.
13. Accreditation Council for Graduate Medical Education. Section IV.A.5. ACGME Common Program Requirements. Effective July 1, 2016. http://www.acgme.org/Portals/0/PFAssets/ProgramRequirements/CPRs_07012016.pdf. Accessed June 9, 2017.
14. Coleman EA, Chugh A, Williams MV, et al. Understanding and execution of discharge instructions. Am J Med Qual. 2013;28:383–391.
15. Philibert I, Brigham T, Edgar L, Swing S. Organization of the educational milestones for use in the assessment of educational outcomes. J Grad Med Educ. 2014;6:177–182.
16. Baldoni J. Are “bright shiny objects” worth your time (and money)? Forbes. July 23, 2013. https://www.forbes.com/sites/johnbaldoni/2013/07/23/are-bright-shiny-objects-worth-your-time-and-money/#f74fd9725dd7. Accessed June 9, 2017.
17. Lomis K, Amiel JM, Ryan MS, et al.; AAMC Core EPAs for Entering Residency Pilot Team. Implementing an entrustable professional activities framework in undergraduate medical education: Early lessons from the AAMC Core Entrustable Professional Activities for Entering Residency pilot. Acad Med. 2017;92:765–770.
18. Beeson MS, Holmboe ES, Korte RC, et al. Initial validity analysis of the emergency medicine milestones. Acad Emerg Med. 2015;22:838–844.
19. Hauer KE, Vandergrift J, Hess B, et al. Correlations between ratings on the resident annual evaluation summary and the internal medicine milestones and association with ABIM certification examination scores among US internal medicine residents, 2013–2014. JAMA. 2016;316:2253–2262.
20. Holmboe ES, Call S, Ficalora RD. Milestones and competency-based medical education in internal medicine. JAMA Intern Med. 2016;176:1601–1602.
21. Kuhn TS. The Structure of Scientific Revolutions. 1962. Chicago, IL: University of Chicago Press.
22. Craig P, Dieppe P, Macintyre A, Michie S, Nazareth I, Petticrew M. Developing and Evaluating Complex Interventions: New Guidance. 2006. London, UK: Medical Research Council; www.mrc.ac.uk/documents/pdf/complex-interventions-guidance/. Accessed June 5, 2017.
23. Holmboe ES. The journey to competency-based medical education—Implementing milestones. Marshall J Med. 2017;3(1):article 2. http://mds.marshall.edu/mjm/vol3/iss1/2. Accessed June 5, 2017.
24. Bilimoria KY, Chung JW, Hedges LV, et al. National cluster-randomized trial of duty-hour flexibility in surgical training. N Engl J Med. 2016;374:713–727.
25. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA. 2009;302:1277–1283.
26. Bansal N, Simmons KD, Epstein AJ, Morris JB, Kelz RR. Using patient outcomes to evaluate general surgery residency program performance. JAMA Surg. 2016;151:111–119.
27. Pawson R, Greenhalgh T, Harvey G, Walshe K. Realist review—A new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(suppl 1):21–34.
28. Johnson B. Polarity Management: Identifying and Managing Unsolvable Problems. 2014. Amherst, MA: HRD Press.