In our evolving health care environment, medical educators need a more deliberate and effective way than the current model provides to prepare the next generation of physicians to deliver high-quality patient care. Training at all levels should address a broad definition of essential competency domains and rely on practical and purposeful observations of a physician-in-training’s progress. In 2010, the Carnegie Foundation for the Advancement of Teaching recommended sweeping changes in medical education, including the standardization of learning outcomes while allowing for individualized experiences.1 Competency-based medical education (CBME) emphasizes abilities and learning outcomes and is both learner-centered and relevant to the workplace. A practical approach to assessing competencies is through the use of entrustable professional activities (EPAs).2 An EPA is a recognized unit of work performed by a professional that is observable and measurable and thus provides a means of assessing authentic educational outcomes and the integration of observable abilities from multiple competency domains. “EPAs are those professional activities that together constitute the mass of critical elements that operationally define a profession.”2
Graduate medical education (GME) in the United States has been restructured to incorporate outcomes-based assessment under the Next Accreditation System.3 Alignment of undergraduate medical education (UME) objectives and outcomes with those in the postgraduate setting would allow CBME to more effectively frame the educational trajectory across the continuum of medical training (medical school through residency or fellowship) by use of a shared language and approach. While many schools have begun to define specific undergraduate competencies, there is no standard approach. As students embark upon the core clerkship year of training, each discipline typically takes ownership of the skills it is best suited to teach. We propose that core clinical disciplines standardize this approach by focusing on the outcomes they expect of their clerkship students. For example, if every third-year clerkship identifies and prioritizes a small number of EPAs, this approach will enable an integrated, holistic, and comprehensive assessment of the student by the end of the clerkship year.
We describe the process by which an Alliance for Academic Internal Medicine (AAIM) task force developed a common standard for implementation of competency-based student assessment within the core inpatient internal medicine (IM) clerkship. The task force conducted its work through meetings, literature review, and a national survey of clerkship directors. The resulting model for CBME in IM clerkships is intended to help clerkship directors and medical school curriculum committees as they strive to standardize learning outcomes by identifying high-priority assessment domains, as defined by our peer survey. The utility of CBME in defining the expected level of knowledge, skills, and attitudes in students’ clinical education is discussed, as well as the importance of a reformed assessment system.
The Role for CBME in UME
Given the potential benefits of introducing CBME early in training to promote student development in all domains of competence, educators have proposed several methods of implementation. In 2014, the Association of American Medical Colleges (AAMC) published the Core Entrustable Professional Activities for Entering Residency (Core EPAs), a list of 13 EPAs that all graduating medical students should be able to perform without direct supervision by their first day of residency training.4 The publication of the Core EPAs highlighted the importance of UME and GME as a continuum of medical training and also elevated the national discussion about the utility of EPAs as a framework for observation and assessment. The AAMC is sponsoring a pilot of the use of Core EPAs in 10 schools, as well as the Education in Pediatrics Across the Continuum project.5 Many schools have also begun to develop CBME efforts within individual rotations and/or across the UME setting.6
There are several limitations to current methods of implementation of CBME in UME. The AAMC Core EPAs may be too general for more practical and purposeful use in defining expectations and guiding key observations needed within a specialty-specific clinical experience.7 In addition, with the Core EPAs’ focus on the transition from senior student to intern, there has been much less concentration on student clinical learning earlier in the educational continuum. IM clerkships serve as pivotal clinical experiences in every medical school, charged with forming a strong clinical foundation in data gathering, oral presentations, clinical problem solving, and teamwork for all physicians-in-training, not just those pursuing an IM residency. Therefore, the task force called for a more consistent approach in defining key outcomes that will better inform the design of practical and purposeful learning experiences within core IM rotations.
The Importance of Assessment
Three critical elements in the design of learning experiences that support learners in achieving expected outcomes are opportunities for skill practice, timely assessment, and feedback. Historically, assessment in UME has relied on evaluations that suffer from many shortcomings: they are often based on global impressions collected after a period of time working with residents and attending supervisors rather than on direct observation. While CBME may not solve the problem of inadequate direct observation, it can create a framework for more timely assessment, captured during meaningful clinical encounters.
In a CBME model, learners benefit from frequent feedback focused on competencies and milestones to guide their progress along a developmental trajectory. Ideally, formative assessments occur repeatedly throughout a period of learning, so that the learner can practice, receive feedback, practice again, and receive additional feedback on their progress with the same type of activity.
EPAs thus may serve as a framework for workplace-based assessments to capture authenticity in practice, providing a focused view into whether a learner effectively integrates competencies and abilities in the provision of patient care. Although CBME can focus on holistic performance in practice, efforts to assess learners using competency domains (e.g., interpersonal skills and communication) have often yielded lists of attributes and behaviors that deconstruct competent performance into its component parts—sometimes an overly analytical approach. To avoid this limitation, EPAs consider the entirety of performance across domains. The EPA framework emphasizes reliability and objectivity, using multiple observations compared with defined standards or criteria. EPAs can be adapted for medical students by breaking larger tasks into smaller units or simplifying the types of patients and contexts in which the tasks should be performed successfully by students. One approach is to define “observable practice activities” that inform broader judgments related to EPAs based on clinical context, enabling one to be more specific about the tasks to be observed within a given clinical assignment or clerkship.8 Another approach is to define developmental steps toward EPAs, such as the EPAs for entry into clerkships as described by Chen et al.9
In 2014, AAIM created the CBME-UME task force to reach consensus about how best to institute CBME for medical students in IM. The task force comprised nine UME educators, four GME educators, and one fellowship educator. It was charged with building on previous AAIM work, in which IM residency and subspecialty fellowship program directors incorporated milestones and EPAs into their assessment strategies as part of the Next Accreditation System and the Internal Medicine Subspecialty Milestone Project.10–13 A literature search was performed to review the evolution of competency-based education, the resident and subspecialty milestones in IM, and the application of CBME broadly in the UME setting. Task force members met by conference call on a quarterly basis from 2015 to 2016.
The task force chose to focus on the development of key learning outcomes as defined by EPAs specific to students’ clinical educational experiences in the inpatient IM clerkship. In 2013–2014, 99% of medical schools required students to complete experiences in IM, for an average of 9.2 weeks during the third year (range 4–14 weeks).14
The task was to define competencies for all students as they pertained to the clerkship, not just for those pursuing IM residency. For this reason, the group decided against a recalibration of the IM residency milestones, which address training for learners specializing in the field.7,10 While the Core EPAs document had laid out 13 general EPAs that should be expected by the time of graduation, the task force attempted to define minimum competency expectations at the end of the third year and in the clinical context of the IM clerkship. The goal was to narrow the scope of the EPAs, highlighting a small subset that should be considered the principal responsibility of the clerkship, and observations that were manageable, achievable, and measurable within the confines of a 6- to 12-week clerkship.
In the summer of 2015, all Clerkship Directors in Internal Medicine (CDIM) institutional members were invited to complete an online, confidential survey. The institutional review board (IRB) at the University of Texas Medical Branch determined that the CDIM Survey research protocol did not fit the definition of human subjects research, and, therefore, the protocol did not require exemption status, further IRB review, or IRB approval.
Respondents were asked if they were familiar with the Core EPAs. Those who were familiar were then asked: “Which of the following CEPAER-related [Core EPAs for Entering Residency] milestones should be highest priority for assessment in your internal medicine clerkship?” A total of 23 options were listed, selected and modified from the critical functions of each of the 13 AAMC Core EPAs deemed potentially appropriate for an IM clerkship student. Options are listed in Appendix 1. We asked respondents to select a maximum of 5 unranked choices. Ninety-five of 123 CDIM members (77% response rate) responded to the survey. Approximately 25% of respondents were not familiar with the Core EPAs, leaving 65 respondents who answered the prioritization question. Results are listed in Table 1.
A cutoff of 20 responses was defined, somewhat arbitrarily, to separate the top choices, enabling a practical focus on the small number of competencies that IM clerkship directors considered highest priority for assessment.
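The selection procedure described above can be sketched in code: tally the unranked votes across ballots and keep every option at or above the response cutoff. This is a minimal illustration only; the option labels and counts below are invented, and the actual options and tallies are those in Appendix 1 and Table 1.

```python
# Hypothetical sketch of the survey cutoff procedure. Option names and
# ballots are fabricated for illustration; the real data are in Table 1.
from collections import Counter

CUTOFF = 20  # minimum number of responses to be retained

def top_competencies(ballots, cutoff=CUTOFF):
    """Each ballot is a set of up to 5 unranked option labels.
    Returns (option, votes) pairs meeting the cutoff, by vote count."""
    votes = Counter()
    for ballot in ballots:
        votes.update(ballot)
    return [(option, n) for option, n in votes.most_common() if n >= cutoff]

# Toy example with a cutoff of 2: "A" receives 2 votes, "B" receives 1.
ballots = [{"A", "B"}, {"A"}]
print(top_competencies(ballots, cutoff=2))  # [('A', 2)]
```

Because the choices were unranked, a simple count is the natural aggregation; the cutoff then trades breadth for a manageable assessment burden, as discussed later in the limitations.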
Proposed Third-Year IM EPAs
Based on the survey responses, six key EPAs emerged as important for assessment of competence within the context of the inpatient IM clerkship (as seen in Table 1). The EPAs chosen by respondents highlight the fundamental principles of accuracy in obtaining a history and physical, the importance of verbal and written communication, the skills of patient assessment, and differential diagnosis generation within the context of clinical reasoning. They reflect many of the general clinical core competencies of the CDIM-SGIM (Society of General Internal Medicine) Core Medicine Curriculum.15 Many of these EPAs are not unique to IM, yet by the nature of the structure of IM clerkships, IM is a discipline well suited to take the lead on their assessment. Not surprisingly, this subset of student EPAs is well aligned with the IM residency milestones developed by the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Internal Medicine (Chart 1), as well as proposed EPAs for use in resident assessment.10,16
Model for Implementation
A model for how these EPAs could be assessed at an undergraduate level was developed. First, the group applied a published model for entrustment in UME to one of the identified EPAs. Chen et al7 proposed using the GME entrustment and supervision scale but “expanding the lower levels of the scale to include more gradations of supervision, allowing additional layers of progressive learner autonomy.” Depending on the activity, for example, a student would progress from “not allowed to practice,” to “allowed under full supervision,” to “allowed with on-demand supervision.” An example is provided in Table 2 for the IM clerkship EPA “Generate a differential diagnosis and working diagnosis for common problems.” This EPA can be broken into smaller tasks based on clinical encounters, such as observable practice activities related to common acute problems or assessment of chronic conditions. The task force outlined several discrete behaviors related to observation of a student’s ability to “generate a differential and working diagnosis for common problems”; each is mapped to an ACGME domain of competence, with a descriptor aligned to each level along the scale of supervision. In actual practice, the scale would be expanded to at least five levels to allow greater discrimination. For each discrete behavior, one or more assessments are suggested. Using information gathered from a variety of assessment tools, evidence can be synthesized to make a decision regarding overall competence. Student advancement committees, analogous to clinical competency committees in GME, could then conduct this review. An alternative or additional approach would be to incorporate a student-generated portfolio for gathering and synthesizing performance information to be reviewed by trained faculty.
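The scale and synthesis step above can be made concrete with a small sketch: an ordered supervision scale and a rule that condenses repeated observations of one EPA into a single overall level. The two upper level names and the lower-median aggregation rule are assumptions for illustration, not recommendations of the task force; in practice a student advancement committee would weigh richer evidence than a summary statistic.

```python
# Minimal, hypothetical sketch of an expanded supervision scale and the
# synthesis of repeated observations into one overall judgment. The first
# three level names paraphrase Chen et al; levels 4-5 are placeholders.
from enum import IntEnum
from statistics import median_low

class SupervisionLevel(IntEnum):
    NOT_ALLOWED_TO_PRACTICE = 1
    ALLOWED_UNDER_FULL_SUPERVISION = 2
    ALLOWED_WITH_ON_DEMAND_SUPERVISION = 3
    # Placeholder gradations (not named in the source):
    ALLOWED_WITH_DISTANT_SUPERVISION = 4
    ALLOWED_TO_SUPERVISE_OTHERS = 5

def overall_level(observations):
    """Condense repeated observations of one EPA into a single level.
    The lower median keeps the judgment conservative: an outlying high
    rating cannot pull the overall level above the typical performance."""
    return SupervisionLevel(median_low(observations))

obs = [SupervisionLevel.ALLOWED_UNDER_FULL_SUPERVISION,
       SupervisionLevel.ALLOWED_WITH_ON_DEMAND_SUPERVISION,
       SupervisionLevel.ALLOWED_WITH_ON_DEMAND_SUPERVISION]
print(overall_level(obs).name)  # ALLOWED_WITH_ON_DEMAND_SUPERVISION
```

The ordered-scale representation also makes the committee review tractable: multiple tools can report on the same scale, so evidence from different encounters is directly comparable.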
Full entrustment with true independence of supervision may not occur in undergraduate training. However, it is hoped that familiarizing students with the language and framework for EPAs and their underlying core competencies will help shape their eventual independent, professional identities. Because all assessments are dependent on context, it may be preferable to add specificity to the domains that are to be evaluated through definition of key observable practice activities supporting achievement of these six Core EPAs, perhaps guided by the “essential clinical encounters” defined by a school’s clerkship (Liaison Committee on Medical Education standard 6.2).17 The true value of a competency-based system is not in being overly prescriptive but, rather, in structuring the assessment of a learner’s readiness for progression and requiring direct observation of measurable behaviors to assess this progression and institute corrective feedback where necessary. Implementation will necessitate adequate faculty development as well as further definition of observable practice activities and assessment methods that might be shared across medical schools and IM clerkships.
Future Opportunities and Challenges
Adopting a CBME system in UME, and specifically applying this approach in IM clerkships, will bring benefits along with challenges. Benefits include using a common language across the continuum of medical education and having a practical and more purposeful approach to creating learning opportunities and direct observation. This commonality facilitates cross-clerkship and cross-institutional opportunities to collaborate in developing better systems for learning and assessment. Challenges are many. Most clerkships are time restricted, without a system for reliable longitudinal assessment. Faculty physicians are constrained by clinical and administrative responsibilities and report limited time for direct observation and feedback.18,19 In addition, the challenges of creating tools to assess competencies may lead to suboptimal design or implementation, as may occur with an overly simplified approach of using behavioral checklists—this may defeat the purpose of a holistic assessment and add to faculty time and workload. An appropriately resourced core faculty educator model, as advocated by Holmboe et al,18 would require all faculty involved in training students and residents to learn and consistently use a core set of competency assessment methods. Carraccio and Englander20 suggest an alternative approach that builds assessments into the routines of daily work of faculty (e.g., with a smartphone application), thereby reducing costs, increasing acceptability, providing validity in the context of an authentic clinical encounter, and improving reliability by frequent assessment, which then becomes part of the culture.
This work has limitations. While the task force achieved its end goal of defining a practical set of EPAs, our survey results are limited by the response rate as well as the creation of a somewhat arbitrary cutoff for representative EPAs. The competencies outlined here do not represent all that should be achieved by the end of the IM clerkship, but they do highlight a plausible subset of IM clerkship EPAs that are observable and can form the basis of a common means of assessment. The IM clerkship would of course still assess professionalism, communication skills, teamwork, critical appraisal of the literature, and related skills, ideally by incorporation into this or a similar EPA framework.
The cutoff of 20 votes for selection of the EPAs was guided by our goal of highlighting a small subset of EPAs that should be considered the principal responsibility of the clerkship and that would be manageable and achievable. This methodology poses drawbacks. Based on the survey data, one could argue that an alternative cutoff might have been set at 16 responses, thereby including five additional competencies (Table 1, competencies 7–11). The inclusion of these additional competencies would undoubtedly elevate their importance for learners; the trade-off would be a diminished ability to develop robust assessment strategies for each, given the relatively short duration of the clerkship, and a potential loss of focus on the other student EPAs deemed of higher importance. One solution might be to include additional EPAs after the top six were successfully incorporated.
There are several challenges to implementation of CBME in UME. Perhaps highest among them is that a competency-based assessment system may understandably be perceived as being in conflict with the notion of assigning grades, which are very often normative in nature. CBME fundamentally reduces performance assessment to a dichotomous scale of having met or not met set standards, eschewing recognition of gradations of performance. At present, residency programs look to grades as meaningful and important discriminators between candidates. How the two paradigms can or should be integrated is a critically important topic for future discussion and research. A switch to CBME will require intensive faculty development, comparable to that required during the deployment of CBME in GME. If indeed CBME can be successfully implemented, with the competence of all graduating medical students (and residents) as the end point, one might question whether it is necessary to continue to provide grades, a topic currently subject to much debate.
A CBME system will be most successful if all clerkships adopt it. Overlaps in EPA assessment from clerkship to clerkship can be beneficial, providing additional evidence and sampling that supports generalizability of the observations. A student may demonstrate different performance in an EPA in one clerkship compared with another, either due to case specificity or an individual’s different strengths and weaknesses. This underscores the need for multiple assessments in a variety of different settings and a final determination of EPA “competence” at the end of the clerkship year. EPAs are unlikely to supplant all other attempts to assess competencies. For example, subject examinations will likely continue to be a tool for knowledge assessment.
Future steps will include the identification of essential clinical encounters common to most IM inpatient clerkships from which to develop more specific observable practice activities and to create tools for assessment and feedback that can be shared across institutions. It will be important to consider how the EPAs outlined by this task force can effectively bridge to those established by the subinternship directors,21 and how they could eventually link back to preclinical courses. Duplicating this process with educators in other clinical disciplines could yield information about the generalizability of this approach across specialties. As this work takes shape, other questions will need to be addressed, such as whether the IM clerkship curriculum15 will require modification to best align with these outcomes, what the best strategies are for learner remediation, and how and when the information generated should be fed forward to other clerkship or course directors. The inherent flexibility of a fully time-independent, learner-centered educational experience may in fact minimize the need for forward feeding of weaknesses, as a student would not move on until those weaknesses had been resolved. One of the biggest challenges to the adoption of CBME thus far has been the lack of consistency and standardization in both the development and assessment of competencies across disciplines.19 It is hoped that the approach outlined here is a step in the right direction in establishing a useful framework across the continuum of IM education as well as that of all UME.
Acknowledgments: The authors wish to acknowledge the Alliance for Academic Internal Medicine staff for their help in creating an online survey, as well as in survey distribution, collection, and data entry. The authors also wish to thank the Clerkship Directors in Internal Medicine (CDIM) Survey and Scholarship Committee for assistance with data extraction and methods, and CDIM membership for responding to the survey.
1. Cooke M, Irby DM, O’Brien BC, Shulman LS. Educating Physicians: A Call for Reform of Medical School and Residency. 2010. San Francisco, CA: Jossey-Bass.
2. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547.
3. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—Rationale and benefits. N Engl J Med. 2012;366:1051–1056.
4. Association of American Medical Colleges. Core Entrustable Professional Activities for Entering Residency: Curriculum Developers Guide. 2014. Washington, DC: Association of American Medical Colleges. https://www.aamc.org/cepaer. Accessed June 29, 2017.
5. Powell DE, Carraccio C, Aschenbrener CA. Pediatrics redesign project: A pilot implementing competency-based education across the continuum. Acad Med. 2011;86:e13.
6. Hauer KE, Boscardin C, Fulton TB, Lucey C, Oza S, Teherani A. Using a curricular vision to define entrustable professional activities for medical student assessment. J Gen Intern Med. 2015;30:1344–1348.
7. Chen HC, van den Broek WE, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–436.
8. Warm EJ, Mathis BR, Held JD, et al. Entrustment and mapping of observable practice activities for resident assessment. J Gen Intern Med. 2014;29:1177–1182.
9. Chen HC, McNamara M, Teherani A, ten Cate O, O’Sullivan P. Developing entrustable professional activities for entry into clerkship. Acad Med. 2016;91:247–255.
16. Hauer KE, Kohlwes J, Cornett P, et al. Identifying entrustable professional activities in internal medicine training. J Grad Med Educ. 2013;5:54–59.
17. Liaison Committee on Medical Education. Functions and structure of a medical school. http://www.lcme.org/publications. Published June 2016. Accessed June 29, 2017.
18. Holmboe ES, Ward DS, Reznick RK, et al. Faculty development in assessment: The missing link in competency-based medical education. Acad Med. 2011;86:460–467.
19. Hawkins RE, Welcher CM, Holmboe ES, et al. Implementation of competency-based medical education: Are we addressing the concerns and challenges? Med Educ. 2015;49:1086–1102.
20. Carraccio CL, Englander R. From Flexner to competencies: Reflections on a decade and the journey ahead. Acad Med. 2013;88:1067–1073.
21. Vu TR, Angus SV, Aronowitz PB, et al; CDIM-APDIM Committee on Transitions to Internship (CACTI) Group. The internal medicine subinternship—Now more important than ever: A joint CDIM-APDIM position paper. J Gen Intern Med. 2015;30:1369–1375.