Competency-based medical education (CBME) has been broadly accepted as a conceptual model for undergraduate medical education (UME), though implementation challenges persist. 1,2 Entrustable professional activities (EPAs) have been proposed as a framework to facilitate CBME implementation. 3,4 Aiming to advance CBME and better prepare students for residency, the Association of American Medical Colleges (AAMC) convened a group in 2013 “to define the professional activities that every resident should be able to do without direct supervision on day 1 of residency, regardless of specialty.” 5 This group defined a set of 13 Core EPAs for Entering Residency 5 (shown in Table 1). The AAMC subsequently commissioned 10 U.S. MD-granting medical schools to pilot implementation of the Core EPAs. 6 With the Core EPAs as a shared set of outcome competencies, the pilot schools developed a set of guiding principles for implementation. 6 These principles and their alignment with Van Melle’s 5 core components of CBME (outcome competencies, progressive sequencing, tailored learning experiences, competency-focused instruction, and programmatic assessment) 7 are shown in Table 2. Programmatic assessment has been described as “an integral approach to the design of an assessment program with the intent to optimize its learning function, its decision-making function and its curriculum quality-assurance function” 8 in which “routine information about the learner’s competence and progress is continually collected, analyzed, and, where needed, complemented with purposively collected additional assessment information, with the intent to both maximally inform the learner and their mentor and allow for high-stakes decisions at the end of a training phase.” 9
The guiding principles for the entrustment process in the Core EPAs pilot map closely to programmatic assessment, calling for summative entrustment decisions to be made by a trained group through compilation of longitudinal performance assessments from multiple assessors, including ad hoc supervision 4 or coactivity 10 judgments. In the pilot, these ad hoc judgments were documented through workplace-based assessments (WBAs), generally completed within a day after direct observation in the workplace. Lomis and colleagues described the pilot’s entrustment process objective: “When the entering class of 2015 approaches graduation in 2019, each school will pilot making summative entrustment decisions for each of its students in accordance with the guiding principles…. These decisions are meant to inform the feasibility of implementing the Core EPAs construct in … UME programs. As such, they will be theoretical in nature and will therefore not be included in the medical student performance evaluations or shared with residency programs.” 6
Core EPAs pilot schools developed trained entrustment groups (TEGs) to function as entrustment committees and make theoretical determinations about readiness to be entrusted to perform each EPA under indirect supervision. 11 These TEGs are not officially designated as entrustment committees at their schools and did not have a prescribed or uniform structure. Nonetheless, they are modeled conceptually on graduate medical education (GME) Clinical Competency Committees (CCCs). 12,13 Holistic longitudinal review of assessment data for all UME students has been uncommon; instead, promotion committee structures typically focus on students for whom problems have been identified. 14 However, competency committees that review assessment data for all learners at specified timepoints in their progression are being adopted in some UME settings. 15–17 Core EPAs pilot TEGs were charged with analyzing available assessment data and making determinations about readiness for indirect supervision for those EPAs implemented at their schools (each school in the pilot implemented at least 4 EPAs, and some schools implemented all 13).
This study was an evaluation of the outcomes of the pilot entrustment process for the first cycle of theoretical summative entrustment decision making; it updates and expands upon prior progress reports regarding the pilot entrustment process. 11,18 To evaluate the extent to which pilot schools were able to compile sufficient data to engage in summative entrustment decision making for the class of 2019 graduates, we conducted a multi-institutional study to address the following questions:
- To what extent were pilot schools able to make theoretical decisions about readiness to perform EPAs under indirect supervision for their graduating students?
- Were there differences across EPAs in the extent to which decisions about readiness could be made?
- On an EPA-specific basis, what volume of WBAs was available for making entrustment decisions?
TEG data collection
A uniform set of 3 items was recorded for each student for each EPA-specific instance of theoretical entrustment decision making (Box 1): the readiness determination (1 of 4 choices: ready for indirect supervision, progressing but not yet ready, not progressing toward readiness, or could not make a decision about readiness); the confidence of the TEG in making that determination; and the number of WBAs available for that determination. In addition, 1 item was recorded per student regarding the overall evidence of trustworthiness—a term used by the Core EPAs pilot schools to refer explicitly to the development of daily work habits of conscientiousness (fulfilling one’s commitments), discernment (knowing one’s limits), and truthfulness (communicating honestly) 19 (see Supplemental Digital Appendix 1, at https://links.lww.com/ACADMED/B146). Each school also summarized (at the school level) the types of data available to the TEG, and the perceived usefulness of the data for entrustment decision making. The individual/student-level dataset at each school was de-identified and transmitted to the AAMC.
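The per-student, per-EPA record described above can be sketched as a simple data structure. The field names, types, and category labels below are illustrative assumptions drawn from the text (Box 1 and the trustworthiness definitions), not the pilot's actual data dictionary:

```python
# Sketch of the record implied by the TEG data collection description.
# Field names and codes are illustrative assumptions, not the pilot's
# actual data dictionary.
from dataclasses import dataclass
from typing import Optional

READINESS_CHOICES = (
    "ready for indirect supervision",
    "progressing but not yet ready",
    "not progressing toward readiness",
    "could not make a decision",
)

@dataclass
class EPADetermination:
    student_id: str           # de-identified before transmission to the AAMC
    epa: int                  # 1-13
    readiness: str            # one of READINESS_CHOICES
    with_confidence: Optional[bool]  # TEG's confidence in the determination
    n_wbas: int               # number of WBAs available for this determination

# One additional item was recorded per student (not per EPA): the overall
# evidence of trustworthiness (e.g., grounded trust, presumptive trust,
# questioned trust, distrust, or no rating).
```

A school-level summary of data types and their perceived usefulness was collected separately, outside this student-level structure.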
All participating schools’ TEG data were merged into a single database of individual-level records for analysis. We report aggregate results and results of chi-square analysis that tested the significance of associations between variables. Two-sided P values < .05 were considered significant. All analyses were performed using Stata statistical software, version 15 (StataCorp, College Station, Texas).
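As a minimal illustration of the chi-square test of association described above (the study itself used Stata 15), the following Python sketch applies `scipy.stats.chi2_contingency` to a hypothetical contingency table of determination counts for two EPAs. The counts are invented for illustration and are not the study data:

```python
# Minimal sketch of a chi-square test of association between EPA and
# determination category. Counts below are hypothetical, not study data.
from scipy.stats import chi2_contingency

# Rows: two hypothetical EPAs; columns: determination categories
# (ready, progressing but not yet ready, not progressing, could not decide)
counts = [
    [164, 30, 4, 22],   # hypothetical EPA A
    [0, 25, 0, 100],    # hypothetical EPA B
]

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# Per the Methods, a two-sided P < .05 would be considered significant.
```

With counts this disparate, the test would flag a significant association, analogous to the across-EPA variation in determination distributions reported in the Results.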
This work emerged out of the entrustment workgroup of the Core EPAs Pilot and the pilot’s program evaluation objectives related to the entrustment process and its outcomes. The authors are all directly involved in the Core EPAs pilot in various roles. Two are employed by the AAMC, which convened the Core EPAs pilot. The remaining 10 are all faculty members of pilot teams involved in some way in the entrustment process at their respective schools, including 3 faculty members at schools included in the current study and 7 faculty members at schools not included in the current study. The authors approached this project from the perspective of program evaluation for process improvement.
Core EPAs implementation time frames differed across the 10 pilot schools. Six schools that implemented the Core EPAs framework for the matriculating class of 2015, intending to attempt entrustment decision making for some or all of their class of 2019 graduates, were eligible for inclusion in this study. Four schools that first implemented the Core EPAs framework for their matriculating classes of 2016 or 2017, intending to initially attempt entrustment decision making subsequent to 2019, were not eligible for study inclusion. Two of 6 schools eligible for study inclusion had technical challenges in data consolidation that precluded study participation. The remaining 4 eligible schools participated in this study. TEG structures and processes at these 4 schools are shown in Supplemental Digital Appendix 2, at https://links.lww.com/ACADMED/B146.
Theoretical EPA-specific readiness for indirect supervision entrustment determinations
The 4 schools together made 2,415 theoretical EPA-specific readiness for indirect supervision entrustment determinations (hereafter, determinations) for 349 students. As shown in Table 1, data used in making these determinations included WBAs, end-of-clerkship assessments, objective structured clinical examinations (OSCEs), simulation, and narrative data. Each school used different combinations of data. WBAs, OSCEs, and narrative data, available for numerous EPAs, were generally considered very helpful; simulation data were limited but were considered helpful. Overall, availability of WBAs was low, as shown in Supplemental Digital Appendix 3, at https://links.lww.com/ACADMED/B146. There were no WBAs available for 64% (1,536/2,415) of all determinations; conversely, more than 10 WBAs were available for only 10% (238/2,415) of all determinations. WBAs were generally available for EPA2, differential diagnosis; EPA6, oral presentation; and EPA9, interprofessional collaboration. On a more limited basis, WBAs were available for EPA1, history and physical examination; EPA3, tests; EPA4, orders/prescriptions; EPA5, documentation; EPA7, evidence-based medicine; and EPA8, handovers. Few or no WBAs were available for EPA10, urgent care; EPA11, consent; EPA12, procedures; and EPA13, safety.
Table 3 shows the distribution of the 2,415 determinations. The numbers of determinations varied across EPAs from 100 to 349, reflecting distribution of the EPAs across schools and the number of students considered by each TEG. Two EPAs (EPA1 and EPA6) alone accounted for 28% (673/2,415) of all determinations. Of all 2,415 determinations, 28% (685/2,415) were that the “TEG could not make an entrustment decision.” The remaining 72% (1,730/2,415) involved the TEG making a decision about readiness for entrustment for indirect supervision, including 41.3% (997/2,415) “ready”; 23.1% (558/2,415) “progressing but not yet ready”; and 7.2% (175/2,415) “evidence against progress toward readiness.” Also shown in Table 3, distribution of these determination choices varied (P < .001) across EPAs. “TEG could not make an entrustment decision” proportions ranged from 11% (22/203, EPA5) to 80% (100/125, EPA13); “ready for entrustment” proportions ranged from 0% (0/125, EPA13) to 75% (164/220, EPA7); “progressing but not yet ready” proportions ranged from 11% (25/229, EPA12) to 56% (56/100, EPA4); and “evidence against progress” proportions ranged from 0% (0/125, EPA13) to 23% (23/100, EPA10).
Supplemental Digital Appendix 4, at https://links.lww.com/ACADMED/B146, shows the association between the availability of WBAs and entrustment decision making for each EPA; for most EPAs, WBA availability (any vs none) was associated (each P < .05) with the TEG making an entrustment decision as opposed to “could not make an entrustment decision.” No consistent relationships were observed across EPAs between numbers of WBAs available and the type of entrustment decisions made (ready, progressing but not yet ready, not progressing; data not shown).
Of all 997 EPA-specific “ready for entrustment” decisions, 71% (710/996) were made by the TEG with confidence (confidence data missing for 1/997), comprising 29% (710/2,415) of all determinations. As shown in Figure 1, the proportion of all determinations that were confident “ready for entrustment” decisions varied across EPAs from 0% (0/220, EPA11; 0/229, EPA12; 0/125, EPA13) to 64% (207/323, EPA6).
Evidence of trustworthiness
Types of data considered for the trustworthiness item are summarized in Supplemental Digital Appendix 2, at https://links.lww.com/ACADMED/B146. TEGs evaluated the evidence for overall trustworthiness (i.e., development of daily work habits of conscientiousness, discernment, and truthfulness) for 349 students as follows: consistent evidence supporting trustworthiness (grounded trust), n = 168 (48%); limited data about trustworthiness, but no trustworthiness concerns identified (presumptive trust), n = 124 (36%); minor concerns about the evidence of trustworthiness (questioned trust), n = 22 (6%); and significant concerns about the evidence of trustworthiness (distrust), n = 5 (1%). Conflicting/vague data resulted in no rating of the evidence of trustworthiness for 14 (4%) students; no trustworthiness data were available for 16 (5%) students.
EPAs explicitly orient CBME around questions of trust. 20,21 Developing an institutional “intention to trust” 22 in UME requires collecting, organizing, visualizing, analyzing, and interpreting relevant data. 9,11 For this study, we compiled and analyzed the first year of multischool TEG data after 6 pilot schools implemented EPA-related curriculum, assessments, and a theoretical entrustment process for students in the 2019 graduating class. The 4 schools included in this multi-institutional study were able to implement this process and arrive at entrustment decisions (of “ready for indirect supervision,” “progressing but not yet ready for indirect supervision,” or “not progressing toward readiness for indirect supervision”) for a majority (1,730/2,415; 72%) of all EPA-specific sets of data considered on an individual (student) basis. For the remaining EPA-specific sets of data (28% of all EPA-specific data sets considered), no entrustment decision could be made, generally due to insufficient data. These findings varied considerably across EPAs, reflecting ongoing limitations of data quality and/or quantity that were particularly pronounced for some EPAs. For most of the Core EPAs pilot schools, systems for programmatic assessment were not in place at the start of the pilot and have not yet been fully implemented. Assessment data systems, analytics, and dashboards are critical and complex to develop. These results demonstrate that the effort to implement comprehensive systems for programmatic assessment of the Core EPAs remains a work in progress at these pilot schools.
EPAs conceptually center competency assessment in the workplace, with explicit reliance on WBAs. 22 Implementation of WBAs, a novel assessment strategy, was challenging for the schools in this study for several reasons. Frequent written workplace feedback requires a culture change for students and faculty and carries administrative and technical burdens. 11,23 Pilot schools required or encouraged students to obtain a certain number of WBAs, but WBAs were typically implemented in a formative, student-driven fashion. There are substantial questions about the psychometric reliability of these WBAs, 24,25 and, in our study, the number of WBAs available for TEG review was comparatively low 26,27 and varied markedly across EPAs.
We also found differences across EPAs in the extent to which a theoretical entrustment decision could be made (vs a determination of “unable to make a decision”) and in the nature of the decisions that could be made. Overall, the highest proportions of “ready” determinations were observed for a subgroup of EPAs that have traditionally been broadly assessed (in various ways, not necessarily within a CBME framework) throughout preclinical and clinical UME curricula, 28 especially EPAs 1, 5, 6, 7, and 9.
While the pilot’s guiding principles emphasized WBAs, TEGs in our study made numerous “ready for entrustment” decisions for EPA1 and EPA5 despite a relative paucity of WBAs for these EPAs (Supplemental Digital Appendix 3, at https://links.lww.com/ACADMED/B146), as the TEGs were able to incorporate a variety of other assessment modalities (Table 1) to arrive at these decisions. These 2 EPAs (EPAs 1 and 5), along with EPA6, were classified at the “reporter” level in the reporter–interpreter–manager–educator (RIME) framework 29; the remaining EPAs were classified at the interpreter, manager, or educator levels in the framework. 29 Of these remaining EPAs, only EPA9 appeared to be assessed with some frequency in the clinical workplace (i.e., by WBAs) in our study; for EPAs 3, 4, 7, 8, and 10–13, there were no WBAs available for over 50% of students considered for each of these EPAs. Other investigators have also reported variability across EPAs in the feasibility of assessment in the clinical workplace. 17,26 Collectively, these preliminary findings suggest that there may be a disconnect between current clinical roles for students at many medical schools, including schools in our study, and a subset of the Core EPAs. 28 While continuity has the potential to facilitate the entrustment process, 17,30 none of our pilot schools has a longitudinal integrated clerkship.
Trustworthiness concerns were noted by TEGs for 27 students (8% of 349), including 5 students (1% of 349) for whom major concerns were noted. Due to the formative nature of the pilot and policies in place to meet requirements of the Family Educational Rights and Privacy Act, the pilot TEGs did not have access to all the data that are generally made available to schools’ promotions committees. Nonetheless, longitudinal compilation of EPA assessment data and explicit assessment of elements of trustworthiness may illuminate issues that are not clearly identified elsewhere.
This study has numerous important limitations. As it represents an evaluation of the first cycle of a formatively implemented pilot, we do not yet have evidence regarding the validity or reliability of the WBAs included in our study or of the decisions made by TEGs. Thus, the data we report should be primarily useful for identifying ways to improve implementation of CBME and EPA assessment and to further elucidate challenges within such an approach. Furthermore, our study includes only 4 of the 10 participating schools in the Core EPAs pilot; thus, our findings may not generalize to other schools implementing EPAs in an entrustment framework.
Data from this first formative cycle of theoretical summative entrustment decision making at 4 schools participating in the Core EPAs pilot project demonstrate that schools can gather and review multimodal student data longitudinally to make decisions regarding their graduating students’ readiness for indirect supervision on key clinical activities. These data also highlight the work still to be done, particularly regarding the volume of WBAs and the availability of other assessment data. The variability of our findings across EPAs also likely reflects multifactorial challenges in assessing these activities in the workplace, including exposure, faculty development, and medical students’ readiness, among others.
It has been noted for CCCs in GME that having a group review existing data and assign a milestone rating can play an important role in program evaluation 10,11 and can prompt the introduction of additional curricular elements or new approaches to assessment. For the 4 schools in this study, the act of compiling and evaluating assessment data longitudinally by a trained faculty group has provided a valuable new opportunity to generate a broad overview of each student’s progress and of the overall educational program.
Next steps identified to enhance the entrustment process at the Core EPAs pilot schools include improving the amount and quality of data available from a range of sources, developing effective strategies to increase WBA volume on an EPA-specific basis, and improving faculty development and data management processes. The Core EPAs framework has informed curriculum development and expectation setting, prompted formative feedback to learners, and initiated a valuable process of gathering longitudinal assessment data of student progress in these activities.
Based on the work to date of the Core EPAs pilot schools, entrustment to perform all 13 Core EPAs under indirect supervision as a prerequisite to graduation remains aspirational. Based on our pilot schools’ experiences to date, it seems most feasible for a subset of EPAs that are well represented in current medical student roles; this subset could include EPAs 1, 2, 5–7, and 9. Entrustment for EPAs 3, 4, and 8 was more difficult for our schools to determine; we speculate that contributory factors include lack of current training and assessments, lack of opportunities for students to do these tasks authentically under gradually lesser degrees of direct supervision in the workplace, and limited opportunities to observe students performing these activities in the clinical setting. Finally, this first year of pilot data suggests that entrustment for EPAs 10–13 based on assessment of workplace performance alone may not be widely feasible in UME; for these EPAs, reliance on additional data beyond WBAs, such as simulation, may be appropriate. These observations may also indicate a need for significant changes in our medical education systems to be able to entrust graduating students to perform these EPAs under indirect supervision at the start of residency.
EPAs TEG Entrustment Dataset for Graduating Class of 2019, From a Multi-Institutional Study of Theoretical Entrustment Decisions, 2019
The authors wish to thank Alison Whelan, MD, Chris Hanley, and Beatrice Schmider of the Association of American Medical Colleges (AAMC), and all of the members of the Core Entrustable Professional Activities for Entering Residency pilot for their support, inspiration, and contributions to this report. All participating Core EPAs pilot institutions and individuals can be found at https://www.aamc.org/initiatives/coreepas/pilotparticipants/.
1. Hall AK, Rich J, Dagnone JD, et al. It’s a marathon, not a sprint: Rapid evaluation of competency-based medical education program implementation. Acad Med. 2020;95:786–793.
2. Hauer KE. Seeking trust in entrustment: Shifting from the planning of entrustable professional activities to implementation. Med Educ. 2019;53:752–754.
3. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176–1177.
4. Chen HC, van den Broek WE, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–436.
5. Englander R, Flynn T, Call S, et al. Toward defining the foundation of the MD degree: Core Entrustable Professional Activities for Entering Residency. Acad Med. 2016;91:1352–1358.
6. Lomis K, Amiel JM, Ryan MS, et al.; AAMC Core EPAs for Entering Residency Pilot Team. Implementing an entrustable professional activities framework in undergraduate medical education: Early lessons from the AAMC Core Entrustable Professional Activities for Entering Residency pilot. Acad Med. 2017;92:765–770.
7. Van Melle E, Frank JR, Holmboe ES, Dagnone D, Stockley D, Sherbino J; International Competency-based Medical Education Collaborators. A core components framework for evaluating implementation of competency-based medical education programs. Acad Med. 2019;94:1002–1009.
8. van der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve tips for programmatic assessment. Med Teach. 2015;37:641–646.
9. Schuwirth L, van der Vleuten C, Durning SJ. What programmatic assessment in medical education can learn from healthcare. Perspect Med Educ. 2017;6:211–215.
10. Gofton WT, Dudek NL, Wood TJ, Balaa F, Hamstra SJ. The Ottawa Surgical Competency Operating Room Evaluation (O-SCORE): A tool to assess surgical competence. Acad Med. 2012;87:1401–1407.
11. Moeller JJ, Warren JB, Crowe RM, et al. Developing an entrustment process: Insights from the AAMC Core EPA pilot. Med Sci Educ. 2020;30:395–401.
12. Kinnear B, Warm EJ, Hauer KE. Twelve tips to maximize the value of a clinical competency committee in postgraduate medical education. Med Teach. 2018;40:1110–1115.
13. Pack R, Lingard L, Watling C, Cristancho S. Beyond summative decision making: Illuminating the broader roles of competence committees. Med Educ. 2020;54:517–527.
14. Hauer KE, Chesluk B, Iobst W, et al. Reviewing residents’ competence: A qualitative study of the role of clinical competency committees in performance assessment. Acad Med. 2015;90:1084–1092.
15. Monrad SU, Mangrulkar RS, Woolliscroft JO, et al. Competency committees in undergraduate medical education: Approaching tensions using a polarity management framework. Acad Med. 2019;94:1865–1872.
16. Keeley MG, Gusic ME, Morgan HK, Aagaard EM, Santen SA. Moving toward summative competency assessment to individualize the postclerkship phase. Acad Med. 2019;94:1858–1864.
17. Andrews JS, Bale JF Jr, Soep JB, et al.; EPAC Study Group. Education in Pediatrics Across the Continuum (EPAC): First steps toward realizing the dream of competency-based education. Acad Med. 2018;93:414–420.
18. Brown DR, Warren JB, Hyderi A, et al.; AAMC Core Entrustable Professional Activities for Entering Residency Entrustment Concept Group. Finding a path to entrustment in undergraduate medical education: A progress report from the AAMC Core Entrustable Professional Activities for Entering Residency Entrustment Concept Group. Acad Med. 2017;92:774–779.
19. Kennedy TJ, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83(suppl 10):S89–S92.
20. Holzhausen Y, Maaz A, Cianciolo AT, ten Cate O, Peters H. Applying occupational and organizational psychology theory to entrustment decision-making about trainees in health care: A conceptual model. Perspect Med Educ. 2017;6:119–126.
21. Duijn CCMA, Welink LS, Bok HGJ, Ten Cate OTJ. When to trust our learners? Clinical teachers’ perceptions of decision variables in the entrustment process. Perspect Med Educ. 2018;7:192–199.
22. Ten Cate O, Schwartz A, Chen HC. Assessing trainees and making entrustment decisions: On the nature and use of entrustment-supervision scales. Acad Med. 2020;95:1662–1669.
23. Geraghty JR, Ocampo RG, Liang S, et al.; Core Entrustable Professional Activities for Entering Residency Pilot Program. Medical students’ views on implementing the Core EPAs: Recommendations from student leaders at the Core EPAs Pilot institutions. Acad Med. 2021;96:193–198.
24. Ryan MS, Richards A, Perera R, et al. Generalizability of the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE) scale to assess medical student performance on Core EPAs in the workplace: Findings from one institution. Acad Med. 2021;96:1197–1204.
25. Cutrer WB, Russell RG, Davidson M, Lomis KD. Assessing medical student performance of Entrustable Professional Activities: A mixed methods comparison of Co-Activity and Supervisory Scales. Med Teach. 2020;42:325–332.
26. Murray KE, Lane JL, Carraccio C, et al.; Education in Pediatrics Across the Continuum (EPAC) Study Group. Crossing the gap: Using competency-based assessment to determine whether learners are ready for the undergraduate-to-graduate transition. Acad Med. 2019;94:338–345.
27. Warm EJ, Held JD, Hellmann M, et al. Entrusting observable practice activities and milestones over the 36 months of an internal medicine residency. Acad Med. 2016;91:1398–1405.
28. Colbert-Getz JM, Lappe K, Northrup M, Roussel D. To what degree are the 13 Entrustable Professional Activities already incorporated into physicians’ performance schemas for medical students? Teach Learn Med. 2019;31:361–369.
29. Meyer EG, Kelly WF, Hemmer PA, Pangaro LN. The RIME Model provides a context for entrustable professional activities across undergraduate medical education. Acad Med. 2018;93:954.
30. Hirsh DA, Ogur B, Thibault GE, Cox M. “Continuity” as an organizing principle for clinical education reform. N Engl J Med. 2007;356:858–866.