In 2007, ten Cate and Scheele proposed using entrustable professional activities (EPAs) to refocus competency-based medical education (CBME).1 EPAs are “professional activities that together constitute the mass of critical elements that operationally define a profession.”1 Since then, the EPA framework for assessment has been widely described as advantageous for competency-based graduate medical education with the potential to impact every facet of medical education from curricular design to graduation competencies.2–7
In 2013, the Association of American Medical Colleges (AAMC) used consensus groups to develop 13 Core Entrustable Professional Activities for Entering Residency (Core EPAs).8 Shortly after the Core EPAs were released, an editorial supporting the use of EPAs in undergraduate medical education (UME) was published.9 The Core EPAs define the key tasks that graduates from U.S. MD-granting medical schools should be able to carry out without direct supervision when they start residency.8,10 In addition to identifying the 13 Core EPAs (Box 1), the group developed extensive descriptions for each EPA as well as a curricular guide.11 A longitudinal pilot program at several U.S. medical schools examining the feasibility of using the Core EPAs started in 2015.12 Concurrently, there was widespread interest in the Core EPAs, with over 60% of all articles on EPAs in UME mentioning the Core EPAs in the first 3 years after their introduction.13
Box 1
Association of American Medical Colleges’ Core EPAs
Shortly following the release of the Core EPAs, concerns about whether they met the definition of an EPA (as initially conceived by ten Cate and Scheele1) were raised.14–16 ten Cate noted that most of the Core EPAs were problematic due to one or more of the following concerns: some did not stand alone as activities, the proposed levels of supervision (i.e., pre-entrustable vs entrustable) were not appropriate to the nature of the tasks being assessed, some were too vague, and some were too complex.14 Another commentary further detailed potential problems with several Core EPAs, elaborating on ten Cate’s concerns.15 More recently, there have also been concerns that the binary pre-entrustable versus entrustable assessment that was originally recommended for the Core EPAs by the AAMC did not capture the developmental nature of medical education, prompting a recommendation that the Core EPAs be reevaluated before committing to them.16 In response to these and other concerns, the AAMC Core EPA curricular guide now includes an entrustment scale that uses 4 developmental stages.17,18
Given the widespread interest in adopting the Core EPAs,13 it is essential that their construction allows them to advance the goal of EPA-based assessment: operationalizing competencies so that assessment can predict future performance.1,19,20 Therefore, it is paramount that investigations examine the extent to which the Core EPAs effectively operationalize the EPA construct and are a feasible approach to CBME.1 Determining whether the Core EPAs meet the intent of EPAs can be facilitated by several recently published appraisal tools.19,21 These tools can assist in evaluating the Core EPAs’ alignment with the key characteristics (referred to as domains in this study) of the EPA construct and inform researchers and educators on how they might be improved.
The aim of our current study was to have subject matter experts evaluate the Core EPAs with the EQual rubric19 to determine if revisions were required and, if applicable, how to focus revision efforts. These efforts sought to identify whether there were any potentially problematic Core EPAs that may need to be reexamined before educators consider implementing them.
Method
We selected the EQual rubric19 to evaluate the Core EPAs, as this rubric represents the key domains of EPAs as defined in the relevant literature.1,2,20,22–24 That is, the EQual rubric’s 14 questions evaluate the 3 domains of EPAs: EPAs as discrete units of work (tasks that can be observed separately from other tasks); EPAs as entrustable, essential, and important tasks of the profession (tasks that can be entrusted to some degree to a student and that define the profession of medicine); and EPAs’ curricular role (tasks that could guide curricular efforts). Example items include19:
This EPA has a clearly defined beginning and end (discrete units of work);
This EPA describes work that is essential and important to the profession (entrustable, essential, and important tasks of the profession); and
This EPA requires the application of knowledge, skills, and/or attitudes acquired through training (curricular role).
The complete rubric is included in Supplemental Digital Appendix 1 (at https://links.lww.com/ACADMED/A978). Taylor and colleagues used a modified Angoff approach to determine an overall cutoff score of 4.07, below which an EPA was deemed insufficient in key EPA domains and thus possibly in need of revision.19 Unpublished cutoff scores for each domain of the EQual rubric (4.17, 4.00, and 4.00 for discrete units of work; entrustable, essential, and important tasks of the profession; and curricular role, respectively) were confirmed with an EQual rubric author who was also an author of this study (D.R.T.). Taylor and colleagues achieved excellent reliability with 4 reviewers (phi coefficient of 0.84).19
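For illustration only, the following minimal sketch (not part of the study; all scores and the variable names are hypothetical) shows how a single Core EPA’s mean EQual scores could be checked against the overall and domain-specific cutoffs described above.

```python
# Hypothetical illustration of applying the EQual cutoffs; not the study's code.
EQUAL_CUTOFFS = {
    "overall": 4.07,
    "discrete unit of work": 4.17,
    "entrustable, essential, important task": 4.00,
    "curricular role": 4.00,
}

# Hypothetical mean EQual scores for one Core EPA, averaged across reviewers
example_epa_scores = {
    "overall": 3.98,
    "discrete unit of work": 3.85,
    "entrustable, essential, important task": 4.10,
    "curricular role": 4.25,
}

# Flag any overall or domain score that falls below its cutoff
flags = {
    domain: score < EQUAL_CUTOFFS[domain]
    for domain, score in example_epa_scores.items()
}
print(flags)  # True marks a score suggesting the EPA may need revision
```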
A recently presented poster25 used network analysis to determine the 10 most published and cited authors in the field of EPAs. We invited these top 10 published and cited authors to participate as subject matter experts in our study. Our goal was to have at least 4 expert reviewers complete the evaluation to achieve a reliability similar to that found in the initial trial of the EQual rubric.19 One of the 10 experts piloted our online evaluation (see below and Supplemental Digital Appendix 1 at https://links.lww.com/ACADMED/A978) to confirm feasibility. No changes were recommended.
After gaining approval from the Uniformed Services University of the Health Sciences Human Research Protections Program (PSY-DB-18001), we delivered the EQual rubric to the subject matter experts electronically. These experts had a 6-month window (December 2018–May 2019) in which to complete the evaluation. A 15-minute orientation video on the EQual rubric26 was embedded into the first page of the evaluation system. Each subsequent page included 1 of the 13 Core EPAs and the EQual rubric. At the top of each page, the Core EPA was described using the appropriate excerpt from the AAMC’s Curriculum Developers Guide.11 For item 6 of the EQual rubric, “This EPA is clearly distinguished from other EPAs in the framework,” we included a list of all 13 Core EPAs for reference. In addition to the EQual rubric’s 14 questions, we included 3 prompts: “Do you think this EPA requires revision?,” “Why?,” and “How would you recommend fixing it?” The first prompt was limited to a “yes/no” response. The other 2 prompts were free text.
We calculated descriptive statistics for the EQual rubric scores for each of the 13 Core EPAs overall and for the 3 domains: discrete units of work; entrustable, essential, and important tasks of the profession; and curricular role. Core EPAs that received an overall average score below the cutoff (4.07) were noted. The number of experts recommending revision for each Core EPA was also noted and compared with its performance on the EQual rubric relative to the overall cutoff. Reliability between these 2 approaches was calculated using Cohen’s kappa. Furthermore, consistent with previous work,19 we determined the reliability of the EPA evaluations with a generalizability study (G-study) in which the Core EPAs, the reviewers, the rubric items, and the interactions among these 3 variables (facets) were analyzed using a fully crossed design. Descriptive statistics were calculated using IBM SPSS Statistics 25 (IBM Corp., Armonk, New York). G-study results were calculated using G_String_V version 1.1.4 (McMaster University, Hamilton, Ontario, Canada).
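As a concrete illustration of the agreement analysis described above, the sketch below uses hypothetical data only (the study itself used SPSS and G_String, and this sketch does not implement a G-study). It flags Core EPAs by the overall EQual cutoff, flags them by a majority of expert revision votes, and computes Cohen’s kappa between the two binary classifications.

```python
# Hypothetical illustration of the cutoff-vs-majority agreement check; not the study's analysis code.
import numpy as np

OVERALL_CUTOFF = 4.07
rng = np.random.default_rng(0)

# Hypothetical ratings: 13 Core EPAs x 6 reviewers x 14 EQual items
ratings = rng.uniform(3.0, 5.0, size=(13, 6, 14))
overall_means = ratings.mean(axis=(1, 2))       # mean score per Core EPA
below_cutoff = overall_means < OVERALL_CUTOFF   # flag 1: below EQual cutoff

# Hypothetical yes/no revision votes from the 6 experts for each Core EPA
votes = rng.integers(0, 7, size=13)
majority_revision = votes >= 4                  # flag 2: majority recommends revision

def cohens_kappa(a, b):
    """Cohen's kappa for two binary classifications of the same items."""
    a, b = a.astype(int), b.astype(int)
    p_obs = np.mean(a == b)                     # observed agreement
    p_exp = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())  # chance agreement
    return (p_obs - p_exp) / (1 - p_exp)

print(round(cohens_kappa(below_cutoff, majority_revision), 3))
```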
Finally, based on recent recommendations on how to analyze free-text survey comments,27 responses to why and/or how a Core EPA should be revised were simply summarized, rather than analyzed as stand-alone qualitative data. These summaries were completed by 2 authors (E.G.M., D.R.T.) for any Core EPA that did not meet a cutoff score or for which the majority of experts recommended revision.
Results
Seven of the 10 EPA experts we contacted agreed to participate; 1 piloted the evaluation and 6 ultimately completed the evaluation, resulting in 1,092 evaluations (6 reviewers × 14 EQual questions × 13 Core EPAs). The G-study analysis demonstrated that 19% of the variance came from the Core EPA being evaluated, rather than from the raters or the items in the EQual rubric, and revealed excellent reliability (phi coefficient of 0.80) with 6 reviewers.
The overall score for the majority of the Core EPAs (9/13) was above the EQual rubric’s overall cutoff of 4.07, indicating that they align with the key domains of the EPA construct (Table 1). The remaining 4 Core EPAs—EPAs 2 (DDx), 7 (form questions & find answers), 9 (collaborate), and 13 (safety & improvement; see Box 1 for abbreviation definitions)—scored below the overall cutoff, suggesting that they may require revision. These data are further visualized in Figure 1, where each expert’s score for each Core EPA is portrayed relative to the overall cutoff (4.07).
Table 1: Overall and Domain-Specific Average Scores for the 13 Core EPAs From 6 Subject Matter Expert Reviewers’ Evaluations Using the EQual Rubric,19 2018–2019a
Figure 1: Visualization of 6 individual subject matter expert reviewers’ evaluations of the Core Entrustable Professional Activities for Entering Residency (Core EPAs) using their overall score (bars R1–R6) for each Core EPA on the EQual rubric,19 2018–2019. The average overall score from all 6 expert reviewers (average bar) is also shown for each Core EPA. The 0 line on the vertical axis indicates a score of 4.07, the overall cutoff score. Each tick mark on the vertical axis represents a 0.5-unit increase or decrease from this overall cutoff score (e.g., 0.5 indicates a score of 4.57, −0.5 indicates a score of 3.57).
A majority (n ≥ 4) of the 6 experts felt that most (9/13) of the Core EPAs did not require major revision (it is important to note that these were not necessarily the same 9 Core EPAs that scored above the overall cutoff). A majority (n ≥ 4) of experts thought that Core EPAs 6 (oral pres), 7 (form questions & find answers), 9 (collaborate), and 13 (safety & improvement) did require revision. The agreement between Core EPAs scoring below the overall cutoff (4.07) and a majority (n ≥ 4) of reviewers reporting the need for revision was “very good” as calculated by Cohen’s kappa, which was 0.831 (95% confidence interval: 0.518–1.000). The only 2 Core EPAs for which the experts’ EQual rubric evaluations did not line up with the majority opinion on revision were Core EPAs 2 (DDx) and 6 (oral pres). Core EPA 2’s (DDx’s) overall score (4.05) was just below the EQual rubric cutoff (4.07); however, few (n = 2) experts recommended revising this Core EPA. Core EPA 6’s (oral pres’s) overall score was 4.38, which was above the EQual rubric cutoff; however, a majority of experts (n = 4) recommended revision. Reasons for the discrepancies regarding these 2 Core EPAs were discernible in the free-text comments (see below). Different experts appeared to conceptualize these tasks differently for learners at various stages of training and/or understood them as potentially being nested. For example, experts who thought of Core EPA 6 in the context of early, preclerkship learners who were learning how to give an oral presentation did not think the task was specific to the practice of medicine. Conversely, experts who thought of Core EPA 6 in the context of a clerkship learner did not see the task as discrete and separate from other tasks, viewing the oral presentation as a means to an end. These differences in conceptualization likely contributed to the variation within and between raters’ evaluations.
The experts’ domain-specific scores (Table 1) for Core EPAs 2 (DDx), 3 (rec & int tests), 7 (form questions & find answers), 9 (collaborate), and 13 (safety & improvement) were below the cutoff for discrete units of work (4.17). The experts’ domain-specific scores for Core EPAs 7 (form questions & find answers), 9 (collaborate), and 13 (safety & improvement) were below the cutoff for entrustable, essential, and important tasks of the profession (4.00), and their domain-specific score for Core EPA 9 (collaborate) was below the cutoff for curricular role (4.00).
The free-text comments for the 6 Core EPAs that scored below the overall or domain-specific cutoffs or for which a majority of experts recommended revision are summarized in Table 2. All of the free-text comments are available in Supplemental Digital Appendix 2 (at https://links.lww.com/ACADMED/A979). Several of the comments referenced more than just the specific Core EPA being evaluated. For example, experts recommended that Core EPA 2 (DDx) might be combined with Core EPA 1 (H&P) into a larger EPA on clinical care. It was also suggested that Core EPAs 2 (DDx) and 7 (form questions & find answers) require Core EPA 5 (document) or 6 (oral pres) in order to be observed or entrusted. Furthermore, there was broad consensus that giving an oral presentation (Core EPA 6) is not an EPA by itself and that it may just be a method for assessing other EPAs. There were also concerns that Core EPAs 9 (collaborate) and 13 (safety & improvement) were not discrete units of work and could not be entrusted, with some experts concluding that these were closer to competencies than EPAs.
Table 2: Summary of the Free-Text Comments From 6 Subject Matter Expert Reviewers Regarding Core EPAs 2, 3, 6, 7, 9, and 13,a 2018–2019
There was moderate disagreement between individual experts on the overall scores of Core EPAs 2 (DDx) and 7 (form questions & find answers; Figure 1). This disagreement was also reflected in the experts’ free-text comments: there were differing opinions on whether Core EPAs 2 (DDx) and 7 (form questions & find answers) were appropriate stand-alone tasks. Experts who viewed EPAs as activities limited to the workplace, for example, thought these tasks required nesting within larger EPAs, as thinking and learning are not restricted to the profession of a physician. Similar differences were also present in the recommendations for improving Core EPA 3 (rec & int tests): nesting it within a larger EPA versus splitting it into smaller subcomponents.
Furthermore, the comments suggested concern that, because the Core EPAs were originally constructed for a binary entrustment system (“pre-entrustable” vs “entrustable”), problems may linger in the Core EPAs themselves. Specifically, in a system where a student can only be pre-entrustable or entrustable, limitations had to be built into the descriptions of the Core EPAs. For example, the description of Core EPA 2 (DDx) allows a student to offer a working diagnosis, but team members must always endorse and verify it. While this limitation enables a binary entrustment scale, it undermines the definition of the task.
Discussion
The AAMC Core EPAs represent a significant effort to reform UME and to provide better quality assurance, standardization of training outcomes, increased reliability in workplace-based assessment, and improved patient safety.10 To achieve these lofty aims, the Core EPAs must be rigorously defined and feasible for implementation. We asked 6 subject matter experts to evaluate the Core EPAs using the EQual rubric19 to determine their degree of alignment with established key domains of the EPA construct. Adhering to these key domains is important to current efforts to implement EPAs in UME as an approach to CBME. Evaluating the Core EPAs against these standards helps inform educators of whether they are ready to be used for assessing learners.
Our work reinforced the importance of incorporating such an evaluation before implementing EPAs. Our results highlight that most (9/13) of the Core EPAs performed well, with a majority of experts agreeing that most did not require revision (as noted above, these were not necessarily the same Core EPAs as the 9 that scored above the overall cutoff). These results are reassuring, given the number of medical schools currently relying on the Core EPAs as part of their own efforts to transition to a competency-based system of assessment. The results also indicate that experts felt 6 of the Core EPAs (2, 3, 6, 7, 9, 13) may struggle to meet the intent of the EPA construct and would benefit from revision.
Core EPAs 2 (DDx), 3 (rec & int tests), 7 (form questions & find answers), 9 (collaborate), and 13 (safety & improvement) do not appear to be discrete units of work
This finding aligns with ten Cate’s analysis for Core EPAs 2, 3, and 9,14 and Tekian’s analysis for Core EPAs 9 and 13.15 One reason our experts had this concern was the perception that some of these activities are only observable through other work activities. For example, developing a differential diagnosis (Core EPA 2), recommending and interpreting tests (Core EPA 3), and forming questions and finding answers (Core EPA 7) can only be observed in a student’s written documentation (Core EPA 5) or oral presentation (Core EPA 6). Conversely, other experts felt that developing a differential diagnosis (Core EPA 2) and forming questions and finding answers (Core EPA 7) were something that preclerkship learners might do outside of a workplace environment, potentially as part of a case-based learning activity.28 Another concern our experts noted was that some of these activities were combinations of tasks—forming questions is a different task than finding answers (Core EPA 7), and identifying system failures is different than contributing to a culture of safety (Core EPA 13)—thus, they recommended focusing each on a single aspect. Lastly, there was a concern that collaborating (Core EPA 9) and contributing to a culture of safety (Core EPA 13) are competencies, not discrete tasks.
Recommended solutions to make these Core EPAs more discrete included splitting them into unique components (e.g., Core EPA 3 could be split into 2 EPAs, 1 for recommending tests and another for interpreting tests), nesting them within larger EPAs (e.g., Core EPA 7 as a precursor to Core EPAs 2, 3, and 4), or reworking them to focus on descriptions of discrete tasks (e.g., Core EPA 13 could be reworked to focus on reporting specific patient safety events).
Core EPAs 7 (form questions & find answers), 9 (collaborate), and 13 (safety & improvement) may not be entrustable, essential, and important tasks of the practice of medicine
These concerns appeared to spill over from the perception that these Core EPAs were not discrete tasks, so it was difficult to imagine entrusting them. For example, one cannot entrust a competency, which is what some experts felt Core EPAs 9 and 13 were. The experts’ concerns also appeared to stem from the fact that these Core EPAs are not specific to physicians but could legitimately be performed by anyone. The broad nature of these Core EPAs also appears to make it difficult to determine when a student is entrustable versus pre-entrustable, as the tasks associated with these Core EPAs span a wide range of developmental stages in UME.29 Our experts’ concern here may indicate that the AAMC’s 2017 update30 to a 4-level entrustment system is not broadly known, that it is considered insufficient, and/or that the impacts of developing the Core EPAs under the original pre-entrustable versus entrustable system still need to be accounted for in the Core EPAs themselves. To make these Core EPAs more entrustable, experts recommended that the descriptions focus on tasks specific to physicians that students can be allowed to do at different levels of supervision and/or that they be nested in larger EPAs.
Core EPA 9 (collaborate) does not appear to have a clear curricular role
This finding differed from previous criticism, which claimed that Core EPA 9 was only a curricular objective.15 That criticism likely reflects the fact that teaching students to collaborate is integral to any medical school curriculum; as our experts noted in their free-text comments, however, Core EPA 9 lacks sufficient clarity to effectively guide task-based assessment or professional entrustment within a curriculum.
Expert opinions
Our participants are established scholars in the area of EPAs and contributed significant expertise to this study. Their shared understanding was reflected in their high level of agreement overall. Notably, though, certain EPAs elicited stark differences in responses. Disagreement between experts was most notable for Core EPAs 2 (DDx) and 7 (form questions & find answers; Figure 1). Additionally, some experts felt the Core EPAs could be improved by being split into component parts, others thought they needed to be merged with other Core EPAs, and others felt that they should be nested by stage of training. These differences illustrate that the concept of EPAs is still maturing and that there is not yet a single shared mental model for EPAs, even among experts. What is reassuring about this maturing process is that Core EPAs 1 (H&P), 4 (enter & disc orders), 5 (document), 8 (pt handoff), 10 (urgent care), 11 (informed consent), and 12 (general procedures), which had previously evoked critical comments,14 were not found by our participants to have serious shortcomings. This does not mean that these Core EPAs could not be improved, but they do appear to meet the intent of the EPA construct.
Limitations
Our study had several limitations. Our results are limited by the fact that the EQual rubric was originally designed for EPAs in graduate medical education settings,1 while the Core EPAs are designed for UME.8 Additionally, despite our best efforts to provide neutral prompts, the free-text comments generally appeared to be negatively biased toward the Core EPAs. For example, there were few free-text comments defending positive evaluations of the Core EPAs. There were, however, many free-text comments recommending ways to improve the Core EPAs, even those that performed well or that experts felt did not need revision. This negative bias may also be an artifact of the survey structure, in which comments were solicited immediately after experts indicated whether a Core EPA needed revision.
Another important consideration in interpreting our findings is that not all aspects of the UME curriculum are best captured using EPAs. While criticism that a certain Core EPA is not entrustable or discrete or lacks a curricular role may simply indicate that the Core EPA struggles to meet the intent of the EPA construct, it may also indicate that the underlying content does not translate well into EPAs. For example, professionalism, collaborating, clinical reasoning, and medical ethics are all important to the practice of medicine and yet are not discrete activities, nor are they entrustable in and of themselves. These examples represent important curricular elements in UME that must be taught and assessed but may be better left outside of the EPA framework.
Conclusions
Our study provides evidence with good reliability that most of the AAMC’s Core EPAs are well constructed and well aligned with the EPA construct; as such, they represent a promising initial framework of EPAs for UME. However, some of the Core EPAs, as they are currently defined, might benefit from revision. Notably, when experts were asked how to revise the Core EPAs, a heterogeneity of perspectives emerged. This may point to the need for broad participation in future work focused on making consensus recommendations for Core EPA improvements. The differences in opinion between the experts who participated in our study likely did not reflect differences in their understandings of the EPA construct. Instead, they are likely differences in their opinions on how EPAs will need to be operationalized to meet the intent of the EPA construct. This reflects the larger, ongoing academic debate about the EPAs, which has moved beyond being pro-EPA or anti-EPA to more fruitful questions of why and how EPAs function as essential and discrete units of work that are entrustable to learners and have a curricular role. The process of improving the Core EPAs is a continuation of this important discussion and represents an opportunity to incorporate other EPAs proposed for UME,31,32 other methods for assessment,33 nesting of EPAs,20 and the implications of assessing EPAs outside of the workplace.32 Such efforts are necessary if the Core EPAs are to standardize outcomes for medical school graduates.34
Acknowledgments:
Special thanks to the 6 subject matter experts who generously volunteered their time to evaluate the Core Entrustable Professional Activities (Core EPAs).
References
1. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547
2. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176–1177
3. Hauer KE, Soni K, Cornett P, et al. Developing entrustable professional activities as the basis for assessment of competence in an internal medicine residency: A feasibility study. J Gen Intern Med. 2013;28:1110–1114
4. Lockyer J, Carraccio C, Chan MK, et al.; ICBME Collaborators. Core principles of assessment in competency-based medical education. Med Teach. 2017;39:609–616
5. Englander R, Carraccio C. A lack of continuity in education, training, and practice violates the “do no harm” principle. Acad Med. 2018;93(3 suppl):S12–S16
6. Gruppen LD, ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced requirements for assessment in a competency-based, time-variable medical education system. Acad Med. 2018;93(3 suppl):S17–S21
7. Powell DE, Carraccio C. Toward competency-based medical education. N Engl J Med. 2018;378:3–5
8. Association of American Medical Colleges. The Core Entrustable Professional Activities (EPAs) for Entering Residency. https://www.aamc.org/what-we-do/mission-areas/medical-education/cbme/core-epas. Accessed April 21, 2020.
9. Hirsh DA, Holmboe ES, ten Cate O. Time to trust: Longitudinal integrated clerkships and entrustable professional activities. Acad Med. 2014;89:201–204
10. Englander R, Carraccio C. Core Entrustable Professional Activities for Entering Residency (CEPAER). Presented at: 2014 Association of Pediatric Program Directors Fall Meeting; September 17–19, 2014; Arlington, VA.
11. Flynn T, Call S, Carraccio C, et al. Core Entrustable Professional Activities for Entering Residency: Curriculum Developers’ Guide. Washington, DC: Association of American Medical Colleges; 2014.
12. Lomis KD, Ryan MS, Amiel JM, Cocks PM, Uthman MO, Esposito KF. Core Entrustable Professional Activities for Entering Residency pilot group update: Considerations for medical science educators. Med Sci Educ. 2016;26:797–800
13. Meyer EG, Chen HC, Uijtdehaage S, Durning SJ, Maggio LA. Scoping review of entrustable professional activities in undergraduate medical education. Acad Med. 2019;94:1040–1049
14. ten Cate O. Trusting graduates to enter residency: What does it take? J Grad Med Educ. 2014;6:7–10
15. Tekian A. Are all EPAs really EPAs? Med Teach. 2017;39:232–233
16. Krupat E. Critical thoughts about the Core Entrustable Professional Activities in undergraduate medical education. Acad Med. 2018;93:371–376
17. Lomis KD, Obeso VT, Whelan AJ. Building trust in entrustment: Pursuing evidence-based progress in the Core Entrustable Professional Activities for Entering Residency. Acad Med. 2018;93:341–342
18. Obeso V, Brown D, Phillipi C. Core Entrustable Professional Activities for Entering Residency: Toolkits for the 13 Core EPAs. Washington, DC: Association of American Medical Colleges; 2017.
19. Taylor DR, Park YS, Egan R, et al. EQual, a novel rubric to evaluate entrustable professional activities for quality and structure. Acad Med. 2017;92(11 suppl):S110–S117
20. ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using Entrustable Professional Activities (EPAs): AMEE guide no. 99. Med Teach. 2015;37:983–1002
21. Post JA, Wittich CM, Thomas KG, et al. Rating the quality of entrustable professional activities: Content validation and associations with the clinical context. J Gen Intern Med. 2016;31:518–523
22. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645
23. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR; International CBME Collaborators. The role of assessment in competency-based medical education. Med Teach. 2010;32:676–682
24. ten Cate O, Billett S. Competency-based medical education: Origins, perspectives and potentialities. Med Educ. 2014;48:325–332
25. Maggio L, ten Cate B, Chen HC. Charting the flow of ideas in medical education: A social network analysis of entrustable professional activities. Paper presented at: Association for Medical Education in Europe (AMEE) 2018; August 25–29, 2018; Basel, Switzerland.
26. YouTube. EQual rubric training video for EPA evaluation. www.youtube.com/watch?v=yQZuWdzkQKM. Accessed April 21, 2020.
27. LaDonna KA, Taylor T, Lingard L. Why open-ended survey questions are unlikely to support rigorous qualitative insights. Acad Med. 2018;93:347–349
28. ten Cate O, Hoff RG. From case-based to entrustment-based discussions. Clin Teach. 2017;14:385–389
29. Meyer EG, Kelly WF, Hemmer PA, Pangaro LN. The RIME model provides a context for entrustable professional activities across undergraduate medical education. Acad Med. 2018;93:954
30. Obeso V, Brown D, Phillipi C, et al., eds. Core Entrustable Professional Activities for Entering Residency: Toolkits for the 13 Core EPAs—Abridged. Washington, DC: Association of American Medical Colleges; 2017. https://www.aamc.org/system/files/c/2/484778-epa13toolkit.pdf. Accessed April 21, 2020.
31. Chen HC, McNamara M, Teherani A, Cate OT, O’Sullivan P. Developing entrustable professional activities for entry into clerkship. Acad Med. 2016;91:247–255
32. ten Cate O, Graafmans L, Posthumus I, Welink L, van Dijk M. The EPA-based Utrecht undergraduate clinical curriculum: Development and implementation. Med Teach. 2018;40:506–513
33. Chen HC, van den Broek WE, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–436
34. Lomis K, Amiel JM, Ryan MS, et al.; AAMC Core EPAs for Entering Residency Pilot Team. Implementing an entrustable professional activities framework in undergraduate medical education: Early lessons from the AAMC Core Entrustable Professional Activities for Entering Residency Pilot. Acad Med. 2017;92:765–770