Research Reports

Developing End-of-Training Entrustable Professional Activities for Psychiatry: Results and Methodological Lessons

Young, John Q. MD, MPP, PhD; Hasser, Caitlin MD; Hung, Erick K. MD; Kusz, Martin; O’Sullivan, Patricia S. EdD; Stewart, Colin MD; Weiss, Andrea MD; Williams, Nancy MD

doi: 10.1097/ACM.0000000000002058

Numerous reports from foundations, professional associations, and government agencies indicate that the current health care system, including our system of education and training of health care professionals, is failing to meet the public’s needs.1–4 In response, reforms in medical education have led to the adoption of competency-based education.5 Accordingly, accreditation bodies have expanded their focus from structure and process measures (e.g., time-based rotations) to include educational outcomes.6,7 For example, in 2002, the Accreditation Council for Graduate Medical Education (ACGME) made graduation from residency contingent on demonstrated competence in the now-familiar six core domains. Similar reforms took place in other countries, such as Canada and The Netherlands (the CanMEDS framework) and the United Kingdom (Tomorrow’s Doctors).8,9

Once the outcomes had been defined, the ACGME added a developmental dimension to the framework. Competencies were created for each of the six core competency domains. Then, for each competency, the residency review committees (RRCs) crafted a series of milestones, that is, a logical progression of behaviors achieved over time as the learner progresses toward and beyond the threshold for unsupervised practice. Beginning in 2013, the Next Accreditation System required program directors to report twice a year on each resident’s progress with regard to every milestone.10

However, implementation of this competency/milestone framework has encountered significant challenges, including logistical (e.g., structural models that accommodate flexible learning pathways with asynchronous advancement or promotion) and theoretical (e.g., clinical competence may not be represented by the sum of all the competencies, and competencies may not exist as attributes separate from their clinical context and content).5,11,12 Many have argued that the units of assessment (i.e., milestones or competencies) are too numerous, too granular, and/or too abstract for educators to meaningfully evaluate.9,13,14

The concept of entrustable professional activities (EPAs) was introduced as educators and researchers struggled to overcome these challenges.15,16 EPAs are units of professional practice (e.g., performing a diagnostic assessment) that are essential to the profession and require the simultaneous integration of multiple knowledge, skill, and attitude competencies.17 For purposes of assessment, this activity is the unit of observation. EPAs must be executable within a given time frame, observable, measurable, and suitable for focused entrustment decisions—that is, the task can be entrusted to the trainee for unsupervised execution once sufficient competence is reached.18 Taken together, all the EPAs for a given specialty define its core identity.19

EPAs have been developed for many specialties, including anesthesiology, ambulatory practice, family medicine, gastroenterology, geriatric medicine, hematology and oncology, internal medicine, pediatrics, OB-GYN, psychiatry, pulmonary and critical care, and rheumatology.18,20–25 Several types of expert consensus methodologies have been used, including task forces, interviews, and surveys. To date, there are only two published efforts on EPAs in psychiatry. In 2012, the Royal Australian and New Zealand College of Psychiatrists implemented EPAs.26 These EPAs emerged from a context different from that in the United States and were based on an expert survey with a response rate of less than 18%.25 A second publication described how a single U.S. residency program used EPAs identified by rotation leaders as the basis for assessment.24 Recognizing the potential for end-of-training EPAs in the United States, the Executive Council of the American Association of Directors of Psychiatry Residency Training (AADPRT) created the EPAs for Psychiatry Task Force. The council charged the task force with developing proposed EPAs that every graduating resident should be able to perform without supervision. The task force employed a rigorous, multistage process to develop EPAs that were essential, clear, and representative. The purpose of this article is twofold: to demonstrate an innovative and rigorous methodology for the development of EPAs and to describe the proposed EPAs for psychiatry.

Method

The chair (J.Q.Y.) and the president of AADPRT chose task force members based on their experience and expertise in assessment. The six task force members represent five different programs, public and private, located in San Francisco; Iowa City; Washington, DC; and New York City. The task force employed a three-stage process from May 2014 to February 2017 (Table 1; description follows).

Table 1: Methods Used to Develop Proposed End-of-Training EPAs for Psychiatry, 2014–2017

Stage 1: Initial EPAs: Task force group consensus process

We reviewed existing and emerging literature on competency-based training and EPAs.1,3,6,8,9,16,18,20,25,27–41 We then adopted ten Cate’s37 definition of an EPA: an essential activity of the profession that is confined to qualified personnel; is independently executable within a time frame; and is observable, measurable, and entrustable. We focused on end-of-training EPAs relevant to a resident graduating from a general adult psychiatry program, regardless of whether the EPA was specific to psychiatry (e.g., provide supportive psychotherapy) or common to all specialties (e.g., manage transitions in care). Following recommendations to limit the number of EPAs, we set out to develop approximately 10 to 20 distinct EPAs.18

On the basis of this working definition of an EPA, the psychiatry milestones, EPAs drafted by other specialty groups, and our own experience, each task force member drafted his or her own list of EPAs. The chair compiled these lists. We then met via teleconference for 90 minutes once or twice a month and used a consensus-driven process over multiple iterative rounds to eliminate items that did not meet the definition of an EPA and to combine overlapping items. When items were combined, the original distinct concepts were retained in the description of the EPA. We also incorporated feedback on the initial draft gathered from participants in a national workshop for program directors (AADPRT Annual Meeting, March 2015).

Stage 2: Refining the EPAs: Consultations with nonpsychiatrist experts in EPAs

We held teleconference consultations with each of four nonpsychiatry experts who represented three countries (United States, Canada, The Netherlands), three disciplines (pediatrics, internal medicine, and education), and both undergraduate and graduate medical education. In advance of each consultation, we provided each expert with our draft list of EPA titles and descriptions; a summary of our methodology; and a series of questions that focused on whether each proposed EPA met the criteria for an EPA, was clear and of reasonable scope, and was consistent with the other EPAs and with EPAs developed by other specialties. As a group, we modified our EPAs after each consultation.

Stage 3: Finalizing the EPAs: Delphi study with national experts

We defined experts as current members of AADPRT’s Executive Council and members of the ACGME’s Psychiatry Milestones Work Group. We invited 39 people, and 31 accepted. To ensure that each participant had a basic understanding of what an EPA is and how it relates to competencies and milestones, we required that each participant watch a four-minute video we created and read a one-page summary. Once a respondent attested to having completed these two tasks, we used the Delphi technique to develop consensus. This method surveys a group of experts and then provides feedback on the group’s mean response; the process repeats until consensus (e.g., 80% or 0.8 agreement) is reached.42,43 All survey data were deidentified before analysis.
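To make the round structure concrete, the short sketch below (in Python) illustrates the per-round consensus check for a single item. The 1-to-5 essentialness scale and the 0.8 agreement convention come from the description above; the ratings, the round_summary helper, and the choice to define agreement as the proportion of experts rating the item 4 or 5 are illustrative assumptions, not the task force’s actual instrument.

```python
# Minimal sketch of one Delphi round's consensus check (illustrative only).
from statistics import mean

def round_summary(ratings, threshold=0.80):
    """Summarize one Delphi round for a single item.

    ratings: 1-5 essentialness ratings from the expert panel.
    Returns the group mean (fed back to respondents before the next round)
    and whether the panel has reached consensus, defined here as the
    proportion of experts rating the item 4 or 5 meeting the threshold.
    """
    agreement = sum(r >= 4 for r in ratings) / len(ratings)
    return {
        "group_mean": round(mean(ratings), 2),
        "agreement": round(agreement, 2),
        "consensus": agreement >= threshold,
    }

# Example: 31 hypothetical experts rating one proposed EPA.
print(round_summary([5] * 14 + [4] * 11 + [3] * 4 + [2] * 2))
# {'group_mean': 4.19, 'agreement': 0.81, 'consensus': True}
```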

Round 1.

We used REDCap Survey Software (copyright 2010 Vanderbilt University) to construct and automate the survey. The survey asked demographic questions and then listed each EPA (title, description, and detailed specifications). Participants rated each EPA for its essentialness (five-point scale; none, low, medium, high, and very high) and clarity (five-point scale; very poor, poor, neither poor nor good, somewhat good, and very good). In addition, respondents recorded suggested changes to the title and description and made additional comments. At the end of the survey, respondents were asked to indicate whether they thought each EPA was “just right,” “too narrow,” or “too broad” in scope. If they answered “too narrow,” they were asked to indicate which EPA it should be subsumed under or to suggest an alternate EPA. Similarly, if they answered “too broad,” they were asked to indicate which EPAs it should be split into, choosing among the other proposed EPAs or suggesting an alternate EPA. Two task force members categorized every comment and resolved the few differences by consensus.

Round 2.

We distributed the revised titles and descriptions to the participants along with the group’s Round 1 mean essentialness rating for each EPA, the individual respondent’s own Round 1 rating, and a summary of the changes that had been made. Respondents re-rated each EPA for essentialness on the same scale as in Round 1. A single comment box solicited written feedback on each EPA. All 31 original respondents completed Round 2. For each item, we calculated the content validity index (CVI)—the proportion of respondents who assigned a high (4) or very high (5) essentialness score.44 We also calculated the asymmetric confidence interval (ACI) associated with each item’s mean essentialness rating.45 The ACI is a more conservative estimate that protects against the artificial narrowing of the confidence interval that can occur with skewed data. For an item to be included in the final list, we followed the convention that its CVI must be 0.80 or greater44,46 and that the lower end of its ACI must not fall below 4.0.47–49 CVIs and ACIs were calculated with Excel (2013, Microsoft Corp., Redmond, Washington).
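To show how the two criteria interact, the sketch below (in Python) computes the CVI exactly as defined above and an asymmetric interval for the mean rating. The interval here rescales the 1-to-5 mean to the unit interval, applies a Wilson-style score interval, and maps the limits back to the rating scale; this particular formulation, the hypothetical ratings, and the 95% confidence level are assumptions standing in for the cited score method, whereas the 0.80 CVI cutoff and the 4.0 ACI floor come from the text.

```python
# Illustrative only: hypothetical ratings; the Wilson-style rescaling is a
# stand-in for the cited score method (Penfield), and the 95% level is an
# assumed convention. The 0.80 CVI cutoff and 4.0 ACI floor are from the text.
import math

Z = 1.96  # assumed two-sided 95% critical value

def cvi(ratings):
    """Content validity index: proportion of experts rating the item 4 or 5."""
    return sum(r >= 4 for r in ratings) / len(ratings)

def aci(ratings, scale_min=1, scale_max=5, z=Z):
    """Asymmetric confidence interval for the mean of a bounded rating item.

    Rescales the mean to a proportion-like value in [0, 1], applies the
    Wilson score interval, and maps the limits back to the rating scale.
    Near the ends of the scale the interval is asymmetric, which guards
    against the artificially narrow symmetric intervals that skewed
    ratings can produce.
    """
    n = len(ratings)
    width = scale_max - scale_min
    p = (sum(ratings) / n - scale_min) / width
    denom = 1 + z ** 2 / n
    center = (p + z ** 2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z ** 2 / (4 * n ** 2))
    return (scale_min + width * (center - half), scale_min + width * (center + half))

def include(ratings, cvi_cut=0.80, aci_floor=4.0):
    """Apply both criteria: CVI of at least 0.80 and ACI lower bound of at least 4.0."""
    return cvi(ratings) >= cvi_cut and aci(ratings)[0] >= aci_floor

# Example: 31 hypothetical Round 2 essentialness ratings for one EPA.
ratings = [5] * 20 + [4] * 8 + [3] * 3
print(round(cvi(ratings), 2), [round(x, 2) for x in aci(ratings)], include(ratings))
```

In this hypothetical example the item clears the CVI cutoff (about 0.90), but its ACI lower bound (about 3.9) falls short of 4.0, so it would be excluded; this is the sense in which the ACI serves as the more conservative of the two criteria.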

Results

Stage 1

The process of eliminating items that did not meet the definition of an EPA and of lumping or splitting overlapping items resulted in our list decreasing from 52 to 24 proposed EPAs. For each item, we agreed on a title; a description (brief narrative); a list of the relevant functions/tasks and the applicable contexts (e.g., clinical setting, type of clinical problem); supporting literature; and the relevant competencies.

Stage 2

The consultations with nonpsychiatrist experts in EPAs yielded 14 proposed EPAs. The reduction stemmed from either lumping together several EPAs (e.g., we broadened an EPA from a focus on violence assessment to management of a psychiatric emergency) or eliminating an item because it more closely met the definition of a competency than that of an EPA (e.g., recognizing and addressing knowledge gaps).

Stage 3

Thirty-one of the 39 invited experts (79.5%) agreed to participate in the Delphi study. One hundred percent of those who agreed completed both rounds. Delphi study participants were, on average, board certified for more than 23 years. Participants were geographically distributed and reported substantial experience in resident supervision (e.g., 44% had more than 20 years’ experience) and in local and national leadership roles in psychiatric education and assessment. For example, more than 85% reported a prior leadership role in the national program directors association.

Round 1 of the Delphi study.

Numeric results (mean and standard deviation for essentialness, clarity, and scope) and comments were tallied for each EPA. Nearly all comments addressed one of four issues:

  • Phrasing (e.g., change “therapeutic frame” to “professional boundaries” or change “behavioral health emergencies” to “psychiatric emergencies”)
  • Scope (e.g., too large or too small—diagnostic interview should be split into separate EPAs by clinical setting such as inpatient, ambulatory, and emergency)
  • Implementation (e.g., how to assess)
  • Content (e.g., “engagement in quality improvement is not an essential aspect of psychiatry”)

The task force decided by consensus whether and how to respond to each comment. We eliminated one EPA (obtain informed consent) because the Round 1 results convinced us that this activity was nested within multiple other EPAs and rarely took place in isolation from them; we therefore made it an explicit task within several other EPAs. This left 13 proposed EPAs for Round 2 of the Delphi study.

Round 2 of the Delphi study.

Ten of the final 13 EPAs met both inclusion criteria (Table 2). One of the remaining EPAs (provide cognitive behavioral therapy) had a CVI of at least 0.8, but the lower end of its ACI fell below 4.0. Two additional EPAs (provide psychodynamic psychotherapy and apply quality improvement methodologies to one’s patient panel or clinical service) met neither inclusion criterion. For each EPA, the process generated a detailed description (title, narrative description, list of functions/tasks, and scope), supporting references (see Supplemental Digital Appendix 1 at https://links.lww.com/ACADMED/A512), and a mapping to the relevant competencies (see Table 2 and Supplemental Digital Appendix 2 at https://links.lww.com/ACADMED/A513).

Table 2: Delphi Survey Results, From a Study of End-of-Training Psychiatry EPA Development, 2014–2017a

Discussion

In this article, we describe the development of end-of-training EPAs for psychiatry. Our multistage methodology employed consensus decision making by a six-member national task force. The results incorporate input from task force members via iterative deliberation; from program directors via a workshop at a national meeting; from four international nonpsychiatrist experts in EPAs via interview; and from 31 national psychiatric experts in residency training and assessment via a Delphi study.

Strengths

Our methodology included multiple validity-enhancing strategies relevant to future EPA-related research in any specialty (List 1). First, prior studies have tended to rely on a single methodology: expert meetings and task forces,8,20,22,25,30,31 survey methods (e.g., Delphi studies),21,34,50 or interviews of experts.51 A few studies have incorporated two of these methods.34,52 Our study incorporated all three types of input and did so from a national perspective. In addition, we obtained input from experts outside of the specialty, which helped us identify the “hidden assumptions” that we collectively held but that were not necessarily true. Second, our inclusion criteria were more stringent than those of most other studies published to date. In addition to the frequently used CVI threshold of 0.8,34 we employed a second criterion: the lower end of the ACI must not fall below 4.0. This protects against the impact of skewness (a non-normal distribution) on the sample mean.45,49

List 1

Validity-Enhancing Strategies Relevant to Future Studies, From a Study of End-of-Training Psychiatry EPA Development, 2014–2017

  • Employed multiple methods, including:
    • Consensus-driven, iterative group process for a longitudinal task force
    • National Delphi survey of 31 experts with 100% response rate
    • Input from nonspecialty experts
    • Input from program directors at a national meeting
  • Provided frame-of-reference training (video and short article) to experts prior to participation in Delphi survey
  • Stringent inclusion criteria—accounted for the influence of skewness with use of asymmetric confidence interval
  • Explicitly acknowledged and permitted local adaptation based on unique values and needs
  • Identified the lump/split dimensions for local experimentation:
    • Disease/syndrome
    • Setting of care
    • Patient acuity
    • Patient complexity
    • Treatment modality (e.g., class of medication)
  • Considered differentiating between core end-of-training and potential core posttraining EPAs
  • Considered differentiating among core national, core local, elective, and aspirational EPAs

Abbreviation: EPA indicates entrustable professional activity.

Third, we did not differentiate EPAs by the setting of care (e.g., inpatient vs. ambulatory vs. emergency department), disease (e.g., schizophrenia vs. major depression), patient complexity (e.g., simple vs. complex or number of problems), or patient acuity (routine vs. urgent vs. emergent). This decision enabled us to reduce the total number of EPAs, but it may make our proposed EPAs too broad. Defining the best unit of observation for assessment purposes remains an open question in medical education.53 Programs may need to identify narrower EPAs nested within these broader EPAs.8,18,54 Recently published research has introduced the concept of observable practice activities—discrete activities (e.g., develop a safe discharge plan) that are nested within a given EPA (e.g., manage transitions in care) and that are more easily assessable.53–55 At least for now, the decision of whether and how best to split the broader EPAs (e.g., conduct a diagnostic evaluation or manage patients’ psychiatric conditions with medications) rests with subsequent research and with local programs, guided by the values and needs particular to their contexts. We argue that any national assessment framework should define EPAs in terms broad enough that local programs have sufficient flexibility to adapt them to their values, context, and priorities.

Fourth, we enhanced the validity of our findings by requiring participants in the Delphi study to watch a video and read a short primer that defined and differentiated EPAs, competencies, and milestones. This helped to establish a common frame of reference for our experts.

Finally, input from various stages highlighted the importance of permitting local contexts to adapt and adopt EPAs as dictated by their values and needs. These EPAs may be described as “local-core,” “emerging,” or “aspirational.” For example, telepsychiatry and collaborative care represent increasingly important features of health systems organized around population health.56 However, many programs are situated in settings that do not (currently) provide such services. Similarly, certain psychotherapies such as dialectical behavioral therapy, interpersonal therapy, or family therapy are important, evidence-based treatments. A program could choose to make such a treatment modality a required EPA at the local level. Differentiating “national-core” EPAs from “local-core,” “elective,” “emerging,” or “aspirational” EPAs helps incorporate activities that are important but not (yet) essential from a national perspective while also acknowledging legitimate local and regional variation in practice.

Implications

The results have significant implications for psychiatry. Results for the quality improvement, cognitive behavioral therapy, and psychodynamic therapy EPAs did not meet our threshold for inclusion. There are several possible explanations for why the quality improvement EPA did not meet the inclusion threshold. This EPA may be “on the rise” as the specialty continues to gain appreciation for the role of future psychiatrists in this domain. If true, one might expect this EPA to be perceived as core sometime in the near future. Alternatively, quality improvement may not represent a core EPA for psychiatry in general, even though in some care delivery systems this may be an essential activity. If true, quality improvement would then be a good example of an elective or potentially “local-core” EPA. Finally, this EPA may not represent a task sufficiently discrete to be amenable to entrustment decisions.18,57 It may be unclear what an entrustment decision actually permits in the workplace with respect to quality improvement. If true, quality improvement, although important, may best be addressed outside of the EPA framework or it may be a set of skills that is nested within other EPAs. These remain open questions.

The failure of two psychotherapy EPAs (cognitive behavioral and psychodynamic psychotherapy) to reach the inclusion threshold likely reflects several factors. First, the provision of psychotherapy is becoming a less prominent feature of psychiatric practice.58 Second, for various reasons, the proportion of training time available for psychotherapy has decreased, making it more difficult for residents to obtain competence for independent practice by the end of training. As a practical matter, residents may need additional training in psychotherapy after residency should they want to obtain competence. There is agreement that all residents should have exposure to these practices and should know when to refer a patient for them. The disagreement centers on whether all residents should obtain “competence for independent practice” by the end of training or whether graduation might require a lower threshold such as entrustment for indirect supervision.59 Core, noncore, and elective EPAs have been proposed in other specialties (e.g., radiology in The Netherlands) and in undergraduate medical education.18,31,32 Similarly, the cognitive behavioral and the psychodynamic psychotherapy EPAs could represent “elective” rather than “core” EPAs that either individual trainees could pursue on their own (including after the conclusion of residency) or a local program could choose to require. Of note, the psychiatry RRC program requirements include the demonstration of competence in the application of these two psychotherapies, and as such, programs may still choose to use the EPA framework to assess competency. The psychotherapy EPAs remind us that the continuum of medical education includes the additional competencies obtained after training during professional practice.

Ten of the proposed EPAs received strong support and represent the core activities of a modern psychiatrist in which residents should obtain competence by the end of training. Taken together, these EPAs form the basis for a valid framework of competency-based assessment in psychiatry and can assist program directors with determining an individual resident’s development, progression, and mastery toward unsupervised practice. These EPAs may inform medical student education as well as how best to structure maintenance of certification.60 The EPAs may expand or contract, depending on the level of learner.

Limitations

Our work has several limitations. First, the majority of item eliminations and combinations occurred during Stages 1 and 2, before the larger group of experts weighed in during the Delphi study; we may have obtained different results had we provided the Delphi group with a longer list of items, even if overlapping. Second, although we systematically incorporated input from nonpsychiatric and psychiatric experts, we may not have defined the expert pool appropriately. Third, our efforts have focused entirely on the content validity of the EPAs. Consistent with the unitary model of validity, future work will need to collect other important types of validity evidence, such as internal structure, response process, associations with other variables, and consequences.61 This work will also necessitate developing and testing workplace-based assessment tools and scales that effectively measure EPAs,62 as well as clarifying the relative roles of ad hoc versus summative entrustment and the processes by which entrustment decisions are made.18,63 Effective faculty development will require descriptive behavioral narratives and case vignettes (including videos) that exemplify each level of entrustment.16,60,64,65 Fourth, our proposed EPAs include both discrete activities (within a single episode of care) and longitudinal activities (care that unfolds over time). It is unclear to what extent the latter (e.g., managing patients longitudinally or applying quality improvement methodology) will be amenable to assessment for entrustment.30

Conclusions

In conclusion, we undertook a three-stage process that sequentially reduced the number of proposed psychiatry EPAs from 52 to 24 to 14 to 13. Ten of the 13 met our threshold for acceptance and can form the basis for competency-based assessment in general adult psychiatry training. The remaining three are clearly important, and future work will be needed to determine their status. Our methods and results provide valuable lessons for the future development of EPAs in other specialties. Future research should focus on field testing these EPAs and on developing the necessary assessment tools.

References

1. Cooke M, Irby DM, O’Brien BC; Carnegie Foundation for the Advancement of Teaching. Educating Physicians: A Call for Reform of Medical School and Residency. San Francisco, CA: Jossey-Bass; 2010.
2. Eden J, Berwick DM, Wilensky GR; Institute of Medicine (U.S.) Committee on the Governance and Financing of Graduate Medical Education. Graduate Medical Education That Meets the Nation’s Health Needs. Washington, DC: National Academies Press; 2014.
3. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376:1923–1958.
4. Skochelak SE. Commentary: A century of progress in medical education: What about the next 10 years? Acad Med. 2010;85:197–200.
5. Frank JR, Snell LS, ten Cate O, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645.
6. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From Flexner to competencies. Acad Med. 2002;77:361–367.
7. Leach DC. A model for GME: Shifting from process to outcomes. A progress report from the Accreditation Council for Graduate Medical Education. Med Educ. 2004;38:12–14.
8. Caverzagie KJ, Cooney TG, Hemmer PA, Berkowitz L. The development of entrustable professional activities for internal medicine residency training: A report from the Education Redesign Committee of the Alliance for Academic Internal Medicine. Acad Med. 2015;90:479–484.
9. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547.
10. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—Rationale and benefits. N Engl J Med. 2012;366:1051–1056.
11. Lurie SJ. History and practice of competency-based assessment. Med Educ. 2012;46:49–57.
12. Hawkins RE, Welcher CM, Holmboe ES, et al. Implementation of competency-based medical education: Are we addressing the concerns and challenges? Med Educ. 2015;49:1086–1102.
13. Rekman J, Gofton W, Dudek N, Gofton T, Hamstra SJ. Entrustability scales: Outlining their usefulness for competency-based clinical assessment. Acad Med. 2016;91:186–190.
14. Malone K, Supri S. A critical time for medical education: The perils of competence-based reform of the curriculum. Adv Health Sci Educ Theory Pract. 2012;17:241–246.
15. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176–1177.
16. Englander R, Flynn T, Call S, et al. Toward defining the foundation of the MD degree: Core entrustable professional activities for entering residency. Acad Med. 2016;91:1352–1358.
17. ten Cate O, Young JQ. The patient handover as an entrustable professional activity: Adding meaning in teaching and practice. BMJ Qual Saf. 2012;21(suppl 1):i9–i12.
18. Ten Cate O, Chen HC, Hoff RG, Peters H, Bok H, van der Schaaf M. Curriculum development for the workplace using entrustable professional activities (EPAs): AMEE guide no. 99. Med Teach. 2015;37:983–1002.
19. ten Cate O. Competency-based education, entrustable professional activities, and the power of language. J Grad Med Educ. 2013;5:6–7.
20. Leipzig RM, Sauvigné K, Granville LJ, et al. What is a geriatrician? American Geriatrics Society and Association of Directors of Geriatric Academic Programs end-of-training entrustable professional activities for geriatric medicine. J Am Geriatr Soc. 2014;62:924–929.
21. Wisman-Zwarter N, van der Schaaf M, ten Cate O, Jonker G, van Klei WA, Hoff RG. Transforming the learning outcomes of anaesthesiology training into entrustable professional activities: A Delphi study. Eur J Anaesthesiol. 2016;33:559–567.
22. Brown CR Jr, Criscione-Schreiber L, O’Rourke KS, et al. What is a rheumatologist and how do we make one? Arthritis Care Res (Hoboken). 2016;68:1166–1172.
23. Rose S, Fix OK, Shah BJ, et al. Entrustable professional activities for gastroenterology fellowship training. Gastrointest Endosc. 2014;80:16–27.
24. Weiss A, Ozdoba A, Carroll V, DeJesus F. Entrustable professional activities: Enhancing meaningful use of evaluations and milestones in a psychiatry residency program. Acad Psychiatry. 2016;40:850–854.
25. Boyce P, Spratt C, Davies M, McEvoy P. Using entrustable professional activities to guide curriculum development in psychiatry training. BMC Med Educ. 2011;11:96.
26. Royal Australian & New Zealand College of Psychiatrists. EPA handbook. https://www.ranzcp.org/Files/PreFellowship/2012-Fellowship-Program/EPA-forms/EPA-handbook.aspx. Accessed October 10, 2017.
27. Aylward M, Nixon J, Gladding S. An entrustable professional activity (EPA) for handoffs as a model for EPA assessment development. Acad Med. 2014;89:1335–1340.
28. Beeson MS, Warrington S, Bradford-Saffles A, Hart D. Entrustable professional activities: Making sense of the emergency medicine milestones. J Emerg Med. 2014;47:441–452.
29. Chan B, Englander H, Kent K, et al. Transitioning toward competency: A resident–faculty collaborative approach to developing a transitions of care EPA in an internal medicine residency program. J Grad Med Educ. 2014;6:760–764.
30. Chang A, Bowen JL, Buranosky RA, et al. Transforming primary care training—Patient-centered medical home entrustable professional activities for internal medicine residents. J Gen Intern Med. 2013;28:801–809.
31. Chen HC, McNamara M, Teherani A, ten Cate O, O’Sullivan P. Developing entrustable professional activities for entry into clerkship. Acad Med. 2016;91:247–255.
32. Chen HC, van den Broek WE, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–436.
33. Englander R, Carraccio C. From theory to practice: Making entrustable professional activities come to life in the context of milestones. Acad Med. 2014;89:1321–1323.
34. Hauer KE, Kohlwes J, Cornett P, et al. Identifying entrustable professional activities in internal medicine training. J Grad Med Educ. 2013;5:54–59.
35. ten Cate O, Snell L, Carraccio C. Medical competence: The interplay between individual ability and the health care environment. Med Teach. 2010;32:669–675.
36. Hauer KE, ten Cate O, Boscardin C, Irby DM, Iobst W, O’Sullivan PS. Understanding trust as an essential element of trainee supervision and learning in the workplace. Adv Health Sci Educ Theory Pract. 2014;19:435–456.
37. Ten Cate O. Nuts and bolts of entrustable professional activities. J Grad Med Educ. 2013;5:157–158.
38. Carraccio CL, Englander R. From Flexner to competencies: Reflections on a decade and the journey ahead. Acad Med. 2013;88:1067–1073.
39. Kogan JR, Conforti LN, Bernabeo E, Iobst W, Holmboe E. How faculty members experience workplace-based assessment rater training: A qualitative study. Med Educ. 2015;49:692–708.
40. Long DM. Competency-based residency training: The next advance in graduate medical education. Acad Med. 2000;75:1178–1183.
41. Norcini JJ. Work based assessment. BMJ. 2003;326:753–755.
42. Landeta J, Barrutia J, Lertxundi A. Hybrid Delphi: A methodology to facilitate contribution from experts in professional contexts. Technol Forecast Soc. 2011;78:1629–1641.
43. Humphrey-Murto S, Varpio L, Gonsalves C, Wood TJ. Using consensus group methods such as Delphi and nominal group in medical education research. Med Teach. 2017;39:14–19.
44. Lynn MR. Determination and quantification of content validity. Nurs Res. 1986;35:382–385.
45. Penfield RD. A score method of constructing asymmetric confidence intervals for the mean of a rating scale item. Psychol Methods. 2003;8:149–163.
46. Polit DF, Beck CT, Owen SV. Is the CVI an acceptable indicator of content validity? Appraisal and recommendations. Res Nurs Health. 2007;30:459–467.
47. Sanchez R, Sloan SR, Josephson CD, Ambruso DR, Hillyer CD, O’Sullivan P. Consensus recommendations of pediatric transfusion medicine objectives for clinical pathology residency training programs. Transfusion. 2010;50:1071–1078.
48. Thrush CR, Putten JV, Rapp CG, Pearson LC, Berry KS, O’Sullivan PS. Content validation of the Organizational Climate for Research Integrity (OCRI) survey. J Empir Res Hum Res Ethics. 2007;2:35–52.
49. Penfield RD, Miller JM. Improving content validation studies using an asymmetric confidence interval for the mean of expert ratings. Appl Meas Educ. 2004;17:359–370.
50. Shaughnessy AF, Sparks J, Cohen-Osher M, Goodell KH, Sawin GL, Gravel J Jr. Entrustable professional activities in family medicine. J Grad Med Educ. 2013;5:112–118.
51. Spenkelink-Schut G, ten Cate TJ, Kort HSM. Application of EPA concept as connection between professional activities and CanMEDS competency areas: Pilot study for physician assistants in urology [in Dutch]. Tijdschrift voor Medisch Onderwijs. 2008;27:230–238.
52. Fessler HE, Addrizzo-Harris D, Beck JM, et al. Entrustable professional activities and curricular milestones for fellowship training in pulmonary and critical care medicine: Report of a multisociety working group. Chest. 2014;146:813–834.
53. Teherani A, Chen HC. The next steps in competency-based medical education: Milestones, entrustable professional activities and observable practice activities. J Gen Intern Med. 2014;29:1090–1092.
54. Warm EJ, Held JD, Hellmann M, et al. Entrusting observable practice activities and milestones over the 36 months of an internal medicine residency. Acad Med. 2016;91:1398–1405.
55. Warm EJ, Mathis BR, Held JD, et al. Entrustment and mapping of observable practice activities for resident assessment. J Gen Intern Med. 2014;29:1177–1182.
56. Sunderji N, Waddell A, Gupta M, Soklaridis S, Steinberg R. An expert consensus on core competencies in integrated care for psychiatrists. Gen Hosp Psychiatry. 2016;41:45–52.
57. ten Cate O. Trusting graduates to enter residency: What does it take? J Grad Med Educ. 2014;6:7–10.
58. Mojtabai R, Olfson M. National trends in psychotherapy by office-based psychiatrists. Arch Gen Psychiatry. 2008;65:962–970.
59. Yager J, Bienenfeld D. How competent are we to assess psychotherapeutic competence in psychiatric residents? Acad Psychiatry. 2003;27:174–181.
60. Carraccio C, Englander R, Gilhooly J, et al. Building a framework of entrustable professional activities, supported by competencies and milestones, to bridge the educational continuum. Acad Med. 2017;92:324–330.
61. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837.
62. Crossley J, Jolly B. Making sense of work-based assessment: Ask the right questions, in the right way, about the right things, of the right people. Med Educ. 2012;46:28–37.
63. ten Cate O, Hart D, Ankel F, et al.; International Competency-Based Medical Education Collaborators. Entrustment decision making in clinical training. Acad Med. 2016;91:191–198.
64. Carraccio C, Englander R, Holmboe ES, Kogan JR. Driving care quality: Aligning trainee assessment and supervision through practical application of entrustable professional activities, competencies, and milestones. Acad Med. 2016;91:199–203.
65. Regehr G, Ginsburg S, Herold J, Hatala R, Eva K, Oulanova O. Using “standardized narratives” to explore new ways to represent faculty opinions of resident performance. Acad Med. 2012;87:419–427.
