Beyond the United States Medical Licensing Examination Score: Assessing Competence for Entering Residency

Radabaugh, Carrie L. MPP; Hawkins, Richard E. MD; Welcher, Catherine M.; Mejicano, George C. MD, MS; Aparicio, Alejandro MD; Kirk, Lynne M. MD; Skochelak, Susan E. MD, MPH

doi: 10.1097/ACM.0000000000002728


The transition from undergraduate medical education (UME) to graduate medical education (GME) is arguably the most important transition in the educational journey of a physician. New physicians must be judged ready for supervised practice, wherein they must be responsible for writing orders and be accountable for patient well-being and the cost of ordered care. Assessment during this phase is intended to generate information that facilitates successful progression from medical student to resident. However, concerns have been raised that commonly used assessments do not necessarily screen for the types of residency applicants who would succeed in residency training or meaningfully contribute to a workforce that meets society’s needs.

To achieve a meaningful transition for learners, residency programs, and patients, assessment should reliably predict success in the specialty and specific residency program to which learners attempt to match. This depends on accurate assessment of applicant characteristics, including medical knowledge, clinical reasoning, demonstrated professional and ethical behavior, interpersonal and communication skills, scholarly work, and appropriate and effective patient care. However, program faculty increasingly are interested in skills not traditionally prized in residency selection, such as determining how well learners reflect on patient care, improve their practice of medicine, demonstrate understanding of the health care system, communicate in patients’ native language, assess socioeconomic determinants of health, and provide leadership in an interprofessional team.

High-quality assessments should provide information regarding these skills as well as those more traditionally measured skills that lead to a suitable match between the learner and the program’s culture, focus, and goals. Assessments may also identify a learner’s areas of strength and recognize clinical or knowledge deficits, thereby supporting continued professional growth and development. Doing so will contribute to the student’s progress from novice to expert1 or attainment of master adaptive learner2 qualities—that is, expert, self-directed, self-regulated, and lifelong workplace learners.3 These affirmative assessments can also help achieve the goals of medical education’s quadruple aim (improving the health of the population, improving the patient care experience, reducing per capita health care costs, and improving the work life of clinicians and staff).4

Most Commonly Used Assessment Models

The National Resident Matching Program (NRMP) conducts biennial surveys of all program directors (PDs) whose programs participate in the NRMP’s Main Residency Match. In 2018, 1,333 of 4,546 PDs (29.3%) of Match-participating programs in all specialties responded.5 Of the 1,233 PDs who responded to a query regarding which factors are most important when selecting interview candidates, the following were cited most frequently: the applicant’s United States Medical Licensing Examination (USMLE) Step 1 or Comprehensive Osteopathic Medical Licensing Examination (COMLEX) Level 1 score (94%); recommendation letters in the applicant’s indicated specialty (86%); medical student performance evaluation (MSPE) (formerly known as the dean’s letter) (81%); USMLE Step 2 clinical knowledge or COMLEX Level 2 cognitive evaluation score (80%); and applicants’ personal statements (78%) (shown in Figure 1).5 Preferences of PDs shifted when ranking candidates for the Match: The top five ranking factors for 1,208 PDs were the candidate’s interaction with faculty during the interview and visit (96%); interpersonal skills (95%); interactions with house staff during the interview and visit (91%); feedback from current residents (86%); and USMLE Step 1 or COMLEX Level 1 score (78%) (shown in Figure 2).5

Figure 1:
Factors that residency program directors commonly use to select applicants for interview.5 Abbreviations: USMLE indicates United States Medical Licensing Examination; COMLEX, Comprehensive Osteopathic Medical Licensing Examination; MSPE, medical student performance evaluation; CE, clinical examination.
Figure 2:
Factors that residency program directors commonly use to rank candidates for the Match.5 Abbreviations: USMLE indicates United States Medical Licensing Examination; COMLEX, Comprehensive Osteopathic Medical Licensing Examination; MSPE, medical student performance evaluation; CE, clinical examination.

The Association of American Medical Colleges (AAMC) conducted a PD survey in 2016 as part of its Optimizing GME initiative.6 This survey was administered to 3,718 PDs in Accreditation Council for Graduate Medical Education (ACGME)-accredited programs; 1,454 (39%) from a broad spectrum of specialties responded. The survey showcased four tools used by PDs to select interview candidates: recommendation letters, candidates’ personal statements, dean’s letter/MSPE, and the Electronic Residency Application Service (ERAS®) application. Respondents indicated which applicant characteristics they attempted to assess through each tool (Figure 3). Of 1,370 PDs who responded, 73% assessed professionalism through recommendation letters, but only 29% and 28% assessed professionalism through personal statements and the ERAS application, respectively.6

Figure 3:
Percentage of residency program directors reporting tools they used to assess applicant characteristics when determining who to interview.6 Abbreviations: MSPE indicates medical student performance evaluation; ERAS, Electronic Residency Application Service.

Recent scholarship7–10 has centered on selecting individuals with potential to enhance a patient-centered physician workforce, yet data show that attributes such as nonclinician life experiences, volunteer and extracurricular experiences, membership in the Gold Humanism Honor Society (recognizing students “who are exemplars of compassionate patient care”11), and language fluency weigh less heavily with PDs than conventional measures when selecting interview candidates. These attributes are relied on even less when candidates are ranked for the Match. For example, while 58% of respondents to the 2018 NRMP Program Director Survey cited other life experience as a factor in choosing interview candidates, only 45% identified that factor when ranking applicants (Figure 4). Likewise, 47% and 24% of PDs, respectively, cited membership in the Gold Humanism Honor Society and fluency in the language spoken by the patient population as factors in choosing interview candidates, but only 36% and 19% of PDs, respectively, relied on these factors when ranking.5

Figure 4:
Factors that residency program directors use less frequently to select applicants for interview and Match ranking.5 Abbreviation: Pt indicates patient.

The Case for Assessment Using USMLE Results

Among the sources provided to PDs for evaluating applicants, USMLE scores provide reliable assessment data that could inform trustworthy judgments about a learner’s medical knowledge and, by extrapolation, may demonstrate likelihood for success in residency. By contrast, granular performance data are generally not reported alongside a student’s grade point average (GPA) or individual grades, nor referenced in the MSPE or recommendation letters. For example, individual subject examination scores, or performance ratings or objective structured clinical examination (OSCE) scores in selected domains such as communication skills or clinical reasoning, may not be provided with grades in core clerkships. Thus, the information provided to PDs does not offer enough detail about applicants’ actual skills to be as helpful as desired for resident selection.

The rationale for considering USMLE results in the selection process is supported by the importance of medical knowledge as a foundational competence, national faculty input into USMLE content development, and demonstrated psychometric strengths of the individual Steps.12–15 Additionally, licensing examination outcomes correlate with future results on in-training and board certification examinations and, to a lesser extent, performance on other measures in residency and practice.14–22 Students who fail USMLE Step 1 are less likely to graduate on time, pass USMLE Step 2, and achieve board certification.15,16,23,24

Although the rationale for weighing USMLE pass/fail outcomes is understandable, especially given the increasing number of applications programs receive, the use of USMLE scores (particularly Step 1) as a screening tool in residency selection has been a subject of controversy and criticism.12,15,25 The psychometric rigor and validity argument for USMLE scores allows for defensible pass/fail decisions related to licensure but does not substantiate the use of individual scores in selecting residents.12,25 Differences in individual USMLE scores may not predict differences in resident clinical performance or correlate with mastery of essential skills.25–29 Focusing on assessment of knowledge draws attention away from other competence domains, such as communication skills and professional behavior, that may predict future performance.15,30,31 Furthermore, exclusive learner and faculty focus on licensing examination content may lead to inadequate coverage of domains such as safety, quality, and teamwork that are essential to safe and effective patient care. Yet, without supplemental data to assess learners, PDs are left to rely on USMLE scores to determine interview invitations and ranking; this reliance can stifle innovation essential to ensuring learner readiness for 21st-century practice. This lack of data also prevents programs from determining whether applicant performance matches programmatic values and can hinder their ability to assess the relationship between selection criteria and residents’ subsequent performance. In turn, the dissonance between a score that is readily available and information about other attributes that greatly matter in patient care contributes to an “opportunity cost” in selecting and optimally training the future physician workforce.

Alternative Models of Assessment

Currently, a range of alternative assessment models is being explored that may correlate with residents’ clinical performance and patient care outcomes. Although numerous methods of assessing clinical performance have traditionally been used by North American medical schools,32 we identified alternative models through discussions with colleagues at national meetings convened by the AAMC as well as the American Medical Association. Each is being used by one or more medical schools or residency programs in the United States.

Gateway exercises

Mandatory gateway exercises are high-stakes checkpoints for UME programs. One example consists of clinical performance assessment examinations administered after students rotate through core clerkships. These checkpoints typically include OSCEs with a set of standardized patients and help program administrators ensure that students have acquired requisite skills before they progress to advanced rotations such as acting internships. Two programs currently using gateway exercises are Oregon Health & Science University School of Medicine (through its Transition to Clinical Experiences mini-course,33 taken at the end of the preclinical phase of the curriculum before students enter into clinical experiences) and the University of Wisconsin School of Medicine and Public Health (via its Year End Professional Skills Examination,34 taken after students complete their first year of core clinical course work).

Assessment via simulation

Most medical schools integrate simulations into their curricula involving standardized patients, computerized case management scenarios, mannequins, clinical vignettes, or a combination of these methods. Simulations have been shown to improve clinical performance, teamwork,35,36 communication, leadership skills development, and learning.37,38 A recent study showed that simulation-based procedural skills training can be integrated into preclinical curricula with retention of skills in tested procedures at the end of the primary clinical year as assessed in an OSCE format.39 As more schools implement strategies to assess core entrustable professional activities (EPAs), students will have additional opportunities for repeated, low-stakes practice and formative assessments to reach entrustment before entering residency.

EPAs and competency-based assessments

The transition to competency-based education frameworks depends on development of trustworthy assessment approaches spanning multiple domains, thus allowing for transitions that are guided by a comprehensive profile of applicant readiness.14,15 Research identifies the competencies or EPAs that should be demonstrated among graduating students,40–42 methods that could be applied to assessing their achievement,43,44 and gaps recognized in core EPA performance.45,46

Numerous schools are using this framework because the tasks and responsibilities associated with EPAs are directly linked to clinical practice activities. Formal entrustment decisions are made by faculty members who must consider longitudinal data obtained from many sources, including ad hoc judgments made by clinical supervisors, whenever a student achieves a predetermined level of supervision associated with the level expected of a first-day resident (e.g., Level 3a).47 In addition, the concept of entrustment incorporates factors beyond the knowledge and skills needed to perform a core EPA: truthfulness, conscientiousness, and discernment.48

Additionally, many education and assessment leaders have embraced competency-based education to assess student performance and progression. In this framework, each medical school graduate must achieve a predetermined level of performance by demonstrating observable behaviors (milestones) associated with core competencies. Schools capture student performance along each competency as they progress through the curriculum, and electronic portfolios can monitor students’ trajectories as they rotate through different clinical clerkships.49,50

Assessment tools that longitudinally track performance in numerous settings determine each student’s learning trajectory over time. Faculty and residents who supervise students in authentic clinical settings provide daily or weekly feedback using supervision scales and narrative comments. These differ from clerkship evaluations in their frequency, focus on practical skills, and reliance on learners to drive the process of seeking feedback.51 Trajectories displayed on dashboards can help students, coaches,52 and program administrators understand what has been achieved and what remains to be accomplished during each clinical experience. Although competency-based assessments might provide useful information regarding a student’s strengths and areas for improvement in defining transitional learning needs, school-based differences in learning objectives and variability in ratings against those objectives may not allow for accurate comparison across residency applicants.

The Standardized Video Interview pilot

The AAMC, in partnership with ACGME-accredited emergency medicine programs, recently launched a pilot program to assess a Standardized Video Interview (SVI) during the 2018 residency application cycle.53 The SVI is an online, unidirectional interview consisting of questions based on knowledge of professional behaviors and interpersonal and communication skills. Interviews are scored by third-party raters trained by the AAMC, and those scores (with the video), an evaluation letter, and other available data are provided to emergency medicine residency programs. If this pilot is successful, it may widen the pool of applicants invited to interview in person, including those who might not otherwise have been considered for interview based on their ERAS application.

Holistic assessments

Holistic assessment, defined by the AAMC as a “flexible, individualized way of assessing an applicant’s capabilities by which balanced consideration is given to experiences, attributes, and academic metrics and, when considered in combination, how the individual might contribute value as a medical student and physician,”54 assesses trainees’ capabilities in areas beyond those traditionally valued in future physicians, such as life experiences, community engagement, and leadership attributes. Holistic assessments increasingly are being used to help institutions reach their mission-related interests and institutional goals.54

Several holistic assessment frameworks have been proposed. One systems-based assessment model conveys overall system performance through synthesis, analysis, and interpretation of outcomes data to provide actionable information for continuous systems improvement, innovation, and long-term planning across the continuum.55 This proposed framework may result in active engagement of students, faculty, and curriculum directors in interpreting the meaning of their programs’ outcomes, which in turn may result in relevant, actionable information for program decision making. Concerns include the increased stakeholder effort required for implementation and the challenge of translating increasing amounts of individual outcomes data into actionable intelligence for decision making.55 A second model proposed by the AAMC establishes four core principles: Criteria for residency selection are broad, linked to an institution’s mission, and regard diversity as a prerequisite for institutional excellence; experiences, attributes, and metrics are applied to all candidates, grounded in data, and intended to create a diverse group of trainees; those responsible for admissions individually consider each applicant’s potential contributions; and an applicant’s race and ethnicity should be considered only in combination with evaluation of other factors, and only when necessary to achieve mission-related interests and goals.56 This focus on the applicant as a future physician has taken on new importance as more is learned about the benefits to patients of culturally concordant and responsive care.57–60

Benefits and challenges of alternative assessment models

Alternative frameworks have some advantages over conventional learner assessment methods (multiple-choice question exams, faculty ratings, and GPAs). For example, competency-based dashboards demonstrate learner progression over time; the use of EPAs provides a broad picture of student performance and can help determine whether students are prepared for the first day of internship; and holistic assessment methods may identify physicians more likely to serve underserved patient populations. Use of alternative methods also serves as a response to criticism that some standardized multiple-choice examinations propagate structural biases against diverse student populations.

In contrast, concerns about new models include increased complexity and cost.61 Newer frameworks lack the psychometric analytics that currently accompany knowledge-based examinations, and they may not support accurate comparison of applicants from different schools. Many schools may have difficulty implementing resource-intensive methods of assessment. Not all schools will invest in new technology and methods of sharing information across courses and clerkships. Further, adopting these models poses faculty development challenges: Faculty must provide candid and accurate feedback so that students improve over time, embrace the concept that students are judged against a set standard and not against each other, and accept that direct observation of performance in clinical settings is more important than multiple-choice examinations. Pressure on faculty also may increase if the need for direct observation reduces clinical productivity; however, widespread use of supervision scales, together with 2018 guidelines62 released by the Centers for Medicare and Medicaid Services that allow attestation of student documentation in the electronic medical record, could positively affect workflow if learners and attending physicians simultaneously interview and examine patients. Finally, school assessments place faculty and administrators in a potentially conflicted situation in which they must balance their advocacy for students with their responsibility to provide valid and actionable information to PDs.

Conclusions and Areas for Further Study

To manage an ever-increasing number of residency applications and identify the best-qualified applicants, PDs are relying on screening measures that allow comparison of students across medical schools. Because the USMLE Step 1 and COMLEX Level 1 scores are convenient and quantitative, they have by default become the predominant method of screening for interview selection. Yet, we believe that success in residency should not be measured solely by graduating from the program and entering practice but, rather, by contributing to a competent and compassionate workforce that meets the needs of society, enhancing the existing body of scholarly work, providing patient-centered care in highly functioning interprofessional teams, and supporting institutions’ mission-related interests. In support of this goal, a holistic, standardized, and widely agreed-upon method to assess students’ readiness for residency is clearly desirable for physicians and patients alike.

Attention should be paid to foundational knowledge and clinical skills. However, overreliance on a single assessment format with limited coverage of the competence domains relevant to safe and effective patient care is not in the best interests of patients, learners, or educational program leaders along the UME-to-GME continuum. A more comprehensive and fair approach to resident selection should maintain learner trajectories by identifying their needs across other domains; ensure that a broader view of the applicant is considered in the application process; safeguard that performance gaps in domains that affect patient safety are addressed; and support residency programs as they consider the goals and needs of their programs during the resident selection process.

A number of promising new assessment tools and models are being considered, or in some cases used in an investigative manner, for screening and selection. While these methods currently may be expensive, subjective, and/or complicated to administer, future research and experimentation should be prioritized to establish measures that best meet the needs of programs, faculty, staff, and students in determining best fit.

Unless reliance on what some might construe as veiled screening mechanisms is addressed in candidate ranking, we will continue to select residents who may not possess the attributes that best match the residency program. We must persist in this quest because learners and their future patients deserve better than the imperfect system in use today.

Acknowledgments: The content in this article solely reflects the views of the authors, although they received input from the American Medical Association (AMA) Council on Medical Education (CME). This article is based on the content of a CME Stakeholders Session held during the AMA House of Delegates Meeting. The authors wish to thank Sarah Brotherton, PhD, and Annalynn Skipper, PhD, for their contributions to the manuscript.


1. Dreyfus SE. The five-stage model of adult skill acquisition. Bull Sci Technol Soc. 2004;24:177–181.
2. Cutrer WB, Miller B, Pusic MV, et al. Fostering the development of master adaptive learners: A conceptual model to guide skill acquisition in medical education. Acad Med. 2017;92:70–75.
3. American Medical Association. Envisioning the master adaptive learner. Accessed March 9, 2019.
4. Bodenheimer T, Sinsky C. From triple to quadruple aim: Care of the patient requires care of the provider. Ann Fam Med. 2014;12:573–576.
5. National Resident Matching Program. Results of the 2018 NRMP Program Director Survey. Published 2018. Accessed March 11, 2019.
6. Association of American Medical Colleges. Results of the 2016 program directors survey: Current practices in residency selection. Published 2018. Accessed March 9, 2019.
7. Xierali IM, Castillo-Page L, Zhang K, Gampfer KR, Nivet MA. AM last page: The urgency of physician workforce diversity. Acad Med. 2014;89:1192.
8. Andriole DA, McDougle L, Bardo HR, Lipscomb WD, Metz AM, Jeffe DB. Postbaccalaureate premedical programs to promote physician-workforce diversity. J Best Pract Health Prof Divers. 2015;8:1036–1048.
9. LaVeist TA, Pierre G. Integrating the 3Ds—Social determinants, health disparities, and health-care workforce diversity. Public Health Rep. 2014;129(suppl 2):9–14.
10. Maldonado ME, Fried ED, DuBose TD, et al. The role that graduate medical education must play in ensuring health equity and eliminating health care disparities. Ann Am Thorac Soc. 2014;11:603–607.
11. Gold Foundation. Gold Humanism Honor Society. Accessed March 9, 2019.
12. Prober CG, Kolars JC, First LR, Melnick DE. A plea to reassess the role of United States Medical Licensing Examination Step 1 scores in residency selection. Acad Med. 2016;91:12–15.
13. Dillon GF, Clauser BE, Melnick DE. The role of USMLE scores in selecting residents. Acad Med. 2011;86:793.
14. Kenny S, McInnes M, Singh V. Associations between residency selection strategies and doctor performance: A meta-analysis. Med Educ. 2013;47:790–800.
15. Swanson DB, Hawkins RE. Using written examinations to assess medical knowledge and its application. In: Holmboe ES, Durning SJ, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, PA: Elsevier; 2017:113–139.
16. Kay C, Jackson JL, Frank M. The relationship between internal medicine residency graduate performance on the ABIM certifying examination, yearly in-service training examinations, and the USMLE Step 1 examination. Acad Med. 2015;90:100–104.
17. Hamdy H, Prasad K, Anderson MB, et al. BEME systematic review: Predictive values of measurements obtained in medical schools and future performance in medical practice. Med Teach. 2006;28:103–116.
18. Dougherty PJ, Walter N, Schilling P, Najibi S, Herkowitz H. Do scores of the USMLE Step 1 and OITE correlate with the ABOS Part I certifying examination? A multicenter study. Clin Orthop Relat Res. 2010;468:2797–2802.
19. Thundiyil JG, Modica RF, Silvestri S, Papa L. Do United States Medical Licensing Examination (USMLE) scores predict in-training test performance for emergency medicine residents? J Emerg Med. 2010;38:65–69.
20. Wenghofer E, Klass D, Abrahamowicz M, et al. Doctor scores on national qualifying examinations predict quality of care in future practice. Med Educ. 2009;43:1166–1173.
21. Norcini JJ, Boulet JR, Opalek A, Dauphinee WD. The relationship between licensing examination performance and the outcomes of care by international medical school graduates. Acad Med. 2014;89:1157–1162.
22. Tamblyn R, Abrahamowicz M, Dauphinee WD, et al. Association between licensure examination scores and practice in primary care. JAMA. 2002;288:3019–3026.
23. McDougle L, Mavis BE, Jeffe DB, et al. Academic and professional career outcomes of medical school graduates who failed USMLE Step 1 on the first attempt. Adv Health Sci Educ Theory Pract. 2013;18:279–289.
24. Andriole DA, Jeffe DB. A national cohort study of U.S. medical school students who initially failed Step 1 of the United States Medical Licensing Examination. Acad Med. 2012;87:529–536.
25. McGaghie WC, Cohen ER, Wayne DB. Are United States Medical Licensing Exam Step 1 and 2 scores valid measures for postgraduate medical residency selection decisions? Acad Med. 2011;86:48–52.
26. Rifkin WD, Rifkin A. Correlation between housestaff performance on the United States Medical Licensing Examination and standardized patient encounters. Mt Sinai J Med. 2005;72:47–49.
27. Dirschl DR, Campion ER, Gilliam K. Resident selection and predictors of performance: Can we be evidence based? Clin Orthop Relat Res. 2006;449:44–49.
28. Thordarson DB, Ebramzadeh E, Sangiorgio SN, Schnall SB, Patzakis MJ. Resident selection: How we are doing and why? Clin Orthop Relat Res. 2007;459:255–259.
29. Brothers TE, Wetherholt S. Importance of the faculty interview during the resident application process. J Surg Educ. 2007;64:378–385.
30. Papadakis MA, Teherani A, Banach MA, et al. Disciplinary action by medical boards and prior behavior in medical school. N Engl J Med. 2005;353:2673–2682.
31. Boulet JR, McKinley DW, Whelan GP, Van Zanten M, Hambleton RK. Clinical skills deficiencies among first-year residents: Utility of the ECFMG clinical skills assessment. Acad Med. 2002;77(10 suppl):S33–S35.
32. Association of American Medical Colleges. Use of assessment methods by US and Canadian medical schools. Accessed March 9, 2019.
33. Oregon Health & Science University. Medical student handbook. Published 2018. Accessed March 11, 2019.
34. School of Medicine and Public Health, University of Wisconsin–Madison. MD program student handbook 2018–2019. Accessed March 9, 2019.
35. Gilfoyle E, Koot DA, Annear JC, et al.; Teams4Kids Investigators and the Canadian Critical Care Trials Group. Improved clinical performance and teamwork of pediatric interprofessional resuscitation teams with a simulation-based educational intervention. Pediatr Crit Care Med. 2017;18:e62–e69.
36. DeMaria S Jr, Levine A, Petrou P, et al. Performance gaps and improvement plans from a 5-hospital simulation programme for anaesthesiology providers: A retrospective study. BMJ Simul Technol Enhanc Learn. 2017;3:37–42.
37. Couloures KG, Allen C. Use of simulation to improve cardiopulmonary resuscitation performance and code team communication for pediatric residents. MedEdPORTAL. March 16, 2017. Accessed March 9, 2019.
38. Saraswat A, Bach J, Watson WD, Elliott JO, Dominguez EP. A pilot study examining experiential learning vs didactic education of abdominal compartment syndrome. Am J Surg. 2017;214:358–364.
39. Seidman PA, Maloney LM, Olvet DM, Chandran L. Preclinical simulation training of medical students results in better procedural skills performance in end of the year three objective structured clinical evaluation assessments. Med Sci Educ. 2017;27:89–96.
40. Angus S, Vu TR, Halvorsen AJ, et al. What skills should new internal medicine interns have in July? A national survey of internal medicine residency program directors. Acad Med. 2014;89:432–435.
41. Hawkins RE, Welcher CM, Holmboe ES, et al. Implementation of competency-based medical education: Are we addressing the concerns and challenges? Med Educ. 2015;49:1086–1102.
42. Harris P, Bhanji F, Topps M, et al.; ICBME Collaborators. Evolving concepts of assessment in a competency-based world. Med Teach. 2017;39:603–608.
43. Lypson ML, Frohna JG, Gruppen LD, Woolliscroft JO. Assessing residents’ competencies at baseline: Identifying the gaps. Acad Med. 2004;79:564–570.
44. Lockyer J, Carraccio C, Chan MK, et al.; ICBME Collaborators. Core principles of assessment in competency-based medical education. Med Teach. 2017;39:609–616.
45. Angus SV, Vu TR, Willett LL, Call S, Halvorsen AJ, Chaudhry S. Internal medicine residency program directors’ views of the Core Entrustable Professional Activities for Entering Residency: An opportunity to enhance communication of competency along the continuum. Acad Med. 2017;92:785–791.
46. Lindeman BM, Sacks BC, Lipsett PA. Graduating students’ and surgery program directors’ views of the Association of American Medical Colleges Core Entrustable Professional Activities for Entering Residency: Where are the gaps? J Surg Educ. 2015;72:e184–e192.
47. Chen HC, van den Broek WE, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–436.
48. Kennedy TJ, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83(10 suppl):S89–S92.
49. Carney PA, Mejicano GC, Bumsted T, Quirk M. Assessing learning in the adaptive curriculum. Med Teach. 2018;40:813–819.
50. Mejicano GC, Bumsted TN. Describing the journey and lessons learned implementing a competency-based, time-variable undergraduate medical education curriculum. Acad Med. 2018;93(3S Competency-Based, Time-Variable Education in the Health Professions):S42–S48.
51. Andrews JS, Bale JF Jr, Soep JB, et al.; EPAC Study Group. Education in Pediatrics Across the Continuum (EPAC): First steps toward realizing the dream of competency-based education. Acad Med. 2018;93:414–420.
52. American Medical Association. Coaching in medical education: A faculty handbook. Accessed March 9, 2019.
53. Association of American Medical Colleges. AAMC Standardized Video Interview. Accessed March 9, 2019.
54. Conrad SS, Addams AN, Young GH. Holistic review in medical school admissions and selection: A strategic, mission-driven response to shifting societal needs. Acad Med. 2016;91:1472–1474.
55. Bowe CM, Armstrong E. Assessment for systems learning: A holistic assessment framework to support decision making across the medical education continuum. Acad Med. 2017;92:585–592.
56. Association of American Medical Colleges. Roadmap to Diversity and Educational Excellence: Key Legal and Educational Policy for Medical Schools. 2nd ed. Washington, DC: Association of American Medical Colleges; 2014. Accessed March 9, 2019.
57. Charlot M, Santana MC, Chen CA, et al. Impact of patient and navigator race and language concordance on care after cancer screening abnormalities. Cancer. 2015;121:1477–1483.
58. Betancourt JR, Corbett J, Bondaryk MR. Addressing disparities and achieving equity: Cultural competence, ethics, and health-care transformation. Chest. 2014;145:143–148.
59. Behar-Horenstein LS, Warren RC, Dodd VJ, Catalanotto FA. Addressing oral health disparities via educational foci on cultural competence. Am J Public Health. 2017;107(suppl 1):S18–S23.
60. McQuaid EL, Landier W. Cultural issues in medication adherence: Disparities and directions. J Gen Intern Med. 2018;33:200–206.
61. Pangaro LN, Durning SJ, Holmboe ES. Evaluation frameworks, forms, and global rating scales. In: Holmboe ES, Durning SJ, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, PA: Elsevier; 2018:37–57.
62. Centers for Medicare & Medicaid Services; Medicare Learning Network. E/M service documentation provided by students (manual update). Published May 31, 2018. Accessed March 9, 2019.
Copyright © 2019 by the Association of American Medical Colleges