On the basis of these findings, we initially recommended that the AAMC engage in further study of six of the seven proposed core personal competencies for entering students.
Collecting feedback on the core personal competencies
The ILWG’s recommendation served as the foundation for the AAMC Admissions Initiative. One of that group’s first projects was to review the ILWG’s draft definitions of the recommended core personal competencies. Admissions Initiative members collected input from the AAMC’s Group on Student Affairs (GSA), Committee on Admissions (COA), and Holistic Review Project Advisory Committee, as well as from AAMC staff representing different constituencies and from constituents at AAMC regional and annual meetings. Revisions to the competency definitions were generally minor and related to form and organization. Many constituents stated that cultural competence, oral communication, and teamwork (which were subsumed in the definitions of the ILWG’s six recommended competencies) should be stand-alone competencies to signal their importance to medical schools and to ensure that they receive adequate attention during the admission process. Before adding these three as core personal competencies, Admissions Initiative members reviewed data from the MR5 Committee’s surveys of admission and academic affairs officers17,18 regarding the importance of these competencies to success in medical school. As shown in Table 2, each of the three was, on average, rated as “important” or “very important” in those surveys. Accordingly, they were added, resulting in a final list of nine core personal competencies for entering medical students.
Approving the nine core personal competencies for entering medical students
In February 2013, the AAMC’s COA endorsed the final list of nine core personal competencies for entering medical students (defined in Table 4): ethical responsibility to self and others; reliability and dependability; service orientation; social skills; capacity for improvement; resilience and adaptability; cultural competence; oral communication; and teamwork.
These personal competencies are linked to behaviors associated with success in medical school and to the ACGME competencies, and they build on the personal characteristics that admission and academic affairs officers rated as most important for students to demonstrate at entry in order to be successful in medical school.
Exploring Tools to Assess the Core Personal Competencies Early in the Admission Process
Although the ILWG survey suggested a desire among admission officers for tools that assess applicants’ personal competencies early in the admission process, there are many unanswered questions about the use and value of such measures in medical school admissions.22 To begin to answer these questions, the ILWG reviewed the medical, higher education, and employment literatures published through summer 2012. We identified more than 50 seminal articles (including several meta-analyses) and six unpublished technical reports about tools currently used to measure personal competencies in higher education and employment settings. We made subjective, holistic judgments about tools’ potential to provide information on applicants’ core personal competencies for use in the pre-interview screening stage of the admission process. We judged six types of tools according to the following eight criteria: validity, reliability, group differences, susceptibility to faking and coaching, applicant reactions, user reactions, cost/resource utilization, and scalability for use in pre-interview screening (Appendix 1).
Situational judgment tests
In situational judgment tests (SJTs), examinees are asked to indicate how they would (or should) respond to dilemmas presented in text-based, video, or animated scenarios. Response formats vary: Examinees may be asked to select from multiple-choice options, identify the most and least effective responses, and/or answer open-ended questions. SJTs have been used in medical school admission processes in Canada (the CASPer assessment23), Belgium,24–26 and Israel.27
The employment literature28 provides strong evidence for the reliability and validity of SJTs, as does research conducted in Belgium,26 where an SJT has been used in the medical school admission process since 1997. Additionally, research from the United Kingdom shows that SJT scores predict competency-based ratings of physician performance and provide incremental validity above and beyond a clinical problem-solving test.29 Further, applicants hold generally positive attitudes about SJTs.30
There is some evidence of small racial/ethnic group differences in performance on SJTs that emphasize decision making.31 However, research conducted by the College Board and the Law School Admission Council indicates that including these tests in the admission process may increase the percentage of African American and Latino matriculants compared with using academic data alone, and that performance on SJTs is the best predictor of “lawyering effectiveness.”32,33 It should be noted, though, that these studies were conducted in a research (rather than operational) environment.
SJTs are somewhat expensive to develop because of the technical expertise needed to create and score scenarios. However, they are scalable for use in pre-interview screening because they can be administered to large numbers of applicants before the interview. Further, once SJTs are scored, the data are available in a format that is easy to use.
Standardized evaluations of performance
In standardized evaluations of performance (SEPs), raters use a graphic, comparative, or behaviorally anchored rating scale to evaluate applicants on a set of competencies. Although most medical school admission processes use nonstandardized letters of recommendation—which have poor interrater reliability for nonacademic variables,34 have poor predictive validity, and lack comparative data—admission processes for other graduate and professional programs (e.g., veterinary medicine, optometry, physical therapy) use SEPs. In 2009, the Educational Testing Service introduced the Personal Potential Index,35 an SEP for use in graduate admissions, but there is no published literature to date on its psychometric properties.
Research on the Medical Student Performance Evaluation shows small but statistically significant positive correlations between standardized evaluations and performance on comprehensive clinical performance examinations.36 Admission officers are likely to have positive attitudes about SEPs because raters must include specific examples of behaviors illustrating applicants’ personal competencies.37 Although the employment literature includes some evidence of small racial/ethnic group differences,38 there is no evidence of group differences in the educational literature.
There is, however, potential for rater leniency and a consequent lack of variance in ratings. Given that applicants select their SEP raters, those raters may feel obligated to act as advocates rather than as objective evaluators. SEPs are also somewhat expensive because of the expertise needed to develop them and the infrastructure required to support their use. However, they are scalable for use with large applicant pools, and SEP ratings would make data about applicants’ personal competencies available in an easy-to-use format in time for pre-interview screening.
Accomplishment records
Accomplishment records, also known as autobiographical questionnaires, are standardized descriptions of achievements and experiences. Applicants are asked to describe behaviors related to a set of important personal competencies. Typically, applicants write about a situation in which they demonstrated a certain competency, describing the specific actions taken and the outcome. The resulting narratives can be scored by raters or left unscored. Variations of this tool are already used in medical school admission processes, such as in the descriptions of experiences in the Work/Activities section of the American Medical College Application Service (AMCAS) application, in secondary applications developed at individual medical schools, and as part of the MOR27 assessment center.
Reliability is best when accomplishment records are collected in proctored settings and scored by multiple raters.39 Validity data on their use in admissions are not available. Applicants and users may have lukewarm reactions to them because of the added workload. There is little published research on unscored accomplishment records, but they are inexpensive to develop and can be administered to large numbers of applicants. Unscored accomplishment records cannot, however, be easily incorporated into pre-interview screening because a substantial amount of time and experience is needed to read and interpret them.
Personality and biographical data inventories
Personality inventories and biographical data inventories ask applicants to indicate the extent to which a series of statements accurately describe them, typically using a Likert-type response scale. These tools are relatively inexpensive to develop and can be administered to large numbers of applicants.
Both types of inventories have good psychometric properties and are commonly used in employee selection. However, there are concerns about their use in a high-stakes admission context. A primary concern is the potential for coaching and faking. Research demonstrates that applicants can respond to these types of inventories in ways that make them appear more attractive as candidates, which may compromise the validity of these assessments.40 Bardes and colleagues22 suggest that this phenomenon could be exacerbated in the medical school admission context because test preparation companies and others routinely help applicants prepare to apply to medical school. Applicants from low socioeconomic backgrounds who do not have access to such coaching may be at a disadvantage. There could also be negative reactions from applicants regarding privacy issues30 and from admission officers concerning the validity of these assessments.
Interviews
The majority of medical schools use local (on-campus) interviews to assess applicants’ personal competencies. Interview types range from unstructured to structured, but most medical school interviews are semistructured. The typical medical school interview process includes a standard set of dimensions or questions, uses rating scales to evaluate applicant responses, and involves multiple interviews and/or interviewers.41 Local interviews have a number of limitations, however. Reliability for unstructured interviews is poor, and the practice of providing interviewers with access to applicants’ application data introduces bias.42,43 In addition, local interviews are subject to rater error, and ratings may have more to do with the interviewer than the interviewee.43
Although the unstructured personal interview has not been shown to predict clinical performance in medical school,44 semistructured interview scores have been shown to predict clerkship performance.45,46 In recent years, the multiple mini-interview (MMI)‡ pioneered by McMaster University,14,15 structured interviews conducted at the University of Iowa Carver College of Medicine,47 and “behavioral event interviews” used by the Scholarly Excellence, Leadership Experiences, Collaborative Training program at the Morsani College of Medicine48 have paved the way for improved measurement of personal characteristics via interviews.
The employment49 and medical school admission45,46,50,51 literatures provide strong evidence for the reliability and validity of semistructured and structured interviews. Applicants and interviewers generally have positive attitudes about semistructured interviews,30 and applicants perceive the MMI process as being fair.52 There is no evidence of racial/ethnic group differences on interviews in the educational literature.
One concern about interviews is the potential for coaching and faking. Research suggests that applicants actively try to present themselves in a more favorable light during interviews and that those who do so successfully are likely to obtain higher interview scores.53–55 Unstructured local interviews may provide important information about medical school applicants’ personal competencies, but they lack reliability and have not been shown to predict future performance. Semistructured and structured interviews may also provide information about personal competencies and have better psychometric properties. However, local interviews are resource intensive.
Assessment centers
Assessment centers can employ several standardized exercises (e.g., interviews, role-plays, in-baskets, group discussions) to provide multiple opportunities for multiple raters to evaluate applicant behaviors. Assessment centers have been used in medical school admission processes in Belgium24–26 and Israel.27 In the United States, the role-playing component of assessment centers has been used in the United States Medical Licensing Examination Step 2 Clinical Skills exam and in various medical schools’ objective structured clinical examinations. The employment,56,57 medical school admission,24–26 and medical practice58 literatures all provide evidence for the reliability and validity of assessment centers. In Israel, applicants and admission officers who participated in the MOR perceived it to be fair for screening purposes.27 Data from assessment centers provide important information about applicants’ personal competencies, but such centers are resource intensive. Thus, it is not feasible to conduct them on a national level to provide data in time for pre-interview screening.
Tools recommended for future study
After reviewing the literature and evaluating potential tools on the eight criteria, we suggested that the AAMC further investigate three tools for possible use in assessing applicants’ core personal competencies during the admission process: SJTs, SEPs, and accomplishment records. We recommended these tools because each of them
- provides data about personal competencies in a format that is easy to use and would be available in time for pre-interview screening,
- allows for multiple sources of assessment,
- has acceptable validity and is likely to add predictive value beyond UGPAs and MCAT scores for nonacademic outcomes,
- demonstrates less potential risk of coaching and faking effects compared with other tools,
- is likely to be accepted by applicants and admission officers, and
- avoids exorbitant costs that would likely be passed on to applicants.
Given the many unanswered questions about assessment of personal competencies, we believe that the AAMC should conduct additional research before developing these tools for use in medical school admissions. No tool is perfect for all situations, so we recommend employing multiple tools to assess personal competencies, which would allow admission officers to evaluate the information collected (just as they currently consider both UGPAs and MCAT scores in context). SJTs, SEPs, and accomplishment records should be used together—as part of an “admissions toolbox”—along with data on applicants’ academic competencies, in deciding which applicants to interview.
Lack of consensus about the personal competencies2 needed at entry for success in medical school and concerns22 about the tools available to assess them have long hampered changes in medical school admission processes. Yet if medical schools do not incorporate data about applicants’ personal competencies into their admission processes, the composition of future matriculating classes is unlikely to change.
In this article, we report the nine core personal competencies for entering medical students that have been endorsed by the AAMC COA: ethical responsibility to self and others; reliability and dependability; service orientation; social skills; capacity for improvement; resilience and adaptability; cultural competence; oral communication; and teamwork. This is the first list of personal competencies that is likely to generalize to all U.S. MD-granting medical schools and Canadian medical schools that use the MCAT exam, and it provides a common taxonomy for admission researchers. Individual medical schools may require additional personal competencies, but our data suggest that these nine are important for—and can be linked to behaviors critical to—success at the majority of medical schools. Each of these competencies can also be linked to ACGME competencies and competency models for physician performance. Future research should examine the relationships among these personal competencies and performance outcomes at the national level, and whether these personal competencies differ in importance on the basis of medical schools’ characteristics (e.g., mission, values).
Our evaluation and comparison of tools currently used to measure personal competencies incorporates research from the employment literature and provides admission officers with new information that may be useful as they evaluate their local admission practices. From a practical perspective, the data and literature reviewed in this article will serve as the foundation for the AAMC Admissions Initiative, which over the next several years will investigate options for developing tools to assess these core personal competencies in the medical school admission process and make recommendations about which tools, if any, should be implemented by medical schools.
Future research on the use of SJTs in medical school admissions should explore different formats for presenting scenarios (e.g., actors, avatars), alternative response formats (e.g., rank order, narrative responses), validity, and the impact of coaching/faking on validity and user acceptance. In addition, research should investigate admission officers’ interest in SEPs and in incorporating an accomplishment record in the AMCAS application, as well as the likely value of each to the admission process. Research should also identify strategies to minimize the weaknesses and capitalize on the strengths of the individual tools. In determining which tool (or set of tools) is a viable option for assessing applicants’ core personal competencies during the medical school admission process, the AAMC Admissions Initiative should weigh each tool’s advantages and drawbacks and balance them against both the needs and goals of the admission community and the needs of patients.
Acknowledgments: The authors would like to thank the following AAMC personnel for reviewing earlier drafts of this article: Henry Sondheimer, Amy Addams, Stephen Fitzpatrick, Cynthia Searcy, and Karen Mitchell. They would also like to thank the members of the MR5 Committee for their tireless efforts: Steven Gabbe, Ronald Franks, Lisa Alty, Dwight Davis, Kevin Dorsey, Michael Friedlander, Robert Hilborn, Barry Hong, Richard Lewis, Maria Lima, Catherine Lucey, Alicia Monroe, Saundra Oyewole, Erin Quinn, Richard Riegelman, Gary Rosenfeld, Wayne Samuelson, Richard Schwartzstein, Maureen Shandling, Catherine Spina, and Ricci Sylla. They would like to thank Nick Vasilopoulos and Paul Sackett for their insights during the ILWG’s deliberations. In addition, they thank Keith Dowd and Trey Pigg for their contributions to this article, as well as three anonymous reviewers for their suggestions, which greatly improved the manuscript.
Other disclosures: The Medical College Admission Test is a program of the Association of American Medical Colleges (AAMC). Related trademarks owned by the AAMC include Medical College Admission Test, MCAT, and MCAT2015.
Ethical approval: The American Institutes for Research institutional review board approved the 2010 survey on personal competencies. The e-mailed survey invitations informed participants about the study and did not offer any incentives for participation. Respondents provided consent by starting the online survey.
* The authors of this article were members of the ILWG, which was composed of four medical school representatives (T.W.K., S.K.P., C.T., J.P.W.), one consultant with expertise in developing assessments of personal characteristics (Nick Vasilopoulos), and three AAMC staff members (D.M.D., Karen Mitchell, Karla Whittaker).
† The AAMC Admissions Initiative (https://www.aamc.org/initiatives/admissions/) is a multiyear project designed to help medical schools transform their admission processes. Its purpose is to develop new tools and improve existing tools to provide information to medical schools about applicants’ personal and academic competencies.
‡ It should be noted that the MMI could be categorized as an assessment center because it includes multiple exercises (i.e., situational interview questions, role-play activities, and group activities). We categorized it as an interview because of its heavy emphasis on situational interview stations. In addition, in our experience, most admission officers view the MMI as a variant of an interview.
2. Albanese MA, Snow MH, Skochelak SE, Huggett KN, Farrell PM. Assessing personal qualities in medical school admissions. Acad Med. 2003;78:313–321
3. Carrothers RM, Gregory SW Jr, Gallagher TJ. Measuring emotional intelligence of medical school applicants. Acad Med. 2000;75:456–463
4. Adams KG, Searcy C, Norris D, Oppler S. Development of a Performance Model of the Medical Education Process. Washington, DC: American Institutes for Research; 2001
5. Bendapudi NM, Berry LL, Frey KA, Parish JT, Rayburn WL. Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81:338–344
6. Duberstein P, Meldrum S, Fiscella K, Shields CG, Epstein RM. Influences on patients’ ratings of physicians: Physicians demographics and personality. Patient Educ Couns. 2007;65:270–274
7. Grumbach K, Bodenheimer T. Can health care teams improve primary care practice? JAMA. 2004;291:1246–1251
8. Beach MC, Sugarman J, Johnson RL, Arbelaez JJ, Duggan PS, Cooper LA. Do patients treated with dignity report higher satisfaction, adherence, and receipt of preventive care? Ann Fam Med. 2005;3:331–338
9. Di Blasi Z, Harkness E, Ernst E, Georgiou A, Kleijnen J. Influence of context effects on health outcomes: A systematic review. Lancet. 2001;357:752–762
10. Hojat M, Louis DZ, Markham FW, Wender R, Rabinowitz C, Gonnella JS. Physicians’ empathy and clinical outcomes for diabetic patients. Acad Med. 2011;86:359–364
11. Papadakis MA, Teherani A, Banach MA, et al. Disciplinary action by medical boards and prior behavior in medical school. N Engl J Med. 2005;353:2673–2682
12. Kirch D. A new day in admissions. AAMC Reporter. 2010;9(19):2
13. Association of American Medical Colleges. Medical School Admissions Requirements, 2012–2013 (MSAR). Washington, DC: Association of American Medical Colleges; 2012
14. Eva KW, Reiter HI, Rosenfeld J, Norman GR. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med. 2004;79(10 suppl):S40–S42
15. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: The multiple mini-interview. Med Educ. 2004;38:314–326
16. Etienne PM, Julian ER. Assessing the personal characteristics of premedical students. In: Camara WJ, Kimmel EW, eds. Choosing Students: Higher Education Admissions Tools for the 21st Century. Mahwah, NJ: Lawrence Erlbaum Associates; 2005
17. Association of American Medical Colleges. Study of Medical School Admissions Policies and Practices [unpublished report]. Washington, DC: Association of American Medical Colleges; 2008
18. Association of American Medical Colleges. Study of Academic Affairs Officers’ Judgments About the Attributes Required for Student Success in Medical School [unpublished report]. Washington, DC: Association of American Medical Colleges; 2009
19. Sanchez JI, Levin EL. The analysis of work in the 20th and 21st centuries. In: Anderson N, Ones DS, Sinangil HK, Viswesvaran C, eds. Handbook of Industrial, Work, and Organizational Psychology. Vol 1. London, UK: SAGE Publications; 2002:71–89
21. Patterson F, Ferguson E, Lane P, Farrell K, Martlew J, Wells A. A competency model for general practice: Implications for selection, training, and development. Br J Gen Pract. 2000;50:188–193
22. Bardes CL, Best PC, Kremer SJ, Dienstag JL. Perspective: Medical school admissions and noncognitive testing: Some open questions. Acad Med. 2009;84:1360–1363
23. Dore KL, Reiter HI, Eva KW, et al. Extending the interview to all medical school candidates—Computer-Based Multiple Sample Evaluation of Noncognitive Skills (CMSENS). Acad Med. 2009;84(10 suppl):S9–S12
24. Lievens F, Sackett PR, Buyse T. The effects of response instructions on situational judgment test performance and validity in a high-stakes context. J Appl Psychol. 2009;94:1095–1101
25. Lievens F, Sackett PR. Situational judgment tests in high-stakes settings: Issues and strategies with generating alternate forms. J Appl Psychol. 2007;92:1043–1055
26. Lievens F, Sackett PR. The validity of interpersonal skills assessment via situational judgment tests for predicting academic success and job performance. J Appl Psychol. 2012;97:460–468
27. Ziv A, Rubin O, Moshinsky A, et al. MOR: A simulation-based assessment centre for evaluating the personal and interpersonal qualities of medical school candidates. Med Educ. 2008;42:991–998
28. McDaniel MA, Hartman NS, Whetzel DL, Grubb WL. Situational judgment tests, response instructions, and validity: A meta-analysis. Pers Psychol. 2007;60:63–91
29. Patterson F, Baron H, Carr V, Plint S, Lane P. Evaluation of three short-listing methodologies for selection into postgraduate training in general practice. Med Educ. 2009;43:50–57
30. Hausknecht J, Day DV, Thomas SC. Applicant reactions to selection procedures: An updated model and meta-analysis. Pers Psychol. 2004;57:639–683
31. Whetzel DL, McDaniel MA, Nguyen NT. Subgroup differences in situational judgment test performance: A meta-analysis. Hum Perform. 2008;21:291–309
32. Shultz MM, Zedeck S. Identification, Development, and Validation of Predictors for Successful Lawyering. Newtown, PA: Law School Admission Council; 2008
33. Camara W. New predictors in admissions: Challenges in moving higher education from judgmental predictors to standardized measures. Paper presented at: Annual Meeting of the American Educational Research Association; May 2010; Denver, CO
34. Dirschl DR, Adams GL. Reliability in evaluating letters of recommendation. Acad Med. 2000;75:1029
36. Chen HC, Teherani A, O’Sullivan P. How does a comprehensive clinical performance examination relate to ratings on the medical school student performance evaluation? Teach Learn Med. 2011;23:12–14
37. Johnson M, Elam C, Edwards J, et al. Medical school admission committee members’ evaluations of and impressions from recommendation letters. Acad Med. 1998;73(10 suppl):S41–S43
38. Roth PL, Huffcutt AI, Bobko P. Ethnic group differences in measures of job performance: A new meta-analysis. J Appl Psychol. 2003;88:694–706
39. Dore KL, Hanson M, Reiter HI, Blanchard M, Deeth K, Eva KW. Medical school admissions: Enhancing the reliability and validity of an autobiographical screening tool. Acad Med. 2006;81(10 suppl):S70–S73
40. Tett RP, Christiansen ND. Personality tests at the crossroads: A response to Morgeson, Campion, Dipboye, Hollenbeck, Murphy, and Schmitt (2007). Pers Psychol. 2007;60:967–993
41. Dunleavy DM, Whittaker KM. The evolving medical school admissions interview. AAMC Analysis in Brief. September 2011;11
42. Stansfield RB, Kreiter CD. Conditional reliability of admissions interview ratings: Extreme ratings are the most informative. Med Educ. 2007;41:32–38
43. Morris JG. The value and role of the interview in the student admissions process: A review. Med Teach. 1999;21:473–481
44. Elam CL, Johnson MM. Prediction of medical students’ academic performances: Does the admission interview help? Acad Med. 1992;67(10 suppl):S28–S30
45. Basco WT Jr, Gilbert GE, Chessman AW, Blue AV. The ability of a medical school admission process to predict clinical performance and patients’ satisfaction. Acad Med. 2000;75:743–747
46. Donnon T, Oddone-Paolucci E, Violato C. A predictive validity study of medical judgment vignettes to assess students’ noncognitive attributes: A 3-year prospective longitudinal study. Med Teach. 2009;31:e148–e155
47. Patrick LE, Altmaier EM, Kuperman S, Ugolini K. A structured interview for medical school admission, Phase 1: Initial procedures and results. Acad Med. 2001;76:66–71
49. Campion MA, Palmer DK, Campion JE. A review of structure in the selection interview. Pers Psychol. 1997;50:655–702
50. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ. 2007;41:378–384
51. Eva KW, Reiter HI, Trinh K, Wasi P, Rosenfeld J, Norman GR. Predictive validity of the multiple mini-interview for selecting medical trainees. Med Educ. 2009;43:767–775
52. Humphrey S, Dowson S, Wall D, Diwakar V, Goodyear HM. Multiple mini-interviews: Opinions of candidates and interviewers. Med Educ. 2008;42:207–213
53. Griffin B, Harding DW, Wilson IG, Yeomans ND. Does practice make perfect? The effect of coaching and retesting on selection tests used for admission to an Australian medical school. Med J Aust. 2008;189:270–273
54. Levashina J, Campion MA. Measuring faking in the employment interview: Development and validation of an interview faking behavior scale. J Appl Psychol. 2007;92:1638–1656
55. Barrick MR, Shaffer JA, DeGrassi SW. What you see may not be what you get: Relationships among self-presentation tactics and ratings of interview and job performance. J Appl Psychol. 2009;94:1394–1411
56. Schmitt N, Noe RA, Meritt R, Fitzgerald MP. Validity of assessment center ratings for the prediction of performance ratings and school climate of school administrators. J Appl Psychol. 1984;69:207–213
57. Schmidt FL, Hunter JE. The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychol Bull. 1998;124:262–274
58. Patterson F, Ferguson E, Norfolk T, Lane P. A new selection system to recruit general practice registrars: Preliminary findings from a validation study. BMJ. 2005;330:711–714