The interview is a key determinant in the high-stakes medical school admissions process,1–3 yet there is little empirical research examining what factors affect interview performance. Most reports are surveys or single-institution studies, which have limited generalizability.4 The Association of American Medical Colleges (AAMC) publishes school-specific and aggregate national data on matriculants, but those data do not include information on interviews.5
In a survey of 120 U.S. and Canadian medical schools, admissions officers reported that Medical College Admission Test (MCAT) scores and undergraduate grade point average (GPA) were the most important factors in deciding whom to interview, whereas letters of recommendation and interview performance were more important factors in making acceptance decisions.6 A 2011 AAMC survey of admissions deans reported similar findings.7 Whether survey responses reflect actual screening and interview practices is unclear, however. A recent single-institution study found that interview invitations were associated with lower socioeconomic status, older age, female gender, and higher GPA and MCAT scores, but not with being a member of a racial/ethnic group underrepresented in medicine (UIM).8
Demographic and other factors related to interview performance are poorly characterized.4,9 U.S. medical schools have increasingly adopted multiple mini-interviews (MMIs),10 in which applicants work through a series of brief, semistructured assessment stations attended by trained raters.11 In one study mentioned above,8 an MMI format did not disfavor UIM applicants, but female gender and older age were associated with higher MMI performance. A small Canadian study found no relationship between personality factors and MMI performance.12 However, both a larger U.S. study13 and a robust Australian study14 found the personality factor extraversion to be associated with higher MMI scores. Although a 2012 Irish study found no differences in MMI performance by age, gender, or socioeconomic class, applicants with European Union nationality or English as their first language outperformed their counterparts.15
Interview performance of UIM and socioeconomically disadvantaged (DA) applicants is particularly relevant in light of evidence suggesting implicit bias in the medical school admissions process, which could impede efforts to enhance diversity in the health care workforce.16 Multi-institutional studies of MMIs and traditional interviews (TIs) are needed to better characterize interview practices and outcomes.4,9
In this study, we therefore examined the applicant characteristics (including sociodemographic factors) associated with MMI and TI performance across five medical schools known as the California Longitudinal Evaluation of Admission Practices (CA-LEAP) consortium. The CA-LEAP consortium was established in 2014, with funding from the Edward J. Stemmler Medical Education Research Fund of the National Board of Medical Examiners, by the deans of admissions from the five existing California public medical schools: David Geffen School of Medicine at UCLA (UCLA); University of California, Davis, School of Medicine (UCD); University of California, Irvine, School of Medicine (UCI); University of California, San Diego, School of Medicine (UCSD); and University of California, San Francisco, School of Medicine (UCSF). The central purpose of the CA-LEAP consortium is to describe the practices and long-term outcomes of University of California (UC) medical school admissions processes, particularly regarding interviews and interview performance, in this large and geographically diverse state.
We conducted this study from July 2014 to June 2016 using retrospective data from the CA-LEAP consortium medical schools’ 2011–2013 admissions cycles. Each school received approval from its own institutional review board (IRB) to enter into an IRB reliance agreement,17 wherein UCD acted as the IRB of record for continuing review of the project.
Each school completed a comprehensive admissions practices survey to characterize its screening methods, interview method and scoring, selection processes, committee structure, and special track programs. Individual-level data from the American Medical College Application Service (AMCAS) application were collected on each applicant interviewed over the three-year study period, including age, gender, race/ethnicity (characterized as self-identified member of a UIM group [primarily Hispanic, African American, Pacific Islander, Alaskan Native, or Native American in this study] or not), DA status (yes or no), cumulative GPA, and total MCAT score. DA status was a self-designation, typically based on social, economic, educational, or environmental factors described in a statement on the AMCAS application.
Each school securely transmitted its AMCAS data and interview scores to the study analyst (E.G.) at UCD. The data were initially identified by AMCAS number to link interview scores across schools for applicants undergoing multiple interviews; once linked, all data were deidentified by replacing the AMCAS number with a unique study identifier.
The study population included applicants interviewed at each CA-LEAP consortium medical school during the 2011–2013 admissions cycles. Applicants to the following medical school tracks, which had nonstandard interview or selection processes, were excluded from the study: MD–PhD programs, UCSD combined bachelor’s–MD program, UCSD PRogram in Medical Education (PRIME) program, UCLA DDS-MD program, UCLA PRIME program, Charles R. Drew/UCLA Medical Education Program, and UC Berkeley–UCSF Joint Medical Program. Applicants interviewed at UCSF during the 2011 admissions cycle were also excluded because their interview data consisted of narrative reports without scores.
Interview and scoring methods
During the study period’s three consecutive admission cycles, two schools used MMIs (UCD and UCLA), and three schools used TIs (UCI, UCSD, and UCSF). Each school’s interview process is summarized in Table 1. At the MMI schools, most stations were adapted from those developed at McMaster University18 and marketed by ProFitHR (Hamilton, Ontario, Canada). In general, MMI scenarios were designed to assess the following qualities: communication, integrity, ethical judgment, empathy, professionalism, diversity and cultural awareness, teamwork, ability to handle stress, problem solving, self-awareness, and career motivation. At the TI schools, the interview format and content were typically unstructured but informed by interviewer training, which included suggested interview topics and questions.
Interview scores were converted to a standard z score (mean = 0 and standard deviation [SD] = 1; standardization based on applicants to a specific school and application year) to allow direct comparison across schools and application years.
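As an illustration, this within-school, within-year standardization can be sketched as follows (a minimal example with hypothetical scores and school names; the study's analyses were conducted in Stata, not Python):

```python
import pandas as pd

# Hypothetical interview scores for two school-year strata.
# School "A" scores on a 0-100 scale; school "B" on a 1-5 scale.
df = pd.DataFrame({
    "school": ["A", "A", "A", "B", "B", "B"],
    "year":   [2011, 2011, 2011, 2011, 2011, 2011],
    "score":  [70.0, 80.0, 90.0, 3.0, 4.0, 5.0],
})

# Standardize within each school-and-year stratum:
# z = (score - stratum mean) / stratum SD
grouped = df.groupby(["school", "year"])["score"]
df["z"] = (df["score"] - grouped.transform("mean")) / grouped.transform("std")
```

After standardization, scores from schools with different raw scales become directly comparable: within each stratum, the z scores have mean 0 and SD 1.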
Quantitative data analyses were conducted using Stata Version 14.2 (StataCorp, College Station, Texas). Multilevel mixed-effects linear regression analyses were conducted, stratified by interview type (TI or MMI), with interview z score as the dependent variable. Independent variables included age, gender, UIM status, DA status, MCAT score, GPA, and interview occasion (number of prior interviews at CA-LEAP consortium schools, to examine a possible “practice” effect). Analyses were also adjusted for date of interview during the interview season, total number of interviews per applicant, school, and application year. Applicants were treated as random effects to account for the nesting of interviews within applicants. A Chow test was used to determine whether the parameter estimates from the two stratified regression analyses (TI and MMI) were equal.19
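A Chow-type comparison of two independently estimated coefficient vectors can be sketched as a Wald chi-square test on their difference. The sketch below is illustrative only (made-up coefficient values and covariance matrices, not the study's Stata implementation or its actual estimates):

```python
import numpy as np
from scipy import stats

def coefficient_difference_test(b1, V1, b2, V2):
    """Wald chi-square test that two independently estimated
    coefficient vectors are equal (a Chow-style comparison)."""
    d = np.asarray(b1) - np.asarray(b2)      # coefficient differences
    V = np.asarray(V1) + np.asarray(V2)      # covariance of the difference
    chi2 = float(d @ np.linalg.solve(V, d))  # d' V^{-1} d
    dof = len(d)                             # one df per compared coefficient
    p = stats.chi2.sf(chi2, dof)
    return chi2, dof, p

# Illustrative example with made-up estimates for two predictors
b_ti,  V_ti  = [0.17, 0.02], np.diag([0.003, 0.001])
b_mmi, V_mmi = [-0.18, 0.01], np.diag([0.003, 0.001])
chi2, dof, p = coefficient_difference_test(b_ti, V_ti, b_mmi, V_mmi)
```

A small p value indicates that the two stratified models (TI and MMI) do not share a common set of coefficients, which is the hypothesis the overall Chow test evaluates.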
The study included 4,993 interviewees who underwent 7,516 interviews. Table 2 shows the characteristics of interviewed applicants. Interviewees included 2,378 women (47.6%), 931 UIM individuals (18.6%), and 962 DA individuals (19.3%). Their mean age was 24.4 years (SD = 2.7), and their mean GPA and MCAT scores were 3.72 (SD = 0.22) and 33.6 (SD = 3.7), respectively. There was overlap among the DA and UIM interviewees: Of all interviewees, 962 (19.3%) were DA, 931 (18.6%) were UIM, 493 (9.9%) were both, and 3,593 (72.0%) were neither. While most individuals (3,226; 64.6% of total) underwent an interview at only one school, 1,180 (23.6%) underwent interviews at two schools, and 587 (11.8%) underwent interviews at three or more schools.
Table 3 summarizes the results of the adjusted analyses stratified by interview type. The models were also adjusted for interview date and application year, total number of interviews per applicant, and school. Older age and female gender were associated with better performance on both MMIs and TIs (all P < .0001). Higher GPA was associated with lower MMI scores (z score, per unit GPA = −0.26; 95% confidence interval [CI] = −0.45, −0.06; P = .011) but unrelated to TI scores. DA applicants had higher TI scores (z score = 0.17; 95% CI = 0.07, 0.28; P = .001) but lower MMI scores (z score = −0.18; 95% CI = −0.28, −0.08; P = .001) than non-DA applicants. There were positive associations between interview occasion (number of prior interviews at CA-LEAP consortium schools) and both MMI performance (z score = 0.11; 95% CI = 0.07, 0.16; P < .0001) and TI performance (z score = 0.08; 95% CI = 0.03, 0.13; P = .002). Neither UIM status nor MCAT score was associated with interview performance.
The Chow test revealed that, overall, the parameter estimates were different in the two stratified regression equations (chi-square = 37.0; degrees of freedom [df] = 10; P = .0001). Of note, the effects for DA status (chi-square = 25.1; df = 1; P < .0001) and GPA (chi-square = 5.4; df = 1; P = .0199) were significantly different.
This study reports the CA-LEAP consortium’s initial findings on the interview performance of approximately 5,000 individuals who underwent 7,500 interviews over three consecutive admissions cycles at five California public medical schools that used two different interview methods (MMI and TI). To the best of our knowledge, this is the first multi-institutional study of U.S. medical schools to examine interview performance outcomes.
Most individuals (64.6%) were interviewed at only one school, underscoring the competitive nature of the admissions process. In regression analyses, both MMI and TI scores were higher among those with more prior interviews (adjusted for total number of interviews per applicant), suggesting that interview performance improves with practice. In contrast, a smaller Australian study found no difference in overall MMI performance in applicants with previous interview experience compared with those who had none.20
Female gender was associated with better MMI and TI performance, consistent with previous studies of MMIs8,13,21 and TIs.22 After the University of Queensland stopped using medical school admissions interviews, the proportion of incoming male medical students rose dramatically to nearly 74% within three years, likely because of the influence of men’s higher Graduate Medical School Admissions Test scores; this illustrates a potential gender-balancing effect of interviews.23
Age was associated with better MMI and TI performance, perhaps reflecting more highly developed communication skills with increased life or work experience. Interestingly, GPA was inversely associated with MMI (but not TI) performance. Some previous studies have found an inverse association between GPA and MMI scores,8,18,24 while others have not.12,25,26 We speculate that the study habits and other attributes associated with achieving a high GPA may not be the same skills required for performance on the MMI, a fast-paced series of interactions in which high performance is associated with the personality factor extraversion.13,14 We found no relationship between MCAT score and interview performance.
DA applicants performed better than non-DA applicants on TIs but worse on MMIs, a concerning finding. Interviewers at TI schools were not blinded to the AMCAS application, so they may have had knowledge of applicants’ DA status, which, in turn, may have influenced their interview scores. Relatively unstructured TIs also may allow interviewers more latitude to incorporate aspects of DA individuals’ history in their ratings. Conversely, we speculate that the content, brevity, high level of structure, and speed of MMIs may disfavor DA applicants. This finding is consistent with a recent Canadian study showing that graduation from a rural high school was associated with lower MMI scores (after adjustment for age, gender, GPA, and MCAT score).27
Our study found no significant association of UIM status with interview performance, a reassuring finding consistent with one prior study8 and not suggestive of implicit racial bias effects.16 However, our findings suggest that a degree of caution is warranted for schools planning to switch their interview process from the TI to the MMI, because of potentially worse outcomes for DA applicants. How DA and UIM applicants fare in the medical school admissions process has critical workforce implications because such individuals may be more likely to provide care to underserved populations.28,29
Our study has strengths and limitations. The large, multi-institutional cohort (with about one-third of the same individuals interviewing at two or more schools) revealed key differences in performance among interviewees at TI versus MMI schools. Unfortunately, the design did not permit us to determine the precise mechanism of the differences in interview performance across schools. For example, our finding that DA applicants performed better on TIs and worse on MMIs than non-DA applicants could reflect differences between the TI and MMI methods themselves, or it could be related to construct differences (e.g., rater training, station content, implementation processes) at the various schools. Future studies designed to explore possible method versus construct or content effects would be helpful.
An additional limitation was the use of the self-designated DA status from the AMCAS application. While applicants may identify themselves as DA, schools may draw a different conclusion regarding their status or vary in how they weigh this factor. Conversely, applicants who do not identify themselves as DA may meet school-specific criteria for socioeconomic and/or environmental disadvantage. AMCAS and several UC medical schools have recently developed socioeconomic status indicators based on parental education and occupation,8,30 which provide more in-depth information than a self-designation; however, these measures were not available during the study period.
Although our study included a large number of DA individuals (962), we excluded applicants to the UCSD PRIME program, UCLA PRIME program, and Charles R. Drew/UCLA Medical Education Program, a large proportion of whom identified themselves as DA. Also, the study did not include the large cohort of applicants who were not invited for interviews; including them would have been helpful to provide a more complete picture of the interview process (e.g., invitation determinants, screening practices). Finally, the extent to which CA-LEAP consortium findings generalize to other schools is uncertain because interviewees at these California public medical schools may not reflect the broader U.S. or international applicant pool. For instance, the mean total MCAT score of our 2011–2013 study population was nearly 34, which was greater than the 90th percentile for all 2012 examinees.31
In this multi-institutional study examining U.S. medical school interview performance outcomes, older applicants performed better than younger applicants, and women outperformed men. GPA was inversely associated with MMI performance but not associated with TI performance. DA applicants did better on TIs and worse on MMIs than non-DA applicants, while no difference in performance was observed for UIM interviewees. Our findings have potentially important workforce implications and illustrate the need for other multi-institutional studies of medical school admissions processes,4 including research comparing applicant performance on different MMIs, elucidating potential biases toward rural or DA applicants, and evaluating the long-term outcomes or predictive validity of interviews.
Acknowledgments: The authors wish to thank Melissa Sullivan, Ariana Hosseini, MD, Hallen Chung, Kiran Mahajan, and Sarika Thakur, EdD, who provided invaluable assistance with data collection and project administration.
1. Puryear JB, Lewis LA. Description of the interview process in selecting students for admission to U.S. medical schools. J Med Educ. 1981;56:881–885.
2. Edwards JC, Johnson EK, Molidor JB. The interview in the admission process. Acad Med. 1990;65:167–177.
3. Nowacek GA, Bailey BA, Sturgill BC. Influence of the interview on the evaluation of applicants to medical school. Acad Med. 1996;71:1093–1095.
4. Knorr M, Hissbach J. Multiple mini-interviews: Same concept, different approaches. Med Educ. 2014;48:1157–1175.
5. Association of American Medical Colleges. FACTS: Applicants and matriculants data. https://www.aamc.org/data/facts/applicantmatriculant. Accessed October 3, 2017.
6. Monroe A, Quinn E, Samuelson W, Dunleavy DM, Dowd KW. An overview of the medical school admission process and use of applicant data in decision making: What has changed since the 1980s? Acad Med. 2013;88:672–681.
7. Association of American Medical Colleges. Medical school admissions: More than grades and test scores. AAMC Analysis in Brief. 2011;11(6). https://www.aamc.org/download/261106/data/aibvol11_no6.pdf. Accessed October 3, 2017.
8. Jerant A, Fancher T, Fenton JJ, et al. How medical school applicant race, ethnicity, and socioeconomic status relate to multiple mini-interview-based admissions outcomes: Findings from one medical school. Acad Med. 2015;90:1667–1674.
9. Rees EL, Hawarden AW, Dent G, Hays R, Bates J, Hassell AB. Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs: A BEME systematic review: BEME guide no. 37. Med Teach. 2016;38:443–455.
10. Glazer G, Startsman LF, Bankston K, Michaels J, Danek JC, Fair M. How many schools adopt interviews during the student admission process across the health professions in the United States of America? J Educ Eval Health Prof. 2016;13:12.
11. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: The multiple mini-interview. Med Educ. 2004;38:314–326.
12. Kulasegaram K, Reiter HI, Wiesner W, Hackett RD, Norman GR. Non-association between Neo-5 personality tests and multiple mini-interview. Adv Health Sci Educ Theory Pract. 2010;15:415–423.
13. Jerant A, Griffin E, Rainwater J, et al. Does applicant personality influence multiple mini-interview performance and medical school acceptance offers? Acad Med. 2012;87:1250–1259.
14. Griffin B, Wilson I. Associations between the big five personality factors and multiple mini-interviews. Adv Health Sci Educ Theory Pract. 2012;17:377–388.
15. Kelly ME, Dowell J, Husbands A, et al. The fairness, predictive validity and acceptability of multiple mini interview in an internationally diverse student population—A mixed methods study. BMC Med Educ. 2014;14:267.
16. Capers Q 4th, Clinchot D, McDougle L, Greenwald AG. Implicit racial bias in medical school admissions. Acad Med. 2017;92:365369.
17. University of California IRB reliance registry. https://irbreliance.ucop.edu/site/index. Accessed October 3, 2017.
18. Eva KW, Reiter HI, Rosenfeld J, Norman GR. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med. 2004;79(10 suppl):S40–S42.
19. Chow GC. Tests of equality between sets of coefficients in two linear regressions. Econometrica. 1960;28(3):591–605.
20. Griffin B, Harding DW, Wilson IG, Yeomans ND. Does practice make perfect? The effect of coaching and retesting on selection tests used for admission to an Australian medical school. Med J Aust. 2008;189:270–273.
21. Ross M, Walker I, Cooke L, et al. Are female applicants rated higher than males on the multiple mini-interview? Findings from the University of Calgary. Acad Med. 2017;92:841–846.
22. Shaw DL, Martz DM, Lancaster CJ, Sade RM. Influence of medical school applicants’ demographic and cognitive characteristics on interviewers’ ratings of noncognitive traits. Acad Med. 1995;70:532–536.
23. Wilkinson D, Zhang J, Byrne GJ, et al. Medical school selection criteria and the prediction of academic performance. Med J Aust. 2008;188:349–354.
24. Terregino CA, McConnell M, Reiter HI. The effect of differential weighting of academics, experiences, and competencies measured by multiple mini interview (MMI) on race and ethnicity of cohorts accepted to one medical school. Acad Med. 2015;90:1651–1657.
25. Reiter HI, Lockyer J, Ziola B, Courneya CA, Eva K; Canadian Multiple Mini-Interview Research Alliance (CaMMIRA). Should efforts in favor of medical student diversity be focused during admissions or farther upstream? Acad Med. 2012;87:443–448.
26. Hecker K, Donnon T, Fuentealba C, et al. Assessment of applicants to the veterinary curriculum using a multiple mini-interview method. J Vet Med Educ. 2009;36:166–173.
27. Raghavan M, Martin BD, Burnett M, et al. Multiple mini-interview scores of medical school applicants with and without rural attributes. Rural Remote Health. 2013;13:2362.
28. Mensah MO, Sommers BD. The policy argument for healthcare workforce diversity. J Gen Intern Med. 2016;31:1369–1372.
29. Saha S, Shipman SA. Race-neutral versus race-conscious workforce policy to improve access to care. Health Aff (Millwood). 2008;27:234–245.
30. Grbic D, Jones DJ, Case ST. The role of socioeconomic status in medical school admissions: Validation of a socioeconomic indicator for use in medical school admissions. Acad Med. 2015;90:953–960.
31. Association of American Medical Colleges. Percentile ranks for MCAT total and section scores for tests administered in 2012. https://aamc-orange.global.ssl.fastly.net/production/media/filer_public/fe/1f/fe1f1258-8a5d-4f66-8ddd-7cdcbcd6fde8/combined12pdf.pdf. Accessed October 3, 2017.