Research Reports

Do Admissions Multiple Mini-Interview and Traditional Interview Scores Predict Subsequent Academic Performance? A Study of Five California Medical Schools

Jerant, Anthony MD; Henderson, Mark C. MD; Griffin, Erin PhD; Hall, Theodore R. MD; Kelly, Carolyn J. MD; Peterson, Ellena M. PhD; Wofsy, David MD; Tancredi, Daniel J. PhD; Sousa, Francis J. MD; Franks, Peter MD

Academic Medicine 94(3):388–395, March 2019. | DOI: 10.1097/ACM.0000000000002440

Abstract

Purpose 

To compare the predictive validities of medical school admissions multiple mini-interviews (MMIs) and traditional interviews (TIs).

Method 

This longitudinal observational study of 2011–2013 matriculants to five California public medical schools examined the associations of MMI scores (two schools) and TI scores (three schools) with subsequent academic performance. Regression models adjusted for sociodemographics and undergraduate academic metrics examined associations of standardized mean MMI and TI scores with United States Medical Licensing Examination Step 1 and Step 2 Clinical Knowledge (CK) scores and, for required clerkships, with mean National Board of Medical Examiners Clinical Science subject (shelf) exam score and number of honors grades.

Results 

Of the 1,460 medical students, 746 (51.1%) interviewed at more than one study school; 579 (39.7%) completed at least one MMI and at least one TI. Neither interview type was associated with Step 1 scores. Higher MMI scores were associated with more clerkship honors grades (adjusted incidence rate ratio [AIRR] 1.28 [95% CI 1.18, 1.39; P < .01] per SD increase) and with higher shelf exam and Step 2 CK scores (adjusted mean 0.73 points higher [95% CI 0.28, 1.18; P < .01] and 1.25 points higher [95% CI 0.09, 2.41; P = .035], respectively, per SD increase). Higher TI scores were associated only with more honors grades (AIRR 1.11 [95% CI 1.01, 1.20; P = .03] per SD increase).

Conclusions 

MMI scores were more strongly associated with subsequent academic performance measures than were TI scores.

Admissions interview ratings are weighted heavily in medical school acceptance decisions.1 Yet many studies suggest that unstructured, one-on-one traditional interviews (TIs) may have limited predictive validity, in this context meaning the ability to identify applicants likely to succeed in training.2–4 By contrast, in studies mostly conducted outside the United States, higher scores on multiple mini-interviews (MMIs)—in which applicants work through a series of brief, semistructured stations attended by trained raters5—predict better objective structured clinical examination scores,6–8 licensing exam scores,9,10 and clinical clerkship ratings.8,10 Partly on the basis of such findings, many medical schools have switched from TIs to MMIs.11

A key limitation of prior studies is that they were conducted at single institutions employing either TIs or MMIs and at varying time points. Such studies are valuable, but they have limited utility in comparing the relative abilities of MMIs and TIs to identify applicants likely to succeed academically in medical school. A useful next step would be to examine the associations of interview scores with academic performance during a common time period across several schools that have overlapping applicant pools, with some schools employing MMIs and others TIs and with some applicants completing both interview types. Currently, such studies are lacking. The dearth of U.S. studies is also unfortunate, given substantial differences in application screening processes and academic performance measures among countries,12–14 as well as the greater sociodemographic diversity of applicants in the United States compared with other countries.15–17 Also, across prior studies the MMI processes varied substantially, as did the academic outcomes examined.7–10 A recent systematic review on MMI use in medical school selection concluded that consistent validity across countries, institutions, and outcomes cannot be assumed, underscoring the need for multi-institutional studies that more robustly examine the predictive validity of admissions interviews.18

In this study, using data from three consecutive admissions cycles at the five California Longitudinal Evaluation of Admissions Practices (CA-LEAP) consortium medical schools, we examined the associations of medical students’ admissions MMI and/or TI scores with their subsequent United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) scores, National Board of Medical Examiners (NBME) clerkship subject (shelf) exam scores, and number of clerkship honors grades.

Method

We conducted this longitudinal observational study during July 2014–October 2017 using admissions interview data from 2011–2013 and academic performance data from 2013–2016, as described below. We obtained ethics approval from the institutional review boards of the five participating schools via the University of California Reliance Registry (protocol no. 683).

Study population

The study population included students who completed one or more medical school interviews at CA-LEAP schools and subsequently matriculated at a CA-LEAP school in one of the admissions cycles during 2011–2013. The five CA-LEAP schools are public institutions participating in a consortium to evaluate medical school interview processes and outcomes19: David Geffen School of Medicine at UCLA (UCLA); University of California, Davis, School of Medicine (UCD); University of California, Irvine, School of Medicine (UCI); University of California, San Diego, School of Medicine (UCSD); and University of California, San Francisco, School of Medicine (UCSF). Applicants to the following medical school tracks, which had nonstandard interview or selection processes, were excluded from the study: MD–PhD programs; UCSD combined bachelor’s–MD program; UCSD Program in Medical Education (PRIME); UCLA DDS–MD program; UCLA PRIME; Charles R. Drew/UCLA Medical Education Program; and the University of California, Berkeley–UCSF Joint Medical Program.

Interview processes and scoring

During the 2011–2013 admissions cycles, two of the five CA-LEAP schools used MMIs (MMI-1 and MMI-2), and three used TIs (TI-1, TI-2, and TI-3).

MMI schools.

The MMIs at MMI-1 and MMI-2 consisted of individually scored 10-minute stations (10 and 7 stations, respectively), most of which were adapted from commercially marketed content.20 All stations were multidimensional: At each station, a structured rating form was used to assess the applicant’s interpersonal communication ability along with one or more additional competencies (e.g., integrity/ethics, professionalism, diversity/cultural awareness, teamwork, ability to handle stress, problem solving). Each station was attended by one rater, except for one two-rater station at MMI-2. At some stations, raters interacted directly with applicants; at some, raters observed applicants’ interactions with others (e.g., with actors). At both schools, raters included physician and basic science faculty and medical students. At MMI-1, raters also included alumni, nurses, patients, lawyers, high-level administrative staff, and other community members. Raters at both schools received 60 minutes of training before each admissions cycle; at MMI-2, raters also received a 30-minute reorientation prior to each MMI circuit. Raters were given no information about applicants. Raters at both MMI schools assigned a single global score (higher score = better performance), although each school employed a different scale (MMI-1: 0–3 points; MMI-2: 1–7 points).

TI schools.

Applicants at each TI school completed two 30- to 60-minute unstructured interviews, one with a faculty member and one with a medical student or another faculty member. At all TI schools, at least 60 minutes of training was provided to interviewers before each admissions cycle. At TI-1 and TI-2, interviewers reviewed the candidate’s application before the interview, but academic metrics were redacted at TI-1. At TI-3, interviewers reviewed the candidate’s application after assigning their initial interview rating, but they could then adjust their rating (if desired), yielding a final interview rating (used in our analyses). At all three schools, interviewers rated applicants on standardized scales, though the domains rated and scales used differed. At TI-1 and TI-3, interviewers assigned a single global rating (TI-1 scale: 5 = exceptional, 4 = above average, 3 = average, 2 = below average, 1 = unacceptable; TI-3 scale: 3 = unreserved enthusiasm, 2 = moderate enthusiasm, 1 = substantial reservations). At TI-2, interviewers rated candidates in four domains using a 1–5-point scale for each domain (thinking/knowledge, communication/behavior, energy/initiative, empathy/compassion); these domain scores were summed, yielding a total score (range 4–20).

Other applicant characteristics

Applicant characteristics obtained from the American Medical College Application Service (AMCAS) application included age; self-designated gender; race and ethnicity; self-designated disadvantaged (DA) status (yes/no); cumulative undergraduate grade point average (GPA); total Medical College Admission Test (MCAT) score; and application year. For the current analyses, students were classified as underrepresented in medicine (UIM) if they self-identified as black or African American, Hispanic, Native American, or Pacific Islander.

USMLE Step 1 and Step 2 CK scores

The USMLE Step 1 exam assesses understanding and application of basic science concepts relevant to medical practice (possible score range 1–300).21 Students in our sample took this exam from 2013 through 2015, approximately two years after matriculating. The USMLE Step 2 exam has two parts, Step 2 CK and Step 2 Clinical Skills (CS). The Step 2 CK exam assesses the ability to apply the medical knowledge, skills, and understanding of clinical science needed to contribute to solving patient care problems under supervision (possible score range 1–300).21 Students in our sample took this exam from 2014 through 2016, approximately three years after matriculating. The Step 2 CS exam is scored pass/fail; few students in our sample failed, precluding meaningful analysis, so we did not include Step 2 CS data in this study. The USMLE adjusts for differences in difficulty across exam forms and years using statistical procedures, and it considers scores to be comparable across a window of 3 to 4 years.21

Clinical clerkship honors grades and shelf exam scores

We examined honors grades in all required clinical clerkships, which varied from six to eight clerkships among the five schools. Although grading formulas varied, key components for all clerkships and schools included supervising residents’ and attending physicians’ subjective ratings and the student’s score on the corresponding NBME Clinical Science subject exam, widely referred to in the United States as the clerkship shelf exam (possible score range 1–100). Per the NBME, the shelf exams “are achievement tests in a broad sense, requiring medical students to solve scientific and clinical problems.”22 Students in our sample completed the required clerkships and took the shelf exams from 2013 through 2016, two to four years after matriculating.

Analyses

Analyses were conducted using Stata (version 15.1, StataCorp, College Station, Texas). For the 2012 and 2013 admissions cycles, the analyses included data from all five schools. For 2011, TI-3 provided no data. The AMCAS identification (ID) number, a unique number assigned to all medical school applicants in the United States, was used to make cross-school application, interview, and academic performance data linkages. After making the linkages, the AMCAS ID was replaced by a unique CA-LEAP ID to anonymize the data. Only one of the authors (E.G.) had access to both the AMCAS IDs and corresponding CA-LEAP IDs. For each student, we calculated a mean MMI score and a mean TI score for all admissions interviews (i.e., up to two MMIs and three TIs). Both sets of scores were standardized (mean = 0, SD = 1) based on school of interview and admissions cycle. Key outcome measures were USMLE Step 1 and Step 2 CK scores, mean clerkship shelf exam score (average of scores on shelf exams from all required clerkships), and the total number of honors grades in required clerkships.
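The score standardization described above can be reproduced directly in Stata, the package used for the analyses. The sketch below is illustrative only: the variable names are hypothetical, and it assumes each interview score is standardized within school of interview and admissions cycle before being averaged per student (the ordering of those two steps is not specified above).

    * Minimal sketch, hypothetical variable names; one record per interview
    * (applicant_id, interview_school, cycle, mmi_score)
    bysort interview_school cycle: egen grp_mean = mean(mmi_score)
    bysort interview_school cycle: egen grp_sd   = sd(mmi_score)
    generate z_score = (mmi_score - grp_mean) / grp_sd
    * Average the standardized scores across a student's MMIs
    collapse (mean) z_mmi = z_score, by(applicant_id)
    * The same steps, applied to TI scores, would yield z_ti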

We employed two sets of four separate regression models, one set each for MMIs and for TIs. The models examined, respectively, the adjusted associations of mean MMI score or mean TI score with the following dependent variables:

  1. USMLE Step 1 score (linear regressions);
  2. USMLE Step 2 CK score (linear regressions);
  3. mean clerkship shelf exam score (linear regressions); and
  4. total number of clerkship honors grades (negative binomial regression, to adjust for overdispersion in this count variable).

Each regression included only students with data for the dependent variable of interest. Covariates in all models were age (< 23, 23, 24, 25, or ≥ 26); gender (male or female); UIM race/ethnicity (yes/no); DA status (yes/no); GPA (< 3.4, 3.4–3.6, > 3.6–3.8, or > 3.8); total MCAT score (< 27, 27–30, 31–32, 33–34, or > 34); school of matriculation (MMI-1, MMI-2, TI-1, TI-2, or TI-3); and admissions cycle (2011, 2012, or 2013). We included these covariates because apart from admissions cycle (included to capture secular trends), each was associated with aspects of medical school academic performance in prior studies.6,10,23–28 The models for clerkship honors grades also included terms for the interaction of matriculation school with admissions cycle, to adjust for between-school and between-year variation in the average and maximum number of possible honors grades for clerkships at each school (i.e., 6–8 depending on school). Analyses were conducted using the standardized mean MMI and TI scores in two ways: as continuous variables to examine linear trends, and coded by quintiles to examine nonlinear effects.
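As an illustration of these specifications, the following Stata sketch fits the four MMI models enumerated above; the TI models substitute the standardized mean TI score (z_ti) for z_mmi. All variable names are hypothetical placeholders, and covariates are entered as categorical (factor) variables per the categories listed above.

    * Hypothetical variable names; z_mmi is the standardized mean MMI score
    local covars i.age_cat i.female i.uim i.da i.gpa_cat i.mcat_cat i.school i.cycle

    regress step1   z_mmi `covars'                     // 1. USMLE Step 1 score
    regress step2ck z_mmi `covars'                     // 2. USMLE Step 2 CK score
    regress shelf   z_mmi `covars'                     // 3. mean clerkship shelf exam score
    nbreg   honors  z_mmi `covars' i.school#i.cycle    // 4. honors count, with school x cycle interaction

    * Quintile-coded scores to examine nonlinear effects
    xtile mmi_q = z_mmi, nq(5)
    regress shelf i.mmi_q `covars'

Stata's estimation commands drop observations with missing values on the outcome, consistent with restricting each model to students with data for the dependent variable of interest.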

We tested whether there were statistically significant differences between TI and MMI scores in their associations with the study’s academic performance measures using the Stata program suest.29,30 The program uses model parameter estimates and their associated covariance matrices to allow statistical testing (Wald tests) of differences among parameter estimates across models.
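A minimal sketch of this comparison for one outcome (the mean shelf exam score), continuing the hypothetical variable names above; it relies on Stata's convention of labeling equations from regress as <model>_mean when results are combined under suest.

    * Fit and store the MMI and TI shelf exam models, then combine them
    quietly regress shelf z_mmi `covars'
    estimates store m_mmi
    quietly regress shelf z_ti `covars'
    estimates store m_ti
    suest m_mmi m_ti
    * Wald test of whether the MMI and TI associations differ
    test [m_mmi_mean]z_mmi = [m_ti_mean]z_ti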

To explore the robustness of findings from the primary analyses, we also conducted secondary analyses restricted to students who had completed at least one MMI and at least one TI, affording more direct comparison of the admissions interview types.

Results

There were 4,993 individuals who completed at least one MMI or TI at a CA-LEAP school during the study period. Of these applicants, 1,460 (29.2%) matriculated at one of the five schools and composed the study’s sample of students. Table 1 shows their medical school entry characteristics, and Table 2 shows their subsequent academic performance data. Of these 1,460 students, 746 (51.1%) interviewed at more than one CA-LEAP school, and 579 (39.7%) completed at least one MMI and at least one TI. The correlation between mean TI and MMI scores for these 579 students was 0.26.

Table 1:
Characteristics at Medical School Entry Among Students (N = 1,460) Who Entered the Five CA-LEAP Consortium Schools in the 2011–2013 Admissions Cycles
Table 2:
Academic Performance Among Students (N = 1,460) Who Entered the Five CA-LEAP Consortium Schools in the 2011–2013 Admissions Cycles

USMLE Step 1 data were missing for 24 (1.6%) of the students in the sample, and USMLE Step 2 CK data were missing for 133 (9.1%). Clerkship shelf exam scores and clerkship grades were missing for 59 students (4.0%). Of the prematriculation variables, only gender predicted the likelihood of missing shelf exam scores or clerkship grade data. More women were missing shelf exam scores or grade data compared with men (38 [5.1%] vs. 21 [1.9%], respectively; chi-square = 4.5; P = .03).

Unadjusted performance on the academic outcome measures was as follows. The USMLE Step 1 mean score was 233.5 (SD 19.4; range 163–271), the USMLE Step 2 CK mean score was 245.3 (SD 15.7; range 184–280), the clerkship shelf exam mean score was 78.5 (SD 6.6; range 54.0–97.7), and the mean number of clerkship honors grades was 1.7 with a median of 1 (interquartile range 0, 3) (Table 2).

Table 3 summarizes the key results of the two sets of four primary analyses examining the adjusted associations of mean MMI and TI scores, using both standardized mean scores and quintiles, with the study’s academic performance outcome measures. (Full model findings are available from the corresponding author upon request.) Figure 1 depicts relationships of TI score and MMI score (by quintile) with USMLE Step 2 CK score, mean clerkship shelf exam score, and number of clerkship honors grades.

Table 3:
Associations of Admissions Interview Scores With Subsequent Academic Performance Among Students (N = 1,460) Who Entered the Five CA-LEAP Consortium Schools in the 2011–2013 Admissions Cycles
Figure 1:
Associations of traditional interview (TI) and multiple mini-interview (MMI) scores, by quintile, with USMLE Step 2 Clinical Knowledge (CK) scores (possible range 1–300), mean clerkship shelf exam score (average of NBME Clinical Science subject examination scores for all required clerkships; possible range 1–100), and number of clerkship honors grades among 1,460 students who interviewed at one or more of the five CA-LEAP consortium schools and subsequently matriculated at one of those schools in the 2011–2013 admissions cycles. All analyses adjusted for age category (< 23, 23, 24, 25, ≥ 26); gender; underrepresented in medicine race/ethnicity (yes/no); self-designated disadvantaged status (yes/no); cumulative undergraduate grade point average (< 3.4, 3.4–3.6, > 3.6–3.8, or > 3.8); total Medical College Admission Test score (< 27, 27–30, 31–32, 33–34, or > 34); school of matriculation (MMI-1, MMI-2, TI-1, TI-2, or TI-3); and admissions cycle (2011, 2012, or 2013). The models for clerkship honors grades also adjusted for interactions between school of matriculation and admissions cycle. Each analysis included only students with data for the dependent variable of interest. Abbreviations: USMLE, United States Medical Licensing Examination; NBME, National Board of Medical Examiners; CA-LEAP, California Longitudinal Evaluation of Admissions Practices.

Neither MMI nor TI scores were associated with USMLE Step 1 score. The linear relationship between MMI score and number of clerkship honors grades was significant (adjusted incidence rate ratio [AIRR] 1.28 [95% CI 1.18, 1.39; P < .01] per SD increase in MMI score), as was the relationship between TI score and honors grades (AIRR 1.10 [95% CI 1.01, 1.20; P = .03] per SD increase in TI score) (Table 3). The MMI–honors association was significantly larger than the TI–honors association (AIRR 1.16 [95% CI 1.04, 1.30; P = .01] for an SD increase in MMI score relative to an SD increase in TI score). We conducted a post hoc analysis exploring whether the relationship between MMI score and honors grades was independent of mean clerkship shelf exam score. With additional adjustment for shelf exam score, the MMI–honors association was attenuated by 36.6% (95% CI 18.9%, 54.3%; P < .01) but remained significant (AIRR 1.17 [95% CI 1.10, 1.25; P < .01] per SD increase in MMI score).
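The reported attenuation is consistent with comparing the two rate ratios on the log scale. A quick check in Stata using the rounded AIRRs above (the 36.6% figure presumably reflects unrounded estimates, so this is only an approximation):

    * Proportional attenuation of the MMI-honors association on the
    * log-rate-ratio scale (AIRR 1.28 before vs. 1.17 after adjustment)
    display (ln(1.28) - ln(1.17)) / ln(1.28)    // approximately 0.36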

Positive associations also were observed between MMI (but not TI) performance and mean clerkship shelf exam and USMLE Step 2 CK scores. There was a linear relationship between MMI score and shelf exam score (adjusted mean 0.73 points higher [95% CI 0.28, 1.18; P < .01] per SD increase in MMI score) (Table 3). The MMI–shelf exam association was significantly larger than the TI–shelf exam association (difference of 0.47 points [95% CI 0.26, 0.89; P = .02] per SD increase). There was also a linear relationship between MMI score and Step 2 CK score (1.25 points higher [95% CI 0.09, 2.41; P = .035] per SD increase in MMI score). The MMI–Step 2 CK association did not differ significantly from the TI–Step 2 CK association (difference of 0.82 points [95% CI −0.89, 2.52; P = .35] per SD increase).

Secondary analyses limited to students who completed at least one MMI and at least one TI yielded similar findings to those of the primary analyses. (A summary is provided in Supplemental Digital Appendix 1 at https://links.lww.com/ACADMED/A594.)

Discussion

We believe this is the first study to directly compare the predictive validity of MMIs and TIs across multiple schools with partially overlapping applicant pools during a common time period. Higher MMI scores were associated with receiving more honors grades in required clerkships, with higher USMLE Step 2 CK scores (but not Step 1 scores), and with higher clerkship shelf exam scores. By contrast, TI performance exhibited only a single, more modest association, with honors grades. The MMI associations with honors grades and with clerkship shelf exam scores were both significantly stronger than the corresponding TI associations. The contrasting findings for MMIs versus TIs were also observed in secondary analyses limited to students who had completed both interview types.

Our study was not designed to determine the mechanisms of the relationship of MMI performance with clerkship honors grades. One possible explanation is that better performance on the MMI signifies superior candidates for medical training—individuals likely to attain higher levels of clinical performance than their peers. That the relationship of MMI score with clerkship honors grades remained significant after adjusting for clerkship shelf exam score, an indicator of medical knowledge, is consistent with the notion that the MMI–honors association may be driven in part by other individual characteristics (e.g., interpersonal skills). A second possibility is that MMIs select for individuals with characteristics that also weigh heavily in assigning clerkship grades but do not necessarily contribute to superior clinical performance. For example, we previously found significantly higher MMI scores among applicants with higher levels of the personality factor extroversion,31 while others have shown that more extroverted individuals receive higher interpersonal communication ratings on clerkships.32,33 Yet, to our knowledge, the net impact of extroversion on physician functioning in practice is unstudied. Ideally, to better gauge the net impact of adopting MMIs in medical school admissions, long-term, multi-institutional efforts should examine the association of MMI scores with real-patient clinical performance. Such initiatives would be challenging to field, but they might be feasible with collaboration among and support from organizations charged with oversight of medical education and physician specialty boards entrusted with ensuring ongoing competence.

Why better MMI performance was associated with higher clerkship shelf exam and Step 2 CK scores, but not with Step 1 scores, is unclear. All the exams are cognitive. However, both the shelf exams and the Step 2 CK exam assess the ability to apply clinical knowledge to patient care dilemmas posed in written case scenarios—in other words, clinical problem solving.21,22 MMIs similarly aim to assess problem-solving ability.34,35 This overlap in testing goals could account for the modest observed relationship between MMI scores and Step 2 CK scores, a speculation warranting further study. Our MMI findings are consistent with an earlier study showing that better MMI performance was associated with higher scores on the Medical Council of Canada Qualifying Examination Parts I and II.9

In previous CA-LEAP studies, we found that while within- and between-school reliabilities were lower for TIs than for MMIs,35 TIs performed as well as MMIs in predicting acceptance offers within and between schools.10 In the current study, TI scores exhibited a modest relationship only with clerkship honors grades. Nonetheless, we believe it would be premature for medical schools to stop using TIs, because our findings do not negate the possibility that TIs are effective in selecting students who are likely to succeed as clinicians. Rather, TIs may simply select for attributes that confer a more limited net advantage in standardized testing or subjective clerkship performance ratings. Also, the evidence that higher licensing exam scores predict superior performance in clinical practice is limited, and it comes entirely from Canadian studies of non–international medical graduates.36,37 In this context, the ability of an MMI score to predict a subsequent Step 2 CK score has unclear practical relevance. Further, to our knowledge, no studies have examined whether clerkship shelf exam scores or honors grades predict future performance. Additional multischool studies including both TIs and MMIs would be helpful for comparing the applicant characteristics rewarded by each interview type and the quality of care provided by physicians selected through each admissions interview process.

Our study’s strengths include the large sample of applicants who matriculated at five public medical schools in California, which is one of the most sociodemographically diverse states, and the direct comparison of MMI and TI predictive validity. Our study also had limitations. The degree to which our findings may generalize to other schools is uncertain. Although we adjusted for potentially confounding student factors included in prior single-school admissions studies, as well as for other potential confounders such as school of matriculation and admissions cycle, confounding by unmeasured student or contextual (e.g., interviewer/rater) factors may still have occurred. Although we examined several important academic performance measures, other potentially important outcomes (e.g., USMLE Step 3 scores) also merit study. Finally, we had incomplete data, including no 2011 admissions cycle data for one consortium school; however, our analyses adjusted for admissions cycle.

In conclusion, in this study of the five CA-LEAP schools, better admissions MMI scores were associated with more clerkship honors grades and higher USMLE Step 2 CK and clerkship shelf exam scores. TI scores exhibited a more modest relationship with clerkship honors grades only.

References

1. Monroe A, Quinn E, Samuelson W, Dunleavy DM, Dowd KW. An overview of the medical school admission process and use of applicant data in decision making: What has changed since the 1980s? Acad Med. 2013;88:672–681.
2. Goho J, Blackman A. The effectiveness of academic admission interviews: An exploratory meta-analysis. Med Teach. 2006;28:335–340.
3. Patterson F, Rowett E, Hale R, et al. The predictive validity of a situational judgement test and multiple-mini interview for entry into postgraduate training in Australia. BMC Med Educ. 2016;16:87.
4. Sladek RM, Bond MJ, Frost LK, Prior KN. Predicting success in medical school: A longitudinal study of common Australian student selection tools. BMC Med Educ. 2016;16:187.
5. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: The multiple mini-interview. Med Educ. 2004;38:314–326.
6. Eva KW, Reiter HI, Rosenfeld J, Norman GR. The ability of the multiple mini-interview to predict preclerkship performance in medical school. Acad Med. 2004;79(10 suppl):S40–S42.
7. Husbands A, Dowell J. Predictive validity of the Dundee multiple mini-interview. Med Educ. 2013;47:717–725.
8. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ. 2007;41:378–384.
9. Eva KW, Reiter HI, Rosenfeld J, Trinh K, Wood TJ, Norman GR. Association between a medical school admission process using the multiple mini-interview and national licensing examination scores. JAMA. 2012;308:2233–2240.
10. Jerant A, Henderson MC, Griffin E, et al. Medical school performance of socioeconomically disadvantaged and underrepresented minority students matriculating after a multiple mini-interview. J Health Care Poor Underserved. 2018;29:303–320.
11. Glazer G, Startsman LF, Bankston K, Michaels J, Danek JC, Fair M. How many schools adopt interviews during the student admission process across the health professions in the United States of America? J Educ Eval Health Prof. 2016;13:12.
12. Association of Faculties of Medicine of Canada. Admission requirements of Canadian faculties of medicine. https://afmc.ca/pdf/ADMISSION_REQUIREMENTS_EN.pdf. Accessed August 17, 2018.
13. Medical Schools Council. Entry requirements for UK medical schools: 2017 entry. https://www.ukcat.ac.uk/media/1063/msc-entry-requirements-for-uk-medical-schools-2017-entry.pdf. Accessed August 14, 2018.
14. Association of American Medical Colleges. Admission requirements for medical school. https://students-residents.aamc.org/choosing-medical-career/article/admission-requirements-medical-school. Accessed August 14, 2018.
15. Association of American Medical Colleges. Diversity in Medical Education: Facts & Figures 2016. Washington, DC: Association of American Medical Colleges; 2016. http://www.aamcdiversityfactsandfigures2016.org. Accessed August 13, 2018.
16. Dhalla IA, Kwong JC, Streiner DL, Baddour RE, Waddell AE, Johnson IL. Characteristics of first-year students in Canadian medical schools. CMAJ. 2002;166:1029–1035.
17. Tiffin PA, Dowell JS, McLachlan JC. Widening access to UK medical education for under-represented socioeconomic groups: Modelling the impact of the UKCAT in the 2009 cohort. BMJ. 2012;344:e1805.
18. Rees EL, Hawarden AW, Dent G, Hays R, Bates J, Hassell AB. Evidence regarding the utility of multiple mini-interview (MMI) for selection to undergraduate health programs: A BEME systematic review: BEME Guide No. 37. Med Teach. 2016;38:443–455.
19. Henderson MC, Kelly CJ, Griffin E, et al. Medical school applicant characteristics associated with performance in multiple mini-interviews versus traditional interviews: A multi-institutional study. Acad Med. 2018;93:1029–1034.
20. ProFitHR. Welcome to ProFitHR. http://www.profithr.com. Accessed August 13, 2018.
21. United States Medical Licensing Examination. USMLE score interpretation guidelines. http://www.usmle.org/pdfs/transcripts/USMLE_Step_Examination_Score_Interpretation_Guidelines.pdf. Accessed August 13, 2018.
22. National Board of Medical Examiners. Subject examinations. http://www.nbme.org/schools/Subject-Exams. Accessed August 13, 2018.
23. Dunleavy DM, Kroopnick MH, Dowd KW, Searcy CA, Zhao X. The predictive validity of the MCAT exam in relation to academic performance through medical school: A national cohort study of 2001–2004 matriculants. Acad Med. 2013;88:666–671.
24. Kleshinski J, Khuder SA, Shapiro JI, Gold JP. Impact of preadmission variables on USMLE Step 1 and Step 2 performance. Adv Health Sci Educ Theory Pract. 2009;14:69–78.
25. Haist SA, Wilson JF, Elam CL, Blue AV, Fosson SE. The effect of gender and age on medical school performance: An important interaction. Adv Health Sci Educ Theory Pract. 2000;5:197–205.
26. Lee KB, Vaishnavi SN, Lau SK, Andriole DA, Jeffe DB. “Making the grade”: Noncognitive predictors of medical students’ clinical clerkship grades. J Natl Med Assoc. 2007;99:1138–1150.
27. Andriole DA, Jeffe DB. Prematriculation variables associated with suboptimal outcomes for the 1994–1999 cohort of US medical school matriculants. JAMA. 2010;304:1212–1219.
28. Campos-Outcalt D, Rutala PJ, Witzke DB, Fulginiti JV. Performances of underrepresented-minority students at the University of Arizona College of Medicine, 1987–1991. Acad Med. 1994;69:577–582.
29. Stata. Stata user’s guide. Release 15. http://www.stata.com/manuals13/rsuest.pdf. Accessed August 13, 2018.
30. Clogg CC, Petkova E, Haritou A. Statistical methods for comparing regression coefficients between models. Am J Sociol. 1995;100:1261–1312.
31. Jerant A, Griffin E, Rainwater J, et al. Does applicant personality influence multiple mini-interview performance and medical school acceptance offers? Acad Med. 2012;87:1250–1259.
32. Lievens F, Ones DS, Dilchert S. Personality scale validities increase throughout medical school. J Appl Psychol. 2009;94:1514–1535.
33. Chibnall JT, Blaskiewicz RJ. Do clinical evaluations in a psychiatry clerkship favor students with positive personality characteristics? Acad Psychiatry. 2008;32:199–205.
34. Michael G. DeGroote School of Medicine, McMaster University. Manual for interviewers 2017/18. Admissions, undergraduate medical program. http://mdprogram.mcmaster.ca/docs/default-source/admissions/interviewer-manual-mmi_websiteversion.pdf?sfvrsn=2. Accessed August 13, 2018.
35. Jerant A, Henderson MC, Griffin E, et al. Reliability of multiple mini-interviews and traditional interviews within and between institutions: A study of five California medical schools. BMC Med Educ. 2017;17:190.
36. Tamblyn R, Abrahamowicz M, Brailovsky C, et al. Association between licensing examination scores and resource use and quality of care in primary care practice. JAMA. 1998;280:989–996.
37. Tamblyn R, Abrahamowicz M, Dauphinee WD, et al. Association between licensure examination scores and practice in primary care. JAMA. 2002;288:3019–3026.


Copyright © 2018 by the Association of American Medical Colleges