Dunleavy, Dana M. PhD; Kroopnick, Marc H. MEng, PhD; Dowd, Keith W. MA; Searcy, Cynthia A. PhD; Zhao, Xiaohui PhD
Research evaluating the predictive validity of the Medical College Admission Test (MCAT) has focused primarily on the relationship between MCAT scores and scores on the United States Medical Licensing Examination (USMLE) Step exams. Overall, studies have shown that MCAT scores predict medical school matriculants’ subsequent scores on licensure exams up to six years after matriculation.1–5 Critics suggest that these studies overstate the predictive value of MCAT scores because the analyses relate scores from a standardized admission test to scores from standardized licensing tests, ignoring important measures of medical student performance like course grades and clerkship ratings, the need for academic remediation, measures of academic distinction, and time to graduation.6,7
Although the majority of students who start medical school graduate within five years of matriculation, 6% do not.8 Some students leave medical school for academic reasons, and others leave for personal or financial reasons. Some students take longer than planned to graduate because they encounter academic difficulties in courses or clerkships, have trouble passing the USMLE Step exams, or are slowed by nonacademic complications. This study expands the criterion domain for predictive validity research on the MCAT exam by focusing on students’ progress through medical school. We examined relationships between MCAT total scores, undergraduate grade point averages (UGPAs), and a new variable: unimpeded progress toward graduation (UP). We considered matriculants who did not withdraw and were not dismissed for academic reasons, graduated within five years, and did not repeat any of the Step 1 or 2 exams before passing them to have experienced UP. We considered students who had difficulty in any of these areas not to have experienced UP; in other words, such students experienced impeded progress in medical school (IP).
Implications of IP and UP
UP is an important outcome because students’ progress through medical school has individual, institutional, and societal implications. Arguably, the most important implications of UP and IP are for individual matriculants. In 2012, the median one-year cost of attendance at U.S. medical schools was $53,685 for public schools and $72,344 for private schools; the median student education debt for the class of 2012 was $170,000.9 Matriculants who experience IP therefore have a higher cost of attendance and are likely to have higher educational debt on graduation than those who experience UP. (It should be noted, though, that not all schools charge students additional tuition for taking more than four years to graduate.) Additionally, students who withdraw or are dismissed because of academic difficulty may be saddled with sizable educational debt and lack the medical degree that would help them pay it off.
In addition, students who experience IP may have fewer options after graduation than those who experience UP. Not passing the Step exams on the first attempt, for example, has implications for GME opportunities. More than 80% of residency program directors responding to the 2010 National Resident Matching Program (NRMP) Director Survey indicated that they would seldom or never interview an applicant who had failed Step 1 or Step 2 Clinical Knowledge (Step 2 CK) on the first attempt.10 Applicants with first-attempt failures on the Step exams are less likely than those who pass to match in the NRMP’s Main Residency Match.11 They also may have a higher risk of being dismissed from GME programs that require residents to complete the USMLE sequence within a specific time frame12 and of failing to achieve licensure due to state licensing board restrictions on the number of attempts permitted and the time line for completing the USMLE sequence.13
The implications of IP also extend to medical schools. The most recent comprehensive study14 of the cost of educating medical students in the United States estimated that total educational resource costs (i.e., both direct instructional costs and additional costs required to support faculty) were $72,000 to $93,000 per student per year (in 1996 dollars). The costs likely have increased since that 1997 study was published, because of curricular reforms that emphasize low student–faculty ratios, problem-based and small-group learning, and increased clinical course work in the first year of medical school.15 In addition, there are opportunity costs associated with matriculants who withdraw or are dismissed for academic reasons because their slots could have been filled by others. Thus, medical schools bear an increase in total educational costs when students experience IP.
Finally, there are also societal implications related to IP. The Association of American Medical Colleges (AAMC) projects that there will be a shortage of about 125,000 physicians in the U.S. workforce by 2025.16 To the extent that matriculants withdraw, are dismissed for academic reasons, or experience delayed graduation, fewer residents and, ultimately, fewer physicians will be available to manage the nation’s growing health care needs.
For these reasons, we believe that UP is an important outcome and that information about it is of practical value to medical school faculty and administrators. To the extent that medical schools understand predictors of UP, they will be able to provide better support services for matriculants who are at risk for experiencing academic-related delays. They also may be better positioned to plan for the financial implications of accepting matriculants at risk of experiencing IP.
Previous research has examined the relationship between prematriculation variables and outcome variables similar to those described above. Using data on the 1977 version of the MCAT exam, Jones and Vanyur17 found that MCAT section scores were negatively related to delayed graduation and withdrawal/dismissal for academic reasons. More recent studies using data from the 1991 version of the MCAT exam have also shown links between MCAT scores and withdrawal/dismissal for academic reasons.1,18,19
Using data from a national sample of medical students who matriculated in 1992, Huff and Fang18 conducted a survival analysis to determine whether MCAT scores predicted if and when matriculants experienced academic difficulty in medical school. After controlling for other variables, they found that as MCAT scores increased, risk of experiencing academic difficulty decreased. Similarly, in a study of 11 medical schools, Julian1 showed that the percentage of students who experienced academic difficulty decreased as MCAT scores increased. Both studies’ authors noted that the majority of matriculants with lower MCAT scores completed medical school without experiencing academic difficulty.
Andriole and Jeffe19 examined the relationship between various prematriculation variables and a four-category variable in which matriculants were grouped into one of the following categories: (1) withdrawn or dismissed for academic reasons, (2) withdrawn or dismissed for nonacademic reasons, (3) graduated within 10 years and did not pass Step 1 and/or Step 2 CK on the first attempt, and (4) graduated within 10 years and passed Step 1 and/or Step 2 CK on the first attempt. The first three categories were considered “suboptimal” outcomes; the fourth was considered the optimal outcome. Andriole and Jeffe19 found that matriculants were more likely to have suboptimal outcomes if they were Asian/Pacific Islanders, belonged to underrepresented racial/ethnic groups, were 24 years of age or older, had obtained an undergraduate degree from an institution that was not classified as having very high research activity, had an MCAT total score less than 29, had premedical education debt of $10,000 or greater, and/or had participated in a summer academic enrichment program as an undergraduate.
In this study, we extend the published research by investigating the relationships between MCAT total scores, UGPAs, and UP, a new indicator that incorporates medical student academic outcomes beyond standardized test scores and occurs about six years after application to medical school. We examine these relationships at the school level, allowing for the investigation of potential differences in the MCAT’s predictive validity by school.
We drew all matriculant data used in this study from deidentified research tables in the AAMC’s Data Warehouse. We linked data for individual matriculants using the AAMC research identification variable. This study was approved by the institutional review board of the American Institutes for Research as part of the MCAT program’s psychometric research protocol.
Individuals who matriculated at 128 MD-granting U.S. medical schools between 2001 and 2004 and took the paper-and-pencil 1991 version of the MCAT exam were eligible for inclusion in this study. We selected these cohorts because, in February 2012 when we conducted this study, the majority of these matriculants had completed the Step 1, Step 2 CK, Step 2 Clinical Skills (Step 2 CS), and Step 3 exams and had graduated from medical school. We excluded matriculants enrolled in MD/PhD or other special programs because of planned delays in graduation. We also excluded matriculants from medical schools that were missing UGPA data for 30% or more of their matriculants (n = 6 schools) and from schools with special or joint programs that have unique educational missions or atypical time lines for graduation (n = 3 schools). The final subset of 119 medical schools (71 public and 48 private) mirrored the distribution of U.S. public and private schools and was geographically diverse. Across the four years, the number of students in each school ranged from 118 to 1,084, with a median of 426 students per school.
Description of variables
Predictor: Cumulative UGPA.
Cumulative UGPA is the average of the matriculant’s grades from all undergraduate courses; it excludes grades from any graduate courses. We chose UGPA rather than the biology, chemistry, physics, and math (BCPM) GPA because UGPA is a more complete representation of the undergraduate academic experience. Differences in baccalaureate course content and grading standards, however, make the meaning of UGPAs variable across undergraduate institutions.
Predictor: MCAT total score.
The MCAT total score is the sum of the matriculant’s scores on the three multiple-choice sections of the 1991 version of the exam: Verbal Reasoning (VR), Biological Sciences (BS), and Physical Sciences (PS). The VR section assesses the examinee’s ability to understand, evaluate, and apply information and arguments presented in text. The BS and PS sections assess the examinee’s ability to apply his or her introductory-level knowledge of biology, chemistry, and physics to solve scientific problems. Scores for each multiple-choice section are reported on a 15-point scale, resulting in an MCAT total score ranging from 3 to 45. In our analyses, we included each matriculant’s most recent MCAT total score at the time of application to medical school.
Outcome: Unimpeded progress (UP).
We created a dichotomous composite variable, UP, that represents academic progress in medical school. UP was operationalized as not being dismissed or withdrawing for academic reasons, graduating within five years of matriculation, and passing Step 1, Step 2 CK, and Step 2 CS on the first attempt. Using this variable, we identified two categories of matriculants: those who experienced UP and those who did not (i.e., experienced IP).
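As an illustration, the dichotomization described above can be sketched in a few lines of pandas. The column names and records below are hypothetical stand-ins for the deidentified AAMC data, not the actual Data Warehouse schema.

```python
import pandas as pd

# Hypothetical matriculant records; column names are illustrative only.
df = pd.DataFrame({
    "academic_withdrawal":        [False, False, True, False],
    "years_to_graduation":        [4, 5, None, 6],
    "step1_first_attempt_pass":   [True, True, False, True],
    "step2ck_first_attempt_pass": [True, False, False, True],
    "step2cs_first_attempt_pass": [True, True, False, True],
})

# UP = no withdrawal/dismissal for academic reasons, graduation within
# five years of matriculation, and first-attempt passes on Step 1,
# Step 2 CK, and Step 2 CS; everyone else experienced IP.
df["UP"] = (
    ~df["academic_withdrawal"]
    & (df["years_to_graduation"] <= 5)
    & df["step1_first_attempt_pass"]
    & df["step2ck_first_attempt_pass"]
    & df["step2cs_first_attempt_pass"]
).astype(int)
```

Note that a missing graduation year compares as `False` against the five-year cutoff, so such a record is conservatively classified as IP in this sketch.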
Many admission committees use MCAT total scores and UGPAs together to predict applicants’ academic readiness for medical school, but the ways they use these data differ according to institutions’ educational missions, goals, and applicant pools.20,21 As such, we examined the predictive validity of UGPAs and MCAT total scores separately and together. This allowed the comparison of the predictive validity for the following models:
* Model 1: UGPAs alone
* Model 2: MCAT total scores alone
* Model 3: UGPAs and MCAT total scores together
We examined the predictive validity of UGPAs and MCAT total scores at the school level. That is, we conducted separate analyses for each medical school. We used this approach for several reasons: (1) Medical schools use UGPAs and MCAT total scores differently, (2) the meaning of some medical student outcomes, such as standards for withdrawal/dismissal due to academic reasons and for graduation, differ across schools, (3) schools offer different levels and types of academic support, and (4) schools have their own educational missions and goals. These differences might alter the relationships between UGPAs, MCAT total scores, and UP. Adopting a school-level approach allowed us to investigate whether the direction and strength of relationships between these variables differed by school, as well as to estimate the validity of UGPAs and MCAT total scores across all 119 schools.
We used logistic regression analyses to estimate the relationships between UGPAs, MCAT total scores, and UP. We did not correct logistic regression analyses for range restriction. We summarized results across schools by computing the median and interquartile range (IQR) of predicted UP rates.
We also evaluated the extent to which each model differentiated between matriculants who experienced UP and those who did not, using the area under the receiver operating characteristic curve (AUC).22 We considered a model to be discerning when the entire 95% confidence interval (CI) around the AUC was above 0.50.23 For each model, we examined the 95% CI for the AUC by school and then computed the percentage of schools in which the CI lay entirely above 0.50.
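The per-school modeling can be sketched as follows, with simulated data for a single hypothetical school and scikit-learn standing in for whatever statistical software was actually used (the article does not specify its tooling). The coefficients, sample size, and score distributions are illustrative assumptions, not estimates from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Simulated records for one hypothetical school; the real analysis was
# run separately for each of the 119 schools on AAMC data.
n = 400
ugpa = np.clip(rng.normal(3.6, 0.25, n), 2.0, 4.0)
mcat = np.clip(rng.normal(30, 4, n), 15, 45).round()

# Simulate UP with an assumed positive dependence on both predictors.
logit = -14 + 2.0 * ugpa + 0.25 * mcat
up = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

def fit_auc(X, y):
    """Fit a logistic regression and return its in-sample AUC."""
    model = LogisticRegression().fit(X, y)
    return roc_auc_score(y, model.predict_proba(X)[:, 1])

auc_model1 = fit_auc(ugpa.reshape(-1, 1), up)            # Model 1: UGPA alone
auc_model2 = fit_auc(mcat.reshape(-1, 1), up)            # Model 2: MCAT alone
auc_model3 = fit_auc(np.column_stack([ugpa, mcat]), up)  # Model 3: both
```

In the study itself, each school's three AUCs were accompanied by 95% CIs, and a model counted as discerning for a school only when its CI excluded 0.50.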
Across the distribution of 119 medical schools included in this study, the majority of matriculants experienced UP. The percentage of matriculants experiencing UP was 83% for schools at the 10th percentile, 87% at the 25th percentile, 90% at the 50th percentile, 92% at the 75th percentile, and 95% at the 90th percentile. Thus, in the majority of schools in the sample, at least 83% of matriculants experienced UP.
Figure 1 shows the positive relationship between UGPAs and the percentage of matriculants predicted to experience UP: The likelihood of a matriculant experiencing UP increases as his or her UGPA increases until UGPA exceeds 3.50, and then it tends to level off. As illustrated by the size of the IQRs, the relationship between UGPAs and UP varies across medical schools. When UGPAs are low, there is more variability in the likelihood of UP than when UGPAs are high.
Figure 2 shows the positive relationship between MCAT total scores and the percentage of matriculants predicted to experience UP: The likelihood of a matriculant experiencing UP increases consistently as his or her MCAT score increases until MCAT total score exceeds 30, at which point it tends to level off. As observed with UGPAs, the relationship between MCAT total scores and UP varies across medical schools. When MCAT scores are low, there is more variance in the likelihood of UP than when MCAT scores are high. However, the IQRs for MCAT total scores are smaller than those for UGPAs, indicating that the relationship between MCAT scores and UP is more similar across schools than is the relationship between UGPAs and UP.
Figure 3 shows that the relationship between MCAT total scores and the percentage of matriculants predicted to experience UP depends on UGPAs. That is, at all points along the MCAT total score scale, medical students are more likely to experience UP if they have higher UGPAs. This effect is stronger for lower MCAT total scores than for higher MCAT total scores.
Our analyses to determine which model is the best predictor of UP provided the following results:
* Model 1: UGPAs alone differentiated between matriculants who were and were not likely to experience UP in 76 (64%) schools.
* Model 2: MCAT total scores alone differentiated between matriculants who were and were not likely to experience UP in 89 (75%) schools.
* Model 3: UGPAs and MCAT total scores together differentiated between matriculants who were and were not likely to experience UP in 107 (90%) schools.
Thus, the combination of UGPAs and MCAT total scores offers better prediction of UP than either UGPAs or MCAT total scores alone.
In this study, we found that the combination of UGPAs and MCAT total scores predicts UP, an academic outcome that relies on data beyond standardized test scores and occurs about six years after application to medical school. MCAT total scores, however, contribute more to the prediction of UP than do UGPAs. By using data for matriculants at 119 U.S. medical schools, we demonstrated that the relationships among UGPA, MCAT total scores, and UP generalize across medical schools, although there is some variance in predictive value at lower UGPAs and lower MCAT total scores.
First, we extended previous research on the predictive validity of the MCAT exam by examining the relationships between UGPAs, MCAT total scores, and our UP indicator, which incorporates data about not experiencing academic difficulty resulting in withdrawal or dismissal from medical school, graduating within five years of matriculation, and passing Step 1, Step 2 CK, and Step 2 CS on the first attempt. Our findings indicate that UGPAs and MCAT total scores are both strong predictors of the extent to which matriculants will experience UP; this is important because it shows that UGPAs and MCAT total scores predict academic performance in medical school well beyond the first two years. Our findings are also consistent with research showing that MCAT scores predict IP in medical school17–19 as well as academic outcomes beyond test scores, such as grades in basic sciences courses, clerkship performance, and academic difficulty or distinction.1,5
Second, consistent with Julian’s study,1 our findings indicate that MCAT total scores are better predictors of UP than are UGPAs alone. This is likely because the content of the MCAT exam is more closely aligned with the USMLE Step exams (which are a component of UP) than are UGPAs. UGPAs reflect several areas of study and are likely influenced by factors beyond academic knowledge and skill (e.g., study habits). In addition, UGPAs are not standardized across undergraduate institutions.
Third, we found that UGPAs and MCAT total scores together predict UP better than either UGPAs or MCAT total scores do alone. This finding is consistent with previously published research indicating that the combination of UGPAs and MCAT total scores yields the best prediction of scores on the Step exams.1 It also suggests that medical school admission committees should consider UGPA and MCAT total scores together when evaluating applicants.
Finally, we examined the extent to which the relationships between UGPAs, MCAT total scores, and academic outcomes (i.e., UP) varied across 119 U.S. medical schools. Our results indicated that there was consistency across schools; however, there was more variability between schools in the percentage of matriculants predicted to experience UP at lower UGPAs and lower MCAT total scores. One reason for variability in these relationships is likely sampling error due to differences in sample sizes, applicant pools, and admission criteria. Other reasons include medical schools’ different goals and missions, their different standards for academic performance and graduation,24 and the different levels of academic support they offer throughout medical school and in preparation for the Step exams. These differences, particularly in the level of academic support provided, may have more impact on students with lower UGPAs and lower MCAT scores.
This study was limited to the variables about academic performance in medical school available in AAMC databases. Our data did not allow us to examine the relationships between UGPAs, MCAT total scores, medical school grades, clerkship ratings, and other local indicators of students’ academic performance in medical school. In addition, results of this study may not generalize to the new versions of the MCAT and Step 1 exams.
There are also some limitations of the UP variable that may influence the generalizability of our results and the magnitude of the relationships between UP, UGPAs, and MCAT total scores. As noted above, we employed UP as a composite variable that included not withdrawing or being dismissed for academic reasons, graduating within five years, and passing Step 1, Step 2 CK, and Step 2 CS on the first attempt. UP could be defined in different ways, and its components could differ by medical school. For example, it is possible that failing Step 2 CK may not delay graduation at all schools. In addition, UP was limited by the quality of data available about matriculants with planned delays in graduation. We tried to minimize this limitation by excluding matriculants who were enrolled in joint degree programs or other special programs that may delay graduation; however, we were not able to identify all such matriculants.
Additionally, we did not correct for range restriction in the logistic regression analyses because there is not an agreed-on approach for doing so.25 Further, the majority of matriculants proceed through medical school without major academic setbacks and pass Step 1, Step 2 CK, and Step 2 CS on the first attempt.24 As a result, there was relatively little variance in UP within or across schools, limiting our ability to detect an effect. There were also small sample sizes for some extreme UGPAs and MCAT total scores, which may have limited the accuracy and generalizability of our predictions at those score levels.
Finally, this study did not examine the relationships between UGPAs, MCAT total scores, and physician performance because relevant outcome data were not available. Recent models suggest that physician performance is complex and multidimensional, consisting of several meta-dimensions: academic knowledge and skills (e.g., clinical knowledge and expertise, clinical problem solving), interpersonal skills (e.g., communicating and building relationships), and intrapersonal skills (e.g., professional integrity, personal organization).26–28 It is important to note that the MCAT exam is not designed to predict the entire domain of medical student or physician performance. Rather, it is designed to predict academic knowledge and skills alone. Other admission tools, such as the interview, are intended to predict interpersonal and intrapersonal aspects of medical student and physician performance.
As admission tools are designed to predict different aspects of performance, we suggest that future research on predictive validity clearly specify which aspects of performance the tool is designed to predict and provide a conceptual rationale for specific predictor–outcome relationships. Future research on the MCAT exam should examine whether MCAT total scores predict long-term academic knowledge and skills in clinical settings. For example, outcomes like diagnostic accuracy, recertification, career distinction, and promotion in military settings would be conceptually appropriate outcomes given the purpose of the MCAT exam. Additionally, as performance is multidimensional in nature, it is important to evaluate the incremental contribution of nonacademic factors (e.g., interpersonal skills) above UGPAs and MCAT total scores in predicting academic knowledge and skills. Likewise, future research should assess whether UGPAs and MCAT total scores contribute to the prediction of other aspects of performance, such as communication skills or demonstrating cultural competence, which may rely on specific technical knowledge.
We also suggest that this study be replicated with data from the future versions of the MCAT exam and Step exams and with BCPM GPA. Researchers should also examine school-level variables (e.g., provision of academic support, mission, class size) that may moderate the relationships between UGPAs, MCAT total scores, and various medical student outcomes. For example, does smaller class size or the provision of academic support reduce the relationships between UGPAs, MCAT scores, and UP? To these ends, the AAMC plans to establish a validity studies service with a pilot group of medical schools to validate the 2015 version of the MCAT exam. This service will be used to expand the evidence base for the validity of the MCAT exam, and it will act as a springboard for ongoing and collaborative validity research between the AAMC and member schools.
Acknowledgments: The authors would like to thank the following Association of American Medical Colleges (AAMC) personnel for reviewing earlier drafts of this manuscript: Karen Mitchell, Scott Oppler, Clese Erikson, Robert Jones, Atul Grover, Elisa Siegel, Geoff Young, and Henry Sondheimer. They would also like to thank the members of the MR5 Committee: Steven Gabbe, Ronald Franks, Lisa Alty, Dwight Davis, Kevin Dorsey, Michael Friedlander, Robert Hilborn, Barry Hong, Richard Lewis, Maria Lima, Catherine Lucey, Alicia Monroe, Saundra Oyewole, Erin Quinn, Richard Riegelman, Gary Rosenfeld, Wayne Samuelson, Richard Schwartzstein, Maureen Shandling, Catherine Spina, and Ricci Sylla, as well as consultant Paul Sackett. In addition, they would like to thank three anonymous reviewers whose comments greatly improved this article.
Other disclosures: Medical College Admission Test (MCAT) is a program of the AAMC. Related trademarks owned by the AAMC include Medical College Admission Test, MCAT, and MCAT2015.
Ethical approval: This study was approved by the institutional review board of the American Institutes for Research as part of the MCAT program’s psychometric research protocol.
1. Julian ER. Validity of the Medical College Admission Test for predicting medical school performance. Acad Med. 2005;80:910–917
2. Kleshinski J, Khuder SA, Shapiro JI, Gold JP. Impact of preadmission variables on USMLE Step 1 and Step 2 performance. Adv Health Sci Educ. 2009;14:69–78
3. Callahan CA, Hojat M, Veloski J, Erdmann JB, Gonnella JS. The predictive validity of three versions of the MCAT in relation to performance in medical school, residency, and licensing examinations: A longitudinal study of 36 classes of Jefferson Medical College. Acad Med. 2010;85:980–987
4. Violato C, Donnon T. Does the Medical College Admission Test predict clinical reasoning skills? A longitudinal study employing the Medical Council of Canada clinical reasoning examination. Acad Med. 2005;80(10 suppl):S14–S16
5. Donnon T, Paolucci EO, Violato C. The predictive validity of the MCAT for medical school performance and medical board licensing examinations: A meta-analysis of the published research. Acad Med. 2007;82:100–106
7. McGaghie WC. Perspectives on medical school admission. Acad Med. 1990;65:136–139
8. Association of American Medical Colleges. Using MCAT Data in Medical Student Selection. Washington, DC: Association of American Medical Colleges; 2012
9. Association of American Medical Colleges. Medical Student Education: Costs, Debt, and Loan Repayment Facts. http://www.aamc.org/first. Accessed December 17, 2012
10. National Resident Matching Program. Results of the 2010 NRMP Program Director Survey. Washington, DC: National Resident Matching Program; 2010
11. National Resident Matching Program, Association of American Medical Colleges. Charting Outcomes in the Match: Characteristics of Applicants Who Matched to Their Preferred Specialty in the 2011 Main Residency Match. 4th ed. Washington, DC: National Resident Matching Program; 2011
14. Jones RF, Korn D. On the cost of educating a medical student. Acad Med. 1997;72:200–210
15. Goodwin P, Krakower J. Exploring the Cost of Undergraduate Education. Washington, DC: Association of American Medical Colleges; 2012
16. Center for Workforce Studies. The Complexities of Physician Supply and Demand: Projections Through 2025. Washington, DC: Association of American Medical Colleges; 2008
17. Jones RF, Vanyur S. MCAT scores and student progress in medical school. J Med Educ. 1984;59:527–531
18. Huff KL, Fang D. When are students most at risk of encountering academic difficulty? A study of the 1992 matriculants to U.S. medical schools. Acad Med. 1999;74:454–460
19. Andriole DA, Jeffe DB. Prematriculation variables associated with suboptimal outcomes for the 1994–1999 cohort of US medical school matriculants. JAMA. 2010;304:1212–1219
20. Mitchell KJ, Haynes R. Score reporting for the 1991 Medical College Admission Test. Acad Med. 1990;65:719–723
21. Monroe A, Quinn E, Samuelson W, Dunleavy DM, Dowd KW. An overview of the medical school admission process and use of applicant data in decision making: What has changed since the 1980s? Acad Med. 2013;88:672–681
22. Hosmer DW, Lemeshow S. Applied Logistic Regression. New York, NY: John Wiley and Sons, Inc.; 2000
23. Utzman RR, Riddle DL, Jewell DV. Use of demographic and quantitative admissions data to predict academic difficulty among professional physical therapist students. Phys Ther. 2007;87:1164–1180
24. Andriole DA, Jeffe DB. A national cohort study of U.S. medical school students who initially failed Step 1 of the United States Medical Licensing Examination. Acad Med. 2012;87:529–536
25. Stauffer J, Ree MJ. Predicting with logistic or linear regression: Will it make a difference in who is selected for pilot training? Int J Aviat Psychol. 1996;6:233–240
27. Patterson F, Ferguson E, Lane P, Farrell K, Martlew J, Wells A. A competency model for general practice: Implications for selection, training, and development. Br J Gen Pract. 2000;50:188–193
28. Lievens F, Sackett PR. The validity of interpersonal skills assessment via situational judgment tests for predicting academic success and job performance. J Appl Psychol. 2012;97:460–468