Research Reports

The Validity of Scores From the New MCAT Exam in Predicting Student Performance: Results From a Multisite Study

Busche, Kevin MD, FRCPC; Elks, Martha L. MD, PhD; Hanson, Joshua T. MD, MPH; Jackson-Williams, Loretta MD, PhD; Manuel, R. Stephen PhD; Parsons, Wanda L. MD, FCFP; Wofsy, David MD; Yuan, Kun PhD

doi: 10.1097/ACM.0000000000002942

Abstract

The Medical College Admission Test (MCAT) is a tool for assessing applicants’ academic readiness for learning in medical school. It is a standardized examination that measures the foundational knowledge of scientific concepts and reasoning skills needed by entering medical students. Medical school admissions committees use academic metrics, such as MCAT scores and undergraduate grade point averages (UGPAs), with other information about applicants’ academic preparation and experiences as well as insights gathered during interviews, when making admissions decisions. Because the meaning of UGPAs differs for applicants from different undergraduate institutions and with different majors and coursework,1 MCAT scores provide an important common measure of academic readiness.

In April 2015, the Association of American Medical Colleges (AAMC) introduced a new version of the MCAT exam. The exam was redesigned to reflect changes in medical education, medical science, and health care delivery as well as changes in society and the increasingly diverse and aging patient population in the United States and Canada, which have occurred since the last revision of the MCAT exam in 1991.2,3 The new exam still tests foundational concepts in biology, chemistry, and physics, along with verbal reasoning skills; now, it also includes concepts from first-semester biochemistry and introductory psychology and sociology. The new exam also requires applicants to demonstrate more scientific reasoning skills than the old exam did.

In 2014, the AAMC formed the MCAT Validity Committee (MVC) to evaluate the validity of scores from the new MCAT exam. The committee includes representatives from 16 U.S. and 2 Canadian medical schools and 2 prehealth advisors serving in current or previous leadership roles with the National Association of Advisors for the Health Professions. The MVC is tasked with evaluating evidence about the fairness, impact, use, and predictive validity of scores from the new MCAT exam, as professional testing standards require.4 This research is the foundation for evaluating the soundness of using MCAT scores in admissions decisions.

The predictive validity research conducted by the MVC examines the value of scores from the new MCAT exam in predicting medical student performance on a variety of outcomes throughout medical school, including student performance in individual preclerkship and clerkship courses, passing the United States Medical Licensing Examination (USMLE) Step exams, and progression through and graduation from medical school. This research also examines whether MCAT scores provide comparable prediction of performance in medical school for students from different sociodemographic backgrounds.

Research on the old MCAT exam (1991 to January 2015) addressed these same questions. Findings from previous research showed that scores from the old MCAT exam predicted student performance throughout medical school.5–11 Prior research also found that scores from the old exam provided similar prediction of performance on the USMLE Step 1 exam and medical school graduation for students from different racial/ethnic backgrounds.8,12

In this research report, we present the results from the MVC’s first multisite study on the predictive validity of scores from the new MCAT exam. We examined the extent to which total scores predict student performance in the first year of medical school (M1). Specifically, we addressed the following 3 research questions:

  1. Do the total scores from the new MCAT exam predict students’ summative performance across first-year courses and on-time progression to year 2 (M2)?
  2. Do the total scores from the new MCAT exam add value beyond UGPAs in predicting students’ summative performance across first-year courses?
  3. Do the total scores from the new MCAT exam provide comparable prediction of summative performance across first-year courses and on-time progression to year 2 for students from different sociodemographic backgrounds?

Method

Participants

We used data from 2 groups of 2016 medical school matriculants in our analyses: (1) the national population of 2016 matriculants with scores from the new MCAT exam and (2) the sample of 2016 matriculants with scores from the new exam who attended one of the medical schools conducting MCAT validity research (validity schools) and who volunteered to participate in the research on their institution-specific student outcomes.

We refer to the first group as the national population because it included all 2016 matriculants to every U.S. MD-granting institution who had scores from the new MCAT exam (N = 7,970). These students were a subset of the more than 21,000 students who matriculated at the 145 accredited U.S. MD-granting institutions in 2016.

We refer to the second group as the validity sample. It included the students enrolled at the 16 medical schools participating in the MCAT validity research who volunteered to participate in this study. These schools were selected from the 65 medical schools that volunteered to participate in the MCAT validity research, and they represented a wide range of institutional missions, geographic regions, public/private status, applicant pool sizes and characteristics, curricula, instruction, and grading practices.

About 80% (N = 955) of the students with scores from the new MCAT exam who matriculated in 2016 at the validity schools volunteered for the study. Among these 955 students, 83% (N = 791) came from U.S. medical schools and were also a subset of the national population of 2016 matriculants with scores from the new MCAT exam. The remaining students (N = 164) came from 2 Canadian schools. The validity sample was representative of the total population of 2016 matriculants at the validity schools (N = 2,541) based on demographic characteristics and undergraduate academic performance.

Data from these 2 groups complemented each other. The national population included more students and represented more schools, and it allowed for the study of the first milestone in medical school—sufficient mastery of the curriculum to progress on time to the next year. The validity sample was smaller, but it allowed for our examination of institution-specific outcomes.

We drew our data from deidentified research tables in the AAMC’s Data Warehouse and from outcome data provided by the validity schools. This study was approved by the institutional review board of the American Institutes for Research.

Criterion outcomes

We used 2 types of criterion outcomes in this study.

Summative performance in M1.

The validity schools identified M1 courses to include in this evaluation. To be included, a course had to have at least one medical student performance outcome that met the following criteria: it measured individual (not team) performance, had adequate variation in student performance, represented students’ first attempts on the outcome, and was continuous, ranging from 0 to 100. (The majority of M1 courses at each school met these criteria and were included.)

Examples of selected courses included biochemistry, cell and molecular biology, cardiovascular and pulmonary systems, behavioral medicine and health, health care ethics, introduction to clinical anatomy, and community engagement. Although the selected courses varied widely in the extent to which they related to the knowledge and skills tested on the MCAT exam, most taught natural sciences subjects. For each selected course, we analyzed one primary outcome. Some primary outcomes were weighted averages of students’ performance on multiple assessments given throughout the course; others were based on students’ performance on the final exam. Because the courses selected by each validity school comprised the majority of its M1 courses, students’ average performance in these courses served as a measure of their summative performance in their first-year coursework.

On-time progression to M2.

We generated this progression outcome from student enrollment records submitted by the registrars of the 145 accredited U.S. MD-granting schools. Students were categorized into 2 groups according to their school’s curriculum: those who progressed on time to M2 and those who did not. This outcome allowed us to study each student’s performance as defined by her or his sufficient mastery of the curriculum to progress on time to the next year. We analyzed progression data for all students in regular MD programs; students in dual degree programs were excluded from the analysis.

Predictors

Total scores from the new MCAT exam and UGPAs were used as predictors.

MCAT total scores.

The new MCAT exam has 4 sections: (1) Biological and Biochemical Foundations of Living Systems; (2) Chemical and Physical Foundations of Biological Systems; (3) Psychological, Social, and Biological Foundations of Behavior; and (4) Critical Analysis and Reasoning Skills. The first 3 sections test 10 foundational concepts and 4 scientific inquiry and reasoning skills in the natural, behavioral, and social sciences. The fourth section tests how well test takers comprehend, analyze, and evaluate what they read, draw inferences from text, and apply arguments to new ideas and situations.13 The new MCAT exam reports 4 section scores and a total score. The 4 section scores range from 118 to 132, with a midpoint at 125. The total score is the sum of the 4 section scores, with a range from 472 to 528 and a midpoint at 500.
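The score structure described above follows directly from the section ranges and can be verified with simple arithmetic:

```python
# Each of the 4 sections is scored from 118 to 132, and the total score
# is the sum of the 4 section scores.
section_min, section_max = 118, 132
section_midpoint = (section_min + section_max) // 2  # 125

total_min = 4 * section_min                          # 472
total_max = 4 * section_max                          # 528
total_midpoint = (total_min + total_max) // 2        # 500
```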

Total UGPAs.

Total UGPAs came either from the application service used by the validity school (i.e., the American Medical College Application Service or the Texas Medical and Dental School Application Service) or from the validity school itself for the 2 Canadian schools that did not use an external application service. In all cases, UGPAs were verified and standardized by the application services or the validity schools to allow medical schools to compare the academic experiences of applicants from undergraduate institutions that used different academic calendars and grading systems.14,15 UGPAs ranged from 0 to 4 for all but one validity school in Canada, which calculated students’ UGPAs on a 0–100 scale.

Data analysis

Research question 1.

To address the first research question about the extent to which MCAT scores predict students’ M1 performance, we correlated MCAT total scores with validity school students’ summative performance across M1 courses. These analyses were conducted by school to control for differences across schools in the course outcomes used to generate the summative performance outcome. The validity coefficients presented in this report represent the correlations of MCAT total scores with summative performance in M1 after correcting for range restriction in MCAT total scores and UGPAs due to student selection in the admissions process (see the Figure 1 legend for more details).16 Correlation coefficients can range from 0 (no relationship) to ±1 (perfect positive/negative relationship).17 We present medians and interquartile ranges to summarize validity coefficients across schools.18
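The study applied a multivariate range-restriction correction (accounting for selection on both MCAT scores and UGPAs).16 As an illustration of the underlying principle, the simpler univariate Thorndike Case II correction can be sketched as follows; the input numbers are invented for the example and are not taken from the study:

```python
import math

def correct_range_restriction(r_restricted, sd_unrestricted, sd_restricted):
    """Thorndike Case II correction for direct range restriction.

    Inflates a correlation observed in a selected (range-restricted)
    sample toward its value in the unrestricted applicant pool. The
    study used a multivariate correction; this univariate version
    illustrates the same idea in its simplest form.
    """
    k = sd_unrestricted / sd_restricted
    return (r_restricted * k) / math.sqrt(1 + r_restricted**2 * (k**2 - 1))

# Invented numbers: a correlation of 0.45 observed among matriculants
# whose MCAT SD is 4.5, drawn from an applicant pool with MCAT SD 6.9.
corrected = correct_range_restriction(0.45, 6.9, 4.5)
```

When the sample and applicant-pool standard deviations are equal, the correction leaves the correlation unchanged; the larger the restriction, the larger the upward adjustment.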

Figure 1:
Median and interquartile range of the corrected correlations of Medical College Admission Test (MCAT) total scores with students’ summative performance in the first year of medical school (M1). Summative performance is the mean of students’ scores from the M1 courses that each validity school selected to include in this study. Summative performance showed a strong correlation (above 0.90) with students’ official end-of-year performance, such as their M1 grade point average (GPA) or M1 class rank. The observed correlations were corrected for range restriction in MCAT total scores and undergraduate GPAs (UGPAs) due to student selection in the admissions process.16 The correlation correction adjusted for the differences in the standard deviations of MCAT scores and UGPAs in each school’s validity sample compared with the school’s applicant pool while accounting for the correlations among MCAT scores, UGPAs, and the outcome in the validity sample. Because more 2017 applicants than 2016 applicants to the validity schools had scores from the new exam, the corrections for range restriction were made using data from the 2017 admissions cycle. The corrected correlation coefficients (validity coefficients) are presented. The number next to the solid circle shows the median correlation. The numbers at the ends of the interquartile range show the correlation at the 25th (lower end) and 75th (higher end) percentiles of the distribution of corrected correlation coefficients, respectively. The horizontal line with a Y axis value at 0.3 is the reference line for a medium association.17 See Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A731 for the observed correlations.

We also calculated the percentages of medical students in the national population who achieved on-time progression to M2 across different ranges of MCAT total scores.

Research question 2.

To address the second research question about the value of MCAT total scores and UGPAs, alone and together, in predicting students’ summative performance in M1, we conducted 3 sets of regression analyses: Model 1: MCAT total score as the only predictor, Model 2: UGPA as the only predictor, and Model 3: both MCAT total score and UGPA as predictors.

We conducted the regressions by school and corrected the resulting correlations for range restriction in the validity sample compared with each school’s applicant population (see the Figure 3 legend for more details).16 We present the medians and interquartile ranges of the validity coefficients of the predictor(s) with the outcome,18 along with the percentage of variance in the outcome explained by the predictor(s).
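The three nested models can be sketched with simulated data; the data-generating numbers below are invented for illustration (the study used each school’s actual outcomes and corrected the results for range restriction):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated (not real) data with plausible means and SDs for MCAT total
# scores, UGPAs, and an M1 summative-performance outcome (0-100 scale).
mcat = rng.normal(508, 7, n)
ugpa = np.clip(rng.normal(3.70, 0.25, n) + 0.01 * (mcat - 508), 0, 4)
outcome = 80 + 0.5 * (mcat - 508) + 8 * (ugpa - 3.70) + rng.normal(0, 4, n)

def r_squared(predictors, y):
    """Proportion of variance in y explained by an OLS fit with intercept."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1 - resid.var() / y.var()

r2_mcat = r_squared([mcat], outcome)        # Model 1: MCAT only
r2_ugpa = r_squared([ugpa], outcome)        # Model 2: UGPA only
r2_both = r_squared([mcat, ugpa], outcome)  # Model 3: both predictors
```

Because the models are nested, the two-predictor model can never explain less variance than either single-predictor model on the same data, which is the comparison underlying research question 2.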

Research question 3.

We used well-established regression procedures4,12 to address the research question about whether MCAT scores provided comparable prediction of performance for students from different sociodemographic backgrounds, conducting analyses by race/ethnicity, highest parental education, and gender.19–21

For each sociodemographic variable, students were divided into 2 groups. For race/ethnicity, students who self-identified as black or African American; Hispanic, Latino, or Spanish; American Indian or Alaska Native; or Native Hawaiian or other Pacific Islander were categorized as underrepresented in medicine (URM).22 Students who self-identified as white or Asian were categorized as non-URM.

Highest parental education was used as a proxy for students’ socioeconomic backgrounds because of the close relationship between education level and income.23 Students were categorized into 2 groups based on parental attainment of a bachelor’s degree. One group included students who reported their parents did not have a bachelor’s degree. The other included students who reported that at least one parent had a bachelor’s degree.

The third sociodemographic group was based on self-reported gender. One group included female students, and the other included male students.

We compared the degree to which MCAT total scores predicted summative performance in M1 and on-time progression to M2 by race/ethnicity, parental education, and gender. We conducted 3 sets of linear regression analyses for summative performance in M1 and 3 sets of logistic regression analyses for on-time progression to M2.

We used the estimated regression parameters from each regression analysis to generate the predicted outcome for each student. We then computed the average observed and predicted outcomes separately for all students in a sociodemographic group. We tested whether the mean residual, that is, the difference between the average observed and predicted outcomes, differed from zero. We also computed the effect size associated with each residual to estimate the magnitude of prediction error. An effect size of 0.2 is considered small24; prediction error with an effect size less than 0.2 means the difference between the average observed and predicted outcomes is trivial and of no practical importance. All analyses were conducted using Stata (version 14; StataCorp, College Station, Texas).
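The residual and effect-size computation can be sketched as follows. The effect size here is the mean residual scaled by the standard deviation of the observed outcome, a standardized-mean-difference style measure that may differ in detail from the study’s exact formula; the scores are hypothetical:

```python
import numpy as np

def prediction_error(observed, predicted):
    """Mean residual (observed minus predicted) and a standardized
    effect size: the mean residual divided by the SD of the observed
    outcome. An absolute effect size below 0.2 is conventionally small.
    """
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    residuals = observed - predicted
    effect_size = residuals.mean() / observed.std(ddof=1)
    return residuals.mean(), effect_size

# Hypothetical scores for one sociodemographic group (0-100 scale).
mean_residual, effect_size = prediction_error(
    [80, 82, 84, 86], [81, 82, 83, 85]
)
```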

Results

Characteristics of participants and outcomes

Students in the validity sample were similar to the national population of medical students with scores from the new MCAT exam based on most sociodemographic characteristics. Slightly larger percentages of students received fee assistance from the AAMC and identified as black or African American in the validity sample than in the national population. The validity sample and the national population had similar means and standard deviations (SDs) of MCAT total scores (mean = 507.87, SD = 7.34 for the sample; mean = 508.57, SD = 6.92 for the population; see Table 1). Both the validity sample and the national population had a mean UGPA of 3.70 (SD = 0.25).

Table 1:
Demographic Characteristics of the Validity Sample and the National Population of 2016 Medical School Matriculants With Scores From the New MCAT Exam Who Were Included in an Analysis of the Validity of Scores From the New MCAT Exam in Predicting Student Performance

Summative performance in M1

Figure 1 shows the median and interquartile range of the correlations of MCAT total scores with students’ summative performance in M1. The median correlation was 0.57, and the 25th and 75th percentiles were 0.47 and 0.68, respectively.

On-time progression to M2 by MCAT total scores

Figure 2 shows the percentages of students in the national population who progressed to M2 on time by different ranges of MCAT total scores. Overall, the vast majority (7,736; 97%) of these students progressed on time, and the progression rate for students across a wide range of MCAT total scores was high.

Figure 2:
Percentage of the 2016 cohort of medical school matriculants with scores from the new Medical College Admission Test (MCAT) who progressed to year 2 of medical school on time, by MCAT total score range. In total, 7,970 students in the national population of 2016 matriculants with scores from the new MCAT exam were included in the analysis. Students enrolled in MD/PhD or other dual degree programs were not included due to the planned delay in graduation. Less than 2% (N = 131) of the national population reported MCAT scores below 494; in this group, the on-time progression rate was 79%. The numbers of students in the score ranges below 494 are too small to interpret meaningful differences in their progression rates compared with those with scores at or above 494. On-time progression rates are based on observed progression for those students who were admitted to medical school and do not reflect the potential performance of those who were not accepted. Additionally, multiple factors may contribute to the lack of on-time progression, including a possible combination of academic and nonacademic reasons.

The percentage of students who progressed on time was 93% or above for those students with MCAT total scores from 494 to 528. Less than 2% (N = 131) of the national population reported scores below 494. The number of students with scores below 494 was too small to interpret meaningful differences in their progression rate compared with those who scored at or above 494.

Value of MCAT total scores and UGPAs in predicting students’ summative performance in M1

Figure 3 shows the medians and interquartile ranges of the correlations of MCAT total scores (Panel A), UGPAs (Panel B), and MCAT total scores and UGPAs together (Panel C) with students’ summative performance in M1.

Figure 3:
Medians and interquartile ranges of corrected correlations of new Medical College Admission Test (MCAT) total scores and undergraduate grade point averages (UGPAs), alone (Panels A and B) and together (Panel C), with students’ summative performance in the first year of medical school (M1). Summative performance is the mean of students’ scores from the M1 courses that each validity school selected to include in this study. Summative performance showed a strong correlation (above 0.90) with students’ official end-of-year performance, such as their M1 GPA or M1 class rank. The observed correlations were corrected for range restriction in MCAT total scores and UGPAs due to student selection in the admissions process.16 The correlation correction adjusted for the differences in the standard deviations of MCAT scores and UGPAs in each school’s validity sample compared with the school’s applicant pool, while accounting for the correlations among MCAT scores, UGPAs, and the outcome in the validity sample. Because more 2017 applicants than 2016 applicants to the validity schools had scores from the new exam, the corrections for range restriction were made using data from the 2017 admissions cycle. The corrected correlation coefficients (validity coefficients) are presented. The numbers next to the solid circles show the median correlations. The numbers at the ends of the interquartile ranges show the correlations at the 25th (lower end) and 75th (higher end) percentiles of the distribution of corrected correlation coefficients, respectively. The horizontal line with a Y axis value at 0.3 is the reference line for a medium association.17 See Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A731 for the observed correlations.

As described previously, the median correlation of MCAT total scores with summative performance in M1 was 0.57, and the 25th and 75th percentiles were 0.47 and 0.68, respectively. The median correlation of UGPAs with summative performance in M1 was 0.52, and the 25th and 75th percentiles were 0.42 and 0.62, respectively. The results for MCAT total scores were similar to those for UGPAs, but the median and interquartile range for MCAT total scores were slightly higher than those for UGPAs. When using both MCAT total scores and UGPAs to predict summative performance in M1, the median correlation was 0.65, with an interquartile range of 0.56–0.79.

Using MCAT scores and UGPAs together explained a much greater percentage of the variance in students’ summative performance in M1 than using UGPAs alone. The percentage of variance explained by MCAT scores and UGPAs together based on the median validity coefficient (42%) was 1.6 times the percentage of variance explained by UGPAs alone (27%). These results show that, together, MCAT total scores and UGPAs provided stronger prediction of students’ summative performance in M1 than either predictor alone.
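The variance-explained figures above follow directly from squaring the median validity coefficients:

```python
# Percentage of variance explained is the squared validity coefficient.
pct_both = round(0.65 ** 2 * 100)  # MCAT total scores + UGPAs together
pct_ugpa = round(0.52 ** 2 * 100)  # UGPAs alone
ratio = pct_both / pct_ugpa        # roughly 1.6 times as much variance
```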

Predicting performance for students from different sociodemographic backgrounds

Table 2 shows the observed and predicted performance outcomes for students by race/ethnicity, highest parental education, and gender; the differences between the observed and predicted outcomes; and the effect sizes associated with those differences.

Table 2:
Comparison of Observed and Predicted Performance for 2016 Medical School Matriculants From Different Sociodemographic Backgrounds Who Took the New MCAT Exam

In the validity sample, the differences between the observed and predicted outcomes of summative performance in M1 were minor for students by race/ethnicity, parental education, and gender. None of the mean differences were statistically significant. The magnitudes of these differences were also of no practical importance. For example, the mean difference between the observed and predicted performance for students identified as URM was −0.38 (on a 0–100 scale), with an effect size of −0.07. The mean difference for students whose parents did not have a bachelor’s degree was −0.09, with an effect size of −0.02. The mean difference for female students was 0.13, with an effect size of 0.03.

In the national population, there were either no differences or only minor ones between the observed and predicted outcomes of on-time progression to M2 for students by race/ethnicity, highest parental education, and gender. No differences were statistically or practically significant. For instance, 95% of students identified as URM were predicted to progress on time compared with 94% who did. The observed progression rates were the same as the predicted rates for medical students whose parents did not have a bachelor’s degree (96%). Similarly, female students had the same observed and predicted progression rates (96%).

Together, these results showed that medical students from different sociodemographic backgrounds performed, on average, at the levels that their MCAT scores predicted they would across their M1 courses and in their on-time progression to M2.

Discussion

Our study is the first multisite evaluation of the validity of scores from the new MCAT exam in predicting student performance in medical school, focusing on performance in M1. It is also the first study to assess whether scores from the new exam provide comparable prediction of performance in M1 for students from different sociodemographic backgrounds. Summative performance across selected M1 courses and on-time progression to M2 were used as outcomes.

The correlations of MCAT total scores with summative performance in M1 ranged from medium to large across the validity schools included in this study. These findings are important. Knowing that MCAT scores provide valid information about applicants’ readiness for medical school can give admissions committees the flexibility to select applicants who are academically prepared for medical school while taking into account the number of students they have the resources to support.

The predictive validity findings from our study are consistent with the findings from previous large-scale, multicohort studies on the predictive validity of scores from the old MCAT exam.5 They are also consistent with findings about the validity of scores from admission exams for other graduate or professional school programs, such as the Law School Admission Test18 and the Graduate Management Admission Test,25 which show medium to large correlations with first-year performance.

Our study also showed that overall, students with a wide range of MCAT total scores progressed to M2 on time. This outcome is important because it represents the first milestone toward graduating within the typical time frame based on each school’s curriculum. This finding suggests that medical school admissions committees are admitting applicants who have the academic qualifications and demonstrated excellence in domains important for success in medical school, and that schools are supporting those students once they enter.26

Also important is learning that using MCAT scores and UGPAs together provided better prediction of M1 performance than using either alone. The median correlation of each academic metric with students’ summative M1 performance was greater than 0.50. When used together, the median correlation increased to 0.65. This finding supports the common practice of using both metrics for admissions decisions26 and is consistent with professional testing standards and guidance on using MCAT scores in student selection.4,13

Our results also showed that MCAT total scores provided comparable prediction of performance in M1 for students from different sociodemographic backgrounds. The differences between observed and predicted performance on both outcomes we studied were of neither statistical significance nor practical importance. Evidence of comparability in prediction addresses professional testing standards related to the fairness of test scores used in admissions decision making.4

Results from our study suggest that MCAT scores provide useful information about student performance in the first year of medical school. The results from the validity schools are generalizable to 2016 matriculants at other medical schools in North America because the students in the validity sample were representative demographically of the national population of 2016 medical school matriculants with scores from the new MCAT exam and their schools were carefully selected for the validity research to represent a diverse pool of medical schools in North America.

Our study has several limitations. It is based on data from the first cohort of medical students admitted with scores from the new MCAT exam. Nationally, these students represent about 40% of the entire 2016 cohort of medical school matriculants. By comparison, almost 90% of 2017 matriculants applied with scores from the new exam. Consequently, the number of matriculants in each specific URM group (e.g., black or African American) in the 2016 cohort was smaller than it would be during a typical application year, when all applicants take the same version of the MCAT exam. Therefore, in this first validity study of scores from the new exam, we did not examine whether MCAT total scores provided comparable prediction for each specific URM group. Future research will continue addressing the comparability of MCAT scores as more data become available.

In addition, our study examined MCAT scores in relation to student performance in the first year of medical school. This outcome is the closest chronologically to MCAT scores. It comes from courses that probably have more concepts in common with those tested on the MCAT exam than courses in the later years of medical school, as the MCAT exam was designed to measure the foundational natural, behavioral, and social science concepts upon which the preclerkship years of the medical school curriculum are built.

Much remains to be learned about how students fare during the rest of their preclerkship and clerkship coursework, on the USMLE Step exams, and on other more distant and comprehensive outcomes, as well as for entering classes in which all students were admitted with scores from the same version of the MCAT exam. There is also more to learn about the value of MCAT scores in predicting performance for students at medical schools that vary in their missions, admission practices, curricular structures (e.g., discipline vs systems based), instructional approaches, and the types and amounts of support provided to students.27–29 In addition, there is great interest in learning how MCAT section scores predict performance in different subject areas, as well as how the old and new MCAT exams compare when predicting student performance in medical school. Future studies by the MVC will delve into these and other questions.

Acknowledgments:

The authors would like to thank the members of the Association of American Medical Colleges (AAMC) Medical College Admission Test (MCAT) Validity Committee for their dedication and tireless efforts to evaluate the new MCAT exam: Ngozi Anachebe, Barbara Beckman, Ruth Bingham, Kevin Busche, Deborah Castellano, Francie Cuffney, Julie Chanatry, Hallen Chung, Daniel Clinchot, Liesel Copeland, Martha Elks, William Gilliland, Jorge Girotti, Kristen Goodell, Joshua Hanson, Loretta Jackson-Williams, David Jones, Catherine Lucey, R. Stephen Manuel, Janet McHugh, Stephanie McClure, Cindy Morris, Wanda Parsons, Tanisha Price-Johnson, Boyd Richards, Aaron Saguil, Aubrie Swan Sein, Stuart Slavin, Doug Taylor, Carol Terregino, Ian Walker, Robert Witzburg, David Wofsy, and Mike Woodson. The authors would also like to thank the following AAMC personnel for reviewing earlier drafts of this manuscript: Heather Alarcon, Gabrielle Campbell, Karen Fisher, Marc Kroopnick, Karen Mitchell, Norma Poll, Elisa Siegel, and Geoffrey Young. In addition, they would like to thank Cynthia Searcy, Andrea Carpentieri, Melissa Lee, and Rob Santos for their contributions to this article.

References

1. Lei P-W, Bassiri D, Schultz EM; ACT. Alternatives to grade point average as a measure of academic achievement in college. ACT Research Report Series 2001–4. http://www.act.org/content/dam/act/unsecured/documents/ACT_RR2001-4.pdf. Published December 2001. Accessed May 23, 2019.
2. Schwartzstein RM, Rosenfeld GC, Hilborn R, Oyewole SH, Mitchell K. Redesigning the MCAT exam: Balancing multiple perspectives. Acad Med. 2013;88:560–567.
3. Kroopnick M. AM Last Page: The MCAT exam: Comparing the 1991 and 2015 exams. Acad Med. 2013;88:737.
4. American Educational Research Association; American Psychological Association; National Council on Measurement in Education. Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 2014.
5. Julian ER. Validity of the Medical College Admission Test for predicting medical school performance. Acad Med. 2005;80:910–917.
6. Donnon T, Paolucci EO, Violato C. The predictive validity of the MCAT for medical school performance and medical board licensing examinations: A meta-analysis of the published research. Acad Med. 2007;82:100–106.
7. Dunleavy DM, Kroopnick MH, Dowd KW, Searcy CA, Zhao X. The predictive validity of the MCAT exam in relation to academic performance through medical school: A national cohort study of 2001-2004 matriculants. Acad Med. 2013;88:666–671.
8. Koenig JA, Sireci SG, Wiley A. Evaluating the predictive validity of MCAT scores across diverse applicant groups. Acad Med. 1998;73:1095–1106.
9. Huff KL, Fang D. When are students most at risk of encountering academic difficulty? A study of the 1992 matriculants to U.S. medical schools. Acad Med. 1999;74:454–460.
10. Violato C, Donnon T. Does the Medical College Admission Test predict clinical reasoning skills? A longitudinal study employing the Medical Council of Canada clinical reasoning examination. Acad Med. 2005;80(10 suppl):S14–S16.
11. Gauer JL, Wolff JM, Jackson JB. Do MCAT scores predict USMLE scores? An analysis on 5 years of medical student data. Med Educ Online. 2016;21:31795.
12. Davis D, Dorsey JK, Franks RD, Sackett PR, Searcy CA, Zhao X. Do racial and ethnic group differences in performance on the MCAT exam reflect test bias? Acad Med. 2013;88:593–602.
13. Association of American Medical Colleges. Using MCAT Data in 2019 Medical Student Selection. Washington, DC: Association of American Medical Colleges; 2018. https://www.aamc.org/download/462316/data/mcatguide.pdf. Accessed May 23, 2019.
14. Association of American Medical Colleges. 2018 AMCAS Applicant Guide. Washington, DC: Association of American Medical Colleges; 2017. https://aamc-orange.global.ssl.fastly.net/production/media/filer_public/33/f0/33f0bd3f-9721-43cb-82a2-99332bbda78e/2018_amcas_applicant_guide_web-tags.pdf. Accessed May 23, 2019.
15. Texas Medical and Dental Schools Application Service. Entry Year 2019 Application Handbook. Austin, TX: Texas Medical and Dental Schools Application Service; 2019. https://www.tmdsas.com/Forms/ApplicationHandbookEY2019.pdf. Accessed May 23, 2019.
16. Sackett PR, Yang H. Correction for range restriction: An expanded typology. J Appl Psychol. 2000;85:112–118.
17. Cohen J. A power primer. Psychol Bull. 1992;112:155–159.
18. Anthony LC, Dalessandro SP, Trierweiler TJ; Law School Admission Council. Predictive validity of the LSAT: A national summary of the 2013 and 2014 LSAT correlation studies. https://www.lsac.org/data-research/research/predictive-validity-lsat-national-summary-2013-and-2014-lsat-correlation. Published March 2016. Accessed August 9, 2018.
19. Coleman AL, Lipper KE, Taylor TE, Palmer SR; Education Counsel LLC. Roadmap to Diversity and Educational Excellence: Key Legal and Educational Policy Foundations for Medical Schools. 2nd ed. Washington, DC: Association of American Medical Colleges; 2014.
20. Jolly P. Diversity of U.S. medical students by parental income. AAMC Analysis in Brief. January 2008;8. https://www.aamc.org/download/102338/data/aibvol8no1.pdf. Accessed May 23, 2019.
21. Magrane D, Jolly P. The changing representation of men and women in academic medicine. AAMC Analysis in Brief. July 2005;5. https://www.aamc.org/download/75776/data/aibvol5no2.pdf. Accessed May 23, 2019.
22. Association of American Medical Colleges. The status of the new AAMC definition of “underrepresented in medicine” following the Supreme Court’s decision in Grutter. Adopted March 19, 2004. https://www.aamc.org/download/54278/data/urm.pdf. Accessed May 23, 2019.
23. Wolla SA, Sullivan J. Education, income, and wealth. Page One Economics. January 2017. https://research.stlouisfed.org/publications/page1-econ/2017/01/03/education-income-and-wealth. Accessed May 23, 2019.
24. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
25. Kuncel NR, Credé M, Thomas LL. A meta-analysis of the predictive validity of the Graduate Management Admission Test (GMAT) and undergraduate grade point average (UGPA) for graduate student academic performance. Acad Manage Learning Educ. 2007;6:51–68.
26. Association of American Medical Colleges. Using MCAT Data in 2018 Medical Student Selection. Washington, DC: Association of American Medical Colleges; 2017.
27. Saks NS, Karl S. Academic support services in U.S. and Canadian medical schools. Med Educ Online. 2004;9:4348.
28. Shields PH. A survey and analysis of student academic support programs in medical schools focus: Underrepresented minority students. J Natl Med Assoc. 1994;86:373–377.
29. Elks ML, Herbert-Carter J, Smith M, Klement B, Knight BB, Anachebe NF. Shifting the curve: Fostering academic success in a diverse student body. Acad Med. 2018;93:66–70.

References cited in tables only

30. Association of American Medical Colleges. Table B-3: Total U.S. medical school enrollment by race/ethnicity and sex, 2014–2015 through 2018–2019. Published November 14, 2018. https://www.aamc.org/download/321534/data/factstableb3.pdf. Accessed May 23, 2019.
31. Hedges LV. Distribution theory for Glass’s estimator of effect size and related estimators. J Educ Behav Stat. 1981;6:107–128.


Copyright © 2019 by the Association of American Medical Colleges