PAPERS: Close but No Bananas: Predicting Performance

Does Institutional Selectivity Aid in the Prediction of Medical School Performance?

BLUE, AMY V.; GILBERT, GREGORY E.; ELAM, CAROL L; BASCO, WILLIAM T. JR.

Section Editor(s): Albanese, Mark PhD

Various factors are considered in the decision to offer an admission interview to a medical school applicant, including Medical College Admission Test (MCAT) scores, undergraduate grade-point average (GPA), and the selectivity of the degree-granting undergraduate institution. Admission officers view MCAT scores, undergraduate GPA, and institutional selectivity as having high or moderate importance.1 Research has indicated that these factors, most notably MCAT scores and undergraduate GPA, reliably help predict medical school performance.1,2,3,4,5,6 The strongest association has been shown between MCAT scores and performance on the United States Medical Licensing Examination (USMLE) Step 1.2

Institutional selectivity data are used to help control for differences in grading stringency across undergraduate institutions.1 Previous reports have examined the role of institutional selectivity, or a specific undergraduate institution, as a predictor of performance in the first two years of medical school.1,2,3,6 With the exception of the study of Zelesnik et al.,6 which examined ten specific undergraduate institutions, these reports have used the Higher Education Research Institute (HERI) Index,7 also called the “Astin Index,”2 as a measure of institutional selectivity. Other measures of institutional selectivity or categorization that schools of medicine may employ include the Barron's Profiles of American Colleges Admissions Selector Rating8 and the Carnegie Classification from the Carnegie Foundation for the Advancement of Teaching.9 (These measures are explained in the next section.)

Institutional validity studies of admission decision-making data help to determine which characteristics should be accorded highest importance in applicant selection. Given the reliance upon institutional selectivity as an important admission characteristic and the different types of selectivity classifications available for medical schools to use, the purpose of this study was to examine how well three measures of institutional selectivity could predict medical students' performances, specifically their performances on the USMLE Step 1 and Step 2 and their final medical school GPAs.

Method

Admission and medical school performance data were obtained for the 1992–1995 matriculants at the study institution, the Medical University of South Carolina (MUSC). Admission data for each student consisted of his or her MCAT scores, undergraduate GPA, undergraduate institution, three institutional selectivity or categorization indices (the 1983 HERI index, Barron's Admissions Selector Rating, and the Carnegie Classification), age, gender, and underrepresented minority (URM) status. The 1983 HERI index is the mean total SAT score of students admitted in 1983 to each U.S. undergraduate institution. The Barron's Profiles of American Colleges Admissions Selector Rating indicates the degree of competitiveness of admission to a college.8 The Carnegie Classification includes most colleges and universities in the United States that are degree-granting and accredited by an agency recognized by the U.S. Secretary of Education.9 The Carnegie Classification is not meant to be a measure of selectivity; it classifies institutions into 19 categories based upon the ranges and types of degree-granting programs they offer (doctoral through associate of arts) and the amount of federal support each institution receives annually.

Medical school performance data consisted of USMLE Step 1 and 2 scores and final GPA. Students admitted under the institution's existing Early Assurance Program (EAP) were excluded from analysis because an MCAT score was not required for their admission. (The EAP offered admission to exceptional applicants during their undergraduate education based on the applicants' SAT scores, undergraduate GPAs, medical school admission interview ratings, and the understanding that the applicants would not apply to another medical school. This program stopped selecting applicants for admission to MUSC in 1996).

To avoid insufficient subgroup sizes, we dichotomized the Barron's Admissions Selector Ratings and the Carnegie Classification categories at logical breakpoints in the categories. Frequency distributions confirmed that these breakpoints divided the matriculants into approximately equal groups for each selectivity index or categorization. The Barron's Admissions Selector Ratings were dichotomized into “most/highly competitive” (the Barron's categories “most competitive,” “highly competitive+,” and “highly competitive”) versus “not highly competitive” (“very competitive+,” “very competitive,” “competitive,” “less competitive,” and “not competitive”). The Carnegie Classification categories were dichotomized into “research-doctoral” (Carnegie Classification Research I and II and Doctoral I and II institutions) versus “not research-doctoral.”
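
The recoding can be expressed as a simple mapping. The sketch below assumes the ratings are stored as text columns in a pandas DataFrame; the column names and label strings are hypothetical and are not taken from the study's data set.

    # Illustrative sketch only: column names and label strings are hypothetical.
    import pandas as pd

    BARRONS_TOP = {"most competitive", "highly competitive+", "highly competitive"}
    CARNEGIE_RESEARCH_DOCTORAL = {"Research I", "Research II", "Doctoral I", "Doctoral II"}

    def dichotomize(df: pd.DataFrame) -> pd.DataFrame:
        """Collapse the Barron's and Carnegie categories into two groups each."""
        out = df.copy()
        out["barrons_dichot"] = out["barrons_rating"].map(
            lambda r: "most/highly competitive" if r in BARRONS_TOP
            else "not highly competitive")
        out["carnegie_dichot"] = out["carnegie_class"].map(
            lambda c: "research-doctoral" if c in CARNEGIE_RESEARCH_DOCTORAL
            else "not research-doctoral")
        return out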

Descriptive statistics were calculated for all variables. Stepwise linear regression (adjusted r-square method) was used to assess which control variables (undergraduate GPA, gender, URM status, age) contributed significantly to predicting USMLE Step 1 and Step 2 scores and final GPA. Age was the only control variable that did not contribute significantly to predicting any of the dependent variables. Multiple linear regression was then performed with each of the institutional selectivity or categorization indices, controlling for undergraduate GPA, URM status, and gender. The powers of the multiple regression equations ranged from 88.2% to 96.0% for an alpha of 0.05, assuming small effect sizes.
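
A rough sketch of this model comparison, using Python and statsmodels rather than the study's original statistical software, is shown below; the variable names are hypothetical and the data are synthetic, so the output illustrates the mechanics only, not the study's results.

    # Sketch of the nested-model comparison described above; synthetic data,
    # hypothetical column names, and Python/statsmodels are all assumptions.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    n = 405
    df = pd.DataFrame({
        "ugpa": rng.normal(3.4, 0.4, n),
        "mcat_total": rng.normal(27, 4.2, n),
        "gender": rng.choice(["M", "F"], n),
        "urm": rng.choice([0, 1], n, p=[0.83, 0.17]),
        "barrons_dichot": rng.choice(
            ["most/highly competitive", "not highly competitive"], n),
    })
    df["step1"] = 100 + 15 * df["ugpa"] + 2 * df["mcat_total"] + rng.normal(0, 15, n)

    # Base model: control variables only (undergraduate GPA, gender, URM status).
    base = smf.ols("step1 ~ ugpa + C(gender) + C(urm)", data=df).fit()
    # Control variables plus one selectivity categorization.
    with_sel = smf.ols("step1 ~ ugpa + C(gender) + C(urm) + C(barrons_dichot)",
                       data=df).fit()
    # Control variables plus the MCAT total score.
    with_mcat = smf.ols("step1 ~ ugpa + C(gender) + C(urm) + mcat_total",
                        data=df).fit()

    # Compare fit with adjusted r-squared, as in Table 1.
    for label, m in [("controls only", base),
                     ("controls + selectivity", with_sel),
                     ("controls + MCAT", with_mcat)]:
        print(f"{label:25s} adjusted R^2 = {m.rsquared_adj:.2f}")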

Results

For the 1992–1995 academic years, 545 applicants matriculated at MUSC. Of these, 112 were admitted under MUSC's Early Assurance Program (and therefore had no MCAT scores) and were excluded from the study. Institutional selectivity index or categorization data were incomplete for an additional 28 matriculants, leaving complete data for 405 matriculants (74.3%).

Two hundred sixty (64%) of the matriculants studied were men; 70 (17%) were from URM groups. The mean age was 24.0 years (SD = 4.0). The mean total MCAT score was 27 (SD = 4.2), and the mean undergraduate GPA was 3.4 (SD = 0.40). Based upon the dichotomized Barron's Admissions Selector Rating, 235 matriculants (58%) had attended undergraduate institutions classified as “not highly competitive.” Using the dichotomized Carnegie Classification, 233 matriculants (57%) had attended research or doctoral undergraduate institutions. The mean USMLE Step 1 score was 205 (SD = 21), and the mean USMLE Step 2 score was 202 (SD = 21). The mean final medical school GPA was 3.3 (SD = 0.38).

Table 1 presents adjusted r-squared values for the eight multiple regression models computed for the three dependent variables. All models explained a statistically significant proportion of the variation in the dependent variables. Uniformly, the worst-fitting model was the one consisting of only the three control variables (GPA, gender, and URM status); the amounts of explained variation for these control-only models ranged from 17% to 32%. Adding any institutional selectivity index or categorization slightly improved prediction (as measured by the proportion of variation explained) over that provided by GPA and the demographic characteristics alone. When the MCAT score was added to the model containing the control variables, the predictive ability of the equation improved by 6–13%. Adding the institutional selectivity indices or categorizations after the MCAT score was in the model contributed nothing further to predictive ability. Control variables plus the MCAT score accounted for 38% of the variation in USMLE Step 1 scores, 38% of the variation in final GPA, and 28% of the variation in USMLE Step 2 scores.
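
For reference, the adjusted r-squared reported in Table 1 is the standard statistic that penalizes the proportion of explained variation for the number of predictors; with n observations and p predictors it is

    \bar{R}^{2} = 1 - \left(1 - R^{2}\right)\frac{n - 1}{n - p - 1}

so a selectivity index that adds no real information can leave the adjusted value unchanged or even lower it.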

TABLE 1: Percentages of Variation Accounted for in Predicting USMLE Step 1 and Step 2 Scores and Final Grade-Point Average with Three Institutional Selectivity Measures for 1992–1995 Medical University of South Carolina Matriculants

Discussion

During the medical school admission process, the selectivity of the degree-granting undergraduate institution is used to help control for grading differences across undergraduate institutions. Our results show that neither the three institutional selectivity indices or categorizations (i.e., HERI, Barron's, and Carnegie) nor any GPA adjustment derived from them improves the prediction of performance on USMLE Step 1 and Step 2 or final GPA when MCAT scores and unadjusted GPA are used together. While the Barron's and HERI indices are intended as measures of institutional selectivity, the Carnegie Classification describes the spectrum of degrees an institution offers; even classifying schools by the types of degrees offered added no benefit to prediction.

Previous studies have shown that selectivity measures aid prediction of the USMLE Step 1 score and the GPAs in medical school years one and two if used in a model without the MCAT scores. However, those studies used only one measure of institutional selectivity, the HERI,2,3 or a sampling of undergraduate institutions.6 Our study evaluated three different methods of classifying the selectivity or type of undergraduate institution, and none improved prediction in models that included the MCAT score. Furthermore, our study examined performance on USMLE Step 2 and final medical school GPA, performance indicators beyond the first two years of medical school.

Our findings suggest that using institutional selectivity indices or categorizations as an admission characteristic may not be necessary. In addition, use of institutional selectivity indices or categorizations may discriminate against applicants with other desirable characteristics who have been granted degrees from less selective undergraduate institutions. For example, use of the average SAT score might unfairly discriminate against applicants who graduated from large, state-sponsored universities. The lack of added predictive value from the Carnegie Classification also indicates that the size or academic comprehensiveness of the degree-granting institution has little bearing on individual performance in medical school. Our results should reassure admission officers that the performances of students who attend smaller undergraduate institutions or community colleges are predictable from their MCAT scores and undergraduate GPAs.

One limitation of this study is that it relied upon data from a single state-supported medical school. However, matriculants at the school come from diverse undergraduate institutions; the individuals in this study represented 116 different institutions. Additional research should examine this issue at other medical schools, both state-supported and private, and in various regions of the United States. Another limitation is that because multiple linear regression was used, correlations with USMLE scores and final GPAs cannot be adjusted for restriction in range; thus, the adjusted r-square values presented in Table 1 are, in all likelihood, underestimates of the relationships between the models and the dependent variables for the applicant pool. In addition, the dichotomization of the Barron's Admissions Selector Ratings and the Carnegie Classification categories may have had some impact on our results; however, any bias introduced would likely have strengthened, rather than weakened, the apparent predictive contribution of institutional selectivity. Another limitation is that the HERI index, although the most recent available, is quite dated (1983) and may not be representative of today's undergraduate institutions. Finally, this study focused on primarily cognitive measures of academic achievement in medical school; the predictive value of institutional selectivity indices or categorizations for performance in clinical settings should also be explored.
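
For context, the usual correction for direct range restriction (not performed in this study, and applicable to a single predictor–criterion correlation rather than to a multiple regression model) estimates the unrestricted correlation from the observed correlation r and the ratio S/s of the applicant-pool standard deviation to the matriculant standard deviation on the predictor:

    r_{c} = \frac{r\,(S/s)}{\sqrt{1 - r^{2} + r^{2}\,(S/s)^{2}}}

Because matriculants are selected partly on MCAT scores and GPAs, S/s exceeds 1 and the corrected correlation is larger, which is why the values in Table 1 likely understate the relationships in the full applicant pool.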

In summary, our results indicate that the characteristics of the degree-granting undergraduate institution, as measured by three different types of institutional selectivity or categorization, do not add to the ability to predict performances on USMLE Steps 1 and 2 and overall medical school GPA if the MCAT score and unadjusted undergraduate GPA are available. The results also further support the predictive validity of the scores on the MCAT examination for medical school performance.

References

1. Mitchell K, Haynes R, Koenig J. Assessing the validity of the updated Medical College Admission Test. Acad Med. 1994;69:394–401.
2. Wiley A, Koenig J. The validity of the Medical College Admission Test for predicting performance in the first two years of medical school. Acad Med. 1996;71(10 suppl):S83–S85.
3. Swanson DB, Case SM, Koenig J, Killian CD. Preliminary study of the accuracies of the old and new Medical College Admission Tests for predicting performance on USMLE Step 1. Acad Med. 1996;71(1 suppl):S25–S27.
4. Mitchell KJ. Traditional predictors of performance in medical school. Acad Med. 1990;65:149–58.
5. Huff KL, Fang D. When are students most at risk of encountering academic difficulty? A study of the 1992 matriculants to U.S. medical schools. Acad Med. 1999; 74:454–60.
6. Zelesnik C, Hojat M, Veloski JJ. Predictive validity of the MCAT as a function of undergraduate institution. J Med Educ. 1987;62:163–9.
7. Higher Education Research Institute. UCLA Graduate School of Education and Information Studies [unpublished data].
8. Barron's Profiles of American Colleges, 23rd ed. Hauppauge, NY: Barron's Educational Series, July 1998.
9. Boyer E. A Classification of Institutions of Higher Education. Pittsburgh, PA: The Carnegie Foundation for the Advancement of Teaching, 1994.

Section Description

Research in Medical Education: Proceedings of the Thirty-ninth Annual Conference. October 30 - November 1, 2000. Chair: Beth Dawson. Editor: M. Brownell Anderson. Foreword by Beth Dawson, PhD.

© 2000 by the Association of American Medical Colleges