Predicting Medical School Enrollment Behavior

Comparing an Enrollment Management Model to Expert Human Judgment

Burkhardt, John C., MA, MD; DesJardins, Stephen L., PhD; Teener, Carol A., MA; Gay, Steven E., MD; Santen, Sally A., MD, PhD

doi: 10.1097/ACM.0000000000002374

Purpose Medical school admissions committees are tasked with fulfilling the values of their institutions through careful recruitment. Making accurate predictions about the enrollment behavior of admitted students is critical to intentionally formulating class composition and affects long-term physician representation. The predictive accuracy and potential advantages of employing an enrollment predictive model in medical school admissions, compared with expert human judgment, have not been tested.

Method The enrollment management-based predictive model, previously generated using historical data, was employed to provide a predicted enrollment percentage for each admitted student in the 2016–2017 application pool (N = 352). Concurrently, a human expert, blinded to the values generated by the model, created a predicted enrollment percentage for each applicant. An absolute error was calculated for each applicant under both approaches. Differences between approaches (expert vs. enrollment model) were assessed for statistical significance using t tests.

Results The enrollment management approach was noninferior to expert prediction in all cases (P < .05), with a superior correct classification rate (77.7% vs. 71.2%). In subgroup analyses of specific populations of potential importance in recruiting (underrepresented-in-medicine, female, and in-state applicants), the enrollment management predictions were statistically more accurate (P < .05).

Conclusions In a single admitted class, enrollment predictions using the enrollment management model were at least as accurate as the expert human estimates and, in specific populations of interest, more accurate. This information can be readily exported to a real-time dashboard system to drive recruitment behaviors.

J.C. Burkhardt is assistant professor, Departments of Emergency Medicine and Learning Health Sciences, University of Michigan Medical School, Ann Arbor, Michigan; ORCID: https://orcid.org/0000-0001-6273-8762.

S.L. DesJardins is professor, School of Education and School of Public Policy, University of Michigan, Ann Arbor, Michigan.

C.A. Teener is admissions director, University of Michigan Medical School, Ann Arbor, Michigan.

S.E. Gay is assistant dean for admissions and associate professor, Department of Internal Medicine, University of Michigan Medical School, Ann Arbor, Michigan.

S.A. Santen is senior associate dean for evaluation, assessment, and scholarship of learning, and professor, Department of Emergency Medicine, Virginia Commonwealth School of Medicine, Richmond, Virginia.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: This study was given exempt status by the University of Michigan Institutional Review Board.

Correspondence should be addressed to John C. Burkhardt, University of Michigan, Department of Emergency Medicine, 1500 E. Medical Center Dr., Ann Arbor, MI 48109-5303; telephone: (734) 763-7919; e-mail: jburkhar@med.umich.edu; Twitter: @DrJohnBurkhardt.

Medical school admissions officers are tasked with deciding who will constitute the next generation of physicians.1 In addition to making individual determinations about each applicant’s aptitude for obtaining a medical education at their school, admissions officers must also consider the impact of their decisions on larger-scale concerns such as class composition and future physician representation. Given that fewer than half of all applicants are offered admission to even a single medical school,2 and that attrition among matriculated students is exceedingly rare (< 2%),3 admissions professionals have a considerable role in determining overall physician representation. Accomplishing this task requires accurate predictions regarding the future enrollment choices of admitted students. Universities and other higher education institutions have developed analytical approaches and administrative reorganization strategies, broadly called enrollment management (EM), to provide decision makers with data and policy options to improve admissions processes that can affect class composition and broader social goals.4

Generally, EM approaches are based on theories of student choice behavior.5 We extended student choice models by incorporating concepts from bounded rational choice theory, thereby better fitting observed applicant behavior.6 Using this combination of theoretical concepts helped us identify the factors likely to be important when medical school applicants make their enrollment decisions.7 The importance of applicant enrollment behavior may not be initially obvious because the admission decision is in the hands of the institution. After being admitted, however, the choice of whether to enroll lies with the applicant. Admissions officers therefore admit more applicants than the anticipated class size with the expectation that some will decline, and errors in these projections can be administratively problematic and costly. Consequently, crafting an enrolling class that will meet institutional goals (e.g., size, quality, diversity) depends on applicant, not institutional, behavior, and studying applicant enrollment behavior is an important evaluation problem for admissions. In higher education, EM has been widely used to study undergraduate enrollment behavior, such as the effectiveness of financial aid offers, how to set and meet enrollment goals, and how to promote the financial viability of institutions by ensuring sufficient numbers of enrolled students in each entering cohort.8–11 EM has, however, not been widely employed in professional education admissions processes, such as medical school admissions.

To fill this gap, prior to this study we examined the feasibility of using EM in medical education through the creation of a predictive enrollment analytic model.12 We identified applicant demographics, the provision of scholarships, and the receipt of in-state tuition as important factors in explaining admitted applicants’ enrollment choices. We also identified potential changes to aid offer strategies that would maximize the effectiveness of our financial aid dollars in recruiting students.12 This modeling approach allows one to generate a predicted enrollment probability for any admitted applicant. Herein we extended our analytic strategy by examining whether the statistical modeling added benefit to the standard practices implicitly used by admissions officers.

Operationalizing predictive analytic techniques from the EM literature allows a form of “mechanical decision making” to be used as an adjunct to human intuition.13 Commonly called statistical prediction rules, these techniques have been shown to be more accurate than human decision making in numerous contexts including education,14 psychology,15 law,16 and clinical care.17–19 In many cases, statistical prediction rules have outperformed human decision making13,20 because the latter is often limited by decision-making biases21–23 or even mundane physiological routines (e.g., proximity to lunch).16 Although in a few cases statistical prediction rules did not outperform human decision makers,13 most applications of the former are at least noninferior in accuracy to the latter.13 Others have argued that, if a prediction rule is clearly superior in reliability and accuracy relative to human judgment, then not employing the rule is unethical.14

Because EM is not a common practice in professional education, the accuracy of statistical prediction rules in admissions is largely unexplored in the medical school context. In this study we had two goals: first, to test whether EM predictions are noninferior to human decision making; and second, if they are noninferior, to determine whether our statistical prediction rule can improve on human judgment. Given our understanding of statistical prediction rules and EM from other higher education contexts, we hypothesized that our EM-based predictive model would be at least noninferior, and likely superior, to expert human (EXH) projections. If true, an EM-based approach could provide medical school admissions officers the opportunity to “course correct” if they are concerned that they will not achieve an enrollment goal (e.g., class size, selectivity, diversity).


Method

To test the relative accuracy of a medical school EM model versus human decision making, we used an EM predictive model previously created by the authors.12 Predictions were generated prospectively under the two approaches and compared with the actual enrollment behavior of the 2016–2017 applicants selected for admission in 2017. Excel (Microsoft, Redmond, Washington) was used for initial data recording and pilot dashboard creation. Stata version 12.1 (StataCorp, College Station, Texas) was used to conduct predictive modeling and accuracy comparisons. This study was given exempt status by the University of Michigan Institutional Review Board.


EM model predictive approach

Individual factors used to predict enrollment were collected as part of the usual business of the admissions office. Data from applicant records from 2006–2014 were used to estimate the statistical model. Predicted probabilities were calculated using the logistic regression equation produced by our prior analysis.12 The independent categorical variables used in the model were gender, underrepresented in medicine (URiM) status, in-state residency, and whether institutional financial aid was offered. The independent continuous variables were undergraduate grade point average (GPA), Medical College Admission Test (MCAT) average, and admissions committee score (ACS). The ACS is a composite of interview scores, letters of reference, and previous life experiences; it is used to categorize applicants to inform admissions decisions. The outcome variable was the predicted probability that an admitted student would enroll at our institution. The logistic model used to calculate the predicted probability for each applicant i took the general form (coefficient estimates were taken from our prior analysis12):

Pr(Enroll_i = 1) = exp(X_i β) / [1 + exp(X_i β)],

where X_i β = β_0 + β_1 Female_i + β_2 URiM_i + β_3 InState_i + β_4 Aid_i + β_5 GPA_i + β_6 MCAT_i + β_7 ACS_i.
A predicted probability for each applicant was calculated and then used to compare the rule-based prediction with the human judgment.
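
To illustrate how such a fitted equation can be operationalized, the following Python sketch computes a predicted enrollment probability for one admitted applicant. The coefficient values, variable codings, and names are hypothetical placeholders for illustration, not the estimates from our prior analysis12:

```python
import math

# Hypothetical coefficients for illustration only; an institution would
# substitute the estimates from its own fitted logistic regression.
COEFFS = {
    "intercept": -1.20,
    "female": 0.15,    # 1 if female, else 0
    "urim": 0.40,      # 1 if URiM, else 0
    "in_state": 1.10,  # 1 if in-state resident, else 0
    "aid": 0.90,       # 1 if offered institutional financial aid, else 0
    "gpa": 0.05,       # undergraduate grade point average
    "mcat": -0.02,     # MCAT average
    "acs": 0.30,       # admissions committee score
}

def predicted_enrollment_probability(applicant):
    """Inverse-logit transform of the linear predictor X*beta."""
    xb = COEFFS["intercept"] + sum(
        coef * applicant[name] for name, coef in COEFFS.items() if name != "intercept"
    )
    return 1.0 / (1.0 + math.exp(-xb))

# Example: a hypothetical in-state female admit with an aid offer.
admit = {"female": 1, "urim": 0, "in_state": 1, "aid": 1,
         "gpa": 3.8, "mcat": 34.0, "acs": 2.5}
print(f"Predicted enrollment probability: {predicted_enrollment_probability(admit):.1%}")
```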


EXH predictive approach

The EXH, an assistant dean for admissions (10 years in the dean’s role, 16 years working as an admissions officer), used the data typically available for making admissions decisions, which included all factors used in the statistical model. The regression coefficients from the predictive model and the predicted enrollment probabilities were not provided to him. The assistant dean was also free to consider any other available factors he believed to be important for predicting admitted applicants’ enrollment behavior. In March 2017, the expert used all of this information to predict the probability of enrollment for each admitted student. These predictions were recorded in a separate database and remained unchanged after being generated.


Predictive accuracy comparison

We compared the predicted probability (reported as a percentage) that an applicant would enroll, as provided by our EXH and by the EM model (EMM), with the actual enrollment behavior of the 2016–2017 applying class. An absolute error for each admitted applicant—the absolute difference between actual and predicted enrollment behavior, where enrolling = 100% and not enrolling = 0%—was calculated. The absolute value was used so that over- and underpredictions did not cancel out. Statistical tests (t tests) were then employed to assess whether there were differences between the absolute errors generated by the two predictive approaches. In addition to the overall predictions, t tests were also performed on specific subpopulations of admitted applicants: females, admits from URiM backgrounds, and in-state admits. Classification accuracy for both approaches, including sensitivity, specificity, positive predictive value, negative predictive value, and correct classification percentage, was calculated using a threshold of 50.1% or above as predicting enrollment. In our sample, some admitted applicants chose to defer the onset of their education until the next year; for purposes of the study analysis, these applicants were considered to have enrolled here rather than gone elsewhere. Early-decision applicants (n = 6) were not included in the analysis because they lacked other possible medical school enrollment options.
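
As a concrete sketch of this comparison, the Python fragment below computes per-applicant absolute errors, runs a paired t test on the two error distributions (the exact t-test variant is not specified above, so the paired form is an assumption), and derives the classification metrics at the 50.1% threshold. All data values are illustrative, not study data:

```python
import numpy as np
from scipy import stats

# Actual behavior (1 = enrolled, 0 = did not) and each approach's predicted
# probabilities; values here are illustrative only.
enrolled = np.array([1, 0, 1, 1, 0, 0, 1, 0])
p_expert = np.array([0.80, 0.30, 0.40, 0.90, 0.60, 0.20, 0.70, 0.50])
p_model  = np.array([0.75, 0.20, 0.55, 0.85, 0.45, 0.25, 0.65, 0.40])

# Absolute error per admit, with enrolling = 100% and not enrolling = 0%.
err_expert = np.abs(enrolled * 100 - p_expert * 100)
err_model  = np.abs(enrolled * 100 - p_model * 100)

# Paired t test on the two absolute-error distributions.
t_stat, p_two_tailed = stats.ttest_rel(err_expert, err_model)
print(f"t = {t_stat:.2f}, two-tailed P = {p_two_tailed:.3f}")

def classification_metrics(probs, actual, threshold=0.501):
    """Sensitivity, specificity, PPV, NPV, and correct classification rate.
    Assumes both predicted classes occur (no zero denominators)."""
    pred = (probs >= threshold).astype(int)
    tp = np.sum((pred == 1) & (actual == 1))
    tn = np.sum((pred == 0) & (actual == 0))
    fp = np.sum((pred == 1) & (actual == 0))
    fn = np.sum((pred == 0) & (actual == 1))
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
        "correct_classification": (tp + tn) / len(actual),
    }

print(classification_metrics(p_model, enrolled))
```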


Mock dashboard creation

A projected aggregate class composition was also created based on the EMM. This was accomplished in two steps. First, a column with the EM-based model’s predicted probability, expressed as a percentage, was added to the standard Excel database provided by the admissions office. Second, the dashboard grouped the individual admitted applicants by specific population and summed their individual predicted enrollment probabilities. For example, if three students from a single subgroup had predicted enrollment probabilities of 0.6 (60%), 0.4 (40%), and 0.35 (35%), the total predicted number of students enrolling from that group would be 1.35. The predicted probabilities were totaled in this manner for each group, thereby providing an estimate of enrollments per group.
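
A minimal pandas sketch of this aggregation step, using the hypothetical subgroup values from the example above:

```python
import pandas as pd

# Illustrative admitted-applicant table; p_enroll is the model's predicted
# enrollment probability added as a column to the admissions spreadsheet.
admits = pd.DataFrame({
    "subgroup": ["URiM", "URiM", "URiM", "non-URiM", "non-URiM"],
    "p_enroll": [0.60, 0.40, 0.35, 0.80, 0.55],
})

# Expected number of enrollees per subgroup = sum of individual predicted
# probabilities (0.60 + 0.40 + 0.35 = 1.35 for the URiM example above).
expected = admits.groupby("subgroup")["p_enroll"].sum()
print(expected)
```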

After the predicted probabilities were summed for each group, a pie graph was generated. Each graph was linked to a mock active admissions database so that the graph would be automatically updated with each new admission. This mock dashboard was not used in this admissions cycle so as not to bias the predictions made by the EXH decision maker.
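
The original dashboard used linked Excel charts; as a hedged analogue, the following matplotlib snippet redraws a pie chart of the predicted class composition and could be rerun whenever a new admission is recorded. The subgroup totals are illustrative (e.g., the output of the aggregation sketch above):

```python
import matplotlib.pyplot as plt
import pandas as pd

# Expected enrollees per subgroup (illustrative values only).
expected = pd.Series({"URiM": 1.35, "non-URiM": 7.20})

fig, ax = plt.subplots()
ax.pie(expected, labels=expected.index, autopct="%1.1f%%")
ax.set_title("Predicted class composition")
fig.savefig("predicted_class_composition.png")  # regenerate after each new admission
```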


Results

Demographic information for the admitted and enrolled students included in the study is presented in Table 1. Of the 342 applicants admitted in March 2017, 166 (48.5%) elected to enroll at our institution. Compared with males, female applicants were more highly represented in both the admitted and enrolled groups (57.0% and 57.2%, respectively). Greater representation of white and Asian (combined) applicants (77.8% of admits, 84.9% of enrollees) and of out-of-state applicants (78.1% of admits, 60.2% of enrollees) was also found.

Table 1

Statistical testing of the relative predictive accuracy of the two approaches (EXH and EMM) was performed and is reported in Table 2. The mean prediction error for the EXH projections was 36.4 (95% confidence interval [CI] = 33.7–39.2), indicating that on average the EXH predictions over- or underestimated the likelihood of students matriculating by 36.4 percentage points. For the EMM predictions, the average error was 33.7 (95% CI = 31.4–36.1). A one-tailed t test, testing whether the EXH predictions had equal or higher error (i.e., noninferiority of the EMM), was significant (P < .05). A two-tailed t test to determine whether the EMM was more accurate than the EXH approached statistical significance (P = .076). Taken together, these results indicate that the human predictions in this study were not more accurate than those generated by the analytic model; the EMM may be more accurate than the EXH, but statistical equivalence remains possible. Additional accuracy metrics were also calculated for both approaches; the EMM had higher sensitivity (82.7%; 95% CI = 76.2%–88.0%), higher negative predictive value (81.3%; 95% CI = 74.3%–87.0%), and a better correct classification rate (77.7%) (Table 2).

Table 2

A final set of analyses examined the predictive accuracy of the two approaches for select subgroups of the admitted population. To this end, t tests comparing the two predictive approaches were performed for three categories thought to be of particular importance in recruiting: URiM, female, and in-state admits. The mean prediction error for the EXH when considering URiM admitted applicants was 41.8 (95% CI = 36.5–47.1), whereas the EMM prediction error was 32.5 (95% CI = 27.4–37.5). For female admitted applicants, the EXH error was 37.3 (95% CI = 33.7–40.9) and the EMM error was 33.4 (95% CI = 30.2–36.5). For the final group considered, in-state admits, the error rates for the EXH and EMM approaches were 24.8 (95% CI = 17.4–32.2) and 18.4 (95% CI = 12.4–24.4), respectively. When compared, the EMM yielded significantly more accurate predictions than the EXH approach in all three subgroups (Table 2). For purposes of demonstration, static versions of the graphs generated via the dashboard are presented in Figure 1, along with comparisons of the actual class composition against the EMM-predicted composition and the EXH projections (Figure 1).

Figure 1


Discussion

On the basis of EM concepts and theories from higher education, we tested whether an EM-based predictive analytical approach in medical education could match or improve on the accuracy of predictions made by an EXH decision maker. The results of this study confirmed our hypothesis that an EM model could perform at least as well as EXH judgment overall and, in specific populations, better.

Regarding the accuracy of our EMM compared with other examples in higher education, the results were also encouraging. Although to our knowledge no single human-versus-model comparison like the one performed here exists in the higher education literature, there are benchmarks for correct classification statistics in EM.24,25 Our correct classification rate of 77.7% was higher than those reported for two comparable EM-based approaches at a large, public Research I institution (65.7%)24 and a small liberal arts college (69.2%).25

Although in the general case the EMM predictions were found to be at least as good as those provided by an expert, this tells only part of the story regarding the model’s potential usefulness. This study piloted the EMM at a single site for a single year (342 total students), yielding a post hoc power of 95.7% (the likelihood of detecting a difference if one exists). Additional years of study or expansion to other sites may further demonstrate the EMM’s accuracy and utility. More important, even an EMM that is merely equivalent to human judgment may have several advantages, because EXH prediction requires resources that the EM approach does not.

Our predictive analytic approach has benefits over the human approach that fall into three categories: time efficiency, usability by novices, and real-time aggregate class information. First, our expert had to dedicate time, in addition to his usual duties, to reviewing the accepted applicants and then deciding on his best estimate of their likely enrollment behavior. Bounded rationality suggests that human decision making is limited by cognitive biases and by incomplete information about potential options, for example, because of limits on the time available to make a decision.6,26 In contrast, the EMM can be automated to provide predicted enrollment probabilities that are simply added to the already-existing information used for admissions-related purposes, and automating this process mitigates the cognitive biases inherent in human decision making. This process is likely to improve information availability and quality at little to no additional cost in effort.

A second potential advantage of our EMM is that it does not require specific training or experience in admissions to provide useable information. Our expert has worked in admissions for 16 years at the University of Michigan, developing an intimate knowledge of our specific institutional context and of the students most likely to choose to attend. In contrast, our EM-based predictive model can provide equivalently accurate information to a person new to the role of chief admissions officer, a decision maker in a different field, or any other interested person who is privy to this information. In this way, predictive analytics can help mitigate the loss of institutional knowledge when a long-time leader vacates the position.

Concerning real-time data availability, the EM approach has another significant advantage. The benefits of automation described above become even more apparent when predicting the overall makeup of an incoming class. Using the same approach, a predicted class makeup can be visualized in an immediately up-to-date graphical output. Although a similar visualization could be built on human predictions, it would again require the time to make those predictions, the expertise to maximize their accuracy, and the entry of this information into the database. In comparison, the up-to-the-minute graphs of our predictive analytic approach require no additional ongoing work. With these graphical representations, key personnel can watch the predicted composition of the class change with each admissions decision.

The advantages of an EMM approach are even clearer when considering specific subgroups of the admitted applicant pool. For URiM, female, and in-state admits, the EM approach was significantly more accurate than human judgment. This increased accuracy could be operationalized, for example, to increase URiM representation should that be an institutional goal. As shown in Figure 1, the EM-based predictive model predicted a final class containing 15.9% students from URiM backgrounds, whereas the EXH projected a much higher URiM representation of 23.6%. The actual class URiM representation was 15.1%, very close to the statistical model-based projection. An institution that had set a target resembling the prediction made by our EXH would have fallen well short of attaining its goal. Using the predictive information generated as students are admitted under the EM-based approach, an institution interested in achieving higher URiM representation could consider interviewing and admitting additional students from that group until its goals were more likely to be met.

In considering the results of this study, it is important to note the purposes of predictive tools. First, EM-based models are not designed to, and cannot, fully describe the factors that go into admitted applicants’ decision-making processes. Rather, these models should be used to complement the expertise of admissions officers and better inform them about admits’ likely enrollment behavior. Second, although we developed and piloted this process at a single site, the overall approach could be valuable for other institutions. To be successful, each institution that added an EM component to its admissions process would benefit from estimating its own logistic regression model to describe its own specific enrollment context. Finally, future refinements of enrollment prediction models may be able to include nonquantitative factors through natural language processing or other automated data-mining approaches applied to essays and interview comments, without adopting the biases inherent in human readers. We believe this would represent the next logical step in exploring predictive analytics in medical school admissions and enrollment.


Conclusions

Our EM-based predictive model generally demonstrated statistical noninferiority and, in many specific cases, significantly greater accuracy compared with EXH assessment. It additionally offers advantages to decision makers in terms of time savings and usability by nonexperts. The information provided by EM-based models can also be readily exported to simple graphical interfaces to coordinate recruitment efforts.


References

1. Association of American Medical Colleges. Medical school applicants, enrollment reach all-time highs. https://www.aamc.org/newsroom/newsreleases/358410/20131024.html. Published October 24, 2013. Accessed July 18, 2018.
2. Association of American Medical Colleges. Applicants and Matriculants Data. Table 17: MCAT Scores and GPAs for Applicants and Matriculants to U.S. Medical Schools, 2002–2013. Washington, DC: Association of American Medical Colleges; 2013.
3. Garrison G, Mikesell C, Mathew D. Medical school graduation and attrition rates. AAMC Analysis in Brief. 2007;7:2. https://www.aamc.org/download/102346/data/aibvol7no2.pdf. Accessed July 18, 2018.
4. Coomes MD. The historical roots of enrollment management. New Dir Stud Serv. 2000;89:5–18.
5. Paulsen MB. College choice: Understanding student enrollment behavior. Paper presented at: Association for the Study of Higher Education; 1990; Washington, DC.
6. Jones BD. Bounded rationality. Annu Rev Polit Sci. 1999;2:297–321.
7. Leppel K. Logit estimation of a gravity model of the college enrollment decision. Res High Educ. 1993;34(3):387–398.
8. DesJardins SL, Bell A. Using economic concepts to inform enrollment management. New Dir Inst Res. 2006;132:59–74.
9. Martineau M. Moneyball in higher education: IR’s role in strategic enrollment management. In: Valcik NA, Johnson JA, eds. Institutional Research Initiatives in Higher Education. New York, NY: Routledge; 2017:47–56.
10. Hope J. Create an enrollment management structure to support recruitment, retention goals. Enrollment Manage Rep. 2017;20(12):1–5.
11. Langston R. Create a data-driven culture within your enrollment management operations. Enrollment Manage Rep. 2018;21(10):8.
12. Burkhardt JC, DesJardins SL, Teener CA, Gay SE, Santen SA. Enrollment management in medical school admissions: A novel evidence-based approach at one institution. Acad Med. 2016;91:1561–1567.
13. Grove WM, Zald DH, Lebow BS, Snitz BE, Nelson C. Clinical versus mechanical prediction: A meta-analysis. Psychol Assess. 2000;12:19–30.
14. Dawes RM. The ethics of using or not using statistical prediction rules in psychological practice and related consulting activities. Philos Sci. 2002;69(suppl 3):S178–S184.
15. Health Resources and Services Administration, Bureau of Health Professions. The Rationale for Diversity in the Health Professions: A Review of the Evidence. Washington, DC: US Department of Health and Human Services; October 2006. https://www.pipelineeffect.com/wp-content/uploads/2015/04/diversityreviewevidence.pdf. Accessed July 18, 2018.
16. Danziger S, Levav J, Avnaim-Pesso L. Extraneous factors in judicial decisions. Proc Natl Acad Sci. 2011;108(17):6889–6892.
17. Bachmann LM, Kolb E, Koller MT, Steurer J, ter Riet G. Accuracy of Ottawa ankle rules to exclude fractures of the ankle and mid-foot: Systematic review. BMJ. 2003;326:417.
18. Stiell IG, Clement CM, Grimshaw J, et al. Implementation of the Canadian C-spine rule: Prospective 12 centre cluster randomised trial. BMJ. 2009;339:b4146.
19. Center for Workforce Studies. Recent Studies and Reports on Physician Shortages in the US. Washington, DC: Association of American Medical Colleges; 2012. https://www.aamc.org/download/100598/data. Accessed July 18, 2018.
20. Bandiera G, Stiell IG, Wells GA, et al; Canadian C-Spine and CT Head Study Group. The Canadian C-spine rule performs better than unstructured physician judgment. Ann Emerg Med. 2003;42:395–402.
21. Kahneman D, Tversky A. Intuitive Prediction: Biases and Corrective Procedures. Arlington, VA: Cybernetics Technology Office; 1977.
22. Grewal D, Ku MC, Girod SC, Valantine H. How to recognize and address unconscious bias. In: Roberts LW, ed. The Academic Medicine Handbook: A Guide to Achievement and Fulfillment for Academic Faculty. New York, NY: Springer; 2013:405–412.
23. Schulz-Hardt S, Frey D, Lüthgens C, Moscovici S. Biased information search in group decision making. J Pers Soc Psychol. 2000;78:655–669.
24. DesJardins SL. An analytic strategy to assist institutional recruitment and marketing efforts. Res High Educ. 2002;43(5):531–553.
25. Maltz EN, Murphy KE, Hand ML. Decision support for university enrollment management: Implementation and experience. Decis Support Syst. 2007;44(1):106–123.
26. Kahneman D. A perspective on judgment and choice: Mapping bounded rationality. Am Psychol. 2003;58:697–720.
© 2018 by the Association of American Medical Colleges