Research Reports

Outcomes-Based Selection Into Medical School: Predicting Excellence in Multiple Competencies During the Clinical Years

Schreurs, Sanne PhD; Cleutjens, Kitty B.J.M. PhD; Cleland, Jennifer PhD; oude Egbrink, Mirjam G.A. PhD, MHPE

doi: 10.1097/ACM.0000000000003279


Members of medical school selection committees aim to identify the best possible students and, ultimately, the best future doctors.1–4 Since medical school applicants typically outnumber the available places, a range of tools has been developed to help those involved in the selection process make decisions. These tools include cognitive indicators (e.g., preuniversity grade point average [pu-GPA], aptitude tests [e.g., the Medical College Admission Test]) and (inter)personal assessments (e.g., Multiple Mini Interviews [MMIs], Situational Judgement Tests [SJTs]).1,2,5 Typically, a combination of cognitive and (inter)personal tools is preferred for making admissions decisions, as both are important elements in educational performance and future careers.3,6–8

Many previous studies have shown that cognitive admissions tools are predictive of success during the early, mainly preclinical years of medical education, but that their predictive value decreases over time.2,3,7,9 Conversely, research indicates that (inter)personal assessments are more predictive of performance in the later, more clinical years of medical school1–3,5,7; however, few investigators have studied the predictive validity of combinations of cognitive and (inter)personal selection tools for the clinical phase of medical school.7 The few studies that have explored this question have used relatively crude overall measures (dropout, delays in progress, and/or overall grades per clerkship), and, notably, they have reported few differences among medical students in the later years of study (and effect sizes have been small).10,11 Indeed, only one study showed that MMIs predict clerkship performance at a more granular level.12 Importantly, many finer-grained indicators are available for measuring performance in the clinical years of medical school, including performance on formal and workplace-based assessments using competencies or entrustable professional activities (EPAs).13–18 These finer-grained metrics may be more authentic and more informative indicators of success during the clinical years of medical school.19

The relative lack of in-depth investigations into the predictive value of selection for the clinical phase of medical school and future performance as a doctor may be due, at least to some extent, to the fact that the relation between the criterion (i.e., the outcome: clinical performance) and the predictor (i.e., selection performance) is distal, much more so than in the preclinical phase.1,20,21 One way to address this gap is to blueprint the selection procedure to the outcome criteria at the end of medical school.3,4,7,19 In this manner, the constructs assessed during admissions and in the clinical phase of medical school are more closely related; therefore, although the predictor and criterion remain distal in time, they are more congruent in content. Outcome frameworks that describe the roles students should be able to fulfill, and the competencies they should possess, at graduation and, hence, at the start of their career as a medical doctor14,16,22 offer this possibility. Aligning selection criteria and outcome frameworks overcomes the so-called "criterion problem," or the impossibility of determining the worth of a selection procedure if the outcome to be predicted is unclear.1,2,23 Ultimately, aligning selection with outcomes may decrease the risk of admitting good students instead of good doctors.

Constructive alignment throughout the entirety of medical school—starting with selection and going through the end of the clinical phase—has been proposed as a way to improve the predictive validity of selection at the gate for the clinical years.5,8,24,25 A previous study conducted by our team illustrated that a multitool selection procedure, aligned with the outcome framework used to build the medical curriculum, predicted student performance in the preclinical phase of that curriculum.4 Until now, however, it was not known whether such explicit constructive alignment (of outcomes and selection) could also increase an admissions procedure’s predictive value in the clinical phase.

We aimed, therefore, to assess the relationship between selection performance and performance during the clinical years of a medical program. We conducted this study in the context of a medical school where the selection procedure, curriculum, and assessments all align with an outcome framework, specifically the 2009 Framework for Undergraduate Medical Education in the Netherlands,16 which itself is based on CanMEDS.14 Both the CanMEDS framework and the Dutch framework expand on the roles a doctor should be able to fulfill and on the competencies a medical school graduate should possess to be able to fulfill these roles. The 7 roles are as follows: Medical Expert, Communicator, Collaborator, Organizer (Leader in the 2015 edition of CanMEDS15), Health Advocate, Scholar, and Professional.14–16 We examined whether students who were selected via the local outcomes-based selection procedure (outlined in brief below), which focused on assessing dispositions for the CanMEDS roles, performed better in these 7 roles during the clinical phase of their medical training compared with students who were rejected through this procedure and entered medical school via an alternative route, a lottery procedure based on pu-GPA.

Method

Context

We conducted the study at Maastricht University Medical School (MUMS) in the Netherlands, where medical studies consist of a largely didactic preclinical bachelor's program followed by a clinical master's program, during which students complete a predetermined number of clinical rotations. Both are 3-year programs aligned with the CanMEDS outcome framework, as is the norm in the Netherlands.16

We focused on the predictive value of the MUMS selection procedure for student performance in each of the 7 CanMEDS roles during clinical rotations. The MUMS clinical master’s program consists of the following 5 compulsory rotations:

  • Reflective Medicine (i.e., internal medicine and pulmonology, 12 weeks),
  • Surgical Medicine (12 weeks),
  • Mother and Child (10 weeks),
  • Neurosciences (20 weeks), and
  • Family/Social Medicine (12 weeks).

This master’s program also includes 2 elective rotations of 8 and 10 weeks, participation in scientific research (SCIP, 18 weeks), and a senior rotation at a department selected by the student (i.e., the health care participation [HELP], 18 weeks).

For each rotation, students gather not only quantitative information (e.g., from knowledge tests) but also qualitative and narrative feedback on their performance in all 7 CanMEDS roles. They gather information and feedback on multiple occasions and from different people representing various roles (e.g., supervisors, nurses, peers) over time. The students, guided by a mentor, reflect on this information and feedback, as well as on their own view of their progression, and capture their reflection in their portfolio. The role of the mentor is to advise students on how to learn from the feedback so as to facilitate further learning.

In addition to the programmatic assessment data gathered in students' portfolios, MUMS gathers the results of students' performance on an interuniversity progress test administered 4 times per year. This progress test focuses on knowledge, and students' scores are taken into account in the assessment of their performance in the CanMEDS role of Medical Expert. The progress test is administered throughout both the bachelor's and master's programs; the current study focuses on students' performance during the latter, clinical program only.

All of this information, including the students’ self-reflections, is appraised by the Board of Examiners for Medicine 3 times (T1, T2, and T3) during the 3-year (clinical) master’s program. The Board of Examiners for Medicine provides a final judgment regarding competence in all 7 roles on a 3-point scale: below expectations, as expected, and exceeds expectations. At T1, the students have completed at least the first 2 clinical rotations (Reflective and Surgical Medicine), and at T2, they are required to have completed the remainder of the compulsory rotations (Mother and Child, Neurosciences, and Family/Social Medicine). At T3, students have completed the whole program, including the HELP, which is always the last part of the master’s program. Students may complete the 2 elective rotations and the SCIP at any point in the 3-year program; in other words, the elective rotations and SCIP may be included in any of the final judgments, T1, T2, or T3.

We specifically included the student cohorts entering the MUMS bachelor’s (i.e., preclinical) program in 2011, 2012, and 2013 since, during those years, there were 2 distinct ways to get into MUMS: a local, outcomes-based selection procedure and a national pu-GPA-based lottery procedure. In these years, students who were rejected through the local outcomes-based selection procedure could gain entry into the medical program via the lottery procedure, which considered only pu-GPA. This latter group provides a natural control group. Furthermore, the acceptance through the lottery of applicants rejected through the local procedure prevented restriction of range; that is, applicants scoring low in the local outcomes-based selection procedure could enter through the GPA-based lottery, and applicants with a low pu-GPA had a chance of entering through the outcomes-based selection procedure.

The lottery procedure was weighted. Applicants with a pu-GPA ≥ 8 (in the Netherlands, students' GPAs range from 0 to 10, 10 being the highest) were all admitted. Applicants with lower pu-GPAs were divided into groups as follows: GPAs of ≥ 7.5 but < 8; ≥ 7 but < 7.5; ≥ 6.5 but < 7; and < 6.5. Students in these groups were admitted in the ratio 9:6:4:3.26
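For readers who prefer to see the mechanics, the sketch below simulates a banded lottery of this kind in Python. The band boundaries and the 9:6:4:3 weights come from the description above; everything else (function names, the draw-until-full loop, the input format) is an illustrative assumption, not a reconstruction of the official Dutch procedure.

```python
import random

# Relative admission chances for the pu-GPA bands below 8; applicants with
# pu-GPA >= 8 were admitted outright. The 9:6:4:3 ratio follows the weighted
# lottery described above (see also reference 26).
BAND_WEIGHTS = [
    (7.5, 9),  # 7.5 <= GPA < 8.0
    (7.0, 6),  # 7.0 <= GPA < 7.5
    (6.5, 4),  # 6.5 <= GPA < 7.0
    (0.0, 3),  # GPA < 6.5
]

def lottery_weight(pu_gpa: float) -> int:
    """Relative chance of admission for an applicant with pu-GPA < 8."""
    for lower_bound, weight in BAND_WEIGHTS:
        if pu_gpa >= lower_bound:
            return weight
    return BAND_WEIGHTS[-1][1]

def run_lottery(applicants: dict[str, float], places: int) -> list[str]:
    """Admit all pu-GPA >= 8, then fill remaining places by weighted draw."""
    admitted = [name for name, gpa in applicants.items() if gpa >= 8.0]
    pool = {name: lottery_weight(gpa)
            for name, gpa in applicants.items() if gpa < 8.0}
    while len(admitted) < places and pool:
        names, weights = zip(*pool.items())
        winner = random.choices(names, weights=weights, k=1)[0]
        admitted.append(winner)
        del pool[winner]
    return admitted
```

Under this weighting, an applicant in the 7.5 to 8 band is 3 times as likely to be drawn as one below 6.5, matching the 9:3 ratio.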

In the current study, we compared the clinical performance of students admitted to MUMS through the local, outcomes-based selection procedure (selection-positive, SP) with that of students who were initially rejected through the outcomes-based selection procedure but entered medical school through the GPA-based lottery (selection-negative, SN). We gathered data for T1, T2, and T3 for the 2011 and 2012 cohorts and for T1 and T2 for the 2013 cohort (students in the 2013 cohort had not yet finished the T3 assessment at the time of data collection).

Selection procedure

As stated earlier, the outcomes-based admissions procedure at MUMS aligns with the Dutch adaptation of the CanMEDS competency framework.16 In a previous article,4 we have explained in detail how we adapted the CanMEDS competencies into required applicant data; that is, how we translated outcome competencies into “derived competencies” that an 18-year-old applicant may possess. These derived competencies are as follows:

  • knowledge shown at preuniversity education (e.g., pu-GPA),
  • transfer (knowledge and information integration),
  • textual comprehension and verbal/inductive reasoning,
  • overall communication and strength of arguments,
  • collaboration,
  • organization,
  • social and medical consciousness,
  • ethical awareness,
  • empathy, and
  • reflection.

The first round of the local, outcomes-based selection procedure consists of reviewing applicants' portfolios, which comprise 4 main elements. The first element, preuniversity academic performance, includes pu-GPA and information about additional courses and other academic activities. These metrics, designed to measure knowledge obtained during applicants' preuniversity education, serve as indicators for the derived competencies transfer, textual comprehension, and verbal/inductive reasoning. The second element, extracurricular activities,27 entails a description of students' nonacademic preuniversity activities, along with a written explanation of how these experiences helped develop competencies relevant for studying and practicing medicine (e.g., communication, collaboration, organization). The third element, fit with problem-based learning (PBL), is measured through open-ended questions and is designed to make sure applicants make an informed choice about matriculating into a program that uses this educational philosophy. Finally, the fourth element is fit with MUMS. This last element, also measured via open-ended questions, is designed to make sure applicants are aware of the Maastricht curriculum and how MUMS differs from other medical schools in the Netherlands. These 4 elements are weighted, respectively, 40%, 40%, 10%, and 10%. The first 2, heavily weighted parts align with the derived competencies, whereas the last 2 are mostly aimed at creating awareness about the MUMS program and approach.4
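As a small illustration of the 40/40/10/10 weighting, the following sketch combines the 4 element scores into a first-round portfolio score. The element names and the assumption that each element is scored on a 0 to 10 scale are ours for the example, not the actual MUMS rubric.

```python
# Weights for the 4 portfolio elements, as described above.
PORTFOLIO_WEIGHTS = {
    "preuniversity_academic_performance": 0.40,
    "extracurricular_activities": 0.40,
    "fit_with_pbl": 0.10,
    "fit_with_mums": 0.10,
}

def portfolio_score(element_scores: dict[str, float]) -> float:
    """Weighted first-round score (elements assumed scored 0-10)."""
    return sum(PORTFOLIO_WEIGHTS[name] * score
               for name, score in element_scores.items())

# Example: strong academics, average extracurriculars.
print(portfolio_score({
    "preuniversity_academic_performance": 8.5,
    "extracurricular_activities": 6.0,
    "fit_with_pbl": 7.0,
    "fit_with_mums": 7.5,
}))  # -> 7.25
```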

The second round of the local, outcomes-based selection procedure, which focuses more specifically on the competencies expected of a medical doctor, consists of 2 separate tests. The first is the SJT,28,29 through which applicants are confronted with situations that relate to the job or program to which they are applying. Applicants' responses to the situations reflect "implicit trait policies," which Patterson and colleagues define as "beliefs about the cost or benefits of acts expressing compassion, caring and respect for patients, related to candidates' trait expression and values."28(p2) In the SJT as applied at MUMS, applicants view videos of situations (e.g., critical incidents, daily life situations) related to medical school, clinical practice, or specific CanMEDS competencies. In contrast with typical SJTs, which require respondents to give a close-ended answer, the SJT at MUMS is open ended, and applicants must respond to, reflect on, think about, or otherwise engage with the situation outlined in the video. The test takes 90 minutes and consists of 10 assignments, each requiring, on average, answers to 4 questions. The second test is a written aptitude test involving a broad range of questions about managing relevant situations, covering not only the competencies in the SJT but also planning skills and fluid intelligence. This test takes 75 minutes and consists of 8 assignments, each with a varying number of subitems. Together, these 2 tests map onto all 7 CanMEDS roles,14,16 targeted to the expected level of knowledge, skills, and attitudes appropriate for 18- to 19-year-old applicants. Prior research indicates that the admissions procedure is robust and replicable.4,30 In another study,30 we compared the constructs we intended to measure (i.e., the derived competencies) with the applicants' results and found that the constructs we intended to measure were indeed the ones we measured. For more information on the specific selection procedure employed in the current study or on its development, content, and psychometric properties, please see our previous studies.4,30

Outcome variables

We included 2 sets of outcome variables: first, the Board of Examiners for Medicine's assessments of each student's performance in the 7 CanMEDS roles (measured on the 3-point scale: below expectations, as expected, and exceeds expectations) in each of the 3 master's years (i.e., at T1, T2, and T3); and second, the mean progress test result for each year. To compare students within their cohorts, we converted their raw score on each progress test into a z score within the student's university and cohort. We retrieved these individual z scores from the university's database and calculated a mean progress test score per year per student for students who had completed at least 3 out of 4 progress tests that year (students could pass the progress test requirement with 3 tests, provided their results were sufficiently high).
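This preprocessing can be summarized in a few lines of pandas; the file name and column names below are hypothetical stand-ins for the university database fields, not the actual schema.

```python
import pandas as pd

# Hypothetical input: one row per student per progress test administration.
df = pd.read_csv("progress_tests.csv")

# Convert each raw score into a z score within university and cohort,
# separately for each test administration.
grp = df.groupby(["university", "cohort", "test_id"])["raw_score"]
df["z"] = (df["raw_score"] - grp.transform("mean")) / grp.transform("std")

# Mean z score per student per master's year, keeping only students who
# completed at least 3 of the 4 tests administered that year.
per_year = (
    df.groupby(["student_id", "year"])["z"]
      .agg(mean_z="mean", n_tests="count")
      .reset_index()
)
per_year = per_year[per_year["n_tests"] >= 3]
```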

We gathered data on the progress test results and on the performance in the 7 CanMEDS roles on T1 and T2 in March 2019. In April 2019, we gathered the data for T3.

Ethical approval

During the selection procedure, we asked applicants to give their informed consent for the use of their selection and clinical assessment data for research purposes. We emphasized that not taking part in this research would not adversely influence either their admission or progression. The lead author (S.S.) anonymized participant data before sharing it with the full research team. The Ethical Review Board of the Netherlands Association for Medical Education approved this research (Nederlandse Vereniging voor Medisch Onderwijs, NVMO; file number 303).

Statistical analysis

We produced descriptive statistics for all outcomes (performance in the 7 CanMEDS roles at all 3 timepoints and the progress test z scores per year), as well as for covariates (sex, age, cohort, and pu-GPA). We selected 3 of these covariates based on previous research (i.e., sex, age, and pu-GPA31–33). We examined cohort because assessments changed slightly from year to year; thus, each cohort’s “treatment” differed slightly. We assessed differences in age and pu-GPA between the SP students and the SN students using analyses of variance (ANOVAs), and we analyzed differences in sex and cohort between the SP and SN students using a chi-square analysis. Furthermore, given the minor differences across cohorts, we added some descriptive statistics per cohort: age and pu-GPA using ANOVA, and sex using chi-square analysis.
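A minimal sketch of these between-group covariate checks, assuming a hypothetical one-row-per-student file with the covariates and an SP/SN group column:

```python
import pandas as pd
from scipy.stats import chi2_contingency, f_oneway

students = pd.read_csv("students.csv")  # hypothetical input
sp = students[students["group"] == "SP"]
sn = students[students["group"] == "SN"]

# ANOVA for the continuous covariates (age, pu-GPA)...
for cov in ["age", "pu_gpa"]:
    f_stat, p = f_oneway(sp[cov].dropna(), sn[cov].dropna())
    print(f"{cov}: F = {f_stat:.2f}, p = {p:.3f}")

# ...and chi-square tests for the categorical ones (sex, cohort).
for cov in ["sex", "cohort"]:
    chi2, p, dof, _ = chi2_contingency(
        pd.crosstab(students["group"], students[cov]))
    print(f"{cov}: chi2 = {chi2:.2f}, p = {p:.3f}")
```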

We considered performances in the 7 CanMEDS roles our primary outcomes. We explored these first using a chi-square analysis (or, if applicable, a Fisher's exact test), taking into account all 3 levels of the outcomes (below expectations, as expected, and exceeds expectations). The occurrence of below expectations was so rare for all roles except Medical Expert (see Results) that we did not take it into account in later analyses. Next, we applied binary logistic regression to all outcomes (including Medical Expert); that is, we compared how frequently the SP students, versus the SN students, achieved an as expected versus an exceeds expectations rating for each of the 7 roles at each timepoint. We set the predicted outcome for the binary logistic regression to as expected. In each regression analysis, we took all covariates into account.
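The regression step might look like the sketch below (statsmodels, hypothetical column names). Note that the paper sets as expected as the predicted outcome; coding exceeds expectations as the event, as here, simply inverts the resulting odds ratios.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical input: one row per student per CanMEDS role per timepoint.
ratings = pd.read_csv("role_ratings.csv")

# Drop the rare 'below expectations' ratings, as in the paper, and code the
# remaining 2 levels as a binary outcome.
ratings = ratings[ratings["rating"] != "below expectations"].copy()
ratings["exceeds"] = (ratings["rating"] == "exceeds expectations").astype(int)

# One model per role and timepoint, with SN as the reference group and the
# paper's covariates (sex, age, cohort, pu-GPA) included.
for (role, timepoint), sub in ratings.groupby(["role", "timepoint"]):
    model = smf.logit(
        "exceeds ~ C(group, Treatment('SN')) + C(sex) + C(cohort)"
        " + age + pu_gpa",
        data=sub,
    ).fit(disp=False)
    print(role, timepoint, model.params, sep="\n")  # group coef = log OR
```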

For the progress test, we compared the mean z scores per year between the SP group and the SN group using analysis of covariance (ANCOVA), considering the same covariates as those described above for the regression analyses.
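Correspondingly, the ANCOVA can be expressed as an OLS model followed by an ANOVA table; the file and column names are again hypothetical and reuse the yearly means from the preprocessing sketch above.

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Hypothetical input: one row per student per year, with the yearly mean
# progress test z score plus group and covariates.
means = pd.read_csv("progress_means.csv")

for year, sub in means.groupby("year"):
    fit = smf.ols("mean_z ~ C(group) + C(sex) + C(cohort) + age + pu_gpa",
                  data=sub).fit()
    print(f"--- year {year} ---")
    print(sm.stats.anova_lm(fit, typ=2))  # Type II ANCOVA table
```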

We analyzed all data using SPSS version 24 for Windows (IBM Corp, Armonk, New York).

Results

Descriptive statistics

The total sample included 692 students: 401 (57.9%) were admitted into MUMS via the local, outcomes-based selection procedure, and 291 (42.1%) were rejected through this procedure and entered MUMS through the pu-GPA-based lottery. Gap years and delays in progression (e.g., repeating years) resulted in some missing data. As mentioned, T3 data were not yet available for the 2013 cohort, leading to a smaller sample size for T3.

Table 1 shows the descriptive statistics per covariate. We detected no statistically significant differences by cohort or by other covariates; therefore, we combined cohorts in all later analyses. Furthermore, we found no significant differences in sex, age, cohort, or pu-GPA between SP and SN students.

Table 1: Characteristics of Students Accepted Into Maastricht University Medical School Through a Local, Outcomes-Based Selection Procedure and Through a Lottery-Based Admissions Procedure in 2011, 2012, and 2013

Performance in the 7 CanMEDS roles

In Table 2, we provide descriptive statistics and the results of the exploratory chi-square analyses. Specifically, Table 2 illustrates that, in the entire sample of students, the outcome below expectations was rare, except in the role of Medical Expert, for which the overall frequency of below expectations decreased from 14.4% (n = 89) to 8.6% (n = 46) and then to 0% (n = 0) at, respectively, T1, T2, and T3. Students may not complete their portfolios with any below expectations rating on their grade list, which explains why there are no below expectations ratings at T3. Table 2 also shows that the SP students performed significantly better in the roles of Communicator, Collaborator, and Professional at T1 and in those same roles, plus Organizer, at T2. At T3, the SP students performed significantly better than the SN students in 6 of 7 CanMEDS roles: Communicator, Collaborator, Organizer, Scholar, Health Advocate, and Professional.

Table 2: Results of Chi-Square Analyses of Medical Students' Performance Scores on Competencies Aligned With the 7 CanMEDS Medical Doctor Roles at 3 Time Periods During the Clinical Years of Their Medical Educationa

Table 3 shows the logistic regressions for the binary outcomes (as expected versus exceeds expectations), after accounting for all covariates that may affect performance (sex, age, cohort, and pu-GPA). In line with the findings of the chi-square analysis, we noted significant differences at T1: the SP students received the rating exceeds expectations significantly more often than the SN students for the roles of Communicator, Collaborator, and Professional. These differences remained at T2: specifically, SP students received the rating exceeds expectations significantly more often than SN students for the roles of Communicator, Collaborator, and Professional, plus the role of Organizer. The binary logistic regression model also showed a significant difference as early as T2, with SP students outperforming their SN counterparts in the role of Health Advocate. Finally, the results at T3 again fully align with those of the chi-square analysis: the SP students received the rating exceeds expectations significantly more often than the SN students for the roles of Communicator, Collaborator, Organizer, Scholar, Health Advocate, and Professional. The odds ratios of the effects found in the regression analyses indicate small to medium effects.34 At T3, the SP students were significantly more likely than the SN students to be excellent at Communicating (21.8% more likely), Collaborating (18.7%), Organizing (16%), successfully completing Scholarly activities (13.8%), Advocating health (10.7%), and behaving Professionally (18.6%). Table 3 also shows the significant effects of covariates. Higher pu-GPA clearly has an additional, independent, positive effect, and both cohort and sex occasionally influence performance.

Table 3: Results of Binary Logistic Regression Analyses Comparing the Performance of SP and SN Students on Competencies Aligned With the 7 CanMEDS Medical Doctor Roles at 3 Time Periods During the Clinical Years of Their Medical Educationa

As indicated, the progress test results are part of students' performance in the role of Medical Expert. Table 4 shows the ANCOVA results comparing the mean yearly progress test scores of SP and SN students, controlling for sex, age, cohort, and pu-GPA. We found significant differences between the groups for years 2 and 3; specifically, SP students had a higher mean score on the progress test in these years. Only one of the covariates, pu-GPA, significantly influenced the progress test results, and it did so throughout all 3 years.

Table 4: Results of Analyses of Covariance Comparing the Performance (Expressed in Z Scores) of SP and SN Students on Their Progress Testa

Discussion

To the best of our knowledge, this is the first study examining the value of a constructively aligned, outcomes-based selection procedure for predicting student performance in the clinical years of medical school. The unique Dutch context of 2 admissions processes operating in parallel allowed us to compare outcomes in the clinical years across 2 groups: those who were admitted through the local, outcomes-based selection procedure and their counterparts who were rejected through this procedure and then admitted via a pu-GPA-based lottery procedure. We found that students who were admitted via the local outcomes-based selection procedure (SP students) outperformed their initially rejected, lottery-entry counterparts (SN students) on an increasing number of CanMEDS roles and on national progress tests as they advanced through the clinical years of medical school. In other words, the local outcomes-based selection procedure not only had predictive value in the clinical phase of medical school, but its predictive value also increased over time.

One of the most obvious strengths of the current study is that our outcome measures included indicators of actual clinical performance in daily practice rather than focusing solely on the relationship between medical school selection and (a mean of) overall quantitative clerkship grades or other, cruder measures (e.g., dropout, delays).10,11 Interestingly, in our earlier study examining outcomes-based selection and performance during the preclinical years, we found that the predictive value of the admissions procedure was highest for objective structured clinical examination (OSCE) performance,4 which is arguably the preclinical outcome measure most closely related to clinical performance. Indeed, drawing on the results of our current study and of the 2018 study,4 we have found that it is possible to use outcomes-based criteria to select, at the application stage, students who perform better throughout, and at the end of, their 6 years of medical school. In other words, the outcomes-based admissions procedure in this study has not lost its predictive value after the early, preclinical years.

We used outcome variables (i.e., indicators of progression) that were carefully designed to align with the CanMEDS-based Framework for Undergraduate Medical Education in the Netherlands.16 In one other study, MMIs focusing on noncognitive skills showed predictive value for clinical performance during clerkships at a more granular level.12 One previous study from the Netherlands examined preuniversity factors (e.g., extracurricular activities) and found that these are related to clinical achievement27; however, other studies reported that their admissions procedures had little11,35,36 or no10,37 relation to clinical performance. Comparing these earlier studies with our own highlights the importance of using the right criteria for constructive alignment (blueprinting admissions, curriculum, and assessment to outcomes) when selecting and educating students to become competent doctors.7,14,16,19,22,38 Importantly, this blueprinting at MUMS occurred in advance of, and independent of, the current study, thus precluding bias.

Notably, students in both the SP and SN groups performed well in the clinical phase. These results echo previous observations that medical school applicants are relatively homogeneous and that most are capable of completing medical studies, realities that make selection a challenging endeavor.39,40 However, if the goal of medical school is to produce the best doctors, our study indicates that selection procedures constructively aligned with intended outcomes16 may identify students better suited to caring for patients than selection based on pu-GPA alone.

The SP students showed more excellence at graduation (indicated by their grades at T3). As mentioned, they were significantly more likely to be excellent at Communicating, Collaborating, Organizing, successfully completing Scholarly activities, Advocating health, and behaving Professionally than SN students. These are important results, especially since they relate closely to later patient care, the ultimate goal of educating a good doctor. The program-specific, outcomes-based components of the selection procedure may have identified students who fit better with the educational system and context at MUMS (i.e., those who may thrive in a PBL environment), compared with those who were rejected through the outcomes-based procedure but entered via lottery.5,7,41 This finding suggests that it may be possible to align admissions with institutional mission,42 which merits further exploration.

Previous research has raised questions about the added value of a labor-intensive selection procedure over a simple lottery procedure.37 We believe our results clearly show a positive effect of active selection. Given these and previous results,4 we feel confident stating that an admissions procedure that aligns with the curriculum, assessment, and long-term goals has the capacity to identify students who will outperform pu-GPA-admitted students throughout medical school. Moreover, research has already shown that our local, outcomes-based selection procedure is cost efficient for the bachelor’s program at MUMS.43 These results justify the extra effort required to enact outcomes-based selection procedures for medical school, especially since any cost savings garnered from the admissions procedure may be invested in high-quality education.

Several additional findings are worth highlighting. One is the effect of pu-GPA. The students in the 2 groups had comparable starting points: we found no significant difference in pu-GPA between the SP group and the SN group. However, the regression analyses showed that, throughout the clinical phase, pu-GPA has an additional, positive effect; higher pu-GPA correlates with increasing scores in 3 CanMEDS roles (Medical Expert, Scholar, and Professional) from T1 to T2 and from T2 to T3. Pu-GPA consistently showed an effect for performance in these 3 roles (effects were weaker for the other 4 roles). Notably, the predictive effect of pu-GPA does not detract from the effect of our outcomes-based admissions procedure: results with and without pu-GPA as a covariate are consistent. Selection and pu-GPA thus seem to be mostly supplemental (i.e., they show incremental value), each with different strengths in terms of predictive value.

Another interesting covariate is sex. The effect of sex appears to increase throughout the clinical phase, always favoring the female students. These advantages for women are mostly found in (inter)personal skills or competencies: Collaborator at T2 and Communicator, Collaborator, and Professional at T3. This is an interesting finding, as we observed no advantage for women in the selection procedure itself.

Finally, the finding that progress test results significantly differed between the SP students and the SN students, while performance in the role of Medical Expert was essentially the same in the 2 groups, may be surprising. One explanation may be that the role of Medical Expert encompasses not only the progress test results but also results and feedback from other knowledge tests and clinical assessments from throughout the rotations.

We note limitations to our study. We report on one system in one context, where the selection procedure is a university-specific (local) responsibility. Different selection procedures may be needed for different contexts and goals (e.g., different learning environments at universities, selecting for future demands, selecting with a specific focus on widening access to diverse or traditionally underserved populations).22,25,42 Comparisons across institutions, as well as explicit comparisons of different holistic selection procedures, would be helpful to examine the generalizability of our findings. Furthermore, our own time constraints meant we had to collect data before T3 data were available for the 2013 cohort, causing a smaller sample size for T3. This difference could explain why the effects of the admissions procedure increased with each time period; however, the lower number of students should have made it more difficult (not easier) to find significant results. As mentioned, we were aware of a few, minor variations in the assessments across the 3 cohorts, but the results indicate no statistically significant differences. Finally, we lacked some demographic information on students, such as socioeconomic status, nationality, or non-Western/immigration background. Recent data suggest that in the Netherlands, there is no overall effect of demographic variables in current admissions procedures44; however, in other countries, such demographics may have large effects.45–50

Conclusions

In conclusion, this study adds information to the long-standing debate regarding which qualities/outcomes can be used in selection procedures. Our results show that careful consideration of intended graduate outcomes, defined using an outcome framework such as CanMEDS (or another nationally endorsed framework), along with clear constructive alignment of selection, curricula, and assessment to these outcomes,5,7,24 can yield an admissions procedure that predicts performance in the preclinical as well as the clinical phase of medical school.

Acknowledgments:

The authors wish to thank the late Dr. Arno Muijtjens for his indispensable help in conceptualizing the current study and for his statistical support during a previous study, which they were able to apply to the current study. The authors also wish to thank Margriet Schoonbrood-Brorens for her support and persistence in gathering the data.

References

1. Cleland J, Dowell J, McLachlan J, Nicholson S, Patterson F. Identifying best practice in the selection of medical students (literature review and interview survey). General Medical Council. https://www.gmc-uk.org/-/media/gmc-site-images/about/identifyingbestpracticeintheselectionofmedicalstudentspdf51119804.pdf?la=en&hash=D06B62AD514BE4C3454DEECA28A7B70FDA828715. Published November 2012. Accessed February 24, 2020.
2. Patterson F, Knight A, Dowell J, Nicholson S, Cousans F, Cleland J. How effective are selection methods in medical education? A systematic review. Med Educ. 2016;50:36–60.
3. Prideaux D, Roberts C, Eva K, et al. Assessment for selection for the health care professions and specialty training: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33:215–223.
4. Schreurs S, Cleutjens KB, Muijtjens AMM, Cleland J, Oude Egbrink MGA. Selection into medicine: The predictive validity of an outcome-based procedure. BMC Med Educ. 2018;18:214.
5. Patterson F, Roberts C, Hanson MD, et al. 2018 Ottawa consensus statement: Selection and recruitment to the healthcare professions. Med Teach. 2018;40:1091–1101.
6. Kreiter CD. A research agenda for establishing the validity of non-academic assessments of medical school applicants. Adv Health Sci Educ Theory Pract. 2016;21:1081–1085.
7. Patterson F, Zibarras L. Selection and Recruitment in the Healthcare Professions: Research, Theory and Practice. Cham, Switzerland: Springer; 2018.
8. Albanese MA, Snow MH, Skochelak SE, Huggett KN, Farrell PM. Assessing personal qualities in medical school admissions. Acad Med. 2003;78:313–321.
9. Siu E, Reiter HI. Overview: What’s worked and what hasn’t as a guide towards predictive admissions tool development. Adv Health Sci Educ Theory Pract. 2009;14:759–775.
10. Schripsema NR. Effects of medical school admission based on GPA, voluntary multifaceted selection, or lottery on long-term study outcomes. In: Medical Student Selection: Effects of Different Admissions Processes [dissertation]. Groningen, the Netherlands: Rijksuniversiteit Groningen; 2017:51–64.
11. Urlings-Strop LC, Themmen AP, Stijnen T, Splinter TA. Selected medical students achieve better than lottery-admitted students during clerkships. Med Educ. 2011;45:1032–1040.
12. Reiter HI, Eva KW, Rosenfeld J, Norman GR. Multiple mini-interviews predict clerkship and licensing examination performance. Med Educ. 2007;41:378–384.
13. Bugaj TJ, Schmid C, Koechel A, et al. Shedding light into the black box: A prospective longitudinal study identifying the CanMEDS roles of final year medical students’ on-ward activities. Med Teach. 2017;39:883–890.
14. Frank JR. The CanMEDS 2005 Physician Competency Framework. Better Standards. Better Physicians. Better Care. Ottawa, Canada: The Royal College of Physicians and Surgeons of Canada; 2005. http://www.ub.edu/medicina_unitateducaciomedica/documentos/CanMeds.pdf. Accessed February 24, 2020.
15. Frank JR, Snell LS, Sherbino J. The draft CanMEDS 2015 Physician Competency Framework. Series II. Ottawa, Canada: The Royal College of Physicians and Surgeons of Canada; May 2014. http://www.royalcollege.ca/rcsite/documents/canmeds/canmeds-full-framework-e.pdf. Accessed February 24, 2020.
16. Van Herwaarden CLA, Laan RFJM, Leunissen RRM. Raamplan Artsopleiding 2009 [2009 Framework for Undergraduate Medical Education in the Netherlands]. Utrecht, the Netherlands; 2009. http://www.nvmo.nl/resources/js/tinymce/plugins/imagemanager/files/2009_nvmo_raamplan_artsopleiding_r_laan.pdf. Accessed February 24, 2020.
17. Rekman J, Gofton W, Dudek N, Gofton T, Hamstra SJ. Entrustability scales: Outlining their usefulness for competency-based clinical assessment. Acad Med. 2016;91:186–190.
18. Bok HGJ, de Jong LH, O’Neill T, Maxey C, Hecker KG. Validity evidence for programmatic assessment in competency-based education. Perspect Med Educ. 2018;7:362–372.
19. Wilkinson TM, Wilkinson TJ. Selection into medical school: From tools to domains. BMC Med Educ. 2016;16:258.
20. Roberts C, Wilkinson TJ, Norcini J, Patterson F, Hodges BD. The intersection of assessment, selection and professionalism in the service of patient care. Med Teach. 2019;41:243–248.
21. Cleland JA, Patterson F, Hanson MD. Thinking of selection and widening access as complex and wicked problems. Med Educ. 2018;52:1228–1239.
22. Raffoul M, Bartlett-Esquilant G, Phillips RL Jr. Recruiting and training a health professions workforce to meet the needs of tomorrow's health care system. Acad Med. 2019;94:651–655.
23. Patterson F, Cleland J, Cousans F. Selection methods in healthcare professions: Where are we now and where next? Adv Health Sci Educ Theory Pract. 2017;22:229–242.
24. Stegers-Jager KM. Lessons learned from 15 years of non-grades-based selection for medical school. Med Educ. 2018;52:86–95.
25. Conrad SS, Addams AN, Young GH. Holistic review in medical school admissions and selection: A strategic, mission-driven response to shifting societal needs. Acad Med. 2016;91:1472–1474.
26. Schripsema NR, van Trigt AM, Borleffs JC, Cohen-Schotanus J. Selection and study performance: Comparing three admission processes within one medical school. Med Educ. 2014;48:1201–1210.
27. Urlings-Strop LC, Themmen APN, Stegers-Jager KM. The relationship between extracurricular activities assessed during selection and during medical school and performance. Adv Health Sci Educ Theory Pract. 2017;22:287–298.
28. Patterson F, Zibarras L, Ashworth V. Situational judgement tests in medical education and training: Research, theory and practice: AMEE guide no. 100. Med Teach. 2016;38:3–17.
29. Motowidlo SJ, Hooper AC, Jackson HL. Implicit policies about relations between personality traits and behavioral effectiveness in situational judgment items. J Appl Psychol. 2006;91:749–761.
30. Schreurs S, Cleutjens KBJM, Collares CF, Cleland JA, oude Egbrink MGA. Opening the black box in selection. Adv Health Sci Educ Theory Pract. 2019. https://doi.org/10.1007/s10459-019-09925-1.
31. Cleland JA, Milne A, Sinclair H, Lee AJ. Cohort study on predicting grades: Is performance on early MBChB assessments predictive of later undergraduate grades? Med Educ. 2008;42:676–683.
32. Kusurkar R, Kruitwagen C, ten Cate O, Croiset G. Effects of age, gender and educational background on strength of motivation for medical school. Adv Health Sci Educ Theory Pract. 2010;15:303–313.
33. Cohen-Schotanus J, Muijtjens AM, Reinders JJ, Agsteribbe J, van Rossum HJ, van der Vleuten CP. The predictive validity of grade point average scores in a partial lottery medical school admission system. Med Educ. 2006;40:1012–1019.
34. Rosenthal JA. Qualitative descriptors of strength of association and effect size. J Soc Serv Res. 1996;21:37–59.
35. Stegers-Jager KM, Themmen AP, Cohen-Schotanus J, Steyerberg EW. Predicting performance: Relative importance of students’ background and past performance. Med Educ. 2015;49:933–945.
36. Wouters A, Croiset G, Schripsema NR, et al. A multi-site study on medical school selection, performance, motivation and engagement. Adv Health Sci Educ Theory Pract. 2017;22:447–462.
37. Wouters A. Effects of medical school selection on student motivation: A PhD thesis report. Perspect Med Educ. 2018;7:54–57.
38. Patterson F, Ferguson E, Thomas S. Using job analysis to identify core and specific competencies: Implications for selection and recruitment. Med Educ. 2008;42:1195–1204.
39. Lucieer SM, Stegers-Jager KM, Rikers RM, Themmen AP. Non-cognitive selected students do not outperform lottery-admitted students in the pre-clinical stage of medical school. Adv Health Sci Educ Theory Pract. 2016;21:51–61.
40. Schripsema NR, van Trigt AM, van der Wal MA, Cohen-Schotanus J. How different medical school selection processes call upon different personality characteristics. PLoS One. 2016;11:e0150645.
41. Burgess A, Roberts C, Clark T, Mossman K. The social validity of a national assessment centre for selection into general practice training. BMC Med Educ. 2014;14:261.
42. Sklar DP. Who’s the fairest of them all? Meeting the challenges of medical student and resident selection. Acad Med. 2016;91:1465–1467.
43. Schreurs S, Cleland J, Muijtjens AMM, Oude Egbrink MGA, Cleutjens K. Does selection pay off? A cost-benefit comparison of medical school selection and lottery systems. Med Educ. 2018;52:1240–1248.
44. van den Broek A, Mulder J, de Korte K, Bendig-Jacobs J, van Essen M. Selectie bij opleidingen met een numerus fixus & de toegankelijkheid van het hoger onderwijs [Selection for Studies With a Numerus Fixus & the Accessibility of Higher Education]. Nijmegen, the Netherlands: Ministerie van OCW; 2018.
45. Griffin B, Hu W. The interaction of socio-economic status and gender in widening participation in medicine. Med Educ. 2015;49:103–113.
46. Fielding S, Tiffin PA, Greatrix R, et al. Do changing medical admissions practices in the UK impact on who is admitted? An interrupted time series analysis. BMJ Open. 2018;8:e023274.
47. General Medical Council. National training survey 2013: Socioeconomic status questions. https://www.gmc-uk.org/-/media/documents/report-nts-socioeconomic-status-questions_pdf-53743451.pdf. Published 2013. Accessed February 14, 2020.
48. Association of American Medical Colleges. Total enrollment by U.S. medical school and race/ethnicity (Alone), 2019-2020. https://www.aamc.org/download/321540/data/factstableb5-1.pdf. Published November 2019. Accessed February 14, 2020.
49. Association of American Medical Colleges. Diversity in the physician workforce; Facts and figures 2014. http://www.aamcdiversityfactsandfigures.org. Published 2017. Accessed February 14, 2020.
50. Freeman BK, Landry A, Trevino R, Grande D, Shea JA. Understanding the leaky pipeline: Perceived barriers to pursuing a career in medicine or dentistry among underrepresented-in-medicine undergraduate students. Acad Med. 2016;91:987–993.
Copyright © 2020 The Author(s). Published by Wolters Kluwer Health, Inc. on behalf of the Association of American Medical Colleges.