Academic Medicine, April 2012, Volume 87, Issue 4
doi: 10.1097/ACM.0b013e318249687d
Medical School Admission

Students Versus Faculty Members as Admissions Interviewers: Comparisons of Ratings Data and Admissions Decisions

Eddins-Folensbee, Florence F. MD; Harris, Toi Blakley MD; Miller-Wasik, Melody; Thompson, Bruce EdD

Author Information

Dr. Eddins-Folensbee was, at the time of this research, senior associate dean of admissions and student affairs and associate professor of psychiatry and behavioral sciences, Baylor College of Medicine, Houston, Texas. She is now senior associate dean of students, vice dean for education, and professor of psychiatry, University of Texas Health Science Center at San Antonio School of Medicine, San Antonio, Texas.

Dr. Harris is assistant dean of student affairs and diversity and associate professor of psychiatry and behavioral sciences, Baylor College of Medicine, Houston, Texas.

Ms. Miller-Wasik is administrator, Office of Admissions, Baylor College of Medicine, Houston, Texas.

Dr. Thompson is distinguished professor of educational psychology and of library science, Texas A&M University, College Station, Texas, and adjunct professor of allied health sciences, Baylor College of Medicine, Houston, Texas.

Correspondence should be addressed to Dr. Eddins-Folensbee, University of Texas Health Science Center at San Antonio School of Medicine, Office of the Dean, Mail Code 7790, 7703 Floyd Curl Dr., San Antonio, TX 78229-3900; telephone: (210) 567-4430; fax: (210) 509-6962; e-mail: eddinsfolens@uthscsa.edu.

First published online February 22, 2012

Abstract

Purpose: To explore variations both in interview ratings data and in medical school admissions decisions when current medical students do and do not participate in interviewing applicants.

Method: The research team conducted this randomized controlled trial by performing identical analyses for each of six independent cohorts of applicants (n = 3,868) to Baylor College of Medicine for the academic years 2005–2006 through 2010–2011. A pair of randomly selected interviewers—either two faculty members or a faculty member and a student—interviewed each applicant in a one-on-one interview.

Results: Interviewer pairs randomly structured to include either two faculty members (n = 1,523) or one faculty member and one student (n = 2,345) produced ratings with similar means and similar within-pair homogeneity. As expected, the structure of the rater pairs was not predictive of final admissions decisions after the authors took into account Medical College Admission Test scores and grade point averages.

Conclusions: These results, showing that student involvement does not compromise the ratings of interviewed applicants, support the continued involvement of students in medical school admissions interviews.

Because many medical schools, particularly in North America, have very low rates of attrition, one could argue that the admissions procedure is the most important evaluation exercise conducted by a school.1

Today, nearly all medical degree (MD)-granting medical schools in the United States and Canada use applicant interviews as a key element of the admissions process. Indeed, Puryear and Lewis2 found that 61% of the 107 MD-granting U.S. medical schools responding to their survey viewed admissions interview data as the most important information used in selecting potential matriculants.

Although applicant interviews are widely used in medical schools' admissions procedures, views of the medical school interview range from very negative3–5 to very positive.6–8 Those who dislike the use of interviews question their reliability and validity, especially because some of the best results seem to come from a high number of interviews, which are time-prohibitive and not financially feasible.3–5 One detractor, Taylor,5 estimated that, given the extensive personnel expenses involved in conducting time-intensive interviews, the cost of the interviewers' time was (in 1990) at least $5,500,000 nationally, per year, across all U.S. and Canadian MD-granting medical schools. Taylor cited an earlier analogy that compared the applicant interview to a somewhat undesirable cat with, regrettably, more than even nine lives, and he suggested that “perhaps it is time to ask whether this cat should be allowed to rest in peace, or at least retire with dignity, having lived a long and occasionally useful life.”5

Notwithstanding some negative views, medical school admissions teams continue to use the interview widely. Admissions officials recognize that interviews can “serve four purposes: information gathering, decision making, verification of application data, and recruitment.”6 Similarly, Bardes and colleagues9 have argued that

the need to assess subjective suitability for medicine is why nearly all U.S. medical schools (unlike most law, business, and even divinity schools) interview applicants and why admissions committees devote so much energy and commit so many resources to this endeavor. And although the interview is imperfect, the person-to-person contact between a skilled interviewer and an applicant seems more likely … to provide vital information about the applicant's noncognitive attributes than would a Kaplan-coached examination in altruism.

Thus, despite some detractors, the interview has become an established, ubiquitous feature of the admissions process. Even so, the composition of interview panels and the structure and content of the interview process vary widely.10 For example, at most (but not all) schools, applicants meet two different interviewers in one-on-one settings, so that each applicant has two separate interviews,2 and some medical schools involve current students in conducting interviews whereas others do not.

In our view, the presence of medical students as interviewers communicates a message that a school respects students as rising professionals. Furthermore, according to previous research,10 students report “[feeling] particularly well placed to detect ‘prepared’ or insincere answers because they had themselves been through a similar process recently.” Student interviewers also note that they personally benefit. Koc and colleagues10 report that one of their study participants commented, “I gained insight into my own qualities and have become more confident in communication.”

Purpose

We conducted this current study to explore variations both in interview ratings data and in medical school admissions decisions when current medical students do and do not participate in interviewing applicants. Specifically, we sought to address three research questions:

1. Are the mean applicant ratings produced by pairs of interviewers who are both faculty members similar to the mean applicant ratings produced by pairs of interviewers involving one faculty member and one student?

2. Are the magnitudes of rater agreement similar when interviewer pairs involve two faculty members as opposed to one faculty member and one student?

3. Does the structure of the interviewer pair itself have a noteworthy effect on admissions decisions once cognitive data (i.e., Medical College Admission Test [MCAT] scores and grade point average [GPA]) are taken into account?

Method

Baylor College of Medicine is a medical school and research center located at the Texas Medical Center in Houston, Texas, with highly competitive admissions. Between 2005 and 2010, Baylor admitted an average of 176 medical students per year (about 27% of those who applied).

Each year, approximately 650 to 880 applicants receive an invitation to visit campus both to learn more about the school and to participate in interviews. Roughly 90% of the invited applicants participate in these interviews in a given year. The one-day campus visits, which occur on Fridays, include a presentation about the college of medicine, campus tours, and interviews. After the interviews, which end in the late afternoon, the admissions committee meets to discuss and evaluate the applicants. The admissions committee includes as voting members both faculty members and current students.

The new students who join the interview panels each year are selected by students who are still on campus and hold student leadership roles on the admissions committee (i.e., students [usually four] who have served on the committee for at least one full year and who help to plan much of the applicants' visits to campus). Only second-, third-, and fourth-year students in good standing are eligible to participate as interviewers. Students interested in becoming interviewers apply by completing an application form that requires both structured and unstructured answers; the admissions committee student leaders then interview all candidates for interviewer positions. Once selected, students who are not in their fourth year typically elect to continue serving as interviewers annually in their remaining years on campus. Students are not paid to serve as interviewers; rather, they consider being selected an honor, and they typically list their service on their curricula vitae.

Interview training is mandatory for new student interviewers as well as for new faculty interviewers. New faculty interviewers meet individually with the admissions dean, and new student interviewers meet with the admissions committee student leaders. All new and returning interviewers (faculty members and students) attend a mandatory workshop on interviewing dos and don'ts (lasting about two hours) that is given by the admissions dean and by the admissions committee chair and vice chairs.

During his or her campus visit, each applicant is interviewed by a pair of randomly assigned interviewers. Every interview is conducted one-on-one; thus, every applicant is interviewed twice. Each interviewer pair consists either of two faculty members or of one current student and one faculty member. A random, computer-driven process forms the interviewer pairs and assigns them to applicants. The random composition of the interviewer pairs across applicants thus allows us to create something akin to a randomized controlled trial: unless the structure of the pairs itself contributes to rating differences, the two interviewer pair types should generate similar data.
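The assignment software itself is not the focus of this article, but its logic is simple. Below is a minimal, purely illustrative Python sketch of such a computer-driven random pairing; the function name, the interviewer pools, and the 50/50 split between pair types are all hypothetical (our observed split across cohorts was roughly 60/40 in favor of faculty-student pairs).

```python
import random

def assign_interviewers(applicants, faculty, students,
                        p_faculty_pair=0.5, seed=None):
    """Randomly form an interviewer pair for each applicant: either two
    faculty members or one faculty member and one current student.
    (Hypothetical sketch; not the study's actual assignment software.)"""
    rng = random.Random(seed)
    assignments = {}
    for applicant in applicants:
        if rng.random() < p_faculty_pair:
            pair = tuple(rng.sample(faculty, 2))  # two distinct faculty members
        else:
            pair = (rng.choice(faculty), rng.choice(students))
        assignments[applicant] = pair
    return assignments

# Toy usage: six applicants drawn from tiny interviewer pools
print(assign_interviewers(
    applicants=[f"A{i}" for i in range(1, 7)],
    faculty=["FacX", "FacY", "FacZ"],
    students=["Stu1", "Stu2"],
    seed=42,
))
```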

Our data from the 2005 to 2010 admissions cycles include information for the 3,868 applicants who were assigned to interview separately either with two faculty members (n = 1,523) or with a faculty member and a current student (n = 2,345). The institutional review board of Baylor College of Medicine approved this study.

Using one-way analyses of variance (ANOVA), we compared the mean applicant ratings produced by pairs of interviewers who were both faculty members with the mean applicant ratings produced by pairs involving one faculty member and one student. Likewise, we used ANOVA to compare the within-pair differences in ratings, to determine whether two faculty interviewers agreed about an applicant to roughly the same degree as did a faculty interviewer and a student interviewer.
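For a two-group comparison, the one-way ANOVA F test is equivalent to an independent-samples t test, and the standardized mean difference is Cohen d, computed with a pooled SD. A minimal sketch of these computations in Python (using SciPy rather than the statistical package we actually used, and entirely made-up ratings):

```python
import numpy as np
from scipy import stats

def compare_groups(ratings_ff, ratings_fs):
    """One-way ANOVA comparing mean applicant ratings from
    faculty-faculty vs. faculty-student pairs, plus Cohen's d."""
    f_stat, p_value = stats.f_oneway(ratings_ff, ratings_fs)
    n1, n2 = len(ratings_ff), len(ratings_fs)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(ratings_ff, ddof=1)
                         + (n2 - 1) * np.var(ratings_fs, ddof=1))
                        / (n1 + n2 - 2))
    d = (np.mean(ratings_ff) - np.mean(ratings_fs)) / pooled_sd
    return f_stat, p_value, d

# Illustrative data only: two groups with nearly identical means
rng = np.random.default_rng(0)
ff = rng.normal(7.00, 1.0, 250)  # hypothetical faculty-faculty ratings
fs = rng.normal(7.05, 1.0, 390)  # hypothetical faculty-student ratings
print(compare_groups(ff, fs))    # expect small F, large P, tiny d
```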

Finally, we used predictive discriminant analysis (PDA)11 to determine the rate at which various factors (e.g., mean ratings, pair types) and combinations of factors predicted admission to Baylor College of Medicine from 2005–2006 through 2010–2011. A given PDA results in a “hit rate” indicating the percentage of applicants correctly predicted as having been accepted or not accepted. Our focus was on which applicant data most improved or hurt the hit rate above and beyond the hit rate achieved using the applicant cognitive variables of MCAT scores and GPA. Thus, we performed an all-possible-subsets analysis of the hit rates achieved by all possible combinations of our predictors, subject to the restriction that every predictor set include MCAT and GPA data. We conducted our analyses with the Statistical Package for the Social Sciences (SPSS; Version 6.0; Chicago, Illinois).
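PDA is closely related to linear discriminant classification, so the all-possible-subsets logic can be illustrated with scikit-learn's LinearDiscriminantAnalysis standing in for the SPSS routines we actually used. Everything in this sketch, from the variable names to the toy acceptance rule, is hypothetical:

```python
from itertools import combinations

import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def subset_hit_rates(columns, y, required=("mcat", "gpa"),
                     optional=("pair_mean", "pair_diff", "pair_type")):
    """Resubstitution hit rate (% of applicants classified correctly)
    for every predictor subset that includes the cognitive variables."""
    results = {}
    for k in range(len(optional) + 1):
        for extra in combinations(optional, k):
            names = list(required) + list(extra)
            X = np.column_stack([columns[n] for n in names])
            model = LinearDiscriminantAnalysis().fit(X, y)
            results[tuple(names)] = 100 * model.score(X, y)
    return results

# Entirely hypothetical data: 500 applicants, acceptance driven by
# cognitive scores plus the pair's mean interview rating
rng = np.random.default_rng(1)
n = 500
columns = {
    "mcat": rng.normal(33, 3, n),
    "gpa": rng.normal(3.7, 0.2, n),
    "pair_mean": rng.normal(7, 1, n),          # pair's mean rating
    "pair_diff": np.abs(rng.normal(0, 1, n)),  # within-pair disagreement
    "pair_type": rng.integers(0, 2, n).astype(float),  # 0 = fac-fac, 1 = fac-stu
}
y = (columns["mcat"] + 10 * columns["gpa"] + 2 * columns["pair_mean"]
     + rng.normal(0, 5, n)) > 86               # toy accept/not-accept rule

for names, rate in sorted(subset_hit_rates(columns, y).items()):
    print(names, f"{rate:.1f}%")
```

In data generated this way, adding pair_mean to MCAT and GPA should raise the hit rate, whereas adding pair_type or pair_diff should not, mirroring the pattern we report below.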

Results With Explanations

We first compared the mean applicant ratings produced by pairs of interviewers who were both faculty members with the mean applicant ratings produced by pairs involving one faculty member and one student. Table 1 presents the means and standard deviations (SDs) of the ratings for each interviewer pair type from 2005 through 2010, along with the P value associated with testing the difference in the two means for a given year and the standardized mean difference effect size (Cohen d) for each comparison.

[Table 1. Means, SDs, P values, and Cohen d values for applicant ratings, by interviewer pair type, 2005–2010.]

The results in Table 1 suggest that the two interviewer pair structures did not produce noteworthy systematic rating differences. In some years one pair structure yielded slightly higher mean interview ratings, whereas in other years the other did. Four of the six Cohen d values are essentially zero, meaning that the mean ratings were nearly identical, and most statisticians would consider even the remaining two d values (−0.11 and −0.12) very small.12 None of the mean differences was statistically significant at α = 0.05, and this result was replicated across all six years.

We also tested whether the structure or composition of the pairs affected the degree of agreement or disagreement in the ratings of applicants within the pairs. In other words, did two faculty members agree with each other about applicants more than, less than, or about the same as a faculty member and a student? Table 2 presents the means and the SDs of the differences in the ratings within the interviewer pairs for each year. Table 2 also presents the P value and the Cohen d values associated with these comparisons.
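Concretely, the within-pair analysis treats each applicant's two ratings as one observation of rater disagreement. A small illustrative sketch, again with made-up data and the same ANOVA approach as the earlier sketch:

```python
import numpy as np
from scipy import stats

# Toy ratings: each row holds the two ratings one applicant received
rng = np.random.default_rng(2)
pairs_ff = rng.normal(7.0, 1.0, (250, 2))  # hypothetical faculty-faculty pairs
pairs_fs = rng.normal(7.0, 1.0, (390, 2))  # hypothetical faculty-student pairs

# Within-pair disagreement: absolute difference between the two ratings
diff_ff = np.abs(pairs_ff[:, 0] - pairs_ff[:, 1])
diff_fs = np.abs(pairs_fs[:, 0] - pairs_fs[:, 1])

# Did one pair type disagree more than the other? One-way ANOVA:
print(stats.f_oneway(diff_ff, diff_fs))  # expect small F, large P
```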

[Table 2. Means, SDs, P values, and Cohen d values for within-pair rating differences, by interviewer pair type, 2005–2010.]

The results shown in Table 2 suggest that interviewer pair structure was not associated with a greater or lesser tendency for the raters in each pair to agree about applicants. Agreement between two faculty members was neither more heterogeneous nor more homogeneous than agreement between a faculty member and a student. Five of the six Cohen d values were near zero. The remaining d value (d = 0.19, for 2008) is statistically significant at α = 0.05 but still relatively small; we regard it as an outlier in our data, given that the other five effect sizes were both homogeneous and near zero.

Finally, we examined whether the structure of the pair influenced the ultimate product of the admissions process: the dichotomous decision either to accept or not to accept a given applicant. Table 3 presents the results of our PDA.11 It shows the hit rates for predicting the admissions decision (accept or not accept), first using only MCAT and GPA data, and then using all the possible combinations of three other predictor variables: (1) the mean applicant rating by a given interviewer pair, (2) the homogeneity of the ratings within given interviewer pairs (i.e., the difference in the two scores produced by a given pair of interviewers), and (3) the structure of the interviewer pair (i.e., either two faculty members or one faculty member and one student).

[Table 3. Hit rates for predicting admissions decisions from MCAT scores and GPA alone and in combination with interview-derived predictors, 2005–2010.]

Logically, because the interviewer ratings were explicitly used to inform admissions decisions, we would want and expect the interviewer pairs' mean ratings to predict actual decisions. However, we would not want the pair type itself to predict or affect admissions decisions. Thus, pair composition should make no practical difference to applicants, at least as far as the admissions decision itself is concerned.

The results in Table 3 show that the predictor variables that should affect admissions decisions, because the decisions explicitly consider them (MCAT scores, GPA, and interview ratings), were indeed related to decisions. As expected and desired, MCAT and GPA data alone achieved high hit rates, ranging from 56.5% to 78.1% (mean = 69.95%, SD = 7.41). The next most useful predictor was the pair's mean interview rating of the applicant, which on average added roughly 8 percentage points to the hit rate in each of the six years we studied.

Conversely, when we analyzed the two remaining predictor variables (i.e., pair structure and homogeneity of ratings within pairs) in all their various combinations, the hit rates either stayed roughly the same or worsened, indicating—as hoped and expected—that admissions decisions were not associated with the composition of pairs of interviewers. Again, these findings were replicated for each of the six years we studied.

Discussion

We conducted identical analyses across six independent cohorts of admissions decisions for all applicants to Baylor College of Medicine for the academic years 2005–2006 through 2010–2011 in an effort to examine the effect of involving current medical students in applicant interviews.

Our findings suggest that scores from interviewer pairs consisting of a faculty member and a student should be regarded with no more skepticism than scores from interviewer pairs consisting of two faculty members. This finding is consistent both with those of Gelmann and Stewart13 and with those of Elam and Johnson.7 Gelmann and Stewart13 studied applicants' perceptions of student interviewers and found that “in no category investigated were medical students perceived to be inferior to faculty members as interviewers.” Elam and Johnson7 studied application data and reported results that “allay a concern voiced at [their] institution that student members of the admission committee might look at candidates for admission differently than faculty.” Neither of these studies, however, directly compared the ratings produced by student and faculty interviewers of medical school applicants.

The major strength of our study was our use of six independent datasets, which provided stable and replicable14 results. The major limitation is that the study was conducted at a single medical school, so caution must be exercised in extrapolating our results to other schools. Indeed, one opportunity for further research would be to conduct similar randomized controlled trials in other settings. Another would be to examine whether the race, sex, or other demographic characteristics of student interviewers affect the interview.

Our findings are encouraging for the continued participation of students in interviewing applicants. Student interviewers are essentially cost-free, whereas faculty interviewers, especially clinicians, must reduce the time they spend in patient care in order to interview, which results in a net loss for the university. In addition, previous studies of student interviewers have shown that interviewees often express appreciation for the presence of current students.10 Albanese and colleagues15 have noted that the interview is a chance for an institution to place a human touch on a highly stressful, high-stakes decision process. We believe that this “human touch” can be improved, without downside costs in ratings quality, by including current students as interviewers. Compared with interviewer pairs of two faculty members, interviewer pairs including one student and one faculty member do not produce different mean ratings, do not produce more heterogeneous ratings within the pair, and do not ultimately affect admissions decisions. Thus, including current students in medical school applicant interviews adds value to the process: student participation sends a clear message to applicants that students are respected members of the medical school community,10 affords an opportunity for applicants to connect psychologically with a school and its program,15 and provides opportunities for professional growth for the student interviewers, who themselves learn from the interview process.10

Acknowledgments:

None.

References

1. Eva KW, Rosenfeld J, Reiter HI, Norman GR. An admissions OSCE: The multiple mini-interview. Med Educ. 2004;38:314–326.

2. Puryear JB, Lewis LA. Description of the interview process in selecting students for admission to U.S. medical schools. J Med Educ. 1981;56:881–885.

3. DeVaul RA, Jervey F, Chappell JA, Carver P, Short B, O'Keefe S. Medical school performance of initially rejected students. JAMA. 1987;257:47–51.

4. Smith SR, Vivier PM, Blain ALB. A comparison of the first-year medical school performance of students admitted with and without interviews. J Med Educ. 1986;61:404–406.

5. Taylor TC. The interview: One more life? Acad Med. 1990;65:177–178.

6. Edwards JC, Johnson EK, Molidor JB. The interview in the admission process. Acad Med. 1990;65:167–177.

7. Elam CL, Johnson MMS. An analysis of admission committee voting patterns. Acad Med. 1992;67(10 suppl):S72–S75.

8. Meredith KE, Dunlap MR, Baker HH. Subjective and objective admissions factors as predictors of clerkship performance. J Med Educ. 1982;57:743–751.

9. Bardes CL, Best PC, Kremer SJ, Dienstag JL. Medical school admissions and noncognitive testing: Some open questions. Acad Med. 2009;84:1360–1363.

10. Koc T, Katona C, Rees PJ. Contribution of medical students to admission interviews. Med Educ. 2008;42:315–321.

11. Huberty CJ. Applied Discriminant Analysis. New York, NY: Wiley and Sons; 1994.

12. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum Associates; 1988.

13. Gelmann EP, Stewart JP. Faculty and students as admissions interviewers: Results of a questionnaire given to applicants. J Med Educ. 1975;50:626–628.

14. Rosenthal R, Rosnow RL. Essentials of Behavioral Research: Methods and Data Analysis. New York, NY: McGraw-Hill; 1984.

15. Albanese MA, Snow MH, Skochelak SE, Huggett KN, Farrell PM. Assessing personal qualities in medical school admissions. Acad Med. 2003;78:313–321.

Funding/Support:

None.

Other disclosures:

None.

Ethical approval:

The institutional review board of Baylor College of Medicine approved this study.

© 2012 Association of American Medical Colleges
