Jena, Anupam B. MD, PhD; Arora, Vineet M. MD, MAPP; Hauer, Karen E. MD; Durning, Steven MD, PhD; Borges, Nicole PhD; Oriol, Nancy MD; Elnicki, D. Michael MD; Fagan, Mark J. MD; Harrell, Heather E. MD; Torre, Dario MD; Prochaska, Meryl; Meltzer, David O. MD, PhD; Reddy, Shalini MD
Dr. Jena was a third-year resident, Department of Medicine, Massachusetts General Hospital, at the time of writing. He is now assistant professor of health care policy, Harvard Medical School, and assistant physician, Department of Medicine, Massachusetts General Hospital, Boston, Massachusetts.
Dr. Arora is associate professor of medicine, assistant dean for scholarship and discovery, and associate program director, Internal Medicine Residency Program, University of Chicago Pritzker School of Medicine, Chicago, Illinois.
Dr. Hauer is professor of medicine and director of internal medicine clerkships, University of California, San Francisco, School of Medicine, San Francisco, California.
Dr. Durning is professor of medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland.
Dr. Borges is professor, Department of Community Health, and assistant dean of medical education research and evaluation, Wright State University Boonshoft School of Medicine, Dayton, Ohio.
Dr. Oriol is dean for students, Harvard Medical School, Boston, Massachusetts.
Dr. Elnicki is professor of medicine and director, Combined Ambulatory Medicine Pediatrics Clerkship, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania.
Dr. Fagan is professor of medicine and internal medicine clerkship director, Warren Alpert Medical School of Brown University, Providence, Rhode Island.
Dr. Harrell is associate professor of medicine and internal medicine clerkship director, University of Florida College of Medicine, Gainesville, Florida.
Dr. Torre is associate professor of medicine and associate program director, Internal Medicine Residency Program, Drexel University College of Medicine, Philadelphia, Pennsylvania.
Ms. Prochaska is a second-year student, Loyola University Chicago School of Law, Chicago, Illinois; at the time of writing, she was project manager, Section of General Internal Medicine, University of Chicago, Chicago, Illinois.
Dr. Meltzer is associate professor of medicine and chief, Section of Hospital Medicine, University of Chicago Pritzker School of Medicine, Chicago, Illinois.
Dr. Reddy is associate professor of medicine and associate dean of student programs and professional development, University of Chicago Pritzker School of Medicine, Chicago, Illinois.
Correspondence should be addressed to Dr. Jena, Department of Health Care Policy, Harvard Medical School, 180 Longwood Ave., Boston, MA 02115-5899; telephone: (773) 209-8005; e-mail: firstname.lastname@example.org.
Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A105.
In 2010, more than 27,000 U.S. senior medical students applied to first- and second-year residency positions in the National Resident Matching Program (NRMP) Main Residency Match.1 During the matching process each year, many residency applicants’ formal interviews with residency programs are followed by informal communications. Therefore, the NRMP’s Match Participation Agreement for Applicants and Programs (MPA) includes language that restricts communications between these parties. For the 2010 Match, the MPA specified:
One of the purposes of the Matching Program is to allow both applicants and programs to make selection decisions on a uniform schedule and without coercion or undue or unwarranted pressure. Both applicants and programs may express their interest in each other; however, they shall not solicit verbal or written statements implying a commitment.2
Prior studies have suggested that, despite these restrictions, explicit violations of MPA terms may occur and may involve communications between programs and applicants.3–12
Postinterview communications appear to have strong effects on applicants’ ranking decisions. In studies of pediatrics- and surgery-bound students, 10% to 35% of respondents reported altering their rank order lists in response to postinterview communications from programs.6,8 A multicenter study exploring communications during the 2001 Match noted that 55% of the responding students felt pressured to offer reassurances to residency programs.3
Since the 2001 Match, the number of residency program applicants has increased rapidly, in part because of the rising numbers of graduates of MD-granting, DO-granting, and international medical schools. This trend, along with slower growth in residency positions, has resulted in an increasingly competitive Match.1 Residency applicants face intense competition to match to their program of choice, so it is important to understand the prevalence and nature of communications that occur between programs and applicants in today’s environment in order to help students and programs interpret one another’s statements of interest.
We thus conducted a multischool study to examine the frequency and nature of postinterview communications as reported by U.S. senior medical students who participated in the 2010 Match. We explored whether students considered such communications to be stressful and to have effects on their ranking decisions. We also investigated the ways in which applicant and program behaviors varied by applicant characteristics.
After the 2010 Match, we conducted a cross-sectional study of all senior medical students at a convenience sample of seven U.S. medical schools: Harvard Medical School; University of California, San Francisco (UCSF), School of Medicine; University of Chicago Pritzker School of Medicine; University of Florida College of Medicine; University of Pittsburgh School of Medicine; the Warren Alpert Medical School of Brown University; and Wright State University Boonshoft School of Medicine.
The seven schools varied by geographical location, public/private status, and research funding level. Two of us (A.J., S.R.) recruited investigators at five schools on the basis of a prior collaboration in a multi-institutional study of factors influencing career choice in internal medicine.13 Investigators at the two additional sites were recruited on the basis of their interest in participating in this study.
This study was approved by the institutional review board at each participating medical school. Completion of the survey was considered implied consent of study participants at UCSF School of Medicine, University of Florida College of Medicine, and Wright State University Boonshoft School of Medicine. The study was considered exempt at the remaining sites and did not require written consent of participants.
Survey development and content
We developed a survey to explore postinterview communications between applicants and residency programs during the Match. We tested it for clarity with first-year residents at Massachusetts General Hospital and the University of Chicago Medical Center and determined that no adjustments were needed.
The survey consisted of 27 closed- and open-ended questions about student characteristics (demographics and academic record), application characteristics, and postinterview communications between residency programs and applicants (see Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A105). The content of the survey, which took students about 10 to 15 minutes to complete, is detailed below.
Student and application characteristics. Students were asked to self-report their gender and underrepresented minority status as well as to indicate whether they submitted their rank order list jointly with another applicant (i.e., whether they participated in the Match as a couple). They were also asked about any advanced degrees in addition to an MD that they obtained during medical school, their clerkship grade in the specialty applied to, and membership in the Alpha Omega Alpha (AOA) honor society. (AOA membership is limited to top academic achievers at each medical school.) Questions about application characteristics addressed main specialty choice (12 categories were provided), the number of applications submitted, the number of interview offers received, the number of interviews attended, and the matched program’s position on the applicant’s rank order list.
Postinterview communications. Fifteen questions addressed postinterview communication between residency programs and applicants. Students were asked whether they were contacted by a program after interviewing (and, if so, by whom) and how many programs asked them where the program would be ranked (a direct violation of the MPA). Students also were asked whether and how often programs explicitly stated any of the following: that the applicant would “fit well” into the program, be “ranked highly” at the program, or be “ranked to match” at the program. These survey terms were based on wording suggested by first-year residents at Massachusetts General Hospital and the University of Chicago Medical Center before survey development as well as on previous reports that our student advisees made regarding Match-related communications. These terms have also been used by students on public Web forums to describe program communications.14 It is important to note that although these statements suggest programs’ varying levels of interest in applicants, they do not constitute direct violations of the MPA because they do not express binding commitments.
In addition, students were asked whether they altered their rank order list on the basis of communications with programs, what types of statements they made to programs (e.g., telling a program or programs that it/they would be ranked first or ranked highly), and whether anyone had recommended notifying their first-choice program (and, if so, who made that recommendation). The survey also included questions about the outcome and overall experience of the Match. Students were asked whether a program had told them, explicitly or implicitly, that they would match there, yet they did not match despite ranking that program first. Students were also asked whether communicating with programs was helpful and to rate on a Likert scale the extent to which deciding whether and/or how to communicate with programs after interviews was stressful.
During March and April 2010, within four weeks of Match Day, we e-mailed the senior medical students at each of the seven participating schools to invite them to participate in the survey. We sent nonresponding students up to three follow-up e-mails over the subsequent six weeks, with final e-mails sent in May 2010. No compensation was provided, and participation was voluntary. Surveys were completed anonymously at an online survey site (Survey Monkey, Palo Alto, California). Site investigators were provided with overall response rates for their respective site, but no other information (except for Wright State University Boonshoft School of Medicine, which received aggregate site data in accordance with its IRB approval).
We calculated descriptive statistics of student characteristics, application characteristics, and postinterview communications. The association of main specialty choice and application characteristics with communications was assessed using univariate analyses. We divided the 12 main specialty choices into more competitive specialties (dermatology, ophthalmology, radiology, radiation oncology, and surgical specialties) and less competitive specialties on the basis of prior studies15 and NRMP data16 that defined specialty competitiveness according to applicant Match rates (greater competitiveness determined by lower Match rates). Surgical specialties included neurosurgery, orthopedic surgery, otolaryngology, plastic surgery, urology, and vascular surgery. Other student characteristics assessed were honors in specialty clerkship, gender, and self-identified underrepresented minority status. We used five characteristics of postinterview communications as outcomes: whether an applicant was contacted by a program, asked where he or she would rank a program, told he or she would “fit well,” told he or she would be “ranked highly,” or told he or she would be “ranked to match.” For the purposes of data analysis, program communications were noted to be specific if the program explicitly told the applicant that he or she would “match at their program if you wanted to” (i.e., be “ranked to match”). Program communications were considered nonspecific if the applicant received feedback that he or she would either “fit well” into the program or be “ranked highly” at the program. Differences in communications between specialties and application characteristics were compared with χ2 analysis at the P < .05 level of significance.
We estimated a generalized linear model with binomial family, logarithmic link function, and robust error variances to determine the independent association of student demographic and application characteristics with each of the communication outcomes described, with results reported as relative risks (RRs). Because applicants with greater numbers of interviews would be more likely to experience postinterview communications, we adjusted for the number of programs at which respondents interviewed. To ensure that our results would be easy to interpret for faculty advising students who are entering the Match, most analyses were performed using all respondents as the denominator (any exceptions are noted in the Results). P values and 95% confidence intervals (CIs) were reported. We used Stata version 9 (StataCorp, College Station, Texas) for all analyses.
Of the 827 U.S. senior medical students invited to participate in the survey, 564 (68.2%) responded. Five of the seven participating schools had response rates exceeding 60%, and three schools had rates ≥ 80% (Table 1).
Student and application characteristics
Among the 564 respondents, 270 (47.9%) were male, 94 (16.7%) self-reported as underrepresented minorities, 97 (17.2%) reported being members of AOA, 94 (16.7%) reported obtaining an advanced degree in addition to an MD, and 279 (49.5%) reported receiving an honors or equivalent grade in the clerkship in the specialty to which they applied (Table 2). Gender, underrepresented minority status, AOA membership, and advanced degree status were consistent with national data.16,17
Students most frequently reported applying to internal medicine residencies (n = 127; 22.5%), followed by programs in a surgical specialty (n = 62; 11.0%) and pediatrics (n = 54; 9.6%). These rates are similar to application rates by specialty for U.S. senior medical students in 2010.1,18,19 Overall, students reported applying to a mean of 24.1 (standard deviation [SD] 15.2) programs and interviewing at a mean of 10.3 (SD 3.7) programs. Median numbers of applications and interviews were 19 and 10, respectively, which are consistent with national estimates from the 2009 NRMP Match.16 More than half of the respondents (n = 318; 56.4%) reported matching at their top choice, which is consistent with the 2010 Match’s national average of 52.7%.1
Characteristics of postinterview communications with programs
The majority of the responding students (n = 487; 86.4%) reported being contacted by at least one residency program after an interview (Table 3). The survey did not distinguish between contacts initiated by the applicant and those initiated by the residency program. Among the 564 respondents, students most frequently reported being contacted by program directors (n = 440; 78.0%), residents from the program (n = 306; 54.3%), or other faculty interviewers (n = 227; 40.2%). These communications were predominantly via e-mail: Applicants who had communicated with programs reported receiving e-mail messages from a mean (SD) of 3.3 (2.3) program directors, 1.9 (2.2) residents, and 1.8 (3.2) other faculty interviewers. In contrast, they reported receiving phone calls from a mean (SD) of 1.03 (1.4) program directors, 0.6 (1.1) residents, and 0.2 (0.6) other faculty interviewers.
Statements made to applicants by programs. Most of the 564 respondents reported receiving nonspecific feedback from programs on where they would be ranked: 430 (76.2%) reported being told they would “fit well” in a program, and 298 (52.8%) indicated they were told that they would be “ranked highly.” Specific feedback was also common: 195 (34.6%) reported being told that they would be “ranked to match.” None of these communications constituted direct violations of the MPA. Among the 487 respondents who indicated that they had communicated with programs, the mean (SD) number of programs that reportedly told the applicant that he or she would “fit well” was 4.1 (2.5), that he or she would be “ranked highly” was 2.9 (2.2), and that he or she would be “ranked to match” was 2.1 (1.6). With each applicant interviewing at a mean of 10.3 programs, the percentages of programs at which an applicant interviewed that told the applicant that he or she would “fit well,” be “ranked highly,” or be “ranked to match” were, respectively, 39.8%, 28.2%, and 20.3%.
Twenty-seven (4.8%) of the 564 respondents reported that a program had asked where it would be ranked on the applicant’s rank order list, which is a violation of the MPA. Among these students, the mean (SD) number of programs asking for this information was 1.6 (1.3).
Communication with programs was less common among respondents who did not match: 9 (47.4%) of these 19 applicants reported communicating with programs. Of the 9 applicants, 8 (88.9%) reported being told by a program that they would “fit well,” and 5 (55.6%) reported being told that they would be “ranked highly.” None reported being told that they would be “ranked to match.”
Statements made to programs by applicants. Students also indicated that they had communicated with programs about where they would rank them. Among the 564 respondents, 355 (62.9%) reported informing a single program that it would be ranked first. Respondents specified that the recommendation to notify their top choice came mainly from fellow applicants (n = 213; 37.8%), residents in the specialty (n = 190; 33.7%), and faculty advisors not in the applicant’s intended specialty (n = 163; 28.9%). Only 6 (1.1%) students reported telling multiple programs that each would be ranked first. However, nonspecific statements were common, with 338 (59.9%) students reporting that they told multiple programs that each would be ranked “highly.”
Effects of communications. Postinterview communications were reported to have important effects on applicants’ behavior and their overall experience with the Match. Almost one-quarter of the 564 respondents (n = 132; 23.4%) reported altering their rank order list on the basis of communications from programs. Only 7 (1.2%) students reported not matching at a program despite being told that they would be “ranked to match” and ranking that program first, whereas 105 (18.6%) reported not matching at a program despite feeling assured by postinterview communications that they would match there and ranking the program first. Overall, 368 (65.2%) students “agreed” or “strongly agreed” that deciding whether and/or how to communicate with programs was stressful. Despite the stress, 269 (55.2%) of the 487 students who reported communicating with programs “agreed” or “strongly agreed” that this communication was helpful in making their decision. Even among the 105 students who reported that they did not match at the program they ranked first despite feeling assured that they would, 41 (39.1%) “agreed” or “strongly agreed” that the communication was helpful in making their decision.
Association with main specialty choice and applicant characteristics
In univariate analysis (Table 4), students who applied to programs in the more competitive specialties were significantly less likely than those who applied to less competitive programs to report being contacted by a residency program (P < .001). Students who received an honors grade in their specialty clerkship were significantly more likely than those without an honors grade to report being contacted (P < .001). Female applicants were significantly more likely than male applicants to report being contacted (P = .022). Reported contact did not vary significantly by self-reported underrepresented minority status (P = .581).
There was no significant difference by specialty, honors grade in the specialty clerkship, gender, or underrepresented minority status in whether a program asked where an applicant would rank that program. However, compared with those who applied to less competitive specialties, respondents who applied to more competitive specialties were significantly less likely to report being told that they would be “ranked to match” (P = .002). Students who received honors in their specialty clerkship were significantly more likely than those without an honors grade to report being told that they would be “ranked to match” (P < .001). With the exception of female students being significantly more likely than male students to report being told that they would “fit well” (P = .007), neither gender nor underrepresented minority status was significantly associated with the probability that an applicant was ever told that she or he would be “ranked highly” or “ranked to match.”
Multivariate analysis of predictors of postinterview communications (see Table 5 for RRs and 95% CIs) demonstrated that students applying to more competitive specialties were less likely to report being contacted by a residency program or to report being told by a program that they would be ranked highly. Students with an honors grade in the specialty clerkship or an additional advanced degree were more likely to report being contacted. Specialty clerkship honors grade, AOA membership, and an additional advanced degree were all significantly associated with reporting being told that one would be ranked “highly.” Honors in the specialty clerkship and AOA membership were also significantly associated with reporting being told by a program that one would be “ranked to match.” Self-reported underrepresented minority status and gender were not statistically significantly independently associated with Match communication outcomes. In each regression model, the Pearson chi-square statistic to assess goodness of fit of the model was less than the P = .05 critical value for a chi-square with 558 degrees of freedom.
Two of the seven schools in this study had response rates < 60%. Responses from those schools could introduce nonresponse bias, so we repeated our analyses excluding data from the two schools. Our results were unchanged: 398 (86.9%) of the 458 respondents at the other five schools reported communicating with programs after interviews, 352 (76.9%) reported being told they would “fit well,” 241 (52.6%) reported being told they would be ranked “highly,” and 166 (36.2%) reported being told they would be “ranked to match.” The multivariate analysis results showed that applicants were less likely to report being told they would be “ranked to match” if they applied to a more competitive specialty (RR 0.76; 95% CI 0.52–0.95). Applicants were more likely to be told they would be “ranked to match” if they received honors in the specialty clerkship (RR 1.44; 95% CI 1.10–1.88) or were members of AOA (RR 1.78; 95% CI 1.40–2.27).
In addition to this exclusion strategy for addressing nonresponse bias, our results were unaffected when we weighted responses by the inverse of each school’s response rate to account for differential response rates. To address an additional potential bias arising from school-specific characteristics, we repeated our multivariable analysis including an indicator variable for each school, which did not change our results. Finally, our results were unaffected when we redid our analyses using as the denominator only the number of students who reported communicating with programs.
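The inverse-response-rate weighting mentioned above can be illustrated with a toy example. This sketch uses two invented schools with made-up response rates, not the study’s actual data; the idea is that each respondent stands in for 1 / (school response rate) invited students, so low-response schools are upweighted:

```python
import pandas as pd

# Hypothetical respondents from two schools (invented data).
df = pd.DataFrame({
    "school":    ["A", "A", "A", "B"],
    "contacted": [1,    1,   0,   1],
})
response_rate = {"A": 0.75, "B": 0.50}  # invented school response rates

# Each respondent represents 1 / (response rate) invited students.
df["w"] = 1.0 / df["school"].map(response_rate)

# Weighted estimate of the proportion contacted vs. the raw estimate.
weighted = (df["w"] * df["contacted"]).sum() / df["w"].sum()
unweighted = df["contacted"].mean()
```

Here school B’s single respondent carries weight 2.0 against 1.33 for each school A respondent, nudging the weighted proportion (7/9 ≈ 0.778) above the unweighted one (0.75). If the weighted and unweighted estimates agree, as the authors report, nonresponse is unlikely to be driving the results.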
This multicenter study provides a snapshot of the state of postinterview communications between programs and applicants during the 2010 Match, almost a decade after Miller et al3 raised awareness of the challenges associated with communications during the 2001 Match. Although it is encouraging that students in our study reported fewer overt Match violations than did students in an earlier study,3 we found that students continued to report communicating with residency programs and to indicate that deciding whether and/or how to communicate with programs was stressful. In addition, this study is the first to examine specific terminology used in postinterview communications.
Our finding that few students reported being asked where a program would be ranked suggests that this particular MPA stipulation is being followed more closely than may have been the case in the past. It is noteworthy, however, that both specific and nonspecific communications were reported to be prevalent. Although postinterview communications may help applicants and programs gauge each other’s interest, both types of statements about ranking preferences can be misinterpreted by applicants who may view them as signals that they will match at certain programs. In our study, almost 20% of respondents reported not matching at their first-ranked program despite feeling assured by postinterview communications that they would, possibly reflecting applicants’ wishes to interpret such communications optimistically. It is also possible that residency program faculty, having participated in multiple Matches, may interpret applicants’ communications of interest with more skepticism than applicants apply to programs’ statements.5 We recommend that faculty advisors caution students that statements of interest by residency programs are common and should not be interpreted as binding or implicit commitments.
There are several reasons why programs and applicants may make specific or nonspecific statements of interest to one another while avoiding direct violations of the MPA. In our experience, programs that must move farther down their rank list of applicants to fill their openings may be viewed as having an unsuccessful Match; aggressive lobbying of applicants by programs may be a by-product of this motivation.20,21 Reports of more communications from programs in less competitive specialties may reflect increased competition among programs for applicants in these specialties. Applicants may have similar incentives; if they believe that communicating interest to a program will increase the likelihood of matching there, they may inform multiple programs that each will be “ranked highly.” It is reassuring that only 1% of students in our study reported notifying multiple programs that each would be ranked first.
Our study has limitations. Answers to survey questions are subjective and cannot be verified definitively. Nonresponse bias may be present, given institutions’ varying response rates; our results, however, were unchanged when we excluded the two schools with response rates < 60%. The U.S. senior medical students who responded to our survey may not be representative of all U.S. senior medical students, particularly given that our invited sample of 827 students comprised approximately 3.1% of the U.S. seniors who submitted rank order lists in the NRMP Main Residency Match in 2010. However, our sample of schools was diverse. In addition, respondents’ specialty choices and the percentage matching to their first-choice program mirrored those of applicants across the United States.1 Despite these similarities, the percentage of applicants in our survey who did not match (3.4%) was slightly lower than the average among U.S. applicants in the 2010 Match (6.7%).1 Further, survey invitations were distributed shortly after Match Day, so it is possible that applicants’ emotional responses to their Match results may have influenced their responses.
Our survey did not inquire about the number of programs that applicants ranked. Students who did not match may have ranked fewer programs; if so, the decision to rank fewer programs may have been made partly on the basis of postinterview communications from one or more programs. This subset of students would represent a group whose ability to match may have improved if they had not misinterpreted program communications and had ranked a greater number of programs.
Our results include responses from students who matched into programs in urology and ophthalmology, two fields in which matching occurs outside the NRMP. We could not exclude these respondents from the analysis because the specialty options from which students chose were broadly combined to ensure anonymity for respondents and ease of survey taking. The inclusion of ophthalmology and urology applicants should not affect our quantitative results, however. In 2010, there were 620 U.S. applicants to the national ophthalmology match,18 337 U.S. applicants to the national urology match,19 and 27,078 U.S. senior applicants to postgraduate year 1 and 2 positions in the NRMP Match.1 These numbers imply that in our sample of 564 respondents, at most 8 respondents applied to urology and 14 applied to ophthalmology, together accounting for no more than 3.9% of our sample.
Our survey also did not include questions about other Match communications violations that can occur. For example, students were not asked whether programs suggested that ranking by the program was contingent on the applicant’s providing a statement indicating rank preferences or whether programs required applicants to disclose the names of other programs to which they applied.22 Also, we did not ask students to report whether they or the program had initiated postinterview communications. Finally, our study did not consider the impact of applicants who acquired positions outside the Match or the potential implications of the NRMP’s “all-in” policy, which will require any program participating in the 2013 Match to include all first- and second-year residency positions in the Match.23
One way to resolve the issues discussed above would be to ban all postinterview communications between applicants and programs during the Match. Eliminating such communications could reduce applicant stress and program workload, but this solution is neither optimal nor realistic. Although most applicants reported that deciding to communicate with programs was stressful, nearly half felt that such communications were helpful, and a substantial portion reported being influenced by these communications when making their final ranking decisions. Postinterview communications may help programs gauge applicants’ interest and vice versa, which may help both parties find the right “fit.”
A second possible solution is to create a centralized system for communications between programs and applicants. For example, a central notification process that allowed applicants to signal their interest to a limited number of programs could ensure that the expressed interest is credible. A similar system has been implemented successfully in the economics job market.24 In that market, doctoral students searching for academic appointments apply for positions at a broad range of universities and have the option of formally notifying up to two potential employers that they are especially interested in those organizations. These signals are used to help break ties for interview slots or to offer interviews to applicants whom organizations previously considered less likely than others to be interested.
In summary, our study provides reassurance that direct violations of the MPA’s ban on asking applicants where they will rank a program are infrequent. As NRMP policies, including the Supplemental Offer and Acceptance Program25 and the “all-in” policy,23 take effect and more applicants enter the Match, it will be important to continue to monitor the nature of postinterview communications and to recognize the possibility that programs and students may misinterpret each other’s statements of interest during the Match.
Funding/Support: This study was funded by the Department of Medicine at the University of Chicago Pritzker School of Medicine.
Other disclosures: Dr. Arora reports receiving funding from the Accreditation Council for Graduate Medical Education.
Ethical approval: Ethical approval or exemption was granted by the institutional review boards at Harvard Medical School; University of California, San Francisco, School of Medicine; University of Chicago Pritzker School of Medicine; University of Florida College of Medicine; University of Pittsburgh School of Medicine; the Warren Alpert Medical School of Brown University; and Wright State University Boonshoft School of Medicine.