
Selection Criteria for Residency: Results of a National Program Directors Survey

Green, Marianne MD; Jones, Paul MD; Thomas, John X. Jr PhD

Academic Medicine 84(3):p 362-367, March 2009. | DOI: 10.1097/ACM.0b013e3181970c6b


Medical student educators are aware of the anxieties and uncertainties facing students who are applying for residency positions. The National Resident Matching Program (NRMP)/Association of American Medical Colleges (AAMC) publication, Charting Outcomes in the Match,1 attempts to provide information about variables that predict students' success in matching to their preferred specialties. The variables included in that analysis were the number of programs a student ranks, Alpha Omega Alpha (AOA) membership, United States Medical Licensing Examination (USMLE) Step 1 and Step 2 scores, and research experience. There is little current information about how residency program directors rate the relative importance of additional available student data, such as preclinical grades, the Medical Student Performance Evaluation (MSPE, formerly the dean's letter), the USMLE Step 2 Clinical Skills (CS) examination, and clinical grades. Myths and rumors abound regarding what program directors and their selection committees really consider important in applicants.

The most recent comprehensive survey of program directors’ residency selection criteria was reported by Wagoner and Suriano2 in Academic Medicine in 1999. In this study, questionnaires were sent to 1,200 of the 3,494 program directors (approximately 34%) in 14 specialties. Wagoner and Suriano found that some selection criteria were emphasized more heavily than others and that the emphasis varied according to the competitiveness of the specialty. Several smaller surveys exploring selection criteria within specific specialties also have been published.3–13

Since publication of these earlier studies, additional information has become available to residency program directors and selection committees as they evaluate medical student applications to their programs. The implementation of the National Board of Medical Examiners (NBME) USMLE Step 2 CS examination in 2004 offers the potential to validate the clinical skills of applicants. Furthermore, the USMLE Step 2 Clinical Knowledge (CK) examination is now computerized and can be taken at any time, so scores are available to program directors at the time of application. The AAMC's recommended template for the MSPE has standardized the traditional dean's letter. Moreover, the competitiveness of many specialties has changed since 1999.

The primary purpose of this study was to assess residency program directors’ opinions about the relative importance of various selection criteria in their specialties in 2006. We extended our analysis to other specialties that were not part of previous reports, surveying a total of 21 specialties. With this study, we hope to provide an updated and more detailed perspective of the residency selection process and to highlight possible misperceptions that may affect student advising for residency application.


In the spring of 2006, we used the AMA’s Fellowship and Residency Electronic Interactive Database (FREIDA) to identify U.S. residency program directors. The FREIDA database listed a total of 2,980 university hospital and university-affiliated community hospital residency programs in 21 selected specialties. The program director’s direct contact information was available for 2,528 residency programs, representing approximately 85% of selected programs in the FREIDA database. We sent questionnaires both electronically and via surface mail to the 2,528 identified program directors. Program directors were able to submit responses using either method. An e-mail reminder that included a link to the survey was sent six weeks after the initial distribution.

The questionnaire originally developed by Wagoner and Suriano2 in 1996 was expanded (with permission) for use in this study. It divided residency selection criteria into five categories: Academic Criteria (14 items), Extracurricular Activities (3 items), Supporting Information (9 items), Issues of Concern (14 items), and Other (6 items). In addition, we collected demographic information about the residency programs. Responses to questionnaire items were recorded on a five-point Likert scale (5 = critical, 1 = unimportant; or 5 = agree, 1 = disagree).

We calculated mean values for each question within and across all specialties. The mean numeric scores assigned to item responses were used to create rank orders within three categories: Academic Criteria, Extracurricular Activities, and Supporting Information.
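The mean-and-rank computation described above can be sketched as follows. This is an illustrative example, not the authors' analysis code; the criterion names and Likert responses are hypothetical.

```python
# Illustrative sketch: mean Likert ratings and rank orders per criterion.
# The criteria and responses below are hypothetical, for demonstration only.
import statistics

# Hypothetical 5-point Likert responses (5 = critical, 1 = unimportant),
# keyed by selection criterion.
responses = {
    "Grades in required clerkships": [5, 5, 4, 5, 4],
    "USMLE Step 1 score": [4, 5, 4, 4, 4],
    "Grades in preclinical courses": [3, 2, 3, 2, 3],
}

# Mean score per criterion.
means = {item: statistics.mean(scores) for item, scores in responses.items()}

# Rank criteria from highest to lowest mean rating (rank 1 = most important).
ranked = sorted(means, key=means.get, reverse=True)
for rank, item in enumerate(ranked, start=1):
    print(f"{rank}. {item}: mean = {means[item]:.2f}")
```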

Responses from the surveys returned by mail and those completed on the Internet were combined before analysis. The data were sorted by specialty and residency program location (university hospital or university-affiliated community hospital). Responses from university hospitals and university-affiliated community hospitals were combined because initial statistical analyses revealed no differences between the two groups.

Analysis of variance was used to determine whether significant differences existed across specialties for each criterion and within a specialty for different criteria. When significant differences were found, we calculated the difference between the two items of interest for each respondent and used a one-sample t test to determine whether the mean difference was different from zero. The P value required for significance was adjusted for multiple comparisons (Bonferroni correction). In a similar fashion, the USMLE Step 1 and Step 2 CK data were analyzed using ANOVA, followed by t tests to isolate differences among the three groups of specialties defined by their relative competitiveness.
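The paired-difference test and Bonferroni adjustment described above can be sketched in a few lines. This is an illustrative sketch, not the authors' code; the per-respondent differences and the number of comparisons are hypothetical.

```python
# Illustrative sketch of the one-sample t test on paired differences,
# with a Bonferroni-adjusted significance threshold. All data are hypothetical.
import math
import statistics

def one_sample_t(diffs):
    """t statistic for H0: the mean paired difference is 0."""
    n = len(diffs)
    mean = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation
    return mean / (sd / math.sqrt(n))

# Hypothetical per-respondent differences between ratings of two criteria.
diffs = [1, 1, 2, 0, 1]
t_stat = one_sample_t(diffs)

# Bonferroni correction: with k planned comparisons, the per-test
# significance threshold becomes alpha / k.
alpha, k = 0.05, 10
adjusted_alpha = alpha / k

print(f"t = {t_stat:.3f}, per-test alpha = {adjusted_alpha}")
```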


The response rates for the surveys for each specialty are shown in Table 1. These ranged from a high of 65% (65 of 100) in urology to a low of 38% (41 of 107) in neurology. The overall response rate was 49% (1,201 of 2,443). Responses from combined medicine–pediatrics programs were considered insufficient in number to be included in the study (18 of 85). Of all the responses returned, 76% (913 of 1,201) were received via mail, and 24% (288 of 1,201) were received online.

Table 1:
Response Rates by Specialty of a National Survey of Residency Program Directors About Selection Criteria, 2006

Table 2 depicts the most important academic selection criteria across all specialties in rank order. Grades in required clerkships were the highest-ranked selection criterion compared with all other criteria (P < .002). To illustrate the statistical differences that exist when comparing all other selection criteria, the column to the right indicates the ranks that are statistically different from the criterion listed in each row. For example, USMLE Step 1 scores were ranked significantly higher than USMLE Step 2 scores and all criteria below; however, program directors' rankings of USMLE Step 1 scores did not differ statistically from their rankings of grades in senior electives in the specialty or the number of honors grades.

Table 2:
Rankings of the Importance of Academic Selection Criteria from a National Survey of Residency Program Directors, 2006

Table 3 depicts the data by rank within each individual specialty. We included only criteria for which the mean response across specialties was 3.0 or greater. In addition, we included responses to three questions from the Supporting Information section of the questionnaire, which referred to audition electives, letters of recommendation, and the utility of the MSPE.

Table 3:
Mean Importance Ratings and Rankings by Specialty of Applicant Selection Criteria from a National Survey of Residency Program Directors, 2006

Specialties in which the majority of positions are filled by graduating seniors of U.S. medical schools may be considered more competitive than specialties that fill a high percentage of positions with international graduates. The top row of Table 3 indicates the U.S. fill rate for each specialty. These data were derived from published 2007 match statistics from the NRMP, the San Francisco match (neurosurgery and ophthalmology), and the American Urological Association (AUA) match. Orthopedic surgery, plastic surgery, and otolaryngology had more than 90% of their positions filled by graduating U.S. seniors, whereas family medicine, internal medicine, and neurology had much lower percentages. To facilitate the analysis of the survey results, we adapted the approach used by Wagoner and Suriano2 and grouped specialties according to the percentage of residency positions filled by U.S. graduates: specialties with fill rates of 81% or higher were designated "most competitive," those with fill rates of 61% to 80% "competitive," and those with fill rates of 60% or lower "less competitive." For each criterion, the means across all specialties were calculated and are displayed in Table 3. Within specialties, significant differences existed. It should be noted that three criteria (grades in preclinical courses, grades in other senior electives, and usefulness of the MSPE) were never statistically similar to the top criterion group for any specialty. In Table 3, the selection criteria also are ranked from 1 (highest) to 16 (lowest) to illustrate absolute rank within a specialty.
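The fill-rate grouping described above amounts to a simple binning rule, sketched below. The function name and example rates are illustrative, not from the study.

```python
def competitiveness(us_fill_rate_pct):
    """Group a specialty by its U.S. graduate fill rate (percent).

    Thresholds follow the study's grouping, adapted from Wagoner and
    Suriano: 81% or higher -> "most competitive"; 61% to 80% ->
    "competitive"; 60% or lower -> "less competitive".
    """
    if us_fill_rate_pct >= 81:
        return "most competitive"
    if us_fill_rate_pct >= 61:
        return "competitive"
    return "less competitive"

# Hypothetical fill rates, for illustration only.
for rate in (92, 70, 45):
    print(rate, "->", competitiveness(rate))
```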

Table 4 shows the mean USMLE Step 1 and Step 2 CK scores for the groups of specialties that are listed. These data were obtained from the match statistics reported by the NRMP, the San Francisco match, and the AUA match. The mean USMLE Step 1 scores for matched students are significantly higher in the “most competitive” specialties when compared with the other two groups. The USMLE Step 1 scores in the “competitive” specialties are significantly higher than for the “less competitive” group. This trend holds for the “most competitive” specialties with USMLE Step 2 CK scores.

Table 4:
Mean United States Medical Licensing Examination (USMLE) Step 1 and Step 2 Clinical Knowledge (CK) Scores of Matching Applicants by Competitiveness of Specialty at the Time of a National Survey of Residency Program Directors, 2006


Our survey results are consistent with previously reported single-specialty studies. Although Otero et al8 used a different survey instrument when reporting on selection criteria for radiology residency programs, their top criteria did not differ significantly from our findings, except that they placed a higher value on letters of recommendation. Similar studies in orthopedics and emergency medicine show that the most highly valued selection criteria are consistent with the results of our survey.4,9 In family medicine, earlier studies emphasized the importance of more qualitative selection criteria; our study, however, indicates that program directors in family medicine increasingly value more objective criteria, such as class rank and USMLE Step 2 scores.14 Studies reporting the relative importance of comprehensive selection criteria for residency programs in other specialties have not been performed.

The results of this survey indicate that the top academic selection criteria (Table 2) are all based on clinical performance, with the exception of USMLE Step 1 scores. Not surprisingly, indicators that reflect excellence in clinical performance are valued by residency program directors across the specialties. Among the nonacademic selection criteria that rise to the top of program directors' lists, letters of recommendation and audition electives figure prominently, again highlighting students' clinical performance (see Table 3). The availability of USMLE Step 2 results in time for residency application, together with the introduction of the clinical skills component of the USMLE, may explain the relatively high ranking that program directors give Step 2 results. Given this increasing emphasis, students should be advised to take both components of the Step 2 examination in time for residency application. Additionally, despite their subjective nature, letters of reference are highly ranked by most specialties, and students should be careful in selecting the faculty members who write in support of their applications.

Despite student perceptions that research in medical school is an essential ingredient of a successful residency application, our findings indicate that research experience ranks low among selection criteria when all specialties are grouped together. The NRMP also found that research experience in medical school did not differ significantly between U.S. seniors who matched in their specialties of choice and those who did not.1 This was consistent across competitive and noncompetitive specialties. It should be noted, however, that in our survey, program directors in particularly competitive specialties (radiation oncology, plastic surgery, neurosurgery, and dermatology) ranked research experience highly (Table 3). It may be that when all other selection criteria are outstanding among applicants to a particular specialty, research experience or research publications help discriminate among candidates.

In our survey, grades in preclinical courses are not highly valued by program directors. This may be because there is considerable variability in the naming and content of courses in medical schools in the preclinical curriculum, perhaps making grades difficult to interpret. The USMLE Step 1 exam is a well-understood and objective means to assess basic science knowledge, and it may act as a substitute for preclinical grades. It could be inferred that schools that have pass/fail grading systems in the preclinical years may not necessarily disadvantage their students in the residency application process.

The MSPE was ranked lowest of all criteria by the program directors. In theory, MSPEs should provide most of the information that program directors value. However, MSPE usefulness may be limited by variability in content and quality across institutions. Additionally, information included in the MSPE can be acquired through the transcript, USMLE score report, ERAS application, or letters of recommendation. Finally, the MSPE release date of November 1 may be too late to be useful to program directors, particularly for early match specialties. Given the enormous effort that goes into the creation of the MSPE, it may be worth further study to assess why program directors do not find it useful and to judge whether the resources required to create it are justified.

The responses from program directors can be interpreted in a variety of ways. There is remarkable consistency in the top six selection criteria among the specialties with >80% U.S. senior fill rates, 61% to 80% U.S. senior fill rates, and <60% U.S. senior fill rates (see Table 3). Among other criteria, however, our study does indicate trends that seem to be based on the competitiveness of the specialties. The USMLE Step 2 CK score ranks higher in the specialties with <60% U.S. senior fill rates than in the specialties with the highest fill rates. This difference is even more prominent for the Step 2 CS examination. Does performance on these exams reflect resident characteristics that are more highly valued in these specialties? Do students who perform poorly on Step 1 apply more often to the less competitive specialties, thereby raising the relative importance of Step 2? The data from the NRMP show that both Step 1 and Step 2 scores are higher in the most competitive specialties (Table 4). Given the strong academic performance of individuals who apply to the most competitive specialties, characteristics that distinguish individuals within this highly accomplished group are difficult to discern. The heavier emphasis on class rank, AOA membership, and published medical school research may reflect accomplishments that allow program directors to select one student over another.

There are several limitations to our study. First, response rates varied across specialties, and in some specialties the sample may not be representative. We do not have information about why some specialties had higher response rates than others. However, this study surveyed a larger and more comprehensive group of program directors, from more specialties, than prior research.2 Our methodology conformed to best practices in survey research.15 Although our response rates varied by specialty, the broad scope of our study and the large number of respondents overall make our results noteworthy and, we believe, a reliable snapshot of widespread practices. Future studies that achieve relatively high response rates across all sampled specialties could further support our conclusions. Second, we chose not to survey community hospitals without university affiliations. The FREIDA database lists relatively few non-university-affiliated community hospital programs in most of the 21 selected specialties. Although in a few specialties the number of programs might have yielded enough responses for statistical comparisons, for the majority of non-university-affiliated programs the likelihood of an adequate number of responses was very low. As a result, our findings do not reflect the perspective of program directors at community programs without university affiliations. Third, because of the diverse formats used in conducting interviews, we did not explore the relative importance of the interview in the final selection of candidates.

In summary, our study provides important information about the current perspectives of program directors across specialties regarding the relative importance of specific selection criteria. Given the changes that have occurred in information available to program directors, this study may be useful in advising students about career selection. Additional studies are necessary to understand which of these selection criteria predict residents’ performance. This may become particularly important as the NBME considers changing the format and content of licensure examinations, because, after such reforms, data that are currently highly valued by residency program directors (e.g., USMLE Step 1 scores) may no longer be available.


The authors greatly appreciate the assistance of Dr. Norma Wagoner, who worked with us to develop and initiate this study. The authors thank Stephanie Miller for assistance in the submission and collection of the survey results, Dr. Jessica Tooredman Casey and Sarah Umetsu for compiling the program directors list, and Dr. Jenny Huang for her help with the statistical analysis. Finally, the authors thank Dr. William McGaghie for his critique of early drafts of this manuscript.


1 National Resident Matching Program; Association of American Medical Colleges. Charting Outcomes in the Match. Accessed November 14, 2008.
2 Wagoner NE, Suriano JR. Program directors’ responses to a survey on variables used to select residents in a time of change. Acad Med. 1999;74:51–58.
3 Hemaida RS, Kalb E. Using the analytic hierarchy process for the selection of first-year family practice residents. Hosp Top. 2001;79:11–15.
4 Bernstein AD, Jazrawi LM, Elbeshbeshy B, Della Valle CJ, Zuckerman JD. Orthopaedic resident-selection criteria. J Bone Joint Surg Am. 2002;84:2090–2096.
5 Grantham JR. Radiology resident selection: Results of a survey. Invest Radiol. 1993;28:99–101.
6 Garden FH, Smith BS. Criteria for selection of physical medicine and rehabilitation residents. A survey of current practices and suggested changes. Am J Phys Med Rehabil. 1989;68:123–127.
7 DeLisa JA, Jain SS, Campagnolo DI. Factors used by physical medicine and rehabilitation residency training directors to select their residents. Am J Phys Med Rehabil. 1994;73:152–156.
8 Otero HJ, Erturk SM, Ondategui-Parra S, Ros PR. Key criteria for selection of radiology residents: Results of a national survey. Acad Radiol. 2006;13:1155–1164.
9 Crane JT, Ferraro CM. Selection criteria for emergency medicine residency applicants. Acad Emerg Med. 2000;7:54–60.
10 Bajaj G, Carmichael KD. What attributes are necessary to be selected for an orthopaedic surgery residency position: Perceptions of faculty and residents. South Med J. 2004;97:1179–1185.
11 Turner NS, Shaughnessy WJ, Berg EJ, Larson DR, Hanssen AD. A quantitative composite scoring tool for orthopaedic residency screening and selection. Clin Orthop. 2006;449:50–55.
12 Galazka SS, Kikano GE, Zyzanski S. Methods of recruiting and selecting residents for U.S. family practice residencies. Acad Med. 1994;69:304–306.
13 Taylor CA, Weinstein L, Mayhew HE. The process of resident selection: A view from the residency director’s desk. Obstet Gynecol. 1995;85:299–303.
14 Travis C, Taylor CA, Mayhew HE. Evaluating residency applicants: Stable values in a changing market. Fam Med. 1999;31:252–256.
15 Mangione TW. Mail Surveys: Improving the Quality. Thousand Oaks, Calif: Sage Publications; 1995.
© 2009 Association of American Medical Colleges