
A Quantitative Composite Scoring Tool for Orthopaedic Residency Screening and Selection

Turner, Norman S.; Shaughnessy, William J.; Berg, Emily J.; Larson, Dirk R.; Hanssen, Arlen D.

Clinical Orthopaedics and Related Research: August 2006 - Volume 449 - Issue - p 50-55
doi: 10.1097/01.blo.0000224042.84839.44
SECTION I: SYMPOSIUM I: C. T. Brighton/ABJS Workshop on Orthopaedic Education

The ability to accurately screen and select orthopaedic resident applicants with eventual successful outcomes has historically been difficult. Many preresidency selection variables are subjective in nature, and a more standardized, objective scoring method seems desirable. A quantitative composite scoring tool (QCST), to be used in a standardized manner to help predict orthopaedic residency performance from application materials, was developed. In 64 orthopaedic residents, four predictors (United States Medical Licensing Examination [USMLE] Part I scores, Alpha Omega Alpha status, junior year clinical clerkship honors grades, and the QCST score) were analyzed with respect to four residency outcomes assessments. The outcomes included three standardized assessments, the Orthopaedic In-Training Examination (OITE) scores and the American Board of Orthopaedic Surgery (ABOS) written and oral examinations, and an internal outcomes assessment, attainment of satisfactory chief resident associate (CRA) status. Collectively, the QCST score had the strongest association as a predictor for all three standardized outcomes assessments (p < 0.001). Honors grades during junior year clinical clerkships were most strongly associated with satisfactory CRA status (p < 0.001). A composite scoring tool that is an effective predictor of orthopaedic resident outcomes can be developed. Additional work is still required to refine this scoring tool for orthopaedic residency screening and selection.

From the *Department of Orthopedic Surgery, and the †Division of Biostatistics, Mayo Clinic, Rochester, MN.

Each author certifies that he or she has no commercial associations (eg, consultancies, stock ownership, equity interest, patent/licensing arrangements, etc) that might pose a conflict of interest in connection with the submitted article.

Each author certifies that his or her institution has waived approval for the human protocol for this investigation and that all investigations were conducted in conformity with ethical principles of research.

Correspondence to: Arlen D. Hanssen, MD, Department of Orthopedics, Mayo Clinic, 200 First St. S.W., Rochester, MN 55905. Phone: 507-284-2884; Fax: 507-284-5936; E-mail:

It has been difficult to develop meaningful criteria for selecting those resident applicants who will successfully develop in an orthopaedic program. In a recent survey of orthopaedic program directors, the three most important criteria used for selection included a rotation at the director's institution, United States Medical Licensing Examination (USMLE)-Part I score, and rank in medical school.1 Additional variables included formality/politeness at interview, personal appearance, performance on ethical questions at interview, letter of recommendation from an orthopaedic surgeon, election into the Alpha Omega Alpha (AOA) honor society, medical school reputation, Dean's letter, and the applicant's personal statement.1 Because many of these variables are subjective, rather than objective, the absolute effect of some of these variables on the ultimate outcome is uncertain.

Some investigators have attempted to analyze specific preresidency selection criteria with orthopaedic residency outcomes based on standardized outcomes assessments. For example, the USMLE-Part I scores, Alpha Omega Alpha (AOA) status, research publications, age entering residency, marital status, and medical school affiliation have been analyzed with regard to performance on the Orthopaedic In-Training Examination (OITE).2 Because the only associations with OITE scores were USMLE-Part I performance and marital status, these authors concluded that few preresidency variables are associated with success during residency.

In another study of 58 orthopaedic residents, application data included scores on standardized tests, number of honors grades in the basic and clinical years of medical school, election to AOA, numbers of research projects and publications, and numbers of extracurricular activities.3 Performance measures included OITE scores, American Board of Orthopaedic Surgery (ABOS) written examination results, and faculty evaluations. The number of honors grades on clinical rotations was the strongest predictor of performance, whereas election to AOA was second. None of the predictor variables had an association with OITE or ABOS written examination scores.

The quest for a scoring measurement tool that incorporates many of these preselection variables, and is helpful for determining residency outcomes, has been unsuccessful for several decades.4-6 In one effort to develop a scoring system, the intraclass correlation coefficient was high for numeric or objective elements but low for more subjective elements.6 These investigators noted no benchmarks exist to define an acceptable intraclass correlation coefficient of a resident-applicant scoring model.

This report describes the development and initial use of a global scoring tool, the quantitative composite scoring tool (QCST), in our orthopaedic resident selection process. Given that individual parameters have not reliably predicted resident performance, we hypothesized that this new global scoring tool would be more predictive than independent use of individual preselection variables for analyzing residency outcomes and performance.

MATERIALS AND METHODS


In 1998, we developed the QCST (Table 1) to assist in the screening and selection of applicants to the Mayo Orthopedic Residency training program. The present study was designed to assess the utility of the QCST in a group of residents selected with our previous selection method, immediately before the QCST was developed. This group of residents was selected because their outcomes are now known, and we wished to determine whether the QCST, applied retrospectively, provided any predictive value compared with our prior use of the individual preresidency selection variables. Screening and selection criteria during this time were applied according to the preferences of the residency program director and residency selection committee; the individual variables were not applied in a standardized manner.



Between 1993 and 1998, 67 residents matriculated into the Mayo Orthopedic Residency Program. Of these 67 individuals, two were foreign medical graduates and one was a graduate of an osteopathic training program. These three individuals were excluded from this study so that evaluation of the QCST could be uniformly applied to graduates of American allopathic medical schools. We retrospectively applied the QCST to the remaining 64 applicants, who constitute the basis of this study. All individuals were deidentified so their identities were unknown for final data collection and statistical analysis. This retrospective study was performed with the approval of, and according to the guidelines of, the Institutional Review Board.

Scores were retrospectively assigned to each individual in the 10 categories of the QCST by reviewing each of the individuals' institutional records, which contain all application data, residency performance evaluations, and outcomes assessments. The study was designed so that four predictor variables could be analyzed with respect to four outcomes assessments. The four predictors included USMLE-Part I scores, AOA status, performance in the junior year core clinical clerkships (JYCC score), and the QCST score. The four outcomes assessments were percentile OITE scores obtained in the final year of residency, whether the individual passed the ABOS written and oral examinations on their first attempt, and appointment to postgraduate year five (PGY5) chief resident associate (CRA) status based on faculty selection and performance evaluations.

The intent in developing the QCST was to quantify a large number of objective and subjective variables typically used to assist the residency screening and selection process. The QCST was developed solely from the impressions and experiences of two authors (WJS, ADH) with a large number of preresidency selection variables. We identified ten separate categories, and each category was assigned a variable point weighting for a total composite score of 92 points. The results of the interview process were not included in the QCST because we believed this variable is highly subjective in our residency selection process and, furthermore, is not available during the screening phase.

In each of the ten categories, a numerical scale was developed for endpoints so scores could be easily determined and applied by multiple observers. For example, in the category of class rank, different scores are provided for class standing and AOA status as follows: junior AOA (10 pts); senior AOA (8 pts); upper quartile class standing (6 pts); upper third class standing (2 pts); and no points awarded for individuals outside of these categories. United States Medical Licensing Examination (USMLE) Part I percentile scores were linearly converted to a score between 0 and 10 using a predetermined reference scale. Junior clinical clerkship grades were assigned points by honors and high pass grades. Points were awarded for honors grades in surgery (7), medicine (5), pediatrics (3), obstetrics-gynecology (3), and psychiatry (2). Similarly, one point was awarded for high pass grades in each of the core clerkships.
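The point rules for two of these categories can be sketched in code. This is an illustrative sketch only: the function names, field names, and data structure are hypothetical, and only the point values come from the text (the USMLE conversion scale is not reproduced here because its reference table is not given).

```python
# Illustrative sketch of two QCST categories. Only the point values
# come from the article; names and structure are hypothetical.

# Class rank / AOA category: tiered points, 0 outside the listed tiers.
CLASS_RANK_POINTS = {
    "junior_aoa": 10,
    "senior_aoa": 8,
    "upper_quartile": 6,
    "upper_third": 2,
}

# Junior year core clinical clerkship (JYCC) honors weights.
HONORS_POINTS = {
    "surgery": 7,
    "medicine": 5,
    "pediatrics": 3,
    "obstetrics_gynecology": 3,
    "psychiatry": 2,
}

def class_rank_score(standing: str) -> int:
    """Points for class standing/AOA status; 0 if outside listed tiers."""
    return CLASS_RANK_POINTS.get(standing, 0)

def jycc_score(grades: dict) -> int:
    """JYCC points: weighted honors grades, plus 1 point per high pass."""
    total = 0
    for clerkship, grade in grades.items():
        if grade == "honors":
            total += HONORS_POINTS.get(clerkship, 0)
        elif grade == "high_pass":
            total += 1
    return total
```

Note that honors in all five core clerkships yields the maximum JYCC score of 20 points, consistent with the observed JYCC range reported below.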

Many additional categories have objective endpoints, such as honors grades during the basic sciences, undergraduate grade point average, and points awarded for masters or doctor of philosophy degrees; these are usually easily identifiable, and the point totals are straightforward to award. Because of variability among medical school policies, however, obtaining these data can be difficult. For example, some medical schools have a pass/fail grading system, do not assign AOA status until graduation, do not have an AOA chapter, or do not generate class rankings. These differences often require considerable effort to assess and assign points for certain medical schools, and scoring can consequently become quite subjective. These difficulties resulted in an adjustment of the ranking and points awarded in the medical school reputation category of the QCST. Grade inflation, as demonstrated by specific medical school grade distribution histograms, also led to a reduction in points awarded in that category.

For other categories, the criteria being assessed are inherently subjective and therefore more prone to reviewer variability. For example, assignment of points in the miscellaneous/extracurricular category depends on identifying participation in the arts, sporting activities, demonstrated leadership, published or unpublished research, or substantial community service. Likewise, determining the meaning and intent of the wording in the Dean's letter or other letters of recommendation tends to be more subjective. In an effort to reduce this variability, numerical reference tables were developed to determine the scores in these categories.

When the 64 matriculated residents were categorized according to USMLE-I scores, 40 (62.5%) individuals had a USMLE-I score ≥ 5 points. Of the 64 individuals, nine (14.1%) had junior AOA status, 25 (39.1%) had senior AOA status, and 30 (46.9%) did not have AOA status. When stratified by JYCC scores, 33 of 64 (51.6%) had scores ≥ 14 points, with an average score of 17.4 points (range, 14-20 points). When sorted by the QCST, 33 individuals had a score ≥ 50 points, with an average score of 58.0 points (range, 50-69 points). The association between these predictor variables when stratified into these dichotomous groups is shown in Table 2.



The data were summarized as mean (standard deviation) for continuous variables, and count (percent) for discrete variables. The outcomes of ABOS-I (written) examination results (pass versus fail), ABOS-II (oral) examination results (pass versus fail), and chief resident status (yes versus no or poor performance) were analyzed as binary variables, while the outcome of the OITE was evaluated as a continuous variable. As predictors, the USMLE-I score, the JYCC score and the QCST score were analyzed as continuous variables, while AOA status (Junior or Senior AOA versus other) was evaluated as a discrete, binary variable.

The associations of the predictors with the three categorical outcomes were evaluated using logistic regression. The relationships of the predictors with the continuous outcome (OITE) were examined using linear regression. Separate analyses were conducted for each outcome. Each predictor was first evaluated univariately. Subsequently, all predictors were analyzed using multivariable analysis. Specifically, for each outcome, stepwise selection, backwards elimination, and best subset selection were used to identify the predictors that were independently significant in multivariable analysis. For each outcome, each method yielded consistent results. All statistical tests were two-sided, and p values less than 0.05 were considered significant. All analyses were conducted using SAS version 8.2 (SAS Institute Inc, Cary, NC).
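The univariate step for the continuous outcome can be illustrated with a small self-contained sketch: simple linear regression of outcome on one predictor, with the t-statistic used to test the slope against zero. The original analyses were run in SAS 8.2; the data below are invented for illustration only and do not reproduce the study's results.

```python
# Sketch of the univariate linear regression step (outcome ~ predictor).
# Invented example data; the article's analyses were run in SAS 8.2.
import math

def univariate_linreg(x, y):
    """Return (slope, intercept, t_statistic) for least-squares y ~ x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    # Residual standard error of the slope, for H0: slope = 0
    resid = [yi - (intercept + slope * xi) for xi, yi in zip(x, y)]
    se = math.sqrt(sum(r ** 2 for r in resid) / (n - 2) / sxx)
    return slope, intercept, slope / se

# Hypothetical QCST scores vs. OITE percentiles
qcst = [45, 50, 55, 58, 62, 66, 69]
oite = [20, 35, 50, 55, 60, 75, 90]
slope, intercept, t_stat = univariate_linreg(qcst, oite)
```

In practice the two-sided p value would be obtained from the t-distribution with n - 2 degrees of freedom; a statistics library (rather than this hand-rolled sketch) would normally be used for both the linear and the logistic models.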

RESULTS


The quantitative composite scoring tool (QCST) score was compared with the other three predictor variables (USMLE-I scores, AOA status, and the JYCC score) with regard to predictive value for the four defined outcomes assessments using univariate and multivariable analysis (Table 3). Multivariable analysis revealed that the QCST provided the strongest predictive value for the three standardized outcomes assessments: the ABOS written and oral examinations and the OITE scores (p < 0.001). Attainment of satisfactory chief resident associate (CRA) status, an internal outcomes assessment, was most strongly associated with the junior year clinical clerkship (JYCC) score (p < 0.001).



Specifically, the Orthopaedic In-Training Examination (OITE) raw percentile scores for the final year of residency training for the 64 individuals averaged the 56th percentile (range, 12-99th percentile). By univariate analysis (Table 3), all four predictor variables were significantly associated with OITE results: USMLE-I scores (p < 0.001), the QCST score (p < 0.001), the JYCC score (p = 0.002), and AOA status (p = 0.007). However, the only predictor of statistical significance with multivariable analysis was the QCST score (p < 0.001).

On their first attempt at the written ABOS Part-I examination, 57 (89.1%) of the graduates passed and seven (10.9%) failed. By univariate analysis (Table 3), only the QCST score (p = 0.001) and the JYCC score (p = 0.002) were significantly associated with passing the ABOS Part-I examination. As univariate predictors, neither AOA status (p = 0.06) nor USMLE-I scores (p = 0.052) were statistically significant. The only predictor of statistical significance with multivariable analysis was the QCST score (p = 0.001).

At the time of data completion for this study, only 56 of the 64 individuals had attempted the oral ABOS Part-II examination. Most of the residents who matriculated in 1998 were ineligible because of participation in fellowships after residency. Of these 56 individuals, 45 (80.4%) passed and 11 (19.6%) failed the ABOS oral examination on their first attempt. By univariate analysis (Table 3), the QCST score (p < 0.001), the JYCC score (p < 0.001), and AOA status (p = 0.02) were significantly associated with passing the ABOS Part-II examination. The association with USMLE-I scores was not statistically significant (p = 0.079). Again, the only predictor of statistical significance with multivariable analysis was the QCST score (p < 0.001).

Of the 64 individuals, 50 (78.1%) had been granted CRA status, with seven (14.0%) of these individuals having unsatisfactory faculty performance evaluations. Therefore, there were 43 (67.2%) satisfactory CRA and 21 (32.8%) non/poor CRA individuals at final assessment. For this internal assessment outcome, by univariate analysis (Table 3), the QCST score (p < 0.001), the JYCC score (p < 0.001), and AOA status (p = 0.002) were significantly associated with achieving CRA status. Again, the association with USMLE-I scores was not statistically significant (p = 0.455). By multivariable analysis, the JYCC score was the only statistically associated predictor variable (p < 0.001).

DISCUSSION


The ability to accurately screen and select orthopaedic residency applicants who will successfully complete a residency program and pass their board examinations has historically proven to be very difficult.3 We devised a global quantitative scoring tool to combine the multiple available preselection variables to determine whether use of this composite scoring tool would be more predictive for residency outcomes.

It can be argued the definition of what constitutes a successful outcome for graduates of orthopaedic residency training has not yet been established. In addition to being an excellent clinical orthopaedist, other desirable characteristics might include talent in providing culturally competent care, assistance with elimination of healthcare disparities, skills in research, talent in leadership, skills in administration, and abilities in education.7 Presently, the most commonly used standardized outcome assessments for orthopaedic residency training include the OITE and the ABOS written (Part-I) and oral (Part-II) examinations.

In our study, the OITE and the ABOS examinations were used as established standardized outcomes assessments. Unlike Dirschl et al,3 the OITE scores used in this study were the percentile scores obtained in the last year of residency rather than an average of all OITE scores obtained throughout the residency. Similarly, the ABOS written examination results in this study were determined as pass/fail rather than percentile scores as used by Dirschl et al.3 As the ABOS oral examination results are provided only as pass/fail, these results were used as the third standardized assessment outcome for the current study.

In addition to these three standardized outcomes assessments, we also used an internal outcomes assessment, satisfactory CRA status. The Mayo orthopedic departmental education committee annually approves or declines CRA status at the end of the PGY4 year based on review and discussion of faculty performance evaluations generated throughout the residency. Assignment of satisfactory versus unsatisfactory CRA performance was determined by the performance grade and faculty assessments received.

Previous studies have used a variety of predictor variables to assess orthopaedic residency outcomes, including USMLE-I scores,2,3 USMLE-II scores,8 AOA status,2,3 research publications,2,3 age entering residency,2 marital status,2 medical school affiliation,2 number of honors grades in the basic and clinical years of medical school,3 and numbers of extracurricular activities.3 The most important of these variables to date has been the number of honors grades obtained in the clinical years of medical school.3

We decided to study predictor variables that included two commonly used preresidency predictor variables, the USMLE-I scores and AOA status, as well as the JYCC honors grades score. Unlike Dirschl et al,3 the points awarded for honors grades in this study were weighted so that more points were awarded for surgery and medicine than for pediatrics, obstetrics-gynecology, and psychiatry. Additionally, points were also awarded for high pass grades in these core clerkship rotations. Adding these points made the JYCC a unique predictor variable. These three predictors were compared with another new predictor, the QCST score, to test whether a composite scoring tool could be more valuable than the individual predictor variables for assessing orthopaedic residency outcomes.

It has been suggested that selection committees overemphasize the predictive value of USMLE-I scores.9 However, as shown, USMLE-I scores have been statistically associated with the outcome of OITE scores.2 Likewise, USMLE-I scores have also been shown to be statistically associated with the outcome of the ABOS written examination.8 In our study, the relationship between USMLE-I scores and the ABOS written examination tended toward a significant association. Importantly, USMLE-II scores have a better association with the ABOS written examination than USMLE-I scores,8 as USMLE-I scores are not strong predictors of individuals at risk of failing the ABOS Part-I written boards.10 To our knowledge, USMLE-I scores have not been studied with regard to the outcome of the ABOS oral examination, but we observed no significant association. Unfortunately, USMLE-II scores are largely unavailable for the process of residency screening and selection. Likewise, there was no association between USMLE-I scores and the attainment of satisfactory CRA status.

Alpha Omega Alpha status has also been shown to be associated with the outcome of the OITE2,3 and the ABOS written examination.2,3 We confirmed the association with the OITE, but the association with the ABOS written examination was not statistically significant. In our study, AOA status was associated with the results of the ABOS oral examination and attaining satisfactory CRA status, in addition to the OITE. However, the use of AOA status remains problematic because some medical schools do not have AOA chapters.11 Furthermore, many medical schools now elect AOA status at graduation, well after the process of residency screening and selection. This current trend potentially places students at these medical schools at a disadvantage, or may render AOA status a less effective predictor in the future. This situation is similar to the use of the pass/fail grading system, which also puts the medical student at a disadvantage in competing for general surgery residency positions.12

Dirschl et al3 suggested the number of honors grades on clinical rotations was the strongest predictor of performance for OITE scores, ABOS written examination results, and faculty evaluations of overall cognitive, affective, and psychomotor performance. Likewise, a similar predictor, the JYCC score, had a strong association with all four outcomes assessments evaluated in this study, and had the strongest association with satisfactory CRA status. Although not identical, the attainment of satisfactory CRA status is comparable to the faculty evaluations of overall cognitive, affective, and psychomotor performance reported by Dirschl et al.3 Our impression is that the internal outcomes assessment, the satisfactory CRA status, as determined within our department, is representative for the many variables considered important in a well-trained orthopaedic resident without emphasizing written examination results.

An important remaining question is the specific predictive effect of each of the 10 separate categories in the QCST. We are unable to comment on this question with the current study design and data, but believe additional study of these individual categories is necessary. The current study was a retrospective study of applicants who were selected with a different residency selection method than is currently used. Since 1998, we have been prospectively using the QCST for residency screening and selection purposes. We believe the QCST has become an effective residency screening and selection tool, but its use has also resulted in a range restriction of the data for our current orthopaedic residents as compared with the residents evaluated in the present study. For these reasons, we are now planning to analyze the predictive effects of each of the selection categories in the QCST for these current residents, who have a restricted data range, to determine whether point weighting adjustments of certain categories are required or whether particular categories need to be altered or eliminated.

For example, the value of the additional graduate school degrees category, for which up to 5 additional points may be awarded in the QCST for masters or doctor of philosophy degrees, has been frequently debated within our residency selection committee. Preliminary evaluation has suggested this category may need to be eliminated. We believe continued study of these separate QCST categories, under the current conditions of data range restriction, will lead to changes that make the current version of the QCST even more effective. It is also likely that the QCST, as presented in this study, may require modification to be effective or relevant for other orthopaedic residency training programs or other subspecialties that may choose to use this selection approach.

In summary, we demonstrate that a quantitative composite scoring tool can be developed and can be more effective in predicting orthopaedic residency outcomes than individual predictors, such as USMLE-I scores, AOA status, or junior year clinical clerkship performance. As medical schools change their policies to delay release of traditionally available information, such as AOA status, or change grading strategies with grade inflation of junior year clinical clerkship honors, the value of a composite scoring tool will likely increase in the future. Continued evaluation, refinement, and validation of this type of composite residency screening and selection scoring tool are planned.

ACKNOWLEDGMENT


The authors thank Kristine M. Thomsen for her assistance with statistical analysis.

REFERENCES


1. Bernstein AD, Jazrawi LM, Elbeshbeshy B, Della Valle CJ, Zuckerman JD. An analysis of orthopaedic residency selection criteria. Bull Hosp Jt Dis. 2002;61:49-57.
2. Carmichael KD, Westmoreland JB, Thomas JA, Patterson RM. Relation of residency selection factors to subsequent orthopaedic in-training examination performance. South Med J. 2005;98:528-532.
3. Dirschl DR, Dahners LE, Adams GL, Crouch JH, Wilson FC. Correlating selection criteria with subsequent performance as residents. Clin Orthop Relat Res. 2002;399:265-271.
4. DaRosa DA, Folse R. Evaluation of a system designed to enhance the resident selection process. Surgery. 1991;109:715-721.
5. Clarke JR, Wigton RS. Development of an objective rating system for residency applications. Surgery. 1984;96:302-307.
6. Dirschl DR. Scoring of orthopaedic residency applicants: is a scoring system reliable? Clin Orthop Relat Res. 2002;399:260-264.
7. White AA 3rd. Resident selection: are we putting the cart before the horse? Clin Orthop Relat Res. 2002;399:255-259.
8. Case SM, Swanson DB. Validity of NBME Part I and Part II scores for selection of residents in orthopaedic surgery, dermatology, and preventive medicine. Acad Med. 1993;68:S51-S56.
9. Fine PL, Hayward RA. Do the criteria of resident selection committees predict residents' performances? Acad Med. 1995;70:834-838.
10. Klein GR, Austin MS, Randolph S, Sharkey PF, Hilibrand AS. Passing the Boards: Can USMLE and Orthopaedic In-Training Examination scores predict passage of the ABOS Part-I Examination? J Bone Joint Surg Am. 2004;86:1092-1095.
11. Clark R, Evans EB, Ivey FM, Calhoun JH, Hokanson JA. Characteristics of successful and unsuccessful applicants to orthopedic residency training programs. Clin Orthop Relat Res. 1989;241:257-264.
12. Dietrick JA, Weaver MT, Merrick HW. Pass/fail grading: a disadvantage for students applying for residency. Am J Surg. 1991;162:63-66.
© 2006 Lippincott Williams & Wilkins, Inc.