Invited Commentaries

A Stepping Stone Toward Necessary Change: How the New USMLE Step 1 Scoring System Could Affect the Residency Application Process

Salari, Salomeh MD, MS; Deng, Francis MD

doi: 10.1097/ACM.0000000000003501

Abstract

One of the most significant milestones on the path to becoming a physician is taking the United States Medical Licensing Examination (USMLE) Step 1. It is so symbolic that students fear the exam from the start of medical school. It is so memorable that even friends and family exclaim when they hear mention of it. The infamous nature of Step 1 originates from the substantial weight scores carry in the resident selection process. Incredibly, this will all change as early as January 1, 2022, when score reporting for Step 1 will go from a 3-digit number to pass/fail. The exam cosponsors, the Federation of State Medical Boards (FSMB) and the National Board of Medical Examiners (NBME), stated that this change would both continue to allow state medical boards to make licensure decisions and discourage secondary uses of Step 1 scores, primarily in resident selection.1 In doing so, the FSMB and NBME hope that this change is “an important first step toward facilitating broader, system-wide changes to improve the transition from undergraduate to graduate medical education.”1 As residents who have recently navigated this transition, we foresee that the likely consequences of the score reporting change will fall short of addressing the concerns that led to the discussion in the first place. In this Invited Commentary, we suggest potential proactive solutions stakeholders can implement by the time this change takes effect in 2022 to alleviate the underlying issues affecting the resident selection process today.

Historically, the purpose of Step 1 was to act as a checkpoint, a means of determining medical students’ mastery of the basic sciences that lay the groundwork for clinical medicine. While some state medical boards consider the 3-digit score, the primary information they need for licensure decisions is binary: pass or fail. The impetus for requiring Step 1 was never to place medical students on a spectrum of perceived preparedness or to predict the quality of their future patient care. And yet, in the past decade, residency programs have increasingly used this numeric score to select and rank applicants. The result is a high-stakes test that is a burdensome stressor for medical students because of its outsized importance for their fate after graduation.

Despite the stress of Step 1, some students appreciate the opportunity that it provides to “level the playing field” for residency applicants of different educational experiences. Among these are international and osteopathic medical students who may feel their academic pedigree is a disadvantage in competing for coveted residency positions. Moreover, program directors desire data that allow for objective comparisons of applicants from different medical schools. Thus, standardized, stratifying metrics remain important for this hypercompetitive process. Ideally, these metrics would be purposefully designed to assess the medical knowledge and clinical skills relevant for residency, which Step 1 is not designed to do.

To fill the void left when Step 1 scores are no longer a singular objective discriminator between applicants, stakeholders will likely react by placing greater importance on the USMLE Step 2 Clinical Knowledge (CK), which continues to report 3-digit scores. Currently, many students have the option to take Step 2 CK after submitting residency applications and thus can subsequently report their score to programs. Now, many students may opt to take the exam earlier in medical school either to distinguish themselves or if more programs require scores for interview consideration. This shift should improve the validity of resident selection decisions: Whether due to its clinical relevance or timing closer to residency, the predictive value of Step 2 CK scores for performance during and after residency is greater than that of Step 1.2 An alternative possibility is more uniform use and reporting of scores on the NBME Subject Exams (so-called shelf exams), which medical schools often administer at the ends of clerkships. The use of additional tests in the resident selection process would reduce the stakes and stress associated with each test individually.

Even with a shift in focus from Step 1 to other tests, the overall burden of exam performance is unchanged, and medical knowledge continues to carry inordinate importance for resident selection. Unfortunately, these multiple-choice tests capture only one of a multitude of pertinent qualities of a residency applicant, and a score-based screening process overlooks many would-be strong candidates. A holistic review of multiple domains is needed, but this may be unrealistic in current practice for 2 reasons. First, reliable comparative data in other domains seem to be lacking. For instance, medical school evaluations are notoriously difficult to interpret.3 Second, program directors face a growing deluge of applications, and holistically reviewing them is too time-intensive. This issue stems from an increase in competition: Although applicants have long outnumbered available positions in the Match, recent cohorts have submitted progressively more applications per applicant.4 Individuals apply in excess as a rational strategy to keep pace with competitors and improve their odds of success. The overapplication phenomenon self-perpetuates year after year, fueled by prior classes’ behavior and passed-down stories of an unpredictable and frenzied process.

Efforts to improve the transition from undergraduate to graduate medical education must address these underlying issues. The first solution is to provide program directors with assessments of students’ fit and readiness for residency that are not burdensome to understand and compare. Residency program directors should request standardized letters of recommendation from medical school faculty, as several specialties have done already.3 Moreover, student affairs deans should further standardize the Medical Student Performance Evaluation (so-called dean’s letter) that summarizes performance in the medical school curriculum.3

The second solution is to quell the application frenzy in which students increasingly find themselves caught. We easily recall the months of anticipation, never leaving our phones or computers unattended in the hopes of an interview invitation, and the fear that all the interview spots would be filled by the time we saw the invitation. Even a 15-minute delay in responding resulted in waitlists rather than interviews as other anxious applicants beat us to coveted interview spots. At the least, programs should agree on “traffic rules,” such as sending interview invitations on coordinated dates, inviting only as many applicants as there are interview slots available, and giving applicants a reasonable amount of time to express preferences for interview dates rather than racing to fill slots on a first come, first served basis. By doing so, students will be able to scrub into surgeries without reluctance and choose their final clinical electives based on educational passions, rather than on what affords the most flexibility for constant email checking. In lieu of exclusively focusing on the application cycle, students then can spend their last year of medical school further strengthening their clinical skills and gaining valuable exposure to specialties other than the ones to which they have applied, better preparing them overall for residency.

Another proposal is a preference signaling system that allows applicants to indicate interest in a limited number of programs, which can mitigate application congestion and optimize the allotment of interview offers.5 These measures would hopefully blunt applicants’ stress and uncertainty, leading to them applying to fewer programs. More drastic but potentially necessary interventions include instituting an optional early match program or limiting the number of applications each student can submit.4 These changes would allow program directors the time to more holistically review students’ applications and to select applicants based on merits other than their test-taking abilities.

As recent medical school graduates, we wholeheartedly welcome this much needed change to Step 1 score reporting given both the relative lack of holistic applicant review in practice now and the palpable stress and burnout caused by exam performance pressure. The cutthroat nature of resident selection has caused our future colleagues to spend months rather than weeks preparing for just Step 1. Even with prolonged study periods, many students still find themselves fearful of not matching at all, let alone in their preferred specialty or at their preferred program, based only on their Step 1 score. Addressing the underlying hypercompetition of resident selection, such as by clarifying other important selection criteria, will improve these systemic issues. Alone, the Step 1 score reporting change will not accomplish this goal; rather, the momentum caused by this change will dissipate without meaningful results. The medical education community would be remiss not to seize this opportunity to continue to evolve the currently tumultuous transition to residency.

For years, there has been discussion regarding the misplaced significance of Step 1 scores for resident selection. In finally moving Step 1 score reporting to pass/fail, the FSMB and NBME have paved the path for future changes that are equally imperative for improving the residency application process as a whole. Without these changes, the stress and frustration of this exciting transition will remain for all stakeholders involved. We urge medical schools and residency programs to continue working to redefine the selection criteria for the next generation of residents. Not doing so is a failure to emphasize the characteristics and values that truly matter for training competent physicians and our future colleagues.

Acknowledgments:

The authors would like to thank Dr. Jesse Burk Rafel for his support during the process of preparing and submitting this article.

References

1. United States Medical Licensing Examination. InCUS—Invitational Conference on USMLE Scoring. Change to pass/fail score reporting for Step 1. https://www.usmle.org/inCus. Accessed April 28, 2020.
2. Sharma A, Schauer DP, Kelleher M, Kinnear B, Sall D, Warm E. USMLE Step 2 CK: Best predictor of multimodal performance in an internal medicine residency. J Grad Med Educ. 2019;11:412–419.
3. Hartman ND, Lefebvre CW, Manthey DE. A narrative review of the evidence supporting factors used by residency program directors to select applicants for interviews. J Grad Med Educ. 2019;11:268–273.
4. Deng F, Chen JX, Wesevich A. More transparency is needed to curb excessive residency applications. Acad Med. 2017;92:895–896.
5. Chen JX, Deng F, Gray ST. Preference signaling in the National Resident Matching Program. JAMA Otolaryngol Head Neck Surg. 2018;144:951.
Copyright © 2020 by the Association of American Medical Colleges