In Reply to de Haan et al

Chen, David R.; Priest, Kelsey C. PhD, MPH

doi: 10.1097/ACM.0000000000002759
Letters to the Editor
Disclosures: None reported.

We thank de Haan and colleagues for their engagement on the topic of the United States Medical Licensing Examination (USMLE) Step 1. We do not disagree on the need for excellence in medicine but, rather, on how this quality is defined. We view medical knowledge as one of several factors that constitute excellence, and argue that the current “Step 1 climate” detracts from the pursuit of excellence in medical education.

The authors cite Gumbert and colleagues,1 which Dr. Ghebremichael coauthored, to assert that Step 1 scores should continue to play a central role in residency selection. However, this article supports weighing Step 1 scores as one of many factors in this process. We agree that “non-technical skills of applicants are being increasingly recognized as important harbingers of resident performance in several core competencies” and that reliance on standardized test scores offers “inconsistent and only moderate use in the selection of residents.”1

Further, the authors claim that Step 1 scores "predict academic and clinical success." The articles they reference do not support this statement.2,3 Chen and colleagues2 found only a modest correlation between Step 1 scores and in-training examination performance, and a weak correlation between Step 1 scores and clinical performance. Zhou and colleagues3 did not study Step 1 at all. Admittedly, there is a paucity of high-quality research on this topic.

The authors’ primary concern with a pass/fail approach to Step 1 appears to be that it would limit residency program directors’ ability to assess academic aptitude, which in turn could impact physician excellence. In contrast, we argue that the current Step 1 climate actively compromises the pursuit of excellence: it emphasizes content developed by the for-profit test preparation industry, harms diversity by widening disparities within medicine, and contributes to trainee burnout.

David R. Chen

Fourth-year medical student, University of Washington School of Medicine, Seattle, Washington; chend4@uw.edu; @davidroychen; ORCID: http://orcid.org/0000-0002-5711-8689.

Kelsey C. Priest, PhD, MPH

Fifth-year MD/PhD student, Oregon Health & Science University, Portland, Oregon; @kelseycpriest; ORCID: http://orcid.org/0000-0003-3929-8177.

References

1. Gumbert SD, Normand KC, Artime CA, et al. Reliability of a faculty evaluated scoring system for anesthesiology resident applicants. J Clin Anesth. 2016;31:131–136.
2. Chen F, Arora H, Martinelli SM, et al. The predictive value of pre-recruitment achievement on resident performance in anesthesiology. J Clin Anesth. 2017;39:139–144.
3. Zhou Y, Sun H, Lien CA, et al. Effect of the BASIC examination on knowledge acquisition during anesthesiology residency. Anesthesiology. 2018;128:813–820.
© 2019 by the Association of American Medical Colleges