Letters to the Editor

Differences in Milestone Evaluations of Men and Women: The Devil Is in the Details

O’Connor, Daniel M. MD; Dayal, Arjun MD; Arora, Vineet M. MD, MAPP

doi: 10.1097/ACM.0000000000003600

To the Editor:

We commend Santen and colleagues1 on their analysis of resident evaluations by gender at the level of the clinical competency committee (CCC). When examining data aggregated at the national level, they found only modest differences in emergency medicine (EM) resident evaluations by gender. This finding directly contrasts with our prior work,2 which showed a substantial difference in evaluations of men and women when examining direct-observation evaluations of residents by attending physicians. One explanation is that subtle biases discernible in individual attending-resident evaluations conducted in real time may be obscured by the CCC's consensus summarization process.

Interestingly, the authors had recently analyzed the same national Accreditation Council for Graduate Medical Education (ACGME) EM dataset for the incidence of straight line scoring (SLS), defined as a resident receiving the same score in each of the 23 milestone subcompetencies.3 Although the likelihood of a resident achieving a straight line score by chance is 1 in 10^23, SLS reporting was found in 20% of EM programs and occurred more frequently in evaluations of graduating residents.3 The authors note that these results are troubling as they may suggest a “fundamental misunderstanding of how the CCC should function.”3 Indeed, it may be very difficult to discern gender differences in a dataset subject to this problem.
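As a back-of-the-envelope illustration of that figure, the following minimal sketch (not from the cited study) assumes each of the 23 subcompetencies is scored on 11 possible levels (0 to 5 in half-point steps) and that levels are assigned independently and uniformly at random:

from math import log10

SUBCOMPETENCIES = 23   # EM milestone subcompetencies, per the letter
LEVELS = 11            # assumption: possible scores 0.0, 0.5, ..., 5.0

# Probability that 23 independent, uniformly random scores all coincide:
# LEVELS choices for the shared value, each matched with probability (1/LEVELS)**23.
p_sls = LEVELS / LEVELS**SUBCOMPETENCIES   # = 1 / LEVELS**22
print(f"P(straight line by chance) ~ 1 in 10^{-log10(p_sls):.1f}")   # about 1 in 10^22.9

Under those assumptions the result is about 1 in 10^22.9, on the order of the 1 in 10^23 cited above; actual scores are neither independent nor uniform, so this is only an order-of-magnitude illustration.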

Nevertheless, Santen and colleagues found that men were rated higher than women in 19 of 23 subcompetencies, which is largely consistent with our findings. That these differences persisted, some reaching statistical significance, raises the concern that there are notable differences in how men and women are evaluated during residency. Further investigation of gender bias in evaluations is needed, especially as residency programs contemplate transitioning to competency-based graduation. One way to expedite such research would be to improve access to deidentified evaluation data so that independent researchers can continue this line of investigation.

Daniel M. O’Connor, MD
Third-year resident, Harvard Combined Dermatology Residency Training Program, Boston, Massachusetts; ORCID: https://orcid.org/0000-0001-5464-2031.
Arjun Dayal, MD
Third-year resident, Dermatology Residency Training Program, University of Chicago, Chicago, Illinois; ORCID: https://orcid.org/0000-0002-7024-2078.
Vineet M. Arora, MD, MAPP
Herbert T. Abelson Professor of Medicine, assistant dean for scholarship and discovery, and associate chief medical officer-clinical learning environment, University of Chicago, Chicago, Illinois; [email protected]; Twitter: @FutureDocs; ORCID: https://orcid.org/0000-0002-4745-7599.

References

1. Santen SA, Yamazaki K, Holmboe ES, Yarris LM, Hamstra SJ. Comparison of male and female resident milestone assessments during emergency medicine residency training: A national study. Acad Med. 2020;95:263–268.
2. Dayal A, O’Connor DM, Qadri U, Arora VM. Comparison of male vs female resident milestone evaluations by faculty during emergency medicine residency training. JAMA Intern Med. 2017;177:651–657.
3. Beeson MS, Hamstra SJ, Barton MA, et al. Straight line scoring by clinical competency committees using emergency medicine milestones. J Grad Med Educ. 2017;9:716–720.
Copyright © 2020 by the Association of American Medical Colleges