
After the Match

Halos and Horns

Cook, Thomas MD

doi: 10.1097/01.EEM.0000544444.75440.fc

    She was frustrated. My third-year resident wanted to know why her evaluations from faculty and peers were still below average. She knew she had been behind her classmates when she matriculated into the program, but she had worked hard to catch up and felt she should be at least average by now.

    I have one or two residents in each class who go through this, young adults who have spent their entire lives being the best at everything and who reach a point where their peers are perceived as better than they are. All interns enter residency apprehensive about measuring up to their classmates, and some struggle in the first few months of training. The first 100 days of internship unmask relative clinical deficiencies, and the attendings and upper-level residents create a narrative that someone is behind.

    In theory, this problem was supposed to be mitigated by the Accreditation Council for Graduate Medical Education's Emergency Medicine Milestone Project, a system designed by emergency medicine educators to assess how well each resident has acquired professional skills. Performance milestones established measurable benchmarks that did not rely on direct comparison with classmates. The ACGME required emergency medicine residencies to adopt this evaluation method in 2013. Programs could stop judging residents by comparing them with their classmates, and each resident's competency would instead be assessed against specific criteria for specific skills.

    When performing a focused history and physical examination, for example, an intern would not be able to “identify obscure, occult, or rare patient conditions based solely on historical and physical findings,” a skill typically seen in exceptional residents nearing graduation. Rather, an intern could only “perform and communicate a reliable history” and would therefore receive a lower score for this milestone than an upper-level resident. Each resident's progress is measured against what is needed by the end of training, not by how well he performed compared with his peers. There is, however, one problem with this system: Humans are using it.

    Regardless of how much we try, it is difficult for us to be objective. We are used to judging people when we meet them for the first time, and once we do, we are reluctant to change our minds. The cognitive biases at play here are known as the halo effect and the horns effect. Residents are not saints or demons, but that does not stop us from labeling them soon after they arrive.

    A ‘Book’ by its Cover

    The metrics most of us use to create our initial impression of each resident entering our program are, of course, appearance, manners, and interpersonal interaction. We judge people by their covers, and we keep doing it even though we say we don't want to. Not only that, we stick to our impressions. That's not to say that we think “weaker” residents cannot improve, but we all still subconsciously rank them a bit lower than their halo-adorned classmates.

    Show up to start your residency with an unconventional hairstyle, funny shoes, or a lot of tattoos, and you will likely be knocked down a few pegs in every evaluation metric relative to the neatly dressed resident with impeccable manners. That first impression may continue to pull down your evaluations going forward.

    Another complicating factor is that the attendings and peers submitting evaluations are overwhelmed with survey fatigue. They are incessantly bombarded with requests for information from the program director, the hospital, and numerous professional organizations on everything from off-service rotations to the comfort of call rooms. It is never-ending, and when asked for thoughtful feedback on a peer, they reach for what they know, and more often than not, that is the first impression they formed.

    Whether that impression was great or not so great, they score every component of the resident's performance going forward as better or worse than what they determine is the average for that resident's class. They are not objectively assessing the present but rather branding the resident with a subjective impression from the past and sticking to the pecking order they created in their heads.

    Is this fixable? Theoretically, yes, but realistically, probably not. Everyone is endlessly forced to create data, so most end up reaching for quick solutions to meet deadlines. Of course, this leaves the competent but perceptually “below average” residents in a quandary. How will they really know where they stand? Are they making progress toward being able to practice independently and competently?

    My suggestion is to look carefully at the comments in your evaluations and seek guidance from your program director. It is anecdotal if one person criticizes you for sloppy checkouts, but five or six comments mean you have a problem that needs fixing. It is typically pretty obvious when consensus is reached about something a resident needs to remedy, and I make a point of giving that resident examples of how past residents have overcome the deficiency. But I also tell the resident not to worry too much about evaluation scores. His scores may improve numerically over the course of training, but he may continue to be perceived as inferior to his classmates until his horns fade from memory.

