“So, I’m on call in the unit, Mike, and a 52-year-old crack smoker rolls into the ER with chest pain and ST elevations.”
“I text the tracing to the fellow, and, even though the elevations are lateral, he tells me to scan the guy. Make sure he isn’t dissecting.”
“So did you do it?”
“No. His enzymes were off the chart, and, like I said, the changes were lateral, not posterior. He definitely was not dissecting. I call the cath lab, but they say crack MIs just need nitro.”
“Yeah … so, he’s infarcting. And he gets a cath the next morning. Has a 90% circ lesion. Totally raw. Gets stented and sent back to the unit.”
“So, you were right.”
“I was right, Mike, but I got dressed down. ‘Milestones’ chucked at me like the pretty lady in a knife-throwing act. The fellow says I should have scanned the guy. That I can’t ‘work in a professional team.’ I defend my treatment plan, and the rounder says I’m not ‘receptive to feedback.’ I even got hauled into the program office and accused of ‘premature closure.’ Not ‘learning at the point of care.’”
“But you were right.”
“Yeah, I was right. But, Mike, I am sick of being evaluated. Why can’t my results just speak for themselves?”
Months went by, and I couldn’t scrub this conversation from my mind. Why had this resident been judged so harshly when every clinical decision she made had been correct? Her appeal for results-oriented evaluation had felt like a demand for justice, and it deserved consideration.
New evaluation models require us to focus on residents’ observable behaviors (“milestones”) and to assume that, if those behaviors are demonstrated, our trainees are progressing properly. In effect, we are watching for a finite number of acts in lieu of overall clinical and professional development, and, in this way, we are attending more to the mechanics than to the poetics of doctoring. I find this shift worrisome.
First, in emphasizing specific behaviors, we may too easily fixate on trivialities and find ourselves criticizing residents’ practice styles instead of praising their efforts to care for our ill patients. “Premature closure” is a real concern in medical student and resident training and can result in dangerous care, yet this young woman made rational and resource-saving management decisions for which she was criticized using the language of the milestones. To me, this seems wrong.
Second, many of the milestones are unrelated to patient outcomes, yet I think that trainees’ evaluations ought to reflect how their patients actually do. All physicians ought to be held to certain core behavioral standards, which we should model for and demand from our students and residents, but assessing a trainee’s ability to direct a ward team, to manage chronic disease in the outpatient setting, and to treat people with sensitivity and humanity may be more important than gauging her willingness to “slow down to reconsider an approach” or to “regularly reflect on her own practice.” Evaluations rooted in 360-degree feedback, in objective measures of diagnostic abilities, and in individualized outcomes data may prove more meaningful than the current milestones-based tools.
Working with residents is the most educational aspect of my academic practice, and I’m grateful to this particular one for reminding me that evaluators must be humble and cautious if we are to stimulate rather than crush our trainees’ best instincts. She also motivated me to offer cleaner and more insightful evaluations of my own learners—to focus on capturing substantive observations about residents and to deemphasize the ornamental.
Michael Stillman, MD
M. Stillman is associate professor of internal medicine and neurosurgery, University of Louisville School of Medicine, Louisville, Kentucky; e-mail: [email protected]