Letters to the Editor
In Reply to White:
We thank Dr. White for his thoughtful reflections on our commentary about clerkship grading. In many respects, we agree with his comments, particularly those that address our community’s obligation to help struggling students succeed—or to make the difficult decisions not to allow them to progress. Similarly, we agree that distinctions at the “A/B” level are likely less critical than at the “pass/fail” threshold, and that we educators need to continue to study how and whether medical school performance predicts future performance.
We also wish to comment on the distinction among processes in which faculty play a role: assessment (making the observation), evaluation (placing a value on the observations), and grading (a final administrative decision).1 As educational leaders, we ask our faculty to make the observations (assess) and to place them into context (evaluate), but it is our responsibility to render the administrative decisions. This distinction often gets muddled: faculty and students alike believe that faculty are “giving grades,” when, in fact, as our friend Louis Pangaro has said, faculty no more “give” a student a grade than they “give” a patient diabetes when making a clinical diagnosis. Educational leaders need to work to relieve individual faculty members of that perceived burden.
In addition, Dr. White comments on the difficulty with creating an evaluation tool for assigning a clerkship grade for use across medical schools, but we prefer his latter reframing of the issue as being part of an evaluation system. It is clear that there is no form or tool that will miraculously solve all issues with clerkship grading. As such, we believe that we should be focusing on creating a system of evaluation that values our teachers’ abilities to reach judgments and, more important, encourages them to describe their observations. Such a shift in focus away from individual tools to a system view fits the need for the central role of narrative in medical education.2
We do believe that using an easily understood framework such as RIME (reporter–interpreter–manager/educator) and talking with faculty about trainee performance constitute one way of achieving consistency and providing support.3 But providing greater transparency in grade decisions, valuing faculty, identifying struggling trainees, and understanding what predicts future performance mean that we must focus less on finding the “magic bullet” and more on process and program.
Paul A. Hemmer, MD, MPH
Professor and vice chairman for educational programs, Department of Medicine, F. Edward Hébert School of Medicine, Bethesda, Maryland; firstname.lastname@example.org.
Steven J. Durning, MD, PhD
Professor of medicine and pathology, F. Edward Hébert School of Medicine, Bethesda, Maryland.
1. Pangaro LN. A primer of evaluation: Definition and important distinctions in evaluation. In: Morgenstern BZ, ed. Guidebook for Clerkship Directors. 4th ed. North Syracuse, NY: Gegensatz Press; 2012.
2. van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ. 2005;39:309–317.
3. Durning SJ, Pangaro LN, Denton GD, et al. Intersite consistency as a measurement of programmatic evaluation in a medicine clerkship with multiple, geographically separated sites. Acad Med. 2003;78(10 suppl):S36–S38.