Academic Medicine: March 2013 - Volume 88 - Issue 3
doi: 10.1097/ACM.0b013e318280c933
Letters to the Editor

A Standardized Approach to Grading Clerkships: Hard to Achieve and Not Worth It Anyway

White, Christopher B. MD




To the Editor:

In their commentary on the study by Alexander et al,1 Durning and Hemmer2 have clearly articulated the difficulties involved in achieving a national consensus on the evaluation of medical students on their clinical clerkships. I have a few additional concerns.

First, creating a valid and reliable evaluation tool that all medical schools could use to assign clerkship grades may not even be achievable. Most medical schools must rely on large numbers of full-time and volunteer faculty to train and evaluate their students during clerkships. Newman et al3 recently reported their efforts to create a valid and reliable evaluation system for a much simpler task: the peer assessment of lectures. That 11-item evaluation tool, validated for use by seven medical education experts, took two years to develop. Developing a precise and reliable clinical grading system for medical students that hundreds of preceptors could apply across the myriad training venues at even a single institution is unlikely to succeed.

Second, even if a national effort to standardize clerkship grades were successful, would it be worth the tremendous cost in time and resources? During my four years at West Point, I was graded almost daily in academics, physical fitness, and military aptitude and leadership, and even underwent twice-a-year peer evaluation. Despite such efforts, the four 4-star generals from my class did not all come from the top 20% of the graduates. What evidence is there that “A” medical students become “A” doctors? Can we even define an “A” doctor? More important, do “B” students become less competent physicians than “A” students? What about the impact of residency and fellowship training on physician development? If the vast majority of “A” and “B” students become competent physicians, is it worth making a national effort to precisely differentiate between them?

Instead, I believe we should focus our efforts on better identifying and remediating the medical students who are at the highest risk for having problems after graduation (the bottom 5% of students). Doing this would not necessarily help our colleagues in graduate medical education choose the top-tier candidates for residency training, but it would help our profession and the patients for whom these graduates will ultimately care.

Christopher B. White, MD

Professor, Department of Pediatrics, Medical College of Georgia, Georgia Health Sciences University, Augusta, Georgia; cwhite@georgiahealth.edu.


References

1. Alexander EK, Osman NY, Walling JL, Mitchell VG. Variation and imprecision of clerkship grading in U.S. medical schools. Acad Med. 2012;87:1070–1076.

2. Durning SJ, Hemmer PA. Commentary: Grading: What is it good for? Acad Med. 2012;87:1002–1004.

3. Newman LR, Brodsky DD, Roberts DH, et al. Developing expert-derived rating standards for the peer assessment of lectures. Acad Med. 2012;87:356–363.

© 2013 Association of American Medical Colleges
