There are many reasons for faculty to avoid talking to each other about how medical students are performing. But sharing information is the right thing to do, because we will develop better evaluations, direct our teaching to a student's specific needs, and produce better doctors. We owe it to our students, to their future patients, and to the integrity of our educational system.
In this issue, Frellsen et al1 provide survey data about the concerns of and constraints on clerkship directors working with students who “struggle.” They identify the reasons for poor performance, which run the gamut from inadequate knowledge to unprofessional behavior. They summarize the challenges that clerkship directors face in identifying marginal and unsatisfactory performance, and the lack of robust remediation programs. They also provide a snapshot of the policies and practices at medical schools regarding the sharing of student performance information. In her point–counterpoint response, Dr. Cox2 articulates the reasons against sharing information about student performance. Although I understand her points, I believe we are obligated to share this information, for the following reasons:
- The acquisition of knowledge and clinical skills, and the behaviors associated with professionalism, are longitudinal and cumulative. A series of isolated assessments that begin anew every six or eight weeks often fails to identify inadequate or marginal performance on a timely basis, or at all.
- Early identification of areas of concern maximizes the time available to work on improvement. Otherwise, we waste time rediscovering what the last clerkship(s) identified.
- Individual faculty who rotate on an inpatient service for one or two weeks, or observe a student in the outpatient setting for three or four days, may understate concerns and avoid submitting descriptions that may come across as negative.
- Students with serious, repetitive patterns of marginal or inadequate performance sail under the radar when information about individual clerkship or course performance is not shared. The only thing a promotions committee sees is a transcript with passing grades.
- A series of marginal passes is reason for concern. Marginal passing performance may actually be inadequate performance because faculty, in general, give students the benefit of the doubt and fail to recognize the magnitude of a problem.
- Patients have no inside knowledge about the real performance of the doctor they see. They only know that the doctor has an MD and a license. It is our responsibility to confirm that medical school graduates clearly have the requisite knowledge and skills and are not simply recipients of the benefit of the doubt.
If we are to share information about student performance, it is also our obligation to minimize the risks to students and to communicate respectfully and professionally about the information. The following precautions will minimize the risks of betraying confidence, of introducing bias into evaluations, and, at the extreme, of incurring legal liability:
- A school should clearly and explicitly articulate a longitudinal, integrated, and shared assessment program, which is vetted by faculty and about which students are informed at the beginning of, and periodically throughout, their program of study. This will help address concerns about legal liability.
- A limited number of faculty should participate in an oversight and information-sharing committee. Logically, these would be course and clerkship directors who do not have sole responsibility for assigning grades. Instead, a systematic grading scheme supported by a formula and a faculty committee should be in place.
- Students should participate in a series of independent, varied, contextual, and valid assessments that contribute to their cumulative performance profile. This includes written or computer examinations, standardized patient assessments, clinical observations by faculty and residents (especially using validated instruments such as the mini-CEX), peer assessments, etc. Information derived from these assessments will improve the overall validity and reliability of performance measures and minimize bias.
- Qualitative evaluations of students should describe specific behaviors and issues, not generalized and vague judgments. It is not useful to say, “She should stick to research and stay away from patient care.” It is far more useful to say, “She has poor eye contact, frequently interrupts patients during the history, and has difficulty expressing empathy.” Rather than describe a student as “struggling,” specific behaviors should be described. This contributes to a tone of constructive feedback rather than negativism and labeling.
- A meaningful longitudinal and shared assessment system must be paired with useful and available additional instructional and learning opportunities. Research indicates that students perform better under the wing of skilled teachers.3,4
Our goal is to graduate students in whom we have confidence and whom we could envision one day seeing as our own physicians. To do this, we must talk to each other, and to them, about performance. We owe this to them, and to their patients.
Lynn Cleary, MD
Dr. Cleary is senior associate dean for education, SUNY Upstate Medical University, Syracuse, New York, and chair, Undergraduate Medical Education Section, Group on Educational Affairs; (email@example.com).
1 Frellsen SL, Baker EA, Papp KK, Durning SJ. Medical school policies regarding struggling medical students during the internal medicine clerkships: Results of a national survey. Acad Med. 2008;83:876–881.
2 Cox SM. Point–counterpoint: “Forward feeding” about students' progress: Information on struggling medical students should not be shared among clerkship directors or with students' current teachers. Acad Med. 2008;83:801.
3 Griffith CH 3rd, Wilson JF, Haist SA, Ramsbottom-Lucier M. Do students who work with better housestaff in their medicine clerkships learn more? Acad Med. 1998;73(10 suppl):S57–S59.
4 Roop SA, Pangaro L. Effect of clinical teaching on student performance during a medicine clerkship. Am J Med. 2001;110:205–209.