In Reply to Donato and Paladugu

Warm, Eric J. MD; Kinnear, Benjamin MD, MEd; Kelleher, Matthew MD, MEd; Sall, Dana MD, MEd; Holmboe, Eric MD

doi: 10.1097/ACM.0000000000002800
Letters to the Editor


Disclosures: None reported.

We thank Donato and Paladugu for their letter. They suggest that our assessment rating scales may have led to poor learner satisfaction and erroneous faculty judgment and ask us to abandon scales in favor of purely narrative data (e.g., “truly embrace the subjective”). Although the learner satisfaction and faculty judgments in our system have improved over time, we feel that any issues have less to do with rating type than with how we use the data.

We agree that assessment and feedback are complex, and interventions meant to help can have the opposite effect.1,2 However, we do not accept that values created by rating scales must be seen as summative and objective, and that narrative data must be seen as formative and subjective. Narrative assessments do not inherently remove risk, as they may contain sensitive or detailed feedback and be used in a summative manner—just like numbers. Numerical ratings are not inherently more objective than narrative comments, particularly in workplace-based assessments, as they represent a “code” based on a variety of inputs. The reality is that learners may perceive any type of assessment as subjective and high-risk when used improperly.

Numerical and narrative assessments represent a polarity, and rather than abandoning one for the other, we suggest maximizing the value of both.3 Training programs should develop support systems, such as longitudinal coaching, to help learners interpret and integrate all types of data. Coaches should personalize assessment data with a goal-directed approach, using feedback as the scaffold.1 In turn, coaches should be removed from making summative judgments, and this should be made explicit to learners.4 Data used for formative purposes should truly be low stakes, with no data point representing a threat to the learner. High-stakes decisions should be based on all available data (numerical and narrative) and should not be a surprise to learners or programs.4 Learners should coproduce these programs of assessment with faculty members. Finally, all forms of assessment should be supported by validity evidence. Do the data help learners become better over time?

Why do we need both words and numbers? Although it may be possible to judge improvement over time using narrative alone, it is difficult to do. Numbers tell the story quickly but imperfectly; narratives do so more slowly, but also imperfectly. Together, they tell a better story than either can alone. How we listen to and use both assessment methods matters most.

Eric J. Warm, MD

Professor of medicine and program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio; @CincyIM; warmej@ucmail.uc.edu; ORCID: https://orcid.org/0000-0002-6088-2434.

Benjamin Kinnear, MD, MEd

Assistant professor of medicine and pediatrics and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.

Matthew Kelleher, MD, MEd

Assistant professor of medicine and pediatrics and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.

Dana Sall, MD, MEd

Assistant professor of medicine and associate program director, Department of Internal Medicine, University of Cincinnati College of Medicine, Cincinnati, Ohio.

Eric Holmboe, MD

Senior vice president, Milestones Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois, adjunct professor of medicine, Yale University, New Haven, Connecticut, and adjunct professor, Feinberg School of Medicine at Northwestern University, Chicago, Illinois.


References

1. Shute VJ. Focus on formative feedback. Rev Educ Res. 2008;78:153–189.
2. Watling CJ, Ginsburg S. Assessment, feedback and the alchemy of learning. Med Educ. 2019;53:76–85.
3. Johnson B. Polarity Management: Identifying and Managing Unsolvable Problems. Amherst, MA: Human Resource Development; 1992.
4. Van Der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve tips for programmatic assessment. Med Teach. 2015;37:641–646.
© 2019 by the Association of American Medical Colleges