Although assessor training is essential for defensible assessments of physician performance, research on the effectiveness of training programs for promoting assessor consistency has produced mixed results. This study explored assessors’ perceptions of the influence of training and assessment tools on their conduct of workplace-based assessments of physicians.
In 2017, the authors used a constructivist grounded theory approach to interview 13 physician assessors about their perceptions of the effects of training and tool development on their conduct of assessments.
Participants reported that training led them to realize how variable assessor judgment can be, prompting them to change their scoring and feedback behaviors to enhance consistency. Nonetheless, many participants noted they had not substantially changed their numerical scoring. However, most thought training would lead to increased standardization and consistency among assessors, highlighting a “standardization paradox” in which participants perceived a programmatic shift toward standardization yet observed minimal changes in their own ratings. An “engagement effect” also emerged: participants involved in both tool development and training described more insights than participants involved in training alone.
Findings suggest that training may help assessors develop awareness of their own subjectivity when judging performance. This learning may prompt behaviors that support rigorous and consistent scoring but may not lead to perceptible changes in assessors’ numeric ratings. Results also suggest that participating in tool development may help assessors align their expectations of physicians with scoring criteria. Overall, results support the continued study of assessor training programs as a means of enhancing assessor consistency.
K. Hodwitz is research associate at the College of Physicians and Surgeons of Ontario, Toronto, Ontario, Canada.
A. Kuper is associate professor and faculty co-lead, Person-Centred Care Education, Department of Medicine, scientist and associate director, Wilson Centre for Research in Education, University Health Network, University of Toronto, and staff physician, Division of General Internal Medicine, Sunnybrook Health Sciences Centre, Toronto, Ontario, Canada.
R. Brydges is research director and scientist and holds the professorship in Technology Enabled Education at the Allan Waters Family Simulation Centre, St. Michael’s Hospital, and is assistant professor, Department of Medicine and Wilson Centre for Research in Education, University of Toronto, Toronto, Ontario, Canada.
Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A733.
Acknowledgments: The authors wish to acknowledge William Tays for his integral role in redeveloping the CPSO’s Peer Assessment Program and in conducting assessor training sessions.
Funding/Support: None reported.
Other disclosures: None reported.
Ethical approval: The University of Toronto’s research ethics review board granted ethical approval.
Previous presentations: This work was originally published, in a different form, as a master of science thesis at the University of Toronto, completed in April 2018. This work was also presented at the University of Toronto’s Department of Medicine Annual Day in June 2018.
Correspondence should be addressed to Kathryn Hodwitz, Research and Evaluation Department, College of Physicians and Surgeons of Ontario, 80 College Street, Toronto, Ontario, Canada, M5G 2E2; telephone: 416-967-2600 x522; email: email@example.com; firstname.lastname@example.org.