RIME Oral Abstracts
The shift toward competency-based medical education in postgraduate medical training has transformed how competence is taught and measured. Competency-based medical education rests on the premise that competence can be assessed through frequent, meaningful assessments, many of which combine quantitative and qualitative evaluations. To understand the associations between the quantitative and qualitative evaluations used in workplace-based assessments, this study explored the relationships among assessors’ checklist scores, ratings, and written comments.
Data from the McMaster Modular Assessment Program (McMAP)1 were collected and analyzed using an explanatory mixed-methods design. McMAP was designed to assess the CanMEDS roles of postgraduate trainees in emergency medicine using checklists, rating scales, and written comments for both task-specific and global appraisals of competence. The workplace-based assessment checklist and rating scale scores were analyzed using regression analyses. Narrative comments corresponding to these numeric scores were rated by a content expert using a modified version of the Completed Clinical Evaluation Report Rating2 and entered as predictor variables in the regression analyses. The written comments in the workplace-based assessments were also analyzed independently by two of the authors using content analysis.
Communicator and collaborator workplace-based assessments from 342 McMAP evaluations of postgraduate year (PGY) 1 and PGY2 residents were analyzed using logistic regression and content analysis. The two regression models indicated that the task-specific ratings provided by faculty assessors were significant predictors of whether the “done, but needs attention” checklist category was used. In turn, the “done, but needs attention” checklist category was the most significant predictor of whether a written comment mentioning specific strengths and weaknesses would appear in the McMAP assessment. Subsequent analysis of the qualitative comments suggested meaningful differences in the type of written feedback provided in workplace-based assessments. Our analysis supports the notion of a hidden code3 used by assessors to communicate levels of competence.
This study highlights some of the relationships among checklists, rating scales, and written comments. As more institutions transition toward competency-based medical education, understanding how different forms of assessment relate to one another is essential for developing and implementing comprehensive assessment programs. Findings from this study suggest that task and global ratings are differentially related to checklists, which has broader implications for the design of assessment tools. Furthermore, the presence of a hidden code creates challenges when interpreting information obtained from workplace-based assessments.
1. Chan T, Sherbino J; McMAP Collaborators. The McMaster Modular Assessment Program (McMAP): A theoretically grounded work-based assessment system for an emergency medicine residency program. Acad Med. 2015;90:900–905.
2. Dudek NL, Marks MB, Wood TJ, Lee AC. Assessing the quality of supervisors’ completed clinical evaluation reports. Med Educ. 2008;42:816–822.
3. Ginsburg S, Regehr G, Lingard L, Eva KW. Reading between the lines: Faculty interpretations of narrative evaluation comments. Med Educ. 2015;49:296–306.