RIME: Issues in Clinical Teaching

Engaged at the Extremes

Residents’ Perspectives on Clinical Teaching Assessment

Myers, Kathryn MD, EdM, FRCPC; Zibrowski, Elaine M. MSc, MSc; Lingard, Lorelei PhD

doi: 10.1097/ACM.0b013e3182674488

Medical trainees submit appraisals of their clinical teachers from the time they begin clerkship until they complete their tenure as resident physicians. Although academic centers use these assessments for a wide range of purposes,1,2 trainees’ involvement in clinical teaching assessment (CTA) remains an underexplored area of the rater-based assessment literature. To date, the literature has focused on developing multidimensional instruments to capture trainees’ impressions of their teachers.3–5 This “instrument-level” focus is limiting, however; meaningful assessment systems can evolve only through exploration of all of their complexities, including raters’ perspectives on their role as assessors.6,7

Little is known about how trainees conceptualize and approach their responsibility as assessors of their clinical teachers. Previous research with medical residents has highlighted the prevalence of straight-line response patterns on multi-item CTA instruments8 and the underuse of the opportunity to append supplementary written comments.9–11 When postgraduate trainees do submit written comments, they tend to offer short, general statements that affirm the supervisor’s personality attributes as a teacher or physician; they infrequently offer constructive suggestions for improvement.9 These observations raise the questions of how much knowledge, engagement, and motivation residents bring to CTA and what factors shape their approach. Acquiring this insight requires moving beyond analysis of secondary assessment data and into in-depth discussion with postgraduate trainees. The purpose of our study was to explore residents’ attitudes, beliefs, and knowledge regarding their involvement in the assessment of their clinical supervisors.

Method

At the beginning of the study, which ran from May through August 2010, we sent e-mails to a purposive cohort of 60 internal medicine residents from the Schulich School of Medicine and Dentistry, inviting trainees who had assessed the performance of at least one of their clinical supervisors to participate in a semistructured focus group interview. We arranged the focus groups to fit the volunteers’ schedules and made follow-up telephone calls to randomly selected pager numbers to fill the remaining group slots. We conducted five focus groups with a total of 19 residents (13 male and 6 female; 7 first-year, 3 second-year, 3 third-year, and 6 fourth-year).

We collected and analyzed data using a constructivist grounded theory approach12 and developed the interview guide using sensitizing concepts from our previous work.8,9 Trained researchers, who were unknown to the participants, facilitated the focus groups. We undertook a constant comparative process: after each focus group, we reviewed and discussed residents’ responses to illuminate ideas emerging within and across the interviews and to refine the probes for subsequent focus groups. In June 2011, we presented a summary of recurrent themes to an additional group of postgraduate trainees for their consideration. After that final focus group, we once again reviewed the entire data set in light of the refinements and elaborations that emerged there. We followed emergent ideas to theoretical saturation, which we achieved after five focus groups.

Results

Three dominant themes emerged: evaluation burden; utility and futility of assessment; and consequences of assessment. We elaborate these themes below, illustrating them with representative quotes.

Evaluation burden

In all focus groups, some residents expressed disdain for the CTA process as a time-consuming exercise. As one resident put it,

when I go to my box to fill out evaluations, I have about 15 or 20 built up. Because every single thing we do we have to evaluate. Even if I do this quickly it takes me a good half hour to an hour and you know we’re busy. It’s hard.

Even when unsure whether CTA was a formal requirement of their residency program, residents felt obligated to submit the assessments, because

even if they won’t say things like “mandatory” or whatever (on the CTA form), for you to get your feedback you have to fill those forms out.

Residents spoke about struggling throughout the academic year to meet their CTA obligations while maximizing their efficiency in doing so. Several residents described strategies they had used to expedite the CTA process, such as reducing the number of CTAs they completed or limiting the content within a given appraisal. As one resident rationalized,

I just thought to myself, umm, so, I can make this easier for myself or I can make this difficult. What happens is, when you rate the consultants, you click on a button and it says, “Please choose the consultants you worked with.” So, if I have seven consultants, I have seven forms to fill out. If I select only two that I really think highly of, I just have to fill out two forms. The system is not fool-proof. The system is not going to figure out how many I worked with, right? So, I just pulled out two consultants’ evaluations for an entire eight-week period.

Others described the shortcut of submitting “satisficing” responses for written comments and checklist ratings.

They’re not mandatory fields most of the time, but if they are you can just put “none” (in the areas for improvement box). There are little tricks. And to be honest, you can cheat on those boxes too. There’s so many of those things, for us to really take the time to read each one and rate it?

The comments are mandatory. You can’t submit until you have written something, so you say “none,” which is unfortunate.

Utility and futility of assessment

Several trainees felt the CTA was specifically useful for identifying faculty who were either very effective or very ineffective as teachers. As one resident explained,

Evaluations are good for extremes. The average is not enough (to get your attention); it doesn’t really matter.

In fact, most participants were less inclined to submit concrete written comments for supervisors and experiences they perceived as just average.

You only take the time to fill [comments] out if it’s really bad or really good. That’s how I prioritize it.

Unless the staff were exceptional or exceptionally bad, I wouldn’t write anything.

Several residents offered examples of written remarks they felt were suitable for very positively perceived supervisors, such as “Dr. So-and-So is fun to work with and makes learning fun.” As one resident put it,

Most times if someone’s outstanding, I’ll say that they made a lot of time to teach and they’re approachable, those sort of things, like good demeanor with patients.

However, residents were more cautious when commenting on supervisors at the other end of the spectrum. A few trainees hesitated to submit any negative written feedback because, as one resident put it,

I don’t want to say anything bad about someone.

Other participants, however, considered identifying these extremes to be their very role within the CTA process, as one asserted:

Our role is to identify the outliers, either positive or negative, and then [they] deal with them accordingly.

Although residents outlined the potential utility of CTAs for identifying the very best and worst teachers, most had very little knowledge of what happens to the information they submit in their assessments. Some speculated, whereas others could only wonder.

I think they use them for rewards because every now and then we would get e-mails to vote the consultants for teaching awards. So, I think they might use it for that.

Do they get a bonus or extra salary? We don’t know who reads them or anything.

Trainees expressed a desire for greater transparency in the CTA process, both at the departmental level and at the level of the individual faculty member. Residents wanted clarification on how the departmental administration handled and disseminated their assessments.

I never really found out. From what I understand, they compile them and put them into some sort of biennial report to the staff member if they were evaluated. That might be terribly askew. I have no idea.

In terms of individual supervisors, residents doubted whether clinical faculty viewed their feedback as a valid source of information.

I assume the only person that’s going to read it is the preceptor, if he or she even does it.

Several participants described a sense of futility about their role in the CTA process based on experiences in which they or other residents took the time to submit detailed CTAs but, subsequently, saw little improvement in a clinical teacher’s performance. As one resident explained,

If there’s a consultant or someone who has consistently had negative feedback, but year after year nothing is changed, you wonder what is the point of writing something, because nothing is changing. They’re continuing to be promoted, so obviously nobody is paying attention to the comments of the students. You might not want to waste your time.

Consequences of assessment

Residents in all focus groups raised concerns regarding the degree of anonymity afforded to them by the CTA process. Although they knew that their names did not appear on the forms generated by the Web-based system, they still worried that their identity could be revealed.

People have reservations when they provide feedback because [the physicians] know which rotations they taught and who was on them.

[They’re] not too confidential. Someone has to see it.

Residents perceived a high risk of inadvertently revealing their identity through written comments, especially those suggesting areas for a supervisor’s improvement. As a result of this fear of exposure, some trainees had diluted the content of their comments or decided against submitting them altogether.

I don’t remember the specific incident, but it was probably earlier on in my residency where I wanted to write a comment about a consultant. I was writing specifics and then I realized— well, what if I get identified? So I started changing my statements to make it look like I wasn’t complaining.

If there are a few bad things that I picked up, I wouldn’t usually comment on them. It’s like my overall picture is positive. I’ll tend to focus in on those. I don’t know why I do that. Maybe it’s just because I’m afraid they’ll know it was me.

Discussion and Conclusions

To our knowledge, this study is the first to explore residents’ perspectives regarding their involvement in the assessment of clinical faculty. Although academic centers have conceptualized CTA as providing information for important formative and summative functions, such as faculty development, reappointment, and promotion and tenure, the postgraduate trainees in our study did not share this perspective. We conclude that their motivation for submitting detailed CTAs was low except in cases where they perceived their supervisors as outliers. This focus on “extreme” performers, together with the residents’ failure to recognize the potential utility of feedback for “average” faculty, suggests that they conceptualize CTA as a surveillance tool rather than as an aid to guide faculty development.

Chen13 has theorized that effective evaluation depends critically on the working relationships between evaluators and stakeholders, relationships that include ongoing communication of the evaluation’s purpose, transparency regarding eventual uses for the data collected, and an understanding of other stakeholders’ perspectives. He cautions that the less aware stakeholders are of an evaluation’s purposes and strategies, the more skeptical they may become about its utility. Although, from an institutional standpoint, residents can be seen as serving dual roles in CTA, as both stakeholders and evaluators, the residents in our study did not reflect this understanding. They did not openly question why their program requires their participation in CTA, but they viewed it as little more than a time-consuming task. Several trainees described shortcuts or tricks they used to minimize the time devoted to completing CTAs while still satisfying the requirements, including response patterns we have previously observed, such as monotonic ratings on scaled items and perfunctory or inaccurate textual responses (for instance, “none” under a prompt for areas of improvement). These findings raise the question of how the system of CTA delivery affects its users. Web-based assessment systems may be cost-effective and efficient, but their unintended consequences, such as residents’ shortcut strategies and perceptions of increased burden, warrant further attention.

Residents admitted that they were unaware of how their department handles the data they submit, and they expressed concerns regarding anonymity. Further, and perhaps most important, they doubted whether their supervisors genuinely valued their input. Some residents supported these doubts by recalling instances, experienced themselves or heard from other residents, in which detailed input did little to change the behavior of ineffective supervisors.

We have previously reported that residents’ written comments tend to lack the features of high-quality feedback.9 The reluctance of some trainees in this study to submit “negative” comments (except for outliers) raises the question of how well they understand the goals of CTA or the distinction between constructive and destructive feedback. Uncertainty about how the administration uses their comments, and fear of the potential consequences of negative ones, may also contribute to their reluctance to submit constructive feedback.

According to Watling and colleagues,14 residents’ satisfaction with their own in-training evaluation reports (ITERs) is strongly tied to their perceptions of how much time and effort faculty members invested in those reports. Interestingly, that expectation of engagement in the process seems not to translate into residents’ own attitudes toward CTA. The low proportion of CTAs containing even a single resident comment (fewer than 50% in two recent studies)9–11 contrasts starkly with the nearly uniform presence of faculty comments on ITERs.15 The residents’ tendency to append detailed comments only for the best or worst supervisors would be considered inadequate in the reverse situation; faculty are specifically tasked with providing constructive feedback to residents at all levels of ability. Although it has not been demonstrated, the factors that affect faculty satisfaction with their CTAs may be similar to those residents describe regarding their ITERs. Thus, just as residents may not value feedback from preceptors they perceive as underengaged, faculty may not view their appraisals as credible or useful in guiding their development as teachers if they sense that residents are not committed to the process.

The remaining question, and challenge, is what programs can do to improve the quality of their CTAs. The results of our study suggest that, at a minimum, residents must be educated about how CTAs are used, including how the data are stored, summarized, and presented to individual faculty and to higher administration. Programs also need to be aware that trainees see CTAs as a tool for surveillance of the extremes of teaching performance. To ensure that teachers across all performance levels receive feedback, programs must alter that perception by building residents’ trust in the process and assuring them that their assessments are anonymous, valued, and acted on by clinical teachers and administrators. Once trainees have a better sense of the institutional purpose of CTA, efforts to enhance their capacity for generating constructive feedback may yield better results.

We conducted this study within a single academic center. Because residents’ appraisals are a pervasive means of assessing the performance of clinical faculty, our results will likely resonate in other contexts; nevertheless, further research at other medical schools is necessary to explore the transferability of our results and could enhance and refine our understanding of residents’ engagement with CTAs at the extremes of performance. Moreover, we focused on trainees’ perceptions; little is known about how clinical teachers view the assessments they receive. Future exploration of faculty perspectives regarding their appraisals is warranted.

Acknowledgments: The authors are grateful to Michelle Pajot, Holly Ellinor, and Meredith Vanstone for their assistance during this study. They also thank Dr. Glenn Regehr and Dr. Paul Hemmer for their thoughtful comments on an earlier draft of the manuscript.

Funding/Support: This study was supported by Faculty Support for Research in Education and internal Social Science and Humanities Research Council grants, respectively awarded by the Schulich School of Medicine and Dentistry and the University of Western Ontario.

Other disclosures: None.

Ethical approval: Ethical approval (delegated review) for the study was obtained from the institution’s health sciences research ethics board.

Previous presentations: An earlier version of this article was presented in abstract form at the Annual Research Conference of the Centre for Education Research and Innovation, London, Ontario, Canada, October 2011.

References

1. Jones RF, Froom JD. Faculty and administration views of problems in faculty evaluation. Acad Med. 1994;69:476–483
2. Beasley BW, Wright SM, Cofrancesco J Jr, Babbott SF, Thomas PA, Bass EB. Promotion criteria for clinician–educators in the United States and Canada. A survey of promotion committee chairpersons. JAMA. 1997;278:723–728
3. Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? A review of the published instruments. J Gen Intern Med. 2004;19:971–977
4. Beckman TJ, Cook DA, Mandrekar JN. Factor instability of clinical teaching assessment scores among general internists and cardiologists. Med Educ. 2006;40:1209–1216
5. de Oliveira Filho GR, Dal Mago AJ, Garcia JH, Goldschmidt R. An instrument designed for faculty supervision evaluation by anesthesia residents and its psychometric properties. Anesth Analg. 2008;107:1316–1322
6. Watling CJ, Lingard L. Toward meaningful evaluation of medical trainees: The influence of participants’ perceptions of the process. Adv Health Sci Educ Theory Pract. 2012;17:183–194
7. Schuwirth LW, van der Vleuten CP. Programmatic assessment and Kane’s validity perspective. Med Educ. 2012;46:38–48
8. Zibrowski EM, Myers K, Norman G, Goldszmidt MA. Relying on others’ reliability: Challenges in clinical teaching assessment. Teach Learn Med. 2011;23:21–27
9. Myers KA, Zibrowski EM, Lingard L. A mixed-methods analysis of residents’ written comments regarding their clinical supervisors. Acad Med. 2011;86(10 suppl):S21–S24
10. Todhunter S, Cruess SR, Cruess RL, Young M, Steinert Y. Developing and piloting a form for student assessment of faculty professionalism. Adv Health Sci Educ Theory Pract. 2011;16:223–238
11. Young M, Cruess S, Cruess R, Steinert Y. The Multi-Dimensional Assessment of Clinical Teachers (MD-ACT): The reliability and validity of a new tool to assess the professionalism of clinical teachers. Presented at: AAMC-RIME Meeting; 2011; Denver, Colo
12. Charmaz K. Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. Thousand Oaks, Calif: Sage Publications; 2006
13. Chen HT. Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness. Thousand Oaks, Calif: Sage Publications; 2005
14. Watling CJ, Kenyon CF, Zibrowski EM, et al. Rules of engagement: Residents’ perceptions of the in-training evaluation process. Acad Med. 2008;83(10 suppl):S97–S100
15. Ginsburg S, Gold W, Cavalcanti RB, Kurabi B, McDonald-Blumer H. Competencies “plus”: The nature of written comments on internal medicine residents’ evaluation forms. Acad Med. 2011;86(10 suppl):S30–S34
© 2012 Association of American Medical Colleges