Few would dispute that providing and receiving feedback are important to the educational process. However, educators often hear medical students complain that they do not receive feedback, and these complaints are supported by research suggesting that students receive minimal feedback, further limited by a lack of detail.1 Yet clinical teaching experience reveals that, in fact, many teachers do try to give frequent and meaningful feedback. Why the discrepancy? Is it an artifact of the way feedback is usually measured?
There are several challenges inherent in measuring the feedback directed at medical students. For example, students may not recognize feedback, especially when it is embedded in teaching.2 They also may not recall feedback episodes, even ones they did recognize, by the time they complete end-of-clerkship or course evaluations. Because feedback is usually measured on summative evaluations, there is little opportunity to ask about specific feedback events and their details; one or two questions on a summative rating scale do not encourage students to describe the intricacies of their experiences.
We hypothesized that the mismatch between medical students' perceptions of feedback (as documented in the literature) and educators' teaching experiences was related to the way data about feedback are collected. Our goal was to find new mechanisms for measuring feedback, and the strategy we selected was to send short, daily e-mail questionnaires to students.
Students at our institution on their first or second primary care core clerkship were randomized to receive an e-mail questionnaire on a particular day of each week of their clerkship. They were instructed to respond via e-mail about the feedback they had received on the day the e-mail was sent. The seven-item questionnaire asked about sources of feedback, frequency of positive and corrective feedback from attendings and house officers, and satisfaction with the quality and specificity of the feedback received. The unit of analysis was a "student day," defined as a survey returned by an individual student for a particular day.

The response rate in the first week was 73%. Unfortunately, it dwindled to 34% by week 23, yielding a 48% response rate overall. When students did respond, they did so promptly: 88% of the questionnaires were returned within 48 hours, providing the daily focus we wanted. We found that feedback was received on nearly all (89%) student days. Respondents reported receiving feedback almost four times per day, and positive feedback was reported more frequently than corrective feedback (84% vs. 56% of student days). In addition, the majority of respondents were satisfied with the quality and specificity of the feedback received from attendings or house officers, whether the feedback was positive or corrective.
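The "student day" unit of analysis lends itself to a simple tabulation: each returned questionnaire becomes one record, and the outcome measures are proportions over those records. The sketch below illustrates this in Python; the record fields, function name, and sample values are hypothetical illustrations, not the study's actual data or analysis code.

```python
from dataclasses import dataclass

@dataclass
class StudentDay:
    """One returned questionnaire: the unit of analysis (hypothetical fields)."""
    student_id: str
    week: int
    any_feedback: bool        # any feedback received that day
    positive_feedback: bool   # at least one positive-feedback episode
    corrective_feedback: bool # at least one corrective-feedback episode

def summarize(days: list[StudentDay], surveys_sent: int) -> dict[str, float]:
    """Compute the response rate and per-student-day feedback proportions."""
    returned = len(days)
    return {
        "response_rate": returned / surveys_sent,
        "pct_any_feedback": sum(d.any_feedback for d in days) / returned,
        "pct_positive": sum(d.positive_feedback for d in days) / returned,
        "pct_corrective": sum(d.corrective_feedback for d in days) / returned,
    }
```

For example, four returned surveys out of eight sent would give a 50% response rate, with each feedback proportion computed over the four returned "student days" only, mirroring how denominators are defined in the report above.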
E-mail questionnaires may be a useful way to gather information from students regardless of clerkship site, provided students have e-mail access. Just as we are moving toward episode-based evaluation of medical students, daily evaluations of instructors by students may give us more detail about students' experiences during their clerkships, and e-mail questionnaires may be the most efficient mechanism for collecting such information.
1. Gil DH, Heins M, Jones PB. Perceptions of medical school faculty members and students on clinical clerkship feedback. J Med Educ. 1984;59:856–64.
2. Irby DM. What clinical teachers in medicine need to know. Acad Med. 1994;69:333–42.