Feedback is widely recognized as an essential element of clinical education, yet it is an area where we as medical educators continue to fall short. In this issue of Academic Medicine, Telio and colleagues1 offer an intriguing conceptual framework, termed the “educational alliance,” that prompts us to reconsider the goals of providing feedback and the educator’s role in optimizing the feedback process.
Evaluation and feedback have come under greater scrutiny in recent decades as physician education has evolved from a loosely planned clinical immersion to a curriculum-based experience linked to achievement of specific competencies. In this context, performance evaluation and feedback are essential.
Evaluation has multiple purposes, including documenting adequate performance and progression, informing decisions about the appropriate degree of clinical supervision, and affirming the achievement of competencies required for medical school graduation or graduate medical education (GME) program completion and independent practice. Feedback is often viewed as having the singular goal of improving the recipient’s performance. However, it is also an important tool for cultivating the learner’s ability to self-evaluate and later self-regulate in clinical practice. Feedback provides sequential opportunities for learners to check self-assessment against external assessment, which is particularly important in light of literature showing that high performers tend to underestimate their own abilities, whereas low performers overestimate theirs.2 Further, the affirmation of skills and competence obtained through positive feedback helps clinicians-in-training build the level of confidence essential for independent practice.
Institutional policies and national accreditation standards increasingly include explicit requirements for feedback, reflecting strong consensus about its importance.3 The value of both immediate, informal feedback and more formal, thematic feedback is now acknowledged. Clinical teachers in all specialties and all settings are expected to provide in-the-moment formative comments on individual learners’ performance, informed by direct observation. Immediate feedback is also a cornerstone of medical simulation, which is becoming an important tool in clinical education. More formal feedback—such as that occurring at the end of a clerkship or rotation—is also critical in that it reflects and prioritizes multiple observations. Finally, the formal, thematic feedback that a dean of students or a GME program director provides, synthesizing longitudinal information received from multiple supervisors (as well as peers, patients, and other health care professionals), can be particularly influential. This feedback reflects an experienced medical educator’s filtering and prioritizing of a breadth of experience with the learner and usually emphasizes the content of formal evaluations, thus bridging the formative and summative dimensions of evaluation and feedback.
The degree to which feedback has a positive impact on performance presumably depends on the learner receiving specific, actionable feedback; understanding and accepting that feedback; being motivated to change; and taking action. Although outcomes of feedback are difficult to measure, satisfaction with feedback is not. Unfortunately, data from national resident and medical student surveys show gaps in satisfaction, suggesting a weak link in the medical education process. Results from the Accreditation Council for Graduate Medical Education’s 2013 Resident Survey indicate that “[satisfaction] with feedback after assignments” had the third-lowest rating among the 41 items used to assess GME programs.4 Similarly, the 2012 Association of American Medical Colleges Medical School Graduation Questionnaire reveals that for some core clerkships, as many as one-third of responding students did not agree that “faculty provided sufficient feedback on [their] performance.”5
Failure of evaluation and feedback becomes most evident when a student or resident is performing poorly and does not recognize his or her deficiencies—a high-stakes circumstance both for the learner and for patients. Although such cases are pressing (and often poignant), they fortunately represent a small minority of students and residents. The more pervasive problem is that insufficient feedback hampers efforts to help each learner reach his or her greatest potential.
Several potential explanations for feedback failure deserve consideration: Faculty lack the necessary skills; time is not allocated for the activity; supervisors do not have sufficient contact with learners or opportunities to directly observe them; and learners are not well “primed” to receive or accept feedback. In the absence of data addressing the relative importance of these factors, efforts to improve feedback have generally targeted content and delivery—what to say and how to say it.
In this context, Telio and colleagues1 propose the “educational alliance” as a framework for considering feedback to clinical learners. The general concept of relationship-based feedback is not new. However, the analogy these authors build between educational feedback and the empirically validated “therapeutic alliance” highlights a potentially important avenue for enhancing feedback, and one that is likely to resonate among clinicians. A key aspect of this formulation is that who gives the feedback may deserve at least as much attention as the what or the how of its delivery.
Supported by their review of the literature, Telio and colleagues1 assert the importance of the contextual and relational aspects of feedback. They note that “source credibility”—that is, the credibility of the person providing feedback—influences feedback acceptance and effectiveness. This credibility depends on the nature and quality of the teacher–learner relationship and alignment of values; the teacher’s understanding of the learner’s role and goals; the teacher’s direct observation of the learner; and the learner’s perception of the teacher’s good intentions (termed “beneficence”). Several of these characteristics mirror features of the therapeutic alliance, which has been shown in psychiatry to affect the outcomes of therapy. Thus, Telio and colleagues’ description of the educational alliance as a new framework for conceptualizing feedback in medical education seems intuitively sound.
Most medical education programs, however, are not designed in ways that support the educational alliance model. Unlike pairings of patients and therapists, students and residents usually have little or no choice in the supervisors with whom they work. Also, the physicians who serve as faculty supervisors are often selected for that role on the basis of their patient mix and clinical expertise, regardless of their interest in education or their ability to assess learner performance and provide feedback. In addition, the logistics of medical student rotations and GME programs may undermine key elements of the educational alliance. For example, scheduling constraints often limit continuity between learners and supervisors, making it harder to build relationships. Time pressures and the care delivery process in some specialties interfere with clinical teachers’ ability to directly observe learners, denying them an important ingredient for “source credibility.” In these and other ways, the busy clinical setting is not naturally conducive to establishing educational alliances.
To the extent that important feedback comes from educational leaders outside the clinical setting, deans of students and GME program directors are more likely than other faculty to be skilled in delivering it. Yet these individuals may have little direct interaction with students and residents—especially in large schools or programs—and thus may have only superficial relationships with learners and only second- or thirdhand information.
Telio and colleagues’1 suggestion that the educational alliance model might lead to improvement in learner outcomes deserves to be studied. However, even without empirical evidence, we should consider acting on this intuitively satisfying theoretical framework. A few basic interventions may help teachers and learners cultivate educational alliances in clinical settings:
- The selection of clinical supervisors should reflect the faculty members’ interpersonal skills and interest in teaching, especially as perceived by prior trainees. Ideally, meaningful incentives and rewards should be put into place so that clinicians who would be effective educators will want to teach.
- Each trainee should be exposed to multiple faculty supervisors to increase the chance that a strongly positive relationship will emerge. (In very small programs, this could be accomplished through rotations to affiliate institutions.) At the same time, learners may benefit from having some sequential or longitudinal experiences with faculty supervisors in order to build on existing relationships.
- Teachers and learners should be encouraged to explicitly discuss learners’ educational needs and goals for each activity or experience.
- Opportunities for direct observation should be maximized to enhance the accuracy and credibility of assessments and feedback. This occurs naturally in operative settings, but can be difficult to accomplish in a clinic or hospital floor environment.
- Each student, resident, or fellow might be paired with a faculty mentor or coach—ideally of his or her own choosing—to translate evaluations, reinforce feedback, assess understanding, and provide advice about action plans.
These measures can occur in parallel with other ongoing efforts to strengthen feedback, such as continuing faculty development focused on how to assess learners and how to deliver feedback; coaching students and residents on how to seek, receive, and incorporate feedback; and ensuring that feedback conversations actually occur (e.g., by scheduling conference slots for this purpose or asking the pair to sign a copy of the supervisor’s evaluation after discussing it).
Although evaluation and feedback have always been fundamental components of medical education, they have become a central focus in this era of competencies, milestones, and entrustable professional activities. Additional work is needed to optimize these processes, including research that addresses the following questions:
- How do students and residents assimilate objective versus subjective information about their performance?
- What are the distinct roles and advantages of immediate, specific feedback versus retrospective, thematic feedback in performance improvement and learner satisfaction?
- As multisource evaluation is being implemented, should evaluation always be paired with direct feedback, including feedback from peers and/or patients?
- How do the processes of assessment, evaluation, feedback, coaching, and mentoring interact, and how are they best aligned?
In some ways, providing effective feedback in clinical education seems to be an intractable problem: the proverbial Gordian knot. Untangling it will likely require innovative approaches, exchange of successful strategies, and continued research. Telio and colleagues’1 contribution is thus a welcome prompt to rethink a very old challenge.
1. Telio S, Ajjawi R, Regehr G. The “educational alliance” as a framework for reconceptualizing feedback in medical education. Acad Med. 2015;90:609–614.
2. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA. 2006;296:1094–1102.
3. Accreditation Council for Graduate Medical Education. ACGME Common Program Requirements. Effective July 1, 2013. https://www.acgme.org/acgmeweb/Portals/0/PFAssets/ProgramRequirements/CPRs2013.pdf. Accessed September 16, 2014.
4. Accreditation Council for Graduate Medical Education. ACGME Resident Survey 2013. https://www.acgme.org/ads/File/DownloadSurveyReport/60738 [requires ACGME logon]. Accessed July 4, 2014.
5. Association of American Medical Colleges. Medical School Graduation Questionnaire: 2012 All Schools Summary Report. https://www.aamc.org/download/300448/data/2012gqallschoolssummaryreport.pdf. Accessed September 16, 2014.