Feedback for learners in medical education has long been a topic of importance for medical educators.1,2 Despite the publication of guidelines on how to give feedback,3,4 medical students and residents have continued to claim they do not receive adequate feedback.5–7 Feedback has been described as an “intractable problem” in medical education.8 Reasons have been given for the challenges with feedback, including inaccurate self-assessment and discounted or rejected feedback,9 and definitions of feedback have been explored.10 In 2015–2016, we conducted a scoping review, published in this journal,11 as a means to synthesize and integrate the evidence of what is known about feedback in medical education.12–14 Our scoping review provided a map of the literature on feedback for learners and described the lack of high-quality, evidence-based recommendations for delivering feedback.
The evidence synthesis of this integrative review is an extension of our scoping review work.11 We selected a specific subset of articles to explore the following research question: What does the content of the feedback to learners tell us about feedback communication between teachers and learners? We believe that an in-depth analysis of feedback content may help determine why feedback remains a challenge in medical education. Exploring the exchange of information between teachers and learners may provide clues as to what the barriers to feedback are and how they might be mitigated.
We have chosen the tango as an apt metaphor of the feedback exchange. The tango is defined as a complex “ballroom dance of Latin American origin,” as well as an “interaction marked by lack of straightforwardness.”15 To the casual observer, the tango appears as two individuals actively engaged with dance steps that are both predictable and unexpected. Tango Lingua16 notes that “at the heart of [Argentine] tango is the desire to listen to, understand and converse with the person you’re dancing with.” As a way to illuminate feedback conceptually, we use this dance form to suggest a dynamic partnership between two individuals (in contrast to a framework of unidirectional transmission of information). The aim of our review was to synthesize and analyze the evidence of the content of feedback to learners and to describe our findings within the context of this distinctive metaphor.
The initial scoping review was conducted during December 2015–June 2016 by a research team of coinvestigators (all of the authors) with in-depth knowledge of medical education, experience with systematic reviews, and medical library expertise. The research question of the scoping review was, “What has been broadly published in the literature about feedback to help learners in medical education?”11
Following the suggested strategies regarding literature reviews as outlined by Gough et al,17 we subsequently undertook a narrower review of a subset of articles guided by a specific research question. In September 2016, we selected an integrative review approach because it allows for the inclusion of articles using various methodologies.18,19 As a specific method to synthesize knowledge, the integrative review permits investigators to identify patterns or themes from a set of diverse evidence sources that may include both quantitative and qualitative data.18 We discovered in our scoping review11 that there were few articles with higher-quality levels of evidence (e.g., randomized trials) regarding feedback, so we concluded that a systematic review to quantify hierarchies of evidence would be neither feasible nor appropriate.20 This integrative review follows the recommended publication standards for health care education evidence synthesis (i.e., STORIES criteria20).
Data sources and search strategy
The search strategy for our scoping review has been described previously.11 Briefly, we searched six databases (Ovid MEDLINE, CINAHL, ERIC, ProQuest Dissertations and Theses Global, Scopus, and Web of Science) for English-language references published 1980 through 2015, using terms considered by us to describe feedback processes and practices for learners at multiple levels in medical education: feedback; feedback, psychological; medical students; assessment; self-assessment; internship and residency; resident; fellows; medical education; faculty; faculty, medical; and reflection. We also searched seven medical education journals (Academic Medicine, Advances in Health Sciences Education, BMC Medical Education, Journal of Continuing Education in the Health Professions, Medical Education, Medical Teacher, and Teaching and Learning in Medicine). In addition, we manually searched the reference lists of 11 key articles on feedback. After excluding duplicates, this search process resulted in 7,263 articles. In our two-stage screening process, articles were excluded for the following reasons: no mention of feedback or medical education, clinical studies with patients or basic science research, no focus on medical education learners, and descriptions of learner feedback about an educational activity. This resulted in 836 articles selected for full-text review. After another 186 full-text articles were excluded (i.e., not relevant, duplicates, unobtainable), 650 references were included in the scoping review.
During the scoping review’s data extraction process, 52 of the 650 articles were flagged because they elaborated on the substance and setting of the feedback given to learners. This subset of articles describing the content analysis of various feedback encounters forms the basis of this integrative review. The authors of these articles were not contacted for more clarifying information. The context of the content analysis of the feedback exchange in these 52 articles varied: audiotapes (n = 4; 8%), clinical examination (CEX) (n = 8; 15%), feedback cards (n = 6; 12%), multisource feedback (MSF) (n = 9; 17%), videotapes (n = 7; 14%), and written feedback (n = 18; 35%).
Data extraction and analysis
We developed a data extraction form for this review, which included the following items: author, year of publication, funding sources, conflict of interest, study setting and location, aim and type of study, inclusion/exclusion criteria for learners, sample size, use of a feedback tool, source of feedback, type of learners receiving feedback, context, and content analysis results. Two of us (R.B., K.V.) independently extracted the relevant data using a spreadsheet we designed in Microsoft Excel 2010 (Microsoft, Redmond, Washington) for subsequent ease of comparison. During the data extraction process, these two authors used thematic analysis to identify themes. The two coinvestigators met after extracting data from the first six articles to review the content analysis data (i.e., coding); they also defined and agreed on themes (i.e., patterns within the data) emerging from the process. Minimal disagreements in the extracted data were found and resolved by consensus. Frequency counts were determined as to whether a specific theme appeared in the articles included in this review. The data extraction phase was completed on November 21, 2016. All of the authors reviewed and agreed on the themes arising from the data analysis.
Characteristics of included articles
During the data extraction phase, one abstract describing written feedback was found to contain the same data as a later full-length article by the same author group and therefore was excluded. The 51 articles included in this integrative review21–71 are summarized in Supplemental Digital Appendix 1 (http://links.lww.com/ACADMED/A484). Almost half of the articles were published since 2011 (n = 25; 49%). Twenty-three articles (45%) were published from 2000 through 2010, and 3 articles (6%) were published from 1994 through 1996. Most of the articles involved medical students (n = 22; 43%) or residents (n = 22; 43%). Four articles (8%) involved practicing physicians as the learners receiving feedback. A small number of articles included nursing (n = 2; 4%), veterinary (n = 1; 2%), speech pathology (n = 1; 2%), and physiotherapy (n = 1; 2%) students. The majority of articles (n = 27; 53%) identified the specialty background of the learners, which included internal medicine (n = 10/27; 37%), family medicine (n = 5/27; 19%), and surgery (n = 4/27; 15%).
We identified several common themes during our review of the 51 articles: leniency bias, low-quality feedback, faculty deficient in feedback competencies, challenges with multiple feedback tools, and gender impacts (Table 1).
Leniency bias.
A leniency bias was noted in many articles (n = 19; 37%); that is, feedback providers were often reluctant to give learners any constructive (i.e., negative) feedback, or gave only minimal amounts of it. This occurred across multiple contexts: audiotapes (n = 3; 6%),21,22,24 CEX (n = 1; 2%),29 feedback cards (n = 2; 4%),33,38 MSF (n = 5; 10%),39,40,45–47 videotapes (n = 2; 4%),48,53 and written feedback (n = 6; 12%).56,57,64,66,69,70 The tendency to provide predominantly positive feedback occurred even when feedback was given using a structured feedback tool22 or in a group discussion.24 Both faculty29,38 and student peers53,56 were found to be lenient.
In contrast to the leniency bias, 3 articles (6%) noted feedback with a negative valence: in detailed CEX forms given to interns,30 in videotapes of an objective structured clinical examination for preclinical students,52 and from faculty after watching videotapes of simulated general practice residents.51 Authors of the second and third articles, respectively, indicated that use of a checklist52 and the fact that the resident scenarios were simulated51 made it easier to give negative feedback.
Low-quality feedback.
Fifteen articles (29%) described issues with low-quality feedback. Twelve articles (24%)22,23,26,27,31,33,40,48,51,61,64,67 indicated that the feedback given to learners was limited in amount or too general and did not conform to published guidelines for feedback.2–4 Six articles (12%)22,34,36,38,40,61 revealed that action plans for learners were either not included in the feedback or were limited in scope; previous action plans, from which to judge progress, were not included.
Faculty deficient in feedback competencies.
Eight articles (16%) indicated challenges with the faculty as they provided feedback. Faculty did not use the feedback forms appropriately,31,46,62 even when they were experienced teachers,26 or they did not provide adequate feedback after training.28,29 Two articles (4%), which analyzed audiotapes, noted that the feedback sessions were not interactive and the faculty dominated the conversations.21,22
Challenges with multiple feedback tools.
Various types of feedback tools were used (see Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A484). Seven articles (14%) described challenges related to these feedback tools. The authors reported that the tool simply was not used26,36,46 or that information on the completed feedback tool was missing.33,35,41 When a handwritten method was the means for communicating feedback, authors noted the frequent illegibility of such information.35,55
Gender impacts.
The gender of the learner or the feedback provider was noted in 7 articles (14%) as having an impact on the feedback exchange. In the article involving speech pathology students, women gave negative judgments to other women in an implicit manner (i.e., positive feedback was more explicit).21 In a study with internal medicine clerkship students using feedback cards, fewer recommendations for improvement were noted with gender-concordant pairs.36 In contrast, in a study with first-year medical students, gender-concordant pairs—specifically female pairs—resulted in more detailed goals as part of the written feedback given.62 In an MSF article on interprofessional behaviors, the female raters scored residents in multiple specialties lower than the male raters did.41 In another MSF article, male OB/GYN residents received more negative feedback from nurses (most of whom were women) than did the female OB/GYN residents.43 In an article on case presentations, male first-year medical students were more likely than female students to provide critical feedback to peers.66 In an article assessing written summative feedback, third-year female medical students were more likely than male students to seek out this information.71
The findings of our integrative review add to the medical education literature by collecting in a comprehensive manner studies that examine the content of feedback given to learners. When these articles are read in isolation, they may raise individual questions about feedback. But when they are analyzed together, as in this review, common themes emerge. Learners’ complaints over the years that they do not receive adequate feedback5–7 appear to be true! Our findings support their claims that often feedback is nonspecific, limited in amount, lacking in action plans, and focused predominantly on the positive aspects of their performance. This leniency bias could occur for many reasons, including faculty fearing negative reactions or lacking skills in giving constructive feedback. Collectively, these findings may seem discouraging—supporting Irby’s8 claim that feedback is an intractable problem—but, from a more positive perspective, we have described elsewhere how feedback to learners is related to patient outcomes: In a narrative analysis of another subset of 28 articles from our scoping review that used patient data or chart reviews to assess the clinical impact of feedback interventions, we found that 79% (n = 22) reported characteristics of improved patient care.72
We believe there are important lessons to be gathered from our integrative review, which can also provide direction for future efforts in this area of medical education.
Like the tango,73 the feedback interaction can appear to be composed of a complex set of movements. The various contexts (e.g., CEX, MSF, written feedback), tools and forms used, and focus areas of performance make for a very complex interaction.74,75 Comparable to the multidirectional movements in the tango, feedback exchanges are not linear and straightforward and can be affected by the interplay of factors such as teacher characteristics and context.25 Just as the emotional state of the dance partners may affect the tango, the affective reactions of the feedback provider and receiver may impact the feedback interaction.76 Dance partners must trust one another and believe their partner is credible, as must individuals in a feedback exchange.77,78 Adding to the complexity and challenge, tango pairs need to be keenly mindful and situationally aware of each partner’s state, their surroundings, and any potential obstacles (e.g., other dance couples), as do feedback providers, who need to account for, as examples, environmental distractions and the reactions of the learner.
The feedback exchange should neither be a monologue nor unidirectional; rather, it should be an interactive, two-way dialogue.49,74 The one-way provision of simple information as feedback (e.g., numerical data)55,58,68 is not likely to be helpful. As the saying goes, “it takes two to tango.” The tango can be an improvisational, spontaneous dance, with both partners needing to adapt quickly to one another and “go with the flow.” Just as the tango cannot be performed by a single individual,79 feedback should not happen with an individual acting in isolation (i.e., without a helpful partner). Neither partner in the feedback exchange should have a fixed idea of what is to ensue, but each should be willing to respond appropriately to the other.
Learning a complex dance without proper instruction and practice would be quite challenging. Similarly, sufficient and periodic faculty training is necessary for a process as complex as feedback. Several of the articles included in this review, as well as reports by other authors, suggest that faculty development in feedback is insufficient26,28,29,31,46,62 or may not be effective.80,81 Minimal instruction in feedback (e.g., reading a short set of instructions on a form) without deliberate practice is unlikely to work well.
Working with a dance partner over longer periods improves the performance of the pair. Establishing longer-term relationships between teachers and students appears to help the content and process of feedback.25,32,50,58,82,83 Over time, practicing with the same dance partner may improve how movements are progressively offered to one another with less resistance,84 similar to how during the feedback exchange it is important for learners and teachers to mutually accept information.85
Two other lessons learned from our integrative review do not fit neatly into the tango metaphor. First, gender has an impact on the feedback information exchanged between the provider and receiver. Historically, the tango embodied the active male and passive female roles, but the growth of LGBTQ+ tango pairs has changed the traditional dynamic. The lesson here may be that focusing on one’s role and clarifying the expectations of each partner are important for success with feedback. Feedback providers and receivers should also be mindful of possible gender and unconscious bias effects. Second, handwritten feedback is problematic because of the frequency of illegible information. With the information technology now available, medical educators should consider eliminating the provision of feedback through handwritten means.
Based on the lessons learned from this integrative review and analysis of the content of feedback to learners, we propose several future methods for improving feedback as well as areas for additional research. Some of our recommendations may be significant departures from normal practices and require different resources,86 but our evidence synthesis shows that the current state of affairs with feedback includes low-quality feedback, leniency biases, inadequate use of feedback tools, deficiencies in the feedback competency of feedback providers, and gender impacts. Our intent in proposing these recommendations is partly to provoke a vigorous dialogue amongst medical educators—particularly if our suggestions are seen as heretical. We acknowledge that these recommendations should involve further study before they are widely implemented. We also hope that our recommendations and review findings will guide medical educators as they develop feedback and assessment methods to address entrustment decisions for learners’ professional activities.
Limit the number of well-trained faculty providers.
Not everyone can dance well. We recommend that the number of trained feedback providers be narrowed down significantly. We believe it is unrealistic to expect to uniformly and consistently raise the level of feedback competency for all faculty. A unique group of medical educators working closely with learners may be required for feedback to be successful.37 Today, the clinical learning environment is characterized by briefer teacher and learner service requirements and interactions, which makes feedback exchanges challenging. Although we cannot ignore the presence of short-term faculty and their interactions with learners,87 both teachers and learners should recognize the limitations for providing any meaningful or quality feedback in these situations and understand why feedback may then be ignored or rejected. The proposed dedicated group of faculty feedback providers could also direct their efforts toward assessing learner achievement of desired competencies using frameworks, such as the Accreditation Council for Graduate Medical Education Milestones or CanMEDS.
Feedback should predominantly occur within longitudinal relationships with faculty.
This smaller group of trained feedback providers will need to commit to long-term, deliberate practice of their feedback skills, as well as to longitudinal relationships with learners.88 More of a mentoring or apprenticeship-like approach may be needed.50 A longitudinal relationship can, of course, be a double-edged sword if it is not functional, which emphasizes the need to find the right dance (i.e., educational) partners. Consequently, the emphasis of the long-term relationship should be to create an educational alliance.86,89 Similar to a high-performing tango pair, the teacher and learner in a strong educational alliance have mutual respect for one another, care for the success of their work together, and feel psychologically safe enough to discuss shortcomings in their respective roles.86
Modify existing feedback tools.
Some of the feedback tools described in articles included in this review seemed to be used with success.23,43,63 To improve the use of feedback tools, address the leniency bias of feedback providers, and maximize the quality of the faculty evaluative effort, we recommend changing feedback tools by eliminating existing narrative spaces for positive feedback but leaving narrative spaces for constructive feedback. Providers will likely still provide positive feedback, maybe verbally, but explicitly focusing the tools on areas of improvement and action plans may lead to a better balance of reinforcing and constructive feedback in the end. In general, feedback tools need to be made explicit (i.e., general questions do not work), simplified (i.e., limit the number of competencies or behaviors being assessed), used frequently over time,62 and designed to be situation-specific (which may make them more effective).69 However, an isolated effort to create the “perfect” feedback tool may be a futile exercise and too reductionistic an approach.74
Determine the right steps of the feedback exchange.
The tango includes a series of structured steps, and the effective execution of these steps results in an enhanced performance.90 Determining an effective series of steps over the continuum of the feedback exchange, completed at the appropriate times and with the proper technique (e.g., two-way dialogue), continues to be an important area of study.91,92
Deliver ongoing, external feedback.
Archer74 advocates increasing learner reflection-in-action informed by external feedback (similar to the coaching and frequent feedback professional dancers and athletes receive) as a way to improve the feedback culture. Of interest, we found that learners were usually more critical of themselves than their peers were.53,56 However, game theory—a “mathematically-based conceptual framework (originally developed for economics) for predicting, describing and explaining behaviour in strategic situations”93—suggests that peer feedback as the external source may be limited because of learners’ perceived risks in exposing their deficiencies and damaging relationships.94 We recommend creating a system of more continuous, expected, ongoing, external feedback95 and eliminating the false dichotomy between formative and summative feedback.74 Too much focus in a learning environment on summative assessment (e.g., grading) may lead to less student receptivity to feedback.96 Students may also confuse formative feedback as serving summative purposes and ignore the feedback.97 Although summative assessment is unavoidable, an ongoing program of assessment with associated frequent, credible feedback could help achieve the goals of both formative and summative feedback and potentially improve the feedback culture.98 The components of this improved feedback system would need to be communicated often and through multiple means with learners. The optimal culture may allow for peer feedback to be useful.58 We also recommend a more concerted focus on improving learners’ abilities to receive and learn from feedback,99 as another means of developing the ideal feedback culture and learning environment that teachers and learners likely desire.
Improve the overall feedback culture.
The overall feedback culture of an organization would be enhanced by its educational leaders designing a robust structure that includes specific, formal, ongoing faculty development. To maximize the value of the time that faculty invest in this evaluative effort, there should be a proactive and thoughtful approach to the entire evaluation system (i.e., not just thinking about feedback in isolation), with the system being internally assessed and improved upon continuously. Returning to our tango metaphor, we put forward that the dance studio owner or senior choreographer is responsible for creating the optimal environment (e.g., the physical conditions of the dance studio, good music, skilled trainers, appropriate pairing of dancers, choreography), encouraging a positive culture, and emphasizing the continuous enhancement both of dance pairs and of the entire studio. Future research on feedback culture could focus on the medical education leaders or team responsible for establishing the entire feedback and evaluation system and the best practices to achieve the needed shift from our current state of affairs with feedback.
Similar to the limitations of our scoping review,11 this subset of articles was limited to references published in English, so we may have overlooked many non-English articles, and we elected not to explore the gray literature. Our focus was only learners in medical education and not practicing clinicians. Because the search range for the scoping review ended December 31, 2015, more recent articles were not included in this integrative review. There were potential biases of the coinvestigators conducting the data extraction, and this was controlled for by conducting independent extraction of information. It is possible that different results would have been found if additional or different investigators had been involved in the thematic analysis.
This integrative review analyzed a comprehensive collection of articles describing the content of feedback given to learners. Our findings reveal that the current exchange of feedback is troubled by leniency bias and low-quality feedback—which supports learners’ claims—as well as faculty deficient in feedback competencies, challenges with multiple feedback tools, and gender impacts. Using the tango dance form as a metaphor, we have provided recommendations to improve feedback for teachers and learners willing to work with each other and engage in the complexities of the dynamic partnership encompassing the feedback exchange.
1. Hewson MG, Little ML. Giving feedback in medical education: Verification of recommended techniques. J Gen Intern Med. 1998;13:111–116.
2. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–781.
3. van der Leeuw RM, Slootweg IA. Twelve tips for making the best use of feedback. Med Teach. 2013;35:348–351.
4. Ramani S, Krackov SK. Twelve tips for giving feedback effectively in the clinical environment. Med Teach. 2012;34:787–791.
5. Al-Mously N, Nabil NM, Al-Babtain SA, Fouad Abbas MA. Undergraduate medical students’ perceptions on the quality of feedback received during clinical rotations. Med Teach. 2014;36(suppl 1):S17–S23.
6. De SK, Henke PK, Ailawadi G, Dimick JB, Colletti LM. Attending, house officer, and medical student perceptions about teaching in the third-year medical school general surgery clerkship. J Am Coll Surg. 2004;199:932–942.
7. Sender Liberman A, Liberman M, Steinert Y, McLeod P, Meterissian S. Surgery residents and attending surgeons have different perceptions of feedback. Med Teach. 2005;27:470–472.
8. Irby DM. What clinical teachers in medicine need to know. Acad Med. 1994;69:333–342.
9. Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. JAMA. 2009;302:1330–1331.
10. van de Ridder JM, Stokking KM, McGaghie WC, ten Cate OT. What is feedback in clinical education? Med Educ. 2008;42(2):189–197.
11. Bing-You RG, Hayes V, Varaklis K, Trowbridge R, Kemp H, McKelvy D. Feedback for learners in medical education: What is known? A scoping review. Acad Med. 2017;92:1346–1354.
12. Arksey H, O’Malley L. Scoping studies: Towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19–32.
13. Thomas A, Lubarsky S, Durning SJ, Young ME. Knowledge syntheses in medical education: Demystifying scoping reviews. Acad Med. 2017;92:161–166.
14. McGaghie WC. Varieties of integrative scholarship: Why rules of evidence, criteria, and standards matter. Acad Med. 2015;90:294–302.
17. Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev. 2012;1:28.
18. Whittemore R, Knafl K. The integrative review: Updated methodology. J Adv Nurs. 2005;52:546–553.
19. de Souza MT, da Silva MD, de Carvalho R. Integrative review: What is it? How to do it? Einstein (São Paulo). 2010;8(1):102–106.
20. Gordon M, Gibbs T. STORIES statement: Publication standards for healthcare education evidence synthesis. BMC Med. 2014;12:143.
21. Ferguson A. Appraisal in student-supervisor conferencing: A linguistic analysis. Int J Lang Commun Disord. 2010;45:215–229.
22. Hasley PB, Arnold RM. Summative evaluation on the hospital wards. What do faculty say to learners? Adv Health Sci Educ Theory Pract. 2009;14:431–439.
23. Spanager L, Dieckmann P, Beier-Holgersen R, Rosenberg J, Oestergaard D. Comprehensive feedback on trainee surgeons’ non-technical skills. Int J Med Educ. 2015;6:4–11.
24. Wen CC, Lin MJ, Lin CW, Chu SY. Exploratory study of the characteristics of feedback in the reflective dialogue group given to medical students in a clinical clerkship. Med Educ Online. 2015;20:25965.
25. Bok HG, Jaarsma DA, Spruijt A, Van Beukelen P, Van Der Vleuten CP, Teunissen PW. Feedback-giving behaviour in performance evaluations during clinical clerkships. Med Teach. 2016;38:88–95.
26. Fernando N, Cleland J, McKenzie H, Cassar K. Identifying the factors that determine feedback given to undergraduate medical students following formative mini-CEX assessments. Med Educ. 2008;42:89–95.
27. Gauthier S, Cavalcanti R, Goguen J, Sibbald M. Deliberate practice as a framework for evaluating feedback in residency training. Med Teach. 2015;37:551–557.
28. Harvey P, Radomski N, O’Connor D. Written feedback and continuity of learning in a geographically distributed medical education program. Med Teach. 2013;35:1009–1013.
29. Holmboe ES, Yepes M, Williams F, Huot SJ. Feedback and the mini clinical evaluation exercise. J Gen Intern Med. 2004;19(5 pt 2):558–561.
30. Kroboth FJ, Hanusa BH, Parker SC. Didactic value of the clinical evaluation exercise. Missed opportunities. J Gen Intern Med. 1996;11:551–553.
31. Pelgrim EA, Kramer AW, Mokkink HG, Van der Vleuten CP. Quality of written narrative feedback and reflection in a modified mini-clinical evaluation exercise: An observational study. BMC Med Educ. 2012;12:97.
32. Playford D, Kirke A, Maley M, Worthington R. Longitudinal assessment in an undergraduate longitudinal integrated clerkship: The mini Clinical Evaluation Exercise (mCEX) profile. Med Teach. 2013;35:e1416–e1421.
33. Bandiera G, Lendrum D. Daily encounter cards facilitate competency-based feedback while leniency bias persists. CJEM. 2008;10:44–50.
34. Donato AA, Park YS, George DL, Schwartz A, Yudkowsky R. Validity and feasibility of the Minicard Direct Observation Tool in 1 training program. J Grad Med Educ. 2015;7:225–229.
35. Johnston KT, Orlander JD, Manning B, et al. Structured observation of clinical skills (SOCS): An initiative to improve frequency and quality of student feedback. J Gen Intern Med. 2008;23(suppl 2):211.
36. Johnston KT, Orlander JD, Spires A, Manning B, Hershman WY. Quality of feedback to students during medicine clerkships: The impact of gender. J Gen Intern Med. 2008;23(suppl 2):384–385.
37. Schum TR, Krippendorf RL, Biernat KA. Simple feedback notes enhance specificity of feedback to learners. Ambul Pediatr. 2003;3:9–11.
38. Sokol-Hessner L, Shea JA, Kogan JR. The open-ended comment space for action plans on core clerkship students’ encounter cards: What gets written? Acad Med. 2010;85(10 suppl):S110–S114.
39. Bullock AD, Hassell A, Markham WA, Wall DW, Whitehouse AB. How ratings vary by staff group in multi-source feedback assessment of junior doctors. Med Educ. 2009;43:516–520.
40. Canavan C, Holtman MC, Richmond M, Katsufrakis PJ. The quality of written comments on professional behaviors in a developmental multisource feedback program. Acad Med. 2010;85(10 suppl):S106–S109.
41. Hayward MF, Curran V, Curtis B, Schulz H, Murphy S. Reliability of the interprofessional collaborator assessment rubric (ICAR) in multi source feedback (MSF) with post-graduate medical residents. BMC Med Educ. 2014;14:1049.
42. Lockyer JM. Role of Socio Demographic Variables and Continuing Medical Education in Explaining Multi Source Feedback Ratings and Use of Feedback by Physicians [dissertation]. Calgary, Alberta, Canada: University of Calgary; 2002.
43. Ogunyemi D, Gonzalez G, Fong A, et al. From the eye of the nurses: 360-degree evaluation of residents. J Contin Educ Health Prof. 2009;29:105–110.
44. Qu B, Zhao YH, Sun BZ. Assessment of resident physicians in professionalism, interpersonal and communication skills: A multisource feedback. Int J Med Sci. 2012;9:228–236.
45. Sargeant JM, Mann KV, Ferrier SN, et al. Responses of rural family physicians and their colleague and coworker raters to a multi-source feedback process: A pilot study. Acad Med. 2003;78(10 suppl):S42–S44.
46. Whitehouse A, Hassell A, Bullock A, Wood L, Wall D. 360 degree assessment (multisource feedback) of UK trainee doctors: Field testing of team assessment of behaviours (TAB). Med Teach. 2007;29:171–176.
47. Wood L, Wall D, Bullock A, Hassell A, Whitehouse A, Campbell I. “Team observation”: A six-year study of the development and use of multi-source feedback (360-degree assessment) in obstetrics and gynaecology training in the UK. Med Teach. 2006;28:e177–e184.
48. Blatt B, Confessore S, Kallenberg G, Greenberg L. Verbal interaction analysis: Viewing feedback through a different lens. Teach Learn Med. 2008;20:329–333.
49. Frye AW, Hollingsworth MA, Wymer A, Hinds MA. Dimensions of feedback in clinical teaching: A descriptive study. Acad Med. 1996;71(1 suppl):S79–S81.
50. Ghaderi I, Auvergne L, Park YS, Farrell TM. Quantitative and qualitative analysis of performance during advanced laparoscopic fellowship: A curriculum based on structured assessment and feedback. Am J Surg. 2015;209:71–78.
51. Govaerts MJB, van de Wiel MWJ, van der Vleuten CPM. Quality of feedback following performance assessments: Does assessor expertise matter? EJTD. 2013;37:105–125.
52. Hollingsworth MA, Richards BF, Frye AW. Description of observer feedback in an objective structured clinical examination and effects on examinees. Teach Learn Med. 1994;6(1):49–53.
53. Hulsman RL, van der Vloodt J. Self-evaluation and peer-feedback of medical students’ communication skills using a Web-based video annotation system. Exploring content and specificity. Patient Educ Couns. 2015;98:356–363.
54. Rizan C, Elsey C, Lemon T, Grant A, Monrouxe LV. Feedback in action within bedside teaching encounters: A video ethnographic study. Med Educ. 2014;48:902–920.
55. Ball E, Franks H, Jenkins J, McGrath M, Leigh J. Annotation is a valuable tool to enhance learning and assessment in student essays. Nurse Educ Today. 2009;29:284–291.
56. Byrd B, Martin C, Nichols C, Edmondson A. Examination of the quality and effectiveness of peer feedback and self-reflection exercises among medical students. FASEB J. 2015;29(1):12.
57. Cook MR, Barton JS, Brown S, Watters JM, Deveny KE, Kiraly KN. A deliberate postoperative debriefing process can effectively provide formative resident feedback. J Am Coll Surg. 2014;219(3 suppl):S117.
58. Dannefer EF, Prayson RA. Supporting students in self-regulation: Use of formative feedback and portfolios in a problem-based learning setting. Med Teach. 2013;35:655–660.
59. Dekker H, Schönrock-Adema J, Snoek JW, van der Molen T, Cohen-Schotanus J. Which characteristics of written feedback are perceived as stimulating students’ reflective competence: An exploratory study. BMC Med Educ. 2013;13:94.
60. Evans C, Mori B. Web-based diaries—Windows to student internship feedback experiences. Med Educ. 2005;39:1169–1170.
61. Fitzgerald M, Gibson F, Gunn K. Contemporary issues relating to assessment of pre-registration nursing students in practice. Nurse Educ Pract. 2010;10:158–163.
62. Haffling AC, Beckman A, Edgren G. Structured feedback to undergraduate medical students: 3 years’ experience of an assessment tool. Med Teach. 2011;33:e349–e357.
63. Hughes C, Toohey S, Velan G. eMed Teamwork: A self-moderating system to gather peer feedback for developing and assessing teamwork skills. Med Teach. 2008;30:5–9.
64. Jackson JL, Kay C, Jackson WC, Frank M. The quality of written feedback by attendings of internal medicine residents. J Gen Intern Med. 2015;30:973–978.
65. Lindon-Morris E, Laidlaw A. Anxiety and self-awareness in video feedback. Clin Teach. 2014;11:174–178.
66. Melton D, Edmondson A, Nichols C. Analysis of the quality, themes, and reliability of faculty vs. student feedback following student group presentations in a medical school curriculum. FASEB J. 2015;29(1 suppl):13.
67. Nesbitt A, Pitcher A, James L, Sturrock A, Griffin A. Written feedback on supervised learning events. Clin Teach. 2014;11:279–283.
68. Pelgrim EA, Kramer AW, Mokkink HG, van der Vleuten CP. Reflection as a component of formative assessment appears to be instrumental in promoting the use of feedback; an observational study. Med Teach. 2013;35:772–778.
69. Renting N, Gans RO, Borleffs JC, van der Wal MA, Jaarsma DC, Cohen-Schotanus J. A feedback system in residency to evaluate CanMEDS roles and provide high-quality feedback: Exploring its application. Med Teach. 2016;38:738–745.
70. Sherbino J, Bandiera G. Improving communication skills: Feedback from faculty and residents. Acad Emerg Med. 2006;13:467–470.
71. Sinclair HK, Cleland JA. Undergraduate medical students: Who seeks formative feedback? Med Educ. 2007;41:580–582.
72. Hayes V, Bing-You RG, Varaklis K, Trowbridge R, Kemp H, McKelvy D. Is feedback to medical learners associated with characteristics of improved patient care? Perspect Med Educ. 2017;6:319–324.
73. Bottomer P. Let’s Dance! New York, NY: Black Dog & Leventhal; 1998.
74. Archer JC. State of the science in health professional education: Effective feedback. Med Educ. 2010;44:101–108.
75. Kogan JR, Conforti LN, Bernabeo EC, Durning SJ, Hauer KE, Holmboe ES. Faculty staff perceptions of feedback to residents after direct observation of clinical skills. Med Educ. 2012;46:201–215.
76. Bynum WE 4th. Filling the feedback gap: The unrecognised roles of shame and guilt in the feedback cycle. Med Educ. 2015;49:644–647.
77. van de Ridder JM, Berk FC, Stokking KM, ten Cate OT. Feedback providers’ credibility impacts students’ satisfaction with feedback and delayed performance. Med Teach. 2015;37(8):767–774.
78. Bing-You RG, Paterson J, Levine MA. Feedback falling on deaf ears: Residents’ receptivity to feedback tempered by sender credibility. Med Teach. 1997;19(1):40–43.
79. Collier S. Tango! New York, NY: Thames and Hudson, Inc.; 1995.
80. Prystowsky JB, DaRosa DA. A learning prescription permits feedback on feedback. Am J Surg. 2003;185:264–267.
81. Salerno SM, Jackson JL, O’Malley PG. Interactive faculty development seminars improve the quality of written feedback in ambulatory teaching. J Gen Intern Med. 2003;18:831–834.
82. Bates J, Konkin J, Suddards C, Dobson S, Pratt D. Student perceptions of assessment and feedback in longitudinal integrated clerkships. Med Educ. 2013;47:362–374.
83. Chou CL, Masters DE, Chang A, Kruidering M, Hauer KE. Effects of longitudinal small-group learning on delivery and receipt of communication skills feedback. Med Educ. 2013;47:1073–1079.
85. van de Ridder JM, Peters CM, Stokking KM, de Ru JA, ten Cate OT. Framing of feedback impacts student’s satisfaction, self-efficacy and performance. Adv Health Sci Educ Theory Pract. 2015;20:803–816.
86. Wearne S. Effective feedback and the educational alliance. Med Educ. 2016;50:891–892.
87. Ross S, Dudek N, Halman S, Humphrey-Murto S. Context, time, and building relationships: Bringing in situ feedback into the conversation. Med Educ. 2016;50:893–895.
88. Voyer S, Cuncic C, Butler DL, MacNeil K, Watling C, Hatala R. Investigating conditions for meaningful feedback in the context of an evidence-based feedback programme. Med Educ. 2016;50:943–954.
89. Telio S, Regehr G, Ajjawi R. Feedback and the educational alliance: Examining credibility judgements and their consequences. Med Educ. 2016;50:933–942.
91. Sargeant J, Lockyer J, Mann K, et al. Facilitated reflective performance feedback: Developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med. 2015;90:1698–1706.
92. van de Ridder JM, McGaghie WC, Stokking KM, ten Cate OT. Variables that affect the process and outcome of feedback, relevant for medical training: A meta-review. Med Educ. 2015;49:658–673.
93. Blake A, Carroll BT. Game theory and strategy in medical training. Med Educ. 2016;50:1094–1106.
94. Rees EL, Davies B. The feedback game: Missed opportunities in workplace-based learning. Med Educ. 2016;50:1087–1088.
95. Konopasek L, Norcini J, Krupat E. Focusing on the formative: Building an assessment system aimed at student growth and development. Acad Med. 2016;91:1492–1497.
96. Harrison CJ, Könings KD, Dannefer EF, Schuwirth LW, Wass V, van der Vleuten CP. Factors influencing students’ receptivity to formative feedback emerging from different assessment cultures. Perspect Med Educ. 2016;5:276–284.
97. Watling C. The uneasy alliance of assessment and feedback. Perspect Med Educ. 2016;5:262–264.
98. van der Vleuten CP, Schuwirth LW, Driessen EW, et al. A model for programmatic assessment fit for purpose. Med Teach. 2012;34:205–214.
99. Bing-You RG, Bertsch T, Thompson JA. Coaching medical students in receiving effective feedback. Teach Learn Med. 1998;10(4):228–231.