Feedback for Learners in Medical Education: What Is Known? A Scoping Review

Bing-You, Robert MD, MEd, MBA; Hayes, Victoria MD; Varaklis, Kalli MD, MSEd; Trowbridge, Robert MD; Kemp, Heather MLIS; McKelvy, Dina MA, MLS

doi: 10.1097/ACM.0000000000001578
Reviews

Purpose To conduct a scoping review of the literature on feedback for learners in medical education.

Method In 2015–2016, the authors searched the Ovid MEDLINE, ERIC, CINAHL, ProQuest Dissertations and Theses Global, Web of Science, and Scopus databases and seven medical education journals (via OvidSP) for articles published January 1980–December 2015. Two reviewers screened articles for eligibility with inclusion criteria. All authors extracted key data and analyzed data descriptively.

Results The authors included 650 articles in the review. More than half (n = 341) were published during 2010–2015. Many centered on medical students (n = 274) or residents (n = 192); some included learners from other disciplines (n = 57). Most (n = 633) described methods used for giving feedback; some (n = 95) described opinions and recommendations regarding feedback. Few studies assessed approaches to feedback with randomized, educational trials (n = 49) or described changes in learner behavior after feedback (n = 49). Even fewer assessed the impact of feedback on patient outcomes (n = 28).

Conclusions Feedback is considered an important means of improving learner performance, as evidenced by the number of articles outlining recommendations for feedback approaches. The literature on feedback for learners in medical education is broad, fairly recent, and generally describes new or altered curricular approaches that involve feedback for learners. High-quality, evidence-based recommendations for feedback are lacking. In addition to highlighting calls to reassess the concepts and complex nature of feedback interactions, the authors identify several areas that require further investigation.

Supplemental Digital Content is available in the text.

R. Bing-You is professor, Tufts University School of Medicine, and vice president for medical education, Maine Medical Center, Portland, Maine.

V. Hayes is clinical assistant professor, Tufts University School of Medicine, and faculty member, Department of Family Medicine, Maine Medical Center, Portland, Maine.

K. Varaklis is clinical associate professor, Tufts University School of Medicine, and residency program director in obstetrics and gynecology, Maine Medical Center, Portland, Maine.

R. Trowbridge is associate professor, Tufts University School of Medicine, and director of undergraduate medical education, Department of Medicine, Maine Medical Center, Portland, Maine.

H. Kemp is medical librarian, Maine Medical Center, Portland, Maine.

D. McKelvy is manager of library and knowledge services, Maine Medical Center, Portland, Maine.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A426.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Correspondence should be addressed to Robert Bing-You, Maine Medical Center, 22 Bramhall St., Portland, ME 04102; telephone: (207) 662-7060; e-mail: bingyb@mmc.org.

Feedback for learners continues to be an important area of study for medical educators.1,2 Effective feedback is essential for learners to acquire new knowledge and skills, especially in light of the shift toward competency-based education.3 Feedback is also a common component of faculty development programs4 and residents- and students-as-teachers5 programs. Evaluations of faculty typically include assessments of their feedback delivery skills.6 Yet despite this attention to feedback, medical students and residents have long reported that they receive insufficient feedback.7,8

In one of the most frequently cited articles on feedback in medical education, Ende9 prescribed feedback behaviors that clinical teachers should employ with learners. However, these recommendations were drawn from other fields, such as business administration and organizational psychology, and were not necessarily based on evidence in medical education. Such recommendations have informed faculty development programs promoting faculty teaching behaviors in delivering effective feedback.10 In a 25-year follow-up commentary on the Ende9 article, two of us (R.B., R.T.) described the continued challenges of providing feedback—for example, its provision at a low cognitive level using basic and descriptive facts, the overpowering influence of learner affective reactions, and the poor ability of learners for self-assessment—and suggested that feedback interactions are complex and require further research to establish evidence-based recommendations.11

In recent years, the published literature has focused on what factors may determine learners’ satisfaction with the feedback they receive.12,13 The impact of the feedback provider’s credibility as a teacher on the learner’s perception of feedback has been explored,14 as have the effect of the learner’s emotional state on responsiveness to feedback15 and the impact of the teacher’s educational beliefs (e.g., personal theories of performance evaluations) on teacher–student interactions and how the teacher subsequently gives feedback.16 Alternative theoretical models, besides models supporting a behaviorist approach (i.e., observable and quantifiable ways a teacher acts), have been suggested for improving feedback interactions between teachers and learners.17–19 For example, Archer18 recommended more of a focus on nurturing the reflection-in-action (i.e., self-assessment) capabilities of learners to achieve more effective feedback.

In our view, the medical education literature regarding feedback for learners is far-ranging, has not been broadly assessed, contains varied approaches, and is not obviously suitable for systematic study. Therefore, we conducted a scoping review of the published literature to (1) identify the extent, range, and quantity of evidence available regarding feedback for learners in medical education; (2) map key concepts from the literature on feedback; and (3) determine existing gaps that may spur future research. Our aim was to explore what is known about feedback as a means of learner improvement in the field of medical education.

Method

The methodology for this scoping review was based on the framework suggested by Arksey and O’Malley20 and the subsequent recommendations proposed by Levac and colleagues21 and others.22,23 This scoping review followed five phases: (1) identifying the research question; (2) identifying potentially relevant articles; (3) selecting articles; (4) charting data; and (5) collating, summarizing, and reporting the results.

We began by establishing a research team of coinvestigators (the authors) with in-depth knowledge of medical education, experience with systematic reviews, and medical library expertise. We determined the broad research question to be addressed as well as the overall study protocol. We all reviewed the search terms identified and the selection of databases to be searched. On the basis of consensus and team members’ expertise, we determined at subsequent team meetings which coinvestigators would be responsible for each of the four latter phases.

Research question

This scoping review was focused on the following research question: “What has been broadly published in the literature about feedback to help learners in medical education?” We purposefully kept the research question wide-ranging to examine the extent, range, and nature of research activity in the feedback literature.20 Our goal was to identify key concepts, gaps in the literature, and sources of evidence to inform practice and additional research in medical education24; it was not to determine the quality of the literature or develop recommendations for delivering feedback to learners based on interpretations of the available evidence. Scoping reviews offer a wide perspective of the literature,22 which may lead to a “deeper dive” in the form of subsequent systematic reviews.

Data sources and search strategy

Two of us (D.M., H.K.) initiated the search on December 14, 2015, in Ovid MEDLINE. Five other databases were also searched: CINAHL; ERIC; ProQuest Dissertations and Theses Global; Scopus; and Web of Science. The database searches were completed by January 21, 2016. These six databases were chosen to ensure a comprehensive and broad review. The search query consisted of terms considered by the coinvestigators to describe feedback processes and practices for learners at multiple levels in medical education: feedback; feedback, psychological; medical students; assessment; self-assessment; internship and residency; resident; fellows; medical education; faculty; faculty, medical; and reflection. The search was limited to articles published from January 1, 1980, to December 31, 2015 (including those published online ahead of print in 2015). We selected 1980 partly because Ende's9 seminal article on feedback in medical education was published in 1983, and we theorized that the focus on feedback for learners in medical education increased after publication of that article.

The initial search query of the six databases resulted in 5,824 records. We included only articles published in English because of limited resources for translating non-English publications. We excluded letters to the editor, editorials, and commentaries. Given the volume of articles retrieved in our database searches, we decided not to search the gray literature.

Two of us (D.M., H.K.) also searched seven well-known medical education journals (Academic Medicine, Advances in Health Sciences Education, BMC Medical Education, Journal of Continuing Education in the Health Professions, Medical Education, Medical Teacher, and Teaching and Learning in Medicine), with the same keywords and date range, using the “.jn” tag within the OvidSP interface. This identified 1,769 additional records.

In addition, the reference lists of 11 key articles on feedback1,2,17–19,25–30 were manually searched to identify any further articles not yet captured. This identified 10 additional articles (39 articles had already been captured in the initial search process).

Citation management

EndNote X7.4 (Thomson Reuters, New York, New York) was used to import all citations from the databases and journals. Articles were grouped by the database or medical education journal searched. Duplicate citations were removed manually.

Screening, selection, and data extraction

A two-stage screening process was used to determine the relevance of the 7,263 records remaining after duplicates were excluded to the research question. For the first stage of screening, two of us (R.B., V.H.) independently reviewed the titles and abstracts of the first 50 records from the Ovid MEDLINE search. The overall kappa for this initial screening step was 0.68; generally, a kappa greater than 0.60 is considered satisfactory.31 These coinvestigators developed a title and abstract screening form,22 which all of us reviewed and revised slightly. The form designated articles in the following categories as not relevant to the research question:

  • No mention of feedback or medical education
  • Clinical studies with patients (i.e., not medical education) or basic science research
  • Not focused on medical education learners (i.e., focused on learners other than medical students, residents/fellows, attendings, nurses, dental students, pharmacy students, etc.)
  • Learner feedback or evaluation/assessment about a program or activity

We decided to include articles that described health professions learners besides medical students and residents (e.g., nursing students, pharmacy students).

For the second stage of screening, the same two coinvestigators independently reviewed the titles and abstracts of all of the remaining records identified in the Ovid MEDLINE search and the records found in the other database, journal, and manual searches. They were not blinded to author or journal names. Articles for which there was no abstract were procured for full-text review if the title spoke to feedback for learners in medical education. Disagreements in this second stage of screening were reviewed and resolved by two different coinvestigators (K.V., R.T.).

The coinvestigators (R.B., V.H.) completing the title and abstract screening met several times to ensure consistency between themselves (overall kappa was 0.75) and to confirm an appropriate focus on the research question. After completion of the title and abstract screening, all 836 relevant articles were procured for full-text review (see Table 1 for a list of data sources).
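Inter-rater agreement figures like the kappa values reported here can be computed from paired screening decisions. The sketch below illustrates Cohen's kappa using hypothetical include/exclude ratings for 10 abstracts; the data and function name are ours, not the authors' materials.

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters on the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions by two screeners for 10 abstracts
a = ["inc", "inc", "exc", "exc", "inc", "exc", "inc", "exc", "exc", "inc"]
b = ["inc", "inc", "exc", "inc", "inc", "exc", "inc", "exc", "exc", "exc"]
print(round(cohens_kappa(a, b), 2))  # → 0.6
```

With 8 of 10 agreements (0.80 observed) and 0.50 expected by chance, kappa is 0.60, just at the conventional threshold for satisfactory agreement the authors cite.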

Table 1


After screening the Ovid MEDLINE database records (n = 1,601), we developed an initial data-charting form to determine which variables to extract from the full-text articles to answer the research question. We used Microsoft Excel 2010 (Microsoft, Redmond, Washington) to design a spreadsheet to collect the data. The following variables were included in the data-charting form: author; publication year; publication type; type of learner; health care discipline; medical specialty; reference origin; type of article (e.g., opinion paper, survey of learner perceptions, description of new curricular approach); intervention, if done; focus of feedback content; and evaluation of intervention, if done. The origin of the reference was based on the country where the feedback intervention occurred or the country of origin of the first author if no intervention took place (e.g., an opinion paper on feedback by a group of authors). All of us independently extracted data from the first 10 full-text articles, and we subsequently met to discuss the data-charting form. Minor modifications to the form (options were added to the spreadsheet dropdown list within each variable, but no new variables were created) were agreed upon by the entire research team.

Using the data-charting form, one of us (R.B.) extracted the relevant data from all full-text articles procured (n = 836). To avoid errors from single data extraction,32,33 each of the full-text articles was reviewed by a second coinvestigator (V.H., R.T., K.V., D.M., and H.K., who reviewed approximately equal numbers of articles) against the data extracted to check for accuracy and completeness (kappa = 0.75). Disagreements regarding the extracted data were resolved by consensus. The data extraction phase of this scoping review was completed on June 4, 2016.
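The data-charting variables described above can be modeled as a simple record type. The sketch below is a hypothetical illustration; the field names are ours and do not reproduce the authors' actual spreadsheet schema.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChartedArticle:
    """One row of a data-charting spreadsheet; fields mirror the variables in the text."""
    author: str
    publication_year: int
    publication_type: str            # e.g., "journal", "thesis", "book chapter"
    learner_type: Optional[str]      # e.g., "medical student", "resident"
    discipline: Optional[str]        # e.g., "internal medicine"
    reference_origin: str            # country of intervention, or of first author
    article_type: str                # e.g., "new curricular approach", "opinion"
    intervention: Optional[str] = None
    feedback_focus: Optional[str] = None
    evaluation: Optional[str] = None # e.g., Kirkpatrick level assessed

# A made-up example row
row = ChartedArticle(
    author="Doe J",
    publication_year=2012,
    publication_type="journal",
    learner_type="resident",
    discipline="internal medicine",
    reference_origin="Canada",
    article_type="new curricular approach",
    evaluation="learner survey (Kirkpatrick level one)",
)
print(row.learner_type)  # → resident
```

Fixing the record shape up front, as the authors did by agreeing on variables before full extraction, keeps later descriptive analysis consistent across reviewers.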

Results

We included 650 articles in this scoping review, after excluding 186 articles that were irrelevant, duplicates, or unobtainable (see Figure 1 for a flow diagram of the process and Table 2 for characteristics of the included articles). The complete list of included articles is available in Supplemental Digital Appendix 1 at http://links.lww.com/ACADMED/A426.

Table 2


Figure 1


Characteristics of included articles

The majority of the 650 included articles originated from the United States (n = 306; 47.1%) and the United Kingdom (n = 104; 16.0%). The remaining articles came from Canada (n = 66; 10.2%), the Netherlands (n = 49; 7.5%), Australia (n = 38; 5.8%), and other countries (n = 87; 13.4%). There were few articles on feedback for medical education learners from the years 1980 through 1989 (n = 29; 4.5%). Even from 1990 through 1999, the decade after the publication of the seminal 1983 article by Ende,9 there were few articles found (n = 67; 10.3%). There was a large increase in articles from the years 2000 through 2009 (n = 213; 32.8%), and there were even more from the most recent years of the search, 2010 through 2015 (n = 341; 52.4%). The majority of the articles came from journals (n = 635; 97.7%), with few from theses/dissertations (n = 4; 0.6%), book chapters (n = 3; 0.5%), or conference proceedings (n = 8; 1.2%).
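The descriptive breakdown by publication period amounts to a simple tally and percentage calculation. The sketch below uses made-up publication years, not the review's dataset, to show the computation.

```python
from collections import Counter

# Hypothetical publication years for a handful of included articles
years = [1985, 1992, 2004, 2007, 2011, 2012, 2013, 2014, 2015, 2015]

def decade_label(year):
    """Bucket a year the way the review reports periods (final bucket is 2010-2015)."""
    if year >= 2010:
        return "2010-2015"
    return f"{(year // 10) * 10}s"

counts = Counter(decade_label(y) for y in years)
total = len(years)
for period, n in sorted(counts.items()):
    print(f"{period}: n = {n} ({100 * n / total:.1f}%)")
```

On this toy sample the output shows the same skew toward recent years that the review found at scale, with 2010-2015 holding the majority of articles.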

Types of learners and disciplines involved

Of the 543 articles (83.5%) in which a type of learner was specified, the majority focused on medical students (n = 274; 50.5%) or medical residents (n = 192; 35.3%). Among the 79 articles focused on medical students that specified a discipline, the disciplines included internal medicine (n = 27; 34.2%), surgery (n = 17; 21.5%), pediatrics (n = 12; 15.2%), OB/GYN (n = 6; 7.6%), psychiatry (n = 6; 7.6%), family medicine (n = 5; 6.3%), emergency medicine (n = 4; 5.1%), and anesthesiology (n = 2; 2.5%).

For the 159 articles focused on medical residents and fellows and specifying disciplines, the disciplines included internal medicine (n = 56; 35.2%), surgery (n = 31; 19.5%), pediatrics (n = 22; 13.8%), emergency medicine (n = 14; 8.8%), anesthesiology (n = 9; 5.7%), radiology (n = 8; 5.0%), family medicine (n = 8; 5.0%), OB/GYN (n = 7; 4.4%), and psychiatry (n = 4; 2.5%).

Fifteen (2.8%) of the 543 articles specifying a learner type included dental students as learners. There were only 2 articles (0.4%) that included dental residents. Pharmacy learners were included in 14 articles (2.6%), and learners from the nursing field were included in 21 (3.9%). Five articles (1.0%) focused on advanced practice provider learners (i.e., physician assistant or nurse practitioner students). A small number of articles included physical therapy learners (n = 4; 0.7%), midwifery learners (n = 2; 0.4%), speech-pathology learners (n = 1; 0.2%), dietetic learners (n = 1; 0.2%), and veterinary learners (n = 1; 0.2%).

Types of articles

Ninety-seven percent of the 650 articles in this scoping review (n = 630) could be categorized into a single type of article (Table 3), with the remaining articles (n = 20) including multiple types (e.g., a survey of learners plus a literature review).

Table 3


The majority of the 630 articles (n = 327; 52.0%) described a new or altered curricular approach (e.g., implementing an audience response system in the classroom,34 using a feedback checklist during an objective structured clinical examination [OSCE],35 and using a Web-based tool to provide real-time answers to test questions36) that included some form of feedback to the learner. A small percentage of these 327 articles (n = 49; 15.0%)—and of all 650 included articles (n = 49; 7.5%)—involved randomizing the learners to one or more curricular changes or interventions.

Of the 630 articles, 15.1% (n = 95) presented opinions or recommendations regarding feedback, written by groups of authors (n = 58; 9.2%) or by individuals (n = 37; 5.9%). Learner and/or faculty perceptions of feedback were described in 18.7% (n = 118), using surveys (n = 61; 9.7%), interviews (n = 43; 6.8%), or focus groups (n = 14; 2.2%). There were 22 literature review articles (3.5%; 5 were labeled as systematic reviews) and 1 meta-analysis (0.2%) on feedback related to simulation-based procedural skills.37

Some articles (n = 52/630; 8.3%), published mainly since 2003, described the often-qualitative analysis of the content of various feedback encounters, including the analysis of audiotapes (n = 4; 0.6%), written forms used for feedback (e.g., checklists; n = 18; 2.3%), feedback cards (n = 6; 0.9%), videotapes (n = 7; 1.1%), multisource feedback information (n = 9; 1.4%), and clinical examination (CEX) information (n = 8; 1.3%). Statistical analysis (n = 2; 0.3%) or performance (i.e., validation) of a rating scale or evaluation tool (n = 11; 1.7%) was described in a small number of the 630 articles.

Feedback method and focus

The method used for giving feedback to learners was described in 633 (97.4%) articles (Table 4). In these articles, most feedback given to learners by teachers consisted of written feedback (n = 117; 18.5%) or verbal feedback (n = 83; 13.1%). Use of standardized patients, typically in OSCE settings, was described as a method to give feedback to learners (n = 65; 10.3%). In addition, videotaping learners was used as a feedback method (n = 55; 8.7%). Simulation (n = 52; 8.2%) was described mostly in relation to giving feedback to learn procedural skills. Computer-based feedback (n = 48; 7.6%) often centered on giving feedback with online examination questions. Audience response systems (n = 14; 2.2%) were used for immediate feedback in preclinical student classrooms. Few studies used patient feedback to learners to help learners improve (n = 14; 2.2%). Some studies described combining two or more types of feedback to learners (n = 96; 15.2%).

Table 4


The focus of feedback provided to learners was described in 324 articles (49.8%). The most common competency area in these articles was global performance and/or multiple competencies (n = 68; 21.0%). A similar number of articles (n = 60; 18.5%) focused on feedback on procedural skills, which in some of these studies involved haptic (i.e., hands-on) or force feedback during simulation (n = 8/60; 13.3%). Feedback behaviors or teaching skills were the focus of the feedback given in 16 articles (4.9%). Few articles (n = 7; 2.2%) focused on feedback on the professionalism of learners, and only 3 (0.9%) described feedback to learners to improve team-based skills.

Evaluation of feedback impact

An evaluation of a feedback intervention was described in 300 articles (46.2%). The majority of these 300 articles (n = 173; 57.7%) used the lowest of Kirkpatrick’s levels of evaluation,38 assessing learners’ reaction to feedback (i.e., level one). This was obtained mainly through surveys of learners. Assessing differences in learning after feedback (i.e., level two) primarily involved changes in examination or case study scores (n = 12; 4.0%). Assessing changes in behavior after feedback (i.e., level three) included describing the use of simulation as an evaluation method (n = 25; 8.3%), using standardized patients and OSCEs (n = 17; 5.7%), and using observation of learners (n = 7; 2.3%). Level four outcomes could reflect the impact on patients from feedback to learners. Nineteen articles (6.3%) elaborated on the use of clinical patient data, and an additional 9 articles (3.0%) described chart reviews, to assess the clinical impact of the feedback intervention (e.g., interviews with or examination of actual patients, test ordering, adherence to practice guidelines).

Discussion

The findings from this scoping review add to the literature by mapping what is known about feedback for learners in medical education. As evidenced by the large number of articles outlining recommendations for feedback approaches, feedback is considered to be an important driver of learner improvement. Most of the articles have been published since the year 2000. There is an abundance of articles that include medical students and residents; much smaller proportions of articles include learners from other disciplines such as dentistry, nursing, and pharmacy. A small number of articles describe changes in learner behavior after feedback, and even fewer articles assess the impact of feedback on patient outcomes.

Much of what is known about feedback for learners in medical education is not based on strong evidence in the medical education field.39 Our scoping review is not intended to dismiss the large body of feedback literature with K–12 and postsecondary students, from which theoretical constructs have informed recommendations for adult learners. Although articles continue to describe surveys of adult learners in medicine as to their feedback needs or their perceptions of feedback,40,41 there appear to be few rigorous studies in medical education assessing various approaches to feedback through randomized educational trials.42–44 Many articles included in this scoping review described new or altered curricular approaches that fall into lower-quality levels of evidence (e.g., single, observational feedback interventions with small numbers of learners).

That said, a small number of articles do support behaviorist approaches to feedback, the basis for Ende's9 often-cited article from 1983. For example, Saraf et al44 demonstrated that feedback in the form of praise rather than criticism improved students' knowledge and skills in obstetrical simulations. In contrast, Boehler et al43 found that students' performance in surgical knot tying improved only with feedback about their deficiencies, not with compliments alone. Feedback that occurs repeatedly, such as in the model of longitudinal integrated clerkships, appears to be beneficial.45,46 Immediate feedback appears to be helpful for learning gains,47 whereas terminal feedback (i.e., after practice) has been shown to result in better learning in technical skills than does concurrent (i.e., during practice) feedback.48 The feedback "sandwich" (i.e., positive–negative–positive feedback) does not appear to change actual performance, although students perceive that it does.49 In improving professionalism attitudes and behaviors, feedback occurring on an individual basis appears to be more effective than feedback given in a group setting.50 Before endorsing the behaviorist approaches listed above, we recommend assessment of the methodology and quality of these articles.

Reflecting the shift in the literature from the delivery aspects of feedback to the feedback conversation and interactions between the teacher and learner,14–19 as we highlighted in our introduction, more recently published articles have primarily focused on the nonbehavioral aspects of feedback. The effectiveness of feedback may be impacted by the affective state of the learner or the learner’s emotional reaction to the feedback.15,51–54 How the learner perceives the credibility of the individual delivering the feedback appears to be important.14,55–57 Learners have varying opinions on the benefits of feedback from patients,58,59 and patients may be reluctant to be critical of learners.60 Other research has focused on feedback and potential gender61–63 and cultural differences.64,65 Investigators have also explored the complex interchange between a learner and teacher through the content analysis of feedback interactions.66

Several rubrics, with various acronyms, have been suggested for use in feedback with learners: BEGAN (The Brown Educational Guide to the Analysis of Narrative),67,68 REFLECT (Reflection Evaluation for Learners’ Enhanced Competencies Tool),69,70 FAIRness (feedback, activity, relevance, and individualisation),71,72 ECO (emotion, content, outcomes),73 RIME (reporter–interpreter–manager–educator),74 and PEARLS (partnership, empathy, apology, respect, legitimation, support).75 However, these rubrics do not consider the increasing calls by recent authors to reconceptualize models of feedback.17–19,51 The proposed newer models for feedback reflect the criticism of a purely behavioral-based approach.9

This scoping review suggests that a possible reason for the lack of high-quality evidence regarding feedback for learners in medical education relates to challenges in study methodology and/or the approaches used. Although blinding learners and teachers to educational interventions is often not feasible, we believe studies of feedback interventions could involve randomization to improve the quality of the results, or at least control for potential confounding factors. Learners already perceive that they do not receive feedback,7,8 and therefore learners randomized to a control arm may not recognize any differences in feedback. Consideration should also be given to blinding the evaluators of the data. Medical educators could use more qualitative research methods or content analysis of feedback interactions66 to inform better feedback activities and quantitative research methods. Multisite trials involving feedback interventions are feasible42 and should be promoted.

This scoping review also helps identify potential areas for additional scholarly efforts. Methods to help students ask for and receive feedback76–78 and to train students and residents in giving feedback could be further explored.79,80 How feedback is perceived appears to differ by specialty.81 Advancing technology is providing more opportunities and novel means for capturing and providing feedback to learners.82 The long-term efficacy of feedback is unknown: Only one study assessed the long-term (i.e., 5-year) effect of feedback,83 and other studies suggested that little feedback is remembered short-term.84 Less competent learners may receive and/or use feedback differently.85 Feedback to learners may have minimal to no impact,86–88 and discovering the reasons behind this lack of improvement could be essential for the development and evaluation of new feedback methods. There can be further investigation into how the culture of feedback in an organization can be assessed and improved.89 There may be valuable lessons to learn from exploring how feedback occurs in different professions.65 This scoping review also suggests that systematic reviews focused on specific research questions about feedback in medical education—for example, Does feedback to learners improve patient care? Does feedback to learners improve patient outcomes? What is the role of feedback in simulation debriefing activities?—may be beneficial.

Limitations

This scoping review has several limitations. Our search strategy was limited to articles published in English, so we may have overlooked many non-English articles. In view of the large number of articles identified in our initial database searches, we elected not to explore the gray literature. We did not consult with stakeholders, the sixth step of a scoping review, as suggested by some authors,21 but approximately 60% of scoping reviews do not include this step.22 Lastly, our focus was on learners in medical education and not on providers in clinical practice,90 so there may be interesting differences or similarities with other groups of individuals receiving feedback in medical practice and medical education.

Conclusions

This scoping review provides a map of the literature on feedback for learners in medical education. This literature is broad, fairly recent, and predominantly describes new or altered curricular approaches that include feedback to learners. High-level, evidence-based educational recommendations for feedback are lacking. In addition to recent calls for revisiting the concepts and complex nature of feedback interactions, this scoping review identifies areas for potential educational research and future exploration. Completing systematic reviews focused on specific research questions about feedback for learners in medical education may be a valuable next step.

References

1. van der Leeuw RM, Slootweg IA. Twelve tips for making the best use of feedback. Med Teach. 2013;35:348–351.
2. Ramani S, Krackov SK. Twelve tips for giving feedback effectively in the clinical environment. Med Teach. 2012;34:787–791.
3. Holmboe ES, Yamazaki K, Edgar L, et al. Reflections on the first 2 years of milestone implementation. J Grad Med Educ. 2015;7:506–511.
4. Bahar-Ozvaris S, Aslan D, Sahin-Hodoglugil N, Sayek I. A faculty development program evaluation: From needs assessment to long-term effects, of the teaching skills improvement program. Teach Learn Med. 2004;16:368–375.
5. Blanco MA, Maderer A, Oriel A, Epstein SK. How we launched a developmental student-as-teacher (SAT) program for all medical students. Med Teach. 2014;36:385–389.
6. Litzelman DK, Stratos GA, Marriott DJ, Skeff KM. Factorial validation of a widely disseminated educational framework for evaluating clinical teachers. Acad Med. 1998;73:688–695.
7. De SK, Henke PK, Ailawadi G, Dimick JB, Colletti LM. Attending, house officer, and medical student perceptions about teaching in the third-year medical school general surgery clerkship. J Am Coll Surg. 2004;199:932–942.
8. Sender Liberman A, Liberman M, Steinert Y, McLeod P, Meterissian S. Surgery residents and attending surgeons have different perceptions of feedback. Med Teach. 2005;27:470–472.
9. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–781.
10. Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching. Evaluation of a national dissemination program. Arch Intern Med. 1992;152:1156–1161.
11. Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. JAMA. 2009;302:1330–1331.
12. Duffield KE, Spencer JA. A survey of medical students’ views about the purposes and fairness of assessment. Med Educ. 2002;36:879–886.
13. van de Ridder JM, Peters CM, Stokking KM, de Ru JA, ten Cate OT. Framing of feedback impacts student’s satisfaction, self-efficacy and performance. Adv Health Sci Educ Theory Pract. 2015;20:803–816.
14. van de Ridder JM, Berk FC, Stokking KM, ten Cate OT. Feedback providers’ credibility impacts students’ satisfaction with feedback and delayed performance. Med Teach. 2015;37(8):767–774.
15. Eva KW, Armson H, Holmboe E, et al. Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ Theory Pract. 2012;17:15–26.
16. Bok HG, Jaarsma DA, Spruijt A, Van Beukelen P, Van Der Vleuten CP, Teunissen PW. Feedback-giving behaviour in performance evaluations during clinical clerkships. Med Teach. 2016;38:88–95.
17. ten Cate OT. Why receiving feedback collides with self determination. Adv Health Sci Educ Theory Pract. 2013;18:845–849.
18. Archer JC. State of the science in health professional education: Effective feedback. Med Educ. 2010;44:101–108.
19. Sargeant J, Lockyer J, Mann K, et al. Facilitated reflective performance feedback: Developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med. 2015;90:1698–1706.
20. Arksey H, O’Malley L. Scoping studies: Towards a methodological framework. Int J Soc Res Method. 2005;8(1):19–32.
21. Levac D, Colquhoun H, O’Brien KK. Scoping studies: Advancing the methodology. Implement Sci. 2010;5:69.
22. Pham MT, Rajić A, Greig JD, Sargeant JM, Papadopoulos A, McEwen SA. A scoping review of scoping reviews: Advancing the approach and enhancing the consistency. Res Synth Methods. 2014;5:371–385.
23. Peters MD, Godfrey CM, Khalil H, McInerney P, Parker D, Soares CB. Guidance for conducting systematic scoping reviews. Int J Evid Based Healthc. 2015;13:141–146.
24. Daudt HM, van Mossel C, Scott SJ. Enhancing the scoping study methodology: A large, inter-professional team’s experience with Arksey and O’Malley’s framework. BMC Med Res Methodol. 2013;13:48.
25. van de Ridder JM, McGaghie WC, Stokking KM, ten Cate OT. Variables that affect the process and outcome of feedback, relevant for medical training: A meta-review. Med Educ. 2015;49:658–673.
26. van de Ridder JM, Stokking KM, McGaghie WC, ten Cate OT. What is feedback in clinical education? Med Educ. 2008;42:189–197.
27. Hamid Y, Mahmood S. Understanding constructive feedback: A commitment between teachers and students for academic and professional development. J Pak Med Assoc. 2010;60:224–227.
28. Saedon H, Salleh S, Balakrishnan A, Imray CH, Saedon M. The role of feedback in improving the effectiveness of workplace based assessments: A systematic review. BMC Med Educ. 2012;12:25.
29. Kaul P, Gong J, Guiton G. Effective feedback strategies for teaching in pediatric and adolescent gynecology. J Pediatr Adolesc Gynecol. 2014;27:188–193.
30. Branch WT Jr, Paranjape A. Feedback and reflection: Teaching methods for clinical settings. Acad Med. 2002;77(12 pt 1):1185–1188.
31. Viera AJ, Garrett JM. Understanding interobserver agreement: The kappa statistic. Fam Med. 2005;37:360–363.
32. Buscemi N, Hartling L, Vandermeer B, Tjosvold L, Klassen TP. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59:697–703.
33. Hamm MP, Chisholm A, Shulhan J, et al. Social media use by health care professionals and trainees: A scoping review. Acad Med. 2013;88:1376–1383.
34. Alexander CJ, Crescini WM, Juskewitch JE, Lachman N, Pawlina W. Assessing the integration of audience response system technology in teaching of anatomical sciences. Anat Sci Educ. 2009;2:160–166.
35. Black NM, Harden RM. Providing feedback to students on clinical skills by using the objective structured clinical examination. Med Educ. 1986;20:48–52.
36. Hallgren RC, Parkhurst PE, Monson CL, Crewe NM. An interactive, Web-based tool for learning anatomic landmarks. Acad Med. 2002;77:263–265.
37. Hatala R, Cook DA, Zendejas B, Hamstra SJ, Brydges R. Feedback for simulation-based procedural skills training: A meta-analysis and critical narrative synthesis. Adv Health Sci Educ Theory Pract. 2014;19:251–272.
38. Frye AW, Hemmer PA. Program evaluation models and related theories: AMEE guide no. 67. Med Teach. 2012;34:e288–e299.
39. Guyatt GH, Haynes RB, Jaeschke RZ, et al. Users’ guides to the medical literature: XXV. Evidence-based medicine: Principles for applying the users’ guides to patient care. Evidence-Based Medicine Working Group. JAMA. 2000;284:1290–1296.
40. Nofziger AC, Naumburg EH, Davis BJ, Mooney CJ, Epstein RM. Impact of peer assessment on the professional development of medical students: A qualitative study. Acad Med. 2010;85:140–147.
41. Bing-You RG, Stratos GA. Medical students’ needs for feedback from residents during the clinical clerkship year. Teach Learn Med. 1995;7(3):172–176.
42. Bing-You RG, Greenberg LW, Wiederman BL, Smith CS. A randomized trial to improve resident teaching with written feedback. Teach Learn Med. 1997;9(1):10–13.
43. Boehler ML, Rogers DA, Schwind CJ, et al. An investigation of medical student reactions to feedback: A randomised controlled trial. Med Educ. 2006;40:746–749.
44. Saraf S, Bayya J, Weedon J, Minkoff H, Fisher N. The relationship of praise/criticism to learning during obstetrical simulation: A randomized clinical trial. J Perinat Med. 2014;42:479–486.
45. Bates J, Konkin J, Suddards C, Dobson S, Pratt D. Student perceptions of assessment and feedback in longitudinal integrated clerkships. Med Educ. 2013;47:362–374.
46. Chou CL, Masters DE, Chang A, Kruidering M, Hauer KE. Effects of longitudinal small-group learning on delivery and receipt of communication skills feedback. Med Educ. 2013;47:1073–1079.
47. El Saadawi GM, Azevedo R, Castine M, et al. Factors affecting feeling-of-knowing in a medical intelligent tutoring system: The role of immediate feedback as a metacognitive scaffold. Adv Health Sci Educ Theory Pract. 2010;15:9–30.
48. Walsh CM, Ling SC, Wang CS, Carnahan H. Concurrent versus terminal feedback: It may be better to wait. Acad Med. 2009;84(10 suppl):S54–S57.
49. Parkes J, Abercrombie S, McCarty T. Feedback sandwiches affect perceptions but not performance. Adv Health Sci Educ Theory Pract. 2013;18:397–407.
50. Camp CL, Gregory JK, Lachman N, Chen LP, Juskewitch JE, Pawlina W. Comparative efficacy of group and individual feedback in gross anatomy for promoting medical student professionalism. Anat Sci Educ. 2010;3:64–72.
51. Telio S, Ajjawi R, Regehr G. The “educational alliance” as a framework for reconceptualizing feedback in medical education. Acad Med. 2015;90:609–614.
52. Sargeant J, Mann K, Sinclair D, Van der Vleuten C, Metsemakers J. Understanding the influence of emotions and reflection upon multi-source feedback acceptance and use. Adv Health Sci Educ Theory Pract. 2008;13:275–288.
53. Mann K, van der Vleuten C, Eva K, et al. Tensions in informed self-assessment: How the desire for feedback and reticence to collect and use it can conflict. Acad Med. 2011;86:1120–1127.
54. Jarrell A, Harley JM, Lajoie S, Naismith L. Examining the relationship between performance feedback and emotions in diagnostic reasoning: Toward a predictive framework for emotional support. In: Artificial Intelligence in Education AIED 2015. Cham, Switzerland: Springer International Publishing; 2015:650–653.
55. Bing-You RG, Paterson J, Levine MA. Feedback falling on deaf ears: Residents’ receptivity to feedback tempered by sender credibility. Med Teach. 1997;19(1):40–43.
56. Dijksterhuis MG, Schuwirth LW, Braat DD, Teunissen PW, Scheele F. A qualitative study on trainees’ and supervisors’ perceptions of assessment for learning in postgraduate medical education. Med Teach. 2013;35:e1396–e1402.
57. van Schaik SM, O’Sullivan PS, Eva KW, Irby DM, Regehr G. Does source matter? Nurses’ and physicians’ perceptions of interprofessional feedback. Med Educ. 2016;50:181–188.
58. Bokken L, Rethans JJ, Jöbsis Q, Duvivier R, Scherpbier A, van der Vleuten C. Instructiveness of real patients and simulated patients in undergraduate medical education: A randomized experiment. Acad Med. 2010;85:148–154.
59. Patel M, DiRocco DN, Marzec L, Day SC. Resident and faculty attitudes towards using patient feedback in an academic internal medicine outpatient practice. J Gen Intern Med. 2011;26:S561.
60. Feletti GI, Carney SL. Evaluating patients’ satisfaction with medical students’ interviewing skills. Med Educ. 1984;18:15–20.
61. O’Hara BS, Maple SA, Bogdewic SP, Saywell RM Jr, Zollinger TW, Smith CP. Gender and preceptors’ feedback to students. Acad Med. 2000;75:1030.
62. Johnston KT, Orlander JD, Spires A, Manning B, Hershman WY. Quality of feedback to students during medicine clerkships: The impact of gender. J Gen Intern Med. 2008;23:384–385.
63. Bose MM, Gijselaers WH. Why supervisors should promote feedback-seeking behaviour in medical residency. Med Teach. 2013;35:e1573–e1583.
64. Lee KB, Vaishnavi SN, Lau SK, Andriole DA, Jeffe DB. Cultural competency in medical education: Demographic differences associated with medical student communication styles and clinical clerkship feedback. J Natl Med Assoc. 2009;101:116–126.
65. Watling C, Driessen E, van der Vleuten CP, Vanstone M, Lingard L. Beyond individualism: Professional culture and its influence on feedback. Med Educ. 2013;47:585–594.
66. Blatt B, Confessore S, Kallenberg G, Greenberg L. Verbal interaction analysis: Viewing feedback through a different lens. Teach Learn Med. 2008;20:329–333.
67. Wald HS, Reis SP, Monroe AD, Borkan JM. “The loss of my elderly patient:” Interactive reflective writing to support medical students’ rites of passage. Med Teach. 2010;32:e178–e184.
68. Reis SP, Wald HS, Monroe AD, Borkan JM. Begin the BEGAN (The Brown Educational Guide to the Analysis of Narrative)—A framework for enhancing educational impact of faculty feedback to students’ reflective writing. Patient Educ Couns. 2010;80:253–259.
69. Wald HS, Borkan JM, Taylor JS, Anthony D, Reis SP. Fostering and evaluating reflective capacity in medical education: Developing the REFLECT rubric for assessing reflective writing. Acad Med. 2012;87:41–50.
70. Miller-Kuhlmann R, O’Sullivan PS, Aronson L. Essential steps in developing best practices to assess reflective skill: A comparison of two rubrics. Med Teach. 2016;38:75–81.
71. Chan P. FAIRness and clinical teaching. Med Teach. 2013;35:779–781.
72. Hesketh EA, Laidlaw JM. Developing the teaching instinct, 3: Facilitating learning. Med Teach. 2002;24:479–482.
73. Sargeant J, McNaughton E, Mercer S, Murphy D, Sullivan P, Bruce DA. Providing feedback: Exploring a model (emotion, content, outcomes) for facilitating multisource feedback. Med Teach. 2011;33:744–749.
74. DeWitt D, Carline J, Paauw D, Pangaro L. Pilot study of a “RIME”-based tool for giving feedback in a multi-specialty longitudinal clerkship. Med Educ. 2008;42:1205–1209.
75. Milan FB, Parish SJ, Reichgott MJ. A model for educational feedback based on clinical communication skills strategies: Beyond the “feedback sandwich.” Teach Learn Med. 2006;18:42–47.
76. Riddle JM, Frellsen S. A.S.A.P.—Teaching students to solicit effective feedback from their residents. J Gen Intern Med. 2004;19:84.
77. Bing-You RG, Bertsch T, Thompson JA. Coaching medical students in receiving effective feedback. Teach Learn Med. 1998;10(4):228–231.
78. Milan FB, Dyche L, Fletcher J. “How am I doing?” Teaching medical students to elicit feedback during their clerkships. Med Teach. 2011;33:904–910.
79. Graddy R, Galiatsatos P, Christmas C. The quality and importance of feedback in professional development of interns and residents. J Gen Intern Med. 2014;29:S230–S231.
80. Johnston KT, Orlander JD, Spires A, Manning B, Warren HY. Lost opportunities: Resident feedback on medical student clinical performance. J Gen Intern Med. 2008;23:341.
81. Etherton GM, Wigton RS, Tape TG. Residents’ perception of the value and frequency of feedback during residency training. J Gen Intern Med. 2005;20:162.
82. Kamath AM, Schwartz AJ, Simpao AF, Lingappan AM, Rehman MA, Galvez JA. Induction of general anesthesia is in the eye of the beholder—Objective feedback through a wearable camera. J Grad Med Educ. 2015;7:268–269.
83. Maguire P, Fairbairn S, Fletcher C. Consultation skills of young doctors: Benefits of feedback training in interviewing as students persists. Br Med J. 1986;292:1573–1576.
84. Humphrey-Murto S, Mihok M, Pugh D, Touchie C, Halman S, Wood TJ. Feedback in the OSCE: What do residents remember? Teach Learn Med. 2016;28:52–60.
85. Harrison CJ, Könings KD, Molyneux A, Schuwirth LW, Wass V, van der Vleuten CP. Web-based feedback after summative assessment: How do students engage? Med Educ. 2013;47:734–744.
86. El Saadawi GM, Tseytlin E, Legowski E, et al. A natural language intelligent tutoring system for training pathologists: Implementation and evaluation. Adv Health Sci Educ Theory Pract. 2008;13:709–722.
87. Pfeiffer CA, Kosowicz LY, Holmboe E, Wang Y. Face-to-face clinical skills feedback: Lessons from the analysis of standardized patient’s work. Teach Learn Med. 2005;17:254–256.
88. Rust CT, Sisk FA, Kuo AR, Smith J, Miller R, Sullivan KM. Impact of resident feedback on immunization outcomes. Arch Pediatr Adolesc Med. 1999;153:1165–1169.
89. Rougas S, Clyne B, Cianciolo AT, Chan TM, Sherbino J, Yarris LM. An extended validity argument for assessing feedback culture. Teach Learn Med. 2013;27(4):355–358.
90. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice feedback interventions: 15 suggestions for optimizing effectiveness. Ann Intern Med. 2016;164:435–441.

Supplemental Digital Content

© 2017 by the Association of American Medical Colleges