Feedback to Supervisors: Is Anonymity Really So Important?

Dudek, Nancy L. MD, MEd; Dojeiji, Suzan MD, MEd; Day, Kathleen MA; Varpio, Lara PhD

doi: 10.1097/ACM.0000000000001170
Research Reports

Purpose Research demonstrates that physicians benefit from regular feedback on their clinical supervision from their trainees. Several features of effective feedback are enabled by nonanonymous processes (i.e., open feedback). However, most resident-to-faculty feedback processes are anonymous given concerns about power differentials and possible reprisals. This exploratory study investigated residents’ experiences of giving faculty open feedback, including its advantages and disadvantages.

Method Between January and August 2014, nine graduates of a Canadian Physiatry residency program that uses open resident-to-faculty feedback participated in semistructured interviews in which they described their experiences of this system. Three members of the research team analyzed transcripts for emergent themes using conventional content analysis. In June 2014, semistructured group interviews were held with six residents who were actively enrolled in the program as a member-checking activity. Themes were refined on the basis of these data.

Results Advantages of the open feedback system included the delivery of timely feedback that was acted on (thus enhancing residents’ educational experiences) and an improved ability to receive feedback (gained by observing modeled behavior). Although some disadvantages were noted, they were often speculative (e.g., “I think others might have felt …”) and were described as outweighed by the advantages. Participants emphasized the program’s “feedback culture” as an enabler of open feedback.

Conclusions The relationship between the feedback giver and recipient has been described as influencing the uptake of feedback. Findings suggest that nonanonymous practices can enable a positive relationship in resident-to-faculty feedback. The benefits of an open system for resident-to-faculty feedback can be reaped if a “feedback culture” exists.

Supplemental Digital Content is available in the text.

N.L. Dudek is associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada.

S. Dojeiji is associate professor, Department of Medicine, University of Ottawa, Ottawa, Ontario, Canada.

K. Day is senior project manager, Department of Innovation in Medical Education, University of Ottawa, Ottawa, Ontario, Canada.

L. Varpio is associate professor, Department of Medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland.

Funding/Support: This study was funded by a Physicians’ Services Incorporated Foundation Health Research Grant.

Other disclosures: None reported.

Ethical approval: This study was approved by the Ottawa Hospital research ethics board.

Disclaimers: The views expressed herein are those of the authors and do not necessarily reflect those of the United States Department of Defense or other federal agencies.

Previous presentations: This study was presented in part at the Canadian Conference on Medical Education on April 26, 2015, in Vancouver, Canada.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A340.

Correspondence should be addressed to Nancy L. Dudek, Ottawa Hospital Rehabilitation Centre, 505 Smyth Rd., Room 1105D, Ottawa, Ontario, Canada, K1H 8M2; telephone: (613) 737-7350 ext. 75596; e-mail: ndudek@toh.on.ca.

Feedback is described as an integral element of nearly all learning theories, including those from behavioral, cognitive, and social constructivist orientations.1,2 Research has clearly established that receiving specific feedback promotes learning, as evidenced by demonstrated improvement in skills.3 In fact, previous studies have shown that physicians improve their performance in response to trainee evaluations and performance assessments in both educational and clinical practice settings.4–6

Assessments completed by medical trainees (i.e., students and residents) make up the majority of feedback provided to physicians on the quality of their formal teaching (e.g., lectures) and clinical supervision (e.g., bedside teaching). Typically, these assessments have some type of rating scale and a dedicated space for qualitative comments. Usually, the assessments are provided to physicians in an anonymous, aggregate form at the end of a discrete educational activity (e.g., delivering a lecture, supervising a clinical rotation).

Although some professional outcomes are informed by medical trainee assessments (e.g., promotion), the main use for these assessments is to provide formative information (i.e., feedback) to physicians who teach trainees. Therefore, although an “assessment” is typically conceived of in a summative manner, assessments are also frequently used to provide formative information to enhance teaching.

Effective feedback has several features (see Table 1 for a detailed list of these features).6–11 The literature suggests that attending to these features when providing feedback will make the feedback more useful to the recipient, thus creating a better opportunity for feedback to have its intended effect on the recipient.6–11 Counter to conventional wisdom, many of the elements of effective feedback require or are promoted by a nonanonymous or “open feedback” process (see Table 1 for identification of features required or promoted by open feedback). In this article, we define “open feedback” as a situation where the feedback recipient is aware of who the feedback provider is. In other words, the feedback is not anonymous.

Table 1

However, open feedback is typically not employed when the feedback receiver is in a position of power over the feedback giver, as is the case for medical trainees giving feedback to faculty. This trainee-to-faculty feedback is typically provided in an anonymous fashion. Anonymity is thought to relieve the psychosocial pressure that exists when providing negative feedback to those in an authority position.10,12,13 These psychosocial pressures are thought to intensify when the feedback giver and receiver are working together in a hierarchical situation.13 Although to our knowledge the balance of the literature supports anonymous feedback to supervisors, it is important to consider that the literature on this topic is limited. In fact, the majority of the reported research on giving feedback to authority figures has involved studying students at the undergraduate (bachelor degree) level or lower.10

There are several consequences that must be considered in choosing to keep trainee assessments of faculty anonymous. First, in working life, the psychosocial pressures of assessing others in an open manner are real but must be managed by physicians on a regular basis. For example, participating in peer review and reporting unprofessional performance by a colleague are activities that increasingly favor more open processes.10,13 These are challenges that we must prepare our trainees to meet and, ideally, to excel at.

Another consequence of anonymous feedback, albeit unintended, is that anonymity enables the provision of unhelpful statements (e.g., “She is annoying”).10,14,15 Further, in smaller programs, anonymous feedback cannot be timely because a reasonable number of trainees (usually 5–10) must provide feedback before it can be rendered anonymous via aggregate reporting (see Table 1 for identification of elements of effective feedback that are challenging to provide in an anonymous system for a small training program). Depending on the size of the program, collecting sufficient numbers of comments to protect anonymity can take months to years. This is far from ideal if the purpose of the feedback is to promote positive changes in the clinical supervisor’s performance.

In our review of the literature, we did not find any studies comparing open versus anonymous feedback with regard to which approach will create the desired behavior change.10 Without such research, our community runs the risk of employing feedback methods based on our own assumptions, and not on evidence-based analysis. To our knowledge, no residency program with an open feedback system has been systematically studied to explore the advantages and disadvantages of such a system. The objective of this research is thus to formally study the strengths and weaknesses of an open feedback system at the graduate medical education (GME) level from the trainee perspective, as well as factors that enable or obstruct the provision of open feedback.

Method

This study, approved by the Ottawa Hospital research ethics board, used a single-unit case study16 to qualitatively examine trainee experiences of open feedback.

Setting

This study was carried out in the University of Ottawa’s physiatry (also known as physical medicine and rehabilitation, or PM&R) residency training program. We selected this site because it is an example of a small training program (one to two trainees per postgraduate year) that has employed an open feedback system for resident-to-faculty feedback for an extended period of time (since 1996). This system has been observed and evaluated by external review bodies. In the program’s most recent formal accreditation review by the Royal College of Physicians & Surgeons of Canada, the evaluators highlighted the open feedback culture and processes as a strength.17

The program expects residents to provide staff physicians with direct verbal feedback on a regular basis. Written feedback is completed at the midpoint and end of rotations. Residents and faculty meet to discuss this written assessment of the faculty member’s performance as a clinical supervisor. Residents and faculty are taught how to give and receive feedback in a professional manner. This is accomplished through regularly scheduled academic half-day sessions and grand rounds. As well, staff physicians are encouraged to participate in feedback workshops offered by the Faculty of Medicine. Individual coaching by the program director (for the residents and occasionally faculty) and the division chief (for faculty) is also used on an as-needed basis. If a resident feels uncomfortable with the open feedback system, options are available for providing feedback to staff physicians in a confidential fashion (i.e., via the program director, an assigned mentor, and/or the division chief). As well, residents participate in two group review sessions per year where their collective feedback on all aspects of the residency training program is sought and recorded in an anonymous fashion. These sessions are facilitated by a member of the residency training committee other than the program director. The session facilitator encourages open discussion amongst all of the residents regarding all elements of the program, including faculty performance. This feedback is recorded in an anonymous aggregate format. However, given that the facilitator and all of the residents have heard the feedback, it is not truly anonymous.

Participants

Given the exploratory nature of the study, we wanted to examine the experiences of both graduates of the program and current residents. These made up the two participant groups in the study.

Graduates.

Twenty-one individuals graduated from the University of Ottawa’s PM&R residency program between the implementation of the open feedback system (1996) and the time of the study (2014). Two of the study’s coinvestigators (N.L.D. and S.D.) qualified as participants in this cohort (i.e., members of the studied context), but they were excluded, leaving 19 potential participants for the graduate group.

Current residents.

Participants were recruited from the postgraduate year 3, 4, and 5 trainees during the 2013–2014 academic year (n = 8) to form the resident group. We excluded first- and second-year residents from the study because these trainees spend the majority of their time on off-service (i.e., non-PM&R) rotations and were deemed to not have had adequate exposure to the open feedback system.

We sent all potential participants an e-mail outlining the project; the study’s research assistant (RA) (K.D.) then contacted them by telephone to request their participation. Informed consent was obtained prior to data collection.

Data collection

Given that the arguments against open feedback concern “internal” issues, such as fear of the power differential and possible reprisals, we needed data regarding the feelings, perceptions, thoughts, reasoning, and values of participants in relation to their participation in the open feedback program. Observations would not give us access to these internal perspectives; instead, we conducted qualitative, semistructured interviews. This data collection method rests on the premise that the perspectives and experiences of others are meaningful, knowable, and can be made explicit.18 All interviews were conducted by an experienced RA trained in qualitative data collection techniques.19 The RA was not known to any of the participants in either study group. Interviews were audio recorded, transcribed, and rendered anonymous during the transcription process.

We invited graduates of the program to participate in a one-hour semistructured telephone interview designed to probe perceptions of open feedback systems in general and of their unique experiences (Supplemental Digital Appendix 1, http://links.lww.com/ACADMED/A340). Interviews took place from January to August 2014. Participants gave informed consent prior to the interview. They did not receive incentives.

We invited current residents to participate in small-group interviews. We elected to do small-group interviews with these participants because we wanted to ensure that the participants felt empowered to candidly express their views on the open feedback system in which they were actively participating. We felt that small-group interviews could provide a supportive environment to encourage rich discussion. The RA guided participants through a series of questions similar to the semistructured interviews for the graduates. The interviewer also shared some results of the graduate interviews and asked for the current residents’ reactions to and comments on these data (Supplemental Digital Appendix 2, http://links.lww.com/ACADMED/A340). These interviews took place in June 2014. Participants gave informed consent prior to the group interview. They did not receive incentives.

Data analysis

We analyzed the qualitative data set for emergent themes using conventional content analysis.20 Qualitative content analysis involves the systematic classification process of coding and identifying themes or patterns within a data set.20 In this tradition, the language used by participants to describe their experiences is closely examined to develop categories that represent similar meanings.21 Three members of the research team completed the data analysis (the principal investigator, N.L.D., who is a member of the studied context, and two coinvestigators, L.V. and K.D., who are not members of that context). We applied the final coding structure to the data set using NVivo qualitative data analysis software, version 10 (QSR International Pty Ltd) to facilitate cross-referencing.22 To ensure confirmability, the study collaborator who did not participate in the coding processes (S.D.) reviewed the coded transcripts of one graduate participant and of one small-group interview with current residents. Further, we ensured confirmability by maintaining an audit trail of all analytical memos, minutes of meetings, revisions to the coding structures, and node definitions.23,24 Triangulation25 was achieved in three ways: using three analysts, using data from two different participant populations, and using two methods of collection (individual interviews and small-group interviews). The resident participants also served as a member check on the data collected from the graduate participants, allowing us to assess whether recollection bias existed and thus to confirm or disconfirm study findings.

Results

Nine of 19 graduates (47%) and 6 of 8 residents (75%) participated in the study; the residents took part via three small-group interviews.

Participants identified advantages and disadvantages to the system. They clearly described the advantages as outweighing the disadvantages. They also identified several enablers and obstructors to the successful occurrence of open resident-to-faculty feedback.

Advantages

Graduates and current residents both described the same seven specific advantages of open feedback: timely feedback, specific feedback, staff physicians acting on the feedback, feeling accountable for the feedback, feeling that they learned how to give better feedback, feeling an improved ability to receive feedback, and a perception that the staff physicians were more approachable. In Table 2, we provide data excerpts to illustrate these themes.

Table 2

Disadvantages

Study participants articulated three disadvantages that they associated with the open feedback system. These disadvantages were described in two different ways: personally experienced, or speculations of disadvantages other trainees might experience. In other words, participants did report direct experience with disadvantages, but the majority of the time the disadvantages they reported were hypotheses of what other trainees might experience. It is also notable that, while both the graduates and current residents described all three disadvantages, the current residents identified more strongly with these disadvantages than the graduates.

The disadvantages were concern about a negative impact on their relationship with the staff physician, feeling uncomfortable delivering negative feedback, and residents choosing to “moderate the message” by excluding or softening comments when negative feedback needed to be delivered. In Table 3, we illustrate these disadvantages with data excerpts.

Table 3

Enablers and obstructors

Participants in the study identified several features of resident and staff physician behavior that enabled and/or obstructed the open feedback process. Tables 4 and 5 illustrate that the converse of many enablers acts as an obstructor, and vice versa. For example, staff physicians inviting feedback was described as enabling the open feedback process, whereas staff physicians not inviting feedback was reported as obstructing the open feedback process. Also, the participants identified that residents and faculty come to the program with certain “personal qualities” that make them either innately comfortable or uncomfortable with the open feedback system. Although there is little that can be done about these personal qualities, participants identified structural elements of the residency program that acted to either promote the enablers or minimize the impact of the obstructors (see Tables 4 and 5). First, the program explicitly teaches residents how to give useful feedback. Many participants identified this training as helping them to feel more comfortable in giving feedback to their staff physicians. Second, the residents expressed appreciation for knowing that an anonymous feedback option was available to them, even if they frequently chose not to use it. Participants described feeling more comfortable giving open feedback knowing they had an anonymous option should they need the protections offered by anonymity. Finally, the explicit expectation shared with the faculty and residents that open feedback would be used was also described as a structural support enabling their participation in the open feedback processes. Because this was culturally seen as the local norm and expectation, residents described being more at ease giving feedback.

Table 4

Discussion

As our findings show, there was a difference of opinion amongst our study participants regarding the level of comfort that exists with an open feedback process. Two illustrative responses to the question “Would you have ever done anything differently if feedback had been anonymous?” were

I can’t recall of any instances that I would have done anything differently, no. (Graduate 5)

But I don’t believe that in every instance a resident feels free to say exactly what they think. (Graduate 3)

However, findings from this study suggest that an open resident-to-faculty feedback process can have numerous advantages that participants clearly identify as outweighing the disadvantages. Many of the advantages that the participants noted mirror the features of effective feedback cited in the literature, such as timeliness and specificity.6–11 Participants were also clear that, in the context of a small training program, these advantages would be lost in an anonymous system.

What was not anticipated on the basis of the literature was that participants identified the opportunity to provide open feedback to their supervisors as beneficial to their future ability to both give and receive feedback. The importance of this finding should not be overlooked. The current discussion in the literature on the use of feedback for maintenance of competence among practicing physicians suggests that feedback needs to be given as part of an ongoing, supportive relationship with a coach or peer group if physicians are to consider incorporating the feedback into practice change.26 Both the graduates and current residents in this study credited their experiences with the open feedback system as enabling them to be comfortable receiving constructive feedback, especially when given the opportunity to really understand it through an open dialogue. This unintended benefit holds promise for how open feedback systems in GME training may meaningfully contribute to the development of trainees’ independent medical practices.

The concerns about giving feedback to supervisors in an open setting that have previously been reported in the literature, such as strained working relationships, certainly existed in this study’s context. However, our participants identified very few personally experienced occurrences of this disadvantage. In fact, the vast majority of the disadvantages described by participants were based on speculation. In other words, participants reported that they did not personally experience a specific disadvantage, but they could imagine how others might. For instance, we intentionally asked graduates and current residents if the content of their comments was affected by the open feedback system. While these participants described making content changes because of the open feedback system (see Table 3), we were surprised that these changes were generally described as “I didn’t do this, but I could imagine that other residents might” (paraphrase). The majority of our participants did not perceive their own practices as having been modified because of the open context. Certainly, the intensity of these concerns was greater for the current residents than for the graduates. Possibly, with the advantage of the perspective of time, these disadvantages, which seem problematic “in the moment” to a resident, are less important when reflecting back. Potentially, this speaks to the fact that these concerns did not materialize during the residency of graduate participants. We must balance these possibilities with the potential that some of the reported speculations were in fact real but that the participants did not feel comfortable stating this. Therefore, we must study the speculated versus personally experienced construct in different settings and with different approaches to better understand trainee experiences of open feedback.

Not all of the obstructors to open feedback can be overcome. Potentially, the largest concern uncovered in this study was that some residents “moderated the message” (essentially limiting the corrective feedback) when delivering feedback to the staff physicians. This is an obvious concern, as staff physicians cannot improve without information about what needs to be changed. As such, it is clear that a program cannot offer only open resident-to-staff feedback; anonymous methods must also be available to the trainees. Participants who self-identified as less comfortable with open feedback reported that they took advantage of the opportunities to provide anonymous feedback.

As may be expected, there were participants who felt very comfortable providing open feedback to staff physicians and others who were less comfortable. Such personality differences will certainly be present in all trainee groups. This study suggests, however, that by understanding the enablers and obstructors, a GME program can create a structure that supports individuals to engage in open feedback processes.

In studying how this system functions, we found that a mature relationship exists between the staff physicians and residents. This relationship is built on trust. Other, related research investigating successful relationships in the interprofessional health care context identifies trust as a key feature.27,28 For example, Pullon27 reported that interprofessional trust develops from demonstrated competence by both nurses and doctors, which creates credibility; this leads to mutual respect and the development of interprofessional trust. Hsieh and colleagues28 studied health care provider–interpreter trust and illustrated four theoretical constructs that can strengthen or compromise that trust: interpreter competence, shared goals, professional boundaries, and established patterns of collaboration. Our investigation suggests that a similar trust relationship can occur intraprofessionally in successful open resident-to-faculty feedback. It would appear that the residents were viewed as a credible source of feedback on staff performance. In turn, the staff physicians needed to appropriately receive and act on the feedback in a timely fashion. Both of these conditions must exist to create mutual respect between staff and residents. Similar to Hsieh and colleagues’ work, our findings illustrate numerous features that can enable or obstruct the open feedback process by threatening trust. For example, staff physicians acting or not acting on the feedback that is provided either builds or erodes trust.

This study is not without limitations. We assessed only a single GME program, and not all potential participants chose to be interviewed. It is certainly possible that those who declined to participate held strongly negative opinions of the program’s open feedback system. However, given that our findings revealed both positive and negative aspects of the system, we feel that we interviewed a representative sample of the population. This study also examined only the perspectives of the feedback giver (the trainee) and not the perspectives of the feedback recipient (the staff physicians). We intentionally focused on trainee experiences because the research on open feedback commonly points to the potential for this feedback structure to have a negative impact on trainees, not on staff physicians. Therefore, we specifically wanted to explore the impact of open feedback on trainees. To understand how to translate aspects of this system into another GME program, it would be vital to understand the experiences and perspectives of the faculty members in that program. Future research will need to explore this. Despite these limitations to the generalizability of our study findings, we feel that the work presented here is a vital first step toward understanding the opportunities and challenges that an open feedback system can offer to other GME programs.

Physicians want feedback on their performance as clinical supervisors. It is generally assumed that this feedback has to be provided anonymously for residents to feel safe. However, in small training programs, preserving anonymity comes at the cost of timely feedback. This study challenges the belief that resident-to-faculty feedback needs to be anonymous and suggests some real benefits to providing feedback in an open manner. Our findings show that a GME program can put structures in place that enable resident–faculty trust to develop, which may make it possible to reap the benefits of open feedback. Rather than assuming that residents are unwilling to tell staff physicians what they think of their supervisory performance, we propose that additional research is needed to determine whether feedback to supervisors always needs to be anonymous. Potentially, more useful feedback to clinical faculty could be gathered by encouraging open feedback while still offering additional anonymous feedback opportunities for those who require it.

Table 5

References

1. Ilgen DR, Fisher CD, Taylor MS. Consequences of individual feedback on behavior in organizations. J Appl Psychol. 1979;64:349–371.
2. Wilkerson L, Irby DM. Strategies for improving teaching practices: A comprehensive approach to faculty development. Acad Med. 1998;73:387–396.
3. Boehler ML, Rogers DA, Schwind CJ, et al. An investigation of medical student reactions to feedback: A randomised controlled trial. Med Educ. 2006;40:746–749.
4. Cohen PA. Effectiveness of student rating feedback for improving college instruction: A meta-analysis of findings. Res Higher Educ. 1980;13:321–341.
5. Bing-You RG, Greenberg LW, Wiederman BL, Smith CS. A randomized multicenter trial to improve resident teaching with written feedback. Teach Learn Med. 1997;9:10–13.
6. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME guide no. 7. Med Teach. 2006;28:117–128.
7. Hewson MG, Little ML. Giving feedback in medical education: Verification of recommended techniques. J Gen Intern Med. 1998;13:111–116.
8. Sargeant J, Mann K. Feedback in medical education: Skills for improving learner performance. In: Cantillon P, Wood D, eds. ABC of Learning and Teaching in Medicine. Oxford, UK: Blackwell Publishing Ltd; 2010.
9. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–781.
10. Guerrasio J, Weissberg M. Unsigned: Why anonymous evaluations in clinical settings are counterproductive. Med Educ. 2012;46:928–930.
11. Patterson K, Grenny J, Maxfield D, McMillan R, Switzler A. Influencer: The Power to Change Anything. New York, NY: McGraw-Hill; 2008.
12. Wachtel HK. Student evaluation of college teaching effectiveness: A brief review. Assess Eval High Educ. 1998;23:191–212.
13. McCollister RJ. Students as evaluators: Is anonymity worth protecting? Eval Health Prof. 1985;8:349–355.
14. Eva KW. To blind or not to blind? That remains the question. Med Educ. 2012;46:924–925.
15. Conrad D. Deep in the hearts of learners: Insights into the nature of online community. Int J E-Learn Distance Educ. 2002;17:1–19.
16. Yin R. Case Study Research: Design and Methods. 4th ed. Thousand Oaks, Calif: SAGE Publications; 2009.
17. Todesco J. The Royal College of Physicians and Surgeons of Canada On-Site Survey. Ottawa, Ontario, Canada: Royal College of Physicians and Surgeons of Canada; 2010.
18. Patton MQ. Qualitative Research and Evaluation Methods. 3rd ed. Thousand Oaks, Calif: SAGE Publications; 2002.
19. Creswell JW. Qualitative Inquiry and Research Design: Choosing Among Five Approaches. 3rd ed. Thousand Oaks, Calif: SAGE Publications; 2013.
20. Hsieh HF, Shannon SE. Three approaches to qualitative content analysis. Qual Health Res. 2005;15:1277–1288.
21. Weber RP. Basic Content Analysis. Beverly Hills, Calif: Sage; 1990.
22. Kelle U, Prein G, Bird K. Computer-Aided Data Analysis: Theory, Methods, and Practice. Thousand Oaks, Calif: Sage Publications; 1995.
23. Denzin NK, Lincoln YS. Introduction: The discipline and practice of qualitative research. In: Denzin NK, Lincoln YS, eds. Handbook of Qualitative Research. 2nd ed. Thousand Oaks, Calif: Sage Publications; 2000:1–28.
24. Varpio L, St-Onge C. Documenting rigor with the study CV: A tool for research and scholarly work in medical education. Workshop presented at: Canadian Conference on Medical Education; May 2011; Toronto, Ontario, Canada.
25. Stake RE. Case studies. In: Denzin NK, Lincoln YS, eds. Handbook of Qualitative Research. 2nd ed. Thousand Oaks, Calif: Sage Publications; 2000:435–454.
26. Eva KW, Regehr G. Effective feedback for maintenance of competence: From data delivery to trusting dialogues. CMAJ. 2013;185:463–464.
27. Pullon S. Competence, respect and trust: Key features of successful interprofessional nurse–doctor relationships. J Interprof Care. 2008;22:133–147.
28. Hsieh E, Ju H, Kong H. Dimensions of trust: The tensions and challenges in provider–interpreter trust. Qual Health Res. 2010;20:170–181.

Supplemental Digital Content

© 2016 by the Association of American Medical Colleges