Facilitating Systems of Faculty Learning

The Influence of Relationship-Centered Coaching on Physician Perceptions of Peer Review in the Context of Mandated Regulatory Practices

Arabsky, Sherylyn MHE; Castro, Nadya MHA; Murray, Michael MD, MHSc, CCFP(EM), CHE; Bisca, Ioana; Eva, Kevin W. PhD

Academic Medicine 95(11S):S14–S19, November 2020. DOI: 10.1097/ACM.0000000000003642


Through its social contract, the profession of medicine is granted status, respect, financial reward, and the privilege of self-regulation.1 In exchange, society expects physicians to be altruistic, moral, competent, and focused on the needs of patients. Within this stated ideal, there is tension between autonomy, appropriately granted to the individual practitioner given the need to routinely make context-appropriate decisions, and the regulation of standards established by the profession to ensure patient safety. Even if we assume every practitioner is fundamentally altruistic, moral, and focused on patient needs, maintaining one’s competence is a complex process dependent on more than self-judgment and experience.2,3

Feedback is necessary but not simply a matter of telling people what aspects of performance could be improved.4,5 The uptake of information regarding how one might improve is dependent on the interaction between one’s prior beliefs, alignment of those beliefs with new information, and one’s capacity (real and perceived) to make appropriate changes.6 New information that contradicts prior understanding risks being discounted, sometimes as inaccurate and sometimes as irrelevant.7,8 Such reactions often occur without conscious awareness and may be particularly prominent when the individual senses threat, be it to their self-efficacy, professional identity, livelihood, or prioritization (e.g., “I don’t have time for professional development because I’m busy providing care”).4,9

Credibility is key, as individuals appear to respond more effectively to those who are knowledgeable, engaged, and who have the individual’s interests at heart.10,11 This makes the challenge of supporting professional development particularly great for institutions mandated to assure that performance meets expected standards. Whether a professional college, a specialty board, a hospital’s quality improvement office, an academic department’s head, or a learner’s clinical preceptor, those with authority to assure quality will face difficulty supporting individuals to improve quality because threat is inevitably present. Shirking the responsibility to encourage performance improvement efforts on a large scale, however, can contribute detrimentally to a culture in which quality improvement initiatives are seen as punitive, forced upon those deemed to be struggling.12 It is important, therefore, that all stakeholders in the health professions seek ways to normalize and facilitate the pursuit of excellence by all despite these challenges.13

In efforts to facilitate a learning culture, many institutions have established coaching strategies that connect individuals with supporters who help practitioners identify, reflect upon, and take action toward addressing areas of need. Those who promote such models generally advocate for keeping the locus of control with coachees, increasing receptivity to feedback by empowering them to take ownership over data gathered and to play a proactive role in determining what changes should be made.14,15 Studies of effective coaches suggest the value of building relationships, exploring reactions to feedback, exploring feedback content, and coaching for performance change (R2C2).16

This “R2C2 model” has been used to good effect in residency education when participation was voluntary.17 Not yet determined, however, is whether adoption of deliberate efforts to engage relationship-centered coaching can improve receptivity to feedback in more socially challenging situations in which participants may be less eager. This is an important gap to fill for the sake of determining if coaching practices are perceived as beneficial only by those who anticipate them being beneficial. Further, answering this question will facilitate consideration of whether the threat imposed by interactions with authority figures is so substantial that efforts to prioritize quality improvement in such contexts should be abandoned as beyond their scope of influence.

The purpose of this study, therefore, was to explore whether physician perceptions of peer review in the context of a mandated quality improvement program can be heightened by adopting principles of relationship-centered coaching. To do so, we took advantage of a natural experiment that arose in the College of Physicians and Surgeons of British Columbia’s (CPSBC’s) performance enhancement program. By comparing 3 time periods (an historical control, a period after relationship-centered coaching was established, and a period after assessed physicians were given more control over their review process and more data with which to engage in discussion with peer reviewers), we explored whether these changes were related to changes in physicians’ perceptions of the value of peer review.

Method

Context

The CPSBC is a provincial regulatory authority charged through law with the mandate “to ensure physicians meet expected standards of practice and conduct.”18 In an effort to encourage excellence and professionalism amongst its registrants, the CPSBC runs a quality improvement initiative called the Physician Practice Enhancement Program (PPEP).19 All community-based physicians in BC undergo periodic assessments that are mandatory, but neither exam based nor disciplinary. The program includes submission of a summary of the physician’s practice profile; completion of a multisource feedback (MSF) process that involves anonymous collection of data from physician peers, nonmedical colleagues, and patients20; consideration of the physician’s prescription profile for the preceding 3 months; review of the physician’s continuing medical education activities; and a visit from a physician colleague for review and discussion of patient charts selected randomly by the assessor. In any given year, the majority of reviewed physicians are selected at random, but the College prioritizes review of physicians working in solo practice and those over age 70.

Design and participants

To assess influences on perceptions of the value of this process, we conducted a quasi-experimental analysis examining physicians’ perspectives on their experiences during 3 distinct time periods defined by the CPSBC’s increasing efforts to create a culture of learning (i.e., leveraging the interactions between peer reviewers and physicians to emphasize quality improvement opportunities for all physicians).

During time period 1 (March 2016–December 2016), the program ran as described above. March 2016 was chosen as the start date for this study because that is when the PPEP instituted its current Onsite Feedback Questionnaire (described below) to evaluate physicians’ experience.

Time period 2 began in December 2016 when, in an effort to maximize PPEP’s relationship-centered coaching capacity, assessors began receiving training in the R2C2 method to support feedback interactions. Formal training was launched at a 1-day event for assessors; the materials used are available on request. Subsequently, ongoing support with application of the R2C2 model (from medical advisors employed by the PPEP) was made available. Otherwise, the program was unchanged relative to time period 1.

In March 2017, the beginning of time period 3, additional program enhancements were implemented, aimed at furthering the capacity of assessors and assessees to engage in meaningful discussions about the latter’s practice. PPEP began providing both individuals with the assessee’s MSF report, and assessees were asked to complete an MSF reflection tool before the onsite visit. Previously, the MSF report was provided only after completion of the assessor’s report, precluding any opportunity to discuss it with the peer reviewer. In addition, assessed physicians began receiving the opportunity to self-select a portion of their patient charts for peers to review, in an effort to provide a greater sense of empowerment and to ensure that assessed physicians could discuss cases they considered reflective of their practice. Other program components remained the same relative to the preceding time periods. For the sake of this study, time period 3 concluded at the end of 2018.

Because family physicians constitute the largest group evaluated by the CPSBC, we focused on family physicians who held an active license in a community-based practice in the province of BC and were selected for participation in PPEP during the periods studied. This yielded 1,153 onsite assessments, one for each of 1,153 physicians.

Data collection

During all 3 time periods, a paper-based copy of the Onsite Feedback Questionnaire was provided (along with a stamped envelope addressed to the CPSBC) to the assessed physician by the assessor at the end of each physician interview. Space was included for the assessed physician’s name and College ID number, but both questions were marked as optional. The body of the questionnaire included 12 questions (described in the Results) aimed at capturing physicians’ perceptions of the value of the review process, each accompanied by a 5-point Likert scale (1 = strongly disagree, 5 = strongly agree) and a “not applicable” response option.
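For readers wishing to reproduce this kind of data preparation, the sketch below shows one way the returned questionnaires might be recoded so that “not applicable” responses are treated as missing rather than as scale points. It is purely illustrative: the file name, item columns (q1–q12), and anchor labels are hypothetical, not drawn from the PPEP’s actual data dictionary.

```python
# Illustrative only: recode Likert responses, treating "not applicable" as
# missing so it cannot distort subscale means. All names are hypothetical.
import numpy as np
import pandas as pd

raw = pd.read_csv("onsite_feedback.csv")  # hypothetical questionnaire export

likert = {
    "strongly disagree": 1, "disagree": 2, "neutral": 3,
    "agree": 4, "strongly agree": 5, "not applicable": np.nan,
}
item_cols = [f"q{i}" for i in range(1, 13)]  # the 12 questionnaire items

df = raw.copy()
df[item_cols] = raw[item_cols].replace(likert).astype(float)
```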

Questionnaires returned with a name or ID number allowed for linkage to physician age, gender, and performance rating. The latter rating is assigned by a PPEP medical advisor after reviewing all data collected (MSF ratings, prescription profiles, peer reviewers’ notes, etc.). It reflects the degree of concern regarding the assessed physician’s performance, with higher scores indicating more serious medical record-keeping issues and/or patient safety concerns. Physicians can be required to undertake follow-up of various forms, as determined by a PPEP Panel, thereby reinforcing any perception physicians may hold that the process carries some degree of threat, even though this component of the CPSBC’s practice is aimed primarily at quality improvement.

Analysis

Age, gender, and performance ratings were compared between those who provided their name/ID and the full population of assessed physicians. The relationships between these variables and Onsite Feedback Questionnaire responses were examined using linear regression and univariate correlation coefficients (for continuous variables). This allowed determination of whether any of these factors potentially confounded the association between time period and physicians’ questionnaire responses.
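A minimal sketch of such a confounder screen follows. It assumes a merged analysis file with hypothetical age, gender, performance_rating, and subscale_mean columns; the authors’ actual code and variable names were not published.

```python
# Illustrative only: screen age, gender, and performance rating as potential
# confounders of questionnaire responses. Column names are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("analysis_file.csv")  # hypothetical merged analysis file

# Univariate correlations for the continuous predictors
for predictor in ["age", "performance_rating"]:
    r, p = stats.pearsonr(df[predictor], df["subscale_mean"])
    print(f"{predictor}: r = {r:.2f}, P = {p:.3f}")

# Linear regression with gender included as a categorical predictor
model = smf.ols(
    "subscale_mean ~ age + performance_rating + C(gender)", data=df
).fit()
print(model.summary())
```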

The internal structure of the Onsite Feedback Questionnaire was examined using exploratory factor analysis. Separating questions into subscales was not a purpose of this study, but this analysis facilitated decision making regarding how the data should be aggregated to create the dependent variable of physicians’ perceptions of the value of peer review. Internal consistency was calculated for the same purpose using Cronbach’s alpha to confirm that items within subscales were contributing to measurement of a uniform construct.
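The sketch below illustrates, under the same hypothetical variable names, how a 2-factor exploratory solution and Cronbach’s alpha might be computed. The extraction method and varimax rotation are our assumptions; the paper does not report which the authors used.

```python
# Illustrative only: exploratory factor analysis on the 12 items, followed
# by Cronbach's alpha for a candidate subscale. Item names are hypothetical.
import pandas as pd
from sklearn.decomposition import FactorAnalysis

df = pd.read_csv("analysis_file.csv")  # hypothetical; see earlier sketches
items = df[[f"q{i}" for i in range(1, 13)]].dropna()

# Two-factor solution, mirroring the two subscales reported in the Results
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(items)
loadings = pd.DataFrame(fa.components_.T, index=items.columns,
                        columns=["factor_1", "factor_2"])
print(loadings)

def cronbach_alpha(scores: pd.DataFrame) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

print(cronbach_alpha(items[["q1", "q2", "q3"]]))  # alpha for one subscale
```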

Finally, to address the primary research question, average scores were created (using all items within subscales) and submitted as dependent variables to ANCOVAs that included performance scores and age as covariates to statistically account for differences in perception that might be related to these variables. Time period of assessment was treated as the independent variable with planned comparisons conducted using Fisher’s Protected LSD.
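A hedged sketch of this final analysis appears below. The ANCOVA specification follows the description above; the uncorrected pairwise t tests reflect our reading of Fisher’s Protected LSD (pairwise comparisons run without further correction only when the omnibus test is significant), and all variable names remain hypothetical.

```python
# Illustrative only: ANCOVA with time period as the factor and performance
# rating and age as covariates, then Fisher's Protected LSD-style pairwise
# comparisons. Variable names are hypothetical.
from itertools import combinations

import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from scipy import stats

df = pd.read_csv("analysis_file.csv")  # hypothetical merged analysis file

model = smf.ols(
    "subscale_mean ~ C(period) + performance_rating + age", data=df
).fit()
anova_table = sm.stats.anova_lm(model, typ=2)
print(anova_table)

# "Protected": run uncorrected pairwise tests only if the omnibus F for
# time period is significant.
if anova_table.loc["C(period)", "PR(>F)"] < 0.05:
    for a, b in combinations(sorted(df["period"].unique()), 2):
        x = df.loc[df["period"] == a, "subscale_mean"]
        y = df.loc[df["period"] == b, "subscale_mean"]
        t, p = stats.ttest_ind(x, y, nan_policy="omit")
        print(f"period {a} vs period {b}: t = {t:.2f}, P = {p:.3f}")
```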

Ethics review

This study was deemed exempt from requiring ethics approval by the Behavioural Research Ethics Board of the University of British Columbia based on the Tri-Council Policy framework governing research ethics review in Canada.

Results

Response rate and respondent representativeness

A total of 1,153 physicians participated in the PPEP and had their onsite assessments completed between March 2016 and December 2018. Of these, 840 (73%) completed and returned their Onsite Feedback Questionnaires to the CPSBC. Table 1 illustrates that the response rate improved over the course of the 3 time periods (χ2 = 30.2, P < .001).

Table 1:
Response Rate for Completion of Onsite Feedback Questionnaires
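For illustration, the response-rate comparison reported above can be reproduced with a standard chi-square test of independence on a 2 × 3 table of returned versus unreturned questionnaires by time period. The cell counts below are hypothetical placeholders, not the study’s actual values (which appear in Table 1).

```python
# Illustrative only: chi-square test of returned vs. unreturned
# questionnaires across the 3 time periods. Counts are hypothetical.
import numpy as np
from scipy import stats

#                  period 1  period 2  period 3
table = np.array([[180,      250,      410],    # returned (hypothetical)
                  [110,       90,      113]])   # not returned (hypothetical)

chi2, p, dof, expected = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, P = {p:.4f}")
```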

Of the 840 respondents, 605 (72.0%) identified themselves by name or College ID number. Demographic comparisons of these 605 to the full set of 1,153 physicians who completed their onsite assessment suggest that the sample was representative of the population. Their average age was 52.9 years vs 51.1 years, respectively; 42.0% of the sample identified as female vs 44.5% of the population; and 83.0% of those who completed Onsite Feedback Questionnaires received a performance score that required no follow-up, whereas the same was true for 80.9% of all assessed physicians.

Onsite Feedback Questionnaire analyses

Exploratory factor analysis performed on responses to the Onsite Feedback Questionnaire suggested that the questions divided into 2 groups reflecting physicians’ perceptions of their experience: those related to assessor interactions (Table 2) and those related to the assessment process (Table 3). Cronbach’s alpha calculated using the items loading onto each subscale was 0.94 and 0.89, respectively, with all item-total correlations within each subscale exceeding r = 0.6, illustrating a high degree of association between items within each scale. As such, we averaged scores on each subscale for each respondent and used these means as dependent variables in subsequent analyses.

Table 2:
Mean Ratings (and 95% Confidence Intervals) Assigned to Assessor Interaction Items During 3 Time Periods
Table 3:
Mean Ratings (and 95% Confidence Intervals) Assigned to Assessment Process Items During 3 Time Periods

Univariate correlations and multiple regressions examining the capacity of performance rating, age, and gender to predict assessor interaction and assessment process scores revealed small but statistically significant associations for performance rating (on both outcomes) and age (on assessment process scores). As such, both were treated as covariates in subsequent analyses.

Physician perceptions of the value of peer review

Tables 2 and 3 present the average rating assigned to each item during each time period (along with 95% confidence intervals) for the sake of comprehensiveness. In general, ratings improved over time. For the sake of concision, however, we describe only the formal analyses conducted on the most robust measures (subscale totals).

A one-way between-subjects ANCOVA conducted on the average assessor interaction score, using time period as the independent variable and performance rating and age as covariates, confirmed a statistically significant improvement in physician perceptions (F = 7.7, P = .01; Table 2). Planned pairwise comparisons showed this difference arose because time period 1 elicited lower ratings of assessors’ skills relative to time periods 2 (P < .05; d = 0.37) and 3 (P < .01; d = 0.31). The latter time periods did not differ.

A comparable analysis performed on assessment process scores similarly confirmed a statistically significant improvement in physician perceptions over time (F = 5.4, P < .01; Table 3). In this case, however, planned pairwise comparisons showed that improvement was more gradual: time period 2 did not differ statistically from either time period 1 or time period 3, while time periods 1 and 3 elicited different ratings (P < .01; d = 0.29).
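As an aside for readers unfamiliar with the d values reported here, the sketch below shows a common way of computing Cohen’s d as a standardized mean difference with a pooled standard deviation; the paper does not specify the exact formula used, so this is an assumption, and the variable names are again hypothetical.

```python
# Illustrative only: Cohen's d between two time periods' subscale scores,
# using the pooled standard deviation. Variable names are hypothetical.
import numpy as np
import pandas as pd

def cohens_d(x: np.ndarray, y: np.ndarray) -> float:
    """Standardized mean difference with a pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * x.var(ddof=1) + (ny - 1) * y.var(ddof=1)) / (nx + ny - 2)
    return (x.mean() - y.mean()) / np.sqrt(pooled_var)

df = pd.read_csv("analysis_file.csv")  # hypothetical merged analysis file
p1 = df.loc[df["period"] == 1, "subscale_mean"].dropna().to_numpy()
p3 = df.loc[df["period"] == 3, "subscale_mean"].dropna().to_numpy()
print(f"d = {cohens_d(p3, p1):.2f}")
```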

Discussion

Most practicing physicians can be expected to perform at a high level, and all can be expected to want to perform at a high level. It remains necessary, however, to engage in quality assurance activities for the sake of protecting the public from the minority of cases in which performance has not kept pace with modern health care.21 Rather than expend extensive resources to evaluate the many for the sake of identifying the few, it is sensible for regulatory authorities to approach their mandate of assuring quality by attempting to facilitate its further achievement. Regardless of the strength of such intentions, it is equally sensible for individual practitioners to be wary of the risk inherent in openly engaging in professional development activities when there is threat should their practice be found wanting. Even without the authority of regulators, many have reported that medical culture is generally averse to feedback provision, treating it as a cue that one is struggling.12,22 Such observations have led to calls to establish more normative engagement in continuous quality improvement,13 but doing so remains particularly challenging in contexts where participation is mandated and risk is palpable.

The evolution of the Physician Practice Enhancement Program run by the CPSBC created a natural experiment in which we could test whether efforts to emphasize learning could be translated into more positive reactions to peer review despite understandable reasons for physicians to be anxious about engaging. Many regulatory authorities worldwide have created similar programs and, while the details vary, the general features that made this a valuable context for study (performance data gathered for formative purposes that nonetheless carry some risk of summative impact) are highly prevalent at all levels of health professional training and practice. Examining Onsite Feedback Questionnaires, we saw that training peer assessors to approach their task from a relationship-centered perspective was associated with more positive perceptions of the usefulness of assessor interactions. Although the pool of assessors did not change from one period to another, they were on the whole deemed more credible and supportive of assessed physicians even before the implementation of additional program enhancements directed at offering reviewed physicians more information and a greater sense of empowerment. The improvement amounted to one-third of a standard deviation, which is conventionally treated as a small to moderate effect size but is impressive in this context given the maximum rating of 5 and a baseline rating of nearly 4.5 among historical controls. One might question the functional value of such an improvement within a system that was rated fairly highly to begin with. We would argue, however, that any such benefit is worth pursuing given that (1) the changes made did not require additional resources and (2) the general lore surrounding physician review programs tends to be much more negative, indicating a need to maximize the extent to which reviewed individuals recognize value in the process if it is to be seen broadly as contributing effectively to a professional culture of quality improvement.

The observation that assessment process items increased in a statistically meaningful way relative to the historical controls only during period 3 provides validity evidence supporting the argument that the ratings examined are sensitive to the particular changes made rather than reflecting coarse, overarching judgments of physician satisfaction. Of more relevance to the central research question, this improvement reinforces the perspective that engagement with feedback opportunities is greater when the recipient of feedback is empowered to share their own perspective and to take steps to ensure that the data and dialogue reasonably represent their actual practice.15 Sargeant and colleagues described this issue as reflecting the “credibility of the process” through which data are collected.11 As the PPEP continues to evolve, the ratings suggest there is more to be gained from further developing assessees’ experiences of the review process, whereas perceptions of the quality of their assessor interactions may be hitting a ceiling.

It is important to reinforce, as alluded to above, that the changes described did not require additional investment of resources relative to the historical control period. Assessors in this program have participated annually in workshops aimed at facilitating their role as peer reviewers; the change associated with the increased ratings reported here was predominantly one of redirecting the focus of this activity away from the logistics of review toward relationship-centered feedback practices, as outlined in the R2C2 model.

Limitations of this study include the fact that neither assessors nor assessees were randomized to a particular training condition or assessment process. The assessed physicians who provided the ratings that served as the focus of study were selected into the PPEP at random, but not in a manner that deliberately assigned them to one condition or another. They were blind, however, to the differences in program implementation across time, as they were unlikely to have inquired into the details of how reviewer training or PPEP practices had evolved. Furthermore, it would have been helpful to have access to data on what aspects of physician practice changed as a result of engaging in the PPEP, given that there is no guarantee perceptions of the quality of review processes translate into actual practice enhancement. Nonetheless, we consider such perceptions to be important prerequisites for practice change given all that is known about how unlikely change is to occur when feedback is not considered credible.

In conclusion, there remains much to do and much to learn about how to facilitate a culture of continuous quality improvement within the health professions, particularly in the context of processes run by authority figures. That said, despite the many reasons for such interactions to be approached with caution by both the authority figure and the recipient, processes can be established in a manner that leads to their being deemed valuable; this may be especially so when the authority approaches its task with the sensitivity, respect, and support for autonomy that are central to relationship-centered education.

References

1. Cruess SR, Cruess RL. Professionalism and medicine’s social contract with society. Virtual Mentor. 2004; 6:185–188
2. Eva KW, Regehr G, Gruppen LD. Blinded by “insight”: Self-assessment and its role in performance improvement. In: Hodges BD, Lingard L, eds. The Question of Competence: Reconsidering Medical Education in the Twenty-First Century. New York, NY: Cornell University Press; 2012:131–154
3. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004; 79(10 suppl):S70–S81
4. Mann KV, van der Vleuten C, Eva K, et al. Tensions in informed self-assessment: How the desire for feedback and reticence to collect and use it conflict. Acad Med. 2011; 86:1120–1127
5. Molloy E, Ajjawi R, Bearman M, Noble C, Rudland J, Ryan A. Challenging feedback myths: Values, learner involvement and promoting effects beyond the immediate task. Med Educ. 2020; 54:33–39
6. Eva KW, Armson H, Holmboe E, et al. Factors influencing responsiveness to feedback: On the interplay between fear, confidence, and reasoning processes. Adv Health Sci Educ Theory Pract. 2012; 17:15–26
7. Sargeant J, Mann K, Ferrier S. Exploring family physicians’ reactions to multisource feedback: Perceptions of credibility and usefulness. Med Educ. 2005; 39:497–504
8. Sargeant JM, Mann KV, Ferrier SN, et al. Responses of rural family physicians and their colleague and coworker raters to a multi-source feedback process: A pilot study. Acad Med. 2003; 78(10 suppl):S42–S44
9. Kluger AN, Van Dijk D. Feedback, the various tasks of the doctor, and the feedforward alternative. Med Educ. 2010; 44:1166–1174
10. Watling C. Cognition, culture, and credibility: Deconstructing feedback in medical education. Perspect Med Educ. 2014; 3:124–128
11. Sargeant J, Eva KW, Armson H, et al. Features of assessment learners use to make informed self-assessments of clinical performance. Med Educ. 2011; 45:636–647
12. Watling C, Driessen E, van der Vleuten CP, Vanstone M, Lingard L. Beyond individualism: Professional culture and its influence on feedback. Med Educ. 2013; 47:585–594
13. Eva KW, Bordage G, Campbell C, et al. Towards a program of assessment for health professionals: From training into practice. Adv Health Sci Educ Theory Pract. 2016; 21:897–913
14. Lovell B. What do we know about coaching in medical education? A literature review. Med Educ. 2018; 52:376–390
15. Flannery M. Self-determination theory: Intrinsic motivation and behavioral change. Oncol Nurs Forum. 2017; 44:155–156
16. Sargeant J, Lockyer J, Mann K, et al. Facilitated reflective performance feedback: Developing an evidence- and theory-based model that builds relationship, explores reactions and content, and coaches for performance change (R2C2). Acad Med. 2015; 90:1698–1706
17. Sargeant J, Mann K, Manos S, et al. R2C2 in action: Testing an evidence-based model to facilitate feedback and coaching in residency. J Grad Med Educ. 2017; 9:165–170
18. College of Physicians and Surgeons of British Columbia. Mission, mandate, and values. https://www.cpsbc.ca/about-us/mission. Accessed July 10, 2020
19. College of Physicians and Surgeons of British Columbia. Physician practice enhancement program. https://www.cpsbc.ca/programs/ppep. Accessed July 10, 2020
20. College of Physicians and Surgeons of British Columbia. Multi-source feedback assessment. https://www.cpsbc.ca/programs/ppep/assessment-process/msf. Accessed July 10, 2020
21. Hawkins RE, Irons MB, Welcher CM, et al. The ABMS MOC part III examination: Value, concerns, and alternative formats. Acad Med. 2016; 91:1509–1515
22. Watling C, Driessen E, van der Vleuten CP, Vanstone M, Lingard L. Music lessons: Revealing medicine’s learning culture through a comparison with that of music. Med Educ. 2013; 47:842–850
Copyright © 2020 by the Association of American Medical Colleges