Innovation Reports

“You Want Me to Assess What?”: Faculty Perceptions of Assessing Residents From Outside Their Specialty

Burm, Sarah PhD; Sebok-Syer, Stefanie S. PhD; Lingard, Lorelei PhD; VanHooren, Tamara MD, FRCPC; Chahine, Saad PhD; Goldszmidt, Mark MD, PhD, FRCPC; Watling, Christopher J. MD, MMEd, PhD, FRCPC

doi: 10.1097/ACM.0000000000002771

Problem

All Canadian medical schools have started shifting to competency-based medical education (CBME) for postgraduate training and assessment. This transition requires reexamining how medical educators think about and enact current assessment practices. CBME promises frequent observations, personalized assessment, and better documentation of residents’ performance1; the challenge, however, is meeting these demands with limited resources. The medical education community knows, for example, that CBME places more demands on faculty who are already balancing patient care and clinical supervision.2,3 Furthermore, smaller training programs may not have the faculty expertise or resource capacity to manage the volume of assessment data required to implement a new CBME system. Facing such a burden, medical school leaders and faculty need to think creatively about finding efficiency in their approaches to assessment. Toward that end, we suggest a new approach to workplace-based assessments through which faculty assess residents in specialties outside their own, specifically around overlapping competencies related to the CanMEDS intrinsic roles (i.e., the communicator, collaborator, and health advocate roles).

Recognizing the need to use our existing faculty and resources most effectively, we propose this approach, which we have termed “cross-specialty assessment,” as one possible solution. Here, we describe the preparatory training that faculty were required to complete before conducting cross-specialty assessments, the deployment of faculty assessors to different residency programs, and the faculty members’ reported experiences assessing residents outside their own specialty.

Approach

Using a case study methodology4 at a single, medium-sized medical school in Ontario, Canada, we trained faculty to assess the observable task of patient handover. We used the definition of handover articulated by Riesenberg and colleagues: the transfer of responsibility and accountability for some or all aspects of health care for a particular patient or group of patients.5 Handover is a fundamental skill across all specialties, one that physicians must learn and in which they must maintain competence throughout their careers. Handover also occurs frequently and at predictable times, which we believe makes it a feasible skill for those outside the direct patient care environment to assess.

We applied the principles of programmatic assessment to the design of this cross-specialty assessment innovation.6 We recruited 12 faculty members to participate in this pilot study; all held appointments in a clinical department and were required to attend a 4-hour in-person training session on how to assess residents’ delivery of handover. The training took place in May and June 2017. It included a presentation on best practices in handover delivery by an education specialist and PhD researcher (S.B.) and a clinician–teacher and education leader (C.J.W.), followed by videos exemplifying sound and unsound handover practice. Faculty then practiced completing assessments of handover delivery using the same assessment tool they would use once deployed. We asked faculty to rate their general impression of the handover delivery. Specifically, we encouraged them to put themselves in the shoes of the handover recipient, focusing on the overall organization and coherence of the handover. Recognizing the potential discomfort some assessors might experience rating someone from outside their specialty, we ensured ample discussion time within the training session to explore the perspectives and norms of different specialties.

The assessment tool for this innovation (see Appendix 1) included a 5-point entrustment scale (to be completed from the handover receiver’s perspective), a 3-point checklist, and open textboxes for written comments. We did not explore aspects of reliability, such as interrater reliability, as part of this study because Mahmood and colleagues have already shown that raters can reliably assess those outside their specialty.7 Rather, our focus was on exploring the feasibility and credibility of cross-specialty assessment. We felt that by gaining a better understanding of how faculty feel when assessing those outside their specialty, we could both ascertain which elements of the innovation work and potentially alleviate some of the practical challenges of completing workplace-based assessments.

In July and August 2017, we deployed 10 physician faculty members representing 7 different clinical specialties, along with 2 PhD faculty members, to 2 clinical settings: critical care and pediatrics. The inclusion of PhD faculty was a matter of convenience and opportunity. At the time of this pilot study, both PhD faculty members were closely involved in CBME initiatives at the postgraduate level. Given their experience and involvement in medical education, we felt that they represented a novel potential assessment resource worthy of study. Assessors each completed 11 to 26 assessments of resident handover—all outside of their own specialty.

We intended for the completed assessments to guide residents’ learning; being observed by multiple assessors allowed residents to receive cumulative information about their handover performance over time. We informed residents that these assessments were formative, not summative, and had no bearing on their progression in their respective training programs. Assessed residents (n = 20) ranged in training level (postgraduate years 1–4) and were enrolled in the following specialty programs: internal medicine (n = 8), pediatrics (n = 10), urology (n = 1), and neurosurgery (n = 1). A member of the research team (S.B. or S.S.S.-S.) was present during all assessments to provide assessors with feedback, collect assessments, and distribute copies of the completed assessments to residents.

After the intervention (between August and November 2017), we interviewed the 12 assessors in person to hear their views on cross-specialty assessment (see Supplemental Digital Appendix 1, available at http://links.lww.com/ACADMED/A679, for the interview guide). Interviews were 20 to 60 minutes in length, audio-recorded, transcribed verbatim, and deidentified. After the transcripts were entered into NVivo qualitative data analysis software (version 10; QSR International, Melbourne, Australia), we began a comprehensive process of data coding and identification of themes. One of us (S.B.) completed the coding process, meeting regularly with S.S.S.-S. and C.J.W. to analyze the data. Next, a larger team (L.L., T.VH., S.C., and M.G.) met to ensure that the developing themes were grounded in the collected data. The Office of Human Research Ethics at Western University provided ethical approval for this pilot study.

Outcomes

Collectively, the 12 faculty assessors completed 174 assessments. We analyzed a total of 171 assessments; 3 were removed because the assessor selected no distinct performance rating. Descriptive statistics showed that assessors used the full range of options on the entrustment scale. All assessments contained written comments, which included a mix of formative assessments for the resident and comments on contextual information influencing handover.

Faculty members’ reported experiences with cross-specialty assessment varied. Below, we summarize our findings and provide quotations from faculty to illustrate themes, using deidentified assessor numbers and degrees (MD or PhD) to show the range of those providing feedback.

Familiarity with task and specialties

While some assessors found completing workplace-based assessments “a relatively easy process” (A-005, MD), others expressed concerns about being able to appropriately assess handover due to environmental distractions and interruptions to which they were unaccustomed:

The environment in ICU [intensive care unit] is very different. . . . It’s a noisier environment, and there are groups of people that are packed close together. . . . I worried about whether I’d be able to witness or hear the handover, as it was being done. (A-006, PhD)

Many assessors expressed greater confidence completing assessments in one specialty over another:

Even though I’m a plastic surgeon, I have been in the ICU setting before as a trainee, so I’m familiar with what the lay of the land was, how they did their handovers. (A-011, MD)

Assessors’ comfort with the assessment task appeared linked to their clinical familiarity with the deployment settings.

Is handover an appropriate task for cross-specialty assessment?

Assessors identified variability between the genre of handover they were trained to assess and the handover they observed residents perform, particularly in critical care. As one assessor recalled:

What I was prepared to do was observe two trainees giving each other one-on-one handover . . . in the ICU setting . . . there were multiple residents providing handover . . . it almost felt like they were just providing a progress update as opposed to preparing the on-call person for handover. (A-012, MD)

While training provided an overview of what information should be transferred to ensure delivery of safe care, it appeared less effective at capturing the nuances of handover in each unique clinical environment. Assessors without previous exposure to critical care or pediatrics appeared most affected.

Many assessors labeled what they observed as something other than handover. One assessor referred to the experience in critical care as:

an amalgamation of handover, part examination and . . . parts of therapy planning—going beyond what they needed for the night . . . it was too much for handover. (A-008, MD)

Separating handover from other observed clinical tasks complicated the completion of assessments. Others wondered whether the presence of one or more senior-level physicians (e.g., fellows or attendings) influenced the genre of handover they observed:

If you, as the attending or the fellow who is on call, have already obtained handover and know what needs to be done, then am I really giving you handover or am I just giving you a progress update, and is this just for the purpose of teaching? (A-012, MD)

Given that handover in critical care contained a vast amount of clinical information, assessing individual residents was challenging for some assessors because they questioned whether they “missed something” (A-001, MD). As a result, a large proportion of the written comments that residents received focused on the nonmedical expert components of handover (e.g., volume, pace, eye contact).

Assessors’ perceptions of their credibility

When we asked faculty assessors if they thought residents considered them credible assessors, the response was one of doubt. Many assessors questioned their credibility, particularly those whose professional experience was far removed from the deployment settings. Assessors often second-guessed their written feedback—“Was I too harsh?” (A-001, MD)—or they questioned the value of their assessment data in the eyes of residents: “Will they just think, ‘Well, what does she know?’ and completely disregard everything?” (A-002, MD). When assessors were further probed about their credibility, they indicated that credibility was based on 2 main components: medical expertise and demonstrated experience in a specific clinical specialty.

Feasibility: The “f word” in cross-specialty assessment

Assessors valued the opportunity to see which assessment approaches might be effective in meeting increased assessment demands. To illustrate, one faculty member said:

I am the [CBME] lead for my program, so it was nice to get in there and see how some of these things might work. . . . I will probably adopt some of those ideas when [CBME] comes around for us. (A-008, MD)

The pilot showcased the potential benefits of cross-specialty assessment, mainly “that you have a larger pool of people from which to draw” (A-006, PhD) and the opportunity for faculty to broaden their understanding of assessment. However, assessors equally identified a number of challenges to sustaining this approach:

The challenges I think will always be time management. . . . It’s a volunteer thing to do . . . even if you have the time, what do you get out of it for yourself? Only doing it for the academic fun in it will wear off soon. (A-007, MD)

Some assessors suggested that they would be more comfortable assessing in “adjacent specialties” (A-001, MD), such as subspecialties within their broad area of expertise. Others identified a greater willingness to assess foundational tasks over those requiring specific medical expertise:

It’s much easier to envision for early entry-type tasks . . . like approach to history and physical exams, things like medical documentation . . . for me to go in and assess anything specific to medical content or procedure abilities, I think would be really difficult. (A-005, MD)

Assessors identified that the sustainability of providing this kind of assessment across specialties requires careful consideration: “Feasible is a loaded word. . . . I think certainly I could assess people in ICU, or Medicine, but feasible from the time standpoint . . . it’s probably not” (A-011, MD).

Assessors’ response to the question of feasibility was often “It depends.” As one assessor explained: “It’s going to depend on the task being assessed, and . . . how we define the necessary expertise” (A-009, PhD). Even for handover, largely seen as a communication task, assessors expressed concern about what it was feasible to expect faculty from another specialty to assess:

You can’t just receive a bunch of criteria, and then say, “Okay, well, I’m just going to listen for these criteria, even though I don’t really know about them, and I’m going to determine whether those criteria are being met in a technical area.” (A-006, PhD)

Assessors noted that it was at times challenging to disentangle medical expertise from communication, as both were intricately woven into the performance of handover.

Next Steps

Through this pilot study, we uncovered important challenges to address in the training of faculty to assess residents in specialties outside their own. Firstly, the selection of tasks appropriate for cross-specialty assessment requires careful consideration. We selected handover because it follows a standard format that we thought would translate well for the purposes of exploring this innovation. However, our assessors’ experiences highlighted that handover encompasses more than medical expertise and clear communication; that is, the contextual differences across specialties make assessment of the task fraught with challenges. Assessors described a number of environmental and situational factors influencing their ability to distinguish residents’ independent performance of handover. This finding does not come as a surprise; evidence shows that context specificity is a problem when assessing trainees in the clinical workplace3 and that any one trainee’s performance is often coupled with or influenced by that of other team members.8 We suggest a possible contextual threshold for cross-specialty assessment: tasks with high context specificity might not be suitable for this approach to assessment. Introducing higher-fidelity simulation into the training protocol may also better prepare faculty to assess residents amidst the chaos of the naturalistic clinical environment.

Secondly, as faculty are busy with their clinical and academic responsibilities, the recruitment and retention of assessors will be challenging. Departments need to allocate and protect time for individual assessors to receive faculty training and complete workplace-based assessments. Moreover, the time faculty spend developing expertise in assessment should count in their consideration for reappointment, tenure, and/or promotion.9

Sustaining cross-specialty assessment will take time, resources, and buy-in from both faculty and residents. Future iterations of this assessment strategy could measure interrater reliability among assessors, including among assessors from the same, a proximal, or a different specialty. Understanding the outcomes of this innovation from the perspective of residents is also essential. Our findings suggest that, with further research and careful consideration, individuals could be trained to assess residents in specialties outside their own at pivotal points throughout the residents’ training. Our aspiration is to open up future opportunities for shared assessment resources across programs, building a much-needed efficiency into the CBME assessment system.

Acknowledgments:

The authors want to acknowledge the faculty members who participated in this assessment innovation for their time and commitment.

References

1. Touchie C, ten Cate O. The promise, perils, problems and progress of competency-based medical education. Med Educ. 2016;50:93–100.
2. Holmboe ES, Sherbino J, Long DM, Swing SR, Frank JR. The role of assessment in competency-based medical education. Med Teach. 2010;32:676–682.
3. Gruppen LD, ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced requirements for assessment in a competency-based, time-variable medical education system. Acad Med. 2018;93(suppl 3):S17–S21.
4. Stake RE. The Art of Case Study Research. Thousand Oaks, CA: SAGE Publications; 1995.
5. Riesenberg LA, Leitzsch J, Massucci JL, et al. Residents’ and attending physicians’ handoffs: A systematic review of the literature. Acad Med. 2009;84:1775–1787.
6. Van Der Vleuten CPM, Schuwirth LWT, Driessen EW, Govaerts MJB, Heeneman S. Twelve tips for programmatic assessment. Med Teach. 2015;37:641–646.
7. Mahmood O, Dagnæs J, Bube S, Rohrsted M, Konge L. Nonspecialist raters can provide reliable assessments of procedural skills. J Surg Educ. 2018;75:370–376.
8. Sebok-Syer SS, Chahine S, Watling CJ, Goldszmidt M, Cristancho S, Lingard L. Considering the interdependence of clinical performance: Implications for assessment and entrustment. Med Educ. 2018;52:970–980.
9. Irby DM, O’Sullivan PS. Developing and rewarding teachers as educators and scholars: Remarkable progress and daunting challenges. Med Educ. 2018;52:58–67.

Appendix 1 Handover Observation Assessment Tool Used in Cross-Specialty Assessment of Residents, 2017

Copyright © 2019 by the Association of American Medical Colleges