Perspectives

Improving Learner Handovers in Medical Education

Warm, Eric J. MD; Englander, Robert MD; Pereira, Anne MD, MPH; Barach, Paul MD, MPH

doi: 10.1097/ACM.0000000000001457

Abstract

Imagine a student who moves through her clinical years with fragmented supervision, rotating from one department or hospital to another every four to six weeks. During each brief rotation, she encounters a number of new attending physicians. No single physician observes her often enough to assess the breadth of her knowledge and her level of the skills required to become a competent and safe physician. She has an extensive fund of medical knowledge, and assessments of her performance focus mainly on this characteristic. However, throughout her clinical training, she repeatedly exhibits unprofessional behavior, such as copying and pasting daily progress notes, dismissing nurses’ concerns, and being brusque with patients and families. Periodically, a supervising physician or resident notices this behavior and may even express concern to the student in passing. However, because they are not aware that she has previously demonstrated similar behavior and because she does not volunteer that others have raised these concerns before, each individual assumes she must just be “having a bad day.” Not wanting to jeopardize her grade, and thereby her future, they do not document these concerns on her end-of-rotation evaluations. Two separate faculty members approach the dean of student affairs in the hall about the student’s behavior. He reviews the student’s file but finds no record of any such pattern of behavior. He requests that the faculty members document their concerns but never receives their documentation. The student’s Medical Student Performance Evaluation (MSPE), which is predominantly a compilation of the written assessments of her performance, does not mention her repeated interpersonal challenges and unprofessional behaviors. With outstanding United States Medical Licensing Examination (USMLE) scores and grades, she is accepted into a competitive residency program.

Three weeks into the student’s first busy inpatient rotation, a nurse and pharmacist file formal complaints about her unprofessional, dismissive behavior towards them. The program director (PD) is surprised and dismayed. Even when she rereads the student’s MSPE with this new knowledge about the student’s unprofessional behavior, she can find no “red flags” indicating that the student may need focused training, monitoring, and feedback.

Does this scenario sound familiar? Our current system of learner handovers between undergraduate medical education (UME) and graduate medical education (GME), as well as that between GME and post-GME practice, represents a high-risk, highly variable, and untrustworthy form of communication. Communication failures during patient handovers frequently lead to patient harm,1,2 and we believe that the same can occur as a result of communication failures during learner handovers in medical education. We hypothesize that poor learner handovers arise from two root causes: (1) the absence of agreed-on outcomes of training and/or accepted assessments of those outcomes, and (2) the lack of standardized ways to communicate the results of those assessments. In this article, we propose an alternative model that addresses the deficiencies in the MSPE and reduces the growing trust gap between medical schools and residency programs.

Current Challenges With Learner Handovers

Most residency PDs are well aware of the shortcomings of the MSPEs they receive from medical schools. Multiple studies across varied specialties have confirmed that these data fail to reliably predict future performance or capture the range of competencies that PDs are looking for in applicants.3–11 PDs struggle with the challenge of determining which applicants to rank and frequently weigh 45 minutes of interview time more heavily than a four-year medical school record. In short, PDs put limited trust in the MSPE to help them determine which skills medical school graduates actually possess.

If the general public were aware of this challenge, how would they react? Would they expect an airline to hire a pilot from a flight school if that school could not demonstrate or attest to the competencies the pilot had actually mastered? Why should medical training be different or medical schools less accountable? Why do learner handovers between UME and GME continue to be highly variable, of limited trustworthiness, and lacking in transparency?

Ashforth and Anand12 described how organizations come to enact repetitive practices without significant thought about the norms of the behavior. In medical education, medical school deans face tremendous pressures to match students to residency programs and may at times withhold or shade known performance or behavioral deficiencies to meet expectations. In two recent reviews of MSPEs, investigators found wide variation in how schools describe students, using similar terms to describe variable levels of performance.10,13 For example, one medical school used the word excellent to describe students between the 5th and 43rd percentiles, another school reserved excellent for the 71st to 90th percentiles, and a third school used the word outstanding to describe the top quarter of the class and excellent for everyone else.10 In these studies, few schools included all the comparative information suggested in the current Association of American Medical Colleges’ guidelines, with prestigious schools being more likely to withhold all ranking information.10,13 This problem is not new—it stretches back decades.14

We do not believe, however, that deliberate data obfuscation is the main reason for variable quality in UME-to-GME handovers. Instead, we believe that the main reason deans do not share the whole truth about applicants is that they simply do not know the whole truth.

In 1990, George Miller published an article entitled “The assessment of clinical skills/competence/performance”15 that had an immediate and lasting impact on medical education.16 In that article, Miller proposed a pyramid structure with four levels, each of which required specific methods of assessment. Using his recommendations, medical schools have improved their processes for assessing the first three levels of Miller’s pyramid (“knows,” “knows how,” and “shows how”),15 but their processes for assessing the fourth level of performance—the ability to function independently in clinical situations (“does”)—still vary greatly. For medical students, ward rounding mainly involves presentation skills (“knowing how” or “showing how”), and assessment is often based on a description of those skills rather than on attending supervisors or team members witnessing the skills themselves. True competency can be masked when well-meaning residents help students “look good” for supervising faculty during the case presentation.

Success in professional settings has been shown to be associated with a combination of technical and nontechnical factors,17 but it is unclear how valid or reliable current ward-based assessments of these factors are.18 If residency PDs focus on a few summative decision points (e.g., clerkship grades, USMLE scores), then medical students will as well. Studies have demonstrated that students ignore feedback from summative assessments and focus on achieving a grade rather than seeking to improve.19–21 During brief high-stakes ward rotations, students may avoid asking questions out of fear of appearing ignorant or weak and may miss opportunities to address core learning deficits. Deans can only report the information they have, and in many cases, the MSPE is constructed from flawed assessments of inconsistent experiences beyond their immediate control.

The GME apparatus is also complicit in this process. Residency programs, like medical schools, vie for prestige and position. As long as invitations for residency interviews and final rank-order lists continue to be heavily influenced by comparisons of variable and unreliable data, medical schools will continue to supply this information. In addition, residency PDs, facing pressure to get residents into fellowship or practice, often fail to share the unvarnished truth in their letters of recommendation, passing residents with significant competency deficiencies on to fellowship PDs and prospective employers.

Furthermore, residency programs rarely give meaningful feedback to medical schools about the downstream performance of their graduates, and in turn practices rarely give feedback to residency programs. Many medical schools do survey residency PDs about their students, but these surveys are sporadic, suffer from validity and reliability issues, and carry no penalty for noncompletion.

The current learner handover system has the potential to cause harm and to waste vital resources. According to a national survey of internal medicine PDs, the mean point prevalence of problem residents was 6.9%, with 94% of programs reporting that they had problem residents.22 According to another study of a single residency program over 25 years, the prevalence of problem residents was 9.1%.23 Students who complete medical school successfully only to fail out of residency typically have enormous financial debt and poor job prospects. When residents are let go or remediated by their program, the gap they leave must be filled by other residents in the program, leading to increased workload, decreased morale, and potential patient harm. Most chillingly, as Papadakis and colleagues24,25 found, physicians who were disciplined by medical boards years after their training were more likely to have been described, while in medical school and residency, as irresponsible or as having limited insight into their own problematic behaviors.

Continued tolerance of a system in which residency PDs, fellowship PDs, and employers do not accurately know the skills of their prospective trainees or employees is unacceptable. It is equally unacceptable for medical students and residents to spend a great deal of money and time on training only to be left with significant gaps in their knowledge and skills. To address these issues, we propose a model that applies the principles of patient handovers to learner handovers.

The CLASS Handover Model

Knowledge about patient handovers has grown tremendously over the past decade.26 Although many patient handover models exist, one that has been particularly effective is the I-PASS program.27 The I-PASS model includes a statement of Illness severity (i.e., stable, watcher, unstable); a Patient summary of the diagnosis and treatment plan; an Action list of items to be completed by the clinician receiving the handover; a statement of Situation awareness and contingency planning; and Synthesis by the receiving clinician, who has the opportunity to ask questions and confirm the plan.27

What if we took a similarly structured approach to learner handovers? To this end, we present the CLASS model for learner handovers. This model includes a description of the Competencies the learner has attained, a summary of the Learner’s performance, an Action list of the items to be completed by the program receiving the learner, a statement of Situational awareness of the learner’s skills and behaviors, and Synthesis by the receiving program of the learner’s current abilities.

In patient handovers, an alternative way to describe illness severity is to delineate the patient’s level of health attainment. For learners, this characteristic could be described as competency attainment. At present, learners, faculty members, employers, and the public are approaching agreement on the outcomes that indicate attainment of the competencies to practice with indirect supervision (UME to GME, residency to fellowship) and of those to practice unsupervised (GME to practice).28 The creation of the Association of American Medical Colleges’ Core Entrustable Professional Activities for Entering Residency (Core EPAs)29 and the Accreditation Council for Graduate Medical Education’s Milestone Project30 are key steps in this direction. These efforts push our educational system from a flawed emphasis on ranking trainees to a criterion-referenced system that identifies what a given trainee or physician can actually do and what skills that individual has mastered.31 Early results from studies of these systems demonstrate that progression towards competence can be reliably measured.32–35 However, more work in this area is needed.

At the UME level, current clinical rotation grades are high stakes, often leading medical students to hide their weaknesses for fear of receiving low grades. Additionally, these high-stakes grades are often calculated from the results of traditional norm-based assessments (such as the National Board of Medical Examiners shelf examinations), rather than from the results of directly observed workplace-based assessments, which are needed to understand students’ emerging clinical expertise. Reliable assessment requires a proportional relationship between the stakes of an assessment and the number of data points that inform it; higher-stakes assessments require more data.21 In addition, faculty members need training to transition from providing purely summative assessments to providing assessments that focus on professional formation.36

Next, feedback on students’ performance during clinical rotations should be centered around learning and improvement and should include rich narrative comments.21 In addition, assessors should come from multiple professions, not just attending physicians, and be those best placed to observe students’ performance,37 including allied health professionals, patients, and peers.21 Entities such as clinical competency committees should review assessment data periodically to evaluate students’ progress, predict future performance based on past performance, and create improvement plans as needed. Summary decisions should be made after triangulating the many sources of feedback on students’ performance. Good patient handovers include a summary of the assessment data collected with the goal of reducing risk and harm, not creating it. Good learner handovers should do the same.

Once transparent and comprehensive learner assessments are completed, medical schools, learners, and residency programs should work together to construct individual action plans and situationally aware contingency plans. How does a particular student react to acute time pressure? Sleep deprivation? Challenging and suffering patients? How powerful would it be if learners and residency programs together offered an honest synthesis of the learner’s performance under different contexts and, from the first day of postgraduate training, developed strategies to optimize the learner’s performance? How important would it be if medical schools knew that residency programs would be continuing the job they started and determined how the first four years of medical education actually prepared the learner for practice? How influential would it be if residency programs informed medical schools of the gaps in their learners’ assessment and preparedness? How potent would the work of students, medical schools, and residency programs be if they felt like the goal of medical education was to improve learner outcomes in order to improve patient outcomes?

This last point is critical. Studies have shown that patient handover interventions can improve patient outcomes.26,27 We need to perform similar studies to ensure that learner handover interventions do the same.38

An Example of the CLASS Handover Model

We offer the following scenario as an alternative to the one we described at the beginning of this article. Imagine the student who moves through her clinical years under a coherent supervision plan with stable, longitudinal clinical experiences, where each data point collected is optimized for learning. Throughout her clinical training, she encounters not only attending physician supervisors but also peers, nurses, allied health professionals, and patients who provide authentic feedback on her performance. This information is skills based and specific (e.g., taking a social history, performing a medication reconciliation, returning pages in a timely manner, placing sutures) and mapped to the Core EPAs. The student reviews these data in real time, completes her own self-assessment, and periodically meets with a medical school coach who helps her identify strengths and weaknesses. Her work is reviewed every three months by the medical school clinical competency committee, which aggregates the assessment data and determines where she is on the continuum of achieving proficiency and reaching entrustment on the Core EPAs. After six months, the clinical competency committee determines that the student has reached entrustment on 6 of the 13 Core EPAs. The committee members indicate that they would like to see improvement on the one EPA on which she is not progressing well: “collaborate as a member of an interprofessional team.” A coach meets with the student, and together they develop an improvement plan. Progress on this plan is monitored and tracked for the remainder of the year. On the basis of a number of assessments completed by her interprofessional colleagues documenting improvement in her communication and professionalism skills, the student reaches entrustment on this EPA after 11 months.

The student’s MSPE documents her current standing with regard to the Core EPAs as well as the plan for how she will reach entrustment on the remaining EPAs in her final year of medical school. In addition, the MSPE spells out her transition plans, including how she will work towards entrustment on the additional EPAs expected of residents entering her specialty of choice (e.g., suturing a wound, as she plans to enter a surgery residency). Finally, she and her coach develop a monitoring plan to ensure that she maintains her communication and professionalism competencies throughout the remainder of her medical school tenure.

The student is accepted into her selected surgical residency program. After Match Day, the student, her medical school coach, and her residency PD review her medical school performance. The PD synthesizes the content of the MSPE along with an update on the skills she obtained during her final year of medical school. The residency PD, the student, and her medical school coach then develop an action plan for residency. Three weeks into her first rotation, the clinical staff note her strong professionalism and communication skills. The student continues to have periodic coaching and competency reviews, as she did in medical school. At the end of her first year of residency, the residency PD shares her progression towards competence with her medical school dean and coach. When she applies for fellowship, this process repeats itself.

Conclusions

To ensure that learner handovers are successful, we must create a shared mental model of competence; develop and test high-quality tools to assess competency; and create authentic, safe ways to communicate about learners’ performance. Borrowing from the I-PASS model to achieve these goals, we suggest using the CLASS handover model (Competency attainment, Learner summary, Action planning, Situational awareness, and Synthesis), which includes coaching oriented towards improvement along the continuum of education and care. We predict that learners, PDs, deans, and coaches will be considerably more enthusiastic about this handover program than they are about the one we have in practice today.

We must appreciate the harm that may come to patients as a result of poor learner handovers, just as we have for the harm that may come from poor patient handovers.39 Patient handover improvement research has taught us that we need to evaluate our processes by measuring what matters most.26 Similarly, we must evaluate our learner handover processes by measuring what matters most to students, programs, patients, and the public.

Acknowledgments: The authors would like to acknowledge David Leach, MD, for his review and thoughtful comments on an earlier version of this manuscript. He received no compensation for this contribution.

References

1. Joint Commission. Sentinel event data summary. https://www.jointcommission.org/sentinel_event_statistics_quarterly/. Published August 1, 2016. Accessed September 1, 2016.
2. Salas E, Baker DP, King HB, Battles JB, Barach P. The authors reply: On teams, organizations, and safety: Of course…. Jt Comm J Qual Patient Saf. 2006;32:112–113.
3. Borowitz SM, Saulsbury FT, Wilson WG. Information collected during the residency match process does not predict clinical performance. Arch Pediatr Adolesc Med. 2000;154:256–260.
4. Boyse TD, Patterson SK, Cohan RH, et al. Does medical school performance predict radiology resident performance? Acad Radiol. 2002;9:437–445.
5. Kanna B, Gu Y, Akhuetie J, Dimitrov V. Predicting performance using background characteristics of international medical graduates in an inner-city university-affiliated internal medicine residency training program. BMC Med Educ. 2009;9:42.
6. Cullen MW, Reed DA, Halvorsen AJ, et al. Selection criteria for internal medicine residency applicants and professionalism ratings during internship. Mayo Clin Proc. 2011;86:197–202.
7. Harfmann KL, Zirwas MJ. Can performance in medical school predict performance in residency? A compilation and review of correlative studies. J Am Acad Dermatol. 2011;65:1010–1022.e2.
8. Stohl HE, Hueppchen NA, Bienstock JL. Can medical school performance predict residency performance? Resident selection and predictors of successful performance in obstetrics and gynecology. J Grad Med Educ. 2010;2:322–326.
9. Naidich JB, Grimaldi GM, Lombardi P, Davis LP, Naidich JJ. A program director’s guide to the medical student performance evaluation (former dean’s letter) with a database. J Am Coll Radiol. 2014;11:611–615.
10. Robins JA, McInnes MD, Esmail K. What information is provided in transcripts and medical student performance records from Canadian medical schools? A retrospective cohort study. Med Educ Online. 2014;19:25181.
11. Burish MJ, Fredericks CA, Engstrom JW, Tateo VL, Josephson SA. Predicting success: What medical student measures predict resident performance in neurology? Clin Neurol Neurosurg. 2015;135:69–72.
12. Ashforth BE, Anand V. The normalization of corruption in organizations. Res Organ Behav. 2003;25:1–52.
13. Hom J, Richman I, Hall P, et al. The state of medical student performance evaluations: Improved transparency or continued obfuscation? Acad Med. 2016;91:1534–1539.
14. Hunt DD, MacLaren C, Scott C, Marshall SG, Braddock CH, Sarfaty S. A follow-up study of the characteristics of dean’s letters. Acad Med. 2001;76:727–733.
15. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 suppl):S63–S67.
16. Cruess RL, Cruess SR, Steinert Y. Amending Miller’s pyramid to include professional identity formation. Acad Med. 2016;91:180–185.
17. Heckman JJ, Stixrud J, Urzua S. The effects of cognitive and noncognitive abilities on labor market outcomes and social behavior. J Labor Econ. 2006;24:411–482.
18. Schraagen JM, Schouten T, Smit M, et al. Assessing and improving teamwork in cardiac surgery. Qual Saf Health Care. 2010;19:e29.
19. Harrison CJ, Könings KD, Molyneux A, Schuwirth LW, Wass V, van der Vleuten CP. Web-based feedback after summative assessment: How do students engage? Med Educ. 2013;47:734–744.
20. Bok HG, Teunissen PW, Favier RP, et al. Programmatic assessment of competency-based workplace learning: When theory meets practice. BMC Med Educ. 2013;13:123.
21. van der Vleuten CP, Schuwirth LW, Driessen EW, Govaerts MJ, Heeneman S. 12 tips for programmatic assessment [published online November 20, 2014]. Med Teach. doi: 10.3109/0142159X.2014.973388.
22. Yao DC, Wright SM. National survey of internal medicine residency program directors regarding problem residents. JAMA. 2000;284:1099–1104.
23. Reamy BV, Harman JH. Residents in trouble: An in-depth assessment of the 25-year experience of a single family medicine residency. Fam Med. 2006;38:252–257.
24. Papadakis MA, Teherani A, Banach MA, et al. Disciplinary action by medical boards and prior behavior in medical school. N Engl J Med. 2005;353:2673–2682.
25. Papadakis MA, Arnold GK, Blank LL, Holmboe ES, Lipner RS. Performance during internal medicine residency training and subsequent disciplinary action by state licensing boards. Ann Intern Med. 2008;148:869–876.
26. Hesselink G, Schoonhoven L, Barach P, et al. Improving patient handovers from hospital to primary care: A systematic review. Ann Intern Med. 2012;157:417–428.
27. Starmer AJ, Spector ND, Srivastava R, et al.; I-PASS Study Group. Changes in medical errors after implementation of a handoff program. N Engl J Med. 2014;371:1803–1812.
28. Sterkenburg A, Barach P, Kalkman C, Gielen M, ten Cate O. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85:1408–1417.
29. Association of American Medical Colleges. Core Entrustable Professional Activities for Entering Residency. Washington, DC: Association of American Medical Colleges; 2014. https://www.aamc.org/cepaer. Accessed September 1, 2016.
30. Accreditation Council for Graduate Medical Education; American Board of Internal Medicine. The internal medicine milestone project. https://acgme.org/acgmeweb/Portals/0/PDFs/Milestones/InternalMedicineMilestones.pdf. Published July 2015. Accessed September 1, 2016.
31. Choo KJ, Arora VM, Barach P, Johnson JK, Farnan JM. How do supervising physicians decide to entrust residents with unsupervised tasks? A qualitative analysis. J Hosp Med. 2014;9:169–175.
32. Warm EJ, Mathis BR, Held JD, et al. Entrustment and mapping of observable practice activities for resident assessment. J Gen Intern Med. 2014;29:1177–1182.
33. Warm EJ, Held JD, Hellmann M, et al. Entrusting observable practice activities and milestones over the 36 months of an internal medicine residency. Acad Med. 2016;91:1398–1405.
34. Hauer KE, Clauser J, Lipner RS, et al. The internal medicine reporting milestones: Cross-sectional description of initial implementation in U.S. residency programs. Ann Intern Med. 2016;165:356–362.
35. O’Brien BC, Hirsh D, Krupat E, et al. Learners, performers, caregivers, and team players: Descriptions of the ideal medical student in longitudinal integrated and block clerkships. Med Teach. 2016;38:297–305.
36. Patton MQ. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York, NY: Guilford Press; 2011.
37. Crossley J, Jolly B. Making sense of work-based assessment: Ask the right questions, in the right way, about the right things, of the right people. Med Educ. 2012;46:28–37.
38. Flink M, Öhlén G, Hansagi H, Barach P, Olsson M. Beliefs and experiences can influence patient participation in handover between primary and secondary care—a qualitative study of patient perspectives. BMJ Qual Saf. 2012;21(suppl 1):i76–i83.
39. Laugaland K, Aase K, Barach P. Interventions to improve patient safety in transitional care—a review of the evidence. Work. 2012;41(suppl 1):2915–2924.
Copyright © 2016 by the Association of American Medical Colleges