Entrustment Decisions: Bringing the Patient Into the Assessment Equation

ten Cate, Olle, PhD

doi: 10.1097/ACM.0000000000001623
Invited Commentaries

With the increased interest in the use of entrustable professional activities (EPAs) in undergraduate medical education (UME) and graduate medical education (GME) come questions about the implications for assessment. Entrustment assessment combines the evaluation of learners’ knowledge, skills, and behaviors with the evaluation of their readiness to be entrusted to perform critical patient care responsibilities. Patient safety, then, should be an explicit component of educational assessments. The validity of these assessments in the clinical workplace becomes the validity of the entrustment decisions.

Modern definitions of the validity of educational assessments stress the importance of the purpose of the test and the consequences of the learner’s score. Thus, if the learner is a trainee in a clinical workplace and entrusting her or him to perform an EPA is the focus of the assessment, the validity argument for that assessment should include a patient safety component.

While the decision to allow a learner to practice unsupervised is typically geared toward GME, similar decisions are made in UME regarding learners’ readiness to perform EPAs with indirect supervision (i.e., without a supervisor present in the room). Three articles in this issue address implementing EPAs in UME.

The author of this Commentary discusses the possibility of implementing true entrustment decisions in UME. He argues that bringing the patient into the educational assessment equation is marrying educational and health care responsibilities. Building trust in learners gradually, from early in the continuum of medical education onward, may reframe our vision of assessment in the workplace.

Th.J. (Olle) ten Cate is professor of medical education and director, Center for Research and Development of Education, University Medical Center Utrecht, Utrecht, The Netherlands.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Editor’s Note: This is an Invited Commentary on Lomis K, Amiel JM, Ryan MS, et al. Implementing an entrustable professional activities framework in undergraduate medical education: Early lessons from the AAMC Core Entrustable Professional Activities for Entering Residency pilot. Acad Med. 2017;92:765–770; on Brown DR, Warren JB, Hyderi A, et al. Finding a path to entrustment in undergraduate medical education: A progress report from the AAMC Core Entrustable Professional Activities for Entering Residency entrustment concept group. Acad Med. 2017;92:774–779; and on Favreau MA, Tewksbury L, Lupi C, et al. Constructing a shared mental model for faculty development for the Core Entrustable Professional Activities for Entering Residency. Acad Med. 2017;92:759–764.

Correspondence should be addressed to Olle ten Cate, Center for Research and Development of Education, University Medical Center Utrecht, PO Box 85500, 3508 GA Utrecht, The Netherlands; telephone: +31 (0) 88-75-570-10; e-mail: t.j.tencate@umcutrecht.nl.

Three articles in this issue of Academic Medicine relate to entrustable professional activities (EPAs).1–3 EPAs provide a structure for teaching and assessment in the clinical workplace and were initially conceived for graduate medical education (GME). They are well-defined tasks that may be entrusted to learners once the learners have demonstrated that they possess the required competencies to perform these tasks unsupervised. The three articles in this issue describe EPAs in undergraduate medical education (UME), but the principles of entrustment in GME and UME are not essentially different. These similarities apply particularly to the assessment of learners and the consequences for faculty development, as described in this issue by Brown and colleagues2 and by Favreau and colleagues,3 respectively.

Entrustment decision making enriches our current workplace-based assessment procedures. Assessment, from the traditional perspective of education, determines whether a learner meets set standards, as demonstrated during an examination. It is a dialogue between two parties—the learner and the school or the educator. The learner aims to demonstrate her or his knowledge, skills, and behaviors to the satisfaction of those who set the standards. Such assessments should be valid; that is, they should measure what they aim to measure. Validity can be argued from the perspective of the content, response process, internal structure, relationship to other variables, and consequences of the test.4,5 In the world of educational science and psychology, the validity of an educational assessment is no longer viewed as a fixed property of the test (e.g., a written test, an objective structured clinical exam, or an oral exam is not “valid” per se) but as a property of its purpose—that is, the consequences of the score the learner receives (e.g., being a high-stakes or low-stakes assessment).4,6 Enabling academic progress is the dominant consequence in educational testing.

In workplace-based curricula that incorporate EPAs, another dimension is added. The question is not only whether the learner performed well but also whether the learner is ready for a decrease in the level of supervision. This new dimension is both an assessment issue and a patient safety issue. Is the learner ready to be left alone with the patient? The patient then becomes part of the validity argument. Workplace-based assessments have been criticized for leniency bias, also called generosity error or failure-to-fail.7 Bringing the patient into the assessment equation may serve to counter this bias, because of the now dual responsibility of educators to train learners as well as to care for patients.8 Clinicians may think twice before entrusting a learner and potentially putting their patients at risk if the assessment is framed as an entrustment decision with patient safety implications.

Entrustment decisions happen every day in clinical settings. Most are situation-specific and ad hoc; these everyday decisions, made before graduation, never have the summative nature of the decision to license a learner to practice unsupervised. Until now, the only true summative entrustment decision has been made when learners register for a specialty; at that point, they are entrusted to perform every component of practice at once. Competency-based medical education suggests that learners should reach this point whenever they are ready for it, rather than at a preset moment in time.9 But a license to practice a specialty does not easily allow for this suggested flexibility: the range of required competencies is simply too broad to evaluate in a valid way all at once.

EPAs, however, allow educators to make entrustment decisions not for every component of practice at once but for individual units of practice that can be reasonably observed and evaluated. EPAs thus enable competency-based medical education to be more time-variable. While entrustment decisions regarding EPAs are not yet legally binding, the essence of an EPA-based model requires learners to assume responsibility for units of practice before they are fully licensed to practice their specialty. This model also prepares learners more gradually and safely for unsupervised practice than would a sudden abolishment of all supervision at the moment of licensing. Enabling learners to be responsible for separate EPAs before they are responsible for the full breadth of patient care requires educators to provide supervision at a distance for those EPAs whenever possible and to create an environment that places learners in a responsible position.

Summative entrustment decisions, both those that are made at the completion of a program, as happens now, and those that are made for separate EPAs, as might happen in the future, require careful validity testing, as they are high-stakes decisions. Do observations truly pertain to the EPA as it was meant (content validity)? Can the rating form truly capture a valid estimation of the supervision a learner requires, and do clinicians understand it and use it well (response process validity)? Do the number and diversity of sources of information—short, sampled observations; case-based discussions; longitudinal observations; and other sources—align in their support for the decision (internal structure validity)? Would the decision be generally supported by other health care team members (relationship to other variables validity)? And finally, what will be the consequences of the decision to grant the learner more autonomy (consequences validity)?

The EPA concept was created for GME. The question now is whether it can be translated to UME. To answer this question, we must first ask several subquestions. First, to what extent can medical students be left alone with EPAs? That is, how much responsibility can they be given, how much responsibility is justified, and can entrustment decisions in UME really have the intended consequences? Second, how can the validity of entrustment decisions be argued? That is, what does it take to make entrustment decisions, and how can they be grounded? And finally, how, if ever, can time variability be realized in UME? Defining the units of practice (EPAs) that learners must master before entering residency is one step in translating the EPA concept from GME to UME, a step well described by Lomis and colleagues1 in this issue. However, implementing entrustment decision making and creating a time-variable entrustment process are quite different issues.

My colleagues and I have suggested that medical students may be trained up to the level of indirect supervision for EPAs (i.e., the execution of an EPA without a supervisor physically present but with one quickly available)—that is, to level 3 of the supervision scale.10 Reaching this level is a significant achievement, as the system must trust students to report, interpret, and manage expected and unexpected events and to organize help if needed. Arriving at a summative entrustment decision for this level then requires, as Favreau and colleagues3 nicely put it in their article, that “both learners and clinical faculty educators engage in the entrustment experience, partnering in the pursuit of trust.” This reciprocity is important. If educators trust learners only when those learners can discern their limitations and ask for help,11 learners must be able to trust educators to provide that help without hesitation or humiliation.

In their article in this issue, Brown and colleagues2 suggested establishing entrustment committees (comparable to the clinical competency committees in GME). Doing so makes sense, as valid summative entrustment decisions should not be made by individual educators but by multiple clinicians who trust each other’s judgments and who can oversee and weigh the information sources that inform such decisions. Summative entrustment decisions also may be informed by a series of ad hoc entrustment decisions: attendings observing learners who act conscientiously and responsibly may briefly evaluate and report this information. An e-portfolio can be used to document such encounters and to help ground summative entrustment decisions.

A summative entrustment decision for a medical student should have serious consequences. Students must deeply feel that they are being trusted to legitimately participate in the health care community—that is, that they are responsible for the health care of patients. In these situations, patient safety can be safeguarded by a supervisor who is present but not directly visible. Sheu and colleagues12 recently showed how residents monitor interns somewhat unobtrusively using electronic medical records to build their trust in the interns.

Finally, can a time-variable entrustment process be realized in UME? I believe that reaching this goal is possible through two approaches. The first is to abandon the preset moment of graduation (“If you are a member of the Class of 2022, your graduation date will be Friday, May 13, 2022”). The second is to create elective EPAs and allow for variation in the portfolios with which students graduate. Examples of both approaches exist in practice. Unlike in the United States, in countries such as the Netherlands, medical school starts at a fixed moment but ends at a variable moment that depends on each student’s progress; residencies then also start at variable moments. In the United States, the Education in Pediatrics Across the Continuum Project (https://www.aamc.org/initiatives/epac) moves away from a fixed graduation by offering a smooth EPA- and competency-based UME–GME connection. Similarly, University Medical Center Utrecht recently established its first program to offer final-year medical students a package of three elective critical care EPAs in addition to the core EPAs. Mastering these elective EPAs eases students’ transition into four different residency programs and prepares them for a shorter postgraduate training period. These developments are important steps toward the implementation of a time-variable entrustment process.

In conclusion, if we educators want learners to find meaning in their training, we must guide, observe, and monitor them up to the point of entrustment and then actually show our trust by affording them responsibilities in patient care. Assessment in the workplace has too long been separated from entrustment to provide patient care. Bringing the patient into the educational assessment equation is marrying educational and health care responsibilities. We must be ready to build programs that allow for the true entrustment of learners throughout the continuum of medical education.

Acknowledgments: The author thanks Dr. Patricia O’Sullivan from the University of California, San Francisco, who, together with the author, developed a workshop using the entrustable professional activities validity framework for the annual International Advanced Assessment Course in London, and Dr. Sally Santen from the University of Michigan, who reviewed an earlier version of this manuscript.


References

1. Lomis K, Amiel JM, Ryan MS, et al. Implementing an entrustable professional activities framework in undergraduate medical education: Early lessons from the AAMC Core Entrustable Professional Activities for Entering Residency pilot. Acad Med. 2017;92:765–770.
2. Brown DR, Warren JB, Hyderi A, et al. Finding a path to entrustment in undergraduate medical education: A progress report from the AAMC Core Entrustable Professional Activities for Entering Residency entrustment concept group. Acad Med. 2017;92:774–779.
3. Favreau MA, Tewksbury L, Lupi C, et al. Constructing a shared mental model for faculty development for the Core Entrustable Professional Activities for Entering Residency. Acad Med. 2017;92:759–764.
4. Plake BS, Wise LL, Cook LL; American Educational Research Association, American Psychological Association, National Council on Measurement in Education. The Standards for Educational and Psychological Testing. Washington, DC: American Educational Research Association; 2014.
5. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837.
6. Cook DA, Brydges R, Ginsburg S, Hatala R. A contemporary approach to validity arguments: A practical guide to Kane’s framework. Med Educ. 2015;49:560–575.
7. Albanese MA. Challenges in using rater judgements in medical education. J Eval Clin Pract. 2000;6:305–319.
8. Kogan JR, Conforti LN, Iobst WF, Holmboe ES. Reconceptualizing variable rater assessments as both an educational and clinical care problem. Acad Med. 2014;89:721–727.
9. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From Flexner to competencies. Acad Med. 2002;77:361–367.
10. Chen HC, van den Broek WE, ten Cate O. The case for use of entrustable professional activities in undergraduate medical education. Acad Med. 2015;90:431–436.
11. Kennedy TJ, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83(10 suppl):S89–S92.
12. Sheu L, O’Sullivan PS, Aagaard EM, et al. How residents develop trust in interns: A multi-institutional mixed-methods study. Acad Med. 2016;91:1406–1415.
© 2017 by the Association of American Medical Colleges