Medical education, from the first year of medical school through the end of residency training, requires that the learner take on graded responsibility and autonomy in clinical practice. From a medical student's first history and physical examination of an established patient to a senior resident's performance of a new admission or follow-up of a complex patient, we clinical faculty are responsible for our patients' care. Our patients trust that we, the teaching faculty members, are taking care of them, watching out for their welfare, and watching over the clinical care delivered by all members of the team. Graded responsibility and autonomy include the need for a learner to take increasing ownership of patient assessment, clinical reasoning, subsequent care, and patient outcomes. Similarly, this graded approach requires that we faculty members provide the physical space as well as the diagnostic, reasoning, and treatment opportunities for residents to learn. Learners live the tension of being responsible for their diagnostic and treatment decisions, and we faculty must guide the learners while also ensuring quality care for the patient. Faculty members live the tension between the imperative for quality care and the need to grant residents this graded responsibility and autonomy so that they can take it on. How will residents be able to develop into independent practitioners without this progression? How can attending physicians, who are ultimately responsible for the patient, watch closely and yet maintain the distance needed for learners to develop?
Sterkenburg and colleagues1 defined six sequentially more challenging entrustable professional activities (EPAs) in their study of anesthesiology residents and attendings. The gaps between when the faculty thought the residents were ready for certain EPAs, when the residents thought themselves ready, and when the residents were actually allowed and expected to perform these tasks offer important new information as well as an opportunity to further investigate the entrustment decisions faculty make in practice and supervision.
The authors also described four categories of factors important in an attending physician's entrustment decisions: the resident, the actual task or EPA, the clinical circumstances, and the attending him- or herself. These categories will likely resonate with clinical educators and offer a framework for their further work as they address the EPAs specific to their specialty and develop ways to supervise these tasks at progressively greater distances.
Recognizing that faculty members have already developed ways to observe and evaluate residents, this study offers the opportunity to address some larger educational and practice questions:
1. What is, or should be, the level of supervision of residents at the end of their training period? The level of supervision is critical: too much and the graduate may not really be ready to practice independently; too little and the attending may not be able to assess the resident's level of competence—or even whether the resident has achieved a particular level of competence.
2. What constitutes achievement of a level of competence? Sterkenburg and colleagues' survey and interview results showed some differences of opinion between faculty and residents. Further work is needed both to continue to define competence and to ensure that both residents and faculty know the goals and the evaluation metrics.
3. What is the level of risk or resident independence that any individual faculty member is willing to tolerate? One attending's reasons for granting trust may be another's reasons for providing more direct supervision. The four categories of factors that influence entrustment decisions not only show the dynamic nature of these decisions but also offer anchors for further investigation into how faculty currently make these decisions and how they can approach such decisions in the future.
4. How do faculty “hand off” learners? The handoff could be similar to the “handoff” of a patient, though the issue is one of practice or training—rather than of clinical care. Could faculty use the sequentially more difficult EPAs as one framework for handing off learners? Could faculty use the handoff as one way to have a better practice-level understanding of the learner's competence and thus enhance the process of EPA assessment and progression to more independent practice? For example, could the handoff include the information that the resident in question is competent to perform a lower-complexity EPA but not yet ready for a higher-complexity EPA? Such information could give the receiving attending both insight into the learner's performance and potential practice goals for the learner.
5. What are effective ways to train faculty to evaluate residents? Specifically, how can faculty use EPAs to observe residents and assess their competence? How do we develop the “gold standard” for performance of any given EPA, and then how do we measure our residents against that standard? Similarly, what are effective ways of assessing faculty's competence in evaluation skills and of developing standards for that evaluation?
6. What are effective techniques for direct observation? What are effective techniques for “watching closely at a distance”? How can faculty members incorporate the use of electronic health records and multisource evaluations in “watching” residents as they provide care? How can we design the electronic health record to optimally obtain patient- and physician-specific information in order to assess outcomes of care?
7. How can we ensure that the quality and safety measurements required by the many stakeholders in health care are incorporated into the definition and evaluation of EPAs? EPAs offer an opportunity for clinical care and educational leaders to continue discussions about the combined missions of quality care and education; to define and leverage resources; and to develop ways to fund projects in, and disseminate findings from, the increasingly important area of competency-based assessment. Collaboration among the leaders of clinical care sites and educational programs could be a joint success: ensuring that quality-of-care metrics are attained while also ensuring that trainees achieve their program's educational and practice goals.
8. What is the role of EPAs in the continuum of education? Does the defining of EPAs for graduate medical education have implications for, or create opportunities and synergies with, assessment in undergraduate medical education or continuing medical education?
The use of EPAs may also help address a broader tension in the assessment of residents, a tension between the use of more global evaluations and the use of focused assessments of a specific competency. Ultimately, faculty members and training programs must assess and attest to their residents' achievement of particular levels of competence. These are daunting tasks. The use of the EPA, that is, a discrete clinical episode of care, could help to break down the larger evaluation tasks into more manageable and practical units of clinical observation and assessment. Each EPA can encompass the six core competencies; for example, performance of a procedure incorporates aspects of medical knowledge, patient care, interpersonal and communication skills, professionalism, systems-based practice, and practice-based learning and improvement. Faculty can evaluate some or all of these elements in observing a procedure; however, they are also watching the clinical episode as a whole. It may be easier for residents and faculty alike to use an EPA as the starting point for assessing competence (i.e., “How—and how well—is the resident doing this?”), to view any particular EPA as a “focused global assessment” (i.e., “The resident can do this particular task, this EPA, competently”), and to define the specific aspects of the EPA that provide evidence for performance in the six core competency areas. Further work could focus on using EPAs to enhance resident and faculty efforts at understanding, incorporating, and performing assessments of clinical competence in their daily activities.
Watching closely at a distance and deciding when to entrust residents with patient care are critical parts of supervision. Selecting whom to watch and determining how closely to watch, understanding how to observe, and choosing what to observe are all teaching and evaluation decisions that clinical faculty make daily. Sterkenburg and colleagues have advanced academic medicine's understanding of the entrustment decision-making process. Their work also opens up important new ways to use EPAs as a focus for evaluation, instruction, faculty development, and quality improvement. Entrustment decisions around patient care are at the confluence of training, assessment, quality care, and patient outcomes. As such, grant support to continue innovative investigations of EPAs could enhance medical education. Similarly, EPA research efforts could benefit from a supported clearinghouse or network for patient safety and quality initiatives in combination with training and assessment efforts. Lastly, the work of Sterkenburg and colleagues offers a starting point for further research into the many questions integral to resident supervision and evaluation.