Jones, M. Douglas Jr. MD; Rosenberg, Adam A. MD; Gilhooly, Joseph T. MD; Carraccio, Carol L. MD, MA
Dr. Jones is professor, Department of Pediatrics, University of Colorado School of Medicine and The Children's Hospital, Aurora, Colorado.
Dr. Rosenberg is program director, Pediatric Residency Program, and professor, Department of Pediatrics, University of Colorado School of Medicine and The Children's Hospital, Aurora, Colorado.
Dr. Gilhooly is vice chair for education, director of fellowship education, and professor, Department of Pediatrics, Oregon Health & Science University School of Medicine, Portland, Oregon.
Dr. Carraccio is associate chair for education and professor, Department of Pediatrics, University of Maryland School of Medicine, Baltimore, Maryland.
Correspondence should be addressed to Dr. Jones, MS 8402, Education 2 South, 13121 E. 17th Avenue, Aurora, CO, 80045; telephone: (303) 724-2851; fax: (720) 777-7323; e-mail: email@example.com.
The concept of linking general competencies1,2 to residency training has changed the phenotype of graduate medical education (GME). Since 2001, when the six core competencies were introduced in the United States by the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS), they have had a profound impact not only on GME but also on maintenance of certification (MOC)3 and undergraduate medical education (UME).4
Despite widespread adoption, skeptics voice concerns that competency-based medical education (CBME) may be more fad than substance.5 Some suggest that CBME undermines the very basis of residency training.6 They argue that the division of complex professional behaviors into competencies (which are then inevitably divided into subcompetencies that become learning objectives driving curricula and assessment) is reductionist and artificial.5–11 Snadden10 warns against reducing complex behaviors required for professional life activities into the smallest observable units of behavior in order to measure them “objectively”:
At present our assessment methods stem from the reductionist philosophy that underpins our discipline, and we are, thus, trapped by our need to compare like with like. Until we can make a mental shift that allows us to include a more holistic approach to assessment, one which values the development of individuals over time, we will continue to struggle to measure the immeasurable, and may end up measuring the irrelevant because it is easier.
Similar concerns have been raised in other areas of higher education.11 Although few critics would advocate retreat “into the mist of holistic waffle about professional experience and the ineffability of … intuitive wisdom,”11 they raise legitimate points that become tangible when one attempts to base medical education on six decontextualized competencies that serve as the starting point from which all else flows.
Unless the general competencies are clearly linked to clinical care, they are difficult to grasp. When asked to list or define them, residents are rarely able to go beyond Patient Care and Medical Knowledge.12–15 The patient care from which residents learn is not consciously structured around mastery of general competencies.14,16 During many hours of learning-related conversations in one large pediatric residency program, the competencies were never mentioned as such.14 Moreover, CBME calls for competency-based assessment.12 Assessment of an arguable list of deconstructed behaviors removed from the environment of direct patient care and the longitudinal professional development of physicians tends to be seen as a burdensome add-on by faculty and residents. Although validated assessments are available, few faculty are skilled in their use.12,17
Synthesis of Core Competencies and Traditional GME Practice
Are the general competencies therefore irrelevant to resident education? Ten Cate and Scheele12 suggest that the problem is not with the concept of competencies but, rather, in the interpretation and implementation of CBME. They propose a more holistic concept of resident assessment that retains the power of the competencies to stress important aspects of professional behavior. They suggest that the principal clinical responsibilities in a particular medical specialty could be identified and then mapped to the core competencies most important to each responsibility. They designate those responsibilities as “entrustable professional activities” (EPAs). Entrustment refers to the granting of independence (perhaps more precisely “supervised independence” within the context of a residency program) to trainees to perform the clinical responsibility (EPA) without direct supervision. Their approach puts core competencies into the familiar context of clinical practice.
Although the granting of independent responsibility to residents has been habitual since residency training began, the practice has only recently been subjected to systematic study.18–22 Dijksterhuis et al18 point out that entrustment is context dependent: If entrustment is to be reliable,
a safe working environment and mutual respect between trainee and supervisor are essential, supervision should be continuously available and easily accessible, and trainees should feel uninhibited about asking for supervision when they feel the need. Finally, the professional behavior of the trainee, especially in terms of whether she knows her limitations and is able to ask for help in a timely manner, must be assessed before progressive independence can be granted.
Kennedy et al19 also discuss the importance of knowledge of residents' limitations. They found that faculty granted residents independence on the basis of what they termed residents' “trustworthiness,” a construct based on four attributes: knowledge and skill, discernment (i.e., a resident's insight into and awareness of limits), conscientiousness, and truthfulness. Faculty evaluated trustworthiness by double-checking residents' findings (against their own findings and those of nurses and others) and by evaluating their verbal communication about issues known to be important.19 Though somewhat differently categorized, these four attributes are not substantially different from Altmaier and colleagues'20 list of desirable resident traits derived from faculty recall of critical incidents.
Dijksterhuis et al18 found that trainees and supervisors put considerable weight on trainees' self-assessment of readiness for independence. They commented that some would question this, given physicians' documented23 tendency to overestimate their professional skills. However, Eva and Regehr24 have argued, building on Schön's25 work on reflection, that although physicians are poor at assessing themselves when they reflect on action, their accuracy improves when they reflect in action. In other words, self-assessment that involves knowing when to slow down and seek help is more reliable than global self-assessment that is biased by optimistic reconstruction of aggregated memories long removed from the actual events.26
Although the importance of knowing when to seek help is clear, factors that influence help-seeking behavior have also only recently been studied. Kennedy et al27 found that residents' requests for help were affected by three factors: the question itself (its clinical importance and whether the matter was within the resident's or supervisor's scope of practice), the availability and approachability of supervisors, and the attitudes of residents (e.g., desire for independence, concern for how the request would affect personal credibility and evaluations by supervisors). The overriding importance of an environment of mutual respect between trainees and senior staff has been emphasized repeatedly.18,27,28 Residents must feel that their requests for help will be valued and that supervisors will respond promptly with valuable guidance.
Such traits as knowledge of limits, willingness to seek help, conscientiousness, and truthfulness are difficult to quantify, however. Evaluation is intuitive, based on the faculty member's and others' personal experiences with the resident. One could speculate that it is composed of numerous individual, informal “tests” where the resident's performance is compared with the evaluator's internalized norms. The longer the experience is, the more diverse are the “tests” the resident “takes”—and the more reliable, presumably, the assessment will be.29 The principal difficulty with brief, fragmented faculty-resident contact may be that the number of “tests” that any one faculty member can witness is too few to draw valid, reliable conclusions. That is not to say that longer contact guarantees better evaluation, however; the other variable is the capability of the evaluator.
Criteria for Entrustment
How, then, should faculty determine when residents may be entrusted with professional activities?12 Global evaluations of resident performance, if sufficient in number and if elicited from a range of evaluators, can identify residents with difficulties30 but are insufficient in themselves as criteria for entrusting important professional activities. In-training exams, objective structured clinical exams, and mini-clinical evaluation exercises are valuable, but performance on exams is tangential to integrated professional function. Such tests cannot provide sufficient information about awareness of limitations, conscientiousness, and truthfulness.18,19 Resident self-assessment, in the sense of awareness of personal limitations,26 is useful, with the caveat that the clinical training environment must be considered.18 None of these, alone or in combination, though, can replace evaluation based on close resident-faculty contact over time.
The difficulty is that close contact between one resident and one faculty member, at least in many pediatric residency training programs in the United States, may last no more than one or two weeks. In many programs, the only setting with reliable continuity of resident-faculty contact is the continuity clinic. The longitudinal relationships between residents and faculty that are the hallmark of continuity clinics must somehow be woven into other clinical settings. There is no substitute for holistic assessment of resident performance. Although performance on a checklist of deconstructed behavior may provide useful supplementary information about clinical competence, it cannot replace a supervisor's intuitive feel for clinical competence gained over time.31 Much attention has been given to possible consequences of resident duty hours for learning and patient safety.32 More attention should be paid to the consequences of the long-standing practice of resident entrustment based on brief, disconnected faculty supervisory experiences superimposed on relatively brief, disconnected clinical rotations.
The challenges with entrustment are clear. Yet structuring resident education and assessment around entrustment would serve two purposes. First, it would refocus attention on a central element of progression through residency training: assumption of increasing responsibility for patient care with decreasing levels of supervision. Second, it would remind clinical educators how important it is to subject the process of entrustment to critical examination—to ask what it is about the resident, the patient, the supervisor, and the circumstances that allows the supervisor to feel comfortable delegating/entrusting care to the resident.18,19,27,28
Linking Milestones to EPAs
Reconsideration of the proper balance between entrustment based on holistic, largely intuitive criteria and entrustment based on assessment of specific skills and competencies is especially important at this time.33 The ACGME is partnering with member boards of the ABMS to take a next step in CBME through the “Milestones Project,”34 the purpose of which is to refine ACGME competencies for each specialty. The charge to each member board is to set performance standards within its specialty for each of the core competencies and to identify tools to assess performance. The project is well under way.35 In pediatrics, for example, a working group and advisory board have been established, and involvement of the members of the Association of Pediatric Program Directors has been secured. The first draft recommends performance standards for pediatric residents based on review of the literature and expert opinion regarding the ontogeny of subcompetencies within the six ACGME domains. By providing narrative descriptors of behaviors by training level, milestones are intended to serve as a learning roadmap and resource for program directors, faculty, and trainees. If milestones and their assessment are to be meaningful, however, they must be linked to clinical practice. Use of EPAs would be one way to accomplish that.
Milestones linked to EPAs are being considered as a possible foundation for a project sponsored by the Association of American Medical Colleges to examine the relationship between UME and GME in pediatrics. The project planning group has been charged with exploring how clinical practice relates to core competencies and how to assess progress across the continuum of medical learning, thereby transitioning learners from UME to GME to unsupervised practice on the basis of competence rather than time. The project represents a unique opportunity to develop a continuum of education goals and assessment tools affecting all pediatric medical education and practice, using EPAs rather than competencies as the starting point. Such a continuum was one of three goals suggested by the Residency Review and Redesign in Pediatrics (R3P) project,36 which recently examined pediatric residency education in depth. Noting the dearth of evidence on which to base changes, R3P suggested creating an entity to foster research in pediatric medical education. The result was the Initiative for Innovation in Pediatric Education.37
Linking EPAs to Core Competencies in Pediatric Medicine
Thus, there are two reasons that developing EPAs serves medical education and continuous professional development (as part of MOC).38 First, EPAs identify important professional activities that are familiar to learners, faculty, and the public. Second, they make core competencies meaningful by placing them in a familiar context.12
The Appendix lists potential EPAs for pediatrics and shows how they might be mapped to ACGME competencies and expected levels of achievement in the context of a residency program.12 Although, in the end, all competencies apply to all EPAs, the mapping of EPAs to competencies in the process of learning depends on the patient population and the stages of professional development of both the teacher and the learner. The end product of the EPA-competencies mapping process (i.e., the map itself) is less important than are the conversations among supervisors and between supervisors and learners about how competencies apply in a particular situation. For example, to assess and provide routine care for a normal newborn, the resident must know about the effects of maternal health on the baby and be able to perform a detailed physical examination of the infant with attention to possible congenital anomalies; discuss breastfeeding with the mother and communicate with the family in a way that makes them comfortable asking questions; take into account the mother's/family's cultural background and its effect on child rearing; and coordinate care with the future primary care physician. A conversation that flows from this EPA as the starting point and places core competencies in the context of the EPA, as articulated above, is likely to be more meaningful than a conversation that starts with the core competencies per se.
Appendix: Potential EPAs for pediatrics
Linking EPAs to core competencies is more than an academic exercise. It provides residents and teaching faculty with a valuable opportunity to reflect together on transcendent elements of professional practice represented by the competencies.
The authors are grateful to Professor Th. J. (Olle) ten Cate for his comments and encouragement during the writing of this article.
1Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education: An antidote to overspecification in the education of medical specialists. Health Aff (Millwood). 2002;21:103–111.
5Norman G. Outcomes, objectives, and the seductive appeal of simple solutions. Adv Health Sci Educ Theory Pract. 2006;11:217–220.
7Lingard L. What we see and don't see when we look at “competence”: Notes on a god term. Adv Health Sci Educ Theory Pract. 2009;14:625–628.
8Talbot M. Monkey see, monkey do: A critique of the competency model in graduate medical education. Med Educ. 2004;38:587–592.
9Rees CE. The problem with outcomes-based curricula in medical education: Insights from educational theory. Med Educ. 2004;38:593–598.
10Snadden D. Portfolios—attempting to measure the unmeasurable? Med Educ. 1999;33:478–479.
11Hussey T, Smith P. The trouble with learning outcomes. Active Learn High Educ. 2002;3:220–233.
13Zibrowski EM, Singh SI, Goldszmidt MA, et al. The sum of the parts detracts from the intended whole: Competencies and in-training assessments. Med Educ. 2009;43:741–748.
14Balmer DF, Master CL, Richards B, Giardino AP. Implicit versus explicit curricula in general pediatrics education: Is there a convergence? Pediatrics. 2009;124:e347–e354.
16Teunissen PW, Scheele F, Scherpbier AJ, et al. How residents learn: Qualitative evidence for the pivotal role of clinical activities. Med Educ. 2007;41:763–770.
18Dijksterhuis MG, Voorhuis M, Teunissen PW, et al. Assessment of competence and progressive independence in postgraduate clinical training. Med Educ. 2009;43:1156–1165.
20Altmaier EM, McGuinness G, Wood P, Ross RR, Bartley J, Smith W. Defining successful performance among pediatric residents. Pediatrics. 1990;85:139–143.
21Philibert I. In this issue. J Grad Med Educ. 2010;2:7.
22Kashner TM, Byrne JM, Chang BK, et al. Measuring progressive independence with the resident supervision index: Empirical approach. J Grad Med Educ. 2010;2:17–30.
23Eva KW, Regehr G. I'll never play professional football and other fallacies of self-assessment. J Contin Educ Health Prof. 2008;28:14–19.
25Schön D. The Reflective Practitioner: How Professionals Think in Action. London, UK: Temple Smith; 1983.
27Kennedy TJ, Regehr G, Baker GR, Lingard L. Preserving professional credibility: Grounded theory study of medical trainees' requests for clinical support. BMJ. 2009;338:399–401.
28Stewart J. “Don't hesitate to call”—The underlying assumptions. Clin Teacher. 2007;4:6–9.
29van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ. 2005;39:309–317.
30Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance ratings. Teach Learn Med. 2003;15:270–292.
31Norman G. Editorial—Checklists vs. ratings, the illusion of objectivity, the demise of skills and the debasement of evidence. Adv Health Sci Educ Theory Pract. 2005;10:1–3.
32Iglehart JK. Revisiting duty-hour limits—IOM recommendations for patient safety and resident education. N Engl J Med. 2008;359:2633–2635.
33ten Cate O, Snell L, Carraccio C. Medical competence: The interplay between individual ability and the health care environment. Med Teach. 2010;32:669–675.
34Nasca T. Where will the “Milestones” take us? The next accreditation system. ACGME Bull. September 2008:3–5.
35Rushton JL, Hicks PJ, Carraccio CL. The next phase of pediatric residency education: The partnership of the Milestones Project. Acad Pediatr. 2010;10:91–92.
36Jones MD Jr, Leslie LK, McGuinness GA, eds. Residency review and redesign in pediatrics: New (and old) questions. Pediatrics. 2009;123(suppl 1):S1–S60.