At the 28th Research in Medical Education conference in 1989, George E. Miller (1918–1998), discussing the assessment of learners, stated:
No single assessment method can provide all the data required for judgment of anything so complex as the delivery of professional services by a successful physician. And so let me begin by suggesting a framework within which that assessment might occur. 1
He then proceeded to present the iconic model that, brilliant in its simplicity, has withstood the test of time. 1
What Is Miller’s Pyramid?
The pyramid is an organizing framework for assessing learners in the health professions, reflecting phases across the educational trajectory, in which targets of assessment in the lower levels are incorporated within higher levels (see Figure 1).
Figure 1: Miller’s pyramid, with explanatory text from the article in which the pyramid first appeared. 1 (Used with permission.)
The model merges traditional assessment approaches, which often emphasize standardized, objective testing, with the vocational need for certification for practice. While Miller’s terminology of competence, positioned at a lower level (“knows how”) than performance, may no longer align with more current conceptualizations of competence and performance, 2,3 the core framework has proven very useful. In this Perspective, we use the word competence to signify a general medical ability that cuts across the levels of the pyramid. 2 The multiple facets of the competence construct represented by the different levels of Miller’s pyramid make valuable and distinct contributions to assessment that do not perfectly correlate. 4 Another argument for distinguishing the levels of Miller’s pyramid is that learners adapt their efforts and behavior to meet stated assessment criteria; in other words, assessment drives learning (one of Miller’s assertions 1). Given this assumption, assessment should be programmed to guide learning 5 in an environment that stimulates the behavior that aligns most closely with the desired outcomes of learning. For example, when mastery of knowledge or skills is the focus of assessment, learners should be acknowledged for the deliberate practice that leads to mastery itself. All levels of Miller’s pyramid are important, and the learning and assessment associated with each facet of competence build iteratively throughout training.
Since 1990, progress has been made particularly in the quality of workplace-based assessment, that is, assessment at Miller’s “does” level. 6,7 Multiple tools with multiple points of workplace-based observation and assessment over time fit well with what has more recently been called programmatic assessment 5,8 and with a systems approach to assessment embedded in a curriculum. 9 The worldwide movement toward competency-based medical education (CBME) has incorporated workplace-based assessment in various forms, 10,11 which align with the pyramid, 12 as a major focus. Despite these developments, reliably assessing learners in the clinical workplace remains a substantial challenge. The lack of standardized conditions for assessment; rater bias, such as generosity error and halo effects; and the paucity of evidence relating trainee care quality to health care outcomes are a few of the barriers that have precluded the development of standards for workplace-based assessment at the “does” level. 13–16 Training raters and using instruments with validity evidence can mitigate some of these challenges but cannot eliminate them. 17
The Dual Role of Diplomas in Medicine
While continued efforts to address the psychometric inadequacies of workplace-based assessment are needed, something more fundamental also deserves attention. Degrees in medicine, more so than degrees in other programs of higher education, go beyond a testimony of academic accomplishment. A medical diploma attests (implicitly or explicitly) that medical graduates are prepared to safely care for patients, often from the very day of graduation. This means that examiners and programs have a dual responsibility in assessing learners: (1) to verify that learners have successfully done what they were asked to do (i.e., passed examinations; completed coursework, assignments, and rotations; practiced skills in simulated contexts; and demonstrated expected performance in the clinical environment) and (2) to ensure and protect the quality of patient care (i.e., to verify that learners can assume the responsibilities they will face after graduation and to assure the public that the new graduate can provide safe and effective care). A medical license confers a legal responsibility; a specialty certification permits fully unsupervised practice. Assessing medical learners, then, serves not just to verify course completion, support progress decisions, or award a diploma but also to express confidence that the learner is prepared for professional activities in health care. This dual responsibility should inform the judgments that grant learners permission to engage in patient care. 18
Entrustment: Extending Beyond the Assessment of Observed Performance or Action
About 2 decades after Miller’s publication and 1 decade after the widespread adoption of CBME, the medical education community began adopting the concept of entrustable professional activities (EPAs). 19 An EPA, a term coined in 2005, can be defined as “a unit of professional practice (a task or group of tasks) that can be fully entrusted to a trainee, once he or she has demonstrated the necessary competence to execute this activity unsupervised.” 20 Entrustment decision making is more than another approach or scale to assess medical trainees in the workplace. 21 The concept was introduced to operationalize CBME through a stepwise and safe engagement of learners in clinical practice with a progressive decrease of supervision. 22 Implementing EPAs requires a shift in focus from workplace-based assessment of isolated clinical competencies or abilities to decisions on whether to entrust learners with health care delivery tasks and responsibilities on the basis of the learners’ capability to safely and effectively perform those EPAs. 23 The critical judgment comes into play with respect to the required level of supervision, in response to a learner’s level of readiness to perform a particular health care activity. 24 However, what is less overt is that a summative entrustment decision incorporates trust in the trainee’s ability to handle unknown future situations, including securing adequate support if needed; this (future-facing) trust is based on what was previously observed, learned, and applied.
The Nature of Decisions to Entrust a Learner or Graduate With Health Care Tasks
The variety and breadth of expected clinical competencies are wider than can possibly be observed during training. Subsequent patients will not be the same as those previously diagnosed and treated, and contexts will differ from the ones in which the medical trainee was observed. Any activity in health care involves some level of risk, even for experienced clinicians. Entrustment decisions, 18,25 therefore, imply an estimation of the learner’s readiness to perform in unknown contexts and the determination that the risks involved in trusting the learner are acceptable going forward. 26,27 Making a summative entrustment decision that—from moment x on—the learner is permitted to perform EPA y is similar to awarding a driver’s license in continental Europe and trusting the driver will drive safely on the left side of the road when emerging from the Chunnel (underwater tunnel between France and the United Kingdom) on British soil. Even a move from proactive supervision, with a supervisor present (entrustment level 2), to reactive supervision, with a supervisor quickly available (entrustment level 3), may imply accepting risks. In other words, it is impossible to control for all possible situations in which the entrusted learner will act, and it is deceptive to suggest that prior observations have covered every possible context for the enactment of the EPA.
Entrustment decisions should, therefore, include an assessment of a learner’s ability to cope with risks, ask for help, and adapt when the outcomes of health care situations differ from what was expected. More succinctly, entrustment decisions, which focus on the future, should include an estimation of adaptive expertise. Assessing adaptive expertise or adaptive competence requires more than observing performance in the clinical environment. Scales that focus on supervision levels have been created with either a retrospective orientation (how much supervision was provided with an activity?) or a prospective orientation (do I feel the learner is ready for less supervision for this EPA?). Only the latter represents true entrustment. In radiology, ALARA (as low as reasonably achievable) seeks to provide the lowest level of radiation possible to perform a meaningful test, minimizing exposure and its resultant risks. 28 Likewise, the lowest level of performance risk acceptable for readiness for unsupervised practice may guide summative entrustment decisions and privileging.
Extending Miller’s Pyramid
Since Miller’s seminal address, many approaches to assessment in the clinical workplace have been developed, 29,30 but all have focused on observing what learners do in the clinical environment, that is, Miller’s level 4 (the “does” level). However, as delineated earlier, there is something beyond the top of Miller’s “does” level. 31,32 Observed performance in practice is not the end point; what schools or programs should really deliver to society is graduates whom patients, colleagues, and employers can “trust.” This presumptive trust, 21,33 implicitly conferred at the completion of a physician’s educational program, extends beyond direct observation during training to any workplace physicians find themselves in, suggesting the need for a new pinnacle on Miller’s pyramid. Like all levels, this top level should naturally include and assume sufficient achievement at the lower levels; rather than replacing the “does” level, the new top level assesses something different. This is not a new thought: In 2016, Al-Eraky and Marei proposed incorporating trust into Miller’s pyramid. 34
With these thoughts in mind, we propose an adaptation of Miller’s pyramid: adding a level above the “does” level and making slight modifications to the current explanatory phrases (Figure 2). Each of the 5 levels can be briefly characterized by the focus of assessment that fits that level. Table 1 elaborates briefly on the focus and possible methods of assessment, in part inspired by the work of Yudkowsky et al. 35
Table 1: Focus and Methods of Assessment in Medical Education Associated With Each Level of Miller’s Pyramid 1 and With a Proposed New Level
Figure 2: Miller’s pyramid extended. A new fifth level (“trusted”) reflects the process for reaching the decision to award a learner an attestation of the completion of training, leading to a medical license or specialty registration or certification, that provides permission to act unsupervised and makes the grantors cognizant of the inherent risks. Note that explanatory phrases have been modified to reflect a more current use of the words “competence” and “performance.”
In this adaptation, the top of the pyramid reflects the process for reaching the decision to award a learner an attestation of the completion of training, leading to a medical license or specialty registration or certification, which provides permission to act unsupervised and makes the grantors cognizant of the inherent risks. Currently, physicians earn their diploma after successfully completing all the required elements of the curriculum and after the clock has run to the preset end of the clinical training experience. An administrative body then confirms that the trainee has the right to graduate, and so it happens. In graduate medical education, a program director, along with a single competency committee, oversees learners throughout the course of training, but the step toward unsupervised practice is still often determined by time rather than by observed competence and the future risk that comes with entrustment. In undergraduate medical education, however, the large number of students makes it impossible for a single individual or committee to closely follow students’ progress from matriculation to graduation. Thus, the realization of the immense step of being granted the responsibility of a license, and the future risk involved, should drive the decision to graduate based on readiness for entrustment, not time.
The small steps informing entrustment, taken ad hoc in the workplace, seem to be made with more discernment than the big decision to graduate, license, or certify learners. To earn a diploma in higher education, including medical education, the accumulation of passed tests and assessments often suffices, but seldom do committees or educators really weigh the significance of the MD diploma for individual graduates. Licensing exams are, at best, an imperfect measure to support the big entrustment decision that the license implies. In small graduate medical education programs, there may be more oversight than in larger programs, but the scheduled end of training and the timing of the board exam often dominate the decision to allow a trainee to complete the program.
With the introduction of EPAs, there is a middle ground. EPAs are units of professional practice with boundaries, allowing each EPA to be reasonably overseen, assessed, monitored, documented, and entrusted. Educators or, preferably, a clinical competency committee (CCC) should have the courage—and be empowered—to make thoughtful summative entrustment decisions with sufficient, multifaceted information, and those educators or the committee should then be accountable for those decisions. Some programs make these decisions with a signed certificate form for EPAs (with a statement of awarded responsibility, or STAR), which reflects that accountability explicitly. 36 More informally, that accountability can be the answer to a question: “Would I have this learner take care of my own family members?” 37 or “Would I hire this graduate for my department if I could choose among applicants?” The development of a committee’s (or other body’s) trust that a learner can execute an activity unsupervised requires justification and a conviction strong enough to enable the deciding body to assure patients that the learner has mastered the domain sufficiently to justify a patient’s presumptive trust in him or her. It requires knowledge, experience, and courage to make a summative entrustment decision based on inherently limited previous observations. This reasoning makes “trusted”—with its inherent risks—a new level for Miller’s pyramid, beyond the “does” level.
Assessing Readiness for Entrustment
An entrustment decision requires a sense that the learner understands the risks involved in clinical care and confidence that the individual can deal with the unexpected. 38 There is always the potential for things to go wrong, and it is critical that the learner knows what to do in unexpected circumstances. Both learners and clinical supervisors must know, weigh, and accept these risks when entrustment decisions are made.
Multiple variables emerge when one considers what clinicians take into account when making entrustment decisions, transferring responsibility to a learner, or titrating the amount of supervision needed. 39–42 Interestingly, most of these facets of competence are characteristics related to trustworthiness; they are not task specific and have been summarized as integrity, reliability, humility, and agency, in addition to an EPA-specific capability. 18,21,43 Entrustment seems to require both trustworthiness and previously demonstrated competence in the domain being assessed. Together, these facets can be used to weigh the risks involved in an entrustment decision.
Specific assessment approaches that incorporate trust, risk assessment, and adaptive expertise are limited currently. 44,45 Along with better measures of the quality of care learners deliver, 46 these approaches may become more important in the development of assessment in health professions education.
Readiness for entrustment is implied in licensing and certification. However, the top level of the revised pyramid applies not only to those big moments but also in programmatic assessment based on an EPA framework. If a learner is trusted with an individual EPA, he or she is considered able to perform the given EPA safely and effectively without supervision. However, because EPAs in aggregate define the professional activities of a specialty, a learner would be expected to achieve this same level of performance for all EPAs associated with direct patient care at the time of program completion. Over a practitioner’s career, there will be changes in the professional activities he or she is entrusted and entitled to perform. The dynamic nature of an individual’s EPA portfolio, which is based in a physician’s scope of practice, suggests the need for adapting over time what practitioners are entrusted with and entitled to do. 47
While CCCs can and should focus on entrustment when making progression decisions for individual EPAs or EPAs in aggregate, 48 individual supervisors should consider elements of entrustment in their observations, thinking, and recommendations, whenever possible. These elements may include, but are not limited to, learners exhibiting help-seeking behaviors, learners’ ability to apply previous learning in a new situation, and learners asking for the degree of needed supervision to practice at the edge of their competence. In other words, entrustment thinking, including the non–task-specific features, is not confined to committees (CCC or other) but is part of all workplace-based assessments. At lower levels of the pyramid, judgment based on what a learner actually does or has done is appropriate; at the higher levels, an inference of readiness is needed. Examples of practical approaches to support such inferences are entrustment-based discussions 45 and realistic, live semisimulations. 44,49 A recent report of the use and value of resident-sensitive quality measures in informing entrustment decisions also holds promise. 50
Concluding Thoughts
Suggesting an extension of a model as iconic as Miller’s pyramid requires careful consideration. Others have created variations of the pyramid, calling it Miller’s triangle or prism; adding Bloom’s distinctions of knowledge, skills, and attitudes as a dimension 51; turning the pyramid upside down to stress the importance of the “does” level 3; adding “assessment orbits” around the pyramid 34; and adding professional identity on top of the pyramid. 52 We considered such variations but decided to limit the adaptation of the pyramid to a concept that has a clear focus and practical value for the approach used to assess learners and adds to existing approaches already reflected in the pyramid. Cruess et al’s extension, 52 calling the top level “is” to reflect the core of a physician’s identity, embraces the personal characteristics we have described earlier (capability, integrity, reliability, humility, and agency). 53 Thus, our proposed new pinnacle for Miller’s pyramid combines personal characteristics linked to identity formation (“is”) with the mitigation of risk that is promised in the capacity to adapt from the familiar to the unfamiliar (“trusted”).
“Trusted” can be read in 2 ways. One relates to assessment and to a decision, that is, to a pivotal moment that leads to permission to act without supervision—this is the focus of our contribution. The other is a state of being. Being trusted may be regarded as one of the most crucial characteristics of physicians and other health care professionals and has important implications for continuing professional development. Deserved trust is not a trait but a state that requires ongoing self-awareness, reflection, and maintenance by all health care professionals.
Acknowledgments:
The concept for this Perspective originated during a 2-day meeting of the International Competency-Based Medical Education Collaborators, Ottawa, Ontario, Canada; July 2019.
References
1. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65:S63–S67.
2. Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287:226–235.
3. Rethans JJ, Norcini JJ, Barón-Maldonado M, et al. The relationship between competence and performance: Implications for assessing practice performance. Med Educ. 2002;36:901–909.
4. Savoldelli GL, Naik VN, Joo HS, et al. Evaluation of patient simulator performance as an adjunct to the oral examination for senior anesthesia residents. Anesthesiology. 2006;104:475–481.
5. Schuwirth LWT, Van der Vleuten CPM. Programmatic assessment: From assessment of learning to assessment for learning. Med Teach. 2011;33:478–485.
6. Norcini JJ, Blank LL, Arnold GK, Kimball HR. The mini-CEX (clinical evaluation exercise): A preliminary investigation. Ann Intern Med. 1995;123:795–799.
7. Norcini JJ, Blank LL, Duffy FD, Fortna GS. The mini-CEX: A method for assessing clinical skills. Ann Intern Med. 2003;138:476–481.
8. Torre DM, Schuwirth LWT, Van der Vleuten CPM. Theoretical considerations on programmatic assessment. Med Teach. 2020;42:213–220.
9. Norcini J, Anderson MB, Bollela V, et al. 2018 Consensus framework for good assessment. Med Teach. 2018;40:1102–1109.
10. ten Cate O. Competency-based postgraduate medical education: Past, present and future. GMS J Med Educ. 2017;34:Doc69.
11. Frank JR, Snell LS, ten Cate O, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645.
12. Lockyer J, Carraccio C, Chan MK, et al.; ICBME Collaborators. Core principles of assessment in competency-based medical education. Med Teach. 2017;39:609–616.
13. Albanese MA. Challenges in using rater judgements in medical education. J Eval Clin Pract. 2000;6:305–319.
14. Lurie SJ. History and practice of competency-based assessment. Med Educ. 2012;46:49–57.
15. Gruppen LD, ten Cate O, Lingard LA, Teunissen PW, Kogan JR. Enhanced requirements for assessment in a competency-based, time-variable medical education system. Acad Med. 2018;93:S17–S21.
16. Holmboe ES, ten Cate O, Durning SJ, Hawkins RE. Assessment challenges in the era of outcomes-based education. In: Holmboe E, Hawkins R, Durning S, eds. Practical Guide to the Evaluation of Clinical Competence. 2nd ed. Philadelphia, PA: Elsevier; 2018:1–21.
17. Kogan JR, Hatala R, Hauer KE, Holmboe E. Guidelines: The do’s, don’ts and don’t knows of direct observation of clinical skills in medical education. Perspect Med Educ. 2017;6:286–305.
18. Ten Cate O. Entrustment as assessment: Recognizing the ability, the right, and the duty to act. J Grad Med Educ. 2016;8:261–262.
19. Shorey S, Lau TC, Lau ST, Ang E. Entrustable professional activities in health care education: A scoping review. Med Educ. 2019;53:766–777.
20. ten Cate O. Entrustability of professional activities and competency-based training. Med Educ. 2005;39:1176–1177.
21. Ten Cate O, Hart D, Ankel F, et al.; International Competency-Based Medical Education Collaborators. Entrustment decision making in clinical training. Acad Med. 2016;91:191–198.
22. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547.
23. ten Cate O. Trust, competence, and the supervisor’s role in postgraduate training. BMJ. 2006;333:748–751.
24. ten Cate O. Supervision and entrustment in clinical training: Protecting patients, protecting trainees. PSNet–WebM&M. https://psnet.ahrq.gov/webmm/case/461. Published 2018. Accessed November 10, 2020.
25. ten Cate O. Entrustment decisions: Bringing the patient into the assessment equation. Acad Med. 2017;92:736–738.
26. Damodaran A, Shulruf B, Jones P. Trust and risk: A model for medical education. Med Educ. 2017;51:892–902.
27. ten Cate O. Managing risks and benefits: Key issues in entrustment decisions. Med Educ. 2017;51:879–881.
28. Davis DA, Giomuso CA, Miller WH, et al. Dose calibrator activity linearity evaluations with ALARA exposures. J Nucl Med Technol. 1981;9:188–191.
29. Norcini J, Burch V. AMEE medical education guide no. 31: Workplace-based assessment as an educational tool. Med Teach. 2007;29:855–871.
30. Norcini J, Anderson B, Bollela V, et al. Criteria for good assessment: Consensus statement and recommendations from the Ottawa 2010 Conference. Med Teach. 2011;33:206–214.
31. Cutrer WB, Miller B, Pusic MV, et al. Fostering the development of master adaptive learners: A conceptual model to guide skill acquisition in medical education. Acad Med. 2017;92:70–75.
32. Pusic MV, Santen SA, Dekhtyar M, et al. Learning to balance efficiency and innovation for optimal adaptive expertise. Med Teach. 2018;40:820–827.
33. Cruess R, Cruess S. Professional trust. In: Cockerham W, Dingwall R, Quah S, eds. The Wiley Blackwell Encyclopedia of Health, Illness, Behavior, and Society. Hoboken, NJ: John Wiley & Sons; 2014.
34. Al-Eraky M, Marei H. A fresh look at Miller’s pyramid: Assessment at the ‘is’ and ‘do’ levels. Med Educ. 2016;50:1253–1257.
35. Yudkowsky R, Park YS, Downing SM. Introduction to assessment in the health professions. In: Yudkowsky R, Park YS, Downing SM, eds. Assessment in Health Professions Education. 2nd ed. New York, NY: Taylor and Francis; 2020:3–16.
36. Mulder H, Ten Cate O, Daalder R, Berkvens J. Building a competency-based workplace curriculum around entrustable professional activities: The case of physician assistant training. Med Teach. 2010;32:e453–e459.
37. Jonker G, Ochtman A, Marty A, Kalkman CJ, ten Cate O, Hoff RG. Would you trust your loved ones to this resident? Certification decisions in postgraduate anesthesiology training. Br J Anaesth. 2020;125(5):e408–e410.
38. Holzhausen Y, Maaz A, Cianciolo AT, ten Cate O, Peters H. Applying occupational and organizational psychology theory to entrustment decision-making about trainees in health care: A conceptual model. Perspect Med Educ. 2017;6:119–126.
39. Kennedy TJT, Regehr G, Baker GR, Lingard L. Point-of-care assessment of medical trainee competence for independent clinical work. Acad Med. 2008;83:S89–S92.
40. Sterkenburg A, Barach P, Kalkman C, Gielen M, ten Cate O. When do supervising physicians decide to entrust residents with unsupervised tasks? Acad Med. 2010;85:1408–1417.
41. Wijnen-Meijer M, van der Schaaf M, Nillesen K, Harendza S, ten Cate O. Essential facets of competence that enable trust in graduates: A Delphi study among physician educators in the Netherlands. J Grad Med Educ. 2013;5:46–53.
42. Duijn CCMA, Welink LS, Bok HGJ, ten Cate OTJ. When to trust our learners? Clinical teachers’ perceptions of decision variables in the entrustment process. Perspect Med Educ. 2018;7:192–199.
43. Chen HC, ten Cate O. Assessment through entrustable professional activities. In: Delany C, Molloy E, eds. Learning & Teaching in Clinical Contexts: A Practical Guide. Chatswood, Australia: Elsevier Australia; 2018:286–304.
44. Wijnen-Meijer M, Van der Schaaf M, Booij E, et al. An argument-based approach to the validation of UHTRUST: Can we measure how recent graduates can be trusted with unfamiliar tasks? Adv Health Sci Educ Theory Pract. 2013;18:1009–1027.
45. ten Cate O, Hoff RG. From case-based to entrustment-based discussions. Clin Teach. 2017;14:385–389.
46. Schumacher DJ, Holmboe ES, van der Vleuten C, Busari JO, Carraccio C. Developing resident-sensitive quality measures: A model from pediatric emergency medicine. Acad Med. 2018;93:1071–1078.
47. ten Cate O, Carraccio C. Envisioning a true continuum of competency-based medical education, training and practice. Acad Med. 2019;94:1283–1288.
48. Smit MP, de Hoog M, Brackel HJL, ten Cate O, Gemke RJBJ. A national process to enhance the validity of entrustment decisions for Dutch pediatric residents. J Grad Med Educ. 2019;11(4 Suppl):158–164.
49. Kalet A, Zabar S, Szyld D, et al. A simulated “Night-onCall” to assess and address the readiness-for-internship of transitioning medical students. Adv Simul (Lond). 2017;2:13.
50. Schumacher DJ, Holmboe E, Carraccio C, et al. Resident-sensitive quality measures in the pediatric emergency department: Exploring relationships with supervisor entrustment and patient acuity and complexity. Acad Med. 2020;95:1256–1264.
51. Hodgson JL, Pelzer JM, Inzana KD. Beyond NAVMEC: Competency-based veterinary education and assessment of the professional competencies. J Vet Med Educ. 2013;40:102–118.
52. Cruess RL, Cruess SR, Steinert Y. Amending Miller’s pyramid to include professional identity formation. Acad Med. 2016;91:180–185.
53. ten Cate O, Chen HC. The ingredients of a rich entrustment decision. Med Teach. 2020;42:1413–1420.