Editor’s Note: This is an Invited Commentary on Lundsgaard KS, Tolsgaard MG, Mortensen OS, Mylopoulos M, Østergaard D. Embracing multiple stakeholder perspectives in defining trainee competence. Acad Med. 2019;94:838–846.
In this issue of Academic Medicine, Lundsgaard and colleagues present “Embracing Multiple Stakeholder Perspectives in Defining Trainee Competence.”1 This qualitative study explores how various stakeholders contribute to our understanding of trainee competence. Recently, this topic has taken center stage for educational leaders and scholars, program directors, assessors, and trainees. Although the competency-based medical education (CBME) movement is no longer in its infancy, it is far from mature. In my own experiences as a residency program director, faculty development leader, and former member of the Association of American Medical Colleges (AAMC) Core Entrustable Professional Activities for Entering Residency (Core EPAs) pilot, I have struggled with a recurrent sense of indirection as I have contributed to efforts to create the road maps that will guide our trainees toward competence. As I have worked to develop and revise scales to measure trainee progression along their journey from novice to expert, I have come to an unsettling realization: We are racing to develop assessment tools to measure a construct—competence—that we do not fully understand. The work of Lundsgaard and colleagues helps illuminate and describe this experience, and it has implications for assessment tool developers, educational program directors, and assessors alike.
To place their work in context, Lundsgaard and colleagues describe CBME as a movement that depends on defining competence. However, many attempts to define measurable competencies have relied primarily on the perspectives and experience of senior physicians, often in an expert consensus approach. I appreciate the authors’ aim to expand beyond this narrow perspective, and their study explores how various stakeholders (e.g., senior physicians/supervisors, leaders/administrators, nurses, patients, trainees) contribute to our understanding of trainee competence. Their work illustrates that incorporating stakeholder perceptions into early discussions about what competence looks like produces a picture that is more nuanced and complete than what we can visualize through our own lenses as physicians and educators. This message is timely and on point. If we aim to educate students to become physicians who provide value to multiple stakeholders, we must involve those stakeholders in the upstream development of assessment processes and tools. However, the work of Lundsgaard and colleagues also highlights a second, provocative point. In addition to augmenting our understanding of competence, each stakeholder perspective also produces a different picture of what constitutes competence. Like optical illusions, these views reflect the variation that can result when pictures are taken from different angles. In the words of philosopher David Hume,2 “[Beauty in things] exists merely in the mind which contemplates them.” Similarly, the work of Lundsgaard and colleagues demonstrates that value in competence exists in the eye of the stakeholder.
Lundsgaard and colleagues’ decision to use a conceptual framework from the business world for engaging a broader population of stakeholders is noteworthy. Assessment developers and educators who may wish to apply stakeholder theory to their own educational programs should be aware of its basic assumptions: The creation of value is core to any business strategy, and stakeholder perceptions of value should guide the assessment of employees’ performance. Educators may pause when considering whether those who will be gaining value from trainees’ performance should be the loudest voices in the conversation regarding the definition of competence. In the business world, the aim of incorporating stakeholders’ views is to ultimately create value for the company. The values we embrace in medical education, both in defining competence and in determining outcomes, will shape the characteristics of the physicians we train. As an emerging field, CBME can benefit from the application of theories from other disciplines. However, using a business model may run the risk of prioritizing economic value and consumer satisfaction over other outcomes that are deeply embedded in the heart of medical education, such as beneficence, learning, and humanism. Similarly, implementing a business approach might inadvertently reduce learners to “commodities,” a risk that can perhaps be minimized with the thoughtful application of such approaches.3
To realize the benefits, and mitigate the risks, of applying a business model to trainee assessment, the stakeholders who will be involved should be selected with care and intention. Lundsgaard and colleagues note that, in stakeholder analyses, a crucial first step is identifying the stakeholders with the most power. There are many types of power, and power differentials, in academic medicine, and a diverse group of academic medical center employees may provide conflicting answers when asked which stakeholders wield the most power. The stakeholder groups selected by Lundsgaard and colleagues included leaders/administrators, senior physicians who supervise trainees, nurses/nurse practitioners, and patients. Because the concept of power in academic medical centers may be complex and variable, educators who want to apply a stakeholder model to assessment might find the following question easier to answer: Whose perceptions regarding what trainees should be able to do matter most?
Assessment developers considering this question should also ask how each group’s voice interacts with those of supervisors, who are the dominant contributors to many existing assessment instruments. Lundsgaard and colleagues’ approach of analyzing whether additional stakeholder voices replicated, elaborated, or complicated the themes expressed by the supervisors lends helpful insight into stakeholder selection decisions. In areas where the supervisors were uniquely suited to observe relevant behaviors and possessed content expertise regarding those behaviors, the other stakeholder voices only replicated their perspective. Notable examples in the Core Clinical Activities category included themes such as clinical assessment and plan, knowledge about the anticipated course of injury, and recognition and management of critical diagnoses,1 all of which make intuitive sense given that supervisors have longitudinal personal experience with these competencies.
To illustrate this point, imagine that a cardiologist contributes to defining competence in electrocardiogram (EKG) interpretation. Although other stakeholders may replicate some elements of competent EKG interpretation that the cardiologist identified, I would not expect measurable elaboration or complication from their comments, as the cardiologist is uniquely suited to define, observe, and assess EKG interpretation competence. For certain procedural competencies, it may be that an objective standard exists (or should exist), and applying stakeholder theory to definitions of competence may be redundant or even inappropriate.
In contrast, when stakeholders’ views were compared with those of supervisors for themes in the Patient Centeredness category, such as establishing rapport and providing dynamic and personalized communication or keeping up speed throughout the patient encounter, they complicated those themes. This finding suggests that different stakeholders have different values, and programs of assessment should be inclusive of these differences. Compared with a consensus-driven approach, which seeks to assess common themes that arise across stakeholder discussions, a targeted approach may add value that is realized only with the retention of diverse perspectives.
To illustrate the potential value of assessment instruments that are created with input from stakeholders with divergent values, consider the following theme: Keeping up speed throughout the patient encounter. It may refer to the ever-present mismatch between workload and available time in the emergency department and the reality that the baseline in the emergency department is a “surge” scenario. This reality requires that the emergency physician balance the need to manage the entire department (which requires speed and efficiency) and the need to provide patient-centered care (which requires time, to be present for the patient, to listen to them, and to answer their questions). As an emergency physician and educator, I find navigating these conflicting priorities to be one of the most crucial, and challenging, competencies that trainees must learn. Every seasoned emergency physician has a toolbox of strategies they use to increase their speed, but they also know what can be lost in terms of patient communication when they do. In a constant dance between caring for a population versus an individual in the emergency department, the thoughtful practitioner aims for a workflow that optimizes both, while recognizing that these tasks are inherently negatively correlated.
A competency system that explicitly measures different stakeholder perspectives of emergency department management versus patient-centered communication will provide trainees with the feedback necessary to calibrate and fine-tune their style. For this theme, querying both supervisors regarding trainee emergency department management and patients regarding trainee communication is necessary to assess competence. Competence is complex, nuanced, and messy, and some aspects are in direct conflict with each other. Assessment strategies that seek to measure this conflict, rather than resolve it, are necessary to embrace the reality that, because stakeholders’ values differ, competence in the eye of one stakeholder may mean lagging performance in the eye of another.
While the results of Lundsgaard and colleagues’ study may be used prospectively by assessment developers in the early stages of creating new instruments and programs, medical educators with assessment programs in place may struggle to apply these ideas to their educational practice. Over the past decade, initiatives such as the Accreditation Council for Graduate Medical Education’s Next Accreditation System and the AAMC’s Core EPAs pilot have provided the opportunity for programs to implement changes that move us closer to CBME. These initiatives have resulted in major changes, from residency programs engaging in milestones-based competency decisions to medical schools implementing entrustable professional activities as part of broad curricula.
However, my personal experience participating in these initiatives has been that the combination of time pressures, competing administrative demands, varying levels of expertise, and evolving understanding of educational theory among development teams can lead to the implementation of preliminary assessment programs that either do not measure what they intend to or produce an incomplete picture of competence. Program directors may recognize that existing assessment programs have gaps in both measuring the elements of competence according to multiple stakeholders and collecting adequate data to form competency decisions, and they can feel overwhelmed at the idea of yet another overhaul of their current process. However, Lundsgaard and colleagues’ findings can be applied in a practical, stepwise fashion that may feel more achievable for busy educators.
The study by Lundsgaard and colleagues highlights a concept that is consistent with assessment best practices. Ideally, assessment that is workplace based and relies on direct observation will draw on the assessor’s unique perspective, target assessment items to behaviors that the assessor is uniquely suited to observe and describe, and include instruments that require minimal training and are intuitive to use. Educators seeking to apply Lundsgaard and colleagues’ work to their current assessment practices might consider the summary principles presented in List 1. As program directors iteratively revise assessment practices as part of ongoing program improvement, incorporating multiple stakeholders into instrument development and workplace-based assessment may provide a more nuanced picture of competence that incorporates the values of those ultimately impacted by trainee performance.
List 1: Considerations for Incorporating Stakeholder Theory Into Trainee Assessment Practices
- Incorporating the perspectives of multiple stakeholders into discussions defining physician competence may expand our understanding of what trainees should be able to do.
- When selecting stakeholders to take part in this process, consider including those groups that offer unique insight into different aspects of physician performance.
- For some aspects of competence, such as procedural competence, objective assessment measures may already exist or be needed, and stakeholder theory may not be applicable.
- Create assessment instruments that are stakeholder group specific, and develop content that is informed by that group’s perspective.
- Additional stakeholder groups may replicate, elaborate, or complicate competence themes identified by supervisors. When two stakeholder groups measure competence in ways that conflict, each group may value different aspects of performance (and both aspects may be equally important).
Acknowledgments: The author would like to thank her colleagues Amy Miller Juve, EdD, and Anthony R. Artino Jr, PhD, for their critical feedback on this Invited Commentary.