From the Editor

Emerging Issues in Assessment in Medical Education: A Collection

Roberts, Laura Weiss MD, MA

doi: 10.1097/ACM.0000000000003855

In this issue of Academic Medicine, we have assembled a set of articles on emerging issues in assessment in medical education. The collection consists of systematic reviews, reports of empirical studies and surveys, scholarly Perspectives, and letters. The authors are learners, educators, and leaders (sometimes all three), and their work examines a wide range of considerations and challenges in assessment across the stages of medical education. Most of these articles came to our journal as spontaneous submissions, signaling interest in and the ever-increasing importance of assessment in medical education.

The articles in this collection address topics in assessment that are surfacing in the current context of medical education. That context is uniquely complex and constellated because of the global pandemic, the greater attention being paid to professional well-being in medical education, and the relatively recent changes in grade reporting for national examinations. Recognition of inequity and evidence regarding bias in standardized testing have further heightened concerns regarding assessment in medical education. Viewed in the past as something of a curricular “afterthought,” assessment has evolved to become a rich area for inquiry, with new constructs for defining and evaluating competence, new and different approaches to testing, new emphasis on holistic evaluation, and new and highly informative applications of expertise from other fields, such as the quantitative sciences, humanities, cognitive neuroscience, and, most recently, artificial intelligence/machine learning.

Assessment is integral to the learning process and is used as a means of ensuring accountability in medical education and society at large. For these reasons, it is no surprise that many of the articles in this collection call for greater focus, rigor, research, and intentionality in assessment in medical education.

The Articles

In an extensive systematic review, Brydges et al 1 closely examined reports published over 18 years regarding competence-based medical education, finding that a large but “mixed evidence base, static assumptions, and limited research practices” are hampering advances in assessment and medical education research. Looking intensively at one discipline in postgraduate education, on the basis of a large systematic review of assessment approaches in training in the surgical specialties, Hanrahan et al 2 similarly note that current efforts to evaluate resident competence, strengths, and deficits are fragmented. Hanrahan et al state that published evidence is insufficient, of low quality, and typically based on small-scale initiatives. The authors conclude with a call for a paradigm shift in surgical education, entailing national and international collaboration “to optimize design and validation so that a comprehensive assessment of surgical competence can be implemented.”

Reflecting on the impact of the pandemic on medical education, Hauer et al 3 articulate that there is an imperative to further embrace competence-based, rather than time-based, training objectives. Hauer et al reason that the field should expand assessment methods and prioritize “useful, meaningful assessment data” and outcomes, especially in relation to the transition from undergraduate to graduate training stages. This transition in training stages is also the focus of a report by Geraghty et al, 4 who describe six domains of “tension” identified by medical students involved in the early implementation of 13 Core Entrustable Professional Activities (EPAs) for Entering Residency. The student leaders who authored this piece came from 5 of 10 pilot institutions and emphasized the need for additional research to “explore the perspectives of students throughout the process of implementing the Core EPAs” in light of students’ role as “end users” of new curricula.

In their Invited Commentary on reevaluating teacher and learner roles and responsibilities, Prober and Norden 5 ask that we place greater importance on interactions between faculty and students, and on peer-to-peer student collaborations, and that we reemphasize “competence, communication, and compassion” in developing more attuned assessments. This theme of faculty and student interaction emerged in the report by Ingram et al, 6 who examined 3,947 completed evaluations by faculty from the internal medicine, pediatrics, and surgery clerkships at the University of Alabama at Birmingham School of Medicine. The investigators found that 5 characteristics predicted whether students would be recommended for “honors.” One of these characteristics (i.e., contact time with a supervisor) related to clerkship structure rather than explicit clinical competences. For this reason, the authors suggest that the structural elements of clerkships deserve our attention and that evaluation rubrics deserve rigorous “scrutiny.”

Hernandez et al 7 performed a survey study that included responses from 110 of 134 internal medicine clerkship directors in the United States. The authors found that most programs rely on clinical performance assessments and the subject exam of the National Board of Medical Examiners to arrive at student grades. Clerkship directors expressed concerns about grade inflation, evaluation inconsistencies, and students’ emphasis on exam performance, which may detract from clinical learning. An overreliance on standardized testing may result in achievement gaps for some students, especially students who identify as belonging to groups underrepresented in medicine, as noted by Jones et al, 8 who argue that such practices contribute to discrimination in medical education. Clerkship grading was the focus of a novel article by Ryan et al, 9 who argue for a transition from grades to a federally regulated competence-based assessment model and development of a standardized letter to communicate accurately the competence, strengths, and weaknesses of students.

Many challenges and concerns, some far-reaching in their scope and consequences, underlie recommendations to reconsider assessment and grading in medical education. The unintended uncoupling of assessment from the goals of medical education and the recognition of bias and inequity in standardized testing have led to examination of the role of assessment in medical education and licensure. As in the past, the articles in this collection reinforce the continuing need to build more psychometrically robust approaches to the assessment of specific clinical skills. Our authors raise other issues that are more technical or tactical in nature, such as the need to create more refined evidence-based evaluation tools and to develop assessments that may be conducted in remote learning situations necessitated by the pandemic.

Change is difficult, however, as illustrated in the report by McDonald et al, 10 who studied the impact of the elimination of tiered grades and the expansion of 1:1 feedback on core clerkships at the University of California, San Francisco, School of Medicine. Their investigation brought into focus the many—and sometimes unexpected—effects of curricular change for both faculty and students, even when such change is embraced pedagogically and culturally. (See the AM Last Page by Palaganas and Edwards 11 in this issue for insights about approaching and avoiding common pitfalls in feedback conversations, and see the Perspective by Bearman et al 12 for a discussion of feedback processes that may be implemented in situations with little or no supervision.) The need to engage and support faculty during cultural change was also a theme in the Invited Commentary by van Loon and Scheele, 13 who emphasize empowerment of faculty as the key to effective educational innovation. Similarly, the extensive project by Ryan et al 14 provides validity evidence for assessment built on the reporter–interpreter–manager–educator framework and better delineates the faculty-related and student-related dimensions of this approach to assessment. As reflected in several articles in this collection, empirical study of assessment is valuable in that it can provide clues as to how to implement novel educational approaches more effectively, for example, through faculty engagement or empowerment and attention to students’ voices and recommendations.

In addition, the value of creative approaches to assessment is illustrated in two reports in this collection. Chang et al 15 describe one of the first longitudinal studies of the progression of metacognition, critical thinking, and regulated learning strategies of medical students, with the intention of helping educators to strengthen the learning skills of all students as well as to identify students at risk of falling behind. In an Innovation Report, Patwari et al 16 describe their early experience using a diagnostic objective structured clinical examination to identify clinical reasoning and knowledge-based deficits in students who may have a disability requiring accommodations to support learning.

A Welcome Collection

The contribution of thoughtful and carefully derived articles on the topic of assessment from our colleagues across the field of academic medicine is most welcome. The editors of the journal are especially delighted that so many of the articles assembled here were co-authored by medical students and residents. The collection serves to highlight crucial and evolving issues at this moment in medical education and calls upon us to do more on behalf of our learners and our field.

References

1. Brydges R, Boyd VA, Tavares W, et al. Assumptions about competency-based medical education and the state of the underlying evidence: A critical narrative review. Acad Med. 2021;96:296–306.
2. Hanrahan JG, Sideris M, Pasha T, Dedeilia A, Papalois A, Papalois V. Postgraduate assessment approaches across surgical specialties: A systematic review of the published evidence. Acad Med. 2021;96:285–295.
3. Hauer KE, Lockspeiser TM, Chen HC. The COVID-19 pandemic as an imperative to advance medical student assessment. Acad Med. 2021;96:182–185.
4. Geraghty JR, Ocampo RG, Liang S, et al. Medical students’ views on implementing the Core EPAs: Recommendations from student leaders at the Core EPAs pilot institutions. Acad Med. 2021;96:193–198.
5. Prober CG, Norden JG. Learning alone or learning together: Is it time to reevaluate teacher and learner responsibilities? Acad Med. 2021;96:170–172.
6. Ingram MA, Pearman JL, Estrada CA, Zinski A, Williams WL. Are we measuring what matters? How student and clerkship characteristics influence clinical grading. Acad Med. 2021;96:241–248.
7. Hernandez CA, Daroowalla F, LaRochelle JS, et al. Determining grades in the internal medicine clerkship: Results of a national survey of clerkship directors. Acad Med. 2021;96:249–255.
8. Jones AC, Nichols AC, McNicholas CM, Stanford FC. Admissions is not enough: The racial achievement gap in medical education. Acad Med. 2021;96:176–181.
9. Ryan MS, Brooks EM, Safdar K, Santen SA. Clerkship grading and the U.S. economy: What medical education can learn from America’s economic history. Acad Med. 2021;96:186–192.
10. McDonald JA, Lai CJ, Lin MYC, O’Sullivan PS, Hauer KE. “There is a lot of change afoot”: A qualitative study of faculty adaptation to elimination of tiered grades with increased emphasis on feedback in core clerkships. Acad Med. 2021;96:263–270.
11. Palaganas JC, Edwards RA. Six common pitfalls of feedback conversations. Acad Med. 2021;96:313.
12. Bearman M, Brown J, Kirby C, Ajjawi R. Feedback that helps trainees learn to practice without supervision. Acad Med. 2021;96:205–209.
13. van Loon KA, Scheele F. Improving graduate medical education through faculty empowerment instead of detailed guidelines. Acad Med. 2021;96:173–175.
14. Ryan MS, Lee B, Richards A, et al. Evaluating the reliability and validity evidence of the RIME (reporter-interpreter-manager-educator) framework for summative assessments across clerkships. Acad Med. 2021;96:256–262.
15. Chang C, Colón-Berlingeri M, Mavis B, Laird-Fick HS, Parker C, Solomon D. Medical student progress examination performance and its relationship with metacognition, critical thinking, and self-regulated learning strategies. Acad Med. 2021;96:278–284.
16. Patwari R, Ferro-Lusk M, Finley E, Meeks LM. Using a diagnostic OSCE to discern deficit from disability in struggling students. Acad Med. 2021;96:228–231.
Copyright © 2020 by the Association of American Medical Colleges