Health professions education rests on a contract between students and faculty: Both groups will put forth their best effort to serve the teaching and learning process, allowing trainees to emerge from educational institutions with the knowledge and skills to serve society as physicians. Along this path, faculty will provide students with feedback to grow and improve. Assessments—both formative and summative—serve a critical role in facilitating growth and development. For assessment to be most effective, learners need to be vulnerable; vulnerability with respect to assessment entails admitting one’s limitations, being open to making mistakes, being wrong on occasion, and welcoming the growth that occurs as a result. However, learner openness to vulnerability requires trust in both the teacher and the system, and a number of factors may limit learner trust. Similarly, faculty must possess time and skill in providing feedback when students share limitations; otherwise, the growth opportunities facilitated by vulnerability will be limited. Faculty require assurance that providing students with constructive feedback will be supported and encouraged. We will discuss several threats that impact vulnerability, trust, and open feedback in the context of learner assessment from both the student and faculty perspectives. After exploring these threats, we will share strategies to engender trust in assessment and evaluation systems.
The Student Perspective
Grades and their implications for learners’ futures can form a barrier to trust and vulnerability in learner assessment. Even when grades occur in a competency-based medical education system that emphasizes criterion-referenced rather than norm-referenced assessment, students often perceive—perhaps legitimately—that they are being compared with peers. This is particularly true when more than one student is working on a team in clerkships or in faculty-led small-group sessions. Students express particular concern in instances where 2 students start a rotation with different levels of skill but end with similar skills, fearing that the student who starts at a higher level will receive a higher grade than the student who required more improvement to reach the same level of skill by the end of the rotation. As a result, learners may feel that they have to show competence outwardly even when they feel that they are not ready to independently carry out specific tasks. The opportunity for coaching and improvement may be lost despite the fact that students in early clerkship experiences value guidance and support.1
Additionally, students desire transparent, standardized, and clear expectations with criteria that are consistent across different areas of the curriculum. When assessment criteria are clear, students have increased trust in the manner in which they are being assessed and how these assessments contribute to grades.
However, even with clear expectations, students may express concern regarding the impact of subjectivity. Multiple-choice exams and objective structured clinical exams are often perceived as objective measures, while students perceive workplace-based assessments as subjective and less trustworthy. Students’ concerns may be well founded, as studies have demonstrated evidence of bias in clerkship grades and other narrative assessments.2–4
Finally, students express concerns regarding inconsistencies between verbal feedback, written assessments, and grades. Learners may receive positive verbal feedback initially, only to find that the written feedback and/or grades are less positive than what was discussed. Students express fear that informal or verbal feedback is “sugarcoated” in an effort to avoid hurting students’ feelings or impacting team dynamics. This fear and the discrepancy between verbal feedback and written feedback or grades can fuel frustration and distrust.
The Faculty Perspective
First, faculty share student concerns with respect to the impact of bias on assessment systems. As publications highlight evidence of implicit bias across numerous areas of student assessment or achievement,2–5 more faculty recognize that assessments can reflect bias. Without forums where bias and efforts to mitigate bias are openly discussed, faculty may increasingly feel unable to trust in assessment systems.
Another threat relates to faculty resources. While faculty may wish to assess students in a manner consistent with best practices, they may struggle to find the time to directly observe students’ skills because of increasing documentation or clinical productivity requirements.6 While clerkship directors and other educational leaders continue to disseminate learner expectations via email, in-person orientations, or webinars, faculty are often too busy to engage. As a result, faculty may lack a shared mental model of the stage-appropriate knowledge and skills that learners are expected to demonstrate. Moreover, even the most well-intentioned teaching faculty who are eager to improve their observation and feedback skills may struggle to find the time for the faculty development activities that would support their knowledge and skill acquisition.
Finally, student evaluations play an important role in faculty accountability and reflection for growth. However, students are often more critical of faculty who provide constructive feedback, particularly when those faculty are women or members of historically underrepresented minorities.7–9 Faculty who are aware of these biases may fear that truthful, accurate assessment feedback could result in lower evaluations and have negative career repercussions. Often, promotion criteria include student evaluations as evidence of teaching performance; faculty then fear career consequences if they provide constructive and/or critical feedback.
How do we overcome these concerns to build a learning and assessment culture that encourages learner vulnerability and empowers faculty to give specific, valuable, actionable feedback? We believe that a number of practices, detailed below, can positively impact trust in assessment systems. We have enacted these practices in our own institution to foster a culture of trust.
Provide an educational environment where learners can be vulnerable
Learners bring variable levels of skill to the clinical environment during the first weeks and months of clerkships.10 Our current system often fails to recognize or reward educational achievements attained after feedback, reflection, and learner improvement. Ideally, our systems would train learners in a mastery learning model where all learners meet the same outcomes while the time to achieve these outcomes is individualized. Grades would not be necessary, and assessment would feel safer. This would require residency program directors to trust the validity of the information provided to them. We acknowledge that implementing a system to (1) meet the needs of both student growth and vulnerability and (2) aid residency programs in screening and ranking applicants remains a significant challenge. While small pilots have had success with this model, as an education community, we have not yet achieved this goal.11
In the meantime, we can provide learners with some clinical opportunities in environments where they can share their vulnerabilities with supervising faculty. In our institution, all students now participate in a 4-year, longitudinal primary care clerkship, the Education-Centered Medical Home (ECMH).12 Students’ grades are pass/fail only, and assessment focuses on the achievement of independence in the performance of relevant entrustable professional activities. ECMH preceptors work with the same students across the 4 years, allowing them to engage in conversations regarding learners’ strengths, areas for improvement, and action steps without the tension that exists when these conversations occur in the presence of a graded system.
Cocreate individual assessments and assessment systems
We believe that it is imperative to include students in the design and evaluation of assessment processes. Students can help faculty leadership understand the barriers to vulnerability and effective assessment. They provide innovative solutions that faculty may not have considered. Furthermore, student representatives can help fellow students understand the assessment system and can share efforts being made to address student concerns related to assessment, further fostering a culture of trust.
In our institution, students serve on all curricular committees, including the assessment and evaluation subcommittees. They not only provide feedback on new individual assessment forms but also are involved in their creation. Additionally, student input through focus groups is solicited before implementing significant changes in our assessment system. This provides an opportunity for faculty both to obtain student input and to educate students about best practices in assessment.
Explicitly discuss bias, employ practices to limit its impact, and support efforts to develop and study bias-reduction strategies
Leaders in health professions education must acknowledge that bias exists and work to mitigate it through faculty training and implementation of best practices. Some practices show promise in reducing the impact of bias in clinical assessment. For example, employing an assessment system that contains a variety of assessment types by multiple assessors in multiple contexts can reduce the impact of assessor bias on grades.3 Additionally, construct-aligned scales with clear expectations and mental models improve the reliability of workplace-based assessment13 and show promise in providing learners with feedback on specific constructs based on directly observed skills.11 While these strategies represent important systems-based approaches, we must commit to additional development and study of strategies to reduce bias in assessment. Finally, as educational institutions, we must be willing to share data with students and faculty, demonstrating where bias occurs and how it may be mitigated in assessment systems—particularly with respect to factors such as race, socioeconomic status, gender, religion, and sexual orientation. This transparency plays a key role in fostering student trust in assessment systems.
In our institution, we are working toward this aim. Our assessment systems include multiple observations by multiple faculty in multiple settings to reduce the impact of assessor bias. We continue to provide faculty development related to implicit bias and have created a task force to identify areas of bias in our curriculum.
Given the role that bias plays in learner evaluations of faculty, promotion and tenure committees also must receive guidance to triangulate data related to faculty performance to help mitigate the risk to faculty. Just as health education leaders must share existing efforts to mitigate bias in assessment with students, promotion and tenure committees must share their strategies with faculty. These efforts may support faculty to give constructive feedback more freely, enhancing the opportunity for learners to improve across the learning continuum.
Protect faculty time
Assessing learners based on the direct observation of their skills—as opposed to intuiting skill from oral presentations alone—requires time. If we are truly committed to a competency-based education system, we must be confident in the outcomes of our learners, and that necessitates direct observations of learners in the workplace. Whether crediting faculty who teach and assess with “educational value units,” providing protected time for teaching, or paying for core assessment faculty, training institutions must accept that better assessment will require investment.
As health professions educators and leaders, we have an obligation to learners and to the public to provide feedback and design assessment practices that ensure future physicians attain competency and gain the knowledge and skills necessary to serve patients’ needs. We are hopeful that the continued efforts of students, faculty, and health professions education leaders working together to create fair assessment systems—and to discuss that fairness transparently—will improve trust in these systems and further students’ learning and skill achievement.
The authors would like to acknowledge and thank the ABIM Foundation for initiating a series of discussions on trust and the role it plays in health professions education and patient care.
1. Karp NC, Hauer KE, Sheu L. Trusted to learn: A qualitative study of clerkship students’ perspectives on trust in the clinical learning environment. J Gen Intern Med. 2019;34:662–668.
2. Riese A, Rappaport L, Alverson B, Park S, Rockney RM. Clinical performance evaluations of third-year medical students and association with student and evaluator gender. Acad Med. 2017;92:835–840.
3. van Andel CEE, Born MP, Themmen APN, Stegers-Jager KM. Broadly sampled assessment reduces ethnicity-related differences in clinical grades. Med Educ. 2019;53:264–275.
4. Ross DA, Boatright D, Nunez-Smith M, Jordan A, Chekroud A, Moore EZ. Differences in words used to describe racial and gender groups in medical student performance evaluations. PLoS One. 2017;12:e0181659.
5. Boatright D, Ross D, O’Connor P, Moore E, Nunez-Smith M. Racial disparities in medical student membership in the Alpha Omega Alpha Honor Society. JAMA Intern Med. 2017;177:659–665.
6. Sinsky C, Colligan L, Li L, et al. Allocation of physician time in ambulatory practice: A time and motion study in 4 specialties. Ann Intern Med. 2016;165:753–760.
7. Morgan HK, Purkiss JA, Porter AC, et al. Student evaluation of faculty physicians: Gender differences in teaching evaluations. J Womens Health (Larchmt). 2016;25:453–456.
8. Sinclair L, Kunda Z. Motivated stereotyping of women: She’s fine if she praised me but incompetent if she criticized me. Pers Soc Psychol Bull. 2000;26:1329–1342.
9. McOwen KS, Bellini LM, Guerra CE, Shea JA. Evaluation of clinical faculty: Gender and minority implications. Acad Med. 2007;82(10 suppl):S94–S96.
10. Hauer KE, Lucey CR. Core clerkship grading: The illusion of objectivity. Acad Med. 2019;94:469–472.
11. Murray KE, Lane JL, Carraccio C, et al.; Education in Pediatrics Across the Continuum (EPAC) Study Group. Crossing the gap: Using competency-based assessment to determine whether learners are ready for the undergraduate-to-graduate transition. Acad Med. 2019;94:338–345.
12. Henschen BL, Bierman JA, Wayne DB, et al. Four-year educational and patient care outcomes of a team-based primary care longitudinal clerkship. Acad Med. 2015;90(11 suppl):S43–S49.
13. Crossley J, Johnson G, Booth J, Wade W. Good questions, good answers: Construct alignment improves the performance of workplace-based assessment scales. Med Educ. 2011;45:560–569.