Scholarly Perspectives

Clerkship Grading and the U.S. Economy: What Medical Education Can Learn From America’s Economic History

Ryan, Michael S. MD, MEHP; Brooks, E. Marshall PhD; Safdar, Komal; Santen, Sally A. MD, PhD

Academic Medicine. 2021;96(2):186–192. DOI: 10.1097/ACM.0000000000003566

Medical school grades are paramount for indicating progression and distinction and for selection into residency.1–3 Students view clerkship grades, in particular, as vital to a successful Match,2,4 a perspective supported by the practice of residency program directors across specialties who use these grades to select applicants and construct rank lists.1,3 Outside of assessments of professionalism, grades in clerkships serve as the most integral component used for promotion within medical schools3,5; moreover, scholastic achievement is a requirement for honors such as Alpha Omega Alpha selection.6

Although laden with importance in medical education and treated as an impartial measure of student success, we contend that clerkship grades are highly variable social constructs that imprecisely and unreliably reflect learners’ true value. In this article, we first explore the concept of social constructs to help identify inherent flaws in clerkship grades. We then compare clerkship grading with the American economic system and describe ways in which the flawed, though necessary, system of grading should be refined to more accurately and credibly assess value. Finally, we provide a 2-step solution to improve upon grading for future generations of medical students.

Grades Are a Social Construct

A social construct is a concept created, accepted, and interpreted in a similar manner by the members of a community.7 Social constructs do not exist apart from the communities in which they are embedded, and they thereby differ across contexts and situations. To understand how social constructs work, one has to examine the collective formation of shared assumptions about reality, deconstruct one’s knowledge about the world, and expose the ways in which so-called everyday reality is derived from and maintained by social interactions. However, to say something is socially constructed is not to deny its objectively verifiable components or impacts; rather, it points to how ideas about meaning, significance, and value get layered upon material surroundings and the collective understanding of these surroundings.

Money is a prime example of a socially constructed value. A dollar bill (or note) itself is materially near worthless. Composed of cotton, linen, and ink, the individual parts of a note carry little exchange value. It is only when properly cultivated, designed, assembled, stamped, and circulated that it becomes something precious. Its value stems from the collective belief in the economic system that produces and circulates that note and the belief that the value of that note, no matter its denomination, will stay consistent across time and place.

In the medical education system, grades have much in common with money. Analogously, grades are the currency through which value exchanges are negotiated between the system’s various stakeholders, including students, medical schools, and residency programs. Whatever their form (e.g., A, honors, distinction), grades provide a widely recognizable and efficient medium through which learner development can be assessed, tracked, compared, and demonstrated. Grades are indeed necessary because they reduce complex, voluminous information about a student’s performance into a more digestible form. However, the relative value they represent is not derived from any underlying material essence, but rather from their creation and circulation within a community that collectively subscribes to a shared belief about what grades mean.

Just as the American economic system requires stakeholders to implicitly trust in the value of a dollar bill, the medical education system requires stakeholders to implicitly trust certain aspects of grading. First, stakeholders must trust that the grading system is legitimate and transparent and that the institutions assigning grades are acting in good faith and assessing students consistently and impartially. Second, they must trust that there is a commonly agreed upon objective measurement system undergirding what is, by necessity, a subjective process. That is, they must trust that the same knowledge, skills, and abilities are being assessed from place to place. Only then can institutions make accurate calculations about risk and return on investment when using grades to decide which students to accept.

However, medical education’s collective trust in the current system of grading may be misplaced. By drawing comparisons between the U.S. economic system and clerkship grading, we identify how current grading systems inadvertently create problems and then describe potential solutions that may be pursued. For the purpose of this article, we have limited our critique to multitiered grading systems (e.g., honors, high pass, pass, fail), which go beyond determining whether a learner is competent versus not competent (pass vs fail). A summary of the economic concepts we use throughout the remainder of this article, their relationship to clerkship grades, and evidence supporting that relationship is provided in Table 1.

Table 1:
Principles for Assessing and Exchanging Value: U.S. Economic Concepts of Regulation and Stock Price and Their Relationship to Current Clerkship Grading Practices

Economic Concepts and Their Association With Clerkship Grades

Regulation

Economic concept.

The tension between state- and federal-level banking serves as one metaphor for understanding grading systems in the United States. To simplify the metaphor, consider banking as something that could be regulated at 3 levels: county (local), state, and federal. Applying that metaphor to grading, local banking is the equivalent of clerkship and course grades, state banking is the equivalent of medical school grades, and federal banking is the equivalent of grading across medical schools.

The history of currency in the United States unfolds as a transition from earlier periods of hyperdiversity, market volatility, and consumer confusion to an increasingly centralized control of currency value and exchange.8 For the first 100 years of its existence, the United States lacked a standardized currency system and instead relied on English, Spanish, and French currencies. Rampant counterfeiting and interstate differences in note values contributed to widespread currency speculation, inflation, and market instability. As a result, the inherent value of any given note was not immediately apparent, and complicated exchange rates had to be calculated and charged when people went to spend or redeem notes. Eventually, many private banks and businesses refused to accept notes. This led to widespread uncertainty over whether money could be successfully exchanged for desired goods and services, ultimately undermining confidence in the market.9,10

To avoid these issues, in 1861, the U.S. government began issuing demand notes that were payable on demand and backed by the federal government. Once the failure of the state-chartered banking system became apparent, some private banks were instead chartered as national banks and authorized to print only currency that used federally authorized paper and designs, ensuring the consistency, quality, and value of notes throughout the country. The value of each coin and bill was agreed upon by every state and was thus universal across the country.10

Relationship to grading.

Grading in medical schools currently functions similarly to the early history of U.S. banking; that is, it is regulated at the local (clerkship) and state (medical school) levels rather than the federal (across medical schools) level. Evidence for this can be found throughout the literature. For example, Alexander and colleagues surveyed 119 U.S. medical schools and found marked variation in grading practices between and within institutions.11 The authors highlighted the range of tiers (e.g., 2-tier [pass/fail] vs 3-tier [honors, pass, fail]) and differences in nomenclature (e.g., honors vs outstanding) used by medical schools. Within individual institutions, the authors found the proportion of students receiving the highest available grade varied by as much as 62% across clerkships. Similar results have been found in other studies.12,13 When considering the term honors specifically, studies suggest a range of highly variable meanings for this term, including mastery of content,14 outstanding (or highest-level) performance,13,15 or even that a student would be desirable to a particular residency training program.13 Collectively, these differences point to both intra- and interinstitutional variation in grading practices and grading terminology and suggest that grading functions with local (i.e., clerkship)- or state (i.e., medical school)-level regulation.

Though it may not appear problematic to use a local or state rather than a federal approach to grading, there is substantial evidence that stakeholders perceive that grades function as though they were federally regulated. To best illustrate this concept, we consider the term honors. Residency program directors commonly consider the number of honors grades obtained in clerkships as a key criterion when ranking applicants for the Match.3 The National Board of Medical Examiners (NBME) also provides a suggested benchmark for honors-level performance on the subject examinations.16 These observations suggest that stakeholders treat honors as though it carried a uniform, centralized, socially constructed value.

In summary, there is no evidence that grades possess a consistent and uniform value across or, in many cases, within institutions. The definition of honors or pass is dependent upon the local grading structure. However, grades simultaneously serve as a centralized currency in the minds of major stakeholders such as residency program directors. This disconnect suggests that grades function similarly to how notes functioned in the early years of U.S. currency, during which individual banks produced their own notes, leading to complicated exchange rates as well as a lack of trust in the system.

Stock prices

Economic concept.

Casual observers may assume the stock price of a company directly reflects its underlying value. But there is a significant difference between a company’s market value and its so-called intrinsic value. Market value is the current value of a company as reflected by the company’s stock price. Stock price is a surrogate for public sentiment about a company and, as such, may fluctuate widely with changes in supply and demand in the investing market and with investor speculation about the company’s future. The market value of a company therefore may be significantly higher or lower than its intrinsic value.17 Intrinsic value is a core metric used by value investors to analyze the underlying or true value of a company, regardless of its market value or stock price.17 Value investors seek to invest in companies that have a higher true value than the one being assigned to them by the market.18
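For readers less familiar with the underlying finance, the brief sketch below illustrates the distinction using the discounted cash flow notion of intrinsic value described by Williams.17 All figures are hypothetical and chosen only to show how a market price can sit well below (or above) an estimate of intrinsic value.

```python
# Hypothetical illustration of market value vs. intrinsic value.
# Intrinsic value here follows the discounted cash flow idea (reference 17):
# the present value of the cash flows a share is expected to generate.

def intrinsic_value(expected_cash_flows, discount_rate):
    """Sum each future cash flow, discounted back to the present."""
    return sum(
        cf / (1 + discount_rate) ** year
        for year, cf in enumerate(expected_cash_flows, start=1)
    )

# Per-share cash flow estimates for the next 5 years (the final year folds in a
# terminal value); all numbers are invented for illustration.
cash_flows = [5.0, 5.5, 6.0, 6.5, 70.0]
value = intrinsic_value(cash_flows, discount_rate=0.08)

market_price = 38.00  # what the market currently pays for the share
print(f"Intrinsic value estimate: ${value:.2f}; market price: ${market_price:.2f}")
# The gap between the two numbers is what a value investor looks for -- and the
# analogue of a learner whose grades understate (or overstate) true capability.
```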

Relationship to grading.

Grades may be understood as an indicator of the “value” of a learner, equivalent to a stock price. They serve as an easily digestible metric to judge the relative value of the learner: that is, the higher the grade, the more desirable the candidate. In this sense, grades may be viewed as a measure of a student’s skill, attitude, and knowledge. However, the true value of the learner is his/her capability to perform as a physician. Similar to stock prices, grades provide a simplified way of understanding and comparing value and may over- or underrepresent the true value of the learner.

There are several external factors that may drive the determination of a student’s final grade. Examples include the preceptor to whom the student is assigned19; the site at which the student rotates20,21; and, even more concerning, factors such as a learner’s gender, age, or race/ethnicity.22,23 Several studies have shown that, compared with these external factors, the student’s own performance contributes relatively little to the variance in performance evaluations.24–26

Grades are also influenced by the specific weighting system employed by the local clerkship. One clerkship may weight the NBME subject examination at 15% of the final grade, while another may weight the same examination at 50%. While the contribution of a standardized, objective examination may appear more representative of a student’s capabilities, there are flaws here as well. Standardized examination performance predicts future examination performance rather than performance as a physician.27–30 In our previous work, we showed that substituting United States Medical Licensing Examination (USMLE) performance for NBME subject examination performance did not change final grades for most students; thus, use of the NBME subject examinations to determine a final grade may misrepresent knowledge acquired during a specific clerkship.30

Efforts have been made to improve the accuracy and transparency of grade assignments. The Association of American Medical Colleges (AAMC) Group on Student Affairs recently advocated for the disclosure of each clerkship’s weighting of each component (e.g., subject examination vs faculty evaluations) used to determine a final grade to assist in interpretation.31 While these efforts may be helpful to some extent, Schilling has pointed out that such disclosures may inaccurately represent the actual contribution any one component makes toward a student’s final grade.32 For example, even if a school states that a component contributes 15%, the realized contribution (i.e., how the grade component translates to a final grade) may be substantially higher or lower depending upon numerous other factors, such as variability in student performance and in evaluators’ ratings. In summary, grades provide a snapshot of a learner’s value, but, similar to stock prices, they may not provide an accurate representation of that learner’s true value.
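To make this point about realized versus stated weights concrete, the sketch below uses hypothetical numbers (not drawn from any cited study). Because the clinical evaluation scores in this example barely differ across students, the subject examination, despite a stated weight of only 15%, ends up determining who crosses the honors threshold.

```python
# A minimal, hypothetical sketch of how a component's "realized" contribution to
# a final clerkship grade can diverge from its stated weight when component
# scores vary unevenly across students.

students = {
    # name: (clinical evaluation score, NBME subject examination score), both 0-100
    "Student A": (93, 95),
    "Student B": (94, 78),
    "Student C": (92, 62),
}

W_CLINICAL, W_EXAM = 0.85, 0.15  # stated weights
HONORS_CUTOFF = 92.0             # hypothetical composite threshold for honors

def final_score(clinical: float, exam: float) -> float:
    """Weighted composite used to assign the final grade tier."""
    return W_CLINICAL * clinical + W_EXAM * exam

for name, (clinical, exam) in students.items():
    composite = final_score(clinical, exam)
    tier = "honors" if composite >= HONORS_CUTOFF else "pass"
    print(f"{name}: clinical={clinical}, exam={exam}, composite={composite:.1f} -> {tier}")

# Clinical ratings span only 2 points, so the examination (stated weight 15%)
# produces most of the spread in composites and alone decides who reaches the
# honors cutoff: Student B, with the strongest clinical ratings, misses honors
# because of the exam. The exam's realized contribution exceeds its stated weight.
```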

Summary

Clerkship grades serve as an inadequate form of currency. The lack of central regulation has resulted in marked variations in grading between and within institutions, concerns over transparency and volatility, and the perception that grades do not provide an accurate picture of students’ true value. While these challenges highlight fundamental issues with clerkship grading, lessons learned from the history of the U.S. economic system may provide a template for meaningful change.

Two-Step Solution

In this section, we outline a 2-step solution that applies lessons learned from the U.S. economy to the challenges inherent in clerkship grading: (1) transition from grades to a federally regulated competency-based assessment model and (2) development of a departmental competency letter that incorporates competency-based assessments rather than letter grades and meets the needs of program directors, thereby providing an alternative way to describe the value of learners in lieu of letter grades.

A federally regulated competency-based assessment model

Across the continuum of medical education, there are examples of regulation at both state and federal levels. In medical school, grades are left to the state (individual institutions), while other aspects of overall competency are regulated federally (e.g., through the USMLE examinations), and program-level quality is determined federally by the Liaison Committee on Medical Education (LCME). In residency training, assessment methods are left to the state (the residency program), while overall competency is regulated federally (e.g., by specialty and licensing boards), and program-level quality is determined federally by the Accreditation Council for Graduate Medical Education (ACGME).

On the surface, it may therefore appear as though the distribution between state and federal regulation in undergraduate medical education (UME) is comparable to the distribution in graduate medical education (GME). However, there are 2 notable distinctions. First, while each GME training program can determine specific methods of assessment, final judgments must be linked to a set standard: the ACGME milestones. Milestones thus serve as a federal benchmark by which all GME learners are evaluated. In medical school, there is no federal standard by which all UME learners are evaluated. The second major distinction involves regulatory bodies. In GME and throughout practice, specialty and medical boards regulate individual physicians to determine competence for licensure and specialty certification. In UME, there is no equivalent federal regulation of individual students; the LCME regulates institutions rather than individuals and, thus, does not provide accountability standards for individual learners.

We feel that the distribution between state and federal regulation in GME is preferable to that in UME and that the current UME distribution likely explains some of the challenges with clerkship grading. Therefore, we suggest UME programs consider a more centralized, or federal, approach as a logical first step. The next questions then become: (1) what sort of approach is best and (2) who should provide the regulation?

What sort of approach is best?

Previous authors have discussed 2 general approaches to federally regulate clerkship grades. The first approach retains existing letter grade terminology (e.g., honors) but calls upon institutions to develop consensus and consistency around those terms.13,32 The second approach proposes a complete reform in grading terminology and philosophy to transition to a competency-based assessment model.33,34

We favor a competency-based model for 2 major reasons. First, we feel competency-based frameworks provide an opportunity for a more accurate representation of a learner’s true value. By describing how a student performs in various domains (e.g., patient care vs medical knowledge), one can determine the student’s strengths and areas for improvement. Second, competency-based frameworks offer the potential to guide learners over the course of their developmental trajectory. Thus, a competency-based model may not only be valuable in expressing the value of a learner to his/her respective residency program director but may also provide inherent value for the developing physician throughout his/her training.

Several competency-based models may provide a template for a federal approach to clerkship grades. For example, the AAMC recently developed the Core Entrustable Professional Activities for Entering Residency to serve as common graduation competencies across all medical schools.35 Alternatively, there may be an advantage to adapting frameworks such as the ACGME core competencies and integrating these into the UME context, as this would allow translation across the continuum of training. Previous authors have described such an approach using the ACGME core competencies.36 Either of these options would serve to define a federal currency across medical schools.

Who should provide the regulation?

As others have suggested, the LCME may serve as a logical organization to provide central oversight on grades.11 However, we do not think such a move is consistent with the accrediting body’s philosophy and instead suggest that medical schools form a coalition to take this on themselves.

Since its creation, the LCME has recognized its inherent social responsibility for training physicians by ensuring that medical education programs meet set standards.37 Some of these standards focus on national outcome data such as USMLE scores, AAMC Graduation Questionnaire results, and program director survey results. However, this is counterbalanced by a simultaneous recognition of the “diverse institutional missions and educational objectives” of individual programs.38 This means that responsibility for objectives, assessment methods, and grading procedures must be developed by each individual medical school and must correspond to its educational mission.38 Therefore, efforts to standardize grades through the LCME would require a fundamental shift from its philosophy of local regulation for curriculum and assessment to a more federal perspective.

While we do not think it is likely that the LCME will take on grading standards, we do feel a federal approach is critical. Thus, we would suggest that medical schools form a coalition to uniformly adopt a framework of competency-based assessment. Hauer and Lucey had a similar suggestion.33 We would take their recommendations a step further by proposing that all schools not only use the same framework but that they use that framework and that framework alone (i.e., no grades whatsoever) for assessing student performance across clerkships. This would serve 3 goals. First, it would better articulate the true value (rather than the “stock price”) of the learner. Second, it would provide a set standard for evaluating learners, thus addressing concerns over volatility and transparency. Finally, it would address the longitudinal nature of learner development, the complexity of skills associated with becoming a physician, and the notion that medical schools should assess learner competency.

Impact on residency selection

The transition from letter grades to a competency-based framework raises one major concern, namely, the impact on residency selection for interview and ranking. Program directors consider a variety of metrics when evaluating candidates, including clerkship grades, interview scores, and performance on standardized tests. Authors have expressed concern that eliminating letter grades would lead residency programs to place increased emphasis on other, less desirable components of an applicant’s file (e.g., USMLE performance).39 This has led some to conclude that grading will remain as is for the foreseeable future.39 To address this concern, we feel it is important to consider why program directors use metrics such as grades to select candidates.

All metrics, including clerkship grades, are imperfect surrogates for the answers to the 3 fundamental questions program directors want to know: (1) will the applicant succeed, (2) how does this candidate compare with other applicants, and (3) is the applicant a “good fit” for the program?40–43 As stated by Katzung and colleagues, “performance in an EM [emergency medicine] rotation is one of the most important aspects of the application … because optimal performance in an EM rotation is the most direct and convincing evidence [italics added] in the eyes of PDs [program directors] that the student will later excel [italics added] in an EM training program.”42 Grades serve as a proxy for future performance. They provide some indication of whether an applicant will succeed and how that candidate compares with other applicants; meanwhile, interviews serve as a way to get supplemental answers about whether an applicant will succeed and if he/she is a good fit for the program.43

When considering the implications of removing letter grades from the equation, we must therefore consider how to provide residency programs with an alternative or more optimal way to determine the qualifications of a candidate. We do not think it is reasonable to expect program directors, tasked with increasing numbers of applications each year, to be willing or able to translate a competency-based framework into a decision on whether to interview or how to rank an applicant. Thus, we suggest the development of a supplemental structured letter that provides context to a student’s competency-based rating.

A viable alternative to clerkship grades: The departmental competency letter

In recent years, a growing number of initiatives have attempted to improve the residency selection process. Each of these comes in the form of a structured letter intended to summarize a student’s performance in a consistent, transparent, and standardized manner across institutions and/or specialties. In relation to economic principles, these letters partially address the move to regulate grades at the federal level and attempt to reflect the intrinsic value of the learner.

At the institutional level, one such initiative involves revision to the Medical Student Performance Evaluation (MSPE). In 2015, a task force was assembled to discuss potential changes to the MSPE, specifically to improve its value, standardization, transparency, and ability to compare candidates both among and within institutions.31 Based on its intent and design, the revised MSPE should therefore centralize and standardize the currency exchange between medical schools and residency programs. While the task force’s proposed changes to the MSPE demonstrated efforts to improve the exchange between medical schools and residency programs, the implementation has not been as successful. As Hook and colleagues recently illustrated, the revised MSPE still lacks consistency, transparency, and reliability in clerkship grade determination and in methods for reporting.44

At the specialty or departmental level, initiatives include the departmental chair’s letter (e.g., internal medicine),45,46 the Standardized Letter of Evaluation (SLOE; e.g., emergency medicine, pediatrics),47,48 and a competency-based educational handover letter (e.g., emergency medicine, obstetrics and gynecology, pediatrics, surgery).49–51 These letters each use comparative specialty-specific assessments to allow for differentiation of performance (i.e., novice to advanced), thus providing some advantage over the revised MSPE. Other common advantages include some ability to compare learners both within and, to some extent, across institutions. However, the departmental chair’s letter and SLOE are somewhat limited in their ability to provide multiple measures of performance (e.g., more than just performance during 1 or 2 rotations), and they both lack the use of a competency-based framework.45–48 The handover letter overcomes the downsides of the departmental chair’s letter and SLOE in offering a true competency-based assessment and incorporating multiple measures of performance.49–51 However, the handover letter is more limited in terms of its ability to allow for interinstitutional comparisons. Perhaps more importantly, the handover letter is provided after the Match, which prevents it from being used for residency selection.

While grades are a component of the SLOE and departmental chair’s letters, we contend that grades themselves are not integral to the value of either. To illustrate this point, emergency medicine program directors note that a “well written SLOE provides an overall perspective on what an individual candidate offers to a training program. It is unique in its ability to provide comparative data to peers [italics added] in addition to important information regarding the distinguishing non-cognitive characteristics (e.g. maturity, professionalism, leadership, compassion, initiative, enthusiasm) that an applicant possess[es].”52 Therefore, the value provided by the SLOE is the comparative, transparent, and reliable data based on performance, not the specific grades.

In summary, each of the supplemental letters previously described provides some improvement to the standard application and may offer an alternative to clerkship grades. They each partially address the foundational economic principles of regulating currency at the federal level and representing the intrinsic value of the learner. However, they each fall short for various reasons as summarized in Table 2. We therefore propose the use of an alternative letter that draws upon the strengths and minimizes the weaknesses of those previously described—the departmental competency letter.

Table 2:
Illustration of Foundational Economic Principles and How Well Each Is Represented by Existing or Proposed Structured Letters for Residency Selectiona

The proposed departmental competency letter would be specialty specific, incorporate a common standard, allow for intra- and interinstitutional comparisons, use competency-based terminology, and incorporate multiple measures of a student’s performance (i.e., more than performance in a single clerkship). Specialties would select the most relevant competencies, milestones, or entrustable professional activities for the field. Aggregate data would be provided in a similar manner for all learners going into that field, thus allowing for comparisons within and across institutions using a common and competency-based standard. Such a letter would incorporate multiple measures of competency-based assessments, as in the revised MSPE (see Table 2), but would not include a letter grade (e.g., honors). In addition to the specific contents, the letter would be provided before the Match to specifically meet the needs of residency program directors and to serve as a viable alternative to clerkship grades. Such a change may be particularly welcome in the near future as the USMLE Step 1 moves to pass/fail53 and program directors seek alternative objective measures for evaluating candidates.54 Overall, these attributes would provide a federally regulated currency exchange that better demonstrates the true value of a learner and addresses key considerations for program directors.

Conclusion

Grades serve a valuable purpose in medical education. At their foundation, they help determine whether medical students are competent for graduation, while also providing comparative data for residency training programs. However, they serve neither purpose in an optimal manner.

We suggest that the optimal method for grading involves application of key lessons learned from the history of the U.S. economy. By developing a federally regulated currency that more accurately demonstrates a learner’s true value, we may be able to provide a more transparent, reliable, and informative metric for all key stakeholders.

Acknowledgments:

The authors would like to thank Nicole Deiorio, MD, and Meg Keeley, MD, for providing a critical review of this manuscript.

References

1. Green M, Jones P, Thomas JX Jr. Selection criteria for residency: Results of a national program directors survey. Acad Med. 2009;84:362–367.
2. Go PH, Klaassen Z, Chamberlain RS. Residency selection: Do the perceptions of US programme directors and applicants match? Med Educ. 2012;46:491–500.
3. National Resident Matching Program. Results of the 2016 NRMP program director survey. https://www.nrmp.org/wp-content/uploads/2016/09/NRMP-2016-Program-Director-Survey.pdf. Published 2016. Accessed June 15, 2020.
4. Brandenburg S, Kruzick T, Lin CT, Robinson A, Adams LJ. Residency selection criteria: What medical students perceive as important. Med Educ Online. 2005;10:4383.
5. Green EP, Gruppuso PA. Justice and care: Decision making by medical school student promotions committees. Med Educ. 2017;51:621–632.
6. Alpha Omega Alpha Honor Medical Society. How members are chosen. https://www.alphaomegaalpha.org/how.html. Updated June 1, 2020. Accessed June 15, 2020.
7. Searle JR. The Construction of Social Reality. New York, NY: The Free Press; 1995.
8. Newman EP. The Early Paper Money of America. Iola, WI: Krause Publications; 1990.
9. Cagan P. The first fifty years of the national banking system: An historical appraisal. In: Carson D, ed. Banking and Monetary Studies: In Commemoration of the Centennial of the National Banking System. Homewood, IL: Irwin; 1963:15–42.
10. Riggs T, ed. Gale Encyclopedia of U.S. Economic History. 2nd ed. Detroit, MI: Gale; 2015.
11. Alexander EK, Osman NY, Walling JL, Mitchell VG. Variation and imprecision of clerkship grading in U.S. medical schools. Acad Med. 2012;87:1070–1076.
12. Fazio SB, Torre DM, DeFer TM. Grading practices and distributions across internal medicine clerkships. Teach Learn Med. 2016;28:286–292.
13. Lipman JM, Schenarts KD. Defining honors in the surgery clerkship. J Am Coll Surg. 2016;223:665–669.
14. Bullock JL, Lai CJ, Lockspeiser T, et al. In pursuit of honors: A multi-institutional study of students’ perceptions of clerkship evaluation and grading. Acad Med. 2019;94(11 suppl):S48–S56.
15. Dudas RA, Barone MA. Setting standards to determine core clerkship grades in pediatrics. Acad Pediatr. 2014;14:294–300.
16. National Board of Medical Examiners. 2016 NBME clinical clerkship subject examination survey: Summary of results. https://www.nbme.org/sites/default/files/2020-01/Clerkship_Survey_Summary.pdf. Accessed June 15, 2020.
17. Williams JB. The Theory of Investment Value. Cambridge, MA: Harvard University Press; 1997.
18. Graham B. The Intelligent Investor: A Book of Practical Counsel. New York, NY: Harper; 1959.
19. Pulito AR, Donnelly MB, Plymale M. Factors in faculty evaluation of medical students’ performance. Med Educ. 2007;41:667–675.
20. Fay EE, Schiff MA, Mendiratta V, Benedetti TJ, Debiec K. Beyond the ivory tower: A comparison of grades across academic and community OB/GYN clerkship sites. Teach Learn Med. 2016;28:146–151.
21. Plymale MA, French J, Donnelly MB, Iocono J, Pulito AR. Variation in faculty evaluations of clerkship students attributable to surgical service. J Surg Educ. 2010;67:179–183.
22. Riese A, Rappaport L, Alverson B, Park S, Rockney RM. Clinical performance evaluations of third-year medical students and association with student and evaluator gender. Acad Med. 2017;92:835–840.
23. Lee KB, Vaishnavi SN, Lau SK, Andriole DA, Jeffe DB. “Making the grade:” Noncognitive predictors of medical students’ clinical clerkship grades. J Natl Med Assoc. 2007;99:1138–1150.
24. Zaidi NLB, Kreiter CD, Castaneda PR, et al. Generalizability of competency assessment scores across and within clerkships: How students, assessors, and clerkships matter. Acad Med. 2018;93:1212–1217.
25. Kreiter CD, Ferguson KJ. Examining the generalizability of ratings across clerkships using a clinical evaluation form. Eval Health Prof. 2001;24:36–46.
26. Kreiter CD, Ferguson K, Lee WC, Brennan RL, Densen P. A generalizability study of a new standardized rating form used to evaluate students’ clinical clerkship performances. Acad Med. 1998;73:1294–1298.
27. Zahn CM, Saguil A, Artino AR Jr, et al. Correlation of National Board of Medical Examiners scores with United States Medical Licensing Examination Step 1 and Step 2 scores. Acad Med. 2012;87:1348–1354.
28. Dong T, Copeland A, Gangidine M, Schreiber-Gregory D, Ritter EM, Durning SJ. Factors associated with surgery clerkship performance and subsequent USMLE Step scores. J Surg Educ. 2018;75:1200–1205.
29. Myles TD, Henderson RC. Medical licensure examination scores: Relationship to obstetrics and gynecology examination scores. Obstet Gynecol. 2002;100(5 Pt 1):955–958.
30. Ryan MS, Bishop S, Browning J, et al. Are scores from NBME subject examinations valid measures of knowledge acquired during clinical clerkships? Acad Med. 2017;92:847–852.
31. Association of American Medical Colleges. Medical Student Performance Evaluation (MSPE). https://www.aamc.org/professional-development/affinity-groups/gsa/medical-student-performance-evaluation. Accessed June 15, 2020.
32. Schilling DC. Using the clerkship shelf exam score as a qualification for an overall clerkship grade of honors: A valid practice or unfair to students? Acad Med. 2019;94:328–332.
33. Hauer KE, Lucey CR. Core clerkship grading: The illusion of objectivity. Acad Med. 2019;94:469–472.
34. Durning SJ, Hemmer PA. Commentary: Grading: What is it good for? Acad Med. 2012;87:1002–1004.
35. Englander R, Flynn T, Call S, et al. Toward defining the foundation of the MD degree: Core Entrustable Professional Activities for Entering Residency. Acad Med. 2016;91:1352–1358.
36. Englander R, Cameron T, Ballard AJ, Dodge J, Bull J, Aschenbrener CA. Toward a common taxonomy of competency domains for the health professions and competencies for physicians. Acad Med. 2013;88:1088–1094.
37. Kassebaum DG. Origin of the LCME, the AAMC-AMA partnership for accreditation. Acad Med. 1992;67:85–87.
38. Liaison Committee on Medical Education. Functions and structure of a medical school. https://lcme.org/publications. Published March 2019. Accessed June 15, 2020.
39. Hemmer PA, Durning SJ. A standardized approach to grading clerkships: Hard to achieve and not worth it anyway. Acad Med. 2013;88:295–296.
40. Agarwal V, Bump GM, Heller MT, et al. Do residency selection factors predict radiology resident performance? Acad Radiol. 2018;25:397–402.
41. Raman T, Alrabaa RG, Sood A, Maloof P, Benevenia J, Berberian W. Does residency selection criteria predict performance in orthopaedic surgery residency? Clin Orthop Relat Res. 2016;474:908–914.
42. Katzung KG, Ankel F, Clark M, et al. What do program directors look for in an applicant? J Emerg Med. 2019;56:e95–e101.
43. Stephenson-Famy A, Houmard BS, Oberoi S, Manyak A, Chiang S, Kim S. Use of the interview in resident candidate selection: A review of the literature. J Grad Med Educ. 2015;7:539–548.
44. Hook L, Salami AC, Diaz T, Friend KE, Fathalizadeh A, Joshi ART. The revised 2017 MSPE: Better, but not “outstanding.” J Surg Educ. 2018;75:e107–e111.
45. Fitz M, La Rochelle J, Lang V, DeWaay D, Adams W, Nasraty F. Use of standard guidelines for department of medicine summary letters. Teach Learn Med. 2018;30:255–265.
46. Lang VJ, Aboff BM, Bordley DR, et al. Guidelines for writing department of medicine summary letters. Am J Med. 2013;126:458–463.
47. Love JN, Ronan-Bentle SE, Lane DR, Hegarty CB. The Standardized Letter of Evaluation for postgraduate training: A concept whose time has come? Acad Med. 2016;91:1480–1482.
48. Love JN, Smith J, Weizberg M, et al.; SLOR Task Force. Council of Emergency Medicine Residency Directors’ standardized letter of recommendation: The program director’s perspective. Acad Emerg Med. 2014;21:680–687.
49. Schiller JH, Burrows HL, Fleming AE, Keeley MG, Wozniak L, Santen SA. Responsible milestone-based educational handover with individualized learning plan from undergraduate to graduate pediatric medical education. Acad Pediatr. 2018;18:231–233.
50. Wancata LM, Morgan H, Sandhu G, Santen S, Hughes DT. Using the ACMGE milestones as a handover tool from medical school to surgery residency. J Surg Educ. 2017;74:519–529.
51. Sozener CB, Lypson ML, House JB, et al. Reporting achievement of medical student milestones to residency program directors: An educational handover. Acad Med. 2016;91:676–684.
52. Council of Residency Directors in Emergency Medicine. The Standardized Letter of Evaluation (SLOE). https://www.cordem.org/esloe. Accessed June 15, 2020.
53. United States Medical Licensing Examination. InCUS: Invitational Conference on USMLE Scoring: Change to pass/fail score reporting for Step 1. https://www.usmle.org/incus. Accessed June 18, 2020.
54. Makhoul AT, Pontell ME, Ganesh Kumar N, Drolet BC. Objective measures needed—Program directors’ perspectives on a pass/fail USMLE Step 1. N Engl J Med. 2020;382:2389–2392.
Copyright © 2020 by the Association of American Medical Colleges