Nearly two decades ago, in 2001, Jason Thompson and I wrote in Academic Medicine:
The annual U.S. News & World Report rankings of U.S. medical schools are ill-conceived; are unscientific; are conducted poorly; ignore the value of school accreditation; judge medical school quality from a narrow, elitist perspective; do not consider social and professional outcomes in program quality calculations; and fail to meet basic standards of journalistic ethics. The U.S. medical education community, higher education scholars, the journalism profession, and the public should ignore this annual marketing shell game.1
Little has changed in the past 20 years: the U.S. News & World Report medical school rankings still yield distinctions without differences. These ubiquitous rankings are alive and well; they are studied and used by medical school applicants, generate income for the popular magazine, and bring money and visibility to medical schools and academic medical centers. As many authors have noted, this system has had remarkable staying power despite its methodological flaws and values that align poorly with what really matters in health professions education and health care delivery.
In this Invited Commentary, I address five questions: (1) How are the U.S. News & World Report medical school rankings performed, and why are they taken so seriously? (2) What are the core missions of U.S. medical schools, and how are they gauged? (3) What really matters for individual student success in medical school and in professional practice? (4) What improvements should be made in the medical school evaluation process? (5) How can U.S. medical schools correct the irony of dismissing the U.S. News & World Report rankings on methodological and professional grounds while still using the rankings for marketing and fund-raising?
Medical School Rankings
The methods used by U.S. News & World Report to rank medical schools are based on factors that can be measured easily but do not reflect the quality of a medical school from either a student or patient perspective. Thirty percent of a medical school’s overall score, for example, depends on the research criterion, which reflects total National Institutes of Health (NIH) research activity and average NIH activity per faculty member. An additional 20% reflects incoming students’ undergraduate grade point average (GPA) and Medical College Admission Test (MCAT) scores, and the medical school’s acceptance rate. Another 30% is based on a “quality assessment” consisting of surveys sent to medical school deans and residency program directors (often chosen by the school) asking respondents to rate the school using a five-point scale (1 = marginal, 5 = outstanding) and yielding a response rate of only 31% for the 2019 rankings. Several other criteria (e.g., faculty resources) account for a smaller proportion of the overall score.2
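In effect, the scoring scheme described above is a simple weighted sum of component scores. The sketch below illustrates only that arithmetic; the weights mirror the percentages quoted in this commentary, but the component scores are entirely hypothetical and the formula is an illustration, not U.S. News & World Report's actual calculation.

```python
# Hypothetical illustration of a weighted-sum ranking score.
# Weights follow the percentages cited in the commentary; the 20%
# "other" bucket stands in for the smaller criteria (e.g., faculty
# resources). All component scores are invented for this example.
weights = {
    "research": 0.30,            # total and per-faculty NIH research activity
    "selectivity": 0.20,         # undergraduate GPA, MCAT scores, acceptance rate
    "quality_assessment": 0.30,  # dean and program-director survey ratings
    "other": 0.20,               # remaining criteria combined
}

# Made-up normalized component scores (0-100) for a fictional school
scores = {"research": 85, "selectivity": 70, "quality_assessment": 60, "other": 75}

overall = sum(weights[k] * scores[k] for k in weights)
print(round(overall, 1))  # prints 72.5
```

The sketch makes the commentary's point concrete: every input is a proxy that is easy to measure (funding totals, test scores, survey opinions), and none directly measures the quality of the education delivered.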
Medical school rankings are taken seriously by the public and the profession largely because of the U.S. cultural obsession with selectivity, status, and elitism. This is expressed in the widespread myth that one's academic pedigree predicts professional and life success. However, this belief system fails to acknowledge that the formula used to determine medical school rankings reflects opinions about the schools and what makes them "America's Best," not facts grounded in empirical evidence. Robert Alpern, dean of the Yale School of Medicine, was quoted in Health Affairs: "I think what's frustrating everybody … is that there's nothing really in [U.S. News's] formula that is really evaluating the quality of medical education. That would be so much more valuable to the applicants, to the students. And it would incentivize us to do a better job in education."3 The U.S. News & World Report medical school rankings are a beauty contest, not a scientific project.
A recent medical education research study conducted at the University of Michigan illustrates the difference between the quality of a medical school and its ranking in U.S. News & World Report. The purpose of the study, "Assessing Residents' Competency at Baseline: How Much Does the Medical School Matter?"4 was to assess the ability of new residents to perform important clinical skills needed during the first months of postgraduate medical education. Objective clinical performance data for 1,795 residents from 139 U.S. and 33 international medical schools were collected from 2002 to 2012. The data included reliable measures of clinical data gathering, clinical assessment, team skills, procedural competence, and communication. Results show that "[r]esidents' medical school of origin is [very] weakly correlated with clinical competency as measured by a standardized objective structured clinical examination."4 When assessed rigorously, clinical skill acquisition among undergraduate medical students had no practical association with their medical school of origin. By contrast, a recent study of obstetrics–gynecology residents suggests an alternative approach, demonstrating a link between the quality of training programs and the downstream patient care provided by their graduates.5
Medical School Missions
U.S. medical schools and their affiliated academic medical centers are national treasures with a variety of education, scientific, and public health missions. The missions include quality education and clinical competence assessment; promotion of diversity in the medical profession; advancement of basic, clinical, and education science research; service to the underserved; preventive medicine and public health advocacy; and many others. These diverse missions simply cannot be captured by a unitary, ordinal ranking of the 144 MD-granting and 33 DO-granting medical schools now accredited in the United States. By placing all schools on the same scale, the rankings leave an impression that medical schools are all trying to win the same competition, that they all have the same goals, and that they all serve the same interests and needs. The truth is different.
Homogenization of U.S. medical schools into an imprecise, non-evidence-based formula weakens the ability of each medical school to achieve its unique mission in its own setting and community. The rankings imply that more research funding makes a better medical school, that NIH funding is better than other research resources regardless of impact, and that higher undergraduate GPAs and MCAT scores produce better and more committed future doctors. At the student level, this perpetuates a flawed system that values test-taking abilities above all other attributes even before medical school begins. Focusing on these and other arbitrary metrics used in the U.S. News & World Report rankings diverts time and attention away from important missions that have higher potential to improve local communities and public health but yield less prestige and national acclaim.
What Really Matters
What really matters for individual student success in professional education (and life) is individual aspiration, effort, and grit6 that lead to successful attainment of education goals, not one's medical school pedigree. Today's medical students face rising debt; evolving roles for physicians in clinical care; fast-moving health system changes; uncertainty among federal, state, and private insurers; technology advances, including the uncertain impact that artificial intelligence will have on clinical practice; and a profession only beginning to address substantial burnout and wellness concerns. We must begin to formulate assessments of medical schools that reflect preparation for these realities and allow students to compare institutions by answering questions such as: Are graduates of this school prepared to enter graduate medical education? How do graduates perform on assessments of entrustable professional activities and attainment of professional competencies such as the Accreditation Council for Graduate Medical Education milestones? How well are graduates of this school functioning 10 and 20 years into their professional careers? Have graduates achieved their personal and professional goals in patient care, research, education, or community service? How do graduates rate the education, clinical readiness, and instillation of professional values by their medical school? The U.S. News & World Report ranking system is hopelessly naïve about mechanisms such as these that ensure quality control in the medical profession.
What Improvements Can Be Made?
I amplify the earlier criticism of the current ranking system1 with a call for an outcomes-based approach that reflects the core missions of medical schools and aligns more closely with education and quality outcomes at student, patient, research, and community levels. I specifically recommend the development, evaluation, and implementation of medical school metrics that assess:
- Education quality measured by clinical skill acquisition, student engagement, professional satisfaction, and career achievement among graduates;
- Quality of clinical care provided by faculty and graduates;
- Impact of medical school research rather than quantity of research funding; and
- Community benefit such as improved public health and workforce diversity.
Irony: Flawed Rankings Versus Marketing
There is widespread recognition among U.S. medical schools and academic medical centers that the U.S. News & World Report rankings are useless on methodological and professional grounds. To my knowledge, there are no naysayers about this argument. However, medical school webpages, promotion materials, financial campaigns, and alumni newsletters routinely cite the rankings as evidence of school quality. Ironically, U.S. medical schools “feed the beast” as a marketing strategy to attract money in the form of clinical care and philanthropy. Financial returns to medical schools and academic medical centers that receive high ranks are likely much higher than profits earned by U.S. News & World Report.
Criticism of the U.S. News & World Report rankings is not confined to medical schools. Fault has been found uniformly not only with rankings of undergraduate colleges and universities but also with rankings of professional schools of education, engineering, social work, and law, among others. Indiana University law professor Jeffrey Evans Stake7 summarizes the state of affairs cogently:
U.S. News has set up a game. The players are the schools being ranked and the faculty members at those schools. Most faculty members and administrators seek to increase their school’s rank by various strategic moves. These moves are costly, in terms of money and other resources, but do little or nothing to improve legal education for students. Indeed, it is worse than that. Many of the strategies run contrary to the interests of students and society. Unfortunately, the players cannot exit the game for fear of losing support. This ranking game allows no exit and has no time clock.
U.S. medical schools need to endorse and act on values that go beyond financial prosperity, prestige, and singular attention to academic status. Quality education, community service, professional diversity, research excellence, health advocacy, interprofessional care, fostering of student resiliency and well-being, and other outcomes are better metrics of medical school quality than the currently flawed rankings.
The author thanks S. Barry Issenberg and Diane B. Wayne for critical comments about earlier drafts of this Invited Commentary.