America's Best Medical Schools: A Critique of the U.S. News & World Report Rankings

McGaghie, William C. PhD; Thompson, Jason A.

INSTITUTIONAL ISSUES: ARTICLES

Rankings of American medical schools published annually by the news magazine U.S. News & World Report are widely used to judge the quality of the schools and their programs. The authors describe and then critique the rankings on methodologic and conceptual grounds, arguing that the annual U.S. News medical school evaluation falls short in both areas. Three categories of program quality indicators different from those used by U.S. News are presented as alternative ways to judge medical schools. The authors conclude that the annual U.S. News & World Report rankings of American medical schools are ill-conceived; are unscientific; are conducted poorly; ignore medical school accreditation; judge medical school quality from a narrow, elitist perspective; and do not consider social and professional outcomes in program quality calculations. The medical school rankings have no practical value and fail to meet standards of journalistic ethics.

Dr. McGaghie is professor of medical education and professor of preventive medicine, Northwestern University Medical School, Chicago, Illinois. Mr. Thompson is a statistical programmer analyst, Behavioral Medicine Research Group, Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania.

Correspondence and requests for reprints should be addressed to Dr. McGaghie, Northwestern University Medical School, Office of Medical Education and Faculty Development, 3-130 Ward Building, 303 East Chicago Avenue, Chicago, IL 60611-3008; e-mail 〈wcmc@northwestern.edu〉.

Note added in proof: As this article was going to press, The Washington Monthly published its critique of the U.S. News & World Report rankings of colleges in its September 2001 issue. The authors, Amy Graham and Nicholas Thompson, reached conclusions similar to ours concerning the misleading and harmful character of those rankings.

The authors are indebted to Addeane S. Caelleigh, Robert Gundlach, Dennis Hoban, Emil Petrusa, Jon Wergin, and three anonymous reviewers for critical comments about an earlier version of the manuscript.

All Americans have a stake in the quality of U.S. medical schools. Prospective medical students and the schools themselves have a particular, practical interest. Everyone wants assurance that the doctors produced by U.S. medical schools are effective and safe.

The news magazine U.S. News & World Report has addressed such concerns by publishing an annual national ranking of graduate education programs in a variety of fields, including medicine. The U.S. News medical school rankings have become a de facto measure of program quality by assessing American medical schools' reputations, research activity, student selectivity, and faculty resources. These rankings are accepted and cited by the medical profession, colleges and universities, the national and local popular press, broadcast journalists, and the public. The U.S. News rankings are so highly valued and relied upon because uncritical consumers believe these data provide the only available objective, unbiased way to assess and compare U.S. medical schools. In this article, we examine problems and weaknesses in the U.S. News rankings and raise doubts about their utility for judging the quality of medical education programs. We point to other, potentially more meaningful, kinds of assessment that could be used instead.

People want sound, objective data about the quality of medical schools and their programs for several reasons. Policymakers want evidence that the U.S. medical education system is efficient and works. Prospective students need information to help them decide where to apply so they can assess how well they “fit” an educational environment. Others may want to know how “productive” different schools are for their enrolled students: “What is the graduation rate?” “What is the value added for students by graduation from a particular academic medical institution, and are some kinds of contributions more valuable than others?” Does it matter, for example, if an American physician graduates from the University of California, Irvine College of Medicine, or from Johns Hopkins University School of Medicine, the University of Missouri—Columbia School of Medicine, or the Ponce School of Medicine in Puerto Rico? If there is a difference, what is it?

In this article, we address such questions, after considering the U.S. News & World Report ranking system.

METHODS OF EVALUATION

Any method of evaluation and ranking is, of course, a value system expressed in numbers. The measured quantities are, by definition, what is valued. The evaluation method assigns numbers to these valued attributes and the ranking system determines their order from top to bottom. The important point for this discussion is that U.S. News & World Report uses a small set of four attributes to rank American medical schools: reputation, research activity, student selectivity, and faculty resources. The value system at the heart of these attributes emphasizes institutional prestige driven by national visibility. This emphasis is never revealed in the rankings and, we believe, is not a true index of medical school quality.

What other medical school attributes could policymakers, analysts, or prospective students use to assess medical school quality? One could be a school's secular versus religious heritage (e.g., Roman Catholic, Jewish, Seventh-day Adventist), because these traditions appeal to different faculty and student groups. By contrast, a school's commitment to public service could be used as a measure of quality, based on the idea that U.S. society has a large financial stake in educating physicians and that medical schools should therefore direct their programs in ways that serve their surrounding communities. Or a measure could be the racial and ethnic diversity of the faculty and student body, given American pluralism and the importance of assuring access of all segments of society to the health professions, especially medicine. Finally, medical graduates' performances on national board examinations or other important tests could be the quality index, based on the idea that the most important point is the measured educational achievement of the nation's doctors.

There are other possible measures, but this short list illustrates our case that many sets of values are available for evaluating the quality of U.S. medical schools. One is obvious and taken for granted—accreditation by the Liaison Committee on Medical Education (LCME). This system emphasizes the importance of each school's having the necessary resources (library, laboratories, clinics) to conduct medical education, having properly educated and qualified faculty in sufficient numbers, and requiring students to demonstrate on commonly accepted tests that they have the necessary knowledge to be good physicians. No one thinks these values are unimportant or should be ignored. The U.S. News rankings do not need to deal with these values only because, over the past 90 years, the U.S. medical profession has itself set up tough accreditation standards and bodies to ensure that these essentials are present at every medical school.

Many different methods are available to evaluate program quality in U.S. medical education. Table 1 identifies six of these program evaluation methods, ranging from accreditation, which as noted is required of all medical schools to begin or continue operations, to impact on students, prosperity, public service, reputation, and research activity. Each method is grounded in a rationale—a set of values—that connotes images of program quality. The evaluation methods and their underlying values are not better or worse than one another, just different. The key point is that each approach used to gauge program quality is a direct expression of one's values, whether those values are acknowledged or not.

Table 1

In this essay, we have two goals. First, we describe and critique the annual U.S. News & World Report rankings of “top” U.S. medical schools and link recent data with data published over two decades ago. The critique is done on both methodologic and conceptual grounds. Second, we describe and illustrate three different approaches to evaluating medical education program quality that use accreditation as a baseline and indexes of goal-directed results as outcomes.

U.S. NEWS & WORLD REPORT RANKINGS

The rankings of 125 U.S. medical schools published by U.S. News & World Report on April 10, 2000, are cast in two categories: (1) overall and (2) primary care. The overall medical school rankings, which we focus on here, were calculated using the following formula. (The descriptions are direct quotations from the year 2000 ranking reports, shortened only for clarity in this context.)

  • Reputation (40 percent [of overall rank]): “Academic reputation was measured through two kinds of surveys conducted by U.S. News in the fall of 1999. In the first, medical school deans and senior faculty were asked to rate each school's overall academic quality… on a scale of ‘marginal’ (1) to ‘distinguished’ (5). The response rate was 51 percent for medical schools… A school's average score accounts for 20 percent of its overall rank… In the second survey, sent to residency program directors, respondents were asked to select… the 25 best medical schools… Forty-five percent responded to the best medical schools survey… Their opinions count for 20 percent of a school's overall rank.” [Emphasis added.]
  • Research activity (30 percent of overall rank): “This indicator is measured as the total dollar amount of National Institutes of Health research grants awarded to the medical school and its affiliated hospitals, averaged for 1998 and 1999.”
  • Student selectivity (20 percent of overall rank…): “This indicator consists of three components, which describe the class entering in fall, 1999: mean composite Medical College Admission Test score (65 percent of this measure), mean undergraduate grade-point average (30 percent), and the proportion of applicants accepted into the program (5 percent).”
  • Faculty resources (10 percent of overall rank…): “This indicator is the ratio of full-time science and clinical faculty to full-time students in 1999.”
  • Overall rank: “To obtain its overall score, the school's score on each indicator was first standardized. The standardized scores of each indicator were weighted, totaled, and rescaled so that the top-scoring school received a 100 and other schools received a percentage of the top school's score.” [Emphasis added.]
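
To make the weighting arithmetic concrete, the sketch below reproduces the standardize, weight, total, and rescale sequence quoted in the last bullet. The school names and indicator values are hypothetical, and because U.S. News does not publish its exact standardization, a simple min-max scaling stands in for that step; only the 40/30/20/10 weights and the "top school receives 100" rescaling come from the published description.

```python
def minmax(values):
    """Scale a list of raw indicator values to the 0-1 range (stand-in for standardization)."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

# Hypothetical raw indicators per school:
# (mean reputation rating, NIH dollars in $millions, selectivity index, faculty:student ratio)
schools = {
    "School A": (4.8, 900.0, 0.95, 4.6),
    "School B": (4.6, 700.0, 0.90, 3.9),
    "School C": (3.9, 400.0, 0.80, 2.8),
    "School D": (3.1, 150.0, 0.70, 1.7),
}
weights = (0.40, 0.30, 0.20, 0.10)  # reputation, research, selectivity, faculty resources

# Standardize each indicator column, then form each school's weighted total.
columns = [minmax(col) for col in zip(*schools.values())]
totals = [sum(w * columns[i][j] for i, w in enumerate(weights))
          for j in range(len(schools))]

# Rescale so the top-scoring school receives 100 and the rest a percentage of its score.
top = max(totals)
overall = {name: round(100 * t / top, 1) for name, t in zip(schools, totals)}
print(overall)  # the top hypothetical school prints as 100.0
```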

This method of ranking the “top medical schools” yielded predictable results in 2000. The five highest-ranking U.S. medical schools (overall scores in parentheses) are:

  1. Harvard University (100.0)
  2. Johns Hopkins University (73.0)
  3. University of Pennsylvania (70.0)
  4. Washington University in St. Louis (66.0)
  5. Columbia University College of Physicians and Surgeons (64.0)

The first author's employer, Northwestern University Medical School, ranked 22nd (overall score of 45.0), which means by the U.S. News formula that Northwestern has just 45% of Harvard's quality. The last school on the list, ranked 50th (overall score of 27.0), is Wake Forest University. Ranks and overall scores for the remaining 75 accredited U.S. medical schools are not published.

Similar ranking methods have been used since the beginning of the “best graduate schools” reports by U.S. News. The U.S. News & World Report formula evaluates overall medical school quality in terms of reputation measured by subjective impressions; research activity by money received from one source (NIH); student selectivity by test scores, grades, and selection ratio; and faculty resources by the faculty-to-student ratio, another index of school wealth.

CRITIQUE OF U.S. NEWS & WORLD REPORT RANKINGS

Flawed Methods

There are at least five reasons to criticize the U.S. News & World Report rankings on methodologic grounds: (1) narrow focus, (2) inadequacy of response rates, (3) measurement error, (4) unchanging stability of results, and (5) confounding.

The narrow focus of the U.S. News rankings on a small minority of “top” U.S. medical schools ignores the large majority of fully accredited medical schools. For example, just 40% of U.S. medical schools were ranked in 2000 and 1999; 20% of the schools were ranked in 1998, 1997, and 1996.1,2,3,4,5 Thus, 60% to 80% of accredited U.S. medical schools received no attention from U.S. News from 1996 to 2000.

One of the four medical school quality indicators is reputation, which is based on perceptions of senior medical school administrators, faculty members, and residency (post-graduate medical education) program directors, as measured by surveys. However, response rates to the U.S. News & World Report surveys of medical school reputation do not meet scientific standards. With few exceptions, the survey response rates are under 50%, a level survey methodologists consider neither reliable nor acceptable. Any trained researcher could turn to reference books on this point; to illustrate, Thomas Mangione notes in his 1995 book, Mail Surveys: Improving the Quality,

Nonresponse error is the single biggest impediment to any survey study…. What is considered a high response rate? Certainly a response rate in excess of 85% is viewed as an excellent rate of return…. Response rates in the 70% to 85% range are viewed as very good. Responses in the 60% to 70% range are considered acceptable, but you begin to be uneasy about the characteristics of nonresponders. Response rates between 50% and 60% are barely acceptable and really need some additional information that contributes to confidence about the quality of your data. Response rates below 50% really are not scientifically acceptable.6 [Emphasis added.]

U.S. News survey response rates from deans and senior faculty (D & F) and residency program directors (RPDs) about the academic reputations of American medical schools are insufficient to reach valid conclusions. Specifically, for the five-year period from 1996 to 2000 the survey response rates were:

  • 1996, D & F = 49%, RPDs = 40%
  • 1997, D & F = 50%, RPDs = 45%
  • 1998, D & F = 54%, RPDs = 40%
  • 1999, D & F = 55%, RPDs = 46%
  • 2000, D & F = 51%, RPDs = 45%

The mean response rates for this time span are 51.8% for deans and faculty, 43.2% for residency directors. These response rates cast grave doubt on the validity of the U.S. News survey data on reputation—data that account for 40% of each medical school's overall score—due to incomplete information and nonresponse bias, the risk that deans, faculty members, and residency directors who do not answer survey questions differ from those who respond with regard to the survey variables.7
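
These averages are simple to verify; the short check below, run against the published annual rates, recomputes the means and counts how many of the yearly rates fall below the 50% floor Mangione describes.

```python
from statistics import mean

# Annual reputation-survey response rates quoted above (percent), 1996-2000.
deans_faculty = [49, 50, 54, 55, 51]
residency_dirs = [40, 45, 40, 46, 45]

print(mean(deans_faculty))   # 51.8
print(mean(residency_dirs))  # 43.2

# Mangione's floor: response rates below 50% "really are not scientifically acceptable."
for label, rates in (("Deans & faculty", deans_faculty), ("Residency directors", residency_dirs)):
    below = sum(r < 50 for r in rates)
    print(f"{label}: {below} of {len(rates)} annual rates fall below 50%")
```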

In addition, the U.S. News overall scores make no allowance for measurement error: no confidence intervals (CIs) are reported for the point estimates. Reporting CIs is standard practice in survey,6,8 clinical,9 and educational research.10 In a recent study of the institutional context of departmental prestige in U.S. higher education, the investigators reported that variation in the research data makes the top seven universities studied “statistically indistinguishable from one another.”10 The investigators could detect this only because they calculated CIs for their estimates. Without such an index of variability, the U.S. News data are incomplete at best and quite possibly deceptive, because the rankings show differences between medical schools where none may exist on statistical grounds.
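
As an illustration of the missing index of variability, the sketch below computes approximate 95% CIs for the mean reputation ratings of two hypothetical schools using a simple normal approximation; the rating data are invented. When the two intervals overlap, the difference in point estimates (and therefore in rank) cannot be distinguished from sampling error.

```python
from math import sqrt
from statistics import mean, stdev

def ci95(ratings):
    """Approximate 95% confidence interval for a mean reputation rating."""
    se = stdev(ratings) / sqrt(len(ratings))   # standard error of the mean
    m = mean(ratings)
    return m - 1.96 * se, m + 1.96 * se        # normal approximation

# Hypothetical 1-5 survey ratings for two schools.
school_x = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4]
school_y = [4, 4, 5, 3, 4, 4, 5, 4, 3, 4]

lo_x, hi_x = ci95(school_x)
lo_y, hi_y = ci95(school_y)
print(f"School X: mean {mean(school_x):.2f}, 95% CI ({lo_x:.2f}, {hi_x:.2f})")
print(f"School Y: mean {mean(school_y):.2f}, 95% CI ({lo_y:.2f}, {hi_y:.2f})")

# Overlapping intervals: the two schools are statistically indistinguishable,
# even though a ranking would place one above the other.
print("CIs overlap:", lo_x <= hi_y and lo_y <= hi_x)
```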

Despite low response rates to the reputation surveys and other flaws, the U.S. News medical school ranks and overall scores are remarkably stable, a pattern reinforced by independent data sets from reputation ratings published approximately 25 years ago. Table 2 presents correlations of ranks and overall scores for the “top” 25 medical schools reported by U.S. News & World Report in 2000 with their annual ranks and overall scores from 1996 to 1999; with ranks, quality scores, and visibility scores reported by Cole and Lipton in 197711; and with ranks (for 11 medical schools) reported by Blau and Margulies in 1974–75.12 All of the correlations are high and positive, with only two of 42 (5%, both involving the old data from 1974–75) failing to reach statistical significance. This is strong evidence that U.S. medical school ranks and quality scores based on reputation judgments within the academic community change glacially in both the short run and the long run, despite enormous economic, educational, social, and cultural changes in U.S. society, medical education, medical care and practice, and the institutions themselves.
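
The stability claim rests on ordinary rank correlations of the kind summarized in Table 2. As a minimal sketch (the ranks below are invented; Table 2 reports the actual coefficients), the snippet computes a Spearman correlation between one year's ordering and another's, showing how a nearly unchanged ordering yields a coefficient close to 1.0.

```python
def spearman(ranks_a, ranks_b):
    """Spearman rank correlation for two lists of untied ranks."""
    n = len(ranks_a)
    d2 = sum((a - b) ** 2 for a, b in zip(ranks_a, ranks_b))
    return 1 - (6 * d2) / (n * (n ** 2 - 1))

# Hypothetical ranks for the same ten schools in two different ranking years.
ranks_earlier = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
ranks_later   = [1, 3, 2, 4, 6, 5, 7, 9, 8, 10]

print(round(spearman(ranks_earlier, ranks_later), 3))  # 0.964: near-total stability
```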

Table 2

The final methodologic criticism of the U.S. News & World Report ranking of American medical schools concerns confounding: the problem of trying to isolate and judge separately the many elements that contribute to a complex phenomenon, in this case medical school and program quality.13 Research shows, for example, that it is impossible to unpack (i.e., separate, distinguish) perceptions about the prestige of an academic department from the eminence of its parent university.10 Therefore, the U.S. News respondents' perceptions of a medical school may actually be partly or largely perceptions of its parent university—or may not. The problem is that there is no way to tell. Another possible confounding element is that respondents may be unconsciously rating the faculty rather than the school. As higher education scholars Conrad and Blackburn assert, “Quality programs are almost always related to characteristics of the faculty responsible for the implementation of the curriculum…. studies of program quality frequently seem to be little more than efforts to ascertain faculty quality.”13, p. 285

Underlying Conceptual Flaws

Even if the methods used by U.S. News & World Report to collect and analyze its data were sound—which they are not, as shown in the preceding section—major flaws remain. Two major conceptual criticisms undercut the foundation of the U.S. News medical school rankings.

First, as stated earlier, all 125 U.S. medical schools are evaluated by the LCME against a set of rigorous accreditation standards at least every seven years, and more frequently if problems are detected. The LCME—with members from both the Association of American Medical Colleges (AAMC) and the American Medical Association (AMA)—is the body recognized by the U.S. government as responsible for accrediting medical schools. Therefore, the LCME ensures that all U.S. (and Canadian) medical education programs meet or surpass minimum quality standards. The assertion by U.S. News & World Report that the top-ranked U.S. medical school (in 2000, Harvard, with a score of 100) sets a single standard for judging the remaining 124 fully accredited medical schools is amateurish and simply wrong. The value and meaning of medical school accreditation as a national quality assurance mechanism are lost under these circumstances.14,15,16

Second, the annual U.S. News & World Report ranking of American medical schools is simply a demonstration of “Matthew effects” in medical education.17 The term is drawn from the New Testament book of Matthew (25:29), which states, “For to every one who has will more be given, and he will have abundance; but from him who has not, even what he has will be taken away.” Columbia University social scientist Robert Merton, who coined the term in a 1968 article in the journal Science, described its practical consequences: “One institutional version of the Matthew effect… is expressed in the principle of cumulative advantage that operates in many systems of social stratification to produce the same result: the rich get richer at a rate that makes the poor become relatively poorer.”17, p.62 [Emphasis added.] Thus it is no surprise that a small minority of American medical schools are consistently named “top” schools in the annual U.S. News rankings while the majority of accredited medical schools receive no mention.

To summarize, the annual rankings of American medical schools published by U.S. News & World Report yield distinctions without differences. The U.S. News rankings are worthless on methodologic and conceptual grounds. Better approaches are needed to identify and recognize exemplary medical schools and schools that are performing poorly.

BETTER WAYS TO JUDGE MEDICAL SCHOOLS

The quality of medical education programs can be measured in many ways beyond reputation and prosperity. In a society as diverse as the United States, it is not only reasonable but imperative that the identification and measurement of medical school quality be pluralistic as well. The institutions differ in their histories, goals, structures, and aspirations. Even equally wealthy and famous medical schools may consider themselves to have greatly different missions in medicine and education. In such an environment, a system intended to rate and rank medical schools would need to do so in light of these differences. But even more important, the assessment of quality should be based on criteria of special importance to U.S. society—that is, the measures should be meaningful in ways that go beyond wealth and reputation. They should be measures that are demonstrably related to graduating better doctors.

Such an approach would respect medical schools' individuality and uniqueness of mission. Given accreditation, what is distinctive about the mission and achievements of a medical school? Is there, for example, an emphasis on primary care, preparation of medical missionaries, community service, or clinical research? Another way to ask the question is: “What is a medical school's value added for its graduates?”

Looking at three assessment categories will show the potential for such approaches to assessing program quality in medical education.

Inclusion of Minorities

Increased involvement of underrepresented minority groups in the U.S. medical profession is a stated goal of the Association of American Medical Colleges (AAMC) as expressed in the continuing initiatives begun under the 3000 by 2000 program.18 Many medical schools and undergraduate institutions that are the source of prospective medical students have responded to this challenge. Medical school programs designed to recruit and retain minorities have demonstrated positive results nationally,19 with especially impressive outcomes at such geographically distinct medical schools as the University of California at Davis20 and the University of North Carolina.21

Achieving these professionally and socially important goals often involves altering a medical school's admission policies, moving away from almost exclusive reliance on test scores and college grades for student selection. The National Institutes of Health generously supports minority physicians and scientists who aspire to research careers, while private philanthropies such as the Josiah H. Macy, Jr. Foundation have led in supporting programs to prepare underrepresented minorities for clinical practice.22 However, a decade of success of medical school programs to increase minority representation in the medical profession is not directly acknowledged by the ranking formula used by U.S. News, which features student selection based only on test scores, grades, and student selection ratio. The selection process at medical schools and the process reflected in these narrow U.S. News & World Report criteria have less in common each year.

Clinical Skills Assessment

Over the years, many task forces and groups have worked to identify the core skills and attributes of the good physician. The most recent, the Medical School Objectives Project (MSOP), sponsored by the AAMC, asserts that one of the four key outcomes of medical education is that physicians must be skillful.23 The only way to ensure that this objective has been achieved is to install a mechanism to assess students' clinical skills rigorously.

Two U.S. medical schools are prominent for their leadership in starting and maintaining programs that assess students' clinical skills: the Southern Illinois University School of Medicine24 and the Mt. Sinai School of Medicine.25,26 These programs rely on standardized patients (SPs) to simulate clinical problems that physicians encounter in everyday practice. Research and experience show that these skills assessments are realistic approximations of clinical situations, yield reliable and valid data for student evaluation, and give students useful feedback about the growth of their clinical skills. The Mt. Sinai program is especially noteworthy because it serves the clinical assessment needs of eight medical schools in metropolitan New York City, a “multiplier effect” that transcends the walls of a single educational institution.

Progress toward assessment of clinical skills is one of the most important changes in U.S. medical education in the past two decades. However, its value within American medical schools is not captured at all in the U.S. News & World Report rankings. This is shortsighted, because only a fraction of American medical schools have rigorous assessment programs that can certify students' acquisition of clinical skills, the activities that doctors perform every day. Such certification is a key, real-world measure of program quality, not a proxy measure like reputation or wealth.

Service to the Underserved

American medical schools and the academic health centers where they reside have historically defined society's health needs in terms of self-interest, from the “supply side” of faculty expertise and research interests rather than from the “demand side” of community health needs.27 In recent years, however, new initiatives have begun to close the gap between medical school curricula and the health care needs of the communities their graduates serve. The “Health of the Public” approach to medical education, supported by a variety of sources including the W. K. Kellogg Foundation, the Rockefeller Foundation, the Pew Charitable Trusts, and The Robert Wood Johnson Foundation, has been a major impetus toward uniting medical education with public health needs.28,29

The University of New Mexico Health Sciences Center (UNMHSC) houses one of the most prominent medical education programs that fulfill “Health of the Public” objectives.30,31 Its medical curriculum is grounded in a population perspective, meaning that students learn how to address the most prevalent health care needs of the state's people, especially impoverished people. The academic health center engages in rural outreach to educate students and serve people in remote locations. It also recruits students from underrepresented minority groups, conducts thematic research on public health problems, and has undergone administrative realignment to better achieve its combined public health and medical education goals. The UNMHSC is definitely not an “ivory tower” but a broad-based resource that serves educational and community health needs statewide.

Medical schools such as New Mexico's that emphasize service to the underserved via education, outreach services, minority student recruitment, and administrative streamlining get no points from the U.S. News & World Report ranking formula. The magazine is silent about public health and service goals, about the acquisition of private funding or the use of internal money to reach those goals, and about the utility of nontraditional educational formats for students and faculty.

CONCLUSIONS

The rankings of American medical schools published annually by U.S. News & World Report have the appearance of objectivity and scientific integrity yet fall short of both goals on methodologic and conceptual grounds. Even if the rankings were conducted professionally, they would still ignore the role of medical school accreditation and would address a narrow, elitist view of what constitutes program quality in medical education. Writing in their 1997 book, The Rise of American Research Universities: Elites and Challengers in the Postwar Era, higher education scholars Graham and Diamond state that “systems of institutional comparison can camouflage the unique history, structure, and organizational personality that give institutions their distinctive character.”32, p. 132 Like leadership, program quality in medical education can be expressed in many ways. The U.S. News rankings ignore this insight.

There is another reason to question not only the utility but also the intent of the annual U.S. News medical school rankings. By publishing the annual rankings, U.S. News & World Report is not just reporting news but creating news. The magazine sponsors and endorses the research that underlies the ratings. However, in contradistinction to academic codes of conduct, U.S. News provides no evidence that research quality standards have been met or that peer review of the data, methods, or reporting has been sought. The first principle of the Society of Professional Journalists Code of Ethics states, “Journalists should: Test the accuracy of information from all sources and exercise care to avoid inadvertent error.” The annual U.S. News rankings of American medical schools fail to meet this standard of journalistic ethics.

We are not alone in this critique. Journalist Nicholas Thompson wrote a hard-nosed evaluation of the annual U.S. News rankings of American colleges and universities in 2000.33 Thompson's judgments are similar, though not identical, to this report's conclusions.

The annual U.S. News & World Report rankings of U.S. medical schools are ill-conceived; are unscientific; are conducted poorly; ignore the value of school accreditation; judge medical school quality from a narrow, elitist perspective; do not consider social and professional outcomes in program quality calculations; and fail to meet basic standards of journalistic ethics. The U.S. medical education community, higher education scholars, the journalism profession, and the public should ignore this annual marketing shell game.

REFERENCES

1. Best graduate schools. U.S. News & World Report. 2000;128(April 10):56–94.
2. Best graduate schools. U.S. News & World Report. 1999;126(March 29):76–84.
3. Best graduate schools. U.S. News & World Report. 1998;124(March 2):87–89.
4. Best graduate schools. U.S. News & World Report. 1997;122(March 10):86–87.
5. Best graduate schools. U.S. News & World Report. 1996;120(March 18):96–97.
6. Mangione TW. Mail Surveys: Improving the Quality. Applied Social Research Methods Series, Vol. 40. Thousand Oaks, CA: Sage Publications, 1995.
7. Kalton G. Introduction to Survey Sampling. Quantitative Applications in the Social Sciences Series No. 07-035. Beverly Hills, CA: Sage Publications, 1983.
8. Design of the Sample. Gallup Poll Monthly. 1999;(402):53–4.
9. Dawson-Saunders B, Trapp RG. Basic and Clinical Biostatistics. 2nd ed. Norwalk, CT: Appleton & Lange, 1994.
10. Keith B. The institutional context of departmental prestige in American higher education. Am Educ Res J. 1999;36:409–45.
11. Cole JR, Lipton JA. The reputations of American medical schools. Social Forces. 1977;55:662–84.
12. Blau PM, Margulies RZ. The reputations of American professional schools. Change. Dec 1974–Jan 1975;6:42–7.
13. Conrad CF, Blackburn RT. Program quality in higher education: a review and critique of literature and research. In: Smart JC (ed). Higher Education: Handbook of Theory and Research. Vol. 1. New York: Agathon Press, 1985:283–308.
14. Kassebaum DG, Eaglen RH, Cutler ER. The meaning and application of medical accreditation standards. Acad Med. 1997;72:807–18.
15. Kassebaum DG, Cutler EG, Eaglen RH. The influence of accreditation on educational change in U.S. medical schools. Acad Med. 1997;72:1127–33.
16. Kassebaum DG, Cohen JJ. Nonaccredited medical education in the United States. N Engl J Med. 2000;342:1602–5.
17. Merton RK. The Matthew effect in science. Science. 1968;159(3810):56–63.
18. Association of American Medical Colleges, Division of Minority Health, Education, and Prevention. Secondary School Science Minority Achievement Registry, Volume I: Project 3000 by 2000, 1994–1995. Washington, DC: AAMC, 1994.
19. Carline JD, Patterson DG, Davis LA. Enrichment programs for undergraduate college students intended to increase the representation of minorities in medicine. Acad Med. 1998;73:299–312.
20. Davidson RC, Lewis EL. Affirmative action and other special consideration admissions at the University of California, Davis, School of Medicine. JAMA. 1997;278:1153–8.
21. Strayhorn G. A pre-admission program for underrepresented minority and disadvantaged students: application, acceptance, graduation rates, and timeliness of graduating from medical school. Acad Med. 2000;75: 355–61.
22. Bediako MR, McDermott BA, Bleich ME, Colliver JA. Ventures in education: a pipeline to medical education for minority and economically disadvantaged students. Acad Med. 1996;71:190–2.
23. The Medical School Objectives Writing Group. Learning objectives for medical student education—guidelines for medical schools: Report I of the Medical School Objectives Project. Acad Med. 1999;74:13–8.
24. Vu NV, Barrows HS, Marcy ML, Verhulst SJ, Colliver JA, Travis TA. Six years of comprehensive clinical performance-based assessment using standardized patients at the Southern Illinois University School of Medicine. Acad Med. 1992;67:42–50.
25. Swartz MH, Colliver JA. Using standardized patients for assessing clinical performance. Mt Sinai J Med. 1996;63:241–9.
26. Swartz MH, Colliver JA, Bardes CL, Charon R, Fried ED, Moroff S. Validating the standardized patient assessment administered to medical students in the New York City Consortium. Acad Med. 1997;72:619–26.
27. White KL. The Task of Medicine: Dialogue at Wickenburg. Menlo Park, CA: The Henry J. Kaiser Family Foundation, 1988.
28. Showstack J, Fein O, Ford D, et al. Health of the Public: an academic response. JAMA. 1992;267:2497–502.
29. Evans JR. The “Health of the Public” approach to medical education. Acad Med. 1992;67:719–23.
30. Kaufman A, Galbraith P, Alfero C, et al. Fostering the health of communities: a unifying mission for the University of New Mexico Health Sciences Center. Acad Med. 1996;71:432–40.
31. Kaufman A, Derksen D, McKernan S, et al. Managed care for uninsured patients at an academic health center: a case study. Acad Med. 2000;75:323–30.
32. Graham HD, Diamond N. The Rise of American Research Universities: Elites and Challengers in the Postwar Era. Baltimore, MD: Johns Hopkins University Press, 1997.
33. Thompson N. Playing with numbers: how U.S. News mismeasures higher education and what we can do about it. Washington Monthly. 2000;32(9):16–23.
© 2001 Association of American Medical Colleges