May 2008 - Volume 19 - Issue 3
Epidemiology:
doi: 10.1097/EDE.0b013e31816b6a8c
Commentary: IMPACT FACTOR

The Impact Factor Follies

Rothenberg, Richard


Author Information

From Georgia State University, Atlanta, GA.

Editors' note: Related articles appear on pages 369, 370, and 373.

Correspondence: Richard Rothenberg, Editor, Annals of Epidemiology, 140 Decatur Street (Urban Life Bldg., 8th floor), PO Box 3995, Atlanta, GA 30302-3995. E-mail: rrothenberg@gsu.edu.

Argument by analogy poses problems for philosophers,1 but the rest of us love it. It lets us question the analogy and avoid the issue. Hernán's2 argument, for example, is a bit hard to follow, and the congruence between the numerators (a published article vs. a case) and the denominators (all articles published in a journal vs. all cases occurring at a facility) seems somehow inexact. But no matter. The analogy may be arguable, but the point is clear: epidemiology and good sense dictate that numerators and denominators be of the same logical type. The frequency with which articles in a journal are cited, divided by the number of articles the journal publishes, is a ratio whose interpretation is dicey. A large ratio can result from a large numerator or a small denominator, and vice versa.
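The arithmetic behind that last point is easy to see with invented numbers (hypothetical journals, not real data — the function name and figures are mine, used only to illustrate the ratio's ambiguity):

```python
# Two hypothetical journals with invented citation counts.
# Journal A: many citations spread over many articles.
# Journal B: far fewer citations, but a much smaller denominator.

def impact_factor(citations: int, citable_articles: int) -> float:
    """Citations to a journal's articles divided by the articles it published."""
    return citations / citable_articles

journal_a = impact_factor(citations=4000, citable_articles=1000)  # 4.0
journal_b = impact_factor(citations=200, citable_articles=50)     # 4.0

# Identical ratios, very different journals: the same "impact" can come
# from a large numerator or from a small denominator.
print(journal_a, journal_b)
```

The two journals are indistinguishable by the ratio alone, which is exactly why its interpretation is dicey.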

The defects in the impact factor are legion (aberrations in counting, citation of nonresearch articles, gaming the system, inappropriate use for academic promotion, etc.).3,4 Rectifying the fraction by making numerator and denominator congruent will not solve the problem. As Douglas Altman has pointed out in an ongoing conversation on the listserv of the World Association of Medical Editors (WAME-L@LIST.NIH.GOV), the impact factor does not measure quality, but rather the frequency of citation—not at all the same thing. To use an analogy with social network analysis, the IF measures centrality, the extent to which a “node” (the journal in this case) is connected to others. Judgments about that centrality (prominence, importance, influence) are revealed only with a lot more information about those connections.
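The centrality analogy can be made concrete with a toy citation network (the journal names and links below are invented for illustration; this is a minimal degree-centrality sketch, not a real bibliometric method):

```python
from collections import Counter

# A hypothetical citation network: each pair (citer, cited) is one
# citation link between journals. All names and edges are made up.
citation_links = [
    ("J_Clinical", "J_Methods"),
    ("J_Field", "J_Methods"),
    ("J_Methods", "J_Theory"),
    ("J_Field", "J_Clinical"),
]

def degree_centrality(edges):
    """Count how often each node appears in any edge, in or out."""
    counts = Counter()
    for citer, cited in edges:
        counts[citer] += 1
        counts[cited] += 1
    return counts

centrality = degree_centrality(citation_links)
# J_Methods is the most "central" node, but the count alone says nothing
# about prominence or influence - for that, one needs to know who the
# citers are and how they are themselves connected.
print(centrality.most_common(1))
```

The count identifies the best-connected node, but as the commentary notes, judging its prominence requires much more information about those connections.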

These arguments have been around for some time, so why is the measure still with us? Tried and true? Tested by time? Corporate control? Faute de mieux? No. I would like to suggest that we have not abandoned it because, warts and all, it works for classifying journals. The big journals have large impact factors, and the lesser journals, smaller ones. Like it or not, the impact factor reflects, with occasional miscarriages, a pecking order that we all recognize. The New England Journal of Medicine has a higher impact than the 3 epidemiology journals that Hernán cites. Within the microcosm of epidemiology, those 3 journals have a higher impact than the one I edit. But more important, those 3 are clearly not very different from each other (small variation in impact factor notwithstanding), and represent first-tier journals within the field. The one I edit shares the second tier (impact factors in the 2's) with a number of others—an order we all recognize.

But if the impact factor simply designates the obvious, why bother with it? For editors, publishers, and sponsors, it provides categories for qualitative judgment, and permits appraisal of change (I have gone from “1” to “2”). We should jettison its cloak of pseudoquantitation: fix the numerous distortions, but abandon the 3 significant digits, and leave only the truncated integer (no rounding up, please). A journal's number would then better reflect the tier, and meretricious microdistinctions could be avoided. A journal can be judged by the company it keeps and the company it strives for. To use a final analogy, like the philosopher Mr. Ramsay who managed to get to “Q” in his thinking,5 perhaps I can get to “3.”
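The truncation rule proposed above is a one-liner; this sketch (function name and the three-digit example values are mine) shows how it collapses microdistinctions into tiers:

```python
import math

def tiered_impact_factor(raw_if: float) -> int:
    """Drop the pseudo-precise decimals and keep only the truncated
    integer - no rounding up, as the commentary proposes."""
    return math.trunc(raw_if)

# Three hypothetical journals whose three-digit impact factors differ
# only in microdistinctions:
for raw in (2.117, 2.483, 2.951):
    print(raw, "->", tiered_impact_factor(raw))

# All three land in the same tier, 2 - even 2.951 is not rounded up.
```

Note that `math.trunc` (rather than `round`) enforces the "no rounding up" rule: a 2.951 stays in the second tier rather than being promoted to the first.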


ABOUT THE AUTHOR

RICHARD ROTHENBERG is Professor at the Institute of Public Health of Georgia State University. He has worked for many years in the field of STD and HIV, with special interest in the dynamics of disease transmission. He is the Editor-in-Chief of the Annals of Epidemiology.


REFERENCES

1. Juthe A. Argument by analogy. Argumentation. 2005;19:1–27.

2. Hernán MA. Epidemiologists (of all people) should question journal impact factors. Epidemiology. 2008;19:366–368.

3. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498–502.

4. Rossner M, Van Epps H, Hill E. Show me the data. J Cell Biol. 2007;179:1091–1092.

5. Woolf V. To the Lighthouse. Harcourt; 2005.


© 2008 Lippincott Williams & Wilkins, Inc.
