Commentary: IMPACT FACTOR
Argument by analogy poses problems for philosophers,1 but the rest of us love it. It lets us question the analogy and avoid the issue. Hernán's2 argument, for example, is a bit hard to follow, and the congruence between the numerators (a published article vs. a case) and the denominators (all articles published in a journal vs. all cases occurring at a facility) seems somehow inexact. But no matter. The analogy may be arguable, but the point is clear: epidemiology and good sense dictate that numerators and denominators be of the same logical type. The frequency with which articles in a journal are cited, divided by the number of articles the journal publishes, is a ratio whose interpretation is dicey. A large ratio can result from a large numerator or a small denominator, and vice versa.
The defects in the impact factor are legion (aberrations in counting, citation of nonresearch articles, gaming the system, inappropriate use for academic promotion, etc.).3,4 Rectifying the fraction by making numerator and denominator congruent will not solve the problem. As Douglas Altman has pointed out in an ongoing conversation on the listserv of the World Association of Medical Editors (WAME-L@LIST.NIH.GOV), the impact factor does not measure quality, but rather the frequency of citation—not at all the same thing. To use an analogy with social network analysis, the impact factor measures centrality, the extent to which a “node” (the journal in this case) is connected to others. Judgments about that centrality (prominence, importance, influence) can be made only with a great deal more information about those connections.
These arguments have been around for some time, so why is the measure still with us? Tried and true? Tested by time? Corporate control? Faute de mieux? No. I would like to suggest that we have not abandoned it because, warts and all, it works for classifying journals. The big journals have large impact factors, and the lesser journals, smaller ones. Like it or not, the impact factor reflects, with occasional miscarriages, a pecking order that we all recognize. The New England Journal of Medicine has a higher impact than the 3 epidemiology journals that Hernán cites. Within the microcosm of epidemiology, those 3 journals have a higher impact than the one I edit. But more important, those 3 are clearly not very different from each other (small variation in impact factor notwithstanding), and represent first-tier journals within the field. The one I edit shares the second tier (impact factors in the 2's) with a number of others—an order we all recognize.
But if the impact factor simply designates the obvious, why bother with it? For editors, publishers, and sponsors, it provides categories for qualitative judgment, and permits appraisal of change (I have gone from “1” to “2”). We should jettison its cloak of pseudoquantitation: fix the numerous distortions, but abandon the 3 significant digits, and leave only the truncated integer (no rounding up, please). A journal's number would then better reflect its tier, and meretricious microdistinctions could be avoided. A journal can be judged by the company it keeps and the company it strives for. To use a final analogy, like the philosopher Mr. Ramsay, who managed to get to “Q” in his thinking,5 perhaps I can get to “3.”
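The truncation rule proposed above can be sketched in a few lines of code. This is an illustrative fragment only; the `tier` helper is a hypothetical name, not anything defined in the commentary.

```python
import math

def tier(impact_factor: float) -> int:
    """Truncate an impact factor to its integer part (no rounding up),
    so the number reflects only the journal's tier."""
    return math.floor(impact_factor)

# Journals with impact factors of 2.1 and 2.9 share the same tier, "2":
assert tier(2.1) == tier(2.9) == 2
# whereas 3.0 marks the next tier up:
assert tier(3.0) == 3
```

Under this rule, small differences within a tier (2.1 versus 2.9) disappear, which is precisely the point: the integer conveys the pecking order without false precision.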
ABOUT THE AUTHOR
RICHARD ROTHENBERG is Professor at the Institute of Public Health of Georgia State University. He has worked for many years in the field of STD and HIV, with special interest in the dynamics of disease transmission. He is the Editor-in-Chief of the Annals of Epidemiology.
1. Juthe A. Argument by analogy. Argumentation.
2. Hernán MA. Epidemiologists (of all people) should question journal impact factors. Epidemiology.
3. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ.
4. Rossner M, Van Epps H, Hill E. Show me the data. J Cell Biol.
5. Woolf V. To the Lighthouse. Paperback. Harcourt; 2005.