Letters to the Editor
To the Editor:
I applaud the stance taken against the current uncritical use of the impact factor in the commentaries recently published in Epidemiology. I hope that the thought experiment by Hernán will encourage many colleagues to reexamine this privately owned metric with the same rigor they would apply to any other measure newly proposed to them.1
Many journals, including Epidemiology, make their impact factor public in prominent places, for example on their websites. Some even report it with the (pseudo-)precision of three decimals, but I know of none that provides a measure of uncertainty, a standard requirement for the reporting of other estimates. Some journals provide tables or diagrams showing time trends,2 whereas others give the current impact factor the graphic allure of a quality seal.3 Often, the journal's ranking in the respective Journal Citation Report is presented alongside. Unfortunately, these rankings are also fraught with problems: they are imprecise,4 and the composition of the categories is subject to constant change.5
I agree with the commentators that something needs to be done. Unfortunately, individual academics who decide not to obey the rules of the impact factor game harm themselves, in particular if they are subject to impact-factor-driven assessments. Journals and publishing groups, however, are more powerful players and could try to bring about change. As a first step, they could simply remove the impact factor and derived rankings from their electronic and print material. If several journals acted jointly, their move could be explained in a common declaration. Second, they could ask the Thomson Corporation to stop calculating their impact factor. Even if the company cannot easily be deterred from this lucrative business, the request would be highly emblematic, as having an impact factor is still considered an attractive feature of a scientific journal. Finally, editorial policies could be reviewed to ensure that they are free from reasoning related to impact factor performance, such as the prospects of articles and journal sections for attracting future citations.6
If the journals took leadership now, academic committees might follow their example. Eventually, relying on the impact factor for evaluations or decisions of any kind may become outdated, or even "politically incorrect." Alternative indicators such as article-level metrics may be developed and may, in the long term, fill the gap left by the impact factor. Epidemiologists (of all people) and their journals should make a difference now and stop worshipping this golden calf.
Erik von Elm
Institute of Social and Preventive Medicine (ISPM)
University of Bern
German Cochrane Centre
Department of Medical Biometry and Statistics
University Medical Centre
Collective action by editors would be an excellent idea, if only editors could agree there is a problem. In the meantime, Epidemiology will continue to advocate for more valid criteria by which to assess the scientific quality of journals, papers, and authors.
1. Hernán MA. Epidemiologists (of all people) should question journal impact factors. Epidemiology. 2008;19:366–368.
4. Greenwood DC. Reliability of journal impact factor rankings. BMC Med Res Methodol.
5. Schoonbaert D. Citation patterns in tropical medicine journals. Trop Med Int Health. 2004;9:1142–1150.
6. Chew M, Villanueva EV, van der Weyden MB. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994–2005) and their Editors' views. J R Soc Med.