Letters to the Editor
To the Editor:
The criticisms published in Epidemiology on the calculation and use of impact factors1–5 are only partly justified. Any numerical summary of scientific production is bound to be simplistic (although not necessarily entirely wrong). First, however, these criticisms do not acknowledge the positive aspects of the use of the impact factor, at least in some communities. Second, they underestimate the capacity of scientists to make rational use of bibliometric indices.
The commentaries ignore the fact that the evaluation of research in several research communities is far from rational or objective. In both Spain and Greece (2 countries where I am currently working), numerical bibliometric indices have helped bring some order to arbitrary systems of research evaluation. For years, scientists doing innovative research had to compete with “established scientists” whose numerous insubstantial papers (mostly in proceedings of local congresses) were counted the same as papers in peer-reviewed journals. The emergence of PubMed and of numerical bibliometric indices has helped remedy a situation in which scientists were being evaluated by unclear criteria. I suspect that the most extreme criticism of the flaws of the impact factor comes mostly from researchers in centers that can evaluate scientific production rationally without needing impact factors. Yet even in these more organized scientific environments, impact factors could be useful as one of many indices.
The criticisms also ignore that most scientists using impact factors do not use them as the only bibliometric index. Any experienced researcher or administrator knows that finding the best candidate for a job is far more complex than ranking people by a single index, and that this process also involves a degree of subjectivity. When evaluating the CVs of candidates for a research post, I look for evidence of their capacity to do innovative research, to sustain a line of work over time, to work within a group, to teach, and to serve the community. I also look at impact factors together with other bibliometric indices, such as citations. As Hernán1 put it, “Epidemiologists (of all people) should question journal impact factors.” I agree. But certainly epidemiologists (of all people) should also question evaluations that do not incorporate quantitative assessments.
Impact factors can be usefully applied to evaluate our work, perhaps in some societies more than in others. The message is simple: it is better to have impact factors than not to have them; better still, to use them wisely, knowing their limitations.
Centre for Research in Environmental Epidemiology (CREAL)
Municipal Institute of Medical Research (IMIM-Hospital del Mar)
CIBER Epidemiologia y Salud Pública (CIBERESP)
Medical School, University of Crete
Kogevinas is no doubt correct that there are worse things than impact factors, although this may be faint praise. —The Editors
1. Hernán MA. Epidemiologists (of all people) should question journal impact factors. Epidemiology. 2008;19:366–368.
2. Szklo M. Impact factor: good reasons for concern. Epidemiology. 2008;19:369.
3. Porta M, Alvarez-Dardet C. How come scientists uncritically adopt and embody Thomson’s bibliographic impact factor? Epidemiology. 2008;19:370–371.
4. Rothenberg R. The impact factor follies. Epidemiology. 2008;19:372.
5. Wilcox AJ. Rise and fall of the Thomson impact factor. Epidemiology. 2008;19:373–374.
© 2008 Lippincott Williams & Wilkins, Inc.