Commentary: IMPACT FACTOR
From the Institut Municipal d'Investigació Mèdica, Universitat Autònoma de Barcelona, Barcelona, Spain; the Journal of Epidemiology and Community Health; and the Universidad de Alicante, Alicante, Spain.
Submitted 17 December 2007; accepted 30 January 2008.
Editors' note: Related articles appear on pages 369, 372, and 373.
Correspondence: Miquel Porta, Institut Municipal d'Investigació Mèdica, Universitat Autònoma de Barcelona, Carrer del Dr. Aiguader 88, E-08003 Barcelona, Catalonia, Spain. E-mail: email@example.com.
The bibliographic impact factor (BIF) of Thomson Scientific is sometimes not a valid scientometric indicator for a number of reasons. One major reason is the strong influence of the number of “source items” or “articles” for each journal that the company chooses each year as BIF's denominator. The irresistible fascination with (and picturesque uses of) a construct as scientifically weak as BIF are simple reminders that scientists are embedded in and embody culture.
We write from Chamberí, Spain. The city has a brain irritability factor (BIF) of 3.8, which is pretty good; such a value can hardly be why Thomson et al. excluded our city from the hypothetical study submitted to Epidemiology.1 The problem is that Chamberí's BIF is unknown to the Thomson Corporation. Other cities with relevant BIFs (such as Kyoto, Berlin, or Rio de Janeiro) are also underrepresented in Thomson's database, thus making the results flawed through a not-so-subtle mechanism: cultural bias. Although Miguel Hernán cogently argues against using Thomson's bibliographic impact factor (BIF) to rank journals,1 he uses examples of journals with the highest BIF, no doubt to illustrate that even we critics use Thomson's rankings.2–4 BIFs mesmerize scientists.
We agree with Hernán and others on the poor accountability, validity, and performance of BIF when it is used to attribute scientific or symbolic value to a journal, author, or paper. And we support his call against the use of BIF by the epidemiologic community: yes, biases are large in most cases and uses.3,5,6 This is clear when one rationally assesses Thomson's (partly opaque) methods and data.
One problem with BIF stems from the opportunity for observation: only some journals count as sources of citations. The criteria used to select such journals are nontransparent and biased. While some journals' references counted right from issue 1, it took years for other journals (such as Epidemiology) to count.2,3 An even stronger reason why BIF is often not a valid scientometric indicator is the extreme influence of the number of “source items” or “articles” chosen as BIF's denominator.5 Virtually nobody knows what those articles are, or how Thomson decides, each year, the BIF denominator for each of several thousand journals. Citations to articles excluded from the denominator of BIF are nonetheless counted in the numerator.2–6 Hence, BIF is the simple mean of a logically incoherent ratio. It is a lesser concern (but still relevant) that values of the ratio follow a highly skewed distribution.
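The incoherence described above can be made concrete with a small sketch. The data below are entirely hypothetical, and the classification of items as "source items" is our illustrative assumption; the point is only the arithmetic: citations to items excluded from the denominator still enter the numerator, inflating the quotient.

```python
# Hypothetical journal: items published in the two preceding years,
# each with the citations it received this year and a flag for whether
# the indexer counts it as a "source item" (an "article").
items = [
    {"citations": 12, "source_item": True},   # research article
    {"citations": 7,  "source_item": True},   # research article
    {"citations": 9,  "source_item": False},  # editorial: cited, but not counted
    {"citations": 4,  "source_item": False},  # letter: cited, but not counted
]

# Numerator counts ALL citations to the journal, including those to
# items excluded from the denominator.
numerator = sum(it["citations"] for it in items)            # 32
# Denominator counts only the "source items".
denominator = sum(1 for it in items if it["source_item"])   # 2

bif = numerator / denominator        # 16.0: the incoherent ratio
coherent = numerator / len(items)    # 8.0: if every cited item counted

print(bif, coherent)  # 16.0 8.0
```

Here the published "impact factor" is double what a per-item mean over all cited items would give, purely because of which items the indexer chose to count in the denominator.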
BIF does not apply to virtually any article published in a journal (for elementary statistical reasons), nor to the journal as a whole (for validity and conceptual reasons). If you wish to know Thomson's “bibliographic impact” for a journal, you may look at the total number of citations received by the articles published by that journal. At least that number of citations is not influenced by the number of “source items” chosen as BIF's denominator. Of course, such a count is influenced by the number of articles published by the journal. But so what, if the journal is your unit of analysis? Also, there should be no need to restrict your analysis to citations received over the previous 2 years; you may want to choose a period that is more appropriate for your field and research question. If you wish to know the “bibliographic impact” of an article, just look at the total number of citations received by the article; Thomson's and other databases are useful for this.4 If you wish to know the “bibliographic impact” of an individual or an institution, use the number of citations received by publications coauthored by the individual or by people working at that institution. You just need to adjust for relevant factors (data source, specialty, half-life of article citations, numbers of journals and researchers in the field, coauthors, time periods). Finally, let us remember that nothing can substitute for reading, and for thinking both inwards and towards the wider context.2,4,7
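The alternative suggested above, taking the journal (or author) as the unit of analysis and simply totaling citations over a window of your own choosing, can be sketched as follows. All journal names, years, and citation counts are hypothetical.

```python
# Hypothetical citation records: one entry per published item.
pubs = [
    {"journal": "A", "year": 2003, "citations": 5},
    {"journal": "A", "year": 2005, "citations": 11},
    {"journal": "B", "year": 2004, "citations": 3},
    {"journal": "B", "year": 2006, "citations": 8},
]

def total_citations(pubs, journal, start, end):
    """Total citations to a journal's items published in [start, end].

    No denominator is involved, so the figure cannot be distorted by
    an opaque choice of "source items"; the window is chosen by the
    analyst rather than fixed at 2 years.
    """
    return sum(p["citations"] for p in pubs
               if p["journal"] == journal and start <= p["year"] <= end)

print(total_citations(pubs, "A", 2003, 2006))  # 16
```

The same function applies unchanged if the unit of analysis is an author or an institution; the adjustments the text mentions (specialty, citation half-life, field size) would be applied on top of this raw total.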
The irresistible fascination with (and picturesque uses of) a construct as scientifically weak as BIF are simple reminders that scientists are embedded in and embody culture.8,9 We are vain and contradictory human beings too, as shown by our references below.
ABOUT THE AUTHORS
MIQUEL PORTA is a Senior Scientist at the Institut Municipal d'Investigació Mèdica, a Professor at the School of Medicine of the Universitat Autònoma de Barcelona, Spain, and an editorial consultant to several journals, including Epidemiology and The Lancet. CARLOS ÁLVAREZ-DARDET has for the past 10 years been Editor of the Journal of Epidemiology and Community Health; he is a Professor of Public Health at the Universidad de Alicante, Spain.
1.Hernán MA. Epidemiologists (of all people) should question journal impact factors [editorial]. Epidemiology. 2008;19:366–368.
2.Porta M. Factor de impacto bibliográfico (Science Citation Index y Social Sciences Citation Index) de las principales revistas de medicina preventiva, salud pública y biomedicina. Algunas cifras, algunas impresiones. In: Porta M, Álvarez-Dardet C, eds. Revisiones en Salud Pública. Vol. 3. Barcelona: Masson; 1993:313–347.
3.Porta M. The bibliographic “impact factor” of the Institute for Scientific Information, Inc.: how relevant is it really for public health journals? J Epidemiol Community Health. 1996;50:606–610.
4.Porta M, Fernandez E, Bolúmar F. The ‘bibliographic impact factor’ and the still uncharted sociology of epidemiology. Int J Epidemiol. 2006;35:1130–1135.
5.Joseph KS. Quality of impact factors of general medical journals. BMJ. 2003;326:283.
6.Rossner M, Van Epps H, Hill E. Show me the data [editorial]. J Cell Biol. 2007;179:1091–1092.
7.Garfield E. Fifty years of citation indexing. Int J Epidemiol. 2006;35:1127–1128.
8.Knorr-Cetina K. Epistemic cultures. In: Restivo S, ed. Science, Technology, and Society. An Encyclopedia. Oxford: Oxford University Press; 2005.
9.Castiel LD, Álvarez-Dardet C. A Saúde Persecutória: Os Limites da Responsabilidade. Rio de Janeiro: Editora Fiocruz; 2007.
© 2008 Lippincott Williams & Wilkins, Inc.