
What Are We BIF-fing About?: Science Needs Impact Metrics

Colditz, Ian G.; Colditz, Graham A.

doi: 10.1097/EDE.0b013e31819ed7c5

CSIRO Livestock Industries, Armidale, NSW, Australia (Ian)

Department of Surgery, Washington University School of Medicine, St. Louis, MO (Graham)


To the Editor:

The recent lively discussion of bibliographic impact factors (BIF) in Epidemiology highlights the importance given to publications when assessing the impact of scientists’ work.1–5 Subsequent correspondence6–10 has focused largely on the limitations of the Impact Factor algorithm and has not addressed broader approaches already in use. We argue that the scientific community should wholeheartedly support the ongoing development and validation of metrics for describing the quality and impact of science. The Hirsch index and the journal strike rate index are examples of second generation metrics derived from citation data, the first providing good power to predict future performance of scientists and the second permitting comparisons of journals across disciplines.

Data on citations have been collected in the Science Citation Index since 1961. The initial objective of comparing authors was subsequently broadened to include quantifying the impact of journals.11 Journal impact factors and author citation statistics have for a number of years attracted the slavish attention of many (if not most) participants in the scientific community. Resources are limited at every level of the science enterprise, and science administrators, grant reviewers, librarians, academic publishing houses, and scientists all need to assess the greatest likely return on the investment of their dollars and time. Because these personal and commercial decisions can affect the careers and livelihoods of individuals and institutions, there is a strong need for objective means to assess past performance and predict future performance of individuals, of groups of scientists, and of the media they publish in.

The limitations of citation metrics have been widely discussed. Citation counts form the basis for most metrics, but rewarding scientists for the impact factor of journals they publish in merely rewards them for the company they keep, rather than the contribution of their own work. Recognition of these limitations has spurred interest in metrics with less bias and better comparability across disciplines. Examples include the Hirsch (h) index12 and the journal strike rate index.13 In a recent empirical study, Hirsch14 found that the h-index (the number [N] of papers with ≥N citations) was a better predictor of future performance of scientists than total paper count, total citation count, or citations per paper. Hirsch indexes are now computed in the ISI Web of Science and Scopus, and can readily be estimated from Google Scholar; however, a recent study found discrepancies among h-indexes calculated from the ISI, Scopus, and Google Scholar databases.15 Importantly, the hit counts and citation counts for Google Scholar are untraceable and may be inflated. Barendse's strike rate index (10 log h/log N), where h is the Hirsch index for a journal and N the total number of citable items in the journal during the interval under examination, has a similar distribution across disciplines; thus, it seems to have greater utility in comparing the quality of journals across disciplines than does BIF. Other metrics are under development.11
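The two second-generation metrics named above can be made concrete with a short sketch. The following Python is illustrative only: the citation counts are hypothetical, and the logarithm base in the strike rate ratio cancels, so natural logs are used. The h-index follows Hirsch's definition (the largest h such that h papers have at least h citations each), and the strike rate follows Barendse's formula 10 log h / log N.

```python
import math

def h_index(citations):
    """Largest h such that h papers each have >= h citations (Hirsch, 2005)."""
    ranked = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(ranked, start=1):
        if cites >= rank:
            h = rank          # this paper still clears the rank threshold
        else:
            break             # ranks only grow, citations only shrink
    return h

def strike_rate(h, n_items):
    """Barendse's strike rate index, 10 * log h / log N, where N is the
    number of citable items in the journal over the window examined.
    The log base cancels in the ratio, so natural log suffices."""
    return 10 * math.log(h) / math.log(n_items)

# Hypothetical citation counts for one journal's six papers in a window:
cites = [10, 8, 5, 4, 3, 0]
h = h_index(cites)                       # four papers have >= 4 citations
print(h, round(strike_rate(h, len(cites)), 2))
```

Because the strike rate divides log h by log N, a small journal with a modest h can score as well as a large journal with a bigger one, which is why it travels across disciplines better than a raw impact factor.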

The need for quantitative measures of science performance and journal quality may differ among sectors of the science enterprise. To aid the development of informative metrics, we need clearer articulation of the particular needs of administrators, government, investors, publishers, collection curators, and scientists when they assess performance and quality. Some criteria, such as power to predict future performance and ability to compare performance across disciplines, may be of interest to all sectors. Broader knowledge of the strengths and weaknesses of each metric is also needed, to increase awareness of gamesmanship by other participants. Support from journals such as Epidemiology for the development of robust metrics should be a high priority.

Ian G. Colditz

CSIRO Livestock Industries, Armidale, NSW, Australia

Graham A. Colditz

Department of Surgery, Washington University School of Medicine, St. Louis, MO



1. Hernán MA. Epidemiologists (of all people) should question journal impact factors [commentary]. Epidemiology. 2008;19:366–368.
2. Szklo M. Impact factor—Good reasons for concern [commentary]. Epidemiology. 2008;19:369.
3. Porta M, Álvarez-Dardet C. How come scientists uncritically adopt and embody Thomson's bibliographic impact factor? [commentary]. Epidemiology. 2008;19:370–371.
4. Rothenberg R. The impact factor follies [commentary]. Epidemiology. 2008;19:372.
5. Wilcox AJ. Rise and fall of the Thomson impact factor [editorial]. Epidemiology. 2008;19:373–374.
6. Castelnuovo G. More on impact factors [letter]. Epidemiology. 2008;19:762–763.
7. Guiliani F, DePetris MP. More on impact factors [letter]. Epidemiology. 2008;19:763.
8. Kogevinas M. More on impact factors [letter]. Epidemiology. 2008;19:876.
9. Davey Smith G, Ebrahim S. More on impact factors [letter]. Epidemiology. 2008;19:876–877.
10. von Elm E. More on impact factors [letter]. Epidemiology. 2008;19:878.
11. Banks M, Dellavalle R. Emerging alternatives to the impact factor. OCLC Syst Serv. 2008;24:167–173.
12. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA. 2005;102:16569–16572.
13. Barendse W. The strike rate index: a new index for journal quality based on journal size and the h-index of citations. Biomed Digit Libr. 2007;4:3.
14. Hirsch JE. Does the H index have predictive power? Proc Natl Acad Sci USA. 2007;104:19193–19198.
15. Jacso P. Testing the calculation of a realistic h-index in Google Scholar, Scopus, and Web of Science for F. W. Lancaster. Libr Trends. 2008;56:784–815.
© 2009 Lippincott Williams & Wilkins, Inc.