Epidemiology
May 2008 - Volume 19 - Issue 3
doi: 10.1097/EDE.0b013e31816a1293
Editorial: Impact Factor

Rise and Fall of the Thomson Impact Factor

Wilcox, Allen J.


How do we judge a journal's success? The publisher's criterion is simple enough—a journal has to make money. But editors, authors, and readers have a more elusive goal. We want our journals to publish interesting and important papers that advance the field.

How can we tell if a journal is succeeding?

The impact factor seemed at first to be a step in the right direction. Here was a measure of the extent to which a journal's papers contribute enough to be mentioned by others. The measure had a simple basis (we thought): the average number of times a journal's papers are cited over a given period of time. This had some intuitive appeal.
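
For readers who have never seen the bookkeeping written out, the usual two-year calculation can be sketched as follows (2007 is taken as an example year; the rules for deciding what counts as a "citable item" in the denominator are Thomson's own, and they are exactly what comes under scrutiny later in this issue):

\[
\mathrm{IF}_{2007} \;=\; \frac{\text{citations received in 2007 to items the journal published in 2005 and 2006}}{\text{number of ``citable items'' the journal published in 2005 and 2006}}
\]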

There were obvious limitations even at the outset: mere citation doesn't mean that a paper is important—or even good. More subtle problems gradually emerged. The impact factor is subject to manipulation, to the extent of distorting the editorial process. An editor who holds 2 equally good epidemiology papers, say on breast cancer and on liver disease, could be swayed by knowing there are hundreds of breast cancer epidemiologists out there ready to cite a breast cancer paper, but only a few who care about liver disease. This is hardly fair to authors who pioneer a new area. As with so many other things in life, the advantage seems to go to the strong.

Such limitations of the impact factor are no secret. They have been widely discussed,[1,2] and the system remains widely tolerated nonetheless.

But lately, events have taken an unexpected turn. What started as an index for evaluating a journal has now morphed into an index for evaluating the papers that are published in the journal—and even for evaluating the authors who write the papers that are published in the journal. It has become widespread practice for academic institutions to base monetary awards on the Thomson impact factor of the journals in which their researchers publish. Apparently the thinking is, “even if your paper is useless, publish it in a journal with a good impact factor and we will forgive you.”

Some examples:

In Germany, universities distribute money to researchers by a formula that includes the Thomson impact factor. Each point of impact factor is worth about 1000 euros (Stephan Mertens, personal communication).

In Pakistan, researchers receive bonuses of up to US$20,000 a year depending on the sum of the impact factors of the journals in which they publish.[3] Half is for researchers' personal use.[3]

In Finland, a portion of hospital funding from the government depends on the impact factor of the journals in which the hospital's researchers publish. An increase of one point in impact factor for one paper can increase a hospital's funding by US$7000.[4]

As these uses (and abuses) of the Thomson impact factor spread, we now find out that the impact factor doesn't even mean what we thought it did. The commentary[5] by Miguel Hernán in this issue of Epidemiology demonstrates the degree to which the impact factor is biased by an arbitrary rule of bookkeeping—a bias that, in just one small sample of epidemiology journals, changes the impact factor by up to 30%.
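
To see how a bookkeeping rule alone could shift the number by that much, consider an invented illustration (the figures below are mine, not taken from Hernán's sample). Suppose that every citation to a journal's content counts in the numerator, while only the items classified as "citable" count in the denominator, a classification the company itself controls. A journal whose recent items drew 300 citations would then score

\[
\frac{300}{150} = 2.0 \quad \text{if all 150 published items are counted,}
\qquad
\frac{300}{120} = 2.5 \quad \text{if only 120 of them are deemed ``citable,''}
\]

a 25% difference produced entirely by classification.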

Thomson Scientific (the owner, calculator, and aggressive marketer of the impact factor) is unapologetic about such problems. The company says that if we have been misinterpreting the impact factor, then we just haven't been paying attention.[6] Maybe so. Another concern is that Thomson's methods for calculating the impact factor are neither transparent nor reproducible.[5,7]

Where does all this leave us? Our institutions are evaluating our scientific work with a single indicator that is obscure in its construction, subject to manipulation, and means something different from what we thought.

We have a problem.

It should go without saying (but apparently needs to be said) that no single number can capture the value of scientific work. At the very least, we need lots of numbers. In an age of hyperabundant data, this should not be difficult—and in fact, it's not. There are many facets of journals (and papers, and authors) that can be quantified. Do you want to know how many times one of your scientific papers has been cited? Google Scholar[8] will tell you in a fraction of a second (and for free). Or perhaps you're curious about how journals compare in measures of productivity and prestige? SCImago[9] is an ambitious attempt to quantify these aspects—again for free and with structural advantages over the Thomson impact factor.

To an extent that no one could have anticipated, the academic world has come to place enormous weight on a single measure that is calculated privately by a corporation with no accountability, a measure that was never meant to carry such a load. Yes, some of us benefit from this flawed system—in addition to other rewards that come from publishing in high-impact journals, we collect nice cash bonuses. But none of this changes the fact that evaluating research by a single number is embarrassing reductionism, as if we were talking about figure skating rather than science. Our university and hospital administrators and our granting agencies apparently haven't gotten this message.

As Hernán points out, there's no one better qualified to tell them than us.


REFERENCES

1. Porta M. The bibliographic “impact factor” of the Institute for Scientific Information: how relevant is it really for public health journals? J Epidemiol Community Health. 1996;50:606–610.

2. The PLoS Medicine Editors. The impact factor game: it is time to find a better way to assess the scientific literature. PLoS Med. 2006;3:e291. doi:10.1371/journal.pmed.0030291.

3. Fuyuno I, Cyranoski D. Cash for papers: putting a premium on publication. Nature. 2006;441:792.

4. Adams D. The counting house. Nature. 2002;415:726–729.

5. Hernán MA. Epidemiologists (of all people) should question journal impact factors. Epidemiology. 2008;19:366–368.


7. Rossner M, Van Epps H, Hill E. Show me the data. J Exp Med. 2007. doi:10.1084/jem.20072544.

8. Google Scholar. Available at: http://scholar.google.com.

9. SCImago Journal & Country Rank. Available at: http://www.scimagojr.com.

© 2008 Lippincott Williams & Wilkins, Inc.
