Hernán, Miguel A.; Wilcox, Allen J.
During the last few years, we have published several editorials drawing attention to the dark side of the impact factor.1–3 As scientists, we are concerned that the rules for deriving the impact factor are not in the public domain, and thus its value cannot be independently reproduced. As editors, we worry that obeisance to the impact factor can have corrupting effects on journals, encouraging dubious strategies to pump up citations.
There is a third concern: the annual impact factor is statistically unstable—even if it does have that flashy, scientific-looking third decimal place. The randomness of impact factors adds yet more unpredictability to the annual ranking of journals with similar impact.
Most major epidemiology journals, including ours, have seen a steady rise in their impact factors during recent years. At the same time, the relative rank of these journals changes from year to year. Such changes are unlikely to represent true annual changes in these journals' relative quality. We think the various epidemiology journals are indeed different, and they deserve to be evaluated and compared. But we're happier when such assessments are based on matters of substance, such as editorial policies, quality of reviews, quality of editing, efficiency in the processing of manuscripts, and the (real) impact of the journal on the field.
When the 2010 impact factors were published, it was Epidemiology's turn to take first place among the epidemiology journals that publish original research. We'd like to think that our hard work had suddenly paid off to make the journal better than the rest—but probably not.
More important, what do our colleagues think about these rankings? Does being number one in impact factor make Epidemiology a more desirable journal for authors? We saw a natural experiment taking shape. The 2010 impact-factor rankings were made public almost exactly halfway through 2011. If authors care about impact-factor rankings, then perhaps we would see an increase in our submissions after becoming the “top-ranked” epidemiology journal. So, we did the analysis. (Rather than attempt to publish our results in an even higher-ranking journal, we modestly present them here.)
We received 394 papers in the first half of 2011 and 388 in the second half, producing a submission ratio of 0.98. The corresponding ratio for the period 2006–2010 was 1.00. Taken together, these data provide no support for an increase in submissions after our highest-ever impact factor was announced—in fact, our submissions dropped slightly.
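For readers who want to check the arithmetic, the comparison above reduces to a ratio of second-half to first-half submissions, set against the historical baseline. A minimal sketch (ours, not the authors' analysis code) using only the counts reported in the text:

```python
# Submission counts reported in the editorial.
first_half_2011 = 394   # papers received January-June 2011
second_half_2011 = 388  # papers received July-December 2011

# Ratio of second-half to first-half submissions in 2011.
ratio_2011 = second_half_2011 / first_half_2011
print(f"2011 submission ratio: {ratio_2011:.2f}")  # 0.98

# The text reports a 2006-2010 baseline ratio of 1.00, so a
# post-announcement surge would show up as a ratio above 1.
baseline_ratio = 1.00
print(f"Change from baseline: {ratio_2011 - baseline_ratio:+.2f}")
```

The ratio of 0.98 versus a baseline of 1.00 is the entire basis for the conclusion that submissions did not rise, and in fact dipped slightly, after the rankings were announced.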
We are reassured that our authors seem so uninterested in the vagaries of impact factor rankings. This demonstrates the sensible behavior we've come to expect from our colleagues. Maybe, all this time, we've been preaching to the choir.
1. Hernán MA. Epidemiologists (of all people) should question journal impact factors. Epidemiology. 2008;19:366–368.
2. Wilcox AJ. Rise and fall of the Thomson impact factor. Epidemiology. 2008;19:373–374.
3. Hernán MA. Impact factor: a call to reason. Epidemiology. 2009;20:317–318.
© 2012 Lippincott Williams & Wilkins, Inc.