At Thomson Reuters, we encourage and appreciate discussion about citation metrics, including our proprietary evaluation metric, the Journal Impact Factor. Furthermore, we appreciate the opportunity to restate and clarify the logic behind its design.
What follows is a series of responses to several inaccuracies in the editorial, “Impact factor: A call to reason.”1
“One easy solution would be to redefine the impact factor as proposed in our previous commentary: only citations to items in the denominator should be included in the numerator.”
From the beginning, a key to the success of the impact factor has been its simplicity. But while the impact factor algorithm is, on its surface, simple, the judgment as to what constitutes a “citable article” is decidedly more complex. It is, however, applied universally and consistently. “Citable items,” from which the numerator is derived, can be defined as any part of a journal that contributes to the scholarly discourse. Original research reports and review articles make up most citable items, but other journal content can be included if it contributes substantially to the dialogue.
The algorithm's denominator reflects only original research and review articles, and excludes editorials, letters to the editor, news items, meeting abstracts, etc. Research conducted by the Thomson Reuters Research Services Group has demonstrated that, across all journals, more than 98% of the citations in the impact factor's numerator are to items considered “citable” and counted in the denominator. In other words, it is possible, though uncommon, for a cited item to contribute to the numerator without being counted in the denominator.
The impact factor calculation was not designed to exclude, but rather to ensure that all important research is included in the numerator, without penalizing a journal for publishing more items that are generally less cited—such as meeting abstracts—which serve an important function but might lower the journal's impact factor. Including the full range of journal contents in the denominator would have this latter adverse effect. Similarly, including in the numerator only those article types counted in the denominator would potentially neglect important (if less frequent) contributions to the scientific discourse.
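The ratio described above can be sketched in a few lines of code. This is an illustrative reconstruction only, not Thomson Reuters' actual implementation: the function name, data layout, and `window` parameter are assumptions. It shows the basic arithmetic—citations received in the JCR year to content published in the prior `window` years, divided by the count of research and review articles published in those same years—and the `window` parameter also covers the 5-year variant discussed below.

```python
def impact_factor(citations_by_pub_year, citable_items_by_year, jcr_year, window=2):
    """Illustrative impact-factor ratio for a given JCR year.

    citations_by_pub_year: dict mapping a publication year to the number of
        citations received in `jcr_year` to anything the journal published
        that year (the numerator counts citations to all journal content).
    citable_items_by_year: dict mapping a publication year to the number of
        original research and review articles published that year
        (the denominator counts only "citable items").
    """
    prior_years = range(jcr_year - window, jcr_year)
    numerator = sum(citations_by_pub_year.get(y, 0) for y in prior_years)
    denominator = sum(citable_items_by_year.get(y, 0) for y in prior_years)
    return numerator / denominator if denominator else 0.0


# Hypothetical journal: 500 citations in 2007 to its 2005-2006 content,
# which comprised 250 research/review articles -> impact factor 2.0.
print(impact_factor({2005: 200, 2006: 300}, {2005: 150, 2006: 100}, 2007))
```

Note how the asymmetry the text describes falls out of the data model: citations to editorials or letters would still be tallied in `citations_by_pub_year`, but those items would never appear in `citable_items_by_year`.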
The judgment that Thomson Reuters editors apply in deciding what counts as a “citable item” is precisely why Thomson Reuters discourages journals from publishing “recalculated” impact factors. Though the algorithm is freely available, such recalculated figures are flawed because they cannot reproduce this editorial judgment.
“ … a big one [problem] is that even the corrected BIF would only include citations over the most recent two-year period.”
Based on user feedback, we have added a 5-year impact factor calculation to the most recent release, Journal Citation Reports 2007,2 released 1 February 2009.
The 2-year citation window is a common point of criticism in the bibliometric literature, but it, too, is by design and not happenstance. The 2-year window keeps impact factor figures current and timely, so that they are not dominated by influential articles published 3, 4, or 5 years before the calculation.
The 2-year citation window is commonly thought to negatively impact certain fields, such as mathematics, where citations to the work tend to lag more than in other fields, such as physics. While it is true that the citation window plays out differently across various scientific fields, this presents a problem only when impact factor data are improperly assessed as absolutes and compared across scientific fields. In other words, as long as the impact factor is being used within its stated parameters—for comparison only with like journals within the same subject area—then the data still provide an accurate snapshot of journal quality.
“many [scientists] … are working on alternative impact metrics that do not rely on proprietary information, and whose methodology is freely available. It is possible that the BIF will not survive the emergence of these new metrics.”
Thomson Reuters has always encouraged users of Journal Citation Reports to consider much more than the impact factor when assessing journal performance. With Journal Citation Reports, our goal is to provide a “complete picture” through objective, statistical data so our users can make sound evaluative judgments. Our commitment to providing a multidimensional assessment of covered journals is evident throughout Journal Citation Reports; for many years, the Immediacy Index, Total Cites, Citation Half-Life, and other metrics have been presented. And with the February release of Journal Citation Reports 2007,2 the Eigenfactor, a 5-year prestige metric developed by University of Washington researchers, made its JCR debut. We closely monitor emerging citation metrics for inclusion in our products, and will continue to do so as the field of bibliometrics evolves.
For the past 50-plus years, the impact factor has been a dynamic measure; its guidelines and characteristics have adapted to the evolving needs of the research community. It has, however, consistently been the most ubiquitous, universal indicator of journal quality. We appreciate the support and good stewardship of the research community in helping it remain as such into the future.
ABOUT THE AUTHOR
JAMES TESTA joined Thomson Reuters (then ISI) in 1983. For many years he was responsible for building and maintaining working relations with the more than 3000 international scholarly publishers whose journals are indexed by Thomson Reuters. Since 2007 he has been Senior Director, Editorial Development and Publisher Relations, where he continues to build content for Thomson Reuters products and work to increase efficiency in communication with the international publishing community.