Human beings love to rank things. We rank everything from cuts of beef to sports teams to intelligence. It seems that we are motivated by an innate desire to eat, own, or be the very best! While that may be a worthy aim, ranking individuals or objects requires comparing one to the other, which most often entails reducing a set of complex phenomena to a single number – and therein lies the rub. Complex phenomena are just that – complex. They are characterized by subtlety and nuance, by measurable and unmeasurable attributes, by knowable and unknowable facets that, taken together, defy valid and reliable expression as a single number.
And this applies to scholarly, peer-reviewed publications. There is no question that understanding the impact of an academic journal is important to its editors, authors, editorial board members, owners, and other stakeholders. However, that is not a simple task. If it were, a journal’s impact could be reduced to a single number. But it cannot.
So what is impact, what data are available to help us understand it, what measures have been developed based on those data – and, importantly, what assumptions and limitations are involved – and what does it all mean for this journal?
The impact of a journal can refer to its relative importance within a field of inquiry, the quality of the work it publishes, or the degree to which it influences a discipline. I think Academic Medicine’s impact is the degree to which its articles and special features advance knowledge and practice in medical schools and teaching hospitals.
Clearly, these are complex constructs, more amenable to a few pages of descriptive prose – with balanced and thoughtful consideration of relevant issues – than to being captured by a single number. Even if we ask the seemingly simple question, To what degree does a journal influence its field?, it only leads to more questions: In the short term? Over the long run? From whose perspective? To what end? Leading to better science? To improved patient outcomes? All of these affect the kinds of data one might seek and the types of measures one might develop.
What data have been used to try to understand impact? The most commonly used datum is the citation count – i.e., the number of times an individual article in a peer-reviewed journal has been cited by other articles published in peer-reviewed journals. Even this apparently straightforward approach raises a number of important questions: What journals will be included in the analysis? Over what time period? Who will do the counting and how will they ensure accuracy? Is it better to consider total counts of citations or averages? Why limit the citation count to other peer-reviewed journals; why not include books? Will the data be available publicly so others can verify counts and analyses?
Other kinds of data that have been used to assess impact include the number of submitted manuscripts, the number of journal subscribers, the time from acceptance of an article to publication, the caliber of editing, the quality of editorial policy, and the presence of a statement about editorial independence. Each of these types of data has advantages and limitations. For example, the number of individuals who subscribe to the print edition of a journal no longer provides an approximation of the journal’s circulation or the breadth of its readership. Given that many, if not most, individuals access journals and articles in journals via the Web – either directly or via search engines – and that libraries provide electronic access to packages of journals based on negotiated contracts, it is difficult to estimate the size or to profile the characteristics of the core readership of a journal.
This shift to electronic access has led to new kinds of data. One can count the number of “visits” to a journal’s web site or the number of times a particular article is viewed or downloaded from that web site. One can even determine how long a particular page on a web site has been viewed. Of course, these data have limitations as well. For example, we really cannot know how many individuals have visited a web site; we know only how many computers have. And, if an author sends an e-mail message to 1,000 of his or her closest friends with a link to the online version of the author’s latest published article, it will likely increase the number of times that article is downloaded.
What measures have been developed based on the aforementioned data? Perhaps the best-known are the Thomson Reuters two-year journal impact factor (JIF) and the h-index.
The JIF is released each summer by Thomson Reuters. Traditionally, the JIF is reported as a single number, to three decimal places. For a given journal, the 2008 JIF (released in 2009) is a measure of the average number of times the articles published in that journal, in 2006 and 2007, were cited by articles published in 2008 in journals included in the Thomson Reuters database.*
The JIF, because of its name, journal impact factor, can seem to be a definitive declaration of a journal’s impact when, in fact, it is a proxy measure based on assumptions and encumbered by limitations (just like any other measure that attempts to capture a complex construct with a single number). Let’s be clear: The JIF can contribute in important ways to understanding a journal’s impact. It can help us understand the relative importance of journals in the same field in terms of the average number of citations per article during a defined period of time. This is especially important in relatively fast-moving clinical fields. However, the JIF has several limitations: certain citations may be counted in the numerator without being represented in the denominator; citations to retracted articles may be included; methodologic articles may have a disproportionate effect; and so forth.1,2 Although the JIF is one important factor in assessing impact, it does not, and cannot, provide a comprehensive understanding of a journal’s impact.
The JIF for Academic Medicine has hovered around 2 for the last several years. This means, for example, that the articles published in Academic Medicine in 2006 and 2007 were cited, on average, approximately two times per article by authors who published articles in 2008 in journals counted by Thomson Reuters. However, this does not help us predict how many more times these Academic Medicine articles (published in 2006 and 2007) will be cited in 2009 and over the ensuing several years. It does not tell us how many articles have been cited a few times and how many have been cited a very large number of times. It does not give us a sense of the enduring value of an article. In fact, some articles published in Academic Medicine have been cited several hundred times over many years, a phenomenon not captured in the two-year JIF “snapshot.”
There is at least one measure that can help us understand the number of highly cited articles published by a journal: the h-index. Named for Jorge Hirsch, the physicist who proposed the index in 2005,3 originally as a measure of an individual researcher’s scientific output, it also can provide useful information about a journal. The h-index is the largest number h such that h articles have each been cited at least h times. For example, if an individual has published 50 articles, but only three have been cited three or more times (with the other 47 being cited zero, one, or two times), that individual’s h-index is 3. If that individual had published four articles, each with four or more citations, the h-index would be 4. Academic Medicine’s h-index is 60, indicating that 60 of its articles have each been cited at least 60 times. This compares quite favorably to similar journals.
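The rule above is simple enough to state as a short computation. As a minimal sketch (the function name and the sample citation counts are illustrative, not the journal’s actual data), one sorts citation counts in descending order and finds the last position where an article’s count still meets or exceeds its rank:

```python
def h_index(citation_counts):
    """Largest h such that h articles each have at least h citations."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this article still "earns" its rank
        else:
            break
    return h

# The 50-article example from the text: three articles cited three
# or more times, the remaining 47 cited zero, one, or two times.
print(h_index([5, 4, 3] + [2] * 47))  # -> 3
print(h_index([9, 7, 5, 4]))          # -> 4
```

Note that the h-index rewards a body of consistently cited work: a single article with hundreds of citations raises the index by at most one.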
Other measures have been developed to rank journals, including the Eigenfactor score and the article influence score; see http://www.eigenfactor.org/methods.pdf for more information. Academic Medicine appears at or near the top of a list of similar journals when ranked using these scores.
So what does all this mean for Academic Medicine? First, we recognize that a single number cannot capture the ability of an Academic Medicine article to stimulate the imagination of its readers, to spark the creativity of researchers and educators to ask better questions and implement more innovative programs, or to foster a more sophisticated dialogue among leaders of academic health centers so that they pursue more innovative solutions to their problems.
Second, we realize that although a single number cannot adequately represent impact, citation counts (while only one factor among many) may be the best kind of hard data currently available that contribute to a broad-based understanding of impact. Even though citation counts constitute only a proxy measure for impact, a citation to an article provides clear evidence that another person or group working in a related area was influenced, in one way or another and to some extent, by the cited article. We at the journal will continue to carefully analyze measures based on citation counts, taking into account their assumptions and limitations.
Third, we seek improved measures that can contribute to our understanding of the journal’s impact, particularly those measures based on sound theory, publicly available data, explicitly defined calculations, and transparent and scholarly practices. Sufficient information and methodological detail should be provided about any impact factor to allow others to reproduce calculations so that, in the true spirit of the scientific method, results can be verified, challenged, and improved.
Fourth, we acknowledge that important evidence of the journal’s impact may be revealed by the insightful comments of an author or reviewer, in an informal e-mail message offering feedback, or in a hallway discussion at a professional meeting. We will continue to listen carefully so that we can use this important source of qualitative data to improve the journal.
The editorial staff, editorial board, and I will continue to work to cultivate a broad-based sense of this journal’s impact, both in the short run and over the long term, based on both quantitative and qualitative data, particularly in the context of the journal’s goal: to publish important, relevant, and timely articles that advance knowledge and practice in Academic Medicine’s five focus areas – education and training issues, health and science policy, institutional issues, research practice, and clinical practice in academic settings.
Steven L. Kanter, MD
1 The PLoS Medicine editors. The impact factor game. PLoS Med. 2006;3(6):e291. doi:10.1371/journal.pmed.0030291.
2 Rossner M, Van Epps H, Hill E. Show me the data. J Cell Biol. 2007;179:1091–1092.
3 Hirsch JE. An index to quantify an individual’s scientific research output. Proc Natl Acad Sci U S A. 2005;102(46):16569–16572. doi:10.1073/pnas.0507655102.
*To explain a bit further: The JIF is a ratio. Consider the 2008 JIF for a given journal. The numerator is the number of times that articles published in 2008, in journals included in the Thomson Reuters database, cited articles published in 2006 and 2007 in the given journal. The denominator is the number of articles published in 2006 and 2007 in the given journal. More information is available at http://thomsonreuters.com/products_services/science/free/essays/impact_factor/.
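To make the arithmetic concrete, here is a minimal sketch of that ratio. The counts below are invented for illustration; they are not actual Thomson Reuters figures for any journal:

```python
# Hypothetical 2008 JIF for an imaginary journal.
# Numerator: 2008 citations (in indexed journals) to its 2006-2007 articles.
citations_2008_to_2006_2007 = 480
# Denominator: number of articles it published in 2006 and 2007.
articles_2006_2007 = 240

jif_2008 = citations_2008_to_2006_2007 / articles_2006_2007
print(f"{jif_2008:.3f}")  # -> 2.000 (traditionally reported to three decimal places)
```

Note that only the denominator is limited to the journal’s own articles; the citing articles in the numerator may appear in any indexed journal, which is one source of the numerator–denominator mismatch mentioned above.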