Gray, Chancellor F. MD
From the Department of Orthopaedic Surgery, University of Pennsylvania Health System, Philadelphia, PA.
Address correspondence and reprint requests to Chancellor F. Gray, MD, Department of Orthopaedic Surgery, University of Pennsylvania Health System, 3400 Spruce St., 2 Silverstein, Philadelphia, PA 19104; E-mail firstname.lastname@example.org
Acknowledgment date: September 6, 2012. Acceptance date: September 6, 2012.
The manuscript submitted does not contain information about medical device(s)/drug(s).
No funds were received in support of this work.
No relevant financial activities outside the submitted work.
It would be tempting—but misleading—to report the results of spinal fusion surgery solely in terms of the rate of radiographical union. To be sure, radiographical union is an objective observation, and using it as the standard avoids all of the imprecision you would get with more subjective measures such as pain relief; but using that objective measure alone, at best, tells an incomplete story.
Similarly, it would be tempting to report the relative success of a medical journal solely in terms of how many times its articles were cited. To be sure, citations—like radiographical fusions—are an objective observation, and using them as the standard avoids all of the imprecision you would get with more subjective measures such as the clinical impact of the articles. But using citations alone at best tells an incomplete story and is certainly misleading.
Yet, most people hear only that incomplete story. Medical journals are commonly rated by citations—specifically, a citation-based metric called “impact factor” (IF). The IF is calculated by dividing the number of current year citations to the source items published in that journal during the previous 2 years by the total number of such source items. In other words, it is the average number of citations per citable item during the previous 2 years.
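The IF arithmetic described above can be sketched in a few lines of code. This is an illustrative sketch only; the function name and the citation counts are hypothetical, not data from any actual journal.

```python
# Sketch of the impact-factor (IF) calculation described above.
# IF = (citations this year to items the journal published in the
# previous 2 years) / (number of those citable items).

def impact_factor(recent_citations: int, recent_items: int) -> float:
    """Average citations per citable item from the previous 2 years."""
    return recent_citations / recent_items

# Hypothetical journal: 500 citations in the current year to articles
# from the previous 2 years, across 200 citable items.
print(impact_factor(500, 200))  # 2.5
```

Note that only citations falling inside the 2-year window enter the numerator; as the Beaton et al example below shows, citations accruing later are invisible to this metric.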
One criticism of IF is that citations fail to capture “how well read and discussed the journal is outside the core scientific community or whether it influences health policy.”1 Furthermore, because citations do not follow a normal distribution, IF can be “influenced by a small minority of [a journal's] papers.”2 For some journals, 85% or more of the IF is attributable to the performance of just a handful of articles.
There is more not to like about IF. First, in some fields such as orthopedic surgery, a 2-year window does not allow an article's (or a journal's) performance to shine through. Just to name one example, consider the article by Beaton et al3 published in Spine in 2000. This article has garnered more than 600 citations to date; yet only 3 of these citations were made in 2001 or 2002, and therefore more than 99% did not count toward Spine's IF.
The second issue is the use of a denominator. If a journal publishes a lot of articles that are not cited much, its IF shrinks—but that seems to denigrate the journal's achievement with regard to the articles that are cited. I personally consider the fact that the top 10 articles from Spine in the year 2000 were cited 4487 times to be more significant than the fact that Spine published 111 articles that year that were not cited at all. The articles that are not cited are still peer-reviewed, are legitimate science, and may have influenced the thinking of surgeons and scientists. Should Spine be “punished” for publishing these articles?
To help tell a more complete story, Joe Bernstein of the University of Pennsylvania and I developed a metric called “content factor” (CF). CF attempts to improve on IF by counting citations from all years, not just a 2-year window, and by not dividing by the number of articles published. For a field such as orthopedic surgery, in which citations are garnered slowly, this is essential to accurately capture the “impact” of a publication. In a recent article4 in PLoS One, we reported that CF correlates much more strongly than IF with a journal's perceived importance among an expert panel.
By the standards of CF, Spine is doing quite well. In fact, in the year we examined, 2010, Spine had the highest CF of all orthopedic journals (33.1). That is to say, in 2010, Spine articles (from all years) were cited more than 33,000 times. Spine, one might say, is making significant contributions to the medical community that are not reflected in the IF (only 2.5 in that same year).
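A minimal sketch of the CF calculation, as the figures above suggest it works: the total citations a journal's articles (from all publication years) receive in a given year, expressed in thousands. The scaling by 1000 is an assumption inferred from the quoted numbers (33,000 citations yielding a CF of 33.1); the function name is mine, not part of the published metric's notation.

```python
# Sketch of the content-factor (CF) calculation: total citations in a
# given year to a journal's articles from ALL years, in thousands.
# The scale factor is inferred from the 2010 figures quoted above.

def content_factor(total_citations_this_year: int) -> float:
    """CF = citations this year to the journal's entire back catalog,
    divided by 1000 (no per-article denominator)."""
    return total_citations_this_year / 1000.0

# Hypothetical round number echoing the 2010 example in the text:
print(content_factor(33100))  # 33.1
```

Because there is no denominator, publishing many rarely cited articles cannot lower a journal's CF, which addresses the second criticism of IF raised above.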
It may well be that the assessment of medical journals—just like the assessment of patients—needs to go beyond 1 or 2 pieces of data. That is, CF alone does not tell the whole story either. Even if Spine has the highest CF in orthopedics (as it did in 2010), it would be wrong to say that this necessarily proves that Spine is the best orthopedic journal (though it may be). What can be said, however, in light of the 33,000 citations that Spine garnered in 2010, is that the journal is making a major contribution to orthopedic knowledge.
1. Bernstein J, Gray CF. The impact factor game. It is time to find a better way to assess the scientific literature. PLoS Med 2006;3:e291.
2. Editorial: Not-so-deep impact. Nature 2005;435:1003–4.
3. Beaton DE, Bombardier C, Guillemin F, et al. Guidelines for the process of cross-cultural adaptation of self-report measures. Spine 2000;25:3186–91.
4. Bernstein J, Gray CF. Content factor: a measure of a journal's contribution to knowledge. PLoS One 2012;7:e41554.
© 2013 Lippincott Williams & Wilkins, Inc.