Epidemiology: May 2008 - Volume 19 - Issue 3
doi: 10.1097/EDE.0b013e31816a9e28
Commentary: IMPACT FACTOR

Epidemiologists (of All People) Should Question Journal Impact Factors

Hernán, Miguel A.

Author Information

From the Harvard School of Public Health, Boston, MA.

Submitted 7 December 2007; accepted 25 January 2008.

Editors' note: Related articles appear on pages 369, 370, 372, and 373.

Correspondence: Miguel A. Hernán, Harvard School of Public Health, 677 Huntington Ave, Boston, MA 02115. E-mail: miguel.hernan@post.harvard.edu.

Each year Thomson Scientific, a private company, computes the bibliographic impact factor (BIF) for many journals, including general epidemiology journals. The 2006 BIF was 5.2 for the American Journal of Epidemiology, 4.5 for the International Journal of Epidemiology, and 4.3 for Epidemiology.

The literature on the shortcomings of the BIF as a criterion for ranking journals is extensive. The main criticisms of the BIF as a measure of research quality, as well as the vulnerability of the BIF to editorial manipulation and the distortions encouraged by the use of the BIF, have been recently discussed by several authors,1–4 including the creator of the BIF.5 This commentary does not reiterate those criticisms. Rather, I would like to highlight some flaws of the BIF that epidemiologists are especially well trained to detect. To do so, let me tell you the apocryphal story of a paper that I recently handled as an Editor of Epidemiology.

Thomson et al submitted a paper whose implicit goal was to compare the quality of medical care for epileptic patients among the neurology clinics of 3 hospitals located in Baltimore, Maryland; Durham, North Carolina; and Bristol, England. To accomplish this goal, the authors identified all new diagnoses of epilepsy in each hospital during the years 2004 and 2005. They then conducted an exhaustive search to count the number of seizures experienced by each hospital's patients during the year 2006. The authors were able to find all occurrences of seizures in these patients no matter where in the world they were in 2006. Pretty impressive, I thought. For each clinic, Thomson et al computed the ratio of the number of seizures among its patients to the number of patients with epilepsy. They referred to this ratio as the brain irritability factor (BIF). Thomson et al cautioned against the misuse of the BIF, but nonetheless announced their intention to compute the BIF for comparisons among all major hospitals in the world. The 2006 BIF was 5.2 for Baltimore, 4.5 for Bristol, and 4.3 for Durham. I sent the paper for review to 3 fellow epidemiologists. They raised the following criticisms:
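In its simplest form, the computation Thomson et al describe is just one count divided by another. The brief sketch below (in Python, with entirely hypothetical records and invented field names; it merely illustrates the arithmetic) shows how a clinic's 2006 BIF would be obtained from patient-level records:

    # Hypothetical records for one clinic: diagnosis at admission in 2004-2005 and
    # number of seizures experienced during 2006, wherever in the world they occurred.
    patients = [
        {"diagnosis": "epilepsy", "seizures_2006": 7},
        {"diagnosis": "epilepsy", "seizures_2006": 0},
        {"diagnosis": "migraine", "seizures_2006": 2},
        {"diagnosis": "epilepsy", "seizures_2006": 5},
    ]

    # Numerator: all seizures in 2006 among the clinic's 2004-2005 patients.
    total_seizures = sum(p["seizures_2006"] for p in patients)

    # Denominator: only the patients admitted with a diagnosis of epilepsy.
    epilepsy_patients = sum(1 for p in patients if p["diagnosis"] == "epilepsy")

    bif = total_seizures / epilepsy_patients  # 14 / 3, or about 4.7, in this toy example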

1. Bad Choice of Denominator: The authors included information from more subjects in the numerator than in the denominator of the BIF. Specifically, for each hospital, the denominator was the number of patients admitted with a diagnosis of epilepsy in 2004–2005, whereas the numerator was the total number of seizures experienced in 2006 by all patients admitted to the clinic in 2004–2005 (regardless of their diagnosis). The reviewers asked to see a corrected BIF that includes all admitted patients (regardless of their diagnosis) in the denominator. Otherwise, the BIF could not be interpreted as “a measure of the frequency of seizures of the ‘average patient’ in a clinic during a particular period,” as proposed by Thomson et al in their article. Had the authors responded to this criticism, they would have reported that the corrected 2006 BIF was approximately 4.1 for Baltimore, 2.8 for Durham, and 2.1 for Bristol.

2. Need for Adjustment: The proportion of patients with a diagnosis of epilepsy varied greatly among the 3 hospitals: approximately 86% for Baltimore, 72% for Durham, and 59% for Bristol. Because the number of seizures is expected to be greater among epileptic patients, a crude comparison of the average number of seizures across hospitals would be misleading. Figure 1 represents this problem. Thus the reviewers requested that either the BIF be standardized to some common distribution of epilepsy frequency, or the numerator of the BIF be modified to include only seizures in patients with epilepsy. Had the authors responded, they would have reported that the 2006 BIF restricted to patients with epilepsy was approximately 5.1 for Baltimore, 3.8 for Durham, and 3.1 for Bristol. Some characteristics of the patients with epilepsy (eg, comorbidities) may also be differentially distributed by hospital. If these characteristics are strongly associated with the number of seizures, then even the restricted BIF may be misleading.

Figure 1.

3. Questionable Summary Measure: Because of the highly skewed distribution of the number of seizures, the mean may not be the most informative summary. Other measures such as the median number of seizures (4 in Baltimore, 2 in Durham and Bristol) or the proportion of epilepsy patients with no seizures (approximately 4.7% in Baltimore, 11.7% in Durham, 11.3% in Bristol) or the proportion above a certain number of seizures may also provide important information.
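The three requests above amount to simple changes in the same calculation. Continuing the earlier sketch with the same hypothetical records (again, purely illustrative numbers), one would compute:

    from statistics import median

    # Same hypothetical records as in the earlier sketch.
    patients = [
        {"diagnosis": "epilepsy", "seizures_2006": 7},
        {"diagnosis": "epilepsy", "seizures_2006": 0},
        {"diagnosis": "migraine", "seizures_2006": 2},
        {"diagnosis": "epilepsy", "seizures_2006": 5},
    ]
    total_seizures = sum(p["seizures_2006"] for p in patients)

    # Criticism 1: include all admitted patients in the denominator.
    corrected_bif = total_seizures / len(patients)                # 14 / 4 = 3.5

    # Criticism 2: restrict numerator and denominator to patients with epilepsy.
    epilepsy_counts = [p["seizures_2006"] for p in patients
                       if p["diagnosis"] == "epilepsy"]
    restricted_bif = sum(epilepsy_counts) / len(epilepsy_counts)  # 12 / 3 = 4.0

    # Criticism 3: summaries less sensitive to a skewed distribution.
    median_seizures = median(epilepsy_counts)                     # 5
    prop_seizure_free = sum(s == 0 for s in epilepsy_counts) / len(epilepsy_counts)  # 1/3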

I asked the authors to address these standard epidemiologic criticisms. I specifically directed their attention to the fact that restriction to patients with epilepsy changed the value of the BIF differentially among hospitals (a change of about 2% for Baltimore, 10% for Durham, and 30% for Bristol), and wondered whether this sensitivity of the estimates could be explained by the combination of two quantities that differed among hospitals: the proportion of seizures detected in the same hospital where the patients were admitted (about 5.2% in Baltimore, 4.7% in Durham, and 12.0% in Bristol) and the proportion of patients with epilepsy.

I also asked the authors for the rationale underlying the use of only those seizures that occurred in 2006, and requested a better description of their methods. Specifically, it was unclear what procedure was used to diagnose epilepsy, and thus how many patients should contribute to the denominator of the BIF. This vagueness ensures that the authors' BIF cannot be replicated by other investigators who may wish to assess its accuracy. In fact, I could not exactly reproduce the unadjusted BIFs reported by the authors, even when provided with the raw data consisting of each patient's number of seizures and medical records (as a result, the restricted BIFs for Durham and Bristol reported above are probably 0.1–0.2 lower than they should be).

The authors rejected these criticisms. Paraphrasing Hoeffel,6 they responded that the BIF “is not a perfect tool to measure the quality of clinics but there is nothing better and it has the advantage of already being in existence and is, therefore, a good technique for scientific evaluation.” They also responded that the diagnosis of epilepsy was “based on human judgment” and the diagnostic criteria were not meant to be publicly available. These responses left me with no choice but to reject the paper. I later learned that other editors in similar situations had actually received similar responses from Thomson, or none at all.7

The parallels between this hypothetical BIF and the journal BIF are summarized in Table 1. Many epidemiologists use the Thomson Scientific impact factor to rank journals. Some even decide where to submit their own papers based on the journals' BIF, which gives the BIF rankings the power of a self-fulfilling prophecy, as journals with higher BIFs (1) get the right of first refusal of many papers, including a disproportionate number of the best ones, and (2) are read more and thus tend to have more citations. Interestingly, some epidemiologists who would be quite critical of Thomson et al's brain irritability factor seem to put their critical faculties on hold when considering the Thomson Scientific bibliographic impact factor, even though both BIFs go against the fundamental epidemiologic principles that epidemiologists abide by in their teaching and research.

Table 1.
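To make the analogy concrete, the journal BIF is computed in essentially the same way (this is the standard 2-year definition; the numbers below are hypothetical):

    2006 journal BIF = (citations received in 2006 by items the journal published in 2004 and 2005)
                       / (number of citable items the journal published in 2004 and 2005)

A journal that published 200 citable items in 2004–2005 and whose 2004–2005 content attracted 900 citations in 2006 would thus have a 2006 BIF of 900/200 = 4.5. Because citations to letters, editorials, and other noncitable items count in the numerator while such items are excluded from the denominator, the journal BIF shares the numerator-denominator mismatch described in criticism 1 above.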

Developing a good impact factor is a nontrivial methodologic undertaking that depends on the intended goal of the rankings. Hence, a scientific discussion about any impact factor requires that its goal be made explicit and its methodology be described in enough detail to make the calculations reproducible. Paradoxically, the methodology of the impact factor that is used to evaluate peer-reviewed journals cannot be fully evaluated in a peer-reviewed journal. As illustrated above, a manuscript describing the Thomson Scientific impact factor would be a hard sell for most journals, and hardly acceptable for the American Journal of Epidemiology, the International Journal of Epidemiology, or Epidemiology.


REFERENCES

1. Seglen PO. Why the impact factor of journals should not be used for evaluating research. BMJ. 1997;314:498–502.

2. Porta M, Fernández E, Bolúmar F. Commentary: the ‘bibliographic impact factor’ and the still uncharted sociology of epidemiology. Int J Epidemiol. 2006;35:1130–1135.

3. Adler R. The impact of impact factors. IMS Bull. 2007;36:4.

4. Chew M, Villanueva EV, Van der Weyden MB. Life and times of the impact factor: retrospective analysis of trends for seven medical journals (1994–2005) and their Editors' views. J R Soc Med. 2007;100:142–150.

5. Garfield E. The history and meaning of the journal impact factor. JAMA. 2006;295:90–93.

6. Hoeffel C. Journal impact factors [letter]. Allergy. 1998;53:1225.

7. PLoS Medicine Editors. The impact factor game [editorial]. PLoS Med. 2006;3:e291, 1–2.


© 2008 Lippincott Williams & Wilkins, Inc.
