A Remarkable Journal Impact Factor for Simulation in Healthcare

Gaba, David M. MD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: December 2011 - Volume 6 - Issue 6 - p 313-315
doi: 10.1097/SIH.0b013e31823ca798
Editorial

From the Stanford University School of Medicine, Stanford, CA

As Editor-in-Chief of Simulation in Healthcare, the author receives a yearly honorarium equivalent to approximately 7% of his annual income. Much of this is donated to the Center for Immersive and Simulation-based Learning at Stanford.

Reprints: David M. Gaba, MD, 291 Campus Drive, LK311C, Stanford, CA 94305-5217 (e-mail: gaba@stanford.edu).

I am pleased and proud to announce that Simulation in Healthcare has been assigned its first “Journal Impact Factor” (JIF) of 2.036 since citations to the journal began to be tracked by Thomson Reuters. This ranks the journal 26th out of 71 journals in Thomson Reuters' Web of Science Database category of Health Care Sciences and Services. This database contains 10,000 journals and proceedings in sciences and social sciences. For comparison, some journals likely to be familiar to our readers include Medical Education (JIF = 2.639), Academic Medicine (JIF = 2.631), Quality and Safety in Healthcare (JIF = 2.856), Medical Teacher (JIF = 1.494), and Teaching and Learning in Medicine (JIF = 0.679). The JIF is updated yearly by Thomson Reuters in its Journal Citation Report.

For readers not familiar with citation tracking and JIF, let me explain roughly how this works. Since the advent of computerized databases of articles (beginning in the 1960s), it has become possible to automatically determine the number of times any particular journal article is cited by other articles in the database.1 Thus, authors who are interested can look at the frequency of citation of their articles on a year-by-year basis (which I can attest is both an exciting and humbling experience) and can find numerical indices of their own publishing productivity (such as the h-index alluded to below).

The JIF looks at citations from the journal's perspective. An impact factor of 2.036 means that on average the articles published in Simulation in Healthcare in 2008 or 2009 (of which there were 56 counted for JIF purposes) were cited as a reference approximately two times in 2010 in any peer-reviewed science or social science journal (there were 114 citations). The ratio 114/56 is the JIF. Of course, some articles are cited many more times than average and others not at all. Despite many caveats about JIF and its interpretation,2–5—some of which are articulated below—there is wide agreement that overall it allows a rough comparison of the impact of a journal versus other journals in the same general field.
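For readers who like to see the arithmetic written out, the calculation just described can be sketched as follows (a simplified rendering using the figures quoted above; Thomson Reuters applies its own detailed rules about which items and citations are counted):

```latex
% Sketch of the 2010 JIF calculation for Simulation in Healthcare,
% using the approximate figures quoted in the text above.
\mathrm{JIF}_{2010}
  = \frac{\text{citations in 2010 to items published in 2008--2009}}
         {\text{countable items published in 2008--2009}}
  = \frac{114}{56} \approx 2.036
```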

Given how new Simulation in Healthcare is compared with most other journals—we have only recently begun publishing bimonthly—having a starting JIF of nearly 2.04 is a significant achievement. It indicates that other authors are indeed looking at and utilizing the content of our journal and doing so at a rate that is within striking distance of that for more mature and more general journals in our sphere. This supports our belief that Simulation in Healthcare is indeed a high-quality, highly respected journal and is the premier venue for the publication of articles aimed at the simulation community. This is a testament not only to the members of the Editorial Board, Associate Editors, and Reviewers who have worked hard to ensure the initial and growing rigor of the work we publish but also to the bravery and commitment of authors who have submitted their work to a new journal that until now has not had a formal measure of its impact relative to others.

Having lauded our JIF, I will join the many others who have cautioned that it is an imperfect measure.2,5 Some of the technicalities of calculating the JIF can have a substantial effect on the result. Review articles and methodology articles are cited more often than those with new research findings. Citation patterns in different fields vary widely (ie, in some, using few citations is the norm; in others, dense citation is expected), as does the total number of articles published in a given field per year (ie, more articles, more opportunities for others to cite one's work). Differing citation patterns might be an important issue for our journal because of the highly multidisciplinary, interprofessional, and international nature of Simulation in Healthcare. Because of the newness of the field, the total number of articles on the topic is low even across all journals. The JIF will generally be higher for “fast moving” fields where research can build relatively quickly on the findings of others. This may not be so easy for simulation research, in which complex studies with busy learners (whether students or active clinicians) take time to plan, conduct, and analyze.

The JIF is a ratio between the number of citations to articles (in the numerator) and the number of “countable” articles published (in the denominator). Not all types of articles count in the denominator. For example, editorials can cite prior articles in the journal (adding to the numerator) and they may also generate future citations to the editorials themselves in other articles, but editorials are not counted in the denominator. Thus, editorials that are highly cited can boost a journal's JIF. Journals can “game the system” by purposefully publishing more articles of types that generate many citations to them (eg, review articles) and by reducing the number of other kinds of articles they publish (shrinking the denominator). The ideal journal from a JIF standpoint might be one that publishes only a few notable editorials and reviews per year and nothing else, but it might be a poor excuse for a scholarly journal. There are anecdotal allegations that some reviewers and editors of other journals have encouraged—indeed in effect required—authors of submitted papers to cite large numbers of articles appearing in the journal even when many of those articles are either not relevant or redundant. Although highly improper, such practices have a logic to them, because, for a journal with only about 60 articles published in the 2-year window, adding only another 20 citations would increase the JIF by 0.33 points—a substantial boost. It is of course appropriate to cite relevant articles in the same journal to which one is submitting a manuscript (including one's own articles), but the wholesale citing of articles without justification, to boost a journal's JIF, is highly unethical. Simulation in Healthcare will never encourage or demand this of submitting authors.
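To make the arithmetic of that last point concrete, here is a minimal sketch (hypothetical Python, not anything Thomson Reuters actually uses; only the figures of roughly 60 countable articles and 20 added citations come from the discussion above, and the baseline citation count is invented):

```python
# Hypothetical sketch of how coerced self-citations move the JIF of a small journal.
# Only the ~60 countable articles and 20 added citations come from the text above;
# the baseline of 120 citations is invented for illustration.

def impact_factor(citations: int, countable_articles: int) -> float:
    """JIF as a simple ratio: citations in year N to items published in years
    N-1 and N-2, divided by the number of countable items from those 2 years."""
    return citations / countable_articles

articles = 60                               # approximate size of the 2-year window
baseline = impact_factor(120, articles)     # invented baseline: 2.00
padded = impact_factor(120 + 20, articles)  # 20 additional coerced citations

print(f"baseline JIF: {baseline:.2f}")
print(f"after 20 extra citations: {padded:.2f} (boost of {padded - baseline:.2f})")
# The boost is 20 / 60, about 0.33, regardless of the baseline citation count.
```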

The JIF is only one method to try to get a handle on the impact of a journal, although it is by far the most widely used numerical measure. Other quantitative measures of citations have been introduced, such as the h-index—which can apply to an individual author or to a journal6—or the journal eigenfactor7 or the article influence score (see also http://www.eigenfactor.org). A variety of correction factors have been introduced to alleviate the limitations of each of these. I cannot go into the details of these measures, but a quick Internet search will turn up an abundance of information about them. Suffice it to say that for biomedical journals, at the roughest approximation, the rankings of journals by these different measures are similar. Despite the imperfections, there is little serious challenge to the use of JIF as a rough guide to the impact of a journal relative to its peers.
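As one small illustration of how simple some of these alternative measures are at their core, the h-index mentioned above is just the largest number h such that h publications have each been cited at least h times; a minimal sketch of that computation (with invented citation counts) might look like this:

```python
# Minimal sketch of the h-index (Hirsch, 2005): the largest h such that
# h publications have each been cited at least h times.
# The citation counts below are invented for illustration.

def h_index(citation_counts: list[int]) -> int:
    """Return the h-index for a list of per-article citation counts."""
    counts = sorted(citation_counts, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

print(h_index([23, 12, 9, 6, 3, 2, 1, 0]))  # prints 4: four articles each have at least 4 citations
```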

Clearly, citation analysis, especially from databases that only count citations in indexed journals, does not fully capture the real impact of a journal's publications, perhaps all the more so for simulation. The Thomson Reuters Web of Science Database does not include book chapters and other types of literature (whereas, say, Google Scholar does, and typical numerical citation indices can also be computed from Google Scholar data). Because a significant portion of our community uses simulation to improve education, training, and clinical care processes, its members may perform quality management studies that are not intended to produce publications, or they may concentrate solely on implementation without performing any research at all. Thus, articles in Simulation in Healthcare might well change people's thinking and educational practices and have a major impact on the field, but these impacts may not show up in citations. Conversely, evidence about the impact of the journal might be sought in other ways, for example, through measuring the number of downloads of articles or by talking with authors, reviewers, and leaders in the field.2

Of more serious concern to many is that the use of JIF has spread insidiously to purposes for which it was never designed or intended. Perhaps of greatest concern is the simplistic use of impact factor to rate the articles published by individuals during promotion reviews or grant proposal reviews.4 Rather than focusing on the quality, novelty, and actual meaning of articles or the quality of a journal's peer review, there is sometimes blind reliance on the JIF of the journals in which researchers choose to publish. Such mindless reliance is unwise. For example, it is fascinating to consider how many pioneering articles in history lay fallow, unnoticed, until the time was ripe for their consideration (beyond the 2-year citation window used to calculate the standard JIF and sometimes well beyond the 5-year window used for an alternate JIF). A few examples are relevant to our field: articles published in the late 1960s or early 1970s about the Sim One mannequin-based simulator8,9 were largely ignored until after the independent reinvention of this technology by my group and others in the late 1980s, and the peak popularity of Kolb's Experiential Learning Theory occurred long after its original publication (as a book) in 1984.10 Thus, I do not think that high-stakes decisions, whether on academic promotion or grant funding, should consider JIF as anything but a crude indicator of the prestige of the journals in which applicants have published. Reviewers in any of these settings should be looking primarily at the overall significance of the work and not just where it was published.

In summary, we are rightly proud of our initial JIF for Simulation in Healthcare. We will continue to follow the trend of this and other metrics over the years. We expect them to increase as we publish more work that leads to greater impact on patient care and safety and as our journal becomes even more well known. At the same time, we caution people to use such information wisely and to always think about the meaning and veracity of publications rather than just the pedigree of their sources.

REFERENCES

1. Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90–93.
2. Kanter S. Understanding the Journal's impact. Acad Med 2009;84:1169–1170.
3. Saha S, Saint S, Christakis D. Is impact factor a valid measure of journal quality? J Med Libr Assoc 2003;91:42–46.
4. Seglen P. Why the impact factor of journals should not be used for evaluating research. BMJ 1997;314:497–502.
5. Petsko G. Having an impact (factor). Genome Biol 2008;9:107.
6. Hirsch JE. An index to quantify an individual's scientific research output. Proc Natl Acad Sci USA 2005;102:16569–16572.
7. Bergstrom C. Eigenfactor: measuring the value and prestige of scholarly journals. College & Research Libraries News 2007;68:314–316.
8. Abrahamson S, Denson J, Wolf R. Effectiveness of a simulator in training anesthesiology residents. J Med Educ 1969;44:515–519.
9. Denson J, Abrahamson S. A computer-controlled patient simulator. JAMA 1969;208:504–508.
10. Kolb D. Experiential Learning: Experience as a Source of Learning and Development. Upper Saddle River, NJ: Prentice Hall; 1984.
© 2011 Society for Simulation in Healthcare