Bibliometrics in Optometry and Vision Science

Twa, Michael D.

doi: 10.1097/OPX.0000000000001276
Editorials

The purpose of research and discovery is to advance knowledge and understanding. Communicating the results of that research through peer-reviewed publications is an essential step whereby new contributions, ideas, and evidence are evaluated and ultimately judged on merit. Valuable contributions can open new lines of investigation, resolve longstanding debates, or change clinical practice. However, because value is subjective, it is not always straightforward to determine the significance of a scientific contribution to a field of research. Likewise, it can be years or even decades before meaningful discoveries are understood, their importance recognized, and their influence fully realized. For example, the average age of Nobel Prize winners in chemistry, physiology, or medicine is 58 years. Einstein was awarded the Nobel Prize in Physics at the youthful age of 42, for work he did when he was 26 years old.

In 1955, Garfield1 first mentioned the idea of a scientific impact factor in an article he authored for Science. Drawing an analogy from the long-standing practice of cataloging legal citations (Shepard's Citations), Garfield1 proposed quantifying the influence of research publications by tracking citations to individual articles, thereby indexing the reach of the ideas they contain. Once article citations were accumulated, it was a matter of sorting by journal title or by author to generate the metrics that form the basis for the modern Journal Citation Reports, still the most widely used source for bibliometric comparisons.

WHY MEASURE SCIENTIFIC IMPACT?

Probably the most common use of measures of scientific impact is to provide support for academic review and career advancement, where it is customary to judge scholarly productivity, that is, the number of publications or grants, as well as the influence of that work on the scientific discipline. Likewise, institutions may wish to aggregate and judge the merits of scientific output at the departmental, school, or other institutional level. Funding agencies have also recently developed their own publication metrics. For example, the National Institutes of Health's Relative Citation Ratio is a metric that uses citation rates and citation networks to measure article-level influence across the boundaries of traditional academic disciplines.2,3 This metric allows program administrators to compare the value of scientific output from their own funded research portfolios with all other NIH-funded research. Libraries also compare journals to determine which titles to hold in their collections.

THE RANGE OF AVAILABLE METRICS

Journal Citation Reports

Since the introduction of the first citation metric (Science Citation Index) in 1961, numerous other metrics have been developed. Journal Citation Reports does not count all citations to publications. Citations from sources of dubious quality, for example, predatory journals, are excluded, as are citations from books, conference proceedings, dissertations, patents, technical reports, and other sources. Summary metrics provided as part of the standard Journal Citation Report include the following:

  • Impact factor: This is calculated as the number of citations in the current year to items published in the journal in the previous 2 years, divided by the total number of citable items (e.g., articles, reviews, and proceedings papers) published in the journal in those 2 years. Although not a strict average, the impact factor gives a rough idea of how often articles are cited; for example, an impact factor of 1.5 means that, on average, items published in the journal over the previous 2 years were each cited 1.5 times in the current year (a short worked example follows this list).
  • Five-year impact factor: This is the average number of times that articles published in the past 5 years have been cited in the report year. It is calculated as the number of report-year citations to articles published in the previous 5 years, divided by the total number of articles published in those 5 years.
  • Immediacy index: The immediacy index is the average number of times an article is cited in the year it is published.
  • Eigenfactors: This metric quantifies citation networks over an entire field while normalizing and accounting for differences in citation behavior across disciplines. Eigenfactors are calculated over a 5-year period.
  • Journal impact factor percentile: Journals are ranked within their category by impact factor, and the percentile is reported. A journal with an impact factor percentile of 75 performs better, by impact factor, than 75% of the journals in its category.
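To make this arithmetic concrete, the sketch below computes a 2-year impact factor and an immediacy index from hypothetical counts. The function names and numbers are invented for illustration and do not reflect any journal's actual figures.

```python
# Minimal sketch of the 2-year impact factor and immediacy index arithmetic.
# All counts are hypothetical; real Journal Citation Reports values are
# derived from curated citation data, not raw counts like these.

def impact_factor(citations_to_prior_2yrs: int, citable_items_prior_2yrs: int) -> float:
    """Citations this year to items from the previous 2 years, divided by
    the number of citable items published in those 2 years."""
    return citations_to_prior_2yrs / citable_items_prior_2yrs

def immediacy_index(citations_in_pub_year: int, items_in_pub_year: int) -> float:
    """Average number of citations received in the same year an item is published."""
    return citations_in_pub_year / items_in_pub_year

# Hypothetical example: 300 citations in the report year to the 200 citable
# items published over the previous 2 years -> impact factor of 1.5.
print(impact_factor(300, 200))    # 1.5
print(immediacy_index(40, 100))   # 0.4
```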

h-index

Citations to articles can range widely, both across the articles in a journal and across the articles of an individual author. The distribution of citations is often hyperbolic (a few articles attract most of the citations), and the h-index is a simple way to limit the influence of extremes on citation metrics. The h-index is the largest number h such that the author has h articles each cited h or more times. For example, an author with 20 publications that are each cited 20 or more times has an h-index of at least 20. The h-index can be calculated for authors, journals, departments, and so on. The average h-index was 39 for the top 25 Optometry and Vision Science authors over the past 25 years, and each of these authors had more than 200 publications.
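As an illustration of the definition, the sketch below computes an h-index from a list of per-article citation counts; the counts are invented for the example.

```python
# Minimal sketch of the h-index: the largest h such that at least h articles
# have h or more citations each. Citation counts are invented for illustration.

def h_index(citations: list[int]) -> int:
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank      # at least `rank` articles have >= `rank` citations
        else:
            break
    return h

print(h_index([50, 33, 20, 20, 6, 5, 4, 2, 1, 0]))  # 5
```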

Google Scholar

Google Scholar represents the other end of this spectrum compared with Journal Citation Reports and errs on the side of inclusion. Citations are aggregated automatically by programs (bots) that scour Web content seeking citations. Thus, citations from virtually any source may be included in citation counts, and this is a source of criticism. On the positive side, Google Scholar can provide a more complete picture of where citations come from. After years of development, its Web analysis is sophisticated and can capture the information required to more fully describe the reach of scholarly publications across languages and from many different types of content. On the negative side, Google is a media company: it profits from creating channels to content and is primarily concerned with how information is packaged, promoted, and monetized. The fact that search results are ranked and packaged according to proprietary weighting schemes is both unavoidable and unfortunate. Search results that present a highly cited article at the top of the list can lead to additional citations to that article, producing a self-reinforcing cycle.

Metrics reported in Google Scholar include the h-index (described previously) and the i10-index. The i10-index is defined as the total number of articles cited 10 or more times.
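A minimal sketch of the i10-index arithmetic, using the same invented citation counts as the h-index example above:

```python
# Minimal sketch of the i10-index: the number of articles cited 10 or more
# times. Citation counts are invented for illustration.

def i10_index(citations: list[int]) -> int:
    return sum(1 for c in citations if c >= 10)

print(i10_index([50, 33, 20, 20, 6, 5, 4, 2, 1, 0]))  # 4
```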

Optometry and Vision Science is the only optometry journal ranked by Google Scholar among its top ophthalmology and optometry journals.

Altmetrics

More recently, new metrics have been created to capture interest in, and use of, scholarly publications through alternative media channels. The Internet was one of the most influential developments in the world of scholarly publishing over the last 30 years. It was highly disruptive to newsprint and magazines, and its effects continue to ripple through the world of academic publishing. As content continues to migrate toward electronic media and Web access, new ways to measure and track content use become possible.

Altmetrics tracks many different types of article attention and use, including traditional print media, tweets, blog mentions, Facebook posts, Wikipedia pages, Google pages, Reddit, Mendeley readers, and other uses. The creators use a proprietary algorithm that weights each of these different types of recognition differently. Traditional citations are conspicuously absent, but Altmetrics still adds value for those interested in tracking influence and attention from the broader public and may help provide metrics for those interested in institutional visibility.

PlumX Metrics

In February 2017, Plum Analytics was acquired by Elsevier, the largest media company in academic publishing. Plum Analytics provides article-level metrics that combine traditional citation metrics with alternative metrics of usage, discussions, and social media posts, all on a single dashboard. An example of the PlumX information display is provided in Fig. 1 for the 2017 Garland Clay Award–winning Optometry and Vision Science publication from 2013 by Walline and colleagues.4 These alternative metrics are not a substitute for traditional citation metrics but do provide additional perspective on the use and attention publications receive.

FIGURE 1. PlumX metrics display for the 2013 publication by Walline and colleagues.4

There is no shortage of new and emerging metrics, and the trend will continue as scholarly publishing continues to feel the pressure and effects of the digital publishing transformation. Digital publishing gives analysts greater insight into the use of, and patterns of behavior associated with, digital artifacts. Capturing those user interactions with content and aggregating that information to discover and define usage patterns is valuable to many stakeholders in academic publishing (authors, institutions, editors, societies, etc.) and related businesses (publishing, advertising, etc.). Innovations in digital academic publishing are far from over.

THE PROBLEM WITH METRICS

Charles Goodhart is an economist credited with pointing out that when social or economic measures are turned into targets for policy, they lose their value as metrics and are subject to gaming.5 Marilyn Strathern is credited with articulating the simplified and more general statement that has since come to be known as Goodhart's law6: “When a measure becomes a target, it ceases to be a good measure.”

Gaming citation metrics is a clear example of Goodhart's law. If rewards are tied to the number of publications, there is a perverse motivation to publish the smallest possible fragment of knowledge to get the greatest number of publications per unit of knowledge (salami slicing). Likewise, if the number of citations is the target, then behavior will adapt to maximize this output (e.g., self-citations), and there have been many examples of perverse behavior targeting citation metrics. One example of such metric abuse is the creation of citation cartels. In 2018, 20 journals were denied impact factors by Journal Citation Reports; that is, they were totally removed from the Journal Citation Reports rankings because of high rates of self-citation or apparent collusion with other journals. It is not unusual for authors to cite their own work to some degree, and when they do, it can boost their individual citation rate. Journals can also increase their impact factor by encouraging authors to cite their journal. A more manipulative practice is for journals to collude and cite each other's publications for the purpose of boosting their citation metrics.7

Another recent trend in academic publishing designed to maximize journal citation metrics is to discontinue publishing case reports, which commonly receive few if any citations. By carving out clinical case reports, a journal removes infrequently cited items from the impact factor denominator, effectively boosting the journal's impact factor.
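A hypothetical illustration of that arithmetic (all numbers invented): suppose a journal published 200 citable items over 2 years, 40 of which were case reports, and received 300 citations to those items, only 10 of which went to the case reports.

```python
# Hypothetical illustration of how dropping rarely cited case reports can
# raise a 2-year impact factor. All numbers are invented.

citations_all, items_all = 300, 200      # includes 40 case reports
citations_cases, items_cases = 10, 40    # citations to, and count of, case reports

with_case_reports = citations_all / items_all
without_case_reports = (citations_all - citations_cases) / (items_all - items_cases)

print(round(with_case_reports, 2))     # 1.5
print(round(without_case_reports, 2))  # 1.81
```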

PUBLISHING IN OPTOMETRY AND VISION SCIENCE

Optometry and Vision Science is committed to providing excellent service to authors and to the clinical and vision science community. We recognize that research can be a very competitive endeavor, and as fellow clinicians and scientists, we strive to make Optometry and Vision Science the best venue for your work. Last month, the journal's 2-year impact factor rose to 1.5, up from 1.4 in 2016. The 5-year impact factor rose from 1.7 to 1.9, and we are poised to see it increase further next year. Nevertheless, we are more committed to providing a quality publication venue for cutting-edge vision science than to maximizing the journal's citation metrics. Concentration on the former will take care of the latter.

Michael D. Twa

Editor in Chief Birmingham, AL

REFERENCES

1. Garfield E. Citation Indexes for Science: A New Dimension in Documentation through Association of Ideas. Science 1955;122:108–11.
2. iCite. National Institutes of Health, Office of Portfolio Analysis. Available at: https://icite.od.nih.gov/analysis. Accessed July 1, 2018.
3. Hutchins BI, Yuan X, Anderson JM, et al. Relative Citation Ratio (RCR): A New Metric That Uses Citation Rates to Measure Influence at the Article Level. PLoS Biol 2016;14:e1002541.
4. Walline JJ, Greiner KL, McVey ME, et al. Multifocal Contact Lens Myopia Control. Optom Vis Sci 2013;90:1207–14.
5. Goodhart CAE. Problems of Monetary Management: The UK Experience. Reserve Bank of Australia; 1975.
6. Strathern M. ‘Improving Ratings’: Audit in the British University System. European Review 1997;5:305–21.
7. Davis P. The Emergence of a Citation Cartel. The Scholarly Kitchen. Available at: https://scholarlykitchen.sspnet.org/2012/04/10/emergence-of-a-citation-cartel/. Accessed July 1, 2018.
© 2018 American Academy of Optometry