BASIC RESEARCH

The Growth of Poorly Cited Articles in Peer-Reviewed Orthopaedic Journals

Kortlever, Joost T.P. MD; Tran, Thi T.H. MD; Ring, David MD, PhD; Menendez, Mariano E. MD

Clinical Orthopaedics and Related Research: July 2019 - Volume 477 - Issue 7 - p 1727-1735
doi: 10.1097/CORR.0000000000000727

Introduction

The volume of scientific publications in orthopaedic surgery is steadily increasing [4, 7, 12, 16, 19]. This could be a result of improved instrumentation and implant technology [4], though it seems as likely to be a function of increasing demand on researchers to publish [10, 21]. The increased number of published studies may or may not increase useful knowledge [16], and might limit exposure to potentially impactful work [21]. In general, medical journals can be divided into three categories: subscription-model, open-access, and hybrid (in which the author can choose whether to pay an article-processing charge and publish open-access, or publish the article behind a paywall as part of the subscription-model portion of the journal). To cover the costs of publishing [23], open-access shifts the payment from the reader or institution (in the form of subscription fees) to the author or a funding agency (in the form of article-processing charges). In an open-access journal, articles are free to read in full text; there is no barrier to access other than access to the Internet itself [5]. Variations in open-access include immediate versus delayed access and read-only access versus access with varying reuse rights. With gold open-access, articles are made open-access by the publisher; with green open-access, authors can legally upload papers to institutional or other repositories, typically with certain restrictions [5]. By contrast, subscription-model journals require readers or institutions to pay a per-article or per-journal fee to access full-text publications. In a sense, hybrid journals are subscription-model journals with an additional option: authors can make their publication open-access by paying an article-processing charge, while other articles in the same journal remain behind the subscription paywall. Many traditionally subscription-model journals have transitioned to hybrid over the years [5].

One study of hybrid journals and hybrid articles found that 73% of the journals of the five largest publishers (Elsevier, Springer Nature Group, Wiley-Blackwell, Taylor & Francis, and Sage) were hybrid, and that the average number of hybrid articles per journal grew over the period 2009 to 2016 [5]. A benefit of open-access publishing is that it can disseminate research results more rapidly and widely [20]. One might theorize that articles published open-access are cited more often because they are freely available to more readers, including individuals who cannot afford a subscription to an expensive journal. Then again, authors have to pay for open-access publishing, and given that a great deal of research is unfunded, many authors cannot or will not pay [17]. In addition, there has been a great increase in the number of predatory open-access journals that charge publication fees but do not provide basic editorial and publishing services [18]. Articles are published quickly and easily, but by essentially bypassing peer review, these journals risk publishing lower-quality studies that may promote misleading or untrue information. The number of citations, and the rate at which an article is cited by other authors, is one measure of the academic importance of an article [4, 7, 13, 16, 19, 21]. With the increasing number of publications in orthopaedic surgery, we wondered whether articles published in the various journal types performed differently in terms of citations.

Journals can be identified as open-access or not in repositories or on the journal website. However, it is more difficult to determine whether an article in a hybrid journal was published open-access or behind a paywall, since journals index articles in various ways, and manually looking up the publication type of every individual article is a tedious task. Therefore, we tested differences between articles published in subscription-model versus open-access journals, and also compared these against hybrid journals. Specifically, we assessed the yearly number and proportion of poorly cited articles published in orthopaedic journals and compared the proportion of poorly cited articles in subscription-model journals with the proportions in open-access and hybrid journals.

Materials and Methods

Study Design

We identified all original articles published in orthopaedic peer-reviewed journals that were active from 2002 to 2012 and indexed in the Scopus® (Elsevier BV, Amsterdam, The Netherlands) citation database. We chose 2012 as the final cutoff year so that articles published in that year would have at least 5 years to accrue citations. Scopus is the largest database available for citation analysis and contains more than 64 million records (articles, reviews, editorials, letters, conference abstracts, and articles in press) from more than 21,500 peer-reviewed journals [9]. Scopus captures the citations of any article by other records within the Scopus database and provides yearly citation counts for each article. SCImago is a third-party provider that categorizes journals into fields, for example, “Surgery” and “Traumatology”, based on data obtained from Scopus [22]. The SCImago Journal & Country Rank is a publicly available portal that includes the journal and country scientific indicators developed from the information contained in the Scopus database [22]. We used the search terms “Orthopedics” and “Sports Medicine” to retrieve journals for analysis. We assessed all journals with the most recent available CiteScore (2016), resulting in 228 journals (see Appendix, Supplemental Digital Content 1, https://links.lww.com/CORR/A173) and 135,029 articles. We included all original articles published in all journals. Two researchers independently collected the data (TTHT, ILV), and then one researcher (JTPK) and the senior author (MEM) checked the work; the group reached a consensus when there was a discrepancy.

Measurements

For each journal we recorded: (1) the type of access (subscription-model, open-access, or hybrid); (2) the most recent available CiteScore (2016) [8]; and (3) the number of well- and poorly cited articles per year, calculated from the date of publication until data collection. We defined poorly cited articles as those with five or fewer citations after publication. For example, an article published in 2002 that had reached more than five citations by the time of data collection in 2018 was classified as well-cited; the same rule applied to an article published in 2007, so an article published in 2002 had more time to become well-cited. However, because citations of an article typically peak within the first 2 to 4 years after publication, we did not use a specific citation window [3, 21]. (4) We manually determined whether each journal itself was poorly cited in a given year, defining a poorly cited journal as one in which more than 75% of the journal's content that year was poorly cited [21]. For example, if a journal had 100 cited articles in 2002, of which 80 (80%; that is, more than 75%) were poorly cited over the following years, it was classified as a poorly cited journal for 2002; if the same journal had 100 cited articles in 2007, of which 40 (40%; that is, less than 75%) were poorly cited over the following years, it was classified as a well-cited journal for 2007. (5) We compared the proportion of poorly cited articles in subscription-model journals with that in open-access journals, and additionally with that in hybrid journals. However, because articles in hybrid journals may or may not be published open-access, and we did not have these per-article data, the latter comparison is less reliable.
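The two classification rules above (an article is poorly cited at five or fewer citations; a journal-year is poorly cited when more than 75% of its content is) can be sketched in a few lines. This is an illustrative reimplementation under the thresholds stated in the text, not the authors' code; the plain lists of citation counts are a hypothetical data structure:

```python
# Thresholds taken from the Measurements section.
POORLY_CITED_MAX = 5      # an article with <= 5 citations is "poorly cited"
JOURNAL_THRESHOLD = 0.75  # > 75% poorly cited content => "poorly cited journal"


def is_poorly_cited_article(citations: int) -> bool:
    """An article is poorly cited if it accrued five or fewer citations."""
    return citations <= POORLY_CITED_MAX


def is_poorly_cited_journal(citation_counts: list[int]) -> bool:
    """A journal-year is poorly cited if more than 75% of its cited
    articles that year are themselves poorly cited."""
    if not citation_counts:
        return False
    poorly = sum(is_poorly_cited_article(c) for c in citation_counts)
    return poorly / len(citation_counts) > JOURNAL_THRESHOLD
```

Applying these functions reproduces the worked examples in the text: a journal with 80 of 100 articles poorly cited (80%) is classified as poorly cited, while one with 40 of 100 (40%) is classified as well-cited.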
We excluded journals that were not active in our time period and those that only contained systematic reviews, resulting in 204 (89%) journals for analyses, of which 142 (70%) were hybrid, 48 (24%) open-access, and 14 (6.9%) subscription-model journals (Table 1).

Table 1. Journal and article characteristics, 2002-2012

We included documents indexed as “articles” and excluded documents indexed as “reviews” and other noncitable documents, such as editorials, letters, conference abstracts, and articles in press, which are normally excluded from citation analysis [21]. For every year in our range (2002-2012), we report the number of article citations per journal access type (Table 2).

Table 2. Yearly article and journal citations

Statistical Analysis

Histograms and Shapiro-Wilk tests for normality showed a non-normal data distribution. Continuous variables are presented as median and interquartile range (IQR); discrete data are presented as number and percentage. We used Mann-Whitney U tests to compare yearly proportions of poorly cited articles in subscription-model versus open-access journals, and Kruskal-Wallis tests to compare these with hybrid journals. We intended to perform multivariable Poisson regression to test factors (such as subscription-model or open-access status) independently associated with the number of poorly cited articles. However, all the models we tried were unstable and unreliable, so we do not report them. This may be because the data did not follow a Poisson distribution and/or because we had only 48 open-access and 14 subscription-model journals (underpowered). Because we used all available data within our year range, we did not perform an a priori power analysis.
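As a rough sketch, the tests named above map directly onto SciPy functions. The yearly proportions below are made-up illustrative values, not the study's data:

```python
# Illustrative sketch of the statistical comparisons described above,
# using hypothetical yearly proportions of poorly cited articles.
from scipy.stats import kruskal, mannwhitneyu, shapiro

subscription = [0.57, 0.60, 0.65, 0.71]  # hypothetical yearly proportions
open_access = [0.61, 0.45, 0.37, 0.61]
hybrid = [0.20, 0.25, 0.30, 0.33]

# Normality check (weak with samples this small, as in any short year range)
_, p_norm = shapiro(subscription)

# Two-group comparison: subscription-model vs. open-access journals
_, p_mwu = mannwhitneyu(subscription, open_access, alternative="two-sided")

# Three-group comparison adding hybrid journals
_, p_kw = kruskal(subscription, open_access, hybrid)

print(f"Shapiro-Wilk p = {p_norm:.3f}, "
      f"Mann-Whitney U p = {p_mwu:.3f}, "
      f"Kruskal-Wallis p = {p_kw:.3f}")
```

A nonparametric design like this compares distributions of yearly proportions rather than assuming normality, which matches the Shapiro-Wilk result reported in the text.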

Source of Funding

No external funds were received in support of this investigation.

Results

Proportion of Poorly Cited Articles and Journals in Orthopaedics

Five years or more after publication, 48,133 of 135,029 (36%) total articles published were classified as poorly cited (Table 1). In the period from 2002 to 2012, the total number and proportion of poorly cited articles increased over the years, from 2121 of 7860 (27%) in 2002 to 6927 of 16,282 (43%) in 2012 (Fig. 1). The number and proportion of poorly cited articles in subscription-model journals increased from 226 of 395 (57%) in 2002 to 411 of 578 (71%) in 2012 (Fig. 2). The number and proportion of poorly cited articles in open-access journals decreased from 264 of 434 (61%) in 2002 to 296 of 801 (37%) in 2006, and then increased again to 1387 of 2259 (61%) in 2012 (Fig. 2). The number and proportion of poorly cited journals increased from 33 of 122 (27%) in 2002 to 69 of 204 (34%) in 2012 (Fig. 3). Finally, the number and proportion of poorly cited subscription-model and hybrid journals increased over the years, while for open-access journals the number increased but the proportion decreased (Fig. 4).

Fig. 1. On the left, the bars indicate the total number of poorly cited (blue) and well-cited (red) articles of all access types combined per year. On the right, the lines indicate the total proportion of poorly cited (blue) and well-cited (red) articles per year.

Fig. 2. On the left, the bars indicate the total number of poorly cited articles in hybrid journals (blue), open-access journals (red), and subscription-model journals (green) per year. On the right, the lines indicate the relative proportion of poorly cited articles in hybrid journals (blue), open-access journals (red), and subscription-model journals (green) per year.

Fig. 3. On the left, the bars indicate the total number of poorly cited (blue) and well-cited (red) journals of all access types combined per year. On the right, the lines indicate the total proportion of poorly cited (blue) and well-cited (red) journals per year.

Fig. 4. On the left, the bars indicate the total number of poorly cited journals for hybrid journals (blue), open-access journals (red), and subscription-model journals (green) per year. On the right, the lines indicate the relative proportion of poorly cited journals for hybrid journals (blue), open-access journals (red), and subscription-model journals (green) per year.

Comparing Poorly Cited Articles in Subscription-model Versus Open-access Journals

When we compared yearly proportions of poorly cited articles in subscription-model versus open-access journals using Mann-Whitney U tests, we found a difference only in 2012, with a higher proportion of poorly cited articles in subscription-model journals (median proportion of poorly cited articles: open-access, 0.61 [IQR, 0.38-0.96]; subscription-model, 0.92 [IQR, 0.54-1.0]; p = 0.049; Table 3).

Table 3. Proportions of poorly cited articles for open-access and subscription-model journals

Additional comparisons of poorly cited articles across all three journal access types showed lower proportions of poorly cited articles in hybrid journals for each year, with the lowest proportion found in 2002 (0.20; IQR, 0.09-0.67; p = 0.003; Table 3).

Discussion

One measure of an article’s impact is the number of citations it receives after publication. The increasing volume of scientific publications within orthopaedic surgery [4, 7, 12, 16, 19] does not in itself increase useful knowledge [16]. Articles that are harder to find (perhaps including those published behind paywalls) might limit exposure to potentially impactful work [21]. We searched for the number and proportion of well- and poorly cited articles in orthopaedic journals and found that between 2002 and 2012, there was no overall difference in the proportion of poorly cited articles between subscription-model and open-access journals, although both proportions were higher than those of hybrid journals. While the total number of poorly cited articles grew continuously, the proportion of such articles steadily increased in subscription-model journals, varied over the years in open-access journals, and was lowest in hybrid journals.

We acknowledge several limitations to our study. First, and most important, we compared article citations at the journal level and did not perform a per-article analysis. This matters especially because most journals were hybrid (n = 142 of 204; 70%). We felt it was impractical to review the 135,029 articles manually; our impression is that most hybrid journals contain fewer than 10% open-access articles [11]. Still, it is possible that a per-article analysis would have produced different findings. Also, the comparisons between subscription-model and open-access journals are based on a small sample of 62 journals (48 open-access and 14 subscription-model) and might differ if more journals were included (we used all those available in Scopus). We believe our results indicate no difference in the proportion of poorly cited articles between subscription-model and open-access journals, apart from 2012 (a borderline finding). We also tried to create Poisson regression models to assess factors independently associated with the proportion of poorly cited articles, but we could only compare subscription-model and open-access journals, and the sample was too small for a stable model. Second, the rate at which an article is quoted or referred to by other authors is the most common measure of article impact [7], but citation counts do not capture how often an article is viewed, read, discussed, or integrated into practice, or how it influences future research [12, 24]. There are alternative approaches to measuring an article’s impact, such as the willingness of readers to pay for results to use in their own research or to apply in clinical practice [15], or how often an article is referenced in mainstream media or public-policy documents (Altmetrics, Altmetric LLP, London, United Kingdom) [2].
Third, we selected documents indexed in Scopus as “articles” to identify original articles and excluded reviews, editorials, letters, conference abstracts, and articles in press. It would have been more precise to review the articles individually but, as stated earlier, the volume of included articles made this impractical. Fourth, we assessed the total number of cited articles and the number of poorly cited articles for each journal for every year in our range, without using a specific citation window. Scopus does not offer the option to search all articles within a specific window; it only allows setting a specific period for each article separately. Obtaining citation counts for a specific period would therefore have required going through each article individually and changing the window setting, which again was impractical given the number of articles included. Fifth, Scopus could only tell us whether or not a journal was open-access; it provided no information on hybrid status. We therefore manually looked up each journal’s access type on its official website to determine whether it was a subscription-model, open-access, or hybrid journal. Finally, we did not account for self-citations (authors citing their own work); excluding these would likely have increased the percentage of poorly cited articles. One study of the best-cited articles of the European Journal of Orthopaedic Surgery and Traumatology found that self-citations accounted for 9.1% of the total citation count [19].

More than a third of articles were poorly cited, which is in line with previous studies [1, 10, 21]. One study of cardiovascular research found that almost 50% of all articles were poorly cited, of which roughly 16% had no citations at all [21]. Studies in the early 1990s showed rates of up to 55% of poorly cited articles in the fields of social and general science [10]. A review of different journal specialties that studied academic relevance found that 30% of initial clinical experience articles had never been cited [1]; the journals with the highest proportions of never-cited articles specialized in ophthalmology (47%), urology (40%), and cardiovascular medicine (36%) [1]. The continued expansion of orthopaedic journals and articles might be explained by increased pressure on researchers to publish [4]; the increased availability, cost effectiveness, and ease of use of large-scale databases such as the Nationwide Inpatient Sample (NIS) and the National Surgical Quality Improvement Program (NSQIP); the ease of electronic submission; and likely many other factors [6]. There is an increase in the quantity but not necessarily the quality of published studies: the increase in the number of orthopaedic articles is accompanied by an increase in poorly cited articles. However, poorly cited work is not necessarily wasted science. A Nature article on science that has never been cited noted that many citations are missed because of referencing errors, or are made in journals not known to databases [23]. We also do not know how often papers are downloaded, read, and/or have motivated new studies without being cited [23].

The proportion of poorly cited articles in subscription-model journals gradually increased, with the difference reaching significance in 2012, whereas the proportion of poorly cited articles in open-access journals varied over the years. This study was a first step toward assessing the number and proportion of poorly cited articles in orthopaedic journals; understanding why some articles are cited more than others is a next step. Variables that could help explain the variation in citation rates include the number of authors, citations, references, participating institutions, or pages, as well as sample size, the statistics used, and the conditions studied. A study of such characteristics among articles published in the Lancet found, among other things, more citations for articles with three to five times the median number of authors per article and with a 50% to 600% greater-than-median number of references per article, and that medical themes and diagnoses of interest to a broad audience (such as large topics like breast cancer or coronary circulation) also had an impact [14]. It is possible that the articles we classified as poorly cited lacked some of these characteristics.

We found no difference in the likelihood that an article would be cited based on whether it was published in a subscription-model or an open-access journal. An article-by-article analysis might provide more insight into citation rates for articles published in hybrid journals. Because there are many articles per journal, one could randomly sample far fewer journals and articles than the 135,029 articles we examined. A future study might also compare open-access and paywalled articles on similar topics published in the same journal within a few years of each other to see whether citation rates or Altmetrics differ. It may also be worthwhile to investigate the characteristics of poorly cited articles, so that researchers and editorial staff might understand which topics are more impactful and determine whether important work is underappreciated.

Acknowledgments

We thank Imelda L. Vetter for her work and guidance using Scopus®.

References

1. Ahmed AT, Rezek I, McDonald JS, Kallmes DF. 'Initial Clinical Experience' articles are poorly cited and negatively affect the impact factor of the publishing journal: a review. JRSM Short Rep. 2013;4:21.
2. Altmetric. What are Altmetrics? Available at: https://www.altmetric.com/about-altmetrics/what-are-altmetrics/. Accessed January 9, 2019.
3. Amin M, Mabe MA. Impact factors: use and abuse. Medicina (B Aires). 2003;63:347-354.
4. Bayley M, Brooks F, Tong A, Hariharan K. The 100 most cited papers in foot and ankle surgery. Foot (Edinb). 2014;24:11-16.
5. Bjork BC. Growth of hybrid open access, 2009-2016. PeerJ. 2017;5:e3878.
6. Bohl DD, Russo GS, Basques BA, Golinvaux NS, Fu MC, Long WD 3rd, Grauer JN. Variations in data collection methods between national databases affect study results: a comparison of the nationwide inpatient sample and national surgical quality improvement program databases for lumbar spine fusion procedures. J Bone Joint Surg Am. 2014;96:e193.
7. Cassar Gheiti AJ, Downey RE, Byrne DP, Molony DC, Mulhall KJ. The 25 most cited articles in arthroscopic orthopaedic surgery. Arthroscopy. 2012;28:548-564.
8. Elsevier. CiteScore™ Metrics. Available at: https://www.scopus.com/sources. Accessed January 9, 2019.
9. Elsevier. Scopus®. Available at: https://www.elsevier.com/solutions/scopus. Accessed January 9, 2019.
10. Hamilton DP. Publishing by – and for? – the numbers. Science. 1990;250:1331-1332.
11. Jahn N. About the Hybrid OA Dashboard. 2019. Available at: https://subugoe.github.io/hybrid_oa_dashboard/about.html. Accessed January 12, 2019.
12. Jia Z, Ding F, Wu Y, He Q, Ruan D. The 50 Most-cited Articles in Orthopaedic Surgery From Mainland China. Clin Orthop Relat Res. 2015;473:2423-2430.
13. Kavanagh RG, Kelly JC, Kelly PM, Moore DP. The 100 classic papers of pediatric orthopaedic surgery: a bibliometric analysis. J Bone Joint Surg Am. 2013;95:e134.
14. Kostoff RN. The difference between highly and poorly cited medical articles in the journal Lancet. Scientometrics. 2007;72:513-520.
15. Krumholz HM. How do we know the value of our research? Circ Cardiovasc Qual Outcomes. 2013;6:371-372.
16. Lefaivre KA, Shadgan B, O'Brien PJ. 100 most cited articles in orthopaedic surgery. Clin Orthop Relat Res. 2011;469:1487-1497.
17. Leopold SS. Editorial: Paying to publish--what is open access and why is it important? Clin Orthop Relat Res. 2014;472:1665-1666.
18. Leopold SS. Editorial: CORR® Thanks its Peer Reviewers. Clin Orthop Relat Res. 2016;474:2551-2552.
19. Mavrogenis AF, Megaloikonomos PD, Mauffrey C, Scarlat MM, Simon P, Hasegawa K, Fokter SK, Kehr P. The best cited articles of the European Journal of Orthopaedic Surgery and Traumatology (EJOST): a bibliometric analysis. Eur J Orthop Surg Traumatol. 2018;28:533-544.
20. openaccess.nl. What Is Open Access? Pros and Cons. 2019. Available at: https://www.openaccess.nl/en/what-is-open-access/pros-and-cons. Accessed February 25, 2019.
21. Ranasinghe I, Shojaee A, Bikdeli B, Gupta A, Chen R, Ross JS, Masoudi FA, Spertus JA, Nallamothu BK, Krumholz HM. Poorly cited articles in peer-reviewed cardiovascular journals from 1997 to 2007: analysis of 5-year citation rates. Circulation. 2015;131:1755-1762.
22. SCImago. SJR — SCImago Journal & Country Rank. Available at: http://www.scimagojr.com. Accessed January 9, 2019.
23. Society for Scholarly Publishing. Focusing on Value — 102 Things Journal Publishers Do (2018 Update). 2018. Available at: https://scholarlykitchen.sspnet.org/2018/02/06/focusing-value-102-things-journal-publishers-2018-update/. Accessed January 9, 2019.
24. Zhang Y, Kou J, Zhang XG, Zhang L, Liu SW, Cao XY, Wang YD, Wei RB, Cai GY, Chen XM. The evolution of academic performance in nine subspecialties of internal medicine: an analysis of journal citation reports from 1998 to 2010. PLoS One. 2012;7:e48290.


© 2019 by the Association of Bone and Joint Surgeons