Editorial: Can Journals, as Trusted Intermediaries, Cut Through the Signal-to-Noise Problem in Medical Publishing?

Leopold, Seth S. MD1

Author Information
Clinical Orthopaedics and Related Research: July 2021 - Volume 479 - Issue 7 - p 1409-1412
doi: 10.1097/CORR.0000000000001845

Two years ago, Clinical Orthopaedics and Related Research® partnered with three other leading general-interest orthopaedic journals to take a stand against the use of preprint servers in clinical research [20]. At the time, we were unsure whether we were looking at a snowball rolling down a hill or the leading edge of an avalanche.

It was a big slide.

Preprint servers are online platforms that allow authors to post full-text versions of complete manuscripts that have not yet passed peer review. Ostensibly, the purpose is to allow other scientists to comment on the work so that it can be improved prior to submission to a high-quality journal [4]. But I believe, and others have shown [3, 5], that in fact this is not an important function of preprint servers. And yet, in biomedicine—and even in orthopaedic surgery—they’re still growing by leaps and bounds [22].

Although open-access publication has its upsides, for purposes of this essay, I am going to lump publishing in open-access journals in with posting to preprint servers as potentially problematic. My reason for doing so is that both make it harder for clinicians to separate helpful research from distracting, unhelpful, and in the case of preprint servers, unvetted material. In previous editorials, I’ve highlighted some redeeming qualities of open-access publication [17, 18]; I also note that open access is a publication option here at CORR®. But from where I sit today, it’s becoming clear to me that the distortion of publication incentives inherent in fully open-access journals does not serve readers (or their patients) very well.

The questions that keep coming back to me are: What should clinicians and clinician-scientists expect from their information sources, how should they decide which ones to use, what are the masqueraders in the information ecosystem, and what is the harm if we get it wrong?

What Should Readers Expect from Information Sources?

If the problem is one of diminishing signal amidst growing noise, your information sources should help solve the problem, not exacerbate it. Clarivate™, the company that produces the Impact Factor, indexed more than 80 orthopaedic journals last year; another service, Scimago, indexed nearly 300 [29]. Did you read 300 practice-changing articles last year? If not, perhaps you’ll agree that having 300 indexed journals in one small specialty represents a signal-to-noise problem.

To borrow a term from the cryptocurrency universe, good information sources should serve as trusted intermediaries between content creators and readers. In this context, a trusted intermediary would, at minimum:

  • Screen content using a trustworthy approach—such as robust peer review—and set as its standard for dissemination not merely “likely true” but “likely true and also helpful.”
  • Ensure, through steps like plagiarism detection, reporting and management of relevant conflicts of interest [19], and close editing, that published content doesn’t manipulate readers.
  • Amplify signal and suppress noise by providing context for the content it publishes in the form of commentaries, features, interviews, and expert perspectives.
  • Encourage and foster post-publication dialogue. And,
  • Recognize that error is inevitable; that being so, any trusted intermediary in this context must be open to hearing that it has erred, correcting the error (or retracting the flawed work if its flaws are foundational), explaining its reasons for so doing, and learning from the experience.

A service that posts (nearly) anything—like a preprint server—falls far short of this standard. But merely publishing peer-reviewed material that seems generally valid without also screening that content to ascertain whether it’s likely to be useful in real-world practice also does not meet busy orthopaedic surgeons’ needs. Currently, I believe that this describes most open-access journals in our specialty. One of the world’s largest fully open-access journals, PLOS One, which has published thousands of orthopaedic papers, states that it evaluates papers based on methodologic rigor and not their “perceived significance” [27]. Busy orthopaedic surgeons don’t read journals just to find things that are true. To justify a reader’s attention, a journal’s contents need to be both true and helpful.

How Can Readers Find Good Information Sources?

It’s a jungle out there, and it’s only gotten worse for surgeons who seek answers to clinical questions. In 2020, PubMed began a pilot program that indexed content from biomedical preprint servers [35], which CORR® [20] and others [1, 2, 14] have identified as having serious problems.

There are two ways to make sure the information we use deserves our attention: by being choosy in terms of what we read each month, and by being thoughtful in our screening of results from online searches when we have a specific question, such as when making a tricky clinical decision.

When it comes to month-over-month reading, we get what we pay for. While free orthopaedic information sources abound, eight of the top 10 orthopaedic journals by 2019 Impact Factor (Clarivate™) are journals to which one can subscribe, as are all of the top 10 by h-index (a more robust metric) [9]. The better, branded journals—which remain the ones most orthopaedic readers grew up with—make their approaches to screening, review, and article selection public (CORR’s are available, of course, both in print [18] and in considerably more detail online [6]). As we spend more time immersed in the sources we follow regularly, we develop a sense of the brands we use. To what degree do they go even beyond robust review and close editing to meet readers’ needs? Do they do a good job on those other curatorial functions? Are articles accompanied by thoughtful commentary? Are the letters to the editor informative and engaging? Do they contain other sections—columns, features, interviews—that inform your professional life in meaningful ways? Perhaps most importantly, how do they handle errors when they arise?

When we search online, whether using a scholarly search engine or a quickie “lay” search, the same general principles apply. I don’t think content from branded journals should get an easy pass; we all need to read critically before we apply something we see on PubMed in the body of another human being [16]. But in my experience, articles I read in subscription journals like JBJS or The American Journal of Sports Medicine (and, I’d like to think, CORR) are much more likely to be informative and trustworthy than those from any of the many masqueraders in the environment.

What are the Masqueraders in the Information Ecosystem, and What is the Harm?

Without question, at the top of the list are preprint servers. When CORR partnered with JBJS, BJJ, and JOR in agreeing not to publish clinical research that had been posted to preprint servers [20], preprint servers sought to maintain the fig leaf that a key goal of posting research to a preprint server was to get feedback that would improve the quality of the work, and with it the paper’s chances in peer review.

I’m afraid the fig leaf has since fallen off; a recent guest post on ASAPbio shared the story of a paper the blogger submitted to a journal at the same time she posted it to a preprint server. The paper was rejected within days from the journal. The author’s reaction? “I thought I would be heartbroken to see my paper getting rejected, but I wasn’t … because the work was already available for everyone to see [on the preprint server] … Although I didn’t get a lot of direct feedback from other scientists, my Twitter exploded. People saw my paper, liked it, re-tweeted it, and I even got new followers” [25].

In those few lines, that preprint advocate provides many—though not all—of the reasons we should mistrust preprints, including:

  • They seem to represent a purposeful end-run around peer review for self-serving purposes, such that there no longer is any pretense that the “pre” in “preprint” carries meaning; the pattern of posting “preprints” in parallel or after manuscript submission is widespread [3].
  • The strength of the science doesn’t appear to factor in, only whether it’s advantageous to the scientist.
  • Posters of preprints may be indifferent to whether they receive usable scientific feedback, as long as the career-enhancing social media boost is achieved (I note that the vast majority of preprints do not receive any comments at all [12, 24]).

The blogger’s “benefits” of preprint servers seem trivial; lists of preprint “benefits” from major players in that space don’t even mention patients or whether they’ll be helped or harmed by the rapid, wide dissemination of unreviewed work [26]. What’s not trivial is the harm an uncorrected preprint can cause. Many preprint servers do not link preprints to the definitive (published) version of record, and authors are under no obligation to update a preprint to reflect the changes and corrections made in response to reviewers’ and editors’ comments if the manuscript eventually makes it into a journal. The result is that two citable versions can circulate at the same time, one with errors and overstatements and one without them; the uncorrected preprint remains freely available to a public that may not know a corrected version exists elsewhere. Shockingly, a small but non-zero percentage of researchers favor citing preprints and other pre-publication formats over the reviewed, edited, and published version of record of the same paper [32]. This preference strikes me as frankly dangerous to the patients whose care might hinge on getting it right. Beyond that, preprints are widely (mis)used as evidence in both the lay [22] and scientific press [10], are misleadingly covered as news [13], and some have even been weaponized as disinformation [7]. Only the most egregiously incorrect ones are retracted; of nearly 15,000 COVID-19 preprints on the two leading biomedical preprint servers [23], only about a dozen have been retracted or withdrawn [28].

And while I wouldn’t classify open-access journals as masqueraders in general, the number of fully open-access journals has increased dramatically in recent years, and the amount of noise they’ve added to the signal has become deafening. They’ve also fallen prey to innumerable publishing “sting” operations that demonstrate how slipshod review processes can become when a journal is paid by the piece to publish [17]. That same set of incentives has given birth to the evil stepbrother of open-access publication: the predatory journal.

Predatory journals are those that charge open-access publication fees but do not provide the basic publishing and editorial function services one should associate with legitimate journals. Somewhat amazingly, there may be more than 10,000 of them across the scholarly publishing landscape [11]. These, unfortunately, do more than add noise to obscure the ambient signal. They engage in identity theft [34]. They put scientists’ names on their “editorial boards” without permission [15]. They publish misleading and useless content while seeking to deceive the unsuspecting (and usually young and inexperienced) researcher into thinking that they are a reputable journal [31], and in extreme cases, even create mock websites that mirror those of known journals [30].

There are checklists one can use to see whether a journal is likely to engage in predatory practices [33]. Some of them have dozens of items to consider [8].

My checklist is simpler: If you haven’t read an article in the journal you’re thinking of sending your work to, don’t send it there.

And if you’re thinking about using a preprint server as an end-run around peer review because it may benefit your career: Don’t. Thoughtful review processes improve the quality and clarity of research and keep serious errors and overstatements from harming patients. I understand the temptation to get one’s work out quickly, particularly for early career researchers, but our first obligation is to those whom we serve, not to our own careers.

The public’s trust is hard to keep and easy to lose.


1. Anderson K. 5 ways preprint servers could improve. Available at Accessed March 15, 2021.
2. Anderson K. MedRxiv’s postings follow submission. Available at Accessed March 15, 2021.
3. Anderson KR. bioRxiv: trends and analysis of five years of preprints. Learned Publishing. 2020;33:104-109.
4. bioRxiv: The Preprint Server for Biology. About bioRxiv. Available at Accessed March 15, 2021.
5. Brainard J. Do preprints improve with peer review? A little, one study suggests. Science. March 26, 2020. Available at Accessed March 19, 2021.
6. Clinical Orthopaedics and Related Research. Other information & FAQs. Available at Accessed March 15, 2021.
7. Donovan J, Nilsen J. Cloaked science: the Yan reports. Media manipulation casebook. Available at Accessed March 15, 2021.
8. Duke University Medical Center Library & Archives. Be iNFORMEd: Checklist. Available at Accessed March 15, 2021.
9. Google Scholar. Top publications, orthopaedic medicine & surgery. Available at Accessed March 19, 2021.
10. Gewin V. Junior researchers are losing out by ghostwriting peer reviews. Available at Accessed March 15, 2021.
11. Grudniewicz A, Moher D, Cobey KD, et al. Predatory journals: no definition, no defence. Nature. 2019;576:210-212.
12. Inglis JR, Sever R. bioRxiv: a progress report. 2016. Available at Accessed March 15, 2021.
13. Khamsi R. Problems with preprints: covering rough-draft manuscripts responsibly. The OPENNotebook—The Story Behind the Best Science Stories. June 1, 2020. Available at Accessed March 19, 2021.
14. Kent D. Caveat emptor: preprint servers in biomedical science. University Affairs. December 2, 2020. Available at Accessed March 15, 2021.
15. Laine C, Winker MA. Identifying predatory or pseudo-journals. Biochemia Medica. 2017;27:285-291.
16. Leopold SS. Editorial: Getting the most from what you read in orthopaedic journals. Clin Orthop Relat Res. 2017;475:1757-1761.
17. Leopold SS. Editorial: Is open access for you? It depends who you are. Clin Orthop Relat Res. 2020;478:195-199.
18. Leopold SS. Editorial: Peer review and the editorial process—a look behind the curtain. Clin Orthop Relat Res. 2015;473:1-3.
19. Leopold SS, Beadling L, Dobbs MB, et al. Editorial: Active management of financial conflicts of interest on the editorial board of CORR. Clin Orthop Relat Res. 2013;471:3393-3394.
20. Leopold SS, Haddad FS, Sandell LJ, Swiontkowski M. Editorial: Clinical Orthopaedics and Related Research, The Bone & Joint Journal, The Journal of Orthopaedic Research, and The Journal of Bone and Joint Surgery will not accept clinical research manuscripts previously posted to preprint servers. Clin Orthop Relat Res. 2019;477:1-4.
21. Los Angeles Times Editorial Board. How coronavirus is revealing the problems with ‘fast science’. Los Angeles Times. May 26, 2020. Available at Accessed March 15, 2021.
22. Malički M, Jerončić A, ter Riet G, et al. Preprint servers’ policies, submission requirements, and transparency in reporting and research integrity recommendations. JAMA. 2020;324:1901-1903.
23. medRxiv: The Preprint Server for Health Sciences. COVID-19 SARS-CoV-2 preprints from medRxiv and bioRxiv. Available at Accessed March 15, 2021.
24. Narock T, Goldstein EB. Quantifying the growth of preprint services hosted by the Center for Open Science. Publications. 2019;7:44.
25. Pereira SG. Why preprints? That is easy to answer! The real question is: why not? ASAPbio. March 3, 2021. Available at Accessed March 15, 2021.
26. PLOS. Preprints. Available at Accessed March 15, 2021.
27. PLOS. Publish. Available at Accessed May 5, 2021.
28. Retraction Watch. Retracted coronavirus (COVID-19) papers. Available at Accessed March 15, 2021.
29. Scimago Journal & Country Rank. Available at Accessed March 15, 2021.
30. Shamseer L. “Predatory” journals: an evidence-based approach to characterizing them and considering where research ought to be published. PhD thesis, 2021. Available at Accessed March 15, 2021.
31. Sharma H, Verma S. Predatory journals: the rise of worthless biomedical science. J Postgrad Med. 2018;64:226-231.
32. Springer Nature. Exploring researcher preference for the version of record. Available at Accessed May 5, 2021.
33. Tulane University Libraries. Predatory publishers: a guide. Available at Accessed March 15, 2021.
34. Umlauf MG, Mochizuki Y. Predatory publishing and cybercrime targeting academics. Int J Nurs Pract. 2018;24:e12656.
35. US National Library of Medicine/National Institutes of Health. NIH preprint pilot. Available at Accessed March 15, 2021.
© 2021 by the Association of Bone and Joint Surgeons