Publishing in scientific journals is the principal means of communicating and assimilating new information into the body of scientific knowledge. Ensuring the quality of the scientific record is a vital task that has traditionally been the responsibility of journal editors and anonymous peer reviewers; however, this model is now being challenged by the rise of postpublication review, fueled by the internet culture of open criticism. A host of postpublication review websites and scientific blogs have emerged over recent years, moving journal club critiques into the public domain.1 These platforms support an ongoing discussion of the validity and importance of published work, which is ultimately a more representative method of assessing impact than editorial decision-making. Unsurprisingly, postpublication review has proven more effective at detecting instances of nonreproducibility and fraud than traditional peer review.2 Although some aspects of postpublication peer review remain controversial, especially the lack of accountability of anonymous postpublication reviewers, this approach is being embraced by biomedical scientists. Consequently, journals, such as Transplantation, will be forced to consider how prepublication and postpublication review processes can best complement one another. This article collates online resources relevant to the readership of Transplantation that offer guidance on writing prepublication and postpublication peer reviews and on maximizing the impact of that effort.
Prepublication peer review is widely accepted as an indispensable part of scientific publishing, but its actual purpose and value are somewhat obscure. In a general sense, prepublication peer review is meant to evaluate the “acceptability” of a manuscript for publication in a given journal, but peer review processes and definitions of acceptability vary considerably between journals, even within the field of transplantation. Prepublication peer review has been variously defended as a mechanism for filtering work according to its importance or specialist relevance, as a means of assessing its technical quality or logical coherence, as a safeguard against scientific misconduct, as certification of scientific legitimacy, and as a process for improving manuscripts. Regrettably, prepublication peer review does not perform well in any of these assignments; indeed, many commentators now regard prepublication review as so unreliable, uncertain of purpose, and biased that it should be abandoned altogether.3
[A] DOI: 10.1038/nature04990
Transplantation typically solicits the opinions of several reviewers, and every manuscript is assessed by 4 editors of different grades; nevertheless, conflicting decisions from expert reviewers are not uncommon. Agreement of 6 reviewers is required for the statistical reliability of their decisions, yet several studies have shown that concurrence of opinion between prepublication peer reviewers is little better than random [A]. A single experienced editor can be as effective as peer review in determining the originality, reliability, importance, and suitability of articles in the medical sciences. Whether educating reviewers improves the quality and concordance of their reports is questionable; nevertheless, familiarity with guidelines issued by the Committee on Publication Ethics [B] and training material published by the BMJ [C] might help reviewers to understand what editors and authors seek in a good commentary.
An alarming number of preclinical studies are later found to be irreproducible.4 Several recent studies estimate the prevalence of irreproducibility at 51% to 89% of published papers, a problem that is not confined to low-ranking journals.5 Freedman and colleagues estimate the cost of irreproducible preclinical research in the USA at $28 billion/year. Clearly, these statistics imply that prepublication peer review is unreliable at detecting errors in scientific manuscripts. An often-cited experiment conducted by the BMJ aimed to determine whether peer review was an effective way of detecting errors in study design, data analysis and interpretation6: 221 regular reviewers were asked to evaluate a manuscript with 8 deliberate flaws; on average, each reviewer identified only 2 of those weaknesses. If prepublication peer review is intended to protect against the publication of such studies, then, given the high prior probability that a preclinical study is irreproducible, reviewers must approach new manuscripts with a very high degree of skepticism. In this regard, Transplantation and Transplantation Direct support reviewers' requests for raw data sets7 and adherence to reporting standards8 when this could help to identify mistakes in an article.
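The Bayesian logic behind this call for skepticism can be made concrete with a back-of-the-envelope calculation. The numbers below are illustrative assumptions, not figures from the article: a 70% prior probability that a submitted preclinical study is irreproducible (within the 51% to 89% range cited above), a reviewer sensitivity of 25% (roughly 2 of 8 flaws caught in the BMJ experiment), and the simplifying assumption that sound studies always pass review.

```python
# Illustrative sketch (assumed numbers, not data from the article):
# how little a low-sensitivity review shifts the probability that a
# manuscript passing review is nevertheless irreproducible.

def posterior_irreproducible(prior: float, sensitivity: float) -> float:
    """P(irreproducible | passes review), assuming sound work always passes."""
    p_pass_given_irrep = 1.0 - sensitivity          # flawed study slips through
    p_pass = (1.0 - prior) + prior * p_pass_given_irrep
    return prior * p_pass_given_irrep / p_pass

# Prior 0.70, sensitivity 0.25: review lowers the risk only from 70% to ~64%.
print(round(posterior_irreproducible(0.70, 0.25), 3))  # prints 0.636
```

Under these assumptions, a review that catches only a quarter of flawed manuscripts barely moves the posterior, which is why a skeptical starting stance matters more than the review step itself.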
Peer reviewers are notoriously poor at detecting deliberate scientific fraud and other instances of scientific misconduct. On the other hand, new analytical tools to detect plagiarism [D], manipulated images,9 [E] and fabricated data sets10,11 are being used by Transplantation and Transplantation Direct to detect fraudulent work with considerable success. The editorial office tracks websites reporting scientific misconduct, such as RetractionWatch [F], and reports all instances of fraud to the Committee on Publication Ethics. The editorial office is aware of the problem of fictitious reviewers, which has led to the retraction of many papers in reputable journals, and verifies the credentials of all reviewers acting for Transplantation and Transplantation Direct.12
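A toy example in the spirit of the statistical screens for fabricated data cited above (not one of the tools actually used by the journal): terminal digits of genuine measurements tend to be roughly uniform, so a strongly skewed last-digit distribution can flag a data set for closer inspection.

```python
# Hypothetical illustration of a terminal-digit screen for fabricated data.
# Genuine measurement noise makes last digits roughly uniform; heavy
# rounding or invention (e.g., everything ending in 0 or 5) inflates the
# chi-square statistic against a uniform expectation (df = 9).

from collections import Counter

def terminal_digit_chi2(values):
    """Chi-square statistic for uniformity of terminal digits."""
    digits = [str(v).replace(".", "")[-1] for v in values]
    counts = Counter(digits)
    expected = len(digits) / 10.0
    return sum((counts.get(str(d), 0) - expected) ** 2 / expected
               for d in range(10))

# Suspicious sample: every value ends in 0 or 5.
suspect = [10.0, 12.5, 11.0, 13.5, 10.5, 12.0, 14.5, 11.5, 10.0, 13.0]
print(terminal_digit_chi2(suspect) > 16.92)  # exceeds 5% critical value: True
```

A flag like this is only a prompt for scrutiny, not proof of misconduct; legitimately rounded data can fail the test too, which is why such screens complement, rather than replace, requests for raw data sets.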
Opinion about the importance of prepublication peer review in improving manuscripts for publication is divided.13 Edward K. Geissler, Executive Editor at Transplantation Direct, emphasizes the role of reviewers in assessing whether the conclusions of a manuscript are supported by experimental evidence. Inevitably, making improvements to submitted manuscripts depends on the willingness of reviewers to provide detailed responses, so it is not a fail-safe mechanism for improving manuscripts. Moreover, similar improvements could perhaps be made before submission if authors observed reporting guidelines, asked experienced colleagues to proof-read their work, and consulted language-editing services when appropriate [G].
On all available evidence, prepublication peer review is a slow, expensive, unreliable, biased and inconsistent process. On the other hand, it is the only way for traditional journals like Transplantation to control their content. Two complementary approaches have emerged as potential solutions to the problems of traditional peer review, namely, incentivizing reviewers to return better commentaries and postpublication review. Publons.com is a new platform that recognizes peer review as a measurable form of academic output [H]. Reviewers can establish a free account at Publons, allowing them to anonymously record their reviewing activity at any journal and obtain a score. In addition, Publons supports open online publication of comments made by reviewers during the prepublication review process so that scientifically valuable exchanges are not lost. Rubriq.com is a commercial service offering double-blinded peer reviews to authors that can be used to test the quality of manuscripts prior to journal submission. By commercializing the editorial process, reviewers can be offered financial incentives, which may lead to higher standards of commentary [I]. It remains to be seen whether Publons or Rubriq will deliver their intended benefits, or whether they will create new biases, perverse incentives and fresh opportunities for scientific misconduct.
Postpublication review builds on the notion that the scientific record should be self-correcting, meaning that scientifically valuable articles are promoted, discussed, and their ideas pursued, whereas irreproducible or irrelevant information is expunged. Until recently, postpublication review has been a highly fragmented process, largely restricted to the introduction or discussion sections of subsequent papers, or to systematic reviews.14 To a limited extent, traditional journals, such as Transplantation, are able to highlight important articles in “Editor's Picks” lists or “Research Highlights” articles,15,16 as well as publishing “correspondence” on published articles. Unfortunately, these formats cannot match the speed and volume of modern biomedical research. Moreover, traditional approaches are unable to capture the substantive discussion of published work that happens at conferences, in journal clubs and through online channels. Accordingly, new ways of enabling readers to comment on published research are being explored, which can lead to postpublication enrichment of articles through ongoing expert debate.
Postpublication evaluation and discussion of research takes many forms, including solicited and unsolicited reviews published alongside articles in online journals, as well as commentaries posted on independent websites and blogs. F1000Research.com is an online journal that immediately publishes all articles it receives and then solicits open peer reviews [J]. Reviewers' critiques are published with revised and unrevised versions of manuscripts, and readers who identify themselves may also post comments. bioRxiv.org is a repository for prepublication manuscripts that allows authors to make their findings immediately available to the scientific community and to receive feedback on articles prior to journal submission [K]. Platforms for discussing research are growing in popularity, especially PubMed Commons, PubPeer.com and ResearchGate.net [L]. PubMed Commons is an NIH-supported system that allows named researchers to share opinions and information about scientific publications in a moderated forum [M]. PubPeer has achieved notoriety as an anonymous platform for announcing possible instances of irreproducibility or scientific misconduct [N]. Discussions on PubPeer have led to a string of high-profile retractions of fraudulent papers from major journals, demonstrating that scrutiny by the community can be a more rigorous test of authenticity than prepublication review. Although the negative appraisals that predominate on PubPeer and similar platforms are evolving into a useful means of detecting poor or dishonest science, they represent only one side of postpublication peer review: positive commentary is equally important if valuable articles are to be identified and promoted. In this regard, F1000Prime.com [O] and the Center for Evidence in Transplantation [P] are important resources.
“Transplantation has a large panel of expert reviewers, at least two and up to four of whom review each paper. Four editors consider those reviews and decide on whether to revise, reject or accept the papers. We also use dedicated software to detect plagiarism and duplicate publication. The current acceptance rate is 19% and reviewers tend to be kinder to papers than editors. We seldom use author suggested reviewers but do avoid reviewers the author requests us not to use. We also have specialist statistics and health economics editors when the paper requires that expertise. No system is infallible but the more eyes that see a paper heading for acceptance, the better that paper will be.”
- Jeremy Chapman, Editor-in-Chief.
In summary, prepublication peer review is an imperfect process, but traditional journals have no alternative means to control their content at the present time. Postpublication evaluation of articles by the research community is becoming an increasingly important way of discriminating between high- and low-quality studies. The challenge to traditional journals, including Transplantation, is to develop channels for open, but responsible discussion of their content.
1. Knoepfler P. Reviewing post-publication peer review. Trends Genet. 2015;31:221–223.
2. Galbraith DW. Redrawing the frontiers in the age of post-publication review. Front Genet. 2015;6:198.
3. Smith R. Peer review: a flawed process at the heart of science and journals. J R Soc Med. 2006;99:178–182.
4. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712.
5. Freedman LP, Cockburn IM, Simcoe TS. The economics of reproducibility in preclinical research. PLoS Biol. 2015;13:e1002165.
6. Godlee F, Gale CR, Martyn CN. Effect on the quality of peer review of blinding reviewers and asking them to sign their reports: a randomized controlled trial. JAMA. 1998;280:237–240.
7. Hutchinson JA. Data sharing. Transplantation. 2015;99:649–650.
8. Hutchinson JA. Minimum information standards. Transplantation. 2015;99:464–465.
9. Pearson H. Forensic software traces tweaks to images. Nature. 2006;439:520–521.
10. Gadbury GL, Allison DB. Inappropriate fiddling with statistical analyses to obtain a desirable p-value: tests to detect its presence in published literature. PLoS One. 2012;7:e46363.
11. Head ML, Holman L, Lanfear R, et al. The extent and consequences of p-hacking in science. PLoS Biol. 2015;13:e1002106.
12. Ferguson C, Marcus A, Oransky I. Publishing: the peer-review scam. Nature. 2014;515:480–482.
13. Ploegh H. End the wasteful tyranny of reviewer experiments. Nature. 2011;472:391.
14. Bastian H. A stronger post-publication culture is needed for better science. PLoS Med. 2014;11:e1001772.
15. Muller E. Research highlights. Transplantation. 2015;99:1305.
16. Issa F. Research highlights. Transplantation. 2015;99:1099–1100.