Scientific Integrity and the Reproducibility Crisis

Twa, Michael D., OD, PhD

Optometry and Vision Science: January 2019 - Volume 96 - Issue 1 - p 1–2
doi: 10.1097/OPX.0000000000001339

Editor in Chief, Birmingham, AL

“We have a habit in writing articles published in scientific journals to make the work as finished as possible, to cover all the tracks, to not worry about the blind alleys or to describe how you had the wrong idea first, and so on. So there isn't any place to publish, in a dignified manner, what you actually did in order to get to do the work.” –Richard Feynman1

The word crisis is often overused these days. Setting aside the global financial crisis of 2008, when the title words of the Australian Broadcasting Corporation's news stories were analyzed over a 15-year span (2003 to 2017),2 a crisis appeared on 48 of every 100 days, or just about every other day. Questions about the reproducibility of science are not new, but they have gained attention in the past several years. In 2011, the Open Science Framework launched a project titled Estimating the Reproducibility of Psychological Science.3 The project enlisted 250 collaborators whose task was to replicate research results from 100 studies published in 2008. The results were highly discouraging: only 36% of the studies were reproducible. Estimates are similarly discouraging for results from cancer and other biomedical research.4 Nevertheless, is it correct to say that the current state of science is in shambles, and that the community of scientists, their research institutions, funding agencies, and publishers are failing in their pursuit of knowledge and truth? It turns out it is not quite so simple.

The issue of reproducibility in science is not new. In his 1974 commencement address at Caltech, Richard Feynman, physicist and Nobel laureate, spoke on the challenges of research reproducibility and scientific integrity (ironically, also using examples from psychology).5,6 Moreover, Feynman addressed how to bear the responsibility of being a scientist:

“We've learned from experience that the truth will come out… The first principle is that you must not fool yourself — and you are the easiest person to fool. And it's this type of integrity, this kind of care not to fool yourself, that is missing to a large extent in much of the research in cargo cult science.”

Internal bias and external pressures to succeed are powerful forces in science, creating distortions that can influence scientific results and their publication. Unintentional bias can easily lead to failures of scientific thinking. Unbiased scientists would scrutinize expected results just as carefully as any unexpected findings, but do we?

As the average age at first NIH R01 award moves ever higher, there is enormous pressure on faculty to publish findings and secure funding. It is easy to see how this pressure can shape behavior in unintended ways. Recently, CRISPR-modified human pregnancies have been grabbing news headlines.7 Because the pressure to succeed and lead in science is so palpable, claims of novelty can take precedence over reproducibility and over important ethical considerations as well.

A hypercompetitive funding environment, combined with the pressure for academic career advancement, can accentuate bias and get in the way of the healthy skepticism required to produce good, reproducible science.

Financial conflicts of interest are a well-known source of bias in clinical biomedical research. With multimillion-dollar clinical trials as the required cost of market entry, there is significant pressure on drug and device manufacturers not only to make it to market but also to capture the attention of practitioners and influence their clinical decisions.

Publishers play an important role in reproducible science as well. Editors must be willing to publish reproducibility studies in addition to prioritizing novel findings. Peer review should carefully consider sources of bias and their control, e.g., masked observers, predefined (or registered) analysis protocols, replication of key experiments by the investigators, appropriate comparison or control groups (positive and negative), control for multiple comparisons, and a clear distinction between any predefined hypotheses and exploratory analyses. The goal for authors, and the expectation of reviewers, should be to provide everything needed to judge the value of a scientific contribution to the field, not a one-sided argument in support of a single perspective.

Because multiple factors contribute to the lack of reproducibility in science, no single effort will be sufficient to eliminate the problem. Optometry and Vision Science will continue to publish reproducibility studies. As always, it will be incumbent on authors to make a clear case to the reviewers, editors, and readers about the value of their specific contribution to the field and the need to reproduce previous work. Optometry and Vision Science has a history of publishing negative results, and that will continue to be an editorial priority. Nevertheless, authors must provide a compelling argument within their submission for the importance of publishing a particular negative result: even a negative result must make an important contribution. Astute readers, reviewers, and editors will critically evaluate an author's submission to determine whether the authors asked the right question, conducted the right study, evaluated the right participants (or samples), performed the proper analysis, and provided a reasonable interpretation of their results. Failure in any one of these areas could be sufficient to explain a false-negative result that would not merit publication. Optometry and Vision Science will continue to place a high priority on thorough and careful peer review to help ensure that we publish quality, reproducible science.

A number of additional recommendations outside the realm of academic publishing can help increase the reproducibility of science and deserve consideration as well. Training young scientists is an important part of improving reproducibility in science. Training that reinforces critical thinking and scientific skepticism is essential. Mentoring that encourages students and young faculty to check lab notebooks more carefully, to challenge results (both expected and unexpected), to collaborate with others inside and outside of their own lab, and to value data management and quality control can help improve reproducibility. Training investigators in research ethics, study design, and scientific integrity can help shape the culture of science, but only if this training is reinforced by practice and only if the community demands adherence to these principles.



REFERENCES

1. Feynman RP. Nobel Prize in Physics Lecture: The Development of the Space-Time View of Quantum Electrodynamics. The Nobel Foundation; 2018. Accessed December 3, 2018.
2. Kulkarni R. A Million News Headlines: News Headlines Published over a Period of 15 Years. Kaggle; 2018. Accessed December 2, 2018.
3. Reproducibility Project: Psychology. Center for Open Science; last updated October 1, 2018. Accessed December 2, 2018.
4. Ioannidis JP. An Epidemic of False Claims. Competition and Conflicts of Interest Distort Too Many Medical Findings. Sci Am 2011;304:16.
5. Feynman R. Cargo Cult Science: Some Remarks on Science, Pseudoscience, and Learning How to Not Fool Yourself. Caltech; 2018. Accessed December 3, 2018.
6. Feynman RP, Leighton R, Hutchings E, et al. "Surely You're Joking, Mr. Feynman!": Adventures of a Curious Character. New York: W.W. Norton & Company; 2018.
7. Taylor AP. Second CRISPR-Modified Pregnancy May Be Underway. LabX Media Group; 2018. Accessed December 3, 2018.
© 2019 American Academy of Optometry