
We Take It Back!
How Journal Editors View Retractions

ARTICLE IN BRIEF

In the wake of The Lancet's retraction of an autism study 12 years after it appeared in the journal, editors of neurology journals discuss their policies on retracting papers.

In early February, the venerable British medical journal The Lancet announced that it would retract a 1998 paper that linked autism to the measles-mumps-rubella (MMR) vaccine. The paper, whose lead author was British gastroenterologist Andrew Wakefield, MD, was based on research on 12 children (guests at Dr. Wakefield's son's birthday party) and had been the subject of scientific criticism and controversy for years. In 2004, 10 of the paper's 13 authors publicly disavowed its conclusions, and there had long been speculation that Dr. Wakefield had falsified data in the study.

Nonetheless, the Wakefield paper helped fuel a movement that convinced many parents that the vaccine was linked to autism, and MMR vaccination rates plummeted in the United Kingdom (and, to a lesser extent, in the United States). Lancet Editor Richard Horton, FMedSci, said years ago that the Wakefield study should never have been published because of conflicts of interest — yet it took more than a decade for a formal retraction.

The retraction was a black eye for Lancet; no medical journal wants to be seen as publishing bad science, especially when the discredited findings have had such far-reaching implications. In the wake of the Lancet retraction, Neurology Today spoke with several leading editors of neurology journals to get their reactions and to find out how retractions and retraction policies have affected those publications.

“This illustrates an issue of the short-term self-interest of journals as well as their long-term enlightened interest,” said Stephen L. Hauser, MD, professor and chair of the department of neurology at the University of California-San Francisco, who has edited Annals of Neurology for the past five years. “As editors, we're all motivated to improve the visibility of our beloved products. There are 200 or so journals in the clinical neurosciences alone. We are all looking for impact. But we have to remember that our standards for review of the paradigm-shifting work really have to be at a very high bar because the implications are so important to human health.”

In 12 years editing the Archives of Neurology, Roger N. Rosenberg, MD, the Abe (Brunky), Morris, and William Zale Distinguished Chair in Neurology at the University of Texas Southwestern Medical School, said that he has yet to preside over any retractions of published work. “The peer review process is the best we have,” he said. “Our reviewers tell us frequently that they don't think a paper should be published because the data are incomplete or too preliminary, internally inconsistent, statistically not significant, the conclusions reached are not supported by the data, or the design of the study and methods used are flawed. These are common issues encountered in the review process, which lead to a rejection of the manuscript.”

Figure. DR. STEPHEN L. HAUSER: “As editors, we're all motivated to improve the visibility of our beloved products. There are 200 or so journals in the clinical neurosciences alone. We are all looking for impact. But we have to remember that our standards for review of the paradigm-shifting work really have to be at a very high bar because the implications are so important to human health.”

RETRACTIONS: HOW THEY OCCUR

Journal retractions can happen in one of two ways: they can be initiated by the editors or by the authors. Sometimes the authors of the original study approach the journal in which it was published to report that new information has come to light, or that old data have been re-reviewed, and that doubt has been cast on their original findings. While not exactly something journal editors look forward to, these instances of honest mistakes (voluntary retractions) are far less troubling — and much less damaging to a scientist's career — than retractions initiated by the editors.

At Neurology, Executive Editor Patricia Baskin said that a total of eight retractions have taken place over the past 20 years — most of them voluntary. “For example, in one, some genotyping errors came to light and changed the results of the whole study, resulting in the authors' retracting the paper,” she explained. “In another, the authors found that the data on one particular genetic mutation were wrong, but the data they'd published on other mutations were valid, so they asked for a partial retraction of the paper.”

In another instance, the authors had included the same information in articles in several journals, without reworking the data to fit each study. “Readers noticed discrepancies and an investigation was done by the authors' institution,” she said. “It showed that the data were flawed and errors had been made. The authors subsequently retracted the paper.”

Of the three editorial retractions, none came close to approaching the significance of the Wakefield paper. “In 2006, the editors retracted one paper because it was almost identical to the same work, published by the same authors, in another journal,” said Baskin. “Also in 2006, they did a partial retraction of a statement in a correspondence in which one of the authors was found to have committed scientific fraud — he claimed to have done research he in fact had not done.” Years earlier, in 1989, Neurology retracted an abstract published in its annual meeting supplement, after the results were deemed invalid because of methodologic and procedural flaws.

Dr. Hauser has not published any editorial retractions in his five years with Annals of Neurology, but in his first year as editor, he announced a policy: “We would publish all reasonable efforts that failed to confirm data presented in our pages, because failure to replicate becomes harder to publish than the original positive result.”

Since that statement, Annals of Neurology has aired at least three such debates. The first involved a paper that was the highest-rated article from the previous year, describing a new model of Parkinson disease in mice and the implication that proteasome pathways were involved in the genesis of the disease.

“We all know that for many animal models, especially toxin-induced models, lab-to-lab variability is a huge issue,” said Dr. Hauser. “But this paper engendered enough commotion that we began to receive papers that either could or could not replicate its findings. We devoted approximately half of an issue to publishing a series of papers dealing with this topic, along with a point/counterpoint group of commentaries, including one by the original author, to fully air this question. I thought this was our clear responsibility as editors.”

Another such response had to do with a finding in spinal fluid from people with multiple sclerosis, indicating that a protein called cystatin C was degraded in a unique way that was diagnostic of MS. “Again, this is a huge paradigm shift; after all, a holy grail in MS is to develop a diagnostic,” Dr. Hauser said. “Then we received a couple of papers suggesting that this could be an artifact of incubator conditions, and we published these papers and asked the author to comment.”

A final example was a virus discovery in MS that others could not replicate; those negative findings were also published. “I must say I lost sleep over that paper,” said Dr. Hauser. “We required replication that was independent, from another institution, before publishing. They were able to replicate the finding, and yet subsequently, others were not able to. But it did satisfy every metric, every bar that we had established for replication. And it's not to say that the paper was wrong.”

Journal editors put themselves on the line with every article they publish, Baskin said. “We only accept 15 percent of our papers; there are many we can't accept. We want only superb papers, and if something goes wrong, it's an embarrassment to us,” she observed. “The best way to overcome that is to correct the literature record as soon as possible. Lancet is making that correction, but they have the added problem that they've had a lot of people over the years saying ‘This isn't right.’”

‘EXERCISE DUE DILIGENCE’

Baskin said the Wakefield retraction should serve as a warning to all editors. “It reminds us all to be very careful and exercise due diligence, to make sure that anything sent to us passes muster with our peer reviewers — who really have to be knowledgeable in the field to the point that they can pick up content that might be fraudulent or not well thought out.”

Ultimately, Dr. Rosenberg said the Wakefield case is an isolated incident, not reflective of a larger problem within the peer review and journal publication process.

“In this case, the peer review process didn't get it right and the author didn't present it right. Sometimes it takes time for the process to correct itself,” he said. “But most of the time I do believe that we do publish the best papers, and those that should not be published are not.

“The peer review process isn't perfect, but there's no substitute. And I'd rather turn down a paper that turns out to be correct than publish a paper in which data are incorrect. It's more of a disservice to the scientific community, to patients, to understanding disease or brain function, to publish a paper that is not scientifically accurate than to fail to publish a paper that turns out to be correct.”