The scandal involving former Duke University faculty member Anil Potti, MD, is far from over. As I mentioned in my last post, concerns continue to surround Dr. Potti’s work, including papers not (yet) retracted. Additionally, some members of the oncology community think the events should trigger deeper soul-searching about how data and methods are reported and about patient safeguards in the era of big-money medicine.
I read the most recent Cancer Letter with interest. In that publication, Keith Baggerly, PhD, and Kevin Coombes, PhD, the biostatisticians at MD Anderson Cancer Center who first uncovered the irregularities in Dr. Potti’s work, summarized and annotated documents the National Cancer Institute provided to the Institute of Medicine’s Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials in December.
The documents, along with a statement from NCI biostatistician Lisa McShane, PhD, traced NCI’s interest and concerns regarding Dr. Potti’s work.
Unfortunately, the 551 pages of NCI documentation, as summarized by Drs. Baggerly and Coombes, leave me with more questions than answers. Key among them: why did it take the NCI four years to reveal its concerns?
Obviously, the institute scientists wanted to give the Duke team time to respond to questions and try to work out the problems, and that is only reasonable. But I wonder if the discussion should have been more public, so that other scientists could have participated in the dialogue. If that had been the case, perhaps the whole mess would have been resolved more quickly – before any patient trials started.
But where could such concerns have been aired? Is there a forum for such discussions? Ideally, the journals that publish the original work should be interested in, and willing to run, follow-up commentary, particularly when there are specific scientific concerns. Yet I know from several conversations with Dr. Baggerly that he and Dr. Coombes tried several times to publish their concerns in the journals that had published the original papers. Instead of having that information included as a letter to the editor or a commentary, Drs. Baggerly and Coombes were turned down. (One scientist who has seen the rejection letters from the journal editors said they “were absolutely hilarious and appalling about why they weren’t interested in publishing it.”)
Drs. Baggerly and Coombes did ultimately publish some of their scientific criticism, but in the Annals of Applied Statistics (see paper listed at the bottom of the page), which surely has a different readership than Nature Medicine, the Journal of Clinical Oncology, or Lancet Oncology, where the Duke work was published.
But if high-profile journals only want to publish flashy (i.e., positive) results – as several analyses have suggested is the case – how are scientific discourse and self-correction supposed to occur? Certainly the community doesn’t want inadequate or inappropriate methods used in clinical trials. We got lucky this time that no patients were actually harmed. (All received standard of care, according to Otis Brawley, MD, Chief Medical Officer at the American Cancer Society, which largely funded the smaller trials.)
But I don’t think anyone wants to rely on luck as a regular approach.