FRESH SCIENCE for Clinicians
News about basic science of interest and relevance for cancer clinicians
Thursday, March 31, 2011
How Could Nevins AVOID Knowing?

In the course of reporting about the problems at Duke University, I have had numerous lengthy phone conversations and email exchanges with Keith Baggerly, PhD, the biostatistician at MD Anderson Cancer Center, who initially uncovered issues with the data. Yet, listening to Dr. Baggerly’s presentation to the Institute of Medicine committee today, I was taken aback.


Until now, I think many of us believed that Anil Potti, MD, bore the bulk of the responsibility for the problems and that if there had been data manipulation – as Joseph Nevins, PhD, admitted yesterday there was – it was Dr. Potti’s doing alone.


However, my view changed after listening to Dr. Baggerly describe attempt after unsuccessful attempt to get clear answers from Drs. Potti and Nevins -- an effort that ran from November 2006 until July 2010 -- and after hearing Dr. Nevins’ presentation to the committee yesterday.


I still think Dr. Potti is likely the only one who altered data. But I now think Dr. Nevins bears an enormous amount of responsibility for not seeing and dealing with the problem.


As Gil Omenn, MD, PhD, chair of the IOM committee, pointed out yesterday (summarized here), there have been other examples of fraud and corruption in research. The difference, though, is that they have not been allowed to continue for years after the first questions were raised.


Let alone raised and raised and raised.


Just one week after Drs. Potti and Nevins and colleagues published their high-profile paper in Nature Medicine in November 2006, Dr. Baggerly and his colleague Kevin Coombes, PhD, contacted the authors asking for sufficient data to reproduce the work. Between that time and May 30, 2007, Drs. Baggerly and Coombes reached out to the Duke team 11 times (by my count from his slides) with either questions or concerns about the work. That included a face-to-face meeting with Dr. Nevins on January 24, 2007, at MD Anderson Cancer Center.


The issues raised by Drs. Baggerly and Coombes included evidence of problems such as:

  • off-by-one indexing errors that left gene names incorrectly associated with array data (sketched in the brief code example after this list);
  • reversal of the drug “sensitive” and “resistant” labels for the cell lines on which the studies were based;
  • a change in the number of cell lines used, from 22 in the original work to 28 in the amended data following the biostatisticians’ questions; and
  • an apparent swap of the heatmap for gene expression data said to predict cyclophosphamide sensitivity with the heatmap for paclitaxel sensitivity, along with missing data for the cyclophosphamide heatmap.
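
To make the first of those concrete, here is a minimal sketch in Python, using made-up gene names and values rather than anything from the actual Duke files, of how a one-row shift leaves every gene attached to its neighbor’s measurement:

    # Hypothetical illustration -- not the actual Duke data.
    genes = ["TP53", "BRCA1", "EGFR", "MYC"]   # row labels in a gene list
    values = [2.1, 0.4, 3.7, 1.2]              # expression values, same row order

    # Correct pairing: each gene with its own measurement.
    correct = dict(zip(genes, values))
    # {'TP53': 2.1, 'BRCA1': 0.4, 'EGFR': 3.7, 'MYC': 1.2}

    # Off-by-one: losing a single header row shifts every label up a row,
    # so each gene silently inherits the value of the gene above it.
    shifted = dict(zip(genes[1:], values))
    # {'BRCA1': 2.1, 'EGFR': 0.4, 'MYC': 3.7} -- every gene mislabeled

Nothing in such a shifted table looks broken on its face; the numbers are simply attached to the wrong genes, which is part of why errors like this can persist until someone tries to rebuild the analysis from the raw files.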

Yet despite these questions, the Duke group opened a clinical trial based on these very data that spring. Was no one concerned that so many issues had been raised? Did Dr. Nevins think these were nothing more than irrelevant complaints?


Yesterday, Dr. Nevins described his version of the events, which I described briefly in my previous post. Though he acknowledged that it is the principal investigator who is ultimately responsible for the work that comes out of his lab, he did not seem to think he had failed to take the complaints seriously enough. Rather, he said over and over again that he thought there was value in the overall ideas and in the infrastructure the Duke team had built to run these tests and trials. And he said explicitly that he thought the institution handled the problems appropriately. “I frankly would suggest that the institution did a very good job of addressing this and paying attention to the issues that were raised,” he said.


Moreover, instead of accepting responsibility, he talked about how, in the future, journals should have more control over what is published and should ensure that all methods and data are made available to interested readers. He talked about how team science involves multiple specialties and should have co-leaders. He even said that he thought they did a pretty good job of that, particularly with the statistical expertise, “not just as an advisor, but as a co-investigator.”


Maybe I’m being too harsh, but that all starts to sound like the comments of someone who is looking for other people to accept the responsibilities that should, ultimately, be his.


Dr. Nevins repeated several times that he had remained convinced of the project’s validity, in part because validation had been done using a supposedly blinded dataset. The collaborators who provided those samples, however, have subsequently said the samples were not blinded. And one comment yesterday suggested that the critical patient-response information was included with the sample tubes.


So if Dr. Nevins considered this validation set of such critical importance, why didn’t he bother to confirm that it was truly blinded? “It’s important to go back to what we were thinking at that time,” Dr. Nevins said. “At that time I was certainly of the viewpoint that the fundamental issues, the scientific criticisms, had to do with how the methodology was being applied and I had no [his emphasis] reason to doubt this particular point and I had no reason for a lack of trust of being told that particular data being blinded.”


No reason to question this? Really?


The paper that included those validation data was published in Lancet Oncology in December 2007. The MD Anderson scientists had been asking questions for a year and had not been able to reproduce the results, even when they used the same data (as in the paclitaxel/cyclophosphamide example above), and the Duke team had already had to make formal corrections to the Nature Medicine paper.


Instead of asking questions internally, the Duke group opened a second clinical trial.


Interestingly, Dr. Nevins seems to be one of the few people who didn’t have concerns about the work. “I can tell you, about this time, my group and I became aware of the fact that these signatures weren’t working,” said IOM committee member John Quackenbush, PhD, Professor of Computational Biology and Bioinformatics at the Dana-Farber Cancer Institute, during this morning’s session. “And we talked to quite a few people -- not Keith [Baggerly] -- who work in the area,” he continued. “I think there was a growing undercurrent in the community to recognize that there were problems.”


So, once again, why didn’t Dr. Nevins see, or at least worry about, the problems? We may never know. But regardless of the reason, his complacency has cost the community dearly in terms of time, resources, and confidence, from insiders and outsiders. And from patients.


Yesterday, Dr. Nevins argued several times during his presentation that there was no potential harm for patients in the clinical trials because all of them were assigned to treatment with standard drugs. But one of the ethicists on the IOM committee disagreed with that point during today’s discussion. “There may not have been physical harm, but there was certainly dignitary harm if patients had to consent to something that was completely wrong or misleading. That is a harm,” she said.


Perhaps one day, when all of this is over and done, Dr. Nevins will see that too.

About the Author

Rabiya S. Tuma, PhD
RABIYA S. TUMA, PHD, a Contributing Writer for Oncology Times, is an award-winning journalist and a regular contributor to The Economist and the Journal of the National Cancer Institute. Her work has appeared in a variety of publications including CR Magazine, Yoga + Joyful Living, O, The Oprah Magazine, HHMI Bulletin, and The New York Times. Prior to launching her writing career, Rabiya earned her doctorate at the University of Washington and the Fred Hutchinson Cancer Research Center and worked at a biotechnology firm in Eugene, Oregon. And though she traded a lab bench for a computer, she remains fascinated with the work that takes basic science into the clinic.

Her OT blog was recognized this year by the American Society of Healthcare Publication Editors (ASHPE) with a bronze award in the category of Best Blog.