Data were manipulated in studies at Duke, said Joseph Nevins, PhD, the principal investigator on the now-retracted studies, during a presentation to the Institute of Medicine’s Committee on the Review of Omics-Based Tests for Predicting Patient Outcomes in Clinical Trials.
This is the first time that any of the principals involved in the Duke project has directly admitted that the data were purposely manipulated, although evidence of such problems had been mounting.
Dr. Nevins, the Barbara Levine University Professor of Breast Cancer Genomics at Duke University Medical Center, made the statement during a 90-minute presentation and question-and-answer session at the committee meeting today. Dr. Nevins made the comment as he described an effort to use a set of patient samples, obtained from MD Anderson Cancer Center, to validate a predictive gene signature:
“Although, however, we believe the methodology was appropriate and, in fact, we believe that the manner in which we had developed methods to apply this in a prospective manner had some innovation, and we believed it was another example of validation, the evidence, in fact, was flawed.
“In that data set, a neoadjuvant [breast cancer] study, there were 99 non-responders and 34 responders. When we were prompted by observations of inconsistencies in an ovarian cancer cell line data set, in which there were mislabeling of samples in a non-random way, we looked at this. And we identified the fact that of the non-responders that are being predicted [in this validation test], in fact, 12 of those samples were actually responders that were mislabeled as non-responders.
“That is to say if we looked at the annotation data that was in the manuscript that had been developed and compared it against the annotation, the information we downloaded from the MD Anderson Cancer Center, we saw that discrepancy. In addition, there were 12 non-responders that were labeled as responders. The overall distribution was the same [as] from the source at MD Anderson Cancer Center…but 12 samples were altered in each direction.
“Clearly [this was] a non-random mislabeling of samples that gives the validation that I am showing here with the corrupted data ––but not with authentic data.
“And based on observations like that and several similar observations, it led to retraction [of the] papers, and it was part of the basis for closing the trials. It was the basis for closing the trials.”
Sounding entirely disheartened and like he was nearly too tired to continue, Dr. Nevins said that this sort of corruption is not something one expects. “Data corruption, of the form that I just described, that we experienced in this set of studies, clearly can happen, but I’ll say it is not something that one generally anticipates.”
The mood in the room during Dr. Nevins’ presentation was tense, according to one attendee, with people seeming almost to be holding their breath. The committee members seemed to be trying to balance the need and desire for pointed questions with a desire not to batter Dr. Nevins too harshly.
During the discussion following Dr. Nevins’ presentation, Catherine DeAngelis, MD, MPH, Editor-in-Chief of JAMA, noted that she had published one of his papers and immediately upon hearing about the problems had asked her editors to re-review it.
She asked Dr. Nevins what journal editors should be doing in a case like this. He replied that papers need to be dealt with on a case-by-case basis and shouldn’t be discarded simply because a particular name appears amongst the authors. (He was apparently referring to Dr. Anil Potti, but at no time mentioned him by name.)
“It is a challenging situation,” Dr. Nevins said. “We had one example of a paper where I think 95% of it is just fine. There happens to be one figure in there that made use of data that has now been retracted. That particular figure had almost no bearing on the overall thrust of the paper. Should a paper like that be removed? Or is that something that should be corrected and noted? The point is that we want the literature to be correct, that is the primary goal. But there are also individuals involved in this. Junior scientists whose careers are affected by that particular paper.”
When asked what the role of the principal investigator is in a situation like this, Dr. Nevins initially seemed ready to take full responsibility. “The principal investigator is the one that is responsible ultimately,” he said. However, he then went on to say that projects like this are team projects, and that the system could be improved if the senior authors with different specialties acted as equals in the supervisory role.
Gilbert S. Omenn, MD, PhD, Chair of the IOM committee and Director of the Center for Computational Medicine and Biology at the University of Michigan Medical School, was the most direct in his criticism of the events that have taken place at Duke and of Dr. Nevins’ role in them. He said that the tone in some of the correspondence with journal editors about the criticisms indicated that the issues were not being taken seriously and that institutional responses were dismissive.
“The reverberations for the whole community, you know Joe, are serious. There are hundreds of other scientists who have referenced the work. You’ve described calmly this afternoon the lack of investigation when things were brought up,” Dr. Omenn said. “That raises the question of how do you deal with something that looks too good to be true –– and might not be true.
“This has happened to other important investigators around the world and mostly those cases did not involve something that went on for three or four years before finally the problem was acknowledged,” Dr. Omenn continued. “So that is something that has to be dealt with internally.”
More to come tomorrow on the IOM discussions about responsibilities held by institutions and journals.