The Institute of Medicine has released a report, Evolution of Translational Omics: Lessons Learned and the Path Forward, on best practices for developing 'omics tests for the clinic. The response from several involved community members interviewed for this article has been enthusiastic, though even some committee members acknowledge that more remains to be done in some areas.
“Our report provides an integrated roadmap for the discovery, development, and evaluation of tests derived from omics technologies,” IOM committee chair Gilbert Omenn, MD, PhD, Professor of Internal Medicine, Human Genetics, and Public Health, and Director of the Center for Computational Medicine and Bioinformatics at the University of Michigan, said at a news conference announcing the report's release.
“We describe best practices for the discovery and confirmation of a potential new test in a research lab, validation of the test in a clinical lab, and use of the test in clinical trials—and eventually in clinical practice—to assess its utility in guiding selection of patient therapies.”
The committee was formed in response to problems at Duke University in which Anil Potti, MD, Joseph Nevins, PhD, and others claimed they had developed microarray tests that could predict the most effective drug for individual cancer patients. That work has been found to be deeply flawed and at least partially fraudulent.
And while the committee did try to understand what happened at Duke that allowed the failures to occur and go undetected for so long—and to be used in clinical trials—the bulk of the new report focuses on how biomarker development can be done well.
“While what happened at Duke was what motivated the study, our committee took its charge to look at a large number of 'omics biomarkers, to understand their successes and failures, and to make recommendations that we felt would provide safeguards and rigor in the process,” said John Quackenbush, PhD, Professor of Computational Biology and Bioinformatics at Dana-Farber Cancer Institute, and a member of the IOM committee.
“I am very pleased with the report,” said Lisa McShane, PhD, a National Cancer Institute biostatistician who was critical in uncovering the problems at Duke. “It was difficult to tackle the scope of the project, but the committee did a really good job. They looked at all the different pieces and identified where some of the weaknesses were in the system. They mapped out a very rational approach of how to move forward in a way that will be safer for patients and consume fewer resources.
“The question in everyone's mind now is, ‘This is like motherhood and apple pie; it all sounds great but how do we make sure that all these things happen and what kind of additional resources are going to be needed to make them happen?’”
The committee recommendations emphasize the need for increased transparency by all responsible parties and at all steps in test development. Moreover, the report recommends that funders and journal editors require that data and code be made fully available as a condition of funding and publication.
The most obvious reason for transparency is to allow other research groups to review and verify results, as well as identify potential problems. Keith Baggerly, PhD, and Kevin Coombes, PhD, the MD Anderson Cancer Center biostatisticians who first detected problems in the Duke work, estimated that they spent more than 1,500 hours trying to figure out what had been done by Potti and colleagues.
“Had such data and code been available earlier, this would have greatly reduced the amount of effort required for others (including us) to check and potentially extend on the underlying results,” Baggerly said via email.
A less obvious, but equally important, reason for transparency is the power of shared methods, said another committee member, Nathan D. Price, PhD, Associate Professor at the Institute for Systems Biology in Seattle. He noted that because individual omics studies have a tremendous number of variables and a relatively small number of biological samples, the tests are prone to overfitting, a statistical problem that makes a test seem much better than it is. However, as the research community at large repeats experiments using the same code and computational methods on new data sets, any true biological signal will be amplified and the noise will be drowned out.
“As studies start going across multiple different locations and sites, the true biological signal gets amplified and becomes much easier to identify,” he said.
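The overfitting problem Price describes can be illustrated with a small numerical sketch (a hypothetical example, not taken from the report): when a study has far more measured variables than samples, a model can fit pure noise perfectly on the training data while predicting nothing useful on fresh data.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, n_features = 20, 1000  # far more variables than samples, as in many omics studies

# Training data with NO real biological signal: features and labels are independent noise
X_train = rng.normal(size=(n_samples, n_features))
y_train = rng.normal(size=n_samples)

# Least-squares fit; with more features than samples, the model can
# reproduce the training labels exactly even though there is nothing to learn
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)
train_error = np.abs(X_train @ coef - y_train).max()

# Fresh data drawn from the same signal-free distribution
X_test = rng.normal(size=(n_samples, n_features))
y_test = rng.normal(size=n_samples)
test_error = np.abs(X_test @ coef - y_test).mean()

print(f"max training error: {train_error:.2e}")  # essentially zero: a perfect-looking fit
print(f"mean test error:    {test_error:.2f}")   # roughly the scale of the noise itself
```

The apparent training performance is perfect, yet the model has captured only noise, which is why the report's emphasis on locked-down methods and independent validation on new data sets matters.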
In addition to addressing policy issues, the IOM report focuses on the methods required to develop clinically valuable omics-based tests. A major recommendation is that the code and computational procedures need to be “locked down,” which means they are recorded and no longer changed, before the assay is tested. McShane notes that although that may seem obvious for experienced clinical investigators, basic science researchers often work in a discovery mode and are used to tweaking their methods as they go along.
“I've seen this in my own collaborations over the past few years,” she said. “You have to really make sure you have locked down the model and test it in a rigorous way.”
Baggerly too was glad to see the report emphasize the need for locking down the methods. And, he points out, the report provides an explicit definition of what it means to lock down data and methods. In his view, such a definition might have alleviated some of the confusion with the Duke work and subsequent trials.
When originally questioned about their methods, both Potti and Nevins offered hand-waving explanations that hid behind the apparent complexity of high-dimensional data and experiments. Yet when the problems were ultimately uncovered and described, most were simple errors or changes, such as mislabeled genes or patient outcomes.
“What comes across to me is that there are two things that are going to be helpful [with the IOM report],” said Robert Califf, MD, Vice Chancellor for Clinical Research and Director of the Duke Translational Medicine Institute.
“One thing is that it does point out things to avoid, but the other is that it is pretty detailed and prescriptive in how things should be done.” That said, he cautions that the report is not adequate to take the place of training. “If you read this report and you followed it to a T, but you had no experience and no expertise, it still wouldn't cut the mustard. One of the key ingredients is adequate informatics and statistical collaboration and experienced expertise.”
With respect to the Duke case, the new report provides perhaps the most concise outline of the events available—the timeline itself runs four pages. The report does not provide much new information about those events, though the committee did explore how and why the events were able to continue for so long in the face of outside criticism. Further details may become available when Duke completes its official misconduct investigation, but the university has not said when that will be.
Will the report help improve omics-based biomarker development? Baggerly and others say they think it will. “The very existence of the report is recognition that reproducibility is an important problem for the omics-test community. This is a necessary step towards fixing the problem,” he said.
When asked during the news conference what will compel investigators and institutions to follow the committee's recommendations, Debra Leonard replied simply, “I think that academic institutions will pay attention to the report because most institutions are aware of the events at Duke.”
“So there is every reason to think that while up to now in the field most of the studies have been individual, underpowered, and very prone to overfitting, that going forward—as people are open and sharing happens more broadly—these signals will get more robust. It will have a major positive effect on the field.”
Need for Regulatory and Institutional Oversight at Multiple Levels
A key theme that runs through the nearly 300-page IOM document is the need for more regulatory involvement and increased oversight at multiple levels, including at the level of the research institution.
A single flow chart (view it on OT's iPad issue) illustrates the suggested path from the discovery phase to clinical test. Remarkably, the chart includes FDA consultation during the validation stage of test development, which is much earlier than many 'omics groups have done in the past.
“Cases lacking FDA oversight for test development put extra pressure on academics, academic institutions, or companies to ensure scientific integrity and oversight of the development process—and the pressure is often underappreciated,” said IOM committee chair Gilbert Omenn, MD, PhD.
One point of discussion during review of the problems at Duke was whether the institutional review board (IRB) inappropriately approved clinical trials that used omics tests in the absence of an investigational device exemption (IDE) from the FDA. Alberto Gutierrez, PhD, Director of the FDA's Office of In Vitro Diagnostic Device Evaluation and Safety in the Center for Devices and Radiological Health, stated clearly during a telephone briefing with the IOM committee last August that an IDE was necessary when an experimental test is used in a trial. The flow chart and text of the report reiterate that necessity.
“An IDE is actually currently required by the FDA for any test that is not FDA approved but is going to be used to manage patient care within a trial,” said Debra G.B. Leonard, MD, PhD, Professor and Vice Chair for Laboratory Medicine and Director of the Clinical Laboratories at Weill Cornell Medical College in New York City. “This is not widely understood by IRBs or by investigators. There is a lot of confusion around this point. They know about the drug process with the FDA but not the clinical test process.”
Baggerly too thinks clarity in the regulatory landscape is important. “It seems likely that several of the problems identified with the Duke trials would have been caught by an FDA review, particularly if the agency already had cause for concern, such as a letter to the editor identifying analytical shortcomings.”
Another problem that occurred at Duke was a lapse in institutional oversight. The IRB failed to ask enough questions, as did the cancer protocol review board. And even when questions were raised, in letters to journal editors and to Duke administrators, senior officials seemed too willing to delegate responsibility to Nevins, the senior investigator in the troubled work.
Duke officials acknowledged this concern in their August 22 discussions with the IOM committee, and have subsequently developed a new framework to strengthen oversight and responsibility in the institution.
With regard to the committee's recommendations on institutional oversight, Omenn said, “Those recommendations aim to clarify lines of authority and reinforce the policies and procedures already in place at most research institutions. We recognize that our recommendations might increase the oversight requirements for omics research at some institutions, but we agree that these potential costs will be offset by the added safeguards for the integrity of the research.”
Further emphasizing the critical nature of adequate, competent oversight, the report directly states that institutions need to be up to the task. “If an institution does not have the infrastructure or capability to follow the recommended Test Development and Evaluation Process defined in this report, then the committee believes that institution should consider not engaging in the translation of omics-based discoveries into validated tests intended for clinical use.”
After reading that passage during the news conference, Omenn summarized the importance of oversight by saying simply: “Patient safety is paramount and public trust is at stake.”
Conflict of Interest
During discussions with Duke officials, including Nevins, IOM committee members repeatedly raised questions about conflict of interest. Early on in the Duke work, the investigators filed for patents and started two companies. The university had a financial stake in the companies as well. Yet despite these clear conflicts, all of the oversight was handled by the institution, even though the university has a method in place for using external reviews when it is potentially conflicted.
The IOM report recommends that institutions have someone in charge of handling conflicts of interest. When asked during the news conference how this differed from existing systems, including the one in place at Duke during the problems, committee member David L. DeMets, PhD, Professor of Biostatistics and Medical Informatics at the University of Wisconsin–Madison, noted that increased attention to the issue may help: “I think the experience at Duke and the recommendations of our report are going to make institutions in their review of conflict of interest more sensitive, more tuned in,” he said.
“I don't think we need to have new procedures, but maybe new sensitivities to the challenges we face in this frontier of genomics research.”
With respect to issues of institutional conflict of interest, he said he did not have specific recommendations, but that the committee felt strongly about the problem. “This is an issue that must be addressed, and I am sure that over the next weeks and months institutions are going to pay more attention to these issues.”
Quackenbush and others interviewed emphasize that institutional conflict of interest does not always involve financial conflicts; an institution's concern for its reputation—either bolstering it or protecting it—can be significant as well. “[Duke] just went through a scandal with their lacrosse team; they didn't want more bad press,” Quackenbush told OT. “It is hard to know if that was part of the lack of investigation” by the university when Baggerly and Coombes first raised the questions about Potti's work.
Although the report does not have specific recommendations on how to manage institutional conflict of interest, David F. Ransohoff, MD, Professor of Medicine and Clinical Professor of Epidemiology at the University of North Carolina at Chapel Hill, points out that the committee's emphasis on it is important: “If the report serves to bring important questions to the table, then that is a significant accomplishment.”
—Rabiya S. Tuma, PhD