At a recent meeting of the Oncologic Drugs Advisory Committee (ODAC) of the U.S. Food and Drug Administration, the FDA tried to do the right thing.
It is no secret that I'm a big fan of the FDA, particularly the Office of Hematology and Oncology Products, with whom I've had a lot of contact. The people who work for this division spend long hours evaluating submissions of complicated chemotherapeutics and biologics, often for rare diseases, and try to evaluate whether the efficacy of these drugs outweighs toxicities that would be deemed unacceptable in non-oncologic indications—all while resisting the influence of the well-funded, clearly incentivized, and organized pharmaceutical and device industries.
Even getting through that last sentence is a challenge. Imagine actually doing it every day!
The discussion topic for the meeting was pretty wonkish: “Evaluation of Radiologic Review of Progression-free Survival in Non-hematologic Malignancies.”
But don't let the title scare you away from a good read. The FDA was asking our opinion about whether it was okay to remove (let me repeat that word: remove) the requirement for complete independent radiologic review in large clinical trials in which progression-free survival (PFS), defined as the time from randomization to either disease progression or death, whichever occurs first, is the primary endpoint.
What is the current state of affairs? Right now, in large, phase 3 studies in which patients with solid tumors are randomized to a new drug or new drug combination vs. placebo or standard therapy, with PFS as the primary endpoint, it is fairly standard for an independent radiologic review to take place, in which central reviewers either agree or disagree with the local investigator about tumor response or progression on a radiographic scan.
So, you enroll a patient onto a study and have no idea if your patient with lung cancer is receiving placebo or “tumorkillamab.” You obtain baseline CT scans and then follow-up scans in two months, and you see a shrinkage of tumor of 75 percent. Great—you keep treating your patient with drug and obtain repeat scans at four months. Now, you see that the tumor has doubled in size, so you take your patient off the study, declaring progression.
Those same scans are sent to central reviewers who are blinded both to the study drug and to your radiographic reviews, and those reviewers (often two radiologists) make their own determination. Most of the time they agree with your review. But what happens if they disagree? What if they felt that, at the two-month scan, your patient had growth, not shrinkage, of tumor, and should have been taken off study? Or what if they felt that, at the four-month scan, your patient actually had further shrinkage of his lung cancer, and should have been kept on study, but you took him off?
Well, it turns out that in a meta-analysis the FDA conducted of 28 trials across nine indications since 2005, there was a high degree of correlation between investigator assessment and independent radiologic review, both in measuring PFS effects and in objective response rates, when looking at the entire study cohort.
This is really good news.
The tricky part is that, on an individual patient basis, there was approximately 30 percent disagreement between the local investigator and the independent radiologic reviewers. But it didn't matter in assessing a drug's efficacy, which implies that the instances of disagreement were either equally balanced (with the local investigator as likely to favor progression as to favor disease response, compared with the independent review) or washed out in the context of results from the entire study.
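To see why balanced, patient-level disagreements can leave a trial's overall PFS estimate essentially unchanged, here is a minimal simulation sketch. Everything in it is hypothetical: the patient count, the exponential PFS distribution, and the size of the disagreement shifts are my own illustrative assumptions, not figures from the FDA meta-analysis.

```python
import random

random.seed(0)

def simulate_trial(n_patients=500, disagreement_rate=0.30):
    """Simulate PFS calls by a local investigator and an independent
    review that disagrees ~30% of the time, with no systematic bias."""
    investigator_pfs = []
    independent_pfs = []
    for _ in range(n_patients):
        true_pfs = random.expovariate(1 / 10.0)  # hypothetical PFS, months
        inv = true_pfs
        ind = true_pfs
        if random.random() < disagreement_rate:
            # A disagreement shifts the independent call earlier or later
            # with equal odds, so errors balance rather than accumulate.
            shift = random.choice([-1, 1]) * random.uniform(0.5, 3.0)
            ind = max(0.1, true_pfs + shift)
        investigator_pfs.append(inv)
        independent_pfs.append(ind)
    return (sum(investigator_pfs) / n_patients,
            sum(independent_pfs) / n_patients)

inv_mean, ind_mean = simulate_trial()
print(f"investigator mean PFS: {inv_mean:.2f} months")
print(f"independent  mean PFS: {ind_mean:.2f} months")
```

Despite roughly 150 of the 500 simulated patients getting discordant calls, the two cohort-level PFS averages come out nearly identical, which is the population-vs.-individual point the meta-analysis makes.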
What was proposed was an independent audit of investigator assessment of progression in a random sample of patients, instead of an independent review of all patients. In the words of Richard Pazdur, MD, the Director of the Office of Hematology and Oncology Products of the FDA, this would introduce a system akin to the IRS monitoring our tax returns. They don't audit every tax return, but the threat of an audit is enough to keep us honest!
Why go to a system like this? It would reduce the cost and burden on the clinical trial investigators, avoid some of the missing data issues, and streamline processes.
Read: It would save millions of dollars in costs associated with conducting large studies, and save a heck of a lot of time and resources at individual cancer centers.
Naturally, if many discrepancies arose during the audit, a larger audit, and perhaps even a complete audit, would kick in.
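The audit-with-escalation idea can be sketched in a few lines. This is purely illustrative: the sample size, the escalation threshold, and the function names are assumptions of mine, not anything the FDA proposed in these terms.

```python
import random

random.seed(1)

def audit(discordance_flags, sample_size=50, escalation_threshold=0.10):
    """Audit a random sample of patients' progression calls; if observed
    discordance in the sample exceeds a pre-set threshold, escalate to a
    full independent review of every patient.

    discordance_flags: per-patient booleans, True = independent reviewers
    would disagree with the local investigator (hypothetical data).
    """
    sample = random.sample(range(len(discordance_flags)), sample_size)
    sample_rate = sum(discordance_flags[i] for i in sample) / sample_size
    if sample_rate > escalation_threshold:
        return "full audit", sample_rate
    return "audit passed", sample_rate

# Hypothetical trial of 400 patients with a 5% true discordance rate.
flags = [random.random() < 0.05 for _ in range(400)]
decision, rate = audit(flags)
print(decision, f"(sampled discordance: {rate:.0%})")
```

The IRS analogy lives in the threshold: most trials would clear the small audit cheaply, but a site with pervasive discordance would trigger the expensive complete review, which is the deterrent.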
What are the arguments against such a strategy? The major one is the logical hurdle of accepting that it's okay to disagree on the radiologic interpretation of an individual patient 30 percent of the time, as long as the overall study conclusions are unaffected. But remember: although we make clinical decisions specific to the person sitting in front of us every working hour of every day, we make regulatory decisions considering populations.
The second concern is making sure that such a strategy would not shift costs, procedures, and additional requirements onto investigators and sites, most of which are already stretched to their limits by regulatory requirements and by the struggle to keep enough personnel to meet them. Still, it's hard to imagine those burdens being any heavier under an audit strategy than under the traditional approach of having every patient re-evaluated by external review.
I think these concerns are surmountable—particularly in support of the government taking its own initiative to ease study conduct requirements. That doesn't exactly happen every day.
More ‘Second Thoughts’!
Check out all the previous articles in Mikkael Sekeres' award-winning column in this collection on the OT website: http://bit.ly/OT-SekeresCollection