WASHINGTON, DC—The United States spends more than $7 billion every year on clinical trials to evaluate drugs, devices, and biologics, and while randomized clinical trials are the gold standard in evaluating new therapies, they are only as good as the data gathered and analyzed. But what happens when data are missing? Do missing data lead to biased assumptions about results? Can these biases, if present, affect clinical treatment?
Those are some of the questions explored in a new report from an expert panel convened by the National Research Council of the National Academy of Sciences at the request of the Food and Drug Administration.
The panel was chaired by Roderick J.A. Little, PhD, Professor and former Chair of the Department of Biostatistics at the University of Michigan and coauthor of a book on missing trial data, Statistical Analysis with Missing Data (2002, Wiley).
Missing data were defined in the report as outcome values considered meaningful for clinical trial analysis that were not collected. The advantages of randomization in evaluating a new therapy are jeopardized when some of a trial's outcome measures are missing.
“Missing data can arise for a variety of reasons, including the inability or unwillingness of participants to meet appointments for evaluation,” stated the panel, which also noted that unfortunately in some trials, data collection stops when participants discontinue treatment. The panel strongly advised investigators to obtain information about dropouts “to the extent possible,” and to anticipate and plan for missing data in trial protocols.
In fact, the panel's bottom line is that careful trial design should be undertaken upfront to minimize data gaps, recognizing that missing trial data are all too common and that some missing data are probably inevitable.
“There is no ‘foolproof’ way to analyze data subject to substantial amounts of missing data; that is, no method recovers the robustness and unbiasedness of estimates derived from randomized allocation of treatments....There is no single correct method for handling missing data,” the report said.
Thus to strengthen clinical trials and reduce the potential for missing data, the panel said two critical elements need to be stressed:
* Careful design and conduct to limit the amount and impact of missing data.
* Analysis that makes full use of information on all randomized participants and is based on careful attention to the assumptions about the nature of the missing data underlying estimates of treatment effects.
Example of Chronic Pain
As an example of how clinical trial design and analysis can be crucially important when subjects drop out, the panel cited chronic pain. The “last observation carried forward” technique of trial analysis implicitly assumes that a trial participant who had good pain control in the short term and then dropped out would have had good pain control in the long term—but this assumption seems questionable in many settings, the panel concluded.
The “baseline observation carried forward” analytical technique assumes that a participant's pain control after dropout remains at the level measured at the start of the trial. This assumption is likely to underestimate the effectiveness of any treatment, especially given the well-documented power of the placebo effect.
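To make the two single-imputation rules concrete, here is a minimal sketch (with invented pain scores, not data from the report) of how LOCF and BOCF fill in a participant's missing follow-up measurements:

```python
# Hypothetical pain scores (0-10 scale; lower is better) for one participant,
# measured at baseline and four follow-up visits. None = missing after dropout.
scores = [8.0, 5.0, 4.0, None, None]  # participant dropped out after visit 2

def locf(values):
    """Last observation carried forward: fill each missing value with the
    most recent observed value."""
    filled, last = [], None
    for v in values:
        last = v if v is not None else last
        filled.append(last)
    return filled

def bocf(values):
    """Baseline observation carried forward: fill every missing value with
    the baseline (first) measurement."""
    baseline = values[0]
    return [baseline if v is None else v for v in values]

print(locf(scores))  # [8.0, 5.0, 4.0, 4.0, 4.0] -- assumes pain control persists
print(bocf(scores))  # [8.0, 5.0, 4.0, 8.0, 8.0] -- assumes pain returns to baseline
```

The contrast shows why the panel objects to using either method uncritically: each bakes a strong, untestable assumption about dropouts directly into the analyzed data.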
To minimize the unintended consequences of missing data in clinical trials, the panel made the following recommendations, among others:
* Investigators, sponsors, and regulators should design clinical trials consistent with the goal of maximizing the number of participants who are maintained on the protocol-specified intervention until the outcome data are collected. Techniques to accomplish that include: (1) careful choice of study sites, investigators, participants, study outcomes, time in study, times of measurement, and the nature and frequency of follow-up, to limit the amount of missing data; (2) use of rescue therapies or alternative treatment regimens to allow analysis of subjects who discontinue the assigned treatment; (3) limiting the burden on study participants, such as making follow-up visits easy in terms of travel and child care; (4) providing frequent reminders of study visits; (5) training investigators in the importance of avoiding missing data; (6) providing incentives to investigators and participants designed to limit dropouts; and (7) monitoring adherence and otherwise managing participants who cannot tolerate or do not adequately respond to treatment.
* Study sponsors should explicitly anticipate potential problems of missing data—the trial protocol should contain a section that addresses missing data issues, including the anticipated amount of missing data and the steps that will be taken to limit and monitor it.
* Statistical methods for handling missing data should be specified by clinical trial sponsors in study protocols, and their associated assumptions stated in a way that can be understood by clinicians.
* Single imputation methods like “last observation carried forward” and “baseline observation carried forward” should not be used as the primary approach to the treatment of missing data unless the assumptions that underlie these methods are scientifically justified.
* Sensitivity analyses should be part of the primary reporting of findings from clinical trials.
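One way to see what a sensitivity analysis adds (a sketch with invented numbers and fill rules, not an example from the report): estimate the treatment effect several times, each time filling the missing outcomes under a different assumption, and check how much the conclusion moves.

```python
# Hypothetical end-of-trial pain scores (0-10 scale; lower is better).
# None = participant dropped out before the outcome was collected.
treatment = [3.0, 4.0, None, 2.0, None, 3.5]
control   = [6.0, 5.0, 5.5, None, 6.5, 5.0]

def mean(xs):
    return sum(xs) / len(xs)

def fill(values, assumed):
    """Replace each missing outcome with a single assumed score."""
    return [assumed if v is None else v for v in values]

obs_t = [v for v in treatment if v is not None]
obs_c = [v for v in control if v is not None]

# Effect = control mean minus treatment mean (positive favors the treatment).

# Complete-case analysis: simply drop participants with missing outcomes.
complete_case = mean(obs_c) - mean(obs_t)

# Mean-fill assumption: dropouts resemble the observed mean in their own arm.
mean_fill = mean(fill(control, mean(obs_c))) - mean(fill(treatment, mean(obs_t)))

# Pessimistic assumption (worst case for the treatment): missing treated
# outcomes set to the worst observed treated score, missing control outcomes
# set to the best (lowest-pain) observed control score.
worst_case = mean(fill(control, min(obs_c))) - mean(fill(treatment, max(obs_t)))

print(f"complete-case: {complete_case:.2f}")
print(f"mean-fill:     {mean_fill:.2f}")
print(f"worst-case:    {worst_case:.2f}")
```

If even the worst-case estimate favors the treatment, the finding is robust to the missing data; if the estimates diverge sharply, the conclusion rests on untestable assumptions about the dropouts.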
* The FDA and NIH should make use of their extensive clinical databases to carry out a program of research, both internal and external, to identify common rates and causes of missing data in different domains, and to see how different models perform in different settings, in order to inform future clinical trial designs.
* The FDA and the drug, device, and biologics companies that sponsor clinical trials should carry out continued training of their analysts to keep them abreast of up-to-date techniques for missing data analysis. Similarly, the FDA should continuously train its clinical reviewers in missing data methods and terminology.
* The treatment of missing data in clinical trials should have a higher priority for sponsors of statistical research such as NIH and the National Science Foundation.