ARTICLE IN BRIEF
An Institute of Medicine summary of a past workshop focused on the question: “Why do many therapeutics show promise in preclinical animal models but then fail to elicit predicted effects when tested in humans?” Study design and execution of animal studies are partly to blame, the workshop participants said.
Animal studies that produce promising results for drugs to treat Alzheimer's and Parkinson's disease, as well as other neurological disorders, frequently fail in clinical trials. While the differences between human and non-human species may account for some of the failures, a new report by the Institute of Medicine (IOM) calls attention to faulty experimental designs, questionable statistical analysis, and the tendency of journals to prefer publishing positive results.
WHY ANIMAL MODELS FAIL
The report, “Improving the Utility and Translation of Animal Models for Nervous System Disorders,” is a summary of the IOM Forum on Neuroscience and Nervous System Disorders held in 2012 in Washington, DC. The forum focused on the question: “Why do many therapeutics show promise in preclinical animal models but then fail to elicit predicted effects when tested in humans?”
The answers, many participants said, lie in the details of how animal studies are designed and executed.
While many participants reiterated the value of animal models, they also acknowledged that no animal model fully recapitulates human neurological diseases. Focusing on a particular aspect of the disease, however, such as the accumulation of amyloid-beta in Alzheimer's, or the breakdown of myelin in multiple sclerosis, can provide invaluable insights into the pathology of the disease, as well as clues to potential treatments.
Participants analyzed both the “external validity” of animal experiments — how well the results apply to human diseases — as well as the “internal validity” — the extent to which the design and execution of the experiment provides clean, unbiased results. One spirited discussion revolved around the possibility of bypassing animal studies altogether for certain neurological disorders and moving directly to human clinical trials: “In particular, one participant noted that if the mouse models for polygenic psychiatric disorders are really as poor as described, perhaps it is time to ask under what circumstances it would be both worthwhile and ethical to go straight into human clinical trials after establishing safety.”
Participants also suggested that noncompetitive alliances could pool data, build consensus around hypotheses, and speed translation to therapies.
PUBLICATION BIAS: ONLY POSITIVE RESULTS
Scientific journals were criticized for their tendency to favor positive results over negative or modest findings. Katrina Kelner, PhD, editor of Science Translational Medicine, acknowledged that a much larger fraction of papers that report positive findings are published, and that this can skew the perception of the effectiveness of an experimental drug. She pointed out that if 20 laboratories tested drug X, and only one reported positive results, but that was the only report published, other researchers would be misled into believing the drug was effective and try to replicate the findings.
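The arithmetic behind Dr. Kelner's hypothetical is easy to sketch. The short Python snippet below (a toy illustration, not from the report) computes the chance that at least one of 20 labs obtains a false-positive result at the conventional p &lt; 0.05 threshold when testing a drug that has no real effect:

```python
# If 20 labs independently test an ineffective drug, how likely is it
# that at least one lab gets a false-positive result (p < 0.05)?
alpha = 0.05   # conventional significance threshold
n_labs = 20    # number of labs testing the same drug

# Each lab independently avoids a false positive with probability 1 - alpha.
p_at_least_one_positive = 1 - (1 - alpha) ** n_labs
print(f"{p_at_least_one_positive:.2f}")  # 0.64
```

Even with no true effect, a positive report from one of 20 labs is more likely than not — and if only that one report is published, the literature looks uniformly encouraging.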
Publication of studies that are poorly designed and analyzed also introduces bias, Dr. Kelner said, but weeding out such studies can be very difficult. She advocated higher standards for peer review, and said she is encouraged by the proliferation of journals dedicated to publishing negative results.
The IOM report contains many perceptive and salient observations regarding animal research, according to Marie-Francoise Chesselet, MD, PhD, who has written several papers on the challenges posed by animal models.
“If a study is not reproduced and a compound does not produce the desired effect, it does not necessarily mean that the investigators did something wrong,” said Dr. Chesselet, Charles H. Markham professor of neurology, and director of the Integrative Center for Neural Repair at the David Geffen School of Medicine at the University of California, Los Angeles. “When a study cannot be reproduced, sloppy science may be involved, but the failure may be due to the inherent variability of animal testing. Subtle variations in lighting can affect animals, along with food, handling, length of quarantine, [diverse] strains of animals. These can all induce differences.”
Dr. Chesselet cautioned neurologists against over-reliance on “anthropomorphic” end points in animal studies. “Neurologists may want the behavior of an animal model to mimic human disease, but the disease will manifest itself within the limits of animal behavior,” she said.
Comparing animal models to human disease poses additional problems when studying Parkinson's disease and other disorders that begin years or even decades before symptoms appear.
“If you want to reproduce in an animal the early phase of disease, that animal is not going to display the deficits that neurologists associate with the disorder,” she said. “When researchers develop a model of preclinical disease, it is often dismissed by neurologists because the model doesn't have the characteristics of manifest disease.”
NEED FOR GREATER TRANSPARENCY
Robert A. Gross, MD, PhD, the editor-in-chief of Neurology, found the IOM report to be a much-needed analysis of the failure of animal models to produce effective therapies in humans.
“The problem is clear,” he said. “Few compounds make it from initial development to clinical use. A lot of compounds that get pushed through the pipeline eventually fail.”
In a Nature paper published last year, Dr. Gross and his co-authors called for greater transparency in reporting the methods used in animal research so that others can replicate the results.
“Basic animal studies are not reported well,” said Dr. Gross, professor of neurology and of pharmacology and physiology, and associate chair for academic affairs for neurology at the University of Rochester Medical Center. “Rarely are they randomized; rarely are they blinded; rarely is a power calculation done to determine how many animals should be used to test a certain hypothesis.”
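A power calculation of the kind Dr. Gross describes can be sketched in a few lines. The example below is a minimal normal-approximation sketch (not a method prescribed by the report); the effect sizes passed in are hypothetical, and the approximation slightly underestimates the exact t-test answer for small groups:

```python
from math import ceil
from statistics import NormalDist

def animals_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate animals needed per group for a two-group comparison.

    effect_size is Cohen's d: the expected difference in means divided
    by the pooled standard deviation. Uses the standard normal
    approximation for a two-sided test.
    """
    z = NormalDist().inv_cdf
    n = 2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2
    return ceil(n)

print(animals_per_group(1.0))  # large effect:  16 animals per group
print(animals_per_group(0.5))  # medium effect: 63 animals per group
```

The point of the exercise is the steep cost of small effects: halving the expected effect size roughly quadruples the number of animals needed, which is exactly the calculation rarely done before a study begins.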
In addition, results often are not validated across different models. “For example, it's well known that different strains of mice have different tendencies to have seizures,” he said. “So if you study seizures in a particular strain of animal, you'll learn a lot about seizures in that strain, but not necessarily in other genetic backgrounds, let alone in humans. Restricted study can lead you astray.”
And as the editor of a major scientific journal, Dr. Gross constantly confronts the problem discussed at the IOM workshop — journals tend to prefer positive results.
“This is very true with animal studies, but it's also true in human trials where publishing a negative result could save a lot of money by discouraging other researchers from pursuing the same or similar avenues of investigation,” he said.
Consistent with the IOM report, Dr. Gross said he does not believe researchers routinely misrepresent their findings. “They might say there was a trend toward this result, when the statistics show it is not significant,” he said. “That is a hopeful statement, not a scientific statement. Authors do this all the time, but for the most part it's not a conscious attempt to pull the wool over someone's eyes. You want to believe your data, you want to present it in the best light, and so you say these things, but they're wrong and misleading.”
Dr. Gross believes that improving animal studies must begin with better training of scientists, and more adept use of statistics, but ultimately it's up to journal editors to uphold the standards of reporting.
“It has to start with training,” he said, “but editors are the last gatekeepers of integrity in science.”
TRANSLATIONAL SUCCESS IN ANIMAL MODELS
During the first day of the Institute of Medicine workshop, breakout groups discussed the translational success of animal models in several areas of research.
* Alzheimer's disease: Moderator Bradley Hyman, MD, PhD, professor of neurology at Harvard Medical School, said animal models, while successful at replicating plaques, tangles, and other specific aspects of Alzheimer's disease, are incomplete models of the human disease, in part because the animals don't live long enough to mimic the gradual accumulation of pathology that takes place over many years in humans.
* Stroke: Costantino Iadecola, MD, Anne Parrish Titzell professor of neurology and neuroscience at Weill Cornell Medical College, noted that the pathobiology of stroke can be reproduced effectively in animal models, although the mechanism by which the occlusion develops in humans may not be mimicked exactly. Efforts are being made to more closely align animal models with human stroke, he said. For example, treatments are increasingly administered to hypertensive, diabetic, and aged animals in an effort to account for stroke risk factors. But unfortunately many pharmaceutical companies have abandoned the search for stroke treatments due to the failure of clinical trials to produce effective results.
* Addiction: Athina Markou, PhD, professor in the department of psychiatry at the University of California, San Diego, credited the success of varenicline, a treatment for smoking addiction, to the strong theoretical rationale provided by animal models, which provided the confidence to move ahead into clinical trials. However, almost no drugs aimed at modulating dopamine have made it to market, she added, perhaps because the hypothesis that dopamine mediates dependence and addiction is wrong, or because clinical trials based on that hypothesis have not been done properly.
* Schizophrenia: Many participants in this session, moderated by Holly Moore, PhD, associate professor of clinical neurobiology in psychiatry at Columbia University, complained that animal models for schizophrenia are inadequate, despite some useful assays of behavior, cognition, and the neurocircuitry affected by schizophrenia. Similar neurocircuitry in humans and animal models might provide clues to what mediates symptoms and aberrant behavior.
* Pain: Human brain imaging provides insights into chronic pain in humans, which could inform animal models of pain, observed moderator Apkar Vania Apkarian, PhD, professor of physiology at Northwestern University. Chronic pain is not just a matter of nociception, but rather a reorganization of brain activity, with genetic and experiential factors (which could be modeled in animals) most likely modulating the experience of pain.
TOO MANY POSITIVE RESULTS IN JOURNALS?
Evidence supporting the widespread belief that scientific journals are more inclined to publish positive results recently appeared in PLoS Biology. John P. A. Ioannidis, MD, DSc, a professor of medicine at the Stanford University School of Medicine, led a review of 160 meta-analyses encompassing more than 1,000 published animal studies of Alzheimer's disease, Parkinson's disease, brain ischemia, and other neurological conditions. Almost all of the studies reported statistically significant findings favoring the experimental intervention.
Dr. Ioannidis and colleagues compared this to the number of expected significant results, which they calculated from the largest and most thorough studies in each meta-analysis. The number of statistically significant results in the studies they examined “was too large to be true,” the authors concluded. “Only eight of the 160 evaluated treatments should have been subsequently tested in humans.”
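The logic of this excess-significance comparison can be illustrated in miniature. In the sketch below, the per-study power values and the observed count are invented for illustration; they are not data from the PLoS Biology analysis:

```python
# Toy excess-significance check for one hypothetical meta-analysis.
# Each entry is a study's estimated power to detect the effect size
# seen in the largest, most precise study in that meta-analysis.
powers = [0.30, 0.25, 0.40, 0.20, 0.35]  # hypothetical values
observed_significant = 5                  # all five reported p < 0.05

# If the largest study's effect estimate is right, the expected number
# of significant results is simply the sum of the individual powers.
expected_significant = sum(powers)
print(expected_significant)  # 1.5 expected, versus 5 observed
```

When small, underpowered studies report significant results far more often than their power would allow, as in this toy case, the observed count is “too large to be true” — the signature of selective analysis or selective publication.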
They attributed their results not to willful fraud, but rather to the use of data analysis that produces “better” results, and to the tendency of high-profile journals to prefer publishing positive results.
Dr. Ioannidis and his coauthors recommend that animal studies adhere to strict guidelines, that they be preregistered so that publication of the results is ensured, and that methodological details and data be made available for other researchers to verify.