See Article, page 1124
We need better interventions to treat acute and chronic pain. This opinion piece focuses on new pharmaceuticals, but the concepts discussed apply equally to nonpharmacologic interventions. Despite decades of basic science investigation, including work recognized by the 2021 Nobel Prize, the drugs commonly used to treat pain (local anesthetics, opioids, nonsteroidal anti-inflammatory drugs (NSAIDs), corticosteroids, antidepressants, and anticonvulsants) are limited by short duration, inadequate efficacy, and/or poorly tolerated adverse events. The past belief that opioids could treat chronic pain led to increased opioid prescribing and the development of longer-acting formulations, which contributed to an epidemic of opioid misuse and deaths. Better visualization of peripheral nerves with ultrasound imaging greatly expanded the delivery of regional analgesia, but existing local anesthetics and formulations remain limited by their short duration of action and adverse events,1 and no new local anesthetics or selective sodium channel subtype blockers have been approved in 30 years. The public has responded to this situation with increasing distrust of approved analgesic drugs and of medical practitioners, and by embracing other therapies, such as cannabis, largely circumventing regulatory guidance despite poor-quality evidence of efficacy or safety.2
For decades, basic science has advanced rapidly, often led by clinical observations and genomic research, discovering target after target to treat pain. These targets have been plausibly established by a multitechnological confluence of evidence derived from studies in single cells or tissues in vitro or in vivo and in sentient animals, usually rodents. Targets have been identified that manipulate processes ranging from genetic translation to cellular bioenergetics to complex neural interactions to crosstalk between cells of the nervous and immune systems. Despite this effort, and with few exceptions involving uncommonly used drugs, clinicians and patients are left with essentially the same analgesic drug classes as 40 years ago. The purpose of this piece is not to describe the newest targets. Rather, it is to identify possible reasons for this lack of progress and alternative pathways that might be more fruitful.
One could argue that this failure is understandable. Chronic pain, like neuropsychological disorders in general, is a highly complex biopsychosocial and individualized experience, and the failure to translate analgesic discovery into therapy may reflect inadequate understanding of this complexity. Or failure may simply reflect the nature of scientific progress, which often appears saltatory, with decades-long periods of minimal change followed by the appearance of an idea or finding that rapidly shifts paradigms. Periods of stagnation in some cases reflect flawed thinking or hypotheses, but in others, they reflect the laborious construction of new tools and concepts that enable paradigm-shifting research. For example, slow progress in fundamental understanding and generation of methods in molecular biology and immunology provided the tools that were applied to generate effective vaccines against severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2) in a period of weeks to months. Perhaps the past 40 years of pain research have generated such tools and concepts, and novel treatments will follow soon.
We believe it is more likely that failure reflects fundamental flaws in the methods used in rodents to identify and validate targets. Most preclinical pain research in rodents relies on behavioral responses to simple peripheral stimuli, oftentimes reflexive ones, as measures of drug efficacy. This approach works when determining the pharmacologic efficacy and potency of novel compounds within drug classes of established clinical effectiveness, such as new NSAIDs, opioids, or gabapentinoids. It has been tacitly assumed that this approach would also predict human efficacy of drugs targeting novel mechanisms. Indeed, a workshop sponsored by the US National Institutes of Health (January 30–31, 2019) concluded that high-throughput screening of novel analgesics should be performed in rodents and that only drugs showing efficacy in such screening should move on to further testing.
The success of research in rodents in predicting clinical efficacy in humans has been poor, as evidenced by failure to relieve pain, despite evidence of target engagement, in clinical trials of antagonists of neurokinin-1, 5-hydroxytryptamine type 3, and calcitonin gene-related peptide receptors, although the last of these exhibits efficacy in treating migraine, an important and common pain state. Ziconotide is perhaps the best example of success arising from rodent research, but its clinical utility is limited by the need for continuous intrathecal infusion and its low therapeutic ratio.3 Other targets discovered in rodents, such as nerve growth factor or cholinergic or angiotensin II receptors,4,5 were indeed successfully translated to humans, but their clinical development has been delayed by toxicity and adverse events.
Table 1. Considerations of Internal and External Validity of Animal Models of Pain

Mitigating experimental bias (internal validity)
- Appropriate sample size
- Reducing attrition bias
- The Experimental Design Assistant: https://eda.nc3rs.org.uk/
- ARRIVE checklist: www.arriveguidelines.org

Optimizing clinical/ethological relevance (external validity)
- Ensure genetic diversity
- Include both sexes if the clinical pain condition occurs in both sexes
- Appropriate age in the lifespan
- Include risk factors for the clinical condition (eg, chronic stress, malnutrition, and chronic opioid use)

Abbreviation: ARRIVE, Animal Research: Reporting of In Vivo Experiments.
More fundamental to methods are species differences. It is possible that the anatomy, neural circuitry, and physiology of the peripheral and central nervous systems differ so much between rodents and humans, and that the behavioral repertoire of rodents is so limited, that the use of rodents in discovery science for analgesic drug development is doomed to fail. We disagree, but suggest that failures to date reflect reliance on experiments in rodents with low internal and external validity. We next review some methods that have been proposed to increase the validity of studies in rodents, in the optimistic hope that this will improve the chances of success.
ANIMALS AS MODELS OF HUMAN DISEASE
Improving models of human disease in rodent studies requires improving both internal and external validity (Table 1).
The term “internal validity” refers to the intrinsic aspects of the design, conduct, and analysis of an experiment, which mitigate against experimental bias confounding the results.6 The central ethos of internal validity is enshrined in the classical concept of scientific method. As such, it differs from external validity, which, for example, covers the clinical and ethological relevance of an animal model and the outcome measures deployed.
Experimental bias can impact all stages of an experiment’s design, conduct, analysis, and reporting and is defined by Cochrane as “A systematic error, or deviation from the truth, in results.”7 Generally, the more robust the internal validity of an experimental protocol, the lower the risk of bias impacting the outcomes and likelihood that they are closer to the “truth.” Biases work in both directions and can lead to underestimation or overestimation of the true intervention effect. Similarly, biases vary in magnitude, with some being small and trivial compared with the observed effect.
In terms of purpose and aims, laboratory research using animal models may be classified as “hypothesis generating” (exploratory) or “hypothesis testing” (confirmatory) (Figure).6 Hypothesis-generating studies are the lifeblood of curiosity-driven discovery science and create testable hypotheses; a clinical analogy might be a case report series. It is essential that hypothesis-generating studies are clearly reported as such and do not imply that a discovery has been confirmed, for example, by the inappropriate use of inferential rather than simple descriptive statistics. Unfortunately, many preclinical studies apply the less rigorous methods of discovery science (Figure) but analyze and interpret the results as hypothesis testing. This section discusses the internal validity of hypothesis-testing studies, which we posit are analogous to randomized controlled trials.
The Lancet series on research waste identified widespread failure to attend to internal validity as 1 of 5 major factors contributing to waste of resources across the biomedical literature, including hypothesis-testing studies conducted using animal models.8
Conventionally, the most prominent methods for mitigating potential bias in animal studies are:9
- Calculation of an appropriate sample size for a particular experimental design. This must be conducted at the protocol design stage, and the mathematical method and assumptions (eg, expected effect size for the model/outcome measure and the intervention to be tested) must be declared in the protocol. Systematic review and meta-analysis of preclinical studies can be a rich, unbiased source of the data required to inform sample size calculations.10–13 The use of appropriate sample sizes mitigates against the impact of underpowered studies, which carry greater risks of generating false-positive results and exaggerating effect sizes. While there is an ethical imperative to use the minimal number of animals necessary in an experiment, an underpowered experiment whose results merit little faith is equally unethical.
- Random allocation of animals to groups mitigates against selection bias. Appropriate methods of randomization should be used and declared in the protocol. Simple pseudorandomization methods, such as the order in which animals are taken from the cage, are not sufficiently robust.
- Allocation concealment is withholding the allocation of animals to model or intervention assignment from the experimenter until the time of assignment (eg, when the model is to be created). It should not be confused with randomization.
- Blinding (masking) of both the experimenter and the data analyst is an essential aspect of any rigorous experiment. This approach mitigates against, for example, selection, performance, and detection biases. It is also important to include in the protocol a safeguarding check that the blinding method was adequate and that the risk of inadvertent unblinding was low. A simple way of assessing this is to include in the protocol an end-of-experiment question asking experimenters to guess the group to which the animals they worked on had been allocated.
- Protection against attrition bias can be achieved by clearly defining the inclusion and exclusion criteria in the protocol. Furthermore, all animals excluded from the analysis must be declared and documented in the study report, together with the reason for each exclusion; this can be documented using an experimental flow chart. Holman et al14 have clearly identified the dangers of attrition bias when targeted exclusion of “outliers” is performed on the very small sample sizes generally reported in animal studies.
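Two of the safeguards above, a priori sample size calculation and seeded random allocation, can be sketched in a few lines of standard-library Python. This is an illustrative sketch only, not a prescribed protocol: the function names, the normal-approximation formula, and the example effect size are assumptions for demonstration, and an exact t-test correction would add roughly 1 to 2 animals per group.

```python
import math
import random
from statistics import NormalDist

def n_per_group(effect_size, alpha=0.05, power=0.80):
    """A priori sample size per group for a two-sided, two-group comparison,
    using the standard normal approximation (add ~1-2 animals per group to
    approximate the exact t-test correction)."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * ((z(1 - alpha / 2) + z(power)) / effect_size) ** 2)

def randomize(animal_ids, groups, seed):
    """Seeded, balanced random allocation; returns {animal_id: group}.
    For allocation concealment, a third party should hold this mapping
    until the moment of assignment."""
    ids = list(animal_ids)
    random.Random(seed).shuffle(ids)
    return {a: groups[i % len(groups)] for i, a in enumerate(ids)}

# Hypothetical example: a large expected standardized effect (d = 1.0)
n = n_per_group(effect_size=1.0)   # 16 per group by normal approximation
allocation = randomize(range(1, 2 * n + 1), ["vehicle", "drug"], seed=2022)
```

Declaring the seed and the expected effect size in the protocol, before any animal is handled, is what makes these steps auditable rather than post hoc.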
While the above generic methods of bias mitigation are those conventionally recommended for laboratory animal studies, the list is neither complete nor exclusive. Although systematic reviews assess the reporting of such internal validity metrics, their provenance in the animal model setting is difficult to trace precisely; they were probably extrapolated from the clinical trial literature. Thus, it could be argued that their direct relevance to this setting still awaits empirical confirmation.
Systematic review and meta-analysis of animal model experiments in pain research have clearly identified areas where rigor in internal validity must be improved before faith can be placed in the veracity of the results.10–13 It should again be emphasized that this issue is widespread and by no means a unique feature of animal model-based research. For example, a systematic review of 341 publications using models of chemotherapy-induced neuropathy ascertained that only 51% of reports described blinded assessment of outcome, 28% randomization to group, 18% animal exclusions, 2.1% the use of a sample size calculation, and just 1.5% allocation concealment.10 Strikingly, similar findings are seen in the cannabinoid literature, where analysis of 374 studies showed that 47% reported blinded assessment of outcome, 32% randomization, 14% animal exclusions, 13% predetermined animal exclusion criteria, 4% allocation concealment, and only 3% a sample size calculation.15 There is clearly room for improvement.
There are some caveats here. Systematic review can only analyze what is actually reported, so it is plausible that some authors did, in fact, deploy such bias mitigation measures but merely did not report them. Conversely, in contrast to the clinical literature, it is often not possible for reviewers to determine the actual method used (eg, of randomization or blinding) and whether it was appropriate and rigorously applied. A quality improvement and safeguard now firmly established in human clinical trial practice, but yet to be realized for hypothesis-testing animal studies (which are, in effect, clinical trials in other species), is the registration and peer-reviewed publication of protocols and statistical analysis plans before a study commences.
There is an increasing number of tools that laboratory scientists using animal models may find helpful when designing experimental protocols with high internal validity, for example, the free online Experimental Design Assistant hosted by the United Kingdom’s National Centre for the Replacement, Refinement and Reduction of Animals in Research (https://eda.nc3rs.org.uk/). Similarly, the European Commission Innovative Medicines Initiative consortium Enhancing Quality In Preclinical Data has recently produced eLearning modules and other resources, such as evidence-based guidance for experimental design (www.quality-preclinical-data.eu).9 Finally, guidance and checklists primarily intended to improve the reporting of laboratory animal studies, such as Animal Research: Reporting of In Vivo Experiments, can be used as an aide-memoire at the design stage (www.arriveguidelines.org).10,16
Inevitably, widespread adoption of the foregoing suggestions for enhancing the internal validity of animal studies will have consequences. Most obviously, the published literature would likely contain fewer apparently “positive” studies and more “negative” studies. However, we posit that these pejorative terms are redundant: a rigorous study testing an important and well-founded hypothesis contributes new knowledge irrespective of the outcome. Other areas of biomedical science, such as clinical trials, have become more advanced in placing rigor above the apparent “novelty” of a study that might have internal validity flaws. Acceptance of this position also requires a culture shift among all stakeholders in laboratory animal science (eg, scientists, universities, funders, industry, and publishers) to ensure that internal validity is prioritized above perceived novelty in the assessment of research performance for academic promotion, appraisal, funding, etc.
The animal population selected and the procedures used to model the disease state of chronic pain (the animal model) require validation. In short, the animal model should reasonably reflect the population and disease conditions of people for whom the drug is being developed.
Diversity of participants in clinical research, despite increased focus in recent years, typically fails to reflect the diversity of the population with disease. Although ethnic and racial diversity have largely unexplored correlates in rodents, researchers can control genetic and sex diversity. Preclinical research published in the journal PAIN largely used outbred rats in 1980; by 2020, the majority of studies used mice, mostly of 1 inbred strain.17 There are many arguments against this practice, including the lack of strong evidence that variability in behavioral responses to nociceptive stimuli is lower in inbred than in outbred animals, and the many differences in sensory neuronal phenotypes and mRNA or protein marker expression between inbred mice and humans. Additionally, it is unclear which mouse strain should be used, given large differences among inbred mouse strains in nocifensive and other behaviors after acute stimuli or injury.18 Mice have been selected primarily for reasons of cost and of access to, or ease in altering, genetic expression in this species, but this does not preclude the use of outbred strains.
Pain experience to acute stimuli differs subtly between men and women, some chronic pain conditions are largely restricted to 1 sex, and drug absorption, distribution, elimination, and action can differ between men and women. Despite this, of the 316 articles in the journal PAIN between 2016 and 2020 using rodents, the majority included male animals only and fewer than 25% included both sexes.17 Similarly, in a meta-analysis of 337 publications using rodent models of chemotherapy-induced peripheral neuropathy, 84% of reports described the use of male animals only, despite the fact that some neurotoxic chemotherapeutics are used to treat female-preponderant cancers (eg, breast or ovarian).10 Pain experience changes with age, and some pain disorders are more common during specific age spans, yet most rodent research uses adolescent or young adult rodents.
Socioeconomic conditions and attendant stressors, poor nutrition, and risk of previous or ongoing physical trauma impart risks for chronic pain. Clearly, there are many conceptual and ethical challenges to mirroring these factors in preclinical research, yet disregarding their role reduces external validity.
OUTCOME MEASURES TO INFER PAIN
The outcome measure most commonly used to assess pain in people is verbal report of the pain experience, regarding either pain intensity or emotional impact. This measure is not possible in rodents. As revealed by meta-analysis, by far the most frequently reported behaviors deployed in rodent pain models are reflexive in nature, usually withdrawal from a mechanical or thermal stimulus.10,15 These reflexive behaviors can sometimes be assessed in graded categories, but most commonly they are recorded as present or absent, with various analytic approaches used to estimate the stimulus intensity with a 50% likelihood of eliciting the response. Reflexive responses can be affected by acute stress and distractors, including intervals between application of stimuli, environmental conditions (such as odors, lighting, and noise), and the experimenter. For punctate mechanical stimuli delivered by monofilaments, there is no consensus on training of the investigator, habituation of the animal to testing before experimental manipulation, or method of determining threshold. Despite these shortcomings, reflexive behaviors are useful when studying novel molecules within drug classes of clinically established analgesics. For example, time to tail flick from controlled radiant heat reliably identifies the relative potency and duration of action of opioids. Similarly, the threshold of mechanical stimulation needed to induce withdrawal of an acutely inflamed paw or joint following injection of an inflammatory irritant reliably identifies the relative potency and duration of action of NSAIDs.
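One analytic approach of the kind described above, estimating the stimulus intensity with a 50% likelihood of eliciting withdrawal from present/absent responses, can be sketched as a maximum-likelihood logistic fit. The routine and the simulated monofilament data below are purely illustrative assumptions (a crude grid search, not a validated psychometric method such as the up-down estimator).

```python
import math

def _sigmoid(z):
    # numerically stable logistic function
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    e = math.exp(z)
    return e / (1.0 + e)

def fit_threshold_50(intensities, responses):
    """Estimate the stimulus intensity with a 50% likelihood of eliciting
    withdrawal: maximum-likelihood fit of a 2-parameter logistic curve by
    grid search. responses: 1 = withdrawal present, 0 = absent."""
    def neg_log_lik(mid, slope):
        total = 0.0
        for x, y in zip(intensities, responses):
            p = min(max(_sigmoid(slope * (x - mid)), 1e-9), 1 - 1e-9)
            total -= y * math.log(p) + (1 - y) * math.log(1 - p)
        return total

    lo, hi = min(intensities), max(intensities)
    mids = [lo + (hi - lo) * i / 200 for i in range(201)]
    slopes = [0.1 * 2 ** k for k in range(10)]   # 0.1 .. 51.2 per unit
    _, best_mid = min((neg_log_lik(m, s), m) for m in mids for s in slopes)
    return best_mid   # logistic midpoint = 50% point

# Simulated monofilament data (forces in g); withdrawal appears above ~4 g
forces    = [0.4, 0.6, 1.0, 1.4, 2.0, 4.0, 6.0, 8.0, 10.0, 15.0]
withdrawn = [0,   0,   0,   0,   0,   1,   1,   1,   1,    1]
threshold = fit_threshold_50(forces, withdrawn)
```

The point of the sketch is that the “threshold” reported by a study is a model-dependent estimate: the same binary data yield different numbers under different fitting conventions, which is one reason the lack of consensus noted above matters.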
Increasingly, rodent research focuses on chronic pain following peripheral nerve injury caused by interventions including surgery, chemotherapy,10 prolonged hyperglycemia, or opioid exposure. The primary outcome measure in most studies using these models is reflexive withdrawal behavior: a lowered withdrawal threshold after the intervention is inferred as akin to allodynia, and a drug-induced increase in that threshold is inferred as analgesia. This is problematic for 2 reasons. Most importantly, allodynia is present in a minority of patients with neuropathic pain, with most patients showing a sensory modality-specific mixture of hypersensitivity and hyposensitivity.19,20 Second, although some studies have demonstrated a decrease in the area of allodynia with analgesia from drug treatment, there is less evidence for analgesics increasing the pain threshold within the allodynic area.
Because of these limitations of reflexive behaviors for assessing pain and analgesia following interventions intended to mimic chronic pain, a variety of other methods have been explored that reflect ethologically relevant behaviors impacted by inferred pain. These include burrowing,12,21,22 thigmotaxis,23–28 facial grimacing,29 weight bearing,30 gait analysis,31 and conditioned place preference or avoidance.32 However, these measures are used by only a small minority of laboratories and are rarely declared as primary outcomes in preference to reflexive measures.
The external validity of withdrawal outcomes also depends on the consequences of pharmacologic manipulation of a new analgesic target. Most obviously, some targets will interfere with reflexive assays by reducing the ability to move (peripheral nerve block with local anesthetics), by reducing responsivity to stimuli in general through sedation (gabapentin), or by causing hypothermia (cannabinoids).33 Indeed, gabapentin, often used as a positive control in studies of novel analgesics in chronic neuropathic pain models, exhibits overlapping dose responses for altering reflexive withdrawal and for producing behavioral evidence of sedation. This not only questions the cause of the change in withdrawal, but also effectively unblinds the investigator performing the test. Finally, preclinical studies are often performed and interpreted without confirmation of target engagement by the drug at its presumed site of action.
PRIORITIZING TARGETS FOR CLINICAL TRANSLATION
Given the challenges in translating animal studies into novel analgesic targets in humans, prioritization should be given to targets for which there is at least some evidence of a role in human pain. This may come from observational data on drugs in clinical use for other indications that act on the novel target, either by design or by off-target action of the molecule. In other cases, observational data on pain in unique circumstances can partially verify a target, for example, the increased speed of recovery from pain after childbirth that led to preclinical evidence for oxytocin signaling as a cause.34,35 Most commonly, though, evidence for a role in human pain comes from examination of population genetic or epigenetic variance in expression or function of the novel target that correlates with pain or its absence, for example, the primary evidence for the sodium channel subtype NaV1.7 as a novel pain target.36
Method of drug administration and cost should also be considered in prioritizing targets for development. Some drugs and devices require implantation of catheters or electrodes in the perispinal space for efficacy. These chronically invasive approaches are inherently costly, limited to specialized centers, impractical or infeasible in many regions or entire countries, and supported by very limited evidence of clinical effectiveness over placebo. To a lesser extent, parenteral injection for systemic administration, especially intravenous administration, increases cost and reduces global availability and impact.
Adverse events are rarely addressed or meaningfully discussed in preclinical research on novel analgesic targets, yet they are critical in prioritizing targets for clinical translation. In some cases, the adverse event rate and severity for the target class may be known from clinical use for other indications. In other cases, preclinical screening for common adverse events fails to predict their occurrence in humans: for example, intrathecal neostigmine produced no retching or vomiting in dogs but severe nausea and vomiting in people,37 and EMA401, an angiotensin receptor antagonist in development for the treatment of neuropathic pain, produced hepatobiliary toxicity.4
DEVELOPING AND APPLYING NEW TOOLS AND SYSTEMS
The preclinical pain research community has recognized for many years the need to develop clinically relevant preclinical models and ethologically relevant outcome measures for chronic inflammatory and neuropathic pain with greater external validity, yet models continue to fail to reflect human populations with disease. Only a few new outcome measures have been developed, and these are used sporadically by relatively few investigators and have not usually been rigorously validated; for example, only 1 has been subject to prospective multicenter validation.21 Major funders of research should consider allocating funds and creating application review committees whose members have expertise in validation and replication methodology to support research in this area. Industry shares this frustration with existing methods, and funds should be specifically allocated for academic-industry grant mechanisms, such as the Small Business Innovation Research program at the US National Institutes of Health, to advance preclinical pain science (Table 2).
Table 2. Paths Forward in Preclinical Pain Research

Developing new tools and systems
- Increase funding to academia and industry to generate more clinically relevant models
- Increase education in the need to apply hypothesis-testing methods to mitigate experimental bias and to register trials with a proper power calculation and statistical analysis plan
- Increase funding for multicenter rodent trials

Changing the culture of preclinical pain research
- Rebalance the perceived value of innovative discovery and rigorous hypothesis testing as these relate to:
  - Grant review and funding
  - Manuscript review and publication
  - Academic promotion
  - Awards and programs at national and international meetings
- Identify sources of change that can be applied from outside the current culture and recruit them for positive change
As previously noted, the scientific rigor and transparent reporting of clinical trials have been improved by study registration, including registration of statistical analysis plans before unblinding of study data, in hypothesis-testing studies. Journals should ensure proper separation of hypothesis-generating discovery science from hypothesis-testing preclinical science in study description, analysis, and interpretation. It is remarkable how often discovery studies in preclinical pain research with extremely small sample sizes end their discussion with recommendations for clinical research and practice rather than the immediate priority: replication of the preclinical study using more robust designs.
In clinical research, small-to-moderate-sized discovery trials are often followed by industry, or sometimes, federally funded, multicenter, hypothesis-testing clinical trials. There are very few multicenter preclinical pain studies, although 1 was recently published21 and another is nearing completion (personal communication: Laura Stone, PhD, Professor of Anesthesiology, University of Minnesota Medical School, June 20, 2022). Research funding should prioritize application of these methods with greater scientific rigor.
Education and communication between preclinical and clinical pain research need strengthening. This includes early engagement of people with lived experience of the target pain condition in setting priorities for preclinical research, as well as in the conceptual framework driving development of new animal models and outcomes, as recently described for posttraumatic stress disorder.38 Joint sessions of preclinical and clinical investigators at conferences, symposia, and society meetings should be expanded, with scientific journal leaders included.
Many of these concepts are being applied through funding opportunities in the US National Institutes of Health Helping to End Addiction Long-term (HEAL) initiative.39
CHANGING THE CULTURE OF PRECLINICAL PAIN RESEARCH
While the poor translational record of preclinical pain research is generally acknowledged, there is no universal agreement among pain research professionals regarding the exact approaches, relative merits, and implementation of the various solutions to animal model-based experiments discussed herein. Indeed, in some quarters, the compelling need to change the status quo is rarely voiced. All global stakeholders (eg, researchers, lived experience experts, health care professionals, funders, publishers, learned societies, academic institutions, and industry) must urgently come together in an alliance to agree the way forward. Professional leaders play critical roles in defining and maintaining the current culture through mentoring and educating new researchers, defining research agendas for the field, and sitting on panels and committees, which determine promotion, publication, and grant funding. A recent report applying evolutionary concepts to model scientific rigor in the social and behavioral sciences quantified persistently low statistical power, likely associated with false-positive results, in reports from 1955 through 2015 and came to the following conclusion: “The persistence of poor methods results partly from incentives that favor them, leading to the natural selection of bad science. This dynamic requires no conscious strategizing—no deliberate cheating nor loafing—by scientists, and only that publication is a principal factor for career advancement.”40
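The statistical dynamic behind the “natural selection of bad science” is easy to demonstrate: underpowered experiments that happen to reach significance systematically exaggerate the true effect (the “winner’s curse”). The simulation below is an illustrative sketch under assumed parameters (true standardized effect of 0.5, 8 animals per group, an approximate t critical value), not an analysis from the cited report.

```python
import random
from statistics import mean, stdev

def simulate(true_d=0.5, n=8, trials=20000, seed=7):
    """Repeatedly run an underpowered two-group experiment (true standardized
    effect true_d, n animals per group) and collect the observed effect size
    of the runs that reach "significance" (|t| > ~2.145, two-sided alpha = .05,
    df = 14). Illustrative sketch only."""
    rng = random.Random(seed)
    significant = []
    for _ in range(trials):
        a = [rng.gauss(0.0, 1.0) for _ in range(n)]       # control group
        b = [rng.gauss(true_d, 1.0) for _ in range(n)]    # treated group
        pooled_sd = ((stdev(a) ** 2 + stdev(b) ** 2) / 2) ** 0.5
        d_obs = (mean(b) - mean(a)) / pooled_sd           # observed effect size
        t = d_obs * (n / 2) ** 0.5                        # two-sample t statistic
        if abs(t) > 2.145:
            significant.append(d_obs)
    return len(significant) / trials, mean(significant)

power, mean_significant_d = simulate()
# Power is low (well under the conventional 80%), and the average effect
# among the "significant" runs is far larger than the true d of 0.5.
```

Because only the “significant” runs tend to be published, the literature built from such experiments overstates effects even when every individual experiment is honestly conducted, exactly the incentive structure the quoted report describes.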
While some aspects of this complex scenario (eg, improving the clinical relevance of models and the ethological relevance of outcome measures) will take years of concerted effort and collaboration to resolve, others are immediately implementable. For example, improving internal validity and rigor in experimental design, conduct, analysis, and reporting should face no barriers to generic and immediate implementation, as was the case for clinical trials. Similarly, robust methods of replication, including multicenter approaches, are readily available. We would also advocate increased use of meta-analyses to identify risks of bias, optimal methods, and relative effect sizes in the pertinent existing scientific knowledge and to prioritize candidate approaches for clinical development. Furthermore, identification of areas for prioritization can be achieved through meaningful collaboration with lived experience experts and clinicians. This heightened rigor will require acceptance of the true value of a “negative” result when achieved in a robustly designed and conducted study testing an important question and well-founded hypothesis: “Expect failure, celebrate success.” This requires culture change, especially in the factors by which career progression and success are judged; the futures of early-career scientists should be enhanced, not compromised, by this advance, with scientists judged and rewarded more on the quality of their work and less on perceived novelty.
Clinical research, in contrast, has succeeded in changing its culture over the past few decades, and high-impact medical journals now require protocol registration, adherence to methods that reduce bias, and transparent reporting. It is striking to us, during grant review meetings or peer review of manuscripts, how stark the contrast is between preclinical and clinical research in the weight given to these aspects of scientific rigor, even by the same individual reviewing both types of research. Change was initially forced on clinical researchers by federal regulatory agencies that approve new therapies and by human studies committees that address ethical failings of earlier research. In other words, change was initiated in part because forces outside the research community and its internal peer review systems recognized a need for change and had the power to enact it. Change expanded more broadly and continued over time because the strengths of improved internal and external validity and of replication, applied by some researchers, became obvious to the research community as a whole.
We believe that the cycle of exciting preclinical reports of new targets to treat pain, presented as compelling and consistent stories in highly cited journals, has largely failed to advance scientific understanding or clinical care over the past 4 decades. This failure of translation is not unique to preclinical pain research; the same conclusion has been reached, for the same reasons, in preclinical neuroscience, social science, and oncology research. Clinical research has evolved in recent decades to embrace greater scientific rigor and transparency of reporting, in part because of a force outside of funding, publication, and promotion by peers. Whether such a force exists to begin this evolution in preclinical research is unclear.
Name: James C. Eisenach, MD.
Contribution: This author created the initial outline and draft of this commentary, created the figure and tables, and edited the manuscript to final form.
Conflicts of Interest: J. C. Eisenach was supported through a grant from the Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public-private partnership between the Food and Drug Administration and corporate partners to develop consensus around study elements and designs to improve the quality of research in these areas. Corporate partners in ACTTION are Abbott, Aptinyx, Aquinox, Biogen, Boston Scientific, Collegium, Depomed, Egalet, Flexion, GW Pharmaceuticals, Horizon Pharma, Jazz Pharmaceuticals, Lilly, Medtronic, Mundipharma Research, Innocoll, NuFactor, Nuvectra, Pacira, Pfizer, Regenacy Pharmaceuticals, Saluda Medical, and Stimwave.
Name: Andrew S. C. Rice, MD.
Contribution: This author revised, edited, and added to the manuscript from the initial draft to the final form.
Conflicts of Interest: A. S. C. Rice undertakes consultancy and advisory board work for Imperial College Consultants—in the last 36 months, this has included remunerated work for Confo, CombiGene, Vertex, Novartis, Orion, and Shanghai SIMR Biotech. A. S. C. Rice serves on the Medicines and Healthcare products Regulatory Agency (MHRA) Commission on Human Medicines Neurology, Pain and Psychiatry Expert Advisory Group. A. S. C. Rice was the owner of share options in Spinifex Pharmaceuticals, from which personal benefit accrued upon the acquisition of Spinifex by Novartis in July 2015; the final payment was made in 2019. A. S. C. Rice was an investigator in the European Quality in Preclinical Data (EQIPD) IMI project, which received funding from the Innovative Medicines Initiative 2 Joint Undertaking under grant agreement no. 777364. This Joint Undertaking receives support from the European Union’s Horizon 2020 research and innovation program and EFPIA.
This manuscript was handled by: Jianren Mao, MD, PhD.
1. Ilfeld BM, Eisenach JC, Gabriel RA. Clinical effectiveness of liposomal bupivacaine administered by infiltration or peripheral nerve block to treat postoperative pain. Anesthesiology. 2021;134:283–344.
2. Moore A, Fisher E, Eccleston C, Haroutounian S, Gilron I, Rice A. International association for the study of pain presidential task force on cannabis and cannabinoid analgesia position statement. Pain. 2021;162:S1–S2.
3. Rauck RL, Wallace MS, Burton AW, Kapural L, North JM. Intrathecal ziconotide for neuropathic pain: a review. Pain Pract. 2009;9:327–337.
4. Rice ASC, Dworkin RH, Finnerup NB, et al. Efficacy and safety of EMA401 in peripheral neuropathic pain: results of 2 randomised, double-blind, phase 2 studies in patients with postherpetic neuralgia and painful diabetic neuropathy. Pain. 2021;162:2578–2589.
5. Rice ASC, Dworkin RH, McCarthy TD, et al.; EMA401-003 study group. EMA401, an orally administered highly selective angiotensin II type 2 receptor antagonist, as a novel treatment for postherpetic neuralgia: a randomised, double-blind, placebo-controlled phase 2 clinical trial. Lancet. 2014;383:1637–1647.
6. Huang W, Percie du Sert N, Vollert J, Rice ASC. General principles of preclinical study design. Handb Exp Pharmacol. 2020;257:55–69.
7. Boutron I, Page MJ, Altman DG, et al. Considering bias and conflicts of interest among the included studies. In: Higgins JPT, Thomas J, Chandler J, Cumpston M, Li T, Page MJ, Welch VA, eds. Cochrane Handbook for Systematic Reviews of Interventions. Cochrane; 2022:chap 7.
8. Ioannidis JP, Greenland S, Hlatky MA, et al. Increasing value and reducing waste in research design, conduct, and analysis. Lancet. 2014;383:166–175.
9. Vollert J, Macleod M, Dirnagl U, et al. The EQIPD (Enhancing Quality In Preclinical Data) framework for rigour in the design, conduct, analysis, and documentation of animal experiments. Nat Methods. Published online September 5, 2022. doi: 10.1038/s41592-022-01615-y
10. Currie GL, Angel-Scott HN, Colvin L, et al. Animal models of chemotherapy-induced peripheral neuropathy: a machine-assisted systematic review and meta-analysis. PLoS Biol. 2019;17:e3000243.
11. Soliman N, Rice ASC, Vollert J. A practical guide to preclinical systematic review and meta-analysis. Pain. 2020;161:1949–1954.
12. Zhang XY, Barakat A, Diaz-delCastillo M, et al. Systematic review and meta-analysis of studies in which burrowing behaviour was assessed in rodent models of disease-associated persistent pain. Pain. 2022;162(Suppl 1):S26–S44.
13. Soliman N, Haroutounian S, Hohmann AG, et al. Systematic review and meta-analysis of cannabinoids, cannabis-based medicines, and endocannabinoid system modulators tested for antinociceptive effects in animal models of injury-related or pathological persistent pain. Pain. 2021;162:S26–S44.
14. Holman C, Piper SK, Grittner U, et al. Where have all the rodents gone? The effects of attrition in experimental research on cancer and stroke. PLoS Biol. 2016;14:e1002331.
15. Soliman N, Haroutounian S, Hohmann AG, et al. Systematic review and meta-analysis of cannabinoids, cannabis-based medicines, and endocannabinoid system modulators tested for antinociceptive effects in animal models of injury-related or pathological persistent pain. Pain. 2021;162:S26–S44.
16. Percie du Sert N, Hurst V, Ahluwalia A, et al. The ARRIVE guidelines 2.0: updated guidelines for reporting animal research. Br J Pharmacol. 2020;177:3617–3624.
17. Sadler KE, Mogil JS, Stucky CL. Innovations and advances in modelling and measuring pain in animals. Nat Rev Neurosci. 2022;23:70–85.
18. Mogil JS, Yu L, Basbaum AI. Pain genes?: natural variation and transgenic mutants. Annu Rev Neurosci. 2000;23:777–811.
19. Gierthmühlen J, Maier C, Baron R, et al.; German Research Network on Neuropathic Pain (DFNS) study group. Sensory signs in complex regional pain syndrome and peripheral nerve injury. Pain. 2012;153:765–774.
20. Baron R, Maier C, Attal N, et al.; German Neuropathic Pain Research Network (DFNS), and the EUROPAIN, and NEUROPAIN consortia. Peripheral neuropathic pain: a mechanism-related organizing principle based on sensory profiles. Pain. 2017;158:261–272.
21. Wodarski R, Delaney A, Ultenius C, et al. Cross-centre replication of suppressed burrowing behaviour as an ethologically relevant pain outcome measure in the rat: a prospective multicentre study. Pain. 2016;157:2350–2365.
22. Andrews N, Legg E, Lisak D, et al. Spontaneous burrowing behaviour in the rat is reduced by peripheral nerve injury or inflammation associated pain. Eur J Pain. 2012;16:485–495.
23. Wallace VC, Segerdahl AR, Blackbeard J, Pheby T, Rice AS. Anxiety-like behaviour is attenuated by gabapentin, morphine and diazepam in a rodent model of HIV anti-retroviral-associated neuropathic pain. Neurosci Lett. 2008;448:153–156.
24. Wallace VC, Blackbeard J, Segerdahl AR, et al. Characterization of rodent models of HIV-gp120 and anti-retroviral-associated neuropathic pain. Brain. 2007;130:2688–2702.
25. Wallace VC, Blackbeard J, Pheby T, et al. Pharmacological, behavioural and mechanistic analysis of HIV-1 gp120 induced painful neuropathy. Pain. 2007;133:47–63.
26. Hasnie FS, Breuer J, Parker S, et al. Further characterization of a rat model of varicella zoster virus-associated pain: relationship between mechanical hypersensitivity and anxiety-related behavior, and the influence of analgesic drugs. Neuroscience. 2007;144:1495–1508.
27. Huang W, Calvo M, Pheby T, et al. A rodent model of HIV protease inhibitor indinavir induced peripheral neuropathy. Pain. 2017;158:75–85.
28. Huang W, Calvo M, Karu K, et al. A clinically relevant rodent model of the HIV antiretroviral drug stavudine induced painful peripheral neuropathy. Pain. 2013;154:560–575.
29. Langford DJ, Bailey AL, Chanda ML, et al. Coding of facial expressions of pain in the laboratory mouse. Nat Methods. 2010;7:447–449.
30. Schött E, Berge OG, Angeby-Möller K, Hammarström G, Dalsgaard CJ, Brodin E. Weight bearing as an objective measure of arthritic pain in the rat. J Pharmacol Toxicol Methods. 1994;31:79–83.
31. Hadlock TA, Koka R, Vacanti JP, Cheney ML. A comparison of assessments of functional recovery in the rat. J Peripher Nerv Syst. 1999;4:258–264.
32. King T, Vera-Portocarrero L, Gutierrez T, et al. Unmasking the tonic-aversive state in neuropathic pain. Nat Neurosci. 2009;12:1364–1366.
33. Finn DP, Haroutounian S, Hohmann AG, Krane E, Soliman N, Rice ASC. Cannabinoids, the endocannabinoid system, and pain: a review of preclinical studies. Pain. 2021;162:S5–S25.
34. Gutierrez S, Liu B, Hayashida K, Houle TT, Eisenach JC. Reversal of peripheral nerve injury-induced hypersensitivity in the postpartum period: role of spinal oxytocin. Anesthesiology. 2013;118:152–159.
35. Eisenach JC, Pan P, Smiley RM, Lavand’homme P, Landau R, Houle TT. Resolution of pain after childbirth. Anesthesiology. 2013;118:143–151.
36. Cummins TR, Dib-Hajj SD, Waxman SG. Electrophysiological properties of mutant Nav1.7 sodium channels in a painful inherited neuropathy. J Neurosci. 2004;24:8232–8236.
37. Hood DD, Eisenach JC, Tuttle R. Phase I safety assessment of intrathecal neostigmine methylsulfate in humans. Anesthesiology. 1995;82:331–343.
38. Dunsmoor JE, Cisler JM, Fonzo GA, Creech SK, Nemeroff CB. Laboratory models of post-traumatic stress disorder: the elusive bridge to translation. Neuron. 2022;110:1754–1776.
39. Oshinsky ML, Bachman JL, Mohapatra DP. Opportunities for improving basic and translational pain research. Anesth Analg. 2022;135:1124–1127.
40. Smaldino PE, McElreath R. The natural selection of bad science. R Soc Open Sci. 2016;3:160384.