For SCS, 18 of the 72 total trials were actively recruiting participants, 4 were withdrawn before subject enrollment, and 1 was not yet open for recruitment. Results were sought for the remaining 49 trials (22 trials listed as completed, 12 trials terminated after beginning enrollment, 8 trials listed as active but not recruiting, and 7 trials of unknown status). For rTMS/tDCS, 34 of the 92 total trials were actively recruiting participants, 2 were withdrawn before subject enrollment, and 12 were not yet open for recruitment. Results were sought for the remaining 44 trials (25 trials listed as completed, 1 trial terminated after beginning enrollment, 5 trials listed as active but not recruiting, and 13 trials of unknown status).
Forty-five percent of SCS trials (22/49) and 48% of rTMS/tDCS trials (21/44) had available results. Thirty-three percent of SCS trials (16/49) and 45% of rTMS/tDCS trials (20/44) had results in a peer-reviewed journal. Fourteen percent (5/36) of publications for rTMS, tDCS, and SCS trials were linked directly to the registry, and the remaining 86% (31/36) were found by manually searching through PubMed. For SCS, 2 trials had results available through direct posting on CTG and 4 had results available only in the gray literature. For rTMS/tDCS, 1 trial had results available through direct posting on CTG. There was no significant association between the type of device and the availability of results.
Focusing on those SCS trials where the registry entry showed the trial as “completed” and with a specific completion date, results could be found for 10 of the 18 trials (56%) completed at least 12 months before our data collection, of which 8 (44%) were in peer-reviewed journals. There were 3 SCS trials completed <12 months before our data collection, and only 1 had any available results (not peer reviewed). For rTMS/tDCS, results could be found for 14 of the 21 trials (67%) completed at least 12 months before our data collection, of which 13 (62%) were in peer-reviewed journals. Two rTMS/tDCS trials were completed <12 months before our data collection, and none had any available results. There was no significant association between the type of device and the availability of results for completed trials with a specified completion date.
Twenty-two of the 49 SCS trials eligible for a results search had an industry primary sponsor. Of these trials, 10 had results available (45%), of which 8 (36%) were in the peer-reviewed literature. The remaining 27 had a nonindustry primary sponsor, of which 12 (44%) had any available results, with 8 (30%) in the peer-reviewed literature. Only 2 of the 44 rTMS/tDCS trials eligible for a results search had an industry primary sponsor, and both had results available in the peer-reviewed literature. The remaining 42 had a nonindustry primary sponsor, of which 19 (45%) had any available results, with 18 (43%) in the peer-reviewed literature.
3.3. Comparison of RReADS, RReACT-CRPS, and RReACT-CPSP with RReACT and RReMiT
A total of 763 trials were eligible for a results search (391 PHN/DPN/FM, 237 migraine, 42 CRPS/CPSP, and 93 SCS/rTMS/tDCS). Irrespective of time since study completion, Figure 2 shows the number of trials eligible for a results search, the number with available results, and the number with results in the peer-reviewed literature for all 6 disorders (PHN, DPN, FM, migraine, CRPS, and CPSP) and both types of devices (SCS and rTMS/tDCS). In the RReMiT database, 55% of 237 eligible trials had any results available and 45% had peer-reviewed results available, irrespective of time since study completion.4 In the RReACT databases for PHN, DPN, and FM, 46% of 391 eligible trials had any results available and 30% had peer-reviewed results available, irrespective of time since study completion.30 In the RReACT databases for CRPS and CPSP, 45% of 42 eligible trials had any results available and 38% had peer-reviewed results available. In the RReADS databases for SCS, rTMS, and tDCS, 46% of 93 eligible trials had any results available and 39% had peer-reviewed results available.
Focusing on just those trials with a specified completion date at least 12 months before data collection, the percentages rise as follows (any results/peer-reviewed results): CRPS 67/60, CPSP 88/75, SCS 56/44, and rTMS/tDCS 67/62. For comparison, in the RReMiT database the percentages rise from 55% for any results and 45% for peer-reviewed results (irrespective of completion date) to 70% and 57%, respectively, at 12 months after study completion.4 Pooling the data sets yields 225 total trials with a specified completion date at least 12 months before data collection, of which 69% have any results and 57% have peer-reviewed results. Extending the time window to studies with a specified completion date at least 2 years before data collection, the proportions show little further change for CRPS, CPSP, SCS, rTMS/tDCS, and migraine. Comparing the RReADS databases, all RReACT databases, and the RReMiT database, there were no significant differences in results availability or in the proportion with peer-reviewed publications.
In evidence-based medicine, randomized controlled trials are the gold standard for establishing the safety and efficacy of an intervention. However, the current infrastructure of evidence-based medicine—the levels of evidence and grades of recommendation—is not necessarily generalizable to evaluating invasive interventions such as SCS. Research on nonpharmacological interventions raises challenges related to randomization procedures and blinding, and randomized controlled trials are not always feasible.1,6,22,26,29,40,43 Despite the expense of new technologies, rigorously controlled trial designs assessing basic efficacy have not been required for SCS device approvals since the advent of clinical trial registries. Spinal cord stimulation trial designs providing an appropriate double-blind control are problematic because the induction of paresthesias in the area of pain is part of the therapeutic assessment, but the absence of masking can lead to several types of bias.6,14 Device studies of rTMS and tDCS can be more rigorously controlled because the devices are noninvasively applied to the skull, and masking noises, combined with mimicking the cutaneous sensation and muscular discomfort these devices cause, can effectively blind the wearer to whether the device has been turned on.1,40,43 Full double-blinding may be possible if all assessments are made by an independent person not involved in operating the device.
Because of the methodological and practical constraints associated with device-based research, we included in the RReADS databases prospective studies that were nonrandomized or observational, as these types of studies are often used when a randomized controlled trial is not feasible. In nonrandomized trials, a rigorous prospective design and focused data collection can reduce bias caused by incomplete data or unmasked outcome assessment.6 Although beyond the scope of this project, our impression of the trials in the RReADS database suggests that many registry trial records contain vaguely described study designs, multiple exploratory endpoints without a clearly specified primary outcome, and non-standard measures only indirectly assessing pain. Pre-specified statistical analysis plans were often either missing or minimal. An ACTTION systematic review comparing registered and published primary outcome specifications in the analgesic trials contained in the initial version of RReACT found many discrepancies.44 In the RReADS database, for some trials, the lack of clarity in design description on the registry was such that we were unable to confirm the accuracy of the trial-publication pairing (eg, SCS trials ACTRN12612000350820 and ISRCTN33292457).
Kessler et al.21 suggest that off-label use of an approved medical device allows clinicians to uncover new uses in an experiential manner, but caution that therapeutic procedures using established devices for new indications do not always receive systematic, rigorous evaluation. The threat of publication bias and selective outcome reporting is particularly great in the field of neuromodulation and intervention-based research, where advances have historically relied on case reports or case series, and issues revolving around study design, blinding, and bias may be unresolvable.6,14,29,42 Ergina et al.6 suggest that for procedure-based interventions, a distinction should be made between explanatory trials, which evaluate the efficacy of the intervention, and pragmatic trials, which assess how the procedure is administered in clinical practice and seek to inform clinical decision-making. Many trials in the RReADS databases, especially of SCS, had primary outcomes other than analgesia, and seemed to be directed toward how best to deliver the intervention or toward evaluating the technical performance of new technologies. The decision to permanently implant an SCS device is made based on the clinical response to temporary lead placement. Spinal cord stimulation trials therefore either recruit patients who have already had a device implanted or provide efficacy analyses only on the patients progressing to full implantation; they are thus comparable to enriched enrollment trials of new drugs, but not to the classic parallel-design, placebo-controlled drug trial or any type of crossover trial.
Device-based investigation faces financial, regulatory, and insurance barriers combined with limited funding.2,20 Only 3% of total neuroscience research funding in 2005 was directed toward medical devices.3 Only 2 rTMS/tDCS trials were registered with an industry primary sponsor. Both rTMS and tDCS are still early in their development, with tDCS not yet approved by the FDA, and rTMS only approved for depression. Many questions remain, such as whether to target deep or superficial structures, how to best apply “sham” stimulation, and the optimum frequency and duration of stimulation sessions to achieve an enduring effect.1,22,26,40,43,52 The market potential of rTMS and tDCS, an important incentive for large-scale industry partnership, is not certain. Spinal cord stimulation, which has been FDA-approved for more than 20 years, has a much larger proportion of industry-funded trials. In the RReADS-SCS database, almost half (45%) of trials eligible for a results search had been registered with an industry primary sponsor.
The new RReACT databases and the RReADS databases suggest an industry role different from that in PHN, DPN, FM, and migraine, all of which have multiple approved drugs and are frequently targeted in phase 2/3 new compound development programs. Only 2 CRPS trials (of AV-411 and lenalidomide), and no CPSP trials, appeared to represent a pharmaceutical company's registration program for a new oral drug. Patients with CRPS may be involved in litigation or disability claims, and many CPSP sufferers have too much neurological impairment to serve as trial subjects. Regulatory approval pathways for therapeutic devices differ from those governing new drug development, which might explain the role of industry in the SCS and rTMS/tDCS trials.32,41
We found the proportion of trials without available results to be similar across a diverse range of pain disorders and treatment strategies, as has been shown to be true in psychiatric disorders and other medical conditions.10,11,13,17,19,23,24,33,36,48,50,51 As would be expected, results are rarely available within a year of trial completion, but the effect of time is similar across all conditions for which this analysis could be conducted. For studies with a specified completion date at least 12 months before data collection, the percentage rises only to 69% for any results and to 57% for peer-reviewed results. Extending the window for results to become available to 24 months only slightly increases these percentages. In migraine, studies with primary industry sponsorship were more likely to be published, but industry sponsorship had no apparent effect in CRPS, CPSP, and device trials.4 For disorders difficult to study using a typical randomized controlled trial design (such as CRPS and CPSP), there may be many fewer trials, but the ones that are conducted are no less likely to have available results. Device trials have substantial design and blinding issues, but not a distinct publication issue.
For all disease areas, what can be done to increase the proportion of registered trials with available results? First, groups advocating for increased transparency in clinical trials research deserve support. For example, AllTrials (http://www.alltrials.net) is an initiative that includes BMJ, the Cochrane Collaboration, PLOS, and the Dartmouth Institute for Health Policy & Clinical Practice.11 Second, existing regulations require posting of study results within 12 months of study completion for certain categories of trials; compliance is poor, and enforcement could be better.10,24,25,33 Third, large registries besides CTG could enable direct posting of study results and better support investigators who wish to do so. Fourth, far too much manual searching was required to create RReACT, RReMiT, and RReADS. For CRPS and CPSP, 17% of trials had 1 or more published articles linked to the trial record, a proportion that increased to 38% when PubMed was manually searched. For rTMS, tDCS, and SCS, the proportion increased from 5% of trials to 39%; in RReMiT, the proportion increased from 20% to 45% through manual searches. If all journals provided the trial registration number, preferably in the abstract, accurate links could become automatic on all registries.4 Fifth, the apparent barriers to publishing negative trials must be reduced, a focus of Project OPEN (http://www.open-project.eu/welcome).4,30,39,55 "Negative" clinical trials have lower publication rates and take longer to be published, although it remains uncertain whether the impact factor of the journal in which they eventually appear is lower.19,48 Peer-reviewed journals welcoming unexpected, controversial, provocative, and/or negative results, such as the Journal of Negative Results in Biomedicine, are a step forward, but their publication costs may put them beyond the reach of investigator-initiated trials with insufficient funds.
Could a narrowly focused, peer-reviewed clinical trials "brief communications" journal, using a standardized results-reporting format and offering authors limited statistical assistance, succeed?
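The fourth point above, automating registry-to-publication links, is technically straightforward once a registration number is indexed with the article record. As an illustrative sketch only (the helper names and example IDs are ours, not drawn from the databases described here), a registry could query PubMed's E-utilities for records carrying a trial's NCT number, which PubMed stores in its Secondary Source ID ([si]) field when journals supply it:

```python
# Illustrative sketch, not the method used to build RReADS/RReACT/RReMiT:
# look up publications linked to a trial registration number via NCBI's
# E-utilities esearch endpoint. Helper names and IDs are hypothetical.
import re
from urllib.parse import urlencode

EUTILS_ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def esearch_url(trial_id: str) -> str:
    """Build a PubMed query for articles tagged with this registration number."""
    # [si] restricts the search to the Secondary Source ID field.
    return EUTILS_ESEARCH + "?" + urlencode({"db": "pubmed", "term": f"{trial_id}[si]"})

def parse_pmids(esearch_xml: str) -> list:
    """Extract PubMed IDs from an esearch XML response (minimal regex parse)."""
    return re.findall(r"<Id>(\d+)</Id>", esearch_xml)

# A canned response of the shape esearch returns:
sample = "<eSearchResult><Count>1</Count><IdList><Id>24042473</Id></IdList></eSearchResult>"
print(esearch_url("NCT01234567"))
print(parse_pmids(sample))
```

Such automated lookups succeed only when a journal has supplied the registration number; the extensive manual searching described above was necessary precisely because many journals do not.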
The RReACT, RReMiT, and RReADS databases are not designed to delve deeper into issues of publication bias and selective reporting of results. However, spanning almost 1000 trials in 6 disorders and 2 types of devices, they make a universal problem clearly apparent: irrespective of time since completion, more than half of all registered clinical trials do not have publicly available results. Evidence-based medicine must more rigorously take into account the sheer magnitude of missing results when formulating treatment recommendations.
Conflict of interest statement
Supported by a research subaward from the Food and Drug Administration (FDA) U101 FD004187-02 (University of Rochester, R. H. Dworkin, PI), Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Networks (ACTTION) public–private partnership with the U.S. FDA. The authors declare no conflicts of interest.
Appendix A. Supplemental Digital Content
Supplemental Digital Content associated with this article can be found online at http://links.lww.com/PAIN/A6, http://links.lww.com/PAIN/A7, http://links.lww.com/PAIN/A8 and http://links.lww.com/PAIN/A9.
References

1. Arana AB, Borckardt JJ, Ricci R, Anderson B, Li X, Linder KJ, Long J, Sackeim HA, George MS. Focal electrical stimulation as a sham control for repetitive transcranial magnetic stimulation: does it truly mimic the cutaneous sensation and pain of active prefrontal repetitive transcranial magnetic stimulation? Brain Stimul 2008;1:44–51.
2. Bergsland J, Elle OJ, Fosse E. Barriers to medical device innovation. Med Devices (Auckl) 2014;7:205–9.
3. Dorsey ER, Vitticore P, De Roulet J, Thompson JP, Carrasco M, Johnston SC, Holloway RG, Moses H III. Financial anatomy of neuroscience research. Ann Neurol 2006;60:652–9.
4. Dufka FL, Dworkin RH, Rowbotham MC. How transparent are migraine clinical trials? Repository of Registered Migraine Trials (RReMiT). Neurology 2014;83:1–10.
5. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev 2011:MR000031.
6. Ergina PL, Cook JA, Blazeby JM, Boutron I, Clavien PA, Reeves BC, Seiler CM; Balliol Collaboration, Altman DG, Aronson JK, Barkun JS, Campbell WB, Cook JA, Feldman LS, Flum DR, Glasziou P, Maddern GJ, Marshall JC, McCulloch P, Nicholl J, Strasberg SM, Meakins JL, Ashby D, Black N, Bunker J, Burton M, Campbell M, Chalkidou K, Chalmers I, de Leval M, Deeks J, Grant A, Gray M, Greenhalgh R, Jenicek M, Kehoe S, Lilford R, Littlejohns P, Loke Y, Madhock R, McPherson K, Rothwell P, Summerskill B, Taggart D, Tekkis P, Thompson M, Treasure T, Trohler U, Vandenbroucke J. Challenges in evaluating surgical innovation. Lancet 2009;374:1097–104.
10. Gill CJ. How often do US-based human subjects research studies register on time, and how often do they post their results? A statistical analysis of the Clinicaltrials.gov database. BMJ Open 2012;2:e001186.
11. Goldacre B. Are clinical trial data shared sufficiently today? No. BMJ 2013;347:f1880.
12. Greene K, Dworkin RH, Rowbotham MC. A snapshot and scorecard for analgesic clinical trials for chronic pain: the RReACT database. PAIN 2012;153:1794–7.
13. Hart B, Lundh A, Bero L. Effect of reporting bias on meta-analyses of drug trials: reanalysis of meta-analyses. BMJ 2012;344:d7202.
14. Higgins JPT, Altman DG, Sterne JAC, editors. Chapter 8: assessing risk of bias in included studies. In: Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration, 2011. Available at: www.cochrane-handbook.org. Accessed November 26, 2014.
15. Huser V, Cimino JJ. Precision and negative predictive value of links between ClinicalTrials.gov and PubMed. AMIA Annu Symp Proc 2012;2012:400–8.
16. Huser V, Cimino JJ. Evaluating adherence to the International Committee of Medical Journal Editors' policy of mandatory, timely clinical trial registration. J Am Med Inform Assoc 2013;20:e169–74.
17. Huser V, Cimino JJ. Linking ClinicalTrials.gov and PubMed to track results of interventional human clinical trials. PLoS One 2013;8:e68409.
18. Jones CW, Handler L, Crowell KE, Keil LG, Weaver MA, Platts-Mills TF. Non-publication of large randomized clinical trials: cross sectional analysis. BMJ 2013;347:f6104.
19. Kanaan Z, Galandiuk S, Abby M, Shannon KV, Dajani D, Hicks N, Rai SN. The value of lesser-impact-factor surgical journals as a source of negative and inconclusive outcomes reporting. Ann Surg 2011;253:619–23.
20. Kelly ML, Malone D, Okun MS, Booth J, Machado AG. Barriers to investigator-initiated deep brain stimulation and device research. Neurology 2014;82:1465–73.
21. Kessler L, Ramsey SD, Tunis S, Sullivan SD. Clinical use of medical devices in the "Bermuda Triangle". Health Aff (Millwood) 2004;23:200–7.
22. Kessler SK, Turkeltaub PE, Benson JG, Hamilton RH. Differences in the experience of active and sham transcranial direct current stimulation. Brain Stimul 2012;5:155–62.
23. Kirillova O. Results and outcome reporting in ClinicalTrials.gov, what makes it happen? PLoS One 2012;7:e37847.
24. Kuehn BM. Few studies reporting results at US government clinical trials site. JAMA 2012;307:651–53.
25. Law MR, Kawasumi Y, Morgan SG. Despite law, fewer than one in eight completed studies of drugs and biologics are reported on time on ClinicalTrials.gov. Health Aff (Millwood) 2011;30:2338–45.
26. Loo CK, Taylor JL, Gandevia SC, McDarmont BN, Mitchell PB, Sachdev PS. Transcranial magnetic stimulation (TMS) in controlled treatment studies: are some "sham" forms active? Biol Psychiatry 2000;47:325–31.
27. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009;302:977–84.
28. McGauran N, Wieseler B, Kreis J, Schüler YB, Kölsch H, Kaiser T. Reporting bias in medical research—a narrative review. Trials 2010;11:37.
29. Meakins JL. Innovation in surgery: the rules of evidence. Am J Surg 2002;183:399–405.
30. Munch T, Dufka FL, Greene K, Smith SM, Dworkin RH, Rowbotham MC. RReACT goes global: perils and pitfalls of constructing a global open-access database of registered analgesic clinical trials and trial results. PAIN 2014;155:1313–17.
31. O'Connell NE, Wand BM, Marston L, Spencer S, Desouza LH. Non-invasive brain stimulation techniques for chronic pain. Cochrane Database Syst Rev 2014;4:CD008208.
32. Peña C, Bowsher K, Costello A, De Luca R, Doll S, Li K, Schroeder M, Stevens T. An overview of FDA medical device regulation as it relates to deep brain stimulation devices. IEEE Trans Neural Syst Rehabil Eng 2007;15:421–4.
33. Prayle AP, Hurley MN, Smyth AR. Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: cross sectional study. BMJ 2012;344:d7373.
34. Priori A, Ciocca M, Parazzini M, Vergari M, Ferrucci R. Transcranial cerebellar direct current stimulation and transcutaneous spinal cord direct current stimulation as innovative tools for neuroscientists. J Physiol 2014;592(pt 16):3345–69.
35. Rising K, Bacchetti P, Bero L. Reporting bias in drug trials submitted to the Food and Drug Administration: review of publication and presentation. PLoS Med 2008;5:e217; discussion e217.
36. Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med 2009;6:1–9.
37. Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM. Publication of NIH funded trials registered in ClinicalTrials.gov: cross sectional analysis. BMJ 2012;344:d7292.
38. Rowbotham MC. The impact of selective publication on clinical research in pain. PAIN 2008;140:401–4.
39. Rowbotham MC. The case for publishing "negative" clinical trials. PAIN 2009;146:225–6.
40. Russo R, Wallace D, Fitzgerald PB, Cooper NR. Perception of comfort during active and sham transcranial direct current stimulation: a double blind study. Brain Stimul 2013;6:946–51.
41. Sastry A. Overview of the US FDA medical device approval process. Curr Cardiol Rep 2014;16:494.
42. Schlaepfer TE, Fins JJ. Deep brain stimulation and the neuroethics of responsible publishing: when one is not enough. JAMA 2010;303:775–6.
43. Sheffer CE, Mennemeier MS, Landes RD, Dornhoffer J, Kimbrell T, Bickel WK, Brackman S, Chelette KC, Brown G, Vuong M. Focal electrical stimulation as an effective sham control for active rTMS and biofeedback treatments. Appl Psychophysiol Biofeedback 2013;38:171–6.
44. Smith SM, Wang AT, Pereira A, Chang RD, McKeown A, Greene K, Rowbotham MC, Burke LB, Coplan P, Gilron I, Hertz SH, Katz NP, Lin AH, McDermott MP, Papadopoulos EJ, Rappaport BA, Sweeney M, Turk DC, Dworkin RH. Discrepancies between registered and published primary outcome specifications in analgesic trials: ACTTION systematic review and recommendations. PAIN 2013;154:2769–74.
45. Song F, Parekh S, Hooper L, Loke YK, Ryder J, Sutton AJ, Hing C, Kwok CS, Pang C, Harvey I. Dissemination and publication of research findings: an updated review of related biases. Health Technol Assess 2010;14:iii, ix–xi, 1–193.
46. Song JJ, Popescu A, Bell RL. Present and potential use of spinal cord stimulation to control chronic pain. Pain Physician 2014;17:235–46.
47. Strech D. Normative arguments and new solutions for the unbiased registration and publication of clinical trials. J Clin Epidemiol 2012;65:276–81.
48. Suñé P, Suñé JM, Montoro JB. Positive outcomes influence the rate and time to publication, but not the impact factor of publications of clinical trial results. PLoS One 2013;8:e54583.
49. Tse T, Williams RJ, Zarin DA. Reporting "basic results" in ClinicalTrials.gov. Chest 2009;136:295–303.
50. Turner EH, Knoepflmacher D, Shapley L. Publication bias in antipsychotic trials: an analysis of efficacy comparing the published literature to the US Food and Drug Administration database. PLoS Med 2012;9:e1001189.
51. Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252–60.
52. Tzabazis A, Aparici CM, Rowbotham MC, Schneider MB, Etkin A, Yeomans DC. Shaped magnetic field pulses by multi-coil repetitive transcranial magnetic stimulation (rTMS) differentially modulate anterior cingulate cortex responses and pain in volunteers and fibromyalgia patients. Mol Pain 2013;9:33.
53. van de Wetering FT, Scholten RJ, Haring T, Clarke M, Hooft L. Trial registration numbers are underreported in biomedical publications. PLoS One 2012;7:e49599.
54. Vedula SS, Bero L, Scherer RW, Dickersin K. Outcome reporting in industry-sponsored trials of gabapentin for off-label use. N Engl J Med 2009;361:1963–71.
55. Wager E, Williams P; Project Overcome failure to Publish nEgative fiNdings Consortium. "Hardly worth the effort?" Medical journals' policies and their editors' and publishers' views on trial registration and publication bias: quantitative and qualitative study. BMJ 2013;347:f5248.
56. Whittington CJ, Kendall T, Fonagy P, Cottrell D, Cotgrove A, Boddington E. Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet 2004;363:1341–5.
57. Zarin DA, Tse T. Medicine. Moving toward transparency of clinical trials. Science 2008;319:1340–2.
58. Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC. The ClinicalTrials.gov results database: update and key issues. N Engl J Med 2011;364:852–60.
Keywords: Clinical trials; Transparency in research; Registries; RReACT; RReADS; RReMiT; Global databases; Publication bias; Selective reporting
© 2015 International Association for the Study of Pain