Factors Associated With Declaration of Trial Registration
Table 2 displays the unadjusted ORs and CIs for factors associated with declaration of trial registration details within reports of registered trials, and Figure 3 displays the adjusted ORs and CIs. Registered trials were more likely to declare registration details in related reports if published in later years (2007: OR 6.3, CI 1.3–29.8; 2008: OR 4.6, CI 1.3–16.0; 2009: OR 9.5, CI 1.8–50.1; 2010: OR 30.5, CI 5.9–158.7; P=0.002) or in a journal that followed ICMJE guidelines (OR 3.9, CI 1.6–9.6; P=0.003). Compared with European trials, trials conducted globally were significantly less likely to declare their registration details (OR 0.3, CI 0.08–0.9), whereas trials conducted in the United States or other regions were no more or less likely to do so (United States: OR 0.7, CI 0.2–2.4; other: OR 1.0, CI 0.3–3.3). Declaration of trial registration also differed according to funding source (P=0.007): trials that did not declare their funding source were significantly less likely to declare their registration details (OR 0.1, CI 0.02–0.4), as were commercially funded trials, although not significantly so (OR 0.4, CI 0.2–1.1). None of the interaction terms was statistically significant. The model fitted the data adequately (Hosmer–Lemeshow goodness-of-fit test, P=0.78). Excluding abstracts in sensitivity analyses did not qualitatively alter these results (data not shown).
DISCUSSION
Despite the ICMJE's 2005 endorsement of mandatory trial registration (8), the majority (76%) of randomized controlled trials in kidney transplantation reported from 2005 to 2010 were not prospectively registered, and even when trials were registered, registration details were frequently (74%) not cited in related reports. After controlling for other factors, we found that trials that did not state their funding source were less likely to be registered and, when registered, were less likely to declare their registration details in trial reports. Although still suboptimal, the situation is improving over time, with both trial registration and declaration of registration details more likely in later years.
Consistent with our findings, previous research has shown low rates of trial registration (15–17). Commercial funding is known to affect rates of publication and the size and direction of the results presented (18–20). However, to our knowledge, no previous study has demonstrated an association between funding source and the uptake of trial registration or its subsequent declaration. This underutilization of trial registration subverts the intended bias minimization and transparency of contemporary research. In addition, when a trial's registration details are not declared in its published reports, any potential manipulations are less detectable to the reader.
Our study has several limitations. First, because our analyses were based on reports of published trials, we did not include unpublished trials, which may differ from published trials in their rate of registration, although probably in the direction of less registration. Second, as a retrospective cohort study, our study is more prone to bias, for example selection and measurement bias, than a prospective study (21). We attempted to overcome these limitations by using predefined hypotheses, inclusion criteria, study factors, and analyses. Finally, our study included only trials related to kidney transplantation, which may limit the generalizability of our results. However, we deliberately designed the study this way, to reflect the clinical reality of the end users of research working in a medical specialty.
If trial registration is to increase transparency and scientific rigor, evidence of registration must be available to users of research. This requires the cooperation of study investigators and journal editors, and our findings suggest that neither group consistently implements best practice. To improve this situation, more journals need to make prospective trial registration a prerequisite for publication and make trial registration details clearly visible in related reports. Currently, 910 journals unofficially “follow” the ICMJE's uniform requirements for manuscripts submitted to biomedical journals (http://www.icmje.org/journals.html, accessed on January 1, 2011). Among the trials in our study published in journals that followed ICMJE guidelines, only 45% were registered, and of those, only 49% declared this in their publication; clearly, these journals are not fully implementing ICMJE policy. The success of trial registration will depend on all journals consistently implementing these policies. We therefore suggest that the ICMJE and other editorial groups encourage their affiliated journals to enforce their trial registration requirements.
Some argue that, because of the lack of policing and repercussions, the effect of trial registers is negligible and that changes, omissions, and suppressions of study protocols are the rule (22); we disagree. Trial registers may even deter commercial incentives to conduct trials: the increased transparency makes the conduct of a trial public knowledge, so the necessary investment will be scrutinized by market forces for its opportunity cost, and fewer trials may be conducted because of the increased risk (23). If open access to trial protocols through trial registries is an inadequate method of detecting subsequent reporting bias (24), then trial registration by itself may be inadequate to prevent research misconduct, and multiple complementary methods are probably required to combat publication bias. One alternative is to provide open access to the raw data from a trial, so that anyone may scrutinize the presented results. Spurred on by the Wakefield debacle and other cases in which only prolonged investigation fully revealed gross data manipulation, the proposal to allow access to raw trial data has gained momentum (25, 26). Clearly, any efforts to improve reporting transparency in the medical community are desirable. The EQUATOR network (http://www.equator-network.org/about-equator/) seeks to promote transparent and accurate reporting of all research studies and is therefore an important central repository for such efforts, for example, the Standard Protocol Items for Randomized Trials (SPIRIT) initiative and the GPP2 guidelines, which promote unbiased and ethical reporting, respectively (27, 28).
The aim of trial registration is to increase trial transparency and accountability. Although the situation is improving over time, currently, the trial registration process is not being used effectively, which means users of research are less able to identify protocol deviations. A concerted effort from all parties, to increase education, awareness, implementation, and utilization of trial registration and its principles, is necessary to produce higher quality research. It remains to be seen whether other efforts such as the proposal to allow access to trial data or the use of reporting guidelines will improve trial transparency and accountability.
MATERIALS AND METHODS
We conducted a cohort study of all randomized controlled trials in kidney transplantation published at least once in a journal between October 2005 and December 2010. We identified these trials from the Cochrane Renal Group's specialized register, which is updated daily and contains records of randomized trials in nephrology identified from searches of MEDLINE, Embase, and CENTRAL; records identified by hand searching selected journals and the proceedings of major conferences are also continuously added (29). We chose the lower limit of October 2005 because, according to ICMJE policy, all trials (both those initiated before and after July 2005) should have been registered by September 2005 (8). We chose the upper limit of December 2010 to reduce any selection bias that might arise from differences among medical specialty journals in the time lag from publication to indexing by the National Library of Medicine (last search May 2011). Once all eligible trials were identified, we retrieved any additional reports, including journal articles and conference abstracts, that related to each trial. We excluded trials published only as conference abstracts. To create a cohort of comparable studies, we also excluded trials reported only in a language other than English, trials of other solid organ transplants, and trials in which the unit randomized was not a transplant recipient. We established the registration status of each trial by conducting investigator and title searches of the WHO International Clinical Trials Registry Platform (ICTRP: http://apps.who.int/trialsearch/) between November 25, 2009, and May 14, 2011. Trials not found through the ICTRP were considered unregistered.
Subsequently, for trials that were registered, we examined all published reports that related to that trial for the presence of trial registry identifiers, to determine whether trial registration details had been declared. Data collection and analysis were conducted by one author and checked by another, without blinding to the trial names or authors.
As many trials are reported more than once, we first identified reports from the same trial using trial characteristics (such as sample size, study intervention, location of trial, and study population) and then grouped them. To determine the factors associated with trial registration, we conducted logistic regression analyses with the trial as the unit of analysis. We considered the following study factors: number of reports per trial (1, 2, >2), sample size (<200, ≥200), earliest date of publication (before 2007, 2007, 2008, 2009, 2010), funding source (investigator, commercial, not stated), whether the primary outcome favored the intervention (P<0.05 or not), region in which the trial was conducted (global or more than two continents, United States, Europe, other), and whether the trial had at least one report in a journal affiliated with the ICMJE (determined either by the journal being listed on the ICMJE Web site on January 6, 2010, or as stated or implied in each journal's “instructions to authors”). Where trials were funded by both investigator and commercial sources, we recorded the trial as commercially funded. Trial funding source was recorded from trial reports, trial declarations, and conflict of interest statements. The primary outcome of a trial was identified as the outcome reported as such in the trial report or, if this was unclear, from the sample size calculation. The earliest year of trial publication entered and remained in the adjusted analyses regardless of statistical significance, as calendar year was central to our research question. To allow for potential effect modification among study factors, we prespecified potential interaction terms.
We hypothesized that commercially funded trials were more likely to be larger and reported more than once; hence, we considered potential for commercial funding source being associated with multiple reports and commercial funding source being associated with larger sample size.
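The report-grouping step described above, matching multiple reports to a single trial on shared characteristics, can be sketched in Python. This is an illustrative sketch only; the record fields and values shown are hypothetical, not the authors' actual data schema.

```python
from collections import defaultdict

# Hypothetical report records; field names and values are illustrative only.
reports = [
    {"id": "R1", "sample_size": 120, "intervention": "tacrolimus", "region": "Europe"},
    {"id": "R2", "sample_size": 120, "intervention": "tacrolimus", "region": "Europe"},
    {"id": "R3", "sample_size": 450, "intervention": "basiliximab", "region": "United States"},
]

# Group reports sharing the same trial characteristics into one trial:
# the tuple of characteristics acts as the trial key.
trials = defaultdict(list)
for r in reports:
    key = (r["sample_size"], r["intervention"], r["region"])
    trials[key].append(r["id"])

# Each key now represents one trial; its value lists the related report IDs.
# Here R1 and R2 collapse into a single trial, giving two trials in total.
```

In the paper's analyses, the resulting trial (not the report) is then the unit of analysis for the first regression model.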
To determine the factors associated with declaration of trial registration details within trial reports, we conducted logistic regression analyses using generalized estimating equations. The unit of analysis was the trial report, and we adjusted for any clustering effect arising from multiple reports of the same trial through a sandwich estimator (30). We considered the following study factors, using the same categories and rules as in the first model: number of reports per trial, sample size, date of publication, funding source, whether the primary outcome reported favored the intervention, region in which the trial was conducted, and whether the trial report was published in a journal affiliated with the ICMJE. To allow for potential effect modification among study factors, we prespecified potential interaction terms. In line with existing studies (16, 17, 29–31), we hypothesized that commercially funded trials were more likely to reach conclusions favoring the drug than noncommercially funded trials; therefore, we considered the potential for commercial funding source to be associated with a statistically significant result.
For both models, associations between covariates and outcomes were reported as ORs with 95% CIs. All factors with an unadjusted association of P less than 0.25 were considered in the adjusted analyses and were sequentially eliminated by backward selection if the adjusted association showed P more than 0.05. All P values were calculated from Wald chi-squared test statistics. The final models were checked using a Hosmer–Lemeshow goodness-of-fit test. To investigate the robustness of our analyses, we repeated them excluding abstract reports. Statistical analyses were carried out using Stata software (Stata 11; StataCorp LP, College Station, TX).
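The unadjusted ORs and Wald 95% CIs reported in the analyses above can be computed from a 2x2 table of counts. The following is a minimal sketch, assuming hypothetical counts (not the study's actual data): 30 of 40 trials in ICMJE-following journals registered versus 20 of 60 in other journals.

```python
import math

def odds_ratio_wald_ci(a, b, c, d, z=1.96):
    """Unadjusted odds ratio with a Wald 95% CI from a 2x2 table.

    a: exposed, outcome present;   b: exposed, outcome absent
    c: unexposed, outcome present; d: unexposed, outcome absent
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) is the square root of the sum of
    # reciprocals of the four cell counts.
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se_log_or)
    upper = math.exp(math.log(or_) + z * se_log_or)
    return or_, lower, upper

# Hypothetical counts: 30/40 registered in ICMJE journals vs 20/60 elsewhere.
or_, lower, upper = odds_ratio_wald_ci(30, 10, 20, 40)
# or_ is 6.0; the CI excludes 1, so the association would be significant.
```

The adjusted analyses in the paper additionally condition on the other study factors (and, for the second model, use GEE with a sandwich estimator), which this single-table sketch does not reproduce.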
1. Rennie D. Trial registration: A great idea switches from ignored to irresistible. JAMA 2004; 292: 1359.
2. Ghersi D, Pang T. En route to international clinical trial transparency. Lancet 2008; 372: 1531.
3. Simes RJ. Publication bias: The case for an international registry of clinical trials. J Clin Oncol 1986; 4: 1529.
4. Ross JS, Mulvey GK, Hines EM, et al. Trial publication after registration in ClinicalTrials.Gov: A cross-sectional analysis. PLoS Med 2009; 6: e1000144.
5. Cowley A, Skene A, Stainer K, et al. The effect of lorcainide on arrhythmias and survival in patients with acute myocardial infarction: An example of publication bias. Int J Cardiol 1993; 40: 161.
6. Whittington CJ, Kendall T, Fonagy P, et al. Selective serotonin reuptake inhibitors in childhood depression: Systematic review of published versus unpublished data. Lancet 2004; 363: 1341.
7. Dickersin K, Chan S, Chalmers TC, et al. Publication bias and clinical trials. Control Clin Trials 1987; 8: 343.
8. De Angelis CD, Drazen JM, Frizelle FA, et al. Clinical trial registration: A statement from the International Committee of Medical Journal Editors. Med J Aust 2004; 181: 293.
9. Evans T, Gulmezoglu M, Pang T. Registering clinical trials: An essential role for WHO. Lancet 2004; 363: 1413.
10. Krleža-Jeric K, Lemmens T. 7th revision of the Declaration of Helsinki: Good news for the transparency of clinical trials. Croat Med J 2009; 50: 105.
11. Krleža-Jeric K, Chan AW, Dickersin K, et al. Principles for international registration of protocol information and results from human trials of health related interventions: Ottawa statement (part 1). BMJ 2005; 330: 956.
13. The Surgery Journal Editors Group. Consensus statement on mandatory registration of clinical trials. Ann Surg 2007; 245: 505.
15. Mathieu S, Boutron I, Moher D, et al. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA 2009; 302: 977.
16. Rasmussen N, Lee K, Bero L. Association of trial registration with the results and conclusions of published trials of new oncology drugs. Trials 2009; 10: 116.
17. Reveiz L, Cortés-Jofré M, Asenjo Lobos C, et al. Influence of trial registration on reporting quality of randomized trials: Study from highest ranked journals. J Clin Epidemiol 2010; 11: 1216.
18. Als-Nielsen B, Chen W, Gluud C, et al. Association of funding and conclusions in randomized drug trials: A reflection of treatment effect or adverse events? JAMA 2003; 290: 921.
19. Bhandari M, Busse JW, Jackowski D, et al. Association between industry funding and statistically significant pro-industry findings in medical and surgical randomized trials. CMAJ 2004; 170: 477.
20. Lexchin J, Bero LA, Djulbegovic B, et al. Pharmaceutical industry sponsorship and research outcome and quality: Systematic review. BMJ 2003; 326: 1167.
21. Grimes DA, Schulz KF. Bias and causal associations in observational research. Lancet 2002; 359: 248.
22. Moja L. Clinical trials: Trial registration cannot alone transform scientific conduct. Nat Rev Urol 2010; 7: 7.
23. Dahm M, González P, Porteiro N. Trials, tricks and transparency: How disclosure rules affect clinical knowledge. J Health Econ 2009; 28: 1141.
24. Smyth RM, Kirkham JJ, Jacoby A, et al. Frequency and reasons for outcome reporting bias in clinical trials: Interviews with trialists. BMJ 2011; 342: c7153.
25. Chan AW. Access to clinical trial data. BMJ 2011; 342: d80.
26. Godlee F. Goodbye PubMed, hello raw data. BMJ 2011; 342: d212.
27. Chan AW, Tetzlaff J, Altman DG, et al. The SPIRIT initiative: Defining standard protocol items for randomized trials. German J Evid Quality Health Care (suppl) 2008; 102: s27.
28. Graf C, Battisti W, Bridges D, et al. Good publication practice for communicating company sponsored medical research: The GPP2 guidelines. BMJ 2009; 339: b4330.
29. Henderson LK, Craig JC, Willis NS, et al. How to write a Cochrane systematic review. Nephrology (Carlton) 2010; 15: 617.
30. Rabe-Hesketh S, Skrondal A. Multilevel and Longitudinal Modeling Using Stata [ed. 2]. College Station, TX: Stata Press 2008.
31. Song F, Parekh S, Hooper L, et al. Dissemination and publication of research findings: An updated review of related biases. Health Technol Assess 2010; 14: 1.
Keywords: Trial registration; Registration declaration; Cohort study; Kidney transplantation
© 2011 Lippincott Williams & Wilkins, Inc.