Epidemiology. March 2011; Volume 22, Issue 2.
doi: 10.1097/EDE.0b013e318207fc7c
On Registration of Observational Studies: Commentary

Preregistration of Epidemiology Protocols: A Commentary in Support

Bracken, Michael B.


Erratum

Reference

Bracken MB. Preregistration of epidemiology protocols: a commentary in support. Epidemiology. 2011;22:135–137.

On page 136, a paper by A. Stang, C. Poole and O. Kuss is quoted as addressing the “tyranny of significant testing.” This phrase should instead be “tyranny of significance testing.”

Epidemiology. 22(3):447, May 2011.


Author Information

From the Center for Perinatal, Pediatric and Environmental Epidemiology, Yale University Schools of Medicine and Public Health, New Haven, CT.

Correspondence: Michael B. Bracken, Center for Perinatal, Pediatric and Environmental Epidemiology, Yale University Schools of Medicine and Public Health, 1 Church Street, New Haven, CT 06510. E-mail: michael.bracken@yale.edu.

It has been proposed1 that observational epidemiology protocols, just like those for randomized controlled trials (RCTs),2 should be preregistered. Unlike the editors of the Lancet and BMJ, who have endorsed the proposal,3,4 the Epidemiology editors5 and 5 commentators6–10 all resisted. One does not have to agree with every detail of the workshop document1 to believe that protocol registration has substantial advantages in epidemiologic research. Some of those advantages are described here.

Thankfully, neither the editors5 nor any of the 5 commentators suggest that a protocol should not form part of the research enterprise. But if protocols are important for deciding whether research should be supported on the basis of a priori hypotheses (as is required for all research, from PhD dissertations to NIH and MRC submissions), are they not equally necessary for consumers of research (including other researchers) to judge how much the completed research deviated from what was planned? Poole9 and Vandenbroucke10 argue that it is irrelevant whether a hypothesis was prespecified, but surely this is incorrect. Prespecified exposures and outcomes will have been measured with much more attention to detail (eg, dose, recency, cumulative exposure), with more careful strategies to avoid bias with respect to the primary exposures and outcomes, and with more extensive data collected on potentially important confounders than will exposures and outcomes assessed only as covariates. Investigators often do not reveal in publications whether their hypotheses were initially planned. When covariates are published as the exposure of primary interest, a common occurrence in large epidemiology projects, problems of interpretation and replication arise.

In a seminal paper, Chan et al11 compared protocols for randomized trials with the final trial reports and documented substantial bias in outcome reporting. In 34% of reports, the protocol primary outcome was published as a secondary outcome; in 26%, the protocol primary outcome was not reported at all; in 19%, protocol secondary outcomes were published as primary; and in 17%, the published primary outcome was not mentioned in the protocol. Overall, 62% of published trials showed discrepancies between the protocol and the published primary outcome. Perhaps not surprisingly, statistically significant outcomes declared in the original protocols were 2 to 3 times more likely to be fully reported in the trial report than nonsignificant outcomes. Of particular relevance to observational epidemiology, statistically significant outcomes concerning harm (which can often be studied only observationally) that were defined in the protocol were 4 to 5 times more likely to be reported than nonsignificant outcomes. Interestingly, 86% of authors denied the existence of unreported outcomes despite clear evidence to the contrary.

Why does this matter? Epidemiologists believe that replication of reported associations is necessary to arrive at conclusions about causation, and this demands careful synthesis of the extant literature. However, it is increasingly clear that the publication process, with its selective reporting of outcomes, can bias entire bodies of evidence. Reviewers must be able to distinguish an association of interest that was studied but deemed “uninteresting” and left unreported from one that was never studied at all. When investigators fail to report the results of tested hypotheses because they are null, synthesis by meta-analysis is severely biased.12 Protocol registration offers the most transparent way to determine what is likely to have been studied and analyzed but not reported. Outcome reporting bias (and failure to report trial results at all) is the rationale for the databases of randomized trial protocols now widely accepted in the United States (http://clinicaltrials.gov/), the United Kingdom (http://www.controlled-trials.com/isrctn/), and by the World Health Organization (http://www.who.int/ictrp/en/).
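
A small simulation makes the bias concrete. The sketch below is purely illustrative (it is not from this article or from reference 12, and every name and parameter in it is my own assumption): studies of a null exposure are generated, but only positive, nominally significant results are “published,” and the pooled estimate from the published subset is spuriously elevated.

```python
import random
import statistics

def simulate_reporting_bias(n_studies=2000, n_per_arm=50, seed=7):
    """Simulate studies whose TRUE effect is zero; 'publish' only those
    reaching one-sided nominal significance, then compare pooled means."""
    random.seed(seed)
    all_effects, reported = [], []
    for _ in range(n_studies):
        exposed = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_arm)]
        diff = statistics.mean(exposed) - statistics.mean(control)
        se = (statistics.variance(exposed) / n_per_arm
              + statistics.variance(control) / n_per_arm) ** 0.5
        all_effects.append(diff)
        if diff / se > 1.96:  # only positive "significant" findings written up
            reported.append(diff)
    return statistics.mean(all_effects), statistics.mean(reported)

unbiased, biased = simulate_reporting_bias()
print(f"pooled effect, all studies:      {unbiased:+.3f}")  # close to zero
print(f"pooled effect, 'published' only: {biased:+.3f}")    # well above zero
```

Under these assumptions the full set of studies pools to roughly zero, while the “published” subset pools to a clearly positive effect; this is exactly the distortion that registered protocols make detectable.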

Chan et al11 identified RCTs with up to 160 declared outcomes per trial, which provides ample opportunity for outcome reporting bias. However, many epidemiology protocols will cover even broader areas of research, with possibly a greater number of outcomes; the protocols will be amended repeatedly and will spin off smaller research projects over time. This makes protocol registration even more imperative, and it certainly does not preclude registering parent protocols and amendments in databases similar to those established for RCTs. I agree with the editors of Epidemiology that the number of explored hypotheses in epidemiologic studies “is rarely recorded and impossible to infer.” However, that is all the more reason to register protocols documenting at least the number of preplanned hypotheses. While not a perfect system, it would be a step in the right direction.

Guidelines have been written to improve scientific reporting for RCTs (CONSORT),13 for meta-analyses (PRISMA),14 for diagnostic studies (STARD),15 and for animal studies (ARRIVE).16 The guidelines for observational epidemiology (STROBE)17 have been adopted by dozens of journals, but not by Epidemiology,18 for which they would seem particularly suited. The editors disingenuously suggest that this is a complex issue reported in 31 dense pages5; in fact, the 22-item checklist covering all observational epidemiology designs fits in a single table on the STROBE website.17 Given the space constraints on published articles, many of the items required by these checklists can be fully documented only through access to the study protocol.

Takkouche and Norman8 comment that the PRISMA guidelines for meta-analysis are “insidious” and wonder who will control them. The answer, as for all science, is editors and reviewers. These authors state that they “are not aware of any register for meta-analysis,”8 although the Cochrane Collaboration,19 the largest single producer of meta-analyses, registers and mandates prior publication of a protocol before publishing a systematic review and meta-analysis. Other consortia using meta-analysis in the social and political sciences (Campbell20) and in genetics (HuGEnet21) also require protocol registration.

Like the Epidemiology commentators, few researchers would suggest that subgroup analyses should be published only if prespecified in a protocol. But to understand fully the importance of a reported association, and to avoid over-interpreting associations that may have occurred merely by chance, readers need to know whether an association or subgroup comparison was specified a priori (in a protocol) and thus tests a hypothesis, or whether it is an ad hoc, hypothesis-generating subgroup analysis.22 Much of the methodological work on this problem has been done in the randomized-trial literature, but the statistical and theoretical problems transfer readily to observational epidemiology.
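
The danger of chance findings is easy to quantify. The following back-of-the-envelope sketch (my own illustration, assuming independent subgroup tests at a conventional α of 0.05, an assumption not made in the article) shows how quickly the probability of at least one spurious “significant” finding grows with the number of subgroups examined:

```python
# Probability of at least one false-positive "finding" when every null
# hypothesis is true, assuming independent tests at alpha = 0.05.
alpha = 0.05
for k in (1, 5, 10, 20):
    print(f"{k:2d} subgroups -> P(>=1 false positive) = {1 - (1 - alpha) ** k:.2f}")
# 1 -> 0.05, 5 -> 0.23, 10 -> 0.40, 20 -> 0.64
```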

The editors of Epidemiology also comment on the hypothesis-free genome-wide association studies undertaken by genetic epidemiologists, stating that “they accept false positive results as a logical part of the scientific process.”5 This is inaccurate. In fact, genetic epidemiologists take considerable care to correct significance testing for multiple comparisons23 and have developed several new methodologies to handle this problem.24 Observational epidemiologists may not be making half a million comparisons, but they can make several hundred in analyzing a single hypothesis, and they would do well to pay more attention to this major methodological problem in the analysis of observational data. Rather than worrying about the “tyranny of significance testing,”25 genetic epidemiologists are more concerned about the anarchy that follows from a lack of significance testing.
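
As a concrete illustration of this kind of correction, the step-up false discovery rate procedure of Benjamini and Hochberg, closely related to the methods discussed in references 23 and 24, can be sketched in a few lines. This is an illustrative implementation of my own, not code from the cited papers:

```python
def benjamini_hochberg(pvals, q=0.05):
    """Step-up FDR procedure: return True where the null is rejected,
    controlling the expected false discovery rate at level q."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])  # indices by ascending p
    # The largest rank k with p_(k) <= (k/m) * q sets the rejection cutoff.
    k_max = 0
    for rank, i in enumerate(order, start=1):
        if pvals[i] <= rank / m * q:
            k_max = rank
    reject = [False] * m
    for rank, i in enumerate(order, start=1):
        reject[i] = rank <= k_max
    return reject

# With hundreds of comparisons, a naive alpha of 0.05 would flag many true
# nulls; this procedure keeps the expected share of false discoveries among
# the rejected hypotheses at or below q.
print(benjamini_hochberg([0.001, 0.008, 0.039, 0.041, 0.042, 0.60], q=0.05))
# -> [True, True, False, False, False, False]
```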

In discussing ethical objections to preregistration of protocols, the editors also err in their belief that subjects in longitudinal studies are almost always in less “jeopardy” than those in RCTs.5 There are no “deep differences” between the two: depending on the trial or study, subjects in observational studies may expose themselves to considerable risk (such as from false-negative test results or inadvertent loss of data, not least DNA) and to as much inconvenience as participants in trials. Comparing these two groups of participants does not “denigrate the contribution of clinical-trial volunteers.” Rather, failing to compare them denigrates the contributions of subjects in observational epidemiology.

Poole9 applauds meta-analysts who ignored whether the primary outcome in their meta-analysis was a primary outcome in the original research. I do not. A subgroup analysis of studies that did or did not prespecify the primary outcome would be important for assessing potential outcome-reporting bias, and that information must be gleaned from a protocol. A further key point is how the primary outcome was specified in the protocol for the meta-analysis itself. Outcome reporting bias is as much a concern in a meta-analysis as in the original studies.26

Reporting guidelines and protocol registration have been promulgated mostly for RCTs. It is widely recognized that RCTs generally have the greatest validity of any study design, yet rather than moving observational epidemiology as close as possible to this “gold standard,” the editors and several commentators opine that observational epidemiology is somehow different and should not use these tools. This is alarmingly short-sighted for epidemiology. It is not a novel idea that observational epidemiology should strive to emulate the methodological strengths of RCTs.27 Lash7 cites several successes of observational epidemiology, but there have also been monumental disasters. On the basis of evidence from observational epidemiology, millions of women used hormone-replacement therapy and millions more people consume food supplements, expecting benefits that RCTs have since refuted and, in some cases, shown to be harmful.

We now know, from systematic reviews and comparisons with RCTs, that entire bodies of epidemiologic literature can be systematically biased.28 Whether observational epidemiology has, on balance, contributed more to the good of public health than not remains an open question. Many commentators believe it has not,29 leading one to suggest that: “… the simple expedient of closing down most university departments of epidemiology could both extinguish this endlessly fertile source of anxiety-mongering while simultaneously releasing funds for serious research.”30 Such skepticism should encourage us to conduct observational epidemiology with as much experimental rigor as possible. Protocol registration is not a “bureaucratic structure” with “no bearing” on evidence synthesis, as declaimed by the editors5; rather, registration is central to understanding the validity of the studies being synthesized. It is also only one small part of the reforms needed if we are to avoid becoming a discipline of little consequence.


ABOUT THE AUTHOR

MICHAEL BRACKEN is the Susan Dwight Bliss Professor of Epidemiology at Yale University where he has taught for 40 years. He is a former President of the American College of Epidemiology and of the Society for Epidemiologic Research and was convener of the first Congress of Epidemiology in 2001. He co-edits the Cochrane Neonatal Review Group.


REFERENCES

1. Workshop: Enhancement of the Scientific Process and Transparency of Observational Epidemiology Studies, 24–25 September 2009, London. Brussels, Belgium: European Centre for Ecotoxicology and Toxicology of Chemicals; November 2009. Workshop Report No. 18. Available at: http://links.lww.com/EDE/A415.

2. DeAngelis C, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351:1250–1251.

3. The editors. Should protocols for observational studies be registered? Lancet. 2010;375:348.

4. Loder E, Groves T, MacAuley D. Registration of observational studies: the next step toward research transparency. BMJ. 2010;340:375–376.

5. The editors. The registration of observational studies—when metaphors go bad. Epidemiology. 2010;21:607–609.

6. Samet J. To register or not to register. Epidemiology. 2010;21:610–611.

7. Lash TL. Preregistration of study proposals is unlikely to improve the yield from our sciences, but other strategies might. Epidemiology. 2010;21:612–613.

8. Takkouche B, Norman G. Meta-analysis protocol registration: Sed quis custodiet ipsos custodes? But who will guard the guardians? Epidemiology. 2010;21:614–615.

9. Poole C. A vision of accessible epidemiology. Epidemiology. 2010;21:616–618.

10. Vandenbroucke JP. Pre-registration of epidemiologic studies: an ill-founded mix of ideas. Epidemiology. 2010;21:619–620.

11. Chan AW, Hróbjartsson A, Haahr MT, Gøtzsche PC, Altman DG. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457–2465.

12. Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS ONE. 2008;3:e3081.




16. Kilkenny C, Browne WJ, Cuthill IC, Emerson M, Altman DG. Improving bioscience research reporting: the ARRIVE guidelines for reporting animal research. PLoS Biol. 2010;8:e1000412.


18. The editors. Probing STROBE. Epidemiology. 2007;18:789–790.




22. Pocock SJ, Assmann SE, Enos LE, Kasten LE. Subgroup analysis, covariate adjustment and baseline comparisons in clinical trial reporting: current practice and problems. Stat Med. 2002;21:2917–2930.

23. Storey JD, Tibshirani R. Statistical significance for genomewide studies. Proc Natl Acad Sci U S A. 2003;100:9440–9445.

24. Chen JJ, Robertson PK, Schell MJ. The false discovery rate: a key concept in large-scale genetic studies. Cancer Control. 2010;17:58–62.

25. Stang A, Poole C, Kuss O. The ongoing tyranny of statistical significance testing in biomedical research. Eur J Epidemiol. 2010;25:225–230.

26. Kirkham JJ, Altman DG, Gamble C, Dodd S, Smyth R, Williamson PR. The impact of outcome reporting bias in randomized controlled trials on a cohort of systematic reviews. BMJ. 2010;340:c365.

27. Feinstein AR. Scientific standards in epidemiologic studies of the menace of daily life. Science. 1989;243:1257–1263.

28. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124.

29. Taubes G. Epidemiology faces its limits. Science. 1995;269:164–169.

30. Le Fanu J. The Rise and Fall of Modern Medicine. New York: Carroll & Graf; 1999.


© 2011 Lippincott Williams & Wilkins, Inc.
