Epidemiology: March 2012 - Volume 23 - Issue 2
doi: 10.1097/EDE.0b013e318245c05b
Perspectives

Commentary: Should Preregistration of Epidemiologic Study Protocols Become Compulsory? Reflections and a Counterproposal

Lash, Timothy L.a,b; Vandenbroucke, Jan P.c,d


Author Information

From the aAarhus University Hospital, Aarhus, Denmark; bDivision of Public Health Sciences, Department of Epidemiology, Wake Forest University School of Medicine, Winston-Salem, NC; cNetherlands Royal Academy of Arts and Sciences, Amsterdam, The Netherlands; and dDepartment of Clinical Epidemiology, Leiden University Medical Center, Leiden, The Netherlands.

The authors reported no financial interests related to this research.

Correspondence: Timothy L. Lash, Department of Clinical Epidemiology, Olof Palmes Alle 43-45, 8200 Aarhus, Denmark. E-mail: tl@dce.au.dk.

There is an ongoing debate regarding preregistration of epidemiologic study protocols.1–4 We examine the basic idea that preregistration of study protocols and their associated hypotheses would enhance the reliability of observational research. We define instances in which preregistration would be useful, and we support a counter-proposal: a public registry containing descriptions of collected epidemiologic data.

A decision to institute compulsory preregistration of protocols for observational studies—to be enforced by editors and reviewers as sometimes suggested1–3—is not to be taken lightly, and should not be endorsed solely on the basis of an analogous system instituted for randomized trials. Negative reactions toward compulsory registration have been published elsewhere.5–12 Note that it is the compulsory preregistration of protocols that is most at issue. There are already mechanisms by which epidemiologists can voluntarily preregister their protocols,13 if they feel preregistration is advantageous. The open question is whether such preregistration should be required in order for observational research to be published in leading journals (assuming the same “enforcement” mechanism would be adopted as for clinical trials).

We examine the validity of the analogy between randomized trials and observational studies with regard to the value of preregistering protocols. We then examine the idea that prespecification of a hypothesis enhances the credibility of results, and that avoidance of “false positives” should always be a primary concern. We discuss research settings in which preregistration of an observational study protocol might be of value. Finally, as a counter-proposal, we support the establishment of a public registry of collected epidemiologic data, to include descriptions of the study sample, data elements, and the methods by which the data were collected. Such a registry would better accomplish some of the stated goals of protocol preregistration, such as securing public knowledge about ongoing epidemiologic research and providing a means to identify all potentially available evidence about a research topic.


The Flawed Analogy to Clinical Trial Protocols

We urge epidemiologists to avoid being too hastily convinced by arguments about the proximity of our science to the science of randomized trials (eg, Williams et al13). It is perfectly possible that a system that has worked well for one might not work so well for the other, as noted by the editors of EPIDEMIOLOGY.14 For example, the strong calls for preregistration of trials15,16 focused on the importance of reporting all trial results pertaining to the initial motivation of the trial, as an obligation both to participants and to society. The original motivation was to counter the suppression of results, a goal with which we agree. In contrast, many of the commentaries in support of preregistration of protocols of observational studies have focused mainly on its purported utility in identifying false-positive associations that may arise because of deviations from a priori hypotheses and methods.13,17 That is, commentators see suppression or down-weighting of results as a public good—exactly the opposite of the motivation of trialists' registries.

A further distinction between preregistration of trial protocols and preregistration of observational epidemiology study protocols is the wide gap between their respective practical requirements for successful implementation.

First, would preregistration of observational studies have to occur before data collection and data set assembly, before data analysis, or before a publication is submitted for peer review? This question is easy to answer for a randomized trial—the protocol must be registered before the first subject is assigned to treatment by a randomization scheme15—but there is no such bright line in observational epidemiology. Almost any ordering of hypothesis formulation, protocol writing, data collection and analysis, and protocol registration is possible. Some of these orderings would seem to subvert some of the claimed benefits of separating a priori hypotheses from a posteriori hypotheses, but who would know the actual order of events? All possible sequences would appear the same at the time of manuscript submission, when the control of editors and reviewers is proposed to begin.

Furthermore, the fact that all of these orderings are sometimes implemented benefits the productivity of epidemiologic research. For example, in secondary analyses of existing data sets or analyses of registry data, hypothesis formulation can be virtually coincident with a brief initial data examination (eg, to check whether a particular analysis is feasible before writing a grant). These practices improve the yield from our science, but would be inconsistent with the idea that study protocols should be registered before beginning a study and then adhered to rigidly.

Second, when would a protocol revision require amendment of the preregistered protocol? Imagine a protocol to enroll a cohort of people diagnosed with cancer between 1985 and 2001. After preregistration and after the study has begun, the investigators learn that they can include study subjects through 2002. Should they amend the study protocol to improve the study's precision, or should they retain the original enrollment criterion to avoid the trouble of modifying the protocol? If they try to justify this marginal revision, made after they had access to the data, will they risk being labeled as data dredgers? It seems inevitable that researchers will either constantly update their preregistered protocols, settle for less-than-optimal data collection or analysis methods,5,12 or hope that incongruities go unnoticed or unpunished once the manuscript has been submitted. A last alternative, which we would never endorse but which also seems inevitable, would be for the manuscript to incompletely describe the true methods of data collection and analysis so that the manuscript description aligns more closely with the protocol than with reality. This unintended consequence of compulsory preregistration would do more harm than good.

These and other practical considerations must be resolved before implementing a policy requiring preregistration of epidemiology protocols and enforcement by editors and reviewers. It is inadequate to argue that, because trialists have managed to implement such a system, observational epidemiologists can do so as well.13 Although some of these practical obstacles might be resolved, there is an even more important consideration: Is preregistration necessary for better science?


Prespecified Hypotheses Are Unlikely to Improve Inference

Proponents of compulsory protocol registration have argued that comparing the hypothesis in a preregistered protocol with its published result will help to evaluate the reliability of the findings.14,17 They have focused especially on the notion that preregistered protocols containing preregistered hypotheses provide a safeguard against selective reporting of some results (eg, statistically significant results) from among many generated results, and therefore reduce type I errors. We argue that the reliability of an inference regarding a particular hypothesis is a function of the quality of the designs and analyses of the studies included in the evidence base, in conjunction with the credibility of the associated hypothesis. These have nothing to do with prespecification of the hypothesis, preregistration, or how many other hypotheses were simultaneously evaluated.14,18–20 In fact, similar arguments concerning multiple comparisons were raised and dismissed 2 decades ago.21 (An exception is hypothesis scanning when decisions must be made about the allocation of resources.22) Yet this analogous and long-settled debate appears to have been lost from view by the proponents of protocol preregistration.

It is sometimes regarded as self-evident that inferences based on results about prespecified hypotheses are more reliable. However, the literature on the philosophy of science teaches us that there is no basis for a large difference in strength of posterior belief when evidence pertains to a prespecified hypothesis versus when it pertains to an equivalent hypothesis specified after viewing the data.23
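
As a minimal illustration in Bayesian terms (our formulation, not drawn from the cited source), the posterior probability of a hypothesis $H$ given evidence $E$ is

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},$$

and none of the terms on the right-hand side refers to whether $H$ was written down before or after $E$ was examined; only the prior credibility of $H$ and how specifically it predicts $E$ matter.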

What is more important is whether a study is designed as a “severe test” of a hypothesis,24 and whether the hypothesis has credibility from independent data or other branches of biologic science. Studies of prespecified hypotheses have 2 advantages in this regard over studies with hypotheses suggested by the data. First, a prespecified hypothesis should make a specific prediction about the expected data, a prediction that is unlikely to be fulfilled if the hypothesis is false. For example, a prediction about the return of a comet at a given day and time is a very specific prediction and therefore a severe test of an underlying theory. However, a study's test of a hypothesis can be severe even if the hypothesis was suggested by, or followed from, the data—as the comet example shows. Existing data that were not used to suggest the hypothesis—but are consistent with predictions from it—can also be regarded as a severe test. The process is the same whether a hypothesis was prespecified or suggested by the data: the more specific the comparison of the prediction and the existing data, the more unlikely it is that the data would fit the hypothesis, were the hypothesis false.

Second, prespecified hypotheses often seem more credible than hypotheses suggested by the data, which seems like an advantage. However, prespecified hypotheses are usually the basis for funding support, and therefore often take little risk, invoke little imagination, and stray only a short distance from what is already well understood. Therefore, although they may be credible, data collection and results consistent with them yield only small incremental gains in knowledge. Hypotheses suggested by the data are not so constricted; they can take risks, invoke imagination, and stray a long distance from what is already well understood.25 Although many such hypotheses will eventually be refuted, those that gain support from accumulated evidence are often the ones that yield the largest gains in knowledge. In fact, new explanations in science often gain credibility most rapidly when they can explain previously observed data patterns that were theretofore not well understood.26

What seems to be happening is that the mantra of type I error avoidance, which has served randomized trials well, is now being carried over to observational research that has mostly other aims. However, the trial analogy again serves observational epidemiologic research poorly. Trials are designed to provide answers with the intent of directly informing decision making; ie, the allocation of resources and the submission of future patients to a specific treatment. In contrast, observational epidemiologic research is an exercise in measurement, with a slow accumulation of evidence from multiple scientific disciplines. When one removes the decision-making perspective from this debate, the concerns about balancing type I errors against type II errors disappear, at least with regard to the interpretation of single-study results.

Scientific progress requires publication of those insights that might carry us forward.27 The best ideas that follow from a particular study will come from the insights of those who explore the consequences of its findings and who look for alternative explanations (such as bias and confounding). When alternative explanations are ruled out in a credible way, observational epidemiologic data may do as well as trials in providing a basis for regulation. This process is incremental and often slow. Whether a particular hypothesis or analysis was prespecified plays little role in the process.


When Might Advance Specifications Be Valuable?

There are instances in which prespecification of a hypothesis might be important in observational research. Consider a heated controversy concerning a topic with large societal and economic consequences, in which conflicting results have been obtained, perhaps even from analyses of the same data. To make progress, this might be an instance in which stakeholders sit together beforehand to agree on a protocol that will convince everybody. The main purpose of such an agreement, however, is not “prespecification”; it is to bind stakeholders (who may distrust each other's analyses) to a procedure they all trust.


A Counter-proposal: Registration of Existing Data From Observational Research

We recognize that certain systems must be in place to organize and conduct epidemiologic research, including most importantly systems to ensure human subjects' protection, allocation of scarce research resources, and access to publication space. Emphatically, we do not propose that one should do research or analyze data without any protocol. Almost all science starts with a preconceived idea, and almost all scientific undertakings will have some protocol. In fact, it is impossible to observe something without having in mind what one would like to observe.28

We do not, however, endorse establishment of systems to organize and regulate ideas, including most importantly hypotheses, by compulsory preregistration of observational epidemiology protocols. As a counter-proposal, we suggest development of a publicly available registry that describes data already collected for observational studies of human subjects. This idea has been proposed elsewhere,2,3,10 but has received less attention than the idea of registering protocols. This registry would provide descriptions of the data with regard to the inclusion and exclusion criteria for the study sample (eg, time, place, and common characteristics), the data elements, and the methods by which the data elements were collected. The goal of this registry would be to permit an independent investigator to determine by review of the registry, perhaps in combination with one specific inquiry to the data steward, whether the dataset could provide evidence pertinent to a particular research topic. Examples of similar registries include the defunct IARC registry of cancer epidemiology studies29 and the current registry of European pharmacoepidemiology data resources (http://encepp.eu/encepp/resourcesDatabase.jsp).
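
To make the proposal concrete, the sketch below shows one minimal form such a registry entry might take. The representation (Python) and all field names are our illustrative assumptions; no existing registry uses this schema.

# A minimal, hypothetical sketch of a data-registry entry, assuming the three
# descriptive components proposed above: study-sample criteria, data elements,
# and collection methods. All names are illustrative.
from dataclasses import dataclass, field
from typing import List

@dataclass
class RegistryEntry:
    study_name: str
    data_steward_contact: str
    # Inclusion and exclusion criteria for the study sample
    # (eg, time, place, and common characteristics)
    sample_criteria: List[str] = field(default_factory=list)
    # Data elements collected (eg, exposures, outcomes, covariates)
    data_elements: List[str] = field(default_factory=list)
    # How each class of data elements was collected
    # (eg, registry linkage, interview, medical record review)
    collection_methods: List[str] = field(default_factory=list)

# Example entry that an independent investigator could screen for relevance
# before sending a specific inquiry to the data steward.
entry = RegistryEntry(
    study_name="Hypothetical cancer cohort, diagnoses 1985-2002",
    data_steward_contact="steward@example.org",
    sample_criteria=["cancer diagnosis between 1985 and 2002", "resident of Denmark"],
    data_elements=["tumor stage", "comorbidity", "vital status"],
    collection_methods=["national registry linkage", "medical record review"],
)
print(entry.study_name)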

Such a registry meets important stated goals of earlier recommendations for preregistration of observational epidemiologic research. It secures public knowledge about ongoing epidemiologic research, it provides a means for identifying all potentially available evidence (eg, for meta-analysis), and it prevents unproductive duplication. Compared with preregistration of protocols with specific hypotheses, it actually achieves these goals more effectively. Registration of data collections allows determination of what actually could be done with the data, whereas a registry of protocols with specific hypotheses provides information only on what has been done or is being done with the data. The latter are subsets of the former, and thus less complete. Furthermore, registration of collected data should be simpler than registration of full protocols, should less often require amendment, and should allay fears that novel hypotheses will be stolen from registries before their registrants can publish results—a risk that is greater when analyses are done on existing data.

In contrast, compulsory preregistration of protocols, controlled by journals and reviewers, will inevitably dampen creativity in generating hypotheses, in revising research procedures during study conduct or data analysis (even when these revisions would improve the validity or precision of the resulting estimate), and in selecting research results that should be shared with others in publications.


CONCLUSION

Instead of compulsory preregistration of epidemiologic protocols, we propose (a) a voluntary public registration of descriptions of the data available in observational studies, and (b) the use of existing registries for specific studies that aim at immediate decision-making. Decisions on publication should not be influenced by whether the study or the information about that study was preregistered.

We also encourage more collegial debate on preregistration. Labeling entire fields of epidemiologic research as “monumental disasters”1 is unproductive, particularly without any accompanying illustration of how preregistration of protocols would have averted said “disasters.” For example, we do not see how prespecification of hypotheses or preregistration of protocols would have avoided finding a “pseudo-protective” effect of hormone replacement therapy on myocardial infarction. In fact, in many of the pertinent studies, this association was among the prespecified aims that secured grant funding, so almost certainly, these hypotheses and protocols would have been preregistered. The preregistration would have lent even more credibility to the results, so it would not have averted the perceived “disaster.” Likewise, citing criticisms of epidemiologic research30–32 without citing well-reasoned answers to those criticisms33–35 gives an incomplete view. Poorly grounded ethical appeals, incomplete analogies to registration of clinical trial protocols, and one-sided inspection of the contributions of our science to medicine and public health may lead to a rushed decision by medical editors on this topic, and ultimately box our science into a corner from which it will be difficult to escape. We owe each other better than that.


REFERENCES

1. Bracken MB. Pre-registration of epidemiology protocols: a commentary in support. Epidemiology. 2011;22:135–137.

2. European Center for Ecotoxicology and Toxicology of Chemicals. Workshop: Enhancement of the Scientific Process and Transparency of Observational Epidemiology Studies. Workshop Report No. 18. Available at http://www.ecetoc.org/workshops.

3. Rushton L. Should protocols for observational research be registered? Occup Environ Med. 2011;68:84–86.

4. Editorial. Should protocols for observational research be registered? Lancet. 2010;375:348.

5. Sørensen HT, Rothman KJ. The prognosis for research. BMJ. 2010;340.

6. Vandenbroucke JP. Registering observational research: second thoughts. Lancet. 2010;375:982–983.

7. Samet JM. To register or not register. Epidemiology. 2010;21:610–611.

8. Takkouche B, Norman G. Meta-analysis protocol registration: sed quis custodiet ipsos custodes? [But who will guard the guardians?]. Epidemiology. 2010;21:614–615.

9. Lash TL. Pre-registration of study protocols is unlikely to improve the yield from our science, but other strategies might. Epidemiology. 2010;21:612–613.

10. Poole C. A vision of accessible epidemiology. Epidemiology. 2010;21:616–618.

11. Vandenbroucke JP. Pre-registration of epidemiologic studies: an ill-founded mix of ideas. Epidemiology. 2010;21:619–620.

12. Pearce N. Registration of protocols for observational research is unnecessary and would do more harm than good. Occup Environ Med. 2011;68:86–88.

13. Williams RJ, Tse T, Harlan WR, Zarin DA. Registration of observational studies: is it time? CMAJ. 2010;182:1638–1642.

14. The Editors. The registration of observational studies—when metaphors go bad. Epidemiology. 2010;21:607–609.

15. De Angelis C, Drazen JM, Frizelle FA, et al. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351:1250–1251.

16. Krleža-Jerić K, Chan AW, Dickersin K, Sim I, Grimshaw J, Gluud C. Principles for international registration of protocol information and results from human trials of health related interventions: Ottawa statement (part 1). BMJ. 2005;330:956–958.

17. Loder E, Groves T, MacAuley D. Registration of observational studies: the next step towards research transparency. BMJ. 2010;340:375–376.

18. Cole P. The hypothesis generating machine. Epidemiology. 1993;4:271–273.

19. Michels KB, Rosner BA. Data trawling: to fish or not to fish. Lancet. 1996;348:1152–1153.

20. Savitz DA. Commentary: prior specification of hypotheses: cause or just correlate of informative studies. Int J Epidemiol. 2001;30:957–958.

21. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990;1:43–46.

22. Greenland S, Robins JM. Empirical-Bayes adjustments for multiple comparisons are sometimes useful. Epidemiology. 1991;2:244–251.

23. Lipton P. Testing hypotheses: prediction and prejudice. Science. 2005;307:219–221.

24. Katzav JK. Should we assess climate model predictions in light of severe tests? Eos, Transactions, American Geophysical Union. 2011;92:195.

25. Parascandola M. Epistemic risk: empirical science and the fear of being wrong. Law Probab Risk. 2010;9:201–214.

26. Brush SG. Accommodation or prediction? Science. 2005;308:1409–1412.

27. Medawar PB. Is the scientific paper a fraud? In: The Threat and the Glory: Reflections on Science and Scientists. Oxford: Oxford University Press; 1991:228–233.

28. Popper KR. Objective Knowledge: An Evolutionary Approach. Rev ed. Oxford: Oxford University Press; 1979:259.

29. Sankaranarayanan R, Wahrendorf J, Démaret E, eds. Directory of On-Going Research in Cancer Epidemiology 1996. IARC Scientific Publication No. 137. Lyon: International Agency for Research on Cancer; 1996.

30. Taubes G. Epidemiology faces its limits. Science. 1995;269:164–169.

31. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124.

32. Boffetta P, McLaughlin JK, La Vecchia C, Tarone RE, Lipworth L, Blot WJ. False-positive results in cancer epidemiology: a plea for epistemological modesty. J Natl Cancer Inst. 2008;100:988–995.

33. Wynder EL. Invited commentary: response to science article, “Epidemiology faces its limits.” Am J Epidemiol. 1996;143:747–749.

34. Goodman S, Greenland S. Why most published research findings are false: problems in the analysis. PLoS Med. 2007;4:e168.

35. Blair A, Saracci R, Vineis P, et al. Epidemiology, public health, and the rhetoric of false positives. Environ Health Perspect. 2009;117:1809–1813.

ACKNOWLEDGMENTS

We appreciate very helpful discussions with Charles Poole in an early phase of writing the paper. We also acknowledge critical comments by Henrik Toft Sørensen, Neil Pearce, Ken Rothman, Peter Gøtzsche, Myriam Cevallos, and Paolo Vineis. All opinions expressed in this commentary are ours.


© 2012 Lippincott Williams & Wilkins, Inc.
