EPIDEMIOLOGY Watching
“EPIDEMIOLOGY watching” is a forum to address broad aspects of epidemiologic research – its history, its methods, its impact – and to stimulate discussion among its students and practitioners.
Monday, November 08, 2010
Registration of observational research: a series of lively debates!
 
Two recent debates have addressed the registration of observational research (discussed in the September issue of this journal [1]). One debate was at the August meeting of the International Conference on Pharmacoepidemiology (ICPE) at Brighton, UK, and the second was at the September meeting of the American College of Epidemiology (ACE) at San Francisco. [Full disclosure: I took part in both.]
 
The idea of registering observational research was launched at the end of 2009 in a meeting organized by a group representing the European chemical industry [2] – an industry that feels epidemiology is behaving irresponsibly. Thereafter, the registration idea was enthusiastically embraced by Lancet [3] and BMJ [4] with arguments that reveal the great confusion that prevails when observational research is discussed and pitted against RCTs.
 
BMJ editors consider observational research 'vulnerable to bias and selective reporting': researchers 'may … craft a paper that selectively emphasises certain results, often those that are statistically significant or provocative'. In the future, BMJ will demand 'a clear statement of whether the hypothesis arose before or after the inspection of the data' (if afterwards, the journal will demand extra explanations), and they will ask 'whether the study was registered, and if registered whether the protocol was registered before data acquisition or analysis began'. BMJ’s reason is that they are interested only in papers that have clear and immediate clinical relevance.
 
Are we allowed to have new ideas while exploring existing data? At ICPE, the debate was about multiplicity in pharmacoepidemiology. The case against multiple analyses of pharmacoepidemiologic data was made by Stan Young and Stuart Pocock, based on the same reasoning that makes subgroup analyses 'not done' in randomized controlled trials (RCTs). On the other side, Ken Rothman and Sonia Hernandez-Diaz argued that multiple analyses are a hallmark of good science: good science investigates several aspects of a question and is not limited to a single prespecified question and analysis. Epidemiologists learn during data analysis, in particular in large complex databases; they behave like lab scientists who adapt their experiments and change their protocols after seeing the results of the previous experiment.
 
Consider real epidemiology practice. Of course, we always tell our PhD students to have prespecified research questions and a prespecified plan when 'attacking' a data set. The reason is not to make the results more believable. The reason is to avoid getting lost in your data analysis: to know what you are doing, why you are doing it, and where you came from, just as lab scientists keep notes of their experiments in lab journals.
 
Almost all science starts with a preconceived idea, and a lot of science will have some protocol. Think of archeologists. They will start digging somewhere with an idea in mind – otherwise they would not get funded. Suppose that while working at the site, they notice that the strange shape of the next hill is also promising. After a test dig, artefacts are found. Are they 'data dredgers' whose findings should be treated with suspicion?
 
At ACE in San Francisco, the debate session was about registration of observational research. The 'pro' position was defended by Douglas Weed of DLW Consulting Services [5], who largely approved of the document of the chemical industry. In Weed's view, true transparency is an obligation to society and means making protocols available beforehand. On the other side, Richard Rothenberg (editor of the Annals of Epidemiology) felt that a journal requirement for registration would promote standardization and restrict an editor's mandate to foster innovation and creativity. I also spoke against registration, on the premise that RCTs – which seem to be the guiding beacons – are, in fact, scientifically the 'odd man out'. RCTs try to avoid multiple and post hoc analyses at all costs. These safeguards are necessary for the credibility of the small number of RCTs that usually suffices for drug approval. Indeed, the whims of an investigator who sees something interesting in the data of a single trial should not bear on medical decisions that have consequences for millions of patients. Registration of RCTs was set up as a stringent measure to avoid selective reporting, and rightly so.
 
Recently, Mark Parascandola defined 'epistemic risk': "In drawing an inferential conclusion or accepting a hypothesis as true, one takes on an 'epistemic risk' – the risk of being wrong." [6]. The RCT procedure can be seen as minimizing epistemic risk – that is, minimizing the risk of a wrong answer to the key question. However, minimizing type I error increases type II error, and hence prevents us from seeing new things. It is not clear which error (type I or II) is worse when we try to explain Nature. Much good can come from an idea that initially lacks strong support, or that seems at first 'useless', or that, while wrong, leads to new insights. Maximal avoidance of type I error is contrary to an important aim of science: to discover new explanations.
 
What seems to be happening is that the mantra of 'type I error avoidance' that serves RCTs so well is now indiscriminately carried over to observational research. If the BMJ editorial is followed to the letter, any new idea that occurs during data analysis should be registered first – and even then the researcher is cheating, since the idea occurred after seeing the data.
 
The support of Lancet and BMJ for registration rests on the premise that all sciences should behave like RCTs. Imagine telling a theoretical physicist, an evolutionary biologist, a molecular biologist or an astronomer that she should not publish any thought or finding other than the ones she had in mind several years earlier! Science requires publication of those insights that seem to carry us forward – not the whole history of all wrong ideas, mishaps and detours. The acceptance of your paper will come from others who explore the consequences of your ideas, and who look for alternative explanations (like bias and confounding). Often, this is a long process. When alternative explanations are ruled out in a credible way, observational data may lead to action – even regulation – as much as RCTs do. Whether a particular hypothesis or analysis was prespecified plays no role in that process.
 
The debate on registration of observational research touches on the fundamentals of how scientific progress is made. It is no real surprise that the answer differs from one science to another. That is what makes these debates interesting and exciting.
 
More debates are forthcoming. The next one that I know of is on 14 December 2010 at the Amsterdam Medical Center in the Netherlands, where the lecturer is Kay Dickersin, Director of the US Cochrane Center at Johns Hopkins. She has published extensively on selective publication, which can wreck meta-analyses of RCTs. Rumour has it that there are budding plans to bring up the topic at the 3rd North American Congress of Epidemiology in Montreal in 2011 as well.
 
If you would like to comment, email me directly at epidemiologyblog@gmail.com or submit your comment via the journal website, which requires a password-protected login.
 
[1] These EPIDEMIOLOGY Commentaries are freely available at http://journals.lww.com/epidem/toc/2010/09000
[2] Workshop: Enhancement of the Scientific Process and Transparency of Observational Epidemiology Studies, 24 –25 September 2009, London. Workshop Report No. 18, Brussels, November 2009, European Centre for Ecotoxicology and Toxicology of Chemicals. Available at: http://links.lww.com/EDE/A415.
[3] The Editors. Should protocols for observational studies be registered? Lancet. 2010;375:348.
[4] Loder E, Groves T, MacAuley D. Registration of observational studies: the next step towards research transparency. BMJ. 2010;340:375–376.
[5] DLW Consulting Services: http://douglaslweed.com/
[6] Parascandola M. Epistemic risk: empirical science and the fear of being wrong. Law, Probability and Risk. 2010;9:201–214. doi:10.1093/lpr/mgq005
 
© Jan P Vandenbroucke, 2010
 
 
11/11/2010
blog reader said:
Tom Jefferson wrote: I do Cochrane reviews in the field of acute respiratory infections. Overall, the quality of cohort and case-control studies addressing these topics is low. Extensive gaming by sponsors in the choice of journals for publication completes the picture. Non-randomized studies are also often used for marketing purposes, or to support decisions already taken on dubious or no evidence. Registration would give insight into how many non-randomized comparative studies are planned, executed and published, and their rationale. If publication of the full protocol were made compulsory, any deviation between preplanned and post hoc analyses could be spotted. Authors would have to record and justify such deviations. This would not stifle innovation, natural curiosity and inspiration; it would merely document their genesis and development. It would discourage multiple unplanned analyses and would help to bring transparency to a field where it is badly needed.
11/11/2010
Prof. Timothy L. Lash said:
To me, the ethical arguments continue to deserve a lot of attention. The ethical duties to publish trial results (subjects agree to be manipulated to answer one main question) are very different from the ethical duties to publish observational study results (subjects agree to be watched to answer a wide range of questions). For the former, the investigators have an ethical duty to publish the main results. For the latter (and for secondary analyses in the former), the investigators have an ethical duty to publish results of merit, and the subjects implicitly accept the investigators' judgments about those choices. I believe the editors of Epidemiology have done a good job of highlighting the basis for the distinctions in their editorial.
11/10/2010
blog reader said:
Paul Glasziou writes: I agree with JPV that registration of observational questions should not be compulsory. After all, many new questions arise within the cohorts of randomized trials that were not the registered trial question. As an example, about 10 years after completing a large randomized trial of statins (the LIPID study), we developed new analytic methods to work out how often cholesterol monitoring needed to be done. These methods were then applied to this 10 year old data; something we could not have predicted doing at the time of the study. Any registration process needs to recognize that new ideas and new methods will be developed and tested on old data sets.
11/9/2010
Rutger A. Middelburg said:
RCTs are registered to avoid both suppression of negative results and false positives from multiple analyses. A much-overlooked difference from observational research is that to start an RCT we should be at equipoise, while observational research can start anywhere from complete ignorance to near certainty. This has a bearing on the need to publish all results. When reading an RCT, the reader should be able to trust that the statistical analyses from this RCT give a reliable answer to the question addressed. This is only possible if all RCTs publish all results. Most importantly, it is only possible if all RCTs were started at equipoise. When reading observational research, the reader should be provided with a discussion of the new results in the light of all prior evidence, but this is in no way related to registration of the research. New ideas that arise during data analyses can be viewed in light of prior evidence, even if the prior evidence was not gathered before the analyses.
11/8/2010
Charles Poole said:
As Phil Cole and I have explained [1,2], it is impossible for a hypothesis to arise after inspecting the data. If the hypothesis had not been generated before, as many have, the mere decision to examine a relevant association generates it. Thus, the BMJ's requirement is always met and no explanation for so-called "post hoc hypotheses" is ever required. It is possible, however, to generate a hypothesis after collecting some data with which a relevant association can be computed. Much of value that we have learned from trials, and most of value that we have learned from observational studies, pertains to hypotheses that arose after relevant data were collected (and, by logical necessity, before those data were inspected). 1. Cole P. The hypothesis generating machine. Epidemiology. 1993;4:271–273. 2. Poole C. Induction does not exist in epidemiology, either. In: Rothman KJ, ed. Causal Inference. Chestnut Hill, MA: Epidemiology Resources Inc; 1988:153–162.
About the Author

Jan P. Vandenbroucke
Jan P. Vandenbroucke is a professor of Clinical Epidemiology at Leiden University and an Academy Professor of the Royal Netherlands Academy of Arts and Sciences. He studied medicine in Belgium and epidemiology at Harvard. He serves on the advisory board of The Lancet, is co-editor of the James Lind Library and the People’s Epidemiologic Library, and is co-author of the STROBE guidelines (Strengthening the Reporting of Observational Studies in Epidemiology).
