Epidemiology: March 2009 - Volume 20 - Issue 2
doi: 10.1097/EDE.0b013e318196784a
The Changing Face of Epidemiology

Epidemiology, Data Sharing, and the Challenge of Scientific Replication

Hernán, Miguel A.; Wilcox, Allen J.


Editors’ note: This series addresses topics that affect epidemiologists across a range of specialties. Commentaries are first invited as talks at symposia organized by the Editors. This paper was originally presented at the 2008 Society for Epidemiologic Research Annual Meeting in Chicago.

Editors’ Note: Related articles appear on pages 169 and 172.

On 23 March 1989, Fleischmann and Pons1 claimed they had achieved nuclear fusion in a jar of water at room temperature. Their claim was met with disbelief mixed with excitement. If cold fusion was possible, it could solve the energy crisis. In the following weeks, physicists from the best research centers in the world sought to replicate the Pons-Fleischmann experiment. They failed.

On 3 May 1989, while Pons was in Washington waiting to meet with President Bush's advisors, the American Physical Society concluded that every possible variant of the Pons-Fleischmann experiment had been tried without success. The claim of cold fusion was declared invalid. Taubes provided dramatic accounts of the failure of cold fusion in Science2 and elsewhere.3

The cold fusion story shows the importance of replication to the scientific process. Scientists could not replicate the cold fusion claim, and so, regardless of the reputations of the investigators or the journal involved, the claim was rejected.

Epidemiologic research is more complex than cold fusion science. Little of the cold fusion story resembles what happens when epidemiologic findings are reported (other than the part about Taubes). To illustrate this, imagine the following scenario. Suppose that in the next issue of Epidemiology, investigators from the Framingham Heart Study were to report that high levels of physical activity after age 40 increase the probability of reaching age 90 by 6-fold.

Few epidemiologists would be in a position to replicate this finding. (There are not many follow-up studies with 50 years of physical activity data for thousands of persons.) Even if another such study could be found, would it guarantee replication? What if the study had been conducted in 19th century Singapore? A different outcome could be explained by potential effect modification by diet, or genetic factors, or type of physical activity, or its method of measurement.

This is precisely why so many epidemiologic questions remain unresolved: if our study produces a finding that your study does not confirm, we can always find an explanation that you will not be able to refute. We might argue that, say, bedtime can modify the effect of interest and that our study was conducted in Spain, where people stay up late. Granted, a large number of experimental details may be omitted in the methods section of a typical paper on cold fusion, but the sheer number of design and analytic decisions and the number of variables that need to be accounted for in a typical epidemiologic study dwarf those in the cold fusion experiments. No wonder epidemiology frustrates policy makers.

Confronted with the impossibility of direct replication, we do what is practical. We evaluate the claim according to the quality of the study, the carefulness of the analysis, and the consistency of the findings with those from previous studies. We propose alternative explanations (confounding, selection bias, measurement error, random variability, model misspecification) and finally make up our minds one way or the other.

What we rarely do is reexamine the data themselves.

Given the difficulties of direct replication in epidemiology, data sharing may indeed be the next best thing. If epidemiologic data (along with adequate descriptors) were made available, epidemiologists could replicate at least the final steps in the process leading to publication: the data analysis. This form of replication is a far cry from the Replication (with a capital R) that is central to the scientific endeavor, but it is better than no replication at all. The analysis could be approached in different ways, sensitivity analyses could be used to identify crucial assumptions, simulations could be performed, and data inconsistencies could be discovered. In short, interested investigators could engage in a more informed scientific debate.
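
As one concrete illustration (ours, not drawn from any particular study), consider the kind of sensitivity analysis a reader could run once data and code are in hand: asking how strong an unmeasured confounder would have to be to explain away a reported association. The minimal Python sketch below applies one standard bias formula for a binary unmeasured confounder; the function and parameter values are hypothetical.

```python
# Illustrative sensitivity analysis for an unmeasured binary confounder,
# using a classical bias formula (e.g., Bross 1966; Schlesselman 1978):
#   B = (1 + p1*(rr_ud - 1)) / (1 + p0*(rr_ud - 1))
# where rr_ud is the confounder-outcome risk ratio and p1, p0 are the
# confounder prevalences among the exposed and unexposed. The corrected
# risk ratio is then the observed risk ratio divided by B.

def adjusted_rr(rr_obs: float, rr_ud: float, p1: float, p0: float) -> float:
    """Risk ratio corrected for a hypothetical unmeasured confounder."""
    bias = (1 + p1 * (rr_ud - 1)) / (1 + p0 * (rr_ud - 1))
    return rr_obs / bias

# Example: probe an observed 6-fold risk ratio over a grid of assumptions.
for rr_ud in (2.0, 4.0, 8.0):
    for p1, p0 in ((0.5, 0.1), (0.8, 0.2)):
        print(f"RR_UD={rr_ud}, p1={p1}, p0={p0}: "
              f"adjusted RR = {adjusted_rr(6.0, rr_ud, p1, p0):.2f}")
```

Even this simple exercise makes explicit which assumptions the reported estimate leans on; with the actual data available, the same logic could be extended to model specification and variable definitions.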

Of course, data sharing is nothing new: social scientists have been doing it for decades, and genomic research relies on freely available data. Data sharing for epidemiologists, however, may be a harder issue. As Colditz4 reminds us, whatever the potential benefits of data sharing, there are dangers as well. The list includes breach of confidentiality, inadequate attribution of credit to the original investigators, misuse of the data by incompetent investigators or by groups with special interests, discouragement of young epidemiologists from generating new data, and the cost and complexity of providing fully documented, self-explanatory data files. Although these barriers may seem insuperable, a number of epidemiologic studies already have mechanisms in place that work.5 For example, the Framingham Heart Study, the Atherosclerosis Risk in Communities Study, and the Women's Health Initiative provide data to interested and qualified investigators, even to those whose tax money was not used to create the resource.

The Editors of Epidemiology recognize the risks and burdens of making data publicly available. We also see undeniable benefits. As a small step to encourage our authors to think in this direction, we propose the following. We invite our authors to share their data and computer code when the burden is minimal. For example, for articles based on publicly accessible data or simulations, the provision of computer code alone would provide everything necessary to replicate the results. Making their questionnaires available is another way that authors can contribute to a fuller assessment of their results.6 Accordingly, we have revised our instructions for authors as follows.

The Editors invite authors to provide the following information either on their own website or as an electronic appendix that can be published with their manuscript:

* Analytic code used for the analysis of publicly available data

* Code used to develop and analyze simulation data (for illustration, see the sketch after this list)

* Questionnaires (either whole, or a subset of key analytic variables)

* Authors are welcome to provide more extensive electronic data if they wish
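
To make the second item concrete, the fragment below sketches what a fully self-contained deposit of simulation code might look like. It is a toy example of our own, not taken from any submitted paper; the point is simply that a fixed random seed and a complete script let any reader reproduce the reported numbers exactly.

```python
# Toy simulation of the kind an author might deposit with a manuscript.
# Fixing the seed makes every reported number exactly reproducible.
import random

random.seed(20090301)  # seed recorded in the (hypothetical) appendix

def simulate_cohort(n: int, p_exposed: float, p_unexposed: float) -> float:
    """Simulate one cohort of n exposed and n unexposed persons;
    return the estimated risk ratio."""
    exposed_cases = sum(random.random() < p_exposed for _ in range(n))
    unexposed_cases = sum(random.random() < p_unexposed for _ in range(n))
    return (exposed_cases / n) / (unexposed_cases / n)

# 1000 simulated cohorts, 500 exposed and 500 unexposed persons each.
estimates = sorted(simulate_cohort(500, 0.12, 0.06) for _ in range(1000))
print("approximate median RR:", estimates[499])
print("approximate 95% simulation interval:", estimates[24], estimates[974])
```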

Just to be clear, this is simply an invitation. Our purpose is to encourage our colleagues to consider these issues, not to impose requirements. We do this not only because it is the right thing for science, but also because it is in some sense inevitable. As Samet says,7 data sharing is not an “if” question but a “when” question. All epidemiologists should be aware that their data, unless destroyed or lost, will one day be in the public domain. The better we prepare for this inevitability (and the sooner we make it happen), the more good our data might accomplish.


REFERENCES

1. Fleischmann M, Pons S, Hawkins M. Electrochemically induced nuclear fusion of deuterium. J Electroanal Chem. 1989;261:301–308.

2. Taubes G. Cold fusion conundrum at Texas A&M. Science. 1990:1299.

3. Taubes G. Bad Science: The Short Life and Weird Times of Cold Fusion. New York: Random House; 1993.

4. Colditz G. Constraints on data sharing for profession-based cohort studies: experience from the Nurses' Health Study. Epidemiology. 2009;20:169–171.

5. Framingham Heart Study. Available at: http://www.framinghamheartstudy.org/research/proposal.html. Accessed November 18, 2008.

6. Wilcox AJ. The quest for better questionnaires. Am J Epidemiol. 1999;150:1261–1262.

7. Samet J. Data: to share or not to share? Epidemiology. 2009;20:172–174.


© 2009 Lippincott Williams & Wilkins, Inc.
