Commentary: CHILDREN

Should Epidemiologists Always Publish Their Results?

Yes, Almost Always

Kheifets, Leeka; Olsen, Jørn

doi: 10.1097/EDE.0b013e318177813d


Epidemiology is the discipline that studies determinants of health and disease. Without epidemiology, we would have very little evidence-based medicine or public health actions. The need for epidemiologic research does not, however, guarantee that results from these studies will be welcomed by all. Results may represent unpleasant suggestions: that a disease is related to one’s lifestyle (obesity, sexually transmitted diseases); that a disease is related to a product one is marketing (tobacco or cell phones); that society is systematically failing some of its most vulnerable members (drug abuse); or indeed that the study itself stigmatizes a certain section of society.

In addition, not all epidemiologic findings—perhaps very few indeed—can be regarded as rock solid, even after taking into consideration combined results from many studies. Replication is always good but, unfortunately, epidemiologists do not often have the option to repeat their studies as might be possible in a laboratory. The data we have may be the only data we will ever get. In most situations, there is inevitable uncertainty about the strength and causal implications of the studied association. The important question is what to do with potentially useful but unwelcome results: what about findings that might be controversial (because of the topic they examine), hard to believe (because of low prior probability due to an unknown biologic mechanism, or simply because they are novel), or likely to cause undue public concern (because the public might not understand the degree of remaining uncertainty)?

It goes without saying that work of poor quality should usually not be published. We say “usually,” because some studies of poor quality have provided important results, such as the first studies on thalidomide and congenital malformations.1 However, some go beyond this and advocate the presentation of evidence only when it is clear and convincing. We disagree. We believe that so long as there are no fatal flaws, findings should be published even if they may be controversial or subsequently contradicted. Controversial results can prove to be important, as, for example, the studies showing no protective effect of hormone replacement therapy on cardiovascular disease risk before this was found in a randomized trial.2

In keeping with scientific tradition, results of all published studies will join the body of public evidence, where they can be openly debated by the scientific community. In the published literature, a study’s hypothesis, method of testing, control of variables and confounders, and other characteristics can be put to critical tests. Only associations that survive such scrutiny may later be accepted as “real.” And only through publication can this process of open scrutiny take place.

Selective publication would distort our understanding of how causes act. Such distortion can come either by the well-documented tendency to publish “significant” associations or by never publishing results because of fear of worrying or confusing the public. Selective availability of information damages “the cement of the universe” (to use one of Mackie’s terms from his studies on causation).3 While there have been attempts to quantify publication biases,4 the extent of unpublished research is unknown and difficult to define. Authors, editors, and peer reviewers must rally to minimize censorship, including the worst and least tractable sort—self-censorship.

Research should be reported transparently, in detail, cautiously, and without over-interpreting results—that is, without raising unnecessary concern, but also without false assurance of safety. Epidemiologic studies provide key information for the risk assessment that informs public policies. These efforts should not rely on a single study or a small number of selected studies. Even less should they depend on an individual researcher’s subjective and possibly prejudiced opinion as to which results are valid. Vital public health judgments would then be based on less than the totality of present knowledge, and would thereby be impaired. These judgments should instead be based on a body of evidence that is as complete as possible—from an unbiased set of results, and from all streams of scientific enquiry.

Are scientists as willing as the media or the public to make a mistake? Scientists spend considerable time, effort, and money striving to be accurate and not to make statements unsupported by evidence. The public, on the other hand, may be more concerned about making sure a potential risk is not overlooked. Precautionary measures to prevent or limit exposures to possibly harmful agents are often justified. Publishing only unambiguous or uncontroversial evidence would delay a great many necessary public health actions. Furthermore, it would deprive individuals of their right to exercise precaution when knowledge is incomplete or uncertain. It may, for example, turn out to be good advice to limit exposure from cell phones in childhood despite the fact that cell phone use has not (as of yet, at least) convincingly been demonstrated to cause disease.

We agree that the association between cell phone use and behavioral problems may well be noncausal,5 and we will continue to explore and present alternative explanations. However, given the lack of a plausible biologic mechanism, this association may be as likely, or unlikely, as any of the many other endpoints that have been studied in relation to cell phones. Alarming results on use of cell phones and brain cancer have apparently not had much impact on cell phone traffic or cell phone sales, and we doubt that our results, cautiously written, on much less severe conditions, will cause public alarm.

We understand the confusion the public must experience when recommendations—on, for example, dietary habits—change from week to week. Unfortunately it is difficult to see how this confusion can be avoided. Fortunately, the response from the public appears, in general, to be one of healthy skepticism. Public concern is often a transitory phenomenon that does not justify compromising the longer-term quest for scientific truth. The cure for uncertainty, or for a conflict of ideas, is more information—not less. It is not a coincidence that closed undemocratic societies are characterized by very little epidemiologic research.

Potential health risks of any new and widely used technology should be comprehensively evaluated. Some of these results may turn out to be false positives and others may turn out to be false negatives—but among them will be some true negatives, and perhaps some true positives. Time will tell which is which, but it can do so only in a free society where research and the right to publish are not suppressed. Thomas Jefferson summarized this well: “I know no safe depositary of the ultimate powers of the society but the people themselves; and if we think them not enlightened enough to exercise their control with a wholesome discretion, the remedy is not to take it from them, but to inform their discretion by education.”6


1. McBride WG. Thalidomide and congenital abnormalities. Lancet. 1961;278:1358.
2. Løkkegaard E, Jovanovic Z, Heitmann BL, et al. Increased risk of stroke in hypertensive women using hormone therapy: analyses based on the Danish nurse study. Arch Neurol. 2003;60:1379–1384.
3. Mackie JL. The Cement of the Universe. Oxford, UK: Clarendon Press; 1975.
4. Turner EH, Matthews AM, Linardatos E, et al. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med. 2008;358:252–260.
5. Divan H, Kheifets L, Obel C, et al. Prenatal and postnatal exposure to mobile phone use and behavioral problems in children. Epidemiology. 2008;19:523–529.
6. Jefferson T. Letter to William C. Jarvis. Monticello; 1820.
© 2008 Lippincott Williams & Wilkins, Inc.