It is often difficult for the media and the public to appreciate the role of flawed but contributory epidemiologic research. A study of cell phone use and children’s behavior problems (published in this issue) illustrates the ingredients of “inflammatory epidemiology”: a common exposure, a common health problem, a very low prior probability of a biologic effect, and a statistical association between the exposure and the outcome. The authors acknowledge the study’s limitations, and the reviewers and editors share the view that these findings are worth disseminating to the scientific community for evaluation. This report moves the evidence from an extremely low prior probability to a slightly higher (but still extremely low) posterior probability. The potential for misinterpretation can be mitigated by appropriately cautious interpretation of the findings, and by reliance on expert panels to integrate the evidence and to draw the behavioral and policy implications of such studies.
From the Department of Community and Preventive Medicine, Mount Sinai School of Medicine, New York, NY.
Submitted 17 March 2008; accepted 27 March 2008; posted 20 May 2008.
Correspondence: David A. Savitz, Department of Community and Preventive Medicine, Mount Sinai School of Medicine, One Gustave L. Levy Place, Box 1057, New York, NY 10029-6574. E-mail: firstname.lastname@example.org.
Epidemiology is capable of contributing information to address almost any question regarding influences on health, and we seem to have no shortage of unresolved issues. As we survey the menu of possibilities, several considerations bear on the importance and potential for contributing to science and public health—the magnitude of the health problem, the prevalence and size of the likely impact of the putative cause, amenability to intervention, prior epidemiologic evidence on the topic, and the extent to which other lines of research support the plausibility of a causal association. This is the ammunition we assemble in research proposals to persuade granting agencies of the promise of the planned investigation. These same issues guide editors in choosing manuscripts for publication.
Beyond this reasoned world of scientific and public health is another consideration that, while not unique to epidemiology, is heightened relative to other realms of research: How will the media and ultimately the public interpret our reports? Public interest in epidemiologic research is a mixed blessing, of course—both a gratifying reminder that we are addressing the pressing concerns of those we seek to inform (who happen to be taxpayers and the funders of our research), and a troubling source of misunderstanding, overreaction, and, often, despair over our discipline as our inevitable mistakes in methods and inference are aired in public. At worst, every association we report is presumed to be causal (sometimes with encouragement by the author) or, just as bad, every hypothetical limitation is believed to completely discredit the study (and its author). The nuances of generating flawed but contributory research are difficult to explain and certainly not easy for the public to comprehend.
In this issue, we present a study of cell phone use by the mother during pregnancy and also by the child in relation to behavioral problems at age 7. The authors reported a positive association between exposure to cell phones and a spectrum of behavioral disorders.1 This study makes efficient use of data from the Danish National Birth Cohort, a large, carefully conducted study. The data are appropriately analyzed and interpreted as identifying a clear statistical association that is not likely to be causal.
The study is also a nearly perfect recipe for “inflammatory epidemiology.” Begin with an agent that is ubiquitous and recently introduced, and that works in ways that are completely incomprehensible to its users. Combine with an extremely common health outcome—childhood behavioral disorders—that every parent, and anyone who spends time around children, would recognize as nearly universal. Add an extremely low prior probability that the low energy deposition associated with cell phone use would have any significant biologic effects on brain development. Season with the special anxiety associated with the health of children and our deep ambivalence regarding the mixed blessing of cell phones. Stir all these considerations together and add the final ingredient—empirical evidence linking the exposure and the outcome, a statistical association between cell phone use and behavioral problems in children.
The authors, reviewers, editors, and readers surely appreciate the methodologic limitations that bear on the interpretation of these findings. The fetus receives very limited exposure through the mother’s cell phone use. The prior probability of a biologically mediated behavioral effect of cell phone use on brain development is extremely low. There is a lack of specificity of the association with any particular behavioral problem. Both the exposure and the outcome are reported by the parent, with the possibility of errors in perception, recall, and reporting that could bias the results.
Even so, the reviewers and editors believe that these findings are worth consideration by the scientific community. The very factors that make this result potentially inflammatory also provide the justification for deciding to publish such research—the exposure is common and growing, the outcome is a public health concern, and the laboratory can provide only limited insights for extrapolation to humans. In fact, the potential for such studies to lead ultimately to a major discovery is much greater than for research on topics that have already received much attention, ones for which the biologic mechanisms and epidemiologic patterns are well understood.
If we lived in a world made up only of epidemiologists (God forbid), there would be no cost to publishing results with a low prior probability. A shift from an extremely low prior probability of a causal association to a stronger—but still extremely low—posterior probability is not a reason for concern. The public would be able to appreciate the nature of the evolving evidence and reserve its judgment. In the real world, however, even with cautious interpretation by the authors (as in this case), the dissemination of such research incurs the risk of overreaction. We risk fulfilling the Taubesian nightmare: scaring and confusing the public, draining vast research funds away from far more important topics, and then issuing the inevitable, discrediting admission that we were wrong at best and intentionally misleading at worst, thereby squandering public goodwill and taxpayers’ money.
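The arithmetic behind this reassurance can be made concrete. As a purely illustrative sketch (the prior probability and likelihood ratio below are invented for the example, not estimates from the study), suppose the prior probability of a causal effect is one in a thousand, and suppose the observed association is three times as likely under the causal hypothesis \(H\) as under the noncausal alternative. Bayes’ theorem, in odds form, gives:

```latex
% Illustrative numbers only: prior and likelihood ratio are assumptions.
% Prior odds of a causal effect H:
\frac{P(H)}{P(\bar{H})} = \frac{0.001}{0.999} \approx 0.001
% Updating on data D with a likelihood ratio of 3:
\frac{P(H \mid D)}{P(\bar{H} \mid D)}
  = 3 \times \frac{P(H)}{P(\bar{H})} \approx 0.003
% Converting posterior odds back to a probability:
P(H \mid D) = \frac{0.003}{1 + 0.003} \approx 0.003
```

Even evidence strong enough to triple the odds leaves the posterior probability near three in a thousand, which is precisely the sense in which such a finding shifts, but does not meaningfully raise, the plausibility of causation.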
This risk is mitigated by carefully reporting the study limitations and engaging with the media as appropriate. Authors can do this by explaining how such findings fit into the larger picture of evolving knowledge, and by providing a clear statement that a causal effect remains unlikely. Beyond what individual authors can provide, collective review and interpretation by panels of experts are essential for bridging the gap between data and policy. We cannot always count on individuals (whether authors, editors, reporters, or consumers) to grasp the most informative, helpful take-home message. Expert panels and committees can provide the wisdom that comes from consensus among researchers from multiple disciplines, with all their individual and inevitable biases.
If such a panel were convened on the topic of this paper, my guess is that they would offer the sort of message that sends reporters scurrying for other news: “no call for alarm, stay tuned.”