Epidemiology: November 2012 - Volume 23 - Issue 6
doi: 10.1097/EDE.0b013e31826c30e6
The Changing Face of Epidemiology

Commentary: Epidemiologic Methods Are Useless: They Can Only Give You Answers

Kaufman, Jay S.; Hernán, Miguel A.


Author Information

From McGill University, Montreal, Quebec, Canada.

Editors’ note: This series addresses topics of interest to epidemiologists across a range of specialties. Commentaries start as invited talks at symposia organized by the Editors. This symposium took place at the Third North American Congress of Epidemiology in Montreal in June 2011 and was organized by Editors Kaufman and Hernán.

Correspondence: Jay S. Kaufman, Department of Epidemiology, Biostatistics, & Occupational Health, Purvis Hall, 1020 Pine Ave. West, Rm 45 Montreal, QC H3A 1A2, Canada. E-mail: jay.kaufman@mcgill.ca.

Much of epidemiologic research is concerned with the estimation of causal effects. Specifying an average causal effect in an observational study requires a counterfactual contrast between the mean outcomes under two or more hypothetical interventions in a defined population and time period.1 Although counterfactual thinking has been an integral part of epidemiologic practice since at least the days of John Snow, the growing popularity of explicitly “causal” methods in modern epidemiology has increased epidemiologists’ ability to provide valid effect estimates despite time-dependent confounding and other challenges. Nonetheless, no method is so sophisticated that it can relieve the investigator of the obligation to provide a well-formulated causal question in the first place. Even in the absence of confounding and other biases, common epidemiologic measures of effect such as risk ratios and risk differences may not quantify well-defined causal effects if the hypothetical interventions or the target population are ambiguous. A risk ratio for the effect of “obesity,” for example, is vague because there are many ways to measure and conceptualize obesity and many ways to intervene to change it.2 These potential definitions and hypothetical interventions could lead to very different numerical estimates, and so it is imperative that the policy maker have in mind the particular definition and intervention modeled by the researcher. Without this connection between question and answer, effect measures for many variables commonly used in epidemiologic studies are difficult to interpret, to the point of being useless for etiologic interpretation or policy formation, no matter how elaborate or thoughtful the statistical modeling.3

In an effort to generate more careful thinking and discussion around this fundamental issue of how to frame a good question, the editors of EPIDEMIOLOGY invited three epidemiologists to discuss how they wrestle with this conundrum in their own substantive areas of research. We chose to highlight research programs that seemed especially challenging when it comes to posing meaningful and useful causal questions: environmental epidemiology, perinatal epidemiology, and social epidemiology. The presentations were part of a symposium at the Third North American Congress of Epidemiology in Montréal, Québec, on 23 June 2011, and were followed by comments from the senior statesman of causal inference in epidemiology, James Robins. The three presenters were then invited to translate their talks into brief essays, which are being published together with this commentary.4–6 These authors consider the process of turning meaningful epidemiologic questions into studies that provide useful epidemiologic answers, and the limitations of so-called “causal methods” in the absence of a carefully articulated design.

It was the painter Pablo Picasso who observed: “Computers are useless. They can only give you answers.”7 This observation is still relevant even after nearly a half-century of dramatic growth in computer capacity and complexity. The utility of these devices will always be limited by the human ingenuity necessary to pose useful questions. Etiologic epidemiology faces a similar constraint. Our analytical tools have likewise grown swiftly in their capacity and complexity, but remain limited by the uses to which we put them. As the authors of the essays that follow make clear, developing the ability to ask good causal questions is crucial if we want to make meaningful contributions to health and well-being.

References


1. Hernán MA. A definition of causal effect for epidemiological research. J Epidemiol Community Health. 2004;58:265–271.

2. Hernán MA, Taubman SL. Does obesity shorten life? The importance of well-defined interventions to answer causal questions. Int J Obes (Lond). 2008;32(Suppl 3):S8–S14.

3. Hernán MA, VanderWeele TJ. Compound treatments and transportability of causal inference. Epidemiology. 2011;22:368–377.

4. Weisskopf MG. What, me worry? Chemicals and causality. Epidemiology. 2012;23:785–786.

5. Kramer MS, Moodie EEM, Platt RW. Infant feeding and growth: can we answer the causal question? Epidemiology. 2012;23:790–794.

6. Harper S, Strumpf EC. Social epidemiology: questionable answers and answerable questions. Epidemiology. 2012;23:795–798.

7. Fifield W. Pablo Picasso: a composite interview. The Paris Review. 1964 Summer–Fall;32:66. Available at: http://quoteinvestigator.com/2011/11/05/computers-useless/. Accessed July 20, 2012.

© 2012 Lippincott Williams & Wilkins, Inc.
