The author responds:
Scientists from many disciplines, including epidemiology, are interested in discovering causal relationships and explicating causal processes. Most causal processes worth studying are complex.
The most effective way I know to represent a causal process is to write down a model that explicitly encodes the causal effect(s) of direct interest. Moreover, I am persuaded that models written in terms of potential outcomes provide the most effective and transparent representations of causal processes and causal effects. Directed acyclic graphs, though less prevalent, provide an equally complete and rigorous framework for representing causal processes. Regression models, contingency tables, and the like characterize association, not causation. (The second paragraph in Dr. Levine's letter is completely disconnected from any statement, direct or implied, in my commentary).
One concern expressed by Dr. Levine is that anyone can write down a potential outcomes model and claim it corresponds to an actual causal process. This has nothing to do with statistics. My commentary presumes that appropriate background research and scientific expertise have been brought to bear in the formulation of a causal model.
Dr. Levine expresses substantial concern about language and terminology. The language of causal modeling and inference can indeed be opaque. It is nevertheless important to resolve the confusion—evident in Dr. Levine's letter and probably shared by others—between causal models and the statistical methods used to estimate their parameters from data. A structural model represents the causal effect of an exposure or condition that is, in theory, subject to manipulation by external actions.1 A marginal structural model (MSM) is a structural model that is parameterized in terms of population-averaged (ie, ‘marginalized’) causal effects. Hence MSMs are models, not methods of estimation.
Inverse probability weighting (IPW) is not a model, but rather a method of estimation; indeed, it is a common method of fitting an MSM. IPW, like any method used to fit causal models to observational data, requires assumptions that typically cannot be verified from the observed data (eg, that all relevant confounders have been measured). However, it is possible to quantify the range of potential bias attributable to violating these assumptions. This is one of my key recommendations for reporting on causal models.
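The distinction between the model and the method of estimation can be made concrete. Below is a minimal sketch, on simulated data, of fitting a saturated MSM for a binary exposure—the model E[Y^a] for a = 0, 1—by IPW: each subject is weighted by the inverse of the estimated probability of the exposure actually received, given a measured confounder. All variable names and the data-generating mechanism are illustrative, not taken from the letter.

```python
# Hypothetical illustration: fitting a marginal structural model
# E[Y^a] for binary exposure A by inverse probability weighting.
# L is a single binary confounder; the propensity score P(A=1|L)
# is estimated by stratum-specific proportions.
import random

random.seed(1)

n = 20000
data = []
for _ in range(n):
    L = random.random() < 0.5            # binary confounder
    pA = 0.8 if L else 0.2               # exposure probability depends on L
    A = random.random() < pA
    # true causal effect of A on Y is +1.0; L also raises Y (confounding)
    Y = 1.0 * A + 2.0 * L + random.gauss(0.0, 1.0)
    data.append((L, A, Y))

# Step 1: estimate the propensity score P(A=1 | L) in each stratum of L.
def prop(stratum):
    rows = [a for (l, a, _) in data if l == stratum]
    return sum(rows) / len(rows)

ps = {True: prop(True), False: prop(False)}

# Step 2: weight each subject by 1 / P(A = observed exposure | L).
def weight(l, a):
    p = ps[l]
    return 1.0 / (p if a else 1.0 - p)

# Step 3: the weighted outcome means estimate the marginal E[Y^1] and
# E[Y^0]; their difference is the MSM parameter, the population-averaged
# causal effect of A.
def weighted_mean(arm):
    num = sum(weight(l, a) * y for (l, a, y) in data if a == arm)
    den = sum(weight(l, a) for (l, a, y) in data if a == arm)
    return num / den

effect = weighted_mean(True) - weighted_mean(False)
print(round(effect, 2))  # close to the true effect of 1.0
```

The MSM is the equation for E[Y^a]; IPW is merely one way to estimate its parameters. The naive unweighted comparison of exposed and unexposed means would be biased upward here because L raises both the exposure probability and the outcome; the weights remove that confounding, but only under the untestable assumption that L is the only confounder.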
Dr. Levine suggests that I have encouraged researchers “to believe they understand a complex causal relation because they've used a complicated model on observational data—ie, a ‘causal model.’” Apparently this is overstated to make a point. While causal models may sometimes be misused and misinterpreted, even in published research, this can be said of countless statistical models and methods; P values and linear regression are just 2 obvious examples.
I trust good epidemiologists to appreciate the complexities associated with drawing causal inferences: elaborating plausible mechanisms, using data that are sufficiently rich, formulating models in terms of specific causal parameters of interest, using appropriate statistical methods to draw inference, and frankly assessing limitations of their findings.
I could not disagree more with Dr. Levine's general point that promoting the use of causal models—and calling them causal—is a disservice (the references to Orwell fall somewhere beyond hyperbole). Models and methods for causal inference are not some new phenomenon: they have been studied in various disciplines for almost 100 years. Since 1974, when Rubin2 formalized the potential outcomes framework, the field has seen pioneering advances by leading scientific thinkers of our generation. James Heckman was awarded the 2000 Nobel Prize in Economics for development and application of structural models to study social and economic phenomena from observational data. For decades, causal modeling has been a staple of empirical research in the social sciences3 and virtually taken for granted in economics. It is a complex but worthwhile undertaking, and holds significant promise for important problems in epidemiology that rely on observational data.
Joseph W. Hogan
Biostatistics Section, Program in Public Health, Brown University, Providence, RI, email@example.com
1. Halpern JY, Pearl J. Causes and explanations: a structural-model approach. Part I: Causes. In: Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence. San Francisco: Morgan Kaufmann; 2001:194–202.
2. Rubin DB. Estimating causal effects of treatments in randomized and nonrandomized studies. J Educ Psychol. 1974;66:688–701.
3. Morgan SL, Winship C. Counterfactuals and Causal Inference: Methods and Principles for Social Research. Cambridge, UK: Cambridge University Press; 2007.