Cardiovascular Disease: Rejoinder
From the Centre for Health Equity Studies (CHESS), Stockholm University/Karolinska Institutet, Stockholm, Sweden; the Department of Public Health Sciences, Karolinska Institutet, Stockholm, Sweden; and the Department of Public Health and Caring Sciences, Uppsala University, Uppsala, Sweden.
Correspondence: Kristiina Rajaleid, Centre for Health Equity Studies (CHESS), Stockholm University/Karolinska Institutet, Sveavägen 160, 106 91 Stockholm, Sweden. E-mail: firstname.lastname@example.org.
Editors' note: Related articles appear on pages 138 and 148.
Every parent who slips around the corner and unscrews the fuse when slightly annoyed by the child's repetitive enthusiasm over having discovered the causal connection between turning the switch and the onset of light is practicing a lay understanding of conditional causation, causal interdependence, interaction, synergy, or whatever synonymous concept we might like to use. Everyday life is full of such encounters with the logic of multicausality, and without grasping it we would be lost. Consider for a moment how our minds work to find possible explanations when facing the awkward situation of no light, despite using the switch. Etiology in the fields of medicine and public health is not very different, given that we accept the fruitful idea of weaker causal constructs that are neither sufficient nor necessary. We have long been puzzled by the lack of attention to conditional causation, causal interdependence, interaction, and synergy in empirical epidemiologic research, despite the fact that this concept is at the core of many current research questions, such as the "thrifty phenotype" hypothesis. We also find it puzzling that the concept remains so controversial, given the promising potential of insights into basic causal mechanisms that might guide our preventive actions.
Tools such as conditional counterfactuals, potential-outcomes models, directed acyclic graphs, and the sufficient-component-cause model are currently influencing our thinking about the process of causal inference. These help us to understand what empirical evidence is required for legitimate inference of causal-effect relationships, as well as what types of inferences can be drawn.1,2 Consequently, we are also reminded to report causal-effect parameters that have an interpretation in relation to a causal model, so that correct inferences can be made.3 Although the picture is still not completely coherent, we are not as pessimistic or even nihilistic as Lawlor,4 based on our understanding of recent and earlier attempts to conceptualize and analyze the issues of interaction, conditional causation, and causal interdependence from the perspective of various abstract causal models.5,6
Every discussion should start with a shared and correct understanding of the issue at stake. Interaction in its causal sense is a formal representation of the real-world example above, and implies that there is at least one mechanism in which the causal action of A (the light switch) is conditional on the presence of B (the fuse). In essence, the causal inference we are aiming at is a qualitative statement of yes or no. It has nothing to do with the choice of scale or the actual quantitative relation between the 2 causes and the outcome, although the quantitative consequences of the interaction, once established, are important for risk evaluation and public health interpretation. Abstract causal frameworks can help us establish the nature of the empirical information needed to make sound causal inferences. A beautiful example is the possibility of deriving, from the sufficient-component-cause and counterfactual frameworks, a criterion by which we may infer from the quantitative effects found in epidemiologic analyses whether there is interaction. Simply contrasting biologic and statistical interaction is an outdated discussion. The statistical part of the issue is rather about identifying the caveats of the statistical techniques used to organize the empirical data according to the needs of the criteria.7,8 Given that the definition of interaction differs between statistics and epidemiology, we strongly disagree with Lawlor's message that biologic interaction is no different from statistical interaction. The real problem, as we see it, is that far too many researchers in the biomedical sciences make incorrect direct causal interpretations based on product terms (ie, association modifications) in multiplicative models.
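As a minimal numeric sketch of the kind of criterion referred to here (the relative excess risk due to interaction on the additive scale, following VanderWeele7), consider the following illustration. The relative risks are hypothetical values chosen purely for demonstration, not data from our study:

```python
# Illustrative sketch (hypothetical numbers, not from the article):
# the additive-scale criterion derived from the sufficient-component-cause
# and counterfactual frameworks (VanderWeele, Epidemiology 2009;20:6-13).

def reri(rr11, rr10, rr01):
    """Relative excess risk due to interaction:
    RERI = RR11 - RR10 - RR01 + 1."""
    return rr11 - rr10 - rr01 + 1

# Hypothetical relative risks: exposure A alone, B alone, and both jointly.
rr10, rr01, rr11 = 1.5, 2.0, 4.0

print(reri(rr11, rr10, rr01))  # 1.5

# Interpretation (per the criterion): RERI > 0 indicates departure from
# additivity, which under monotonicity assumptions implies a sufficient-cause
# interaction; RERI > 1 implies one even without monotonicity.
```

The point of the sketch is that the causal inference is the qualitative yes/no derived from the criterion, while the magnitude of the excess is what matters for risk evaluation.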
We share Lawlor's concerns regarding the use of the prefix "biologic," although perhaps not for the same reasons. Factors do not have to interact in a biologic/mechanistic sense.6 It is easy to think of possible causal interdependence between 2 social risk factors, or 2 psychologic mechanisms. Hence, the more general concepts of causal interaction, sufficient-cause interaction, conditional causation, or causal interdependence would have been preferable, as these better describe the underlying causal mechanism. However, considering the nature of the thrifty phenotype hypothesis, biologic interaction is an adequate concept to use from a semantic perspective as well.
We welcome Lawlor's4 critical appraisal of the basis for our conclusion, but we maintain that it correctly reflects our results. Given the hypothesis-driven approach, the problem of multiple testing is irrelevant in our study. Further, in these Swedish birth cohorts the proportion born prematurely was rather small and therefore the results are less sensitive to the selection problem brought up by Lawlor. Much more could be said both about the theoretical nature of the 2 potentially interacting exposures under study, and about how they are measured. However, an important distinction to keep in mind is whether birth weight for gestational age and adult BMI serve simply as imprecise indicators of the true underlying causal constructs or whether they rather have quite different causal interpretations. The first situation would imply random misclassification, most probably leading to an underestimation of the interaction, and the second would imply empirical support for a causal interaction with a completely different interpretation.9
Our point of departure was that there is surprisingly little empirical evidence regarding this widely discussed hypothesis that has both devoted supporters and critics. We had (and still have) no sentiments either for or against the hypothesis, but we do have some data. We could not falsify the hypothesis. Instead we report results that could later be used for causal inference based on an inductive approach, and we have tried to publish information that reveals the actual strength of our piece of evidence. We would like to encourage others who have the necessary data also to analyze the hypothesis in a correct format. Eventually enough empirical studies might accumulate that could be systematically reviewed, allowing the strength of the empirical evidence to be graded.
1. Pearl J. Causality: Models, Reasoning, and Inference. 2nd ed. Cambridge University Press; 2009.
2. Greenland S, Brumback B. An overview of relations among causal modelling methods. Int J Epidemiol. 2002;31:1030–1037.
3. Maldonado G, Greenland S. Estimating causal effects. Int J Epidemiol. 2002;31:422–429.
4. Lawlor DA. Biological interaction: time to drop the term? [Commentary]. Epidemiology. 2011;22:148–150.
5. Greenland S, Poole C. Invariants and noninvariants in the concept of interdependent effects. Scand J Work Environ Health. 1988;14:125–129.
6. VanderWeele TJ, Robins J. The identification of synergism in the sufficient-component-cause framework. Epidemiology. 2007;18:329–339.
7. VanderWeele TJ. Sufficient cause interactions and statistical interactions. Epidemiology. 2009;20:6–13.
8. Skrondal A. Interaction as departure from additivity in case-control studies: a cautionary note. Am J Epidemiol. 2003;158:251–258.
9. Lundberg M, Hallqvist J, Diderichsen F. Exposure-dependent exposure misclassification in interaction analyses. Epidemiology. 1999;10:545–549.
© 2011 Lippincott Williams & Wilkins, Inc.