The Research Institute of the University of Montréal Hospital Research Centre, Montreal, Quebec, Canada; Department of Social and Preventive Medicine, University of Montréal, Montreal, Quebec, Canada; email@example.com
I.K. is a Fonds de la Recherche en Santé du Québec Junior 1 Scholar and Canadian Institutes of Health Research New Investigator.
To the Editor:
In the epidemiologic and biostatistical literature, considerable attention has been paid to the phenomenon of noncollapsibility of the odds ratio (OR), the defining feature of which is a deviation of the covariate-conditional value of the OR from its crude, unconditional counterpart.1 For example, in a recent commentary, Kaufman2 (p. 491) pointed out that the “oft-noted noncollapsibility of the OR means that change-in-estimate approaches for detecting confounders are not reliable when the OR is the measure of effect and the risk of disease is large in at least one stratum of the covariate.” Although noncollapsibility of the OR is an undeniable mathematical fact, it is unclear how much of a concern this should be to those conducting etiologic research—particularly case-control studies of the density type, which are the preeminent variant of study design in this genre of research.
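The mathematical fact itself is easy to verify with invented numbers. In the sketch below (all risks are hypothetical and chosen purely for illustration), a covariate Z is a risk factor for the outcome but is independent of the exposure X—so there is no confounding—yet the crude OR still deviates from the common stratum-specific OR:

```python
# Noncollapsibility of the OR with no confounding: hypothetical risks.
# Covariate Z affects the outcome but is independent of exposure X.

def odds(p):
    return p / (1 - p)

# risk[z][x] = outcome risk in covariate stratum z for exposure level x
risk = {1: {1: 0.8,     0: 0.5},
        0: {1: 1 / 3,   0: 1 / 9}}

# The stratum-specific ORs are identical ...
or_z1 = odds(risk[1][1]) / odds(risk[1][0])   # = 4.0
or_z0 = odds(risk[0][1]) / odds(risk[0][0])   # = 4.0

# ... yet the crude OR differs, assuming equal-sized Z strata
# (which, with Z independent of X, means no confounding):
crude_exposed   = (risk[1][1] + risk[0][1]) / 2
crude_unexposed = (risk[1][0] + risk[0][0]) / 2
or_crude = odds(crude_exposed) / odds(crude_unexposed)  # ≈ 2.97, not 4

print(or_z1, or_z0, or_crude)
```

The crude OR (≈2.97) is attenuated toward the null relative to the common conditional OR (4.0) even though there is nothing to confound—this is the noncollapsibility at issue.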
The purpose of the control series in a case-control study, viewed from the modern vantage, is to provide denominator inputs into the quasi-rates of the outcome occurrence for the contrasted categories of the exposure at issue.3 Thus, while the ratio of quasi-rates in a case-control study can be presented as the cross-product of elements in a 2-by-2 table—that is, in the form of an empirical OR—this outlook represents an unnecessary distraction and serves to obfuscate what parameter the study is, conceptually, all about—namely, the incidence density ratio (IDR). Once it was demonstrated that the ratio of quasi-rates represents an unbiased estimate of the IDR, the rarity of the outcome occurrence became irrelevant.3 By the same token, noncollapsibility of the OR should have ceased being a concern in case-control studies long ago, given that the IDR is a collapsible measure of association.4
The above idea can be demonstrated with data from a hypothetical case-control study aimed at estimating the association between a certain exposure and outcome (Table). In these data, sex is a risk factor for the outcome but is not associated with the exposure at issue in the study base. Thus, because there is no confounding by sex, the sex-conditional value of the IDR (2.0) numerically coincides with its crude, unconditional counterpart. Under simple random sampling of the study base in drawing the control series at a 1:1 control:case ratio, the sex-conditional value of the empirical OR is, in expectation, 2.0, and so is its crude value. In contrast, if sex were associated with the exposure in the study base—that is, if sex were an actual confounder—then the sex-conditional value of the OR would differ from the crude one, just as it would for the IDR in the study base.
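The arithmetic behind this demonstration can be sketched as follows. The Table itself is not reproduced here, so the person-time figures below are invented, but they share the structure described above: sex affects the outcome rate yet is independent of the exposure, the sex-conditional IDR is 2.0, and controls are drawn from the study base by density sampling at a 1:1 control:case ratio.

```python
# Collapsibility of the IDR under no confounding: invented
# (cases, person-time) figures in the spirit of the hypothetical study.
data = {
    ("M", "exposed"):   (20, 1000.0),
    ("M", "unexposed"): (10, 1000.0),
    ("F", "exposed"):   (4,  1000.0),
    ("F", "unexposed"): (2,  1000.0),
}

def rate(cell):
    cases, person_time = data[cell]
    return cases / person_time

# Sex-conditional incidence density ratios
idr_m = rate(("M", "exposed")) / rate(("M", "unexposed"))  # = 2.0
idr_f = rate(("F", "exposed")) / rate(("F", "unexposed"))  # = 2.0

# Crude IDR, pooling cases and person-time over sex
cases_e = data[("M", "exposed")][0] + data[("F", "exposed")][0]
pt_e    = data[("M", "exposed")][1] + data[("F", "exposed")][1]
cases_u = data[("M", "unexposed")][0] + data[("F", "unexposed")][0]
pt_u    = data[("M", "unexposed")][1] + data[("F", "unexposed")][1]
idr_crude = (cases_e / pt_e) / (cases_u / pt_u)  # = 2.0: collapsible

# Density sampling: expected control counts are proportional to
# person-time, here at a 1:1 control:case ratio (36 controls, 36 cases).
n_controls = cases_e + cases_u
controls_e = n_controls * pt_e / (pt_e + pt_u)
controls_u = n_controls * pt_u / (pt_e + pt_u)

# The crude "empirical OR" (ratio of quasi-rates) recovers the IDR
empirical_or = (cases_e / controls_e) / (cases_u / controls_u)  # = 2.0

print(idr_m, idr_f, idr_crude, empirical_or)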
So, does all this mean that the “change-in-estimate” approach to confounder detection/selection in case-control studies is justifiable? The general answer is still no, because absence of noncollapsibility does not mean that different approaches to confounding control will necessarily produce numerically identical estimates of a measure of association such as the IDR. One reason for this is that different regression models may implicitly correspond to different standardization schemes with respect to the covariates at issue.5 Thus, estimates from models controlling for different covariates may differ not because of confounding but because of nonconstancy of the values of the measure of association across covariate strata. Other reasons for potential numerical discrepancy between results obtained when controlling for different covariates include bias amplification6 and collider-stratification bias.7 Thus, decisions regarding which characteristics need to be controlled for must be made on the basis of thorough consideration of their (presumed or known) interrelations with one another and with the exposure and outcome under study.8
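The collider-stratification point can be illustrated with a small simulation. In the invented causal structure below, exposure X and an unmeasured cause U independently raise a covariate C (a collider), while the outcome Y depends only on U; X therefore has no effect on Y, yet conditioning on C induces a spurious X–Y association. All parameters are hypothetical.

```python
import random

# Collider-stratification sketch: X -> C <- U -> Y, with X having
# no effect on Y. Conditioning on C distorts the X-Y association.
random.seed(1)
n = 200_000
rows = []
for _ in range(n):
    x = random.random() < 0.5
    u = random.random() < 0.5
    c = random.random() < (0.1 + 0.4 * x + 0.4 * u)  # collider
    y = random.random() < (0.05 + 0.30 * u)          # X plays no role
    rows.append((x, c, y))

def outcome_risk(subset):
    return sum(y for (_, _, y) in subset) / len(subset)

# Crude risk ratio: approximately 1, correctly reflecting no effect
crude_rr = (outcome_risk([r for r in rows if r[0]])
            / outcome_risk([r for r in rows if not r[0]]))

# Risk ratio within the C = 1 stratum: biased away from 1, because
# among C = 1, X and U are negatively associated
c1_rr = (outcome_risk([r for r in rows if r[0] and r[1]])
         / outcome_risk([r for r in rows if not r[0] and r[1]]))

print(crude_rr, c1_rr)  # crude near 1; stratum-specific well below 1
```

Here a change-in-estimate criterion would flag C as a "confounder" to adjust for, when in fact adjusting for it introduces bias—underscoring that adjustment decisions should rest on the presumed causal structure, not on numerical comparisons alone.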
The Research Institute of the
University of Montréal Hospital Research Centre
Montreal, Quebec, Canada
Department of Social and Preventive Medicine
University of Montréal
Montreal, Quebec, Canada
1. Greenland S, Robins JM, Pearl J. Confounding and collapsibility in causal inference. Stat Sci. 1999;14:29–46
2. Kaufman JS. Marginalia: comparing adjusted effect measures. Epidemiology. 2010;21:490–493
3. Miettinen OS. Estimability and estimation in case-referent studies. Am J Epidemiol. 1976;103:226–235
4. Gail MH, Wieand S, Piantadosi S. Biased estimates of treatment effects in randomized experiments with nonlinear regressions and omitted covariates. Biometrika. 1984;71:431–444
5. Greenland S, Maldonado G. The interpretation of multiplicative-model parameters as standardized parameters. Stat Med. 1994;13:989–999
6. Myers JA, Rassen JA, Gagne JJ, et al. Effects of adjusting for instrumental variables on bias and precision of effect estimates. Am J Epidemiol. 2011;174:1213–1222
7. Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiology. 2004;15:615–625
8. Hernán MA, Hernández-Díaz S, Werler MM, Mitchell AA. Causal knowledge as a prerequisite for confounding evaluation: an application to birth defects epidemiology. Am J Epidemiol. 2002;155:176–184