From the Department of Epidemiology, Harvard School of Public Health, Boston, MA.
Correspondence: Sonia Hernández-Díaz, Department of Epidemiology, Harvard School of Public Health, 677 Huntington Ave, Boston, MA 02115. E-mail: firstname.lastname@example.org.
In this issue of Epidemiology, Suissa et al1 describe the “time-window” bias in case-control studies as a consequence of defining exposure within time windows of different lengths for cases and controls. To illustrate the bias, the authors estimated the association between incident lung cancer and statin exposure. The average follow-up was 3 years. If lung cancer incidence were constant over time, cases would on average be diagnosed partway through follow-up, so their follow-up would be substantially shorter; in this study it was in fact around 2 years. Statin exposure was defined for cases as any prescription from study entry to diagnosis, and for controls as any prescription to the end of follow-up. In other words, exposure was defined as statin use at any time during 2 years (for cases) or 3 years (for controls). The odds ratio was 0.62 for any prescription versus none during follow-up. Under the rare-disease assumption, an odds ratio in this design should be a valid estimate of the risk ratio that would have been obtained in a cohort comparing subjects with at least one prescription and those never exposed during follow-up.
Such a sampling strategy in case-control studies may be acceptable for nonchanging factors such as race. However, “any time during follow-up” is a time-dependent variable that necessarily increases over time. The authors demonstrate the problem by repeating the analyses after sampling an index date for controls within their person-time. With this approach, follow-up for both cases and controls was around 2 years, and the odds ratio was 1.0 for “any” versus “no” prescription by index date (controls) or diagnosis (cases). In this design, the odds ratio would be a valid estimate of the rate ratio (rather than the risk ratio) that would have been obtained in a cohort.2
At least one of the two estimates above is biased—most likely the first one. Intuitively, when the period of observation is shorter for cases, they are less likely to be exposed to transient events such as prescriptions.
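This intuition can be checked with a small simulation. Suppose the drug has truly no effect, give every subject a first-prescription time drawn from an exponential distribution, and define exposure as "any prescription" within a 2-year window for cases and a 3-year window for controls, mirroring the study above. The prescription rate used here (0.3 per person-year) is an illustrative assumption, not a figure from the paper:

```python
import random

random.seed(0)

RATE = 0.3                  # assumed prescription rate per person-year (illustrative)
N = 200_000                 # simulated subjects per group
T_CASE, T_CTRL = 2.0, 3.0   # exposure-ascertainment windows in years, as above

def any_prescription(window: float, rate: float) -> bool:
    """Exposed if the first prescription time (Exponential(rate)) falls in the window."""
    return random.expovariate(rate) < window

exposed_cases = sum(any_prescription(T_CASE, RATE) for _ in range(N))
exposed_ctrls = sum(any_prescription(T_CTRL, RATE) for _ in range(N))

# Odds ratio for "any prescription," despite a truly null effect of the drug:
# the shorter case window alone pushes the OR below 1.0.
or_hat = (exposed_cases / (N - exposed_cases)) / (exposed_ctrls / (N - exposed_ctrls))
print(f"spurious OR = {or_hat:.2f}")
```

With these assumed parameters the spurious odds ratio comes out around 0.56, in the same range as the 0.62 reported above; giving both groups the same window (as the index-date sampling does) returns it to 1.0.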
The authors present this bias as a problem of control sampling: the first approach selects a sample of controls among noncases at the end of follow-up, rather than sampling from person-time so that the probability of being selected is proportional to the person-time contributed by the subjects to the cohort. Yet, with the same suboptimal sampling of controls, the time-window problem could have been avoided by defining exposure to statins as “use at baseline,” or “prescriptions in the last 30 days before index date.” (I will not discuss here other potential biases related to selection of prevalent users, carry-over effects, induction periods, confounding, etc.)
If we consider the origin of the “time-window” bias as being in the exposure definition rather than in the control sampling, then the problem is not constrained to case-control designs. This is not merely a matter of nomenclature but points to the fact that cohort studies are not immune to this problem.3 Case-control studies are often perceived as inferior. More realistically, they are just a family of designs that analyze an efficient sample of noncases (in traditional case-control designs) or a sample of person-time, rather than the entire cohort. For every case-control study, there is an underlying cohort—explicit or virtual. Both the biases and the virtues of the conceptual cohort are inherited by the corresponding case-control sampling.
Consider a real example: Currie et al4 studied the risk of specific cancers in a cohort of subjects with diabetes who had started insulin. A prescription of metformin at any time during follow-up—independent of time in the cohort—was associated with a rate ratio of 0.54 for any cancer. But the opportunity for exposure was lower in participants who developed cancer, because cases had fewer months of follow-up in which to receive metformin.5 As in the case-control example presented by Suissa et al, if cases were distributed uniformly, they would have had just over 50% of the person-time of the noncases (accounting for losses to follow-up and competing risks in the cohort). Thus, any rate-ratio estimate for a transient exposure defined as "at any time" would be biased toward 0.50.
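The "toward 0.50" limit follows from exposure-opportunity arithmetic. If first exposures arrive at a constant rate λ, the probability of being classified as "ever exposed" within a window of length t is 1 − e^(−λt), which for a rare, transient exposure is approximately λt; the exposure prevalence among cases relative to noncases then approaches the ratio of their windows (here about 0.5). A sketch, with purely illustrative rates:

```python
import math

def exposure_prevalence_ratio(rate: float, t_case: float, t_ctrl: float) -> float:
    """Ratio of 'ever exposed' probabilities when first-exposure times are
    Exponential(rate) and cases are observed for t_case vs t_ctrl years,
    under a truly null effect of the exposure."""
    p_case = 1 - math.exp(-rate * t_case)
    p_ctrl = 1 - math.exp(-rate * t_ctrl)
    return p_case / p_ctrl

# As the exposure becomes rarer, the ratio approaches t_case / t_ctrl = 0.5
for rate in (0.5, 0.1, 0.01):
    print(rate, round(exposure_prevalence_ratio(rate, 1.0, 2.0), 3))
```

For common exposures the distortion is smaller but still present; only when the exposure definition uses windows of equal length does the ratio equal 1 regardless of the rate.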
The bias described by Suissa et al1 is not confined to case-control studies. In both cohort and case-control designs, the opportunity for exposure to time-varying factors increases with follow-up. Therefore, exposure definitions such as “at any time during follow-up” will introduce bias when duration of follow-up varies with the outcome. Although the basis for this bias might seem obvious to the readers of Epidemiology, its persistence in peer-reviewed papers suggests a need to increase awareness.
We sometimes coin new names for biases with the best of didactic intentions. The bias discussed here could be called “incorrect control selection,” “confounding by time of follow-up,” “differential opportunity for exposure,” “differential duration of exposure ascertainment,” or “time-window bias,” among other things. But debates over nomenclature should not distract our focus from the goal of attaining study validity—just as Christian leaders in Constantinople more than five centuries ago would have been better off not to spend their time debating the sex of angels while Ottoman troops prepared to take their city.
ABOUT THE AUTHOR
SONIA HERNÁNDEZ-DÍAZ is the Director of the Pharmacoepidemiology Program and Associate Professor of Epidemiology at the Harvard School of Public Health. Her research focuses on the evaluation of patterns of use and comparative safety of drugs during pregnancy. Another area of interest concerns the application of innovative methodologic concepts to reproductive epidemiology.
1. Suissa S, Dell'Aniello S, Vahey S, Renoux C. Time-window bias in case-control studies: statins and lung cancer. Epidemiology. 2011;22:228–231.
2. Walker AM. Observation and Inference: An Introduction to the Methods of Epidemiology. Newton Lower Falls, MA: Epidemiology Resources Inc; 1991.
3. Suissa S. Immortal time bias in pharmacoepidemiology. Am J Epidemiol. 2008;167:492–499.
4. Currie C, Poole C, Gale E. The influence of glucose-lowering therapies on cancer risk in type 2 diabetes. Diabetologia. 2009;52:1766–1777.
5. Hernández-Díaz S, Adami HO. Diabetes therapy and cancer risk: causal effects and other plausible explanations. Diabetologia. 2010;53:802–808.
© 2011 Lippincott Williams & Wilkins, Inc.