Epidemiology, May 2009, Volume 20, Issue 3
doi: 10.1097/EDE.0b013e31819ec966
Nested Case-Control Studies: Commentary

Bias in Full Cohort and Nested Case-Control Studies?

Wacholder, Sholom

Author Information

From the Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, Maryland.

Supported in part by the Intramural Research Program of the NIH, National Cancer Institute, Division of Cancer Epidemiology and Genetics.

Editors’ Note: Related articles appear on pages 321, 330, and 341.

Correspondence: Sholom Wacholder, Division of Cancer Epidemiology and Genetics, National Cancer Institute, Bethesda, MD 20892. E-mail: Wacholds@mail.nih.gov.

In this issue, Langholz and Richardson1 and Hein et al2 address 2 recent articles by Deubner et al3,4 about nested case-control studies. In 1 article,3 Deubner et al called into question the fundamental validity of the nested case-control design. If their critique were compelling, it would raise doubts about the interpretation of hundreds of publications that report results from nested case-control studies. In another article,4 the same authors suggested a restriction on control selection in nested case-control studies to make cases and controls more comparable.

Fundamentally, a properly executed case-control study nested in a cohort is valid if the corresponding analysis of the full cohort is valid. The mathematics of the likelihoods is the same for both,5 as Langholz and Richardson1 point out, and the same software procedures work for both. The only salient difference between the 2 designs is whether independent random samples or 100% samples are used in the conditional likelihood factor for each case. As Deubner et al3 note, the design and analysis of nested case-control studies are complex, but no more so than the analysis of cohorts, which must consider issues including time scales, various measures of time-dependent variables, and possible censoring.
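
To make the equivalence concrete, each case's contribution to the conditional (partial) likelihood can be sketched as follows, in notation introduced here rather than taken from the papers under discussion:

\[
L_i(\beta) \;=\; \frac{\exp\{\beta' z_i(t_i)\}}{\sum_{j \in R(t_i)} \exp\{\beta' z_j(t_i)\}},
\]

where t_i is case i's age at event, z_j(t_i) is member j's exposure summary evaluated through that age, and R(t_i) is either the full risk set (cohort analysis) or the case together with its randomly sampled matched controls (nested case-control analysis). The functional form, and the software that maximizes the product of these factors, are the same under both designs; only the membership of R(t_i) differs.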

Generally, the only disadvantages to nested case-control studies are the reduced precision and power due to sampling of controls, and the possibility of flaws in the sampling design or its implementation. Therefore, any fundamental problem with nested case-control studies must also be a problem for full cohort analysis. Demonstration that the problem applies to both designs or an explanation of any discordance between designs would add to the credibility of the challenge.

The simulations by Deubner et al appear to show bias in nested case-control studies with lagged measures of exposure. Each step of the simulation seems reasonable. Simulated case-control studies assign case and control status to members of the cohort, preserving their age and work history information. In each simulation, the authors randomly assigned 142 of the cohort members to be cases, and took the end of their follow-up as the event or end point time. Controls matched to each case were selected from at-risk cohort members at the age of the case's event. A case's cumulative exposure was measured from time of entry into the cohort until event time. A control's cumulative exposure was measured from time of entry until the control reached the age at event of the index case to which the control was matched. Analysis was conducted by conditional logistic regression.
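
As a rough illustration, the steps just described can be rendered schematically in Python on a hypothetical cohort. The cohort values, the number of controls per case, and all variable names below are mine and purely illustrative; they are not taken from the beryllium cohort analyzed by Deubner et al.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical cohort: age at entry, age at end of follow-up, and a constant
# exposure rate while under follow-up.  All values are illustrative.
n = 5000
age_entry = rng.uniform(20, 40, n)
age_exit = age_entry + rng.uniform(1, 40, n)
exposure_rate = np.ones(n)            # everyone exposed at the same rate

# The step questioned by Langholz and Richardson: draw 142 members at random
# to serve as "cases," treating their age at end of follow-up as the event age.
case_ids = rng.choice(n, size=142, replace=False)

records = []                          # (matched set, status, cumulative exposure)
for set_id, i in enumerate(case_ids):
    t = age_exit[i]                   # pseudo-event age for this case

    # Risk set: members (other than the case) under follow-up at age t.
    at_risk = np.where((age_entry < t) & (age_exit >= t))[0]
    at_risk = at_risk[at_risk != i]
    if at_risk.size == 0:
        continue                      # no eligible controls at this age

    # Case exposure: accrued from entry to the event age (untruncated).
    records.append((set_id, 1, exposure_rate[i] * (t - age_entry[i])))

    # Controls: a random sample from the risk set, with exposure truncated
    # when each control reaches the case's event age.
    for j in rng.choice(at_risk, size=min(4, at_risk.size), replace=False):
        records.append((set_id, 0, exposure_rate[j] * (t - age_entry[j])))

rec = np.array(records)
print("mean cumulative exposure, cases:   ", rec[rec[:, 1] == 1, 2].mean())
print("mean cumulative exposure, controls:", rec[rec[:, 1] == 0, 2].mean())
# A conditional logistic regression of status on exposure within matched sets
# would complete the analysis; the point at issue is whether case and control
# exposures are comparable before any model is fit.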

In fact, a subtle flaw in the design of these simulation studies renders them misleading. As Langholz and Richardson1 point out, Deubner et al mistakenly chose cases as a random sample of all cohort members; in fact, as in Table 1 of the paper by Hein et al,2 the average age-at-event in cases is less than the average age at the end of follow-up in comparable cohort members when censoring is not informative and the exposure has no effect on risk of the event or censoring. Why are cases younger at the event? It is because the cases’ age at end of follow-up has to be the minimum of (1) the age of death from lung cancer, (2) the age of death from other causes, or (3) the age at other causes of censoring—whereas controls are followed to the minimum of (2) and (3) only.
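
In symbols (notation introduced here for clarity), write T_1, T_2, and T_3 for a given member's ages at the three endpoints listed above. A case's age at end of follow-up is the minimum of all three, while a comparable non-case is followed to the minimum of the last two, and since

\[
\min(T_1, T_2, T_3) \;\le\; \min(T_2, T_3)
\]

holds member by member, the average age at event among cases will generally fall below the average age at end of follow-up among otherwise comparable cohort members.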

Further, controls’ follow-up time in the simulations tends to begin at an older age than cases’. This is because, to be chosen, controls must still be under follow-up at the case's age at event, and (as is standard) a control's follow-up time is truncated when the control reaches the index case's age at event. In the simulations, therefore, average follow-up time in controls will tend to be less than in cases, which are randomly selected from the cohort and have untruncated follow-up times. Similarly, all measures of exposure that depend on follow-up time (such as duration, average, and cumulative exposure) are distorted even when everyone receives the same level of exposure during follow-up. This phenomenon can be seen in the second row (and possibly the first row) of Table 2 of the paper by Deubner et al3 where, in the absence of an exposure effect, the cumulative exposure of cases (proportional to follow-up time when exposure is constant) is greater than that of controls. In contrast, average cumulative exposures in cases and controls are similar when the hazard ratio is 1 in simulations that generate a random cohort (Table 1, rows 1 and 2).2
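
When everyone's exposure rate is constant, this distortion carries over directly to cumulative exposure. In notation of my own: if member j enters follow-up (and exposure) at age a_j and is exposed at a common constant rate e, the cumulative exposure credited at the case's event age t is

\[
X_j(t) \;=\; e\,(t - a_j),
\]

which is proportional to the member's accrued follow-up time; any systematic shortfall in controls' accrued time relative to cases' therefore appears as a deficit in controls' cumulative exposure even though no one's exposure level differs.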

So why does proportional hazards analysis truncate exposures for controls but not for cases? In proportional hazards analysis of full cohorts and nested case-control studies, the key calculation is the set of conditional probabilities that each case is the one who developed disease among all those in the cohort (or among the case and matched controls in the nested case-control study) under follow-up at the case's age at event, given everyone's exposure through that age. Logically, any exposure in the case after the event cannot be related to risk at the time of event. Similarly, the other cohort members’ exposures subsequent to the index case's age at event also should not be allowed to affect the conditional probability of the event.

I do not agree with Deubner et al that lagging raises special concerns. A lagged measure of exposure with lag L bases risk at a given time point t only on exposure through time point t − L. Lagging is simply one way to measure exposure, and does not differ fundamentally from choosing other metrics such as average exposure, peak exposure, or cumulative exposure without lagging.1 As long as exposure is measured only up to the time of the event, the particular choice of exposure summary cannot introduce bias in comparing cases and controls.1
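
For concreteness, a lagged cumulative exposure can be written, again in notation introduced here, as

\[
X_j^{(L)}(t) \;=\; \int_{a_j}^{\max(a_j,\,t - L)} e_j(u)\, du,
\]

where e_j(u) is member j's exposure rate at age u and a_j the age at which exposure begins. Risk at age t is related only to exposure accrued by age t − L, which is part of the history already available by age t; the lag changes which portion of the pre-event history is counted, not whether post-event exposure enters the comparison.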

In their second paper, Deubner et al (this time with Levy as the first author4) suggest the use of risk-set members’ age at the end of follow-up as a control selection criterion. Specifically, they advocate choosing only controls whose age at end of follow-up is close to the index case's age at death, in order to avoid imbalance between cases and controls in age at start of follow-up or of first exposure and in age at censoring. Unfortunately, this criterion generates nonrandom samples. As Lubin and Gail6 state (and Levy et al4 quote), it is essential to choose a random sample from the risk set. Indeed, Hein et al2 (Table 1, row 3) show that a bias is generated from a nonrandom sample of controls who are younger at end of follow-up than the average in the risk set. The extra restriction proposed by Levy et al4 can also cause another bias: if a time-independent exposure, one whose value is constant during follow-up, causes censoring due to death from another cause, the average exposure of cohort members in the risk set with follow-up even only slightly beyond the time of diagnosis of the case will tend to be less than the average in the risk set. Thus, the difference in exposure between cases and controls—and its estimated effect—will be exaggerated, even under the usual assumption of independent censoring. By contrast, the full cohort analysis imposes no analogous restriction and remains valid under independent censoring.
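
A minimal sketch in Python of the sampling requirement at issue, with a hypothetical function name and arguments of my own, may make the distinction clearer.

import numpy as np

rng = np.random.default_rng(1)

def sample_controls(event_age, age_entry, age_exit, case_id, m=4):
    # Random incidence-density sampling: draw m controls at random from all
    # cohort members under follow-up at the case's event age, the requirement
    # stated by Lubin and Gail.
    risk_set = np.where((age_entry < event_age) & (age_exit >= event_age))[0]
    risk_set = risk_set[risk_set != case_id]
    if risk_set.size == 0:
        return risk_set               # no eligible controls at this age
    return rng.choice(risk_set, size=min(m, risk_set.size), replace=False)

# The restriction proposed by Levy et al would amount to adding a further
# condition, such as keeping only members with age_exit close to event_age,
# before sampling; the selected controls would then no longer be a random
# draw from the risk set, which is the nonrandomness behind the bias shown
# in Table 1, row 3, of Hein et al.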

In my view, the 2 papers published in this issue1,2 and the arguments offered here provide a persuasive defense of the standard analytic approach for nested case-control designs. The arguments by Deubner et al3,4 about lagged exposure do not in fact undermine the standard analysis. The setup of their simulation contains an error, and their results are not confirmed by others. These authors offer no explanation of why the bias with lagged exposures would be restricted to nested case-control studies and not be present in the full cohort analysis. Their suggestion of nonrandom selection of controls could itself induce bias. Taking all things into account, their critique is not a valid criticism of this familiar and useful epidemiologic approach. Even so, such challenges to the status quo as offered by Deubner et al are not without benefit; they push us to a better understanding of the fundamental principles that underlie our methods.

ACKNOWLEDGMENTS

I thank Kyle Steenland, Emory University, for help in preparation of this manuscript.

REFERENCES

1. Langholz B, Richardson D. Are nested case-control studies biased? Epidemiology. 2009;20:321–329.

2. Hein M, Deddens J, Schubauer-Berigan M. Bias from matching on age at death or censor in nested case-control studies. Epidemiology. 2009;20:330–338.

3. Deubner DC, Roth HD, Levy PS. Empirical evaluation of complex epidemiologic study designs: workplace exposure and cancer. J Occup Environ Med. 2007;49:953–959.

4. Levy PS, Roth HD, Deubner DC. Exposure to beryllium and occurrence of lung cancer: a reexamination of findings from a nested case-control study. J Occup Environ Med. 2007;49:96–101.

5. Prentice RL, Breslow NE. Retrospective studies and failure time models. Biometrika. 1978;65:153–158.

6. Lubin JH, Gail MH. Biased selection of controls for case-control analyses of cohort studies. Biometrics. 1984;40:63–75.

© 2009 Lippincott Williams & Wilkins, Inc.
