Epidemiology, November 2012, Volume 23, Issue 6
doi: 10.1097/EDE.0b013e31826cc118
Methods

Commentary: Reclassify Controls at Your Own Risk

Witte, John S.(a); Visscher, Peter M.(b,c)


Author Information

From the (a)Department of Epidemiology & Biostatistics, Institute for Human Genetics, University of California San Francisco, San Francisco, CA; (b)Queensland Brain Institute, The University of Queensland, Brisbane, Queensland, Australia; and (c)University of Queensland Diamantina Institute, The University of Queensland, Brisbane, Queensland, Australia.

Supported by the National Institutes of Health, grant numbers CA088164 and CA127298.

The authors report no conflict of interest.

Editors’ note: Related articles appear on pages 902 and 912.

Correspondence: John S. Witte, University of California, San Francisco, San Francisco, CA 94158-9001. E-mail: JWitte@ucsf.edu.

A key aspect of epidemiologic research is the correct measurement of exposures and health outcomes. When considering a binary disease, cases are generally confirmed using medical records or other trustworthy information. In contrast, controls’ nondiseased status is often based on self-report or simply the lack of evidence to the contrary. For example, genetic epidemiologic association studies commonly use convenience or unscreened controls who are assumed unaffected even though their actual disease status has not been assessed. Depending on the prevalence of disease in the source population of controls, some proportion may actually be cases. This disease misclassification, or absence of knowledge about controls’ disease status, can result in reduced power and information bias, thus threatening the internal validity of the findings.
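
To see the attenuation concretely, consider the following simulation sketch (ours, not from the commentary): a 2x2 case-control comparison in which a fraction of the control series are unrecognized cases. All parameter values are illustrative.

```python
# Minimal simulation: undetected cases among "controls" pull the
# observed odds ratio toward the null. Parameters are illustrative.
import numpy as np

rng = np.random.default_rng(0)

def simulate_or(n=5000, p_exposed=0.3, true_or=2.0, contamination=0.0):
    """One 2x2 study; `contamination` is the fraction of the control
    series that are actually unrecognized cases."""
    controls = rng.binomial(1, p_exposed, n)          # exposure in controls
    odds_case = true_or * p_exposed / (1 - p_exposed)
    p_exp_case = odds_case / (1 + odds_case)          # exposure prob. in cases
    cases = rng.binomial(1, p_exp_case, n)
    # Swap a fraction of true controls for unrecognized cases
    n_bad = int(contamination * n)
    controls = np.concatenate([controls[n_bad:],
                               rng.binomial(1, p_exp_case, n_bad)])
    a, b = cases.sum(), n - cases.sum()               # cases: exposed / not
    c, d = controls.sum(), n - controls.sum()         # controls: exposed / not
    return (a * d) / (b * c)

for f in (0.0, 0.1, 0.2):
    ors = [simulate_or(contamination=f) for _ in range(200)]
    print(f"contamination={f:.0%}: mean observed OR = {np.mean(ors):.2f}")
# The observed OR drifts from ~2.0 toward 1.0 as contamination grows.
```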

Ioannidis and colleagues1 propose to address this issue by reclassifying controls as cases if they have a high probability of disease, as estimated from prediction models of known genetic or environmental risk factors. If the prediction models have high sensitivity and specificity—and the prevalence of disease misclassification among controls is relatively high (>20%)—simulations show that this approach can improve the power to detect novel risk factors.1 Nonetheless, no clear improvement is seen when applying this method to a seemingly ideal study of progression to advanced age-related macular degeneration.1

This reclassification of controls should generally be avoided because it may actually increase bias and reduce precision. For reclassification, one must turn the estimated probabilities of disease into 100% certainty by selecting a cutoff for the predicted values above which controls are assumed affected. The probabilities will usually have a smooth distribution, and so the choice of cutoff is subjective and the ensuing results will be sensitive to this choice. For example, even in the absence of disease-classification errors but in the presence of a strong predictor, some controls will have a high probability of being a case; reclassifying them as cases based on the predicted values will bias inference from subsequent analyses.
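
A small simulation makes this concrete. Under the assumption of a correctly specified logistic model with a single strong predictor (all values below are our illustrative choices), every control is correctly classified by construction, yet reclassifying those with high predicted probabilities distorts the coefficient, and the distortion depends on the cutoff.

```python
# Sketch: reclassification under NO true misclassification.
# A strong predictor gives some true controls high predicted risk;
# flipping them to "case" status inflates the fitted coefficient,
# by an amount that depends on the chosen cutoff.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n, beta_true = 20_000, 1.5
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-2 + beta_true * x)))
y = rng.binomial(1, p)                     # correct labels by construction

X = sm.add_constant(x)
p_hat = sm.Logit(y, X).fit(disp=0).predict(X)

print(f"true beta = {beta_true}")
for cutoff in (0.5, 0.7, 0.9):
    y_reclass = np.where((y == 0) & (p_hat > cutoff), 1, y)
    b = sm.Logit(y_reclass, X).fit(disp=0).params[1]
    print(f"cutoff = {cutoff}: beta-hat after reclassification = {b:.2f}")
```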

Is there an alternative to the approach suggested by Ioannidis et al1? Instead of a deterministic, all-or-none reclassification of controls, there are more appropriate ways to incorporate prior knowledge of uncertainty in disease status into one’s analysis. Ioannidis et al1 touch on this in their discussion and note that they “presented a simple dichotomous correction, the equivalent of a diagnostic test evaluation.” If it is reasonable to assign controls their probabilities of disease, the outcomes will now range from 0 to 1 (for the controls), with a large mass at 1 (for cases). To analyze such data, one can use a model from the ordinal logistic family: continuation ratio models (usual or reversed, contrasting 1 vs. <1) or the cumulative odds model.2 Another possibility is an outcome-mixture model for one-inflated regression.3 A nice future project would be to compare these approaches with that proposed by Ioannidis et al.1
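
As one rough sketch of the cumulative-odds option (the bin boundaries and simulated inputs are our own illustrative choices, not part of the commentary), the controls' assigned probabilities can be coarsened into ordered categories, with cases forming the top category, and a proportional-odds model fit to the result:

```python
# Hedged sketch of a cumulative-odds (proportional-odds) analysis of
# probabilistic outcomes, via coarsening into ordered categories.
# The bin edges and the beta(1, 3) "predicted probabilities" are
# illustrative stand-ins for output from a real prediction model.
import numpy as np
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(2)
n_cases = n_controls = 1000
# Risk factor under study, shifted upward among cases
x = np.concatenate([rng.normal(0.5, 1, n_cases),
                    rng.normal(0.0, 1, n_controls)])
# Outcome: 1 for cases; controls get their predicted disease probability
prob = np.concatenate([np.ones(n_cases), rng.beta(1, 3, n_controls)])
# Coarsen [0, 1] into ordered bins: unlikely / possible / probable / case
endog = np.digitize(prob, [0.2, 0.5, 1.0])     # integer codes 0..3
res = OrderedModel(endog, x[:, None], distr="logit").fit(
    method="bfgs", disp=False)
print(res.summary())
```

A continuation-ratio or one-inflated mixture model would take the same coarsened or fractional outcomes but impose a different link structure.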

One might also incorporate the predicted values of disease among controls into a Monte Carlo sensitivity analysis to assess how potential misclassification affects estimates.4 Here, instead of applying the same distribution of sensitivity values to all controls, one can specify a distribution of predicted values of disease for each control from the prediction model. From these distributions, probabilities of being a case can be randomly drawn and compared with a uniform (0, 1) draw to determine reclassification. Then, misclassification-adjusted estimates of effect are calculated. Repeating this process thousands of times generates a distribution of adjusted estimates that can be compared with the original, unadjusted estimates of effect to gauge the potential impact of reclassifying the controls as cases.
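
A minimal sketch of this loop follows (simulated data stand in for a real study, and a full multiple-bias analysis4 would carry additional bias parameters):

```python
# Monte Carlo sensitivity analysis sketch: each control is reclassified
# with its own predicted disease probability on every iteration, and the
# odds ratio is re-estimated. Data and probabilities are simulated.
import numpy as np

rng = np.random.default_rng(3)

def odds_ratio(exposed, case):
    a = np.sum(exposed & case);  b = np.sum(~exposed & case)
    c = np.sum(exposed & ~case); d = np.sum(~exposed & ~case)
    return (a * d) / (b * c)

n = 4000
exposed = rng.binomial(1, 0.3, n).astype(bool)
case = rng.binomial(1, np.where(exposed, 0.55, 0.45), n).astype(bool)
# Predicted disease probabilities: 1 for cases, model output for controls
p_disease = np.where(case, 1.0, rng.beta(1, 9, n))

draws = []
for _ in range(5000):
    # Compare each control's probability with a uniform(0, 1) draw
    flipped = case | (rng.uniform(size=n) < p_disease)
    draws.append(odds_ratio(exposed, flipped))

lo, med, hi = np.percentile(draws, [2.5, 50, 97.5])
print(f"original OR: {odds_ratio(exposed, case):.2f}")
print(f"adjusted OR: {med:.2f} (2.5-97.5% range: {lo:.2f}, {hi:.2f})")
```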

Another issue with the reclassification approach is how to address age-related risks if the controls are not sampled from the same risk set as the cases. Ioannidis et al1 treat this as another form of control misclassification, noting that “a common problem is the classification of participants as non-diseased … because participants have not been followed up long enough to develop disease.” However, even if the probability of disease increases with age, potential future disease or progression is not equivalent to current misclassification, and one should generally not treat future cases as current cases. To see why this is problematic, assume death is the outcome under study. Then any study subjects not dead by the end of follow-up would be considered misclassified owing to insufficient follow-up, because with sufficient follow-up they will all be dead! A similar argument holds for the age-related macular degeneration example of Ioannidis et al.1 What is needed here is survival-analysis prediction modeling of age-specific incidence, not lifetime cumulative incidence (an exception being the study of highly penetrant genetic diseases). Hence, lack of follow-up is a different phenomenon from errors in the measurement of a phenotype and should be treated differently in statistical analyses.
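
To illustrate the distinction, a sketch: subjects still disease-free at the end of follow-up are treated as censored rather than misclassified, and age-specific disease-free probabilities are estimated with a hand-rolled Kaplan-Meier estimator (the simulated onset and follow-up ages are arbitrary).

```python
# Survival-analysis sketch: not-yet-diseased subjects are censored, not
# "misclassified controls". Simple Kaplan-Meier on simulated ages.
import numpy as np

rng = np.random.default_rng(4)
n = 2000
age_onset = rng.weibull(4, n) * 90        # latent age at disease onset
age_exit = rng.uniform(50, 80, n)         # end of each person's follow-up
event = age_onset <= age_exit             # onset observed during follow-up
time = np.minimum(age_onset, age_exit)    # censored otherwise

# Kaplan-Meier: product over event times of (1 - d_t / n_t)
order = np.argsort(time)
t, d = time[order], event[order]
at_risk = n - np.arange(n)                # risk-set size at each sorted time
surv = np.cumprod(1 - d / at_risk)
for age in (60, 70, 80):
    print(f"P(disease-free beyond age {age}) ~ {surv[t <= age][-1]:.2f}")
```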

The genetic correction of phenotype is an intriguing idea that could help reduce misclassification and ultimately lead to new discoveries if the prediction model is correctly developed and applied. Even without reclassification, comparing the phenotype and its predictor is of interest because large deviations may lead to the discovery of errors, such as diagnostic errors in a clinical setting. However, for these kinds of genetic predictions to be useful, they have to be very accurate. When studying common diseases with high prevalence—for which the method proposed by Ioannidis et al1 would be most useful—genetic predictors are unlikely to ever be sufficiently accurate.5 If the prediction model is not correctly developed and applied, then reclassification can actually result in worse estimation.
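
Regarding accuracy, a liability-threshold simulation in the spirit of Wray et al5 shows the ceiling: even a predictor capturing all of the heritability has a bounded AUC, and the bound is lower for more common diseases. The heritability and prevalence values below are illustrative.

```python
# Best-case AUC of a genetic predictor under a liability-threshold model:
# the predictor here is the FULL genetic liability, so this is an upper
# bound that no realizable genomic profile can exceed.
import numpy as np

rng = np.random.default_rng(5)

def max_genetic_auc(h2, prevalence, n=200_000, pairs=50_000):
    g = rng.normal(0, np.sqrt(h2), n)            # genetic liability
    e = rng.normal(0, np.sqrt(1 - h2), n)        # residual liability
    liab = g + e
    affected = liab > np.quantile(liab, 1 - prevalence)
    # AUC = P(random case's predictor exceeds a random control's)
    return np.mean(rng.choice(g[affected], pairs) >
                   rng.choice(g[~affected], pairs))

for h2, k in [(0.4, 0.10), (0.4, 0.01), (0.8, 0.10)]:
    print(f"h2={h2}, prevalence={k:.0%}: best-case AUC ~ "
          f"{max_genetic_auc(h2, k):.2f}")
```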


ABOUT THE AUTHORS

JOHN S. WITTE is a Professor of Epidemiology & Biostatistics and the Head of the Division of Genetic and Cancer Epidemiology at the University of California, San Francisco. He works on developing and applying statistical tools to decipher the genomic and environmental basis of complex diseases. PETER M. VISSCHER is a Professor and Chair of Quantitative Genetics at the University of Queensland and an NHMRC Senior Principal Research Fellow. He is currently focused on whole-genome methods for explaining heritability of disease and risk-prediction methods.


ACKNOWLEDGMENT

We thank Sander Greenland for helpful comments.


REFERENCES

1. Ioannidis JP, Yu Y, Seddon JM. Correction of phenotype misclassification based on high-discrimination genetic predictive risk models. Epidemiology. 2012;23:902–909

2. Greenland S. Alternative models for ordinal logistic regression. Stat Med. 1994;13:1665–1677

3. Muthén B, Shedden K. Finite mixture modeling with mixture outcomes using the EM algorithm. Biometrics. 1999;55:463–469

4. Greenland S. Multiple-bias modelling for analysis of observational data. J R Stat Soc Ser A. 2005;168:267–306

5. Wray NR, Yang J, Goddard ME, Visscher PM. The genetic interpretation of area under the ROC curve in genomic profiling. PLoS Genet. 2010;6:e1000864


© 2012 Lippincott Williams & Wilkins, Inc.
