Are Further Studies Really Needed? If So, Which Ones?

Olshan, Andrew F.

doi: 10.1097/EDE.0b013e3181775e3a
Commentary: CANCER

The association between residential radon exposure and the risk of childhood leukemia reported in this issue of Epidemiology may—or may not—justify further studies to confirm this finding. Such decisions could benefit from a more systematic approach than researchers ordinarily use. Furthermore, when considering the possibility of replication studies, epidemiologists need a new strategy—one that explores more explicitly the improvements and the additional study features that would be required to produce a more meaningful answer.

From the Department of Epidemiology, School of Public Health University of North Carolina, Chapel Hill, NC.

Submitted 16 March 2008; accepted 27 March 2008.

Correspondence: Andrew F. Olshan, Department of Epidemiology, School of Public Health, CB#7435, University of North Carolina, Chapel Hill, NC 27599-7435. E-mail:


The etiology of childhood cancer has remained elusive, especially with regard to possible environmental influences. Childhood cancer is rare, as are some of the exposures of interest, which poses significant challenges for epidemiologists. Epidemiologists have adopted a number of strategies, including large national and regional case-control studies, a focus on subtypes (eg, defined by age at diagnosis, biology, or associated conditions), and large-scale linkage studies.1–3 Recently there has been discussion of assembling international birth and childhood cohorts to study childhood cancer.4 However, these strategies are typically proposed with little more than modest hope that more or bigger studies will produce more definitive results. They seldom do. Is there a better strategy?

The history of research on childhood leukemia is instructive. Childhood leukemia is the most common cancer under the age of 15 years. A wide array of possible “environmental” causes has been suggested, including parental smoking, parental occupational exposures, diet, pesticides, electromagnetic fields, maternal alcohol use, daycare, breast-feeding, and early childhood infections.5,6 With the exception of in utero exposure to high doses of ionizing radiation, the “known” risk factors tend to be demographic predictors or rare genetic associations.3,5 Some of these findings have helped to identify specific molecular events or pathways in the development of specific cancers, but they have not helped us to find preventable causes.

New reports of an association of childhood cancer with an environmental factor produce a predictable riptide of epidemiologic activity. There is a great flurry of interest and a period of attempted replication (typically unsuccessful). The usual suspects are implicated (biased control groups, multiple comparisons, inadequate exposure information). This is followed by optimism that studies of targeted subgroups, biomarkers of exposure, or susceptibility genes will provide new leads. But definitive answers often remain elusive.

A Danish study by Raaschou-Nielsen et al7 in this issue of Epidemiology provides intriguing findings that suggest residential radon exposure as a possible cause of childhood leukemia. Early ecological studies had suggested this association, but subsequent US and UK case-control studies failed to support it.8,9 Those case-control studies had important limitations, as the authors of this new paper point out. The Danish study has notable strengths: control selection free of selection bias, and no problems with nonresponse. Further, the study estimated exposure over substantial periods of time using a prediction model. One would not expect exposure-model error to yield false-positive results, although more exploration would be needed to gauge the properties of such error and its influence on bias. The authors did explore data restrictions that would be expected to improve exposure precision and study validity. They also noted consistency between their results and the elevated effect estimate in the high-exposure category of the US study,9 although that estimate was based on an unmatched analysis. The matched analysis in the US study was compatible with no effect (and with the confidence limits of the Danish study).

In the context of the previous hopes and disappointments, how does one judge the import of this new study? The authors are cautious and note that the study suggests that domestic radon exposure increases the risk for acute lymphocytic leukemia during childhood. Considering the past studies of childhood cancer generally, and previous work on this topic specifically, I do not believe we can declare this a promising new “lead.” There is the usual mixture of findings based on studies of varying quality. This study is better in some areas, and yet uncertainty remains. History teaches us to be wary. We have been down this road before for many exposures (and not just with childhood cancer).

This is the natural order of the epidemiologic world—taking an incremental approach with the hope that an answer will eventually emerge. The epidemiologic process is inherently evolutionary, with studies (ideally) progressively refining the hypotheses and the associated methodology. Even so, how often does this doggedly iterative process yield a definitive answer? More often we fall into a long and expensive trek with no end in sight. What resources and time will this journey require? It seems rare that such questions are even asked.10

There is an alternative. First, we should decide if there is a real signal. Meta-analysis (or ideally, pooled analyses) can allow an integrated assessment of the exposure-cancer association. In the case of childhood leukemia and radon, there is likely to be great variation in the studies, and so careful evaluation of heterogeneity is critical. There should be a careful delineation of the threats to validity and their relative importance, with an exploration of their impact on the results.
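The integrated assessment described above can be sketched with a standard DerSimonian-Laird random-effects model, which pools study-specific estimates and quantifies heterogeneity via Cochran's Q and the I^2 statistic. The study estimates in this sketch are purely hypothetical placeholders, not values from the radon literature:

```python
import math

def random_effects_meta(estimates, ses):
    """DerSimonian-Laird random-effects pooling of log relative risks.

    estimates: per-study log relative risks; ses: their standard errors.
    Returns (pooled log RR, its SE, Cochran's Q, I^2 heterogeneity in %).
    """
    w = [1 / se**2 for se in ses]                      # fixed-effect weights
    fixed = sum(wi * b for wi, b in zip(w, estimates)) / sum(w)
    q = sum(wi * (b - fixed)**2 for wi, b in zip(w, estimates))  # Cochran's Q
    k = len(estimates)
    # Method-of-moments estimate of between-study variance (tau^2)
    tau2 = max(0.0, (q - (k - 1)) / (sum(w) - sum(wi**2 for wi in w) / sum(w)))
    w_re = [1 / (se**2 + tau2) for se in ses]          # random-effects weights
    pooled = sum(wi * b for wi, b in zip(w_re, estimates)) / sum(w_re)
    se_pooled = math.sqrt(1 / sum(w_re))
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    return pooled, se_pooled, q, i2

# Hypothetical log relative risks and standard errors from three studies
logrr = [math.log(1.6), math.log(1.1), math.log(0.9)]
se = [0.25, 0.20, 0.30]
pooled, se_p, q, i2 = random_effects_meta(logrr, se)
print(f"pooled RR = {math.exp(pooled):.2f}, I^2 = {i2:.0f}%")
```

A high I^2 would signal the kind of between-study heterogeneity that warrants careful evaluation before any pooling is taken at face value.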

If focused review and analysis provide enough evidence to justify further study, then we should define our studies in such a way that they will explicitly improve over what has gone before. In the case of radon and childhood cancer, the biggest deficiency of the case-control studies is in their measurement of exposure. Other threats (such as selection bias and potential confounding) may also be at play, although they are likely to be less important.

In planning improved studies, emerging quantitative methods (bias analysis, sensitivity analysis, uncertainty analysis) can be used to estimate the improvement in effect estimates that would be expected with new methods of exposure assessments (or better control of confounding and selection bias, or more complete exploration of interactions).11–15 These bias-analysis methods are usually confined to the interpretation of results from completed studies. There would be great utility in expanding their use to the planning of new studies. They would allow better judgments as to what level of measurement precision is needed, whether future studies can practically attain this benchmark, and at what cost.
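One simple form of such a bias analysis treats assumed sensitivity and specificity of the exposure measure as inputs and back-calculates a misclassification-corrected estimate; varying those inputs over plausible ranges shows how much an improved exposure assessment could be expected to change the answer. The counts and classification parameters in this sketch are hypothetical, chosen only to illustrate the arithmetic:

```python
def misclassification_corrected_or(a, b, c, d, se_exp, sp_exp):
    """Correct a case-control odds ratio for nondifferential exposure
    misclassification, given assumed sensitivity and specificity of the
    exposure measure (a simple bias-analysis back-calculation).

    a, b: exposed/unexposed cases as classified;
    c, d: exposed/unexposed controls as classified.
    Assumes se_exp + sp_exp > 1 (the measure is better than chance).
    """
    # Back-calculate true exposed counts from the classified counts:
    # observed exposed = se * true_exposed + (1 - sp) * (n - true_exposed)
    def true_exposed(observed_exposed, n):
        return (observed_exposed - (1 - sp_exp) * n) / (se_exp - (1 - sp_exp))

    n_cases, n_controls = a + b, c + d
    a_t = true_exposed(a, n_cases)
    c_t = true_exposed(c, n_controls)
    b_t, d_t = n_cases - a_t, n_controls - c_t
    return (a_t * d_t) / (b_t * c_t)

# How much larger might the true OR be than the observed one if the
# exposure measure had, say, 80% sensitivity and 95% specificity?
or_obs = (60 * 450) / (440 * 50)   # classified 2x2 table (hypothetical)
or_corr = misclassification_corrected_or(60, 440, 50, 450, 0.80, 0.95)
print(f"observed OR = {or_obs:.2f}, corrected OR = {or_corr:.2f}")
```

Run over a grid of plausible sensitivity and specificity values, such a calculation indicates, at the planning stage, what measurement precision a new study would need before its results could meaningfully improve on the existing ones.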

We need to employ these methods in the same spirit as power calculations, to guide our decisions on whether a particular study design feature will yield enough information (at least in ballpark terms) to provide a real improvement in our interpretation of the association. The application of these methods to radon and childhood cancer studies would be challenging, but possible. If the constraints are too great either theoretically (is the measurement threshold attainable?) or practically (can we afford it?), then the work should not proceed. Similar exercises could assess the impact of other biases. With such information in hand, we could more rationally weigh the costs and benefits of the proposed research, and focus our research either on conducting an improved study, or on figuring out how to overcome the hurdles that stand in the way. In either case, this process of more rationally defining the architecture and capacities of the “next study” would provide a sorely needed basis for decision making—not only on this particular topic, but in many areas of epidemiologic research.



ANDREW OLSHAN is professor and chair of the Department of Epidemiology at the School of Public Health, University of North Carolina at Chapel Hill. His research has focused on the epidemiology of childhood and other cancers and reproductive and pediatric health.



1. Little J. Epidemiology of Childhood Cancer. Lyon: IARC; 1999.
2. Ross JA, Spector LG. Cancers in children. In: Schottenfeld D, Fraumeni JF, eds. Cancer Epidemiology and Prevention. 3rd ed. New York: Oxford University Press; 2006:1251–1268.
3. Linet MS, Wacholder S, Zahm SH. Interpreting epidemiologic research: lessons from studies of childhood cancer. Pediatrics. 2003;112:218–232.
4. Brown RC, Dwyer T, Kasten C, et al. International childhood cancer cohort consortium (I4C). Int J Epidemiol. 2007;36:724–730.
5. Smith MA, Gloeckler Ries LA, Gurney JG, et al. Leukemia. In: Ries LAG, Smith MA, Gurney JG, et al, eds. Cancer Incidence and Survival among Children and Adolescents: United States SEER Program 1975–1995. Bethesda, MD: National Cancer Institute, SEER Program; 1999:17–34. NIH Pub. No. 99-4649.
6. Buffler PA, Kwan ML, Reynolds P, et al. Environmental and genetic risk factors for childhood leukemia: appraising the evidence. Cancer Invest. 2005;23:60–75.
7. Raaschou-Nielsen O, Andersen CE, Andersen HP, et al. Domestic radon and childhood cancer in Denmark. Epidemiology. 2008;19:536–543.
8. UK Childhood Cancer Study Investigators. The United Kingdom Childhood Cancer Study of exposure to domestic sources of ionising radiation: 1: radon gas. Br J Cancer. 2002;86:1721–1726.
9. Lubin JH, Linet MS, Boice JD Jr, et al. Case-control study of childhood acute lymphoblastic leukemia and residential radon exposure. J Natl Cancer Inst. 1998;90:294–300.
10. Phillips CV. The economics of ‘more research is needed’. Int J Epidemiol. 2001;30:771–776.
11. Phillips CV. Quantifying and reporting uncertainty from systematic errors. Epidemiology. 2003;14:459–466.
12. Lash TL, Fink AK. Semi-automated sensitivity analysis to assess systematic errors in observational data. Epidemiology. 2003;14:451–458.
13. Greenland S. Multiple-bias modeling for analysis of observational data (with discussion). J R Stat Soc Ser A. 2005;168:267–306.
14. Jurek AM, Maldonado G, Greenland S, et al. Uncertainty analysis: an example of its application to estimating a survey proportion. J Epidemiol Community Health. 2007;61:650–654.
15. Chu H, Wang Z, Cole SR, et al. Sensitivity analysis of misclassification: a graphical and a Bayesian approach. Ann Epidemiol. 2006;16:834–841.
© 2008 Lippincott Williams & Wilkins, Inc.