Fitz-Simon et al1 are to be commended for a carefully conducted and complex study of the impact of environmental exposure on blood lipids, using a natural experiment in environmental remediation. The subject matter is of acute interest to the community at large, and good evidence of health effects in humans is sparse. The authors conducted multiple sensitivity analyses, as befits the field’s aspirations for more complete presentation of study results. Of course, more complex and complete analysis invites more food for thought, and the observations that resulted from my examination are presented here for consideration by my colleagues.
The central question that arises while reading the article is: How should we interpret diverging analyses of ecological comparisons (protective effect) and repeated measures individual-level analysis (adverse effect)? To address this, it is important to correctly appraise uncertainty in the analysis. One also wonders what a simple country doctor would make of the numbers. The doctor’s community appears to be no worse or better off for the removal of exposure, and yet more sophisticated analyses suggest that there are people who were adversely affected (and now are better). Is this a case of the country doctor not understanding Simpson’s paradox?
The following effects on low-density lipoprotein (LDL) were noted due to halving of exposures: 1) an ecological effect of +2% (standard deviation 27%); 2) an individual-level effect estimated for perfluorooctanoic acid (PFOA) of −3% (95% confidence interval [CI] = −1% to −5%) and −5% (95% CI = −2% to −7%) for perfluorooctanesulfonic acid (PFOS) (fully adjusted Model 3, Table 2,1 unlike the result highlighted in the abstract, which arises from Model 2); 3) an expected range of effects due to regression to the mean, with 95% confidence ±1% (the expected value of the effect is less relevant, as it is not assured to be realized in any single study); and 4) a 1% bias toward the null, based on a sensitivity analysis for incomplete data on use of lipid-lowering drugs. There is clearly no overall community-level effect, although regression models identified some individual-level trends for some subgroups. These individual-level effect estimates have to bear an extra “penalty” in precision, on the order of what is expected, say, from regression to the mean (by 1%) and use of lipid-lowering drugs (by 1%). Thus, the 95% uncertainty in the effect of PFOA that incorporates systematic biases can reasonably be imagined to be at least 0 to −6% (0 to −8% for PFOS). But there are other important sources of uncertainty.
The authors argue that drift in laboratory methods and errors in exposure assessment do not add systematic bias to their results. The authors, however, do not quantify the possible impact of such errors. Although PFOS and PFOA can be measured in blood with great precision, the two laboratory methods did not perform identically: errors on the order of 2–10% do not appear to be unreasonable (Appendix1) and have been shown to be nonignorable in the epidemiology of perfluorinated acids.2 Furthermore (and despite claims to the contrary by the authors), change-versus-change models are susceptible to bias from measurement error unless measurement errors are perfectly correlated, which is inconceivable in the studied setting.3
Consider classical additive measurement error in each exposure measurement: for true exposure Xi observed as Wi with iid zero-mean errors ei, we have Wi = Xi + ei for time periods i = 1, 2. Therefore, the error model for the difference between the two exposure levels, ΔX = (X1 − X2), observed as ΔW = (W1 − W2), is also classical additive: ΔW = ΔX + (e1 − e2), with zero-mean error of variance 2σ², where σ² = σ1² = σ2². If the ei’s are not iid, then ΔW = ΔX + Δe, with Δe having variance σ1² + σ2² − 2ρσ1σ2, where ρ is the correlation of errors e1 and e2.
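The consequence of this error model can be seen in a quick simulation of a change-versus-change regression; the sample size, variances, and true slope below are arbitrary choices for illustration, not quantities taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 200_000, 0.5          # sample size and error SD (illustrative)

dx = rng.normal(-1.0, 1.0, n)    # true change in exposure, Var(dX) = 1
dy = dx + rng.normal(0, 0.2, n)  # change in outcome; true slope = 1

def slope(dw, dy):
    """OLS slope of dy regressed on dw."""
    return np.cov(dw, dy)[0, 1] / np.var(dw)

# Independent errors (rho = 0): Var(e1 - e2) = 2*sigma^2 = 0.5, so the
# slope attenuates toward Var(dX) / (Var(dX) + 0.5) = 2/3.
e1, e2 = rng.normal(0, sigma, n), rng.normal(0, sigma, n)
slope_indep = slope(dx + e1 - e2, dy)

# Perfectly correlated errors with equal variances (rho = 1): e1 - e2
# vanishes, dW = dX, and the slope is unbiased.
slope_corr = slope(dx, dy)
```

Only in the knife-edge case ρ = 1 (with equal error variances) does the differencing cancel the errors; any imperfect correlation leaves residual error variance in ΔW and attenuates the estimated slope.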
More importantly, there still remains uncertainty as to whether the timing of exposure measurements is correct in terms of capturing the biologically effective dose. Let us imagine that measured exposures were lower (due to the intervention, after exposures peaked in 2000, and latency of unknown duration) than the ones associated with the observed effect. This can be an artifact of the repeated-measures cross-sectional design of the study, which leads to bias in the effect estimate away from the null. Overall, it is not clear what further penalty in precision and bias is to be incurred due to uncertainty about how well the measured exposures represent the relevant dose. Can it be another 1% widening of the CI for the PFOA-LDL effect estimate, to yield +1 to −7% (for PFOS, the same propagation of uncertainty leads to a 95% CI of +1 to −9%)? It is certainly common for more formal evaluation of uncertainty to appropriately increase confidence bounds (and shift point estimates, of course).4–6
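The informal widening invoked here amounts to pushing each confidence bound outward by the assumed systematic penalty, in percentage points. A minimal sketch (the helper name and the purely additive rule are my own simplifications, not a formal bias analysis):

```python
def widen_ci(lo, hi, extra):
    """Push each bound of a (lo, hi) interval outward by `extra`
    percentage points of assumed systematic uncertainty."""
    return lo - extra, hi + extra

# PFOA: 0% to -6% widened by a further 1 point gives +1% to -7%.
pfoa = widen_ci(-6, 0, 1)
# PFOS: 0% to -8% becomes +1% to -9%.
pfos = widen_ci(-8, 0, 1)
```

A formal probabilistic bias analysis would combine the error sources rather than stack them additively, but the additive rule suffices to show how quickly nominal precision erodes.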
There is also an inescapable confounder at play in the study: secular trends. How does the reported decline in LDL compare with LDL trends in the general US population over the study period?7 Clear linear trends in LDL have been observed over time that would have resulted in a decrease in LDL of 4 mg/dL in an average person (or ~3% of median LDL in 2005/6). So, how can we separate the claimed decline in LDL due to reduction of environmental exposures (−3% to −5% point estimates) from a secular trend of the same magnitude (−3%)? It is also noteworthy that, over the period of the study, the average LDL in the community, 114–117 mg/dL, was in fact better than the expected value in the United States.
My evaluation of uncertainty does not account for other sources of systematic errors, such as missing body mass index (BMI) measures at follow-up, the mixture of fasting and nonfasting samples, and model selection (eg, no effect of PFOA on LDL in untransformed analysis [Table 3 of Appendix],1 arbitrary selection of cut points to categorize exposures, different parameterizations of BMI). My calculation also does not account for uncertainty inherent in sensitivity analysis itself. However, my admittedly simplistic but more comprehensive sensitivity analysis indicates that it is possible to reconcile all effect estimates if one accepts that the authors’ regression analyses are overly precise because they account only for random errors. Systematic evaluation of all sources of uncertainty has been urged from the pages of this journal and is possible to implement in common software.8 Is it time to ask that all articles published in EPIDEMIOLOGY fully explore uncertainty in effect estimates and causal interpretations?
But let us assume that the reported results are causal and of the right magnitude: would they have clinical importance? The odds ratio of remaining with LDL >100 mg/dL upon reduction of PFOA exposure by half has a 95% CI of 0.9–1.0 (only a crude analysis was conducted), based on 56 persons with improvement. This implies that the improvement in health of these people likely had little to do with removal of PFOA. However, it is important to note that the clinical cutoff for LDL applicable to a typical adult in the United States (130 mg/dL) is higher than that used by the authors (100 mg/dL);9 therefore, the clinical significance of the results is clearly overstated in the article. (On a technical note, the effect estimate from logistic regression appears to be precise enough to be trustworthy, so the authors’ claim that they “require in excess of 100 cases to detect an association in a logistic model” comes across as poorly argued.)
Another way to appraise the clinical significance of the findings is to assume that the fully adjusted model in Table 2 is correct1 and to predict the impact of a counterfactual doubling of both PFOA and PFOS exposures on LDL in the community circa 2010 (Table 1).1 Even after doubling both exposures, the predicted 97.5th percentile of LDL for most people who had a healthy level at the onset will remain within the normative range (Figure). This is certainly true for the typical resident, whose LDL is predicted at the 97.5th percentile of 124 mg/dL postexposure. Of course, people who had elevated LDL before exposure would have incurred additional detriment to their LDL, unless they were already striving, under the care of our good country doctor, to control their pre-exposure causes of elevated LDL. It certainly seems to be the case here that proper clinical management of LDL is more likely than environmental remediation to improve a patient’s health.
A slightly more complex version of the calculation of clinical significance at the population level is to look at the change in the percentage of people in the population who would exceed 130 mg/dL LDL if exposure were doubled. Assuming that 1) our best estimate of effect is roughly a −1% to +9% change in average LDL with a doubling of exposure, 2) a normal distribution of LDL with parameters as in Table 1,1 and 3) no independent time trend in LDL, we can expect the proportion with LDL >130 mg/dL to go from 31% to 30–42%. A community level of 42% high LDL would be alarming (given the 2008 US average of 32%),10 but it appears to stretch credibility: even when exposures were high in 2005/6, the proportion with LDL >130 mg/dL was still about 31%.
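The exceedance arithmetic above can be reproduced under the normal model; the mean of 114 mg/dL and standard deviation of 34 mg/dL below are illustrative values I chose to be roughly consistent with the figures quoted in the text (Table 1 of the article gives the actual parameters), and the standard deviation is held fixed as the mean shifts:

```python
from math import erf, sqrt

def frac_above(cutoff, mean, sd):
    """P(LDL > cutoff) under a normal model for LDL."""
    z = (cutoff - mean) / sd
    return 0.5 * (1 - erf(z / sqrt(2)))

mean, sd, cutoff = 114.0, 34.0, 130.0      # illustrative parameters
base = frac_above(cutoff, mean, sd)        # baseline: ~32%
low  = frac_above(cutoff, 0.99 * mean, sd) # -1% mean shift: ~31%
high = frac_above(cutoff, 1.09 * mean, sd) # +9% mean shift: ~43%
```

With these assumed parameters, only the extreme +9% end of the effect range moves the exceedance fraction appreciably, which is the tension with the observed 2005/6 data noted above.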
Taken together, the available data give little credence to the idea that reducing exposure to PFOA and PFOS should be at the forefront of the country doctor’s efforts to control risk for heart disease among her patients. The overall picture that emerges from the article is one of no compelling evidence of a detectable overall effect of PFOA and PFOS on LDL. There may be a true causal effect, but it seems to be too small to isolate from noise and too small to warrant the attention of public health professionals. Our country doctor will probably walk away reassured that the common sense that helps her deal with the ailments of her community is also a trustworthy guide to knowing what is good for her patients, and that her ignorance of subtle matters of causal inference is no impediment to doing public good. Her community appears to have LDL levels in line with the US general population, getting neither better nor worse between 2005/6 and 2010.
ABOUT THE AUTHOR
IGOR BURSTYN is Associate Professor in the Department of Occupational and Environmental Health at Drexel University. He is an epidemiologist and occupational hygienist who was never admitted to medical school and is prone to thinking for years about obscure calculations that are unlikely to influence public health. He aspires to conduct work that is equally appreciated by country doctors, epidemiologists, and statisticians.
1. Fitz-Simon N, Fletcher T, Luster MI, et al. Reductions in serum lipids with a 4-year decline in serum perfluorooctanoic acid and perfluorooctanesulfonic acid. Epidemiology. 2013;24:569–576.
2. Espino-Hernandez G, Gustafson P, Burstyn I. Bayesian adjustment for measurement error in continuous exposures in an individually matched case-control study. BMC Med Res Methodol. 2011;11. Available at: http://www.biomedcentral.com/1471-2288/11/67. Published 14 May 2011. Accessed 10 May 2013.
3. Liker JK, Augustyniak S, Duncan GJ. Panel data and models of change: a comparison of first difference and conventional two-wave models. Soc Sci Res. 1985;14:80–101.
4. de Vocht F, Kromhout H, Ferro G, Boffetta P, Burstyn I. Bayesian modeling of lung cancer risk and bitumen fume exposure adjusted for unmeasured confounding by smoking. Occup Environ Med. 2009;66:502–508.
5. Carroll RJ, Ruppert D, Stefanski LA. Measurement Error in Nonlinear Models. London, England: Chapman and Hall; 1995.
6. Lash TL, Fox MP, Fink AK. Applying Quantitative Bias Analysis to Epidemiologic Data. Dordrecht, Heidelberg, London, New York: Springer; 2009.
7. Carroll MD, Kit BK, Lacher DA, Shero ST, Mussolino ME. Trends in lipids and lipoproteins in US adults, 1988–2010. JAMA. 2012;308:1545–1554.
8. Phillips CV. Quantifying and reporting uncertainty from systematic errors. Epidemiology. 2003;14:459–466.
10. Roger VL, Go AS, Lloyd-Jones DM, et al; American Heart Association Statistics Committee and Stroke Statistics Subcommittee. Heart disease and stroke statistics–2012 update: a report from the American Heart Association. Circulation. 2012;125:e2–e220.