The authors respond:
We thank Dr. Hoffmann for his comments1 on 3 important issues in evaluating and applying exposure prediction rules. First, Hoffmann pointed to the uncertainty in predicted exposures and questioned the precision and validity of our reported association between predicted whole-body vibration (WBV) and back pain.2 We share this concern about such bias. As indicated in our paper and highlighted by Hoffmann, methods3 exist for bias correction when an exposure prediction rule is applied to the main study. However, because a validation sample is usually smaller than the main study population, and because some confounders are often not directly measured in the validation sample, we were wary of drawing statistical inferences from any disease model constructed in a validation sample; we therefore urged caution in interpreting the reported WBV-back pain association.
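For readers less familiar with the two-stage structure under discussion, the following is a minimal numerical sketch of an exposure prediction rule fit in a validation sample and then applied to a main study. It is not the authors' actual analysis: the predictors, effect sizes, and sample sizes are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stage 1 -- validation sample: the "true" exposure X is measured
# alongside cheaply obtained predictors W (hypothetical stand-ins for
# the driving-related variables in the taxi-driver study).
n_val = 200
W_val = rng.normal(size=(n_val, 2))
X_val = (1.0 + 0.8 * W_val[:, 0] + 0.3 * W_val[:, 1]
         + rng.normal(scale=0.5, size=n_val))

# Fit the exposure prediction rule E[X | W] by ordinary least squares.
A_val = np.column_stack([np.ones(n_val), W_val])
beta, *_ = np.linalg.lstsq(A_val, X_val, rcond=None)

# Stage 2 -- main study: only W is observed; the rule supplies a
# predicted exposure, which then enters the outcome (disease) model
# in place of the unmeasured true exposure.
n_main = 1000
W_main = rng.normal(size=(n_main, 2))
X_hat = np.column_stack([np.ones(n_main), W_main]) @ beta
```

Because the main-study disease model uses X_hat rather than X, its standard errors understate the true uncertainty unless the prediction error from Stage 1 is propagated, which is the point of the bias-correction methods cited above.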
Second, Hoffmann argued that the reported WBV-back pain relationship did not establish construct validity. Hoffmann and we may hold different views of what constitutes construct validity in psychometrics.4 Our original intent in examining this cross-sectional association was simply to demonstrate the applicability of the developed exposure prediction rule by showing the theoretical (or hypothesized) relationship between the predicted construct (daily WBV) and a presumed indicator (back pain prevalence). Using construct validity for instrument evaluation does have drawbacks, eg, an incorrect hypothesis or poorly measured indicators. Hoffmann gave another example in which the variables included for exposure prediction are all outcome predictors, so that a significant exposure-outcome association emerges even with high prediction errors. With this in mind, we presented not only the construct validity but also the face and predictive validity. We also considered the possibility of confounding in the assessment of construct validity: when many outcome predictors are also important WBV predictors and the predictor-outcome association has nothing to do with WBV, one may obtain an exposure prediction rule with low errors and apparently good construct validity. This consideration prompted us to adjust for age, body mass index, and engine size in modeling back pain prevalence.
Finally, Hoffmann asserted the need to compare the fit of the model using the predicted WBV with that of competing models using all driving-related variables. Although not the best approach,5 goodness-of-fit data do provide an alternative when there are no biologic priors. However, our rationale for exposure assessment was based on the “energy equivalency principle,” which made the goodness-of-fit approach less appealing for evaluating construct validity.
Department of Epidemiology; School of Public Health; University of North Carolina; Chapel Hill, North Carolina; email@example.com
David C. Christiani
Occupational Health Program; Department of Environmental Health; Harvard School of Public Health; Boston, Massachusetts
Louise M. Ryan
Department of Biostatistics; Harvard School of Public Health; Boston, Massachusetts
1. Hoffmann K. Re: “Using ‘exposure prediction rules’ for exposure assessment: an example on whole-body vibration in taxi drivers.” Epidemiology
2. Chen JC, Chang WR, Shih TS, et al. Using “exposure prediction rules” for exposure assessment: an example on whole-body vibration in taxi drivers. Epidemiology
3. Spiegelman D, Valanis B. Correcting for bias in relative risk estimates due to exposure measurement error: a case study of occupational exposure to antineoplastics in pharmacists. Am J Public Health
4. Pedhazur EJ, Schmelkin LP. Measurement, design, and analysis: an integrated approach. Hillsdale, NJ: Lawrence Erlbaum Associates; 1991.
5. Steenland K, Deddens JA. A practical guide to dose-response analyses and risk assessment in occupational epidemiology. Epidemiology