Epidemiology: July 2003 - Volume 14 - Issue 4
doi: 10.1097/01.EDE.0000071419.41011.cf
Original Articles

Semi-Automated Sensitivity Analysis to Assess Systematic Errors in Observational Data

Lash, Timothy L.; Fink, Aliza K.


Author Information

From the Boston University School of Public Health, Boston, MA, USA.

Correspondence: Timothy L. Lash, Boston University Medical Center, 88 East Newton Street, #F433, Boston, MA 02118. E-mail: tlash@bu.edu.

This study was supported by Grants K07 CA87724 and R01 CA/AG70818 from the National Cancer Institute and National Institute on Aging, National Institutes of Health.

Submitted 4 March 2002; final version accepted 28 February 2003.


Abstract

Background: Published epidemiologic research usually provides a quantitative assessment of random error for effect estimates, but no quantitative assessment of systematic error. Sensitivity analysis can provide such an assessment.

Methods: We describe a method to reconstruct epidemiologic data, accounting for biases, and to display the results of repeated reconstructions as an assessment of error. We illustrate with a study of the effect of less-than-definitive therapy on breast cancer mortality.

Results: We developed SAS code to reconstruct the data that would have been observed had a set of systematic errors been absent, and to convey the results. After 4,000 reconstructions of the example data, we obtained a median estimate of relative hazard equal to 1.5 with a 95% simulation interval of 0.8-2.8. The relative hazard obtained by conventional analysis equaled 2.0, with a 95% confidence interval of 1.2-3.4.

Conclusions: Our method of sensitivity analysis can be used to quantify the systematic error for an estimate of effect and to describe that error in figures, tables, or text. In the example, the sources of error biased the conventional relative hazard away from the null, and that error was not accurately communicated by the conventional confidence interval.

Many introductory epidemiology textbooks describe effect measures obtained from observational research as susceptible to errors arising from chance, confounding and bias, including measurement error. 1 More advanced texts separate these sources of error into those deriving from random error, which is assessed by the effect measure’s precision, and those deriving from systematic error, which is assessed by the effect measure’s validity. 2–5 If the error about an effect estimate equals its difference from the truth, then the random error is that which approaches zero as the study size increases, and the systematic error is that which does not. A quantitative assessment of the systematic error for an effect estimate can be made by sensitivity analysis.

To improve the precision of an effect estimate, epidemiologists design their studies to gather as much information as possible, 6 apportion the information efficiently among the strata of variables that affect the outcome, 6 and undertake precision-optimizing analyses such as pooling 7 and regression. 8 Even with an efficient design and analysis, epidemiologists customarily present a quantitative assessment of the remaining random error about an effect estimate. Although there has been considerable 9–12 and continuing 13,14 debate about methods of describing random error, a consensus has emerged in favor of the frequentist confidence interval. 15

To improve the validity of an effect estimate, epidemiologists design their studies to assure comparability of the exposed and unexposed groups, 16 reduce differential selection forces, 17,18 and control measurement error by obtaining accurate information or forcing the direction of its expected error to be predictable. 19,20 When the validity might be compromised by confounding after implementation of the design, epidemiologists employ analytic techniques such as stratification 7 or regression 8 to maintain the validity of the effect estimate. Analytic corrections for selection forces or measurement error are seldom seen. Quantitative assessments of the remaining systematic error surrounding an effect estimate are rare indeed.

Thus, the quantitative assessment of the error surrounding an effect estimate usually reflects only the residual random error. Much has been written, 10,12,15,21 and many examples proffered, 22,23 about the abuses made of these quantitative assessments of random error. The near complete absence of quantitative assessments of residual systematic error in published epidemiologic research has received much less attention. Several reasons likely explain this inattention.

First, existing custom does not expect a quantitative assessment of the systematic error for an effect estimate. For example, the Uniform Requirements for Manuscripts Submitted to Biomedical Journals instructs authors to “quantify findings and present them with appropriate indicators of measurement error or uncertainty (such as confidence intervals),”24 which measure only residual random error. With no demand to drive development and no habit to breed familiarity, few methods are available to quantify the systematic error for an effect estimate, and few epidemiologists are comfortable with the implementation of existing methods.

Second, the established methods require presentations of systematic error that are lengthy, 25 and so are too unwieldy to incorporate into data summarization and inference. The quantitative assessments of random error require little additional space for presentation of an apparently rigorous measurement of residual random error.

Last, the automated analytic tools often used by epidemiologists provide quantitative assessments of residual random error surrounding effect estimates, 26 but contain no such automated method of assessing residual systematic error.

Sensitivity analysis has been recommended for all analyses of observational data sets 27 as a rational alternative to ignoring residual systematic error in epidemiologic research. Basic methods for sensitivity analysis have been explained, 25 and advanced techniques proposed, 28 but these have not yet overcome the aforementioned barriers. The method proposed here has the advantage of using existing and familiar software and presentation methods, thereby addressing the latter two barriers.


METHODS

Sample Data Set

To illustrate our method, we used a study of the effect of less-than-definitive primary therapy vs definitive therapy on breast cancer mortality. The investigation 29 and an initial sensitivity analysis 30 have been previously published. In this analysis, we show how to use the semi-automated method of sensitivity analysis to assess systematic errors attributable to selection bias, misclassification and confounding. We did not seek to assess all of the possible systematic errors at work in this data set, although a more comprehensive assessment with our method would be possible.

The study population included 494 female breast cancer patients diagnosed at 8 Rhode Island hospitals between July 1984 and February 1986. 31 We ascertained the vital status of subjects by matching their identifying variables to the National Death Index and the Social Security Administration database of active social security transactions. We assigned the breast cancer mortality outcome to subjects with a death certificate containing the International Classification of Disease, 9th revision code 174, for breast cancer as the underlying cause of death or as one of the contributing causes of death. We assigned the date of the last follow-up as the date of death recorded on the death certificate for known decedents (N = 105; 69 attributed to breast cancer) or as the date 5 years after diagnosis for subjects with no National Death Index match.

All treatments received during the first year following diagnosis were documented for each patient. We defined definitive primary therapy for women with local disease as receiving mastectomy or breast-conserving surgery plus radiation therapy. 32 We required the same primary therapy for women with regional disease, and also required chemotherapy or hormonal therapy. We classified women who did not receive minimum primary therapy as having had less-than-definitive primary therapy.

We adjusted for two confounders: (a) age at diagnosis, in categories of 45-64 years, 65-74 years and 75-90 years; and (b) stage of disease, abstracted from medical records and categorized as local or regional. We used Cox’s proportional hazards regression to estimate the effects of less-than-definitive care on the outcome, adjusted for the two confounders. Hazards were proportional only during the first 5 years, and so we conducted analyses over the first 5 years of follow-up.

Sources of Systematic Error
Selection Bias

The original investigators removed identifying variables from the data set after the enrollment project 31 was completed. We re-identified subjects for the follow-up study by matching unique patient characteristics to the cancer registry of the Hospital Association of Rhode Island. For each match, the Hospital Association reported to the present investigators the identifying variables necessary to complete the follow-up. The Hospital Association of Rhode Island re-identified 390 of the original 449 patients (87%) with local or regional disease. The probability of re-identification did not strongly depend on patients’ age, cancer stage, co-morbid disease status (defined as the number of co-morbid diseases) or receipt of definitive care (data not shown). The probability of re-identification did depend on the hospital of diagnosis, because two affiliated hospitals participated in the Hospital Association of Rhode Island Cancer Registry for only the latter part of the enrollment period. It may be that the effect of less-than-definitive therapy, compared with definitive therapy, on breast cancer mortality is different among the women who were not re-identified than among the women who were re-identified.

Misclassification of a Covariate

The tumors of women who did not receive an axillary dissection may have been incorrectly staged. These women might have had regional disease, but because their lymph nodes were staged clinically rather than pathologically, they may have been misclassified as having local disease. Clinical assessment might also misclassify local disease as regional disease. Stage determines the criteria for definitive therapy, and so an incorrect assessment of stage may lead to misclassification of both the stage of disease (a confounder) and the treatment (the exposure).

Unmeasured Confounding

When treatment is not assigned by randomization, the potential exists for unknown or unmeasured confounders to bias the effect estimate. In observational research comparing the efficacy of treatments on their targeted disease, confounding by indication is of primary concern. Women who received less-than-definitive therapy may have had indications for receipt of that therapy, and these indications may also be related to a higher risk of breast cancer mortality.

Sensitivity Analysis

Ideally, the data set we create would be identical to that which would have been observed had the three sources of systematic error been absent. No such ideal data set can be reconstructed with confidence, and so we created multiple reconstructions, each yielding an estimate of the relative hazard of less-than-definitive therapy, compared with definitive therapy, adjusted for one or all three sources of systematic error. The theoretical basis for multiple imputations to assess error has been previously described. 33 The method of reconstruction depended on the systematic error or combination of errors under consideration. Figure 1 displays a flowchart of the dataset reconstruction process. We used SAS 26 for all analyses and graphing. The SAS code and sample data set are available at http://www.bumc.bu.edu/epi/lash&finksensitivityanalysis.

Figure 1

Selection Bias

For the selection bias, ideally we would learn the breast cancer mortality status of the women who were not re-identified. Instead, we guessed the status of these women—informed by the outcomes of the women who were re-identified—and repeated these guesses multiple times. We modeled the log-odds of breast cancer mortality using the re-identified participants, for whom vital status at 5 years was known, as a function of age, stage, therapy, co-morbid disease and hospital group. We set hospital group equal to zero for the hospitals that participated in the cancer registry throughout the enrollment period and equal to one for the two hospitals that did not. The model yielded a vector of coefficients and its covariance matrix. In a single reconstruction, we set the parameter vector, except the therapy parameter, equal to its maximum likelihood estimate plus the product of a randomly selected vector of standard normal deviates and the square root of the covariance matrix. We used this parameter vector to calculate an estimate of risk of breast cancer mortality for each non-re-identified woman (N = 59) over the 5 years of follow-up, and we conducted a Bernoulli trial using the risk to model whether she died of breast cancer. For example, a Bernoulli trial for a woman with a risk of 50% would be like a coin flip, with heads representing a breast cancer death and tails representing survival over the 5-year follow-up. We randomly assigned a survival time from the distribution of survival times observed in the re-identified women classified as breast cancer decedents. We assumed that women classified as non-breast cancer decedents survived the 5 years of follow-up. With non-re-identified women now assigned survival outcomes, we included all women in Cox's proportional hazards regression (N = 449). The sensitivity analysis of this systematic error incorporated two sources of variability.
First, in each reconstruction of the data set, we selected different parameter estimates for the women not re-identified, yielding different risks of death. Second, the results of the Bernoulli trials differ for non-re-identified women in each reconstruction, yielding a different subset of the population assigned the outcome.
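The authors' implementation is in SAS (available at the URL above). As an illustration only, the single-reconstruction step just described can be sketched in Python with NumPy; every numeric value below (coefficients, covariances, covariate rows) is an invented placeholder, not an estimate from the study:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical output of the logistic model fit to the re-identified
# women: maximum-likelihood coefficients for (intercept, age, stage,
# therapy, co-morbidity, hospital group) and their covariance matrix.
beta_hat = np.array([-4.0, 0.03, 0.9, 0.7, 0.4, -0.2])
cov = np.diag([0.25, 1e-4, 0.04, 0.09, 0.01, 0.04])

# One reconstruction: perturb the coefficient vector by standard-normal
# deviates times a matrix square root of the covariance (Cholesky),
# holding the therapy parameter (index 3) at its point estimate.
beta = beta_hat + np.linalg.cholesky(cov) @ rng.standard_normal(6)
beta[3] = beta_hat[3]

# Covariate rows for the 59 non-re-identified women (hypothetical values).
X = np.column_stack([
    np.ones(59),                 # intercept
    rng.integers(45, 91, 59),    # age at diagnosis
    rng.integers(0, 2, 59),      # stage: 0 = local, 1 = regional
    rng.integers(0, 2, 59),      # 1 = less-than-definitive therapy
    rng.integers(0, 3, 59),      # number of co-morbid diseases
    rng.integers(0, 2, 59),      # hospital group
])

# 5-year risk of breast cancer death for each woman, then one
# Bernoulli trial per woman to impute her outcome.
risk = 1.0 / (1.0 + np.exp(-(X @ beta)))
imputed_death = rng.random(59) < risk
```

Repeating this fragment draws a fresh coefficient vector and fresh Bernoulli outcomes each time, which are exactly the two sources of variability noted above.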

Misclassification of a Covariate

For the bias due to misclassification of stage, ideally we would learn the pathologic lymph-node status of the women whose lymph nodes were staged clinically. Instead, we guessed the pathologic node status of these women, as informed by their clinical stage and literature reports of the sensitivity and specificity of clinical node staging, with pathologic staging as the gold standard. We created triangular probability density functions 26 to represent the sensitivity and specificity of clinical examination to assess axillary node status, with pathologic examination as the gold standard. We abstracted the published literature values of the sensitivities and specificities. 30 We parameterized the triangular probability density functions with the minimum and maximum values reported in the literature, and with the mode equal to the inverse variance-weighted average of all the reported values. Sensitivity ranged from 13% to 83%, with a mode of 46%. Specificity ranged from 41% to 98%, with a mode of 85%. We modeled these error rates as independent of a woman’s status on other variables that could affect the error rate (eg, death due to breast cancer), and as independent of errors in classification of other variables, although such dependencies could be easily incorporated.

To perform a single reconstruction, we selected a sensitivity and a specificity from the respective triangular probability density functions. From the sensitivity, we calculated the positive predictive value for women who were staged clinically, and from the specificity we calculated the negative predictive value. The predictive values are also a function of the true prevalence of node-positive disease in the population under study. 34 The prevalence of node-positive disease in this study among women who were staged pathologically equaled 41% (139 node-positive cases out of 342 cases who received an axillary dissection). To approximate that prevalence, we selected the prevalence of node-positive disease from a triangular probability distribution with minimum prevalence of 30%, maximum prevalence of 50% and mode of 41%. For each of the clinically staged women classified as having local disease, we conducted a Bernoulli trial using the negative predictive value to model whether she was correctly or incorrectly classified. Similarly, for each of the clinically staged women classified as having regional disease, we conducted a Bernoulli trial using the positive predictive value to model whether she was correctly or incorrectly classified. We reclassified the stage and primary therapy for the women selected as having been misclassified, and then subjected the reconstructed data set to Cox’s proportional hazards regression to estimate the effect of less-than-definitive care. This sensitivity analysis incorporated four sources of variability. For each reconstruction, different values of the prevalence of node-positive disease, and of the sensitivity and specificity of clinical staging were selected from the triangular probability distributions. In addition, the results of the Bernoulli trials differ in each reconstruction, yielding a different subset of the population with reclassified stage.
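A single reconstruction of the stage variable can likewise be sketched in Python with NumPy. The triangular-distribution parameters are the ones quoted above; the stage assignments for the 107 women staged clinically (449 total minus the 342 who received an axillary dissection) are randomly generated placeholders:

```python
import numpy as np

rng = np.random.default_rng(7)

# One draw each from the triangular densities described in the text.
sens = rng.triangular(0.13, 0.46, 0.83)   # sensitivity of clinical staging
spec = rng.triangular(0.41, 0.85, 0.98)   # specificity of clinical staging
prev = rng.triangular(0.30, 0.41, 0.50)   # prevalence of node-positive disease

# Predictive values from sensitivity, specificity, and prevalence.
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
npv = spec * (1 - prev) / (spec * (1 - prev) + (1 - sens) * prev)

# Hypothetical clinical stages for the 107 clinically staged women
# (0 = local, 1 = regional).
clinical_stage = rng.integers(0, 2, 107)

# A classification is kept with probability NPV (clinically local) or
# PPV (clinically regional); otherwise the stage is flipped, and the
# woman's therapy classification would be re-derived from the new stage.
keep_prob = np.where(clinical_stage == 1, ppv, npv)
correct = rng.random(len(clinical_stage)) < keep_prob
reconstructed_stage = np.where(correct, clinical_stage, 1 - clinical_stage)
```

Each repetition draws new sensitivity, specificity, and prevalence values and new Bernoulli results, giving the four sources of variability noted above.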

Unmeasured Confounding

For the bias due to confounding by an unknown or unmeasured confounder, ideally we would learn the value of the confounder for each woman. Instead, we created a dichotomous variable to represent confounding by indication in this data set and guessed its value for each woman, as informed by her exposure and outcome status. We chose a population prevalence from a uniform distribution with a lower limit of 30% and an upper limit of 40% for those who received definitive therapy and did not die of breast cancer, from limits of 45% and 55% for those who received less-than-definitive therapy and did not die of breast cancer and for those who received definitive therapy and died of breast cancer, and from limits of 60% and 70% for those who received less-than-definitive therapy and died of breast cancer. For each subject, the combination of disease and exposure determined the probability of having the unknown confounder. To perform a single reconstruction, we conducted a Bernoulli trial for all subjects, based on their probabilities, to assign whether they had the dichotomous confounder. We then subjected the reconstructed data set to Cox’s proportional hazards modeling, with the model now containing the unknown confounder. With each reconstruction, different population prevalences are selected, and the Bernoulli trials select a different subset of the population to have the unmeasured confounder.
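The cell-specific prevalence draws and per-subject Bernoulli trials might look like the following NumPy sketch; the exposure and outcome vectors here are randomly generated placeholders, whereas the authors' SAS code operates on the actual study data:

```python
import numpy as np

rng = np.random.default_rng(11)

# One prevalence draw per (exposure, outcome) cell, from the uniform
# ranges given in the text. Exposure 1 = less-than-definitive therapy;
# outcome 1 = breast cancer death.
prevalence = {
    (0, 0): rng.uniform(0.30, 0.40),
    (1, 0): rng.uniform(0.45, 0.55),
    (0, 1): rng.uniform(0.45, 0.55),
    (1, 1): rng.uniform(0.60, 0.70),
}

# Hypothetical exposure and outcome indicators for the 449 subjects.
exposed = rng.integers(0, 2, 449)
died = rng.integers(0, 2, 449)

# Bernoulli trial per subject: does she carry the unmeasured confounder?
p = np.array([prevalence[(e, d)] for e, d in zip(exposed, died)])
confounder = (rng.random(449) < p).astype(int)
```

The resulting indicator would then be added as a covariate to the proportional hazards model for that reconstruction.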

Presentation of Sensitivity Analysis Results

Each data-set reconstruction yielded an estimate of the effect of less-than-definitive therapy compared with definitive therapy on breast cancer survival over 5 years of follow-up. As reconstructions accumulated, we plotted three cumulative probability functions against the log of the relative hazard to display the results. First, we plotted the conventional result, reflecting only random error, for purposes of comparison. The relative hazard where a horizontal line plotted at the 50th percentile intersected the cumulative probability function equaled the conventional result. The relative hazards where horizontal lines plotted at the 2.5 percentile and 97.5 percentile intersected the cumulative probability function equaled the limits of a conventional 95% confidence interval. Second, we plotted the cumulative probability function from the relative hazards accumulated from the iterations of the sensitivity analysis, reflecting only systematic error, and reported the relative hazards at the same intersections. Last, we plotted the latter cumulative probability function with a bootstrapped estimate of total error, and reported the relative hazards at the same intersections. We generated the bootstrap estimates by re-sampling each reconstructed study population with replacement before estimating the relative hazard.
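Reading estimates off the cumulative probability functions at these intersections is equivalent to computing percentiles of the accumulated relative hazards. A minimal NumPy sketch, with simulated values standing in for the real reconstruction output:

```python
import numpy as np

rng = np.random.default_rng(3)

# Stand-in for the log relative hazards accumulated over the
# reconstructions (simulated here purely for illustration).
log_rh = rng.normal(np.log(1.5), 0.18, 4000)

# The 50th percentile is the median estimate; the 2.5th and 97.5th
# percentiles bound the 95% simulation interval.
median_rh = np.exp(np.percentile(log_rh, 50))
lower, upper = np.exp(np.percentile(log_rh, [2.5, 97.5]))
```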

We wished to assess whether sufficient reconstructions had accumulated to characterize the sensitivity of the analysis to the sources of error under scrutiny. Our primary objective was to characterize accurately the width of the plotted distributions, so we chose a convergence criterion that focused on the tails of the cumulative probability distributions. When the widths of the 90% confidence intervals about the 5th and 95th percentiles of the log-hazard-ratio distribution were both less than 1/20 of the width of the range between the 5th and 95th percentiles, we determined that the Monte Carlo simulation had converged.
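The stopping rule can be sketched as follows, under a stated assumption: the text does not say how the 90% confidence intervals about the tail percentiles were computed, so this sketch approximates them by bootstrapping the accumulated log hazard ratios.

```python
import numpy as np

def converged(log_rh, rng, n_boot=200):
    # 5th-to-95th percentile range of the accumulated distribution.
    p5, p95 = np.percentile(log_rh, [5, 95])
    tail_range = p95 - p5
    # Bootstrap each tail percentile to approximate a 90% confidence
    # interval about it; both intervals must be narrower than
    # tail_range / 20 for the simulation to be declared converged.
    resamples = rng.choice(log_rh, (n_boot, len(log_rh)), replace=True)
    boot_tails = np.percentile(resamples, [5, 95], axis=1)  # shape (2, n_boot)
    for tail in boot_tails:
        lo, hi = np.percentile(tail, [5, 95])
        if hi - lo >= tail_range / 20:
            return False
    return True

rng = np.random.default_rng(5)
sample = rng.normal(np.log(1.5), 0.18, 4000)
done = converged(sample, rng)
```

In practice the check would be applied to the growing collection of estimates after each batch of reconstructions, stopping once it returns true; with only a handful of reconstructions accumulated it reliably returns false.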


RESULTS

The Table displays the results of the analyses. The original analysis of the data set, with no assessment of systematic error, yielded a hazard ratio of 2.0 (95% confidence interval, 1.2-3.4) associating less-than-definitive primary therapy with breast cancer mortality over 5 years of follow-up.

The sensitivity analyses suggest that the original effect estimate was biased away from the null by the systematic errors. For the analysis of sensitivity to selection bias, the median relative hazard equaled 1.8 and the 95% simulation interval equaled 1.5 to 2.2, reflecting systematic error. The analysis of sensitivity to misclassification of stage and to unknown confounding yielded similar results.

The Table also presents the results of the sensitivity analysis incorporating all three sources of systematic error. The median relative hazard equaled 1.5 (95% simulation interval for systematic error, 1.2-1.9; 95% bootstrapped simulation interval for systematic and random error, 0.8-2.8). The analysis required approximately 4,000 reconstructions to sufficiently characterize the sensitivity to all three sources of systematic error.

Table 1

Figure 2 displays all of the evidence provided by the conventional analysis and the sensitivity analysis. The dotted/dashed line shows the cumulative probability function for the conventional result, centered on the log of 2.0 and intersecting horizontal lines at 2.5% and 97.5% at the limits of the 95% confidence interval. The dashed line shows the cumulative probability function for the sensitivity analysis, reflecting only systematic error. The solid line shows the cumulative probability function for the systematic error, incorporating random error as well using the bootstrap technique. Comparison of the systematic error (dashed line) with the conventional result (dotted/dashed line) shows that few of the reconstructions (1.4%) yielded estimates of effect greater than the original result. This comparison illustrates that the conventional confidence interval does not reflect systematic error. Comparison of the bootstrap result (solid line) with the conventional result (dotted/dashed line) illustrates the bias due to systematic error.

Figure 2

DISCUSSION

In the absence of prior information, a common interpretation of the original result might be as follows: “The point estimate of the relative hazard of less-than-definitive therapy, compared with definitive therapy, on breast cancer mortality is 2.0 with a 95% confidence interval of 1.2 to 3.4. The observed data are most likely when the true effect equals the point estimate, and the data would be unlikely under the null. The data are equally consistent with effects greater than 2.0 as with effects less than 2.0.” In contrast, the interpretation of the sensitivity analysis might begin with the same first sentence, but then add: “Under our model for systematic error, the observed data would be likely if the true hazard ratio was in the range 0.8 to 2.8, and most likely if that ratio were 1.5. These data are more likely if the true effect is less than 2 than if the effect is greater than 2.”

Our sensitivity analysis interpretation differs from the original in several important ways. First, our sensitivity-analysis interpretation recognizes the range of estimates of effect consistent with the data, whereas a conventional interpretation tends to focus on a single point estimate. Second, the conventional interpretation makes an explicit statement about the probability of observing the data under the null hypothesis, which is a customary, though not inherent, component of assessments of random error. Our sensitivity analysis interpretation makes no such statement, instead treating the null as one of the infinite number of hypotheses within the range considered. Third, the conventional interpretation suggests that the data are equally consistent with effects greater than or less than the point estimate. Our sensitivity analysis suggests that the data are more consistent with effects less than the point estimate than with effects greater than the point estimate.

An objection to our method might be its reliance on underlying assumptions about the nature of the systematic errors. We respond that investigators who make no quantitative assessment of systematic error assume that there is no systematic error. Quantitative assumptions about known sources of systematic error should be more informative, on average, than an assumption that they have no influence. Furthermore, common definitions of frequentist assessments of random error, such as the confidence interval, assume a valid statistical model and no bias in the effect estimate. 35 Observational epidemiologic research seldom satisfies the first assumption because exposure status is not assigned by randomization. 36 The second assumption cannot hold in the presence of systematic error. The assumptions underlying our method seem more tenable than those that underpin the confidence interval’s description of random error, which is nonetheless routinely presented.

Several advantages of our method weigh against the objection. It yields a graphical display of the study’s results, which can also be presented efficiently in a manner similar to the usual presentation of study results. In place of the point estimate of effect, one would report the median relative hazard of the bootstrapped sensitivity analysis. In place of a confidence interval to describe random error around the point estimate of effect, one would report two additional intervals: (a) the sensitivity analysis simulation interval, and (b) the bootstrapped sensitivity analysis interval.

Our method offers familiar contexts to epidemiologic methodologists, which is its second advantage. To begin, it requires counterfactual thinking, the foundation for much of causal inference in epidemiology. 16,37 In this case, one strives to create a data set that would have been observed had there been no systematic error. In addition, our method can incorporate data interchanges and influence analysis, two techniques recommended to address the aforementioned failure of observational research to satisfy the assumptions of frequentist statistics. 36 Last, one can envision our method as a means to reverse the distortion of information passing through the “episcope” recently described by Maclure and Schneeweiss. 38

Finally, our method allows investigators to quantitatively assess sources of systematic error, rather than focusing only on random error, as they seek to understand the uncertainty about an estimate of effect. This is its third advantage. Although any valid method of assessing important sources of systematic error would be better than ignoring it, our method provides both efficient presentation and sufficient information, a combination not achieved by existing alternatives. For example, many investigators discuss the limitations of their results in a Discussion section, but few provide any quantitative assessment of the impact of the limitations. In the absence of quantification, the confidence interval assessment of random error may substitute for the entire error assessment. In some cases, investigators quantitatively estimate bias attributable to one or several sources of systematic error. This assessment is akin to presenting alternative point estimates. In the absence of an interval, however, the focus remains on the point estimate and the confidence interval, which assesses only random error.

An extension of the preceding method of sensitivity analysis is to create plausible scenarios for systematic error and to calculate revised point estimates under each scenario. 25 Separate sources of systematic error usually receive separate analyses. Although this technique presents a range of values for consideration, the reader has no sense of the likelihood of each scenario and, therefore, no sense of which revised estimates of effect are most compatible with the data. Furthermore, this technique requires a substantial amount of text or table space to be fully presented.

A Monte Carlo technique recently proposed 28 addresses the shortcomings of the last alternative. It provides a comprehensive assessment of all the sources of systematic error, provides an understanding of the estimates of effect most compatible with the data, and its results can be economically presented. A disadvantage of the technique is its distance from the original data. It re-estimates the effect that would have been observed in the absence of the systematic error by probabilistically adjusting the original effect estimate for the bias of the systematic error. When the original data are available, as they are to the original investigators, our method is more easily implemented and is more likely to preserve the interrelationships between sources of error. 38 The alternative treats sources of error as independent, or assigns correlations between errors that would be difficult to accurately quantify. The results of the alternative method, with estimates of the bias distributions empirically derived from our results, did not replicate our sensitivity analysis results (data not shown).

Although our method validly describes systematic error, we recognize that additional development should be undertaken. Assessment of bias arising from differential misclassification should be readily implemented, as it requires only different probability distributions within strata of variables. Extensions to case-control designs may require specialized enhancements, particularly in matched studies. Finally, the method should be extended to incorporate Bayesian assessments, in which context the technique will likely fit more comfortably. Figure 3 illustrates with an example in which a prior distribution centered on the null is modified by the results of the sensitivity analysis to yield a posterior distribution that suggests an adverse effect of less-than-definitive therapy.

Figure 3

More importantly, we encourage epidemiologists to practice and publish some quantitative sensitivity analysis to assess systematic error in their research. Any method will be susceptible to valid criticism. In the absence of such assessments, however, systematic error will continue to receive short shrift in the selection of results worthy of publication and development of public health policy. Assessments of random error, and particularly statistical significance testing, will remain the focus of these objectives until epidemiologists regularly supply coherent and concise descriptions of the systematic error surrounding their estimates of effect.


ACKNOWLEDGMENTS

We thank Sander Greenland and Ken Rothman for their thoughtful reviews of drafts of the manuscript.


REFERENCES

1. Hennekens CH, Buring JE. Epidemiology in Medicine. Boston, MA: Little, Brown, 1987.

2. Szklo M, Nieto FJ. Epidemiology: Beyond the Basics. Gaithersburg, MD: Aspen, 2000.

3. Norell SE. Workbook of Epidemiology. New York, NY: Oxford University Press, 1995.

4. Rothman KJ, Greenland S. Precision and validity in epidemiologic studies. In: Rothman KJ, Greenland S, eds. Modern Epidemiology. Philadelphia, PA: Lippincott-Raven, 1998; 115–134.

5. Kleinbaum DG, Kupper LL, Morgenstern H. Epidemiologic Research. New York, NY: Van Nostrand Reinhold, 1982.

6. Rothman KJ, Greenland S. Accuracy considerations in study design. In: Rothman KJ, Greenland S, eds. Modern Epidemiology. Philadelphia, PA: Lippincott-Raven, 1998; 135–146.

7. Greenland S, Rothman KJ. Introduction to stratified analysis. In: Rothman KJ, Greenland S, eds. Modern Epidemiology. Philadelphia, PA: Lippincott-Raven, 1998; 253–280.

8. Greenland S. Introduction to regression modeling. In: Rothman KJ, Greenland S, eds. Modern Epidemiology. Philadelphia, PA: Lippincott-Raven, 1998; 401–434.

9. Thompson WD. Statistical criteria in the interpretation of epidemiologic data. Am J Public Health. 1987; 77: 191–194.

10. Poole C. Beyond the confidence interval. Am J Public Health. 1987; 77: 195–199.

11. Thompson WD. On the comparison of effects. Am J Public Health. 1987; 77: 491–492.

12. Poole C. Confidence intervals exclude nothing. Am J Public Health. 1987; 77: 492–493.

13. The editors. The value of P. Epidemiology. 2001; 12: 286.

14. Weinberg C. It’s time to rehabilitate the P-value. Epidemiology. 2001; 12: 288–290.

15. Poole C. Low P-values or narrow confidence intervals. Which are more durable? Epidemiology. 2001; 12: 291–294.

16. Greenland S, Robins JM. Identifiability, exchangeability, and epidemiological confounding. Int J Epidemiol. 1986; 15: 412–418.

17. Miettinen OS. Theoretical Epidemiology: Principles of Occurrence Research in Medicine. Albany, NY: Delmar, 1985.

18. Wacholder S, McLaughlin JK, Silverman DT, Mandel JS. Selection of controls in case-control studies. Principles. Am J Epidemiol. 1992; 135: 1019–1028.

19. Greenland S. The effect of misclassification in the presence of covariates. Am J Epidemiol. 1980; 112: 564–569.

20. Brenner H, Savitz DA. The effects of sensitivity and specificity of case selection on validity, sample size, precision, and power in hospital-based case-control studies. Am J Epidemiol. 1990; 132: 181–192.

21. Lang J, Rothman KJ, Cann C. That confounded p-value. Epidemiology. 1998; 9: 7–8.

22. Rothman KJ. Is flutamide effective in patients with bilateral orchiectomy? Lancet. 1999; 353: 1184.

23. Lash TL. Re: Insulin-like growth factor 1 and prostate cancer risk: a population-based case-control study. J Natl Cancer Inst. 1998; 90: 1841.

24. Uniform Requirements for Manuscripts Submitted to Biomedical Journals, February 27, 2002. http://www.icmje.org/index.html#manuscripts.

25. Greenland S. Basic methods for sensitivity analysis and external adjustment. In: Rothman KJ, Greenland S, eds. Modern Epidemiology. Philadelphia, PA: Lippincott-Raven, 1998; 343–358.

26. SAS [computer program], Version 8. Cary, NC: The SAS Institute, 1999.

27. Robins JM, Greenland S. The role of model selection in causal inference from non-experimental data. Am J Epidemiol. 1986; 123: 392–402.

28. Phillips CV, Maldonado G. Using Monte Carlo methods to quantify the multiple sources of error in studies. Am J Epidemiol. 1999; 149: S17(abstr).

29. Lash TL, Silliman RA, Guadagnoli E, Mor V. The effect of less-than-definitive care on breast cancer recurrence and mortality. Cancer. 2000; 89: 1739–1747.

30. Lash TL, Silliman RA. A sensitivity analysis to separate bias due to confounding from bias due to predicting misclassification by a variable that does both. Epidemiology. 2000; 11: 544–549.

31. Silliman RA, Guadagnoli E, Weitberg AB, Mor V. Age as a predictor of diagnostic and initial treatment intensity in newly diagnosed breast cancer patients. J Gerontol. 1989; 44: M46–M50.

32. The Steering Committee on Clinical Practice Guidelines for the Care and Treatment of Breast Cancer. Mastectomy or lumpectomy? The choice of operation for clinical stages I and II breast cancer. Can Med Assoc J. 1998; 158(Suppl 3): S15–S21.

33. Rubin DB. Bayesian inference for causal effects: the role of randomization. Ann Statist. 1978; 6: 34–58.

34. Rosner B. Fundamentals of Biostatistics, 4th ed. Belmont, CA: Wadsworth, 1995; 56–61.

35. Rothman KJ, Greenland S. Approaches to statistical analysis. In: Rothman KJ, Greenland S, eds. Modern Epidemiology. Philadelphia, PA: Lippincott-Raven, 1998; 181–200.

36. Greenland S. Randomization, statistics, and causal inference. Epidemiology. 1990; 1: 421–429.

37. Greenland S. Probability logic and probabilistic induction. Epidemiology. 1998; 9: 322–332.

38. Maclure M, Schneeweiss S. Causation of bias: the episcope. Epidemiology. 2001; 12: 114–122.

Keywords:

epidemiologic methods; systematic errors

© 2003 Lippincott Williams & Wilkins, Inc.
