<![CDATA[Epidemiology - Causality]]>
http://journals.lww.com/epidem/pages/collectiondetails.aspx?TopicalCollectionId=11
en-usSat, 20 Dec 2014 10:02:58 -0600Wolters Kluwer Health RSS Generatorhttp://images.journals.lww.com/epidem/XLargeThumb.00001648-201501000-00000.CV.jpeg<![CDATA[Epidemiology - Causality]]>
http://journals.lww.com/epidem/pages/collectiondetails.aspx?TopicalCollectionId=11
http://pdfs.journals.lww.com/epidem/1990/11000/Randomization,_Statistics,_and_Causal_Inference_.3.pdf
<![CDATA[Randomization, Statistics, and Causal Inference.]]>This paper reviews the role of statistics in causal inference. Special attention is given to the need for randomization to justify causal inferences from conventional statistics, and the need for random sampling to justify descriptive inferences. In most epidemiologic studies, randomization and random sampling play little or no role in the assembly of study cohorts. I therefore conclude that probabilistic interpretations of conventional statistics are rarely justified, and that such interpretations may encourage misinterpretation of nonrandomized studies. Possible remedies for this problem include deemphasizing inferential statistics in favor of data descriptors, and adopting statistical techniques based on more realistic probability models than those in common use.
(C) Lippincott-Raven Publishers.]]>Mon, 17 Jan 2011 09:40:26 GMT-06:0000001648-199011000-00003
http://pdfs.journals.lww.com/epidem/1992/03000/Identifiability_and_Exchangeability_for_Direct_and.13.pdf
<![CDATA[Identifiability and Exchangeability for Direct and Indirect Effects.]]>We consider the problem of separating the direct effects of an exposure from effects relayed through an intermediate variable (indirect effects). We show that adjustment for the intermediate variable, which is the most common method of estimating direct effects, can be biased. We also show that, even in a randomized crossover trial of exposure, direct and indirect effects cannot be separated without special assumptions; in other words, direct and indirect effects are not separately identifiable when only exposure is randomized. If the exposure and intermediate never interact to cause disease and if intermediate effects can be controlled, that is, blocked by a suitable intervention, then a trial randomizing both exposure and the intervention can separate direct from indirect effects. Nonetheless, the estimation must be carried out using the G-computation algorithm. Conventional adjustment methods remain biased. When exposure and the intermediate interact to cause disease, direct and indirect effects will not be separable even in a trial in which both the exposure and the intervention blocking intermediate effects are randomly assigned. Nonetheless, in such a trial, one can still estimate the fraction of exposure-induced disease that could be prevented by control of the intermediate. Even in the absence of an intervention blocking the intermediate effect, the fraction of exposure-induced disease that could be prevented by control of the intermediate can be estimated with the G-computation algorithm if data are obtained on additional confounding variables. (Epidemiology 1992;3:143-155)
(C) Lippincott-Raven Publishers.]]>Mon, 17 Jan 2011 09:41:45 GMT-06:0000001648-199203000-00013
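The standardization step of the G-computation algorithm referenced in this abstract can be sketched for a point exposure with a single intermediate. All probabilities below are hypothetical illustration, not values from the paper.

```python
# Hypothetical inputs: binary exposure A, intermediate M, an M-Y confounder C,
# and conditional risks P(Y=1 | A, M, C). Numbers are illustrative only.
p_y = {
    (0, 0, 0): 0.05, (0, 0, 1): 0.10,
    (0, 1, 0): 0.15, (0, 1, 1): 0.25,
    (1, 0, 0): 0.10, (1, 0, 1): 0.20,
    (1, 1, 0): 0.30, (1, 1, 1): 0.45,
}
p_c = {0: 0.6, 1: 0.4}  # marginal distribution of the confounder C

def g_formula_risk(a, m):
    """Risk of Y under interventions set(A=a), set(M=m), standardized over C."""
    return sum(p_y[(a, m, c)] * p_c[c] for c in p_c)

# Direct effect of A with the intermediate blocked at M=0; conventional
# adjustment would instead condition on M, which the abstract notes can be biased.
cde_m0 = g_formula_risk(1, 0) - g_formula_risk(0, 0)
```

The key point is that the intermediate is set by intervention inside the standardization, rather than conditioned on.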
http://pdfs.journals.lww.com/epidem/1992/07000/G_Estimation_of_the_Effect_of_Prophylaxis_Therapy.7.pdf
<![CDATA[G-Estimation of the Effect of Prophylaxis Therapy for Pneumocystis carinii Pneumonia on the Survival of AIDS Patients.]]>AIDS Clinical Trial Group Randomized Trial 002 compared the effect of high-dose with low-dose 3′-azido-3′-deoxythymidine (AZT) on the survival of AIDS patients. Embedded within the trial was an essentially uncontrolled observational study of the effect of prophylaxis therapy for Pneumocystis carinii pneumonia on survival. In this paper, we estimate the causal effect of prophylaxis therapy on survival by using the method of G-estimation to estimate the parameters of a structural nested failure time model (SNFTM). Our SNFTM relates a subject's observed time of death and observed prophylaxis history to the time the subject would have died if, possibly contrary to fact, prophylaxis therapy had been withheld. We find that, under our assumptions, the data are consistent with prophylaxis therapy increasing survival by 16% or decreasing survival by 18% at the α = 0.05 level. The analytic approach proposed in this paper will be necessary to control bias in any epidemiologic study in which there exists a time-dependent risk factor for death, such as Pneumocystis carinii pneumonia history, that (A1) influences subsequent exposure to the agent under study, for example, prophylaxis therapy, and (A2) is itself influenced by past exposure to the study agent. Conditions A1 and A2 will be true whenever there exists a time-dependent risk factor that is simultaneously a confounder and an intermediate variable. (Epidemiology 1992;3:319-336)
(C) Lippincott-Raven Publishers.]]>Mon, 17 Jan 2011 09:42:21 GMT-06:0000001648-199207000-00007
http://pdfs.journals.lww.com/epidem/1996/09000/Imputation_for_Exposure_Histories_with_Gaps,_under.7.pdf
<![CDATA[Imputation for Exposure Histories with Gaps, under an Excess Relative Risk Model.]]>In reconstructing exposure histories needed to calculate cumulative exposures, gaps often occur. Our investigation was motivated by case-control studies of residential radon exposure and lung cancer, where half or more of the targeted homes may not be measurable. Investigators have adopted various schemes for imputing exposures for such gaps. We first undertook simulations to assess the performance of five such methods under an excess relative risk model, in the presence of random missingness and under assumed independence among the true exposure levels for different epochs of exposure (houses). Assuming no other source of measurement error, one of the methods performed without bias and with coverage of nominally 95% confidence intervals that was close to 95%. This method assigns to the missing residences the arithmetic mean across all measured control residences. We show that its good properties can be explained by the fact that this approach produces approximate "Berkson errors." To take advantage of predictive information that might exist about the missing epochs of exposure, one might prefer to carry out the imputations within strata. In further simulations, we asked whether the method would still perform well if imputations were carried out within many strata. It does, and much of the lost statistical power/precision can be recovered if the stratification system is moderately predictive of the missing exposures. Thus, observed control mean imputation provides a way to impute missing exposures without corrupting the study's validity; and stratifying the imputations can enhance precision. The technique is applicable in other settings where exposure histories contain gaps.
(C) Lippincott-Raven Publishers.]]>Mon, 17 Jan 2011 09:43:19 GMT-06:0000001648-199609000-00007
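The observed-control-mean imputation scheme described above can be sketched as follows; the radon values and epoch durations are hypothetical, not data from the study.

```python
# Hypothetical radon concentrations (Bq/m^3) by residence epoch; None marks a
# home that could not be measured.
control_measurements = [45.0, 80.0, 55.0, 120.0, 60.0]  # measured control homes
control_mean = sum(control_measurements) / len(control_measurements)

def impute_history(history):
    """Fill unmeasured epochs with the observed control mean."""
    return [x if x is not None else control_mean for x in history]

def cumulative_exposure(history, years_per_epoch):
    """Time-weighted cumulative exposure across residence epochs."""
    return sum(x * y for x, y in zip(impute_history(history), years_per_epoch))

subject = [90.0, None, 40.0]  # middle residence unmeasurable
exposure = cumulative_exposure(subject, [10, 5, 15])
```

Stratified imputation, as discussed in the abstract, would simply replace the single control mean with stratum-specific means.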
http://pdfs.journals.lww.com/epidem/1999/01000/Causal_Diagrams_for_Epidemiologic_Research_.8.pdf
<![CDATA[Causal Diagrams for Epidemiologic Research.]]>Causal diagrams have a long history of informal use and, more recently, have undergone formal development for applications in expert systems and robotics. We provide an introduction to these developments and their use in epidemiologic research. Causal diagrams can provide a starting point for identifying variables that must be measured and controlled to obtain unconfounded effect estimates. They also provide a method for critical evaluation of traditional epidemiologic criteria for confounding. In particular, they reveal certain heretofore unnoticed shortcomings of those criteria when used in considering multiple potential confounders. We show how to modify the traditional criteria to correct those shortcomings. (Epidemiology 1999;10:37-48)
(C) 1999 Lippincott Williams & Wilkins, Inc.]]>Mon, 17 Jan 2011 09:44:14 GMT-06:0000001648-199901000-00008
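A much-simplified version of the variable-identification idea above can be sketched in code. Finding common causes of exposure and outcome is only a rough proxy for the formal criteria developed in the paper, and the DAG below is hypothetical.

```python
# Hypothetical DAG as parent sets: U -> A, U -> Y, L -> A, A -> Y.
parents = {
    "A": {"U", "L"},
    "Y": {"U", "A"},
    "U": set(),
    "L": set(),
}

def ancestors(node, blocked=frozenset()):
    """Ancestors of `node` reachable without passing through `blocked`."""
    seen, stack = set(), [node]
    while stack:
        for p in parents[stack.pop()]:
            if p not in seen and p not in blocked:
                seen.add(p)
                stack.append(p)
    return seen

# Common causes of A and Y: ancestors of A that also cause Y other than
# through A itself. Here only U qualifies; L causes Y only via A.
confounders = ancestors("A") & ancestors("Y", blocked={"A"})
```

The paper's graphical criteria handle colliders and multiple confounders, which this common-cause heuristic does not.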
http://journals.lww.com/epidem/Fulltext/2000/09000/Marginal_Structural_Models_and_Causal_Inference_in.11.aspx
<![CDATA[Marginal Structural Models and Causal Inference in Epidemiology]]>In observational studies with exposures or treatments that vary over time, standard approaches for adjustment of confounding are biased when there exist time-dependent confounders that are also affected by previous treatment. This paper introduces marginal structural models, a new class of causal models that allow for improved adjustment of confounding in those situations. The parameters of a marginal structural model can be consistently estimated using a new class of estimators, the inverse-probability-of-treatment weighted estimators.]]>Mon, 17 Jan 2011 09:45:56 GMT-06:0000001648-200009000-00011
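A minimal point-exposure sketch of an inverse-probability-of-treatment weighted estimator follows; the time-varying estimators in the paper multiply such weights over follow-up. All counts and risks are hypothetical.

```python
# Records: (L, A, Y, n) = confounder level, treatment, mean outcome, count.
# Numbers are illustrative only.
data = [
    (0, 0, 0.10, 400), (0, 1, 0.20, 100),
    (1, 0, 0.30, 100), (1, 1, 0.40, 400),
]

# P(A=1 | L) estimated from the counts
n_l = {l: sum(n for (li, a, y, n) in data if li == l) for l in (0, 1)}
p_a1 = {l: sum(n for (li, a, y, n) in data if li == l and a == 1) / n_l[l]
        for l in (0, 1)}

def iptw_mean(treated):
    """Weighted outcome mean under set(A=treated): each stratum is weighted
    by the inverse probability of the treatment actually received."""
    num = den = 0.0
    for l, a, y, n in data:
        if a != treated:
            continue
        w = n / (p_a1[l] if a == 1 else 1 - p_a1[l])
        num += w * y
        den += w
    return num / den

ate = iptw_mean(1) - iptw_mean(0)
```

With these numbers the weighted contrast reproduces the standardized risk difference, which is the consistency property the abstract refers to.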
http://journals.lww.com/epidem/Fulltext/2000/09000/Marginal_Structural_Models_to_Estimate_the_Causal.12.aspx
<![CDATA[Marginal Structural Models to Estimate the Causal Effect of Zidovudine on the Survival of HIV-Positive Men]]>Standard methods for survival analysis, such as the time-dependent Cox model, may produce biased effect estimates when there exist time-dependent confounders that are themselves affected by previous treatment or exposure. Marginal structural models are a new class of causal models the parameters of which are estimated through inverse-probability-of-treatment weighting; these models allow for appropriate adjustment for confounding. We describe the marginal structural Cox proportional hazards model and use it to estimate the causal effect of zidovudine on the survival of human immunodeficiency virus-positive men participating in the Multicenter AIDS Cohort Study. In this study, CD4 lymphocyte count is both a time-dependent confounder of the causal effect of zidovudine on survival and is affected by past zidovudine treatment. The crude mortality rate ratio (95% confidence interval) for zidovudine was 3.6 (3.0–4.3), which reflects the presence of confounding. After controlling for baseline CD4 count and other baseline covariates using standard methods, the mortality rate ratio decreased to 2.3 (1.9–2.8). Using a marginal structural Cox model to control further for time-dependent confounding due to CD4 count and other time-dependent covariates, the mortality rate ratio was 0.7 (95% conservative confidence interval = 0.6–1.0). We compare marginal structural models with previously proposed causal methods.]]>Mon, 17 Jan 2011 09:46:29 GMT-06:0000001648-200009000-00012
http://journals.lww.com/epidem/Fulltext/2001/05000/Data,_Design,_and_Background_Knowledge_in.11.aspx
<![CDATA[Data, Design, and Background Knowledge in Etiologic Inference]]>I use two examples to demonstrate that an appropriate etiologic analysis of an epidemiologic study depends as much on study design and background subject-matter knowledge as on the data. The demonstration is facilitated by the use of causal graphs.]]>Mon, 17 Jan 2011 09:47:14 GMT-06:0000001648-200105000-00011
http://journals.lww.com/epidem/Fulltext/2003/05000/Quantifying_Biases_in_Causal_Models__Classical.9.aspx
<![CDATA[Quantifying Biases in Causal Models: Classical Confounding vs Collider-Stratification Bias]]>It has long been known that stratifying on variables affected by the study exposure can create selection bias. More recently it has been shown that stratifying on a variable that precedes exposure and disease can induce confounding, even if there is no confounding in the unstratified (crude) estimate. This paper examines the relative magnitudes of these biases under some simple causal models in which the stratification variable is graphically depicted as a collider (a variable directly affected by two or more other variables in the graph). The results suggest that bias from stratifying on variables affected by exposure and disease may often be comparable in size with bias from classical confounding (bias from failing to stratify on a common cause of exposure and disease), whereas other biases from collider stratification may tend to be much smaller.]]>Mon, 17 Jan 2011 09:48:25 GMT-06:0000001648-200305000-00009
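The collider-stratification mechanism can be illustrated with a small numerical model, assuming hypothetical probabilities and a truly null exposure-disease effect:

```python
import itertools

# Exposure E and disease D are marginally independent (null effect), but both
# affect a collider C; all probabilities are illustrative.
p_e, p_d = 0.5, 0.3

def p_c1(e, d):
    """P(C=1 | E=e, D=d): the collider responds to both of its parents."""
    return 0.1 + 0.5 * e + 0.3 * d

# Joint probability P(E=e, D=d, C=1)
joint = {
    (e, d): (p_e if e else 1 - p_e) * (p_d if d else 1 - p_d) * p_c1(e, d)
    for e, d in itertools.product((0, 1), repeat=2)
}

def p_d1_given_e(e):
    """P(D=1 | E=e, C=1): disease risk within the C=1 stratum."""
    return joint[(e, 1)] / (joint[(e, 0)] + joint[(e, 1)])

marginal_rd = 0.0  # true causal risk difference: E does not affect D
stratum_rd = p_d1_given_e(1) - p_d1_given_e(0)  # induced by stratifying on C
```

Conditioning on the collider manufactures a substantial negative risk difference where the true effect is zero.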
http://journals.lww.com/epidem/Fulltext/2004/09000/A_Structural_Approach_to_Selection_Bias.20.aspx
<![CDATA[A Structural Approach to Selection Bias]]>The term “selection bias” encompasses various biases in epidemiology. We describe examples of selection bias in case-control studies (eg, inappropriate selection of controls) and cohort studies (eg, informative censoring). We argue that the causal structure underlying the bias in each example is essentially the same: conditioning on a common effect of 2 variables, one of which is either exposure or a cause of exposure and the other is either the outcome or a cause of the outcome. This structure is shared by other biases (eg, adjustment for variables affected by prior exposure). A structural classification of bias distinguishes between biases resulting from conditioning on common effects (“selection bias”) and those resulting from the existence of common causes of exposure and outcome (“confounding”). This classification also leads to a unified approach to adjust for selection bias.]]>Mon, 17 Jan 2011 09:48:56 GMT-06:0000001648-200409000-00020
http://journals.lww.com/epidem/Fulltext/2005/07000/Bounding_Causal_Effects_Under_Uncontrolled.18.aspx
<![CDATA[Bounding Causal Effects Under Uncontrolled Confounding Using Counterfactuals]]>Common sensitivity analysis methods for unmeasured confounders provide a corrected point estimate of causal effect for each specified set of unknown parameter values. This article reviews alternative methods for generating deterministic nonparametric bounds on the magnitude of the causal effect using linear programming methods and potential outcomes models. The bounds are generated using only the observed table. We then demonstrate how these bound widths may be reduced through assumptions regarding the potential outcomes under various exposure regimens. We illustrate this linear programming approach using data from the Cooperative Cardiovascular Project. These bounds on causal effect under uncontrolled confounding complement standard sensitivity analyses by providing a range within which the causal effect must lie given the validity of the assumptions.]]>Mon, 17 Jan 2011 09:49:34 GMT-06:0000001648-200507000-00018
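Without the extra potential-outcome assumptions the paper uses to narrow them, such bounds reduce to the familiar no-assumption bounds, computable directly from the observed 2x2 table. The counts below are hypothetical.

```python
# Hypothetical 2x2 table of counts, keyed by (A, Y)
counts = {(0, 0): 300, (0, 1): 100, (1, 0): 150, (1, 1): 250}
n = sum(counts.values())
p = {cell: c / n for cell, c in counts.items()}  # joint P(A=a, Y=y)

# E[Y(1)]: set the unobserved potential outcomes of the unexposed to 0 (lower
# bound) or to 1 (upper bound); symmetrically for E[Y(0)].
low_y1 = p[(1, 1)]
high_y1 = p[(1, 1)] + p[(0, 0)] + p[(0, 1)]
low_y0 = p[(0, 1)]
high_y0 = p[(0, 1)] + p[(1, 0)] + p[(1, 1)]

ate_bounds = (low_y1 - high_y0, high_y1 - low_y0)  # width is always 1
```

The constant width of 1 is exactly why the additional exposure-regimen assumptions discussed in the abstract are needed to make the bounds informative.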
http://journals.lww.com/epidem/Fulltext/2006/07000/Instruments_for_Causal_Inference__An.4.aspx
<![CDATA[Instruments for Causal Inference: An Epidemiologist's Dream?]]>The use of instrumental variable (IV) methods is attractive because, even in the presence of unmeasured confounding, such methods may consistently estimate the average causal effect of an exposure on an outcome. However, for this consistent estimation to be achieved, several strong conditions must hold. We review the definition of an instrumental variable, describe the conditions required to obtain consistent estimates of causal effects, and explore their implications in the context of a recent application of the instrumental variables approach. We also present (1) a description of the connection between 4 causal models—counterfactuals, causal directed acyclic graphs, nonparametric structural equation models, and linear structural equation models—that have been used to describe instrumental variables methods; (2) a unified presentation of IV methods for the average causal effect in the study population through structural mean models; and (3) a discussion and new extensions of instrumental variables methods based on assumptions of monotonicity.]]>Mon, 17 Jan 2011 09:50:19 GMT-06:0000001648-200607000-00004
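A minimal sketch of the standard IV (Wald) ratio estimator for a binary instrument; the data are hypothetical, and the interpretation of the estimate rests on the conditions reviewed in the paper.

```python
# rows: (z, a, y) = instrument, exposure, outcome; illustrative data only.
def wald_estimate(rows):
    """Intention-to-treat effect on Y divided by that on A."""
    z1 = [(a, y) for z, a, y in rows if z == 1]
    z0 = [(a, y) for z, a, y in rows if z == 0]
    mean = lambda xs: sum(xs) / len(xs)
    num = mean([y for _, y in z1]) - mean([y for _, y in z0])  # ITT on Y
    den = mean([a for a, _ in z1]) - mean([a for a, _ in z0])  # ITT on A
    return num / den

rows = ([(1, 1, 1.0)] * 60 + [(1, 0, 0.2)] * 40
        + [(0, 1, 1.0)] * 20 + [(0, 0, 0.2)] * 80)
effect = wald_estimate(rows)
```

Under the monotonicity assumptions discussed in the abstract, this ratio estimates the average effect among those whose exposure is moved by the instrument.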
http://journals.lww.com/epidem/Fulltext/2007/05000/The_Identification_of_Synergism_in_the.8.aspx
<![CDATA[The Identification of Synergism in the Sufficient-Component-Cause Framework]]>Various concepts of interaction are reconsidered in light of a sufficient-component-cause framework. Conditions and statistical tests are derived for the presence of synergism within sufficient causes. The conditions derived are sufficient but not necessary for the presence of synergism. In the context of monotonic effects, the conditions derived are closely related to effect modification on the risk difference scale; however, this is not the case without the assumption of monotonic effects.]]>Mon, 17 Jan 2011 09:51:05 GMT-06:0000001648-200705000-00008
http://journals.lww.com/epidem/Fulltext/2007/07000/Properties_of_2_Counterfactual_Effect_Definitions.10.aspx
<![CDATA[Properties of 2 Counterfactual Effect Definitions of a Point Exposure]]>As recognized for more than 2 decades, the way to define effects of an exposure may be unclear if the effects are conditional on occurrence of prior events. Since age-specific rates are inherently conditional on survival to the age for which rates are calculated, age-specific rate ratios may be misleading. We consider this problem in the context of a point exposure and an unmeasured risk factor that is independent of the exposure, together with potential outcome models and associated counterfactual effect definitions. The methods apply to a recurring exposure that “tracks” over time, as well as to more complicated situations (although additional issues may then arise). We identify and evaluate 2 seemingly natural ways that the population effects of a point exposure might be defined. At least one definition of the population effects of a point exposure is identifiable, while another natural definition is not identifiable. We describe possible implications of these definitions for the distortion of time-specific rate ratios that can occur with passage of time. We discuss interpretation of effects for each definition, and how the definitions are related to selection bias as recently defined by Hernán et al (Epidemiology. 2004;15:615–625). We present implications for study design, and make several recommendations. Problems may be reduced or avoided by starting follow-up before onset of exposure and by using survival curves to compare exposed with unexposed.]]>Mon, 17 Jan 2011 09:51:56 GMT-06:0000001648-200707000-00010
http://journals.lww.com/epidem/Fulltext/2007/09000/Four_Types_of_Effect_Modification__A.6.aspx
<![CDATA[Four Types of Effect Modification: A Classification Based on Directed Acyclic Graphs]]>It is possible to classify the types of causal relationships that can give rise to effect modification on the risk difference scale by expressing the conditional causal risk-difference as a sum of products of stratum-specific risk differences and conditional probabilities. Directed acyclic graphs clarify the causal relationships necessary for a particular variable to serve as an effect modifier for the causal risk difference involving 2 other variables. The directed acyclic graph causal framework thereby gives rise to a 4-fold classification for effect modification: direct effect modification, indirect effect modification, effect modification by proxy and effect modification by a common cause. We briefly discuss the case of multiple effect modification relationships and multiple effect modifiers as well as measures of effect other than that of the causal risk difference.]]>Mon, 17 Jan 2011 09:53:10 GMT-06:0000001648-200709000-00006
http://journals.lww.com/epidem/Fulltext/2008/09000/Causal_Directed_Acyclic_Graphs_and_the_Direction.14.aspx
<![CDATA[Causal Directed Acyclic Graphs and the Direction of Unmeasured Confounding Bias]]>We present results that allow the researcher in certain cases to determine the direction of the bias that arises when control for confounding is inadequate. The results are given within the context of the directed acyclic graph causal framework and are stated in terms of signed edges. Rigorous definitions for signed edges are provided. We describe cases in which intuition concerning signed edges fails and we characterize the directed acyclic graphs that researchers can use to draw conclusions about the sign of the bias of unmeasured confounding. If there is only one unmeasured confounding variable on the graph, then nonincreasing or nondecreasing average causal effects suffice to draw conclusions about the direction of the bias. When there are more than one unmeasured confounding variable, nonincreasing and nondecreasing average causal effects can be used to draw conclusions only if the various unmeasured confounding variables are independent of one another conditional on the measured covariates. When this conditional independence property does not hold, stronger notions of monotonicity are needed to draw conclusions about the direction of the bias.]]>Mon, 17 Jan 2011 09:57:20 GMT-06:0000001648-200809000-00014
http://journals.lww.com/epidem/Fulltext/2009/01000/Sufficient_Cause_Interactions_and_Statistical.4.aspx
<![CDATA[Sufficient Cause Interactions and Statistical Interactions]]>When the outcome and all exposures of interest are binary it is sometimes possible to draw conclusions from empirical data about mechanistic interactions in the sufficient cause sense. Empirical conditions are given for sufficient cause interactions and these conditions are compared with and contrasted to interaction coefficients in linear, log-linear and logistic regression models. Conditions that suffice to allow for the interpretation of statistical interactions as sufficient cause interactions are derived. Discussion is presented concerning the implications of the inclusion of confounding variables in the model.]]>Mon, 17 Jan 2011 09:58:28 GMT-06:0000001648-200901000-00004
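One inequality of the type derived in this literature can be checked mechanically. The condition below is stated assuming no confounding; the exact assumptions under which it licenses a sufficient-cause-interaction conclusion are those detailed in the paper, and the risks are hypothetical.

```python
def synergism_indicated(p11, p10, p01):
    """Check the inequality p11 - p10 - p01 > 0 for observed risks
    p_ab = P(D=1 | A=a, B=b); when it holds (and the paper's assumptions are
    met), a sufficient cause containing both exposures must be present."""
    return p11 - p10 - p01 > 0

# Hypothetical risks: 0.50 exceeds 0.20 + 0.15, so the condition fires.
flag = synergism_indicated(0.50, 0.20, 0.15)
```

Note that this is stronger than ordinary superadditivity of risk differences, which is why a statistical interaction term alone does not establish a sufficient cause interaction.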
http://journals.lww.com/epidem/Fulltext/2009/01000/Marginal_Structural_Models_for_the_Estimation_of.6.aspx
<![CDATA[Marginal Structural Models for the Estimation of Direct and Indirect Effects]]>The estimation of controlled direct effects can be carried out by fitting a marginal structural model and using inverse probability of treatment weighting. To use marginal structural models to estimate natural direct and indirect effects, 2 marginal structural models can be used: 1 for the effects of the treatment and mediator on the outcome and 1 for the effect of the treatment on the mediator. Unlike marginal structural models typically used in epidemiologic research, the marginal structural models used to estimate natural direct and indirect effects are made conditional on the covariates.]]>Mon, 17 Jan 2011 09:59:04 GMT-06:0000001648-200901000-00006
http://journals.lww.com/epidem/Fulltext/2009/11000/On_the_Distinction_Between_Interaction_and_Effect.16.aspx
<![CDATA[On the Distinction Between Interaction and Effect Modification]]>This paper contrasts the concepts of interaction and effect modification using a series of examples. Interaction and effect modification are formally defined within the counterfactual framework. Interaction is defined in terms of the effects of 2 interventions whereas effect modification is defined in terms of the effect of one intervention varying across strata of a second variable. Effect modification can be present with no interaction; interaction can be present with no effect modification. There are settings in which it is possible to assess effect modification but not interaction, or to assess interaction but not effect modification. The analytic procedures for obtaining estimates of effect modification parameters and interaction parameters using marginal structural models are compared and contrasted. A characterization is given of the settings in which interaction and effect modification coincide.]]>Mon, 17 Jan 2011 09:59:41 GMT-06:0000001648-200911000-00016
http://journals.lww.com/epidem/Fulltext/2010/07000/Bias_Formulas_for_Sensitivity_Analysis_for_Direct.17.aspx
<![CDATA[Bias Formulas for Sensitivity Analysis for Direct and Indirect Effects]]>A key question in many studies is how to divide the total effect of an exposure into a component that acts directly on the outcome and a component that acts indirectly, ie, through some intermediate. For example, one might be interested in the extent to which the effect of diet on blood pressure is mediated through sodium intake and the extent to which it operates through other pathways. In the context of such mediation analysis, even if the effect of the exposure on the outcome is unconfounded, estimates of direct and indirect effects will be biased if control is not made for confounders of the mediator-outcome relationship. Often data are not collected on such mediator-outcome confounding variables; the results in this paper allow researchers to assess the sensitivity of their estimates of direct and indirect effects to the biases from such confounding. Specifically, the paper provides formulas for the bias in estimates of direct and indirect effects due to confounding of the exposure-mediator relationship and of the mediator-outcome relationship. Under some simplifying assumptions, the formulas are particularly easy to use in sensitivity analysis. The bias formulas are illustrated by examples in the literature concerning direct and indirect effects in which mediator-outcome confounding may be present.]]>Mon, 17 Jan 2011 10:00:18 GMT-06:0000001648-201007000-00017
http://journals.lww.com/epidem/Fulltext/2011/01000/A_Method_for_Detection_of_Residual_Confounding_in.10.aspx
<![CDATA[A Method for Detection of Residual Confounding in Time-series and Other Observational Studies]]>Background: A difficult issue in observational studies is assessment of whether important confounders are omitted or misspecified. In this study, we present a method for assessing whether residual confounding is present. Our method depends on availability of an indicator with 2 key characteristics: first, it is conditionally independent (given measured exposures and covariates) of the outcome in the absence of confounding, misspecification, and measurement errors; second, it is associated with the exposure and, like the exposure, with any unmeasured confounders.
Methods: We demonstrate the method using a time-series study of the effects of ozone on emergency department visits for asthma in Atlanta. We argue that future air pollution may have the characteristics appropriate for an indicator, in part because future ozone cannot have caused yesterday's health events. Using directed acyclic graphs and specific causal relationships, we show that one can identify residual confounding using an indicator with the stated characteristics. We use simulations to assess the discriminatory ability of future ozone as an indicator of residual confounding in the association of ozone with asthma-related emergency department visits. Parameter choices are informed by observed data for ozone, meteorologic factors, and asthma.
Results: In simulations, we found that ozone concentrations 1 day after the emergency department visits had excellent discriminatory ability to detect residual confounding by some factors that were intentionally omitted from the model, but weaker ability for others. Although not the primary goal, the indicator can also signal other forms of modeling errors, including substantial measurement error, and does not distinguish between them.
Conclusions: The simulations illustrate that the indicator based on future air pollution levels can have excellent discriminatory ability for residual confounding, although performance varied by situation. Application of the method should be evaluated by considering causal relationships for the intended application, and should be accompanied by other approaches, including evaluation of a priori knowledge.]]>Mon, 17 Jan 2011 10:00:50 GMT-06:0000001648-201101000-00010
http://pdfs.journals.lww.com/epidem/1990/01000/White_Swans,_Black_Ravens,_and_Lame_Ducks_.11.pdf
<![CDATA[White Swans, Black Ravens, and Lame Ducks: Necessary and Sufficient Causes in Epidemiology.]]>Several authors have used Popper's "white swan" example to support arguments for a falsificationist approach to epidemiology. The statement "all swans are white" cannot be verified by finding even a large number of white swans, but can be falsified by finding a single black swan. An analogous epidemiologic example that has been proposed is the hypothesis that a particular virus is a necessary cause of acquired immunodeficiency syndrome (AIDS). Such examples, however, have little relevance to science since scientific theories are not generalizations of facts; rather, they involve an understanding of the underlying processes that cause certain facts to occur. Furthermore, the "white swan" example is particularly inapplicable to epidemiology, since most factors of scientific or public health importance are neither necessary nor sufficient causes of disease. Nevertheless, epidemiologic research has achieved success in the understanding and prevention of disease. These points are exemplified by applying Rothman's model of causal constellations, which provides a conceptual basis for the development of epidemiologic theories.
(C) Lippincott-Raven Publishers.]]>Wed, 18 May 2011 14:37:02 GMT-05:0000001648-199001000-00011
http://pdfs.journals.lww.com/epidem/1991/09000/On_the_Origin_of_Hill_s_Causal_Criteria_.10.pdf
<![CDATA[On the Origin of Hill's Causal Criteria.]]>The rules to assess causation formulated by the eighteenth-century Scottish philosopher David Hume are compared to Sir Austin Bradford Hill's causal criteria. The strength of the analogy between Hume's rules and Hill's causal criteria suggests that, irrespective of whether Hume's work was known to Hill or Hill's predecessors, Hume's thinking expresses a point of view still widely shared by contemporary epidemiologists. The lack of systematic experimental proof for causal inferences in epidemiology may explain the analogy between Hume's and Hill's, as opposed to Popper's, logic.
(C) Lippincott-Raven Publishers.]]>Wed, 18 May 2011 14:38:45 GMT-05:0000001648-199109000-00010
http://pdfs.journals.lww.com/epidem/1993/01000/Measures_of_Effect_Based_on_the_Sufficient_Causes.8.pdf
<![CDATA[Measures of Effect Based on the Sufficient Causes Model. 1. Risks and Rates of Disease Associated with a Single Causative Agent.]]>The sufficient causes model of disease occurrence leads to a specific conception of how the risk of disease resulting from exposure to an agent combines with background risks. From this conception, one can derive a method for quantifying in populations the respective effects of the agent and background, using either rates or risks. This method differs from the usual difference or ratio measures of effect by taking into account the probability that the sufficient cause of disease involving the agent of interest and that not involving it will both occur during the observation period. The method leads to: (1) measures of the risks or rates of completion of sufficient causes involving or not involving the agent of interest; (2) a measure of the proportion of cases preventable by removing or blocking the agent, based on observed risks of disease. This proportion varies as a function of the duration of exposure; (3) a measure of the proportion of cases caused by the agent, based on observed rates of disease. This proportion is constant over time, if the rates are; (4) a causal interpretation of a constant rate ratio, when the rates vary over time.
(C) Lippincott-Raven Publishers.]]>Wed, 18 May 2011 14:40:52 GMT-05:0000001648-199301000-00008
http://pdfs.journals.lww.com/epidem/1993/11000/Measures_of_Effect_Based_on_the_Sufficient_Causes.7.pdf
<![CDATA[Measures of Effect Based on the Sufficient Causes Model. 2. Risks and Rates of Disease Associated with a Single Preventive Agent.]]>We considered a simple formulation of the sufficient causes model, in which a preventive agent exerts its effect by preventing a sufficient cause of the disease from occurring, while leaving another sufficient cause unaffected. In a group unexposed to the preventive agent, a case of the disease is caused by whichever of the two sufficient causes occurs alone or first in the subject. Among exposed subjects, the preventive agent prevents only the cases of disease in which the sufficient cause it blocks would have occurred alone, not the cases in which the other sufficient cause also occurs during the study period. The proportion of subjects who would avoid the disease if exposed to the preventive agent is the risk difference. The risk difference varies over time, even when the rates of occurrence of the sufficient causes are constant. It increases to a maximum and then declines, as the subjects who have avoided the disease because of the agent later contract the same disease because of exposure to the other sufficient cause. This maximum and the time at which it occurs are readily computed from the incidence rates of disease among exposed and unexposed subjects. (Epidemiology 1993;4:517-523)
(C) Lippincott-Raven Publishers.]]>Wed, 18 May 2011 14:42:12 GMT-05:0000001648-199311000-00007
http://pdfs.journals.lww.com/epidem/1993/11000/Causal_Inference_.13.pdf
<![CDATA[Causal Inference.]]>No abstract available]]>Wed, 18 May 2011 14:44:06 GMT-05:0000001648-199311000-00013
http://pdfs.journals.lww.com/epidem/1994/03000/ERRATUM_.27.pdf
<![CDATA[ERRATUM.]]>No abstract available]]>Wed, 18 May 2011 14:45:36 GMT-05:0000001648-199403000-00027
http://pdfs.journals.lww.com/epidem/1995/03000/Causal_Inference_in_Infectious_Diseases_.10.pdf
<![CDATA[Causal Inference in Infectious Diseases.]]>Since the 1970s, Rubin has promoted a model for causal inference based on the potential outcomes if individuals received each of the treatments under study. Commonly, the assumption is made that the outcome in one individual is independent of the treatment assignment and outcome in other individuals. In infectious diseases, however, whether one person becomes infected is quite often dependent on the infection outcome in other individuals, a situation known as dependent happenings. Here, we review the model proposed by Rubin for the example of infectious disease. Consequences of the violation of the stability assumption include the need for an expanded representation of outcomes, and the existence of different kinds of effects, such as direct and indirect effects. Effects of interest include changes in susceptibility as well as changes in infectiousness. We define the transmission probability formally as an average causal parameter of effect in a population by conditioning on exposure to infection. Unconditional indirect and total effects are difficult to define formally using this model for causal inference. The assignment mechanism can influence the sampling mechanism when it determines who is exposed to infection, raising problems that require further inquiry. We conclude by contrasting the role of differential exposure to infection in direct and indirect effects.
(C) Lippincott-Raven Publishers.]]>Wed, 18 May 2011 14:47:17 GMT-05:0000001648-199503000-00010
http://pdfs.journals.lww.com/epidem/1997/01000/Reproductive_Factors,_Oral_Contraceptive_Use,_and.15.pdf
<![CDATA[Reproductive Factors, Oral Contraceptive Use, and Risk of Colorectal Cancer.]]>Multiparity and use of oral contraceptives are hypothesized to reduce risk of colorectal cancer. Among 57,529 women, 31-90 years of age, who volunteered for a nationwide breast cancer screening program from 1973 to 1980, we observed 154 pathologically confirmed cases of colon cancer and 49 cases of rectal cancer in up to 10 years of follow-up (388,555 person-years). Parity was not associated with risk of colorectal cancer [age-adjusted rate ratio for >=4 children vs no children = 1.0; 95% confidence interval (CI) = 0.72-1.5], although decreases in proximal colon cancer and increases in distal colon cancer were observed among parous women. The effect of parity did not vary by age at diagnosis. We found no strong or consistent association for age at menarche, age at first birth, or age at natural menopause. In addition, oral contraceptive use, reflecting mainly past use, was unrelated to risk of colorectal cancer (rate ratio = 1.0; 95% CI = 0.75-1.4). These findings do not corroborate the hypothesis that reproductive events or oral contraceptives influence the development of colorectal cancer.
(C) Lippincott-Raven Publishers.]]>Wed, 18 May 2011 14:49:17 GMT-05:0000001648-199701000-00015
http://pdfs.journals.lww.com/epidem/1998/05000/Probability_Logic_and_Probabilistic_Induction_.18.pdf
<![CDATA[Probability Logic and Probabilistic Induction.]]>This article reviews some philosophical aspects of probability and describes how probability logic can give precise meanings to the concepts of inductive support, corroboration, refutation, and related notions, as well as provide a foundation for logically sound statistical inference. Probability logic also provides a basis for recognizing prior distributions as an integral component of statistical analysis, rather than the current misleading practice of pretending that statistics applied to observational data are objective. This basis is important, because the use of realistic priors in a statistical analysis can yield more stringent tests of hypotheses and more accurate estimates than conventional procedures. (Epidemiology 1998;9:322-332)
(C) Lippincott-Raven Publishers.]]>Wed, 18 May 2011 14:52:05 GMT-05:0000001648-199805000-00018
http://pdfs.journals.lww.com/epidem/1992/07000/Time_Related_Confounders_and_Intermediate.2.pdf
<![CDATA[Time-Related Confounders and Intermediate Variables.]]>No abstract available]]>Tue, 07 Jun 2011 15:48:25 GMT-05:0000001648-199207000-00002
http://pdfs.journals.lww.com/epidem/1993/07000/Measures_of_Effect_Based_on_the_Sufficient_Causes.17.pdf
<![CDATA[Measures of Effect Based on the Sufficient Causes Model.]]>No abstract available]]>Tue, 07 Jun 2011 15:49:26 GMT-05:0000001648-199307000-00017
http://journals.lww.com/epidem/Fulltext/2001/03000/Causal_Values.1.aspx
<![CDATA[Causal Values]]>No abstract available]]>Tue, 07 Jun 2011 15:59:16 GMT-05:0000001648-200103000-00001
http://journals.lww.com/epidem/Fulltext/2001/03000/A_Psychometric_Experiment_in_Causal_Inference_to.19.aspx
<![CDATA[A Psychometric Experiment in Causal Inference to Estimate Evidential Weights Used by Epidemiologists]]>A psychometric experiment in causal inference was performed on 159 Australian and New Zealand epidemiologists. Subjects each decided whether to attribute causality to 12 summaries of evidence concerning a disease and a chemical exposure. The 1,748 unique summaries embodied predetermined distributions of 19 characteristics generated by computerized evidence simulation. Effects of characteristics of evidence on causal attribution were estimated from logistic regression, and interactions were identified from a regression tree analysis. Factors with the strongest influence on the odds of causal attribution were statistical significance (odds ratio = 4.5 if 0.001 ≤ P < 0.05 and 7.2 if P < 0.001, vs P ≥ 0.05); refutation of alternative explanations (odds ratio = 8.1 for no known confounder vs none adjusted); strength of association (odds ratio = 2.0 if 1.5 < relative risk ≤ 2.0 and 3.6 if relative risk > 2.0, vs relative risk ≤ 1.5); and adjunct information concerning biological, factual, and theoretical coherence. The refutation of confounding reduced the cutpoint in the regression tree for decision-making based on strength of association. The effect of the number of supportive studies reached saturation after it exceeded 12 studies. There was evidence of flawed logic in the responses concerning specificity of effects of exposure and a tendency to discount evidence if the P-value was a “near miss.”]]>Tue, 07 Jun 2011 16:00:12 GMT-05:0000001648-200103000-00019
http://journals.lww.com/epidem/Fulltext/2003/11000/Marginal_Structural_Models_as_a_Tool_for.9.aspx
<![CDATA[Marginal Structural Models as a Tool for Standardization]]>In this article, we show the general relation between standardization methods and marginal structural models. Standardization has been recognized as a method to control confounding and to estimate causal parameters of interest. Because standardization requires stratification by confounders, the sparse-data problem will occur when stratified by many confounders and one then might have an unstable estimator. A new class of causal models called marginal structural models has recently been proposed. In marginal structural models, the parameters are consistently estimated by the inverse-probability-of-treatment weighting method. Marginal structural models give a nonparametric standardization using the total group (exposed and unexposed) as the standard. In epidemiologic analysis, it is also important to know the change in the average risk of the exposed (or the unexposed) subgroup produced by exposure, which corresponds to the exposed (or the unexposed) group as the standard. We propose modifications of the weights in the marginal structural models, which give the nonparametric estimation of standardized parameters. With the proposed weights, we can use the marginal structural models as a useful tool for the nonparametric multivariate standardization.]]>Tue, 07 Jun 2011 16:01:01 GMT-05:0000001648-200311000-00009
http://journals.lww.com/epidem/Fulltext/2006/05000/Estimation_of_Direct_Causal_Effects.12.aspx
<![CDATA[Estimation of Direct Causal Effects]]>Many common problems in epidemiologic and clinical research involve estimating the effect of an exposure on an outcome while blocking the exposure's effect on an intermediate variable. Effects of this kind are termed direct effects. Estimation of direct effects is typically the goal of research aimed at understanding mechanistic pathways by which an exposure acts to cause or prevent disease, as well as in many other settings. Although multivariable regression is commonly used to estimate direct effects, this approach requires assumptions beyond those required for the estimation of total causal effects. In addition, when the exposure and intermediate variables interact to cause disease, multivariable regression estimates a particular type of direct effect—the effect of an exposure on an outcome when the intermediate is fixed at a specified level. Using the counterfactual framework, we distinguish this definition of a direct effect (controlled direct effect) from an alternative definition, in which the effect of the exposure on the intermediate is blocked, but the intermediate is otherwise allowed to vary as it would in the absence of exposure (natural direct effect). We illustrate the difference between controlled and natural direct effects using several examples. We present an estimation approach for natural direct effects that can be implemented using standard statistical software, and we review the assumptions underlying our approach (which are less restrictive than those proposed by previous authors).]]>Tue, 07 Jun 2011 16:01:55 GMT-05:0000001648-200605000-00012
http://journals.lww.com/epidem/Fulltext/2009/01000/The_Consistency_Statement_in_Causal_Inference__A.3.aspx
<![CDATA[The Consistency Statement in Causal Inference: A Definition or an Assumption?]]>No abstract available]]>Tue, 07 Jun 2011 16:02:43 GMT-05:0000001648-200901000-00003
http://journals.lww.com/epidem/Fulltext/2009/01000/Interactions_in_Epidemiology__Relevance,.5.aspx
<![CDATA[Interactions in Epidemiology: Relevance, Identification, and Estimation]]>No abstract available]]>Tue, 07 Jun 2011 16:04:50 GMT-05:0000001648-200901000-00005
http://journals.lww.com/epidem/Fulltext/2009/05000/Bringing_Causal_Models_Into_the_Mainstream.20.aspx
<![CDATA[Bringing Causal Models Into the Mainstream]]>No abstract available]]>Tue, 07 Jun 2011 16:06:33 GMT-05:0000001648-200905000-00020
http://journals.lww.com/epidem/Fulltext/2009/11000/Estimating_Direct_Effects_in_Cohort_and.14.aspx
<![CDATA[Estimating Direct Effects in Cohort and Case–Control Studies]]>Estimating the effect of an exposure on an outcome, other than through some given mediator, requires adjustment for all risk factors of the mediator that are also associated with the outcome. When these risk factors are themselves affected by the exposure, then standard regression methods do not apply. In this article, I review methods for accommodating this and discuss their limitations for estimating the controlled direct effect (ie, the exposure effect when controlling the mediator at a specified level uniformly in the population). In addition, I propose a powerful and easy-to-apply alternative that uses G-estimation in structural nested models to address these limitations both for cohort and case–control studies.]]>Tue, 07 Jun 2011 16:07:28 GMT-05:0000001648-200911000-00014
http://journals.lww.com/epidem/Fulltext/2009/11000/Mediating_Various_Direct_effect_Approaches.15.aspx
<![CDATA[Mediating Various Direct-effect Approaches]]>No abstract available]]>Tue, 07 Jun 2011 16:08:40 GMT-05:0000001648-200911000-00015
http://journals.lww.com/epidem/Fulltext/2009/11000/Concerning_the_Consistency_Assumption_in_Causal.18.aspx
<![CDATA[Concerning the Consistency Assumption in Causal Inference]]>Cole and Frangakis (Epidemiology. 2009;20:3–5) introduced notation for the consistency assumption in causal inference. I extend this notation and propose a refinement of the consistency assumption that makes clear that the consistency statement, as ordinarily given, is in fact an assumption and not an axiom or definition. The refinement is also useful in showing that additional assumptions (referred to here as treatment-variation irrelevance assumptions), stronger than those given by Cole and Frangakis, are in fact necessary in articulating the ordinary assumptions of ignorability or exchangeability. The refinement furthermore sheds light on the distinction between intervention and choice in reasoning about causality. A distinction between the range of treatment variations for which potential outcomes can be defined and the range for which treatment comparisons are made is discussed in relation to issues of nonadherence. The use of stochastic counterfactuals can help relax what is effectively being presupposed by the treatment-variation irrelevance assumption and the consistency assumption.]]>Tue, 07 Jun 2011 16:09:10 GMT-05:0000001648-200911000-00018
http://journals.lww.com/epidem/Fulltext/2009/11000/Causal_Models.28.aspx
<![CDATA[Causal Models]]>No abstract available]]>Tue, 07 Jun 2011 16:09:53 GMT-05:0000001648-200911000-00028
http://journals.lww.com/epidem/Fulltext/2009/11000/Causal_Models.29.aspx
<![CDATA[Causal Models]]>No abstract available]]>Tue, 07 Jun 2011 16:10:22 GMT-05:0000001648-200911000-00029
http://journals.lww.com/epidem/Fulltext/2010/01000/DAG_Program___Identifying_Minimal_Sufficient.29.aspx
<![CDATA[DAG Program: Identifying Minimal Sufficient Adjustment Sets]]>No abstract available]]>Tue, 07 Jun 2011 16:11:30 GMT-05:0000001648-201001000-00029
http://journals.lww.com/epidem/Fulltext/2010/05000/Negative_Controls__A_Tool_for_Detecting.17.aspx
<![CDATA[Negative Controls: A Tool for Detecting Confounding and Bias in Observational Studies]]>Noncausal associations between exposures and outcomes are a threat to validity of causal inference in observational studies. Many techniques have been developed for study design and analysis to identify and eliminate such errors. Such problems are not expected to compromise experimental studies, where careful standardization of conditions (for laboratory work) and randomization (for population studies) should, if applied properly, eliminate most such noncausal associations. We argue, however, that a routine precaution taken in the design of biologic laboratory experiments—the use of “negative controls”—is designed to detect both suspected and unsuspected sources of spurious causal inference. In epidemiology, analogous negative controls help to identify and resolve confounding as well as other sources of error, including recall bias or analytic flaws. We distinguish 2 types of negative controls (exposure controls and outcome controls), describe examples of each type from the epidemiologic literature, and identify the conditions for the use of such negative controls to detect confounding. We conclude that negative controls should be more commonly employed in observational studies, and that additional work is needed to specify the conditions under which negative controls will be sensitive detectors of other sources of error in observational studies.]]>Tue, 07 Jun 2011 16:12:16 GMT-05:0000001648-201005000-00017
http://journals.lww.com/epidem/Fulltext/2010/07000/dagR__A_Suite_of_R_Functions_for_Directed_Acyclic.26.aspx
<![CDATA[dagR: A Suite of R Functions for Directed Acyclic Graphs]]>No abstract available]]>Tue, 07 Jun 2011 16:13:20 GMT-05:0000001648-201007000-00026
http://journals.lww.com/epidem/Fulltext/2010/09000/Sufficient_cause_Interaction.33.aspx
<![CDATA[Sufficient-cause Interaction]]>No abstract available]]>Tue, 07 Jun 2011 16:14:22 GMT-05:0000001648-201009000-00033
http://journals.lww.com/epidem/Fulltext/2010/11000/On_the_Consistency_Rule_in_Causal_Inference_.19.aspx
<![CDATA[On the Consistency Rule in Causal Inference: Axiom, Definition, Assumption, or Theorem?]]>In 2 recent communications, Cole and Frangakis (Epidemiology. 2009;20:3–5) and VanderWeele (Epidemiology. 2009;20:880–883) conclude that the consistency rule used in causal inference is an assumption that precludes any side-effects of treatment/exposure on the outcomes of interest. They further develop auxiliary notation to make this assumption formal and explicit. I argue that the consistency rule is a theorem in the logic of counterfactuals and need not be altered. Instead, warnings of potential side-effects should be embodied in standard modeling practices that make causal assumptions explicit and transparent.]]>Tue, 07 Jun 2011 16:15:09 GMT-05:0000001648-201011000-00019
http://journals.lww.com/epidem/Fulltext/2011/01000/On_the_Link_Between_Sufficient_cause_Model_and.26.aspx
<![CDATA[On the Link Between Sufficient-cause Model and Potential-outcome Model]]>No abstract available]]>Tue, 07 Jun 2011 16:16:05 GMT-05:0000001648-201101000-00026
http://journals.lww.com/epidem/Fulltext/2011/05000/Compound_Treatments_and_Transportability_of_Causal.18.aspx
<![CDATA[Compound Treatments and Transportability of Causal Inference]]>Ill-defined causal questions present serious problems for observational studies—problems that are largely unappreciated. This paper extends the usual counterfactual framework to consider causal questions about compound treatments for which there are many possible implementations (for example, “prevention of obesity”). We describe the causal effect of compound treatments and their identifiability conditions, with a special emphasis on the consistency condition. We then discuss the challenges of using the estimated effect of a compound treatment in one study population to inform decisions in the same population and in other populations. These challenges arise because the causal effect of compound treatments depends on the distribution of the versions of treatment in the population. Such causal effects can be unpredictable when the versions of treatment are unknown. We discuss how such issues of “transportability” are related to the consistency condition in causal inference. With more carefully framed questions, the results of epidemiologic studies can be of greater value to decision-makers.]]>Tue, 07 Jun 2011 16:17:20 GMT-05:0000001648-201105000-00018
http://journals.lww.com/epidem/Fulltext/2011/05000/Compound_Treatments,_Transportability,_and_the.19.aspx
<![CDATA[Compound Treatments, Transportability, and the Structural Causal Model: The Power and Simplicity of Causal Graphs]]>No abstract available]]>Tue, 07 Jun 2011 16:18:18 GMT-05:0000001648-201105000-00019
http://journals.lww.com/epidem/Fulltext/2011/07000/Direct_and_Indirect_Effects_in_a_Survival_Context.24.aspx
<![CDATA[Direct and Indirect Effects in a Survival Context]]>A cornerstone of epidemiologic research is to understand the causal pathways from an exposure to an outcome. Mediation analysis based on counterfactuals is an important tool when addressing such questions. However, none of the existing techniques for formal mediation analysis can be applied to survival data. This is a severe shortcoming, as many epidemiologic questions can be addressed only with censored survival data. A solution has been to use a number of Cox models (with and without the potential mediator), but this approach does not allow a causal interpretation and is not mathematically consistent. In this paper, we propose a simple measure of mediation in a survival setting. The measure is based on counterfactuals, and measures the natural direct and indirect effects. The method allows a causal interpretation of the mediated effect (in terms of additional cases per unit of time) and is mathematically consistent. The technique is illustrated by analyzing socioeconomic status, work environment, and long-term sickness absence. A detailed implementation guide is included in an online eAppendix (http://links.lww.com/EDE/A476).]]>Tue, 07 Jun 2011 16:18:58 GMT-05:0000001648-201107000-00024
http://journals.lww.com/epidem/Fulltext/2011/07000/Causal_Mediation_Analysis_With_Survival_Data.25.aspx
<![CDATA[Causal Mediation Analysis With Survival Data]]>No abstract available]]>Tue, 07 Jun 2011 16:19:29 GMT-05:0000001648-201107000-00025
http://journals.lww.com/epidem/Fulltext/2011/07000/Differences_Between_Marginal_Structural_Models_and.26.aspx
<![CDATA[Differences Between Marginal Structural Models and Conventional Models in Their Exposure Effect Estimates: A Systematic Review]]>Background: Marginal structural models were developed to address time-varying confounding in nonrandomized exposure effect studies. It is unclear how estimates from marginal structural models and conventional models might differ in real settings.
Methods: We systematically reviewed the literature on marginal structural models since 2000.
Results: Data to compare marginal structural models and conventional models were obtained from 65 papers reporting 164 exposure-outcome associations. In 58 (40%), estimates differed by at least 20%, and in 18 (11%), the 2 techniques resulted in estimates with opposite interpretations. In 88 papers, marginal structural models were used to analyze real data; only 53 (60%) papers reported the use of stabilized inverse-probability weights and only 28 (32%) reported that they verified that the mean of the stabilized inverse-probability weights was close to 1.0.
Conclusions: We found important differences in results from marginal structural models and from conventional models in real studies. Furthermore, reporting of marginal structural models can be improved.]]>Tue, 07 Jun 2011 16:19:58 GMT-05:0000001648-201107000-00026
http://journals.lww.com/epidem/Fulltext/2011/09000/Causal_Interactions_in_the_Proportional_Hazards.17.aspx
<![CDATA[Causal Interactions in the Proportional Hazards Model]]>The paper relates estimation and testing for additive interaction in proportional hazards models to causal interactions within the counterfactual framework. A definition of a causal interaction for time-to-event outcomes is given that generalizes existing definitions for dichotomous outcomes. Conditions are given concerning the relative excess risk due to interaction in proportional hazards models that imply the presence of a causal interaction at some point in time. Further results are given that allow for assessing the range of times and baseline survival probabilities for which parameter estimates indicate that a causal interaction is present, and for deriving lower bounds on the prevalence of such causal interactions. An interesting feature of the time-to-event setting is that causal interactions can disappear as time progresses, ie, whether a causal interaction is present depends on the follow-up time. The results are illustrated by hypothetical and data analysis examples.]]>Tue, 18 Oct 2011 10:48:23 GMT-05:0000001648-201109000-00017
http://journals.lww.com/epidem/Fulltext/2011/09000/Transportability_and_Causal_Generalization.23.aspx
<![CDATA[Transportability and Causal Generalization]]>No abstract available]]>Tue, 18 Oct 2011 10:51:56 GMT-05:0000001648-201109000-00023
http://journals.lww.com/epidem/Fulltext/2011/11000/Alternative_Assumptions_for_the_Identification_of.1.aspx
<![CDATA[Alternative Assumptions for the Identification of Direct and Indirect Effects]]>The assessment of mediation is important for testing the mechanisms that explain an observed relationship between exposure and disease. Several types of direct and indirect effects have been defined, broadly characterized as either controlled or natural. The identification of these effects requires a stricter set of assumptions than those necessary for the identification of the total effect of exposure on disease. The particular assumptions that are required differ depending on the type of effect. We use an approach based on response types to derive new assumptions for the identification of direct and indirect effects, both controlled and natural. These assumptions are stated in terms of response types and potential outcomes, and are compared with those already in the literature. This approach yields an alternative, and sometimes less stringent, set of assumptions for the identification of direct and indirect effects than those previously proposed.]]>Tue, 18 Oct 2011 10:53:49 GMT-05:0000001648-201111000-00001
http://journals.lww.com/epidem/Fulltext/2012/05000/Completion_Potentials_of_Sufficient_Component.15.aspx
<![CDATA[Completion Potentials of Sufficient Component Causes]]>Many epidemiologists are familiar with Rothman's sufficient component cause model. In this paper, I propose a new index for this model, the completion potential index. I show that, with proper assumptions (monotonicity, independent competing causes, proportional hazards), completion potentials for various classes of sufficient causes are estimable from routine epidemiologic data (cohort, case-control, or time-to-event data). I discuss the advantage of the completion potential index over indices of rate ratio, rate difference, causal-pie weight, population attributable fraction, and attributable fraction within the exposed population. Hypothetical and real data examples are used. The completion potential index proposed here allows better characterization of complex interactive effects of multiple monotonic risk factors.]]>Wed, 30 May 2012 12:46:42 GMT-05:0000001648-201205000-00015
http://journals.lww.com/epidem/Fulltext/2012/05000/Mediation_Analysis_With_Multiple_Versions_of_the.16.aspx
<![CDATA[Mediation Analysis With Multiple Versions of the Mediator]]>The causal inference literature has provided definitions of direct and indirect effects based on counterfactuals that generalize the approach found in the social science literature. However, these definitions presuppose well-defined hypothetical interventions on the mediator. In many settings, there may be multiple ways to fix the mediator to a particular value, and these various hypothetical interventions may have very different implications for the outcome of interest. In this paper, we consider mediation analysis when multiple versions of the mediator are present. Specifically, we consider the problem of attempting to decompose a total effect of an exposure on an outcome into the portion through the intermediate and the portion through other pathways. We consider the setting in which there are multiple versions of the mediator but the investigator has access only to data on the particular measurement, not information on which version of the mediator may have brought that value about. We show that the quantity that is estimated as a natural indirect effect using only the available data does indeed have an interpretation as a particular type of mediated effect; however, the quantity estimated as a natural direct effect, in fact, captures both a true direct effect and an effect of the exposure on the outcome mediated through the effect of the version of the mediator that is not captured by the mediator measurement. The results are illustrated using 2 examples from the literature, one in which the versions of the mediator are unknown and another in which the mediator itself has been dichotomized.]]>Wed, 30 May 2012 12:47:21 GMT-05:0000001648-201205000-00016
http://journals.lww.com/epidem/Fulltext/2012/11000/Distribution_Free_Mediation_Analysis_for_Nonlinear.18.aspx
<![CDATA[Distribution-Free Mediation Analysis for Nonlinear Models with Confounding]]>Recently, researchers have used a potential-outcome framework to estimate causally interpretable direct and indirect effects of an intervention or exposure on an outcome. One approach to causal-mediation analysis uses the so-called mediation formula to estimate the natural direct and indirect effects. This approach generalizes the classical mediation estimators and allows for arbitrary distributions for the outcome variable and mediator. A limitation of the standard (parametric) mediation formula approach is that it requires a specified mediator regression model and distribution; such a model may be difficult to construct and may not be of primary interest. To address this limitation, we propose a new method for causal-mediation analysis that uses the empirical distribution function, thereby avoiding parametric distribution assumptions for the mediator. To adjust for confounders of the exposure-mediator and exposure-outcome relationships, inverse-probability weighting is incorporated based on a supplementary model of the probability of exposure. This method, which yields the estimates of the natural direct and indirect effects for a specified reference group, is applied to data from a cohort study of dental caries in very-low-birth-weight adolescents to investigate the oral-hygiene index as a possible mediator. Simulation studies show low bias in the estimation of direct and indirect effects in a variety of distribution scenarios, whereas the standard mediation formula approach can be considerably biased when the distribution of the mediator is incorrectly specified.]]>Fri, 12 Oct 2012 07:45:22 GMT-05:0000001648-201211000-00018