<![CDATA[Epidemiology - Featured Articles]]>
http://journals.lww.com/epidem/
en-us | Wed, 27 Jul 2016 01:32:25 -0500 | Wolters Kluwer Health RSS Generator | http://images.journals.lww.com/epidem/XLargeThumb.00001648-201607000-00000.CV.jpeg | <![CDATA[Epidemiology - Featured Articles]]>
http://journals.lww.com/epidem/
http://journals.lww.com/epidem/Fulltext/2016/07000/Commentary___Selection_Bias_in_Clinical.2.aspx
<![CDATA[Commentary: Selection Bias in Clinical Epidemiology: Causal Thinking to Guide Patient-centered Research]]> No abstract available. Sun, 05 Jun 2016 17:06:05 GMT-05:00 | 00001648-201607000-00002
http://journals.lww.com/epidem/Fulltext/2016/07000/Assessment_of_Heart_Transplant_Waitlist_Time_and.3.aspx
<![CDATA[Assessment of Heart Transplant Waitlist Time and Pre- and Post-transplant Failure: A Mixed Methods Approach]]>Background: Over the past two decades, waiting times for heart transplantation have grown increasingly long. We studied the relationship between heart transplant waiting time and transplant failure (removal from the waitlist, pretransplant death, or death or graft failure within 1 year) to determine whether conservative donor heart acceptance practices, by prolonging waits, increase the risk of failure among patients awaiting transplantation.
Methods: We studied a cohort of 28,283 adults registered on the United Network for Organ Sharing heart transplant waiting list between 2000 and 2010. We used Kaplan–Meier methods with inverse probability censoring weights to examine the risk of transplant failure accumulated over time spent on the waiting list (pretransplant). In addition, we used transplant candidate blood type as an instrumental variable to assess the risk of transplant failure associated with increased wait time.
Results: Our results show that those who wait longer for a transplant have greater odds of transplant failure. While on the waitlist, the risk of failure is greatest during the first 60 days. Doubling the amount of time on the waiting list was associated with a 10% increase in the odds of failure within 1 year after transplantation (odds ratio 1.10; 95% confidence interval: 1.01, 1.20).
Conclusions: Our findings suggest a relationship between time spent on the waiting list and transplant failure, thereby supporting research aimed at defining adequate donor heart quality and acceptance standards for heart transplantation.]]> Sun, 05 Jun 2016 17:07:33 GMT-05:00 | 00001648-201607000-00003
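The pretransplant risk estimates described above rest on the Kaplan–Meier estimator. A minimal unweighted sketch in Python, with hypothetical data; the study's inverse probability censoring weights would replace the raw risk-set counts below with weighted sums:

```python
def kaplan_meier(times, events):
    """Return [(t, S(t))] at each distinct event time.

    times  -- follow-up time for each subject
    events -- 1 if the subject failed at that time, 0 if censored
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        # subjects censored at t are conventionally still at risk at t
        deaths = sum(e for tt, e in data if tt == t)
        at_risk = n_at_risk
        while i < len(data) and data[i][0] == t:  # everyone leaving at t
            n_at_risk -= 1
            i += 1
        if deaths:
            survival *= 1.0 - deaths / at_risk
            curve.append((t, survival))
    return curve
```

For example, `kaplan_meier([1, 2, 3, 4], [1, 1, 0, 1])` steps the survival curve down at times 1, 2, and 4, with the censoring at time 3 shrinking the risk set for the final event.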
http://journals.lww.com/epidem/Fulltext/2016/07000/Commentary___Can_a_Quasi_experimental_Design_Be_a.8.aspx
<![CDATA[Commentary: Can a Quasi-experimental Design Be a Better Idea than an Experimental One?]]> No abstract available. Sun, 05 Jun 2016 17:08:42 GMT-05:00 | 00001648-201607000-00008
http://journals.lww.com/epidem/Fulltext/2016/07000/Regression_Discontinuity_Design__Simulation_and.9.aspx
<![CDATA[Regression Discontinuity Design: Simulation and Application in Two Cardiovascular Trials with Continuous Outcomes]]>In epidemiology, the regression discontinuity design has recently received increasing attention and might be an alternative to randomized controlled trials (RCTs) for evaluating treatment effects. In regression discontinuity, treatment is assigned above a certain threshold of an assignment variable, and the analysis adjusts for that assignment variable. We performed simulations and a validation study in which we used treatment effect estimates from an RCT as the reference for a prospectively performed regression discontinuity study. We estimated the treatment effect using linear regression adjusting for the assignment variable both as a linear term and as a restricted cubic spline, and using local linear regression models. In the first validation study, the estimated treatment effect from a cardiovascular RCT was −4.0 mmHg blood pressure (95% confidence interval: −5.4, −2.6) at 2 years after inclusion. The estimated effect in regression discontinuity was −5.9 mmHg (95% confidence interval: −10.8, −1.0) with restricted cubic spline adjustment. Regression discontinuity showed different, local effects when analyzed with local linear regression. In the second RCT, regression discontinuity treatment effect estimates on total cholesterol level at 3 months after inclusion were similar to the RCT estimates, but at least six times less precise. In conclusion, regression discontinuity may provide treatment effect estimates similar to RCT estimates, but it requires the assumption of a global treatment effect over the range of the assignment variable. In addition to the risk of bias due to wrong assumptions, researchers need to weigh better recruitment against the substantial loss in precision when considering a regression discontinuity design versus an RCT.]]> Sun, 05 Jun 2016 17:09:41 GMT-05:00 | 00001648-201607000-00009
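The design's core idea can be sketched on simulated data (cutoff, effect size, and noise level are invented, not the trial data): fit a regression on each side of the threshold and read the treatment effect off the jump in fitted values at the cutoff.

```python
import random

random.seed(1)

CUTOFF = 0.0
TRUE_EFFECT = -4.0  # assumed effect, e.g. mmHg change in blood pressure

def simulate(n):
    data = []
    for _ in range(n):
        x = random.uniform(-1.0, 1.0)       # assignment variable
        treated = x >= CUTOFF               # deterministic assignment rule
        y = 2.0 * x + (TRUE_EFFECT if treated else 0.0) + random.gauss(0, 1)
        data.append((x, y))
    return data

def intercept_at_cutoff(points):
    """Least squares y = a + b*x; with CUTOFF = 0, a is the fit at the cutoff."""
    n = len(points)
    mx = sum(x for x, _ in points) / n
    my = sum(y for _, y in points) / n
    sxx = sum((x - mx) ** 2 for x, _ in points)
    sxy = sum((x - mx) * (y - my) for x, y in points)
    b = sxy / sxx
    return my - b * mx

data = simulate(4000)
below = [p for p in data if p[0] < CUTOFF]
above = [p for p in data if p[0] >= CUTOFF]
effect = intercept_at_cutoff(above) - intercept_at_cutoff(below)  # jump at cutoff
```

With 4,000 simulated subjects the estimated jump lands close to the true −4.0, but its standard error is several times that of a randomized comparison of the same size, mirroring the precision loss the article reports.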
http://journals.lww.com/epidem/Fulltext/2016/07000/Property_Values_as_a_Measure_of_Neighborhoods__An.11.aspx
<![CDATA[Property Values as a Measure of Neighborhoods: An Application of Hedonic Price Theory]]>Background: Researchers measuring relationships between neighborhoods and health have begun using property appraisal data as a source of information about neighborhoods. Economists have developed a rich tool kit to understand how neighborhood characteristics are quantified in appraisal values. This tool kit principally relies on hedonic (implicit) price models and has much to offer regarding the interpretation and operationalization of property appraisal data-derived neighborhood measures, which goes beyond the use of appraisal data as a measure of neighborhood socioeconomic status.
Methods: We develop a theoretically informed hedonic-based neighborhood measure using residuals of a hedonic price regression applied to appraisal data in a single metropolitan area. We describe its characteristics, reliability in different types of neighborhoods, and correlation with other neighborhood measures (i.e., raw neighborhood appraisal values, census block group poverty, and observed property characteristics). We examine the association between all neighborhood measures and body mass index.
Results: The hedonic-based neighborhood measure was correlated in the expected direction with block group poverty rate and observed property characteristics. The neighborhood measure and average raw neighborhood appraisal value, but not census block group poverty, were associated with individual body mass index.
Conclusion: We draw theoretically consistent methodology from the economics literature on hedonic price models to demonstrate how to leverage the implicit valuation of neighborhoods contained in publicly available appraisal data. Consistent measurement and application of hedonic-based neighborhood measures in epidemiology will improve understanding of the relationships between neighborhoods and health. Researchers should use appraisal values carefully, applying theoretically informed methods such as this one.]]> Sun, 05 Jun 2016 17:10:46 GMT-05:00 | 00001648-201607000-00011
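A hypothetical illustration of the residual-based measure: regress appraisal values on observed property characteristics (here a single simulated characteristic, square footage) and average the residuals within each neighborhood. All neighborhood names, premiums, and coefficients below are invented for the sketch.

```python
import random

random.seed(7)

# Invented toy data: appraisal value = structure component + neighborhood premium.
NEIGHBORHOOD_PREMIUM = {"A": 40.0, "B": 0.0, "C": -25.0}  # hypothetical, $1000s

properties = []  # (neighborhood, sqft, appraised value)
for hood, premium in NEIGHBORHOOD_PREMIUM.items():
    for _ in range(500):
        sqft = random.uniform(800, 2500)
        value = 50.0 + 0.1 * sqft + premium + random.gauss(0, 10)
        properties.append((hood, sqft, value))

# Hedonic regression of appraised value on the observed characteristic.
n = len(properties)
mean_x = sum(p[1] for p in properties) / n
mean_y = sum(p[2] for p in properties) / n
slope = (sum((p[1] - mean_x) * (p[2] - mean_y) for p in properties)
         / sum((p[1] - mean_x) ** 2 for p in properties))
intercept = mean_y - slope * mean_x

# Neighborhood measure: mean residual, i.e. value unexplained by structure.
measure = {}
for hood in NEIGHBORHOOD_PREMIUM:
    resids = [p[2] - (intercept + slope * p[1])
              for p in properties if p[0] == hood]
    measure[hood] = sum(resids) / len(resids)
```

The average residual recovers each neighborhood's implicit premium relative to the metro-wide mean, which is the quantity the article proposes as a neighborhood measure.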
http://journals.lww.com/epidem/Fulltext/2016/07000/Collider_Bias_Is_Only_a_Partial_Explanation_for.12.aspx
<![CDATA[Collider Bias Is Only a Partial Explanation for the Obesity Paradox]]>Background: “Obesity paradox” refers to an association between obesity and reduced mortality (contrary to an expected increased mortality). A common explanation is collider stratification bias: unmeasured confounding induced by selection bias. Here, we test this supposition through a realistic generative model.
Methods: We quantify the collider stratification bias in a selected population using counterfactual causal analysis. We illustrate the bias for a range of scenarios, describing associations between exposure (obesity), outcome (mortality), mediator (in this example, diabetes) and an unmeasured confounder.
Results: Collider stratification leads to biased estimation of the causal effect of exposure on outcome. However, the bias is small relative to the causal relationships between the variables.
Conclusions: Collider bias can partially explain the obesity paradox, but it is unlikely to be the main explanation when the observed association runs in the direction opposite to the true causal relationship. Alternative explanations of the obesity paradox should be explored. See Video Abstract at http://links.lww.com/EDE/B51.]]> Sun, 05 Jun 2016 17:12:11 GMT-05:00 | 00001648-201607000-00012
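The collider-stratification mechanism can be reproduced in a few lines. In this hypothetical generative model (all parameters invented, not the article's), obesity has no direct effect on mortality, yet restricting to diabetics makes it look protective, because within that stratum obesity and the unmeasured confounder become negatively associated.

```python
import random

random.seed(42)

# Hypothetical model: unmeasured confounder U raises both diabetes risk and
# mortality; obesity raises diabetes risk but has NO direct effect on death.
n = 100_000
rows = []  # (obese, diabetes, death)
for _ in range(n):
    obese = random.random() < 0.5
    u = random.random() < 0.5                              # unmeasured
    diabetes = random.random() < 0.05 + 0.4 * obese + 0.4 * u
    death = random.random() < 0.05 + 0.3 * u               # no obesity term
    rows.append((obese, diabetes, death))

def risk(subset):
    return sum(r[2] for r in subset) / len(subset)

# Marginal obesity-mortality risk difference: near zero, as built in.
marginal_rd = (risk([r for r in rows if r[0]])
               - risk([r for r in rows if not r[0]]))

# Condition on the collider (diabetes = 1): obesity now appears protective.
diabetics = [r for r in rows if r[1]]
collider_rd = (risk([r for r in diabetics if r[0]])
               - risk([r for r in diabetics if not r[0]]))
```

As the article notes, the size of this reversal depends on the strength of the causal relationships among exposure, mediator, and confounder; with weaker effects the induced bias shrinks accordingly.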
http://journals.lww.com/epidem/Fulltext/2016/07000/Risk_of_Miscarriage_in_Women_Receiving.14.aspx
<![CDATA[Risk of Miscarriage in Women Receiving Antidepressants in Early Pregnancy, Correcting for Induced Abortions]]>Background: Earlier studies on the association between antidepressant use and miscarriage have obtained conflicting results after accounting for the role of depression, and none have taken into account the high risk of induced abortions in women using antidepressants.
Methods: We identified 41,964 pregnant women delivering between 1998 and 2002 using Quebec’s health administration databases. We compared women prescribed antidepressants in the first trimester and with a recorded diagnosis of depression before pregnancy to (1) women with neither antidepressant use nor a depression diagnosis before or during pregnancy; (2) women with a depression diagnosis before pregnancy, but no antidepressants prescribed in the first trimester; and (3) women prescribed hypothyroid medication in the first trimester, but not antidepressants. We used log binomial regression to assess the adjusted relative risk of miscarriage, corrected for induced abortion risk.
Results: The miscarriage risk uncorrected for induced abortions was 16%, 10%, and 9% for depressed women exposed to antidepressants; unexposed depressed women; and unexposed, nondepressed women, respectively. These decreased to 11%, 8%, and 7% after correction for induced abortions. In multivariable analysis, the corrected risk of miscarriage relative to unexposed, nondepressed women was 1.3 (1.1–1.5) for antidepressant-exposed women and 1.1 (1.0–1.2) for unexposed depressed women. The miscarriage relative risk for antidepressant users compared with unexposed depressed women was thus 1.2 (1.0–1.4).
Conclusions: Antidepressant use in the first trimester is associated with an increased risk of miscarriage when compared with either nondepressed or depressed unexposed women, even after accounting for induced abortions.]]> Sun, 05 Jun 2016 17:13:57 GMT-05:00 | 00001648-201607000-00014
http://journals.lww.com/epidem/Fulltext/2016/07000/Transportability_in_Network_Meta_analysis.16.aspx
<![CDATA[Transportability in Network Meta-analysis]]>Network meta-analysis is an extension of conventional pairwise meta-analysis that includes treatments that have not been compared head to head. In recent years it has caught the interest of clinical investigators in comparative effectiveness research. While allowing a simultaneous comparison of a large number of treatment effects, the inclusion of indirect effects (i.e., estimating effects using treatments that have not been randomized head to head) may introduce bias. This bias arises from not accounting for covariate differences in the analysis in a way that allows the transfer of causal information across trials. Although this problem might not be entirely new to network meta-analysis researchers, it has not been given a formal treatment. Occasionally it is tackled by fitting a meta-regression model to account for imbalance of covariates. However, this approach may still produce biased estimates if the covariates responsible for disparity across studies are post-treatment variables. To address the problem, we use the graphical method known as transportability to demonstrate whether and how indirect treatment effects can be validly estimated in network meta-analysis. See Video Abstract at http://links.lww.com/EDE/B37.]]> Sun, 05 Jun 2016 17:15:09 GMT-05:00 | 00001648-201607000-00016
http://journals.lww.com/epidem/Fulltext/2016/07000/Sensitivity_to_Excluding_Treatments_in_Network.17.aspx
<![CDATA[Sensitivity to Excluding Treatments in Network Meta-analysis]]>Network meta-analysis of randomized controlled trials is increasingly used to combine both direct evidence comparing treatments within trials and indirect evidence comparing treatments across different trials. When the outcome is binary, the commonly used contrast-based network meta-analysis methods focus on relative treatment effects, such as odds ratios comparing two treatments. As shown in a recent report, when using contrast-based network meta-analysis, the impact of excluding a treatment from the network can be substantial, suggesting a methodological limitation. In addition, relative treatment effects are sometimes not sufficient for patients to make decisions. For example, it can be challenging for patients to trade off efficacy and safety for two drugs if they know only the relative effects, not the absolute effects. A recently proposed arm-based network meta-analysis, based on a missing-data framework, provides an alternative approach. It focuses on estimating population-averaged, treatment-specific absolute effects. This article examines the influence of treatment exclusion empirically using 14 published network meta-analyses, for both arm- and contrast-based approaches. The difference between the two approaches is substantial, and it is almost entirely due to single-arm trials. When a treatment is removed from a contrast-based network meta-analysis, it is necessary to exclude other treatments in two-arm studies that investigated the excluded treatment; such exclusions are not necessary in arm-based network meta-analysis, leading to a substantial gain in performance.]]> Sun, 05 Jun 2016 17:16:06 GMT-05:00 | 00001648-201607000-00017
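For contrast-based networks, the transitivity logic behind indirect evidence can be illustrated with the standard Bucher adjusted indirect comparison. The log odds ratios and variances below are hypothetical, and this is the classical contrast-based calculation, not the arm-based method the article examines.

```python
import math

# Hypothetical direct estimates from two sets of trials sharing comparator B:
# A vs B and C vs B, each as a log odds ratio with its variance.
log_or_ab, var_ab = math.log(0.50), 0.04  # A halves the odds relative to B
log_or_cb, var_cb = math.log(0.80), 0.05  # C modestly reduces odds vs B

# Indirect A-vs-C contrast via the common comparator B (Bucher method):
log_or_ac = log_or_ab - log_or_cb
var_ac = var_ab + var_cb                  # variances add for the difference

or_ac = math.exp(log_or_ac)
ci_low = math.exp(log_or_ac - 1.96 * math.sqrt(var_ac))
ci_high = math.exp(log_or_ac + 1.96 * math.sqrt(var_ac))
```

The summed variance shows why indirect contrasts are less precise than direct ones, and why removing a comparator treatment (here B) can reshape every contrast-based estimate that routed through it.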
http://journals.lww.com/epidem/Fulltext/2016/07000/Targeted_Maximum_Likelihood_Estimation_for.18.aspx
<![CDATA[Targeted Maximum Likelihood Estimation for Pharmacoepidemiologic Research]]>Background: Targeted maximum likelihood estimation has been proposed for estimating marginal causal effects, and is robust to misspecification of either the treatment or outcome model. However, due perhaps to its novelty, targeted maximum likelihood estimation has not been widely used in pharmacoepidemiology. The objective of this study was to demonstrate targeted maximum likelihood estimation in a pharmacoepidemiological study with a high-dimensional covariate space, to incorporate the use of high-dimensional propensity scores into this method, and to compare the results to those of inverse probability weighting.
Methods: We implemented the targeted maximum likelihood estimation procedure in a single-point exposure study of statin use and the 1-year risk of all-cause mortality post-myocardial infarction, using data from the UK Clinical Practice Research Datalink. A range of known potential confounders were considered, and empirical covariates were selected using the high-dimensional propensity score algorithm. We estimated odds ratios using targeted maximum likelihood estimation and inverse probability weighting with a variety of covariate selection strategies.
Results: Through a real example, we demonstrated the double robustness of targeted maximum likelihood estimation. We showed that results with this method and inverse probability weighting differed when a large number of covariates were included in the treatment model.
Conclusions: Targeted maximum likelihood estimation can be used in high-dimensional covariate settings. In such settings, differences in results between targeted maximum likelihood estimation and inverse probability weighting are likely due to sensitivity to (near) positivity violations. Further investigation is needed to gain a better understanding of the advantages and limitations of this method in pharmacoepidemiological studies.]]> Sun, 05 Jun 2016 17:17:51 GMT-05:00 | 00001648-201607000-00018
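A minimal sketch of the TMLE mechanics for a point-treatment study with one binary confounder, on simulated data (real analyses use flexible models over high-dimensional covariates). With a saturated initial outcome model the fluctuation parameter epsilon is essentially zero, so the sketch reduces to the plug-in estimate; the targeting step shown is what corrects a misspecified initial fit in realistic settings.

```python
import math
import random

random.seed(3)

def expit(x):
    return 1.0 / (1.0 + math.exp(-x))

def logit(p):
    return math.log(p / (1.0 - p))

def mean(xs):
    xs = list(xs)
    return sum(xs) / len(xs)

# Simulated data: binary confounder W drives both treatment A and outcome Y.
n = 50_000
W = [random.random() < 0.5 for _ in range(n)]
A = [random.random() < 0.3 + 0.4 * w for w in W]
Y = [random.random() < 0.2 + 0.2 * a + 0.3 * w for a, w in zip(A, W)]

# Initial outcome model Q(a, w) and propensity score g(w): stratum means
# (saturated, since W is a single binary covariate).
Q = {(a, w): mean(y for y, aa, ww in zip(Y, A, W) if aa == a and ww == w)
     for a in (0, 1) for w in (0, 1)}
g = {w: mean(a for a, ww in zip(A, W) if ww == w) for w in (0, 1)}

# Clever covariate H and fluctuation: solve for epsilon in the one-parameter
# logistic model  logit P(Y=1) = logit Q(A, W) + epsilon * H.
H = [1.0 / g[w] if a else -1.0 / (1.0 - g[w]) for a, w in zip(A, W)]
offset = [logit(Q[(a, w)]) for a, w in zip(A, W)]

eps = 0.0
for _ in range(25):  # scalar Newton-Raphson on the score equation
    score = info = 0.0
    for y, h, o in zip(Y, H, offset):
        p = expit(o + eps * h)
        score += h * (y - p)
        info += h * h * p * (1.0 - p)
    eps += score / info

# Targeted substitution estimate of the average treatment effect.
ate = mean(expit(logit(Q[(1, w)]) + eps / g[w])
           - expit(logit(Q[(0, w)]) - eps / (1.0 - g[w]))
           for w in W)
```

The double robustness mentioned in the abstract shows up here: the clever covariate weights residuals by inverse propensities, so the targeted estimate stays consistent if either Q or g is correctly specified.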
http://journals.lww.com/epidem/Fulltext/2016/07000/Review_Article___The_Role_of_Molecular.22.aspx
<![CDATA[Review Article: The Role of Molecular Pathological Epidemiology in the Study of Neoplastic and Non-neoplastic Diseases in the Era of Precision Medicine]]>Molecular pathology diagnostics to subclassify diseases based on pathogenesis are increasingly common in clinical translational medicine. Molecular pathological epidemiology (MPE) is an integrative transdisciplinary science based on the unique disease principle and the disease continuum theory. While it has been most commonly applied to research on breast, lung, and colorectal cancers, MPE can investigate etiologic heterogeneity in non-neoplastic diseases, such as cardiovascular diseases, obesity, diabetes mellitus, drug toxicity, and immunity-related and infectious diseases. This science can enhance causal inference by linking putative etiologic factors to specific molecular biomarkers as outcomes. Technological advances increasingly enable analyses of various -omics, including genomics, epigenomics, transcriptomics, proteomics, metabolomics, metagenomics, the microbiome, immunomics, and interactomics, among others. Challenges in MPE include sample size limitations (depending on the availability of biospecimens or biomedical/radiological imaging), the need for rigorous validation of molecular assays and study findings, and a paucity of interdisciplinary experts, education programs, international forums, and standardized guidelines. To address these challenges, there are ongoing efforts such as multidisciplinary consortium pooling projects, the International Molecular Pathological Epidemiology Meeting Series, and the Strengthening the Reporting of Observational Studies in Epidemiology-MPE guideline project. Efforts should be made to build biorepository and biobank networks, and worldwide population-based MPE databases.
These activities align with the goals of the Big Data to Knowledge (BD2K), Genetic Associations and Mechanisms in Oncology (GAME-ON), and Precision Medicine Initiatives of the United States National Institutes of Health. Given advances in biotechnology, bioinformatics, and computational/systems biology, there are wide-open opportunities for MPE to contribute to public health.]]> Sun, 05 Jun 2016 17:18:42 GMT-05:00 | 00001648-201607000-00022