Examining the Reproducibility of 6 Published Studies in Public Health Services and Systems Research

Harris, Jenine K., PhD; Wondmeneh, Sarah B., HBSc; Zhao, Yiqiang, MPH; Leider, Jonathon P., PhD

Journal of Public Health Management and Practice: March/April 2019 - Volume 25 - Issue 2 - p 128–136
doi: 10.1097/PHH.0000000000000694
Research Reports: Research Full Report

Objective: Research replication, or repeating a study de novo, is the scientific standard for building evidence and identifying spurious results. Although replication is ideal, it is often expensive and time-consuming. Reproducibility, or reanalysis of data to verify published findings, is one proposed minimum alternative standard. Although a lack of reproducibility has been identified as a serious and prevalent problem in biomedical research and a few other fields, little work has been done to examine the reproducibility of public health research. We examined reproducibility in 6 studies from public health services and systems research, a subfield of public health research.

Design: Following the methods described in each of the 6 papers, we computed the descriptive and inferential statistics for each study. We then compared our results with the original study results, examining percentage differences for descriptive statistics and differences in effect size, significance, and precision for inferential statistics. All project work was completed in 2017.
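To illustrate the kind of comparison described above, the following is a minimal Python sketch, not the authors' published code (their code is available through the coding2share repository noted below); the function name, example values, and the .05 significance threshold are illustrative assumptions.

# Percentage difference between a published statistic and its reproduced value.
def percent_difference(original, reproduced):
    return 100 * abs(reproduced - original) / abs(original)

# Example: a published mean of 42.0 versus a reproduced mean of 41.3.
print(f"{percent_difference(42.0, 41.3):.1f}%")  # prints 1.7%

# For inferential statistics, one simple consistency check is whether the
# original and reproduced P values fall on the same side of the threshold.
original_p, reproduced_p = 0.021, 0.018
print((original_p < 0.05) == (reproduced_p < 0.05))  # True: both significant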

Results: We found consistency between original and reproduced results for each paper in at least 1 of the 4 areas examined. However, we also found some inconsistencies. We identified incorrect transcription of results and omission of detail about data management and analyses as the primary contributors to the inconsistencies.

Recommendations: Increasing reproducibility, or reanalysis of data to verify published results, can improve the quality of science. Researchers, journals, employers, and funders can all play a role in improving the reproducibility of science through several strategies, including publishing data and statistical code, using guidelines to write clear and complete methods sections, conducting reproducibility reviews, and incentivizing reproducible science.

Brown School, Washington University in St Louis, St Louis, Missouri (Dr Harris, Ms Wondmeneh, and Mr Zhao); and Johns Hopkins Bloomberg School of Public Health, Baltimore, Maryland (Dr Leider).

Correspondence: Jenine K. Harris, PhD, Brown School, Washington University in St Louis, One Brookings Dr, Campus Box 1196, St Louis, MO 63130 (harrisj@wustl.edu).

The authors acknowledge Todd Combs for his helpful comments on drafts of this manuscript and for his assistance in setting up the coding2share GitHub account as a repository for their code. The research presented in this paper is that of the authors and does not reflect the official policy of the Robert Wood Johnson Foundation.

J.K.H. received a grant from the Robert Wood Johnson Foundation in March 2017 that supported a portion of her time completing the writing and editing of this manuscript. S.W. and Y.Z. were paid hourly as part-time graduate research assistants by Washington University in St Louis for their work on this project.

The authors declare no conflicts of interest.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (http://www.JPHMP.com).

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.