The editors of EPIDEMIOLOGY recently received an invitation to endorse the “Transparency and Openness Promotion (TOP) Guidelines.” These guidelines were promulgated by a committee of the same name and were recently published with accompanying commentaries.1–3 The objectives of the guidelines, as stated in the invitation, are “to encourage greater transparency and reproducibility of research in the published record” with the ultimate goal of improving “both research practices and the credibility and reputation of our field.” After due consideration, the editors of EPIDEMIOLOGY declined the invitation. There is much to be admired in the aspirations of TOP, so I feel obliged to explain the editors’ decision.
The guidelines promulgate eight transparency standards that can be implemented in whole or part by journals. Journals that adopt the guidelines will choose the level of their implementation for each standard, ranging from level 0, connoting no implementation of the standard, to level 3, connoting full implementation and active policing of the standard by the journal editors. Adoption of the guidelines will require authors, reviewers, and editors to view research papers through a different lens than they currently use, one that focuses on adherence to the guidelines. This change will most likely lead to reduced focus on other areas. EPIDEMIOLOGY prefers that authors, reviewers, and editors focus on the quality of the research and the clarity of its presentation. For this reason, among others, the editors of EPIDEMIOLOGY have largely declined opportunities to endorse or implement guideline endeavors such as this one.4–7 In fact, the journal’s current mission statement sets as one of its goals to “maintain an editorial policy that encourages creativity and novelty, resists regimentation of research practices to the extent practicable, and invites challenges to current scientific habits and conventions through innovation in epidemiologic theory and practice.” Implementing the TOP guidelines would run counter to this goal.
A second reason for our decision to decline the invitation is that the guidelines came with no concrete plan for program evaluation or revision. There is an allowance for version numbering, but that does not go far enough. We have previously advocated for explicit expiration dates for guidelines such as TOP, so that scientists would not be shackled to outdated standards for research conduct and reporting.4 The stated goals of TOP—transparency and reproducibility in the research endeavor—can be measured; an example appeared in the same issue of Science in which the guidelines were published.8 The increased burden that the guidelines impose on authors, reviewers, editors, and publishers is certainly measurable. Program evaluation—including objective criteria by which success or failure will be measured—is critical to an undertaking such as the one proposed by the guideline committee, particularly because there is no substantial evidence in the invitation, guidelines, or published commentaries that TOP will have the intended effect. Well-meaning guidelines with similar goals have sometimes had the opposite of their intended effect,9 a concern that has already been raised about the recent stampede toward reproducibility.10 Our community would never accept a public-health or medical intervention that had little evidence to support its effectiveness and no plan for longitudinal evaluation. By declining the invitation to endorse TOP, the editors hold guidelines pertaining to the publication of research to the same standard.
The final reason we declined the invitation to join TOP is the implicit, and sometimes explicit, perspective upon which it rests. This perspective treats each research paper as if its results were right or wrong, as eventually determined by whether its results are reproduced or not. Such concordance is too easily evaluated by whether two results are both statistically significant, or both not. The editors of EPIDEMIOLOGY have always rejected dichotomization of research results into right or wrong, or into statistically significant or not. Our preference has always been to treat each research result as an imperfect measurement of an underlying parameter, allowing time for the slow accumulation of evidence, potentially from many studies, to ultimately yield knowledge that can guide policy. We prefer to think of replication not as reproducibility, but as enhancement, or as replication with advance.11 Ideally, each study takes us a step closer toward the accumulation of actionable knowledge. Lack of transparency and failure to invest in replication research present obstacles to this process, so we support the TOP committee in its desire to reduce these obstacles, even if we do not support the mechanisms promulgated by the guidelines themselves.
The editors of EPIDEMIOLOGY appreciate the invitation to endorse the TOP guidelines and admire some of the espoused goals. For the reasons above, we cannot accept the invitation. Nonetheless, we encourage authors to make their research data and computing code available for replication endeavors, a goal that is consistent with part of the TOP guidelines. We intend soon to ask the authors of each new submission to explain whether and how these materials might be made available. To the extent that our plan, once implemented, might be aided by the model provided by the TOP guidelines, we are grateful for it.
1. Nosek BA, Alter G, Banks GC, et al. Promoting an open research culture. Science. 2015;348:1422–1425.
2. Buck S. Solving reproducibility. Science. 2015;348:1403.
3. Alberts B, Cicerone RJ, Fienberg SE, et al. Self-correction in science at work. Science. 2015;348:1420–1422.
4. Rothman KJ, Poole C. Some guidelines on guidelines: they should come with expiration dates. Epidemiology. 2007;18:794–796.
5. Editors. Probing STROBE. Epidemiology. 2007;18:789–790.
6. Editors. The registration of observational studies—when metaphors go bad. Epidemiology. 2010;21:607–609.
7. Lash TL. Truth and consequences. Epidemiology. 2015;26:141–142.
8. Kaiser J. The cancer test. Science. 2015;348:1411–1413.
9. King NB, Kaufman JS. More author disclosure: solution or absolution? Epidemiology. 2012;23:777–779.
10. Nuijten MB, van Assen MALM, Veldkamp CLS, Wicherts JM. The replication paradox: combining studies can decrease accuracy of effect size estimates. Rev Gen Psychol. 2015;19:172–182.
11. Lash TL. Advancing research through replication. Paediatr Perinat Epidemiol. 2015;29:82–83.