Correspondence Between Results and Aims of Funding Support in EPIDEMIOLOGY Articles

Lash, Timothy L.a; Kaufman, Jay S.b; Hernán, Miguel A.c,d

doi: 10.1097/EDE.0000000000000767
Editorial

From the aDepartment of Epidemiology, Rollins School of Public Health, Emory University, Atlanta, GA; bDepartment of Epidemiology, Biostatistics, & Occupational Health, McGill University, Montreal, Quebec, Canada; and cDepartment of Epidemiology and dDepartment of Biostatistics, Harvard T.H. Chan School of Public Health, Harvard University, Boston, MA.

Disclosure: The authors report no conflicts of interest.

Correspondence: Timothy L. Lash, Department of Epidemiology, Rollins School of Public Health, Emory University, 1518 Clifton Rd. NE, CNR3, Atlanta, GA 30322. E-mail: tlash@emory.edu.

The credibility of epidemiologic research and its relation to prespecification of hypotheses and approaches to data analysis have been addressed on the pages of EPIDEMIOLOGY since its first issue.1–3 Recently, the lay and scientific media have raised concerns about the replicability of scientific research.4–6 Selective reporting of findings based on “hypothesizing after the results are known,” or HARKing,7 and results-driven data analysis,8 or p-hacking, may lead to incorrect or exaggerated scientific results that cannot later be replicated.9,10 Selective reporting of findings has been empirically demonstrated. A landmark study of randomized trials found that over half of the outcomes were incompletely reported, that “statistically significant” outcomes were more likely to be reported, and that over 60% of trials had at least one primary outcome that differed from the ones listed in the study protocol.11 Soon thereafter, medical journals began to require preregistration of clinical trials.12,13

One consequence of the implementation of trial registration has been a parallel call for compulsory preregistration of nonrandomized epidemiologic research,14–16 which proponents argue would allow comparison of published results with preregistered objectives and protocols, including the study hypothesis. Although there are important logical and philosophical reasons to question the preeminence of a priori hypotheses over a posteriori hypotheses,17–20 many scientists assign greater credibility to results that correspond to the former.9,21 For reasons explained elsewhere, the editors of EPIDEMIOLOGY have resisted calls for compulsory preregistration22 and other regimentation of epidemiologic research.23,24 Nonetheless, because conversations about selective reporting in observational studies have been hampered by sparse empirical data, we undertook a self-study to evaluate the correspondence between published results in our journal and the prespecified objectives in the funding mechanisms that authors said had supported the work leading to the publication.

For every original research article and brief report published by EPIDEMIOLOGY in 2013, 2014, and 2015, we extracted the publication’s abstract and its information on the sources of funding as provided by the authors. We attempted to locate Internet databases for each funding source and downloaded summary information submitted by the authors to the funding source at the time of their application for funding support (e.g., the specific aims, project objectives, or other description of anticipated work). One of us (T.L.) compared the abstract with all available descriptions of objectives and categorized the results in the abstract into one of five categories: (1) the published result was clearly among the funded aims; (2) the published result was possibly among the funded aims; (3) there was no evidence that the published result was among the funded aims; (4) the funding information was inconclusive; or (5) no funding information was available. The category “the funding information was inconclusive” most often applied to articles for which no Internet database was located for the funding source, the funding source information was unavailable in English and attempts at Internet-enabled translation failed, or the funding source information did not include a description of the project’s objectives that was submitted at the time of application for funding support. The category “no funding information available” applied to articles for which the authors listed no source of funding support. As a secondary evaluation, we repeated the analysis with restriction to nonmethods articles. As a validation substudy, four editors reviewed the information for 10 publications each, selected at random and without replacement. Finally, for each publication that reported a ratio estimate of association, we extracted the first ratio estimate and its 95% confidence interval from the abstract. We extracted the first ratio estimate with the expectation that it would represent the main result and with the intention to select a result systematically rather than preferentially. We ranked the point estimates from lowest to highest within categories of “no evidence that published result was among funded aims,” “published result clearly or possibly among the funded aims,” or “not evaluable” (comprising publications for which no funding information was available or for which funding information was inconclusive). To evaluate whether results within these categories may have emanated from different data-generating mechanisms, we compared the distributions using box-and-whisker plots.
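To make this summary concrete, the following minimal sketch (in Python, used here only for illustration; the abbreviated category labels and the ratio estimates are hypothetical placeholders, not the study data) groups first ratio estimates by category, computes each category’s interquartile range and ratio of the 75th to the 25th percentile, and draws box-and-whisker plots:

```python
# Minimal sketch (not the journal's actual analysis code) of the comparison
# described above. The ratio estimates below are hypothetical placeholders.
import numpy as np
import matplotlib.pyplot as plt

estimates_by_category = {
    "No evidence among funded aims": [0.65, 0.78, 0.95, 1.10, 1.59, 2.10],
    "Clearly or possibly among funded aims": [0.98, 1.04, 1.15, 1.25, 1.37, 1.80],
    "Not evaluable": [0.90, 1.02, 1.20, 1.45, 1.70, 2.30],
}

for label, values in estimates_by_category.items():
    q25, q75 = np.percentile(values, [25, 75])
    # Spread of the middle half of the distribution, on the ratio scale.
    print(f"{label}: IQR {q25:.2f}-{q75:.2f}; "
          f"75th/25th percentile ratio = {q75 / q25:.1f}")

# Box-and-whisker plots to compare the three distributions visually,
# with a reference line at the null value for ratio measures.
fig, ax = plt.subplots()
ax.boxplot(list(estimates_by_category.values()),
           labels=list(estimates_by_category.keys()))
ax.axhline(1.0, linestyle=":", color="gray")
ax.set_ylabel("First ratio estimate from abstract")
plt.tight_layout()
plt.show()
```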

Of the 291 brief reports and original articles published between 2013 and 2015, we could compare abstract results with funded objectives for 171 (58%; Table 1). Most (89) of the 120 for which we were unable to make a comparison had funding information that was inconclusive. Among the articles with informative funding information, the published result was clearly (40%) or possibly (24%) among the funded aims for nearly two-thirds. The results were nearly identical when restricted to the 217 nonmethods articles. The validation substudy showed good interrater agreement, with concordant results for 29 of the 40 publications. For five publications, the main rater assigned the category “no evidence that published result was among funded aims,” whereas the second rater assigned the category “published result clearly or possibly among funded aims.” This agreement study suggests that the main rater may have underestimated the proportion of evaluable publications for which the published result was among the funded aims.

TABLE 1

Table 2 and the Figure display the distributions of the first ratio estimate extracted from each publication’s abstract, restricted to publications for which the first estimate was a ratio estimate (118/291, 40% of all articles; 118/217, 54% of nonmethods articles). The distribution of results from the category for which there was no evidence that results corresponded to aims was nearly null-centered and wide (interquartile range, 0.78–1.59; ratio of 75th to 25th percentiles = 2.0). The distributions of results from the category with results corresponding to aims (interquartile range, 1.04–1.37; ratio of 75th to 25th percentiles = 1.3) and from the not evaluable category (interquartile range, 1.02–1.70; ratio of 75th to 25th percentiles = 1.7) were narrower and skewed toward associations above the null.

TABLE 2

FIGURE. Distributions of the first ratio estimate extracted from each publication’s abstract, by category of correspondence between published results and funded aims.

While reviewing this information, we also observed wide variation in which sources of funding support authors chose to name and in the clarity with which they described that support.

After completing this self-study, the editors arrived at the following conclusions. First, during the 3-year time window under study, most of the evaluable research articles published in EPIDEMIOLOGY present results that correspond to aims or objectives anticipated in applications for funding that supported the project. This conclusion must be tempered by the lack of evaluable information for more than 40% of the publications and by our inability to compare design and analysis protocols with the implemented methods. Nonetheless, we are reassured that most of the evaluable published results did, in fact, relate to a priori hypotheses of sufficient merit that they were selected for funding support. Second, the distribution of results was detectably, but not substantially, different when stratified by whether “there was no evidence that the published result was among the funded aims,” “the published result was clearly or possibly among the funded aims,” or “the published result was not evaluable.” In addition, there was wide variation in the volume and quality of authors’ reporting of their sources of funding support. For these reasons, we have implemented an editorial process that requests that funding information be reported in specific language. In the box asking authors about funding sources, authors are now instructed as follows:

  • Enter “none” if the work was completed without specific funding support.
  • If the result reported in the submission corresponds directly to the specific aims of a source (or sources) of funding, then describe that source of funding as: “The results reported herein correspond to specific aims of grant (or other source of support) XXX to investigator YYY from ZZZ”, where XXX is a grant or project number, YYY is the Principal Investigator of the grant or project, and ZZZ is the funding agency.
  • Describe all other sources of support as: “This work was (also) supported by grant(s) (or other source of support) XXX from ZZZ”, where ‘(also)’ is inserted only if the listed support is in addition to support corresponding directly to a specific aim, XXX is a grant or project number, and ZZZ is a funding agency. Additional sources of support should be added serially (e.g., “grants XXX1 from ZZZ1, XXX2 from ZZZ2, and XXX3 from ZZZ3”). Sources of support can include general salary support, which may not have a grant or project number.
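For example, an author supported in both ways might write (all grant details here are hypothetical): “The results reported herein correspond to specific aims of grant R01-000000 to investigator J. Smith from the National Institutes of Health. This work was also supported by grant 2017-01 from the Example Foundation.”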

Grant or project numbers should be provided in a format that allows interested parties to find the grant in publicly available databases provided by many funding agencies.

EPIDEMIOLOGY has requested information about funding support from authors since its inaugural issue in 1990. Asking that authors provide specific information in a common format does not substantially increase their burden and is a reasonable compromise in response to calls for greater transparency about the process by which hypotheses came to be evaluated. We emphasize that reviewers will not receive this information when providing an external review of submissions, and that editors will not incorporate this information into editorial decisions about the merit of submissions. We also recognize that funding sometimes supports investigators, not projects, and that original study aims are often superseded by the acquisition of new epidemiologic, biologic, and social knowledge. Proposals for such new analyses may undergo an internal review process by a study group, and these would not necessarily appear in grant aims. We caution, therefore, that the absence of funding information pointing to a specific aim of a grant proposal is not evidence of selective reporting or the absence of a prespecified hypothesis. We encourage authors to provide information about the genesis of their hypotheses in their description of the funding information, in the main text,25 or by noting preregistration in publicly available repositories.

REFERENCES

1. Marshall JR. Data dredging and noteworthiness. Epidemiology. 1990;1:5–7.
2. Rothman KJ. No adjustments are needed for multiple comparisons. Epidemiology. 1990;1:43–46.
3. Vandenbroucke JP. How trustworthy is epidemiologic research? Epidemiology. 1990;1:83–84.
4. Ioannidis JP. How to make more published research true. PLoS Med. 2014;11:e1001747.
5. Unreliable research: trouble at the lab. Economist. 18 October 2013.
6. Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505:612–613.
7. Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev. 1998;2:196–217.
8. Simmons JP, Nelson LD, Simonsohn U. False-positive psychology: undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci. 2011;22:1359–1366.
9. Munafò MR, Nosek BA, Bishop DVM, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1:0021.
10. Motulsky HJ. Common misconceptions about data analysis and statistics. Pharmacol Res Perspect. 2015;3:e00093.
11. Chan AW, Hróbjartsson A, Haahr MT, et al. Empirical evidence for selective reporting of outcomes in randomized trials: comparison of protocols to published articles. JAMA. 2004;291:2457–2465.
12. De Angelis C, Drazen JM, Frizelle FA, et al; International Committee of Medical Journal Editors. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351:1250–1251.
13. Krleza-Jerić K, Chan AW, Dickersin K, et al. Principles for international registration of protocol information and results from human trials of health related interventions: Ottawa statement (part 1). BMJ. 2005;330:956–958.
14. Bracken MB. Preregistration of epidemiology protocols: a commentary in support. Epidemiology. 2011;22:135–137.
15. The Lancet. Should protocols for observational research be registered? Lancet. 2010;375:348.
16. Loder E, Groves T, Macauley D. Registration of observational studies. BMJ. 2010;340:c950.
17. Lash TL, Vandenbroucke JP. Commentary: should preregistration of epidemiologic study protocols become compulsory? Reflections and a counterproposal. Epidemiology. 2012;23:184–188.
18. Cole P. The hypothesis generating machine. Epidemiology. 1993;4:271–273.
19. Michels KB, Rosner BA. Data trawling: to fish or not to fish. Lancet. 1996;348:1152–1153.
20. Savitz DA. Commentary: prior specification of hypotheses: cause or just a correlate of informative studies? Int J Epidemiol. 2001;30:957–958.
21. The PLoS Medicine Editors. Observational studies: getting clear about transparency. PLoS Med. 2014;11:e1001711.
22. The Editors. The registration of observational studies—when metaphors go bad. Epidemiology. 2010;21:607–609.
23. Lash TL. Declining the transparency and openness promotion guidelines. Epidemiology. 2015;26:779–780.
24. The Editors. Probing STROBE. Epidemiology. 2007;18:789–790.
25. Vandenbroucke JP, von Elm E, Altman DG, et al; STROBE Initiative. Strengthening the Reporting of Observational Studies in Epidemiology (STROBE): explanation and elaboration. Epidemiology. 2007;18:805–835.
Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.