Getting Over TOP

Lash, Timothy L.

Epidemiology 33(1):1–6, January 2022. DOI: 10.1097/EDE.0000000000001424

In May 2015, the Center for Open Science invited Epidemiology to support the Transparency and Openness Promotion (TOP) Guidelines.1 After consulting our editors and former Editors-in-Chief, I declined this invitation and published an editorial to explain the rationale.2 Nonetheless, the Center for Open Science has assigned a TOP score to the journal and disseminated the score via Clarivate, which also disseminates the Journal Impact Factor. Given that Epidemiology has been scored despite opting not to support the TOP Guidelines, and that our score has been publicized by the Center for Open Science, we here restate and expand our concerns with the TOP Guidelines and emphasize that the guidelines are at odds with Epidemiology’s mission and principles.

We declined the invitation to support the TOP Guidelines for three main reasons. First, Epidemiology prefers that authors, reviewers, and editors focus on the quality of the research and the clarity of its presentation rather than on adherence to one-size-fits-all guidelines. For this reason, among others, the editors of Epidemiology have consistently declined opportunities to endorse or implement endeavors such as the TOP Guidelines.3–5 Second, the TOP Guidelines did not include a concrete plan for program evaluation or revision. Well-meaning guidelines with similar goals sometimes have the opposite of their intended effect.6 Our community would never accept a public health or medical intervention that had little evidence to support its effectiveness (more on that below) and no plan for longitudinal evaluation. We hold publication guidelines to the same standard. Third, we declined because the TOP Guidelines rest on the untenable premise that each research article’s results are right or wrong, as eventually determined by whether its results are reproducible. Too often, including in the study of reproducibility that was foundational to the promulgation of the TOP Guidelines,7 reproducibility is evaluated by whether results are concordant in terms of statistical significance. This faulty approach has been used frequently, even though the idea that two results—one statistically significant and the other not—are necessarily different from one another is a well-known fallacy.8,9


The editors of Epidemiology reject dichotomization of research results into right or wrong, or into statistically significant or not.10 Our preference is to treat each research result as an imperfect measurement of an underlying parameter, allowing time for the accumulation of evidence, potentially from many studies, to ultimately yield knowledge that can guide policy. It is difficult to estimate the degree to which reliance on statistical significance testing has exaggerated the impression of a reproducibility crisis. Examples of claims of irreproducible results based on p-values falling on opposite sides of the type 1 threshold, like those presented in the Figure,11–13 are easy to find. They contribute to the perception that epidemiologic results are poorly reproducible when, in these examples and many others, the evidence base is highly consistent. There is no doubt, therefore, that the cultural preference for inference resting on null hypothesis significance testing has done dramatic harm to the perceived reproducibility of scientific evidence.14

Three sets of P value functions. Each set depicts two or more highly consistent results that were described as inconsistent with one another because of discordant tests of statistical significance. Dashed lines are statistically significant results and solid lines are not statistically significant (Type 1 α = 0.05). CI, confidence interval; HR, hazard ratio; IPTW, inverse probability of treatment weight; OR, odds ratio.
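The fallacy described above is easy to demonstrate numerically. The sketch below uses hypothetical, illustrative numbers (not values taken from the studies in the Figure) to show two nearly identical hazard ratios whose Wald p-values nonetheless fall on opposite sides of 0.05.

```python
import math
from statistics import NormalDist

def two_sided_p(ratio, ci_lo, ci_hi):
    """Recover a two-sided Wald p-value from a ratio estimate and its 95% CI."""
    se = (math.log(ci_hi) - math.log(ci_lo)) / (2 * 1.96)  # SE on the log scale
    z = math.log(ratio) / se
    return 2 * (1 - NormalDist().cdf(abs(z)))

# Two hypothetical, nearly identical results (illustrative numbers only)
p1 = two_sided_p(1.50, 1.02, 2.21)  # "statistically significant"
p2 = two_sided_p(1.48, 0.99, 2.21)  # "not statistically significant"
print(f"p1 = {p1:.3f}, p2 = {p2:.3f}")
```

The two estimates and their intervals are nearly indistinguishable, yet a significance-testing reader would label them "inconsistent," which is exactly the mistake the Figure documents.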

The foundational article for the TOP Guidelines7 provides a detailed case study. It included a scatter plot of each original study’s effect size against its replication’s effect size. All but three of the 100 original effect estimates were statistically significant. Of the 99 studies for which an effect size could be estimated in both the original and replication study, 82 showed a stronger effect size in the original study. This pattern is what one would expect when the original effects targeted for replication were selected because they were statistically significant. Because of that selection bias, on average, these initial findings are expected to be overestimates.14 The replication studies are not subject to this selection pressure. Without the same selection bias, they are not expected to be overestimates, but rather should regress toward the null, which they did. The authors of the foundational study, including many who also served on the self-appointed committee to promulgate the TOP Guidelines, recognized this phenomenon:

In a discipline with low-powered research designs and an emphasis on positive results for publication, effect sizes will be systematically overestimated in the published literature. There is no publication bias in the replication studies because all results are reported.7
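The selection mechanism described above can be reproduced in a minimal simulation. All numbers here are illustrative assumptions (a true standardized effect of 0.2 measured with standard error 0.15, giving low power), not parameters from the Open Science Collaboration study; the point is only that conditioning originals on p < 0.05, while leaving replications unconditioned, inflates the originals and lets the replications regress toward the truth.

```python
import random
from statistics import NormalDist, mean

random.seed(1)
nd = NormalDist()
true_effect = 0.2   # assumed true standardized effect (illustrative)
se = 0.15           # standard error of each low-powered study (illustrative)

originals, replications = [], []
for _ in range(100_000):
    est = random.gauss(true_effect, se)        # an "original" study result
    p = 2 * (1 - nd.cdf(abs(est / se)))        # its two-sided Wald p-value
    if p < 0.05:                               # originals are selected for significance
        originals.append(est)
        # the replication is run regardless of its own result: no selection
        replications.append(random.gauss(true_effect, se))

print(f"true effect:             {true_effect:.2f}")
print(f"mean selected original:  {mean(originals):.2f}")    # inflated well above 0.2
print(f"mean replication:        {mean(replications):.2f}") # close to 0.2
```

Nothing about the replications is "failing" here; the apparent shrinkage is produced entirely by the selection applied to the originals.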

Given the connection between selection of statistically significant results and poor reproducibility, which the coauthors of the TOP Guidelines and the foundational paper clearly noted, it is reasonable to expect that the TOP Guidelines would include, as one of their eight scored guidelines, some indication that journals discourage authors from using the statistical significance of a result as a basis to select it for publication. None of the eight standards addresses this problem. In contrast, Epidemiology has discouraged null hypothesis significance testing for more than 30 years, standing nearly alone among major general-interest medical and epidemiology journals in this editorial philosophy over that period.15

Rather than discouraging null hypothesis significance testing, a large group of coauthors that included the Executive Director of the Center for Open Science instead suggested lowering the conventional type 1 error rate for “new discoveries” from α = 0.05 to α = 0.005.16 Their aim was to improve reproducibility by lowering the false-positive rate. In the article in which this recommendation is made, the authors write “The choice of any particular threshold is arbitrary and involves a trade-off between type I and type II errors.” Clearly, if the choice of an acceptable type 1 error rate, such as 0.05 or 0.005, requires a trade-off with type 2 error rates, and we care about both types of errors, then the choice is not arbitrary. The quoted passage only makes sense if the trade-off between type 1 and type 2 errors is, itself, seen as inconsequential. It is easy to make this mistake when the ethical values attached to the trade-off go unstated.

Imagine designing a study of the relation between exposure to pollution emitted from an industrial facility and the occurrence of severe asthma among children living nearby. Imagine further that the study is designed with 90% power (implying an acceptable type 2 error rate of β = 0.10) to detect a 1.1-fold increase in the occurrence of severe asthma among nearby children and an acceptable type 1 error rate of either 0.05 (a conventional choice) or 0.005 (as advocated16). Now attach ethical value statements to this design. A type 2 error (failing to reject the null hypothesis when it is false) benefits stakeholders in the industrial facility; they can continue to emit harmful pollution without remediation or liability because the health consequences went undetected by the study. The design accepts a 10% probability of making such a mistake. A type 1 error (rejecting the null hypothesis when it is true) benefits members of the community; they might be exposed to less pollution if the study leads to regulation, even though it really has not affected the occurrence of severe asthma among children. Conventionally, proponents of significance testing accept a 5% probability of making such a mistake. The coauthors advocate lowering this acceptable threshold to 0.5%. This example clarifies that there are ethical values that are far from arbitrary attached to the decisions about acceptable type 1 and type 2 error rates. In this example, the value judgment is that a type 2 error of 10%, which benefits the industry and harms the community, is twice as costly as a type 1 error of 5%, which benefits the community and inconveniences the industry. If we set the acceptable type 1 error rate at 0.5%, as recommended for new discovery research projects such as this one,16 then the ratio favors the polluting industry over the health of the community by 20-fold, not 2-fold.
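The arithmetic behind the 2-fold and 20-fold figures can be made explicit; this sketch simply restates the error rates from the asthma example above and takes their ratio.

```python
# Acceptable error rates from the hypothetical asthma study design above
beta = 0.10           # type 2 error (90% power): harm goes undetected, benefiting industry
alpha = 0.05          # conventional type 1 error: needless remediation, burdening industry
alpha_strict = 0.005  # threshold advocated for "new discoveries"

# How many type 2 errors are tolerated for every tolerated type 1 error
ratio_conventional = beta / alpha         # 2-fold
ratio_strict = beta / alpha_strict        # 20-fold
print(f"conventional design: {ratio_conventional:.0f}-fold")
print(f"stricter threshold:  {ratio_strict:.0f}-fold")
```

Holding power fixed while tightening α by a factor of ten multiplies the implicit preference for the industry-favoring error by the same factor; the ethical weighting changes even though no one stated it.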

This example simplifies complex social, economic, and policy considerations that would be necessary for a full evaluation of the ethics. Nonetheless, articulating these value judgments, as opposed to blindly accepting conventional or stricter acceptable error rates, reveals that many studies are designed with ethical values that run counter to ordinary public health priorities. A stated goal of the TOP Guidelines is “to encourage greater transparency and reproducibility of research.” Requiring authors to be transparent about the ethical values attached to design decisions ought to be a priority, but it is not included among the eight criteria by which TOP scores journals. Even better would be to abandon the culture of null hypothesis significance testing at the study design stage as well. Epidemiology has published guidelines for designing studies based on the expected precision of their estimates,17 rather than on a significance-testing framework.


Epidemiology is an official journal of the International Society for Environmental Epidemiology (ISEE). The science of environmental epidemiology operates within a highly charged social and political context, one in which economic and material interests are often wrongly viewed as conflicting with environmental protections, environmental public health, and environmental justice.18–20 ISEE has been particularly wary of the influence of the fossil fuel extraction industry on environmental health and policy.21 Readers of Epidemiology, and consumers of the TOP score, ought to know that one organization, with fossil fuel extraction at the origin of its wealth, has awarded an unprecedented amount of support in the name of research integrity to improve reproducibility.22 The cofounder and Executive Director of the Center for Open Science has recognized the central role of this support in the center’s work, saying it “completely transformed what we could imagine doing.”23 The awards to the Center for Open Science were apparently made without open calls for competitive applications and without external review.23 In fact, awards were initiated by the foundation contacting the center founder upon learning of the views he held.23 I encourage readers of Epidemiology and consumers of the TOP score to think critically about the public health and public policy implications of one foundation awarding an unprecedented amount of support to organizations who aim to reshape the practice of science.22,24

It might seem baseless to question calls for openness and transparency in science, regardless of their headwaters. There is, however, precedent for industry interests to use our genuine efforts to improve the practice of science as a lever to reduce the use of science in health protections. In the past few years, the US Environmental Protection Agency’s (EPA) ability to make sound policy based on complete scientific evidence has been threatened by legislation and rulemaking to require open access to original data underlying the research. The “Honest and Open New EPA Science Treatment (HONEST) Act” aimed to prohibit EPA from using studies for agency decision-making unless raw data, computer code, and other materials were provided to the agency and made publicly available.25 Former EPA Administrator Gina McCarthy said the HONEST Act “was designed to prevent us (EPA) from getting the information we need to protect public health.” The HONEST Act never passed Congress, but during the Trump administration EPA successfully enacted the same fundamental policies through administrative rulemaking titled “Strengthening Transparency in Regulatory Science.”26–28 The rule was vacated in court on procedural grounds and sent back to EPA, which is unlikely to reinstate it under the Biden administration.29 This rule has been a longstanding goal of industrial interests, with the intent of hampering environmental regulation.30 Importantly, defenders of the policy point to science’s own efforts to improve open access to data31 to deflect criticism.32,33

Environmental epidemiologists, and Epidemiology as an official journal of an environmental epidemiology society, should be wary of any attempt to regulate the scientific process that might in turn be used to hamper public health. Policies that require preregistration of nonrandomized research protocols trigger the same wariness. The “preregistration revolution”34 is rooted in a 2009 report from a workshop sponsored by the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC),35 which was titled “Enhancement of the Scientific Process and Transparency of Observational Epidemiology Studies.” According to its web site, the ECETOC “provides a collaborative space for top scientists from industry, academia and governments to develop and promote practical, trusted and sustainable solutions to scientific challenges which are valuable to industry, and to the regulatory community and society in general.” It is financed by its membership, which includes “the leading companies with interests in the manufacture and use of chemicals, biomaterials and pharmaceuticals.” The introductory paragraph of the summary and recommendations section of the workshop report reads:

Among the workshop participants there was general consensus that current practice in observational epidemiology research is often far from the ideal and that improvements need to be made in particular with regard to enhancing transparency and credibility of observational epidemiologic research. The issues of publication and other biases along with undocumented deviation from original study designs were highlighted. The workshop recognized several key points on how to enhance the transparency and credibility of observational epidemiology studies.

First among the recommendations was compulsory preregistration of observational epidemiology studies in a public database, following the model of clinical trials. Note that investigators can already preregister nonrandomized studies and protocols, and readers can already weight their interpretations of studies based on preregistration status. What was new in this recommendation is the notion that preregistration should be made compulsory. This recommendation was widely discussed in clinical36–38 and epidemiology journals.39 Epidemiology published an editorial5 accompanied by five commentaries,40–44 a follow-on commentary,45 and a follow-on to the follow-on.46 In addition, to answer concerns that many epidemiology studies present analyses of hypotheses that were not preplanned,34,47,48 the editors compared three years of research papers that appeared in Epidemiology with the publicly available descriptions of the specific aims of the grants that authors had cited as supporting the work presented in their papers.49 Among the articles with informative funding information, the published result was clearly or possibly among the funded aims for nearly two-thirds. We were reassured that most of the evaluable published results related to a priori hypotheses of sufficient merit that they were selected for funding support. This self-study resulted in changes to the way we ask authors to describe funding support.49

Advocates for compulsory preregistration of nonrandomized research often cite the experience with randomized trials as a model. We have addressed this faulty analogy before.5,46 The original motivation for registration of trials was to assure that null trial results would be known even if never published, so that new trial participants would not be randomized to a treatment already shown to be ineffective.50,51 Secondary benefits pertaining to scientific rigor were anticipated, but the focus was on assuring that all trial results came to light, so as to preserve the ethical duty to do no harm to trial participants. Fifteen years of evidence has now accumulated, allowing a robust examination of whether the objectives of mandatory preregistration of clinical trials have been achieved. In one review, nearly one-third of trials published in the six highest-impact general medicine journals were out of compliance with the International Committee of Medical Journal Editors’ (ICMJE) policy on prospective trial registration, including nearly 10% that were registered after primary endpoint ascertainment could have taken place.52 In a second review, less than half of published trials were adequately registered, and even among the half in compliance, nearly one-third showed some evidence of discrepancies between the outcomes registered and the outcomes published, often favoring selection of outcomes with a statistically significant result.53 In a Cochrane meta-review of 16 reviews (median 54 trials per review), discrepancies between protocols or trial registry entries and trial reports were described as “common.”54 Similar evidence has already accumulated in a review of registration of nonrandomized studies.55 One of the seminal papers regarding the ICMJE requirement of trial registration listed ethical and scientific rationales for compulsory registration.51 The ethical rationales have largely been achieved, but the expected secondary benefits of increased scientific rigor have not materialized. It is disingenuous to predict34 that compulsory preregistration of nonrandomized research studies will achieve the gains in scientific rigor that compulsory preregistration of randomized trials has failed to produce.

Support for the idea of compulsory preregistration of observational epidemiology studies declined after 2009, only to reemerge as one of the eight scored elements of the TOP Guidelines.1 I am wary of the drive to make preregistration of nonrandomized epidemiologic research compulsory or normative. It is easy to imagine legislation or rulemaking that follows the HONEST Act model described above, such that epidemiologic research that was not preregistered would be summarily excluded from the evidence base. The next step assuredly would be to compare the published and preregistered research with its protocol, leading to arguments against a paper’s research being used to inform public policy because of even minor deviations from the protocol. The beginnings of this strategy to diminish the influence of epidemiology on environmental health have already been made evident.56 Similarly, compulsory public registration of community-based participatory research of environmental hazards would alert industries to the study, giving them the opportunity to interfere in its implementation. This strategy has also already been observed.57


Both as Editor-in-Chief of Epidemiology and as a member of the scientific community, I maintain grave concerns about the TOP Guidelines, the method by which they were developed, and the potential for influence by industry interests. The evidence is clear that reliance on statistical significance to classify study results degrades the evaluation of reproducibility. It also dramatically affects the appearance of research results, which seem poorly reproducible when in fact the distribution of results is exactly as expected given the strong initial selection pressures favoring statistically significant results. This pattern was clear in the foundational research paper informing the TOP Guidelines, and was noted by its authors, yet the TOP Guidelines nowhere discourage null hypothesis significance testing. To the contrary, the Executive Director of the center that promulgates the TOP Guidelines has advocated reducing the acceptable type 1 error rate, without due consideration of the inevitable distortions of public health priorities that would ensue. The TOP Guidelines also advocate for compulsory or normative preregistration of nonrandomized epidemiologic research, despite the fact that both the workshop report that launched the dialogue35 and the Executive Director’s own writing34 recognize that there is insufficient evidence that compulsory preregistration improves the reproducibility of scientific research. In fact, the substantial evidence base developed from meta-studies of compulsory preregistration of randomized trials suggests that it will not. A requirement for preregistration, or a normative expectation of it, may, however, be used to diminish the influence of epidemiologic research on public health policy, in exactly the way our advocacy for open data has been used against us.

For these reasons, I will continue to lead Epidemiology with the goal of achieving its mission, which is transparently stated on our web site: “Epidemiology’s mission is to publish high-quality epidemiologic research and methodologic innovations that educate and inform readers, influence policy, and improve public health and health care.” Nothing in this mission or our related goals pertains to optimizing our TOP score (nor our Journal Impact Factor, which we have likewise disavowed58). Score us however you like; we will continue to measure the journal’s success by how well we strive to achieve our mission and by whether authors feel some thrill when they get the news that their work will appear on our pages.


I am grateful for the review and comments on earlier drafts provided by Thomas P. Ahern, Lindsay J. Collin, Miguel A. Hernán, Jay S. Kaufman, Richard F. MacLehose, Maria C. Mirabelli, Sunni L. Mumford, Kenneth J. Rothman, Andreas Stang, Sonja A. Swanson, Jan P. Vandenbroucke, and Allen J. Wilcox.


1. Nosek BA, Alter G, Banks GC, et al. Promoting an open research culture. Science. 2015;348:1422–1425.
2. Lash TL. Declining the transparency and openness promotion guidelines. Epidemiology. 2015;26:779–780.
3. Rothman KJ, Poole C. Some guidelines on guidelines: they should come with expiration dates. Epidemiology. 2007;18:794–796.
4. Editors. Probing STROBE. Epidemiology. 2007;18:789–790.
5. Editors. The registration of observational studies--when metaphors go bad. Epidemiology. 2010;21:607–609.
6. King NB, Kaufman JS. More author disclosure: solution or absolution? Epidemiology. 2012;23:777–779.
7. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349:aac4716.
8. Gelman A, Stern H. The difference between “significant” and “not significant” is not itself statistically significant. Am Stat. 2006;60:328–331.
9. Greenland S, Senn SJ, Rothman KJ, et al. Statistical tests, P values, confidence intervals, and power: a guide to misinterpretations. Eur J Epidemiol. 2016;31:337–350.
10. Lang JM, Rothman KJ, Cann CI. That confounded P-value. Epidemiology. 1998;9:7–8.
11. Rothman KJ, Lanes S, Robins J. Casual inference. Epidemiology. 1993;4:555–556.
12. Seliger C, Meier CR, Becker C, et al. Statin use and risk of glioma: population-based case-control analysis. Eur J Epidemiol. 2016;31:947–952.
13. Brown HK, Ray JG, Wilton AS, Lunsky Y, Gomes T, Vigod SN. Association between serotonergic antidepressant use during pregnancy and autism spectrum disorder in children. JAMA. 2017;317:1544–1552.
14. Lash TL. The harm done to reproducibility by the culture of null hypothesis significance testing. Am J Epidemiol. 2017;186:627–635.
15. Stang A, Deckert M, Poole C, Rothman KJ. Statistical inference in abstracts of major medical and epidemiology journals 1975-2014: a systematic review. Eur J Epidemiol. 2017;32:21–29.
16. Benjamin DJ, Berger JO, Johannesson M, et al. Redefine statistical significance. Nat Hum Behav. 2018;2:6–10.
17. Rothman KJ, Greenland S. Planning study size based on precision rather than power. Epidemiology. 2018;29:599–603.
18. Schwartz J; International Society for Environmental Epidemiology. Science, politics, and health: the environmental protection agency at the threshold. Epidemiology. 2017;28:316–319.
19. McCarthy G. Scientists must shape our future as they have shaped our past: perspective of the former US EPA administrator. Epidemiology. 2018;29:e1–e4.
20. Wing S. Social responsibility and research ethics in community-driven studies of industrialized hog production. Environ Health Perspect. 2002;110:437–444.
21. Kogevinas M, Takaro T. Sponsorship by big oil, like the tobacco industry, should be banned by the research community. Epidemiology. 2019;30:615–616.
22. Lash TL, Collin LJ, Van Dyke ME. The replication crisis in epidemiology: snowball, snow job, or winter solstice? Curr Epidemiol Rep. 2018;5:175–183.
23. Apple S. The Young Billionaire Behind the War on Bad Science. Wired. Available at: Accessed 29 April 2021.
24. Massing M. How to Cover the One Percent | by Michael Massing | The New York Review of Books. Available at: Accessed 22 June 2021.
25. Michaels D, Burke T. The dishonest HONEST Act. Science. 2017;356:989.
26. Environmental & Energy Law Program. EPA is Planning to Limit the Science It Considers. Harvard Law School. Published April 4, 2018. Available at: Accessed 11 May 2021.
27. Environmental & Energy Law Program. Legal Shortcomings in EPA’s So-Called “Secret Science” Proposed Rule. Harvard Law School. Published May 1, 2018. Available at: Accessed 11 May 2021.
28. Environmental & Energy Law Program. Comments by Public Health Experts on the Proposed Rule. Harvard Law School. Published October 22, 2018. Available at: Accessed 11 May 2021.
29. Environmental & Energy Law Program. The Downfall of the “Secret Science” Rule, and What It Means for Biden’s Environmental Agenda. Harvard Law School. Published March 5, 2021. Available at: Accessed 11 May 2021.
30. Niiler E. The EPA’s Anti-Science ‘Transparency’ Rule Has a Long History. Wired. Available at: Accessed 11 May 2021.
31. Gewin V. Data sharing: an open mind on open data. Nature. 2016;529:117–119.
32. Yong E. The Transparency Bills That Would Gut the EPA. The Atlantic. Published March 15, 2017. Available at: Accessed 11 May 2021.
33. Yong E. How Trump Could Wage a War on Scientific Expertise. The Atlantic. Published December 2, 2016. Available at: Accessed 11 May 2021.
34. Nosek BA, Ebersole CR, DeHaven AC, Mellor DT. The preregistration revolution. Proc Natl Acad Sci U S A. 2018;115:2600–2606.
35. Ecetoc. Workshop Report 18—Enhancement of the Scientific Process and Transparency of Observational Epidemiology Studies. Available at: Accessed 11 May 2021.
36. The Lancet. Should protocols for observational research be registered? Lancet. 2010;375:348.
37. Vandenbroucke JP. Registering observational research: second thoughts. Lancet. 2010;375:982–983.
38. Sørensen HT, Rothman KJ. The prognosis for research. BMJ. 2010;340:c703.
39. Rushton L. Should protocols for observational research be registered? Occup Environ Med. 2011;68:84–86.
40. Samet JM. To register or not to register. Epidemiology. 2010;21:610–611.
41. Lash TL. Preregistration of study protocols is unlikely to improve the yield from our science, but other strategies might. Epidemiology. 2010;21:612–613.
42. Takkouche B, Norman G. Meta-analysis protocol registration: sed quis custodiet ipsos custodes? [but who will guard the guardians?]. Epidemiology. 2010;21:614–615.
43. Poole C. A vision of accessible epidemiology. Epidemiology. 2010;21:616–618.
44. Vandenbroucke JP. Preregistration of epidemiologic studies: an ill-founded mix of ideas. Epidemiology. 2010;21:619–620.
45. Bracken MB. Preregistration of epidemiology protocols: a commentary in support. Epidemiology. 2011;22:135–137.
46. Lash TL, Vandenbroucke JP. Should preregistration of epidemiologic study protocols become compulsory? Reflections and a counterproposal. Epidemiology. 2012;23:184–188.
47. Kerr NL. HARKing: hypothesizing after the results are known. Pers Soc Psychol Rev. 1998;2:196–217.
48. Munafò MR, Nosek BA, Bishop DVM, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1:0021.
49. Lash TL, Kaufman JS, Hernán MA. Correspondence between results and aims of funding support in EPIDEMIOLOGY articles. Epidemiology. 2018;29:1–4.
50. De Angelis C, Drazen JM, Frizelle FA, et al.; International Committee of Medical Journal Editors. Clinical trial registration: a statement from the International Committee of Medical Journal Editors. N Engl J Med. 2004;351:1250–1251.
51. Krleza-Jerić K, Chan AW, Dickersin K, Sim I, Grimshaw J, Gluud C. Principles for international registration of protocol information and results from human trials of health related interventions: Ottawa statement (part 1). BMJ. 2005;330:956–958.
52. Dal-Ré R, Ross JS, Marušić A. Compliance with prospective trial registration guidance remained low in high-impact journals and has implications for primary end point reporting. J Clin Epidemiol. 2016;75:100–107.
53. Mathieu S, Boutron I, Moher D, Altman DG, Ravaud P. Comparison of registered and published primary outcomes in randomized controlled trials. JAMA. 2009;302:977–984.
54. Dwan K, Altman DG, Cresswell L, Blundell M, Gamble CL, Williamson PR. Comparison of protocols and registry entries to published reports for randomised controlled trials. Cochrane Database Syst Rev. 2011:MR000031. doi: 10.1002/14651858.MR000031.pub2
55. Boccia S, Rothman KJ, Panic N, et al. Registration practices for observational studies on ClinicalTrials.gov indicated low adherence. J Clin Epidemiol. 2016;70:176–182.
56. Swaen GMH, Urlings MJE, Zeegers MP. Outcome reporting bias in observational epidemiology studies on phthalates. Ann Epidemiol. 2016;26:597–599.e4.
57. Wing S, Horton RA, Muhammad N, Grant GR, Tajik M, Thu K. Integrating epidemiology, education, and organizing for environmental justice: community health effects of industrial hog operations. Am J Public Health. 2008;98:1390–1397.
58. Hernán MA, Wilcox AJ. We are number one but nobody cares-that’s good. Epidemiology. 2012;23:509.
Copyright © 2021 Wolters Kluwer Health, Inc. All rights reserved.