Editorial

Preregistration and Open Science Practices in Hearing Science and Audiology

The Time Has Come

Svirsky, Mario A. PhD

doi: 10.1097/AUD.0000000000000817

With this issue, Ear and Hearing announces its decision to offer and promote the use of open science practices (Pre-registration, Open Data, and Open Materials) in an effort to document and increase scientific rigor and transparency in our field.

This initiative is partly a response to what has been called the “replication crisis.” Ioannidis (2005) published an influential paper entitled “Why most published research findings are false.” The resulting public discussion led to closer scrutiny of results that had been accepted as true across many disciplines, and the scrutiny was particularly acute in social psychology, a field that has gathered a huge amount of media coverage and has generated some of the most widely viewed TED Talks. In fact, many high-profile results popularized by the press have turned out to be much weaker than initially thought, or impossible to replicate. Two examples that come to mind are the “Mozart effect” and “power posing.” The former was popularized as “listening to Mozart makes you smarter,” based on a Nature paper published in 1993 claiming that listening to music by Mozart improved spatial reasoning (Rauscher et al. 1993). Subsequent research cast serious doubt on the original claim and the associated effect size (Pietschnig et al. 2010). The concept of power posing received significant public exposure in “Your body language may shape who you are,” the second most popular TED Talk ever (as of this writing; Cuddy 2012), and in a book (“Presence”; Cuddy 2015) that made the New York Times bestseller list. The initial study claimed that assuming “powerful” postures leads to positive behavioral and hormonal changes (an increase in testosterone and a decrease in cortisol), but follow-up studies failed to replicate the hormonal results and cast doubt on the behavioral ones, and the lead author of the original study (Carney et al. 2010) renounced the theory.

Faced with this unsettling situation, the Center for Open Science and its co-founder, Brian Nosek, started the Reproducibility Project: Psychology in 2011. This was a coordinated attempt by 270 contributing authors to carefully replicate 100 published psychological studies, 97 of which had reported significant findings. The depressing results of this collaboration were published in August 2015 (Nosek et al. 2015): only 36 of the 97 studies with originally significant findings replicated, and those that did replicate typically showed smaller effects than the original studies.

Why is it that seemingly reliable studies supported by traditional null hypothesis significance testing fail to replicate so often? Why do false-positive results occur with such abnormally high frequency? A possible answer can be found in Simmons et al. (2011). The authors “demonstrate how unacceptably easy it is to accumulate (and report) statistically significant evidence for a false hypothesis” by using practices that are common in many fields of science, including our own. These practices have been variously called “p-hacking,” “HARKing” (Hypothesizing After Results are Known), “the garden of forking paths,” or simply “QRPs” (questionable research practices; John et al. 2012). These terms refer to research practices, short of outright data falsification, that nevertheless result in inflated effects or false-positive findings. Examples include selective reporting of dependent variables; deciding whether to collect more data after checking whether the results were already significant with a given N; selective reporting of the studies (or analyses) that “worked”; and reporting an unexpected finding as having been predicted from the start. Simmons et al. (2011) show that the combination of four specific practices that are still commonly accepted in some fields results in a 60.7% likelihood of obtaining a false-positive result at a significance threshold of 0.05, and 21.5% at a threshold of 0.01. By definition, of course, when the null hypothesis is true, the likelihood of a false-positive result should be 5% at the 0.05 threshold and 1% at the 0.01 threshold. Although vigorous debate about the replication crisis has centered on the field of psychology, perhaps the reason the debate still has not reached the shores of our field is that we have not looked closely enough.
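
To make the inflation concrete, the short simulation below (a hedged sketch of our own, not part of Simmons et al.’s materials) applies two of the practices just described, optional stopping and selective reporting among two dependent variables, to data in which the null hypothesis is true by construction. The group sizes, number of simulated studies, and variable names are illustrative choices.

```python
# A minimal Monte Carlo sketch of two questionable research practices:
# optional stopping (peek at n=20 per group, then add 10 more) and selective
# reporting among two dependent variables. The null hypothesis is true in
# every simulated study, so any "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
N_SIMS = 5000     # number of simulated studies (illustrative choice)
ALPHA = 0.05      # nominal significance threshold

def one_study(rng):
    a = rng.standard_normal((30, 2))   # group A: 30 subjects, 2 DVs, true effect = 0
    b = rng.standard_normal((30, 2))   # group B: same
    for n in (20, 30):                 # optional stopping: test at n=20, then n=30
        for dv in (0, 1):              # report whichever DV happens to "work"
            if stats.ttest_ind(a[:n, dv], b[:n, dv]).pvalue < ALPHA:
                return True            # declare success at the first significant test
    return False

false_positives = sum(one_study(rng) for _ in range(N_SIMS))
print(f"nominal alpha: {ALPHA:.2f}, "
      f"observed false-positive rate: {false_positives / N_SIMS:.3f}")
```

Running this yields a false-positive rate well above the nominal 5%, in the same spirit as the figures reported by Simmons et al. (2011).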

What can be done? An important step toward enhancing scientific rigor and reproducibility, and minimizing the likelihood of spurious false-positive results, lies in the use of open science practices. Ear and Hearing encourages the use of such practices and will recognize them by allowing authors to apply for Open Science Badges. Badges do not define good practice, but they certify that a particular practice has been followed. Applying for badges is voluntary and is meant as an incentive to adopt open science practices. Icons for each badge type (Pre-registration, Open Data, Open Materials) will be published with the article and in the table of contents. The corresponding information (registration site link, URL, DOI, etc.) will be published along with the badge in the article.

The Pre-registered Badge requires researchers to write a detailed description of the study plan, which is saved in a time-stamped, uneditable archive that can be shared with reviewers, editors, and other researchers. This is usually done before data collection or, in some cases (and with proper justification), before data analysis. The plan typically specifies, among other points, the outcome and predictor variables to be used, the planned sample size, and the analyses that will be conducted. For a complete description, see the section “Preregistered+Analysis Plan” at the Center for Open Science page (Center for Open Science 2019; note that the document at this URL also addresses Open Data and Open Materials, which we discuss later). Pre-registration does not preclude additional analyses that were not anticipated a priori. It is simply a way for authors to document (and for readers to understand) which analyses were preplanned and which were exploratory. Ear and Hearing does not mandate any particular preregistration framework. Accepted frameworks include the very comprehensive ClinicalTrials.gov, the minimalist aspredicted.org (which requires only simple answers to nine questions), and others such as the Open Science Framework.
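
As an illustration of one item in such a plan, the planned sample size can be justified with an a priori power analysis recorded in the preregistration before any data are collected. The sketch below is a hypothetical example: the effect size, power target, and use of the statsmodels library are our illustrative assumptions, not a journal requirement.

```python
# Hypothetical sketch: computing a sample size to preregister for a
# two-group comparison. The effect size and power target are assumed values.
import math
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5,          # smallest effect of interest (Cohen's d), assumed
    alpha=0.05,               # planned significance threshold
    power=0.80,               # desired probability of detecting the effect
    alternative="two-sided",
)
print(f"planned sample size: {math.ceil(n_per_group)} per group")  # ~64
```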

The Open Data badge requires two disclosure items: a URL, DOI, or other permanent path for accessing the data in a public, open-access repository, and the authors’ certification that the provided link includes sufficient information for an independent researcher to reproduce the reported results. For example, in the case of behavioral data, the repository may include all individual results or even each response to each stimulus token from each subject. Physiological studies may include raw data, such as individual traces from each recording electrode in response to each stimulus in an evoked potentials experiment. The posted data set should include a data dictionary or codebook describing the data in sufficient detail for independent researchers to reproduce the reported analyses and results.
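
For instance, a deposited behavioral data set might pair the data file with a machine-readable codebook. The sketch below is only an illustration: the file names, variables, and JSON codebook format are hypothetical choices, not a format mandated by Ear and Hearing.

```python
# Illustrative sketch: depositing a tidy dataset together with a machine-readable
# codebook, so an independent researcher can interpret every column.
import csv
import json

codebook = {
    "subject_id": "Anonymized participant identifier (string)",
    "stimulus_db": "Stimulus presentation level in dB SPL (float)",
    "percent_correct": "Word recognition score, 0-100 (float)",
}

rows = [
    {"subject_id": "S01", "stimulus_db": 65.0, "percent_correct": 82.0},
    {"subject_id": "S02", "stimulus_db": 65.0, "percent_correct": 74.0},
]

with open("speech_scores.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=list(codebook))
    writer.writeheader()
    writer.writerows(rows)

with open("speech_scores_codebook.json", "w") as f:
    json.dump(codebook, f, indent=2)  # deposit alongside the CSV in the repository
```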

The Open Materials badge also requires two disclosure items, similar to those required for the Open Data badge. The difference is that this badge refers to materials other than data. Typical examples include computer code implementing an algorithm used in a study, or a script implementing a particular statistical analysis in R or SPSS syntax. The posted materials should be sufficient for an independent researcher to reproduce the reported methodology.
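
A posted analysis script, in whatever language the authors used, would similarly be self-contained and tied to the deposited data. As a hypothetical Python illustration (the file name continues the open-data sketch above, and the one-sample test against a 70% criterion is an invented example, not a recommended analysis):

```python
# Illustrative sketch of a self-contained analysis script of the kind that
# could be posted for an Open Materials badge: it reads the deposited data
# and reruns the reported test, so others can reproduce the result exactly.
import csv

from scipy import stats

with open("speech_scores.csv") as f:           # file from the open-data sketch
    rows = list(csv.DictReader(f))

scores = [float(r["percent_correct"]) for r in rows]
result = stats.ttest_1samp(scores, popmean=70.0)  # hypothetical 70% criterion
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
```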

Please note that Ear and Hearing is rewarding, not requiring, these three open practices (preregistration, open data, and open materials) with badges. At this time, we are not recognizing other open practices, such as registered reports. Registered reports are a type of journal article in which the proposed methods and analyses are not only preregistered but also peer reviewed before the research is conducted. We deem registered reports worthwhile because, among other reasons, they make acceptance for publication independent of whether the results are statistically significant, thus helping reduce researcher bias and disincentivizing any search for artificially low p values. The reason we do not plan to recognize registered reports at this time, despite the value we see in them, is that we are opting for a more gradual approach to the introduction of open science practices, starting with the less logistically demanding practices described in this editorial.

It is important to note again that the open science practices described earlier will not be mandatory. It is perfectly acceptable for authors not to pursue a given badge, or any of them. The new policy will simply allow investigators to document that a given analysis was indeed planned and/or to share data and materials. Authors will have an opportunity to request Pre-registration, Open Data, and Open Materials badges during manuscript submission. Reviewers will have the opportunity to view the preregistration and/or open data/materials sites during peer review, but the decision to award badges will be made by the Editorial Board, which uses the general criteria for awarding badges outlined on the OSF website (Center for Open Science 2019). The badges represent a declaration by the authors that a given open science practice has been followed, not a guarantee from Ear and Hearing that this is actually the case. Editors and reviewers will check that the posted materials exist and may take them into account during the review process.

The new policy aims to increase the openness and accessibility of scientific research data and materials. The adoption of badges is already having an effect on some scientific fields, with psychological science leading the way. For example, Nosek and Lindsay (2018) point out that the number of registrations on the popular OSF open platform more than doubled, on average, every year between 2012 and 2017, and preregistration is even becoming the norm in some subfields of psychological science (Nosek 2019). For more information about Ear and Hearing Open Practice Badges, please visit the journal website (Ear and Hearing).

It is hoped that the open science practice revolution will lead to a cultural shift among scientists toward better, more open, and less misleading reporting of experimental studies.

Mario A. Svirsky, PhD

Editor-in-Chief, Emeritus

REFERENCES

Carney D. R., Cuddy A. J., Yap A. J. (2010). Power posing: Brief nonverbal displays affect neuroendocrine levels and risk tolerance. Psychol Sci, 21, 1363–1368.
Center for Open Science. (2019). Preregistered+Analysis Plan. Retrieved May 8, 2019 from https://osf.io/tvyxz/wiki/1.%20View%20the%20Badges/.
Cuddy A. (2012). Your body language may shape who you are [Video file]. Retrieved May 28, 2019 from https://www.ted.com/talks/amy_cuddy_your_body_language_shapes_who_you_are.
Cuddy A. (2015). Presence: Bringing Your Boldest Self to Your Biggest Challenges. Hachette UK.
Ioannidis J. P. (2005). Why most published research findings are false. PLoS Med, 2, e124.
John L. K., Loewenstein G., Prelec D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci, 23, 524–532.
Nosek B. A. (2019). The Rise of Open Science in Psychology, A Preliminary Report. Retrieved June 7, 2019 from https://cos.io/blog/rise-open-science-psychology-preliminary-report/.
Nosek B. A., Lindsay D. S. (2018). Preregistration becoming the norm in psychological science. APS Observer, 31, 19–21.
Nosek B. A., Aarts A. A., Anderson C. J., et al.; Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349, aac4716.
Pietschnig J., Voracek M., Formann A. K. (2010). Mozart effect–Shmozart effect: A meta-analysis. Intelligence, 38, 314–323.
Rauscher F. H., Shaw G. L., Ky K. N. (1993). Music and spatial task performance. Nature, 365, 611.
Simmons J. P., Nelson L. D., Simonsohn U. (2011). False-positive psychology: Undisclosed flexibility in data collection and analysis allows presenting anything as significant. Psychol Sci, 22, 1359–1366.
Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.