Open Science in Health Psychology and Behavioral Medicine: A Statement From the Behavioral Medicine Research Council

Segerstrom, Suzanne C. PhD, MPH; Diefenbach, Michael A. PhD; Hamilton, Kyra PhD; O’Connor, Daryl B. PhD; Tomiyama, A. Janet PhD;  with the Behavioral Medicine Research Council

Psychosomatic Medicine 85(4):p 298-307, May 2023. | DOI: 10.1097/PSY.0000000000001186


Summary of Recommendations of the Behavioral Medicine Research Council


Preregistration

The BMRC strongly recommends the practice of preregistration when engaging in hypothesis-driven research, with transparent reporting of deviations from preregistered plans. The BMRC further encourages the inclusion of sample diversity considerations in preregistration.

Registered Reports

The BMRC recognizes the value for journals in the area of health psychology and behavioral medicine of introducing Registered Reports as a new article format.

Preprints and Postprints

The BMRC views peer-reviewed, accepted science as the best form of evidence and recommends a close evaluation of the role of preprints in health psychology and behavioral medicine research, including a comparison with the use of preprints in physics and economics.

Open Research

The BMRC encourages open research practices, at a minimum those required by funding entities and publications. In practice, research materials should be as open as possible and as closed as necessary, respecting privacy, laws, and cultural knowledge.

Civility, Collegiality, and Collaboration

The BMRC urges researchers to be tolerant and to work together in a collaborative, collegial, and civil manner, acknowledging scientific and methodological differences and similarities.


Equity

The BMRC recognizes both the advantages and disadvantages of Open Science for achieving equity in health psychology and behavioral medicine. A more equitable research environment is needed to advance equitable open science: open access publication costs and institutional recognition of open science practices may inadvertently disadvantage underrepresented scientists.


The present article resulted from a dialogue among representatives of the Behavioral Medicine Research Council (BMRC; representing four large international organizations in behavioral medicine and health psychology), focusing on the need to communicate our science openly and equitably while maintaining rigorous research standards. The need for this dialogue arose from multiple developments that happened over the past decade: First, legislative actions require data generated through federal funding to be made available if requested by other researchers. Second, the scientific field was confronted with high-profile incidents in which studies could not be replicated, including cases in which the original data had been fabricated or falsified (1–3). Third, questions of equity in data quality and data access have become increasingly prominent. Fourth, the introduction of new and innovative publishing recommendations and formats (e.g., preregistration and registered reports) has prompted the need for greater transparency. The aim of the present BMRC statement on Open Science is threefold: (a) to provide a snapshot of Open Science practices in three of the most prominent journals in our field; (b) to critically evaluate the most common Open Science practices for our field; and (c) to provide recommendations for the adoption of such practices, including preregistration, registered reports, preprints and postprints, and open research.


As members of the research community, we accept the need to publish the results of our research efforts, and we are often reminded that if it is not published, “it has not happened.” Yet the traditional publication system has been criticized for not providing equitable access to publicly funded research results (4). Journals also tend to favor positive findings over null or contradictory results (the well-known “file-drawer problem”) (5). Additionally, non-registered research leaves room for undisclosed post hoc analytic decisions and may contribute to the reproducibility problem through so-called “questionable research practices” (see below). For example, one study found that 57% of studies published before 2000 (when registration for large clinical trials was introduced) reported beneficial intervention effects on the primary outcome, compared with only 8% of trials published after 2000 (6).

Since the publication of the Open Science Collaboration’s 2015 paper (7) estimating the reproducibility of psychological science, there have been many important developments to address these issues. The research community has suggested several practices, together known as “Open Science.” Open Science includes some combination of registering and publishing study protocols (including hypotheses, primary and secondary outcome variables, and analysis plans) and making available preprints of manuscripts, study materials, de-identified data sets, and analytic codes. Open Science is important for health psychology and behavioral medicine. Research in this field has the potential to profoundly impact individual, community, and population health and well-being, as well as healthcare practices and policies. The potential societal impact of our work underscores the importance of ensuring experimental rigor, transparency, reproducibility, and equitable access to advance our science.

Uptake of Open Science practices has been steady, and there is clear evidence of a steep upward trajectory (8). Progress has accelerated since leading funders signed on to improving reproducibility (9) and journals and publishers began to embrace the Transparency and Openness Promotion (TOP) guidelines (see Box 1), preregistration, and new article formats such as Registered Reports. For example, Registered Reports were first proposed in 2012 by the journals Cortex and Perspectives on Psychological Science and then launched in these journals (along with Social Psychology) in 2013 (10). More than 300 journals now offer the Registered Reports format, across disciplines including psychology and medicine. Despite these numerous developments and advances, there remains much room for improvement.

Box 1. Open Science Resources for Researchers

Reporting Guidelines

American Psychological Association Reporting Guidelines:

EQUATOR Network:

Preferred Reporting Items for Systematic Reviews and Meta-analyses (PRISMA):

Consolidated Standards of Reporting Trials (CONSORT):

Transparency and Openness Promotion (TOP) Guidelines:

  • See Current Signatories tab for participating journals


Systematic reviews and meta-analyses:

Clinical trials:

Registered reports (after in principle acceptance):

Preregistration templates:

Preprints and Postprints

Electronic preprints and postprints:

Most journals’ postprint policies:

SPARC author addendum:

American Psychological Association policy:

Australian Research Council policy:

Australian National Health and Medical Research Council policy:

Open Research

British Psychological Society policy:

National Institutes of Health Policy for Data Management and Sharing:

FAIR Principles:

About Creative Commons licenses:

Generalist open science repositories (all assign DOIs):

Synthetic databases:

Videos, Primers, and How-to Guides

OSF (preregistration):

OSF (how-to guides):

UK Reproducibility Network:

Synthetic databases:

Frequency of Open Science Practices in Annals of Behavioral Medicine, Health Psychology, and Psychosomatic Medicine, 2018–2020

As a starting point, we examined Open Science practices in the primary journals of the BMRC’s constituent organizations and how patterns and trends in transparency and openness have changed (data and code available at ). In an analysis of Open Science practices in Annals of Behavioral Medicine, Health Psychology, and Psychosomatic Medicine, coders indicated for each empirical study or review published in 2018, 2019, and 2020 whether it was preregistered (the study protocol was predefined in its entirety or in part); was a Registered Report (acceptance in principle was based on review of the introduction and methods only, before data collection and/or analysis); made a statement on protocol, data, or material sharing; or was published gold open access (for further definitions, see the Open Research Coding Checklist in the Supplemental Materials) (11). We sampled 3 years to provide a sufficient overview of the frequency of Open Science practices. Overall, Open Science practices were infrequent (Table 1), except for the relatively high proportion of articles published gold open access in Annals of Behavioral Medicine and Health Psychology (48.3% and 51.1%, respectively). This result is consistent with an analysis of reporting practices in 2018 in these three journals plus the American Journal of Preventive Medicine, which found low occurrence of elements such as explicit description of analyses as primary or secondary (16% of 162 sampled papers) and statements of if and when studies were registered (13.6%) (12).

TABLE 1 - Open Science Practices in Behavioral Medicine Research Council Society Journals

Item (coded Yes) | ABM, % | HP, % | PM, % | 2018, % | 2019, % | 2020, %
1. States whether or not the study (or some aspect of it) was preregistered | 23.2 | 10.4 | 14.4 | 18.7 | 11.4 | 16.9
2. Is a Registered Report | 0 | 3.8 | 0 | 0 | 1.7 | 3.2
3. Links to an accessible protocol | 10.5 | 10.1 | 11.9 | 17.1 | 11.1 | 4.6
4. States whether or not data are available | 15.4 | 6.8 | 5 | 2.0 | 14.8 | 9.9
5. States whether study materials are available (on a free-to-access repository or similar) or provides them in the paper or supplementary materials | 28.9 | 21.9 | 11.5 | 15.8 | 28.6 | 19.4
6. Is gold open access | 48.3 | 51.1 | 8.5 | 42.9 | 53.2 | 23.7

ABM, Annals of Behavioral Medicine; HP, Health Psychology; PM, Psychosomatic Medicine. The ABM/HP/PM columns aggregate by journal; the 2018–2020 columns aggregate by year.

No clear pattern emerged from 2018 to 2020 (Table 1); if anything, there was evidence of reductions in some practices over time. These observations are difficult to reconcile with journals’ and funders’ increasingly stringent reporting and registration requirements. However, study registration did increase from 2008 to 2018 (12). Annals of Behavioral Medicine, Health Psychology, and Translational Behavioral Medicine are signatories to the TOP Guidelines (13–15) (see Box 1), which establish standards for data citation; data, materials, and code transparency; design and analysis; preregistration; and replication. Psychosomatic Medicine will become a signatory in 2023 (16). The new instructions to authors for Annals of Behavioral Medicine and Health Psychology emphasize open science practices in accordance with their TOP guidelines (14,17). Journals can customize whether TOP guidelines are required or optional; however, increased adherence to the TOP guidelines will likely be key to improving uptake of open science practices in the future.

These findings mirror psychology at large and also echo a recent pulse survey conducted by the Society of Behavioral Medicine examining the work presented at the 2019 annual meeting of the society (15,18). Nearly three-quarters of all presentations (e.g., papers, posters, and symposia) did not report using any Open Science practice. Taken together, these findings should represent a call to action for health psychology and behavioral medicine researchers to integrate Open Science practices into research programs and investigate the barriers to uptake (19,20).

Nevertheless, health psychology and behavioral medicine researchers have been early adopters of some key Open Science practices (21). We have been exemplars in preregistering systematic reviews and meta-analyses and following the Preferred Reporting Items for Systematic Reviews and Meta-analyses and the Consolidated Standards of Reporting Trials guidelines (21). Moreover, for many years, perhaps due to our close collaborative relationships with medicine or due to regulatory requirements, it has been standard practice for health psychology and behavioral medicine researchers to preregister randomized controlled trials in open-access trial repositories. As of April 2021, Translational Behavioral Medicine, published by the Society of Behavioral Medicine, has adopted the badge system for open data and open materials, thus providing an incentive for authors to make available their data and study materials to other researchers.


Preregistration

The number of published null results in U.S. National Heart, Lung, and Blood Institute (NHLBI)-funded clinical trials has increased over time, potentially as a result of the introduction of registration for large clinical trials around the year 2000 (22). Specifically, 57% of studies published before 2000 reported beneficial intervention effects on the primary outcome, compared with only 8% of trials published after 2000. The year 2000 thus marked the beginning of a natural experiment in which greater constraints on reporting clinical trial results may have produced greater transparency in reporting standards.

When analyses are conducted transparently, questionable research practices are less likely. Questionable research practices are actions that may not constitute outright scientific fraud but threaten the validity of scientific conclusions (23). They come in many forms but commonly arise from post hoc activities to produce a more easily publishable paper. One example is “p-hacking,” which is the practice of taking actions such as removing observations or adding covariates solely to lower p values below .05 (24). Another example is “HARKing,” which stands for hypothesizing after results are known (25). HARKing violates the fundamental tenet of formulating hypotheses a priori before an experiment is conducted. Yet another example is the overuse of “researcher degrees of freedom,” wherein many statistical tests are run and only those that reach the threshold for statistical significance are reported.
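The inflation produced by such practices can be illustrated with a short simulation (our sketch, not an analysis from the article): two groups are drawn from the same population, so any “significant” difference is a false positive, and reporting the best of several post hoc analysis choices pushes the false-positive rate above the nominal 5%.

```python
# Illustrative simulation of "researcher degrees of freedom" (our sketch, not
# an analysis from the article). Both groups come from the SAME population,
# so every significant result is a false positive.
import math
import random
import statistics

def welch_p(a, b):
    """Approximate two-sided p value for a Welch t test.

    Uses the normal approximation to the t distribution, which is
    adequate for the sample sizes simulated here (n >= 25 per group).
    """
    t = (statistics.mean(a) - statistics.mean(b)) / math.sqrt(
        statistics.variance(a) / len(a) + statistics.variance(b) / len(b)
    )
    return 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))

random.seed(1)
N_SIMS, ALPHA = 2000, 0.05
honest = hacked = 0
for _ in range(N_SIMS):
    a = [random.gauss(0, 1) for _ in range(50)]
    b = [random.gauss(0, 1) for _ in range(50)]
    p_prespecified = welch_p(a, b)                  # the single planned test
    p_candidates = [                                # plus two post hoc variants:
        p_prespecified,
        welch_p(sorted(a)[1:-1], sorted(b)[1:-1]),  # "outliers" removed
        welch_p(a[:25], b[:25]),                    # early "peek" at half the data
    ]
    honest += p_prespecified < ALPHA
    hacked += min(p_candidates) < ALPHA             # report whichever is significant

print(f"False positives, single prespecified test: {honest / N_SIMS:.1%}")
print(f"False positives, best of three post hoc analyses: {hacked / N_SIMS:.1%}")
```

With only three correlated analysis options the inflation here is modest; across dozens of covariates, outlier rules, and subgroup splits it compounds quickly.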

There are numerous benefits of preregistration, not least that registering empirical work helps reduce the use of questionable research practices (26). It is consistent with the requirements of truly confirmatory research while not precluding exploratory research and data analysis (27). Preregistration involves precisely specifying and documenting all the main aspects of an empirical study and registering these in a repository before conducting the work. As a result, researchers give careful and thorough consideration to the study hypotheses, design, data acquisition, and data analysis plans a priori, allowing time to fine-tune all aspects of the research process and ensuring that the research team has an agreed-upon, clear understanding of the proposed research. It also gives the researcher the opportunity to specify which hypotheses are confirmatory and which are exploratory. Presenting exploratory results as confirmatory misrepresents the scientific process and is itself a questionable research practice (28).

One commonly raised objection is that preregistration is not possible for secondary data analysis. Indeed, because the cost of collecting data is high, many of us conduct secondary analyses of large data collection efforts, such as the Health and Retirement Study or the Midlife in the United States Study. However, preregistration before analysis is possible, so Open Science is not at odds with secondary data analysis. Of course, whether or not a secondary data analysis is preregistered, manuscripts should be transparent about whether the research questions were formulated before the analyses were conducted and should specify which analyses were exploratory.

The BMRC strongly recommends the practice of preregistration when engaging in hypothesis-driven research, with transparent reporting of deviations from preregistered plans.

Registered Reports

Null findings are more likely to remain in a researcher’s file drawer and/or are less likely to be accepted for publication (29). This science-wide problem is not limited to health psychology and behavioral medicine. However, as outlined earlier, the impact of publication bias is of greater consequence in our disciplines than many others, therefore making the introduction of Registered Reports a particularly important development for our field.

The Registered Report is a relatively new type of article that aims to increase scientific transparency by implementing peer review before study results are obtained. Once the researcher has developed an idea and designed the study, including details of measures, sample size, inclusion/exclusion criteria, and the data analysis plan, they submit a Stage 1 Registered Report (comprising the Introduction and Method sections) for peer review. The key difference from the standard scientific process is that data collection does not begin until the Stage 1 Registered Report has received an In-Principle Acceptance. Once the data are collected and the study is written up, the full (Stage 2) Registered Report is accepted for publication irrespective of the findings or their statistical significance, conditional on adherence to the approved Stage 1 protocol. In a comparison of 71 published Registered Reports in psychology with a random sample of 152 standard hypothesis-testing studies, 96% of the standard reports had positive results, compared with only 44% of the Registered Reports (6). Yet the quality of Registered Reports has been shown to be higher than that of conventional publications (30). At this time, Annals of Behavioral Medicine and Health Psychology do not offer Registered Reports; Psychosomatic Medicine is introducing the format in 2023.

The BMRC recognizes the value for journals in the area of behavioral medicine and health psychology to introduce Registered Reports as a new article format. Over time, this change is likely to help encourage the uptake of this new approach to conducting science and improve the robustness of our evidence base (31).

Preprints and Postprints

A preprint is a version of a scholarly work, often a complete draft incorporating feedback from coauthors, uploaded to a public server without undergoing formal peer review. A postprint is a version of a scholarly work that is uploaded to a public server after formal peer review (32). The emphasis placed on preprints (and perhaps postprints) is often discipline specific. For example, the preprint server arXiv has been essential to physics, mathematics, and computer science for almost three decades, and EconStor has long been the norm as a disciplinary repository for economics and business. In contrast, the preprint server PsyArXiv was established for the psychological sciences only in 2016 and is still in its infancy.

Preprints and postprints are important to Open Science as they provide open and rapid (in the case of preprints) access to scholarly work. This ensures the work is made publicly available to all interested parties, especially those in developing nations where institutional funds to publish, read, and subscribe to scientific journals are limited. Empirically, journal articles deposited on a preprint/postprint server have sizably higher citation and altmetric counts compared to non-deposited articles (33).

Given the momentum of Open Science and the unprecedented explosion of preprints in COVID-19 times, most psychology journals now permit the posting of preprints. However, most journals do not permit posting the publisher-prepared PDF but may allow posting the original author-formatted document. It is, therefore, important that authors check the journal policy on posting preprints and postprints (see Box 1). It is also possible for authors to negotiate for permission to post their preprints and postprints using tools such as the SPARC Author Addendum (see Box 1).

Preprints and “peer reviewed” published papers represent a continuum in the evolution of a body of work and can be formally linked, ensuring that the “peer reviewed” published paper supersedes the preprint as the version of record that should be cited (34). Best practice is to update the preprint to the author-formatted document with each submission, ensuring that the available preprint is the final version submitted to the journal, and to provide a digital object identifier (DOI) for the published version of record. Some services will automatically link the preprint and published version-of-record DOIs. Conversely, a journal may require that the DOI for the preprint be provided in the version of record. Among the member society journals, Annals of Behavioral Medicine and Psychosomatic Medicine have explicit preprint policies that allow posting to non-commercial (NC) preprint servers and set forth DOI requirements. The American Psychological Association has a policy for its journals (including Health Psychology) that also allows posted preprints, with more stringent rules about copyright and warnings about “manuscripts that have garnered significant media attention as preprints” (see Box 1).

There are further advantages (and disadvantages) to posting preprints (see Table 1 in Ref (35)), and these can be considered from the perspectives of academics and early career researchers (ECRs), funding bodies, and journal publishers. From submission to publication, the process is unpredictable, variable, and often time-consuming, which is particularly problematic for ECRs, who rely on the timely publication of their work to gain recognition for their efforts (36). Depositing a scholarly work on a preprint server makes it publicly available almost immediately and to all, democratizing the flow of information. Authors can also receive feedback from readers beyond the select few who review the work during formal peer review and judge its appropriateness and interest. Moreover, preprints can be revised and updated far more efficiently than corrections can be submitted after publication. Further, a preprint documents the history of the ideas and provides a timestamp establishing priority of scientific discovery and innovation, debunking the myth that preprints lead to scooping (34). Posting preprints can also benefit academics, particularly ECRs, by increasing visibility, facilitating networking, accelerating training, optimizing research design and quality, and developing reviewer skills (36).

From the perspective of funding bodies and journals, there can be substantial benefits from the widespread adoption of preprints (34). Although funders typically ask for “peer-reviewed publications” as evidence of researchers’ work in a field, they often allow researchers to detail “other scientific contributions,” which could include preprints. Preprints provide tangible evidence of researchers’ most recent work. Funding decisions should be based on the merit of the research, and preprints help uphold this principle by allowing independent assessment of researchers’ ideas rather than reliance on journal names or impact factors as proxies for quality (34). Comments on preprints can also make the formal review process more efficient, possibly improving the final manuscript.

Despite the many benefits, some concerns and challenges must be addressed, particularly concerning preprints (see Table 1 in Ref (35)). One concern is that servers will be flooded with weak papers posted only to assert priority. Such papers can spread misleading findings, confuse and distort study conclusions, and attract premature media coverage, which is potentially dangerous given that preprints can shape scientific and global discourse (34), a phenomenon witnessed during the acceleration of preprints around COVID-19 (37,38). Given that preprints have the potential, knowingly or not, to misrepresent knowledge, important empirical questions remain: How can the scientific field ensure that preprints shape knowledge positively and accurately? How can the distinction between preprints and formal “peer-reviewed” papers be upheld, especially for lay readerships, at all stages of the communication process (including conventional media, social media, and policy)? Should preprints and “peer reviewed” papers be embraced as existing in parallel, synergizing and fulfilling complementary functions? Preprints facilitate rapid communication of scientific findings, whereas “peer reviewed” papers provide formal certification processes that promote reliability and reproducibility (34,38).

In a survey of 3,759 researchers across multiple disciplines, Open Science content and independent verification of author claims were rated as essential for judging preprint credibility (39), whereas peer reviews and author information were rated as less critical. Nevertheless, the fundamental principles and practices of peer review should be upheld when assessing the quality of preprints, and papers should adhere to respected article reporting standards (see Box 1).

The BMRC recognizes the potential value of preprints as mechanisms for improving transparency and speeding dissemination. However, the lack of regulation and the potential to produce harm are significant concerns, and we view peer-reviewed, accepted science as the best form of evidence.

The BMRC recommends a close evaluation of the role of preprints in health psychology and behavioral medicine research, including a comparison with the use of preprints in physics and economics.

Engaging in Open Research

Open research involves openly sharing one’s research materials with others, including data, syntax, protocols, experimental stimuli, and so on (8,28,40–43). One guideline for open data comes from the FAIR (findable, accessible, interoperable, and reusable) principles (see Box 1), which will be invoked below (44). However, many researchers have reservations. They have proprietary feelings about data that took significant resources to collect, syntax that took significant expertise and time to write, and stimuli that took significant piloting to refine (20,45–47). Furthermore, making data, code, and other material shareable requires additional work (e.g., creating a codebook, cleaning data to ensure anonymity, labeling data, and commenting on code so it is interpretable) (45). Promoting FAIR data will require planning for and budgeting money and time to prepare the data for open access. Researchers may also be concerned that their research will be “scooped” (20,47,48).

On the other hand, the resources involved in research materials and data are often taxpayer-funded and therefore arguably belong in the public domain. Delivering our findings transparently to the public is a first principle and an ethical obligation of the scientific community, ensuring quality and eschewing gatekeeping. In addition, open research benefits the entire field in that more resources are available to more researchers (20,47). Meta-analysis of individual participant data (sometimes called mega-analysis), facilitated by open research, is beginning to take over from meta-analysis of published results. Individual participant data meta-analyses are better powered and can better address moderators and confounding variables (49).

Less well known are the benefits to the individual researcher. First, the additional work to make data and syntax shareable is an academic work product. It is therefore possible to create a curriculum vitae (CV) line for publicly available datasets and syntax files, particularly when the data are extensive and extensively documented or when the syntax uses innovative and reusable approaches to problems. Many data repositories assign a DOI, making data findable and citable, and journals should mandate citation of data in papers using those data (29) (this mandate is part of the TOP Guidelines). The license associated with the data (see below) can generate citations for the work. Furthermore, data and code sharing are associated with citation advantages for the publication itself (50).

Second, open research creates opportunities to find new collaborators and to publish research with other groups (47,50). Sharing data, for example, does not automatically mean allowing others unfettered use of the data. Many different licenses can be applied to data, from CC0 (public domain) to CC BY (credit given to the creator, using the DOI), with additional elements including NC (non-commercial use only), SA (adaptations must be shared under the same terms), and ND (no derivatives or adaptations of the work permitted) (see Box 1). If a creator is interested in collaborating on shared data, a more restrictive license (e.g., CC BY-NC-ND) prevents new and different uses except in collaboration with the creator. Licenses are part of making data reusable. Synthetic datasets (see below) are another method for finding new collaborators without placing the data themselves in the public domain. Embargo periods are also possible (40).

Third, the process of making research materials shareable often reveals errors before sharing. One would typically want to make sure that a lab member or colleague can understand the materials and reproduce the results, that is, recreate the same results using the same data (or simulated data) and code. Unfortunately, errors are rife in the scientific literature. Too few research results are reproducible from the data (e.g., only 63% of meta-analyses were reproducible to within 0.1 of the reported effect size) (51). Typographical errors sneak in, perhaps contributing to many misreported p values (52). The process of making data and code open is likely to reduce errors, corrections, and even retractions insofar as it motivates reproducibility checks before publication. Psychological Science articles with open data had only 5% major discrepancies on reproduction of measures of central tendency, variation, p values, effect sizes, test statistics, counts/proportions, and degrees of freedom (53). By contrast, articles in psychology published between 1985 and 2013 had 7%–15% major discrepancies in p values alone (52). Open data, and the researchers who publish them, were also perceived as more trustworthy (47).
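A pre-submission reproducibility check of this kind can be automated. The sketch below (hypothetical data and “reported” values, not a tool from the article) recomputes each statistic in a manuscript from the open dataset and flags values inconsistent with rounding; the deliberately transposed SD shows how a typographical error is caught.

```python
# Minimal pre-submission reproducibility check (hypothetical data and
# "reported" values): recompute each manuscript statistic from the open
# dataset and flag any value inconsistent with rounding.
import statistics

stress = [3.1, 2.8, 3.4, 2.9, 3.3]  # hypothetical open dataset

reported = {  # values as they appear in the (hypothetical) manuscript draft
    "stress_mean": 3.10,
    "stress_sd": 0.52,  # deliberate typo: digits transposed from 0.25
}
computed = {
    "stress_mean": statistics.mean(stress),
    "stress_sd": statistics.stdev(stress),
}

results = {}
for name, rep in reported.items():
    # consistent if the recomputed value rounds to the reported one
    results[name] = round(computed[name], 2) == rep
    status = "OK" if results[name] else "MISMATCH"
    print(f"{name}: reported {rep}, recomputed {computed[name]:.4f} -> {status}")
```

The same pattern extends to test statistics and effect sizes; tools such as statcheck apply it at scale to published p values.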

Finally, open research is increasingly a requirement by funders and journals (20). For example, the National Institutes of Health (NIH) requires a Data Management and Sharing Plan in grant applications (see Box 1) and will soon require that “researchers will maximize the appropriate sharing of scientific data, acknowledging certain factors (i.e., legal, ethical, or technical) that may affect the extent to which scientific data are preserved and shared.” The policy defines data as: “The recorded factual material commonly accepted in the scientific community as of sufficient quality to validate and replicate research findings, regardless of whether the data are used to support scholarly publications. Scientific data do not include laboratory notebooks, preliminary analyses, completed case report forms, drafts of scientific papers, plans for future research, peer reviews, communications with colleagues, or physical objects, such as laboratory specimens” (emphasis added).

Making One’s Research Open

Making one’s research open is not difficult, although some elements are more difficult than others, and every step toward more open research is important (see resources in Box 1) (54). Repositories exist for depositing open research materials. Some journals and universities provide data repositories, and there are general and discipline-specific repositories (Box 1). Repositories are important for preventing the broken or deleted links that afflict individual scientists’ or labs’ web pages and for ameliorating the low response rates that occur when data must be requested. Registration and indexing in a searchable resource such as a repository is part of making data findable. Data may be shared as used in a particular publication (NIH will expect this step on publication) or as a complete study dataset (NIH will expect this step at the end of the funding period). The former is essential to assessing a study’s reproducibility; the latter avoids the waste of resources associated with questions left unasked of a dataset. It is also important to share data in a form that will not become technologically inaccessible and that is compatible with different software, and therefore interoperable. For example, .csv files are more robust than .sav (SPSS) files.
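As an illustration of what interoperable sharing can look like in practice (file names, variables, and values below are hypothetical), a dataset and its accompanying codebook can both be exported as plain .csv files that any statistical package can read:

```python
# Hypothetical example: export a small dataset and its codebook as plain
# .csv files, a format that stays readable across software and decades.
import csv

data = [  # hypothetical study records
    {"pid": 1, "condition": "control", "sbp_mmhg": 118},
    {"pid": 2, "condition": "intervention", "sbp_mmhg": 124},
    {"pid": 3, "condition": "control", "sbp_mmhg": 131},
]
codebook = [  # one row per variable, so the data remain interpretable
    {"variable": "pid", "label": "Participant ID", "values": "integer"},
    {"variable": "condition", "label": "Randomized condition", "values": "control | intervention"},
    {"variable": "sbp_mmhg", "label": "Systolic blood pressure", "values": "mmHg"},
]

for filename, rows in [("study_data.csv", data), ("study_codebook.csv", codebook)]:
    with open(filename, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)
    print(f"wrote {filename} ({len(rows)} rows)")
```

Depositing both files together in a repository that assigns a DOI makes the data findable, interpretable, and citable.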

Perhaps the most challenging issue in open data is privacy (55). Many consent forms do not include language about data sharing, but doing so is now best practice (56). In psychological research, consent rates were generally unaffected by such language, and the majority of consented participants in genomic research chose public release of anonymized data (57,58). Qualitatively, participant concerns about open data center on privacy invasion and release to irresponsible third parties (57,59); addressing these concerns during the informed consent process might improve consent rates. Local institutional review boards may also limit open data due to privacy concerns (20). Finally, some data may preclude sharing because culture-specific knowledge is required to use them or because a cultural group does not permit it (48,60). Participants from underrepresented racial or ethnic groups may be less amenable to data sharing than White participants (48). Industry funders and even academic institutions may prohibit open data or raise barriers to it, such as complex approval processes. Sharing should be as open as possible and as closed as necessary to protect privacy and adhere to regulations (e.g., British Psychological Society [BPS] open data policy, see Box 1).

There are often federal guidelines regarding what is considered private health information and how de-identification is achieved (e.g., in the USA, the Safe Harbor method) (61). However, a conservative rule of thumb is that if a person could definitively identify themselves in a dataset, then it is possible that others could also identify them, and further measures may be necessary (see BPS open data policy, Box 1). Many data can be anonymized, but there are still options for open research where that is impossible (55,62). One solution for quantitative data is a synthetic dataset (see Box 1). Synthetic datasets preserve the variances and covariances of the original data but do not include any of the original observations. A synthetic dataset will reproduce the original results given the same analysis. Furthermore, a synthetic dataset allows others to explore additional analyses or test other hypotheses and obtain the same results they would get with the actual data, but it precludes publication of those results: the scientist(s) who obtained the original data must be involved to create a publishable product. Synthetic datasets can be large in both the number of variables and the number of observations and are easily generated using the R package synthpop (62). Commercial solutions for electronic medical record data are also available (63).
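The core idea behind a synthetic dataset can be sketched in a few lines. The parametric approach below (fit the means and covariances, then draw new observations) is a deliberately simplified stand-in for tools such as synthpop, which fit richer per-variable models; the “original” data here are simulated for illustration:

```python
import numpy as np

rng = np.random.default_rng(2023)

# Stand-in for an "original" dataset that cannot be shared directly
# (illustration only: 500 participants x 3 continuous variables).
original = rng.multivariate_normal(
    mean=[50.0, 10.0, 7.0],
    cov=[[25.0, 5.0, -2.0],
         [5.0, 9.0, 1.0],
         [-2.0, 1.0, 4.0]],
    size=500,
)

def synthesize(data, n, rng):
    """Draw a synthetic sample that preserves the means and covariances
    of `data` without releasing any original observation. This parametric
    sketch assumes multivariate normality; dedicated tools (e.g., the
    R package synthpop) relax that assumption."""
    mean = data.mean(axis=0)
    cov = np.cov(data, rowvar=False)
    return rng.multivariate_normal(mean, cov, size=n)

synthetic = synthesize(original, n=500, rng=rng)
```

Analyses run on `synthetic` will closely approximate those run on `original`, while no row of the shared file corresponds to a real participant.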

Code associated with a particular publication should be shared alongside the data, whether real or synthetic. Both pieces are necessary to evaluate reproducibility, that is, the ability of an outside person to obtain the same results, given the same data and code. (Reproducibility is distinguished from replicability, which is the ability to obtain the same results given the same methods but new data.) Ideally, the code includes all the steps taken in cleaning, scoring, and analyzing data—that is, a third party could take the raw data and the code and obtain the reported results. Comments detailing the purpose and rationale for each step should be included in the code (45).
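The clean-score-analyze structure described above can be illustrated with a short, fully commented script. The raw records, the -99 missing-data code, and the three-item scale are all hypothetical, chosen only to show the shape of shareable analysis code:

```python
import statistics

# Hypothetical raw export (illustration only): -99 codes missing data.
raw = [
    {"id": 1, "item1": 4, "item2": 5, "item3": 3},
    {"id": 2, "item1": -99, "item2": 2, "item3": 2},
    {"id": 3, "item1": 2, "item2": 1, "item3": 2},
]

def clean(rows):
    """Step 1 - cleaning: drop participants with any missing (-99) item,
    recording how many were dropped so the decision is auditable."""
    kept = [r for r in rows if -99 not in (r["item1"], r["item2"], r["item3"])]
    return kept, len(rows) - len(kept)

def score(rows):
    """Step 2 - scoring: the scale score is the mean of the three items."""
    return [statistics.mean((r["item1"], r["item2"], r["item3"])) for r in rows]

def analyze(scores):
    """Step 3 - analysis: the descriptive statistics reported in the paper."""
    return {"n": len(scores), "mean": statistics.mean(scores)}

kept, n_dropped = clean(raw)
result = analyze(score(kept))
```

Because every step from raw data to reported result is in one script, a third party with the (real or synthetic) data can rerun it end to end and check reproducibility.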

The BMRC recognizes the value of open research to improve value, accuracy, and collaboration in health psychology and behavioral medicine research.

The BMRC encourages open research practices at a minimum as required by funding entities and publications. In practice, research materials should be as open as possible and as closed as necessary, respecting privacy, laws, and cultural knowledge.

Open Science and Equity

Open Science has the potential to both improve and obstruct equity for underrepresented groups in science (48). On one hand, the availability of preprints/postprints (with their attendant benefits and drawbacks, see above) and open data may benefit scientists with fewer resources, who may not have subscription access to journals or the financial or logistical ability to collect large samples of participants (64). Researchers from underrepresented groups highly endorsed open science values of rigor, reproducibility, and transparency and believed that research dissemination was an important equity issue (48). Collaborations arising from open science may benefit researchers from underrepresented groups and generate adequately powered samples of underrepresented groups (48,64). Some practices (preprints and postprints) do not incur a significant burden, and others (preregistration) may save time in the long run (65).

On the other hand, researchers from underrepresented groups were also concerned that financial and time resources required for some open science practices (20,45–47) would further disadvantage scholars from underrepresented groups (48). In financial terms, open-access publication should be considered in an equity context; the cost to publish open access can be prohibitive even for well-resourced investigators (e.g., at the time of writing, €9,500 at Nature (66), or at the current exchange rate, US$10,165). In time terms, scholars from underrepresented groups already bear an unequal burden in mentoring and service work (the “minority tax”). More recognition for open science practices in evaluation and promotion is not necessarily a cure: Groups who do not bear additional burdens might benefit disproportionately because they have more time to engage in open science practices. A more equitable research environment is needed to advance equitable open science, including decreasing the “minority tax” imposed on additional service contributions (67) and multilevel, multidimensional initiatives to increase individual and structural equity for female and underrepresented researchers (68).

Finally, preregistration might bring attendant pressure to improve statistical power by relying on populations that are not hard to recruit, thereby decreasing diversity. To probe this question, reported racial/ethnic diversity in the clinical trials included in Ref. (22) was examined; Figure 1 shows the results (data and code are openly available; see the Data Availability statement). There is a clear trend toward more diversity after the preregistration requirement was put in place in 2000. However, this era also coincides with the March 1994 NIH requirement that grant applications include gender and ethnic diversity such that “for Phase III clinical trials… women and minorities and their subpopulations must be included such that valid analyses of differences in intervention effect can be accomplished” (69). This requirement followed 1990 guidance on the “inclusion of women and members of minority groups in all NIH-supported biomedical and behavioral research involving human subjects” (69). A few conclusions may be drawn from these data: first, diversity increased following requirements rather than guidance; second, racial/ethnic characteristics of the sample were more likely to be reported following the onset of requirements and preregistration; and third, before requirements and preregistration, the proportion of White participants usually exceeded Census estimates (open squares in Figure 1), whereas afterward the proportion was closer to Census estimates. The added requirement of preregistration did not appear to harm diversity in these clinical trials. However, preregistration does not typically require consideration of diversity as NIH grant applications do. Insofar as preregistration benefits researchers by requiring them to carefully consider how their study will be performed and why, the addition of diversity elements to preregistration would lead researchers to address generalizability with regard to diversity and representation; oversampling may be necessary to appropriately characterize some groups (70). The recruited sample could also be compared against preregistration targets.
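The comparison described above (percent White participants in a trial against a Census benchmark) can be sketched in a few lines. Every number here is hypothetical, including the tolerance and the benchmark; this is not the analysis behind Figure 1:

```python
def exceeds_census(sample_pct_white, census_pct_white, tolerance=5.0):
    """Flag a trial whose percent White participants exceeds the Census
    estimate by more than `tolerance` percentage points (hypothetical
    threshold, for illustration only)."""
    return sample_pct_white - census_pct_white > tolerance

# Hypothetical trial summaries: (publication year, percent White).
trials = [(1992, 92.0), (1998, 88.0), (2005, 76.0), (2012, 71.0)]
census_pct_white = 75.0  # illustrative figure, not an actual Census value

flags = {year: exceeds_census(pct, census_pct_white) for year, pct in trials}
```

The same logic could be applied to compare a recruited sample against preregistered diversity targets rather than Census estimates.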

FIGURE 1. Percent White participants in Ref. (22) as a function of study publication year and whether trial recruitment started after the publication of National Institutes of Health guidance in 1994. Reports in which racial/ethnic descriptions were not included are shown at the bottom of the graph. Census estimates for the USA are shown as open squares. *Sample described only as percent of a non-White group; the remainder was assumed to be White for the purpose of this illustration.

The BMRC recognizes the advantages and disadvantages of Open Science in achieving equity in health psychology and behavioral medicine. A more equitable research environment is needed to advance equitable open science. Open access publication cost and institutional recognition of Open Science practices may inadvertently disadvantage underrepresented scientists.

The Need for Civility, Collegiality, and Collaboration

There have been numerous important and innovative developments in how scientific research is conducted. These changes have been described by some as a scientific revolution, and there has been much talk of psychological science undergoing a renaissance (21). However, there has also been discussion of the “tone debate” and concerns about the civility of the scientific debate surrounding replication and reproducibility (71). These concerns have centered on the need to be respectful and collegial in scientific discourse, to critique the science and not the scientist, and to recognize that there are different reactions to Open Science practices. For example, preregistration can be viewed by some as a commitment to do exactly what was proposed; however, it is important to remember that preregistration is “a plan, not a prison” (72). Deviations should be transparently reported but not demonized, allowing dispassionate and scientific scrutiny of their rationale and consequences. In the context of study replications more generally, the BMRC notes that failures of replication may reflect critical issues of context (73), and that failure to replicate, with its consequent drive to generate new hypotheses, is part of the scientific method.

The BMRC urges researchers to be tolerant and to work together in a collaborative, collegial, and civil manner.


We have argued that Open Science in health psychology and behavioral medicine can potentially increase reproducibility, replication, openness, and transparency, which will improve our science’s quality and reliability. There is no one-size-fits-all solution that will encompass all Open Science needs in health psychology and behavioral medicine’s diverse research products and outlets: for example, qualitative science and community-based participatory research will require a different approach than quantitative science; clinical trials will require a different approach than observational studies. Different scientists and journals will have different research foci, both in topic and approach, and will adopt Open Science guidelines accordingly. When deciding to engage in or with Open Science practices and evaluations, researchers should include collegiality and equity in their priorities. However, there are sufficient resources and motivating data that health psychology and behavioral medicine research as a discipline should continue to move toward Open Science. This will improve the robustness of our evidence base over the longer term. As such, the BMRC recommends that health psychology and behavioral medicine adopt more Open Science practices such as preregistration, registered reports, and open research and that the field continue to monitor the viability of preprints as a method of scientific communication.

This statement was developed through a collaboration among the Society for Health Psychology, the Society of Behavioral Medicine, the American Psychosomatic Society, and the Academy of Behavioral Medicine Research and has been published jointly in Health Psychology, the Annals of Behavioral Medicine, and Psychosomatic Medicine.

The authors thank Ava Cazares, Joon Soo Kim, Thomas Mistretta, Halie Pfister, Cristina Pinheiro, Charis Stanek, Christy Wang, and Andrea Yacoub for their assistance in coding Open Science practices.

Behavioral Medicine Research Council members: Simon L. Bacon, Gary G. Bennett, Elizabeth Brondolo, Susan M. Czajkowski, Karina W. Davidson, Elissa S. Epel, Tracey A. Revenson, and John M. Ruiz.

Authors’ Statement of Conflict of Interest and Adherence to Ethical Standards: Authors Suzanne C. Segerstrom, Michael A. Diefenbach, Kyra Hamilton, Daryl B. O’Connor, and A. Janet Tomiyama declare that they have no conflict of interest.

Authors’ Contributions: S.C.S.: Conceptualization, Investigation, Data Curation, Writing-Original Draft, Writing-Review and Editing, Visualization, Supervision. M.A.D.: Conceptualization, Investigation, Writing-Original Draft, Writing-Review and Editing. K.H.: Conceptualization, Investigation, Writing-Original Draft, Writing-Review and Editing. D.B.O.: Conceptualization, Investigation, Writing-Original Draft, Writing-Review and Editing. A.J.T.: Conceptualization, Investigation, Writing-Original Draft, Writing-Review and Editing.

Disclaimer: The content of this paper is solely the responsibility of the authors and does not necessarily represent the official views or policies of the US National Cancer Institute, National Institutes of Health, or Department of Health and Human Services.

Ethical Approval: This review was not formally registered; there was no analytic plan apart from descriptive statistics.

Note: The mission of the Behavioral Medicine Research Council (BMRC) is to identify strategic, high-priority research goals and to encourage multidisciplinary and multicenter research networks to pursue them. The BMRC consists of representatives of the following organizations: Academy of Behavioral Medicine Research; American Psychosomatic Society; Society for Health Psychology; and the Society of Behavioral Medicine. More information about the BMRC can be found at https://www.behavioralmedicineresearch

Data availability: Data used to construct the table and figure presented in this review are available in a public archive. Analytic code availability: Analytic code used to construct the table and figure presented in this review is available in a public archive. Materials availability: There were no materials in this review.


1. Yong E. Replication studies: Bad copy. Nature. 2012;485:298–300.
2. Camerer CF, Dreber A, Forsell E, et al. Evaluating replicability of laboratory experiments in economics. Science. 2016;351:1433–6.
3. Prinz F, Schlange T, Asadullah K. Believe it or not: How much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011;10:712–0.
4. Franco A, Malhotra N, Simonovits G. Underreporting in psychology experiments: Evidence from a study registry. Soc Psychol Personal Sci. 2016;7:8–4.
5. Franco A, Malhotra N, Simonovits G. Publication bias in the social sciences: Unlocking the file drawer. Science. 2014;345:1502–3.
6. Scheel AM, Schijen MRMJ, Lakens D. An excess of positive results: Comparing the standard psychology literature with registered reports. Adv Methods Pract Psychol Sci. 2021;4.
7. Open Science Collaboration. Estimating the reproducibility of psychological science. Science. 2015;349:aac4716.
8. Nosek BA, Hardwicke TE, Moshontz H, et al. Replicability, robustness, and reproducibility in psychological science. Annu Rev Psychol. 2022;73:719–48.
9. Collins FS, Tabak LA. Policy: NIH plans to enhance reproducibility. Nature. 2014;505:612–3.
10. Chambers CD, Tzavella L. The past, present and future of Registered Reports. Nat Hum Behav. 2022;6:29–13.
11. Norris E, He Y, Loh R, West R, Michie S. Assessing markers of reproducibility and transparency in smoking behaviour change intervention evaluations. J Smok Cessat. 2021;2021:e6694386.
12. McVay MA, Cooper KB, Carrera Seoane M, Donahue ML, Scherer LD. Transparent reporting of hypotheses and analyses in behavioral medicine research: An audit of publications in 2018 and 2008. Health Psychol Behav Med. 2021;9:285–12.
13. Nosek BA, Alter G, Banks GC, et al. Promoting an open research culture. Science. 2015;348:1422–3.
14. Freedland KE. Health psychology adopts transparency and openness promotion (TOP) guidelines. Health Psychol. 2021;40:227–2.
15. McVay MA, Conroy DE. Transparency and openness in behavioral medicine research. Transl Behav Med. 2021;11:287–3.
16. Segerstrom SC. Psychosomatic Medicine: looking forward. Psychosom Med. 2022;84:265–1.
17. Revenson TA, Zoccola PM. New instructions to authors emphasize open science, transparency, full reporting of sociodemographic characteristics of the sample, and avoidance of piecemeal publication. Ann Behav Med. 2022;56:415–2.
18. Hardwicke TE, Thibault RT, Kosie JE, Wallach JD, Kidwell MC, Ioannidis JPA. Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014–2017). Perspect Psychol Sci. 2022;17:239–12.
19. Gagliardi D, Cox D, Li Y. Institutional inertia and barriers to the adoption of open science. In: The Transformation of University Institutional and Organizational Boundaries. Leiden, Netherlands: Brill;2015.
20. Houtkoop BL, Chambers C, Macleod M, Bishop DVM, Nichols TE, Wagenmakers EJ. Data sharing in psychology: A survey on barriers and preconditions. Adv Methods Pract Psychol Sci. 2018;1:70–15.
21. O’Connor DB. Leonardo da Vinci, preregistration and the architecture of science: Towards a more open and transparent research culture. Health Psychol Bull. 2021;5:39–6.
22. Kaplan RM, Irvin VL. Likelihood of null effects of large NHLBI clinical trials has increased over time. PLOS One. 2015;10:e0132382.
23. John LK, Loewenstein G, Prelec D. Measuring the prevalence of questionable research practices with incentives for truth telling. Psychol Sci. 2012;23:524–32.
24. de Winter JC, Dodou D. A surge of p-values between 0.041 and 0.049 in recent decades (but negative results are increasing rapidly too). PeerJ. 2015;3:e733.
25. Kerr NL. HARKing: Hypothesizing after the results are known. Personal Soc Psychol Rev. 1998;2:196–217.
26. Bosnjak M, Fiebach CJ, Mellor D, et al. A template for preregistration of quantitative research in psychology: Report of the joint psychological societies preregistration task force. Am Psychol. 2022;77:602–13.
27. Fife DA, Rodgers JL. Understanding the exploratory/confirmatory data analysis continuum: Moving beyond the “Replication Crisis.” Am Psychol. 2022;77:453–13.
28. Munafò MR, Nosek BA, Bishop DVM, et al. A manifesto for reproducible science. Nat Hum Behav. 2017;1:1–8.
29. Simonsohn U, Nelson LD, Simmons JP. P-curve: A key to the file-drawer. J Exp Psychol Gen. 2014;143:534–13.
30. Soderberg CK, Errington TM, Schiavone SR, et al. Initial evidence of research quality of registered reports compared with the standard publishing model. Nat Hum Behav. 2021;5:990–7.
31. Norris E, O’Connor DB. Science as behaviour: Using a behaviour change approach to increase uptake of open science. Psychol Health. 2019;34:1397–9.
32. Harnad S. Electronic preprints and postprints. In: Encyclopedia of Library and Information Science. New York: Marcel Dekker; 2003.
33. Serghiou S, Ioannidis JPA. Altmetric scores, citations, and publication of studies posted as preprints. JAMA. 2018;319:402.
34. Berg JM, Bhalla N, Bourne PE, et al. Preprints for the life sciences. Science. 2016;352:899–2.
35. Elmore SA. Preprints: What role do these have in communicating scientific results? Toxicol Pathol. 2018;46:364–1.
36. Sarabipour S, Debat HJ, Emmott E, Burgess SJ, Schwessinger B, Hensel Z. On the value of preprints: An early career researcher perspective. PLoS Biol. 2019;17:e3000151.
37. Fraser N, Brierley L, Dey G, et al. The evolving role of preprints in the dissemination of COVID-19 research and their impact on the science communication landscape. PLoS Biol. 2021;19:e3000959.
38. Vlasschaert C, Topf JM, Hiremath S. Proliferation of papers and preprints during the coronavirus disease 2019 pandemic: Progress or problems with peer review? Adv Chronic Kidney Dis. 2020;27:418–8.
39. Soderberg CK, Errington TM, Nosek BA. Credibility of preprints: An interdisciplinary survey of researchers. R Soc Open Sci. 2020;7:201520.
40. Martone ME, Garcia-Castro A, VandenBos GR. Data sharing in psychology. Am Psychol. 2018;73:111–14.
41. Hesse BW, Conroy DE, Kwaśnicka D, et al. We’re all in this together: Recommendations from the Society of Behavioral Medicine’s Open Science Working Group. Transl Behav Med. 2021;11:693–5.
42. Lindsay DS. Sharing data and materials in psychological science. Psychol Sci. 2017;28:699–3.
43. Schönbrodt FD, Maier M, Heene M, Bühner M. Forschungstransparenz als hohes wissenschaftliches Gut stärken. Psychol Rundsch. 2018;69:37–7.
44. Wilkinson MD, Dumontier M, Aalbersberg IjJ, et al. The FAIR Guiding Principles for scientific data management and stewardship. Sci Data. 2016;3:160018.
45. Trisovic A, Lau MK, Pasquier T, Crosas M. A large-scale study on research code quality and execution. arXiv:2103.12793 [cs]. 2021. doi:10.48550/arXiv.2103.12793.
46. Obels P, Lakens D, Coles NA, Gottfried J, Green SA. Analysis of open data and computational reproducibility in registered reports in psychology. Adv Methods Pract Psychol Sci. 2020;3:229–8.
47. Abele-Brehm AE, Gollwitzer M, Steinberg U, Schönbrodt FD. Attitudes toward open science and public data sharing: A survey among members of the German Psychological Society. Soc Psychol. 2019;50:252–8.
48. Lui PP, Gobrial S, Pham S, Giadolor W, Adams N, Rollock D. Open science and multicultural research: Some data, considerations, and recommendations. Cultur Divers Ethnic Minor Psychol (online ahead of print, April 11, 2022). doi:10.1037/cdp0000541.
49. Riley RD, Lambert PC, Abo-Zaid G. Meta-analysis of individual participant data: Rationale, conduct, and reporting. BMJ. 2010;340:c221.
50. McKiernan EC, Bourne PE, Brown CT, et al. How open science helps researchers succeed. ELife. 2016;5:e16800.
51. Lakens D, Hilgard J, Staaks J. On the reproducibility of meta-analyses: Six practical recommendations. BMC Psychol. 2016;4:24.
52. Nuijten MB, Hartgerink CHJ, van Assen MALM, Epskamp S, Wicherts JM. The prevalence of statistical reporting errors in psychology (1985–2013). Behav Res Methods. 2016;48:1205–21.
53. Hardwicke TE, Bohn M, MacDonald K, et al. Analytic reproducibility in articles receiving open data badges at the journal Psychological Science: An observational study. R Soc Open Sci. 2021;8:201494.
54. Kathawalla U-K, Silverstein P, Syed M. Easing into open science: A guide for graduate students and their advisors. Collabra Psychol. 2021;7:18684.
55. Walsh CG, Xia W, Li M, Denny JC, Harris PA, Malin BA. Enabling open-science initiatives in clinical psychology and psychiatry without sacrificing patients’ privacy: Current practices and future challenges. Adv Methods Pract Psychol Sci. 2018;1:104–10.
56. Meyer MN. Practical tips for ethical data sharing. Adv Methods Pract Psychol Sci. 2018;1:131–13.
57. Cummings JA, Zagrodney JM, Day TE. Impact of open data policies on consent to participate in human subjects research: Discrepancies between participant action and reported concerns. PLoS One. 2015;10:e0125208.
58. McGuire AL, Oliver JM, Slashinski MJ, et al. To share or not to share: A randomized trial of consent for data sharing in genome research. Genet Med. 2011;13:948–7.
59. Trinidad SB, Fullerton SM, Bares JM, Jarvik GP, Larson EB, Burke W. Informed consent in genome-scale research: What do prospective participants think? AJOB Prim Res. 2012;3:3–8.
60. Lui PP, Skewes M, Gobrial S, Rollock D. Advancing transparency and impact of research: Initiating crosstalk between indigenous research and mainstream “Open Science.” J Indig Res. 2021;9.
61. Health and Human Services. Methods for De-identification of PHI. Washington, DC; 2015.
62. Quintana DS. A synthetic dataset primer for the biobehavioural sciences to promote reproducibility and hypothesis generation. ELife. 2020;9:e53275.
63. Reiner Benaim A, Almog R, Gorelik Y, et al. Analyzing medical research results based on synthetic data and their relation to real data results: Systematic comparison from five observational studies. JMIR Med Inform. 2020;8:e16492.
64. Syed M, Kathawalla UK. Cultural psychology, diversity, and representation in open science. In: Cultural Methods in Psychology: Describing and Transforming Cultures. New York: Oxford University Press;2022:427–27.
65. Tackett JL, Brandes CM, Reardon KW. Leveraging the Open Science Framework in clinical psychological assessment research. Psychol Assess. 2019;31:1386–8.
66. Seltzer R. Open Access Comes to Selective Journal. Washington, DC: Inside Higher Ed; 2020.
67. Williamson T, Goodwin CR, Ubel PA. Minority tax reform—avoiding overtaxing minorities when we need them most. N Engl J Med. 2021;384:1877–2.
68. Bilimoria D, Singer LT. Institutions Developing Excellence in Academic Leadership (IDEAL): A partnership to advance gender equity, diversity, and inclusion in academic STEM. Equal Divers Incl Int J. 2019;38:362–19.
69. NIH Guide. NIH Guidelines on the Inclusion of Women and Minorities as Subjects in Clinical Research; 2000.
70. Vaughan R. Oversampling in health surveys: Why, when, and how? Am J Public Health. 2017;107:1214–1.
71. Derksen M, Field SM. The tone debate: Knowledge, self, and social order. Rev Gen Psychol. 2022;26:172–11.
72. DeHaven A. Preregistration: A Plan, Not a Prison. Charlottesville, VA: Center for Open Science; 2017.
73. Nosek BA, Hardwicke TE, Moshontz H, et al. Replicability, robustness, and reproducibility in psychological science. Annu Rev Psychol. 2022;73:719–29.

Keywords: Reproducibility; Methodology; Privacy; Publication bias

Copyright © American Psychological Association, the Society of Behavioral Medicine, and the American Psychosomatic Society