Supplement Article

Data Velocity in HIV-Related Implementation Research: Estimating Time From Funding to Publication

Schwartz, Sheree R. PhD, MPHa; Chavez Ortiz, Joel MPHa; Smith, Justin D. PhDb; Beres, Laura K. PhD, MPHc; Mody, Aaloke MDd; Eshun-Wilson, Ingrid MBchB, MSd; Benbow, Nanette MAe; Mallela, Deepthi P. MBBS, MPHa; Tan, Stephen BAa; Baral, Stefan MD, MPHa; Geng, Elvin MD, MPHd,f

JAIDS Journal of Acquired Immune Deficiency Syndromes: July 1, 2022 - Volume 90 - Issue S1 - p S32-S40
doi: 10.1097/QAI.0000000000002963

INTRODUCTION

Given the evidence base supporting antiretroviral therapy and pre-exposure prophylaxis (PrEP), there is increasing recognition that although we know "what" to do to effectively treat and prevent HIV, research is needed to establish how and under what circumstances these interventions should be implemented to address the global HIV pandemic.1–3 Indeed, the full benefits of HIV prevention and treatment interventions have not been observed because of challenges scaling interventions to effective programs.4,5 Implementation research (IR), defined by the United States (U.S.) National Institutes of Health (NIH) as the "scientific study of the use of strategies to adopt and integrate evidence-based health interventions (EBIs) into clinical and community settings to improve individual and population health," has rapidly expanded in recent years to answer the questions of "how."1,6–8

To realize the potential of implementation science approaches and IR to achieve the Ending the HIV Epidemic in the United States (EHE) goals by 2030,9 research must be rapid, relevant, and rigorous.10 IR that is rigorous but not rapid may lack relevance on publication if the guidelines and strategies have evolved or the evidence base has already surpassed the intervention being promoted or the strategies tested to facilitate adoption.11 Rapid generation and dissemination of high-quality IR can promote timely incorporation of findings into the guidelines and implementation protocols to enhance research translation. IR has the potential for rapidity compared with traditional clinical effectiveness research. Unlike drug trials and traditional efficacy studies, the preponderance of implementation studies requires fewer regulatory approvals. However, delays in funding, protocol approvals, study enrollment, write-up, and publication may result in lengthy gaps in IR evidence generation, dissemination, and utilization.12,13

Lags in implementation evidence generation and dissemination of findings may result in suboptimal guidelines and/or practice because of a limited evidence base for delivery of EBIs. This may result in poor adoption of practices within health care systems and among providers as well as inadequate uptake and coverage among clients.14 For example, despite strong evidence that oral PrEP is efficacious, it has had modest effectiveness in preventing HIV infections outside of trial environments because of limited reach and sustained use.15 Implementers may also be quick to adopt promising policies and programs that are ultimately suboptimal and potentially less cost-effective if the supporting evidence base is not expected for years.16 As a recent example, the push to switch PrEP formulation from tenofovir disoproxil fumarate with emtricitabine to a newer regimen of tenofovir alafenamide with emtricitabine based on limited evidence could result in switches to nonsuperior regimens that are less cost-effective and, in some cases, less clinically beneficial.17–24 The U.S. government, including the NIH, the Centers for Disease Control and Prevention, and the Health Resources and Services Administration, has thus invested significantly in recent years in efforts to End the HIV Epidemic in the United States through implementation science and scaling of evidence-based practice.9,25–27 Funding for IR both in the United States and internationally has been steadily increasing in recent years.1

There is often a lengthy delay in evidence generation, although some of this may be field specific.11,12,28–30 The objective of these analyses was to review the scientific production process and time to dissemination of HIV-related IR findings. Our goal is to characterize the current state of data production to inform the timely generation, dissemination, and utilization of research that guides implementation and promotes generalizability.

METHODS

Search Strategy

To identify a framework to assess the speed of IR generation and dissemination, we used a list of HIV-related IR studies identified by Smith et al1 through a mapping review of NIH-funded studies. In brief, the authors searched NIH RePORTER to identify HIV-related grant awards funded across NIH Institutes and Centers between January 1, 2013, and March 31, 2018, that included IR using NIH's definition.31 Studies were identified first through a text mining algorithm, and then, the NIH abstract was reviewed manually for final eligibility determination and data abstraction. Overall, through this process, Smith et al identified 216 HIV-related funded studies that included IR during the period.

Using these 216 grants as a base, we reviewed each of these awards on October 1, 2020, and identified all publications linked to the award in NIH RePORTER. We reran this search on June 1, 2021, to capture any additional linked articles published through January 1, 2021. A PRISMA diagram of the 216 awards and resulting publications included in the review is provided; 1 grant was excluded from review because it was a large program award (P01) that funded multiple research studies (Fig. 1).

FIGURE 1:
PRISMA diagram of NIH-funded HIV-related implementation research grants and published articles included in the data velocity analysis.

Screening and Eligibility

Publications were eligible for review inclusion if they originated from any of the remaining 215 funded awards described above, were published between January 1, 2013, and January 1, 2021, were linked to the award in NIH RePORTER by June 1, 2021, and if they were original research. Information from nonoriginal research articles was not included in this review because our focus was on the data production and dissemination process specifically. Articles were excluded if they were unrelated to HIV or published in a peer-reviewed journal before their grant award date.

A research assistant and the lead author systematically reviewed each of the 215 included grants, creating a database of 1509 articles for further screening. Twenty-five linked publications were published before January 1, 2013, and were excluded. After deduplication, there were 1348 full-text articles; of these, 316 were removed through the screening process because they were nonoriginal research, including editorials/commentaries/viewpoints (n = 106), review articles (n = 104), protocol articles (n = 83), basic science/diagnostics/genomics work not related to IR (n = 16), conceptual pieces (n = 4), or nonpublished articles including preprints or conference summaries (n = 3). The remaining 1032 articles were assessed for eligibility by a team of 3 research assistants. An additional 80 articles were excluded from full-text abstraction because they were not HIV related (n = 42), or they had a publication date that preceded the grant award date (n = 38). Overall, there were 952 articles eligible for inclusion in the data velocity review which were imported into an Airtable database (https://airtable.com/). We further identified whether the data published were directly related to the grant award as described in NIH RePORTER. Each article was reviewed by a research assistant or co-author and additionally reviewed by the first author, following a systematic decision tree to determine whether the data produced were a direct result of the grant award, henceforth described as grant related (see Figure 1, Supplemental Digital Content, https://links.lww.com/QAI/B883). Grant-related publications were the source for the primary analyses presented in this article.
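The screening counts above can be checked mechanically. The sketch below (category labels paraphrased from the text; not part of the study's methods) reproduces the reported cascade arithmetic.

```python
# Consistency check of the screening cascade counts reported above.
nonoriginal = {
    "editorials/commentaries/viewpoints": 106,
    "review articles": 104,
    "protocol articles": 83,
    "basic science/diagnostics/genomics": 16,
    "conceptual pieces": 4,
    "preprints/conference summaries": 3,
}
full_text_articles = 1348                      # after deduplication
removed = sum(nonoriginal.values())            # 316 nonoriginal articles removed
assessed = full_text_articles - removed        # 1032 assessed for eligibility
excluded = {"not HIV related": 42, "published before award date": 38}
eligible = assessed - sum(excluded.values())   # 952 eligible for the review
print(removed, assessed, eligible)
```

Each subtraction matches the counts stated in the text, confirming the internal consistency of the PRISMA flow.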

Data Abstraction

Data abstracted at the article level included the journal, country of research, study population, and key study dates (data collection start, enrollment stop, data collection end, journal submission date, revision date, acceptance date, and first publication date—online or in print, whichever occurred first). When only the month or the year was noted for a specific event, the midpoint was used to capture the date (eg, the 15th of the month or July 1st of the corresponding year). When data collection dates were missing from the article, the research team looked to articles from the same data source and/or ClinicalTrials.gov to identify the data collection period. Other data abstracted included the primary phase of implementation per the Exploration, Preparation, Implementation, and Sustainment (EPIS) framework, and the study design.32 Implementation outcomes were abstracted per Proctor et al but are not reported here.33 Over 80 articles were dually abstracted and reviewed to ensure consistency and refine the data abstraction process; the remaining articles were singly reviewed, and data were entered into Airtable. All publication dates were checked to ensure integrity of the data. The review process was completed between October 14, 2020, and June 30, 2021.
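The midpoint imputation rule for partially reported dates can be made concrete. The helper below is an illustrative sketch (the function name is ours, not the study's) that follows the stated convention.

```python
from datetime import date

def impute_midpoint(year, month=None, day=None):
    """Impute a full date from a partially reported one, per the
    abstraction rule: year only -> July 1 of that year;
    year and month -> the 15th of that month."""
    if month is None:
        return date(year, 7, 1)
    if day is None:
        return date(year, month, 15)
    return date(year, month, day)
```

For example, an article reporting only "March 2016" as its enrollment stop would be coded as March 15, 2016, and one reporting only "2016" as July 1, 2016.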

After data abstraction, data at the article-level were merged with information previously abstracted from Smith et al at the grant level, including grant award date, award type, country of research, stage within the IR continuum, HIV research area (which was reclassified by the analysis team per the EHE pillars diagnose, treat, prevent, and respond), and details regarding study design.1,9

Analysis

The number of original research publications was summarized within grant awards, and characteristics of grant-related research articles were summarized. The timing of steps across the data production cascade among published articles was also summarized.

The time to dissemination among only those articles reaching publication, however, underestimates the data generation process overall because nonpublished studies cannot, by definition, be included. To account for nonpublication among some awards, uneven follow-up times, and multiple publications per award, we also considered time-to-event analyses at the grant award level. Kaplan–Meier plots were drawn to assess time to first, third, and fifth grant-related original research publication by grant, using the NIH grant award date as the time origin and years since funding award start as the time scale. Failure curves, the complement of survival curves, were presented overall, without K-awards, and stratified by grant type, year of funding, region, and stage of IR continuum research. Sensitivity to K-awards was assessed given the dedicated research time conferred by these awards and their emphasis on investigator development, which may make them less representative of the speed of other research grants. We further assessed associations between funding year, grant mechanism, and location (United States vs. global) and time to publication using multivariable Cox proportional hazards models assessing time to first publication and time to any publication (the latter accounting for clustering within grant awards and assuming no specific order of publication). Analyses were conducted in Stata 15 SE (StataCorp, College Station, TX) and RStudio v.1.3.1093.
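As a sketch of the grant-level time-to-event approach, the function below computes a Kaplan–Meier failure curve (1 minus the survival estimate) from right-censored times. It is a minimal pure-Python illustration, not the study's code (which used Stata and RStudio).

```python
def km_failure_curve(times, events):
    """Kaplan-Meier failure curve F(t) = 1 - S(t) for right-censored data.
    times: years from grant award to first publication (or to censoring);
    events: 1 if a publication occurred at that time, 0 if censored.
    Returns a list of (time, cumulative probability of publication)."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    surv = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = n = 0
        # Group tied times; d counts events, n counts all leaving the risk set.
        while i < len(data) and data[i][0] == t:
            n += 1
            d += data[i][1]
            i += 1
        if d:  # curve steps only at event times
            surv *= 1 - d / n_at_risk
            curve.append((t, 1 - surv))
        n_at_risk -= n
    return curve
```

Here `times` would be years from award to first grant-related publication, with grants yet to publish censored at January 1, 2021.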

RESULTS

Overall, 181 of 215 (84.2%) grants published one or more articles by January 1, 2021; when restricting to original research articles, 164 of 215 (76.3%) grants had published at least 1 original research article and 127 of 215 (59.1%) published a grant-related article. Among the 215 grants included in the review, each grant published a mean of 6.9 articles (SD 8.1, median 4, and range 1–51). After excluding nonoriginal research articles, the mean number of original research articles published per grant was 4.5 (SD: 6.4 and range 0–44). After further excluding non–grant-related articles, the mean number of articles included per grant in the primary analysis was 2.0 (SD: 3.3 and range 0–28); the distribution is shown in Figure 2, Supplemental Digital Content, https://links.lww.com/QAI/B883. The average number of grant-related publications varied across grant mechanism and award year, with average publications generally higher among larger awards and grants funded in earlier years (see Table 1, Supplemental Digital Content, https://links.lww.com/QAI/B883).

When considering the 952 original research articles, less than half (431 of 952, 45.3%) of publications presented grant-related data. The proportion of grant-related research articles varied by grant mechanism (Χ2 = 17.6, P = 0.003, see Figure 3, Supplemental Digital Content, https://links.lww.com/QAI/B883). Of these 431 articles, over half (n = 227, 52.7%) emerged from 92 R01 awards, 16.0% came from 36 R21 awards, 12.8% from 22 K-awards, and the remainder from other mechanisms (Table 1). Despite earlier awards having additional time to publish, there was a broad distribution of grant-related research articles by year of award, with the exception of 2018, which was the most recent year and was limited because only 3 months of funding awards were included in the Smith et al review. Most (n = 226, 52.4%) of the publications from NIH grants were from studies conducted in the United States, followed by Africa (n = 155, 36.0%). For the pillars outlined in the EHE initiative, 48 (21.2%) grant-related research publications from the United States emerged from grants that focused on diagnosing HIV, 93 (41.2%) on treating HIV, 140 (62.0%) on preventing HIV, and 41 (18.1%) on real-time data-informed epidemic response9; grants could be classified as responding to more than 1 EHE pillar.

TABLE 1. - Characteristics of Data Velocity of HIV-Related Implementation Research Articles (n = 431)
Characteristic N (%)
Grant mechanism
 R01 227 (52.7)
 R34 28 (6.5)
 R21 69 (16.0)
 R00 6 (1.4)
 K-award 55 (12.8)
 U-award 32 (7.4)
 Other 14 (3.2)
Year of award
 2013 80 (18.6)
 2014 101 (23.4)
 2015 107 (24.8)
 2016 73 (16.9)
 2017 70 (16.3)
 2018* 0 (0.0)
Region of data collection
 United States 226 (52.4)
 Africa 155 (36.0)
 The Americas (non-US) 12 (2.8)
 Asia 25 (5.8)
 Europe 7 (1.6)
 Multiple regions† 6 (1.4)
EPIS phase of research
 Exploratory 220 (51.0)
 Preparation 118 (27.4)
 Implementation 84 (19.5)
 Sustainment 9 (2.1)
Study design
 Qualitative 98 (22.8)
 Cross-sectional quantitative 104 (24.1)
 Observational (retrospective and prospective) 99 (23.0)
 Experimental (nonrandomized and randomized) 66 (15.3)
 Modeling/cost-effectiveness 38 (8.8)
 Mixed methods 26 (6.0)
*Only includes grants funded through March 2018.
†Multisite studies including sites from multiple regions.

Over three-quarters of the publications in this review were from the exploration or preparation phases of research, around one-fifth addressed the implementation phase, and very few (n = 9) addressed the sustainment phase. Experimental studies, including nonrandomized and randomized designs, comprised 15.3% of the articles published.

Regarding time to data production and dissemination, we explored the time lags across multiple steps of the research production cascade (Fig. 2). Although grant award and publication dates were uniformly available across articles, other dates were not uniformly reported within research articles, and thus denominators varied. Overall, among the 431 grant-related publications, 69 (16.0%) articles were missing study start dates, 104 (24.1%) were missing study enrollment end dates, 72 (16.7%) were missing study completion dates, 170 (39.4%) were missing journal submission dates, 326 (75.6%) were missing journal revision dates, and 156 (36.2%) were missing final journal acceptance dates. Considering all 431 articles from the 127 grants producing grant-related research publications during this period, the mean time from grant award date to publication was 3.7 years (SD: 1.5). Notably, summative times across cascade steps varied when broken down by step because denominators differed owing to missing data; negative values for time from funding to study start date were excluded for grants including historical data comparators or prespecified analyses in existing cohorts. On average, for studies newly initiating primary data collection, the mean time from funding to study enrollment initiation for the relevant article was 1.0 year (SD: 0.7). The study enrollment period and the time between the end of data collection and publication were the 2 most substantive periods, each lasting over 1.5 years. Figure 2 also shows the timing of the publication process among published articles; overall, among the 261 articles reporting journal submission dates, the time from article submission at the journal of publication to first publication (online or in print) averaged 205 days (SD: 107), or 0.56 years.
The time from grant award to publication did not differ significantly between studies that reported when the journal first received the article and those that did not (3.7 vs. 3.8 years, respectively, P = 0.300). Notably, the time from manuscript submission to acceptance decreased on average by 25 days per year (linear regression β = −25.5, 95% confidence interval: −34.0 to −16.9, P < 0.001), from a mean of 333 days (SD: 127) for articles submitted in 2014 to a mean of 134 days (SD: 64) for articles submitted in 2020. Overall, 17 articles (3.9%) reported earlier presentation of their data at a scientific conference.
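The reported decline of roughly 25 days per year is an ordinary least-squares slope. The sketch below shows the closed-form computation on illustrative data (constructed to fall exactly 25 days per year; these are not the study's article-level data).

```python
def ols_slope(x, y):
    """Closed-form least-squares slope of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    den = sum((xi - mx) ** 2 for xi in x)
    return num / den

# Illustrative submission years and mean days to acceptance,
# constructed to decline exactly 25 days per year.
years = list(range(2014, 2021))
days = [333 - 25 * (y - 2014) for y in years]
slope = ols_slope(years, days)  # -25.0 days per year
```

In the study itself, the regression would be fit to individual articles' submission-to-acceptance times, yielding the β = −25.5 estimate with its confidence interval.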

FIGURE 2:
Schematic of steps across the research production and dissemination cascade. Steps in blue show the average number of years (mean, SD, and number of articles included) for each research step from the 431 publications included in the review. Denominators vary based on reporting in articles and by journals. Data collection dates that precede study awards were excluded within the step. Thus, the overall mean years from award date to publication for grant related original research articles in green includes all 431 articles and represents a shorter period, given that data collection included within the articles often preceded study award dates. Studies that did not publish one or more original research articles are excluded from this figure.

For the time-to-event analyses taking censoring into account, we first assessed the time from grant award to the first grant-related research article publication for all funding mechanisms and after excluding K-awards. Overall, the median time to first publication across the 215 grants was just over 4 years (4.25), increasing to 4.5 years after excluding K-awards (n = 193) (Fig. 3). We further assessed the timing from grant award date to the third and fifth grant-related articles published. By 7.5 years postaward, just under 33% of grants overall had published 3 or more grant-related research articles, and just under 25% had published 5 or more.

FIGURE 3:
Time from grant award to publication of first, third, and fifth original research publications directly related to the award in years, overall and excluding K-awards.

We also assessed time to publication by different grant characteristics (Fig. 4). Notably, there seemed to be a trend toward accelerated time to first publication across years (Fig. 4A), with a median time to first publication of 4.5 years in 2013 and just over 3 years in 2017 (data from 2018 were excluded because only 3 HIV-related IR grants were funded). The median time from funding to first grant-related publication was 3 years for K-awards, 4 years for R21 grants, and 5 years for R01 awards (Fig. 4C). Time to first publication by region and by IR continuum at the grant level is also illustrated (Figs. 4B, D). In multivariable Cox proportional hazards regression, time to first publication was significantly accelerated among K and R21 awards; when considering time to any grant-related publication and accounting for clustering within awards, K and R21 mechanisms still had faster times to publication, but we also observed much higher velocity in publication over time from 2013 to 2017 (see Table 2, Supplemental Digital Content, https://links.lww.com/QAI/B883).

FIGURE 4:
Time from grant award to grant related original research first publication in years, stratified by (A) year of funding, excluding 2018 because of small sample size; (B) study region; (C) grant mechanism; and (D) IR continuum at the grant level.

DISCUSSION

Despite the many challenges to rapid research production, including protocol development, ethical review, study implementation, data collection, cleaning, and analysis, and the scientific review process within journals, most of the HIV-related IR grants funded from 2013 to early 2018 disseminated a substantial volume of findings, at a rate that is accelerating over time. Still, 24% of grants had not yet published any original research articles, and 41% had not published results from data directly related to their grant. Excluding nonoriginal research and data not directly grant related, the average HIV-related IR grant produced 2 grant-related publications. Most of the data were produced through NIH R01 awards, were from the United States, and represented earlier stages of IR (eg, exploratory and preparatory work). There remains a need to progress IR toward the implementation and sustainment stages to ensure translation of effective strategies to promote evidence-based practice.

Although the median time from grant funding to first publication of just over 4 years provides a potentially more optimistic perspective on data velocity than what has previously been reported,11,12 these data represent only one stage of the evidence-to-practice process. Development and testing of evidence-based interventions preceding the IR phase, frequent and unaccounted-for rounds of unsuccessful grant submissions before the funding assessed here was secured, and integration of IR findings into policies and/or guidelines and actual adoption in practice easily push data production and translation into care toward the often quoted 17–20 year timelines of "evidence reaching intended recipients (patients)."12,34 Furthermore, key trial results are typically the later publications within grants, and thus, the 4-year timeline is likely an underestimate of the time to translation for evaluative/summative work. Nevertheless, the trends toward increased funding of IR studies and more rapid data production over time are encouraging.1

Notably, over half (55%) of articles attributed to a grant were not from data directly generated by the funded project. Investigator effort spent on other relevant analyses while data collection is ongoing may be an efficient use of resources and a way to maximize scientific output. Conversely, if not considered, it may mask gaps in expected evidence or in the timeliness of data velocity. Training awards in particular are intended to support the development of the investigator, affording them time to publish, expand their networks, and receive mentoring; these purposes are likely being fulfilled in ways that are additional to the proposed science, and perhaps equally or more impactful to investigators' careers, given that 80% of published data among K-awards were not directly related to the funded science.

The most substantial lags identified in this analysis included time from funding to study initiation, study enrollment and follow-up, and submission for publication. Studies that leverage existing routine or program data, natural experiments, and simulations can accelerate this process and may account for why the overall time from funding to publication, when considered in 1 step, is much shorter than the sum of the individual steps for which data are available.35,36 Although the time from manuscript submission to publication by the publishing journal averaged about 6 months—a period slightly shorter than what has been previously reported from HIV behavioral studies37—part of the lag between study end and publication date may reflect manuscript submissions, review periods, and rejections by one or more journals before the ultimate journal of publication, which are not accounted for in this subcomponent analysis. Given that just 36% of the data production time was used for study enrollment/follow-up, reforming the scientific machinery to get studies up and running faster and disseminated more quickly is where substantive gains in accelerated research production could likely be seen. Examples could include increasing investments in institutional review boards to improve efficiency in time to enrollment or requiring data sharing within 12 months after study end. Strategies to speed up data collection itself include designs that leverage real-world data across multiple sites to foster more rapid enrollment, the use of more proximal end points, and prespecified thresholds of effect to determine whether end points warrant extension through optimization designs, thereby reducing follow-up time.

The speed of data dissemination and practice change witnessed in the COVID-19 pandemic has illustrated the potential for more rapid translation processes.38–41 Lessons learned from the pandemic in fast-tracked funding, rapid low-risk study approvals within institutional review boards, increased rapidity of publication by journals, and real-time data synthesis all demonstrate the potential for rapid impact when political will and funding are available. Lessons from the COVID-19 pandemic for modeling population impact and benefit as well as exploring context and mechanisms through which results may be expected to translate across populations and settings provide roadmaps to decrease the time of evidence translation to end the HIV pandemic.41,42

This analysis has several limitations. First, because identification of "the" primary research article from an award is subjective given that grants have multiple aims, we did not identify a single primary article per award; thus, it is possible that the timing of publication of those critical articles is overestimated or, more likely, underestimated given that articles reporting implementation outcomes and effectiveness may represent later stage publications. Second, many articles may still emerge from these grants, and certainly the final proportion of grants publishing data from their awards will increase with time. In addition, administrative supplements (eg, P30 Center for AIDS Research supplements) were not included in the original sample of grants, although they likely have high relevance to this question given their emphasis on implementation science in recent years.43 Furthermore, as noted above, the journey of a manuscript from the journal to which it was initially submitted to its final publication destination is unknown; this certainly leads to overestimating the time from study end to manuscript submission and underestimating the time spent under editorial processing and peer review. Finally, we have not included the initial basic science and efficacy studies on which EBIs for implementation promotion are based, nor considered guideline adoption and practice routinization. Specific data velocity case studies on discrete EBIs, such as PrEP or HIV self-testing, may add further insights into the relation across these phases. Despite these limitations, a key strength of this analysis is that, by linking articles to awards, we can account for unpublished studies, which would not be represented at all in a review starting only from published HIV and IR articles; excluding these grants would underestimate the timing.

In conclusion, this review suggests that once HIV EBIs are established and IR studies funded, data generation and dissemination are accelerating; however, delays on either end of data collection contribute significantly to research translation speed and there is need to continue to hasten the process if EHE goals are to be met by 2030. The extent to which data culminate in the production of generalizable knowledge that can be applied and/or adapted across settings is not directly answered in this review; however, the volume and rapidity reported lend themselves to this outcome—creating increased opportunities for evidence synthesis across implementation strategies and reinforcing the need for clear specification of context and strategies in support of this goal.1,44 Placing increased emphasis on implementation and sustainment phase studies, reducing barriers to rapid study initiation for low-risk implementation studies, and more rapid dissemination after study completion may further promote the efficient and effective integration of EBIs into care.

ACKNOWLEDGMENTS

Generating data to inform data velocity was itself a long process that drew on the support of many contributors. The authors are grateful to Lisa Lucas, Umaima Tahir Banda, Gauri Kore, and Wilson Gomez for their support in the updated review and to Christopher Kemp for his comments on the analysis.

REFERENCES

1. Smith JD, Li DH, Hirschhorn LR, et al. Landscape of HIV implementation research funded by the National Institutes of Health: a mapping review of project abstracts. AIDS Behav. 2020;24:1903–1911.
2. Baral S, Rao A, Sullivan P, et al. The disconnect between individual-level and population-level HIV prevention benefits of antiretroviral treatment. Lancet HIV. 2019;6:e632–e638.
3. Geng EH, Holmes CB, Moshabela M, et al. Personalized public health: an implementation research agenda for the HIV response and beyond. PLoS Med. 2020;16:e1003020.
4. Bain LE, Nkoke C, Noubiap JJN. UNAIDS 90-90-90 targets to end the AIDS epidemic by 2020 are not realistic: comment on ‟Can the UNAIDS 90-90-90 target be achieved? A systematic analysis of national HIV treatment cascades.” BMJ Glob Health. 2017;2:e000227.
5. Cox J, Gutner C, Kronfli N, et al. A need for implementation science to optimise the use of evidence-based interventions in HIV care: a systematic literature review. PLoS One. 2019;14:e0220060.
6. National Institutes of Health. 2021. Available at: https://grants.nih.gov/grants/guide/pa-files/PAR-19-274.html. Accessed July 15, 2021.
7. Scaccia JP, Scott VC. 5335 days of Implementation Science: using natural language processing to examine publication trends and topics. Implement Sci. 2021;16:47.
8. Geng E, Hargreaves J, Peterson M, et al. Implementation research to advance the global HIV response: introduction to the JAIDS supplement. J Acquir Immune Defic Syndr. 2019;82(suppl 3):S173–S175.
9. Fauci AS, Redfield RR, Sigounas G, et al. Ending the HIV epidemic: a plan for the United States. JAMA. 2019;321:844–845.
10. Lambdin BH, Cheng B, Peter T, et al. Implementing implementation science: an approach for HIV prevention, care and treatment programs. Curr HIV Res. 2015;13:244–249.
11. Khan S, Chambers D, Neta G. Revisiting time to translation: implementation of evidence-based practices (EBPs) in cancer control. Cancer Causes Control. 2021;32:221–230.
12. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104:510–520.
13. Hanney SR, Castle-Clarke S, Grant J, et al. How long does biomedical research take? Studying the time taken between biomedical and health research and its translation into products, policy, and practice. Health Res Policy Syst. 2015;13:1.
14. Mugo NR, Ngure K, Kiragu M, et al. The preexposure prophylaxis revolution; from clinical trials to programmatic implementation. Curr Opin HIV AIDS. 2016;11:80–86.
15. Mayer KH, Chan PA, Patel RR, et al. Evolving models and ongoing challenges for HIV preexposure prophylaxis implementation in the United States. J Acquir Immune Defic Syndr. 2018;77:119–127.
16. Geng EH, Holmes CB. Research to improve differentiated HIV service delivery interventions: learning to learn as we do. PLoS Med. 2019;16:e1002809.
17. Krakower DS, Daskalakis DC, Feinberg J, et al. Tenofovir alafenamide for HIV preexposure prophylaxis: what can we DISCOVER about its true value? Ann Intern Med. 2020;172:281–282.
18. Mayer KH, Molina JM, Thompson MA, et al. Emtricitabine and tenofovir alafenamide vs emtricitabine and tenofovir disoproxil fumarate for HIV pre-exposure prophylaxis (DISCOVER): primary results from a randomised, double-blind, multicentre, active-controlled, phase 3, non-inferiority trial. Lancet. 2020;396:239–254.
19. Walensky RP, Horn T, McCann NC, et al. Comparative pricing of branded tenofovir alafenamide-emtricitabine relative to generic tenofovir disoproxil fumarate-emtricitabine for HIV preexposure prophylaxis: a cost-effectiveness analysis. Ann Intern Med. 2020;172:583–590.
20. Grov C, Westmoreland DA, D'Angelo AB, et al. Marketing of tenofovir disoproxil fumarate (TDF) lawsuits and social media misinformation campaigns' impact on PrEP uptake among gender and sexual minority individuals. AIDS Behav. 2021;25:1396–1404.
21. Fields SD, Tung E. Patient-focused selection of PrEP medication for individuals at risk of HIV: a narrative review. Infect Dis Ther. 2021;10:165–186.
22. Robles G, Sauermilch D, Gandhi M, et al. PrEP demonstration project showed superior adherence with tenofovir alafenamide/emtricitabine compared to tenofovir disoproxil fumarate/emtricitabine in a sample of partnered sexual minority men. AIDS Behav. 2021;25:1299–1305.
23. D'Angelo AB, Westmoreland DA, Carneiro PB, et al. Why are patients switching from tenofovir disoproxil fumarate/emtricitabine (Truvada) to tenofovir alafenamide/emtricitabine (Descovy) for pre-exposure prophylaxis? AIDS Patient Care STDS. 2021;35:327–334.
24. Marcus JL, Levine K, Sewell WC, et al. Switching from tenofovir disoproxil fumarate to tenofovir alafenamide for human immunodeficiency virus preexposure prophylaxis at a Boston community health center. Open Forum Infect Dis. 2021;8:ofab372.
25. Psihopaidas D, Cohen SM, West T, et al. Implementation science and the Health Resources and Services Administration's Ryan White HIV/AIDS Program's work towards ending the HIV epidemic in the United States. PLoS Med. 2020;17:e1003128.
26. Marston HD, Dieffenbach CW, Fauci AS. Ending the HIV epidemic in the United States: closing the implementation gaps. Ann Intern Med. 2018;169:411–412.
27. Eisinger RW, Dieffenbach CW, Fauci AS. Role of implementation science: linking fundamental discovery science and innovation science to ending the HIV epidemic at the community level. J Acquir Immune Defic Syndr. 2019;82(suppl 3):S171–S172.
28. Montgomery ET, van der Straten A, Chitukuta M, et al. Acceptability and use of a dapivirine vaginal ring in a phase III trial. AIDS. 2017;31:1159–1167.
29. Phillips SM, Alfano CM, Perna FM, et al. Accelerating translation of physical activity and cancer survivorship research into practice: recommendations for a more integrated and collaborative approach. Cancer Epidemiol Biomarkers Prev. 2014;23:687–699.
30. Presseau J, Byrne-Davis LMT, Hotham S, et al. Enhancing the translation of health behaviour change research into practice: a selective conceptual review of the synergy between implementation science and health psychology. Health Psychol Rev. 2022;16:22–49.
31. NIH Reporter. Available at: https://projectreporter.nih.gov/reporter.cfm. Accessed November 20, 2020.
32. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38:4–23.
33. Proctor E, Silmere H, Raghavan R, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health. 2011;38:65–76.
34. Brownson RC, Jacob RR, Carothers BJ, et al. Building the next generation of researchers: mentored training in dissemination and implementation science. Acad Med. 2021;96:86–92.
35. Handley MA, Lyles CR, McCulloch C, et al. Selecting and improving quasi-experimental designs in effectiveness and implementation research. Annu Rev Public Health. 2018;39:5–25.
36. Sweeney P, DiNenno EA, Flores SA, et al. HIV data to care—using public health data to improve HIV care and prevention. J Acquir Immune Defic Syndr. 2019;82(suppl 1):S1–S5.
37. Ingersoll KS, Van Zyl C, Cropsey KL. Publishing HIV/AIDS behavioural science reports: an author's guide. AIDS Care. 2006;18:674–680.
38. Homolak J, Kodvanj I, Virag D. Preliminary analysis of COVID-19 academic information patterns: a call for open science in the times of closed borders. Scientometrics. 2020;124:2687–2701.
39. Aviv-Reuven S, Rosenfeld A. Publication patterns' changes due to the COVID-19 pandemic: a longitudinal and short-term scientometric analysis. Scientometrics. 2021;126:6761–6784.
40. Raynaud M, Zhang H, Louis K, et al. COVID-19-related medical research: a meta-research and critical appraisal. BMC Med Res Methodol. 2021;21:1.
41. Redd A, Peetluk LS, Jarrett BA, et al. Curating the evidence about COVID-19 for frontline public health and clinical care: the novel coronavirus research compendium. Public Health Rep. 2022;137:197–202.
42. Ford DE, Johnson A, Nichols JJ, et al. Challenges and lessons learned for institutional review board procedures during the COVID-19 pandemic. J Clin Transl Sci. 2021;5:e107.
43. National Institute of Allergy and Infectious Diseases. Available at: https://www.niaid.nih.gov/research/cfar-arc-ending-hiv-epidemic-supplement-awards. Accessed November 20, 2020.
44. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.
Keywords:

research translation; implementation science; data production; review; HIV; Ending the HIV Epidemic

Supplemental Digital Content

Copyright © 2022 Wolters Kluwer Health, Inc. All rights reserved.