
Original Articles

Identifying Research Gaps and Prioritizing Psychological Health Evidence Synthesis Needs

Hempel, Susanne PhD*,†; Gore, Kristie PhD; Belsher, Bradley PhD§

Author Information
Medical Care 57():p S259-S264, October 2019. | DOI: 10.1097/MLR.0000000000001175

Abstract

Evidence synthesis is an essential step in promoting evidence-based medicine across health systems; it facilitates the translation of research to practice. A systematic review of the research literature on focused review questions is a key evidence synthesis approach that can inform practice and policy decisions.1 However, systematic reviews are resource-intensive undertakings. In a resource-constrained environment, before an evidence review is commissioned, the need for and feasibility of the review must be established.

Establishing the need for the review can be achieved through a research gap analysis or needs assessment. Identification of a gap serves as the first step in developing a new research question.2 Research gaps in health care do not necessarily align directly with research needs. Research gaps are only critical where knowledge gaps substantially inhibit the decision-making ability of stakeholders such as patients, health care providers, and policymakers, thus creating a need to fill the knowledge gap. Evidence synthesis enables the assessment of whether a research gap continues to exist or whether there is adequate evidence to close the knowledge gap.

Furthermore, a gap analysis often identifies multiple, competing gaps that are worth pursuing. Given the resource requirements of formal evidence reviews, topic prioritization is needed to best allocate resources to those areas deemed the most relevant for the health system. Regardless of the topic, the prioritization process is likely to be stakeholder-dependent. Priorities for evidence synthesis will vary depending on the mission of the health care system and the local needs of the health care stakeholders. A process of stakeholder input is an important mechanism to ensure that the evidence review will meet local needs as well as to identify a receptive audience for the review findings.

In addition to establishing the need for an evidence review, the feasibility of conducting the review must also be established. In primary research, feasibility is often mainly a question of available resources. For evidence reviews, the resources, the availability of primary research, and the presence of existing evidence reviews on the topic need to be explored. Not all topics are amenable to a systematic review, which focuses on a specific range of research questions and relies heavily on published literature. Furthermore, an evidence review synthesizes the existing evidence; hence, if there is insufficient evidence in the primary research literature, an evidence review is not useful. Establishing a lack of evidence is a worthwhile exercise since it identifies the need for further research. However, most health care delivery organizations will be keen to prioritize areas that can be synthesized, that is, to invest in synthesizing a body of research sizable enough to derive meaningful results. For evidence reviews, the presence of existing evidence syntheses is also an important consideration, in particular to determine the incremental value of a new review. Although primary research benefits profoundly from replication, secondary literature, particularly where high-quality reviews already exist or the evidence is limited, may add little to our knowledge base.3

This work describes a structured and transparent approach to identify and prioritize areas of psychological health that are important and that can be feasibly addressed by a synthesis of the research literature. It describes a collaboration between an agency charged with facilitating the implementation of evidence-based research and practices across the Military Health System (MHS) and a research center specializing in evidence synthesis.

METHODS

This project is anchored in the relationship between the Defense Health Agency Psychological Health Center of Excellence (PHCoE) and the RAND Corporation’s National Defense Research Institute (NDRI), one of the Federally Funded Research and Development Centers (FFRDC) dedicated to providing long-term analytic support to the Defense Health Agency. PHCoE, an agency charged with facilitating the implementation of evidence-based research and practices across the MHS, funded a series of systematic reviews and evidence maps synthesizing psychological research. The project draws on the expertise of the Southern California Evidence-based Practice Center (EPC) located at RAND, a center specializing in evidence synthesis. The project included scoping searches, stakeholder input, and feasibility scans. The project is ongoing; this manuscript describes methods and results from June 2016 to September 2018. The project was assessed by our Human Subject Protection staff and determined to be exempt (July 7, 2016, ID ND3621; August 6, 2017, ID ND3714).

The following describes the process; Figure 1 provides a visual overview.

FIGURE 1: Process of identifying research gaps and prioritizing psychological health evidence synthesis needs.

Scoping Searches to Identify Evidence Synthesis Gaps

Scoping searches targeted pertinent sources for evidence gaps. The searches focused on clinical conditions and interventions relevant to psychological health, including biological psychiatry, health care services research, and mental health comorbidity. Proposed topics and study populations were not limited by deployment status or deployment eligibility, but topic selection considered the prevalence of clinical conditions among Department of Defense active duty military personnel managed by the MHS. The scoping searches excluded evidence gaps addressing children and adolescents and clinical conditions exclusively relevant to veterans managed by the Department of Veterans Affairs.

Scoping Search Sources

We screened 15 sources in total for evidence synthesis gaps.

Veterans Affairs/Department of Defense clinical practice guidelines were a key source for documented evidence gaps.4–9 Recently updated guidelines were screened only for evidence gaps that indicated a lack of synthesis of existing research or content areas that were outside the scope of that guideline (guidelines rely primarily on published systematic reviews and can only review a limited number of topic areas).

We consulted the current report of the committee on armed services of the House of Representatives regarding the proposed National Defense Authorization Act (NDAA) and the report for the upcoming fiscal year.10,11 We specifically screened the report for research priorities identified for psychological health. We also screened the published National Research Action Plan designed to improve access to mental health services for veterans, service members, and military families.12

We conducted a literature search for publications dedicated to identifying evidence gaps and research needs for psychological health and traumatic brain injury. We searched for publications published from 2000 to 2016 in the most relevant databases, PubMed and PsycINFO, that had the words research gap, knowledge gap, or research priority in the title and addressed psychological health (Supplemental Digital Content, https://links.lww.com/MLR/B836). The search retrieved 203 citations. Six publications were considered potentially relevant and obtained as full text; 1 source was subsequently excluded because the authors had conducted their literature search less than 3 years earlier and it was deemed unlikely that a new review would identify substantially more eligible studies.13–19

We also used an analysis of the utilization of complementary and alternative medicine in the MHS20 to identify interventions that were popular with patients but for which potentially little evidence-based guidance exists. We focused our scoping efforts on complementary approaches such as stress management, hypnotherapy, massage, biofeedback, chiropractic, and music therapy to align with the funding scope. In the next step, we reviewed the existing clinical practice guidelines to determine whether clinicians have guidance regarding these approaches. The Department of Defense Health Related Behaviors Survey of Active Duty Military Personnel21 is an anonymous survey conducted every 3 years on service members with the aim of identifying interventions or health behaviors patients currently use. To address evidence gaps most relevant to patients, we screened the survey results, and then matched the more prevalent needs identified with guidance provided in relevant clinical practice guidelines.

We consulted the priority review list assembled by the Cochrane group to identify research needs for systematic reviews. We screened the 2015–2017 lists for mental health topics that are open to new authors, that is, those that do not have an author team currently dedicated to the topic. None of the currently available topics appeared relevant to psychological health and no topics were added to the table. We also consulted with ongoing federally funded projects to identify evidence gaps that were beyond the scope of the other projects. In addition, we screened a list of psychological health research priorities developed at PHCoE for knowledge gaps that could be addressed in systematic reviews or evidence maps. Finally, we screened resources available on MHS web sites for evidence gaps.

Gap Analysis Procedure and Approach to Translating Gaps into Evidence Review Format

We first screened these sources for knowledge gaps, regardless of whether the gap was amenable to evidence review. However, we did not include research gaps where the source explicitly indicated that the knowledge gap was due to a lack of primary research. We distinguished 5 evidence gap domains and abstracted gaps across these pertinent areas: interventions, diagnostic questions, treatment outcomes, specific populations, and health services research and health care delivery models.

We then translated the evidence gaps into potential topics for evidence maps and/or systematic reviews. Evidence maps provide a broad overview of large research areas using data visualizations to document the presence and absence of evidence.22 Similar to scoping reviews, evidence maps do not necessarily address the effects of interventions but can be broader in scope. Systematic reviews are a standardized research methodology designed to answer clinical and policy questions with published research, using meta-analysis to estimate effect sizes and formal grading of the quality of evidence. We considered systematic reviews for effectiveness and comparative effectiveness questions regarding specific intervention and diagnostic approaches.

Stakeholder Input

Evidence synthesis gaps that were determined to be amenable to systematic review or evidence map methods provided the basis for stakeholder input. Although all topics were reviewed by project personnel, we also identified psychological health service leads for Army, Navy, Air Force, and Marines within the Defense Health Agency as key stakeholders to be included in the topic selection process. To date, 2 rounds of formal ratings by stakeholders have been undertaken.

The first round focused on the need for systematic review covering issues related to posttraumatic stress disorder (PTSD). The second round focused on other potential psychological health topics determined to be compatible with the MHS mission. Represented clinical areas were suicide prevention and aftercare, depressive disorders, anxiety disorders, traumatic brain injury, substance use disorder including alcohol and opioid use disorder, and chronic pain. All of the potential topics addressed either the effects of clinical interventions or health service research questions.

Stakeholders rated the topics based on their potential to inform psychological health care in the military health system. The raters used a 5-point rating scale ranging from “No impact” to “Very high impact.” In addition, stakeholders were able to add suggestions for evidence review. We analyzed the mean, the mode, and individual stakeholder ratings indicating “high impact” for individual topics.
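The rating aggregation described above can be sketched as follows. The topic names and rating values here are illustrative placeholders, not the study's data, and the shortlisting threshold mirrors the "mean score of ≥3" criterion reported in the Results:

```python
from statistics import mean, mode, stdev

# Illustrative stakeholder ratings on the 5-point scale
# (0 = "No impact" ... 4 = "Very high impact"); hypothetical values.
ratings = {
    "PTSD treatment dosing, duration, and sequencing": [4, 3, 3, 4],
    "Predictors of PTSD treatment retention": [3, 4, 3, 3],
    "Telehealth-delivered psychotherapy": [1, 3, 2, 2],
}

def summarize(scores, high=3):
    """Mean, mode, SD, and whether any rater marked the topic high impact."""
    return {
        "mean": mean(scores),
        "mode": mode(scores),
        "sd": stdev(scores),
        "any_high_impact": any(s >= high for s in scores),
    }

summaries = {topic: summarize(scores) for topic, scores in ratings.items()}

# Topics with a mean rating of >=3 ("high potential") move forward.
shortlist = [t for t, s in summaries.items() if s["mean"] >= 3]
```

Reporting the mode and SD alongside the mean, as done here, surfaces the disagreement among raters that the Results sections describe.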

Feasibility Scans

Feasibility scans provided an estimate of the volume and the type of existing research literature, which is informative for 3 reasons. First, this process determined whether sufficient research was available to inform a systematic review or an evidence map. Second, feasibility scans can provide an estimate of the required resources for an evidence review by establishing whether only a small literature base or a large number of research studies exists. Finally, feasibility scans identify existing high-profile evidence synthesis reports that could make a new synthesis obsolete.

Feasibility scans for potential evidence maps concentrated on the size of the body of research that would need to be screened and the relevant synthesis questions that can inform how this research should be organized in the evidence map. Feasibility scans for systematic reviews aimed to determine the number of relevant studies, existing high-quality reviews, and the number of studies not covered in existing reviews. Randomized controlled trials (RCTs) were the focus of most of the systematic review topics, as they provide strong research evidence that could inform clinical practice guideline committees recommending for or against interventions. An experienced systematic reviewer developed preliminary search strategies in PubMed, a well-maintained and user-friendly database for biomedical literature, and applied database search filters (eg, for RCTs or systematic reviews) in preliminary literature searches to estimate the research volume for each topic.
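A preliminary search strategy of the kind described above can be assembled programmatically. The sketch below is a minimal illustration with hypothetical topic terms; the `[tiab]` title/abstract field tag, the `randomized controlled trial[pt]` publication-type filter, and the `systematic[sb]` subset filter are standard PubMed search syntax:

```python
# Sketch of assembling a PubMed feasibility-scan query with a study-design
# filter. Topic terms are hypothetical examples, not the project's searches.
def feasibility_query(topic_terms, filter_tag):
    """Combine topic terms (OR'd on title/abstract) with a PubMed filter."""
    topic = " OR ".join(f'"{t}"[tiab]' for t in topic_terms)
    return f"({topic}) AND {filter_tag}"

# One query to estimate the volume of trials, one for existing syntheses.
rct_query = feasibility_query(
    ["posttraumatic stress disorder", "PTSD"],
    "randomized controlled trial[pt]",
)
review_query = feasibility_query(
    ["posttraumatic stress disorder", "PTSD"],
    "systematic[sb]",
)
```

Running each query in PubMed and noting the hit count gives the rough volume estimates that the feasibility scans rely on.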

Scans also identified any existing high-quality evidence review published by agencies specializing in unbiased evidence syntheses such as the Agency for Healthcare Research and Quality (AHRQ)’s Evidence-based Practice Center program, the Cochrane Collaboration, the Campbell Collaboration, the Evidence Synthesis Program of the Department of Veterans Affairs, and the Federal Health Technology Assessment program. We used the databases PubMed and PubMed Health to identify reports. We appraised the scope, relevance, and publication year of the existing high-profile evidence reviews. The research base for psychological health develops rapidly, and evidence syntheses need to ensure that current clinical policies reflect the best available evidence. When determining the feasibility and appropriateness of a new systematic review, we took into account the results of the original review and any new studies published on the same topic since that review.

RESULTS

The following results are described: the results of the scoping searches and gap analysis, the translation of gaps into evidence synthesis format, the stakeholder input ratings, and the feasibility scans.

Scoping Searches and Gap Analysis Results

The scoping search and gap analysis identified a large number of evidence gaps as documented in the gap analysis table in the Appendix (Supplemental Digital Content, https://links.lww.com/MLR/B836). Across sources, we identified 58 intervention, 9 diagnostic, 12 outcome, 19 population, and 24 health services evidence synthesis gaps. The evidence gaps varied considerably with regard to scope and specificity, ranging, for example, from knowledge gaps in recommendations for medications for specific clinical indications or treatment combinations4 to gaps in supporting caregivers.11 The largest group of evidence gaps was documented for interventions. This included open questions for individual interventions (eg, ketamine)12 as well as the best format and modality within an intervention domain (eg, use of telehealth).6 Diagnostic evidence gaps included open questions regarding predictive risk factors that could be used in suicide prevention8 and the need for personalized treatments.12 Outcome evidence gaps often pointed to the lack of measured outcomes such as cost-effectiveness as well as the lack of knowledge on hypothesized effects, such as increased access or decreased stigma associated with technology-based modalities.23 Population evidence gaps addressed specific patient populations such as complex patients5 and family members of service members.11 The health services evidence gaps addressed care support through technology (eg, videoconferencing23) as well as treatment coordination within health care organizations, such as how treatment for substance use disorder should be coordinated with treatment for co-occurring conditions.4

Potential Evidence Synthesis Topics

The gaps were translated into potential evidence map or systematic review topics. This translation process took into account that some topics cannot easily be operationalized as an evidence review. For example, addressing knowledge gaps regarding prevalence or utilization estimates was hindered by the lack of publicly available data. In addition, we noted that some review questions may require an exhaustive search and a full-text review of the literature because the information cannot be searched for directly, and hence were outside the budget constraints.

The clinical areas and number of topics were: PTSD (n=19), suicide prevention (n=14), depression (n=9), bipolar disorder (n=9), substance use (n=24), traumatic brain injury (n=20), anxiety (n=1), and cross-cutting (n=14) evidence synthesis topics. All topic areas are documented in the Appendix (Supplemental Digital Content, https://links.lww.com/MLR/B836).

Stakeholder Input Results

Stakeholders rated 19 PTSD-related research gaps and suggested an additional 5 topics for evidence review, addressing both prevention and treatment topics. Mean ratings for topics ranged from 1.75 to 3.5 on a scale from 0 (no impact potential) to 4 (high potential for impact). Thus, although identified as research gaps, the potential of an evidence review to have an important impact on the MHS varied across the topics. Only 2 topics received a mean score of ≥3 (high potential): predictors of PTSD treatment retention and response, and PTSD treatment dosing, duration, and sequencing. In addition, raters’ opinions varied considerably on some topics, with SDs ranging from 0.5 to 1.5 across all topics.

The stakeholders rated 22 other psychological health topics, suggested 2 additional topics for evidence review, and revised 2 original topics indicating which aspect of the research gap would be most important to address. Mean scores for the rated topics ranged from 0.25 to 3.75, with the SDs for each item ranging from 0 to 1.4. Six topics received an average score of ≥3, primarily focused on the topics of suicide prevention, substance use disorders, and telehealth interventions. Opinions on other topics varied widely across service leads.

Feasibility Scan Results

Evidence review topics that were rated by stakeholders as having some potential for impact within the MHS (using a rating cutoff score >1) were selected for formal feasibility scans. To date, 46 topics have been subjected to feasibility scans. Of these, 11 were evaluated as a potential evidence map, 17 as a potential systematic review, and 18 as potentially either at the time of the topic suggestion. The results of the feasibility scans are documented in the table in the Appendix (Supplemental Digital Content, https://links.lww.com/MLR/B836).

The feasibility scan result table shows the topic, topic modification suggestions based on the literature reviewed, and the mean stakeholder impact rating. The table also shows the search strategy employed to determine feasibility; the estimated number of RCTs in PubMed; the number and citations of Cochrane, Evidence Synthesis Program, and Health Technology Assessment reviews, that is, high-quality syntheses; and the estimated number of RCTs published after the latest existing systematic review on the topic.

Each potential evidence review topic was discussed in a narrative review report that documented the reason for determining the topic to be feasible or not feasible. Reasons for determining a topic to be not feasible included the lack of primary research for an evidence map or systematic review, the presence of an ongoing research project that may influence the evidence review scope, and the presence of an existing high-quality evidence review. Some topics were shown to be feasible upon further modification; this included topics that were partially addressed in existing reviews or topics where the review scope would need to be substantially changed to result in a high-impact evidence review. Topics judged feasible met all outlined criteria, that is, the topic could be addressed in a systematic review or evidence map, there were sufficient studies to justify a review, and the review would not merely replicate an existing review but make a novel contribution to the evidence base.
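The feasibility criteria above can be summarized as a simple decision rule. This is a sketch under stated assumptions, not the project's actual procedure: the `min_studies` threshold is illustrative (the paper does not state a numeric cutoff), and the real judgments were narrative, made by experienced reviewers:

```python
def assess_feasibility(amenable, n_studies, review_exists, n_new_studies,
                       min_studies=5):
    """Illustrative decision rule for the feasibility criteria.

    amenable:       topic can be operationalized as a review or evidence map
    n_studies:      estimated primary studies (eg, RCTs) on the topic
    review_exists:  a high-quality synthesis already covers the topic
    n_new_studies:  studies published since that existing review
    min_studies:    hypothetical threshold; the paper states no cutoff
    """
    if not amenable or n_studies < min_studies:
        return "not feasible"          # too little primary research
    if review_exists:
        if n_new_studies < min_studies:
            return "not feasible"      # would merely replicate existing work
        return "feasible with modification"  # update or narrow the scope
    return "feasible"
```

The three return values correspond to the outcomes documented in the narrative review reports: not feasible, feasible upon modification, and feasible.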

DISCUSSION

The project describes a transparent and structured approach to identify and prioritize evidence synthesis topics using scoping reviews, stakeholder input, and feasibility scans.

The work demonstrates an approach to establishing and evaluating evidence synthesis gaps. It has been repeatedly noted that research gap analyses often lack transparency, with little information on analytic criteria and selection processes.24,25 In addition, research need identification may not be informed by systematic literature searches documenting gaps but may rely primarily on often unstructured content expert input.26,27 Evidence synthesis needs assessment is a new field that to date has received very little attention. However, as health care delivery organizations move toward providing evidence-based treatments and the existing research continues to grow, both evidence reviews and evidence review gap identification and prioritization will become more prominent.

One of the lessons learned is that the topic selection process added to the timeline and required additional resources. The scoping searches, translation into evidence synthesis topics, stakeholder input, and feasibility scans each added time, and the project required a longer period of performance compared with previous evidence synthesis projects. The project components were undertaken sequentially and had to be divided into topic areas. For example, it was deemed impractical to ask stakeholders for input on all 122 topics identified as potential evidence review topics. Furthermore, we needed to be flexible to respond to unanticipated congressional requests for evidence reviews. However, our process of identifying synthesis gaps, checking whether topics can be translated into syntheses, obtaining stakeholder input to ensure that the gaps are meaningful and need filling, and estimating feasibility while avoiding duplicative efforts has merit considering the alternative. More targeted funding of evidence syntheses ensures relevance, and while resources need to be spent on the steps we describe, these are small investments compared with the resources required for a full systematic review or evidence map.

The documented stakeholder engagement approach was useful for many reasons, not just for ensuring that the selection of evidence synthesis topics was transparent and structured. The stakeholders were alerted to the evidence synthesis project and provided input for further topic refinement. This process also supported the identification of a ‘customer’ for the completed review, that is, a stakeholder who is keen to use the evidence review, likely to take action on its results, and ready to translate the findings into clinical practice. The research to practice gap is substantial and the challenges of translating research to practice are widely documented.28–30 Inefficient research translation delays delivery of proven clinical practices and can lead to wasteful research and practice investments.

The project had several strengths and limitations. The project describes a successful, transparent, and structured process to engage stakeholders and identify important and feasible evidence review topics. However, the approach was developed to address the specific needs of the military psychological health care system, and therefore the process may not be generalizable to all other health care delivery organizations. Source selection was tailored to psychological health synthesis needs, and process modifications (ie, the sources used to identify gaps) are needed for organizations aiming to establish a similar procedure. To keep the approach manageable, feasibility scans used only 1 database and we developed only preliminary, not comprehensive, searches. Hence, some uncertainty about the true evidence base for the different topics remained; feasibility scans can only estimate the available research. Furthermore, the selected stakeholders were limited to a small number of service leads. A broader panel of stakeholders would likely have provided additional input. In addition, all evaluations of the literature relied on the expertise of experienced systematic reviewers; any replication of the process will require staff with expertise in evidence review methods. Finally, as outlined, all described processes added to the project timeline, compounding the challenges of providing timely systematic reviews for practitioners and policymakers.31,32

We have described a transparent and structured approach to identify and prioritize areas of evidence synthesis for a health care system. Scoping searches and feasibility scans identified gaps in the literature that would benefit from evidence review. Stakeholder input helped ensure the relevance of review topics and created a receptive audience for targeted evidence synthesis. The approach aims to advance the field of evidence synthesis needs assessment.

ACKNOWLEDGMENTS

The authors thank Laura Raaen, Margaret Maglione, Gulrez Azhar, Margie Danz, and Thomas Concannon for content input and Aneesa Motala and Naemma Golshan for administrative assistance.

REFERENCES

1. Whitlock EP, Lopez SA, Chang S, et al. AHRQ series paper 3: identifying, selecting, and refining topics for comparative effectiveness systematic reviews: AHRQ and the effective health-care program. J Clin Epidemiol. 2010;63:491–501.
2. Carey TS, Yon A, Beadles C, Wines R. Prioritizing future research through examination of research gaps in systematic reviews; 2012.
3. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94:485–514.
4. The Management of Bipolar Disorder Working Group. VA/DoD Clinical Practice Guideline for Management of Bipolar Disorder (BD) in Adults. Washington, DC: US Department of Veterans Affairs, US Department of Defense; 2010.
5. The Management of Major Depressive Disorder Working Group. VA/DoD Clinical Practice Guideline for the Management of Major Depressive Disorder. Washington, DC: US Department of Veterans Affairs, US Department of Defense; 2016.
6. The Management of Substance Use Disorders Work Group. VA/DoD Clinical Practice Guideline for the Management of Substance Use Disorders. Washington, DC: US Department of Veterans Affairs, US Department of Defense; 2015.
7. The Management of Post-Traumatic Stress Working Group. VA/DoD Clinical Practice Guideline for Management of Post-Traumatic Stress. Washington, DC: US Department of Veterans Affairs, US Department of Defense; 2010.
8. The Assessment and Management of Risk for Suicide Working Group. VA/DoD Clinical Practice Guideline for Assessment and Management of Patients at Risk For Suicide. Washington, DC: US Department of Veterans Affairs, US Department of Defense; 2013.
9. The Management of Concussion-mild Traumatic Brain Injury Working Group. VA/DoD Clinical Practice Guideline for the Management of Concussion-Mild Traumatic Brain Injury. Washington, DC: US Department of Veterans Affairs, US Department of Defense; 2016.
10. National Defense Authorization Act for Fiscal Year 2017 Title VII—Health Care Provisions Report of the Committee on Armed Services, House of Representatives on HR 4909 Together with Additional Views. Washington, DC: US Government Publishing Office; 2016.
11. National Defense Authorization Act for Fiscal Year 2018—Report of the Committee on Armed Services, House of Representatives on HR 2810 together with Additional Views. Washington, DC: US Government Publishing Office; 2017.
12. The White House Office of the Press Secretary. National Research Action Plan Responding to the Executive Order Improving Access to Mental Health Services for Veterans, Service Members, and Military Families (August 31, 2012). Washington, DC: US Department of Defense, Department of Veterans Affairs, Department of Health and Human Services, and Department of Education; 2013.
13. Shea CW. From the neurobiologic basis of alcohol dependency to pharmacologic treatment strategies: bridging the knowledge gap. South Med J. 2008;101:179–185.
14. Zitnay GA, Zitnay KM, Povlishock JT, et al. Traumatic brain injury research priorities: the Conemaugh International Brain Injury Symposium. J Neurotrauma. 2008;25:1135–1152.
15. Weimer MB, Chou R. Research gaps on methadone harms and comparative harms: findings from a review of the evidence for an American Pain Society and College on Problems of Drug Dependence clinical practice guideline. J Pain. 2014;15:366–376.
16. McCaul ME, Monti PM. Research priorities for alcoholism treatment. In: Galanter M, ed. Recent Developments in Alcoholism, Volume 16: Research on Alcoholism Treatment. New York, NY: Kluwer Academic/Plenum Publishers; 2003:405–414.
17. Alegría M, Page JB, Hansen H, et al. Improving drug treatment services for Hispanics: research gaps and scientific opportunities. Drug Alcohol Depend. 2006;84(suppl 1):S76–S84.
18. Valderas JM, Starfield B, Roland M. Multimorbidity’s many challenges: a research priority in the UK. BMJ. 2007;334:1128.
19. Robinson J, Pirkis J, Krysinska K, et al. Research priorities in suicide prevention in Australia. Crisis. 2008;29:180–190.
20. Herman PM, Sorbero ME, Sims-Columbia AC. Complementary and Alternative Medicine in the Military Health System. Santa Monica, CA: RAND Corporation; 2016.
21. Barlas FM, Higgins WB, Pflieger JC, et al. 2011 Department of Defense Health Related Behaviors Survey of Active Duty Military Personnel. Washington, DC: US Department of Defense; 2013.
22. Miake-Lye I, Hempel S, Shanman R, et al. What is an evidence map? A systematic review of published evidence maps and their definitions, methods, and products. Syst Rev. 2016;5:28.
23. The Management of Posttraumatic Stress Disorder Work Group. VA/DoD Clinical Practice Guideline for the Management of Posttraumatic Stress Disorder and Acute Stress Disorder—Version 3.0. Washington, DC: Department of Veterans Affairs; Department of Defense; 2017.
24. Robinson KA, Akinyede O, Dutta T, et al. Framework for Determining Research Gaps During Systematic Review: Evaluation. Rockville, MD: Agency for Healthcare Research and Quality; 2013.
25. Snilstveit B, Vojtkova M, Bhavsar A, et al. Evidence & gap maps: a tool for promoting evidence informed policy and strategic research agendas. J Clin Epidemiol. 2016;79:120–129.
26. Otto JL, Beech EH, Evatt DP, et al. A systematic approach to the identification and prioritization of psychological health research gaps in the Department of Defense. Mil Psychol. 2018;30:557–563.
27. Saldanha IJ, Wilson LM, Bennett WL, et al. Development and pilot test of a process to identify research needs from a systematic review. J Clin Epidemiol. 2013;66:538–545.
28. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: understanding time lags in translational research. J R Soc Med. 2011;104:510–520.
29. Eddy DM. Clinical policies and the quality of clinical practice. N Engl J Med. 1982;307:343–347.
30. Bero LA, Grilli R, Grimshaw JM, et al. Closing the gap between research and practice: an overview of systematic reviews of interventions to promote the implementation of research findings. The Cochrane Effective Practice and Organization of Care Review Group. BMJ. 1998;317:465–468.
31. Danz MS, Hempel S, Lim YW, et al. Incorporating evidence review into quality improvement: meeting the needs of innovators. BMJ Qual Saf. 2013;22:931–939.
32. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci. 2010;5:56.
Keywords:

evidence review; evidence synthesis; gap analysis; research prioritization; translational science


Copyright © 2019 The Author(s). Published by Wolters Kluwer Health, Inc.