SYSTEMATIC REVIEW PROTOCOLS

Methodological components and quality of evidence summaries: a scoping review protocol

Whitehorn, Ashley1; Porritt, Kylie1; Lockwood, Craig1; Xing, Weijie2,3; Zhu, Zheng2,3; Hu, Yan2,3

doi: 10.11124/JBISRIR-D-19-00258

Introduction

Evidence-based health care is now an expectation rather than an abstract concept; however, with the rapidly increasing body of evidence,1 it can be challenging for time-poor clinicians and policy makers to keep up to date with current evidence and best practice. An evidence summary is a way to provide health care decision makers (clinicians and policy developers) with the most recent, highest quality evidence available on a particular topic in an easily digestible format to facilitate evidence-based clinical decisions. However, objectively evaluating the internal validity of these types of evidence reviews is challenging.2

An evidence summary may be considered as a type of rapid review in that the time frame for completion is expedited compared to more traditional methods of evidence synthesis such as systematic reviews. Although there is some debate over the definition of a rapid review, this term is most commonly used to define a methodology that follows aspects of systematic review methodology, with the omission of various steps to reduce the time and resources required.3 The term “evidence summary,” although broadly used, is poorly defined. The methodologies used in evidence summaries are likely to differ extensively, beyond simply omitting particular steps of the systematic review process. Although the broad purpose of systematic reviews, rapid reviews and evidence summaries is to facilitate evidence-based practice by summarizing evidence, the methodology, application, dissemination and target audiences may differ among these three types of resource.

An evidence summary includes different types of evidence such as experimental and observational studies, systematic reviews, and clinical practice guidelines. Systematic reviews typically take from six months to two years to complete,4 are predominantly published in peer-reviewed journals, and the format tends to be targeted more at academics or researchers. Rapid reviews, although not well-reported, appear to take anywhere from one week to six months to complete, are published as reports and increasingly in academic journals, and target a variety of audiences including government agencies, health care professionals, patients, and researchers.5 Derivative products such as plain language or evidence summaries tend to target clinicians and policy makers; this review seeks to determine other methodological differences.

Clinical practice guidelines are another resource produced with the aim of facilitating evidence-based health care. A key difference between clinical practice guidelines and these three types of resources (evidence summaries, rapid reviews, and systematic reviews) is the additional layer of expert opinion or consensus that is combined with the highest level of evidence available to create recommendations for practice. Clinical practice guidelines are usually large documents covering multiple topics of care in a specific area, they are labor- and cost-intensive, and they are difficult to keep up to date (updates occur at arbitrary time points), especially in areas with an active research agenda.6,7 In contrast, an evidence summary is likely to be a short document (fewer than five pages) that is focused on a specific aspect of care and is regularly updated. Although the target audiences of evidence summaries and clinical practice guidelines are similar, the dissemination and methodologies are likely to differ substantially.

A number of literature review (often systematic review) derivative products are available such as BMJ Best Practice, UpToDate, and JBI evidence summaries. Each of these derivatives has the broad purpose of facilitating knowledge to action at the point of care; however, in most cases the methodologies of these derivative products are poorly reported, and therefore clinicians and policy makers must assume that the product is of high quality with no objective evaluation methods available. Dissemination also differs in that evidence summaries may be distributed to clinical points of care via electronic medical records, clinical decision support systems, or databases that are accessed from global publishers.

A bibliographic study described and evaluated the quality, rigor, and content of evidence-based practice point-of-care resources.8 The study assessed the quality of five main methodological components including search methodology, critical appraisal, hierarchical evidence inclusion, evidence grading, and whether expert opinion (if included) was identifiable.8 The study included 20 resources that present evidence for clinicians at the point of care, yet across these resources there was poor conceptual overlap on what constituted measures of quality; measures of internal validity were particularly lacking, and similar literature tends to focus on scope, breadth, and editorial control of content. Although these are important domains of quality, they are of unclear benefit in the quality assessment of evidence summaries. Given that quality assessment usually focuses on internal validity, it is likely that other methodological components could contribute to evaluating the quality of an evidence summary and mapping of this literature is warranted.

A preliminary search of PROSPERO, MEDLINE, the Cochrane Database of Systematic Reviews and the JBI Database of Systematic Reviews and Implementation Reports was conducted and no current or underway scoping reviews or systematic reviews on the topic were identified.

This scoping review seeks to compare the core components of current methodological sources focusing on the development of resources that meet the definition of an evidence summary and to determine whether there are any current methods of determining quality given their widespread use (but poor definition) in clinical and policy decision-making. The objective of this review is to identify and map the available evidence related to evidence summary methodologies and indicators of quality.

Review questions

  • i) How are evidence summaries defined and structured?
  • ii) What are the core methodological components of an evidence summary?
  • iii) What tools or instruments are available to assist in the assessment of the quality of evidence summaries?

Inclusion criteria

Concept

The concept being considered in this review is rapid summaries of evidence.

An evidence summary may be defined as a synopsis of existing literature on health care interventions or activities for health care policy or practice to use at the point of care.2 An evidence summary is based on a search for evidence using a hierarchical approach; it includes targeted database searching or comprehensive searching, selection of studies against a PICO (population, intervention, comparison, outcome) question, evaluation of the quality of relevant studies, and summation of the evidence with the intent to guide decision-making for health care policy or practice.

Context

This review will consider methodologies for evidence summaries that describe health care interventions or activities for health professionals or policy makers at the point of care.

Types of studies

Articles, papers, books, dissertations, reports, and websites will be included if they evaluate, or describe the development or appraisal of, an evidence summary methodology as defined above.

Exclusion criteria

Literature reviews including summaries of evidence on interventions or health care activities that measure either effects or phenomena of interest, or where the objective is the development, description, or evaluation of methods for the following excluded types of studies:

  • Systematic reviews — defined as “a comprehensive, unbiased synthesis of many relevant studies in a single document using rigorous and transparent methods”9(para.1)
  • Rapid reviews — defined as “a type of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a short period of time”3(p.2)
  • Integrative reviews — these allow for simultaneous inclusion of experimental and non-experimental research, and the combination of theoretical and empirical literature in a review to more fully understand a phenomenon of concern.10
  • Health technology assessments — defined as “the systematic evaluation of properties, effects, and/or impacts of health technology. It is a multidisciplinary process to evaluate the social, economic, organizational and ethical issues of a health intervention or health technology. The main purpose of conducting an assessment is to inform a policy decision making”11(para.1)
  • Clinical practice guidelines — defined as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an assessment of the benefits and harms of alternative care options”12(p.4)
  • Knowledge translation resources — defined as systematic review findings adapted to a more directly useful form including systematic review summaries (which encapsulate the take-home messages), overviews (synthesized findings of systematic reviews in a given topic area), or policy briefs (based on systematic reviews and other research studies incorporating context-specific data to address a specific policy question).13 Examples include Cochrane Corner and BMJ Rapid Recommendations.

Methods

Search strategy

The search strategy will be developed in consultation with an academic librarian with the aim of finding both published and unpublished literature using a three-phase search strategy. First, an initial limited search of the US National Library of Medicine database (PubMed; 1966–current) will be undertaken, followed by analysis of the text words contained in the titles and abstracts and of the index terms used to describe the sources of information. This analysis will inform the development of a comprehensive search strategy (second phase), which will be tailored to each information source. In the third phase, the reference lists of the information sources included in the review will be screened for further sources. No date restrictions will be applied because the authors are unable to determine when the first resource meeting the criteria for an evidence summary became available, and choosing an arbitrary exclusion date may exclude relevant resources.

Sources of information with titles and abstracts published in English will be considered for inclusion. A full search strategy for PubMed is detailed in Appendix I.
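
As an illustrative sketch only, and not part of the protocol itself, the initial limited PubMed search could be run programmatically against the NCBI E-utilities using Biopython's Entrez module. The query below is an abbreviated stand-in for the full Appendix I strategy, and the contact email address is a placeholder.

from Bio import Entrez

# NCBI asks for a contact address; this one is a placeholder.
Entrez.email = "reviewer@example.org"

# Abbreviated stand-in for the full Appendix I strategy.
query = '("evidence summary" OR "rapid review") AND "evidence based practice"'

handle = Entrez.esearch(db="pubmed", term=query, retmax=100)
record = Entrez.read(handle)
handle.close()

print(f"Records retrieved: {record['Count']}")
print(record["IdList"][:10])  # first ten PubMed IDs for title/abstract screening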

The databases to be searched include the peer-reviewed scientific journal databases Cumulative Index to Nursing and Allied Health Literature (CINAHL), Scopus, ProQuest Dissertations and Theses, and Embase. The gray literature search will include relevant government and university websites, the Health Evidence Network website, the World Health Organization (WHO) Health Evidence Network website, the McMaster Health Systems Evidence website, and relevant websites included in the Canadian Agency for Drugs and Technologies in Health (CADTH) Grey Matters Handbook.
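
Purely as an organizational sketch, and not as part of the protocol, the second-phase searches tailored to each information source could be tracked in a simple structure; the source names mirror the list above, and the per-source query strings are placeholders to be completed once the comprehensive strategy is finalized.

# Illustrative only: information sources named in the protocol, with placeholder
# slots for the tailored second-phase search strings.
information_sources = {
    "peer_reviewed_databases": {
        "PubMed": "See Appendix I",  # full strategy reproduced in the appendix
        "CINAHL": None,              # tailored strategy to be added
        "Scopus": None,
        "ProQuest Dissertations and Theses": None,
        "Embase": None,
    },
    "gray_literature": [
        "Relevant government and university websites",
        "Health Evidence Network website",
        "WHO Health Evidence Network website",
        "McMaster Health Systems Evidence website",
        "Websites listed in the CADTH Grey Matters handbook",
    ],
}

# Record a tailored strategy once it has been developed (placeholder text).
information_sources["peer_reviewed_databases"]["CINAHL"] = "<tailored CINAHL strategy>"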

Study selection

Titles and abstracts, or the full citation record, will be screened against the inclusion and exclusion criteria by two reviewers independently, with any disagreements settled by a third reviewer. Information sources that meet the inclusion criteria will be classified and retained in EndNote X9.1 (Clarivate Analytics, PA, USA). Information sources that do not meet the inclusion criteria will be excluded, and the reasons for exclusion will be provided in an appendix in the final review report. Results of the search will be reported in full in the final report and presented in a PRISMA flow diagram.14
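
A minimal sketch, not part of the protocol, of how independent screening decisions and exclusion reasons could be logged and tallied for the PRISMA flow diagram; the record identifiers, decisions, and reasons shown are illustrative assumptions.

from collections import Counter

# (record_id, reviewer_1_decision, reviewer_2_decision, exclusion_reason_or_None)
screening_log = [
    ("rec001", "include", "include", None),
    ("rec002", "exclude", "exclude", "systematic review"),
    ("rec003", "include", "exclude", None),  # disagreement: referred to a third reviewer
]

included, exclusion_reasons, conflicts = [], Counter(), []
for record_id, r1, r2, reason in screening_log:
    if r1 == r2 == "include":
        included.append(record_id)
    elif r1 == r2 == "exclude":
        exclusion_reasons[reason] += 1
    else:
        conflicts.append(record_id)  # settled by a third reviewer

print(f"Included: {len(included)}; referred to third reviewer: {len(conflicts)}")
print("Exclusion reasons for the appendix:", dict(exclusion_reasons))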

Data extraction

Data will be extracted from information sources included in the scoping review using a modified version of a JBI data extraction tool15 (Appendix II); the tool will be pilot-tested before use. The data extracted will include specific details about the information source characteristics (e.g. author, year, type of source/study design, geographical location, setting, discipline), target audience, terminology used and definition of evidence summary, structural components of the evidence summary, core methodological components of the evidence summary (including but not limited to stakeholder engagement, search limitations, critical appraisal, literature summary methods, format of recommendations), and quality assessment components of identified tools or instruments. Each structural, methodological, and quality assessment component will be recorded in a separate field to assist analysis. Relevant data will be extracted from each included article by one reviewer and checked by a second reviewer.
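
A minimal sketch, not part of the protocol, of the extraction fields as a structured record so that each structural, methodological, and quality assessment component occupies its own field; the field names paraphrase the protocol text and the example values are assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class ExtractionRecord:
    author: str
    year: int
    source_type: str                     # e.g. journal article, report, website
    geographical_location: str
    setting: str
    discipline: str
    target_audience: str
    terminology_and_definition: str
    structural_components: List[str] = field(default_factory=list)
    methodological_components: List[str] = field(default_factory=list)
    quality_assessment_components: List[str] = field(default_factory=list)

# Example record with illustrative values only.
example = ExtractionRecord(
    author="Example author", year=2015, source_type="journal article",
    geographical_location="Australia", setting="point of care",
    discipline="nursing", target_audience="clinicians and policy makers",
    terminology_and_definition="evidence summary",
    methodological_components=["targeted database search", "critical appraisal"],
)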

Data presentation

The extracted data will be presented in a diagrammatic or tabular form in a manner that aligns with the objective and scope of this scoping review. Data presented will reflect the information collected using the data extraction tool. A narrative summary will accompany the tabulated and/or charted results and will describe how the results relate to the review objective and questions.
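
A minimal sketch, not part of the protocol, of one way the charted results could be tabulated with pandas: counting how often each methodological component appears across included sources. The example data are illustrative assumptions.

import pandas as pd

extracted = pd.DataFrame({
    "source": ["Source A", "Source B", "Source C"],
    "methodological_components": [
        ["targeted search", "critical appraisal"],
        ["comprehensive search", "critical appraisal", "evidence grading"],
        ["targeted search", "evidence grading"],
    ],
})

# One row per (source, component), then a frequency table to accompany the narrative summary.
component_counts = (
    extracted.explode("methodological_components")
    .groupby("methodological_components")["source"]
    .count()
    .sort_values(ascending=False)
)
print(component_counts)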

Acknowledgments

Robert Franchini, librarian, The University of Adelaide, for his assistance in developing the search strategy.

Appendix I: Search strategy

Search conducted in PubMed 11/07/2019; records retrieved: 7543

(((((((((rapid OR brief OR expedited OR quick) AND (review OR assessment OR synthesis OR advice OR report OR overview))))) OR (Evidence AND (summar OR brief OR note))) OR (Policy AND (summar OR brief OR note))) OR “rapid knowledge synthesis”) OR “Point-of-care system”) AND (Translational Medical Research/Methods [mh] OR method OR methods OR methodology OR methodological OR procedure OR process OR Health Policy [mh] OR quality OR “risk of bias” OR “critical appraisal” OR Evidence based practice/methods [mh] OR Review Literature as Topic [mh]) AND (Evidence based practice [mh] OR “evidence based practice” OR “evidence informed practice” OR “evidence based health care” OR “evidence based health policy” OR “evidence based decision making”)

Appendix II: Data extraction instrument

References

1. Bastian H, Glasziou P, Chalmers I. Seventy-five trials and eleven systematic reviews a day: how will we ever keep up? PLoS Med 2010; 7 (9):e1000326.
2. Munn Z, Lockwood C, Moola S. The development and use of evidence summaries for point of care information systems: a streamlined rapid review approach. Worldviews Evid Based Nurs 2015; 12 (3):131–138.
3. Tricco AC, Antony J, Zarin W, Strifler L, Ghassemi M, Ivory J, et al. A scoping review of rapid review methods. BMC Med 2015; 13 (1):224.
4. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev 2012; 1:10.
5. Tricco AC, Zarin W, Antony J, Hutton B, Moher D, Sherifali D, et al. An international survey and modified Delphi approach revealed numerous rapid review methods. J Clin Epidemiol 2016; 70 (1):61–67.
6. Boudoulas KD, Leier CV, Geleris P, Boudoulas H. The shortcomings of clinical practice guidelines. Cardiology 2015; 130 (3):187–200.
7. García LM, Sanabria AJ, Álvarez EG, Trujillo-Martín MM, Etxeandia-Ikobaltzeta I, Kotzeva A, et al. The validity of recommendations from clinical guidelines: a survival analysis. CMAJ 2014; 186 (16):1211–1219.
8. Campbell JM, Umapathysivam K, Xue Y, Lockwood C. Evidence-based practice point-of-care resources: a quantitative evaluation of quality, rigor, and content. Worldviews Evid Based Nurs 2015; 12 (6):313–327.
9. Aromataris E, Munn Z. Chapter 1: JBI systematic reviews. In: Aromataris E, Munn Z, editors. JBI Reviewer's Manual [Internet]. Adelaide: JBI; 2017 [cited 2019 July 23]. Available from: https://reviewersmanual.joannabriggs.org/.
10. Whittemore R, Knafl K. The integrative review: updated methodology. J Adv Nurs 2005; 52 (5):546–553.
11. World Health Organization (WHO). Health technology assessment: World Health Organization [Internet]. Geneva: World Health Organization; 2019 [cited 2019 July 19]. Available from: https://www.who.int/medical_devices/assessment/en/.
12. Steinberg E, Greenfield S, Wolman DM, Mancher M, Graham R. Clinical practice guidelines we can trust. Washington, DC: The National Academies Press; 2011.
13. Chambers D, Wilson PM, Thompson CA, Hanbury A, Farley K, Light K. Maximizing the impact of systematic reviews in health care decision making: a systematic scoping review of knowledge-translation resources. Milbank Q 2011; 89 (1):131–156.
14. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. Ann Intern Med 2009; 151 (4):264–269.
15. Peters MDJ, Godfrey C, McInerney P, Baldini Soares C, Khalil H, Parker D. Chapter 11: scoping reviews. In: Aromataris E, Munn Z, editors. JBI Reviewer's Manual [Internet]. Adelaide: JBI; 2017 [cited 2019 July 23]. Available from: https://reviewersmanual.joannabriggs.org/.
Keywords:

critical appraisal; evidence summary; methods; quality; scoping review

© 2020 JBI