Introduction
Robust and trustworthy evidence synthesis products are essential to inform policy, legislation, and clinical decision-making. Evidence synthesis (eg, systematic reviews, scoping reviews, realist reviews) as a field is undergoing radical transformation and expansion. This change is driven by increased demand from policy-makers, practitioners, and the community for access to summarized and trustworthy evidence to guide decision-making. Historically, evidence synthesis strategies focused on the effectiveness of an approach or treatment. However, there now exists a multitude of evidence synthesis types and methods conducted for multiple purposes across diverse fields of inquiry, evidence types, and questions.1 As such, enabling researchers to ensure that they are undertaking the most appropriate review approach to respond to a clinical or policy question (evidence synthesis) and enabling decision-makers to ensure that they are receiving the most appropriate output for their needs (knowledge translation) are both critical.
There has been substantial growth in the different types of evidence synthesis, with systematic reviews having the highest profile. These reviews are considered the pillar of evidence-based health care2 and are widely used to inform the development of trustworthy clinical guidelines.3-5 Organizations and networks such as JBI, Cochrane, Environmental Evidence, and the Campbell Collaboration provide guidance on the conduct and reporting of different types of evidence synthesis, including systematic reviews. Although guidance from these groups may differ regarding the approaches to conducting evidence synthesis, all characterize systematic reviews as a predefined process that is comprehensive, unbiased, structured, and transparent. Systematic reviews should be conducted by review groups with specialized skills, who set out to identify and retrieve all of the international evidence that is relevant to a particular question (or questions) and to appraise and synthesize the results of this search to inform practice, policy, and, in some cases, further research.3,6,7 Ultimately, the findings of systematic reviews should be reliable and meaningful to those who use them.8
As the methods to conduct systematic reviews have evolved and advanced, so too has the thinking around the types of questions that need to be answered in order to provide the best possible, evidence-based information.3,9 Systematic review approaches are now able to address many different types of questions, from investigating the effectiveness of a particular clinical treatment or intervention to questions of feasibility, appropriateness, and meaningfulness.10 There are now many different types of systematic reviews, such as reviews of effectiveness, prevalence and incidence, etiology and risk, economic evidence, measurement properties, qualitative approaches, and more.
It is now acknowledged that different questions require different evidence synthesis types. For example, scoping reviews (which were previously referred to as scoping studies)11 have emerged as a valid approach to answering broader questions not suited to a typical systematic review. In addition, other types of evidence synthesis have emerged, including realist reviews, mixed methods reviews, concept analyses, and others.1,12-15 At least 15 different methods have been proposed for the synthesis of qualitative research alone.16-19 These include thematic synthesis, narrative synthesis, realist synthesis, content analysis, meta-ethnography, and meta-aggregation.16-18 Some preliminary typologies of these diverse review types have been developed and can be used as a reference for researchers, policy-makers, and funders to assist them when deciding on a review approach.12-14
Typologies and taxonomies are types of classification systems, and are approaches to structuring, categorizing, or classifying items, ideas, approaches, or organisms (among others). Although various definitions exist, a typology can be thought of as based on “theoretical or conceptual distinctions,”20(p.7) while a taxonomy “uses empirical observations to classify items into categories.”20(p.7) The approaches of these typologies, taxonomies, and other classification systems vary, from those that focus on overall review methods (eg, systematic, scoping, or rapid reviews)13 to typologies that have identified the different types of a systematic review (eg, qualitative, effectiveness, or diagnostic test accuracy systematic reviews).1
The development of typologies, taxonomies, and classification systems of evidence synthesis has not as yet been conducted through a systematic process, nor with extensive stakeholder engagement from members of the academic community. These typologies have also been criticized for being incomplete and lacking consistency in the terminology.20 Therefore, there is a need to identify the current typologies, taxonomies, and classification systems, and to review their characteristics and the methodological approaches used in their development, as well as to map the evidence synthesis types addressed within each one. This scoping review will identify contemporary evidence synthesis types and previously proposed typologies, classification schemes, or taxonomies that have guided evidence synthesis to date through extensive database searching and hand-searching, and extensive stakeholder engagement. The results of this review will be an inventory of evidence synthesis classification schemes detailing distinct evidence synthesis types and their methods, which will then inform the development of a comprehensive online and publicly available data bank of evidence synthesis methods and methodologies.
Review questions
What typologies, taxonomies, and classification schemes or compendia have been proposed for evidence synthesis methods?
- What evidence synthesis types/approaches have been described in typologies, taxonomies, and classification schemes or compendia discussing multiple evidence synthesis approaches?
- What are the characteristics of these typologies, taxonomies, and classification systems or compendia approaches?
- What are the facets against which evidence syntheses have been classified or distinguished from one another?
- What are the methodological approaches to the development of these typologies, taxonomies, and classification systems?
Inclusion criteria
Participants
This criterion is not relevant to this review, as we will be investigating typologies, taxonomies, and classification schemes for evidence synthesis.
Concept
The concepts of interest are classification schemes, taxonomies, typologies, compendia, groupings, or overviews of evidence synthesis methods and approaches, particularly structured resources that describe and distinguish evidence synthesis approaches from each other. This could include manuals and guidance for conducting evidence synthesis projects that have explored multiple evidence synthesis approaches, such as the JBI Manual for Evidence Synthesis or the Cochrane Handbook. Guidance papers or manuals that do not include multiple types (two or more) will be excluded. Papers that discuss variations of the same type of review (eg, differences in scoping reviews) will also be included, as will empirical studies (such as methodological studies or scoping reviews that investigate methodology) evaluating differences in review approaches. For inclusion, a report should describe at least two types of evidence synthesis and include a definition and/or distinguishing characteristics between the types.
Context
This review will not be limited by any context, setting, discipline, or field. All evidence synthesis classification schemes will be included across diverse fields, including, but not limited to, clinical sciences, public health, social sciences, environmental sciences, biosciences, engineering, policy, law, and education.
Types of sources
This scoping review will consider documents, such as discussion papers, commentaries, books, editorials, manuals, handbooks, formal guidance from major organizations, and other resources that describe different approaches to evidence synthesis and potentially provide a classification scheme of these different approaches, as well as approaches to categorization.
Methods
The proposed scoping review will be conducted in accordance with the JBI methodology for scoping reviews21,22 and reported in line with the Preferred Reporting Items for Systematic Reviews and Meta-Analyses extension for Scoping Reviews (PRISMA-ScR).23
This scoping review aligns with the guidance for conducting a scoping review when needing to examine key characteristics or factors related to a concept.24 This project constitutes part of the Evidence Synthesis Taxonomy Initiative, which endeavors to develop, in collaboration with key stakeholders, an online, interactive, and scalable evidence synthesis taxonomy. The Evidence Synthesis Taxonomy Initiative is not formally associated with Evidence Synthesis International or the Global Evidence Synthesis Initiative (or other groups/organizations in this field) but aims to collaborate with all researchers, end users, and organizations in this space. This work is linked with the Unity Health Toronto Knowledge Translation Program’s Right Review Tool and will assist in updating the tool.25
The details of this review project are available on Open Science Framework (https://osf.io/qwc27).
Engagement
The authors of this review are all working in the field of evidence synthesis and are invested in improving synthesis methodology to better support evidence-based health care and other fields. This scoping review will include engagement with other evidence synthesis researchers through an advisory panel overseen by an executive team. The advisory panel includes over 80 researchers, guideline developers, methodologists, health professionals, policy-makers, patient partners, and information scientists with an interest in methodology and evidence synthesis across various fields and disciplines. They have had the opportunity to review this protocol and offer feedback on the research questions, search strategies, information sources, and the data extraction form. The advisory panel and executive team will continue to support the scoping review in the following ways:
- The advisory panel and executive team will continue to be updated about the project every month via email, and every quarter via a Zoom meeting. Members of the advisory panel and executive team will be asked to provide feedback at each stage of the scoping review.
- The advisory panel and executive team will provide feedback on which databases or websites should be searched.
- The advisory panel and executive team will be asked to assist in the identification of available typologies, taxonomies, and classification systems.
- The advisory panel and executive team will be presented with all findings that are identified from the scoping review.
Individuals who form part of the advisory panel and executive team, and who have made a substantial commitment that meets the International Committee of Medical Journal Editors (ICMJE) authorship requirements, will be eligible to be an author on the final report.
Search strategy
The search strategy will aim to locate both published and unpublished documents. The search strategy was developed with the help of an expert health librarian (CP) and will be peer-reviewed using the Peer Review of Electronic Search Strategies (PRESS) guideline statement.26 An initial exploratory search of MEDLINE (Ovid) was undertaken to identify key articles on the topic. The terminology used within the articles was analyzed and used to develop a full search strategy for MEDLINE via Ovid (see Appendix I) and other selected sources. This search was tested to ensure it picked up key articles. The search strategy, including all identified keywords and index terms, will be adapted for each included database and/or information source using manual translation.27 The reference lists of all included documents will be screened for additional studies using citationchaser (Zenodo, Geneva, Switzerland).28 Onwards citation screening will not be performed due to the likelihood of high citation counts of seminal discussions and guidance papers. However, to supplement the search, we will also investigate the co-citations of seminal papers1,12–14,29 on the topic and screen co-citations of these papers where there are two or more co-citations. The advisory panel will be asked whether they know of any papers that are relevant to this review topic or others in the field who may know of relevant papers.
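The co-citation rule described above (screen a candidate record only when it appears alongside the seminal papers in the reference lists of two or more citing articles) can be sketched in a few lines. This is an illustrative sketch only, not part of the protocol's tooling; all paper identifiers and the `citing_articles` data are hypothetical.

```python
# Sketch of the co-citation threshold rule: flag a record for screening
# when two or more citing articles cite it alongside a seminal paper.
# Identifiers below are invented for illustration.
from collections import Counter

seminal = {"munn2018", "gough2012", "grant2009"}

# Each entry is the reference list of one (hypothetical) citing article.
citing_articles = [
    {"munn2018", "paperA", "paperB"},
    {"gough2012", "paperA", "paperC"},
    {"grant2009", "paperB", "paperD"},
    {"munn2018", "paperB"},
]

co_citations = Counter()
for refs in citing_articles:
    if refs & seminal:               # article cites at least one seminal paper
        for ref in refs - seminal:   # everything else it cites is a co-citation
            co_citations[ref] += 1

# Records meeting the "two or more co-citations" threshold
to_screen = sorted(p for p, n in co_citations.items() if n >= 2)
print(to_screen)  # ['paperA', 'paperB']
```

In practice, the citing articles and their reference lists would come from a citation index rather than being typed in by hand.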
There will be no exclusions based on language or publication status (ie, published, unpublished, in press, in progress, preprint). There are no date limitations. For documents in languages other than English, DeepL (DeepL, Cologne, Germany) will be used to determine whether they meet the inclusion criteria. Where documents are published in a language other than English and meet the inclusion criteria, DeepL translations will be reviewed by a person fluent in the language.
The databases to be searched include MEDLINE (Ovid), Embase, CINAHL with Full Text (EBSCO), ERIC (EBSCO), Scopus, Compendex, and JSTOR. Sources of unpublished studies/gray literature to be searched include Google, the Lens, and Dimensions. Hand-searching the websites of Cochrane, JBI, the Agency for Healthcare Research and Quality, the Collaboration for Environmental Evidence, the Campbell Collaboration, the EPPI-Centre, and the University of York Centre for Reviews and Dissemination (CRD) will be implemented to identify any suitable resources. All search strategies will be submitted with the scoping review report as supplementary material.
Study selection
Following the search, all identified citations will be collated and uploaded into EndNote 20 (Clarivate Analytics, PA, USA) and duplicates removed. Data will then be imported into Covidence (Veritas Health Innovation, Melbourne, Australia). Prior to title and abstract screening, a pilot exercise of 50 documents will occur between the reviewers, who will review each title and abstract independently. Further piloting, discussion, and clarification of the screening guidance will occur if >70% agreement has not been achieved. Titles and abstracts will then be screened independently by pairs of reviewers (so that each record is screened by at least two people) for assessment against the inclusion criteria for the review.
Potentially relevant documents will be retrieved in full and subjected to another pilot test of 10 full-text articles; if >70% agreement is achieved, full-text screening will commence. The full text of selected citations will be assessed in detail against the inclusion criteria by two or more independent reviewers (so that each record is screened by at least two people) in Covidence. Reasons for excluding full-text documents that do not meet the inclusion criteria will be recorded and reported in the final review. Any disagreements that arise between the reviewers at each stage of the selection process will be resolved through discussion or with an additional reviewer. The results of the search and the study inclusion process will be reported in full in the final review and presented in a PRISMA flow diagram.
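The >70% agreement threshold used in both screening pilots is simple percent agreement between paired reviewers. A minimal sketch, with entirely invented reviewer decisions, shows the calculation; the protocol itself does not specify any software for this step.

```python
# Illustrative sketch of the pilot-screening agreement check.
# The 70% threshold mirrors the protocol; the decision lists are invented.

def percent_agreement(decisions_a, decisions_b):
    """Proportion of records on which two reviewers agree, as a percentage."""
    if len(decisions_a) != len(decisions_b):
        raise ValueError("Both reviewers must screen the same records")
    matches = sum(a == b for a, b in zip(decisions_a, decisions_b))
    return 100 * matches / len(decisions_a)

# Hypothetical pilot of 10 records
reviewer_1 = ["include", "exclude", "exclude", "include", "exclude",
              "include", "exclude", "exclude", "include", "exclude"]
reviewer_2 = ["include", "exclude", "include", "include", "exclude",
              "include", "exclude", "exclude", "exclude", "exclude"]

agreement = percent_agreement(reviewer_1, reviewer_2)
print(f"{agreement:.0f}% agreement")  # prints "80% agreement"
if agreement <= 70:
    print("Below threshold: further piloting and discussion required")
```

Note that raw percent agreement does not correct for chance; reviewers wanting a chance-corrected statistic would use Cohen's kappa instead, which the protocol does not require.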
Data extraction
Using a standardized extraction form designed specifically for this review, we will conduct a pilot exercise in which the data extractors calibrate and test the form on the same five to 10 full-text reports. Each document will then be extracted by pairs of reviewers working independently (so that each document is extracted by at least two people) using the data extraction tool developed by the reviewers (Appendix II). Data will be extracted at two levels: i) the typology/classification system itself, and ii) the evidence synthesis approaches detailed within it.
For the first level, where data are extracted about the classification system itself, the extraction will include the specific details about the document’s aim; field of research; type of evidence source (eg, commentary, manual/guidance); type of classification system (eg, typology, taxonomy); development approach for the classification system; type of evidence synthesis approaches included; and the facets through which different approaches have been classified.
For extraction at the individual synthesis approach level, where possible, the question framework and the indication and/or purpose of the review (such as their utility to decision-making) will be extracted and tabulated. Any definitions for review types will be extracted. Details regarding particular methods (such as risk of bias assessment approaches and data synthesis techniques) will also be extracted, along with links to formal guidance; whether there is a reporting standard; perceived strengths and weaknesses; and intended influence on policy, practice, and future research. Additionally, if any tools exist to appraise/assess the risk of bias of the synthesis approach itself or any approaches to establish certainty/confidence or the evidential quality are mentioned, these will be extracted.
The draft data extraction form may require modification following piloting on retrieved documents and during the process of extracting data from each included document. Modifications will be detailed in the final scoping review. Any disagreements that arise between the reviewers will be resolved through discussion or with an additional reviewer. If appropriate, authors of papers will be contacted to request missing or additional data.
Data analysis and presentation
All identified typologies/classification systems will be listed and described in detail. Elements included in each classification system will be presented in tables as per data extracted at the classification level. Additionally, the individual evidence synthesis approaches detailed within each classification system will be presented. These will then undergo descriptive analysis and be reported as frequencies. Evidence synthesis approaches that are similar (eg, effectiveness and intervention reviews) will not be combined, but rather reported and tabulated as individual approaches (thereby removing any subjectivity in coding and classifying evidence synthesis approaches at this stage). After listing all of the individual synthesis approaches, these will be grouped into clusters based on similarity in approach and purpose. Network diagrams will be developed where appropriate. Definitive selection of the review types and terminology will take place at a subsequent stage of the broader Evidence Synthesis Taxonomy Initiative and is beyond the scope of this review.
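The frequency tabulation described above (count each approach label as reported, without merging similar labels) amounts to a simple count across classification systems. The sketch below is illustrative only; the system contents and labels are invented, and the protocol does not prescribe any software for this step.

```python
# Illustrative sketch of the descriptive frequency step: count how often each
# named synthesis approach appears across included classification systems,
# keeping similar labels (eg, "effectiveness review" vs "intervention review")
# separate, as the protocol specifies. Data below are hypothetical.
from collections import Counter

# Each list holds the approach labels reported by one classification system.
classification_systems = [
    ["systematic review", "scoping review", "rapid review"],
    ["systematic review", "realist review", "scoping review"],
    ["effectiveness review", "intervention review", "systematic review"],
]

frequencies = Counter(
    label for system in classification_systems for label in system
)

# Tabulate, most frequent first
for label, n in frequencies.most_common():
    print(f"{label}: {n}")
```

Keeping labels verbatim at this stage defers all judgment about which labels denote the same approach to the later, explicitly subjective clustering step.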
Acknowledgments
The authors thank the members of the Evidence Synthesis Taxonomy Initiative for their review of this work.
Funding
ZM is supported by an NHMRC Investigator Grant, APP1195676.
ACT is funded by a Tier 2 Canada Research Chair in Knowledge Synthesis.
BC is funded by a Health Research Board (HRB) Emerging Investigator Award (EIA-2019-09). The funders will have no role in the development or approval of content.
Author contributions
ZM conceived the topic of research, wrote the first draft, incorporated feedback, and finalized the manuscript. All other authors contributed to discussions and initial ideas for the manuscript, reviewed drafts, provided feedback, and approved the final draft.
Availability of data and materials
We will endeavor to make all data and code used within this project available. This will be available in the final published manuscript, the supplementary files, or on the Open Science Framework project space (https://osf.io/kep8c/). We will endeavor to answer any queries from readers and supply the forms they need, most likely as Microsoft Excel (Microsoft Corp., Redmond, Washington, USA) spreadsheets.
Appendix I: Search strategy
Ovid MEDLINE(R) ALL
Date searched: April 28, 2022
Records retrieved: 2300
1. (review literature as topic or systematic reviews as topic).sh.
2. classification.sh. or classification.fs. or (classification or classifications or classify or classified or classifying or type or types or taxonomy or taxonomies or typology or typologies or ontology or ontologies or approach or approaches or category or categories or categorization or categorize or categorise or categorisation or categorizations or categorisations or compendia or compendium or comparison or comparisons or comparing or compared or difference or different or differences or differentiate$ or family or families or group or grouped or groups or grouping or groupings or hierarchies or hierarchy or hierarchical$ or method or methods or methodology or methodologies or methodological$ or schema or schematic$ or scheme or schemes).ti.
3. 1 and 2
Appendix II: Draft data extraction form
Part 1: Extraction at the taxonomy/classification level
Item | Response format
Author/year (eg, Smith, 2020) | Free text
Endorsing organization (eg, JBI, Cochrane, Campbell), where applicable | Free text
Aim (objective) | Free text
Field | Structured options: clinical sciences; public health; social sciences; environmental sciences; biosciences; engineering; policy; law; education; additional fields if they arise
Type of evidence source (eg, guidance paper, manual) | Structured options: manual/handbook; commentary/discussion paper; book; methodological study; additional options if they arise
Type of classification system | Structured options: typology; taxonomy; simple listing; additional options if they arise
Type of evidence synthesis included | Refer to Part 2 extraction form
Number of evidence synthesis approaches described | Number
Methodological approach to developing system | Free text
Facets against which evidence syntheses have been classified | Free text
Did stakeholder engagement occur in the development of the system? | Responses: yes; no; ongoing; planned; not mentioned (if yes, provide details)
Has an evaluation of the system occurred? | Responses: yes; no; ongoing; planned; not mentioned (if yes, provide details)
Notes | Free text
Part 2: Extraction of the individual evidence synthesis approaches
Items to be extracted for each evidence synthesis approach:
- Author/year
- Name of evidence synthesis type
- Definition of evidence synthesis type
- Question framework
- Search (methods)
- Appraisal tools
- Synthesis/analysis approaches
- Reporting standards
- Link to formal guidance
- Why this evidence synthesis type would be conducted (purpose)
- Risk of bias/critical appraisal tool for the evidence synthesis type
- Evidential quality/certainty/confidence approach
- Perceived strengths of this evidence synthesis type
- Perceived weaknesses of this evidence synthesis type
- Intended influence on policy, practice, and future research (including utility to decision-making)
- Notes
References
1. Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 2018;18(1):5.
2. Munn Z, Porritt K, Lockwood C, Aromataris E, Pearson A. Establishing confidence in the output of qualitative research synthesis: the ConQual approach. BMC Med Res Methodol 2014;14:108.
3. Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Reports 2004;2:45–64.
4. Pearson A, Jordan Z, Munn Z. Translational science and evidence-based healthcare: a clarification and reconceptualization of how knowledge is generated and used in healthcare. Nurs Res Pract 2012;2012:792519.
5. Steinberg E, Greenfield S, Mancher M, Wolman DM, Graham R. Clinical practice guidelines we can trust. Washington, DC, USA: National Academies Press; 2011.
6. Aromataris E, Pearson A. The systematic review: an overview. Am J Nurs 2014;114(3):53–58.
7. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gøtzsche PC, Ioannidis JPA, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009;339:b2700.
8. Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. Adelaide, JBI; 2020 [cited 2022 May 23]. Available from: https://synthesismanual.jbi.global.
9. Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. Int J Evid Based Healthc 2005;3(8):207–215.
10. Jordan Z, Lockwood C, Munn Z, Aromataris E. The updated Joanna Briggs Institute model of evidence-based healthcare. Int J Evid Based Healthc 2019;17(1):58–71.
11. Colquhoun HL, Levac D, O’Brien KK, Straus S, Tricco AC, Perrier L, et al. Scoping reviews: time for clarity in definition, methods, and reporting. J Clin Epidemiol 2014;67(12):1291–1294.
12. Gough D, Thomas J, Oliver S. Clarifying differences between review designs and methods. Syst Rev 2012;1:28.
13. Grant MJ, Booth A. A typology of reviews: an analysis of 14 review types and associated methodologies. Health Info Libr J 2009;26(2):91–108.
14. Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J Clin Epidemiol 2011;64(1):11–20.
15. Tricco AC, Soobiah C, Antony J, Cogo E, MacDonald H, Lillie E, et al. A scoping review identifies multiple emerging knowledge synthesis methods, but few studies operationalize the method. J Clin Epidemiol 2016;73:19–28.
16. Thorne S, Jensen L, Kearney MH, Noblit G, Sandelowski M. Qualitative metasynthesis: reflections on methodological orientation and ideological agenda. Qual Health Res 2004;14(10):1342–1365.
17. JBI. Joanna Briggs Institute Reviewers’ Manual, 2011 edition. Adelaide, JBI; 2011.
18. Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol 2009;9:59.
19. Booth A, Noyes J, Flemming K, Gerhardus A, Wahlster P, van der Wilt GJ, et al. Structured methodology review identified seven (RETREAT) criteria for selecting qualitative evidence synthesis approaches. J Clin Epidemiol 2018;99:41–52.
20. Littell JH. Conceptual and practical classification of research reviews and other evidence synthesis products. Campbell Syst Rev 2018;14(1):1–21.
21. Peters M, Godfrey C, McInerney P, Munn Z, Tricco A, Khalil H. Chapter 11: Scoping Reviews. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. Adelaide, JBI; 2020 [cited 2022 May 23]. Available from: https://synthesismanual.jbi.global.
22. Peters MDJ, Marnie C, Tricco AC, Pollock D, Munn Z, Alexander L, et al. Updated methodological guidance for the conduct of scoping reviews. JBI Evid Synth 2020;18(10):2119–2126.
23. Tricco AC, Lillie E, Zarin W, O’Brien KK, Colquhoun H, Levac D, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med 2018;169(7):467–473.
24. Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol 2018;18(1):143.
25. Amog K, Pham B, Courvoisier M, Mak M, Booth A, Godfrey C, et al. The web-based “Right Review” tool asks reviewers simple questions to suggest methods from 41 knowledge synthesis methods. J Clin Epidemiol 2022;147:42–51.
26. McGowan J, Sampson M, Salzwedel DM, Cogo E, Foerster V, Lefebvre C. PRESS Peer Review of Electronic Search Strategies: 2015 guideline statement. J Clin Epidemiol 2016;75:40–46.
27. Clark JM, Sanders S, Carter M, et al. Improving the translation of search strategies using the Polyglot Search Translator: a randomized controlled trial. J Med Libr Assoc 2020;108(2):195–207.
28. Haddaway N, Grainger M, Gray C. citationchaser: an R package and Shiny app for forward and backward citation chasing in academic searching. Res Synth Methods 2021;13(4):533–545.
29. Gough D, Thomas J, Oliver S. Clarifying differences between reviews within evidence ecosystems. Syst Rev 2019;8(1):170.