Adherence of meta-aggregative systematic reviews to reporting standards and methodological guidance: a methodological review protocol

Munn, Zachary; Dias, Mafalda; Tufanaru, Catalin; Porritt, Kylie; Stern, Cindy; Jordan, Zoe; Aromataris, Edoardo; Pearson, Alan

JBI Database of Systematic Reviews and Implementation Reports: April 2019 - Volume 17 - Issue 4 - p 444–450
doi: 10.11124/JBISRIR-2017-003550

Review objective: The objective of the review is to examine Joanna Briggs Institute (JBI) qualitative meta-aggregative reviews to determine:

  1. To what extent published meta-aggregative reviews adhere to the most current JBI methodological guidance and reporting standards.
  2. To what extent published meta-aggregative reviews adhere to the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement.

Joanna Briggs Institute, Faculty of Health and Medical Sciences, The University of Adelaide, Adelaide, Australia

Correspondence: Zachary Munn,

EA is the Editor-in-Chief and CS is a Senior Associate Editor of the JBI Database of Systematic Reviews and Implementation Reports. They were blinded to the management and decision-making processes associated with this manuscript.

Introduction


Qualitative evidence is of increasing importance in health services policy, planning and delivery. It can play a significant role in understanding how individuals and communities perceive health, manage health and make decisions related to health service usage.1 As with quantitative research, the results of a single study alone should not be used to guide practice.2 To develop recommendations for evidence-based healthcare practice, pooled data, rather than the findings of single studies, is necessary, and thus the findings of qualitative research should be synthesized.3 As such, there is increasing interest in the potential of qualitative evidence synthesis to inform complex decision-making processes in policy and practice.3,4

A number of different methods have been proposed for the pooling of qualitative findings.5-7 These include thematic synthesis, narrative synthesis, realist synthesis, content analysis, meta-ethnography and meta-aggregation.5-7 Meta-ethnography and meta-aggregation are two common approaches to synthesis.

The meta-aggregative approach (as advocated by the Joanna Briggs Institute [JBI]) was developed in the early 2000s by an expert group of international qualitative researchers.1-3 The group concluded that an approach congruent with systematic review methods could be developed that would respect and incorporate the philosophical traditions of the critical and interpretive paradigms, but also promote qualitative concepts related to dependability, credibility and transferability; in other words, a valid and reliable approach to systematic reviews that would result in an auditable decision trail. As such, a comprehensive systematic search is required in all meta-aggregative reviews. There now exists clear guidance regarding the exact processes a qualitative systematic reviewer should follow when using the meta-aggregative approach.3

Meta-aggregation, similar to most methods of qualitative research synthesis,7 is a robust approach able to deal with heterogeneity or differences across studies. This is largely because meta-aggregation focuses on synthesizing study findings (the author's analytical interpretation of study data), not the study data itself (such as the empirical data collected). Therefore, as long as two or more studies focus on the same phenomena of interest, their findings can be pooled, regardless of the study methodology (e.g. phenomenology, ethnography or grounded theory) or method used. This is a critical assumption of meta-aggregation. Not all approaches agree with synthesizing across different traditions and methodologies.8,9 The original conception of meta-ethnography by Noblit and Hare advised only synthesizing across studies that have used the same method.9 However, reviewers using approaches that do synthesize across traditions, such as meta-aggregation, consider the combining of data from multiple theoretical and methodological traditions a strength.8

Although critical appraisal is not a necessary stage in some approaches to qualitative synthesis (in meta-ethnography, for example, the practice remains contentious),10 it is required in all meta-aggregative reviews. Garratt and Hodkinson argue that it is both illogical and pointless to attempt to predetermine a definitive set of criteria against which all qualitative research should be judged.11 Nonetheless, in recent years, the number of critical appraisal and quality assessment tools has increased rapidly. There now exists a general acceptance of the need for high quality qualitative research, and for some sort of appraisal of studies to assess methodological limitations during the review process. However, there is still much debate regarding what criteria or checklist to use to evaluate qualitative research, whether studies should be excluded following appraisal, and whether cut-off points or sum scales should be used.3,4,12

Meta-aggregation aggregates the findings of included studies. It requires reviewers to identify and extract the findings from studies included in the review, to categorize these study findings and to aggregate these categories to develop synthesized findings.3 This approach essentially mirrors the processes of a quantitative review whilst holding to the traditions and requirements of qualitative research. The essential characteristic of a meta-aggregative review is that the reviewer avoids re-interpretation of included studies, and instead presents the findings of the included studies as intended by the original authors. Meta-aggregation is based on a clear protocol that defines the question and the methods for answering it through the data retrieved. A comprehensive search strategy is required, as is critical appraisal using a standardized critical appraisal instrument(s). Data extraction involves extracting findings, in addition to the data that gives rise to findings, using a data extraction instrument. Synthesis involves the aggregation of findings into categories, and of the categories into synthesized findings that inform practice or policy. A standardized visual representation is used in meta-aggregation in order to present the findings, categories and synthesized findings.

Extracting findings is both the second phase of data extraction and the first step in data synthesis. For qualitative evidence the units of extraction in this process are specific findings (and illustrations from the text that demonstrate the origins of the findings). In meta-aggregation a finding is defined as: “a verbatim extract of the author's analytic interpretation of the results or data”.3(P.183) The “data” may be in the form of a theme, metaphor or rich descriptions.

Meta-aggregation also requires reviewers to extract an illustrative excerpt that the researcher presents in support of each particular finding. Findings that do not have a link back to the research participants may be considered less credible than findings where the author's analytic interpretation can be verified against the words of the research participants. From the JBI perspective, each finding that is extracted will, where possible, be supported by an illustration that is a verbatim extraction of the words of a participant from that published piece of research. Where this is not possible, the illustration may be either a field-work observation or “other supporting data” (e.g. photo, opinions, newspaper article, painting, mask, object, artifact, etc.). It is only necessary to extract one such supporting illustration. The supporting illustration must always be extracted verbatim from the published report in which the researcher illustrates the finding.

The JBI meta-aggregative approach is an important and impactful approach to qualitative research synthesis because it moves beyond theory to produce statements in the form of “lines of action” which then lead to recommendations for policy and practice. The final stage in the meta-aggregative process is to develop a meta-synthesis, a set of synthesized findings that draw some conclusions of use to practice. By contrast, many other qualitative synthesis methods only suggest implications for action that can be drawn or inferred from the synthesis exercise. A synthesized finding, as defined by the JBI, is an overarching description of a group of categorized findings that allow for the generation of recommendations for practice.3 Theoretically, this pragmatic approach allows for the delivery of readily usable synthesized findings, based on the voices of relevant stakeholders, to inform decision-making at the clinical or policy level.3 This is often produced in the form of an “if-then” statement or in a more indicatory form. The synthesized findings produced using meta-aggregative methods are practice theory statements grounded in the data. The result is a summary of the evidence in terms of its implications for practice.

Since the early 2000s, formal methodological guidance for conducting meta-aggregative reviews has existed. In 2012, a framework for reporting the synthesis of qualitative research was developed: the Enhancing Transparency in Reporting the Synthesis of Qualitative Research (ENTREQ) statement.13 The ENTREQ statement is an internationally accepted reporting standard for qualitative research synthesis. Despite the guidance available, to our knowledge, no research has yet addressed to what extent published meta-aggregative reviews conform to the methodological guidance and to the ENTREQ statement. A recent review of meta-ethnographic studies found that despite the availability of clear guidance, many reviewers had used and applied this method inappropriately.14 If meta-aggregative reviews fail to adhere to the available guidance, this could impact on the transferability and usefulness of any recommendations for policy and practice generated by these reviews. If this guidance is not being followed, further research could help to determine why this is the case and how compliance with the guidance may be facilitated. Therefore, the purpose of this methodological systematic review is to determine the extent to which published meta-aggregative reviews conform to both the available methodological guidance and the ENTREQ statement.


Inclusion criteria

Types of authors

Systematic reviews conducted by any author teams will be considered for inclusion.


Types of studies

This methodological review will consider reviews that state they have used a meta-aggregative approach, or that they are a JBI qualitative review, and have been published in the JBI Database of Systematic Reviews and Implementation Reports (JBISRIR).

Systematic reviews that include more than one evidence type (e.g. an effectiveness review with a qualitative or meta-aggregative component) will not be eligible for inclusion. This is warranted as the JBI methodological guidance for meta-aggregative reviews and the ENTREQ statement are not designed to address the conduct and reporting, respectively, of comprehensive or mixed methods systematic reviews that include a meta-aggregative component.

Systematic reviews published since 2015 will be included in this review, as the most recent JBI Reviewer's Manual and guidance for qualitative meta-aggregative reviews were published in 2014.15


Types of data

The specific data of interest for this review are the steps and/or processes related to the design, conduct and reporting of JBI qualitative systematic reviews using meta-aggregation, particularly in terms of compliance with JBI and ENTREQ reporting criteria and guidance. The specific type of data is described in further detail in the data extraction section.


Search strategy

This review will search the JBISRIR since 2015 to identify published reviews following the meta-aggregative approach. Key terms for searching will include but not be limited to the terms “qualitative”, “meta-synthesis”, “meta-aggregation” or “Qualitative Assessment and Review Instrument (QARI)”. We are aware that researchers have published systematic reviews claiming to have followed the JBI meta-aggregative approach in other journals; however, we are only interested in formal JBI reviews in this project.

Study selection

Following the search of the JBISRIR, titles and abstracts of the citations will be imported into Endnote (Clarivate Analytics, PA, USA). All citations will be screened by title and abstract to determine potential eligibility by two independent reviewers. If reviewers are unable to determine eligibility from title and abstract alone, the full text will be retrieved and reviewed. Discrepant decisions will be discussed between the two reviewers and if needed a third reviewer will be used to arbitrate the decision.

All potential citations that meet the screening process will be retrieved in full text and another process of screening will be performed by two reviewers to confirm final inclusion in the review. If studies are excluded at the full text stage, a reason will be provided for their exclusion. Discrepant decisions will be discussed between the two reviewers and if needed a third reviewer will be used to arbitrate the decision.


Data extraction

Data extraction will be conducted independently by two reviewers. Each reviewer will extract data using a standardized data extraction tool. The data extraction tool will be designed in an Excel spreadsheet. Each reviewer will extract data directly into the spreadsheet.

A pilot of the data extraction tool will be conducted prior to formal data extraction. A subset of three studies will be selected for the pilot extraction and data will be extracted independently by the two reviewers. Upon completion, the two reviewers will discuss the tool and make any changes if required. Discrepant decisions will be discussed between the two reviewers and if needed a third reviewer will be used to arbitrate the decision. The fields of data that will be extracted are shown in Appendix I.


Data synthesis

Each study will be assessed for compliance to the ENTREQ statement and JBI guidance to meta-aggregation.

A “yes” response will indicate compliance with the guidance. A “no” response will indicate the guidance has not been met or followed. Full compliance will be determined when a single study receives all “yes” responses. Partial compliance will be determined when a single study receives a mixture of both yes and no responses. Zero compliance to the guidance will be determined when a single study receives all “no” responses.
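The classification rule above can be sketched as a small decision function. This is an illustrative sketch only, not part of the protocol; the function name and response labels are assumed for illustration. Note that the protocol also records “partially” and “unclear” responses, and in this sketch any mixture other than all-“yes” or all-“no” falls into the partial category.

```python
def classify_compliance(responses):
    """Classify a review's overall compliance from its per-item responses.

    `responses` is a list of per-item judgements such as
    ["yes", "no", "partially", "unclear"]. Returns "full" when every
    response is "yes", "zero" when every response is "no", and
    "partial" for any other mixture, mirroring the rules stated above.
    """
    answers = {response.lower() for response in responses}
    if answers == {"yes"}:
        return "full"
    if answers == {"no"}:
        return "zero"
    return "partial"
```

For example, `classify_compliance(["yes", "no", "yes"])` would return `"partial"` under these assumed rules.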

Descriptive statistics will be used to analyze the data. Where appropriate, frequency and percentage distributions will be presented. Data will be presented in tables and graphs, and a narrative will be provided to describe the findings.
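A frequency and percentage distribution for a single checklist item could be tallied as in the following sketch. The function name, data layout and item key are assumptions made for illustration and are not specified by the protocol.

```python
from collections import Counter

def item_compliance_table(reviews, item):
    """Tally the responses for one checklist item across reviews and
    return {response: (count, percentage)}."""
    counts = Counter(review[item] for review in reviews)
    total = sum(counts.values())
    return {response: (n, round(100 * n / total, 1))
            for response, n in counts.items()}

# Hypothetical data: each review is a dict mapping item -> response.
reviews = [
    {"protocol_referenced": "yes"},
    {"protocol_referenced": "yes"},
    {"protocol_referenced": "no"},
    {"protocol_referenced": "unclear"},
]
# item_compliance_table(reviews, "protocol_referenced")
# -> {"yes": (2, 50.0), "no": (1, 25.0), "unclear": (1, 25.0)}
```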


Appendix I: Data extraction tool

JBI methods

To measure compliance with JBI guidance, the below items will be assessed as either met (yes), not met (no), partially met (partially), or unclear (unclear).

  1. Was there reference to a protocol?
    Determined by an in-text reference in the background or a citation in the reference list.
  2. (a) Were the population, phenomena of interest and context presented for the review inclusion criteria?
    (b) If any are missing, list what was missed.
  3. Were multiple types of primary qualitative research designs included (or considered for inclusion)?
  4. (a) Were two or more databases searched?
    (b) List the databases searched.
  5. (a) Were gray literature resources searched?
    (b) List which gray literature sources were searched.
  6. Was study screening/selection performed by two reviewers?
  7. Was there accurate reporting of exclusion following full-text screening?
    Check to see if reasons were provided for exclusion at full-text screening.
  8. (a) Was critical appraisal conducted?
    (b) Identify if a tool other than the JBI Critical Appraisal Checklist for Qualitative Research or QARI checklist was used.
  9. Did two or more reviewers conduct critical appraisal?
  10. Was it clear how the results of critical appraisal were considered in the review?
    Make a judgement based on the answer to the following four sub-questions:
    (a) Were studies excluded after critical appraisal?
    (b) If studies were excluded, was there justification for this exclusion?
    (c) If studies were excluded, describe how this decision was made (ad hoc or specified in the protocol).
    (d) Did the results of critical appraisal impact the analysis or interpretation of the results?
  11. Did two or more reviewers perform data extraction?
  12. Were findings extracted along with illustrations?
  13. Were findings assigned a level of credibility?
  14. Were findings extracted verbatim?
    To determine this, check the findings from one paper included in the review (the paper will be the first in terms of alphabetical order by surname in the included studies; if all findings from that one paper were extracted verbatim, this can be judged as “yes”).
  15. Were illustrations for findings extracted verbatim?
    To determine this, check the findings from one paper included in the review (the paper will be the first in terms of alphabetical order by surname in the included studies; if all illustrations from that one paper were extracted verbatim, this can be judged as “yes”).
  16. Were categories formed by grouping together findings based on their similarity in meaning?
    Check the first category in the first presented meta-aggregative flowchart and make a judgement based on the presented findings.
  17. Were synthesized findings:
    (a) Worded as indicatory statements?
    Check that they are worded as “if-then” statements or if terms such as “may” or “will” were used.
    (b) Worded as recommendations (this should not be the case)?
  18. Was a schematic of the synthesis process provided?
    This should be in the form of a meta-aggregative flowchart or a table with three columns.
  19. Was JBI SUMARI or QARI used for the systematic review process?
    Check that the authors explicitly mention using JBI software for their review, or that it is clear from the report that SUMARI or QARI was used (i.e. the meta-aggregative flowchart is a direct export from the software).
  20. (a) Were recommendations for practice provided?
    (b) Any extra comments regarding the recommendations (does not require an answer; this is just an opportunity to note any interesting practices seen in this section).
  21. Were recommendations:
    (a) Graded with a grade of recommendation?
    (b) Provided with a level of evidence (this should not be the case)?
  22. Was a ConQual Summary of Findings table provided?
ENTREQ statement


To measure compliance with the ENTREQ statement, the 21 items of the ENTREQ statement will be assessed as either met (yes), not met (no), partially met (partially), or unclear (unclear).

  1. Aim: Has the research question been stated?
  2. Synthesis methodology: Are the following described?
    (a) Synthesis methodology or theoretical framework.
    (b) Rationale for this choice.
  3. Approach to searching:
    (a) Was the search pre-planned?
    (b) Was the search described as either comprehensive or iterative?
    (c) Describe whether the search was comprehensive or iterative.
  4. Inclusion criteria: Were the inclusion/exclusion criteria specified (e.g. in terms of population, language, year limits, type of publication, study type)?
  5. Data sources:
    (a) Were the information sources used described?
    (b) Was information on when the searches were conducted given?
    (c) Was the rationale for using the data sources provided?
  6. Electronic search strategy: Were the literature search and key terms used described?
  7. Study screening methods: Were the methods for screening studies described?
  8. Study characteristics: Were the characteristics of the included studies presented?
  9. Study selection results: Were the study selection results described (i.e. including the number of studies screened and reasons for study exclusion)?
  10. Rationale for appraisal: Was the rationale for appraisal included?
  11. Appraisal items: Was the tool(s) used for appraisal presented?
  12. Appraisal process: Was there information regarding whether the appraisal was conducted independently by more than one reviewer and if consensus was required?
  13. Appraisal results:
    (a) Were the results of the quality assessment presented?
    (b) Was the rationale for excluding studies presented?
  14. Data extraction: Was the data extraction process described, including an indication of what data was extracted from what section of the included studies?
  15. Software: Was the software used to assist analysis (if any) mentioned?
  16. Number and identity of reviewers: Were the following identified?
    (a) The number of reviewers.
    (b) The identity of the reviewers involved in extraction and synthesis.
  17. Coding: Was the process for coding of data described?
  18. Study comparison: Was the process of how comparisons were made within and across studies described?
  19. Derivation of themes: Was the process for derivation of themes described?
  20. Quotations:
    (a) Were quotations from the primary studies provided to illustrate themes/constructs?
    (b) Was it identified whether the quotations were participant quotations or the author's interpretation (check the first three quotations included in the results of the review to determine whether they have been identified as being from the author or participants of the primary study; if all three have been identified, answer “yes”)?
  21. Synthesis output: Were rich, compelling and useful results that go beyond a summary of the primary studies presented?
References


1. Munn Z, Porritt K, Lockwood C, Aromataris E, Pearson A. Establishing confidence in the output of qualitative research synthesis: the ConQual approach. BMC Med Res Methodol 2014; 14:108.
2. Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Reports 2004; 2:45–64.
3. Lockwood C, Munn Z, Porritt K. Qualitative research synthesis: methodological guidance for systematic reviewers utilizing meta-aggregation. Int J Evid Based Healthc 2015; 13(3):179–187.
4. Hannes K, Lockwood C, Pearson A. A comparative analysis of three online appraisal instruments’ ability to assess validity in qualitative research. Qual Health Res 2010; 20(12):1736–1743.
5. Thorne S, Jensen L, Kearney MH, Noblit G, Sandelowski M. Qualitative metasynthesis: reflections on methodological orientation and ideological agenda. Qual Health Res 2004; 14(10):1342–1365.
6. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual: 2011 edition. Adelaide: The Joanna Briggs Institute; 2011.
7. Barnett-Page E, Thomas J. Methods for the synthesis of qualitative research: a critical review. BMC Med Res Methodol 2009; 9:59.
8. Finfgeld-Connett D. Meta-synthesis of caring in nursing. J Clin Nurs 2008; 17(2):196–204.
9. Walsh D, Downe S. Meta-synthesis method for qualitative research: a literature review. J Adv Nurs 2005; 50(2):204–211.
10. Carroll C, Booth A. Quality assessment of qualitative evidence for systematic review and synthesis: Is it meaningful, and if so, how should it be performed? Res Synth Methods 2015; 6(2):149–154.
11. Garratt D, Hodkinson P. Can there be criteria for selecting research criteria? A hermeneutical analysis of an inescapable dilemma. Qual Inq 1998; 4(4):515–539.
12. Lewin S, Glenton C, Munthe-Kaas H, et al. Using qualitative evidence in decision making for health and social interventions: an approach to assess confidence in findings from qualitative evidence syntheses (GRADE-CERQual). PLoS Med 2015; 12(10):e1001895.
13. Tong A, Flemming K, McInnes E, Oliver S, Craig J. Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Med Res Methodol 2012; 12:181.
14. France EF, Ring N, Thomas R, Noyes J, Maxwell M, Jepson R. A methodological systematic review of what's wrong with meta-ethnography reporting. BMC Med Res Methodol 2014; 14(1):1.
15. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual: 2014 edition. Australia: The Joanna Briggs Institute; 2014.

Keywords: ENTREQ; meta-aggregation; qualitative evidence synthesis; qualitative research synthesis; systematic review

© 2019 by Lippincott Williams & Wilkins, Inc.