AJN, American Journal of Nursing: July 2014 - Volume 114 - Issue 7
doi: 10.1097/01.NAJ.0000451683.66447.89
Systematic Reviews, Step by Step

JBI's Systematic Reviews: Data Extraction and Synthesis

Munn, Zachary PhD; Tufanaru, Catalin MD, MPH; Aromataris, Edoardo PhD


Author Information

Zachary Munn is a senior research fellow, Catalin Tufanaru is a research fellow, and Edoardo Aromataris is the director of synthesis science, all at the Joanna Briggs Institute in the School of Translational Health Science, University of Adelaide, South Australia. Contact author: Zachary Munn. The authors have disclosed no potential conflicts of interest, financial or otherwise.

The Joanna Briggs Institute aims to inform health care decision making globally through the use of research evidence. It has developed innovative methods for appraising and synthesizing evidence; facilitating the transfer of evidence to health systems, health care professionals, and consumers; and creating tools to evaluate the impact of research on outcomes. For more on the institute's approach to weighing the evidence for practice, visit the institute's website.



This article is the fifth in a series on the systematic review from the Joanna Briggs Institute, an international collaborative supporting evidence-based practice in nursing, medicine, and allied health fields. The purpose of the series is to describe how to conduct a systematic review—one step at a time. This article details the data extraction and data synthesis stages, with an emphasis on conducting a meta-analysis of quantitative data.

Each year hundreds of thousands of articles are published in thousands of peer-reviewed biomedical journals. As discussed in the prior articles in this series from the Joanna Briggs Institute (JBI), researchers conduct systematic reviews to summarize the literature as a way of helping health care professionals keep up to date with the latest evidence in their field. The process of extracting, synthesizing, and combining data from various studies is one important way in which the systematic review extends beyond the subjective narrative reporting characteristic of a traditional literature review. The data synthesized in a systematic review are the results (or outcomes) extracted from individual research studies relevant to the systematic review question. The synthesis makes up the results section of the review.

Well-conducted systematic reviews, such as those published by the JBI and the Cochrane Collaboration, attempt to extract all data relevant to the review question. Through the use of standardized data-extraction instruments, reviewers extract both descriptive data (such as setting and context) and outcome data (results and findings) from the included research studies.

In systematic reviews of quantitative (numerical) data, data synthesis usually appears as a meta-analysis, a statistical method that combines the results of a number of studies to calculate a single summary effect (for the definition of this and other terms used in this article, see the Glossary1). This statistical method is important because quantitative studies on an intervention may produce contradictory results or have insufficient statistical power (the sample may be small, for example). When studies have contradictory results, you can't simply add up those supporting the intervention and those not supporting the intervention (in a procedure called vote counting) because some studies may be of better quality (have a larger sample size or be less biased) than others. Vote counting also doesn't provide information on the magnitude (or size) of the effect. By combining findings from individual quantitative studies in a meta-analysis, you can generate a more precise estimate of the magnitude of effect on the outcomes of interest.2 In systematic reviews of qualitative (textual) findings, data are synthesized in a meta-synthesis. Some approaches to qualitative meta-synthesis include meta-aggregation and meta-ethnography.
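To make concrete why weighted pooling beats vote counting, here is a minimal sketch of a fixed-effect, inverse-variance meta-analysis in Python. The study estimates and standard errors are invented for illustration; real reviews use dedicated software.

```python
import math

def fixed_effect_pool(estimates, std_errors):
    """Pool study effect estimates with inverse-variance weights.

    Each study's weight is 1 / SE^2, so larger, more precise studies
    contribute more to the pooled estimate than smaller ones do.
    """
    weights = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1.0 / sum(weights))
    # 95% confidence interval around the pooled estimate
    return pooled, (pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se)

# Two small "negative" trials and one large "positive" one (invented numbers):
# vote counting calls this 2-to-1 against the intervention, yet the weighted
# pooled estimate is positive, with a confidence interval excluding zero.
pooled, (lower, upper) = fixed_effect_pool([-0.05, -0.10, 0.40], [0.30, 0.25, 0.08])
```

Because the large study carries most of the weight, the pooled estimate favors the intervention even though a simple tally of study directions would not.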


This article will discuss the process of data extraction and synthesis for both quantitative and qualitative systematic reviews and provide examples of each.


Extracting Quantitative Data

In a systematic review, data extraction occurs before synthesis. As a reviewer, you'll read the included studies and extract the results relevant to the review question (in determining the review question, you most likely used a form of the PICO mnemonic, which stands for Population, Intervention, Comparison intervention, and Outcome measures). Ideally, you'll use forms that have been tested to ensure that you find and extract relevant data while minimizing bias and other errors. As with the other steps described in previous articles in this series, two independent reviewers should perform data extraction to minimize bias and reduce error.3

Besides the outcome data, you'll also record the citation details of the included studies, as well as descriptive details such as study design (whether it was a randomized controlled trial [RCT], for example), participant characteristics (such as age, gender, or location), methods used in the analysis, and the interventions studied (treatment modality and the amount, duration, frequency, and intensity used in experimental and control groups). Descriptive data should be extracted and presented in the review so that any researcher can establish the generalizability of the results. Descriptive data are also important to the reviewer during the extraction process; you may wish to refer to such information when looking for similarities or differences in methodology, sample, or intervention as you interpret results in your synthesis.

Data to be extracted include not only the outcomes but also the methods used to obtain them and the validity and reliability of those methods. You might encounter challenges in data extraction arising from the different populations studied or interventions administered across studies. It's also important to ensure the reliability of the data-extraction process (between systematic reviewers) by using standardized data-extraction forms you've pilot tested beforehand, training and assessing data extractors, having two (or more) people extract data from each study, and having reviewers conduct an independent extraction before they confer with each other.
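To give a feel for what a standardized extraction record might contain, the sketch below models one possible structure in Python. The field names and the study details are illustrative assumptions, not the JBI's actual instrument.

```python
from dataclasses import dataclass, field

@dataclass
class ExtractionRecord:
    """One row of a (hypothetical) standardized data-extraction form."""
    citation: str       # full citation of the included study
    study_design: str   # e.g., "RCT", "cohort"
    participants: str   # age, gender, location, etc.
    intervention: str   # modality, dose, duration, frequency
    comparison: str     # what the control group received
    outcomes: dict = field(default_factory=dict)  # outcome name -> result

# Two reviewers extract independently, then confer to resolve discrepancies.
record = dict(
    citation="Smith 2012 (hypothetical)",
    study_design="RCT",
    participants="120 adults, mean age 54",
    intervention="nurse-led counseling, weekly for 8 weeks",
    comparison="usual care",
    outcomes={"cessation at 6 months": "RR 1.3 (95% CI 1.1-1.5)"},
)
reviewer_a = ExtractionRecord(**record)
reviewer_b = ExtractionRecord(**record)

# Agreement check: identical records mean no discrepancy to resolve.
records_agree = reviewer_a == reviewer_b
```

In practice the comparison would be field by field, with disagreements flagged for discussion rather than silently merged.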

Meta-analysis is used for systematically assessing and combining the results of two or more similar studies, with an aim of producing an overall summary effect of the results.4-6 Although used widely in medical and nursing research, meta-analysis was developed in the social and behavioral sciences. And while the term is relatively new, the method is not: research designed to summarize the findings of different studies has been conducted, especially in the field of astronomy, since the 19th century.7, 8

The term meta-analysis is not synonymous with systematic review. The statistical combination of data using meta-analysis is only a part of the systematic review process, and when performing a meta-analysis you should determine whether the statistical combination of individual results from separate studies is appropriate. Readers may encounter studies labeled meta-analyses that combine data from many studies but don't follow a systematic approach to study selection. In such studies, it may not be transparent to the reader how the authors identified studies for synthesis.

A systematic review can contain multiple meta-analyses depending on the number of outcomes you've identified to answer the review question. The advantages of meta-analysis over other methods of synthesis (such as vote counting) include its1, 4, 9

* increased statistical power over simpler methods of synthesis.

* greater precision (as reflected in overall effect size and smaller confidence intervals, indicating a more precise estimate).

* information on the magnitude of the effect.

* ability to investigate reasons for variations between studies.

* ability to weight information from studies according to the amount and significance of information they contribute to the analysis.

* ability to investigate differences between studies and groups of studies and settle conflicting claims.

But a meta-analysis can't improve the quality of included studies, so their quality must be established during the critical-appraisal process (described in the fourth article in this series).

Meta-analysis can be used to synthesize data not only on treatment effects but also on incidence and prevalence rates, the correlation between variables, the accuracy of diagnostic tests, and prognostic factors. Meta-analysis may be performed using data from different types of study designs, depending on the review question. It may include RCTs; other experimental and quasi-experimental designs; and observational, analytical, or descriptive studies. Meta-analysis can be used to combine different types of data such as averages (means), proportions, and odds ratios, among other metrics.

Like most statistical tests, meta-analysis indicates whether results are statistically significant, but its defining feature is the estimation of overall effect size6—a measure of the strength and direction of the relationship between variables.
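One common effect size for binary outcomes is the odds ratio. As a minimal sketch (the trial counts are made up for illustration), here is how an odds ratio and its 95% confidence interval are computed from a 2x2 table:

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio with a 95% CI from a 2x2 table:
    a = treatment events, b = treatment non-events,
    c = control events,   d = control non-events."""
    or_ = (a * d) / (b * c)
    # The standard error is computed on the log-odds scale
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    log_or = math.log(or_)
    lower = math.exp(log_or - 1.96 * se_log_or)
    upper = math.exp(log_or + 1.96 * se_log_or)
    return or_, lower, upper

# Hypothetical trial: 20/100 quit with the intervention vs. 10/100 without.
or_, lower, upper = odds_ratio_ci(20, 80, 10, 90)
# or_ == 2.25: the odds of quitting are 2.25 times higher with the intervention.
```

An odds ratio greater than 1 indicates the effect favors the treatment group; the confidence interval conveys the precision of that estimate.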

While meta-analysis is preferred, it's not always possible, especially if the studies vary greatly in how they were conducted (different methodologies), what they assessed (different interventions), whom they were performed on (different populations), or what their final results were. When such differences exist across studies, the studies are said to be heterogeneous. Clinical heterogeneity refers to differences in study populations, interventions, and outcomes; methodological heterogeneity, to differences in study design and quality; and statistical heterogeneity, to differences in effect sizes, caused by clinical or methodological heterogeneity or simply by chance. Heterogeneity is usually assessed by conducting a τ² test, a χ² test, or an I² test,10 as shown in the forest plot in Figure 1, where the degrees of freedom (df) and P values are also reported. For more information on determining heterogeneity in a meta-analysis, see the Cochrane handbook4 and the JBI reviewers’ manual.1
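The heterogeneity statistics just mentioned can be sketched in a few lines of Python. This is an illustration with invented effect estimates, not a substitute for review software: Cochran's Q is the χ²-distributed test statistic, I² expresses the share of variability beyond chance, and τ² estimates the between-study variance (here via the DerSimonian-Laird method-of-moments formula).

```python
import math

def heterogeneity(estimates, std_errors):
    """Cochran's Q, the I^2 statistic, and the DerSimonian-Laird tau^2."""
    w = [1.0 / se ** 2 for se in std_errors]
    pooled = sum(wi * yi for wi, yi in zip(w, estimates)) / sum(w)
    # Q: weighted squared deviations of each study from the pooled estimate
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, estimates))
    df = len(estimates) - 1
    # I^2: percentage of variability due to heterogeneity rather than chance
    i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
    # tau^2: estimated between-study variance (method-of-moments estimator)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    return q, i2, tau2

# Invented effect estimates and standard errors for three studies:
q, i2, tau2 = heterogeneity([0.2, 0.5, -0.1], [0.1, 0.15, 0.2])
```

With these invented inputs the studies disagree more than chance alone would predict, so I² lands well above zero, signaling substantial heterogeneity.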

Figure 1

Meta-analysis can be used to explore the differences in the effects of the intervention, and the reasons for those differences, in different subgroups.4, 6, 11 When meta-analysis isn't possible owing to heterogeneity, your options might include providing a narrative summary, vote counting, and presenting data in tables.

If you use a narrative summary to describe the included studies and their conclusions, your readers may not be able to discern how the evidence was weighted and whether conclusions are biased. Therefore, it's recommended that you emphasize the characteristics of the studies and the data extracted and make use of tables, graphs, and other diagrams to compare data.12 Your narrative summary will present quantitative data extracted from individual studies, as well as, where available, point estimates (a value that represents a best estimate of effects) and interval estimates (an estimated range of effects, presented as a 95% confidence interval). Because a potentially large amount of data can be conveyed in a narrative summary, you can ensure consistency in the results section if all reviewers agree beforehand on a structure for the reporting of results. If you don't follow a structure, your report of results may appear incomplete or unreliable.12 If studies do not provide the relevant information to comply with a structure, you should make this clear in the summary.12

You can also use vote counting, which involves tallying the numbers of studies that provide positive, null, and negative results. Although easy to use, this approach is inappropriate in systematic reviews that aim to inform policy and practice.

What does a meta-analysis look like? The results of a meta-analysis are presented in a forest plot (see Figure 1), a visual means of conveying both the results of each study in the meta-analysis (as a square with extending lines) and the results of the meta-analysis itself (as a diamond). These are plotted on a graph with a vertical line running up the center indicating no difference in treatment effect. Results on one side of the vertical line indicate that the intervention works, while those on the other side indicate that it does not. For each study, the position of the square signifies the results, while the extending lines represent the confidence limits (the degree of uncertainty around that effect). The size of the square represents the weight of that study in the meta-analysis—a representation of the contribution that each study makes to the overall summary effect. The black diamond represents the result of the meta-analysis, and the confidence interval is indicated by the length of its sides. If it is a tight, skinny diamond, it is a precise finding with narrow confidence intervals; if it is a long, stretched-out diamond, it is an imprecise finding with broad confidence intervals.
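The geometry just described can be expressed in code. This hypothetical helper computes, for invented study data under fixed-effect weighting, where each square, extending line, and the pooled diamond would sit on a forest plot:

```python
import math

def forest_plot_geometry(estimates, std_errors):
    """Positions of forest-plot elements under fixed-effect weighting:
    a square at each study's estimate (size proportional to its weight),
    a line spanning its 95% CI, and a diamond whose tips mark the pooled CI."""
    w = [1.0 / se ** 2 for se in std_errors]
    total = sum(w)
    rows = []
    for est, se, wi in zip(estimates, std_errors, w):
        rows.append({
            "square_at": est,                            # square marks the estimate
            "line": (est - 1.96 * se, est + 1.96 * se),  # extending lines = 95% CI
            "weight_pct": 100.0 * wi / total,            # square size ∝ weight
        })
    pooled = sum(wi * e for wi, e in zip(w, estimates)) / total
    pooled_se = math.sqrt(1.0 / total)
    # Diamond: (left tip, center, right tip); a wide diamond = imprecise finding
    diamond = (pooled - 1.96 * pooled_se, pooled, pooled + 1.96 * pooled_se)
    return rows, diamond

# Three invented studies; the third has the smallest SE and the biggest square.
rows, diamond = forest_plot_geometry([0.3, 0.1, 0.25], [0.15, 0.2, 0.05])
```

Note that the diamond is narrower than any single study's line: pooling precision is the point of the exercise.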

Many published systematic reviews include meta-analyses,13-15 including important reviews conducted on nursing interventions. For example, a recent Cochrane systematic review assessing nursing interventions for smoking cessation identified a large number of trials, many of which were insufficiently powered or had non–statistically significant results.16 When the results of 35 studies were combined in a meta-analysis, the overall effect size was in favor of nursing interventions: patients receiving a nursing intervention were 1.29 times more likely to stop smoking than those in control groups, a statistically significant finding. The reviewers also investigated which characteristics of those interventions had a greater impact on smoking cessation and found that those performed in hospitals were most effective. Another example is a recent JBI review assessing interventions to assist perioperative temperature management in women undergoing cesarean section.17 A number of meta-analyses were performed, and the review found that warming devices were effective in preventing hypothermia and reducing shivering.

Software programs available to assist in performing a meta-analysis include the JBI's System for the Unified Management, Assessment and Review of Information (JBI SUMARI) and the Cochrane Collaboration's Review Manager (RevMan). Both can be used to create a forest plot. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement also provides guidance.18


Combining Qualitative Data: Meta-Synthesis

Not all questions that arise in clinical practice can be assessed statistically. Qualitative systematic reviews aim to increase understanding on a wide range of issues that aren't best measured quantitatively.19, 20 These include (but aren't limited to) how individuals and communities perceive health, manage their own health, and make decisions related to health care use; the culture of communities; and how patients experience health, illness, and the health care system. Findings derived from qualitative research are increasingly acknowledged as providing important information to complement other data that inform practice and policy.21

As with quantitative studies, the results from a single qualitative study should rarely be used to guide practice. A systematic review of all relevant studies is required.

Qualitative studies differ from RCTs, and so the methods used to extract the data differ as well. And as mentioned in a prior article in this series, there has been considerable debate over the merits of critical appraisal—evaluating the methodological quality—of qualitative studies.22-24 Since the late 1980s, a variety of methods for synthesizing findings from qualitative research have been developed, some that require critical appraisal and others that do not.24-27 However, the JBI regards critical appraisal as a pivotal part of any qualitative systematic review.

Despite differences in the approaches used to synthesize qualitative data, there are commonalities.21 These include a focus on the complexity of the phenomena in question, in addition to a statement of any gaps in the literature that would justify the synthesis. Most but not all syntheses require a clear statement of objectives and inclusion criteria, followed by a literature search, data extraction, and a summary.26 Because qualitative studies are often published in books and dissertations, they can be difficult to find,1 and their indexing and archiving may be poorer than that of quantitative studies.28

The two dominant approaches to qualitative meta-synthesis are meta-aggregation and meta-ethnography. The JBI uses an integrative (aggregative) approach29 developed by a group led by Alan Pearson in the early 2000s.24 This group held that regardless of the type of evidence (quantitative or qualitative), the same review process should be used, with certain steps tailored as needed. Thus, meta-aggregation was developed as a method of qualitative synthesis designed to mirror the Cochrane process of quantitative synthesis, while being sensitive to the contextual nature of qualitative research and its traditions. The meta-aggregative method is aligned with the philosophy of pragmatism, according to Hannes and Lockwood, wherein meaning is connected to the idea of “practical usefulness.”21 The meta-aggregative method has been developed in order to deliver readily usable synthesized findings to inform decision making at the clinical or policy level. Therefore, the meta-aggregative approach may be helpful to authors when attempting to answer a specific practice question or summarize views on interventions; in contrast, an approach such as meta-ethnography aims to develop explanatory theory or models.26

Extracting qualitative data. The results to be synthesized during meta-aggregation are the findings reported in qualitative studies. Qualitative data extraction involves identifying and transferring study findings using an approach agreed on by the reviewers. Such a format is essential in minimizing error, providing a historical record of decisions made about the data, and establishing a data set for subsequent synthesis. Drawing on the literature and input from a panel of experts, the JBI developed, piloted, and refined a data extraction instrument, the JBI Qualitative Assessment and Review Instrument (JBI-QARI), that's incorporated into the software for conducting meta-aggregative qualitative reviews.30

The aim of meta-synthesis by meta-aggregation is to assemble findings from qualitative research; categorize those findings into groups on the basis of similarity in meaning; and aggregate these to generate a set of statements that adequately represent that aggregation. These statements are referred to as synthesized findings, and they can be used as a basis for evidence-based practice (see Figure 2 for an example). This synthesis can be conducted using the JBI-QARI. It's important to note that this method does not involve a reconsideration and analysis of the data from the included studies.
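As a toy illustration of the aggregative logic (not the JBI-QARI software itself), the flow from findings to categories to synthesized findings might be modeled like this; the findings and category labels are invented:

```python
# Invented findings, each already assigned a category label by reviewers
# on the basis of similarity in meaning.
findings = [
    ("Nurses felt unsupported by management", "lack of support"),
    ("No orientation was provided for new staff", "lack of support"),
    ("Assistants freed nurses to focus on complex care", "perceived benefits"),
    ("Nurses worried about delegating clinical tasks", "role concerns"),
]

# Step 1: group findings into categories by their assigned label.
categories = {}
for text, label in findings:
    categories.setdefault(label, []).append(text)

# Step 2: aggregate each category into a synthesized-finding statement.
synthesized_findings = [
    f"{label} (supported by {len(texts)} finding(s))"
    for label, texts in sorted(categories.items())
]
```

The real work, of course, lies in the reviewers' judgment when assigning labels; the aggregation itself is deliberately simple, which is what makes the synthesized findings traceable back to the original studies.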

Figure 2

There are many published examples of systematic reviews using the meta-aggregative approach that can inform nursing practice. We conducted one such review on health assistants, and in synthesizing 10 studies that met our inclusion criteria we found that the way health care professionals viewed the assistant role differed dramatically; some had concerns about the use of assistants.31 A 2011 review of eight qualitative studies examined the experiences of RNs working in long-term care and highlighted issues such as a lack of support and education for these nurses.32 Qualitative systematic reviews also report on the experiences of patients, such as a systematic review that highlighted patients’ experience of medical imaging.33 These qualitative reviews (and many more!) provide nurses with information on the what, why, and how of their day-to-day practice.

The extraction and synthesis of data are pivotal steps in the systematic review process, whether it is of quantitative or qualitative data. The findings are ideally used as a basis for recommendations for practice. However, the question remains how these findings should be interpreted and how recommendations should be developed to guide practice. These issues will be discussed in further detail in the final article in this series.

References


1. Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual: 2014 edition. Adelaide, SA; 2014.

2. Dickersin K, Berlin JA. Meta-analysis: state-of-the-science. Epidemiol Rev. 1992;14:154–76.

3. Buscemi N, et al. Single data extraction generated more errors than double data extraction in systematic reviews. J Clin Epidemiol. 2006;59(7):697–703.

4. Higgins JPT, Green S, eds. Cochrane handbook for systematic reviews of interventions, version 5.1.0 [updated March 2011]. London: Cochrane Collaboration; 2011.

5. Littell JH, et al. Systematic reviews and meta-analysis. New York: Oxford University Press; 2008. Pocket guides to social work research methods.

6. Petitti DB. Meta-analysis, decision analysis, and cost-effectiveness analysis: methods for quantitative synthesis in medicine. 2nd ed. New York: Oxford University Press; 2000.

7. Chalmers I, et al. A brief history of research synthesis. Eval Health Prof. 2002;25(1):12–37.

8. Hedges LV. Meta-analysis. Journal of Educational Statistics. 1992;17(4):279–96.

9. Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Oxford, UK: Blackwell Publishing; 2006.

10. Higgins JP, et al. Measuring inconsistency in meta-analyses. BMJ. 2003;327(7414):557–60.

11. Petitti DB. Approaches to heterogeneity in meta-analysis. Stat Med. 2001;20(23):3625–33.

12. Lockwood C, White S. Synthesizing descriptive evidence. Philadelphia: WKH/Lippincott Williams and Wilkins; Joanna Briggs Institute; 2012.

13. Ivers N, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.

14. Munn Z, Jordan Z. The effectiveness of interventions to reduce anxiety, claustrophobia, sedation and non-completion rates of patients undergoing high technology medical imaging. JBI Database of Systematic Reviews and Implementation Reports. 2012;10(9):1122–85.

15. Munn Z, Jordan Z. Interventions to reduce anxiety, distress, and the need for sedation in pediatric patients undergoing magnetic resonance imaging: a systematic review. J Radiol Nurs. 2013;32(2):87–96.

16. Rice VH, et al. Nursing interventions for smoking cessation. Cochrane Database Syst Rev. 2013;8:CD001188.

17. Munday J, et al. The clinical effectiveness of interventions to assist perioperative temperature management for women undergoing cesarean section: a systematic review. JBI Database of Systematic Reviews and Implementation Reports. 2013;11(6):45–111.

18. Liberati A, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ. 2009;339:b2700.

19. Adams J, Smith T. Qualitative methods in radiography research: a proposed framework. Radiography. 2003;9(3):193–9.

20. Freudenberg LS, et al. Subjective perceptions of patients undergoing radioiodine therapy: why should we know about them? Eur J Nucl Med Mol Imaging. 2009;36(11):1743–4.

21. Hannes K, Lockwood C. Pragmatism as the philosophical foundation for the Joanna Briggs meta-aggregative approach to qualitative evidence synthesis. J Adv Nurs. 2011;67(7):1632–42.

22. Creswell JW. Qualitative inquiry and research design: choosing among five approaches. 2nd ed. Thousand Oaks, CA: Sage Publications; 2007.

23. Murphy F, Yielder J. Establishing rigour in qualitative radiography research. Radiography. 2010;16(1):62–7.

24. Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Reports. 2004;2(2):45–64.

25. Finfgeld DL. Metasynthesis: the state of the art—so far. Qual Health Res. 2003;13(7):893–904.

26. Noyes J, Lewin S. Supplemental guidance on selecting a method of qualitative evidence synthesis, and integrating qualitative evidence with Cochrane intervention reviews. In: Noyes J, et al., eds. Supplementary guidance for inclusion of qualitative research in Cochrane systematic reviews of interventions. Vol. 1, chapter 6. Sheffield, UK: Cochrane Collaboration Qualitative Methods Group; 2011.

27. Thorne S, et al. Qualitative metasynthesis: reflections on methodological orientation and ideological agenda. Qual Health Res. 2004;14(10):1342–65.

28. Atkins S, et al. Conducting a meta-ethnography of qualitative literature: lessons learnt. BMC Med Res Methodol. 2008;8:21.

29. Munn Z, et al. The development of an evidence based resource for burns care. Burns. 2013;39(4):577–82.

30. Joanna Briggs Institute. Joanna Briggs Institute reviewers’ manual: 2011 edition. Adelaide, SA; 2011.

31. Munn Z, et al. Recognition of the health assistant as a delegated clinical role and their inclusion in models of care: a systematic review and meta-synthesis of qualitative evidence. Int J Evid Based Healthc. 2013;11(1):3–19.

32. Dwyer D. Experiences of registered nurses as managers and leaders in residential aged care facilities: a systematic review. Int J Evid Based Healthc. 2011;9(4):388–402.

33. Munn Z, Jordan Z. The patient experience of high technology medical imaging: a systematic review of the qualitative evidence. Radiography. 2011;17(4):323–31.

© 2014 Lippincott Williams & Wilkins. All rights reserved.