A mixed-methods approach to systematic reviews

Pearson, Alan AM, MSc, PhD, FCNA, FRCNA, FAAN, FAAG1; White, Heath BSc1; Bath-Hextall, Fiona BSc(Hons), PhD2; Salmond, Susan BSN, MSN, EdD, FAAN3; Apostolo, Joao BSc4; Kirkpatrick, Pamela MA, BA, MSc5

International Journal of Evidence-Based Healthcare: September 2015 - Volume 13 - Issue 3 - p 121-131
doi: 10.1097/XEB.0000000000000052



Systematic reviews seek to identify, evaluate and summarize the findings of all relevant, individual research studies on a particular clinical question or topic, to make the available evidence more accessible to decision makers.1,2 When the notion of evidence-based healthcare emerged in the early 1990s, the dominant approach to the systematic review of evidence was the meta-analysis of the results of randomized controlled trials (RCTs). The RCT was conceptualized as the ‘gold standard’ in evidence of effectiveness, with other quantitative methods ranked as lower in quality in terms of evidence, and the results of interpretive and critical research were simply not regarded as high-quality evidence. Critics of this privileging of the RCT and quantitative research cited the arguments inherent in critiques of traditional science and the emergence of new paradigms for knowledge. Whilst the RCT is probably the ‘best’ approach to generating evidence of effectiveness, nurses, medical practitioners and other health professionals are concerned with more than cause-and-effect questions, and this is reflected in the wide range of research approaches utilized in the health field to generate knowledge for practice. Pearson et al.3 suggest that clinical decision makers and policy makers are interested in evidence on the effects of health care – but that they are just as much interested in whether an aspect of care or an intervention is feasible, meaningful to patients and appropriate to a specific culture (the ‘F.A.M.E.’ scale), defined as follows:

  1. feasibility is about whether or not an activity or intervention is physically, culturally or financially practical or possible within a given context;
  2. appropriateness is about how an activity or intervention relates to the context in which care is given;
  3. meaningfulness relates to the personal experience, opinions, values, thoughts, beliefs and interpretations of patients or clients; and
  4. effectiveness is about the relationship between an intervention and clinical or health outcomes.

Although the evidence-based healthcare movement initially focussed on evidence related to the effectiveness of clinical interventions, there is now a burgeoning of new approaches to synthesizing different kinds of evidence (e.g. qualitative, economic and diagnostic accuracy) to address questions on a given topic, but focusing on some or all elements of F.A.M.E. This has given rise to an increasing number of published single-method reviews that focus on different types of evidence related to a particular topic. As policy makers and practitioners seek clear directions for decision-making from systematic reviews, it is likely that it will be increasingly difficult for them to identify ‘what to do’ if they are required to find and understand a plethora of syntheses related to a particular topic.

The diverse origins of problems in healthcare require a diversity of research methodologies; thus, contemporary health research is increasingly eclectic, incorporating both classical medical and scientific designs and the emerging qualitative and action-oriented approaches from the humanities and the social and behavioural sciences. The rapid development and adoption of mixed-methods research in the health sciences is indicative of the need to pursue research methodologies that are relevant and sensitive to the health needs of consumers. ‘Mixed-methods’ research refers to ‘the class of research in which the researcher mixes or combines quantitative and qualitative research techniques, methods, approaches, concepts or language into a single study’.4 Mixed-methods research:

  1. focuses on research questions that call for real-life contextual understanding and multi-level perspectives;
  2. employs rigorous quantitative research assessing magnitude and frequency of constructs, and rigorous qualitative research exploring the meaning and understanding of constructs;
  3. utilizes multiple methods (e.g. intervention trials and in-depth interviews); and
  4. integrates these methods to draw on the strengths of each.

A mixed-methods systematic review applies the principles of mixed-methods research to the review process, that is, studies from different research traditions (but focused on the same topic) are combined to generate evidence to guide decision-making. Thus, a mixed-methods review designed to provide guidance to clinical decision makers on the management of a particular symptom could conduct a meta-analysis of trials evaluating the effectiveness of specific interventions; a meta-synthesis of qualitative studies on patients’ experience; a synthesis of cost–benefit studies on the interventions and then combine the findings of the three to identify the most effective, acceptable and economic approach (see Fig. 1).

Figure 1
Figure 1:
Synthesizing primary studies in systematic reviews.

Mixed-methods systematic reviews

By including diverse forms of evidence from different types of research, mixed-methods reviews try to maximize the findings – and the ability of those findings to inform policy and practice. The field of mixed-methods systematic reviews is still emergent and, although there is a growing literature on reviews that include both quantitative and qualitative data synthesis, included data are rarely combined in a single synthesis or united in a secondary ‘final’ synthesis. Most published papers develop a framework based on themes derived from qualitative studies and incorporate quantitative data within the framework,5 or analyse qualitative and quantitative data separately and then provide a brief narrative discussion of the ‘total’ results.6 As Sandelowski et al.7 suggest, the current impetus in the literatures of mixed-methods research and mixed research synthesis is towards multiplicity rather than parsimony.

In a recent technical brief, Harden8 identifies three ways in which mixed-methods systematic reviews are conducted at the Evidence for Policy and Practice Information and Co-ordinating Centre in the United Kingdom:

  1. The systematic review of mixed-research studies is by default a mixed-methods systematic review: as the original studies are of mixed methods, the resulting synthesis will be mixed.
  2. The synthesis methods used in the review are mixed (e.g. two or more syntheses are performed involving, for example, quantitative and qualitative data).
  3. A model which involves both the building and testing of theories based on the results of original syntheses. This involves the same process as method 2 (separate syntheses of qualitative and quantitative data; building); however, it also incorporates a third synthesis (testing), whereby the thematic synthesis of qualitative data is used to ‘interrogate’ the meta-analytical results of quantitative data.

The first two of these methods do not present viable models through which to conduct mixed-methods systematic reviews, as although they include both quantitative and qualitative data, the inability of authors to clearly delineate evidence types in a single synthesis (method 1) or failure to combine evidence in a secondary synthesis (method 2) may significantly limit their utility. The third model is akin to the segregated methodology described by Sandelowski et al.7 (see below) in that syntheses are conducted separately and then recommendations from the qualitative synthesis are used to contextualize quantitative data and generate reasons behind the success and/or failure of a programme.

In the third method, two or more syntheses are conducted and then combined in a secondary synthesis. In Thomas et al.'s9 example, the authors conducted both a qualitative synthesis (synthesis 1) and a quantitative synthesis (synthesis 2) regarding the barriers to healthy eating in adolescents in the United Kingdom. By applying specific recommendations derived from qualitative-based themes (synthesis 1) to numerical data (synthesis 2), the authors could more accurately predict the cause behind an observed effect. For instance, if synthesis 1 demonstrated that children are not interested in ‘health’ per se and do not consider future health consequences relevant, by applying this statement to synthesis 2, the authors can recommend re-branding fruits and vegetables as ‘tasty’ rather than ‘healthy’ in an attempt to convince children to eat more of these foods9 (Fig. 2).

Figure 2
Figure 2:
Thomas et al.'s9 findings in children and healthy eating: a systematic review.

Sandelowski et al.7 identify three general frameworks through which to conduct mixed-methods systematic reviews: segregated, integrated and contingent (Fig. 1).

‘Segregated methodologies’ maintain a clear distinction between quantitative and qualitative evidence and require individual synthesis to be conducted prior to the final ‘mixed-methods’ synthesis. The findings or evidence can fall into two categories: the quantitative and qualitative findings may either support each other (confirmation) or contradict each other (refutation); or they may simply add to each other (complementary).

The category is not chosen by the reviewer; rather, the category used depends on the data being analysed. For example, a qualitative study which looks at a patient's experience following a specific treatment could either confirm or refute quantitative findings based on lifestyle surveys/questionnaires of the same treatment. Conversely, the same qualitative study could not be used to confirm or refute the findings of a quantitative study of clinical effectiveness of the same treatment, and would instead present complementary evidence. If the quantitative and qualitative syntheses focus on the same general phenomenon, both confirmation/refutation and complementarity can inform the topic in a complementary manner. The resulting synthesis is often presented in the form of a theoretical framework, a set of recommendations, conclusions or a path analysis (Fig. 3).

Figure 3
Figure 3:
Segregated synthesis (adapted from Sandelowski et al.7).

‘Integrated methodologies’ directly bypass separate quantitative and qualitative syntheses and instead combine both forms of data into a single mixed-methods synthesis. A primary condition for the development of an integrated mixed-methods systematic review is that both quantitative and qualitative data are similar enough to be combined into a single synthesis. As opposed to segregated methodologies, where the final synthesis involves a configuration of data, integrated methodologies are almost always confirmatory or refuting in nature and involve an assimilation of data. This presents the only method whereby both forms of data can be assimilated into a single synthesis, and requires that either quantitative data are converted into themes, codified and then presented along with qualitative data in a meta-aggregation, or qualitative data are converted into numerical format and included with quantitative data in a statistical analysis (Fig. 4).

Figure 4
Figure 4:
Integrated synthesis (adapted from Sandelowski et al.7).

‘Contingent methodologies’ involve two or more syntheses conducted sequentially based on results from the previous synthesis. The process begins by asking a question and conducting a qualitative, quantitative or mixed-methods synthesis. The results of this primary synthesis generate a second question, which is the target of a second synthesis, the results of which generate a third question and so on. Contingent designs can include either integrated and/or segregated syntheses, and multiple syntheses can be conducted until the final result addresses the reviewer's objective (Fig. 5).

Figure 5
Figure 5:
Contingent synthesis (adapted from Sandelowski et al.7).

Although all of the above methods utilize both quantitative and qualitative data in their analyses, only segregated methodologies present individual syntheses and then combine data in the same synthesis using a meta-analytical or meta-aggregative approach.

Bayesian approaches to mixed-methods synthesis

Bayesian methods generate summative statements of the evidence through the meta-aggregation of data. This can involve attributing a numerical value to all qualitative data, facilitating a final statistical analysis of individual syntheses (i.e. translating qualitative data into quantitative); or attributing a qualitative thematic description to all quantitative data, thereby permitting a final meta-aggregation of individual syntheses (i.e. translating quantitative data into qualitative) (Table 1).

Table 1
Table 1:
Coding of quantitative and qualitative data (Crandell et al.10)

The use of Bayesian methods in mixed-methods systematic reviews has been discussed widely, but applied infrequently.10 Essentially, in order for qualitative and quantitative data to be incorporated into the same stage of synthesis and thus equally inform the topic, the data must be transformed into a mutually compatible format.11 For example, if there are qualitative and quantitative findings, all must be translated into either quantitative or qualitative form.

Bayesian conversion 1: qualitative to quantitative

Converting qualitative data to quantitative data involves assigning a numerical value to qualitative data in a form which is compatible with that of the quantitative data, enabling the author to calculate the proportion of participants associated with a particular finding. Both quantitative and qualitative datasets are analysed independently using the same framework, and then may or may not be combined in a final analysis, depending on whether the estimates of probability have overlapping 95% credible sets.8 In other words, if the probability of a participant reporting a relationship (e.g. adherence to a complex medical regimen) differs significantly between the quantitative and qualitative analyses, no further analysis is performed.
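
The overlap check described above can be sketched in a few lines. This is a minimal illustration rather than the published method: it assumes simple adherence counts for each study set, uses a normal approximation to the Beta posterior in place of exact credible sets, and all counts are hypothetical.

```python
import math

def beta_posterior_interval(successes, n, alpha_prior=1.0, beta_prior=1.0):
    """Approximate 95% credible interval for a proportion under a Beta prior,
    using a normal approximation to the Beta posterior."""
    a = alpha_prior + successes
    b = beta_prior + (n - successes)
    mean = a / (a + b)
    sd = math.sqrt(a * b / ((a + b) ** 2 * (a + b + 1)))
    return (max(0.0, mean - 1.96 * sd), min(1.0, mean + 1.96 * sd))

def intervals_overlap(iv1, iv2):
    return iv1[0] <= iv2[1] and iv2[0] <= iv1[1]

# Hypothetical counts: participants reporting adherence in the
# quantitative and the qualitative study sets.
quant = beta_posterior_interval(successes=120, n=200)
qual = beta_posterior_interval(successes=18, n=25)

# Proceed to a combined analysis only if the credible intervals overlap.
if intervals_overlap(quant, qual):
    print("overlap: combine the two estimates in a final analysis")
else:
    print("no overlap: report the syntheses separately")
```

An exact version would use the Beta quantile function directly; the decision rule (overlap versus no overlap) is the same either way.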

One problem with this method of analysis is the ambiguity often associated with participant counts in qualitative studies. Qualitative studies frequently express results through thematic and interpretive approaches, which are not amenable to counting. Through the frequent application of the verbal count translation approach, seriously skewed or inflated ranges may be inadvertently developed. For example, an author may consider the statement ‘many patients adhered to the treatment regimen’ appropriate when 20–30% of the total patients made such a statement; however, this definition varies significantly from the verbal count translation system developed by Chang et al.12 in which ‘many’ was defined as more than 50%.
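
The sensitivity of such translations to the chosen thresholds is easy to demonstrate. In the sketch below, only the ‘many’ = more than 50% threshold reflects Chang et al.; every other band, and the translation function itself, is illustrative.

```python
import math

# Hypothetical verbal-count translation table mapping terms to
# (low, high) proportion bands. Only the 'many' > 50% threshold
# comes from Chang et al.; the other bands are invented.
VERBAL_COUNTS = {
    "few": (0.0, 0.25),
    "some": (0.25, 0.50),
    "many": (0.50, 0.75),
    "most": (0.75, 1.0),
}

def implied_count_range(term, n_participants):
    """Translate a verbal count into an implied range of participants."""
    low, high = VERBAL_COUNTS[term]
    return (math.ceil(low * n_participants), math.floor(high * n_participants))

# 'Many patients adhered' in a study of 40 participants:
print(implied_count_range("many", 40))  # -> (20, 30)
```

An author who privately meant 20–30% by ‘many’ would have intended the range (8, 12) for the same study, so the translated counts differ more than twofold depending on whose table is applied.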

Bayesian conversion 2: quantitative to qualitative

A novel method of combining quantitative and qualitative data was presented by Crandell et al.10 when comparing factors facilitating or hindering antiretroviral adherence. The authors initially grouped similar variables together into themes and then coded data for each variable based on whether the variable signified adherence, non-adherence or both adherence and non-adherence (Table 2). These values were entered into a data matrix with a single report occupying each row and a single theme (variable) occupying each column. If a report did not address a variable, that cell was left blank.10

Table 2
Table 2:
Comparison of Bayesian methods for mixed-methods synthesis

As the majority of cells occupying the resulting data matrix were blank (most studies report on only a subset of themes), a naive analysis of the results (assuming that each value independently contributes to the probability that these results are correct, regardless of the presence or absence of other values) produced broad 95% confidence intervals (CIs), which significantly reduced the strength of conclusions. The application of Bayesian data augmentation methods helped to mitigate these effects by imputing missing values based on the available data. The results take the form of posterior mean values ranging from 0 to 1, with ‘high’ values signifying factors associated with adherence and ‘low’ values signifying factors associated with low adherence (with middle-range factors signifying a mix of adherence and non-adherence). Table 2 shows how Crandell et al.10 have coded this so that a mean value of 0 equates to a qualitative descriptor of ‘non-adherence’, and a mean value of 1 equates to a qualitative descriptor of ‘adherence’.
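
A toy version of such a matrix shows why sparsity weakens the naive analysis. The sketch below performs only the naive per-theme summary (the theme labels, codes and values are invented, and Crandell et al.'s actual Bayesian augmentation model is not reproduced here):

```python
# Rows = reports, columns = themes. Codes: 1 = adherence, 0 = non-adherence,
# 0.5 = both, None = theme not addressed in that report (a blank cell).
matrix = [
    [1.0,  None, 0.0],
    [1.0,  0.5,  None],
    [0.0,  0.0,  None],
    [1.0,  None, 1.0],
]

def column_summary(matrix, col):
    """Naive per-theme mean and rough 95% interval, using only the
    reports that address the theme (blank cells are simply dropped)."""
    values = [row[col] for row in matrix if row[col] is not None]
    n = len(values)
    mean = sum(values) / n
    sd = (sum((v - mean) ** 2 for v in values) / n) ** 0.5
    half_width = 1.96 * sd / n ** 0.5
    return mean, (mean - half_width, mean + half_width), n

# Hypothetical theme labels; note how the interval widens as n shrinks.
for col, theme in enumerate(["side effects", "social support", "pill burden"]):
    mean, (lo, hi), n = column_summary(matrix, col)
    print(f"{theme}: mean={mean:.2f}, 95% interval=({lo:.2f}, {hi:.2f}), n={n}")
```

Data augmentation addresses exactly this: by imputing the blank cells from the observed pattern across reports, every theme's posterior draws on the full matrix rather than on its two or three non-blank cells.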

Crandell et al.'s10 somewhat simple example has its limitations. Other examples of more direct relevance to Joanna Briggs Institute (JBI) reviews include findings in worked examples by White et al.13

Aggregative mixed-methods systematic reviews

Aggregative mixed-methods synthesis14,15 draws on the Bayesian approach to converting quantitative to qualitative data proposed by Crandell et al.10; however, whereas those authors converted raw quantitative data into qualitative themes to generate a single combined synthesis, the aggregative method applies the conversion process to the results of individual syntheses, thereby producing a single overarching synthesis which ‘marries’ the results of the separate syntheses. Irrespective of the quantitative data presented, such data lend themselves well to the derivation of defined themes, and codifying quantitative data is less error-prone than attributing numerical values to qualitative data. By utilizing both quantitative and qualitative data to develop themes and then codifying all data into a compatible system for meta-aggregative analysis, equality between both data types is achieved. This approach presents a simpler, more elegant and yet powerful method of combining data, with the additional benefit of maintaining high fidelity through the production of separate syntheses (a fidelity which is lost through pre-synthesis pooling of extracted primary data).

White et al.13 pooled the results of a quantitative and qualitative synthesis on the impact of self-monitoring of blood glucose (SMBG) on patient outcomes. Two meta-analyses from the initial quantitative component of the review (as presented in meta-view charts) showed a significant reduction in HbA1c levels in the SMBG groups, and these quantitative values were translated into the qualitative statement ‘The use of SMBG results in significant improvement in HbA1c levels at 6 months but not at 12 months’ (see Fig. 6).
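
The translation step can be mimicked with a simple decision rule: an effect is described as ‘significant’ when its 95% CI excludes zero. The sketch below uses invented pooled estimates, not White et al.'s actual meta-analysis values.

```python
def describe_effect(outcome, timepoint, ci_low, ci_high):
    """Turn a pooled mean difference's 95% CI into a textual description;
    'significant' here simply means the CI excludes zero."""
    if ci_high < 0:
        return f"significant reduction in {outcome} at {timepoint}"
    if ci_low > 0:
        return f"significant increase in {outcome} at {timepoint}"
    return f"no significant change in {outcome} at {timepoint}"

# Invented pooled 95% CIs for SMBG versus usual care:
print(describe_effect("HbA1c", "6 months", -0.62, -0.18))
print(describe_effect("HbA1c", "12 months", -0.41, 0.11))
```

In practice the reviewer would also carry the effect size, units and population into the textual description, so that the statement remains anchored to the meta-analysis it summarizes.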

Figure 6
Figure 6:
Example of conversion of quantitative data into textual description (Pearson et al.14).

Maintaining rigour when translating quantitative findings into qualitative statements

It is important that attention is paid to minimizing the possible impact of pre-understandings that might arise from the conduct of the initial qualitative meta-aggregation when converting the quantitative values to qualitative statements in mixed-methods synthesis.

As in synthesizing qualitative evidence in single-method qualitative reviews, reviewers are encouraged to consider ways to ‘bracket’ when conducting the mixed-methods synthesis. Bracketing relates to how qualitative investigators attempt to minimize the impact of their own vested interests, personal experience and cultural beliefs that could influence how they view and interpret data. To view data in a ‘fresh’ way, researchers try to put these potential influences into ‘brackets’ – that is they try to ‘shelve’ them for the time being.

Another consideration in maintaining rigour in mixed-methods reviews relates to ensuring that the full context of the included syntheses is not lost. Mixed-methods synthesis takes the data from the included reviews to a higher level of extraction, and the fidelity of the original review findings may be lost if not contextualized appropriately.

For example, in qualitative reviews, the final synthesized findings are based on categories which are formed from the findings of included studies. As a result, all such synthesized findings have deep roots embedded in the studies from which the data are derived. In other words, these original syntheses have built-in contexts that not only form the foundations on which discussions of the synthesized findings are based, but which can also be used to justify the conclusions produced from such syntheses. Similarly, in quantitative reviews, meta-analyses and other summaries such as evidence tables are derived from data generated within a given context, and it is important not to lose this information in a mixed-methods synthesis.

This process of contextualizing ‘textual descriptions’ is referred to as ‘text-in-context’ by Sandelowski et al.16 They suggest that when results derived through a synthesis of included reviews are anchored to the most important contexts in which such results were produced, these results are never ‘stand-alone’, but instead maintain a relationship with the methods used to generate them.

For example, in the review by White et al.13, involving the effectiveness and appropriateness of educational components and strategies associated with blood sugar monitoring, both quantitative and qualitative syntheses were developed based on data presented within identified studies. To combine these syntheses, the results of the quantitative review were translated into ‘textual descriptions’ and assembled alongside the synthesized findings generated from the qualitative review. Finally, these textual descriptions and synthesized findings were pooled or ‘married’ to each other to generate a mixed-methods synthesis.

For this example, one textual description and one synthesized finding were combined to form a single mixed-methods synthesis as follows:

  1. Textual description of quantitative synthesis finding: ‘Participants who undergo training are generally receptive to helpful information and believe such programmes to be of value, particularly when undertaken within a group setting’.
  2. Synthesized finding from qualitative review: ‘Education that incorporates group and individual dynamics facilitates experiential learning’.
  3. Mixed-methods synthesis that ‘marries’ the two: ‘Educational programmes are viewed positively by participants and may be particularly effective when conducted in group settings that are inclusive of all participants’.

This synthesizing process does not appear to have given due weight to the full context of the reviews included. Stating that such educational programmes may be ‘particularly effective when conducted in group settings that are inclusive of all participants’ is rather general and does not inform the reader exactly why and for whom group sessions prove successful, or the components involved in group training which lead to its success. Recommendations which are developed based on such a statement would be inherently vague, and require the reader to involve themselves in a re-analysis of the primary syntheses in order to assist them in developing an effective group training programme, which limits the usefulness of mixed-methods synthesis in informing policy or practice.

To avoid this, reviewers should consider close examination of the included single-method syntheses, then ascertain which contextual aspects are of greatest importance to the mixed-methods synthesis and subsequently use these to anchor the finding. In the above example, the original synthesized finding incorporates categories for both group learning and for autonomy as an objective of education. This synthesis demonstrates that achieving autonomy (with regards to self-management) is important, and that to achieve autonomy, patients must first be engaged and drawn into a group dynamic which encourages ‘learning from sharing’ rather than being tied to a timeframe or curricula. These nuances are lost when the synthesized finding is stripped of context and included within the mixed-methods synthesis, potentially leading the reader to erroneously assume that simply grouping participants together is the key to experiencing the benefits of such education. Thus, the mixed-methods synthesis, if contextually grounded, should be as follows:

Educational programmes that focus on patients becoming autonomous through the use of group processes that encourage ‘learning from sharing’ are viewed positively by participants and may be particularly effective when conducted in group settings that are inclusive of all participants

Systematic reviews of effects rely on a variety of output methods, which depend on both the nature of the included primary data and, in some instances, author preference. The most rigorous means of combining data within a systematic review is meta-analysis. If data are incompatible with meta-analysis, tabular presentation of primary data is a simpler way of visually presenting similar data. Unfortunately, many reviews rely on narrative presentation, whereby the author describes the results of included studies in continuous prose.

When including systematic reviews of effects that present results exclusively (or almost exclusively) in narrative form, it is more appropriate for the author of a mixed-methods systematic review to skip the tabular conversion of quantitative to qualitative data and instead take a more qualitative approach, analysing the quantitative data with a thematic analysis tool such as the Thematic Analysis Program. This results in a three-tiered meta-aggregation suitable for direct combination with the synthesized findings presented within the qualitative component of the mixed-methods review.

The findings of the review: summative, prescriptive or indicative?

Clinicians rely on the ability of systematic review authors to condense their results into recommendations which are immediately useful in informing their practice. In other words, rather than focusing on providing an executive summary of the results of a systematic review, a good systematic reviewer should convert this summary into one or more statements explicitly describing what a clinician needs to do in order to adhere to evidence-based best practice.

On the basis of consultation with experts in a mixed-methods workshop (JBI International Convention, October 2013, Adelaide, Australia), the consensus was that rather than taking either the prescriptive approach outlined above or a basic summation of the results of a systematic review, authors of mixed-methods systematic reviews should take the mid-line approach of providing indicative statements based on the available evidence:

  1. Summative: Studies included in this review generally suggest that treating patients with X is more effective and results in shorter length of hospital stay compared with Y.
  2. Prescriptive: Clinicians should administer X to patients instead of Y.
  3. Indicative: The clinician should consider administering X rather than Y as this has been shown to be both more effective and results in shorter length of hospital stay.


On the basis of the recommendations of the JBI International Mixed Methods Reviews Methodology Group in 2012,10,11 the Institute adopted the segregated approach to mixed-methods synthesis described by Sandelowski et al.,7 which consists of separate syntheses of each component method of the review. JBI mixed-methods synthesis of the findings of the separate syntheses uses a Bayesian approach to translate the findings of the initial quantitative synthesis into qualitative themes, and then pools these with the findings of the initial qualitative synthesis.

The JBI mixed-methods syntheses can be managed in the following two ways:

  1. Two or more individual, single-method reviews may be conducted (via CReMS (Comprehensive Review Management System) and appropriate analytical modules such as MAStARI (Meta Analysis and Statistics Assessment and Review Instrument), QARI (Qualitative Assessment and Review Instrument), etc.) and published as separate, single-method reviews. These single-method reviews may then be combined in a mixed-methods review using a new mixed-methods protocol and JBI's Mixed Methods Assessment and Review Instrument (MMARI) module. CReMS permits reviewers to identify previously completed syntheses in any of the analytical modules and to link them to a new mixed-methods review. This will automatically re-publish the previously published individual review reports as sections of the new mixed-methods review.
  2. The mixed-methods review may be conducted as a whole (via CReMS and relevant analytical modules) and the component syntheses published only as part of the mixed-methods review report.

The mixed-methods review question will determine the components of the review. For example, in a mixed-methods review of the use of blood glucose monitoring (BGM) in people newly diagnosed with type 2 diabetes, the question may focus on the effects of BGM on blood sugar levels, the cost–benefits of BGM, and the experience of people newly diagnosed with type 2 diabetes in using BGM. In the White et al.13 mixed-methods review, the broad question focused on the effects of BGM on blood sugar levels, but a broader perspective that encompasses the effects on the outcome itself, the related costs and the experiences of people newly diagnosed with type 2 diabetes may generate more meaningful and useful findings for patients, policy makers and practitioners. To conduct a mixed-methods review for this example, the reviewer would conduct three segregated syntheses using JBI-MAStARI, JBI-QARI and JBI-ACTUARI (Analysis of Cost, Technology and Utilisation Assessment and Review Instrument), and then configuratively aggregate the results of these syntheses, as shown in Fig. 2. Within this system, the results of primary mixed-methods research will be separated into their respective components and included within individual syntheses (the quantitative component will be incorporated into the quantitative synthesis within MAStARI, the qualitative component into the qualitative synthesis within QARI, etc.). It is important to ensure that only those components of a mixed-methods study which are eligible for inclusion within their respective syntheses (based on pre-defined inclusion criteria) contribute to the review, and that eligible studies that include quantitative, qualitative or other data are presented as separate studies within their individual syntheses (Fig. 7).

Figure 7
Figure 7:
The Joanna Briggs Institute model of mixed-methods synthesis.


1. Petticrew M. Why certain systematic reviews reach uncertain conclusions. Br Med J 2003; 326:756–758.
2. Petticrew M, Roberts H. Systematic reviews in the social sciences: a practical guide. Malden, MA:Blackwell Publishing; 2006.
3. Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. Int J Evid Based Healthc 2005; 3:207–215.
4. Johnson RB, Onwuegbuzie AJ. Mixed methods research: a paradigm whose time has come. Educ Research 2004; 33: 14–26.
5. Thomas J, Harden A, Oakley A, et al. Integrating qualitative research with trials in systematic reviews: an example from public health. Br Med J 2004; 328:1010–1012.
6. Bruinsma SM, Rietjens JA, Seymour JE, et al. The experiences of relatives with the practice of palliative sedation: a systematic review. J Pain Symptom Manage 2012; 44:431–435.
7. Sandelowski M, Voils CI, Barroso J. Defining and designing mixed research synthesis studies. Res Sch 2006; 13:29.
8. Harden A. Mixed-methods systematic reviews: integrating quantitative and qualitative findings. London, UK: FOCUS; 2010 (Technical Brief no. 25).
9. Thomas J, Sutcliffe K, Harden A, et al. Children and healthy eating: a systematic review. London: EPPI Centre, Social Science Research Unit, Institute of Education, University of London; 2003.
10. Crandell JL, Voils CI, Chang Y-K, Sandelowski M. Bayesian data augmentation methods for the synthesis of qualitative and quantitative research findings. Qual Quant 2011; 45:653–669.
11. Voils CI, Hasselblad V, Crandell JL, et al. A Bayesian method for the synthesis of evidence from qualitative and quantitative reports: an example from the literature on antiretroviral medication adherence. J Health Serv Res Policy 2009; 14:226–233.
12. Chang Y-K, Voils CI, Sandelowski M, et al. Transforming verbal counts in reports of qualitative descriptive studies into numbers. West J Nurs Res 2009; 31:837–852.
13. White H, Pearson A, Konno R, et al. The effectiveness, appropriateness and meaningfulness of self-monitoring of blood glucose (SMBG) in patients with type 2 diabetes: a worked example of a mixed method systematic review. Adelaide: The Joanna Briggs Institute; 2013.
14. Pearson A, White H, Bath-Hextall F, et al. A mixed methods approach to evidence synthesis. Philadelphia:Lippincott, Williams and Wilkins; 2014.
15. Joanna Briggs Institute. The Joanna Briggs Institute Reviewers’ Manual 2014: methodology for JBI mixed methods systematic reviews. Adelaide: The Joanna Briggs Institute; 2014.
16. Sandelowski M, Leeman J, Knafl K, Crandell JL. Text-in-context: a method for extracting findings in mixed-methods mixed research synthesis studies. J Adv Nurs 2012; 69:1428–1437.

Keywords: evidence-based healthcare; evidence synthesis; mixed-methods research; mixed-methods reviews; qualitative research; systematic reviews

International Journal of Evidence-Based Healthcare © 2015 The Joanna Briggs Institute
