
METHODOLOGY PAPERS

Conducting systematic reviews of economic evaluations

Gomersall, Judith Streak PhD, MCom, BA1,2; Jadotte, Yuri Tertilus MD3,4; Xue, Yifan MBBS, MPH, MClinSC2; Lockwood, Suzi PhD, MSN, RN, OCN, FAAN5; Riddle, Dru PhD, DNP, CRNA6; Preda, Alin MD, MPH6

International Journal of Evidence-Based Healthcare 13(3):p 170-178, September 2015. | DOI: 10.1097/XEB.0000000000000063


Introduction

Decision makers working in healthcare delivery and policy require the best available evidence not only on the relative effectiveness of different intervention (treatment/programme or technology) options but also on the resource use and cost implications associated with them. Health economic evaluation research, which involves the comparative analysis of alternative health interventions in terms of their resource use, costs and costs relative to effectiveness, responds to this need.1 It focuses on identifying, measuring, valuing and comparing the resource use, costs and costs relative to the health benefits of interventions designed to improve health.2

There are four main health economic research design types: cost-minimization analysis, cost-effectiveness analysis, cost-benefit analysis and cost-utility analysis. The first is a partial economic evaluation method, as only the costs of the intervention and comparator are estimated; it is designed to be used when the intervention alternatives have been shown to be equivalent in application modalities and health effects. Regardless of the design type used in an economic evaluation, costs are measured in monetary units. The designs differ in how they value health benefits (Table 1).

Table 1: Four common economic evaluation design types
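Although the guidance itself prescribes no computation, the full evaluation designs in Table 1 share one underlying calculation: the incremental cost-effectiveness ratio (ICER) referred to later in this article. The following minimal sketch (in Python) illustrates that calculation; the function name and all figures are hypothetical and purely illustrative, not drawn from the guidance or from any included study.

    # Sketch of the incremental cost-effectiveness ratio (ICER) underlying
    # full economic evaluations. All figures are hypothetical.
    def icer(cost_new, cost_old, effect_new, effect_old):
        """Incremental cost per incremental unit of health effect.

        In cost-effectiveness analysis the effect is in natural units
        (e.g. life-years gained); in cost-utility analysis it is in
        quality-adjusted life-years (QALYs).
        """
        delta_cost = cost_new - cost_old
        delta_effect = effect_new - effect_old
        if delta_effect == 0:
            # Equal effects: a cost-minimization comparison applies instead.
            raise ValueError("equal effects; compare costs only")
        return delta_cost / delta_effect

    # Hypothetical example: the intervention costs $15,000 more per patient
    # and yields 1.0 additional QALY, i.e. an ICER of $15,000 per QALY.
    print(icer(53_000, 38_000, 2.5, 1.5))  # -> 15000.0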

Systematic review is widely recognized as critical to guide decision makers towards implementing best healthcare practice and policy.3 Initially, systematic review focused on synthesizing evidence from randomized controlled trials investigating health treatment efficacy.3 The methodology has evolved and reviewers now synthesize diverse types of evidence to support evidence-informed decision making.4–6

The Joanna Briggs Institute (JBI) is a not-for-profit research institute located in the Faculty of Health Sciences at the University of Adelaide, Adelaide, Australia. Established in 1996, it focuses on conducting systematic reviews of evidence to support clinical decision making, developing and disseminating method guidance for evidence synthesis, and developing and disseminating tools to support evidence implementation in the clinical setting. The Institute has developed guidance to support reviewers in conducting systematic reviews of a variety of evidence types.5 It is particularly well known for its contribution to methods for critical appraisal and synthesis of qualitative evidence and for its guidance on critical appraisal and synthesis of text-based expert opinion. The JBI has developed a computer software system, the System for the Unified Management, Assessment and Review of Information (SUMARI), for the systematic review of evidence. It includes various modules that reviewers select depending on the kind of evidence they are reviewing. The module that supports reviewers with economic evidence review is the JBI Analysis of Cost, Technology and Utilization Assessment and Review Instrument (ACTUARI).7

Our objective in this article is to present the JBI's most recent guidance for conducting systematic reviews of evidence from primary economic evaluation research. The guidance presented in this article is the outcome of a working group formed by the JBI in 2012 to review and update its existing guidance for systematic review of economic evaluations.7

The article proceeds with a methods section that describes the activities of the working group. This is followed by a results section, which presents the findings from the two literature reviews that the group conducted to inform the reshaping of the guidance, and an outline of the updated guidance. The article concludes with a discussion which highlights limitations of the guidance and next steps for its further development.

Method

The working group formed to review and enhance the existing JBI guidance for conducting systematic reviews of economic evaluation evidence included two researchers from the JBI head office in Adelaide and four from the JBI Collaboration, all authors of this article. The group met on a monthly basis and engaged in the following activities:

  1. Review of literature on the utility/futility of reviews of economic evaluation evidence addressing questions of intervention cost and cost-effectiveness, and reflection on what this suggests about how systematic reviews of economic evaluations should be conducted, if they are conducted at all. Resource limitations and time constraints precluded the group from conducting systematic reviews.
  2. Assessment of the quality criteria in the critical appraisal tool embedded in the existing JBI guidance for conducting systematic reviews of economic evaluations against aspects of research design and conduct identified in Australian National Health and Medical Research Council (NHMRC) guidance8 as important to establish internal validity.
  3. Drafting of new guidance for conducting reviews of economic evaluations, informed by the findings of these two processes.

The ‘new’ guidance was presented and discussed with members of the JBI Collaboration by three members of the working group (J.S.G., Y.X. and D.R.) during the 2013 JBI Convention.

Results

Debate over the futility/utility of systematic reviews of economic evidence and how it shaped the guidance

The first literature review found that although the value of primary economic evaluation research in evidence-based healthcare is not disputed, the value of systematic reviews of this kind of evidence is.9–11

More specifically, three arguments were identified in the literature for why systematic review of economic evaluation evidence is not valuable for guiding decision makers towards better health policy and/or practice, and hence should not be conducted. The first argument9–12 starts from the premise that the purpose of systematic reviews of economic evaluations is to generate an average generalizable incremental cost-effectiveness measure from individual cost-effectiveness measures. This narrow conceptualization of purpose is informed by the traditional focus in systematic review on synthesizing measures of intervention effectiveness, gathered largely from randomized controlled trials, to generate a robust generalizable measure of effect size and direction. The argument then proceeds by identifying the following two reasons why meta-analysis of the cost-effectiveness measures of economic evaluation studies included in a systematic review is generally unwise2,9–11:

  1. Resource use and costs vary from country to country, across regional settings within countries and over time, making the cost component of cost-effectiveness measures incomparable.
  2. There is a high likelihood that differences in context (including institutional delivery capacity) and populations (behaviour and culture) will translate into differences in how interventions work in different settings, and how effective they are, which undermines the comparability of the effectiveness and cost-effectiveness measures of included studies. This problem, it is noted, will be particularly relevant when the intervention in question is a complex multicomponent intervention, such as a public health programme intervention.

The second argument is that such review is futile because differences in the decision-making context across countries and time periods will undermine the transferability of findings.10,11 This, it is noted, is most likely when the target audience for systematic reviews is defined in the usual way, as international – i.e. decision makers working in health policy and practice in any country. If the target audience is defined more narrowly, as decision makers in a single country, and studies are included only from that country, reviews are more likely to generate findings applicable to decision makers' context(s). The third argument is that the dearth of economic evaluation studies (including costing/partial economic evaluation studies and full economic evaluations) means that the outcome of most such systematic reviews is an empty systematic review report rather than recommendations for practitioners or policy makers informed by quality evidence.9

At the same time, the working group identified strong arguments for systematic reviews of economic evaluations. It found that a number of international organizations and researchers working in evidence synthesis to promote better healthcare, policy and outcomes see value in systematic reviews of economic evaluations and offer guidance for them.4,9,10 Some of these organizations, such as the Cochrane Collaboration,4 provide guidance for reviewing economic evidence, addressing questions about resource use and cost-effectiveness (efficiency), as part of systematic reviews focused on a question about health intervention effectiveness.5 The proponents of systematic reviews of economic evaluations acknowledge that there is commonly wide variation in the settings/contexts and population characteristics (including cultures) of the economic evaluation studies available for inclusion in reviews. Moreover, they acknowledge that these differences are likely to preclude using meta-analysis to generate a one-size-fits-all answer about the relative cost-effectiveness of alternative interventions. However, they argue that this does not make such reviews futile. Instead, they explain that the reviews still have the potential to offer policy makers, clinicians, community leaders, patients and other decision makers useful information to inform decision making. They can do this by the following:11

  1. Identifying for decision makers the range and quality of available studies related to a particular resource use/cost and/or cost-effectiveness question, and gaps in the evidence base. The identification of evidence gaps and the research directions offered in systematic reviews are seen as particularly valuable because they help decision makers and researchers understand the kind of evidence base (or decision model) that is needed.
  2. Alerting decision makers to results that may be relevant to the intervention choice/trade-offs they are grappling with. This could include alerting decision makers to reviews (even if only a few) that have produced robust results about the cost-effectiveness of interventions they are considering, based on the findings of many included studies (e.g. 20 studies showing an incremental cost-effectiveness ratio (ICER) between $10,000 and $20,000) conducted in different settings with different populations.
  3. Providing decision makers with an enhanced understanding of the conditions that promote the effectiveness and efficiency of different interventions. This contribution may be achieved if the objective of the review is not limited to summarizing the range of cost-effectiveness measures from existing studies but extends to learning lessons about the circumstances (contextual, intervention and population factors) that promote or impede cost-effectiveness.

The following statement by Drummond, who is internationally recognized as a leader in economic evaluation research and methodology, reflects this argument about the value of systematic reviews of economic evaluations:

‘… the real contribution of a systematic review of economic evaluations may not be to produce a single authoritative result, but to help decision makers understand the structure of the resource allocation problem that they are addressing and the impact, on the overall result, of the main parameters.’ (ref. 13, page 46.)

The working group concluded from this first literature review that there is value in systematic reviews that seek to identify the resource use, costs and costs relative to benefits of alternative health interventions, to help guide decision makers towards decisions that promote efficient use of resources and best health outcomes. Furthermore, consideration of such evidence may be particularly useful if conducted as part of comprehensive or mixed method reviews, which also identify, critically appraise and synthesize evidence from qualitative and quantitative studies on intervention acceptability, meaningfulness and effectiveness. Finally, we concluded that the debate about the futility/utility of systematic reviews of economic evaluations makes it clear that systematic review guidance should steer reviewers away from defining review objectives/questions as estimating the cost-effectiveness of alternative programmes/treatments. Instead, the value of systematic reviews of economic evaluations lies in their potential to enhance decision makers' understanding of the circumstances that are conducive to an intervention being more cost-effective than the comparator, and this should inform the framing of review objectives.10

The working group found that the existing JBI guidance framed the objectives of reviews of cost and cost-effectiveness as if the goal were to determine the costs or cost-effectiveness of alternatives. Adjusting the guidance to reflect this shift, which the literature indicates is warranted, was one of the changes made to the JBI guidance for systematic review of economic evaluations.

Appropriateness of the tool for assessing quality of economic evaluations

The JBI ACTUARI software includes a general critical appraisal checklist for appraising economic evaluations (see Table 2). The work the group undertook to assess and enhance this checklist, which is embedded in the existing JBI systematic review methodology for assessing the methodological quality of studies addressing questions about costs (savings) and cost-effectiveness, was limited. It involved, first, identifying the design aspects regarded as key for promoting validity/minimizing bias in primary economic evaluation research, using the NHMRC guidance for evaluation of economic evidence for this purpose,8 and second, comparing the JBI tool with two others14,15 promoted internationally as tools for judging the quality of economic evaluations.

Table 2: Joanna Briggs Institute critical appraisal checklist for studies reporting economic evaluations

The following are the economic evaluation research design and conduct aspects identified in the Australian NHMRC guidance as important for establishing the validity of findings:

  1. There is a clear definition of the perspective taken for the health effect and cost measurement. This may be the societal perspective, which results in coverage of a broad range of costs and allows for identification of any shifting of the cost burden within society. In some cases, however, the perspective may be narrower (e.g. that of a patient, a particular health provider or government). The best-practice definition of perspective depends on the information needs of the decision maker/systematic review user. For example, if the objective is to inform the investment of a hospital funder, the best perspective may be the funder's.
  2. Comprehensive coverage of costs and health outcomes/effectiveness for the selected perspective.
  3. Credible valuation of costs and health outcomes/effectiveness.
  4. Inclusion of an incremental analysis for the intervention and comparator studied.
  5. The time period of the analysis is sufficiently long to capture all relevant future cost and health effect consequences.
  6. Discounting of future costs and health benefits.
  7. Use of sensitivity analysis to test the robustness of cost/effect findings, with a clear explanation of which assumptions the results indicate are critical to the validity of the cost-effectiveness results (criteria 6 and 7 are illustrated in the sketch after this list).
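Criteria 6 and 7 are the two computational items in the list. The following minimal sketch (in Python) illustrates what they involve: discounting a future cost stream to present value and running a simple one-way sensitivity analysis on the discount rate. The 5% base-case rate and the cost stream are hypothetical, chosen only to show the mechanics; they are not values recommended by the NHMRC or JBI guidance.

    # Illustration of NHMRC criteria 6 and 7: discounting and one-way
    # sensitivity analysis. All numbers are hypothetical.
    def present_value(stream, rate):
        """Discount a stream of yearly values (year 0 first) to present value."""
        return sum(v / (1 + rate) ** t for t, v in enumerate(stream))

    yearly_costs = [10_000, 4_000, 4_000, 4_000]  # hypothetical cost stream

    print(f"Base case (5%): {present_value(yearly_costs, 0.05):,.0f}")

    # One-way sensitivity analysis: vary the discount rate, hold all else
    # fixed. If the conclusion flips across plausible rates, the rate is a
    # critical assumption and should be reported as such.
    for rate in (0.0, 0.03, 0.05, 0.07):
        print(f"rate={rate:.0%}  present value={present_value(yearly_costs, rate):,.0f}")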

The working group found that although the JBI checklist may have limitations (see 'Discussion'), it does reflect the quality criteria identified as important for establishing validity in economic evaluation and is in line with the other tools examined. It was, therefore, decided that the tool would be included as is in the new guidance, with two qualifications: reviewers should be advised to use a tool designed specifically to appraise model-based economic evaluations when appraising such studies, and adjusting the tool to address its limitations would be a task for the future.

Guidance

The updated guidance for systematic review of economic evaluations17 is recommended for use either in a stand-alone review examining evidence for a question(s) about intervention/comparator cost and cost-effectiveness or as part of a comprehensive or mixed method review. A JBI systematic review of economic evaluation evidence follows the same steps as a JBI review of any other evidence type (e.g. qualitative, prevalence/incidence, effectiveness). These are as follows:

  1. Step 1 – Developing the review protocol and submitting it for publication in the JBI Library of Systematic Reviews and Implementation Reports.
  2. Step 2 – Searching the evidence and selecting studies based on the stated inclusion criteria.
  3. Step 3 – Assessing study quality using the relevant appraisal tool in the System for the Unified Management, Assessment and Review of Information software (in this case, the checklist embedded in ACTUARI).
  4. Step 4 – Extracting data from articles that meet the inclusion criteria and are to be included in the review, using a predetermined data extraction template.
  5. Step 5 – Analyzing and synthesizing the data extracted from the included studies to address the question(s) asked in the review, using narrative, tables and the particular JBI tool for synthesis of the particular evidence type (in this case, the three-by-three dominance ranking matrix tool; see Fig. 1).
  6. Step 6 – Writing up the findings of the review and drawing inferences for health practice or policy and research.

Figure 1: Joanna Briggs Institute three-by-three dominance ranking matrix tool for synthesizing and interpreting findings from economic evaluations. Note: ‘+’ implies the intervention has a greater cost or greater health effect than the comparator; ‘0’ that the intervention has equal cost or health effect/benefit to the comparator; ‘−’ that the intervention is less costly or less effective than the comparator. Read the matrix by row, left to right.
Step 1 – protocol development

The protocol development step in a systematic review of evidence from economic evaluations is identified in the guidance as key. A template, which is embedded in the JBI software for conducting systematic reviews, is provided in the guidance. The template includes the following:

  1. A background section, in which reviewers motivate the review with reference to the existing literature and the information needs of users (clinicians, policy makers or patients).
  2. Definition of the review objective/question.
  3. A methods section, in which the review inclusion criteria are defined using the 'PICO' mnemonic (Patient, Intervention, Comparison, Outcome), the planned search strategy is presented, and the critical appraisal, data extraction and synthesis methods are described.
  4. A statement about any conflicts of interest.
  5. Acknowledgements.
  6. References.
  7. Appendix (this should incorporate the critical appraisal and data extraction tools to be used).

The guidance explains that once the protocol has been developed, it should be submitted to the JBI Library of Systematic Reviews and Implementation Reports, where it will be reviewed by two reviewers. The protocol is published after the reviewers' comments have been integrated. Any deviations from it need to be explained and justified in the final systematic review report/publication.

Step 2 – search and study selection

The search strategy recommended in the guidance is the standard three-step search strategy used for study identification in all JBI systematic reviews. Reviewers are offered a list of databases that are known to index health economic evaluation studies, which they may want to include in their list of databases to be searched. It is recommended that grey literature be searched in systematic reviews of this kind of evidence because many economic evaluation studies are commissioned by government and are not published in the commercial literature.

The guidance for study selection specifies that two reviewers select the studies that potentially match the inclusion criteria from the database search results and retrieve the full text of any records for which it is unclear whether the study matches the criteria. Again, it is recommended that any disagreements or uncertainty be referred to a third person.

To meet the principles of transparency that are so important for establishing the validity of systematic reviews, the guidance requires reviewers to carefully document the process of study identification and selection. A narrative description and a flow chart of the process need to be included in the write-up of the systematic review report.

Step 3 – critical appraisal

It is recommended that the critical appraisal of the studies identified as matching the review inclusion criteria be undertaken, as per best practice,3 by two reviewers working independently, and that any disagreements be resolved through discussion or by a third party. Reviewers are required to use the critical appraisal tool embedded in CReMS (Comprehensive Review Management System) and, for model-based studies, either the Philips et al.16 appraisal instrument for models or another commonly used tool (identified in the published protocol). No recommendation is made for a minimum score out of 11 that must be achieved in the assessment for a study to be included in the review. The purpose of the critical appraisal is presented as assisting reviewers in identifying study design weaknesses that may produce bias in the measures of cost/effect and that should inform the interpretation of the evidence included in the review.

Step 4 – data extraction

The guidance explains that two reviewers should undertake the extraction of data from the included studies, with cross-checking for comprehensiveness and uniformity. Moreover, a single data extraction sheet should be used for all included studies, and the data extracted should cover the characteristics of the economic evaluation studies and the cost and cost-effectiveness outcomes of interest/relevance to answering the review question.

Step 5 – analysis/synthesis of findings and drawing inferences for policy/practice and research

The method recommended in the updated guidance for synthesizing the data extracted from included studies is to use narrative and tables and the three-by-three dominance ranking matrix (DRM) tool. The DRM tool is a simple framework for organizing and classifying the cost-effectiveness results/measures of the economic evaluation studies identified and included in reviews. It is designed to help reviewers and users of reviews draw conclusions about what the included studies suggest about which intervention is likely to be most cost-effective, and therefore preferred from a resource saving and health maximization perspective. The tool, which is included in the JBI ACTUARI software, is presented in Fig. 1. It has the following classification options (a sketch of the classification logic follows the list):

  1. Strong dominance for the intervention – selected when the incremental cost-effectiveness measure shows the intervention is, first, more effective and less costly; second, as effective and less costly; or third, equally costly and more effective. In this case, the evidence may be interpreted as suggesting decision makers should, from an efficiency perspective, favour the intervention over the comparator [at least in circumstances similar to those of the evaluation(s)].
  2. Weak dominance for the intervention – selected when the measure shows the intervention as, fourth, equally costly and equally effective; fifth, more effective and more costly; or sixth, less effective and less costly. In this case, no conclusion may be drawn about whether the intervention is preferable from an efficiency perspective without further information on the priorities/preferences of decision makers in the particular decision-making context. Decision makers – clinicians, health managers/administrators, policy makers and patients – are left to judge whether the cost/benefit trade-offs are worth the introduction of the intervention in their particular context.
  3. Nondominance for the intervention – selected when the measure shows the intervention as, seventh, more costly and less effective; eighth, as costly and less effective; or ninth, more costly and as effective. In this case, the evidence may be interpreted as suggesting the comparator is favourable from an efficiency perspective [at least in circumstances similar to those of the evaluation(s)].
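The following is a minimal sketch (in Python) of the classification logic just described, mapping the nine cells of the matrix to the three dominance categories. Each study result is reduced to a pair of signs relative to the comparator, following the sign convention in the note to Fig. 1; the function and category labels are ours, for illustration only, and are not taken from the ACTUARI software.

    # Sketch of the three-by-three dominance ranking matrix (DRM) logic.
    # cost and effect are each '+', '0' or '-' relative to the comparator,
    # following the sign convention in the note to Fig. 1.
    def drm_classify(cost, effect):
        strong = {('-', '+'), ('-', '0'), ('0', '+')}  # cells 1-3
        weak = {('0', '0'), ('+', '+'), ('-', '-')}    # cells 4-6
        non = {('+', '-'), ('0', '-'), ('+', '0')}     # cells 7-9
        pair = (cost, effect)
        if pair in strong:
            return 'strong dominance for the intervention'
        if pair in weak:
            return 'weak dominance: context/preferences must decide'
        if pair in non:
            return 'nondominance: comparator favoured'
        raise ValueError("cost and effect must each be '+', '0' or '-'")

    # Hypothetical example: an intervention that is less costly ('-') and
    # more effective ('+') than its comparator is strongly dominant.
    print(drm_classify('-', '+'))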

It is recommended that when analyzing and synthesizing/summarizing the results of the included studies, reviewers aim to cover the following three aspects, which relate to the review objectives:

  1. Classify the studies using the DRM tool.
  2. Present and describe the cost/cost-effectiveness results of the studies paying careful attention to any variations.
  3. Consider what the differences in the cost-effectiveness results together with the study characteristics (differences in intervention implementation and design, populations, setting, time frames of analysis) suggest about the circumstances in which the intervention is likely to be more effective and less costly than the comparator.

Step 6 – drafting the systematic review report and publication

The guidance presents a template for reviewers to follow in the write-up of the review. It reminds reviewers to give careful consideration to the findings from methodological quality assessment of included studies, and the characteristics of the studies (including differences in populations, settings, time periods, perspectives) when drawing the recommendations for policy and practice. Identifying research priorities implied by the evidence gaps found by the review is presented as a critical final stage in the review.

Discussion

This article has presented the JBI's updated guidance for conducting systematic reviews of evidence from economic evaluation research. The guidance is designed to support reviewers in identifying, critically appraising and summarizing evidence addressing questions about intervention cost-effectiveness, either in a stand-alone review or as part of a mixed method or comprehensive review. A working group comprising the authors developed the guidance.

The new guidance incorporates only marginal adjustments to the original guidance. The first adjustment concerns the framing of review objectives: it steers reviewers away from seeking and synthesizing data to estimate the costs or cost-effectiveness of an intervention/comparator, and towards describing and summarizing the findings of studies on the resource use and cost-effectiveness outcomes of interventions implemented in different settings, and trying to identify the circumstances that are conducive to increasing intervention efficiency and how. The second is the recommendation that a tool designed specifically for appraising model-based studies be used for such studies, in addition to the generic JBI critical appraisal tool embedded in ACTUARI.

Three limitations of the guidance require noting, which also point towards the work that needs to be done to improve the guidance and enhance the value of JBI systematic reviews of economic evaluations.

Critical appraisal tool limitations

The first two limitations of the updated guidance relate to the critical appraisal tool. The first is that a number of questions in the tool ask about the appropriateness of both the cost and the health effect measurement. It would be better if each of these were separated into two questions, as there are instances where a 'yes' is warranted for the cost measurement but not for the effectiveness measurement. The second is that the tool includes a question about whether the results are generalizable (question 11 in Table 2) but does not offer reviewers explicit guidance on how to assess the transferability of findings from economic evaluation studies from one context to another. Guidelines for assessing the transferability of results exist; how they can be incorporated into the JBI critical appraisal tool to assist reviewers in assessing the generalizability of findings is an issue that will be taken up in future methodological work of the Institute.

Limitations of the synthesis tool

The new guidance calls for systematic reviewers of economic evaluations to focus their data analysis and synthesis not solely on identifying and summarizing results about the cost-effectiveness of interventions/comparators but also on identifying the environment conducive to intervention efficiency (lower costs and greater effectiveness). Yet, aside from reviewers studying the data extracted on the characteristics of the studies in light of the measures of cost/effect and looking for patterns, no rigorous method is proposed for answering this question. Exploring the quantitative techniques that others have used to predict the population, contextual and intervention/programme design characteristics conducive to lower costs and greater effectiveness, and incorporating a tool for this into the JBI guidance, is a priority.

Acknowledgements

The authors report no conflicts of interest.

References

1. Hoch J, Dewa C. An introduction to economic evaluation: what's in a name? Can J Psychiatry 2005; 50:159–166.
2. Donaldson C, Mugford M, Vale L. Evidence-based health economics: from effectiveness to efficiency in systematic review. London: BMJ Books; 2002.
3. Liberati A, Altman D, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate healthcare interventions: explanation and elaboration. BMJ 2009; 339:b2700. doi:10.1136/bmj.b2700.
4. Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions, version 5.1.0. The Cochrane Collaboration; 2011. www.cochrane-handbook.org [accessed 20 March 2014].
5. Joanna Briggs Institute. Reviewers' manual, 2014 edition. Adelaide, South Australia: The Joanna Briggs Institute, The University of Adelaide; 2014.
6. Evidence for Policy and Practice Information & Co-ordinating Centre. EPPI-centre methods for conducting systematic reviews; 2007. www.eppi.org [accessed 25 March 2014].
7. Joanna Briggs Institute. Reviewers' manual, 2011 edition. Adelaide: Joanna Briggs Institute; 2011.
8. NHMRC. How to compare the costs and benefits: evaluation of the economic evidence. Canberra: Biotext; 2001.
9. Centre for Reviews and Dissemination. Systematic reviews: CRD's guidance for undertaking reviews in health care. York: Centre for Reviews and Dissemination, University of York; 2008.
10. Anderson R, Shemilt I. The role of economic perspectives and evidence in systematic review. In: Shemilt I, Mugford M, Vale L, et al., editors. Evidence-based decisions and economics: health care, social welfare, education and criminal justice. Oxford: BMJ Publishing Group, Blackwell Publishing; 2010. Chapter 3.
11. Anderson R. Systematic reviews of economic evaluations: utility or futility? Health Econ 2010; 19:350–364.
12. Shemilt I, Mugford M, Byford S, et al. Chapter 15: incorporating economics evidence. In: Higgins JPT, Green S, editors. Cochrane handbook for systematic reviews of interventions, version 5.0.1; 2008. http://www.cochrane-handbook.org [accessed 20 March 2014].
13. Drummond M. Evidence-based medicine meets economic evaluation – an agenda for research. In: Donaldson C, Mugford M, Vale L, editors. Evidence-based health economics: from effectiveness to efficiency in systematic review. London: BMJ Books; 2002.
14. Drummond M, Jefferson T. Guidelines for authors and peer reviewers of economic submissions to the BMJ. The BMJ Economic Evaluation Working Party. BMJ 1996; 313:275–283.
15. Evers S, Goosen M, de Vet H, et al. Criteria list for assessment of methodological quality of economic evaluations: Consensus on Health Economic Criteria. Int J Technol Assess Health Care 2005; 21:240–245.
16. Philips Z, Ginnelly L, Sculpher M, et al. Review of guidelines for good practice in decision-analytic modelling in health technology assessment. Health Technol Assess 2004; 8(36).
17. Gomersall JS, Jadotte YT, Xue Y, et al. The systematic review of economic evaluation evidence; 2014. http://www.jbi.org [accessed 1 March 2014].
Keywords:

economic evidence synthesis; systematic review methods

International Journal of Evidence-Based Healthcare © 2015 The Joanna Briggs Institute

