Systematic reviews are considered to be the highest form of evidence that may be used to inform practice and policy decision making. Though there are differences in the various approaches that reviewers take in conducting a rigorous systematic review, it is generally agreed that certain aspects of systematic review methodology are indispensable, such as an a priori protocol that sets out the review inclusion criteria and methods, and a comprehensive and unbiased search strategy that aims to capture all relevant literature. A further critical step in any systematic review is the assessment of the methodological quality of studies that meet the inclusion criteria of the review.1 The goal of critical appraisal in a quantitative systematic review is to evaluate the extent to which potential risks of bias have been minimized by the design and conduct of individual studies. Each eligible study should be assessed according to the reported study methods to establish its internal validity and the reliability of its results.
The Joanna Briggs Institute's critical appraisal tools for quantitative studies contain appraisal criteria that address both the validity and reliability of a study. Systematic reviews published in the JBI Database of Systematic Reviews and Implementation Reports (JBISRIR) report on specific details of the approach taken for critical appraisal, including any specific understandings or interpretations of appraisal criteria that pertain to the review. The outcome of critical appraisal must be presented within the results section of the review, accompanied by a discussion of the methodological quality of all included studies.
However, it is not sufficient to simply describe the outcomes of critical appraisal in terms of the numbers of positive and negative answers to the items on the checklist, or as an overall score. For a systematic review to adequately accommodate variations in the quality of included studies, reviewers should also consider how the results of critical appraisal relate to the potential risks of bias of each study, and how the various domains of bias affect individual outcomes, individual studies, and the findings of the overall review.
The strategies that reviewers have used to incorporate quality assessments into data synthesis must also be described. As the synthesis of data from low quality studies with data from higher quality studies may result in unreliable estimates of effects, there is little value in assessing risk of bias of included studies and then combining and interpreting data regardless of the validity and reliability of the source. There are varying approaches that reviewers may adopt to incorporate the quality of included studies in a systematic review. Many reviewers choose to exclude studies that are deemed to be of low quality or at high risk of bias; however, reviewers should consider alternative approaches, such as meta-regression or sensitivity analysis, that use the results of critical appraisal to inform data synthesis without excluding otherwise eligible studies from the review. When review findings contribute to the generation of guidance for practice or policy decisions, the inclusion of low-quality evidence is particularly problematic. Quality assessment must also factor into the formation of conclusions and the generation of recommendations for practice; Grades of Recommendation should be assigned to all practice recommendations arising from a JBI review.2
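To illustrate the kind of sensitivity analysis described above, the following is a minimal sketch, assuming entirely hypothetical study data: a fixed-effect inverse-variance meta-analysis is pooled twice, once with all included studies and once restricted to studies judged at low risk of bias, so the reviewer can see how far the quality of the evidence moves the pooled estimate. The study names, effect estimates, and risk-of-bias judgements are invented for the example and do not come from any real review.

```python
import math

# Hypothetical data: each study's effect estimate (log odds ratio),
# its standard error, and an overall risk-of-bias judgement taken
# from the critical appraisal stage.
studies = [
    {"id": "Study A", "logor": -0.40, "se": 0.15, "rob": "low"},
    {"id": "Study B", "logor": -0.35, "se": 0.20, "rob": "low"},
    {"id": "Study C", "logor": -0.90, "se": 0.25, "rob": "high"},
    {"id": "Study D", "logor": -0.10, "se": 0.30, "rob": "high"},
]

def pool_fixed_effect(subset):
    """Inverse-variance fixed-effect pooled estimate and its standard error."""
    weights = [1 / s["se"] ** 2 for s in subset]
    pooled = sum(w * s["logor"] for w, s in zip(weights, subset)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Primary analysis: all included studies, regardless of appraisal result.
all_est, all_se = pool_fixed_effect(studies)

# Sensitivity analysis: restrict to studies rated at low risk of bias.
low_rob = [s for s in studies if s["rob"] == "low"]
low_est, low_se = pool_fixed_effect(low_rob)

print(f"All studies:  OR = {math.exp(all_est):.2f} (SE of log OR = {all_se:.3f})")
print(f"Low ROB only: OR = {math.exp(low_est):.2f} (SE of log OR = {low_se:.3f})")
```

If the two pooled estimates are similar, the review's conclusions are robust to the inclusion of the lower-quality studies; if they diverge, the discussion of methodological quality should say so explicitly rather than reporting appraisal scores alone.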
From the perspective of an editor of the JBISRIR, it is clear that there is a need to improve guidance on the conduct, discussion and incorporation of critical appraisal results in systematic reviews. With the development of new systematic review methodologies and the reinvigoration of our current methodologies, training courses and software, the timing for an increased emphasis on critical appraisal is ideal.
1. Porritt K, Gomersall J, Lockwood C. Study selection and critical appraisal. Am J Nurs. 2013;114(6):47-52.
2. The Joanna Briggs Institute. Supporting Document for the Joanna Briggs Institute Levels of Evidence and Grades of Recommendation. 2014 [cited 2015 Apr 8]. Available from: http://joannabriggs.org/assets/docs/approach/Levels-of-Evidence-SupportingDocuments-v2.pdf