Introduction
A defining feature of systematic reviews is the critique or appraisal of the included research evidence.1–10 This fundamental review process goes by different names in the literature, including risk of bias assessment, critical appraisal, assessment of study validity, assessment of methodological quality, and assessment of methodological limitations.11 The purpose of this appraisal is to assess the methodological conduct of a study and to determine the extent to which the study has addressed the possibility (or risk) of bias in its design, conduct, or analysis. All papers selected for inclusion in a systematic review (ie, those that meet the eligibility criteria described in the protocol) need to be subjected to rigorous appraisal, in duplicate, by 2 independent reviewers using an appropriate critical appraisal tool. The results of this appraisal can then be taken into consideration in the analysis, synthesis, and interpretation of the results of the systematic review. In most cases, the primary purpose of this assessment is to allow reviewers to answer the overarching question of how well a study was designed and performed with regard to avoiding systematic error (bias).
Over the nearly 3 decades of JBI’s ongoing investment in evidence synthesis,12–15 there have been many iterations of the JBI critical appraisal tools. These tools have been developed by JBI and collaborators, and approved by the JBI Scientific Committee following extensive consultation.16,17 Although the tools have been specifically designed for use within a systematic review, they can also serve various educational and clinical purposes, such as creating critically appraised topics, guiding critical reading of the literature, supporting peer review, and structuring journal clubs. The suite of critical appraisal tools is largely organized by study design, with each tool taking the form of a checklist of targeted questions that address the key methodological limitations of that design and the safeguards authors may have implemented to minimize the impact of bias on the results of their study.
There are many critical appraisal and risk of bias tools available for use in systematic reviews.18,19 At JBI, we have maintained our own suite of critical appraisal tools (the first proposed in 2002) for use by our Collaboration and other review authors, and these have been included in the JBI Manual for Evidence Synthesis for many years. They have been endorsed and ratified by the JBI Scientific Committee as the ideal tools for use across JBI’s toolkit for evidence synthesis.16 JBI has developed its own set of tools, rather than simply recommending those developed and published by other groups, for a number of reasons. Firstly, no other set of tools is broad enough to encompass all of the JBI-endorsed approaches to evidence synthesis. If JBI were to take an endorsement approach, a multitude of different tools (which may differ in design, structure, and application) would be in use across JBI reviews, inevitably leading to issues with consistency and publication formats, and to steeper learning curves. In addition, across the suite of methodologies and methods,16 there remain gaps where no suitable alternative tool exists. Finally, by developing these tools ourselves, we are able to readily embed them in our systematic review software20 and our educational training programs.21
As the field of evidence synthesis continues to evolve,22,23 there have been ongoing discussions within JBI regarding our approach to critical appraisal, particularly given advances in the methodological literature (notably the shift from critical appraisal to the concept of risk of bias)11,24 and the development of new appraisal tools,25 especially for non-randomized studies.26 As such, a working party consisting of members of the JBI Effectiveness Methodology Group has been considering the ideal way forward for critical appraisal and risk of bias assessment within JBI quantitative systematic reviews. In 2020, a proposal was put to the JBI Scientific Committee to evaluate our current tools and produce recommendations for assessing the risk of bias of quantitative analytical studies within the context of JBI reviews. The proposal outlined a strategy to bring our current tools into alignment with methodological developments in this field, described the ideal characteristics of a future JBI tool for assessing risk of bias in analytical studies, and set out how we might move towards this vision in a phased approach. The new approach will then be incorporated into our guidance,27 education programs, and software.20
Following the approval of this proposal, substantial work has been conducted both in revising our current tools and in investigating key principles and concepts related to risk of bias assessment, including the ideal features of risk of bias tools from the JBI perspective. To communicate these developments, a new series has been launched in JBI Evidence Synthesis to discuss advancements in this field in detail. This paper introduces JBI’s short-, medium-, and long-term objectives regarding the future of risk of bias assessment. It also establishes the ideal principles of risk of bias assessment, setting the foundation for 2 further series of articles in this journal: the first presenting revised tools for the assessment of risk of bias; the second comprising companion papers that introduce, discuss, and propose concepts, principles, and advancements in the field of risk of bias assessment to direct future tool development at JBI.
Principles for an ideal risk of bias approach for analytical studies for JBI
After discussion among the JBI working group and further input and ratification by the JBI Scientific Committee, the following have been identified as the ideal characteristics of a tool to assess methodological limitations for JBI systematic reviews. In this paper, we provide the initial list of these principles and concepts, which will be elaborated on in further papers in this series. The ideal JBI risk of bias approach will:
- Focus only on issues related to risk of bias (ie, systematic error or internal validity) and use consistent terminology for risk of bias.
- Be sophisticated enough to consider not only the presence or absence of methodological safeguards but also the feasibility of these safeguards in research and whether their absence is actually likely to increase the risk of bias in a study (eg, lack of blinding of outcome assessors is unlikely to bias objective outcomes such as mortality), and require clear alignment between safeguards/conduct/signaling questions and their relevant bias domain. Ideally, the tool would enable any approach to risk of bias assessment, including checklist approaches, judgments within domains/methodological standards or consideration of safeguards independently, quality counts, relative ranks, or other schemes (see the sketch following this list).
- Map clearly to a comprehensive framework/hierarchy/taxonomy of bias structured into different levels, which can be used as a support resource or for educational purposes (ie, a framework of safeguards under methodological standards or domains), and facilitate comparisons across different analytical study designs using a common scale.
- Be user-friendly (to an extent), timely, widely applicable, and compatible with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.
- Be valid, evidence-based (ideally on meta-epidemiological studies), and, where evidence is lacking, theoretically sound.
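To illustrate the flexibility called for above, the following is a minimal sketch, using entirely hypothetical study names, items, and rating labels, of how the same safeguard-level responses could feed a checklist-style quality count, domain-level judgments, or a relative rank across studies; it is not a JBI specification.

```python
# A minimal sketch (illustrative only, not a JBI specification): the same
# item-level responses can feed different summary schemes: a simple quality
# count, per-domain judgments, or a relative rank across studies.
# Each response records whether a safeguard was present, keyed to its bias domain.
responses = {
    "Study A": {("selection bias", "allocation concealed"): True,
                ("selection bias", "random sequence generated"): True,
                ("attrition bias", "losses to follow-up addressed"): False},
    "Study B": {("selection bias", "allocation concealed"): False,
                ("selection bias", "random sequence generated"): True,
                ("attrition bias", "losses to follow-up addressed"): True},
}

def quality_count(study: str) -> int:
    """Checklist-style summary: the number of safeguards present."""
    return sum(responses[study].values())

def domain_judgments(study: str) -> dict[str, str]:
    """Domain-based summary: any missing safeguard flags its domain as 'high'."""
    intact: dict[str, bool] = {}
    for (domain, _item), present in responses[study].items():
        intact[domain] = intact.get(domain, True) and present
    return {domain: ("low" if ok else "high") for domain, ok in intact.items()}

def relative_rank() -> list[str]:
    """Relative scheme: order studies by how many safeguards they implemented."""
    return sorted(responses, key=quality_count, reverse=True)

for study in responses:
    print(study, quality_count(study), domain_judgments(study))
print(relative_rank())
```

Whichever summary scheme is applied, the underlying data are the same: safeguard-level responses aligned to their relevant bias domain.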
The proposal for the future of JBI risk of bias assessment
The proposal for the future of JBI risk of bias assessment contained a set of key recommendations, which were discussed and approved by the JBI Scientific Committee.16,17 The justification for these recommendations will also be expanded on in future papers within this series. Transitioning to a new risk of bias framework for JBI reviews will likely take a significant amount of energy and resources, and will be a multiyear project. As such, short-, medium-, and long-term recommendations were submitted and approved by the JBI Scientific Committee.
Short-term recommendations
- JBI should move away from the terms “assessment of methodological quality” and “critical appraisal” and instead use the term “risk of bias assessment” for all tools for quantitative designs.
- JBI will review all current tools for quantitative designs and move to focus only on internal validity, or “risk of bias” assessment, rather than other issues related to reporting, external validity, imprecision, etc. These items can be removed from tools or at least clearly separated from internal validity questions.
- JBI will review all current tools and categorize current checklist questions into risk of bias domains (eg, a “selection bias” domain or an “attrition bias” domain) so that assessment can occur at the domain level, if desired, with nuanced guidance on whether implementing a safeguard was feasible (eg, blinding feasibility for hard outcomes). This will allow the tools to be applied flexibly: as checklists, as scales, or as domain-based assessments. JBI will also continue to accept the use of the Cochrane RoB 2.0 tool and ROBINS-I (and other tools) for JBI reviews, with justification.
- JBI should strongly endorse risk of bias assessments being carried out at the result or outcome level, and disallow study-level judgments. For context, risk of bias may change depending on the outcome or result within a single study: issues related to selection bias (randomization and allocation concealment approaches) apply to all outcomes/results in a study, but for other types of bias (such as attrition, detection, or measurement bias), the risk of bias may vary depending on the individual outcome and/or result (a minimal data sketch follows this list).
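As a rough sketch of what outcome-level assessment implies for data capture, the structure below keeps study-level domains (such as selection bias) separate from domains judged per outcome. All class, field, and study names are hypothetical illustrations and do not represent any published JBI tool or software.

```python
# A minimal sketch, not JBI software: one way to record outcome-level,
# domain-based risk of bias judgments. All names are hypothetical.
from dataclasses import dataclass, field

JUDGMENTS = {"low", "some concerns", "high"}

@dataclass
class DomainJudgment:
    domain: str          # eg, "selection bias", "detection bias"
    judgment: str        # one of JUDGMENTS
    rationale: str = ""  # reviewer's free-text justification

    def __post_init__(self) -> None:
        if self.judgment not in JUDGMENTS:
            raise ValueError(f"unknown judgment: {self.judgment!r}")

@dataclass
class OutcomeAssessment:
    outcome: str  # eg, "mortality at 30 days"
    judgments: list[DomainJudgment] = field(default_factory=list)

@dataclass
class StudyAssessment:
    study_id: str
    # Selection bias (randomization, allocation concealment) applies to
    # every outcome in the study, so it is recorded once at study level...
    study_level: list[DomainJudgment] = field(default_factory=list)
    # ...while attrition, detection, or measurement bias is judged per outcome.
    outcomes: list[OutcomeAssessment] = field(default_factory=list)

# Example: detection bias differs between an objective and a subjective outcome.
assessment = StudyAssessment(
    study_id="Trial-001",
    study_level=[DomainJudgment("selection bias", "low", "central randomization")],
    outcomes=[
        OutcomeAssessment("mortality", [DomainJudgment(
            "detection bias", "low",
            "objective outcome; unblinded assessment unlikely to bias it")]),
        OutcomeAssessment("patient-reported pain", [DomainJudgment(
            "detection bias", "high",
            "subjective outcome assessed without blinding")]),
    ],
)
```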
Medium-term recommendation
A working group will create an “overview of bias” framework, including a map, with a clear and comprehensive hierarchy of bias structured across different levels, which can be used as a support resource and for educational purposes; for example, a framework of safeguards and their multilevel categorization (ie, domains or methodological standards → subdomains → safeguards) across the quantitative study designs commonly seen in health care. Ideally, this will allow tool developers and students to see how individual items relate to domains of bias and may be shared across different study designs, and it will provide a clear framework for how study design elements map to different types of bias.
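As a small illustration of the multilevel categorization described above, the fragment below encodes an example hierarchy (domains → subdomains → safeguards) as nested data. The domains and safeguards shown are placeholders only, since the actual framework is yet to be developed by the working group.

```python
# Minimal sketch of the multilevel structure (domains → subdomains →
# safeguards) using placeholder entries; not the actual JBI framework.
BIAS_FRAMEWORK = {
    "selection bias": {
        "sequence generation": [
            "random sequence generated by computer or random-number table",
        ],
        "allocation concealment": [
            "central allocation (eg, web-based randomization)",
            "sequentially numbered, opaque, sealed envelopes",
        ],
    },
    "measurement bias": {
        "outcome ascertainment": [
            "blinded outcome assessors",
            "validated, objective outcome measures",
        ],
    },
}

def safeguards_for(domain: str) -> list[str]:
    """Flatten all safeguards listed under a given bias domain."""
    return [safeguard
            for subdomain in BIAS_FRAMEWORK.get(domain, {}).values()
            for safeguard in subdomain]

print(safeguards_for("selection bias"))
```

Structured this way, a single framework could serve both tool developers (traversing domains to derive signaling questions) and educators (showing where a given safeguard sits in the hierarchy).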
Long-term recommendation
JBI will adopt, adapt, or create a new tool that meets all the characteristics we consider appropriate for an “ideal” tool, informed by a comprehensive framework of bias.
Conclusion
JBI has a pragmatic vision: to identify the best available evidence to inform decision-making. As such, we provide methodologies for several diverse evidence synthesis approaches and promote the use of the best available evidence to answer systematic review questions. For this reason, we consider many different types of analytical study designs across and within review types, and it is not uncommon for JBI reviews to include experimental, quasi-experimental, and observational studies within a single review. The current JBI critical appraisal toolkit has some limitations. To address these shortfalls, the toolkit will be revised, and planning is underway for a future approach to risk of bias that aligns with our proposed key principles. As this approach evolves, our revised tools, principles, guidance, and concepts will be discussed further within this new series of papers in JBI Evidence Synthesis.
Funding
ZM is supported by an NHMRC Investigator Grant, APP1195676. MK is supported by the INTER-EXCELLENCE grant number LTC20031—Towards an International Network for Evidence-based Research in Clinical Health Research in the Czech Republic.
References
1. Straus SE, Glasziou P, Richardson WS, Haynes RB. Evidence-based medicine: how to practice and teach EBM. 5th ed. Elsevier; 2018.
2. Porritt K, Gomersall J, Lockwood C. JBI’s systematic reviews: study selection and critical appraisal. Am J Nurs 2014;114(6):47–52.
3. Tufanaru C, Munn Z, Stephenson M, Aromataris E. Fixed or random effects meta-analysis? Common methodological issues in systematic reviews of effectiveness. Int J Evid Based Healthc 2015;13(3):196–207.
4. Moola S, Munn Z, Sears K, Sfetcu R, Currie M, Lisy K, et al. Conducting systematic reviews of association (etiology): the Joanna Briggs Institute’s approach. Int J Evid Based Healthc 2015;13(3):163–9.
5. Munn Z, Barker TH, Moola S, Tufanaru C, Stern C, McArthur A, et al. Methodological quality of case series studies: an introduction to the JBI critical appraisal tool. JBI Evid Synth 2020;18(10):2127–33.
6. Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data. Int J Evid Based Healthc 2015;13(3):147–53.
7. Stone JC, Doi SAR. Moving towards a standards-based methodological quality assessment scheme for clinical research. Int J Evid Based Healthc 2019;17(2):72–73.
8. Stone JC, Glass K, Clark J, Munn Z, Tugwell P, Doi SAR. A unified framework for bias assessment in clinical research. Int J Evid Based Healthc 2019;17(2):106–20.
9. Stone JC, Glass K, Clark J, Ritskes-Hoitinga M, Munn Z, Tugwell P, et al. The MethodologicAl STandards for Epidemiological Research (MASTER) scale demonstrated a unified framework for bias assessment. J Clin Epidemiol 2021;134:52–64.
10. Stone JC, Gurunathan U, Aromataris E, Glass K, Tugwell P, Munn Z, et al. Bias assessment in outcomes research: the role of relative versus absolute approaches. Value Health 2021;24(8):1145–9.
11. Hartling L, Ospina M, Liang Y, Dryden DM, Hooton N, Krebs Seida J, et al. Risk of bias versus quality assessment of randomised controlled trials: cross sectional study. BMJ 2009;339:b4012.
12. Jordan Z, Lockwood C, Aromataris E, Pilla B, Porritt K, Klugar M, et al. JBI series paper 1: introducing JBI and the JBI model of EBHC. J Clin Epidemiol 2022;150:191–5.
13. Jordan Z, Lockwood C, Munn Z, Aromataris E. The updated Joanna Briggs Institute model of evidence-based healthcare. Int J Evid Based Healthc 2019;17(1):58–71.
14. Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence-based healthcare. Int J Evid Based Healthc 2005;3(8):207–15.
15. Jordan Z, Munn Z, Aromataris E, Lockwood C. Now that we’re here, where are we? The JBI approach to evidence-based healthcare 20 years on. Int J Evid Based Healthc 2015;13(3):117–20.
16. Aromataris E, Stern C, Lockwood C, Barker TH, Klugar M, Jadotte Y, et al. JBI series paper 2: tailored evidence synthesis approaches are required to answer diverse questions: a pragmatic evidence synthesis toolkit from JBI. J Clin Epidemiol 2022;150:196–202.
17. Pilla B, Jordan Z, Christian R, Kynoch K, McInerney P, Cooper K, et al. JBI series paper 4: the role of collaborative evidence networks in promoting and supporting evidence-based healthcare globally: reflections from 25 years across 38 countries. J Clin Epidemiol 2022;150:210–15.
18. Quigley JM, Thompson JC, Halfpenny NJ, Scott DA. Critical appraisal of nonrandomized studies—a review of recommended and commonly used tools. J Eval Clin Pract 2019;25(1):44–52.
19. Zeng X, Zhang Y, Kwong JS, Zhang C, Li S, Sun F, et al. The methodological quality assessment tools for preclinical and clinical studies, systematic review and meta-analysis, and clinical practice guideline: a systematic review. J Evid Based Med 2015;8(1):2–10.
20. Munn Z, Aromataris E, Tufanaru C, Stern C, Porritt K, Farrow J, et al. The development of software to support multiple systematic review types: the Joanna Briggs Institute System for the Unified Management, Assessment and Review of Information (JBI SUMARI). Int J Evid Based Healthc 2019;17(1):36–43.
21. Stern C, Munn Z, Porritt K, Lockwood C, Peters MD, Bellman S, et al. An international educational training course for conducting systematic reviews in health care: the Joanna Briggs Institute’s comprehensive systematic review training program. Worldviews Evid Based Nurs 2018;15(5):401–8.
22. Tricco AC, Tetzlaff J, Moher D. The art and science of knowledge synthesis. J Clin Epidemiol 2011;64(1):11–20.
23. Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 2018;18(1):5.
24. Furuya-Kanamori L, Xu C, Hasan SS, Doi SA. Quality versus risk-of-bias assessment in clinical research. J Clin Epidemiol 2021;129:172–5.
25. Sterne JAC, Savović J, Page MJ, Elbers RG, Blencowe NS, Boutron I, et al. RoB 2: a revised tool for assessing risk of bias in randomised trials. BMJ 2019;366:l4898.
26. Sterne JA, Hernán MA, Reeves BC, Savović J, Berkman ND, Viswanathan M, et al. ROBINS-I: a tool for assessing risk of bias in non-randomised studies of interventions. BMJ 2016;355:i4919.
27. Aromataris E, Munn Z. Chapter 1: JBI systematic reviews. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. Adelaide: JBI; 2020 [cited 2022 Aug 1]. Available from: https://synthesismanual.jbi.global.