A defining feature of many reviews of existing research, and certainly of any review following current JBI methodologies that can lay claim to being systematic, is critical appraisal of the included studies. The process focuses principally on assessing the quality of a study’s conduct (its validity) and the potential for bias to have crept into its design, conduct, or analysis. An identifiable risk of bias undermines the trustworthiness of the results and of any conclusions subsequently presented. Assessment of risk of bias is therefore a critical consideration in the analysis and interpretation of any synthesis and, accordingly, in any assessment of certainty in the review’s findings following the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach.1 Checklists, scales, and domain-based instruments are commonly used to help systematic reviewers complete and present an assessment that is consistent and transparent in its conduct and, ideally, its reporting. The existing suite of JBI critical appraisal checklists has been available for over two decades. These tools are frequently used by authors and are valued for their ease of use and comprehensiveness, the latter allowing authors to assess study validity across a range of common study designs using a single source.2
This issue of JBI Evidence Synthesis launches an exciting new series focused on risk of bias in JBI systematic reviews of quantitative evidence, reflecting the current and ongoing work of the JBI Effectiveness Methodology Group.3 The articles in this issue present foundational concepts that underpin considerations of risk of bias. The first 2 articles in the series present the vision for the planned reconsideration of the JBI tools, aligned with international developments in the field,4 and the evolution of the associated terminology, respectively.5 Conceptual papers such as these will accompany the revised tools designed for practical use as the series grows. An additional article articulates the methods that have been applied by the JBI Effectiveness Methodology Group3,6 and that will be applied to the remaining tools.7
Furthermore, headlining the practical changes for JBI systematic reviews, the revised JBI tool for assessing risk of bias in randomized controlled trials, one of the tools most frequently used by review authors, is also presented in this issue.8 The signaling questions in the revised tool have not been modified from the currently available version; however, they have been reorganized and aligned to distinct, recognizable domains of bias. This alignment will help reviewers consider the impact of potential bias on the results of their analyses. The update also prompts users to move beyond assessment of risk of bias at the study level: a subset of the questions is now considered for individual outcomes, and a further subset for the results reported.8 These changes represent the first stage in the planned revision process for the JBI tools and, while maintaining the ease of use that has made the tools so popular, mark the beginning of a much-needed evolution toward more sophisticated consideration of risk of bias in JBI systematic reviews.4
The advent of this first revised tool, and the promise of more to follow in this JBI Evidence Synthesis series on risk of bias, have broader implications for JBI society members and systematic reviewers. Most importantly, as the revised tools are introduced, authors who submit manuscripts to JBI Evidence Synthesis, as well as the journal’s peer reviewers and editors, will need to embrace these new tools and the nuances of their use, working together to present the community with high-quality evidence informed by the most current methods in synthesis. Over the next 12 months, updated journal guidelines and templates will be made available, along with a transition period for authors to adopt the new tools during the conduct of their reviews. Similarly, the JBI education program for synthesis and accredited international trainers, together with the forthcoming development of JBI SUMARI (JBI’s software to facilitate and support the conduct of systematic reviews), must also integrate these changes and evolve accordingly.3,9,10
In line with this month’s methodological theme, this issue presents a scoping review providing a comprehensive overview of the use of statistical shape modeling of the hip joint,11 as well as a range of review protocols, all linked to the investigation of research methods, their dissemination, or the conduct of synthesis. A further noteworthy addition to an issue dominated by the methodological conduct of reviews is the latest guidance from the JBI Scoping Review Methodology Group, delving further into the extraction, analysis, and presentation of the results of scoping reviews.12 Scoping reviews are popular undertakings, as evidenced by their increasing appearance in the table of contents of JBI Evidence Synthesis over the past 2 to 3 years. This additional guidance, which provides further clarity and useful examples across these important steps in the review process, is essential reading for any scoping review author.
1. Guyatt G, Oxman AD, Akl EA, Kunz R, Vist G, Brozek J, et al. GRADE guidelines: 1. Introduction–GRADE evidence profiles and summary of findings tables. J Clin Epidemiol 2011;64(4):383–394.
2. Tufanaru C, Munn Z, Aromataris E, Campbell J, Hopp L. Chapter 3: Systematic reviews of effectiveness. In: Aromataris E, Munn Z, editors. JBI Manual for Evidence Synthesis [internet]. Adelaide: JBI; 2020 [cited 2023 Feb 15]. Available from: https://synthesismanual.jbi.global
3. Aromataris E, Stern C, Lockwood C, Barker TH, Klugar M, Jadotte Y, et al. JBI series paper 2: tailored evidence synthesis approaches are required to answer diverse questions: a pragmatic evidence synthesis toolkit from JBI. J Clin Epidemiol 2022;150:196–202.
4. Munn Z, Stone JC, Aromataris E, Klugar M, Sears K, Leonardi-Bee J, et al. Assessing the risk of bias of quantitative analytical studies: introducing the vision for critical appraisal within JBI systematic reviews. JBI Evid Synth 2023;21(3):467–471.
5. Stone JC, Barker TH, Aromataris E, Ritskes-Hoitinga M, Sears K, Klugar M. From critical appraisal to risk of bias assessment: clarifying the terminology for study evaluation in JBI systematic reviews. JBI Evid Synth 2023;21(3):472–477.
6. JBI. JBI Methodology Groups [internet]. Adelaide: JBI; n.d. [cited 2023 Feb 15]. Available from: https://jbi.global/jbi-model-of-EBHC
7. Barker TH, Stone JC, Sears K, Klugar M, Leonardi-Bee J, Tufanaru C, et al. Revising the JBI quantitative critical appraisal tools to improve their applicability: an overview of methods and the development process. JBI Evid Synth 2023;21(3):478–493.
8. Barker TH, Stone JC, Sears K, Klugar M, Tufanaru C, Leonardi-Bee J, et al. The revised JBI critical appraisal tool for the assessment of the risk of bias for randomized controlled trials. JBI Evid Synth 2023;21(3):494–506.
9. JBI. Education [internet]. Adelaide: JBI; n.d. [cited 2023 Feb 15]. Available from: https://jbi.global/education
10. JBI. JBI SUMARI [internet]. Adelaide: JBI; n.d. [cited 2023 Feb 15]. Available from: https://sumari.jbi.global/
11. Johnson LG, Bortolussi-Courval S, Chehil A, Schaeffer EK, Pawliuk C, Wilson DR, et al. Application of statistical shape modelling to the human hip joint: a scoping review. JBI Evid Synth 2023;21(3):533–583.
12. Pollock D, Peters MDJ, Khalil H, McInerney P, Alexander L, Tricco AC, et al. Recommendations for the extraction, analysis, and presentation of results in scoping reviews. JBI Evid Synth 2023;21(3):520–532.