Since their debut in the 1980s, the publication rate of systematic reviews has rapidly accelerated. In 2014, more than 8000 systematic reviews were indexed on MEDLINE, a three-fold increase over the previous decade.1 Systematic reviews organize the discrete pieces of information contained in primary studies and other reports into a coherent body of evidence that can inform healthcare decisions and policy, support patient care and engage stakeholders. Currently, much of the work in systematic review methodology is focused on developing guidance for reliable approaches to scoping, searching, appraising, synthesizing and grading evidence.1,2 Accordingly, there are now widely disseminated standards for completing a systematic review, which have increased interest in the conduct of systematic reviews and enabled this proliferation in publication. The challenge and opportunity in this proliferation lie in how to engage with this new world of data abundance, keeping in mind expediency, efficiency and complexity.
While systematic reviews are considered the gold standard in evidence synthesis, they are not without limitations. Despite the need for systematic review evidence to inform clinical practice and policy, the best evidence is not always used, owing to a lack of knowledge, time, skills and/or resources to translate knowledge into meaningful and useful information.2 For example, a systematic review can require between six months and two years to complete, whereas decision makers, whose needs are generally time-sensitive and emergent, often require up-to-date evidence more quickly than this. The rapid review is a new methodology that has emerged to address this need. Although the definition of a rapid review varies, it is typically characterized by a strong focus on the specific needs of a particular decision maker and by methodological shortcuts.3,4 Common shortcuts include a reduced scope, omission of dual data abstraction and critical appraisal, and conduct by a single reviewer. While there is no evidence to suggest that rapid reviews are misleading, there is a need to ensure their credibility and technical quality.2 The opportunity lies in developing a standardized approach to rapid reviews that does not sacrifice validity for expediency.
Umbrella reviews, or overviews of systematic reviews, which involve the synthesis of results from multiple systematic reviews, have emerged as an organized means to address an abundance of data. These reviews take advantage of previous research syntheses, bringing an efficiency that enables a broad understanding of a wide-scope topic in a shorter timeframe.5 An umbrella review can be conducted to identify factors that may influence the treatment-outcome effect in the same or different populations, map evidence and identify gaps for primary researchers, or examine discordance or similarity in findings and conclusions across reviews.6 This information is important to clinicians and patients, as it aids in understanding which patients will benefit most, who is least likely to benefit, and who is at greatest risk of experiencing adverse outcomes. There are, however, issues unique to the conduct of an umbrella review that differ from a traditional review. Chief among these is how to handle overlapping primary studies, i.e. when the same primary study appears in more than one included systematic review.5 Unless this overlap is accounted for, the results of such studies may be counted multiple times, violating the principle of independence of data.
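The overlap problem described above can be made concrete. One descriptive metric used for this purpose is the corrected covered area (CCA), which expresses how much of the evidence matrix (reviews × primary studies) consists of repeated occurrences of the same study. The sketch below is illustrative only; the review names and study identifiers are hypothetical.

```python
def corrected_covered_area(reviews):
    """Compute the CCA for a mapping of review name -> set of primary study IDs.

    CCA = (N - r) / (r * (c - 1)), where
      N = total number of study occurrences across all reviews,
      r = number of distinct primary (index) studies,
      c = number of reviews.
    A CCA of 0 indicates no overlap; higher values indicate greater overlap.
    """
    c = len(reviews)
    if c < 2:
        raise ValueError("Overlap is only defined for two or more reviews")
    N = sum(len(studies) for studies in reviews.values())
    r = len(set().union(*reviews.values()))
    return (N - r) / (r * (c - 1))

# Hypothetical evidence matrix: three reviews sharing some primary studies.
reviews = {
    "Review A": {"Smith2010", "Lee2012", "Chen2015"},
    "Review B": {"Smith2010", "Patel2014"},
    "Review C": {"Lee2012", "Chen2015", "Patel2014", "Gomez2016"},
}
# N = 9 occurrences, r = 5 distinct studies, c = 3 reviews:
# CCA = (9 - 5) / (5 * 2) = 0.4
print(corrected_covered_area(reviews))
```

Building such an evidence matrix of primary studies against included reviews is also a practical way to decide whether overlap can be ignored, adjusted for, or requires excluding redundant reviews.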
While methods for conducting systematic reviews of distinct treatments, such as medication regimens, are well-defined, these methods may be inadequate for systematic reviews of the complex interventions often encountered in health care. Complex interventions are those in which a number of elements must work together to achieve the best outcomes, such as chronic disease management or smoking cessation programs.7 Such complexity is influenced by multiple factors, including patient characteristics and behavior, social determinants of health and differing contexts, as well as the interventions themselves.7 Such interventions need to be tailored to be effective. The challenge and opportunity here is to determine what methods can best elucidate which recommendations work best, under what circumstances and for which subgroups, given the interactions in the dual challenge of intervention complexity (multiple components) and pathway complexity (multiple causal pathways, feedback loops, synergies, and/or moderators of effect).8 Consideration needs to be given to qualitative and mixed-methods reviews to overcome the limits of measurement-based research, which often focuses on easily observed and easily measured effects rather than the context and acceptability of an intervention.4,5
1. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 2016; 94(3):485–514.
2. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci
3. Crawford C, Boyd C, Jain S, Khorsan R, Jonas W. Rapid Evidence Assessment of the Literature (REAL©): streamlining the systematic review process and creating utility for evidence-based health care. BMC Res Notes
4. Thirsk L, Clark A. Using Qualitative Research for Complex Interventions: The Contributions of Hermeneutics. Int J Qual Methods
5. McKenzie J, Brennan S. Overviews of systematic reviews: great promise, greater challenge. Syst Rev
6. Aromataris E, Fernandez R, Godfrey CM, Holly C, Khalil H, Tungpunkom P. Summarizing systematic reviews: methodological development, conduct and reporting of an umbrella review approach. Int J Evid Based Healthc 2015; 13(3):132–140.
7. Kelly MP, Noyes J, Kane RL, Chang C, Uhl S, Robinson KA, et al. AHRQ series on complex intervention systematic reviews – paper 2: defining complexity, formulating scope and questions. J Clin Epidemiol 2017; pii: S0895-4356(17)30631-5.
8. Debray TPA, Damen JAAG, Snell KIE, Ensor J, Hooft L, Reitsma JB, et al. A guide to systematic review and meta-analysis of prediction model performance. BMJ