Meta-research is research that takes existing research as its subject of investigation. As systematic reviews – themselves a form of meta-research – have become more widespread, they in turn have come to the attention of meta-researchers as available subject matter (meta-meta-research, perhaps?). Researchers’ fascination with their own “meta” may strike some as amusing (meta-meta-meta-research!), but these meta endeavours have uncovered some worrying findings.
While exceptions exist, chiefly in high-impact1,2 and systematic-review-specific journals,3 poorly conducted, reported and published systematic reviews are prevalent to the point of being the norm rather than the exception.4-7 Worryingly, despite the growing prominence of explicit guidelines (such as the PRISMA statement8 and the AMSTAR checklist9), as well as the expanding profile of evidence-based practice organisations that focus on systematic reviews (Cochrane, the Campbell Collaboration and the Joanna Briggs Institute), the average quality of systematic reviews in many areas has not meaningfully improved over time,10,11 and has even worsened in some.12
Considering this state of affairs, it seems reasonable to suggest that although evidence-based practice organisations have succeeded in evangelising the importance of systematic reviews, they have not been successful at stressing the importance of reviews being conducted and reported in a thorough and rigorous manner. In this way they have counterintuitively contributed to the growing number of poor-quality and unreliable systematic reviews, despite their direct and persistent attempts to the contrary.
Organisations and individuals that are responsible for spreading the popularity of systematic reviews also hold responsibility for safeguarding their quality. As mentioned, systematic-review-specific journals do an excellent job of enforcing the rigour of reviews published in their own pages, and high-impact journals have likewise succeeded in setting the bar high. These types of publications need not be exceptions, however. Those of us who most frequently carry out and publish systematic reviews are the most likely to be invited to act as peer reviewers for them. Peer review therefore gives us both the opportunity and the responsibility to act directly to improve the quality of published systematic reviews. Detailed guidance on the proper conduct and reporting of systematic reviews of diverse types is readily available,3,9,13-15 along with useful review management tools that can be accessed free of charge (RevMan, Covidence). It is therefore inexcusable for an article labelled as a systematic review, yet lacking basic components of the process (e.g. a registered protocol, critical appraisal, or a detailed and comprehensive search), to be considered a serious candidate for publication.
In our capacity as peer reviewers, editors or authors, the quality of systematic reviews is not an area where compromise should be viewed as acceptable. Standards have been agreed upon and set. If systematic reviews are to deserve their status as the preferred resource for informing evidence-based care, those standards must be upheld.
1. Fleming PS, Koletsi D, Seehra J, Pandis N. Systematic reviews published in higher impact clinical journals were of higher quality. J Clin Epidemiol 2014;67(7):754–759.
2. Schiegnitz E, Kammerer P, Al-Nawas B. Quality assessment of systematic reviews and meta-analyses on biomarkers in oral squamous cell carcinoma. Oral Health Prev Dent 2017;15(1):13–21.
3. Fleming PS, Seehra J, Polychronopoulou A, Fedorowicz Z, Pandis N. A PRISMA assessment of the reporting quality of systematic reviews in orthodontics. Angle Orthod 2013;83(1):158–163.
4. Campbell JM, Stephenson MD, Bateman E, Peters MD, Keefe DM, Bowen JM. Irinotecan-induced toxicity pharmacogenetics: an umbrella review of systematic reviews and meta-analyses. Pharmacogenomics J 2017;17(1):21–28.
5. Schmitter M, Sterzenbach G, Faggion CM Jr, Krastl G. A flood tide of systematic reviews on endodontic posts: methodological assessment using R-AMSTAR. Clin Oral Investig 2013;17(5):1287–1294.
6. Wasiak J, Tyack Z, Ware R, Goodwin N, Faggion CM Jr. Poor methodological quality and reporting standards of systematic reviews in burn care management. Int Wound J.
7. Gianola S, Gasparini M, Agostini M, et al. Survey of the reporting characteristics of systematic reviews in rehabilitation. Phys Ther 2013;93(11):1456–1466.
8. Moher D, Liberati A, Tetzlaff J, Altman DG. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA Statement. Open Med 2009;3(3):e123–e130.
9. Shea BJ, Bouter LM, Peterson J, et al. External validation of a measurement tool to assess systematic reviews (AMSTAR). PLoS One 2007;2(12):e1350.
10. Tunis AS, McInnes MD, Hanna R, Esmail K. Association of study quality with completeness of reporting: have completeness of reporting and quality of systematic reviews and meta-analyses in major radiology journals changed since publication of the PRISMA statement? Radiology 2013;269(2):413–426.
11. Chapman SJ, Drake TM, Bolton WS, Barnard J, Bhangu A. Longitudinal analysis of reporting and quality of systematic reviews in high-impact surgical journals. Br J Surg 2017;104(3):198–204.
12. Campbell JM, Kavanagh S, Kurmis R, Munn Z. Systematic reviews in burns care: poor quality and getting worse. J Burn Care Res 2017;38(2):e552–e567.
13. Campbell JM, Klugar M, Ding S, et al. Diagnostic test accuracy: methods for systematic review and meta-analysis. Int J Evid Based Healthc 2015;13(3):154–162.
14. Moola S, Munn Z, Sears K, et al. Conducting systematic reviews of association (etiology): the Joanna Briggs Institute's approach. Int J Evid Based Healthc 2015;13(3):163–169.
15. Munn Z, Moola S, Lisy K, Riitano D, Tufanaru C. Methodological guidance for systematic reviews of observational epidemiological studies reporting prevalence and cumulative incidence data. Int J Evid Based Healthc 2015;13(3):147–153.