Methodological and Reporting Quality of Systematic Reviews Published in the Highest Ranking Journals in the Field of Pain

Riado Minguez, Daniel MD*; Kowalski, Martin; Vallve Odena, Marta BSc; Longin Pontzen, Daniel§; Jelicic Kadic, Antonia MD, PhD†‖; Jeric, Milka MD, PhD; Dosenovic, Svjetlana MD#; Jakus, Dora; Vrdoljak, Marija; Poklepovic Pericic, Tina MD, PhD**; Sapunar, Damir MD, PhD; Puljak, Livia MD, PhD†††

doi: 10.1213/ANE.0000000000002227
Chronic Pain Medicine: Original Clinical Research Report

BACKGROUND: Systematic reviews (SRs) are important for making clinical recommendations and guidelines. We analyzed methodological and reporting quality of pain-related SRs published in the top-ranking anesthesiology journals.

METHODS: This was a cross-sectional meta-epidemiological study. We analyzed SRs published from 2005 to 2015 in the first quartile journals of the Journal Citation Reports category Anesthesiology, ranked by the 2014 Journal Citation Reports impact factor. Each SR was assessed by 2 independent authors using the Assessment of Multiple Systematic Reviews (AMSTAR) and Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) tools. Total score (median and interquartile range, IQR) on the checklists, temporal trends in total score, correlation between the total scores of the 2 checklists, and variability of those results between journals were analyzed.

RESULTS: A total of 446 SRs were included. Median total score of AMSTAR was 6/11 (IQR: 4–7) and of PRISMA 18.5/27 (IQR: 15–22). High compliance (reported in over 90% SRs) was found in only 1 of 11 AMSTAR and 5 of 27 PRISMA items. Low compliance was found for the majority of AMSTAR and PRISMA individual items. Linear regression indicated that there was no improvement in the methodological and reporting quality of SRs before and after the publication of the 2 checklists (AMSTAR: F(1,8) = 0.22; P = .65, PRISMA: F(1,7) = 0.22; P = .47). Total scores of AMSTAR and PRISMA had positive association (R = 0.71; P < .0001).

CONCLUSIONS: Endorsement of PRISMA in the instructions for authors did not guarantee compliance. The methodological and reporting quality of pain-related SRs should be improved using the relevant checklists; this will require a joint effort of authors, editors, and peer reviewers.

Published ahead of print July 1, 2017.

From the *Faculty of Medicine, Universidad Autónoma de Madrid, Madrid, Spain; Laboratory for Pain Research, University of Split School of Medicine, Split, Croatia; Department of Biochemistry and Molecular Biology at the Universitat de Barcelona, Barcelona, Spain; §Ernst-Moritz-Arndt Universität Greifswald, Studiendekanat Universitätsmedizin Greifswald, Greifswald, Germany; Department of Pediatrics, University Hospital Split, Split, Croatia; Department of Dermatovenerology, General Hospital Zadar, Zadar, Croatia; #Department of Anesthesiology and Intensive Care Medicine, University Hospital Split, Split, Croatia; **Department of Research in Biomedicine and Health, University of Split School of Medicine, Split, Croatia; ††Department for Development, Research and Health Technology Assessment, Agency for Quality and Accreditation in Health Care and Social Welfare, Zagreb, Croatia.

Accepted for publication April 14, 2017.

Funding: None.

The authors declare no conflicts of interest.

Reprints will not be available from the authors.

Address correspondence to Livia Puljak, MD, PhD, Laboratory for Pain Research, University of Split School of Medicine, Soltanska 2, 21000 Split, Croatia. Address e-mail to livia@mefst.hr.

Systematic reviews (SRs) are considered the highest level of evidence in medicine, particularly if meta-analysis is possible and applicable.1 SRs are used by clinicians and decision makers for creating treatment guidelines and, through economic evaluations, for making decisions about health care resource allocation in anesthesiology.2–4 The number of published SRs is growing exponentially.5 SRs are very important in clinical decision making, but fundamental questions remain about (a) the strength of the available evidence and (b) whether there is sufficient high-quality evidence to support the majority of recommendations in health care. There are concerns that we are far from having good evidence to support evidence-based practice.6

The quality of SRs varies, and the SR label does not guarantee high methodological and reporting rigor, as shown in a number of recent studies.7–9 It has recently been shown that even though many SRs reported following published guidelines such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) or the Meta-analysis of Observational Studies in Epidemiology checklist,10 only half of them appropriately addressed important issues such as publication bias.11 Promoting better reporting and methodology could make the use of such studies more effective in the decision-making process.12

There have been no attempts to analyze the methodological and reporting quality of SRs published in the top-ranking journals in the field of pain. Therefore, the aim of this study was to analyze the methodological and reporting quality of pain-related SRs published in the first quartile journals of the Journal Citation Reports (JCR) category Anesthesiology. Our primary hypothesis was that the methodological and reporting quality of these SRs was in the medium category. Our secondary hypothesis was that SRs published after the publication of the Assessment of Multiple Systematic Reviews (AMSTAR) and PRISMA checklists were methodologically superior to pre-AMSTAR and pre-PRISMA SRs. Finally, as previously reported,13 we hypothesized a positive correlation between PRISMA and AMSTAR scores in our study sample.

METHODS

Ethics

This study analyzed data from published studies; no patient data were included. Therefore, ethics committee approval was not required.

Inclusion Criteria

We analyzed pain-related SRs, regardless of whether they had meta-analysis, published from 2005 to 2015 in the first quartile journals within the JCR category Anesthesiology based on the JCR impact factor for the year 2014. This indicator was chosen because it has been suggested that the impact factor may be less prone to biases than other available indices and may thus be a more resilient measure of journal quality.14 Seven journals were eligible for analysis: Anesthesiology, Pain, British Journal of Anaesthesia, Pain Physician, Anesthesia & Analgesia, Anaesthesia, and Regional Anesthesia and Pain Medicine.

Exclusion Criteria

We excluded diagnostic accuracy and individual patient data SRs because, for those types of SRs, there is an adapted version of the PRISMA checklist. We also excluded overviews of SRs and guidelines.

Study Search and Screening

The MEDLINE database was searched using advanced search with a journal name and a filter for SRs and meta-analyses. Search results were saved. Two authors independently screened records and excluded those that were not eligible. The remaining records were analyzed in full text by 2 authors independently. Disagreements were resolved via discussion with a third author.

Data Extraction

One author designed the data extraction table in a Microsoft Excel (Microsoft Inc, Redmond, WA) spreadsheet standardized for the project. Five SRs were used to pilot the data extraction table, and revisions were made before full data extraction began.

Methodological and Reporting Quality Assessment

Two authors independently assessed each SR; 4 authors participated in this part of the study (D.R.M., M.K., M.V.O., D.L.P.). For methodological quality, the AMSTAR tool was used.15 For reporting quality, the PRISMA was used.16 Rating disagreements were resolved by a third author. Correlation between the total score on AMSTAR and PRISMA was calculated. Variability between journals in the total score on AMSTAR and PRISMA was analyzed.

Scoring AMSTAR

Each AMSTAR item was rated as 1 point (criterion met) or 0 points (criterion not met, unclear, or not applicable). The possible range of the AMSTAR score for each SR was 0 to 11. SRs were then classified as high (8–11 points), medium (4–7 points), or low (0–3 points) methodological quality, as suggested previously.17 Temporal trends in methodological quality were explored relative to the publication of AMSTAR in 200715; for this purpose, studies were divided into a pre-AMSTAR cohort published from 2005 to 2007 and a post-AMSTAR cohort published from 2008 to 2015.
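
The scoring scheme above can be sketched as follows. This is a minimal illustration with hypothetical helper names, not the authors' actual procedure or code; it simply encodes the 0/1 item ratings and the quality bands suggested by Xiu-xia et al.

```python
def amstar_total(items):
    """Sum the 11 AMSTAR item ratings (1 = criterion met, 0 = not met,
    unclear, or not applicable). Hypothetical helper, for illustration."""
    assert len(items) == 11 and all(i in (0, 1) for i in items)
    return sum(items)

def amstar_quality(total):
    """Classify a 0-11 AMSTAR total into the three suggested quality bands."""
    if total >= 8:
        return "high"
    if total >= 4:
        return "medium"
    return "low"
```

For example, an SR meeting 6 of 11 criteria (the median in this study) falls into the medium band.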

Scoring PRISMA

To assess the degree of compliance with the PRISMA, every item was rated as “yes” for total compliance, “unclear” for partial compliance, or “no” for noncompliance, corresponding to the score values of “1,” “0.5,” or “0,” respectively.18 Possible range for the PRISMA score for each SR was 0 to 27. For assessing compliance with individual AMSTAR and PRISMA items, we used the following arbitrary cutoffs: 90%–100% high compliance, 70%–89% medium compliance, 30%–69% low compliance, and 0%–29% very low compliance.
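
The PRISMA scoring and the arbitrary per-item compliance cutoffs described above can be sketched as follows (hypothetical helper names, for illustration only, not the authors' code):

```python
# Score values for the three compliance ratings described in the text.
PRISMA_POINTS = {"yes": 1.0, "unclear": 0.5, "no": 0.0}

def prisma_total(ratings):
    """Total PRISMA score for one SR from its 27 item ratings
    ('yes', 'unclear', or 'no'); range 0 to 27."""
    assert len(ratings) == 27
    return sum(PRISMA_POINTS[r] for r in ratings)

def compliance_band(percent):
    """Classify per-item compliance (% of SRs meeting an item) using the
    arbitrary cutoffs: 90-100 high, 70-89 medium, 30-69 low, 0-29 very low."""
    if percent >= 90:
        return "high"
    if percent >= 70:
        return "medium"
    if percent >= 30:
        return "low"
    return "very low"
```

Under this scheme, an SR with 18 fully reported items and 1 partially reported item scores 18.5, the median observed in this study.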

Temporal trends in reporting quality were explored relative to the publication of the PRISMA statement in 200916; for this purpose, studies were divided into a pre-PRISMA cohort published from 2005 to 2009 and a post-PRISMA cohort published from 2010 to 2015.

Difference Between SRs With or Without Meta-analysis

AMSTAR and PRISMA scores were compared between the SRs with or without a meta-analysis.

Journal Endorsement Analysis

Instructions for authors of the included journals were analyzed to see whether they mentioned AMSTAR and PRISMA at the time of analysis.

Data Analysis

Descriptive data about SRs’ compliance with AMSTAR and PRISMA items were presented as frequencies and percentages. To check the normality of data distribution, we used the Kolmogorov-Smirnov (KS) test. Because the total scores of the 2 checklists were not normally distributed (AMSTAR: KS distance = 0.146, P < .001; PRISMA: KS distance = 0.065, P < .001), we used the median and interquartile range (IQR) as summary statistics. Differences between SRs with and without meta-analysis were analyzed using the Mann-Whitney test. Linear regression analysis was conducted to compare the slopes for AMSTAR and PRISMA before and after publication of the guidelines. The association between total AMSTAR and PRISMA scores was calculated using the 2-tailed nonparametric Spearman correlation coefficient.

Statistical significance was set at P < .05. For analyses, we used Microsoft Excel (Microsoft Inc, Redmond, WA) and GraphPad Prism software (GraphPad Software Inc, San Diego, CA).
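
The analyses above were run in GraphPad Prism; as a minimal standard-library illustration of the Spearman coefficient (the Pearson correlation computed on average ranks, so tied scores share a rank), one might write:

```python
from statistics import mean

def _ranks(xs):
    """Average (mid) ranks, 1-based; tied values share the mean of their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # Extend j over the block of values tied with xs[order[i]].
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

This is a sketch for illustration, not the authors' code; significance testing and confidence intervals would require additional steps.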

Sample Size.

The obtained sample was deemed sufficient based on a previous similar publication.19 Our analysis confirmed that a sample size of 150 per group has 99% power to detect a difference between means of 0.94 at a 2-tailed significance level (α) of .05. The actual differences between means were larger for both AMSTAR and PRISMA (1.715 and 5.93, respectively), and our sample was larger than 150 per group (comparison of SRs with or without meta-analysis).
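
A power statement of this kind can be sanity-checked with the normal approximation to a two-sample comparison. The paper does not report the assumed within-group SD, so the `sd` argument below is a placeholder assumption; this sketch is not the authors' actual calculation.

```python
from statistics import NormalDist

def two_sample_power(delta, sd, n_per_group, alpha=0.05):
    """Approximate power of a 2-tailed two-sample test via the normal
    approximation: power ~ Phi(|delta| / (sd * sqrt(2/n)) - z_(1-alpha/2))."""
    nd = NormalDist()
    z_alpha = nd.inv_cdf(1 - alpha / 2)
    z_effect = abs(delta) / (sd * (2 / n_per_group) ** 0.5)
    return nd.cdf(z_effect - z_alpha)
```

With a difference of 0.94, n = 150 per group, and an SD of about 1, the approximation returns power above 99%, consistent with the stated figure; with larger SDs the power drops, which is why the assumed SD matters.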

RESULTS

We retrieved 944 articles in the database search. After removing ineligible studies, 446 SRs were included in the study. We found a significant positive correlation between the total scores of the AMSTAR and PRISMA checklists (Spearman r = 0.71; 95% confidence interval [CI], 0.66–0.76; P < .001).

Methodological Quality of SRs: Compliance With the AMSTAR Checklist

Table 1 shows the percentage of included SRs that met the individual AMSTAR checklist items. The median number of AMSTAR items fulfilled was 6 (IQR: 4–7) out of the maximum possible 11 items.

Table 1.

Only 1 AMSTAR item had high compliance: 90% of the SRs provided a description of the included studies (item no. 6). There were 2 items with medium compliance, 6 items with low compliance, and 2 items with very low compliance. Compliance was very low for item no. 1, with only 7.6% of SRs reporting an a priori design of the research question(s), and for item no. 11, with 11% of SRs reporting information about conflicts of interest both for the SR and for the included primary studies (Table 1).

Reporting Quality of SRs: Compliance With the PRISMA Statement

Table 2.

The median number of properly reported PRISMA items was 18.5 (IQR: 15–22) out of the maximum possible 27. Table 2 shows the proportion of SRs that properly reported the individual PRISMA checklist items. High compliance was found for 5 individual items that were properly reported in more than 90% of the SRs: describing the rationale for the review, specifying study characteristics, describing all information sources in the search and the date last searched, stating the process of selecting studies, and describing study characteristics in the Results (Table 2). There were 8 items with medium compliance, 13 items with low compliance, and 1 item with very low compliance. Reporting was very low for item no. 5, which specifies that SRs should indicate whether a review protocol exists, whether and where it can be accessed, and, if available, registration information including the registration number; only 4.4% of the included SRs reported this information (Table 2).

Temporal Trends in Methodological and Reporting Quality

Using the Spearman correlation, we found a significant positive but weak correlation between the AMSTAR total score and the year in which the SR was published (ρ = 0.22; 95% CI, 0.12–0.31; P < .001), as well as between the PRISMA total score and the year of publication (ρ = 0.29; 95% CI, 0.20–0.37; P < .001). These data indicate a weak improvement in AMSTAR and PRISMA total scores over time.

Figure 1.

Additionally, we divided SRs into those published before and after the publication of AMSTAR and PRISMA. The pre-AMSTAR group of SRs (N = 53; published 2005–2007) had a median total AMSTAR score of 5 (IQR: 3–6), whereas the post-AMSTAR group (N = 393; published 2008–2015) had a median total AMSTAR score of 6 (IQR: 4–7). The pre-PRISMA group of SRs (N = 142; published 2005–2009) had a median total PRISMA score of 17 (IQR: 15–20), whereas the post-PRISMA group (N = 304; published 2010–2015) had a median total PRISMA score of 19 (IQR: 16–23). Linear regression analysis conducted for the before/after periods indicated that the difference between slopes was not significant for AMSTAR (F(1,7) = 0.22; P = .65) or PRISMA (F(1,7) = 1.86; P = .21; Figure 1). Additionally, we repeated this analysis with the post-AMSTAR and post-PRISMA cohorts redefined to begin 1 year later, to account for possible delays between acceptance and publication. The results did not change compared with the original calculation: PRISMA: F(1,7) = 1.46; P = .25; AMSTAR: F(1,7) = 0.017; P = .9.

Difference Between SRs With or Without Meta-analysis

Figure 2.

Among the 446 analyzed SRs, there were 263 (59%) with and 183 (41%) without a meta-analysis. Both AMSTAR and PRISMA total scores were significantly higher in the SRs with meta-analysis than in SRs without it (Mann-Whitney test, P < .001 for both comparisons; Figure 2).

Compliance With AMSTAR and PRISMA and Its Endorsement by the Journals

Table 3.

The 7 journals that we analyzed had different compliance rates with AMSTAR and PRISMA when all analyzed years were taken into account cumulatively (Table 3). None of the analyzed journals mentioned the AMSTAR checklist in their instructions for authors. Of the 7 journals, 4 mentioned PRISMA. Two of these, Pain and Anesthesia & Analgesia, indicated that the authors “must follow” PRISMA and need to provide a PRISMA checklist on manuscript submission. The other 2 journals use conditional language, without indicating that a filled-out PRISMA checklist is necessary: Pain Physician indicates that the authors of SRs “should follow” the PRISMA guidance, whereas Anaesthesia notes that “SRs should ideally be presented according to the PRISMA statement.” Despite these instructions, our analysis showed that not a single SR in the 7 journals had all PRISMA items properly reported.

DISCUSSION

Our study of methodological and reporting quality of pain-related SRs published in the highest ranking journals in the JCR field of Anesthesiology showed insufficient compliance with most of the items on the AMSTAR and PRISMA checklists. High reporting compliance was found for only 1 of 11 AMSTAR items and 5 of 27 PRISMA items. Linear regression analysis indicated that there was no improvement in the methodological and reporting quality of SRs before and after publication of the 2 checklists. Endorsement of PRISMA in the instructions for authors was not a guarantee of compliance.

We also found that AMSTAR and PRISMA total scores were significantly higher in SRs with meta-analysis than in SRs without it. Quantitative analyses make an SR more valuable for readers. However, it has to be emphasized that conducting a meta-analysis is not always appropriate or valid because of clinical or statistical heterogeneity; therefore, the absence of a meta-analysis should not inherently mean that the quality of an SR is suboptimal. In many of the SRs we analyzed, the authors explicitly stated that they intended to conduct a meta-analysis, but it was not possible because of heterogeneity or because only 1 study was found.

Our analysis of methodological and reporting quality of SRs published in the 7 analyzed journals indicated that endorsement of a certain checklist is not a guarantee that SRs published in those journals will actually comply with the checklist. However, it has to be emphasized that the “instructions for authors” were analyzed at the time of preparation of this article in August of 2016, and we do not know whether strict adherence to these instructions is forthcoming. It is possible that the endorsement of PRISMA was relatively recent and that it did not affect a significant portion of SRs published in the immediate post-PRISMA period.

The most common methodological shortcoming of the analyzed SRs was the lack of a statement of a preexisting protocol of the SR. SR authors can publish their protocol in a free, publicly available registry such as PROSPERO, an international prospective registry of SRs.20–22 One of the solutions for the negligible number of SRs that reported having a preexisting protocol is for journals to require prospective SR protocol registration as a prerequisite for publication, just like the International Committee of Medical Journal Editors mandated for protocols of clinical trials.

Only a third of the analyzed SRs reported a list of both included and excluded studies. All the SRs reported which studies were included, but only a few reported the excluded ones. This could be due to space limitations in print journals, but a solution is to present the list of excluded studies as an online supplementary material. If the journal does not allow online supplementary material then a statement can be made that the list is available to readers on request.

Only a third of the analyzed SRs assessed the likelihood of publication bias, which limits the ability to draw conclusions about the body of literature available on the subject. It is also worth noting that 75% of the analyzed SRs performed a quality assessment of the included studies, and only 46% of all the SRs used the quality of evidence when formulating conclusions. This may lead to biased conclusions among readers who are not aware of potential limitations in the analyzed body of evidence.

Regarding reporting quality, the lack of information about protocol and registration was the least followed PRISMA item in the analyzed SRs. Risk of bias was assessed in only a third of the analyzed SRs. A PRISMA item that was properly reported in 45% of the included SRs was the “structured summary.” However, this is determined by the instructions for authors, and journals could easily make all SRs fully compliant with this PRISMA item by requiring a structured abstract.

The full electronic search strategy was present in only half of the analyzed SRs, which could also be due to space limitations in print journals; in that case, it should be published as online supplementary material or provided to interested readers on request.

Previous studies that addressed the quality of SRs were conducted in disciplines other than pain medicine and were limited in the number of SRs analyzed; the smallest sample in these reviews was 8 SRs and the largest was 40 SRs. Their authors reported insufficiencies in the methodological or reporting quality of the analyzed SRs and recommended rigorous assessment of SRs before publication.23–26 Adding to this body of evidence, our study reinforces the message to SR authors, as well as to editors and peer reviewers handling such studies. It may not be sufficient to mandate that authors comply with a relevant checklist; compliance should also be thoroughly checked by the editorial staff or peer reviewers. As for SR authors, these checklists can help them conduct and report higher-quality articles.

A limitation of this study is the inclusion of only selected journals and SRs published within a limited time frame. Furthermore, we used AMSTAR and PRISMA as quality evaluation tools for SRs. One can argue that these checklists do not guarantee quality, as drawbacks of AMSTAR have been emphasized.27 However, these checklists are now widely used, and their uniform usage yields conclusions that can be compared with other reviews. Also, for our temporal analysis, we used the year of checklist publication as the cutoff date, which may be too soon for checklist adoption and does not fully account for publication delays in journals with many accepted papers. However, our additional analysis, in which the cutoff date was moved 1 year after checklist publication, did not change our results.

In conclusion, the methodological and reporting quality of pain-related SRs published in the top-ranked journals in the JCR field of Anesthesiology can be improved by the joint effort of authors, journal editors, and peer reviewers.

ACKNOWLEDGMENTS

We are grateful to Dalibora Behmen for language editing of the manuscript.

DISCLOSURES

Name: Daniel Riado Minguez, MD.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Martin Kowalski.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Marta Vallve Odena, BSc.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Daniel Longin Pontzen.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Antonia Jelicic Kadic, MD, PhD.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Milka Jeric, MD, PhD.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Svjetlana Dosenovic, MD.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Dora Jakus.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Marija Vrdoljak.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Tina Poklepovic Pericic, MD, PhD.

Contribution: This author helped with data collection, data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Damir Sapunar, MD, PhD.

Contribution: This author helped with data analysis, critical revision of the manuscript draft, approval of the final version of the manuscript.

Name: Livia Puljak, MD, PhD.

Contribution: This author helped with study design, data analysis, writing the first manuscript draft, approval of the final version of the manuscript.

This manuscript was handled by: Honorio T. Benzon, MD.

REFERENCES

1. Cook DJ, Mulrow CD, Haynes RB. Systematic reviews: synthesis of best evidence for clinical decisions. Ann Intern Med. 1997;126:376–380.
2. Duarte RV, Lambe T, Raphael JH, Eldabe S, Andronis L. Intrathecal drug delivery systems for the management of chronic non-cancer pain: protocol for a systematic review of economic evaluations. BMJ Open. 2016;6:e012285.
3. Boyers D, McNamee P, Clarke A, et al. Cost-effectiveness of self-management methods for the treatment of chronic pain in an aging adult population: a systematic review of the literature. Clin J Pain. 2013;29:366–375.
4. Xie F, Tanvejsilp P, Campbell K, Gaebel K. Cost-effectiveness of pharmaceutical management for osteoarthritis pain: a systematic review of the literature and recommendations for future economic evaluation. Drugs Aging. 2013;30:277–284.
5. Choong MK, Tsafnat G. The implications of biomarker evidence for systematic reviews. BMC Med Res Methodol. 2012;12:176.
6. Kane RL, Butler M, Ng W. Examining the quality of evidence to support the effectiveness of interventions: an analysis of systematic reviews. BMJ Open. 2016;6:e011051.
7. Pandis N, Fleming PS, Worthington H, Salanti G. The quality of the evidence according to GRADE is predominantly low or very low in oral health systematic reviews. PLoS One. 2015;10:e0131644.
8. Tian J, Zhang J, Ge L, Yang K, Song F. The methodological and reporting quality of systematic reviews from China and the USA are similar [published online ahead of print January 4, 2017]. J Clin Epidemiol. doi: 10.1016/j.jclinepi.2016.12.004.
9. Chapman SJ, Drake TM, Bolton WS, Barnard J, Bhangu A. Longitudinal analysis of reporting and quality of systematic reviews in high-impact surgical journals. Br J Surg. 2017;104:198–204.
10. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283:2008–2012.
11. Hedin RJ, Umberham BA, Detweiler BN, Kollmorgen L, Vassar M. Publication bias and nonreporting found in majority of systematic reviews and meta-analyses in anesthesiology journals. Anesth Analg. 2016;123:1018–1025.
12. Guglielminotti J, Dechartres A, Mentré F, Montravers P, Longrois D, Laouénan C. Reporting and methodology of multivariable analyses in prognostic observational studies published in 4 anesthesiology journals: a methodological descriptive review. Anesth Analg. 2015;121:1011–1029.
13. Liang Y. The correlation analysis of PRISMA, AMSTAR and GRADE in systematic review. Conference abstract. Cochrane Colloquium. Quebec City, Canada. 19-23 September 2013. Available at: http://2013.colloquium.cochrane.org/abstracts/correlation-analysis-prisma-amstar-and-grade-systematic-review.html. Accessed June 5, 2017.
14. Saha S, Saint S, Christakis DA. Impact factor: a valid measure of journal quality? J Med Libr Assoc. 2003;91:42–46.
15. Shea BJ, Grimshaw JM, Wells GA, et al. Development of AMSTAR: a measurement tool to assess the methodological quality of systematic reviews. BMC Med Res Methodol. 2007;7:10.
16. Moher D, Liberati A, Tetzlaff J, Altman DG; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. PLoS Med. 2009;6:e1000097.
17. Xiu-xia L, Ya Z, Yao-long C, Ke-hu Y, Zong-jiu Z. The reporting characteristics and methodological quality of Cochrane reviews about health policy research. Health Policy. 2015;119:503–510.
18. Liu D, Jin J, Tian J, Yang K. Quality assessment and factor analysis of systematic reviews and meta-analyses of endoscopic ultrasound diagnosis. PLoS One. 2015;10:e0120911.
19. Cartes-Velásquez R, Manterola Delgado C, Aravena Torres P, Moraga Concha J. Methodological quality of therapy research published in ISI dental journals: preliminary results. J Int Dent Med Res 2015;8:46–50.
20. PROSPERO, an international prospective register of systematic reviews. Available at: http://www.crd.york.ac.uk/PROSPERO/. Accessed June 5, 2017.
21. Liberati A, Altman DG, Tetzlaff J, et al. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. J Clin Epidemiol. 2009;62:e1–34.
22. Booth A, Clarke M, Ghersi D, Moher D, Petticrew M, Stewart L. An international registry of systematic-review protocols. Lancet. 2011;377:108–109.
23. Martins DE, Astur N, Kanas M, Ferretti M, Lenza M, Wajchenberg M. Quality assessment of systematic reviews for surgical treatment of low back pain: an overview. Spine J. 2016;16:667–675.
24. Song Y, Oh M, Park S, et al. The methodological quality of systematic reviews and meta-analyses on the effectiveness of non-pharmacological cancer pain management. Pain Manag Nurs. 2015;16:781–791.
25. Haladay DE, Miller SJ, Challis J, Denegar CR. Quality of systematic reviews on specific spinal stabilization exercise for chronic low back pain. J Orthop Sports Phys Ther. 2013;43:242–250.
26. Barton CJ, Webster KE, Menz HB. Evaluation of the scope and quality of systematic reviews on nonpharmacological conservative treatment for patellofemoral pain syndrome. J Orthop Sports Phys Ther. 2008;38:529–541.
27. Wegewitz U, Weikert B, Fishta A, Jacobs A, Pieper D. Resuming the discussion of AMSTAR: What can (should) be made better? BMC Med Res Methodol. 2016;16:111.
Copyright © 2017 International Anesthesia Research Society