Publication Bias and Nonreporting Found in Majority of Systematic Reviews and Meta-analyses in Anesthesiology Journals

Hedin, Riley J. MPH; Umberham, Blake A.; Detweiler, Byron N.; Kollmorgen, Lauren; Vassar, Matt PhD

doi: 10.1213/ANE.0000000000001452
Healthcare Economics, Policy, and Organization: Original Clinical Research Report

BACKGROUND: Systematic reviews and meta-analyses are used by clinicians to derive treatment guidelines and make resource allocation decisions in anesthesiology. One cause for concern with such reviews is the possibility that results from unpublished trials are not represented in the review findings or data synthesis. This problem, known as publication bias, results when studies reporting statistically nonsignificant findings are left unpublished and, therefore, not included in meta-analyses when estimating a pooled treatment effect. In turn, publication bias may lead to skewed results with overestimated effect sizes. The primary objective of this study is to determine the extent to which evaluations for publication bias are conducted by systematic reviewers in highly ranked anesthesiology journals and which practices reviewers use to mitigate publication bias. The secondary objective of this study is to conduct publication bias analyses on the meta-analyses that did not perform these assessments and examine the adjusted pooled effect estimates after accounting for publication bias.

METHODS: This study considered meta-analyses and systematic reviews from 5 peer-reviewed anesthesia journals from 2007 through 2015. A PubMed search was conducted, and full-text systematic reviews that fit inclusion criteria were downloaded and coded independently by 2 authors. Coding was then validated, and disagreements were settled by consensus. In total, 207 systematic reviews were included for analysis. In addition, publication bias evaluation was performed for 25 systematic reviews that did not do so originally. We used Egger regression, Duval and Tweedie trim and fill, and funnel plots for these analyses.

RESULTS: Fifty-five percent (n = 114) of the reviews discussed publication bias, and 43% (n = 89) of the reviews evaluated publication bias. Funnel plots and Egger regression were the most common methods for evaluating publication bias. Publication bias was reported in 34 reviews (16%). Thirty-six of the 45 (80.0%) publication bias analyses indicated the presence of publication bias by trim and fill analysis, whereas Egger regression indicated publication bias in 23 of 45 (51.1%) analyses. The mean absolute percent difference between adjusted and observed point estimates was 15.5%, the median was 6.2%, and the range was 0% to 85.5%.

CONCLUSIONS: Many of these reviews reported following published guidelines such as PRISMA or MOOSE, yet only half appropriately addressed publication bias in their reviews. Compared with previous research, our study found fewer reviews assessing publication bias and greater likelihood of publication bias among reviews not performing these evaluations.

From the Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma; and the Office of Institutional Research and Analytics, Oklahoma State University Center for Health Sciences, Tulsa, Oklahoma.

Accepted for publication April 29, 2016.

Funding: None.

The authors declare no conflicts of interest.

Reprints will not be available from the authors.

Address correspondence to Matt Vassar, PhD, Oklahoma State University Center for Health Sciences, 1111 W. 17th St, Tulsa, OK 74107. Address e-mail to matt.vassar@okstate.edu.

Systematic reviews synthesize individual studies to comprehensively examine an intervention’s effectiveness or inform resource allocation decisions.1–4 Although other types of studies are helpful, these reviews are often used to establish practice guidelines. The American Society of Anesthesiologists considers systematic reviews with sufficient numbers of randomized controlled trials (RCTs) that perform and report meta-analyses as level 1a evidence.5 Meta-analysis is a statistical procedure to pool results from similar studies to synthesize a single estimate of effect and determine overall treatment efficacy across individual trials.1,6 For example, a recent meta-analysis of 28 clinical trials examining the effect of general anesthesia alone versus general anesthesia combined with thoracic epidural anesthesia in cardiac surgery patients found that the use of thoracic epidural anesthesia in patients undergoing cardiac surgery reduced the risk of postoperative supraventricular arrhythmias and respiratory complications.7

One limitation of meta-analysis is a likely overrepresentation of studies with statistically significant outcomes. This limitation is an artifact of publication bias (PB), defined as the bias to publish only results that are statistically or clinically significant.8a

PB is an important methodologic concern in the peer-reviewed literature in general6,8–13 and anesthesiology in particular.14 Studies with nonsignificant results are less likely to be published, and those studies that reach publication more frequently appear in lower impact factor journals.15 Although there have been substantial efforts to increase the availability of clinical research, about half of the clinical studies remain unpublished.1,6,16 The consequence is that only half of the clinical research produced may be included for data synthesis, thereby making a treatment appear more effective than it actually is or showing a treatment to be effective when it is not.8,16 When PB exists, the pooled estimates may change, potentially leading to a different clinical decision.

Several real-world examples of the negative effects of PB on patient care have recently been documented17–19 (Table 1). One notable example is a meta-analysis of RCTs comparing selective serotonin reuptake inhibitors (SSRIs) to placebo in children aged 5 to 18 years.17 The published RCT data suggested that benefits outweighed risks for treating depression in children with some SSRIs. However, when unpublished data were included in the meta-analysis, the risks outweighed the benefits, resulting in a change in treatment guidelines for prescribing SSRIs to children.b This issue is not limited to clinical outcomes.

Table 1

PB can also be problematic in resource allocation and policy. Billions of dollars were spent worldwide to purchase and stockpile oseltamivir (Tamiflu) based on a published body of evidence that was incomplete. When missing data were included in the evidence by a research team in association with the British Medical Journal, oseltamivir was shown to have no significant benefit in the treatment of influenza.3

Given the consequences associated with PB as well as previous research reporting PB concerns in prominent anesthesiology journals, the primary objective of this study is to determine the extent to which evaluations for PB are conducted by systematic reviewers in highly ranked anesthesiology journals and which practices reviewers use to mitigate PB. The secondary objective of this study is to conduct PB analyses of meta-analyses that omitted these evaluations and examine the change in pooled effect estimates after adjusting for PB.

METHODS

This review was performed using previously published studies; therefore, no institutional review board oversight was necessary. Furthermore, this study is not a clinical trial and, therefore, not registered.

Article Selection

Using the h5-index of Google Scholar Metrics, we identified the 5 highest ranking journals from the anesthesiology subcategory: Anesthesiology, Anesthesia & Analgesia, British Journal of Anaesthesia, Anaesthesia, and Regional Anesthesia and Pain Medicine. We searched PubMed for systematic reviews and/or meta-analyses published between 2007 and 2015 using the following search string, adapted from a published approach sensitive for identifying systematic reviews20: ((((“Anesthesiology”[Journal] OR “Anesthesia and analgesia”[Journal]) OR “British journal of anaesthesia”[Journal]) OR “Anaesthesia”[Journal]) OR “Regional anesthesia and pain medicine”[Journal]) AND ((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) OR systematic review[Title/Abstract]) AND ((“2007/01/01”[PDAT]: “2015/12/31”[PDAT]) AND “humans”[MeSH Terms]).
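
For readers who wish to reproduce or update the search, the sketch below shows one way to run the search string above against PubMed programmatically. This is our illustration rather than part of the original study workflow; it assumes the Biopython Entrez client is installed, and the contact e-mail is a placeholder.

```python
# Minimal sketch (not from the original study): run the reported PubMed search
# string through NCBI E-utilities via Biopython. Requires: pip install biopython
from Bio import Entrez

Entrez.email = "your.name@example.org"  # placeholder; NCBI asks for a contact address

search_string = (
    '(((("Anesthesiology"[Journal] OR "Anesthesia and analgesia"[Journal]) '
    'OR "British journal of anaesthesia"[Journal]) OR "Anaesthesia"[Journal]) '
    'OR "Regional anesthesia and pain medicine"[Journal]) '
    'AND ((meta-analysis[Title/Abstract] OR meta-analysis[Publication Type]) '
    'OR systematic review[Title/Abstract]) '
    'AND (("2007/01/01"[PDAT] : "2015/12/31"[PDAT]) AND "humans"[MeSH Terms])'
)

# esearch returns matching PubMed IDs; retmax caps how many IDs come back.
handle = Entrez.esearch(db="pubmed", term=search_string, retmax=400)
record = Entrez.read(handle)
handle.close()

print(record["Count"])        # total number of hits (315 at the time of the study)
print(record["IdList"][:5])   # first few PMIDs for inspection
```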

Figure 1

Our PubMed search yielded 315 results. We used Covidence, a web-based software platform for producing systematic reviews,c to initially screen those 315 results based on their titles and abstracts. To meet eligibility criteria during the screening process, articles that summarized evidence across multiple studies and provided information on their search strategy, including search terms, databases, or inclusion/exclusion criteria, were included as systematic reviews. Articles reporting quantitative syntheses of results across multiple studies were classified as meta-analyses.8 We eliminated 54 studies for not meeting these criteria. Furthermore, 8 were excluded from the study because they could not be obtained. Full-text articles (n = 261) were then retrieved and reviewed in full. The final sample included 207 systematic reviews (Figure 1). Data from the current study are publicly available on figshare (http://dx.doi.org/10.6084/m9.figshare.1494831).

Review Process and Coding

An abstraction manual was developed and pilot tested on 100 systematic reviews with adjustments made as necessary. The primary and secondary authors (R.J.H. and B.A.U.) jointly abstracted data elements from a sample of 3 systematic reviews using the manual. Next, the 2 authors (R.J.H. and B.A.U.) independently abstracted elements from 3 new reviews. Data from these 3 coded reviews were analyzed for interrater agreement using the Cohen κ statistic (κ = 0.81; 95% confidence interval [CI], 0.72–0.92; P < .01), which indicates that agreement was not attributable to chance, although the significance test does not by itself reflect the strength of agreement.
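
To illustrate the agreement statistic, the hypothetical sketch below computes Cohen's κ for two coders' categorical judgments; it is not the authors' code, the example ratings are invented, and it assumes scikit-learn is available.

```python
# Illustrative only: Cohen's kappa for two coders' yes/no abstraction judgments.
# The ratings below are hypothetical, not the study's data.
from sklearn.metrics import cohen_kappa_score

coder_1 = ["yes", "no", "yes", "yes", "no", "no", "yes", "no"]
coder_2 = ["yes", "no", "yes", "no",  "no", "no", "yes", "no"]

kappa = cohen_kappa_score(coder_1, coder_2)
print(f"Cohen's kappa = {kappa:.2f}")  # 1.0 = perfect agreement; 0 = chance-level agreement
```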

Because agreement was high, the authors divided the entire sample of reviews equally between them and coded each according to the abstraction manual. After initial coding was completed, the pair verified each other’s coding to ensure that each element was correctly and consistently abstracted. A final meeting was conducted to discuss any disagreements in coding. Discrepancies were handled by mutual agreement. From the full-text reviews, we extracted the following elements: (a) title; (b) authors’ names; (c) year published; (d) name of journal; (e) whether PB was discussed; (f) whether PB was formally evaluated, and if so, the method used to assess it; (g) whether a funnel plot was published in the article; (h) whether PB, if evaluated, was found; (i) the number of studies comprising the systematic review; (j) the name of reporting guidelines used, if any; (k) whether foreign language searches were conducted; (l) whether hand searching of reference lists occurred; (m) whether reviewers performed a gray literature search, and if so, the gray literature source; and (n) whether clinical trials registries were searched. For this study, we used the Cochrane Handbook’s definition of gray literature: “Literature that is not formally published in sources such as books or journal articles.”21
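
For illustration, the abstracted elements (a) through (n) could be represented with a simple record structure such as the sketch below; the field names are ours, not those of the authors' abstraction manual.

```python
# Hypothetical data structure mirroring the elements abstracted from each review.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ReviewRecord:
    title: str
    authors: List[str]
    year_published: int
    journal: str
    pb_discussed: bool
    pb_evaluated: bool
    pb_evaluation_method: Optional[str] = None     # e.g., "funnel plot", "Egger regression"
    funnel_plot_published: bool = False
    pb_found: Optional[bool] = None                # None if PB was not evaluated
    n_included_studies: int = 0
    reporting_guideline: Optional[str] = None      # e.g., "PRISMA", "MOOSE", "QUOROM"
    foreign_language_search: bool = False
    hand_searched_reference_lists: bool = False
    gray_literature_sources: List[str] = field(default_factory=list)
    searched_trial_registries: bool = False
```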

Publication Bias Assessment

For the systematic reviews in our sample that did not evaluate PB, we undertook that task. We only considered reviews that included a meta-analysis of at least 10 primary studies, because previous research has noted that statistical power is too low to distinguish chance from true asymmetry when <10 studies are included.22 Ninety-four reviews of the 207 met initial inclusion criteria, of which 67 were then excluded because either the actual data for the primary studies in the meta-analysis were not available or the meta-analysis used <10 primary studies. Furthermore, we only included systematic reviews of RCTs in our PB assessments because funnel plots are less well developed statistically for other study designs.8 Given that systematic reviews of RCTs are level 1a evidence and often used clinically, we also restricted our analysis to systematic reviews of RCTs with a clinical outcome.

To assess PB, we first replicated the results from each meta-analysis that met our inclusion criteria. In some cases, slight adjustments were made to CIs if the original studies reported asymmetric CIs about the point estimates. This was necessary to reproduce the analyses; the intervals were modified only slightly and in the same way each time, by shifting the upper confidence limit until symmetry about the point estimate was achieved. We then constructed funnel plots, used the Duval and Tweedie trim and fill method,23,24 and performed Egger25 regression tests for each of the meta-analyses. In many cases, a systematic review included multiple meta-analyses. For example, we included the study by Komatsu et al26 in our analysis; it contained 5 meta-analyses, 4 of which included at least 10 studies with available data. PB was therefore assessed for each of those 4 meta-analyses that fit the above inclusion criteria. On 3 occasions, we had to omit 1 primary study from a meta-analysis because we could not replicate the CIs.

The significance level for Egger regression tests was set at P < .05. Statistical analysis was performed using STATA 13.1 (StataCorp, Stata Statistical Software: Release 13, College Station, TX) and Comprehensive Meta-Analysis, version 3 (Biostat Inc, Englewood, NJ).
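
To make the Egger test concrete, the sketch below applies it to a single meta-analysis: each study's standardized effect (effect divided by its standard error) is regressed on its precision (the reciprocal of the standard error), and an intercept that differs from zero at P < .05, the threshold used here, suggests funnel plot asymmetry. This is our illustration in Python rather than the STATA or Comprehensive Meta-Analysis workflow used in the study, and the effect sizes and standard errors are invented for demonstration.

```python
# Illustrative Egger regression asymmetry test for one meta-analysis.
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study effect sizes (e.g., log odds ratios) and standard errors.
effects = np.array([-0.8, -0.6, -0.9, -0.3, -0.2, -1.1, -0.5, -0.7, -0.4, -1.0])
ses     = np.array([0.10, 0.15, 0.20, 0.25, 0.30, 0.35, 0.18, 0.22, 0.28, 0.40])

z = effects / ses        # standard normal deviate of each study
precision = 1.0 / ses    # larger values correspond to larger, more precise studies

X = sm.add_constant(precision)    # intercept column + precision
fit = sm.OLS(z, X).fit()

intercept = fit.params[0]
intercept_p = fit.pvalues[0]
print(f"Egger intercept = {intercept:.2f}, P = {intercept_p:.3f}")
# P < .05 suggests funnel plot asymmetry, which is consistent with,
# though not proof of, publication bias.
```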

RESULTS

Approximately 95% of the reviews included in this study considered clinical outcomes as the primary outcome. The other 5% were studies concerned with accuracy and precision of an instrument, evaluation of a training program, or validity of evaluation tools. Ninety-four percent of the systematic reviews included RCTs. The other 6% were cohort studies, case reports or case series, and other nonrandomized studies.

PB was discussed in 114 articles (55.1%; Table 2), most frequently in the British Journal of Anaesthesia (n = 43). The proportion of reviews discussing PB was also greatest in the British Journal of Anaesthesia (43/75, 57.3%). PB was evaluated in 89 articles (43.0%), most frequently in the British Journal of Anaesthesia (n = 32); however, the proportion of reviews evaluating PB was greatest in Anesthesia & Analgesia (21/43, 48.8%; Table 3). The most common method of evaluating PB was to construct a funnel plot and inspect it visually for asymmetry. Thirty-eight reviews (46.3%) presented their funnel plots in the article as a figure. PB was present in 34 reviews (16.4%), not present in 45 reviews (21.7%), and unknown in the remaining 128 reviews (61.8%; Table 2).

Table 2

Table 3

Studies most commonly reported using the PRISMA guidelines (n = 68, 32.9%). Among the 68 reviews that reported using the PRISMA guidelines, 44 (64.7%) evaluated PB and 51 (75.0%) discussed PB. Forty-two reviews (20.3%) reported searching gray literature for clinical trials to include in their review as a measure to counteract PB. The most common gray literature forms searched were clinicaltrials.gov and conference abstracts. Among those reviews that evaluated PB (n = 89), 79 (88.8%) hand searched reference lists for relevant reviews.

Twenty-five systematic reviews in our sample used RCTs and met our inclusion criteria for PB evaluation. Forty-five total analyses were conducted on results tables from the 25 systematic reviews. Of the 45 analyses, PB was present by funnel plot asymmetry in 36 analyses (80.0%), ie, 20 of the 25 systematic reviews. Using the Duval and Tweedie trim and fill method, we constructed funnel plots showing the studies that would need to be imputed to make each plot symmetrical. Figure 2 is an example of an asymmetrical funnel plot indicating PB, with imputed studies demonstrating the number needed to make the plot symmetrical. The average number of studies imputed by trim and fill was 4 (minimum = 0, maximum = 28; Table 4).

Table 4

Figure 2

Although the trim and fill method showed evidence of PB in 36 of 45 analyses (80.0%), the Egger regression results indicated PB in 51.1% of analyses (n = 23). Five analyses from 4 different systematic reviews showed no evidence of PB with either the trim and fill method or Egger regression. Four analyses had significant PB with Egger regression but not the trim and fill method. Conversely, 17 analyses showed signs of PB with the trim and fill method, but not Egger regression. The mean absolute percent difference between adjusted and observed point estimates was 15.5%, the median was 6.2%, and the range was 0% to 85.5%.

DISCUSSION

Our study investigated the methods by which systematic reviewers evaluate and attempt to mitigate PB. In general, we found that roughly half of reviews published in the top 5 anesthesiology journals discussed PB, and only a small percentage made efforts to mitigate PB through the search process.

Compared with Onishi and Furukawa's8 recent study on PB in systematic reviews, our study yielded a few key differences. First, we found that approximately 45% of reviews did not report an assessment for PB, substantially more than the 31% reported by Onishi and Furukawa. Second, on the basis of funnel plot asymmetry, 80.0% of the analyses from reviews that did not report a PB assessment (corresponding to 20 of 25 systematic reviews) showed significant PB, a much larger percentage than the 19.4% reported by Onishi and Furukawa. Finally, our results show the median absolute percent difference between observed and adjusted point estimates to be 6.2%, whereas Onishi and Furukawa report a median of 50.9%. It would appear that, in the anesthesiology literature, although effect sizes are not being overestimated as severely, there is more evidence of PB, and reviews assess and report PB at a lower rate than in the general medical literature.

An example of how PB influences study effect sizes is shown by our analysis of the meta-analysis in Figure 4 of Zhang et al’s46 study. They compared postoperative pain intensity in patients receiving 300 mg pregabalin versus a control group. Their results found a mean difference of −2.39 between the 2 groups. They did not assess PB. When we corrected for funnel plot asymmetry using the trim and fill analysis, the mean difference changed to −0.35. The ratio of the original effect to the adjusted effect was −2.39/−0.35 or 6.8286, and therefore, the original effect overestimates the effectiveness of pregabalin to treat postoperative pain by 582.9%. A similar situation was reported by Onishi and Furukawa8 when they found an 839.2% overestimate of the reduction in hemoglobin A1c levels in response to disease management programs.
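
The arithmetic behind that percentage is simply the ratio of the observed to the adjusted estimate, minus 1; the short snippet below restates it using the figures quoted in the text.

```python
# Worked arithmetic for the pregabalin example above (values taken from the text).
observed = -2.39   # pooled mean difference reported by Zhang et al
adjusted = -0.35   # mean difference after trim and fill adjustment

ratio = observed / adjusted             # ≈ 6.83
overestimate_pct = (ratio - 1) * 100    # ≈ 582.9%
print(f"ratio = {ratio:.2f}; overestimate = {overestimate_pct:.1f}%")
```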

Although 83.1% of the reviews hand searched the reference lists of relevant articles, only 20.8% searched gray literature sources such as clinicaltrials.gov. Searching gray literature databases, such as the European Association for Grey Literature Exploitation and the Open System for Information on Grey Literature, as well as dissertations, theses, and conference proceedings, may yield more data and a smaller risk of PB.

One key strategy for countering PB in the published literature may be to follow the example of the Journal of Cerebral Blood Flow & Metabolism, which provides a journal section titled “Negative Results.” There, authors can submit data that do not support their alternative hypothesis or that did not reproduce published results.50 Clinicians can then consider those data when making clinical decisions. The International Journal of Radiation Oncology Biology Physics is currently pilot testing an adapted peer review process in which authors submit an introduction and methods section for an initial round of review. This portion of the manuscript is peer reviewed before data collection, based on methodologic quality and design. A second peer review occurs after completion of the study, and both reviews are weighed in the final decision to publish. Efforts such as these may encourage authors to submit their research for publication regardless of the strength or direction of their findings.

Adherence to reporting guidelines may also increase the likelihood of discussing and evaluating PB when appropriate. Half of the reviews in this sample discussed PB. Many of these reviews appropriately followed published guidelines such as the PRISMA,51 MOOSE,52 or QUOROM53 statements; however, barely more than half of those studies evaluated PB. Item 15 of the PRISMA statement notes, “specify any assessment of risk of bias that may affect the cumulative evidence (eg, publication bias).”51 For the reviews that reported adhering to the PRISMA statement, only 75% discussed PB. Ideally, all reviews conducted with the PRISMA statement as a guide would at a minimum discuss PB and evaluate PB when appropriate.

Even though the PRISMA statement was published years ago, many researchers continue to use its predecessor, the QUOROM statement.51,53 An examination of the author instructions for the journals included in this study demonstrated a lack of specificity about, and perhaps of importance placed on, adherence to the PRISMA guidelines. For example, Anesthesiology merely provides word count and abstract guidelines for systematic reviews.54 British Journal of Anaesthesia has specific requirements for reporting RCTs and animal studies, but for systematic reviews, the journal provides 1 nondescript sentence about methods.55 Regional Anesthesia and Pain Medicine has limited guidelines for systematic reviews.56 The other 2 journals emphasize following PRISMA for systematic reviews: Anaesthesia states that systematic reviews should ideally be presented according to the PRISMA statement,57 and Anesthesia & Analgesia requires systematic reviews to be presented in accordance with the PRISMA statement.58 If journals do not require adherence to reporting guidelines such as PRISMA, study quality may suffer.59

As might be expected, the effect sizes decreased in most of the PB analyses we conducted, although in some analyses the effect size increased. Because so many of the analyses we conducted showed significant PB, it stands to reason that authors should make PB evaluation a standard practice during the systematic review process whenever appropriate. Doing so will provide decision makers in anesthesiology, and across all fields of medicine, with reliable and inclusive information for clinical and resource allocation decisions.

DISCLOSURES

Name: Riley J. Hedin, MPH.

Contribution: This author helped design and conduct the study, collect data for analysis, and prepare the manuscript.

Name: Blake A. Umberham.

Contribution: This author helped prepare the manuscript.

Name: Byron N. Detweiler.

Contribution: This author helped prepare the manuscript.

Name: Lauren Kollmorgen.

Contribution: This author helped prepare the manuscript.

Name: Matt Vassar, PhD.

Contribution: This author helped design the study, perform the data analysis, and prepare the manuscript.

This manuscript was handled by: Nancy Borkowski, DBA, CPA, FACHE, FHFMA.

FOOTNOTES

a There are many reasons a study’s results may not be published, such as (a) discovery mid-study of a flawed approach, (b) lack of diligence on the part of the investigators, (c) upstaging by other research that investigated the same issue in the same manner, (d) upstaging by the introduction of better drugs or methodologies, (e) reports of treatment side effects, (f) difficulty recruiting patients, and (g) other reasons. Our study refers to results not being published due to statistically nonsignificant findings.

b NIH Health and Education. Available at: http://www.nimh.nih.gov/health/topics/child-and-adolescent-mental-health/antidepressant-medications-for-children-and-adolescents-information-for-parents-and-caregivers.shtml. Accessed November 4, 2015.

c Covidence. Available at: https://www.covidence.org. Accessed November 4, 2015.

REFERENCES

1. Murad MH, Montori VM, Ioannidis JP, et al. How to read a systematic review and meta-analysis and apply the results to patient care: users’ guides to the medical literature. JAMA. 2014;312:171–179.
2. Manchikanti L. Evidence-based medicine, systematic reviews, and guidelines in interventional pain management, part I: introduction and general considerations. Pain Physician. 2008;11:161–186.
3. Jefferson T, Jones MA, Doshi P, et al. Neuraminidase inhibitors for preventing and treating influenza in healthy adults and children. Cochrane Database Syst Rev. 2012;1:CD008965.
4. Souza JP, Pileggi C, Cecatti JG. Assessment of funnel plot asymmetry and publication bias in reproductive health meta-analyses: an analytic survey. Reprod Health. 2007;4:3.
5. American Society of Anesthesiologists. Practice guidelines for the perioperative management of patients with obstructive sleep apnea: An updated report by the American Society of Anesthesiologists Task Force on Perioperative Management of Patients with Obstructive Sleep Apnea. Anesthesiology. 2014;120:268–286.
6. Ahmed I, Sutton AJ, Riley RD. Assessment of publication bias, selection bias, and unavailable data in meta-analyses using individual participant data: a database survey. BMJ. 2012;344:d7762.
7. Svircevic V, van Dijk D, Nierich AP, et al. Meta-analysis of thoracic epidural anesthesia versus general anesthesia for cardiac surgery. Anesthesiology. 2011;114:271–282.
8. Onishi A, Furukawa TA. Publication bias is underreported in systematic reviews published in high-impact-factor journals: metaepidemiologic study. J Clin Epidemiol. 2014;67:1320–1326.
9. Dwan K, Altman DG, Arnaiz JA, et al. Systematic review of the empirical evidence of study publication bias and outcome reporting bias. PLoS One. 2008;3:e3081.
10. Simes RJ. Publication bias: the case for an international registry of clinical trials. J Clin Oncol. 1986;4:1529–1541.
11. Sterne JA, Egger M, Smith GD. Systematic reviews in health care: investigating and dealing with publication and other biases in meta-analysis. BMJ. 2001;323:101–105.
12. Sutton AJ, Duval SJ, Tweedie RL, et al. Empirical assessment of effect of publication bias on meta-analyses. BMJ. 2000;320:1574–1577.
13. Rothstein HR, Sutton AJ, Borenstein M. Publication Bias in Meta-Analysis. Hoboken, NJ: Wiley; 2005.
14. De Oliveira GS Jr, Chang R, Kendall MC, et al. Publication bias in the anesthesiology literature. Anesth Analg. 2012;114:1042–1048.
15. Littner Y, Mimouni FB, Dollberg S, et al. Negative results and impact factor: a lesson from neonatology. Arch Pediatr Adolesc Med. 2005;159:1036–1037.
16. Thaler K, Kien C, Nussbaumer B, et al; UNCOVER Project CONSORTIUM. Inadequate use and regulation of interventions against publication bias decreases their effectiveness: a systematic review. J Clin Epidemiol. 2015;68:792–802.
17. Whittington CJ, Kendall T, Fonagy P, et al. Selective serotonin reuptake inhibitors in childhood depression: systematic review of published versus unpublished data. Lancet. 2004;363:1341–1345.
18. Nissen SE, Wolski K. Effect of rosiglitazone on the risk of myocardial infarction and death from cardiovascular causes. N Engl J Med. 2007;356:2457–2471.
19. Psaty BM, Kronmal RA. Reporting mortality findings in trials of rofecoxib for Alzheimer disease or cognitive impairment: a case study based on documents from rofecoxib litigation. JAMA. 2008;299:1813–1817.
20. Montori VM, Wilczynski NL, Morgan D, et al; Hedges Team. Optimal search strategies for retrieving systematic reviews from Medline: analytical survey. BMJ. 2005;330:68.
21. Higgins JPT, Green S. Cochrane Handbook for Systematic Reviews of Interventions Version 5.1.0. 2011 The Cochrane Collaboration; Available at: www.cochrane-handbook.org.
22. Sterne JA, Sutton AJ, Ioannidis JP, et al. Recommendations for examining and interpreting funnel plot asymmetry in meta-analyses of randomised controlled trials. BMJ. 2011;343:d4002.
23. Duval S, Tweedie R. Practical estimates of the effect of publication bias in meta-analysis. Australas Epidemiologist. 1998;5:14–17.
24. Duval S, Tweedie R. Trim and fill: a simple funnel-plot-based method of testing and adjusting for publication bias in meta-analysis. Biometrics. 2000;56:455–463.
25. Egger M, Davey Smith G, Schneider M, et al. Bias in meta-analysis detected by a simple, graphical test. BMJ. 1997;315:629–634.
26. Komatsu R, Turan AM, Orhan-Sungur M, et al. Remifentanil for general anaesthesia: a systematic review. Anaesthesia. 2007;62:1266–1280.
27. Potter LJ, Doleman B, Moppett IK. A systematic review of pre-operative anaemia and blood transfusion in patients with fractured hips. Anaesthesia. 2015;70:483–500.
28. Andersen LP, Werner MU, Rosenberg J, et al. A systematic review of peri-operative melatonin. Anaesthesia. 2014;69:1163–1171.
29. Carlisle JB. A meta-analysis of prevention of postoperative nausea and vomiting: randomised controlled trials by Fujii et al. compared with other authors. Anaesthesia. 2012;67:1076–1090.
30. Yin JY, Ho KM. Use of plethysmographic variability index derived from the Massimo(®) pulse oximeter to predict fluid or preload responsiveness: a systematic review and meta-analysis. Anaesthesia. 2012;67:777–783.
31. Pikwer A, Åkeson J, Lindgren S. Complications associated with peripheral or central routes for central venous cannulation. Anaesthesia. 2012;67:65–71.
32. Gattas DJ, Dan A, Myburgh J, et al; CHEST Management Committee. Fluid resuscitation with 6% hydroxyethyl starch (130/0.4) in acutely ill patients: an updated systematic review and meta-analysis. Anesth Analg. 2012;114:159–169.
33. Gurgel ST, do Nascimento P Jr. Maintaining tissue perfusion in high-risk surgical patients: a systematic review of randomized clinical trials. Anesth Analg. 2011;112:1384–1391.
34. Hamilton MA, Cecconi M, Rhodes A. A systematic review and meta-analysis on the use of preemptive hemodynamic intervention to improve postoperative outcomes in moderate and high-risk surgical patients. Anesth Analg. 2011;112:1392–1402.
35. Yu SK, Tait G, Karkouti K, et al. The safety of perioperative esmolol: a systematic review and meta-analysis of randomized controlled trials. Anesth Analg. 2011;112:267–281.
36. Orhan-Sungur M, Kranke P, Sessler D, et al. Does supplemental oxygen reduce postoperative nausea and vomiting? A meta-analysis of randomized controlled trials. Anesth Analg. 2008;106:1733–1738.
37. Beattie WS, Wijeysundera DN, Karkouti K, et al. Does tight heart rate control improve beta-blocker efficacy? An updated analysis of the noncardiac surgical randomized trials. Anesth Analg. 2008;106:1039–1048.
38. Tiippana EM, Hamunen K, Kontinen VK, et al. Do surgical patients benefit from perioperative gabapentin/pregabalin? A systematic review of efficacy and safety. Anesth Analg. 2007;104:1545–1556.
39. Peyton PJ, Wu CY. Nitrous oxide-related postoperative nausea and vomiting depends on duration of exposure. Anesthesiology. 2014;120:1137–1145.
40. Levy M, Heels-Ansdell D, Hiralal R, et al. Prognostic value of troponin and creatine kinase muscle and brain isoenzyme measurement after noncardiac surgery: a systematic review and meta-analysis. Anesthesiology. 2011;114:796–806.
41. Arulkumaran N, Corredor C, Hamilton MA, et al. Cardiac complications associated with goal-directed therapy in high-risk surgical patients: a meta-analysis. Br J Anaesth. 2014;112:648–659.
42. Abdallah FW, Laffey JG, Halpern SH, et al. Duration of analgesic effectiveness after the posterior and lateral transversus abdominis plane block techniques for transverse lower abdominal incisions: a meta-analysis. Br J Anaesth. 2013;111:721–735.
43. Glossop AJ, Shephard N, Shepherd N, et al. Non-invasive ventilation for weaning, avoiding reintubation after extubation and in the postoperative period: a meta-analysis. Br J Anaesth. 2012;109:305–314.
44. Lundstrøm LH, Vester-Andersen M, Møller AM, et al; Danish Anaesthesia Database. Poor prognostic value of the modified Mallampati score: a meta-analysis involving 177 088 patients. Br J Anaesth. 2011;107:659–667.
45. McNicol ED, Tzortzopoulou A, Cepeda MS, et al. Single-dose intravenous paracetamol or propacetamol for prevention or treatment of postoperative pain: a systematic review and meta-analysis. Br J Anaesth. 2011;106:764–775.
46. Zhang J, Ho KY, Wang Y. Efficacy of pregabalin in acute postoperative pain: a meta-analysis. Br J Anaesth. 2011;106:454–462.
47. Ho KM, Tan JA. Use of L’Abbé and pooled calibration plots to assess the relationship between severity of illness and effectiveness in studies of corticosteroids for severe sepsis. Br J Anaesth. 2011;106:528–536.
48. Giglio MT, Marucci M, Testini M, et al. Goal-directed haemodynamic therapy and gastrointestinal complications in major surgery: a meta-analysis of randomized controlled trials. Br J Anaesth. 2009;103:637–646.
49. Hanna MN, Elhassan A, Veloso PM, et al. Efficacy of bicarbonate in decreasing pain on intradermal injection of local anesthetics: a meta-analysis. Reg Anesth Pain Med. 2009;34:122–125.
50. Dirnagl U, Lauritzen M. Fighting publication bias: introducing the Negative Results section. J Cereb Blood Flow Metab. 2010;30:1263–1264.
51. Moher D, Liberati A, Tetzlaff J, et al; PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: the PRISMA statement. BMJ. 2009;339:b2535.
52. Stroup DF, Berlin JA, Morton SC, et al. Meta-analysis of observational studies in epidemiology: a proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA. 2000;283:2008–2012.
53. Moher D, Cook DJ, Eastwood S, et al. Improving the quality of reports of meta-analyses of randomised controlled trials: the QUOROM statement. Quality of Reporting of Meta-analyses. Lancet. 1999;354:1896–1900.
54. Anesthesiology. Instructions for Authors. Anesthesiology Website. 2015. Available at: http://anesthesiology.pubs.asahq.org/public/InstructionsforAuthors.aspx#reviewarticles. Accessed March 2, 2016.
55. Oxford Journals. Instructions for Authors. British Journal of Anaesthesia Website. 2016. Available at: http://www.oxfordjournals.org/our_journals/bjaint/for_authors/general.html. Accessed March 2, 2016.
56. Regional Anesthesia and Pain Medicine. Instructions for Authors. Regional Anesthesia and Pain Medicine Online Submission and Review System Website. 2016. Available at: http://edmgr.ovid.com/rapm/accounts/ifauth.htm. Accessed March 2, 2016.
57. Wiley Online Library. Anaesthesia. Anaesthesia Website. 2016. Available at: http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1365-2044/homepage/ForAuthors.html. Accessed March 2, 2016.
58. Anesthesia & Analgesia. Guide for Authors. Anesthesia & Analgesia Website. 2015. Available at: http://www.aaeditor.org/GuideForAuthors.pdf. Accessed March 2, 2016.
59. Panic N, Leoncini E, de Belvis G, et al. Evaluation of the endorsement of the preferred reporting items for systematic reviews and meta-analysis (PRISMA) statement on the quality of published systematic review and meta-analyses. PLoS One. 2013;8:e83138.
© 2016 International Anesthesia Research Society