CONSORT item adherence in top ranked anaesthesiology journals in 2011: A retrospective analysis

Münter, Nils H.*; Stevanovic, Ana*; Rossaint, Rolf; Stoppe, Christian; Sanders, Robert D.; Coburn, Mark

European Journal of Anaesthesiology (EJA): February 2015 - Volume 32 - Issue 2 - p 117–125
doi: 10.1097/EJA.0000000000000176

BACKGROUND Randomised controlled trials (RCTs) are the gold standard for measuring the efficacy of any medical intervention. The present study assesses the implementation of the CONSORT statement in the top 11 anaesthesiology journals in 2011.

OBJECTIVES We designed this study in order to determine how well authors in the top 11 ranked anaesthesiology journals follow the CONSORT statement's criteria.

DESIGN A retrospective cross-sectional data analysis.

SETTING The study was performed at the RWTH Aachen University Hospital.

PARTICIPANTS Journals included Pain, Anesthesiology, British Journal of Anaesthesia, Regional Anesthesia and Pain Medicine, European Journal of Pain, Anesthesia and Analgesia, Anaesthesia, Minerva Anestesiologica, Canadian Journal of Anesthesia, Journal of Neurosurgical Anesthesiology and the European Journal of Anaesthesiology.

MAIN OUTCOME MEASURES All articles in the online table of contents from the top 11 anaesthesiology journals according to the ISI Web of Knowledge were screened for RCTs published in 2011. The RCTs were assessed using the CONSORT checklist. We also analysed the correlation between the number of citations and the adherence to CONSORT items.

RESULTS We evaluated 319 RCTs and found that, more than ten years after the publication of the CONSORT statement, the RCTs satisfied a median of 60.0% of the CONSORT criteria. Only 72.1% of the articles presented clearly defined primary and secondary outcome parameters. The number of citations was only weakly associated with fulfilment of the CONSORT statement (r² = 0.023).

CONCLUSION Adherence to the CONSORT criteria remains low in top-ranked anaesthesiology journals. We found only a very weak correlation between the number of citations and fulfilment of the requirements of the CONSORT statement.

Supplemental Digital Content is available in the text

From the Department of Anaesthesiology, University Hospital Aachen, Aachen, Germany (NHM, AS, RR, CS, MC), and Department of Anaesthesia and Surgical Outcomes Research Centre, University College London Hospital & Wellcome Department of Imaging Neuroscience, University College London, London, UK (RDS)

*Nils H. Münter and Ana Stevanovic contributed equally to the writing of this article.

Correspondence to Mark Coburn, MD, Department of Anaesthesiology, University Hospital Aachen, Pauwelsstr. 30, D-52074 Aachen, Germany Tel: +49 241 8088179; e-mail:

Published online 9 November 2014

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Website.

Introduction

Randomised controlled trials (RCTs) provide the most reliable evidence for the efficacy of medical therapies and interventions.1–3 In order to draw correct conclusions from an individual study, it is of the utmost importance that the design, realisation and statistical analysis are described accurately and in detail.4 The Consolidated Standards of Reporting Trials (CONSORT) statement helps authors to achieve these goals. At a time when massive amounts of data have to be processed, it is an important guide in standardising the reporting of data and easing the comprehension and comparison of trials. The CONSORT statement is intended to improve the reporting of an RCT, enabling readers to understand a trial's design, conduct, analysis and interpretation, and to assess the validity of its results. It emphasises that this can only be achieved by complete transparency from authors.5–7 The first CONSORT statement was published in 19965 and has been revised twice to date.6,7 The latest revision was published in 2010.6

Several studies have been published that examine the implementation of the CONSORT statement in a wide variety of journals.8 However, to our knowledge, very few studies have examined the application of the CONSORT statement in the field of anaesthesiology.9

We designed the present study in order to determine how well authors in the 11 top-ranked anaesthesiology journals (ISI Web of Knowledge 2011; Thomson Reuters, London, UK) follow the CONSORT statement's criteria.


Materials and methods

This study was designed as a retrospective data analysis and is reported according to the STROBE recommendations. The study was performed at the RWTH Aachen University Hospital. We identified the top 11 anaesthesiology journals according to the highest impact factor in 2011, using the ISI Web of Knowledge 2011 (Thomson Reuters). We excluded the Clinical Journal of Pain because it contains only reviews. All articles were screened (NHM) from the online table of contents of each journal published between 1 January 2011 and 31 December 2011. Published articles were categorised (editorial, randomised study, nonrandomised study, letter, other and so on). Indistinct allocations were clarified by one author (MC). Only primary reports of RCTs were included. We considered as randomised any prospective study that assessed healthcare interventions in human participants or manikins allocated randomly to a study group.

Two reviewers (NHM, MC) created a checklist according to the revised CONSORT statement checklist and elaboration document; they also clarified controversial items and defined clear criteria and keywords. Each RCT was then analysed (NHM) for adherence to CONSORT item reporting, using the revised CONSORT statement checklist and elaboration document as a guideline, as well as our checklist. In addition, an automated keyword search was performed using the Adobe Acrobat Reader search function (Adobe Systems GmbH, Germany). The results of both methods were then combined. Each item was categorised as present, absent or nonapplicable and thus coded as ‘yes’, ‘no’ or ‘not applicable’. The third option applied only to item 11b (similarity of interventions), because not all trials were blinded. Each of the 37 CONSORT items was assessed except for item 24 (protocol accessibility), which was not analysed. The maximum number of items per article was thus 36, or 35 where item 11b was not applicable.
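The scoring rule described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' actual tooling; the item codes and data are invented, and only the arithmetic (item 24 excluded, item 11b optionally not applicable) follows the study's description:

```python
# Hypothetical sketch of the adherence scoring described above.
# Each article is a mapping from CONSORT item codes to 'yes', 'no'
# or 'na' ('na' was permitted only for item 11b in the study).

def adherence_percentage(items):
    """Percentage of applicable CONSORT items coded 'yes'.

    Item 24 (protocol accessibility) is excluded from scoring, so the
    denominator is 36, or 35 when item 11b is 'not applicable'.
    """
    scored = {code: v for code, v in items.items() if code != '24'}
    applicable = [v for v in scored.values() if v != 'na']
    if not applicable:
        raise ValueError('no applicable items')
    return 100.0 * applicable.count('yes') / len(applicable)

# Toy article: 36 applicable items, 27 coded 'yes' -> 75.0%
toy = {f'item{i}': ('yes' if i < 27 else 'no') for i in range(36)}
```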

We analysed only the quantity of reported items, not the quality of the reported content. The following items can induce a variety of judgements regarding reporting adherence and were predefined in this trial as follows: items 3b (changes to methods after trial commencement), 6b (changes to trial outcomes after the trial commenced) and 7b (interim analyses and stopping guidelines) were coded ‘no’ if they were not reported. This is justified by the fact that we did not access the original study protocol and thus could not verify that these items were not applicable to the trial. We assigned ‘yes’ if the trials reported that there were no changes, or that there were no planned interim analyses and stopping guidelines. Item 7a (how sample size was determined) was coded ‘yes’ if the following parameters were reported: type I error, one or two-tailed test, type II error or power, type of test, assumptions in the control group and the predicted treatment effect. It was also coded ‘yes’ if only the α risk, or whether the test was one or two-tailed, was missing.10 We did not recalculate the sample size estimation. Although blinding was not performed in every trial, authors should always state whether blinding was used by reporting item 11a (whether blinding was used and who was blinded).6 We therefore assigned ‘yes’ if nonblinding was reported anywhere in the article. Item 12b (methods for additional analyses) was assigned ‘yes’ if additional analyses were prespecified in the Materials and methods section. It was coded ‘no’ when not reported, because we did not know whether this reflected a true absence of prespecified additional analyses or a posthoc decision after trial termination. We applied the same rule to item 18 (results of other analyses). Item 14b (why the trial ended or was stopped) was coded ‘yes’ only if the Results section clearly stated why the trial came to an end. We predefined that stopping once the target sample size was reached also had to be reported. Item 19 (harms) was assigned ‘yes’ in the manikin trials if potential harms to human beings were described.

We analysed each journal as to whether it endorses, supports or does not mention the CONSORT statement. We considered all journals listed on the CONSORT website as endorsing journals, and they were classified as ‘endorsing’. All other journals that were not listed on the CONSORT website but that recommended the use of CONSORT in their advice to authors were classified as ‘supporting’. Finally, we recorded the number of citations each article had received by 6 July 2013, using the ISI Web of Knowledge (Thomson Reuters).

A third author (AS) independently double-checked the items described above (3b, 6b, 7a, 7b, 11a, 12b, 14b, 18 and 19) to enhance objectivity in their assignment, as they appeared to be more dependent on authors’ interpretations than the other items. Whenever the assigning authors (NHM, AS) were uncertain about an article meeting the set criteria, a further author (MC) was consulted and the matter was discussed until a mutual agreement was achieved. It was not feasible to blind the authors to the names of the journals or the authors.


Statistical analysis

We calculated the number of, and percentage adherence to, each specific CONSORT item for all included RCTs stratified by journal. We also computed the median and range of the percentage adherence of RCTs of all journals to each of the CONSORT items. We classified each RCT into one of three categories depending on the percentage of items satisfied (<50%, between 50 and 80% or >80%). In addition, we allocated each RCT into one of 10 categories depending on the items reported: from 0 to 100% (with steps of 10%).
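The classification steps above can be sketched as two small Python helpers. This is a hypothetical illustration; the paper does not state whether the bin boundaries are inclusive, so the exact edges here are an assumption:

```python
def adherence_category(pct):
    # Three-way classification used in the analysis: <50%, 50-80%, >80%.
    # Treating 50 and 80 as part of the middle bin is an assumption.
    if pct < 50:
        return '<50%'
    if pct <= 80:
        return '50-80%'
    return '>80%'

def decile_bin(pct):
    # One of ten 10-percentage-point bins spanning 0-100%;
    # 100% is folded into the top bin.
    i = min(int(pct // 10), 9)
    return f'{i * 10}-{(i + 1) * 10}%'
```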

Finally, we assessed whether there was any correlation, at the article level, between adherence to the CONSORT items and the number of citations up to 6 July 2013. For correlation studies, linear regression analysis with calculation of the Pearson coefficient and the coefficient of determination (r²) was performed. A P value of less than 0.05 was considered significant.
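The correlation measure can be sketched in pure Python (toy data, not the study's; the authors used SPSS, and a library routine such as scipy.stats.pearsonr would give the same r):

```python
from math import sqrt

def pearson_r(xs, ys):
    # Pearson correlation coefficient; squaring it gives the
    # coefficient of determination r^2 reported in the paper.
    n = len(xs)
    if n != len(ys) or n < 2:
        raise ValueError('need two equal-length samples, n >= 2')
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var_x = sum((x - mx) ** 2 for x in xs)
    var_y = sum((y - my) ** 2 for y in ys)
    return cov / sqrt(var_x * var_y)

# Toy data (hypothetical): CONSORT adherence % vs. citation counts
adherence = [40.0, 55.0, 60.0, 62.0, 75.0, 88.0]
citations = [1, 3, 4, 2, 6, 9]
r_squared = pearson_r(adherence, citations) ** 2
```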

All statistical analyses were performed using IBM SPSS Statistics 20 (IBM Deutschland GmbH, Germany).

Results

We screened 3590 articles, of which 3250 were excluded (Fig. 1). Of the remaining 340 articles, a further 21 were excluded after a secondary assessment determined that they were not primary studies but subgroup analyses of larger trials, leaving 319 RCTs for analysis in the current study. Only six of the journals endorsed the CONSORT statement (Pain, Anesthesiology, British Journal of Anaesthesia, Anesthesia and Analgesia, Canadian Journal of Anesthesia and European Journal of Anaesthesiology). Three of the journals supported the CONSORT statement in their instructions for authors (Anaesthesia, Minerva Anestesiologica and European Journal of Pain) (see Supplemental Digital Content 1, which shows information about additional journals).

Fig. 1

How the RCTs published in 2011 in the top 11 anaesthesiology journals met the requirements for each CONSORT item is summarised in Table 1. The number and percentage of all trials reporting each CONSORT item according to the journal is summarised in Table 2. The median percentage adherence to all items of all RCTs was 60.0%, ranging from 22.9 to 88.9% (Table 1). The majority of publications fell within the range of 50 to 80% of the CONSORT items (Fig. 2). Most articles reported 61 to 70% of the CONSORT items (Fig. 3). Specific items such as 1b (structured summary of trial design, methods, results and conclusions), 2a (scientific background and explanation of rationale), 2b (specific objectives or hypotheses), 4a (eligibility criteria for participants), 5 (detailed description of the intervention for each group), 12a (statistical methods used to compare groups for primary and secondary outcomes) and 22 (interpretation of data) reached more than 90% adherence. Other items such as 3b (important changes to methods after trial commencement, with reasons), 6b (changes to trial outcomes), 7b (explanation of any interim analyses and stopping guidelines) and 14b (why the trial ended or was stopped) were reported with a median value of 0% (Table 2). Some items that are fundamental to assessing the validity of a study, for example items 6a (completely defined prespecified primary and secondary outcome measures), 7a (how sample size was determined) and 25 (funding, role of funders), were reported with median [range] values of 72.5% [16.7 to 100%], 87.1% [47.4 to 100%] and 83.3% [37.5 to 97.3%], respectively.

Table 1

Table 2

Fig. 2

Fig. 3

The median number of citations per RCT was 4, with a range of 0 to 35, at the assessment date of 6 July 2013 (SDC 1). We show in SDC 1 whether the journals endorsed or supported the CONSORT statement, the respective impact factors of the journals and the number of published articles in each journal. Adherence to the CONSORT criteria showed a statistically significant (P < 0.01) but very weak linear relationship with the number of citations (r² = 0.023) (Fig. 4).

Fig. 4

Discussion

One of the most important findings is that, more than 10 years after the first CONSORT statement, RCTs published in the top 11 anaesthesiology journals achieved a median CONSORT item adherence of only 60%. In addition, we found only a very weak correlation between adherence to CONSORT items and the number of citations.

This study provides an assessment of adherence to the CONSORT statement in RCTs in the field of anaesthesiology and gives detailed information on the quality of reporting in this area. Most of the studies published in this field until now were based on computerised searches of PubMed or other databases. In contrast, we manually screened every article against our inclusion criteria and evaluated the full-text article, not only the abstract.

Turner et al. 11 demonstrated in a Cochrane review that the completeness of reporting of RCTs improves when journals endorse the CONSORT statement. Our study confirms the findings of previously published studies. For example, 270 of the 319 RCTs (84.6%) carried out a power analysis and sample size calculation. This is in agreement with Harhay et al.,12 who investigated the outcomes and statistical power in adult critical care RCTs in 16 high-impact journals in the 7 years until May 2013. They found that 92% of 135 RCTs performed a power or sample size calculation. However, a review of RCTs published between January 2005 and December 2006 in six high impact factor general medical journals showed that only 34% reported all the required data to calculate the sample size.10 Similar results were observed by Latronico et al.,13 who assessed the quality of reporting in RCTs published in Intensive Care Medicine between 2001 and 2010. Only 43% stated a rationale for sample size calculation in that series. Of note, we did not recalculate the sample size determination; we assessed only the quality of reporting enough items for sample size calculation according to the CONSORT explanation and elaboration document.6 It has been shown that adequate reporting of sample size calculations does not mean accurate calculation.10 Sufficient reporting of sample size calculations is one crucial factor. Only a trial with adequate power can deliver valid results, and therefore, a sample size calculation should be reported transparently to the readers, to enable a recalculation.

In our study, only 230 of the 319 RCTs (72.1%) clearly declared a primary outcome parameter. This leads to a related topic: the reporting of RCTs with statistically nonsignificant results. Having analysed published reports of RCTs in December 2006, Boutron et al. 14 showed that trials with statistically nonsignificant primary outcomes were inconsistent in their reporting and interpretation of results.

In a recent Cochrane review, Lundh et al. 15 found that sponsorship of drug or device studies by the manufacturing company leads to more favourable results and conclusions than sponsorship from other sources. This underlines the importance of the funding section in the CONSORT statement. Previously, it has been shown that funding was reported in only 62%8 and 54%9 of publications. It is satisfying to see a positive trend in the anaesthesiology journals, as we found that 263 of 319 RCTs (82.4%) reported the source of funding. Further improvement in the reporting of funding sources is clearly important for anaesthesiology trials.

We were not able to show a strong correlation between adherence to CONSORT items and the number of citations, in contrast to the findings of Latronico et al.,13 who considered the number of citations an important indicator of an article's impact in the scientific community. Hopewell et al. 8 published a study analysing the impact of the CONSORT statement on reporting across all specialties and found reporting similar to our data. They compared studies from the years 2000 and 2006, before the second revision of the CONSORT statement. Mills et al. 16 compared RCTs in five high-impact general medical journals between 2002 and 2003 and found results somewhat similar to those of the current study. Can et al. 17 investigated abstracts of RCTs in four high-profile anaesthesia journals, comparing abstracts from before the latest CONSORT revision with those after it. They found improvements in only a few of the subgroups, whereas the majority did not improve significantly, and they described the overall reporting quality as poor. Contrary to this finding, Hopewell et al.,18 in a study of high-impact medical journals, showed that an active policy of implementing the CONSORT guidelines led to a significant increase in the number of reported items in RCT abstracts.

There is some evidence that reporting in specialty journals may be inferior to that in general medical journals.8 All of these results should be read critically, because the few studies performed to date often focused on different outcomes, time spans, inclusion criteria or defaults for assigning adherence, and are therefore difficult to compare. We would also like to stress that the present study evaluated the quality of reporting of RCTs and not the quality of the studies themselves. In addition, there is significant evidence that adequate methods are often used but not mentioned in the final publication.19,20

Our study has several limitations. For example, all articles were analysed by one author (NHM); only nine (25%) of the 36 CONSORT items were double-checked by another author (AS), although a further author (MC) clarified all indistinct allocations. It is therefore possible that we missed, or misinterpreted, reported items. In addition, evaluating whether a manuscript meets one of the 37 broad checklist items involves subjective judgement and measurement error. Furthermore, for some items it is difficult to define hard criteria, and so these may be misclassified irrespective of their presence or absence in a report. Although we aimed for a high standard in standardising these items, inconsistencies might appear in the evaluation. The correlation between the number of citations and reporting quality has to be interpreted carefully. First, the analysis of the number of citations was performed at a fixed time point. Second, it is impossible to state whether a study is cited more often because it was well reported or because of its content. In this context, it is important to mention that selecting journals by impact factor alone creates a bias, because the calculated impact factor does not fully reflect a journal's real impact; for this reason, some important journals were not assessed in this study. Another limitation is that we restricted the analysis to the year 2011. Finally, our inclusion of seven manikin RCTs among a total of 319 should not have a major impact on our results; despite their inclusion, our results are comparable to those of other studies investigating adherence to the CONSORT statement in humans only.

Conclusion

We found that RCTs in the top-ranked anaesthesiology journals report a median proportion of 60% of the CONSORT statement's recommendations. The number of citations was weakly associated with the adherence to the CONSORT items. Although the reporting quality of some items has improved since the implementation of the CONSORT statement, the overall reporting quality remains poor.

For future RCTs, the CONSORT statement serves as a useful checklist during the design of a trial. Completion of the CONSORT checklist should become mandatory in each journal's submission process. Further trials are needed that investigate not only the quantity but also the quality of the reported items. Particular focus should be given to comparing the original study protocols with the published articles, and validating power calculations.


Acknowledgements relating to this article

Assistance with the article: none.

Financial support and sponsorship: this study was financed by the Department of Anaesthesiology, University Hospital Aachen, Germany.

Conflicts of interest: none.

Presentation: none.

References

1. Jüni P, Altman DG, Egger M. Systematic reviews in healthcare: assessing the quality of controlled clinical trials. BMJ 2001; 323:42–46.
2. Moher D. CONSORT: an evolving tool to help improve the quality of reports of randomized controlled trials. Consolidated standards of reporting trials. JAMA 1998; 279:1489–1491.
3. Sibbald B, Roland M. Understanding controlled trials. Why are randomised controlled trials important? BMJ 1998; 316:201.
4. Ghimire S, Kyung E, Kang W, Kim E. Assessment of adherence to the CONSORT statement for quality of reports on randomized controlled trial abstracts from four high-impact general medical journals. Trials 2012; 13:77.
5. Begg C, Cho M, Eastwood S, et al. Improving the quality of reporting of randomized controlled trials: the CONSORT statement. JAMA 1996; 276:637–639.
6. Moher D, Hopewell S, Schulz KF, et al. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. J Clin Epidemiol 2010; 63:e1–e37.
7. Schulz KF, Altman DG, Moher D; CONSORT Group. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. BMJ 2010; 340:c332.
8. Hopewell S, Dutton S, Yu M, et al. The quality of reports of randomised trials in 2000 and 2006: comparative study of articles indexed in PubMed. BMJ 2010; 340:c723.
9. Elia N, Tramèr MR. Adherence to guidelines for improved quality of data reporting: where are we today? Eur J Anaesthesiol 2011; 28:478–480.
10. Charles P, Giraudeau B, Dechartres A, et al. Reporting of sample size calculation in randomised controlled trials: review. BMJ 2009; 338:b1732.
11. Turner L, Shamseer L, Altman DG, et al. Consolidated standards of reporting trials (CONSORT) and the completeness of reporting of randomised controlled trials (RCTs) published in medical journals. Cochrane Database Syst Rev 2012; 11:MR000030.
12. Harhay MO, Wagner J, Ratcliffe SJ, et al. Outcomes and statistical power in adult critical care randomized trials. Am J Respir Crit Care Med 2014; 189:1469–1478.
13. Latronico N, Metelli M, Turin M, et al. Quality of reporting of randomized controlled trials published in intensive care medicine from 2001 to 2010. Intensive Care Med 2013; 39:1386–1395.
14. Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA 2010; 303:2058–2064.
15. Lundh A, Sismondo S, Lexchin J, et al. Industry sponsorship and research outcome. Cochrane Database Syst Rev 2012; 12:MR000033.
16. Mills EJ, Wu P, Gagnier J, Devereaux PJ. The quality of randomized trial reporting in leading medical journals since the revised CONSORT statement. Contemp Clin Trials 2005; 26:480–487.
17. Can OS, Yilmaz AA, Hasdogan M, et al. Has the quality of abstracts for randomised controlled trials improved since the release of consolidated standards of reporting trial guideline for abstract reporting? A survey of four high-profile anaesthesia journals. Eur J Anaesthesiol 2011; 28:485–492.
18. Hopewell S, Ravaud P, Baron G, Boutron I. Effect of editors’ implementation of CONSORT guidelines on the reporting of abstracts in high impact medical journals: interrupted time series analysis. BMJ 2012; 344:e4178.
19. Hill CL, LaValley MP, Felson DT. Discrepancy between published report and actual conduct of randomized clinical trials. J Clin Epidemiol 2002; 55:783–786.
20. Soares HP, Daniels S, Kumar A, et al. Bad reporting does not mean bad methods for randomised trials: observational study of randomised controlled trials performed by the radiation therapy oncology group. BMJ 2004; 328:22–24.

Supplemental Digital Content

© 2015 European Society of Anaesthesiology