Invited commentary

Writing the abstract: completeness and accuracy matter

von Elm, Erik

European Journal of Anaesthesiology: July 2011 - Volume 28 - Issue 7 - p 483-484
doi: 10.1097/EJA.0b013e328343b160

Journal abstracts are probably the most readily available part of the biomedical literature. Access to the abstract is not restricted as it is for the full text of most scientific publications. Electronic literature databases routinely index journal abstracts and make their content searchable by online search engines. Consequently, many readers first come across the abstract and only then decide whether the full article is worth retrieving (and paying for). Further, clinicians may base their decisions on the information derived from journal abstracts alone.1 Abstracts provide only very limited space to convey essential information on clinical studies that, in contrast, may have taken several years and enrolled hundreds of participants. The introduction and widespread use of structured abstracts by journals have improved the quality of abstracts considerably.2 Given that most clinical journals now require that abstracts be organised following a standardised format, readers can expect to find concise information about study objectives, methods and results in a ‘nutshell’.3 Structured abstracts have been found to be of higher quality than the more traditional descriptive abstracts and allow readers to find information more easily.4

Frequently, authors tackle the writing of the article summary only in the last stages of the process, or the task is left to less experienced authors.5 As a consequence, many abstracts are tainted with incompleteness, inconsistencies or even blatant errors. Literature-based studies comparing the content of abstracts with the corresponding full-text article have identified considerable deficiencies in different medical disciplines.6,7 The empirical study by Can et al. in this issue8 adds to this body of evidence using a different approach. The investigators analysed abstracts of randomised trials published in four major anaesthesiology journals and used the CONSORT guidelines for reporting randomised trials in journal and conference abstracts as the reference standard.9 The good news is that more than half of the article abstracts described trial interventions, objectives and conclusions adequately. However, in most abstracts, crucial information such as study outcomes or the number of patients analysed was not reported. The findings confirm earlier evidence that reporting in the anaesthesiology literature has improved over time, but that deficiencies persist.10

There are strong incentives to invest time and effort in writing journal abstracts. Editors of journals with large numbers of submissions read only the abstract at a first screening stage and, based on this initial judgement, select material for further review.5 Consequently, the abstract is a ‘store sign’ for the ensuing article. If complex study methods are to be described, deciding what should be included in the abstract is not easy. In this situation, reporting guidelines and checklists offer guidance for writing up studies of various types.11 The array of available tools has recently been complemented by a reporting guideline dedicated to abstracts, the CONSORT guidelines for reporting randomised trials in journal and conference abstracts.9 However, the onus of improving the quality of scientific reporting should not be left to authors alone: it is the joint responsibility of all those involved in the publication process. As gatekeepers, editors and referees play crucial roles and can help to improve the completeness and accuracy of submitted articles considerably.

A good journal abstract not only stimulates the interest of individual readers but is often also the primary source of information used for media releases and news pieces. Unsurprisingly, yet another form of distortion can be found in abstracts: spin. In the context of clinical trials, spin has been defined as ‘specific reporting strategies from whatever motive to highlight that a treatment is beneficial’. The problem has been studied systematically only recently.12 A common form of spin is highlighting statistically significant results from within-group comparisons (e.g. between baseline and follow-up measurements) instead of those from the between-group comparison that was planned initially. Another frequent form is emphasising significant results from subgroups instead of less exciting results based on the entire study population. Spin in journal abstracts (and full articles) appears to be frequent. In a representative sample of 72 abstracts and full articles of randomised controlled trials, 49 (68%) abstracts showed some kind of spin in at least one section, and 20 (28%) in all sections.12 These overoptimistic representations of study results often favour new (and more costly) interventions that may still lack a proper evidence base for efficacy and cost-effectiveness. In contrast to the more obvious gaps in the reporting of trial data, uncovering spin is much more demanding. It may require in-depth knowledge of the specific field of research and advanced skills in critical appraisal. To identify unwarranted claims in primary and secondary publications, it is all the more important that the publications contain unambiguous information. Many problems with published articles are detected only after publication, that is, when the scientific community has the opportunity to review the research. Journals should give ample room and time for so-called ‘post-publication peer review’. Unfortunately, many journals restrict the time window and format of their correspondence sections.13 These sections should be a source of complementary information about published research and a forum for scientific debate. In this regard, not only are authors encouraged to make extra efforts to improve the quality of reporting, but readers should also play their role as peers and provide constructive criticism of the published material.

This article was checked and accepted by the Editors, but was not sent for external peer review.

References

1 Barry HC, Ebell MH, Shaughnessy AF, et al. Family physicians' use of medical abstracts to guide decision making: style or substance? J Am Board Fam Pract 2001; 14:437–442.
2 Taddio A, Pain T, Fassos FF, et al. Quality of nonstructured and structured abstracts of original research articles in the British Medical Journal, the Canadian Medical Association Journal and the Journal of the American Medical Association. CMAJ 1994; 150:1611–1615.
3 Haynes RB, Mulrow CD, Huth EJ, et al. More informative abstracts revisited. Ann Intern Med 1990; 113:69–76.
4 Hartley J, Sydes M, Blurton A. Obtaining information accurately and quickly: are structured abstracts more efficient? J Inform Sci 1996; 22:349–356.
5 Groves T, Abbasi K. Screening research papers by reading abstracts. BMJ 2004; 329:470–471.
6 Ward LG, Kendrach MG, Price SO. Accuracy of abstracts for original research articles in pharmacy journals. Ann Pharmacother 2004; 38:1173–1177.
7 Estrada CA, Bloch RM, Antonacci D, et al. Reporting and concordance of methodologic criteria between abstracts and articles in diagnostic test studies. J Gen Intern Med 2000; 15:183–187.
8 Can OS, Yilmaz AA, Hasdogan M, et al. Has the quality of abstracts for randomised controlled trials improved since the release of Consolidated Standards of Reporting Trial guideline for abstract reporting? A survey of four high-profile anaesthesia journals. Eur J Anaesthesiol 2011; 28:485–492.
9 Hopewell S, Clarke M, Moher D, et al. CONSORT for reporting randomised trials in journal and conference abstracts. Lancet 2008; 371:281–283.
10 Langford RA, Huang GH, Leslie K. Quality and reporting of trial design in scientific papers in anaesthesia over 25 years. Anaesthesia 2009; 64:60–64.
11 Simera I, Moher D, Hoey J, et al. A catalogue of reporting guidelines for health research. Eur J Clin Invest 2010; 40:35–53.
12 Boutron I, Dutton S, Ravaud P, Altman DG. Reporting and interpretation of randomized controlled trials with statistically nonsignificant results for primary outcomes. JAMA 2010; 303:2058–2064.
13 von Elm E, Wandel S, Jüni P. The role of correspondence sections in post-publication peer review: a bibliometric study of general and internal medicine journals. Scientometrics 2009; 81:747–755.
© 2011 European Society of Anaesthesiology