Despite recognition of the importance of acute pain management and the institution of acute pain services in many hospitals since the early 1990s, many patients continue to suffer episodes of moderate or severe pain.1 The field of acute postoperative pain should therefore be a fertile area for research. This brief study was designed to identify any changes in the number and quality of clinical trials between two years, 15 yr apart.
The years chosen were 1992 and 2007. By 1992, acute pain management had been given priority in both the United States and the United Kingdom. Techniques such as patient-controlled analgesia and epidural analgesia were well established, and acute pain services were becoming widespread.2,3 The later year was chosen because it was the last complete year at the time the search was conducted. The six journals with the highest impact factor in the field of anesthesiology in each year were selected.4 Journals were searched using fields in PubMed restricted to journal, year, clinical trial, human, and the terms “postoperative” and “pain.” The contents pages of each edition of each journal in both years were then hand-searched for any other possible inclusions, and complete copies of each potential report were then obtained. Reports were included if they described a prospective study in which acute postoperative pain was an important end point. Retrospective studies, studies on nonoperative or labor pain, and studies on pediatric pain were excluded. Further details of this process are provided in the Appendix.

All eligible articles were evaluated for methodological quality using the instrument described by Jadad et al.5 This instrument is designed to measure the likelihood of bias in pain research reports. Each report is examined for randomization, blinding, and management of dropouts and withdrawals. The score ranges from 0 to 5, with 5 indicating the lowest potential for bias. Other data extracted included trial size (N), explicit statements concerning methodology (power analysis, primary end points, management of withdrawals, dropouts, and protocol violations), primary and secondary end points studied, and whether adverse effects were sought.
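The Jadad scoring logic can be sketched as a simple function. This is an illustrative paraphrase of the instrument's items, not its published wording; the argument names are my own:

```python
def jadad_score(randomized, rand_method_ok, rand_method_bad,
                double_blind, blind_method_ok, blind_method_bad,
                withdrawals_described):
    """Illustrative 0-5 Jadad score from yes/no judgments about a trial report."""
    score = 0
    if randomized:
        score += 1
        if rand_method_ok:
            score += 1      # appropriate sequence generation described
        if rand_method_bad:
            score -= 1      # e.g., allocation by date of birth or hospital number
    if double_blind:
        score += 1
        if blind_method_ok:
            score += 1      # e.g., identical-appearing placebo described
        if blind_method_bad:
            score -= 1      # blinding claimed but the method was inadequate
    if withdrawals_described:
        score += 1          # numbers and reasons for withdrawals reported
    return max(0, min(5, score))
```

A report describing adequate randomization and blinding and accounting for all withdrawals would score 5; a report with none of these features would score 0.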
All comparisons between years were made with Fisher’s exact test, except trial size and quality, which were compared using Wilcoxon’s rank sum test. Statistics were performed using Stata/IC 10.0 (StataCorp, College Station, TX).
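For a 2×2 table of counts (e.g., trials with versus without a given feature in each year), Fisher's exact two-sided p-value can be computed directly from the hypergeometric distribution. A minimal standalone sketch, not the Stata implementation used in the study:

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided Fisher's exact test for the 2x2 table [[a, b], [c, d]].

    Sums the hypergeometric probabilities of all tables with the same
    margins whose probability does not exceed that of the observed table.
    """
    r1, r2 = a + b, c + d        # row totals
    c1 = a + c                   # first column total
    n = r1 + r2

    def hyper(x):
        # P(first cell = x) under fixed margins
        return comb(r1, x) * comb(r2, c1 - x) / comb(n, c1)

    p_obs = hyper(a)
    lo, hi = max(0, c1 - r2), min(r1, c1)
    return sum(hyper(x) for x in range(lo, hi + 1) if hyper(x) <= p_obs + 1e-12)
```

For the table [[3, 1], [1, 3]] this yields 34/70 ≈ 0.486, matching standard statistical software.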
The same six journals had the highest impact factors in both 1992 and 2007. Despite a 31% increase in the number of human clinical trials published in the six journals (325 in 1992 to 425 in 2007), the number of clinical studies in acute postoperative pain did not increase (Table 1). However, the contribution of the different journals to the total altered significantly. Most of the difference is explained by the increase in the number of pain trials published in Anesthesia & Analgesia and by the decline in the number published in Anesthesiology in 2007. Both journals with “pain” in the title publish very few articles on acute postoperative pain and have a strong emphasis on chronic pain research.
Methodological quality has increased, and the median trial size nearly doubled (Table 2). All trials except one in each group were randomized, and the majority were classical placebo-controlled trials. The exceptions were both prospective audits of a technique or intervention. Clearly, the bias instrument is not appropriate for this kind of study, but because there was only one such report in each group, they were not excluded. Articles are no longer published without specific statements on primary end points and power analysis. Likewise, explicit statements on the statistical treatment of data from withdrawals, dropouts, and protocol violations are now the norm.
There has been a temporal shift in hypothesis interests. There are fewer studies using neuraxial techniques and more involving multimodal techniques (Fig. 1).
There have been some minor changes in the choice of end points. Subjective measures of efficacy and satisfaction are more common, and reliance on the Visual Analog Scale for pain assessment has decreased significantly. Adverse effects are more likely to be specifically sought.
There has been an improvement in the quality of publications. Some of the problems of the earlier articles may simply be failures to describe elements of methodology that would now be demanded in a report. The increase in median trial size suggests that power analysis is more frequently performed. An alternative explanation is that, with increased knowledge, more modest improvements are being sought, which require larger numbers to be adequately powered. The methodological quality instrument is designed to evaluate the potential for bias in a report. The instrument itself is reliable and valid.5 Lower scores are consistently reported when the instrument is applied blinded rather than open. It was not possible to review these articles blinded because the older reports were mostly photocopies and the more recent ones were printed portable document files. Also, the author was familiar with some of the more recent trials, and the specific interventions being studied tended to date the trials. The findings are consistent with the sparse literature on the methodological quality of randomized controlled trials (RCTs) in the anesthesia literature.6–8 Greenfield et al.6,7 analyzed all RCTs published in four anesthesia journals in two years, 2000 and 2006. They demonstrated a significant improvement in overall study quality. Specifically, they noted an improvement in the reporting of sample size estimates, major end point analysis, and the reporting of side effects. They used a modified version of the Chalmers quality instrument.9 This instrument is more detailed than that used in this report and evaluates 14 domains, eight on protocol and six on data analysis.
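The link between trial size and the size of the improvement sought can be made concrete with the textbook normal-approximation formula for a two-arm comparison of means (a standard formula, not one taken from the reports reviewed):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(delta, sigma, alpha=0.05, power=0.8):
    """Per-group sample size to detect a mean difference `delta` between two
    arms, given outcome SD `sigma`, two-sided alpha, and target power
    (normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the two-sided test
    z_beta = z(power)            # quantile corresponding to the target power
    return ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)
```

Detecting a difference equal to one standard deviation needs 16 patients per group, but halving the sought-for difference quadruples the requirement to 63, which is consistent with larger trials being needed as more modest improvements are pursued.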
It is conceivable that the search methods missed some eligible studies, but there is no reason to believe that this was more likely in 2007 than 1992. Using the top six impact journals may not be representative of the broader field of anesthesia, and impact factor itself is not necessarily a good surrogate for information that gets absorbed into clinical practice.
The measurable improvement in RCT methodology is laudable. Without doubt, this has been driven by the journals themselves. However, there is a concern that the tail might have wagged the dog: journals demand sound methodology and originality, and researchers respond. Greenfield et al.6,7 bemoan the lack of discussion of Type II error in the majority of reports. However, to reduce this very problem, researchers often study select groups and thus compromise external validity. In addition, study designers frequently choose end points that are statistically convenient or easy to measure rather than potentially more valid end points (such as patient satisfaction, episodes of severe pain, or length of stay). There is rarely discussion of Type I error in studies with positive results. Findings should be confirmed by subsequent studies but rarely are in the major journals. The uptake and impact of new techniques could be assessed prospectively in large observational studies, but these rarely appear in print, if they are performed at all.
The search terms in PubMed were “postoperative” and “pain,” limited by journal, year of publication, and the limits “human” and “clinical trial.” Definitions are available on the PubMed website. The MeSH terms used by PubMed were “postoperative period,” “pain,” and “human,” and all fields including the words “postoperative” or “pain” were also searched. The 1992 search yielded 71 reports. There were 23 exclusions (one nonoperative, two in children, and 20 not about acute postoperative pain). Hand-searching revealed a further three reports, yielding 51 evaluable trials. The 2007 search yielded 90 reports. There were 46 exclusions (four not comparative interventions, eight in children, 27 not about postoperative pain, and seven advance E-publications that were formally published in 2008). Hand-searching revealed a further four reports, yielding 48 evaluable trials.
“Intention-to-treat” analysis is the process by which trial participants are evaluated in the group to which they were assigned even if they received the incorrect treatment.
“Treatment of drop-outs and withdrawals” requires specific statements on any withdrawals from the study, including reasons (such as severe protocol violations or unrelated adverse events) and whether they were excluded in the analysis.
“Allocation concealment” is the process that prevents those recruiting and enrolling participants from foreseeing upcoming group assignments; it is distinct from blinding, which keeps participants, observers, and treating staff unaware of group allocation after assignment.
All of these processes are used in trial design to reduce bias.
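The intention-to-treat principle defined above can be illustrated with a minimal sketch; the data structure and arm names are hypothetical:

```python
from collections import defaultdict

def itt_means(participants):
    """Group outcomes by the ASSIGNED arm, regardless of the treatment actually
    received (intention-to-treat), and return each arm's mean outcome."""
    groups = defaultdict(list)
    for p in participants:
        groups[p["assigned"]].append(p["outcome"])
    return {arm: sum(v) / len(v) for arm, v in groups.items()}

# One participant assigned to epidural crossed over to PCA, but under
# intention-to-treat the outcome still counts in the epidural arm.
trial = [
    {"assigned": "epidural", "received": "epidural", "outcome": 3},
    {"assigned": "epidural", "received": "pca",      "outcome": 5},  # crossover
    {"assigned": "pca",      "received": "pca",      "outcome": 4},
    {"assigned": "pca",      "received": "pca",      "outcome": 6},
]
```

Analyzing by treatment received instead would shift the crossover participant's outcome into the other arm and reintroduce the selection bias that randomization was meant to remove.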
1. Dolin SJ, Cashman JN, Bland JM. Effectiveness of acute postoperative pain management: I. Evidence from published data. Br J Anaesth 2002;89:409–23
2. Practice guidelines for acute pain management in the perioperative setting: a report of the American Society of Anesthesiologists task force on pain management. Anesthesiology 1995;82:1071–81
3. Clinical Standards Advisory Group. Services for patients with pain. London: Department of Health, 1999
5. Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, McQuay HJ. Assessing the quality of reports of randomized clinical trials: is blinding necessary? Control Clin Trials 1996;17:1–12
6. Greenfield ML, Rosenberg AL, O’Reilly M, Shanks AM, Sliwinski MJ, Nauss MD. The quality of randomized controlled trials in major anesthesiology journals. Anesth Analg 2005;100:1759–64
7. Greenfield ML, Mhyre JM, Mashour GA, Blum JM, Yen EC, Rosenberg AL. Improvement in the quality of randomized controlled trials among general anesthesiology journals 2000 to 2006: a 6-year follow-up. Anesth Analg 2009;108:1916–21
8. Pua HL, Lerman J, Crawford MW, Wright JG. An evaluation of the quality of clinical trials in anesthesia. Anesthesiology 2001;95:1068–73
9. Chalmers TC, Smith H Jr, Blackman B, Silverman B, Schroeder B, Reitman D, Ambroz A. A method for assessing the quality of a randomized control trial. Control Clin Trials 1981;2:31–49