The Australian and New Zealand College of Anaesthetists (ANZCA) Trials group oversees research surveys sent to anesthesiologists and trainees in Australia and New Zealand.a As part of this oversight, the ANZCA Trials group aims to improve the quality of survey research.1 Across health care journals, there have been concerns about the methodological integrity, validity, and generalizability of surveys,2–5 including surveys reported in anesthesia journals.6–8 The use of reporting items aims to provide transparency for readers and enhance the replicability of research.9 Previous studies of other types of research published in anesthesia journals, including interventional10 and randomized trials,11 have found inconsistent use of reporting items. To our knowledge, there have been no similar studies of the use of reporting items in surveys published in anesthesia journals. We hypothesized that the use of survey reporting items would be inconsistent in surveys reported in anesthesia journals. To test this hypothesis, we conducted a limited systematic review of anesthesia surveys.
We conducted a systematic review (limited to 6 journals over a 10-year period) of survey reports in anesthesia journals. The Human Research Ethics Committee at Austin Health approved this study. After conducting a literature search,1,2,6,8,12–19 we constructed a 17-item list (Table 1) for survey reporting. We looked for survey reporting items that we thought would provide transparency for readers and enhance replication of the survey. The list included proposals for both minimum required and preferred standards. We also drew on reporting guides for other types of research, including CONSORT (randomized trials), PRISMA (systematic reviews), and STROBE (observational studies).20 This study is reported using the PRISMA guidelines for systematic reviews and meta-analyses.21 Apart from this report, there is no published review protocol, and we did not register the systematic review.21
Our aim was to review a limited, representative, but not exhaustive sample of surveys published in anesthesia journals from North America, the United Kingdom, Australia, and New Zealand. In May 2009, we undertook a MEDLINE (PubMed) search of the title or abstract with the search word survey for each of 6 journals: Anaesthesia, Anaesthesia and Intensive Care, Anesthesia & Analgesia, Anesthesiology, British Journal of Anaesthesia, and Canadian Journal of Anesthesia, for January 2000 to April 2009 (Table 2). Inclusion criteria were studies containing a questionnaire survey (postal, electronic, interviewer-administered, or telephone) reported in full as an original research paper; letters were excluded because of insufficient detail. We did not perform a formal sample size calculation but anticipated at least 100 surveys from our sample. We used our minimum criteria (Table 1), rather than preferred criteria, for each reporting item. Each survey was assessed yes or no for each of the reporting items. We conducted a pilot of the reporting list with 20 surveys and clarified elements of the list before conducting the audit; the list was not changed after the pilot, and all list items were retained. We recorded the survey response rate and defined a low response rate as <60%.17,22 Because we conducted a systematic review of the reporting, rather than of the results and conclusions of the surveys, we did not assess the risk of bias in the surveys.21
Each survey was scored by at least 2 investigators; Table 3 gives a scoring example using one of our papers from outside the review.23 We tested agreement between the 2 primary scorers (V.G. and V.R.) with the percentage agreement and the κ statistic.24 Any disputes were resolved by a third investigator. In the few instances in which the reporting was ambiguous, we assumed that the list item had been reported. We also calculated Cronbach's α as a measure of internal consistency.25 Data were collected in an Excel spreadsheet, and statistical analysis consisted of descriptive statistics, including medians, interquartile ranges, percentages, and confidence intervals, using GraphPad Prism version 4 software (GraphPad Software, San Diego, CA) and Confidence Interval Analysis software (University of Southampton, Southampton, England). We expressed data as absolute values and as proportions with 95% confidence intervals (CIs).
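The agreement statistics described above can be illustrated with a minimal sketch; the scorer data here are hypothetical, not the study's dataset, and this is not the authors' actual software.

```python
# Illustrative sketch of percentage agreement and Cohen's kappa for
# two raters scoring yes/no reporting items (hypothetical data).

def percent_agreement(rater_a, rater_b):
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)  # observed agreement
    # Chance agreement from each rater's marginal "yes" rate
    p_yes_a = sum(rater_a) / n
    p_yes_b = sum(rater_b) / n
    p_e = p_yes_a * p_yes_b + (1 - p_yes_a) * (1 - p_yes_b)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical example: 10 items scored 1 = reported, 0 = not reported
a = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
b = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(percent_agreement(a, b))           # 0.8
print(round(cohens_kappa(a, b), 2))      # 0.52
```

Kappa discounts the agreement expected by chance, which is why it is lower than the raw percentage agreement.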
The initial MEDLINE search identified 347 publications. Of these, we excluded 107, most often because they were not questionnaire surveys (often audits), were reviews, or were letters with insufficient detail. We therefore identified 240 surveys published as original papers in the 6 journals between January 2000 and April 2009. For each of the 240 surveys there were 17 items, giving a total of 4080 items. The 2 scoring investigators disagreed on 185 items (5%), indicating 95% agreement, with a κ statistic of 0.91 (95% CI: 0.90 to 0.92), indicating good to excellent interrater reliability.24
Reporting of our items was inconsistent (Fig. 1); the Cronbach's α was 0.57, indicating poor internal consistency. From the 17-item reporting list, the median number of items reported was 9 (interquartile range [IQR]: 7 to 10; range 2 to 15). The number of surveys reporting specific items ranged from 9 (4%; 95% CI: 2% to 7%) for sample size calculation to 240 (100%; 95% CI: 98% to 100%) for response rate. In addition to sample size calculation, less frequently reported items (Fig. 1) were as follows: stating a hypothesis, 23 of 240 surveys (10%; 95% CI: 7% to 14%); describing survey design, 67 surveys (28%; 95% CI: 23% to 34%); seeking ethics committee approval, 84 surveys (35%; 95% CI: 29% to 41%); reporting confidence intervals, 21 surveys (9%; 95% CI: 6% to 13%); allowing free text, 83 surveys (35%; 95% CI: 29% to 41%); and accounting for nonresponders, 61 surveys (25%; 95% CI: 20% to 31%). More frequently reported items, in addition to response rate (Fig. 1), were the following: why a sample was chosen, 233 surveys (98%; 95% CI: 94% to 99%); conclusions related to the survey, 210 surveys (88%; 95% CI: 83% to 91%); and comparisons with previous studies, 191 surveys (81%; 95% CI: 76% to 86%). The median response rate was 71% (IQR: 57% to 85%; range 11% to 100%). Seventy-four surveys (31%) had a response rate <60%.
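The confidence intervals quoted above can be reproduced with a minimal sketch of a 95% Wilson interval for a proportion; this is our own illustration (the paper used Confidence Interval Analysis software), shown here for the 9 of 240 surveys (4%) reporting a sample size calculation.

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half_width = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return centre - half_width, centre + half_width

low, high = wilson_ci(9, 240)
print(f"{low:.0%} to {high:.0%}")  # 2% to 7%, matching the reported CI
```

The Wilson interval behaves better than the simple normal approximation for proportions near 0% or 100%, such as the 4% and 100% extremes reported here.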
We found that reporting of items from our 17-item list was inconsistent, supporting our hypothesis. Most survey reports described how or why a sample was chosen, but few stated a hypothesis, calculated an appropriate sample size, or provided 95% confidence intervals.
We describe our study as limited, in part, because, in the absence of widely recognized guidelines, we assessed survey reporting against a 17-item list we derived from available survey reporting guidelines that were often created by interested individuals.1,2,6,8,12–19 Most widely recognized reporting guidelines with specific reporting items for other types of research, such as CONSORT for randomized trials, are developed systematically, incorporating relevant evidence and consensus opinion from experts in a particular field, including research methodologists and journal editors.9 Therefore, a weakness of our study is that the validity of our reporting items is unclear. A consensus panel may decide to include more specific requirements for certain items, such as quantitative testing of the reliability of the survey when describing its design, or specific items on facets of e-mail, Internet, or postal surveys. Our finding of poor internal consistency may indicate that some items were inappropriate for assessing survey reporting.
Although concerns have been raised about survey reporting in the medical literature,2,6,7 there is little quantitative evidence, and none that we know of in the anesthesia literature.7 For other types of research, including randomized and other interventional trials, studies in the anesthesia literature have also found that use of reporting items is inconsistent.10,11
We found the 5 least frequently reported items in surveys were stating a hypothesis, describing survey design, calculating a sample size, accounting for nonresponders, and providing confidence intervals for important results. These less frequently reported items are important. As with all research, survey research is based on the scientific method.26 First, a clear hypothesis or research question facilitates survey design, sample type and size, analysis, and discussion.1,9,27 Second, the complexity of designing a good new survey is often underestimated;2,8,15 alternatively, if an existing survey is used, the validity of that survey should be discussed.15 Third, calculating a sample size allows researchers to estimate the likely precision of their results, not only for differences between groups but also for reported proportions; sample size calculations also help avoid surveying too many people. Fourth, nonresponders can bias survey results.2,8,17 Nonresponders may differ from responders in demographics, and bias is particularly likely if they differ in knowledge, opinion, or practice. Fifth, 95% confidence intervals indicate the precision of the reported results, which reflects both the sample size and the response rate.
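The sample size point above can be made concrete with a hedged illustration (our own example, not from the paper): a simple normal-approximation estimate of how many respondents are needed so that the 95% CI around an expected proportion has a given half-width.

```python
import math

def sample_size_for_proportion(expected_p, half_width, z=1.96):
    # Normal-approximation formula: n = z^2 * p(1 - p) / d^2,
    # where d is the desired half-width of the 95% CI.
    return math.ceil(z**2 * expected_p * (1 - expected_p) / half_width**2)

# Worst case (expected proportion 0.5) with +/-5% precision:
print(sample_size_for_proportion(0.5, 0.05))  # 385
```

Because p(1 − p) is largest at p = 0.5, assuming a 50% proportion gives a conservative (largest) sample size; the required n shrinks for proportions nearer 0% or 100%.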
Our review has limitations in addition to the validity of our reporting items. One limitation is that, although we included our preferred items (Table 1) to indicate what we think should be reported in survey research, we used our minimum standards to assess the surveys. Some may argue that allocating items to a minimum (rather than preferred) standard (Table 1) is inadequate and may bias our results towards better survey reporting. We used the minimum standards for several reasons: first, our pilot work suggested that few, if any, surveys would report some items in our preferred manner; second, the detailed preferred items would often be difficult to assess in a yes/no fashion, undermining interrater agreement, which was good to excellent using minimum standards; and third, we thought that a consensus panel might decide on different or less stringent versions of these items. Another limitation is that we included only 6 of the 76 anesthesia journals (in English and other languages) listed in PubMed,b which may bias our sample, again possibly towards better survey reporting. A further limitation is that we used only one search strategy: we did not use other databases, nor did we contact individual authors for unpublished surveys.21 Finally, our search may have missed surveys that were included in the body of a manuscript but not in the title or abstract.
From a limited systematic review, we conclude that the use of survey reporting items from our list is inconsistent in the anesthesia literature. Collectively, the 6 anesthesia journals published 240 surveys over the 9.3-year search period, approximately 26 per year, indicating the popularity of survey research in anesthesia. Surveys allow quantitative assessment of knowledge, opinion, and practice, which is often important in its own right or as pilot work for interventional trials. Inconsistent use of reporting items may compromise the transparency and reproducibility of survey reports. We believe strategies are required to assist researchers in reporting surveys.
Name: David A. Story, MBBS, MD, BMedSci, FANZCA.
Contribution: Study design, conduct of study, data analysis, responsibility for data archive, and manuscript preparation.
Attestation: Data and analysis.
Name: Veronica Gin, MBChB.
Contribution: Study design, conduct of study, data analysis, and manuscript preparation.
Name: Vanida na Ranong, MBBS, FRCA.
Contribution: Conduct of study and manuscript preparation.
Name: Stephanie Poustie, BN, Crit. Care Cert., MPH.
Contribution: Study design and manuscript preparation.
Name: Daryl Jones, MBBS, FRACP, FCICM, MD.
Contribution: Study design, conduct of study, data analysis, and manuscript preparation.
Attestation: Data and analysis.
a ANZCA Trials group. Survey research. http://www.anzca.edu.au/resources/trials-group/survey-research.html. Accessed February 8, 2011.
b National Library of Medicine. PubMed: Journals, subject: anesthesiology. http://www.ncbi.nlm.nih.gov/journals. Accessed November 24, 2010.
1. Jones D, Story D, Clavisi O, Jones R, Peyton P. An introductory guide to survey research in anaesthesia. Anaesth Intensive Care 2006;34:245–53
2. Draugalis JR, Coons SJ, Plaza CM. Best practices for survey research reports: a synopsis for authors and reviewers. Am J Pharm Educ 2008;72:11
3. Thorpe C, Ryan B, McLean SL, Burt A, Stewart M, Brown JB, Reid GJ, Harris S. How to obtain excellent response rates when surveying physicians. Fam Pract 2009;26:65–8
4. Krosnick JA. Survey research. Annu Rev Psychol 1999;50:537–67
5. Sprague S, Quigley L, Bhandari M. Survey design in orthopaedic surgery: getting surgeons to respond. J Bone Joint Surg Am 2009;91(Suppl 3):27–34
6. Bruce J, Chambers WA. Questionnaire surveys. Anaesthesia 2002;57:1049–51
7. Gibbs NM. Surveys, drug names, surgical airways and safety. Anaesth Intensive Care 2009;37:523–4
8. Burmeister LF. Principles of successful sample surveys. Anesthesiology 2003;99:1251–2
9. Simera I, Moher D, Hirst A, Hoey J, Schulz KF, Altman DG. Transparent and accurate reporting increases reliability, utility, and impact of your research: reporting guidelines and the EQUATOR Network. BMC Med 2010;8:24
10. Langford RA, Huang GH, Leslie K. Quality and reporting of trial design in scientific papers in Anaesthesia over 25 years. Anaesthesia 2009;64:60–4
11. Greenfield ML, Mhyre JM, Mashour GA, Blum JM, Yen EC, Rosenberg AL. Improvement in the quality of randomized controlled trials among general anesthesiology journals 2000 to 2006: a 6-year follow-up. Anesth Analg 2009;108:1916–21
12. Aday LA, Cornelius LJ. Designing and Conducting Health Surveys: A Comprehensive Guide. 3rd ed. San Francisco: Jossey-Bass, 2006
13. Boynton PM. Administering, analysing, and reporting your questionnaire. BMJ 2004;328:1372–5
14. Boynton PM, Greenhalgh T. Selecting, designing, and developing your questionnaire. BMJ 2004;328:1312–5
15. Burns KE, Duffett M, Kho ME, Meade MO, Adhikari NK, Sinuff T, Cook DJ. A guide for the design and conduct of self-administered surveys of clinicians. CMAJ 2008;179:245–52
16. Greenhalgh T. How to Read a Paper. 3rd ed. London: Blackwell Publishing, 2006
17. Kelley K, Clark B, Brown V, Sitzia J. Good practice in the conduct and reporting of survey research. Int J Qual Health Care 2003;15:261–6
18. Veale B. Questionnaire design and surveys. Aust Fam Physician 1998;27:499–502
19. Jackson C, Furnham A. Designing and Analysing Questionnaires and Surveys: A Manual for Health Professionals and Administrators. London: John Wiley and Son, 2000
20. Simera I, Moher D, Hoey J, Schulz KF, Altman DG. A catalogue of reporting guidelines for health research. Eur J Clin Invest 2010;40:35–53
21. Liberati A, Altman DG, Tetzlaff J, Mulrow C, Gotzsche PC, Ioannidis JP, Clarke M, Devereaux PJ, Kleijnen J, Moher D. The PRISMA statement for reporting systematic reviews and meta-analyses of studies that evaluate health care interventions: explanation and elaboration. Ann Intern Med 2009;151:W65–94
22. Draugalis JR, Plaza CM. Best practices for survey research reports revisited: implications of target population, probability sampling, and response rate. Am J Pharm Educ 2009;73:142
23. Van Essen GL, Story DA, Poustie SJ, Griffiths MM, Marwood CL. Natural justice and human research ethics committees: an Australia-wide survey. Med J Aust 2004;180:63–6
24. Myles PS, Gin T. Statistical Methods for Anaesthesia and Intensive Care. Oxford: Butterworth–Heinemann, 2000:80–1
25. Bland JM, Altman DG. Cronbach's alpha. BMJ 1997;314:572
26. Boissel JP. Planning of clinical trials. J Intern Med 2004;255:427–38
27. von Elm E, Altman DG, Egger M, Pocock SJ, Gotzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Lancet 2007;370:1453–7
28. Moher D, Hopewell S, Schulz KF, Montori V, Gotzsche PC, Devereaux PJ, Elbourne D, Egger M, Altman DG. CONSORT 2010 explanation and elaboration: updated guidelines for reporting parallel group randomised trials. BMJ 2010;340:c869