Objective: Adequate reporting of surveys is needed to judge their methodologic quality and risk of bias. The objective of this study was to describe the methodology and quality of reporting of surveys published in five critical care journals.
Data Sources: All issues (1996–2009) of the American Journal of Respiratory and Critical Care Medicine, Critical Care, Critical Care Medicine, Intensive Care Medicine, and Pediatric Critical Care Medicine.
Study Selection: Two reviewers hand-searched all issues in duplicate. We included publications of self-administered questionnaires completed by health professionals and excluded surveys that were part of a multi-method study or that measured the effect of an intervention.
Data Extraction: Data were abstracted in duplicate.
Data Synthesis: We included 151 surveys. The frequency of survey publication increased at an average rate of 0.38 surveys per 1000 citations per year from 1996–2009 (p for trend = 0.001). The median number of respondents and the median reported response rate were 217 (interquartile range 90 to 402) and 63.3% (interquartile range 45.0% to 81.0%), respectively. Surveys originated predominantly from North America (United States [40.4%] and Canada [18.5%]). Surveys most frequently examined stated practice (78.8%) and attitudes or opinions (60.3%), and less frequently knowledge (9.9%). The frequencies of reporting on survey design and methods were: 1) instrument development: domains (59.1%), item generation (33.1%), item reduction (12.6%); 2) instrument testing: pretesting or pilot testing (36.2%) and assessments of clarity (25.2%) or clinical sensibility (15.7%); and 3) clinimetric properties: qualitative or quantitative description of at least one of face, content, or construct validity, intra- or inter-rater reliability, or consistency (28.5%). The reporting of five key elements of survey design and conduct did not change significantly over time.
Conclusions: Surveys, primarily conducted in North America and focused on self-reported practice, are increasingly published in highly cited critical care journals. More uniform and comprehensive reporting will facilitate assessment of methodologic quality.
From the Departments of Pediatrics (MD, KC), Medicine (DMA, MOM, DJC), Clinical Epidemiology and Biostatistics (MOM, QZ, DJC), and Critical Care (OH), McMaster University, Hamilton, Ontario, Canada; St Michael's Hospital (KEB), Toronto, Ontario, Canada; Department of Critical Care Medicine (NKA), Sunnybrook Health Sciences Centre and University of Toronto, Toronto, Ontario, Canada; Centre de recherche FRSQ du Centre hospitalier affilié universitaire de Québec (F. Lauzier), Quebec, Canada; Department of Physical Medicine and Rehabilitation (MEK), Johns Hopkins University, Baltimore, MD; Department of Medicine and Division of Critical Care (KK), University of Western Ontario, London, Ontario, Canada; Centre de recherche clinique Étienne-Le Bel (F. Lamontagne), University of Sherbrooke, Sherbrooke, Quebec, Canada.
* See also p. 666.
Supported, in part, by a grant from the Canadian Intensive Care Foundation.
Dr. Arnold is funded by a New Investigator Award from the Canadian Institutes of Health Research in partnership with Hoffmann-LaRoche. Dr. Lauzier is a recipient of a research career award from the Fonds de la Recherche en Santé du Québec. Dr. Kho is funded by a Fellowship Award and Bisby Prize from the Canadian Institutes of Health Research. Dr. Meade is a Research Mentor of the Canadian Institutes of Health Research. Dr. Cook is a Research Chair of the Canadian Institutes of Health Research. The remaining authors have not disclosed any potential conflicts of interest.
For information regarding this article, E-mail: email@example.com