The complexity of studying peer review explains in part the recent finding that its efficacy and effectiveness cannot easily be shown.1 The measurement of the quality of the editorial review has proven elusive.2,3 Yet the quality of editorial commentary and management of the review process contribute to a journal's ability to attract outstanding manuscripts and thus address the goals of attracting readership and enhancing journal impact.4,5 Attempts to improve review processes have included structured educational modules, written feedback or scoring from editors, checklists for reviewers, and blinding reviewers to authors. However, evidence that such measures succeed in improving quality of reviews is sparse.6–10 Feedback from authors of submitted articles is a source of evaluation that might have usefulness in improving the peer review process, but has received little study.11,12
The purpose of this study was to determine whether ratings provided by authors distinguish among aspects of manuscript review and whether such ratings are affected by attributes of authors. To these ends, our analysis sought to determine 1) whether author characteristics, including the fate of their manuscript and country of origin, affect assessment of reviews, 2) whether author ratings differentiate between journal review processes (ease of submission, turnaround time, and journal staff support) and the content of reviewer comments and suggestions, and 3) whether authors provided different ratings of comments and suggestions for the several reviewers providing commentary on their manuscripts.
MATERIALS AND METHODS
Corresponding authors of all original research manuscripts submitted to Obstetrics & Gynecology from May 19, 2005, through November 6, 2005, were invited to participate. All manuscripts submitted during the study period were submitted electronically and received full review by at least two reviewers, including one of the 16 members of the journal's editorial board and one or two expert reviewers selected from the specialty at large. The participation of one of the 16 editorial board members as a reviewer of each submission permitted comparison of evaluations from multiple authors for each of these reviewers.
Authors were notified of the opportunity to participate in a survey regarding their review experience by an e-mail letter upon receipt of the manuscript by the journal office. This letter specifically assured them of anonymity in the analysis of surveys received. The survey instrument was e-mailed to authors immediately after their initial notification of either manuscript rejection or invitation to revise their manuscript. Manuscripts returned for revision are termed “accepted” in this analysis. Authors not responding to the initial invitation to complete the survey were reinvited to do so after 1 month. No limit was placed on the time taken to respond for inclusion in the study.
Returned surveys were made anonymous with assignment of a study number by journal staff, and data were abstracted into an electronic database containing subject number, manuscript fate, country of origin of first author, and responses to all items. Only one author (S.M.) had access to a coded list linking author manuscripts to study number, and it was not necessary to access this list for any portion of analysis of study results. We hypothesized that authors of accepted manuscripts would regard their reviews and review experience more highly. Our intent was to sample 400 authors, estimating a 50% response rate, to achieve an 80% likelihood of detecting a 20% difference in percentage of favorable responses between authors of accepted compared with rejected manuscripts.
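The sample-size reasoning above can be sketched with the standard two-proportion formula. The baseline favorable-response proportion (50% here) is an illustrative assumption, since the text specifies only a 20-percentage-point detectable difference, 80% power, and an anticipated 50% response rate.

```python
import math
from statistics import NormalDist  # standard normal quantiles, no external dependencies

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Per-group sample size for a two-sided test comparing two proportions."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)
    z_b = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    num = (z_a * math.sqrt(2 * p_bar * (1 - p_bar))
           + z_b * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return math.ceil(num / (p1 - p2) ** 2)

# Assumed baseline of 50% favorable vs. 70% (a 20-percentage-point difference)
n = n_per_group(0.50, 0.70)        # 93 responders per group
invited = math.ceil(2 * n / 0.50)  # 372 invitations at a 50% response rate
```

Under these illustrative assumptions, the formula calls for roughly 93 responders per arm, or about 372 invitations at a 50% response rate, consistent with the stated target of approximately 400 authors.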
The survey elicited feedback from the first author on 11 items (see the Appendix online at www.greenjournal.org/cgi/content/full/112/3/646/DC1). Six of these items obtained information from the author regarding content of editorial review for each reviewer individually. For these six questions, we used a five-point Likert scale for authors to rate the accuracy of reviewer comments; the degree to which the review content was informed, impartial, and important; respectfulness of the reviewer; and the extent to which comments were constructive. The same five-point Likert scale was used for authors' ratings of three aspects of review process: ease of submission, turnaround time, and support from journal staff. To achieve a useful summary of responses, analysis used a dichotomous variable for favorable response defined as a score of 4 or 5 of 5 on the Likert scale. Two summary questions sought dichotomous responses: one for overall satisfaction with the review (yes or no response) and one asking whether the review experience encouraged future submissions to the journal (more likely or less likely). Ratings of content of editorial review by participating authors were compared with numeric grades of reviews given by one of three senior journal editors according to a standard 0–5 scoring system.13 This study was approved by the Institutional Review Board for Protection of Human Subjects at the University of Utah.
Differences in ratings according to author, manuscript disposition, and country of origin were compared using χ2 tests for the proportion of favorable responses, as defined above, as were differences in responses to items in the survey. Correlations between responses to different groups of reviewers were calculated using Spearman's ρ. Logistic regression was used to evaluate factors affecting likelihood of response to the survey and to examine whether differences in aggregate ratings for individual reviewers were attributable to ratings given other reviewers of the same manuscripts.
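The two main analyses named above (a chi-square comparison of favorable-rating proportions and Spearman rank correlation between reviewers' ratings) can be sketched with the standard library alone; the counts and ratings below are hypothetical, not study data.

```python
import math
from statistics import NormalDist

def chi2_2x2(a, b, c, d):
    """Pearson chi-square for a 2x2 table [[a, b], [c, d]] (1 degree of freedom)."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # With 1 df, chi-square is the square of a standard normal deviate
    p = 2 * (1 - NormalDist().cdf(math.sqrt(chi2)))
    return chi2, p

def spearman_rho(x, y):
    """Spearman's rho via the rank-difference formula (assumes no tied values)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical favorable/unfavorable counts by manuscript disposition
chi2, p = chi2_2x2(70, 16, 75, 55)  # accepted row, rejected row
# Hypothetical paired ratings from two reviewers of the same manuscripts
rho = spearman_rho([1, 2, 3, 4, 5], [1, 3, 2, 4, 5])
```

The tie-free Spearman formula is shown for brevity; with tied Likert scores (as in real rating data), the rho computed on mid-ranks, as statistical packages do, would be the appropriate variant.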
RESULTS
Invitations to participate in the survey were sent to 445 consecutive corresponding authors submitting original research manuscripts to Obstetrics & Gynecology. All corresponding authors during this period had submitted only one manuscript. The survey was completed by 216 authors, resulting in an overall response rate of 49% among those invited to participate. The manuscript acceptance rate for all invited authors was 27%. Two hundred forty (54%) of invited authors were from the United States, and 257 (58%) of the manuscripts dealt with obstetric as opposed to gynecologic or other topics (Table 1). The median interval from manuscript submission to the time the reviews were completed and the authors informed of manuscript disposition during this period was 31 days (range 1–138 days).
In a multivariable logistic regression analysis of the association of four factors with likelihood of response to the survey (disposition, country of origin, date of submission, and obstetric or gynecologic subject matter) only manuscript disposition was associated with response to the survey (Table 2). Authors of rejected manuscripts were significantly less likely to complete the survey than authors of accepted manuscripts (odds ratio 0.25, 95% confidence interval 0.16–0.40). Nevertheless, the majority of respondents were authors of manuscripts that were rejected (60.2%).
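The odds ratio and confidence interval reported above follow from the fitted logistic regression coefficient by exponentiation. A minimal sketch, with the coefficient and standard error chosen illustratively to roughly reproduce the reported OR 0.25 (95% CI 0.16–0.40):

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Odds ratio and 95% confidence interval from a logistic regression coefficient."""
    return (math.exp(beta),
            math.exp(beta - z * se),
            math.exp(beta + z * se))

# Illustrative values approximating the reported OR 0.25 (0.16-0.40)
or_, lo, hi = odds_ratio_ci(beta=math.log(0.25), se=0.234)
```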
Overall satisfaction with the review experience was indicated by 88% of responding authors, and 78% indicated that, based on their review experience, they would be likely to submit a manuscript in the future. Favorable ratings for the review processes (ease of submission, turnaround time for review, and journal office support) were given by 92%, 93%, and 78% of respondents, respectively. The percentage of respondents expressing overall satisfaction with the review of their manuscript and willingness to submit future manuscripts was greater among authors whose manuscripts were accepted than among authors whose manuscripts were rejected (Fig. 1). Authors of rejected manuscripts did not differ from authors of accepted manuscripts with respect to favorable ratings for ease of submission and review turnaround time, but were less frequently satisfied with support from journal staff (Fig. 1).
The percentages of authors expressing overall satisfaction with the review process and willingness to submit future manuscripts were similar for authors of U.S. origin and authors of non-U.S. origin (87% compared with 89% and 76% compared with 87%, respectively). Authors of U.S. and non-U.S. origin also reported similar percentages of favorable ratings for ease of submission, turnaround time, and journal staff support.
Authors were more satisfied with review processes than they were with reviewer critiques and suggestions. The aggregate percentage of favorable ratings was 88% for the three aspects of review process and 69% for the six elements of reviewer commentary (P<.001). Authors' ratings also distinguished among elements of review content; the proportions of favorable ratings for the six areas of review content differed significantly (P=.001). Favorable ratings were most often given for accuracy (73%) and respectfulness (73%) of reviews and least often given for importance of reviewers' comments (63%). Authors of rejected manuscripts gave favorable ratings less often for each of the six aspects of review content than authors whose manuscripts were accepted (Fig. 2). Only 58% of ratings for editorial comments from authors of rejected manuscripts were favorable, compared with 83% from authors of accepted manuscripts (P<.001). The percentage of favorable ratings for each of the six elements of review content given by authors of rejected manuscripts averaged 25 percentage points less than that given by authors whose manuscripts were accepted. The greatest differences in percentage of favorable ratings according to manuscript fate were for respectfulness and impartiality of reviews (33 and 31 percentage points), and the smallest difference was seen for constructiveness (18 percentage points). Among respondents whose manuscripts were rejected, the highest ratings were given for accuracy (63%) and constructiveness (73%) of reviewer comments. As was the case for authors of accepted manuscripts, authors of rejected manuscripts gave the lowest percentage of favorable ratings for importance of reviewers' comments (54%).
The proportion of favorable ratings for each of the six aspects of reviewer comments was slightly higher for authors of non-U.S. origin than for authors of U.S. origin, as was the overall proportion of favorable ratings (72% compared with 67%, P=.001).
Because all manuscripts were reviewed by one of the editorial board members, there were multiple author ratings (mean 15.2, range 8–22) for each of these 16 reviewers. There was wide variation in favorable ratings for individual board members (aggregate for all six review content areas), ranging from 50% to 86% (Fig. 3). The difference in aggregate scores between the highest- and lowest-rated board members was statistically significant (P=.047). The ratings given an editorial board member for all six content areas were correlated with the scores given their companion expert reviewers (Spearman's ρ 0.63), with the lowest correlation found for ratings of accuracy of comments (Spearman's ρ –0.049). Logistic regression found that the percentages of favorable ratings given editorial board members were not statistically significantly different after adjustment for the percentages of favorable ratings given the expert reviewers who reviewed the same manuscripts.
One of the three senior editors rated each review provided by an editorial board member, assigning a score of 1 through 5 according to written objective criteria for editors' scoring of reviews.13 Correlations between authors' six content ratings of reviewers and the editor's score for the review were weak for both accepted manuscripts (Spearman's ρ 0.0784) and rejected manuscripts (Spearman's ρ –0.0593).
DISCUSSION
We found that manuscript fate is an important determinant of authors' overall satisfaction with both processes for manuscript review and content of editorial comments, especially the latter. In addition, authors as a whole distinguish between processes associated with review such as submission, turnaround, and handling by journal staff, and content of the written commentary by reviewers. Authors were more enthusiastic about handling of the review than the comments given by the reviewers. We also found that authors distinguish among aspects of written review content. Authors were most critical of the “importance” of reviewers' comments whether their manuscript was accepted or not. This suggests that author satisfaction (and review quality itself) is affected by the degree to which authors felt that reviewer commentary specifically and clearly addressed the central issues of the objectives, methods, findings, and interpretation presented in submitted manuscripts.
Finally, authors' responses distinguish between the content of comments from different reviewers. The correlation between ratings for expert reviewers and editorial board reviewers was relatively low (0.63), indicating considerable independence in authors' ratings of content from different reviewers.
Our study has several strengths. Most importantly, all manuscripts submitted to Obstetrics & Gynecology during this period received full editorial review, which makes this survey comprehensive in describing a response from all submitting authors to a single journal. Further, our survey instrument allowed distinction between authors' perceptions of processes and content of review. The finding that authors of rejected manuscripts rate processes similarly to authors of accepted manuscripts, but are more critical of review commentary, validates using such a distinction in surveying author feedback. Finally, because a single cohort of reviewers (editorial board members) participated in the review of all manuscripts, the data allow conclusions about the meaning of the aggregate of several authors' ratings as they apply to an individual reviewer.
The authors who declined to participate in this study were more often authors whose papers were rejected. Nevertheless, because the majority of respondents to this survey were authors of rejected manuscripts, we were able to assess the nature of responses from those authors. In addition, we believe analyses of relationships between responses to different elements of the survey, and between reviewers, are less likely to be sensitive to response rate than raw satisfaction scores. The large number of items in the survey is likely to have selected for participants willing to give time and thought to their responses.
There are few precedents in the literature that allow for comparison of our results. Weber et al12 also surveyed authors submitting to a major specialty journal. Overall satisfaction with review and stated likelihood of future submission seem to have been higher in our study. This may reflect the fact that all authors submitting articles to Obstetrics & Gynecology, and all authors surveyed, receive full editorial review. Like Weber et al, we found that manuscript fate affected authors' satisfaction with content of editorial comments, but in their study, manuscript fate also affected authors' satisfaction with review processes, whereas in ours it did not. Our findings agree with those of Weber et al with respect to a lack of correlation between authors' and editors' scoring of editorial commentary by reviewers. Another survey study did not find that authors' assessment of reviews differed according to manuscript fate, but it was smaller and used a questionnaire with fewer items and a different rating scale.14
The three stakeholders in the scientific peer review process are authors, the scientific community at large, and journals. Our data provide insight into the benefit these three stakeholders might derive from implementation of a process for authors to provide feedback on their peer review experience.
Our data do not tell us whether the exercise of rating reviews is helpful to authors, but they do suggest that authors will respond to the opportunity to participate in closing the loop of critical commentary on their work. Our survey response shows that many authors, even those whose manuscripts were rejected, will take the time to return a detailed rating instrument.
For the scientific community at large, our findings suggest that author feedback regarding editorial review can promote greater accountability among journals for the quality of their review processes and can add a perspective to the evaluation of commentary provided by reviewers. Reviewers might note especially that authors most often find fault with the degree to which reviews address the important or central features of the manuscripts they review.
Journals themselves have the greatest interest in the meaning of these results. Our results show that feedback from authors distinguishes among the many elements of the review process and can provide information that can be used to target areas for improvement. Authors do distinguish among reviewers as well, and although these ratings are colored by manuscript fate, they provide evaluations that are different from those of editors. Thus, our data show clearly that editors should take manuscript fate into account when using authors' ratings to assess reviewers. It is important to note, however, that manuscript fate is not determined solely by manuscript quality; timeliness, interest to readership, novelty, and other factors also shape editorial decisions for publication.
REFERENCES
1. Jefferson T, Rudin M, Brodney Folse S, Davidoff F. Editorial peer review for improving the quality of reports of biomedical studies. Cochrane Database Syst Rev 2007, Issue 2. Art. No.: MR000016. DOI: 10.1002/14651858.MR000016.pub3.
2. Jefferson T, Wager E, Davidoff F. Measuring the quality of editorial peer review. JAMA 2002;287:2786–90.
3. Callaham ML, Baxt WG, Waeckerle JF, Wears RL. Reliability of editors' subjective quality ratings of peer reviews of manuscripts. JAMA 1998;280:229–31.
4. Garfield E. The history and meaning of the journal impact factor. JAMA 2006;295:90–3.
5. Callaham M, Wears RL, Weber E. Journal prestige, publication bias, and other characteristics associated with citation of published studies in peer-reviewed journals. JAMA 2002;287:2847–50.
6. van Rooyen S, Godlee F, Evans S, Smith R, Black N. Effect of blinding and unmasking on the quality of peer review: a randomized trial. JAMA 1998;280:234–7.
7. Justice AC, Cho MK, Winker MA, Berlin JA, Rennie D. Does masking author identity improve peer review quality? A randomized controlled trial. PEER Investigators. JAMA 1998;280:240–2.
8. Callaham ML, Knopp RK, Gallagher EJ. Effect of written feedback by editors on quality of reviews: two randomized trials. JAMA 2002;287:2781–3.
9. Schroter S, Black N, Evans S, Carpenter J, Godlee F, Smith R. Effects of training on quality of peer review: randomised controlled trial. BMJ 2004;328:673.
10. Callaham ML, Wears RL, Waeckerle JF. Effect of attendance at a training session on peer reviewer quality and performance. Ann Emerg Med 1998;32:318–22.
11. Korngreen A. Peer-review system could gain from author feedback. Nature 2005;438:282.
12. Weber EJ, Katz PP, Waeckerle JF, Callaham ML. Author perception of peer review: impact of review quality and acceptance on satisfaction. JAMA 2002;287:2790–3.
13. Landkroon AP, Euser AM, Veeken H, Hart W, Overbeke AJ. Quality assessment of reviewers' reports using a simple instrument. Obstet Gynecol 2006;108:979–85.
14. Garfunkel JM, Lawson EE, Hamrick HJ, Ulshen MH. Effect of acceptance or rejection on the author's evaluation of peer review of medical manuscripts. JAMA 1990;263:1376–8.
© 2008 by The American College of Obstetricians and Gynecologists.