A few days ago we received an e-mail from an unhappy author whose submission had been rejected after review. We get such letters from time to time. Sometimes the authors request more information or the possibility of a change in the decision. Sometimes they raise questions about the quality of the reviews, the possibility of bias, the qualifications of a reviewer, or about the review process itself. Some push the bounds of professional behavior, or even become sarcastic—for example, “Rejection of this manuscript could lead to the fall of civilization.” Clearly, for authors the review process can be a deeply personal experience, and as editors we never make a rejection decision lightly.
An average of five articles per day are submitted to Academic Medicine, but we have room in the journal to publish only one of the five. How should we weigh the relative strengths and weaknesses of these submissions? Should we accept the article with the strongest method and analysis, even though it has limited applicability or likely impact? Or should we choose the article that could positively influence how we educate students or organize medical schools but that has methodological flaws? Or the one that is well written and will be enjoyed by many readers but doesn’t add anything new to the literature? What other criteria should guide us?
We hope that all our accepted articles will be well written, interesting, important, and methodologically sound. But virtually every one has flaws, and selecting which of them to accept involves tough judgment calls. This is where peer review comes in—an imprecise but essential process, dependent upon the efforts and expertise of a select group of individuals with diverse backgrounds and experience. A number of articles in this issue of Academic Medicine relate to the peer-review process and, specifically, to the definitions and standards for medical education scholarship. In this editorial we will make connections to these articles as we share some of our observations about peer review as it relates to Academic Medicine, our efforts to strengthen it, and some ideas we have about the future of peer review.
One of the most important areas of focus for the journal is scholarship in medical education. This area continues to broaden in scope, methods, and methodologies, increasingly drawing from the social sciences and encompassing new areas such as quality improvement, care delivery, and implementation science. To optimize the journal’s effectiveness as a vehicle for curating and disseminating this evolving body of scholarship, we have made changes. We created Innovation Reports as a new article type to share novel programs or ideas without having to wait for the long-term results that a traditional Research Report would require. We also introduced New Conversations to provide a forum for examining timely issues and their effects on medical education, such as the Affordable Care Act and other topics related to health reform. Further, we continue to expand popular features such as AM Last Page, which provides concise presentations of medical education and related topics in a unique graphical style.
We also have supplemented our more traditional publishing activities with interviews, podcasts, and blog posts to foster more active engagement among our readers, authors, and the broader community. Finally, we publish articles online ahead of print to reduce delays that can occur due to limitations of space in our print edition.
In addition to these changes in the journal, we recognize that opportunities for scholarly activity and review of scholarship are evolving and expanding, which may ultimately influence our peer-review process. In this issue of Academic Medicine, Azzam et al1 describe a medical school course that introduces a novel scholarly activity focused on peer review and improvement of articles in Wikipedia. In this course, medical students select a health care article published in Wikipedia, analyze its presentation and references, and then insert new material and make changes to improve the article’s accuracy, grammar, or other elements. While this is not a traditional scholarly activity and its impact is difficult to judge, the students found the activity valuable, and assessments of the revised articles by experts suggested that the students’ input generally improved the articles’ quality. Wikipedia’s nontraditional approach to peer review has wide appeal, but it challenges the basic tenets of academic authorship: accountability and credit for scholarly work.
Also in this issue, Roberts2 discusses collaborative writing in traditional peer-reviewed literature and provides guidance for determining who should be included as an author, who should not be included as an author, and the order of authors. While the peer-review process does not typically investigate whether all authors meet the criteria for authorship, it does sometimes identify concerns about fraud, plagiarism, or duplicate publishing that can have significant impact on publication decisions. Application of authorship guidelines to collaborative efforts like Wikipedia will require further discussion as these activities become more common.
In this issue, Packer et al3 discuss another area of scholarship: case reports. They assert that case reports encourage critical thinking, help students develop writing skills, and fulfill scholarly concentration requirements. The student coauthors of this report provide insight into the value of a case report. One student, when faced with choosing a case to report, noted:
My patients did not change but my perspective did. When pressed to pay closer attention, I soon realized that I could learn something from every patient no matter how mundane (or even nonmedical) their problems.
While case reports are not considered to provide a high level of evidence to guide medical decision making, they are easily understandable to practicing clinicians, increase awareness of new or unusual presentations or complications of diseases, and offer the authors an opportunity to deepen their own learning—thus providing a useful interface between education and scholarship. We intend to examine ways to capture some of the benefits of case reports in our journal through expanded use of letters and blog posts.
Publication of original content on blogs, particularly by students and residents, has provided another avenue for reconsidering the peer-review process. Sidalak et al4 describe a process of coached peer review in which the assumption is that all submissions will eventually be published; authors agree to work with assigned coaches, who help them improve their submissions until they are acceptable for publication. The identities of peer coaches are revealed to the authors, and discussion of the revision process is encouraged. Strengths of the coached peer-review process included efficiency, collegiality, improved quality, and transparency. While a coached process is not usually possible at traditional journals, where the number of submissions greatly exceeds the capacity to review and publish, many aspects of the process described by Sidalak et al—such as collegiality and transparency—could be incorporated into many peer-review systems.
Another possible improvement in the peer-review process was discovered unexpectedly as part of a workshop for peer reviewers hosted by Academic Medicine and described by Dumenco et al.5 As participants in the workshop, each of the authors had reviewed a manuscript separately, then all had conferred as a group to develop consensus recommendations. They realized that
the ways in which we would have reviewed the manuscript as individuals were very different and certainly not as robust as the way in which we reviewed the manuscript as a group. Because each contributor in our group had a different area of expertise, the group review process elicited many more factors than we would have considered on our own.
They suggest the exploration of a team-based approach to peer review as a methodology with numerous benefits for individual authors and health professions scholarship.
In addition to strengthening the peer-review process by training reviewers, the quality of reviews can be enhanced by clarifying standard expectations for various methods and methodologies. When there is agreement about which key elements should be included in an article that uses a specific methodological approach, the review process can be applied more consistently. Waggoner et al6 recently published an article in which they examined consensus methodology and recommended reporting requirements. Studies using various consensus methods can provide useful guidance when evidence-based studies are limited or in conflict, but a clear and defensible description of the consensus process is important so that readers can evaluate the recommendations that come from it.
The same is true for the growing area of qualitative research. O’Brien et al7 have published recommendations for reporting qualitative research that provide a framework that can help authors and reviewers. In this issue, Phillips et al8 consider survey research, focusing on response rates for survey-related articles published in three medical education journals. Surprisingly, only 63% of the 73 surveys they included reported the response rate, and none of the investigators analyzed nonresponse bias. While this article did not suggest a minimum response rate threshold for publishing survey studies, it highlights the importance of this element of survey research, which should be helpful to peer reviewers and to readers. (See also this issue’s AM Last Page,9 which focuses on response rates.) To acknowledge the unique circumstances of a research study, we recognize the need for some flexibility. But guidelines that establish expectations about the parameters that should be reported for each type of methodology can be of great value in improving the peer-review process and in developing feedback and recommendations for authors.
In an effort to better explain and standardize the review process, the journal recently published the second edition of Review Criteria for Research Manuscripts,10 edited by Durning and Carline. This book provides specific guidance to reviewers about the various categories of manuscripts and the various parts of a manuscript. While this resource, and the guidelines developed for various types of articles, will not prevent a weak review, they can provide important guidance for new peer reviewers and for experienced reviewers confronting particularly challenging manuscripts.
As peer review evolves, we at Academic Medicine will continue to evaluate our approach to it and will also continue efforts to educate our reviewers and community of scholars. We believe that the peer-review process not only helps elevate the level of published scholarship but also advances reviewers’ own professional development as they help their colleagues. We hope that peer review will be increasingly valued by academic promotions committees, not just as a sign of good citizenship but also as a scholarly activity. We deeply appreciate the time and effort of peer reviewers, and, when reviewers request it, we are glad to share the thank-you letters we send to reviewers with department chairs and others. In addition, each year the journal gives a limited number of awards for Excellence in Reviewing. We also offer continuing medical education credits to reviewers who send us good-quality reviews.
Continuing efforts are also directed at improving the author experience. We have shortened the time of the “in-house” review process so that the initial editorial decision about whether an article will be sent out for review is typically made within one week of submission; every other step in the review process has also been substantially shortened. Our staff editors provide extensive support to authors, at a level that we believe is second to none.
As Academic Medicine celebrates its 90th anniversary, we look forward to continually refining how we use the peer-review process to provide the most useful feedback to our authors and the best articles for our community. We welcome comments and suggestions about our peer-review process, article types, workshops, blogs, and other initiatives from reviewers, authors, and all other readers. We look forward to the next 90 years of Academic Medicine’s contributions to the growth, not the fall, of civilization.
David P. Sklar, MD
Steven J. Durning, MD, PhD
Deputy editor for research
Jan D. Carline, PhD
Debra Weinstein, MD
1. Azzam A, Bresler D, Leon A, et al. Why medical schools should embrace Wikipedia: Final-year medical student contributions to Wikipedia articles for academic credit at one school. Acad Med. 2017;92:194–200.
2. Roberts LW. Addressing authorship issues prospectively: A heuristic approach. Acad Med. 2017;92:143–146.
3. Packer CD, Katz RB, Iacopetti CL, Krimmel JD, Singh MK. A case suspended in time: The educational value of case reports. Acad Med. 2017;92:152–156.
4. Sidalak D, Purdy E, Luckett-Gatopoulos S, Murray H, Thoma B, Chan TM. Coached peer review: Developing the next generation of authors. Acad Med. 2017;92:201–204.
5. Dumenco L, Engle DL, Goodell K, Nagler A, Ovitsh RK, Whicker SA. Expanding group peer review: A proposal for medical education scholarship. Acad Med. 2017;92:147–149.
6. Waggoner J, Carline JD, Durning SJ. Is there a consensus on consensus methodology? Descriptions and recommendations for future consensus research. Acad Med. 2016;91:663–668.
7. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: A synthesis of recommendations. Acad Med. 2014;89:1245–1251.
8. Phillips AW, Friedman BT, Utranker A, Ta A, Reddy ST, Durning SJ. Surveys of health professions trainees: Prevalence, response rates, and predictive factors to guide researchers. Acad Med. 2017;92:222–228.
9. Phillips AW, Friedman BT, Durning SJ. How to calculate a survey response rate: Best practices. Acad Med. 2017;92:269.