I was discussing how medical practice changes over time with a group of senior residents who were approaching the end of their residencies. They were getting nervous about practicing in a community setting where they might have trouble keeping up with the advances in medical care. To empathize with their concern, I described how, during my residency, my cardiology professor had emphasized the importance of the history when diagnosing a patient with chest pain, to the exclusion of almost anything else, including review of an EKG, until after the history and physical exam were complete. “A skilled clinician should be able to diagnose myocardial ischemia from history alone; you should get the EKG only to confirm the diagnosis,” he had taught me. I used the story to show how our thinking has evolved based on the medical literature and how important it is to use all the sources of information available, including both the EKG and the history. But one of the residents had a somewhat different interpretation:
Really, now we see the EKG even before we see the patient. We don’t even need the history or physical. If we waited to complete the history, we would lose the opportunity to intervene in the cath lab. And what about the patients with atypical presentations? You would never diagnose them in time.
In a way he was right: the EKG is evaluated before the patient is seen by the physician. But I did not agree that the history and physical are no longer needed; their timing has simply changed. Nurses now order an EKG in the triage area of the emergency department on patients with all types of chest pain, before the physician examines the patient. The EKG results allow rapid initial identification of the myocardial infarction patients who benefit from early angioplasty, and those results guide the subsequent history, physical, and treatment. Physicians have learned to deviate from the traditional order of history, physical, and laboratory tests because the medical literature demonstrated the critical importance of the time to angioplasty in the outcome of myocardial infarction1; an early EKG, by reducing the time to angioplasty, turned out to be more valuable than an initial extensive history.
Major changes in our clinical approaches have also occurred for other problems—for example, the pharmacological treatment of patients with spinal cord injury. But in that case the studies did not provide a consistent direction. Hurlbert et al2 describe the history of research and guidelines on the use of steroids for patients with spinal cord injury. An initial enthusiasm based on early studies gave way to the current recommendations against steroid use when subsequent studies could not confirm the earlier findings.2 I remembered how quickly we had changed practice to administer steroids, only to revert to our previous approach when the findings of the initial studies could not be replicated.
After I shared these observations with the residents, we discussed how each of us got information about innovations that might change practice. Most of the residents explained that they listened to various podcasts in which experts summarized the recent literature. That was how they often decided whether to try new treatments. They were amused by my stories of going to the library during my residency to search through volumes of the Index Medicus for an article that had been published in the peer-reviewed literature. Yet whether accessed via the library or the Internet, the medical literature has been the place where innovations in medical care often first appear and influence health care providers. Fuchs and Sox3 surveyed practicing internists and asked them to rate the relative importance of 30 innovations that had appeared in the New England Journal of Medicine (NEJM) or the Journal of the American Medical Association (JAMA) in the previous 25 years. The innovations rated most highly were MRI and CT scanning, ACE inhibitors, and balloon angioplasty. All had been featured in articles in the NEJM or JAMA, and publication in those journals had likely contributed to changes in practice.
As I reviewed the article by Fuchs and Sox with the residents, we wondered how the list might change if the same type of study were repeated today, 14 years later, or if a different group of physicians were surveyed—for example, pediatricians or surgeons. However, even a different list of innovations would likely have had its genesis in the peer-reviewed literature. That is because peer-reviewed literature, in which experts inspect and critique scientific methodology, is the gold standard for high-quality research. The podcasts that the residents use are also usually based on peer-reviewed literature that has been synthesized by an expert.
Physicians are in a difficult position as they attempt to evaluate the massive amount of new information appearing daily. No physician can keep up with all of it, a reality that contributes to delays of about 17 years in the adoption of new scientific discoveries.4 That seems like a long time, and lives might have been saved with more rapid adoption of promising treatments. However, there is evidence that a large portion of published research cannot be replicated, and the initial claims of investigators can turn out to be wrong.5 The history of steroid use in spinal cord injury discussed above is an example of this process. How is a busy clinician to make decisions about adopting an innovation, and thus about rejecting or maintaining current practice patterns? Why doesn't the peer review process provide a more accurate initial judgment of those studies that are later discredited? Can we improve the peer review process?
In this issue of Academic Medicine, Cook and Reed6 report their evaluation of two tools for assessing the quality of medical education research. They found these tools to be useful, reliable, complementary instruments for appraising the methodological quality of medical education research. Clearly, these tools can help reviewers and thereby improve the peer review process.
Richard Smith,7 former editor of the British Medical Journal, describes peer review as “a system full of problems but the least worst we have.” Numerous efforts to improve the quality of peer review have been described, including mentorship,8 feedback from the editors,9 and structured training workshops.10 Unfortunately, these efforts have largely been unsuccessful.
The editors of Academic Medicine have long relied on the peer review process to help them select the best articles for publication. To foster high-quality and “least worst” peer review, in 2001 the journal published a handbook, Review Criteria for Research Manuscripts,11 to orient new reviewers to the peer review process and refresh the skills of experienced reviewers. We continue to agree with George Bordage and Addeane Caelleigh, the co-chairs of the task force that created this publication, who introduced it by writing, “Peer review lies at the core of science and academic life.” We are now releasing an updated version of the handbook.12 We hope that it will be a useful adjunct to other faculty development efforts in scholarly writing and review. The document should also be helpful to authors who might wonder which criteria reviewers use when they read and critique a submission. The handbook is available on the AAMC Web site; to download a copy, visit http://ow.ly/NQtUu.
The journal’s many reviewers form a very dedicated and talented group. Some of the reviews they send us are of such high quality, and have such insightful comments about the topic of the article being reviewed, that they could be published as separate Commentaries; that has, in fact, occasionally occurred, with some modification of the original review. In general, our reviewers provide excellent and timely feedback, but as fine as their work already is, I am hopeful that they will find the new handbook useful.
I would like to thank all the authors who worked on the revised document, many of whom were authors on the original one. I also thank Steve Durning and Jan Carline, two of our associate editors, who oversaw the effort with staff editor Elizabeth S. Karlin, and the task force that assisted them.
Efforts such as the Review Criteria for Research Manuscripts and the current update have characterized Academic Medicine over much of its history, which has now reached 90 years. Throughout that time, the journal has attempted to support the scholarship of our various communities and to use that scholarship to inform our advocacy for improvements in medical education, medical care, and population health. For example, in this issue of the journal, Azer13 identifies the articles in medical education that were most frequently cited since 1979. Our journal published more of them than any other journal, providing evidence for the important influence of Academic Medicine in furthering medical education scholarship. We could not have accomplished this without our dedicated peer reviewers, and we hope that as part of our journal’s birthday celebration, they will enjoy this new updated reviewer document as our present to them.
The advice I gave those anxious residents, preparing to go into independent practice in a world exploding with new knowledge, was to let their patients' problems guide their continued learning, so that they would pursue the latest and best information available to provide high-quality care. This goal would inevitably lead them to review the published literature and stay up-to-date. I also reminded them to keep in mind the spirit of what my former cardiology professor emphasized: the importance of listening to the patient and gathering the history of the problem, which is just as important as applying what we know of medical science. I think of our role as interpreters of our patients' health care language and their health care problems. We must endeavor to link the patient's story with the medical science. We can do that only if we understand the patient, the story, and the science.
David P. Sklar, MD
1. Rathore SS, Curtis JP, Chen J, et al; National Cardiovascular Data Registry. Association of door-to-balloon time and mortality in patients admitted to hospital with ST elevation myocardial infarction: National cohort study. BMJ. 2009;338:b1807
2. Hurlbert RJ, Hadley MN, Walters BC, et al. Pharmacological therapy for acute spinal cord injury. Neurosurgery. 2013;72(suppl 2):93–105
3. Fuchs VR, Sox HC Jr. Physicians’ views of the relative importance of thirty medical innovations. Health Aff (Millwood). 2001;20:30–42
4. Morris ZS, Wooding S, Grant J. The answer is 17 years, what is the question: Understanding time lags in translational research. J R Soc Med. 2011;104:510–520
5. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124
6. Cook DA, Reed DA. Appraising the quality of medical education research methods: The Medical Education Research Study Quality Instrument and the Newcastle–Ottawa Scale-Education. Acad Med. 2015;90:1067–1076
7. Smith R. Peer review: A flawed process at the heart of science and journals. J R Soc Med. 2006;99:178–182
8. Houry D, Green S, Callaham M. Does mentoring new peer reviewers improve review quality? A randomized trial. BMC Med Educ. 2012;12:83
9. Callaham ML, Knopp RK, Gallagher EJ. Effect of written feedback by editors on quality of reviews: Two randomized trials. JAMA. 2002;287:2781–2783
10. Callaham ML, Schriger DL. Effect of structured workshop training on subsequent performance of journal peer reviewers. Ann Emerg Med. 2002;40:323–328
11. Joint Task Force of Academic Medicine and the GEA-RIME Committee. Review criteria for research manuscripts. Acad Med. 2001;76:897–978
12. Durning SJ, Carline JD, eds. Review Criteria for Research Manuscripts. 2nd ed. Washington, DC: Association of American Medical Colleges; 2015
13. Azer SA. The top-cited articles in medical education: A bibliometric analysis. Acad Med. 2015;90:1147–1161