Foreword: Characteristics of RIME Papers That Make the Cut

West, Daniel C. MD; Miller, Karen Hughes PhD; Artino, Anthony R. Jr PhD

doi: 10.1097/ACM.0000000000001379

D.C. West is 2016 chair, Research in Medical Education Program Planning Committee, and professor of pediatrics, University of California–San Francisco, School of Medicine, San Francisco, California.

K.H. Miller is 2017 chair, Research in Medical Education Program Planning Committee, and associate professor of graduate medical education, University of Louisville School of Medicine, Louisville, Kentucky.

A.R. Artino is immediate past chair, Research in Medical Education Program Planning Committee, and professor of medicine, Uniformed Services University of the Health Sciences, Bethesda, Maryland.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Disclaimer: The views expressed are those of the authors and do not necessarily reflect the official views of the Uniformed Services University of the Health Sciences, the U.S. Navy, or the Department of Defense.

On behalf of the Research in Medical Education (RIME) Program Planning Committee (PPC), welcome to the 55th annual Association of American Medical Colleges (AAMC) RIME program. Once again, we are pleased to present a strong sample of cutting-edge medical education research from North America. The process for submitting and reviewing manuscripts was especially challenging this year because of the plan to hold a separate AAMC medical education meeting in September 2016. To meet production deadlines, the call for RIME papers had to go out significantly earlier than in past years, which left authors less time to complete projects and manuscripts and gave reviewers and the RIME PPC a tighter timeline to complete their work. Nevertheless, we still received many high-quality research and review manuscripts addressing a broad range of “hot topics” in medical education. We are happy to report that, with the help of 79 reviewers, we accepted 8 research papers from the 64 submitted manuscripts. On behalf of the RIME PPC, we offer a most sincere thank you to all those who submitted manuscripts for consideration—it was a great privilege to review your work and, we hope, provide meaningful feedback.

What Has Changed and What Has Remained Constant?

Like many of you, we were thrilled to learn that the medical education and RIME programs would be reunited with the annual Learn Serve Lead AAMC meeting. However, regardless of whether the RIME program occurs in September or November, the timeline between review, acceptance, and publication is always very tight. This year, as in past years, this meant that we could accept only manuscripts that were nearly publication-ready (i.e., requiring only minor revisions). Unfortunately, we were not able to accept a number of potentially publishable manuscripts because they would have required more extensive revisions. Our hope is that the reviewer feedback will help authors whose manuscripts were not accepted revise their work for submission elsewhere or for next year’s AAMC meeting.

Recognizing that many good submissions could not be accepted, and hoping to create a new mechanism for sharing a broader range of research ideas with the RIME community, this year we created a new category of presentation—the RIME Oral Abstract. We identified 9 interesting and important manuscripts, not accepted for full paper publication, to be presented as oral platform presentations. These presentations are identified as RIME Oral Presentations in the medical education program, and abstracts of these manuscripts are published in this RIME supplement. Authors of RIME Oral Abstracts remain free to submit their full manuscripts for publication elsewhere.

Finally, this year marks the end of the hard-copy era for the RIME supplement. For a variety of reasons, we are moving to an online-only publication. We have mixed feelings about this change, and it is perhaps not ideal for some readers, but the move is in keeping with recent trends in journal publishing, and the online supplement will be free to anyone with an Internet connection. We are optimistic that free access, coupled with a robust social media campaign, will increase the availability and profile of the papers and abstracts in the RIME supplement.

Characteristics of Accepted Manuscripts

In hopes of providing guidance for authors of future submissions, we offer a brief summary of the distinguishing features of accepted manuscripts (and a few common problems of rejected manuscripts). These comments and suggestions are based on our experience reviewing manuscripts as part of the RIME PPC over the past three to five years.

Identifying the problem and formulating the research question

The best research studies begin with an observed (often generalizable) problem and a well-thought-out research question—one that addresses not just any knowledge gap but an important one. This requires a concise review of the literature that tells the reader what is already known and clearly states the gap the study is designed to fill.1 In addition, the authors should briefly argue why filling that gap is important. In medical education research, it is also important to describe the conceptual or theoretical framework from which the authors approached the study design or intervention. Manuscripts we accepted did these things well; many rejected manuscripts did not.

Educational innovations

Many submissions to RIME focus on educational innovations, which makes sense: disseminating innovations is critical to improving how we train the future health care workforce to meet the needs of learners, patients, and society more broadly. The challenge lies in designing and reporting an innovation in a way that makes a “really good idea” applicable or useful to the broader medical education community. Guidelines for publishing medical education innovations have been published elsewhere.2 Accepted RIME innovation submissions adhered to these guidelines.

Authors of innovation papers must make a convincing argument that the problem the innovation is designed to address is important, that the innovation builds on what is already known, and that it is generalizable beyond their local institution. Failing to make this argument effectively is the single most common problem in innovation manuscripts rejected from the RIME program. Another critical element relates to the core of study design: how did the authors demonstrate that the innovation achieved its stated objectives? In other words, how well did the innovation solve the problem it was designed to address? Answering this question requires outcome measures that are clearly linked to the objectives of the innovation, and answering it well usually requires measuring more than learners’ reactions to the innovation (e.g., their satisfaction, comfort, confidence). Accepted innovation manuscripts commonly include measures of learning, skill acquisition, and the impact of improved skills on essential patient-care-related outcomes (i.e., Kirkpatrick level 3 and 4 outcomes). A randomized controlled trial (RCT) might be warranted if, for example, the goal is to test the superiority of one educational innovation over another. However, the choice of study design should depend on the objectives of the innovation, and an RCT may not be the best way to demonstrate that an innovation has met its intended objectives.3

Validity evidence

Many studies in medical education involve collecting validity evidence about a method of assessment or some other aspect of an educational innovation. The most current framework offers a unifying theory in which validity consists of a body of evidence designed to test (i.e., support or refute) a hypothesized interpretation or use of assessment scores or data.4,5 We cannot emphasize strongly enough how important it is for authors to follow this unifying framework from the design phase of a study all the way through to reporting findings and conclusions in a manuscript. In practice, the framework helps in developing testable hypotheses, identifying appropriate analytic methods, and reporting results and conclusions in a coherent, easy-to-understand presentation of validity evidence. One paper in this RIME supplement provides an especially nice example of how to frame, collect, and interpret validity evidence.6

Surveys

Many manuscripts submitted for this year’s RIME program used surveys to measure outcomes. As in assessment or curriculum development, validity evidence matters here too. In general, it is often preferable to use survey instruments that have been used in similar settings or that have other validity evidence supporting their use. In medical education research, however, it is often necessary to develop new surveys specific to the objectives of the study. To ensure that new survey items represent the construct they are intended to measure (i.e., content validity), it is essential to use a rigorous, organized development process.7 Regardless of how the survey items are created, the authors should describe the process in the manuscript so that readers can judge it for themselves.

A limitation of many survey-based studies in medical education is that it is rarely possible to survey the entire population under study. Instead, investigators usually must survey a sample of the population; sampling, however, creates the risk of selection bias. With this in mind, readers (and reviewers) want to know how representative the individuals who completed the survey were of the study population. One way to help answer this question is to look at the response rate—the proportion of the study population that completed the survey. Although there are no firm standards for response rate, in general, the higher the response rate, the better. In our experience, most reviewers expect that more than 50% of potential respondents will have completed the survey. Even when the response rate exceeds this threshold—and especially when it does not—an effort should be made to determine whether the respondents are representative of the population (or, stated another way, that nonresponders do not differ from responders in important ways). Often, demographic data can be useful in determining how similar the respondents are to the study population.
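
For authors who want a concrete starting point, the following is a minimal sketch (in Python, with entirely hypothetical numbers and a hypothetical two-category demographic split) of how one might compute a response rate and test whether responders resemble the study population. It illustrates one simple representativeness check, not a required or endorsed method.

```python
# Minimal sketch (hypothetical numbers throughout): compute a survey response
# rate and check whether responders resemble the study population on a single
# demographic variable, using a chi-square goodness-of-fit test.
from scipy.stats import chisquare

population_n = 400   # hypothetical: everyone invited to complete the survey
responders_n = 232   # hypothetical: completed surveys returned

response_rate = responders_n / population_n
print(f"Response rate: {response_rate:.1%}")  # 58.0%, above the ~50% rule of thumb

# Hypothetical demographic check: do responders match the population's known
# 55%/45% split on some attribute? Expected counts scale the population
# proportions to the responder sample size so both vectors sum to 232.
observed = [136, 96]                                   # counts among responders
expected = [0.55 * responders_n, 0.45 * responders_n]  # 127.6, 104.4
stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"Chi-square p = {p_value:.2f}")  # a small p suggests responders differ from the population
```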

Studies using qualitative methods

Qualitative research comprises a diverse range of methodologies that are well suited to answering many research questions in medical education, and every year we receive many manuscripts that use qualitative methods. Standards for reporting qualitative research that account for this broad range of methodologies have been proposed.8 One of the strengths of qualitative research is the flexibility of its methods and analytic approaches. In publishing qualitative research, however, it is critical that authors explain their methods and analytic approaches in enough detail for the reader to judge their suitability for the research question; a passing reference to “grounded theory,” for example, is not sufficient. Another frequent problem is that authors do not provide enough data to support the conclusions of the paper. Best practice usually requires reporting representative quotes or text excerpts that support each conclusion or inference the authors propose. Every year we reject a number of manuscripts because the qualitative methods are not well explained and/or the authors do not provide enough data to support their conclusions.

Review papers

Review papers play an important role in advancing our understanding of medical education by organizing and synthesizing individual research studies. The best review papers advance a field by identifying strengths and weaknesses in our current understanding and by pointing out new directions for research. Although there are several types of review/synthesis papers,9 most of the review papers submitted to RIME over the last few years were framed as systematic reviews. Standards for conducting and reporting systematic reviews in medical education have been proposed, and accepted RIME review papers generally adhere to these guidelines.10 Our advice to authors contemplating a systematic review is first to consider carefully whether there are enough individual studies in the area to make the review worthwhile. It is also important to assess the quality of reviews already published to ensure that a new review is needed and will add meaningfully to what is already available in the literature.

Usually, a great strength of systematic reviews is the rigor of the process by which papers are selected for inclusion. However, every year we reject otherwise excellent reviews because the search process and inclusion/exclusion criteria were not clearly stated. Without this information, it is impossible for the reader (or reviewer) to know whether the conclusions might be biased by the way the included papers were selected. Finally, a critical element of any useful review paper is that it must synthesize the available evidence, not simply catalogue or list results from individual studies. By definition, synthesis requires that individual elements (i.e., the findings from individual studies) be integrated or woven together to form a more unified or complete understanding of the phenomenon under study. In the past we have frequently rejected review papers that failed to synthesize the studies included in the review.

Gratitude

In closing, we thank the dedicated and superbly talented AAMC staff—Nesha Brown, Steve McKenzie, and Kate McOwen—who worked tirelessly to make the RIME program and supplement happen. In addition, we thank our external peer reviewers. Having been RIME reviewers ourselves, we understand the time and effort required to conduct thorough reviews and provide constructive feedback, both of which are invaluable to the PPC and the authors. We also thank the incredible scholars who make up the RIME PPC: Monica L. Lypson, Lynne S. Robins, Bonnie Miller, Reena Karani, Tanya Horsley, Bridget O’Brien, and Win May. It is truly an honor to work with such talented individuals who are so devoted to promoting the art and science of medical education. Finally, we thank Dr. David Sklar and the editorial team of Academic Medicine. We sincerely appreciate their continued support of medical education research and the RIME program.

Daniel C. West, MD, Karen Hughes Miller, PhD, and Anthony R. Artino Jr, PhD

References

1. Maggio LA, Sewell JL, Artino AR Jr. The literature review: A foundation for high-quality medical education research. J Grad Med Educ. 2016;8:297–303.
2. Kanter SL. Toward better descriptions of innovations. Acad Med. 2008;83:703–704.
3. Norman G. RCT = results confounded and trivial: The perils of grand educational experiments. Med Educ. 2003;37:582–584.
4. Downing SM. Validity: On meaningful interpretation of assessment data. Med Educ. 2003;37:830–837.
5. Downing SM. Reliability: On the reproducibility of assessment data. Med Educ. 2004;38:1006–1012.
6. Park YS, Lineberry M, Hyderi A, Bordage G, Xing K, Yudkowsky R. Differential weighting for subcomponent measures of integrated clinical encounter scores based on the USMLE Step 2 CS examination: Effects on composite score reliability and pass–fail decisions. Acad Med. 2016;91(11 suppl):S24–S30.
7. Artino AR Jr, La Rochelle JS, Dezee KJ, Gehlbach H. Developing questionnaires for educational research: AMEE guide no. 87. Med Teach. 2014;36:463–474.
8. O’Brien BC, Harris IB, Beckman TJ, Reed DA, Cook DA. Standards for reporting qualitative research: A synthesis of recommendations. Acad Med. 2014;89:1245–1251.
9. Duke University Medical Library. Systematic reviews: The process: Types of reviews. http://guides.mclibrary.duke.edu/c.php?g=158155&p=1035849. Accessed July 14, 2016.
10. Cook DA, West CP. Conducting systematic reviews in medical education: A stepwise approach. Med Educ. 2012;46:943–952.
Copyright © 2016 by the Association of American Medical Colleges