Publishing in medical education, as in any field, is essential for academic faculty. Education research has grown exponentially over the past two decades, as evidenced by an ever-lengthening list of journals (one article noted that there are currently 99 in medical education1) along with guidelines for authors on how to get published.2–5 Still, competition for space in the top-tier journals has never been fiercer, nor acceptance rates more disheartening.6 While impact factors (IFs) have risen, the chance of acceptance has fallen.6
When deciding which journal to choose for a particular manuscript, authors are advised by experts to consider many factors, including the IF or prestige of the journal, the audience or readership, alignment of focus, the peer review process, cost, and timeliness.5,7 Yet there has been little published research to help authors weigh or balance these different factors when making submission decisions for a particular manuscript.8 Often, authors are drawn—or pushed—to journals with the highest IFs. The push often comes from department chairs and/or promotions committees but may not serve the best interests of individual researchers.9 IFs are not closely correlated with the impact of a given article, and higher-IF journals do not necessarily represent the right audience for a particular study. Further, consistently aiming too high will result in higher rejection rates, which can waste time and resources and can also be demoralizing.
We previously reported that most abstracts that are presented at medical education meetings are not ultimately published in peer-reviewed journals and that the proportion in medical education is lower than in other fields.10 The purpose of this study was to build on our previous work and explore how authors consider and rationalize their decisions related to dissemination and publication. Specifically, we sought to determine how authors make choices about journal submission, including what factors they consider and how they weigh competing interests.
This work represents part of a larger, mixed-methods study examining factors that influence how and why researchers disseminate their work in medical education.11 The larger study involved a survey distributed to all individuals who presented abstracts at two large medical education conferences held 10 years previously (the Canadian Conference on Medical Education and the Research in Medical Education portion of the Association of American Medical Colleges Annual Meeting in 2005 or 2006). At the end of the survey, participants were invited to indicate whether they would be willing to be interviewed and were provided with an e-mail address for one of the authors (M.L.).
We chose a constructivist grounded theory approach to underpin our study because it is particularly well suited to explore the “core social or psychological processes underlying phenomena of interest.”12 Our goal was to explore the nuances of decision making around journal submission choice, which is underexplored in medical education. Consistent with constructivist grounded theory, our research team discussed several sensitizing concepts at the outset, which influenced our development of interview questions and our early analysis. These largely arose from the survey responses and published literature on academic publishing and mainly involved journal-related issues such as IF, word limits, timeliness, the influence of open-access and online-only publications, and peer review practices. We were also sensitive to personal and professional influences such as stage of career, research team dynamics, and available support and resources. We piloted the interview guide with two education researchers for clarity, length, and comprehensiveness. See Supplemental Digital Appendix 1, available at https://links.lww.com/ACADMED/A555, for the guide.
All interviews were conducted by phone between August and November 2016 and were audio-recorded and transcribed verbatim by one author (M.L.). Consistent with the precepts of constructivist grounded theory, data analysis began alongside our interviews, and the three authors met as a team repeatedly during the data collection phase to refine our interview guide, discuss the transcripts to identify emerging ideas, and determine theoretical sufficiency (the stage at which categories are adequate to cope with additional data without requiring extensive modifications13), which occurred after approximately 21 interviews.14 We analyzed the interview transcripts using an iterative, constant comparative approach. The three team members individually read groups of three or four transcripts at a time and identified preliminary codes. In regular meetings we debated, refined, and merged codes into thematic categories in a consensus-building process. Areas of debate or tension were resolved by repeatedly going back to the data and testing/checking our emerging understanding. Final coding was then completed by one author (M.L.) using NVivo software, version 11.4 for Windows (QSR International Pty Ltd, Australia), which facilitated organization and coding and enabled comparisons between the three researchers. Finally, one author (S.G.) reexamined the themes related to journal choice in more detail, identifying further subthemes that were then checked with the entire team.
Our team composition was as follows: S.G. and C.W. are both clinician scientists who conduct research in medical education and are at different stages of their careers. S.G. has served as a deputy editor and has participated as a member of editorial boards at several journals. M.L. recently completed a PhD in a different field within education. During the interview and analysis phases, we openly discussed our own experiences of journal submission and rejection and wrote memos as coding proceeded. These memos contained our initial thoughts and impressions after reading each transcript; additional memos were written after every team meeting to keep track of our emerging ideas and to refer back to for clarification as needed.
Ethical approval was obtained from the Research Ethics Board at the University of Toronto.
Thirty-seven researchers responded to the invitation, of whom 25 were ultimately interviewed by phone based on scheduling logistics. We also included the two pilot interviews as they were not notably different from those in the main cohort; thus, our total sample was 27 interviews (designated by participant number after direct quotations). Our participants were all from Canada or the United States and were between 42 and 86 years of age; 48% were female. Interviews were between 28 and 73 minutes long and generated 389 transcript pages for analysis.
IF and prestige
Participants discussed in detail their thought processes when considering journals for manuscript submission. The most prevalent theme identified in our data concerned the IF and prestige of a journal, which were commented on by all participants. Although many spoke of the desire to publish in the highest-IF journals (“… basically I always go for the two highest journals in our field” [P7]), their expressed reasoning and rationales behind these choices differed and often reflected underlying complexities and tensions. Participants’ attitudes about IFs were often cynical, but a range of opinions were expressed, including some more favorable views, which will be presented subsequently.
Many participants expressed tensions arising from an apparent misalignment between institutional pressure to publish in high-IF journals and participants’ own goals. P9 stated: “I’m a bit of a … I’m not a big fan of impact factors. Um … I know the university likes it but I’m afraid I’m not one, so.…” This participant went on to describe the pressure she felt to “push” for a higher-IF journal, by both her colleagues when working on multiauthored manuscripts and the institution in terms of getting promoted:
It depends on my colleagues—like some of my colleagues get quite concerned, they push for the impact factor and in fact at [Institution] … one of the policies is that we’re supposed to go for a higher—we’re supposed to go for a higher impact factor. That’s actually quite open to debate here but that’s the current policy. (P9)
One of the problems participants identified with IFs is that they are generally much lower for medical education journals than for clinical journals, which can cause their colleagues to question the value of education researchers’ work:
So whether you’re a clinician or a nonclinician you still have the same kind of stigma—“Is this really science?”—when you’re talking to others who publish in the science domain. Or “This is a really low-impact journal,” even though it’s the highest-impact journal for your field, when they compare it to something like Nature. (P24)
This participant felt that such comparisons were inappropriate and might serve to push people away from medical education research in favor of clinical research:
You can’t make those comparisons and if you do then you force people to make decisions, like if you’re a clinician then well maybe I shouldn’t be publishing in medical education I should be doing clinical research as well. So you’re not helping them build a career in a field per se. (P24)
The pressure to publish in clinical journals also applied to participants’ medical education research. Many felt that clinical journals in general are seen as more prestigious than medical education journals amongst peers and promotions committees. For faculty close to promotion, there was a perception that IF was even more important:
There was certainly a tendency to try to get into a more prestigious journal. Ummm … once you get up to full professor it doesn’t really matter much anymore. (P3)
Another explained that higher-impact, more prestigious journals in medical education are “at least recognizable to the promotions committee …” (P16).
More than one participant was concerned by this perceived overemphasis on IF, which one described as
Academic arrogance, like … “I published in this impact factor” … and the question is what’s the effectiveness in actual terms of, you know, some concrete action being taken? And I’m not sure that’s captured in that [IF metric]. (P9)
However, despite having higher IFs, clinical journals were paradoxically often seen by educational researchers as less desirable. One participant explained that if they had substandard work, they would not consider sending it to a good medical education journal, but they would consider a clinical one:
I’m not going to send, you know, some crappy pre–post study to Medical Education.… I’m going to send [it] to my specialty journal. (P4)
Another participant expanded on this notion:
There’s many studies I’ve done where I can imagine it being published in a clinical journal, for example, but not one of the high-impact-factor [ones] in med ed.… (P2)
This participant further explained that their best, most rigorous work “deserved” to be in a medical education journal rather than a clinical one:
When I did the X study where I had like over 100 [participants], properly randomized and properly powered to see a difference … and it had an interesting outcome, I felt that deserved to be in a [medical] education journal.
Yet the discussion of IFs was not all cynical. Some participants explained that getting published in a high-impact journal was not just about institutional pressure, prestige, or ego; rather, it was about improving the reach and impact of their work.
That’s also why getting your article into a high-impact journal is also important, because it’s the fact that your work can be picked up and used and adopted by others. (P10)
Moreover, a journal’s IF can serve as a surrogate or shorthand for many interrelated factors. Consider the following excerpt from an interview with P1, who described why high-IF journals are sought after:
Um yeah, I think it’s the credibility amongst your peers, as well.… It’s a huge piece, you know, when I see one of my colleagues come out in [Advances in Health Sciences Education] or Academic Medicine, or Medical Education, or any of the other journals you know, it’s always like, “Oh way to go, that’s great!” Because they’re so competitive and because we’re all in there, publishing, and we know how competitive they are, and it gets a little, um, you know, you realize the effort it took to get in there. (P1)
For this participant, the highest-rated journals were seen as a way to establish one’s professional identity and credibility, partly because everyone knows “how competitive they are,” so an article must be truly excellent to get accepted. Of note, this participant spoke of a colleague “coming out” in a journal, suggesting that it is the person who gets published, not just their manuscript. The statement acknowledging “the effort it took to get in there” further establishes an author’s identity as a serious researcher.
These concerns about IFs highlight the tensions researchers face when weighing the perceived merit and value of their work—and the audience they hope to reach—against institutional pressures to “aim high.”
A journal’s vision and mission
A journal’s focus, vision, and mission were also key considerations for authors, one of whom noted that journals “all have a distinct flavor to them—this one is more policy, this one more basic research …” (P18). For some participants, where a journal is seen to sit on the spectrum of practical to theoretical was an important distinction.
Ummm … anytime when you’re talking a lot of theory that’s going to be in [Journal A]. Because if you’re going to try and put that in [Journal B] they don’t really care about the education theory behind this, but if you developed a great tool that might be practical in a lot of … training programs, then that might fit very nicely. (P1)
However, participants’ views on which journals were the most theoretical varied. For example, one journal, often cited by participants as being on the theoretical end of the spectrum, was not seen that way by everyone: “There’s not enough theory in that journal [Journal C]—well they think there is—but I believe there’s not enough” (P14). For these reasons, some researchers preferred to send their most theoretically rigorous work to disciplinary journals, such as those in higher education or sociology.
A journal’s mission and vision also helped authors discern whether they would be able to join an existing conversation, as a way to add a new voice, advance the field, and gain legitimacy for themselves:
So we look to see, you know, what articles or what journals have even talked about it before such that my research could add to the conversation that has already begun in that journal. So you’re trying to join a conversation, really, and it only works if the journal you’re submitting to already has begun to talk about it. (P15)
In contrast, participants noted that in clinical journals, there was often a need to start a new conversation, which was viewed as a much more challenging task, as described by P19: “Although like I said, I’m trying to start a conversation in the clinical journals, but that’s a real tough one, at least for my type of research.”
In recent years, more journals have included a desire to be open access as part of their missions, and many new journals are published online only. On questioning, nearly all participants described open-access or online-only journals in derogatory terms, although sometimes distinctions between these two journal types were blurred. In general, they were seen as less reputable and not desirable as first-choice venues—as one put it, “That’s like, when I’m scraping the bottom of the barrel” (P4). Online-only journals were seen as “upstarts” compared with the print journals, which have longer histories. When open-access journals were chosen, authors were careful to look at factors such as the journal’s reputation, longevity, and editor. Conversely, some participants acknowledged that open-access publications could be useful in getting one’s work seen by underserved audiences, such as learners and faculty in low-resource settings.
Journal technical factors
Participants often commented on a journal’s turnaround time (either the time from submission to decision or from acceptance to print), but it was not usually viewed as a determining factor when it came to submission. As P1 stated, “I think turnaround time is nice, but for me it’s never been a prime motivator.” Others echoed this sentiment by saying it was something they “haven’t really thought about … as an important factor” (P17). One participant noted that it might be important closer to promotion: “If a person is getting close to promotion then journals that have a quicker turnaround time are likely to be something that I would think seriously about” (P16). It is possible that this apparent lack of concern is due to the perception noted by several participants that turnaround times are much shorter than they had been historically. One participant laughingly described their experience with a particular journal in the past, stating that “one could have a child and walk it” by the time a paper would appear in print (P6). However, this participant also noted that things have dramatically improved in the last several years.
Sometimes the word count limits imposed by journals would affect submission choice for participants who had written particularly long articles, but this was only discussed by a handful of participants. This may be because, as P18 explained, “that’s less of an issue these days because a number of [journals] have let up on their length of paper thing, but sometimes that’s a factor.”
Article or study factors
Most participants also discussed the particulars of a specific article or study as being important to the journal selection process. This discussion was closely linked to the theme regarding a journal’s mission or vision, but in the current theme the focus was on alignment between a particular article and the journal’s mission, as stated by P20: “The first round of cuts is trying to match the topic and the audience with the journal.” Another participant, who initially discussed the paramount importance of IF, subsequently went on to explain, in detail, the importance of getting her work read by the “right” audience:
I look at all those things [IF, acceptance rates] but, but I don’t let it rule me. What really matters for me is, uh, does this work, does it make sense to publish this work for this audience? So that X paper that you asked me about, it was a case study in X, so even though they, there were journals that I could have sent it to that were higher impact, uh, publishing in [Journal] was more meaningful because uh the people that are wanting to read it were there. So I make those kinds of choices. (P24)
Authors also considered the perceived quality and novelty of their own work when weighing different journal options. For example, P13 critically considered the rigor of a research study that he was thinking about submitting to a highly rated journal:
So if I think of like journals, for instance [Journal D], which I think is probably considered to be one of the premiere journals [in medical education], ummm there’s the work that I’m doing that … I don’t think … it doesn’t quite feel up to the caliber that I would submit it there.
Many other respondents echoed these sentiments and further explained that they do not want to waste the editors’ or reviewers’ time by sending work that is not likely to be accepted in a highly ranked journal. This self-censoring may save time and mitigate delays in the communication of new findings but carries an attendant risk of undervaluing one’s work.
In contrast, other authors were less discriminating when submitting their work, instead choosing the same strategy regardless of the article. Some described submitting “to the same two journals every time” and then “working down” upon rejection (P7). Rather than being discouraged, some participants specifically noted the extremely low acceptance rates at the most competitive journals as markers of the journals’ quality, which helped mitigate the sting of rejection. However, the opportunity cost (the potential benefit forgone when one alternative is chosen over another) and the time required for multiple submissions and rejections were weighed carefully, especially around promotion time.
It’s prestigious, but then if I know that it’s really hard to get something in there, then I would sort of balance that on the other side about whether or not I would want to take the chance. Lose all the time involved in sending it in and then waiting to hear before you send it on to the next place. (P17)
Factors related to peer review
Despite the long odds of acceptance at high-impact journals, participants sometimes chose them for the opportunity to receive at least an informative peer review.
And you start to know there’s certain journals that it might be hard to get into but … their peer review process is quite good so you tend to get a lot of useful feedback that you could then use. Even if you can’t reapply to that journal that you might use in going somewhere else. (P1)
But a good review is wonderful to receive so that’s certainly a motivator. Even if something is rejected, it’s a motivator. (P6)
Despite these positive comments, most discussion regarding peer review was negative. Many authors were concerned that reviewers were not properly trained or selected (“My god, we need to train our reviewers better” [P7]). One particularly negative exchange focused on the quality of the reviewers:
Oh well the reviews you get are [expletive]. Uh reviewers don’t do their jobs properly, uh journal editors send their stuff to people who have no clue what they’re talking about and they don’t take the time to read it properly, so you get reviews that are like stupid, really stupid.… So this is really demoralizing. (P12)
Several participants made comments about the lack of kindness and generosity apparent in some reviews. In reflecting over the past 10 years, one said: “My attitude over the quality of the peer review feedback I’ve received has changed. I think it’s gotten much harder, less collegial, overly critical” (P6). Another felt that reviewers need to be “mindful that you know, just to use language that will not decimate someone” (P7). This was particularly important for authors mentoring junior researchers.
Although there were no reported instances of a bad peer review or editorial decision dissuading an author from submitting future work to a particular journal, it is noteworthy that these frustrations were so globally discouraging.
Our findings highlight several important issues for the field of medical education research. Despite the plethora of available venues for publication, our participants were still drawn to what they consider the “top tier” journals in the field. The IF itself was not necessarily the primary driver; rather, what mattered was what the IF represented—quality, prestige, a large readership. Thus, a high IF served to enhance participants’ sense of the legitimacy of their own work. This was important to their own identities as researchers but also for those to whom they are accountable. Smaller, niche journals were not often described as being participants’ first choices, but they were acknowledged to be useful in certain instances, such as for work that may speak to a specialized audience or that may increase its uptake in an emerging field. Lower-tier journals were often discussed as being good choices for work perceived as being of lower quality or when timeliness was more important. Interestingly, clinical journals, which often have much higher IFs than medical education journals, were often seen as “lesser,” and authors would sometimes strategize to submit their lower-quality, more practically focused work to these journals, saving their best work for a medical education journal with higher prestige in our own field. For similar reasons, some authors reported choosing to publish in disciplinary journals to maintain credibility in their primary discipline, regardless of IF.
Although it is beyond the scope of this article to fully analyze and critique journal IFs, suffice it to note that they are controversial.9,15 A journal’s IF is calculated by “dividing the number of current year citations to the source items published in that journal during the previous two years.”16 Originally designed as a way to compare journals, they have, over time, become a way to compare individual papers or even researchers,15 and remain important drivers of submission behavior.17 Yet the reality in biomedical research is that there is only a weak correlation between a journal’s IF and subsequent citations of any particular article.15 For our participants, the IF of a journal was a consideration, but it was not of overriding influence (although it was often a source of frustration and dismay when authors felt pressured to submit to high-IF journals in contradiction to their own values). Our participants were additionally driven by finding a good match between an article and the right audience. Similar to recently reported findings, our participants recognized that choosing the “wrong” journal simply on the basis of high IF can result in “delays in the communication of new findings that can hinder scientific progress, delay career progression for young scientists, and waste limited resources.”8 On the other hand, choosing “correctly” may lead to earlier publication, but can come with different costs. For example, one study found that in the broader biomedical field, 75% of articles are accepted at the first journal to which they are submitted—yet the articles that undergo rejection and resubmission elsewhere tend to have higher citation counts in subsequent years.18 Journal choice for a particular article then becomes a delicate balancing act between multiple trade-offs, including finding the best audience, in the highest-prestige journal one can reasonably hope for, in the most efficient manner, while attempting to maximize future impact.
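For readers unfamiliar with the calculation, the quoted two-year definition can be sketched as a simple ratio; the numbers in the worked example below are purely hypothetical, chosen only to illustrate the arithmetic:

```latex
% Sketch of the standard two-year journal impact factor for year Y.
% The example figures are hypothetical, for illustration only.
\[
\mathrm{IF}_{Y} \;=\;
\frac{\text{citations received in year } Y \text{ to items published in years } Y-1 \text{ and } Y-2}
     {\text{citable items published in years } Y-1 \text{ and } Y-2}
\]
% Example: a journal that published 200 citable items in 2015--2016,
% which together received 500 citations in 2017, would have
% IF_{2017} = 500 / 200 = 2.5.
```

Because the numerator aggregates citations across all of a journal’s recent articles, the resulting figure says little about the citation performance of any individual article, consistent with the weak article-level correlation noted above.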
Factors inherent to a particular article or study were also major considerations for journal choice. Researchers considered not only the significance of their findings but also novelty (being first) or relevance to an ongoing conversation. Lingard19 has promoted the idea that a journal exists “to promote scholarly conversations,” and thus prospective authors should consider how and when to join in. In our data we did not find many examples of researchers wanting to start new conversations, but there was recurrent discussion of joining a conversation already in progress. Joining in was seen as easier because the journal’s audience already understands and appreciates the value of the problem under study. The disadvantage to this approach is that the author may be preaching to the choir, and other important stakeholder groups may never hear the message. An argument can thus be made for the value in disseminating one’s research program over multiple journals with diverse audiences so as to widen the impact and uptake of important findings. Yet this strategy entails both the hard work of starting a new conversation and the risk of multiple rejections. This in turn may further inflame the pervasive opinion that peer reviewers are not well trained or expert enough to critique submitted papers. These issues were weighed and carefully balanced by our participants as they considered submission strategies for individual articles. It would be beneficial for our field as a whole to encourage both conversation joiners and conversation starters as a means to broaden the dissemination of impactful work.
We were somewhat surprised at the lack of emphasis placed on some of the more technical features of journals, such as formatting requirements or limits on word count and length, which (anecdotally) are often discussed as important considerations among research teams. This may be due to a recent trend to loosen or remove strict word limits, along with cautions to authors to keep in mind readers’ attention spans.20 Some journals allow for different word limits depending on the article type, allowing more space for qualitative research.21 So although word counts may be considered by authors, they do not appear to be hindrances in medical education journals. On the other hand, clinical journals tend to have much stricter word limits, and many do not publish qualitative research at all—a combination that adds to the sense that these journals are not as valued in medical education.
Our study has several important limitations. Because we only interviewed researchers who had presented at academic meetings in 2005–2006, their average age is relatively high and their positions fairly advanced. Their opinions may not reflect those of junior faculty, especially when it comes to online and open-access publication. Similarly, we did not deliberately sample across the academic career trajectory or across different professional qualifications (e.g., MD, master’s trained, or PhD), which may have revealed a greater variety of viewpoints. This might be considered in future studies.
Despite the availability of helpful tips for prospective authors, there is little empirical research in medical education (or in other fields) that attempts to understand how authors weigh and balance various factors in selecting a journal for their work. Our research highlights several critical tensions that can arise for authors as they strategize to choose the “right” journal for their research. Interestingly, there appears to be a disconnect between what may be beneficial at an institutional or departmental level and what is in the best interest of an individual researcher. This tension should be acknowledged so it can become a focus of faculty development and culture change at institutions. For example, as a way of reducing the overriding influence of IFs on faculty promotion, it would be helpful to raise awareness at the leadership level of the consequences and misuses of journal IFs. Academic leaders should also be aware of the aversion that medical education scholars may have to publishing their best work in clinical journals. Mentors can reinforce these messages by emphasizing to their junior colleagues the importance of finding the best audience for one’s work, in order to maximize its adoption and impact.
The authors wish to thank Meagan Kaye and Chenthila Nagamuthu, of the SickKids Research Institute, for their valuable contribution in the recruitment process and data collection for the larger survey-based phase of this study.