Predatory journals are open-access journals that charge publication fees while publishing most submissions without peer review or transparent publication practices.1 In 2017, the New York Times reported that more than 10 000 such predatory journals exist, approximately as many as the number of legitimate journals.2 If the two are roughly equal in number, it follows that some literature searches will inadvertently retrieve articles published in suspect journals.
Systematic reviews summarize research outcomes on a topic across clinical trials to provide a reliable, high level of evidence.3 According to the American Heart Association, they constitute level A, high-quality evidence.4 Systematic reviews, however, have an “Achilles heel”: although all high-quality systematic reviews should assess the quality of the individual studies, the reliability of their results still depends on the rigor of the studies they include.
Members of our group have been involved in multiple systematic reviews over the last decade. This work has resulted in significant knowledge generation related to comorbidity, caregiving, and dyadic interventions. Using this expertise, we recently began a large review of what is known about hypercholesterolemia treatment in older (65+ years) adults. Knowing that clinical trials often exclude older adults, we wanted to be as inclusive as possible and so used multiple databases. After preliminary test searches, we ended up with 3176 individual studies to assess for eligibility. In this first-level title/abstract screen, we used 5 pieces of information: author name(s), publication year, article title, journal name, and abstract. Given the growing problem, for the first time, we stopped and asked, “Should we screen for predatory journals?” The question gave our full team pause and raised pragmatic and procedural concerns. First, is it even possible to do this in so large a data set? It was easy to identify the high-impact journals such as the New England Journal of Medicine, but what was the ACP Journal (later identified as a monthly feature of the Annals of Internal Medicine)? And we had hundreds of equally unknown journals. Second, would this vetting bias our findings? We recognized that good articles are sometimes published in suspect journals. Likewise, bad articles are sometimes published in peer-reviewed journals.
In the past, we were protected from suspect journals by the system of indexes (CINAHL, EMBASE, Scopus, etc). For most indexes, the “journal list” was and continues to be vetted and maintained by the index publisher. Traditionally, those index publishers have been very selective. However, a vulnerability has been identified in one of the largest indexes, PubMed, because it indexes abstracts of manuscripts that have been submitted to PubMed Central. Inclusion in PubMed Central and indexing in PubMed at the article level do not give credence or authority to the journal. When PubMed reported only MEDLINE content, it was relatively pristine. To our knowledge, PubMed has not yet addressed this vulnerability despite editorials calling on that agency to do so.5 Similarly, article records retrieved by Google may include literature that has been published in suspect journals; Google does not draw studies from a journal list delimited by editorial standards.
So, what should authors do to protect the integrity of systematic reviews? Organizations such as the International Academy of Nursing Editors,6 World Association of Medical Editors,1 and International Committee of Medical Journal Editors7 have issued guidelines for authors to address the issue. Using their guidelines, our team decided to apply our inclusion and exclusion criteria to all the studies that our search identified. We decided that, when the final corpus of studies is identified, we will verify every journal by examining the journal’s home page using the predatory journal algorithm from the World Association of Medical Editors1 to determine whether to include or exclude the study based on peer review and publishing practices, thereby fulfilling the author responsibilities designated by the International Committee of Medical Journal Editors. On balance, we decided that the benefit of preserving the reliability of our systematic review data and bolstering the tradition of the peer review process outweighed the risk of any potential bias introduced by the vetting process. We will also transparently report in the article any studies excluded because of the vetting process and the reasons for exclusion. We recommend that the Journal of Cardiovascular Nursing consider making this a requirement for all future published systematic reviews. Protecting the evidence base in a predatory environment will take all of us: editors, authors, and readers.
Harleah G. Buck, PhD, RN, FPCN, FAAN
Randy Polo, JD, MA
Cheryl H. Zambroski, PhD, RN
1. Laine C, Winker MA. Identifying predatory or pseudo-journals. Biochem Med (Zagreb).
2. Kolata G. Many academics are eager to publish in worthless journals. New York Times. October 30, 2017.
4. Yancy CW, Jessup M, Bozkurt B, et al. 2017 ACC/AHA/HFSA focused update of the 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. Circulation
5. Manca A, Cugusi L, Dvir Z, Deriu F. PubMed should raise the bar for journal inclusion. Lancet.
6. Thorne S. Predatory publishing: what editors need to know. Nurse Author Ed