THIS IS THE SECOND of a two-part series aimed at helping nurses conduct and publish a systematic review (SR) of the literature. In part 1, we described the first four steps and started the fifth step. (See “Conducting a Successful Systematic Review of the Literature, Part 1,” in the May issue of Nursing2014.) Here we pick up where we left off. (See 10 steps to an SR.)
Step 5. Conduct comprehensive literature searches (continued).
After all relevant articles have been obtained from the search, the reference sections of relevant books and all included articles should be reviewed for possible additional articles to obtain. The team should keep track of which articles came from the searches and which came from the reference section review. This information will be needed to create the flow diagram of included and excluded articles.1
Another technique that's been used to find relevant articles is to identify journals likely to publish articles on the SR question chosen. The team may then conduct a manual search of a predetermined date range of issues of the selected journal(s).2 Finally, the team may want to contact authors who've published primary studies relevant to the SR question, because they may be able to help identify other relevant published and unpublished works.2
Researchers may want to consider searching grey literature, including conference papers and proceedings, dissertations, monographs, and government and association reports.2,3 The Fourth International Conference on Grey Literature defined grey literature as “[t]hat which is produced on all levels of government, academics, business and industry in print and electronic formats, but which is not controlled by commercial publishers.”4 Although a full review of the use of grey literature in SRs is beyond the scope of this article, interested researchers should look at the Grey Literature Report, a bimonthly publication of the New York Academy of Medicine Library.5
Conducting comprehensive searches in applicable databases can address the problems of publication and language bias.2 Publication bias refers to the fact that researchers are more likely to submit manuscripts for publication when they include positive results (results that support their hypothesis) and that journals are more likely to publish manuscripts with positive rather than negative or neutral results.2 Language bias means that positive results are more likely to be published in English and that authors whose primary language is English are unlikely to read non-English articles and, for that reason, unlikely to include them in their SRs.2
Both publication and language bias could lead to inaccurate conclusions. Inclusion of all relevant literature may be cost-prohibitive because non-English sources may not only require translation but also be difficult and expensive to acquire.
Today the gold standard is to provide the details of at least one search strategy and information about the other searches completed.6 This transparency in publishing search strategies lets other researchers reproduce what's been done.6 A good resource for ensuring search reproducibility is the list of criteria developed by Maggio and colleagues.6
The librarian who conducts the searches can save detailed documentation of the search strategies so that they can be reproduced2,6 and/or updated, and can then create a report describing the details of each strategy. This report can be submitted to the journal for possible inclusion in the manuscript, posted as an online supplement, or made available from the coauthors on request after publication.
A librarian can help the team use a bibliographic management system (such as RefWorks or EndNote) to manage the hundreds and sometimes thousands of references that must be reviewed. Most medical libraries at academic medical centers have access to such systems. A bibliographic management system provides an easy way to upload and/or document references considered in the SR, keep track of references that need to be reviewed and/or acquired, and create bibliographies while writing the manuscript. The librarian has the expertise to teach team members about using these systems, might do the initial setup, and can easily import references found during searches into the bibliographic management system.
Finally, the librarian can help to acquire articles not found in the institution's library collections by providing document delivery services. The team leader needs to discuss document delivery with the librarian because the costs of labor and articles can be substantial. Anyone writing a grant to support an SR should include costs for document delivery in the budget.
Step 6. Review the search results.
Two members of the team who are very familiar with the review topic should independently review the search results and select articles to obtain for possible inclusion. Using subject experts helps ensure that all relevant articles are included.
The independent reviewers should thoroughly discuss what they plan to look for before they start the review of the searches. To avoid missing any relevant articles, reviewers selecting articles for full review should err on the side of overinclusion.
Once the independent reviewers have selected their lists of articles to include in the SR, they must agree on a final, combined list using one of two methods. In the first method, the reviewers meet, discuss their selections and any disagreements, and come to a consensus.7 In the second, more robust method, the two reviewers work from their independent lists, and percent agreement and interrater Cohen's kappa are calculated before they meet to discuss disagreements.8 (See Understanding research terms.) Both statistics are reported in the methods8 or results section of the manuscript.9 With this method, usually all the articles from both reviewers' lists are obtained, but it's also acceptable to let the reviewers discuss disagreements after calculating agreement statistics and obtain only those articles upon which they agree.
Finally, some journals still accept SRs in which only one author reviews the search results and decides which articles to obtain. In a variation on this method, Young and colleagues9 had one author review the original search results and another author rereview a 5% random sample of titles to check agreement; Cohen's kappa was used to report interrater agreement between these reviews.9 We don't recommend this approach because it's less robust and may result in omission of relevant articles, but we realize that with large sets of search results, having two independent reviewers review the entire list of references may be impractical. Obtaining all articles from two independent reviewers also creates more costs related to time, document delivery, paper, and other resources.
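For teams that want to compute these agreement statistics themselves, the arithmetic is straightforward. The following Python sketch, using hypothetical screening counts for illustration, calculates percent agreement and Cohen's kappa from two reviewers' include/exclude decisions:

```python
# Percent agreement and Cohen's kappa for two reviewers screening the
# same set of references. All counts below are hypothetical examples.
def agreement_stats(both_include, both_exclude, only_a, only_b):
    n = both_include + both_exclude + only_a + only_b
    # Percent agreement: proportion of references on which both agreed
    percent_agreement = (both_include + both_exclude) / n
    # Expected chance agreement, from each reviewer's inclusion rate
    a_include = (both_include + only_a) / n
    b_include = (both_include + only_b) / n
    p_chance = a_include * b_include + (1 - a_include) * (1 - b_include)
    # Kappa adjusts observed agreement for agreement expected by chance
    kappa = (percent_agreement - p_chance) / (1 - p_chance)
    return percent_agreement, kappa

# Example: of 100 titles, both reviewers include 20 and exclude 65;
# reviewer A alone includes 10 more, reviewer B alone includes 5 more.
pa, kappa = agreement_stats(both_include=20, both_exclude=65,
                            only_a=10, only_b=5)
print(f"Percent agreement: {pa:.0%}, Cohen's kappa: {kappa:.2f}")
```

Because kappa discounts chance agreement, it is lower than raw percent agreement, which is why it's considered the more conservative statistic.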
Step 7. Develop the abstraction database.
This step can be called either abstraction8 or extraction.9 We'll use the term abstraction, defined as the process of identifying relevant information within each article and entering it into a repository or abstraction database. The abstraction database contains a condensed version or summary of each article with the information relevant to the SR question. (See What's included in the abstraction fields?) The team can decide which fields to abstract based on the review question. The abstraction database may be created using software programs such as Microsoft Word, Excel, or Access.
In the protocol, the team should list the information it intends to abstract from each article, but we've found it difficult to predict all potentially relevant information until some articles have been abstracted. We use an iterative process that starts with the fields listed in the protocol and adds fields as the team abstracts the first 3 to 10 articles. As team members read relevant articles, new ideas for information to capture often emerge.
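As a minimal illustration, an abstraction database can be as simple as a spreadsheet or CSV file with one row per article. The Python sketch below uses hypothetical field names drawn from the examples in this article, and the sample row is fabricated for illustration; a real team would tailor both to its review question:

```python
import csv

# Hypothetical abstraction fields; a real team would tailor these to
# its SR question and expand them during the iterative process.
FIELDS = ["first_author", "year", "population", "study_design",
          "follow_up_period", "data", "major_findings", "future_research"]

# One fabricated example row (not a real study) for illustration only
example_row = {
    "first_author": "Smith",
    "year": "2013",
    "population": "120 medical-surgical nurses at one academic center",
    "study_design": "quasi-experimental, pre/post",
    "follow_up_period": "6 months",
    "data": "handoff omission rates before and after intervention",
    "major_findings": "omissions decreased after standardized handoff",
    "future_research": "multisite replication suggested",
}

# Write the abstraction database with one header row and one row per article
with open("abstraction.csv", "w", newline="") as f:
    writer = csv.DictWriter(f, fieldnames=FIELDS)
    writer.writeheader()
    writer.writerow(example_row)
```

A shared file like this makes it easy for two independent abstractors to compare their entries field by field when they meet to reach consensus.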
Step 8. Train abstractors.
Two or more independent abstractors will be needed, depending on the abstraction process and the number of articles. Start by giving the abstraction team one to three articles to abstract independently. The team leader should read these articles and abstract them as well. Once everyone has read and abstracted the first set of articles, the abstractors and team leader should meet to review any questions, issues, or difficulties encountered. They should discuss each abstraction field, sharing what each person wrote in that field. During this meeting, the team leader answers questions, clears up any confusion, defines terms the team didn't understand, and clarifies what should be placed in each field. The team also identifies new abstraction fields that should be included in the master abstraction form. This step is repeated with new articles, followed by another team meeting. Repeat this process until the team reaches consensus on all fields needed, at which point a master abstraction form can be developed.
During this iterative process, the lead author provides training and guidance to the abstraction team. As the abstractors read and abstract articles and meet to discuss each article, they begin to develop a common understanding. Once agreement on the master abstraction form is reached, training should continue until the lead author feels confident that the abstractors are well prepared. This usually requires a 1- to 3-week period of intensive instruction, feedback, discussion, and evaluation. However, if the abstractors are experienced healthcare professionals, this process may be shortened.
We've found it useful to develop a codebook with clear operational definitions for all important terms and abstraction fields during this period. For instance, in abstracting articles for an SR that considered nursing shift-to-shift handoffs,10 we defined “location of study” as the “institution name, city, state, county, and country, if provided.”
Once trained, two or more independent abstractors complete the master form for each article. The two abstractors then meet, discuss what they entered in each field, come to consensus on any disagreements, and combine their abstractions into one final abstraction document or database. Some recent SRs use three independent abstractors, with the first two abstracting independently and the third combining their work and deciding on the final version of the abstraction form.
At the outset of training, it may take 2 to 3 hours to read and abstract one article. Over time, the team will gain confidence and a clearer understanding of what should be abstracted. We've found that it takes a fully trained abstractor about 1 hour per article.
Step 9. Conduct quality assessment of included studies' methodologies.
All studies are then assessed for methodological quality using a tool chosen for the topic. One of the authors of this manuscript worked with a team to develop a quality scoring system that can be used to assess both experimental and observational studies in the same SR.10–12
Another quality scoring system is the Kirkpatrick model, which is a proven way to assess quality for an SR focused on educational outcomes.13 Here studies can be ranked from those that assess participant reaction (Level 1 or lowest) to those that assess actual change in targeted outcomes (Level 4 or highest).13 Others have used a hierarchy of research design/evidence.14,15
Rather than attempt an exhaustive list of the many quality scoring tools available, we've provided a few useful examples. The most important aspects of quality scoring are determining the method, noting it in the peer-reviewed protocol, and using two independent reviewers to assign quality scores. We've used percent agreement and Cohen's kappa to document interrater reliability between assigned quality scores.10–12 We believe this provides one more level of assurance that the SR is reproducible.
Step 10. Prepare the final manuscript.
The first step in writing any manuscript is to identify the journal in which the team would like to publish. Members should carefully review and follow the instructions for authors for that journal. We also recommend that they use the PRISMA guidelines16 and checklist17 from the start of the SR research.
If the peer-reviewed protocol was thoroughly developed, most of the introduction and methods will already be written and can be updated now, if needed. The introduction should clearly describe the state of evidence before the SR began. The methods should provide enough detail so that other researchers could reproduce the SR.
In writing the results, it may be useful to start by creating the manuscript's sidebars, including tables or figures. Examples of sidebars include a list of study characteristics,18 a flow diagram of included and excluded articles,1 and a table that briefly describes each study, with its quality score.11,12
The discussion section should provide an aggregate summary of the studies,2,19 interpret the results in relationship to relevant theory and background literature,2 and review the strength of the evidence. The discussion should compare and contrast the results from included studies.2 In addition, it should include limitations of the SR, such as the possibility of having missed relevant studies or having introduced subjectivity when assessing study quality. Finally, the discussion should make recommendations for future studies relevant to the SR topic.
Go for the gold
SRs of the literature are the gold standard for summarizing the evidence found in previously published articles. Following the 10 steps outlined in our two-part series will help nurses create an SR upon which healthcare professionals can rely—and should improve the chances of publication.
10 steps to an SR
- Develop a clear question.
- Conduct an initial literature search to determine if a systematic review on the topic has already been published.
- Assemble a team.
- Create a peer-reviewed protocol.
- Conduct comprehensive literature searches.
- Review the search results and select articles to be included.
- Develop the abstraction database.
- Train abstractors and abstract study details from the included articles.
- Conduct quality assessment of included studies' methodologies.
- Prepare the final manuscript.
Understanding research terms
- Iterative process is a repetitive process that allows for refinement or improvement after each successive iteration.
- Interrater Cohen's kappa is a measure of interrater agreement that adjusts for chance agreement and is considered more conservative than percent agreement.
- The Kirkpatrick model is an evaluation model that has four sequential levels designed to evaluate a training program or course.13
- Percent agreement is a measure of interrater reliability that gives the overall percentage of agreement between reviewers.
- Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) is an evidence-based minimum set of items designed to help authors improve the quality of their reporting in systematic reviews and meta-analyses.17
What's included in the abstraction fields?
Examples of fields to abstract include the following:
- the first author's last name
- year the article was published
- brief description of the article's population, including the numbers of participants
- study design
- follow-up period
- actual data
- major findings/conclusions
- future research suggestions.
References
2. Bettany-Saltikov J. Learning how to undertake a systematic review: part 2. Nurs Stand
3. Savoie I, Helmer D, Green CJ, Kazanjian A. Beyond Medline: reducing bias through extended systematic review search. Int J Technol Assess Health Care
6. Maggio LA, Tannery NH, Kanter SL. Reproducibility of literature search reporting in medical education reviews. Acad Med
7. Wong BM, Etchells EE, Kuper A, Levinson W, Shojania KG. Teaching quality improvement and patient safety to trainees: a systematic review. Acad Med
8. Levine AC, Adusumilli J, Landrigan CP. Effects of reducing or eliminating resident work shifts over 16 hours: a systematic review. Sleep
9. Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year-end changeover on patient outcomes: a systematic review. Ann Intern Med
10. Riesenberg LA, Leitzsch J, Cunningham JM. Nursing handoffs: a systematic review of the literature. Am J Nurs
11. Riesenberg LA, Leitzsch J, Massucci JL, et al. Residents' and attending physicians' handoffs: a systematic review of the literature. Acad Med
12. Padmore JS, Jaeger J, Riesenberg LA, Karpovich KP, Rosenfeld JC, Patow CA. “Renters” or “owners”? Residents' perceptions and behaviors regarding error reduction in teaching hospitals: a literature review. Acad Med
14. Harris RP, Helfand M, Woolf SH, et al. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20(3 suppl):21–35.
18. From Young JQ, Ranji SR, Wachter RM, Lee CM, Niehaus B, Auerbach AD. “July effect”: impact of the academic year-end changeover on patient outcomes. Appendix Table 2. Characteristics of 39 included studies. Ann Intern Med. 2011;155(5):309–315. http://annals.org/article.aspx?articleid=747098#t3–8
19. Haase SC. Systematic reviews and meta-analysis. Plast Reconstr Surg