Research highlights
There are many different tools available to assist with the conduct of systematic reviews and the development of guidelines.
The survey identified the most commonly used tools amongst members of an international network (the Guidelines International Network) and how frequently they are used.
Barriers to using tools included the cost of some tools, lack of customization, and missing features.
Facilitators for using tools included improved efficiency of the process, the ability to share data/information, and free availability.
Background
Systematic reviews, and the recommendations or clinical practice guidelines (CPGs) arising from them, form the basis of evidence-based healthcare. Over time, systematic review and guideline development methods have become increasingly demanding and complex. As such, the development of systematic reviews and trustworthy CPGs requires a significant investment of resources: systematic reviews alone take anywhere from 6 months to 2 years to complete,1–3 with an average development time of 67.3 weeks shown in one study.4 Given the need to provide up-to-date and trustworthy recommendations for clinical practice, this length of time is increasingly difficult to justify. Specialist software programs have therefore been developed to facilitate, streamline and support the systematic review and guideline development process.
In 1994, the Cochrane Collaboration released 'RevMan', a revolutionary software program for conducting systematic reviews to evaluate the effectiveness of medical interventions. Similar software programs were released in the following years,5 and the last 15 years have seen rapid growth, to the point where there are now over 140 software tools/programs6 that can assist in the systematic review development process. Within the systematic review, CPG development and health technology assessment communities, the use of software applications is now well ingrained, which has led to a proliferation of specialist tools to support evidence synthesis and the development of recommendations.6 To ensure that clinical guideline developers are aware of these tools and to facilitate interoperability amongst them, the Guidelines International Network (G-I-N) Tech working group was established to investigate the sharing of information between the various electronic tools used during systematic review and CPG development, and to provide a forum to discuss the best way to use them. This forum also provides an opportunity for tool developers to interact with each other and plan future interoperability between their tools. However, it is currently unclear what tools are used by G-I-N members, what steps of the guideline development process they are used for, and whether there are barriers to their use.
Therefore, this study employed survey methods to investigate the experience and perspectives of G-I-N members on the use of systematic review and CPG development tools.
Methods
Research questions
The survey posed four primary research questions to G-I-N members: First, what tools are currently being used by systematic reviewers and CPG developers? Second, how are systematic review and CPG development tools used? Third, what are the facilitators and barriers associated with using systematic review and CPG development tools? Fourth, which features and functionality are missing from current tools and/or would be suggested for future tools?
Research design
As the intent of this study was exploratory, a mixed quantitative and qualitative approach was employed using an online survey. This study is reported in accordance with the Reporting Guidelines for Survey Research.7 Ethics approval was granted by the University of Adelaide (protocol number: H-2017-050).
A list of survey questions was developed by the research team and members of the G-I-N Tech group, consisting of methodological and technical experts in guideline development approaches and technology. This was done through brainstorming, meetings, and consensus. Items in the survey, particularly question 10, were informed by resources such as the GIN-McMaster Checklist8 and the AGREE II tool.9 The list of tools was generated through G-I-N Tech working group members' knowledge, although participants could enter additional tools. Careful consideration was given to formulating questions that avoided replication, were clear and concise, and provided adequate coverage of the main aims of the study. Preliminary questions focussed on background professional information [type of G-I-N membership (individual versus organizational) and number of systematic reviews and/or CPGs developed per year]. The body of the survey comprised questions relating to participants' experience and perspectives of using tools to develop systematic reviews and CPGs; a mix of response types was used, such as nominal (e.g., yes/no/not applicable) and ordinal (e.g., agree or disagree strongly, moderately or slightly) scales. In addition, there were sections in which free text (open responses) could be provided. Pilot testing of the survey was undertaken by the research team, and feedback was used to make small amendments to the terms used to frame questions (for clarity) and to the decision logic (for ease of survey flow). The final survey can be found in Appendix 1, https://links.lww.com/IJEBH/A32.
Participants and recruitment
The sample for this survey included all G-I-N members (both organizational and individual) in June 2017. In the first instance, notices advertising the study were published in G-I-N newsletters, which are distributed to members via e-mail. In addition, G-I-N members were specifically identified and directly recruited. This involved an administrative contact officer, based at G-I-N, generating a list of all current G-I-N members and sending a G-I-N-branded organizational e-mail which included the participant information sheet (comprising the principal investigator's professional qualification and information regarding the main purpose of the study), consent form, and a link to the online survey. Participants were required to agree to the information provided in the participant information sheet (which contained details regarding the study's purpose and how the data would be used) before they could proceed to the survey. There was no financial incentive to participate in the study.
Data collection
Data were collected via an online survey to optimize reach and ease of completion, using SurveyMonkey software.10 The survey was first emailed to potential participants on 29 June 2017; a reminder e-mail was sent after 2 weeks and data collection concluded (survey closed) on 15 September 2017. Participants were instructed to complete all questions and respond from an organizational perspective.
Data analysis
Closed data from the survey (nominal and ordinal) were analysed using descriptive statistics (i.e., frequency counts). The data collected through open-ended questions were analysed and reported as summarized themes or verbatim responses. All quantitative statistical analyses were conducted using Microsoft Excel.
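As an illustration only, the frequency counts described above amount to a simple tabulation of each question's response options. A minimal sketch of this kind of analysis is shown below (the actual analysis was conducted in Microsoft Excel, and the response values here are hypothetical):

```python
from collections import Counter

# Hypothetical responses to one nominal survey item (yes/no/not applicable);
# the real analysis used the actual SurveyMonkey exports.
responses = ["yes", "yes", "no", "not applicable", "yes", "no"]

# Descriptive statistics as frequency counts, mirroring the analysis described above
counts = Counter(responses)
total = len(responses)

for answer, count in counts.most_common():
    print(f"{answer}: {count} ({count / total:.0%})")
```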
Results
At the time of the survey, the G-I-N membership consisted of 259 members (103 organizational and 156 individual). The overall response rate (RR) for the survey was 34% (87 of 259), with an organizational member RR of 52% (54 of 103) and an individual member RR of 21% (33 of 156). Although 87 people contributed data to the survey, there were not 87 responses to each question, as some questions were displayed only on the basis of answers to previous questions, and some questions were skipped by participants. As there were questions specific to the use of each tool and participants sometimes used multiple tools, for some questions in which the data are aggregated there are more than 87 responses.
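For reference, the reported response rates follow directly from the member counts:

\[
\mathrm{RR}_{\text{overall}} = \frac{87}{259} \approx 34\%, \qquad
\mathrm{RR}_{\text{org}} = \frac{54}{103} \approx 52\%, \qquad
\mathrm{RR}_{\text{ind}} = \frac{33}{156} \approx 21\%.
\]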
The most common age range among respondents was 35–44 years (29%) (Table 1). The largest number of respondents developed 1–5 guidelines a year (48%) (Table 2). The mean length of time that respondents' organizations had been working on guidelines was 14.3 years (SD 7.8). From the list of tools available for selection, GRADEpro GDT was the most widely used tool, followed by Dropbox and RevMan (Table 3).
Table 1: Age range of responders
Table 2: Number of guidelines developed per year
Table 3: Tools used by respondents
Aggregating across all of the individual tool responses, the most common reason to use an online tool was to 'be more efficient' (Table 4). Tools were most commonly used a couple of times a month (Table 5), and most tools were used for every guideline (Table 6).
Table 4: Answers to the question on reasons for using the tool
Table 5: Answers to the question on frequency of use
Table 6: Answers to the question on whether the tool was used for every guideline
The most common steps in the process for which respondents applied the tools were summarising the evidence and considering additional information; screening abstracts; judging quality (strength or certainty); and assessing risk of bias. The least common steps were managing alerts for new PICO (population, intervention, comparator, outcome) questions; identifying the target audience and selecting the topic; and snowball searching (Table 7).
Table 7: Respondents' reports of the stages in the guideline development (or systematic review) process that tools were used for
The vast majority of users considered the individual tools they had used useful (Table 8). Similarly, the majority stated that the tool was user friendly (Table 9). Most users would use the tool again (Table 10), and 95% would recommend it to other organizations (90% would recommend the specific tool, 5% would not recommend a specific tool).
Table 8: Answers to the question on whether the tool was useful
Table 9: Answers to the question on whether the tool is user friendly
Table 10: Answers to the question on whether they would use the tool again
Participants described a range of facilitators and barriers to the use of online tools during systematic review and clinical guideline development. The responses are summarized in Table 11 and relate mainly to tool availability/cost; striking a balance between structured data fields and the ability to tailor these to meet needs; overall usability (i.e., tool features/functionality and workflows); and technical issues (e.g., internet access, data interchangeability).
Table 11: Facilitators and barriers to the use of online tools
An open-ended question asked what features were missing overall from the tools currently being used. Suggestions are included in Table 12.
Table 12: Suggested tool features
Discussion
The purpose of this survey was to investigate the experience and perspectives of G-I-N members on the use of systematic review and CPG development tools. The survey identified that the responding G-I-N members currently use 26 different tools to produce systematic reviews and CPGs. Most tools were used at least a couple of times monthly (75%) and for most or every systematic review and CPG undertaken within the organization (69%). Organizations used the tools for different steps in the authoring process, with the majority used during the early phase of screening abstracts and data collection (63%). Only a small number of respondents reported using tools for the wording of recommendations, and only 26 reported using tools for the publication and dissemination phase.
Although the tools assisted with the overall efficiency of systematic review and CPG development, this benefit was tempered by user-reported barriers of tool availability/cost, the balance between structured and tailorable data fields, and technical issues including overall usability. It is also hoped that these results will aid in the sharing of information between the various tools and platforms. As an international group of systematic review and guideline development online tool developers and users, the G-I-N Tech working group aims to provide guideline creators and tool makers with an overview of what is available to aid their decision making. As this survey shows, many different tools are being used for different purposes, and it is unlikely that there will ever be just one tool to fit all purposes and users. Given this, the G-I-N Tech group also aims to facilitate collaboration between developers to encourage harmonization and integration between tools, including the transfer of data between them. G-I-N Tech aspires to facilitate the integration and sharing of evidence across different electronic platforms and organizations by developing repositories of tools, models, and expertise in this field.
The survey helps us to understand how these tools are used and for which steps in the guideline development and systematic review processes. As the survey shows, systematic reviewers and guideline developers use different tools for their various features and for different steps in the development process, with most using tools in the early phases and not for dissemination. In a world where digital information is commonly accessible elsewhere, and where the framework of living evidence is on the rise, this points to a clear gap in the use of digital tools in guideline work.
In our open text responses, some participants wanted the software to 'do it all' or provide a one-stop shop, while others only needed particular features. Looking at software development in other areas, such one-stop shops are becoming less common; platforms instead integrate to take advantage of the capabilities of specialized systems. Participants may have reported the need for a one-stop shop because integration between tools in the guideline development world is poor. This is worth taking into consideration for tool developers.
In a study addressing a similar topic, conducted by Hassler et al.11 in the software engineering community, the most desired features of systematic review support tools were support for collaboration and for search and selection. This aspect of using online tools to collaborate and maintain a shared repository of resources was also highlighted in our survey. Systematic reviews and guidelines cannot be done by an individual, and many of these tools facilitate the work of a group more effectively than traditional approaches such as emailing Word documents to large groups.
Technology has advanced rapidly since the days of the first systematic review software over 20 years ago. Newer software programs are taking innovative approaches to streamlining systematic review and guideline development, with machine learning,12 harnessing the power of the crowd,13 and systematic review automation14 all being exciting new developments in systematic review and guideline technology. With the proliferation of systematic review and guideline development software, it can be difficult for groups to determine exactly what software is available to support them throughout the process. A useful resource is the SR Toolbox, an online catalogue of tools that shows where they fit within the systematic review and guideline development process.6 It also provides useful information such as whether they are freely available or there is a cost, how they can be accessed, and what steps in the systematic review process they support. At this stage, over 140 tools have been identified in the SR Toolbox. However, our survey found that across the G-I-N membership only 26 tools are actively used. There may be many reasons behind this gap, including that G-I-N members may not have heard of some of the alternatives, or that these alternatives may not be as useful as those identified within our survey.
As systematic review methods have become increasingly complex, it is important that tool developers simplify and streamline the process as much as possible rather than adding complexity through overly complicated user interfaces. As found in a study by Marshall et al.,15 some of the tools had very steep learning curves or were too complicated for users; this barrier was also reported in our survey. The results of this survey may provide useful information for tool developers that can be used to improve the design and conceptualization of future developments.
Limitations
Although our sample size is relatively small, with an RR of 34%, it provides an important contribution to the area of systematic review and guideline development software use and behaviour, which is currently under-evaluated. Our study, along with the study by Marshall et al.,15 can be viewed as useful hypothesis-generating research that can be further evaluated in larger studies of the systematic review and guideline development communities.
Conclusion
This survey of guideline developers and systematic reviewers found that the majority of guideline developers who responded used one or more of the online tools available to assist them in their work. The most widely used tools were GRADEpro GDT, Dropbox, and RevMan. As to whether the tools used by the various organizations were considered useful during the development process, the answer was a resounding 'yes', and the majority stated that they would recommend the tools they use to another organization. The results showed that guideline developers mostly used digital tools in the evidence-gathering phase of guideline development, and little in the publication and dissemination phase. These results should be interpreted with care given the small sample size and low RR.
Acknowledgements
The authors would like to thank members of G-I-N Tech for contributing ideas for the survey. The authors would also like to thank the G-I-N board and staff for their support, particularly Elaine Harrow and Allison Smith. We would like to thank all G-I-N members who volunteered their time to complete this survey.
Note
The Guidelines International Network (G-I-N) is an international not-for-profit association of organizations and individuals involved in the development and use of clinical practice guidelines. G-I-N is a Scottish Charity, recognized under Scottish Charity Number SC034047. More information on the Network and its activities is available on its website: www.g-i-n.net. This paper reflects the views of its authors, and the G-I-N is not liable for any use that may be made of the information contained therein.
The G-I-N (www.g-i-n.net), a Scottish Charity recognized under Scottish Charity Number SC034047, provided support for the collaboration of authors. The G-I-N Board of Trustees had an opportunity to comment on this paper but did not have any role in the development or preparation of the manuscript for publication.
Conflicts of interest
The authors report no conflicts of interest.
References
1. Ganann R, Ciliska D, Thomas H. Expediting systematic reviews: methods and implications of rapid reviews. Implement Sci 2010; 5:56.
2. Khangura S, Konnyu K, Cushman R, Grimshaw J, Moher D. Evidence summaries: the evolution of a rapid review approach. Syst Rev 2012; 1:10.
3. Harker J, Kleijnen J. What is a rapid review? A methodological exploration of rapid reviews in health technology assessments. Int J Evid Based Healthc 2012; 10:397–410.
4. Borah R, Brown AW, Capers PL, Kaiser KA. Analysis of the time and workers needed to conduct systematic reviews of medical interventions using data from the PROSPERO registry. BMJ Open 2017; 7:e012545.
5. Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. JBI Rep 2004; 2:45–64.
6. Marshall C, Brereton P. Systematic review toolbox: a catalogue of tools to support systematic reviews. In: Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering; 2015. ACM.
7. Bennett C, Khangura S, Brehaut JC, et al. Reporting guidelines for survey research: an analysis of published guidance and reporting practices. PLoS Med 2010; 8:e1001069.
8. Schünemann HJ, Wiercioch W, Etxeandia I, et al. Guidelines 2.0: systematic development of a comprehensive checklist for a successful guideline enterprise. CMAJ 2014; 186:E123–E142.
9. Brouwers MC, Kho ME, Browman GP, et al. AGREE II: advancing guideline development, reporting and evaluation in healthcare. CMAJ 2010; 182:E839–E842.
10. SurveyMonkey Inc. SurveyMonkey. San Mateo, California, USA: SurveyMonkey Inc.; 2018.
11. Hassler E, Carver JC, Hale D, Al-Zubidy A. Identification of SLR tool needs – results of a community workshop. Inf Softw Technol 2016; 70:122–129.
12. Gates A, Vandermeer B, Hartling L. Technology-assisted risk of bias assessment in systematic reviews: a prospective cross-sectional evaluation of the RobotReviewer machine learning tool. J Clin Epidemiol 2018; 96:54–62.
13. Thomas J, Noel-Storr A, Marshall I, et al. Living systematic reviews: 2. Combining human and machine effort. J Clin Epidemiol 2017; 91:31–37.
14. Beller E, Clark J, Tsafnat G, et al. Making progress with the automation of systematic reviews: principles of the International Collaboration for the Automation of Systematic Reviews (ICASR). Syst Rev 2018; 7:77.
15. Marshall C, Kitchenham B, Brereton P. Tool features to support systematic reviews in software engineering – a cross domain study. e-Inform Softw Eng J 2018; 12:79–115.