Now that we’re here, where are we? The JBI approach to evidence-based healthcare 20 years on

Jordan, Zoe PhD; Munn, Zachary PhD; Aromataris, Edoardo PhD; Lockwood, Craig PhD

International Journal of Evidence-Based Healthcare: September 2015 - Volume 13 - Issue 3 - p 117–120
doi: 10.1097/XEB.0000000000000053
DISCUSSION PAPER

Abstract

Approaching 20 years of activity (and 10 years this year since the Joanna Briggs Institute (JBI) model of evidence-based healthcare was first published), the JBI remains one of the most successful international organizations to focus on the synthesis, transfer, and implementation of research evidence. Although similar in age and focus to the Cochrane Collaboration and other organizations of this nature, JBI has, from its inception, taken a broader view of what constitutes evidence to inform clinical decision making at the point of care, recognizing the need to be inclusive in order to answer the many different types of clinical and other care questions that require answers. The Institute published the JBI model of evidence-based healthcare 10 years ago this year, outlining a developmental framework of evidence-based practice that attempted to situate healthcare evidence and its role and use within the complexity of practice settings globally. Guidance on how to conduct reviews of different evidence types was limited at that time and has come a long way in the last decade. With a focus on both the scientific and pragmatic elements of the translational cycle, this article explores the history of methodological development at the Institute and considers where to from here.

Correspondence: Zoe Jordan, PhD, The Joanna Briggs Institute, Adelaide, South Australia, Australia. E-mail: zoe.jordan@adelaide.edu.au

Introduction

As the Joanna Briggs Institute (JBI) approaches its 20th anniversary, it seems timely to reflect on where we have come from during that time and how far we have progressed. When the Institute was established in 1996, the original proposal clearly outlined the methodological priorities. It was premised on the concept that the diversity of healthcare practices, particularly in nursing, required a diversity of research methodologies. Thus, it was clear that the methodological approach of the Institute needed to be ‘eclectic enough to incorporate both classic medical and scientific designs and the emerging qualitative and action-oriented approaches from the humanities and the social and behavioral sciences’.1

Although methodological development was always on the agenda for JBI, it was slow to start. By the late-1990s, most of the work around evidence-based healthcare had been carried out in the field of medicine and focused heavily on assessing the effectiveness of interventions. In 1997, barely a year after the Institute was founded, Professor Alan Pearson, the Institute's founder, wrote about the Institute's definition of effectiveness as focusing on ‘health outcomes from the client, community, clinical, and economic perspectives’; he continued, ‘The Institute regards the results of well designed research studies grounded in any methodological position as providing more rigorous evidence than anecdotes or personal opinion’.2 This conceptual thinking framed how the Institute would progress methodologically and pragmatically and would become the defining force behind this multicomponent, broad, inclusive approach internationally, setting it apart from other similar organizations on the world stage.

In 2005, Pearson et al.3 published a seminal article on the JBI model of evidence-based healthcare. Perhaps surprisingly, now a decade later, much of what was contained in this article still holds true. The basic fundamental tenets of evidence-based healthcare remain unchanged: using the best available research evidence to inform clinical decision making. The article described a model depicting ‘four major components of the evidence-based healthcare process as healthcare evidence generation; evidence synthesis; evidence (knowledge) transfer; and evidence utilization’.3

There are two key conceptualizations here that are imperative to how we approach evidence-based healthcare. The first concerns how we define evidence (i.e., evidence of feasibility, appropriateness, meaningfulness, and effectiveness), and the second positions clinical knowledge need as a central driver of question derivation. Ultimately, the community served by the activity of systematically reviewing research evidence comprises those who work in and use health services. Thus, the questions asked by researchers and systematic reviewers should be informed by that community.

What has been important to the JBI since its inception, and what has influenced its approach to systematic reviews in particular, is its conceptualization of what counts as evidence to inform clinical decision making at the point of care.

‘Useful knowledge’ and what counts as evidence

So what characterizes ‘useful knowledge’? Explicit and propositional knowledge are key criteria for achieving scientific validity, but more ambiguous knowledge serves important functions in organizational life and thus possesses pragmatic validity. In 2004, Pearson4 discussed the nature of evidence for health professionals and argued for a pluralistic approach when considering what counts as evidence for healthcare practices. The term evidence, within the framework of the JBI model, relates to the ‘basis of belief, the substantiation or confirmation that is needed to believe something is true’.3 An inclusive approach to what counts as evidence has been an enduring underpinning feature of the Institute, informing its approach to evidence synthesis.

In 2013, Nutley et al.5 suggested that more work was required to better understand what should count as good evidence. Central to their argument was a concern around the ‘standards’ of evidence that could be used to underpin the development of practice recommendations. Their overarching argument is that ‘evidence quality depends on what we want to know, why we want to know it, and how we envisage that evidence being used. In varying contexts, what counts as good evidence will also vary considerably’.

Of course, JBI maintains that both tacit and explicit (or empirical, theoretical, and experiential) knowledge and clinical wisdom are important to clinical decision making, hence the need for systematic reviews of multiple forms of evidence to inform clinical practice. This is also why the Institute refers to evidence of feasibility, appropriateness, meaningfulness, and effectiveness. Research may well be privileged as a more reliable way of knowing, but wherever research evidence does not exist or is inconclusive, health professionals are still required to make decisions and provide care. That said, research evidence should not be used without due regard for its validity, and measures are available for establishing this as well.

To this end, hierarchies of evidence have existed for a number of years, and the JBI guidance on this has gone through a number of iterations since the first JBI levels were produced in 2005. In an effort to better reflect the reality of what evidence is appropriate and available to answer a question, the Institute has revised its levels of evidence by introducing different hierarchies for different forms of evidence, such as diagnostic, prognostic, and economic evaluations, in addition to effectiveness and meaningfulness. However, it is now widely acknowledged that levels of evidence or hierarchies have a number of limitations, as they do not consider other factors that can affect the validity and quality of evidence.6 In fact, this has led prominent experts in the field of evidence-based healthcare to ask the following question: ‘is the hierarchy of evidence pyramid dead?’7 Despite the known limitations of evidence hierarchies, JBI still see these as useful for educational purposes, during the development of evidence summaries and rapid reviews, and for preranking estimates of quality in study findings prior to evaluating other factors that can affect estimates of quality.8 These levels are not a definitive measure of study quality and should not act as a substitute for critical appraisal and clinical reasoning.8 Recently, JBI have endorsed the Grading of Recommendations Assessment, Development and Evaluation (GRADE) approach to establishing the quality of evidence, particularly in reviews of effectiveness conducted by the Institute and its collaborators.9

The science of synthesis

Considerable progress has been made in the area of synthesis science over the course of the last decade. In the original 2005 article, evidence synthesis is defined as ‘the evaluation of research evidence and opinion on a specific topic to aid in decision making in healthcare … consisting of three elements in the model: theory, methodology, and the systematic review of evidence’.3

Today, if you search PubMed using the keywords ‘systematic review’, you can see that the results by year have increased from six in 1947 to 59 542 in 2015. This is a substantial increase by anyone's account, but for all of this ‘information’, where are we, really? In fact, the methodologies underpinning the systematic review of evidence have come quite a long way in the past 10 years.
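
As a minimal sketch of how such a count might be reproduced, the short Python example below queries the NCBI E-utilities esearch endpoint for the number of PubMed records matching ‘systematic review’ in a given publication year. It is an illustration only: the exact figures returned will depend on the search fields, filters, and date of the query, so they will not necessarily match those quoted above.

# Minimal sketch: counting PubMed records matching 'systematic review' by
# publication year via the NCBI E-utilities esearch endpoint. Counts will
# differ from those quoted in the text depending on search fields and filters.
import json
import time
import urllib.parse
import urllib.request

EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count(term: str, year: int) -> int:
    """Return the number of PubMed records for `term` published in `year`."""
    params = urllib.parse.urlencode({
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # restrict by publication date
        "mindate": str(year),
        "maxdate": str(year),
        "retmode": "json",
    })
    with urllib.request.urlopen(f"{EUTILS}?{params}") as resp:
        data = json.load(resp)
    return int(data["esearchresult"]["count"])

if __name__ == "__main__":
    for year in (1947, 2005, 2015):
        print(year, pubmed_count('"systematic review"', year))
        time.sleep(0.4)  # stay within NCBI's rate limit for unauthenticated requests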

The Institute's first foray into the development of methods for the synthesis of evidence other than effectiveness occurred with the initiation of a qualitative review methods group in 2000. This group developed the meta-aggregative theory underpinning the JBI Qualitative Assessment and Review Instrument (QARI). Unique internationally, this approach was one of the first developed to deal with research evidence outside of the quantitative paradigm.

The development of other methodologies has been facilitated by the advent of international methodology groups composed of members from across the Institute's international collaboration. There are now no fewer than 11 methodology groups, encompassing effectiveness, qualitative, economic, narrative opinion and text, correlational and association, and diagnostic evidence, as well as groups looking at mixed methods, umbrella/overview and scoping reviews, and levels of evidence and grades of recommendation.

All are a-priori, protocol-driven processes, as with any review, involving the same basic steps from question development through to searching, selection, critical appraisal, data extraction, and synthesis. The JBI methodology groups are developing underpinning theory as well as practical tools to assist reviewers in pragmatically undertaking critical appraisal, data extraction, and synthesis. Each group has developed guidance that will form new chapters in the JBI Reviewers’ Manual, with complementary software and education and training programs and materials to follow.
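
Purely as an illustration (the stage names below paraphrase the basic review steps described above and are not an official JBI specification or software schema), the shared a-priori structure of these protocol-driven reviews might be sketched as an ordered pipeline:

# Illustrative sketch only: the basic stages shared by protocol-driven reviews,
# as described above. Stage names are paraphrased, not an official JBI schema.
from dataclasses import dataclass, field
from enum import Enum, auto

class Stage(Enum):
    QUESTION_DEVELOPMENT = auto()
    SEARCHING = auto()
    STUDY_SELECTION = auto()
    CRITICAL_APPRAISAL = auto()
    DATA_EXTRACTION = auto()
    SYNTHESIS = auto()

@dataclass
class ReviewProtocol:
    """An a-priori protocol: question, evidence type, and stages are fixed up front."""
    question: str
    evidence_type: str  # e.g., 'effectiveness', 'qualitative', 'economic'
    completed: list = field(default_factory=list)

    def advance(self, stage: Stage) -> None:
        # Stages must be completed in the fixed order defined by the Stage enum.
        expected = list(Stage)[len(self.completed)]
        if stage is not expected:
            raise ValueError(f"expected {expected.name}, got {stage.name}")
        self.completed.append(stage)

# Example: a review of effectiveness evidence working through its first two stages.
protocol = ReviewProtocol(question="Does intervention X improve outcome Y?",
                          evidence_type="effectiveness")
protocol.advance(Stage.QUESTION_DEVELOPMENT)
protocol.advance(Stage.SEARCHING)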

This is a very healthy suite of methodological development, covering a broad range of evidence types. It is clear that the Institute and Collaboration are not simply doggy paddling around in a methodological haze but actively seeking to ensure that JBI continues to meet its commitments to a broad conceptualization of evidence, what counts as ‘best available’ evidence, and to offer methodology, method, and software to account for these evidence types.

When theories become tools

The practical alignment of an information technology software development strategy with methodological advancement has always been a critical element of JBI's progress. As the methodologies outlined in the previous section have been brought to life, the need for software tools that can pragmatically synthesize these different forms of evidence has become increasingly evident.

In 2014, the Cochrane Tech Group noted the importance of technology to the systematic review process, stating that ‘since the birth of systematic reviews, technology has been an integral part of efforts to understand health evidence. Nevertheless, review authors commonly conduct the majority of their work on a patchwork of general software products poorly adapted to their needs, much of the data they handle is not captured for future use, and the core review output of a static PDF document limits the ability to search and process the contents of the review’.10 This is of course very true and JBI has not shied away from ‘attempting’ to resolve these issues.

In 2004, the first iteration of the JBI-System for the Unified Management of the Assessment and Review of Information (SUMARI) was released. At the time, it made the Institute a standard setter for methodological development.1 It comprised a management system [Comprehensive Review Management System (CReMS)] along with four analytical modules for the review of quantitative research [Meta Analysis of Statistical Assessment and Review Instrument (MAStARI)], qualitative research [Qualitative Assessment and Review Instrument (QARI)], economic research [Analysis of Cost, Technology and Utilisation Assessment and Review Instrument (ACTUARI)], and narrative, opinion, and text [Narrative, Opinion and Text Assessment and Review Instrument (NOTARI)]. Over the last 10 years, the software has undergone several iterations, with differing levels of upgrade and new features introduced to address recognized bugs and other issues. The current version (5.03), released in 2014, fixed a number of issues and improved usability. However, it is recognized that the software is antiquated from a user and technological perspective; as a result, it is being completely redeveloped and is due for release in 2016. The new JBI SUMARI will include new features as well as extra modules to account for the new methodologies of synthesis that have been developed.

The next frontier: closing the knowledge translation gap

We make bold claims about the use of reviews to inform practice, but do they really? In reality, we recognize that the translation of evidence into practice does not occur in just one moment in time, but across many. Although we continue to seek to establish more rigorous approaches to the translation of evidence into practice, with particular reference to growing the body of knowledge around implementation, there is still work to be done in the field of synthesis science to assist in closing these gaps.

Last year, Elliott et al.11 suggested that ‘living systematic reviews’ may help to bridge the gap between evidence and practice. In response to the criticism that reviews are often out of date by the time they are published, the authors propose living systematic reviews as a contribution to the methods of evidence synthesis to help to address this challenge by combining currency with rigor to enhance the utility of health evidence.

Equally, Long12 recently proposed routine piloting in systematic reviews. His claim is that reviewers become overwhelmed by the volume of data being processed, which leads to inefficient data extraction. Recognizing that a certain amount of ‘piloting’ already occurs in the systematic review process (e.g., scoping searches), the article proposes that, in order to maximize efficiency and minimize error when conducting large-scale reviews, piloting could be extended to include all stages of the review process. In essence, this means conducting a ‘mini systematic review’ on a sample of included studies to refine data extraction and synthesis.12

Although living reviews and pilots may not be the ‘next frontier’, it is clear that there is more work to be done to make reviews more usable. Other areas for consideration include the integration of existing reviews into new reviews; the inclusion of other evidence types in reviews; the development of methodologies for the review of new and emerging research; text mining for study identification, appraisal, and extraction; and certainty-based approaches to study screening.

Conclusion

So in response to the question ‘now that we’re here, where are we?’, methodological development will continue to play an important role in the Institute's evolution. The question is: what do we need to do in this space to ensure we can be responsive and innovative? It is not simply a question of methodology but of how it translates to technology and, in turn, of the ability of reviewers to apply methodologies in a very pragmatic sense. Something that has always been a strength of the Institute is its ability to generate ‘relevant’ science. Every methodology developed and promoted is founded on meeting the requirements of the clinical community. This is central to the Institute's vision and mission and will continue to inform development moving forward.

References

1. Jordan Z, Donnelly P, Pittman E. A short history of a BIG idea: the Joanna Briggs Institute 1996–2006. Melbourne: Ausmed Publications; 2006.
2. Pearson A. Basing practice on the evidence. Austr Nurs J 1997; 5:22.
3. Pearson A, Wiechula R, Court A, Lockwood C. The JBI model of evidence based healthcare. Int J Evidence Based Healthcare 2005; 3:207–215.
4. Pearson A. Balancing the evidence: incorporating the synthesis of qualitative data into systematic reviews. Int J Evidence Based Healthcare 2004; 2:45–64.
5. Nutley S, Powell A, Davies H. What counts as good evidence? Provocation paper for the Alliance for Useful Evidence. Fife, Scotland: Research Unit for Research Utilisation (RURU), School of Management, University of St Andrews; 2013.
6. Guyatt G, Glasziou P, Montori V, Schünemann H. When can we be confident about estimates of treatment effects? Med Roundtable Gen Med Ed 2014; 1:178–184.
7. Montori V. Is the hierarchy of evidence pyramid dead? 11 April 2015, 11.08 am. Tweet https://twitter.com/vmontori/status/586954070537056257.
8. The Joanna Briggs Institute Levels of Evidence and Grades of Recommendation Working Party. Supporting document for the Joanna Briggs Institute levels of evidence and grades of recommendation. The Joanna Briggs Institute; 2014. www.joannabriggs.org/assets/docs/approach/Levels-of-Evidence-SupportingDocuments-v@.pdf. [Accessed 21 March 2015].
9. GRADE Working Group. Education and debate: grading quality of evidence and strength of recommendations. BMJ 2004; 328:1490.
10. Elliott J, Sim I, Thomas J, et al. #CochraneTech: technology of systematic reviews [editorial]. Cochrane Database Syst Rev 2014; 9:ED000091.
11. Elliott JH, Turner T, Clavisi O, et al. Living systematic reviews: an emerging opportunity to narrow the evidence-practice gap. PLoS Med 2014; 11:e1001603.
12. Long L. Routine piloting in systematic reviews – a modified approach. Syst Rev 2014; 3:77.
Keywords:

evidence based; methodological development; systematic reviews

International Journal of Evidence-Based Healthcare © 2015 The Joanna Briggs Institute