Innovations in the systematic review of text and opinion

McArthur, Alexa MPHC, MClinSc1; Klugárová, Jitka PhD2; Yan, Hu PhD3; Florescu, Silvia PhD4

International Journal of Evidence-Based Healthcare 13(3):p 188-195, September 2015. | DOI: 10.1097/XEB.0000000000000060



An evidence-based healthcare approach plays a major role in the clinical decision-making process. Every decision made by a healthcare professional should be based on the best available evidence, clinical experience and patient preferences. The best available evidence is usually understood as the statistically demonstrated results of primary or secondary quantitative studies. Over the past three decades, results from qualitative studies have also come to be considered scientific evidence. However, in the absence of evidence derived from rigorous primary research studies, what are the options? What is the best available evidence when quantitative and qualitative studies are missing?1

Expert opinion has an important role to play in evidence-based healthcare, as it can be used to either complement empirical evidence or, in the absence of scientific evidence, stand alone as the best available evidence. This is not to say that the superior quality of evidence derived from rigorous research is to be denied; rather, that in its absence, it is not appropriate to discount expert opinion as non-evidence.2

Text and opinion-based evidence (which may also be referred to as non-research evidence) is drawn from expert opinions, consensus, comments, assumptions or assertions that appear in various journals, magazines, monographs and reports.2–5 An important feature of using opinion in evidence-based practice ‘is to be explicit when opinion is used so that readers understand the basis for the recommendations and can make their own judgement about validity’.5 It is also important to highlight that one expert opinion may not be as valid as a synthesis of the opinion of a group of experts, as displayed in the formation of consensus guidelines.

Evidence, the practice gap and re-consideration of textual evidence

Evidence-based healthcare focuses on the need to use interventions that are supported by the most up-to-date evidence or knowledge. Many clinical questions cannot be fully answered by evidence derived from quantitative and/or qualitative research designs alone, since many areas of clinical care are supported by clinicians' tacit knowledge, derived from their clinical experience or the dominant healthcare discourse at the time of practice.1 It is clearly recognized that diverse knowledge/evidence types are required to inform practice, and for this reason, comprehensive systematic review methods have been formulated to explore not only the evidence on the effectiveness of interventions ('knowing what' type of evidence),1 but also evidence related to subjective human experiences, culture, values, ethics, health policy, or the accepted discourse at the time of practice ('knowing how' type of evidence).1 The Joanna Briggs Institute (JBI), a leader in evidence-based healthcare, is a global collaboration of health researchers and clinicians who advocate this unique methodology for the systematic synthesis of expert opinion.

Textual evidence is, according to Mattingly6 and Worth,7 the narrative expression of the clinical wisdom of health professionals. Narrative knowledge does not fall into a conventional academic reasoning system of induction and deduction, yet it can offer health professionals and other care providers content-specific guidance and insights into improving their everyday practice. An example of this is a comprehensive systematic review conducted on the best evidence for assisted bathing of older adults with dementia,8 in which the textual component aimed to provide supplemental evidence to the quantitative and qualitative components of the review.

Basis of the synthesis of text and opinion

The synthesis of expert opinion findings within the systematic review process is not well recognized in mainstream evidence-based practice,2 and it is acknowledged that efforts to appraise often conflicting opinions are tentative. However, in the absence of research studies, the use of a transparent systematic process to identify the best available evidence drawn from text and opinion can provide practical guidance to practitioners and policy makers, drawn from the experience and knowledge of expert practitioners and professional bodies.


The aim of this study is to highlight the importance and role of expert opinion synthesis in healthcare, and to present the partial results of an international methodological group's review of, and innovations to, the current guidance for conducting a systematic review of text and opinion.


A methodological working group was established in mid-2014, comprising researchers from the JBI and international collaborating centres who had expressed an interest in text and opinion systematic reviews. First, the members of the methodological group independently reviewed the current theoretical background1 and the guidance presented in the JBI Reviewer's Manual.2 We then tested the current guidance in practice by working in pairs (a primary and a secondary reviewer in each pair) to develop two pilot JBI Narrative, Opinion, Text Assessment and Review Instrument (NOTARI) systematic reviews, using the JBI Reviewer's Manual 20142 to work through any issues, including the suitability of topic selection, critical appraisal, data extraction and synthesis. We developed draft NOTARI protocols using specific JBI software for systematic review development and highlighted unclear areas for discussion. We also tested the subsequent steps in systematic review development, such as developing a systematic search strategy in recommended databases, critically appraising relevant papers, and extracting and synthesizing data using standardized instruments and tools.

Current text and opinion reviews published in the JBI Library of Systematic Reviews and Implementation Reports were accessed as examples, and were used as a basis for discussion within the methodological group. We scoped the literature for other text and opinion/narrative systematic reviews from other researchers, to inform and update the JBI approach.

Monthly online meetings and some face-to-face meetings were held to discuss the strengths and weaknesses of the current guidance. A further workshop was held at the JBI International Colloquium in Singapore in November 2014, including an international group discussion and further feedback regarding methodological issues. Given the nature of this type of evidence, debate about and development of this methodology will no doubt continue.


Current state analysis

The Joanna Briggs Institute has already established the methodological basis for systematic reviews of narrative, text and opinion, and has prepared a basic guide to developing this type of systematic review as part of the JBI Reviewer's Manual.2 Jordan et al.1 published a monograph on synthesizing evidence from narrative, text and opinion. The JBI systematic review of narrative, text and opinion-based evidence is conducted using the JBI Comprehensive Review Management System (CReMS) software and the JBI-NOTARI analytical module within it. Both are part of the JBI System for the Unified Management, Assessment and Review of Information (SUMARI), which includes modules for reviews of different evidence types. The NOTARI module is designed to assist reviewers to appraise, extract and analyse data from textual and expert opinion-based evidence.

The members of the methodological working group first reviewed the JBI Reviewer's Manual chapter alongside the monograph. This provided an important theoretical background to this unique methodological approach to synthesizing non-research narrative evidence. The chapter contains detailed guidance for developing this type of systematic review. However, we also found ambiguities and points of confusion in parts of the process, including the inclusion criteria, search strategy, critical appraisal and data extraction (see Table 1, Figs. 1 and 2). These were confirmed through practical testing when developing a NOTARI protocol.

Table 1:
Strengths and weaknesses of the current guidance
Figure 1:
Standardized critical appraisal instrument for assessing the methodological quality of narrative, expert opinion and text.2
Figure 2:
Standardized instrument for textual data extraction for narrative, expert opinion and text.2

Guidance update

Inclusion criteria

Inclusion and exclusion criteria reduce the risk of error and bias in the review, thereby promoting the dependability and credibility of its findings.9 Inclusion criteria may be established under the following headings, but with reviews of text and opinion the use of particular mnemonics should be considered a guide rather than a policy.

Population/type of participants (P): Describe the population, giving attention to whether specific characteristics of interest, such as age, sex, level of education or professional qualification, are important to the question. These specific characteristics should be stated. Specific reference to population characteristics, either for inclusion or exclusion, should be based on a clear justification rather than personal reasoning. The term population is used, but not to imply that aspects of population pertinent to quantitative reviews such as sampling methods, sample sizes or homogeneity are either significant or appropriate in a review of text and opinion.

Pregnant and birthing women who received care from a skilled birth attendant within Cambodia, Thailand, Malaysia and Sri Lanka.10

Intervention/phenomena of interest (I): Is there a specific intervention or phenomenon of interest? As with other types of reviews, interventions may be broad areas of practice management or specific, singular interventions. However, reviews of text and opinion may also reflect an interest in opinions around power, politics or aspects of healthcare other than direct interventions, in which case these should be described in detail.

The review considered publications that described: 1. The health system/service delivery structures and underlying policy; 2. The maternity care provided by a skilled birth attendant.10

Comparator (C)/context (Co): The use of a comparator is not required for a review of text and opinion-based literature. In circumstances where it is considered appropriate, as with the intervention, its nature and characteristics should be described. Context may also be an important feature to consider within the inclusion criteria.

Outcome (O): As with the comparator, a specific outcome statement is not required. In circumstances when it is considered appropriate, as with the intervention, its nature and characteristics should be described.

The primary outcome of interest in this review was: Impact on maternal mortality rates.

Secondary outcomes of interest to this review included:

  1. Changes to health system structures related to pregnancy and childbirth (including resources/finances)
  2. Change in cultural practices related to pregnancy and birth
  3. Empowerment of women and their position in society (and what impact this has had with respect to their choice of pregnancy and birth care).10

Types of publications/narratives: Reviews of text and opinion consider narratives reporting on expert opinion, which may be from standards for clinical care, consensus guidelines, expert consensus, narrative case report, published discussion papers, conference proceedings, government policy reports or reports accessed from web pages of professional organizations.

This review considered government reports, expert opinion, discussion papers, position papers, and other forms of text, published in the English language. Technical reports, statistical reports and epidemiological reports were excluded.10

Search strategy

The three-step search strategy should flow naturally from the criteria that have been established to this point, and particularly from the objective and questions the review seeks to address. As reviews of opinion do not draw on published research as the principal designs of interest, the reference is to types of ‘text’ or ‘narrative’ publication, rather than types of ‘study’. A research librarian should be consulted to assist with development of a search strategy for textual evidence. There are a range of databases that are relevant to finding expert opinion-based literature. Grey literature searching is also of importance in a text and opinion review, depending on the clinical focus. Government websites and contacting relevant organizations may also be beneficial in developing the search strategy.

For searching the published literature, it is generally recommended to use the major health and medical databases such as MEDLINE, EMBASE, CINAHL, PsycInfo, as well as Scopus, Web of Science (and any other databases specific to your topic or research question). The search strategy of the published and unpublished literature also depends on the types of text specified in the inclusion criteria.

Assessment of credibility/critical appraisal

Expert opinion draws on the experience of practitioners, whether expressed by an individual, by a learned body or by a group of experts in the form of a consensus guideline. However, the opinion of experts is more than just their practical experience; it is grounded in their understanding of knowledge and experience, and it is expressed in writing and published in journals, magazines, web pages and so on. We should therefore also consider the risk of 'speech bias' arising from the circumstances in which experts expressed their opinions.1

Validity in this context relates to the strength of an opinion in terms of its logic and its ability to convince, the authority of the source and the quality of the opinion that makes it supportable. Although expert opinion is non-research evidence, 'it is empirically derived and mediated through the cognitive processes of practitioners who have typically been trained in scientific method.'1 Critical appraisal focuses on authenticity; specifically, the authenticity of the opinion, its source, the possible motivating factors and how alternative opinions are presented. The credibility of the expert voice and whether the arguments are logical are also assessed. The items of appraisal are standardized for this type of literature, following the same methods as for the appraisal of any other type of literature. The two reviewers (primary and secondary) meet or discuss the criteria electronically to ensure a common understanding; they then apply the criteria individually to each study. Once both reviewers have conducted the appraisal, any inconsistencies in opinion are discussed and a mutual decision agreed upon. The NOTARI critical appraisal checklist includes specific questions, which the methodological group discussed in detail, including the challenge of how to define these questions. This remains ongoing work.

  1. Is the source of opinion clearly identified?
  2. Does the source of opinion have standing in the field of expertise?
  3. Are the interests of patients/clients the central focus of the opinion?
  4. Is the opinion's basis in logic/experience clearly argued?
  5. Is the argument that has been developed analytical? Is the opinion the result of an analytical process drawing on experience or the literature?
  6. Is there reference to the extant literature/evidence and any incongruence with it logically defended?
  7. Is the opinion supported by peers?

Textual data extraction

Expert opinion synthesis should involve the aggregation or synthesis of findings to generate a set of statements that represent that aggregation, through assembling the findings rated according to their credibility, and categorizing these findings on the basis of similarity in meaning. These categories should then be subjected to a meta-synthesis in order to produce a single comprehensive set of synthesized findings that can be used as a basis for evidence-based practice. When textual pooling is not possible, the findings can be presented in narrative form.

When all conclusions and supporting illustrative data have been identified, the reviewer needs to read all of the conclusions and identify similarities that can then be used to create categories of more than one finding.

The JBI approach to synthesizing the conclusions of textual or non-research papers requires reviewers to consider the credibility of each report as a source of guidance to practice, identify and extract the conclusions from studies included in the review, and to aggregate these conclusions as synthesized findings.

The extraction form for textual data contains the following:

  1. Types of text
  2. Those represented
  3. Stated allegiance / position
  4. Setting
  5. Geographical context
  6. Cultural context
  7. Logic of argument
  8. Author's conclusions
  9. Reviewer's comments

Many text and opinion-based reports do not state conclusions explicitly. For this reason, reviewers are required to read and re-read each paper closely to identify the conclusions to be entered into NOTARI. Each conclusion/finding should be assigned a level of credibility, based on the congruency of the finding with the supporting data from the paper in which it appears. Textual evidence has three levels of credibility; the reviewer is therefore required to determine whether, when comparing the conclusion with the argument, the conclusion represents evidence that is:

  1. Unequivocal (U): Relates to evidence beyond reasonable doubt, which may include conclusions that are matters of fact, directly reported/observed and not open to challenge.
  2. Credible (C): Relates to those conclusions that are, albeit interpretations, plausible in light of the data and theoretical framework.
  3. Unsupported: When the findings are not supported by the data.
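For illustration, the extraction fields and the three credibility levels could be represented as a simple record. The class and field names below are assumptions, not NOTARI's internal structure, and the example finding is invented:

```python
from dataclasses import dataclass
from enum import Enum

class Credibility(Enum):
    """The three levels of credibility for a textual finding (per the text)."""
    UNEQUIVOCAL = "U"            # beyond reasonable doubt; not open to challenge
    CREDIBLE = "C"               # a plausible interpretation in light of the data
    UNSUPPORTED = "Unsupported"  # not supported by the data

@dataclass
class ExtractedFinding:
    """One conclusion extracted from an included paper (illustrative shape)."""
    conclusion: str       # the author's conclusion, as stated or identified
    illustration: str     # supporting data quoted from the paper
    credibility: Credibility

# Invented example of one extracted finding.
finding = ExtractedFinding(
    conclusion="Skilled birth attendance reduces delays in emergency care",
    illustration="Directly quoted supporting passage from the report",
    credibility=Credibility.CREDIBLE,
)
print(finding.credibility.value)
```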

Textual data synthesis

Categorization is the first step in aggregating conclusions and moves from a focus on individual papers to consideration of all conclusions for all papers included in the review. Categorization is based on similarity in meaning as determined by the reviewers. Once categories have been established, they are read and re-read in light of the findings, their illustrations and in discussion between reviewers to establish synthesized findings. NOTARI sorts the data into a meta-synthesis table or ‘NOTARI view’, when allocation of categories to synthesized findings (a set of statements that adequately represent the data) is completed (see Fig. 3). These statements can be used as a basis for evidence-based practice recommendations.

Figure 3:
Example of a meta-synthesis table or ‘NOTARI view’.11
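The two-level aggregation described above, with findings grouped into categories and categories grouped into synthesized findings, can be sketched as a nested mapping. The findings, category labels and synthesized statements below are invented placeholders; in a real review these groupings are judgements made by the reviewers, not by software:

```python
from collections import defaultdict

# Invented example data: (conclusion, category assigned by the reviewers).
findings = [
    ("Skilled attendance at birth reduces delays in care", "access to care"),
    ("Referral pathways must be adequately resourced", "access to care"),
    ("Community trust shapes uptake of maternity services", "cultural context"),
]

# Reviewer-assigned mapping from categories to synthesized findings.
category_to_synthesis = {
    "access to care": "Health systems should resource skilled birth attendance",
    "cultural context": "Programmes should engage with local cultural practices",
}

# Build the meta-synthesis table: synthesized finding -> category -> conclusions.
table = defaultdict(lambda: defaultdict(list))
for conclusion, category in findings:
    table[category_to_synthesis[category]][category].append(conclusion)

# Print the table in the nested form of a 'NOTARI view'.
for synthesis, categories in table.items():
    print(synthesis)
    for category, members in categories.items():
        print(f"  {category}: {len(members)} finding(s)")
```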

Synthesized findings, which have been categorized from conclusions drawn from the included papers, may generate appropriate implications for practice, which require a Grade of Recommendation to be assigned. This will be based on a consideration of the conclusions (whether a mixture of unequivocal, credible or unsupported), and be reported as grade A (a ‘strong’ recommendation) or grade B (a ‘weak’ recommendation).12 Further consideration and future work is required to establish the confidence in the final synthesized finding being used to make recommendations for clinical practice and policy. This may be adapted for use in text and opinion reviews, as demonstrated by the ConQual ‘Summary of Findings’ table,9 or the GRADE approach.13


The nature of textual or opinion-based reviews is that they do not rely upon evidence in the form of primary research; therefore, elements of the systematic review will differ from those of reviews drawing on primary research, as will the types of papers of interest. Although evidence derived from rigorous research is always preferred, there are circumstances where that evidence is neither available nor able to answer certain questions. In those circumstances, expert opinion becomes a valuable source of knowledge, providing the opportunity to gain new insights and the perspectives of those living, working and experiencing certain circumstances. However, it is important to acknowledge that for some, this may be considered a limitation of this approach. There are still many unanswered questions, such as who can be considered an expert: is it someone with clinical experience, or someone not only 'experiencing' but also 'knowing' from a theoretical background about that particular aspect? This debate will continue to evolve and be further developed as the methodological group continues to work through these issues.


The JBI methodology of systematic reviews of text and opinion is unique. This study has also highlighted that this methodological approach is evolving and continuously being developed. Although it is a synthesis of non-research evidence, there are still many fields where research evidence is missing and where text and opinion is the best available evidence. Systematic reviews of text and opinion may be considered legitimate sources of evidence, especially in the absence of other research designs. Textual evidence may also be incorporated into a mixed-methods review to supplement the best existing evidence on a topic. By incorporating textual evidence, clinicians and care providers may well receive highly content-specific insights to improve their everyday clinical practice.


The authors would like to thank the participants who attended and contributed to the discussion at the JBI Colloquium workshop in Singapore, November 2014.

Conflicts of interest

The authors declare no conflicts of interest.


1. Jordan Z, Konno R, Mu PF. Synthesizing evidence from narrative, text and opinion. Synthesis Science in Healthcare Series: Book 3. Adelaide, South Australia: Lippincott-Joanna Briggs Institute; 2011.
2. The Joanna Briggs Institute. Joanna Briggs Institute Reviewers’ Manual: 2014 edition. The University of Adelaide, South Australia: The Joanna Briggs Institute; 2014.
3. Sackett DL, Rosenberg WM, Gray JA, et al. Evidence based medicine: what it is and what it isn’t. Br Med J 1996; 312:71–72.
4. Tonelli MR. Integrating evidence into clinical practice: an alternative to evidence-based approaches. J Eval Clin Pract 2006; 12:248–256.
5. Woolf SH. Evidence-based medicine and practice guidelines: an overview. Cancer Control 2000; 7:362–367.
6. Mattingly C. The narrative nature of clinical reasoning. Am J Occup Ther 1991; 45:998–1005.
7. Worth SE. Storytelling and narrative knowing: an examination of the epistemic benefits of well-told stories. J Aesth Educ 2008; 42:342–356.
8. Konno R, Stern C, Gibb H. The best evidence for assisted bathing of older people with dementia: a comprehensive systematic review. JBI Database Syst Rev Implement Rep 2013; 11:90.
9. Munn Z, Porritt K, Lockwood C, et al. Establishing confidence in the output of qualitative research synthesis: the ConQual approach. BMC Med Res Methodol 2014; 14:108.
10. McArthur A, Lockwood C. Maternal mortality in Cambodia, Thailand, Malaysia and Sri Lanka: a systematic review of local and national policy and practice initiatives. JBI Database Syst Rev Implement Rep 2013; 11:72.
11. Stephen AI, Bermano G, Bruce D, Kirkpatrick P. Competencies and skills to enable effective care of obese patients undergoing bariatric surgery across a multi-disciplinary healthcare perspective: a systematic review. JBI Database Syst Rev Implement Rep 2014; 12:77.
12. The Joanna Briggs Institute Levels of Evidence and Grades of Recommendation Working Party. Supporting Document for the Joanna Briggs Institute Levels of Evidence and Grades of Recommendation. The Joanna Briggs Institute; 2014. [Accessed 6/3/2015]
13. Andrews J, Schunemann H, Oxman A, et al. GRADE guidelines: 15. Going from evidence to recommendation: determinants of a recommendation's direction and strength. J Clin Epidemiol 2013; 66:726–735.

Keywords: expert opinion; meta-synthesis; narrative; systematic review; text

International Journal of Evidence-Based Healthcare © 2015 The Joanna Briggs Institute
