Evidence-Based Physiatry

Evidence-Synthesis Tools to Inform Evidence-Based Physiatry

Patrick Engkasan, Julia MBBS, MRehabMed, PhD; Rizzo, John-Ross MD, MS; Levack, William MD; Annaswamy, Thiru M. MD, MA

American Journal of Physical Medicine & Rehabilitation: November 2020 - Volume 99 - Issue 11 - p 1072-1073
doi: 10.1097/PHM.0000000000001508

Clinical practice in physiatry is ideally evidence based, and evidence-based physiatry lies at the intersection of relevant and reliable research results, clinical acumen, and patient preferences. PubMed currently indexes approximately 20,000 articles annually that are characterized as relevant to rehabilitation. To keep up, clinicians can draw on a variety of evidence-synthesis tools. This brief report summarizes two common evidence-synthesis tools, systematic reviews (SRs) and meta-analyses, and the implementation aids that can help apply them in clinical practice.

Systematic Reviews and Meta-Analysis

Systematic reviews are often the first point of reference when seeking answers to healthcare questions and are considered the highest level of evidence, higher than any single research study, because they summarize several studies relevant to the question asked. However, SRs must be conducted using rigorous methodology or they risk providing poor-quality evidence. It is therefore important for readers to assess the trustworthiness of an SR before deciding on its applicability to their clinical practice.

An SR should address a clearly focused clinical question built from 4 elements (PICO): the Population or group of patients; the Intervention; the Comparison intervention(s); and the specific Outcomes of interest. The authors of an SR should conduct a comprehensive literature search to ensure that all relevant articles and trials are included, searching major bibliographic databases, the reference lists of relevant studies, and sources of unpublished studies. The criteria for including studies in an SR should be clearly defined a priori: the SR should specify the study designs, patients, interventions, exposures, and outcomes that are eligible for inclusion. Similarly, the quality of the included studies should be assessed using predetermined criteria. Generally, 2 independent reviewers should review the included studies and abstract the data, with a third reviewer mediating any disputes.
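
As a purely illustrative sketch (not part of the original article), the fragment below shows how a PICO question might be held as structured data and assembled into a Boolean search string for a bibliographic database; every field name and search term in it is hypothetical.

    # Hypothetical sketch: a PICO question as structured data, joined into a
    # Boolean search string (OR within each element, AND across elements).
    from dataclasses import dataclass, fields

    @dataclass
    class PICO:
        population: tuple = ()    # synonyms for the patient group
        intervention: tuple = ()  # synonyms for the intervention of interest
        comparison: tuple = ()    # synonyms for the comparator
        outcomes: tuple = ()      # synonyms for the outcomes of interest

        def search_string(self) -> str:
            blocks = [getattr(self, f.name) for f in fields(self)]
            return " AND ".join(
                "(" + " OR ".join(f'"{term}"' for term in block) + ")"
                for block in blocks if block
            )

    # Invented example: exercise therapy vs usual care for chronic low back pain
    question = PICO(
        population=("chronic low back pain",),
        intervention=("exercise therapy", "aquatic exercise"),
        comparison=("usual care",),
        outcomes=("pain intensity", "physical function"),
    )
    print(question.search_string())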

To assess the quality of an SR, an appraisal may be performed using the following key questions1: (1) Are the results of this SR valid? (2) Are the results of this SR important? and (3) Can you apply these results to your patient’s care? (Table 1). Additional appraisal questions may also be asked before a summative overall assessment of the SR’s methodology is made to conclude whether it was of high, acceptable, low, or unacceptable quality.

TABLE 1 - Key questions and subquestions to ask during appraisal of an SR
Are the Results of this SR Valid?
- Was the PICO question focused?
- Were the criteria appropriate?
- Were important, relevant studies included?
- Was the validity of the included studies appraised?
- Do findings from assessments used in the studies seem reproducible?
- Were the results homogeneous across the studies?

Are the Results of this SR Important?
- Did the results show benefit, or harm?
- What was the effect size?
- How precise were the results?

Can You Apply These Results to Your Patient’s Care?
- How similar is a given clinical question to this SR?
- Were important clinical outcomes considered in this SR?
- Are the benefits worth the harms and the costs?

Meta-analysis statistically combines data from multiple studies when the studies are conceptually homogeneous, thereby increasing the accuracy of estimates of treatment effect. Results of a meta-analysis are usually presented in a forest plot, which shows the contribution of each individual study in the SR, the summary effect across all studies, and the heterogeneity between studies. Results of meta-analyses are typically reported in one of three ways: (1) risk ratios or odds ratios; (2) mean differences; or (3) standardized mean differences. Risk ratios and odds ratios compare the likelihood of an outcome occurring for participants in an intervention group versus a comparison group. Mean differences are used to report differences in averages between two groups when the studies have measured the outcome on the same scale. Standardized mean differences are used to combine results from multiple studies in which each study measured the same kind of outcome but with different outcome measures (eg, different measures of quality of life); they are expressed in standard deviation units (ie, as a proportion of one standard deviation). All results from meta-analyses (risk ratios, odds ratios, mean differences, standardized mean differences) should be reported with an estimate of the precision of the main result: the 95% confidence interval.
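
To make the pooling step concrete, here is a minimal, purely illustrative sketch (not from the article) of fixed-effect, inverse-variance pooling of log risk ratios with a 95% confidence interval, the kind of calculation summarized in a forest plot; the study values below are invented.

    # Hypothetical sketch: fixed-effect, inverse-variance meta-analysis of
    # log risk ratios; study data are invented purely for illustration.
    import math

    # (log risk ratio, standard error of the log risk ratio) for each study
    studies = [(-0.22, 0.10), (-0.35, 0.15), (-0.10, 0.12)]

    weights = [1 / se**2 for _, se in studies]              # inverse-variance weights
    pooled_log_rr = sum(w * y for (y, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))                 # SE of the pooled estimate

    rr = math.exp(pooled_log_rr)                            # back to the risk-ratio scale
    ci_low = math.exp(pooled_log_rr - 1.96 * pooled_se)     # 95% confidence interval
    ci_high = math.exp(pooled_log_rr + 1.96 * pooled_se)

    print(f"Pooled RR = {rr:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")

A random-effects model would instead incorporate between-study heterogeneity, typically widening the confidence interval when the studies disagree.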

Implementation Aids

Informed by SRs and meta-analyses, implementation aids such as clinical practice guidelines help translate synthesized evidence into clinical practice. Clinical practice guidelines are defined by the Institute of Medicine as “statements that include recommendations intended to optimize patient care that are informed by a systematic review of evidence and an appraisal of the benefits and harms of alternative care options.” They provide evidence-based recommendations intended to improve quality of care and decrease variability in practice and costs, and they address an entire range of clinical care issues related to a patient problem, whereas SRs are usually more narrowly focused. In addition to clinical recommendations, many clinical practice guidelines also include tools and suggestions to support implementation, as well as recommendations for the resources a practice may need to apply them in routine clinical care.

Other clinical implementation aids include decision analyses and economic analyses that factor in all relevant outcomes as well as costs. Economic and cost studies that may inform clinical practice include the following: cost-benefit analyses, in which both costs and outcomes are expressed in monetary terms; cost-effectiveness analyses, in which monetary costs are measured and compared with a clinical unit of efficacy; and cost-utility analyses, in which monetary costs are measured and compared with outcomes expressed in terms of their social value (eg, cost per quality-adjusted life-year).
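
As a purely illustrative sketch (not drawn from the article), the fragment below shows the arithmetic behind a cost-utility comparison: the incremental cost-effectiveness ratio, expressed as cost per QALY gained; the program names and figures are invented.

    # Hypothetical sketch: incremental cost-effectiveness ratio (ICER) for a
    # cost-utility analysis; all numbers are invented for illustration only.
    def icer(cost_new: float, qaly_new: float, cost_old: float, qaly_old: float) -> float:
        """Incremental cost divided by incremental QALYs gained."""
        return (cost_new - cost_old) / (qaly_new - qaly_old)

    # Invented comparison: intensive vs standard rehabilitation program
    cost_per_qaly = icer(cost_new=12_000, qaly_new=1.45, cost_old=8_000, qaly_old=1.30)
    print(f"Incremental cost per QALY gained: ${cost_per_qaly:,.0f}")

In practice, the resulting cost per QALY would be weighed against a willingness-to-pay threshold to judge whether the more expensive option offers acceptable value.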

Implementation aids can also elevate the quality of patient education and thereby elicit sustainable change. An implementation model that clearly transfers information to the recipient is critical. The Agency for Healthcare Research and Quality described knowledge transfer in three steps: (1) knowledge creation and distillation, (2) diffusion and dissemination, and (3) end-user adoption, implementation, and institutionalization. A combined dissemination approach may work best, using strategies such as message tailoring (granular content), targeting messages to audience segments (more global content), using narratives, and message framing. The BATHE technique, a five-step process focused on enhanced patient-centered consultation skills, may also facilitate implementation. The five BATHE steps are as follows: (a) background (what is going on in your life?), (b) affect (how is it affecting you?), (c) trouble (what troubles you most about this situation?), (d) handling (how are you dealing with this so far?), and (e) empathy (that sounds scary and difficult). Decision aids are evidence-based decision support tools “designed to help patients make decisions by providing information on the options and outcomes relevant to a person’s health status.”2 Using the implementation techniques and aids described above, physiatrists can help their patients make fully informed decisions regarding their management.

CONCLUSIONS

Using the evidence-synthesis tools and implementation aids described in this report, physiatrists and other clinicians in physical medicine and rehabilitation can combine the best available research, clinical expertise, and consumer perspectives to practice evidence-based physiatry through shared decision making.

REFERENCES

1. Bigby M, Williams H: Appraising systematic reviews and meta-analyses. Arch Dermatol 2003;139:795–8
2. Stacey D, Legare F, Col NF, et al.: Decision aids for people facing health treatment or screening decisions. Cochrane Database Syst Rev 2014;1:CD001431
Keywords: Systematic Reviews; Meta-analysis; Clinical Practice Guidelines; Evidence-Based Medicine

Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.