New (and Not so New) Directions in Evidence Synthesis Methods and Application in a Learning Health Care System

Helfand, Mark MD, MS, MPH*; Floyd, Nicole MPH*; Kilbourne, Amy M. PhD, MPH

doi: 10.1097/MLR.0000000000001197
Editorial
Open

*Evidence Synthesis Program Coordinating Center, Portland VA Health Care System, Portland, OR

US Department of Veterans Affairs Quality Enhancement Research Initiative, Washington, DC

Supported by the Department of Veterans Affairs, Veterans Health Administration, Office of Research and Development, Health Services Research and Development, and Quality Enhancement Research Initiative (QUERI), Evidence Synthesis Program (ESP), VA ESP Project #09-199.

The findings and conclusions in this article are those of the authors, who are responsible for its contents. The findings and conclusions do not necessarily represent the views of the Department of Veterans Affairs or the US government. Therefore, no statement in this article should be construed as the official position of the Department of Veterans Affairs.

The authors declare no conflict of interest.

Reprints: Mark Helfand, MD, MS, MPH, Evidence Synthesis Program Coordinating Center, Portland VA Health Care System, 3710 SW US Veterans Hospital Road, RD71, Portland, OR 97239. E-mail: mark.helfand@va.gov.

Written work prepared by employees of the Federal Government as part of their official duties is, under the U.S. Copyright Act, a “work of the United States Government” for which copyright protection under Title 17 of the United States Code is not available. As such, copyright does not extend to the contributions of employees of the Federal Government.

Applying systematic review methods in the context of health systems is challenging. The very definition of a learning health care system as one that enables real-time and continuous improvement seems to underscore the mismatch with systematic review methods, in which a great deal of time is spent finding and sorting through studies, information is thrown into unwieldy tables, and evidence is often described as insufficient for decision-making.1

In 2006, no participants in the Institute of Medicine’s “Roundtable on Evidence-based Medicine” workshop on “The Learning Health Care System” envisioned a pathway by which systematic review methods could be useful to support decision-making in a learning health system. Workshop participants were much more focused on ways to generate better data, faster or in real time, within health systems than on using a synthesis of research data.2 By that time, systematic reviews had a well-established role in clinical guidelines and in decisions about paying for new technologies, but many workshop participants viewed “traditional” evidence-based medicine as a barrier rather than a tool for system learning.2 Participants perceived the hierarchy of study design types, the focus on randomized trials, the heavy reliance on published studies, and the slow process as incompatible with the principles underlying learning from health system experience. There was also scant discussion of the role of evidence synthesis in learning health care systems in the more recent 2012 Institute of Medicine report.3

Nevertheless, it is vital that managers and policymakers in a learning health care system make the best possible use of current knowledge. The articles in this series demonstrate what can be accomplished when research synthesis is integrated with qualitative information from health system personnel and patients and quantitative data from health systems in the context of an overarching framework for health system learning. They also challenge the belief that “traditional” systematic review methods cannot be adapted to the needs of a learning health care system. They illustrate what can be accomplished when research synthesis is part of a wider program to identify, implement, and evaluate useful practices within a health care system that can analyze databases, conduct original research, and use health data collected for quality monitoring and improvement to verify or supplement insights from the literature.4,5

Programs that succeed in this environment offer a wide range of evidence products that vary in methods and formats to address questions that can arise within a health system, questions that go beyond “Does it work?” to “How should it be done in our health system?” Examples include “Which of these several dozen ideas might be the best ones to pursue right now?”6,7 “What is a safe waiting time for colonoscopy after a positive fecal immunochemical test?” “Once we implement this policy, what databases or other data resources can we use to follow whether it is working?” “What patient, practitioner, or facility-level factors make implementation more or less likely to succeed?” and “What influences patients’ decisions to seek care for this condition?” In some cases, policymakers ask not only about what works and what might work but also about whether an organization’s research arm has produced results that benefit the health system and whether research areas can be identified where more or less focus would be useful. For example, “What have we learned from VHA’s investment in research on disparities?”8

Managers and policymakers with certain characteristics help make partnerships work. An independently conducted review demonstrating that a system change or intervention can improve outcomes can give health system leaders leverage to implement it,4,9,10 but these leaders must be willing to take the risk that the findings of an independent review might not support what they are doing or planning to do.9 Leaders who look ahead and want to innovate wisely are willing to take this risk. For their part, systematic reviewers who are embedded within a health system (and in many cases are practitioners in it), knowledgeable about a broad range of research methods, including decision-making, and responsive to policymakers’ needs are most likely to be successful in this work.11

Reviewers also need to resist the tendency to focus too much on methodology. Writing in 1995, Slavin12 noted that “ … the canons that have grown up around meta-analysis have created a situation in which not only are serious errors possible, but the reader has no way of forming his or her own opinions on it” (a criticism that is still made regularly today) and that, “ … in fear of allowing bias to creep in, meta-analysis is typically mechanistic, driven more by concerns about reliability and replicability than about adding to understanding the phenomena of interest.” Over time, systematic review shops also tend to stick with a formula and format long after they have outlived their usefulness, oversimplify complex phenomena, focus on what is easily quantified, and lapse into self-referential jargon. For example, they may state that “We also included another study, which we rated as fair quality” instead of conveying what is going on in each study in its context.

Some critics (and systematic reviewers) mistake these rituals for the essence of a “traditional” systematic review. In fact, the work described in this supplement exemplifies fundamental strengths of research synthesis, not a departure from tradition.

Research synthesis in operational systems emerged in the 1970s and 1980s, when educators and social scientists began to evaluate school programs, applying meta-analysis to studies of class size,13 ability-grouping,14 coaching to improve SAT scores,15 and other interventions we would undoubtedly call “complex.” In their classic (and still vibrant) 1984 book, Summing Up, Richard J. Light and David B. Pillemer “pre-debunk” the myths about what is and is not a “traditional” systematic review. They emphasize, above all, that a meta-analysis or systematic review should be synthetic: it should bring out patterns and features of effectiveness that cannot be learned from single studies, by understanding the context of each study and documenting the process of each treatment as well as the outcome. They show that the scientific method does not have to be at odds with what used to be called the “Verstehen” approach (closer to today’s “realist” approach16)—“ … a review need not be primarily quantitative or descriptive—it can be strong or weak on both dimensions.”17

Slavin described “best evidence synthesis.” Today, especially in rapid reviews, this has come to mean narrower inclusion criteria and acceptance of weaker evidence when stronger evidence is unavailable. Actually, “best evidence synthesis” is mainly a thoughtful discussion of what insights can be drawn from a careful reading of the individual studies—“Except for the references to effect sizes, the bulk of the literature synthesis should look much like the main body of any narrative review.”12,18

What is “traditional”? Reviews should adhere to a set of basic principles: striving to avoid bias; using quantitative methods when appropriate; looking for insights across studies that cannot be gained from a single study; making the final work public; and maintaining separate identities from stakeholders.19 These principles help ensure reviews are independent and credible. Striving to make reviews more useful and more thoughtful is also “traditional.” The enhancements and adaptations described in this supplement can help reviewers and decision-makers understand the policy context, bring attention to ideas and studies with outstanding features, yield insights from variation, and formulate plans for implementation.

REFERENCES

1. Braithwaite RS. EBM’s six dangerous words. JAMA. 2013;310:2146–2150.
2. Institute of Medicine. The Learning Healthcare System: Workshop Summary. Washington, DC: The National Academies Press; 2007.
3. Institute of Medicine. Best Care at Lower Cost: The Path to Continuously Learning Health Care in America. Washington, DC: The National Academies Press; 2012.
4. Kaboli PJ, Miake-Lye IM, Ruser C, et al. Sequelae of an evidence-based approach to management for access to care in the Veterans Health Administration. Med Care. 2019;57(suppl 3):S213–S220.
5. Kilbourne AM, Braganza MZ, Bowersox NW, et al. Research lifecycle to increase the substantial real-world impact of research: accelerating innovations to application. Med Care. 2019;57(suppl 3):S206–S212.
6. Henry SL, Mohan Y, Whittaker JL, et al. E-SCOPE: a strategic approach to identify and accelerate implementation of evidence-based best practices. Med Care. 2019;57(suppl 3):S239–S245.
7. Floyd N, Peterson K, Christensen V, et al. “Implementation is so difficult”: survey of national learning health system decision-makers identifies need for implementation information in evidence reviews. Med Care. 2019;57(suppl 3):S233–S238.
8. Kondo K, Low A, Everson T, et al. Prevalence of and Interventions to Reduce Health Disparities in Vulnerable Veteran Populations: A Map of the Evidence. Washington, DC: Department of Veterans Affairs; 2017.
9. Christensen V, Floyd N, Anderson J. “It would’ve been nice if they interpreted the data a little bit. It didn’t really say much, and it didn’t really help us.”: a qualitative study of VA health system evidence needs. Med Care. 2019;57(suppl 3):S228–S232.
10. Bauer MS, Weaver K, Kim B, et al. The collaborative chronic care model for mental health conditions: from evidence synthesis to policy impact to scale-up and spread. Med Care. 2019;57(suppl 3):S221–S227.
11. Folz CE, Clancy C, Bilheimer L, et al. Health policy roundtable: producing and adapting research syntheses for use by health-system managers and public policymakers. Health Serv Res. 2006;41(pt 1):905–917.
12. Slavin RE. Best evidence synthesis: an intelligent alternative to meta-analysis. J Clin Epidemiol. 1995;48:9–18.
13. Glass GV, Smith ML. Meta-analysis of research on class size and achievement. Educ Eval Policy Anal. 1979;1:2–16.
14. Slavin RE. Ability grouping and student achievement in elementary schools: a best-evidence synthesis. Rev Educ Res. 1987;57:293–336.
15. DerSimonian R, Laird N. Evaluating the effect of coaching on SAT scores: a meta-analysis. Harv Educ Rev. 1983;53:1–15.
16. Pawson R, Greenhalgh T, Harvey G, et al. Realist review—a new method of systematic review designed for complex policy interventions. J Health Serv Res Policy. 2005;10(suppl 1):21–34.
17. Light RJ, Pillemer DB. Summing Up: The Science of Reviewing Research. Cambridge, MA: Harvard University Press; 1984.
18. Slavin RE. Best-evidence synthesis: an alternative to meta-analytic and traditional reviews. Educ Res. 1986;15:5–11.
19. Helfand M, Balshem H. AHRQ series paper 2: principles for developing guidance: AHRQ and the effective health-care program. J Clin Epidemiol. 2010;63:484–490.