The Changing Face of Epidemiology

Making Observational Studies Count

Shaping the Future of Comparative Effectiveness Research

Dreyer, Nancy A.

Epidemiology 22(3):295–297, May 2011 | DOI: 10.1097/EDE.0b013e3182126569

There is substantial interest in using observational epidemiologic research in combination with randomized clinical trials (RCTs) and meta-analyses to support decision-making by regulators, payers, and physicians. Nevertheless, even well-designed and well-conducted observational studies are often viewed with skepticism. This lingering distrust stems, in part, from the contrast between the well-accepted use of clinical trials and a widespread lack of familiarity with the principles of good conduct in observational research. The solution lies in sound guidelines for evaluating observational studies, especially studies that may prove useful for evaluating clinical effectiveness. Such guidance would help focus attention on the quality and relevance of the evidence, whatever the study design.

With ever more medical interventions available, the need for hard evidence about which treatments work best, for whom, and when is spurring big changes. These changes are embodied in 3 new laws in the United States, starting with the Food and Drug Administration Amendments Act of 2007,1 followed by the American Recovery and Reinvestment Act of 2009,2 and the Patient Protection and Affordable Care Act of 2010.3 All 3 laws encourage evidence-based research to inform therapeutic decision-making and ensure that comparative effectiveness research, defined as “the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat and monitor health conditions in ‘real world’ settings,”4 will drive decisions about patient care and health insurance coverage.

Noninterventional studies provide information that would be infeasible or impossible to obtain otherwise.5 Interest may lie in risks and benefits that extend far into the future, or in patients and real-world practices that are not well characterized, such as patient subgroups and health practitioner specialties. It may be impractical, too expensive, or politically unacceptable to do a randomized trial or other type of intervention study. Preparing for seasonal influenza is an example: with the threat of a global pandemic of potentially serious swine flu, it would have been politically unacceptable to deny vaccination to some, making observational studies the preferred approach for characterizing the effectiveness of this type of vaccine. For the highly lethal H5N1 (avian) strain of influenza, little is known about what treatments work, since the disease is rare and outbreaks are unpredictable. A multicountry patient registry provided data revealing a 51% reduction in the fatality rate for patients with laboratory-confirmed H5N1 infection who were treated promptly with oseltamivir. The reduction in mortality was smaller but still sizable when treatment was initiated up to 1 week after symptom onset, showing that even delayed treatment was helpful.6
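To make the headline figure concrete, here is a minimal worked example in Python (the counts are invented for illustration only; the 51% relative reduction is the registry study's reported figure, not derived from these numbers):

```python
# Hypothetical 2x2 counts, chosen only to illustrate the arithmetic
# behind a "51% reduction in the fatality rate" (relative risk reduction).
treated_deaths, treated_n = 24, 80       # assumed: case-fatality 30%
untreated_deaths, untreated_n = 49, 80   # assumed: case-fatality ~61%

cfr_treated = treated_deaths / treated_n
cfr_untreated = untreated_deaths / untreated_n
relative_reduction = 1 - cfr_treated / cfr_untreated

print(f"CFR, treated:   {cfr_treated:.1%}")
print(f"CFR, untreated: {cfr_untreated:.1%}")
print(f"relative reduction in fatality: {relative_reduction:.0%}")  # ~51%
```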

Large benefits and risks are difficult to ignore, even when discovered outside the experimental setting. More commonly, though, the true risks and benefits of medical interventions fall in the range where unmeasured confounding could explain the apparent effect. Epidemiologists regularly engage in discussions about channeling and other types of selection bias that cause people preferentially to receive or avoid a preventive or therapeutic intervention, or to be more or less adherent. Less attention has been paid to factors that affect the availability and use of treatments, such as health insurance coverage, and to how these factors may affect the interpretation of findings from observational studies. This inattention stems in part from the fact that information about what health insurance covers is difficult to access; it varies by insurer, employer, and country, changes over time,7 and is not available in commercial or national data sets.
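To see how such channeling can mislead, consider a minimal simulation sketch (entirely illustrative; the scenario, effect sizes, and variable names are assumptions, not drawn from any study). An unmeasured marker of disease severity drives both treatment choice and death, so a treatment with no effect at all appears harmful in a naive comparison:

```python
import random

random.seed(42)

n = 100_000
deaths_treated, deaths_untreated = [], []

for _ in range(n):
    severity = random.random()            # unmeasured confounder, U(0, 1)
    treated = random.random() < severity  # channeling: sicker patients get treated
    # Outcome depends only on severity; the treatment itself is truly null.
    died = random.random() < 0.05 + 0.30 * severity
    (deaths_treated if treated else deaths_untreated).append(died)

risk_treated = sum(deaths_treated) / len(deaths_treated)
risk_untreated = sum(deaths_untreated) / len(deaths_untreated)
print(f"risk if treated:   {risk_treated:.3f}")    # ~0.25
print(f"risk if untreated: {risk_untreated:.3f}")  # ~0.15
print(f"naive risk ratio:  {risk_treated / risk_untreated:.2f}  (true ratio: 1.00)")
```

Because treated patients are sicker on average, the naive risk ratio comes out well above 1 even though the treatment does nothing; adjusting for severity would recover the null, but only if severity had been measured.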

If decision-makers cannot distinguish high-quality work from the rest, meaningful observational research will be overlooked, no matter how well a study is designed, analyzed, and reported. Worse, all observational studies, regardless of quality, could be lumped together as if they provided comparable evidence. Some decision-makers are skeptical that guidelines can improve the conduct, reporting, and interpretation of research,8,9 but many others find them useful aids for recognizing quality in research. Although there are widely accepted benchmarks for study quality (eg, Good Clinical Practice10 and CONSORT11 for clinical trials, and PRISMA12 for meta-analysis), there is a dearth of widely accepted good-practice guidance for observational studies, particularly with regard to the challenges specific to clinical effectiveness.

This situation is starting to change. There is a host of relatively new offerings, including the GRACE (Good ReseArch for Comparative Effectiveness) principles for conducting and evaluating observational comparative effectiveness studies,13 the ENCePP (European Network of Centres for Pharmacoepidemiology and Pharmacovigilance) Methodological Standards,14 some analytic recommendations from the International Society for Pharmacoeconomics and Outcomes Research,15–17 and the STROBE (Strengthening The Reporting of OBservational studies in Epidemiology) reporting guidelines.18 With the exception of STROBE, which provides a reporting framework for all types of epidemiologic studies and is not tailored to clinical epidemiology, none of these guidance documents is yet widely known or broadly accepted beyond the groups that created them. This leaves decision-makers without the support they need to use noninterventional studies with confidence.

The recent good-practice documents developed by the GRACE Initiative were designed specifically to guide decision-makers in the evaluation of observational studies of comparative effectiveness.13 These principles describe the value of a good study plan; the usefulness of conducting and analyzing studies in a manner consistent with established good practice at that point in time, and of reporting with sufficient detail to permit evaluation and replication; and some issues to consider in evaluating the validity of the interpretation. The principles are presented as a “living document”19 and further development is expected. Guiding questions may be useful to decision-makers, but the ultimate value of the GRACE principles will depend on building broad support for this conceptual guidance, especially as it offers neither a checklist nor a scoring system.

In contrast, the ENCePP Methodological Standards for Study Protocols, the newest entry in this quality-improvement field, provides a checklist of epidemiologic principles that should be considered in a protocol. ENCePP also provides a Code of Conduct that affirms many good practices, such as the right of the principal investigator to independently prepare study publications, and a Guide on Methodological Standards for Pharmacoepidemiology that addresses many topics relating to study design, analysis, and data sources.20 ENCePP takes the position that post hoc subgroup analyses (ones conceived after data analysis has begun) may not be used to verify or reject a hypothesis of a causal association, although any safety signals identified through serendipitous analysis should still be evaluated appropriately. Aside from this debatable stance on the value of prespecified hypotheses,21–23 these guidance documents help to ensure that investigators think through the principles of good practice, especially as they relate to safety.
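The rationale for this caution about post hoc subgroup findings can be shown with a small simulation (a sketch under assumed parameters, not part of the ENCePP guidance): when a truly null treatment is tested in many subgroups conceived after the fact, the chance that at least one subgroup looks nominally “significant” grows quickly with the number of subgroups examined:

```python
import random

random.seed(1)

def any_false_positive(n_subgroups, n_per_arm=200, z_crit=1.96, base_risk=0.3):
    """Simulate a null treatment and test it within each subgroup using a
    two-proportion z-test; return True if any subgroup looks 'significant'."""
    for _ in range(n_subgroups):
        a = sum(random.random() < base_risk for _ in range(n_per_arm))  # treated events
        b = sum(random.random() < base_risk for _ in range(n_per_arm))  # control events
        p1, p2 = a / n_per_arm, b / n_per_arm
        pooled = (a + b) / (2 * n_per_arm)
        se = (pooled * (1 - pooled) * 2 / n_per_arm) ** 0.5
        if se > 0 and abs(p1 - p2) / se > z_crit:
            return True
    return False

sims = 1000
for k in (1, 10, 20):
    rate = sum(any_false_positive(k) for _ in range(sims)) / sims
    print(f"{k:2d} subgroups examined -> >=1 'significant' result in {rate:.0%} of null studies")
```

At the conventional 5% level, roughly 1 - 0.95^k of null studies yield at least one spurious subgroup signal (about 40% for 10 subgroups, 64% for 20), which is why post hoc findings are better treated as hypotheses to be evaluated than as confirmations.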

Careful examination and analysis of high-quality interventional and noninterventional studies will improve decisions affecting diagnosis, treatment, and the delivery of medical care. The dramatically increased need for reasoned payment decisions may provide the strategic opportunity to advance understanding of the value of observational data for comparative effectiveness research. Guidelines appropriate to observational comparative effectiveness research will promote the acceptance, appreciation, and use of such research to benefit public health.

ABOUT THE AUTHOR

NANCY DREYER is Chief of Scientific Affairs at Outcome Sciences, Inc., where she leads a broad array of studies on the safety and effectiveness of medical interventions. She is a senior editor of AHRQ's user's guide, Registries for Evaluating Patient Outcomes, and works with the European Medicines Agency on PROTECT, a project funded by the European Innovative Medicines Initiative.

REFERENCES

1. Food and Drug Administration Amendments Act of 2007. HR 3580. Available at: http://www.thomas.gov/cgi-bin/bdquery/z?d110:h.r.03580. Accessed 16 June 2010.
2. Conference Report on HR 1. American Recovery and Reinvestment Act of 2009. 111th Congress, First Session, Congressional Record House; 12 February 2009. 155:H1307–1516.
3. United States Senate. HR 3590. The Patient Protection and Affordable Care Act. Available at: http://democrats.senate.gov/reform/patient-protection-affordable-care-act-as-passed.pdf. Accessed 4 June 2010.
4. U.S. Department of Health and Human Services, Federal Coordinating Council for Comparative Effectiveness Research. Report to the President and the Congress. June 2009. Available at: http://www.hhs.gov/recovery/programs/cer/cerannualrpt.pdf. Accessed 11 October 2009.
5. Dreyer NA, Tunis SR, Berger M, et al. Why observational studies should be among the tools used in comparative effectiveness research. Health Aff. 2010;29:1818–1825.
6. Adisasmito WA, Chan PK, Lee N, et al. Effectiveness of antiviral treatment in human influenza H5N1 infections: analyses from a global patient registry. J Infect Dis. 2010;202:1154–1160.
7. Steinbrook R. Saying no isn't NICE—the travails of Britain's National Institute for Health and Clinical Excellence. N Engl J Med. 2008;359:1977–1981.
8. Rothman KJ, Poole C. Some guidelines on guidelines: they should come with expiration dates. Epidemiology. 2007;18:794–796.
9. Vandenbroucke JP. STREGA, STROBE, STARD, SQUIRE, MOOSE, PRISMA, GNOSIS, TREND, ORION, COREQ, QUOROM, REMARK...and CONSORT: for whom does the guideline toll? J Clin Epidemiol. 2009;62:594–596.
10. European Medicines Agency. ICH topic E6 R1. Guideline for Good Clinical Practice. CPMP/ICH/135/95. Available at: http://www.ema.europa.eu/pdfs/human/ich/013595en.pdf. July 2002.
11. Moher D, Schulz KF, Altman DG; for the CONSORT Group. The CONSORT statement: revised recommendations for improving the quality of reports of parallel-group randomized trials. JAMA. 2001;285:1987–1991.
12. Moher D, Liberati A, Tetzlaff J, Altman DG; The PRISMA Group. Preferred reporting items for systematic reviews and meta-analyses: The PRISMA Statement. Ann Intern Med. 2009;151:264–269.
13. Dreyer NA, Schneeweiss S, McNeil B, et al. Recognizing high-quality observational studies of comparative effectiveness. Am J Manag Care. 2010;16:467–471.
14. European Network of Centres for Pharmacoepidemiology and Pharmacovigilance. Available at: www.encepp.eu.
15. Berger ML, Mamdani M, Atkins D, Johnson ML. Good research practices for comparative effectiveness research: defining, reporting, and interpreting nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part I. Value Health. 2009;12:1044–1052.
16. Cox E, Martin BC, Van Staa T, Garbe E, Siebert U, Johnson ML. Good research practices for comparative effectiveness research: approaches to mitigate bias and confounding in the design of nonrandomized studies of treatment effects using secondary data sources: the ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part II. Value Health. 2009;12:1053–1061.
17. Johnson ML, Crown W, Martin BC, Dormuth CR, Siebert U. Good research practices for comparative effectiveness research: analytic methods to improve causal inference from nonrandomized studies of treatment effects using secondary data sources: The ISPOR Good Research Practices for Retrospective Database Analysis Task Force Report—Part III. Value Health. 2009;12:1062–1072.
18. von Elm E, Altman DG, Egger M, Pocock SJ, Gøtzsche PC, Vandenbroucke JP. The Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) statement: guidelines for reporting observational studies. Ann Intern Med. 2007;147:573–577.
19. GRACE Principles—Good ReseArch for Comparative Effectiveness. Available at: www.graceprinciples.org.
20. ENCePP Guide on Methodological Standards in Pharmacoepidemiology. EMA/95098/2010. Available at: www.encepp.eu.
21. Vandenbroucke JP. Preregistration of epidemiologic studies: an ill-founded mix of ideas. Epidemiology. 2010;21:619–620.
22. Poole C. A vision of accessible epidemiology. Epidemiology. 2010;21:616–618.
23. Lash TL. Preregistration of study protocols is unlikely to improve the yield from our science, but other strategies might. Epidemiology. 2010;21:612–613.

Section Description

Editors' note: This series addresses topics of interest to epidemiologists across a range of specialties. Commentaries start as invited talks at symposia organized by the Editors. This paper was originally presented at the 2010 Society for Epidemiologic Research Annual Meeting in Seattle, WA.

© 2011 Lippincott Williams & Wilkins, Inc.