
The Changing Face of Epidemiology

With Great Data Comes Great Responsibility

Publishing Comparative Effectiveness Research in Epidemiology

Hernán, Miguel A.

Epidemiology. 2011;22(3):290–291. DOI: 10.1097/EDE.0b013e3182114039

Comparative effectiveness research has been enshrined in the US Healthcare Reform Law of 2010.1,2 The law mandates the creation of a Patient-Centered Outcomes Research Institute (PCORI), which will establish national research priorities and methodological standards, and will carry out research. The UK's National Institute for Health and Clinical Excellence, set up in 1999, was the world pioneer in this area. Though the organizational structure and duties of the American and British Institutes vary (eg, the US Institute is barred by law from considering the cost-effectiveness of interventions), both institutes have an overarching common goal: to improve the public's health through research on the relative effectiveness of different interventions. These interventions include medical treatments, changes in health care organization and delivery, community and workplace interventions, individual lifestyle modifications, etc.

Upon first hearing the above, many epidemiologists quickly retort: “Isn't comparative effectiveness research something we have always done under different names?” The answer is yes, of course. Epidemiologists are natural comparative effectiveness researchers. In fact, the new US law stipulates that the Board of Governors of the Institute shall collectively have scientific expertise in “epidemiology, decision sciences, health economics, and statistics.”1 And yet most of the methodological discussions about comparative effectiveness research have so far taken place outside of epidemiologic journals and departments. In an attempt to jump-start a discussion on comparative effectiveness research among epidemiologists, this issue of Epidemiology includes 4 pieces3–6 on the use of observational healthcare databases, all written by leading epidemiologists in this old new field.

This brings us to the main point of this editorial. The editors of Epidemiology encourage the submission of high-quality papers on the evaluation of interventions with public health significance. If your research involves healthcare databases, we invite you to read the commentaries published in this issue, and take their recommendations to heart when preparing your paper. A summary and extension of some of the key recommendations follows.

Healthcare databases are attractive because they put large quantities of data into our hands. But let us be ready to say no to quantity if it compromises quality too much. Both Weiss3 and Ray4 provide examples in which incomplete or incorrect ascertainment of exposure, outcome, confounders, eligibility criteria, or linkage variables resulted in incorrect conclusions. As Weiss3 puts it, “Just because an analysis can be done does not mean it should be done.” At the very least, epidemiologists should work closely with those who constructed the database, or be ready to invest enough time to understand its nooks and crannies.

The quality of information on health outcomes is particularly important. Many validation studies have shown that database information on outcomes cannot just be accepted “as is” (for example, see Ives et al,7 Hernán et al,8 Garcia Rodríguez and Ruigómez,9 and Saunders et al10). Authors submitting to Epidemiology need to provide a quantitative assessment of the quality of the health outcomes used in their study. If you did not conduct your own validation study, be prepared to cite others who did. Validation studies increase cost and take time, but they may be the difference between cranking out analyses and sound epidemiologic research.
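
To make this concrete, one common quantitative assessment is to sample algorithm-identified cases for medical-record review and report the positive predictive value of the outcome definition with an exact confidence interval. The sketch below uses entirely hypothetical counts (170 confirmed of 200 reviewed is made up for illustration):

```python
# Hypothetical chart-review validation of a database outcome definition:
# of n algorithm-identified cases sampled for review, k were confirmed.
from scipy.stats import beta

def ppv_with_ci(k, n, alpha=0.05):
    """Positive predictive value with an exact (Clopper-Pearson) CI."""
    ppv = k / n
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return ppv, lower, upper

# Example: 170 of 200 sampled database "myocardial infarction" codes
# were confirmed on chart review (illustrative numbers only).
ppv, lo, hi = ppv_with_ci(170, 200)
print(f"PPV = {ppv:.2f} (95% CI {lo:.2f}, {hi:.2f})")
```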

Even if the data in your study, including confounders, are correctly measured, there is still room for trouble. Stürmer et al5 describe examples in which bias arose because of inappropriate choice of comparison groups. Dreyer6 stresses the importance of guidelines for the proper conduct and reporting of comparative effectiveness research using observational data. These guidelines are certainly helpful but, unlike several medical journals, Epidemiology does not require authors to submit their papers to a standardized test.11 Passing a driving test may be required to drive a car on the highway but not to race it professionally. If your paper is the equivalent of a car that deserves to be driven on the race track, we trust our reviewers will help us make that determination without the false sense of security sometimes associated with one-size-fits-all checklists.

Nonetheless, if we were to mandate a checklist for papers submitted to Epidemiology, there is one set of guidelines that would prevail over all others: CONSORT.12,13 Though designed for randomized clinical trials, many of the CONSORT guidelines apply to observational studies as well. Ideally, all comparative-effectiveness-research questions would be answered via a large randomized experiment with clinically relevant outcomes (ie, no surrogate endpoints), long follow-up, and perfect adherence. The next best thing is an observational study that attempts to emulate such a randomized experiment.

Most CONSORT checklist items, with the possible exception of those related to sample size calculation and blinding (items 7 and 11), apply to observational studies for comparative effectiveness research. In particular, the Methods section of a comparative-effectiveness-research paper published in Epidemiology will describe the eligibility criteria (item 4), the interventions being compared (item 5), and the outcomes (item 6). The items on randomization (items 8–10) will be replaced by a description of the process followed to identify and measure the confounders.
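
As a purely illustrative device (not a format prescribed by the journal or by CONSORT), the protocol of the target trial can be written down as a structured object before any analysis, which keeps these items explicit. All field names and criteria below are hypothetical:

```python
# A sketch of recording the protocol of the target trial being emulated,
# with the relevant CONSORT items made explicit. Illustration only.
from dataclasses import dataclass

@dataclass
class EmulatedTrialProtocol:
    eligibility_criteria: list[str]   # CONSORT item 4
    interventions: dict[str, str]     # CONSORT item 5: arm -> definition
    outcomes: list[str]               # CONSORT item 6
    confounders: list[str]            # replaces randomization items 8-10
    follow_up: str = "treatment initiation to outcome, death, or end of data"

protocol = EmulatedTrialProtocol(
    eligibility_criteria=["age >= 40", "no prior use of either drug",
                          ">= 1 year of enrollment before baseline"],
    interventions={"A": "initiate drug A at baseline",
                   "B": "initiate drug B at baseline"},
    outcomes=["hospitalization for myocardial infarction"],
    confounders=["age", "sex", "diabetes", "prior cardiovascular disease"],
)
print(protocol)
```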

The statistical analysis (item 12) of comparative-effectiveness-research observational studies, like that of randomized trials, can be classified as intention-to-treat, per-protocol, or as-treated.14 The intention-to-treat analysis will be adjusted for baseline confounders in observational studies; per-protocol and as-treated analyses will be adjusted for baseline and time-varying confounders in both randomized and observational studies. All of the above apply equally to prospective cohort studies and to case-control studies nested within a healthcare database.
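
In an observational study, the analogue of the intention-to-treat analysis can adjust for baseline confounders via, for example, inverse probability weighting based on a propensity score. The sketch below uses synthetic data and is illustration only; per-protocol and as-treated analyses would additionally require weights for time-varying confounding, which this snippet does not attempt:

```python
# A minimal sketch of comparing initiators of two treatments with
# adjustment for a baseline confounder via inverse probability weighting.
# Synthetic data; not a recommended analysis.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 5000
confounder = rng.normal(size=n)                              # baseline confounder L
treated = rng.binomial(1, 1 / (1 + np.exp(-confounder)))     # initiation depends on L
outcome = rng.binomial(1, 1 / (1 + np.exp(-(0.5 * treated + confounder - 2))))

# Propensity score model: P(treatment initiation | baseline confounder)
X = sm.add_constant(confounder)
ps = sm.Logit(treated, X).fit(disp=0).predict(X)

# Inverse probability weights create a pseudo-population in which
# initiation is independent of the measured baseline confounder.
w = treated / ps + (1 - treated) / (1 - ps)

risk_1 = np.average(outcome[treated == 1], weights=w[treated == 1])
risk_0 = np.average(outcome[treated == 0], weights=w[treated == 0])
print(f"Adjusted risk difference: {risk_1 - risk_0:.3f}")
```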

The Results section (items 13 to 19) need not differ much between observational studies and randomized trials. Reporting measures of (properly adjusted) absolute risk is strongly recommended. For analysis of time-to-event data, these absolute risks can be represented as survival curves.15 Also, it is necessary to specify the number of persons in each treatment group at the start of the intervention, and the proportions who were lost to follow-up, who deviated from the intervention of interest, and who developed the outcome. A flowchart would be helpful.
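
As a small illustration (synthetic data, hypothetical numbers, not a template), absolute risks from time-to-event data can be reported as Kaplan-Meier cumulative risks per arm, alongside the flow counts mentioned above:

```python
# Reporting absolute risks from time-to-event data: a plain Kaplan-Meier
# estimator per treatment arm, with per-arm flow counts. Illustration only.
import numpy as np

def km_risk(time, event, horizon):
    """Kaplan-Meier cumulative risk at `horizon` (event=1, censored=0)."""
    surv = 1.0
    for t in np.unique(time[(event == 1) & (time <= horizon)]):
        at_risk = np.sum(time >= t)
        events_at_t = np.sum((time == t) & (event == 1))
        surv *= 1 - events_at_t / at_risk
    return 1 - surv

rng = np.random.default_rng(1)
for arm in ("A", "B"):
    n = 1000
    true_time = rng.exponential(10 if arm == "A" else 8, size=n)
    censor = rng.exponential(12, size=n)
    event = (true_time <= censor).astype(int)
    obs = np.minimum(true_time, censor)
    print(f"arm {arm}: n={n}, events={event.sum()}, "
          f"censored={n - event.sum()}, 5-year risk={km_risk(obs, event, 5):.2f}")
```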

In summary, to use healthcare databases for comparative effectiveness research that is directly relevant to decision-making, frame your question as one that would be answered by a randomized trial, and then tell us how you emulated such a trial in the observational data. We look forward to receiving your paper.

REFERENCES

1. 111th US Congress. The Patient Protection and Affordable Care Act. Public Law 111–148; March 23, 2010.
2. 111th US Congress. Health Care and Education Reconciliation Act. Public Law 111–152; March 30, 2010.
3. Weiss NS. The new world of data linkages in clinical epidemiology: Are we being brave or foolhardy? Epidemiology. 2011;22:292–294.
4. Ray WA. Improving automated database studies. Epidemiology. 2011;22:302–304.
5. Stürmer T, Jonsson Funk M, Poole C, Brookhart A. Nonexperimental comparative effectiveness research using linked healthcare databases. Epidemiology. 2011;22:298–301.
6. Dreyer N. Making observational studies count: shaping the future of comparative effectiveness research. Epidemiology. 2011;22:295–297.
7. Ives DG, Fitzpatrick AL, Bild DE, et al. Surveillance and ascertainment of cardiovascular events. The Cardiovascular Health Study. Ann Epidemiol. 1995;5:278–285.
8. Hernán MA, Jick SS, Olek MJ, Jick H. Recombinant hepatitis B vaccine and the risk of multiple sclerosis. A prospective study. Neurology. 2004;63:838–842.
9. Garcia Rodríguez LA, Ruigómez A. Case validation in research using large databases. Br J Gen Pract. 2010;60:160–161.
10. Saunders KW, Dunn KM, Merrill JO, et al. Relationship of opioid use and dosage levels to fractures in older chronic pain patients. J Gen Intern Med. 2010;25:310–315.
11. The Editors. Probing STROBE. Epidemiology. 2007;18:789–790.
12. Zwarenstein M, Treweek S, Gagnier JJ, et al. Improving the reporting of pragmatic trials: an extension of the CONSORT statement. BMJ. 2008;337:a2390.
13. Schulz KF, Altman DG, Moher D. CONSORT 2010 statement: updated guidelines for reporting parallel group randomised trials. PLoS Med. 2010;7:e1000251.
14. Danaei G, García Rodríguez LA, Fernández Cantero O, Logan R, Hernán MA. Observational data for comparative effectiveness: an emulation of randomized trials to estimate the effect of statins on primary prevention of coronary heart disease. Stat Methods Med Res. 2011. In press.
15. Hernán MA. The hazards of hazard ratios. Epidemiology. 2010;21:13–15.
© 2011 Lippincott Williams & Wilkins, Inc.