EPIDEMIOLOGY Watching
“EPIDEMIOLOGY watching” is a forum to address broad aspects of epidemiologic research – its history, its methods, its impact – and to stimulate discussion among its students and practitioners.
Sunday, April 10, 2011
Will Barack Obama change epidemiologic theory?
A health care initiative calling for comparative effectiveness research (CER), with US$1.1 billion in initial funding, was one of the most widely noted early actions of the newly elected Barack Obama in 2009 [1]. The measure has since become law, and epidemiologists and methodologists are jumping on the bandwagon, eager to contribute to a new era in health care in which decisions on the worth of treatments are to be based rationally on numerical evaluations, and perhaps also with an eye on research funding. The series of papers in the May 2011 issue of EPIDEMIOLOGY attempts to jump-start a discussion about CER. The new ideals are a reincarnation of France’s 1830s movement of “Médecine d’Observation” [2], but even more worth striving for enthusiastically in the early 21st century.
  
‘Haven’t we all always been CER researchers?’ is the gist of Miguel Hernán’s commentary [3]. Yes, we have, but up to now epidemiologists have mostly covered the easy part: the adverse effects of medical treatments. In adverse-effects research, confounding by indication is mostly absent, because adverse effects are usually different diseases (with different risk factors) from the one being treated, and quite often unpredictable. Confounding by contraindication [4], if present, can often be described in a few prescribing rules that may lead to successive restrictions during data analysis [5]. Thus, in adverse-effects research, restrictions and a careful choice of comparators and (where necessary) “new users” [6] lead to quite credible “expected exchangeability” of patient groups. Such research has the added advantage of being more generalizable than randomized trials, which are limited to selected populations [7].
 
Classic papers by methodologists as diverse as Rubin [8] and Miettinen [9] carry an outspoken message: “confounding by indication” in medical research on the intended effects of treatments is tractable only by randomization. The whole Evidence-Based Medicine movement, as well as the Cochrane Collaboration, is built on this very idea; both tried to revolutionize medicine at the end of the last century. If randomization is the only solution to confounding by indication, then the prospects of CER are severely crippled: CER would be limited to adverse-effect pharmacoepidemiology, which is indeed what we have always done.
 
However, the main aim of CER, as explicitly announced by Obama himself, is to compare the effectiveness of drugs in daily practice [10]. So it is no surprise that, in an earnest effort to join forces to change health care (and to bring the US closer to what is happening in Europe, e.g., at NICE [2, 11]), people from all sides are enthusiastically trying to nibble away at these classic notions. Admittedly, when confounders are few and can be measured precisely (as in the example of sequential CD4 counts and HIV treatment [12]), the classic papers have been proven wrong. However, in other instances, when judgments about the prognosis of patients are complex and may include hard-to-quantify characteristics like “degree of oedema” or “impression of frailty” [13], it has been shown repeatedly that confounding by indication remains “a most stubborn bias” [14, 15].
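 
To make that distinction concrete, here is a minimal sketch in standard counterfactual notation (my own illustration, not drawn from any of the cited papers). Write A for treatment, Y for outcome, L for the recorded prognostic factors, and Y^a for the outcome under treatment a. Standardization over the measured covariates identifies the average effect of treatment as

\[
  \mathrm{E}[Y^{a}] \;=\; \sum_{l} \mathrm{E}[Y \mid A = a,\, L = l]\,\Pr(L = l),
\]

an identity that is valid only under conditional exchangeability, that is, only if treated and untreated patients are comparable within levels of L. With sequential CD4 counts, L plausibly contains the main drivers of the treatment decision; an unrecorded “impression of frailty” is precisely a component of prognosis that sits outside L, and no amount of modelling of the measured covariates removes the resulting bias.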
 
Should we give up in advance, or should we see how far we can get in attempting what was judged impossible: evaluating the beneficial effects of treatments in non-randomized studies? I have strong sympathies with people who make the attempt. Epidemiology is an evolving discipline that makes progress; think of how our insights into confounding have grown, and how case-control studies were revolutionized in the late 1970s and early 1980s, and again over the last decade. Still, it is likely that in most instances mere statistical adjustment for confounding will not suffice to replace randomization. We should explore techniques that promise to address unmeasured confounding by indication, such as instrumental variables (sketched below) or severe restrictions, which can help in particular circumstances that remain to be defined. However, severe restrictions may wreck another ideal of CER: to show what works in daily practice for a wide array of patients. So we should explore how far we can push observational epidemiology, and we should seek to develop new methods, but we should keep an open mind about the possibility of failure. Whatever one’s hopes or enthusiasms, the classic papers may still be right. Clinical trialists have already predicted that CER will lead to a lowering of standards of evidence because of “data mining” [16]. If, on the other hand, CER succeeds, Obama’s presidential legacy will include a change of epidemiologic theory.
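 
For readers who have not met instrumental variables, a minimal sketch of the idea under textbook assumptions, not a claim about any of the studies cited above: suppose a binary instrument Z (for example, physician or regional prescribing preference) influences which treatment A a patient receives, but affects the outcome Y only through A and shares no unmeasured causes with Y. Then the treatment effect can be recovered from two associations, neither of which requires the prescribing decision itself to be modelled:

\[
  \beta_{\mathrm{IV}} \;=\;
  \frac{\mathrm{E}[Y \mid Z = 1] - \mathrm{E}[Y \mid Z = 0]}
       {\mathrm{E}[A \mid Z = 1] - \mathrm{E}[A \mid Z = 0]} .
\]

This Wald ratio bypasses unmeasured confounding by indication only insofar as those assumptions hold; whether they hold in a given clinical setting is exactly one of the “particular circumstances that remain to be defined.”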
 
If you would like to comment, email me directly at epidemiologyblog@gmail.com or submit your comment via the journal, which requires a password-protected login. Unfortunately, comments are limited to 1000 characters.
 
 
[2] Vandenbroucke JP. Evidence-based medicine and "médecine d'observation". J Clin Epidemiol 1996;49:1335-8.
 
[3] Hernán MA. With great data comes great responsibility: publishing comparative effectiveness research in Epidemiology. Epidemiology 2011;22:290-291.

[4] Feenstra H, Grobbee RE, in't Veld BA, Stricker BH. Confounding by contraindication in a nationwide cohort study of risk for death in patients taking ibopamine. Ann Intern Med 2001;134:569-72.
 
[5] Schneeweiss S, Patrick AR, Stürmer T, Brookhart MA, Avorn J, Maclure M, Rothman KJ, Glynn RJ. Increasing levels of restriction in pharmacoepidemiologic database studies of elderly and comparison with randomized trial results. Med Care 2007;45(10 Suppl 2):S131-42.
 
[6] Ray WA. Evaluating medication effects outside of clinical trials: new-user designs. Am J Epidemiol 2003;158:915-20.
 
[7] Vandenbroucke JP, Psaty BM. Benefits and risks of drug treatments: how to combine the best evidence on benefits with the best data about adverse effects. JAMA 2008;300:2417-9.
 
[8] Rubin DB. Bayesian inference for causal effects: the role of randomization. Ann Statistics 1978;6:34-58.
 
[9] Miettinen OS. The need for randomization in the study of intended effects. Stat Med 1983;2:267-71.
 
[10] http://www.nytimes.com/2009/05/03/magazine/03Obama-t.html?scp=1&sq=Obama 2009 interview health care leonhardt&st=cse&pagewanted=6
 
[11] Rawlins M. De testimonio: on the evidence for decisions about the use of therapeutic interventions. Lancet 2008;372:2152-61.
 
[12] Sterne JA, Hernán MA, Ledergerber B, Tilling K, Weber R, Sendi P, Rickenbach M, Robins JM, Egger M; Swiss HIV Cohort Study. Long-term effectiveness of potent antiretroviral therapy in preventing AIDS and death: a prospective cohort study. Lancet 2005;366:378-84.
 
[13] Stürmer T, Jonsson Funk M, Poole C, Brookhart MA. Nonexperimental comparative effectiveness research using linked healthcare databases. Epidemiology 2011;22:298-301.
 
[14] Bosco JL, Silliman RA, Thwin SS, Geiger AM, Buist DS, Prout MN, Yood MU, Haque R, Wei F, Lash TL. A most stubborn bias: no adjustment method fully resolves confounding by indication in observational studies. J Clin Epidemiol 2010;63:64-74.
 
[15] Stukel TA, Fisher ES, Wennberg DE, Alter DA, Gottlieb DJ, Vermeulen MJ. Analysis of observational studies in the presence of treatment selection bias: effects of invasive cardiac management on AMI survival using propensity score and instrumental variable methods. JAMA 2007;297(3):278-85.
 
[16] Djulbegovic M, Djulbegovic B. Implications of the principle of question propagation for comparative-effectiveness and "data mining" research. JAMA 2011;305:298-9.
 
© Jan P Vandenbroucke, 2011
 
About the Author

Jan P. Vandenbroucke
Jan P. Vandenbroucke is a professor of Clinical Epidemiology at Leiden University and an Academy Professor of the Royal Netherlands Academy of Arts and Sciences. He studied medicine in Belgium and epidemiology at Harvard. He serves on the advisory board of The Lancet, is co-editor of the James Lind Library and the People’s Epidemiologic Library, and is co-author of the STROBE guidelines (Strengthening the Reporting of Observational Studies in Epidemiology).
