Commentary: Will Academia Embrace Comparative Effectiveness Research?

Lauer, Michael S. MD

doi: 10.1097/ACM.0b013e318217d6e6

In recent medical history, a number of therapies that were widely adopted based on observational data or pathophysiological constructs turned out to be useless or even harmful when tested in randomized comparative effectiveness trials. These therapies not only harmed patients but also did a disservice to the practical education of medical students, residents, and fellows. These trainees effectively learned that it is acceptable to implement practices even in the absence of high-quality evidence, and so they may not have learned how to analyze the quality of evidence. In this issue of Academic Medicine, seven groups address critical aspects of the intersection between comparative effectiveness research (CER) and academic medicine. Their topics include the need at academic health centers for cultural shifts, for addressing conflicts of interest, for exploiting academic talent and electronic information resources, for interacting well with policy makers, for incorporating economic evaluations, for incorporating tests of educational methods, for developing multidisciplinary models, and for integrating CER into “predictive health.” This commentary argues that academia must embrace CER by insisting on the highest levels of evidence, by viewing all clinical interactions as opportunities for scientific advancement, by setting an example for policy makers and colleagues working in nonacademic settings, and by engaging all physicians in the clinical research enterprise.

Dr. Lauer is director, Division of Cardiovascular Sciences, National Heart, Lung, and Blood Institute, National Institutes of Health, Bethesda, Maryland.

Correspondence should be addressed to Dr. Lauer, Office of the Director, Division of Cardiovascular Sciences, National Heart, Lung, and Blood Institute, National Institutes of Health, 6701 Rockledge Drive, Room 8128, Bethesda, MD 20892; telephone: (301) 435-0422; e-mail:

Editor's Note: This is a commentary on the collection on comparative effectiveness research that appears in this issue.

In June 1993, Peters et al1 published a report on the outcomes of 85 women with metastatic breast cancer who were treated with high-dose chemotherapy followed by autologous bone marrow transplantation. Compared with women with similar diagnoses who were enrolled in trials of standard regimens, these 85 women saw substantially higher survival rates. Nonetheless, the authors of the report were cautious, stating, “We believe that confirmation of these results in a prospective randomized trial is important before this therapy can be accepted for wide-spread use. Many new therapies, initially promising, fizzle. This treatment should only be offered at major centers... and, whenever possible, [into] randomized comparative [italics added] trials.”

Despite Peters and colleagues' prudent call, oncologists in a number of academic health centers instituted high-dose chemotherapy as a treatment for metastatic breast cancer. Over a period of 20 years, these physicians treated more than 30,000 women at a cost of $3 billion while only enrolling a small fraction of patients into proper randomized trials examining whether high-dose therapies led to better outcomes than standard dose therapies. Investigators encountered stiff resistance to the few randomized trials that were conducted, as some leading academic oncologists decried the trials as “unethical,” given widespread acceptance of the existing evidence.2 Patients and for-profit companies became vigorous advocates for the wide adoption of this treatment, taking insurers to court to ensure reimbursement. The standards used by insurers to determine reimbursement were more rigorous than those used by academic scholars to prove that a toxic and dangerous therapy was worth the personal and financial costs.3 The trials ultimately showed that the therapy failed to improve outcomes.

I am not an oncologist, but I think I can imagine the impressions that medical students, residents, and fellows might have had. They might have seen highly respected faculty administer and promote a new, aggressive therapy based on observational data and belief and not on large-scale randomized trials. The trainees might have concluded that this practice was not only acceptable but laudable, as their teachers were showing them the way to implement innovations. As an internal medicine and cardiology trainee at two major Ivy League academic health centers, I was taught that estrogen should be considered a standard cardiac drug, that antiarrhythmic agents must save lives because they suppress ventricular premature beats, that angioplasty prevents myocardial infarction, and that, for heart failure, beta-blockers are dangerous but inotropes are lifesaving. Some of my teachers openly supported large-scale comparative effectiveness trials to test each of these beliefs. But I saw that they and investigators at other academic health centers often faced resistance from those who felt that comparative trials were unnecessary or even unethical, as observational or surrogate-outcome data seemed to paint clear, convincing cases for the adoption of these practices.

Academic Medicine and Comparative Effectiveness Research Today

In this issue, seven groups publish thoughtful reports on the role of academic health centers in fostering comparative effectiveness research (CER), which is also known as patient-centered outcomes research. Rich et al4 aptly note the need for major changes in institutional culture to ensure that future generations of physicians will develop an appreciation for evidence-based care. The authors correctly challenge academic health centers to confront conflicts of interest that hinder the recruitment of patients into high-quality comparative effectiveness studies. VanLare et al5 call on academic health centers to engage actively in CER because they are responsible for training future researchers. Academic health centers, the authors argue, also have unique resources, such as electronic health records, that will enhance national CER capacity.

Zerzan et al6 focus on opportunities by which academic faculty can interact with policy makers to improve state Medicaid policies. The authors identify successful examples of this relationship, including the strong collaboration between state officials and faculty at Oregon Health and Science University as they worked together on prescription drug plans. The authors also explain how researchers and policy makers must learn about each other's needs and cultures, recognizing that they both must contend with the pervasive, but sometimes false, American beliefs that “more and newer health care options are better than existing options.”

Iribarne et al7 consider the sensitive question of the role of cost-effectiveness research within a national CER agenda. The authors present their experiences using the resources of a Clinical and Translational Science Award sponsored by the National Institutes of Health to develop and deliver a new curriculum in economic evaluation methods. They argue that, given rapidly rising health care costs, academic health centers have a core responsibility to teach economic evaluation.

McGaghie et al8 offer a specific example of CER. The authors performed a systematic review and meta-analysis comparing simulation with traditional methods of teaching medical trainees a variety of skills. They found that simulation-based methods performed better, but they admit that the number of studies done to date is small, that each study enrolled few subjects, and that the studies primarily focused on procedural skills. Nonetheless, McGaghie et al remind us that CER goes beyond simple drug comparisons; as VanLare et al argue, the CER community must ask questions about all kinds of interventions.

Marantz et al9 present a multidisciplinary model for fostering CER within their Clinical and Translational Science Award. The authors argue that, to be successful, they must bring together expertise and resources in efficacy and clinical trials, in evaluation and health services research, in behavioral research and wellness, and in social science and implementation research.

Finally, Rask et al10 describe how to integrate CER with “predictive health.” Investigators from multiple disciplines have identified biomarkers and other characteristics that can predict an individual patient's risk for developing a disease or for developing an adverse outcome from an established disease. CER is a tool that can potentially unite predictive discoveries with a vision of personalized medicine, in which interventions are targeted to patients who are most likely to benefit and withheld from those who are more likely to suffer side effects than to realize benefits.

Together, the authors of these seven papers present a tapestry of issues that academic health centers must address if American medicine is to avoid the mistakes of high-dose chemotherapy, hormone therapy, antiarrhythmic agents, and numerous other failed interventions that our profession adopted on the basis of inferior evidence. Academia has a responsibility not only to train future researchers but also to inculcate a scientific orientation among all physicians. Most Americans support clinical research and would be willing to volunteer as participants, yet only a tiny fraction can recall being recruited by their physicians into a clinical study.11 How can we evolve American academic medicine into a force by which all patients are invited into practice-changing clinical trials and by which all physicians become engaged in stimulating, supporting, and disseminating the results of CER?

Recommendations for Embracing CER in the Future of Academic Medicine

First, academia should take advantage of our nation's renewed interest in CER to embrace the high standards of evidence required for any intervention to be adopted into practice or policy. Most medical interventions yield modest benefits, meaning that only large-scale, unbiased, randomized trials will produce robust estimates of comparative clinical effectiveness. We must come to recognize that most other types of studies, including meta-analyses of small studies, trials that focus on surrogate outcomes, and observational studies, produce inferior evidence,12 evidence which, if accepted, can have deadly consequences.

Second, academic leaders should become role models, promoting the need to incorporate scientific thinking into clinical practice and policy. Nearly every patient presents the opportunity to ask questions of comparative effectiveness. Academic health centers need to restructure themselves into learning environments that incentivize and stimulate all of their physicians to recruit all of their patients into high-quality CER trials. Academic physicians and institutions must provide and communicate reliable, rigorous assessment of outcome data if they hope to influence policy and reimbursement decisions in the public arena and to retain credibility with their patients and advocacy groups in practice.

Third, medical schools and the writers of licensing and board examinations should restructure premedical and medical curricula to meet the realities of modern medical practice. For contemporary physicians, biostatistics, clinical epidemiology, decision science, and experimental design are far more relevant than calculus or organic chemistry. Some residency and fellowship training programs already require meaningful research experiences. Medical schools have a responsibility to ensure that all physicians have the skills to ask scientific questions and to be sophisticated readers of original research papers published in major journals. The formal educational experiences must be reinforced by postgraduate training experiences that incorporate critical review on a daily basis.

Fourth, academia should not be afraid to celebrate the successes of CER. Because of CER, we are no longer harming patients with ineffective high-dose chemotherapy, hormones, inotropes, or antiarrhythmic drugs. Meanwhile, we have seen dramatic decreases in mortality and morbidity from cardiovascular disease. Epidemiologists have discovered the possible roles of cholesterol, blood pressure, and other risk factors in the genesis of atherosclerosis and heart failure. Basic and translational scientists have identified molecular pathways and targets for treatment. Building on epidemiological, basic, and translational discoveries, large-scale comparative effectiveness mega-trials have established the benefits of aspirin, statins, diuretics, beta-blockers, angiotensin-converting enzyme inhibitors, thrombolytics, coronary stents, bypass surgery, and implantable defibrillators. And this story is far from over.

Finally, clinicians, patients, and all stakeholders should engage in an ongoing national conversation about CER. Like all scientific research, CER is difficult to do well, and, as Rich et al4 write, it faces special challenges stemming from conflicts of interest and the need to bring together people with highly diverse perspectives. Like all scientific studies, individual CER studies rarely, by themselves, produce definitive answers but, instead, must be considered in the context of dynamic, complex, and often international conversations. CER, though, has the unique potential to directly link academia's biomedical research enterprise to clinical practice, where physicians and their patients can incorporate science into our universal quest for better health.


Dr. Lauer is a full-time employee of the National Heart, Lung, and Blood Institute.

Other disclosures:


Ethical approval:

Not applicable.


The views in this commentary reflect those of the author and not necessarily those of the National Heart, Lung, and Blood Institute or the U.S. Department of Health and Human Services.


1 Peters WP, Ross M, Vredenburgh JJ, et al. High-dose chemotherapy and autologous bone marrow support as consolidation after standard-dose adjuvant therapy for high-risk primary breast cancer. J Clin Oncol. 1993;11:1132–1143.
2 Brownlee S. Bad science and breast cancer. Discover. August 1, 2002. Accessed February 15, 2011.
3 Daniels N, Sabin JE. Last chance therapies and managed care. Pluralism, fair procedures, and legitimacy. Hastings Cent Rep. 1998;28:27–41.
4 Rich EC, Bonham AC, Kirch DG. The implications of comparative effectiveness research for academic medicine. Acad Med. 2011;86:684–688.
5 VanLare JM, Conway PH, Rowe JW. Building academic health centers' capacity to shape and respond to CER policy. Acad Med. 2011;86:689–694.
6 Zerzan JT, Gibson M, Libby AM. Improving state Medicaid policies with comparative effectiveness research: A role for academic health centers. Acad Med. 2011;86:695–700.
7 Iribarne A, Easterwood R, Wang YC. Integrating economic evaluation methods into clinical and translational science award consortium comparative effectiveness educational goals. Acad Med. 2011;86:701–705.
8 McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med. 2011;86:706–711.
9 Marantz PR, Currie B, Bhalla R, et al. Developing a multidisciplinary model of comparative effectiveness research within a CTSA. Acad Med. 2011;86:712–717.
10 Rask KJ, Brigham KL, Johns MME. Integrating comparative effectiveness research into predictive health: A unique role for academic health centers. Acad Med. 2011;86:718–723.
11 Research!America. America Speaks: Poll Summary Data. Vol 10. Accessed February 15, 2011.
12 Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2:e124.
© 2011 Association of American Medical Colleges