The growing prominence of comparative effectiveness research (CER) during the past several years presents academic health centers (AHCs) with opportunities and challenges nicely articulated in the articles by Rich et al1 and VanLare et al2 in this issue of Academic Medicine. In this commentary, I will provide some background information about CER and underscore some of the main points of these two articles. My main purpose, however, will be to discuss the societal purposes of CER and the impediments that may prevent many U.S. AHCs from realizing these larger goals.
What exactly is CER? Several authoritative sources, including the Institute of Medicine (IOM)3 and the Patient Protection and Affordable Care Act (PPACA),4 have developed very similar definitions. The IOM3 defines CER as
the generation and synthesis of evidence that compares the benefits and harms of alternative methods to prevent, diagnose, treat, and monitor a clinical condition or to improve the delivery of care. The purpose of CER is to assist consumers, clinicians, purchasers, and policy makers to make informed decisions that will improve health care at both the individual and population levels.
The key elements of this definition are (1) head-to-head comparisons of active treatments, (2) study populations of patients that physicians encounter in daily practice, and (3) a focus on informing decisions by patients, physicians, and policy makers. In addition, for clinical practice, CER (also known as patient-centered outcomes research) helps doctors tailor their advice about decisions to patients' individual characteristics and preferences. According to this formulation, CER will require some AHCs to do clinical research in community practice settings. Others may need to hire faculty with decision sciences research skills.
Why is CER such a priceless opportunity for AHCs? As VanLare et al2 point out, the CER enterprise rests on four pillars: research, formation of human and scientific capital, data infrastructure, and translation (of evidence into clinical practice). AHCs alone have all of these resources and missions. If they can adapt to the requirements of CER, which may mean overcoming structural and cultural barriers, they can make a unique societal contribution.
Take research, the first pillar. Most AHCs are also research institutions, with large research-derived revenues. Their faculty have the accomplishments and connections that attract research support. AHCs also have the infrastructure to support large-scale research and the indirect cost recovery on grants and contracts to maintain it. For all these reasons, CER should be a sweet spot for AHCs. In fact, most research-intensive AHCs have learned the wisdom of investing in a faculty that does both clinical and population-based research.
To attract and retain the best comparative effectiveness researchers, some AHCs may have to adjust their academic reward systems. Academic promotion criteria for researchers have always placed a large premium on originality. But CER is different in several ways, as noted by Rich et al.1 First, research into human decision making, where evidence meets behavioral psychology, is a key element of CER. This research is inherently multidisciplinary and therefore collaborative. Identifying original thinking is much easier for promotions committees when a lone investigator makes a major contribution than when someone is a member of a 20-person research team. Yet, important CER nearly always requires a team of researchers.
In a second way, CER is making it more difficult to use originality as an essential measure of academic distinction. The emerging model for CER funding differs from the traditional, investigator-initiated research model, in which the applicant proposes the research questions and the study section decides which applications have the greatest scientific merit. Under those circumstances, the researchers' scientific interests set the research priorities of the funding agency. For CER, by contrast, the funding agency (e.g., the Agency for Healthcare Research and Quality [AHRQ] or the new Patient-Centered Outcomes Research Institute [PCORI]) decides which questions are the most important to address and often dictates the research design that can best answer the question within the agency's CER budget.5 For example, some recent National Institutes of Health and AHRQ funding initiatives specified research questions taken from the IOM's list of the 100 most important CER priorities.6 The language of the PPACA states that PCORI will set CER priorities and form a research project agenda with input from expert committees.4
Why might this agency-directed model be appropriate for CER, which is largely population-based research? First, the mission of CER is to improve the quality of care in the United States by helping doctors and patients make better choices. A societal consensus about which questions are important should help shape the CER research agenda. Second, CER methods are well established after decades of experience with population-based research, in sharp contrast to molecular biology, in which new methods push the field forward. Rather than originality, CER funders are seeking reliability. They place a premium on researchers' skill in executing a research plan and managing a large, complex clinical research project in a community setting. In this changing research environment, AHCs must reward team efforts directed at problems that are, by public consensus, important to solve.
The second pillar is the human capital of medical science and clinical care. The nation must decide whether AHCs are producing enough researchers and clinicians with the right skills to achieve the goals of CER. There have not been, to my knowledge, any formal studies of workforce needs for CER researchers, but most people think that the anticipated $500 million annual expenditure by PCORI7 alone will require some expansion of the CER workforce. Universities must decide whether to authorize schools of medicine to create degree programs in CER or expand existing degree programs in the disciplines that support CER, such as economics, sociology, psychology, and statistics.
The second part of fulfilling the human capital needs of CER relates to its unique focus on decision making by clinicians and patients. CER enjoys public support based on the prospect of decisions aligned with the evidence and with the preferences of a well-informed patient.8–10 Clinicians need training in the art and science of decision making so that their decisions take full account of CER results. Therefore, medical schools have an indispensable role to play in preparing students to use CER results in clinical practice.
Unfortunately, the teaching of medical decision making in the clinical setting is left largely to chance. It occurs during clinical clerkships, in which the faculty attending physician is the senior teacher and an important role model. Some students are assigned to teams whose attending physicians show them how to evaluate the evidence and apply it to a specific patient; these faculty may even use the quantitative approach of decision analysis. Clinical clerks who spend a month in daily contact with such teachers may develop habits of thinking that last a lifetime. Encounters with such teachers are the product of students' good luck, not educational planning by the medical school. Education leaders seldom tell faculty attending physicians how to teach decision making or how to reinforce the lessons learned in classroom lectures on the topic. Leaders do not hold teachers accountable for trainees' achieving milestones toward becoming expert decision makers who can integrate the evidence, the clinical characteristics of the patient, and the patient's preferences into a defensible case for the chosen course of action. Until medical schools and residency programs correct these deficiencies, they will be weak links in the chain that connects CER evidence to practice.
The third pillar of CER is an infrastructure to obtain reliable clinical data for research. Obtaining a standard set of research-quality data from every study patient is costly because the process usually requires a research assistant, who is paid to find the patient and reproducibly administer a series of queries. Measuring outcomes accurately usually requires the research assistant to contact patients directly. Often, this parallel system of data collection duplicates information that clinicians obtain in the course of routine practice. However, relying on data obtained by clinicians is risky. First, clinicians do not ask questions in exactly the same way, so patients reporting the same experience may give different answers. Second, clinicians may not ask all of the questions in a standard data set or in a well-validated outcome measure. Missing data are the bane of population-based research.
AHCs have electronic medical records (EMRs) that could support a more robust data infrastructure for CER. The EMR could tell the clinician when the patient is part of a study and provide a set of questions to ask. Incentives might increase cooperation. Researchers could provide a modest reward to clinicians who collect the needed data. AHCs could establish promotion criteria that recognize the contribution of clinicians who cooperate with clinical studies. Using their EMRs and appropriately designed incentive systems, AHCs could improve the data obtained during CER, providing a public service that would benefit everyone. As EMRs diffuse into community practices, the same benefits could accrue to research in that setting, although regulatory requirements designed to protect patients' privacy are a substantial obstacle.
This topic is a good segue to the fourth pillar of CER, which, according to VanLare et al,2 is translation of evidence into practice. These authors declare that AHCs have the most to gain from improving practice by aligning it with the results of CER.
Historically, AHCs have enjoyed a competitive edge because many faculty have industry connections that give them access to new technologies at the earliest stage of application to patient care. Studies in AHCs have provided much of the early evidence about the effectiveness of new technologies, the news of which often attracts referrals to the AHC-based specialists who authored the published articles. AHCs exploit this advantage, which lasts until the technology matures and community hospitals are able to deliver it.
But will AHCs be as adept at introducing new evidence as they are at introducing new technologies? Specifically, will AHCs use the lessons of CER and other high-quality evidence to drive their clinical policies? Will they compete in their local markets by citing their clinicians' adherence to the evidence as a measure of excellence? They can if they choose to. VanLare et al2 suggest that AHCs create an institutional home for activities that promote better use of CER results in clinical care by inserting them into quality improvement initiatives and clinical policies. They describe the approach taken at Cincinnati Children's Hospital Medical Center and Mount Sinai Medical Center, which are worthy models for other AHCs to examine carefully.
The U.S. government is betting that CER can generate evidence that will become the foundation of new standards of care for specific clinical conditions. Such standards would specify care pathways that differ according to patients' preferences and clinical characteristics that predict responses to tests and treatments. To generate evidence about patient-specific responses to treatments, CER will do more than show that active treatment A is more effective than active treatment B overall; it will also try to identify subgroups of patients who would do better on treatment B (so-called treatment response heterogeneity). AHCs, with decision support systems built into their EMRs, are positioned to set benchmarks for excellence in individualizing decision making to the specifics of a patient by using the predictors of treatment response in that patient, evidence of comparative effectiveness, and the patient's preferences.
An AHC can use its EMR to track performance if it decides to monitor whether its physicians practice in accord with the evidence and their patients' clinical characteristics and preferences. Decision quality for key health care decisions, such as the choice between breast-conserving therapy and mastectomy for breast cancer,11,12 could serve as a measure of quality of care. In principle, decision quality, by reflecting the key components of decision making, could be a better way to measure overall quality of care than current process measures.13
CER, with its pragmatic focus on evidence about real-world choices in real-world settings, may lead AHCs to make a more realistic appraisal of the limits of the care that they and the U.S. health care system provide. More care is not necessarily better care. Unlimited choice does not always serve the patient's best interests. AHCs should anticipate changes in the environment and take the lead in adapting to new ways of thinking. AHC leaders should decide how to respond to the challenge and opportunities of CER. Reading the collection of articles on CER in this issue would be a good way to start preparing for this decision.
1 Rich EC, Bonham AC, Kirch DG. The implications of comparative effectiveness research for academic medicine. Acad Med. 2011;86:684–688.
2 VanLare JM, Conway PH, Rowe JW. Building academic health centers' capacity to shape and respond to comparative effectiveness research policy. Acad Med. 2011;86:689–694.
3 Institute of Medicine. Initial National Priorities for Comparative Effectiveness Research. Washington, DC: National Academies Press; 2009. http://www.nap.edu/catalog/12648.html. Accessed February 16, 2011.
4 Patient Protection and Affordable Care Act of 2010. Pub L No. 111-148.
5 VanLare JM, Conway PH, Sox HC. Five next steps for a new national program for comparative-effectiveness research. N Engl J Med. 2010;362:970–973.
6 Sox HC. Comparative effectiveness research: An update. Ann Intern Med. 2010;153:469–472.
7 Lauer M, Collins FS. Using science to improve the nation's health system: NIH's commitment to comparative effectiveness research. JAMA. 2010;303:2182–2183.
8 Elwyn G, Edwards A, Kinnersley P. Shared decision-making in primary care: The neglected second half of the consultation. Br J Gen Pract. 1999;49:477–482.
9 Gerber AS, Patashnik EM, Doherty D, et al. The public wants information, not board mandates, from comparative effectiveness research. Health Aff (Millwood). 2010;29:1872–1881.
10 Gerber AS, Patashnik EM, Doherty D, et al. A national survey reveals public skepticism about research-based treatment guidelines. Health Aff (Millwood). 2010;29:1882–1884.
11 Sepucha KR, Levin CA, Uzogara EE, et al. Developing instruments to measure the quality of decisions: Early results for a set of symptom-driven decisions. Patient Educ Couns. 2008;73:504–510.
12 Sepucha K, Ozanne E, Silvia K, Partridge A, Mulley AG Jr. An approach to measuring the quality of breast cancer decisions. Patient Educ Couns. 2007;65:261–269.
13 Sox HC, Greenfield S. Quality of care: How good is good enough? JAMA. 2010;303:2403–2404.