Whitcomb, Michael E. MD
In last month's editorial, I referred to the results of a RAND study showing that patients afflicted with any one of a number of common disorders are likely to receive appropriate treatment only half the time.1 And I argued that those results, and others showing that physicians too often fail to provide acceptable care, reflect, at least in part, shortcomings in the ways doctors are being educated. My purpose was to call on those holding leadership positions in medical schools and teaching hospitals to ensure that learning experiences that promote the use of evidence-based medicine (EBM) in clinical practice are being integrated into the clinical education of medical students and residents.
In a Viewpoint article appearing this month, Glick maintains that clinical data showing poor health outcomes should be used in a much more explicit way to improve doctors' education. He suggests that data derived from various sources—incident reports, morbidity and mortality conferences, surveillance of the quality of care, case series, surveys of adverse events and "near misses," malpractice claims, and, I surmise, studies like the one conducted by RAND—could be used for this purpose. He argues that when credible patterns of poor outcomes can be established, the data involved should directly inform the design and conduct of medical education programs. He labels this process "evidence-guided education." In his view, medical educators have a responsibility to monitor credible data on poor clinical outcomes and to use those data to improve teaching about conditions that are not being managed well.
Chen, Bauchner, and Burstin suggest a different approach for linking clinical outcomes data to the design and conduct of medical education programs. In an article that appeared in the journal last October,2 they call for the development of a medical education research agenda that would attempt to determine the impact of specific educational interventions on clinical outcomes. Their proposal is particularly important because they call for research to study the effect of educational interventions on the quality of care provided by physicians. In contrast, most of the medical education research now being conducted evaluates educational interventions by determining whether those who experience the interventions enjoy them, or whether the interventions improve participants' knowledge of specific topics. Since shortcomings in medical education programs are partially responsible for poor clinical outcomes, future evaluation of medical education interventions must focus on whether the interventions change physicians' behaviors in ways that improve the care they provide.
Now, the crucial task of documenting the impact of an educational intervention on the quality of care provided for a specific clinical condition will not be easy. And even under the best of circumstances, many years would pass before meaningful data might be available. Given this, I think Glick has it right. Those responsible for educating doctors need to pay attention to whatever data may be available documenting the quality of the care being provided to patients, and place special emphasis in their educational programs on those conditions that are being inadequately managed. This means, of course, that certain educational interventions will be adopted without any data showing that the changes have value. Nevertheless, it makes a great deal of sense to change the ways doctors are being educated if data exist suggesting that the current approaches are not producing good clinical outcomes.
Since the primary goal of medical education is to produce physicians who deliver high-quality health care, the views presented by Glick and by Chen and colleagues deserve the attention of the medical education community. But as Glick points out, it is much more important to make changes designed to address deficiencies in the quality of medical care in residency programs than in the medical school curriculum, since it is during residency that doctors are truly prepared for clinical practice. Given that, I think the members of the Residency Review Committees of the Accreditation Council for Graduate Medical Education (ACGME) should incorporate Glick's proposal into their thinking when establishing accreditation requirements. And program directors should be required to demonstrate, as part of the accreditation process, how they have modified the design and conduct of their programs in response to data showing inadequacies in certain aspects of medical practice. I believe that adopting these approaches would make the accreditation of graduate medical education programs a much more meaningful tool for improving the quality of medical care provided by future practitioners than would any of the actions the council has adopted in recent years.
In ensuring that clinical outcomes data are used to inform and improve the design and conduct of medical education programs, those responsible should review and modify not simply the content of their programs' didactic curricula, but also the kinds of clinical experiences the programs provide and the ways those experiences are conducted. Part of such an initiative should be (as I have suggested before) to find ways for medical educators and health services researchers to begin working together, since this will help ensure that medical education programs are preparing doctors to provide high-quality medical care.
Michael E. Whitcomb, MD
1 McGlynn EA, Asch SM, Adams J, et al. The quality of health care delivered to adults in the United States. N Engl J Med. 2003;348:2635–45.
2 Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–60.