How do medical educators decide what topics to teach? We think we know what should be taught, but these judgments usually derive from tradition and opinion. Students often complain that their instructors fail to teach “what we really need to know.” The information chain reaction leaves no doubt that choices of content need to be made, but what should be the basis for setting the goals and content for clinically relevant teaching? The evidence from patient outcomes, rather than tradition and opinion, should be the ultimate standard for assessing educational effectiveness; such evidence can serve as a lens for focusing on areas of educational weakness. In this article, I describe an approach for selecting teaching priorities that has been developed from prior initiatives and my own experiences with research and implementation.
Clinicians and educators routinely apply evidence to evaluate practice standards and, with more difficulty, to assess particular teaching methods. However, outcome evidence relevant to educational content has been utilized less often and less systematically. Practice-based data have been derived from quality-of-care audits to guide continuing medical education (CME).1,2 Needs determinations have utilized evidence from a variety of sources in order to establish an epidemiology of clinical problems that will be faced, for example, by learners in the field of neurology: types of encounters in ambulatory or inpatient practices,3–6 the incidence and prevalence of disease, and burdens of illness in populations. The strategy of teaching not only about the “common” but also about “urgent” and “treatable” conditions6 provides one kind of prioritization superimposed upon incidence and prevalence data. “Demand-side” analysis utilizes “needs, expectations, and trends” in a population for the purpose of informing educational planning, as done in Ontario.7
Nationally recommended curricular objectives and expected competencies, again taking neurology as a case in point, still typically result from committee deliberations largely unfettered by research data.8 (My own concern grew out of such discussions in committees of the American Academy of Neurology, in which it dawned on me that none of our contentious opinions about prioritizing content was backed by evidence.) Foresighted concepts and research regarding CME in the late 1960s, now rarely cited, demonstrated the value of practice-based CME, centered in local hospitals.1,2 Audits of actual practice behaviors, measured against practitioner-selected criteria, demonstrated deficiencies in performance that required educational remediation. Choice of issues was determined by the incidence of disability in local populations of inpatients, weighted by consensus opinion on the theoretical ability to intervene effectively and whether the issue had yet been addressed. Educationally, participation in this process of cycles of audit entrained with cycles of CME was thought to be the aspect of the learning experience most likely to influence practice behaviors.
In subsequent decades, practice audits have become integral parts of quality improvement strategies, but the application of outcome evidence to educational choices in the realm of content has not kept pace. The Association of American Medical Colleges’ (AAMC's) Medical School Objectives Project (MSOP)9 on student curricula and the Accreditation Council for Graduate Medical Education's Outcome Project (on general competencies)10 do not emphasize the utilization of health outcome evidence, the definitive endpoint. Competencies are intermediate educational products, albeit important. The president of the AAMC affirmed in 1998 that “the quality of medical education must be evaluated by what it produces, not only by how we go about it.”11
A concept that I have termed “evidence-guided education” (EGE) builds upon prior efforts to apply outcome data to education and upon current movements for educational and practice improvement. Four such movements form a context for EGE:
- The “outcomes movement” seeks to measure actual health outcomes for the purpose of establishing the relative merits of new or competing interventions, thus supplying the evidence for “best practices.”
- Evidence-based medicine (EBM) attempts to implement the best practices in daily practice, with the hope of maximizing health outcomes.
- Continuous quality improvement (QI or CQI) emphasizes an ongoing “systems” and educational approach to enhancing health practices and outcomes.
- The patient safety movement focuses on identifying and preventing errors and adverse events (patient harm as a result of medical management) as a salient thrust of improved patient care.
The identification and application of evidence links these movements, all of which aim to improve health outcomes through education as well as through other “systems” changes.
How does EGE add to past initiatives to bring evidence to bear more decisively on educational content? EGE focuses on patient outcomes (the “bottom line”), rather than just practice processes (such as adherence to current “best practices,” the usual subject of practice audits). EGE is intended to provide systematic feedback from outcomes to education, which is seen as a continuum encompassing medical school, residency, and postresidency learning. Unlike the ad hoc learning from the results of treating individual patients, as typically addressed in morbidity and mortality reviews, EGE aims, as a central goal, to enable progressive generalization. Single events should carry weight in curricular consideration only if they generate research that identifies compelling patterns. For example, personal practice audits for missed subarachnoid hemorrhage or excessive anticoagulation, according to EBM standards, should inform personal and group education and practice modification. However, only expanding evidence from further research on pervasive outcome patterns would support educational intervention at wider and higher institutional levels. These might include medical school courses, competency standards for residents and fellows, and certification processes.
Of course, only part of learning, even about clinical care, should be determined by past and present outcomes. Another part of medical education, including much of the preclerkship content, depends upon judgments on the future relevance of scientific developments. Thus, evidence should guide, but by no means exclusively control, decisions on the content of clinical education: the goal is not a fully evidence-based curriculum. Application of evidence does not necessarily mean changing the curriculum. Good outcomes due to successful education and patient management will validate some existing parts of the clinical curriculum. Care should be taken to avoid the unintended consequence of displacing currently effective learning and practice. The effort to utilize outcome evidence represents an iterative process that would build the evidence base, piece by piece, and implement it, course by course.
Even with evidence supporting the curricular inclusion of particular topics, educators will still need to make judgments, but ones governed less by limited and anecdotal individual experience and more by collective results, studied and compared with other priorities that are candidates for inclusion in the curriculum. Such choices can be exemplified by the question of how much attention to allot to teaching about spinal cord compression by epidural abscesses or tumors versus teaching about malignant gliomas of the brain. A decision, widely applicable, but subject to reconsideration over time, would take into account evidence on preventable bad outcomes.12 Multiple reasons favor emphasizing spinal cord compressions, but an evidence basis should be most persuasive. The actual choice of priorities in particular training situations depends on relevance to the particular category of learner (e.g., generalist, type of specialist) and the learning time available. The pressure for prioritization of neurologic topics is greater for students and generalists than for future neurologists.
In order to portray EGE as a curricular planning and teaching tool, I will identify several sources of pertinent evidence, discuss feedback loops for educational implementation, and illustrate how research in small practices and community institutions (as well as large ones) can provide relevant and useful data.
Sources of relevant outcome evidence
What outcome evidence might inform curricular choices? For the purpose of feedback to courses and training, the data are mostly descriptive and epidemiologic (as opposed to data from randomized, controlled trials for EBM). Errors without actual harm (“near misses”) often trigger the process of adducing evidence, according to the maxim “Where there's smoke, there's fire.” Anecdotal occurrence of adverse events is recognized through case conferences and systems of reporting incidents of error or patient injury13 and through other quality-of-care surveillance activities, such as monitoring unexpected readmissions to intensive care units. Poor process outcomes, such as ignoring evidence-based practices like anticoagulation or antiplatelet therapy for atrial fibrillation, sometimes are appropriated as surrogates for patient outcomes. However, actual health results are the definitive events, rather than current constructs of best care.
The most accessible outcome evidence relevant to adverse events, aside from autopsy-derived data, has come from case series in the literature, exhaustive chart reviews,14 surveillance of inpatient13 and outpatient care sites, and malpractice claims.15 Preventable adverse events result from failures in realms as diverse as physical examination, differential diagnosis, ordering of imaging, medication management, and communication with patients. Better data from across the whole spectrum of errors and outcomes will come from standardized, legally protected reporting systems now under legislative consideration in the U.S. Congress.
Implementation of EGE
The internal feedback of reflective practitioners on the outcome of their care, or the feedback from peer to peer or supervisor to trainee, can carry not only personal educational value but also can serve to engage the clinician in systematic improvement of both education and practice. As in prior proposals for practice-based CME,1,2 evidence used to guide practice and evidence used to guide education (both originating in patient outcomes) are linked in feedback loops. Patient outcomes from practice demonstrate educational needs, and responses to those needs should improve practices, whether by enhanced knowledge and skills or by attention to flawed systems, such as medication ordering or supervision standards.
The educational feedback loop can be short, as in “learning from one's mistakes,” which may profit individual physicians’ future performance but not systematically influence education. For example, a trainee in the emergency department makes a diagnostic error resulting in patient harm. This might lead to on-the-spot discussion and to focused teaching rounds. It might also result in a case discussion at the next regular morbidity and mortality review conference. However, unless embedded in a curricular training plan, the case-based lesson may well dissipate as successive cohorts of new trainees pass through. As systematic follow-up of the trainee's error, case-finding efforts may identify other instances of similar errors and patient harm, retrospectively and prospectively, suggesting a pattern of inadequate practice. The resulting topic is inserted through progressive iterations into the formal resident training curriculum so that the essential points will be addressed in a regular cycle of scheduled conferences and preceptorial oversight. Critical elements may be included in an objective structured clinical examination that is completed by successive trainee cohorts.
Credible evidence, even if fragmentary, has the potential to contribute if disseminated for discussion at meetings on quality assurance, outcome research, and patient safety. In order to establish the need for widespread change in clinically relevant education, replication of outcome studies in larger and diverse populations should be attempted to support generalization of conclusions. If sufficiently compelling, the lessons drawn deserve phased implementation in the content of both medical school clinical courses and residency teaching (Table 1). Since assessments (examinations) significantly drive learning, competency and certifying tests should include content derived from patient outcome analysis.
While results of large-scale investigations are likely to achieve the earliest acceptance, virtually any clinician or allied practice group can conduct pilot projects on outcomes pertinent to EGE. Observation and analysis of practices and outcomes can occur in small institutions and a variety of settings, from intensive care units to shelters for the homeless, each feeding back to curricular planning. The outcomes suitable for educational guidance should include not only those of acute disorders, but also chronic and ambulatory conditions that may not command the same level of attention, particularly with respect to patient safety.
Any level of clinically relevant education and training is appropriate as a target for EGE, as exemplified in Table 1. In the educational progression from the preclerkship medical curriculum through CME, the connections between outcomes, best practices, and education become tighter. The shorter the interval between educational input and practice output, the shorter the feedback loop and the less difficult the evaluation of the input. The best opportunities for evaluation thus lie in the supervised and circumscribed learning and practices of residents and in the daily work of practitioners, especially in cohesive groups conducting peer review. Compared with these shorter feedback loops, the connections of outcomes to undergraduate curricular changes to improved practice and results stretch out over much longer time periods and are correspondingly stretched thin. However, even if the linkages are too long and complex for clear accountability, student learning forms the platform for subsequent training. Attention to outcome evidence at the student level, therefore, presages habits of continued learning and practice that can then be assessed by measures of health.
The detailed implementation of EGE varies with the “clinical maturity” of learners at different levels so that the objectives fit both the developmental stage of clinical learning and expected management tasks. (Although beyond the scope of this article, outcome evidence is applicable, I believe, to the training of all health professionals in any society.) Objectives for teaching content are usually not tied, by systematic evidence of outcomes and needs, to specific task competencies or levels of proficiency. The processes of care that affect outcomes are sometimes complex, involving multiple aspects of knowledge, skills, and attitudes. Nonetheless, the examples to be described demonstrate the need for specific modification of behaviors.
Three examples from the practice of neurology serve to illustrate the types of implementation that might be considered at multiple levels of education and training. (Table 1 uses the first two of these examples in its presentation about phased implementation of learning content.)
A 36-year-old female postal worker suddenly developed drooling and possible dysfunction of the left side of her body. The recorded history was affected by biased processing of data on past drug use, prior psychiatric contact, recent divorce, and a new pregnancy. The examination was misinterpreted because of selective neglect of observations that did not fit the (premature) working diagnosis of conversion disorder. A neurological consultation was never requested. In this case the pathology turned out to be an intracerebral hemorrhage, identified by imaging after a 36-hour delay. This serious outcome did not represent a simple knowledge failure of a single physician, but rather a series of cognitive and “system” failures that transpired in several care sites and involved medical and psychiatric residents and nurses and emergency department physicians.
This case in a community teaching hospital led to educational feedback to those involved, but more importantly to further research, curricular implementation, and broader dissemination in the following ways: Short-loop review, root-cause analysis, and teaching were done in the emergency department and on the medical and psychiatric services, through case-based conferences for residents and practitioners and subsequent incorporation into local training curricula. A neurologist created a case for problem-based learning that subsequently became a staple in the medical school's integrated neuroscience and behavior course and has been widely disseminated to other schools. However, this case did not stand alone as a teaching example. It stimulated subsequent searching for a pattern of related cases and adverse outcomes16 and more broadly for outcome data from neurologic malpractice cases.15 (“Failure to diagnose” constitutes the largest single category in malpractice claims based on authentic, preventable adverse outcomes and is a type of error well-suited to educational intervention.) The foregoing sequence of educational feedback and intervention may have improved practices, but better patient outcomes have not been proved (although no further adverse events in this category occurred over the subsequent five years). However, improved outcomes resulting from a short feedback loop to educational intervention were clearly demonstrated in the study of another problem, described below.
Emergency department cases of inadvertent, symptomatic phenytoin intoxication appeared to be due to lack of clinicians’ knowledge of zero-order, saturation pharmacokinetics and its management implications. First noted as isolated incidents, cases included injury from falls and acutely reduced psychomotor function, which led to emergency presentations. These cases stimulated further retrospective and prospective research on incidence and generated a hospital-based educational program for residents and practitioners. This intervention cut the case occurrence by more than half17 and decreased the attendant morbidity. Saturation pharmacokinetics (important not only for phenytoin but also for ethanol) may be emphasized in basic pharmacology, clinical neuroscience, and practical therapeutic teaching in clerkship, residency, and CME curricula (Table 1). Despite the fact that phenytoin is commonly used, and that symptomatic intoxication is largely preventable, comprehensive interventions were not advocated until patient outcomes were documented and feedback to the educational process achieved. Dissemination in the literature has raised the prospect of influencing neurologic and pharmacologic learning and practices at other institutions (which should reduce the extrapolated national incidence of up to 25,000 cases of symptomatic intoxication per year in the United States alone).17
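The clinical lesson here, that near the limit of elimination capacity a modest dose increase produces a disproportionate rise in drug concentration, follows from Michaelis-Menten (saturation) kinetics. The sketch below uses illustrative, hypothetical parameter values (not dosing guidance) simply to make the nonlinearity concrete:

```python
# Under saturable (Michaelis-Menten) elimination, the steady-state
# concentration Css satisfies: dosing rate R = Vmax * Css / (Km + Css),
# which rearranges to: Css = Km * R / (Vmax - R).
# Parameter values below are illustrative only, not clinical guidance.

def steady_state_conc(dose_rate, vmax, km):
    """Steady-state concentration (mg/L) for a daily dose rate (mg/day)
    under saturable elimination; undefined if dose_rate >= vmax."""
    if dose_rate >= vmax:
        raise ValueError("dose rate meets or exceeds elimination capacity")
    return km * dose_rate / (vmax - dose_rate)

vmax = 500.0   # hypothetical elimination capacity, mg/day
km = 4.0       # hypothetical Michaelis constant, mg/L

low = steady_state_conc(300.0, vmax, km)   # 4*300/200 = 6.0 mg/L
high = steady_state_conc(400.0, vmax, km)  # 4*400/100 = 16.0 mg/L
print(low, high)
```

With these assumed numbers, a one-third increase in daily dose nearly triples the steady-state concentration, which is why incremental phenytoin dose adjustments can unexpectedly push patients into symptomatic intoxication.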
A further example comes from a Swedish group that defined a problem of poor outcomes consequent to failure of physicians to heed early-warning headaches that presage major rupture of cerebral aneurysms, thus delaying diagnosis. They undertook a sustained CME intervention, targeting referring doctors, that resulted in a 77% reduction in diagnostic errors.18 Earlier diagnosis expedited surgical intervention in patients in better condition. Thus, documenting errors linked to outcomes led directly to a short-loop educational intervention to instill best practices, which had already been conjoined with evidence for improved outcomes.
Limits and value of EGE
Outcome data do not translate automatically into choice of teaching priorities. Even so, EGE should help to identify topics of proved importance for particular levels and types of training. Planners of curricula, courses, and training who are seeking to prioritize topics still need to consider other factors such as incidence, overall health impact of the specific outcome data, the credibility of local data for generalizing choices, and an appropriate balance of subject matter. Educational interventions should initially proceed from documentation of convincing outcome failures and straightforward educational lessons to improved practice patterns and measurement of subsequent results. Even so, accountability may be obscured by institutional changes in medical care systems that will be occurring simultaneously with modifications of training content.
We should view implementation of EGE as a continuum from medical school curricula to the competencies formulated for residents and finally to the content of certifying (or recertifying) examinations. The most profound changes will require successive cycles of implementation, from early clinical experiences in medical school to the years of evolving practice. Increasingly, practice outcomes are being scrutinized publicly as measures of the medical system and the quality of medical education. The challenge for EGE in the coming era is for educators and other clinicians in collaboration to accrue a body of credible evidence that can be validated, generalized, and applied systematically. Evidence, unlike variable and unsubstantiated opinion, will enable interventions that can be compared across educational systems. In this era of attention to evidence and of severe constraints on learning time, we should seek the judicious application of outcome evidence to help determine teaching content and learning priorities.