Physicians and clinical researchers publish reports of their patient care experience and clinical research to share information with their colleagues and to improve the care of future patients. The paradigm of evidence-based medicine (EBM) challenges physicians to make use of these reports to inform their evaluation of patients and subsequent treatment decisions.
REASON FOR OUR PROGRAM
In 1999, Chen et al. published a study suggesting that patients with acute myocardial infarction (MI) had superior outcomes when treated in a tertiary care academic hospital.1 These better outcomes could be explained, at least in part, by the more prevalent use of aspirin and beta-blockers, treatments supported by randomized controlled trials (RCTs). In the Cooperative Cardiovascular Project (CCP), the authors found 30-day and two-year post-MI mortality to be significantly lower among patients receiving care at major teaching hospitals (as defined by intern-to-bed ratios) than among patients cared for at non-teaching hospitals.2 Patients at the major teaching hospitals were more commonly discharged on aspirin and beta-blockers. The process through which physicians at these CCP academic hospitals chose to prescribe medicines with documented benefit is less clear.
The presence of clinical researchers who act as strong opinion leaders at academic medical centers may explain the increased use of these and other evidence-based treatments. An alternative explanation is that physicians at academic medical centers are trained to use the medical literature. The Rochester Study reported changes in specific aspects of care as a result of information provided by the library.3 In addition, a case–control study by Klein et al. demonstrated that searching the medical literature for specific clinical questions (CQs) could be associated with a shorter length of stay for the patient and a lower cost of care.4
Medical education will, it is hoped, enhance future practicing physicians' ability to formulate CQs,5 to search the literature effectively, and to critically appraise the information obtained. In recent years, the medical residency program at Duke University School of Medicine developed a protocol to educate internal medicine residents in the formulation of CQs and search strategies. In this program, residents received a lecture on CQ formulation at the start of their general medicine rotations and participated in weekly lectures with librarians on literature-searching techniques. They were then asked to record their questions and search results. From this work, Cabell et al. suggested that the program led these residents to search the literature significantly more often and may have enhanced the quality of their CQs.6 Earlier research by Bennett et al. demonstrated that formal instruction of medical tutors could improve the critical appraisal skills of the students under their direction.7 In summary, steps can be taken to lead physicians to engage the literature more actively and efficiently.
Even with the necessary training, however, physicians may fail to apply the medical literature to their patients if relevant data cannot be procured in a timely fashion. Ideally, health systems would facilitate quick access to a valid subset of the medical literature most pertinent to their patient populations. The primary objective of our project reported below, therefore, was to design and implement an electronic database of CQs and medical evidence that would facilitate the delivery of evidence-based care to patients on the inpatient general medicine service while providing a dynamic and relevant learning opportunity for our medical residents.
Prior to the 2000–2001 academic year, we created the Critical Appraisal Resource (CAR) database, a Web-based system to collect CQs raised by internal medicine residents on ward rotations. In this article, we report ten months of experience with its operation, from July 2000 to April 2001. Residents could access this database for CQ input and data retrieval through the Internet using a secure log-on authorization. Each record of this database was designed to hold a CQ, demographic data for the resident formulating the CQ, the patient's diagnosis, the electronic resource in which the resident searched for the answer, and the impact of the information on patient care decisions. Possible resources included electronic medical textbooks such as Harrison's Online and MDConsult; Medline; the American College of Physicians Journal Club; Scientific American; the resident's own files of medical information; or information shared by a colleague. The resident labeled information as “useful” if the results were valid according to established EBM criteria8 and if the data were applicable to the patient. If useful information was contained in a journal article indexed through Medline, the resident could enter the article's unique identifier (UI) into the record, linking it to the CQ and the associated full text through our Ovid Technologies, Inc., system. Residents could also place online a brief analysis of the article in a critically appraised topic (CAT) format, which is described elsewhere.9 Finally, the CAR was equipped with a search engine that allowed records to be retrieved by subspecialty, admitting diagnosis, or keyword.
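To make the record structure concrete, the sketch below models the fields described above and the three retrieval paths (subspecialty, admitting diagnosis, or keyword). This is an illustrative reconstruction, not the authors' actual implementation; all names, field choices, and the sample records are hypothetical.

```python
# Hypothetical sketch of a CAR-style record and its search paths.
# Field names and sample data are illustrative, not the original schema.
from dataclasses import dataclass

@dataclass
class CARRecord:
    question: str        # the clinical question (CQ)
    subspecialty: str    # e.g., "cardiology"
    diagnosis: str       # patient's admitting diagnosis
    resource: str        # resource searched, e.g., "Medline"
    useful: bool         # valid by EBM criteria and applicable to the patient
    impact: str          # "confirmed" or "changed" the care decision
    medline_ui: str = "" # optional Medline unique identifier (UI)

def search(records, keyword="", subspecialty="", diagnosis=""):
    """Return records matching every supplied filter, mirroring the
    described retrieval by subspecialty, admitting diagnosis, or keyword."""
    keyword = keyword.lower()
    return [
        r for r in records
        if (not keyword or keyword in r.question.lower())
        and (not subspecialty or r.subspecialty == subspecialty)
        and (not diagnosis or r.diagnosis == diagnosis)
    ]

# Two hypothetical records for demonstration.
records = [
    CARRecord("Do beta-blockers reduce post-MI mortality?", "cardiology",
              "acute MI", "Medline", True, "confirmed", "99123456"),
    CARRecord("Which imaging test best rules out pulmonary embolism?",
              "pulmonology", "dyspnea", "Medline", True, "changed"),
]
cardiology_hits = search(records, subspecialty="cardiology")
```

A later resident facing a similar patient could thus retrieve a colleague's prior question and its linked citation in a single query rather than repeating the full literature search.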
Participants and Teaching Interventions
The educational project took place on the inpatient general medicine wards at Duke University Medical Center and the Veterans Administration Medical Center in Durham, North Carolina. At the beginning of each ward rotation, the residents were informed of the project and consented to participate in the data-collection process. Residents participated over the course of seven six-week general medicine ward rotations, with 16 residents participating in each block. A total of 82 residents participated; some of these rotated on the general medicine wards more than once.
The ward rotation was structured such that each resident met individually with the chief medical resident during the evening of the overnight call nights to discuss the patients admitted to the general medicine service. During each meeting, residents were encouraged to formulate at least one CQ and to seek an answer to that CQ in the medical literature. The residents were trained to enter their CQs and those references considered useful into the CAR database. They were also encouraged to use the CAR database to find answers to their CQs, searching for questions that might have been entered during previous rotation blocks.
Twice a month on a ward rotation, a resident prepared an evidence summary or CAT on an article that answered a CQ. Each CAT was presented to the department chair or program director at morning report. The CAT was then edited and validated by the chief medical resident, the chair or program director, and another expert clinician before being added to the CAR database.
By the end of the ten-month study period, the CAR contained 625 CQs divided into the following medical subspecialties: 19% gastroenterology, 15% hematology/oncology, 14% cardiology, 9% pulmonology, 8% general medicine, 8% nephrology, 5% neurology, and 6% other. Residents had searched for answers to 93% of the CQs they had entered into the CAR and had been able to retrieve useful information from the medical literature with 82% of these searches.
For 40% of the 105 CQs for which residents attempted but failed to obtain useful data, no information addressing the specific question could be found in the medical literature. For the other 60% of these CQs, the residents considered the data they had obtained to be unhelpful, because the data were either not valid (78%) or not pertinent to the patient in question (22%). Residents found 77% of their useful answers in the Medline database (Figure 1).
Therapy questions were the most common type of CQ formulated (53%). Other types of CQs included diagnostic test questions (22%), etiology questions (9%), prognosis questions (9%), and harm questions (5.1%). Sixty percent of the therapy articles deemed useful by our residents were RCTs, while 6.8% and 6.3% of these therapy articles were prospective cohort studies and meta-analyses, respectively (Figure 2).
Across all types of CQs, useful information from the medical literature confirmed patient care decisions in 53% of cases and changed patient management in 47% of cases. In 49% of the latter cases, the altered care decision involved a medication change, but changes in diagnostic test choices (26%) and in prognosis communicated to the patient (13%) figured prominently.
As records accumulated in the database, residents searched the CAR for answers to CQs 1,035 times and viewed links from the CAR to Medline citations and online CATs 672 and 314 times, respectively.
Comparison with Previous Projects
The outcomes of our project are consistent with prior reports of the impact of the use of medical literature on patient care. The Rochester Study reported changes in patient management as a result of information provided by the library (choice of medication 45%, choice of test 50%, and diagnosis 29%).3 Duke residents also reported changes in medication and diagnostic test choices and, in addition, confirmed the influence of their literature findings on the prognosis communicated to the patient. When Sackett et al. introduced an “evidence cart” to general medicine inpatient morning rounds, successful searches for answers to CQs altered or corrected 48% of associated management decisions.10 Our approach of teaching resident physicians to engage the medical literature on the night of admission appears to have an effect similar to that of Sackett's bedside use of evidence-based resources on rounds. In the Sackett study, however, the emphasis was not on the skills required to search the medical literature. Rather, the focus was on ready, immediate access to selected, customized tools at the bedside. When these tools were removed, the rate of successful searches dropped from 81% to 12%. Our approach, which focused heavily on question building and searching skills, empowered residents to find answers to their clinical questions with existing resources. Furthermore, these skills are transferable to any setting.
The database that we have described above allows our residents to record their CQs with links to selected Medline citations and to report the perceived impact of the medical literature on patient care. Residents reported that useful literature collected to answer CQs changed patient management in almost 50% of cases.
Our residents sought answers to over 90% of the CQs entered, suggesting that the educational strategy of requiring residents to report their CQs encouraged them to search for answers. This process reinforces a model for self-directed learning by asking pertinent clinical questions and seeking and applying evidence as part of the practice of medicine. The broad distribution of CQs in our database suggests that a patient-based curriculum on an inpatient medicine service can lead residents to learn about a wide range of medical subspecialties. As has been previously reported in the outpatient setting,11 therapy questions were the prevailing type of CQ raised by our residents. This may reflect an emphasis on therapeutics in our local medical residency curriculum or in the published literature. The majority of therapy studies considered by our residents to be useful (75%) contained RCT data, suggesting that our emphasis on evidence-based educational strategies may have had an impact on their ability to critically appraise the therapy literature.
The emphasis on RCTs and questions of therapy may also reflect the medical community's struggle with the exponential accumulation of medical evidence. We sought to facilitate residents' direct use of this intimidating body of information. As a result of this project, residents were able to answer three fourths of their CQs using the Medline database. The CAR database allowed residents to rapidly access the prior work of their colleagues, as once a question and the associated reference were entered into the database, they became immediately available. In fact, over the ten-month period of our data collection, the CAR database was accessed over 1,000 times. In over 600 of those cases, the residents viewed the full text of the Medline citation directly through the database. To maximize sharing of the most valid, applicable articles for our patient population, we posted 70 CATs, which were viewed over 300 times. These validated summaries allowed the residents to review a question, an article, and its application to patient care within minutes.
In subsequent years of our medical residency program, we intend to continue collecting questions, citation links, and CATs. Over time we anticipate that use of the CAR database will have an even greater impact on information-sharing and residents' education at our institution. In the current academic environment there is increased emphasis on accountability on the part of residency programs to ensure that trainees are taught a core curriculum. A tool such as the CAR database can allow the systematic collection of an evidence base for that curriculum. In addition, improved direct links to electronic textbooks will facilitate adequate emphasis on core medical knowledge.
Academic medical communities are being pressured to demonstrate the impact on patient care of teaching medical residents the principles of EBM. Through our educational program we developed the tool described above to enhance and monitor this impact. Use of this tool by our medical residents altered their clinical decisions in almost half of the cases reported. Further research is needed to substantiate the effects of the formulation of CQs and the procurement of answers to these CQs in the medical literature on critical health outcomes such as cost of care, length of stay, patient morbidity, and mortality.
As the body of medical evidence continues to accumulate exponentially, health institutions may deliver a higher standard of care if they can develop mechanisms for efficiently delivering EBM to the bedside. Without such mechanisms, busy clinicians will not be able to answer even half of the CQs they confront in daily practice.12 The approach we have reported of engaging the medical literature may help other institutions—and ours—to more effectively educate housestaff and faculty and ultimately to improve patient care.
1. Chen J, Radford MJ, Wang Y, Marciniak TA, Krumholz HM. Do “America's best hospitals” perform better for acute myocardial infarction? N Engl J Med. 1999;340:286–92.
2. Allison JJ, Kiefe CI, Weissman NW. Relationship of hospital teaching status with quality of care and mortality for Medicare patients with acute MI. JAMA. 2000;284:1256–62.
3. Marshall JG. The impact of the hospital library on clinical information decision-making: the Rochester study. Bull Med Libr Assoc. 1992;80:169–78.
4. Klein MS, Ross FV, Adams DL, Gilbert CM. Effect of online literature searching on length of stay and patient care costs. Acad Med. 1994;69:489–95.
5. Richardson WS, Wilson MC, Nishikawa J, Hayward RB. The well-built clinical question: a key to evidence-based decisions. ACP J Club. 1995;Nov/Dec:A12–A13.
6. Cabell CH, Schardt C, Sanders L, Corey GR, Keitz SA. Resident utilization of information technology: a randomized trial of clinical question formation. J Gen Intern Med. 2001;16:838–44.
7. Bennett KJ, Sackett DL, Haynes RB, Neufeld VR, Tugwell P, Roberts R. A controlled trial of teaching critical appraisal of the clinical literature to medical students. JAMA. 1987;257:2451–4.
8. Sackett DL, Straus SE, Richardson WS, Rosenberg W, Haynes RB. How to Practice and Teach EBM. London, U.K.: Churchill Livingstone, 2001.
9. Sauve S, Lee HN, Meade MO, et al. The critically appraised topic: a practical approach to learning critical appraisal. Ann R Soc Phys Surg Can. 1995;28:396–8.
10. Sackett DL, Straus SE. Finding and applying evidence during clinical rounds: the “evidence cart”. JAMA. 1998;280:1336–8.
11. Green ML, Ciampi MA, Ellis PJ. Residents' medical information needs in clinic: are they being met? Am J Med. 2000;109:218–23.
12. Gorman PN, Helfand M. Information seeking in primary care: how physicians choose which clinical questions to pursue and which to leave unanswered. Med Decis Making. 1995;15:113–9.