Marantz, Paul R. MD, MPH; Burton, William PhD; Steiner-Grossman, Penny EdD, MPH
With the current focus on core competencies in medical education, there is growing recognition that future physicians must learn epidemiology and biostatistics. These disciplines can be likened to a “basic science” foundation for such important medical competencies as the practice of evidence-based medicine (EBM), population-based medicine,1 and preventive medicine, all recognized as essential skills for the physician.
However, many educators believe that skills in these areas are difficult to teach to medical students, as students may not learn the material effectively and often do not enjoy the courses that have been offered.2,3 The AAMC states that population health is “best taught through examples and experiences, not courses.”1 Dyke et al. found that students were more enthusiastic about epidemiology and its relevance to their professional lives when the material was taught using a problem-based learning (PBL) format rather than a traditional lecture-based course.4 Other innovative methods have included the integration of a preventive medicine curriculum into an obstetrics and gynecology clerkship,5 the use of student facilitators to enhance student participation in fourth-year epidemiology lectures,6 and the introduction of clinical epidemiology ward rounds in a pediatrics rotation.7
At the Albert Einstein College of Medicine (AECOM), our course in epidemiology and biostatistics (taught by the Department of Epidemiology and Social Medicine) has pursued the elusive goal of student satisfaction for many years. During the eight years from 1989 to 1996, this goal had not been achieved, although exams indicated that most students “learned” the material. Annual course evaluations assessed several aspects of students' perceptions, including an overall score based on students' agreement with the statement, “Overall, the course was a positive learning experience.” The student ratings, on a 1–5 scale, never met our internal criterion standard of 3.5, considered by our school to be a “positive” assessment. Throughout that period, our ratings ranged from 2.5 to 3.2 (mean, 2.9), making it consistently among the least popular courses in the first-year curriculum.
This consistency in evaluation did not reflect a consistent curriculum: our course was completely redesigned virtually every year. Such revisions included moving from large lectures to small groups to combinations of the two; from multiple to single lecturers to a single lecturer complemented by subject-specific expert panels; from mock research design to critical literature analysis to classical epidemiology; from directive small-group seminars (with specific homework assignments) to an attempt to employ principles of problem-based learning. While the faculty learned something from each of these attempts, none yielded success with respect to students' evaluations. Neither did “repackaging” the course in 1996 by renaming it Principles of Preventive Medicine: our rating that year was 2.9, right at our average over the preceding years.
In 1997, we found a teaching method that succeeded in achieving a positive student evaluation, a success that has been maintained over five consecutive years. Our teaching approach uses the case-discussion method, an approach used widely in schools of business, law, and education8,9,10; to our knowledge, this method has been used infrequently in undergraduate medical education. In this paper, we describe the method, the curriculum of our course, our students' responses over the first five years of the new approach, and the potential applications of case-discussion teaching in medical education, both for epidemiology and for other courses.
WHAT WE DID
The Teaching Approach
In June 1996, four AECOM faculty members from different departments attended the Program for Leaders in Medical Student Education, sponsored by the Josiah Macy, Jr. Foundation and the Harvard Macy Institute. The program, run jointly by faculty from the Harvard Medical School (HMS) and the Harvard Business School (HBS), focused on training participants to plan and accomplish institutional curricular change in medical schools. While the content of the course was interesting and important, it was the method used to teach it that held particular promise for the teaching of epidemiology and biostatistics.
The program was taught primarily using the “case-discussion method” of teaching, as practiced at HBS.11 The case discussions were led by two master teachers from HBS, both of whom led discussions in which most members of our group of 57 medical educators actively participated. The promise of this method, in which groups this large could be taught in a stimulating, student-centered, and interactive fashion, was too great to ignore. At AECOM, it was a continuing struggle to find sufficient faculty to teach groups of 15 students, let alone the groups of five to seven considered optimal for PBL.12
To further explore the potential application of the case-discussion method to medical education, three faculty members returned to Boston for a two-day visit to HBS and HMS. The AECOM faculty attended several classes at HBS and were struck by how effective this method was, even when the subject matter varied and when different faculty members had different teaching styles. AECOM faculty met with two enthusiastic advocates of this method of teaching, both of whom teach students at HBS and lead seminars in this teaching method for faculty throughout the university. They described the excitement, fears, and challenges likely to be faced by faculty embarking on this venture, and emphasized their firmly held belief that, if we stayed the course, the effort would be worth it in the end. But one caveat was clearly made and restated: this would not be easy, and success would not come quickly. Faculty would need to be willing to try new approaches, step out of their conventional authority roles, and take risks. There would also have to be a major commitment to faculty development. Writing the cases would be difficult and time-consuming and, as with anything new, we should expect initial setbacks. Finally, it was emphasized that this was not an all-or-nothing conversion: lectures, simulations, problem-based learning, and small-group discussions all had their places.
We asked why this method, despite its long history of success in business and other professional schools, had not been used widely in medical education. We found that there was no clear reason for this, and no theoretical barrier to its application in medical schools. A potential fear expressed by some of our faculty was that this method would be more difficult to use in teaching technical rather than conceptual material. However, business schools also teach technical material. The same fear had been expressed with respect to PBL, and effective approaches had been found.
Faculty Development and Course Design, Year 1
On our return to AECOM, we organized an intensive two-day, interdepartmental faculty development workshop, led by a professor who co-leads two university-wide seminars on discussion teaching at Harvard and who co-authored the book we used for this workshop.11 Dates were set, and we enrolled faculty willing to free themselves for the two full days and to spend the required preparation time. Thirty-three faculty participated in this workshop, including faculty from epidemiology, hematology, and several basic science departments, our Division of Education, and the dean of AECOM. Participants found the workshop energizing, demanding, and informative, and the dean's active participation throughout the two days underscored a serious institutional commitment to education that was well received by the faculty.
The Principles of Preventive Medicine course began only a few weeks after this workshop, but had already been designed to use this new method. Our traditional nine-week curriculum was condensed into a three-week lecture series, complemented by a detailed syllabus. Students were instructed that the goal of these three weeks was to learn the basic vocabulary and concepts, in the traditional way of “covering” the facts; they would be expected to actually learn the material during the case-discussion portion of the course. They were free to choose any method they preferred during this first portion of the course: attending lectures (attendance not required), using the syllabus (either hard copy or online), or using supplementary readings. The only requirement was that they pass the take-home midterm examination with a grade of at least 70%. The mean grade on the exam was 87%, compared with 77% the year before, when the same exam was administered in class after an eight-week course. (These scores were not compared statistically, since the exams were administered in such different circumstances that a presumption of the “null hypothesis” would be inappropriate.) Of 163 students, only three failed; they attended a remedial class prior to the case-discussion sessions.
For the next six weeks, the class was divided into seven sections of 23 or 24 students each. Attendance and advance preparation were required, and students were informed that five points would be deducted from their final exam grade for each unexcused absence, and two points deducted if they were judged to be unprepared. For each class, preparation consisted of reading a case and supplementary materials and thinking about the case in the context of the learning objectives for the session. Cases were designed to require about an hour of preparation time. They were not about patients, but rather about doctors or medical students making decisions or interpreting information. Cases were developed by the course leader, in consultation with other faculty as needed; such preparation could take several days per case for research and writing.
Three representative cases can serve as examples: (1) a medical student on vacation, being asked by his family about the controversy around beta-carotene supplementation; (2) an allergy/immunology fellow looking for causation clues in the beginning of the AIDS epidemic; and (3) an investigator evaluating the purported harmful effects of dietary sodium. Supplementary materials included journal articles, editorials, newspaper articles, and/or consensus statements, with an emphasis on original research study reports. Such materials included (1) a case–control study demonstrating an association between high serum beta-carotene levels and reduced risk for lung cancer,13 a subsequent randomized trial showing beta-carotene supplementation leading to an increased risk of lung cancer in smokers,14 and related editorials and press coverage; (2) early Morbidity and Mortality Weekly Reports15,16 and an editorial17 during the early days of the HIV epidemic, and a case–control study that found an association between amyl nitrite use and Kaposi's sarcoma18; and (3) a longitudinal study demonstrating increased rates of myocardial infarctions among hypertensive men with low urinary sodium,19 a meta-analysis of the physiologic effects of salt restriction,20 and two differing analyses of National Health and Nutrition Examination Survey (NHANES) data,21,22 along with editorials and correspondence. Newer cases have been and are continually being developed, with an emphasis on emerging and resolving controversies (e.g., the appropriate use of mammography for breast cancer screening; anthrax vaccination; the use of postmenopausal estrogen replacement therapy).
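The quantitative reasoning these cases ask of students, for instance reading an odds ratio out of a case–control report such as the Kaposi's sarcoma study, reduces to a short calculation. A minimal sketch follows; the cell counts are hypothetical illustrations chosen by us, not data from any cited study, and the confidence interval uses Woolf's log-scale method.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 case-control table.

    a = exposed cases,    b = unexposed cases
    c = exposed controls, d = unexposed controls
    """
    or_ = (a * d) / (b * c)
    # Woolf's method: standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lower = math.exp(math.log(or_) - z * se)
    upper = math.exp(math.log(or_) + z * se)
    return or_, lower, upper

# Hypothetical counts for illustration only
or_, lower, upper = odds_ratio_ci(a=20, b=5, c=10, d=15)
print(f"OR = {or_:.1f}, 95% CI {lower:.2f} to {upper:.2f}")  # OR = 6.0
```

A wide interval such as this one is exactly the kind of finding the case discussions push students to interpret rather than merely compute.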
Students sat at desks arranged in a circle, with the instructor in the front of the room, near the blackboard. A name card printed in large type was placed on the table in front of each student. (Most of our faculty felt that this simple step did more to change the learning environment dramatically than anything else we did. It allowed faculty to learn students' names, increasing the bond between faculty and students; and faculty would call on students and refer to previous students' comments by name, perhaps motivating the students to prepare and to participate.) Case discussions generally filled the full two-hour time period allotted for each session. Throughout the course, the instructors were impressed with the level of preparation, participation, and mastery demonstrated by the students.
An in-class final exam was administered, using a short-answer and essay format. One week before the exam, all students received a copy of a recently published journal article;23 the examination questions all related to that paper. Exams were scored blinded to student names, to avoid bias in grading. The mean exam score was 82% (including an average reduction for absence or lack of preparation of only 0.8%), and the subjective impression of our faculty was that the students, on the whole, demonstrated good understanding of the course material. Passing the course required a score of 65%, unless the student demonstrated excellent effort and participation during the case discussions, in which case the passing criterion was dropped to 60%. Only one student failed to achieve the predetermined passing grade; this was one of the three students who had failed the midterm.
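The grading rules described above are simple enough to state as code. A minimal sketch, with function names of our own; the point values and thresholds are those given in the text:

```python
def final_grade(exam_score, unexcused_absences=0, unprepared_sessions=0):
    """Apply the course's penalty rules to a raw final-exam score (0-100):
    5 points per unexcused absence, 2 points per unprepared session."""
    return exam_score - 5 * unexcused_absences - 2 * unprepared_sessions

def passes(grade, excellent_participation=False):
    """Pass at 65%, relaxed to 60% for excellent effort and participation."""
    threshold = 60 if excellent_participation else 65
    return grade >= threshold

# e.g., an 82 with one unexcused absence and one unprepared session
print(final_grade(82, 1, 1))  # 75
```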
Only minor modifications have been made in the course during the subsequent four years. Cases were modified and new cases were written. The midterm exam was changed from a take-home format to an online format, with the passing grade increased to 75% and students allowed to retake it if they failed. Many faculty did not continue to teach every year; some chose not to take on the responsibility, and some ineffective teachers were asked to retrain or to teach in different settings. New faculty members were added to the course, with a modified (one-day) faculty development program held in advance. A computer-based “workshop” session (with a predefined problem set) was added to enhance the medical students' understanding of concepts related to statistical power and sample size estimation.
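The core computation in that workshop topic, estimating the sample size needed to detect a given difference with adequate power, reduces to a short formula. The sketch below (our own illustration, not the course's actual problem set) covers the simplest case: comparing two means with equal group sizes, a known common standard deviation, two-sided alpha of .05, and 80% power.

```python
import math

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.8416):
    """Sample size per group for a two-sample comparison of means.

    delta   : smallest difference in means worth detecting
    sigma   : common standard deviation (assumed known)
    z_alpha : normal quantile for two-sided alpha = 0.05
    z_beta  : normal quantile for power = 0.80
    """
    n = 2 * ((z_alpha + z_beta) ** 2) * sigma ** 2 / delta ** 2
    return math.ceil(n)  # round up: a fraction of a subject cannot be enrolled

# e.g., detecting a 5-point difference when the SD is 10 points
print(n_per_group(delta=5, sigma=10))  # 63 per group
```

Halving the detectable difference quadruples the required sample size, a relationship students grasp far more readily after working such problems themselves.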
Readers interested in more detail about the course can find materials online at 〈http://cobweb.aecom.yu.edu/ooe/courses/PrevMed/〉.
Evaluations of Year 1
Students' response to the course was strikingly positive. Seventy-eight percent agreed or strongly agreed with the statement “Overall, the course was a positive learning experience” (compared with 38% the year before). The mean score for this statement was 4.0 (increased from 3.0 the year before, p < .0001). Instead of being one of the least popular first-year courses, we were suddenly catapulted into the top three, and the final exam was the second most highly rated exam of the year. Students' narrative comments reflected excitement and satisfaction with this teaching method: “Overall, one of the best designed courses that I've had all year”; “Excellent, stimulating course”; “Teaches critical reading and critical thinking, processes that are discouraged and considered reactionary in other courses”; “[The] willingness to facilitate student learning was a major strength of the course”; “Best course I've taken here!! Thanks.”
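How decisive a jump from 38% to 78% agreement is can be checked with a two-proportion z-test. The sketch below is our illustration only: the class sizes of 160 per year are an assumption (the actual denominators are not restated here), and the course's reported p < .0001 was computed on the mean scores, not on these proportions.

```python
import math

def two_proportion_z(p1, n1, p2, n2):
    """Two-sided two-proportion z-test using the pooled standard error."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p

# Hypothetical equal class sizes of 160 respondents per year
z, p = two_proportion_z(0.78, 160, 0.38, 160)
print(f"z = {z:.2f}, p < .0001" if p < 1e-4 else f"z = {z:.2f}, p = {p:.4f}")
```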
Students' comments identified several important problems as well. One in particular spoke to our concern about the teaching of technical material: 15% indicated that they wanted more rigor, as exemplified by the statement, “More math, less BS!” Also, 17% of students noted that the learning objectives for the cases were not always clear. Further, there was substantial variability in how effective students felt the small-group learning experience was, with one section receiving particularly low ratings. It is interesting to note that the instructor for that section, an experienced and talented teacher, was the only instructor who had not been able to participate in the faculty development workshop.
Evaluations of Years 2–5
For the next four years, the course was not substantially modified, and students' responses have been consistently positive. The overall course evaluation score had always been neutral prior to this intervention; that is, ratings fell between 2.5 and 3.5 on the 1–5 scale (1 = worst, 5 = best). Since the intervention, the overall score has always been positive (that is, above 3.5): in two of the four years the score was around 4, and in one year it was around 4.3. Overall response, as well as students' self-perceived mastery of specific course content, showed substantial improvement in the five years after the intervention compared with the four years before the intervention for which comparable data are available (see Table 1).
Other Measures of Success
Students' evaluation data provide only one measure of the success of a course. Ultimately, we would hope that our success will translate into future physicians who are more effective consumers (for those in practice), teachers, and producers (for those in academics) of clinical research. We have no data available to assess these outcomes, but there are some data that effectively complement the students' course evaluations.
The AAMC Medical School Graduation Questionnaire provides students' perspectives at the completion of medical school. While this is also a student assessment, it asks them to “look back” at their education at the end of four years, and provides an opportunity to compare the opinions of AECOM students with those at other U.S. medical schools. Data from the classes of 1998 and 1999 (who did not receive the case-discussion course) and the classes of 2000 and 2001 (who did receive the new course) are provided in Figure 1. These data indicate that the AECOM students were substantially more likely to feel that their basic science education in biostatistics and epidemiology provided a “good” or “excellent” preparation for the clinical clerkships than were students at all U.S. medical schools. There was a notable improvement in 2000, after the first AECOM class took the revised course. Of note, these data do not indicate that AECOM students are just more positive than other medical students; of seven topic areas rated, only biostatistics and epidemiology and one other were rated substantially higher by AECOM students than by students nationwide.
While our course was not designed with the United States Medical Licensing Examination in mind, we examined AECOM's Step 1 scores for 1999 and 2000 as an outcome measure. Score reports compare an individual school's performance on 19 “disciplines and organ systems” with the national averages. For both 1999 and 2000, AECOM's highest score among all 19 topic areas was in biostatistics and epidemiology. (Scores for biostatistics and epidemiology were not available before 1999.)
DISCUSSION
Our findings demonstrate that medical students can respond positively to a course in epidemiology and biostatistics when it is taught using the case-discussion method.
As stated earlier, it has been widely recognized that epidemiology and biostatistics are particularly difficult disciplines to teach in medical school. Despite the importance of these topics to the practice of medicine, students tend not to see their relevance. These courses often take a back seat to other basic science courses, such as anatomy and biochemistry, which seem to have a higher priority in the schools and on standardized examinations. Courses that follow a traditional approach of lectures and multiple-choice exams may tend to focus students on the retention of facts that seem irrelevant to medical science and clinical practice.
To allow students to see the importance and excitement of this material, and to engage them in a process of critical reasoning, it is appropriate to see our students as adult learners and to rely more on interactive, student-centered approaches. Case-discussion teaching, PBL, and other interactive methods all share a philosophical underpinning that relates as much to medical education as to business education.24 While lectures represent the most efficient way to achieve the goal of information distribution, they fall short in achieving other goals, such as developing clinical judgment, honing critical analytic skills, or internalizing and retaining knowledge. In addition, most lecture courses do little to build a community of active learners, in that the one-way lecture system tends to separate the authority-figure instructor from students and the students from each other.
Several years ago, we tried an experimental approach that was almost successful. Using a “collaborative learning model,” one section of 18 students (out of over 160 enrolled students) approached the learning of epidemiology and biostatistics by choosing their own areas of clinical interest, identifying and retrieving the relevant literature, and learning the course material through the critical evaluation of that literature. We were able to replicate it the following year in two of the ten “sections” of the course, and reported this positive result,25 but we were not able to generalize it to the entire class. Problems in generalization were many, including logistical barriers of having many groups of students simultaneously searching the literature and photocopying articles for their groups, and such faculty issues as inadequate numbers, resistance, and lack of training in the method. There were particular difficulties in applying this student-centered model to groups as large as 18 students; it may be that groups of five to seven students are optimal for this approach, as they are for PBL.12
The ability to provide a student-centered, interactive learning environment with large groups is a particular appeal of case-discussion teaching. At the Harvard Business School, this approach is used with sections as large as 80–90 students. Most students participate actively in this setting because they are motivated to participate: class participation is a factor in their course grades, and students can be called upon to contribute even if they do not volunteer. In applying this approach to our setting, we emulated this as best we could: even in a pass/fail course, we applied penalties for absence or lack of preparation, and bonuses for class participation. In each of the five years, our faculty have noted that students' attendance, preparation, and participation have been excellent. The current report, however, does not demonstrate that interactive teaching can be employed successfully in medical school with groups as large as 80–100 students: our classrooms limited our groups to 20–24 students, and any inferences drawn from this paper should be likewise limited to such moderate-sized groups.
Since approximately 160–170 students take this course each year, appropriate group sizes for PBL would require more than 20 faculty members teaching each year. This has always been difficult to achieve in our environment, and the problem of enlisting adequate numbers of faculty has grown markedly in the last few years: clinical faculty are under increasing pressure to generate clinical revenues, and research faculty struggle to find adequate grant support. Using the case-discussion method, we have taught our course with only seven to ten faculty each year, providing several advantages:
▪ less pressure on faculty and chairs to find release time for teaching;
▪ the ability to recruit the most dedicated and effective teachers, rather than enrolling anyone willing to serve;
▪ the ability to rotate faculty, rather than continually relying on the same faculty to teach year after year; and
▪ the luxury of focusing faculty development efforts on those most interested.
The only reason we have used as many faculty members as we have is that our available teaching spaces can accommodate no more than 24 students in the appropriate horseshoe configuration. If tiered teaching spaces could be designed and built, similar to the seminar rooms used at HBS and other professional schools, we could attempt to apply this method with 50–90 students, reducing our faculty to two to four teaching faculty each year. In the current era of medical education, this seems a particularly promising remedy to counteract two conflicting trends: increasing reliance on small-group, interactive learning; and increasing difficulties in relying on “voluntary” teaching faculty.
Several limitations of this teaching method must be considered. First, although we have emphasized the advantage of using fewer faculty by creating larger groups, our experience suggests that the case-discussion method requires substantial preparation time and can be draining. While faculty informally report enthusiasm for this method (perhaps contributing to the positive findings we report), many do not volunteer to teach in subsequent years: over the first five years of the program, 40 groups were led by 22 different faculty, with only six faculty (including the course leader) teaching for three or more years. Thus, faculty who enjoy teaching this way often choose not to repeat the experience.
Another limitation is the need for faculty development, retraining, and oversight. We have created our own one-day workshop to help develop faculty skills in case-discussion teaching for first-time and repeating faculty, and arrange ad hoc oversight by having occasional observation of the faculty during class time. We recognize that much more could be done: for instance, the Harvard Business School requires regular peer observation, evaluation, and feedback; and Harvard's university-wide “Discussion Leadership Seminar” takes place over ten weeks, not a single day. However, we believe the positive findings reported in this paper indicate that substantial success can be achieved even with less intensive faculty support.
Further, we note that while the focus of this paper is on the teaching method, our experience specifically addresses the use of the case-discussion approach for teaching epidemiology and biostatistics. While this approach could be useful for other medical school courses (for example, it has been used in a modified format for the Hematology and Molecular and Cellular Foundations of Medicine courses at AECOM), its broader applicability can be assessed only by medical educators when they apply this method in their courses.
While the longitudinal data presented in this paper cannot prove causality, we feel extremely confident that the positive results observed are attributable directly to the teaching method itself. We cannot rule out a temporal trend, through which students may be becoming more open to the importance of epidemiology and critical reasoning, but this explanation is not consistent with the sudden and sustained change in students' evaluations that we observed. We cannot attribute this success to a “Hawthorne effect,” wherein any experimental intervention can be perceived as positive: first, because we had “experimented” in changing the curriculum virtually every year before 1997, and second, because the positive results have persisted over the last five years despite an unchanging curriculum.
We present these findings in the hope that others teaching epidemiology and biostatistics may find a new potential for curricular change, and that medical educators in other disciplines may consider the advantages of this approach. In epidemiology, we have realized the possibility of actually teaching the material we believe is so important, and have had the gratifying experience of a positive student response. By further developing this technique, we have the opportunity to engage students in the exciting process of critical thinking, to lay the foundation for continuing learning in critical analysis and evidence-based medicine.
With this early success comes a greater challenge: to build on this experience, to refine and improve our course, our cases, our readings, and our teaching skills. We must keep it as fresh and exciting for ourselves each year as it was the first year. Fortunately, the method itself makes every teaching experience new, and should help us maintain our enthusiasm. That, above all else, is the power of case-discussion teaching: it is energizing, exhilarating, and engaging.