Medical schools and other institutions of higher education are investing heavily in educational technology,1 but within medical education there is conflicting evidence as to whether educational technologies actually improve student outcomes.2–4 Web-based teaching is a relatively recent innovation within the field of computer-aided instruction that harnesses the major technical advantages of the World Wide Web: universal accessibility for users, ease of updating content, and cross-referencing through hyperlinks.
Web-based teaching seems particularly well suited to the structure of medical education. Computers with Internet access are now ubiquitous in the clinical environment, enabling a dispersed student population to access Web-based materials easily. Web-based teaching can be done in students’ free time so that, unlike student conferences, it neither conflicts with their clinical responsibilities nor detracts from their participation as members of a patient-care team.
Several recent reviews have highlighted that most published analyses of Web-based teaching to date have been descriptive, lacked an appropriate control group, and/or focused on the subjective impressions of the students in lieu of validated outcome measures.2,3,5 In the small number of randomized controlled trials that have investigated the educational value of Web-based teaching as an adjunct to or replacement of traditional teaching methods, the results have been mixed. In aggregate, these studies demonstrate that Web-based teaching can generate significant learning gains but does not outperform other educational methods.2,3,6–14 In addition, while Web-based teaching may improve learning efficiency, there is little evidence to support the hypothesis that learning derived from Web-based teaching is durable over time.2,3,10
It remains unclear if and how Web-based teaching should be used within undergraduate medical education. In this article, we report the results of a multi-institutional randomized controlled trial in which we investigated the impact of an adjuvant Web-based teaching program on medical students’ learning during clinical rotations.
From April 2003 to May 2004, we invited 351 medical students completing clinical rotations in surgery/urology at four U.S. medical schools to participate in the study: Harvard Medical School (HMS), University of Vermont College of Medicine (UVM), University of Rochester School of Medicine and Dentistry (URSMD), and Boston University School of Medicine (BUSM). The four medical schools (three private, one public) were selected to assess the generalizability of the intervention across institutions. The duration of the study allowed accrual of a sample size sufficient for a statistical power of .93 to detect a small effect size (.20) at a two-sided .05 significance level. Students were recruited via an e-mail announcement and an oral presentation by faculty; participation was voluntary, and there were no a priori exclusion criteria. We defined students as participants if they entered data into any portion of the pretest; completion of the Web-based program was defined as completion of both the pretest and posttest. At selected sites, students’ participation was promoted through the distribution of $25 bookstore gift certificates upon completion of the Web-based teaching and evaluation. The institutional review board at each participating institution approved the study.
Development of validated curriculum and Web-based teaching cases
A medical student curriculum was developed which focused on four core topics in clinical urology (benign prostatic hyperplasia, erectile dysfunction, prostate cancer, and screening with prostate-specific antigen) and was validated by a panel of four medical educators: two urologists and two medical physicians. Using the Virtual Patient™ online case-delivery software, a urologist (BPK) developed two Web-based teaching cases to cover the curricular points of each of the four core topics. Content validity of the teaching cases was established by the above panel of four medical educators. These interactive teaching cases consist of sequential Web pages in which the students are given a text-based clinical scenario, answer a multiple-choice question about the workup or management of the patient’s urologic concern, and are given a text-based explanation of the correct and incorrect responses to these multiple-choice questions (see Figure 1). The teaching cases vary in length, with two to seven questions and six to 17 total sequential Web pages in each. The teaching cases are stand-alone modules that do not involve faculty participation and were administered online via HMS’s MyCourses™ Web-based course management system.
Development of validated test instrument
Based on the focused urology curriculum, we developed a 28-item multiple-choice test covering the four core topics using the question development guidelines of the National Board of Medical Examiners.15 Content validity of the test was established by the panel of four medical educators mentioned above. Construct validity was established by administration of the test to 19 urology experts.16 Reliability of the instrument was measured by Cronbach’s alpha at .76,17 and its one-week test-retest reliability was .72. In this study, we used the identical 28-item test as both the pretest and posttest and administered it online via the HMS MyCourses Web-based course management system. The questions presented in the teaching cases were different from those on the validated test. The details of the development and validation of the test instrument have been published previously.16
Study design and organization
We designed this study as a multi-institutional randomized controlled trial to investigate whether the use of Web-based teaching cases can significantly improve learning by medical students. After being stratified by medical school, gender, hospital, and clinical rotation, students underwent blocked randomization18 to one of two study arms (see Figure 2):
* Cohort A. Students received Web-based teaching on prostate-specific antigen screening and prostate cancer, in addition to the standard surgery/urology curriculum. They did not receive Web-based teaching on benign prostatic hyperplasia or erectile dysfunction.
* Cohort B. Students received Web-based teaching on benign prostatic hyperplasia and erectile dysfunction, in addition to the standard surgery/urology curriculum. They did not receive Web-based teaching on prostate-specific antigen screening or prostate cancer.
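The stratified, blocked randomization described above can be sketched as follows. This is a minimal illustration, not the study's actual implementation: the function name and the block size of 2 are assumptions, and the input list is taken to be a single stratum (students sharing the same school, gender, hospital, and rotation).

```python
import random

def blocked_randomization(students, block_size=2):
    """Assign students within one stratum to Cohorts A and B.

    Each block contains a random permutation of the two study arms,
    so the cohorts stay balanced within every stratum as enrollment
    accrues. Block size of 2 is an illustrative assumption.
    """
    assignments = {}
    arms = ["A", "B"]
    for i in range(0, len(students), block_size):
        block = students[i:i + block_size]
        # Random order of arms within the block (equal numbers of each).
        order = random.sample(arms * (block_size // 2), block_size)
        for student, arm in zip(block, order):
            assignments[student] = arm
    return assignments
```

With an even number of students per stratum, this yields exactly equal cohort sizes, which is the balancing property blocked randomization is used for.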
One of us (BPK) performed the randomization and cohort assignment of eligible students. Faculty at all institutions were blinded to the students’ cohort assignments. Over a defined time period, students were asked to complete the 28-item pretest at the start of the period, to work through four Web-based teaching cases covering two of the four urologic topics, and to complete the 28-item posttest at the end of the period. Completion of all components of the program required approximately 60 to 90 minutes. Because all students were tested on all four core topics but received Web-based teaching on only two, an effective control group was established. The standard surgery/urology curriculum and the study period varied at each institution: third-year HMS students completed the study during their mandatory one-week clinical rotation in urology; third-year UVM students completed the study in a two-week block during their seven-week surgery rotation; second-year URSMD students completed the study as they attended three weekly four-hour outpatient sessions in urology; and third-year BUSM students completed the study during a surgical subspecialty elective (one to two weeks in length).
In order to assess the durability of learning from Web-based teaching, 90 HMS students who participated in the randomized controlled trial and completed the Web-based teaching from July 2003 to May 2004 were asked to complete a paper version of the 28-item test at the end of their third year of medical school (May 2004). These students had completed their urology rotation from two weeks to 10 months earlier. Upon completion of the trial, access to all of the teaching cases was provided to all randomized students at all sites.
Outcomes and measurements
The primary outcome measure was the amount of learning (difference in test scores) in the four core urologic domains, as measured by the 28-item validated instrument. The durability of learning from Web-based teaching was a secondary outcome measure. We also performed an exploratory analysis of learning efficiency. The times of completion of the pretest and posttest were recorded. Use of the Web-based teaching cases was not monitored, but at the end of the program students were asked to estimate the percentage of the teaching cases they had completed.
“Learning efficiency” estimates the amount of learning per unit time for a given mode of instruction, using the following formula: (posttest score – pretest score) ÷ hours of learning.10 This analysis allows the rate of students’ learning with Web-based teaching to be compared with that of the standard curriculum alone. Since HMS is the only school in the study with a one-week structured clinical rotation in urology, only HMS students were included in the exploratory analysis of learning efficiency. HMS control students were estimated to spend 40 hours learning urology during this week. Although participating students reported that the Web-based teaching required 60 to 90 minutes to complete, two hours was designated as the learning time for the Web-based teaching program. To determine the relative learning efficiency of Web-based teaching as a stand-alone resource, the score improvement of the control group (no Web-based teaching) was subtracted from the score improvement of the Web-based teaching group; the resulting value (score improvement attributable to the Web-based teaching alone) was then divided by the hours of learning (two hours).
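The two computations above can be written out directly. The function names and the figures in the comments are illustrative only, not taken from the study's raw data:

```python
def learning_efficiency(posttest_score, pretest_score, hours_of_learning):
    """Learning efficiency: (posttest score - pretest score) / hours."""
    return (posttest_score - pretest_score) / hours_of_learning

def standalone_efficiency(gain_with_wbt, gain_control, wbt_hours=2.0):
    """Efficiency of Web-based teaching as a stand-alone resource:
    the control group's score improvement is subtracted out before
    dividing by the hours assigned to the Web-based program (2 hours)."""
    return (gain_with_wbt - gain_control) / wbt_hours

# Illustrative: a 2-point gain over a 40-hour rotation week
# gives an efficiency of 0.05 points per hour.
rotation_alone = learning_efficiency(posttest_score=9.0,
                                     pretest_score=7.0,
                                     hours_of_learning=40)
```

The division by hours is what lets a 40-hour rotation and a 2-hour Web-based module be compared on a common scale.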
We performed analyses on the test results from all students who completed both the pretest and posttest, regardless of whether they completed the Web-based teaching cases. We did not use analysis of covariance because the data significantly violated (p < .001) the assumption of homogeneity of regression. In such situations, Maxwell and Delaney strongly recommend mixed two-way analysis of variance (ANOVA) as the appropriate alternative.19 Further analysis was performed with a paired t test, which allowed each student to act as his or her own control. Cohen’s d statistics were used to measure effect size, with .2 generally considered a small effect, .4 a moderate effect, and .8 a large effect.20 Cohen’s d expresses the difference between means in standard deviation units. We performed statistical calculations using SPSS for Windows (SPSS Inc., Chicago, IL).
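As a rough illustration of the effect-size metric, a minimal computation of Cohen's d for two independent groups is sketched below. (For the paired design used in this study, a common variant instead divides the mean of the paired differences by the standard deviation of those differences; this sketch shows only the general idea of expressing a mean difference in pooled standard deviation units.)

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Cohen's d for two independent groups: the difference between
    group means expressed in pooled-standard-deviation units."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd
```

By the thresholds cited in the text, a d of roughly .2 is small, .4 moderate, and .8 large.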
Eighty-one percent of all eligible students (286/351) volunteered to participate in the Web-based program. Participation rates at individual medical schools varied substantially: 93% (169/182) at HMS, 96% (43/45) at UVM, 71% (36/51) at URSMD, and 52% (38/73) at BUSM (p < .001, chi-square). We found no significant differences in participation by gender (p = .10). Of the 286 participating students, 73% (210/286) completed the Web-based program. Students who were pursuing an MD/PhD degree or had completed their medicine, pediatrics, or obstetrics/gynecology rotation were significantly more likely to complete the program (data not shown). The 76 students lost to follow-up (45 in Cohort A, 31 in Cohort B) were not included in the analysis. For the 210 students who completed the Web-based program and were included in the analysis, no significant differences were noted between cohorts (see Table 1).
The use patterns of the Web-based teaching cases were not monitored, but the times at which the students completed the pretest and posttest were recorded. The duration between tests varied within and among the schools: HMS had a median of 7.2 days (mean 11.7 days), UVM a median of 5.3 days (mean 5.5 days), URSMD a median of 6.1 days (mean 9.3 days), and BUSM a median of 7.2 days (mean 11.0 days). No statistically significant differences in pretest-to-posttest duration were noted between cohorts (two-tailed t test, p = .98). We found no significant correlation between pretest-to-posttest duration and the degree of score improvement, either in aggregate or within each cohort (two-tailed Pearson correlation, data not shown).
Cross-over in teaching case use between cohorts and failure to complete all of the assigned teaching cases were reported anonymously by students. The students in Cohort A (prostate-specific antigen/prostate cancer) reported on average completing 97% of prostate cancer teaching cases (SD 15%), 93% of prostate-specific antigen cases (SD 24%), 17% of benign prostatic hyperplasia cases (SD 37%), and 15% of erectile dysfunction cases (SD 34%). In contrast, the students in Cohort B (benign prostatic hyperplasia/erectile dysfunction) reported on average completing 32% of prostate cancer teaching cases (SD 45%), 22% of prostate-specific antigen cases (SD 40%), 85% of benign prostatic hyperplasia cases (SD 33%), and 84% of erectile dysfunction cases (SD 36%). A significant dose–response relationship between teaching case use and posttest score (adjusting for differences in pretest scores) was present in each core topic: prostate cancer (r = .46, p < .001, two-tailed partial correlation), prostate-specific antigen (r = .55, p < .001), benign prostatic hyperplasia (r = .54, p < .001), and erectile dysfunction (r = .50, p < .001).
Web-based teaching significantly increased test scores in the four core topics at each of the four medical schools, compared to controls (p < .001, mixed ANOVA; see Figure 3). The score improvements were not uniform across the four core topics: benign prostatic hyperplasia +2.0 with Web-based teaching versus +0.3 without; erectile dysfunction +2.2 versus 0.0; prostate cancer +2.7 versus +0.9; and prostate-specific antigen +2.7 versus +0.6 (seven points possible in each topic; p < .001, mixed ANOVA). We noted no significant differences in score improvement among medical schools or between genders (p = .53 and p = .20, respectively, mixed ANOVA). Paired t-test analysis was performed with each student acting as his or her own control, allowing comparison of scores on the core topics in which the student received Web-based teaching (Web-based teaching topics, 14 points total) with scores on the core topics in which no Web-based teaching was received (control topics, 14 points total). We noted no significant difference in pretest scores between Web-based teaching topics and control topics (7.05 versus 7.11, respectively; p = .73, two-tailed paired t test). Posttest scores were significantly higher in Web-based teaching topics than in control topics (11.78 versus 8.03, respectively; p < .001, two-tailed paired t test, 14 points possible in each set of paired topics), corresponding to a Cohen’s d statistic of 1.52 (95% confidence interval [CI], 1.23–1.80).
Control group analysis: limited learning in the absence of Web-based teaching
We identified only a limited degree of learning in the four core topics in the absence of Web-based teaching, even in HMS students who completed a week-long structured clinical rotation dedicated to urology (see Figure 3). By the end of the week, HMS students’ average scores without Web-based teaching increased 12% for benign prostatic hyperplasia, 6% for erectile dysfunction, 24% for prostate cancer, and 20% for prostate-specific antigen (see Figure 3). The increase in erectile dysfunction scores was not statistically significant (two-tailed paired t test, p = .37).
Durability of learning
Of the 90 HMS students who completed the Web-based program during their one-week urology rotation from July 2003 to May 2004, 51 (57%) volunteered to complete the 28-item test once again in May 2004 (median 4.8 months after completing the Web-based program). Using paired t test analysis with each student as his or her own control, we found that scores on this delayed test were significantly higher for Web-based teaching topics than for control topics (10.08 versus 9.00, respectively; p = .007, two-tailed paired t test), corresponding to a Cohen’s d effect size of 0.55 (95% CI, 0.44–0.65) (see Figure 4).
Exploratory analysis of learning efficiency
Since HMS students complete a structured week-long urology clerkship,21 the scores of these students were used to perform an exploratory analysis of learning efficiency, comparing the learning efficiency of the structured urology rotation alone to that of the urology rotation combined with Web-based teaching. Using a paired t test analysis in which students acted as their own controls, we found that the average learning efficiency of the structured clinical rotation in urology alone (40 hours) was 0.03, significantly less than the 0.10 value for the structured clinical rotation in combination with Web-based teaching (42 hours, p < .001). This represents a greater than three-fold increase in learning efficiency as a result of the adjuvant use of Web-based teaching, corresponding to a Cohen’s d effect size of 1.16 (95% CI, 1.13–1.19). When calculated as a stand-alone resource, Web-based teaching (two hours) had a learning efficiency of 1.54, with an effect size of 24.39 (95% CI 19.79–28.96).
This multi-institutional randomized controlled trial provides Class I evidence that Web-based teaching as an adjunct to clinical experiences can significantly and substantially improve medical students’ learning.22 Data from HMS demonstrate that Web-based teaching also results in durable learning and increased learning efficiency. In the absence of Web-based teaching, students’ learning was quite limited, even when the students completed a one-week clinical rotation dedicated to that specialty field. These results indicate that Web-based teaching can be an effective mechanism by which medical schools can substantially improve the acquisition of medical knowledge by their students.
The combination of several factors makes this study unique in the field of medical education: the randomized controlled design in which students acted as their own matched control, the large number of students from multiple medical schools, the use of validated outcome measures of learning, and the analysis of both short-term and long-term learning.
It is not unexpected that the Web-based program would substantially increase learning (compared to controls) at the medical schools where the control groups did not receive a structured curriculum in urology. Results from these schools do indicate, however, that Web-based teaching is an effective means of delivering educational content to medical students and that its impact generalizes to a range of medical schools and to students at different points in their medical training. On the other hand, it is not clear why Web-based teaching would increase learning substantially at HMS, where controls received a one-week structured clinical clerkship in urology with a focused curriculum addressing the four core topics.21 There is an ongoing debate among researchers in educational technology as to whether aspects of the Web-based technology itself facilitate the learning process or whether Web-based teaching is merely a vehicle for delivering educational content.23 Regardless, it is impossible to escape the conclusion that the structured clinical rotation alone, despite a recent comprehensive curriculum revision,21 is not the optimal means of improving students’ medical knowledge.
The system of Web-based teaching cases used in our study is quite simple: sequential text-based presentation of case materials interspersed with questions and answers for the student. Given its relative simplicity, other medical specialties and other medical schools should be able to implement similar systems of Web-based teaching easily, if they have not already done so. It seems a misdirection of scarce resources, however, for every medical school to develop and maintain its own Web-based teaching materials in every specialty. The opportunity exists for the regional and national societies of each medical specialty to act as centers of expertise that develop, maintain, and distribute Web-based educational tools to medical schools nationwide.
This study had several limitations, including the borderline low participation and completion rates as well as the narrowness of the medical topics addressed. Given the variable rate at which students completed the Web-based program at each school and the use of a participation inducement at three of the schools, a response bias cannot be ruled out.24 The magnitude and uniformity of the impact of the intervention at each site and in each core topic suggest that the influence of any response bias is likely to be small.
The substantial impact of the Web-based teaching, the limited learning in its absence, and the uniformity of improvement demonstrated at multiple institutions suggest that Web-based teaching has the potential to significantly improve the clinical education of medical students nationally. Further work is indicated to confirm these findings using curricular topics from other fields of medicine.
This study was supported by grants from the American Foundation for Urologic Disease / American Urological Association Research Scholar Program, the Pellegrino Foundations, and the Academy at HMS.
The authors are indebted to Drs. Paul Church and Barbara Masser for their content validation of the Web-based teaching materials and test; to Dr. John Halamka, David Bozzi, Ron Rouse, Griffin Weber, and the other members of the HMS Center for Educational Technology for their technical support of the project; and to Drs. Harvey Katz and Ronald Arky for their support of the long-term learning assessment of HMS students.
1 Green K. Campus Computing 2003: The 14th National Survey of Computing and Information Technology in American Higher Education. Encino, CA: Campus Computing, 2003.
2 Letterie GS. Medical education as a science: the quality of evidence for computer-assisted instruction. Am J Obstet Gynecol. 2003;188:849–53.
3 Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: sound educational method or hype? A review of the evaluation literature. Acad Med. 2002;77(10 suppl.):S86–S93.
4 Lewis MJ. Computer-assisted learning for teaching anatomy and physiology in subjects allied to medicine. Med Teach. 2003;25:204–6.
5 Adler MD, Johnson KB. Quantifying the literature of computer-aided instruction in medical education. Acad Med. 2000;75:1025–28.
6 Maiburg BH, Rethans JJ, Schuwirth LW, Mathus-Vliegen LM, van Ree JW. Controlled trial of effect of computer-based nutrition course on knowledge and practice of general practitioner trainees. Am J Clin Nutr. 2003;77:S1019–S24.
7 Elves AW, Ahmed M, Abrams P. Computer-assisted learning; experience at the Bristol Urological Institute in the teaching of urology. Br J Urol. 1997;80(3 suppl.):S59–S62.
8 Lipman AJ, Sade RM, Glotzbach AL, Lancaster CJ, Marshall MF. The incremental value of internet-based instruction as an adjunct to classroom instruction: a prospective randomized study. Acad Med. 2001;76:1060–64.
9 Kemper KJ, Amata-Kynvi A, Sanghavi D, et al. Randomized trial of an internet curriculum on herbs and other dietary supplements for health care professionals. Acad Med. 2002;77:882–89.
10 Bell DS, Fonarow GC, Hays RD, Mangione CM. Self-study from web-based and printed guideline materials. A randomized, controlled trial among resident physicians. Ann Intern Med. 2000;132:938–46.
11 Karnath BM, Das Carlo M, Holden MD. A comparison of faculty-led small group learning in combination with computer-based instruction versus computer-based instruction alone on identifying simulated pulmonary sounds. Teach Learn Med. 2004;16:23–27.
12 McDonough M, Marks IM. Teaching medical students exposure therapy for phobia/panic: randomized, controlled comparison of face-to-face tutorial in small groups vs. solo computer instruction. Med Educ. 2002;36:412–17.
13 Seabra D, Srougi M, Baptista R, Nesrallah LJ, Ortiz V, Sigulem D. Computer aided learning versus standard lecture for undergraduate education in urology. J Urol. 2004;171:1220–22.
14 Cook DA, Dupras DM, Thompson WG, Pankratz VS. Web-based learning in residents’ continuity clinics: a randomized, controlled trial. Acad Med. 2005;80:90–97.
15 Case SM, Swanson DB. Constructing Written Test Questions for the Basic and Clinical Sciences. Philadelphia: National Board of Medical Examiners, 2001.
16 Kerfoot BP, Baker H, Volkan K, et al. Development of validated instrument to measure medical student learning in clinical urology: a step toward evidence based education. J Urol. 2004;172:282–85.
17 Cronbach LJ. Coefficient alpha and the internal structure of tests. Psychometrika. 1951;16:297–334.
18 Hulley SB, Cummings SR, Browner WS, Grady D, Hearst N, Newman TB. Designing Clinical Research. New York: Lippincott Williams & Wilkins, 2001.
19 Maxwell SE, Delaney HD. Designing Experiments and Analyzing Data: A Model Comparison Approach. Belmont, CA: Wadsworth, 1990.
20 Albanese M. Problem-based learning: why curricula are likely to show little effect on knowledge and clinical skills. Med Educ. 2000;34:729–38.
21 Kerfoot BP, Baker H, Volkan K, et al. Development and initial evaluation of a novel urology curriculum for medical students. J Urol. 2004;172:278–81.
22 Harris RP, Helfand M, Woolf SH, et al. Current methods of the US Preventive Services Task Force: a review of the process. Am J Prev Med. 2001;20:21–35.
23 Thompson AD, Simonson MR, Hargrave CP. Educational Technology: A Review of the Research. 2nd ed. Bloomington, IN: Association for Educational Communications and Technology, 1996.
24 O’Neil KM, Penrod SD, Bornstein BH. Web-based research: methodological variables’ effects on dropout and sample characteristics. Behav Res Methods Instrum Comput. 2003;35:217–26.