Web-based (WB) learning is used with increasing frequency in medical education. Although this teaching format offers many potential advantages,1–4 evidence supporting its use is limited. Many reports have described WB courses, but few controlled studies have been published, and only a handful of studies have compared WB learning with another teaching method. Data are particularly sparse in postgraduate medical training, for which a comprehensive search of the literature revealed only two controlled studies.5,6
Knowledge is an important outcome when assessing any educational intervention. Most controlled trials assessing knowledge either compare WB learning with no intervention,5,7–12 a comparison of little practical value, or are limited by biased controls, variation in interventions or assessments, or inadequately described methods.13–18 Two studies6,18 using paid volunteers found no difference in test scores between WB learning and other interventions, but how these findings apply to usual teaching settings is unknown. Another study19 demonstrated higher test scores for first-year medical students using WB learning than for students using traditional teaching methods. Although these studies suggest that WB learning is likely as good as, and possibly better than, other methods for improving knowledge, its effect on learning during clinical training remains unknown.
Learners’ satisfaction is another important outcome. Satisfaction with WB learning is favorable in studies without comparison to other formats.9,10,12,15,16,18,20–23 When compared with alternate formats, learners’ satisfaction ratings were higher for the WB format in one study6 and not different in three.7,8,18 These data suggest that learners approve of WB learning, but their preference after exposure to both WB and traditional teaching has been reported in only one study,19 in which medical students preferred the WB format.
We sought to determine whether internal medicine residents prefer learning from WB modules or printed practice guidelines and to compare the effect of the two teaching formats on the residents’ knowledge. We also sought to determine how often residents continue using WB modules as resources after completing the module. To do this, we compared WB learning modules with printed clinical practice guidelines in a crossover trial in which each participant used both formats to study core topics in ambulatory medicine.
Setting and sample
All categorical internal medicine residents at the Mayo School of Graduate Medical Education spend one half-day each week at one of eight ambulatory “continuity clinic” sites, managing a panel of general internal medicine patients and learning principles of ambulatory internal medicine. Organized instruction, with residents reading an evidence-based practice guideline and answering case-based questions to reinforce learning, was introduced across all clinic sites during the 2001–02 academic year. During the 2002–03 academic year we compared this instructional method with a series of WB modules.
Our Institutional Review Board approved this study and consent was obtained from all participants. All 145 categorical internal medicine residents were invited to participate.
Interventions and randomization
After excluding topics covered in other curricula, we selected as topics for the course the diagnoses seen most frequently in our community-based medical practice: depression, nicotine dependence, diabetes mellitus, and asthma. Study “units” for each topic consisted of study materials (WB or paper) and several case-based questions (identical for both formats) that residents answered while studying.
Evidence-based practice guidelines from the Institute for Clinical Systems Improvement 〈www.icsi.org〉 for each topic constituted the control (paper) intervention.
Participants completed two units using the WB format and two units using the paper format, with the sequence determined by a computer-generated randomization scheme. Participants were randomized by clinic site (i.e., all residents at one site received the same intervention at the same time) to limit the sharing of passwords between study groups.
A course home page provided access to units in both formats along with links to WB resources. Units were released every six to eight weeks. Residents randomized to the WB format for that unit were provided a password, while residents randomized to the paper format printed the guidelines from the home page. Residents completed units on their own schedule. Masking the participants’ identity was not possible in this study.
Instruments and outcomes
We developed test questions addressing each unit's objectives. Questions were based on patient scenarios viewed from the perspective of a general internist. More than 60% of the questions required application of knowledge24 to answer. Questions were initially developed by the authors, reviewed by an expert in the field, and piloted on at least ten internal medicine faculty who reviewed questions’ structure and relevance to general internal medicine. Questions were modified or deleted as needed. Prior to beginning a unit, each resident completed a preintervention test using WebCT (version 3.1, WebCT, Inc., Lynnfield, Massachusetts), which scored all tests automatically. At the end of the academic year residents completed a cumulative test (“final test”) composed of the same questions, again using WebCT. Residents received test scores, answers, and feedback only following the final test.
One primary outcome, overall format preference, was assessed on an end-of-course questionnaire using a five-point scale ranging from 1 = “Strongly prefer paper” to 5 = “Strongly prefer WB.” The other primary outcome, knowledge gained, was determined by change in test score from preintervention test to final test.
The questionnaire also contained items to evaluate the WB format regarding utility, continued use, and technical difficulties; additional comparisons of the two formats, including time spent; and unstructured comments on the course. We recorded Web site hits at two-week intervals.
We analyzed the primary outcome of format preference using the Wilcoxon signed rank test, testing the null hypothesis that there was no preference. We used the Wilcoxon rank sum test or Kruskal-Wallis test for comparisons among two or more groups.
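The preference analysis described above can be illustrated with a short sketch: a Wilcoxon signed rank test of five-point ratings against the neutral midpoint of 3. The ratings below are hypothetical, and the use of `scipy` is an assumption for illustration, not the software used in the study.

```python
import numpy as np
from scipy import stats

# Hypothetical 1-5 preference ratings (1 = strongly prefer paper,
# 5 = strongly prefer WB); 3 is the neutral midpoint.
ratings = np.array([5, 4, 5, 3, 2, 5, 4, 4, 1, 5, 4, 5])

# Wilcoxon signed rank test of the null hypothesis "median rating = 3";
# zero differences (neutral responses) are dropped by default.
stat, p = stats.wilcoxon(ratings - 3)
print(f"W = {stat}, p = {p:.3f}")
```

A rank-based test is appropriate here because the ratings are ordinal and need not be normally distributed.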
We analyzed the other primary outcome, knowledge gained, by comparing the change in test scores over time and between the two formats using a mixed-effects analysis of variance (ANOVA) accounting for repeated measurements on each participant and for differences among units. For the comparison between formats we planned adjustments for clinic site, postgraduate year, group assignment, and gender. We used Spearman's rho to assess correlation between format preference and change in test scores. For the final test, we calculated Cronbach's alpha.
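Cronbach's alpha, mentioned above, can be computed directly from an item-score matrix. A minimal sketch follows; the function name and the score data are hypothetical, not taken from the study.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # sample variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of respondents' totals
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical scores: 4 respondents answering 3 items
scores = np.array([[1, 1, 2],
                   [2, 1, 2],
                   [3, 3, 4],
                   [4, 4, 4]])
print(round(cronbach_alpha(scores), 3))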
We used Pearson's chi-square test or Fisher exact test to compare categorical demographic information among groups. Other survey outcomes were analyzed using the Wilcoxon signed-rank or rank-sum test, as appropriate. We report Web site hits per week.
Three authors (DAC, WGT, DMD) independently reviewed the unstructured comments and counted the frequency of discrete ideas, which they grouped into themes by consensus.
All analyses were performed using intention-to-treat and a two-sided alpha level of .05. The expected sample size of 80 participants was to provide 90% power to detect a difference of 0.5 points in preference and a 6% change in test score. All analyses were performed using JMP (version 4.04, SAS Institute Inc., Cary, North Carolina) except the mixed-effects ANOVA and Fisher exact test, which were performed using SAS (version 8.2, SAS Institute).
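For context, the normal-approximation sample-size formula for comparing two group means can be sketched as below. This is a simplified textbook formula, not necessarily the calculation the authors performed; the study's crossover design would call for a paired variant with the appropriate within-subject correlation.

```python
from scipy.stats import norm

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.90) -> float:
    """Approximate n per group for a two-sided, two-sample comparison of means,
    where effect_size is the standardized difference (difference / SD)."""
    z_alpha = norm.ppf(1 - alpha / 2)  # critical value for two-sided alpha
    z_beta = norm.ppf(power)           # critical value for desired power
    return 2 * ((z_alpha + z_beta) / effect_size) ** 2

# A 6-point score difference with an SD of roughly 11 gives an effect
# size near 0.55 (SD of 11 is an illustrative assumption).
print(round(n_per_group(6 / 11)))
```

Larger assumed variability or a smaller meaningful difference both shrink the effect size and inflate the required sample.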
A total of 109 residents consented to participate, and of these, 97 (89%) completed at least one unit and 75 (69%) completed the final test (see Table 1). Participants’ demographics are summarized in Table 2. Completion of all tests and of the final questionnaire did not differ significantly by gender (p > .15), but it did differ by postgraduate year (p < .01), with residents earlier in training responding at a higher rate.
Format preference and satisfaction
Learners strongly preferred the WB format (mean ± SD = 4.1 ± 1.2, p < .001), with 57 of 73 (78% [95% CI, 67–86%]) preferring or strongly preferring this format, four (5% [95% CI, 2–13%]) neutral, and only 12 (16% [95% CI, 10–27%]) preferring or strongly preferring the paper format. Subgroup comparisons revealed no statistically significant difference in preference by clinic site, postgraduate year, prior experience with WB learning, or comfort using the Internet. There was, however, a significant difference by gender (p = .029). Although both men and women preferred the WB format, men indicated a strong preference for the WB format (mean ± SD = 4.3 ± 1.1, p < .001) while women indicated a lower level of preference (mean ± SD = 3.7 ± 1.4, p = .050).
Seventy-two percent of residents (95% CI, 61–81%) agreed or strongly agreed that the WB modules should continue as part of the continuity clinic curriculum, compared with 20% (12–30%) for the paper-based format (mean ± SD = 3.8 ± 1.2 and 2.4 ± 1.2, respectively; p < .001).
Cronbach's alpha for the test questions was .689, suggesting acceptable reliability. Test scores improved from a mean of 67.7 ± 11.1 to 75.0 ± 11.4 (p < .001) for the WB format, and from a mean of 66.0 ± 10.7 to 73.3 ± 12.3 (p < .001) for the paper format. The difference in the change in scores between the two formats was not statistically significant either before (p = .717) or after (p = .113) adjusting for differences between units. Simultaneously adjusting for differences between units, clinic site, group assignment, postgraduate year, and gender still did not demonstrate a significant difference (p = .08). There were weak, nonsignificant correlations between format preference and change in overall test scores (r = 0.11, p = .452) and test scores corresponding to the WB (r = −0.006, p = .966) and paper format (r = 0.135, p = .344).
Use of WB modules and technical problems
Self-reported time required to complete a unit was less for the WB modules (mean ± SD = 47 ± 26 minutes) than for the paper format (mean ± SD = 59 ± 35 minutes, p = .024).
Thirty-nine residents (54% [95% CI, 43–65%]) reported returning to use the WB modules after completing the module, although only nine (13% [95% CI, 7–22%]) returned more than three times. Thirty residents (42% [95% CI, 31–53%]) used hyperlinks to access full-text journal articles. Additional survey results are presented in Table 3.
Sixty-one residents (84% [95% CI, 75–91%]) experienced technical problems during the course (although only 38% [95% CI, 27–49%] felt they were “significant”), and this influenced both format preference and use of the WB modules. Although residents who experienced significant technical problems at the end of the study still preferred the WB modules, preference (mean ± SD = 3.7 ± 1.2) was not as strong as it was for those without problems (mean ± SD = 4.2 ± 1.3, p = .036). Passwords were the most common technical problem, with 44 residents (61% [95% CI, 50–72%]) reporting difficulty and 51 (71% [95% CI, 59–80%]) agreeing that they would have used the modules more frequently if there had not been a password. Table 3 contains additional details of technical issues.
Weekly hits to individual modules varied between modules and according to the phase of the study, as summarized in Table 4.
Forty-eight residents provided 85 separate comments. Themes and representative comments are listed in Table 5.
In a crossover trial comparing WB modules with paper guidelines as learning tools in an internal medicine residency continuity clinic, we found that residents strongly preferred the WB format and that the WB format required less time to complete, while change in test scores was similar between formats. We also found that perceived technical problems with the course affected the learners’ preferences and that passwords were a significant barrier to course completion and ongoing use of the Web site. These results have significant relevance to postgraduate medical training, where we see an increased need for effective teaching and assessment,25 and to ambulatory medical education, where logistic difficulties and demands for increasing faculty productivity challenge traditional teaching efforts.26
Our residents strongly preferred the WB format and felt that it was more efficient, easier to navigate, and more convenient to use than the paper format. These findings corroborate those of a previous study19 in which 78% of medical students preferred a WB format to a paper-based intervention for learning physical diagnosis skills. Another study6 found that internal medicine and family practice residents gave a WB module higher satisfaction ratings than they gave to the corresponding paper module. Two studies with medical students7,8 found no difference in satisfaction ratings between WB and alternative formats, but in both cases the ratings evaluated the entire course rather than the WB format itself.
We noted a difference in format preference by residents’ gender. Although women preferred the WB format, the degree of preference was less decided than it was for men. This finding has not been previously reported, although differences in response to WB learning by gender have been described in association with differences in learning style.27,28 Whether this finding of gender difference in preference of WB learning persists, and its relationship to learning styles, could be the topic of further research.
We did not find a difference in knowledge gained between the two formats, as assessed by change in score from preintervention test to final test. This is consistent with experimental studies of residents6 and medical students18 that found no significant difference between WB and paper formats. However, in the latter article,18 a second study showed that medical students studying WB cases had higher test scores than those using paper readings. Likewise, a crossover study19 found that medical students scored higher on test questions corresponding to multimedia-rich WB modules than to questions corresponding to the paper format. This suggests that WB learning is at least as effective as other instructional methods in facilitating learning and, under certain circumstances, may be more effective.
Residents spent less time completing units using the WB modules than they did using the paper format. Although we have no independent means of verifying the accuracy of these self-reported estimates, they do indicate that residents believed the WB format took less time than the paper-based format. Few other studies have discussed time requirements. One study6 measured time spent completing a module and found that residents using the WB format spent less time than those using paper without affecting test scores. Self-reported time spent completing a WB module was less compared to paper format in one study21 and more in another.19 The variability in these findings may be related to the content and design of the modules rather than an intrinsic property of the format itself.
Comments from several residents suggested that time be set aside from their clinic duties to allow them to complete units—WB or otherwise. As one resident wrote, “I think that online learning is great, but there needs to be more time set aside to do them.” This statement highlights a critical issue currently facing postgraduate medical education, as work-hour restrictions conflict with requirements from the Accreditation Council for Graduate Medical Education (ACGME) for development and assessment of competencies.25 The WB method we have described facilitates learning and documents achievement in at least three ACGME competencies: Patient Care (use of information technology), Practice-Based Learning and Improvement (use of information technology and appraisal and assimilation of scientific studies), and Medical Knowledge. We expect that teaching methods that maximize learning in a time-constrained system will ultimately lead to more opportunities to learn.
Most residents in our study experienced at least minor technical problems with the WB modules. The most frequent problem involved remembering or using passwords, and the next two most frequent problems—logging on to WebCT and logging on to modules—were likely related to the password problem. Most residents reported they would have used the site more often if there had not been a password, and many comments supported this view. The residents’ perceptions of technical problems had a significant effect on course preference as well. Although many studies have described technical problems, we are not aware of any reports detailing difficulties with passwords. We have since improved the module design, and expect these problems to decrease.
We were surprised that residents did not return to use the site with greater frequency. Although most residents returned to the site at least once, few used it more than three times. This contrasts with our own findings in a similar WB course,12 in which 89% of the residents returned to the site and 74% reported using it on at least a weekly basis. It is notable that hits to the hyperlipidemia page, which was not password protected, were significantly higher than were hits to other pages. Once passwords were removed, the use of some of the sites increased, supporting the hypothesis that passwords impeded the residents’ use of the site. Another possible explanation is that residents did not know how to locate the site, a problem previously described.7 The residents’ comments suggested a third explanation—that other WB resources, such as UpToDate (to which all of our residents have access), may be considered more useful than are locally developed resources.
Fewer than half of the residents used hyperlinks to access additional materials. This also contrasts with our previous experience, where 74% of the residents used hyperlinks to access WB full-text journal articles, but is consistent with another study,6 in which residents used hyperlinks less frequently than expected.
Our study has limitations. We did not assess patient-related outcomes.29 Also, because the two interventions did not consist of identical content and structure, the comparisons between formats could be confounded by differences not related to the WB medium itself.30 For example, our residents had previously noted that the practice guidelines were long, not organized for use while caring for patients, and often failed to address practical clinical questions.31 We nonetheless felt it appropriate to compare the WB intervention with the existing “standard” teaching method despite the latter's weaknesses. We mitigated differences by providing the same study questions to all participants and testing only on content common to both formats.
Validity is challenged by the high dropout rate, which varied by postgraduate year. However, we enrolled more than 75% of the residents in our program and have complete data on 33% of eligible participants, which compares favorably with other published studies. For example, one highly cited randomized study6 enrolled 30% of eligible residents and had complete data on only 70% of these (21% of eligible participants). Although we failed to achieve our target sample size, we still had 85% power to detect the prespecified meaningful difference in test scores.
The generalizability of our findings is limited because the study involved a single training program, and the composition of our program may not be representative of other programs. However, these results are more readily generalized to postgraduate medical trainees (and internal medicine residents in particular) than are most existing studies, which involved medical students with very different learning and scheduling needs. Indeed, our study responds to a recent plea for more research in postgraduate medical education.32
In conclusion, we found no difference between WB and paper-based formats in test score change, but internal medicine residents strongly preferred learning with WB modules and spent less time doing so. We also found that passwords appear to impede the residents’ ongoing use of WB modules. This study, performed in an ambulatory care setting simultaneous with patient duties, shows that WB learning is effective, well accepted, and efficient. Further research should focus on specific aspects of WB instruction that will enhance its power as a learning tool and better define its role in specific settings.
1 Cohen JJ. Educating physicians in cyberspace. Acad Med. 1995;70:698.
2 Chueh H, Barnett GO. “Just-in-time” clinical information. Acad Med. 1997;72:512–7.
3 MacKenzie JD, Greenes RA. The World Wide Web: redefining medical education. JAMA. 1997;278:1785–6.
4 Zucker S, White JA, Fabri PJ, Khonsari LS. Instructional intranets in graduate medical education. Acad Med. 1998;73:1072–5.
5 Balcezak TJ, Lynch P, Jackson S, Richter J, Jaffe CC, Cadman EC. A web-based risk management and medical-legal curriculum for graduate medical education. J Biocommun. 1998;25(4):2–5.
6 Bell DS, Fonarow GC, Hays RD, Mangione CM. Self-study from web-based and printed guideline materials: a randomized, controlled trial among resident physicians. Ann Intern Med. 2000;132:938–46.
7 Baumlin KM, Bessette MJ, Lewis C, Richardson LD. EMCyberSchool: an evaluation of computer-assisted instruction on the Internet. Acad Emerg Med. 2000;7:959–62.
8 Lipman AJ, Sade RM, Glotzbach AL, Lancaster CJ, Marshall MF. The incremental value of Internet-based instruction as an adjunct to classroom instruction: A prospective randomized study. Acad Med. 2001;76:1060–4.
9 Curran VR, Hoekman T, Gulliver W, Landells I, Hatcher L. Web-based continuing medical education. (II): Evaluation study of computer-mediated continuing medical education. J Contin Educ Health Prof. 2000;20:106–19.
10 Harris JM, Salasche SJ, Harris RB. Can Internet-based continuing medical education improve physicians’ skin cancer knowledge and skills? J Gen Intern Med. 2001;16:50–6.
11 Kemper KJ, Amata-Kynvi A, Sanghavi D, et al. Randomized trial of an Internet curriculum on herbs and other dietary supplements for health care professionals. Acad Med. 2002;77:882–9.
12 Cook DA, Dupras DM. Teaching on the Web: automated online instruction and assessment of residents in an acute care clinic. Med Teach. [In press.]
13 Fulkerson PK, Miller A, Lizer S. Using WWW-based instruction modules and e-mail for a remote neurology course. Acad Med. 1999;74:576–7.
14 Schaad DC, Walker EA, Wolf FM, Brock DM, Thielke SM, Oberg L. Evaluating the serial migration of an existing required course to the World Wide Web. Acad Med. 1999;74(10 suppl):S84–6.
15 Fleetwood J, Vaught W, Feldman D, Gracely E, Kassutto Z, Novack D. MedEthEx online: a computer-based learning program in medical ethics and communications skills. Teach Learn Med. 2000;12:96–104.
16 Carr M, Hewitt J, Scardamalia M, Reznick R. Internet-based otolaryngology case discussions for medical students. J Otolaryngol. 2002;31:197–201.
17 Hallgren RC, Parkhurst PE, Monson CL, Crewe NM. An interactive, Web-based tool for learning anatomic landmarks. Acad Med. 2002;77:263–5.
18 Leong SL, Baldwin CD, Adelman AM. Integrating Web-based computer cases into a required clerkship: development and evaluation. Acad Med. 2003;78:295–301.
19 Grundman J, Wigton R, Nickol D. A controlled trial of an interactive, Web-based virtual reality program for teaching physical diagnosis skills to medical students. Acad Med. 2000;75(10 suppl):S47–9.
20 Mehta MP, Sinha P, Kanwar K, Inman A, Albanese M, Fahl W. Evaluation of Internet-based oncologic teaching for medical students. J Cancer Educ. 1998;13:197–202.
21 Horsch A, Balbach T, Melnitzki S, Knauth J. Learning tumor diagnostics and medical image processing via the WWW—the case-based radiological textbook ODITEB. Int J Med Inf. 2000;59:39–50.
22 Swagerty D Jr, Studenski S, Laird R, Rigler S. A case-oriented web-based curriculum in geriatrics for third-year medical students. J Am Geriatr Soc. 2000;48:1507–12.
23 Harris JM Jr, Kutob RM, Surprenant ZJ, Maiuro RD, Delate TA. Can Internet-based education improve physician confidence in dealing with domestic violence? Fam Med. 2002;34:287–92.
24 Case SM, Swanson DB. Constructing Written Test Questions for the Basic and Clinical Sciences. 3rd ed. Philadelphia: National Board of Medical Examiners, 2001.
25 The ACGME Outcome Project 〈http://www.acgme.org/outcome〉. Accessed 15 October 2003. Copyright 2000, Accreditation Council for Graduate Medical Education.
26 Bowen JL, Irby DM. Assessing quality and costs of education in the ambulatory setting: a review of the literature. Acad Med. 2002;77:621–80.
27 Ford N, Chen SY. Matching/Mismatching Revisited: An Empirical Study of Learning and Teaching Styles. Br J Educ Technol. 2001;32:5–22.
28 Garland DK. Learning style characteristics of the online student: a study of learning styles, learner engagement and gender [dissertation]. Columbia, MO: University of Missouri—Columbia, 2002.
29 Prystowsky JB, Bordage G. An outcomes research perspective on medical education: the predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–6.
30 Friedman C. The research we should be doing. Acad Med. 1994;69:455–7.
31 Cook DA, Dupras DM, Thompson WG. An online core curriculum in primary care medicine for internal medicine residents. Med Educ. 2003;37:1043.
32 Whitcomb ME. More on medical education reform. Acad Med. 2004;79:1–2.
© 2005 Association of American Medical Colleges