Huge volumes of medical information are available on the Internet and the World Wide Web, and a dramatically increasing number of Web sites target health care professionals. Determining the quality and utility of so many Web sites poses a significant problem. Several ratings of medical Web sites can be found on the Web itself.1,2 However, these rating systems differ in their methods, and their validity and reproducibility are unknown. The development of simple, standard evaluation criteria oriented toward medical education would be helpful.3 We developed and piloted an instrument to rate how convenient and engaging Web sites are for medical students.
Method. We designed the instrument to measure the following seven items: (1) interactivity, (2) ease of use, (3) speed, (4) appearance, (5) links, (6) replay value, and (7) relevance to clerkship. A five-point rating scale was used to score each item. We defined the items and improved precision by tying specific characteristics of each item to each point on the scale. For example:
Speed (based on 28.8 kbps modem)
5 = very fast (very small or no graphics, no sound, text loads first)
4 = fast (text loads first, scattered graphics)
3 = medium (more graphics-intensive, x-rays, CT scans)
2 = slow (multimedia and large pages with sound and color scans)
1 = very slow (music, animation, 24-bit scans, busy server)
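The structure of the instrument can be sketched in code. The seven item names come from the article; aggregating the 1-to-5 item scores into a simple sum is an assumption for illustration, since the article does not specify how item scores were combined into a site score.

```python
# Sketch of the seven-item rating instrument. Item names are from the
# article; summing the 1-5 item scores into a total is an assumption,
# as the article does not state how scores were aggregated.

ITEMS = (
    "interactivity", "ease of use", "speed", "appearance",
    "links", "replay value", "relevance to clerkship",
)

def total_score(ratings: dict) -> int:
    """Sum of the seven 1-5 item ratings (assumed aggregation)."""
    for item in ITEMS:
        score = ratings[item]
        if not 1 <= score <= 5:
            raise ValueError(f"{item}: score must be 1-5, got {score}")
    return sum(ratings[item] for item in ITEMS)

example = {item: 4 for item in ITEMS}  # hypothetical rater's scores
print(total_score(example))  # 7 items x 4 points = 28
```

Under this assumed scheme a site's total would range from 7 (all items rated 1) to 35 (all items rated 5).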
We first generated a pool of over 200 Web sites related to medical education using medical education LISTSERVs and several Web search engines. A student rater then winnowed out sites that were inaccessible or irrelevant to medical students. Four faculty and student raters independently used the instrument to rate the 100 remaining sites. Finally, we asked 39 medical student volunteers, who were doing their family medicine clerkship, to use the instrument to rate the six sites that the four faculty and student raters had rated highest.
Results. An analysis of variance revealed a significant difference among the ratings of these six sites (p < .0001). Additionally, comparison of the mean rankings of the sites confirmed that one site received significantly stronger endorsement than did the other sites. Correlation analysis of the seven criteria (the instrument items) showed that each criterion correlated with overall site scores (r2 = 0.65 to 0.75) except speed (r2 = 0.16). Speed emerged as the second factor in the factor analysis, explaining 18.1% of the variance in scores (p < .05).
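The one-way analysis of variance reported above can be illustrated with a minimal F-statistic computation. The rating lists below are hypothetical, not the study's actual data; they simply show the mechanics of comparing mean ratings across sites.

```python
# One-way ANOVA F statistic, of the kind used to compare mean ratings
# across sites. The rating lists below are hypothetical illustrations,
# not data from the study.

def anova_f(groups):
    """Return the one-way ANOVA F statistic for a list of groups."""
    k = len(groups)                      # number of groups (sites)
    n = sum(len(g) for g in groups)      # total observations
    grand = sum(sum(g) for g in groups) / n
    means = [sum(g) / len(g) for g in groups]
    # Between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2
                     for g, m in zip(groups, means))
    ss_within = sum((x - m) ** 2
                    for g, m in zip(groups, means) for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

sites = [  # hypothetical mean ratings from four raters per site
    [4.5, 4.7, 4.6, 4.8],
    [3.1, 3.0, 3.3, 2.9],
    [3.8, 3.6, 3.9, 3.7],
]
print(anova_f(sites))  # a large F indicates sites differ in mean rating
```

A large F statistic (compared against the F distribution with k-1 and n-k degrees of freedom) corresponds to the small p value the study reports.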
Of the rating items, speed stood out as a strong predictor of acceptance by the medical students, while the students grouped the other items together as a secondary consideration. Consistent with this, the top-rated Web site (www.medscape.com/home/ExamRoom/Medscape-ExamRoom.html) was also perceived as being significantly faster than the other Web sites.
Discussion. This pilot study neither comprehensively evaluated all Web sites nor directly evaluated the learning done by students at any given Web site. Still, students using our instrument identified a single preferred site. Teachers might visit this site for ideas about how to structure a Web site that appeals to medical students. Future studies with larger groups and more criteria will help develop improved rating systems for medical education Web sites.
1. The Six Senses Review: a healthcare and medical Web site review program [online]. Available: http://www.sixsenses.com/FAQ.html
2. Healthcare Computing Publications [online]. Available: http://www.healthcarecomputing.com
3. Friedman RB. Top ten reasons the World Wide Web may fail to change medical education. Acad Med. 1996;71:979–81.