Academic Medicine: October 2006 - Volume 81 - Issue 10
doi: 10.1097/01.ACM.0000236509.32699.f5
Practice-Based Learning

Evaluation of Learning Outcomes in Web-Based Continuing Medical Education

Curran, Vernon; Lockyer, Jocelyn; Sargeant, Joan; Fleet, Lisa

Section Editor(s): Lowitt, Nancy MD; Silver, Ivan MD

Author Information

Correspondence: Vernon Curran, PhD, Academic Research and Development, Centre for Collaborative Health Professional Education, Faculty of Medicine, Memorial University of Newfoundland, St. John’s, NF, A1B 3V6 Canada; e-mail: vcurran@mun.ca.

Abstract

Background: There has been significant growth in physicians’ use of Web-based continuing medical education (CME). A number of evaluation studies and metareviews have examined the effectiveness of Web-based CME to varying degrees. One of the main limitations of this literature has been the lack of systematic evaluation across different clinical subject matter areas using standardized Web-based CME learning formats.

Method: A one-group pretest–posttest design was used to evaluate knowledge and self-reported confidence change across multiple Web-based courses that used a standardized instructional format but comprised distinct clinical subject matter. Participants also completed a participant satisfaction survey and a retrospective self-reported skill/ability change survey.

Results: The majority of courses evaluated demonstrated significant pre to post knowledge and confidence effect size change, as well as significant self-reported retrospective practice change.

Conclusions: A Web-based CME instructional format comprising multimedia-enhanced learning tutorials supplemented by asynchronous computer-mediated conferencing for case-based discussions was found to be effective in enhancing knowledge, confidence, and self-reported practice change outcomes across a variety of clinical subject matter areas.

There has been expansive growth in Internet use among physicians. Casebeer et al.1 suggest that the main importance of the Internet to physicians lies in professional development and information seeking in order to provide better patient care. Upwards of 85% of physicians are believed to use the Web,2 and between 1998 and 2003 the number of Web-based continuing medical education (CME) activities is estimated to have increased by over 700%.2 Some estimates suggest that 31% to 64% of physicians participate in online CME offerings.2,3

The main benefits of Web-based CME are reported to include improved access, convenience and flexibility, reduced travel expenses and time, adaptability to learning styles, multimedia formats, and the ability to create interactive clinical cases.2–4 Curran and Fleet4 conducted a metareview of evaluation studies of Web-based CME and reported that the majority of studies were based on participant satisfaction data. These studies have suggested that physicians are generally satisfied with Web-based CME and, in some instances, more satisfied with Web-based CME than with traditional formats. Wutoh et al.3 also reviewed the evaluation literature and concluded that Web-based CME is as effective in imparting knowledge as traditional CME formats. In a more recent study, Fordis et al.2 found that Web-based CME can produce objectively measured changes in behavior as well as sustained knowledge gains that are comparable or superior to those resulting from traditional CME formats.

According to Moore, “outcomes” have become very important in health care, and as a result it has become increasingly important to assess and document the effectiveness of CME activities. An outcome is defined as “the result of an event or the consequences of an action” and “in CME, an outcome would be defined as the result or consequence of a CME event or events.”5 Davis and colleagues6 have conducted extensive work investigating the outcomes of traditional CME formats. Their work has included an examination of randomized controlled trials of CME and suggests that when CME was based on accurate needs assessment information and incorporated multiple learning activities, learning outcomes were generally more likely to be positive and significant.

The need has been identified for greater analysis of current online CME programs to determine whether existing versions are effective.3 The purpose of this paper is to describe the results of a review of the learning outcomes of Web-based CME offered across diverse clinical subject areas using a standardized asynchronous Web-based learning format.

Method

MDcme is a not-for-profit Web portal developed by a consortium of Canadian university-based CME units. MDcme serves as the home of a number of MainPro-M1 and MainPro-C accredited Web-based CME courses. The MDcme courses evaluated in this study were developed using the WebCT learning management system (WebCT, Inc., Lynnfield, MA) and include multimedia-enhanced learning tutorials, self-assessment activities and quizzes, and links to online resources. Interaction among participants is fostered and facilitated through the asynchronous computer-mediated conferencing component of WebCT. As part of their online learning experience, participants also have the opportunity to interact with a facilitator and other physicians in case-based discussions using asynchronous computer-mediated conferencing.

A summative evaluation of 14 MDcme courses was conducted to review overall learning outcomes. The purposes of this evaluation review were to: (1) identify the characteristics of physicians who participate in Web-based CME activities; (2) evaluate physicians’ satisfaction with Web-based CME participation; and (3) evaluate outcomes of participation in Web-based CME with respect to participants’ knowledge, confidence, and self-reported skill/ability changes. Ethical approval was received from the Human Investigations Committee (HIC), Memorial University of Newfoundland.

Participant satisfaction survey

Participants completed a participant satisfaction survey upon course completion. The satisfaction survey included 18 evaluative statements covering the content of the course, its design (navigability/process), and satisfaction with online discussions and interaction. Respondents rated their level of agreement with each evaluative statement on a Likert scale from 1 = “strongly disagree” to 5 = “strongly agree.” Overall impressions of the course offering (e.g., strengths of the module, barriers to participation, application of learning) and information on participant characteristics were also collected.

Pre and post knowledge and confidence assessment

Pre and post knowledge tests and confidence surveys were constructed for each MDcme course. The pre and post knowledge tests comprised 5 identical one-best-answer (A-type) multiple-choice items. The pre and post confidence surveys included 5 identical confidence statements that asked participants to rate their confidence in specific clinical practice tasks related to the subject matter of the course. The confidence items were rated on a Likert scale from 1 = “little or no ability/strongly disagree” to 5 = “very high ability/strongly agree.” Participants completed the knowledge and confidence instruments immediately before beginning and immediately after finishing a course.

Skill/ability change survey

A retrospective skill/ability change survey was constructed for each course in order to evaluate participants’ self-reported practice change as a result of participation in a Web-based CME course. Retrospective survey reporting is advocated as an alternative method for educational evaluation,7,8 particularly distance education evaluation, and the validity of self-reporting is supported by the research of Curry and Purkis.9 Each survey comprised 5 practice-based statements linked to the learning objectives of the respective course. Participants were asked to indicate their skill level “before participating in CME” and “after participating in CME” on a Likert scale from 1 = “to no extent” to 5 = “to a large extent.” Surveys were distributed approximately 6–8 weeks after course completion.

Learning outcomes resulting from the pre and post knowledge and confidence assessments, as well as the skill/ability change surveys, were analyzed using an effect size calculation. Spencer has described the effect size (ES) as a “measure of the educational importance of any performance changes produced.”10 Cohen’s11 d statistic was used to calculate the effect size: the average difference between pre and post scores divided by the standard deviation. An ES of 1.0 means that the innovation has increased the performance of the group by an amount equal to one standard deviation unit; this would take an average student from the middle of the group to the position occupied by the top 20% of that group.10
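To illustrate the calculation, the following minimal sketch shows a paired samples t-test and a Cohen’s d computation for one hypothetical course. This is not the study’s analysis code: the scores are invented, and the sketch assumes the standard deviation of the paired pre–post differences as the denominator, since the text does not specify which standard deviation was used.

```python
import numpy as np
from scipy import stats

# Hypothetical pretest and posttest knowledge scores (out of 5) for one course
pre = np.array([2, 3, 3, 4, 2, 3, 4, 3])
post = np.array([4, 5, 4, 5, 4, 5, 5, 4])

# Paired samples t-test for pre-to-post change
t_stat, p_value = stats.ttest_rel(post, pre)

# Cohen's d for a paired design: mean pre-to-post difference divided by
# the standard deviation of the differences (an assumed convention here)
diff = post - pre
d = diff.mean() / diff.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}, d = {d:.2f}")
```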

Results

Between September 2003 and June 2005, there were 40 offerings of 14 different courses through the MDcme portal. These courses covered diverse subjects across a variety of clinical disciplines (e.g., psychiatry, neurology, oncology, orthopedics, and emergency medicine). The overall results suggest a high level of participant satisfaction across MDcme courses. Overall, 93.5% of participants agreed or strongly agreed that the content of the MDcme courses was applicable to their practice. A total of 91.9% agreed or strongly agreed that they had gained new knowledge from the online course, and 86.6% agreed or strongly agreed that the online courses they had completed were easy to use. On the whole, 89.0% of respondents agreed or strongly agreed that they would participate in another CME course offering of this type, and 85.5% agreed or strongly agreed that they would recommend the course they had completed to others.

Nearly equal numbers of male (51.5%) and female (48.5%) participants completed MDcme courses during the study period. The majority (79.1%) were family physicians. In all, 25.4% of respondents had been in practice for less than five years, whereas 35.3% had been in practice for more than 21 years. The majority of respondents (55.7%) reported practicing in a rural area (i.e., population < 10,000).

Table 1 summarizes the overall pre to post knowledge and confidence change scores for each MDcme course that was MainPro-M1 accredited (N = 10 courses). A paired samples t-test and an effect size calculation were conducted for each course except those for which N < 5. There was a significant pre to post knowledge increase for 7 of 10 courses at the p < .05 level, and the effect sizes ranged from 0.2 to 5.8. Overall pre and post knowledge mean scores and an overall effect size were also calculated: the overall mean pre and post knowledge assessment scores were 3.1 and 4.4, respectively, and the overall effect size was 2.2. A paired samples t-test and an effect size calculation were also conducted for the pre to post confidence change scores of each course. The pre to post confidence scores for 8 of 10 courses showed a significant difference (p < .05), and the effect sizes ranged from 1.1 to 5.4. The overall mean pre and post confidence scores were 14.6 and 20.8, respectively, and the overall effect size was 2.7.

Table 2 summarizes the overall retrospective pre to post self-reported skill/ability change results for each MDcme course that was either MainPro-M1 or MainPro-C accredited (N = 14 courses). A paired samples t-test and an effect size calculation were conducted for each course. The pre to post practice change scores for 11 of 14 courses showed a significant difference (p < .05), and the effect sizes ranged from 1.1 to 5.2.

Discussion

A comparison of the characteristics of satisfaction survey respondents with those of self-identifying MDcme course registrants suggests that the respondents were representative of the registrants (52.8% male and 47.2% female) on the basis of gender. On the basis of practice location, however, rural physicians were overrepresented among survey respondents: rural physicians made up 28.8% of registrants but 55.7% of survey respondents. A further analysis of the satisfaction survey responses using Kruskal-Wallis tests on the basis of practice location did not reveal a significant difference between rural and urban physicians’ satisfaction with items pertaining to the content or design of the courses. A large proportion of respondents also reported practice experience of 21 years or greater.
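For illustration, a comparison of this kind can be sketched as follows. The ratings are hypothetical and this is not the study’s analysis code; it simply shows a Kruskal-Wallis test applied to one Likert-scaled satisfaction item across two practice-location groups.

```python
from scipy import stats

# Hypothetical 1-5 Likert ratings on one satisfaction item,
# grouped by practice location
rural = [5, 4, 4, 5, 3, 4, 5]
urban = [4, 4, 5, 3, 4, 5]

# Kruskal-Wallis test (nonparametric, suited to ordinal Likert data)
h_stat, p_value = stats.kruskal(rural, urban)
print(f"H = {h_stat:.2f}, p = {p_value:.3f}")
```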

The majority of respondents were generally satisfied with the relevance of the Web-based course content to their practice settings and the overall design and presentation of the courses in a Web-based learning format. Respondents found the Web-based courses to be a convenient and flexible means for accessing and participating in CME activities. Participants commented that Web-based courses provided the opportunity to undertake CME at their own pace and on their own schedule.

The results from this evaluation of asynchronous Web-based CME indicate that large effect sizes for pre to post knowledge and confidence improvement, as well as for self-reported skill/ability change, were observed for the majority of courses examined. Effect size calculations were not conducted for courses with N < 5. These evaluation results were collected for Web-based CME courses that were offered over an extended period of time (2 to 4 weeks) and involved instructional activities based upon multimedia learning tutorials supplemented by asynchronous, computer-mediated conferencing discussions. The subject matter of each Web-based CME course also differed.

The main limitation of this study relates to the learning outcome results collected. Completion of the pre and post assessment instruments was voluntary; therefore, respondents to these assessments, and the results collected, may not be representative of all participants completing these courses. The skill/ability change survey was also based on self-reporting; therefore, the results may be biased. Effect size results must also be interpreted with caution, as the d statistic is normally used to compare between-group effects, and in such cases a d > 1.0 is considered large. In the current study we examined within-group pre to post results, and the magnitude of the effect size findings should therefore be interpreted within that context.

The results from this evaluation study do suggest, however, that an asynchronous Web-based CME format is effective in influencing knowledge and confidence gain across a variety of clinical subject matter. This finding may support the suggestion by Fordis et al.2 that Web-based CME completed over an extended period of time may be of greater learning benefit to participants because it fosters additional opportunities for reinforcement of learning. Further exploration of this feature is a topic for future research. This finding is also particularly important because the learning outcomes are based on results from multiple CME offerings across a variety of clinical subject areas using a common CME format: asynchronous Web-based CME. Meta-analyses of the CME evaluation literature6 and metareviews of Web-based CME3,4 have reported on studies that, for the most part, have concentrated on the evaluation of specific CME formats covering distinct clinical subject areas. The characteristics of this evaluation study also support recent arguments in favor of studies that focus on method rather than medium comparisons in the evaluation of Web-based CME.12,13

Acknowledgments

Funding and support for the MDcme project and this research were provided by the Atlantic Innovation Fund (AIF), Atlantic Canada Opportunities Agency (ACOA), Government of Canada. We would like to acknowledge the facilitators and participants in the MDcme programs that were examined, as well as the MDcme consortium partners. Further information on the consortium partners is available on the MDcme Web site (www.mdcme.ca).

References

1 Casebeer L, Bennett N, Kristofco R, Carillo A, Centor R. Physician internet medical information seeking and on-line continuing education use patterns. J Contin Educ Health Prof. 2002;22:33–42.

2 Fordis M, King J, Ballantyne C, et al. Comparison of the instructional efficacy of Internet-based CME with live interactive CME workshops: a randomized controlled trial. JAMA. 2005;294:1043–51.

3 Wutoh R, Boren SA, Balas EA. eLearning: a review of Internet-based continuing medical education. J Contin Educ Health Prof. 2004;24:20–30.

4 Curran V, Fleet L. A review of evaluation outcomes of web-based continuing medical education. Med Educ. 2005;39:561–67.

5 Moore DE. A framework for outcomes evaluation in the continuing professional development of physicians. In: Davis D, Barnes BE, Fox R (eds). The Continuing Professional Development of Physicians: from Research to Practice. Chicago: American Medical Association, 2003.

6 Davis DA. Does CME work? An analysis of the effect of educational activities on physician performance or health care outcomes. Int J Psychiatry Med. 1998;28:21–39.

7 Howard GS, Schmeck RR, Bray JH. Internal validity in studies employing self-report instruments: a suggested remedy. J Educ Meas. 1979;16:129–35.

8 Curran VR, Hoekman T, Gulliver W, Landells I, Hatcher L. Web-based CME (Part II): an evaluation study of computer-mediated continuing medical education. J Contin Educ Health Prof. 2000;20:106–19.

9 Curry L, Purkis IE. Validity of self-reports of behavior changes by participants after a CME course. J Med Educ. 1986;61:579–84.

10 Spencer K. Modes, media and methods: the search for educational effectiveness. Br J Educ Technol. 1991;22:12–22.

11 Cohen J. Statistical power analysis for the behavioral sciences. New York: Academic Press, 1977.

12 Cook DA. Internet-based continuing medical education. JAMA. 2006;295:758.

13 Fordis M, King JE, Ballantyne CM, et al. Internet-based continuing medical education – reply. JAMA. 2006;295:759.

© 2006 Association of American Medical Colleges
