Continuing medical education (CME) and staying abreast of the medical literature facilitate the incorporation of medical progress into daily practice. As medical knowledge is rapidly expanding (PubMed.gov indexed over 142,000 new articles just between March 1 and May 1, 2010), remaining current can be challenging. The Internet is an inexpensive, validated educational tool that can be used in lieu of, or as an adjunct to, traditional small-group learning venues.1–5 Recently, numerous journals and educational services have begun to offer e-mail alerts regarding the publication of new, relevant literature, and these e-mail alerts can be tailored for particular audiences based on medical specialty or specific interests.6–8
Incorporating advances in medical knowledge into bedside practice involves two key components: (1) familiarity with the current body of literature and (2) knowledge acquisition (also referred to as “recollection”).9 Familiarity involves the awareness of current advances (i.e., awareness that a recent article about a new drug therapy was published) but does not necessarily involve the ability to recall facts or apply new knowledge. Familiarity is important because it is a key first step in physician CME and the pathway to change in practice.10,11 Knowledge acquisition is deeper than familiarity; it involves actively learning and being able to recall and apply key messages from the literature (i.e., recalling the new drug's name, dose, and potential side effects). Clearly, knowledge acquisition is a more difficult process, and physicians have a much higher capacity for recognition and familiarity than they do for knowledge acquisition.12 Furthermore, the two likely represent a continuum, as familiarity can lead to repetition and eventual knowledge acquisition.13
To our knowledge, no prior published studies evaluate the effectiveness of e-mail alerts of journal content in facilitating physicians' learning. The purpose of this study is to determine whether the subscribers of our e-mail alert service, Nephrology Now, which specializes in nephrology content, experience increases in learning as measured by familiarity with and knowledge acquisition of the most recent nephrology literature.
Nephrology Now (NN)14 is an online, nonprofit service that was created in September 2006 for the worldwide nephrology community. Its current editorial board and creators include practicing nephrologists located across Canada (including G.T., M.M.S., J.S., and D.S.). Subscribers receive a monthly e-mail alert highlighting recently published, clinically relevant articles related to the field of clinical nephrology. The monthly alert includes each article's title and authors, publication information (the journal in which the article appears and the date of publication), a brief summary of the article, and a link to the article abstract (see www.nephrologynow.com/archives for our archive of previous e-mail alerts).14 An independent editorial board selects articles on the basis of their clinical applicability, and the board focuses on articles that have a direct impact on making diagnoses, providing prognoses, and/or guiding treatment. Full free text is available for roughly 35% to 40% of all of NN's selected articles. Since 2006, NN has delivered over 26,000 electronic newsletters, and there have been over 44,000 visitors to the Web site. NN subscribers comprise a multidisciplinary, international cohort of physicians and other medical professionals from 158 countries.
Study intervention and assessment
We, the members of the editorial board of NN (G.T., M.M.S., J.S., D.S.) who also serve as the service's administrators, performed this study. In October 2007, we contacted all subscribers to the NN monthly mailing list (1,683 subscribers at that time) by e-mail and invited them to participate in the study. We divided those who agreed to participate into two groups. Each group then randomly received one of two sets of e-mail alerts (set 1 or set 2) containing an intervention article for each of three months (January, February, and March 2008). Two of us (D.S., G.T.) chose at random which intervention article would be included in which participants' e-mail alerts. Each month, the intervention article arrived in the monthly NN e-mail alert along with the usual content of 15 to 20 article notifications. Each group received only one intervention article per month; thus, one group's intervention article became the other's control article and vice versa. In other words, for each of three consecutive months, half of the subscribers received notification about an intervention article from set 1 while the other half received notification about an intervention article from set 2. We made comparisons regarding familiarity with and knowledge about the intervention articles within participants in a paired fashion so that each group served as a control for the other group. To minimize the possibility of participants becoming aware of intervention articles not highlighted by the NN monthly newsletter, the intervention articles were selected from nontraditional nephrology journals (e.g., Journal of the American Medical Association). The full text of four of the six selected intervention articles was available through a link, and we evenly distributed these four between the two randomization groups.
After the three-month intervention period, we invited all participants to participate in an online assessment tool, which was designed to collect demographic information and to assess familiarity and knowledge acquisition with respect to the intervention articles. To assess familiarity, we asked, “On a scale of 1 to 5, where 1 is totally unfamiliar and 5 is very familiar, how would you rank your familiarity with the following article?” and to assess knowledge we asked respondents to indicate to what degree they agreed (1 = strongly disagree, 5 = strongly agree) with a statement of fact from the intervention article. We assessed both participant groups for all six intervention articles (three they received, and three they did not). Baseline demographics included each respondent's country of practice, profession, level of training, and experience. We also asked which literature sources the respondents frequently used, what technology they used, and details regarding their practices. Finally, we measured both “access to free full text” (i.e., participants' actual free access to the full text of an article) and “perception of NN articles available free” (each participant's estimation of the percentage [either more or less than 60%] of NN articles for which the full text is freely available).
Ethics and confidentiality
Participation was anonymous and voluntary. Participants gave their consent via an online consent form, and we offered no incentives for participation. We collected and stored participants' data using a unique identifier, rather than their names. The University of Toronto regional ethics board approved this study.
Treating outcomes as continuous variables.
We used the results of the online survey to create two scores: one for article familiarity and another for knowledge gain. These two scores served to collapse the multiple responses (several for each of six articles) per participant into one observation for each of these two domains.
For each participant, we subtracted the month 1 raw familiarity score (1 = least familiar, 5 = most familiar) for the article to which the participant had not been alerted from the score for the article to which he or she had been alerted. We did the same for months 2 and 3, and then we summed the three differences to produce a score (termed “fDelta”) for each participant's overall gain in familiarity. For example, if a subscriber answered 5 (most familiar), 3 (moderately familiar), and 1 (least familiar) for the articles e-mailed to them and 2, 2, and 1 for the articles not e-mailed to them, their fDelta score was 9 − 5 = 4. The difference between the familiarity scores for the alerted and nonalerted article could range between −4 and +4 for each month; thus, fDelta had a possible range from −12 to +12. We assumed that the subtracting and summing of the raw scores, which we considered to be random variables, produced a distribution of fDelta that was approximately normal. We performed a one-sample t test on the fDelta values across all of the participants to determine whether the mean was significantly greater than zero.
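As a minimal sketch, the fDelta construction and the one-sample t test described above might look like the following (the data and function names are ours, not the authors'; real analyses would be run on the full participant set):

```python
import math
import statistics

def f_delta(alerted, non_alerted):
    """Sum of the three monthly (alerted - non-alerted) familiarity
    differences for one participant; possible range -12 to +12."""
    return sum(a - n for a, n in zip(alerted, non_alerted))

# Worked example from the text: ratings 5, 3, 1 for alerted articles
# vs. 2, 2, 1 for non-alerted articles gives 9 - 5 = 4.
example_delta = f_delta([5, 3, 1], [2, 2, 1])

def one_sample_t(deltas, mu0=0.0):
    """t statistic for H0: mean of the fDelta values equals mu0."""
    n = len(deltas)
    se = statistics.stdev(deltas) / math.sqrt(n)
    return (statistics.mean(deltas) - mu0) / se
```

The paired design is captured by the fact that each per-month difference is taken within the same participant before summing.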
For knowledge gain, participants' raw answers reflected the extent to which they agreed with a statement regarding a particular article (1 = strongly disagree, 5 = strongly agree). Each of these statements had a correct answer. To measure the degree of congruence between the participant's answer and the correct one, we coded each participant's responses as follows:
Strongly agree = 2,
Agree = 1,
Neutral = 0,
Disagree = −1,
Strongly disagree = −2.
In other words, if the participant's answer was 5, and the correct answer was also 5, then that participant's score was 2, but if the participant's answer was 1, and the correct answer was 5, then the participant's score was −2.
Then we created a knowledge score (k) for a given participant, article, and month, as follows:
k = 5 − |correct answer − participant's answer|
Using this recoding, if the participant's answer was correct, then k = 5 because k = 5 − (5 − 5). If the participant's answer was maximally incongruent with the correct answer (e.g., the correct answer was “strongly agree” and the participant answered “strongly disagree”), then k = 1 because k = 5 − (5 − 1).
In this way, the recoded knowledge scores were on the same scale as the familiarity scores. We analyzed the differences in the knowledge scores in exactly the same way we analyzed the familiarity scores; that is, we subtracted the k value for the article for which the participant had not been alerted from the k value for the article for which he or she had been alerted. Then, we summed these differences across all three months in order to produce kDelta, the overall gain in a participant's knowledge acquisition. We performed a one-sample t test on the kDelta values across all of the participants as described above for the fDelta values.
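The knowledge-score recoding and the kDelta aggregation can be sketched as follows (a hypothetical illustration; the names are ours):

```python
def knowledge_score(correct, answer):
    """k = 5 - |correct - answer| on the raw 1-5 agreement scale.
    Ranges from 5 (fully congruent with the correct answer) down to
    1 (maximally incongruent)."""
    return 5 - abs(correct - answer)

def k_delta(alerted_k, non_alerted_k):
    """Sum of the three monthly (alerted - non-alerted) knowledge-score
    differences, analogous to fDelta; possible range -12 to +12."""
    return sum(a - n for a, n in zip(alerted_k, non_alerted_k))
```

Because k lives on the same 1-to-5 scale as the familiarity ratings, the two outcome scores are directly comparable.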
To determine whether the outcome scores degraded with the amount of time between the NN alerts and the survey (i.e., month 1 > month 2 > month 3), we disaggregated the scores for all participants by month and subjected them to a one-way analysis of variance (ANOVA) both with and without a term for linear trend. To examine the effect of participant characteristics on the outcomes, we performed multiple-variable linear regression using the score as the dependent variable and the participant factors as independent variables (e.g., PDA use). No prespecified interaction terms were included in the models, and all potential explanatory variables were included (i.e., a stepwise analysis was not performed).
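A linear-trend test of the kind described can be sketched with the standard (−1, 0, +1) contrast coefficients for three equally spaced groups (a simplified illustration under our own assumptions, not the authors' code):

```python
import math
import statistics

def linear_trend_t(month_groups):
    """t statistic for the linear contrast (-1, 0, +1) across three
    per-month score groups, using a pooled within-group variance."""
    coeffs = (-1, 0, 1)
    means = [statistics.mean(g) for g in month_groups]
    ns = [len(g) for g in month_groups]
    # Pooled within-group (error) variance, as in a one-way ANOVA.
    ss_within = sum((len(g) - 1) * statistics.variance(g) for g in month_groups)
    df = sum(ns) - len(month_groups)
    ms_within = ss_within / df
    contrast = sum(c * m for c, m in zip(coeffs, means))
    se = math.sqrt(ms_within * sum(c * c / n for c, n in zip(coeffs, ns)))
    return contrast / se
```

A statistic near zero, as in the study's familiarity data, indicates no progressive month-to-month change.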
Treating outcomes as binary variables.
Given that fDelta and kDelta scores potentially ranged from −12 to +12, we created binary outcome variables for the improvement in familiarity and knowledge as follows: If the continuous score was 0 or lower for a participant then we recoded the corresponding binary outcome variable as 0, and if the score was 1 or higher, we coded the variable as 1. We converted ordinal-level explanatory variables into binary ones by using arbitrary thresholds. We explored the relationship between the binary explanatory variables and the binary outcomes through univariate chi-square analyses and through multivariate logistic regression analyses. We did the latter univariate and multivariate analyses for the binary familiarity outcome first and then for the binary knowledge outcome. We then repeated these analyses within a subgroup of the participants who identified themselves as physicians.
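The dichotomization rule, and a univariate chi-square test on a resulting 2 × 2 table, can be sketched as follows (hypothetical counts; the helper names are ours):

```python
def improved(delta):
    """Binary outcome: 1 if the continuous fDelta/kDelta score is +1 or
    higher, 0 if it is 0 or lower."""
    return 1 if delta >= 1 else 0

def chi_square_2x2(table):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]], e.g. exposure rows by
    improvement columns."""
    (a, b), (c, d) = table
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

The multivariate step would then fit a logistic regression of the binary outcome on the binary explanatory variables.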
Sample size calculations.
A priori sample size calculations were not possible because the current method of creating the knowledge and familiarity scores was novel, and therefore, prior knowledge regarding the variance of the scores was unavailable. Furthermore, because the test of significance for the two primary outcome measures was the one-sample t test, the criterion for rejecting the null hypothesis was, itself, a function of the sample size. Therefore, no simple analytic method existed for the sample size estimation. To address this issue, we made an effort to recruit as many participants as possible, and we then performed post hoc power calculations. The latter assumed that the primary outcome scores (fDelta and kDelta) conformed to t distributions with means equal to the effect sizes (the average gain in familiarity or knowledge score) and standard errors equal to the observed sample standard errors. Then, we calculated the minimum effect sizes that could be detected with a two-sided alpha of 0.05 and a power of 0.8 given the observed sample size and standard error.
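Under a normal approximation (nearly identical to the t distribution at a sample size in the hundreds), the minimal detectable effect for a two-sided test is (z for 1 − α/2 plus z for power) times the standard error; a sketch:

```python
from statistics import NormalDist

def min_detectable_effect(se, alpha=0.05, power=0.8):
    """Smallest effect detectable at the given standard error, two-sided
    alpha, and power (normal approximation to the one-sample t test)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha 0.05
    z_power = NormalDist().inv_cdf(power)           # ~0.84 for power 0.8
    return (z_alpha + z_power) * se
```

With a standard error of 0.083 (the value observed for kDelta in this study), this approximation gives roughly 0.233, close to the 0.234 the authors report using the exact t distribution.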
Of the 1,683 subscribers we invited to participate in our study, whom we randomized to receive NN article notifications over a consecutive three-month period, 803 (47.7%) completed the online assessment tool, and we had a similar response from each randomization group: 402 and 401, respectively. We detected no statistically significant differences between the two groups of respondents, and we included all 803 participants who completed the online assessment tool in the analyses. Table 1 shows the demographic data for the participants as well as the relative proportions from each group.
Outcomes treated as continuous variables
We detected a statistically significant increase in fDelta, which measured the respondents' familiarity with the articles: 0.23 ± 0.087 units on the familiarity scale (95% confidence interval [CI]: 0.06–0.41, P = .007). The increase in familiarity did not seem to be a function of how long after the NN alerts went out the survey was conducted: The disaggregated familiarity scores for months 1, 2, and 3 were, respectively, 0.15 ± 0.052, 0.01 ± 0.043, and 0.08 ± 0.057 (P = .125 by one-way ANOVA, P = .295 for the linear trend component of a one-way ANOVA with linear trend). In other words, the scores did not demonstrate a progressive linear increase from month 1 to month 3.
We found no statistically significant improvement in knowledge acquisition. The knowledge score increase was 0.03 ± 0.083 units (95% CI: −0.13 to 0.20, P = .687). To be sure that the lack of a statistically significant knowledge acquisition effect was not due to low statistical power, we conducted a post hoc power analysis using the observed sample size and variance. The observed sample size and variance together realized a power of 0.8 to detect a knowledge score effect of 0.234 points or greater. Given the lack of a signal for the gain in knowledge, we confined exploratory analyses using ANOVA for trend and multiple regression models to just the familiarity score.
We conducted t tests and F tests to look at the differences in the change in familiarity score for a variety of the categorical, potentially explanatory factors; however, the only factor associated with a marginally significant difference in the familiarity score was whether or not the participant was a physician (P = .038 assuming unequal variances [data not shown]). A linear regression analysis with fDelta as the dependent variable and all of the categorical factors as independent variables demonstrated that being a physician showed a trend to increase the effect of NN alerts on improving familiarity (+0.371 points, P = .079 [data not shown]). The other potential explanatory factors were statistically nonsignificant.
Outcomes treated as binary variables
The results of the binary analyses of the increase in familiarity and knowledge are shown in Tables 2 and 3, respectively. In general, the odds ratios (ORs) for an increase in both familiarity and knowledge for an exposure to the examined explanatory variables were similar in direction and magnitude for the univariate and multivariate (logistic) analyses. Among all participants, a gain in familiarity in both univariate and multivariate analyses was significantly associated with being a physician (OR = 1.85, P = .001 [univariate], OR = 1.83, P = .002 [multivariate]). That is, physicians, compared with nonphysicians, had between 83% and 85% higher odds of gaining familiarity with the articles highlighted by the NN alerts. Among the physician participants, perceiving greater access to the free full text of the articles mentioned in the NN alerts was significantly associated with higher odds of a gain in familiarity (OR = 1.41, P = .037 [multivariate]). We noted a trend toward greater odds of gaining familiarity among physicians who indicated that they used NN alerts to stay up-to-date, but the effect did not achieve the nominal level of significance (OR = 1.10, P = .095 [multivariate]).
Among all participants, the odds of achieving an increase in knowledge were associated, in both the univariate and multivariate binary analyses, with being a physician (OR = 1.64, P = .008 [univariate], OR = 1.69, P = .007 [multivariate]). The multivariate analysis also revealed a significant positive association of knowledge gain with the use of the NN alerts system on a daily or weekly basis as compared with less frequent use (OR = 1.48, P = .015). We also observed a trend toward a positive association with practice in North America, but the effect did not achieve statistical significance (OR = 1.36, P = .074). Among the physician participants, the perception of having free access to the full text of at least 60% of the articles mentioned in the NN alerts was significantly associated with lower odds of gaining knowledge in the multivariate analysis (OR = 0.64, P = .037). We noted greater odds of gaining knowledge among physicians who indicated that they used NN alerts to stay up-to-date in the univariate analysis (OR = 1.83, P = .043), but the effect did not achieve the nominal level of significance in the multivariate analysis (OR = 1.73, P = .095).
Discussion and Conclusions
The goal of our study was to evaluate the effect of an e-mail alert system that notifies subscribers of recent nephrology-related literature on subscribers' familiarity and knowledge acquisition. Our major findings are (1) that respondents' familiarity with the medical literature increased as a result of receiving NN, especially in physicians who perceived they had free access to full articles and (2) that receiving NN did not lead to improvements in knowledge gain.
Imparting new knowledge to physicians via education has proven to be a daunting task, and the numerous methods tested have met with variable success.1–5,11,15,16 Previous studies have found online education to significantly improve medical education and knowledge gain.17–19 In a meta-analysis by Cook and colleagues, the effect of Internet-based instruction on learners from the health professions was positive compared with no intervention (pooled effect size 1.0, CI 0.9–1.1, using 214 total studies).18 The majority of the studies they reviewed (117/214) assessed knowledge improvements; however, participants were often actively involved in the learning process through engaging in, for example, interactive exercises, repetition, online tutorials, or discussion groups. According to the literature, effective knowledge gain also occurred with small groups of online learners involved in case discussion and online CME activities.5,15,20,21 The importance of interactivity, reinforcement, and feedback in the process of education has resulted in their incorporation in multiple learning models.10,22 In our study, we noted an improvement in familiarity with the existing literature, but this familiarity did not translate into acquisition of new knowledge. Our study design evaluated knowledge acquisition in a more passive (surface) learning environment, in which individual participants may or may not be actively engaged in learning. To increase knowledge acquisition and learning, e-mail alert services could be coupled with CME activities that include more case-based discussions and interactivity.
The gain in familiarity and awareness of the literature is an important first step in physician education. A systematic review of 76 articles on barriers to using and adhering to published guidelines found the most commonly cited reason was lack of awareness.16 Indeed, according to a variety of CME teaching models, becoming familiar and aware is often the first step in improving physician education.10,22 Thus, e-mail alerts seem to be an important step in building physician awareness and disseminating information. Whether improvements in familiarity and recall translate into clinical practice changes remains unknown and provides a potential area of future investigation.
In our study of the effect of receiving NN e-mail alerts, being a physician was associated with an increase in both familiarity and knowledge acquisition. Further, among physicians, the perception of having free full-text access was also associated with increases in familiarity but decreases in knowledge gain. Ease of access, incorporation of nontraditional journals, and free text access all may play a role in increasing familiarity. We believe that providing a link to the full free text in our e-mail alerts promotes reading the full article. Further, a recalled perception of an article being freely available may have increased the readers' interest, thereby leading to an increase in familiarity. Specialist physicians often have a more narrow focus for medical information seeking, and alerting them to relevant nontraditional literature sources may stimulate heightened awareness. Specialists may miss exposure to relevant information on specific topics related to their field if they are not overtly exposed to medical literature outside their specialty. Receiving an e-mail alert and links to new publications may be, for physicians who do not have full-text access, their only source of obtaining that information. Why access to full free text did not translate into knowledge gain is difficult to explain and requires further investigation.
Previous efforts in Internet-based CME have included online e-journal clubs as well as teaching modules for trainees.23–26 Studies of these efforts have been small and mostly nonrandomized, and these studies have often failed to demonstrate a sustainable method of instruction. Furthermore, in a recent editorial, Cook reiterates the pitfalls of “media-comparative research.”27 He argues that online learning may facilitate the use of certain instructional methods, and it is the methods themselves—not the medium—that influence learning. We know from the aforementioned studies that online education is better than no education; however, we do not know what the best forms of online presentation are. We showed significant and robust increases in familiarity with the nephrology literature in an online setting. No definitive answer exists as to whether Web-based programs that result in positive knowledge gains actually translate into changes in physician practice behaviors. Ideally, online education that increases content expertise should influence behavioral outcomes, and our study demonstrates that Web-based education can effectively achieve greater familiarity with the literature in a given field.
Strengths of this study included the randomized study design and the large, international cohort tested. Furthermore, we used a reproducible formula for determining increases in familiarity and knowledge gain. Each group served as the other's control, which allowed us to compare the groups in a paired fashion across the two sets of intervention articles. One limitation of our study is that we assessed knowledge gain through a small number of questions (two per article); thus, not all key points in the content of each article could be evaluated. We also limited the type of questions to multiple-choice questions and did not allow respondents the opportunity to answer open-ended questions. Whether the study participants were representative of the entire cohort of NN subscribers is unclear. Possibly, study participants access the Internet more frequently than nonparticipants, and not only take part in this CME activity but also access other forms of Internet medical information. These medical-information-seeking behaviors may influence the degree of difference between those who did and did not participate in this study. In future studies, baseline data on nonparticipants would be helpful in addressing this issue.
In summary, this study has demonstrated that physicians who received Internet e-mail alerts were more familiar with the nephrology literature. Although we could not demonstrate any improvement in knowledge, this may have been a result of our method of assessment, and further evaluations are required. The evaluation of Internet CME activities, including e-mail alerts, lags far behind the development of such activities; however, e-mail alerts are a new CME activity that may serve as a valuable resource for the medical community by allowing subscribers to more easily find and access high-quality information to support their clinical decision making.
The authors presented this work in poster format at the American Society of Nephrology and the Canadian Society of Nephrology annual meetings in 2009 in Edmonton, Alberta, Canada and San Diego, California, USA, respectively.
1 Shuval K, Berkovits E, Netzer D, et al. Evaluating the impact of an evidence-based medicine educational intervention on primary care doctors' attitudes, knowledge and clinical behavior: A controlled trial and before and after study. J Eval Clin Pract. 2007;13:581–598.
3 Kerfoot BP, Conlin PR, Travison T, McMahon GT. Web-based education in systems-based practice: A randomized trial. Arch Intern Med. 2007;167:361–366.
4 Amsallem E, Kasparian C, Cucherat M, et al. Evaluation of two evidence-based knowledge transfer interventions for physicians. A cluster randomized controlled factorial design trial: The CardioDAS Study. Fundam Clin Pharmacol. 2007;21:631–641.
5 Fordis M, King JE, Ballantyne CM, et al. Comparison of the instructional efficacy of Internet-based CME with live interactive CME workshops: A randomized controlled trial. JAMA. 2005;294:1043–1051.
8 Massachusetts Medical Society. Journal Watch. www.jwatch.org. Accessed September 16, 2010.
10 Pathman DE, Konrad T, Freed GL, Freeman VA, Koch GG. The awareness-to-adherence model of the steps to clinical guideline compliance. The case of pediatric vaccine recommendations. Med Care. 1996;34:873–889.
11 Davis D, Davis N. Selecting educational interventions for knowledge translation. CMAJ. 2010;182:E89–E93.
12 Watkins M, Gardiner JM. An appreciation of the generate–recognize theory of recall. J Verbal Learn Verbal Behav. 1979;18:687–704.
13 Tulving E, Thomson M. Encoding specificity and retrieval processes in episodic memory. Psychol Rev. 1973;80:352–373. http://alicekim.ca/9.ESP73.pdf. Accessed September 16, 2010.
15 Casebeer L, Kristofco RE, Strasser S, et al. Standardizing evaluation of on-line continuing medical education: Physician knowledge, attitudes, and reflection on practice. J Contin Educ Health Prof. 2004;24:68–75.
16 Cabana MD, Rand CS, Powe NR, et al. Why don't physicians follow clinical practice guidelines? A framework for improvement. JAMA. 1999;282:1458–1465.
17 Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning in the health professions: A meta-analysis. JAMA. 2008;300:1181–1196.
19 Wutoh R, Boren SA, Balas EA. eLearning: A review of Internet-based continuing medical education. J Contin Educ Health Prof. 2004;24:20–30.
20 Forsetlund L, Bjørndal A, Rashidian A, et al. Continuing education meetings and workshops: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev. April 15, 2009:CD003030.
21 Peloso PM, Stakiw KJ. Small-group format for continuing medical education: A report from the field. J Contin Educ Health Prof. 2000;20:27–32.
22 Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. 4th ed. Montreal, Quebec, Canada: McGraw Hill; 2005:140–147.
23 Hammond J, Whalen T. The electronic journal club: An asynchronous problem-based learning technique within work-hour constraints. Curr Surg. 2006;63:441–443.
24 MacRae H, Regehr G, McKenzie M, et al. Teaching practicing surgeons' critical appraisal skills with an Internet-based journal club: A randomized, controlled trial. Surgery. 2004;136:641–646.
25 Kuppersmith RB, Steward MG, Ohlms LA, Coker NJ. Use of an Internet-based journal club. Otolaryngol Head Neck Surg. 1997;116:497–498.
26 Jefford M, Phillips KA, Tattersall MH. An online educational facility for medical oncology trainees. J Clin Oncol. 2001;19:2566–2569.
27 Cook DA. Internet-based continuing medical education. JAMA. 2006;295:758.

© 2011 Association of American Medical Colleges