Research Reports

What Influences Choice of Continuing Medical Education Modalities and Providers? A National Survey of U.S. Physicians, Nurse Practitioners, and Physician Assistants

O’Brien Pott, Maureen PhD, MHA; Blanshan, Anissa S. MHA, MBA; Huneke, Kelly M. MS; Baasch Thomas, Barbara L. BSN, MA; Cook, David A. MD, MHPE

doi: 10.1097/ACM.0000000000003758

Physicians, nurse practitioners, physician assistants, and other health professionals routinely engage in continuous professional development (CPD), defined as “activities intended to improve professional knowledge, skills, or performance.”1 These activities include both formal, for-credit continuing medical education (CME)2–4 and informal learning (such as answering clinical questions about a specific patient).5–8 As documentation of CME is required for licensure by the majority of states in the United States, most clinicians must accumulate CME credits. The need for CPD broadly, and credit-bearing CME specifically, has fostered a large network of accrediting bodies, professional societies, nonprofit institutions, and for-profit companies that collectively support these activities.

The CME landscape has changed substantially in recent years, owing to many factors, including recognition of “flaws in the way it is conducted, financed, regulated, and evaluated”9; growth of delivery modalities such as online webinars and podcasts10,11; recognition that common CME approaches such as educational meetings,12 printed materials,13 and audit and feedback14 have only limited impact; and strategic regulatory adjustments intended to promote meaningful learning and translation to practice.3,4 These ongoing changes both highlight and exacerbate the gaps in our understanding of the factors that influence how clinicians choose among various CPD options. Several studies have examined such factors, but most are over a decade old,15–20 and their contemporary relevance is uncertain. More recently, national surveys of clinicians in the United States1,21,22 and Scotland23 confirmed broad interest in a variety of delivery modalities and identified topical relevance, quality of content, and time as highly influential factors; yet only the Scotland survey involved nonphysicians (general practice nurses and pharmacists), and none of these surveys involved physician assistants or examined perceptions of specific CME providers.

CPD educators, CME providers, and accrediting bodies would benefit from further insight regarding the factors that influence clinicians’ decisions about participation in specific CPD and CME activities. To address this gap, we conducted a national survey of U.S. physicians, nurse practitioners, and physician assistants, addressing 2 questions: (1) What factors influence clinicians when they are selecting among CME delivery options, including CME providers? and (2) How do these responses vary by clinician type, specialty, and age?

Method

Overview

We surveyed licensed U.S. physicians, nurse practitioners, and physician assistants using an Internet-based questionnaire that addressed selection of CME delivery modalities and perceived characteristics of specific CME providers.

Sampling and human subjects

All invited clinicians had completed a CME activity (from any provider) in the 24 months preceding completion of our questionnaire. Clinicians employed by or closely affiliated with our institution (Mayo Clinic) were excluded. Invitees were selected from an external vendor database (Dynata, Plano, Texas) using stratified random sampling, with the intent to achieve a geographically representative sample of U.S. clinicians and to include 100 respondents from each of 5 clinician types: family medicine physicians, internal medicine and hospitalist physicians, physicians in other internal medicine subspecialties, nurse practitioners (all medical specialties), and physician assistants (all medical specialties). Respondents were offered a modest incentive (a gift card, charitable contribution, or similar product, valued at no more than $44). The Mayo Clinic institutional review board deemed this study exempt from full review.
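To make the sampling strategy concrete, the following Python sketch illustrates one way to draw a stratified, geographically proportional sample of invitees from a vendor-style database. This is a hedged illustration only: the data frame, column names, stratum labels, and invitation target are hypothetical, and the vendor's actual procedure is not reported in the article.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
TYPES = ["family med", "IM/hospitalist", "IM subspecialty", "NP", "PA"]
REGIONS = ["Northeast", "Midwest", "South", "West"]

# Hypothetical sampling frame standing in for the vendor database.
frame = pd.DataFrame({
    "clinician_type": rng.choice(TYPES, size=20_000),
    "region": rng.choice(REGIONS, size=20_000),
})

INVITES_PER_TYPE = 400  # oversample, anticipating roughly a 25% response rate

def draw_stratum(group: pd.DataFrame) -> pd.DataFrame:
    """Sample invitees proportionally across regions within one clinician type.
    Rounding means stratum totals may be off by 1 or 2; fine for a sketch."""
    return group.groupby("region", group_keys=False).apply(
        lambda g: g.sample(n=round(INVITES_PER_TYPE * len(g) / len(group)),
                           random_state=1)
    )

invitees = frame.groupby("clinician_type", group_keys=False).apply(draw_stratum)
print(invitees.groupby(["clinician_type", "region"]).size())
```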

Instrument

A group of administrators, researchers, and physicians with experience in CME collaborated to develop the survey questionnaire. We reviewed prior surveys1,15,20 and consulted with colleagues to identify contemporary issues affecting CME decisions. This report presents responses for a subset of questions from the full survey, including items regarding factors influencing choices among CME activities, anticipated future use of various activities, features of live vs online modalities, and characteristics of specific CME providers. The verbatim survey questions are reported in Supplemental Digital Appendix 1 at https://links.lww.com/ACADMED/B27.

We included best–worst scaling (maximum difference scaling) questions24,25 to force choices among the factors influencing clinicians’ CME decisions. Respondents were shown several sets of potentially influential factors (drawn in successive blocks of 4 from a pool of 22 items; see Figure 1) and asked to indicate the items they perceived as most and least important in each set. Each item appeared in several sets. All other questionnaire items used 5-point ordinal response options (for item-specific anchors, see the Results section).
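As an illustration of this question design, the sketch below assembles 4-item best–worst sets from a 22-item pool so that every item appears the same number of times and no item repeats within a set. The factor names and the number of appearances are hypothetical; the survey's actual block design was produced by the questionnaire software and is not specified in the text, so this randomized construction is a simplified stand-in.

```python
import random

# Hypothetical stand-ins for the 22 influential factors in the pool.
FACTORS = [f"factor_{i:02d}" for i in range(1, 23)]

def build_bws_sets(items, set_size=4, appearances=4, seed=42):
    """Slice shuffled copies of the item pool into sets of `set_size`,
    retrying until no item repeats within a set. Every item appears
    exactly `appearances` times across all sets (22 x 4 / 4 = 22 sets)."""
    rng = random.Random(seed)
    while True:
        sequence = []
        for _ in range(appearances):
            block = items[:]
            rng.shuffle(block)
            sequence.extend(block)
        sets = [sequence[i:i + set_size]
                for i in range(0, len(sequence), set_size)]
        if all(len(set(s)) == len(s) for s in sets):  # no within-set duplicates
            return sets

for s in build_bws_sets(FACTORS)[:3]:  # preview the first 3 response sets
    print(s)
```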

Figure 1:
Importance of factors in selecting a CME activity, from a 2018 national survey of physicians, nurse practitioners, and physician assistants. Relative frequency with which each factor was selected as most or least important in best–worst scaling response sets. Numbers in brackets indicate net positivity. Abbreviations: CME, continuing medical education; MOC, maintenance of certification.

For each of several specific CME providers, we asked whether respondents were aware of that provider and whether they had participated in CME activities organized by that provider in the preceding 2 years. The list of providers reflected our perception of the highest-volume providers in the United States and included commercial providers (e.g., UpToDate, Medical Education Resources), academic providers (e.g., Mayo Clinic; University of California, San Francisco), and professional organizations. For each provider whose activities a respondent had participated in, the respondent rated 5 characteristics: (1) research focused, (2) clinical practice focused, (3) easy to access, (4) reputation of institution, and (5) reputation of faculty. For reporting purposes, we have anonymized these providers.

Survey administration

The survey was administered by Endeavor Management (Houston, Texas) from August 7 to 14, 2018. Participants were recruited via email. Waves of emails were sent each day, with one reminder sent on August 13. Each email contained an individually tracked link to an Internet-based questionnaire presented using Qualtrics (Provo, Utah). The survey closed when the target number of responses (N = 100) for each clinician subgroup was attained.

Analyses

For best–worst scaling items, we calculated the best–worst scale score (net positivity) for each item by counting the times that item was selected as most important, subtracting the times it was selected as least important, and dividing by the number of sets in which that item appeared.24
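As a minimal illustration of this calculation, the sketch below computes net positivity from pooled counts; the counts are hypothetical and chosen only to reproduce the range of scores reported in the Results.

```python
def net_positivity(times_most: int, times_least: int, n_sets: int) -> float:
    """Best-worst scale score: ranges from -1 (always selected least
    important) to +1 (always selected most important)."""
    return (times_most - times_least) / n_sets

# Hypothetical pooled counts across all respondents and response sets.
print(net_positivity(times_most=600, times_least=60, n_sets=1000))   #  0.54
print(net_positivity(times_most=30, times_least=600, n_sets=1000))   # -0.57
```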

After confirming that data approximated a normal distribution, we explored differences across subgroups (clinician specialty and age) using ANOVA, followed by Tukey’s test for statistically significant models. We compared respondents and nonrespondents using the chi-square test. We analyzed the ratings of CME provider characteristics visually using simple x–y plots. We also used categorical principal components analysis to reduce the 5 characteristic ratings into 2 dimensions and illustrated this visually using a perceptual map. All analyses were performed using IBM SPSS Statistics version 25 (IBM, Armonk, New York). Given the large sample size and the large number of independent statistical tests, we used P < .01 as the threshold of statistical significance.
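A condensed sketch of these subgroup analyses follows, using Python's scipy and statsmodels in place of SPSS. The data frame, group labels, ratings, and contingency table are all hypothetical; only the test sequence (omnibus ANOVA, Tukey follow-up when significant, chi-square for respondent vs nonrespondent comparisons, alpha of .01) mirrors the text.

```python
import pandas as pd
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

ALPHA = 0.01  # significance threshold used in the study

# Hypothetical long-format data: one rating per respondent.
ratings = stats.norm.rvs(loc=3.5, scale=0.8, size=400, random_state=0)
df = pd.DataFrame({
    "clinician_type": ["FM", "IM", "NP", "PA"] * 100,
    "rating": ratings,
})

# One-way ANOVA across clinician types, then Tukey's test if significant.
groups = [g["rating"].to_numpy() for _, g in df.groupby("clinician_type")]
f_stat, p_anova = stats.f_oneway(*groups)
if p_anova < ALPHA:
    print(pairwise_tukeyhsd(df["rating"], df["clinician_type"], alpha=ALPHA))

# Chi-square comparison of respondents vs nonrespondents (hypothetical table).
table = [[260, 240],   # respondents, e.g., by gender
         [700, 695]]   # nonrespondents
chi2, p_chi2, dof, expected = stats.chi2_contingency(table)
print(f"ANOVA P = {p_anova:.3f}; chi-square P = {p_chi2:.3f}")
```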

Results

To achieve our target sample of 100 responses per clinician subgroup (500 total respondents), we invited 1,895 clinicians (overall response rate, 26%), including 975 specializing in family medicine and internal medicine/hospitalists (response rate, 21%), 199 other types of specialists (response rate, 50%), and 721 nurse practitioners and physician assistants (response rate, 28%). Respondents and nonrespondents were similar in gender, age, and geographical region (P ≥ .03; see Supplemental Digital Appendix 2 at https://links.lww.com/ACADMED/B27).

Factors that influence selection of a CME course

We used best–worst scaling to rank order the relative importance of various factors that clinicians might consider when selecting a CME activity (see Figure 1). The top 3 factors, each endorsed as “most important” in over half of the response sets in which they appeared as an option, were the topic (net positivity, 0.54), the quality or effectiveness of the content delivered (0.51), and availability of CME or maintenance of certification (MOC) credit (0.43). The lowest-ranked factors, each endorsed as most important in fewer than 10% of the response sets in which they appeared, were training/alumni (−0.39), affiliation (−0.42), and referral frequency (−0.57).

Preferences among CME delivery modalities

When asked about the likelihood of obtaining CME credits in the next 12 months using a variety of different modalities, respondents rated live activities the highest (mean 3.8 [of maximum 5] across all respondents; see Table 1), followed by online learning (mean 3.5), Internet-based point-of-care learning (mean 3.5), and print-based activities (mean 3.5). Professional society meetings (mean 3.2), webcast of a live event (mean 3.2), and performance improvement activities (mean 2.8) were rated lower. When respondents were asked about the desirability of using the same modality options, we found slightly higher scores for each modality, although the rank order of preferences remained essentially unchanged (see Table 1).

Table 1:
Anticipated Future Use of and Desirability of Participating in Different CME Modalities, From a 2018 National Surveya of Physicians, Nurse Practitioners, and Physician Assistants

These ratings were similar across clinician types (see Table 1), with the exceptions of print-based CME and performance improvement CME. Family medicine physicians were significantly more likely than nurse practitioners to anticipate using print-based CME (P < .01 using Tukey’s method) and found it more desirable than internal medicine physicians did (P < .01). Family medicine physicians were also more likely than physician assistants to anticipate using performance improvement CME (P < .01). Ratings were likewise similar across age groups (P > .01; see Supplemental Digital Appendix 3 at https://links.lww.com/ACADMED/B27), with the exceptions of live activities (respondents aged 50–59 were more likely to anticipate using these than respondents under 40, P < .01) and Internet point-of-care learning (respondents over 59 were less likely to anticipate using these than those under 40, P < .01, and found them less desirable than those aged 40–49 and under 40, P < .01).

We also asked respondents more specifically about features of online and live, in-person CME activities that might influence their selection (see Table 2). In general, ratings for online activities were higher than those for live activities (however, the feature options differed for each modality, so we did not formally compare these ratings). For online CME, the feature of greatest appeal was that learning could be done when the clinician had time (mean 4.4 [of 5]), and the feature of least appeal was that the subject was best taught using this modality (mean 3.6). By contrast, this last feature (subject best taught using this modality) held the greatest appeal for live activities (mean 4.0), followed by location in a destination spot (mean 4.0) and a regional location (mean 3.9).

Table 2:
Appeal of Specific Features of Online and In-Person CME Courses, From a 2018 National Surveya of Physicians, Nurse Practitioners, and Physician Assistants

Responses for online CME features were similar across clinician types (see Table 2) and age groups (see Supplemental Digital Appendix 4 at https://links.lww.com/ACADMED/B27). Responses for live CME features were likewise similar across clinician types and age groups, except for time away from practice to focus on education (higher for nurse practitioners than for specialists, P < .01) and interaction with colleagues (higher for nurse practitioners than for physician assistants, P < .01).

Preferences and perceptions of specific CME providers

Respondents’ awareness of and participation with specific CME providers (names anonymized) are reported in Table 3, along with their ratings of 5 provider characteristics: research focus, clinical practice focus, ease of access, reputation of institution, and reputation of faculty. In general, respondents were more often aware of commercial providers and professional organizations (which captured the top 4 rankings) than of academic institutions (which comprised the next 3 rankings).

Table 3:
Awareness of, Past Participation With, and Perceived Characteristics of Specific CME Providers, From a 2018 National Survey of Physicians, Nurse Practitioners, and Physician Assistants

Patterns in the ratings of characteristics are illustrated in Figure 2. Panel A shows that most academic institutions clustered together with relatively high ratings for both research focus and clinical practice focus. Commercial providers, by contrast, had lower ratings for research focus and varied widely in their ratings for clinical practice focus. Panel B again shows clustering by provider type, with academic institutions receiving high ratings for institution reputation but lower ratings for ease of access, and commercial providers showing the opposite pattern (lower reputation, higher ease of access).

Figure 2:
Perceived characteristics of specific CME providers, from a 2018 national survey of physicians, nurse practitioners, and physician assistants. Providers have been anonymized as academic (A1–A7), commercial (C1–C5), and professional (P1) organizations. See Table 3 for mean survey response data. Panel A: Relationship between perceptions of research and clinical practice focus. Panel B: Relationship between perceptions of reputation of institution and ease of access. Abbreviation: CME, continuing medical education.

We used categorical principal components analysis to reduce the 5 characteristics into 2 dimensions. This analysis (see Supplemental Digital Appendix 5 at https://links.lww.com/ACADMED/B27) found that research focus, institution reputation, and faculty reputation clustered closely together (indicating strong correlation among these ratings), and the academic institutions in turn clustered around these characteristics. By contrast, most commercial providers clustered around ease of access. Clinical practice focus was located midway between these clusters, suggesting that this characteristic was shared (to varying degrees) by all provider types.
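To illustrate the dimension-reduction step, the sketch below projects providers onto 2 principal components using standard PCA on mean ratings. Note the simplification: the study used SPSS's categorical PCA, which handles ordinal data more faithfully, and the provider labels and ratings here are hypothetical, chosen only to echo the clustering pattern described above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical mean ratings (rows: providers; columns: the 5 characteristics).
characteristics = ["research", "practice", "access", "inst_rep", "faculty_rep"]
providers = ["A1", "A2", "A3", "C1", "C2", "P1"]
ratings = np.array([
    [4.2, 4.0, 3.2, 4.4, 4.3],  # academic: higher research focus/reputation
    [4.1, 4.1, 3.3, 4.3, 4.2],
    [4.0, 3.9, 3.1, 4.2, 4.1],
    [2.9, 3.8, 4.4, 3.4, 3.5],  # commercial: higher ease of access
    [2.7, 3.5, 4.3, 3.3, 3.4],
    [3.6, 4.0, 3.8, 4.0, 4.0],  # professional organization
])

# Standardize, then project providers onto 2 components (map coordinates).
pca = PCA(n_components=2)
scores = pca.fit_transform(StandardScaler().fit_transform(ratings))
for name, (x, y) in zip(providers, scores):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")

# Component loadings indicate which characteristics drive each dimension.
for label, loadings in zip(("dimension 1", "dimension 2"), pca.components_):
    print(label, dict(zip(characteristics, loadings.round(2))))
```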

Discussion

Through this national survey of 500 physicians, nurse practitioners, and physician assistants, we found that the most important factors influencing selection of CME courses were topic, quality of content, availability of CME/MOC credit, and clinical practice focus. The least important factors were training/alumni, affiliation, and referral frequency. Live activities, online learning, and point-of-care learning were the modalities most likely to be anticipated for future use. The feature of greatest appeal in an online course was that learning could be done on the participant’s own schedule. The feature of lowest appeal in an online course was also the feature of greatest appeal in a live course—namely that the subject was best taught using this modality. Ratings were generally similar across clinician types and age groups. Respondents were more often aware of commercial CME providers than academic CME providers and rated most commercial providers higher in ease of access, but rated most academic providers higher in research focus, clinical practice focus, and reputation.

Limitations

Several aspects of our methods leave open the possibility that our findings do not represent the larger population of U.S. clinicians, including the relatively low response rate (which could bias results to overrepresent clinicians with the time and/or disposition to respond to surveys), the short response period (which may bias results toward early responders), and the Internet-only administration format (which may bias responses toward those more comfortable with online interactions). Additionally, physicians in nonmedicine specialties were not represented. We acknowledge that other broad themes and specific items could have been selected for inclusion in the questionnaire. Tailoring items and response options to each provider type could have enhanced relevance within that provider type but would have precluded pooling results across providers and making comparisons between provider types. Although we pilot tested the survey, we realize that some response options (such as “training/alumni”) could be interpreted in multiple ways; we have avoided imposing a single interpretation on such responses and report the wording as presented in the questionnaire. Finally, as with all surveys, self-reported behaviors may not align with actual practices. Strengths of the survey include the large national sample and the inclusion of nurse practitioners, physician assistants, and physicians in various medicine specialties.

Integration with prior research

Our work aligns with prior research1,15,18–20,22,23,26 indicating that the topic, quality/effectiveness of content, and availability of formal credit are important factors to clinicians when selecting a CME activity. Cost also had moderately high importance in this study and in prior work.19,22,26 Other studies have identified time to complete the activity and scheduling/coordination with local practice needs as important factors,15,19,20,22,23,26 whereas in the present study these factors had rather neutral ratings (net positivity near 0).

Our findings corroborate prior research indicating clinicians’ interest in using online11,15,21,27–31 and point-of-care learning11,21,32 activities for CPD. The relatively low desirability of performance improvement CME has been previously documented.1,19,33–35 Efforts to enhance this type of CME to optimize desirability and effectiveness may be particularly warranted, especially since it has been associated with improved clinical outcomes.36–38

Implications

Physicians, nurse practitioners, and physician assistants are interested in using a variety of instructional modalities for CME. Modalities with relatively high desirability include live, online, point-of-care, print, and national meetings. This gives CME providers substantial freedom in selecting modalities to effectively present content, engage learners, and meet practical/logistical needs. We note that the desirability of webcasts and performance improvement activities was somewhat lower. Additional evidence on how clinicians do and should select among delivery modalities would be helpful. Future research should explore the strengths and weaknesses of various modalities,39 how providers can advantageously match a modality with a given educational objective and context,40 and how to effectively use a given modality once it has been selected.41,42

We found few differences in preferences for CME modalities or influential features according to clinician specialty (family medicine, internal medicine, medical specialty), clinician type (physician, nurse practitioner, physician assistant), or age group. These findings align with those of a survey of U.S. physicians1,22 that also found few differences across specialties or age groups. The only recent survey23 that contrasted clinician types (general practitioners, practice nurses, and pharmacists) found statistically significant differences in the rank order of most modality preferences and influential factors, yet respondents largely agreed on their general position (e.g., top, middle, or bottom third of rankings). These results suggest that, on average, specialty and age may matter relatively little as CME providers anticipate participants’ desires. Of course, the personal preferences of individual clinicians will vary, and their experience will likely be enhanced to the extent that such preferences can be feasibly accommodated (through, for example, flexible programming or adaptive instruction).

Our findings show that academic CME providers are perceived as having a stronger research focus and institutional reputation than commercial providers, but relatively lower ease of access. Academic providers might wish to focus on improving the usability of their activities, while commercial providers might wish to focus on clinical relevance and the evidence base of their content.

The influential factors reported in Figure 1 provide guidance to CME providers on how to make their activities more attractive to clinicians. As has been repeatedly demonstrated,1,15,18–20,22,23,26 a relevant topic is the most important factor. In addition, according to our findings, emphasizing the quality of content, the availability of CME/MOC credit, and a clinical practice focus seem to be most salient. Further study on the factors that influence selection, especially for specific modalities,22 would be useful. Perhaps more importantly, we need research on instructional factors that influence educational outcomes such as engagement, motivation, knowledge, skills, transfer to practice, and long-term retention. Such evidence will provide guidance on how to support health care professionals in efficiently and effectively acquiring the knowledge and skills needed to deliver high-quality care throughout their careers.

Acknowledgments:

The authors would like to thank Marilyn E. Marolt and Michael W. O’Brien for their help in planning, testing, and implementing the survey.

References

1. Cook DA, Blachman MJ, Price DW, West CP, Berger RA, Wittich CM. Professional development perceptions and practices among U.S. physicians: A cross-specialty national survey. Acad Med. 2017;92:1335–1345.
2. Hager M, Russell S, Fletcher SW. Continuing Education in the Health Professions: Improving Healthcare Through Lifelong Learning. 2008. New York, NY: Josiah Macy, Jr. Foundation.
3. McMahon GT. What do I need to learn today?—The evolution of CME. N Engl J Med. 2016;374:1403–1406.
4. Nissen SE. Reforming the continuing medical education system. JAMA. 2015;313:1813–1814.
5. Ely JW, Osheroff JA, Ebell MH, et al. Analysis of questions asked by family doctors regarding patient care. BMJ. 1999;319:358–361.
6. Del Fiol G, Haug PJ. Use of classification models based on usage data for the selection of infobutton resources. AMIA Annu Symp Proc. 2007;2007:171–175.
7. Davis DA, Prescott J, Fordis CM Jr, et al. Rethinking CME: An imperative for academic medicine and faculty development. Acad Med. 2011;86:468–473.
8. Eva KW, Bordage G, Campbell C, et al. Towards a program of assessment for health professionals: From training into practice. Adv Health Sci Educ Theory Pract. 2016;21:897–913.
9. Institute of Medicine Committee on Planning a Continuing Health Professional Education Institute. Redesigning Continuing Education in the Health Professions. 2010. Washington, DC: National Academies Press.
10. Young KJ, Kim JJ, Yeung G, Sit C, Tobe SW. Physician preferences for accredited online continuing medical education. J Contin Educ Health Prof. 2011;31:241–246.
11. Salinas GD. Trends in physician preferences for and use of sources of medical information in response to questions arising at the point of care: 2009-2013. J Contin Educ Health Prof. 2014;34(suppl 1):S11–S16.
12. Forsetlund L, Bjorndal A, Rashidian A, et al. Continuing education meetings and workshops: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2009;2009:CD003030.
13. Grudniewicz A, Kealy R, Rodseth RN, Hamid J, Rudoler D, Straus SE. What is the effectiveness of printed educational materials on primary care physician knowledge, behaviour, and patient outcomes: A systematic review and meta-analyses. Implement Sci. 2015;10:164.
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: Effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;6:CD000259.
15. Price DW, Overton CC, Duncan JP, et al. Results of the first national Kaiser Permanente continuing medical education needs assessment survey. Perm J. 2002;6:76–83.
16. Goodyear-Smith F, Whitehorn M, McCormick R. Experiences and preferences of general practitioners regarding continuing medical education: A qualitative study. N Z Med J. 2003;116:U399.
17. Sargeant J, Curran V, Jarvis-Selinger S, et al. Interactive on-line continuing medical education: Physicians’ perceptions and experiences. J Contin Educ Health Prof. 2004;24:227–236.
18. McLeod PJ, McLeod AH. If formal CME is ineffective, why do physicians still participate? Med Teach. 2004;26:184–186.
19. Stewart GD, Teoh KH, Pitts D, Garden OJ, Rowley DI. Continuing professional development for surgeons. Surgeon. 2008;6:288–292.
20. Vollmar HC, Rieger MA, Butzlaff ME, Ostermann T. General practitioners’ preferences and use of educational media: A German perspective. BMC Health Serv Res. 2009;9:31.
21. Cook DA, Blachman MJ, Price DW, et al. Educational technologies for physician continuous professional development: A national survey. Acad Med. 2018;93:104–112.
22. Cook DA, Price DW, Wittich CM, West CP, Blachman MJ. Factors influencing physicians’ selection of continuous professional development activities: A cross-specialty national survey. J Contin Educ Health Prof. 2017;37:154–160.
23. Cunningham DE, Alexander A, Luty S, Zlotos L. CPD preferences and activities of general practitioners, registered pharmacy staff and general practice nurses in NHS Scotland—A questionnaire survey. Educ Prim Care. 2019;30:220–229.
24. Louviere J, Lings I, Islam T, Gudergan S, Flynn T. An introduction to the application of (case 1) best-worst scaling in marketing research. Int J Mark Res. 2013;30:292–303.
25. Flynn TN, Louviere JJ, Peters TJ, Coast J. Best–worst scaling: What it can do for health care research and how to do it. J Health Econ. 2007;26:171–189.
26. Stancic N, Mullen PD, Prokhorov AV, Frankowski RF, McAlister AL. Continuing medical education: What delivery format do physicians prefer? J Contin Educ Health Prof. 2003;23:162–167.
27. Olivieri JJ, Knoll MB, Arn PH. Education format and resource preferences among registrants of a pediatric-focused CME website. Med Teach. 2009;31:e333–e337.
28. Kempkens D, Dieterle WE, Butzlaff M, et al. German ambulatory care physicians’ perspectives on continuing medical education—A national survey. J Contin Educ Health Prof. 2009;29:259–268.
29. Harris JM Jr, Sklar BM, Amend RW, Novalis-Marine C. The growth, characteristics, and future of online CME. J Contin Educ Health Prof. 2010;30:3–10.
30. Casebeer L, Bennett N, Kristofco R, Carillo A, Centor R. Physician internet medical information seeking and on-line continuing education use patterns. J Contin Educ Health Prof. 2002;22:33–42.
31. Sinusas K. Internet point of care learning at a community hospital. J Contin Educ Health Prof. 2009;29:39–43.
32. Cook DA, Sorensen KJ, Hersh W, Berger RA, Wilkinson JM. Features of effective medical knowledge resources to support point of care learning: A focus group study. PLoS One. 2013;8:e80318.
33. Cook DA, Blachman MJ, West CP, Wittich CM. Physician attitudes about maintenance of certification: A cross-specialty national survey. Mayo Clin Proc. 2016;91:1336–1345.
34. Cook DA, Holmboe ES, Sorensen KJ, Berger RA, Wilkinson JM. Getting maintenance of certification to work: A grounded theory study of physicians’ perceptions. JAMA Intern Med. 2015;175:35–42.
35. Stephenson CR, Wittich CM, Pacyna JE, Wynia MK, Hasan O, Tilburt JC. Primary care physicians’ perceptions of practice improvement as a professional responsibility: A cross-sectional study. Med Educ Online. 2018;23:1474700.
36. Bird GC, Marian K, Bagley B. Effect of a performance improvement CME activity on management of patients with diabetes. J Contin Educ Health Prof. 2013;33:155–163.
37. Zisblatt L, Kues JR, Davis N, Willis CE. The long-term impact of a performance improvement continuing medical education intervention on osteoporosis screening. J Contin Educ Health Prof. 2013;33:206–214.
38. Wiggins RE Jr, Etz R. Assessment of the American Board of Ophthalmology’s Maintenance of Certification Part 4 (improvement in medical practice). JAMA Ophthalmol. 2016;134:967–974.
39. Cook DA. Where are we with Web-based learning in medical education? Med Teach. 2006;28:594–598.
40. Cook DA. The value of online learning and MRI: Finding a niche for expensive technologies. Med Teach. 2014;36:965–972.
41. Cook DA. The research we still are not doing: An agenda for the study of computer-based learning. Acad Med. 2005;80:541–548.
42. Cook DA. The failure of e-learning research to inform educational practice, and what we can do about it. Med Teach. 2009;31:158–162.

Copyright © 2020 by the Association of American Medical Colleges