In 1992, the Evidence-Based Medicine Working Group published its groundbreaking article describing a new paradigm for medical practice.1 This new paradigm stressed the review and analysis of clinical evidence rather than overreliance on “intuition, unsystematic clinical experience, and pathophysiologic rationale as sufficient grounds for clinical decision making.”1 The group's work became a foundation for educating resident physicians in evidence-based medicine and improving patient safety. More recently, Edgar and colleagues explained how the Accreditation Council for Graduate Medical Education's (ACGME's) Next Accreditation System focused on creating milestones intended to track resident physicians' progression toward competency and professional development.2 Competency and development are understood to be achieved through practice-based learning and improvement, with a focus on evidence-based learning and informed practice as well as reflective practice and a commitment to personal growth.2
Physician assistants (PAs) and NPs are increasingly being used to fill gaps in physician shortage areas; the ACGME milestones therefore serve as a suitable standard to guide their educational development. Conferences give groups of like-minded professionals a means to gather, share ideas, and receive professional development opportunities. However, lecture-based conference models may not provide the practice-based development opportunities that can lead to fulfillment of the ACGME milestones.
This article describes the findings from a competency-based conference model designed to provide evidence-based continuing education to PAs and NPs. We wanted to know whether a competency-based continuing medical education (CME) conference could improve skills and knowledge. We also wanted to explore participant perceptions of the competency-based CME conference model.
This research used a mixed-method design to determine the effectiveness and value of a competency-based CME conference for PAs and NPs (N = 48) in an urban academic healthcare facility. This protocol included a knowledge pre- and post-test, a procedural skill pre- and post-test for six clinical skills, and an open-ended survey to assess prominent learning points. The sample for this research protocol consisted of 39 PAs and 9 NPs. Participant demographics varied, with 73% female, 27% male, 88% working in New York, 8% in New Jersey, and 4% in California. PAs and NPs in this sample had a median of 2 years' experience working in either an ED or an ICU. Pre-post differences were assessed with the paired t-test, and the Wilcoxon signed-rank test was used to confirm the parametric findings. All P values are two-sided, with statistical significance evaluated at the .05 alpha level. All quantitative analyses were performed in SPSS Version 24. Our facility's institutional review board approved this research protocol.
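The pre-post analysis described above can be sketched as follows. This is a minimal illustration only: the scores below are hypothetical, and the study's own analyses were run in SPSS Version 24, not Python.

```python
# Sketch of the pre-post analysis: paired t-test as the primary test,
# Wilcoxon signed-rank as the nonparametric confirmation.
# The scores here are hypothetical, for illustration only.
from scipy import stats

pre = [55, 60, 48, 70, 62, 58, 65, 50, 72, 61]
post = [68, 75, 60, 82, 70, 66, 80, 64, 85, 73]

# Primary analysis: two-sided paired t-test
t_stat, t_p = stats.ttest_rel(post, pre)

# Confirmation: two-sided Wilcoxon signed-rank test on the same pairs
w_stat, w_p = stats.wilcoxon(post, pre)

print(f"Paired t-test: t = {t_stat:.2f}, p = {t_p:.4f}")
print(f"Wilcoxon signed-rank: W = {w_stat:.1f}, p = {w_p:.4f}")

# Concordance check at the .05 alpha level
print("Concordant at alpha = .05:", (t_p < 0.05) == (w_p < 0.05))
```

Running both tests on the same pairs mirrors the study's approach: the Wilcoxon test does not assume normally distributed differences, so agreement between the two tests supports the parametric finding.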
Evidence-based research summaries and study guides based on current research articles and best practices were provided to the conference participants at least 6 weeks before the conference. Following time to review the study materials, a web-based link to the knowledge pretest was provided on topics related to the six clinical skills addressed in the conference. Upon arriving at the conference, each consenting PA or NP was randomly assigned to a particular skill station. Performance was preassessed by expert attending physicians and PAs using the appropriate instrument for that clinical procedure.
Following the preassessment, participants began the instructional sessions. They received a demonstration and opportunities to practice multiple times using partial-task simulators and clinical equipment. Participants rotated through each of the six stations. At the end of the rotation period, each PA or NP returned to the station where they were preassessed and completed a follow-up assessment. Participants also completed the knowledge post-test through a web-based link provided to them after the procedural stations. Finally, participants completed an open-ended survey to assess the prominent learning points obtained from the conference. Our aim was to use the latest evidence for each procedure. As described below, we used a validated instrument if one was available. We added detail to a checklist if the existing version omitted a known best practice, or if a lack of detail could lead a grader to score a task within a major step subjectively and misinterpret it. However, we did not deviate from the major steps.
- Airway management and intubation. We used a modified version of the validated checklist developed by Way and colleagues.3
- Paracentesis. We used the checklist for paracentesis validated by Riesenberg and colleagues.4
- Lumbar puncture. We used the checklist for lumbar puncture validated by Berg and colleagues.5
- Focused Assessment with Sonography for Trauma (FAST). We followed the step-by-step instructions in the 2014 FAST examination practice guideline developed by the American Institute of Ultrasound in Medicine and the American College of Emergency Physicians.6
- Central venous catheter (central line) insertion. We used a modified version of the checklist developed by Evans and colleagues.7
- Tube thoracostomy (chest tube) insertion. We used a checklist adapted from See and colleagues.8
When a validated instrument was not available, we created an unvalidated checklist based on step-by-step guidance from major specialty organizations (FAST) and major medical publications such as the New England Journal of Medicine. All checklists were reviewed by the emergency medicine faculty facilitating the conference.
The knowledge test consisted of 20 multiple-choice questions assessing core medical knowledge for the six procedures. The test was available via desktop computer or mobile device and was divided into four sections of five questions each. Participants had 5 minutes to complete each section before moving on to the next and could not return to a section once it was completed. The questions for the knowledge test were created from the study guides; faculty reviewed the material and included the declarative information that they felt was most important. The knowledge test was not validated.
Our open-ended survey focused on questions that would assist us with answering the second research question related to participant perceptions of a competency-based CME conference and their understanding of how it led to self-reflection and improvement.
As shown in Table 1, significant increases were noted in knowledge scores and in skill performance across all six procedures. For every comparison, the paired t-test and Wilcoxon signed-rank test produced concordant results. As shown in Table 2, the overall response rate to the open-ended survey was excellent, with 45 of 48 PAs and NPs (93.8%) participating. Because participants could opt out of responding to any question, we show the response rate for each area of the survey.
The purpose of this research was to assess the effectiveness and value of a competency-based CME conference for PAs and NPs. The answer to the research question, “Are there improvements in clinical knowledge and skills following participation in an evidence-based, competency-based CME educational conference?” is yes. As shown in Table 1, participants showed significant improvement on the multiple-choice test and all six procedures. Participants responded that they acquired new techniques and knowledge (55%) or improved current knowledge and skills (40%). Because 73% of those completing the survey responded that their purpose for participating in this conference was to improve skills or knowledge, we felt that these results were meaningful for the learners. Interestingly, 72% of participants found that during the conference, knowledge inadequacies caused stress. As a result, 90.2% of all participants sought the advice of other participants when they were unsure of something. This is not surprising, as learners in safe learning environments will often turn to others to assist them with working through difficult learning situations.9
Each of the skills and procedures was based on the latest evidence and may not have coincided with the way the PA or NP learned the skill. As a result, nearly 48% of the participants found that improving their skills was challenging. Interestingly, after the conference, 81% of participants felt that technical skill remained an area they could improve upon when they returned to their facility. This incongruence likely stems from the need for further practice opportunities for procedural skill mastery that the conference could not provide because of time constraints. It also could be related to the PAs' or NPs' number of years in practice.
Hatmi and colleagues found that group discussions may be more effective than traditional conferences for teaching evidence-based learning.10 We suggest that our competency-based conference model enabled participants to reach levels 1 and 2 of Kirkpatrick's educational outcomes, with gains in confidence (level 1) and knowledge (level 2) through demonstration, practice, and feedback.11 Kirkpatrick's level 3 outcomes also may have been reached, given the observed change in practice; however, our observations were limited to a classroom setting, and these changes may not translate into changes in patient care in the clinical setting (level 4).
LIMITATIONS AND SUGGESTIONS FOR RESEARCH
Limitations of this pilot research include its single-institution setting and small sample. Because this was a single intervention, and one that used only simulation models, we cannot infer competence or support credentialing through this intervention.
Our findings add to the literature by demonstrating that a competency-based conference model can enable continuing education, reflective practice, and a commitment to personal growth and achievement in accordance with the ACGME milestones. Unlike lecture-based conferences, this model provided an experiential-based learning environment that required participants to engage with the content and constantly reflect on their own practice and ways to improve it. Competency-based conferences such as this one could become the wave of the future to meet the continuing education requirements of PAs and NPs.
1. Evidence-Based Medicine Working Group. Evidence-based medicine. A new approach to teaching the practice of medicine. JAMA
2. Edgar L, Roberts S, Holmboe E. Milestones 2.0: a step forward. J Grad Med Educ
3. Way DP, Panchal AR, Finnegan GI, Terndrup TE. Airway management proficiency checklist for assessing paramedic performance. Prehosp Emerg Care
4. Riesenberg LA, Berg K, Berg D, et al. The development of a validated checklist for paracentesis: preliminary results. Am J Med Qual
5. Berg K, Riesenberg LA, Berg D, et al. The development of a validated checklist for adult lumbar puncture: preliminary results. Am J Med Qual
6. American Institute of Ultrasound in Medicine; American College of Emergency Physicians. AIUM practice guideline for the performance of the focused assessment with sonography for trauma (FAST) examination. J Ultrasound Med
7. Evans LV, Morse JL, Hamann CJ, et al. The development of an independent rater system to assess residents' competence in invasive procedures. Acad Med
8. See KC, Jamil K, Chua AP, et al. Effect of a pleural checklist on patient safety in the ultrasound era. Respirology
9. Clapper TC. Cooperative-based learning and the zone of proximal development. Simul Gaming
10. Hatmi ZN, Tahvildari S, Dabiran S, et al. Teaching evidence-based medicine more effectively. Acta Med Iran
11. Kirkpatrick DL. Evaluating Training Programs: The Four Levels. San Francisco, CA: Berrett-Koehler; 1994.