Dr. David Price and his colleagues,1 in their article in this issue of Academic Medicine, describe their effort to collect and summarize the findings from recent studies regarding the association of Maintenance of Certification with physicians’ learning and improvements in care. These varied continuing medical education (CME) activities included self-assessment, simulation, and other modalities. Evaluations of these activities demonstrated that physicians can and do learn and that they often change their practice because of these educational interventions. Although this finding is expected and reassuring, it may be the right answer to the wrong question.
We know that physicians are motivated to achieve mastery but that this motivation is subject to countless competing pressures for their time and attention. The breadth and depth of new information make it very difficult for clinicians to reliably manage their own competency. Overconfidence and poor self-awareness can lead physicians to make errors of which they may be unaware and to be complacent about their own professional development. The process of unlearning outdated practices and then relearning new practices necessitates real effort—effort that accomplished professionals are unlikely to apply if they are overwhelmed, are burned out, or believe that they are already practicing to the best of their ability.
Just as meaningful learning does not occur without effortful work, effective assessment is time-consuming and challenging. We clearly need to evolve our profession’s approach to assessment as it is currently deployed by the certifying boards. This evolution is occurring, facilitated in part by increasingly sophisticated educational technology. The certifying boards of the American Board of Medical Specialties and the American Osteopathic Association are currently debating these issues as they reconsider their approach to supporting professional competency. Several certifying boards are debating whether to focus on assessing and ensuring competence in areas that can be measured or to focus on encouraging excellence and building an educational framework that gives useful feedback to clinicians who participate. Would we, and our patients, really benefit if the certifying boards conceded to the anticertification rancor by lowering standards?
If the certifying boards assume responsibility for a system of accountability, then each board would optimally set the competency expectations for physicians in their respective specialty areas; allow physicians to self-identify their core scope of practice within that discipline; assume responsibility for summative assessment (increasingly deployed longitudinally using educational technology); continuously provide formative feedback to participants; and link physicians to recommended professional development activities when needed, recognizing engagement in a spectrum of learning activities, including those that help physicians reflect on and improve their practice. Physicians who engage, achieve those competency thresholds, and provide performance data to registries would likely have little interaction with their board, while those who are not achieving minimal competency would be supported through a personal learning plan or relinquish their certification.
The certifying boards’ responsibility would encompass monitoring the changing practice environment and reassessing the meaning of continuing competence. As the practice patterns of many individuals narrow over time and more subspecialties emerge, what is measured and what is required to demonstrate competence need to evolve; ultimately, some of these assessments will be made passively using patient outcomes data. In addition, interprofessional collaborative practice is increasingly integrated into our health system; thus, we will need to consider and develop means of assessing team competence in addition to individual competence.
If our colleagues at the certifying boards maintain standards for board certification, then we as a profession will be asked to engage for the benefit of our community and the patients we serve. Participating in an accountability system may not personally benefit each of us; however, our participation is critical to make the system meaningful for all. This accountability is essential if we are to maintain the public’s trust. Our contract with society depends on the integrity of each clinician and the profession as a whole; it involves placing patients’ interests first and setting and maintaining standards of competence and integrity.
Continuing certification is one aspect of a much bigger framework for managing professional competence in a changing environment. CME plays a key role in that process. Education can effectively address health care challenges, support lifelong learning and clinician well-being/resilience, and ultimately transform health care. Flexibility in educational design to meet learners’ needs is essential to relieving the burden, motivating clinicians to engage in this process, and improving the long-term effectiveness of education. Institutions’ leadership and investment in their people are key to achieving these aims.2
We have learned that professional development is most effective when the clinician is engaging in it for a purpose and when the material is meaningful and relevant to her or his scope of practice; is presented by a trusted authority; engages learners actively; and includes feedback, reflection, and reinforcement. There need to be mechanisms in place to measure changes in individual performance, processes of care, patient safety, and health outcomes. In an evolved system, a community of practice supports clinicians not for their resistance to change but for their active participation in interventions that drive measurable improvement. Clinicians should engage in CME activities intentionally; they should be able to easily find activities that meet their needs; there should be a convenient, online system for tracking and reporting participation; and participation should count for multiple regulatory expectations. CME systems in the United States are quickly evolving to meet these expectations.
In their article, Price and colleagues note the challenges in translating knowledge to practice and sustaining that change. This is the role of CME—to offer clinicians an educational home, where they can build longitudinal relationships with colleagues and participate in multiple interventions that support change.
Continuing certification needs to be sensitive to the burdens it puts on physicians without compromising its purpose of making accurate summative decisions. Compelling physicians to participate in mandated education engenders resentment and is therefore much less likely to motivate or facilitate change. For this reason, compliance education tends to fail at its very foundation. Instead, in addition to completing necessary assessments, physicians should be given the choice of how to demonstrate their competency, intentionally choose activities, take ownership of their learning agenda, and promote their self-awareness. CME can support this process. Engagement in meaningful education can help to restore joy in learning and in our profession. Aligning CME and continuing certification can help relieve burdens: The more certifying boards recognize participation in meaningful learning programs where CME (and continuing certification) is integrated with routine daily practice, the less burdensome this process will be. Participation data can be readily and securely shared between accredited educational providers and the boards using existing systems. Price and colleagues note the critical nature of relevance in educational design. This relevance can be addressed by recognizing the contributions and expertise of local accredited education providers and professional societies that work with their clinicians to design practice-relevant activities.
As Price and colleagues note, continuing certification can be associated with positive outcomes, but, as is typical of educational interventions, the evidence does not establish causation. As always, future research can help determine the comparative effectiveness and efficiency of CME interventions in improving patient safety, patient–physician communication, patient engagement, functional outcomes, team-based care, and population/community health outcomes. Each of these aspects of effective educational design is addressed in new CME accreditation criteria.3 Further collaborations with emerging physician–leaders interested in education, quality improvement, program evaluation, and assessment research are essential.
It is useful to be reminded that we all have the capacity to learn and improve and that educational programs can be effective. But the question we should ask is not, Can we change by participating in Maintenance of Certification and educational activities? Instead, it is, Who is accountable for our individual and collective performance? Perhaps the legacy of this and related work is that it will encourage us to consider our shared responsibilities for redesigning professional development. We are privileged to serve as members of the medical profession. By assuming responsibility for each other’s attainment and continuing competency, we manifest our commitment to our profession and to the people who trust us in their time of need.