The ABMS MOC Part III Examination: Value, Concerns, and Alternative Formats

Hawkins, Richard E. MD; Irons, Mira Bjelotomich MD; Welcher, Catherine M.; Pouwels, Mellie Villahermosa MA; Holmboe, Eric S. MD; Reisdorff, Earl J. MD; Cohen, Joshua M. MD, MPH; Dentzer, Susan; Nichols, David G. MD, MBA; Lien, Cynthia A. MD; Horn, Thomas D. MD; Noone, R. Barrett MD; Lipner, Rebecca S. PhD; Eva, Kevin W. PhD; Norcini, John J. PhD; Nora, Lois Margaret MD, JD, MBA; Gold, Jeffrey P. MD

doi: 10.1097/ACM.0000000000001291

This article describes the presentations and discussions at a conference co-convened by the Council on Medical Education of the American Medical Association (AMA) and the American Board of Medical Specialties (ABMS). The conference focused on the ABMS Maintenance of Certification (MOC) Part III Examination. Reflecting the conference agenda, this article covers the value of and evidence supporting the examination, as well as concerns about its cost and, given the current format, its relevance. In addition, the article outlines alternative formats for the examination that four ABMS member boards are currently developing or implementing. Lastly, the article presents contrasting views on the approach to professional self-regulation: one view operationalizes MOC as a high-stakes, pass–fail process, while the other holds that MOC should be an organized approach to supporting continuing professional development and improvement. The authors hope to begin a conversation among the AMA, the ABMS, and other professional stakeholders about how knowledge assessment in MOC might align with the MOC program’s educational and quality improvement elements and best meet the future needs of both the public and the physician community.

R.E. Hawkins is vice president, Medical Education Outcomes, American Medical Association, Chicago, Illinois.

M.B. Irons is senior vice president, Academic Affairs, American Board of Medical Specialties, Chicago, Illinois.

C.M. Welcher is senior policy analyst, Medical Education Outcomes, American Medical Association, Chicago, Illinois.

M.V. Pouwels is director, Medical Education Collaborations, American Medical Association, Chicago, Illinois.

E.S. Holmboe is senior vice president, Milestone Development and Evaluation, Accreditation Council for Graduate Medical Education, Chicago, Illinois.

E.J. Reisdorff is executive director, American Board of Emergency Medicine, East Lansing, Michigan.

J.M. Cohen is director, Education, Department of Neurology, Mount Sinai Continuum; Headache Fellowship program director, Headache Institute and Adolescent Headache Center, Mount Sinai Roosevelt Hospital; and assistant professor of neurology, Icahn School of Medicine at Mount Sinai, New York, New York.

S. Dentzer is senior policy adviser, Robert Wood Johnson Foundation, Washington, DC.

D.G. Nichols is president and chief executive officer, American Board of Pediatrics, Chapel Hill, North Carolina.

C.A. Lien is professor and vice chair for academic affairs, Department of Anesthesiology, Weill Cornell Medical Center, New York, New York.

T.D. Horn is executive director, American Board of Dermatology, Newton, Massachusetts.

R.B. Noone is executive director, American Board of Plastic Surgery, Philadelphia, Pennsylvania.

R.S. Lipner is senior vice president, Evaluation, Research and Development, American Board of Internal Medicine, Philadelphia, Pennsylvania.

K.W. Eva is associate director and senior scientist, Centre for Health Education Scholarship, and professor and director of education research and scholarship, Department of Medicine, University of British Columbia, Vancouver, British Columbia, Canada.

J.J. Norcini is president and chief executive officer, Foundation for Advancement of International Medical Education and Research, Philadelphia, Pennsylvania.

L.M. Nora is president and chief executive officer, American Board of Medical Specialties, Chicago, Illinois.

J.P. Gold is chancellor, University of Nebraska Medical Center, Omaha, Nebraska.

Funding/Support: None reported.

Other disclosures: R.E.H., C.M.W., and M.V.P. are employees of the American Medical Association. E.S.H. is an employee of the Accreditation Council for Graduate Medical Education. L.M.N. and M.B.I. are employees of the American Board of Medical Specialties. R.E.H. and E.S.H. are coeditors of a textbook on the assessment of clinical competence for which they receive royalties. C.A.L. is vice president of the American Board of Anesthesiology.

Ethical approval: Reported as not applicable.

Disclaimer: The content reflects the views of the authors and does not necessarily reflect those of their respective organizations, nor of the American Medical Association Council on Medical Education or the American Board of Medical Specialties, which cosponsored the joint conference in June 2014.

Previous presentations: The authors were presenters or discussants during the joint conference in June 2014 that was cosponsored by the American Medical Association Council on Medical Education and the American Board of Medical Specialties.

Correspondence should be addressed to Richard E. Hawkins, Medical Education Outcomes, American Medical Association, 330 N. Wabash Ave., Suite 39300, Chicago, IL 60611-5885; telephone: (312) 464-5058; e-mail: Richard.Hawkins@ama-assn.org.

The American Board of Medical Specialties (ABMS) adopted a formal Maintenance of Certification (MOC) program in 2000 as a means of supporting professional self-regulation in recognition that a periodic cognitive examination alone is insufficient to assure the public of continued physician competence. The goal of MOC, which includes the six ABMS/Accreditation Council for Graduate Medical Education general competencies, is to provide a comprehensive approach to physician lifelong learning, self-assessment, and practice improvement through its four-part framework. Part I addresses professionalism and professional standing; Part II, lifelong learning and self-assessment; Part III, assessment of knowledge, judgment, and skills; and Part IV, improvement in medical practice.1,2

The ABMS has received mixed feedback on MOC. Some stakeholders have encouraged increased rigor in MOC standards,1 while others have criticized the cost and time required to meet MOC requirements, the redundancy with other regulatory and health system requirements, and what they perceive as a lack of relevance to clinical practice.1,3–5 Other concerns include MOC’s emphasis on individual clinical performance rather than on team-based practice or the impact of health care systems on patient outcomes, which would better reflect current trends in care delivery.6 Some criticism is directed specifically at Part III, the secure examination, because of its perceived lack of relevance to how physicians apply knowledge in actual clinical practice.4 Many physicians are concerned that failing the examination may cause them to lose credentials or privileges, or may prompt them to retire early rather than retake the examination.7,8

In this article, we summarize the viewpoints presented at a June 2014 conference, co-convened by the Council on Medical Education of the American Medical Association (AMA) and by the ABMS. Members of the Council on Medical Education (hereafter, simply the “Council”) meet regularly with ABMS leaders to communicate questions and concerns about MOC and to learn of developments in the 24 ABMS member boards’ MOC programs. The purpose of the conference was to engage members of the two organizations in deeper, more constructive conversations regarding MOC Part III. The Council members sought, in particular, the following: (1) to better understand the value of and evidence underlying the Part III examination, (2) to address questions and concerns about the examination, and (3) to learn from representatives of the ABMS member boards about alternative Part III examination formats. An additional conference goal was to begin a conversation about how the Part III requirement may best meet the needs of the public and the profession. Importantly, we note that this article reflects the various viewpoints discussed at the meeting but does not necessarily reflect the opinions or policies of the sponsoring organizations or conference leaders.

Value of the Current Examination Format

The value of the MOC Part III examination emanates from its place in professional self-regulation, the evidence underlying the examination as a marker of health care quality, and its role in the measurement of clinical competence.

Professional self-regulation and public trust

Society has granted physicians the autonomy to establish the standards for entry into the profession of medicine and for ensuring maintenance of competence throughout physicians’ practice careers. Members of the general public do not know the intricacies of ABMS specialty board certification or MOC, and they are often surprised to learn that board certification and MOC are not required of all physicians.9,10 As the recipients of and payers for health care, members of the public deserve to be able to trust in the preparation, professionalism, and ongoing competence of physicians, which certification and MOC are intended to provide. Members of the public are familiar with the concept of a secure examination (e.g., the SAT and ACT college admission examinations), and they are aware that other professionals must complete examinations (e.g., lawyers take a bar examination). Likewise, the public is familiar with the notion of needing to stay up-to-date in a field via some form of continuous learning and demonstration of mastery. MOC is the public’s assurance that a physician is engaged in continuous professional development, which matters especially now, as systems of care become more complex and medical knowledge and technology advance at unprecedented rates.

Evidence supporting the current MOC Part III examination format

Evidence supporting the current examination format falls into two categories: (1) evidence that shows declining competence, particularly in knowledge, with time; and (2) evidence linking certification examinations to quality of care and patient outcomes.

Research showing declines in the competence and performance of many physicians as a function of time in practice provides a rationale for periodically assessing physicians in practice. One review of 62 studies has shown that, more often than not, increasing time in practice is associated with a decreasing knowledge base, lower adherence to evidence-based standards of care, and worse patient outcomes.11 Studies included in this review show that variability in performance increases in older cohorts of physicians12 and that some physicians do perform exceptionally well over the course of their careers; however, the studies indicate that on average, both general medical and surgical knowledge, as well as knowledge in more specific areas such as blood product transfusion and emergency contraception, tend to decrease over time.11,13–15 Declining performance on knowledge examinations primarily reflects failure to acquire new or changing knowledge rather than the loss of a more stable knowledge base.16 More recent research demonstrates mixed findings regarding physician performance as a function of time in practice17 and, importantly, that consistent participation in MOC is associated with an increasing knowledge base over time.18

Numerous studies show that certifying examination performance is related to quality of care and patient outcomes.1 A review of the literature published prior to July 1999 found 16 positive associations between board certification and health care quality.19 Board-certified physicians were more likely to comply with standards of care and have better outcomes for conditions within their specialty. Since 1999, an additional 15 studies have shown similar relationships between certification status and quality of care.20–34 These studies provide examples of positive associations between physician performance on initial board certification or MOC examinations and quality of care for patients with acute illness, chronic disease, and preventive care.

The challenges of assessing knowledge, diagnostic reasoning, and judgment

The purpose of the certification and MOC examinations is to assure the public that physicians attain and maintain the essential specialty-specific, practice-related, and practice-environment-related knowledge necessary to safely and responsibly diagnose and treat patients within their specialty2; therefore, the certification and MOC processes are designed to evaluate the knowledge base, diagnostic reasoning, and clinical judgment expected of the physician in the broad domain of his or her discipline. Further, even though some physicians perceive the MOC examination as purely a test of medical knowledge or fact recall, it is also meant to assess each physician’s ability to make appropriate diagnostic and management decisions that have important consequences for patients. In fact, the 2015 standards for MOC35 support the episodic assessment of knowledge, judgment, and skill using the more sophisticated tiers of Bloom’s taxonomy.36 Still, the extent to which the examination addresses pure knowledge versus higher-order cognitive skills varies across ABMS member boards. Some of the examinations require recognition of common as well as rare clinical problems for which patients may consult a certified specialist, and many of the examinations include multiple clinical vignettes that involve patient management tasks. These vignettes function as low-fidelity simulations used to assess clinical decision making.

One challenge to the current approach to MOC examinations is that practice patterns change over the years. Examination measurement standards call for “a thorough and explicit definition of the content of the domain,”37 yet physicians often customize their practices beyond the general content and practice of the specialty,38 diminishing the relevance of a general examination. A review of the examination’s content domain is therefore appropriate. Additionally, the MOC Part III examination could cover the physician’s interaction with the practice environment through, perhaps, computer-based case simulations or multiple-choice questions (MCQs) in the form of vignettes that immerse the examinee in specific practice contexts.

Concerns Regarding Cost and Relevance

Concerns about the MOC Part III examination have focused on its cost and its perceived lack of relevance to physicians’ actual practices. Many physicians believe that the time away from practice and the money spent on study materials and/or test preparation courses could be better spent elsewhere.39 One concern is that the material included on the test is broader than what typically presents in most physicians’ practices. Although broad content coverage may be acceptable for physicians entering specialty practice (and appropriate for initial certification), physicians’ practices narrow over time38 (as alluded to above), and assessing the field broadly does not accurately reflect the knowledge required to practice within a narrower scope.

To better understand physicians’ experiences with the MOC Part III examination, the AMA Young Physicians Section Committee on Maintenance of Certification and Maintenance of Licensure sought feedback through a survey sent via e-mail to over 73,000 physicians (contact the authors for further information). Though the overall response rate was low (less than 1.5%), 1,050 physicians who were participating in MOC provided feedback. The majority of respondents (82%, n = 861) held time-limited certificates (obligating them to meet MOC requirements to maintain their board certification status), and their responses included perspectives on over 30 specialty and subspecialty boards.

Approximately half of survey participants (50%, n = 521) had completed a full cycle of MOC (i.e., MOC Parts I–IV), while 69% (n = 727) had completed Part III. Most of those who had taken the examination (63%, n = 458) felt it should be more relevant to clinical practice. Although perceptions of relevance varied by specialty, less than half of the respondents (43%, n = 313) reported that the current examination was relevant to their specific clinical practice. Likewise, less than half (49%, n = 356) believed that completing MOC Part II educational activities helped them prepare for the examination, and only 22% (n = 160) agreed that the feedback from the examination was relevant to their practice. A majority of respondents (60%, n = 436) agreed that integrating decision support or other point-of-care support to reflect daily practice would help make the examination more practice relevant. Many (63%, n = 457) indicated that Internet access during the examination would be relevant to the way they search for and apply information, and some (24%, n = 174) advocated changing to an “open-book” format to better represent real-life decisions in daily practice.
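
A quick consistency check of these figures is possible in a few lines of code. In the sketch below, the denominators (1,050 total respondents for the first three items and 727 Part III examinees for the remainder) are assumptions inferred from the reported counts, not published survey details.

```python
# Cross-check of the survey figures reported above. Denominators are
# inferred assumptions: 1,050 total respondents for MOC-wide items,
# 727 Part III examinees for examination-specific items.
reported = [
    # (item, count, reported %, assumed denominator)
    ("held time-limited certificates",     861, 82, 1050),
    ("completed a full MOC cycle",         521, 50, 1050),
    ("completed Part III",                 727, 69, 1050),
    ("exam should be more relevant",       458, 63, 727),
    ("exam relevant to own practice",      313, 43, 727),
    ("Part II helped exam preparation",    356, 49, 727),
    ("exam feedback relevant to practice", 160, 22, 727),
    ("decision support would help",        436, 60, 727),
    ("Internet access would be relevant",  457, 63, 727),
    ("favored an open-book format",        174, 24, 727),
]

for item, count, pct, denom in reported:
    computed = round(100 * count / denom)
    flag = "ok" if computed == pct else f"mismatch: {computed}%"
    print(f"{item:36s} {count:4d}/{denom:4d} -> {computed:2d}% ({flag})")
```

Under these assumed denominators, every reported percentage matches its count to the nearest whole percent.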

Alternatives to (or Variations on) the Current MOC Part III Examination Format

In response both to input from physicians and to emerging views regarding how to increase the relevance or educational utility of the examination, several ABMS member boards are implementing alternative formats for the examination, consistent with the Standards for the ABMS Program for MOC.35 Here we discuss alternative formats that the American Board of Anesthesiology (ABA), the American Board of Dermatology (ABD), the American Board of Plastic Surgery (ABPS), and the American Board of Internal Medicine (ABIM) are developing or implementing.

The ABA and its revised MOC program

Acting on feedback from physicians board certified in anesthesiology, and taking advantage of sophisticated information technology, the ABA explored opportunities to enhance the MOC Program for Anesthesiology (MOCA) through an initiative known as MOCA 2.0.40

The goals of MOCA 2.0 are to promote continuous lifelong learning; increase the relevance of MOCA to the practices of “diplomates”; establish MOCA as professionally and publicly credible; integrate Parts II, III, and IV of MOC; and include continuous formative (“low-stakes”) assessment. In line with these goals, the ABA sought to document participating physicians’ cognitive expertise in a more engaging and relevant way. Specifically, the ABA explored a tool for the ongoing assessment of diplomates’ knowledge that would provide more extensive, item-specific feedback than that provided via the secure examination. Such a tool must engage busy physician learners in active practice. The ABA designed the tool to provide focused content that could be reviewed periodically, allowing physicians to refresh their knowledge while documenting their cognitive expertise. The result of the ABA’s exploration and discussions is the MOCA Minute application.40

MOCA Minute was introduced as a pilot in January 2016 to replace the cognitive examination taken every 10 years. Diplomates enrolled in MOCA 2.0 answer 120 questions annually. The questions include “core” information essential for the practice of anesthesiology as well as topics reflective of an anesthesiologist’s area of subspecialization. Once the diplomate accesses a question, he or she has one minute to answer it. As soon as the question is answered, the diplomate learns whether or not the selected response is correct. This feedback is accompanied by a critique explaining the answer, two or three references, and a one-sentence summary of the material. Previously answered questions, along with their references, critiques, and peer performance data, can be reviewed at any time to reinforce learning.
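
Viewed as a protocol, the MOCA Minute flow just described (120 questions per year, a one-minute response window, and immediate feedback with a critique, references, and a one-sentence summary) can be sketched as a simple data model. Everything below is invented for illustration and does not reflect the ABA’s actual implementation.

```python
from dataclasses import dataclass, field

ANNUAL_QUESTIONS = 120   # questions answered per year, per the text above
SECONDS_TO_ANSWER = 60   # "one minute to answer" each question

@dataclass
class Question:
    stem: str
    options: list[str]
    correct_index: int
    critique: str          # explanation of the answer
    references: list[str]  # two or three references
    summary: str           # one-sentence summary of the material

@dataclass
class Attempt:
    question: Question
    selected_index: int
    correct: bool

@dataclass
class DiplomateRecord:
    history: list[Attempt] = field(default_factory=list)

    def answer(self, question: Question, selected_index: int) -> Attempt:
        """Record a response and return it for immediate feedback display."""
        attempt = Attempt(question, selected_index,
                          selected_index == question.correct_index)
        self.history.append(attempt)
        return attempt  # caller shows correctness, critique, references, summary

    def review(self) -> list[Attempt]:
        """Previously answered questions remain reviewable at any time."""
        return list(self.history)
```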

The initial MOCA Minute was a free Internet-based application designed to help ABA diplomates achieve a better understanding of topics on the MOCA examination. The ABA’s goal was to engage diplomates in active participation and to determine whether this model of interactive learning was effective. Participation in the MOCA Minute pilot was voluntary and had no impact on participants’ MOCA program requirements.

Analysis of the July 2014 MOCA examination showed that participation in a MOCA Minute pilot was associated with improved examination performance.40 Additional analyses are under way to determine the utility and learning outcomes from the pilot.

The ABD and examination preparation and format

To emphasize the learning experience, the ABD selected a comprehensive item pool representing the expected breadth of knowledge of practicing dermatologists and delivers it via a modular format. All examinees must take the general dermatology module, which consists of 100 clinical images designed primarily to assess diagnostic skills. The diplomate then chooses among 50-item subspecialty modules in medical dermatology, dermatopathology, pediatric dermatology, or dermatologic surgery.41

Six months before the examination, the ABD publishes a list of diagnoses from which the general dermatology clinical images will be drawn, along with a list of questions that will be used to generate the subspecialty modules. This approach encourages diplomates to study the material individually, in groups, and/or at dermatologic society meetings. Passing scores are required for both the general and subspecialty modules; pass rates generally exceed 90%. The ABD believes that proof of mastery of this material helps identify dermatologists who are maintaining their competence.
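
The modular structure just described can be summarized in a short sketch. The module names come from the text above; the pass-checking function and its parameters are our own illustration, not the ABD’s implementation.

```python
# Illustrative sketch of the ABD modular MOC examination: a required
# 100-image general dermatology module plus one self-selected 50-item
# subspecialty module, with passing scores required on both.
SUBSPECIALTY_MODULES = {
    "medical dermatology",
    "dermatopathology",
    "pediatric dermatology",
    "dermatologic surgery",
}

def moc_exam_passed(general_score: float, general_pass_mark: float,
                    module: str, module_score: float,
                    module_pass_mark: float) -> bool:
    """Both the general module and the chosen subspecialty module must be passed."""
    if module not in SUBSPECIALTY_MODULES:
        raise ValueError(f"unknown subspecialty module: {module}")
    return general_score >= general_pass_mark and module_score >= module_pass_mark
```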

In addition to offering subspecialty examination modules, the ABD conducted trials employing remote proctoring to monitor examination administration in diplomates’ homes or offices.42 The ABD was able both to verify the identity of the test taker and to monitor the progress of the examination, which allowed test administrators to identify any irregular behavior. Diplomates were pleased to take the examination in a comfortable, familiar environment without having to travel to a testing center. The ultimate goal of the ABD is to launch an open-resource examination that allows access to texts, journals, and the Internet and thus more closely mimics how dermatologists apply knowledge in practice.

The ABPS and examination preparation

The ABPS believes that assessment drives learning through learners’ preparation for, and postassessment review of, the examination. To prepare for the ABPS MOC Part III examination, a secure, modular, computer-based test that physicians take once per 10-year cycle, the diplomate may use the MOC Study Guide. The study guide is produced by the American Society of Plastic Surgeons and includes more than 2,300 MCQ items derived from the same sources used for the MOC examination. Diplomates may study the guide in its totality or focus on specialty-specific practice content areas (e.g., cosmetic, hand, or craniomaxillofacial surgery). For each 200-item MOC examination, 25% of the items address core principles, while 75% are specialty based. Thus far, the overall pass rate has averaged around 95%, which supports the value of preparing for the examination through lifelong learning activities.43 Examinees receive their performance results to help them focus their future learning. In turn, diplomates have provided positive feedback on the examination’s dual emphasis on learning and assessment of knowledge.44
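
As a quick check of the blueprint arithmetic above, a minimal sketch (the constant names are ours, not the ABPS’s):

```python
# Worked arithmetic for the ABPS examination blueprint described above:
# each 200-item MOC examination is 25% core principles, 75% specialty based.
TOTAL_ITEMS = 200
core_items = round(0.25 * TOTAL_ITEMS)      # 50 items on core principles
specialty_items = TOTAL_ITEMS - core_items  # 150 specialty-based items
assert (core_items, specialty_items) == (50, 150)
```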

The ABIM and enhanced fidelity

Advances both in cognitive and measurement theory and in information technology have enabled the ABIM to explore more creative approaches to testing, including some that more closely mimic real-world practice.45,46 Examination enhancements that increase fidelity include adding a zoom feature to images, presenting realistic laboratory reports with normal ranges, embedding audio clips of heart sounds, and inserting video clips of patient presentations. A new Internet-based, graphical score report presents performance results more clearly and provides more detailed feedback to physicians about their areas of strength and weakness, plus greater information on the items they missed.

Another enhancement that will be used on some examinations allows examinees to select more than one response option, a format that promises to be useful when a question has more than one correct answer. Other enhancements under consideration include the use of external or Internet-based resources during the examination, computer-based case simulation with patient avatars, and adaptive testing techniques whereby the examination advances differently depending on an examinee’s response to each situation, opening the possibility that examinees might finish early on the basis of their performance.
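
Adaptive testing, mentioned above, merits a brief illustration. One common approach (not necessarily what the ABIM would adopt) re-estimates an examinee’s ability after each response and always administers the unused item that is most informative at the current estimate; the sketch below uses a simple one-parameter (Rasch) model, with all names and parameters chosen for illustration.

```python
import math
import random

# A minimal sketch of maximum-information adaptive testing under a
# one-parameter (Rasch) model. Illustrative only; not the ABIM's algorithm.

def p_correct(theta: float, b: float) -> float:
    """Probability of a correct response at ability theta, item difficulty b."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

def information(theta: float, b: float) -> float:
    """Fisher information of an item at the current ability estimate."""
    p = p_correct(theta, b)
    return p * (1.0 - p)

def estimate_theta(responses: list[tuple[float, bool]]) -> float:
    """Grid-search maximum-likelihood estimate of ability in [-4, 4]."""
    best_theta, best_ll = 0.0, float("-inf")
    for i in range(-40, 41):
        theta = i / 10.0
        ll = sum(math.log(p_correct(theta, b) if correct
                          else 1.0 - p_correct(theta, b))
                 for b, correct in responses)
        if ll > best_ll:
            best_theta, best_ll = theta, ll
    return best_theta

def adaptive_test(item_bank: list[float], answer, max_items: int = 20) -> float:
    """Administer items one at a time, always picking the most informative."""
    responses: list[tuple[float, bool]] = []
    theta = 0.0
    remaining = list(item_bank)
    for _ in range(max_items):
        b = max(remaining, key=lambda d: information(theta, d))
        remaining.remove(b)
        responses.append((b, answer(b)))
        theta = estimate_theta(responses)  # could stop early once precise
    return theta

# Simulated examinee with true ability 1.0 answering stochastically.
bank = [random.uniform(-3, 3) for _ in range(200)]
estimate = adaptive_test(bank, lambda b: random.random() < p_correct(1.0, b))
print(f"Estimated ability: {estimate:.2f}")
```

Stopping once the ability estimate is sufficiently precise is what would allow an examinee to finish early on the basis of his or her performance.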

Contrasting Views on the Responsibility of Professional Self-Regulatory Organizations

Organizations entrusted by the public to ensure physician competence employ different approaches to fulfill their responsibility. Physicians must meet a defined threshold on high-stakes, pass–fail examinations to earn their initial license and certification and to enter practice in a particular specialty. On the other hand, the processes underlying three of the four parts of MOC rely primarily on self-assessment, lifelong learning, and quality improvement rather than on passing point-in-time, pass–fail assessments. MOC Part III retains the high-stakes legacy of certification but is embedded in an evolving lifelong learning and practice improvement framework, thus provoking tensions about its overarching purpose and role.

The views explicated below address different approaches to MOC. Even within each of these approaches, those within and outside the ABMS board community hold varying, and at times conflicting, perspectives regarding how the approaches should be operationalized.

View 1: The role of MOC Part III is to provide a high-stakes, pass–fail threshold that physicians must meet to maintain certification

The purpose of MOC is to assure the public that physicians remain competent to provide care in the specialty and/or subspecialty in which they are certified.47,48 Those who believe the primary role of the MOC Part III examination is to determine which physicians meet the minimum threshold for retaining certification find support in the evidence1,19–34 linking the examinations to better care and patient outcomes. Proponents of MOC as a high-stakes, determinative examination believe that MOC serves as a transparent, objective indicator to the public that the certified physician meets professional standards in the field of practice covered by the designated certificate. This view of certification inevitably leads to questions regarding how to define the content and scope of practice for examinations resulting in pass–fail decisions.

As mentioned above, there are concerns that MOC examinations, which include the same broad content covered by initial certifying examinations, are not relevant to physician practices as their scope narrows over time.3–5 However, narrowing the content of the examination in an effort to increase relevance to current practice alters the intended meaning of a particular certificate. A certificate that carries the same broad label of a specialty (e.g., dermatology) but reflects only a portion of its content (e.g., pediatric dermatology) may be confusing or misleading to the public. An examination blueprint that limits content to only the clinical conditions that most specialists encounter in everyday practice (e.g., atopic dermatitis) may not contain clinical material relevant to what physicians may encounter infrequently but should be able to identify and know how to manage (e.g., dermatological signs of lupus).38 Research indicates that “a sizable portion of most doctors’ practices is devoted to the diagnosis and treatment of infrequent but clinically important conditions.”49 Accurately and clearly conveying this—the depth and scope of a physician’s competence—to the public for each designated specialty or subspecialty is crucial.

Addressing both the concern about relevance and the public’s need for transparency is possible if the content and label of an examination are consistent; however, feasibility becomes a significant issue. The member boards would have to consider what data define a particular physician’s scope of practice and then decide whether this scope is limited to common, or includes infrequent, conditions. Moving toward such a practice-specific model would also pose significant challenges for test construction, delivery, and standard setting. Creating unique assessments that would adequately address the wide variability in physician practices is nearly impossible. The process would be expensive, and attaining a sample large enough to set performance standards or to provide meaningful feedback data would be prohibitively difficult. A more feasible solution is a modular approach (not unlike that of the ABD described above) that offers a set of different examinations reflecting usual variations in practice that may be delivered without significant increases in cost and assessed without decreases in psychometric quality.

View 2: The role of MOC Part III is to provide an objective measure of physician knowledge and cognitive skills to inform continuing professional development and performance improvement

Professional self-regulatory organizations are responsible for ensuring a fair and equitable quality assurance process.50 Those who believe the role of the MOC Part III examination is to facilitate ongoing physician practice improvement find support in studies showing that examinations with high psychometric standards can play an important role in determining whether an individual is capable of performing to the level of professional standards.51,52 Such examinations are also crucial in providing feedback to support the continuing professional development of physicians. Considerable energy and expense are incurred to generate the best possible data regarding how well a practitioner is performing; not using such data to improve performance seems wasteful.

For recipients to be influenced by feedback, they must be receptive to it.53 For recipients to be receptive to feedback, they must deem it relevant and credible, not just with respect to its validity but with respect to its source; that is, the recipient must believe the source is concerned with helping the recipient practice better care.54 In the context of high-stakes regulatory practices, such credibility of intent is difficult to achieve. Emerging literature offers potentially beneficial55 approaches, suggesting that the goals of MOC should be as follows:

(1) To normalize the improvement process. Focusing attention only on those at the bottom of the distribution misleadingly implies that those above some threshold are as good as they can be and reduces the need for the majority of candidates to pay attention to the data available.

(2) To offer information rather than data. Guidance regarding what can be done to improve is crucial given that most practitioners would not deliberately practice below an understood standard.

(3) To strive for an integrated and continuous system with shared accountability between the professional self-regulatory body and the practitioner. Point-in-time hurdles will inevitably be treated as just that: things that need simply to be overcome before returning to normal practice.

Creating an assessment system that is linked to learning activities will facilitate the achievement of these three goals. An integrated MOC program, with clear interrelationships between Parts II (lifelong learning and self-assessment), III (assessment of knowledge, judgment, and skills), and IV (improvement in medical practice), could define such a system. Establishing “desirable difficulties” (challenges that prompt learners to discover the limits of their own knowledge/ability) is likely to be crucial to engendering an assessment process with which diplomates will readily engage for the sake of improving patient care.56 Although no simple solutions are yet available, many testing organizations, as exemplified in Alternatives to (or Variations on) the Current MOC Part III Examination Format (above), have begun to develop and implement ideas that offer great promise. These organizations are working toward authentic, tailored, and meaningful assessment practices for professional regulation that will benefit the profession and the public alike.57

Reflections

On the basis of the discussions and viewpoints presented in this article (and at the conference), we offer the following reflections. Although evidence supports both the need to assess physician knowledge and the value of the MOC Part III examination as a marker of physician competence and quality of care, the examination remains a source of concern and frustration within the physician community. We have consistently heard the view that the examination format and the broad content covered are not relevant either to physicians’ practices or to their current ways of accessing and using knowledge in patient care. Further, a perspective is emerging, within both the profession and the ABMS community, that enhanced integration and coherence among the parts of the MOC program are desirable to better assimilate ongoing learning and assessment for improvement.1,58

As the ABMS member boards seek to transition MOC toward a more relevant improvement framework inclusive of high-quality assessment and learning activities, and as ongoing certification evolves to represent more than episodic demonstration of physician knowledge, now is a good time to engage in innovative work to better integrate the knowledge acquisition and assessment components of MOC with practice improvement activities. It is important to continuously assess the current systems employed by the ABMS member boards and to consider alternative examination approaches that meet the future needs of both the public and the physician community. We hope this article serves to further the medical community’s collective understanding regarding professional practice and self-regulation, an essential first step in meeting these needs.

References

1. Hawkins RE, Lipner RS, Ham HP, Wagner R, Holmboe ES. American Board of Medical Specialties maintenance of certification: Theory and evidence regarding the current framework. J Contin Educ Health Prof. 2013;33(suppl 1):S7–S19.
2. American Board of Medical Specialties. Board certification. www.abms.org/Maintenance_of_Certification/ABMS_MOC.aspx. Accessed April 28, 2016.
3. Iglehart JK, Baron RB. Ensuring physicians’ competence—Is maintenance of certification the answer? N Engl J Med. 2012;367:2543–2549.
4. Drazen JM, Weinstein DF. Considering recertification. N Engl J Med. 2010;362:946–947.
5. Levinson W, King TE Jr, Goldman L, Goroll AH, Kessler B. Clinical decisions. American Board of Internal Medicine maintenance of certification program. N Engl J Med. 2010;362:948–952.
6. Lee TH. Certifying the good physician: A work in progress. JAMA. 2014;312:2340–2342.
7. Bell H. Test anxiety. Minn Med. 2010;93:24–28.
8. Royal KD, Puffer JC. A closer look at recertification candidate pass rates. J Am Board Fam Med. 2013;26:478–479.
9. Weiss KB. Future of board certification in a new era of public accountability. J Am Board Fam Med. 2010;23(suppl 1):S32–S39.
10. ABMS surveys consumers about value of certification and recertification. Citizen Advocacy Center News and Views. Third Quarter 2011;23(3). https://www.fsbpt.org/download/Forum_Fall2011_ABMS.pdf. Accessed May 23, 2016.
11. Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: The relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142:260–273.
12. Norcini JJ, Lipner RS, Benson JA Jr, Webster GD. An analysis of the knowledge base of practicing internists as measured by the 1980 recertification examination. Ann Intern Med. 1985;102:385–389.
13. Lipner R, Song H, Biester T, Rhodes R. Factors that influence general internists’ and surgeons’ performance on maintenance of certification exams. Acad Med. 2011;86:53–58.
14. Cruft GE, Humphreys JW Jr, Hermann RE, Meskauskas JA. Recertification in surgery, 1980. Arch Surg. 1981;116:1093–1096.
15. Ramsey PG, Carline JD, Inui TS, et al. Changes over time in the knowledge base of practicing internists. JAMA. 1991;266:1103–1107.
16. Day SC, Norcini JJ, Webster GD, Viner ED, Chirico AM. The effect of changes in medical knowledge on examination performance at the time of recertification. Res Med Educ. 1988;27:138–144.
17. Reid RO, Friedberg MW, Adams JL, McGlynn EA, Mehrotra A. Associations between physician characteristics and quality of care. Arch Intern Med. 2010;170:1442–1449.
18. O’Neill TR, Puffer JC. Maintenance of certification and its association with the clinical knowledge of family physicians. Acad Med. 2013;88:780–787.
19. Sharp LK, Bashook PG, Lipsky MS, Horowitz SD, Miller SH. Specialty board certification and clinical outcomes: The missing link. Acad Med. 2002;77:534–542.
20. Boulet JR, Cooper RA, Seeling SS, Norcini JJ, McKinley DW. U.S. citizens who obtain their medical degrees abroad: An overview, 1992–2006. Health Aff (Millwood). 2009;28:226–233.
21. Wenghofer E, Klass D, Abrahamowicz M, et al. Doctor scores on national qualifying examinations predict quality of care in future practice. Med Educ. 2009;43:1166–1173.
22. Norcini JJ, Lipner RS. The relationship between the nature of practice and performance on a cognitive examination. Acad Med. 2000;75(10 suppl):S68–S70.
23. Norcini JJ, Lipner RS. Recertification: Is there a link between take-home and proctored examinations? Acad Med. 1999;74(10 suppl):S28–S30.
24. Papadakis MA, Arnold GK, Blank LL, Holmboe ES, Lipner RS. Performance during internal medicine residency training and subsequent disciplinary action by state licensing boards. Ann Intern Med. 2008;148:869–876.
25. Norcini JJ, Kimball HR, Lipner RS. Certification and specialization: Do they matter in the outcome of acute myocardial infarction? Acad Med. 2000;75:1193–1198.
26. Norcini J, Lipner R, Kimball H. The certification status of generalist physicians and the mortality of their patients after acute myocardial infarction. Acad Med. 2001;76(10 suppl):S21–S23.
27. Norcini JJ, Lipner RS, Kimball HR. Certifying examination performance and patient outcomes following acute myocardial infarction. Med Educ. 2002;36:853–859.
28. Norcini JJ, Boulet JR, Dauphinee WD, Opalek A, Krantz ID, Anderson ST. Evaluating the quality of care provided by graduates of international medical schools. Health Aff (Millwood). 2010;29:1461–1468.
29. Hanson KL, Butts GC, Friedman S, Fairbrother G. Physician credentials and practices associated with childhood immunization rates: Private practice pediatricians serving poor children in New York City. J Urban Health. 2001;78:112–124.
30. Tamblyn R, Abrahamowicz M, Dauphinee WD, et al. Association between licensure examination scores and practice in primary care. JAMA. 2002;288:3019–3026.
31. Silber JH, Kennedy SK, Even-Shoshan O, et al. Anesthesiologist board certification and patient outcomes. Anesthesiology. 2002;96:1044–1052.
32. Prystowsky JB, Bordage G, Feinglass JM. Patient outcomes for segmental colon resection according to surgeon’s training, certification, and experience. Surgery. 2002;132:663–670.
33. Masoudi FA, Gross CP, Wang Y, et al. Adoption of spironolactone therapy for older patients with heart failure and left ventricular systolic dysfunction in the United States, 1998–2001. Circulation. 2005;112:39–47.
34. Wilson M, Welch J, Schuur J, O’Laughlin K, Cutler D. Hospital and emergency department factors associated with variations in missed diagnosis and costs for patients age 65 years and older with acute myocardial infarction who present to emergency departments. Acad Emerg Med. 2014;21:1101–1108.
35. American Board of Medical Specialties. Standards for the ABMS program for maintenance of certification (MOC). Approved by the Board of Directors of the American Board of Medical Specialties (ABMS). January 15, 2014. www.abms.org/media/1109/standards-for-the-abms-program-for-moc-final.pdf. Accessed April 28, 2016.
36. Bloom BS, Engelhart MD, Furst EJ, Hill WH, Krathwohl DR. Taxonomy of Educational Objectives: The Classification of Educational Goals. Handbook I: Cognitive Domain. 1956. New York, NY: David McKay Company.
37. American Educational Research Association, American Psychological Association, National Council on Measurement in Education. Standards for Educational and Psychological Testing. July 2014. Washington, DC: American Educational Research Association.
38. Melnick DE, Asch DA, Blackmore DE, Klass DJ, Norcini JJ. Conceptual challenges in tailoring physician performance assessment to individual practice. Med Educ. 2002;36:931–935.
39. Sandhu AT, Dudley RA, Kazi DS. A cost analysis of the American Board of Internal Medicine’s maintenance-of-certification program. Ann Intern Med. 2015;163:401–408. http://annals.org/article.aspx?articleid=2398911. Accessed April 28, 2016.
40. American Board of Anesthesiology. About MOCA 2.0. http://www.theaba.org/MOCA/About-MOCA. Accessed April 28, 2016.
41. American Board of Dermatology. MOC exam details. http://www.abderm.org/moc/exam.html. Accessed April 28, 2016.
42. American Board of Dermatology. Remote proctoring for MOC examination. http://www.abderm.org/moc-examination/remote-proctoring-for-moc-exam.aspx. Accessed April 28, 2016.
43. American Board of Plastic Surgery, Inc. MOC-PS program examination pass rates from 2006–2015. https://www.abplasticsurgery.org/about-us/statistics/. Accessed April 28, 2016.
44. Noone RB. Maintaining the certificate. Plast Reconstr Surg. 2013;132(1 suppl):16S–19S.
45. American Board of Internal Medicine. Assessment 2020 report. http://transforming.abim.org/assessment-2020-report/. Accessed May 23, 2016.
46. Drasgow F. Technology and Testing: Improving Educational and Psychological Measurement. 2016. New York, NY: Routledge.
47. American Board of Medical Specialties. About ABMS. www.abms.org/About_ABMS/who_we_are.aspx. Accessed April 28, 2016.
48. Lipner RS, Hess BJ, Phillips RL Jr. Specialty board certification in the United States: Issues and evidence. J Contin Educ Health Prof. 2013;33(suppl 1):S20–S35.
49. Norcini JJ. Recertification in the United States. BMJ. 1999;319:1183–1185.
50. Randall GE. Understanding professional self-regulation. November 2000. http://www.collegeofparamedics.sk.ca/docs/about-us/understanding-prof-self-regulation.pdf. Accessed May 23, 2016.
51. Ramsey PG, Carline JD, Inui TS, Larson EB, LoGerfo JP, Wenrich MD. Predictive validity of certification by the American Board of Internal Medicine. Ann Intern Med. 1989;110:719–726.
52. Tamblyn R, Abrahamowicz M, Dauphinee D, et al. Physician scores on a national clinical skills examination as predictors of complaints to medical regulatory authorities. JAMA. 2007;298:993–1001.
53. Shute VJ. Focus on formative feedback. Rev Educ Res. 2008;78:153–189.
54. Sargeant J, Eva KW, Armson H, et al. Features of assessment learners use to make informed self-assessments of clinical performance. Med Educ. 2011;45:636–647.
55. Eva KW, Regehr G, Gruppen LD. Blinded by “insight”: Self-assessment and its role in performance improvement. In: Hodges BD, Lingard L, eds. The Question of Competence: Reconsidering Medical Education in the Twenty-First Century. 2012. Ithaca, NY: Cornell University Press; 131–154.
56. Eva KW. Diagnostic error in medical education: Where wrongs can make rights. Adv Health Sci Educ Theory Pract. 2009;14(suppl 1):71–81.
57. Eva K, Bordage G, Campbell C, et al. Medical Education Assessment Advisory Committee report to the Medical Council of Canada on current issues in health professional and health professional trainee assessment. April 25, 2013. http://mcc.ca/wp-content/uploads/Reports-MEAAC.pdf/. Accessed April 28, 2016.
58. Cook DA, Holmboe ES, Sorensen KJ, Berger RA, Wilkinson JM. Getting maintenance of certification to work: A grounded theory study of physicians’ perceptions. JAMA Intern Med. 2015;175:35–42.
© 2016 by the Association of American Medical Colleges