Michael E. Whitcomb, MD
In his introduction to the Flexner Report, issued by the Carnegie Foundation for the Advancement of Teaching in 1910, Henry Pritchett, the foundation's president, wrote, in reference to the state of medical education at that time:
The interests of the general public have been so generally lost sight of in this matter that the public has in large measure forgot that it has any interests to protect. And yet in no other way does education more closely touch the individual than in the quality of medical training which the institutions of the country provide.1
Pritchett's words are just as applicable today as they were when he wrote them almost 100 years ago. While the institutions Pritchett was referring to at that time were medical schools that provided what we would today call undergraduate medical education, I will soon make clear that his words apply equally well today to the medical schools and hospitals that currently sponsor graduate medical education (GME) programs.
As background for this editorial's topic, let me describe the lead report in this month's issue. Riesenberg and colleagues present the results of a survey of individuals who serve as designated institutional officials for graduate medical education (DIOs) in institutions that sponsor residency programs. Riesenberg's findings provide important insights into the administrative arrangements that institutions have established to oversee and manage the conduct of the residency programs and fellowships they sponsor.
One of the important findings is that the lines of authority governing the programs are not as clearly defined, or at least as well understood, as one might expect. The majority of the DIOs responding to the survey indicated that confusion and/or overlap exists between their roles and responsibilities and those of the program directors. This is understandable to some degree: while program directors are accountable to DIOs for matters related to the accreditation of their programs by the Accreditation Council for Graduate Medical Education (ACGME), they report to their division chiefs or department chairs for all other program-related matters. As a result, most program directors retain a substantial degree of autonomy in determining how their programs will be designed and conducted.
This fact leads me to the main topic of this editorial. As the leadership of institutions that sponsor GME programs strive to address the array of issues identified in the report by Riesenberg and her colleagues, I think they need to think seriously about what their institutions' responsibility should be in certifying that program graduates are truly prepared to enter practice. Indeed, this is a serious issue that the profession, and particularly the medical education community within the profession, needs to address. Let me explain.
At present, residents completing GME programs are able to enter practice in the specialties of their training, and gain privileges to provide hospital care in those specialties, as long as the directors of their residency programs are willing to state that they have successfully completed all program requirements. To be clear, graduating residents do not have to pass a licensure examination, or an examination leading to certification in their specialty, in order to enter practice. In other words, there is no assessment by an external authority of an individual's competence to enter practice as a specialist upon the completion of residency training. Given that, it is appropriate to ask whether program directors are an appropriate filter for determining whether their residents are truly competent to enter practice in the clinical disciplines of their training. Are program directors so influenced, either consciously or unconsciously, by the potential consequences of not “certifying” those completing their programs that they allow some who should not enter practice to do so?
Unfortunately, there are virtually no data that will allow one to answer that question definitively. What is known is that a certain percentage of those who have successfully completed their residency programs will fail the certifying examinations in their specialties the first time they take them. The specialty boards are careful to make a distinction between being “certified” and being “competent.” They generally assert that certification is evidence of “expertise.” Thus, those who have failed certifying examinations can be judged to not have expertise in their specialties, but, say the specialty boards, no judgment can be made about their clinical competence. As a result, there is simply no information available to reach any judgment about the clinical competence of residents completing training. Now, does this really make sense?
I don't think so, and I am not alone in this view. In fact, based on what we know about the public's expectations about how often physicians should be tested to determine their clinical competence, I am certain that the public would be appalled if they were fully aware of the current situation. Is it really credible to suggest to the public that a physician who fails a certifying examination—an examination that approximately 90% of all first-time takers will pass—is not enough of an expert to pass the exam, yet is expert enough to provide medical care to patients who have a condition relevant to his or her specialty? And what might the public think about the fact that as many as one third or more of the residents completing training in some programs fail their certifying examinations on their first attempt? Indeed, given the nature of the certifying examinations, would the public even accept the notion that those who pass the examinations are truly competent to practice their specialties?
The leadership of the Federation of State Medical Boards (FSMB) and the American Board of Medical Specialties (ABMS) recognize that the current situation needs to be addressed. The FSMB has a policy indicating that licensure should be based on the demonstration of clinical competence. And the ABMS, largely as a result of the work done by its Task Force on Maintenance of Certification (note the absence of any reference to maintenance of competence), has now appointed a task force to make recommendations for improvement in the initial certification of residency program graduates. The charge to the task force is quite broad, so it seems highly likely that it will recommend changes in the approaches the specialty boards are currently using for initial certification. But when all is said and done, the key issue that must be addressed is who will be responsible for conducting the critical summative assessments that need to be completed before graduating residents are allowed to enter practice. Neither the state medical boards nor the specialty boards have the resources to conduct those assessments. Even if they could do so, I don't believe they are the appropriate ones to conduct the assessments.
I maintain that sponsoring institutions have to assume this responsibility. After all, they are in charge of the GME programs that are training and educating residents for the practice of the various specialties of medicine. Shouldn't they have the same responsibility for conferring “degrees” on their program graduates that medical schools have for granting the MD degree to their graduates? Shouldn't the profession, and more specifically, the medical education community within the profession, require the institutions that sponsor residency training programs to assume responsibility for the quality of the programs they sponsor and the competence of the program graduates?
Clearly, a great deal of work will need to be done to put in place administrative arrangements that will allow sponsoring institutions to take on this critically important role. I believe that the need to clarify the roles and responsibilities of DIOs, as reflected in the report by Riesenberg and her colleagues, provides an opportunity to address this critical challenge. Why not vest DIOs with the responsibility to see to it that programs have established credible summative assessment procedures, and to oversee and monitor the conduct of those procedures? If this is to occur, the leadership of sponsoring institutions will have to appoint as DIOs individuals who are qualified to meet this responsibility.
Achieving this goal presents a formidable challenge to our highly fragmented system of medical education. How will the needed changes come about? Which of the many professional organizations that are in some way involved in the system will lead the way? Some have suggested that an attempt by any organization to take on this issue will simply lead to “turf wars” between that organization and the many others that will feel that their prerogatives to address the issue are being ignored. Thus, in their view, one I share, there is a need for a 21st-century “Flexner Report”—a report that will both identify for the public the shortcomings of the current system and present a vision for how the system should work to ensure that physicians entering practice after completing a residency program are truly competent to provide medical care. One can even imagine Henry Pritchett's words prominently featured in the introduction to the report!
It is high time that the profession takes steps to ensure that GME programs are educational experiences that are truly designed and conducted in ways that will prepare physicians to provide high-quality care to patients when they enter practice. And having done that, the profession must take steps to ensure that program graduates are not allowed to enter practice until they have passed a rigorous summative assessment documenting that they are capable of performing the complex integrative tasks required to provide high-quality care in the venues where they will practice. Can anyone seriously doubt that such an approach would better serve the interests of patients?
1 Flexner A. Medical Education in the United States and Canada. A Report to the Carnegie Foundation for the Advancement of Teaching. Bulletin No. 4. Boston: Updyke, 1910.
A New Format for the Table of Contents
The journal is starting 2006 with a new approach to the table of contents. As you can see, most articles are now classified by topic rather than by whether they are Articles or Research Reports. This approach emphasizes the content of each issue and makes that content easier to see at a glance. However, the types of articles that the journal publishes—Articles (including Viewpoints and Commentaries) and Research Reports—remain unchanged, and authors should still follow the guidelines for the various types listed in our “Complete Instructions for Authors” on our Web site. We hope readers find the topic-oriented table of contents easier to navigate.