The goal of all graduate medical education is to ensure that the graduating physician is competent to practice in his or her chosen field of medicine. The evaluation of a resident's competency to practice, however, has never been clearly defined, nor has the fixed period of time given for residency training in each specialty been shown to be the right amount of time for every resident to achieve competency. Traditionally, a resident's competency has been measured by the certifying opinion of the program director that the trainee is ready to practice independently, after a specified number of years in training. This opinion may be supplemented by required examinations during training, and is expected to be supported by successful completion of a specialty board examination. However, there are no data available that verify the ability of any of the three methods to predict competent, ethical practice. Whereas it is axiomatic that physicians should be skilled in the procedures they perform, how to ensure mastery of procedural skills is still a matter of much discussion. Even in specialties such as surgery, where a number of required procedures can be specified, the evaluation of how well these procedures are performed, and perhaps even more important, whether the trainee can apply the skills learned in the overall treatment of individual patients, remain unresolved.
In this article I propose a major advance in graduate medical education that will better ensure that new physicians have the competencies they need to effectively practice in their specialties. That advance should be the replacement of the current approach to residents' education, which specifies a fixed number of years in training, with competency-based training, in which each resident remains in training until he or she has been shown to have acquired the skills and knowledge needed for his or her specialty, and to be able to apply these skills independently and competently to individual patients. In this approach, the certification of competency should replace the years-in-place measure of residency training.1,2 In addition, programs of competency-based training will make it possible to devise and test schemes to evaluate competency more surely than is now possible.
THE BASIS OF RESIDENCY TRAINING
How has our current system of training residents evolved and what concepts is it based on? For the past 100 years, graduate medical education has been based on the concept of residency as articulated by Osler, Welch, and Halsted. Before that time, there was no formalized surgical education; whether a student gained surgical experience depended on the whim of the professor. Some professors required virtual indentured servitude, whereas the educational experiences offered by others were too brief to be of value. Indeed, young surgeons might never have performed an operation. Thus, the model for residency experience introduced and implemented by Halsted to correct the inadequacies of unstructured training was an extraordinarily important advance. The fundamental requirements of a Halsted residency were a fixed period of time for training, structured educational content, actual experience with patients, escalating responsibility for patient care during training, and a period of supervised practice after formal training. Halsted also introduced a selection process for choosing residents based on merit, and a curriculum of specified length and content that focused on learning from graduated responsibility for patient care. His ultimate method of evaluation was a final year of supervised practice at Johns Hopkins Hospital. Over the years the board movement added evaluation by experienced practitioners to supplement the opinion of the director of the training program. The residency review committee system specified the length of training and minimum standards for content of that training. Taken together, the residency as a model for postgraduate education, the board examinations, and national standards by which residencies are reviewed constitute one of the great educational revolutions of medicine.1
Although most of Halsted's concepts have formed the basis of all graduate medical education worldwide during this century, evaluation of a physician's competency to practice never became universal. A major disadvantage of the current residency system is that the time required for training in each specialty was, in the past, arbitrarily chosen based on the opinions of those doing the training. Little attention was paid to the actual time required to learn a particular procedure or fully understand how to treat a particular condition; consequently, no actual measures of the time needed exist. Instead, the continuous changes in medicine have been accommodated by the inclusion of fellowships of equally arbitrary duration. Issues of competency and professionalism have been left to the judgment of the program director.3 Although current national standards are clearly much better defined, the method of establishing them is not too different from the older arbitrary decision making by program directors.4 In my own specialty (neurosurgery), only one change in the length of training required has been made in the past 40 years. How to determine and then implement what a neurosurgeon needs in order to cope with the enormous advances and increased scope of neurosurgery has never been formally assessed, nor are there data on the length of time required to assimilate these new advances into training.
In a study carried out under the auspices of the Society of Neurological Surgeons, the outcome of training was examined by exit interviews with trainees and program directors. A small but significant percentage of to-be-certified trainees possessed questionable competence in the candid opinions of their training program directors. What was particularly disturbing was that a smaller (although again significant) number of trainees questioned their own competence to perform all procedures, because of lack of exposure to specific diseases and procedures. No subsequent practice data are available to verify the validity of these concerns about competence, but the very fact that they existed in some 10% of trainees at the end of their training is of concern.5,6
List 1 presents an instructive comparison of the two methods of training that grew out of the study just reported.
COMPETENCY-BASED RESIDENCY EDUCATION
Competency-based residency education implies a training process that results in proven competency in the acquisition and application of skills and knowledge to medical practice that is not simply dependent on the student's length of training and clinical experiences. Competency-based training requires definitions of what is needed to practice competent medicine in every specialty, and then the verification that each trainee can assume complete care of a particular class of patients. Because the attainment of competency is an individual process for each trainee rather than a process based on the assumption that all trainees will progress at the same speed (as occurs in the years-in-training paradigm), it is possible, indeed even probable, that some trainees will become competent considerably sooner than they would in the current required years of training. Consequently, the overall period of residency and the costs may well be reduced, although this possibility remains to be investigated. Moreover, by reducing the number of years needed to acquire minimal standards of competency, it may be possible for subspecialty training and research experience, which now are obtained through costly post-residency fellowships, to be incorporated into the current residency requirements, thereby further reducing costs.
ACQUISITION AND APPLICATION OF SKILLS AND KNOWLEDGE
How do people learn and how do they translate what they learn into practice? Theories of education are much neglected in our residency system.7 There is no good evidence that any one construct is better than another, but Rasmussen's formulation seems particularly appropriate to the medical paradigm.8,9 Rasmussen devised his scheme to explain situations in which rapid decisions have to be made, often without all the definitive information that would be desirable—a scenario particularly pertinent to the practice of medicine.10 Rasmussen theorizes that the first step in a practical learning system must be the acquisition of skills, and asserts that these skills can be learned before the full theoretical knowledge required for their practical application is known. As skills are acquired, the student learns to follow rules that constitute appropriate responses to most situations most of the time. With experience, more and more rules are learned, so that their practical application can be augmented. This constitutes the second phase of the educational experience.
According to Rasmussen, most individuals never proceed beyond the use of rules. In the ideal educational experience, however, the learner moves to a third phase, “knowledge-based practice,” in which the required solution is derived from broad experience, and, indeed, is not directly related exclusively to the specific information that is available in the current situation. Little is known about the transition from rule following to knowledge-based applications, but it is clear that all master clinicians have achieved this transition.11 Although other theoretical constructs exist and may be equally relevant, Rasmussen's paradigm provides one rational basis for instituting and evaluating a competency-based training program. However, that paradigm neglects the area of practice generally denoted by professionalism, an area that encompasses the moral, ethical, and humanistic aspects of medical practice and that remains the most difficult to evaluate.12
OUR EXPERIENCE WITH A COMPETENCY-BASED RESIDENCY
Since 1994, the Department of Neurosurgery at Johns Hopkins has carried out a preliminary evaluation of the acquisition of surgical skills in a competency-based mode. The brief summary of that experience below should help clarify how competency-based training can be applied to a procedural specialty.
We first identified the clinical experience and procedural and operative skills that had to be acquired during the internship and postgraduate years (PGYs) 1 through 4 in order to fulfill clinical requirements for board eligibility. These were then listed in sequence in six-month blocks (based on the PGY residency in our institution) beginning with internship and continuing through the traditional chief residency year. We chose the list based in part on the responsibilities established for the residents in each block and in part based upon what we arbitrarily thought residents should know at each stage of training. We then defined the individuals who would be best suited to teach the requisite information and skills at each level, including neurologists, intensive care specialists, neuroradiologists, neuropathologists, and general surgeons. Many operative procedures were divided into component parts, such as closing, opening, the definitive procedure, and last, the complete operation. This produced an algorithm for the entire training period that was arranged to provide increasing responsibilities for both patient care and operative procedures. The residents thus had a blueprint for what they would be expected to learn, and the sequence in which learning should be acquired. Although we used arbitrary six-month blocks to create the training algorithm, we placed no time limits on the acquisition of skills and knowledge.
In keeping with Rasmussen's construct, the original experiment was based solely on acquisition of procedural skills. We first spent three years judging the speed with which residents became competent to perform five arbitrarily chosen surgical procedures or their components in the traditional PGY-1-4 model. We then evaluated the speed with which residents became competent in the same procedures when progression in responsibility in the operating room was based on competency, not PGY. Competency was arbitrarily judged to be acquired when the program director was satisfied that the resident could carry out the procedure independently (with supervision), and when the trainee felt competent to do so.
The results were telling (see Table 1). For exposure and closure in operations, which were traditionally performed during PGY-2, the times to skill mastery were reduced by three to six months. For complete procedures, which were performed during PGY-3 and PGY-4, the times to master the procedures were reduced by six to nine months. For the most complex procedures—pterional craniotomy and exposure of the optic chiasm and suboccipital craniotomy with exposure of an acoustic tumor—the reductions in times to skill mastery were even more dramatic. In the years-in-training model, these skills were generally mastered in 33 to 36 months; in the competency-based program, mastery was achieved at 18 months by all trainees. Our preliminary experience suggests that mastery of procedural skills can be accelerated dramatically by focusing on competency.
COMPETENCY IN PROCEDURAL AND NONPROCEDURAL FIELDS
It is obvious that competency is easiest to determine in procedural fields, because outcomes can be well defined. In fields such as pathology and radiology that depend on objective interpretation of macro- or microscopic specimens, however, nuances in interpretation may be difficult to test. Moreover, skills not only encompass procedures, but include all of the attributes of a competent physician: accurate history taking and physical examination, interpretation of relevant diagnostic data to provide differential diagnosis, evaluation of the patient's personality, an individualized accurate prognosis, and development of an appropriate paradigm of care for an individual patient. A knowledge base that allows progressive responsibilities for patients must be developed for every specialty. The procedural fields also require not only the skills to carry out the procedures, but knowledge of when these procedures should be done. In surgery and in interventional subspecialties, skills and knowledge are equally important, and one is useless without the other. All of medicine has a fundamental skill set that should continuously be expanded by knowledge and experience. The ultimate key to competency is the synthesis of skills, knowledge, and experience into a program of care that is the most appropriate for each individual patient. This synthesis is the third phase of Rasmussen's paradigm, and to help students achieve it is the greatest educational and evaluational challenge.
EVALUATION OF COMPETENCY
The fundamental issue in a competency-based training program or in a traditional years-in-training program is to determine whether competency has been achieved.12–14 At present, in the traditional years-in-training programs, the primary measure of evaluation of competency is the written certification of the program director. This is supplemented in many specialties by national in-training examinations and by the final board examinations, in which evaluation of competency is attempted by assessing both factual knowledge and (usually hypothetical) case management. Neither of these two examination strategies has been proven to reflect subsequent competent clinical practice. Indeed, whereas factual knowledge can be tested by examination, whether that knowledge is actually relevant to practice has never been evaluated. However, competency-based training provides an exceptional opportunity to devise and test schemes to evaluate competency, and to do so in a system that assesses trainees on an individual basis, rather than assuming that, in a certain time frame, all trainees will have achieved the desired level of competence.
What is the best way to evaluate competency? Patient care algorithms that assess rules-following are available for many specialties.13 These can be supplemented by assessment of procedural skills and management of pain by experienced surgeons or experts in other fields that require interventional skills.14 The challenge is how to quantify the evaluations so that they can be standardized objectively. A combination of rules-following and patient management has been assessed in a variety of ways, including evaluating actual patients, hypothetical patients, or individuals representing patients. The best solution may require evaluation of management of actual patients and of standardized, computer-based hypothetical patients, which offers an opportunity to improve upon current ad hoc judgment methods. Since evaluation of competency will differ from specialty to specialty, competency-based training provides an excellent research opportunity to provide a defined evaluation program for each field of medicine.
It is logical that the major goals of competency-based training for residents must be first to define the skills and knowledge that are required for each specialty and then to prove that these attributes have been acquired and can be applied to individual patients independently by the clinician in a competent manner. Certification of competency replaces the time-honored years-in-place method of residency training. Thus each trainee progresses at his or her own pace. Our experience indicates that this scheme requires that skills and knowledge be obtained in parallel and that these, as well as practical application of patient management, have to reach the specified level as the trainee progresses through training. Although limits might be set for minimal and maximal times for training, numbers of years in training should become irrelevant to the educational process for residents. Completion of required years of training does not assure the public of competency.
As noted earlier, competency-based training also provides opportunities to find new, effective ways of evaluating competency. Why do we need a new evaluation of competency at a time when the quality of medical education in the United States is unsurpassed and, indeed, worldwide is at an all-time high? A major reason is that not all physicians practice competently at the completion of their training or even throughout their careers. Also, there are no objective data to indicate that the chosen numbers of years are appropriate for residency training in the various specialties. More and more physicians attest to this by choosing subspecialty fellowship training. A whole range of questions remains unanswered: Do written and oral examinations adequately assess application of the knowledge base required for competent practice? Does successful completion of board examinations differentiate competent from incompetent practitioners? How can “professionalism” best be addressed and evaluated? Competency, if evaluated appropriately, is the key to measuring the effectiveness of all graduate medical education and to answering these questions.
A hoped-for consequence of competency-based training is to instill in the physician the concept of “learning to competence,” so that in later practice this can be applied to the addition of new skills and knowledge, thus dispelling the notion that simply attending a program is sufficient to keep a physician up to date. In that sense, competency-based training has real implications for all postgraduate medical education. Moreover, it is likely that not all training programs will have expertise in everything that is required for training in all fields, suggesting that multi-institutional or even multinational training programs might be developed. Significant changes in the current regulatory or examination process are unlikely to be needed; rather, evaluations used by our residency review committees and board panels would change, but the regulatory structure need not.
Competency-based training is a logical evolutionary step from our current years-in-place-based system. Such training should improve, or at least verify, the quality of education for our clinical trainees and the function of our training programs. This mode of education will allow comparisons among training programs and against standards that are more accurate than any method used currently. The length of training may be reduced, particularly in the procedurally based specialties, if acquisition of knowledge and application of skills and knowledge to patient management can be made convergent. Although this is not the primary impetus for the change (the purpose is proof of competency to practice), substantial cost saving may accrue if the length of training can be reduced by changes in residency years or by incorporating post-residency subspecialty training into existing allotted times.
The idea of competency-based residency training has much philosophical support. Graduate educational programs should end when adequate competency, as generally recognized by the specialty involved, is achieved. Although assessment of competent practice is even more challenging than the measurement of skills and knowledge, I believe that we in residency education must strive to produce a method to do so. Outcome assessment as a basis for certification and for recertification is the logical extension of competency-based practice.15 The evaluation of skills and knowledge required to practice in any specialty and the competent application of both to patient care should become the essential educational effort for graduate medical education in the new century. For that reason, it is time for competency in practice to become the most important measure for graduate medical education. I urge the medical education community to discuss this concept and ultimately to implement it.
1. Long DM. Neurosurgical training at present and in the next century. In: Reulen HJ, Steiger HJ (eds). Training in Neurosurgery. New York: Springer Wien, 1997.
2. Long DM. Educating neurosurgeons for the 21st century. Neurosurg Q. 1996;6:78–88.
3. Adapting Clinical Medical Education to the Needs of Today and Tomorrow. New York: Josiah H. Macy, Jr. Foundation, 1988.
4. Muller S (chair). Physicians for the twenty-first century: report of the project panel on the general professional education of the physician and college preparation for medicine. J Med Educ. 1984 Nov;59(11 Pt 2).
5. Bosc C. Report to the Society of Neurological Surgeons Annual Meeting, Pasadena, California, 1999.
6. Blum BL, Sigilliot VG. An expert system for designing information systems. Johns Hopkins APL Technical Digest. 1986;7:23–31.
7. Anderson JR (ed). Cognitive Skills and Their Acquisition. Hillsdale, NJ: Erlbaum, 1981.
8. Rasmussen J. Skills, rules, knowledge: signals, signs and symbols and other distinctions in human performance models. IEEE Trans Systems, Man and Cybernetics. 1983;13:257–66.
9. Rasmussen J, Lind M. A Model of Human Decision Making in Complex Systems and Its Use for Design of System Control Strategies. Risø National Laboratory Report Risø-M-2349. Roskilde, Denmark, 1982.
10. Hamill BW, Stewart RL. Modeling the acquisition and representation of knowledge for distributed tactical decision making. Johns Hopkins APL Technical Digest. 1986;17:31–8.
11. Larkin JH. Skilled Problem-solving in Experts. Technical Report in Science and Mathematics Education. Berkeley, CA: University of California, Berkeley, 1977.
12. Hart L, Harden RM (eds). Further Developments in Assessing Clinical Competence. Montreal, QC, Canada: Can-Heal, 1987.
13. Tamblyn RM, Klass DJ, Schnabl GK, Kopelow ML. The accuracy of standardized-patient presentations. Med Educ. 1991;25:100–9.
14. Williams RG, et al. Direct, standardized assessment of clinical competence. Med Educ. 1987;21:482–9.
15. Solomon RA, Mayer SA, Tarmey JJ. Relationship between the volume of craniotomies for cerebral aneurysm performed at the New York state hospitals and in-hospital mortality. Stroke. 1996;27:13–7.