The master clinician is disappearing from U.S. medicine. In this commentary, I briefly explain why I believe this is occurring and why it is a problem, and I initiate a dialogue to pursue possible solutions.
When medical students are asked the simple question, “Why are you going to medical school?” most would answer, “To become the best doctor I can.” That is, they aspire to be master clinicians. At what stage in their training should they expect to achieve master clinician status? What is the measurable end-point? What qualities does a master clinician possess? And what proportion of medical students can expect to become master clinicians? These are all very important questions because a strong group of master clinicians is necessary to maintain medical standards at the highest level possible. The answers, however, are not immediately obvious.
The following attributes are generally agreed to be essential ones for master clinicians. Master clinicians should be
▪ personable and adaptable, with solid perceptual and analytic skills;
▪ creative, ethical, and intuitive;
▪ informed, and able to integrate complex information; and
▪ empathic, caring, and holistic (i.e., taking into account the whole patient, including the patient's surroundings and beliefs), with excellent communication and clinical skills (such as those required for history taking and the clinical—i.e., physical—examination).
Additionally, the master clinician should incorporate good medical practice into the broader society, contributing in social and political areas for the greater public good.
Of the above attributes, how well the aspiring master clinician is informed can be assessed with multiple-choice or other types of written examinations. Some aspects of problem solving can also be assessed in the same manner. The other attributes, however, are not measurable in this way. Understanding of some of these “ill-defined” attributes has advanced considerably over the last few years, with creativity, intuition, and integration of complex information being seriously evaluated, perhaps for the first time.
Some of the skills that a master clinician must have are increasingly difficult to acquire. Although the evidence indicates that the clinical examination has reasonable sensitivity and specificity, the trend in the United States since the late 1960s has been to rely increasingly on technology to perform aspects of that examination. This is usually done in the often-mistaken belief that the tests and their interpretations are more accurate than the clinician. Also, the time for tests is not counted as physician contact time (i.e., the testing is more efficient for the medical practice, but less efficient for the patient). This approach may have adverse consequences for a physician's learning and sustaining the ability to conduct a good clinical examination. This concern about the loss of clinical history and examination skills has been expressed before.1
Why has the master clinician all but disappeared from U.S. medicine? There are several reasons, and some of these have been noted and thoughtfully put into historical context in a recent text.2 Ludmerer notes the decline in the time that faculty have spent teaching students, especially on the wards, since the late 1960s, as a consequence of the introduction of Medicare. He also notes the increasing difficulty medical students and residents face in gaining exposure to sufficient numbers and varieties of hospitalized patients, a problem compounded by the growth in the number of students and by reductions in hospital lengths of stay.
Another major cause for the decline of the master clinician is the loss of the oral examination. While the retrievable knowledge about the pathophysiology of diseases can be examined with multiple-choice questions, the other skills required of a master clinician were for many years assessed by the oral examination. This generally is no longer the case in the United States. The progressive displacement of the oral examination by the written (generally multiple-choice) test was examined in detail by the American Board of Medical Specialties in 1995, with no convincing reasons given in their report3 for not continuing with an oral examination.
One compelling reason for dropping some of the oral examinations (at least for internal medicine in 1970) was the decision of the National Board of Medical Examiners' research advisory committee in 1968 that computer-based systems should be developed to test physicians' skills that had been previously tested in bedside examinations. This decision appears to have been made after recognizing some deficiencies in the oral examination, while at the same time significantly underestimating the difficulty of developing computer-based simulations. The committee believed that the solution would take only a couple of years. Indeed, the first prototype simulation was demonstrated in 1970, and the project has continued since (initially as a joint effort with the American Board of Internal Medicine until 1975). This journey of over 30 years has been recently summarized.4 Like the quest for computer-based artificial intelligence, a final satisfactory solution that replaces the complexity of human interaction between doctor and patient is still in the future.
Thus there has been a 30-year absence of any oral examination in internal medicine. A generation of practitioners in that specialty—and more recently, practitioners in other specialties—have not had the benefits of practicing for—or being examined on—their clinical skills at the level that the oral examination required. As a result, clinical skills are taught against no particular standard. Over time the skill level of the teachers has changed, as have the expectations and skills of the students. In effect, the absence of an oral examination, and the lack of an effective substitute, has created a chaotic, non-standardized approach to the acquisition of these skills, making the assessment of clinical competency less dependable and less uniform.
The need to recognize the master clinician in an academic medical institution has been argued thoughtfully.5 Unfortunately, most academic institutions do not recognize faculty who are master clinicians for tenure purposes, and indeed master clinicians' skills are given little attention or mention in tenure guidelines. This situation in itself, by omission, seriously devalues clinical skills, ascribing to them little or no value within the academic center. Academic centers are charged with educating medical students and residents to become master clinicians, to promote the health of their future patients. The current situation of not recognizing master clinicians in such settings fails to serve that mission.
Although many studies confirm that current medical students, residents, and fellows are deficient in clinical skills, does this really matter? Technology is increasingly being used to replace clinical skills. Is this appropriate? Let me examine these questions by reviewing the recent history of the autopsy in medical education. In the recent past, academic and other large medical institutions were held to an internal “gold standard”—the autopsy and the accompanying case conference. Indeed, the autopsy rate of an institution was promoted strongly as one very important guide to clinical competency. The autopsy meetings revealed the “truth” with regard to in-hospital deaths. This “gold standard,” so important 30 to 40 years ago, has now almost disappeared, without a replacement and without critical discussion. The autopsy rates for a large Midwestern university hospital illustrate the general trend: autopsies were performed on about 80% of the patients who died in that hospital in 1970, but on only around 28% of such patients currently.
Could it be that autopsies are no longer needed as a “gold standard” because the technology to assess the patient during life is now so good that the autopsy is redundant? As clinical skills have been declining, technologic solutions have become increasingly attractive. Indeed, after 1970 came the extraordinary proliferation of “diagnostic” procedures, as well as the high costs of many of them. This situation allows for interesting speculation about cause and effect, and can be examined critically.
First, 98% of medical devices, which make possible the technologic solutions mentioned above, have never been stringently evaluated by bodies such as the Food and Drug Administration, because devices already in use before 1976 were “grandfathered in” starting in that year. Improvements have since been added as small incremental changes under the 510(k) process, i.e., each improvement needed only to demonstrate equivalency to a previously approved product. Second, many of the technologic instruments were evaluated in trials that used experienced clinicians to determine entry and to administer the test. Technicians now largely administer these tests, and the effect of this change has seldom been evaluated. Third, the findings obtained via technology are seldom questioned. Consider the computed tomography (CT) scan. Because it appears to mimic reality so well, the CT image has rarely been questioned with regard to its overall error rate in the context of actual use in the human setting. Research has shown that the accuracy of the CT image, before interpretation, is between 0.74 and 0.90, depending on the algorithm used.6 This value is on a scale where chance corresponds to 0.5 and the ideal truth to 1.0. Most clinical practitioners assume that the CT is a perfect representation of the truth, and they adjust their practices accordingly. The extent to which they are misled is unknown. Further, interpretation of the CT scan introduces substantial additional error, with large interobserver and intraobserver variability.
It is encouraging that the problems of medical educational standards, including those of clinical education, are recognized and that serious attempts are being made to address them.7 But I do not believe that at this time, improvements in the evaluation and teaching of clinical skills can rely solely on the use of simulated patients or computer models, although these have useful adjunctive roles. (I realize that there are some who think the roles of these tools are more central.) It should be recognized that this experiment in education—the present technology-centered approach to teaching and evaluating clinical skills—has largely failed. Some educators have assumed that the lack of clinical skills in the early years of medical training is a response to too much information, an assumption that led to the tempting and superficially logical solution of providing a more focused, restricted curriculum. In light of recent knowledge about how master clinicians' skills are learned, it is clear that this approach will significantly compound the problem.
Approaches to modify the direction of U.S. health care will need to refocus significantly on the unique value and attributes of the master clinician. This patient-centered approach is still the usual one in most other Western countries. This call for reform is not in any way a suggestion that the past must be preserved. Rather, it is a proposal that the array of clinical and other skills that define the master clinician should serve as a very strong foundation upon which to build alternative education and patient-care strategies, such as the appropriate evaluation and use of technology. Such an approach is in keeping with the public trust.