Academic Medicine, October 2000, Volume 75, Issue 10
SPECIAL PRESENTATION: 1999 Jack Maatsch Memorial Presentation

The Epistemology of Clinical Reasoning: Perspectives from Philosophy, Psychology, and Neuroscience

NORMAN, GEOFFREY R.


Author Information

Correspondence: Dr. Geoffrey Norman, Department of Clinical Epidemiology and Biostatistics, Health Sciences Centre 2C14, McMaster University, 1200 Main Street West, Hamilton, Ontario L8N 3Z5, Canada; e-mail: norman@mcmaster.ca.

The Jack Maatsch Memorial Presentation was sponsored by the Office of Medical Education Research and Development, Michigan State University, and presented at the annual AAMC-RIME meeting, October 27, 1999.

Physicians' clinical reasoning has been an active area of research for about 30 years. The goal of the inquiry has been to reveal the processes whereby doctors arrive at diagnoses and management plans (although as Elstein correctly points out in his discussion of this paper,1 the focus has been more on the former than on the latter) so that we could use this information to devise specific instructional strategies or support systems to make the acquisition and application of these skills more efficient and effective. Initially, these “clinical reasoning skills” were conceived of as general and content-independent, so that they could be observed in any clinician working through any problem. That is, they were thought of as a general mental faculty, presumably rooted in the architecture of the mind, which would be brought to bear on solving clinical problems.

However, the research findings did not support this viewpoint. Elstein and Shulman2 showed that whatever clinical reasoning was, it was definitely not skill-like, in that there was consistently poor generalization from one problem to another, a finding that ultimately sounded the death knell for evaluation methods such as patient management problems. The past 30 years have seen an accumulation of evidence, in medicine and many other disciplines,3 about the nature of the process, and have shown the importance and centrality of knowledge. The central issue of this revised research program is achieving an understanding of how knowledge is initially learned, how it is organized in memory, and how it is accessed later to solve problems.

A second research program in medical decision making also emerged from research of the early 1970s. As Elstein discusses in the companion paper, this program “views diagnosis making as opinion revision with imperfect information.”1 From the decision-analytic perspective, the best decisions arise from the application of a statistical decision rule to data; any other method is suboptimal. Thus, the research agenda is directed to identifying areas such as medicine where humans function in a suboptimal way, and attempting to understand the strategies, the heuristics and biases, they apply to arrive at these suboptimal decisions.
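
To make the decision-analytic perspective concrete, here is a minimal sketch of diagnosis as opinion revision: a prior probability of disease is converted to odds, multiplied by the likelihood ratio of each new finding, and converted back to a probability. The sketch is illustrative only; the disease, findings, and numbers are invented:

    # Minimal sketch of "diagnosis as opinion revision with imperfect
    # information": Bayesian updating of P(disease) as findings arrive.
    # All probabilities are invented for illustration.

    def revise(prior: float, sensitivity: float, false_pos_rate: float) -> float:
        """Apply Bayes' theorem in odds form for one positive finding."""
        odds = (prior / (1.0 - prior)) * (sensitivity / false_pos_rate)
        return odds / (1.0 + odds)

    p = 0.05  # prior probability of myocardial infarction (invented)
    for finding, sens, fpr in [
        ("chest pain radiating to the arm", 0.70, 0.20),
        ("ST elevation on the ECG",         0.60, 0.02),
    ]:
        p = revise(p, sens, fpr)
        print(f"after {finding}: P(MI) = {p:.2f}")

From this standpoint, any departure from the statistical update is, by definition, suboptimal.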

Elstein states that “it seems to me that decision theory is at least as promising as the study of categorization processes.” He may well be correct. But the two schools highlight a fundamental epistemologic dilemma that the remainder of this paper addresses: Will we understand more about the nature of clinical diagnosis by focusing on the diagnostician and striving to understand the mental processes underlying diagnosis, or by focusing on the clinical environment and attempting to understand the statistical associations among features and diseases? To what extent is the world of clinical reasoning “out there” and comprehensible by understanding the relation between symptoms and diseases, and to what extent is it “inside” and understandable only by examining mental processes in detail?

Further dilemmas face us as we examine the research in clinical reasoning. “Organization of knowledge” is viewed as a critical determinant of expertise in medicine. But it is not really clear what is meant by organization of knowledge. Is knowledge organized hierarchically with general concepts at the top, more specific scripts in the middle, and specific instances at the bottom?4 Is it organized in networks with nodes and connections,5 as a symptom-by-disease matrix,6 as propositions with causal links,7 as collections of semantic axes,8 or as individual examples with no overarching concepts, as some of my earlier research claimed?9

A perusal of these various studies leaves the reader with only one overall impression—that the human mind is incredibly flexible and can organize and reorganize information at will and seemingly effortlessly to give the researcher exactly what he or she wants to hear. It is no coincidence that propositional networks are disturbingly idiosyncratic and not apparently reproducible.5 My view is that all of these concept architectures are produced on the fly at retrieval, in order to satisfy the expectations of the researcher, and none can claim special status as the way knowledge is organized. Do you want the clinician to tell you the probability that myocardial infarction (MI) will present with referred pain to the back? Can do. The nature of the neural pathways linking the heart and the upper arm? Sure. The hair color of the last patient they saw with an MI? Red. Given this incredible diversity of knowledge from specific to general, it seems likely that any attempt to uncover a representation of knowledge consistent with a particular perspective from fairly directive probes will be successful; however, the ultimate form of this knowledge (if that is even an issue worth addressing) will remain elusive.

Still, if the clinician's mind is really that malleable, then this poses a serious challenge to the research tradition. Are there really any more “basic” or “primitive” forms of knowledge? How can we understand the nature of clinical reasoning if it appears to be this flexible? These were the questions that presented themselves as I reviewed the studies of clinical reasoning. As I thought about these issues, I began to explore other perspectives on the nature of knowledge and knowing from philosophy, psychology, and neuroscience, and started to identify common threads that, I think, can shed some light on these questions. As I did so, I found myself moving back and forth among three kinds of knowing, more or less from specific to general:

1. How does the clinician come to know about diseases? How might diseases be represented in his or her mind?

2. How do we as researchers come to understand domains of science, whether these are the diseases of clinical research or the workings of the clinician's mind?

3. What do we mean by knowing? What do we mean when we say we understand something?

In the remainder of this article I roam freely among these levels, since many of the writings I uncovered inform all levels. But I must begin with a disclaimer. My journeys in this field are as an amateur, and are recent. I have been heavily influenced in my interpretations by two books. The first is Lessons from an Optical Illusion, by Hundert,10 who took the brave step of trying to find links among philosophy, psychology, and neuroscience. His goal was to place ethics in a context of these disciplines; mine is to turn these general truths to an understanding of clinical reasoning. A second major influence on my thinking is a book called What Is This Thing Called Science? by Chalmers11—a wonderful and readable review of classical philosophy and philosophy of science. I highly recommend both.

The starting point of my discourse is a critical examination of the concept of disease. My intention is to use the exploration of disease as a case study of how we come to know about things.


What is a Disease?

Through advances in biology, physiology, and molecular biology, we have come to a deep understanding of the mechanisms of many diseases. It seems almost nonsensical now to turn the clock back and ask what a disease is. But this small departure may stand us in good stead in understanding better what a concept is and how people identify concepts.

Let's take two examples:

* Is syphilis a disease? Absolutely. It fits the medical model to perfection. A bacterium invades the host, stimulating a diversity of processes that ultimately are manifested in clinical signs. Osler said “understand syphilis and you understand all of medicine.” But there is a small historical glitch. Syphilis has been with mankind for millennia and the signs and symptoms were well established long before the bacterium was isolated.

* Is heart disease a disease? Yes. Put a label such as anterior myocardial infarction on it, and it looks even more like a disease. But likely we are all harboring the precursors of ischemic disease as cholesterol plaques slowly accrue in our arteries. So in a manner of speaking, the prevalence of heart disease approaches 100%. Can we then still speak of it as a disease? And by the way, although there are many risk factors for heart disease, there is no clear cause. The same is true for cancer. We can easily identify cancerous cells on pathology slides, and we can correlate the clinical course with the accumulation of malignant lesions, but we all have microscopic tumors in our thyroids, and a third of men who die of unrelated causes are found to have prostate cancer.

All of these things seem disease-like because we can “explain” them at some lower level—plaques, bacteria, malignant cells. But there are many other diseases listed in textbooks that have no clear causes, no microscopic correlates, no known mechanisms. And it is well to bear in mind that although anthropologists and historians have identified evidence of (for example) tuberculosis dating back several thousands of years, and although old writings in medicine clearly describe the symptoms and clinical course of tuberculosis, the cause, the tubercle bacillus, was identified, by Koch, only as recently as 1882, and effective therapy has been available only since the 1940s. So the existence of a causal mechanism is hardly necessary to claim that something is a disease. More generally, it is likely that exceptions to any definition of disease will be common.

Campbell et al., in a classic article, “The Concept of Disease,” reported presenting clinicians and lay people with a series of medical conditions and asking them whether or not they were diseases.12 Perhaps not surprisingly, doctors were more prone than lay people to call things such as lead poisoning and tennis elbow diseases. But there was otherwise quite good concordance. Infectious diseases—malaria, tuberculosis, syphilis, polio—topped the list. Other common or serious medical problems—lung cancer, diabetes, multiple sclerosis, cirrhosis—came next. At the bottom were things such as hangover, senility, heatstroke, tennis elbow, and drowning, which had English, not Latin, labels. These authors concluded that the features that best predicted the labeling of a condition as a disease were that the condition (1) was associated with an abnormality of structure or function (i.e., it had a “cause”) and (2) was likely to be treated by a doctor. The latter was the stronger determinant, but regrettably, this seems tautological. Since doctors are in the business of dealing with disease, describing a disease as something that doctors deal with does not, in my view, advance our understanding much.

Let us consider the first predictor for a moment. Arguably one simplistic but functional view is that if a condition simply represents a cluster of signs and symptoms (for example, carpal tunnel syndrome, low back pain) it is less disease-like. Presumably this reflects a concern that a condition's features and associations among the features may be an illusory correlation (which humans are particularly good at making)13 and not “real.” There is good reason for such a degree of skepticism. Historically, many syndromes that existed 100 years ago, such as self-pollution, have now disappeared, and there is every indication that many contemporary syndromes, such as chronic fatigue, sick-building syndrome, Gulf War syndrome, and the myriad health problems believed to be caused by breast implants may go the same way. Conversely, the ability to explain disease through some underlying mechanism lends authenticity to it. Angina becomes much more believable if we can find narrowing of the lumen of the coronary artery on angiography, even though the association with the clinical manifestations is weak.


The Role of Basic Science

If we view the identification of the features of a disease as analogous to the findings of an experiment (in this case, an experiment conducted by a malicious deity) then one basis for distinguishing a disease from a non-disease is the extent to which the features can be explained by a scientific theory. Thus the infectious diseases are explained by a noncontroversial, and historically verified, theory of host and parasite. Chronic diseases such as atherosclerosis are a bit less disease-like since the theory underlying them is less secure. And as we move to syndromes such as chronic fatigue syndrome, we are less inclined to view them as diseases because no satisfactory scientific mechanism has yet been found to explain their features.

Turning to clinical reasoning, investigators such as Schmidt14 and Patel,15 in studying the role of basic science in clinical reasoning, have found repeatedly that clinicians rarely invoke mechanistic explanations. But as Schmidt has shown, the fact that they need not invoke mechanisms does not mean that they do not know them—the knowledge is available but is only rarely used. As he describes it, the knowledge is “encapsulated.” While basic science may play only a minimal role in day-to-day practice, it is arguably the only, or at least the major, route to understanding in this domain. Of course, basic science need not be restricted to biology. In the same way, the basic science of epidemiology was fundamental to understanding the transmission of AIDS, just as Snow in the 1850s understood the mechanism of cholera transmission (the London water supply) long before the bacillus was isolated.

I believe we can now posit an explanation for the paradoxical findings of Schmidt and Patel. In the normal course of events, clinicians making diagnoses deal at the syndrome level, where the nature of the causal mechanism is irrelevant. The history and physical exam are directed at revealing the syndrome-like manifestations, which then point to tests directed at the underlying processes, and therapy. The textbooks of clinical diagnosis for “old” diseases probably have not changed much since Osler's time. The signs and symptoms are pretty well what they have always been, although of course some historic scourges—smallpox, diphtheria, cholera—are now nearly unheard of in the West, and others, such as AIDS, have taken their place. But despite the changes in our understanding of disease, the clinician attempting to make a diagnosis is dealing almost exclusively at the syndrome level. Occasionally, some understanding of underlying processes may help to sort out some conundrum, but one suspects that clinicians appear rarely to use basic science simply because their investigations of history and physical are directed to labeling the syndrome. Clinical reasoning reverts to a historically earlier conception of the disease, following the biologic dictum that ontogeny recapitulates phylogeny—the fetus passes through all stages of evolution before birth.

Campbell12 elaborated the notion of disease in philosophical terms, describing two basic positions: the “nominalist” perspective and the “essentialist” perspective. In the nominalist view, a disease is simply a collection of abnormalities that appear to arise together. Thus the historical diseases of dropsy, consumption, and plague were recognized long before any causal agent was detected, although etiologies (such as “bad humors”) were advanced. Conversely, the essentialist perspective presumes that the signs and symptoms arise from pathologic processes that can be identified and hopefully rectified. While it is tempting to place these two views in a historical order, the contemporary examples we have discussed indicate that the two perspectives represent extremes on a continuum, which, as we shall see, has parallels in both philosophy and psychology.


What is a Concept? Lessons from Philosophy

We can make some general observations about the concept of disease. First, a disease, like any concept, does not exist entirely “out there” but rather, to some degree, is a mental construct. Second, the category or concept called “disease” is not an all-or-none proposition; rather, particular exemplars have different degrees of disease-ness. Finally, it is awfully difficult to devise an explicit rule to aid in distinguishing between diseases and non-diseases. A rule such as “diseases are what doctors deal with” works quite well but is singularly uninformative. And we sense, without proof, that any rule we may devise is not going to be coldly analytic, but must have sub-rules such as “the more Latinesque it is, the more disease-like it is.” So ironically, while it is relatively easy to devise rules to determine whether someone has a particular disease (although I will go on to show that the rules are not the whole story), it is a lot harder to devise rules for the overarching category called “disease.”

These issues are not at all specific to disease, but rather are part of a large body of knowledge extending in space across at least three disciplines—philosophy, psychology, and neuroscience—and in time as far back as Plato. To explore this further, I now venture (with considerable trepidation) into a more general inquiry into the nature of concepts. I begin by revisiting some philosophical views on the nature of concepts.

The origin of concepts has been, in some sense, a nature-nurture debate.9 However, this argument has focused not on whether human traits are inherited or learned (the usual spin on nature versus nurture), but rather on whether categories or concepts such as beauty, disease, table, or tree exist “out there” to be learned by individuals as they develop and mature (which would suggest that an individual's knowledge is formed from experience [nurture]) or are essentially a product of the mind (we impose order and category boundaries where none exists, as a result of the biological structure of the mind [nature]). A casual reading of any philosophy textbook reveals that this issue has been a central concern through the ages of the great minds—Plato, Aristotle, Descartes, Hume, Kant, etc. Let us briefly review the historical debate in mainstream philosophy, with a view to showing how thinking in philosophy can help to frame our perspective on clinical reasoning.

Modern philosophy began with Descartes, who emerges as the ultimate skeptic, and whose views have retained central status as the universal straw man for all his successors. His famous statement “cogito, ergo sum” (I think, therefore I am) has been a lodestone for philosophers and t-shirt makers for three centuries. Regrettably, this idea has been almost universally misunderstood. Most interpret it as a statement of the ultimate rational man; our humanity is defined in terms of our capacity for rational thought. Unfortunately, the statement had a much more humble meaning for Descartes. In continuing to question whether one could justify any external reality, to devise any conclusive argument for the existence of objects such as dogs and tables, Descartes was led to the desperate conclusion that the only thing he could be really sure of was his own thoughts. I think, therefore I am.

The antithesis of this position was championed by the English empiricists Locke and Hume. Their view was that the mind was a tabula rasa, a clean slate on which one's experience with the world was written. This interpretation seems perfectly acceptable for sensory experience, but is more difficult to sustain for higher concepts such as causation, temporality, or, for that matter, disease. Hume's resolution was to suggest that these notions emerge as habits of mind, built up from the repeated conjunction of experiences.

Kant reframed the issue in a way that is central to our subsequent journey through psychology and neuroscience. He recognized that thoughts can occur only as products of interactions between the mind and the external reality of experience; we construct experience. He maintained a rigid boundary between those properties that our minds bring to experience (which are hardwired) and those that emerge from experience. He eventually created a list of 12 “primitives”—object, causation, temporality, and nine others—that he claimed the mind imposed on the world of experience.

Hegel went one step further and recognized that the external world can influence the categories and labels we apply. The categories themselves do not emerge from our minds, but are influenced by the objects of our perceptions. The mind is not simply a clean slate upon which all experience is written in coherent form (Hume); nor is it the case that there is no uniform order in the outside world and that all concepts are mental inventions (Descartes); nor finally does the mind impose fixed structure or constructs on sensory experience (Kant). Instead, the concepts and the content both grow and evolve (“become”) as a consequence of the interaction between the individual and the environment.

Finally, in this century, Wittgenstein extended these ideas further. He proposed that not only are concepts not fixed, they also are not definable by any set of logical rules. In pondering even commonplace concepts such as “dog,” he realized that any attempt to devise rules is doomed. A dog has four legs—but if one is amputated it's still a dog. A dog barks—except an Egyptian Basenji. A concept—whether an abstract concept such as truth or a mundane concept such as dog, fork, or tree—emerges as a matter of “family resemblance.” Robins are more bird-like than penguins; malaria is more disease-like than alcoholism. Wittgenstein proposed that concepts or categories are derived from family resemblances, not from fixed sets of defining attributes.

Thus the philosophy of concepts evolved from a Cartesian view, which is entirely intra-psychic and questions any external reality, and an empiricist perspective that presumes that all order and concepts exist as natural categories to be discovered by the human observer, to a Kantian interaction, in which the mind provides the categories or concepts and the external reality provides the objects to fill the categories, to a Hegelian perspective, which is much more organic, and in which thoughts and concepts themselves evolve and change as a result of interactions with external reality. Ultimately, we reach the perspective of Wittgenstein, which places even fewer constraints on concepts, which are a matter of family resemblance and thus can be elaborated only through extensive experience with the world's families.

Applying these notions to clinical reasoning, philosophy presents a larger framework in which to view our dilemma in defining a disease. To the extent that a disease is a concept, philosophy buttresses the middle ground between the notion that diseases exist entirely “out there” only to be discovered and learned and the notion that they are probably simply mental constructs. We can then think of the concept of disease as arising from an interaction between the thoughts of the perceiver and regular aspects and associations of the environment. Further, some diseases, such as syphilis, are more central members of the family; others, including the syndromes, are more peripheral.

As we shall see, this formulation finds remarkable support in research in both psychology and neuroscience, to which I now turn.


What is a Concept? Lessons from Psychology

One division in psychology has been preoccupied with the same issue as the philosophers: how do people learn concepts such as table, dog, or truth? But instead of relying entirely on reason for understanding, psychology seeks evidence to understand how people create and learn concepts. Perhaps in the course of doing so, psychologists deliberately skirt some of the tough epistemologic issues that preoccupy philosophers. On the other hand, in my own reading, I was struck by how the one informs the other. A simple example:

The Müller-Lyer illusion,16 shown in Figure 1, is pretty well known to all. We see the one vertical element as being longer than the other. Even though we can measure them and show them to be the same, the illusion is inescapable—a fine example of how we impose order (sometimes biased order) on the external world. But psychologists have gone further with this illusion, and questioned precisely why it is an illusion. In the course of doing so, they provide a nice illustration of Hegel's interactive model of mind. One hypothesis is that it is an illusion because our minds are seeing it in three dimensions, so that the symbol on the left is seen as the outside corner of a wall nearest the viewer, and the one on the right is seen as the inside corner of a wall farthest away from the viewer. Although the two vertical lines are objectively the same size, since the one on the left is seen to be nearer than the one on the right, the right one is “actually” longer. Deregowski17 tested the illusion in Zulus, who spend their lives in round houses, and found that they did not see it as an illusion. So, it is not an illusion because our brains are “hardwired” to see it as such (unless Zulus have different hardwiring); it is an illusion because of the particular experiences we have had with the world. On the other hand, the illusion reminds us that our perceptions do not necessarily mirror reality, as they are also shaped by internal assumptions (in this case, about perspective and the inference of a third dimension from the two-dimensional representations on the retina) that sometimes lead us astray.

Figure 1. The Müller-Lyer illusion.

A second example from psychology leads us closer to our central concern with clinical reasoning. Most of us have, at one time or another, wondered whether the “red” we see is the same as the red seen by the person beside us. While the differences in perception are rarely likely to be as extreme as in the case of a childhood friend of mine whose color blindness was detected when he went to school and repeatedly drew green reindeer at Christmas, we have no real way of ever verifying the universality of “red.” Is it just a linguistic device, or a cultural norm? After all, at some time we all had to learn, from our parents or friends, what red was. Perhaps it differs in different cultures. These questions, as they begin to cross the boundary between philosophy, psychology, and learning, are of more than passing interest.

Much of the fundamental work in concept formation has been done by Eleanor Rosch.18 One area she studied was how colors are identified in different cultures. While there appear to be small cultural differences in the boundaries between colors (e.g., the Navaho have only one word for blue and green; no wonder, with all that turquoise jewelry around10), Rosch showed that all cultures were unanimous in their choices of the best examples of red, yellow, or green. Even more interesting, Rosch discovered a primitive tribe, the Dani, who had words for “bright” and “black” only. She then taught them words for colors, using Dani words (e.g., tree) that were unrelated to color. One group learned the “primary” colors such as fire-engine red; the other learned Dani words for intermediate colors such as turquoise. The group learning red, yellow, and blue learned the associative words rapidly and effectively; the other group never did master the associations. Studies of this type provide support for the contemporary notion in philosophy that categories and concepts derive from our experience of the world; indeed there is surprising uniformity to these concepts in precisely those areas where we might expect that experience (such as the experience of color) is also universal.

Prototype theory was perhaps the first theory of concepts to be seriously applied to clinical reasoning. Bordage and Zacks19 used many of the methods of Rosch to demonstrate that the same kind of graded structure that distinguished the natural categories was present in disease categories. They found, for example, that diabetes was a much more prototypical endocrine disease than Hashimoto's disease or hyperthyroidism. It was volunteered more often by practitioners asked to name an endocrine disease, recognized more accurately and quickly, and so on.

These studies lead to two conclusions: first, there is evidence to substantiate our musings at the beginning of this talk that the concept of disease is a continuum, not a category. Second, the identification of conceptual prototypes such as diabetes, carrot, and robin, which transcend different cultures, argues for an external “nurture” basis for concepts—even high-level concepts such as disease.
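
The graded structure described above can be made concrete with a small family-resemblance calculation: an item's typicality within a category is the sum, over its features, of how many category members share each feature. The diseases echo Bordage and Zacks's example, but the feature lists are invented for illustration:

    # Toy family-resemblance calculation in the spirit of prototype theory.
    # An item's typicality score is the sum, over its features, of the
    # number of category members sharing each feature. Features invented.

    from collections import Counter

    endocrine_diseases = {
        "diabetes":       {"endocrine", "hormonal", "chronic", "common"},
        "hypothyroidism": {"endocrine", "hormonal", "chronic"},
        "Hashimoto's":    {"endocrine", "autoimmune", "rare"},
    }

    freq = Counter(f for feats in endocrine_diseases.values() for f in feats)

    for disease, feats in endocrine_diseases.items():
        score = sum(freq[f] for f in feats)
        print(f"{disease}: family-resemblance score = {score}")
    # diabetes scores highest: the most prototypical member of the set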

Prototype theory, in its methods, seeks evidence for cultural or even transcultural norms for categories. In the extreme, prototype theory might be viewed as empirical evidence for a position that concepts and categories are derived entirely from universals in the environment, a position more extremely nurture-oriented than any we have considered except the positions of Locke and Hume.

Another psychological theory of concept formation, exemplar theory, while still holding to the implicit view that the concepts we learn reflect an external reality, is much more modest about the universality of such concepts. In this perspective, we are able to identify a member of a class or a concept, not because of any internal rules or because the sum of our experience has created prototypes of the class that are available for analysis and introspection, but because we have, in any category (dogs, chairs, diseases, sports cars), innumerable instances of the category (my dog, Rover, Lassie, etc.). When we are faced with a categorization task, a first line of defense is a search through memory for similar examples of the class, and then, if we find an example that is sufficiently similar, we assume the new beast is also a dog. This description makes the process sound far more deliberate and available for introspection than the evidence suggests. Instead, if we inquire why a person decided that the new beast was a golden retriever, the new car was an Audi, or the skin lesion was actinic keratosis, the modal response would be “Because it looks like a golden retriever,” or an Audi or actinic keratosis. Further justification may be forthcoming but it sounds suspiciously post hoc. This process is in fact unlikely to be available for conscious introspection.
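
A minimal sketch of the exemplar idea follows: a new case is assigned the category of the most similar stored instance, with no rules or prototypes consulted. The cases and features are invented for illustration:

    # Minimal sketch of exemplar-based categorization: a new case gets the
    # category of its most similar stored instance. Features invented.

    def similarity(a: set, b: set) -> float:
        """Jaccard similarity between two feature sets."""
        return len(a & b) / len(a | b)

    exemplars = [
        ({"scaly", "red", "sun_exposed", "elderly"}, "actinic keratosis"),
        ({"scaly", "red", "elbows", "plaques"},      "psoriasis"),
        ({"pigmented", "irregular", "growing"},      "melanoma"),
    ]

    def classify(case: set) -> str:
        # First line of defense: search memory for the most similar instance.
        return max(exemplars, key=lambda e: similarity(case, e[0]))[1]

    print(classify({"scaly", "red", "sun_exposed"}))  # -> actinic keratosis

Note that nothing in the sketch knows any defining rules for the categories; the judgment rests entirely on resemblance to stored cases.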

I and some colleagues have done a series of studies in dermatology20 and cardiology21 in which we have found evidence for this mode of processing. As one example,20 in a series of experiments we gave subjects (residents) practice with a set of dermatology slides covering 11 conditions, then subsequently tested them with a new set of slides. The slides were carefully chosen. Each was drawn from a quartet of slides containing two typical slides that strongly resembled each other, and two atypical slides that resembled each other. Each subject was then tested with two other slides of the quartet. We balanced it all off, so that we could look at performances on typical-similar, atypical-similar, typical-different, and atypical-different slides. Thus we deliberately compared typicality (a property of the number of features, central to prototype theory) with similarity (a characteristic of exemplar-based reasoning). The results showed effects of both similarity and typicality. With immediate testing, similarity resulted in a gain of accuracy of about 50%, typicality a gain of about 12%. After ten days' delay, slides that were similar to those in the initial learning series were diagnosed about 25% more accurately, and typical slides were diagnosed about 25% more accurately.

We have continued to explore these phenomena. One concern is that the effect will appear only with visually rich materials, where similarity is highly perceptual. Hatala21 conducted a study with ECG interpretation, which, while still visual, is replete with quantitative rules. In this study, similarity to an ECG in the learning phase was based entirely on a one-line description (e.g., a “54-year-old accountant” and a “middle-aged banker” versus an “80-year-old widow”). To demonstrate the effect, the match was to an ECG that was visually similar, but from an incorrect and confusable category (e.g., left bundle-branch block and anterior MI). When the description was matched, accuracy was 23%; when it was unmatched, it was 46%, and of course more residents who saw the matching description fell for the incorrect diagnosis. Further, it would seem that the process must have occurred without awareness. If they had known they were matching on the age and occupation, they would not have done it, since a moment's introspection reveals that this is irrational.

Both of these psychological theories—prototypes and instances—derive from a nurture view of concepts, namely that the concepts we learn are derived from our experiences. In fact, the exemplar models show precisely how specific experiences are available and used in subsequent judgments of category membership. However, as always, there is another side to the story. Psychology has been equally successful at deriving evidence to support the nature view, that what we see is influenced by our own minds. Admittedly, this is not a pure nature view, as we shall see, since the way our perceptions of the external world are biased derives itself from our experience with the world.

Cognitive psychology had its origins in an information-processing model based on the metaphor that the mind is like a computer. However, there was rapid accumulation of evidence showing just how un-computer-like humans are. One simple yet fundamental example is in information retrieval. The answers to questions such as “When did Columbus discover America?” and “What is the capital of Arkansas?” are available almost as soon as you hear the question inflection. Further, if asked about Albania, not Arkansas, you would know that you didn't know almost as rapidly. Contrast that with a search of the Web. Although the computer processes information at least a million times faster than does the mind, retrieval will inevitably take much longer. Further, it will take the computer longer still to decide that it doesn't know, since it will have to search every corner of its memory before it gives up. It is difficult to envision what kind of memory architecture humans must have to do this job, but it must be very different from the computer's RAM.
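
The contrast can be sketched in a few lines: a serial search must exhaust its records before it can report a miss, whereas a content-addressed (associative) store answers hits and misses in roughly constant time. This is an analogy only, with invented facts:

    # Serial search versus content-addressed retrieval. A linear scan must
    # touch every record before concluding "don't know"; a hash-based store
    # answers hits and misses in roughly constant time.

    facts = {"Columbus": "1492", "Arkansas": "Little Rock"}

    def serial_lookup(key, records):
        for k, v in records.items():  # exhausts memory before reporting a miss
            if k == key:
                return v
        return "don't know"

    print(serial_lookup("Albania", facts))   # miss: scanned everything first
    print(facts.get("Arkansas", "unknown"))  # associative hit, ~O(1)
    print(facts.get("Albania", "unknown"))   # associative miss, also ~O(1)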

One model of memory that accommodates these observations is called human associative memory. The model emerged from studies of reading coupled with a phenomenon called the word-superiority effect,22 which has relevance, surprisingly, to clinical reasoning as well as to many other domains. Imagine that I flash a four-letter word on the computer screen for a few milliseconds and ask you to identify the fourth letter. The phenomenon is this: when the fourth letter occurs in a real word such as “rink” or a pseudo-word such as “bink,” the “k” is recognized faster and more accurately than when it occurs in a non-word such as “nrik.” While this seems perfectly plausible, it says some fundamental things about the nature of memory. That is, even at the perceptual level of recognizing individual letters, a process that must occur in milliseconds and without conscious introspection, identification is facilitated by memory of much higher-level concepts, the words themselves. This seems to illustrate beautifully the interactive nature of perception, showing that what we see can be influenced by what we expect to see.

The observations of the word-superiority effect were modelled by McClelland and Rumelhart23 using a “connectionist” or parallel distributed processing (PDP) model, with multiple layers of nodes between input and output corresponding to letter elements, letters, and words, with links among nodes at all layers. Unlike expert systems or Bayesian models, these connectionist models had no preprogrammed rules: rather, they “learned” from experience, gradually building up strength among certain links connecting nodes.
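
A deliberately simplified sketch can convey the flavor of such a model: candidate letters receive equal bottom-up evidence, and any word consistent with the surrounding context feeds activation back down to its letters, so a degraded letter is easier to identify in a word than in a non-word. The words and weights are toy values, not the parameters of the original model:

    # Toy sketch in the spirit of interactive activation: word nodes feed
    # activation back down to letter nodes, reproducing the word-superiority
    # effect. Lexicon and weights are invented.

    WORDS = ["rink", "rank", "ring"]

    def letter_evidence(position: int, candidates: str, context: str) -> dict:
        """Bottom-up evidence plus top-down support from consistent words."""
        scores = {}
        for letter in candidates:
            bottom_up = 1.0  # equal perceptual evidence for each candidate
            pattern = context[:position] + letter + context[position + 1:]
            top_down = sum(0.5 for w in WORDS if w == pattern)
            scores[letter] = bottom_up + top_down
        return scores

    # Fourth letter is degraded. In the word context "rin_", 'k' and 'g'
    # get top-down support; in the non-word context "nri_", nothing does.
    print(letter_evidence(3, "kgx", "rin_"))  # {'k': 1.5, 'g': 1.5, 'x': 1.0}
    print(letter_evidence(3, "kgx", "nri_"))  # {'k': 1.0, 'g': 1.0, 'x': 1.0}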

Parallel distributed processing models have been continually refined (and renamed—they are now more commonly known as “neural networks”), and have found application in many settings, including clinical diagnosis, where they appear to be more effective diagnosis machines than the traditional expert systems. However, for present purposes, these applications are less important than the observation that the models have commonality with psychological views of concept formation, based on learning from examples. And as we shall see, the new name is not simply good public relations—neural networks bear a striking resemblance to models emerging from neuroscience.

My colleagues and I have taken this phenomenon, that recognition depends in part on the concepts available in memory, into the clinical reasoning lab. In a series of studies in dermatology, radiology, and electrocardiography, we biased the subjects by providing a brief history suggestive of a particular diagnosis, then showed them a visual stimulus—an ECG, a slide of a skin lesion, or a head-and-shoulders picture. We have consistently found that the bias influences not only the differential diagnosis (which might be viewed as perfectly rational), but also the feature calls. Moreover, in a recent study using textbook examples of physical signs,24 we showed that it was not simply a case that the history increased vigilance for that particular sign, and therefore the likelihood of detection. Rather, an incorrect history led students to misinterpret one sign as another—the inflamed parotid glands of mumps became the moon-shaped face of Cushing's disease, and the moon-shaped face of Cushing's became periorbital edema when linked with a history of nephrotic syndrome.

This phenomenon, that prior higher-level information either provided to the subject or available from memory can influence basic perceptual processes, has been demonstrated at all levels of expertise, from first-year students to cardiologists, so it is not simply a naive bias that can be erased with experience. LeBlanc's follow-up studies of strategies to “de-bias” subjects, under way in our lab, have shown that even fairly draconian measures are only partially successful, a finding that is not surprising since perceptual processes are not available to conscious introspection.

These findings, both in the cognition of perception and in clinical reasoning, challenge a commonly held view that experts use “forward reasoning”; that is, they begin with the facts of the case and reason inductively to a logical conclusion, a view championed by Groen and Patel.25 Their findings were derived from verbal introspections or written summaries, after the subjects had had time to read and reflect on the clinical case. It is my present view that the work on top-down processing, both in reading and reasoning, shows that deductive processes from hypothesized solutions are already occurring long before the case is in full view, and that the apparent induction of the expert simply reflects a coherent story told post hoc. One study done by Eva26 substantiates this view. He had subjects read mystery stories, then recount their solutions. Half told their solutions “online” as they were reading; the other half, as a summary after. On three measures, the latter group looked as if they were doing substantially more forward reasoning. However, the manipulation took place after the reasoning was over.


What is a Concept? Lessons from Neuroscience

Finally, conspicuous in its absence from the discussion to date is the role of neuroscience in our understanding of concepts. I have described how cognitive psychology has provided examples of phenomena that help us to understand some aspects of clinical reasoning. Theories of concept formation and perception are a useful heuristic for teasing apart aspects of clinical reasoning. But the skeptical reader could be forgiven for remarking that these theories seem more like useful demonstrations and analogies than real explanations, in a scientific sense.

Let me then venture into what is for me the largely uncharted territory of neuroscience. In doing so, I am moving closer to the more traditional interpretation of the nature-nurture debate than the way I originally framed it. That is, we now seek evidence from neuroscience that the brain and its structures (nature) are responsive to, and modified by, the environment (nurture). Further, just as basic science provides a framework for understanding disease, neuroscience may provide a framework for understanding the process of concept formation and clinical reasoning.

To advance the neuroscience argument, we need to discover evidence that categories “out there” can be localized to specific brain activities. Perhaps the most accessible argument about the impact of specific experiences on brain anatomy and brain development emerges from the phenomenon of plasticity—the discovery that there are critical periods in the development of the brain during which input from the environment is required in order for specific faculties to develop. The phenomenon is ubiquitous. Here are some examples:

* Children who have congenital cataracts must have them surgically removed before age 10, or they will be unable to recognize shape and pattern, although they will be able to learn colors. This was hypothesized to arise because of abnormal development in the visual cortex. Very recent research with newborns has extended this understanding further. Maurer27 studied infants less than 9 months old who had had cataracts removed, testing them immediately following the surgery. Immediately after surgery, their vision was like a newborn's—about 1/40 the acuity of an adult's. But after only one hour of visual input, their acuity had improved to the level of a one-month-old infant. To quote the researcher: “It's using the eyes and having the experience of seeing that's driving the normal experience of vision after birth…. the brain was wired to be ready to receive visual images… but it's got to have the input in order to do the learning.”

* Animal experiments showed that kittens raised in an environment containing only horizontal or only vertical orientations never learned to perceive the other. Hubel and Wiesel28 then showed that these selective deprivations are identifiable in the development of specific cells in the visual cortex. They went on to show that the brain development was incredibly specific, so that a single day of exposure at day 28 was sufficient to establish the orientation. Other researchers have gone on to establish that plasticity is associated with the presence of specific proteins. (A toy sketch of this kind of experience-dependent strengthening follows the list.)
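
A toy Hebbian sketch of such experience-dependent tuning, with invented numbers: connections exercised during the critical period strengthen multiplicatively, while unexercised connections stay at baseline:

    # Toy Hebbian rule: connections active during the critical period grow;
    # others never develop. All values are invented for illustration.

    weights = {"vertical": 0.1, "horizontal": 0.1}
    LEARNING_RATE = 0.2

    def expose(orientation: str, days: int) -> None:
        """Strengthen the exercised connection on each day of exposure."""
        for _ in range(days):
            weights[orientation] += LEARNING_RATE * weights[orientation]

    expose("vertical", days=28)  # rearing environment: vertical contours only
    print({k: round(v, 2) for k, v in weights.items()})
    # vertical grows ~165-fold; horizontal never develops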

The phenomenon of plasticity is direct evidence of an interaction between brain structures and the environment, and provides an explanation for the philosophical dilemma. Of course, such experiments do not provide direct evidence that higher-order concepts such as temperature, unemployment, love, or for that matter, tables, are associated with specific local changes. The next step is to move from the construction of perceptual maps of the environment to conceptual maps in different areas of the brain. This may not be as large a leap as it sounds; after all, the mechanism that enables us to recognize Aunt Sally must involve links among the more primitive operators that isolate color, shape, and orientation. Thus, we move from brain mappings corresponding to perceptual inputs, which, as we have seen, develop and specialize as a consequence of interactions with the environment at highly specific developmental intervals, to mappings corresponding to the relations among these elements—a “mapping of types of maps,” according to Edelman.29 This remains a theory thus far, the theory of “neuronal group selection.” I cannot pretend to be more than an intrigued observer, but it would seem that the evidence at hand regarding neural plasticity provides plausible mechanisms for such a neural correlate of concept formation. Indeed, as I discussed earlier, although neural networks were devised as a simulation device to test a model of concept learning involving parallel and distributed activation, there is a striking correspondence between the nodes and connections of neural networks and the proposed model of neuronal group selection.


Conclusions

This review was intended to accomplish no more than to place the current debates around clinical reasoning in a larger context. There is, in all this, a Michigan State University (MSU) connection. The small research program focusing on clinical reasoning was begun by Elstein and Shulman at MSU in the early 1970s. The McMaster group joined the fray soon after, with me as their hired hand. But soon after this first cycle of studies was completed, there was a strong divergence in the field. Elstein moved his interest to normative approaches such as decision analysis, assuming that clinicians were suboptimal decision makers who could be made more optimal with training. Others who followed, including Patel and Groen, while disagreeing on the details, retained a strongly rationalist perspective. On the other side, Bordage pursued studies in prototype theory, and I began a research program around exemplar models. It is only recently, with the study leading to this review, that I began to appreciate the historical origins of this divergence.

The exciting conclusion from this review is that there appears to be a convergence among the three disciplines—philosophy, psychology, and neuroscience—pointing to the reconciliation of these positions. While the constructs, the capacities for identifying regularities, appear innate, these abilities are directly responsive to the environment, so that each individual's concepts will be both communal and idiosyncratic. Moreover, this synthesis has some practical implications (believe it or not!). It appears to me that these thinkers are urging us to a reconciliation in our own field—expertise in clinical reasoning is neither mastery of analytical rules nor accumulation of experience; it is both. And the role of experience with individual examples in refining the concepts is critical. Finally, the philosophical work and the demonstrations of optical illusions show us that the external environment is not delivered to the senses intact, but is filtered through the prisms of prior experience. These are important lessons for instruction in clinical reasoning.

The sum of these findings describes a model of clinical reasoning very different from the algorithmic processes used by the computer (except when, using neural networks, the computer mirrors the mind). An evident implication is that there is little to be gained in demonstrating that humans are suboptimal Bayesians or algorithm-appliers; they are suboptimal because they are using a substantially different basis for computation. While, on the one hand, this provides a strong rationale for computerized decision-support systems, the cautionary note that pervades this review is that the support system cannot intervene after the data are collected, since the data are themselves subject to interpretation in light of mental models.


References

1. Elstein AS. Clinical problem solving and decision psychology: comment on “the epistemology of clinical reasoning.” Acad Med. 2000;75(10 suppl):S134–S136.

2. Elstein AS, Shulman LS, Sprafka SA. Medical Problem Solving: An Analysis of Clinical Reasoning. Cambridge, MA: Harvard University Press, 1978.

3. Ericsson KA, Smith J. Toward a General Theory of Expertise: Prospects and Limits. Cambridge, U.K.: Cambridge University Press, 1991.

4. Schmidt HG, Norman GR, Boshuizen HP. A cognitive perspective on medical expertise: theory and implication. Acad Med. 1990;65:611–21.

5. McGaghie WC, Boerger RL, McCrimmon DR, Ravitch MM. Learning pulmonary physiology: comparison of student and faculty knowledge structures. Acad Med. 1996;71:S13–S15.

6. Papa FJ, Elieson B. Diagnostic accuracy as a function of case prototypicality. Acad Med. 1993;68(10 suppl):S58–S60.

7. Patel VL, Groen GJ, Frederiksen CH. Differences between medical students and doctors in memory for clinical cases. Med Educ. 1986;20:3–9.

8. Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med. 1994;69:883–5.

9. Norman GR, Brooks LR, Allen SW. Role of specific similarity in a medical diagnostic task. J Exp Psychol Gen. 1991;120:278–87.

10. Hundert EM. Lessons from an Optical Illusion: On Nature and Nurture, Knowledge and Values. Cambridge, MA: Harvard University Press, 1995.

11. Chalmers AF. What Is This Thing Called Science? 3rd ed. St. Lucia, Australia: University of Queensland Press, 1999.

12. Campbell EJ, Scadding JG, Roberts RS. The concept of disease. BMJ. 1979;2(6193):757–62.

13. Chapman LJ, Chapman JP. Illusory correlations as an obstacle to the use of valid psychodiagnostic signs. J Abnorm Psychol. 1969;74:271–80.

14. Boshuizen HPA, Schmidt HG. The role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cogn Sci. 1992;16:153–84.

15. Patel VL, Groen GJ, Scott HM. Biomedical knowledge in explanations of clinical problems by medical students. Med Educ. 1988;22:398–406.

16. Gregory RL. Eye and Brain: The Psychology of Seeing. London, U.K.: Weidenfeld and Nicolson, 1966.

17. Deregowski JB. Illusion and culture. In: Gregory RL, Gombrich EH (eds). Illusion in Nature and Art. New York: Scribner, 1974.

18. Rosch E. Natural categories. Cogn Psychol. 1973;4:328–50.

19. Bordage G, Zacks R. The structure of medical knowledge in the memories of medical students and general practitioners: categories and prototypes. Med Educ. 1984;18:406–16.

20. Regehr G, Cline J, Norman GR, Brooks L. Effect of processing strategy on diagnostic skill in dermatology. Acad Med. 1994;69(10 suppl):S34–S36.

21. Hatala R, Norman GR, Brooks LR. Influence of a single example upon subsequent electrocardiogram interpretation. Teach Learn Med. 1999;11:110–7.

22. Reicher GM. Perceptual recognition as a function of meaningfulness of stimulus materials. J Exp Psychol. 1969;81:274–80.

23. McClelland JL, Rumelhart DE. An interactive activation model of context effects in letter perception. Psychol Rev. 1981;88:375–407.

24. Brooks LR, LeBlanc VR, Norman GR. On the difficulty of noticing obvious features in patient appearance. Psychol Sci. 2000;11:112–7.

25. Patel VL, Groen GJ. Knowledge based solution strategies in medical reasoning. Cogn Sci. 1986;10:91–116.

26. Eva KW, Norman GR. Is thinking aloud equivalent to post hoc explaining? Presented at the 1998 Research in Medical Education meeting, New Orleans, LA, 1998.

27. Maurer D, Lewis TL, Brent HP, Levin AV. Rapid improvement in the acuity of infants after visual input. Science. 1999;286:108–10.

28. Wiesel TN. The postnatal development of the visual cortex and the influence of the environment. Bioscience Reports. 1982;2:351–77.

29. Edelman GM, Mountcastle VB. The Mindful Brain. Cambridge, MA: MIT Press, 1978.

Section Description

Research in Medical Education: Proceedings of the Thirty-ninth Annual Conference. October 30 - November 1, 2000. Chair: Beth Dawson. Editor: M. Brownell Anderson. Foreword by Beth Dawson, PhD.

© 2000 Association of American Medical Colleges
