Academic Medicine, October 2006, Volume 81, Issue 10
Linking Basic Science and Clinical Reasoning

The Value of Basic Science in Clinical Diagnosis

Woods, Nicole N.; Neville, Alan J.; Levinson, Anthony J.; Howey, Elizabeth H.A.; Oczkowski, Wieslaw J.; Norman, Geoffrey R.

Section Editor(s): Bonomino, Giulia PhD; Wallach, Paul MD

Author Information

Correspondence: Nicole N. Woods, PhD, University of Toronto, The Wilson Centre, 200 Elizabeth Street, Eaton South 1-565, Toronto, ON M5G 2C4; e-mail: nikki.woods@utoronto.ca.

Abstract

Background: The role of basic science knowledge in clinical diagnosis is unclear. There has been no experimental demonstration of its value in helping students recall and organize clinical information. This study examines how causal knowledge may lead to better recall and diagnostic skill over time.

Method: Undergraduate medical students learned about four disorders in either neurology or rheumatology. One group learned a basic science explanation for the symptoms; the other learned epidemiological information. Both groups were then tested with the same set of clinical cases immediately after learning and again one week later.

Results: On the immediate test, there was no difference in accuracy (70% for both groups). One week later, however, performance in the epidemiology group dropped to 51%, whereas the basic science group dropped only to 62%.

Conclusions: Basic science knowledge that causally relates mechanisms to disease symptoms can improve diagnostic accuracy after a delay.

Since Flexner, there has been almost universal agreement that medical students should spend a minimum of two years studying basic science and that a solid foundation in basic science is a necessary prerequisite for competent practice. However, the basis for this assumption is difficult to identify. Clearly, some clinicians—anesthesiologists, intensivists, nephrologists—use many concepts from physiology in their daily practice. But it is less clear that other physicians need or regularly use basic science. Indeed, studies of clinician reasoning have shown little evidence that clinicians use basic science in routine diagnosis.1,2 Patel writes:

…the basic sciences and the more practical clinical knowledge form two separate domains with their own individual structures and the clinical information cannot be embedded into the basic science knowledge structure.2,p.398

It would seem that an understanding of mechanisms would have little direct heuristic value in the diagnostic task. Indeed, Patel appears to argue further that basic science knowledge has no benefit, direct or indirect, in clinical diagnosis. However, in a later paper, she argues that in cases of uncertainty, biomedical knowledge may provide coherence in the explanation of clinical cues.3 This is consistent with the results of the study by Norman, Trott, and Brooks, which showed that when experts are confronted with very difficult cases they make extensive use of basic science explanations.4

One possible source of uncertainty in these conclusions is that they are derived from think-aloud protocols as clinicians work their way through cases. Thus, if basic science provides “coherence,” that is, a conceptual framework relating signs and symptoms to diseases, it may never be visible at the level of overt utterances. In a recent paper, Rikers and colleagues have shown that physicians are more rapid and accurate than students in recognizing “encapsulated items” in case presentations.5 Encapsulations were generally defined as inferences about underlying physiological processes. The authors argued that responses to encapsulations reflect use of basic science knowledge. However, while encapsulations clearly include such biomedical process descriptions as “necrosis” or “sepsis,” their definition was inclusive, so a restatement of the diagnosis was also considered encapsulated knowledge. A follow-up study deliberately created target words that were either clinical or biomedical and again showed superiority of experts for speed and accuracy of recognition.6

These findings certainly show, in contrast to the earlier studies that relied on overt recall, that medical expertise is associated with encapsulated biomedical knowledge. But again, the conclusion of a causal role for biomedical knowledge is problematic. Experts show better recognition of both biomedical and clinical knowledge, so both could be independently associated with expertise, yet neither need be causally related to diagnostic skill.

To explore causal associations, De Bruin and colleagues used a statistical technique, structural equation modeling, to examine several models of the relation between basic science knowledge, clinical knowledge, and diagnostic accuracy.7 The best fit arose from a model in which basic science knowledge predicted clinical knowledge, which in turn predicted diagnostic accuracy. However, while the fit was good, the strength of association (path coefficients) was only moderate for experts, ranging from .31 to .43. In a sense, this creates the opposite problem of interpretation from the response time data, in that, if biomedical knowledge were causally related to diagnostic skill, one would expect the associations to be stronger, not weaker, with experts. Further, the relations were derived from correlational data, and causal relations, in the strictest sense, cannot be presumed. Finally, the associations amount to an average over 26 cases and about 100 clinical and basic science questions that were not chosen to link directly with each other. That is, there is no indication of the extent to which solution of the clinical cases was in any way logically or structurally linked to either basic science or clinical questions. The strength of association must, to some degree, be contingent on the degree of content overlap of specific questions asked in each test.
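
As an illustration of what such a chain model asserts, the sketch below simulates standardized scores in which basic science knowledge drives clinical knowledge, which in turn drives diagnostic accuracy, and then recovers the two path coefficients by ordinary least squares on standardized variables. The variable names, sample size, and simulated path strengths are ours for illustration; they are not taken from De Bruin's data.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200  # hypothetical sample size, not from the study

# Simulate standardized scores for the chain model:
# basic science knowledge -> clinical knowledge -> diagnostic accuracy.
# The .4 path strengths are arbitrary illustrative values.
basic = rng.standard_normal(n)
clinical = 0.4 * basic + np.sqrt(1 - 0.4 ** 2) * rng.standard_normal(n)
accuracy = 0.4 * clinical + np.sqrt(1 - 0.4 ** 2) * rng.standard_normal(n)

def path_coefficient(x, y):
    """Standardized regression slope of y on x (the path coefficient)."""
    xs = (x - x.mean()) / x.std()
    ys = (y - y.mean()) / y.std()
    return float(np.polyfit(xs, ys, 1)[0])

a = path_coefficient(basic, clinical)      # basic science -> clinical knowledge
b = path_coefficient(clinical, accuracy)   # clinical knowledge -> diagnostic accuracy
print(f"basic -> clinical:    {a:.2f}")
print(f"clinical -> accuracy: {b:.2f}")
print(f"indirect effect of basic science on accuracy: {a * b:.2f}")
```

In a model of this form, basic science knowledge affects diagnostic accuracy only through the intermediate clinical knowledge, which is why moderate path coefficients translate into a fairly weak indirect effect.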

To unequivocally demonstrate a causal role for basic science in clinical reasoning, one should experimentally manipulate the presence or absence of biomedical knowledge and examine its impact on diagnosis. That is the intent of the present paper—to demonstrate experimentally that one fundamental role of basic science is to provide a coherent conceptual framework on which to acquire diagnostic information. A sound grasp of underlying mechanisms may enable students to understand why particular signs and symptoms are associated with particular disease states, so that students may use their understanding of basic science to aid in reconstructing the features of a disease after initial instruction. For example, a Babinski sign and hyperreflexia are associated with a typical stroke because there has been an interruption of the upper motor neuron pathways that normally mediate tone and reflexes. The pathology and its attendant signs and symptoms can be worked out through an understanding of the basic physiological processes.

If this is so, then students who learn basic science explanations may actually retain clinical information about the relation between features and diseases better than students who attempt to learn the clinical correlates directly. This hypothesis was tested in a study in which we presented undergraduate students with information related to four neurological disease categories: Muscle Disorders, Neuromuscular Junction Disorders, Upper Motor Neuron Lesions, and Lower Motor Neuron Lesions.8 One group of students learned a basic science description of each condition that explained how the signs and symptoms came about; the other group learned the probabilities relating the signs or symptoms to the diseases. Both groups took the same diagnostic test, consisting of 15 written cases. Immediately after learning there was no difference; one week later, however, the students who had learned the disease probabilities showed a 10% drop in performance, whereas the group that had learned a basic science description showed no decay at all.

Those findings, however, were based on a small and atypical sample and a single knowledge domain. Further, although some authors have produced evidence that students provided with actual probabilities will outperform others who simply learn the features,10 disease probabilities are likely to be difficult to remember over the longer term. Finally, the participants were undergraduate psychology students with minimal familiarity with the nomenclature of medicine, so the results may be an artifact of the use of such rank novices.

Nevertheless, these findings, although preliminary, lead to the notion that a critical role for basic science may be to permit the reconstruction of the relation between signs/symptoms and diagnoses by making these relationships meaningful, hence memorable. This thesis finds support in theories of cognition regarding the coherence of categories. Murphy and Medin presented the idea that concepts are organized around personal theories and that we fit our knowledge of categories into theoretical frameworks.9 We argue that basic science knowledge serves as a theoretical framework for the organization of clinical knowledge.

In the present study, we replicate and extend these findings in two domains of medicine, neurology and rheumatology, using actual diagnoses and a larger, more representative sample of undergraduate medical students. The primary hypothesis is that students who learn basic science explanations for clinical conditions will be better able to remember the critical features of those conditions after a delay than a control group of students who simply learn the features of each condition.

Method

Participants

With institutional review board approval, 58 participants were recruited from among first- and second-year medical students prior to their rheumatology and neurology study units. No particular attempt was made to have the same participants in each discipline, and discipline was simply treated as a between-subjects factor in the analysis. Participants were compensated $20.00 for their time.

Materials and apparatus

Except for the initial instructions, the experiment was run entirely on personal computers. For each of the two experimental conditions, a set of written learning materials describing four disorders was created. In the neurology discipline, the four disorders were myasthenia gravis, brainstem stroke, spinal cord compression, and polyneuropathy. In the rheumatology discipline, the four disorders were ankylosing spondylitis, scleroderma, rheumatoid arthritis, and lupus.

The learning materials for both conditions consisted of paragraphs that included the same clinical features or symptoms; only the nondiagnostic information differed. In the Basic Science (BS) condition, the written materials included a brief overview of the relevant anatomy and physiology, and the specific symptoms of each disorder were described as resulting from various disruptions to the system. In the Feature List (FL) condition, additional epidemiological information that was not diagnostically relevant (e.g., prevalence and prognosis) was included to make the descriptions approximately equal in length to the Basic Science training materials.

In an attempt to encourage participants to learn all the available material and to ensure that they did indeed acquire adequate knowledge of the disorders, the experiment included two short quizzes. The first quiz was common to both conditions and consisted of eight multiple-choice and true/false questions measuring memory for the clinical features of the disorders. The second quiz was in short-answer format and measured knowledge of the “supporting information” associated with the diseases. For the BS condition, this included questions on the pathways and causation associated with the symptoms; for the FL condition, it included questions relating to the epidemiological information. For counterbalancing purposes, two variations of each quiz, matched for difficulty, were created.

The diagnostic test included 12 cases, each consisting of the gender, age, and at least four presenting symptoms of a fictional patient. The cases were created by the research team and checked for realism by specialists. For counterbalancing purposes, two sets of 12 cases, matched for difficulty, were created. On day one, half of the participants in each condition were tested on set A, while the remaining half were tested on set B. The sets were reversed for the delayed test.

Procedure

Participants were run in cohorts of up to five people in a computer laboratory. Participants in the Basic Science condition were told to learn the symptoms of the disorders and also to focus on learning the “disease process”: the biomedical information behind the causes of the symptoms and how the symptoms relate to each other. Participants in the Feature List condition were told to learn the symptoms of the disorders and also to focus on learning other information such as prevalence rates and treatment options. Participants were asked to spend approximately 15–20 minutes studying the learning materials. When they felt that they had learned the symptoms of the disorders and the other supporting information relevant to their condition, they were told to click on “proceed to test” to take the recall quiz, for which they would receive feedback. Participants were then given an unlimited amount of time to complete the diagnostic test and were asked to read each case carefully and to decide on the most appropriate diagnosis. All participants were informed that they would be tested again the following week but that they would not study the materials again.

One week later, participants were told that they would begin with 12 patient cases they had not seen before, that they should read each case carefully and decide on the most appropriate diagnosis, and that they would then complete a memory test for all four disorders. The test cases and the recall test were taken from the alternate set of test materials. No time limits were imposed.

Analysis

The primary hypothesis was that students in the basic science and feature list conditions would perform equivalently on the first diagnostic test, but after a one-week delay participants in the FL condition would perform worse than they did initially, while students in the BS condition would maintain diagnostic performance. This hypothesis amounts to a Time × Condition interaction in a repeated measures analysis of variance.
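
For concreteness, the sketch below shows how a split-plot design of this kind could be analyzed as a mixed repeated-measures ANOVA in Python. The pingouin package, the column names, and the toy scores are our own illustrative choices; they are not part of the original analysis.

```python
import pandas as pd
import pingouin as pg  # assumption: any mixed-ANOVA tool would serve equally well

# Hypothetical long-format data: one diagnostic accuracy score per participant
# per test occasion. The numbers below are invented for illustration only.
scores = {
    "BS": [(0.73, 0.65), (0.68, 0.60), (0.72, 0.64), (0.67, 0.59)],
    "FL": [(0.71, 0.53), (0.69, 0.50), (0.72, 0.52), (0.68, 0.49)],
}
rows, sid = [], 0
for condition, pairs in scores.items():
    for immediate, delayed in pairs:
        sid += 1
        rows.append({"subject": sid, "condition": condition,
                     "time": "immediate", "accuracy": immediate})
        rows.append({"subject": sid, "condition": condition,
                     "time": "delayed", "accuracy": delayed})
df = pd.DataFrame(rows)

# Mixed ANOVA: time (immediate vs. delayed) is the within-subject factor,
# condition (BS vs. FL) the between-subjects factor. The hypothesis predicts
# a significant Time x Condition interaction.
aov = pg.mixed_anova(data=df, dv="accuracy", within="time",
                     subject="subject", between="condition")
print(aov)
```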

Results

The mean scores for each group at the first and second test periods are shown in Table 1. Both groups performed identically on the first test, with 70% accuracy. However, after the delay, the group that had learned the feature lists had an accuracy of only 51% vs. 62% for the basic science group, F(1,53) = 5.95, p < .05. This amounts to an effect size of .65, which is in the range of a moderate to large effect.
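
As a rough arithmetic check, an effect size in the d family can be approximated from the reported F statistic and its degrees of freedom. The conversion below is a standard one (F to partial eta squared to Cohen's f and d); it assumes that the .65 reported above is a d-type measure, which the text does not state explicitly.

```python
import math

# Reported above: F(1, 53) = 5.95 for the group difference after the delay.
F, df_effect, df_error = 5.95, 1, 53

# Partial eta squared from F and its degrees of freedom.
eta_p2 = (F * df_effect) / (F * df_effect + df_error)

# Cohen's f, and the two-group equivalent d = 2f.
cohens_f = math.sqrt(eta_p2 / (1 - eta_p2))
cohens_d = 2 * cohens_f

print(f"partial eta^2 = {eta_p2:.3f}")    # about 0.10
print(f"Cohen's d     = {cohens_d:.2f}")  # about 0.67, close to the reported .65
```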

Table 1. Mean scores (diagnostic accuracy and feature recall) for the basic science and feature list groups at the first and second test periods.

Both groups performed similarly on tests of recall of the features in each diagnostic condition at Time 1 and Time 2, with mean percent correct ranging from 75% to 85% (see Table 1). While the effect of delay was significant (p = .02), there was no significant effect of condition or interaction.

Discussion

The results are consistent with the earlier study and with our expectations. Immediately after learning, students who learned the symptoms of a disease in the context of biomedical information performed similarly to students who learned the symptoms in the context of epidemiological information, but the students with causal knowledge showed a substantially smaller degradation of performance after a one-week delay.

A possible, but incomplete, explanation is that embedding the features in a causal model provides a meaningful context for learning, leading to enhanced memory for the material. This is consistent with a large body of evidence about the role of meaning in memory. The problem with this explanation is that the enhanced performance on the diagnostic task by participants in the BS condition after a delay was not accompanied by a similarly enhanced memory for the features at this time (BS = 6.13/8; FL = 6.00/8). An alternative, though less well understood, explanation is that basic science provided a measure of coherence to the relation between features and diagnoses, so that features of the case that were consistent with the causal model (and hence correct) were weighted more heavily in the final judgment. This explanation is consistent with the finding of De Bruin and colleagues7 that basic science knowledge was best characterized as encapsulated and was related to diagnostic accuracy only indirectly, through its relation to clinical knowledge.

The study has obvious limitations. None of the participants were experts in any sense of the term, although they did demonstrate an acceptable level of diagnostic skill. By the nature of the experiment, their entire expertise in the domain was derived from the materials they studied; thus, it cannot be presumed that the extent of their reliance on causal explanations in any way mirrors the reliance of practitioners on basic science. Indeed, there is much evidence from this lab and elsewhere that experts rely heavily on prior specific experiences.11 This may also explain the low path coefficients among experts observed in the De Bruin study discussed earlier.7 Finally, the delay was quite short, although there is no reason to expect that the observed trends would be extinguished over longer intervals.

Regardless, while the mechanism remains to be precisely delineated, this research shows experimentally, for the first time, that basic science knowledge is not, as earlier studies concluded, an inert corpus of facts that fails to interact with clinical knowledge. On the contrary, a good understanding of basic science appears to be a major determinant of diagnostic success in the long term. In this respect, the finding is consistent with other recent work on the role of basic science showing, with probes that do not depend on overt recall, that basic science has a central role in the development of clinical expertise.

To the extent that the proposed mechanisms are operating, they likely depend critically on instruction that establishes explicit causal links between features and diseases. If so, instruction that divorces mechanisms from clinical correlates will be of little value: a preclinical course in physiology or pharmacology that treats the subject as a self-contained body of facts and concepts, without explicitly examining the relation between mechanisms and disease manifestations, is unlikely to help. Problem-based learning (PBL) may appear to exemplify how instruction can make explicit the linkage between clinical features and disease mechanisms; however, this may be achieved at the cost of a good understanding of the mechanisms themselves.12,13 In summary, the findings of the study have significant implications for preclinical basic science instruction regardless of curriculum format.

Acknowledgments

This research was funded entirely by a grant from the Medical Council of Canada.

References

1 Patel VL, Evans DA, Groen GJ. Biomedical knowledge and clinical reasoning. In Evans DA, Patel VL, eds. Cognitive Science in Medicine. Cambridge: MIT Press, 1988:53–112.

2 Patel VL, Groen GJ, Scott HM. Biomedical knowledge in explanations of clinical problems by medical students. Med Educ. 1988;22:398–406.

3 Patel VL, Kaufman DR. Clinical reasoning and biomedical knowledge: Implications for teaching. In Higgs J, Jones M, eds. Clinical Reasoning in the Health Professions. Oxford: Butterworth Heinemann, 1995:117–28.

4 Norman GR, Trott AL, Brooks LR, Smith EKM. Cognitive differences in clinical reasoning related to postgraduate training. Teach Learn Med. 1994;6:114–120.

5 Rikers RMJP, Loyens SMM, Schmidt HG. The role of encapsulated knowledge in clinical case representations of medical students and family doctors. Med Educ. 2004;38:1035–1043.

6 Rikers RMJP, Loyens SMM, te Winkel W, Schmidt HG. The role of biomedical knowledge in clinical reasoning: a lexical decision study. Acad Med. 2005;80:945–949.

7 de Bruin ABH, Schmidt HG, Rikers RMJP. The role of basic science knowledge and clinical knowledge in diagnostic reasoning: a structural equation approach. Acad Med. 2005;80:765–73.

8 Woods NN, Brooks LR, Norman GR. The value of basic science in clinical diagnosis: creating coherence among signs and symptoms. Med Educ. 2005;39:107–12.

9 Murphy GL, Medin DL. The role of theories in conceptual coherence. Psychol Rev. 1985;92:289–316.

10 Elieson SW, Papa FJ. The effects of various knowledge formats on diagnostic performance. Acad Med. 1994;69:81–3.

11 Norman GR, Brooks LR. The non-analytic basis of clinical reasoning. Adv Health Sci Educ. 1997;2:173–84.

12 Patel VL, Groen GJ, Norman GR. Effects of conventional and problem-based medical curricula on problem solving. Acad Med. 1991;66:380–9.

13 Norman GR. Editorial—Beyond PBL. Adv Health Sci Educ. 2004;9:257–60.

© 2006 Association of American Medical Colleges
