Academic Medicine, August 2005 - Volume 80 - Issue 8
Research Report

The Role of Basic Science Knowledge and Clinical Knowledge in Diagnostic Reasoning: A Structural Equation Modeling Approach

de Bruin, Anique B. H. MSc; Schmidt, Henk G. PhD; Rikers, Remy M. J. P. PhD


Author Information

Ms. de Bruin is assistant professor, Department of Psychology, Erasmus University Rotterdam, The Netherlands.

Dr. Schmidt is professor, Department of Psychology, Erasmus University Rotterdam, The Netherlands.

Dr. Rikers is associate professor, Department of Psychology, Erasmus University Rotterdam, The Netherlands. At the time this study was conducted, all three authors were at Maastricht University, The Netherlands.

Correspondence should be addressed to Ms. de Bruin, Department of Psychology, WJ5-09, Erasmus University Rotterdam, P.O. Box 1738, 3000 DR Rotterdam, The Netherlands; e-mail: 〈debruin@fsw.eur.nl〉.


Abstract

Purpose: To examine four theories on the role of basic science knowledge and clinical knowledge in diagnostic reasoning.

Method: In 2000–01, the authors tested the basic science and clinical knowledge and diagnostic performances of 59 family physicians and 184 second- to sixth-year medical students at Maastricht University, The Netherlands. Structural equation modeling was used to analyze the data. Four theoretical models were tested. In the first model only basic science knowledge is involved in diagnostic reasoning; in the second model only clinical knowledge is related to diagnostic reasoning; in the third model, clinical knowledge is related to diagnostic reasoning, but basic science knowledge is integrated in clinical knowledge; and in the fourth model, both basic science knowledge and clinical knowledge independently influence diagnostic reasoning.

Results: Forty-four (75%) of the family physicians and all 184 (100%) of the students responded. The results indicated that the third model, which is based on the knowledge encapsulation theory, provided the best fit to the data, whereas the models that directly related basic science knowledge to diagnostic performance did not fit the data adequately.

Conclusion: The results generally supported the third model, based on Schmidt and Boshuizen's knowledge encapsulation theory, which suggests that basic science knowledge is activated in expert diagnostic reasoning through its relation with clinical knowledge.

Since the 1980s, cognitive scientists have generally assumed that what distinguishes experts from novices is their superior domain-specific knowledge.1 For example, a study by Johnson and colleagues2 on medical expertise demonstrated that, in diagnostic reasoning, expert physicians differentiated common diseases into disease variants more often than did nonexperts. Furthermore, Norman et al.3 showed that when given uninterpreted laboratory data, medical experts produced more extensive recall protocols than did novices or advanced students, which indicated that the experts possessed more detailed medical knowledge (for a detailed analysis of the relation between recall and problem representation, see Kintsch and Greeno4).

Two more or less separate types of knowledge can be identified when attempting to describe the structure of medical knowledge.5 First, basic science, or biomedical knowledge, describes causal mechanisms regarding the functioning and dysfunctioning of the human body. This type of knowledge incorporates, for instance, physiology, anatomy, and microbiology. Second, clinical knowledge entails information about relations of particular signs and symptoms with specific diseases. The difference between basic science knowledge and clinical knowledge is reflected in the structure of medical education. In most traditional curricula, medical education is generally divided into two phases: a preclinical phase and a clinical phase. In the first phase, students build extensive basic science knowledge that specifies the anatomy and (patho)physiology of the human body. In the second phase, students ideally follow a program of clinical internships, where they continuously find themselves in patient encounters that enable them to identify associations of signs and symptoms with particular diseases. At that point students commence the construction of a firm and coherent clinical knowledge base. Through these numerous patient interactions, physicians' clinical knowledge is revised and extended continuously, resulting in efficient and adequate clinical reasoning.

Although researchers generally agree on the distinction between biomedical knowledge (hereafter referred to as basic science knowledge) and clinical knowledge,2,3,6–8 the role of basic science knowledge and clinical knowledge in diagnostic reasoning has been a subject of discussion since the 1980s. For example, Lesgold and colleagues6,9 proposed that basic science knowledge fulfills an integrating function in medical diagnosis. They assert that the key to accurate diagnostic thinking lies in a correct understanding of the anatomy and physiology of the human body. When constructing clinical case representations, physicians recognize clinical phenomena and activate basic science knowledge to account for unexplained symptoms and to specify a diagnosis. Lesgold and colleagues' research on diagnosing X-rays showed that stating a correct diagnosis involved explicit use of anatomical and pathophysiological knowledge. Apart from recognizing more anatomical structures, the radiologists in the Lesgold study produced more extensive think-aloud protocols than did the novices and intermediates, protocols that contained numerous connections between findings in the X-ray and basic science knowledge. In a study on diagnosing ECG traces, Gilhooly and colleagues10 showed that, compared with intermediates and novices, cardiologists produced more extensive think-aloud protocols containing more biomedical “episodes,” characterized as sequences of three or more segments of basic science information. Gilhooly and colleagues concluded that the extent to which physicians are able to apply basic science knowledge is an important predictor of expertise.

The perspective that the application of basic science knowledge is a distinguishing feature of expert diagnostic thinking was challenged by Patel and colleagues.11–13 In their opinion, when diagnosing a clinical case, medical experts mainly activate clinical knowledge, and hardly ever revert to basic science knowledge. Patel and colleagues assert that biomedical or basic science knowledge and clinical knowledge constitute two more or less “distinct worlds,” possessing their own distinct structure and characteristics.12,14 They assume that clinical knowledge is based on a complex taxonomy relating symptoms to disease categories, whereas basic science knowledge consists of general principles defining chains of causal physiological mechanisms. Basic science knowledge is mainly used to provide more coherence in post hoc pathophysiological explanations of clinical phenomena.

Evidence for Patel and colleagues' two-worlds hypothesis comes primarily from the finding that basic science concepts were largely absent in think-aloud protocols of physicians engaged in clinical reasoning.11,12 In one of their experiments, adding relevant basic science information to a clinical case did not lead advanced students to state more accurate diagnoses or to adjust their views of the diagnostic evidence. Basic science information was used solely to increase the extensiveness of the post hoc pathophysiological explanations. Patel and colleagues concluded that basic science knowledge and clinical knowledge are incongruous, and that the information retrieved from a clinical case is organized independently of basic science information. Patel and Kaufman13 later asserted that especially in cases of uncertainty (i.e., a complex clinical case) biomedical knowledge can provide coherence in the explanation of clinical phenomena. Thus, although basic science and clinical medicine each has a distinct structure and mode of reasoning, the former can help create coherence in the latter.

Finally, the knowledge encapsulation theory put forward by Schmidt and Boshuizen15–18 assumes that basic science knowledge plays an important role in the development of clinical knowledge. When they study a clinical case, medical students, who largely lack clinical experience, activate detailed basic science knowledge in order to understand the case. On the other hand, medical experts have repeatedly encountered constellations of particular signs and symptoms in relation to certain diseases. Basic science knowledge acquired during the first years of medical education becomes subsumed under (or encapsulated in) clinical concepts during experts' repeated contact with patients. When they study clinical cases, medical experts directly associate signs and symptoms in the case with relevant clinical concepts. Basic science knowledge is summarized or encapsulated in the clinical concepts. How medical knowledge develops from basic science to clinical knowledge is described in further detail in work on so-called illness scripts. Schmidt et al.19 theorized that medical knowledge evolves from a stage of formal reasoning characterized by the use of extensive knowledge of pathophysiology, through a stage of compilation of this elaborate knowledge into condensed causal mental models that ease diagnostic reasoning and lead to the emergence of “illness scripts,” to a final stage in which individual patient encounters are stored in memory and their representations are instantiated when a different patient with similar symptoms is encountered. This evolution of knowledge starts as soon as medical students are exposed to real patients. However, the result of this process is not consistent across physicians. Bordage20 discusses the structure of medical knowledge and adds that four types of knowledge organization are recognized in diagnostic reasoning: reduced, dispersed, elaborated, and compiled. While the first two types are associated with clinicians who have diagnostic difficulties, the latter two types (elaborated or compiled) correspond with accurate diagnostic thinking, and in the case of compiled knowledge, knowledge encapsulation. According to Bordage,20 medical training should focus on learning strategies that foster especially elaborated and compiled knowledge.

Possible evidence for the knowledge encapsulation theory comes from various sources. For example, a study by Boshuizen and Schmidt17 showed that the proportion of basic science statements physicians produced when diagnosing a pancreatitis case was highest at intermediate levels of expertise. Experts in this study applied little basic science knowledge when diagnosing the particular case. Moreover, in a free-recall study,16 medical experts' recall protocols were shorter and contained fewer basic science statements than did those of intermediates. However, experts provided more clinical concepts in their recall protocols, which were characterized as summaries of detailed basic science and clinical knowledge. The clinical concepts in the recall protocols indicated that experts processed the cases in a compiled or “encapsulated” manner, which made activation of detailed basic science knowledge unnecessary.21–23

In this study we tested the three theories described above (those of Lesgold and colleagues,6 Patel and colleagues,11,12 and Schmidt and Boshuizen15,16) by exploring the relationships proposed in the models among basic science knowledge, clinical knowledge, and diagnostic performance. A group of family physicians and medical students (second to sixth year) completed three tests: a clinical case diagnosis test, a basic science knowledge test, and a clinical knowledge test. We examined the plausibility of the theoretical models using structural equation modeling (SEM). Figure 1 shows the theoretical models that were tested; the observed path coefficients for student data and expert data are shown separately. The first model represents the point of view of Lesgold and colleagues6,9 that the amount and quality of basic science knowledge influence a physician's diagnostic performance (indicated by the arrow); clinical knowledge does not play a prominent role in this model (indicated by the absence of arrows). The second model corresponds with the theory of Patel and colleagues, in which only clinical knowledge contributes to diagnostic performance.11–13 To test the knowledge encapsulation theory,15–17 we examined a third model, by Schmidt and Boshuizen, in which clinical knowledge influences diagnostic performance but basic science knowledge has an indirect effect on diagnostic reasoning by contributing to clinical knowledge. Finally, we tested a fourth model in which both basic science knowledge and clinical knowledge are independently related to diagnostic performance. We added this model to the analysis to rule out alternative explanations and because of its face validity.

Figure 1. The four theoretical models tested, showing the hypothesized paths among basic science knowledge, clinical knowledge, and diagnostic performance, with observed path coefficients for student and physician data.
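To make the structure of these four models concrete, the sketch below shows how they could be specified and compared in a modern SEM package. It is our own illustration, not the authors' analysis (which used AMOS26): it assumes the third-party Python package semopy and a per-participant data table with hypothetical column names BASIC, CLIN, and DIAG for the three test scores, and it lists only the structural paths (in the published models all three measures are present and the omitted paths are fixed to zero).

import pandas as pd
import semopy

# Structural paths of the four competing models (lavaan-style syntax).
MODELS = {
    # 1. Lesgold et al.: only basic science knowledge drives diagnostic performance.
    "lesgold": "DIAG ~ BASIC",
    # 2. Patel et al.: only clinical knowledge drives diagnostic performance.
    "patel": "DIAG ~ CLIN",
    # 3. Schmidt and Boshuizen (knowledge encapsulation): basic science knowledge
    #    works indirectly, through clinical knowledge.
    "encapsulation": "CLIN ~ BASIC\nDIAG ~ CLIN",
    # 4. Independent influence: both knowledge types contribute directly.
    "independent": "DIAG ~ BASIC + CLIN",
}

def fit_all(data: pd.DataFrame) -> None:
    """Fit each model to the participant data and print its fit indices."""
    for name, description in MODELS.items():
        model = semopy.Model(description)
        model.fit(data)
        stats = semopy.calc_stats(model)  # includes chi-square, CFI, RMSEA, etc.
        print(name)
        print(stats.T)

Fitting the same specifications separately to the student and the physician data would reproduce the kind of two-group comparison reported in the Results.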

Method

Study participants and location

In 2000 and 2001, we asked 59 family physicians, each with between 1.5 and 30 years of experience (mean = 15.4 years), to participate. Physicians were sent a letter explaining the context of the study and were asked whether they were interested in taking part. Participants responded by returning an answer sheet and were subsequently contacted by phone. A total of 184 second- to sixth-year medical students at Maastricht University, The Netherlands, also participated. At the time of the tests, the fifth- and sixth-year students were in their clinical internships. Both students and physicians received financial compensation for taking part in this study.

Materials

Our study materials consisted of a clinical case diagnosis test, a basic science knowledge test, and a clinical knowledge test*.

The clinical case test (Cronbach's alpha = .91) comprised 26 written medical cases, all based on information from real clinical cases from a family medicine practice. The cases covered diseases of all the major organs and consisted of an introduction specifying the patient's complaint and history, followed by the patient's laboratory data. The cases were constructed by a graduate medical student under the supervision of a team of expert family physicians.25 (See Appendix 1 for a sample case.) Hereafter, we will refer to the clinical case test as the diagnostic performance test. The basic science knowledge test (Cronbach's alpha = .97) consisted of 97 true/false questions covering the basic sciences of medicine (e.g., physiology, microbiology, biochemistry, and anatomy). The clinical knowledge test (Cronbach's alpha = .97) consisted of 105 true/false questions covering the clinical subdisciplines of medicine (e.g., cardiology, dermatology, internal medicine, and pulmonology). The items for the two knowledge tests were selected by staff of the medical school of Maastricht University in a rigorous and precise procedure that often resulted in the rejection of questions because of poor formulation or inadequate content.24 (See Appendix 2 for examples of a basic science question and a clinical knowledge question.) Participants could answer the questions on the basic science test and the clinical knowledge test with one of three options: true, false, or, when unsure about the correct answer, a question mark. We offered this last option to discourage guessing and to reflect the participants' knowledge more accurately.

Procedure

The testing procedure for family physicians differed from that for students. Rather than being tested at a single session, family physicians who had agreed to take part were visited by a research assistant. Physicians were asked to read and diagnose the 26 clinical cases without time restriction. When they were unsure about their diagnosis, they were allowed to state a differential diagnosis. Completion of the diagnostic performance test took between 45 minutes and an hour. Because of the considerable length of the two knowledge tests, we decided not to administer them directly after the diagnostic performance test. Participants were allowed to fill in the knowledge tests in their spare time and were handed an envelope in which to return the completed answer sheet. Given the length and detail of the tests, we believed it unlikely that participants would look up the answers. Moreover, the instructions stressed the importance of the reliability of the study and underlined the fact that we would make no individual comparisons.

We recruited the medical students by phone. A total of 184 students agreed to take part and completed the diagnostic performance test individually, without time restriction. They were allowed to state a differential diagnosis when they thought it was necessary. These students completed the basic science and clinical knowledge tests as part of their regular medical education, and their results were made available to us for further analysis. We followed the procedures in accordance with the ethical standards of the institutional ethical committee of Maastricht University and with the Helsinki Declaration of 1975, revised in 1983.

Analysis
Medical knowledge tests.

To ensure that the researcher was not aware of the expertise level of the respondents (student or physician), scoring of the accuracy of the diagnoses on the diagnostic performance test was masked. A complete, accurate diagnosis was awarded two points. When the diagnosis was only partially correct (e.g., missing a certain specification), or when the correct diagnosis was present but not in first place in the differential diagnosis, we awarded one point. For example, for an external endometriosis case, one point was awarded when the diagnosis contained the word “endometriosis,” and two points were awarded when the diagnosis stated “external endometriosis.” Participants could obtain a maximum score of 52 points on the diagnostic performance test. We calculated the total score on both knowledge tests by subtracting the number of incorrectly answered questions from the number of correctly answered questions; question marks were scored zero. Relative to a correct answer, each incorrectly answered question thus lowered the overall score by two points, whereas each correctly answered question increased it by one point. Participants were informed of this scoring procedure beforehand to discourage them from guessing. Participants could obtain a maximum score of 97 on the basic science test and 105 on the clinical knowledge test. Since a preliminary analysis showed no significant differences between this scoring procedure and one that only sums the number of correctly answered questions, without subtracting the incorrectly answered questions (F < 1), we used the former procedure in all analyses. To assess whether group means on the different tests differed significantly, we performed a one-way analysis of variance. Furthermore, we compared fifth- and sixth-year students' scores separately to the experts' scores to enable comparison with advanced students who had (limited) clinical experience. For all analyses, α levels were set at .05 with Bonferroni correction.
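As an illustration of the scoring rules described above, the following minimal Python sketch (our own illustration, not the authors' code; all function and variable names are hypothetical) scores one clinical case and one knowledge test.

def diagnosis_score(stated, accepted_full, accepted_partial):
    """Score one clinical case: 2 points for a complete, accurate diagnosis,
    1 point for a partially correct one, 0 otherwise (illustrative sketch)."""
    if stated in accepted_full:
        return 2
    if stated in accepted_partial:
        return 1
    return 0

def knowledge_test_score(responses, answer_key):
    """Score a true/false test with a '?' (unsure) option.

    As described in the article: total = correct - incorrect,
    with question marks counting as zero.
    """
    correct = sum(1 for given, key in zip(responses, answer_key)
                  if given in ("true", "false") and given == key)
    incorrect = sum(1 for given, key in zip(responses, answer_key)
                    if given in ("true", "false") and given != key)
    return correct - incorrect

# Example: the external endometriosis case described in the text.
print(diagnosis_score("external endometriosis",
                      {"external endometriosis"}, {"endometriosis"}))          # -> 2
print(knowledge_test_score(["true", "?", "false"], ["true", "false", "true"]))  # -> 0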

Structural equation modeling.

We used the SEM program AMOS to analyze the data.26 Structural equation modeling combines multiple regression and path analysis to enable testing of the causal relations in a hypothetical model based on the covariance and variance structure in a data set. The advantage of SEM over regular multiple regression is that SEM allows for the use of more than one dependent variable, making possible the analysis of causal models with more than one structural path between variables. AMOS produces several goodness-of-fit criteria indicating how well the tested model accounts for the observed covariance and variance structure. The model fit criteria that are commonly used and that we assessed in this study are:

▪ the chi-square goodness-of-fit value, which should be nonsignificant for the tested model to fit the data (i.e., the difference between the hypothesized model and the observed data should be as small as possible);

▪ the chi-square divided by the degrees of freedom (CMIN/DF), which is required to be smaller than 1.0 to indicate a reasonable fit of a hypothetical model;

▪ the comparative fit index (CFI), which compares the fit of the particular model under test with a model in which none of the variables are related. A CFI of .90 or higher indicates that the tested model fits the data well; and

▪ the root mean square error of approximation (RMSEA), which reflects the model's discrepancy per degree of freedom (essentially the square root of the chi-square minus its degrees of freedom, divided by the degrees of freedom and adjusted for sample size). This value is required to be smaller than .05 to be considered acceptable.27
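For reference, the conventional formulas behind these indices (standard structural equation modeling definitions, as given, for example, by Byrne27; they are our addition and are not reproduced from the article) are shown below.

% Conventional fit-index formulas (standard SEM definitions; our addition).
\[
\mathrm{CMIN/DF} = \frac{\chi^2}{df}, \qquad
\mathrm{CFI} = 1 - \frac{\max\!\left(\chi^2_{\mathrm{model}} - df_{\mathrm{model}},\, 0\right)}
                        {\max\!\left(\chi^2_{\mathrm{null}} - df_{\mathrm{null}},\;
                                     \chi^2_{\mathrm{model}} - df_{\mathrm{model}},\, 0\right)},
\]
\[
\mathrm{RMSEA} = \sqrt{\frac{\max\!\left(\chi^2 - df,\, 0\right)}{df\,(N-1)}},
\]
% where N is the sample size and the "null" model is the baseline model in
% which none of the observed variables are related.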

This educational research study was approved by the Institutional Review Board of the Institute of Psychology, Erasmus University Rotterdam, The Netherlands.


Results

Respondents

Forty-four of the 59 physicians (75%) returned the completed answer sheet for the basic science knowledge test and the clinical knowledge test. Because the knowledge tests were part of the students' regular medical training, we had completed answer sheets for all 184 medical students (100%).

Test scores

Table 1 shows an overview of the mean scores and standard deviations on the diagnostic performance test, the basic science test, and the clinical knowledge test. The scores of fifth- and sixth-year students are presented separately.

Table 1. Mean scores and standard deviations on the diagnostic performance test, the basic science knowledge test, and the clinical knowledge test, by group.

As Table 1 shows, family physicians (experts) scored significantly higher than did students on the diagnostic performance test (F(1, 242) = 85.74, p < .001) and on the clinical knowledge test (F(1, 227) = 58.13, p < .001). Students scored significantly higher than did family physicians on the basic science test (F(1, 228) = 5.54, p < .05). Regarding the differences between physicians and fifth- and sixth-year students, statistical analyses showed a main effect for expertise level on the diagnostic performance test (F(2, 135) = 13.72, p < .001), the basic science test (F(2, 121) = 36.14, p < .001), and the clinical knowledge test (F(2, 120) = 18.82, p < .001). Post hoc analysis revealed that on the diagnostic performance test, physicians scored significantly higher than did sixth-year students (p < .01), who in turn significantly surpassed fifth-year students (p < .001). For the basic science test, post hoc analysis showed that sixth-year students scored higher than did fifth-year students, whereas fifth-year students scored higher than did the physicians (all p values < .001). For the clinical knowledge test, post hoc analyses revealed that both experts and sixth-year students scored significantly higher than did the fifth-year students (both p values < .001), although we found no difference between physicians and sixth-year students.

Tables 2 and 3 show the correlations between the diagnostic performance test, and the basic science and clinical knowledge test data for students and physicians, respectively.

Table 2. Correlations among the diagnostic performance, basic science, and clinical knowledge tests for the students' data.
Table 3. Correlations among the diagnostic performance, basic science, and clinical knowledge tests for the physicians' data.

As the tables show, correlations based on the students' data were overall higher than were correlations based on the physicians' data. Although in the students' data both the basic science test and the clinical knowledge test were significantly correlated to the diagnostic performance test (r = .44, p < .01 and r = .57, p < .01 respectively), this was not true for the physicians' data. For the physicians' data, only the correlation between the clinical knowledge test and the diagnostic performance test (r = .41, p < .01) was significant. As for the correlation between the basic science test and the clinical knowledge test data, this relationship was stronger for students (r = .77, p < .01) than for physicians (r = .30, p < .05), although in both cases these correlations were significant.

Test of model fit using structural equation modeling

Figure 1 shows the specific path coefficients based on students' data and experts' (physicians') data. These regression weights indicate to what extent the variance in a certain variable is explained by another variable that is connected to it. We also computed these coefficients separately for students in the preclinical phase (years 2, 3, and 4; n = 107) and students in the clinical phase (year 5 and year 6; n = 77) of medical school, to examine the effect of clinical experience on knowledge structure (see Table 4).

Table 4. Path coefficients computed separately for preclinical (years 2–4) and clinical (years 5–6) students.

Overall, the path coefficients for all four models based on the students' data were somewhat higher than the path coefficients based on the physicians' data. However, this finding is at least partially explained by the larger number of students who responded compared with the number of experts who responded (184 versus 44). In general, for both the students' and the physicians' data, the path coefficients of the knowledge encapsulation model15,16 and the Patel model11,12 were higher than those of the other models. When we separated the students' data into a preclinical group and a clinical group, a similar pattern emerged: for preclinical students, the path coefficients of the Patel model and the Schmidt and Boshuizen model were highest, whereas for clinical students the Schmidt and Boshuizen path coefficients were highest. These data indicate a likely relationship between clinical knowledge and diagnostic performance as described in these models, whereas the relationship between basic science knowledge and diagnostic performance described in the remaining models is unlikely.

To examine whether the observed differences in regression weights between the students' data and the physicians' data were statistically significant, we conducted a separate analysis. This analysis imposed an additional constraint on the four models, as we assumed that the regression weights for the students' data and the physicians' data were equal. The chi-square values were 216.93 for the Lesgold model, 172.18 for the Patel model, 17.39 for the Schmidt and Boshuizen model, and 172.16 for the independent influence model (all p values were < .001). These results indicate that the differences in the regression weights between the students' and physicians' data can be considered significant for all four tested models.
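One standard way to read such a constrained comparison (our gloss, following the conventional multi-group likelihood-ratio logic; the article reports only the constrained models' chi-square values) is as a chi-square difference test:

\[
\Delta\chi^2 = \chi^2_{\mathrm{constrained}} - \chi^2_{\mathrm{unconstrained}}, \qquad
\Delta df = df_{\mathrm{constrained}} - df_{\mathrm{unconstrained}},
\]

where a significant \(\Delta\chi^2\) on \(\Delta df\) degrees of freedom indicates that constraining the regression weights to be equal across groups worsens fit, that is, that the weights differ between students and physicians.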

Although the path coefficients do not provide much information about the statistical plausibility of the four theoretical models, the goodness-of-fit criteria do (see Table 5). For both the students' data and the experts' data, only the Schmidt and Boshuizen model yields a nonsignificant chi-square value (p = .97 and p = .93, respectively), which indicates close similarity between the observed and model-implied covariance matrices. Stated otherwise, the difference between the data structure proposed by the theoretical model and the actual data we observed is not significant, and this indicates adequate fit of the Schmidt and Boshuizen model. We observed the same pattern when dividing the students' data into a preclinical and a clinical group (χ2 = 0.00, p = .99 for students in the preclinical phase; χ2 = 0.01, p = .91 for students in the clinical phase). In general, the results on the goodness of fit of the different tested models did not change when we separated the preclinical student group from the clinical student group.

Table 5. Goodness-of-fit criteria for the four tested models, for the students' and the physicians' data.

Moreover, the chi-square divided by the degrees of freedom (CMIN/DF), which is required to be smaller than 1, is adequate only for the Schmidt and Boshuizen model, in both the students' and the physicians' data. As mentioned above, the CFI compares the fit of the particular theoretical model with a model in which none of the variables are related, to assess to what extent the theoretical model is more plausible than a model without relations between variables. For the physicians' data, the CFI is higher than 0.9 in all of the models except the Patel model. In the students' data, the CFI is higher than 0.9 only for the Schmidt and Boshuizen model. Finally, only in the Schmidt and Boshuizen model is the root mean square error of approximation (RMSEA) smaller than 0.05, for both the students' and the physicians' data; none of the other models reached adequate RMSEA levels. This measure, being a derivative of the chi-square value, provides further evidence for the fit of the Schmidt and Boshuizen model. The remaining goodness-of-fit criteria for the models based on the physicians' data do not reach adequate levels. Overall, the goodness-of-fit criteria suggest that, of the four models tested, the Schmidt and Boshuizen model corresponds best to the covariance and variance structure underlying the data set.


Discussion

Several hypotheses about the role of basic science and clinical knowledge in diagnostic reasoning have been proposed since the 1980s.6,11,12,15,16 To examine the plausibility of these hypotheses, we compared four different theories on this issue using structural equation modeling. We investigated students' and family physicians' basic science and clinical knowledge, as well as their diagnostic performance.

Our findings indicate that primarily clinical knowledge is involved in diagnostic reasoning, and that basic science knowledge plays a less prominent role. First, the family physicians in our study scored higher on the clinical knowledge test than did the medical students, whereas the students scored higher on the basic science test. This finding suggests that, for family physicians, basic science knowledge is mainly acquired in medical school, whereas clinical knowledge continues to develop in professional practice. These findings extend earlier research indicating that expert physicians made less use of basic science knowledge in clinical reasoning than did students and applied primarily clinical concepts in their pathophysiological explanations of clinical cases.15–17 One possible explanation for why family physicians in general do not revert to biomedical reasoning is the routine nature of the clinical cases under consideration. However, the mean diagnostic accuracy (36.7 on a scale of 0 to 52) indicates no ceiling effect; the cases we used were, in general, not routine cases for family physicians. A different explanation derives from earlier work by Norman et al.,28 who showed that family medicine residents not only had low diagnostic scores on nephrology cases but also had great difficulty providing pathophysiological explanations for the data in the cases. It is possible that because the use of basic science knowledge is limited in the field of family medicine, family physicians have less need to develop this knowledge further in professional practice, and therefore they are unable to activate this knowledge when processing clinical cases, even when confronted with difficult ones.

When we looked at the goodness-of-fit criteria, the two models that expressed a direct relationship between basic science knowledge and diagnostic performance were both inadequate. The lack of fit of these models suggests that family physicians do not use much basic science knowledge in diagnostic reasoning. On the other hand, the model based on the theory of Patel and colleagues11,12 that did not include basic science knowledge did not fit the data either. This finding indicates that family physicians' diagnostic reasoning is not completely explained by the activation of clinical knowledge alone. The independent influence model that described diagnostic performance as the result of both basic science knowledge and clinical knowledge lacked goodness of fit as well. However, results suggest that the Schmidt and Boshuizen model in which basic science knowledge is indirectly involved in diagnostic performance by underlying the clinical knowledge provides the most plausible explanation for the data. In this model all goodness-of-fit criteria were at an adequate level. The knowledge encapsulation theory of Schmidt and Boshuizen15,16 stresses that clinical knowledge is directly activated in diagnostic reasoning, and refers to basic science knowledge as being summarized, or encapsulated in clinical knowledge. Our findings support this theory.

The fact that the model based on the knowledge encapsulation theory also had adequate goodness-of-fit criteria for the student data deserves some attention. According to the knowledge encapsulation theory, medical students, who mostly lack clinical experience, mainly activate basic science knowledge in diagnostic reasoning.15,16 On that basis, one would initially expect the model by Lesgold and colleagues6,9 to fit the students' data best; our findings, however, did not support this expectation. Possibly, medical students' successful diagnostic reasoning is not explained by the activation of basic science knowledge but relies on the availability of clinical knowledge. Apparently, the development of medical students' clinical knowledge is not an all-or-none process, but follows a course that enables students even in earlier phases of medical school to apply clinical knowledge in diagnostic reasoning. This argument agrees with earlier theories by Schmidt and colleagues19 on illness scripts. In their view the development of medical knowledge has a number of stages, ultimately leading to pattern recognition against previously encountered cases. In this process, medical students proceed from activating mostly pathophysiological knowledge to using so-called illness scripts that summarize relevant disease information and ease clinical reasoning. According to this line of reasoning, even preclinical medical students might be able to use clinical knowledge when diagnosing clinical cases, depending on the extent of their patient encounters. Since the students in our study were all from a problem-based curriculum, clinical knowledge development might have started before they entered their clinical internships. Since we did not measure the reasoning process during clinical case processing, we have at present no data to examine to what extent medical students applied clinical knowledge during case diagnosis. Further research is needed to test this possibility.

One of our findings that requires further explanation is the difference in path coefficients between the students' data and the physicians' data. Although students and physicians were tested differently (students in class and physicians individually), both completed paper-and-pencil tests, so this difference is unlikely to explain these findings. A more probable explanation is provided by the theory that focuses on the evolution of clinical knowledge.19 Illness scripts explain the existence of medical students' clinical knowledge by describing the stages that characterize the development of clinical reasoning. In this theory, high path coefficients are possible because even preclinical students have a certain level of clinical knowledge that is also used in diagnostic thinking. However, we suggest a further explanation for the difference in path coefficients between the students' and the physicians' data. Since experts typically have extensive clinical knowledge compared with students with limited practical experience, it is possible that physicians' performance in diagnosing clinical cases is explained by other factors not represented in the tests we used. The basic science and clinical knowledge tests in our study consisted of true/false questions, which might not allow for testing the full scope of the family physicians' medical knowledge. In a study by Hobus et al.,29 for instance, experienced physicians formulated better first diagnostic hypotheses when presented with contextual information, such as a picture of the patient and the patient's complete medical history, which is normally not given in typical paper clinical cases. These researchers concluded that expert physicians rely to a certain extent on contextual information that supplies them with valuable knowledge regarding disease risks and possible diagnoses. Because this kind of knowledge is not available to medical students, the relationship between medical knowledge and diagnostic performance in this group is more straightforward, resulting in higher path coefficients for students in our study. Since for medical experts contextual knowledge might in part also account for diagnostic performance, the models presented here leave part of the variance in diagnostic performance unexplained, which is expressed in the lower path coefficients compared with the students'. Nevertheless, in both the students' and the physicians' data, the knowledge encapsulation model was the only model that proved to have adequate fit. Further experimental research that examines the direct activation of basic science and clinical knowledge during diagnostic reasoning can give further insight into the plausibility of this theory.

We caution against generalizing these findings to the field of teaching because of a limitation in testing: we tested students from only one medical school. However, given the fit of the knowledge encapsulation model to the students' data, our findings indicate that it might be worthwhile to introduce clinical practice and clinical phenomena early in the medical curriculum; students seem able to benefit from clinical information at an early stage of medical school. However, great care is needed in the design of the instruction that accompanies these clinical phenomena. As Bordage20 noted, stimulating students to search for connections between patient findings fosters the construction of the elaborate knowledge networks that characterize elaborated and compiled knowledge organizations. These types of knowledge organization have a higher chance of leading to retrieval of relevant information when students study a clinical case and will therefore have a higher chance of diagnostic success.

The authors would like to thank the medical faculty of Maastricht University, The Netherlands, who lent their support in contacting possible participants because they considered this research relevant to the field of medical education.


References

1Glaser R. Education and thinking: the role of knowledge. Am Psychol. 1984;39(2):93–104.

2Johnson PE, Duran AS, Hassebrock F, et al. Expertise and error in diagnostic reasoning. Cogn Sci. 1981;5:135–283.

3Norman GR, Brooks LR, Allen SW. Recall by expert medical practitioners and novices as a record of processing attention. J Exp Psych Learn Mem Cogn. 1989;13:1166–74.

4Kintsch W, Greeno JG. Understanding and solving word arithmetic problems. Psychol Rev. 1985;92(1):109–29.

5Patel VL, Arocha JF, Kaufman DR. Medical cognition. In: Durso FT, Nickerson RS, (eds). Handbook of Applied Cognition. Chichester: Wiley, 1999:663–93.

6Lesgold A, Rubinson H, Feltovich P, Glaser R, Klopfer D, Wang Y. Expertise in a complex skill: diagnosing X-ray pictures. In: Chi MTH, Glaser R, Farr MJ (eds). The Nature of Expertise. Hillsdale, NJ: Lawrence Erlbaum Associates, 1988:311–42.

7Bordage G, Zacks R. The structure of medical knowledge in the memory of medical students and general practitioners: categories and prototypes. Med Educ. 1984;18:406–16.

8Patel VL, Arocha JF, Kaufman DR. Expertise and tacit knowledge in medicine. In: Sternberg RJ (ed). Tacit Knowledge in Professional Practice: Researcher and Practitioner Perspectives. Mahwah, NJ: Lawrence Erlbaum Associates, 1999:75–99.

9Lesgold A. Acquiring expertise. In: Anderson JR, Kosslyn SM (eds). Tutorials in Learning and Memory: “Essays in Honor of Gordon Bower.” San Francisco, New York: W. H. Freeman, 1984:31–60.

10Gilhooly KJ, McGeorge P, Hunter J, Rawles JM, Kirby IK, Green C, Wynn V. Biomedical knowledge in diagnostic thinking: the case of electrocardiogram (ECG) interpretation. Eur J Cogn Psychol. 1997;9:199–223.

11Patel VL, Evans DA, Groen GJ. Biomedical knowledge and clinical reasoning. In: Evans DA, Patel VL (eds). Cognitive Science in Medicine: Biomedical Modeling. Cambridge, MA: MIT Press, 1989:53–112.

12Patel VL, Evans DA, Groen GJ. Reconciling basic science and clinical reasoning. Teach Learn Med. 1989;1(3):116–21.

13Patel VL, Kaufman DR. Clinical reasoning and biomedical knowledge: implications for teaching. In: Higgs J, Jones M (eds). Clinical Reasoning in the Health Professions. Oxford: Butterworth-Heinemann, 1995:117–28.

14Patel VL, Arocha JF, Kaufman DR. Diagnostic reasoning and expertise. In: Medin DL (ed). The Psychology of Learning and Motivation. San Diego: Academic Press, 1994:187–252.

15Schmidt HG, Boshuizen HPA. On acquiring expertise in medicine. Educ Psych Rev. 1993;5:205–21.

16Schmidt HG, Boshuizen HPA. On the origin of intermediate effects in clinical case recall. Mem Cogn. 1993;21:338–51.

17Boshuizen HPA, Schmidt HG. On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cogn Sci. 1992;16:153–84.

18Schmidt HG, Boshuizen HPA, Hobus PPM. Transitory stages in the development of medical expertise: the “intermediate effect” in clinical case representation studies. In: Proceedings of the Cognitive Science Society. Hillsdale, NJ: Lawrence Erlbaum Associates, 1988:139–45.

19Schmidt HG, Norman GR, Boshuizen HPA. A cognitive theory on medical expertise: theory and implications. Acad Med. 1990;65:611–21.

20Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med. 1994;69:883–85.

21Rikers RMJP, Schmidt HG, Boshuizen HPA. On the constraints of encapsulated knowledge: clinical case representations by medical experts and subexperts. Cogn Instruct. 2002;20(1):27–46.

22Rikers RMJP, Schmidt HG, Boshuizen HPA, Linssen GCM, Wesseling G, Paas FGWC. The robustness of medical expertise: clinical case processing by medical experts and subexperts. Am J Psychol. 2002;115:609–29.

23Van de Wiel MWJ, Boshuizen HPA, Schmidt HG. Knowledge restructuring in expertise development: evidence from pathophysiological representations of clinical cases by students and physicians. Eur J Cogn Psychol. 2000;12:323–55.

24Van der Vleuten CPM, Verwijnen GM, Wijnen WHFW. Fifteen years of experience with progress testing in a problem-based curriculum. Med Teach. 1996;18:103–10.

25Schmidt HG, Machiels-Bongaerts M, Hermans H, Ten Cate TJ, Venekamp R, Boshuizen HPA. The development of diagnostic competence: comparison of a problem-based, an integrated, and a conventional medical curriculum. Acad Med. 1996;71:658–64.

26Arbuckle JL, Wothke W. AMOS 4.0 User's Guide. Chicago: SmallWaters Corporation, 1996.

27Byrne BM. Structural Equation Modeling with AMOS: Basic Concepts, Applications, and Programming. Mahwah, NJ: Lawrence Erlbaum Associates, 2001.

28Norman GR, Trott AD, Brooks LR, Smith EKM. Cognitive differences in clinical reasoning related to postgraduate training. Teach Learn Med. 1994;6:114–20.

29Hobus PPM, Schmidt HG, Boshuizen HPA, Patel VL. Contextual factors in the activation of first diagnostic hypotheses: expert-novice differences. Med Educ. 1987;21:471–76.

Appendix 1: A sample clinical case.

Appendix 2: Example questions from the basic science knowledge test and the clinical knowledge test.

*The basic science knowledge test and the clinical knowledge test used in this study are part of the larger progress test that consists of approximately 250 true/false questions covering all medical subdisciplines. The test assesses the progress of medical students' general medical knowledge over the years of their study and is routinely administered at the medical school of Maastricht University in The Netherlands. (For more information on the progress test, see Van der Vleuten et al.24)


© 2005 Association of American Medical Colleges
