In general, the structure of medical knowledge can be divided into two specific domains. One focuses on the biomedical mechanisms (e.g., anatomy, pathology, physiology) of the human body, and the other deals with aspects of the clinical encounter that include, but are not limited to, the diagnosis, investigation, and management of disease signs and symptoms. In particular, most medical curricula devote the first half of medical school to the preclinical or basic science years, distinct from the later clinical component in which medical students rotate through a number of clerkship specialties. Although many medical schools have moved toward integrating clinical content into the first two years of the curriculum, the foundation of the curriculum and its assessment continues to emphasize a fundamental understanding of the basic sciences within the context of the clinical setting. This distinction between basic science knowledge and clinical knowledge has led to a theoretical debate over the process medical experts and nonexperts use in clinical or diagnostic reasoning.1 The main purpose of the present study was to further explore the evidence for a structural model of medical students' clinical reasoning skills based on their aptitude for medical school, their basic science achievement, and their clinical competency measures.
From the perspective of the medical student novice, the process of accumulating medical knowledge can be reflected in the measures applied just before, during, and after the medical school experience. In particular, students are accepted to medical school based on initial assessment measures such as undergraduate grade point average and Medical College Admission Test (MCAT) scores. These admission criteria are used as a reflection of students' potential aptitude for medical school and have been found to be consistent predictors of performance in medical school.2 Subsequently, medical students proceed through medical school based on their successful performance on basic science achievement and clinical competency examinations. In Canada, all medical school graduates are required to write the Medical Council of Canada Part I licensure examination as a measure of their biomedical knowledge and clinical reasoning skills.3 Through these various measures and stages of basic science and clinical knowledge development, a theoretical model of diagnostic or clinical reasoning can be investigated using the principles of structural equation modeling.4
From a theoretical perspective on diagnostic or clinical reasoning, Schmidt et al.5 proposed a cognitive structure of medical expertise based on the accumulation of clinically relevant knowledge about disease signs and symptoms referred to as “illness scripts.” In this model, the development of elaborate knowledge networks6 evolves through a process of biomedical knowledge acquisition, practical clinical experience, and an integration or “encapsulation” of both theoretical and experiential knowledge.7 As Schmidt and Boshuizen8 explained, the encapsulation process of basic science knowledge begins as soon as medical students are introduced to real patients through clinical encounters or presentations. In a recent study using structural equation modeling, De Bruin et al.1 found support for the knowledge encapsulation theory, which postulates that basic science knowledge has an indirect influence on diagnostic reasoning by contributing directly to clinical knowledge. In this model, the biomedical or basic science knowledge in the first years of medical school precedes and is eventually encompassed by the development of clinical knowledge. As such, the basic science knowledge becomes encapsulated or reorganized into causal representations of illness scripts that lead to the formal process of diagnostic and clinical reasoning.5
In contrast to the knowledge encapsulation model, Patel et al.9–10 described basic science knowledge and clinical knowledge as unique domains, each with its own distinct structure and characteristics. In this medical expertise model, diagnostic or clinical reasoning evolves from the clinical knowledge obtained by relating patient presentations of signs and symptoms to a taxonomy of disease categories. Although this process rarely relies on the strict use of basic science knowledge, Patel and Kaufman10 acknowledged the importance of using pathophysiological explanations to provide further support for the explanation of clinical phenomena. In this model, the activation of clinical knowledge is preeminent in the diagnostic or clinical reasoning process and distinct from the basic science knowledge potentially used in post hoc clinical considerations.
The main purpose of the present study was to test the fit of a hypothesized model of medical students' knowledge as a function of their aptitude for medical school, basic science achievement, and clinical competency. The application of structural equation modeling allows researchers to test the goodness-of-fit of a variety of models through an explicit representation of a distinct number of observed and latent variables. In addition to testing the knowledge encapsulation theory proposed by Schmidt and Boshuizen,8 and the emphasis on clinical knowledge supported by Patel et al.,9–10 consideration is given to an alternative model in which the basic science knowledge and clinical knowledge components contribute independently to the diagnostic or clinical reasoning skills of medical students. In addition to having face validity, this independent influence model has both practical and theoretical implications for medical school curriculum and assessment.1
Method
Data were collected from a total of 589 students (292 males, 49.6%; 297 females, 50.4%) who completed medical school at the University of Calgary from 1994 to 2002. The data consisted of the following medical students' aptitude, achievement, and performance measures: (1) the MCAT subtests (Verbal Reasoning [VR], Biological Sciences [BS], Physical Sciences [PS], and Writing Sample [WS]); (2) undergraduate grade point average (UGPA) at admission; (3) basic science achievement in the first two years of medical school (Y1, Y2); (4) clinical performance in the medical school's single clerkship year (Y3); and (5) the Medical Council of Canada (MCC) Part I test, which consists of a seven-section, 196 multiple-choice question (MCQ) examination on declarative knowledge (e.g., medicine, pediatrics, psychiatry) and an approximately 60-case or 80-item clinical reasoning skills (CRS) test designed to assess problem-solving and clinical decision-making abilities. The medical school at the University of Calgary offers an intensive three-year undergraduate medical education program consisting of two preclinical (basic science) years and one clinical (clerkship rotations) year of study.
One of the major applications of structural equation modeling (SEM) is the assessment, through goodness-of-fit indices, of how well a hypothesized model fits quantitatively derived data. Medical students' raw test scores on the 10 variables identified above were used to test the fit of the hypothesized models using confirmatory factor analyses. The application of SEM begins with the specification of a model. In this case, measures of the medical school aptitude, basic science achievement, and clinical competency latent variables were identified to test the fit of a three-factor model in separate SEM analyses.4 The recommended combination rules for fit indices in structural models pair Bentler's Comparative Fit Index (CFI) with the maximum likelihood (ML)-based standardized root mean squared residual (SRMR) and the root mean squared error of approximation (RMSEA), as this combination tends not to reject true-population models, whether simple or complex, under nonrobust conditions.11 This study received approval from the Conjoint Health Research Ethics Board of the University of Calgary.
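As a concrete illustration of how these fit indices are derived, the sketch below computes the CFI and RMSEA from a model's chi-square statistic and degrees of freedom using the standard formulas; the chi-square values shown are hypothetical and are not taken from this study.

```python
import math

def cfi(chi2_model, df_model, chi2_null, df_null):
    # Comparative Fit Index: improvement of the hypothesized model
    # over the independence (null) model.
    num = max(chi2_model - df_model, 0.0)
    den = max(chi2_null - df_null, chi2_model - df_model, 0.0)
    return 1.0 - num / den

def rmsea(chi2_model, df_model, n):
    # Root mean squared error of approximation, based on the
    # noncentrality estimate (chi2 - df) scaled by df and sample size.
    return math.sqrt(max((chi2_model - df_model) / (df_model * (n - 1)), 0.0))

# Hypothetical chi-square values for illustration (N = 589 as in this sample).
chi2_m, df_m = 210.0, 30
chi2_0, df_0 = 2000.0, 45

print(round(cfi(chi2_m, df_m, chi2_0, df_0), 3))  # → 0.908
print(round(rmsea(chi2_m, df_m, 589), 3))         # → 0.101
```

A CFI at or above the conventional .90 cutoff and a small RMSEA together indicate acceptable fit under the combination rules cited above.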
Results
As shown in Table 1, the Pearson correlation coefficients range from r = −.05 (p = .83), between the MCAT WS subtest and the clerkship year (Y3), to a strong relationship of r = .77 (p < .01), between the two basic science years (Y1 and Y2). Although most of the variables correlated positively with other similar test measures, the WS subtest did not correlate with any other variable. Other than with the second year (Y2) of medical school (r = .15, p < .01), Y3 also did not correlate with the other variables. Moderate to strong correlations are found for variables that assess similar domains of measurement (e.g., Y1-Y2, Y1-MCQ, Y2-MCQ) or are related through proximity (e.g., MCAT: PS-BS; MCC: MCQ-CRS).
Table 1: Pearson Correlations between Medical Students' Performance on the MCAT Subtests, Undergraduate GPA, Basic Sciences and Clinical Competency, and MCC Part I: MCQ and CRS Subtests
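Coefficients like those in Table 1 come from the standard product-moment formula; a minimal sketch using made-up scores (not the study's data) for two hypothetical related measures:

```python
import math

def pearson_r(x, y):
    # Product-moment correlation between two equal-length score lists.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical exam scores for two strongly related measures
# (analogous to Y1 and Y2 in this study).
y1 = [72, 80, 65, 90, 78]
y2 = [70, 84, 60, 88, 80]
print(round(pearson_r(y1, y2), 2))  # → 0.96
```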
Figure 1 shows the final three-latent-variable model with respective parameter estimates and goodness-of-fit index values for the CFI, SRMR, and RMSEA. In this ML model fit, the theoretical structure of the model is supported, with covariance between the Aptitude for Medical School and Basic Science Achievement latent variables. In this model, the combination-rule cutoff value is achieved for the CFI at .905, and the SRMR (.054) and RMSEA (.105) values are close to the criteria set for robustness and nonrobustness conditions with N > 250.11 In comparison, the CFI values obtained for the knowledge encapsulation and emphasis-on-clinical-knowledge models, at 0.81 and 0.82, respectively, fell below the minimum 0.90 cutoff.
Figure 1:
CFA model of aptitude for Medical School, Basic Science Achievement, and Clinical Competency latent variables employing ML estimation (N = 589).
As shown in Figure 1, all observed variables except the MCAT subtests (VR, BS, PS, WS) and the clerkship year (Y3) loaded on the Basic Science Achievement latent variable. The test achievement variables for Y1 and Y2 of medical school are the strongest indicators, each with a path coefficient of .87, accounting for 76% of the variance shared between the observed and latent variables. Medical students' UGPA also loaded on both the Basic Science Achievement (.39) and Aptitude for Medical School (.15) latent variables. The MCC Part I MCQ and CRS subtests, written just after medical school graduation, loaded on the Basic Science Achievement latent variable with path coefficients of .81 and .52, respectively. Although there was covariance between the Basic Science Achievement and Aptitude for Medical School latent variables (.44), no covariance with the Clinical Competency latent variable was found in any of the models tested.
The Clinical Competency latent variable was found to have a large path coefficient of .85 with the MCC Part I CRS subtest, accounting for 72% of the variance. The MCAT VR subtest has a small positive path coefficient (.15) on the Clinical Competency latent variable, which may reflect the importance of verbal proficiency in the clerkship year. The Y3 variable, however, has a small negative path coefficient (−.10) on the Clinical Competency latent variable, which may reflect the variability of clinical performance measures obtained from medical students during their different clerkship rotations. As anticipated, the Aptitude for Medical School latent variable is related to the MCAT subtest and UGPA proximal measures obtained at the beginning of students' medical school experience.
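The variance percentages quoted in these results follow directly from squaring the standardized path coefficients, which can be verified with the loadings reported above:

```python
# The variance in an observed measure explained by its latent factor
# equals the squared standardized path coefficient (loading).
def variance_explained(loading):
    return loading ** 2

# Loadings reported in Figure 1: CRS on Clinical Competency (.85),
# Y1 and Y2 on Basic Science Achievement (.87 each).
print(round(variance_explained(0.85), 2))  # → 0.72, i.e., 72%
print(round(variance_explained(0.87), 2))  # → 0.76, i.e., 76%
```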
Discussion
The main findings of the present study are as follows: (1) a three-factor model of medical student aptitude, achievement, and performance was the most theoretically coherent model; (2) the MCAT and UGPA provide a good proximal measure of students' aptitude for medical school in the basic sciences years; and (3) basic science achievement and clinical competency function as independent latent variables in measures of clinical reasoning skills.
The SEM cutoff criteria used to evaluate the fit indices of the final model tested showed support for a three-factor model of medical students' Aptitude for Medical School, Basic Science Achievement, and Clinical Competency in medical school. In particular, there is support for a theoretical structure of clinical reasoning skills development in which basic science achievement and clinical competency exert distinct, independent influences. As shown in both the bivariate correlations and the derived structural model, the two basic science years are separate from the clerkship year in medical school. Even so, the Basic Science Achievement (basic science knowledge) and Clinical Competency (clinical knowledge) latent variables independently influence clinical reasoning skills test scores. In particular, the latent variables associated with corresponding measures of basic science knowledge and clinical knowledge show substantial path coefficients with the CRS variable, at .52 and .85, respectively. Although the current model would appear to support the contention that basic science knowledge and clinical knowledge are more or less two “distinct worlds,”9,10 our findings also suggest that each knowledge domain exerts an independent influence on the development of clinical reasoning skills in medical students. In subsequent postgraduate training and clinical practice, these same medical students, as future residents and physicians, are expected to become more dependent on their clinical knowledge to attain appropriate diagnostic reasoning or clinical problem-solving skills.
As indicated above, our findings are limited to medical students who attended a single medical school. The basic science and clinical knowledge tests used in this study, however, are well-designed MCQ certifying course/rotation examinations with high internal reliability, as are the MCAT subtests and the MCC Part I MCQ and CRS licensure examinations. The three-factor model presented in Figure 1 illustrates the distinction that exists between the independent domains of the basic science achievement and clinical competency latent variables in medical school. Although advocating for the integration of basic science with clinical knowledge would lend support to the construction of elaborate knowledge networks or illness scripts, medical students may also benefit from being well educated in the basic sciences both during their initial undergraduate development and in medical school. In particular, our findings support an integrated medical school curriculum that would nurture the inherent connection between the basic sciences and clinical competency in the further development of medical students' clinical reasoning skills. Further research into the direct influences that basic science and clinical knowledge have on the development of diagnostic or clinical reasoning skills is needed at the novice, intermediate, and expert levels of knowledge organization.
Acknowledgments
The authors gratefully acknowledge the ongoing support of both the Undergraduate Medical Education Admissions and Program Offices at the University of Calgary.
References
1 De Bruin ABH, Schmidt HG, Rikers RMJP. The role of basic science knowledge and clinical knowledge in diagnostic reasoning: a structural equation modeling approach. Acad Med. 2005;80:765–73.
2 Donnon T, Violato C, Oddone E. Predictive validity of the MCAT: a meta-analysis of the published research. Paper presented at the 11th International Medical Education Ottawa Conference. Barcelona, Spain: July 4-8, 2004.
3 Medical Council of Canada. The Objectives for the Qualifying Examination, 2nd Ed. Ottawa: Medical Council of Canada, 1999.
4 Bentler PM. EQS Structural Equations Program Manual. Encino, CA: Multivariate Software, 1995.
5 Schmidt HG, Norman GR, Boshuizen HPA. A cognitive perspective on medical expertise: theory and implications. Acad Med. 1990;65:611–21.
6 Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med. 1994;69:883–85.
7 Boshuizen HPA, Schmidt HG. On the role of biomedical knowledge in clinical reasoning by experts, intermediates and novices. Cogn Sci. 1992;16:153–84.
8 Schmidt HG, Boshuizen HPA. On acquiring expertise in medicine. Educ Psych Rev. 1993;5:205–21.
9 Patel VL, Evans DA, Groen GJ. Reconciling basic science and clinical reasoning. Teach Learn Med. 1989;1:116–21.
10 Patel VL, Arocha JF, Kaufman DR. Diagnostic reasoning and expertise. In: Medin DL (ed). The Psychology of Learning and Motivation. San Diego: Academic Press, 1994:187–252.
11 Hu L, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Modeling. 1999;6:1–55.