Research Report

Effect of an Undergraduate Medical Curriculum on Students’ Self-Directed Learning

Harvey, Bart J. MD, PhD; Rothman, Arthur I. EdD; Frecker, Richard C. MD, PhD



Studies indicate that being an effective, independent, and self-directed lifelong learner is critical for graduates of undergraduate medical1,2 and other professional programs.3 Recognizing the importance of these abilities, the University of Toronto’s Faculty of Medicine Council identified “to prepare our students for life-long study and practice of medicine” as the first goal of its Goals and Objectives for the Undergraduate Medical Curriculum (1990).

In 1992, at least in part to address this curricular goal and to enhance students’ self-directed learning (SDL), the University of Toronto Faculty of Medicine revised its conventional, lecture-based medical curriculum into a “hybrid,” replacing much of the curriculum time devoted to large-group didactic lectures with small-group, problem-based, and SDL opportunities. The resulting curricular changes were designed and implemented to achieve the ideals Barrows describes: “That students would be stimulated by the experience, would see the relevance of what they were learning to their future responsibilities, would maintain a high degree of motivation for learning, and would begin to understand the importance of responsible professional attitudes.”4 We undertook this study to begin to learn whether the curricular revision enhanced students’ SDL (and, ultimately, their abilities as effective lifelong learners).

Several authors (e.g., Wilcox,5 Ryan,6 Miflin, Campbell and Price,7 Knowles,8 Hammond and Collins,9 and Candy10) have examined the knowledge, skills, and attitudes associated with and necessary for effective SDL. Although some authors9–12 discuss SDL as an emancipatory and individualist activity and, therefore, as not directly relevant to “institutionalized” education, others5–8 have explored SDL within a variety of postsecondary educational contexts. From these explorations, four common SDL components, consistent with the objectives of the University of Toronto curriculum changes, have been identified. These are students’ ability and motivation to (1) self-assess and identify their specific learning needs; (2) wisely and efficiently identify, locate, access, and use a range of relevant resources to address their identified learning needs; (3) critically evaluate the scope and accuracy of the selected resources; and (4) evaluate the resultant learning outcomes and their effects on the learner’s practice.

Guided by these four SDL components, Ryan6 developed and administered a brief, two-part questionnaire to assess students’ perceptions concerning the importance of SDL and their abilities as self-directed learners. Periodic assessments, made during a course offered by Ryan employing an SDL approach, demonstrated significant increases in students’ perceptions of both the importance of SDL and their abilities as self-directed learners.6 In addition to Ryan’s, several other instruments have been developed to measure SDL.13 The two most widely recognized, extensively used, and validated instruments for measuring SDL capability and readiness13–15 are Guglielmino’s Self-Directed Learning Readiness Scale (SDLRS)16 and Oddi’s Continuing Learning Inventory (OCLI).17,18 Several assessments of the reliability and validity of the OCLI and SDLRS have been conducted,14 including dissertations reporting positive associations between instrument scores and SDL activity.

Both the SDLRS and OCLI have been used to assess undergraduate medical students. Using the SDLRS, Mann and Kaufman19 found no significant difference between the first-year and second-year scores for Dalhousie University’s initial class in its revised, problem-based learning (PBL) curriculum. They also found no overall difference between the average second-year scores of students in the last year of Dalhousie’s conventional curriculum and those in the initial PBL class. However, their more detailed analysis found four SDLRS items to favor the PBL group at a significance level of p < .01.

Also using the SDLRS, Shokar and others20 studied 182 third-year medical students at the University of Texas Medical Branch at Galveston, assessing their degree of readiness for SDL after experiencing two years in a PBL curriculum. Because the clinical preceptors’ ratings of the students correlated with the SDLRS scores and the mean SDLRS score was significantly higher than that reported for general adult learners, the authors concluded the students were ready for SDL.

Using the OCLI at the Bowman Gray School of Medicine (now Wake Forest University School of Medicine), Shulman21 found significantly higher scores among students in a PBL curriculum compared with those in a conventional, lecture-based curriculum. Comparing first and fourth years, she also found higher fourth-year SDL scores in the PBL curriculum, but no difference in the more traditional curriculum.

Although each of these instruments has received varying degrees of evaluation, we could identify only one interinstrument comparison. Studying the SDLRS and OCLI, Landers reported an interinstrument correlation of .606 and concluded that both instruments appear to share similar properties.22 With only this single study to guide and justify the selection of one instrument over the others, we designed a cross-sectional study of the effect of the University of Toronto Faculty of Medicine’s undergraduate medical curriculum on students’ SDL using the three instruments (SDLRS, OCLI, and Ryan’s). We hypothesized that students’ scores on these instruments would increase across the four curricular years.



A total of 280 students, 70 from each of the four years of the undergraduate medical curriculum, were randomly selected from the school’s population of 700. We chose this number because calculations showed that 60 respondents per class (85% response rate) would provide a study power of .80 to detect (α = .05) a 5% difference among the four years.23
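A power calculation of this kind can be sketched with the noncentral F distribution. The sketch below is illustrative only: the paper does not report the effect size used, so the mapping from "a 5% difference" to a Cohen's f value is our assumption, not a figure from Glantz.23

```python
from scipy import stats

def anova_power(effect_f, n_per_group, k_groups=4, alpha=0.05):
    """Power of a one-way ANOVA, computed from the noncentral F distribution.

    effect_f is Cohen's f; the noncentrality parameter is f^2 * N_total.
    """
    n_total = n_per_group * k_groups
    dfn, dfd = k_groups - 1, n_total - k_groups
    nc = effect_f ** 2 * n_total  # noncentrality parameter
    f_crit = stats.f.ppf(1 - alpha, dfn, dfd)  # critical value under H0
    return stats.ncf.sf(f_crit, dfn, dfd, nc)  # P(F > f_crit | H1)

# With 60 respondents per class, power rises with the assumed effect size;
# f = 0.25 is Cohen's conventional "medium" effect (an assumption here).
print(anova_power(0.25, 60))
```

Power increases with both the assumed effect size and the per-class sample size, which is why the authors targeted 60 respondents per class.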


Ryan’s questionnaire asks respondents to consider the four identified components of SDL and rate each, from low (0) to high (6), on its importance and their ability with the component.6 The SDLRS contains 58 statements (e.g., “I learn several new things on my own each year”) with five-point responses, ranging from “Almost never true of me; I hardly ever feel this way” to “Almost always true of me; there are very few times when I don’t feel this way.” Total scores range from 58 (least ready for SDL) to 290 (most ready).16

The OCLI contains 24 statements (e.g., “I work more effectively if I have freedom to regulate myself”) with seven-point responses, ranging from “Strongly Disagree” to “Strongly Agree.” Total scores range from 24 (least characteristic of self-directed learners) to 168 (most characteristic).17,18 Brockett and Hiemstra,14 in their review of the use and validity of the SDLRS and OCLI, concluded that both are well-accepted measures of SDL.
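The stated total-score ranges follow directly from each instrument's item count and response scale, assuming (as the ranges imply) that item responses are coded 1 through 5 for the SDLRS and 1 through 7 for the OCLI and then summed:

```python
def score_range(n_items, scale_min, scale_max):
    """Total-score range for a Likert instrument whose item scores are summed."""
    return n_items * scale_min, n_items * scale_max

# 58 five-point SDLRS items -> totals from 58 to 290
assert score_range(58, 1, 5) == (58, 290)
# 24 seven-point OCLI items -> totals from 24 to 168
assert score_range(24, 1, 7) == (24, 168)
```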


The instruments, stapled together as a single booklet, were mailed with an identification number and questions concerning each respondent—their age, sex, and level of premedical education. To enable an assessment of any instrument-order effects, half the students selected from each class were randomly selected to receive the questionnaires organized and stapled so that the SDLRS preceded the OCLI; for the remaining students, the OCLI preceded the SDLRS. Ryan’s questionnaire always followed the other two instruments, with Ryan-ability always following Ryan-importance. The survey packages included a postage-paid return envelope and a cover letter personally signed by the study coordinator (BJH) and the associate dean of Undergraduate Medical Education (RCF). To allow for maximal curricular exposure, while also providing sufficient time for data collection, the surveys were administered, using Dillman’s24 Total Design Method, during the final two months of the academic year, beginning in April of 1999. Nonrespondents received up to three follow-up mailings: a postcard reminder at ten days, and a full mailing, including a replacement questionnaire, one and two months after the initial mailing. If any questions were found unanswered, the questionnaire was returned by mail to the respondent with a cover letter requesting completion of the identified item(s).


The responses of returned surveys were entered into a database. Instrument totals were calculated using spreadsheet software. Data analysis (chi-square for differences of proportions, analysis of variance for differences of means, and linear regression for trends) was completed using SPSS 10.0.25 For the primary question—the trend in OCLI, SDLRS, and Ryan-ability scores over the four years of the curriculum—an α level of .05 was used. All other analyses, explanatory and exploratory, were completed with the acknowledgment that significant findings may occur by chance as a result of the multiple comparisons conducted.
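The authors ran these analyses in SPSS; an equivalent sketch of the three named tests, using scipy on hypothetical stand-in data (all values below are illustrative, not the study's data), might look like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the study's data.
sex_by_year = np.array([[30, 33], [29, 31], [28, 35], [27, 37]])  # counts by class
scores_by_year = [rng.normal(230, 15, 60) for _ in range(4)]      # e.g. SDLRS totals
year = np.repeat([1, 2, 3, 4], 60)

# Chi-square for differences of proportions across the four classes
chi2, p_prop, dof, expected = stats.chi2_contingency(sex_by_year)

# One-way analysis of variance for differences of mean scores
f_stat, p_means = stats.f_oneway(*scores_by_year)

# Linear regression of score on curricular year for the trend test
slope, intercept, r, p_trend, stderr = stats.linregress(year, np.concatenate(scores_by_year))
```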


A total of 250 (89.3%) completed questionnaires were returned. Respondents’ characteristics are summarized in Table 1. Response rates by class were similar. The percentages of women and of those with a master’s degree prior to entering medical school were also similar across classes. However, the percentage of respondents reporting premedical PhDs differed significantly by class.

Table 1:
Characteristics of 250 Respondents to Three Self-Directed Learning Questionnaires by Year, University of Toronto Faculty of Medicine, 1999

Summary (i.e., four-component total) scores for “SDL importance” and “SDL ability” were used in the analysis of Ryan’s questionnaire because factor analysis demonstrated the four components of each loaded together onto single factors. Table 2 summarizes the SDLRS, OCLI, and Ryan scores. Because no effect due to instrument order was evident, all instruments were analyzed together. Standardized Cronbach α was .76 for the SDLRS, .66 for the OCLI, .76 for Ryan-ability, and .73 for Ryan-importance. The interinstrument correlations are shown in Table 3.
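The "standardized" Cronbach α reported here is the variant computed from the mean inter-item correlation rather than from item variances; a minimal sketch of that formula (the input matrix is hypothetical):

```python
import numpy as np

def standardized_alpha(items):
    """Standardized Cronbach's alpha for an (n_respondents, k_items) array:
    alpha = k * r_bar / (1 + (k - 1) * r_bar), where r_bar is the mean
    off-diagonal inter-item correlation."""
    corr = np.corrcoef(items, rowvar=False)  # k x k inter-item correlations
    k = corr.shape[0]
    # Mean of the off-diagonal entries (the diagonal of 1s is excluded).
    r_bar = (corr.sum() - k) / (k * (k - 1))
    return k * r_bar / (1 + (k - 1) * r_bar)
```

Perfectly correlated items give α = 1, while uncorrelated items give α near 0; more items at the same mean inter-item correlation give a higher α, which is consistent with the 58-item SDLRS outscoring the 24-item OCLI here.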

Table 2:
Summary of Mean Scores (SD) on Three Self-Directed Learning Questionnaires for 250 Respondents, University of Toronto Faculty of Medicine, 1999
Table 3:
Interinstrument Correlation Coefficients for Three Self-Directed Learning Questionnaires Administered at University of Toronto Faculty of Medicine, 1999

Women and men had similar SDL scores. SDLRS, OCLI, and Ryan-ability scores increased significantly by age (data not shown) and by highest level of premedical education achieved (undergraduate only, master’s, or doctoral). However, only premedical education remained significant in a multivariate linear regression including both factors.

Although a significant between-year difference was found for the SDLRS (p = .028), OCLI (p = .011), and Ryan-importance (p = .021), a significant trend by year was evident only for Ryan-importance scores (p = .007). This trend, however, indicated a decrease in perceived SDL importance by curricular year. The SDLRS, OCLI, and Ryan-ability instruments each identified the highest and lowest mean scores in the first and second years, respectively. Neither controlling for level of premedical education nor restricting the analysis to only those with undergraduate postsecondary education altered these results. To further examine trends in SDL ability by year and enable comparison with Mann and Kaufman’s study,19 analyses were also conducted on each of the 58 SDLRS, 24 OCLI, four Ryan-ability, and four Ryan-importance items.

When the more stringent α level of .0005 (.05/90) was applied to acknowledge the multiple comparisons conducted, none of the 90 items suggested a trend by curricular year. In fact, only four SDLRS and two OCLI items reached the more liberal level of p < .05, each consistent with lower SDL in the senior years of the curriculum. The SDLRS items were: “In a classroom, I expect the teacher to tell all class members exactly what to do at all times” (more agreement in senior years; p = .05); “I don’t like challenging learning situations” (more agreement in senior years; p = .04); “Learning how to learn is important to me” (more agreement in junior years; p = .03); and “Learning is a tool for life” (more agreement in junior years; p = .01). The OCLI items were “I seek involvement with others in school or work projects” (more agreement in junior years; p = .001) and “I make an effort to meet new people” (more agreement in junior years; p = .002).
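The Bonferroni-style threshold used above is simply the overall α divided by the number of item-level comparisons (.05/90 ≈ .00056, reported truncated as .0005):

```python
def bonferroni_threshold(alpha, n_tests):
    """Per-comparison significance threshold under a Bonferroni correction."""
    return alpha / n_tests

# 58 SDLRS + 24 OCLI + 4 Ryan-ability + 4 Ryan-importance items = 90 comparisons
threshold = bonferroni_threshold(0.05, 58 + 24 + 4 + 4)
assert 0.0005 < threshold < 0.0006
```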

Except for age, results from the bivariate and multivariate regression analyses were comparable.


Our study’s main objective was to evaluate whether University of Toronto medical students’ SDL, as measured by widely recognized, extensively used, and validated instruments, improves over the four years of the curriculum. Although significant interyear SDL differences were found, SDL scores did not follow a trend consistent with progression through the curriculum, with the first-year and second-year classes consistently having the highest and lowest scores, respectively.

These results differ from those of Mann and Kaufman,19 who reported no difference in SDLRS scores in a group of Dalhousie University medical students between their first and second years in a PBL curriculum (232 versus 233). Their more detailed analysis, however, identified four SDLRS items favoring the PBL group at a significance level of p < .01. In a similar examination, we found four, albeit different, SDLRS items suggesting a trend by curricular year. These, however, all indicated decreasing SDL into senior years. Similarly, both OCLI items suggesting a trend by year also favored students in more junior years. Together, these six items may simply reflect the different working and social environments of senior students rather than diminished SDL.

These results also differ from those of Shulman.21 Comparing medical students in PBL and traditional curricula at the Bowman Gray School of Medicine (now Wake Forest University School of Medicine), Shulman found significantly higher OCLI scores among the fourth-year students in the PBL curriculum (136 versus 123). Shulman also found that women had significantly higher OCLI scores than men (132 versus 123). Shulman’s results, however, may have been affected by response bias because only 37% of the 216 potential respondents returned completed instruments.

The SDLRS and OCLI scores in our study are consistent with those found in other studies. In their study of first-year and second-year medical students, Mann and Kaufman19 observed mean SDLRS scores ranging from 225 to 233, only slightly less than this study’s 228 and 239. Similarly, the average score of third-year medical students reported by Shokar and colleagues20 is consistent with this study (236 and 235, respectively). Finally, the mean OCLI scores of first-year and fourth-year students reported by Shulman21—ranging between 121 and 136—are also comparable to this study’s 126 and 131.

The SDLRS–OCLI interinstrument correlation of .713 we observed supports Landers’ assertion22 that both instruments appear to share similar properties. In both our study and Landers’, the 58-item SDLRS had a higher internal reliability than the 24-item OCLI.

Although the results of our study suggest that the curriculum does not foster students’ SDL, alternative explanations should be considered. For example, are the instruments sufficiently sensitive to detect SDL progress? Two factors suggest that they are: (1) all three instruments provide similar findings, with each able to detect the significant increasing trend in SDL associated with higher levels of premedical education; and (2) the similarity of results for each of the three measures of SDL ability (i.e., SDLRS, OCLI, and Ryanability) suggests that the significant results observed are unlikely to have occurred by chance (i.e., as a result of the multiple comparisons conducted). Further, response bias is not a likely explanation for the study’s inability to detect a year-by-year SDL trend because the response rates across the four years were uniformly high—in excess of 85%.

An additional consideration, however, is the study’s cross-sectional design. Although this design is more efficient than a longitudinal approach, actual changes in SDL are not measured in the same groups of students over time. Instead, the cross-sectional design assumes the comparability of the four classes. Although the admission procedures and curriculum were similar for each of the four years, the failure to detect SDL progress over the curriculum could be the result of unmeasured differences between two or more of the classes. Clearly, the completion of a longitudinal follow-up assessment should be given high priority so that a more complete understanding of these findings and the effect of this curriculum on students’ SDL might be achieved.

Although statistically significant interyear SDL differences were found, it is important to also consider their practical significance. For example, what is the educational relevance of the SDL differences observed between first-year and second-year scores? To examine this, we completed an item-by-item comparison, which demonstrated that the second-year class consistently had lower SDL item scores than their first-year counterparts—particularly on the 38 SDLRS and 21 OCLI items that accounted for the greatest differences between the two classes. Further, as presented in Table 4, 28% (16/58) of the SDLRS items accounted for 50% (5.5/10.9) of the between-class difference in SDLRS scores. Similarly, 25% (6/24) of the OCLI items accounted for 42% (3.2/7.7) of the difference between the first-year and second-year classes (also shown in Table 4).
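The proportions quoted above can be checked directly from the reported counts and score differences:

```python
# Reported values (SDLRS first, then OCLI): top items / total items,
# and the score difference those top items carry / total difference.
sdlrs_items, sdlrs_total_items = 16, 58
sdlrs_diff, sdlrs_total_diff = 5.5, 10.9
ocli_items, ocli_total_items = 6, 24
ocli_diff, ocli_total_diff = 3.2, 7.7

assert round(sdlrs_items / sdlrs_total_items, 2) == 0.28  # 28% of SDLRS items
assert round(sdlrs_diff / sdlrs_total_diff, 2) == 0.50    # ~50% of the difference
assert round(ocli_items / ocli_total_items, 2) == 0.25    # 25% of OCLI items
assert round(ocli_diff / ocli_total_diff, 2) == 0.42      # ~42% of the difference
```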

Table 4:
Items Accounting for the Largest Differences between the First-Year and Second-Year Scores on Two Self-Directed Learning Questionnaires, University of Toronto Faculty of Medicine, 1999

Further, as Shokar and colleagues20 note, the SDL scores found in this study are much higher than those reported for general adult learners (214 on SDLRS). As such, the opportunities for students’ SDL scores to increase during medical school may be limited by a so-called ceiling effect. However, despite this potential limitation of high scores, significant between-year differences were identified by this study. In addition, differences between Ryan-importance and Ryan-ability scores (see Table 2) appear to suggest that opportunities exist for further SDL improvements, as students generally rated their current SDL ability significantly lower than they rated the importance of SDL to their current learning.

Of course, these observed SDL differences may in fact be a true reflection of the students’ curricular experiences. For example, the low scores observed among second-year students might be due to frustration with the preclerkship and impatience to begin the more clinically oriented clerkship. The findings may also be due to the steering effects of examinations26,27 or, as argued by Schmidt,28 the inability of most medical curricula, even those that are purely problem-based, to ensure the kind of learning environment necessary to effectively enable, encourage, and support SDL.


Although our study’s results should be interpreted cautiously, especially before the completion of a longitudinal follow-up assessment, they suggest that students’ SDL is not positively influenced by this revised curriculum—a finding consistent with experience at the University of Queensland.29

For the purposes of this research, a royalty-free copyright license for the use of the OCLI was granted by Lorys F. Oddi. The University of Toronto Faculty of Medicine provided funding for this study.


1.Knox AB, Suter E, Carmichael JW Jr. Physicians for the twenty-first century: subgroup report on learning skills. J Med Educ. 1984;59(11 Pt 2):149–54.
2.Muller S (chair). Physicians for the twenty-first century: report of the project panel on the general professional education of the physician and college preparation for medicine. J Med Educ. 1984;59(11 Pt 2):1–200.
3.Houle CO. Continuing Learning in the Professions. San Francisco, Calif.: Jossey-Bass Publishers, 1980.
4.Barrows H. Foreword. In: Evensen DH, Hmelo CE (eds). Problem-Based Learning: A Research Perspective on Learning Interactions. London, UK: Lawrence Erlbaum Associates, Publishers, 2000:vii.
5.Wilcox S. Fostering self-directed learning in the university setting. Stud Higher Educ. 1996;21:165–76.
6.Ryan G. Student perceptions about self-directed learning in a professional course implementing problem-based learning. Stud Higher Educ. 1993;18:53–63.
7.Miflin BM, Campbell CB, Price DA. A conceptual framework to guide the development of self-directed, life-long learning in problem-based medical curricula. Med Educ. 2000;34:299–306.
8.Knowles MS. Self-Directed Learning: A Guide for Learners and Teachers. Chicago, Ill.: Follett Publishing Company, 1975.
9.Hammond M, Collins R. Self-Directed Learning: Critical Practice. London, UK: Kogan Page, 1991.
10.Candy PC. Self-Direction for Lifelong Learning: A Comprehensive Guide to Theory and Practice. San Francisco, Calif.: Jossey-Bass, 1991.
11.Brookfield S. Self-directed learning, political clarity, and the critical practice of adult education. Adult Educ Q. 1993;43:227–42.
12.Kerka S. Myths and realities No. 3: self-directed learning 〈〉. Accessed 7 August 2003. Columbus, Ohio: ERIC Clearinghouse on Adult, Career, and Vocational Education, 1999 (ED 435 834).
13.Pilling-Cormick J. Existing measures in the self-directed learning literature. In: Long HB et al. (eds). New Dimensions in Self-Directed Learning. Norman, OK: Educational Leadership and Policy Studies Department, University of Oklahoma, 1995:49–60.
14.Brockett RG, Hiemstra R. Measuring the iceberg: quantitative approaches to studying self-direction. In: Self-Direction in Adult Learning: Perspectives on Theory, Research, and Practice. London: Routledge, 1991:55–82.
15.Confessore GJ, Long HB, Redding TR. The status of self-directed learning literature, 1966–1991. In: Long HB et al. (eds). Emerging Perspectives of Self-Directed Learning. Norman, OK: Oklahoma Research Center for Continuing Professional and Higher Education of the University of Oklahoma, 1993:45–56.
16.Guglielmino LM. Development of the self-directed learning readiness scale [dissertation]. Athens, Ga.: University of Georgia, 1977.
17.Oddi LF. Development and validation of an instrument to identify self-directed continuing learners. Adult Educ Q. 1986;36:97–107.
18.Oddi LF. Development of an instrument to measure self-directed continuing learning [dissertation]. DeKalb, Ill.: Northern Illinois University, 1984.
19.Mann KV, Kaufman D. Skills and attitudes in self-directed learning: the impact of a problem-based curriculum. In: Rothman AI, Cohen R (eds). Proceedings of the Sixth Ottawa Conference on Medical Education, Toronto, Ontario, June 26–29, 1994. Toronto: University of Toronto Bookstore Custom Publishing, 1995:607–9.
20.Shokar GS, Shokar NK, Romero CM, Bulik RJ. Self-directed learning: looking at outcomes with medical students. Fam Med. 2002;34:197–200.
21.Shulman JM. A comparison between traditional and problem-based learning medical students as self-directed continuing learners [dissertation]. DeKalb, Ill.: Northern Illinois University, 1995.
22.Landers KW Jr. The Oddi continuous learning inventory: an alternate measure of self-direction in learning [dissertation]. Syracuse, N.Y.: Syracuse University, 1989.
23.Glantz SA. Primer of Biostatistics. 4th ed. New York: McGraw-Hill, 1997:171–5.
24.Dillman DA. Mail and Telephone Surveys: The Total Design Method. New York: Wiley, 1978.
25.SPSS 10.0 Syntax Reference Guide. Chicago: SPSS Inc., 1999.
26.Blake JM, Norman GR, Smith EKM. Report card from McMaster: student evaluation at a problem-based medical school. Lancet. 1995;345:899–902.
27.Newble D, Jaeger K. The effect of assessments and examinations on the learning of medical students. Med Educ. 1983;17:165–71.
28.Schmidt HG. Assumptions underlying self-directed learning may be false. Med Educ. 2000;34:243–5.
29.Miflin BM, Campbell CB, Price DA. A lesson from the introduction of a problem-based, graduate entry course: the effects of different views of self-direction. Med Educ. 1999;33:801–7.
© 2003 Association of American Medical Colleges