Evaluating Curricular Effects on Medical Students' Knowledge and Self-perceived Skills in Cancer Prevention


Section Editor(s): Barzansky, Barbara PhD


Correspondence: LuAnn Wilkerson, EdD, Center for Educational Development and Research, David Geffen School of Medicine at UCLA, Box 951722, Los Angeles, CA 90095.

Supported by a National Cancer Institute award (R25 CA73914).

Since the Report of the Project Panel on the General Professional Education of the Physician in 1984,1 there have been numerous recommendations for increasing attention to health promotion and disease prevention content in the medical student curriculum. The need for a cancer prevention curriculum in particular was highlighted in the early 1990s, when a national survey2 by the American Association for Cancer Education indicated that 64% of the 1,038 faculty respondents felt that cancer prevention was underemphasized in their institutions' medical student curricula. To put this in perspective, only 32% felt that cancer treatment was similarly underemphasized. Stimulated by a large percentage of UCLA graduating seniors indicating “inadequate” instruction in general prevention and screening on the 1996 AAMC Graduation Questionnaire3 and supported by a National Cancer Institute grant (R25 CA73914), we convened a panel of cancer control and prevention experts from four local institutions in 1997–98 to develop a set of instructional objectives to guide curricular revision in this area. Using these objectives, a UCLA curriculum task force on cancer prevention worked with course directors to develop or revise instructional materials to emphasize cancer prevention, screening, and counseling. By January 2000, an enhanced cancer prevention curriculum had emerged to supplement existing instruction in cancer diagnosis and treatment and strategies for smoking cessation. In year one, we added two problem-based learning (PBL) cases and two standardized patient (SP) exercises focused on identifying risk factors and counseling for lifestyle change (for a total of 14 hours). In year two, we implemented a new SP case on breast cancer screening, accompanied by a lecture on women's cancer risks, and three hours of lecture on carcinogenesis (for a total of seven hours). In year three, we added computer-based simulations on skin cancer, a video and model for the prostate examination, and an SP case involving family counseling on breast cancer risk (for a total of seven hours).

The present study was designed to evaluate the effects of this enhanced curriculum in cancer prevention on medical students' knowledge and self-perceived competency in the use of counseling and screening examinations during each of the first three years of medical school. We also evaluated how three instructional strategies used—direct instruction, hands-on practice, and observation—contributed to these outcomes. Few studies exist in the literature evaluating this type of large-scale, multi-year curriculum project in cancer education.4,5,6

Method

We conducted a cross-sectional study of medical students at UCLA School of Medicine in April 2000 and entering students in August 2000 using a cancer prevention and detection survey adapted from the one devised at Boston University.7 In addition to answering 30 multiple-choice and true–false questions assessing knowledge of cancer prevention, students rated their levels of competence in counseling for smoking prevention, smoking cessation, sun protection, and healthy nutrition, and in performing skin, breast, Pap smear, and digital rectal screening examinations. Knowledge and self-perceived competencies in counseling and screening skills were the three outcome variables of interest. Students also reported on the numbers of times they had practiced, observed, or received direct instruction in targeted cancer prevention skills.

We distributed the survey in class to first-year (class of 2003) and second-year (class of 2002) students at the end of the academic year in 2000. At the same time, we sampled the 73 third-year students (class of 2001) who were on campus for a class at the medical center. Third-year students are split into four groups that return from their clerkships to the UCLA campus twice a month for the Doctoring 3 course. Because of a scheduling conflict, we asked only the students from two of the four groups to participate in the study. Assignment to the Doctoring 3 groups is not purposeful in any way, so we have reason to believe that this half of the class is representative. Both the second- and third-year cohorts had been exposed to two years of the enriched cancer curriculum. To add baseline data, we also asked entering medical students (class of 2004) to complete the survey in August 2000. The study was classified as exempt by the UCLA School of Medicine Institutional Review Board. Participation was voluntary and anonymous.

For purposes of analysis, we grouped the 30 knowledge items into six subscales by topic: general cancer knowledge (six items), smoking-related cancer (nine items), breast cancer (six items), prostate cancer (three items), cervical cancer (two items), and colon cancer (four items). We calculated the mean percentage correct for each subscale and for overall knowledge. We also calculated mean ratings for self-reported competencies in counseling and screening, overall and by specific skills. We used multivariate analyses of variance (MANOVAs), post-hoc ANOVAs, and t-tests to compare across student cohorts. To identify educational predictors (including practice, observation, and direct instruction) of each outcome measure (knowledge, counseling skills, and screening skills) for each year and for the first three years combined, we ran multiple regression analyses with stepwise selection. We also calculated Cronbach's reliability coefficient alpha for each outcome measure.
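For readers unfamiliar with the reliability and scoring computations, they can be sketched as follows. This is a minimal illustration only; the item responses and answer key below are fabricated, not the study's data.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's coefficient alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def percent_correct(responses, key):
    """Mean percentage correct for a block of knowledge items."""
    return 100 * (responses == key).mean()

# Hypothetical data: 5 students x 4 dichotomously scored items (1 = correct)
scored = np.array([[1, 1, 0, 1],
                   [1, 0, 0, 1],
                   [0, 1, 1, 0],
                   [1, 1, 1, 1],
                   [0, 0, 0, 0]])
alpha = cronbach_alpha(scored)
```

The same `percent_correct` call, applied item-block by item-block, would yield the subscale scores that the MANOVAs then compare across cohorts.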

Results

A total of 333 students (63% of those sampled) agreed to participate in the study, including 114 baseline (76%), 95 first-year (63%), 72 second-year (48%), and 52 third-year (71%). Cronbach's reliability coefficient alphas for the scales measuring knowledge, counseling skills, and screening skills were .74, .92, and .87, respectively.

Table 1 compares the overall mean scores across years for knowledge, counseling skills, and screening skills. In all the areas assessed, students' scores increased progressively with level of training (although not every increase was statistically significant), a pattern consistent with known-groups validity.8



Knowledge

The baseline students scored significantly (p < .001) lower, and the third-year students scored significantly (p < .001) higher, than the other groups in overall knowledge and most of the six subscales (not shown). However, the first-year and second-year students' mean overall and subscale scores did not differ significantly from each other except in general cancer knowledge. The greatest knowledge gain across cohorts was in the area of colon cancer prevention, with a gain of 47 percentage points from the baseline (29% correct) to the third year (76% correct). Within each cohort, students' mean scores across the subscale topics were significantly (p < .001) different from one another, with prostate cancer knowledge consistently having the lowest mean (range = 8–49%) and breast cancer the highest (range = 56–78%).


Self-perceived Counseling and Screening Skills

The cohorts differed significantly (p < .001) in their self-rated competencies, with higher ratings associated with each higher level of training. Self-perceived competencies in counseling and screening examinations showed a somewhat different change pattern. For counseling skills, significant increases were found between the baseline and first-year cohorts (p < .001), and between the second- and third-year cohorts (p < .05), but not between the first- and second-year cohorts. For screening skills, there was no significant difference between the baseline and first-year cohorts, while we found a significantly (p < .001) progressive increase in the differences between the other cohorts in overall competency and in most targeted skills. Students consistently rated their counseling skills higher than their screening skills. This difference was significant (p < .001) for all the cohorts except the third-year cohort. In spite of having overall higher screening exam ratings, the third-year students rated their skills in performing a skin cancer exam (M = 2.29) significantly (p < .001) lower than their other screening skills (M = 3.64, 3.51, 3.31 for breast exam, Pap smear, and digital rectal exam, respectively).


Instructional Methods

In considering how a curriculum might affect the three outcomes under consideration, we explored the relative contributions of opportunities to practice, observe, or receive direct instruction. Results of the multiple regression analyses are shown in Table 2. Considering the combined scores for the first-, second-, and third-year groups, amount of self-reported practice was the single best predictor for all three outcome measures; it explained different amounts of the variance in each measure, with the most in screening skills (57%) and the least in knowledge (16%). Direct instruction added incremental contributions to the predictions of counseling skills (2%, p < .05) and screening skills (3%, p < .001). The major educational predictors varied for the three outcome variables by cohort.
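The incremental-variance logic behind these stepwise regressions can be sketched with ordinary least squares: fit the best single predictor first, then measure how much R2 rises when a second predictor enters. The exposure counts and outcome below are fabricated for illustration, not the study's data.

```python
import numpy as np

def r_squared(X, y):
    """R^2 from an OLS fit with intercept; X is (n, p), y is (n,)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid ** 2).sum() / ((y - y.mean()) ** 2).sum()

# Hypothetical self-reported exposure counts for six students
practice    = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
instruction = np.array([2.0, 0.0, 3.0, 1.0, 5.0, 4.0])
skills      = practice + 0.5 * instruction      # toy outcome, exactly linear

r2_practice = r_squared(practice[:, None], skills)          # step 1
r2_both = r_squared(np.column_stack([practice, instruction]), skills)
increment = r2_both - r2_practice   # variance added by direct instruction
```

In a true stepwise procedure, a predictor is retained at each step only if its incremental contribution is statistically significant, which is how the 2% and 3% increments for direct instruction reported above would be obtained.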



Discussion

Findings of this study demonstrated support for the effects of an enhanced cancer prevention curriculum on medical students' knowledge, counseling skills, and screening skills. In general, the students' overall mean scores in these areas increased progressively for each cohort. The increases in all three outcome measures were most pronounced during the clerkship year. An examination of the effects noted for each year provides some lessons for curriculum development.

Although each cohort demonstrated a higher overall knowledge score than the previous one, there was only a three-point difference between the overall scores of the first- and second-year students. Second-year students demonstrated little gain in knowledge about prevention of smoking-related, breast, cervical, or colon cancers, and knowledge of prostate cancer showed only a five-point increase. Three factors may account for this lack of effect of the second-year curriculum: (1) an emphasis throughout the year on pathophysiology, (2) the use of more traditional approaches for teaching prevention, and (3) students' preoccupation with the upcoming licensing examination. The response rate for second-year students (48%) was lower than those for the other years, suggesting that the lack of demonstrated effect could have been due in part to respondent bias. In response, we are working with year-two course directors to add prevention concepts to their coverage of neoplasia and cancer pathogenesis. Comparing the results of the third-year students with those of the entering students, knowledge growth was most noteworthy in the areas of prevention of colon and prostate cancers. However, with an overall percentage-correct score of 68.4, students still have room for improvement, especially in prostate cancer prevention. Clerkship directors have agreed to increase instruction in each of these prevention areas.

Students' self-perceived competency in counseling for lifestyle change did not progress at a steady rate across years. The first-year curriculum appeared to have been responsible for most of the gain over baseline in counseling for smoking prevention, smoking cessation, and healthy nutrition. This effect corresponds to the new curricular components in which students had to counsel standardized patients on diet, lifestyle, and smoking changes as part of their Doctoring course. The second-year curriculum had little effect on students' counseling skills, whereas in year three, the combination of hands-on practice and observation during actual patient care experiences contributed to increased competencies in counseling.

Students' ratings of their competencies in performing screening examinations were unaffected by the year-one curriculum and only slightly affected by year two. If practice is the predominant contributor to competency in performing screening examinations, then it is clear that the hands-on clinical experience available to third-year students is critical in consolidating their competency. The curriculum does not appear to contain sufficient instruction on the skin examination, since third-year students rated their competencies much lower on this exam than on the other three. This is a major omission, given the number of sunny days in southern California. Additional hands-on instruction on the skin exam has therefore been assigned to the ambulatory clerkship.

In examining the value of three educational strategies (i.e., practice, observation, and direct instruction) relative to the three outcome measures, it is clear that for all students, practice is the major way in which curriculum affects self-perceived competency. This contribution is most obvious in their performing screening examinations, not surprising given the psychomotor demands of these exams. However, for the first-year students, new opportunities to practice counseling skills appeared to have contributed to significant growth in their self-perceived competencies. The results for the second-year students reflect the lack of practice or observation in the area of cancer prevention during that year, with the only significant contribution coming from instruction, probably from the added lectures. For the third-year cohort, however, practice is the key to increased competency in screening, while observation of counseling by faculty may be helpful in refining the skills initially developed in year one.

In building a cancer prevention curriculum, it seems essential to use hands-on instructional strategies that allow students opportunities to practice the skills to be learned. Practice even appears to play a role in knowledge acquisition. The cancer prevention survey4,7 is a useful tool for assessing the impact of new curricular efforts, even recognizing that self-assessment has been shown to have only a low to moderate correlation with demonstrated performance.9 Since cross-sectional studies cannot provide evidence of the cumulative effect of a curriculum, it is important to follow a cohort of students across the curriculum using a repeated-measures design. Even then, studying the curriculum over time at a single school does not control for other threats to validity such as maturation, historical events, selection or loss of subjects, and investigator bias. Given these limitations of cross-sectional, single-institution studies, we have already initiated a multi-institutional longitudinal cohort study comparing the effects of an expanded prevention curriculum at UCLA with findings at schools that have not implemented a similar curriculum.

References

1. Muller S (chairman). Physicians for the twenty-first century: report of the project panel on the general professional education of the physician and college preparation for medicine. J Med Educ. 1984;59(11 part 2):1–208.
2. Gallagher R, Bakemeier R, Chamberlain R, et al. Instructional methods and the use of teaching resources in cancer education curricula: Cancer Education Survey II: Cancer education in United States medical schools. J Cancer Educ. 1992;7:95–104.
3. Medical School Graduation Questionnaire: All Schools Report 1996. Washington, DC: Association of American Medical Colleges, 1996.
4. Geller A, Prout M, Miller D, et al. Evaluation of a cancer prevention and detection curriculum for medical schools. Prev Med. In press.
5. Hodgson C. Tracking knowledge growth across an integrated nutrition curriculum. Acad Med. 2000;75(10 suppl):S12–S14.
6. Zapka J, Luckmann R, Sulsky S, et al. Cancer control knowledge, attitudes, and perceived skills among medical students. J Cancer Educ. 2000;15:73–8.
7. Geller A, Prout M, Sun T, Lew R, Culbert A, Koh H. Medical students' knowledge, attitudes, skills, and practices of cancer prevention and detection. J Cancer Educ. 1999;14:72–7.
8. Steward A. Psychometric considerations in functional status instruments. In: Classification Committee of WONCA (World Organization of National Colleges, Academies and Academic Associations of General Practice/Family Physicians) (ed). Functional Status Measurement in Primary Care. New York: Springer, 1990.
9. Gordon MJ. A review of the validity and accuracy of self-assessments in health professions training. Acad Med. 1991;66:762–9.
© 2002 by the Association of American Medical Colleges