Students’ perceptions of their experiences and learning during clerkships are multifaceted1 and determined by variables linked to the educational environment, such as the structure of learning activities, opportunities for clinical practice, and the quality of teaching. A number of studies that have addressed learners’ appraisal of clerkships have essentially focused on the competencies of the clinical teacher.2–5 Considerable evidence shows that teachers’ interpersonal communication skills and clinical expertise have a positive impact on learning and a marked influence on students’ career choices.6–7 However, studies show that these appraisals differ among clinical settings (i.e., inpatient versus outpatient) and that these settings influence teaching and students’ perceptions of learning.8 In addition to the recognized role of the teacher, other aspects related to the clerkship structure may potentially contribute to the quality of learning in clinical settings; however, these have been less systematically investigated. Further, few studies have looked at other factors, above and beyond clinical teaching, that might influence the quality of learning in distinct clinical settings. One such study of students’ perceptions of learning at a Scottish medical school identified substantial differences in the evaluations of medical versus surgical environments.9 However, the majority of studies assessing learning and teaching in clinical settings rely on item-based questionnaires, an evaluation that is restricted to the aspects covered by the instrument.
In the context of the evaluation of the learner, an overall global appraisal has proven valuable because it seems to assess more than the sum of individual checklist items.10 Global ratings have also been used to assess residents’ clinical performance, as a complementary indicator to evaluations derived from checklist items.11 However, despite the demonstrated utility of global scoring and its complementarity to individual checklist items, medical educators rarely analyze and report this global rating when they assess the quality of learning in clinical settings.
The aims of this study were (1) to determine which factors of the clerkship structure, learning environment, and clerkship learning activities most influence students’ global evaluations of the clerkship and (2) to compare these results across the seven medical specialties over a period of nine years. To accomplish these aims, we analyzed the students’ evaluations of the clerkships during the academic years 1997–1998 through 2005–2006, the period in which we introduced problem-based learning (PBL) at our institution.
Setting, curriculum, and clerkships
Undergraduate medical education has progressively improved through innovative changes in a growing number of medical schools’ curricula. Our institution, the University of Geneva Faculty of Medicine, is one of the medical schools that has introduced and extended the concept of PBL into the clinical years. In another publication,12 we described in detail our six-year curriculum, which is divided into the preclinical years (first through third), the clinical years (fourth and fifth), and the elective year (sixth). Learning activities in the preclinical years rely in great part on a PBL approach, whereas learning in the clinical years is structured to ensure broad coverage of content within contextualized learning environments. During these years, learning in clinical settings is arranged to ensure a balanced exposure to direct patient encounter activities, written case-based problem-solving learning (PSL) tutoring activities, and regularly scheduled seminars on specific topics (e.g., radiology, pathology).
PBL consists of clinical cases developed to enhance students’ abilities to analyze and synthesize, and to increase their understanding of the pathophysiological mechanisms underlying cases. By contrast, PSL aims to enhance students’ reasoning processes and clinical case problem-solving skills. Additionally, the PSL cases are often chosen and developed to facilitate students’ exposure to cases that are not easily accessible to them through real patients.
In the clinical years, students attend, during the course of one year, seven main clerkships: internal medicine (IM), surgery, pediatrics, psychiatry, emergency medicine (EM), community medicine (CM), and obstetrics–gynecology (Ob-Gyn). Overall, the University of Geneva has five rotations of the eight-week clerkships and 10 rotations of the four-week clerkships. In general, the time devoted to clinical activities during clerkships consists of five to six hours per day, corresponding to approximately 60% of the students’ total learning schedule. Clerkships are distributed throughout the academic year, and attendance at all learning activities is compulsory. The length of the main clerkships is predetermined, but it varies among specialties. For IM, surgery, and pediatrics the duration is eight weeks, whereas for psychiatry, CM, EM, and Ob-Gyn it is four weeks. Residents, chief residents, and faculty supervise the patient-based clinical activities, and faculty tutors mainly teach the case-based PSL curriculum.
Students systematically evaluate all learning activities for all of their clerkships throughout the clinical year. At our institution, students complete the questionnaire voluntarily (and, if desired, anonymously) at the end of each clerkship. Over the years, changes have been made to the clerkships’ general structure and activities, mainly to integrate new content and to adapt the PSL cases according to the faculty tutors’ and students’ reviews and suggestions.
Each year, 100 to 130 students matriculate into the University of Geneva Faculty of Medicine. During the nine academic years composing this study, students evaluated each clerkship. The number of students per clerkship rotation varies from 10 to 20, depending on rotation length. The students’ office randomizes students into groups of approximately 20 students at the beginning of the clinical years. Groups remain essentially unchanged throughout the clerkship rotations, with few exceptions (for affinity or when constituting smaller groups of 10 students for clerkships of shorter length). The students’ office also assigns groups to rotations among different clerkships.
We developed a 22-item evaluation questionnaire (List 1) to derive students’ evaluations of the clerkship learning context and activities. The instrument contains questions allowing for both a global evaluation of the clerkship (1 item) and for the assessment of three main aspects of clerkship functioning (21 items). The first aspect is a general appraisal of the clerkship, which includes six items addressing the clerkship objectives, organization, learning materials, and students’ supervision and integration in the group. The second aspect, clinical activities, consists of eight items addressing the students’ perceptions of whether these activities improved their clinical knowledge and competencies in addition to their perceptions of the availability of residents and the adequacy of time dedicated to these activities. The third aspect, case-based PSL activities, consists of seven items addressing the quality of the written PSL cases in facilitating the acquisition of clinical knowledge and reasoning, the quality of the faculty tutors, and the availability of time for self-directed learning. Scoring of each item is based on a five-point Likert scale. The scale for the global evaluation of the clerkships is 5 = excellent, 3 = good, and 1 = to be improved. For the remaining 21 items of the questionnaire, the scale runs from 5 = strongly agree to 1 = strongly disagree.
In our analyses, we included all data obtained throughout the nine-year study period and all data from all the group rotations across the seven clerkships. At our institution, a specific ethical review process is not required for the type of educational research we conducted. However, as a regular process, the study and its results have been presented and discussed at the Clerkship Curriculum Committee and approved by the respective clerkship directors.
We distributed a total of 3,125 questionnaires for the seven clerkships during the nine-year study period. As stated, students rotated in preestablished groups through their clinical rotations. For the purpose of this study, we performed both a global analysis and a separate analysis for each clerkship using the students’ group mean scores for each of the 22 questionnaire items.
We performed descriptive statistics for the global ratings, the overall questionnaire ratings, and the items within the general appraisal of clerkships, clinical activities, and PSL activities sections. We used Pearson correlations to compare global ratings and the overall questionnaire ratings, and we derived Cronbach alpha coefficients for the questionnaire ratings. We analyzed the items most influencing students’ global ratings in three steps. First, we performed a principal component analysis using Varimax orthogonal rotation and derived three main factors with an eigenvalue ≥1.0. Subsequent nonorthogonal (Promax) rotations derived the same three factors. Because item extraction revealed that the three factors were essentially the same between the two methods, and because Varimax orthogonal rotation allows for a clearer interpretation of results, we employed Varimax in this study. Second, we used Pearson correlations to compare the global evaluation scores and the principal components (factors). Third, we used regression analysis to assess the variance explained by each factor and by the model combining all three factors. For interpretation purposes, we considered only items with loadings ≥0.50.
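The three analytic steps (component extraction with the eigenvalue ≥1.0 criterion, Varimax rotation, and regression of the global rating on the retained factors) can be sketched in code. The following is a minimal illustration on synthetic data, not the study’s SPSS analysis; the simulated rating structure (three latent qualities driving 21 items across 472 group-mean observations) and all variable names are hypothetical.

```python
import numpy as np

def varimax(loadings, max_iter=100, tol=1e-6):
    """Kaiser's Varimax rotation of an (n_items, n_factors) loading matrix."""
    p, k = loadings.shape
    rotation = np.eye(k)
    d = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - rotated @ np.diag((rotated ** 2).sum(axis=0)) / p))
        rotation = u @ vt
        d_new = s.sum()
        if d_new < d * (1 + tol):
            break
        d = d_new
    return loadings @ rotation

# Hypothetical stand-in for 472 group-mean ratings on 21 items:
# three latent qualities each drive seven items, plus rating noise.
rng = np.random.default_rng(0)
n_groups, n_items = 472, 21
latent = rng.normal(size=(n_groups, 3))
items = latent[:, np.arange(n_items) % 3] + 0.5 * rng.normal(size=(n_groups, n_items))
global_rating = latent @ np.array([0.6, 0.3, 0.1]) + 0.3 * rng.normal(size=n_groups)

# Step 1: principal components of the item correlation matrix,
# retaining components with eigenvalue >= 1.0 (Kaiser criterion).
z = (items - items.mean(axis=0)) / items.std(axis=0, ddof=1)
eigvals, eigvecs = np.linalg.eigh(np.corrcoef(items, rowvar=False))
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]
n_retained = int((eigvals >= 1.0).sum())
loadings_rot = varimax(eigvecs[:, :n_retained] * np.sqrt(eigvals[:n_retained]))

# Steps 2-3: crude factor scores, then regression of the global rating
# on the retained factors to estimate the variance explained (R^2).
scores = z @ loadings_rot
X = np.column_stack([np.ones(n_groups), scores])
beta, *_ = np.linalg.lstsq(X, global_rating, rcond=None)
resid = global_rating - X @ beta
r2 = 1 - resid.var() / global_rating.var()
print(f"retained factors: {n_retained}, R^2 = {r2:.2f}")
```

The eigenvalue ≥1.0 cutoff keeps only components that explain more variance than a single standardized item, which is why three factors emerge from this simulated structure.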
When describing the study results, we employ the term organization to address the specific item included in the general appraisal of the clerkship (i.e., The clerkship was well organized; see List 1); additionally, we use this term in the discussion section to summarize the following items of the questionnaire, which relate directly to the clerkship structure: objectives, general organization, integration of the student into the clerkship, and supervision of clinical activities. A P value ≤ .05 was considered statistically significant. We performed all analyses with SPSS for Windows, release 15 (SPSS Inc., Chicago, Illinois).
Overall, 2,450 evaluation questionnaires (78.4% of the total number distributed) were completed for the seven clerkships during the nine years. The average rate of response to the surveys across the nine years was 77.9% ± 15.0% (mean ± SD; 95% CI 73.4%–82.5%). Table 1 shows the mean score ratings obtained by each clerkship. These results correspond to the analysis of 472 groups of students throughout nine years for the seven clerkships. As shown in Table 1, the number of groups rotating through each clerkship during the study period varied between 43 and 88. The clerkships’ mean global ratings varied between 2.7 and 4.2 (P < .05), while the mean ratings for the clerkship’s general appraisal, the clinical activities, and the PSL activities varied between 3.0 and 4.4 (P < .05). The correlation between the overall questionnaire and the clerkships’ global ratings was 0.859 (r² = 0.738; P < .0001); the same correlation coefficients across clerkships varied between 0.7 and 0.9.
The overall questionnaire reliability (Cronbach alpha) varied between 0.942 and 0.958 across clerkships; reliabilities of the three questionnaire sections—namely, general appraisal of clerkship, clinical activities, and PSL activities—were, respectively, 0.902, 0.929, and 0.860 (Table 1).
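Reliabilities of this kind follow Cronbach’s standard formula, α = k/(k−1) · (1 − Σ item variances / variance of the summed score). As a brief sketch (on hypothetical ratings; the study’s coefficients were computed in SPSS), the computation and its behavior on internally consistent versus unrelated items look like this:

```python
import numpy as np

def cronbach_alpha(ratings):
    """Cronbach's alpha for an (n_respondents, n_items) rating matrix."""
    ratings = np.asarray(ratings, dtype=float)
    k = ratings.shape[1]
    item_vars = ratings.var(axis=0, ddof=1).sum()
    total_var = ratings.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Hypothetical data: six items driven by one underlying judgment
# (internally consistent scale) versus six unrelated items.
rng = np.random.default_rng(1)
trait = rng.normal(size=(200, 1))
consistent = trait + 0.3 * rng.normal(size=(200, 6))
unrelated = rng.normal(size=(200, 6))

print(f"consistent scale: alpha = {cronbach_alpha(consistent):.2f}")
print(f"unrelated items:  alpha = {cronbach_alpha(unrelated):.2f}")
```

Items that track a common underlying judgment drive alpha toward 1, consistent with the high section reliabilities reported above.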
Figure 1 depicts the global ratings, the 21-item questionnaire ratings, and the Cronbach alpha coefficients of the seven clerkships averaged across nine years. Both the global and the overall questionnaire ratings increase significantly over time, but reliability remains fairly stable.
Table 2 shows the results of the principal component analysis; loadings are displayed for all 21 items of the questionnaire using the global ratings of the corresponding clerkships as the criterion. The analyses across all clerkships indicated that, overall, the global ratings of the clerkships were explained in great part by three factors (r² = 0.74). The first factor grouped
- items pertaining to the clerkships’ general appraisal, which included organization, achievement of the clerkships’ objectives, supervision, and students’ integration into the clerkship, and
- items related to the clinical activities, which included students’ opportunities to acquire and improve various clinical competencies, the availability of residents and supervisors, and time to practice during the clerkship.
The second factor mainly grouped aspects pertaining to the written clinical cases used in small-group PSL. It included activities contributing to students’ improvements in clinical reasoning and knowledge, the complementary aspect of PSL to the clinical activities, and the tutors’ preparedness and ability to actively stimulate students during the learning process. The third factor, which contributed less to the total variance, was related to the availability of time for self-directed learning. These findings seem to suggest that the PSL activities, a variation of PBL in the clinical years, contribute as much as the clinical activities to the overall quality of the clerkship learning environment.
Table 3 displays the complete results of the factor analysis by clerkship. To determine whether a potential multicollinearity occurred among factors, we performed additional analyses, and the results failed to indicate such a risk (data not shown). As shown, factors contributing the most to the variance differed among clerkships. Factors 1 and 2 were more relevant in IM, CM, and Ob-Gyn; factors 1 and 3 in psychiatry and EM; factor 2 in surgery; and factor 1 in pediatrics. Items that related to the clerkship structure (included in the general appraisal section) contributed significantly to the global evaluation across clerkships. However, the contribution of specific activities varied among clerkships. For example, items related to clinical activities were mostly represented in pediatrics, whereas PSL activities contributed more to the observed variance in psychiatry. Table 4 presents a summary of the items included in each factor of the respective clerkship. Clerkships’ global ratings, except for Ob-Gyn, were in great part explained by three factors (r² = 0.71–0.84), although the relative importance of each factor varied by specialty. Overall, what contributed to the quality of the clerkships were items concerning the clerkships’ general organization, students’ supervision and integration into the clerkship, and students’ perceptions of both achieving the clerkship objectives and learning a great deal. The clinical activities contributed equally to the overall quality of the clerkship, but more in some clerkships (IM, pediatrics, EM, and Ob-Gyn) than others (surgery, psychiatry, and CM). Finally, the written, case-based PSL activities also contributed to the overall quality of the clerkship, but to a much lesser extent, and the contribution varied by clerkship.
Among the four clerkships where the clinical activities contributed a great part to the quality of the clerkship learning environment (i.e., IM, pediatrics, EM, and Ob-Gyn), we observed that the PSL activities contributed very little, except in Ob-Gyn, where they seemed to play as much a role as the clinical activities. For the three clerkships where the clinical activities did not contribute greatly to the quality of the overall learning environment (i.e., surgery, psychiatry, and CM), the PSL activities became as great a contributing factor (as in the surgery clerkship) or a greater one (as in the psychiatry and CM clerkships). In general, items pertaining to direct patient clinical activities were typically linked to the availability of supervision by residents, while those pertaining to PSL activities were linked to the quality of faculty tutors.
This study shows that factors influencing the students’ evaluation of the overall clerkship quality vary among medical specialties and depend not only on the teacher and the teaching but also on the clerkship structure and learning activities. While students consistently identified the importance of the clerkship organization and supervision of clinical activities across the seven clerkships, the role of the clinical teacher and the acquisition of specific clinical skills differed among specialties.
We are not aware of other studies that have comprehensively and longitudinally investigated the multiple facets that characterize learning in different clinical environments. The long-term analysis of clerkships’ global evaluations and this attempt to identify facets that might influence students’ evaluations have confirmed several fundamental features, heretofore unsubstantiated, that medical educators generally believed to impact the quality of clerkships. These include, first, the clerkships’ good organization and supervision, and students’ perceptions that they are integrated into the clerkship and have learned a great deal. Additional important contributing facets are the clerkships’ clinical activities and, to a lesser extent, the written case-based PSL activities. However, according to the global ratings on the questionnaire, the relative contribution of these two activities varied by clerkship and differed in relation to the activities’ characteristics. On the other hand, the contribution of each of these (good organization, clinical activities, and written case-based PSL activities) can be viewed in terms of their combined or single effect. For IM and surgery, the combined effect prevailed. The single effect of clinical activities was found in pediatrics, EM, and Ob-Gyn, whereas the single effect of PSL activities was found in psychiatry and CM. Further analyses of the effects of these two activities on the questionnaires’ global ratings showed that while students appreciated the clinical activities in most clerkships because they promoted the acquisition of various clinical competencies, in surgery and EM their appreciation was limited to the acquisition of technical skills. The students gave high ratings to the PSL activities partially because these activities facilitated students’ knowledge integration and clinical reasoning, but mostly because they were perceived as complementary to the clinical activities.
This latter reason is most prominent in the surgery, EM, and Ob-Gyn clerkships.
Studies addressing the particularities of medical specialties and their effects on students’ perception of learning in different clinical environments are scarce. Earlier work1,9 showed substantial differences in the way students perceived learning when comparing medical and surgical environments. Recently, Beckman and associates13 went further by showing that differences in students’ evaluations of teaching appear even when students evaluate subspecialties within IM. However, these studies focused mainly on aspects related to the quality of teaching and learning, thus precluding further comparison with our results.
Although the quality of a clerkship relies to a great degree on its organization and on the complementarity of direct patient clinical activities and written, case-based PSL sessions, these aspects are not sufficient unless complemented by good clinical supervision, the availability of residents, well-prepared faculty tutors, and sufficient time for clinical activities and self-directed learning. Despite students’ appreciation of the learning derived from the PSL activities, we observed, for some clerkships, a trend toward insufficient scheduled time for self-directed learning within the PSL activities. Indeed, PSL activities become useful only when students are allowed sufficient time to prepare for the tutorials.
For nearly 30 years, Irby and colleagues14–17 have focused on the role of the clinical teacher and the impact of faculty on effective learning. Our results are in line with these reports in that they also show that well-prepared and motivated faculty influence the quality of clerkships, notably during PSL activities. Yet, residents have been increasingly enrolled as clinical teachers in practice-based teaching activities in recent years.18 Previous studies have shown that students evaluate residents and young faculty well, and this may indicate the close rapport between undergraduate and postgraduate trainees, as well as young professionals’ good understanding of students’ specific learning needs.1,19 Our earlier work20 has shown that students rely on residents for diagnostic reasoning, suggesting that the clinical reasoning of residents is close to the students’ representation and organization of knowledge. Our current results corroborate these earlier findings by showing that the availability of residents positively influenced the appraisal of clerkships and that residents’ availability gained importance in clerkships that offer direct patient activities, thus suggesting an additional role of residents in assisting students during their clinical learning activities. The reduction of residency work hours at our institution, in accordance with the new European Working Time Directive, is a concern, because the reduced hours may limit the residents’ availability and consequently impact the quality of clerkships.
Strengths of our study were the systematic and long-term collection of data and assessment of clerkships from seven main medical specialties. The instrument used to assess the students’ perceptions of clerkships proved reliable, and findings support its content and construct validity. Furthermore, the strong correlation between the global and the overall questionnaire ratings of clerkships and the consistent results obtained with the two measurements across the nine-year period of observation suggest the satisfactory responsiveness of the instrument. Over time, the evaluation of the clerkships improved, and the improvement we observed beyond the year 2000 (Figure 1) adequately reflects the adjustments brought to the clinical program in the preceding years. In addition, this systematic assessment has proved useful as a monitoring tool to identify areas of weakness within clerkships and to make necessary changes in a timely manner; for example, the Ob-Gyn clerkship now increases opportunities for clinical activities. Continued evaluation will allow us to measure the impact of these new opportunities.
This study has certain limitations. The results described are restricted to the items included in the instrument, which was designed for the assessment of learning in ward clinical settings and may not be applicable to other settings. As anticipated, some findings are closely related to the context of clerkships and specialties. However, aspects related to the organization of clerkships and clinical activities are of broad relevance. Also, students completed the surveys on an anonymous, voluntary basis, precluding demographic analysis of the studied groups and reflecting only partially the views of the entire class of students attending clerkships. Nonetheless, the high rates of response to the surveys (78%) and the reliability of the results obtained over the study years support the consistency of the data. Our study is based on students’ assessments, but the validity and reliability of students’ assessments of teaching have been previously reported,21,22 as has the longitudinal stability (up to two years) of students’ ratings.22 Our study relies on global ratings, but the use of global ratings has been validated in the specific context of assessing student and faculty performance, proving complementary to structured item-based instruments.23–25 In fact, our data corroborate these studies by showing that the global scoring is a reliable indicator when applied to the clinical setting. Moreover, they add to previous work by showing the validity of students’ ratings in distinct clinical environments and their stable reliability over the long term. Finally, our study was based on the evaluation of clerkships by students clustered in small groups throughout rotations. Therefore, students may have shared their perceptions of learning during clerkships within groups, which may have influenced the results; however, the consistent long-term ratings suggest that results were not greatly affected by this potential bias.
In conclusion, this study has defined what students report are important factors when they judge the overall quality of a clerkship learning environment. The results not only confirm past findings on the necessity of having quality clinical teachers but also expand on the importance of the clerkship organization and its learning context and activities. The study also demonstrates that different formats and contexts of learning should be considered and adapted accordingly to the clinical disciplines so as to ensure maximum learning. For disciplines where direct and multiple access to patients is more difficult, an alternative learning format should be considered. For this purpose, a PSL design meeting the clerkship objectives is useful as a necessary complementary activity. To our knowledge, these are the first results that seem to indicate the relative utility of PSL activities when applied to the clinical years. Finally, our students recognized the value of at-hand residents in supervising direct patient clinical activities. This observation may support the extension of faculty development skills applied earlier to postgraduate training, especially because teaching is an aptitude proven beneficial both for the student and the future practitioner.26,27 Recent results indicate that restriction of residents’ work hours did not limit the time they spend with students.28 However, further studies are needed to investigate the acquisition of teaching skills and the quality of supervision provided by residents under such time constraints.
The authors would like to express their gratitude to the students who responsibly evaluated all clerkships, to the residents, tutors, and faculty who teach in the clinical years, and to the faculty in charge of clerkships for their engagement throughout the years, without whom this study could not have been possible. We thank Professor G. Pini for assistance with statistical analyses.
This project was partially supported by The University of Geneva Faculty of Medicine Intramural Funds for the Faculty of Medicine Program Evaluation. The funding sponsor had no role in the design and conduct of the study; the collection, management, analysis, and interpretation of the data; or the preparation, review, or approval of the manuscript.
Part of this research was presented at the Association for Medical Education in Europe (AMEE) meeting, fall 2006.
1 Patel VL, Dauphinee WD. The clinical learning environments in medicine, paediatrics and surgery clerkships. Med Educ. 1985;19:54–60.
2 Irby DM, Gillmore GM, Ramsey PG. Factors affecting ratings of clinical teachers by medical students and residents. J Med Educ. 1987;62:1–7.
3 Ramsey PG, Gillmore GM, Irby DM. Evaluating clinical teaching in the medicine clerkship: Relationship of instructor experience and training setting to ratings of teaching effectiveness. J Gen Intern Med. 1988;3:351–355.
4 Marriott DJ, Litzelman DK. Students’ global assessments of clinical teachers: A reliable and valid measure of teaching effectiveness. Acad Med. 1998;73:S72–S74.
5 Hunter AJ, Desai SS, Harrison RA, Chan BK. Medical student evaluation of the quality of hospitalist and nonhospitalist teaching faculty on inpatient medicine rotations. Acad Med. 2004;79:78–82.
6 Ambrozy DM, Irby DM, Bowen JL, Burack JH, Carline JD, Stritter FT. Role models’ perceptions of themselves and their influence on students’ specialty choices. Acad Med. 1997;72:1119–1121.
7 Ellsbury KE, Carline JD, Irby DM, Stritter FT. Influence of third-year clerkships on medical student specialty preferences. Adv Health Sci Educ Theory Pract. 1998;3:177–186.
8 Beckman TJ, Ghosh AK, Cook DA, Erwin PJ, Mandrekar JN. How reliable are assessments of clinical teaching? A review of the published instruments. J Gen Intern Med. 2004;19:971–977.
9 Atkinson P. Worlds apart: Learning environments in medicine and surgery. Br J Med Educ. 1973;7:218–224.
10 Solomon DJ, Szauter K, Rosebraugh CJ, Callaway MR. Global ratings of student performance in a standardized patient examination: Is the whole more than the sum of the parts? Adv Health Sci Educ Theory Pract. 2000;5:131–140.
11 Silber CG, Nasca TJ, Paskin DL, Eiger G, Robeson M, Veloski JJ. Do global rating forms enable program directors to assess the ACGME competencies? Acad Med. 2004;79:549–556.
12 Vu NV, van der Vleuten CPM, Lacombe G. Medical students’ learning processes: A comparative and longitudinal study. Acad Med. 1998;73:S25–S27.
13 Beckman TJ, Cook DA, Mandrekar JN. Factor instability of clinical teaching assessment scores among general internists and cardiologists. Med Educ. 2006;40:1209–1216.
14 Irby DM. Clinical teacher effectiveness in medicine. J Med Educ. 1978;53:808–815.
15 Irby DM, Ramsey PG, Gillmore GM, Schaad D. Characteristics of effective clinical teachers of ambulatory care medicine. Acad Med. 1991;66:54–55.
16 Irby DM. What clinical teachers in medicine need to know. Acad Med. 1994;69:333–342.
17 Ramsbottom-Lucier MT, Gillmore GM, Irby DM, Ramsey PG. Evaluation of clinical teaching by general internal medicine faculty in outpatient and inpatient settings. Acad Med. 1994;69:152–154.
18 Irby DM. Where have all the preceptors gone? Erosion of the volunteer clinical faculty. West J Med. 2001;174:246.
19 Beckman TJ, Mandrekar JN. The interpersonal, cognitive and efficiency domains of clinical teaching: Construct validity of a multi-dimensional scale. Med Educ. 2005;39:1221–1229.
20 Nendaz MR, Junod AF, Vu NV, Bordage G. Eliciting and displaying diagnostic reasoning during educational rounds in internal medicine: Who learns from whom? Acad Med. 1998;73:S54–S56.
21 Benbassat J, Bachar E. Validity of students’ ratings of clinical instructors. Med Educ. 1981;15:373–376.
22 Pugnaire MP, Purwono U, Zanetti ML, Carlin MM. Tracking the longitudinal stability of medical students’ perceptions using the AAMC graduation questionnaire and serial evaluation surveys. Acad Med. 2004;79:S32–S35.
23 Solomon DJ, Speer AJ, Rosebraugh CJ, DiPette DJ. The reliability of medical student ratings of clinical teaching. Eval Health Prof. 1997;20:343–352.
24 Williams BC, Litzelman DK, Babbott SF, Lubitz RM, Hofer TP. Validation of a global measure of faculty’s clinical teaching performance. Acad Med. 2002;77:177–180.
25 Silber C, Novielli K, Paskin D, et al. Use of critical incidents to develop a rating form for resident evaluation of faculty teaching. Med Educ. 2006;40:1201–1208.
26 Zabar S, Hanley K, Stevens DL, et al. Measuring the competence of residents as teachers. J Gen Intern Med. 2004;19:530–533.
27 Golub RM. Medical education 2006: Beyond mental mediocrity. JAMA. 2006;296:1139–1140.
28 Nixon LJ, Benson BJ, Rogers TB, Sick BT, Miller WJ. Effects of Accreditation Council for Graduate Medical Education work hour restrictions on medical student experience. J Gen Intern Med. 2007;22:937–941.
© 2009 Association of American Medical Colleges