Direct observation of students by faculty and residents during clerkship rotations is vital to the instruction and valid assessment of clinical skills. Studies have shown that direct observation provides an authentic patient-centered teaching environment and improves history-taking and physical examination skills.1–3 In addition, these bedside observations are inherent in many performance ratings of students during clerkship rotations and, therefore, play an important role in the evaluation of clinical skills. Medical educators consider direct observation to be an important method for ensuring clinical competency. This is evidenced by the Liaison Committee on Medical Education's requirement that faculty provide “… ongoing assessment that assures students have acquired and can demonstrate on direct observation the core clinical skills, behaviors, and attitudes that have been specified in the school's educational objectives.”4
Virtually all medical schools report the use of performance ratings based on direct observations of students for the assessment of clinical skills.5 Despite their wide use, the validity and reliability of these ratings have been challenged.6,7 One problem with performance ratings may relate to the low incidence of direct observations by faculty. A recent survey by the National Board of Medical Examiners (NBME)8 reported that students were observed more often by a resident than by a faculty member and that approximately 20% of students indicated they had been observed by faculty zero to two times while performing a history or physical examination. Another survey of graduates by the Association of American Medical Colleges (AAMC)9 reported even lower rates of observation. Specifically, they found that 27% of students reported that they had never been evaluated by a faculty member while taking a complete history and performing a complete physical examination. This survey also found that these reports varied greatly, depending on the graduate's medical school, and ranged from zero to 77%. Twenty-one schools had "no observation" reported by 10% or fewer of their students, whereas 20 schools had such reports from 40% or more.
The NBME survey did not clarify how many students had never been observed or when during the clinical curriculum the observations were occurring. The AAMC survey focused on whether students had ever had their clinical skills evaluated by faculty observation; it also did not delineate the number of observations or when they occurred. Additionally, students were not asked to report whether they had been observed by a resident. Neither survey differentiated between complete (head-to-toe) and focused (one-system) physical examinations. We undertook our descriptive study to investigate student estimates of the number of direct observations that occurred during clerkship rotations, type of skill observed, when these observations occurred, and who conducted them.
To determine how often third-year students at the University of Virginia School of Medicine estimated they were observed during the clerkship rotations, from 1999–2001 we administered a survey instrument to 397 students at the end of their sixth rotation. Because of the difficulty of gaining access to all students during their third year, the survey instrument was administered at the end of the academic year when the entire class was required to attend a comprehensive performance assessment. Before conducting our study, we sought approval from the university's Institutional Review Board.
We asked students to estimate, for each of the six rotations, the number of times they had been observed by a resident or faculty member while taking a history, conducting a focused physical examination, and performing a complete physical examination. Each of these three clinical skills was presented in a separate grid that included the following possible responses: “0,” “1–3,” “4–6,” “7–9,” “10–12,” and “13+” times observed. The clerkship rotations varied in length: internal medicine and surgery lasted 12 weeks; pediatrics, eight weeks; obstetrics–gynecology (Ob/Gyn) and psychiatry, six weeks each; and family medicine, four weeks. Survey completion was optional and anonymous.
Three hundred and forty-five students (87%) returned the survey instrument; of these, 322 (81%) returned instruments with complete information. The percentages of students reporting that they had been observed by a resident and faculty member 0, 1–3, 4–6, 7–9, 10–12, or 13+ times while performing each of the three clinical skills for combined rotations are presented in Table 1. On average, the majority of students reported having never been observed by a faculty member while they interviewed a patient (51%) or conducted a focused (54%) or a complete (81%) physical examination. Students reported that they had been observed more frequently by a resident while performing each of the three skills; however, on average 60% of students reported that they had never been observed by a resident while they conducted a complete physical examination.
The percentages of students reporting that they had never been observed by a faculty member for each rotation and each clinical skill are shown in Table 2. Although history-taking was the skill most often observed by a faculty member, overall the majority of students reported that they had never been observed taking a patient's history during the internal medicine (59%), surgery (74%), and Ob/Gyn (68%) rotations. Despite the shorter rotation length, psychiatry (27%) and family medicine (26%) clerkships had relatively fewer students reporting that they had never been observed by a faculty member while taking histories. The majority of students reported that they had never been observed by a faculty member while performing a focused physical examination on the internal medicine (51%), surgery (71%), psychiatry (71%), and Ob/Gyn (61%) rotations. Family medicine (25%) and pediatrics (47%) had relatively fewer students reporting that they had never been observed by a faculty member while performing a focused physical examination. The data reflect that faculty observed students conducting a complete physical examination fewer times than they observed the other two skills. This finding was consistent across all rotations and ranged from 70% on the family medicine rotation to 91% on the surgery rotation.
Figure 1 displays the overall percentage of students who reported having been observed 1–6, 7–12, and 13+ times by a faculty member for each rotation and each skill. The skill and rotation with the largest percentage of students who reported having been observed “13+ times” was history-taking during psychiatry (31%). With the exception of the family medicine rotation, fewer than 5% of students reported having been observed “13+ times” during their rotations for each skill component. Overall, students reported having been observed by a faculty member least frequently during the 12-week surgery and six-week Ob/Gyn rotations. We calculated a chi-square goodness-of-fit test comparing the frequency of reported observation with the length of the rotation. A highly significant deviation from the expected values was found, χ²(5, n = 295) = 127.85, p < .001. In other words, the number of reported observations did not increase as students spent more days on the rotation.
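The goodness-of-fit comparison above can be sketched in a few lines of Python. The per-rotation observation counts below are hypothetical placeholders chosen only to illustrate the method (our actual frequency data are summarized in Tables 1 and 2); under the null hypothesis, expected counts are proportional to rotation length.

```python
# Chi-square goodness-of-fit: do reported observations track rotation length?
# Observed counts per rotation are HYPOTHETICAL, for illustration only.
rotation_weeks = {"internal medicine": 12, "surgery": 12, "pediatrics": 8,
                  "ob/gyn": 6, "psychiatry": 6, "family medicine": 4}
observed = {"internal medicine": 60, "surgery": 35, "pediatrics": 55,
            "ob/gyn": 30, "psychiatry": 70, "family medicine": 45}

total_obs = sum(observed.values())        # total reported observations
total_weeks = sum(rotation_weeks.values())

# Null hypothesis: observations occur in proportion to rotation length.
expected = {r: total_obs * w / total_weeks for r, w in rotation_weeks.items()}

# Pearson chi-square statistic: sum of (O - E)^2 / E across rotations.
chi_sq = sum((observed[r] - expected[r]) ** 2 / expected[r] for r in observed)
df = len(observed) - 1  # six rotations -> 5 degrees of freedom

print(f"chi-square({df}) = {chi_sq:.2f}")
```

A large statistic relative to the chi-square distribution with 5 degrees of freedom indicates that observation frequency deviates significantly from what rotation length alone would predict.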
The descriptive data in our study reinforce and further delineate the findings from previous studies. Previous research did not differentiate what skills were being observed, when the observation was occurring, or who was conducting the observation. According to our findings, on average 81% of students reported never having been observed by a faculty member while performing a complete physical examination during a required clinical rotation. Although observations by residents appear to have occurred more frequently, on average 60% of students reported never having been observed by a resident while performing a complete physical examination. With the exception of family medicine and psychiatry, we found no great disparities across clerkship rotations. Students estimated that they were observed taking a patient's history more frequently on the psychiatry rotation than they were on any other. This finding is consistent with the expectations of the psychiatry clerkship. Specifically, during the course of the study, psychiatry was the only rotation during the third year that required faculty to conduct formal and structured observations of their students’ interviewing skills.
The length of the rotation did not relate to the number of observations. In other words, the longer rotations (12 weeks) were not associated with a larger number of estimated observations. Surprisingly, the surgery rotation, one of the longest clerkships at 12 weeks, had the largest percentage of students reporting that no observations had occurred. Conversely, the family medicine rotation, the shortest at four weeks, had the smallest percentage of students reporting that they had not been observed by faculty or residents. We do not understand the reasons for this discrepancy. However, we speculate that the family medicine faculty, who predominantly practice in an outpatient setting, may have more time to observe their students than do those faculty who practice in inpatient settings. Additional research is needed to further investigate this disparity.
Our study had several limitations. First, students completed the survey instrument at the end of the sixth rotation. They may have had difficulty recalling the number of times they were observed during rotations they completed at the beginning of the third year, approximately 12 months earlier. However, the rotation schedule is assigned equally and randomly among all students. This inherent randomization of the clerkship rotation schedule should greatly reduce any possible recency effect. Second, our study did not include data regarding the required neurology clerkship, which is scheduled throughout the fourth year of the curriculum and would have occurred after we administered the survey instrument. We felt that waiting until the end of the fourth year to include the neurology rotation might make it even more difficult for the students to recall the observations. Finally, response bias is always a consideration when conducting a survey. Although we had a high response rate (87%), the anonymous data did not allow us to analyze the 13% of students who chose not to respond, nor did it allow us to follow up with the approximately 6% of respondents who returned incomplete survey instruments.
Although the benefits of direct observation and bedside teaching of medical students have been long understood, these encounters have become increasingly infeasible. As early as 1964, Reichsman and colleagues3 observed that bedside teaching was declining on hospital rounds. Soon after, Morgan2 postulated four reasons for the apparent change: faculty subspecialization leading to insecurity in teaching generalist skills, third-party funding requirements, increasingly complex patient challenges, and technological advances. Unfortunately, these barriers are even more pronounced today. The current acute and managed care environment has left faculty, preceptors, and house staff with higher productivity standards, increasing administrative demands, increasing expectations for direct patient contact and documentation by faculty, and, subsequently, less time for direct observation of students.10
Several authors have suggested methods to increase the time faculty spend directly observing students, but none have empirically documented the outcomes of these methods.11,12 One study found that “structured clinical observations” that were limited to five minutes, focused on components of the clinical process, and included a limited number of feedback points, were qualitatively effective for teaching clinical skills.11 Since the completion of our study, a similar method, referred to as the “Clinical Skills Passport Program,” has been initiated at the University of Virginia for each of the required clinical clerkships. One of the goals of this new program is to encourage direct observation of specific clinical skills by faculty and residents. It is hoped that some of the problems identified by our study will be obviated by this initiative. Future research is planned to investigate the educational impact of this program, namely whether direct observations by faculty and residents have increased.
Given the current and related reports that students are rarely observed during the clinical rotations, the increasing demands on faculty and residents, and the changing face of medical education, consideration should be given to alternate forms of assessment to replace or augment ratings based on direct observation. Alternatively, efforts should be made to increase the occurrence and validity of direct observations. These efforts to produce more valid assessments and better instruction will ultimately better equip medical students with the clinical skills required for residency training.
1. Cooper D, Beswick W, Whelan G. Intensive bedside teaching of physical examination to medical undergraduates: evaluation including the effect of group size. Med Educ
2. Morgan WL. Bedside teaching. Trans Am Clin Climatol Assoc
3. Reichsman F, Browning FE, Hinshaw JR. Observation of undergraduate clinic teaching in action. J Med Educ
4. Liaison Committee on Medical Education, Washington, DC. Functions and structure of a medical school 〈http://www.lcme.org/standard.htm#current〉. Accessed 13 February 2003.
5. Mavis BE, Cole BL, Hoppe RB. A survey of student assessment in U.S. medical schools: the balance of breadth versus fidelity. Teach Learn Med
6. Borowitz SM, Saulsbury FT, Wilson WG. Information collected during the residency match process does not predict clinical performance. Arch Pediatr Adolesc Med
7. Carline JD, Paauw DS, Thiede KW, Ramsey PG. Factors affecting the reliability of ratings of students' clinical skills in a medicine clerkship. J Gen Intern Med
8. National Board of Medical Examiners, Washington, DC. An analysis of U.S. student field trial and international medical graduate certification testing results for the proposed USMLE clinical skills examination 〈http://www.usmle.org/news/cse/cseftresults2503.htm〉. Accessed 11 February 2003.
9. Association of American Medical Colleges. The role of faculty observation in assessing students’ clinical skills. Contemp Issues Med Educ
10. Brodkey AC, Sierles FS, Spertus IL, Weiner CL, McCurdy FA. Clerkship directors' perceptions of the effects of managed care on medical students' education. Acad Med
11. Lane JL, Gottlieb RP. Structured clinical observations: A method to teach clinical skills with limited time and financial resources. Pediatrics
12. Mooradian NL, Caruso JW, Kane GC. Increasing the time faculty spend at the bedside during teaching rounds. Acad Med