Academic Medicine, July 2013, Volume 88, Issue 7
doi: 10.1097/ACM.0b013e318294e99a
Research Reports

The Impact of Lecture Attendance and Other Variables on How Medical Students Evaluate Faculty in a Preclinical Program

Martin, Stanley I. MD; Way, David P. MEd; Verbeck, Nicole MPH; Nagel, Rollin PhD; Davis, John A. PhD, MD; Vandre, Dale D. PhD


Author Information

Dr. Martin is assistant professor of clinical internal medicine, Division of Infectious Diseases, and associate director, Integrated Pathway Curriculum, Ohio State University College of Medicine, Columbus, Ohio.

Mr. Way is senior research associate, Center for Education and Scholarship, Ohio State University College of Medicine, Columbus, Ohio.

Ms. Verbeck is curriculum coordinator, Med-1 Integrated Pathway, Ohio State University College of Medicine, Columbus, Ohio.

Dr. Nagel is professor of clinical internal medicine, Division of General Internal Medicine, and education resource specialist, Center for Education and Scholarship, Ohio State University College of Medicine, Columbus, Ohio.

Dr. Davis is assistant professor of clinical internal medicine, Division of Infectious Diseases, and assistant dean for student life, Ohio State University College of Medicine, Columbus, Ohio.

Dr. Vandre is associate professor, Department of Physiology and Cell Biology, and director, Integrated Pathway Curriculum, Ohio State University College of Medicine, Columbus, Ohio.

Correspondence should be addressed to Dr. Martin, Division of Infectious Diseases, Ohio State University Medical Center, N1135 Doan Hall, 410 W. 10th Ave., Columbus, OH 43210; telephone: (614) 293-5666; fax: (614) 293-4556; e-mail: stanley.martin@osumc.edu.


Abstract

Purpose: High-quality audiovisual recording technology enables medical students to listen to didactic lectures without actually attending them. The authors wondered whether in-person attendance affects how students evaluate lecturers.

Method: This is a retrospective review of faculty evaluations completed by first- and second-year medical students at the Ohio State University College of Medicine during 2009–2010. Lecture-capture technology was used to record all lectures. Attendance at lectures was optional; however, all students were required to complete lecturer evaluation forms. Students rated overall instruction using a five-option response scale. They also reported their attendance. The authors used analysis of variance to compare the lecturer ratings of attendees versus nonattendees. The authors included additional independent variables—year of student, student grade/rank in class, and lecturer degree—in the analysis.

Results: The authors analyzed 12,092 evaluations of 220 lecturers received from 358 students. The average number of evaluations per lecturer was 55. Seventy-four percent (n = 8,968 evaluations) of students attended the lectures they evaluated, whereas 26% (n = 3,124 evaluations) viewed them online. The mean lecturer rating from attendees was 3.85, compared with 3.80 from nonattendees (P ≤ .05; effect size: 0.055). Students’ class grade and year, as well as lecturer degree, also affected their evaluations of lecturers (effect sizes: 0.055–0.3).

Conclusions: Students’ attendance at lectures, year, and class grade, as well as lecturer degree, affect students’ evaluations of lecturers. This finding has ramifications for how student evaluations should be collected, interpreted, and used in promotion and tenure decisions in this evolving medical education environment.

Advances in classroom technology enable students to obtain educational material without actually attending standard didactic teaching sessions (i.e., lectures). Lecture-recording technology captures up to three channels: (1) the audio content of the lecture; (2) a digital picture of the visual display, usually Microsoft PowerPoint (Redmond, Washington) slides; and, in some cases, (3) a video recording of the lecturer. These channels can be processed into a self-contained learning module (commonly called a podcast) that viewers—in this case, first- and second-year medical students—can play back on a personal computer or other digital device. Podcasts allow students to customize their studies, freeing them from the constraints of otherwise rigid lecture schedules and allowing them to learn in new, nontraditional ways: students control the amount of time they spend on any one subject and can focus on different subjects in different ways and places. Many medical schools in the United States have recognized the impact of lecture-capture technology and have responded by making student attendance at lectures optional.1

Research on lecture-capture technology has generally shown no significant impact on student learning when compared with traditional methods; however, some studies suggest that lecture-capture technology is quite popular among students.2–4 Research on faculty attitudes toward lecture-capture technology is just beginning to emerge. The most common faculty concerns reported to date relate to the loss of an audience. Specifically, faculty fear that low attendance and reduced student–teacher interaction will have an impact on the efficacy of their teaching performance and give them less autonomy in their approach to teaching.5

Research on student evaluations of college teaching is abundant, dating to the 1920s and the work of Remmers6 and colleagues from Purdue University.7,8 This rich history of evaluations includes the 1970s era, which education researchers termed “the golden age of research on student evaluations.”7 Over the past 90 years, researchers have consistently demonstrated that students serve as valid and reliable assessors of faculty teaching performance in college classrooms and lecture halls; however, investigators have also identified various factors that occasionally confound student evaluation results, including class size, level of course, the student’s anticipated grade in the course, and the subject matter or discipline.6–8

With the advent of optional lecture attendance brought about by advancements in lecture-capture technology and versatile electronic playback equipment, we wondered whether students’ attendance at the lecture might be a confounding factor in their evaluations of faculty. Specifically, the purpose of this study was to compare how students who attended class versus those who relied solely on podcasts and other supplemental learning materials evaluated lecturers in the preclinical medical school curriculum at the Ohio State University College of Medicine (OSUCOM). We hypothesized that students who attended lectures in real time would evaluate lecturers differently than those students who did not.

The person-to-person interaction that develops through attendance at a traditional didactic lecture may have a different effect on individual learners than reliance on other materials, even if those materials include a complete podcast of the lecture itself. We believe this interaction may color how a student listening to a lecture live rates the faculty member giving it. Other variables, such as underlying academic achievement (i.e., the student’s class grade or rank) and whether the lecturer has a clinical or basic science degree, may also shape how students evaluate lecturers.


Method

In the first-year (Med-1) and second-year (Med-2) Integrated Pathway (IP) curriculum at the OSUCOM, students follow a rigorous lecture schedule each weekday throughout the academic year. Each lecture is delivered by a teacher with an academic title who is associated with the OSUCOM and whom the university considers a content expert. The vast majority of lecturers deliver their presentations with concomitant PowerPoint slides or other electronically formatted visual aids. Most lecturers also distribute a traditional written handout of the material to the students. The visual aids and the lecturer’s audio delivery are recorded and made available as podcasts that students can review at any time; the handout is also available online. Attendance at Med-1 and Med-2 lectures is not mandatory, but students are ultimately responsible for demonstrating mastery of the material through their performance on regularly scheduled exams.

For this retrospective study of students’ evaluations of preclinical faculty, we included evaluations of each individual lecturer from both the Med-1 and Med-2 classes for the academic year 2009–2010. The curriculum for the Med-1 class was divided into multiple divisions, each lasting approximately three to four weeks. At the end of each division, a predetermined (see below) group of students was required to evaluate each lecturer in terms of his or her overall instruction and ability to facilitate student learning. The students completed these evaluations electronically online on their own time. The survey form was adapted and simplified from other faculty evaluation forms developed by Ohio State University, and the administrative staff and curriculum coordinators in the Office of Medical Education oversaw and monitored the survey implementation. Students were asked to consider the quality of the stated objectives, the available learning materials (lecture notes and slides), and the presentation itself. They then quantified their opinion of the lecturer’s performance with one summary rating of overall instruction, using a response scale with options ranging from 1 = poor performance to 5 = extraordinary performance; the choice “Not Applicable” was also available. At the time of the evaluation, students were also asked whether they attended the lecture, viewed the podcast, used only supplemental materials, or any combination of these three options. We assigned students who relied on podcasts and/or supplemental materials without attending the lecture to the “nonattendance” group, and students who reported attending the lecture (with or without also viewing the subsequent podcast) to the “attendance” group, as sketched below.
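To make the grouping rule concrete, here is a minimal Python (pandas) sketch, assuming a hypothetical long-format export of the survey with one row per evaluation; the column names are illustrative assumptions, not the actual survey fields.

import pandas as pd

# Hypothetical survey export: one row per evaluation. Column names are
# illustrative assumptions, not the actual form fields.
evals = pd.DataFrame({
    "attended_lecture": [True, True, False, False],
    "viewed_podcast":   [False, True, True, False],
    "used_supplements": [False, False, False, True],
    "rating":           [4, 5, 3, 4],
})

# Grouping rule described above: anyone who attended in person counts as an
# attendee, even if they also viewed the podcast; podcast-only and
# supplements-only respondents count as nonattendees.
evals["group"] = evals["attended_lecture"].map(
    lambda attended: "attendance" if attended else "nonattendance"
)
print(evals.groupby("group")["rating"].mean())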

At the beginning of the academic year, students in both the Med-1 and Med-2 courses were assigned to an evaluation group. Group assignment was alphabetical, based on student last name. The average group size in 2009–2010 was 49 students for the Med-1 class and 43 students for the Med-2 class. At the end of each division, one of the evaluation groups was assigned to evaluate the faculty from that division. These preassigned evaluation groups then repeated evaluation assignments every four divisions so that all faculty were evaluated by groups of students in a revolving fashion and all students contributed to the evaluation process. Completion of the evaluation forms for lecturers was mandatory and enforced by the IP administration. Failure to complete evaluations could have led to deductions in the students’ overall professionalism grades.

Besides actual attendance at lectures (versus listening to the lecture online), the other independent variables we considered relevant to students’ evaluations of lecturers were their year in medical school (Med-1 or Med-2) and their grade in the course. We also examined the effect of whether the lecturer was trained as a clinician (MD or equivalent degree), as a basic scientist (PhD or equivalent degree), or both; we classified lecturers with an MD or equivalent, regardless of other degrees, in the MD category. Students received a grade for each yearlong course (Med-1 and Med-2), based primarily on their end-of-year cumulative average examination score. The IP assigns grades as follows: Approximately the top 10% of the class receive Honors; the next 15% (those in the 11th through 25th percentiles) receive Letters of Commendation (LOC); and the remaining students are designated satisfactory (or unsatisfactory if they do not pass). This scheme is illustrated in the sketch below. Unsatisfactory students and those who deferred rating a faculty member were not included in the analyses: because unsatisfactory students did not complete the curriculum, they did not submit a complete set of responses, and these few students would have produced too small a number of evaluations to have a significant impact on the overall effect.
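As an illustration of the grading scheme, the following Python sketch assigns grades from hypothetical end-of-year cumulative exam averages. The cutoffs follow the percentile rule above; the scores themselves are synthetic.

import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Synthetic end-of-year cumulative exam averages for a hypothetical class.
scores = pd.Series(rng.normal(80, 7, size=190))

pct_rank = scores.rank(pct=True)  # percentile rank within the class (0-1)
grades = np.select(
    [pct_rank > 0.90, pct_rank > 0.75],  # top 10%; next 15% of the class
    ["Honors", "LOC"],
    default="Satisfactory",              # remaining students who pass
)
print(pd.Series(grades).value_counts())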

We conducted a four-way analysis of variance (ANOVA) with post hoc tests to evaluate the relationship between the independent variables and the students’ ratings of lecturers; a minimal sketch of an analogous analysis follows. We performed all analyses using IBM SPSS Statistics, Version 19 (IBM Corp., Armonk, New York). The Ohio State University institutional review board approved this study.
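For readers who want to reproduce this kind of analysis outside SPSS, here is a minimal Python sketch of an analogous main-effects ANOVA using statsmodels, run on synthetic data. The variable names, factor levels, and simulated effects are illustrative assumptions, not the study’s data or code.

import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
n = 2000  # synthetic evaluations; the study analyzed 12,092

# Simulate one row per evaluation with the four independent variables.
evals = pd.DataFrame({
    "attendance": rng.choice(["attended", "podcast"], n, p=[0.74, 0.26]),
    "year": rng.choice(["Med-1", "Med-2"], n),
    "degree": rng.choice(["MD", "PhD"], n),
    "grade": rng.choice(["Honors", "LOC", "Satisfactory"], n,
                        p=[0.10, 0.15, 0.75]),
})
evals["rating"] = np.clip(np.round(
    3.8
    + 0.05 * (evals["attendance"] == "attended")  # small attendance effect
    + 0.10 * (evals["year"] == "Med-1")
    + 0.07 * (evals["degree"] == "MD")
    + rng.normal(0, 0.8, n)), 1, 5)

# Main-effects ANOVA, mirroring the reanalysis after the outlier was removed.
model = ols("rating ~ C(attendance) + C(year) + C(degree) + C(grade)",
            data=evals).fit()
print(sm.stats.anova_lm(model, typ=2))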


Results

We originally included a total of 358 students in our analysis (191 Med-1 students and 167 Med-2 students). These students completed 12,555 valid evaluations (6,904 Med-1 evaluations and 5,651 Med-2 evaluations) of 221 different faculty members during the one-year period. There was an average of 57 evaluations per faculty member during this time period. Of those evaluations, 9,316 (74%) were from students who reported attending the lecture, and 3,239 (26%) were from students who reported that they did not attend the lecture. Our initial analysis of the students’ evaluations of lecturers indicated that those who attended the lectures gave faculty a mean rating of 3.87, whereas those who opted not to attend lectures gave faculty a mean rating of 3.82 (P = .018).

We discovered, however, through our ANOVA, a significant two-way interaction (Table 1) between class year (Med-1/Med-2) and the lecturer’s degree (clinical versus basic science background). On the basis of this initial analysis, Med-1 students rated lecturers with clinical backgrounds (physicians) higher than they rated lecturers with basic science degrees, whereas Med-2 students rated basic scientists higher than physicians (Figure 1). One faculty member, whose numbers of lectures and evaluations were statistical outliers (more than 2.5 standard deviations greater than those of other faculty; see the screening sketch below), was identified as contributing to this significant interaction. This faculty member was a basic scientist who, having delivered more lectures in more divisions than any other faculty member, garnered a total of 463 evaluations compared with the average of 57 for all faculty. He had also received higher-than-average ratings from students (a mean of 4.1 from the Med-1 class and 4.52 from the Med-2 class, compared with overall mean ratings of 3.88 from the Med-1 students and 3.78 from the Med-2 students). This faculty member’s results seemed to completely explain the two-way interaction between year in medical school (Med-1 versus Med-2) and degree of the lecturer (MD versus PhD); thus, we treated him as a statistical outlier and reanalyzed the data without his results. After reanalysis, the two-way interaction disappeared, and all four independent variables (attendance, year in medical school, faculty degree, and class grade) had significant main effects in the four-way ANOVA (Table 1).
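The outlier screen amounts to a simple z-score check. Below is a minimal Python sketch on synthetic per-lecturer evaluation counts; the numbers mirror those reported above but are not the study data.

import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
# Synthetic per-lecturer evaluation counts (illustrative, not study data):
# most lecturers receive roughly 55 evaluations; one receives 463.
counts = pd.Series(rng.poisson(55, size=220), name="n_evals")
counts.iloc[0] = 463

z = (counts - counts.mean()) / counts.std()
print(counts[z > 2.5])  # flag lecturers more than 2.5 SD above the mean count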


The removal of the data related to the one faculty outlier left 12,092 evaluations (of 220 faculty) available for analysis. Students who attended lectures gave lecturers a mean rating of 3.85, whereas those who did not attend gave a mean rating of 3.80 (P ≤ .05, effect size [ES] = 0.055; see Table 2 and the effect size sketch below). Med-1 students rated faculty significantly higher than did Med-2 students (3.88 versus 3.78, P < .001, ES = 0.11). Taken together, the Med-1 and Med-2 students rated clinicians significantly higher than basic scientists (3.86 versus 3.79, P < .001, ES = 0.077). Our findings also showed that students’ class grades influenced how they evaluated lecturers: Honors students gave lecturers a mean rating of 3.92, LOC students a mean rating of 3.86, and satisfactory students a mean rating of 3.82 (P ≤ .01). Post hoc analysis showed that the Honors students were driving the difference in faculty ratings (Figure 2). Once the faculty outlier was removed, we detected no significant two-way interactions (see Table 1).
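Assuming the reported effect sizes are standardized mean differences (Cohen’s d, consistent with references 13 and 14), here is a minimal sketch of the computation with a pooled standard deviation. The group means and sizes mirror the attendee/nonattendee results above; the spread (0.9) is an assumption.

import numpy as np

def cohens_d(x, y):
    # Cohen's d with a pooled standard deviation.
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1)
                  + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(2)
# Synthetic ratings: means and sizes mirror the attendee (3.85, n = 8,968)
# and nonattendee (3.80, n = 3,124) groups; the SD of 0.9 is assumed.
attended = rng.normal(3.85, 0.9, 8968)
podcast = rng.normal(3.80, 0.9, 3124)
print(round(cohens_d(attended, podcast), 3))  # small by Cohen's benchmarks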


Discussion and Conclusions

As medical curricula change to permit more independence and as technology allows learning to move outside the traditional classroom setting, our data suggest subtle differences in students’ evaluations of their faculty lecturers. Previous studies have indicated that students find value in using video-recorded lecture material to learn.2–4 Although students’ interest in and willingness to use video-recording technology may have ramifications for learning habits, student surveys have not necessarily borne that out.2 Other studies have suggested that, despite the availability of external learning materials, their use has not had a large impact on actual attendance in formal lecture hall venues9,10 or on student academic performance.3,11 Our data appear similar in that the majority of students still attend lectures, though there may be a small decline in attendance during the Med-2 year (data not shown).

We found that students who attended lectures in person rated lecturers significantly higher than those who viewed podcasts of them. This phenomenon occurred regardless of students’ year in medical school, the rated faculty member’s degree, or the student’s class grade or rank. These data suggest that in-person attendance at a lecture has a favorable impact on how a student evaluates lecturers; the direct, in-person interaction may subtly influence the student’s perceptions. This finding has ramifications for how department leaders should assess lecturers and for the role evaluations should play in determining promotion and tenure. For example, two faculty members each rated a “4” on a 5-point scale by students would not be truly equivalent if one was judged primarily by students who listened to a podcast of the lecture and the other by students attending a mandatory lecture in person. In turn, department leaders should not assess these two faculty members in an equivalent manner.

The statistical outlier responsible for the significant interaction between faculty degree background (MD versus PhD) and year in medical school (Med-1 versus Med-2) deserves further elaboration. This individual is unique in that he delivers more lectures than any other faculty member and draws a higher rate of attendance, primarily because of the interactive nature of his lectures. He does not provide students with complete notes in advance of the lecture, as they receive in other sections of the course; instead, they receive pages with the major headings (the topics that will be covered) and plenty of blank space on which they can copy the notes the lecturer produces (and provides via overhead projection) as he lectures. He uses no PowerPoint slides. Rather, his philosophy is that hearing, seeing, and writing the lecture material at the same time contributes to retention and aids student learning. One possible explanation of the observed statistical interaction is that students initially struggle with this lecturer’s methods in the Med-1 year but eventually buy in to his technique during the Med-2 year.

Once we removed the data related to the statistical outlier, we found that students tended to rate clinical faculty (those with MDs) higher than basic scientists (those with PhDs), regardless of whether they attended lectures. This phenomenon may reflect the nature of the content delivered by these lecturers, or it could be a subconscious, or even conscious, display of content favoritism. Previous studies of preclinical medical students’ evaluations of their courses have suggested that basic science material, when well integrated with the clinical sciences, can have a significant, positive impact on student evaluations of lecturers.11 Integrating basic science material in a clinically relevant way that medical students find appealing is challenging for any educator. This inherent challenge, along with our own data, underscores the problems some institutions face in effectively integrating basic science content into the medical school curriculum.

Our results also showed that class grade has a significant impact on how students evaluate faculty, though grade did not interact with attendance. Students who perform better may be more actively engaged in the curriculum, regardless of their attendance at lectures, and they may have a more favorable view of the lecturers themselves. Previous studies have suggested that students who receive higher grades tend to complete their faculty evaluations in a more timely manner and offer more substantive comments about lecturers’ performance.12 This engagement, which students with higher grades presumably have, may be lacking in students whose grades are lower, especially when they are not face-to-face with faculty during the learning process.

Although we detected significant differences in lecturer ratings across student attendance, year in medical school, and class grade, as well as across lecturer degree, this study has some inherent limitations. First among them is our reliance on students to self-report attendance: although the culture of the institution supports a policy of optional attendance, some students may have been reluctant to report their attendance at lectures accurately. The other limitation is that the observed effect sizes were relatively small by Cohen’s standards and must be taken into account when interpreting the findings.13,14 Because of these limitations and the lack of previous research in this particular area of medical education, we are limited in drawing firm conclusions about the relationship between students’ lecture attendance and their ratings of lecturers. Our findings suggest the need for more rigorous research.

The amount of information that students must master during the traditional preclinical years of medical education is substantial and demands significant dedication to independent learning outside the classroom. Given this reality, and the clearly identified need to better integrate classroom learning with clinical exercises and group learning throughout the curriculum, many medical schools across the United States are turning away from the traditional didactic lecture as the primary format of preclinical content delivery. Although the academic medicine community hopes that this evolution will have long-lasting benefits for medical education, our data suggest that it can affect how students think about their educators in the short term. The traditional process for evaluating faculty lecturers may not be valid for these evolving teaching techniques: when faculty are evaluated by students, the students’ attendance at in-person teaching events needs to be taken into account. This accounting may be of particular importance when faculty are being evaluated by promotion and tenure committees. Focusing instead on specific teacher–learner relationships and their effectiveness outside the classroom may prove a better gauge of medical educator quality. Our findings suggest that we medical educators have more work to do in understanding how students and educators interact. As medical education moves toward nontraditional techniques of content delivery, new evaluation methods will need to be developed that account for differences in how students learn from an individual instructor.

Acknowledgments: The authors wish to thank Dr. Richard Fertel for his input, experience, and insight into the manuscript and into the education of medical students at our institution. Dr. Fertel is professor emeritus in the Department of Pharmacology and has granted us permission to identify him as the outlier in the study discussed in detail in the second paragraph of the Results section and the third paragraph of the Conclusions section. Clearly, his impact on medical education here at the Ohio State University College of Medicine continues to be felt, and we are grateful for the work he has done.

Funding/Support: None.

Other disclosures: None.

Ethical approval: The study was evaluated and approved by the Ohio State University Medical Center institutional review board.

Previous presentations: These data were presented in preliminary form as an oral abstract during the 2011 Research in Medical Education (RIME) conference at the Association of American Medical Colleges Annual Meeting in Denver, Colorado, November 4–9, 2011.


References

1. Ruiz JG, Mintzer MJ, Leipzig RM. The impact of E-learning in medical education. Acad Med. 2006;81:207–212.

2. Cardall S, Krupat E, Ulrich M. Live lecture versus video-recorded lecture: Are students voting with their feet? Acad Med. 2008;83:1174–1178.

3. Bacro TR, Gebregziabher M, Fitzharris TP. Evaluation of a lecture recording system in a medical curriculum. Anat Sci Educ. 2010;3:300–308.

4. Franklin DS, Gibson JW, Samuel JC, Teeter WA, Clarkson CW. Use of lecture recordings in medical education. Med Sci Educ. 2011;21:21–28. http://www.iamse.org/artman/publish/article_585.shtml. Accessed March 12, 2013.

5. Danielson J, Bender H, Hassall L, Preast V. The use of lecture capture in light of teaching approach and content type: An institution-wide study. Paper presented at: Association of American Veterinary Medical Colleges annual conference; March 10, 2012; Alexandria, Va. http://www.aavmc.org/data/files/annualconference/2012ppt/danielsonlecturecap.pdf. Accessed March 12, 2013.

6. Remmers HH. To what extent do grades influence student ratings of instructors? J Educ Res. 1930;21:314–316.

7. Wachtel HK. Student evaluation of college teaching effectiveness: A brief review. Assess Eval Higher Educ. 1998;23:191–211.

8. Aleamoni LM. Student rating myths versus research facts from 1924–1998. J Person Eval Educ. 1999;13:153–166. http://library.sau.edu/committee/assess.pdf. Accessed March 12, 2013.

9. Billings-Gagliardi S, Mazor KM. Student decisions about lecture attendance: Do electronic course materials matter? Acad Med. 2007;82(10 suppl):S73–S76.

10. McNulty JA, Hoyt A, Gruener G, et al. An analysis of lecture video utilization in undergraduate medical education: Associations with performance in the courses. BMC Med Educ. 2009;9:6.

11. Woloschuk W, Coderre S, Wright B, McLaughlin K. What factors affect students’ overall ratings of a course? Acad Med. 2011;86:640–643.

12. McNulty JA, Gruener G, Chandrasekhar A, Espiritu B, Hoyt A, Ensminger D. Are online student evaluations of faculty influenced by the timing of evaluations? Adv Physiol Educ. 2010;34:213–216.

13. Ellis PD. What’s a good effect size index for comparing the means of two groups? http://effectsizefaq.com/category/effect-size-calculators/. Accessed March 12, 2013.

14. Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Erlbaum; 1988.

© 2013 by the Association of American Medical Colleges
