An audience response system (ARS) is an electronic classroom communication system in which students use handheld remote devices, called clickers, to respond to questions posed during a computerized slide show. Data that students input with their clickers are compiled by a receiving unit that is installed on the presenter's computer. Data input from students can be displayed in the slide show and saved for analysis. In recent years, use of ARS has become increasingly common in the lecture halls of many undergraduate colleges1–3 as well as in medical education.4–6 An ARS is considered a useful instructional tool for several reasons: to increase interaction between faculty and students, to allow for formative assessments of student knowledge, to maintain students' attention during lectures, and to focus student attention on key points.7–9 A few studies of ARS effects used indirect measures of ARS influence on learning or used student evaluations of teaching as outcome measures.3,10 Several published studies have found that when students were asked about their perceptions of ARS or asked to evaluate their perceived learning, they responded favorably.1,3,11,12 A recent literature review of ARS studies found that 13 possible benefits of using ARS had been documented by various investigational methods.13
To date, few “systematic” studies, in which some form of experimental control was used, have been published9,13; however, in those that have been, ARS use was reported to correlate with improved results.4–6,14 Likewise, few studies have used direct measures of learning, such as test scores or course grades, as an outcome variable.5,15–17 We searched the MEDLINE and ERIC databases but were unable to identify any controlled studies that measured medical student learning as an outcome. This was consistent with a published literature review.13
The study described in this manuscript was planned to remedy the paucity of controlled studies on students' learning performance with ARS use. Additionally, most prior studies of ARS use have focused on baccalaureate-level students,13 with no evidence as to whether the results would generalize to medical students. The study reported here was designed to answer the question of whether students' use of ARS during medical school lectures was associated with improved performance on the corresponding course exam items, in comparison with performance on items from lectures the same students heard without using ARS. This study design isolated the effect of students using ARS clickers and related their clicker use to exam performance.
This study was conducted at one medical school with a cohort of second-year medical students during their required, two-week pulmonology course. The study used student performance on the course exam as the outcome measure and employed a switching replication design.18 This within-subjects design compared each student's performance on a set of exam items based on the lectures when the ARS had been used versus his or her own performance on a different set of exam items corresponding to lectures when ARS had not been used. The exam items used for the outcome measure were developed by the lecturers and were directly tied to the concepts taught during the lectures. The study was approved as exempt research by the IRB of the University of Nebraska Medical Center.
For the sake of the subjects' education, it was critical that all students heard all of the lectures and that no advantage was accorded to either group. Therefore, several faculty members volunteered to deliver their lectures twice: once to 50% of the class using the ARS and a second time to the other 50% of the class without using ARS. A schematic diagram of the study design is displayed in Figure 1. For the sake of efficiency in curriculum delivery, while 50% of the students heard an ARS lecture, the other 50% heard a non-ARS lecture from a different lecturer in a different auditorium. Following this hour of lecture, both groups of students remained in place while the lecturers traded rooms and delivered the same lecture a second time using the opposite format (with ARS or without ARS). For both versions of the lecture, the presenters used the same PowerPoint slides, including posing questions to the class with pauses to contemplate the answers. During the ARS lecture, students used their clickers to register their answers to the questions. At the end of each two-hour block, all students had heard both lectures; one group had used ARS during both lectures, while the other group had heard the same lectures from the same lecturers without using ARS. On the subsequent day, the student groups traded rooms so that the group that had not used ARS now had the opportunity to use it, and the same two-lecture switching process was repeated with a new pair of lecturers. At the end of the 17 lectures in the unit, one group of students had used the ARS for 8 of the lectures and the other group had used ARS for the remaining 9 lectures.
All lectures were podcast, with recordings made in the auditorium in which ARS was used, and print handouts of the lectures, including the ARS slides, were produced. Lecturers were instructed not to preview actual exam questions in the ARS lectures. The questions posed via ARS were related to the lecture objectives but were not identical to those on the exam.19 The only factors on which the lectures differed were whether students were expected to pick up their “clickers” to register answers to the questions posed and whether students could observe how their classmates had answered the ARS questions.
To measure performance, student scores on specific exam items were pooled according to whether the student had heard the corresponding lecture with or without ARS. Thus, every student had an exam score in the ARS pool; however, the actual items from which that score was derived depended on which set of eight or nine lectures that student had heard with ARS. A paired-samples t test was planned to compare each student's score with ARS versus his or her score without ARS. In addition, a sign test on the difference scores was planned. This within-subjects analysis controlled for variations among students in prior knowledge, ability, and motivation.
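As a sketch of the planned analysis (using hypothetical paired scores for illustration only; the actual study data are not reproduced here), the paired-samples t test and the sign test on difference scores could be run as follows:

```python
import numpy as np
from scipy import stats

# Hypothetical percentage scores for 126 students, for illustration only:
# one score on ARS-lecture items and one on non-ARS-lecture items each.
rng = np.random.default_rng(0)
ars = rng.normal(75, 10, size=126)      # scores on items from ARS lectures
non_ars = rng.normal(75, 10, size=126)  # scores on items from non-ARS lectures

# Paired-samples t test: each student serves as his or her own control.
t_stat, p_value = stats.ttest_rel(ars, non_ars)

# Sign test on the difference scores: ties are dropped, and the count of
# positive differences is tested against Binomial(n, 0.5).
diff = ars - non_ars
n_pos = int(np.sum(diff > 0))
n_neg = int(np.sum(diff < 0))
sign_p = stats.binomtest(n_pos, n_pos + n_neg, 0.5).pvalue
```

Because both tests operate on within-student differences, between-student variation in ability and motivation cancels out of the comparison.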
The course was taught to 128 students, of whom 126 consented to participate in this study. Because of some allowances for student convenience, one student group had 65 members while the other had 61, but the within-subjects design meant that the composition of the groups was irrelevant. Although lecture attendance was not required, over the 17 experimental lectures the median attendance was 36 students (approximately 57% of each group). The exam that was used as the outcome measure consisted of 140 total questions, 42 of which were used for the study.
Descriptive statistics and the results of the paired-samples t test are shown in Table 1. The average score on questions for which ARS was used did not differ significantly from the average score when ARS was not used (t = 0.866; df = 125; P = .388). Because the raw average score without ARS was higher than the score with ARS, having students use the ARS clicker during lecture clearly was not associated with improved scores. For confirmation, a sign test was conducted on difference scores, computed by subtracting each student's non-ARS score from his or her ARS score. Forty-seven students had positive difference scores, 54 had negative difference scores, and there were 25 ties. The number of positive difference scores was not statistically different from what would be expected by chance (P = .550), which confirmed that ARS use was not related to higher scores.
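The reported sign test can be checked directly from the counts given: dropping the 25 ties leaves 101 informative difference scores, and an exact two-sided binomial test on 47 positives out of 101 reproduces the reported P value (a sketch using scipy):

```python
from scipy.stats import binomtest

# 47 positive and 54 negative difference scores; the 25 ties are dropped,
# as is conventional for the sign test.
n_pos, n_neg = 47, 54
result = binomtest(n_pos, n_pos + n_neg, 0.5)  # two-sided by default
print(round(result.pvalue, 2))  # prints 0.55, matching the reported P = .550
```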
Given our finding of no statistically significant difference between using ARS and not using ARS, a power analysis (G*Power version 3.0.10, Kiel, Germany) was conducted to determine whether a statistical difference could have been detected, had one existed. With a sample size of 126, α = .05, and power = 0.8, this study could have detected an effect size of dz > 0.22. This means that even a small effect size20 would likely have been detected by this study.
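The minimal detectable effect size quoted above can be approximated with the normal approximation to the paired t test. This is only a sketch: G*Power uses the exact noncentral t distribution, and whether the calculation was one- or two-tailed is an assumption here (the one-tailed approximation reproduces dz ≈ 0.22, while the two-tailed version gives ≈ 0.25):

```python
from math import sqrt
from scipy.stats import norm

n, alpha, power = 126, 0.05, 0.80

# Normal approximation to the minimal detectable paired-t effect size:
# dz ≈ (z_{1-α} + z_{power}) / sqrt(n)   (one-tailed)
# dz ≈ (z_{1-α/2} + z_{power}) / sqrt(n) (two-tailed)
dz_one_tailed = (norm.ppf(1 - alpha) + norm.ppf(power)) / sqrt(n)
dz_two_tailed = (norm.ppf(1 - alpha / 2) + norm.ppf(power)) / sqrt(n)

print(round(dz_one_tailed, 2))  # 0.22, consistent with the reported value
print(round(dz_two_tailed, 2))  # 0.25
```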
Our study found no significant difference in the percentage of correct answers on the exam, regardless of whether students used the ARS. Previous studies have shown conflicting results when scores earned using ARS were compared with scores earned without it.4–6,17 On one hand, ARS use has been associated with retention of lecture material.4,5 On the other hand, one study examining retention found no significant difference in scores related to ARS use after adjustment for gender, region, and specialty.6 Another study found no significant difference in either short-term or long-term knowledge retention based on ARS use.17
Our suspicion was that previous studies which found improved performance with ARS use4,5,15 actually captured the effect of enhancing lectures by adding exam-style questions, which is a necessary component of ARS use, rather than the effect of students using the clickers. Improvements in test scores noted by other researchers may not have been attributable to the ARS at all but rather to the incorporation of questions that encouraged students to wrestle with the content. Employing ARS technology required lecturers to develop relevant questions and to pose them during the lecture. On the basis of our results, we conjecture that simply adding questions to lectures and giving students time to ponder them conferred the same benefit as having students actually submit responses with ARS. In short, using ARS technology did not, by itself, lead to improved student performance.
Despite our efforts to create a controlled experiment, this study had limitations. Chief among them was our inability to guarantee that students attended their assigned sessions, despite having agreed to do so. Likewise, attendance at the lectures was below 100%, so possible benefits of ARS may have been diluted by the number of students who did not attend lectures and use their clickers. Although this limitation was detrimental to our study, it realistically reflects the situation into which ARS would likely be introduced. That is, although increased student performance might yet be attainable with 100% attendance and participation, such a scenario is unlikely at most medical schools.
We advocate the use of ARS in medical school lectures for the learning environment reasons discussed in the introduction. Our results were consistent with other findings that, although ARS use may correlate with an improved educational experience, it produces no measurable increase in learning.14,17 Nonetheless, if using ARS compels lecturers to rewrite their lectures to incorporate questions, then it may be worth the investment; however, our study showed that similar benefits can be realized by the change in pedagogy alone, without the technology.
1 Beekes W. The “millionaire” method for encouraging participation. Active Learning Higher Educ. 2006;7:25–36.
2 Carnevale D. Run a class like a game show: “Clickers” keep students involved. Chronicle High Educ. 2005;51:B3.
3 MacGeorge EL, Homan SR, Dunning JB Jr, et al. Student evaluation of audience response technology in large lecture classes. Educ Technol Res Dev. 2008;56:125–145.
4 Pradhan A, Sparano D, Ananth CV. The influence of an audience response system on knowledge retention: An application to resident education. Am J Obstet Gynecol. 2005;193:1827.
5 Rubio EI, Bassignani MJ, White MA, Brant WE. Effect of an audience response system on resident learning and retention of lecture material. AJR Am J Roentgenol. 2008;190:W319–W322.
6 Miller RG, Ashar BH, Getz KJ. Evaluation of an audience response system for the continuing education of health professionals. J Contin Educ Health Prof. 2003;23:109–115.
7 Kenwright K. Clickers in the classroom. TechTrends. 2009;53:74–77.
8 Robertson LJ. Twelve tips for using a computerized interactive audience response system. Med Teach. 2000;22:237–239.
9 Simpson V, Oliver M. Electronic voting systems for lectures then and now: A comparison of research and practice. Australas J Educ Technol. 2007;23:187–208.
10 Stoddard HA, Piquette CA. The impact of using an audience response system in medical school lectures: A preliminary study on student evaluations of faculty. Paper presented at: AAMC Central Group on Educational Affairs Annual Meeting; Columbus, OH; April 8, 2008.
11 Caldwell JE. Clickers in the large classroom: Current research and best-practice tips. CBE Life Sci Educ. 2007;6:9–20.
13 Kay RH, LeSage A. Examining the benefits and challenges of using audience response systems: A review of the literature. Comput Educ. 2009;53:819–827.
14 Stowell JR, Nelson JM. Benefits of electronic audience response systems on student participation, learning, and emotion. Teach Psychol. 2007;34:253–258.
15 Alexander CJ, Crescini WM, Juskewitch JE, Lachman N, Pawlina W. Assessing the integration of audience response system technology in teaching of anatomical sciences. Anat Sci Educ. 2009;2:160–166.
16 Gauci SA, Dantas AM, Williams DA, Kemm RE. Promoting student-centered active learning in lectures with a personal response system. Adv Physiol Educ. 2009;33:60–71.
17 Plant JD. Incorporating an audience response system into veterinary dermatology lectures: Effect on student knowledge retention and satisfaction. J Vet Med Educ. 2007;34:674–677.
18 Shadish WR, Cook TD, Campbell DT. Experimental and quasi-experimental designs for generalized causal inference. Boston, Mass: Houghton Mifflin; 2002.
19 Beatty ID, Gerace WJ, Leonard WJ, Dufresne RJ. Designing effective questions for classroom response system teaching. Am J Phys. 2006;74:31–39.
20 Cohen J. A power primer. Psychol Bull. 1992;112:155–159.