The Research We Still Are Not Doing: An Agenda for the Study of Computer-Based Learning
Cook, David A. MD
Dr. Cook is assistant professor of medicine, Mayo Clinic College of Medicine, Rochester, Minnesota.
Correspondence should be addressed to Dr. Cook, Baldwin 4-A, Division of General Internal Medicine, Mayo Clinic College of Medicine, 200 First Street SW, Rochester, MN 55905; e-mail: 〈email@example.com〉.
Media-comparative research—that is, the comparison of computer-based learning (CBL) to noncomputer instruction—is logically impossible because there are no valid comparison groups. Results from media-comparative studies are thus confounded and difficult to meaningfully interpret. In 1994, Friedman proposed that such research be supplanted by investigations into CBL designs, usage patterns, assessment methods, and integration. His proposal appears to have largely been ignored. In this article, the author updates the agenda for research in CBL (including Web-based learning).
While media-comparative studies are confounded, CBL-CBL comparisons are often not. CBL instructional designs vary in configuration (e.g., discussion board or tutorial), instructional method (e.g., case-based learning, personalized feedback, or simulation), and presentation (e.g., screen layout, hyperlinks, or multimedia). Comparisons within one level (for example, comparing two instructional methods) facilitate evidence-based improvements, but comparisons between levels are confounded. Additional research questions within the CBL-CBL framework might include: Does adaptation of CBL in response to individual differences such as prior knowledge, computer experience, or learning style improve learning outcomes? Will integrating CBL with everyday clinical practice facilitate learning? How can simulations augment clinical training? And, how can CBL be integrated within and between institutions? In addressing these questions it is important to remember the most important outcome—effect on patients and practice—and outcomes specific to CBL including costs, cognitive structuring, and learning unique to the computer-based environment.
CBL is not a panacea, but holds great promise. Realization of this potential requires that media-comparative studies be replaced by rigorous, theory-guided comparisons of CBL interventions.
Eleven years ago Friedman1 set forth “the research we should be doing” in computer-based learning (CBL). He discussed media-comparative research, which is the comparison of computer-based instructional formats to non-computer-based formats, and argued that it is “logically impossible because there is no true comparison group.” He proposed that media-comparative research should be supplanted by research into different CBL designs, usage patterns, assessment methods, and modes of integration of CBL with traditional instruction. Before him, Clark2,3 and Keane et al.4 had also argued the limitations of media-comparative research, and had suggested that comparisons of one CBL method with another would be more likely to produce meaningful results.
Since that time research reports on CBL, and Web-based learning (WBL) in particular, have continued to accumulate. Yet it appears that the suggestion to discontinue media-comparative research and instead focus on comparisons among alternative CBL methods has largely been ignored. A recent report5 quantified research studies in CBL and found that only 1% of those studies compared one CBL format to another. A review of WBL research6 found 35 evaluative studies, eight of which compared Web-based interventions to interventions using other media, while none compared different CBL formats. It appears that Santayana's prediction has proven true: “Those who cannot remember the past are condemned to repeat it.”7
In this article, I reiterate the argument that media-comparative research is futile, and propose in its place a novel framework for research into computer-based instructional design. Comparisons of one CBL format to another are important, and if properly designed and conducted will produce results that can be applied (generalized) to other settings and thus enable the most effective use of this powerful teaching tool.
Confounding in Media-Comparative Research
Media-comparative research seeks to make comparisons between different media formats such as paper, computer, and face-to-face. At first glance, it seems natural when evaluating a new educational intervention to compare it to another method to establish superiority or at least equivalence. Thus, comparing a new CBL program to, say, the old paper syllabus seems to be a good idea. After all, this appears to be what we do in clinical and biomedical research when we evaluate a new intervention. However, although the clinical research paradigm does apply to education research, most media-comparative research does not appropriately fall into this rubric. Why is this so?
The clinical researcher rarely looks at multifactorial interventions but instead exposes patients to a tightly controlled set of interventions. This is not what happens with media-comparative research. Such research would require, as Clark put it, “a uniform medium such as ‘computer’ which can be compared with some other uniform medium such as ‘teacher.’”2 Such uniformity does not exist.
Consider, for example, the comparison of a Web-based program to a paper equivalent. Even if content is “identical” (which is rarely the case), the comparison is “confounded by the uncertain but undoubtedly unequal impacts … of other, potentially critical, features or components.”4 For example, most WBL programs contain hyperlinks—a feature the paper version obviously lacks. But hyperlinks are not the only difference between the two media. Factors such as logging onto the site, reading from the screen, scrolling, and color illustrations are also features of the computer-based intervention. More often, additional differences are present such as multimedia, interactive models, and sophisticated simulations. Comparisons of this type are analogous to comparing treatment of myocardial infarction with aspirin and nitroglycerin to treatment with low-molecular-weight heparin, primary angioplasty, beta-blockade, angiotensin-converting enzyme inhibitor, HMG-CoA-reductase inhibitor, clopidogrel, and folate—all at once. Even if a significant result is found in such an investigation, little is known about which therapies contributed. Such investigations cannot be reliably generalized—taken from the context in which they were studied and applied to a new setting—because the multifactorial intervention cannot be replicated precisely, and implementation of only a subset of factors may or may not be effective.
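The attribution problem described above can be made concrete with a toy simulation (my illustration, not part of the original argument; the feature names and effect sizes are invented for the sketch). When several features differ between the arms at once, the study recovers only their combined effect on the outcome; nothing in the data apportions it among the bundled features.

```python
import random

random.seed(0)

# Hypothetical per-feature effects on a 100-point test score.
# These values are invented; a real analyst never observes them directly.
FEATURE_EFFECTS = {"hyperlinks": 2.0, "multimedia": 3.0, "interactivity": 1.0}

def simulate_scores(n, extra, base=70.0, sd=5.0):
    """Draw n learner scores around base + extra with Gaussian noise."""
    return [random.gauss(base + extra, sd) for _ in range(n)]

n = 1000
paper_scores = simulate_scores(n, extra=0.0)
# The CBL arm bundles every feature: the arms differ by the SUM of effects.
cbl_scores = simulate_scores(n, extra=sum(FEATURE_EFFECTS.values()))

# The comparison identifies only the aggregate difference (about 6 points);
# the separate contributions of hyperlinks, multimedia, and interactivity
# are indistinguishable without varying them one at a time.
observed_diff = sum(cbl_scores) / n - sum(paper_scores) / n
print(f"observed difference: {observed_diff:.1f}")
```

Varying a single feature between otherwise identical arms, as the CBL-CBL framework proposes, is what restores the ability to attribute the effect.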
Ambiguity and unexplained variance limit the interpretation of results in any experimental study. How much of the observed effect is attributable to the intervention? And do I know precisely what the intervention was? Some variance is random. Another source of variance in both clinical and educational experiments is the study's participants. The difficulties with variation in participants (including unequal enthusiasm for the interventions) in large comparative trials in medical education have been discussed elsewhere.8 Bias in the outcome measure is a third potential source of variance in education research.1 Although participants and outcomes can potentially dominate the effects of the educational intervention, they can be addressed in the study design.
In contrast, even the most careful study design cannot account for variance within and among interventions using different media. There are simply too many influential factors to allow the definition of an appropriate control intervention. Examples of media-comparative research in CBL abound,5 and I have been guilty of reporting such comparisons.9 However, I have come to agree with previous proposals that media-comparative research be abandoned. What, then, should be the direction of further CBL research?
Norman8 proposes two factors that distinguish the kind of medical education research that is likely to provide meaningful results. First, the setting is tightly controlled. CBL research has an advantage here over many other educational interventions. The computer presents a remarkably controlled environment. Barring technical complications, the computer won't have bad days and won't play favorites, but instead will consistently deliver the same message and response to every learner.
Norman's second suggestion is that the “various factors that contribute to a result are systematically varied … based on a theory of causation.”8 The first part of this suggestion—systematic variation in the intervention, or reductionism—will constitute much of the remainder of this essay. In short, I agree with Norman that the most meaningful advances in education will arise from “careful, theory-guided, experimental research”8 and with Friedman that “media-comparative [studies] are not the most important studies to be doing at this time.”1 What studies should be done in CBL? To answer this question I will first present a framework for systematic, meaningful comparisons of computer-based instructional design, and then outline additional interventions and outcomes that merit rigorous CBL-CBL research.
Meaningful Comparisons in Computer-Based Instructional Design
There is a difference between the medium (computer, paper, face-to-face) and the features of an intervention (font, hyperlinks, homework questions, etc.) within a given medium. While comparisons among media are confounded (results are susceptible to multiple interpretations), the same is not necessarily true of CBL-CBL comparisons that evaluate specific features. To this end, there appear to be three variables (other than the medium) relevant to the design and research of CBL programs: configuration, instructional method, and presentation. These constitute a hierarchy of changes that differentiate one intervention from another (see Table 1) and will be discussed in detail below. There is a natural order or nesting (see Figure 1), such that defining an intervention at an earlier level (to the left in Figure 1) will constrain the implementation and options at later levels. Note, however, that the levels of this hierarchy have nothing to do with the relative importance of the design components.
To avoid confounding, research is best done within rather than between levels in this hierarchy. This point will be elaborated below.
Levels of instructional design in computer-based learning
Differences in configuration.
Configuration denotes the “big picture” differences within a given media format. For face-to-face teaching, for example, configurations include lecture, small-group discussion, problem-based learning, and bedside teaching. CBL configurations often parallel those of face-to-face settings. For example, CD-ROM-based or Web-page-based tutorials and PowerPoint slide shows are similar to face-to-face lectures. Discussion-based teaching uses synchronous or asynchronous communication between learners as the primary teaching modality, analogous to a face-to-face small group. “Just in time learning”10 (discussed in greater detail below) is a developing and potentially powerful WBL configuration that provides instruction at critical points in a clinical encounter. Note that a given educational intervention might incorporate elements of more than one configuration. Comparison of configuration is exemplified by a recent study11 comparing two WBL environments.
Differences in instructional method.
Instructional methods (also called instructional strategies) are techniques that support learning processes. Gagne et al.'s nine instructional events12 provide one framework for this level: attention-getting activities, objectives, activation of prior learning, presentation of distinctive features, learning guidance, performance, feedback, assessment, and enhancing transfer. Other learning models such as Merrill's first principles13 (activation, demonstration, application, and integration in the setting of real-world problems) and the cognitive flexibility theory described by Spiro et al.14 are particularly relevant to CBL. Specific instructional methods include questions, cases, simulations, interactive models, analogies, group discussion and activities, construction of databases or concept maps, and feedback. With some creativity, each of these methods could be applied—alone or in combination—to most media or configurations. Effective instructional methods are the active ingredient in any learning activity, and the significant results in much media-comparative research may be more appropriately credited to differences in instructional method than to attributes of the medium.3 Examples of comparisons of instructional method include narrative versus problem-solving simulation formats,15 interactive versus linear videodisc,16 provision of an overview versus exploring in depth first,17 and presence or absence of a tool to facilitate metacognition.18
Differences in the presentation.
Presentation encompasses elements that enhance the delivery of a given intervention, whatever its configuration or instructional method. For CBL and WBL, presentation variables include font, hyperlinks (number and type), multimedia (illustrations, radiographs, audio, video), simulation fidelity, and other means of interactivity to enhance learning. Although the distinction between instructional method and presentation is at times blurred, it is usually possible to distinguish whether the variable represents a primary teaching method (instructional method) or one of several possible enhancements that make the method more effective (presentation). For example, simulation is an instructional method, but high-fidelity simulation could be considered an enhancement of low-fidelity simulation. Research exploring presentation has varied the degree of the learner's control of anatomy illustrations,19 the visual display of hemodynamic data,20 use of audio narration,21 and the type of navigational support.22
There are other differences in CBL and WBL, such as the computer operating system (e.g., Windows, Macintosh, UNIX) and the programming/markup language (e.g., HTML, XML, Perl, etc.). While these differences are certainly important and merit investigation, from a teacher–student perspective the meaningful differences are subsumed by the categories above. For example, in comparing one Web programming language to another the technical differences (which can be significant) are only relevant to the learner insofar as they have bearing on the configuration, instructional methods, or presentation of the course. Such technical considerations are beyond the scope of this essay.
Why research should be conducted within, not across, levels of instructional design
Research should be conducted primarily within, rather than across, levels of instructional design. Doing otherwise leaves the reader unable to distinguish whether it was the configuration, the instructional method, or the presentation that had the effect. For example, a comparison of two instructional methods (e.g., case-based versus non-case-based questions) would likely yield meaningful and generalizable results. However, a comparison across levels (e.g., a discussion board using case-based learning versus Web-page-based interactive models) would be limited by confounding (Was it the discussion board, or the case-based questions, that made a difference?). As with comparisons of media, it will be difficult to control comparisons within the configuration level. For example, when comparing a discussion board to a Web-page-based tutorial, it may prove challenging to account for all the differences in instructional methods and presentation, as illustrated by a recent study.11 Although configuration-comparative research may yield important information, perhaps the greatest utility in identifying configuration as a distinct level will be to avoid the confounded comparison described above.
There may be exceptions to the rule against complex comparisons. As in clinical research, where an effectiveness study demonstrates the clinical utility of an intervention shown to be efficacious in an efficacy study,23 it may at times be appropriate to investigate multifactorial interventions. However, such studies will have limited generalizability unless they recognize and carefully address the challenges noted above.
Other Topics for Computer-Based Learning Research
Several additional research themes warrant rigorous study under the CBL-CBL comparative paradigm. Within the context of instructional design, adaptation to individual learners, just-in-time learning, and simulation present singular challenges and opportunities. Research questions for each of these themes could draw upon virtually any combination of configuration, instructional method, or presentation, and would lend themselves to comparative studies using the framework discussed above. Comparative studies could also investigate integration of CBL within and between institutions. Regardless of the intervention, outcomes in CBL research should be carefully considered. These themes are discussed below and in Table 2.
Adaptation to differences in individual learners has been proposed as a way to improve WBL,24–27 and many of the arguments apply to CBL in general. In face-to-face teaching, effective teachers adapt to accommodate the various needs of individual learners. In contrast, traditional computer-based instruction presents the same material to every learner regardless of individual learning needs. By imitating the effective human teacher, computer systems that adapt to individual differences could enhance learning.28 In fact, given the diversity of the potential audience and the fact that the learner in most Web-based settings works alone, adaptation may be imperative to realize the full promise of WBL.29 In considering adaptation to individual differences, the aptitude–treatment interaction30 is critical (see Figure 2).
“Just-in-time” learning10 involves computerized delivery of educational material at critical stages in a clinical encounter. While the information satisfies an immediate need, the clinical context may also facilitate deep and enduring learning by capitalizing on the learner's motivation31 and facilitating effective cognitive structuring.32 Evidence on clinical-decision support tools is substantial,33,34 but less is known about knowledge retention once decision support is withdrawn or the impact of interventions intended to promote learning rather than influence immediate behavior.
Computers play an ever-increasing role in medical simulation.35 Potential advantages of simulation training include opportunities for learner control of the training agenda, repeated “deliberate practice” in a safe environment, objective assessment, and immediate feedback.36–38 Yet evidence supporting the use of these tools is scant and often lags far behind the technology, and there are some who fear that fascination with technology may outstrip actual learning gains.38,39
CBL must be integrated with other systems and curricular components. Since CBL is not an end in itself, how and when to use this tool are as important as the optimization of the tool itself. Friedman noted, “The thinking about how to integrate computer technology into medical school instruction is less mature than the thinking about how to design computer-based instruction itself,” and he proposed “a line of research that would explicitly compare different modes of integration.”1 More recently, integration has been identified as an area meriting continued research in both curriculum development and individual learning settings.5 As multi-institution initiatives40,41 continue to develop, interinstitutional issues will also need to be addressed.
Across the spectrum of CBL research, selection of outcomes is a critical issue. The predominant outcomes in current use—satisfaction, self-efficacy, and knowledge/performance—are only surrogates for the outcomes of real interest: physician performance and patient outcomes.42–44 Furthermore, the benefits of a given instructional design may differ for different learning outcomes.24 Additional outcomes to consider, some of them unique to CBL, are presented in Table 2.
A Comment on Uncontrolled Studies in CBL Research
Most publications on CBL and WBL have no control group.5 As the education equivalent of the clinical case report, most of these studies raise hypotheses but provide few definitive answers. Such research is valuable in the early stages of an innovation. However, as a discipline matures, such research should be replaced by studies that ask and answer specific research questions. I suggest that the time has come for hypothesis-driven comparative research in CBL and WBL.
In proposing this, I acknowledge the potential contribution of qualitative research. In contrast to most of the descriptive reports prevalent in the literature, studies employing rigorous qualitative methods can shed light on the complex pedagogical, technical, and organizational aspects of CBL and uncover truths applicable to other settings.45–48 Such research49–51 complements the comparative research paradigm presented in this article.
CBL is not a panacea. Aspirin does not cure all ailments, and CBL does not cure all educational problems. It will not work equally well in all settings, and with current technology it is likely suboptimal in many contexts. Rather, CBL is a powerful tool, to be used with wisdom and judgment to enhance the learning process. Instead of deciding to use CBL and then working to fit it into the curriculum, educators should define instructional objectives first and use CBL only when it appears to be the most effective means of achieving them. Research should focus on when to use CBL, and how to use it most effectively once the decision has been made.
The interpretation of most existing research in CBL is limited by lack of an adequate control group. But even well-controlled media-comparative research will always be difficult to generalize because observed effects cannot confidently be ascribed to any one variable. In contrast, CBL-CBL comparisons of instructional design, including configuration, instructional method, and presentation, are more likely to yield meaningful results. Studies employing systematic variations within each of these levels will advance the science of CBL. Within the CBL-CBL framework special attention should be paid to factors such as adaptation to individual differences, just-in-time learning, simulation, and integration within and between institutions, while assessing meaningful outcomes. Such investigations will help to realize and refine the role of computers in medical education.
The author thanks D. M. Dupras and T. J. Beckman for their critical review of the manuscript.
Note: References 52–80 are cited in Table 2 only.
1 Friedman C. The research we should be doing. Acad Med. 1994;69:455–7.
2 Clark R. Confounding in educational computing research. J Educ Comput Res. 1985;1:28–42.
3 Clark R. Dangers in the evaluation of instructional media. Acad Med. 1992;67:819–20.
4 Keane D, Norman G, Vickers J. The inadequacy of recent research on computer-assisted instruction. Acad Med. 1991;66:444–8.
5 Adler MD, Johnson KB. Quantifying the literature of computer-aided instruction in medical education. Acad Med. 2000;75:1025–8.
6 Chumley-Jones HS, Dobbie A, Alford CL. Web-based learning: sound educational method or hype? A review of the evaluation literature. Acad Med. 2002;77(10 suppl):S86–93.
7 Santayana G. In: Hirsch ED Jr, Kett JF, Trefil J (eds). The New Dictionary of Cultural Literacy. 3rd ed. Boston: Houghton Mifflin Company, 2002. Available online at 〈http://www.bartleby.com/59〉. Accessed 14 April 2005.
8 Norman G. RCT = results confounded and trivial: the perils of grand educational experiments. Med Educ. 2003;37:582–4.
9 Cook DA, Dupras DM, Thompson WG, Pankratz VS. Web-based learning in resident continuity clinics: a randomized, controlled trial. Acad Med. 2005;80:90–7.
10 Chueh H, Barnett GO. “Just-in-time” clinical information. Acad Med. 1997;72:512–7.
11 Brunetaud JM, Leroy N, Pelayo S, et al. Comparative assessment of two interfaces for delivering a multimedia medical course in the French-speaking Virtual Medical University (UMVF). Stud Health Technol Inform. 2003;95:738–43.
12 Gagne RM, Briggs LJ, Wager WW. Principles of Instructional Design. 4th ed. Belmont, CA: Wadsworth/Thompson Learning, 1992.
13 Merrill MD. First principles of instruction. Educ Technol Res Dev. 2002;50(3):43–59.
14 Spiro RJ, Coulson RJ, Feltovich PJ, Anderson DK. Cognitive Flexibility Theory: Advanced Knowledge Acquisition in Ill-structured Domains. Center for the Study of Reading Technical Report. Champaign, IL: University of Illinois at Urbana-Champaign, 1988.
15 Bearman M, Cesnik B, Liddell M. Random comparison of “virtual patient” models in the context of teaching clinical communication skills. Med Educ. 2001;35:824–32.
16 Yoder ME. Preferred learning style and educational technology: linear vs interactive video. Nurs Health Care. 1994;15:128–32.
17 Ford N, Chen SY. Matching/mismatching revisited: an empirical study of learning and teaching styles. Br J Educ Technol. 2001;32:5–22.
18 Hsu TE, Frederick FJ, Chung ML. Effects of learner cognitive styles and metacognitive tools on information acquisition paths and learning in hyperspace environments. Paper presented at the National Convention of the Association for Educational Communications and Technology, Nashville, TN, February 16–20, 1994.
19 Garg AX, Norman GR, Eva KW, Spero L, Sharan S. Is there any real virtue of virtual reality? The minor role of multiple orientations in learning anatomy from computers. Acad Med. 2002;77(10 suppl):S97–9.
20 DiBartola LM, Miller MK, Turley CL. Do learning style and learning environment affect learning outcome? J Allied Health. 2001;30:112–5.
21 Spickard A, Smithers J, Cordray D, Gigante J, Wofford JL. A randomised trial of an online lecture with and without audio. Med Educ. 2004;38:787–90.
22 Triantafillou E, Pomportsis A, Demetriadis S, Georgiadou E. The value of adaptivity based on cognitive style: an empirical study. Br J Educ Technol. 2004;35:95–106.
23 Hulley S, Cummings S, Browner W, Grady D, Hearst N, Newman T. Designing Clinical Research: An Epidemiologic Approach. 2nd ed. Philadelphia: Lippincott Williams & Wilkins, 2001.
24 Dillon A, Gabbard RB. Hypermedia as an educational technology: a review of the quantitative research literature on learner comprehension, control, and style. Rev Educ Res. 1998;68:322–49.
25 Chen C, Czerwinski M, Macredie RD. Individual differences in virtual environments—introduction and overview. J Am Soc Inf Sci. 2000;51:499–507.
26 Merrill MD. Instructional strategies and learning styles: which takes precedence? In: Reiser R, Dempsey JV (eds). Trends and Issues in Instructional Design and Technology. Upper Saddle River, NJ: Merrill/Prentice Hall, 2002.
27 Cook DA. Learning and cognitive styles in Web-based learning: theory, evidence, and application. Acad Med. 2005;80:266–78.
28 Chen SY, Paul RJ. Editorial: Individual differences in Web-based instruction: an overview. Br J Educ Technol. 2003;34:385–92.
29 Brusilovsky P. Adaptive educational systems on the World-Wide-Web: a review of available technologies. Paper presented at the Fourth International Conference in Intelligent Tutoring Systems, San Antonio, TX, August 16–19, 1998.
30 Jonassen DH, Grabowski B. Handbook of Individual Differences, Learning, and Instruction. Hillsdale, NJ: Lawrence Erlbaum Assoc, 1993.
31 Wlodkowski RJ. Strategies to enhance adult motivation to learn. In: Galbraith MW (ed). Adult Learning Methods: A Guide to Effective Instruction. 2nd ed. Malabar, FL: Krieger, 1998:91–111.
32 Shatzer J. Instructional methods. Acad Med. 1998;73(9 suppl):S38–45.
33 Hunt DL, Haynes RB, Hanna SE, Smith K. Effects of computer-based clinical decision support systems on physician performance and patient outcomes: a systematic review. JAMA. 1998;280:1339–46.
34 Bates DW, Kuperman GJ, Wang S, et al. Ten commandments for effective clinical decision support: making the practice of evidence-based medicine a reality. J Am Med Inform Assoc. 2003;10:523–30.
35 Issenberg SB, McGaghie WC, Hart IR, et al. Simulation technology for health care professional skills training and assessment. JAMA. 1999;282:861–6.
36 Friedman C. Anatomy of the clinical simulation. Acad Med. 1995;70:205–9.
37 Ziv A, Wolpe PR, Small SD, Glick S. Simulation-based medical education: an ethical imperative. Acad Med. 2003;78:783–8.
38 Kneebone R. Simulation in surgical training: educational issues and practical implications. Med Educ. 2003;37:267–77.
39 Norman GR. Simulation—saviour or Satan? Adv Health Sci Educ. 2003;8:1–3.
40 Harden R, Hart I. An international virtual medical school (IVIMEDS): the future of medical education? Med Teach. 2002;24:261–7.
41 Sisson SD, Hughes MT, Levine D, Brancati FL. Effect of an Internet-based curriculum on postgraduate education: a multicenter intervention. J Gen Intern Med. 2004;19:505–9.
42 Whitcomb ME. Research in medical education: what do we know about the link between what doctors are taught and what they do? Acad Med. 2002;77:1067–8.
43 Prystowsky JB, Bordage G. An outcomes research perspective on medical education: the predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–6.
44 Chen FM, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–60.
45 Owston RD. Evaluating Web-based learning environments: strategies and insights. Cyberpsychol Behav. 2000;3:79–87.
46 Lederman NG. What works: a commentary on the nature of scientific research. Contemp Issues Technol Teach Educ. 2003;3(1):4–10.
47 Bradley P, Postlethwaite K. Simulation in clinical learning. Med Educ. 2003;37:1–5.
48 Savenye WC, Robinson RS. Qualitative research issues and methods: an introduction for educational technologists. In: Jonassen DH (ed). Handbook of Research on Educational Communications and Technology. 2nd ed. Mahwah, NJ: Lawrence Erlbaum, 2004:1045–71.
49 Kneebone R, ApSimon D. Surgical skills training: simulation and multimedia combined. Med Educ. 2001;35:909–15.
50 Steele DJ, Johnson Palensky JE, Lynch TG, Lacy NL, Duffy SW. Learning preferences, computer attitudes, and student evaluation of computerised instruction. Med Educ. 2002;36:225–32.
51 Bearman M. Is virtual the same as real? Medical students’ experiences of a virtual patient. Acad Med. 2003;78:538–45.
52 Brusilovsky P. Adaptive navigation support in educational hypermedia: the role of student knowledge level and the case for meta-adaptation. Br J Educ Technol. 2003;34:487–97.
53 Specht M, Kobsa A. Interaction of domain expertise and interface design in adaptive educational hypermedia. Paper presented at the Workshop on Adaptive Systems and User Modeling on the World Wide Web, Eighth International World Wide Web Conference, Toronto, Canada, May 1999.
54 Weibelzahl S, Weber G. Adapting to prior knowledge of learners. Paper presented at the Second International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems, Malaga, Spain, 2002.
55 Abouserie R, Moss D. Cognitive style, gender, attitude toward computer-assisted learning and academic achievement. Educ Stud. 1992;18:151–60.
56 Lynch TG, Steele DJ, Johnson Palensky JE, Lacy NL, Duffy SW. Learning preferences, computer attitudes, and test performance with computer-aided instruction. Am J Surg. 2001;181:368–71.
57 Lieberman G, Abramson R, Volkan K, McArdle PJ. Tutor versus computer: a prospective comparison of interactive tutorial and computer-assisted instruction in radiology education. Acad Radiol. 2002;9:40–9.
58 Billings DM, Cobb KL. Effects of learning style preferences, attitude and GPA on learner achievement using computer assisted interactive videodisc instruction. J Comput Based Instruct. 1992;19:12–6.
59 Ford N, Chen SY. Individual differences, hypermedia navigation, and learning: an empirical study. J Educ Multimedia Hypermedia. 2000;9:281–311.
60 Eklund J, Sinclair K. An empirical appraisal of the effectiveness of adaptive interfaces for instructional systems. Educ Technol Soc. 2000;3(4):165–77.
61 Riding R, Cheema I. Cognitive styles: an overview and integration. Educ Psychol. 1991;11(3/4):193–215.
62 Kolb D. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall, 1984.
63 Leung GM, Johnston JM, Tin KY, et al. Randomised controlled trial of clinical decision support tools to improve learning of evidence based medicine in medical students. BMJ. 2003;327:1090.
64 Tsai TL, Fridsma DB, Gatti G. Computer decision support as a source of interpretation error: the case of electrocardiograms. J Am Med Inform Assoc. 2003;10:478–83.
65 Bordage G. Elaborated knowledge: a key to successful diagnostic thinking. Acad Med. 1994;69:883–5.
66 Maran NJ, Glavin RJ. Low- to high-fidelity simulation: a continuum of medical education? Med Educ. 2003;37:22–8.
67 Schuwirth LWT, van der Vleuten CPM. The use of clinical simulations in assessment. Med Educ. 2003;37:65–71.
68 Koschmann T. Medical education and computer literacy: learning about, through, and with computers. Acad Med. 1995;70:818–21.
69 Cartwright CA, Korsen N, Urbach LE. Teaching the teachers: helping faculty in a family practice residency improve their informatics skills. Acad Med. 2002;77:385–91.
70 Davis MH, Harden RM. E is for everything—e-learning? Med Teach. 2001;23:441–4.
71 Nowacek G, Friedman C. Issues and challenges in the design of curriculum information systems. Acad Med. 1995;70:1096–100.
72 Candler CS, Andrews MD. Avoiding the great train wreck: standardizing the architecture for online curricula. Acad Med. 1999;74:1091–5.
73 Kaplan B, Brennan PF, Dowling AF, Friedman CP, Peel V. Toward an informatics research agenda: key people and organizational issues. J Am Med Inform Assoc. 2001;8:235–41.
74 Berner ES, McGowan JJ, Hardin JM, Spooner SA, Raszka WV, Berkow RL. A model for assessing information retrieval and application skills of medical students. Acad Med. 2002;77:547–51.
75 Ramnarayan P, Kapoor RR, Coren M, et al. Measuring the impact of diagnostic decision support on the quality of clinical decision making: development of a reliable and valid composite score. J Am Med Inform Assoc. 2003;10:563–72.
76 Downs S, Marasigan F, Abraham V, Wildemuth B, Friedman C. Scoring performance on computer-based patient simulations: beyond value of information. In: Proceedings of the Annual Symposium of the American Medical Informatics Association, Washington, DC, 1999.
77 Bransford J, Brown A, Cocking R. How People Learn: Brain, Mind, Experience, and School. Washington, DC: National Academy Press, 2000.
78 Jonassen DH, Wang SR. Acquiring structural knowledge from semantically structured hypertext. J Comput Based Instruct. 1993;20(1):1–8.
79 Jonassen DH, Reeves TC. Learning with technology: using computers as cognitive tools. In: Jonassen DH (ed). Handbook of Research for Educational Communications and Technology. New York: Simon and Schuster Macmillan, 1996:693–719.
80 Kamin C, O'Sullivan P, Deterding R, Younger M. A comparison of critical thinking in groups of third-year medical students in text, video, and virtual PBL case modalities. Acad Med. 2003;78:204–11.
*Confounding is present when multiple factors simultaneously influence the dependent variable, resulting in outcomes that can be interpreted in more than one way. As applied to media-comparative research, Clark stated, “Studies are often vulnerable to rival hypotheses that learning gains resulted from different instructional methods, content, or from student enthusiasm for a novel medium, not from the computer per se.”2
© 2005 Association of American Medical Colleges