Integration Strategies for Using Virtual Patients in Clinical Clerkships

Berman, Norman, MD; Fall, Leslie H., MD; Smith, Sherilyn, MD; Levine, David A., MD; Maloney, Christopher G., MD, PhD; Potts, Michael, MD; Siegel, Benjamin, MD; Foster-Johnson, Lynn, PhD

doi: 10.1097/ACM.0b013e3181a8c668
Simulation

Purpose To explore students’ perceptions of virtual patient use in the clinical clerkship and develop a framework to evaluate effects of different integration strategies on students’ satisfaction and perceptions of learning effectiveness with this innovation.

Method A prospective, multiinstitutional study was conducted at six schools’ pediatric clerkships to assess the impact of integrating Web-based virtual patient cases on students’ perceptions of their learning during 2004–2005 and 2005–2006. Integration strategies were designed to meet the needs of each school, and integration was scored for components of virtual patient use and elimination of other teaching methodologies. A student survey was developed, validated, and administered at the end of the clerkship to 611 students. Data were analyzed using confirmatory factor analysis and structural equation modeling.

Results A total of 545 students (89%) completed the survey. Overall student satisfaction with the virtual patients was high; students reported that the virtual patients were more effective than traditional methods. The structural model demonstrated that elimination of other teaching methodologies was directly associated with perceived effectiveness of the integration strategies. A higher use score had a significant negative effect on perceived integration but a positive effect on perceived knowledge and skills gain. Students’ positive perceptions of integration directly affected their satisfaction and perception of the effectiveness of their learning.

Conclusions Integration strategies balancing the use of virtual patients with elimination of some other requirements were significantly associated with students’ satisfaction and their perceptions of improved knowledge and skills.

Dr. Berman is associate professor, Department of Pediatrics, Dartmouth Medical School, Hanover, New Hampshire.

Dr. Fall is associate professor, Department of Pediatrics, Dartmouth Medical School, Hanover, New Hampshire.

Dr. Smith is associate professor, Department of Pediatrics, University of Washington School of Medicine, Seattle, Washington.

Dr. Levine is associate professor, Department of Pediatrics, Morehouse School of Medicine, Atlanta, Georgia.

Dr. Maloney is associate professor, Department of Pediatrics, University of Utah School of Medicine, Salt Lake City, Utah.

Dr. Potts is associate professor, Department of Pediatrics, University of Illinois College of Medicine at Rockford, Rockford, Illinois.

Dr. Siegel is professor of pediatrics and psychiatry, Boston University School of Medicine, Boston, Massachusetts.

Dr. Foster-Johnson is research scholar, Tuck School of Business at Dartmouth College, and adjunct assistant professor, Department of Psychological and Brain Sciences, Dartmouth College, Lebanon, New Hampshire.


Correspondence should be addressed to Dr. Berman, Department of Pediatrics, Dartmouth Medical School, One Medical Center Drive, Lebanon, NH 03756; telephone: (603) 653-9888; fax: (603) 650-0909; e-mail: norman.berman@dartmouth.edu.

The last several decades have seen an explosion of technological advances useful in medical education settings. Computer-assisted instruction (CAI) methods offer several potential advantages, including efficiency,1 consistency of instruction, easy accessibility, interactivity, immediate feedback, and personalized learning,2,3 as well as the potential relief of the teaching burden for busy faculty while ensuring students’ consistent exposure to important clinical problems.4,5

Adoption of CAI in curricula has been limited to date. When new CAI modules are merely disseminated to students or implemented in a cursory way as an “add-on,” students’ acceptance, use, and satisfaction are inconsistent.6 Integration of CAI as an important and “integral” part of the regular curriculum improves students’ use and acceptance,2 and students’ acceptance is a necessary precursor to broader use of CAI. A few reports have described the integration of CAI programs at the authors’ own institutions,4,7 yet these case reports offer little guidance on the ultimate challenge of using CAI in a clinical clerkship at another institution with different instructional needs. Little is known about the general principles that underlie a successful integration strategy or about how best to introduce CAI into existing courses.8

Calls for studies of effective CAI integration strategies have been made since the early 1990s,9 yet more than 10 years later little progress has been made in studying these methods.10 In 2007, the Association of American Medical Colleges (AAMC) renewed the call with a different target for research on instructional technology. The question of whether to use CAI as a teaching method has been supplanted by a research agenda that seeks to discover how best to integrate educational technology into existing curricula and educational settings.11 For narrowly targeted, topic-focused CAI tools used in single institutions, these questions are extremely difficult to study.

In 2003, the Council on Medical Student Education in Pediatrics (COMSEP) launched a unique CAI effort: the Computer-assisted Learning In Pediatrics Program (CLIPP). CLIPP was developed based on the published COMSEP pediatric clerkship curriculum guidelines12 and by drawing on multiple authors at multiple institutions. The CLIPP cases consist of 31 virtual patients representing common pediatric conditions; the cases expose participating students to more than 90% of the defined COMSEP curriculum objectives.5 Currently, CLIPP virtual patients are used in 115 medical schools, and more than 225,000 case sessions are completed per year. Wide acceptance and robust use of this comprehensive program provided the opportunity to study the various methods used to integrate CLIPP into existing clerkship teaching at different schools and to examine the effect of integration strategies on the acceptance, effectiveness, and utility of CAI in clinical education as called for by the AAMC.

We hypothesized that as integration increased, students would be more satisfied with CLIPP and would perceive that CLIPP was more effective for their learning. In addition, we anticipated that some of the effects of virtual patient integration on student outcomes would be transmitted indirectly, through students’ perceptions of how effectively the virtual patients were integrated.


Method

Study design and sample selection

The CLIPP Working Group Integration Study was a prospectively designed, hypothesis-driven, multiinstitutional study that we, the members of the working group, conducted during the 2004–2005 academic year. We recruited six U.S. medical schools* (our home institutions) to participate in the study to achieve a diverse representation of medical school size, student demographics, and geography. The sample of students participating in this investigation included all third-year medical students at each school during their pediatric clerkships. We informed students of the study and obtained their consent during the clerkship orientation. The institutional review board (IRB) of each institution approved the study and the student survey instrument (see Table 1).

Table 1


Measurement instruments and variables

This study incorporated two measures from different sources—school-level measures of CAI integration and student-level measures of satisfaction and learning effectiveness, each discussed below.


CAI integration measure.

A range of integration factors was suggested, based on the published literature on CAI integration2,3,7 and on our experiences with CLIPP integration in prior years. Additional input on the integration options was provided by the CLIPP Working Group participants, and consensus was reached within the group on the integration factors to be considered. Integration plans included a variety of ways of incorporating the CLIPP cases in the clerkship (i.e., use), in addition to elimination or reduction of existing traditional components of the curriculum to “make room” for the new CAI curricular approaches (i.e., elimination). Each school in the study group designed a detailed CLIPP integration plan based on its assessment of the integration strategy that would work best in its institution, and implemented the plan during the following academic year. Any deviation or changes from the written integration plans during study implementation were documented by making modifications to the original plans. Integration plans during the second year of the study were adjusted, based on discussions within the CLIPP Working Group and on the individual clerkship director’s assessment of integration results for the first year of the study.

Two variables, use and elimination, were derived from the integration strategies employed in each school’s integration plan. A scoring system was designed to quantify the components of integration based on the detailed CLIPP integration plans (see List 1).

The use score was calculated by assigning a score of 0, 1, or 2 to each of the following four components of students’ use of the CLIPP program: level of students’ orientation to CLIPP, number of assigned cases, level of faculty development, and specific examination on the case content. An additional point was given for the presence of each of the following components: coordination of didactics (i.e., classroom work and lectures) with assigned CLIPP cases, use of CLIPP in bedside teaching, direct use in didactic teaching, use of CLIPP supplementary teaching resources, and use of CLIPP to fill gaps in clinical exposure. Scores for this variable could range from 0 to 14.

For CAI elimination, the scoring was more complex because we captured both the presence and absence of components. The elimination score was calculated by assigning a score to each of the following components of the curriculum: textbook, other CAI, other work, and other exam. If the school had one of these existing curricular components and removed it to make room for CLIPP, that component was scored 1. If the school did not have the component, it was scored 0. If the school had the component but did not eliminate it, it was scored −1. Scores for this variable could range from −4 to 5.
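
To make the scoring rules concrete, the following is a minimal sketch in Python under stated assumptions: the component names are illustrative stand-ins for the items in List 1, and a school’s integration plan is encoded as a simple dictionary.

```python
# Hypothetical encoding of the integration scoring rules described above;
# the component names are illustrative, not taken verbatim from List 1.

GRADED_USE_COMPONENTS = [            # each scored 0, 1, or 2
    "orientation", "assigned_cases", "faculty_development", "case_exam",
]
BONUS_USE_COMPONENTS = [             # one additional point each if present
    "didactic_coordination", "bedside_use", "direct_didactic_use",
    "supplementary_resources", "fills_exposure_gaps",
]
ELIMINATION_COMPONENTS = ["textbook", "other_cai", "other_work", "other_exam"]

def use_score(plan: dict) -> int:
    """Sum the graded components (0-2 each) plus one point per bonus component."""
    graded = sum(plan[c] for c in GRADED_USE_COMPONENTS)
    bonus = sum(1 for c in BONUS_USE_COMPONENTS if plan.get(c, False))
    return graded + bonus

def elimination_score(plan: dict) -> int:
    """+1 if an existing component was eliminated, 0 if never present, -1 if kept."""
    score = 0
    for c in ELIMINATION_COMPONENTS:
        had, eliminated = plan[c]    # (component existed, component was removed)
        if had:
            score += 1 if eliminated else -1
    return score

# Example: thorough orientation (2), many assigned cases (2), some faculty
# development (1), examination on case content (2), plus didactic coordination
# and bedside use; the textbook requirement was dropped, other work was kept.
plan = {
    "orientation": 2, "assigned_cases": 2, "faculty_development": 1, "case_exam": 2,
    "didactic_coordination": True, "bedside_use": True,
    "textbook": (True, True), "other_cai": (False, False),
    "other_work": (True, False), "other_exam": (False, False),
}
print(use_score(plan), elimination_score(plan))  # 9 0
```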

The principal investigator (N.B.) conducted initial scoring of the integration plans, and each school’s clerkship director verified the accuracy of the scoring. There was 100% agreement between the investigator’s assigned score and each clerkship director’s assessment of his or her school’s integration plan. At no point were the students involved in the design or assessment of the integration plans.


Student survey measure.

The second measure used a survey of the students who participated in the study. A two-page, anonymous, paper–pencil student survey was developed based on a detailed review of the literature on CAI and curricular integration, and iterative feedback from us, the authors, at the participating schools. The survey was pilot-tested on medical students, and feedback was incorporated into the final version. Consisting of 15 questions designed to assess various aspects of the delivery and outcomes of CLIPP, items were rated on a five-point Likert scale measuring aspects of students’ satisfaction and their perceptions of the effectiveness of CLIPP. Satisfaction questions used response categories ranging from “strongly disagree” to “strongly agree,” with “neutral” as the midpoint. Effectiveness questions required the student to contrast CLIPP with more traditional instruction methods on numerous dimensions. Response categories ranged from “not nearly as effective” to “much more effective,” with a midpoint response of “equally effective.” We used the average of the students’ responses to 11 of the items to create three composite scales measuring perceived knowledge gain (four items), perceived skills gain (three items), and perceived integration (four items). Students’ satisfaction was measured with a single item and, therefore, was not included in subsequent analyses of the reliability and validity of our measures.
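
As a simple illustration, the three composite scales could be computed from the item responses as below; the item and column names are hypothetical, since the survey items themselves appear in Table 2.

```python
import pandas as pd

# Hypothetical item-to-scale mapping (actual item wording appears in Table 2).
SCALES = {
    "knowledge":   ["know_1", "know_2", "know_3", "know_4"],
    "skills":      ["skill_1", "skill_2", "skill_3"],
    "integration": ["integ_1", "integ_2", "integ_3", "integ_4"],
}

def add_composite_scales(survey: pd.DataFrame) -> pd.DataFrame:
    """Average each student's 1-5 Likert responses into the three composite scales."""
    for scale, items in SCALES.items():
        survey[scale] = survey[items].mean(axis=1)
    return survey
```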


Validation of student survey measure.

Content validity was supported by developing the survey from a detailed review of the literature on CAI evaluation and from iterative feedback from the educators at the participating schools. To validate the hypothesized factor structure of the three student evaluation scales, we applied confirmatory factor analysis (CFA) to the 11 items from the student survey. In addition to providing item loadings for each factor, CFA gives model fit indices that indicate how well the data fit our hypothesized model. In general, for both factor loadings and model fit indices, values closer to 1.00 are desirable, and fit indices of at least 0.90 are the minimum threshold for “acceptable” model fit. A three-factor solution of knowledge, skills, and integration yielded acceptable model fit indices, with the goodness of fit index (GFI)13 at 0.93, the comparative fit index (CFI)14 at 0.95, and the normed fit index (NFI)15 at 0.93 (Table 2 presents item means and factor loadings for the 2004 student survey data).
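
The authors fit their models in SAS (see below); purely as an illustrative sketch, a comparable three-factor CFA can be specified in Python with the open-source semopy package, reusing the hypothetical item names above.

```python
import semopy  # open-source SEM/CFA package; an illustrative stand-in for SAS

cfa_desc = """
knowledge   =~ know_1 + know_2 + know_3 + know_4
skills      =~ skill_1 + skill_2 + skill_3
integration =~ integ_1 + integ_2 + integ_3 + integ_4
"""

cfa = semopy.Model(cfa_desc)
cfa.fit(survey)                       # survey: DataFrame of the 11 item responses
stats = semopy.calc_stats(cfa)
print(stats[["GFI", "CFI", "NFI"]])   # compare against the 0.90 "acceptable" threshold
```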

Table 2

A reliability analysis suggested minimal measurement error for the three scales. Cronbach alpha for perceived knowledge (α = 0.89), perceived skills (α = 0.85), and perceived integration (α = 0.79) indicated that our scales were internally consistent. The correlation between integration and perceived skills was 0.45, the correlation between perceived knowledge and integration was 0.51, and the correlation between perceived knowledge and skills was 0.79. Although our scales were moderately to highly related, we concluded that they measured three distinct constructs (discriminant validity) because the three-factor solution yielded a significantly better model fit than models testing the items in a one-factor solution or in two-factor combinations (e.g., all knowledge and skills items as one factor and integration as another). Additional information about this analysis is available from the authors.
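
Cronbach alpha for each scale follows the standard formula, alpha = k/(k − 1) × (1 − sum of item variances / variance of the summed scale); a self-contained sketch:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach alpha for an (n_students x k_items) matrix of item responses."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# e.g., cronbach_alpha(survey[SCALES["knowledge"]].to_numpy())  # reported: 0.89
```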


Data collection and statistical analysis

The data for the CAI integration measure were collected during the summer of 2004, and again prior to the beginning of the 2005–2006 academic year. The primary respondent at each participating school was the clerkship director. Students’ survey data were collected at the end of each clerkship rotation.

Data were analyzed using SAS 9.1.16 We used structural equation modeling (SEM)17 to test whether increasing degrees of integration would result in greater student satisfaction with CLIPP and higher perceptions of learning effectiveness. We anticipated that the impact of use and elimination strategies on student outcomes would occur both directly and indirectly, with some of the effects mediated through students’ perceptions of how well CLIPP was integrated. SEM is methodologically superior to regression or path analysis (which relies on independent regression equations) because it statistically accounts for measurement error. In addition, model fit indices indicate how well the data fit the entire set of equations. We set P < .05 as the level of significance and report P values only when they are significant.
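
In the same illustrative semopy syntax used above, the hypothesized mediation model for the knowledge outcome (paralleling Figure 1) might be specified as follows. Here use_score and elim_score denote the school-level integration scores merged onto each student record; this is a sketch, not the authors’ SAS code.

```python
import semopy

# Structural model paralleling Figure 1: use and elimination scores predict
# perceived integration, which (alongside the direct paths) predicts
# perceived knowledge.
sem_desc = """
integration =~ integ_1 + integ_2 + integ_3 + integ_4
knowledge   =~ know_1 + know_2 + know_3 + know_4

integration ~ use_score + elim_score
knowledge   ~ integration + use_score + elim_score
"""

sem = semopy.Model(sem_desc)
sem.fit(survey)       # survey now also carries use_score and elim_score columns
print(sem.inspect())  # path estimates, analogous to the b and gamma values reported
```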


Results

Characteristics of student respondents

Of the 611 students targeted to participate in the study, 583 (95%) responded (range across schools, 40–175). Removing participants with incomplete survey responses yielded a final sample of 545 (89%; range, 30–60). There were no statistically significant differences on the survey items between participants with incomplete responses and those with complete responses. Because respondents were anonymous, we could not assess differences between respondents and nonrespondents. Overall (as shown in Table 2), students were quite satisfied; they thought CLIPP was more effective than traditional methods at improving their knowledge and about equal to traditional methods at improving their skills. Perceptions of the degree to which CLIPP was well integrated and balanced were also high.


Characteristics of CAI integration

In 2004–2005, across the six schools, the mean use score was 6.94 (range, 5 to 12), and the mean elimination score was 1.48 (range, −2 to 4).


Effect of integration on student outcomes—2004–2005

A structural model was generated that included use and elimination as predictors of students’ perceptions of the effectiveness of CLIPP in improving their knowledge, skills, and satisfaction. The model also included the mediating pathway through perceived integration. For the models predicting knowledge and skills, both CFI and GFI reached 0.97, and NFI was only slightly lower at 0.95 for knowledge and 0.94 for skills. For the model predicting satisfaction, GFI was 0.98, CFI was 0.97, and NFI was 0.95. (Figure 1 presents a structural equation model of integration and perceived learning.)

Figure 1


Integration strategy directly and significantly affects perceived learning.

We found that use and elimination as integration strategies had important direct and indirect effects on students’ perceptions of learning. Figure 1 provides the unstandardized path coefficients for the three models of perceived student outcomes: knowledge, skills, and satisfaction. The measurement indicators for each latent factor are omitted for clarity. We provide a detailed interpretation for the knowledge model, although all models yielded similar findings. The integration strategy had direct effects on perceptions of knowledge gain: in model 1, higher use scores were positively and significantly associated with students’ perceptions of the effectiveness of CLIPP at improving their knowledge (b = 0.07, P < .0001), whereas higher elimination scores had a nonsignificant direct effect on perceived knowledge (b = 0.02, P = NS).


Integration strategy directly and significantly affects perceived integration.

The integration strategy used had differential effects on students’ perceptions of effective integration. Higher elimination scores were significantly and positively predictive of students’ perceptions of integration (γ = 0.09, P < .0001), showing that greater removal or replacement of existing curricular methods for CAI was linked to perceptions of greater integration. Conversely, higher use scores had a small, nonsignificant negative effect on perceived integration (γ = −0.03, P = NS).


Perceived integration directly and significantly affects perceived learning.

Students’ perceptions of more effective integration were strongly and positively predictive of their perceived knowledge improvement (b = 0.44, P < .0001). The effect of perceived integration on perceptions of learning effectiveness was much greater than the direct effect of increasing use scores.


Perceived integration mediates effects on perceived learning.

Some of the effects of integration strategies on perceived learning were transmitted indirectly, through students’ perceptions of integration. The mediating role of integration is most evident with elimination strategies. Although there are minimal direct effects of elimination scores on students’ perceptions of learning (b = 0.02, P = NS), when student perceptions of integration are considered, the impact of elimination scores becomes more apparent. Specifically, greater elimination scores are predictive of students’ perceptions of increased integration (γ = 0.09), which in turn are associated with higher perceptions of learning (b = 0.44). A Sobel18 test of mediation confirmed that the indirect effect (IE) of elimination scores on perceived learning through perceived integration reached significance (IE = 0.04 [0.09 × 0.44], z = 3.41, P < .001) and was substantially greater than the direct effect (b = 0.02). The indirect effect of use strategies on perceived learning was much lower and did not reach significance (IE = −0.01, z = −0.923, P = NS).
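
The Sobel statistic divides the indirect effect ab by its approximate standard error, z = ab / sqrt(b²·SEa² + a²·SEb²). A sketch with the path estimates reported above (the standard errors are placeholders, as the paper does not report them):

```python
import math

def sobel_z(a: float, se_a: float, b: float, se_b: float) -> float:
    """Sobel test: indirect effect a*b over its approximate standard error."""
    return (a * b) / math.sqrt(b**2 * se_a**2 + a**2 * se_b**2)

a, b = 0.09, 0.44           # elimination -> integration, integration -> knowledge
indirect_effect = a * b     # 0.0396, reported as 0.04
# With the (unreported) standard errors, sobel_z(a, se_a, b, se_b) gives z = 3.41.
```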

Taken together, integration strategies (use and elimination) and students’ perceptions of the integration explained 27% of the variance in students’ perceptions of the effectiveness of knowledge improvement due to CLIPP, 21% of students’ perceptions of skills improvement, and 43% of students’ satisfaction. Integration use and elimination strategies each explained 4% to 5% of the variance in students’ perceptions of integration across the different models. For a one-point increase in students’ perceptions of integration, students’ perceptions of knowledge and skills increased by almost half a point (b = 0.44 and 0.48, respectively), and perceptions of satisfaction increased by more than two thirds of a point (b = 0.69).


Effect of integration on student outcomes—2005–2006

The 2005–2006 model results were not substantially different from the 2004–2005 results. For the model predicting knowledge, model fit for 2005–2006 remained good (CFI = 0.96, GFI = 0.97, NFI = 0.95), and the model accounted for 28% of the variance in knowledge. Perceived integration remained a significant predictor of knowledge (b = 0.51, P < .0001). Integration use in the 2005–2006 models was not a predictor of perceived knowledge, and, in contrast to the 2004–2005 data, greater elimination scores predicted lower perceived knowledge gain (b = −0.05, P < .05).

For the 2005–2006 model predicting skills, model fit remained good, and the model accounted for more variance in skills than did the 2004–2005 model (R² = 0.28). Perceived integration was again strongly predictive of perceived skills (b = 0.56, P < .0001). Integration use had a small, marginally significant effect on skills (b = 0.05, P < .10), and the effect of elimination was nonsignificant (b = −0.02).


Replication of findings

Intraclass correlations (ICCs) for each of the variables used in the analysis suggested that the school clustering effect (school membership) accounted for less than 10% of the variance in our measures. Specifically, the school ICCs for knowledge and satisfaction were 0.06, and for skills and integration the ICC reached 0.09. To test the stability of the model and confirm that our results were not driven by a single school or combination of schools, we replicated the analysis across 21 combinations of the six schools, dropping each single school and then each pair of schools (6 + 15 = 21 subsamples). We then averaged the parameter estimates across the different analyses and compared them with the original models; the average parameter estimates varied only slightly from the original models. Specifically, in predicting knowledge, the average estimate for integration was 0.46 (range, 0.40 to 0.58), for total use 0.08 (range, −0.02 to 0.33), and for total elimination 0.03 (range, −0.04 to 0.16). For the skills model, the average parameter estimate across the 21 models for integration was 0.49 (range, 0.39 to 0.75), for total use 0.12 (range, −0.02 to 0.47), and for total elimination 0.03 (range, −0.13 to 0.25).
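
The 21 replication subsamples follow directly from dropping every single school and every pair of schools from the six: C(6,1) + C(6,2) = 6 + 15 = 21. A sketch of generating them (the school labels are anonymized placeholders):

```python
from itertools import combinations

schools = ["A", "B", "C", "D", "E", "F"]   # placeholders for the six study schools
subsamples = [
    [s for s in schools if s not in dropped]
    for k in (1, 2)
    for dropped in combinations(schools, k)
]
print(len(subsamples))  # 21
# Each subsample would be re-fit with the same structural model, and the
# parameter estimates averaged across the 21 runs.
```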


Discussion

There is ample evidence in the medical education literature to support the use of CAI in general19 and virtual patients in particular.20 There is a growing understanding of the steps needed to achieve successful curriculum change,21 but this knowledge is focused on getting faculty to adopt innovations in education. Little is known about the effects of differing integration strategies on students’ acceptance of this new technology, yet it is clear that students’ acceptance of this innovation must be achieved. In our study, in which the typical student spent 17 hours using CLIPP cases, students’ satisfaction with CLIPP was high, and students felt that CLIPP was a more valuable and effective teaching tool than traditional methods. This finding supports the view that, when integrated well, a comprehensive CAI program can be an acceptable supplement to traditional clerkship teaching.

In our study, integration use strategies that included orienting students to the cases, using the cases extensively throughout the curriculum, incorporating faculty development, and formally assessing students on the content of the cases had a direct positive effect on perceived learning. Our results also show that increased CAI use should be balanced by reduction or elimination of redundant teaching activities, such as required textbook reading or other unrelated assignments. Although students in our study were not told which previous exercises were eliminated, they still clearly perceived the effects of integration: there was a significant correlation between higher elimination scores, which were not student generated, and higher perceived integration, which was based on student surveys. Furthermore, students’ perceptions of integration constituted a strong and significant mediating factor in students’ satisfaction and perceptions of learning. The most important factor leading to a student’s perception of effective integration was the thoughtful elimination of other forms of teaching.

Importantly, these results do not imply that indiscriminate elimination of other teaching would produce a greater perception of effective integration or of greater learning. The integration strategies employed were the result of careful consideration by the clerkship director at each participating school, taking into account all of the environmental factors that would or would not allow another form of teaching to be eliminated. Use factors remained very important in achieving the desired learning outcomes, and excessive or inappropriate elimination of other teaching would also likely result in lower use scores. For instance, eliminating didactics that might otherwise be coordinated with CAI case teaching might not be an ideal strategy.

Although the use of computers to deliver education is relatively new, the need to integrate new and innovative teaching methods into the medical curriculum is not. Our results are consistent with earlier published studies of successful curricular change, which emphasize thoughtful planning in the context of the local environment, active engagement of the teaching faculty, and responsiveness to formative curricular evaluation.22–24 In a review of effective strategies for facilitating learning from high-fidelity simulation, Issenberg et al25 found that integrating the simulation exercise into the standard curriculum was an essential feature, including making the exercise a required component, building it into the learner’s training schedule, and evaluating learners on their performance on the simulations. These factors were a strong consideration in the development of the unique integration plans at each study institution.

A growing body of literature is beginning to demonstrate that clinical education integrated with case-based CAI learning is more effective than traditional methods alone. Students do not, and should not, see CAI as a replacement for traditional instructor-led training but, rather, as a complement to it. As with the use of problem-based learning as an adjunct to clinical experiences, virtual patient cases during the clerkship can assist students in linking their factual knowledge to their clinical experiences, deepening their understanding of clinical problems, and allowing generalization from one patient to another.20,26 CAI has been shown to be a more efficient, effective, and cost-effective method for content delivery and durable learning.1,27 This efficiency can allow instructors to function more as facilitators and assessors, rather than as distributors of content.20 Thus, effective CAI integration strategies can enhance non-instructor-led learning and can allow students and faculty to use their limited and precious time together for quality clinical teaching activities that cannot be achieved with CAI.


Limitations

There are a number of potential limitations to our study. The aim of our study was to respond to the literature calling for additional research on effective and generalizable CAI integration strategies. We studied integration strategies for a multicase, comprehensive virtual patient program, and, as such, our data may not be as applicable to a CAI program designed to teach more limited or focused content areas, or one using an approach that differs greatly from virtual patients. Although attempting to measure students’ learning from this program would be interesting, this was not the aim of the study, and we are not able to answer questions about specific learning outcomes. Measuring students’ perceptions of their learning is valid and appropriate for assessing the overall effects of CAI on students’ learning.28 Our study design, with prospectively determined integration plans at six pilot schools, required more than simple adoption of new technology, but this was not intended to be a study of the steps needed to achieve curriculum reform. This study was observational, which limits our ability to determine causality of the associations we found. It may be possible that our findings are due to a school-level effect or are the result of conducting our analysis at the student level. However, the replication analyses conducted both at the individual level and in a multilevel context suggest that our findings are consistent across schools and different modeling approaches. Finally, although our study schools represent a diverse cross-section of U.S. medical schools and our integration strategies employed a discrete list of components, the best CAI integration strategy at each individual school remains unique to the environment at that school.23


Conclusions

The use of technology to enhance students’ clinical and experiential learning throughout medical education continues to grow, and the challenge of integrating technology effectively into an already overcrowded curriculum will grow accordingly. Our results with a comprehensive CAI program designed for the clinical clerkship demonstrate that a thoughtful approach to integration is important and that effective integration can be achieved. Providing effective orientation, integrating CAI teaching into existing didactics, fostering faculty development to build on students’ learning from the cases at the bedside, and eliminating redundant reading and assignments should all be considered. Additional research is needed into methods that blend the best of CAI learning with clinical education to further improve objective student learning and patient-care outcomes. Obtaining students’ positive perceptions of integration is critical in achieving the desired outcomes of computer-assisted instruction.


Acknowledgments

The work presented in this manuscript was supported by a Health Resources and Services Administration Predoctoral Training in Primary Care grant (8 D56 HP 00059-04).

The authors wish to thank Carol Edwards for her assistance in managing the CLIPP Working Group and in preparation of this manuscript.


Disclaimer

Dr. Berman and Dr. Fall are the founders and executive directors of, and receive partial salary support from, the Institute for Innovative Technology In Medical Education (iInTIME). iInTIME is incorporated in New Hampshire as a nonprofit corporation and was established to maintain and provide access to the Computer-assisted Learning In Pediatrics Program. The other authors received a stipend, paid out of the above-mentioned grant, for their contributions to the CLIPP Working Group.


References

1 Lyon H, Healy J, Bell J. PlanAlyzer, an interactive computer-assisted program to teach clinical problem solving in diagnosing anemia and coronary artery disease. Acad Med. 1992;67:821–828.
2 Cooksey K, Kohlmeier M, Plaisted C, Adams K, Zeisel S. Getting nutrition education into medical schools: A computer-based approach. Am J Clin Nutr. 2000;72(3 suppl):868S–876S.
3 Greenhalgh T. Computer assisted learning in undergraduate medical education. BMJ. 2001;322:40–44.
4 Hamilton NM, Furnace J, Duguid KP, Helms PJ, Simpson JG. Development and integration of CAL: A case study in medicine. Med Educ. 1999;33:298–305.
5 Fall L, Berman N, Smith S, White C, Woodhead J, Olson A. Multi-institutional development and utilization of a computer-assisted learning program for the pediatric clerkship: The CLIPP project. Acad Med. 2005;80:847–855.
6 Haag M, Singer R, Bauch M, Heid J, Hess F, Leven FJ. Challenges and perspectives of computer-assisted instruction in medical education: Lessons learned from seven years of experience with the CAMPUS system. Methods Inf Med. 2007;46:67–69.
7 Leong S, Baldwin C, Adelman A. Integrating Web-based computer cases into a required clerkship: Development and evaluation. Acad Med. 2003;78:295–301.
8 Berman NB, Fall LH, Maloney CG, Levine DA. Computer-assisted instruction in clinical education: A roadmap to increasing CAI implementation. Adv Health Sci Educ Theory Pract. 2008;13:373–383.
9 Friedman C. The research we should be doing. Acad Med. 1994;69:455–457.
10 Cook D. The research we still are not doing: An agenda for the study of computer-based learning. Acad Med. 2005;80:541–548.
11 Effective Use of Educational Technology in Medical Education. Colloquium on Educational Technology: Recommendations and Guidelines for Medical Educators. Washington, DC: AAMC Institute for Improving Medical Education; 2007.
12 Olson A, Woodhead J, Berkow R, Kaufman N, Marshall S. A national general pediatric clerkship curriculum: The process of development and implementation. Pediatrics. 2000;106(1 pt 2):216–222.
13 Jöreskog K, Sörbom D. LISREL 7: A Guide to the Program and Application. 2nd ed. Chicago, Ill: SPSS, Inc.; 1989.
14 Bentler P. EQS Structural Equations Program Manual. Los Angeles, Calif: BMDP Statistical Software; 1989.
15 Bentler P, Bonett D. Significance tests and goodness-of-fit in the analysis of covariance structures. Psychol Bull. 1980;88:588–606.
16 Statistical Analysis Software [computer program]. Cary, NC: SAS Institute, Inc; 2004.
17 Bollen K. Structural Equations With Latent Variables. New York, NY: Wiley-Interscience; 1989.
18 Sobel M. Asymptotic Confidence Intervals for Indirect Effects in Structural Equation Models. Washington, DC: American Sociological Association; 1982.
19 Chumley-Jones H, Dobbie A, Alford C. Web-based learning: Sound educational method or hype? A review of the evaluation literature. Acad Med. 2002;77(10 suppl):S86–S93.
20 Ruiz J, Mintzer M, Leipzig R. The impact of E-learning in medical education. Acad Med. 2006;81:207–212.
21 Surry D. A model for integrating instructional technology into higher education. Br J Educ Technol. 2005;36:327–329.
22 Bland C, Starnaman S, Wersal L, Moorehead-Rosenberg L, Zonia S, Henry R. Curricular change in medical schools: How to succeed. Acad Med. 2000;75:575–594.
23 Sachdeva AK. Faculty development and support needed to integrate the learning of prevention in the curricula of medical schools. Acad Med. 2000;75(7 suppl):S35–S42.
24 Sierpina V, Bulik R, Baldwin C, et al. Creating sustainable curricular change: Lessons learned from an alternative therapies educational initiative. Acad Med. 2007;82:341–350.
25 Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10–28.
26 O’Neill PA, Willis SC, Jones A. A model of how students link problem-based learning with clinical experience through “elaboration.” Acad Med. 2002;77:552–561.
27 Kerfoot BP, Baker H, Jackson TL, et al. A multi-institutional randomized controlled trial of adjuvant Web-based teaching to medical students. Acad Med. 2006;81:224–230.
28 Ten Cate O. What happens to the student? The neglected variable in educational outcome research. Adv Health Sci Educ Theory Pract. 2001;6:81–88.

*Boston University School of Medicine, Dartmouth Medical School, Morehouse School of Medicine, University of Illinois College of Medicine at Rockford, University of Utah School of Medicine, and University of Washington School of Medicine.

© 2009 Association of American Medical Colleges