Economic forces and biomedical advancement have transformed the delivery of health care. The disease spectrum has shifted from acute care to chronic illness, and symptom evaluation and management often take place outside of the hospital. However, the bulk of clinical training still occurs in the hospital, where trainees are exposed to diseases often less reflective of illnesses in the general population. Pressures to shorten hospital stays have resulted in lost opportunities to interact with and learn from patients. Medical students are often marginalized from the health care team rather than being engaged in the clinical decision-making process.1
Technology-enabled instruction is one means by which medical educators hope to address these challenges in clinical training. In particular, virtual patients were designed to fill gaps in clerkships by exposing students to diseases that they would not otherwise experience because of short clinical rotations and limited ambulatory care experiences. “Virtual patients” are defined as “computer programs that simulate real-life clinical scenarios in which the learner acts as a health care professional obtaining a history and physical exam and making diagnostic and therapeutic decisions.”2 They should be distinguished from other forms of simulation, such as standardized patients (human actors trained to portray patients) and high-fidelity simulators (life-sized robot mannequins). Virtual patients can be used to simulate, for example, the longitudinal care of a diabetic patient over the course of nine “virtual” years, condensed into several hours. They expose learners to rare, “do-not-miss” events that might not occur over the course of a three-month clerkship or even a three-year residency, such as a ruptured aortic aneurysm. They permit a window into procedures or conversations in which trainees may not normally participate, such as a cardiac catheterization or a primary care physician giving bad news. They portray a variety of clinical presentations for a single disease such as HIV or, alternatively, multiple diseases for the same clinical presentation, such as chest pain. They have been used to teach topics as diverse as communication skills3 and bioterrorism response.4
Several studies have demonstrated that virtual patients are well received and may develop cognitive and behavioral skills more effectively than traditional methods do. For instance, medical students who used virtual patients to learn about acute back pain rated the learning experience as more enjoyable and performed better on posttest examinations than did students who read journal articles on the same topic.5 A study assessing student attitudes in a pediatric clinical curriculum (Project LIVE) showed that students reported higher confidence in their ability to recognize abnormal findings when using hybrid CD-ROM/Internet virtual patients (either face to face or as a “virtual group”) than did those in a traditional problem-based learning session.6 A follow-up Project LIVE study revealed that medical students who learned in a virtual modality with a digital video case demonstrated more critical thinking than did students using a paper case.7 Participants in a continuing medical education course who used both virtual patients and standardized patients showed equal improvements in performance and diagnostic ability compared with participants who used standardized patients only.8 In recognition of the value of simulation in medical education, the recent Liaison Committee on Medical Education (LCME) ED-2 requirement permits the use of simulated patients to meet clerkship objectives.9 Perhaps the strongest validation to date is the introduction of virtual patients for assessment in USMLE Step 3 in 1999.10
Despite persuasive evidence for the effectiveness of virtual patients, these programs are not ubiquitous in medical education. Because virtual patient programs employ complex programming and multimedia to replicate clinical environments, they are extremely time- and resource-intensive to produce, which is prohibitive for institutions that lack robust educational technology programs. At the same time, virtual patient development tends to be confined within single institutions, resulting in potentially duplicative case development and lack of access for those schools that have not developed virtual patients. In addition, as with any educational innovation, the successful integration and effective use of virtual patient programs hinges on the extent to which educators sufficiently consider curricular issues and training needs. These challenges, among others, have led to the inconsistent and ultimately limited impact of virtual patient simulation on the undergraduate medical curriculum. In response, we decided to gather detailed technical, content, and usage information regarding virtual patients developed at U.S. and Canadian schools, anticipating that dissemination of these results will promote additional collaboration among developers, increased visibility to educators, and sharing of these valuable resources.
From February to September 2005, we contacted curriculum deans at 142 U.S. and Canadian schools and asked them to report on virtual patient simulation activities at their schools or to forward our requests to educational technology directors. We sought the following general information from respondents:
- technical platform (CD-ROM, DVD-ROM, Internet based, workstation, or hybrid)
- multimedia object types (images, audio, video)
- browser plug-in requirements
- intended audience (students, resident trainees, faculty, or allied health professionals)
- access patterns (one time or continuous)
- user setting (individual or group)
- feedback mechanism
- peer-review mechanism
- case structure (linear or outcome branching)
- performance tracking
- access mechanism (open or password protected), and
- willingness to share cases.
We sought the following information about individual cases:
- patient demographics (age, gender, race, ethnicity)
- key words
- funding mechanism
- production cost
- production duration, and
- willingness to share case metadata with others.
The inventory was piloted at two medical schools and was subsequently refined. We sent e-mail requests to all 142 U.S. and Canadian medical schools, followed by targeted phone calls to nonresponders. In addition, we announced the inventory on various listservs and directly contacted virtual patient developers known to us.
We categorized each of the virtual patient cases into 1 or 2 of 21 possible clinical disciplines and into 1 of 17 possible basic science disciplines, if applicable. In addition, we characterized each case by clinical presentation, using the Medical Council of Canada (MCC) list of clinical presentations.11
We tabulated the virtual patient programs by delivery mechanism, embedded multimedia, Web browser plug-in requirement, intended learner audience, instructional characteristics (individualized use, presence of immediate feedback, peer review, multiple possible outcomes, and tracking of user input), accessibility, and willingness to share.
We tabulated the individual virtual patient cases by clinical discipline (primary and secondary), basic science discipline, unique clinical presentation (according to the MCC guidelines), gender, age, race, ethnicity, funding source, cost, and production duration.
We tabulated production characteristics (funding source, cost, and duration) by various aspects of sharing (willingness to accommodate external users, willingness to share entire software packages, willingness to share multimedia components, and willingness to allow modifications).
We used a univariable logistic regression model to analyze factors associated with the willingness to accommodate external users and willingness to share the entire package. Independent variables included federal/state funding source, production cost greater than $50,000, and production duration greater than 12 months. Analyses used Stata 7.0 (StataCorp, College Station, Tex).
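The authors performed this analysis in Stata 7.0; as an illustrative sketch only (not the authors' code or data), a univariable logistic regression with a single binary predictor can be fit with a plain Newton-Raphson iteration, and its fitted odds ratio equals the cross-product ratio of the corresponding 2×2 table. All cell counts below are hypothetical.

```python
import math

def fit_univariable_logit(x, y, iters=25):
    """Fit logit P(y=1) = b0 + b1*x by Newton-Raphson (pure Python).

    x, y: parallel lists of 0/1 values. Returns (b0, b1).
    """
    b0, b1 = 0.0, 0.0
    for _ in range(iters):
        # Accumulate the gradient (g0, g1) and Hessian entries
        # (h00, h01, h11) of the log-likelihood.
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            p = 1.0 / (1.0 + math.exp(-(b0 + b1 * xi)))
            w = p * (1.0 - p)
            g0 += yi - p
            g1 += (yi - p) * xi
            h00 += w
            h01 += w * xi
            h11 += w * xi * xi
        # Newton step: solve the 2x2 system H * delta = g.
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical 2x2 table: predictor present/absent vs. outcome yes/no.
a, b, c, d = 44, 10, 9, 40
x = [1] * (a + b) + [0] * (c + d)
y = [1] * a + [0] * b + [1] * c + [0] * d

_, b1 = fit_univariable_logit(x, y)
odds_ratio = math.exp(b1)
# For one binary predictor, the fitted OR equals the
# cross-product ratio a*d / (b*c).
```

With a single binary covariate, the maximum-likelihood estimate of exp(b1) is exactly the cross-product ratio, which is why a 2×2 table suffices to check such a model by hand.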
A total of 108 U.S. and Canadian medical schools out of 142 participated in the inventory (76% response rate). Eighty-two (76%) respondents reported that they were not producing or using virtual patients at their institution. Twenty-six (24%) medical schools responded to questions in the inventory, and 12 medical schools entered 103 cases, comprising information about 111 virtual patients, into the inventory. An average of 8.6 cases (SD 8.4, range 1–31) was entered by each of those 12 schools.
Twenty-four schools’ responses addressed the technology underlying their virtual patient programs (see Table 1). The vast majority of existing virtual patient programs required an Internet connection (20 schools, representing 83%). CD-ROMs, workstation terminals, and DVD-ROMs were used in eight (33%), four (17%), and three (13%) programs, respectively. Many of the programs had hybrid delivery mechanisms. The programs were rich in multimedia, with 22 (92%) containing still images, 17 (71%) using audio clips, and 20 (83%) using video clips. A quarter to one third used more advanced multimedia, with six (25%) using three-dimensional animation and eight (33%) using Macromedia Flash animation. Of the plug-in players needed to run the virtual patient programs, 15 (63%) required the QuickTime player and nine (38%) required the Macromedia Flash player.
Most of the virtual patient programs were developed for medical student use: 20 (83%) during the clerkship years, and 15 (63%) during the basic science components of the curriculum. Use in graduate medical education and continuing professional education was less common, being seen in only eight (33%) and five (21%) programs, respectively. None of the medical schools reported an intended allied health professional audience.
For the majority of virtual patient programs, respondents answered “frequently” or “almost always” regarding the following instructional characteristics (see Figure 1): use by individual users rather than in small groups (17 programs, 68%), direct feedback to users about their performance (17 programs, 68%), peer review of case content (16 programs, 64%), and tracking of user input (13 programs, 52%). In contrast, only five programs (20%) offered multiple outcomes contingent on learner decisions.
In terms of accessibility, two thirds of programs (18 programs, 69%) were password protected, but nine (35%) permitted noninstitution members to register for access. Twenty-two schools (85%) were willing to share their virtual patient cases in exchange for access to other institutions’ cases, and 16 (62%) were willing to share the paper-based materials used by the programs.
Analysis of clinical discipline coverage revealed that pediatrics was most frequently represented (40 cases, 39%), followed by internal medicine (30 cases, 29%), clinical neuroscience (24 cases, 23%), psychiatry (13 cases, 13%), and preventive medicine (8 cases, 8%). The remaining clinical disciplines of geriatrics, surgery, obstetrics–gynecology, dermatology, emergency medicine, and palliative care each represented fewer than 5% of cases. Only eight cases (8%) included basic science coverage; three (3%) included microbiology and immunology topics. Using the MCC examination objectives, we found that 65 of the 103 cases represented unique clinical presentations; these constituted 30% of the 219 presentations listed by the MCC.
The 111 virtual patients reported by respondents were well distributed by gender (51% male, 49% female) and age (20–24 virtual patients [18%–22%] in each age category, with an average age of 34.6 years—see Table 2). Virtual patients were predominantly white (93 virtual patients, 84%) and either non-Hispanic (46 virtual patients, 41%) or reported as “not applicable” (56 virtual patients, 50%).
Production information was provided for 102 of the 103 cases (Table 3); 56 of these (55%) required multiple funding sources for their development. Approximately half (54 cases, 53%) were funded by public federal and state grants, followed by internal support mechanisms (30 cases, 29%) and medical schools (27 cases, 26%). Corporate funding sources were the exception (7 cases, 7%). Production of individual virtual patients was expensive: 35 (34%) cost more than $50,000, and 87 (85%) cost more than $10,000. Sixty-two cases (61%) required more than six months to produce, and the average production duration was 16.6 months.
We tabulated production characteristics by expressed intention to share (see Table 4). Among virtual patient programs funded by federal or state, commercial, or university grants, the majority of cases were associated with a willingness to host external users on local servers (44, 7, and 9 cases, or 81%, 100%, and 89%, respectively), but a lower percentage were willing to give away the entire program for free (20, 4, and 2 cases; 37%, 57%, and 22%, respectively). In general, willingness to share multimedia assets and to allow modifications of work was less frequent than willingness to host external users or to share the entire package, across production cost and production duration categories.
A univariable analysis revealed the factors associated with the willingness to accommodate external users. These factors included federal/state funding (odds ratio [OR], 9.1; confidence interval [CI], 3.7–22.5), production cost >$50,000 (OR, 0.1; CI, 0.1–0.3), and production duration >12 months (OR, 3.0; CI, 1.3–6.8). Factors associated with the willingness to share the entire package included federal/state funding (OR, 0.2; CI, 0.1–0.4), production cost >$50,000 (OR, 8.6; CI, 3.0–25), and production duration >12 months (OR, 0.3; CI, 0.1–0.7).
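As background for reading such results (a textbook computation, not the authors' analysis), the Wald 95% confidence interval around an odds ratio can be derived directly from the four cell counts of a 2×2 table; an interval that excludes 1.0 indicates significance at the 5% level. The counts below are hypothetical.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Wald 95% CI from 2x2 cell counts.

    a: exposed with outcome,   b: exposed without outcome,
    c: unexposed with outcome, d: unexposed without outcome.
    """
    or_est = (a * d) / (b * c)
    # Standard error of log(OR) under the Wald approximation.
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_est) - z * se)
    hi = math.exp(math.log(or_est) + z * se)
    return or_est, lo, hi

# Hypothetical counts for illustration.
or_est, lo, hi = odds_ratio_ci(30, 24, 10, 38)
# The association is significant (at the 5% level) when the
# interval does not contain 1.0.
significant = not (lo <= 1.0 <= hi)
```

For these hypothetical counts the estimate is 4.75 with an interval of roughly 2.0 to 11.4, so the association would be judged significant.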
Discussion and Conclusions
This inventory is the first formal effort to collect information about virtual patient simulation at medical schools in North America. The data support the following specific conclusions:
- The vast majority of virtual patients are Internet accessible, media rich, and associated with significant production costs and time. Half are supported by more than one funding source.
- Virtual patient programs contain desirable elements of learner-centered instruction (individualized feedback and performance tracking).
- Primary care fields such as internal medicine and pediatrics are well represented among virtual patient cases.
- Sharing is widely endorsed by medical schools.
Given that most developers have expressed a willingness to share their programs, we encourage interinstitution collaboration in using virtual patients, which may, in turn, broaden funding support of these tools and advance research evaluation of virtual patients. We recognize that many barriers to collaboration exist: educators may be subject to the “NIH” (not invented here) syndrome; programs may be rigidly tailored to the specific needs of a particular course or curriculum; and collaboration and further refinement may require time and significant effort. Institutions may have restrictive intellectual property policies that limit how these resources might be used by others. A few examples of successful multicentered associations do exist6,12,13 and may serve as models for broad-based curricular integration. In addition, making case-authoring tools14,15 available can allow institutions to create libraries of virtual patients based on a common platform, which may decrease the production cost per case and facilitate sharing of content.
Strategic development of new virtual patients in underrepresented topics may motivate more widespread integration of virtual patients into medical education. We recommend the creation of additional cultural competency cases, more explicit objectives in basic science content to increase integration between preclinical and clinical curricula, and the exploration of technologies and features that would allow learners the opportunity to explore the consequences of clinical decisions. In addition, development of virtual patients in acute care and surgery disciplines would result in a comprehensive body of cases to complement the entire clinical curriculum.
Limitations of the inventory include the fact that it may not encompass all programs considered “virtual patients.” Two points of comparison are noteworthy. The LCME annual survey in 2004 revealed that 62 medical schools (50%) affirmed that at least one clerkship was using “computer-based case simulations.”16 A 2004 survey of medical schools regarding their educational technology services showed that 70 of the 88 responding schools (80%) reported using “computer-based simulations for teaching and/or assessment.”17 These data show that the majority of medical schools are taking advantage of some type of computer-based simulation. We hypothesize that the discrepancy is attributable to our adherence to a single definition of “virtual patient,” which reflects the more complex, media-rich programs, rather than to the generic term “computer-based simulation,” which encompasses virtual patients but also electronic versions of paper cases. Furthermore, although we made this distinction, we are unaware of published studies evaluating the benefit of multimedia and complex interactivity in virtual patients compared with generic computer-based case simulations, and we are unsure about the generalizability of such studies because both categories are heterogeneous. We also specifically excluded programs purchased commercially; we examined in-house production of virtual patients as a justification for collaboration and resource sharing, which would not be permitted with proprietary software. We also discovered during our search that virtual patients are being developed by organizations with loose affiliations to the medical schools; these organizations may not have known of the school-centered call for data. This includes efforts at teaching hospitals for residency education, even though our data suggest that hospitals are uncommon sources of funding and that virtual patients are less likely to be used for graduate medical education.
We have also excluded international initiatives, though virtual patient development is particularly active in certain European schools.
A second major constraint of inventories such as this one is the limited information we collected about the virtual patient programs themselves; this affects the content analysis, the resource-allocation findings, and conclusions regarding the perceived educational value of the programs. Our categorization of virtual patient cases is imperfect; basic sciences are poorly reflected, even though the content of some cases may implicitly address basic science topics. Because virtual patients tend to be centered on symptom presentations (just as real patients are), the information we received from the case authors tended to be clinically oriented. We also lack detailed financial data that might shed light on the wide range of production costs and development times reported and indicate whether costs were predominantly associated with personnel or with technical infrastructure. Finally, we are unable to draw broad conclusions about the educational value of these virtual patients, because we are unsure how they meet course objectives and learners’ needs or how they are integrated into the curriculum.
The willingness of virtual patient developers to share information about their programs and to share the programs themselves may be a catalyst toward wider adoption of this technology. Although institutions may have dissimilar policies regarding ownership of curricular materials, in many cases the virtual patient developers hold leadership positions in educational technology or medical education and are therefore empowered to advance reliable statements of intention, as they did in response to our questionnaire. In addition, with the increasingly widespread adoption of other forms of medical simulation, particularly procedural and high-fidelity simulation, there exist potential pedagogic synergies to allow trainees the opportunity to practice in realistic and safe learning environments. In the face of compelling evidence that current virtual patients are effective for teaching and assessment and the likelihood that the technology will grow increasingly sophisticated and costly, we must be thoughtful about how limited, available resources are used, and we should strive in unison towards improving the education of our future physicians.
The authors thank Roger Davis, ScD, of the Division of General Medicine and Primary Care at Beth Israel Deaconess Medical Center, for his statistical expertise.