
Education Strategies

Virtual Patients: ED-2 Band-Aid or Valuable Asset in the Learning Portfolio?

Tworek, Janet MSc; Coderre, Sylvain MD, MSc; Wright, Bruce MD; McLaughlin, Kevin MB, PhD

Author Information
Academic Medicine. 2010;85(1):155–158. DOI: 10.1097/ACM.0b013e3181c4f8bf

Abstract

Those of us involved in planning clerkship curricula readily identify with the sentiments of Henry Brooks Adams that “chaos was the law of nature; order was the dream of man.” Except that in this case, “nature” is clinical learning experiences, “man” is the clerkship director, and the “dream”—according to the Liaison Committee on Medical Education (LCME) accreditation standard ED-2—must be a reality. We must be able to “specify the types of patients and clinical conditions that students must encounter and ... remedy any identified gaps.”1 So, how can we create order from chaos?

Why Does Undergraduate Medical Education Need Virtual Patients?

Bringing order to the chaos requires, in part, that medical educators meet LCME accreditation standard ED-2; that is, they must “specify the types of patients and clinical conditions that students must encounter and ... monitor and verify the students' experiences with patients so as to remedy any identified gaps.” As medical educators, we have perpetually grappled with an unpredictable supply of clinical material and have become adept at compensating for deficiencies. We maximize the learning potential of our rare and interesting cases by discussing them at rounds and publishing our findings. We also “invent” clinical material in the form of paper cases, standardized patients, human patient simulators, and virtual patients. Here, our focus is on virtual patients and their potential as a simulated clinical learning experience.

Virtual patients are “computer programs that simulate real-life clinical scenarios in which the learner acts as a health care professional obtaining a history and physical exam and making diagnostic and therapeutic decisions.”2 Using this strict definition, virtual patients are actually scarcer than real patients in undergraduate medical education and are currently used at fewer than a quarter of U.S. and Canadian medical schools.2 Although some pioneers have already integrated virtual patients into their curricula,3 the total North American pool of virtual patients equates to only about one per medical school, likely because of the considerable time (16.6 months) and cost ($10,000–$50,000) required to produce each case.2 At the present time, therefore, virtual patients are still in the realm of the “innovators” and “early adopters,”4 but their use will likely increase over time, especially since the LCME has granted them curricular parity with real-life clinical experiences: “If a student does not encounter patients with a particular clinical condition (e.g., because it is seasonal), the student should be able to remedy the gap by a simulated experience (such as standardized patient experiences, online, or paper cases, etc.).”1

Yet, for curriculum planners, it seems almost too good to be true that virtual and real clinical experiences should be considered equal, particularly given the limited data on the effect of virtual patients on learning outcomes.5 Currently, no statement limits the proportion of clinical experiences that can be virtual. Can we go completely virtual?6 If so, what are the advantages and disadvantages of doing this? Do we run the risk of creating virtual MDs?

The Virtues of “Virtuality”

In addition to offering a backdoor route to meeting an LCME accreditation standard, virtual patients may meet, or even surpass, the learning potential of their real-life counterparts.

At the curricular level, virtual patients offer several advantages, including greater consistency in the delivery of learning experiences. In a mixed curricular model, the virtual curriculum could run alongside the existing clinical curriculum, giving educators the flexibility of using either, or both, curricula to meet the learning objectives. Despite high start-up costs,2 virtual patients can reduce the burden on some resources, such as clinical teachers and real patients, ensuring that learning can continue even in their absence.

At the individual student level, virtual patients allow learners—rather than patient availability—to determine their learning agenda. The virtual learning environment is safe because it gives learners permission to fail. They can make errors without patients suffering adverse clinical consequences, and they can err in private. Virtual patients also permit graded challenges: learners begin with straightforward cases that impose a lower cognitive load7 and gradually progress to increasingly complex cases. Teaching is always available with virtual patients; the “virtual preceptor” always provides evidence-based recommendations and is never too busy to teach. Finally, when we have a plentiful supply of virtual patients, learners will have a greater opportunity to practice their diagnostic skills.
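As an illustration only, the following minimal sketch (in Python, with hypothetical case objects and a difficulty scale of our own invention) shows how a bank of virtual patients could be sequenced as graded challenges, from low to high cognitive load:

    from dataclasses import dataclass

    @dataclass
    class VirtualCase:
        """A hypothetical virtual patient case with a faculty-assigned difficulty rating."""
        title: str
        difficulty: int  # 1 (straightforward) to 5 (complex); scale is illustrative

    def graded_sequence(cases, completed):
        """Return the cases not yet completed, ordered from lowest to highest
        difficulty, so learners progress gradually to more complex presentations."""
        remaining = [c for c in cases if c.title not in completed]
        return sorted(remaining, key=lambda c: c.difficulty)

    bank = [
        VirtualCase("Community-acquired pneumonia", 1),
        VirtualCase("Hyponatremia due to SIADH", 3),
        VirtualCase("Fever in a returning traveler", 4),
    ]
    for case in graded_sequence(bank, completed={"Community-acquired pneumonia"}):
        print(case.difficulty, case.title)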

So, if virtual patients allow for a more consistent curriculum, delivered in a learner-centered environment, with greater practice opportunities and reduced requirements for busy clinical preceptors, are there any disadvantages? Yes, there are two glaring reasons why virtual patients might not provide a sound learning experience. First, they may not be able to deliver effective feedback, and, second, virtual learning may not transfer to the real-life environment.

Providing Effective Feedback in the Virtual Learning Environment

Effective feedback and deliberate practice

In medicine, practice alone does not make perfect. Indeed, with increasing clinical practice, skills frequently deteriorate.8,9 Ericsson and colleagues10 described the training conditions required to improve skills, referring to this process as “deliberate practice.” The main difference between everyday clinical practice and deliberate practice is the inclusion of effective feedback in the latter.11

Neher and colleagues12 described a five-step model of effective clinical teaching, and subsequent studies demonstrated that training preceptors to use this approach improved both the efficiency and effectiveness of their clinical teaching.13,14 The five steps are:

  1. Making learners commit to a diagnosis and a management plan
  2. Probing learners' diagnostic reasoning and underlying knowledge
  3. Providing learners with some general rules, ideally in their area(s) of weakness
  4. Reinforcing what the learners did well
  5. Correcting the learners' errors by providing constructive feedback and recommendations for improvements

These five steps are readily achievable in the clinical setting, but are they achievable in a virtual environment?

Barriers to feedback in a virtual environment

Steps 1 and 2 involve gathering data from the learner and are static tasks—that is, the same data are gathered from all learners. Step 1 remains relatively straightforward in the virtual setting; learners must provide a diagnosis and a description of their management plan before receiving feedback. Step 2 can be completed in several ways, ranging from requiring learners to draw elaborate diagrams conveying the nature of causal relationships to simply asking them for a free-text explanation of their reasoning process.15 The latter is usually preferred because it is consistent with the real-life clinical setting. Using a “think aloud” technique to identify a diagnostic reasoning strategy does have limitations, particularly because not all diagnoses are based on conscious reasoning. This limitation, however, applies equally to both the real-life and virtual environments.16 Probing underlying knowledge is also relatively easy to achieve if learners provide free-text responses to a priori questions, such as the interpretation of an electrocardiogram, chest X-ray, or lab test.
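To make these two static tasks concrete, here is a minimal sketch, assuming a simple text-based interface (the prompts and field names are illustrative assumptions, not features of any existing virtual patient platform), in which the learner must commit to a diagnosis and plan (step 1) and explain his or her reasoning and interpret predetermined findings (step 2) before any feedback is unlocked:

    def gather_commitments(a_priori_questions):
        """Steps 1 and 2: collect a committed diagnosis, management plan, and
        reasoning from the learner before any feedback is revealed."""
        responses = {
            "diagnosis": input("Commit to a diagnosis: ").strip(),     # step 1
            "plan": input("Describe your management plan: ").strip(),  # step 1
            # Step 2: a free-text, "think aloud" account of the reasoning process.
            "reasoning": input("Explain how you reached this diagnosis: ").strip(),
        }
        # Step 2, continued: probe underlying knowledge with a priori questions,
        # e.g., interpretation of an ECG, chest X-ray, or lab test.
        responses["interpretations"] = {q: input(q + " ").strip()
                                        for q in a_priori_questions}
        return responses

    answers = gather_commitments([
        "How do you interpret the urine sodium of 60 mmol/L?",
        "How do you interpret the chest X-ray?",
    ])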

Completing steps 3 to 5 with virtual patients is more challenging. These are dynamic tasks; that is, they are different for each learner and are dependent on data gathered from steps 1 and 2. So, how can we attempt these dynamic tasks in a static environment?

For step 3 we can try to predict the likely errors that learners will make and provide general rules for how to avoid these.17 But, considering how poorly physicians judge their own abilities, it is safe to assume that these predictions might be inaccurate and that general rules cannot correct all errors.18 We have several options for achieving steps 4 and 5, including presenting “outcome models,” through which positive and negative feedback is provided in the form of successful or adverse patient outcomes,2 or showing learners the sequence of steps, or “trace,” that an expert uses when diagnosing and treating the case. This trace reinforces what the learners did well and allows them to correct their errors by comparing their processes with those of the expert. While this approach has some appeal (e.g., encouraging learners to reflect on their performance and self-diagnose), it also has some important limitations. Experts have different amounts of knowledge and different knowledge structures from nonexperts, and they tend to use less conscious reasoning.16,19 This issue of cognitive incongruence, which obviously extends beyond the virtual environment, may make it difficult for experts to trace their steps for nonexperts, and for the nonexperts to reproduce these steps.20 Thus, expert traces may require translation into learning content that nonexperts can understand.
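As a sketch of one way such a trace comparison could be operationalized (a hypothetical implementation of steps 4 and 5, not one drawn from the cited literature), a program could record the learner's ordered actions and diff them against an expert's trace, reinforcing shared steps and flagging omissions:

    def compare_traces(learner_trace, expert_trace):
        """Compare the learner's actions with an expert's trace. Returns the
        steps to reinforce (step 4: what the learner did well) and the expert
        steps the learner omitted (step 5: errors to correct)."""
        expert_steps = set(expert_trace)
        learner_steps = set(learner_trace)
        reinforce = [s for s in learner_trace if s in expert_steps]
        address = [s for s in expert_trace if s not in learner_steps]
        return reinforce, address

    expert = ["measure serum osmolality", "measure urine osmolality",
              "measure urine sodium", "review chest X-ray"]
    learner = ["measure serum osmolality", "review chest X-ray"]

    done_well, missed = compare_traces(learner, expert)
    print("Reinforce:", done_well)
    print("Address:", missed)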

Potential strategies for improving feedback in a virtual environment

If the goal of virtual learning is to improve diagnostic performance, the five-step model of effective clinical teaching is an appropriate framework to follow; we just need to achieve these steps in a different way. In this dynamic learning environment, within which each student can take a different path and arrive at a different end point, a one-size-fits-all approach to feedback is inappropriate. Therefore, we propose “adaptive feedback,” which acknowledges the different needs of learners and tailors feedback to those needs, offering solutions to their specific weaknesses in diagnostic performance.

Adaptive feedback can occur in the virtual environment through a combination of virtual and in-person teaching. Such an approach could involve learners choosing from three levels of feedback: (1) seeing the correct diagnosis and patient outcomes, (2) seeing an expert trace, and/or (3) meeting with a preceptor to discuss the case. For example, when presented with a case of hyponatremia due to the syndrome of inappropriate antidiuretic hormone (SIADH) as a result of ectopic production from small cell lung cancer, the learner who has confidently and accurately interpreted the urine electrolytes and chest X-ray findings may request little feedback beyond confirmation that his or her diagnosis and explanation are correct. Students who have the wrong diagnosis, or who are less confident in their diagnosis, can access an expert trace explaining the characteristic urine electrolyte pattern of SIADH and the X-ray findings, or they can seek an in-person explanation of the diagnosis.
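The following minimal sketch shows how this triage might look in code; the confidence scale, the thresholds, and the idea of suggesting (rather than imposing) a default level are our illustrative assumptions, since the approach above leaves the final choice to the learner:

    def suggest_feedback_level(diagnosis_correct: bool, confidence: int) -> str:
        """Suggest one of the three feedback levels described above. The learner
        may still choose a different level. confidence is a self-rating from
        1 (guessing) to 5 (certain); the thresholds are illustrative."""
        if diagnosis_correct and confidence >= 4:
            # Level 1: confirmation only -- the correct diagnosis and outcomes.
            return "show correct diagnosis and patient outcomes"
        if diagnosis_correct or confidence >= 2:
            # Level 2: an expert trace, e.g., the characteristic urine
            # electrolyte pattern of SIADH and the chest X-ray findings.
            return "show expert trace"
        # Level 3: wrong diagnosis and low confidence -- route the learner
        # to an in-person discussion with a preceptor.
        return "arrange meeting with preceptor"

    print(suggest_feedback_level(diagnosis_correct=True, confidence=5))
    print(suggest_feedback_level(diagnosis_correct=False, confidence=3))
    print(suggest_feedback_level(diagnosis_correct=False, confidence=1))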

Transferring learning from the virtual to the real-life environment

Learning versus transfer

Learning occurs when new information is stored in long-term memory; transfer occurs when this information is retrieved and applied in another setting. For virtual patients to be an effective learning tool, the information encoded in the virtual learning environment must transfer to the real-life clinical setting. So, how can we increase the probability of transfer?

Transfer can be “intentional” (high road) or “automatic” (low road), and techniques used to encourage transfer differ depending on the type of transfer targeted.21 Metacognitive approaches target intentional transfer, and these approaches include training learners to make explicit comparisons between new and previous problems or using questions or cues to activate stored knowledge that is relevant to a new problem.22 Unfortunately, learners, particularly novices, tend to focus on superficial features of a case, such as a patient's occupation or hobby,23,24 which may lead to false transfer and poorer performance.25 Also, in the clinical setting, providing activating questions or cues may not be feasible, which is of concern because transfer between dissimilar contexts is typically low without these.22

An alternative approach to enhancing transfer is “situated learning,”26 which targets automatic transfer. This theory acknowledges that transfer of learning is bound by the principle of encoding specificity, such that the probability of successful transfer parallels the similarity between the learning and retrieval situations.27 Differences between the learning and retrieval environments reduce the likelihood of transfer, perhaps explaining why medical students in traditional curricula have difficulty applying their basic science knowledge in the clinical setting.28 Consequently, significant contextual differences between virtual and real patients may present a major challenge to the transfer of learning.

Another issue related to transfer is the temporal relationship between the initial encoding and the subsequent retrieval of knowledge. Traditional curricula strive for “forward reaching” transfer; that is, learning in a given area (e.g., the basic sciences) is completed before attempting application to the transfer environment. Although this approach may be appealing, the literature on transfer of basic science knowledge suggests that this approach is less successful than experiencing both environments at the same time.29

Potential strategies for improving transfer

Although further research is clearly needed to evaluate the effectiveness of strategies designed to promote transfer of learning from the virtual to the clinical environment, we know enough about this process to predict which strategies are more or less likely to be effective. After all, the issue of transfer is not unique to medicine—just ask any golfer about transferring skills from the range to the course.30

We can facilitate automatic transfer of learning by making all aspects of the learning and retrieval environments as similar as possible, including the physical setting, the learning content, the role that the learners adopt, and the learners' attitudes toward the task.31 Rather than having a curriculum consisting of entirely virtual patients followed by a curriculum of exclusively real patients, we can provide early practical experience with real patients and integrate the virtual and real-life environments. Doing so would not only spare learners the burden of forward reaching transfer but also provide an anchor for virtual learning experiences. We can also promote intentional transfer of learning if we approach the virtual learning environment as the driving range—the place learners practice their skills before clinical encounters, and the place to which they return after clinical encounters to reflect on and improve their skills.

In Sum

Virtual patients are a means to an end: meeting LCME accreditation standard ED-2 and/or improving learning outcomes. Realistically, the goal of using virtual patients is not to provide learning experiences that are superior, or even equal, to real patient encounters, but to provide learning experiences that are better than nothing.32 Providing clinical clerks with comprehensive and meaningful learning experiences requires a balanced curriculum. In financial parlance, real patients are like equity stocks; they have the highest potential returns, but they are also higher risk (e.g., the patients may not show up, or the preceptor may not be an effective teacher). Virtual patients are like bonds—more predictable, but with lower potential returns. As with investing, the optimum balance between risk and return is achieved with a balanced portfolio of high-quality assets.33 Rather than debate whether we should include virtual patients in our clinical learning portfolio, perhaps we should accept them as a potentially valuable asset and focus on ways of improving their quality as a learning tool, addressing, in particular, the issues of feedback and transfer of learning.

Funding/Support:

None.

Other disclosures:

None.

Ethical approval:

Not applicable.

Disclaimer:

The views expressed are those of the authors and do not necessarily reflect the views or opinions of the supporting programs.

References

1Liaison Committee on Medical Education. Accreditation Standards. Available at: http://www.lcme.org/standard.htm. Accessed September 24, 2009.
2Huang G, Reynolds R, Chandler C. Virtual patient simulation at U.S. and Canadian medical schools. Acad Med. 2007;82:446–451.
3Fall LH, Berman NB, Smith S, White CB, Woodhead JC, Olson AL. Multi-institutional development and utilization of a computer-assisted learning program for the pediatrics clerkship: The CLIPP project. Acad Med. 2005;80:847–855.
4Rogers EM. Diffusion of Innovations. 5th ed. New York, NY: Free Press; 2003.
5Cook DA, Triola MM. Virtual patients: A critical literature review and proposed next steps. Med Educ. 2009;43:303–311.
6Harden RM, Hart IR. An international virtual medical school (IVIMEDS): The future for medical education? Med Teach. 2002;24:261–267.
7Sweller J. Cognitive load during problem solving: Effects on learning. Cogn Sci. 1988;12:257–285.
8Butterworth JS, Reppert EH. Auscultatory acumen in the general medical population. JAMA. 1960;174:32–34.
9Choudhry NK, Fletcher RH, Soumerai SB. Systematic review: The relationship between clinical experience and quality of health care. Ann Intern Med. 2005;142:260–273.
10Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev. 1993;100:363–406. Available at: http://projects.ict.usc.edu/itw/gel/EricssonDeliberatePracticePR93.pdf. Accessed September 24, 2009.
11Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians' clinical performance: BEME Guide No. 7. Med Teach. 2006;28:117–128.
12Neher JO, Gordon KC, Meyer B, Stevens N. A five-step “microskills” model of clinical teaching. J Am Board Fam Pract. 1992;5:419–424.
13Furney SL, Orsini AN, Orsetti KE, Stern DT, Gruppen LD, Irby DM. Teaching the one-minute preceptor: A randomized controlled trial. J Gen Intern Med. 2001;16:620–624.
14Aagaard E, Teherani A, Irby DM. Effectiveness of the one-minute preceptor model for diagnosing the patient and the learner: Proof of concept. Acad Med. 2004;79:42–49.
15Jonassen DH, Ionas IG. Designing effective supports for causal reasoning. Educ Technol Res Dev. 2008;56:287–308.
16Norman GR, Brooks LR. The non-analytical basis of clinical reasoning. Adv Health Sci Educ. 1997;2:173–184.
17Croskerry P. The importance of cognitive errors in diagnosis and strategies to minimize them. Acad Med. 2003;78:775–780.
18Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared to observed measures of competence: A systematic review. JAMA. 2005;296:1094–1102.
19Schmidt HG, Norman GR, Boshuizen HP. A cognitive perspective on medical expertise: Theory and implication. Acad Med. 1990;65:611–621.
20Schmidt HG, Moust JH. What makes a tutor effective? A structural equation modeling approach to learning in problem-based curricula. Acad Med. 1995;70:708–714.
21Perkins DN, Salomon G. Transfer of learning. In: International Encyclopedia of Education. Oxford, UK: Elsevier; 1994.
22Catrambone R, Holyoak KJ. Overcoming contextual limitations on problem-solving transfer. J Exp Psychol Learn Mem Cogn. 1989;15:1147–1156. Available at: http://www.nbu.bg/cogs/personal/kokinov/COG501/Overcoming%20Contextual%20Limitations%20on%20Problem-Solving%20Transfer.pdf. Accessed September 24, 2009.
23Eveleth DM. Analogical reasoning: When a non-expert reasons like an expert. J Behav Appl Manag. 1999;1:28–40. Available at: http://ibam.com/pubs/jbam/articles/vol1/article1.htm. Accessed September 24, 2009.
24Hatala R, Norman GR, Brooks LR. Influence of a single example on subsequent electrocardiogram interpretation. Teach Learn Med. 1999;11:110–117.
25Read S, Cesa I. This reminds me of the time when ...: Expectation failures in reminding and explanation. J Exp Soc Psychol. 1991;27:1–25.
26Brown JS, Collins A, Duguid P. Situated cognition and the culture of learning. Educ Res. 1989;18:32–42.
27Tulving E, Thomson DM. Encoding specificity and retrieval processes in episodic memory. Psychol Rev. 1973;80:352–373.
28Patel VL, Evans DA, Groen GJ. Biomedical knowledge and clinical reasoning. In: Evans DA, Groen GJ, eds. Cognitive Science in Medicine: Biomedical Modeling. Cambridge, MA: MIT Press; 1989.
29Patel VL, Kaufman DR. Clinical reasoning and biomedical knowledge: Implications for teaching. In: Higgs J, Jones M, eds. Clinical Reasoning in the Health Professions. 2nd ed. Oxford, UK: Butterworth-Heinemann; 2002.
30Lynch J. How to transfer your golf swing from the range to the course. No. 1 Golf Book Reviews [blog]. December 2007. Available at: http://no1golfbookreviews.blogspot.com. Accessed October 5, 2009.
31Needham DR, Begg IM. Problem-oriented training promotes spontaneous analogical transfer: Memory-oriented training promotes memory for training. Mem Cogn. 1991;19:543–557.
32Cook DA, Levinson AJ, Garside S, Dupras DM, Erwin PJ, Montori VM. Internet-based learning and the health professions: A meta-analysis. JAMA. 2008;300:1181–1196.
33Bernstein WJ. The Intelligent Asset Allocator: How to Build Your Portfolio to Maximize Returns and Minimize Risk. New York, NY: McGraw-Hill Professional; 2001.
© 2010 Association of American Medical Colleges