Coordinating Progressive Levels of Simulation Fidelity to Maximize Educational Benefit

Brydges, Ryan PhD; Carnahan, Heather PhD; Rose, Don PhD; Rose, Louise PhD; Dubrowski, Adam PhD

doi: 10.1097/ACM.0b013e3181d7aabd

Purpose To evaluate the effectiveness of a novel, simulation-based educational model rooted in scaffolding theory that capitalizes on a systematic progressive sequence of simulators that increase in realism (i.e., fidelity) and information content.

Method Forty-five medical students were randomly assigned to practice intravenous catheterization using high-fidelity training, low-fidelity training, or progressive training from low to mid to high fidelity. One week later, participants completed a transfer test on a standardized patient simulation. Blinded expert raters assessed participants' global clinical performance, communication, procedure documentation, and technical skills on the transfer test. Participants' management of the resources available during practice was also recorded. Data were analyzed using multivariate analysis of variance. The study was conducted in fall 2008 at the University of Toronto.

Results The high-fidelity group scored higher (P < .05) than the low-fidelity group on all measures except procedure documentation. The progressive group scored higher (P < .05) than other groups for documentation and global clinical performance and was equivalent to the high-fidelity group for communication and technical skills. Total practice time was greatest for the progressive group; however, this group required little practice time on the resource-intensive high-fidelity simulator.

Conclusions Allowing students to progress in their practice on simulators of increasing fidelity led to superior transfer of a broad range of clinical skills. Further, the progressive approach was resource-efficient, as participants concentrated their practice on the lower-fidelity, less resource-intensive simulators. We suggest that clinical training curricula incorporate exposure to multiple simulators to maximize educational benefit and potentially save educator time.

Dr. Brydges is a postdoctoral fellow, Centre for Health Education Scholarship, Faculty of Medicine, University of British Columbia, Vancouver, British Columbia, Canada.

Dr. Carnahan is full professor, Wilson Centre and Department of Occupational Science and Occupational Therapy, University of Toronto, Toronto, Ontario, Canada.

Dr. D. Rose is associate professor, Daphne Cockwell School of Nursing, Ryerson University, Toronto, Ontario, Canada.

Dr. L. Rose is assistant professor, Lawrence S. Bloomberg Faculty of Nursing, University of Toronto, Toronto, Ontario, Canada.

Dr. Dubrowski is assistant professor, The Learning Institute and the Research Institute, Hospital for Sick Children, and Department of Paediatrics, Faculty of Medicine, University of Toronto, Toronto, Ontario, Canada.

Correspondence should be addressed to Dr. Dubrowski, SickKids Learning Institute, 525 University Ave, Room 6021, Unit 600, Toronto, Ontario, Canada M5G 1X8; telephone: (416) 946-8270; fax: (416) 340-3792; e-mail: adam.dubrowski@utoronto.ca.

To facilitate students' transition to clinical practice, accreditation committees have advocated for the integration of simulators into undergraduate and graduate training programs.1,2 Recent advances in computer science and engineering have resulted in the development of a plethora of simulators that vary across the range of realism, referred to in this context as “fidelity.”3–5 Traditionally, educators have favored high-fidelity simulators for teaching technical clinical skills based on the assumption that these simulators provide the optimal context to prepare students for clinical duties.6,7 This justification for high-fidelity simulation, however, is seldom based on empirical data. For example, several research studies report similar learning outcomes for low-fidelity and high-fidelity technical skills training.8–10 Alessi3–5 suggests low-fidelity simulation is best for novice students, initial learning, and performance improvement, whereas high-fidelity simulation is best for advanced students, transfer, and assessment. To guide the selection of simulators for educational programs, the theoretical principle of progressive learning has been proposed.11–13 Progressive learning involves gradual changes in simulator attributes as the student's ability improves with training. Whereas Alessi3–5 proposed a link between student progress and increases in simulator fidelity, progressive learning may also encompass increases in other simulator characteristics such as task difficulty11 and information content.

In the present study, we define a simulator's information content as including its fidelity along with the number of skills being trained, which can range from isolated skills to a set of integrated clinical skills. Most researchers use simulation to train an isolated skill (e.g., technical skill performance),11,14–17 and little evidence exists on the use of simulation to train the integrated set of multiple skills that prepare students for clinical practice.18 Using the progressive learning method, students may begin with low-fidelity simulators that train an isolated skill and gradually progress to high-fidelity simulators that incorporate the learned skill and introduce other skills in a more global, patient-centered context.18 Thus, the progressive method may maximize transfer to scenarios in which the student must coordinate all skills in clinical practice. Also, students can be given the opportunity to control their progress and decide when to transition to the next simulator level. Giving students control over practice conditions reflects growing evidence that directed self-guided learning (DSGL) enhances clinical skill acquisition.14,17

We tested the efficacy and feasibility of progressive learning of intravenous (IV) catheterization on low-, mid-, and high-fidelity simulators compared with use of either a low-fidelity or high-fidelity simulator in isolation. Skill transfer was evaluated using a scenario from the Integrated Procedural Performance Instrument (IPPI), an innovative, simulation-based training and assessment approach that emphasizes an integrated use of skills in patient–clinician interactions.18–21 We hypothesized that students in the progressive group would demonstrate better global clinical performance (e.g., technical, communication) and better skill transfer than students who learn entirely on a low- or high-fidelity simulator.

Method

Study population

Participants were recruited from all undergraduate medical years at the University of Toronto in the fall of 2008 (approximately 850 eligible students). Participants with previous experience inserting more than 10 peripheral IV catheters were excluded. Sample size was calculated using the Global Rating Scale (GRS) score, as this is considered the gold standard in performance-based assessment.1 On the basis of previous work,15 we required 12 participants per group to achieve adequate power (power = .80, α = .05). We used a random number generator to assign 45 participants equally to one of three interventions: progressive, low-fidelity, or high-fidelity. The University of Toronto research ethics board approved the study protocol. All participants provided written consent.
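
To illustrate the allocation step, the following is a minimal sketch of random assignment of 45 participants to the three study arms. It assumes sequential participant identifiers and a fixed seed, both of which are illustrative choices rather than details reported in the study.

    import random

    # Minimal sketch: the study reports using a random number generator to
    # assign 45 participants equally to three arms. The participant IDs and
    # the fixed seed below are assumptions made for this demonstration.
    GROUPS = ["progressive", "low-fidelity", "high-fidelity"]

    def allocate(participant_ids, groups=GROUPS, seed=2008):
        rng = random.Random(seed)
        ids = list(participant_ids)
        rng.shuffle(ids)
        per_group = len(ids) // len(groups)  # 45 // 3 = 15 per arm
        return {g: ids[i * per_group:(i + 1) * per_group]
                for i, g in enumerate(groups)}

    assignment = allocate(range(1, 46))
    for group, members in assignment.items():
        print(group, "->", len(members), "participants")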

Study apparatus

We categorized three simulators as low-, mid-, or high-fidelity using Alessi's3–5 definition of simulator fidelity. Our low-fidelity simulator was the Virtual IV (Laerdal Medical), a computer-based system that provides haptic feedback and rudimentary patient communication cues. Although this simulator is technologically advanced, it does not offer students the opportunity for physical contact in many aspects of the procedure (e.g., with instruments or veins), thus limiting the simulator's responsiveness and resulting in low fidelity. The mid-fidelity simulator was an inanimate plastic arm (Nasco Health Care, Model LF01121U), the most commonly used simulator for training IV catheterization.22 Although patient communication cues were not present, physical interaction with instruments and with veins containing simulated blood enhanced the fidelity relative to the Virtual IV. The high-fidelity simulator was a SimMan (Laerdal Medical, Model 211–00050) used in a highly contextualized environment resembling a hospital ward. SimMan was placed in a hospital bed, its veins contained “blood,” and it responded to the student's actions. Responses were voiced by a research assistant in a remote location who, using a microphone, reacted to participants' actions according to an IPPI script.18–21 Relative to the inanimate arm, SimMan had higher fidelity because the “patient” responded to the student's actions.

For the transfer test, students performed IV catheterization on an IPPI-type simulation that combined a standardized patient (SP) with a different inanimate plastic arm (Nasco Health Care, Model LF01126U) and a second patient case from the IPPI protocol.18–21 Use of the IPPI-type simulation in the context of a mock hospital room represented the highest level of fidelity in the study.

Study design and procedure

We used a randomized, three-arm intervention study design (progressive, high-fidelity, and low-fidelity). Initially, all participants watched an eight-minute instructional video of an expert nurse performing IV catheterization on a real patient. PowerPoint slides illustrating how to build patient rapport, along with tips on patient communication and patient safety, were inserted between the video segments covering the preparatory (e.g., maintaining sterility) and technical aspects of the procedure. Participants were then assigned to their study groups and provided with a list of seven process goals (List 1). Process goals are designed to direct students' attention to the mechanisms of their performance and have been shown to benefit learning.23,24 The goals list was created using published guidelines23 and in consultation with experts in IV catheterization. Participants could refer to the goals list at any time during practice.

List 1 Process Goals Representing Validated Learning Strategies for IV Catheterization Provided to Students During Practice for IV Catheterization Simulation

Participants in the progressive group switched from low- to mid- to high-fidelity simulators in a self-guided manner. After switching, participants could not return to a previous simulator. Participants were informed that “the final trial you perform on each simulator will be videotaped for subsequent analyses.” Participants in the high-fidelity and low-fidelity groups practiced on their respective simulators until they chose to end practice. All participants were told, “If you feel that you have learned the task proficiently, you do not need to stay the full two hours.” We provided no further definition of the term “proficiently.”

The maximum practice time was two hours. Access to the instructional video was available to all participants at any time during practice. We used custom software to record individual participants' total video viewing time. One week after practice, all participants returned to complete a transfer test on the IPPI-type simulation.

Follow-up and outcome measures

We videotaped each participant's transfer test performance. Two IV catheterization experts watched the videos and evaluated participant performance using two scoring systems. Both scoring systems are analytic, meaning that performance is separated into measurable components that are scored individually and then summed to generate an overall score.25 First, global clinical performance was separated into isolated skill sets such as professionalism, situational awareness, communication, technical skill, and patient safety. The experts evaluated participants' global clinical performance using the IPPI rating tool, a series of seven-point rating scales.18 Second, technical and communication skills were separated into measurable components and evaluated as follows: technical skills were evaluated using the validated GRS and checklist (CL),1,26 and communication skills were assessed using a previously validated five-item global scale of communication and interpersonal skills.21,25,27 Ratings from the two experts yielded single-item intraclass correlation coefficients (ICCs) of 0.55, 0.55, 0.61, and 0.71 for the IPPI rating, GRS, CL, and communication scale, respectively. Both raters were blinded to participants' identity and group assignment.
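
As an illustration of how such single-measure reliability coefficients can be computed from a subjects-by-raters score matrix, the sketch below implements the two-way random-effects, single-measure ICC (ICC(2,1) in Shrout and Fleiss's notation). Whether the study used this exact ICC variant is not stated, and the example ratings are invented for the demonstration.

    import numpy as np

    def icc_2_1(scores):
        """Two-way random-effects, single-measure ICC (Shrout & Fleiss ICC(2,1))."""
        x = np.asarray(scores, dtype=float)
        n, k = x.shape  # subjects x raters
        grand = x.mean()
        row_means = x.mean(axis=1)
        col_means = x.mean(axis=0)
        # Mean squares from the two-way ANOVA decomposition.
        ms_rows = k * np.sum((row_means - grand) ** 2) / (n - 1)
        ms_cols = n * np.sum((col_means - grand) ** 2) / (k - 1)
        resid = x - row_means[:, None] - col_means[None, :] + grand
        ms_error = np.sum(resid ** 2) / ((n - 1) * (k - 1))
        return (ms_rows - ms_error) / (
            ms_rows + (k - 1) * ms_error + k * (ms_cols - ms_error) / n)

    # Invented two-rater example (rows = participants, columns = raters).
    demo = [[4, 5], [3, 3], [5, 6], [2, 3], [6, 6], [3, 4]]
    print(round(icc_2_1(demo), 2))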

We assessed how participants documented the procedure to determine whether the three experimental groups differed in their ability to multitask and accumulate relevant information about the simulated patient and the procedure. After the transfer test, each participant completed a progress note using his or her own terminology to document the procedure. Documentation was scored on a previously reported five-point scale22 that included notation of location, date placed, catheter size, patient tolerance, and the participant's signature.
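
Because the documentation scale is a simple per-item tally, it can be made concrete with a short sketch. The five field names mirror the scale items listed above; the function itself is an assumption about how such a tally might be automated, not part of the study's procedure.

    from dataclasses import dataclass

    @dataclass
    class ProgressNote:
        # One boolean per item on the five-point documentation scale.
        location_noted: bool
        date_placed_noted: bool
        catheter_size_noted: bool
        patient_tolerance_noted: bool
        signed: bool

    def documentation_score(note: ProgressNote) -> int:
        """One point per documented item, for a score out of 5."""
        return sum([note.location_noted, note.date_placed_noted,
                    note.catheter_size_noted, note.patient_tolerance_noted,
                    note.signed])

    print(documentation_score(ProgressNote(True, True, False, True, True)))  # 4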

To evaluate participants' resource management, defined as the use of available resources during practice, we assessed the total time each group engaged in hands-on practice with each simulator and the total time each group spent viewing the instructional video during practice. Additionally, participants' perceptions of the educational value of each simulator were evaluated using a five-point Likert scale, with anchors of not useful (1), somewhat useful (3), and very useful (5).

Statistical analysis

A multivariate analysis of variance (MANOVA) using the Wilks lambda criterion tested for group differences in the performance and resource management data. Following a significant MANOVA, separate univariate analyses of variance tested for group differences on each dependent measure. Post hoc analyses were performed using the Student–Newman–Keuls procedure.
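
The study's analyses were conducted in SPSS; as a rough open-source equivalent of the same pipeline, the sketch that follows runs an omnibus MANOVA and follow-up univariate ANOVAs with statsmodels on a synthetic data frame. The column names and data are invented, and because statsmodels does not provide the Student–Newman–Keuls procedure, a Tukey HSD test stands in for the post hoc step.

    import numpy as np
    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA
    from statsmodels.formula.api import ols
    from statsmodels.stats.anova import anova_lm
    from statsmodels.stats.multicomp import pairwise_tukeyhsd

    MEASURES = ["ippi", "grs", "checklist", "communication", "documentation"]

    def analyze(df):
        # Omnibus MANOVA across all dependent measures (reports Wilks lambda).
        manova = MANOVA.from_formula(" + ".join(MEASURES) + " ~ group", data=df)
        print(manova.mv_test())
        for m in MEASURES:
            # Follow-up univariate ANOVA for each dependent measure.
            print(m, anova_lm(ols(f"{m} ~ group", data=df).fit()))
            # Post hoc pairwise comparisons (Tukey HSD in place of SNK).
            print(pairwise_tukeyhsd(df[m], df["group"]))

    # Synthetic demonstration data: 15 participants per arm, as in the study.
    rng = np.random.default_rng(0)
    demo = pd.DataFrame({"group": np.repeat(["progressive", "low", "high"], 15),
                         **{m: rng.normal(5, 1, 45) for m in MEASURES}})
    analyze(demo)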

A repeated-measures MANOVA assessed the progressive group's resource management. The within-group factor of simulator fidelity (low, mid, high) tested for differences in total practice time, total video-viewing time, and participants' ratings of each simulator.

Two independent-samples t tests compared the progressive group's ratings of the low- and high-fidelity simulators with the respective ratings from the low-fidelity group and the high-fidelity group. Statistical significance was assessed at P < .05 throughout, and all analyses were conducted using SPSS version 15.0 (SPSS Inc, Chicago, Illinois). Finally, we converted means and standard deviations to standardized mean differences (Hedges g effect sizes).
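
The conversion to standardized mean differences reported in the Results can be sketched as follows from group means, standard deviations, and sample sizes. The formulas are the standard small-sample-corrected Hedges g and its normal-approximation 95% confidence interval; the numbers in the demonstration call are invented.

    import math

    def hedges_g(m1, s1, n1, m2, s2, n2):
        """Hedges g (bias-corrected Cohen's d) with a normal-approximation 95% CI."""
        df = n1 + n2 - 2
        pooled_sd = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / df)
        d = (m1 - m2) / pooled_sd
        g = (1 - 3 / (4 * df - 1)) * d  # small-sample correction factor J
        se = math.sqrt((n1 + n2) / (n1 * n2) + g**2 / (2 * df))
        return g, g - 1.96 * se, g + 1.96 * se

    # Invented example: two groups of 15, matching the study's group sizes.
    g, lower, upper = hedges_g(5.6, 0.8, 15, 4.9, 0.9, 15)
    print(f"g = {g:.2f}, 95% CI {lower:.2f} to {upper:.2f}")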

Results

The multivariate test of differences between groups was statistically significant (F14,72 = 5.96; P < .001). Follow-up univariate comparisons are outlined below and in Figure 1.

Figure 1

Performance data

IPPI ratings for the transfer test differed among groups (F2,42 = 12.29; P < .001). Progressive participants had better global clinical performance than high-fidelity participants (effect size = 0.78; 95% CI, 0.03 to 1.52), who in turn performed better than low-fidelity participants (effect size = 0.72; 95% CI, −0.01 to 1.46; Figure 1A). The progressive group provided better documentation of the procedure (F2,42 = 9.11; P < .001; Figure 1A) than the low-fidelity group (effect size = 0.98; 95% CI, 0.22–1.73) and the high-fidelity group (effect size = 1.40; 95% CI, 0.61–2.20).

According to the GRS (F2,42 = 7.68; P < .001) and communication scale (F2,42 = 5.76; P < .01), the progressive group (GRS effect size = 1.20; 95% CI, 0.42–1.97; communication effect size = 1.07; 95% CI, 0.30–1.83) and high-fidelity group (GRS effect size = 1.29; 95% CI, 0.51–2.08; communication effect size = 0.85; 95% CI, 0.11–1.60) performed better than the low-fidelity group (Figure 1B). Only the progressive group scored higher than the low-fidelity group according to the CL (F2,42 = 4.32; P < .05; Figure 1B; effect size = 0.96; 95% CI, 0.21–1.72).

Resource management data

Between groups, the progressive group practiced longer (F2,42 = 10.11; P < .001) than the other two groups, and the low-fidelity group practiced longer than the high-fidelity group (Figure 2). Participants in the high-fidelity group watched more of the instructional video (F2,42 = 3.53; P < .05) than those in the progressive and low-fidelity groups (Figure 2). Finally, progressive participants rated the low-fidelity simulator lower than did the low-fidelity group (t28 = 3.12, P < .01) and rated the high-fidelity simulator the same as did the high-fidelity group (t28 = 1.38, P = .18).

Figure 2

Within the progressive group, participants practiced longer (F2,28 = 8.80; P < .001; Figure 2) and watched more of the video (F2,28 = 13.40; P < .001; Figure 2) while on the low-fidelity simulator than on the other simulators; these time measures did not differ between the mid- and high-fidelity simulators. Participants rated the mid- and high-fidelity simulators as more educationally valuable than the low-fidelity simulator (F2,28 = 25.64; P < .001; Figure 3).

Figure 3

Discussion

Informed by the motor learning11,12 and psychology4,28,29 literature, we hypothesized that the progressive group would experience greater educational benefit than groups practicing exclusively on either low- or high-fidelity simulators. Our findings supported this hypothesis in that the progressive group scored higher than the other groups on global clinical performance and also documented the procedure in greater detail (Figure 1A). Evaluations of isolated skills showed no difference between the progressive and high-fidelity groups on technical (GRS) and communication skills, though both scored better than the low-fidelity group on these measures. Finally, the progressive group outperformed the low-fidelity group according to a second measure of technical skill (CL), whereas the high-fidelity group did not differ from either group.

Although results from the progressive versus high-fidelity group comparison are mixed (i.e., the progressive group scored higher on global but the same on isolated skill assessments), we favor the global assessment approach. Students' performance according to the global assessment has greater implications for their interactions with patients in real clinical settings because this assessment combines a broad set of interrelated clinical skills.18 Further, the isolated skill assessment data followed the same trend as the IPPI rating data (Figure 1B), and the lack of significant differences could be due to a measurement sensitivity issue.

The finding that the progressive group demonstrated the best global clinical performance on the transfer test can be explained using three theoretical perspectives: scaffolding theory,3,28,29 self-regulated learning theory,23,24 and motor learning theory.30 In accordance with scaffolding theory, the progressive group received scaffolded (i.e., structured) information content in a way that facilitated skill transfer to a realistic patient encounter. Initial practice on the low-fidelity simulator emphasized the procedural steps of IV catheterization (e.g., choice of instruments) without the need for careful motor performance or communication with the patient. The mid-fidelity simulator allowed participants to become more familiar with the technical skills and with the sensory feedback arising from performance (e.g., flashback on successful catheterization). Practice with actual instruments and physical contact with an arm model built on experience from the low-fidelity simulator yet did not require patient communication. Finally, the high-fidelity simulator enabled consolidation of knowledge from the first two simulators, application of that knowledge in a new setting, and the development of communication and professionalism skills.

From a self-regulated learning perspective, the progressive group's superior performance may relate to participants' frequent opportunities to self-monitor their learning progress. When self-monitoring, a participant may focus on reducing the distance between his or her perceived current level of performance and his or her goal performance.23,24 We hypothesize that progressive participants switched between simulators on the basis of the combined effect of their motivation and their self-monitored level of performance. Having participants decide when to progress between simulators created more explicit self-monitoring opportunities for those in the progressive group relative to those in the low- and high-fidelity groups. Schunk31 has shown that increased self-monitoring opportunities lead to improved learning outcomes.

Motor learning theorists have found that skill transfer is enhanced during variable practice compared with practice limited to the same learning context.30 The progressive learning method clearly offers students such variety. Further, the gap between novice students' background knowledge and the information content of a high-fidelity simulator may impede learning by overwhelming the students' information-processing abilities12 and possibly reducing motivation.32 Our findings add to a growing body of evidence suggesting medical students can effectively self-guide and learn fundamental clinical skills.16,17 However, stronger conclusions cannot be drawn because further research is needed to determine whether DSGL is as effective as instructor-guided learning.

According to the resource management data, the progressive group spent the most time engaging in hands-on practice; this was expected, because the group was required to practice on three separate simulators rather than a single simulator. By contrast, the high-fidelity group had the lowest total practice time yet achieved high scores on all performance measures, suggesting at first glance that high-fidelity practice might be more efficient than progressive practice. Closer inspection of the data, however, suggests a different interpretation. Figure 2 shows that although the progressive group engaged in the most hands-on practice, 70% of that practice was on the low- and mid-fidelity simulators. The progressive group's average combined time practicing and watching the video on the high-fidelity simulator was 16.5 minutes, whereas the high-fidelity group spent 51 minutes on the same activities. Thus, progressive participants required close to 70% less time (1 − 16.5/51 ≈ 0.68) on the resource-intensive, high-fidelity simulator. Consequently, progressive practice seems to be the most resource- and cost-efficient, because instruction using the high-fidelity simulator requires knowledge of SimMan technology and an educator to provide instruction and the patient's “voice.”

We also assessed the within-group resource management data for the progressive group. Progressive participants rated the educational value of the mid- and high-fidelity simulators higher than that of the low-fidelity simulator; however, they spent more time practicing and watching the video when working with the latter (Figure 2). Interpreted within scaffolding theory, these data suggest that some forms of simulation (e.g., the Virtual IV) have significant limitations as stand-alone learning modalities but do have value when integrated into a progressive training regime incorporating other simulation modalities.

In the present study, the computerized simulator was considered low-fidelity on the basis of previous work.3,33 Though this designation may surprise some readers, the data demonstrate that, when defining a simulator's fidelity, it is important to consider the specific simulation content and how that content is represented.3–5 A contrasting example is the simulation of changing hemodynamic variables, which are simulated well by computer programs and less well by “realistic” heart and lung models.

There are limitations to this study. The single-item ICC values were in the low to acceptable range for most dependent measures; these values are, however, close to those generally reported in previous work.20,25 Next, some may suggest that the progressive group's improved transfer test performance could be attributed to that group's opportunity to practice on a similar inanimate arm simulator, an opportunity not available to the other groups. Experience with the inanimate arm simulator may have enabled the progressive group to focus on other aspects of performance during the transfer test. We acknowledge that the low-fidelity participants were distinctly disadvantaged, as they did not perform a realistic IV catheterization during practice. However, similarities between the mid- and high-fidelity simulators' functionality mean that the progressive group likely did not have this advantage over the high-fidelity group. Moreover, the mid-fidelity simulators used for practice and for the transfer test were not the same model. Lack of generalizability beyond the learning of IV catheterization is another study limitation. In addition, differences in available resources across institutions will limit application of our findings because progressive learning, although cost-effective, is simulator-resource intensive. Finally, although DSGL seems to be effective, this educational approach is not without boundaries, and further work is needed to determine what supports (i.e., peers, tutors, faculty) are needed when self-guided learning falters.

Conclusions

In sum, our data suggest that simulation modalities should be integrated into curricula using evidence-based theoretical principles. Educational research intensity must match the rate at which simulation modalities are introduced into the medical field. Innovative approaches, such as the progressive learning method presented here, may reduce costs and demands on educators' time related to simulation-based activities. Our results suggest that the question is not which level of simulator fidelity is best but, rather, how we should incorporate the range of simulator fidelities into a progressive training regime. On the basis of the current data, we conclude that the earlier phases of learning can be self-guided, requiring no faculty presence and enabling greater availability of faculty resources for later phases that focus on skill consolidation.

Acknowledgments:

The authors wish to acknowledge Glenn Regehr, PhD, Wilson Centre, University of Toronto, for his significant creative and conceptual input on this paper. Dr. Regehr was not compensated for his contributions. The authors also wish to thank Kathleen Bowler, RN, BScN, St. Michael's Hospital; Charmaine Lodge, RN, BScN, MN, Canadian Blood Services; and Dionne Reelis, RN, BScN, ONC, Halton Healthcare Services, for their efforts during data acquisition and analysis. All received compensation for their work on this study. Finally, the authors wish to thank Professor Debra Nestel and Roger Kneebone, PhD, FRCS, FRCSEd, FRCGP, for providing the tools related to the Integrated Procedural Performance Instrument (IPPI). Drs. Nestel and Kneebone were not compensated for this contribution.

Funding/Support:

The project was supported by a grant from the Natural Sciences and Engineering Research Council (NSERC).

Other disclosures:

None.

Ethical approval:

The University of Toronto research ethics board approved the study protocol.

References

1Reznick RK, MacRae H. Teaching surgical skills—Changes in the wind. N Engl J Med. 2006;355:2664–2669.
2DaRosa D, Rogers DA, Williams RG, et al. Impact of a structured skills laboratory curriculum on surgery residents' intraoperative decision-making and technical skills. Acad Med. 2008;83(10 suppl):S68–S71.
3Alessi SM. Fidelity in the design of instructional simulations. J Comput Base Instr. 1988;15:40–47.
4Alessi SM. Dynamic versus static fidelity in a procedural simulation. Paper presented at: American Educational Research Association Annual Meeting; April 18–22, 1995; San Francisco, Calif.
5Alessi S. Simulation design for training and assessment. In: O'Neil HF, Andrews DH, eds. Aircrew Training and Assessment. Mahwah, NJ: Lawrence Erlbaum Associates Publishers; 2000:197–125.
6Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: A BEME systematic review. Med Teach. 2005;27:10–28.
7Gordon JA, Wilkerson WM, Shaffer DW, Armstrong EG. “Practicing” medicine without risk: Students' and educators' responses to high-fidelity patient simulation. Acad Med. 2001;76:469–472.
8Sidhu RS, Park J, Brydges R, MacRae HM, Dubrowski A. Laboratory-based vascular anastomosis training: A randomized controlled trial evaluating the effects of bench model fidelity and level of training on skill acquisition. J Vasc Surg. 2007;45:343–349.
9Grober ED, Hamstra SJ, Wanzel KR, et al. Laboratory based training in urological microsurgery with bench model simulators: A randomized controlled trial evaluating the durability of technical skill. J Urol. 2004;172:378–381.
10Chandra DB, Savoldelli GL, Joo HS, Weiss ID, Naik VN. Fiberoptic oral intubation: The effect of model fidelity on training for transfer to patient care. Anesthesiology. 2008;109:1007–1013.
11Dubrowski A, Park J, Moulton C, Larmer J, MacRae H. A comparison of single- and multiple-stage approaches to teaching laparoscopic suturing. Am J Surg. 2007;193:269–273.
12Guadagnoli M, Lee T. Challenge point: A framework for conceptualizing the effects of various practice conditions in motor learning. J Mot Behav. 2004;36:212–224.
13Quinn J, Peña C, McCune L. The effects of group and task structure in an instructional simulation. Paper presented at: Annual Meeting of the Association for Educational Communications and Technology; 1996; Indianapolis, Ind. ERIC Document Reproduction Service ED397772.
14Brydges R, Carnahan H, Safir O, Dubrowski A. How effective is self-guided learning of clinical technical skills? It's all about process. Med Educ. 2009;43:507–515.
15Xeroulis GJ, Park J, Moulton CA, Reznick RK, Leblanc V, Dubrowski A. Teaching suturing and knot-tying skills to medical students: A randomized controlled study comparing computer-based video instruction and (concurrent and summary) expert feedback. Surgery. 2007;141:442–449.
16Brydges R, Carnahan H, Backstein D, Dubrowski A. Application of motor learning principles to complex surgical tasks: Searching for the optimal practice schedule. J Mot Behav. 2007;39:40–48.
17Jowett N, LeBlanc V, Xeroulis G, MacRae H, Dubrowski A. Surgical skill acquisition with self-directed practice using computer-based video training. Am J Surg. 2007;193:237–242.
18Kneebone R, Nestel D, Yadollahi F, et al. Assessing procedural skills in context: Exploring the feasibility of an integrated procedural performance instrument (IPPI). Med Educ. 2006;40:1105–1114.
19Kneebone R, Bello F, Nestel D, Yadollahi F, Darzi A. Training and assessment of procedural skills in context using an integrated procedural performance instrument (IPPI). Stud Health Technol Inform. 2007;125:229–231.
20LeBlanc VR, Tabak D, Kneebone R, Nestel D, MacRae H, Moulton CA. Psychometric properties of an integrated assessment of technical and communication skills. Am J Surg. 2009;197:96–101.
21Moulton CA, Tabak D, Kneebone R, Nestel D, MacRae H, LeBlanc VR. Teaching communication skills using the integrated procedural performance instrument (IPPI): A randomized controlled trial. Am J Surg. 2009;197:113–118.
22Engum SA, Jeffries P, Fisher L. Intravenous catheter training system: Computer-based education versus traditional learning methods. Am J Surg. 2003;186:67–74.
23Zimmerman BJ, Kitsantas A. Developmental phases in self-regulation: Shifting from process goals to outcome goals. J Educ Psychol. 1997;89:29–36.
24Zimmerman BJ, Kitsantas A. Self-regulated learning of a motoric skill: The role of goal setting and self-monitoring. J Appl Sport Psychol. 1996;8:60–75.
25Hodges B, McIlroy JH. Analytic global OSCE ratings are sensitive to level of training. Med Educ. 2003;37:1012–1016.
26Faulkner H, Regehr G, Martin J, Reznick R. Validation of an objective structured assessment of technical skill for surgical residents. Acad Med. 1996;71:1363–1365.
27Hodges B, Turnbull J, Cohen R, Bienenstock A, Norman G. Evaluating communication skills in the OSCE format: Reliability and generalizability. Med Educ. 1996;30:38–43.
28Vygotsky LS. Mind in Society: The Development of Higher Psychological Processes. Cambridge, Mass: Harvard University Press; 1978.
29Wood D, Bruner JS, Ross G. The role of tutoring in problem solving. J Child Psychol Psychiatry. 1976;17:89–100.
30Shea CH, Wulf G. Schema theory: A critical appraisal and reevaluation. J Mot Behav. 2005;37:85–101.
31Schunk DH. Goal and self-evaluative influences during children's cognitive skill learning. Am Educ Res J. 1996;33:359–382.
32Schunk DH. Social cognitive theory and self-regulated learning. In: Zimmerman BJ, Schunk DH, eds. Self-Regulated Learning and Academic Achievement: Theoretical Perspectives. 2nd ed. Mahwah, NJ: Lawrence Erlbaum Associates Publishers; 2001:125–151.
33Hansen MM. Versatile, immersive, creative and dynamic virtual 3-D healthcare learning environments: A review of the literature. J Med Internet Res. 2008;10:e26.
© 2010 Association of American Medical Colleges