
Empirical Investigations

Evaluating the Influence of Goal Setting on Intravenous Catheterization Skill Acquisition and Transfer in a Hybrid Simulation Training Context

Brydges, Ryan PhD; Mallette, Claire MScN, PhD; Pollex, Heather BScN, EdD; Carnahan, Heather PhD; Dubrowski, Adam PhD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: August 2012 - Volume 7 - Issue 4 - p 236-242
doi: 10.1097/SIH.0b013e31825993f2

INTRODUCTION

Simulation training programs should have clearly articulated learning objectives. When teaching a complex clinical skill, an educator might reduce the skill to its components (eg, nontechnical and technical skills) and set “isolated objectives” for each component separately.1 Alternatively, an educator might preserve the skill’s full complexity and set learning objectives for all components concurrently (ie, what we call “holistic objectives”).

Training that concentrates on isolated objectives has been favored in the literature, perhaps because many simulators (eg, part task trainers) impose physical and operational limitations that constrain practice to individual skill components (eg, technical skills).2 Recognizing these limitations, Kneebone et al3 conceptualized a need for hybrid simulation4 to provide training that better approximates clinical practice. Specifically, a hybrid simulation combines standardized patients (SPs) with inanimate simulators or other pieces of equipment. Research has shown that trainees and faculty evaluate hybrid simulation favorably5–7 and that the psychometric properties of performance evaluations are acceptable.8

We are not aware of any studies that address whether the objectives provided in a hybrid simulation should be isolated or holistic. For guidance, we considered a conceptual framework2 in which hybrid simulation is placed along a training continuum. Ellaway et al2 describe a “practica continua” framework in which trainees (i) practice on part task simulators while setting isolated objectives, (ii) transition to hybrid simulation and holistic objectives, and (iii) integrate these experiences while practicing in the clinical setting. Starting with isolated objectives in such a progressive training regime9 may benefit learners because evidence suggests that dividing learners’ attention among several information sources (eg, setting holistic objectives) may overwhelm them and diminish skill retention.10 Conversely, some psychologic evidence supports always setting holistic objectives, suggesting that holistic performance is more than the sum of its parts (ie, many isolated objectives do not equal holistic objectives).11 Medical education researchers with this perspective believe that setting holistic objectives throughout training better prepares trainees for the clinical context.1,12

Given the diverging hypotheses implied by these alternative perspectives, we adopted the Canadian Institutes of Health Research framework for evaluating complex interventions to design an exploratory trial aimed at evaluating 2 goal setting interventions for hybrid simulation training. We refer to holistic goal setting as our active intervention and isolated goal setting as our appropriate alternative intervention.13 We used goal setting theory14–17 to define holistic and isolated learning goals for peripheral intravenous (IV) catheterization. The kinesiology literature suggests that learning is specific to the conditions that one experiences during practice.18 Therefore, we hypothesized that trainees who set specific goals would display greater acquisition and retention of the skills associated with those goals. We expected trainees who set holistic goals related to technical, cognitive, and communication skills to display superior holistic performance. Similarly, we expected trainees who set isolated goals related only to technical skills to display superior technical performance.

METHODS

Participants

Practicing health care professionals from diverse professions were recruited from 3 academic hospital sites in an urban community from August 2009 to October 2009. Participants were included in the study only if they had performed fewer than 10 IV starts during their career. Given that many of the participants had years of clinical experience, they were likely well practiced in the subtasks of IV catheterization (eg, communicating with the patient, describing potential complications), but most had no experience with the procedural skills associated with IV catheterization. One reason we targeted this participant pool is that we expected their previous clinical experience to better prepare them for the demands of hybrid simulation compared with complete novice trainees.

Based on previous work,9 we required 12 participants per group to achieve a statistical power (1 − β) of 0.80 at α = 0.05. We assigned participants randomly to the holistic goal group (n = 14) or the isolated goal group (n = 15) using a random number generator. The local research ethics board approved the study protocol, and all participants provided written consent.
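
As context for replication, the following is a minimal Python sketch of one way to implement the allocation step. The article specifies only that "a random number generator" was used, so the shuffle-and-split procedure and the seed below are assumptions for illustration, not the authors' actual method.

import random

def allocate(participant_ids, seed=None):
    # The article reports only "a random number generator"; this
    # shuffle-and-split is one plausible implementation.
    rng = random.Random(seed)
    ids = list(participant_ids)
    rng.shuffle(ids)
    half = len(ids) // 2
    return {"holistic": ids[:half], "isolated": ids[half:]}

groups = allocate(range(1, 30), seed=42)  # 29 participants, as in the study
print(len(groups["holistic"]), len(groups["isolated"]))  # 14 and 15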

Study Apparatus

The study took place in a simulation laboratory at an academic hospital. The hybrid simulation consisted of an SP lying in a bed with an arm simulator extending from beneath the bedsheets (Fig. 1). The arm simulator (Model LF01121U; Nasco Health Care) is often used for IV catheterization training.19 Throughout the 2-hour training session, the SP portrayed “Mrs. Johnson,” a quiet and withdrawn 75-year-old woman experiencing persistent vomiting and diarrhea. For the transfer test, participants performed in a new hybrid simulation scenario with “Mr. Logan,” a drunk and agitated 24-year-old man with lacerations to his head after a motor vehicle accident.

FIGURE 1: Hybrid simulation with SP and part task trainer model (circled in white).

We developed the learning goals using expert consensus, followed by application of goal setting theory. First, we created a list of IV learning objectives and organized them into technical, cognitive, and communication categories. Next, 2 expert IV nurses independently selected 3 objectives as the “highest priority” in each category. After resolving disagreements, the experts generated 2 lists: 9 technical objectives for the isolated group, and 9 objectives spanning the technical, cognitive, and communication categories (3 in each) for the holistic group. Finally, we used goal setting theory14,17 to frame the objectives as process goals, meaning we oriented participants to concentrate on the mechanisms of their performance (Table 1).

TABLE 1: The 9 Learning Goals Provided to Participants to Help Guide Their Focus During Practice

Study Design and Procedure

We completed a prospective randomized 2-arm study to compare the training efficacy of the 2 goal setting interventions. All participants completed the experiment individually. Initially, participants watched an 8-minute instructional video that included a full demonstration of IV catheterization on a real patient, with equal focus (ie, 4 minutes each) on technical and nontechnical aspects (eg, obtaining consent, choosing equipment, patient examination). Next, participants were given the list of 9 goals specific to their assigned group to read and study before completing the first IV attempt. Participants could take as much time as they wished to review the goal list independently, both during this initial review and at any time during practice.

After watching the video and reporting that they had reviewed the goal list as long as desired, participants received instructions about Mrs. Johnson. Next, each participant completed a videotaped first attempt of IV catheterization. After completing the first attempt, participants had full control over the length of time that they spent practicing and could complete multiple IV attempts during this time. On deciding to end practice, participants completed 1 final videotaped trial. Finally, participants completed a questionnaire to rate and comment on their learning experience. We asked participants to avoid studying any IV catheterization materials during the 1-week retention period.

Approximately 1 week after practice, participants completed the transfer test with Mr. Logan. During the test, participants did not have access to the video or the goal list.

Outcome Measures

Holistic and Technical Performance

Two blinded expert IV nurses independently rated the participants’ videotaped performances on the first and last practice trials and the transfer test. To generate a holistic performance score, the raters used the Direct Observation of Procedural Skills (DOPS) tool, a series of 6-point Likert scales (competent score, 44; maximum score, 66) assessing components of clinical performance such as professionalism, situational awareness, communication, technical skill, and patient safety. Validity evidence related to content and relations to other variables has been collected previously for the DOPS tool.20 To create a “technical skills performance” score, we summed the scores for the DOPS categories of “preparation for procedure,” “technical performance,” and “maintenance of asepsis” (competent score, 12; maximum score, 18).
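
If each DOPS item shares the same 6-point scale with a competent anchor of 4, the reported totals (competent 44, maximum 66) imply 11 items, 3 of which form the technical composite. The following minimal Python sketch shows the composite arithmetic; it assumes a simple summation of item ratings, and any item labels beyond the 3 technical categories named above would come from the DOPS form itself.

TECHNICAL_ITEMS = {"preparation for procedure",
                   "technical performance",
                   "maintenance of asepsis"}

def dops_scores(item_ratings):
    # item_ratings: dict mapping each DOPS item name to its 1-6 rating.
    holistic = sum(item_ratings.values())                 # competent 44, max 66
    technical = sum(rating for item, rating in item_ratings.items()
                    if item in TECHNICAL_ITEMS)           # competent 12, max 18
    return holistic, technical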

Postpractice Questionnaire

Immediately after practice, participants provided ratings on a 10-point Likert scale of their confidence (5 scales; maximum score, 50), satisfaction (5 scales; maximum score, 50), overall rating of learning experience, rating of the perceived utility of the learning goals, and rating of the attention they paid to the learning objectives. Also on the questionnaire, the participants were asked, “Please describe all learning goal(s) that you focused on most from either the provided list or if you created your own goals that were not on the list.” Two authors (R.B. and C.M.) worked independently to group participants’ written answers into the baseline categories of technical, cognitive, and communication goals and the emergent categories of competency and patient comfort goals.

Resource Management Measures

We collected additional measures related to participants’ resource management, defined as the use of resources during practice. We assessed the total practice time, the total number of IV attempts (trials), and the total time participants spent viewing the instructional video.

Statistical Analyses

We analyzed the demographic data using separate independent samples t tests for parametric variables and χ2 tests for nonparametric variables. To analyze the participants’ holistic and technical DOPS scores on the first trial, the final trial, and the transfer test, we used a 2 × 3 repeated-measures analysis of variance with the between-subject factor of group and the within-subject factor of trial. To analyze the quantitative questionnaire and resource management data, we used separate independent samples t tests. We analyzed the frequency counts of participants’ learning goals using the Mann-Whitney U test. The α level was set at 0.05. Finally, for the analyses of variance, we converted means (SDs) to standardized mean differences (Hedges g effect sizes).
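
As a sketch only (the article does not name its statistical software), the core analyses could be reproduced in Python roughly as follows, using pingouin for the mixed-design analysis of variance and the Hedges g effect size and SciPy for the Mann-Whitney U test. The data frame layout, column names, and all values are illustrative assumptions.

import numpy as np
import pandas as pd
import pingouin as pg
from scipy import stats

# Synthetic long-format stand-in for the study data (29 participants,
# 3 rated trials each); values are fabricated for illustration.
rng = np.random.default_rng(0)
rows = []
for subject in range(29):
    group = "holistic" if subject < 14 else "isolated"
    for trial in ("first", "last", "transfer"):
        rows.append({"subject": subject, "group": group, "trial": trial,
                     "dops": rng.normal(50 if group == "holistic" else 45, 5)})
df = pd.DataFrame(rows)

# 2 (group; between-subject) x 3 (trial; within-subject) ANOVA
aov = pg.mixed_anova(data=df, dv="dops", within="trial",
                     between="group", subject="subject")
print(aov[["Source", "F", "p-unc"]])

# Hedges g for the overall between-group difference
g = pg.compute_effsize(df.loc[df.group == "holistic", "dops"],
                       df.loc[df.group == "isolated", "dops"],
                       eftype="hedges")

# Mann-Whitney U test, as used for the goal-frequency counts
u, p = stats.mannwhitneyu([3, 2, 4, 3], [1, 0, 2, 1],
                          alternative="two-sided")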

RESULTS

Demographic Data

Participants had limited experience inserting peripheral IV catheters (previous IV starts: range, 0–10; median, 0), and the groups did not differ in the average number of previous attempts (Table 2). Furthermore, there was no between-group difference on any indices of self-reported clinical experience or in the length of the retention period (Table 2). Thus, the data confirm that we compared 2 equivalent groups with little technical IV catheterization skill and a similar amount of previous clinical experience interacting with patients.

TABLE 2: Participants’ Demographic Data

As a check on whether years of clinical experience and previous IV experience influenced our findings, we calculated 2-tailed Pearson correlation coefficients between those 2 variables and participants’ DOPS scores on the first trial, the last trial, and the transfer test. As shown in Table 3, the only significant correlation is between the first-trial DOPS score and previous IV experience for the isolated group; there is no such relationship for the holistic group. Furthermore, for both groups, the correlation between the DOPS score and previous IV experience diminishes from the first trial to the last trial and becomes even smaller on the transfer test. Across the study time line, then, the relationship between participants’ previous experience and their study performance dropped markedly.

TABLE 3: Correlations Between DOPS Scores on the First Trial, the Last Trial, and the Transfer Test and Participants’ Previous Number of IV Starts and Years of Clinical Experience

Holistic Performance Scores

Interrater reliability for the overall DOPS score was acceptable, with an intraclass correlation coefficient of 0.76. Our analysis of the DOPS score revealed that the holistic group performed at a higher level than the isolated group on all 3 trials (F1,27 = 7.69; P = 0.01; effect size, 0.71). As shown in Figure 2A, a trial main effect (F2,54 = 16.91; P < 0.001; effect size, 0.89) suggests that both groups improved from the first to the final attempt of practice (P < 0.01) and maintained performance between the final attempt and the transfer test (P > 0.05). The interaction term was not significant (F2,54 = 0.53, P = 0.59).
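
For readers unfamiliar with the statistic, an intraclass correlation of this kind can be computed as in the sketch below (using pingouin’s intraclass_corr). The rater labels and scores are fabricated for illustration, and the specific ICC model the authors used is not stated in the article.

import pandas as pd
import pingouin as pg

# Two raters' holistic DOPS scores for the same 10 performances;
# all values are fabricated for illustration.
icc_df = pd.DataFrame({
    "performance": list(range(10)) * 2,
    "rater": ["A"] * 10 + ["B"] * 10,
    "score": [50, 47, 55, 44, 60, 52, 48, 58, 46, 53,
              52, 45, 57, 46, 59, 50, 49, 56, 44, 55],
})
icc = pg.intraclass_corr(data=icc_df, targets="performance",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])  # reports the ICC1 through ICC3k variants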

FIGURE 2: Direct Observation of Procedural Skills performance data from the first trial, the last trial, and the transfer test (means with SE bars). Data for the full DOPS ratings (A) and the technical skills composite score (B).

Technical Skills Performance Scores

Our analysis of the technical skills composite score showed that the groups performed similarly across the 3 trials (F1,27 = 2.71, P = 0.11). A trial main effect (F2,54 = 21.01; P < 0.001; effect size, 1.27) suggests that both groups improved from the first to the final practice trial (P < 0.01) and maintained performance between the final practice attempt and the transfer test (P > 0.05; Fig. 2B). The interaction term was not significant (F2,54 = 0.51, P = 0.61).

Postpractice Questionnaire

Our secondary analysis showed no group differences on any self-report measures, including ratings of confidence, satisfaction, the overall learning experience, the perceived utility of the learning goals, and the attention participants reported paying to the learning objectives (Table 4).

TABLE 4: Participants’ Postpractice Questionnaire, Reported Learning Goals, and Resource Management Data

For our coding of participants’ stated learning goals, exact interrater agreement was 80% after the initial review and 100% after iterative discussion. We detected a significant group difference when analyzing the frequency of these learning goals. Specifically, the 2 groups did not differ in the number of technical, competence-related, patient comfort, or overall total goals that they focused on; however, the holistic group reported significantly more cognitive and communication goals than the isolated group (Table 4).

Resource Management Measures

The groups did not differ in the total practice time, the total number of practice trials, or in how they used the instructional video (Table 4).

DISCUSSION

Previous research has established the feasibility3,5,7 of hybrid simulation and provided psychometric data8 supporting its use as an assessment tool. Only recently have researchers explored designing hybrid simulation for clinical skills training.21–24 We chose to focus on how educators can set learning objectives for trainees to consider before and during hybrid simulation training. The data support our hypothesis because the holistic group achieved higher holistic performance scores than the isolated group (Fig. 2A). Despite the similar baseline clinical experience of the 2 groups, the holistic goal group displayed an immediate performance benefit after studying the holistic goal list just once (ie, on the first trial). This benefit persisted across the training session and transferred to a new hybrid simulation scenario. Conversely, we expected trainees in the isolated goal group to benefit from their exclusive focus on technical skills and to score higher than the holistic group on technical performance. Despite our prediction, our analyses showed that the 2 groups’ technical performance was equivalent on all attempts (Fig. 2B). Put another way, these results suggest that setting holistic goals did not interfere with participants’ attaining competence in technical skills. Also notable, the holistic goal setting intervention led to performance improvements for participants who were relatively “pretrained” in aspects of the task (ie, one might expect that pretrained health care professionals would not show an appreciable benefit from this training, but they did).

Assigning goals is one thing; asking trainees what goals they actually set is quite another. Our analysis of the goals that trainees reported focusing on and/or creating for themselves lends support to the notion that learning goals have an immediate and persistent biasing effect on performance (Table 4). The isolated group identified technical goals more often (although not significantly so) than the holistic group, whereas the holistic group reported setting significantly more goals related to communication and cognitive skills. These results are not surprising and speak to the desired bias created by the learning goals, whereby the holistic group focused nearly equally on technical, cognitive, and communication goals, whereas the isolated group concentrated mostly on technical goals. It appears, therefore, that well-prepared learning goals direct trainees’ learning efforts. Another interpretation of our findings is that divided attention was not an issue because adding psychosocial domains to the learning goals did not negatively affect the holistic group’s technical performance. That is, participants who set holistic goals processed multiple sources of information concurrently and effectively, and they did not seem overwhelmed, as might be predicted by psychologic evidence suggesting a limited attentional capacity.25

Another finding that is relevant for simulation researchers is the difference that we detected between learner satisfaction and learner performance. According to the modified Kirkpatrick hierarchy,26,27 level 1 is associated with learner participation and reaction, whereas level 2b is associated with modification of skills and knowledge. In many previous studies in the simulation context, emphasis has been placed on learner satisfaction scores to the exclusion of measuring the intervention’s effect on learner attitudes, beliefs, knowledge, and skills. Our findings show why this emphasis is problematic: both groups provided similar ratings of satisfaction, confidence, and the overall learning experience (Table 4), yet despite these similar Kirkpatrick level 1 findings, the groups’ transfer test performance differed significantly (Fig. 2). Therefore, we agree with others28 who call for an end to simulation research with learner satisfaction as the lone outcome. Furthermore, these observations can be linked with evidence that junior trainees often do not know what they do not know29,30 and, consequently, may be satisfied with their learning progress despite being incompetent at the task.

The appropriate integration of simulation into medical curricula is a critical topic in contemporary medical education research. One framework that challenges simulation researchers to ask how and when to integrate simulation is referred to as practica continua.2 Interfacing our findings with that framework, we propose that the “when” of different simulation modalities depends on the clinical skill of interest. Participants in the present study, for instance, acquired IV catheterization skills at a “competent” level. Thus, it seems that, for some clinical procedures, hybrid simulation combined with a list of learning goals may serve as a starting point that bypasses the need to set isolated learning objectives. Of course, such claims must be supported with additional experimental evidence.

Limitations

Acknowledging that simulation research is prone to low sample sizes and reduced generalizability, we identified the following limitations to the study. Our first limitation relates to the correlation data in Table 3, which show a significant relationship between participants’ previous number of IV starts and the first-trial DOPS score for the isolated goal group (but not the holistic goal group). Importantly, our study is a 2-group first trial/last trial/transfer test design; thus, the focus of the study is a comparison of the degree of change between the intervention groups. Hence, the within-group variability for previous number of IV starts is not a concern as long as the between-group difference is not significant at baseline. As shown in Table 2, there was no baseline difference in clinical and/or previous IV experience between the groups, suggesting that the groups began at the same collective experience level. That said, our results are limited because we relied on participants’ self-report of previous IV experience and did not collect a preeducation skill measure, which would have firmly established baseline equivalence of the groups. Furthermore, future studies may wish to obtain baseline scores before any instruction in either group, which would more accurately demonstrate that the groups were equivalent in baseline expertise. A related limitation is that participants’ previous patient care experiences (4–7 years) may have biased them to perform better in the holistic goal group, and future studies should replicate our design using trainees who do not have experience with either the technical or nontechnical task elements. Doing so will help to identify whether prior training played a significant role in our results. Another limitation is that the content of our holistic goals focused on 3 broad categories, and we acknowledge that clinical training goals extend beyond the technical, cognitive, and communication skill domains.

We believe that this study contributes to the emerging conversation on how simulation educators might design hybrid simulation training. Educational research is context dependent, which means it must be replicated both within and across contexts and learner populations. Our data cannot (and should not) be considered applicable to the many groups of trainees who would be targets of hybrid simulation training. Instead, this exploratory trial is a necessary step in the trajectory toward conducting definitive randomized trials that expand the generalizability of our findings. As one piece in that puzzle, our results suggest that asking learners to set holistic goals during hybrid simulation training did not “take away” from the learning of each isolated component skill (especially technical skills). It will be crucial to explore whether these findings hold in learners with more limited patient care experiences. Beyond this exploratory trial, future studies will be needed to further clarify the mechanisms that ensure effective goal setting in the context of hybrid simulation training.

ACKNOWLEDGMENTS

The authors thank Roger Kneebone and Debra Nestel for providing hybrid simulation scenarios. The authors also thank Susanne Nelson and Susan Bartlik for participating in the expert consensus–building process (no compensation provided) and Opal Robinson and Tracey Fletcher for serving as expert raters (compensation provided).

REFERENCES

1. Kneebone R. Perspective: simulation and transformational change: the paradox of expertise. Acad Med 2009; 84: 954–957.
2. Ellaway RH, Kneebone R, Lachapelle K, Topps D. Practica continua: connecting and combining simulation modalities for integrated teaching, learning and assessment. Med Teach 2009; 31: 725–731.
3. Kneebone R, Nestel D, Yadollahi F, et al. Assessing procedural skills in context: exploring the feasibility of an Integrated Procedural Performance Instrument (IPPI). Med Educ 2006; 40: 1105–1114.
4. Kneebone R. Webinar 13: Controversial issues in simulation. MedEdWorld Webinars Web site. Available at: http://www.amee.org/index.asp?llm=96. Accessed May 22, 2012.
5. Higham J, Nestel D, Lupton M, Kneebone R. Teaching and learning gynaecology examination with hybrid simulation. Clin Teach 2007; 4: 238–243.
6. Kneebone R, Nestel D, Wetzel C, et al. The human face of simulation: patient-focused simulation training. Acad Med 2006; 81: 919–924.
7. Kneebone R, Bello F, Nestel D, Yadollahi F, Darzi A. Training and assessment of procedural skills in context using an Integrated Procedural Performance Instrument (IPPI). Stud Health Technol Inform 2007; 125: 229–231.
8. LeBlanc VR, Tabak D, Kneebone R, Nestel D, MacRae H, Moulton CA. Psychometric properties of an integrated assessment of technical and communication skills. Am J Surg 2009; 197: 96–101.
9. Brydges R, Carnahan H, Rose D, Rose L, Dubrowski A. Coordinating progressive levels of simulation fidelity to maximize educational benefit. Acad Med 2010; 85: 806–812.
10. Naveh-Benjamin M, Craik FIM, Guez J, Kreuger S. Divided attention in younger and older adults: effects of strategy and relatedness on memory performance and secondary task costs. J Exp Psychol Learn Mem Cogn 2005; 31: 520–537.
11. Duncan J. Divided attention: the whole is more than the sum of its parts. J Exp Psychol Hum Percept Perform 1979; 5: 216–228.
12. Gordon JA, Hayden EM, Ahmed RA, Pawlowski JB, Khoury KN, Oriol NE. Early bedside care during preclinical medical education: can technology-enhanced patient simulation advance the Flexnerian ideal? Acad Med 2010; 85: 370–377.
13. Section 3.6.2: Framework for Evaluating Complex Interventions. Canadian Institutes of Health Research Web site. Available at: http://www.cihr-irsc.gc.ca/e/41946.html. Accessed February 9, 2012.
14. Zimmerman BJ, Kitsantas A. Developmental phases in self-regulation: shifting from process goals to outcome goals. J Educ Psychol 1997; 89: 29–36.
15. Zimmerman BJ, Kitsantas A. Self-regulated learning of a motoric skill: the role of goal setting and self-monitoring. J Appl Sport Psychol 1996; 8: 60–75.
16. Locke EA, Latham GP. A Theory of Goal Setting & Task Performance. Englewood Cliffs, NJ: Prentice Hall, Inc; 1990: xviii.
17. Brydges R, Carnahan H, Safir O, et al. How effective is self-guided learning of clinical technical skills? It’s all about process. Med Educ 2009; 43: 507–515.
18. Proteau L, Marteniuk RG, Lévesque L. A sensorimotor basis for motor learning: evidence indicating specificity of practice. Q J Exp Psychol A 1992; 44: 557–575.
19. Engum SA, Jeffries P, Fisher L. Intravenous catheter training system: computer-based education versus traditional learning methods. Am J Surg 2003; 186: 67–74.
20. Brydges R, Carnahan H, Rose D, Dubrowski A. Comparing self-guided learning and educator-guided learning formats for simulation-based clinical training. J Adv Nurs 2010; 66: 1832–1844.
21. Bowyer MW, Hanson JL, Pimentel EA, et al. Teaching breaking bad news using mixed reality simulation. J Surg Res 2010; 159: 462–467.
22. Girzadas DV, Antonis MS, Zerth H, et al. Hybrid simulation combining a high fidelity scenario with a pelvic ultrasound task trainer enhances the training and evaluation of endovaginal ultrasound skills. Acad Emerg Med 2009; 16: 429–435.
23. Crofts JF, Bartlett C, Ellis D, Hunt LP, Fox R, Draycott TJ. Management of shoulder dystocia: skill retention 6 and 12 months after training. Obstet Gynecol 2007; 110: 1069–1074.
24. Siassakos D, Draycott T, O’Brien K, Kenyon C, Bartlett C, Fox R. Exploratory randomized controlled trial of hybrid obstetric simulation training for undergraduate students. Simul Healthc 2010; 5: 193–198.
25. Engle RW, Kane MJ. Executive attention, working memory capacity, and a two-factor theory of cognitive control. In: Ross BH, ed. The Psychology of Learning and Motivation. Vol 44. New York, NY: Academic Press; 2003: 145–199.
26. Kirkpatrick Hierarchy for Assessment of Research Papers. American College of Surgeons Web site. Available at: http://www.facs.org/education/technicalskills/kirkpatrick/kirkpatrick.html. Accessed May 22, 2012.
27. Kirkpatrick DL. Evaluation of training. In: Craig R, Mittel I, eds. Training and Development Handbook. New York, NY: McGraw Hill; 1967: 87–112.
28. Cook DA. One drop at a time: research to advance the science of simulation. Simul Healthc 2010; 5: 1–4.
29. Eva KW, Cunnington JP, Reiter HI, et al. How can I know what I don’t know? Poor self assessment in a well-defined domain. Adv Health Sci Educ Theory Pract 2004; 9: 211–224.
30. Hodges B, Regehr G, Martin D. Difficulties in recognizing one’s own incompetence: novice physicians who are unskilled and unaware of it. Acad Med 2001; 76 (Suppl 10): S87–S89.
Keywords:

Patient-focused simulation; Skill retention; Skill transfer; Self-regulated; Self-directed; Integrated procedural performance instrument; Learning objectives

© 2012 Society for Simulation in Healthcare