The Application of Observational Practice and Educational Networking in Simulation-Based and Distributed Medical Education Contexts

Welsher, Arthur, MSc; Rojas, David, MSc, PhD(c); Khan, Zain, MSc; VanderBeek, Laura, MD; Kapralos, Bill, PhD; Grierson, Lawrence E.M., PhD

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: February 2018 - Volume 13 - Issue 1 - p 3–10
doi: 10.1097/SIH.0000000000000268
Empirical Investigations

Introduction: Research has revealed that individuals can improve technical skill performance by viewing demonstrations modeled by either expert or novice performers. These findings support the development of video-based observational practice communities that augment simulation-based skill education and connect geographically distributed learners. This study explores the experimental replicability of the observational learning effect when demonstrations are sampled from a community of distributed learners and serves as a context for understanding learner experiences within this type of training protocol.

Methods: Participants from 3 distributed medical campuses engaged in a simulation-based learning study of the elliptical excision in which they completed a video-recorded performance before being assigned to 1 of 3 groups for a 2-week observational practice intervention. One group observed expert demonstrations, another observed novice demonstrations, and the third observed a combination of both. Participants returned for posttesting immediately and 1 month after the intervention. Participants also engaged in interviews regarding their perceptions of the usability and relevance of video-based observational practice to clinical education.

Results: Checklist (P < 0.0001) and global rating (P < 0.0001) measures indicate that participants, regardless of group assignment, improved after the intervention and after a 1-month retention period. Analyses revealed no significant differences between groups. Qualitative analyses indicate that participants perceived the observational practice platform to be usable, relevant, and potentially improved with enhanced feedback delivery.

Conclusions: Video-based observational practice involving expert and/or novice demonstrations enhances simulation-based skill learning in a group of geographically distributed trainees. These findings support the use of Internet-mediated observational learning communities in distributed and simulation-based medical education contexts.

From the Department of Kinesiology (A.W., L.E.M.G.), McMaster University, Hamilton; Department of Medicine (D.R.), University of Toronto, Toronto; Faculty of Business and Information Technology (Z.K., B.K.), University of Ontario Institute of Technology, Oshawa; Departments of Surgery (L.V.), Family Medicine (L.E.M.G.), Community and Rural Education Program (Mac-CARE) (L.E.M.G.), Program for Educational Research and Development (L.E.M.G.), and Michael G. DeGroote School of Medicine (L.E.M.G., A.W., L.V.), McMaster University, Hamilton, Ontario, Canada.

Reprints: Lawrence Grierson, PhD, Department of Family Medicine, McMaster University, David Braley Health Sciences Centre, 100 Main St W, Hamilton, Ontario L8P 1H6, Canada (e-mail: griersle@mcmaster.ca).

A.W. led the experimental portion of the study as part of his graduate thesis work, including data collection and analysis, and was the lead author of the manuscript. D.R. led the evaluation portion of the study. D.R., Z.K., and B.K. developed and managed the OPEN system for the experiment. L.V. organized and managed the rating of video demonstrations. This work occurred in the medical education research laboratory led by L.E.M.G., who supervised all aspects of the project. All authors contributed to the critical revision of the paper, approved the final manuscript for publication, and have agreed to be accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work were appropriately investigated and resolved.

This work was generously supported by research funding awarded to L.E.M.G. through the Ontario Simulation Network (Sim-ONE).

The authors declare no conflict of interest.

This study was approved by the Hamilton Health Sciences' Integrated Research Ethics Board (Hamilton, Canada).

Attribution to: Department of Family Medicine and Program for Educational Research and Development, Faculty of Health Sciences, McMaster University.

A large portion of clinical learning within the fields of medicine, nursing, and surgery now occurs outside operating rooms and wards using simulation-based methodologies.1 Simulation is a desirable approach because it affords learners a controlled environment in which they can engage in repetitive and deliberate practice, receive constructive feedback, explore different strategies, and make errors without jeopardizing patient safety.2–4 However, because simulation is resource dependent and often challenged by the scheduling, opportunity, and availability constraints that are placed on trainees, video-based observational practice has been explored as a possible way to extend the learning achieved through simulation beyond the physical confines of the simulation center or in situ training environment. Although observational practice has a long history in health professions education (it is, after all, Halsted's “see one”), the proliferation of video technology in simulation spaces has enhanced the educator's ability to capture learner performances and take advantage of observation in an iterative, controlled, and systematic manner. Accordingly, the health professions education community has seen video-based observational practice integrated more prominently into a number of contexts including surgical technical skills training5,6 and postsimulation debriefing.7–10

Our medical education research group, which is composed of skill scientists, health care professionals, and systems engineers, acknowledges the great potential of video-based observational practice and has focused specifically on determining the ways in which observational practice approaches can connect health professional trainees with one another to augment their simulation-based training.11–13 Foundational to our perspective is considerable evidence that skill learning is facilitated through the observation of either perfect14–19 or imperfect20–25 demonstrations. Theories of observational learning that rely on a set of mirror neurons that activate when a task is either performed or observed explain that perfect demonstrations offer observers a mechanism to develop and refine internal representations of skills.26–28 Flawed demonstrations, on the other hand, are thought to support learning by revealing the costs associated with errors and, in turn, impacting higher-order strategies directed at optimizing actions for efficiency, accuracy, and safety.29 Regardless of underpinning mechanisms, what this means for video-based observational practice is that health professional learners can benefit from observing either the performances of experts or of other novices. As such, we have envisioned the development of observational learning communities, which use Internet-mediated social-networking platforms to connect learners and instructors in observation-based educational activities organized around the sharing of video performances captured during practice in the simulation laboratory.12

To facilitate a better understanding of our vision, we have custom developed and tested the Observational Practice and Educational Networking (OPEN) system.11–13,30,31 This system integrates video-streaming technologies into an Internet-mediated social media platform and allows learners and instructors to engage, as individuals or in groups, in educational activities that support the development of precision technical (ie, manual) skills through observational practice. These activities include uploading and reviewing videos of personal skill attempts, sharing video performances with peers and instructors, observing the videos of peers and instructors, and appraising the quality of viewed performances through associated assessment scales, contrasting videos, written augmented feedback, and the annotation of particular sections of video. To date, our research has revealed that health professions trainees can engage in effective observational learning via this system and that this approach is augmented when the activity includes observation of imperfect (ie, novice) clinical performances.11–13 However, our research has not yet tested this educational approach with novice demonstrations that are sampled directly from within the group of learners under study. Rather, in our preceding work, we maintained experimental control over the nature, distribution, and magnitude of the errors presented in the flawed demonstrations so as to be able to confirm that differences elicited between learning groups could be appropriately attributed to the factors that delineated those groups (ie, the observation of errors; the presence of feedback regarding errors). Importantly though, educational manifestations of the proposed observational learning communities will not be subject to such control.

In this study, we explored the way in which educational networking technology that enables observational practice around simulation-based activities can be of use within an observational community of learners. To do so, we conducted a simple observational learning experiment with learners from McMaster University's (Ontario, Canada) 3 regional undergraduate medical education campuses (Hamilton Region, Niagara Region, Waterloo Region) and followed up that experiment with an evaluation of the participants' experiences of engaging in observational practice via the OPEN system. We have regularly pointed to our findings as support that networked observational practice applications may be particularly valuable in providing effective and comparable technical skill learning to communities of learners whose members are separated by considerable geography, as is the case in many distributed education contexts.32 Specifically, the experiment challenged participants to learn the elliptical excision skill through a combination of physical simulation-based practice on a part-task trainer and 1 of 3 observational practice protocols, which were differentiated by the novice, expert, or mixed nature of the presented demonstrations. The elliptical excision is a common surgical task that involves sizing, outlining, incising, undermining, and excising a skin lesion and closing the resultant wound with sutures. Importantly, the stimuli that served as the novice model demonstrations were videos sampled from within the participants' simulation-based practice on the part-task trainer. In this regard, we expected the skill performances of the group that observed expert demonstrations exclusively to benefit after the observational intervention and considered improvement in the group that viewed novice demonstrations exclusively to be evidence that the error-based observational learning effect could be elicited within an ecologically relevant observational learning community. In a secondary fashion, the inclusion of the third mixed-model group allowed us to investigate whether observation-based clinical skills learning is further optimized through a combination of novice and expert demonstrations.11

The evaluation part of this study involved semistructured interviews with a portion of the participants. These were conducted on the basis of the Questionnaire of User Interaction Satisfaction, a popular framework for appraising user perceptions of engineering innovations,33 and aimed to ascertain the learners' perceptions of using the OPEN system in the context of their distributed medical education. In this regard, the participants' perceptions serve as reflections on the possible translation of this approach from an experimental context to an educational context.

METHODS

Participants

Twenty-two preclerkship medical students (7 males, 15 females; mean age = 23.47 years) were recruited from the Michael G. DeGroote School of Medicine at McMaster University's network of distributed campuses: the Niagara Regional Campus in St Catharines, Ontario (n = 9); the Kitchener-Waterloo Regional Campus in Kitchener, Ontario (n = 8); and the Hamilton Regional Campus in Hamilton, Ontario (n = 5). One additional undergraduate health sciences student with aspirations to pursue medicine also participated in the study (female, age = 22 years; Hamilton, Ontario). Our recruitment allowed us to create observational learning communities that reflect the standard size of the problem-based and clinical skills learning groups into which medical trainees at McMaster University are typically organized (ie, between 7 and 10 per group). No participant had previous experience performing an elliptical excision, and all had limited experience suturing (3.4 ± 2.36 hours of total practice). Participants were not remunerated; however, all viewed the study as a skills enrichment opportunity. All participants provided informed consent in accordance with the guidelines set out by the Hamilton Integrated Research Ethics Board and the Declaration of Helsinki (2013).

Experimental Protocol

The experimental protocol was divided into 4 phases: warm-up and pretest, acquisition, posttest, and retention (Fig. 1).

FIGURE 1

Warm-up and Pretest

In the warm-up and pretest phase, participants observed a standard, error-free instructional video that demonstrated the correct technique for performing the elliptical excision, including 3 simple interrupted sutures. They then read a set of written instructions that described the procedure in detail and were given the opportunity to view both the checklist and global rating scale that would be used to rate their performance. Participants then performed a warm-up attempt of the procedure on a skin pad (Professional Skin Pad Mk2-Light, Limbs & Things, Canada). They were not provided any augmented feedback regarding their performance or the results of their warm-up attempt. They then re-viewed the video and re-read the instructions once more before performing a pretest trial of the procedure, which was video recorded. Warm-up, pretest, and all subsequent performances were completed while wearing latex gloves.

Acquisition Phase

After the pretest, participants were randomly allocated to 1 of 3 experimental groups using a random number generator. Each group was asked to observe and assess 8 video-recorded elliptical excision performances over a period of 15 days (1 video every second day) via the OPEN system. On each occasion, the performance video was presented alongside the standard checklist and global rating scale for the elliptical excision.34 The participant was required to watch the video and to assess the performance by way of these scales. Thus, in this study, observational practice is operationally defined as critical viewing of demonstrations with reference to an established standard. It is noteworthy, however, that participants did not receive feedback regarding the correctness of their assessments of the demonstrations. It is also noteworthy that the participants did not practice the elliptical excision skill outside of the context of the experiment for the duration of the study.

The groups were differentiated by the nature of the performance content that was displayed within the suite of videos. In particular, Group E (n = 7) viewed an expert demonstration every other day, Group N (n = 8) viewed a novice demonstration every other day, and Group NE (n = 8) viewed interleaved expert and novice demonstrations over the acquisition phase. All the groups were counterbalanced with respect to the order of the videos and distribution of learners from the 3 campuses.
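
To make the allocation and counterbalancing procedure concrete, the following is a minimal illustrative sketch (in Python) of one way participants could be dealt into the 3 groups while balancing campus representation. The participant identifiers and data layout are hypothetical; this is not the code used in the study, which simply employed a random number generator.

```python
# Illustrative sketch only: stratified random allocation of participants to
# the three observation groups, balancing campus representation as described
# above. IDs and structures are assumptions, not the authors' materials.
import random

participants = {
    "Hamilton": [f"H{i}" for i in range(1, 7)],            # hypothetical IDs
    "Kitchener-Waterloo": [f"K{i}" for i in range(1, 9)],
    "Niagara": [f"N{i}" for i in range(1, 10)],
}

groups = {"E": [], "N": [], "NE": []}
labels = list(groups)

for campus, ids in participants.items():
    random.shuffle(ids)
    # Deal each campus's participants across groups in turn so that every
    # group receives a comparable number of learners from each campus.
    for i, pid in enumerate(ids):
        groups[labels[i % len(labels)]].append((campus, pid))

for label, members in groups.items():
    print(label, len(members), members)
```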

The expert videos were created by filming a general surgeon performing the elliptical excision using the same tools and simulation apparatus as the participants. A total of 8 videos were created, 1 per viewing day within the protocol. All the videos were rated to ensure that each demonstrated an error-free example. The mean checklist score for the expert videos was 24.88 ± 0.35 of a possible 25, and the mean global rating score was 4.95 ± 0.14 of a possible 5. The novice videos were selected randomly from among the pretest performances that scored between 30% and 70% on both the checklist and global rating scale measures. The mean checklist score for these videos was 13.88 ± 3.04, and the mean global rating score was 2.175 ± 0.33. Participants did not view their own videos.

Posttest

The posttest occurred after the 15-day acquisition phase (ie, on day 16). For this test, the participants returned to the laboratory to perform a video-recorded attempt of the elliptical excision on a skin pad in the same fashion as the pretest, excluding the warm-up.

Retention

The retention test occurred 1 month later (ie, on day 45). As with the posttest, the participants returned to the laboratory to perform a video-recorded retention test that followed the same protocol as the pretest and posttest.

Dependent Measures and Rating

The dependent measures for the experiment were the participants' total checklist score and average global rating score for each performed test.

All performances, including those that served as the demonstration videos, were rated by senior surgical residents who are regularly involved in educating and assessing the elliptical excision skill. Each video was rated by 3 different residents using a previously validated objective structured assessment of technical skills (OSATS) checklist and global rating scale.34 For the checklist, a minimum of 2 raters had to rate the item in the same manner for it to be documented. For the global rating scale measures, if all 3 raters disagreed on an item, then a fourth rater was enlisted to break the deadlock.

The rating occurred in 2 phases. The first phase occurred immediately after the warm-up and pretest and involved rating the pretest attempts. This phase of rating was necessary so that novice videos could be appropriately sampled for use in the acquisition portion of the study. The second phase of rating involved assessing the posttest and retention performances. All raters were blinded to group assignment, participant, and test. All ratings were conducted via observation of performances presented on the OPEN system. Any trials in which the performance was assessed as more than 2.5 SDs below the sample mean for either dependent variable were considered outliers and removed before analysis. This cleaning process resulted in 4 removed trials (3 pretest, 1 posttest).
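
For illustration, the sketch below encodes the rating-consolidation and outlier rules described above. The function names and data structures are our own assumptions for this sketch and are not part of the OPEN system; in particular, how a fourth rating resolves a three-way disagreement is not specified in the article.

```python
# Minimal sketch of the rating-consolidation and outlier rules described
# above, using hypothetical data structures (not the OPEN system's code).
from statistics import mean, stdev

def consensus_checklist_item(ratings):
    """A checklist item counts as documented only if at least 2 of the 3
    raters scored it the same way (here: 1 = performed, 0 = not performed)."""
    return int(sum(ratings) >= 2)

def consensus_global_item(three_ratings, fourth_rating=None):
    """Return the majority rating; if all 3 raters disagree, fall back to a
    fourth rater's score (assumption: the fourth rating breaks the deadlock)."""
    for r in three_ratings:
        if three_ratings.count(r) >= 2:
            return r
    return fourth_rating  # supplied only when the first three all differ

def remove_outliers(scores, cutoff=2.5):
    """Drop trials scoring more than `cutoff` SDs below the sample mean,
    applied separately to each dependent variable."""
    m, s = mean(scores), stdev(scores)
    return [x for x in scores if x >= m - cutoff * s]
```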

Experimental Analyses

Interrater reliability was established using intraclass correlation.

Separate 1-way analyses of variance of the participants' pretest checklist and global rating scale scores were completed to ensure group equivalence before the intervention period.

The checklist and global rating scores were compared in separate 3 Group (E, N, NE) by 3 Test (PRE, POST, RET) analyses of variance with repeated measures on the second factor. Effects significant at α = 0.05 were further analyzed using Tukey honestly significant difference (HSD) post hoc methodology.
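
As a worked illustration of this analysis pipeline, the sketch below shows how the intraclass correlation, pretest equivalence checks, and 3 × 3 mixed analyses of variance could be computed with standard Python statistics libraries. The column names and data frames are assumptions, the Tukey step shown here ignores the repeated-measures structure, and this is not the authors' analysis code.

```python
# Illustrative analysis sketch only (not the authors' code). Assumes a
# long-format DataFrame `df` with columns: participant, group (E/N/NE),
# test (PRE/POST/RET), checklist, global_rating; and a rater-level frame
# `ratings` with columns: video, rater, score.
import pingouin as pg
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

def analyze(df, ratings):
    # Interrater reliability via intraclass correlation.
    icc = pg.intraclass_corr(data=ratings, targets="video",
                             raters="rater", ratings="score")
    print(icc)

    # Pretest group equivalence: a separate one-way ANOVA per measure.
    pre = df[df["test"] == "PRE"]
    for dv in ("checklist", "global_rating"):
        samples = [g[dv].values for _, g in pre.groupby("group")]
        f, p = stats.f_oneway(*samples)
        print(f"Pretest {dv}: F = {f:.2f}, p = {p:.3f}")

    # 3 Group x 3 Test mixed ANOVA (repeated measures on test), per measure.
    for dv in ("checklist", "global_rating"):
        aov = pg.mixed_anova(data=df, dv=dv, within="test",
                             subject="participant", between="group")
        print(aov)
        # Post hoc comparison across tests. Note: Tukey HSD here treats the
        # three tests as independent groups, so it is only a rough stand-in
        # for the repeated-measures post hoc procedure reported above.
        print(pairwise_tukeyhsd(df[dv], df["test"], alpha=0.05))
```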

Follow-up Evaluation

Nineteen medical students were recruited purposively from the participant sample (6 Niagara Campus students; 6 Kitchener-Waterloo Campus students; 7 Hamilton Campus students; 12 females, 7 males; mean age = 23.1 years) to engage in semistructured interviews that probed their perceptions of their experience. These questions were developed on the basis of the Questionnaire of User Interaction Satisfaction, and the responses were appraised using the Questionnaire's focus on the system's perceived usability, relevance, and areas for improvement as an organizing framework.33 Selected interview quotes are presented in support of the expressed perceptions.

RESULTS

Experimental Analyses

Interrater Reliability

Intraclass correlation between the raters was found to be strong for both the checklist (r = 0.89; P < 0.001) and global rating (r = 0.80; P < 0.001) scores.35

Pretest Analyses

The pretest analyses revealed no significant differences between groups with respect to checklist scores (grand mean ± SE = 18.75 ± 0.90 of a possible 25) or global rating scores (grand mean ± SE = 2.61 ± 0.13 of a possible 5) for the elliptical excision procedure before the acquisition phase.

Group-by-Test Analyses

The analysis of the total checklist score measures revealed a significant main effect of test (F[2, 34] = 14.18, P < 0.0001, ηp² = 0.45). Post hoc decomposition of this effect indicated that all participants performed significantly better during the retention test than during the pretest or posttest, regardless of group assignment (Fig. 2). The analysis did not reveal a significant main effect of group (F[2, 17] = 1.68, P = 0.22, ηp² = 0.17) or a significant group-by-test interaction (F[4, 34] = 1.64, P = 0.19, ηp² = 0.16).

FIGURE 2

The analysis of the mean global rating score measures revealed a significant main effect of test (F[2, 34] = 27.06, P < 0.0001, ηp² = 0.61). Post hoc decomposition of this effect indicated that all participants performed better at posttest than at pretest and better at retention test than at posttest, regardless of group assignment (Fig. 3). The analysis did not reveal a significant main effect of group (F[2, 17] = 0.13, P = 0.88, ηp² = 0.01) or a significant group-by-test interaction (F[4, 34] = 0.89, P = 0.48, ηp² = 0.09).

FIGURE 3

Follow-up Evaluation Analyses

Perceived Usability

Participants agreed that the system was easy to use, sufficiently fast, and familiar given their technological knowledge:

“[Although] it felt like a beta version due to the anonymity of the [experiment]… it did not detract from the functionality.” [P4]

“The load of the videos was fast. The videos always load under any Internet connection, either, airport, house, coffee shop, even on the data on my phone, it was always fast!” [P5]

A general complaint was that the system's organization did not provide a simple map for navigating the breadth of performance videos that were shared. The respondents speculated that this issue would be more challenging in a nonexperimental context, wherein they might skip observing performances for several days. That is, as educational videos accumulate, navigating them in an appropriate order becomes a greater challenge.

Only 1 participant indicated experiencing technological malfunctions while using the networked observation system. This problem was solved when the participant changed computers; however, even a single malfunction reminds educators that contingencies must be planned for when relying on technological systems to deliver content.

Perceived Relevance

Participants agreed that versatile and accessible networked observation tools, such as OPEN, are beneficial to their learning:

“It was like a YouTube for school.” [P8]

However, despite this assertion, students acknowledged that they are not consistent in their use of already available online educational options:

“In my opinion, you have to make it mandatory, a lot of the things school give us that are optional, not a lot of people utilize them.” [P2]

When asked what would encourage them to start using such tools, participants mentioned that the particular content plays an important role in motivating use:

“You have to sell it to medical students that they would learn something. The content you provide is the most important, it has to be linked to what they need and want.” [P10]

Perceived Areas of Improvement

Multiple participant responses suggested that what would make the networked observational learning activity most relevant to them was direct feedback about their own performance:

“…a break-down of feedback would have been curious to see. Especially for the GRS, since [it is] a bit more subjective.” [P4]

“…I would be interested in my performance compared to other people…” [P1]

“I would add also some kind of personal progress, like a picture that contains all my scores to track my performance…a visual component would have been fun” [P3]

Although the present study did not examine educational effects associated with the delivery of feedback, it is noteworthy that many of the ideas for feedback delivery proposed by the learners are possible within this approach including instructor-led, social comparative, and peer-to-peer processes.12,31

DISCUSSION

In this study, we aimed to show that video-based observational practice could enhance simulation-based technical skill learning within an observational learning community of distributed health professional trainees. This was approached in 2 ways. First, we conducted an experiment that aimed to replicate findings derived from previous studies highlighting how skill learning is enhanced via observation of either perfect or imperfect demonstrations. The perspective that expert demonstrations are “best” is common and intuitively appealing; however, there is a broad base of theoretical research from kinesiology, psychology, and neurophysiology that has revealed very distinct learning advantages for the observation of flawed performances across a variety of skill classifications. The underlying concept, in general, is that expert demonstrations provide a reference for accurate performance, but that flawed demonstrations reveal to learners the consequences associated with errors, which influences the development of higher-order strategies for managing movement-based skills and the variability that is inherent to the neuromuscular system.17–29 Novel in this line of inquiry is the way in which this experiment sampled the imperfect demonstrations directly from the community of learners. Second, we used the Questionnaire of User Interaction Satisfaction to guide interviews designed to evaluate the participants' perceptions of the Internet-mediated OPEN system's usability and relevance to their clinical skill learning, while also eliciting learners' thoughts on areas for system improvement.33 The results of the experiment confirm that observational practice that includes either novice or expert viewing can support clinical skill learning, even when the novice demonstrations are sampled ecologically from video-recorded performances of the participating learners. Specifically, checklist and global rating measures demonstrate that all learners, regardless of group assignment, improved over the duration of the experiment. That all participants outperformed their pretest performances on both measures at posttest and after an extended retention period indicates that this observational learning effect is robust. Although this finding is in concert with our primary hypothesis and supports the organization of interactive observational learning communities around simulation-based learning activities, there are 2 particular details that warrant future investigation.

The first is that mixed observation of expert and novice demonstrations did not provide an additional learning advantage. Given that each type of demonstration provides viewers with differential forms of beneficial information, there was some expectation that a combination of demonstration types might provide the richest form of observational practice.20,25 There are a number of reasons why this effect may not have borne out in the current study. For one, in our earlier work, the participants who benefitted from mixed-model observation received information about the quality of the performance that they had just observed and assessed (ie, a consensus expert rating of that demonstration). In this study, the learners did not receive information about the quality of the viewed demonstration. Thus, it is possible that the present participants were not able to effectively differentiate the flawless and erroneous performances, which may have suppressed the cumulatively beneficial influence of mixed models. Alternatively, it is also possible that the mixed-model effect requires that the errors are of a certain nature, distribution, and/or magnitude to emerge. Indeed, where the benefit of mixed-model observation has been elicited in our own clinical11 and laboratory-based skill learning research,36 we have assured a consistency and balance of the type and magnitude of the errors to which observers are exposed. Aspects of this type of control may be essential to the mixed-model effect. Lastly, we must also acknowledge a pair of limitations that may have impacted our ability to detect differences between the experimental groups. For one, the study may have been underpowered. Given that our primary focus was to explore the replicability of previous experimental findings within an ecological context, we chose to test the differential observation learning interventions with group sizes that reflect the way that clinical learners are normally organized at McMaster University. Although we feel that the most meaningful educational effects should emerge within the natural constraints of the educational setting, we recognize that future work should test these ideas with larger group sizes and particular experimental attention to the characteristics of the errors to which learners are exposed to confirm the potential benefits of mixed-model observation in clinical skills learning. Second, learners' moderate-to-good initial performance on both measures may have left too little room to reveal differential degrees of improvement. In this regard, replication of the experiment with a task that is more complex than the elliptical excision may serve to elicit significant between-group differences. However, that the analyses revealed significant learning effects successively over 2 periods of retention suggests that the scales used in this study were sufficiently sensitive to reveal degrees of learning for this task.

The second interesting experimental finding is that all learners' performances improved at the retention test relative to the posttest. This outcome likely reflects how each instance of testing in our experimental design represents a specific instance of physical simulation-based practice for our participants. In this regard, the retention improvements may be a function of the additional experience gained during posttesting. However, we must also acknowledge that participants may have experienced some offline performance gains, which describe improvements in skill proficiency that occur without practice.37,38 These types of gains are not entirely unexpected and are often seen in procedural tasks.39 To disentangle any offline gains associated with video-based observational practice, future research may involve a similar paradigm but include additional learning groups that do not participate in the posttest portions of the experiment. With group considerations in mind, future work may also include control groups that do not participate in the observational activities and/or that participate only in review of the skill checklist and global rating scales or the observation of demonstrations (ie, without assessing) to parse out the central elements that drive the observational learning effect.

The evaluation of participants' perception of networked video-based observational practice as a way to enhance the skills learning associated with simulation-based activities highlighted the learners' impressions of the approach's usability, relevance, and areas for improvement. To begin, the learners felt the Internet-mediated delivery of video-based observational practice was familiar and usable. We recognize, however, that this likely reflects the generation of learners who were engaged in this study and may not be universally true for certain groups of older or younger trainees, although we anticipate that the use of these technologies will only become more widespread over time. The participants also highlighted that they felt video-based observational practice was relevant to their clinical education but admitted that the many competing demands on their time mean that educators would need to enact mandates or new forms of motivation to ensure their regular interaction with this type of curriculum. In terms of areas for improvement, there was consensus among participants that increased feedback about the quality of observed performances and demonstration assessments would enhance the learning experience. This is consistent with the idea that additional information during the observational practice activities may have distinguished an additional learning advantage for the mixed-model observers. In this regard, the nature and prevalence of feedback mechanisms within the OPEN system are central to some of our ongoing research efforts on the effectiveness of video-based observational practice communities. In particular, this new work investigates the way that scoring observers' assessments and providing social comparative ratings (eg, “You are performing above average”) work to motivate learners within the context of networked observational practice.31

Interestingly, the conduct of our simple observational learning experiment also highlights for us some important considerations for the implementation of video-based observational practice in simulation-based learning contexts. Most prominently, the application of effective video-based observational learning in skills education requires a commitment to the assessment of the video performances that will constitute the set of demonstrations. This, of course, requires some dedication of time and resources. For example, in our experiment, we needed to enlist 3 raters to perform assessments of demonstrations in a timespan of 24 hours to make our video stimuli available to participants at the start of the intervention period. This turnaround time is likely too quick for the educational community but also probably quicker than it needs to be. In a simulation-based education context, we can envision a number of curricular approaches that circumvent this short assessment turnaround time. For instance, a greater delay in beginning an intervention may be instituted in the educational context. Alternatively, educators may work to build a repository of already assessed demonstrations, which can be conveniently sampled. Furthermore, online-learning groups could be formed in which learners create personal performance videos that are assessed, by expert raters and/or peers, as part of the online-sharing process. With specific respect to assessment, it is noteworthy that in this experiment, the OPEN system allowed our raters to access and assess the participants' performances remotely. Thus, it is important to acknowledge that there is potential for video-based observational technologies to extend the effectiveness of clinical skill assessment insofar as they create opportunities to involve more raters in an efficient fashion. This may be particularly important for those learners who train in medical teaching environments with a paucity of individuals qualified to levy assessments of student progress. For instance, community health clinic teaching sites often have only 1 practitioner who is able to provide an assessment. Through video-based means, one can speculate on how the assessor pool may be broadened in a way that allows ratings to be administered by individuals who do not interact with the learners on a regular basis. In doing so, biases in assessment that manifest as a function of the personal relationship that forms between student and preceptor may be reduced.35

CONCLUSIONS

The results of this study indicate that observational practice involving a variety of demonstration types can support simulation-based clinical skills learning in a distributed group of health professional trainees. The evidence presented highlights that this approach permits educators a degree of flexibility in the way that they implement video-based observation into simulation curricula. For instance, novices can be partnered with each other, encouraged to watch intermediate performances, or exposed to classic forms of observational practice involving the viewing of expert performances. In this way, the competence level of the learning cohort or the demonstrators need not be considered a major constraint on this type of educational activity. In total, this work also affords us a stronger foundation on which to build an argument that video-based observation, facilitated by Internet-mediated platforms, is a viable way to support skill development via simulation. In making these conclusions, we acknowledge that the effectiveness of this approach may be contingent on sufficient information about demonstration quality being transmitted to the learners during their observational practice. Furthermore, we remind the reader that deliberate practice remains the criterion standard in skill acquisition across disciplines40 and highlight that observational approaches to learning should not forgo the feedback delivery process nor minimize the importance of physical practice.

ACKNOWLEDGMENTS

The authors wish to acknowledge the invaluable contributions of Allison Brown, Tami Everding, Dr Matthew Greenway, Dr Andrew Costa, Dr Ryan Fielding, Dr Tiffaney Kittmer, Dr Jessica Bogach, Dr Katherine Tedman, Dr Graeme Bock, and Dr Yaameen Chanda.

REFERENCES

1. Reznick RK, MacRae H. Teaching surgical skills—changes in the wind. N Engl J Med 2006;355:2664–2669.
2. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA 2011;306:978–988.
3. Kneebone R. Simulation in surgical training: educational issues and practical implications. Med Educ 2003;37:267–277.
4. McGaghie WC, Issenberg SB, Cohen ER, Barsuk JH, Wayne DB. Does simulation-based medical education with deliberate practice yield better results than traditional clinical education? A meta-analytic comparative review of the evidence. Acad Med 2011;86:706–711.
5. Rogers DA, Regehr G, Howdieshell TR, Yeh KA, Palm E. The impact of external feedback on computer-assisted learning for surgical technical skill training. Am J Surg 2000;179:341–343.
6. Xeroulis GJ, Park J, Moulton CA, Reznick RK, Leblanc V, Dubrowski A. Teaching suturing and knot-tying skills to medical students: a randomized controlled study comparing computer-based video instruction and (concurrent and summary) expert feedback. Surgery 2007;141:442–449.
7. Cant RP, Cooper SJ. Simulation-based learning in nurse education: systematic review. J Adv Nurs 2010;66:3–15.
8. Cheng A, Eppich W, Grant V, Sherbino J, Zendejas B, Cook DA. Debriefing for technology-enhanced simulation: a systematic review and meta-analysis. Med Educ 2014;48:657–666.
9. Fanning RM, Gaba DM. The role of debriefing in simulation-based learning. Simul Healthc 2007;2:115–125.
10. Grant JS, Moss J, Epps C, Watts P. Using video-facilitated feedback to improve student performance following high-fidelity simulation. Clin Simul Nurs 2010;6:e177–e184.
11. Domuracki K, Wong A, Olivieri L, Grierson LE. The impacts of observing flawed and flawless demonstrations on clinical skill learning. Med Educ 2015;49:186–192.
12. Grierson LE, Barry M, Kapralos B, Carnahan H, Dubrowski A. The role of collaborative interactivity in the observational practice of clinical skills. Med Educ 2012;46:409–416.
13. Welsher A, Grierson LE. Enhancing technical skill learning through interleaved mixed-model observational practice. Adv Health Sci Educ Theory Pract 2017.
14. Al-Abood SA, Davids KF, Bennett SJ. Specificity of task constraints and effects of visual demonstrations and verbal instructions in directing learners' search during skill acquisition. J Mot Behav 2001;33:295–305.
15. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice-Hall, Inc; 1986.
16. Blandin Y, Lhuisset L, Proteau L. Cognitive processes underlying observational learning of motor skills. Q J Exp Psychol A 1999;52:957–979.
17. Buchanan JJ, Dean NJ. Specificity in practice benefits learning in novice models and variability in demonstration benefits observational practice. Psychol Res 2010;74:313–326.
18. Buchanan JJ, Dean N. Consistently modeling the same movement strategy is more important than model skill level in observational learning contexts. Acta Psychol (Amst) 2014;146:19–27.
19. Hodges NJ, Chua R, Franks IM. The role of video in facilitating perception and action of a novel coordination movement. J Mot Behav 2003;35:247–260.
20. Andrieux M, Proteau L. Observation learning of a motor task: who and when? Exp Brain Res 2013;229:125–137.
21. Black CB, Wright DL. Can observational practice facilitate error recognition and movement production? Res Q Exerc Sport 2000;71:331–339.
22. Blandin Y, Proteau L. On the cognitive basis of observational learning: development of mechanisms for the detection and correction of errors. Q J Exp Psychol A 2000;53:846–867.
23. Buchanan JJ, Ryu YU, Zihlman K, Wright DL. Observational practice of relative but not absolute motion features in a single-limb multi-joint coordination task. Exp Brain Res 2008;191:157–169.
24. Hayes SJ, Hodges NJ, Huys R, Mark Williams A. End-point focus manipulations to determine what information is used during observational learning. Acta Psychol (Amst) 2007;126:120–137.
25. Rohbanfard H, Proteau L. Learning through observation: a combination of expert and novice models favors learning. Exp Brain Res 2011;215:183–197.
26. Cross ES, Kraemer DJ, Hamilton AF, Kelley WM, Grafton ST. Sensitivity of the action observation network to physical and observational learning. Cereb Cortex 2009;19:315–326.
27. Higuchi S, Holle H, Roberts N, Eickhoff SB, Vogt S. Imitation and observational learning of hand actions: prefrontal involvement and connectivity. Neuroimage 2012;59:1668–1683.
28. di Pellegrino G, Fadiga L, Fogassi L, Gallese V, Rizzolatti G. Understanding motor events: a neurophysiological study. Exp Brain Res 1992;91:176–180.
29. Elliott D, Grierson LE, Hayes SJ, Lyons J. Action representations in perception, motor control and learning: implications for medical education. Med Educ 2011;45:119–131.
30. Rojas D, Cheung JJ, Weber B, et al. An online practice and educational networking system for technical skills: learning experience in expert facilitated vs. independent learning communities. Stud Health Technol Inform 2012;173:393–397.
31. Rojas D, Kapralos B, Dubrowski A. The role of game elements in online learning within health professions education. Stud Health Technol Inform 2016;220:329–334.
32. Myhre DL, Adamiak P, Turley N, Spice R, Woloschuk W. Beyond bricks and mortar: a rural network approach to preclinical medical education. BMC Med Educ 2014;14:166.
33. Chin JP, Diehl VA, Norman KL. Development of an instrument measuring user satisfaction of the human-computer interface. ACM CHI'88 Proceedings 1988:213–218.
34. Alam M, Nodzenski M, Yoo S, Poon E, Bolotin D. Objective structured assessment of technical skills in elliptical excision repair of senior dermatology residents: a multirater, blinded study of operating room video recordings. JAMA Dermatol 2014;150:608–612.
35. Streiner DL, Norman GR, Cairney J. Health Measurement Scales: A Practical Guide to Their Development and Use. New York, NY: Oxford University Press; 2015.
36. Grierson LE, Roberts JW, Welsher AM. The effect of modeled absolute timing variability and relative timing variability on observational learning. Acta Psychol (Amst) 2017;176:71–77.
37. Diekelmann S, Born J. One memory, two ways to consolidate? Nat Neurosci 2007;10:1085–1086.
38. Wright DL, Rhee JH, Vaculin A. Offline improvement during motor sequence learning is not restricted to developing motor chunks. J Mot Behav 2010;42:317–324.
39. Walker MP, Stickgold R. Sleep, memory, and plasticity. Annu Rev Psychol 2006;57:139–166.
40. Ericsson KA, Krampe RT, Tesch-Römer C. The role of deliberate practice in the acquisition of expert performance. Psychol Rev 1993;100:363–406.
Keywords:

Observational practice; Technology-enhanced learning; Simulation-based learning; Distributed medical education; Skills

© 2018 Society for Simulation in Healthcare