The use of simulation suites for educational training1 and for research data collection2 has been documented, yet their use for research training has not been widely presented. Researchers conducting observational studies of clinician and/or patient behaviors in the clinical setting need to train observers to code these behaviors correctly and consistently. Observer training can be difficult because coding schemes must be intricate to measure the behaviors of interest adequately and objectively. Moreover, the unpredictable nature of clinical environments makes it challenging for observers to achieve agreement when coding. A key component of a training program is observing clinician or patient behaviors as they occur in the clinical environment. Unfortunately, on-site observer training can be disruptive, and coordinating training sessions with researchers can burden clinical staff.3
Video recording simulated behaviors in an environment similar to the data collection site would allow observers to gain an understanding of not only what the behaviors look like but also how they are likely to present themselves in the study environment. Simulation suites are increasingly available in medical schools, nursing schools, and hospitals and can be embedded in observer training programs. Hypothetical case studies recorded in a simulation suite can provide realistic scenarios that enhance observers' understanding of the behaviors of interest in the coding scheme.
The purpose of this article is to describe the use of a simulation suite in a multimodal training approach to accommodate a complex coding structure and achieve observer agreement. This article does not compare observer training approaches or evaluate the effect of the recorded simulation video in isolation. However, this article does provide a detailed description of an innovative use for simulation suites that could facilitate observer training programs for complicated studies of clinician or patient behaviors. The process described could also be applied to other uses, such as training observers for studies conducted in the simulation space or evaluating student skills and behaviors in the clinical environment.
COMPLEXITY OF CODING
We conducted a pilot study of a multidimensional intervention to improve older adult patient ambulation in a hospital setting. This study was reviewed and approved by the health sciences institutional review board, and results were reported in a previous article.4 Direct observation was needed to capture changes in nurse behavior related to patient ambulation. In the study, six observers documented nurse behaviors using a handheld computer. Each observation period was 4 hours long; there were 80 observation periods for a total of 320 hours of data collection. The codes collected constituted timed event sequential data, which capture the frequency, sequence, and onset and offset times of each behavior.5
The coding process was complex because of the variety of nurse behaviors (communication, assessment, and documentation) and patient behaviors (mobility) that were of interest. Observers needed a clear understanding of what the nurse and patient behaviors looked like. For example, observers coded the nurses' mobility assessment of the patient's ability to walk and, therefore, were trained to identify nurse behaviors for this assessment. In addition to identifying the nurse behaviors, observers had to contend with the speed at which these behaviors occur in the clinical setting. Communication between nurse and patient, or between a nurse and other members of the healthcare team, can be quick, especially during shift hand-off. Therefore, to capture events, observers had to be vigilant in listening and watching for the behaviors of interest regarding patient mobility.
Observers also needed to be trained on how to enter the codes into the handheld computer (additional detail on this method of timed event sequential behavior coding using a handheld computer is provided elsewhere).6 Observers entered the start and stop times of each code when they observed the corresponding behavior (Table 1). After a coded behavior was entered, a new screen prompted observers to answer contextual detail questions about the behavior by selecting responses from a drop-down menu.6 The contextual detail questions differed for each behavior coded. Observers needed to know which details to attend to during every behavior of interest so that they could capture those details appropriately. To prepare observers for this complex coding scheme, a multimodal training plan was created.
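The timed event structure described above can be prototyped in code. The sketch below is illustrative only: the class name, field names, and behavior codes (e.g., "PT_WALK_ROOM") are hypothetical and do not represent the study's actual coding scheme or software.

```python
from dataclasses import dataclass, field

@dataclass
class CodedEvent:
    """One timed behavioral event recorded by an observer (hypothetical schema)."""
    code: str                  # behavior code, e.g., "PT_WALK_ROOM" (illustrative)
    onset_s: float             # start time, seconds from the start of observation
    offset_s: float            # stop time
    details: dict = field(default_factory=dict)  # contextual detail responses

    @property
    def duration_s(self) -> float:
        """Length of the behavior; timed event data track onset and offset."""
        return self.offset_s - self.onset_s


def frequency(events, code):
    """Count occurrences of one behavior code across an observation period."""
    return sum(1 for e in events if e.code == code)
```

Stored this way, the frequency, sequence (by sorting on onset time), and duration of each behavior are all recoverable from the same records, which is what makes timed event sequential data richer than simple frequency counts.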
MULTIMODAL TRAINING PLAN
The multimodal training plan included explanation of codes and technology, coding of simulation video, and observer-trainer hospital practice (Fig. 1). Two trainers, the principal investigator (B.K.) of the project and a member of the team (K.P.) who designed the coding structure on the handheld computer, prepared observers using this plan. The observers included five graduate students and one undergraduate student in nursing degree programs.
Explanation of Codes and Technology
All members of the research team received cards with definitions of the codes as well as explanations for how to code the contextual details (Table 2 provides an example of a card). The team met to discuss and clarify each code definition, and definitions were refined as needed. The team also received training on how to use the handheld computers to document the start and stop times of each code and to document the contextual details of the codes.
Coding of Simulation Video
A simulation of nurse and patient behaviors was video recorded in the School of Nursing's hospital unit simulation suite. The simulation mimicked a typical inpatient unit and situations observers would experience during data collection for the study. Nurses on the research team wrote the script based on previous nursing experience; therefore, the behaviors from the coding scheme written in the script were consistent with how they tended to occur in the clinical environment (Table 3). In addition, the simulation included complicated scenarios that would challenge observers' understanding of the behaviors. For example, the nurse actor felt the patient's pedal pulses and legs, which was intended to make observers question whether the nurse was performing a behavior related to patient mobility or general patient assessment.
Before recording the video, the researchers familiarized themselves with the simulation space and planned accordingly. The simulation space included eight “patient rooms” with a “nurses' station” in the center. The script for the video involved two patient scenarios and nurse shift hand-off; therefore, two of the patient rooms and the nurses' station were used for the recording. Each room and the center station already had multiple cameras built in, and the spacing of the actors was determined based on the camera locations. The prop equipment needed in each room (gait belt, patient chair, etc.) was also arranged with the simulation specialists.
The roles for the actors were decided based on previous clinical experience. Members of the research team who had nursing experience played the roles of nurses and nursing assistants because they understood how these clinicians engage in the behaviors called for in the script. Members of the research team without clinical experience played the roles of patients. The actors were instructed to move and converse in “real time” so that observers got used to coding at the speed of actual clinical situations. The script provided actors with detailed directions for certain behaviors, such as “patient tries to get up slowly but falls back to bed.” The actors did not need to be precise in their behaviors because, in the clinical setting, observers would be viewing and coding imprecise behaviors of patients. The actors went through the script and acted out the scenarios three times: once for practice (about 30 minutes) and twice more for video recording the scenarios (about 15 minutes each) to provide options for which performance to include as the final video. The research team watched the videos with the simulation specialists to ensure that the cameras captured each behavior adequately for observers to code both the behavior and the details of the behavior. The video was then edited with the assistance of the academic technology department (about 1 hour) by choosing the appropriate camera angles for each scene. The final video was 10 minutes long.
All observers and trainers viewed the simulation video together and coded behaviors individually on their handheld computers while watching the video. Then, the trainers went through the video a second time and stopped at each behavior of interest, discussing the behavior and the contextual details that should have been coded. This activity prompted a discussion of discrepancies in coding and resulted in revisions of some of the code definitions. For example, one scenario from the video included a nurse performing a mobility assessment right before the patient begins walking, and it became apparent that the code “Patient Walking in the Room” needed to be refined to clarify when the assessment ended and the walking began. The group training session lasted 90 minutes. After the training sessions, observers could access the video online through a shared drive to watch the video individually. Observers who accessed the video on their own time found it useful because it helped them adjust to the speed of the behaviors (especially communication) and develop proficiency with the coding scheme.
Observer-Trainer Hospital Practice
The observers trained on an inpatient unit similar to the unit where the study would be conducted. Each training session was 4 hours long to prepare observers for the 4-hour observation period they would experience during the study. Team members met in groups of three—two observers and a trainer—and all three coded nurse and patient behaviors on individual handheld computers. After each occurrence of a behavior of interest, the observers and trainers immediately compared their codes. Two members of the team required one additional 4-hour observation shift, either because their codes were inconsistent with the trainer's or to become more comfortable with the data collection process. Data from these practice observations were used to assess observer agreement. Interrater reliability was calculated using the intraclass correlation coefficient. The intraclass correlation coefficients achieved for the six observers trained using this multimodal method ranged from 0.936 to 0.966, which is considered excellent.7
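Readers who wish to compute an intraclass correlation coefficient would normally use standard statistical software. Purely as an illustration, the sketch below implements one common form, ICC(2,1) (two-way random effects, absolute agreement, single rater); the choice of this particular form is an assumption for demonstration, as the article does not state which ICC model was used.

```python
def icc_2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.

    ratings: list of n subjects (observation events), each a list of
    k scores, one per rater (e.g., two observers and a trainer).
    """
    n = len(ratings)          # number of subjects (rows)
    k = len(ratings[0])       # number of raters (columns)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]

    # Sums of squares for subjects, raters, and residual error
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ss_err = ss_total - ss_rows - ss_cols

    # Mean squares
    ms_r = ss_rows / (n - 1)
    ms_c = ss_cols / (k - 1)
    ms_e = ss_err / ((n - 1) * (k - 1))

    # Shrout-Fleiss ICC(2,1) formula
    return (ms_r - ms_e) / (ms_r + (k - 1) * ms_e + k * (ms_c - ms_e) / n)
```

For example, perfectly agreeing raters yield an ICC of 1.0, and values above roughly 0.75 are conventionally considered excellent, consistent with the 0.936 to 0.966 range reported above.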
DISCUSSION
Video recording behaviors of interest in a simulation suite that mimics the actual study environment can help achieve adequate observer agreement in coding observational data. This multimodal observer training approach is an example of incorporating new techniques and expanding on traditional approaches to add rigor and efficiency to observational research. Clinician and patient behaviors are multifaceted, requiring creative strategies to measure these behaviors adequately in the chaotic setting of healthcare environments. Developing coding schemes and identifying tools that accommodate the complexity necessary to capture behaviors is important; however, training observers on how to use such tools and code complicated behaviors is critical to ensure accurate data collection.
Using a simulation suite can be especially advantageous for research teams that include members from multiple disciplines. A concern about working in interdisciplinary teams noted in the literature is whether members are speaking the same “language.”8,9 This can be challenging when some members do not have clinical experience to draw on to understand the routine of the interactions and processes of the clinical environment.8 Viewing a video recorded in a simulation suite could help nonclinical members of the research team obtain insight into how nurses and patients navigate their space and how behaviors are likely to occur in the clinical environment.
It is important to note that the simulation video did not replace training in the clinical setting but rather augmented it. Training in the clinical setting is important not only to practice coding but also to learn how to code within the study environment, for example, getting acclimated to the noise and distractions of the hospital and determining where to stand to view behaviors without being obtrusive. All observers engaged in video training and hospital training; therefore, we could not assess whether the interrater reliability would be better or worse if the observers only trained in the hospital (without the simulation video). However, it is possible that the time required to practice in the clinical setting was reduced because of the video training. In a similar study that did not use video training, observers required 16 hours of training in the clinical setting before achieving observer agreement in coding nurse behaviors.10 Dempsey and colleagues11 compared in vivo observer training with video training and found that although both were effective, participants who completed the video training required fewer training sessions to achieve high reliability scores on a posttest. This finding may be because the content of a video can be manipulated to include all the behaviors of interest, whereas there is no control over which behaviors will occur in the real-time setting.11 Indeed, the simulation video in this study intentionally included all behaviors of interest and complicated scenarios. While reducing time spent in the clinical setting is important for minimizing burden on clinical staff, it is also valuable for efficiently completing observer training.
There are other applications for using this multimodal training approach beyond preparing observers for a study in the clinical setting. Wright and colleagues12 used patient simulation to evaluate the ease of use of protocol procedures for clinical trial coordinators. Furthermore, while developing a training video of an “ideal performance,” the research team identified potential problems with the protocol that they addressed before beginning the training.12 Using simulation could also be helpful to test the use of complex data capture strategies, such as handheld computers, before training observers. In addition, the training approach could be used to prepare observers for studies that take place in the simulation suite, such as studies of how clinicians behave and react during various hypothetical patient scenarios.
Another use could be training examiners to assess student behaviors in the clinical setting, particularly to see whether students applied what they learned from the simulation environment to the clinical setting. Training examiners is also important for objective structured clinical examinations (OSCEs), which assess performance in simulated environments and can be used for high-stakes licensure and certification to practice medicine.13 Although OSCEs use standardized scoring rubrics and standardized patients, there is a potential for examiner bias, and robust training approaches are needed.13
Strategies to train examiners for OSCEs—such as rater error training, performance dimension training, frame-of-reference training, and behavioral observation training—have been previously described in the literature.14,15 The multimodal training approach described herein contains components of these strategies by showing a video vignette with behaviors of interest (performance dimension training) and having observers practice coding from the video and in the hospital and discuss discrepancies (frame-of-reference training). However, our observer training approach could be improved by increased consistency in following these training strategies. For example, providing variations of the behaviors across multiple videos would strengthen a frame-of-reference training design and could enhance observers' understanding of the behaviors of interest.
Other limitations to our training approach merit discussion. We developed the video in a state-of-the-art simulation suite with a skilled simulation staff that may not be available to other researchers or clinicians. In addition, we did not compare observer agreement using the described approach with any other training approach and therefore cannot claim that this approach “improves” observer agreement. Finally, we assessed interrater reliability at one time point and did not re-establish interrater reliability during the course of the study.
There are more than 300 simulation centers across the country in medical schools, nursing schools, and hospital settings.16 The use of simulation in research training may offer a valuable new way to leverage these spaces and improve research quality. Including a video recorded in a simulation suite in this multimodal approach to observer training made it possible to achieve observer agreement in a study that used a complex coding scheme. In addition, developing the video in the simulation suite alleviated concerns for interrupting patient care or burdening clinical staff and allowed for observer training outside of the clinical setting. Researchers should consider using these facilities to achieve observer agreement in observational studies of the ever-changing clinical environment.
The authors thank Brenda Kupsch, Jackie O'Brien, Tim Piatt, and Laurie Pirtle for their contributions to the creation of the simulation video.
1. Cant RP, Cooper SJ. Use of simulation-based learning in undergraduate nurse education: An umbrella systematic review. Nurse Educ Today
2. Lu A, Mohan D, Alexander SC, Mescher C, Barnato AE. The language of end-of-life decision making: a simulation study. J Palliat Med
3. Edwards M, Hillyard S. Improvisation, ethical heuristics and the dialogical reality of ethics in the field. In: Love K, ed. Ethics in Social Research. Studies in Qualitative Methodology: Vol 12. Bingley, UK: Emerald Group Publishing Limited; 2012:129–148.
4. King BJ, Steege LM, Winsor K, VanDenbergh S, Brown CJ. Getting patients walking: a pilot study of mobilizing older adult patients via a nurse-driven intervention. J Am Geriatr Soc
5. Bakeman R, Quera V. Sequential Analysis and Observational Methods for the Behavioral Sciences. New York: Cambridge University Press; 2011.
6. Pecanac KE, Doherty-King B, Yoon JY, Brown R, Schiefelbein T. Using timed event sequential data in nursing research. Nurs Res
7. Fleiss JL. The Design and Analysis of Clinical Experiments. New York: Wiley; 1986.
8. Fernandez R, Grand JA. Leveraging social science-healthcare collaborations to improve teamwork and patient safety. Curr Probl Pediatr Adolesc Health Care
9. Kneipp SM, Gilleskie D, Sheely A, Schwartz T, Gilmore RM, Atkinson D. Nurse scientists overcoming challenges to lead transdisciplinary research teams. Nurs Outlook
10. Doherty-King B, Yoon JY, Pecanac K, Brown R, Mahoney J. Frequency and duration of nursing care related to older patient mobility. J Nurs Scholarsh
11. Dempsey CM, Iwata BA, Fritz JN, Rolider NU. Observer training revisited: a comparison of in vivo and video instruction. J Appl Behav Anal
12. Wright MC, Taekman JM, Barber L, Hobbs G, Newman MF, Stafford-Smith M. The use of high-fidelity human patient simulation as an evaluative tool in the development of clinical research protocols and procedures. Contemp Clin Trials
13. Khan KZ, Ramachandran S, Gaunt K, Pushkar P. The Objective Structured Clinical Examination (OSCE): AMEE Guide no. 81. Part I: an historical and theoretical perspective. Med Teach
14. Feldman M, Lazzara EH, Vanderbilt AA, DiazGranados D. Rater training to support high-stakes simulation-based assessments. J Contin Educ Health Prof
15. Woehr DJ, Huffcutt AI. Rater training for performance appraisal: a quantitative review. J Occup Organ Psychol