Alternative Educational Models for Interdisciplinary Student Teams

LeFlore, Judy L. PhD, RNC, NNP, CPNP-PC, CPNP-AC; Anderson, Mindi PhD, RN, CPNP-PC

Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: October 2009 - Volume 4 - Issue 3 - p 135-142
doi: 10.1097/SIH.0b013e318196f839
Empirical Investigations

Background: Few studies compare instructor-modeled learning with modified debriefing to self-directed learning with facilitated debriefing during team-simulated clinical scenarios.

Objective: To determine whether self-directed learning with facilitated debriefing during team-simulated clinical scenarios (group A) has better outcomes compared with instructor-modeled learning with modified debriefing (group B).

Methods: This study used a convenience sample of students. Four tools assessed pre/post knowledge, satisfaction, technical performance, and team behaviors. Thirteen interdisciplinary student teams participated: seven in group A and six in group B. Student teams consisted of one nurse practitioner student, one registered nurse student, one social work student, and one respiratory therapy student. The Knowledge Assessment Tool was analyzed by student profession.

Results: There were no statistically significant differences within each student profession group on the Knowledge Assessment Tool. Group B was significantly more satisfied than group A (P = 0.01). Group B registered nurse and social work students were significantly more satisfied than their group A counterparts (30.0 ± 0.50 vs. 26.2 ± 3.0, P = 0.03, and 28.0 ± 2.0 vs. 24.0 ± 3.3, P = 0.04, respectively). Group B had significantly better scores than group A on 8 of the 11 components of the Technical Evaluation Tool; group B intervened more quickly. Group B also had significantly higher scores on 8 of 10 components of the Behavioral Assessment Tool and on overall team scores.

Conclusion: The data suggest that instructor-modeled learning with modified debriefing is more effective than self-directed learning with facilitated debriefing during team-simulated clinical scenarios.

From the University of Texas at Arlington School of Nursing, Arlington, TX.

Reprints: Judy L. LeFlore, PhD, RNC, NNP, CPNP-PC, CPNP-AC, Pediatric and Acute Care Pediatric Nurse Practitioner Program, The University of Texas at Arlington School of Nursing, Arlington, TX (e-mail: jleflore@uta.edu).

The authors declare no conflict of interest.

Simulation offers participants a way to acquire and apply new knowledge, perform technical skills, and demonstrate team behaviors without putting patients at risk. Several approaches to instructor involvement exist when using simulators to construct a team-simulated clinical scenario, and the approach the instructor selects may be determined by the skill level of the student. One approach allows participants to be self-directed, ie, clinically experienced students proceed through the team-simulated clinical scenario with little or no input from the instructor until the debriefing session after the scenario. In an alternate approach, instructors cue students during the scenario. A third alternative is the expert, or instructor-modeled, approach, in which clinical experts model the appropriate responses and interventions during the team-simulated clinical scenario while the students or participants observe. During the instructor-modeled approach, each expert verbalizes aloud the critical elements of technical skill performance, the behavioral responses that facilitate effective teamwork, and the cognitive knowledge as it is applied to the simulated clinical scenario. This may be done before the students participate in the team-simulated clinical scenario.1

In simulated environments, participants can receive a written or oral “stem” to provide context or frame of reference, which may include information such as the patient’s chief complaint and history of the present illness. After the simulated clinical scenario, a detailed debriefing can occur, which can include the use of audiovisual recordings of the scenario itself. This debriefing session allows participants to reflect on their performance during the simulated clinical scenario, which may include cognitive, technical, and behavioral aspects.2

Social Learning Theory by Bandura3 highlights the importance of reinforcement and learning through modeling. By observing others, new knowledge and behaviors are learned and then imitated.4 Additionally, it is thought that reinforcement of behavior increases learning.3 During preceptorship in the clinical arena, it is theorized that students in healthcare professions internalize both preceptor behaviors and attitudes through preceptor modeling.1 However, in this type of precepted experience, instructors can neither guarantee that all students will receive the same experience nor that preceptors will model behaviors consistent with current evidence-based practice. This is often called “education by random opportunity.”5–7

To determine the best method for structuring the simulated clinical experience, Leflore et al1 compared three approaches to conducting the simulated clinical scenario: instructor-modeled learning, self-directed learning, and a control condition. This descriptive pilot study compared three groups of nurse practitioner (NP) students (n = 16): control (lecture only, followed by a simulated scenario), instructor-modeled (observed the instructor team, participated in a 5-minute modified debriefing, then performed a simulated scenario), and self-directed (received a stem and performed a simulated scenario without instructor intervention, followed by facilitated debriefing and then a second simulated scenario). Results showed no differences between the groups on knowledge. However, significant differences were noted on self-efficacy at three time points: immediately before the lecture (pretest; time 1: P = 0.006), immediately after the lecture (posttest 1; time 2: P = 0.008), and immediately after the simulation experience, which occurred approximately 1 week after the lecture (posttest 2; time 3: P = 0.012). On evaluation of technical performance, the only significant difference between the groups was the time it took to start albuterol. Behavioral assessment showed significant differences between the groups on 8 of the 10 tool components. It was concluded that instructor-modeled learning may be more beneficial and effective than self-directed learning in a simulated environment. However, the study had limitations, including a small sample size, a skewed distribution of student NP majors, and the fact that the control and instructor-modeled groups received only one opportunity to perform in a simulated scenario. Additionally, the two scenarios used were identical; a stronger case could have been made had two different scenarios incorporating the same Pediatric Advanced Life Support principles been used.1

Because of the limitations of the pilot study, we replicated it using two of the approaches, self-directed learning with facilitated debriefing versus instructor-modeled learning with modified debriefing, with interdisciplinary teams of healthcare students, ie, NP students, registered nurse (RN) students, respiratory therapy (RT) students, and social work students.

In this study, it was hypothesized that students who observed instructor-modeled learning with modified debriefing during team-simulated clinical scenarios would demonstrate greater ability to apply new knowledge, be more satisfied, have greater proficiency in performing technical skills, and show more appropriate team behaviors than students who underwent self-directed learning with facilitated debriefing. Therefore, the purpose of this study was to determine whether instructor-modeled learning with modified debriefing during team-simulated clinical scenarios would yield better student outcomes than self-directed learning with facilitated debriefing.

METHODS

Design

This study was performed during two summer semesters: July 2006 and July 2007. During each summer session, student teams were randomized to one of two approaches: self-directed learning with facilitated debriefing or instructor-modeled learning with modified debriefing. In July 2006, three interdisciplinary student teams were randomized to self-directed learning with facilitated debriefing and two teams to instructor-modeled learning with modified debriefing. In July 2007, four teams were randomized to self-directed learning with facilitated debriefing and four teams to instructor-modeled learning with modified debriefing. A team consisted of one NP student, one RN student, one RT student, and one social work student. Initial analysis of demographic data revealed no differences between the July 2006 and July 2007 self-directed learning with facilitated debriefing groups or between the July 2006 and July 2007 instructor-modeled learning with modified debriefing groups. Therefore, the self-directed learning with facilitated debriefing groups from both Julys were combined (group A), and the instructor-modeled learning with modified debriefing groups from both Julys were combined (group B) for final analysis of the outcome measures. Institutional review board approval was obtained from The University of Texas at Arlington (UT Arlington). Before participation in the study, informed consent was obtained from all participants (Figure 1).

Figure 1.

Setting

The study took place in the UT Arlington School of Nursing’s simulation laboratory. The simulated setting was an Emergency Department equipped with several cameras that allowed multiple video angles of each simulation for audiovisual recording. In the first summer, two video cameras on tripods were used; in the second summer, several lipstick cameras were used to capture different angles.

Sample

A convenience sample of NP, RN, RT, and social work students participated in this study. The NP, RN, and social work students were from UT Arlington. The RN and social work students were in their final (senior) year of a 4-year program. The NP students were from the Acute Care Pediatric Nurse Practitioner program and in the last semester of a 2-year graduate program. Because UT Arlington does not have an RT program, RT students were recruited from a local community college; these students were in the last semester of a 2-year program. All students were informed that their course grades would not be affected by participation or refusal to participate in the study. All students received the same general orientation to the simulated clinical environment, lasting approximately 30 minutes and given by the same faculty member, to ensure familiarity with the equipment and capabilities of the computerized simulator. This orientation included the opportunity for each student to complete a physical assessment on the simulator. Students were shown the simulator’s monitor parameters, including electrocardiogram, respiratory monitoring, oximeter readings, and carbon dioxide monitor. Students were also shown how to obtain both x-rays and laboratory values on the monitor. Additionally, students were oriented to the equipment in the environment; for example, they were shown where and how to set up albuterol treatments. Demographic data obtained included healthcare profession (NP, RN, RT, or social work), gender, age, and previous experience with simulation.

Operational Definitions

Self-Directed Learning With Facilitated Debriefing

Participants in group A received a stem (see later) and then participated in the team-simulated clinical scenario with the computerized simulator. The instructor provided no intervention during the scenario regarding performance or decision making, although questions regarding patient assessment and management, such as results of x-rays, were answered during the scenario. Immediately after the team-simulated clinical scenario, facilitated debriefing was conducted by an instructor who had been trained in debriefing techniques at a comprehensive medical simulation workshop and who had several years of subsequent practical debriefing experience. Audiovisual recordings were used in the facilitated debriefing to review the team-simulated clinical scenario so that participants could evaluate their performance.8,9 The facilitated debriefing focused on learning objectives from both the Technical Evaluation Tool and the Behavioral Assessment Tool.

Instructor-Modeled Learning With Modified Debriefing

Participants in group B first observed an instructor-model team (experts) comprising four clinically experienced instructors, one from each healthcare profession. The instructors were unaware of the scenario content. During the simulated clinical scenario, the instructor team modeled clinically appropriate behaviors while verbally stating their rationale for the interventions. One instructor played the NP role, one the RN role, one the RT role, and one the social worker role. After the team-simulated clinical scenario, participants were allowed to ask the instructor team questions about anything that occurred during the scenario for approximately 5 minutes (modified debriefing). Student teams then performed a second simulated clinical scenario (Figure 2).

Figure 2.

Measurement Tools

Baseline knowledge was assessed in all participants before the team-simulated clinical scenario using a 10-item, instructor-developed Knowledge Assessment Tool. Each healthcare profession (NP, RN, RT, and social work) received a different Knowledge Assessment Tool developed by its instructor based on discipline-specific lecture content and interventions. The Knowledge Assessment Tool was administered at two time points: immediately before the team-simulated clinical scenario (pretest) and immediately after the repeated team-simulated clinical scenario (instructor-modeled group) or after the facilitated debriefing (self-directed group). Posttest items were the same as pretest items, except that they were reordered. Scores on the Knowledge Assessment Tool could range from 0 to 100, with 100 being the highest score.

Participant satisfaction with the approach used in the team-simulated clinical scenario was measured using the Satisfaction Survey, which included six 5-point Likert scale items (Strongly Agree to Strongly Disagree) and three open-ended items. The open-ended items asked what participants liked about the simulated clinical scenario, what they would improve, and for any other comments.

Critical elements of technical skill performance were evaluated using the Technical Evaluation Tool, which was developed from the learning objectives of the team-simulated clinical scenario. Items were scored dichotomously as task done or not done. The same NP evaluator, blinded to the training type but experienced with all the study tools, scored each tool. The time in minutes it took each team to perform the critical interventions was assessed using the log feature of SimBaby, a computerized simulator.
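The article does not describe the SimBaby log format; assuming the log can be exported as timestamped events, a time-to-intervention calculation might look like the following minimal sketch. The event names and timestamps are invented for illustration.

```python
# Hypothetical sketch: deriving time-to-intervention from a simulator event
# log. The real SimBaby log format is not described in the article; the
# timestamped-event structure below is an assumption for illustration.
from datetime import datetime

def minutes_to_intervention(log, intervention):
    """Return minutes from scenario start to the first matching event."""
    start = datetime.strptime(log[0][0], "%H:%M:%S")
    for timestamp, event in log:
        if event == intervention:
            elapsed = datetime.strptime(timestamp, "%H:%M:%S") - start
            return elapsed.total_seconds() / 60.0
    return None  # intervention was never performed

# Illustrative log entries (time, event):
log = [
    ("10:00:00", "Scenario start"),
    ("10:02:30", "Oxygen applied"),
    ("10:05:15", "Bag-mask ventilation"),
]
print(minutes_to_intervention(log, "Bag-mask ventilation"))  # 5.25
```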

Behavioral performance was assessed using the Behavioral Assessment Tool, whose items were developed from Crisis Resource Management principles.10–15 The original tool was used at the Center for Advanced Pediatric Education at Packard Children’s Hospital at Stanford to evaluate neonatal resuscitation.16 It was then used in an unpublished study by Anderson and Yaeger, where its internal consistency, as measured by Cronbach’s α, ranged from 0.8331 to 0.9168 (J. Anderson, personal communication, January 2006). The tool was again modified to broaden its scope; in a study by Leflore et al,1 the modified tool had a Cronbach’s α of 0.97. In this study, items on the tool were scored on a scale from 0 to 4: 0, poor; 1, partially acceptable; 2, acceptable; 3, above average; and 4, excellent. Possible scores ranged from 0 to 40, with 40 being the highest score. Videotapes of each team were evaluated by the same physician evaluator, blinded to the training type but experienced with all the study tools.
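For reference, Cronbach’s α for a multi-item tool such as this one can be computed from the item variances and the variance of the summed scores. The following is a minimal sketch, assuming ratings are arranged with one row per team and one column per item; the data shown are invented for illustration and are not from this study.

```python
# Hypothetical sketch: computing Cronbach's alpha for a multi-item tool.
# Not the authors' code; the ratings below are illustrative only.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: 2-D array, rows = teams (observations), columns = tool items."""
    scores = np.asarray(scores, dtype=float)
    n_items = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1)      # variance of each item
    total_variance = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (n_items / (n_items - 1)) * (1 - item_variances.sum() / total_variance)

# Example with made-up ratings (10 items scored 0-4, as on the tool):
ratings = np.array([
    [3, 4, 3, 3, 4, 3, 4, 3, 3, 4],
    [1, 2, 1, 1, 2, 2, 1, 1, 2, 1],
    [2, 3, 2, 3, 3, 2, 3, 2, 3, 3],
    [4, 4, 4, 3, 4, 4, 4, 4, 3, 4],
])
print(f"Cronbach's alpha = {cronbach_alpha(ratings):.2f}")
```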

System/Personnel

The computerized simulator, SimBaby by Laerdal, was used in this study. The automatic log of scenario events from SimBaby was used during the facilitated debriefing and to obtain times where specific skills were performed for the Technical Evaluation Tool.

The Scenario Stems

Respiratory arrest is the leading cause of cardiac arrest in the pediatric population, and most arrests occur in children less than 15 months of age.17,18 Therefore, it was decided that it would be most beneficial for students to have more than one opportunity to manage respiratory arrest. Additionally, it was imperative to evaluate students’ ability to transfer knowledge regarding pediatric respiratory arrest from one scenario to a similar scenario requiring the same assessment and behavioral skills. The simulator was preprogrammed with two similar scenario stems to mimic the physical signs and symptoms of an infant in respiratory distress: circumoral cyanosis, coarse rales, tachypnea, and moderate intercostal retractions. The simulator was operated by an instructor experienced in programming and running the scenario, as well as in responding “on the fly” to participant interventions. A standardized parent (actor) played the scripted role of the mother. The setting for both simulated clinical experiences was the Emergency Department. The simulated experiences started with the RN student assessing the patient; as the scenarios progressed, the RN student called in the appropriate personnel. See Figure 2 for the written scenario stems provided to all participants.
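The article does not detail how the stems were programmed. As a loose illustration of what “preprogrammed” branching with “on the fly” operator handling can mean, the hypothetical sketch below models a scenario state whose signs and vital signs change in response to scripted interventions; all state names, values, and transitions are assumptions, not SimBaby’s actual programming interface.

```python
# Hypothetical sketch of a preprogrammed scenario that branches in response
# to participant interventions. Purely illustrative; not SimBaby's API.
RESPIRATORY_DISTRESS = {
    "signs": ["circumoral cyanosis", "coarse rales", "tachypnea",
              "moderate intercostal retractions"],
    "vitals": {"hr": 170, "rr": 70, "spo2": 84},
}

# Scripted interventions map to simulator-state updates; anything
# unscripted is handled "on the fly" by the operator.
TRANSITIONS = {
    "oxygen applied": {"spo2": 90},
    "albuterol started": {"rr": 55, "signs_removed": ["coarse rales"]},
    "bag-mask ventilation": {"spo2": 96, "hr": 140},
}

def apply_intervention(state, intervention):
    """Return an updated copy of the scenario state for a scripted intervention."""
    update = TRANSITIONS.get(intervention)
    if update is None:
        raise KeyError(f"unscripted intervention: {intervention!r}")  # operator improvises
    new_state = {"signs": list(state["signs"]), "vitals": dict(state["vitals"])}
    for sign in update.get("signs_removed", []):
        new_state["signs"].remove(sign)
    new_state["vitals"].update({k: v for k, v in update.items()
                                if k in ("hr", "rr", "spo2")})
    return new_state

state = apply_intervention(RESPIRATORY_DISTRESS, "oxygen applied")
print(state["vitals"])  # {'hr': 170, 'rr': 70, 'spo2': 90}
```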

During July 2006 and July 2007, participants completed the Knowledge Assessment Tool and then performed in the team-simulated clinical scenario using the self-directed learning model with facilitated debriefing (group A). Participants were instructed to complete a patient assessment and then manage the patient appropriately based on the signs and symptoms displayed by the simulator. No input was given by the instructor during the scenario. A facilitated debriefing was conducted immediately after the team-simulated clinical scenario. Each team then performed a second, different simulated clinical scenario while being videotaped. After the repeated scenario, each participant completed the Knowledge Assessment Tool again, along with the Satisfaction Survey. The Behavioral Assessment Tool and Technical Evaluation Tool were completed later by a blinded reviewer using the audiovisual recording of the scenario and/or the log from the computerized simulator, as appropriate.

During July 2006 and July 2007, participants performed the team-simulated clinical scenario using the instructor-modeled learning with modified debriefing (group B) approach. Participants first observed the instructor team (clinical experts) assess and manage the simulator based on the signs and symptoms displayed. While performing, the instructor team verbalized their assessment findings, along with patient management and rationales for interventions. After the team-simulated clinical scenario, participants were allowed to ask the instructor team questions for approximately 5 minutes (modified debriefing). Each team then performed a second, different simulated clinical scenario while being videotaped. After the team-simulated clinical scenario, each participant completed the Knowledge Assessment Tool again, along with the Satisfaction Survey. The Behavioral Assessment Tool and Technical Evaluation Tool were completed later by a blinded reviewer using the audiovisual recording of the scenario and/or the log from the computerized simulator, as appropriate.

After final evaluations, both groups were offered the opportunity to participate in a debriefing.

Statistical Analysis

Descriptive statistics were used to compare the demographic data for the groups during July 2006 and July 2007 and, after the data were combined, between group A (self-directed learning with facilitated debriefing) and group B (instructor-modeled learning with modified debriefing), using χ2, analysis of variance, or Student t test where appropriate. Analysis of variance was used to analyze data from the Behavioral Assessment Tool and Technical Evaluation Tool. A dependent t test was used to analyze the Knowledge Assessment Tool. Statistical Package for the Social Sciences version 11.5 (SPSS, Inc., Chicago, IL) was used for statistical analysis. The Bonferroni correction was applied for multiple comparisons. Unless otherwise stated, all values are expressed as mean ± SD. α and β were set at 0.05 and 0.20, respectively.
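As a rough illustration of the analysis structure described above (one-way analysis of variance between groups, a dependent t test for pre/post knowledge scores, and a Bonferroni-adjusted significance threshold), the sketch below uses Python with SciPy rather than SPSS; all numbers are invented placeholders, not study data.

```python
# Hypothetical sketch of the comparisons described in the text, assuming
# scores are available as arrays per group. Data are illustrative only.
import numpy as np
from scipy import stats

# Between-group comparison of one Behavioral Assessment Tool component
# (one-way ANOVA across the two groups):
group_a = np.array([2.1, 2.4, 1.9, 2.6, 2.3, 2.0, 2.2])  # 7 teams
group_b = np.array([3.4, 3.1, 3.6, 3.2, 3.5, 3.3])        # 6 teams
f_stat, p_anova = stats.f_oneway(group_a, group_b)

# Within-profession pre/post comparison on the Knowledge Assessment Tool
# (dependent, i.e., paired, t test):
pretest = np.array([80, 90, 70, 90, 80, 100, 90])
posttest = np.array([90, 90, 80, 90, 90, 100, 90])
t_stat, p_paired = stats.ttest_rel(pretest, posttest)

# Bonferroni correction: divide alpha by the number of tests
# (e.g., the 11 Technical Evaluation Tool components).
alpha, n_tests = 0.05, 11
significant = p_anova < alpha / n_tests
print(p_anova, p_paired, significant)
```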

RESULTS

Thirteen interdisciplinary teams of students participated in the study: seven teams in group A and six teams in group B. Each team consisted of one NP student, one RN student, one social work student, and one RT student. Seventy-nine percent of the participants were female. The age distributions within the two groups were similar, although group B appeared older and less experienced with simulation. There were no statistically significant differences in demographic characteristics between the groups (Table 1).

Table 1

Because each team comprised students from different healthcare professions, the Knowledge Assessment Tool was analyzed by profession. There were no statistically significant differences among the NP, RN, social work, or RT groups on the Knowledge Assessment Tool (Table 2).

Table 2

There were significant differences between the groups as a whole on the Satisfaction Survey (P = 0.01) (Table 3). When satisfaction was evaluated by student profession, there were significant differences between the groups among the RN students (group A, 26.2 ± 3.0 vs. group B, 30.0 ± 0.50; P = 0.03) and among the social work students (group A, 24.0 ± 3.3 vs. group B, 28.0 ± 2.0; P = 0.04) (Table 4). The tool itself had a Cronbach’s α of 0.91. Table 5 contains sample student comments from the open-ended questions on the Satisfaction Survey.

Table 3

Table 4

Table 5

When the time to intervention was evaluated on the 11 components of the Technical Evaluation Tool, there were statistically significant differences between the groups in eight of the components. The Cronbach’s α on this tool was 0.94 (Table 6).

Table 6

There were statistically significant differences between the groups in 8 of 10 components on the Behavioral Assessment Tool and the overall team scores (Table 7). The Cronbach’s α was 0.95.

Table 7

DISCUSSION

The purpose of this study was to determine whether instructor-modeled learning with modified debriefing during team-simulated clinical scenarios would yield better student outcomes than self-directed learning with facilitated debriefing. We hypothesized that students who observed instructor-modeled learning with modified debriefing during team-simulated clinical scenarios would demonstrate greater ability to apply new knowledge, be more satisfied, have greater proficiency in performing technical skills, and show more appropriate team behaviors than students who underwent self-directed learning with facilitated debriefing.

Simulation provides the opportunity to teach advanced clinical skills, teamwork, effective communication, and critical thinking. In the past, the learning experiences necessary for students to extend their theoretical knowledge to diagnosing and managing patients’ acute and emergent problems have required “on-the-job” education. Additionally, students have been educated in “silos,” meaning they are taught within their own healthcare profession with limited opportunities to work with other healthcare professionals until after graduation. Simulation-based learning may provide students of various healthcare professions opportunities to develop teamwork skills and competencies through practice, maximizing patient outcomes in a safe, nonthreatening simulated environment controlled by the faculty.

As discussed earlier, several approaches to instructor involvement exist when using simulators to construct a team-simulated clinical scenario, and one of the factors that determines the approach used is the skill level of the student. One approach allows participants to be self-directed, ie, clinically experienced students proceed through the team-simulated clinical scenario with little or no input from the instructor until the debriefing session after the scenario. The instructor-modeled approach, in which clinical experts model the appropriate responses and interventions during the team-simulated clinical scenario while the students observe, is another alternative.1 Additionally, students may be supplied with a “stem” that mimics a handoff report or other information regarding the patient. Another alternative is to immerse the students immediately into the clinical reality with no report or a truncated report, which may be appropriate as an evaluation mechanism for experienced personnel. For students learning a new role or new content, however, complete immersion without patient information may be an unrealistic expectation and cause undue stress.

No statistically significant differences were found between the groups when knowledge was assessed at the two time points. However, pretest scores were already high in both groups, and the inability to detect a change may have been due to the study being underpowered. As in our previous study, these results may confirm that measuring knowledge alone is not adequate for assessing learned material.1

On the Satisfaction Survey, group B (instructor-modeled learning with modified debriefing) was more satisfied with its learning approach than group A (self-directed learning with facilitated debriefing). The students’ comments supported our hypothesis that those who watched instructors model were more comfortable with, and more satisfied by, the simulated clinical scenarios than teams who underwent self-directed learning with facilitated debriefing. Additionally, by profession, the RN and social work students were statistically more satisfied with instructor-modeled learning with modified debriefing.

Bandura3 discusses the importance of reinforcement and learning through modeling. As discussed, both knowledge and new behavior are acquired by observing others.4 The learner engaged in a simulated clinical scenario learns directly from the experience itself; observers also internalize the new information, which leads to indirect or vicarious learning. Debriefing after the simulated clinical scenario allows learners to reflect upon and weigh their actions to improve future performance.9

There were statistically significant differences in 8 of the 11 components of the Technical Evaluation Tool. The differences in times to intervention, as evaluated with the Technical Evaluation Tool, clearly demonstrate the benefit of observing instructors model proper technique when performing skills. For instance, effective bag-mask ventilation (observe chest wall movement) occurred 3 minutes sooner in group B, which could be a clinically significant difference during patient care.

As with the Technical Evaluation Tool, the benefits of observing instructors model proper teamwork behaviors were clearly demonstrated in the between-group differences on the Behavioral Assessment Tool. Of clinical significance, a leader was more likely to emerge, and effective communication occurred, among the teams that had instructor modeling. This is similar to our previous study, which showed significant differences between the groups in 8 of 10 components of the Behavioral Assessment Tool.1

Although the stem was changed between the first and second simulated clinical scenarios, a major limitation of the study was the similarity between the two scenarios in terms of cues, vital signs, and branching outcomes. The study might have revealed greater variability or different results had two completely different scenarios been used. However, as previously discussed, we wanted our students to have more opportunities to manage respiratory arrest, and we wanted the two scenarios to require the same assessment and behavioral skills.

A second limitation was the simulation laboratory. In July 2006, the simulation laboratory was classroom space within the nursing building (ie, decreased realism). During July 2007, the simulation laboratory was in a separate building with state-of-the-art equipment and supplies, including audiovisual equipment (ie, increased realism). However, the same computerized simulator, SimBaby, was used in both summers. There was also a time lapse of 1 year between the first and second data collection periods. However, data from the participants across the 2 summers did not differ and were therefore combined.

Another limitation was that data regarding previous clinical experience were not collected; therefore, no comparisons could be made between the groups on this variable, which could have affected the differences between the groups on satisfaction, technical performance, and behavioral performance. Additionally, the differences in age and previous simulation experience between group A and group B could explain the differences between the groups on satisfaction, technical performance, and behavioral performance.

An additional limitation was the simulator itself. Although SimBaby can demonstrate some signs of respiratory distress, such as circumoral cyanosis, coarse rales, tachypnea, and moderate intercostal retractions, it cannot mimic all human functions. For example, SimBaby cannot show nasal flaring.

A further limitation is that the results may apply only to novice participants. The posttesting also occurred in close proximity to the intervention; differences on the Knowledge Assessment Tool, Behavioral Assessment Tool, and skill test may have been very different a few days or weeks later. However, long-term follow-up was not possible because of graduation in relation to the timing of the study.

Additionally, a videotaped orientation to the simulation laboratory might have decreased faculty involvement while being equally effective in orienting students to the environment.

Finally, the opportunity for students to observe experts modeling their roles may have led to decreased role confusion and contributed to the satisfaction of the teams who observed expert modeling. This could also explain comments from group A, ie, requests for “more direction” and “more clear roles.” It also supports our hypothesis that providing a gold-standard version of the scenario modeled by experts may be more effective for novice learners than self-directed learning followed by facilitated debriefing. Additionally, expert role-modeling may facilitate the acquisition of team behaviors and decrease the number of simulated clinical experiences needed to learn them.

CONCLUSION

Scoring well on traditional paper-and-pencil tests does not necessarily reflect a student’s ability to apply knowledge.1 Additionally, there are limits to providing students with important clinical experiences during their education. Therefore, it is essential to explore alternative educational methodologies that give students clinical experiences in which to apply their knowledge in a safe, nonthreatening environment. Providing students with simulated clinical scenarios is one such method. Determining which simulation method, instructor-modeled learning with modified debriefing or self-directed learning with facilitated debriefing, is most effective in teaching students to apply knowledge is therefore imperative.

Based on the study results, instructor-modeled learning with modified debriefing appears more effective than self-directed learning with facilitated debriefing for novice interdisciplinary student teams in the performance of technical skills and team behaviors. Group B also appeared more satisfied with this form of simulation methodology. Additionally, using simulated clinical scenarios with interdisciplinary student teams provides a means to teach and evaluate effective team behaviors.

REFERENCES

1. Leflore JL, Anderson M, Michael JL, Engle WD, Anderson J. Comparison of self-directed learning versus instructor-modeled learning during a simulated clinical experience. Simul Healthc 2007;2:170–177.
2. Mort TC, Donahue SP. Debriefing: the basics. In: Dunn WF, ed. Simulators in Critical Care and Beyond. Des Plaines, IL: Society of Critical Care Medicine; 2004.
3. Bandura A. Social Learning Theory. New York: General Learning Press; 1977.
4. Bandura A. Social Foundations of Thought and Action: A Social Cognitive Theory. Englewood Cliffs, NJ: Prentice Hall; 1986.
5. Allen SS, Bland CJ, Harris IB, Anderson D, Poland G, Satran L, Miller W. Structured clinical teaching strategy. Med Teach 1991;13:177–184.
6. Halamek LP, Kaegi DM, Gaba DM, et al. Time for a new paradigm in pediatric medical education: teaching neonatal resuscitation in a simulated delivery room environment. Pediatrics 2000;106:1–6. Available at: http://pediatrics.aappublications.org/cgi/content/full/106/4/e45.
7. Schull MJ, Ferris LE, Tu JV, Hux JE, Redelmeier DA. Problems for clinical judgment. 3. Thinking clearly in an emergency. Can Med Assoc J 2001;164:1170–1175.
8. Graling P, Rusynko B. Kicking it up a notch—successful teaching techniques. AORN J 2004;80:459–475.
9. Rhodes M, Curran C. Use of the human patient simulator to teach clinical judgment skills in a baccalaureate nursing program. Comput Inform Nurs 2005;25:256–262.
10. Thomas EJ, Sherwood GD, Mulhollem JL, Sexton JB, Helmreich RL. Working together in the neonatal intensive care unit: provider perspectives. J Perinatol 2004;24:552–559.
11. Wiener EL, Kanki BG, Helmreich RL. Cockpit Resource Management. San Diego: Academic Press; 1993.
12. Helmreich RL, Merritt AC, Wilhelm JA. The evolution of crew resource management training in commercial aviation. Int J Aviat Psychol 1999;9:19–32.
13. Pizzi L, Goldfarb NI, Nash DB. Chapter 44: Crew Resource Management and Its Applications in Medicine. Evidence Report/Technology Assessment, No. 43. Available at: http://www.ahrq.gov/clinic/ptsafety/chap44.htm.
14. Gaba DM, Howard SK, Fish KJ, Smith BE, Sowb YA. Simulation-based training in Anesthesia Crisis Resource Management (ACRM): a decade of experience. Simulat Gaming 2001;32:175–193.
15. Thomas EJ, Sexton JB, Helmreich RL. Translating teamwork behaviors from aviation to healthcare: development of behavioral markers for neonatal resuscitation. Qual Saf Health Care 2004;13:i57–i64.
16. Anderson JM, Murphy AA, Boyle BB, Yaeger KA, Halamek LP. Simulating extracorporeal membrane oxygenation (ECMO) emergencies. II. Qualitative and quantitative assessment and validation. Simul Healthc 2006;1:228–232.
17. Stenklyft PH. Pediatric emergency medicine—past, present, and future. Jacksonville Med 1999;50:12.
18. Reis AG, Nadkarni V, Perondi MB, Grisi S, Berg RA. A prospective investigation into the epidemiology of in-hospital pediatric cardiopulmonary resuscitation using the international Utstein reporting style. Pediatrics 2002;109:200–209.
Keywords:

High fidelity; Simulation; Education; Interdisciplinary; Students

© 2009 Lippincott Williams & Wilkins, Inc.