Reflection is an essential element in the development of professional competences in the health care sector.1 Participating in a postsimulation debriefing may provide a way for health care professionals to practice reflection and to translate it into practical action.2 The term reflection refers to “those intellectual and affective activities in which individuals engage to explore their experience in order to lead to new understanding and appreciations.”3 If the debriefing is to facilitate deeper reflection, facilitators and simulation instructors need to know how to pose questions that encourage it, thereby optimizing simulation-based learning.2,4,5 Reflection is a key element in learning from experience and helps learners to develop insights from direct experience and to integrate them into later action.6
According to Flanagan,7 debriefing in simulation-based learning refers to “the purposeful, structured period of reflection, discussion and feedback undertaken by students and teachers usually immediately after a scenario-based simulation exercise involving standardised patients and/or mannequins.” Debriefings can, however, be structured in a number of ways to promote reflection. According to Fanning and Gaba,8 all debriefing models involve reflecting on the experienced event, discussing experiences with others, and learning and modifying behaviors based on the experience. It is suggested that debriefings structured for reflection begin with open-ended questions and emotional release and include guidance and feedback from the facilitator.9,10 Rudolph et al11 describe “debriefing with good judgment” as an approach to promote reflective practice. The “frames” that underlie the actual actions are invisible but can be uncovered through the “advocacy-inquiry” approach, that is, by asking questions about the trainees’ underlying reasons for their actions. According to Rogers12 and Yuen Lie Lim,13 researchers in the field agree that reflection is deliberate, is stimulated by a problematic situation, involves an examination of personal knowledge, and leads to new insights. However, precise definitions of how reflection is implemented in learning settings and how it is researched are lacking.14,15 Several attempts have been made to clarify reflection by offering conceptual frameworks, for example, those of Gibbs,16 Mezirow,17 Schön,18 and Kolb.19 Gibbs,16 for example, reinterpreted Kolb’s19 model of reflection and developed it into a 6-stage reflective cycle. According to Boud et al,3 Kolb19 does not say very much about the process of reflection itself, although the significance of his work may be that he sets reflection in a context of learning. The same applies to Gibbs.16
Few studies have investigated the effect of debriefing on reflection. Overstreet20 showed that debriefing provides students with the opportunity to reflect on the simulation experience and to make prospective assumptions as to how they might perform differently next time. Dreifuerst21 investigated reflection strategies during debriefings and demonstrated change in clinical reasoning skills. Dieckmann et al22 showed that the theoretical ideal of how debriefing should be conducted is not always fulfilled in debriefing practice.
In this study, we investigate the level of reflection in questions and responses in debriefings. In doing so, we use the definition outlined in Boud et al,3 where the term reflection refers to “those intellectual and affective activities in which individuals engage to explore their experience in order to lead to new understanding and appreciations.” Despite the importance of facilitating reflection in debriefings, questions concerning whether and how faculty promote reflection remain unanswered in the research literature.9,23 Given the absence of research documenting reflective questions and answers in health care simulation debriefing, the aim of this study was to explore the depth of reflection expressed in questions by facilitators and in responses by nursing students. Three study questions guided our work:
- What stage of reflection did the facilitators’ questions reveal?
- What stage of reflection did the students’ responses reveal?
- What is the relationship between stages of reflection in facilitators’ questions and stages of reflection in students’ responses?
Several potential models were explored to see whether they provided a suitable framework for analyzing the depth of reflection in facilitators’ questions and students’ responses. The reflective cycle of Gibbs16 was used as the conceptual framework in this study (Fig. 1), a model linking reflection with learning.24 Gibbs suggests how the experiential learning cycle can be implemented, including reflecting on experiences such as simulation.16 The cycle is a circular process comprising 6 stages that encourage students to organize and structure their thinking and learning reflectively (Fig. 1); it corresponds to a fully structured debriefing with increasing depth of reflection. Gibbs’ reflective cycle has been used in previous studies to enhance health care students’ learning of knowledge, skills, and attributes in simulation and clinical practice.25–27
This study has an explorative and descriptive qualitative design.28 The results are based on 24 video recordings of postsimulation debriefings in nursing education. The data were collected by the first author in February and March 2008. We used a deductive approach to grade reflection, based on the reflective cycle of Gibbs.16 The model was chosen because it is designed for the context of education and simulation and has been used to assist the reflective process in nursing education and nursing practice.29,30 In this study, Gibbs’ reflective cycle was applied to classify all facilitators’ questions and students’ responses regarding leadership in all debriefings. Finally, the relationship between all classified questions and responses was described, apart from those graded differently by the 2 researchers.
A total of 81 students (72 female and 9 male; mean age, 27 years; range, 22–53 years) in the last semester of a 3-year nursing education program participated in the study. The students were divided into 14 groups of between 4 and 7 members. Four groups were of mixed sex, whereas the remainder were composed of females only. The median age in the 14 groups varied from 23 to 33 years. The student group was comparable with other student groups in Norwegian nursing education programs with respect to age and sex.31 Five female faculty members (aged 34–60 years) were involved as facilitators and simulator operators. The facilitators selected to facilitate the simulation sessions had been teaching Basic Life Support (BLS) and the use of semiautomatic defibrillators for the past 2 years. All 5 facilitators had participated in a 3-day workshop on educational principles of simulation-based learning, including how to brief and debrief learners.
The study was approved by the Norwegian Social Science Data Services and the University of Stavanger, Norway. Consent forms for participation in the study were signed by the nursing students as well as the faculty staff, and confidentiality was guaranteed. All those who were asked agreed to participate in the study. For the type of research presented here, the regional committee for medical ethics of western Norway declined to consider the application because the study did not involve patients or relatives.
Setting and Scenario
The data were collected during voluntary resuscitation team training given to students at the University of Stavanger in the Stavanger Acute Medicine Foundation for Education and Research (SAFER) simulation center. Earlier in their education program, the students had attended lectures on resuscitation teamwork and completed repeated training in BLS and the use of semiautomatic defibrillators. The students had attended simulation courses in their second year. The students received the learning objectives of the simulation scenario in advance, and these were restated by the facilitator in the briefing immediately before the scenario. The objectives were (1) optimizing leadership in resuscitation teamwork and (2) putting the BLS algorithm into practice. The simulated patient was a 71-year-old woman with an upper femur fracture who had been moved to an out-of-hospital rehabilitation unit without a staff physician present. The patient had a history of angina pectoris and went into cardiac arrest during the scenario.
Each group simulated the same scenario twice, with 3 students participating at a time while the remaining students took the roles of observers. For each scenario, the students elected 1 group member to be the leader. The facilitator was present in the room observing the scenario. In the second scenario, the observers and active participants exchanged roles. After each simulation scenario, the students took part in a debriefing, guided by the facilitator, that analyzed leadership, team performance, and execution of the BLS algorithm.
The first author recorded 28 simulations including briefings, simulation scenarios, and debriefings as parts of the simulation sessions, resulting in 28 hours of videotaped material.32 Video recordings were chosen because they allow for the capturing and recording of interactions in the debriefing setting as they occur naturally without disturbances from direct observations and because they allow for repeated viewing and detailed analysis.33
All the video recordings of the debriefings were reviewed between 2 and 4 times to identify sequences focused on leadership. The material for analysis was defined as sequences in which the facilitator and students talked about the following:
- The roles and tasks of the team leader and team members, and
- Their collaboration and communication regarding roles and tasks.
Four of the debriefings were excluded from the analysis. In 2 debriefings, the facilitator asked the first author to take over the debriefing, which may represent a potential bias. The other 2 debriefings did not provide relevant material for the study because the conversation focused only on medical technical issues, that is, how to perform chest compressions and ventilations. All recorded conversations were then transcribed verbatim, and the transcripts were read between 2 and 6 times to ensure correctness and understanding. The recorded debriefings amounted to a total of 8 hours of material. Based on the sequences defined previously, the material for analysis consisted of 1 hour 20 minutes of verbal communication. The duration of the 24 debriefings varied from 5.5 to 35 minutes, with a median of 20.5 minutes, whereas the transcribed parts regarding leadership varied from 0.5 to 6.5 minutes, with a median of 3.5 minutes, because the remaining parts of the conversation focused on other issues. Two facilitators each performed 10 debriefings, whereas the other 2 facilitators each performed 2 debriefings. For the purpose of our analysis, we decided to quantify the data.
First, 2 researchers (F.F. and S.E.H.) independently graded the first 20 facilitators’ questions and students’ responses regarding leadership according to the 6 stages of reflection of the reflective cycle of Gibbs.16 This served to refine the criteria for grading reflection and to calibrate judgments. The examples of questions and responses given in Figure 2 clarify how we interpreted Gibbs’ different stages of reflection. Second, the 2 researchers independently graded all the remaining questions and responses. For the analysis of stages of reflection in questions and answers, 117 questions and 130 responses in 24 debriefings were used (Table 1); the number of responses is greater than the number of questions because some questions elicited several responses. Examples of how questions were assigned to stages of reflection, together with the responses the questions elicited, can be seen in Figure 2. Questions and responses graded differently by the 2 researchers were excluded from further analysis (Appendix 1). Finally, the relationship between questions and responses graded identically by the 2 researchers was identified by sorting and counting the responses elicited by questions at each of the 5 observed stages. A detailed template containing all graded questions and responses in all debriefings made it possible to analyze the relationship between stages of reflection in questions and answers.
Rater agreement, defined as the number of agreed assessments (x + y) divided by the total number of assessments, that is, the agreed (x + y) plus the disagreed (z),34 was calculated for questions and responses separately and together using the formula (x + y) / [(x + y) + z]. However, this calculation has at least 2 weaknesses: it takes no account of where the agreement was, and some agreement between the 2 raters would be expected by chance, even if they were guessing. For this reason, we additionally calculated the interrater reliability of the assessment of questions and responses, a coefficient indicating the extent to which the ratings of 2 independent raters are intercorrelated,35 using κ with linear weighting in VassarStats (http://vassarstats.net/). It has been proposed that a κ score of 0.81 to 1.00 indicates very good agreement; 0.41 to 0.80, moderate-to-good agreement; 0.21 to 0.40, fair agreement; and below 0.20, poor agreement.36
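As an illustration of these two measures, the sketch below (our own hypothetical code, not the VassarStats implementation) computes the percent agreement and a linear-weighted κ from two raters' stage gradings:

```python
from collections import Counter

def percent_agreement(rater1, rater2):
    """Proportion of items that the two raters graded identically."""
    return sum(a == b for a, b in zip(rater1, rater2)) / len(rater1)

def linear_weighted_kappa(rater1, rater2, k):
    """Cohen's kappa with linear weights over k ordered categories (1..k).

    Linear weights give partial credit for near-misses:
    w(i, j) = 1 - |i - j| / (k - 1).
    """
    n = len(rater1)

    def w(i, j):
        return 1 - abs(i - j) / (k - 1)

    # Observed weighted agreement across all rated items.
    po = sum(w(a, b) for a, b in zip(rater1, rater2)) / n
    # Expected weighted agreement from each rater's marginal distribution.
    m1, m2 = Counter(rater1), Counter(rater2)
    pe = sum(w(i, j) * m1[i] * m2[j]
             for i in range(1, k + 1)
             for j in range(1, k + 1)) / n ** 2
    return (po - pe) / (1 - pe)

# The agreement proportion reported in the Results follows the same formula,
# for example, for questions: 117 agreed / (117 agreed + 25 disagreed) ≈ 0.82.
questions_agreement = 117 / (117 + 25)
```

Here `rater1` and `rater2` would hold the 2 researchers' stage gradings (1–5) for the same items; any sample ratings are illustrative only.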
Reliability of the Coding
Rater agreement between the 2 coders on stages 1 to 5 of Gibbs’ reflective cycle was 0.82 for questions (82% = 117 / [117 + 25]), 0.80 for responses (80% = 130 / [130 + 32]), and 0.81 for questions and responses combined (81% = [117 + 130] / [(117 + 130) + 58]), indicating reliable application of the reflective cycle in coding the questions and responses. The κ score for interobserver agreement was 0.77 for questions and 0.79 for responses, indicating good agreement between the 2 coders.
Stages of Reflection in Questions and Responses
Facilitators most often asked evaluative questions, stage 3 (S3) (43 of 117), and least often emotional questions (S2) (4 of 117) (Table 1), whereas students most often gave evaluative responses (S3) (50 of 130) and least often conclusive responses (S5) (1 of 130) (Table 2). None of the questions or responses was rated as concerning action plans (S6). The greatest difference between facilitators and students was in the analytic stage (S4): only 23 (19.6%) of the 117 questions asked by the facilitators were analytic (Table 1), whereas 45 (35%) of the 130 students’ responses were rated as analytic (Table 2).
The Relationship Between Questions and Responses
To explore the relationship between the reflection stages of questions and responses, the questions and responses graded identically by the researchers were analyzed (Table 3). The figure in each cell indicates how many responses in a given stage followed questions in a given stage. For example, descriptive questions (S1) elicited not only descriptive responses (S1) but also evaluative (S3) and analytic (S4) responses. Questions in the evaluative stage (S3) were followed by responses in the same stage (27 of 41) as well as in the analytic stage (S4) (10 of 41). Analytic questions (S4) were followed by 17 responses at the same level, whereas conclusive questions (S5) elicited most responses in the analytic stage (S4).
The results of this study demonstrate that there were large variations in the duration of the debriefings (from 5.5 to 35 minutes). Johnson-Russell and Bailey37 suggest that the amount of time allotted for debriefing should be commensurate with the objectives, the level of the learners’ knowledge and skills, and the complexity of the scenario, and should last no less than 30 minutes, whereas Flanagan7 proposes that “the length of time for debriefing should not be less than the time taken for the scenario itself: usually more time is ideal.” A major goal of debriefing is to reinforce the objectives of the simulation to ensure that the intended learning occurred. Debriefing should also foster reflective learning.37 Possible explanations for the large variation in the duration of the debriefings could be challenges in conducting the debriefings and scenarios, possibly owing to the beginner level of the facilitators. The learning outcome of the groups with the shortest debriefings might have been impaired.37
Although a substantial portion of the students’ responses to descriptive questions were in the evaluative and analytic stages (S3 and S4), the findings imply that questions in the evaluative and analytic stages (S3 and S4) elicited more responses at these stages than descriptive questions did. The results also showed that although approximately one third of the facilitators’ questions were descriptive, these questions prompted half of the responses in the emotional and conclusive stages (S2 and S5). Furthermore, questions at a deeper level of reflection, such as “What else could you have done?” (S5), prompted less variation in responses. One interpretation of this result might be that the formulation of these questions is more precise than that of the descriptive and evaluative ones. These results demonstrate the complexity of the debriefing and show that descriptive questions may promote more than descriptive responses. Participants might also be stimulated to engage in self-reflection by other elements in the setting, which would need to be explored further.
Very few of the facilitators’ questions in this study were formulated as analytic questions focusing on what sense the participants could make of the situation. According to Moon,38 deep reflection includes a metacognitive stance (ie, critical awareness of one’s own processes), “standing back” from the event, exploring motives or reasons for behavior, and taking into account the views and motives of others. In line with Moon, the facilitators’ questions in the debriefings may have encouraged a relatively superficial form of reflection, and learning that results from superficial reflection is also likely to be superficial.38 The results also revealed that the facilitators posed mostly descriptive and evaluative questions, which may have meant that students did not articulate the reasoning behind their actions. This, in turn, may explain why the students in the present study did not articulate any implications for future actions.11 Consequently, our results also point to the importance of using questioning techniques such as the advocacy-inquiry tool suggested by Rudolph et al.11
There were neither questions nor answers in S6, “Action plan—If it occurred again, what would you do?” This stage involves a personal plan for future actions.16 According to Moon,39 a plan for future actions in the reflective process is more likely to be posited in clinical practice. At the same time, Moon39 points to ambiguity in the literature regarding whether reflection should include a plan for future actions. One explanation of why the facilitators did not ask questions in S6 might be the beginner level of the faculty: the facilitators may not have been sufficiently trained in formulating questions at this level. Wildman and Niles40 claim that mastering the skills needed to promote reflection requires much time and effort. Another reason could be that the training of such skills was not explicitly addressed in the faculty development program in the studied cases.
The results point to the necessity for reflective questioning to be included in faculty training to make effective use of simulations. In addition to formal training of facilitators, novice and beginner facilitators should be guided through a reflective learning process early in their career by reflective expert facilitators. The results also indicate that it is necessary to work further on structuring the debriefing to facilitate deeper reflection.
One limitation of our study is that the research tool applied here has, to our knowledge, not previously been used to grade questions and answers into stages of reflection in postsimulation debriefing.16 To strengthen the reliability of the findings, the coding of questions and responses was conducted by 2 independent researchers; to strengthen the validity of the research tool, an independent team of raters could have been used to validate the grading. Second, the characteristics of the facilitators should have been included, because the skills of the facilitator are essential for the overall quality of the simulation; these characteristics would have given a clearer picture of the facilitators’ reflective skills.8 Third, the nursing students recruited to this study had performed only 1 simulation before the current one. That the situation was quite new for them could explain why few answers corresponded to deeper levels of reflection.39 More experience with simulation would hopefully lead to deeper reflection among the students. Considering that the sample size was relatively small and that all students and faculty were recruited from a single nursing program in Norway, the results point toward some, although limited, possibilities for generalization to other simulation settings and professions.
The results of this study reveal that postsimulation debriefings provide students with the opportunity to reflect on their simulation experience. The facilitators mostly asked questions at the descriptive and evaluative stages, whereas three quarters of the students’ responses were at the evaluative and analytic stages. Nevertheless, the facilitators’ descriptive questions promoted responses not only at the descriptive stage but also at more reflective levels. If the debriefing is to pave the way for student reflection, it is necessary to work further on structuring the debriefing to facilitate deeper reflection. It is therefore important that facilitators and simulation instructors consider what kinds of questions they ask to promote reflection, thereby optimizing the conditions for simulation-based learning. In addition, future research on debriefing should focus on developing an analytical framework for grading reflective questions. Such research will inform and support facilitators in devising strategies for promoting learning through reflection in debriefing.
1. Cantrell MA. The importance of debriefing in clinical simulations. Clin Simul Nurs 2008;4:e19–e23.
2. Rudolph JW, Simon R, Rivard P, Dufresne RL, Raemer D. Debriefing with good judgment: combining rigorous feedback with genuine inquiry. Anesthesiol Clin 2007;25:361–376.
3. Boud D, Walker D, Keogh R. Promoting reflection in learning: a model. In: Boud D, Walker D, Keogh R, eds. Reflection: Turning Experience Into Learning. London, England: Kogan Page; 1985:18–40.
4. Brackenreg J. Issues in reflection and debriefing: how nurse educators structure experiential activities. Nurse Educ Pract 2004;4:264–270.
5. Neill MA, Wotton K. High-fidelity simulation debriefing in nursing education: a literature review. Clin Simul Nurs 2011;7:e161–e168.
6. Boyd EM, Fales AW. Reflective learning: key to learning from experience. J Humanist Psychol 1983;23:99–117.
7. Flanagan B. Debriefing: theory and technique. In: Riley RH, ed. Manual of Simulation in Healthcare. Oxford, England: Oxford University Press; 2008:155–170.
8. Fanning RM, Gaba D. The role of debriefing in simulation-based learning. Simul Healthc 2007;2:115–125.
9. Dreifuerst KT. The essentials of debriefing in simulation learning: a concept analysis. Nurs Educ Perspect 2009;30:109–114.
10. Decker S. Integrating guided reflection into simulated learning experiences. In: Jeffries PR, ed. Simulation in Nursing Education: From Conceptualization to Evaluation. New York, NY: National League for Nursing; 2007:73–85.
11. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There’s no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc 2006;1:49–55.
12. Rogers R. Reflection in higher education: a concept analysis. Innov High Educ 2001;26:37–57.
13. Yuen Lie Lim L-A. A comparison of students’ reflective thinking across different years in a problem-based learning environment. Instr Sci 2011;39:171–188.
14. Harrison M, Short C, Roberts C. Reflecting on reflective learning: the case of geography, earth and environmental sciences. J Geogr High Educ 2003;27:133.
15. Atkins S, Murphy K. Reflection: a review of the literature. J Adv Nurs 1993;18:1188–1192.
16. Gibbs G. Learning by Doing: A Guide to Teaching and Learning Methods. London, England: FEU; 1988.
17. Mezirow J. Transformative learning: theory to practice. New Dir Adult Contin Educ 1997;74:5–12.
18. Schön DA. The Reflective Practitioner: How Professionals Think in Action. New York, NY: Basic Books; 1983.
19. Kolb DA. Experiential Learning: Experience as the Source of Learning and Development. Englewood Cliffs, NJ: Prentice-Hall; 1984.
20. Overstreet ML. The Current Practice of Nursing Clinical Simulation Debriefing: A Multiple Case Study [dissertation]. Knoxville, TN: University of Tennessee; 2009.
21. Dreifuerst KT. Debriefing for Meaningful Learning: Fostering Development of Clinical Reasoning Through Simulation [dissertation]. Bloomington, IN: Indiana University; 2010.
22. Dieckmann P, Friis SM, Lippert A, Østergaard D. The art and science of debriefing in simulation: ideal and practice. Med Teach 2009;31:287–294.
23. Raemer D, Anderson M, Cheng A, Fanning R, Nadkarni V, Savoldelli G. Research regarding debriefing as part of the learning process. Simul Healthc 2011;6:S52–S57.
24. Stanton F, Grant J. Approaches to experiential learning, course delivery and validation in medicine: a background document. Med Educ 1999;33:282–297.
25. Jones I, Alinier G. Introduction of a new reflective framework to enhance students’ simulation learning: a preliminary evaluation. Available at: http://hdl.handle.net/2299/6147. Accessed September 15, 2012.
26. Hogg G, Ker J, Stewart F. Over the counter clinical skills for pharmacists. Clin Teach 2011;8:109–113.
27. Williams RM, Sundelin G, Foster-Seargeant E, Norman GR. Assessing the reliability of grading reflective journal writing. J Phys Ther Educ 2000;14:23–26.
28. Polit DF, Beck CT. Essentials of Nursing Research: Appraising Evidence for Nursing Practice. Philadelphia, PA: Wolters Kluwer/Lippincott Williams & Wilkins; 2010.
29. Gnash L. Supervision issues in practice: supporting and advising midwives. Br J Midwifery 2009;17:714–716.
30. Wilding PM. Reflective practice: a learning tool for student nurses. Br J Nurs 2008;17:720–724.
31. Røykenes K, Larsen T. The relationship between nursing students’ mathematics ability and their performance in a drug calculation test. Nurse Educ Today 2010;30:697–701.
32. Husebø SE, Rystedt H, Friberg F. Educating for teamwork: nursing students’ coordination in simulated cardiac arrest situations. J Adv Nurs 2011;67:2239–2255.
33. Heath C, Hindmarsh J, Luff P. Video in Qualitative Research: Analysing Social Interaction in Everyday Life. Los Angeles, CA: Sage; 2010.
34. Dyrholm Siemensen IM. Et eksplorativt studie af faktorer der påvirker sikkerheden af patient-overgange (An Explorative Study of Factors Influencing Safety in Patient Handovers) [dissertation]. Lyngby, Denmark: Technical University of Denmark; 2011.
35. Polit DF. Data Analysis and Statistics for Nursing Research. Stamford, CT: Appleton & Lange; 1996.
36. Fleiss JL, Levin B, Paik MC. The measurement of interrater agreement. In: Fleiss JL, Levin B, Paik MC, eds. Statistical Methods for Rates and Proportions. Hoboken, NJ: Wiley; 2003:598–626.
37. Johnson-Russell J, Bailey C. Facilitated debriefing. In: Lashley FR, Nehring WM, eds. High-Fidelity Patient Simulation in Nursing Education. Sudbury, MA: Jones and Bartlett; 2010:369–385.
38. Moon J. Getting the measure of reflection: considering matters of definition and depth. J Radiother Pract 2007;6:191–200.
39. Moon JA. Reflection in Learning and Professional Development. London, England: Kogan Page; 2000.
40. Wildman TM, Niles JA. Reflective teachers: tensions between abstractions and realities. J Teach Educ 1987;38:25–31.