Simulation training has been accepted as an effective educational approach all over the world.1–4 Cultural influences are believed to play an important role in how human beings interact with one another.5,6 This ultimately reflects on education, safety, and outcome in healthcare7 and other industries.6 Most literature on postsimulation debriefing originates from North America and Europe and mainly focuses on techniques that apply to western cultures. One might argue that the whole idea of debriefing—guided verbal reflection after action—is a western concept. In some Asian countries, the process of learning relies more on demonstration and repetition8,9 and less on verbal aspects.
Culture is a concept with a great deal of face value, but it is also difficult to define and quantify. Unlike in the airline industry literature, little research in healthcare has focused on the cultural perspective of learning and simulation10–13 or on the effect of culture on adverse events.6,10,14 In this article, we focus on national cultures, as opposed to, for instance, professional or organizational cultures.
According to Hofstede, “Culture distinguishes one group of people from another […] and influences their patterns of thinking.”15 Hofstede's culture analysis permits a quantitative comparison according to six dimensions of national culture. One of these dimensions is power distance (high vs. low), described as hierarchy within a society. Relating to the Hofstede model, Chung et al5 hypothesized the existence of five debriefing characteristics that might be used to analyze differences in debriefing practice in different cultures, namely: talking time-instructor, talking time-students, interaction patterns, interaction styles, and initiative for interactions. Chung et al5 postulated that regional debriefing characteristics prevailed in different parts of the world. Their work raised the question of how debriefing characteristics unfold relative to cultural background and how these characteristics are related to hierarchy within a culture.5 In an effort to obtain a more detailed picture and to better align with existing cultural models, we sought to relate these debriefing characteristics to individual countries rather than entire regions. The Hofstede model is based on empirical data from individual countries. It permits the distinction of countries that Chung et al5 had considered part of the same region (eg, Norwegian culture vs. Italian culture, rather than simply European culture).
In an effort to continue and build on the work put forth by Chung et al,5 this study uses the Hofstede model. We focused on the dimension of power distance in individual countries, also referred to as the power distance index (PDI), defined as “acceptance of inequality in distribution of power in a certain society.”15 We sought to explore the relation between national PDI and behavior patterns during simulation debriefing, as self-reported by the debriefers. Specifically, we aimed (1) to identify common debriefing patterns that describe interactions among participants in different cultures during simulation debriefing and (2) to investigate the correlation between debriefing patterns and national PDI.
Based on previous work,5 we determined the following six culture-relevant debriefing characteristics: (1) debriefer/participant talking time, (2) debriefer/participant interaction pattern, (3) debriefer/participant interaction style, (4) debriefer/participant initiative for interactions, (5) debriefing content, and (6) difficulty with which nontechnical skills can be discussed. We formulated six hypotheses about postsimulation debriefing characteristics from the perspective of debriefers and studied how they compared in high- versus low-PDI countries:
- Hypothesis 1 (H1): During debriefings, debriefer talking time correlates positively with PDI: In high-PDI cultures, debriefers talk more.
- Hypothesis 2 (H2): The involvement of debriefers in interactions during debriefings correlates positively with PDI: In high-PDI cultures, debriefers are involved in more interactions.
- Hypothesis 3 (H3): PDI correlates positively with the proportion of leading questions during debriefings. In high-PDI cultures, leading questions rather than open-ended questions are used in debriefings.
- Hypothesis 4 (H4): PDI correlates positively with the likelihood of debriefers initiating interactions. In high-PDI cultures, debriefers are more likely to initiate interactions.
- Hypothesis 5 (H5): PDI correlates positively with the discussion of technical skills and medical knowledge. In cultures with high PDI, conveying medical knowledge and practical or cognitive skills (as opposed to nontechnical skills) consumes more time during debriefings.
- Hypothesis 6 (H6): PDI correlates positively with the perceived difficulty with which nontechnical skills, in particular speaking-up and the importance of volunteering/admitting to personal uncertainty, can be addressed. In high-PDI cultures, nontechnical skills are more difficult to discuss than in low-PDI cultures.
We designed an interview guide with the intention of conducting semistructured interviews with experienced simulation debriefers willing to participate in our study. The answers given were used to test the hypotheses. The interview guide consisted of three sections: (a) ten demographic questions, (b) six quantitative questions investigating culture-relevant debriefing characteristics, and (c) two qualitative questions exploring subjective, culture-sensitive issues. The interview guide was written in English and piloted with eight experienced simulation debriefers from various regions of the world in face-to-face interviews. Based on the answers given in the pilot study, we realized that there was considerable variability in how interviewees chose to answer some questions. The phraseology was modified after the pilot analysis to improve the clarity and comprehension of the questions (see Document 1, Supplemental Digital Content 1: Interview Guide, https://links.lww.com/SIH/A374).
The interviewers were experienced simulation educators and researchers with a common interest in the cultural migration of simulation. The common denominator was that their simulation training had taken place in a cultural environment different from the one they were practicing in at the time of the study, thus spawning an interest in the cultural prototypes of debriefing. The one author (Z.L.) who did not conduct interviews tabulated and analyzed the data and was completely blinded to the interview data collection process. This investigator was not privy to the interview design, did not know any of the debriefers when the interviews were conducted, and did not attend any of the international conferences at which interviews were conducted. The five interviewers conducted the interviews with a convenience sample of debriefers over a predefined period of 18 months, during which they sought to interview as many debriefers as possible.
Participating debriefers met the following inclusion criteria:
- Sufficient English language proficiency to engage in the interview
- Currently practicing simulation debriefings with interprofessional participant groups at the postgraduate level
- Having conducted at least 25 debriefings
Once a suitable debriefer had been identified, he or she was approached either in person or via e-mail with a request to participate. The e-mail included the interview guide as an attachment and was intended to give the debriefer an idea of what the interview would cover. After consent, the debriefer and the interviewer met in person, over the phone, or via video teleconference. During the meeting, the debriefer and interviewer jointly completed the interview guide. This was done by either writing on a paper copy of the interview guide or by typing into an electronic copy of the interview guide or a combination of both.
Five questions involved drawings—either pie charts or interaction graphs. Based on the pilot data, we observed that interviewees were more spontaneous, engaging, and forthcoming when asked to proportionally divide a pie chart than when asked to estimate percentages. The pie charts depicted the proportions of the responses. The interaction graph in question 12 was derived from the recording form introduced by Dieckmann et al.3 It allows for graphic illustrations of the dynamic interaction between the people involved in a debriefing: debriefer(s) (D), very active participants (VAP), less active participants (LAP), and observers (O). The presence of interaction between two people was indicated by drawing a connecting line between them. The extent of each interaction was indicated by a number next to the line, rated from 1 to 9, with 1 indicating low interaction and 9 indicating high interaction. After initial analysis of all debriefing results, a secondary analysis was conducted in which debriefings that had observers and/or co-debriefers present were analyzed separately from debriefings that did not.
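Conceptually, such an interaction graph can be treated as a set of weighted edges between roles. The sketch below is purely illustrative (it is not part of the study's analysis): the role labels follow the recording form described above, while the edge weights and example data are invented. It computes the share of total interaction weight involving the debriefer, one way such drawings could be quantified.

```python
# Illustrative sketch: an interaction graph as weighted edges between roles.
# Roles follow the recording form described above: D = debriefer,
# VAP = very active participant, LAP = less active participant, O = observer.
# All edge weights (1-9) and example data are hypothetical.

def debriefer_involvement(edges):
    """Fraction of total interaction weight on edges that include the debriefer."""
    total = sum(w for _, _, w in edges)
    with_d = sum(w for a, b, w in edges if "D" in (a, b))
    return with_d / total if total else 0.0

# A hypothetical "fan-shaped" (high-PDI-style) debriefing:
fan = [("D", "VAP", 8), ("D", "LAP", 6), ("D", "O", 3), ("VAP", "LAP", 1)]
# A hypothetical "star-shaped" (low-PDI-style) debriefing:
star = [("D", "VAP", 4), ("VAP", "LAP", 5), ("LAP", "O", 4), ("VAP", "O", 3)]

print(round(debriefer_involvement(fan), 2))   # -> 0.94, most weight involves D
print(round(debriefer_involvement(star), 2))  # -> 0.25, interactions spread out
```

A single scalar like this obviously discards the shape of the graph; it is only meant to show how a drawn diagram could be reduced to a quantity comparable across interviews.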
Based on the pilot study, completion of the interview guide was estimated to take 15 to 20 minutes. During the interview, the participating debriefers were asked to picture a specific debriefing setting with an interprofessional postgraduate group of simulation participants and to base their answers relative to this specific setting. Participating debriefers were provided with the option to ask follow-up questions if the content, wording, or context of the interview guide was not sufficiently clear to them. Interviews were conducted between the beginning of May 2015 and the end of October 2016.
Potential participant debriefers were informed about the study in detail during the first encounter. After providing verbal consent to proceed with the interview guide, they shared their contact details for future follow-up. All debriefers were made aware of the fact that their participation was voluntary and that they could stop the interview process at any time. Analysis and findings were based on aggregate results and therefore respect the confidentiality and privacy of all participants. The Swiss Ethics Review Board approved the study (EKNZ Req-2016-00674).
For this study, we used a PDI cutoff to distinguish countries with low and high PDI: a PDI of 50 or less is low, and a PDI of 51 or greater is high.16 The PDI values used were obtained from the Geert Hofstede Web site.17 Data from sections I and II were analyzed using the statistical software SPSS 23.0. Descriptive and inferential quantitative analyses were performed. A one-tailed significance level (P < 0.05) was used to account for the directed hypotheses. Correlation was determined by calculating Kendall's τ-b. To investigate the effects of the different modalities in which the interviews were conducted, data collected in face-to-face interviews were analyzed separately from the data collected without interviewers present.
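For readers unfamiliar with the coefficient, the rank-based Kendall's τ-b used above can be sketched in a few lines. This is only an illustration of the statistic (the study itself used SPSS); the PDI and talking-time values below are invented, and significance testing is omitted for brevity.

```python
# Illustrative sketch of Kendall's tau-b; the study's actual analysis was done
# in SPSS. All data values below are hypothetical.
import math

def kendall_tau_b(x, y):
    """Kendall's tau-b rank correlation with the standard tie correction."""
    concordant = discordant = ties_x = ties_y = 0
    for i in range(len(x)):
        for j in range(i + 1, len(x)):
            dx, dy = x[i] - x[j], y[i] - y[j]
            if dx == 0 and dy == 0:
                continue              # tied in both variables
            elif dx == 0:
                ties_x += 1           # tied in x only
            elif dy == 0:
                ties_y += 1           # tied in y only
            elif dx * dy > 0:
                concordant += 1
            else:
                discordant += 1
    denom = math.sqrt((concordant + discordant + ties_x)
                      * (concordant + discordant + ties_y))
    return (concordant - discordant) / denom

pdi = [18, 31, 35, 40, 68, 80, 95]    # invented country PDI values
talk = [25, 30, 28, 45, 55, 60, 70]   # invented debriefer talking time (%)
print(round(kendall_tau_b(pdi, talk), 3))  # -> 0.905, a strong positive correlation
```

Because τ-b compares only the ordering of value pairs, it is robust to the fact that the interview responses (pie-chart proportions, 1 to 9 ratings) are ordinal rather than truly continuous.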
A total of 68 debriefers from 26 countries participated in the study. Fifty-four interview guides were completed in face-to-face interviews. Fourteen debriefers responded to our e-mail request by answering the questions electronically or on paper, thus completing our interview guide independently, in the absence of an interviewer. Because the interview guide had been written in a self-explanatory fashion, these respondents seemed to have understood the questions to a large extent. All e-mail respondents were notified of the opportunity to ask follow-up questions, should they encounter ambiguity while responding to the interview guide. However, some of the questions answered independently had been misunderstood and could not be considered in our analyses, as outlined below.
Section I: Descriptive Statistics
Forty-four data sets came from debriefers practicing in countries with low PDIs (≤50) and 24 came from countries with high PDIs (≥51) (Fig. 1). Table 1 shows the descriptive statistics of the study population.
Section II: Correlation of Culture-Relevant Debriefing Characteristics With PDI
Table 2 shows the data corresponding to the findings addressed in this section. Our results supported our first hypothesis (H1): debriefer talking time was significantly positively correlated with PDI. Debriefers talked more than participants during debriefings in high-PDI countries.

We hypothesized that interaction patterns during debriefing were also influenced by a country's PDI (H2). Six interaction pattern diagrams were excluded from the analysis because the returned depiction revealed that the interviewee had not understood the question correctly. Figure 2 shows that 32% of the interviewees did not have observers present during debriefings, and 20% did not have a co-debriefer present. Debriefings with and without observers were analyzed separately for correlation, and in both instances, debriefer interaction was significantly correlated with PDI: as hypothesized, in high-PDI countries, most interactions involved the debriefer. Conversely, in low-PDI cultures, debriefers perceived more interactions to occur between participants and observers without directly involving the debriefer (Fig. 3).

Our results also supported H3 and H4: we found a statistically significant positive correlation between PDI and interaction style. The higher the PDI, the more leading questions dominated the interaction and the more discussions were initiated by the debriefers.

Results also showed a significant correlation between debriefing content and PDI (H5): in countries with high PDI, discussion during debriefing focused more on conveying technical knowledge and skills, whereas nontechnical skills (eg, team issues, human factors) were addressed to a greater extent in countries with low PDI. Finally, the correlation between the perceived difficulty of discussing nontechnical skills and PDI was significant for speaking-up, closed-loop communication, systemic/institutional processes, and situational awareness (Table 2).
To test the validity of our data, we separately analyzed the subset of 54 interviews that were conducted in the presence of the interviewer. Except for talking time (τb = 0.142, P = 0.235), all culture-relevant debriefing characteristics showed comparable significant correlations.
Our results support the six hypotheses and show that cultural background is mirrored in the practice of simulation debriefing. In the perception of the debriefers, in high-PDI cultures, debriefers talked more (H1), participants interacted less with each other (H2), debriefers used fewer open-ended questions (H3), participants initiated fewer interactions (H4), debriefers focused more on technical/medical issues than on nontechnical discussions (H5), and debriefers found it more difficult to address issues such as speaking-up and volunteering/admitting to personal uncertainty than in low-PDI countries (H6).
The current study shows a relation between the power distance of a country and debriefer-participant behavior patterns as perceived through the eyes of the debriefer. The results indicate that the higher the power distance of the setting in which a debriefing takes place, the more the debriefer determines its course. In a high–power distance setting, certain topics become more difficult to address. Nontechnical skills that can be perceived as challenging existing hierarchical norms, such as speaking-up or volunteering uncertainty, are especially sensitive. We found empirical support for the assumptions postulated by Chung et al in their article.5 Our results are based on individual countries, whereas the hypothesis of Chung et al5 is based on world regions, but the basic idea, a connection between power distance and debriefer behavior, is the same.
The findings of this study are important because they emphasize the need to take a closer look at what actually happens during simulation debriefing and possibly during simulation-based training as a whole. This study points to national culture as a relevant dimension that influences debriefing practice and simulation-based training. As with other educational methods, simulation is conducted in different settings and is experienced in a variety of ways. Understanding this variability and its impact is key to adapting simulation to context and learners, and hence to increasing the impact of this educational experience.18 Assuming the plurality of debriefing practices outlined in this study is an indication of “how we do things around here,” a common definition of organizational culture,19 the findings indicate substantial differences in debriefing practices in different countries. What is mutually expected by debriefers and course participants? What is considered acceptable behavior during a simulation course? What topics are expected to be discussed? These and many other issues seem to be linked to national culture and its norms, values, and beliefs. In Denmark, for instance, a low-PDI country, participants might want to have a say about what is discussed in their debriefing. In Guatemala, the country with the highest PDI in our sample, participants might expect the facilitator to determine the course and content of the debriefing. These findings are in line with previous work describing simulation as a social practice.20 Social practices are seen as settings in time and space that enable certain interactions between those involved. How these interactions unfold in practice is influenced by many aspects, including traditions and other norms, as well as what is possible in the physical environment.21–23 One example would be whether participants and instructors are on a first-name basis, potentially indicating less hierarchy. Being on a first-name basis is perceived as much more of an interpersonal approximation in the French- and German-speaking countries of Europe than in most English-speaking countries.
The differences in reported debriefing practice described in this study raise the question of how the simulation community could approach them and how cultural differences could be accounted for. The answer to this question is beyond the scope of our study but could potentially be accomplished in three principal ways. One possible approach would be to tailor all simulation activities to a prototypical and standardized way of “doing things.”21 Based on the way simulation has evolved, this would likely be a mix of North American and European ways of thinking and practicing simulation training. Any kind of accreditation/certification program could be seen as a means along those lines, prescribing and standardizing the approaches.23,24
A second approach would aim to generate a broad consensus of accepted and expected practice. What are the “international” expectations for the role of the facilitator and the course participant? This would, by default, lead to a common, almost Esperanto-like cultural compromise that applies partly to all but fully to no one. This common ground, although very important, might be defined in terms too broad to guide the everyday practice of simulation.
A third approach would strive to preserve cultural prototypes and to practice established differences. Simulation might look different in different parts of the world. Consider Figure 3. In some countries, a fan-shaped interaction pattern might be typical (and functional); in others, the star-shaped pattern would describe common (and functional) practice. This would mean that there is no inherent superiority of one approach over another, as long as common practice is assessed within the logic of its cultural context and based on context-sensitive assessment criteria.
Finding a functional approach to addressing cultural differences is especially important when different national cultures intermingle, as in workplaces where people from different countries cooperate. A first step in this direction is to understand the differences. Taking different traditions, expectations, and qualitative criteria into account is important when designing simulation scenarios, debriefings, and evaluation concepts, and when considering how to integrate simulation into curricula and organizations.
The culture model used in this study is up for debate. Hofstede analyzed data on employees' value scores that were collected in the 1960s and 1970s by IBM, a United States–based information technology company. Given the fact that the information collected focused on IBM's needs at the time, one might question whether such culture dimensions apply to modern day health care, let alone health care education. Nonetheless, culture dimensions still seem to have a substantial amount of face value, especially when focusing on power distance. Beugelsdijk et al25 showed that cultural change is absolute rather than relative, meaning that countries' scores on the Hofstede dimensions, including power distance, relative to the scores of other countries have not changed very much. As a result, over time, cultural differences between country pairs (ie, cultural distances) remain quite constant.
The main reason for choosing the Hofstede model was the previous work by Chung et al5 that inspired the current study. The alternative model, GLOBE,26 is more complex and more recent. Both models are similar in their definition of power distance, although they align individual countries differently along this criterion. Both the Hofstede27 and the GLOBE28 groups tested their dimensions against one another, with varying conclusions; they do agree, however, that the dimension of power distance is the closest correlate between the two models. Nonetheless, even within this dimension, Hofstede showed that actual behavioral hierarchical practices, which are the basis of the Hofstede model, were statistically distinct from values—one of the bases of the GLOBE model. To describe debriefing practice, we needed to rely on a model based on the description of practice, which the Hofstede model is. As globalization progresses, the construct of national culture may become less pronounced. Humans move (almost) freely around the world, and as a result, cultures mix. Societal developments, including the Internet, the transportation industry, and global migration, work against the preservation of distinct cultures and have fostered cultural mixing. Nonetheless, we believe that the dimensions of national culture, as defined and quantified by Hofstede, can still be observed in the current day and age and are very much applicable to modern health care.
Hofstede15,16 used 50 as a cutoff for distinguishing high- from low-PDI countries. He chose this cutoff arbitrarily, with the intention of comparing countries with clearly different PDIs. A stricter distinction could use only the countries in the top and bottom 20% of PDI values; however, our data pool was not large enough to allow for such a cutoff.
The relatively small number of simulation experts interviewed is a limitation of our study. As mentioned in the Methods section, the interviewed debriefers constituted a convenience sample, most of whom were met at international simulation conferences. There was a relatively small number of debriefers per country (Fig. 1). Their views are therefore not representative of their respective countries, especially considering that variability within a country may at times be larger than that between countries. Selecting study participants from a convenience sample, as well as having countries such as the United Kingdom, the United States, and Switzerland represented disproportionately, introduces a bias inherent to this interview methodology. The current study, however, did not aspire to achieve comprehensive and proportional representation of individual countries.
The interview guide was designed using predominantly closed-ended questions, aiming to leave little room for potential bias related to interviewer behavior. Nonetheless, most interviews were conducted face-to-face, and the authors acknowledge that a number of factors inherent to the interview method (eg, unconscious verbal and nonverbal interaction) could have biased the interview process.
Another limitation of our study is that the principal mode of data collection was based on self-reported behaviors and attitudes of experienced simulation debriefers. It is likely that their expressed views were not in total agreement with their actual simulation practice. Furthermore, debriefers' answers were based on their perceptions of how their debriefing sessions were conducted and how interaction patterns were delineated between the different participants and/or observers and themselves. In particular, the depicted interaction patterns represent the subjective view of the debriefer and do not take participants' perceptions into account.
The debriefer's personal style and personality traits were not considered in the interview guide. The way a debriefer chooses to conduct his or her debriefing is invariably influenced by personal style and cultural background, which in turn affects how participants engage and interact. Furthermore, this study did not focus on the dynamics between the debriefer and the co-debriefer, which may vary considerably depending on experience, personality, societal background, power distance, and other dimensions of national culture.29 Future studies would benefit from using direct observations of debriefings in different settings. Such a measure would yield a more accurate description of how debriefing truly unfolds in practice.
Another limitation is that 14 debriefers provided data without the interviewer present, so these data collections do not meet the classic definition of an “interview.” This contributed to the fact that six of the interaction pattern diagrams were completed inconclusively and had to be excluded from analysis. Because the data sets collected in the presence of the interviewer showed correlations very similar to those collected in the interviewer's absence, we opted to include both pools in our analyses.
Finally, not all of our study participants understood all of the interview guide questions immediately. Some of the difficulties seemed to have cultural roots and originated from discrepancies in practice. Some interviewees seemed puzzled by the question of whether observers were present during the debriefing. In some cultures, observers are always present during debriefing, whereas in others, the presence of observers seemed to be unsuitable, although this was not always explicitly stated. Not having offered a specific definition of the term “observer” is a limitation of this study. It did, however, allow us to sense the versatility with which different cultures deal with the idea of being observed during debriefing.
Despite these limitations, we were able to delineate common debriefing characteristics relative to national cultures. This is particularly striking when examining the typical patterns of the interaction graphs in Figure 3, which illustrate the apparent clustering of a more “fan-shaped” interaction dynamic among high-PDI nations in contrast to the more “star-shaped” dynamic seen in low-PDI countries. Based on this finding, it seems that in high-PDI cultures, participants' learning experience is dominated more by the debriefer, unlike in low-PDI cultures, where participants' learning experience results from more evenly distributed interactions. This suggests that the power distance between the debriefer and participants is likely related to the dynamics of the learning experience. In many western cultures, debriefing has emphasized facilitation-driven education that reduces the gradient between educator and learners and encourages broad interactions resembling the low-PDI interaction pattern. Given our results, however, it seems questionable whether this approach is acceptable and desirable to learners and educators in high-PDI cultures. Debriefing approaches that take cultural parameters, such as PDI, into account are needed to meet the needs of the learner. Despite stark contrasts in dimensions of national culture in general and power distance in particular, simulation has become an educational tool in health care that is embraced and successfully practiced by many educators and learners all over the world.
In summary, we attempted to quantify the effects of the cultural dimension of power distance in relation to different characteristics of simulation debriefing. Power distance bears relevance in virtually every clinical setting in which health care professionals interact.8 Studies have yet to unveil the effects of other cultural dimensions, such as femininity/masculinity and uncertainty avoidance, or of other factors impacting debriefing characteristics. Nonverbal cultural traits, such as body language and utterances or exclamations, also constitute cultural research opportunities. Further research is needed to assess whether and how debriefing techniques can be tailored to participants' different cultural backgrounds and whether this results in a heightened learning experience. Such research may lead to simulation debriefing techniques customized to a learner's cultural background, resulting in an optimal learning experience and ultimately translating into improved clinical outcomes. Eventually, the concept of national culture may need to be replaced by a more nuanced form of cultural analysis allowing for variations within a country or even within an institution. Examining cultural influences on debriefing is a novel undertaking and requires further investigation. More specifically, participants' perceptions should be explored, and reviewing actual recordings of debriefings would elicit objective details on how debriefings are practiced in different parts of the world.
A Concluding Remark
Writing in times of many conflicts between cultures, we want to take a stand that might be considered outside the tradition of scientific writing. We emphasize that we do not intend to assign values to any culture. By default, we value diversity and mutual respect for this diversity. We hope to contribute to a diversified view of what simulation-based education is: a diversification that will hopefully increase mutual understanding and cultural acceptance.
The authors thank the study participants for their time and dedication in the data collection process. The authors also thank the editorial team and reviewers of Simulation in Healthcare, who helped improve this article.
1. Cheng A, Donoghue A, Gilfoyle E, Eppich W. Simulation-based crisis resource management training for pediatric critical care medicine: a review for instructors. Pediatr Crit Care Med
2. Cook DA, Hatala R, Brydges R, et al. Technology-enhanced simulation for health professions education: a systematic review and meta-analysis. JAMA
3. Dieckmann P, Molin Friis S, Lippert A, Østergaard D. The art and science of debriefing in simulation: ideal and practice. Med Teach
4. Raemer D, Anderson M, Cheng A, et al. Research regarding debriefing as part of the learning process. Simul Healthc
5. Chung HS, Dieckmann P, Issenberg SB. It is time to consider cultural differences in debriefing. Simul Healthc
6. Helmreich RL. Culture at Work in Aviation and Medicine: National, Organizational and Professional Influences. Routledge; 2017.
7. Raghunathan K. Checklists, safety, my culture and me. BMJ Qual Saf
8. Campbell ET. Teaching Korean RN-BSN students. Nurse Educ
9. Lin CS. Medical students' perception of good PBL tutors in Taiwan. Teach Learn Med
10. Eppich W, Cheng A. How cultural-historical activity theory can inform interprofessional team debriefings. Clin Simul Nurs
11. Engestrom Y. Expansive learning at work: toward an activity theoretical reconceptualization. J Educ Work
12. Rystedt H, Lindwall O. The interactive construction of learning foci in simulation-based learning environments: a case study of an anaesthesia course. PsychNology J
13. Johnson E. Situating Simulators: The Integration of Simulations in Medical Practice. 1st ed. Sweden: Arkiv Academic Press; 2004.
14. Chassin MR, Loeb JM. The ongoing quality improvement journey: next stop, high reliability. Health Aff (Millwood)
15. Hofstede G, Bond MH. Hofstede's culture dimensions: an independent validation using Rokeach's value survey. J Cross Cult Psychol
16. National culture. Available at: https://geert-hofstede.com/national-culture.html. Accessed January 31, 2017.
17. Country comparisons. Available at: https://geert-hofstede.com/countries.html. Accessed January 31, 2017.
18. Curran I. Creating effective learning environments: key educational concepts applied to simulation training. In: Kyle RR, Murray BW, eds. Clinical Simulation: Operations, Engineering and Management. 1st ed. San Diego: Academic Press-Elsevier Inc.; 2008:153–167.
19. Bower M. The Will to Manage: Corporate Success Through Programmed Management. 1st ed. New York: McGraw-Hill; 1966.
20. Dieckmann P, Gaba D, Rall M. Deepening the theoretical foundations of patient simulation as social practice. Simul Healthc
21. Lahlou S, Le Bellu S, Boesen-Mariani S. Subjective evidence based ethnography: method and applications. Integr Psychol Behav Sci
22. Lahlou S. Installation Theory: The Social Construction and Regulation of Individual Behavior. Cambridge: Cambridge University Press; 2017.
23. Wilson L, Wittmann-Price RA. Review Manual for the Certified Healthcare Simulation Educator™ (CHSE™) Exam. 1st ed. New York: Springer Publishing Company; 2014.
24. Palaganas JC, Maxworthy JC, Epps CA, Mancini ME. Defining Excellence in Simulation Programs. 1st ed. Philadelphia: Wolters Kluwer Health; 2014.
25. Beugelsdijk S, Maseland R, van Hoorn A. Are scores on Hofstede's dimensions of national culture stable over time? A cohort analysis. Global Strategy Journal
26. House RJ, Hanges PJ, Javidan M, Dorfman P, Gupta V. Culture, Leadership, and Organizations: The GLOBE Study of 62 Societies. Thousand Oaks, CA: Sage Publications; 2004.
27. Hofstede G. What did GLOBE really measure? Researchers' minds versus respondents' minds. J Int Bus Stud
28. Javidan M, House RJ, Dorfman PW, Hanges PJ, de Luque MS. Conceptualizing and measuring cultures and their consequences: a comparative review of GLOBE's and Hofstede's approaches. J Int Bus Stud
29. Cheng A, Palaganas J, Eppich W, et al. Co-debriefing for simulation-based education: a primer for facilitators. Simul Healthc