
Empirical Investigations

Linguistic Validation of the Debriefing Assessment for Simulation in Healthcare in Spanish and Cultural Validation for 8 Spanish Speaking Countries

Muller-Botti, Sacha MD; Maestre, Jose M. MD, PhD; del Moral, Ignacio MD, PhD; Fey, Mary PhD, RN, CHSE-A, ANEF; Simon, Robert MAT, MEd, EdD

Author Information
Simulation in Healthcare: The Journal of the Society for Simulation in Healthcare: February 2021 - Volume 16 - Issue 1 - p 13-19
doi: 10.1097/SIH.0000000000000468

INTRODUCTION

Reflecting on one's professional practice is a crucial step in the experiential learning process.1 It helps learners develop and integrate insights from direct experience into later action.2 In a systematic review, up to 47% of included journal articles reported that feedback (debriefing) is the most important feature of simulation-based medical education.3

Given the pivotal role of debriefing, it is important to have instruments that measure the quality of debriefings. The Debriefing Assessment for Simulation in Healthcare (DASH) tool was designed to assist in developing and evaluating faculty debriefing and instructional skills in a wide variety of healthcare debriefing contexts. It is a behaviorally anchored rating scale that, in the context of trained raters, has shown interrater reliability and internal consistency. Debriefing Assessment for Simulation in Healthcare content validity is grounded in an extensive theoretical structure, practice-based experience, and expert panel review. Evidence shows that DASH scores yield a statistically significant difference between debriefings of varying quality.4

As simulation-based education and debriefing are expanding internationally, instruments such as the DASH will likely be useful in other languages, such as Spanish.5 This assessment tool was developed in English, and although English is the international language of science, not everyone in other countries necessarily feels confident using it. Furthermore, other assessment tools and debriefing guidelines or protocols are available only in English, which may hinder their use in non-English–speaking countries. To make simulation-related assessment and training materials more accessible, there is an increasing need to translate them into other languages.

To ensure that the translation is linguistically accurate and also fit for use across cultures sharing the same language,6,7 the selected methodology must be robust and systematically applied. Literal translation into another language would maintain semantic equivalence between word order and syntax in the translated document but may not account for important nuances such as implicit values and assumptions particular to a culture. Thus, when translating instruments into other languages, it is necessary to achieve congruence between the terms and the meanings of concepts across cultures. This requires the use of expressions and concepts that are equivalent rather than identical. For example, there are known linguistic and cultural differences among countries across Latin America and Spain that may affect the meaning given to a number of terms and concepts used in the DASH, such as "engagement." Rigorous methodologies are available to ensure useful and valid translations into different languages and cultures within languages.6–14 This study is an example of such an endeavor applied to one of the most commonly used languages in the world.15

The objective of this study was to conduct a linguistic validation of the translation of the DASH from English to Spanish and a cultural validation across 8 Spanish-speaking countries: Argentina, Chile, Colombia, Costa Rica, Ecuador, Mexico, Peru, and Spain.

The aims of this study were to assess whether a Spanish version of the DASH tool is capable of capturing the intended meaning of the English version and whether it is well understood by Spanish-speaking users in the targeted countries. In addition to this methodological demonstration, the desired outcome was to publish an internationally compatible Spanish version.

MATERIAL AND METHODS

A schematic overview of the forward and backward linguistic translation process, and the cross-cultural review process, based on the principles of good practice and available guidelines for translation and cultural adaptation of Piault et al,6 Brandt et al,7 and the International Society for Pharmacoeconomics and Outcomes Research Task Force,11 is shown in Figure 1.

FIGURE 1: Schematic overview of the forward and backward linguistic translation process, as well as the cross-cultural review process. Based on the methodology of Piault et al6 and Brandt et al.7
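For readers who find a procedural summary helpful, the workflow in Figure 1 can be sketched as a short, self-contained Python outline. The step wording and this representation are our own illustrative assumptions drawn from the Methods; they are not part of the DASH materials or of any published tooling.

```python
# A minimal, illustrative sketch of the Figure 1 workflow as ordered steps.
# Step names and this representation are assumptions for illustration only.

WORKFLOW = [
    ("Harmonization", [
        "Forward translation of the DASH by a bilingual, bicultural translator",
        "Iterative review with a second bilingual expert until consensus",
        "Output: consensus harmonized translation",
    ]),
    ("Linguistic validation", [
        "Back translation of selected elements (2 and 5) by a native English speaker",
        "Expert review of the back translation against the original by the DASH authors",
        "Resolve discrepancies; output: approved harmonized translation",
    ]),
    ("Cultural validation", [
        "Questionnaire to instructors in 8 Spanish-speaking countries",
        "Analyze responses and apply amendments meeting the acceptance criteria",
        "Output: linguistically and culturally validated Spanish DASH",
    ]),
]

if __name__ == "__main__":
    for phase, steps in WORKFLOW:
        print(phase)
        for step in steps:
            print(f"  - {step}")
```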

HARMONIZATION

Original Document

The original document is the DASH tool. It includes score sheets and a comprehensive Rater's Handbook used for the DASH rater version (RV). The Rater's Handbook includes background and instructions and comprehensively describes a 6-element, unweighted, criterion-referenced, behaviorally anchored rating scale. The DASH Rater's Handbook provides instructions and examples reflecting debriefer behavior to assist with the rating. To accurately assess and score the debriefing, the DASH Rater's Handbook and the RV score sheet are used concomitantly. Each of the 6 elements is scored on a 7-point rating scale. "Elements" are concepts describing a reasonably distinct area of debriefer behavior. After describing and defining an element, behaviorally oriented "dimensions" are included to further enhance the rater's understanding of each element. There are 23 dimensions distributed across the 6 elements (Fig. 2). Within each of the 23 dimensions, observable positive and negative "behavior" examples are provided.16

FIGURE 2: Schematic structure of the DASH.

The DASH has 3 versions of the score sheet: one designed for trained raters (RV), one designed for students to rate their instructors [student version (SV)], and one designed for instructors to self-rate [instructor version (IV)]. Furthermore, 2 different score sheets are available for each of the RV, SV, and IV forms: a short form and a long form. The short forms are designed to obtain element scores only; they provide more global ratings and are viewed as especially useful for summative evaluations. The long forms are designed to obtain element and dimension ratings and yield more specific data for providing formative feedback to debriefers (Fig. 3).

FIGURE 3: The DASH Handbook and Score Sheets.
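As a reading aid, the structure just described (elements containing dimensions, dimensions containing observable positive and negative behaviors, a 7-point element rating, and 3 score sheet versions each in a short and a long form) can be captured in a minimal data model. The class, enum, and field names below are illustrative assumptions, not an official DASH schema.

```python
# Minimal, illustrative data model of the DASH structure described above.
# Class, enum, and field names are assumptions, not an official DASH schema.
from dataclasses import dataclass, field
from enum import Enum


class Version(Enum):
    RV = "rater version"
    SV = "student version"
    IV = "instructor version"


class Form(Enum):
    SHORT = "element scores only; more global, suited to summative evaluation"
    LONG = "element and dimension ratings; more specific, formative feedback"


@dataclass
class Dimension:
    name: str
    positive_behaviors: list[str] = field(default_factory=list)  # observable examples of doing it well
    negative_behaviors: list[str] = field(default_factory=list)  # observable examples of doing it poorly


@dataclass
class Element:
    number: int                      # 1 through 6
    description: str
    dimensions: list[Dimension]      # 23 dimensions are distributed across the 6 elements
    score: int | None = None         # each element is rated on a 7-point scale (1-7)


# 3 versions x 2 forms = 6 possible score sheets
score_sheets = [(v, f) for v in Version for f in Form]

# Example: element 5 is named later in the article ("identifies and explores performance gaps")
element5 = Element(number=5,
                   description="Identifies and explores performance gaps",
                   dimensions=[])
```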

In this study, the translation process was applied to all score sheet versions (RV, SV, and IV) and to the Rater's Handbook.

Translator and Reviewer

Sousa et al8 and the Process of Translation of the World Health Organization9 state that the translator's native language should be the target language and that the translator must be fluent in both the source (English) and the target (Spanish) language of the instrument. In addition, the World Health Organization guidelines note that the translator should preferably be bicultural, that is, have in-depth experience of the cultures of both the source and the target language of the instrument. Finally, the translator must be knowledgeable about healthcare terminology and the content area of the instrument's constructs in both languages.

We also considered that the translator and reviewer should understand the DASH and should regularly teach its use to the simulation community. Specifically, the translator and reviewer took a formal DASH course through the Center for Medical Simulation (CMS) and use the tool regularly in their simulation practice.

The DASH Rater's Handbook was initially translated to Spanish by a native Spanish speaker (S.M.-B.) who is fluent in English and has been living and working in an English-speaking country for more than 12 years. He is a faculty member of a simulation center and teaches simulation-based courses in English- and Spanish-speaking countries. He has been trained as a DASH rater through the online DASH webinar held by the CMS in Boston and teaches the use of the DASH tool in Spanish and English.

The reviewer is a fluent English speaker whose native language is Spanish (J.M.M.). He is the education director of a simulation center, teaches simulation-based courses in both English and Spanish, and has been trained as a DASH rater through the online DASH webinar held by the CMS. He also has developed and provided DASH rater training in Spanish.

Consensus Harmonized Translation

The translator and the reviewer put the 6 DASH elements through 3 to 6 iterative revisions each until both parties agreed that the translation was accurate. The aim was the conceptual equivalent of a word or phrase and not a word-for-word translation. The Spanish team considered the definition of the original term and attempted to translate it in the most relevant way.8 Once agreement was achieved, the document was considered to be the consensus harmonized translation. The name of the new instrument became the Spanish DASH (translated as Evaluación del Debriefing para la Simulación en Salud).

LINGUISTIC VALIDATION

Selection of Complex Items and Cultural Differences for the “Back Translation”

Back translation is the process by which a professional translator renders a document that has previously been translated into another language back into the original language, in this case from Spanish to English.

The complete score sheets, including the 6 DASH elements, were reviewed via back translation. Back translating the handbook and iteratively checking it for accuracy is labor intensive. In light of limited resources, it was decided to start the validation process with a thorough review of 2 elements. If these required only a few minor changes, it was thought unnecessary to review the rest as closely. However, the possibility was left open that, if the 2 elements needed extensive rework, all 6 would be back translated and reviewed. Two original DASH authors chose elements 2 and 5 as the sample for review of the back translation (R. Simon and J. Rudolph, personal communication, September 18, 2017). The choice was based on terms and concepts contained in those elements that were considered critical to the understanding of the document and that were suspected to be particularly prone to translation problems across cultures (Table 1).9

TABLE 1 - Criteria for Selecting DASH Elements (Items) for Back Translation
• Key organizational components of the DASH:
 o Rater, Instructor, Student Score Sheet versions—reflects all the components of the rater, instructor, and SVs
 o Handbook:
  ▪ Description of the “elements” related to key debriefing skills for maintaining psychological safety and assessing performance.2
  ▪ Description of the “dimensions” that reflect high-level competencies within the “element.”
  ▪ Specific “behaviors” that are observable examples of carrying out the “dimension” effectively and ineffectively.
• Cultural differences between Spanish-speaking countries:
 o Salient but common English concepts, terms, and phrases that vary across countries (eg, score sheet, rating scale, feedback, performance).
 o English terms, concepts, and phrases that are essential for an effective debriefing that may have a different meaning in Spanish (eg, engaging context for learning, provokes engaging discussions, etc.).
• Complex terms to translate between English and Spanish
 o Concepts and words that remain in the DASH as Anglicisms (eg, debriefing, feedback).
• Linguistic differences in active and passive voice.
 o Passive voice in English cannot always be rendered as the passive voice in Spanish, and in many cases, Spanish correspondences are reflexive or impersonal active clauses that contrast between languages. In addition, English passives may be translated as Spanish actives and vice versa.17
Example: To translate the following sentence from English to Spanish “The student's frame was explored” (passive voice), an acceptable translation would be “Se exploró el modelo mental del estudiante” (active voice).

Once the elements were identified, the back translation was undertaken by a bilingual translator whose native language was English.9 The chosen translator (F.L.) was familiar with the terminology and content of the original instrument.8 F.L. was born and raised in New York and has lived in Spain for the past 20 years where he works as an English-Spanish interpreter. He has experience in simultaneous interpretation for simulation-based instructor courses.

Approved Harmonized Translation

Once the back translation to English was completed, the authors of the original document in English performed an expert review. As in the initial translation, the comparison of the back-translated version with the original English version did not aim for word-for-word symmetry but for an accurate and complete conceptual correspondence, to ensure the content was equivalent.

For any discrepancies detected, the forward translator, reviewer, and back translator discussed the source of the problem, that is, whether it originated in the forward translation or in the back translation. Problematic words or phrases that did not completely capture the concept addressed by the original English document were brought to the attention of the translator and reviewer. A table was created to outline the conflicts detected between the back translation and the original document. Each conflict was categorized as a term, concept, verb tense, syntax error, anglicism, or cultural difference. The source of the conflict was identified (forward or back translation) to determine whether the forward translation needed to be modified. Amendments and iterations were made until a satisfactory version was reached. This created the approved harmonized translation.

Discrepancies detected between the forward and back translations were then used to create the questionnaire for the Spanish cultural validation described hereinafter. Syntax errors were not included in the questionnaire.10
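The conflict table described above maps naturally onto a simple record structure. The sketch below is a minimal illustration under assumed field names; the category labels, the rule that only forward-translation conflicts trigger amendments, and the exclusion of syntax errors from the questionnaire are taken from the text.

```python
# Minimal sketch of the discrepancy log used during linguistic validation.
# Field names and this representation are illustrative assumptions.
from dataclasses import dataclass

CATEGORIES = {"term", "concept", "verb tense", "syntax", "anglicism", "cultural difference"}
SOURCES = {"forward translation", "back translation"}


@dataclass
class Discrepancy:
    element: int          # DASH element where the conflict was found (2 or 5 here)
    category: str         # one of CATEGORIES
    source: str           # one of SOURCES
    note: str = ""

    @property
    def needs_amendment(self) -> bool:
        # Only conflicts originating in the forward translation change the Spanish text
        return self.source == "forward translation"


def questionnaire_items(conflicts: list[Discrepancy]) -> list[Discrepancy]:
    """Syntax errors were not carried into the cultural-validation questionnaire."""
    return [c for c in conflicts if c.category != "syntax"]


# Example usage with two hypothetical entries
log = [
    Discrepancy(element=2, category="term", source="forward translation"),
    Discrepancy(element=5, category="syntax", source="back translation"),
]
print(sum(c.needs_amendment for c in log), "amendment(s) needed")
print(len(questionnaire_items(log)), "item(s) carried into the questionnaire")
```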

CULTURAL VALIDATION

Questionnaire

The approved harmonized translation (Spanish DASH) was assessed for clarity, comprehensiveness, appropriateness, and cultural relevance through a questionnaire completed by 29 monolingual subjects from 8 Spanish-speaking countries: Argentina, Chile, Colombia, Costa Rica, Ecuador, Mexico, Peru, and Spain. The questionnaire was developed and administered using the criteria outlined in Table 2.

TABLE 2 - Questionnaire Criteria for Cultural Validation of the Harmonized Version
1. Content development
 ○ To evaluate the validity of the harmonized document, participants were given a questionnaire to analyze their understanding of terms, key concepts, verb tense, and general writing.
 ○ Fourteen questions covering all 6 elements from the score sheets and the handbook.
 ○ Questions were in the short answer format to allow participants to express their understanding as they saw fit.
 ○ Space was allocated for the participants to write any concepts that were not clear or that they thought were inappropriate.
2. Sampling strategy
 ○ A standard sample size calculation is difficult to apply because of the nature of the analysis. When focusing on qualitative data, it has been shown that 5 users reveal an average of 85% of issues in usability testing procedures.17 Extrapolating these findings, we sent questionnaires to 62 instructors across the following 8 countries: Argentina (8), Chile (12), Colombia (10), Costa Rica (5), Ecuador (6), Mexico (7), Peru (8), and Spain (6) (see the arithmetic sketch after this table).
 ○ Inclusion criteria:
  • Male and female representation (at least 2 of each gender from each country).
  • Simulation instructors with at least 2 years debriefing experience in healthcare simulation.
  • Not DASH rater trained.
  • Having completed one or more of the following:
   • Instructor course for simulation in healthcare
   • Certification as healthcare simulation educator
   • Completed or currently participating in a simulation fellowship
   • Master's level degree in healthcare-based simulation or education
 ○ Instructors meeting these criteria were identified through the alumni lists of the simulation instructor courses run by Hospital virtual Valdecilla (HvV), Santander, Spain, and were cross-checked against the DASH Rater Webinar alumni list to exclude DASH-trained raters.
3. Participation process
 ○ Distribute the harmonized translation and the questionnaire to the respondents.
 ○ Complete the questionnaire.
 ○ For every disagreement or point of confusion, the respondent could suggest alternatives or ask the panel to propose alternative translations.
 ○ The analysis of the results of the questionnaire and the follow-up for problematic phrases was the responsibility of the project manager.
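As referenced in the sampling strategy above, the "5 users reveal about 85% of issues" figure is consistent with a commonly cited problem-discovery model, 1 - (1 - p)^n with p of roughly 0.31 per evaluator; that model and the value of p are our assumptions for illustration and are not stated in the article. The short check below also tallies the planned distribution of the 62 questionnaires.

```python
# Illustrative check of the sampling figures in Table 2. The discovery model and
# p = 0.31 are assumptions (a commonly cited usability heuristic), not values
# reported in the article.

def share_of_issues_found(n_evaluators: int, p: float = 0.31) -> float:
    """Expected share of issues uncovered by n independent evaluators."""
    return 1 - (1 - p) ** n_evaluators


print(f"5 evaluators -> ~{share_of_issues_found(5):.0%} of issues")  # close to the cited 85%

# Planned distribution of the 62 questionnaires across the 8 countries (Table 2)
planned = {"Argentina": 8, "Chile": 12, "Colombia": 10, "Costa Rica": 5,
           "Ecuador": 6, "Mexico": 7, "Peru": 8, "Spain": 6}
assert sum(planned.values()) == 62
```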

The questionnaire included 13 questions in which the participants determined whether the intended meaning of a term or concept was accurately conveyed. For every response in which a different meaning was identified, they were asked to recommend a change that they thought would fit better. In addition, a 14th question allowed participants to add as many recommendations as they saw fit.

Analysis of Results and Amendments

Responses within each questionnaire were reviewed to assess participants' ability to understand the questions (comparing the obtained responses with the expected answers) and to collect their comments. Respondents' suggestions were collated to determine whether any amendments were required. The criteria used to decide which suggested changes would be applied were the following (a brief sketch of this filtering logic follows the list):

  • Repeated suggestion from survey participants from different countries.
  • The suggestion significantly improved the understanding of the translated document in all countries.
  • It was a true representation of the original English document (DASH).
  • It conveyed the same meanings and concepts as the DASH.
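The sketch below illustrates this filtering logic. The field names are illustrative, and treating all criteria as jointly required is our assumption, since the article does not state how the criteria were combined.

```python
# Minimal sketch of applying the acceptance criteria to collected suggestions.
# Field names are illustrative; requiring all criteria jointly is an assumption.
from dataclasses import dataclass


@dataclass
class Suggestion:
    text: str
    countries: set[str]              # countries whose respondents made the same suggestion
    improves_understanding: bool     # significantly clearer in all countries
    faithful_to_original: bool       # true representation of the English DASH
    same_meaning: bool               # conveys the same meanings and concepts as the DASH


def accept(s: Suggestion) -> bool:
    repeated_across_countries = len(s.countries) > 1
    return (repeated_across_countries and s.improves_understanding
            and s.faithful_to_original and s.same_meaning)


# Example usage with a hypothetical suggestion
example = Suggestion("use 'retroalimentación' consistently", {"Chile", "Mexico"}, True, True, True)
print(accept(example))  # True
```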

A report on the responses was produced in English and sent to the original DASH authors for approval. It outlined the number of subjects interviewed, the difficulties encountered, the solutions suggested and retained, and how the linguistically and culturally validated version was produced.

RESULTS

Linguistic Validation

The elements selected by the original DASH authors as representative of fundamental debriefing skills were element 2 (maintains an engaging learning context for simulation and debriefing) and element 5 (identifies and explores performance gaps). These 2 elements from the original English document were compared with the back translation by the original DASH authors. A total of 18 discrepancies (including 2 duplicates) were found. The distribution and number of changes incorporated into all versions, handbook and score sheets, are shown in Table 3.

TABLE 3 - Discrepancies Detected Between DASH and Back Translation of Elements 2 and 5 (Spanish)
• 16 discrepancies:
 ○ Term: 11 (68.7%)
 ○ Syntax: 2 (12.5%)
 ○ Concept: 3 (18.8%)
• 9 (60%) discrepancies were judged to have originated in the back translation and therefore required no changes to the consensus harmonized translation.
• 7 (40%) discrepancies were judged to have originated in the forward translation and prompted changes:
 ○ Term: 5 (71%)
 ○ Syntax: 2 (29%)
 ○ Concept: 0 (0%)

Cultural Validation

The cultural validation was done with the objective of ensuring the translated Spanish DASH is comprehensible in 8 Spanish-speaking countries.

Sixty-two participants were sent the questionnaire and 29 (46.8%) responded (Argentina: 5; Chile: 5; Colombia: 3; Costa Rica: 2; Ecuador: 1; Mexico: 4; Peru: 4; Spain: 5).

Across questions 1 to 13, 77% of responses indicated that the terms and concepts presented "meant the same" and 22% that they "meant something different." These discrepancies led to 82 recommendations for change (Table 4).

TABLE 4 - Distribution of Questionnaire Results
Country (No. Questionnaires Received; Total No. Questions): Answer, Result (%)
Argentina (5; 65): Means the same, 54 (83%); Has a different meaning, 10 (15%); Questions not answered, 1 (2%)
Chile (5; 65): Means the same, 49 (75%); Has a different meaning, 16 (25%)
Colombia (3; 39): Means the same, 30 (77%); Has a different meaning, 9 (23%)
Costa Rica (2; 26): Means the same, 23 (88%); Has a different meaning, 2 (8%); Questions not answered, 1 (4%)
Ecuador (1; 13): Means the same, 6 (46%); Has a different meaning, 7 (54%)
Mexico (4; 52): Means the same, 50 (96%); Has a different meaning, 2 (4%)
Peru (4; 52): Means the same, 41 (79%); Has a different meaning, 11 (21%)
Spain (5; 65): Means the same, 39 (61%); Has a different meaning, 25 (39%); Questions not answered, 1 (2%)
TOTAL (29; 377): Means the same, 292 (77%); Has a different meaning, 82 (22%); Questions not answered, 3 (1%)
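The TOTAL row of Table 4 can be reproduced by simple aggregation of the per-country counts; the snippet below transcribes the counts from the table and recomputes the percentages (the code layout itself is our own illustration).

```python
# Reproduces the TOTAL row of Table 4 from the per-country counts.
# Tuple order: (means the same, has a different meaning, questions not answered).
counts = {
    "Argentina":  (54, 10, 1),
    "Chile":      (49, 16, 0),
    "Colombia":   (30,  9, 0),
    "Costa Rica": (23,  2, 1),
    "Ecuador":    ( 6,  7, 0),
    "Mexico":     (50,  2, 0),
    "Peru":       (41, 11, 0),
    "Spain":      (39, 25, 1),
}

same, different, unanswered = (sum(col) for col in zip(*counts.values()))
total = same + different + unanswered  # 13 questions x 29 respondents = 377

print(f"Means the same: {same} ({same / total:.0%})")                     # 292 (77%)
print(f"Has a different meaning: {different} ({different / total:.0%})")  # 82 (22%)
print(f"Questions not answered: {unanswered} ({unanswered / total:.0%})") # 3 (1%)
```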

In addition, the open-ended question 14 yielded 57 further recommendations, for a total of 139. Of these recommendations, 59 (42%) concerned terms, 22 (16%) concepts, and 58 (42%) syntax.

Using the criteria described in the methodology, 37 (27%) of the recommended changes were applied to the approved harmonized translation.

DISCUSSION

The results support the equivalence of the Spanish and English DASH. Thus, we propose the amended versions of the documents for the transcultural assessment of debriefings in 8 Spanish-speaking countries.

Methods used in this study have several key components in common with those found in the literature. The study methodology was based on the Process of Translation and Adaptation of Instruments used by the World Health Organization9 and the Principles of Good Practice for the Translation and Cultural Adaptation developed by the International Society for Pharmacoeconomics and Outcomes Research.11 The methods included multiple opportunities for translation, back translation, cultural understanding, and harmonization while remaining practical in terms of time and resource limitations. A recent systematic literature search revealed a limited number of articles on linguistic and cultural validation methodologies, a variety of approaches to language adaptation, and inconsistent terminology; moreover, only 54% of articles provided details about the adaptation and translation process.12 The method used in this study was applied to 2 target languages. Thus, there is reason to believe that the methods are likely applicable to other languages and cultural variations.

The comparison between the back translation and the original document, done by the original authors of the DASH, identified several discrepancies that required minor changes in the Spanish forward translation. Moreover, the original DASH authors could assure the research team that the back translation demonstrated a very good overall understanding of the document and considered it to be accurate. These findings supported the initial decision to review the 2 critical DASH elements instead of the entirety of all the score sheets and the whole handbook.

A strength of this project was the composition of the translation team. The skills and experience of the back translator were seen as very important. If the original translation from English to Spanish had been 100% accurate but the back translation from Spanish to English had not been, the back-translated English document would have caused considerable problems for the project and could have degenerated into endless iterations.18 Several authors who have used 2 back translators concluded that the second back translation added some small benefit to Slavic, Asian, and Indian language translations (particularly in documents containing medical terminology). In contrast, Latin- or Germanic-based languages were observed to be less problematic as they are more closely related to English.19 As a result, this study considered that the quality of translations using either dual or single back-translation methodologies is likely to be very similar and that an additional back translation would have provided only a small improvement in quality.13 Based on this, only one back translation was implemented.

A key component was the cooperation of the DASH original authors in the review to guarantee the equivalence of the constructs of the final version. If at all possible, we recommend involvement of original authors in similar translation projects.

Given the cultural differences in the Spanish spoken in different Hispanic countries, a cultural validation was considered essential to ensure that a worldwide validated Hispanic translation was made available. In the literature, 77% of the methodologies reviewed went beyond conceptual and semantic translations and incorporated cultural adaptations.11 There are several possible combinations of cross-cultural translation techniques depending on the research environment and questions; there is, however, no criterion standard.14 The results of the questionnaires prompted more changes than the linguistic validation, highlighting the importance of a cultural validation that produces a version assessing the same construct irrespective of cultural idiosyncrasies. Despite the differences between the Spanish cultural groups, the results provide evidence for the conceptual and functional equivalence between the original English and the Spanish versions of the DASH.

The fact that the translators and reviewers for the Spanish translation were from different Spanish-speaking backgrounds (Chile and Spain) added a global linguistic perspective, which likely helped the translated DASH have a wider acceptability for intralanguage cultures.

The iterative processes used in this project generated results supporting the success of the systematic language adaptation of the DASH instruments from the original English to Spanish. They also underpinned the completion of a culturally validated translation for Latin America and Spain. Language adaptation, rather than a simple word-by-word grammatical translation, was used. The objective was an interpretation of meaning in the source language that moved the translation beyond wording conventions alone to find similarities across different intralinguistic cultural interpretations.11 This methodology is likely to be fruitfully applicable to other languages spoken in more than one country, for example, Chinese, Portuguese, German, French, and Arabic.

This study is an example of how different languages, and cultures within languages, can use a tool that was previously available only to those who could fully understand the English language and its nuances. It can help bring together English- and non-English-speaking international simulation and healthcare communities. We surmise that this methodology can be used for a wide variety of simulation-related documents in our expanding international community.

One limitation of this study is that it did not include all Spanish-speaking countries worldwide. Furthermore, the number of respondents per country was limited to 4 or 5 in 5 countries and to 3 or fewer in the other 3 countries, limiting the identification of differences in cultural interpretation.20 For this reason, some cultural variations might not have been captured.

Future research recommendations include gathering further evidence from a larger and more varied sample to overcome these limitations. Studies to establish interrater reliability and internal consistency statistics are also needed, as is evidence of construct validity of the Spanish version, obtained by evaluating its ability to detect variations in the quality of debriefings in a variety of simulation settings.

In conclusion, the translated DASH score sheets and Rater's Handbook were shown to be linguistically valid in Spanish and culturally valid in 8 Spanish-speaking countries, and they may be used to assess debriefings in healthcare settings. We propose that the methodology used is applicable to the translation, linguistic validation, and cross-cultural validation of instruments and should be considered for the translation of other simulation-related documentation.

The Spanish versions of the DASH Handbook and Score Sheets are available for download at https://harvardmedsim.org/debriefing-assessment-for-simulation-in-healthcare-dash/.

REFERENCES

1. Maestre JM, Szyld D, Del Moral I, Ortiz G, Rudolph JW. The making of expert clinicians: reflective practice. Rev Clin Esp 2014;214(4):216–220.
2. Rudolph JW, Simon R, Dufresne RL, Raemer DB. There's no such thing as “nonjudgmental” debriefing: a theory and method for debriefing with good judgment. Simul Healthc 2006;1:49–55.
3. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27(1):10–28.
4. Brett-Fleegler M, Rudolph J, Eppich W, et al. Debriefing Assessment for Simulation in Healthcare: development and psychometric properties. Simul Healthc 2012;7(5):288–294.
5. Durá MJ, Merino F, Abajas R, Meneses A, Quesada A, González AM. Simulación de alta fidelidad en España: de la ensoñación a la realidad. Rev Esp Anestesiol Reanim 2015;62(1):18–28.
6. Piault E, Doshi S, Brandt BA, et al. Linguistic validation of translation of the Self-Assessment Goal Achievement (SAGA) questionnaire from English. Health Qual Life Outcomes 2012;10:40.
7. Brandt BA, Angün Ç, Coyne KS, Doshi S, Bavendam T, Kopp ZS. LUTS patient reported outcomes tool: linguistic validation in 10 European languages. Neurourol Urodyn 2013;32(1):75–81.
8. Sousa VD, Rojjanasrirat W. Translation, adaptation and validation of instruments or scales for use in cross-cultural health care research: a clear and user-friendly guideline. J Eval Clin Pract 2011;17(2):268–274.
9. World Health Organization. Process of translation and adaptation of instruments. Available at: https://www.who.int/substance_abuse/research_tools/translation/en. Accessed May 24, 2020.
10. Maneesriwongul W, Dixon JK. Instrument translation process: a methods review. J Adv Nurs 2004;48(2):175–186.
11. Wild D, Grove A, Martin M, et al. Principles of good practice for the translation and cultural adaptation process for patient-reported outcomes (PRO) measures: report of the ISPOR task force for translation and cultural adaptation. Value Health 2005;8(2):94–104.
12. Maríñez-Lora AM, Boustani M, del Busto CT, Leone C. A framework for translating an evidence-based intervention from English to Spanish. Hisp J Behav Sci 2015;38(1):117–133.
13. Wild D, Eremenco S, Mear I, et al. Multinational trials-recommendations on the translations required, approaches to using the same language in different countries, and the approaches to support pooling the data: the ISPOR Patient-Reported Outcomes Translation and Linguistic Validation Good Research Practices Task Force report. Value Health 2009;12(4):430–440.
14. Cha ES, Kim KH, Erlen JA. Translation of scales in cross-cultural research: issues and techniques. J Adv Nurs 2007;58(4):386–395.
15. Eberhard DM, Simons GS, Fennig CD. Ethnologue: Languages of the World. 20th ed. Dallas, TX: SIL International. Available at: https://www.ethnologue.com/guides/most-spoken-languages. Accessed February 26, 2020.
16. Simon R, Raemer D, Rudolph J, Center for Medical Simulation. Debriefing Assessment for Simulation in Healthcare (DASH). Available at: https://harvardmedsim.org/debriefing-assessment-for-simulation-in-healthcare-dash. Accessed February 26, 2020.
17. Pinkster H. Review of: Siewierska A. The Passive: A Comparative Linguistic Analysis. London: Croom Helm; 1984. J Linguist 1987;23(1):247–248.
18. Behr D. Assessing the use of back translation: the shortcomings of back translation as a quality testing method. Int J Soc Res Methodol 2017;20(6):573–584.
19. Gawlicki M, Brandt B, Heinzman A, McKown S, Pollitz A, Talbert M. Dual back-translation vs single back-translation methodology for clinical outcomes assessments. In: International Society for Pharmacoeconomics and Outcomes Research European Congress. Dublin, Ireland; 2013.
20. Caulton DA. Relaxing the homogeneity assumption in usability testing. Behav Inform Technol 2001;20(1):1–7.
Keywords:

Linguistic validation; cultural validation; translation; forward translation; back translation; instrument translation; assessment; debriefing; cross-cultural research

Copyright © 2020 Society for Simulation in Healthcare