Purpose: To systematically examine the methodological rigor of studies using cultural competence training as a strategy to improve minority health care quality. To the authors’ knowledge, no prior studies of this type have been conducted.
Method: As part of a systematic review, the authors appraised the methodological rigor of studies published in English from 1980 to 2003 that evaluate cultural competence training, and determined whether selected study characteristics were associated with better study quality as defined by five domains (representativeness, intervention description, bias and confounding, outcome assessment, and analytic approach).
Results: Among 64 eligible articles, most studies (no. = 59) were published recently (1990–2003) in education (no. = 26) and nursing (no. = 14) journals. Targeted learners were mostly nurses (no. = 32) and physicians (no. = 19). Study designs included randomized or concurrent controlled trials (no. = 10), pretest/posttest (no. = 22), posttest only (no. = 27), and qualitative evaluation (no. = 5). Curricular content, teaching strategies, and evaluation methods varied. Most studies reported provider outcomes. Twenty-one articles adequately described provider representativeness, 21 completely described curricular interventions, eight had adequate comparison groups, 27 used objective evaluations, three blinded outcome assessors, 14 reported the number or reason for noninclusion of data, and 15 reported magnitude differences and variability indexes. Studies targeted at physicians more often described providers and interventions. Most trials completely described targeted providers, had adequate comparison groups, and reported objective evaluations. Study quality did not differ over time, by journal type, or by the presence or absence of reported funding.
Conclusions: Lack of methodological rigor limits the evidence for the impact of cultural competence training on minority health care quality. More attention should be paid to the proper design, evaluation, and reporting of these training programs.
Dr. Price is a postdoctoral fellow, Division of General Internal Medicine, Department of Medicine, Johns Hopkins University School of Medicine (JHUSOM), Baltimore, Maryland.
Dr. Beach is assistant professor of medicine and health policy and management, Division of General Internal Medicine and Welch Center for Prevention, Epidemiology, and Clinical Research (JHUSOM).
Dr. Gary is assistant professor of epidemiology, Welch Center (JHUSOM), and Department of Epidemiology, Johns Hopkins Bloomberg School of Public Health (JHBSPH), Baltimore, Maryland.
Ms. Robinson is a research associate, Divisions of General Internal Medicine and Health Sciences Informatics, Department of Medicine (JHUSOM).
Dr. Gozu is a postdoctoral fellow, Division of General Internal Medicine (JHUSOM).
Dr. Palacio was a postdoctoral fellow, Division of General Internal Medicine (JHUSOM).
Dr. Smarth was a Robert Wood Johnson Clinical Scholar (JHUSOM).
Ms. Jenckes is a research associate, Division of Infectious Disease, Department of Medicine (JHUSOM).
Ms. Feuerstein was an undergraduate student at Johns Hopkins University.
Dr. Bass is a professor of medicine, Division of General Internal Medicine (JHUSOM) and director of the Evidence-based Practice Center (JHBSPH).
Dr. Powe is a professor of medicine, Epidemiology and Health Policy and Management, Division of General Internal Medicine and Welch Center (JHUSOM, JHBSPH).
Dr. Cooper is associate professor of medicine, Epidemiology and Health Policy and Management, Division of General Internal Medicine and Welch Center (JHUSOM, JHBSPH).
This article was presented as an abstract at the 27th Annual Meeting of the Society of General Internal Medicine on May 13, 2004, in Chicago, Illinois.
Correspondence should be addressed to Dr. Cooper, Welch Center for Prevention, Epidemiology, and Clinical Research, 2024 East Monument Street, Suite 2–500, Baltimore, MD, 21287; telephone: (410) 614-3659; fax: (410) 614-0588; e-mail: 〈email@example.com〉.
In 2003, the Institute of Medicine report Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care identified cultural competence training of health professionals as a potential strategy to improve quality of care and to reduce health disparities between ethnic minorities and whites.1 Cultural competence may be defined as a set of attitudes, skills, behaviors, and policies enabling individuals to establish effective interpersonal and working relationships that supersede cultural differences.2,3 It includes providers’ recognition of sociocultural influences on health beliefs and behaviors, disease prevalence and incidence, and treatment outcomes for different patient populations.4
Previously, we reported the results of a systematic review of cultural competence training interventions targeted at health care providers and designed to improve minority health. The review identified good evidence that cultural competence training influences provider knowledge, attitude, and skills.5 However, few studies assessed patient outcomes. Additionally, the heterogeneity of curricular content, teaching methods, and evaluation strategies made it difficult to determine the impact of training on outcomes.
Even so, cultural competence is now considered to be a federal standard of care and is among educational objectives for various accreditation bodies in medical education.6–8 Researchers and educators have described conceptual frameworks for integrating cultural competence training into medical education.9–12 However, there are no standard guidelines to help educators effectively design, evaluate, or report cultural competence interventions. Furthermore, little has been done to examine the rigor of evidence upon which the development and implementation of future interventions in this area might be based.
The objectives of this study were (1) to critically appraise the methodological rigor with which studies of cultural competence training (targeted at health professionals as a mechanism to improve minority health) have been designed, analyzed, and reported; and (2) to determine whether selected study characteristics are associated with better study quality. We defined better study quality as using representative study populations, providing complete intervention descriptions, and using techniques to enhance the validity of study results (minimizing bias and confounding and using rigorous evaluation strategies and analytic approaches). We hypothesized that studies published since 2000 would meet more criteria for study quality, since cultural competence is now a federal standard of health care. Second, we hypothesized that studies published in health professions education journals would better adhere to basic principles of data reporting and analysis now that educators must document curricular effectiveness.7,8 Third, we hypothesized that studies targeting nurses would include more complete intervention descriptions, since interventions involving nurses are more typically based on theoretical models.5 Fourth, we hypothesized that studies that reported funding would meet more criteria for study quality. Finally, we expected that studies employing rigorous study designs would meet our criteria for study quality.
Literature search methods
To find eligible articles, we searched the following databases: Medline, Cochrane CENTRAL Register of Controlled Trials (Issue 1, 2003), Embase, Effective Practice and Organization of Care Cochrane Review Group (EPOC), Research and Development Resource Base in Continuing Medical Education (RDRB/CME), and Cumulative Index to Nursing and Allied Health Literature (CINAHL) using a search strategy specific to each database. Terms such as cultural sensitivity, transcultural, cultural diversity, multicultural, and cultural competence were used. The full search strategy (including a comprehensive list of search terms) is available in the Agency for Healthcare Research and Quality (AHRQ) Evidence Report/Technology Assessment No. 90.5
We identified priority journals based on expert opinion and on the electronic search results. To ensure inclusion of recent publications, we scanned tables of contents of these journals from February 1, 2003, to June 15, 2003. Additionally, we examined the reference lists of key review articles and all articles eligible for this study.
We excluded articles that were not written in English, were published prior to 1980, did not include human data or original data, did not have a full article available for review, were not relevant to minority health, did not have an intervention or had no evaluation of an intervention, did not target health care providers or organizations, or did not apply to any of the study questions.
The titles and abstracts of all articles identified were screened by multiple pairs of study investigators (each pair consisting of one postdoctoral fellow and one other investigator). We used an independent review process at this stage, since reviewer agreement was anticipated to be low (calculated kappa was 0.41 on a random sample of abstracts) and because we did not want any abstract to be excluded based on the opinion of only one reviewer. Two reviewers independently assessed whether the abstracts were eligible for full article review. Citations were returned for adjudication if the two reviewers disagreed on article eligibility. If the article title and abstract did not provide sufficient information, the full article was retrieved for review.
We developed forms to confirm eligibility for full article review, assess study characteristics, and extract data relevant to the study questions. We conducted independent and serial reviews of the quality assessment forms from a random sample of ten articles to calculate the agreement (kappa statistic) between reviewers. Each quality assessment form contained 21 questions with three or four possible choices. We found a mean kappa (across the 21 items) of 0.81 for the independent review process and 0.87 for the serial review process. These values are similar and are in the range that most experts consider to indicate excellent agreement.13 We used a serial review process to conserve time and resources. The primary reviewer completed the data abstraction and quality assessment forms described below. The secondary reviewer was instructed to be critical of the primary reviewer's assessment and to check each item on the form for completeness and accuracy. All information from the article review process was entered into a database.
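The agreement statistic used above is Cohen's kappa, which corrects observed agreement for the agreement expected by chance. As an illustrative sketch (the ratings below are hypothetical and are not the study's data), it can be computed in a few lines of Python:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two raters
    over the same items (any hashable category labels)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of items on which the raters agree
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if each rater assigned labels independently
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in categories) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical quality ratings of 10 items by two reviewers
# ("y" = adequate, "n" = inadequate)
a = ["y", "y", "n", "y", "n", "y", "y", "n", "y", "y"]
b = ["y", "y", "n", "y", "y", "y", "y", "n", "y", "n"]
print(round(cohens_kappa(a, b), 2))  # kappa ≈ 0.52
```

By the thresholds the authors cite,13 values in this moderate range would argue for an adjudication step, whereas the 0.81 and 0.87 reported above fall in the range usually labeled excellent agreement.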
For each eligible study, we abstracted the study design, publication year, the types of targeted health care providers, sample size, journal type, study location (country), curricular content, teaching methods, length of interventions, evaluation methods, measured outcomes, and funding. We classified the outcomes as provider outcomes (knowledge, attitude, skills/behavior), patient outcomes (satisfaction, adherence, health status), or curriculum evaluation (learner satisfaction with curricular content, teaching methods, etc.).
Assessment of methodological rigor
We developed review forms to systematically evaluate the methodological rigor of the eligible articles based on published guidelines for assessing medical education curricula.14–17 We chose five domains of study quality (representativeness of study subjects, potential for bias and confounding, description of the intervention, outcome assessment, and analytic approach). To define these domains, we assessed whether the study (1) provided information on the setting and population from which study subjects were drawn; (2) described key provider characteristics; (3) described the intervention in enough detail to facilitate replication; (4) used a concurrent and similar comparison group; (5) used objective evaluation methods; (6) blinded outcome assessors to participants’ intervention assignment; (7) reported the number and reasons for noninclusion in the data analysis; and (8) reported the magnitude of difference between groups and an index of variability (test statistic, p value, standard error, confidence interval). For these eight items from the quality assessment form, we found a mean kappa of 0.65 for the independent review process and 0.76 from the serial review process of the random sample of ten articles mentioned earlier. Both values are in the range that most experts consider substantial agreement.13
Study variables and statistical analysis
Our main dependent variables were the eight methodological factors representing the five domains of study quality. Our main independent variables were (1) publication date (1980–1989, 1990–1999, 2000–2003), (2) journal type (health professions education journals, nursing, general medicine/primary care, psychiatry/psychology, other journals), (3) targeted health providers (nurses, physicians, other/mixed groups of health professionals, specifically occupational therapists, counselors, and psychologists), (4) report of funding (yes/no), and (5) study design (trials: randomized controlled/concurrent controlled, pretest/posttest, posttest only, qualitative).
Using chi-square analysis, we assessed how the proportion of articles meeting specific quality criteria differed by the aforementioned study characteristics. All data analyses were performed using Stata 8.0 Intercooled (Stata Corporation, College Station, TX).
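For readers without Stata, a Pearson chi-square comparison of proportions across a 2×2 table can be sketched as follows; the counts here are hypothetical and chosen only to mirror the kind of comparison made (e.g., studies meeting versus not meeting a criterion, by learner group):

```python
def chi_square_2x2(table):
    """Pearson chi-square statistic for a 2x2 contingency table
    [[a, b], [c, d]], without continuity correction."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            # Expected count under independence of rows and columns
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    return stat

# Hypothetical: 11 of 19 physician-targeted studies vs.
# 9 of 32 nurse-targeted studies met a quality criterion
table = [[11, 8], [9, 23]]
print(round(chi_square_2x2(table), 2))  # ≈ 4.43
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain a p value; in common scientific-Python practice, `scipy.stats.chi2_contingency` performs the full test (and applies a continuity correction for 2×2 tables by default).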
Literature search and review process
Of the 4,389 articles retrieved, most studies were excluded during the abstract and article review process, leaving 64 articles eligible for this analysis.18–81 (See Figure 1 for an overview of the search and review process.) The most common reasons for exclusion were no evaluation of an intervention, not relevant to minority health, and not targeted to a health care provider or organization. Details regarding each individual article (study characteristics and quality assessment) are available at the AHRQ Web site 〈http://www.ahrq.gov/clinic/evrptpdfs.htm#minqual〉.
Characteristics of studies evaluating cultural competence interventions
Study characteristics are summarized in Table 1. Most of the 64 articles were published from 1990 to 2003 (no. = 59) and in health professions education journals (no. = 26). Most interventions were targeted at nurses (no. = 32) or physicians (no. = 19), and most targeted providers who were trainees, such as medical or nursing students (no. = 38). Curricular content varied widely, and most studies addressed more than one content area. Using a previously developed framework for categorizing cultural competence curricular content,82 most interventions focused on specific cultural content, such as the epidemiology of disease in specific ethnic groups (no. = 45), and on general concepts of culture, such as ethnorelativism (no. = 43). Additionally, most studies used multiple training methods and evaluation tools. To evaluate the intervention, only ten studies employed trial study designs, of which two were randomized controlled trials and eight were concurrent controlled trials. Most studies measured provider outcomes. Only four studies measured patient outcomes, and of these, none were health outcomes. Twenty-three studies reported a source of funding, of which 15 reported external sources.
Quality assessment of articles
Table 2 summarizes the association between select study characteristics and the methodological rigor of the 64 studies. Findings in the five quality domains are described below.
Quality domain 1: representativeness of targeted providers.
Only 21 of 64 articles adequately described the setting and population from which study subjects were drawn, and only 21 articles adequately described provider demographics. The following description of the setting and population from which targeted providers were drawn, and where they received training, is an example of one classified as adequate45:
Subjects consisted of [University of Massachusetts Medical School] students in the classes 1997 to 2003 who completed international electives (travelers) and a class cohort (from the class of 2002) that did not study abroad. ... Fifty seven percent of all students traveled to Latin America, 15% to Asia, 15% to Western Europe and 7% to Africa. ... Thirty-four percent of students described their experience as solely rural, 50% as solely urban and 16% listed both rural and urban.
A higher proportion of studies targeting physicians (58%) than of studies targeting nurses (28%) or other/mixed groups of health professionals (8%) described the study setting and population (p = .009). A higher proportion of studies that reported funding described provider demographics (47.8% versus 24.4%, p = .055). As expected, a higher proportion of trials adequately described provider demographics and the study setting and population than did studies using other designs. There were no statistically significant differences in the description of providers or settings according to publication date or journal type.
Quality domain 2: complete description of intervention.
Most articles had clearly stated objectives (no. = 47), but only 21 articles described interventions with enough detail to facilitate replication. A higher proportion of studies targeting physicians completely described the intervention than studies targeting nurses or other/mixed groups of health professionals (58%, 25%, and 15%, respectively, p = .017). Only 30% of trials completely described the intervention compared to 55% of pre/post studies and 22% of post-only studies (p = .036). There were no statistically significant differences in the completeness of intervention descriptions according to other study characteristics.
Quality domain 3: potential for bias and confounding: use of comparison groups.
Only eight studies had concurrent and similar comparison groups. All of these studies employed trial study designs. Alpers and Zoucha18 provide an example of what was considered a similar but nonconcurrent control group:
The [spring class] which had not received any cultural content in [their course] (n = 31) ranged in age from 21 to 47 [mean age of 27.5]. ... The ethnicity of the group was [87% Caucasian, 3% African American, and 10% Hispanic]. Females made up [93.5%] of the group with males contributing [6.5%]. The [fall class], which received cultural content from [their course] (n = 32) [had a mean age of 27.7]. The ethnicity of the group was [78% Caucasian, 9% African American, 9% Hispanic, and 3% Asian]. Females contributed [91%] of the group while males made up [9%].
A higher proportion of studies published in 1990–1999 and 2000–2003 and studies targeting nurses described adequate comparison groups than did studies published earlier or targeting physicians or other/mixed groups of health professionals; however, these differences were not statistically significant. There were also no statistically significant differences in the use of adequate comparison groups according to journal type or reported funding.
Quality domain 4: outcome assessment.
Only 27 of the 64 studies used objective evaluation strategies (written examinations, direct observation, performance audit, validated self-efficacy scales). Ninety percent of trials used objective evaluation strategies compared to 73% of pretest/posttest and 7% of posttest only study designs (p < .01); however, there were no statistically significant differences in use of objective evaluation strategies according to other study characteristics. Only three studies blinded outcome assessors. Fifteen articles reported outcomes that did not match the study objectives. For example, seven studies stated knowledge, attitude, skill and/or behavior-type training objectives, but reported curriculum evaluation.35,54,56,57,63,67,68
Quality domain 5: reporting analytic approach.
Only 14 of 64 studies reported the number and reasons for noninclusion of data in the analysis, and there were no statistically significant differences in reporting this information according to any study characteristic. Additionally, only 15 studies reported the magnitude of difference between groups (including pre- and posttesting) and an index of variability. Notably, only 50% of trials reported this information.
To our knowledge, this is the first analysis of the methodological rigor used in studies evaluating cultural competence training. Our critical appraisal of 64 educational research articles suggests that the quality of the evidence from interventions to improve cultural competence of health professionals is generally poor. Specifically, most studies did not meet our criteria for high study quality, which were based on published guidelines for assessing the evidence of educational practices.14–17
Regarding provider representativeness, fewer than a third of the studies provided detailed descriptions of targeted providers. Complete descriptions of provider characteristics and of the setting and population from which subjects are drawn indicate the types of health care settings into which an intervention is likely to translate in practice, and the types of learners most likely to benefit.15 Moreover, fewer than a third of the studies provided comprehensive descriptions of interventions, which hampers the ability of other educators to replicate the interventions in appropriate settings and populations. We do, however, recognize that journal length restrictions may sometimes limit the comprehensiveness of these descriptions.
Few studies had adequate comparison groups. Using concurrent and similar control groups allows educators to distinguish the effect of training programs from other environmental factors that may affect the learners. Randomization of study participants to intervention and control groups minimizes the impact of selection bias that might occur if participants volunteer or are assigned to training programs based on interest, skills or other unknown confounders. Even so, study design alone does not guarantee the quality of the evidence reported in studies evaluating educational interventions.15 Other factors such as methods for evaluation and data analysis may influence the robustness of the study.
The studies we reviewed used a variety of evaluation methods. However, many did not include an objective evaluation method, nor did they blind outcome assessors to the learners’ intervention status. Therefore, in the absence of such methodological rigor, the validity of the evidence from some of the studies we reviewed may be questionable. The choice of the evaluation method depends on the study objectives and desired outcomes.10,83 If the study question is “Do providers learn what is taught?,” then the evaluation method should measure changes in knowledge, attitude, and skills. If the study question is “Do providers use what is taught?,” then the evaluation method should measure changes in behavior (i.e., their translation of what is learned into practice).
Most studies in this review measured changes in provider attitude and knowledge as opposed to changes in provider behavior or patient outcomes. This finding most likely reflects the fact that most interventions targeted medical or nursing students instead of practicing clinicians or other health professionals. A recent review of 599 educational research articles published in leading medical education journals revealed that trainee performance and satisfaction were the predominant themes (60%) whereas patient outcomes and financial implications of programs were the foci of only 5% of the studies.84 Our review of cultural competence interventions had similar findings. Yet, even if studies of cultural competence training examined and failed to show improved patient outcomes, this would not imply that this type of training is not valuable. As is true for other educational interventions, the use of rigorous methods would still be important to provide findings that educators could trust to improve their understanding of how best to teach cultural competence.
In assessing the strength of the evidence for cultural competence training, we found that few studies in our review reported quantitative data such as the magnitude of difference between groups (including pretest/posttest measures) or a variability index (i.e., confidence interval). Reporting such information would improve the statistical quality of cultural competence literature and allow educators to extrapolate the magnitude of the impact of training on learners and patients.
In addition to examining the methodological rigor of studies included in our review, we sought to determine whether select study characteristics are associated with use of rigorous research methods. We found support for a few, but not for most, of our hypotheses. Of particular concern is that the quality of the literature does not appear to be consistently improving over time. Nor did we find an association of any indicator of study quality with the type of journal. Our results indicate that studies targeting physicians provide more comprehensive descriptions of the study settings and interventions. We found no differences in study quality between studies that did or did not report funding. However, we had no information on levels of funding. Additionally, investigators may have been less likely to report internal sources of funding. An internally funded study might provide more resources than an externally funded one. Finally, as we expected, a majority of studies employing trial designs met our criteria for study quality.
There are at least two potential explanations for our findings on the state of the literature on cultural competence training. First, the educators who designed and implemented these training programs may have had substantial content expertise in cultural competence but may have lacked experience in research methodology. Second, these educators may have lacked the level of resources (personnel, time, or funding) necessary for rigorous evaluations of the educational interventions. More resources devoted to this relatively new area of educational research may be warranted.
Our systematic review does have limitations. First, we limited our review to articles published in English after 1980. However, recent work suggests that results from reviews limited to published literature in English do not differ substantially from reviews without this limitation.85 Moreover, the field of cultural competence is relatively new, and most interventions have occurred within the last decade. Second, most of the eligible studies took place in the United States, United Kingdom, Australia and New Zealand. Therefore, our findings may only be generalizable to training interventions that have occurred in these countries. Third, we did not design our own conceptual model of cultural competence, given the lack of consensus on a common definition. Instead, our review included all studies in which the authors defined their interventions as cultural competence training to improve minority health. Fourth, we developed our own quality review forms for data abstraction. Nonetheless, the domains of quality that we included are generally accepted as basic tenets of quality in clinical and educational research methods.14–17 Finally, we may not have been able to detect statistically significant associations between some study characteristics and use of methodological rigor, given the small number of eligible studies.
In summary, we have provided a systematic assessment of the methodological strengths and weaknesses of studies evaluating cultural competence training interventions targeted at health care providers. We identified numerous descriptive studies of cultural competence interventions. Of the small percentage of these studies that actually provided detailed evaluations of the interventions, most did not adhere to basic principles of study design, reporting, and data analysis. For any given study in our review, there was variability in the extent to which each domain of quality was met. Inadequate descriptions of targeted providers, heterogeneity and incomplete description of interventions, nonadherence to basic study design, lack of objective or standard evaluation strategies, and incomplete statistical analysis all hamper the quality of evidence on which medical educators might base judgments about the efficacy of cultural competence training. To help overcome the methodological limitations of the literature, future efforts might explore whether collaborations between educators and researchers and increased funding for educational program development could help improve the design, implementation, evaluation, and reporting of cultural competence training programs.
This article is based on research conducted by the Johns Hopkins Evidence-based Practice Center under contract to the Agency for Healthcare Research and Quality (Contract No. 290-02-0018), Rockville, MD.
The authors of this article are responsible for its contents, including any clinical or treatment recommendations. No statement in this article should be construed as an official position of the Agency for Healthcare Research and Quality or of the U.S. Department of Health and Human Services.
1 Institute of Medicine. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care, Smedley BD, Stith AY, Nelson AR (eds). Washington, DC: National Academies Press; 2003.
3 Cooper LA, Roter DL. Patient-provider communication: the effect of race and ethnicity on process and outcomes of healthcare. In: Smedley BD, Stith AY, Nelson AR (eds). Unequal Treatment: Confronting Racial and Ethnic Disparities in Healthcare. Washington, DC: The National Academies Press; 2002. p. 552–93.
4 Betancourt JR, Green AR, Carillo JE, Ananeh-Firempong O. Defining cultural competence: a practical framework for addressing racial/ethnic disparities in health and health care. Public Health Rep. 2003;118:293–302.
5 Beach MC, Cooper LA, Robinson KA, et al. Strategies for Improving Minority Healthcare Quality. Evidence Report/Technology Assessment No. 90. (Prepared by the Johns Hopkins University Evidence-based Practice Center, Baltimore, MD.) AHRQ Publication No. 04-E008-02. Rockville, MD: Agency for Healthcare Research and Quality; January 2004.
6 US Department of Health and Human Services. Assuring Cultural Competence in Health Care: Recommendations for National Standards and an Outcomes-Focused Research Agenda, 2003 〈http://www.omhrc.gov/clas/〉. Accessed 18 November 2003.
9 Betancourt JR. Cross-cultural medical education: conceptual approaches and frameworks for evaluation. Acad Med. 2003;78:560–9.
10 Kagawa-Singer M, Kassim-Lakha S. A strategy to reduce cross-cultural miscommunication and increase the likelihood of improving health outcomes. Acad Med. 2003;78:577–87.
11 Tervalon M. Components of culture in health for medical students’ education. Acad Med. 2003;78:570–6.
12 Wear D. Insurgent multiculturalism: rethinking how and why we teach culture in medical education. Acad Med. 2003;78:549–54.
13 Sackett DL, Haynes RB, Guyatt GH, Tugwell P. Clinical Epidemiology: A Basic Science for Clinical Medicine. 2nd ed. Boston/Toronto/London: Little, Brown, and Company; 1991.
14 Green ML. Identifying, appraising, and implementing medical education curricula: a guide for medical educators. Ann Intern Med. 2001;135:889–96.
15 Harden RM, Grant J, Buckley G, Hart IR. BEME Guide No. 1: Best Evidence Medical Education. Med Teach. 1999;21:553–62.
16 Morrison JM, Sullivan F, Murray E, Jolly B. Evidence-based education: development of an instrument to critically appraise reports of educational interventions. Med Educ. 1999;33:890–3.
17 Kern D, Thomas P, Howard D, Bass E. Curriculum Development for Medical Education: A Six Step Approach. Baltimore: Johns Hopkins University Press; 1998.
18 Alpers RR, Zoucha R. Comparison of cultural competence and cultural confidence of senior nursing students in a private southern university. J Cult Divers. 1996;3:9–15.
19 Barton JA, Brown NJ. Evaluation study of a transcultural discovery learning model. Public Health Nurs. 1992;9:234–41.
20 Beagan BL. Teaching social and cultural awareness to medical students: “It's all very nice to talk about it in theory, but ultimately it makes no difference.” Acad Med. 2003;78:605–14.
21 Bengiamin MI, Downey VW, Heuer LJ. Transcultural healthcare: a phenomenological study of an educational experience. J Cult Divers. 1999;6:60–6; quiz 67–8.
22 Berman A, Manning MP, Peters E, Siegel B, Yadao L. A template for cultural diversity workshops. Oncol Nurs Forum. 1998;25:1711–8.
23 Blackford J, Street A. Cultural conflict: the impact of western feminism(s) on nurses caring for women of non-English speaking background. J Clin Nurs. 2002;11:664–71.
24 Bond ML, Jones ME. Short-term cultural immersion in Mexico. Nurs Health Care. 1994;15:248–53.
25 Briscoe VJ, Pichert JW. Evaluation of a program to promote diabetes care via existing agencies in African American communities. Assoc Black Nurs Fac J. 1999;10:111–5.
26 Browne CV, Brown KL, Mokuau N, Mclaughlin L. Developing a multisite project in geriatric and/or gerontological education with emphases in interdisciplinary practice and cultural competence. Gerontologist. 2002;42:698–704.
27 Campinha-Bacote J, Yahle T, Langenkamp M. The challenge of cultural diversity for nurse educators. J Contin Educ Nurs. 1996;27:59–64.
28 Chevannes M. Issues in educating health professionals to meet the diverse needs of patients and other service users from ethnic minority groups. J Adv Nurs. 2002;39:290–8.
29 Copeman RC. Medical students, Aborigines and migrants: evaluation of a teaching programme. Med J Aust. 1989;150:84–7.
30 Crandall SJ, George G, Marion GS, Davis S. Applying theory to the design of cultural competency training for medical students: a case study. Acad Med. 2003;78:588–94.
31 Culhane-Pera KA, Reif C, Egli E, Baker NJ, Kassekert RA. Curriculum for multicultural education in family medicine. Fam Med. 1997;29:719–23.
32 Dogra N. The development and evaluation of a programme to teach cultural diversity to medical undergraduate students. Med Educ. 2001;35:232–41.
33 Douglas KC, Lenahan P. Ethnogeriatric assessment clinic in family medicine. Fam Med. 1994;26:372–5.
34 Dowell A, Crampton P, Parkin C. The first sunrise: an experience of cultural immersion and community health needs assessment by undergraduate medical students in New Zealand. Med Educ. 2001;35:242–9.
35 Drouin J, Rivet C. Training medical students to communicate with a linguistic minority group. Acad Med. 2003;78:599–604.
36 Erkel EA, Nivens AS, Kennedy DE. Intensive immersion of nursing students in rural interdisciplinary care. J Nurs Educ. 1995;34:359–65.
37 Farnill D, Todisco J, Hayes SC, Bartlett D. Videotaped interviewing of non-English speakers: training for medical students with volunteer clients. Med Educ. 1997;31:87–93.
38 Felder E. Baccalaureate and associate degree student nurses’ cultural knowledge of and attitudes toward black American clients. J Nurs Educ. 1990;29:276–82.
39 Flavin C. Cross-cultural training for nurses: a research-based education project. Am J Hosp Palliat Care. 1997;14:121–6.
40 Frank-Stromborg M, Johnson J, McCorkle R. A program model for nurses involved with cancer education of black Americans. J Cancer Educ. 1987;2:145–51.
41 Frisch NC. An international nursing student exchange program: an educational experience that enhanced student cognitive development. J Nurs Educ. 1990;29:10–2.
42 Gallagher-Thompson D, Haynie D, Takagi K, Thompson L. Impact of an Alzheimer's disease education program: focus on Hispanic families. Gerontol Geriatr Educ. 2000;20:25–40.
43 Gany F, de Bocanegra HT. Maternal-child immigrant health training: changing knowledge and attitudes to improve health care delivery. Patient Educ Couns. 1996;27:23–31.
44 Godkin MA, Savageau JA. The effect of a global multiculturalism track on cultural competence of preclinical medical students. Fam Med. 2001;33:178–86.
45 Godkin M, Savageau J. The effect of medical students’ international experiences on attitudes toward serving underserved multicultural populations. Fam Med. 2003;35:273–8.
46 Hadwiger SC. Cultural competence case scenarios for critical care nursing education. Nurse Educ. 1999;24:47–51.
47 Haloburdo EP, Thompson MA. A comparison of international learning experiences for baccalaureate nursing students: developed and developing countries. J Nurs Educ. 1998;37:13–21.
48 Hansen ND. Teaching cultural sensitivity in psychological assessment: a modular approach used in a distance education program. J Pers Assess. 2002;79:200–6.
49 Haq C, Rothenberg D, Gjerde D, et al. New world views: preparing physicians in training for global health work. Fam Med. 2000;32:566–72.
50 Inglis A, Rolls C, Kristy S. The impact on attitudes towards cultural difference of participation in a health focused study abroad program. Contemp Nurse. 2000;9:246–55.
51 Jeffreys M, Smodlaka I. Changes in students’ transcultural self-efficacy perceptions following an integrated approach to culture care. J Multicult Nurs Health. 1999;5:6–12.
52 Jeffreys MR. A transcultural core course in the clinical nurse specialist curriculum. Clin Nurse Spec. 2002;16:195–202.
53 Lasch KE, Wilkes G, Lee J, Blanchard R. Is hands-on experience more effective than didactic workshops in postgraduate cancer pain education? J Cancer Educ. 2000;15:218–22.
54 Lindquist GJ. A cross-cultural experience: comparative study in nursing and health care. J Nurs Educ. 1984;23:212–4.
55 Lockhart JS, Resick LK. Teaching cultural competence. The value of experiential learning and community resources. Nurse Educ. 1997;22:27–31.
56 Mao C, Bullock CS, Harway E, Khalsa SK. A workshop on ethnic and cultural awareness for second-year students. J Med Educ. 1988;63:624–8.
57 Marvel MK, Grow M, Morphew P. Integrating family and culture into medicine: a family systems block rotation. Fam Med. 1993;25:441–2.
58 Mazor SS, Hampers LC, Chander VT, Krug SE. Teaching Spanish to pediatric emergency physicians: effects on patient satisfaction. Arch Pediatr Adolesc Med. 2002;156:693–5.
59 Napholz L. A comparison of self-reported cultural competency skills among two groups of nursing students: implications for nursing education. J Nurs Educ. 1999;38:81–3.
60 Nora LM, Daugherty SR, Mattis-Peterson A, Stevenson L, Goodman LJ. Improving cross-cultural skills of medical students through medical school-community partnerships. West J Med. 1994;161:144–7.
61 Oneha MF, Sloat A, Shoultz J, Tse A. Community partnerships: redirecting the education of undergraduate nursing students. J Nurs Educ. 1998;37:129–35.
62 Rolls C, Inglis A, Kristy S. Study abroad programs: creating awareness of and changing attitudes to nursing, health and ways of living in other cultures. Contemp Nurse. 1997;6:152–6.
63 Rooda L, Gay G. Staff development for culturally sensitive nursing care. J Nurs Staff Dev. 1993;9:262–5.
64 Rubenstein HL, O'Connor BB, Nieman LZ, Gracely EJ. Introducing students to the role of folk and popular health belief-systems in patient care. Acad Med. 1992;67:566–8.
65 Ryan M, Twibell R, Brigham C, Bennett P. Learning to care for clients in their world, not mine. J Nurs Educ. 2000;39:401–8.
66 Ryan M, Ali N, Carlton KH. Community of communities: an electronic link to integrating cultural diversity in nursing curriculum. J Prof Nurs. 2002;18:85–92.
67 Scisney-Matlock M. Systematic methods to enhance diversity knowledge gained: a proposed path to professional richness. J Cult Divers. 2000;7:41–7.
68 Sinnott MJ, Wittmann B. An introduction to indigenous health and culture: the first tier of the Three Tiered Plan. Aust J Rural Health. 2001;9:116–20.
69 Smith LS. Evaluation of an educational intervention to increase cultural competence among registered nurses. J Cult Divers. 2001;8:50–63.
70 St Clair A, McKenry L. Preparing culturally competent practitioners. J Nurs Educ. 1999;38:228–34.
71 Stumphauzer JS, Davis LC. Training community-based, Asian-American mental health personnel in behavior modification. J Community Psychol. 1983;11:253–8.
72 Tang TS, Fantone JC, Bozynski ME, Adams BS. Implementation and evaluation of an undergraduate sociocultural medicine program. Acad Med. 2002;77:578–85.
73 Tomlinson-Clarke S. Assessing outcomes in a multicultural training course: a qualitative study. Counsel Psychol Q. 2000;13:221–31.
74 Underwood SM. Development of a cancer prevention and early detection program for nurses working with African Americans. J Contin Educ Nurs. 1999;30:30–6.
75 Underwood SM, Dobson A. Cancer prevention and early detection program for educators: reducing the cancer burden among African Americans within the academic arena. J Natl Black Nurses Assoc. 2002;13:45–55.
76 Velde BP, Wittman PP. Helping occupational therapy students and faculty develop cultural competence. Community Occup Ther Educ Pract. 2001;13:23–32.
77 Wade P, Bernstein B. Culture sensitivity training and counselor's race: effects on black female clients' perceptions and attrition. J Couns Psychol. 1991;38:9–13.
78 Warner JR. Cultural competence immersion experiences: public health among the Navajo. Nurse Educ. 2002;27:187–90.
79 Way BB, Stone B, Schwagger M, Wagoner D, Bassman R. Effectiveness of the New York State Office of Mental Health Core Curriculum: direct care staff training. Psychiatr Rehabil J. 2002;25:398–402.
80 Wendler MC, Struthers R. Bridging culture on-line: strategies for teaching cultural sensitivity. J Prof Nurs. 2002;18:320–7.
81 Williamson E, Stecchi JM, Allen BB, Coppens NM. Multiethnic experiences enhance nursing students’ learning. J Community Health Nurs. 1996;13:73–81.
82 Dolhun EP, Muñoz C, Grumbach K. Cross-cultural education in U.S. medical schools: development of an assessment tool. Acad Med. 2003;78:615–22.
83 Hutchinson L. Evaluating and researching the effectiveness of educational interventions. BMJ. 1999;318:1267–9.
84 Prystowsky JB, Bordage G. An outcomes research perspective on medical education: the predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–6.
85 Egger M, Juni P, Bartlett C, Holenstein F, Stern J. How important are comprehensive literature searches and the assessment of trial quality in systematic reviews? Empirical study. Health Technol Assess. 2003;7:1–76.