Over the past few decades, medical educators have been addressing the issue of cultural competence in earnest. Many regulatory establishments, including the Liaison Committee on Medical Education1 and the Accreditation Council for Graduate Medical Education,2 list cultural competence as a required subject in medical education. Several models for cultural competence training have been developed in Western countries,3,4 and the current pedagogy embraces a patient-centered approach that focuses on evaluating individual patients and considers the unique cultural and social factors that affect their care.5–7 Although several studies have demonstrated the immediate effectiveness of this patient-centered cultural competence training approach in controlled trials in many Western and a few Eastern countries,4,8–10 we could find no evidence in the medical literature evaluating the long-term effectiveness of this strategy.11
The purpose of our study was to examine whether the improvement in students' cross-cultural communication skills produced by a patient-centered cultural competence curriculum, demonstrated immediately after the intervention, would be sustained a year later, as measured by an objective structured clinical examination (OSCE). Our hypothesis was that the overall effectiveness of the training would diminish with time, but we were uncertain which components of the curriculum might be retained or lost.
Participants and setting
Between January 2006 and June 2006, 57 fifth-year medical students rotated through a clerkship in internal medicine at the College of Medicine, National Taiwan University, where the medical program lasts seven years and begins directly after high school. Students had a mean age of 25 years; 20% were women, 87% were Taiwanese, and all had had prior lecture-based general communication skills training. Each student spent nine weeks in the required clerkship and participated in an OSCE at its end. Half of the students took the clerkship between January and March, and the other half between April and June.
Before the clerkship, we used a random-number generator to assign students in both blocks to either the control group (n = 27) or the intervention group (n = 30). We collected demographic information that might affect students' baseline cultural competence (age, gender, residence, and nationality). We also administered a survey containing three cultural competence self-assessment scales12 and compared the scores of the control and intervention groups. There were no significant differences between the groups in student age, gender, nationality, or prior communication skills and cultural competence training, nor in scores on the three self-assessment scales. One student in each group dropped out before the OSCE a year later; there were no differences in student demographics between the control and intervention groups at the second-year OSCE.
At the beginning of the clerkship, we informed all students that half of them would be given additional instruction in cultural competence, that they would then be assessed for their cultural competence, and that the assessment would not affect their clerkship grades. The methods used to assess cultural competence were not revealed to the students. All participants were assured that their data would remain confidential. We also informed the students in the control group that we would offer the additional cultural competence workshops immediately after the internal medicine clerkship, but none of the students requested them. The study protocol was approved by the National Taiwan University Hospital research ethics committee.
Students in the intervention group received workshops in cultural competence. Workshops were scheduled during the period of the participants' internal medicine clerkship and were spaced two weeks apart. Each workshop lasted 1.5 to 2.5 hours. The first workshop focused on knowledge and attitudes. Major topics included basic concepts such as culture and cultural competence, evidence of ethnic and social health disparity globally and locally, and hidden biases toward members of different cultural groups. The format was interactive, with some discussions and exercises (e.g., describing photos of minorities to explore subconscious stereotyping) to encourage active participation.
In the second workshop, models of cross-cultural communication were introduced, with video clips from an online course, "A Family Physician's Practical Guide to Culturally Competent Care," used as a demonstration. The specific skills emphasized in these workshops were eliciting the patient's perspective and exploring social factors related to illness. A student volunteer then interviewed a standardized patient (SP) in front of the other students; the observing students completed a checklist and provided feedback to the volunteer interviewer. After the second workshop, students were given a homework assignment of writing a case report incorporating the sociocultural information they gathered from an inpatient using a cross-cultural communication model.
We measured change in students' cross-cultural communication skills with two OSCEs. The first OSCE (OSCE1) took place immediately after the curriculum and was embedded as one case within a series of three OSCE cases that are part of the routine evaluation of students in the internal medicine clerkship. On the basis of a review of the curriculum and interviews with the students, we judged that the two groups were exposed to similar training in the interval between the first and second OSCEs.
The second OSCE (OSCE2) took place one year later and was part of the three OSCE cases routinely assessing students at the end of their psychiatry clerkship. The designs of the OSCE station and checklist were inspired by a cultural competence OSCE station used at Harvard Medical School13 in which students explore various sociocultural factors (e.g., a different understanding of disease mechanisms, use of traditional herbal remedies instead of modern medication) underlying an immigrant woman's poorly controlled hypertension. The SP in our OSCE station portrayed a Taiwanese patient instead of a foreigner in order to prevent students from recognizing the station as testing cultural competence. The SP in the internal medicine OSCE (OSCE1) portrayed a diabetic patient admitted for ketoacidosis. The psychiatry OSCE (OSCE2) featured a patient with schizophrenia admitted for uncontrolled symptoms. The checklists for cross-cultural communication skills in these two OSCEs were identical.
The SPs were recruited and received a half-day workshop on the OSCEs used in the routine examination of all students. The SPs were instructed on how to respond to students' questions, how to mark the checklist (0 = student did not exhibit the skill, 1 = student exhibited the skill), and how to provide feedback. Checklist items covering behaviors that were not specifically cross-cultural in nature fell in the domains of basic communication, history taking, and differential diagnosis; these items were used in all OSCE cases, including the cultural competence case.
In addition, the SP who portrayed the role in the cultural competence case received an additional hour of training focusing on the relevant student cross-cultural communication behaviors. Students were then evaluated by this SP on a validated checklist that included specific behaviors within two cross-cultural domains: eliciting the patient's perspective (PP) and exploring social factors related to illness (SF). These domains were chosen according to a review of the literature.3–9 The specific items within the PP domain were eliciting the patient's explanation of illness (PP1), pattern of medication utilization (PP2), concerns about treatment (PP3), and utilization of alternative treatments (PP4). The specific items within the SF domain were eliciting the patient's sources of social support (SF1), impact of illness on work (SF2), affordability of medication (SF3), prescription literacy (SF4), and access to clinics (SF5). Each item was graded 0 or 1 (0 = student did not exhibit the skill, 1 = student exhibited the skill) because we viewed these behaviors as discrete tasks that were either done or not done, rather than done to a particular degree. The total score for each domain was the unweighted sum of its items, so PP totals ranged from 0 to 4 and SF totals from 0 to 5; each domain score therefore equaled the number of specific behaviors performed.
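As a minimal sketch of this scoring scheme (the checklist marks below are hypothetical, not study data), each domain total is simply the unweighted sum of its binary items:

```python
# Each checklist item is marked 0 (skill not exhibited) or 1 (exhibited).
# Hypothetical marks for one student encounter:
pp_items = {"PP1": 1, "PP2": 0, "PP3": 1, "PP4": 1}
sf_items = {"SF1": 1, "SF2": 1, "SF3": 0, "SF4": 0, "SF5": 1}

# Domain totals are unweighted sums, so each total equals the number
# of behaviors the student actually performed.
pp_total = sum(pp_items.values())  # possible range 0-4
sf_total = sum(sf_items.values())  # possible range 0-5
print(pp_total, sf_total)
```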
All students in the control and intervention groups rotated through the cultural competence OSCE in addition to the other OSCE stations. The SPs were not aware of which students had received the cultural competence workshops. All OSCE stations were videotaped and transcribed verbatim. The checklists marked by the SP in the cultural competence case were compared with checklists completed by independent observers of the video recordings and transcriptions. The interrater reliability between SP ratings and independent observer ratings was high (0.90). Because the transcriptions and video recordings provided a verifiable record for standardized marking, the independent observer ratings were used in the analysis in the rare cases of disagreement.
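The statistic behind the 0.90 reliability figure is not specified above; as one hedged illustration, simple percent agreement between SP and observer marks on a binary checklist could be computed as follows (all marks hypothetical):

```python
# Hypothetical binary checklist marks from the SP and an independent
# observer for the same ten checklist items.
sp_marks       = [1, 0, 1, 1, 0, 1, 1, 0, 1, 1]
observer_marks = [1, 0, 1, 0, 0, 1, 1, 0, 1, 1]

# Proportion of items on which the two raters agree.
agreement = sum(a == b for a, b in zip(sp_marks, observer_marks)) / len(sp_marks)
print(agreement)  # 0.9 here: the raters disagree on one of ten items
```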
The data were entered and analyzed using SAS version 9.1 (SAS Institute, Inc., Cary, NC). We used a t test for age and chi-square tests to compare the other demographic characteristics. Data from each of the two scales (PP and SF), and the items within them, were independently analyzed using a two-way mixed-design analysis of variance (ANOVA),14 with time of testing (OSCE1, OSCE2) as a within-participants factor and group (control, intervention) as a between-participants factor. If what was learned in the intervention were sustained, we expected the intervention group to score higher than the control group at both OSCE1 and OSCE2 and the control group's scores to decrease from OSCE1 to OSCE2 more than the intervention group's. The critical terms in the analyses were therefore four simple main effects: the group effect at OSCE1, the group effect at OSCE2, the time-of-testing effect in the intervention group, and the time-of-testing effect in the control group.
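As a rough sketch of the logic of the four simple main effects (not the full mixed-design ANOVA, which uses pooled error terms), the group effects at each OSCE can be framed as independent-samples comparisons and the time-of-testing effects within each group as paired comparisons. All scores below are hypothetical, and only the Python standard library is used:

```python
from math import sqrt
from statistics import mean, variance

def independent_t(a, b):
    """Pooled-variance two-sample t statistic (group effect at one OSCE)."""
    na, nb = len(a), len(b)
    sp2 = ((na - 1) * variance(a) + (nb - 1) * variance(b)) / (na + nb - 2)
    return (mean(a) - mean(b)) / sqrt(sp2 * (1 / na + 1 / nb))

def paired_t(before, after):
    """Paired t statistic (time-of-testing effect within one group)."""
    d = [y - x for x, y in zip(before, after)]
    return mean(d) / sqrt(variance(d) / len(d))

# Hypothetical PP totals (0-4 scale) for a handful of students
intervention_osce1 = [4, 3, 4, 3, 4]
intervention_osce2 = [3, 2, 3, 3, 3]
control_osce1 = [2, 2, 3, 2, 2]

print(round(independent_t(intervention_osce1, control_osce1), 2))  # group effect at OSCE1
print(round(paired_t(intervention_osce1, intervention_osce2), 2))  # time effect, intervention
```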
To assess the magnitude of each effect, eta squared (η2), the measure of effect size in ANOVA, was computed as the proportion of variance in each scale attributable to the effect. Effect sizes were interpreted as small (η2 ≥ 0.01), medium (η2 ≥ 0.059), or large (η2 ≥ 0.138).15 The data met the assumption of homogeneous variance, examined by the modified Levene test at the significance level of P = .01, with the exceptions of PP1, SF1, and SF3.
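Eta squared is the ratio of an effect's sum of squares to the total sum of squares. A minimal sketch for a single between-groups effect, with hypothetical scores and the benchmark labels cited above:

```python
from statistics import mean

def eta_squared(groups):
    """eta^2 = SS_between / SS_total for a simple one-way layout."""
    scores = [x for g in groups for x in g]
    grand = mean(scores)
    ss_total = sum((x - grand) ** 2 for x in scores)
    ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
    return ss_between / ss_total

def effect_label(eta2):
    """Cohen's benchmarks: small >= .01, medium >= .059, large >= .138."""
    if eta2 >= 0.138:
        return "large"
    if eta2 >= 0.059:
        return "medium"
    if eta2 >= 0.01:
        return "small"
    return "negligible"

# Hypothetical SF totals (0-5 scale) for intervention vs. control at one OSCE
eta2 = eta_squared([[4, 5, 4, 3, 4], [3, 3, 2, 3, 4]])
print(round(eta2, 3), effect_label(eta2))
```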
Effect of cultural competence training on students' exploring patient perspectives
Figure 1, top panel, displays the change in the mean scores of the patient perspective domain over time for the intervention and control groups; Table 1 presents the results of a two-way ANOVA with repeated measures of total PP scores. The intervention group scored significantly higher than the control group at OSCE1 (F1,106 = 12.97, P < .001, η2 = 0.103, power = 0.95), although there was a significant decrease from OSCE1 to OSCE2 in the intervention group (F1,53 = 6.17, P < .05, η2 = 0.055, power = 0.68). There was no decrease in the scores of the control group from OSCE1 to OSCE2 and no significant difference between the intervention and control groups at OSCE2.
Table 2, top half, presents the mean scores of students at OSCE1 and OSCE2 in the intervention and control groups, indicating how they exhibited each of the behaviors related to the exploration of patient perspectives. Most items in the patient perspective domain showed a significant decrease from OSCE1 to OSCE2, including patient explanation of illness (PP1) in the intervention group (F1,53 = 5.83, P < .05, η2 = 0.041, power = 0.66), patient utilization of medication (PP2) in the control group (F1,53 = 7.50, P < .01, η2 = 0.065, power = 0.77), and utilization of alternative treatment (PP4) in the intervention (F1,53 = 14.91, P < .001, η2 = 0.125, power = 0.97) and control (F1,53 = 4.16, P < .05, η2 = 0.035, power = 0.52) groups. However, for patient concerns about medication (PP3), the intervention group scored significantly higher than the control group at OSCE1 (F1,106 = 5.96, P < .05, η2 = 0.049, power = 0.68) and OSCE2 (F1,106 = 8.62, P < .01, η2 = 0.071, power = 0.83).
Effect of cultural competence training on students' exploring social factors
Figure 1, bottom panel, displays the change in the mean scores of the social factors domain over time for the intervention and control groups, and Table 1 presents the results of a two-way ANOVA with repeated measures. Overall, there was a large group effect for social factors. The intervention group scored significantly higher than the control group at OSCE1 (F1,106 = 14.74, P < .001, η2 = 0.110, power = 0.97) and OSCE2 (F1,106 = 10.14, P < .01, η2 = 0.076, power = 0.88), although there were nonsignificant decreases from OSCE1 to OSCE2 in both groups.
Table 2, bottom half, presents the mean scores of students at OSCE1 and OSCE2 in the intervention and control groups, indicating how they exhibited each of the behaviors related to the exploration of social factors related to illness. Most items in the social factors domain showed a significant decrease from OSCE1 to OSCE2, including impact of work (SF2) in the control group (F1,53 = 12.51, P < .001, η2 = 0.086, power = 0.93) and prescription literacy (SF4) (F1,53 = 11.50, P < .01, η2 = 0.095, power = 0.91) and access to clinics (SF5) (F1,53 = 6.20, P < .05, η2 = 0.038, power = 0.69) in the intervention group. The decrease from OSCE1 to OSCE2 was not significant for affordability of medication (SF3). A different pattern was observed for social support (SF1): there was a significant increase from OSCE1 to OSCE2 in both the intervention (F1,53 = 7.85, P < .001, η2 = 0.042, power = 0.79) and control (F1,53 = 6.70, P < .05, η2 = 0.036, power = 0.72) groups. In addition, the intervention group scored significantly higher than the control group at OSCE1 (F1,106 = 10.83, P < .01, η2 = 0.078, power = 0.90) and OSCE2 (F1,106 = 11.20, P < .01, η2 = 0.081, power = 0.91).
Our study helps answer the unexplored question about the long-term effectiveness of cultural competence training. Our results seem to support the commonly held hypothesis that the effectiveness of prior training diminishes with time but that the group with prior training retains more competence than the group without prior training. However, in our study, further examination of two fundamental domains of cultural competence showed varying patterns.
Regarding students' ability to elicit patient perspectives, two components diminished in both the control and intervention groups. We found that students who had received the workshops a year earlier retained more ability to inquire about patients' utilization of medication than students who had not. However, both groups dropped significantly to the same low level in the ability to explore patients' utilization of alternative medicine, even though the intervention group's score was higher than the control group's immediately after the workshop. This pattern implies a need to strengthen medical education to prevent the loss of medical students' consideration of alternative medicine as they become increasingly socialized in biomedicine-centered medical training.
In contrast, both the intervention and control groups became better at eliciting patient concerns about prescribed medication. It is likely that both groups developed the skill through increased clinical experience, although the significantly larger gain in the intervention group might be attributable to the prior training. The puzzling result that the control group became better at inquiring about patients' explanations of illness, whereas the intervention group became worse, needs explanation. We interviewed the few students in the control group whose increased scores on this item raised the control group's average OSCE2 score. These students informed us that they had had clinical experiences in which their patients did not adhere to prescribed treatments and had witnessed their medical teams exploring patients' explanations of illness in depth. The implication of this finding is that we must consider the powerful influence of the "informal curriculum" in shaping students' cultural competence.
In terms of social factors, the general trend was that students' ability decreased with time and that the intervention group scored higher than the control group. However, both groups dropped significantly to the same low level in the ability to explore patients' prescription literacy and access to clinics, even though the intervention group's score was higher than the control group's immediately after the workshop. This is likely attributable to the informal curriculum: our students have frequently observed that attending physicians do not inquire about literacy and access to care, because Taiwan has a high literacy rate and easily accessible clinics. Nonetheless, we have to remind students not to neglect these factors.
An unusual pattern was observed in the consideration of social support. Both groups improved with time, but the intervention group scored significantly higher. This might be another example of the influence of the informal curriculum. In Chinese culture, families play a significant role in making decisions for and caring for sick family members. A significant proportion of the students in both groups had witnessed and experienced this phenomenon, which would naturally incline them to ask patients about their social support. This speculation arose from informal discussions with students and faculty members and could be the basis of interesting future research.
Our study has some limitations. First, the OSCE tests occurred shortly after and then a year after the cultural competence workshops; further studies are needed to determine what would happen if observations were made over longer periods and if further training were offered to participants like those in our sample. Second, without a preintervention OSCE, we cannot be certain that the two groups possessed similar cross-cultural communication skills before the workshops, although the preintervention self-assessments of cross-cultural communication skills showed no difference between the control and intervention groups. Future research designs could include pre- and postintervention OSCEs to address this issue. Third, we cannot be certain that the two groups had the same exposure to cross-cultural issues between the first and second OSCEs. There was no formal curriculum on these issues for either group, and in terms of the informal and hidden curriculum, we only know that the two groups rotated through the same clerkships. Future research designs could include a portfolio to trace long-term informal learning. In addition, faculty development would probably help improve students' learning from both the formal and informal curricula.
In conclusion, our study adds to the knowledge base in cross-cultural medical education in several ways. First, it investigates the unexplored long-term effectiveness of cultural competence training. Second, the results illustrate the varying durability of different components of cultural competence and suggest which components need further strengthening. Finally, the study highlights the influence of the informal curriculum in shaping students' cultural competence. This study can help in building a foundation for future studies evaluating the impact of both formal and informal curricula in cross-cultural medical education.
The authors would like to thank Drs. Tien-Shang Huang, Tzung-Jeng Hwang, Fen-Yu Tseng, and Yen-Ling Chiu, who made the OSCEs possible. The authors also wish to thank all of the students and standardized patients who participated in this study.
This research project was supported by the National Science Council of Taiwan, R.O.C.
The study protocol was approved by National Taiwan University Hospital research ethics committee.
3 Betancourt JR. Cross-cultural medical education: Conceptual approaches and frameworks for evaluation. Acad Med. 2003;78:560–569.
4 Beach MC, Price EG, Gary TL, et al. Cultural competence: A systematic review of health care provider educational interventions. Med Care. 2005;43:356–373.
5 Betancourt JR. Cultural competence and medical education: Many names, many perspectives, one goal. Acad Med. 2006;81:499–501.
6 Green AR, Betancourt JR, Carrillo JE. Integrating social factors into cross-cultural medical education. Acad Med. 2002;77:193–197.
7 Tervalon M. Components of culture in health for medical students' education. Acad Med. 2003;78:570–576.
8 Beach MC, Rosner MB, Cooper LA, Duggan PS, Shatzer J. Can patient-centered attitudes reduce racial and ethnic disparities in care? Acad Med. 2007;82:193–198.
9 Rosen J, Spatz ES, Gaaserud AM, et al. A new approach to developing cross-cultural communication skills. Med Teach. 2004;26:126–132.
10 Ho M, Yao G, Lee K, Beach M, Green A. Cross-cultural medical education: Can patient-centered cultural competency training be effective in non-Western countries? Med Teach. 2008;30:719–721.
11 Price EG, Beach MC, Gary TL, et al. A systematic review of the methodological rigor of studies evaluating cultural competence training of health professionals. Acad Med. 2005;80:578–586.
12 Ho MJ, Lee K. Reliability and validity of three cultural competency measures. Med Educ. 2007;41:519.
13 Green AR, Miller E, Krupat E, et al. Designing and implementing a cultural competence OSCE: Lessons learned from interviews with medical students. Ethn Dis. 2007;17:344–350.
14 Lunney G. Using analysis of variance with a dichotomous dependent variable: An empirical study. J Educ Meas. 1970;7:263–269.
15 Cohen J. Statistical Power Analysis for the Behavioral Sciences. 2nd ed. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
© 2010 Association of American Medical Colleges