According to the latest report by the US Census Bureau, more than 25 million people in the United States have limited English proficiency (LEP).1 In health care settings, language barriers may predispose LEP patients to suboptimal quality of care. Research shows that language barriers lead to more medical errors,2–4 lower patient satisfaction,5 reduced adherence to care,6 and misunderstanding of diagnoses and treatment.7,8 Ideally, when there is language discordance between physicians and their patients, interpreters are used.9–11 Yet, recent studies have demonstrated that physicians underuse interpreter services when treating patients with LEP,12–15 despite evidence that interpreter use leads to improved quality of care, better outcomes,11 and reduced length of stay.16 Often, clinicians rely on their own non–English language (NEL) skills to communicate with LEP patients.13,15,17–20 While some health care organizations have instituted language proficiency testing for bilingual staff,21 few have begun testing clinicians.22,23
There is a compelling unmet need to understand the degree of NEL fluency required by clinicians to provide effective care to LEP patients. Current methods to evaluate language proficiency include self-assessment and oral proficiency tests.20,24,25 However, the data on the accuracy of clinician language proficiency self-assessment are conflicting. In one study, 20% of participants who conducted medical interpretation in their jobs failed linguistic proficiency testing.21 Three other studies, however, demonstrated that clinicians’ self-reported NEL fluency was associated with successful linguistic proficiency testing via the Clinician Cultural and Linguistic Assessment (CCLA)23,26 and with positive patient-provider communication.27 Moreover, no prior studies have explored the characteristics of clinicians whose self-reported ratings of language proficiency differed from their tested abilities.
We have previously published our preliminary data with a smaller dataset in which we sought to compare 2 NEL proficiency assessments: a self-assessment scale and a validated oral proficiency test as a gold standard, the Interagency Language Roundtable (ILR) scale and the CCLA, respectively.25 This current study presents a full data analysis that (1) evaluates the association between self-reported language skills and tested proficiency with a larger sample size, (2) investigates clinicians’ characteristics associated with accuracy of self-assessment of NEL proficiency and identifies the factors associated with overestimation and underestimation of self-assessed language proficiency, and (3) describes reported interpreter use among clinicians who passed versus those who failed the CCLA. We hypothesized that while most clinicians would accurately rate their NEL skills, there would be those who underestimated and overestimated their language abilities. We predicted that there would be associated characteristics for each of these groups of clinicians.
Research methods for setting, recruitment, and NEL assessment for this study have been previously described in detail elsewhere.25 In brief, primary care providers (PCPs) surveyed in our study belonged to 2 different organizations, both with large LEP patient populations, one in northern California (site A) and another in Massachusetts (site B), serving diverse Asian and Latino populations through nonacademic, community-based care. One hundred fifty-eight PCPs were invited to participate in this study, of which 98 clinicians completed the study (16 from site A and 82 from site B). Our response rate was 62%. Thirty-one additional clinicians were recruited since our previous report.25 NEL self-assessment was made using the ILR scale, which has been published previously.20 Self-assessment was followed by the oral proficiency test (CCLA), used as the gold standard.23
A survey was administered to all participants; it included questions regarding demographics such as sex, race, age, type of clinician (ie, physician, nurse practitioner, physician assistant), clinical department, % of clinical time, and native language. If clinicians reported that they used NEL skills to communicate with LEP patients, follow-up questions asked how the NEL ability was acquired, for a self-assessment of fluency using the ILR scale, and about the frequency of NEL use and interpreter use. Participants were given the option to report on skills in up to 4 NELs they used with LEP patients. Clinicians were also asked whether they would use an interpreter or their own NEL skills in different hypothetical clinical scenarios, such as conversations involving health care maintenance, a minor patient complaint, a major patient complaint, an end-of-life discussion, and informed consent. Following the survey, clinicians were sent information on and access to the CCLA.25
Clinicians’ self-reported NEL proficiency levels on the ILR scale were compared with their tested proficiency from the CCLA using the Spearman rank correlation coefficient; these methods are described in detail elsewhere.25 We also estimated the point-biserial correlation between the continuous variable of ILR self-ratings and the dichotomous variable of passing the CCLA (assuming a latent continuous distribution).
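As an illustration of these two correlation analyses, the sketch below applies SciPy's `spearmanr` and `pointbiserialr` to hypothetical data. The ILR ratings (coded 1 = poor to 5 = excellent), the CCLA scores, and the 80-point pass threshold coding are invented for illustration; they are not study data.

```python
# Sketch of the two correlation analyses: Spearman rank correlation between
# self-rated ILR level and CCLA score, and point-biserial correlation between
# ILR self-rating and the dichotomous pass/fail CCLA result.
from scipy.stats import spearmanr, pointbiserialr

# Hypothetical self-rated ILR levels, coded 1 (poor) to 5 (excellent)
ilr_ratings = [5, 4, 4, 3, 2, 1, 5, 3, 4, 2]
# Hypothetical CCLA scores on a 0-100 scale
ccla_scores = [92, 85, 78, 74, 55, 30, 88, 80, 83, 60]
# Pass/fail indicator: 1 if the CCLA score is 80 or above
ccla_pass = [1 if s >= 80 else 0 for s in ccla_scores]

rho, p_rho = spearmanr(ilr_ratings, ccla_scores)
r_pb, p_pb = pointbiserialr(ccla_pass, ilr_ratings)
print(f"Spearman rho = {rho:.3f} (p = {p_rho:.4f})")
print(f"Point-biserial r = {r_pb:.3f} (p = {p_pb:.4f})")
```

With real data, the reported correlation of 0.512 would be read off `rho` in the same way.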
We also identified factors associated with accurate estimation, overestimation, and underestimation of language ability. Participants were categorized as “overestimators,” “underestimators,” or “accurate,” based on the agreement between their ILR level and CCLA result. Participants were considered accurate estimators if they reported their ILR level as “excellent” or “very good” and passed the CCLA test (ie, scored 80 or above on a 0–100 scale). Those who reported an ILR level of “good,” “fair,” or “poor” and failed the CCLA test were also considered accurate estimators. PCPs who reported ILR levels of “excellent” or “very good” but failed the CCLA test were categorized as overestimators. Finally, participants who reported ILR levels of “good,” “fair,” or “poor” but passed the CCLA test were categorized as underestimators.
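The categorization rule above can be sketched as a small function. The ILR labels and the 80-point CCLA pass threshold come from the text; the function name and the string coding of levels are illustrative assumptions.

```python
# Sketch of the over/under/accurate-estimator categorization described above.
def categorize(ilr_level: str, ccla_score: float) -> str:
    """Classify a clinician's self-assessment accuracy from the ILR level
    ("excellent" ... "poor") and the CCLA score (0-100, pass at 80)."""
    high_self_rating = ilr_level in ("excellent", "very good")
    passed_ccla = ccla_score >= 80  # pass is 80 or above on a 0-100 scale
    if high_self_rating == passed_ccla:
        return "accurate"           # rating and test result agree
    if high_self_rating:
        return "overestimator"      # rated high but failed the CCLA
    return "underestimator"         # rated low but passed the CCLA

print(categorize("very good", 72))  # overestimator
print(categorize("fair", 85))       # underestimator
```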
To test for factors associated with accuracy of self-assessed language ability, we used the Fisher exact test for categorical variables and the Kruskal-Wallis test for continuous variables. We tested factors including test language (Spanish vs. other), sex, race, clinician type, clinical department, English as first language, age, and percentage of time doing clinical work against the level of accuracy in self-assessment: being an accurate estimator, an overestimator, or an underestimator of NEL ability. The analysis of interpreter use from the scenarios was conducted using descriptive statistics and a generalized estimating equation model.
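A minimal sketch of the two association tests, using SciPy on invented data. The 2×2 table (collapsing accuracy to accurate vs. not, since SciPy's `fisher_exact` handles only 2×2 tables) and the age groups are assumptions for illustration; the study's actual comparisons involved three accuracy categories.

```python
# Sketch of the Fisher exact and Kruskal-Wallis tests named above,
# applied to hypothetical data.
from scipy.stats import fisher_exact, kruskal

# Hypothetical 2x2 table: test language (rows: Spanish, other NEL) by
# accuracy of self-assessment (columns: accurate, not accurate).
table = [[40, 20],
         [18, 2]]
odds_ratio, p_fisher = fisher_exact(table)

# Hypothetical ages by self-assessment category (continuous variable).
ages_accurate = [34, 45, 52, 38, 41]
ages_over = [29, 33, 31]
ages_under = [47, 50, 36, 44]
h_stat, p_kw = kruskal(ages_accurate, ages_over, ages_under)

print(f"Fisher exact p = {p_fisher:.3f}")
print(f"Kruskal-Wallis p = {p_kw:.3f}")
```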
Ninety-eight PCPs participated in this study. The majority were women (75.5%) and identified as white/Caucasian (62.2%), with Spanish (81.6%) as their NEL; other languages included Mandarin (7.1%), Portuguese (5.1%), French (2%), Hindi (2%), Vietnamese (1%), and Arabic (1%). The average CCLA score was 78/100, and 69 (70%) participants passed the CCLA (Table 1).
Figure 1 shows a positive correlation (0.512; P<0.0001) between self-reported ILR proficiency and CCLA score. Clinicians at the extreme ends of the ILR scale were 100% accurate in self-reporting their language proficiency. Clinicians who reported “excellent” language skills passed the CCLA test with an average score of 87%, and those who rated their abilities as “poor” had an average score of 33.5%. Of those who reported “very good” language skills, most (n=19; 79%) passed the CCLA test. While the majority of clinicians in the “good” ILR category passed the CCLA test, the average score for these clinicians was 78.1 (ie, a nonpassing score). The greatest variance in CCLA scores was found in the low and middle ILR categories: good, fair, and poor. Overall, 35 clinicians underestimated, 58 were accurate, and 5 overestimated their language proficiency skills.
Several factors were assessed to identify characteristics of PCPs associated with underestimating and overestimating language proficiency skills (Table 2). Clinicians who spoke a NEL other than Spanish were more likely to accurately estimate their language proficiency abilities. All overestimators reported speaking Spanish and rated it at the “very good” level of the ILR (n=5). The accuracy of self-assessed NEL proficiency was greater for clinicians whose primary language was not English (85.7% accurate) than clinicians who reported English as their first language (54.7% accurate). There were no significant differences in accuracy of self-report by clinicians’ race, sex, age, % of clinical time, or clinician type (MD vs. nurse practitioner).
We also looked at clinicians’ responses to the scenario-based questions on interpreter use. Among respondents (n=34), 26 (76%) passed the CCLA and 8 (24%) failed the test. Table 3 describes the interpreter use across 5 clinical scenarios. Overall, clinicians who failed the test were more likely to report the use of professional interpreters; however, there were no statistically significant differences in interpreter use between PCPs who passed and those who failed.
We aimed to analyze how well clinicians with varying language proficiencies self-assessed their NEL skills using the ILR instrument, compared with the results of a validated oral proficiency assessment test (CCLA) used as a gold standard. Our findings support the results from our earlier study25 in that there was a positive correlation between self-report on the ILR scale and the CCLA test at both ends of the proficiency spectrum, with the middle ILR categories showing more variance between self-assessment and oral proficiency scores. Further analysis showed that clinicians who spoke a language other than Spanish were more likely to either accurately assess or underestimate their language proficiency skills. This may be because Spanish was the most common NEL spoken by the clinicians in our study, as well as the most common NEL spoken in the United States,28 giving more opportunity for variance in the levels of fluency. Unsurprisingly, clinicians whose first language was not English were more accurate at assessing their language skills, likely because they were reporting on their own primary language. As expected, physicians who did not pass the CCLA test reported using professional interpreters, as they may have considered their NEL abilities insufficient to communicate effectively with LEP patients. Still, some clinicians who considered their NEL skills sufficient to communicate directly with LEP patients failed the CCLA, which is potentially concerning from a patient safety and quality of care perspective. Future studies are needed to evaluate how clinicians use their NEL skills and language services in the clinical setting.
Policy makers have issued strong mandates for using appropriate language services, which may include language-concordant clinicians, to provide culturally and linguistically appropriate services for LEP patients. Our study found only a moderate association between clinician self-reported language fluency and language proficiency testing. The Department of Health and Human Services (HHS) issued The National Standards for Culturally and Linguistically Appropriate Services in Health Care (CLAS standards).29 This document provides guidance for health care organizations to implement culturally and linguistically appropriate services for patients with diverse cultural beliefs, preferred languages, and communication requirements. Moreover, HHS has required all covered health care programs to provide meaningful access to individuals with LEP, published in the Nondiscrimination in Health Programs and Activities guidance as dictated by the Affordable Care Act. This included a description of the characteristics of “qualified” medical interpreters that must be used.30 However, multiple studies have demonstrated that physicians may be underusing professional medical interpreters13,31 and instead relying on ad hoc interpreters (eg, untrained staff, family members)9,10,19 or using their own NEL skills that have not been appropriately assessed.13,17,18,32 Ideally, care for LEP patients should be given by language-concordant health care providers. One study found that 73% of physicians at community health centers serving a large immigrant and refugee population reported speaking at least 1 NEL at a self-assessed level sufficient to conduct a patient history and physical examination.
Of these physicians, 82% rated their skills to talk clearly and accurately to LEP patients as “good” to “excellent,” and 68% reported using an interpreter in <25% of patient encounters.33 Health care centers such as the ones described above, with large LEP patient populations and many PCPs who report treating patients in a language other than English, may be particularly interested in implementing ways to accurately assess NEL proficiency to ensure the best quality of care for LEP patients.
Previous study findings have been inconsistent regarding clinicians’ abilities to self-assess their own NEL skills.21,23,26 Our study results suggest that the ILR scale can be used as a screening test, but an additional oral proficiency examination may be needed. Research has shown that clinicians with limited NEL proficiency still use their language skills with patients.21 We recommend that clinicians who self-assess at the low end always work with professional medical interpreters. Clinicians who self-assess at the high end could use their own NEL skills to communicate with LEP patients; other studies have demonstrated that high self-reported NEL proficiency correlates with passing oral proficiency tests.26,27 Finally, we suggest that health organizations require clinicians in the middle ILR categories to be evaluated using validated assessments (ie, oral proficiency tests). The competence of bilingual clinicians could thus be assured, presumably improving communication with LEP patients. This type of policy would limit clinicians from using their own limited NEL skills, which may adversely affect the quality of care provided to LEP patients.12–15
Our study has important limitations to consider. First, our sample size may have been too small to capture other significant differences in clinician characteristics (eg, race, age, specialty, or % of clinical time) associated with accuracy of self-assessment of NEL proficiency. This limited sample size also meant that few non-Spanish-speaking clinicians were evaluated. Second, as noted in our preliminary analysis, the ILR scale (while validated in nonmedical settings) has been adapted for the medical setting and used in previous studies.20,32 Currently, there are no validated self-reported measures to assess clinician NEL proficiency. Third, the response rate was low for the hypothetical scenarios in the survey, which were intended to provide data on clinicians’ interpreter use and habits, limiting the analysis of these data and the strength of the observed associations. While the hypothetical scenarios allow us to examine clinicians’ reported behaviors in using their NEL skills or accessing interpreter services, they do not fully capture the experience of needing language assistance in real time with a patient. Issues of access to reputable language services and professional interpreters can also influence a provider’s use of their NEL skills.
Language discordance between patients and clinicians plays an important role in the quality of care for LEP patients. The lack of standardized ways to report NEL fluency in clinicians impedes the development of strategies aimed at eliminating health care disparities experienced by patients with LEP. This study demonstrates that some clinicians who are communicating directly with LEP patients using their NEL skills have an inaccurate sense of their proficiency skills. Health care organizations could implement policies that require the use of validated self-reporting tools and/or oral proficiency interviews to evaluate clinicians’ NEL skills. Our results may aid health care organizations and policy makers in determining the minimum thresholds of clinicians’ NEL abilities that are appropriate for the care of LEP patients and below which professional interpreters must be used. Establishment of such policies in health care settings has the potential to mitigate some of the health care disparities associated with LEP patient-clinician language discordance and could improve the quality of care for this population.
1. Hiatt RA, Pasick RJ, Stewart S, et al. Community-based cancer screening for underserved women: design and baseline findings from the Breast and Cervical Cancer Intervention Study. Prev Med. 2001;33:190–203.
2. Flores G, Abreu M, Barone CP, et al. Errors of medical interpretation and their potential clinical consequences: a comparison of professional versus ad hoc versus no interpreters. Ann Emerg Med. 2012;60:545–553.
3. Flores G, Laws MB, Mayo SJ, et al. Errors in medical interpretation and their potential clinical consequences in pediatric encounters. Pediatrics. 2003;111:6–14.
4. Divi C, Koss RG, Schmaltz SP, et al. Language proficiency and adverse events in US hospitals: a pilot study. Int J Qual Health Care. 2007;19:60–67.
5. Carrasquillo O, Orav EJ, Brennan TA, et al. Impact of language barriers on patient satisfaction in an emergency department. J Gen Intern Med. 1999;14:82–87.
6. Traylor AH, Schmittdiel JA, Uratsu CS, et al. Adherence to cardiovascular disease medications: does patient-provider race/ethnicity and language concordance matter? J Gen Intern Med. 2010;25:1172–1177.
7. Sudore RL, Landefeld CS, Perez-Stable EJ, et al. Unraveling the relationship between literacy, language proficiency, and patient-physician communication. Patient Educ Couns. 2009;75:398–402.
8. Karliner LS, Auerbach A, Napoles A, et al. Language barriers and understanding of hospital discharge instructions. Med Care. 2012;50:283–289.
9. Flores G. The impact of medical interpreter services on the quality of health care: a systematic review. Med Care Res Rev. 2005;62:255–299.
10. Gany F, Kapelusznik L, Prakash K, et al. The impact of medical interpretation method on time and errors. J Gen Intern Med. 2007;22(suppl 2):319–323.
11. Karliner LS, Jacobs EA, Chen AH, et al. Do professional interpreters improve clinical care for patients with limited English proficiency? A systematic review of the literature. Health Serv Res. 2007;42:727–754.
12. Brooks K, Stifani B, Batlle HR, et al. Patient perspectives on the need for and barriers to professional medical interpretation. R I Med J. 2016;99:30–33.
13. Diamond LC, Schenker Y, Curry L, et al. Getting by: underuse of interpreters by resident physicians. J Gen Intern Med. 2009;24:256–262.
14. Hsieh E. Not just “getting by”: factors influencing providers’ choice of interpreters. J Gen Intern Med. 2015;30:75–82.
15. Baker DW, Parker RM, Williams MV, et al. Use and effectiveness of interpreters in an emergency department. JAMA. 1996;275:783–788.
16. Lindholm M, Hargraves JL, Ferguson WJ, et al. Professional language interpretation and inpatient length of stay and readmission rates. J Gen Intern Med. 2012;27:1294–1299.
17. Burbano O’Leary SC, Federico S, Hampers LC. The truth about language barriers: one residency program’s experience. Pediatrics. 2003;111(pt 1):e569–e573.
18. Yawman D, McIntosh S, Fernandez D, et al. The use of Spanish by medical students and residents at one university hospital. Acad Med. 2006;81:468–473.
19. Diamond LC, Tuot DS, Karliner LS. The use of Spanish language skills by physicians and nurses: policy implications for teaching and testing. J Gen Intern Med. 2012;27:117–123.
20. Diamond LC, Luft HS, Chung S, et al. “Does this doctor speak my language?” Improving the characterization of physician non-English language skills. Health Serv Res. 2012;47(pt 2):556–569.
21. Moreno MR, Otero-Sabogal R, Newman J. Assessing dual-role staff-interpreter linguistic competency in an integrated healthcare system. J Gen Intern Med. 2007;22(suppl 2):331–335.
22. Tidwell L. Kaiser Permanente-Southern California Physicians Language Concordance Program: Meeting the Needs of LEP Patients. Health Care Interpreter Network: From Ad-Hoc to Best Practices in Healthcare Interpreting; July 16–17; Oakland, CA, 2009.
23. Tang G, Lanza O, Rodriguez FM, et al. The Kaiser Permanente Clinician Cultural and Linguistic Assessment Initiative: research and development in patient-provider language concordance. Am J Public Health. 2011;101:205–208.
24. Lion KC, Thompson DA, Cowden JD, et al. Clinical Spanish use and language proficiency testing among pediatric residents. Acad Med. 2013;88:1478–1484.
25. Diamond LC, Chung S, Ferguson W, et al. Relationship between self-assessed and tested non-English-language proficiency among primary care providers. Med Care. 2014;52:435–438.
26. Reuland D, Frasier P, Olson M, et al. Accuracy of self-assessed Spanish fluency in medical students. Teach Learn Med. 2009;21:305–309.
27. Fernandez A, Schillinger D, Grumbach K, et al. Physician language ability and cultural competence. J Gen Intern Med. 2004;19:167–174.
29. United States Department of Health and Human Services. The National Standards for Culturally and Linguistically Appropriate Services. 2001.
31. Lee KC, Winickoff JP, Kim MK, et al. Resident physicians’ use of professional and nonprofessional interpreters: a national survey. JAMA. 2006;296:1050–1053.
32. Lion KC, Thompson DA, Cowden JD, et al. Impact of language proficiency testing on provider use of Spanish for clinical care. Pediatrics. 2012;130:e80–e87.
33. Savageau JA, Cragin L, Ferguson WJ, et al. Recruitment and retention of community health center primary care physicians post MA Health Care Reform: 2008 vs. 2013 physician surveys. J Health Care Poor Underserved. 2016;27:1011–1032.