Language Matters: Considerations in Measuring Speech Intelligibility

Wong, Lena L.N. PhD; Sultana, Nuzhat; Chen, Yuan PhD

doi: 10.1097/01.HJ.0000524320.19140.0e
Journal Club

Dr. Wong, left, is an associate dean and professor at the University of Hong Kong, and president elect of the International Society of Audiology. Ms. Sultana, middle, is a PhD candidate at the University of Hong Kong. Dr. Chen is an assistant professor at the Education University of Hong Kong.

A few research studies have reported variations in the contribution of the band-importance function (BIF) to speech intelligibility across languages. These differences could mean that hearing impairment does not have the same effect on speech understanding for listeners of different linguistic backgrounds, raising the question of whether speech-processing algorithms should be customized accordingly (J Acoust Soc Am. 2007;121[4]:2350). In a recent report, Jin and colleagues examined this important issue in the context of Mandarin and Korean by deriving count-the-dot audiograms based on their BIFs and comparing the audibility of these languages with English (J Am Acad Audiol. 2017;28[2]:119).



The BIF is a measure of a frequency band's relative contribution to speech intelligibility. BIFs at individual frequency bands are combined into a speech intelligibility index (SII) ranging from 0 to 1, with 0 indicating no speech intelligibility and 1 indicating full intelligibility. The SII summarizes the proportion of linguistically important speech cues that are audible across frequency bands. When hearing loss renders some speech information inaudible, the band audibility function (BAF) describes the speech energy that is above the listener's hearing threshold in a given frequency region. Thus, when the BAF is multiplied by the BIF of a particular language and summed across frequency bands, the result is the SII for a listener of that language.
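The band-by-band combination described above can be sketched in a few lines of code. This is an illustrative simplification, not the full ANSI S3.5 procedure; the six band-importance weights and audibility values below are hypothetical, chosen only to show how a high-frequency hearing loss lowers the SII.

```python
def speech_intelligibility_index(bif, baf):
    """Combine band importance (BIF) and band audibility (BAF) into an SII.

    bif: per-band importance weights that sum to 1.0
    baf: per-band audibility, each between 0.0 (inaudible) and 1.0 (fully audible)
    """
    if len(bif) != len(baf):
        raise ValueError("BIF and BAF must cover the same frequency bands")
    return sum(importance * audibility for importance, audibility in zip(bif, baf))

# Hypothetical six-band importance weights (sum to 1.0)
bif = [0.10, 0.20, 0.25, 0.20, 0.15, 0.10]

# Normal hearing: all bands fully audible
baf_normal = [1.0, 1.0, 1.0, 1.0, 1.0, 1.0]

# Sloping high-frequency loss: upper bands progressively less audible
baf_hf_loss = [1.0, 1.0, 0.8, 0.4, 0.1, 0.0]

print(speech_intelligibility_index(bif, baf_normal))   # 1.0
print(speech_intelligibility_index(bif, baf_hf_loss))  # 0.595
```

Because the weights differ across languages, the same audibility profile (the same BAF) would yield a different SII if a language-specific BIF were substituted, which is the central point of the studies discussed here.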



Due to differences in phonetics, phonemes, sentence structures, and the distribution of linguistically distinctive speech cues, different languages have different BIFs (Semin Hear. 2011;32[2]:182; J Speech Lang Hear Res. 2014;57[1]:338). For example, tones in tonal languages such as Cantonese and Mandarin Chinese are linguistically distinctive and lexically meaningful: a change in tone can change the meaning of a word. Given that the fundamental frequencies of tones are mainly located at low frequencies, the frequency region between 180 and 1,600 Hz makes a greater contribution to Cantonese speech intelligibility than it does to English (J Acoust Soc Am. 2007). Although the contributions of fundamental frequencies and tone contours are often cited in speech comprehension, much of this information is carried by the vowels, which play a relatively more important role in speech perception in tonal languages than in non-tonal ones (J Speech Lang Hear Res. 2014; J Acoust Soc Am. 2013;134[2]:EL178). While research has not been conducted in other tonal languages (e.g., Thai or Somali), the BIFs of these languages at low frequencies are expected to carry a stronger weight than those of English. Overall, clinicians should consider the BIFs of individual languages when evaluating the impact of hearing impairment on speech understanding (J Acoust Soc Am. 2013).

These linguistic differences have led to research and discussion on whether signal-processing algorithms that account for them can better maximize speech intelligibility. For example, some studies have examined whether enhancing low-frequency information that codes pitch changes would result in better speech understanding (J Acoust Soc Am. 2007). Yet no strong evidence supports the use of these algorithms one way or the other.



Two issues strain the translation of empirical research into clinical practice: (1) the information has not been presented in a way that clinicians and their patients can easily understand, and (2) clinicians have limited time in busy clinics. To address these issues, Mueller and Killion derived a count-the-dot audiogram that provides a simplified account of speech intelligibility in English (Hearing Journal. 1990;43[9]:14). The count-the-dot audiogram consists of 100 dots, with each dot representing one percent of the total weighted audibility. A new version of the count-the-dot audiogram was introduced to include weightings for 6,000 and 8,000 Hz (Hearing Journal. 2010;63[1]:10). To account for differences in the BIFs associated with different languages, count-the-dot audiograms for Mandarin Chinese and Korean were derived using standardized, highly predictable sentences spoken by male voice actors in the respective languages (J Am Acad Audiol. 2017). Assuming a dynamic range of 30 dB for all languages, the dots were distributed over 10 band divisions between 150 and 8,600 Hz. To estimate speech intelligibility, the number of audible dots above the hearing thresholds is counted (Hearing Journal. 1990).
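The counting logic behind a count-the-dot audiogram can be sketched as follows. The dot positions and thresholds below are hypothetical (they are not the published dot placements from Mueller and Killion or from Jin et al.); the sketch uses ten dots worth 10 percent each rather than 100 dots worth 1 percent, purely to keep the example short. A speech dot is counted as audible when its level is at or above the listener's threshold at that frequency.

```python
def percent_audible(dots, thresholds):
    """Estimate speech intelligibility by counting audible dots.

    dots: list of (freq_hz, speech_level_db_hl) pairs, each worth an
          equal share of the total weighted audibility
    thresholds: dict mapping freq_hz -> hearing threshold in dB HL
    """
    audible = sum(1 for freq, level in dots if level >= thresholds[freq])
    return 100.0 * audible / len(dots)

# Ten hypothetical dots (each worth 10% of weighted audibility)
dots = [(250, 35), (500, 30), (1000, 25), (1000, 45), (2000, 30),
        (2000, 50), (4000, 35), (4000, 55), (6000, 40), (8000, 45)]

# Hypothetical sloping high-frequency hearing loss (dB HL)
thresholds = {250: 15, 500: 20, 1000: 30, 2000: 45,
              4000: 60, 6000: 65, 8000: 70}

print(percent_audible(dots, thresholds))  # 40.0
```

With language-specific BIFs, the dots would be placed differently on the audiogram for each language, so the same set of thresholds could yield different audible-dot counts, which is exactly the comparison Jin and colleagues made for Mandarin, Korean, and English.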

The count-the-dot audiograms were used to illustrate the effects of hearing impairment on speech intelligibility in Mandarin Chinese and Korean compared with English. The results showed that the same degree and configuration of hearing loss could lead to different SII predictions (J Am Acad Audiol. 2017). Clinicians could use this tool to gain a better understanding of their patients' ability to comprehend speech.

Language-specific BIFs should be considered when working with patients of different linguistic backgrounds. Clinicians could use language-specific count-the-dot audiograms to illustrate the available speech information and the benefits of amplification. These audiograms are particularly useful when the clinician is unable to assess a patient because of the lack of speech audiometry tools in the respective language and/or inadequate language proficiency. However, clinicians must note that this simplified representation of speech intelligibility does not account for other relevant factors such as the listener's signal-to-noise loss, dynamic changes in speech and noise levels, impact of speech-processing algorithms, and cognitive decline.

Journal Club Highlight

Does Language Matter When Using a Graphical Method for Calculating the Speech Intelligibility Index?

Jin IK, Kates JM, and Arehart KH

J Am Acad Audiol. 2017 Feb;28(2):119-126. doi: 10.3766/jaaa.15131.

Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.