
Page Ten

The Speech Intelligibility Index

What is it and what's it good for?

Hornsby, Benjamin W.Y.

Editor(s): Mueller, Gus

Author Information
The Hearing Journal 57(10):10-17, October 2004.

In Brief

If you're not quite sure what the difference is between the AI and the SII, or how to use each of them, let this month's Page Ten explain the two measures and discuss their value in daily clinical practice.

Figure: Benjamin W.Y. Hornsby

1 I guess I haven't been keeping up very well, so, for starters, let me ask you, what exactly is the Speech Intelligibility Index?

The Speech Intelligibility Index, or SII, is a measure, ranging from 0.0 to 1.0, “that is highly correlated with the intelligibility of speech” (ANSI S3.5, 1997, p. 1).1 You're probably not the only one who isn't familiar with the term. Although drafts of the standard were around in the mid-1990s, it wasn't until the revision of the ANSI S3.5 standard in 1997 that the term SII formally replaced the more familiar term AI (for Articulation Index). The term SII is just starting to find its way into clinical settings.

2 So, are you saying this is just a new name for the Articulation Index?

We can talk about the details later, but for now let me just say yes, there are many similarities between the old AI and the SII. The SII, like the AI, is a quantification of the proportion of speech information that is both audible and usable for a listener. Basically, an SII of 0 implies that none of the speech information, in a given setting, is available (audible and/or usable) to improve speech understanding. An SII of 1.0 implies that all the speech information in a given setting is both audible and usable for a listener.

Generally, there is a monotonic relationship between the SII and speech understanding. That is, as the SII increases, speech understanding generally increases. The method for calculating the SII is described in the ANSI S3.5 (1997) standard titled “American National Standard Methods for Calculation of the Speech Intelligibility Index.”1

3 You're talking as if the SII and speech understanding are similar, but not exactly the same. Is that right?

You are correct. They are not the same, although this is a common misconception. For example, having an SII of 0.5 in a certain environment does not mean you would understand 50% of speech. It simply means that about 50% of speech cues are audible and usable in a given setting. It turns out that for most conversational speech stimuli an SII of 0.5 would correspond to close to 100% intelligibility for individuals with normal hearing. Some researchers have suggested that we use the term “Audibility Index” to remind us that the AI (or SII) is not a direct measure of intelligibility.2

4 So are you saying that we can predict a person's speech understanding based on his SII score?

Yes, the SII (and AI) can be used to predict speech recognition scores by means of an empirically derived transfer function. These transfer functions are based on the specific speech materials being used during testing. Examples of transfer functions for several types of speech materials are shown in Figure 1. These functions show that a single SII value can correspond to multiple speech-recognition scores. The actual speech score depends on the speech material used during testing, as well as the proficiency of the talker and listener.3

Figure 1:
Transfer functions showing the relationship between the SII and speech understanding scores for three different speech materials: the CID W-22 and NU-6 monosyllabic words4,5 and Connected Speech Test (CST) sentence recognition materials.6
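
For readers who like to see the arithmetic, here is a minimal sketch of how a transfer function might be applied in software. The (SII, score) pairs and the simple interpolation below are invented for illustration only; in practice you would substitute an empirically derived transfer function, such as those plotted in Figure 1 for the W-22, NU-6, or CST materials.

```python
# Minimal sketch: converting an SII value to a predicted recognition score via an
# empirically derived transfer function. The (SII, score) pairs below are made-up
# illustrative values, NOT the published W-22, NU-6, or CST functions.

def predict_score(sii, transfer_function):
    """Linearly interpolate a predicted percent-correct score from an SII value."""
    points = sorted(transfer_function)
    if sii <= points[0][0]:
        return points[0][1]
    if sii >= points[-1][0]:
        return points[-1][1]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        if x0 <= sii <= x1:
            return y0 + (y1 - y0) * (sii - x0) / (x1 - x0)

# Hypothetical transfer function for a monosyllabic word test (illustrative only)
word_tf = [(0.0, 0), (0.2, 25), (0.4, 60), (0.6, 85), (0.8, 96), (1.0, 99)]

print(predict_score(0.5, word_tf))  # ~72.5% predicted word recognition in this example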

5 So it still sounds as if the SII is just the AI in a new package. Am I right?

Well, you're persistent. Here are some of the details: The 1997 document is a revision of the 1969 ANSI S3.5 document titled “American National Standard Methods for Calculation of the Articulation Index.”7 In this article, I'll use the acronym SII to refer specifically to the 1997 version of the ANSI S3.5 standard and the term AI to refer to the 1969 version of the S3.5 document and other more simplified calculation methods.

The SII and AI share a common ancestry, being based in large part on early work conducted by Harvey Fletcher and colleagues at American Telephone and Telegraph's (AT&T's) Western Electric Research Laboratories (later to become Bell Telephone Laboratories) in the early and mid-20th century.8 These researchers were interested in developing a method to predict the impact of changes in telephone circuits (e.g., changes in frequency response, distortion, noise) on speech understanding. The method of calculating the AI, as described in the 1969 standard, was summarized by French and Steinberg, colleagues of Fletcher, in 1947.9 Before the development of the AI, quantifying the impact of changes in telephone circuitry on speech understanding was done via behavioral assessment. This involved substantial speech testing with multiple talkers and listeners and was quite time-consuming and expensive.

6 So I was right, wasn't I? The SII and the AI are the same thing.

No. They are similar in that both are based on the same underlying theory, namely that speech intelligibility is directly related to the proportion of audible speech information. However, there are some important differences between the two methods.

7 Are you going to tell me what the differences are?

The primary differences are listed in the foreword of the 1997 standard, but a major difference is that the 1997 standard provides a more general framework for making the calculations than the 1969 version. This framework was designed to allow flexibility in defining the basic input variables (e.g., speech and noise levels, auditory threshold) needed for the calculation. The general framework also allows for flexibility in determining the reference point for your measurements (e.g., free-field or eardrum).

Additional differences include corrections for upward spread of masking and high presentation levels (maybe we can talk more about this later) and the inclusion of useful data such as various frequency importance functions (FIFs). The most notable difference, however, was the name change from AI to SII.

8 Okay, there do seem to be some differences. Can you briefly explain how the SII is calculated?

Let's start with an overview. To calculate the SII, or the AI, we need certain basic information. Specifically, we need frequency-specific information about speech levels, noise levels, auditory threshold, and the importance of speech. In its simplest form, the SII is calculated by determining the proportion of speech information that is audible across a specific number of frequency bands.

To do this you compare the level of speech peaks to either (1) auditory threshold (although you have to account for bandwidth differences between the pure tones used to measure threshold and bands of speech) or (2) the RMS level of the noise (if present), also in frequency-specific bands. The proportion of audible speech, in a frequency region, is then multiplied by the relative importance of that frequency region. Finally, the resulting values are summed across the total number of frequency bands used to make your measures.

9 That seems to make sense, but I think I'm going to need a few more details. But please, no formulas!

Sorry, but I was just getting ready to toss out a formula when you interrupted. I'll try to make it as painless as possible. The general formula for calculating the SII is:

SII = Σ (Ii × Ai), with the sum taken across the n frequency bands (i = 1 to n).

In this formula, the n refers to the number of individual frequency bands used for the computation. The current SII standard is flexible in that you can choose how frequency-specific you want your measures to be (ranging from 6 [octave bandwidth] to 21 [critical bandwidth] bands). Generally speaking, the more frequency-specific your measures, the more accurate your computations.

The Ii refers to the importance of a given frequency band (i) to speech understanding. The values for Ii, also known as the frequency importance function (FIF), are based on specific speech stimuli and, when summed across all bands, equal approximately 1.0. Again, the 1997 S3.5 standard allows flexibility in using the most appropriate FIF for your situation. Figure 2 provides examples of the 1/3-octave band FIFs for two test materials. The figure highlights the substantial differences in band importance that can exist between speech materials.

Figure 2:
One-third-octave frequency importance functions for continuous discourse and various nonsense syllables taken from the ANSI S3.5 (1997) Methods for the Calculation of the Speech Intelligibility Index.1

Finally, the values for Ai, or band audibility, which range from 0 to 1, indicate the proportion of speech cues that are audible in a given frequency band. The determination of the Ai variable is based simply on the level of the speech, in a given frequency band, relative to the level of noise in that same band. For determining Ai a dynamic range of speech of 30 dB is assumed (in both the 1969 and 1997 standards). Using the basic formula for calculating Ai we simply subtract the spectrum level of noise from the spectrum level of the speech (in dB) in a given band, add 15 dB (the assumed speech peaks), and divide by 30. Resulting values greater than 1 or less than 0 are set to 1 and 0, respectively. This value essentially provides the proportion of the 30-dB dynamic range of speech that is audible to the listener.

The value Ai is then multiplied by the FIF (Ii) to determine the contribution that frequency region will provide to speech recognition. By summing these values across the various frequency bands a single numerical index (the SII) is obtained.
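
To make the arithmetic concrete, here is a minimal sketch of the basic (uncorrected) calculation just described: band audibility from the speech-minus-noise spectrum levels, clipped to 0-1, weighted by band importance, and summed. The spectrum levels and importance weights below are hypothetical placeholders, not values from the ANSI S3.5 (1997) tables.

```python
# Minimal sketch of the basic SII computation described above (no level or
# spread-of-masking corrections). Spectrum levels and importance weights are
# hypothetical placeholders, not values from ANSI S3.5-1997.

def band_audibility(speech_level, noise_level):
    """Ai: proportion of the assumed 30-dB speech dynamic range above the noise."""
    a = (speech_level - noise_level + 15.0) / 30.0   # +15 dB for the assumed speech peaks
    return min(1.0, max(0.0, a))                     # clip to the 0-1 range

def sii(speech_levels, noise_levels, importance):
    """Sum of Ii * Ai across bands; the importance values should sum to ~1.0."""
    return sum(I * band_audibility(s, n)
               for s, n, I in zip(speech_levels, noise_levels, importance))

# Hypothetical six-band (octave) example
speech = [50, 48, 45, 40, 35, 30]               # speech spectrum levels, dB
noise  = [40, 42, 35, 38, 40, 45]               # noise (or "internal noise") spectrum levels, dB
weight = [0.08, 0.15, 0.25, 0.25, 0.17, 0.10]   # illustrative importance function (sums to 1.0)

print(round(sii(speech, noise, weight), 2))      # ~0.58 for these made-up values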

10 Wait a minute. You mention speech spectrum level and noise spectrum level. What if you are testing in quiet? How does threshold fit in here?

When calculating the SII, we assume that both noise and thresholds function in the same way to limit audibility. The 1997 standard has a conversion factor that is used to convert thresholds (in dB HL) to a hypothetical “internal noise” that would give rise to the measured threshold in quiet.
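
Conceptually, the conversion works like the sketch below: each band's threshold is added to a reference internal-noise spectrum level, and the result is then treated just like an external noise in the band-audibility calculation. The reference values shown are placeholders, not the table in the standard.

```python
# Rough sketch of the "internal noise" idea: a hearing threshold (dB HL) is converted
# to an equivalent noise spectrum level, which then enters the same band-audibility
# calculation as an external noise. The reference values here are placeholders,
# NOT the table in ANSI S3.5-1997.

reference_internal_noise = [0.6, -1.7, -3.9, -6.3, -8.0, -9.0]  # hypothetical, dB

def equivalent_internal_noise(thresholds_dB_HL):
    """Add each band's threshold to its reference internal-noise spectrum level."""
    return [x + t for x, t in zip(reference_internal_noise, thresholds_dB_HL)]

# For a listener in quiet, the larger of the external-noise and internal-noise
# spectrum levels in each band would be used when computing Ai.
thresholds = [15, 20, 30, 45, 55, 60]   # hypothetical audiogram, dB HL
print(equivalent_internal_noise(thresholds))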

11 I'm not sure I'm ready to pull out my calculator, but thanks anyway. You've used the term “basic” or “general” formula several times. Is there more to this calculation?

Yes, as I mentioned earlier, the 1997 version of the S3.5 standard includes correction factors in the calculation of the SII that are designed to account for upward spread of masking effects and the negative effects of high presentation levels. These factors may reduce the SII value in a given band and consequently reduce the overall SII for that specific situation.

These factors are important to take into account in certain situations. Upward spread of masking is unlikely to play a role in everyday listening, but when a lower-frequency, narrowband masking noise is present it may significantly limit the usefulness of speech information in higher-frequency regions. In contrast, high presentation levels are experienced daily both by persons without hearing loss and by persons wearing hearing aids, and they can have a significant impact on speech understanding.

12 Okay, I see how the SII can be calculated and even how it could be useful, at least for AT&T. But how is it used in audiology?

Although the scope of the SII standard is limited to individuals without hearing loss, the SII (and its predecessor, the AI) has been an incredibly useful tool for researchers in audiology.

One of the most common complaints of persons with hearing loss is difficulty understanding speech, particularly in noise. The source of this difficulty has been hotly debated for many years. Researchers have tried to determine how much of the difficulty is due to reduced audibility, resulting from hearing loss, and how much is due to factors other than audibility. The SII allows us to quantify the impact hearing loss has on the audibility of speech, both in quiet and in noise. This allows us to “predict” speech understanding in specific test settings and compare the predicted and measured performance of persons with hearing loss.

Using this method, researchers have investigated the impact of factors other than audibility, such as high presentation levels, age, degree and configuration of hearing loss, and cognitive function, on speech understanding.10,12–15 The results of these studies, and many others like them, have contributed to our basic understanding of how hearing loss interacts with other factors to affect communication function.

13 But what about for people in routine audiologic practice? Are there relatively simple ways to calculate the SII? And how can I use the information in the clinic?

Good questions. Let's take them one at a time. Several investigators have developed relatively simple graphic, paper-and-pencil methods of calculating an AI measure. A nice review of these simplified procedures is provided by Amlani et al.16 These graphic tools, often referred to as “count the dots” methods, use an audiogram format to display both auditory threshold and the dynamic range of speech in dB HL. Figure 3 shows examples of several such methods.2

Figure 3:
Examples of three “count the dots” methods of calculating the Articulation Index. From left to right are the methods of Mueller and Killion,21 Humes,22 and Pavlovic.23 Reprinted from Killion et al.2

You can see that the density of dots in the speech spectrum varies with frequency. This variation in density corresponds to the relative importance of speech information in each frequency region (the FIF in the SII calculation). In the Mueller and Killion method, for example, there are a total of 100 dots, so to calculate the AI you simply count the number of dots that are above threshold and divide by 100. Thus, if 30 dots are above threshold, then the AI for that individual, listening in quiet, is 0.3.
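
If you wanted to automate the dot counting, the arithmetic is nothing more than the sketch below. The dot placements are invented for illustration and are not the actual Mueller-Killion dot distribution; a real count-the-dots audiogram form uses 100 dots positioned within the conversational speech spectrum.

```python
# Sketch of the count-the-dots arithmetic. Each dot is a (frequency, level in dB HL)
# point inside the speech spectrum; dots whose level exceeds the listener's threshold
# at that frequency are audible. The dot list below is invented for illustration and
# is NOT the Mueller-Killion (1990) dot placement.

dots = [(500, 30), (500, 45), (1000, 25), (1000, 40), (2000, 30),
        (2000, 50), (4000, 35), (4000, 55)]   # hypothetical; a real form has 100 dots

def count_the_dots_ai(threshold):
    """threshold: dict of dB HL thresholds by frequency. Returns audible dots / total dots."""
    audible = sum(1 for f, level in dots if level > threshold[f])
    return audible / len(dots)

print(count_the_dots_ai({500: 35, 1000: 40, 2000: 45, 4000: 60}))  # 0.25 for this example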

14 Wait a minute! Do you mean I can do the math you described earlier or I can just count the dots and I'll get the same answer?

It's not likely that you would get the same answer, although you may be pretty close in some situations. Recall that the ANSI S3.5 method provides a general framework and allows room for varying many of the input parameters used in the calculation, such as speech level, noise level, and the type of speech materials. The input parameters, other than auditory threshold, are fixed in these simplified methods. For example, the Mueller and Killion method assumes the speech is presented in quiet, at a conversational level, and has a frequency importance function representative of nonsense syllables. If all these assumptions are incorporated in the ANSI S3.5 (1997) method, then the two methods will produce similar results. However, for situations that deviate from these assumptions, differences between methods could be quite large.

15 Okay, I can count the dots, but how might I use this information in the clinic?

Well, methods for calculating audible speech information can be useful clinically in several ways. Mueller and Hall listed six clinical applications of AI-type measures.17 The following is a summary of their suggestions:

Perhaps the most obvious uses are in determining candidacy and in patient counseling and education. For example, individuals with very high unaided AI measures are unlikely to show large aided benefit, at least for conversational speech inputs. At the same time, it is unclear just how high an unaided AI is “high enough,” so self-assessment questionnaires could also help identify borderline candidates. In terms of counseling, a visual image of audibility (or the lack of it) provides a powerful tool for showing patients and significant others that hearing loss can affect speech audibility and speech understanding.

AI measures may also be helpful in making circuit selections or for comparisons across hearing aids. For example, the AI can help quantify the potential improvement in speech audibility as you change hearing aid circuits or instruments. It is important to remember, however, that increasing the AI does not always mean speech understanding will increase, particularly for persons with substantial hearing loss (more about that later).

The AI also provides an objective measure of hearing aid benefit. Although inadequate in isolation, documentation of improved audibility, in conjunction with other measures of hearing aid benefit (such as subjective assessments), is important in these days of managed healthcare.

Finally, Mueller and Hall suggest that AI measures could be useful in educating referral sources. Information about speech-understanding abilities provided to referral sources typically comes from the diagnostic battery and consists of word-recognition scores based on monosyllabic word testing done at high levels. However, reporting an unaided AI measure based on conversational and soft speech is another method that may be useful when describing hearing handicap for speech to referral sources.

16 Doesn't my probe-microphone system provide unaided and aided AI information too?

Yes, some systems do calculate a version of the AI, but to my knowledge none actually implement the 1997 version of the ANSI S3.5 standard. In fact, in many cases it is unclear what input parameters, or what “method” the software is basing its calculations on. It's very possible that your probe-mic AI calculations would differ from the SII or from your favorite pencil-and-paper AI calculations.

17 Does that mean the values they provide are useless?

No, the information can be very useful in making decisions about hearing aid adjustments. For example, suppose your patient has returned for a follow-up session and reports that, despite a perfect match to your prescriptive target, she is continuing to have substantial difficulty understanding speech at work. Based on your AI measures, you determine that her aided AI value is still substantially below 1.0 (e.g., ∼0.55). (This will actually be the case for many individuals with moderate hearing loss, even after perfectly matching some prescriptive targets.) The client is not complaining of tolerance issues but rather of speech-understanding difficulties. This may well be a case where gain for soft to moderate sounds, particularly in the high-frequency regions, could be increased to improve audibility and, potentially, speech understanding.

That said, the usefulness of the AI values provided by your probe-microphone system will also depend on your reasons for obtaining them in the first place. In most cases, we are simply looking to verify a change in the AI between the unaided and aided conditions. Because we are comparing data obtained using the same calculation method and are not trying to actually predict speech recognition, this relative comparison is quite valid. If, however, your needs are more precise, then a separate implementation of the ANSI S3.5 1997 SII calculation would be more appropriate.

18 Let me get this straight. It sounds as if all we need to do to maximize speech understanding is crank up the hearing aid gain until we get an AI/SII of 1.0. Is it really that simple?

Unfortunately, no. Several factors make this approach unreasonable. For one, people with cochlear pathology usually have LDLs at or near normal levels, resulting in very narrow dynamic ranges, usually in the high frequencies. This limits the maximum amount of gain, particularly for linear devices, that can be applied to soft sounds without making louder sounds uncomfortably loud.

Recall that speech understanding actually decreases at high presentation levels, particularly in noise. Several researchers have reported that maximizing the AI does not necessarily improve speech understanding compared with other prescriptive procedures (e.g., the NAL) and in some cases may even result in poorer performance.18,19

Finally, research suggests that the gain-frequency response that provides optimal speech understanding may not always be preferred by listeners in terms of optimal sound quality.20 Clearly, in most cases, simply turning up hearing aid gain to achieve AIs of 1.0 is not appropriate. An exception might be in cases of mild hearing loss when WDRC, in conjunction with appropriate frequency shaping, is used.

19 You mentioned earlier that there were programs available that helped in the calculation of the SII?

Yes, a useful web site for those interested in the standard is http://www.sii.to. The site was created and is maintained by members of the Acoustical Society of America (ASA) Working Group S3–79. This working group, which is in charge of reviewing the ANSI S3.5–1997 standard, has provided access to several computer software programs to aid in calculating the SII. It is important to note, though, that the programs are not part of the standard and are provided by their developers on an “as is” basis. The site also contains errata describing typographical errors in the existing standard.

20 I know this is my last question, so tell me one more time. When I'm seeing my patients in the weeks to come, when would I use the SII and when would I use the old AI?

Clinically, you are most likely to obtain AI (rather than SII) type measures using a “count the dots” method or from your probe-mic system. The information provided by either of these methods is quite adequate for clinical uses. In addition, calculating the complete SII (i.e., incorporating all correction factors) would be tedious without the use of computer programs and, to my knowledge, these programs are not yet incorporated in current audiologic equipment. I would, however, expect to see this in the near future.

REFERENCES

1. ANSI: ANSI S3.5–1997. American National Standard Methods for the Calculation of the Speech Intelligibility Index. New York: ANSI, 1997.
2. Killion MC, Mueller HG, Pavlovic C, Humes L: A is for Audibility. Hear J 1993;46(4):29–32.
3. Kryter KD: Validation of the articulation index. J Acoust Soc Am 1962;34:1698–1702.
4. Studebaker GA, Sherbecoe RL: Frequency-importance and transfer functions for recorded CID W-22 word lists. J Sp Hear Res 1991;34(2):427–438.
5. Studebaker GA, Sherbecoe RL, Gilmore C: Frequency-importance and transfer functions for the Auditec of St. Louis recordings of the NU-6 word test. J Sp Hear Res 1993;36(4):799–807.
6. Sherbecoe RL, Studebaker GA: Audibility-index functions for the Connected Speech Test. Ear Hear 2002;23(5):385–398.
7. ANSI: ANSI S3.5–1969. American National Standard Methods for the Calculation of the Articulation Index. New York: ANSI, 1969.
8. Fletcher H, Galt RH: The perception of speech and its relation to telephony. J Acoust Soc Am 1950;22:89–151.
9. French NR, Steinberg JC: Factors governing the intelligibility of speech sounds. J Acoust Soc Am 1947;19:90–119.
10. Studebaker G, Sherbecoe R, McDaniel D, Gwaltney C: Monosyllabic word recognition at higher-than-normal speech and noise levels. J Acoust Soc Am 1999;105(4):2431–2444.
11. Hornsby BW, Ricketts TA: The effects of compression ratio, signal-to-noise ratio, and level on speech recognition in normal-hearing listeners. J Acoust Soc Am 2001;109(6):2964–2973.
12. Ching T, Dillon H, Byrne D: Speech recognition of hearing-impaired listeners: Predictions from audibility and the limited role of high-frequency amplification. J Acoust Soc Am 1998;103(2):1128–1140.
13. Hargus SE, Gordon-Salant S: Accuracy of speech intelligibility index predictions for noise-masked young listeners with normal hearing and for elderly listeners with hearing impairment. J Sp Hear Res 1995;38:234–243.
14. Hornsby BW, Ricketts TA: The effects of hearing loss on the contribution of high- and low-frequency speech information to speech understanding. J Acoust Soc Am 2003;113(3):1706–1717.
15. Humes LE: Factors underlying the speech-recognition performance of elderly hearing-aid wearers. J Acoust Soc Am 2002;112(3, Pt 1):1112–1132.
16. Amlani AM, Punch JL, Ching TYC: Methods and applications of the audibility index in hearing aid selection and fitting. Trends Amplif 2002;6(3):81–129.
17. Mueller HG, Hall JW III: Audiologist's Desk Reference, Volume II: Audiologic Management, Rehabilitation, and Terminology. San Diego: Singular Publishing, 1998.
18. Rankovic CM: An application of the articulation index to hearing aid fitting. J Sp Hear Res 1991;34(2):391–402.
19. Ching TYC, Dillon H, Katsch R, Byrne D: Maximizing effective audibility in hearing aid fitting. Ear Hear 2001;22(3):212–224.
20. Gabrielsson A, Schenkman BN, Hagerman B: The effects of different frequency responses on sound quality judgments and speech intelligibility. J Sp Hear Res 1988;31(2):166–177.
21. Mueller HG, Killion MC: An easy method for calculating the articulation index. Hear J 1990;43(9):14–17.
22. Humes LE: Understanding the speech-understanding problems of the hearing impaired. JAAA 1991;2(2):59–69.
23. Pavlovic C: Speech recognition and five articulation indexes. Hear Instr 1991;42(9):20–24.

Section Description

The Articulation Index (AI) has been with us for many years. No, it has nothing to do with speech production—it's about hearing. For simplicity, think of it as an “audibility index.”

For many years the AI rarely found its way outside of research laboratories. This probably was because the calculations were more complicated than the busy clinician cared to tackle. Then, in 1988, Chas Pavlovic introduced a clinically friendly version of the AI—equal weighting for 500, 1000, 2000, and 4000 Hz. The math simply required adding together four key numbers and then dividing by 120. Not bad.

But soon, things got even simpler as the Mueller-Killion count-the-dot audiogram was introduced. Now, the ability to count to 100 was all that was required. This was soon followed by the Humes 33-bigger-dot version, which was followed by the Pavlovic 100-square version, which was followed by the Lundeen 100-dot modification of the Pavlovic squares. Soon, manufacturers of probe-mic equipment included automated AI calculations, which encouraged even greater use of audibility considerations in the selection, fitting, and adjustment of hearing aids.

But then, in 1997, just when we all were getting comfortable with the AI, a new calculation method was introduced—the Speech Intelligibility Index or SII. This has led to questions such as: Did the SII replace the AI? How does the SII differ from the AI? Is it still okay to use the AI?

To help us make sense of all this, we've brought in a guest author who has used both the AI and the SII, in the clinic and in the research lab. Benjamin W.Y. Hornsby, PhD, is a research assistant professor at Vanderbilt University. Although Ben has logged many years as a clinician, these days he spends most of his time in the Dan Maddox Hearing Aid Research Lab. You're probably familiar with many of his recent publications. To assure you that Dr. Hornsby is not a single-minded kind of guy, prior to his interest in audiology he worked as a welder, rock climbing instructor, sign language interpreter, and junior high school science teacher.

After reading Ben's excellent review, I think you'll have a better understanding of the similarities and differences between the SII and the AI, and how these measures fit into your daily practice. And yes, “articulation” is about hearing, not speech production.

                © 2004 Lippincott Williams & Wilkins, Inc.