

Page Ten

Twenty years later: A NEW Count-The-Dots method

Killion, Mead C.; Mueller, H. Gustav

The Hearing Journal 63(1):p 10,12-14,16-17, January 2010. | DOI: 10.1097/01.HJ.0000366911.63043.16

In Brief

In 1990, The Hearing Journal published a quick and easy way for clinicians to calculate from a patient's audiogram what percentage of speech sounds the person could hear. The “Count-The-Dots” Method caught on quickly and is now used all over the U.S. and beyond. This month, the creators of the original “Count-The-Dots” Method are back with an updated version reflecting what has been learned in the past 20 years about the contribution to understanding of speech cues at 6000 Hz and above.

At a conference in the fall of 1989, I had the pleasure of listening to Mead Killion talk about the benefits of wide-dynamic range compression—a relatively new concept at the time. For you younger readers, Dr. Killion moved the WDRC notion forward and developed a circuit called the K-Amp, which pretty much changed the way we fit hearing aids.

During his talk, Mead got onto aided audibility, and to make his point he popped a blurry chart, with a lot of his own scribbling added, on the overhead projector. The chart, from a 1962 JASA article, showed a somewhat unusual speech spectrum, displayed in SPL, with the importance function of speech illustrated by the density of 200 dots. Interesting, but not very friendly for clinicians.

Later that evening, over beverages, we discussed that if the speech spectrum was corrected, and if everything was displayed in HL on a familiar audiogram, and if the dots were reduced to a manageable 100, it could turn into a pretty handy form. For some strange reason, maybe because Bobby McFerrin was whistling Don't Worry, Be Happy in the background, we decided to tackle the project. A few months later, the Count-The-Dots audiogram was published in The Hearing Journal. Now, after 20 years, we're back.

Dr. Killion is Chief Technology Officer and President of Etymotic Research, Inc. He may be best known for developing insert earphones, Musicians Earplugs, or the K-AMP circuit and Class D amplifier. But he is most proud of his musical accomplishments on piano, violin, viola, singing barbershop, and directing a church choir. Whether it is in marathons (32 to date), lecturing around the world, or innovating in the lab, Mead is always running. He has about as many patents applied for and pending as the nearly 70 he already holds.

Two things currently capturing his attention are brain plasticity and electronic earplugs that preserve the hearing of deployed soldiers who are reluctant to wear conventional hearing protection. Mead also is suspected of masquerading as the wise Dr. Abonso (for Automatic Brain-Operated Noise Suppressor Option) when dispelling myths about hearing aid bandwidth, fidelity, and performance.

There aren't too many myths about audibility, but it's pretty darn important when you fit hearing aids. We believe thinking about dots helps get it right.

Gus Mueller

Page Ten Editor

1 So the two of you are the “dots guys”! I've used that audiogram quite a bit, but I never really knew where it came from.

The dots guys we are, and we're here to party, as our audiogram is celebrating its 20th birthday! Over the years, we've been pleased to see widespread clinical use of the Mueller and Killion Count-The-Dots “Easy Method” for calculating the Articulation Index, which was first published here in The Hearing Journal (see Figure 1).

Figure 1:
Original Mueller and Killion Count-The-Dots audiogram form published in The Hearing Journal in 1990.1

In our original article we specifically stated that the form was not copyrighted, as we wanted to encourage people to try it out. That seemed to work. Surprisingly, we've observed that our 100 dots have even found their way into several research studies and peer-reviewed publications, although that wasn't really the intended use.

2 Wasn't it just meant to be a simple, handy tool for clinicians?

That's what we had in mind. When we first published the original audiogram, we discussed the primary purpose of its development: to provide a clinically friendly, easy-to-use method to make routinely measured hearing thresholds meaningful regarding the understanding of speech, including understanding speech in background noise. By meaningful, we meant for audiologists, hearing instrument specialists, allied professionals, and, most importantly, for patients. We believed there were many uses of this handy tool, and many of these applications were summarized in Mueller and Hall.2

The count-the-dots audiogram is now commonly used to explain to patients, for example, why they can hear reasonably well in quiet but their severe loss of audibility for high-frequency sounds makes it difficult to understand speech in noisy surroundings: In noise, they can't hear many of the low-frequency speech sounds they normally depend upon, because those cues are covered up by (technically, masked by) noise, and they can't make up the difference with high-frequency speech cues because they can't hear those either.

3 And you can also use the “dots” with hearing aid fittings, right?

We certainly think so. They will provide you with a good estimate of the audibility of speech inputs, which of course then leads to the potential benefits of hearing aid use. Some of today's probe-mic systems calculate unaided and aided AIs based on our count-the-dots audiogram. For an estimate of hearing aid benefit for speech in quiet, a carefully conducted aided sound-field audiogram, followed by simply counting the dots that are audible, still works quite effectively.
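
For readers who like to see the bookkeeping spelled out, here is a minimal sketch, in Python, of the counting step. The dot positions in it are hypothetical placeholders (the real form has 100 dots at the specific frequency/HL positions printed on it in Figure 2); the rule is simply that a dot counts as audible when the threshold at its frequency is at or below the dot's level.

```python
# Minimal sketch of the counting step behind a Count-The-Dots AI estimate.
# Each dot on the form sits at a (frequency, HL) position; a dot counts as
# audible when the patient's threshold at that frequency is at or below
# (i.e., better than) the dot's level. These dot positions are hypothetical
# placeholders -- the real positions are the 100 printed on the Figure 2 form.

HYPOTHETICAL_DOTS = [                      # (frequency in Hz, level in dB HL)
    (250, 30), (250, 40),
    (500, 25), (500, 35), (500, 45),
    (1000, 25), (1000, 35), (1000, 45),
    (2000, 30), (2000, 40), (2000, 50),
    (4000, 35), (4000, 45),
    (6000, 40),
    (8000, 45),
]

def audible_dots(thresholds_hl, dots=HYPOTHETICAL_DOTS):
    """Count the dots lying at or above the audiogram thresholds.

    thresholds_hl maps audiometric frequency (Hz) -> threshold (dB HL).
    On the real 100-dot form, this count is the AI (in percent) directly.
    """
    return sum(
        1
        for freq, dot_level in dots
        if freq in thresholds_hl and thresholds_hl[freq] <= dot_level
    )

# Example: a gradually sloping high-frequency loss
audiogram = {250: 15, 500: 20, 1000: 30, 2000: 45, 4000: 60, 6000: 65, 8000: 70}
print(audible_dots(audiogram), "of", len(HYPOTHETICAL_DOTS), "dots audible")
```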

4 The more aided dots the better, right?

As long as they are speech dots and not noise dots, fall within the patient's residual dynamic range, and are not distorted, probably so. Much of the research with hearing aids and cochlear implant processors suggests that if audibility isn't everything, it is at least the most important thing. David Pascoe's quote from 30 years ago still applies: “Although it is true that mere detection of a sound does not ensure its recognition, it is even more true that without detection the probabilities of correct identification are greatly diminished.”3

5 What about patients with cochlear dead regions? Do you have a companion “dead dot chart”?

You might be joking, but we've given it serious thought. If your brain can't “use” the dot, then there is no reason to count it. If you indeed knew that audibility for a certain frequency region did not contribute to speech understanding, then you could correct for this, just as we discussed earlier regarding low-frequency masking.

Perhaps the easiest quick check for a dead region is the oldest: Ask what the person hears. If the patient has a downward sloping hearing loss, a report of “tone” at low frequencies and “a screech or buzz or hum” at 4 kHz suggests such a region, especially if it persists at 20-30 dB above threshold. Those dots wouldn't count.
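
Continuing the counting sketch above (and reusing its hypothetical audible_dots and HYPOTHETICAL_DOTS), discounting a suspected dead region is just a filter over the same dot list; the 3000-5000 Hz bounds below are an arbitrary illustration, not a diagnostic rule.

```python
def audible_dots_excluding_dead_region(thresholds_hl, dead_lo_hz, dead_hi_hz,
                                       dots=HYPOTHETICAL_DOTS):
    """Count audible dots, skipping any that fall in a suspected dead region."""
    live = [(f, hl) for f, hl in dots if not (dead_lo_hz <= f <= dead_hi_hz)]
    return audible_dots(thresholds_hl, live)

# If the region around 4 kHz is suspected dead, its dots simply don't count:
print(audible_dots_excluding_dead_region(audiogram, 3000, 5000))
```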

6 So after 20 years, you're both still happy with your original “dots” audiogram?

For the most part, yes. But, while it seems to be popular, the one drawback to our original Easy Method is that it gives virtually no importance to speech cues at 6000 and 8000 Hz. There has been considerable research in the past decade showing the importance of these higher frequencies for speech understanding. Much of this has been conducted by Pat Stelmachowicz and colleagues.4-8 These researchers have reported that the frequency range up to 8000 to 9000 Hz can be extremely important for recognizing the inflectional morphemes /s/ and /z/, especially for female talkers. This has the greatest significance for hearing-impaired children, as these sounds are important for speech and language development.

7 So does the new version take this into account?

Yes. Fortunately, the research behind the new ANSI standard S3.5-1997 (R2007), “Methods for Calculation of the Speech Intelligibility Index,”9 produced a new “SII” importance function that gives more weight to higher frequencies (see Ben Hornsby's Page Ten review of the SII10). So, what we've done is construct a new version of the count-the-dots audiogram, based on the SII importance function, which includes weightings for 6000 and 8000 Hz.

8 What else is changing with your new audiogram?

Not much. We will continue to use the term “audible dots” as convenient shorthand for “audible speech cues weighted by the importance function at each frequency.” Thus “an AI of 65%” can be written (and thought of) more picturesquely as “65 audible dots.”

In developing the revised method, we retained the same basic speech spectrum shape employed in the original method, so that the only noticeable change for most users would be the increased importance given to high frequencies. In recognition of our use of the new SII one-third-octave importance function, and to avoid confusion, we are calling the present version (formally) the SII-based Method for Estimating the Articulation Index.

9 Let me interrupt for a moment. I was once told that the “A” in your “AI” was for Audibility, not Articulation.

Okay, you caught us on that one. Back in 1993, we published an article here in HJ, along with co-authors Chas Pavlovic and Larry Humes, titled “A is for Audibility.”11 We thought it was a reasonable idea, as audibility seems related more to hearing, while “articulation” seems more related to speech production. But the audibility thing never really caught on, so we decided to retain the term Articulation Index, both for its familiarity and in recognition of Fletcher's fundamental role in developing a theory that has survived nearly untouched for 88 years:12,13 The percentage of syllables, words, and sentences that can be correctly understood over a transmission system can be predicted from the audibility of the various cues that are important to speech. Our new SII Count-The-Dots Audiogram is shown in Figure 2.

Figure 2:
The new Killion and Mueller SII Count-The-Dots audiogram for estimating the articulation index. The distribution of the 100 dots represents a speech level of 60 dB SPL (~45 dB HL).

10 Okay, kudos to Fletcher and that all makes sense. Is your new audiogram copyrighted?

Like the first version, it is not. Run a few photocopies and try it out tomorrow morning if you like!

11 Other than the extended high frequencies, is the speech spectrum used in the new audiogram the same?

There are very minor changes. We retained the 60-dB-SPL equivalence of the 100 dots used in the earlier version. Margo Skinner recommended that all conversational speech tests be conducted at 60 dB SPL, 5 dB quieter than the generally accepted average for conversational speech, as a better measure of the listener's ability in everyday life.14 We agree.

Observe, however, that the new audiogram has 11 dots above 4000 Hz, whereas only 6 dots were above 4000 Hz in the older version. There is also an additional dot at 4000 Hz in the new version. Because we still have only 100 dots to work with, these extra high-frequency dots were carefully lifted from the frequencies below 4000 Hz.

12 Does that mean I'll be seeing different AI scores with my patients?

Because we removed a few dots from the lower frequencies, the typical patient with a gradually downward sloping hearing loss might have an AI score 2%-3% lower with the new SII version than with the older one. And, to state the obvious, when the new audiogram is used for hearing aid fittings, the hearing aids need to have substantial gain above 4000 Hz or the patient won't get credit for the new dots in this area.

I know that the AI score somehow relates to intelligibility, but I could use a refresher on how to get from dots to percent correct.

No problem. Conveniently, the clinical relevance of the audible dots hasn't changed since our discussion of 20 years ago, and we certainly can review a few key points. To do this, however, we need to introduce another chart, shown in Figure 3. This chart is similar to what we published back in 1990, and it shows the relationship between the AI and the percentage correct for digits, spondees, sentences, words in sentences, and isolated words. The sets of dots for our old and new audiogram forms (Figures 1 and 2) are close enough at most frequencies that we chose not to refine previous data to produce a revised version of Figure 3.* A common misunderstanding is that there is a one-to-one relationship between audible dots and speech intelligibility—e.g., that 80 audible dots means 80% intelligibility. As you can see, this isn't how it works.

Figure 3:
Approximate relationship between AI or SII and intelligibility of digits, spondees, sentences, words in sentences, and isolated words. The Y-axis is predicted percent correct, the lower X-axis is the AI or SII percentage, and the upper X-axis is the signal-to-noise ratio (SNR), normalized so that the QuickSIN anchor point of 50% correct IEEE words in sentences falls at a 2-dB SNR re: four-talker babble.

I've seen that chart (or a similar version) before, but to be honest I never quite understood how to use it.

We were getting to that. To continue: in Figure 3, the x-axis is the number of dots that are audible (e.g., 70 audible dots equals an SII of 70%) and the y-axis is the predicted percent correct score for a given speech material. To use an example similar to the early telephone experiments, if a telephone filters out all sounds above approximately 1500 Hz, then only half the speech cues will be available to the listener and the AI (SII) would equal 50%.

The relationship between AI and intelligibility is not linear, however (see Figure 3), so that a listener missing 50% of the speech cues will miss only about 30% of the words in a NU-6 word list, and 5% of sentences. The dramatic difference is a result of the brain's remarkable ability to fill in the gaps.**

15 Where did you pull out the 30% for words and 5% for sentences?

We'll go through it step by step. First, locate 50% on the bottom x-axis. That's the percentage of audible dots. Then draw an imaginary line straight up until it intersects the curve labeled “NU 6 Words.” At that point, draw a straight horizontal line to the left until it intersects the y-axis. You'll end up with roughly a 70% correct value, which is why we said the person would be missing about 30%. Do the same for sentences, except use the intelligibility prediction curve labeled “Sentences,” and you'll see where our “missing 5%” number came from.
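
For readers who prefer arithmetic to a ruler, here is a rough Python stand-in for that chart-reading. The only anchor points used are values quoted in this article (AI 50% → ~70% NU-6 words and ~95% sentences; AI 78% → ~92% NU-6 words; AI 26% → ~90% sentences) plus assumed endpoints at 0 and 100%; straight-line interpolation between them is our crude substitute for the published Figure 3 curves, not the curves themselves.

```python
# Crude stand-in for the Figure 3 transfer curves: straight-line
# interpolation between the few (AI, proportion-correct) anchor points
# quoted in this article, plus assumed endpoints. For real work, read
# the published chart.

NU6_ANCHORS = [(0.00, 0.00), (0.50, 0.70), (0.78, 0.92), (1.00, 1.00)]
SENTENCE_ANCHORS = [(0.00, 0.00), (0.26, 0.90), (0.50, 0.95), (1.00, 1.00)]

def percent_correct(ai, anchors):
    """Interpolate the predicted proportion correct for an AI between 0 and 1."""
    ai = max(0.0, min(1.0, ai))
    for (x0, y0), (x1, y1) in zip(anchors, anchors[1:]):
        if ai <= x1:
            return y0 + (y1 - y0) * (ai - x0) / (x1 - x0)
    return anchors[-1][1]

for ai in (0.26, 0.50, 0.78):
    print(f"AI {ai:.0%}: ~{percent_correct(ai, NU6_ANCHORS):.0%} NU-6 words, "
          f"~{percent_correct(ai, SENTENCE_ANCHORS):.0%} sentences")
```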

16 Thanks, I think I've finally got it. Do you have another example?

Sure. Just for fun, this time we'll include some “masked dots” in the example, which is what typically happens in real-world listening. In this example you can use the chart in Figure 3 to estimate the effect of masking and the limited bandwidth of the Bluetooth circuit. For the background noise masking, we'll use a 50-dB(A) typical noise spectrum, which gives a masked threshold close to 35 dB HL at audiometric frequencies from 500 to 2000 Hz.15,16

As you might know, Bluetooth has a 300- to 3500-Hz transmission bandwidth, which by itself would reduce the AI in quiet to 78%; at that AI, a normal-hearing listener could still repeat 92% of NU-6 words correctly. In the real world, however, background noise often masks 60% of the dots, leaving only 40 audible. After also passing through the limited bandwidth of a Bluetooth device, only 31 dots are left. Table 1 gives the AI percentage and the expected NU-6 scores for the four combinations, based on Figure 3.

Table 1:
A comparison of the predicted word recognition for a full-band versus a Bluetooth transmission of speech in ~50-dB(A) background noise.
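
Reusing the percent_correct sketch above, the Table 1 bookkeeping looks like this. Only the 78-dot/92% pair is stated in the text; the other predicted scores are whatever our crude stand-in curve returns, so treat them as illustrations of the method rather than the published Table 1 values.

```python
# The four conditions from the Bluetooth example. The dot counts are the
# ones given in the text; the predicted NU-6 scores come from the crude
# NU6_ANCHORS interpolation above, not from the published Table 1.
conditions = {
    "Full band, quiet":    100,
    "Bluetooth, quiet":     78,  # 300-3500 Hz bandwidth alone
    "Full band, in noise":  40,  # noise masks ~60 of the 100 dots
    "Bluetooth, in noise":  31,
}
for name, dots in conditions.items():
    ai = dots / 100
    print(f"{name:20s} AI {dots:3d}% -> ~{percent_correct(ai, NU6_ANCHORS):.0%} NU-6")
```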

17 I'm looking at the different curves in Figure 3. Why does it require considerably fewer dots to understand spondees than single-syllable words?

The simple answer is that there are fewer spondees than single-syllable words. Similarly, words in sentences are easier to guess correctly because the sentence context restricts the possible choices from the one million words of English to the far smaller number of words likely to fit in the blank of a given sentence. For example, while “hear” and “beer” are common English words that sound very much alike (and both are uttered frequently by audiologists), it is unlikely that the word “hear” would be substituted for “beer” in the classic IEEE sentence: “The stale smell of old beer lingers.”

Most young readers probably have not seen the germinal data of Miller et al.,17 which make this point clearly. We have displayed one of their charts in Figure 4. As you can see, they found that the signal-to-noise ratio required to identify words 50% of the time could be more than 6 dB poorer when the same words appeared in sentences. While many things have changed since 1951, this concept hasn't.

Figure 4:
Two articulation curves for a single list of words. In one case the words were presented singly in a list; in the other case the words appeared as parts of sentences (from Miller et al.17).

18 You're right, I somehow missed that article. So are you saying that I can use this information to counsel my patients?

Yes and no. These are data from normal-hearing individuals. Your patients will often have difficulties much worse than these data suggest. This could be related to their reduced peripheral ability to distinguish and identify speech in noise, or to the central processing and cognitive abilities needed to extract meaningful information from sentences, both in quiet and when environmental noise is present.

Much of this cannot be predicted from individual pure-tone audiograms or traditional word-recognition testing.18-21 So yes, you can use these data as a starting point, but we recommend adding some supplemental speech-in-noise testing.

19 My questions are winding down. Any final comments on your new SII Dots Audiogram?

Well, as we stated earlier, we did some careful tweaking to our old audiogram so we could place greater importance on the speech cues between 4000 and 8000 Hz, in keeping with recent research showing how important this region is, especially for children learning speech.

A couple of final things are worth emphasizing: Only 26% of the speech cues are required by normal-hearing subjects to carry on a conversation. This degree of audibility should result in about 90% sentence understanding at a typical social gathering. But someone with an 8-dB SNR loss (e.g., as measured by the QuickSIN or the HINT) needs an 8-dB greater SNR to carry on a conversation in the same listening situation.*

Thus it is important to remember that the theoretical relationship between AI and intelligibility applies only to those with normal hearing. Persons with SNR loss, even when wearing appropriately fitted hearing aids, can be expected to do worse without the assistance of directional-microphone technology, FM systems, Companion Mics™, or other assistive listening technology that improves the signal-to-noise ratio.
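
As a back-of-envelope check on the 8-dB figure above: the first footnote spreads the full 100 dots across roughly 30 dB of SNR, so each dB is worth about 3.3 dots, and an 8-dB improvement recovers roughly 27 dots, almost exactly a doubling of the 26-dot conversational minimum described in the last footnote.

```python
# Back-of-envelope: Fletcher's 30-dB rule (see the first footnote) spreads
# all 100 dots across ~30 dB of SNR, i.e., about 3.3 dots per dB.
DOTS_PER_DB = 100 / 30
recovered = 8 * DOTS_PER_DB  # dots regained by an 8-dB SNR improvement
print(f"An 8-dB SNR improvement is worth ~{recovered:.0f} dots, "
      f"enough to double the 26 dots a conversation requires")
```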

20 So, do you think your new Count-The-Dots audiogram will have the same 20-year lifespan as your 1990 edition?

Well, we do have a couple of factors on our side. We doubt that the average speech spectrum or the importance of audibility will be changing in the near future. But rather than speculate on our new audiogram's popularity, let's just close by saying that we'll both be quite pleased if we're around in 2030 to write about it!

Acknowledgments

Ben Hornsby of Vanderbilt University reviewed an earlier version of our new SII Count-The-100-Dots audiogram, which was first presented at the 2004 Jackson Hole Audiologic Rendezvous. He discovered 102 dots. His discussion on the most appropriate dots to remove was both mathematically erudite and welcome.

REFERENCES

1. Mueller HG, Killion MC: An easy method for calculating the articulation index. Hear J 1990;43(9):14-17.
2. Mueller HG, Hall JW: Audiologists' Desk Reference, Volume II: Audiologic Management, Rehabilitation, and Terminology. San Diego: Singular Publishing, 1998:184-185.
3. Pascoe D: Clinical implications of nonverbal methods of hearing aid selection and fitting. Sem Hear 1980;1:217-229.
4. Stelmachowicz PG, Nishi K, Choi S, et al.: Effects of stimulus bandwidth on the imitation of English fricatives by normal-hearing children. J Sp Lang Hear Res 2008;51(5):1369-1380.
5. Stelmachowicz PG, Lewis DE, Choi S, et al.: Effect of stimulus bandwidth on auditory skills in normal-hearing and hearing-impaired children. Ear Hear 2007;28(4):483-494.
6. Pittman AL, Lewis DE, Hoover BM, Stelmachowicz PG: Rapid word-learning in normal-hearing and hearing-impaired children: Effects of age, receptive vocabulary, and high-frequency amplification. Ear Hear 2005;26(6):619-629.
7. Stelmachowicz PG, Pittman AL, Hoover BM, et al.: The importance of high-frequency audibility in the speech and language development of children with hearing loss. Arch Otolaryngol Head Neck Surg 2004;130(5):556-562.
8. Stelmachowicz PG, Pittman AL, Hoover BM, Lewis DE: Aided perception of /s/ and /z/ by hearing-impaired children. Ear Hear 2002;23(4):316-324.
9. ANSI: ANSI S3.5-1997 (R2007). American National Standard Methods for Calculation of the Speech Intelligibility Index. New York: ANSI, 1997.
10. Hornsby BW: The speech intelligibility index: What is it and what is it good for? Hear J 2004;57(10):10-17.
11. Killion MC, Mueller HG, Pavlovic C, Humes L: A is for Audibility. Hear J 1993;46(4):29-32.
12. Fletcher H: An empirical theory of telephone quality. AT&T Internal Memorandum 101(6), 1921. See also Allen JB (1996) for a historical account.
13. Allen JB: Harvey Fletcher's role in the creation of communication acoustics. J Acoust Soc Am 1996;99(4):1825-1839.
14. Skinner MW, Holden LK, Holden TA, et al.: Speech recognition at simulated soft, conversational, and raised-to-loud vocal efforts by adults with cochlear implants. J Acoust Soc Am 1997;101:3766-3782.
15. Botsford JH: How to estimate dBA reduction of ear protectors. Sound Vib 1973;7(1):32-33.
16. Killion MC, Studebaker GA: A-weighted equivalents of permissible ambient noise during audiometric testing. J Acoust Soc Am 1978;63(5):1633-1635.
17. Miller GA, Heise GA, Lichten W: The intelligibility of speech as a function of the context of the test materials. J Exp Psychol 1951;41(5):329-335.
18. Killion MC, Christensen LA: The case of the missing dots: AI and SNR loss. Hear J 1998;51(5):32-47.
19. Killion MC, Niquette PA: What can the pure-tone audiogram tell us about a patient's SNR loss? Hear J 2000;53(3):46-53.
20. Killion MC: New thinking on hearing in noise: A generalized Articulation Index. Paper based on a presentation at CID Conference on New Frontiers in the Amelioration of Hearing Loss. Sem Hear 2002;23(1):57-75.
21. Killion MC, Niquette PA, Gudmundsen GI, et al.: Development of a quick speech-in-noise test for measuring signal-to-noise ratio loss in normal-hearing and hearing-impaired listeners. J Acoust Soc Am 2004;116(4):2395-2405.

*Killion: When people look at this chart they often look at the signal-to-noise (SNR) numbers listed on the upper x-axis, and ask where they came from and how they relate to the AI. They came from a couple of sources. During the development of the QuickSIN test we found that normal-hearing subjects obtained 50% correct, on average, at a 2-dB SNR, which set the location of the 2-dB point in Figure 3. Second, the range between “hearing no cues” and “hearing all cues” was described as 30 dB by Fletcher and was reconfirmed in the latest “SII” standard mentioned above. In addition, data from other basic research were used to show the relationship between SNR, AI, and word recognition. These were consistent with unpublished data of Tom Tillman on NU-6 scores vs. SNR. It is therefore a reasonable, but un-peer-reviewed, SNR scale. See Killion and Christensen18 for further discussion.

    **Killion: Some years ago, while on St. Croix, I contacted a fellow ham radio operator in California, about 3300 miles away. By checking with a telephone (!), we could confirm that we were talking to each other on the same frequency, but the static was so bad it was nearly impossible to carry on a conversation. We switched to Morse code on the same frequency, and it was crystal clear and error free (albeit much slower). Subjectively, the SNR was improved about 20 dB. We didn't think to check, but I suspect we could have whistled to each other on the voice transmissions and obtained a similar improvement. (Next time you can't understand a cell phone conversation, try whistling: Two whistles for yes, one for no.)

    *Killion: It is beyond the scope of this paper to justify, but we believe that an 8-dB SNR loss happens when the cochlea or eighth nerve is damaged so badly that the brain can use only one of every two audible dots. Fortunately, an 8-dB SNR improvement under the situation described above will double the number of audible dots, so the listener's brain will still receive the 26 dots required to carry on a conversation.

    Copyright © 2010 Wolters Kluwer Health, Inc. All rights reserved.