Software seeks to provide seamless adaptation to changing soundscapes

Nyffeler, Myriel

doi: 10.1097/01.HJ.0000361849.89089.80

Myriel Nyffeler, PhD, is Head of Field Studies at Phonak AG. She holds a doctorate in neurobiology from the Swiss Federal Institute of Technology. Readers may contact Dr. Nyffeler at Myriel.Nyffeler@phonak.com or at Phonak AG, Laubisrütistrasse 28, CH-8712 Stäfa, Switzerland.

Acoustic environments are built on countless, unique combinations of acoustics, sound source positions, types of interference, and signal properties. Therefore, a sophisticated high-definition classifier that recognizes sound environment nuances such as pitch, intensity, and harmony, among others, is needed for the system to adapt accordingly.1 Whereas older instruments selected one distinct program at a time, newer hearing instruments such as Phonak's Exélia, built on the new CORE platform, use an elaborate mix of base programs.2

SoundFlow, a function contained in the CORE platform, creates a composition that is precisely customized to the hearing environment by blending the base programs to improve sound and loudness perception.3,4 As the environment changes, this function automatically and continually readjusts the blend. Because this is a smooth, continuous process without abrupt switches between programs, transitions between settings are less audible to users. The main goals of the studies discussed in this article were to evaluate whether the automatic adaptation of SoundFlow was noticeable to the subjects and whether it improved hearing performance as well as speech intelligibility.
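To make the blending idea concrete, the following Python sketch shows one way a probability-weighted mix of base programs could be computed and then approached gradually over successive audio frames, so that no abrupt switch occurs. The program names, parameter values, and smoothing constant are illustrative assumptions, not Phonak's actual settings.

```python
import numpy as np

# Hypothetical per-program parameters: per-band gain (dB), noise-reduction
# strength (0..1), and microphone directivity (0 = omni, 1 = fully directional).
BASE_PROGRAMS = {
    "speech_in_quiet": {"gain_db": np.array([20., 25., 30., 25.]), "nr": 0.1, "dir": 0.0},
    "speech_in_noise": {"gain_db": np.array([18., 24., 32., 28.]), "nr": 0.6, "dir": 1.0},
    "music":           {"gain_db": np.array([22., 22., 24., 22.]), "nr": 0.0, "dir": 0.0},
    "noise":           {"gain_db": np.array([15., 18., 20., 18.]), "nr": 0.9, "dir": 0.5},
}

def blend(probs):
    """Weight every parameter by the classifier's class probabilities."""
    return {key: sum(p * BASE_PROGRAMS[name][key] for name, p in probs.items())
            for key in ("gain_db", "nr", "dir")}

def step_towards(current, target, alpha=0.05):
    """Move only a small fraction per frame so transitions stay inaudible."""
    return {key: current[key] + alpha * (target[key] - current[key]) for key in target}

# Example: the classifier sees mostly speech in noise with some pure noise;
# the settings drift smoothly from 'speech in quiet' toward the new blend.
target_probs = {"speech_in_quiet": 0.05, "speech_in_noise": 0.65, "music": 0.0, "noise": 0.30}
settings = blend({"speech_in_quiet": 1.0, "speech_in_noise": 0.0, "music": 0.0, "noise": 0.0})
for _ in range(20):  # successive audio frames
    settings = step_towards(settings, blend(target_probs))
```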

STUDY DESIGN

In Exélia, SoundFlow is set with four defined hearing programs: speech in quiet, speech in noise, music, and noise. Unlike older processing systems that made clear-cut adjustments and applied them to a specific category, this function blends the four programs based on classified probabilities. Thus, frequency response as well as noise reduction and microphone characteristics vary according to these probabilities. A recent study found that SoundFlow remains in a blended mode 67% of the time;5 the rest of the time it remains in one of the four fixed programs.
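As a rough illustration of how the proportion of time spent in a blended rather than fixed state could be estimated from a stream of classifier outputs, consider this sketch; the dominance threshold and example values are assumptions for illustration only, not the published analysis.

```python
import numpy as np

def fraction_blended(prob_frames: np.ndarray, dominance: float = 0.95) -> float:
    """prob_frames: shape (n_frames, 4); each row holds the four class probabilities.
    A frame counts as 'fixed' when a single class clearly dominates."""
    fixed = prob_frames.max(axis=1) >= dominance
    return 1.0 - fixed.mean()

# Example: three frames, two blended and one effectively fixed to 'noise'.
frames = np.array([
    [0.05, 0.65, 0.00, 0.30],
    [0.00, 0.40, 0.00, 0.60],
    [0.00, 0.02, 0.00, 0.98],
])
print(f"blended fraction: {fraction_blended(frames):.0%}")  # -> 67%
```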

For this study, 15 participants with sensorineural hearing loss were recruited (see Figure 1 for audiograms). Their ages ranged from 20 to 54 years, with a mean of 32.5. After an audiological diagnostic assessment, all subjects were binaurally fitted with the new hearing instrument (HI), Exélia, with SoundFlow.

Figure 1

The HIs, with and without SoundFlow, were binaurally fitted on the 15 subjects using the DSL [i/o] formula for precalculation. To fit all the instruments identically, their automatic functions were deactivated and wide-band white noise was presented as the input signal. The output signals of the instruments underwent a Fast Fourier Transform (FFT) analysis, and the resulting frequency responses were compared. If the differences were too great, additional fine-tuning was done, followed by repeated verification.
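The matching check can be pictured with the following Python sketch, assuming the recorded white-noise outputs of two instruments are available as arrays: averaged magnitude spectra are compared in dB and flagged if they differ by more than a tolerance. The frame length and the 2-dB tolerance are illustrative assumptions, not the study's actual criteria.

```python
import numpy as np

def magnitude_response_db(signal: np.ndarray, nfft: int = 1024) -> np.ndarray:
    """Average the windowed magnitude spectrum over successive frames."""
    frames = [signal[i:i + nfft] for i in range(0, len(signal) - nfft, nfft)]
    spectra = [np.abs(np.fft.rfft(f * np.hanning(nfft))) for f in frames]
    return 20 * np.log10(np.mean(spectra, axis=0) + 1e-12)

def outputs_match(out_a: np.ndarray, out_b: np.ndarray, tol_db: float = 2.0) -> bool:
    """Compare two instruments' white-noise outputs; fine-tune and re-verify if False."""
    diff = magnitude_response_db(out_a) - magnitude_response_db(out_b)
    return float(np.max(np.abs(diff))) <= tol_db
```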

After all the instruments had been adjusted identically, their automatic features were reactivated according to the precalculation. Subjects were asked to wear the HIs in a situation in which two people were talking while cooking in a kitchen. In the background, a blower provided constant noise. When the blower was on and no one was speaking, the HI switched to “Noise”; when the two people were conversing, it switched to “Speech in noise.” The sound situations were recorded via the HI microphones.

Because the characteristics of the microphones differed in these two programs, a clear difference should have been audible, particularly in the loudness of the noise. To prepare them to detect automatic adaptations, subjects were trained to recognize the switches using different sound samples.

Sampled sound examples were then presented to the subjects via headphones so they could compare the two hearing programs for the perceptibility of the automatic adaptation between programs and for speech intelligibility. Matlab-implemented software was used to run a double-blinded test design.
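The double-blinded comparison could be organized along the lines of the sketch below; this is a hedged illustration, not the Matlab software actually used in the study. The assignment of the two processing conditions to the labels “A” and “B” is randomized on every trial and decoded only after the session; the play and ask_rating callbacks are hypothetical placeholders for the playback and response-collection steps.

```python
import csv
import random

SCALE = ["A much better than B", "A better than B", "A equal B",
         "B better than A", "B much better than A"]

def run_block(sound_examples, play, ask_rating, log_path="ratings.csv"):
    """Present each example twice (as 'A' then 'B') with a hidden condition mapping."""
    with open(log_path, "w", newline="") as fh:
        writer = csv.writer(fh)
        writer.writerow(["example", "condition_behind_A", "rating"])
        for example in sound_examples:
            # Randomize which processing hides behind label 'A' on every trial.
            a_is_autopilot = random.random() < 0.5
            order = ("autopilot", "soundflow") if a_is_autopilot else ("soundflow", "autopilot")
            play(example, order[0])     # presented to the subject as 'A'
            play(example, order[1])     # presented to the subject as 'B'
            rating = ask_rating(SCALE)  # subject may replay, then picks one label
            writer.writerow([example, order[0], rating])
```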

RESULTS

Perceptibility of automatic adaptations

To examine the automatic adaptation of SoundFlow to the environment, track 250 of the media database was chosen (Figure 2). It displays a situation in the kitchen where the blower provides the noise and a conversation between two people provides the speech in noise. Subjects were told to concentrate on perceived automatic adaptations between the two programs, “Noise” and “Speech in noise.”

With AutoPilot, the automatic program selection used in older Phonak products, the number of perceived automatic adaptations ranged from 11 to 51, with a median of 18, which matched the estimated maximum. With SoundFlow, it ranged from 0 to 46, with a median of 11. Overall, subjects detected fewer adaptation transitions with the new software than with AutoPilot, as shown in Figure 3.

Figure 2

Figure 3

Speech intelligibility

Figure 4 shows the course of AutoPilot during the test signals, with abrupt switches between the programs “Speech in noise” and “Noise.” The smoother curve shows the program probabilities of SoundFlow. When this curve reaches level 3 on the ordinate, SoundFlow is in the “Noise” program 100% of the time; at level 2, it is in the “Speech in noise” program 100% of the time. This suggests that AutoPilot and SoundFlow use comparable settings and assess the sound environment similarly, differing mainly in how they transition between programs.
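One plausible reading of that smoother curve, offered purely as an assumption about how Figure 4 is scaled, is that intermediate values between levels 2 and 3 reflect the probability-weighted mix of the two programs at that moment:

```python
def curve_level(p_speech_in_noise: float, p_noise: float) -> float:
    """Map the two class probabilities to a level between 2 ('Speech in noise')
    and 3 ('Noise'); an assumed interpretation of the Figure 4 ordinate."""
    total = p_speech_in_noise + p_noise
    if total == 0:
        raise ValueError("neither class is active")
    return (2 * p_speech_in_noise + 3 * p_noise) / total

print(curve_level(0.25, 0.75))  # 2.75 -> mostly, but not entirely, 'Noise'
```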

Figure 4

To compare AutoPilot (A) with SoundFlow (B) for speech intelligibility, subjects rated intelligibility in different sound situations with each of the two programs. To provide an equal basis for evaluation, subjects were trained prior to testing, but different sound examples were used in the test itself.

During the test, subjects were able to listen to the sound situations several times. With the aid of a scroll bar, they could rate intelligibility as “A much better than B,” “A better than B,” “A equal B,” “B better than A,” or “B much better than A.” Each subject rated four test series of 14 sound examples each. Figure 5 shows that speech intelligibility was noticeably better with SoundFlow than with AutoPilot. However, in 35 of the presentations there was no automatic adaptation between the programs.

Figure 5

The evaluation suggests that speech intelligibility depends strongly on the individual sound example. Nevertheless, in 25 of 28 cases speech intelligibility with SoundFlow was rated “better than” or “as good as” with AutoPilot.
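As an illustration of how such per-example outcomes might be tallied, the sketch below assumes each rating is coded from -2 (“A much better than B”) to +2 (“B much better than A”) and counts the sound examples whose median rating favors SoundFlow or is neutral. The coding and aggregation rule are assumptions, not the study's published analysis.

```python
from collections import defaultdict
from statistics import median

def summarize(ratings):
    """ratings: iterable of (sound_example, score) pairs pooled across subjects."""
    by_example = defaultdict(list)
    for example, score in ratings:
        by_example[example].append(score)
    # An example counts for SoundFlow when its median rating is >= 0,
    # i.e. 'better than' or 'as good as' AutoPilot.
    wins_or_ties = sum(1 for scores in by_example.values() if median(scores) >= 0)
    return wins_or_ties, len(by_example)

# Example: two sound examples, three subjects each.
data = [("ex1", 1), ("ex1", 2), ("ex1", 0), ("ex2", -1), ("ex2", 0), ("ex2", -2)]
print(summarize(data))  # -> (1, 2): SoundFlow rated at least as good for 1 of 2 examples
```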

Questionnaires

Results of the subjective questionnaires showed higher-than-average satisfaction with the automatic function. The only weakness reported was conversation with several people in a noisy environment. Overall, 65% of subjects rated the amount of automatic adaptation as correct, and 88% felt that the blended adjustment to differing sound situations markedly improved speech perception in noise. Perceived adaptation transitions decreased with SoundFlow because adaptations no longer occurred abruptly.

CONCLUSION

The studies described here show that SoundFlow adapts specifically to sound environments and takes individual user preferences into consideration by blending base programs to create a specific program for each situation as the environment changes. Because each base program is easily adjustable in the programming software, further individualization to the user's listening priorities and preferences may improve sound and loudness perception. The subjective questionnaire results also support the ability of SoundFlow to enable the hearing instrument to adapt to changing sound environments.

REFERENCES

1. Büchler M: How good are automatic program selection features? A look at the usefulness and acceptance of an automatic program selection mode. Hear Rev 2001;8(9):50–54,84.
2. Büchler M: Algorithms for Sound Classification in Hearing Instruments. ETH Zurich, Dissertation No. 14498, 2002.
3. Olson L, Ioannou M, Trine TD: Appraising an automatic directional device in real-world environments. Hear J 2004;57(6):32–38.
4. Kühnel V, Checkley PC: Die Vorteile eines adaptiven Multi-Mikrofon-Systems. Phonak Focus 2000;26.
5. Weinmann K: Asymmetries in Switching. Stäfa, Switzerland, Phonak AG, 2006.
Copyright © 2009 Wolters Kluwer Health, Inc. All rights reserved.