

New algorithm automatically adjusts directional system for special situations

Chalupper, Josef; Wu, Yu-Hsiang; Weber, Jennifer

doi: 10.1097/01.HJ.0000393211.70569.5c

In Brief

Directional-microphone technology has been used in hearing instruments since the late 1960s, and has been shown to improve speech understanding in background noise (e.g., see evidence-based review by Bentler1). For many years, this technology was considered a “special feature” and was available only in select models. All this has changed in the last 15–20 years, and today manufacturers offer directional technology in most of their hearing instruments.

In modern instruments, the directional effect usually is accomplished using two omnidirectional microphones, an approach Siemens introduced with its dual-directional microphone (“TwinMic”) in 1997. Research with this new technology produced encouraging findings.2,3 In 2002, Siemens was the first to add automatic-adaptive functionality to the polar patterns of directional microphones.4–6 It was “automatic” in that, based on the results of the situation-detection system’s analysis, the algorithm switched from omnidirectional to directional or back again. It was “adaptive” in that the directivity was focused to the front, but the null of the polar pattern could be steered toward the loudest sound from the rear hemisphere, allowing maximum attenuation of background noise in that general region. Or, if a diffuse noise field was detected, the adaptive algorithm would select the polar pattern that provided the best directivity.
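The dual-omni principle can be illustrated with a toy delay-and-subtract model. The sketch below is not Siemens’ implementation; the port spacing, analysis frequency, and function names are assumptions for illustration. Delaying the rear microphone by the acoustic travel time between the ports yields a cardioid with its null at 180°, and flipping the sign of the internal delay (in practice, delaying the front port instead) moves the null to 0°.

```python
import numpy as np

C = 343.0    # speed of sound (m/s)
D = 0.012    # assumed port spacing (m)
F = 2000.0   # assumed analysis frequency (Hz)

def response(theta_deg, tau):
    """Magnitude response of a delay-and-subtract pair of omni microphones.

    theta_deg: arrival angle (0 deg = front); tau: internal delay (s)
    applied to the rear microphone before subtraction.
    """
    w = 2.0 * np.pi * F
    ext = D * np.cos(np.radians(theta_deg)) / C   # external travel delay
    return abs(1.0 - np.exp(-1j * w * (tau + ext)))

tau = D / C   # internal delay equal to the port travel time -> cardioid
# Null behind (180 deg), sensitivity preserved in front (0 deg):
rear, front = response(180.0, tau), response(0.0, tau)
```

With `tau = D / C` the internal and external delays cancel exactly for sound arriving from 180°, producing the rear null of a cardioid; `tau = -D / C` gives the mirror-image pattern with its null at 0°.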

This adaptive functionality was expanded to a multi-channel function in 2004; that is, the direction of maximum attenuation could be varied for different frequency regions. While this meant that more noise sources could be attenuated, the directivity was still adjusted to provide maximum gain for signals from the front.

While multi-channel automatic/adaptive directional hearing instruments are highly sophisticated in terms of improving speech intelligibility in noisy situations, this technology still has its limitations. The intrinsic nature of traditional directional microphones implies that attenuation of noise can occur only in the rear hemisphere and that directivity can be aimed only toward the front of the hearing instrument wearer. This application is desirable since in most circumstances the wearer faces the speaker of interest. And even if the talker is not located in front of the wearer when he or she starts speaking, it is expected that the listener will turn to face the talker.

Therefore, the only situation in which traditional directional microphones are not effective is when the wearer is unable to face the speaker. One such specific, but commonplace, situation is when the wearer is driving a car and a passenger in the back seat speaks. In this case, speech comes from the back while the noise comes from the sides and front, but the wearer cannot turn to face the speaker while driving. A similar situation occurs when the hearing instrument wearer is walking side by side with a group of people. In this case, the speech is coming from the side, which the directional microphone cannot account for, and the wearer cannot always turn toward the speaker. An ideal microphone system, therefore, should be able to steer maximum directivity toward speech, regardless of the direction from which it originates—even if it originates from behind the listener.


Siemens has developed a new “full-directional” algorithm, called SpeechFocus, for steering directionality. The SpeechFocus algorithm addresses the limitations of traditional directional microphones by adding more unique polar patterns as options for certain listening environments. In addition to offering all the functionalities of a four-channel adaptive directional microphone, when necessary SpeechFocus can automatically suppress noise coming from in front of the wearer and focus on speech coming from a different direction, such as from behind.

The algorithm continuously scans sounds in the listening environment for speech patterns. When it detects speech, it selects the directivity pattern most effective in focusing on that speech source, which includes selecting an omnidirectional pattern if noise is not present at a significant level.

Algorithm uses three directivity patterns at once

SpeechFocus works by operating three different directivity patterns simultaneously: omnidirectional, adaptive directional, and reverse directional (anti-cardioid). Unlike standard directional-microphone patterns (e.g., cardioid, hypercardioid), which attenuate only sounds coming from the sides and the back, this backward-facing pattern works like an acoustic rear-view mirror, focusing on speech that originates from the back while suppressing noise from the front hemisphere.

When speech is present and background noise is also detected, the signals from all directivity patterns are then analyzed for speech patterns (see Figure 1). The microphone pattern that results in the greatest output for the speech signal is then selected. This means that when speech is detected from the front hemisphere, the traditional frontal adaptive directional-microphone polar patterns are employed, reducing noise from the side and the back.
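In code, the selection step might look like the following minimal sketch. The branch signals, the 2–8 Hz “syllabic band” proxy, and the function names are assumptions for illustration; the speech-pattern analysis in the actual product is more sophisticated.

```python
import numpy as np

def speech_band_level(x, fs=16000):
    """Crude proxy for how much speech a branch output carries:
    energy of amplitude-envelope fluctuations in the syllabic band (2-8 Hz)."""
    env = np.abs(x)
    env = env - env.mean()
    spec = np.abs(np.fft.rfft(env))
    freqs = np.fft.rfftfreq(len(env), 1.0 / fs)
    return spec[(freqs >= 2.0) & (freqs <= 8.0)].sum()

def select_branch(branches, fs=16000):
    """branches: dict mapping pattern name (e.g., 'omni', 'frontal', 'reverse')
    to that directivity branch's output signal. Returns the name of the
    branch whose output contains the strongest speech-like modulation."""
    return max(branches, key=lambda name: speech_band_level(branches[name], fs))
```

A branch whose output is modulated at a speech-like 4 Hz rate wins over a branch carrying only steady noise, so the microphone pattern pointed at the talker is the one selected.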

Figure 1: Illustration of how the acoustic scene analysis system monitors the acoustic environment to create effective steering for SpeechFocus.

When speech is detected as originating from the back, the backward directional-microphone pattern is selected, and noise from the front hemisphere is reduced.

When the speech is detected directly from the left or the right side, then the omnidirectional directivity pattern is engaged. As this would happen only at exactly +/−90°, in the real world either the frontal or backward directionality will usually be active in noisy situations.

The decision-making process in SpeechFocus depends upon the detection of speech patterns, in particular the modulations present in speech. Since the typical modulation frequency of speech is approximately 4 Hz (i.e., 4 peaks/second), accurate detection of speech requires at least a 1-second window of analysis. Therefore, the algorithm has an attack time of approximately 1 second. That is, from the time speech is detected from a particular direction, the algorithm takes about 1 second to choose the appropriate polar pattern.
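The timing argument can be made concrete with a toy envelope-peak counter. This illustrates the reasoning, not the product’s detector; the frame size and threshold are assumptions.

```python
import numpy as np

FS = 16000          # audio sample rate (Hz)
FRAME = FS // 50    # 20 ms frames -> 50 envelope samples per second

def envelope_peaks_per_second(x, fs=FS):
    """Count syllabic-rate peaks in a 1-second amplitude envelope.

    The envelope is reduced to 50 frames/s (20 ms mean-rectified frames);
    speech-like signals modulated near 4 Hz show about 4 envelope peaks
    per second, so roughly a 1-second window is needed to confirm them.
    """
    n = (len(x) // FRAME) * FRAME
    env = np.abs(x[:n]).reshape(-1, FRAME).mean(axis=1)
    thresh = 1.1 * env.mean()   # ignore flat envelopes and minor ripple
    return sum(
        1
        for i in range(1, len(env) - 1)
        if env[i] > env[i - 1] and env[i] > env[i + 1] and env[i] > thresh
    )
```

A 4 Hz modulation yields about four envelope peaks in a 1-second window; a much shorter window contains too few peaks to distinguish speech-like modulation from chance fluctuations, which is why the attack time cannot be much shorter than a second.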

In some listening situations, this detection time might be too slow for optimum speech understanding. For maximum effectiveness, therefore, the algorithm should be activated in a separate program for more static listening situations, especially situations where it is anticipated that speech will originate from behind the listener, such as when driving a car with passengers in the back.

For this reason, whenever the background noise is ∼65 dB SPL or lower, SpeechFocus selects an omnidirectional pattern. As described by Branda and Hernandez,7 there are listening situations where soft-level directionality is desired, which is available in the traditional frontal-directional automatic/adaptive setting. For many patients, therefore, it probably would be most effective to use the standard automatic/adaptive directionality in the universal program, and SpeechFocus in a different program.


Prior to any behavioral study of the new algorithm, the University of Iowa conducted a laboratory electro-acoustic analysis to determine if the automatic steering and resulting adaptive polar patterns were correct. The following polar patterns were obtained in an anechoic chamber with the Siemens Pure 701 BTE.

Figure 2 shows the results for speech presented at 0° azimuth at 75 dB SPL with no noise present (left). The resulting pattern is omnidirectional. The middle plot shows the polar pattern that resulted when a background noise signal (multitalker babble) was added from 180° azimuth. As expected, a frontal-directional cardioid pattern was selected. When the locations of the speech and noise signals were switched, so that speech came from the back and noise from the front, a reverse-directional (anti-cardioid) pattern resulted (right). To summarize, when the SpeechFocus mode is selected, the automatic/adaptive directional feature works essentially the same as in the standard universal directional mode, except that the additional option of an anti-cardioid pattern is available.
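The patterns in these measurements are all members of the first-order family R(θ) = a + b·cos θ. As a rough numeric illustration (the coefficients below are textbook values, not measured from the instrument):

```python
import numpy as np

# (a, b) coefficients of the first-order family R(theta) = a + b*cos(theta)
PATTERNS = {
    "omni":          (1.00,  0.00),   # equal sensitivity in all directions
    "cardioid":      (0.50,  0.50),   # null at 180 deg (behind the wearer)
    "hypercardioid": (0.25,  0.75),   # nulls near 109 deg; best front directivity
    "anti-cardioid": (0.50, -0.50),   # null at 0 deg: the acoustic rear-view mirror
}

def gain(name, theta_deg):
    """Relative magnitude sensitivity of pattern `name` at angle theta_deg."""
    a, b = PATTERNS[name]
    return abs(a + b * np.cos(np.radians(theta_deg)))
```

Evaluating `gain` at 0° and 180° mirrors the behavior in Figure 2: the cardioid nulls sound from behind, while the anti-cardioid passes it at full sensitivity and nulls sound from the front.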

Figure 2: SpeechFocus automatically selects the most appropriate directivity pattern in different acoustic environments: omnidirectional for speech only, frontal directional for speech from front and noise from the back (S0N180), and reverse directional for speech from back and noise from front (S180N0). Measurements obtained at the Hearing Aid Lab of the University of Iowa.


To examine the effectiveness of the SpeechFocus algorithm for improving speech understanding in background noise, clinical studies were conducted at two different sites: Site 1 was the University of Iowa, Iowa City, IA, and Site 2 was the University of Northern Colorado, Greeley, CO. The same protocol was used at both sites.

All 36 participants (n=15 at Site 1, n=21 at Site 2) had downward-sloping sensorineural hearing loss and were experienced hearing aid users. They were fitted bilaterally with the Siemens Pure 701 BTE instruments using closed domes. The Connexx 6.4 software was used to program the hearing instruments to the NAL-NL1 prescriptive fitting. The calibrated real-speech signal (“carrot passage”) of the Verifit probe-microphone system was used to verify the match to targets; minor adjustments to gain and compression were made to obtain a closer fit when necessary. This prescribed gain and output were then stored in three different memories of the hearing instruments.

The three memories differed only in terms of the microphone mode setting: Memory #1: fixed omnidirectional, Memory #2: conventional frontal automatic/adaptive directional, and Memory #3: SpeechFocus (fully automatic/adaptive directional including the anti-cardioid pattern). All special features (e.g., noise reduction, adaptive feedback suppression, low-level expansion) remained active for all microphone modes at default settings.

Testing was conducted in an audiometric sound suite, with speech delivered from the back (180° azimuth) and noise from the front of the listener (0° azimuth). The speech material used was the Hearing in Noise Test (HINT), delivered via CD. The standard HINT material was modified slightly: The quiet interval between sentences was filled with the HINT noise, and the carrier phrase “Please repeat the next sentence” was inserted before each sentence. The standard HINT noise was presented at a constant level of 72 dB(A); the level of the sentences was adaptive.

Participants were tested in each of the three microphone modes using two HINT lists (20 sentences). The HINT was scored in the conventional manner, yielding a reception threshold for sentences (RTS) in noise for each participant in each microphone setting. The ordering of the microphone modes was counterbalanced.
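For readers unfamiliar with adaptive sentence tests, the sketch below shows the general idea of such a track in simplified form. It is not the exact HINT procedure (the real test uses 4-dB then 2-dB steps and a specific averaging rule); the step size and averaging here are simplifying assumptions.

```python
def estimate_rts(responses, start_snr=0.0, step=2.0):
    """Simplified 1-down/1-up adaptive track for a sentence-in-noise test.

    responses: booleans per sentence (True = repeated correctly).
    The sentence level drops after a correct response and rises after an
    error, so the track converges on the 50%-correct point; the RTS
    estimate is the mean level visited after the first few sentences.
    """
    snr = start_snr
    track = []
    for correct in responses:
        track.append(snr)
        snr += -step if correct else step
    tail = track[4:]                  # discard the initial approach
    return sum(tail) / max(1, len(tail))
```

A listener who alternates correct and incorrect responses hovers around threshold: with these assumed settings, `estimate_rts([True, False] * 10)` returns −1.0 dB SNR.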

The results for Site 1 are shown in Figure 3. The three conditions shown are the fixed omnidirectional, adaptive directional (which adapted to a hypercardioid pattern), and SpeechFocus (which adapted to an anti-cardioid pattern). Note that there were large differences in the resulting mean HINT RTS scores.

Figure 3: HINT results (means and standard deviations) from Site 1 (University of Iowa) for SpeechFocus compared to omnidirectional and conventional adaptive directional microphones.

A repeated-measures ANOVA was conducted to determine the effect of microphone mode, with the HINT RTS score as the dependent variable. The main effect of microphone mode was significant (F(2, 28) = 85.36, p<0.0001). Follow-up tests (with Bonferroni correction) showed that performance differed significantly between all microphone modes (p<0.0001): omni was better than adaptive frontal directional, and SpeechFocus was better than both adaptive directional and omni. Looking at individual data and simply comparing SpeechFocus to omni, all of the participants showed a SpeechFocus benefit of at least 2 dB, and 67% showed a benefit of at least 4 dB.

The findings for Site 2 are shown in Figure 4. The pattern of findings was very similar to that of Site 1. A repeated-measures analysis of variance was performed to determine the effect of microphone mode, and the main effect was significant (F(1.7, 33.7) = 151.85; p<0.0001). Follow-up tests (Bonferroni correction) showed that HINT performance differed significantly between all microphone modes (p<0.0001). At this site, 95% of the participants had a SpeechFocus benefit of at least 2 dB (compared to omni) and 81% had a benefit of at least 4 dB; 3 of the 21 participants had a SpeechFocus benefit of over 9 dB.
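The per-participant benefit figures quoted above are simple differences of RTS scores (lower RTS = better performance). A minimal helper, with made-up example numbers rather than study data, illustrates the computation:

```python
def benefit_rates(rts_omni, rts_speechfocus, thresholds=(2.0, 4.0)):
    """Fraction of participants whose SpeechFocus benefit meets each threshold.

    Benefit = omni RTS - SpeechFocus RTS (in dB); lower RTS is better,
    so a positive difference favors SpeechFocus.
    """
    benefits = [o - s for o, s in zip(rts_omni, rts_speechfocus)]
    return {t: sum(b >= t for b in benefits) / len(benefits) for t in thresholds}

# Hypothetical RTS values (dB SNR) for four listeners -- not study data:
rates = benefit_rates([1.0, 0.5, -2.0, 3.0], [-2.0, -3.0, -4.5, 2.0])
```

In this invented example, three of the four listeners reach the 2-dB criterion and none reach 4 dB.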

Figure 4: HINT results (means and standard deviations) from Site 2 (University of Northern Colorado) for SpeechFocus compared to omnidirectional and conventional adaptive directional microphones.

Figure 5 shows the results from the two experimental sites plotted as a function of mean SpeechFocus benefit. Although the overall mean HINT RTS scores were somewhat different for the two sites (Figures 3 and 4), the benefit for SpeechFocus was nearly identical when compared to the two other microphone mode options. As shown in this figure, the benefit is about 10 dB when compared to adaptive directional, and about 5 dB when compared to omnidirectional.

Figure 5: Mean benefit of SpeechFocus at two different research sites.


In the Connexx 6.4 software, SpeechFocus is a separate directional-microphone setting under the Microphone/Bluetooth tab. It is also the default setting in the SpeechFocus program. We recommend that it be selected in programs for a patient to use when speech may originate from directions other than the front and when the wearer cannot turn to face the speaker.

While patients could use SpeechFocus as their primary program, the automatic activation of directionality occurs at a higher noise level for this algorithm, so we recommend the standard automatic/adaptive directional mode for everyday listening. We also recommend the fixed omnidirectional microphone mode for music and outdoor programs, and for mixed-input modes such as DAI or the Tek Transmitter.

The function of SpeechFocus can be tested in real time using the Connexx software. This helps the dispenser understand the operation of this feature, and also is useful for patient demonstration. Figure 6 is an example of what is shown in the Connexx Real Time Display. This recording was obtained for a speech-in-noise situation, with speech originating from the back.

Figure 6: Demonstrating SpeechFocus with the Connexx Real Time Display.

The diagram in Figure 6 represents an aerial view of the hearing instrument wearer facing front (0° azimuth). The green field indicates the focus of directivity, depending on the direction from which speech originates. By simply holding the hearing aid and presenting speech and/or noise from different azimuths, one can observe the automatic operation of the algorithm. Note that this is a real-time display, not a simulation, so the automatic switching observed is the same as what the patient will experience during real-world use.

During the fitting process, informational counseling is facilitated by having the patient wear the instrument so that he or she can hear as well as see the effects of SpeechFocus. Here is a protocol that we have found successful:

  • Fit the hearing instruments bilaterally to the desired gain and output settings.
  • Select the SpeechFocus setting under the Microphone/Bluetooth tab.
  • Play the selected sound files from the Connexx software utilizing two loudspeakers. Select a presentation level around 70 dB SPL.
  • First, position the loudspeaker playing the speech signal in front of the wearer and the loudspeaker playing noise behind the wearer. For this listening situation the frontal adaptive directional polar pattern will be engaged and the green field will be focused to the front.
  • Then, exchange the position of the speakers (or rotate the patient) so that speech is now coming from the back and the noise is coming from the front. In this listening situation, the adaptive anti-cardioid directional polar pattern will be activated and the green field will appear behind the wearer (as in Figure 6).
  • Finally, move the speakers playing either speech or noise to different locations to observe the automatic/adaptive behavior.


While directional-microphone technology has been used for over 40 years, there continue to be significant advances. One listening situation in which directional technology had not previously been successful was a speech-in-noise condition with speech presented from behind the listener. The Siemens SpeechFocus, which includes a reverse directional polar pattern, was introduced to improve speech understanding in this type of listening environment.

Clinical trials at two independent research sites compared SpeechFocus with omnidirectional and conventional frontal adaptive directional. They found a significant benefit for SpeechFocus for understanding speech in background noise. This microphone mode provided an average improvement of ∼5 dB compared to omnidirectional, and ∼10 dB compared to frontal adaptive directional.

As has been pointed out in several MarkeTrak reports, overall satisfaction with hearing aids is related to the number of listening situations in which users report that the instruments provide benefit. We believe that SpeechFocus has added one more listening situation in which patients will find significant improvement in speech understanding.


1. Bentler RA: Effectiveness of directional microphones and noise reduction schemes in hearing aids: a systematic review of the evidence. JAAA 2005; 16(7):473–484.
2. Powers T, Wesselkamp M: The use of digital features to combat background noise. Hear Rev: High Performance Hearing Solutions 1999;3:36–39.
3. Ricketts T, Dhar S: Comparison of performance across three directional hearing aids. JAAA 1999; 10:180–189.
4. Powers T, Hamacher V: Proving adaptive directional technology works. Hear Rev 2004;11(4): 46–50.
5. Bentler R, Palmer C, Mueller H: Evaluation of a second-order directional microphone hearing aid. I: Speech perception outcomes. JAAA 2006; 17(3):179–189.
6. Palmer C, Bentler R, Mueller H: Evaluation of a second-order directional microphone hearing aid II: Self-report outcomes. JAAA 2006;17(3):190–201.
7. Branda E, Hernandez A: New directional solutions for special listening situations. Presentation at the American Academy of Audiology Annual Convention, April 2010, San Diego.
© 2011 Lippincott Williams & Wilkins, Inc.