The purpose of this study was to evaluate hearing aid users’ performance on four tasks across three types of directional processing (mild, moderate, and strong) implemented in the same pair of commercially available behind-the-ear hearing aids. The mild processing aimed to emulate the directionality of an unoccluded ear, the moderate processing was a traditional adaptive directional scheme, and the strong processing was a cue-preserving bilateral beamformer. The four tasks were gross localization, sentence recognition, listening effort, and subjective preference.
Eighteen adults aged 48 to 83 years (mean = 69.1, SD = 10.9) with sensorineural hearing loss participated in this study. Each participant was fitted bilaterally, and the three types of directional processing were matched for frequency response but varied in directionality (mild, moderate, and strong). Performance was always evaluated in background noise that surrounded the listener. Sentence recognition was evaluated in low and moderate reverberation, whereas gross localization, listening effort, and subjective ratings were evaluated only in moderate reverberation. Sentence recognition and gross localization were evaluated using auditory-only and auditory–visual stimuli (talker’s face visible). The gross localization task assessed the ability to identify the origin of words, in addition to the ability to recall those words. Listening effort was evaluated using auditory–visual stimuli and a dual-task paradigm in which the secondary task was a simple reaction time to a visual stimulus.
The results revealed similar gross localization abilities with moderate and strong directional processing when visual stimuli were present. Conversely, in auditory-only conditions, localization accuracy was significantly poorer with strong than with moderate directional processing, but only for signals presented at the greatest eccentricities (±60 degrees). Regardless of signal-to-noise ratio or degree of reverberation, the moderate and strong directional processing resulted in significantly better sentence recognition in noise than the mild directional processing. In addition, sentence recognition in moderate reverberation was significantly better with strong than with moderate directional processing (~4 to 12 rationalized arcsine units across conditions), regardless of signal-to-noise ratio. Although not statistically significant, the same trend was present in low reverberation. There were no significant differences in listening effort or subjective preference across the three types of directional processing.
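The rationalized arcsine unit (RAU) used to express the sentence recognition benefit refers to Studebaker's rationalized arcsine transform, which stabilizes the variance of percent-correct scores near the floor and ceiling while remaining close to percent in the midrange. As a minimal sketch (not the authors' analysis code; the item count `n` below is an illustrative assumption, not the study's actual list length):

```python
import math

def rau(correct: int, n: int) -> float:
    """Rationalized arcsine transform of a proportion-correct score.

    correct: number of items answered correctly (0..n)
    n: total number of items
    Returns a score in rationalized arcsine units, spanning roughly
    -23 (0% correct) to about 123 (100% correct).
    """
    # Two-term arcsine transform (in radians) of the raw score.
    theta = (math.asin(math.sqrt(correct / (n + 1)))
             + math.asin(math.sqrt((correct + 1) / (n + 1))))
    # Linear rescaling so midrange RAU values approximate percent correct.
    return (146.0 / math.pi) * theta - 23.0

# Illustrative use with a hypothetical 25-item list:
# near 50% correct, RAU is close to the raw percentage,
# while extreme scores are stretched beyond the 0-100 range.
print(rau(13, 25))   # roughly 52, near the raw 52% score
print(rau(0, 25))    # below 0
print(rau(25, 25))   # above 100
```

On this scale, the reported ~4 to 12 RAU advantage for strong over moderate directional processing corresponds to a modest but consistent recognition improvement across conditions.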
The strong directional processing, which was a cue-preserving bilateral beamformer, provided additional sentence recognition benefit in realistic listening situations. Furthermore, despite reducing the interaural differences, the authors measured no significant negative consequences on listening effort or subjective preference, although it is unknown whether differences might be found using more sensitive measures. In addition, gross localization was disrupted at large eccentricities when visual cues were absent. While further study is needed, these results support consideration of this cue-preserving, bilateral beamformer technology for patients who experience difficulty with speech recognition in noise that is not adequately addressed by conventional directional hearing aid processing.
The purpose of this study was to evaluate hearing aid users’ performance on a variety of tasks across mild, moderate, and strong types of directional processing within the same hearing aids. The strong directional setting was a cue-preserving, bilateral beamformer. Results indicated that the strong directionality provided additional sentence recognition benefits in realistic listening situations. Furthermore, there were no negative consequences on listening effort or subjective preference. However, gross localization was disrupted in some conditions. These results support consideration of this technology for patients who experience difficulty understanding speech in noise that is not adequately addressed by conventional directional processing. Supplemental Digital Content is available in the text.
1Vanderbilt University Medical Center, Nashville, Tennessee, USA; and 2JFK Johnson Rehabilitation Institute, Edison, New Jersey, USA.
This project was funded with gifts from Phonak AG and the Dan Maddox Hearing Aid Foundation.
Portions of this article were presented at the 2012 International Hearing Aid Conference in Tahoe, CA.
The authors declare no other conflicts of interest.
Address correspondence to: Erin M. Picou, Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Avenue South, Room 8310, Nashville, TN 37232, USA. E-mail: firstname.lastname@example.org