
The Effect of Simulated Interaural Frequency Mismatch on Speech Understanding and Spatial Release From Masking

Goupell, Matthew J.1; Stoelb, Corey A.2; Kan, Alan2; Litovsky, Ruth Y.2

doi: 10.1097/AUD.0000000000000541
Research Articles

Objective: The binaural-hearing system interaurally compares inputs, which underlies the ability to localize sound sources and to better understand speech in complex acoustic environments. Cochlear implants (CIs) are provided in both ears to increase binaural-hearing benefits; however, bilateral CI users continue to struggle with understanding speech in the presence of interfering sounds and do not achieve the same level of spatial release from masking (SRM) as normal-hearing listeners. One reason for diminished SRM in CI users could be that the electrode arrays are inserted at different depths in each ear, which would cause an interaural frequency mismatch. Because interaural frequency mismatch diminishes the salience of interaural differences for relatively simple stimuli, it may also diminish binaural benefits for spectral-temporally complex stimuli like speech. This study evaluated the effect of simulated frequency-to-place mismatch on speech understanding and SRM.

Design: Eleven normal-hearing listeners were tested on a speech understanding task. There was a female target talker who spoke five-word sentences from a closed set of words. There were two interfering male talkers who spoke unrelated sentences. Nonindividualized head-related transfer functions were used to simulate a virtual auditory space. The target was presented from the front (0°), and the interfering speech was either presented from the front (colocated) or from 90° to the right (spatially separated). Stimuli were then processed by an eight-channel vocoder with tonal carriers to simulate aspects of listening through a CI. Frequency-to-place mismatch (“shift”) was introduced by increasing the center frequency of the synthesis filters compared with the corresponding analysis filters. Speech understanding was measured for different shifts (0, 3, 4.5, and 6 mm) and target-to-masker ratios (TMRs: +10 to −10 dB). SRM was calculated as the difference in the percentage of correct words for the colocated and separated conditions. Two types of shifts were tested: (1) bilateral shifts that had the same frequency-to-place mismatch in both ears, but no interaural frequency mismatch, and (2) unilateral shifts that produced an interaural frequency mismatch.
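The shift manipulation described above can be illustrated with a short sketch. It converts a synthesis-filter center frequency to a cochlear place via the Greenwood (1990) frequency-position map, moves that place basally by the stated number of millimeters, and maps it back to frequency. The Greenwood constants and the 1000-Hz example channel are standard assumptions for a human cochlea, not the actual filter settings used in the study.

```python
import math

# Greenwood (1990) human frequency-place map constants (assumed values):
# f = A * (10**(a * x) - k), with x in mm measured from the cochlear apex.
A, a_coef, k = 165.4, 0.06, 0.88

def greenwood_freq(x_mm):
    """Frequency (Hz) at a place x_mm from the cochlear apex."""
    return A * (10 ** (a_coef * x_mm) - k)

def greenwood_place(f_hz):
    """Place (mm from apex) corresponding to frequency f_hz (inverse map)."""
    return math.log10(f_hz / A + k) / a_coef

def shift_cf(cf_hz, shift_mm):
    """Shift a center frequency basally by shift_mm of cochlear place."""
    return greenwood_freq(greenwood_place(cf_hz) + shift_mm)

# Example: a 1000-Hz analysis-channel CF under a 3-mm basal shift
shifted_cf = shift_cf(1000.0, 3.0)  # higher than 1000 Hz, since the shift is basal
```

In this framing, a bilateral shift applies the same `shift_mm` to the synthesis filters in both ears (no interaural mismatch), whereas a unilateral shift applies it in one ear only, producing the interaural frequency mismatch under study.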

Results: For the bilateral shift conditions, speech understanding decreased with increasing shift and with decreasing TMR, for both the colocated and spatially separated conditions. There was, however, no interaction between shift and spatial configuration; in other words, SRM was not affected by shift. For the unilateral shift conditions, speech understanding decreased with increasing interaural mismatch and with decreasing TMR for both the colocated and spatially separated conditions. Critically, there was a significant interaction between the amount of shift and spatial configuration; in other words, SRM decreased with increasing interaural mismatch.

Conclusions: A frequency-to-place mismatch in one or both ears resulted in decreased speech understanding. SRM, however, was only affected in conditions with unilateral shifts and interaural frequency mismatch. Therefore, matching frequency information between the ears provides listeners with larger binaural-hearing benefits, for example, improved speech understanding in the presence of interfering talkers. A clinical procedure to reduce interaural frequency mismatch when programming bilateral CIs may improve benefits in speech segregation that are due to binaural-hearing abilities.

1Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland, USA.

2Waisman Center, University of Wisconsin, Madison, Wisconsin, USA.

Received January 2, 2017; accepted November 15, 2017.

This study was supported by National Institutes of Health (NIH) Grants R01-DC015798 (to M.J.G. and Joshua G. W. Bernstein), R03-DC015321 (to A.K.), and R01-DC003083 (to R.Y.L.) and was supported, in part, by NIH Grant P30-HD03352 (Waisman Center core grant). The word corpus was funded by NIH Grant P30-DC04663 (Boston University Hearing Research Center core grant).

The authors have no conflicts of interest to disclose.

Address for correspondence: Matthew J. Goupell, Department of Hearing and Speech Sciences, University of Maryland, 0119E Lefrak Hall, College Park, MD 20742, USA. E-mail: goupell@umd.edu

Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.