Perception of spectrally degraded speech is particularly difficult when the signal is also distorted along the frequency axis. This may be especially important for post-lingually deafened recipients of cochlear implants (CIs), who must adapt to a signal in which there may be a mismatch between the frequencies of the input signal and the characteristic frequencies of the neurons stimulated by the CI. However, there is a lack of tools for identifying whether an individual has fully adapted to a mismatch in the frequency-to-place relationship and, if not, for finding a frequency table that ameliorates the negative effects of an unadapted mismatch. The goal of the proposed investigation was to test the feasibility of using real-time selection of frequency tables to identify cases in which listeners have not fully adapted to a frequency mismatch. The assumption underlying this approach is that listeners who have not adapted to a frequency mismatch will select a frequency table that minimizes any such mismatch, even at the expense of reducing the information that the table conveys.
Thirty-four normal-hearing adults listened to a noise-vocoded acoustic simulation of a CI and adjusted the frequency table in real time until they obtained one that sounded “most intelligible” to them. The use of an acoustic simulation was essential to this study because it allowed the authors to explicitly control the degree of frequency mismatch present in the signal. None of the listeners had any previous experience with vocoded speech, in order to test the hypothesis that the real-time selection procedure can identify cases in which a listener has not adapted to a frequency mismatch. After a self-selected table was obtained, the authors measured consonant-nucleus-consonant (CNC) word-recognition scores with that table and with two other frequency tables: a “frequency-matched” table, which aligned the analysis filters with the noise bands of the vocoder simulation, and a “right information” table, which is similar to that used in most CI speech processors but in this simulation produced a frequency shift equivalent to 6.5 mm of cochlear space.
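The vocoder manipulation described above can be illustrated with a minimal sketch: speech is filtered into analysis bands, the envelope of each band modulates a band of noise, and a frequency mismatch is introduced by placing the noise-carrier bands at different edges than the analysis bands. The function names (`noise_vocode`, `greenwood_mm_to_hz`), band edges, filter orders, and envelope cutoff below are illustrative assumptions, not the authors' implementation; the Greenwood (1990) human frequency-position map is included only to show how a shift expressed in millimeters of cochlear space corresponds to a shift in frequency.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def greenwood_mm_to_hz(x_mm, length_mm=35.0):
    """Greenwood (1990) human map: distance from the apex (mm) -> Hz.
    f = 165.4 * (10**(2.1 * x) - 0.88), with x the proportion of cochlear length."""
    return 165.4 * (10.0 ** (2.1 * (x_mm / length_mm)) - 0.88)

def noise_vocode(signal, fs, analysis_edges, carrier_edges, env_cutoff=160.0):
    """Noise-vocode `signal` (illustrative sketch, not the published method).

    Envelopes are extracted in the analysis bands and used to modulate
    bandpass noise in the (possibly shifted) carrier bands; using identical
    edge lists simulates a frequency-matched table, while shifted carrier
    edges simulate a frequency-to-place mismatch.
    """
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal)
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    band_pairs = zip(zip(analysis_edges[:-1], analysis_edges[1:]),
                     zip(carrier_edges[:-1], carrier_edges[1:]))
    for (lo_a, hi_a), (lo_c, hi_c) in band_pairs:
        # Envelope: bandpass -> rectify -> lowpass.
        sos_a = butter(4, [lo_a, hi_a], btype="band", fs=fs, output="sos")
        env = sosfilt(env_sos, np.abs(sosfilt(sos_a, signal)))
        env = np.clip(env, 0.0, None)
        # Carrier: bandpass noise in the (possibly shifted) carrier band.
        sos_c = butter(4, [lo_c, hi_c], btype="band", fs=fs, output="sos")
        carrier = sosfilt(sos_c, rng.standard_normal(signal.shape[0]))
        # Refilter the modulated noise to keep it confined to its band.
        out += sosfilt(sos_c, env * carrier)
    return out
```

Under this sketch, a mismatch like the study's 6.5 mm shift could be simulated by converting the analysis-band edges to cochlear position, adding 6.5 mm toward the base, and mapping back to Hz with `greenwood_mm_to_hz` to obtain the carrier edges.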
Listeners tended to select a table that was very close to, but shifted slightly lower in frequency than, the frequency-matched table. The real-time selection process took 2 to 3 min per trial on average, and the between-trial variability was comparable with that previously observed with closely related procedures. Word-recognition scores were clearly higher with the self-selected table than with the right-information table, and slightly higher than with the frequency-matched table.
Real-time self-selection of frequency tables may be a viable tool for identifying listeners who have not adapted to a mismatch in the frequency-to-place relationship, and for finding a frequency table that is more appropriate for them. Moreover, the small but significant improvement in word recognition observed with the self-selected table suggests that these listeners based their selections on intelligibility rather than on some other factor. The within-subject variability of the real-time selection procedure was comparable with that of a genetic algorithm, and the real-time procedure appeared to be faster than either a genetic algorithm or a simplex procedure.
Cochlear implant (CI) recipients must adapt to a signal in which there may be a mismatch between the input signal and the characteristic frequencies of the neurons stimulated by the CI. Using a CI simulation, the authors explored whether real-time selection of frequency tables can be used to identify cases where a listener has not adapted to a frequency mismatch. Thirty-four naive listeners were able to reliably select a preferred frequency table that aided speech understanding in the presence of a sizeable mismatch. The speed and variability of these selections suggest this tool may be feasible for use with CI recipients.
1Department of Otolaryngology, New York University School of Medicine, New York, New York, USA; and 2Precor Corp., Woodinville, Washington, USA.
ACKNOWLEDGMENTS: This work was supported by National Institutes of Health/National Institute on Deafness and Other Communication Disorders grants DC09459 (principal investigator: Fitzgerald) and DC03937 (principal investigator: Svirsky).
The authors declare no conflict of interest.
Address for correspondence: Matthew Fitzgerald, New York University School of Medicine, 550 First Avenue, NBV-5E5, New York, NY 10016, USA. E-mail: email@example.com