
Information From the Voice Fundamental Frequency (F0) Region Accounts for the Majority of the Benefit When Acoustic Stimulation Is Added to Electric Stimulation

Zhang, Ting1; Dorman, Michael F.2; Spahr, Anthony J.2

doi: 10.1097/AUD.0b013e3181b7190c
Research Articles

Objectives: The aim of this study was to determine the minimum amount of low-frequency acoustic information that is required to achieve speech perception benefit in listeners with a cochlear implant in one ear and low-frequency hearing in the other ear.

Design: The recognition of monosyllabic words in quiet and sentences in noise was evaluated in three listening conditions: electric stimulation alone, acoustic stimulation alone, and combined electric and acoustic stimulation. The acoustic stimuli presented to the nonimplanted ear were either low-pass-filtered at 125, 250, 500, or 750 Hz, or unfiltered (wideband).

Results: Adding low-frequency acoustic information to electric stimulation led to a significant improvement in word recognition in quiet and sentence recognition in noise. Improvement was observed in the combined electric and acoustic stimulation condition even when the acoustic information was limited to the 125-Hz low-passed signal. Further improvement for sentences in noise was observed when the acoustic signal was widened to wideband.

Conclusions: Information from the voice fundamental frequency (F0) region accounts for the majority of the speech perception benefit when acoustic stimulation is added to electric stimulation. We propose that, in quiet, low-frequency acoustic information leads to an improved representation of voicing, which in turn leads to a reduction in word candidates in the lexicon. In noise, the robust representation of voicing allows access to low-frequency acoustic landmarks that mark syllable structure and word boundaries. These landmarks can bootstrap word and sentence recognition.

1University of Maryland, College Park, Maryland; and 2Arizona State University, Tempe, Arizona.

Address for correspondence: Ting Zhang, 658 North Poplar Ct., Chandler, AZ 85226. E-mail: Ting.Zhang@asu.edu.

Received May 24, 2008; accepted July 9, 2009.

© 2010 Lippincott Williams & Wilkins, Inc.