The purpose of this study was to examine word recognition in children who are hard of hearing (CHH) and children with normal hearing (CNH) in response to time-gated words presented in high- versus low-predictability sentences (HP, LP), in which semantic cues were manipulated. Findings inform our understanding of how CHH combine cognitive-linguistic and acoustic-phonetic cues to support spoken word recognition. It was hypothesized that both groups of children would be able to make use of the linguistic cues provided by HP sentences to support word recognition. CHH were expected to require more acoustic information (more gates) than CNH to correctly identify words in the LP condition. In addition, it was hypothesized that error patterns would differ across groups.
Sixteen CHH with mild to moderate hearing loss and 16 age-matched CNH participated (5 to 12 years). Test stimuli included 15 LP and 15 HP age-appropriate sentences. The final word of each sentence was divided into segments and recombined with the sentence frame to create a series of sentences in which the final word was progressively lengthened by gated increments. Stimuli were presented monaurally through headphones, and children were asked to identify the target word at each successive gate. They were also asked to rate their confidence in their word choice using a five- or three-point scale. For CHH, the signals were processed through a hearing aid simulator. Standardized language measures were used to assess the contribution of linguistic skills.
Analysis of the language measures revealed that both the CNH and the CHH performed within the average range on language abilities. Both groups correctly recognized a significantly higher percentage of words in the HP condition than in the LP condition. Although CHH performed comparably with CNH in successfully recognizing the majority of words, differences were observed in the amount of acoustic-phonetic information needed to achieve accurate word recognition: CHH needed more gates than CNH to identify words in the LP condition. CNH rated their confidence significantly lower in the LP condition than in the HP condition, whereas CHH showed no significant difference in confidence between conditions. Error patterns for incorrect word responses across gates and predictability conditions varied with hearing status.
The results of this study suggest that CHH with age-appropriate language abilities took advantage of context cues in the HP sentences to guide word recognition in a manner similar to CNH. However, in the LP condition, they required more acoustic information (more gates) than CNH for word recognition. Differences in the structure of incorrect word responses and their nomination patterns across gates for CHH compared with CNH suggest variations in how these groups use limited acoustic information to select word candidates.
1Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA; 2Center for Audiology, Boys Town National Research Hospital, Omaha, Nebraska, USA; and 3Center for Childhood Deafness, Boys Town National Research Hospital, Omaha, Nebraska, USA.
Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).
This research was supported by the following grants from the NIH-NIDCD and NIH-NIGMS: R01 DC004300, R01 DC009560, R01 DC013591, P30 DC004662, P20 GM109023. The content is solely the responsibility of the authors and does not necessarily represent the official views of the NIH. Dawna Lewis is a member of the Phonak Pediatric Advisory Board. However, that relationship does not impact the content of this manuscript.
The authors have no other conflicts of interest to disclose.
Received May 16, 2016; accepted October 21, 2016.
Address for correspondence: Dawna E. Lewis, Center for Hearing Research, Boys Town National Research Hospital, 555 N. 30th Street, Omaha, NE 68131, USA. E-mail: email@example.com