
Sonority’s Effect as a Surface Cue on Lexical Speech Perception of Children With Cochlear Implants

Hamza, Yasmeen1,2; Okalidou, Areti1; Kyriafinis, George3; van Wieringen, Astrid2

doi: 10.1097/AUD.0000000000000559
Research Articles

Objectives: Sonority is the relative perceptual prominence/loudness of speech sounds of the same length, stress, and pitch. Children with cochlear implants (CIs), with restored audibility and relatively intact temporal processing, are expected to benefit from the perceptual prominence cues of highly sonorous sounds. Sonority also influences lexical access through the sonority-sequencing principle (SSP), a grammatical phonotactic rule, which facilitates the recognition and segmentation of syllables within speech. The more nonsonorous the onset of a syllable is, the larger is the degree of sonority rise to the nucleus, and the more optimal the SSP. Children with CIs may experience hindered or delayed development of the language-learning rule SSP, as a result of their deprived/degraded auditory experience. The purpose of the study was to explore sonority’s role in speech perception and lexical access of prelingually deafened children with CIs.

Design: A case–control study was conducted with 15 children with CIs, 25 normal-hearing children (NHC), and 50 normal-hearing adults, using a lexical identification task of novel, nonreal CV–CV words taught via fast mapping. The CV–CV words were constructed according to four sonority conditions, entailing syllables with sonorous onsets/less optimal SSP (SS) and nonsonorous onsets/optimal SSP (NS) in all combinations, that is, SS–SS, SS–NS, NS–SS, and NS–NS. Outcome measures were accuracy and reaction times (RTs). A subgroup analysis of 12 children with CIs pair matched on hearing age to 12 NHC aimed to study the effect of the oral-language exposure period on sonority-related performance.

Results: The two groups of children showed similar accuracy performance, both overall and across all sonority conditions. However, within-group comparisons showed that the children with CIs scored more accurately on the SS–SS condition relative to the NS–NS and NS–SS conditions, whereas the NHC performed equally well across all conditions. Additionally, adult-comparable accuracy was achieved by the children with CIs only on the SS–SS condition, as opposed to the NS–SS, SS–NS, and SS–SS conditions for the NHC. Accuracy analysis of the subgroups of children matched on hearing age showed similar results. The children with CIs recorded longer RTs overall on the sonority-treated lexical task, specifically on the SS–SS condition, compared with age-matched controls. However, the subgroup analysis showed that the two groups of children did not differ in RTs.

Conclusions: Children with CIs performed better in lexical tasks relying on sonority's perceptual prominence cues, as in the SS–SS condition, than in conditions relying on an optimal syllable-initial SSP, such as NS–NS and NS–SS. Template-driven word learning, an early word-learning strategy, appears to play a role in the lexical access of children with CIs, whether matched on hearing age or not, with the SS–SS condition acting as a preferred word template. The longer RTs of children with CIs on the highly accurate SS–SS condition possibly reflect more effortful listening. The absence of an RT difference between the groups of children when matched on hearing age highlights the oral-language exposure period as a key factor in developing auditory processing skills.

1Department of Educational and Social Policy, University of Macedonia, Thessaloniki, Greece

2Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, Leuven, Belgium

3AHEPA Hospital, First University ENT Clinic, Thessaloniki, Greece.

Received January 16, 2017; accepted January 3, 2018.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).

The research leading to these results has received funding from the People Programme (Marie Curie Actions) of the European Union's Seventh Framework Programme FP7/2007–2013 under REA grant agreement no. FP7-607139 (iCARE). Publication of this article was funded in Greece by the General Secretariat of Research and Technology for the years 2016–2017 for iCARE-FP7-607139.

A. v. W. is a member of the editorial board of Ear & Hearing. The other authors have no conflicts of interest to disclose.

Portions of this article were presented by Y. H. at the Hearing Across the Lifespan (HEAL) Conference, Lake Como, Italy, June 2–4, 2016, and at the IFOS Conference, Paris, France, June 24–28, 2017.

Address for correspondence: Yasmeen Hamza, Department of Neurosciences, Research Group Experimental ORL, KU Leuven - University of Leuven, O&N II, Herestraat 49/721, Leuven, Belgium, 3000. E-mail: yasmeenabdelkarimmohamed.hamza@student.kuleuven.be

Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.