
Voice Emotion Recognition by Children With Mild-to-Moderate Hearing Loss

Cannon, Shauntelle A.1,2; Chatterjee, Monita2

doi: 10.1097/AUD.0000000000000637
Research Article

Objectives: Emotional communication is important in children’s social development. Previous studies have shown deficits in voice emotion recognition by children with moderate-to-severe hearing loss or with cochlear implants. Little, however, is known about emotion recognition in children with mild-to-moderate hearing loss. The objective of this study was to compare voice emotion recognition by children with mild-to-moderate hearing loss relative to their peers with normal hearing, under conditions in which the emotional prosody was either more or less exaggerated (child-directed or adult-directed speech, respectively). We hypothesized that the performance of children with mild-to-moderate hearing loss would be comparable to their normally hearing peers when tested with child-directed materials but would show significant deficits in emotion recognition when tested with adult-directed materials, which have reduced prosodic cues.

Design: Nineteen school-aged children (8 to 14 years of age) with mild-to-moderate hearing loss and 20 children with normal hearing aged 6 to 17 years participated in the study. A group of 11 young, normally hearing adults was also tested. Stimuli comprised sentences spoken in one of five emotions (angry, happy, sad, neutral, and scared), either in a child-directed or in an adult-directed manner. The task was a single-interval, five-alternative forced-choice paradigm, in which the participants heard each sentence in turn and indicated which of the five emotions was associated with that sentence. Reaction time was also recorded as a measure of cognitive load.

Results: Acoustic analyses confirmed the exaggerated prosodic cues in the child-directed materials relative to the adult-directed materials. Results showed significant effects of age, specific emotion (happy, sad, etc.), and test materials (better performance with child-directed materials) in both groups of children, as well as susceptibility to talker variability. Contrary to our hypothesis, no significant differences were observed between the two groups of children in either emotion recognition (percent correct or d' values) or in reaction time, with either child- or adult-directed materials. Among children with hearing loss, degree of hearing loss (mild or moderate) did not predict performance. In children with hearing loss, interactions between vocabulary, materials, and age were observed, such that older children with stronger vocabulary showed better performance with child-directed speech. Such interactions were not observed in children with normal hearing. The pattern of results was broadly consistent across the different measures of accuracy, d', and reaction time.
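The emotion-recognition scores above were expressed both as percent correct and as d' values. As an illustration only (not the authors' analysis code), the sketch below shows one common way to estimate a per-emotion d' from a confusion matrix in a single-interval, five-alternative task: each emotion is treated in turn as the "signal," with hits being trials of that emotion labeled correctly and false alarms being trials of other emotions labeled as it. The emotion labels match the study; the example confusion counts are hypothetical.

```python
# Hedged sketch, assuming a 5x5 confusion matrix of response counts;
# this is an illustrative d' computation, not the study's actual analysis.
from statistics import NormalDist

EMOTIONS = ["angry", "happy", "sad", "neutral", "scared"]

def d_prime_per_emotion(confusion):
    """confusion[i][j] = count of trials with true emotion i labeled as emotion j."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    n = len(confusion)
    results = {}
    for i, emotion in enumerate(EMOTIONS):
        signal_trials = sum(confusion[i])
        noise_trials = sum(sum(confusion[k]) for k in range(n) if k != i)
        hits = confusion[i][i]
        false_alarms = sum(confusion[k][i] for k in range(n) if k != i)
        # Log-linear correction keeps z finite at 0% or 100% rates.
        hit_rate = (hits + 0.5) / (signal_trials + 1)
        fa_rate = (false_alarms + 0.5) / (noise_trials + 1)
        results[emotion] = z(hit_rate) - z(fa_rate)
    return results

# Hypothetical listener: accurate for "happy", confuses "sad" and "neutral".
matrix = [
    [8, 1, 0, 1, 0],   # true: angry
    [0, 10, 0, 0, 0],  # true: happy
    [0, 0, 6, 4, 0],   # true: sad
    [1, 0, 3, 6, 0],   # true: neutral
    [1, 1, 0, 0, 8],   # true: scared
]
scores = d_prime_per_emotion(matrix)
```

With these hypothetical counts, d' is highest for the emotions labeled most consistently (e.g., "happy") and lower where confusions cluster (e.g., "sad" vs. "neutral"), mirroring how per-emotion sensitivity can differ even when overall percent correct is similar.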

Conclusions: Children with mild-to-moderate hearing loss do not have significant deficits in overall voice emotion recognition compared with their normally hearing peers, but the mechanisms involved may differ between the two groups. The results suggest a stronger role for linguistic ability in emotion recognition by children with normal hearing than by children with hearing loss.

1Department of Speech and Hearing Sciences, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA; and

2Auditory Prostheses & Perception Lab, Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA.

ACKNOWLEDGMENTS: The authors would like to thank Sara Damm, Aditya Kulkarni, Julie Christensen, Mohsen Hozan, Barbara Peterson, Meredith Spratford, Sara Robinson, and Sarah Al-Salim for their help with this work. The authors would also like to thank Joshua Sevier and Phylicia Bediako for their helpful comments on earlier drafts of this article.

Portions of this work were presented at the 2016 annual conference of the American Auditory Society held in Scottsdale, Arizona.

This research was funded by the National Institutes of Health (NIH) grants R01 DC014233 and R21 DC011905, the Clinical Management Core of NIH grant P20 GM10923, and the Human Research Subject Core of P30 DC004662. S. Cannon was supported by NIH grant number T35 DC008757 and R01 DC014233 04S1.

The authors have no conflicts of interest to disclose.

Address for correspondence: Monita Chatterjee, Auditory Prostheses & Perception Lab, Boys Town National Research Hospital, 425 N 30th St, Omaha, NE 68131, USA. E-mail: monita.chatterjee@boystown.org

Received September 19, 2017; accepted June 2, 2018.

Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.