Effect of Hearing Loss on Semantic Access by Auditory and Audiovisual Speech in Children

Jerger, Susan1,2; Tye-Murray, Nancy3; Damian, Markus F.4; Abdi, Hervé1

doi: 10.1097/AUD.0b013e318294e3f5
Research Articles

Objectives: This research studied whether the mode of input (auditory versus audiovisual) influenced semantic access by speech in children with sensorineural hearing impairment (HI).

Design: Participants, 31 children with HI and 62 children with normal hearing (NH), were tested with the authors’ new multimodal picture–word task. Children were instructed to name pictures displayed on a monitor and to ignore auditory or audiovisual speech distractors. The semantic content of the distractors was varied to be related versus unrelated to the pictures (e.g., the picture–distractor pairs dog-bear versus dog-cheese, respectively). In children with NH, picture-naming times were slower in the presence of semantically related distractors. This slowing, called semantic interference, is attributed to the meaning-related picture and distractor entries competing for selection and control of the response (the lexical selection by competition hypothesis). Recently, a modification of the lexical selection by competition hypothesis, called the competition threshold (CT) hypothesis, proposed that (1) the competition between the picture and distractor entries is determined by a threshold, and (2) distractors with experimentally reduced fidelity cannot reach the CT. Thus, semantically related distractors with reduced fidelity do not produce the normal interference effect, but instead yield no effect or semantic facilitation (faster picture-naming times for semantically related versus unrelated distractors). Facilitation occurs because the activation level of a semantically related distractor with reduced fidelity (1) is not sufficient to exceed the CT and produce interference but (2) is sufficient to activate its concept, which then strengthens the activation of the picture and facilitates naming. This research investigated whether the proposals of the CT hypothesis generalize to the auditory domain, to the natural degradation of speech caused by HI, and to participants who are children. The multimodal picture–word task allowed the authors to (1) quantify picture-naming results in the presence of auditory speech distractors and (2) probe whether the addition of visual speech enriched the fidelity of the auditory input sufficiently to influence results.
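To make the CT hypothesis concrete, the decision logic it describes can be sketched as a toy rule in Python: a distractor’s activation is compared against a competition threshold, with supra-threshold related distractors predicted to interfere and sub-threshold related distractors predicted to facilitate via concept-level priming. This sketch is purely illustrative; the function name, activation values, and threshold are invented here and are not part of the study.

# Toy sketch of the competition threshold (CT) hypothesis described above.
# All activation values and the threshold are hypothetical, chosen only to
# illustrate the qualitative predictions (interference vs. facilitation).

def predicted_effect(distractor_activation: float,
                     semantically_related: bool,
                     competition_threshold: float = 1.0) -> str:
    """Return the qualitative predicted effect on picture-naming time."""
    if not semantically_related:
        return "baseline (unrelated distractor)"
    if distractor_activation >= competition_threshold:
        # High-fidelity related distractor: competes for response selection,
        # so naming is slower (semantic interference).
        return "semantic interference (slower naming)"
    if distractor_activation > 0:
        # Reduced-fidelity related distractor: too weak to compete, but still
        # activates its concept, which primes the picture (facilitation).
        return "semantic facilitation (faster naming)"
    return "no effect"

# Hypothetical cases: full-fidelity audiovisual vs. degraded auditory input.
print(predicted_effect(1.4, semantically_related=True))   # interference
print(predicted_effect(0.6, semantically_related=True))   # facilitation
print(predicted_effect(0.6, semantically_related=False))  # baseline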

Results: In the HI group, the auditory distractors produced no effect or a facilitative effect, in agreement with proposals of the CT hypothesis. In contrast, the audiovisual distractors produced the normal semantic interference effect. Results in the HI versus NH groups differed significantly for the auditory mode, but not for the audiovisual mode.
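For readers unfamiliar with how such effects are quantified, the sketch below shows the standard computation in a picture–word task: the semantic effect is the mean naming time on related-distractor trials minus the mean naming time on unrelated-distractor trials, with positive values indicating interference and negative values facilitation. The naming times below are invented for illustration and are not data from this study.

# Minimal sketch of quantifying a semantic effect in a picture-word task:
# mean naming time (ms) on related-distractor trials minus mean naming time
# on unrelated-distractor trials. Positive = interference, negative =
# facilitation. The numbers below are hypothetical.

from statistics import mean

def semantic_effect(related_rts_ms, unrelated_rts_ms):
    return mean(related_rts_ms) - mean(unrelated_rts_ms)

# Hypothetical naming times (ms) for one child in one input mode.
related = [980, 1010, 995, 1020]
unrelated = [940, 955, 960, 950]

effect = semantic_effect(related, unrelated)
print(f"Semantic effect: {effect:.1f} ms "
      f"({'interference' if effect > 0 else 'facilitation'})")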

Conclusions: This research indicates that the lower-fidelity auditory speech associated with HI affects the normalcy of semantic access in children. Further, adding visual speech enriches the lower-fidelity auditory input sufficiently to produce the semantic interference effect typical of children with NH.

Is semantic access by speech influenced by the mode of input in children perceiving lower-fidelity auditory speech due to sensorineural hearing impairment (HI)? Thirty-one children with HI and 62 children with normal hearing named pictures while ignoring auditory or audiovisual word distractors on a multimodal picture–word task. The semantic content of the picture–distractor pairs was varied to be related (dog-bear) versus unrelated (dog-cheese). Children with HI showed normal results for the audiovisual mode but not for the auditory mode. Adding visual speech seemed to enrich the lower-fidelity auditory input sufficiently to promote more normal semantic access.

1University of Texas at Dallas, Dallas, TX; 2Callier Center for Communication Disorders; 3Central Institute for the Deaf of Washington University School of Medicine, St. Louis, MO; and 4University of Bristol, Bristol, United Kingdom.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and text of this article on the journal’s Web site (www.ear-hearing.com).

ACKNOWLEDGMENTS: The authors thank Dr. Alice O’Toole for her advice and assistance in recording their audiovisual stimuli. The authors also thank the children and parents who participated and the research staff who assisted, namely Elizabeth Mauze of Central Institute for the Deaf of Washington University School of Medicine; Karen Banzon, Sarah Joyce Bessonette, Carissa Dees, K. Meaghan Dougherty, Alycia Elkins, Brittany Hernandez, Kelley Leach, Michelle McNeal, and Anastasia Villescas of University of Texas at Dallas (data collection, analysis, or presentation); and Derek Hammons and Scott Hawkins of University of Texas at Dallas and Brent Spehar of Central Institute for the Deaf of Washington University School of Medicine (computer programming).

This research was supported by the National Institute on Deafness and Other Communication Disorders grant DC-00421.

The authors declare no conflict of interest.

Address for correspondence: Susan Jerger, School of Behavioral and Brain Sciences, University of Texas-Dallas, 800 W. Campbell Road, Richardson, TX 75080. E-mail: sjerger@utdallas.edu

© 2013 by Lippincott Williams & Wilkins