Complementary Factors in Processing

Kraus, Nina PhD; White-Schwoch, Travis

doi: 10.1097/01.HJ.0000481809.66946.c9
Hearing Matters

Dr. Kraus, left, is a professor of auditory neuroscience at Northwestern University, investigating the neurobiology underlying speech and music perception and learning-associated brain plasticity. Mr. White-Schwoch, right, is a data analyst in the Auditory Neuroscience Laboratory, where he focuses on translational questions in speech, language, and hearing.

Listening difficulties, such as auditory processing disorders, are complex and heterogeneous in their origins, presentation, and remediation. A premise of our work is that a better understanding of the basic biological mechanisms of sound processing will lead to a set of strategies to evaluate a patient's challenges in everyday listening, and to identify the best course of intervention for that individual. Here, we outline a new framework to understand sound processing that distinguishes between acoustic and intrinsic factors.

Acoustic factors reflect the accuracy with which the nervous system processes the myriad details in sound, such as pitch, timing, and timbre. They are stimulus-dependent, which means neural activity patterns will change along with changes to the sound. For example, sounds of different frequencies will spark activity in distinct hair cells, nerve fibers, and regions of auditory brain nuclei and cortices that are arranged tonotopically.

Intrinsic factors reflect the health of the nervous system's infrastructure for processing sound. They are stimulus-independent, which means that neural activity patterns are similar regardless of the acoustic makeup of the eliciting sound. For example, neural asynchrony will cause problems transcribing sounds across a broad frequency spectrum.

By analogy, we like to think about a record player. Intrinsic factors are like the wiring, the mat, and the belt—they provide the base necessary to process the sound on the record. The acoustic factors are like the cartridge, the amplifier, and the speakers—they convey the sound as faithfully as possible. All of these components need to work together to process the record itself. If you lose the intrinsic factors, you're likely to stop processing the sound. The acoustic factors are necessary, but are also more nuanced. Different speakers will convey the same sounds in different ways, just like listeners may process the same sounds distinctly.

These two groups of factors give us a framework to navigate biological indices of sound processing, particularly the auditory brainstem response to complex sounds (cABR), which has been the focus of many of our columns (see Table). Indeed, several studies using cABR have found evidence for a functional separation between acoustic and intrinsic factors in neural processing.

A key aspect of this framework is that acoustic and intrinsic aspects of neural processing are complementary—they need to work together so that we can make sense of sound. We recently evaluated how the neural processing of consonants in noise predicts early language and listening skills (White-Schwoch. PLoS Biol 2015;13[7]:e1002196). We found that a triumvirate of an intrinsic factor (response stability) and two acoustic factors (response timing and timbral representation) strongly predicted emergent language skills, and that in combination these factors were more than the sum of their parts. Additionally, we evaluated the same three factors in older children. Children diagnosed with a learning disability had poorer responses across all three dimensions than typically developing children. Moreover, the three factors in combination tracked with reading skills. This supports the hypothesis that both intrinsic and acoustic features are important for developing good listening and language skills.

Additional evidence for the distinction between acoustic and intrinsic factors comes from research evaluating the biological impact of music training. Across the lifespan, music training is associated with superior acoustic neural coding: musicians have faster responses, more robust processing of timbral features, and greater resilience to the degrading effects of background noise than their nonmusician peers (Strait. Hear Res 2014;308:109–121). Longitudinal studies in disadvantaged populations (such as children from low socioeconomic backgrounds), however, have shown that music training boosts response stability as well, but only in these populations (Tierney. Proc Natl Acad Sci USA 2015;112[32]:10062–10067). This is noteworthy because these children tend to have unstable and noisy neural responses, indicating a poor foundation of intrinsic neural coding (Skoe. J Neurosci 2013;33[44]:17221–17231). These results support the hypothesis that intrinsic factors are most labile when the nervous system is vulnerable—while it is still developing, in aging, and in cases of deprivation and disorder.

From a clinical standpoint, this leads to an important conclusion: we need a strong base of neural processing—that is, good intrinsic activity—on which to improve the biological processing of sound details in impaired populations. Thus, while intrinsic and acoustic neural processes are complementary, individuals with poor intrinsic response properties may face a biological bottleneck they need to overcome to improve listening skills. Encouragingly, these processes are not hard-wired. Both intrinsic and acoustic processing may be improved through auditory training, including music training and the use of assistive listening devices (Hornickel. Proc Natl Acad Sci USA 2012;109[41]:16731–16736; Tierney. Proc Natl Acad Sci USA 2015;112[32]:10062–10067). Future work should determine best practices for strengthening the domain in which an individual struggles.

Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.