
Auditory Brain Development in Children with Hearing Loss – Part Two

Wolfe, Jace PhD; Smith, Joanna MS

doi: 10.1097/01.HJ.0000508363.81547.d2
Tot 10

Dr. Wolfe, left, is the director of audiology at Hearts for Hearing and an adjunct assistant professor at the University of Oklahoma Health Sciences Center and Salus University. Ms. Smith, right, is a founder and the executive director of Hearts for Hearing in Oklahoma City.

Editor's Note: This is the conclusion of a two-part article. The first part was published in the October 2016 issue.


5. The secondary auditory cortex: For sale to the highest bidder!

Although the activity observed in the primary auditory cortex was certainly interesting, the most relevant finding of the Nishimura et al. study was the activity they observed in the secondary auditory cortex (Nature. 1999;397[6715]:116 http://go.nature.com/2daRWNL). Little to no activity was observed in the secondary auditory cortex while participants listened to running speech, but robust neural activity was observed in the area when participants observed sign language (Fig. 6). This finding was one of the most prominent early reports of cross-modal reorganization in the secondary auditory cortices of people who are born with severe to profound hearing loss and deprived of access to intelligible speech during the first few years of life.

Stated differently, in the absence of access to intelligible speech from the primary auditory cortex, the secondary auditory cortex is colonized by the visual system to aid in visual function. Numerous published reports have shown similar findings over the past 15 years, with some indicating activity in the secondary auditory cortex in response to tactile stimulation as well (Brain Res Rev. 2007;56[1]:259 http://bit.ly/2duXvXw). The acquisition of the secondary auditory cortex by other sensory modalities likely explains why people who are born deaf without sufficient access to auditory stimuli develop exceptionally adept abilities in some areas that involve other sensory functions (e.g., peripheral vision is better in people who are born deaf without access to sound during the critical period) (Trends Cogn Sci. 2006;10[11]:512 http://bit.ly/2duWz5r). Since such reorganization occurs outside of the primary auditory cortex, a functional disconnection between the primary and secondary cortices was postulated (Brain Res Rev. 2007;56[1]:259 http://bit.ly/2duXvXw).

Recent research in Dr. Andrej Kral's laboratory investigated the activity of single neurons in the secondary auditory cortex in response to cochlear implant (CI) stimulation (J Neurosci. 2016;36[23]:6175 http://bit.ly/2duWIWf). The investigators demonstrated that, although some anatomical fiber tracts among cortical areas and the thalamus persist in deafness and the secondary cortex preserves some auditory responsiveness, visual responsiveness in the area is increased (PLoS One. 2013;8[4]:e60093 http://bit.ly/2daSqn3). Neurons in the secondary auditory area that responded to visual stimuli did not respond to auditory stimuli, demonstrating that visual input had occupied some of the auditory resources normally used for hearing.

The results of Dr. Kral's studies (along with the research of others) suggest that when the brain does not have access to intelligible speech during the early years of life, meaningful auditory input does not coordinate activity between the primary and secondary auditory cortices. Instead, the secondary auditory cortex assists with other sensory functions such as visual processing. Additionally, auditory stimulation introduced after the critical period of language development encounters disordered functional connections and interactions between the primary and secondary auditory cortices, which further limits auditory learning.


4. The break-up! Starring the primary and secondary auditory cortices

At this point, a natural question to ask is, “Where does the disconnect occur when the auditory areas of the brain do not receive early access to intelligible speech?” To answer that question, we turn to Dr. Kral's research exploring the functions within the multiple layers of the primary auditory cortex. The auditory cortex comprises six layers of neurons (2-4 mm thick; Fig. 7). Afferent inputs from the thalamus arrive at the cortex at layer IV, and much of the processing within the cortex takes place at layers I-III (i.e., the supragranular layers). Layers V-VI (i.e., the infragranular layers) modulate activity in the supragranular layers, serve as the output layers of the cortex to the subcortical auditory structures, receive top-down projections from higher-order areas, and integrate higher-order information with the bottom-up stream of auditory input.

Dr. Kral measured neural responses to auditory stimulation at the different layers of the auditory cortex using microelectrodes inserted to varying depths (Cereb Cortex. 2000;10[7]:714 http://bit.ly/2duWyyf). Because such testing is too invasive to conduct in young children, Dr. Kral completed his studies with deaf white cats. He discovered activity in layers I-IV but reduced activity in layers V-VI (Fig. 8). Among other deficits, reduced infragranular layer activity interferes with the integration of bottom-up and top-down information streams. As a result, Dr. Kral and his coauthors concluded that a functional decoupling between the primary and secondary auditory cortices had occurred, particularly from the top-down information stream.

This “break-up” between the primary and secondary cortices has significant functional implications for auditory and spoken language. When auditory signals are not efficiently and effectively transmitted from the primary to secondary auditory cortex, the secondary cortex cannot share spoken language and other meaningful sounds with the rest of the brain. This lack of distribution of auditory stimulation to the secondary auditory cortex and then to the rest of the brain explains why a teenager who was born deaf and never had access to auditory stimulation can detect sound at whisper-soft levels with a CI but cannot understand conversational speech or even distinguish between relatively disparate words. The bacon sizzling in the pan is audible, but the vivid experience evoked in an auditory system that has been exposed to rich and robust auditory stimulation from birth is diminished or altogether absent in the brain of a child who is deprived of auditory stimulation during the first few years of life. Auditory cortex must distribute auditory stimulation to the rest of the brain for sounds to be endowed with higher-order meaning. Such a connectome model of deafness has recently been used to explain inter-individual variations in CI outcomes (Lancet Neurol. 2016;15[6]:610 http://bit.ly/2e7u4pK).

Likewise, visual stimulation during the first few years of life is not sufficient to develop, support, sustain, or lay a foundation for listening and spoken language development. Again, the underlying explanation resides in the fact that the functional connections between the primary and secondary auditory cortices are not developed and are pruned away in the absence of exposure to meaningful auditory stimulation. Indeed, visual stimulation elicits responses in the secondary cortex, but it does not promote the development of functional synaptic connections between the primary and secondary auditory cortices, which are required for auditory information to be disseminated in a neural network across the brain.


3. Kids and kittens

In deaf white kittens, Dr. Kral showed that the loss of infragranular activity occurred developmentally between four and five months of age. This period is consistent with the critical period for adaptation to chronic CI stimulation (Cereb Cortex. 2002;12[8]:797 http://bit.ly/2duX0wj; Brain. 2013;136[Pt 1]:180-93 http://bit.ly/2daSJyd). In other words, the critical period of auditory brain development spanned the first few months of a cat's life, when cortical synapses appeared and were pruned. If auditory stimulation was not provided during these first few months, while synaptic development was taking place, development of auditory function was severely compromised. However, if these deaf kittens were provided with a CI within this time period (Fig. 8), the microelectrode recordings made by Dr. Kral and his colleagues suggest that the kittens’ auditory areas of the brain developed rather typically.

How does Dr. Kral's research with kittens translate to kids? For that answer, let's turn to the work of Dr. Anu Sharma, who measured the latency of the P1 component of the cortical auditory evoked potential (P1-CAEP) in children with normal hearing and in participants who were born deaf and received a CI at ages ranging from about 1 year old to early adulthood (Ear Hear. 2002;23[6]:532 http://bit.ly/2duYZAG). Children who received their CIs during the first three years of life had P1 latencies that were similar to those of children with normal auditory function (Fig. 9). In contrast, children who received their CIs at 7 years of age or older had P1 latencies that were almost invariably later than those of their age-matched peers with normal hearing. Most (but not all) of the children who received their CIs between 4 and 7 years of age also had delayed P1 latencies. Dr. Sharma concluded that the latency of the P1 component is a biomarker of auditory brain development, with later latencies representing a decoupling of the primary and secondary auditory cortices. In short, Dr. Sharma's P1 latencies provided an electrophysiologic indication of the critical period of language development, which has long been considered to span the first two to three years of life.

The functional implication of Dr. Sharma's work is obvious. Children with hearing loss must be appropriately fitted with hearing technology (e.g., hearing aids or a CI) as early as possible to avoid auditory deprivation and to provide access to a rich and robust model of intelligible speech. Early fitting of technology is necessary to feed the auditory cortex with adequate stimulation, which promotes synaptogenesis between the primary and secondary cortices and establishes the functional neural networks between the secondary auditory cortex and the rest of the brain that make incoming sound come to life and possess higher-order meaning. To optimize listening and spoken language, the brain must be fed a hearty diet of intelligible speech. Visual stimulation does not promote the connection between the primary and secondary cortices necessary to develop spoken language skills; decades of clinical and anecdotal observations indicate poor auditory and spoken language outcomes in children who are deprived of sound during the critical period. Also, a large number of studies show better listening, spoken language, and literacy skills in children who communicate using listening and spoken language relative to their peers who use sign language or Total Communication (i.e., a combination of aural/oral and sign language) (Int J Audiol. 2013;52 Suppl 2:S65 http://bit.ly/2duYoPK; Otol Neurotol. 2016;37[2]:e82 http://bit.ly/2duY0AS; Ear Hear. 2011;32[1 Suppl]:84S http://bit.ly/2daTveg; Ear Hear. 2003;24[1 Suppl]:121S http://bit.ly/2duYHdp).

We must remember that every day within the critical period is actually critical. In other words, delays (and most probably, lifetime deficits) in listening and spoken language abilities will occur if a child is deprived of sound throughout the first 30 months of life, even if cochlear implantation is provided weeks or months before the third birthday. The deprivation that occurred during the first two and a half years of the child's life will almost certainly result in a weakening of the functional synaptic connections between the primary and secondary auditory cortices and a subsequent decline in the functional neural networks between the secondary auditory cortex and the rest of the brain. We know that the typical child from an affluent home hears 46 million intelligible words by his or her fourth birthday (American Educator, 2003). These 46 million words serve as the bricks and mortar that lay the functional pathways between the primary and secondary auditory cortices and establish the neural networks necessary for sound to come to life and possess higher-order meaning. Admittedly, it is a daunting goal to provide access to these 46 million words by the fourth birthday. To do so, we must remind ourselves that every day within the critical period is an important opportunity to nourish the developing auditory brain with intelligible speech.
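To put that goal in perspective, here is a quick back-of-the-envelope calculation (our illustration, not part of the cited research) of the daily and hourly exposure that the 46-million-word figure implies; the 12 waking hours per day used below is simply an assumed round number.

```python
# Illustrative arithmetic only: the exposure implied by the 46-million-word goal
# cited above (assumes ~365 days/year and ~12 waking hours/day).
TOTAL_WORDS = 46_000_000       # cumulative intelligible words by the fourth birthday
DAYS = 4 * 365                 # roughly 1,460 days from birth to the fourth birthday
WAKING_HOURS_PER_DAY = 12      # assumption for illustration

words_per_day = TOTAL_WORDS / DAYS
words_per_hour = words_per_day / WAKING_HOURS_PER_DAY

print(f"~{words_per_day:,.0f} words per day")    # ~31,500 words per day
print(f"~{words_per_hour:,.0f} words per hour")  # ~2,600 words per waking hour
```

On the order of 2,600 intelligible words per waking hour is a reminder of why consistent, all-day use of hearing technology from the earliest possible fitting matters so much.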


2. Upping the ante

Important work by Mortensen and colleagues has shown that the importance of the complex neural networks that arise when the secondary auditory cortex shares an auditory signal with other areas of the brain extends beyond auditory skill development (Neuroimage. 2006;31[2]:842 http://bit.ly/2duXRNI). Mortensen used PET imaging to examine the brain while high-performing and poor-performing CI users listened to running speech. The high performers showed activity in the left inferior prefrontal cortex while the poor performers did not (Fig. 10). The left inferior prefrontal cortex, a region often referred to as Broca's area, is involved with phonological processing, phonemic awareness, speech production, and literacy aptitude. As a result, robust connections must be developed between the primary and secondary auditory cortices so that the latter may facilitate responses in the left inferior prefrontal cortex, which is imperative for several reasons. Engaging the left inferior prefrontal cortex in response to meaningful sounds is necessary for the child to learn to produce intelligible speech. As we've known for quite some time, children speak as they hear, and access to intelligible words is necessary to develop intelligible speech. Furthermore, access to intelligible speech is necessary to develop phonemic awareness (e.g., knowing that the letter “A” says “ah”), which serves as the foundation for reading development. To summarize, children with hearing loss need brain access to intelligible speech as early and as often as possible to develop their auditory skills as well as their speech production and literacy abilities.


1. The auditory brain is hungry! Feed it clearly and frequently.

So what do we do to promote auditory brain development in children with hearing loss and optimize their listening, spoken language, and literacy abilities? We stick with the tried-and-true fundamentals of modern, evidence-based pediatric hearing health care. We seek to accurately diagnose children with hearing loss by 1 month of age so that hearing aids may be fitted as soon as possible using probe microphone measures and evidence-based prescriptive targets (e.g., Desired Sensation Level 5.0 or NAL-NL2). For children with severe to profound hearing loss, we move forward with cochlear implantation between 6 and 9 months of age. For all children using hearing technology, we also ensure they are fitted with digital adaptive remote microphone (RM) systems so they have access to intelligible speech in our noisy world. We are convinced that the road to 46 million words is much more manageable to navigate with the use of a digital adaptive RM system. Research has clearly shown that RM technology is the most effective means of improving communication in noise (J Am Acad Audiol. 2013;24[8]:714 http://bit.ly/2duY5nN; Am J Audiol. 2015;24[3]:440 http://bit.ly/2duYIxH). Additionally, we must routinely administer audiological evaluations to ensure children are hearing well with their hearing technology.

We also must make certain that each of the child's caregivers understands the importance of using hearing aid, CI, and RM technology during all waking hours, beginning the first day these technologies are fitted.

Finally, we must assist the family with creating a robust daily conversational/language model that is rich in intelligible speech. Families must also be aware that their child needs to hear 46 million words by the time he or she is 4 years old and that the child's auditory brain development depends upon it. Families must understand that a child's long-term listening, spoken language, literacy, academic, and social development is influenced by early brain access to intelligible speech, and they must be equipped with skills to optimize the child's exposure to intelligible speech. Audiologists and speech-language pathologists must work hand in hand with families to achieve these goals.

Neuroscientists from around the world have enlightened our profession on the neurophysiologic underpinnings of listening and spoken language development. Of particular note, children must have access to intelligible speech and meaningful acoustic input to fully develop the auditory areas of the brain and optimize spoken language and literacy aptitude. Visual stimulation in the form of sign language does not promote development of the functional synaptic connections between the primary and secondary auditory cortices. These connections serve as the springboard for a neural network/connectome that fully engages the brain and is necessary for the development of typical listening and spoken language abilities. Modern hearing technology (e.g., hearing aids, CIs, digital adaptive remote microphone systems) allows almost every child with hearing loss the access needed to fully develop the auditory areas of the brain. It is our job as pediatric hearing health care professionals to provide the children we serve with the brain growth they deserve.

Copyright © 2016 Wolters Kluwer Health, Inc. All rights reserved.