When (Part of) the Brain Can't Hear

Kraus, Nina PhD; White-Schwoch, Travis

doi: 10.1097/01.HJ.0000616152.63242.08
Hearing Matters

Dr. Kraus, left, is a professor of auditory neuroscience at Northwestern University, investigating the neurobiology underlying speech and music perception and learning-associated brain plasticity. Mr. White-Schwoch is a data analyst in the Auditory Neuroscience Laboratory (www.brainvolts.northwestern.edu), where he focuses on translational questions in speech, language, and hearing.

We have often advocated for an integrated view of the auditory system. That is, we contend it is important to think about how the entire system works together—in concert with sensory, cognitive, motor, and reward brain circuitry—to make sense of sound and learn.1 Indeed, in several articles we've published in The Hearing Journal, we have expressed skepticism about site-of-lesion frameworks for understanding auditory processing difficulties, out of concern that these frameworks take too narrow a view of functional hearing and risk overlooking the potential for learning and compensatory brain plasticity.

Still, patients with well-defined auditory system lesions and distinct auditory processing difficulties can illuminate the roles of different brain regions. We recently reported on two such patients.2 IT is a woman with auditory neuropathy, meaning she has no auditory brainstem response (ABR) due to subcortical dyssynchrony.3 NR is a man with bilateral auditory cortex lesions following prolonged treatment for leukemia; these lesions have caused “cortical deafness.”4 Both have normal cochlear function, as demonstrated by otoacoustic emissions (OAEs).

We measured frequency-following responses (FFRs) and cortical auditory-evoked potentials (CAEPs) in both patients, elicited by the speech sound “da” in quiet and in background noise. While FFRs have long been thought to be generated predominantly by subcortical auditory nuclei, emerging evidence suggests a cortical contribution as well.5 IT and NR therefore provided an opportunity to disentangle cortical and subcortical contributions to these evoked potentials.

We found a double dissociation between subcortical and cortical function and the FFR and CAEP, respectively. In particular, IT had no FFR despite a robust CAEP. In contrast, NR had a robust and normal FFR despite an absent CAEP. Thus, it appears that subcortical synchrony is both necessary and sufficient to generate an FFR, whereas cortical function is necessary and sufficient to generate a CAEP.

While this conclusion will intrigue the auditory electrophysiologists among us, perhaps more interesting are IT's and NR's real-life auditory processing abilities. In quiet, IT had excellent speech perception. In noise, however, IT was essentially deaf: even at extremely favorable signal-to-noise ratios, she struggled to identify single words and sentences.6 She also reported intermittent difficulties with sound awareness, particularly in detecting unexpected transients such as the phone or doorbell. NR, in contrast, could not understand speech in any setting; he communicated mainly through written notes and had extreme difficulty detecting any sounds.

Thus, we have another dissociation: subcortical synchrony is necessary for understanding speech in noise, whereas cortical function is necessary for understanding sound in general.

These rare patient cases can inform our evaluation of patients with more common listening difficulties, such as those being evaluated for auditory processing disorder. For example, given the correspondence between IT's and NR's electrophysiological and hearing profiles, we can infer that the FFR is a sensitive and specific measure of at least one brain function critical to hearing in noise. The FFR could therefore be an appropriate test for evaluating listening-in-noise difficulties and for tracking responses to interventions.7 In contrast, in patients who have general difficulty recognizing and responding to auditory input in any context, cortical potentials might be an appropriate screener to determine how quickly and robustly sound events are being detected.

It is important to frame these case studies within the context of the broader hearing network, rather than thinking of them only as telling us that “brain region X is necessary for hearing function Y.” The hearing brain is vast, too messy and too wily to fit into neat boxes. Consider, for example, that in blind patients, the auditory processing network takes over the visual cortical network, augmenting their ability to discern sounds.8 It is possible that a subtler but still functionally relevant adaptation could occur in patients with auditory system lesions such as these. In fact, Colucci has reported that NR's hearing has improved markedly in the several years since his lesions formed.9


REFERENCES

1. Trends Cogn Sci. 2015;19[11]:642.
2. J Neurophysiol. 2019;122[2]:844-848.
3. Laryngoscope. 1984;94[3]:400-406.
4. Electroencephalogr Clin Neurophysiol. 1982;53[2]:224-230.
5. Nat Commun. 2016;7:11070.
6. J Assoc Res Otolaryngol. 2000;1[1]:33-45.
7. Proc Natl Acad Sci USA. 2013;110[11]:4357-4362.
8. Nat Rev Neurosci. 2002;3[6]:443.
9. Hear J. 2018;71[12]:44-46.
Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.