
Editorial

Updates on Spatial Hearing

Ihlefeld, Antje, PhD

doi: 10.1097/01.HJ.0000657972.92810.1a

Do you recall the last time you tried to locate your car in a cavernous parking garage by following its honking via your remote key? You were relying on your sound localization ability. Unlike other sensations, such as feeling where a mosquito lands on your skin or distinguishing low from high frequencies, the direction of a sound cannot be directly read by your sensory organs—the ears. It must be computed—a feat your brain accomplishes by interpreting how much sooner a sound reaches one ear than it reaches the other, the so-called interaural time difference (ITD).
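The size of these timing differences can be estimated from head geometry alone. As a back-of-the-envelope sketch (the head radius and the spherical-head Woodworth approximation are assumptions for illustration, not values from this article):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
HEAD_RADIUS = 0.0875     # m, a typical adult head radius (assumed)

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate ITD for a distant source using the Woodworth
    spherical-head formula: ITD = (a / c) * (theta + sin(theta))."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source 30 degrees off to one side arrives roughly a quarter of a
# millisecond sooner at the nearer ear.
print(round(itd_seconds(30.0) * 1e6), "microseconds")
```

Even at the extreme (a source directly to one side), the ITD stays under a millisecond, which is why the brain's timing computation has to be so precise.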

The resulting spatial cues not only enable us to identify sound direction but also allow us to better comprehend speech in situations with background noise (where spatial cues help us differentiate between a target sound and the noise we are trying to ignore). Therefore, many studies over the past decades have focused on trying to understand how ITD is represented in the brain, how our ability to utilize it gets disrupted by hearing loss, and how it can be restored.

In people with normal hearing, ITD can reliably signal the direction of a sound source, in both anechoic and reverberant environments.1 Individuals with hearing loss, however, are often less capable of utilizing ITD, a problem that is only partially due to the limitations of hearing devices.2 Similarly, cochlear implant users struggle to interpret ITD, consistent with altered processing in the brain as opposed to just the ears.3 The inability to benefit from acoustic ITD cues delivered to the ears may in part be rooted in how the brain decodes auditory direction from them.

Engineers have suggested that humans decode sound direction through a scheme akin to a spatial map or a compass in the brain, with ITD-sensitive neurons aligned from left to right that fire individually when activated by a sound coming from a given angle—say, at 30 degrees to the right of your nose. This neural compass is equivalent to a mathematical operation known as interaural cross-correlation. Excitingly, neurophysiologists discovered early on that biological mechanisms for calculating the interaural cross-correlation function exist in the avian brain.4 This discovery spawned the development of computational models of human sound localization that can now predict with high accuracy where listeners with normal hearing localize sound based on interaural cross-correlations.
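The cross-correlation idea is straightforward to sketch in code: treat every candidate lag as one ITD-tuned "neuron" and let the most active one win. A minimal illustration with synthetic signals (the sample rate and delay are arbitrary choices for the sketch, not parameters from the studies cited here):

```python
import numpy as np

FS = 44_100  # sample rate in Hz; an arbitrary choice for this sketch

def estimate_itd(left: np.ndarray, right: np.ndarray) -> float:
    """Return the lag (in seconds) by which the right-ear signal trails
    the left-ear signal, taken as the peak of the interaural
    cross-correlation function."""
    xcorr = np.correlate(right, left, mode="full")
    lag = int(np.argmax(xcorr)) - (len(left) - 1)
    return lag / FS

rng = np.random.default_rng(0)
source = rng.standard_normal(2048)   # broadband noise source
delay = 20                           # samples; source sits to the listener's left

left = source
right = np.concatenate([np.zeros(delay), source[:-delay]])  # right ear hears it later
print(round(estimate_itd(left, right) * 1e6, 1), "microseconds")
```

Each lag tested by `np.correlate` plays the role of one neuron in the hypothesized compass: the lag with the largest correlation corresponds to the neuron that fires hardest for that sound direction.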

However, it turns out that there is no hard evidence that the mammalian brain decodes location based on a binaural cross-correlation map5 as birds do. Instead, mammals appear to rely on a more dynamic neural model where different neurons fire at varying rates depending on directional signals. Computational models that assume the brain compares these rates across sets of neurons can also predict human perception of sound directionality with high accuracy. They do this by recognizing neural response patterns that correspond to different sound directions and dynamically building new maps that link acoustic ITDs to a perceived location, depending on the context and the environment. A dynamic map is also plausible from an evolutionary perspective since our early mammalian ancestors were only able to interpret sound level differences across the ears as spatial cues. The ability to interpret ITDs evolved later.6

While both interaural cross-correlation-based maps and dynamic population rate coding models can predict sound localization with high accuracy, no neural imaging modality with sufficiently high resolution can determine which mechanism humans use. To date, evidence for either model has only been indirect.

We recently noticed, however, that the two models make different predictions depending on sound intensity. The dynamic rate coding model predicts that for faint low-frequency sounds, humans should make systematic errors, perceiving sounds as slightly closer to the midline of the head than they truly are. In contrast, the interaural cross-correlation model predicts no such intensity-dependent bias. We first confirmed this idea computationally by reconstructing neuronal responses to ITD in rhesus macaque monkeys (representing rate coding) and barn owls (a contender for the interaural cross-correlation-based compass). Next, we tested human ITD-based sound localization behaviorally, and discovered that humans do make systematically biased response errors, confirming the prediction of the dynamic rate coding model.7
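The intuition behind that prediction can be caricatured in a few lines. Assuming two broadly tuned hemispheric channels whose firing rates scale with sound intensity, and a naive subtractive readout (all numbers here are invented for illustration; this is not the fitted model of reference 7), a fainter sound yields a smaller rate difference and is therefore decoded nearer the midline:

```python
import math

def channel_rate(itd_us: float, preferred_sign: int, intensity: float) -> float:
    """Firing rate of one hemispheric channel: a sigmoid over ITD whose
    overall gain grows with sound intensity (illustrative numbers only)."""
    drive = 1.0 / (1.0 + math.exp(-preferred_sign * itd_us / 100.0))
    return intensity * drive

def decode_itd(itd_us: float, intensity: float, gain: float = 400.0) -> float:
    """Naive subtractive readout: perceived ITD is proportional to the
    rate difference between right- and left-preferring channels."""
    diff = channel_rate(itd_us, +1, intensity) - channel_rate(itd_us, -1, intensity)
    return gain * diff

# The same 300-microsecond ITD decoded at full vs. one-tenth intensity:
loud = decode_itd(300.0, intensity=1.0)
faint = decode_itd(300.0, intensity=0.1)
assert abs(faint) < abs(loud)  # the faint sound is heard closer to the midline
```

A cross-correlation compass, by contrast, reports the location of the correlation peak, which does not move when the whole pattern is scaled down, so it predicts no midline bias for faint sounds.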

We still cannot restore ITD-based spatial perception for many people with hearing aids and cochlear implants. However, our recent data suggest that this perceptual skill is based on a dynamic neural code, encouraging the notion that retraining people's brains is a worthwhile pursuit. To restore ITD-based perception, we could program hearing aids and cochlear implants to compensate for an individual's hearing loss, as well as offer targeted rehabilitation strategies that leverage a person's ability to retrain themselves to use spatial cues from their devices. This would be particularly important for situations with background noise, where most people with hearing loss cannot single out a target sound and where the restoration of spatial cues could help.

Thoughts on something you read here? Write to us at HJ@wolterskluwer.com

REFERENCES

1. Devore, S., Ihlefeld, A., Hancock, K., Shinn-Cunningham, B. and Delgutte, B., 2009. Accurate sound localization in reverberant environments is mediated by robust encoding of spatial cues in the auditory midbrain. Neuron, 62(1), pp.123-134.
2. Cubick, J., Buchholz, J.M., Best, V., Lavandier, M. and Dau, T., 2018. Listening through hearing aids affects spatial perception and speech intelligibility in normal-hearing listeners. The Journal of the Acoustical Society of America, 144(5), pp.2896-2905.
3. Ihlefeld, A., Carlyon, R.P., Kan, A., Churchill, T.H. and Litovsky, R.Y., 2015. Limitations on monaural and binaural temporal processing in bilateral cochlear implant listeners. Journal of the Association for Research in Otolaryngology, 16(5), pp.641-652.
4. Peña, J.L., Cazettes, F., Beckert, M.V. and Fischer, B.J., 2019. Synthesis of hemispheric ITD tuning from the readout of a neural map: commonalities of proposed coding schemes in birds and mammals. Journal of Neuroscience, 39(46), pp.9053-9061.
5. McAlpine, D., Jiang, D. and Palmer, A.R., 2001. A neural code for low-frequency sound localization in mammals. Nature Neuroscience, 4(4), pp.396-401.
6. Grothe, B. and Pecka, M., 2014. The natural history of sound localization in mammals - a story of neuronal inhibition. Frontiers in Neural Circuits, 8, p.116.
7. Ihlefeld, A., Alamatsaz, N. and Shapley, R.M., 2019. Population rate-coding predicts correctly that human sound localization depends on sound intensity. eLife, 8.
Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.