As technology advances, hearing aids continue to improve. But in recent years, most improvements have been limited to aesthetics, comfort, or secondary functions (e.g., wireless connectivity). With respect to their primary function—improving speech perception—the performance of hearing aids has remained largely unchanged. While audibility may be restored, intelligibility is often not, particularly in noisy environments (Hearing Health Care for Adults. National Academies Press, 2016).
Why do hearing aids restore audibility but not intelligibility? To answer that question, we need to consider what aspects of auditory function audibility and intelligibility depend on. For a sound to be audible, it simply needs to elicit a large enough change in auditory nerve activity for the brain to notice; almost any change will do. But for a sound to be intelligible, it needs to elicit a very particular pattern of neural activity that the language centers of the brain can recognize.
UNDERSTANDING THE LIMITATIONS
The key problem is that hearing loss doesn't just decrease the overall level of neural activity; it also profoundly distorts the patterns of activity such that the brain no longer recognizes them. Hearing loss isn't just a loss of amplification and compression; it also impairs many other important and complex aspects of auditory function (Trends Neurosci. 2018 Apr;41(4):174).
A good example is the creation of distortions: When a sound with two frequencies enters the ear, an additional sound is created by the cochlea itself at a third frequency that is a complex combination of the original two. These distortions are, of course, what we measure as distortion product otoacoustic emissions (DPOAEs), and their absence indicates impaired cochlear function.
But these distortions aren't only transmitted out of the cochlea into the ear canal. They also elicit neural activity that is sent to the brain. While a hearing aid may restore sensitivity to the two original frequencies by amplifying them, it does not create the distortions and, thus, does not elicit the neural activity that would have accompanied the distortions before hearing loss.
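The best-known of these combinations is the cubic distortion product, which appears at the frequency 2f1 − f2 and is the component most commonly measured in clinical DPOAE testing. A minimal sketch of the arithmetic (the function name is illustrative, not from any standard library):

```python
def dpoae_frequency(f1, f2):
    """Frequency (Hz) of the cubic distortion product 2*f1 - f2.

    Convention: f1 < f2, typically with a ratio f2/f1 of about 1.2.
    """
    if not f1 < f2:
        raise ValueError("expected f1 < f2")
    return 2 * f1 - f2


# Primaries at 2000 and 2400 Hz produce a distortion product at 1600 Hz,
# below both of the original tones.
print(dpoae_frequency(2000.0, 2400.0))  # → 1600.0
```

Note that the distortion product falls below both primaries, which is why it is readily separable from them in the ear-canal recording.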
These distortions themselves may not be relevant when listening to broadband sounds like speech, but they are representative of the complex functionality that hearing aids fail to restore. Without this functionality, the neural activity patterns elicited by speech are very different from those that the brain has learned to expect. Because the brain does not recognize these new patterns, perception is impaired.
A useful analogy is to think of the ear and brain as two individuals having a conversation. The effect of hearing loss is not simply that the ear now speaks more softly to the brain, but rather that the ear now speaks an entirely new language that the brain does not understand. Hearing aids enable the ear to speak more loudly, but make no attempt to translate what the ear is saying into the brain's native language. In this sense, hearing aids are like tourists who hope that by shouting they will be able to overcome the fact that they are speaking the wrong language.
Why don't hearing aids correct for the more complex effects of hearing loss? In severe cases of extensive cochlear damage, it may be impossible. Even when hearing loss is only moderate, it is not yet clear how a hearing aid should transform incoming sounds to elicit the same neural activity patterns as the original sounds would have elicited before hearing loss.
But there is reason for optimism. In recent years, advances in machine learning have been used to transform many technologies, including medical devices (Nature. 2015 May 28;521(7553):436). In general, machine learning is used to identify statistical dependencies in complex data. In the context of hearing aids, it could be used to develop new sound transformations based on comparisons of neural activity before and after hearing loss.
But machine learning is not magic; to be effective, it needs large amounts of data. Fortunately, there have also been recent advances in experimental tools for recording neural activity (J Neurophysiol. 2015 Sep;114(3):2043; Curr Opin Neurobiol. 2018 Feb 10;50:92). These new tools allow recordings from thousands of neurons at the same time and, thus, should be able to provide the required “big data.”
The combined power of machine learning and large-scale electrophysiology provides an opportunity for an entirely new approach to hearing aid design. Instead of relying on simple sound transformations that are hand-designed by engineers, the next generation of hearing aids will have the potential to perform sound transformations that are far more complex and subtle. With luck, these new transformations will enable the design of hearing aids that can restore both audibility and intelligibility—at least to a subset of patients with mild-to-moderate hearing loss.