Computational audiology—the intersection of traditional hearing health care and technology—has immense potential for the advancement of the field as well as efforts to address the global burden of hearing loss.
While the term computational audiology was coined relatively recently, this is an area of exploration built on years of study and discovery. It uses “algorithms and data-driven modeling techniques, including machine learning and data mining, to generate diagnostic and therapeutic inferences and to increase knowledge of the auditory system,” states Dennis Barbour, MD, PhD, associate professor and director of master’s studies in the Department of Biomedical Engineering at Washington University.
Ongoing advancements in artificial intelligence (AI) and machine learning have opened the door to the development of new tools that can support audiologists in patient management and care.
“Audiology has always been deeply rooted in technology and the integration of human perception with technology. At some point, those technologies changed from analog to digital—driven by computational audiology,” explains Shae D. Morgan, AuD, PhD, CCC-A, assistant professor at the University of Louisville School of Medicine.
“Digitally amplifying sounds via a hearing aid might be one of the first game-changing applications of computational audiology,” he adds. “From there, we have the introduction of cochlear implants, noise reduction algorithms, improved computational models of peripheral and central auditory pathways, directionality in hearing aids, and so on.”
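The digital amplification Morgan cites as a first game-changer can be illustrated with a minimal sketch of a wide dynamic range compression (WDRC) gain rule, a standard hearing-aid strategy: full gain for soft sounds, reduced gain above a compression knee. The parameter values below are purely illustrative, not a clinical prescription.

```python
# Illustrative WDRC gain rule. Gain, knee point, and compression ratio
# are hypothetical example values, not a fitting prescription.

def wdrc_gain_db(input_level_db: float,
                 gain_db: float = 25.0,
                 knee_db: float = 50.0,
                 ratio: float = 2.0) -> float:
    """Return the gain (dB) the aid applies at a given input level (dB SPL).

    Below the compression knee the aid applies full linear gain; above it,
    output grows only 1/ratio dB per input dB, squeezing a wide range of
    sound levels into the listener's reduced dynamic range.
    """
    if input_level_db <= knee_db:
        return gain_db
    # Above the knee, each extra input dB yields only 1/ratio output dB,
    # so the applied gain shrinks by (1 - 1/ratio) dB per input dB.
    excess = input_level_db - knee_db
    return gain_db - excess * (1.0 - 1.0 / ratio)
```

A real hearing aid applies a rule like this independently in each frequency band, with parameters set from the patient’s audiogram by a prescriptive formula.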
While these models and algorithms have improved hearing devices and the field’s understanding of the human auditory system, limitations remain, according to Morgan, who highlights “the struggle to effectively emulate the kind of active, dynamic noise reduction combined with shifting attentional focus in a noisy background that is relatively easy for a normal-hearing system.”
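One reason classical noise reduction struggles with the dynamic scenes Morgan describes is that schemes such as spectral subtraction rely on a noise spectrum estimated during speech pauses and assume it stays roughly constant. A minimal sketch of the core operation (the magnitudes and spectral floor here are hypothetical example values):

```python
# Sketch of classical spectral subtraction on one short-time frame.
# frame_mag and noise_mag are per-frequency-bin magnitude spectra;
# in a real system they would come from an FFT of the microphone signal.

def spectral_subtract(frame_mag, noise_mag, floor=0.05):
    """Subtract the static noise estimate from each frequency bin,
    flooring at a fraction of the noisy magnitude to limit the
    "musical noise" artifacts of over-subtraction."""
    return [max(m - n, floor * m) for m, n in zip(frame_mag, noise_mag)]
```

Because `noise_mag` is fixed, the scheme breaks down exactly where a normal-hearing listener excels: when the background changes moment to moment or when the listener wants to shift attention to a different talker.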
Research in the diagnostic space holds significant promise, according to Barbour, who points to work that could lessen reliance on sound-attenuating rooms for testing. “Multiple labs have demonstrated that you can achieve actionable diagnostic inferences about someone’s hearing from circumaural headphones that attenuate sound sources in a normal acoustic room with passive attenuation—and in some cases active attenuation—and then deliver an appropriate test in that environment that can be effective.
“We could eliminate these expensive soundproof rooms, and this dovetails right into telemedicine, taking medical care out into communities,” Barbour says.
Manufacturers continue to introduce new tools that use computational audiology. For instance, a recently released portable computer-based system (KuduWave by eMoyo) can launch the immittance suite of tests and then, without a change of probe tips, proceed to pure tone and speech reception air conduction threshold seeking, followed by bone conduction assessment, explains Jackie L. Clark, PhD, clinical professor, School of Behavioral and Brain Sciences, The University of Texas at Dallas.
“Partnering with other disciplines, like pediatrics, pharmacology, ophthalmology, becomes a clearer possibility by providing an AI hearing screening suite kiosk in the offices of other health care providers,” Clark explains. “In the instance of a fail, the results would be simultaneously electronically transmitted to the recipient’s email or cellphone text as well as to the sponsoring audiologist.”
A COMMON MISCONCEPTION
While advances continue—and adoption of this technology grows—some hesitancy remains when it comes to integrating computational audiology into day-to-day practice. A common concern is that the audiologist will be phased out by computers; however, that is not the intention of those developing this technology, according to Barbour.
“We are trying to build tools to allow clinicians to practice at the top of their license,” he notes. “What I am genuinely trying to accomplish is to automate the things that can be automated, allowing the clinician to review the results and proceed with that information confidently.”
By providing tools that ease the demands on an audiologist’s time—such as administering an audiogram—Barbour hopes clinicians will have more time to devote to other aspects of care, such as therapeutic decisions and programming hearing aids or cochlear implants.
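Automated audiometry of the kind Barbour envisions commonly builds on the modified Hughson-Westlake staircase (“down 10 dB after a response, up 5 dB after a miss”). A minimal sketch, with the patient’s response simulated by a callback rather than a real button press:

```python
# Sketch of a modified Hughson-Westlake ("down 10, up 5") threshold search
# for one test frequency. `hears` stands in for the patient's response at a
# given level; in a real system it would come from a button press.

def find_threshold(hears, start_db=40, floor_db=-10, ceiling_db=120):
    """Return the lowest level (dB HL) heard twice on ascending runs."""
    level = start_db
    ascents = {}          # level -> times heard on an ascending trial
    ascending = False
    while floor_db <= level <= ceiling_db:
        if hears(level):
            if ascending:
                ascents[level] = ascents.get(level, 0) + 1
                if ascents[level] >= 2:   # heard twice while ascending
                    return level          # -> accept as threshold
            level -= 10                   # heard: drop 10 dB
            ascending = False
        else:
            level += 5                    # missed: raise 5 dB
            ascending = True
    return level  # ran off the scale; no reliable threshold found
```

For a simulated listener who responds whenever the tone is at or above 30 dB HL, `find_threshold(lambda db: db >= 30)` returns 30. Barbour’s research goes further than this fixed rule, using machine learning to choose each test tone adaptively, but the staircase shows how readily threshold seeking lends itself to automation.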
“If a patient can be tested before their appointment, they can come with enough information to start the fitting process earlier,” Barbour suggests. “I am not advocating for shortening appointments or replacing the audiologist. Humans need to be in charge of the ultimate decisions, always. We just need better tools to help everyone make better decisions.”
Barbour wants to dispel the myth that this technology will make audiologists obsolete. In his opinion, it will have the opposite effect. “If anything, computational audiology should expand the need for audiologists,” he says. “If we’re able to address hearing loss for the majority of the population in a way that works for the individual, the market for audiological services will grow tremendously and it is important the field is prepared for that.”
“Our advanced scope of practice involves integrating the findings of multiple domains assessed and mapping the most effective path for each patient with patient input,” emphasizes Clark. “We have all experienced patients who have ‘normal’ pure tone thresholds but are experiencing significant handicapping conditions when they are out in the real world.
“Simply put, an audiologist’s work is multi-factorial and is much more than a pure tone audiogram finding,” she continues. “Certainly, the pure tone audiometric result is one of many components that lead an audiologist to the pathway forward for the unique needs of each patient, but wouldn’t it be great to have a quality audiogram completed through AI so the patient contact can be centered around the way forward?”
IMPACT ON AUDIOLOGY
Computational audiology addresses a long-recognized deficit—a lack of audiologists to meet the growing demand for hearing-related care.
“There are not enough audiologists to serve the needs in high-income countries (like the U.S.), let alone low- and middle-income countries,” Clark explains. “Such a shortage of hearing and balance professionals can be mitigated with a very simple public health perspective by task shifting some of the simpler, more basic duties in audiology.
“Science has shown that there is no difference between trained human and AI-driven audiometric assessment,” she continues. “By shifting those simple, AI-completed tasks to trained audiology assistants, there is more time for each audiologist to efficiently integrate findings from multiple tests as well as engage in more advanced measures of hearing acuity, cognitive processing, balance testing, etc.”
Ongoing advancements in technology have streamlined and reduced the time needed for diagnostic and follow-up audiometric evaluations, according to Clark. “Some of the computational audiometry tools we have successfully deployed are able to provide the data while either generating interpretations of results or displaying the data for the audiologist to interpret,” she says.
In recent years there has been an expansion of mobile phone and tablet-based hearing screenings, according to Clark, who notes that various countries have launched much-needed national hearing screening programs via these methods. COVID-19 has highlighted the importance of teleaudiology and the role it can play in counseling and caring for patients.
“Hearing aid manufacturers have also deployed software to allow audiologists to make program modifications in patient hearing aids through teleaudiology,” Clark states. “In fact, some of the hearing aid manufacturers are touting that their hearing aid microchips have more sophisticated AI to assist the wearer’s ability to hear and understand speech—especially while in the midst of noise.”
Changes in audiology as a result of computational advancements increase the onus on practitioners, who have an increasingly broad array of tools at their disposal but don’t always have the time or resources to fully understand them, according to Morgan.
“The scope of what a good audiologist can do using these tools has dramatically increased beyond doing hearing tests and fitting hearing aids,” Morgan says. “For example, computational audiology extends into the realm of electrophysiology, advanced vestibular testing (via processing of rapid eye movements called saccades or nystagmus), cochlear implant processing (rather unique from hearing aid processing), immittance testing (for measuring changes to impedance—absorption and reflection—of the middle ear), and so on, all of which are in the scope of practice for an audiologist.”
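The saccade processing Morgan mentions often starts with a simple velocity-threshold detector (sometimes called I-VT): any stretch of the eye-position trace where angular velocity exceeds a cutoff is flagged as a candidate saccade. A minimal sketch, with the 30 deg/s threshold and the sample data as illustrative values:

```python
# Sketch of velocity-threshold (I-VT) saccade detection on an eye-position
# trace. Positions are in degrees; the threshold here is an example value,
# not a clinical setting.

def detect_saccades(positions_deg, sample_rate_hz, threshold_deg_per_s=30.0):
    """Return (start, end) sample-index pairs where point-to-point eye
    velocity exceeds the threshold, i.e., candidate saccades."""
    saccades = []
    start = None
    for i in range(1, len(positions_deg)):
        # point-to-point angular velocity in deg/s
        velocity = abs(positions_deg[i] - positions_deg[i - 1]) * sample_rate_hz
        if velocity > threshold_deg_per_s:
            if start is None:
                start = i - 1            # saccade begins
        elif start is not None:
            saccades.append((start, i - 1))  # saccade ends
            start = None
    if start is not None:                # trace ended mid-saccade
        saccades.append((start, len(positions_deg) - 1))
    return saccades
```

Clinical systems layer more on top of this—calibration, noise filtering, and separate handling of the slow and fast phases of nystagmus—but the thresholding step is the computational core.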
INTEGRATING INTO PRACTICE
Successfully integrating these tools—now and in the future—begins with audiologists understanding computational audiology and how this technology can enhance the care and services they already provide.
“Audiologists need to understand that computational audiology is not beyond our comprehension and that it is focused on the same goals as the clinician: improved patient outcomes, removed barriers to service (i.e., increased access and affordability), and the creation of more tools for clinicians to use to reach those goals,” Morgan says.
Before adding this technology to their practice, Morgan suggests audiologists conduct a critical review of the evidence supporting the tool or tools being considered. They should then use their clinical judgment to decide whether the tool is something they can confidently integrate, he says.
“As a field, we are often slow to adopt change, particularly to ‘tried and true’ testing methods that have served us well for decades,” Morgan notes. “With changes to health care provision and reimbursement structures, I believe that audiology will face significant challenges in the future that can only be softened through the adoption of evidence-based technologies.”
Looking forward, there is a plethora of avenues that computational audiology could take, and its impact on audiological practice will only continue to grow. Morgan sees a bright future with significant potential, including automated diagnostic testing that goes beyond hearing tests.
“For example, vestibular diagnostics can be made more accessible using VR and AR headsets rather than the sometimes prohibitively expensive vestibular tools currently in use,” he says. “With advances in machine learning, artificial intelligence, and improvements in computational power, signal processing can achieve human-like performance for speech recognition in complex, noisy backgrounds, but the future holds the miniaturization of this technology and its adoption in on-the-ear technologies like hearing aids.
“In the future, I imagine we will see the integration of hearing technologies with other wearable sensors (like digital glasses) and inputs from the glasses (i.e., eye gaze) could adjust the focus of the hearing aids or other technologies toward or away from a particular location,” Morgan continues. “Ultimately, the hearing technology could be integrated with the brain to decode these attentional shifts from one location or target to another.”
When Barbour envisions where computational audiology will lead next, he sees a world that includes true personalized care. “The most exciting things I see coming are the chance to combine the best of big data analytics and deep neural networks with probabilistic individualized imprints that we’re working on to make tools that are more efficient for the majority,” he says. “However, those who are outliers aren’t looking for efficiency; they’re looking for accuracy and something that actually reflects them as individuals, and I think we’ve got that too.
“The merger of these trends within machine learning looks extraordinarily bright and exciting from my perspective, and I’m quite optimistic that it will have an impact on all of health care,” Barbour concludes. “Audiology is positioned to take the lead in cultivating truly modern, 21st-century medicine.”
Thoughts on something you read here? Write to us at [email protected]