Hearing Technology Special Feature

Emerging Hearing Assessment Technologies for Patient Care

Wasmann, Jan-Willem A.; Barbour, Dennis L. MD, PhD

doi: 10.1097/01.HJ.0000737596.12888.22

Since the standardization of hearing tests in the 1940s, the procedure and core equipment functionality for basic audiometry have evolved very little.1 In the digital era, however, change is imminent. Mobile telephones are the most rapidly spreading technology in history, with 6 billion in use after 30 years. Smartphones are driving digital health care in many medical fields. For instance, smartphones can be used to evaluate symptom severity and how symptoms are experienced by patients with Parkinson's disease.2 Given the broad use cases for smartphones today, one might almost forget that telephones were specifically designed to deliver sounds. Artificial intelligence software running directly on modern smartphones or on internet-accessible cloud servers can be combined with calibrated sound delivery to make hearing assessments accessible across the world.

Figure: Digital care pathway via distributed human and algorithmic expertise.

Figure 1: Modular approach to managing the complexity of health data streams combined from n health domains with identified key factors. Linkages between the modules Aij are based on standards and protocols that facilitate exchanges, similar to the network layers in information technology.26

ADVANTAGES OF MOBILITY & AUTOMATION

A key advantage of using mobile phones in health care delivery is their very mobility. Based on a survey of 60,000 Americans about time spent on medical care, an average outpatient visit involves 35 minutes traveling to a clinic and 42 minutes waiting for an appointment, while the appointment itself requires 70 minutes;3 travel and waiting together (77 minutes) thus exceed the time spent in the appointment itself. Hearing health care likely requires similar time commitments. Remote data collection using mobile phones, together with remote follow-up care, is therefore a promising way to lower a key barrier to care for patients.4 Remote data collection technologies can also reduce other barriers, including the need for low-touch audiology and constraints in low-resource environments.5,6

Remote data collection that saves a patient's time could be tele-supervised by a trained clinician or technician.4 Separate development work, however, has focused on automating data collection procedures to save clinicians' time, thereby enabling them to evaluate and treat more patients.7-10 Automating common, low-stakes decisions with algorithms tailored to specific questions, or as part of asynchronous services, is an appropriate way to preserve clinician effort for the more consequential decisions. These automations can also be applied to remote data collection for a further streamlined process.11 Automation by itself does not typically make a test faster for patients; properly designed machine learning methods, however, can deliver faster overall tests with more detailed assessments12 in clinical13 or remote settings.7
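
For intuition about how machine learning can make testing both faster and more informative, consider active learning: the model chooses each next stimulus where its prediction is most uncertain. The sketch below is a minimal, hypothetical illustration of that idea, not the published method of references 7 or 13; the simulated listener, probe grid, and kernel settings are all assumptions made for the example.

```python
# Minimal sketch of active-learning audiometry (illustration only; not the
# published algorithm of refs. 7 or 13). A Gaussian process classifier models
# the probability of tone detection over (frequency, intensity), and each new
# probe is placed where the model is least certain.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

rng = np.random.default_rng(0)

def simulated_listener(freq_khz, level_db):
    """Hypothetical listener: threshold rises with frequency; responses are noisy."""
    true_threshold = 20 + 10 * np.log2(freq_khz + 1)   # dB HL, invented for the demo
    p = 1 / (1 + np.exp(-(level_db - true_threshold) / 3))
    return rng.random() < p

# Candidate probe grid: 0.25-8 kHz, -10 to 90 dB HL.
freqs = np.linspace(0.25, 8, 25)
levels = np.linspace(-10, 90, 25)
grid = np.array([(f, l) for f in freqs for l in levels])

# Seed with a few trials near the extremes, then let the model pick the rest.
X = [[1.0, 80.0], [1.0, -10.0], [4.0, 80.0], [4.0, -10.0]]
y = [simulated_listener(f, l) for f, l in X]

gp = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=[2.0, 10.0]))
for _ in range(30):                                   # far fewer trials than a full grid
    gp.fit(np.array(X), np.array(y))
    p_detect = gp.predict_proba(grid)[:, 1]
    next_probe = grid[np.argmin(np.abs(p_detect - 0.5))]  # most uncertain point
    X.append(list(next_probe))
    y.append(simulated_listener(*next_probe))

# The fitted model now provides a detection probability (and hence an estimated
# threshold contour) at every frequency on the grid, not only at fixed octaves.
```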

Advances in machine learning, internet connectivity, and new data collection tools have the potential to build a system of distributed human and algorithmic expertise we refer to as computational audiology.14 The merit of distributed expertise is that health care resources can be allocated efficiently based on patient needs using scalable procedures. Diagnostic data are collected remotely or by care providers and shared within a computational infrastructure with experts in specialized centers. The clinical question at hand or the patient's need determines which level of diagnostic accuracy is called for and whether remote data streams alone are sufficient. Clinical decision support systems can guide the most useful next steps in a workup, while consumer-grade hardware provides the tools to collect the requisite data flexibly. The result is a hearing health care system organized to provide higher accuracy where needed and greater efficiency where allowed.
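
How such a system might allocate effort can be stated quite simply. The example below is a hypothetical triage rule of our own construction, not an existing clinical decision support system; the clinical questions and confidence thresholds are illustrative assumptions only.

```python
# Hypothetical triage rule for distributed expertise (illustration only).
from dataclasses import dataclass

@dataclass
class RemoteAssessment:
    question: str            # e.g., "screening", "hearing aid fitting", "surgical candidacy"
    model_confidence: float  # 0-1, confidence of the automated remote analysis

# Assumed accuracy requirements per clinical question (illustrative values).
REQUIRED_CONFIDENCE = {
    "screening": 0.70,
    "hearing aid fitting": 0.85,
    "surgical candidacy": 0.95,
}

def route(assessment: RemoteAssessment) -> str:
    """Return the next step: act on remote data, or escalate to human experts."""
    needed = REQUIRED_CONFIDENCE.get(assessment.question, 0.95)
    if assessment.model_confidence >= needed:
        return "proceed with remote result; report to patient and clinician"
    return "escalate: schedule in-clinic testing and specialist review"

print(route(RemoteAssessment("screening", 0.80)))           # remote data suffice
print(route(RemoteAssessment("surgical candidacy", 0.80)))  # escalate
```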

As exciting as these distinct advances are, their synergistic potential will not be realized without concerted management of their associated complexities. We therefore suggest a modular approach to control complexity while maintaining the performance advantage of integrating data streams from multiple sources. Patient-centric care can distinguish itself by considering outcomes across multiple domains, providing better context for making the optimal clinical decision for a specific individual. In Figure 1, each domain is depicted as a column containing three vertical nodes: (1) clinical care management (“why”), (2) computational process/methodology (“how”), and (3) flexible hardware and software (“what”). All nodes within a domain are required to provide adequate care within a discipline, yet linkages to adjacent cross-disciplinary nodes are needed for optimal patient-centric care. Following a modular approach, one can upgrade to a new prediction model without a complete overhaul of the clinical pathway. For example, online machine learning audiometry delivers the same stimuli and addresses the same questions as conventional audiometry, so it can substitute for conventional audiometry with no loss of functionality. It adds the capability to incorporate new questions (e.g., about language skills, cognitive processing, or visual perception) addressed by a variety of data-collecting devices, providing more patient information in the process.12
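
As a minimal sketch of this modular principle (our illustration, not an implemented interoperability standard), the example below defines a stable interface between the care-management node and the methodology node. A conventional interpolation of measured thresholds and a stand-in "machine learning" model are interchangeable behind that interface, so the prediction model can be upgraded without touching the pathway that consumes its output. All class and function names are hypothetical.

```python
# Minimal sketch of the modular principle: the clinical pathway depends only on
# a stable interface, so the underlying prediction model can be swapped.
from typing import Protocol, Sequence
import numpy as np

class AudiogramModel(Protocol):
    def fit(self, freqs_khz: Sequence[float], thresholds_db: Sequence[float]) -> None: ...
    def predict(self, freqs_khz: Sequence[float]) -> np.ndarray: ...

class InterpolatedAudiogram:
    """Conventional module: linear interpolation between measured octave thresholds."""
    def fit(self, freqs_khz, thresholds_db):
        self._f, self._t = np.asarray(freqs_khz), np.asarray(thresholds_db)
    def predict(self, freqs_khz):
        return np.interp(freqs_khz, self._f, self._t)

class SmoothedAudiogram:
    """Stand-in for a machine learning module (here simply a polynomial fit)."""
    def fit(self, freqs_khz, thresholds_db):
        self._coef = np.polyfit(np.log2(freqs_khz), thresholds_db, deg=2)
    def predict(self, freqs_khz):
        return np.polyval(self._coef, np.log2(freqs_khz))

def counseling_report(model: AudiogramModel) -> str:
    """Care-management layer: unaware of which methodology module is plugged in."""
    loss_at_4k = float(model.predict([4.0])[0])
    return f"Estimated threshold at 4 kHz: {loss_at_4k:.0f} dB HL"

measured_f = [0.25, 0.5, 1, 2, 4, 8]     # kHz
measured_t = [15, 15, 20, 30, 50, 60]    # dB HL
for model in (InterpolatedAudiogram(), SmoothedAudiogram()):
    model.fit(measured_f, measured_t)
    print(counseling_report(model))      # same pathway, interchangeable models
```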

APPLICATION IN PATIENT-CENTRIC CARE

In patient-centric care, the patient must be empowered to prioritize what is most relevant for his/her well-being and everyday function.15 Information needs to be tailored not only to professionals but, more importantly, to patients and their relatives so that they can contribute to informed decisions. To this end, the Ida Institute, a non-profit organization that promotes patient-centric hearing care, is working with hearing experts to design new tools that make hearing test outcomes easier to understand.16 Clinical judgment is required to determine the most appropriate scenario to provide to a patient. At the level of clinical care management, neither the patient nor the clinician should have to worry about correctly applying the underlying computational methodology (e.g., calculating the optimal audiometric masking procedure) or programming the software. Patient-centric hearing treatment includes monitoring hearing status and checking aided performance, along with the technical integrity of hearing aids or cochlear implants (processed at the hardware level), and reporting daily problems.17 Based on large data sets (processed at the methodology level), predictions can be made about when a patient is at risk of suboptimal care, leading to timely interventions (at the patient level) based on emerging needs and increased uncertainty about clinical status. This contrasts with the conventional procedure of periodic check-ups at fixed intervals, which leads to unnecessary visits when performance is stable.
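
A toy example may make the contrast concrete. The sketch below is a hypothetical monitoring trigger, not a validated clinical rule; the inputs and thresholds are assumptions chosen only to illustrate need-driven rather than calendar-driven follow-up.

```python
# Hypothetical monitoring trigger (illustrative thresholds, not a clinical rule).
from dataclasses import dataclass

@dataclass
class MonitoringSnapshot:
    reported_problems_per_week: int   # patient level: daily problem reports
    predicted_risk: float             # methodology level: risk score from large data sets (0-1)
    days_since_device_check: int      # hardware level: technical integrity of aid/implant

def needs_follow_up(s: MonitoringSnapshot) -> bool:
    """Trigger contact on emerging need rather than at a fixed calendar interval."""
    return (
        s.reported_problems_per_week >= 3
        or s.predicted_risk >= 0.6
        or s.days_since_device_check > 365
    )

stable = MonitoringSnapshot(reported_problems_per_week=0, predicted_risk=0.1, days_since_device_check=120)
at_risk = MonitoringSnapshot(reported_problems_per_week=4, predicted_risk=0.7, days_since_device_check=400)
print(needs_follow_up(stable))   # False: no visit scheduled while performance is stable
print(needs_follow_up(at_risk))  # True: timely intervention instead of waiting for a check-up
```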

One can imagine that audiometry, for example, becomes layered in with other patient-executed hearing tests, including localization performance assessed in a lab brought to the patient18 or in virtual environments,19 loudness tests,20 determination of dead regions in the cochlea,21 auditory nerve integrity testing,22 and speech-in-noise assessment.23 A test battery that integrates these hearing tests on a single tablet could enable this workflow. To our knowledge, such test batteries are currently available only to researchers and are not yet in clinical use.24 Tests from other disciplines, including language and cognitive tests,25 could be combined with hearing tests via the methodology and hardware layers to aid clinicians in selecting the right treatment for the right person at the right time.

We envision that computational audiology has great potential to improve access, accuracy, and efficiency of patient-centric hearing care worldwide.14 The major effort this emerging discipline needs to undertake is to devise interoperability standards that manage the dependencies between nodes. Embracing a modular approach to assessment and intervention within this framework will allow for scalable efforts to improve patient outcomes as new data streams are incorporated. These efforts will ultimately yield patient-centric benefits well beyond audiology.

REFERENCES

1. Hughson, W. & Westlake, H. Manual for program outline for rehabilitation of aural casualties both military and civilian. Trans Am Acad Ophthalmol Otolaryngol 48, 1-15 (1944).
2. Taylor, K. I., Staunton, H., Lipsmeier, F., Nobbs, D. & Lindemann, M. Outcome measures based on digital health technology sensor data: data- and patient-centric approaches. Npj Digit. Med. 3, 1-8 (2020).
3. Russell, L. B., Ibuka, Y. & Carr, D. How Much Time Do Patients Spend on Outpatient Visits? Patient Patient-Centered Outcomes Res. 1, 211-222 (2008).
4. Ratanjee-Vanmali, H., Swanepoel, D. W. & Laplante-Lévesque, A. Digital Proficiency Is Not a Significant Barrier for Taking Up Hearing Services With a Hybrid Online and Face-to-Face Model. Am. J. Audiol. 29, 785-808 (2020).
5. Swanepoel, D. W. & Hall, J. W. Making Audiology Work During COVID-19 and Beyond. Hear. J. 73, 20-22 (2020).
6. Swanepoel, D. W. & Clark, J. L. Hearing healthcare in remote or resource-constrained environments. J. Laryngol. Otol. 133, 11-17 (2019).
7. Barbour, D. L. et al. Online Machine Learning Audiometry. Ear Hear. 40, 918-926 (2019).
8. Charih, F., Bromwich, M., Mark, A. E., Lefrançois, R. & Green, J. R. Data-Driven Audiogram Classification for Mobile Audiometry. Sci. Rep. 10, 3962 (2020).
9. Eikelboom, R. H., Swanepoel, D. W., Motakef, S. & Upson, G. S. Clinical validation of the AMTAS automated audiometer. Int. J. Audiol. 52, 342-349 (2013).
10. Bastianelli, M. et al. Adult validation of a self-administered tablet audiometer. J. Otolaryngol. Head Neck Surg. 48, 59 (2019).
11. Swanepoel, D. W. & Hall, J. W. Making Audiology Work During COVID-19 and Beyond. Hear. J. 73, 20-22 (2020).
12. Barbour, D. L. & Wasmann, J. W. A. Performance and Potential of Machine Learning Audiometry. Hear. J. 74, 40,43 (2021).
13. Song, X. D. et al. Fast, Continuous Audiogram Estimation Using Machine Learning. Ear Hear. 36, e326-335 (2015).
14. Wasmann, J. W. et al. Computational Audiology: New Approaches to Advance Hearing Health Care in the Digital Age. Ear Hear. (2021). DOI: 10.1097/AUD.0000000000001041. In press.
15. International Classification of Functioning, Disability and Health (ICF). https://www.who.int/classifications/international-classification-of-functioning-disability-and-health.
16. Klyn, N. A. M., Rutherford, C., Shrestha, N., Lambert, B. L. & Dhar, S. Counseling with the Audiogram. Hear. J. 72, 12 (2019).
17. Remote Hearing Care and Teleaudiology. Hearing Tracker https://www.hearingtracker.com/services/remote-care.
18. Wasmann, J. A., Janssen, A. M. & Agterberg, M. J. H. A mobile sound localization setup. MethodsX 7, 101131 (2020).
19. Stecker, G. C. Using Virtual Reality to Assess Auditory Performance. Hear. J. 72, 20 (2019).
20. Schlittenlacher, J. & Moore, B. C. Fast estimation of equal-loudness contours using Bayesian active learning and direct scaling. Acoust. Sci. Technol. 41, 358-360 (2020).
21. Schlittenlacher, J., Turner, R. E. & Moore, B. C. A hearing-model-based active-learning test for the determination of dead regions. Trends Hear. 22, 2331216518788215 (2018).
22. Wasmann, J. W. A., van Eijl, R. H., Versnel, H. & van Zanten, G. A. Assessing auditory nerve condition by tone decay in deaf subjects with a cochlear implant. Int. J. Audiol. 57, 864-871 (2018).
23. Potgieter, J. M., Swanepoel, D. W. & Smits, C. Evaluating a smartphone digits-in-noise test as part of the audiometric test battery. South Afr. J. Commun. Disord. Suid-Afr. Tydskr. Vir Kommun. 65, e1-e6 (2018).
24. Shapiro, M. L., Norris, J. A., Wilbur, J. C., Brungart, D. S. & Clavier, O. H. TabSINT: open-source mobile software for distributed studies of hearing. Int. J. Audiol. 59, S12-S19 (2020).
25. Anguera, J. A. et al. Video game training enhances cognitive control in older adults. Nature 501, 97-101 (2013).
26. Alani, M. M. Guide to OSI and TCP/IP models. (2014).
    Copyright © 2021 Wolters Kluwer Health, Inc. All rights reserved.