The increasing use of hearing technology such as hearing aids and cochlear implants underscores the need for auditory training (AT) to optimize the use of these devices. However, clinical AT is not always feasible because of cost, scheduling, and other constraints. To address this gap, a collaborative team from the United States and South Korea developed Speech Banana, a mobile app that makes clinically relevant AT accessible to people with hearing loss. The team found that the app offers accessibility “with a validated curriculum, allowing users to develop speech comprehension skills with the aid of a mobile device.”
Keywords: mHealth, telehealth, hearing loss.
Speech Banana was originally a project of Joanne Song, Margo B. Heston, and Rohit Bhattacharya, then undergraduate students at Johns Hopkins University (JHU). They were advised by J. Tilak Ratnanather, DPhil, an associate research professor at JHU's Department of Biomedical Engineering. Ratnanather, who has been using a cochlear implant since 2012, has personally benefited from clinical AT sessions.
“Speech Banana was developed as a free mobile health (mHealth) app to provide clinically relevant AT on tablet and web platforms,” the authors explain in their paper published in JMIR mHealth and uHealth. “The name alludes to the shape that speech sound frequencies form when visualized on an audiogram.”
In designing the app, the team used a problem- and objective-centered Design Science Research Methodology. Their problem-centered goal was to provide greater accessibility to auditory training worldwide, and their objective-centered goal was to improve on current clinical techniques for auditory training. The now-expanded team used computer-based learning programs to identify gaps in existing auditory training and interviewed speech pathologists and users to determine what features worked best in their app.
The resulting app has English and Korean versions and runs on the iPad and in any web browser. It comprises 38 lessons, including exercises that use auditory stimuli alone and exercises that combine visual and auditory stimuli. Users can adjust the background noise volume, allowing them to train across various frequencies and signal-to-noise ratios.
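To illustrate the kind of adjustment described above, here is a minimal sketch of how background noise can be mixed with a speech signal at a user-chosen signal-to-noise ratio. This is an illustrative example only; the function name and implementation are assumptions, not the app's actual code.

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that the speech-to-noise power ratio equals
    `snr_db` decibels, then return the mixture.

    Illustrative sketch only -- not taken from Speech Banana's source.
    """
    speech_power = np.mean(speech ** 2)
    noise_power = np.mean(noise ** 2)
    # Target noise power for the requested SNR: SNR_dB = 10*log10(Ps/Pn)
    target_noise_power = speech_power / (10 ** (snr_db / 10))
    scaled_noise = noise * np.sqrt(target_noise_power / noise_power)
    return speech + scaled_noise
```

Lowering `snr_db` makes the background noise louder relative to the speech, which is what makes a training exercise progressively harder.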
“The design has to be robust to adaptations for other languages as was the case for the Korean version,” Ratnanather told The Hearing Journal. This is in line with their aim to increase the worldwide accessibility of auditory training.
The team specifically designed variations between the Korean and English versions to accommodate the phonetic and syntactic differences between the languages.
“To clarify things, no translation [of the lessons] was involved,” said Ratnanather. “Only the design structure was used and adapted for the second language. The challenge was to understand the different ways auditory training was used in different countries. The Korean version uses sentence stimuli for both training and testing. The English version uses word and sentence stimuli for training and testing respectively.”
“Furthermore, improvements to the design in the early stages of development were based on suggestions from users, i.e., programmers of mHealth apps must understand the perspective of both the clinician and user,” Ratnanather said.
A 2018 scholarly review (Olson et al.) highlighted the successful features of Speech Banana. The review assessed more than 200 AT mobile apps against five “expected” characteristics:
- The app provides feedback through real-time scoring and allows the user to repeat the test stimulus.
- It uses a large training corpus of words and sentences.
- It trains users on specific phonemes across the speech frequency space.
- It employs analytic and synthetic processing.
- It tracks user performance via a progress report.
Speech Banana met all five criteria and was one of the five iPad apps deemed “appropriate for a detailed review.”
Ratnanather also noted that their team has been getting positive feedback from users. “Users have found that the progress tracking page provides motivation and satisfaction. The opportunity to test the ability to listen to conversational sentences is attractive since most AT apps focus on word stimuli.”
He added, “Due to lack of access for auditory training in Korea, adults with cochlear implants are benefiting from using the app. The potential for global use is great. In addition to the languages mentioned in the paper [British English, French, German, Turkish, Arabic, Spanish, Hindi, Tamil, and Sinhalese], we received a request for a Malay version.”
With this demand, will the team keep developing the app to accommodate a broader user base?
“Alas, manpower is dependent on undergraduate programmers,” said Ratnanather.
But the team remains optimistic. “Implementation may be accelerated by using publicly available speech corpora such as the British English Speech Corpus, which is frequently used in speech recognition research,” the authors recommended. “To ensure phonemic consistency across languages, speech and language clinicians should be invited to collaborate on curriculum development.”