Human beings adapt to sensory deprivation in at least two ways. First, they modify their behavior in ways they believe, sometimes incorrectly, to be beneficial. Second, they undergo physiologic adaptation through neural plasticity. Such adaptation is often observed in the reorganization of sensory maps following damage to peripheral receptors.1
Furthermore, patients presenting similar audiometric profiles frequently obtain very different benefits from amplification. Various factors may account for this. One factor relates to an individual's assimilation of acoustic, linguistic, and environmental cues. To optimize this integration, a person must call upon many skills and processes, including cognition, auditory memory, auditory closure, auditory learning, metalinguistics, use of pragmatics, semantics, grammatical form, localization, visual cues, repair tactics, and—since communication is transactional, not one-way—effective interactive communication strategies.
While modern hearing aids can make the acoustic signal audible, they may fail to rectify impaired frequency and temporal resolution, improve the skills listed above, or correct misguided compensatory strategies. Limitations in any of these areas, accompanied by the reduced redundancy found in adverse acoustic conditions, require the listener to make decisions based on acoustic information that is fragmented compared with what a normal-hearing person receives. Given this fragmented signal, the hearing-impaired person must use compensatory strategies and skills to interpret it.
When a person loses a limb and is fitted with a prosthetic device, professionals and the patient recognize the importance of physical therapy to strengthen adjacent muscles (the physiologic adaptation) and instruction to optimize function (the behavioral modification). Therapy also is normally recommended for patients displaying central auditory processing disorders.
Central auditory processing disorders and loss of a limb have something in common with peripheral hearing loss: the likelihood that the physiologic deficit will lead the person to adopt behavioral modifications. Yet, therapy for persons with hearing loss is rarely considered.
It is possible that the mere introduction of amplification will not produce the desired adaptation of the auditory system and auditory skills unless it is accompanied by training. Recent discoveries in neuroscience suggest that training may enhance auditory skills and even bring about changes in the central auditory system.2–4
We have long known that amplification alone does not fully meet the needs of a large percentage of hearing-impaired patients. Hearing healthcare professionals recognize that additional therapy can enhance the benefits of hearing aids. However, time and cost considerations often preclude the use of such therapy.
In this article, we will discuss the theoretical foundations of individual Listening and Auditory Communication Enhancement (LACE) training and report on efforts at the University of California, San Francisco (UCSF) to develop a cost-effective method of providing such training. Our efforts have been guided by two main assumptions.
1. People can use even a fragmented speech signal (which, despite advances in hearing aids, continues to be the reality) and, through home-based training, adopt behavioral strategies, possibly coupled with accelerated and facilitated acclimatization via cortical plasticity, to improve communication effectiveness.
2. Such training is best implemented by means of an individualized protocol established by thorough testing that defines the strengths and limitations of a given patient's communicative profile.
PILOT STUDY OF LISTENING TRAINING
To address our first assumption, we conducted a pilot study at UCSF. We gave eight experienced hearing aid users a baseline audiologic examination, including the HINT, QuickSIN, and a training-related speech-in-noise task. The subjects were then assigned to either a control group that did not receive training or an experimental group that participated in a training task 30 minutes a day, 5 days a week for 4 weeks. This training schedule was based on the results of an informal query of clinic patients regarding how much time they would be willing to spend on an auditory training program.
The training stimuli consisted of 1500 digitally recorded sentences in noise. The signal-to-noise ratio varied pseudorandomly from −5 to +3 dB. The selection of signal-to-noise values was based on published data of typical signal levels as a function of background noise.5 The sentences were organized into five categories and further subdivided into topics, as if the subject were listening to a story on the radio. The training protocol was recorded on CD-ROM to be used on the subject's home computer. Thus, only subjects with access to a home computer participated in this project. We will comment on this potential limitation later.
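The pseudorandom assignment of signal-to-noise ratios described above can be sketched as follows. This is an illustrative sketch under stated assumptions, not the study's actual software; the function name and fixed seed are our own inventions:

```python
import random

# Illustrative sketch (not the actual LACE stimulus software): draw one
# pseudorandom signal-to-noise ratio per sentence from the -5 to +3 dB
# range described in the article.
def make_snr_schedule(n_sentences, low_db=-5, high_db=3, seed=42):
    # A fixed seed makes the "pseudorandom" order repeatable across runs,
    # which is one plausible way to keep stimulus lists comparable.
    rng = random.Random(seed)
    return [rng.randint(low_db, high_db) for _ in range(n_sentences)]
```

For the 1500-sentence corpus, `make_snr_schedule(1500)` would yield one SNR value per recorded sentence.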
Testing and training were performed with the subjects' hearing aids at the most comfortable settings for the stimuli. All subjects received both detailed oral instructions at UCSF and written instructions. Training proceeded as follows:
Step 1. At the beginning of each training session, the subject adjusted the volume of a calibration sentence to achieve a comfortable listening level and performed no further adjustments during that training session.
Step 2. The subject received an audio-only presentation of the first sentence. The subject was instructed to identify as much of the sentence as he/she was able to distinguish, either aloud or silently.
Step 3. The subject advanced the program and was presented with both the audio and visual representation of the sentence, thus providing immediate feedback.
Step 4. The subject again advanced the program and received the audio presentation only. The subject was asked to think about the sentence as the auditory stimuli were repeated, paying close attention to the sounds that were not initially heard.
Step 5. The subject proceeded to the next sentence. Steps 2–5 were repeated until 30 minutes had passed.
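As a rough sketch, the step sequence above amounts to a timed loop over the sentence list. The function below is an illustrative assumption, not the pilot software; playback and display are stubbed out as comments, and the 45-second per-sentence estimate is hypothetical:

```python
# Illustrative sketch of the training session: steps 2-5 repeat until the
# 30-minute budget is spent. Audio playback and text display are stubbed out.
def run_session(sentences, session_minutes=30, seconds_per_sentence=45):
    budget_s = session_minutes * 60  # Step 1 (volume calibration) happens once, before this loop
    elapsed_s = 0
    presented = []
    for sentence in sentences:
        if elapsed_s + seconds_per_sentence > budget_s:
            break  # Step 5's stopping rule: end once 30 minutes have passed
        # Step 2: audio-only presentation; the listener identifies what they can
        # Step 3: audio plus on-screen text, giving immediate feedback
        # Step 4: audio-only repeat; the listener attends to initially missed sounds
        presented.append(sentence)
        elapsed_s += seconds_per_sentence
    return presented, elapsed_s
```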
The subjects were seen for testing 2 weeks after the start of the training (mid-training), at the end of training (post-training), and 4 to 6 weeks after completing training (follow-up). The control subjects were seen for testing at the same time intervals; however, they never completed any of the training.
The number of subjects in each group was small (four). Three of the four trained subjects had improved scores on all three tests administered during the post-training and follow-up sessions. The fourth subject was unable to complete the training task as instructed. Conversely, none of the four subjects in the control group improved on all three of the tests. These results are depicted in Figure 1.
Although the sample size is too small to perform definitive statistical analysis, the trends from this preliminary study suggest that a take-home training program with a realistic paradigm could improve some listening skills in a relatively short time. Still, the inability of one of the subjects to master the computerized training task underscores that the training must be tailored to the individual and intuitive to use.
Despite these encouraging results, we believe that incorporating the second assumption, i.e., establishing an individualized protocol based on the results of comprehensive testing that defines the strengths and limitations of a particular patient's communicative profile, could make training more time- and cost-efficient and its outcomes longer lasting. To ascertain the optimal training parameters, one must assess a patient's listening proficiency and capacity to assimilate the skills required to communicate in the real world.
Speech-recognition tests provide only a rough estimate of a patient's ability to incorporate relevant acoustic, linguistic, and contextual environmental cues. A major shortcoming of current tests is their failure to consider that communication is bi-directional.
Flynn identifies the need to go beyond traditional speech-perception testing to develop a battery of tests that can define an individual's ability to use contextual cues and metalinguistic abilities.6 He bemoans the lack of face validity of existing speech measures and points out that commonly used procedures merely assess acoustic perception, and do not “require the organization of streams of information.” Flynn observes that if one is satisfied with simply measuring a patient's ability to identify speech in the clinic, then today's speech-recognition tests are acceptable. If, however, the goal is to estimate how an individual might benefit from amplification in the real world, the assessment must take into consideration that conversation is interactive and contains cues in addition to those obtained merely from audibility and auditory perception.
To develop a test battery capable of determining an individual's communication profile, it is helpful first to identify the elements comprising communication. Kiessling et al. proposed the following cascade leading to effective communication.7
The most basic step is hearing, “a passive function providing access to the auditory world via the perception of sound.” Next comes listening, the “process of hearing with intention and attention.” Listening is an active process requiring effort. Listening is followed by comprehending, “the reception of information, meaning, and intent.” Comprehending is uni-directional. The final step is communicating, “the bi-directional transfer of information, meaning, and intent.”
It is easy to see how a peripheral hearing impairment affecting any one of these steps can adversely impact the other steps as well. If one cannot hear, it is difficult to listen. A hearing aid may provide adequate audibility for hearing, but if the wearer is a poor listener, comprehension will not be achieved. If a person improves his or her listening skills, but is unable to understand the intent and meaning of the message, communication will be negatively impacted.
Our ultimate objective in training is to enhance a patient's communication abilities. As stated, one necessary step is to improve listening skills. To do this we must enhance the listener's ability to use all relevant cues to formulate a whole meaning out of a fragmented auditory input. The four elements described above are essential to this process. It should be noted, however, that there is considerable overlap in the abilities used to achieve each element.
In addition, not only will enhanced listening skills lead to better comprehension and communication, but better comprehension and communication will further enhance listening skills. In other words, there is a positive feedback loop, which ultimately produces improvements both in the skill being trained and in the other elements of communication (Figure 2). Conversely, when a breakdown occurs anywhere in the process, a negative feedback loop may result, similarly impacting overall auditory communication.
For example, if patients realize they are not comprehending well and immediately become anxious about the situation, they will pay less attention to the acoustic signal. That will impair their listening and, therefore, their communication.
So, given this interaction, what type of test battery might best assess these four elements of communication? In the following discussion, we will consider tests that could be included in a communication profile test battery. Most of these are currently available. Many are already used routinely in audiology practices, while some will need to be adapted for clinical usage. Other tests suitable for use in a communication profile battery remain to be developed.
Developing a test battery
Monosyllabic-word and sentence-recognition testing in quiet measure the most basic of communicative elements and assess the first-order task, hearing. The requirement for completing these tasks is audibility.
To test the second-order task, listening, one must assess the ability to attend and to direct attention. Word- and sentence-recognition testing in noise, using procedures such as the Hearing in Noise Test (HINT)8 or the Speech in Noise (SIN) or QuickSIN test,9 assesses listening. However, such tests do not take into account whether the stream of acoustic information (not just the words) is correctly interpreted.
To test the third-order task, comprehension, one might use a test such as the Speech Perception in Noise (SPIN) test.10 The SPIN determines whether the listener can take advantage of contextual and linguistic (high- versus low-predictability) information.
Comprehension also is enhanced when a listener uses non-acoustic cues. Erber used the Sent-Ident test to determine the minimum set of cues required for the patient to understand speech.11 This procedure measures a listener's ability to use additional acoustic and non-acoustic cues, including visual cues, repetition, and clarification, to assist in sentence interpretation.
Other factors that may negatively impact comprehension and communication are deficiencies in figure-ground perception, auditory memory, and auditory closure skills. While all the speech-in-noise tasks require a degree of auditory memory and auditory closure, some tests specifically assess auditory memory. These include the Goldman-Fristoe-Woodcock (GFW) test of auditory memory12 and the Dichonics CAPD screener test of phoneme memory.13 A test that specifically requires auditory closure would be a time-compressed speech measure. This is also relevant to the common complaint by patients that they cannot understand rapidly spoken conversation.
In assessing communication, one must recognize that communication is interactive and depends on the context and linguistic environment. In other words, there is often a predictable relationship between the preceding utterance and the message in question. Such cues add to the redundant nature of communication and are particularly helpful in adverse listening environments. Procedures employing “adjacent pairs” can help determine if a listener uses these commonly occurring cues effectively.14,15
Flynn demonstrated how a preceding question, "Why is Jim limping today?," assists in understanding the sentence "He twisted his ankle playing tennis last night." These two statements form an adjacent pair. Also, since communication is bi-directional, assessing a listener's ability to employ interactive conversational repair strategies would be useful. Programs that are helpful for training elements of listening and communication skills but that would require modification to become useful clinical assessment tools include the Dyalog screener16 and components of the CSLU (Center for Spoken Language Understanding) toolkit.17
To administer such a complete communication profile test battery would likely take the professional an inordinate amount of time. Therefore, some tests would need to be modified or conducted via automated procedures not requiring the presence of the clinician.
One would expect that the listeners who make the greatest use of the additional cues cited above to fill in the gaps created by their hearing loss would be the most successful communicators. Conversely, if a listener failed to take advantage of the additional cues, that would be a deficit to address in therapy. A comprehensive test battery could identify such deficits.
Also, by appraising a patient's listening and communication strengths and weaknesses, such a test battery would enable the professional to counsel the patient on how effective amplification is likely to be and to design individualized, deficit-specific therapy programs.
There are justifiable reasons that practitioners do not routinely provide individualized aural rehabilitation. One is that it is so time-intensive that many professionals do not consider it cost-effective. A viable alternative is group aural rehabilitation. A limitation of that approach, however, is that it ignores differences among individual patients.
By using individualized computerized training, LACE can overcome many limitations of traditional therapy. Computerized training has proven effective in sensory training for visual deficits,18 as well as for cognitive disorders such as age-associated memory deficits and early-stage Alzheimer's disease.19
Also, well-established rules of perceptual learning can be easily implemented in a computerized protocol. For example, it is essential that the patients being trained maintain a high level of interest. Visual graphics and dynamic interaction between the patient and the computer program help hold their attention.
In addition, the task must be difficult enough to present a challenge, but not so hard as to create frustration. One can accomplish this by keeping the level of difficulty of the training close to the subject's threshold for the task. This model has proven beneficial in driving neural plasticity.20 In other words, the difficulty level of the task is based on the accuracy of a person's response to the previous task. For example, if a subject can correctly identify a sentence presented at a +2 dB signal-to-noise ratio (SNR), the next presentation would be made at a 0-dB SNR. Or, if the subject cannot correctly identify the stimulus at a +2-dB SNR, the next presentation would be at a +4-dB SNR.
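The adaptive rule just described (2 dB harder after a correct response, 2 dB easier after an error) can be sketched as a simple 1-up/1-down staircase. The floor and ceiling clamps in this sketch are our own assumptions, not values from the article:

```python
# Sketch of the 1-up/1-down adaptive rule described above: lower SNR (harder)
# after a correct response, higher SNR (easier) after an error.
def next_snr(current_snr_db, correct, step_db=2, floor_db=-5, ceiling_db=8):
    # The floor and ceiling clamps are illustrative assumptions; the article
    # specifies only the 2 dB step in each direction.
    proposed = current_snr_db - step_db if correct else current_snr_db + step_db
    return max(floor_db, min(ceiling_db, proposed))
```

This reproduces the article's example: a correct response at +2 dB SNR yields a next presentation at 0 dB, while an error at +2 dB yields +4 dB. Keeping the track near the listener's threshold in this way is what holds task difficulty at a challenging but not frustrating level.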
Because computerized training can be performed off site, it can proceed at a pace based on the individual patient's progress. Moreover, this progress can be measured remotely. By carefully defining the patient's communication profile, one can design deficit-specific training modules to fit that person's needs.
We recognize that certain patients will present physical and/or cognitive limitations that will require variations in the training protocol. The amount of time spent training is also important because of the need to minimize fatigue. The 30 minutes a day, 5 days a week, 4-week schedule that we employed in the pilot study appeared reasonable to our test subjects. The training outcomes must generalize to real life and not simply to the assessment measure. It also will be vital to implement periodic surveillance and maintenance to ensure that enhancement of skills is long-lasting.
Even if all the factors discussed here are effectively addressed, challenges remain. For example, how can audiologists convince patients to accept training beyond the simple purchase of amplification? How can they persuade third-party payers to reimburse for these procedures? And how can audiologists be convinced of the importance of using such therapy? Clearly, studies demonstrating successful outcomes will be necessary to win acceptance of such training.
The pilot project described above strongly suggests that listening skills can be enhanced with practice and immediate feedback. Moreover, given the theoretical models posed in this paper, it also is likely that augmentation of listening skills with communication strategies would produce a superior outcome. LACE therapy is thus intended to incorporate “listening skills enhancement” with “communication strategies.”
Currently, we are working with software engineers to incorporate the concepts outlined above into an interactive program that patients may use whenever they are fitted with new amplification—or even in cases where they do not get hearing aids. Because many patients, especially elderly ones, lack either computer skills or access to a computer, we envision take-home therapy in the form of hand-held PDAs or “Game-Boy”-type devices. The communication profile test battery is also being established and will allow for eventual inclusion of deficit-specific training modules.
We realize that some professionals may contend that enhancement of listening and auditory communication is not a result of specific training parameters, but simply a function of practice by the patient. However, our objective is to use any means possible to achieve better auditory communication skills, with or without amplification. We may be unable initially to ascertain whether progress is a result of acclimatization, neural plasticity, training effects, or even a placebo effect. While this issue is certainly worthy of investigation, at this time, we would happily trade our lack of specific knowledge regarding why LACE therapy may work for the finding that it does work.
1. Irvine DR, Rajan R, Brown M: Injury- and use-related plasticity in adult auditory cortex. Audiol Neuro-otol 2001;6(4):192–195.
2. Hayes EA, Warrier CM, Nicol TG, et al.: Neural plasticity following auditory training in children with learning disabilities. Clin Neurophysiol 2003;114(5):912–918.
3. Tremblay K, Kraus N, McGee T, et al.: Central auditory plasticity: Changes in the N1-P2 complex after speech-sound training. Ear Hear 2001;22(2):79–90.
4. Recanzone GH, Schreiner CE, Merzenich MM: Plasticity in the frequency representation of primary auditory cortex following discrimination training in adult owl monkeys. J Neurosci 1993;13(1):87–103.
5. Pearsons KS, Bennett R, Fidell S: Speech Levels in Various Noise Environments. Washington, DC: US Environmental Protection Agency, 1977.
6. Flynn M: Sailing out of the windless sea of monosyllables. Hear Rev 2003;10(4):24–30,78.
7. Kiessling J, Pichora-Fuller MK, Gatehouse S, et al.: Candidature for and delivery of audiological services: Special needs of older people. Int J Audiol 2003;42(2):S92–S101.
8. Hearing in Noise Test (HINT) for Windows. Eden Prairie, MN: Maico Diagnostics.
9. QuickSIN Speech in Noise Test. Elk Grove Village, IL: Etymotic Research.
10. Kalikow DN, Stevens KN, Elliott LL: Development of a test of speech intelligibility in noise using sentence materials with controlled word predictability. J Acoust Soc Am 1977;61:1337–1351.
11. Erber NP: Adaptive assessment of adult sentence perception. Ear Hear 1992;13:58–60.
12. Goldman R, Fristoe M, Woodcock RW: Goldman-Fristoe-Woodcock Auditory Memory Tests. Circle Pines, MN: American Guidance Service, 1974.
13. Dichonics CAPD Screener. Valdosta, GA: Sonido, Inc.
14. Flynn MC, Dowell RC: Speech perception in a communicative context: An investigation using question/answer pairs. J Sp Lang Hear Res 1999;42:540–552.
15. Gagne JP, Tugby KG, Michaud J: Development of a speechreading test on the utilization of contextual cues (STUCC): Preliminary findings with normal hearing subject. J Acad Rehab Audiol 1991;24:157–170.
16. Dyalog Communication Analysis. West Bloomfield, MI: Parrot Software.
17. CSLU Toolkit. Beaverton, OR: Center for Spoken Language Understanding.
18. Ciuffreda KJ: The scientific basis for and efficacy of optometric vision therapy in nonstrabismic accommodative and vergence disorders. Optometry 2002;73(12):735–762.
19. Gunther VK, Schafer P, Holzer BJ, Kemmler GW: Long-term improvements in cognitive performance through computer-assisted cognitive training: A pilot study in a residential home for older people. Aging Ment Health 2003;7(3):200–206.
20. Merzenich M, Wright B, Jenkins W, et al.: Cortical plasticity underlying perceptual, motor, and cognitive skill development: Implications for neurorehabilitation. Cold Spring Harbor Symposia on Quantitative Biology 1996; 61:1–8.
© 2004 Lippincott Williams & Wilkins, Inc.