Imagine two scenarios. In the first, you're running a little late driving in an unfamiliar city (without satellite navigation), and you're on your way to an important meeting. In addition to looking for street signs, you are struggling to read a map to help you find your way. The heavy traffic is distracting you. You accidentally miss your exit and must determine a new route to your destination. You are frustrated, and it takes a lot of mental effort to complete the task. By the time you arrive, you're exhausted.
Now, imagine a second scenario. You're driving to work along the same familiar route you take daily. Traffic is flowing smoothly, and the trip is routine. While driving, you think about your weekend plans. Suddenly, you realize you've arrived at work. You've driven through the whole town without actually noticing how you were driving, and you arrive precisely on time while expending little mental effort.
Obviously, a drive through a city can vary significantly in the amount of problem solving, precision, focus, conscious processing of new information, and memorization it requires, the amount of mental effort expended, and the amount of stress experienced. The first scenario represents a process that involves significant effort, problem solving, and mental resources. The second scenario involves over-learned driving patterns that made the drive automatic and effortless and required few mental resources.
The above examples are analogous to different listening situations. Some listening situations appear effortless, while others demand much greater effort to understand what is being said. We know hearing-impaired people expend more listening effort in demanding listening situations.
Therefore, the question is: when is listening automatic and effortless, and when does it require significant cognitive resources? Two important concepts help us find the answer: working memory and the Ease of Language Understanding model.
“Seven, plus or minus two” was the so-called “magical number” introduced by the Princeton University cognitive psychologist George A. Miller in 1956.1 After extensive research, Miller proposed that average people could remember seven things at once. Of course, some people are better at remembering and some people are worse, so he added “plus or minus two.” However, what Miller and many other researchers were really investigating was a much larger question. Specifically, why are some people's brains better suited for success? Their findings kept pointing to a new cognitive concept called “working memory” as an important part of the answer.
Working memory represents the brain's ability to hold and process separate items of information about what you are doing at the present moment (see Figure 1 for a sketched example).
For example, imagine you are watching television and the program stops for a commercial break. You quickly compose a mental list of tasks and set off to get everything done before the program resumes. Your thought process is as follows: “Go to the refrigerator, get a drink, get a snack, check e-mail, and hurry back.” However, as the word “e-mail” pops into your head, you suddenly remember that you need to notify your colleague at work; the meeting scheduled for the next morning has been postponed. The next moment you find yourself staring into the refrigerator thinking, “What am I looking for?” (example from Westerberg2).
As this scenario clearly shows, working memory has limits. If we try to focus on too many things at once or if we become distracted, information that we are trying to retain can fall off our mental radar. This scenario is precisely what Miller and colleagues were trying to understand and describe—and with good reason. It is one thing to forget why you went to the refrigerator, but the same cognitive processes and abilities at work in this example impact thousands of daily activities of much greater importance.2
Working memory is not just a constraint for those whose mental capacity falls in the lower percentiles of the population. Many people—including very intelligent people—experience strains on their working memory because of external causes, such as the hectic environment in which they must work or perform. One experience that challenges everyone's working memory is stress. Increasingly, research studies are investigating and illuminating why normally intelligent people fail under pressure, and the answer appears to be centered on working memory.
Working memory is highly involved in communication in complex and challenging acoustic conditions, especially for hearing-impaired persons when the auditory input is degraded. For example, while conversing in a noisy background, people need to store information in their working memory to make sense of subsequent information (see Figure 2). Simultaneously, those with hearing impairment probably miss some words or word fragments as a consequence of their hearing loss and of masking from the interfering noise. As a result, they need to allocate some of their (limited) cognitive processing resources to figuring out what is being said—and thus effortful problem-solving mechanisms come into play, as demonstrated in the scenario of unfamiliar driving described above. Because people's working memory capacity varies, individual performance and success while handling complex listening situations also vary. Indeed, Lunner showed that working memory performance correlates to speech-recognition performance in noisy environments.3
THE EASE OF LANGUAGE UNDERSTANDING MODEL
The Ease of Language Understanding model (ELU) was developed by Rönnberg et al.4 Their model (see Figure 3) aims to explain why in some listening situations, for some people, speech is easily and effortlessly understood, while in other situations, understanding speech is effortful and consumes significant working memory resources.
Under optimal conditions (such as one person speaking clearly at a comfortable loudness in a quiet and comfortable listening environment and a listener with normal hearing), the ELU model assumes that the internal (neural) representation of the speech input can be rapidly and automatically bound together at the cognitive level to form a stream of phonological information (words and syllables). This stream of phonological information unlocks long-term memory, where the incoming input is compared with stored representations to understand the message. This function, called “implicit processing,” is assumed to be effortless, automatic, fast, and precise.
However, when conditions are sub-optimal, an imperfectly represented stream of phonological information may fail to unlock long-term memory representations. In this situation, the central nervous system “asks” for the additional working memory resources that it needs to infer the meaning of the incomplete stream of information. Additional resources are then allocated to remembering what has been said and to guessing at what has been missed. Thus, the working memory system may actively and consciously compensate for incompleteness in the stream of incoming information. This function, called “explicit processing,” endeavors to decode the incoming message at a high performance level—but at the cost of extra mental effort.
Sub-optimal conditions may include environmental acoustic issues such as interfering speech and other sounds competing with the target speech signal, sounds that are too quiet or too loud, reverberation, and so on. In these cases, the neural representation of the target signal may be less salient. However, sub-optimal conditions can also include hearing impairment. In particular, cochlear hearing loss generally induces disruptions and distortions of sounds transduced from mechanical sound waves in the natural, acoustic environment into bioelectric neural representations within the brain. Thus, important neural representations of speech cues are more vulnerable in hearing-impaired persons and may require additional working memory resources at the cognitive level.
Therefore, hearing-impaired persons must devote working memory resources to understanding speech more often than normal-hearing persons do.
Hearing-impaired people use more working memory resources to compensate in situations where normal-hearing persons smoothly and automatically “unlock” the meaning of speech. Thus, people with high working memory capacity may be better able to compensate in difficult listening situations than people with low working memory capacity.
Lunner has shown this phenomenon empirically:3 People with high working memory capacity understood speech better under more difficult listening situations (up to 10 dB louder background noise) than people with low working memory capacity, despite comparable hearing thresholds.
HELP FOR THE COGNITIVE SYSTEM
The first thing to do to help hearing-impaired people is to make target speech cues audible without adding distortion. If hearing aids amplify inaudible speech sounds, the internal (neural) representation of the speech signal may be made more salient, especially under conditions where disturbing background sounds are minimal and the words are essentially correctly understood. Without amplification, inaudible speech would have required extra resources to make sense of the sound (i.e., explicit processing). But when that same speech signal is made audible via amplification, effortless, automatic, fast, and precise understanding can occur (i.e., implicit processing).
It is no easy task, however, to make sounds audible without adding distortions. Modern hearing aids use wide dynamic range compression (WDRC) amplification, which amplifies weak input sounds more than loud input sounds. However, gain-regulation systems may distort the original speech cues in various ways and may themselves generate a need for extra use of working memory resources by the listener. If the regulation of gain during rapid drops in the input level is too slow (long release time in the compressor), weak speech cues may be under-amplified and thus less salient.
Likewise, if the regulation of gain during fast increases in the input level is too slow (long attack time), sounds may be too loud, drawing unnecessary attention, which also calls for working memory resources. However, if the regulation system is fast, secondary speech sources may introduce artifacts in the regulation.
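The trade-offs above can be sketched in code. The following is a minimal, illustrative Python model of a single-band WDRC stage, not any particular manufacturer's algorithm: a static gain rule (more gain for weak inputs) plus a one-pole level detector whose attack and release time constants govern how quickly gain reacts to rising and falling input levels. All numeric values (threshold, ratio, time constants) are assumed for illustration; real fitting rationales prescribe gain per frequency band and per hearing loss.

```python
import math

def wdrc_gain_db(level_db, threshold_db=50.0, ratio=3.0, max_gain_db=30.0):
    """Static WDRC rule: full gain for weak inputs, reduced gain above
    the compression threshold. Illustrative parameter values only."""
    if level_db <= threshold_db:
        return max_gain_db
    # Above threshold, output rises only 1/ratio dB per input dB,
    # so the applied gain shrinks as the input grows louder.
    return max(0.0, max_gain_db - (level_db - threshold_db) * (1.0 - 1.0 / ratio))

def smooth_levels(input_levels_db, attack_ms=5.0, release_ms=100.0, frame_ms=1.0):
    """One-pole level detector. A short attack time lets the estimate jump
    up quickly when the input suddenly gets loud; a longer release time
    makes it fall slowly after the input drops -- the speed trade-off the
    text describes."""
    coeff_attack = math.exp(-frame_ms / attack_ms)
    coeff_release = math.exp(-frame_ms / release_ms)
    estimate = input_levels_db[0]
    smoothed = []
    for level in input_levels_db:
        # Rising input -> fast (attack) smoothing; falling -> slow (release).
        coeff = coeff_attack if level > estimate else coeff_release
        estimate = coeff * estimate + (1.0 - coeff) * level
        smoothed.append(estimate)
    return smoothed
```

With these toy settings, a 40-dB input receives the full 30 dB of gain, while an 80-dB input receives only 10 dB. Making the release time very long would leave weak speech after a loud sound under-amplified; making both time constants very short would let a competing talker pump the gain up and down, which is the artifact risk mentioned above.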
Therefore, an obvious design goal for a WDRC hearing aid would be to relieve the listener's working memory resources by preserving speech cues as much as possible, so that effortful working memory processing is transferred to implicit automatic unlocking of understanding, and by minimizing artifacts that stimulate the use of working memory resources.
A superior WDRC system would allow good internal representation of speech cues. If the system can be shown to require less listening effort in a given situation, then it can be assumed that the internal (neural) representation is better and that more working memory has been freed for other tasks.
© 2010 Lippincott Williams & Wilkins, Inc.