Conventional wisdom on any topic can be a help or hindrance. On the one hand, it can facilitate decision making, removing complexities that may sometimes be paralyzing. On the other hand, blind adherence to conventional wisdom may oversimplify one's understanding, leading to decisions based on subjective, sometimes misdirected, impressions, rather than objective data. And, over time, especially in a rapidly changing field like ours, conventional wisdom runs the risk of becoming obsolete.
As hearing professionals, we adhere to certain conventional wisdoms, not always because they are wise—they may or may not be—but because they help us make sense of complicated information and make decisions based on that information.
In this time of rapidly developing technology, the hearing aid industry must commit to presenting a clear picture of our products' benefits and their limitations. This clarity should ensure win-win-win outcomes. Your customers achieve the best solution to their hearing concerns. Your practice thrives. And manufacturers see results that allow them to continue investing in new developments.
We are seeing a favorable trend. Data-based decisions are supplanting supposition about the potential of some new technologies. However, advancing technology itself can hatch new misconceptions, sometimes the result of imperfect teaching or learning of new concepts. This article attempts to straighten a few things out, to address a few conventional wisdoms that are either ill conceived or obsolete.
Well-known members from the scientific teams of five major manufacturers agreed to share their knowledge and unconventional wisdom in this article: Laurel Christensen and Andrew Dittberner, both of GN ReSound; Dave Fabry, of Phonak; Tom Powers, of Siemens; Don Schum, of Oticon; and Tim Trine, of Starkey Labs. Each selected a topic to address briefly, with the goal of dispelling what they see as misconceptions. Each author was given an opportunity to comment briefly on one other offering.
Ruth Bentler, professor of audiology at the University of Iowa, agreed to serve as editor and commentator. Ruth's role in critiquing and editing this article mirrors an important role she has established as an objective evaluator of hearing aid research and design. I am grateful for her involvement here and for the contributions she has made to developments in this field.
In proposing this article, my hope was that we would all gain valuable insights from the ensuing exchange. After reading the offerings from these experts, I am confident that goal will be achieved.
Those of you attending Audiology NOW! 2006 in Minneapolis can hear these same experts, plus Francis Kuk, PhD, of Widex, expand on their comments on these pages when they take part in an interactive panel on Evidence-Based Hearing Instrument Design. It will be held Friday, April 7 from 8 to 9:30 am, and once again, Dr. Bentler will be at the helm as moderator.
Conventional Wisdom Challenge #1
Do adaptive polar patterns improve speech intelligibility in noise over fixed directional systems?
TRINE: Directional processing has proven its unmatched ability to improve speech recognition in noise (once audibility has been maximized). However, the performance of directional microphones in everyday life1,2 has not met the high expectations created by their demonstrated laboratory performance.3,4
This discrepancy can be explained in part by considering the acoustics of everyday life in conjunction with the speech-to-noise ratio (SNR) loss of individual patients5,6 and should serve as a reminder that appropriate counseling regarding expectations for directional performance is critical to the fitting process. It might also suggest, however, that the typical laboratory evaluation of a directional system is not representative of real-world performance because it does not accurately simulate the acoustics of everyday life.
Nowhere is this last hypothesis truer than in the characterization of currently available adaptive directional hearing aids. In fact, every publication to date that has shown performance of an adaptive directional system to be better than a fixed directional system has placed noise sources within the critical distance of the space, typically in an anechoic or sound-treated room.3,4,7,8
The importance of this experimental detail cannot be overstated. By placing the noise source within the critical distance of the test space, these studies have virtually eliminated the possibility of generalizing the experimental conclusions to the real world because in the real world, noise sources are rarely within the critical distance of the spaces in which we live.* The few studies that have taken real-world room acoustics into account have shown no improvement from an adaptive directional system over a well-designed fixed directional system.9,10
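To make the critical-distance argument concrete, the standard room-acoustics approximation r_c ≈ 0.057·√(Q·V/RT60) (in meters, with source directivity factor Q) can be evaluated for a few room types. This is a minimal sketch; the room volumes and reverberation times below are illustrative assumptions, not measurements from the article:

```python
import math

def critical_distance(volume_m3, rt60_s, q=1.0):
    """Critical distance (m): the range at which direct and reverberant
    energy are equal, via the Sabine-based approximation
    r_c ~= 0.057 * sqrt(Q * V / RT60)."""
    return 0.057 * math.sqrt(q * volume_m3 / rt60_s)

# Assumed, illustrative room parameters (volume m^3, RT60 s):
rooms = {
    "living room": (60, 0.5),
    "classroom": (200, 0.7),
    "restaurant": (500, 0.8),
}
for name, (v, rt) in rooms.items():
    print(f"{name}: r_c ~ {critical_distance(v, rt):.2f} m")
```

Under these assumed values the critical distance is roughly 0.6–1.4 m, which is consistent with the claim that everyday noise sources usually lie outside it.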
Although these data might challenge conventional wisdom, they are well supported by the acoustic realities of everyday life.11 The bottom line is that we live in a reverberant world, so most noise is relatively diffuse. Consequently, the magnitude of the real-world directional benefit is not related to the speed, frequency resolution, or accuracy of moving nulls in the polar pattern. Rather, it depends simply upon the absolute directional performance of the system, best characterized by the in situ directivity (quantified by the directivity index measured on KEMAR), noise performance, and stability over time.
Killion recently showed that these performance metrics can differ dramatically across products and likely change over time for dual omnidirectional-microphone systems.12 This is particularly true for ITE and ITC applications where the small port separations require exquisite microphone matching and a mismatch between microphones of 0.15 dB can significantly degrade directivity.6
One way to summarize this challenge to conventional wisdom is that in the real world, a well-designed adaptive directional system will spend the vast majority of its time in a relatively fixed directional pattern. Thus an adaptive null becomes extraneous. Two valid perspectives then might be: “Why not offer adaptive directional systems?” or “Why bother?” In the interest of evidence-based product development and stable, high-performance hearing aid design, I choose the latter.
BENTLER: Here's a twist, me challenging industry's cautiousness! Few researchers will argue your point about critical distance effects. Few would also argue your comment about many environments being diffuse-like, at least indoors. But what about the proportion of time hearing aid wearers spend outside office rooms, classrooms, living rooms, etc.? Are there any data to suggest adaptive mics might provide benefit there (i.e., when the noise is outside some critical distance)?
TRINE: The short answer to this question is no, I don't know of any data to suggest benefit in realistic outdoor environments. But, for an adaptive polar pattern system to have an advantage over a non-adaptive system, all of the following conditions must be true:
1. The listener is outside…
2. … attempting to hear a signal (e.g., carry on a conversation) in front of him or her.
3. The signal is relatively close to the listener.
4. There is a single dominant noise source moving behind or to the sides of the listener.
5. The microphone pair in the hearing aids maintains exquisite sensitivity and phase matching.
Using conventional wisdom, how often are all of these conditions met?
FABRY: I agree that the benefits of directional microphones diminish with increased talker-listener distance and reverberation. However, a few studies have shown directional-microphone benefits for test conditions with increased talker-listener distance and reverberation times that reflect many real-world situations.13 It is true that additional research with adaptive directional-microphone systems is needed to support laboratory research dating back to the previous millennium that found that:
* Adaptive directional-microphone systems provide improved benefits when one or two noise sources exist in a listening environment and are outside of the polar pattern “null” for the fixed or automatic system.14
* Adaptive directional-microphone systems provide improved SNR benefits for a single noise source that is moving.15
Conventional Wisdom Challenge #2
Is directional benefit lost in open-ear fittings?
(Note: There are two commentaries on this topic.)
FABRY: Patients with precipitous high-frequency sensorineural hearing loss provide significant challenges for hearing healthcare practitioners. These patients do not usually meet candidacy requirements for cochlear implants, and yet they are often dissatisfied with hearing aid performance.
The incidence of this hearing loss configuration (normal low-frequency hearing thresholds combined with >30-dB/octave slope in high frequencies), which may arise from noise exposure, presbycusis, and ototoxicity, or some combination, is expected to increase as the “baby boom” generation ages. In recent years, new hearing aid technology has been developed to focus on the needs of patients with such hearing losses.
Although not a new concept, “open” fitting devices have recently enjoyed a renaissance in popularity. Most major hearing aid manufacturers now offer several devices that either are specifically developed for open fitting or may be modified from existing product families. Effective open-fitting strategies comprise:
1. Minimal occlusion via a narrow (0.8 mm) tube fitting or large vented earmold (greater than 3.0-mm diameter)
2. Feedback phase cancellation system
3. Minimal signal processing group delays (less than 15 ms)
4. Precise frequency compensation with steep filter slopes
5. Directional microphones to improve speech recognition in noise.
Directional microphones have been implicated as the single factor most related to patient satisfaction and benefit with other styles of hearing aids.16 However, conventional wisdom suggests that open fitting and directional microphones are mutually exclusive. This stems primarily from the perception that the acoustic properties of venting act to attenuate low-frequency gain (below 1000–1500 Hz), where directional microphones are most effective. This is illustrated in Figure 1, based on work conducted by Lybarger.17 While this perception is historically accurate, modern directional-microphone systems have been developed that extend and preserve directional benefits for frequencies in excess of 6000 Hz by placing the microphones closer together than in the past.
The previous industry standard separation used in hearing aids was 12 mm, which limited directional benefits to below approximately 4000 Hz. However, some new systems use a separation as small as 5 mm. Although this results in higher internal noise, digital technology may be able to minimize its impact by using “acoustic scene analysis” to monitor the listening environment and automatically activate either omnidirectional or directional microphones when appropriate. As a result, considerable directional benefits can still be achieved for frequencies between 1500 and 6000 Hz when coupled to an “open fit” hearing system.
This translates into measured speech-recognition improvements in noise of 20%–30% (i.e., 2–3 dB SNR benefit) over comparable open-fit hearing aids that use omnidirectional microphones. Furthermore, the use of automatic directionality provides improved sound quality in low ambient noise levels over manually activated or full-time directional-microphone systems. Of course, feedback phase inversion is essential to allow these high-frequency components of speech to be amplified without producing feedback oscillation.
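The port-spacing trade-off described above can be sketched with an idealized free-field model of a delay-and-subtract cardioid pair (internal delay equal to the port transit time d/c). This sketch ignores head, pinna, and microphone-matching effects, so the real in-situ frequency limits are lower than the free-field numbers it produces; the 12-mm and 5-mm spacings are the values cited in the text:

```python
import math

C = 343.0  # speed of sound (m/s)

def di_delay_subtract(f_hz, d_m):
    """Free-field directivity index (dB) of an idealized delay-and-subtract
    cardioid pair with port spacing d_m and internal delay d/C.
    |H(theta)|^2 = 4*sin(a*(1+cos(theta))/2)^2 with a = 2*pi*f*d/C;
    its spherical mean has the closed form 2 - sin(2a)/a."""
    a = 2 * math.pi * f_hz * d_m / C
    on_axis_sq = 4 * math.sin(a) ** 2
    mean_sq = 2 - math.sin(2 * a) / a      # spherical mean of |H|^2
    return 10 * math.log10(on_axis_sq / mean_sq)

# At low frequencies both spacings approach the ideal cardioid DI (~4.8 dB);
# the wider spacing degrades sooner as frequency rises.
for d_mm in (12, 5):
    row = ", ".join(f"{f} Hz: {di_delay_subtract(f, d_mm / 1000.0):.1f} dB"
                    for f in (1000, 4000, 8000))
    print(f"d = {d_mm:2d} mm -> {row}")
```

The model reproduces the qualitative point: the smaller spacing holds its directivity to higher frequencies, at the cost of reduced sensitivity (and hence higher relative internal noise) in the difference signal.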
TRINE: In general, I agree with Dr. Fabry's message here, and it is an important contribution. One minor correction, however, should be noted regarding the assertion that the low-frequency “internal” noise associated with directional processing remains a problem for state-of-the-art directional BTEs. Taking an unconventional approach, Starkey's DaVinci PsP has “internal” noise performance in directional mode that is comparable to the excellent performance in omnidirectional mode.
CHRISTENSEN & DITTBERNER: Whether or not hearing aids equipped with directional microphones provide a directional advantage when used with an open fitting is a topic of debate. Two main issues have been identified related to the effectiveness of directivity in an open fitting. First, does an open fit reduce or eliminate any directional advantage? Second, does environmental noise that would otherwise be rejected by the directional microphone “leak” through the vent?
Killion and Christensen measured directivity on KEMAR using an earmold with a blocked vent and a 3-mm vent. Approximately 1.8 dB of directionality was lost between the blocked vent and the 3-mm vent conditions.18 In addition, Ricketts reported that the average directivity index (DI) is reduced with increasing vent size.19 A decrease of approximately 1.5 dB DI was reported between a 1-mm vent and an open fit. It is well known that an acoustic effect of any vent is seen in the low frequencies with the frequency roll-off dependent on vent size (as shown in Figure 1).
A consequence of this low-frequency reduction is a decrease in the measured DI as reported by Ricketts.19 However, is this attenuation best described as a decrease in directivity or a decrease in audibility? Killion and Christensen went on to measure DI when the vent effect was compensated for by equalizing the frequency responses of the omnidirectional and directional microphones. DIs were essentially the same in these two conditions. In conclusion, it is the attenuation of the lows (audibility) that decreases the DI.
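The equalization argument follows directly from the definition of the DI: a vent loss that depends on frequency but not on angle multiplies the on-axis response and the spherical mean by the same factor, so it cancels out of their ratio. A minimal numeric sketch, assuming an ideal cardioid pattern and an arbitrary 12-dB vent attenuation (both illustrative values, not data from the studies cited):

```python
import math

def di_db(h_sq):
    """Directivity index (dB) from squared pattern samples taken uniformly
    in theta over [0, pi] (axisymmetric pattern), trapezoidal integration."""
    n = len(h_sq) - 1
    dth = math.pi / n
    mean = 0.0
    for i in range(n):
        f0 = h_sq[i] * math.sin(i * dth)
        f1 = h_sq[i + 1] * math.sin((i + 1) * dth)
        mean += 0.5 * (f0 + f1) * dth
    mean *= 0.5                        # spherical mean = (1/2) * integral
    return 10 * math.log10(h_sq[0] / mean)

thetas = [math.pi * i / 1000 for i in range(1001)]
cardioid_sq = [((1 + math.cos(t)) / 2) ** 2 for t in thetas]

vent_gain_sq = 10 ** (-12 / 10)        # assumed 12-dB low-frequency vent loss
vented_sq = [vent_gain_sq * p for p in cardioid_sq]

print(f"ideal cardioid DI:        {di_db(cardioid_sq):.2f} dB")   # ~4.77 dB
print(f"DI after 12-dB vent loss: {di_db(vented_sq):.2f} dB")     # identical
```

The attenuated pattern yields exactly the same DI; what the vent removes is audibility, not directivity.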
To find out if environmental noise coming in through the vent has an additional impact on directionality, a series of measures were made on KEMAR. All measures were completed in an eight-speaker cube constructed in a semi-anechoic space to simulate a diffuse sound field.20 Figure 2 shows the attenuation of two types of noise (pink noise and speech babble) at the eardrum after the noise passed through an earmold with a 3-mm vent. Only an earmold was present on the ear. Low-frequency sound is not attenuated when traveling through the vent. Thus, without a hearing aid, low-frequency noise does enter the ear canal through the vent.
To determine if the low-frequency noise leaks through the vent simultaneously with a low-frequency signal leaking out, the DIs for two hearing aid couplings (fully blocked vent and a 3-mm vent) were evaluated. Equalization of the frequency responses was implemented so that any decrease in directivity measured would be due to the environmental noise entering the ear and not an effect of venting. Figure 3 shows that for both pink noise and speech babble the differences in directivity between the closed vent and the 3-mm vent are approximately 0.2 dB. Thus, the noise entering via the vent has no negative influence on directivity.
One explanation may be that a vent not only lets noise in, but also lets it out. This outward release of noise tends to be at a higher pressure level than the noise that leaks in. This pressure difference on either side of the vent favors the higher pressure located in the ear canal (as produced by the hearing instrument), thus impeding the leakage of noise in.
BENTLER: What I hear you all indicating (albeit from different approaches) is that the directivity of the open-fit system may actually be better than previous designs due to our focus on higher frequency microphone response (spacing), and that any loss of the directivity (or maybe the word should be directionality) of the open-fit system is tied to loss of gain caused by the venting. The directivity of the microphone is not altered; the directionality of the system may be. Without gain, there can be no gain reduction.
Another point of clarification relates to the use of the term “open fitting,” which Dave calls anything greater than 3 mm and Laurel and Andrew call 3 mm. Most products of this category use some cone-shaped canal piece. While the result is not a truly open canal, the percept is a non-occluded canal.
Conventional Wisdom Challenge #3
Do feedback-control algorithms sacrifice the frequency response of a hearing aid?
POWERS: The introduction of digital hearing instruments provided a platform for new types of advanced signal processing. One algorithm introduced in these first-generation instruments was feedback reduction/cancellation. Many of these early feedback-reduction systems used filtering algorithms. In some cases, these algorithms used multi-channel gain reduction to accomplish the desired effect.
If the hearing instrument had only a few channels, it sometimes was necessary to alter gain in a relatively broad frequency range. It was perhaps during this time of our digital hearing aid history that one of the industry's “conventional wisdoms” developed: Feedback algorithms have a negative effect on audibility and it is impossible to achieve adequate gain with the feedback algorithm engaged.
Today's adaptive feedback-cancellation systems, now available in fifth-generation digital products, are very different from those of the 1990s. These systems employ extremely fast adaptation to changes in the signal path. A detector specifically designed to analyze feedback (internal and external) introduces a counter-phase signal to reduce the feedback signal whenever feedback is detected. The algorithm avoids inappropriate cancellation of important environmental signals.
Since the cancellation is adaptive and does not affect the gain or frequency response of the hearing instrument, it maintains audibility. Hence, the notion that audibility cannot be obtained when the feedback control is activated is now simply untrue. If we can reduce the occurrence of feedback while maintaining gain and audibility, several patient benefits are possible.
The most challenging fitting environment for these adaptive feedback-cancellation systems is the “open-fitting” instrument, i.e., a fitting where significant high-frequency gain is required but the ear tip used is totally (or almost totally) non-occluding. This directly addresses the conventional wisdom previously mentioned: Is it possible to provide appropriate gain with this type of fitting without feedback?
To investigate the relationships among modern feedback technology, audibility, and feedback control we measured the real-ear maximum gain with the adaptive feedback system on and off. Fourteen subjects were fitted monaurally with an open-fit BTE (Acuris Life) programmed to NAL-NL1 targets for a mild-to-moderate high-frequency hearing loss. With the adaptive feedback cancellation turned off, the gain of the instrument was increased until feedback became audible (to the examiner or the patient) or was observed on the display of the probe-microphone system. The real-ear gain was then recorded for key test frequencies.
The adaptive feedback system was turned on and the measurements were repeated (see Figure 4).
Observe that, on average, it is possible to achieve 8–12 dB of additional gain in the critical speech frequencies when the adaptive feedback algorithm is implemented. Note also that mean feedback-free gain in the range of 25–30 dB was available through 6000 Hz.
In summary, considerable improvements have been made in adaptive feedback reduction in recent years. The “conventional wisdom” held by some, that these systems can reduce audibility, is clearly no longer wise. In fact, as shown by our data, audibility for frequencies critical for speech understanding can be increased when these systems are employed.
BENTLER: Do all manufacturers agree that the current feedback managers use a fast-acting algorithm that “introduces a counter-phase signal to reduce the feedback signal”? I suspect some of them might take issue with the term “feedback manager.”
SCHUM: I prefer to see a differentiation between the terms “feedback cancellation” and “feedback management.” Cancellation refers to phase cancellation without any loss of audibility, as Tom described. Feedback management, in my opinion, refers to systems that do limit gain as a way to minimize feedback. Although typical of systems that were in place before cancellation algorithms became widespread, they still have a place in some circumstances. As Tom points out, cancellation algorithms can increase usable gain in a device, but there will still be a point where feedback will occur.
In cases where a patient with severe or profound loss has high gain requirements, feedback management that would impose channel-specific gain limits may still be needed in addition to the effect of the cancellation system. Of course, the first step is to determine if the cancellation system itself will provide enough usable gain without feedback. However, in some cases, limits may need to be set.
CHRISTENSEN: I agree with Tom that most high-end devices today use some method of phase cancellation to control feedback. What I don't completely agree with him about is that audibility doesn't ultimately get sacrificed due to the cancellation system. The sound quality of these systems varies greatly and some introduce so much distortion and artifact when activated that listeners will ultimately turn down their hearing aids to improve the quality of sound.
Conventional Wisdom Challenge #4
Does noise-reduction circuitry improve speech understanding in noise?
SCHUM: Poor performance in noise is the most important unresolved problem for hearing aid users. The layperson, therefore, is primed to interpret what we call “noise reduction” as a system that can amplify only speech and eliminate or reduce noise. These systems simply do not perform that way.
Do these signal processing approaches help out in noisy situations in a manner that is apparent to the user? Yes, in many cases. However, the nature of this benefit is not a direct improvement in the signal-to-noise ratio. Without a direct improvement in SNR, it is highly unusual for there to be a measurable improvement in speech understanding in noise. It is a matter of professional responsibility for the audiologist to make sure the patient understands what noise reduction can and cannot do.
The origin of the misconceptions about noise-reduction systems relates to the difference between Analysis and Response to the environment. Noise-reduction systems are excellent at analyzing the nature of the sounds coming into the hearing aid. Whether the system uses modulation analysis, synchrony analysis, or a combination of the two, modern systems can effectively determine if it is speech, noise, or both that is entering the hearing aid.
However, just because a system can classify the components of a mixed-speech-plus-noise signal doesn't mean the system can disentangle the two types of signals. The only thing that noise-reduction systems in hearing aids can do is manipulate the gain in specific frequency regions, usually corresponding to the channel structure of the hearing aid. Since speech is broadband, any reduction in a specific channel in response to a classification of a high level of noise will inevitably also reduce the level of whatever speech is in that channel. At no point in time is the SNR in that channel altered. Thus, reducing the noise in that channel in no way improves the effective audibility of the speech in the same channel. When speech understanding in noise has been measured with such systems, no change in performance has been reported.21
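Schum's point is pure arithmetic: a channel gain applies the same multiplier to the speech and noise powers in that channel, so their ratio is untouched. A tiny illustration with assumed, arbitrary per-channel powers:

```python
import math

def snr_db(speech_power, noise_power):
    """Signal-to-noise ratio in dB from within-channel powers."""
    return 10 * math.log10(speech_power / noise_power)

# Assumed illustrative powers for a noise-dominated channel
speech_p, noise_p = 4.0, 8.0
before = snr_db(speech_p, noise_p)

g = 10 ** (-10 / 20)                   # a 10-dB gain reduction in that channel
# The gain multiplies both powers by g^2, so the ratio is unchanged:
after = snr_db(g**2 * speech_p, g**2 * noise_p)

print(f"channel SNR before: {before:.2f} dB")   # -3.01 dB
print(f"channel SNR after:  {after:.2f} dB")    # still -3.01 dB
```

The output level drops by 10 dB, which the wearer may welcome as reduced loudness, but the within-channel SNR, and hence the audibility of the speech relative to the noise, is exactly what it was before.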
The only way a gain reduction in a given channel can improve speech perception is if it somehow improves the effective audibility of speech in an adjacent frequency region. This can occur only if the gain reduction in a specific channel reduces a spread of masking or some other cross-frequency distortional effect. These effects are rare.
Noise-reduction systems are effective at reducing the overall loudness of the amplified signal in noise or speech-plus-noise environments. Again, the SNR is not improved, but the patient perceives that the level of the noise has been reduced. The audibility levels provided by multi-channel, non-linear systems are typically optimized for speech understanding. In cases where there is a significant non-speech component to the aided signal, patients typically appreciate further gain reductions via a noise-reduction system.22
So, does noise-reduction circuitry improve speech understanding in noise? Most likely not. Is there a value? Of course, but it may be of a different nature from what the patient expects.
BENTLER: Your suggestion to improve effective audibility seems logical. Since we now have access to many channels in which to reduce gain where speech is not present, the perception of the hearing aid user may be that listening got both easier and better. These and many new technologic developments suggest exciting possibilities in future generations of hearing aids.