Because all speech needs to emanate from a vocal tract that is between 15 cm (child) and 19 cm (large adult) in length, it is no surprise that the long-term speech spectra are similar for a wide range of languages. All speech is generated by a soft-walled, moist set of tubes (oral and nasal cavities) and, although we have articulators (tongue, soft palate, lips) that can move, there are limitations to what we can generate.
Byrne et al. studied the long-term speech spectra of a number of languages and (expectedly) found almost identical spectra.1 The only consistent difference they found was that males have more low-frequency emphasis than females, which is directly related to the lower fundamental frequencies of the male subjects.
This consistency in the human vocal tract has allowed us to use aspects of the long-term speech spectrum in hearing aid fittings. Music, however, is quite different. Some forms have long-term spectra that are similar to the long-term speech spectrum and others bear little resemblance. Music can have significant low-frequency energy or none at all. It can have low- or high-frequency spectral emphasis. It can be very intense, and it can be very quiet. In short, the dynamic ranges and bandwidths of musical instruments can be, and typically are, quite different from, and greater than, those of speech.
MUSIC PRESENTS COMPLICATIONS
The wide variations found in music naturally pose problems in fitting hearing aids on people who wish music to be part of their lives. Several general recommendations for setting hearing aids for music have been made elsewhere.2–4 Essentially, both the gain and the output of a “music” program should be 6 dB lower than those of the “speech in quiet” program. This is because music has a crest factor (the difference between the peak and the RMS) that is 6 to 8 dB larger than that of speech. However, issues related to compression, attack and release times, and bandwidth appear to be similar for speech and music.
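To make the crest-factor point concrete, here is a small sketch of my own (an illustration, not taken from the references); the 440-Hz tone and 48-kHz sampling rate are arbitrary choices:

```python
import numpy as np

def crest_factor_db(signal):
    """Crest factor: peak level minus RMS level, in dB."""
    peak = np.max(np.abs(signal))
    rms = np.sqrt(np.mean(np.square(signal)))
    return 20.0 * np.log10(peak / rms)

# A pure sine tone has a crest factor of 20*log10(sqrt(2)), about 3 dB.
# Speech runs roughly 10-12 dB, and live music can reach 18 dB or more,
# which is why a "music" program needs the extra output headroom.
t = np.linspace(0, 1, 48000, endpoint=False)  # 1 second at 48 kHz
sine = np.sin(2 * np.pi * 440 * t)
print(round(crest_factor_db(sine), 1))  # ≈ 3.0
```

The same function applied to a recorded music excerpt, rather than a test tone, would show the larger crest factors described above.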
When it comes to fitting hearing aids for speech and music, by far the greatest difference is the intensity level of the two stimuli. Loud speech is typically on the order of 80–85 dB SPL (with peaks about 10–12 dB higher than the RMS). In contrast, medium to loud music can easily exceed 95–100 dB SPL with peaks on the order of 120 dB SPL (i.e., 18 dB or more higher than the RMS). Although modern hearing aid microphones (and even those of the late 1980s) can handle distortion-free inputs up to 115 dB SPL, modern digital hearing aids have difficulty reproducing distortion-free inputs above 96 dB SPL. There are many contributing factors, but a major one is the limitation of the 16-bit analog-to-digital (A/D) converter. Once a poorly configured A/D converter is overdriven and distortion results, no amount of software manipulation later in the processing chain can improve things.
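As a rough illustration of why overdriving the converter is unrecoverable, the following sketch (my own, with assumed numbers) models a 16-bit A/D converter as scaling, rounding, and hard clipping:

```python
import numpy as np

FULL_SCALE = 32767  # largest positive 16-bit code

def adc_16bit(signal):
    """Idealized 16-bit A/D: scale, round, and hard-clip at full scale."""
    codes = np.round(signal * FULL_SCALE)
    return np.clip(codes, -32768, 32767).astype(np.int16)

t = np.linspace(0, 0.01, 480, endpoint=False)  # 10 ms at an assumed 48 kHz
tone = np.sin(2 * np.pi * 1000 * t)

ok = adc_16bit(0.5 * tone)          # within range: waveform preserved
overdriven = adc_16bit(2.0 * tone)  # 6 dB too hot: peaks pinned at 32767

# Once the peaks are flattened, halving the level in "software" afterward
# just yields a quieter, still-flattened waveform; the information is gone.
```

The point of the sketch is the one made above: the fix has to happen in hardware, before the converter, not in software after it.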
With the advent of wide-band hearing aids in the late 1980s (and the introduction of the K-AMP in 1988), music listening and playing improved dramatically. However, the introduction of digital hearing aids and their weak point—the A/D converter—created new problems for musicians and for those who like to listen to louder inputs such as music. Short of bringing back the K-AMP (which would be an excellent solution to this problem), I would like to suggest some “low tech” innovations that may allow a hard-of-hearing person improved access to music.
In many cases, the optimal approach is simply to ask the hearing-impaired client not to wear the hearing aids when listening to or playing intense music. Those with moderate hearing loss might need only several dB of amplification to listen to loud music. A strategy that many of my clients use when listening to intense music is to bring a balloon with them to the concert. Usually, after the lights are dimmed, they inflate the balloon and hold it in their hands. Because the low-frequency sounds of the concert are near the resonances of the balloon, this gives the client an improved vibrotactile response. This strategy can significantly improve the listening experience for many hard-of-hearing people.
SIX POSSIBLE APPROACHES
This article examines six methods that are available to handle higher-level music inputs. They fit into two main categories: (1) increasing the ability of the A/D converter to handle more intense inputs, and (2) decreasing the sensitivity of the microphone.
Both approaches significantly improve the fidelity of music. Decreasing the microphone sensitivity is just a method to “delude” the A/D converter into thinking the input is less intense than it really is. Whether the input to the A/D converter is specifically reduced or whether the A/D converter is altered in some way to handle more intense inputs, the ultimate results are similar: improved musical fidelity.
Ways to increase the ability of the A/D converter
(1) The HRX feature: HRX is an acronym from ON Semiconductor (formerly Sound Design, Inc., and even more formerly, Gennum) that stands for headroom extension. HRX has been available on these circuits for years and, as the name suggests, it increases the range of inputs that the A/D converter can handle in a dynamic manner. Because of engineering and power-consumption constraints, an A/D converter can handle only a rather limited dynamic range.
Typical engineering solutions have used a gain amplifier before the A/D converter. Although that works well for signals like speech with a restricted dynamic range, it can be problematic for more intense inputs such as those in music. The HRX feature dynamically alters the gain amplifier, thereby presenting the A/D converter with a signal that is well within its operating characteristic. HRX has also been implemented on ON Semiconductor's newer Wolverine architecture. (For more information, see Ryan and Tewari.5)
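The idea can be pictured with a simplified sketch of a dynamic pre-converter gain stage; this is my own illustration of the general principle, not ON Semiconductor's actual algorithm, and all of the numbers are assumed:

```python
import numpy as np

FULL_SCALE = 1.0  # A/D full-scale input (normalized)

def adc(x):
    """Idealized A/D converter: hard-clips at full scale."""
    return np.clip(x, -FULL_SCALE, FULL_SCALE)

def headroom_extension(block, target=0.5):
    """Sketch of a dynamic pre-A/D gain stage: attenuate hot blocks
    before conversion, then restore the level digitally, so the peaks
    are never lost to clipping."""
    peak = np.max(np.abs(block))
    gain = min(1.0, target / peak) if peak > 0 else 1.0
    digitized = adc(gain * block)
    return digitized / gain  # digital make-up gain

loud = 3.0 * np.sin(np.linspace(0, 2 * np.pi, 100))  # 3x over full scale

clipped = adc(loud)                   # fixed front end: peaks lost
preserved = headroom_extension(loud)  # dynamic front end: peaks kept

print(np.max(np.abs(clipped)))    # pinned at 1.0
print(np.max(np.abs(preserved)))  # ≈ 3.0, recovered
```

A production implementation would, of course, smooth the gain changes over time to avoid audible artifacts; the sketch only shows why a moving gain stage extends the usable input range.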
(2) Live Music Plus: This is an innovation of Bernafon Hearing Instruments. Although I am an independent clinical audiologist working with musicians and Live Music Plus is proprietary to one manufacturer, I would be remiss not to mention this very useful approach to allowing people to hear music with minimal distortion. The 16-bit architecture found in most modern digital hearing aids has, at best, a 96-dB dynamic range between the quietest and the most intense signals that can be transduced; typically, for engineering reasons, it is far less than 96 dB.
However, I should point out that this 96-dB range need not run from 0 to 96 dB SPL. There is nothing to prevent one from shifting the range upward from 0–96 dB SPL to 15–111 dB SPL, which is still a 96-dB dynamic range. The circuit would be slightly noisier, but judicious use of expansion circuitry would ameliorate this issue. Recall that modern hearing aid microphones are quite capable of handling up to 115 dB SPL with minimal distortion, so this shifted dynamic range would still be within the operating characteristic of the hearing aid.
This approach has been called Live Music Plus and is available on some of Bernafon's hearing aids. For more about it, see Hockley et al. in this issue.6
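The arithmetic of the shifted window can be sketched in a few lines (illustrative numbers only; the 15-dB pad is the shift described above):

```python
# A 16-bit converter spans roughly 6.02 dB per bit no matter where
# the window sits on the SPL scale; shifting the window trades the
# quietest inputs for more headroom at the top.
BITS = 16
window_db = 6.02 * BITS           # ≈ 96 dB of dynamic range
pad_db = 15                       # fixed shift ahead of the A/D converter

floor_spl = 0 + pad_db            # quietest transducible input, dB SPL
ceiling_spl = window_db + pad_db  # most intense transducible input, dB SPL
print(f"usable input range: {floor_spl:.0f}-{ceiling_spl:.0f} dB SPL")
```

Note that the resulting ceiling still sits below the roughly 115-dB-SPL limit of modern hearing aid microphones, which is what makes the shift practical.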
(3) Post-16-bit A/D converter architecture: Currently, two IC chips are manufactured for hearing aids, both with 16-bit architecture. Moving to 20- or 24-bit architecture would allow a wider dynamic range to be obtained, and this might improve the sound quality for more intense non-speech, musical signals. The primary benefit of a post-16-bit architecture would be a reduction in quantization error and, therefore, in the noise floor. It would also have ramifications for the dynamic range, allowing the design engineer to make decisions that effectively extend the upper part of that range.
The new Wolverine 20-bit architecture from ON Semiconductor, which also uses the HRX innovation mentioned above, is being sold to various hearing aid manufacturers. This 4-bit increase raises the dynamic range to approximately 120 dB, which can handle most of the needs of music. In addition, more digital operations can be performed without increasing the level of quantization noise in the hearing aid.
ON Semiconductor also markets the Ezairo 5900 IC chip set, a 24-bit system. Both the Wolverine and the Ezairo 5900 may be showing up in hearing aids in the not-too-distant future.
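The dynamic ranges quoted above follow from the standard rule of thumb for an ideal converter of roughly 6.02 dB per bit:

```python
# Ideal-converter rule of thumb: dynamic range ≈ 6.02 dB per bit.
dynamic_range_db = {bits: 6.02 * bits for bits in (16, 20, 24)}
for bits, dr in dynamic_range_db.items():
    print(f"{bits}-bit: ~{dr:.0f} dB")  # 16 -> ~96, 20 -> ~120, 24 -> ~144
```

Real devices fall somewhat short of these ideal figures for engineering reasons, as noted earlier, but the 96-dB and 120-dB numbers in the text are exactly these 16-bit and 20-bit ideals.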
Ways to decrease the sensitivity of the microphone
(4) Damping or desensitizing microphones: If the A/D converter cannot handle overly intense inputs such as those from music, a good strategy may be to reduce the sensitivity of the hearing aid microphones. There are two common ways to apply this approach: the use of damped microphones or of microphones with reduced low-frequency sensitivity (−6 dB/octave). Both methods have been commercially available for years from microphone suppliers to the hearing aid industry. The net result is that a less intense signal reaches the A/D converter, which leads to improved fidelity of music. More on this can be found in Chasin and Schmidt.4
For those listeners (and players) of music who have good low-frequency hearing, using a hearing aid microphone that is less sensitive to lower-frequency sounds, coupled with a non-occluding fitting, would be quite useful. Unamplified low-frequency sounds would enter the unblocked ear canal directly while mid- and higher-frequency sounds would be amplified by the hearing aid. However, most manufacturers of non-occluding behind-the-ear hearing aids use a broadband microphone to minimize the hearing aid noise level. A broadband hearing aid microphone would still allow the A/D converter to be overdriven, resulting in poor fidelity for music, even though the lower-frequency components would be lost through the non-occluding coupling to the ear. Higher-frequency harmonic distortion components would still be transduced through the hearing aid system and thus be perceived by the hard-of-hearing listener.
Using a less sensitive low-cut microphone would minimize the chances that the intense low-frequency components of the music would overdrive the A/D converter. Internal noise would increase with this type of microphone, so judicious use of expansion would be necessary. Low-cut (−6 dB/octave) microphones are commercially available in almost all of the same mechanical sizes and electrical configurations as broadband microphones, so there is no inherent reason why any hearing aid manufacturer would be unable to implement this modification.
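To see what a −6 dB/octave low-cut characteristic means in practice, here is a sketch of a first-order high-pass response; the 1-kHz corner frequency is an assumed value for illustration, not a specification of any actual microphone:

```python
import math

# A -6 dB/octave low-cut microphone behaves like a first-order
# high-pass filter: each octave below the corner frequency fc loses
# roughly another 6 dB. fc = 1 kHz is an assumed, illustrative corner.
fc = 1000.0

def response_db(f):
    """Magnitude response of a first-order high-pass filter, in dB."""
    ratio = f / fc
    return 10 * math.log10(ratio**2 / (1 + ratio**2))

for f in (125, 250, 500, 1000):
    print(f"{f} Hz: {response_db(f):.1f} dB")
```

The low frequencies that carry most of a concert's energy are attenuated before they reach the A/D converter, while the mids and highs pass through essentially untouched.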
(5) Taping down the sound: Scotch tape has many uses, and one of them is for anyone who wants to enjoy listening to and/or playing music. Here is something I often recommend to my patients, and it works very well. Simply stated, placing one or two layers of Scotch tape over the hearing aid microphone(s) will reduce the microphone's sensitivity by 5–10 dB, giving the A/D converter an additional 5–10 dB of headroom. Some experimentation is required, but simply placing the tape over the microphone opening(s) before a concert, or even in a noisy movie theater, can significantly improve the listening experience. Counsel your clients to remove the tape once they leave the musical venue.
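The decibel arithmetic behind the tape trick is straightforward; this short sketch converts the 5-to-10-dB sensitivity reduction into the equivalent amplitude factor:

```python
# Each dB of lost microphone sensitivity is a dB of extra headroom
# before the A/D converter clips.
attenuation_factor = {db: 10 ** (-db / 20) for db in (5, 10)}
for db, factor in attenuation_factor.items():
    print(f"-{db} dB of tape: x{factor:.2f} amplitude, "
          f"clipping point rises by {db} dB")
```

In other words, two layers of tape that cut the signal amplitude to roughly a third move the effective clipping point of the converter up by about 10 dB.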
(6) Using an electrical network: It is not difficult to have a hearing aid manufacturer reduce the sensitivity of a microphone by 10–12 dB using an electrical network. This has the same benefits as the Scotch tape method, and is typically programmed to be “on” (a 10-dB reduction in sensitivity) or “off” (normal function) with either a pushbutton or a remote control. This may be implemented differently by different manufacturers, but a goal of a uniform 10-to-12-dB reduction is reasonable. Depending on the instrument, the required part of the circuitry may not be accessible, so this approach may not be an option with every manufacturer. There are several ways it can be implemented, including reducing the electrical charge on the backplate of the microphone's capacitive sensor.
These six clinical and manufacturer-based modifications work well for listening to more intense music, and I routinely recommend some of them to my clients. I should add that at no time have I referred anyone for routine “software” adjustments. Listening to music is not a software issue; it is a front-end hardware issue. “Programs” that successfully alter the dynamic characteristics of the input, such as Live Music Plus, are not simple software programs; they are hardware changes that happen to be invoked through software.
Simply altering the frequency response, compression, gain, and output characteristics will be of very little benefit in listening to music unless the analog-to-digital converter is presented with an input within its operating characteristic. Not all manufacturer-based modifications will be simple. That will depend on many factors, including whether or not a particular portion of a circuit is accessible in any given hearing aid.
I would like to acknowledge the valuable input and discussions of Steve Armstrong and Jim Ryan on earlier drafts of this paper. I would also like to point out that any errors in this paper are mine alone.
1. Byrne D, Dillon H, Tran K: An international comparison of long-term average speech spectra. J Acoust Soc Am
2. Chasin M, Russo F: Hearing aids and music. Trends Amplif
3. Chasin M: Hearing aids and music. Hear Rev
4. Chasin M, Schmidt M: The use of a high frequency emphasis microphone for musicians. Hear Rev
5. Ryan J, Tewari S: A digital signal processor for musicians and audiophiles. Hear Rev
6. Hockley NS, Bahlmann F, Chasin M: Programming hearing instruments to make live music more enjoyable. Hear J