The Hearing Journal, May 2007, Volume 60, Issue 5
doi: 10.1097/01.HJ.0000285593.09582.bd

Flexible DSP circuits have put advances in hearing aid technology on a fast track

Niklaus, Marc


Author Information

Marc Niklaus, an electrical engineering graduate of the Engineering College of Geneva, Switzerland, is the Product Line Manager for Audiology ASSPs at AMI Semiconductor's Medical Group. Readers may contact him at marc_niklaus@amis.com.

From analog to digital signal processing, from large, bulky devices to small ones that are barely noticeable, and from a battery a day to one a week, hearing aids have come a long way in the past century. However, while hearing aids evolved slowly through most of their history, recent advances have come at warp speed. And all indications are that 21st-century technologies will deliver innovations faster still.

This article will discuss how advances in semiconductor technology have enabled hearing aid manufacturers to reduce the size and power consumption of their products. It will also explain why ultra-low-power semiconductor technology alone is no longer sufficient for today's highly sophisticated hearing aids, which require an entire miniature computer system on a piece of silicon. These systems must be flexible so they can be quickly reconfigured to support new feature sets, and they must be closely mapped to the semiconductor technology they use to optimize performance.

The article will then discuss how semiconductor companies have crafted such systems, first by introducing digital signal processing (DSP) technology and now reconfigurable DSP systems. It will conclude with a short discussion of the hearing aid features that today's technology enables, new features that may soon become available, and an example of an advanced new hearing aid IC (integrated circuit). But first, let's take a brief look at the history of the hearing aid.


TRACING THE ROOTS OF ELECTRONIC HEARING AIDS

Electronic hearing aids can be traced back to the turn of the 20th century. The first designs employed technology taken from the newly invented telephone, in which two thin metal plates compressed carbon granules in response to sound waves. The carbon granules changed their electrical resistance under sound wave pressure, thereby modulating an electrical current flowing through the microphone. The larger the microphone relative to the earphone cupped to the ear, the greater the sound amplification. The initial carbon ball hearing aids were quite large. The first one on the market was a table-model instrument made by the Dictograph Company in 1898.

After decades of steady improvement (see the timeline in Figure 1), the first all-digital hearing aids appeared in the mid-1990s. Today, nearly all new hearing aids are based on DSP technology, which offers very advanced audio processing capabilities and battery lifetimes of about a week. As hearing aid technology evolves further, advanced features such as wireless communication between left and right hearing aids will be available to further enhance the listening experience.

[Figure 1: timeline of hearing aid development]

MINIATURE CIRCUITS ADVANCED PERFORMANCE

Until the mid-to-late 1990s, all hearing aids were based on conventional analog technology. They used a microphone to convert sound waves into electrical signals, amplified and equalized those signals to improve sound quality, filtered out noise, and then used a miniature speaker to convert the electrical signals back into sound at a location much closer to the eardrum.

The primary drawback of early analog aids was that sounds were amplified over the full audible frequency range. Because hearing-impaired people do not hear all sounds equally well across the listening spectrum, the sounds they could still hear well were amplified together with those they found more difficult to hear, resulting in an uncomfortable listening experience. Furthermore, when users turned up the volume to hear soft sounds, loud sounds were also boosted. Often, users had to adjust the volume continually to keep both loud and soft sounds comfortable.

Early hearing aids addressed the problem of loud sounds by “clipping” the output so the sound was not over-amplified. However, clipping distorted louder sounds. So, while early analog aids were effective in a quiet room, they were not very user-friendly in more complex listening situations.
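
To see why clipping distorts, consider a minimal sketch (all values illustrative): once the amplified waveform exceeds the output stage's limit, its peaks are flattened, and those flattened peaks add harmonic distortion to loud sounds.

```python
import numpy as np

fs = 16000                              # sample rate in Hz (illustrative)
t = np.arange(fs // 100) / fs           # 10 ms of signal
x = np.sin(2 * np.pi * 1000 * t)        # clean 1 kHz tone

gain = 4.0                              # user turns the volume up
limit = 1.0                             # output stage saturates here
y = np.clip(gain * x, -limit, limit)    # "peak clipping": the tops of the
                                        # waveform are flattened, which adds
                                        # odd harmonics (audible distortion)
```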

Later, analog hearing aids used dynamic range compression, which addressed the issue of full-range amplification by varying the amplification with the input signal level. These devices later became programmable, allowing users to adjust the equalization of their hearing aids to meet their particular needs. Some of these hearing aids also featured noise reduction and feedback suppression; however, the limitations of analog signal processing meant that these features were quite crude.

The most advanced analog hearing aids featured multi-band processing schemes that incorporated wide dynamic range compression (WDRC) and sometimes adaptive time constants for the compression to further improve sound quality.
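
As a rough sketch of the WDRC idea (the threshold, ratio, and maximum gain below are hypothetical fitting values), soft sounds receive full gain while sounds above a compression threshold receive progressively less; a real aid would also smooth the level estimate with the attack and release time constants mentioned above.

```python
def wdrc_gain_db(level_db, threshold_db=45.0, ratio=2.0, max_gain_db=30.0):
    """Single-band WDRC gain rule (all fitting values hypothetical)."""
    if level_db <= threshold_db:
        return max_gain_db                      # soft sounds: full gain
    # above the compression knee, output rises only 1 dB
    # for every `ratio` dB of input
    return max_gain_db - (level_db - threshold_db) * (1.0 - 1.0 / ratio)

for level in (30, 45, 60, 80):                  # soft ... loud input, dB SPL
    print(f"{level} dB in -> {wdrc_gain_db(level):.1f} dB of gain")
```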


THE IMPACT OF DIGITAL

Semiconductor companies, which had previously helped develop miniature analog technology for hearing aids, began introducing new technology in the mid-1990s for designing and implementing fully digital instruments. Using DSP technology, these devices capture sound much as an analog aid does, but then convert the electrical signal into a digital representation.

This digital representation is then mathematically manipulated by DSP circuitry, with far more accurate results than analog signal processing can achieve. This allows the hearing aid's response to be adjusted more precisely to an individual's requirements and to the wide range of everyday listening situations the user encounters. Once the digital processing is complete, the hearing aid transforms the digital representation back into sound waves (see Figure 2).

[Figure 2: signal path of a digital hearing aid]

Digital hearing aids can do many things that are impossible with analog-based hearing aids. For example, digital devices can divide the sound information into many components based on frequency, time, or intensity and apply different processing techniques to manipulate the signal, resulting in precise tuning of the signal to benefit the hearing-impaired consumer.
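
A minimal sketch of the frequency-division idea, assuming three hypothetical bands and per-band gains taken from a fitting (a real aid would use far more power-efficient filter structures):

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 16000                                       # sample rate, Hz
bands = [(125, 500), (500, 2000), (2000, 7500)]  # hypothetical band edges, Hz
gains_db = [5.0, 20.0, 10.0]                     # per-band gains from a fitting

def process(x):
    """Split x (float samples) into bands, apply a gain per band, recombine."""
    y = np.zeros_like(x, dtype=float)
    for (lo, hi), g_db in zip(bands, gains_db):
        sos = butter(4, (lo, hi), btype="bandpass", fs=fs, output="sos")
        y += sosfilt(sos, x) * 10 ** (g_db / 20)  # shape each band separately
    return y
```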

Algorithms can be designed that filter out unwanted noise and perform tasks such as automatic feedback suppression, speech enhancement, noise reduction, directional processing, and echo cancellation. Advanced algorithms for pattern recognition let the hearing aid automatically change processing modes based on the varying sound environments, such as a noisy street versus a quiet room.

The digital hearing aid's ability to process sound accurately also allows advanced multi-microphone processing techniques that can provide benefits in noisy listening situations. Using sophisticated digital processing, these techniques perform spatial processing over various frequency bands to provide users with consistent directionality based on a “listen where you look” paradigm. The techniques can also employ a “steerable null” that helps attenuate unwanted noise sources emanating from a particular direction. However, directionality is not always desired, especially in situations where the noise level is low or when the user wishes to listen to music. Thus, some DSP hearing aids can switch between directional and non-directional (omnidirectional) modes, either automatically or at the user's command.
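
A minimal sketch of the steerable-null idea for a two-microphone endfire pair (the spacing, sample rate, and plane-wave assumption are all illustrative): delaying the rear microphone by the acoustic travel time from a given direction, then subtracting, cancels sound arriving from that direction.

```python
import numpy as np

C, FS, D = 343.0, 16000, 0.012  # speed of sound (m/s), sample rate, mic spacing (m)

def steerable_null(front, rear, null_angle_deg):
    """First-order differential pair: delay the rear mic by the acoustic
    travel time from `null_angle_deg` (0 = directly behind, along the mic
    axis), then subtract, so plane waves from that direction cancel."""
    tau = (D / C) * np.cos(np.radians(null_angle_deg))  # delay, seconds
    n = np.arange(len(rear), dtype=float)
    delayed = np.interp(n - tau * FS, n, rear)          # fractional-sample delay
    return front - delayed                              # null toward the source
```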

Digital hearing aids also make possible sophisticated noise-reduction techniques that evaluate the level of background noise during pauses in speech and subtract this estimated noise level from the speech signal, a technique known as “spectral subtraction.” By coupling this technique with psychoacoustic-based post-processing to eliminate so-called “musical noise,” hearing aid developers can implement highly effective noise-reduction systems with excellent audio quality.
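
A minimal sketch of spectral subtraction, assuming a separate noise-only segment for the estimate and using a simple spectral floor as a stand-in for the psychoacoustic post-processing (frame size and floor value are illustrative):

```python
import numpy as np

def spectral_subtraction(x, noise, frame=256, floor=0.05):
    """Frame-by-frame spectral subtraction with a spectral floor to tame
    'musical noise' (overlap-add gain normalization omitted for brevity)."""
    win = np.hanning(frame)
    hop = frame // 2
    # average the noise magnitude spectrum over noise-only frames
    # (standing in for an estimate gathered during pauses in speech)
    mags = [np.abs(np.fft.rfft(noise[i:i + frame] * win))
            for i in range(0, len(noise) - frame + 1, frame)]
    noise_mag = np.mean(mags, axis=0)
    out = np.zeros(len(x))
    for start in range(0, len(x) - frame + 1, hop):
        seg = x[start:start + frame] * win
        spec = np.fft.rfft(seg)
        mag = np.abs(spec) - noise_mag                 # subtract noise estimate
        mag = np.maximum(mag, floor * np.abs(spec))    # spectral floor
        clean = mag * np.exp(1j * np.angle(spec))      # reuse the noisy phase
        out[start:start + frame] += np.fft.irfft(clean) * win
    return out
```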

The feature set and quality of a hearing aid depend on the performance of its DSP core, which uses a computing capability similar to that found in a personal computer. In traditional DSPs, increased computing capability also increases power consumption, hence battery drain. In the early days of digital hearing aids, manufacturers specified fixed-function DSPs. These would compute exactly the algorithms they were designed for. However, when new algorithms were conceived, new DSP circuits had to be developed.

The development of a new generation of DSPs that could run newer and more sophisticated algorithms also required new semiconductor processes and new system approaches to ensure that power consumption stayed within acceptable limits.


ADDING VALUE BEYOND PROCESS TECHNOLOGY

The challenge of developing ever-more advanced hearing aids lies in a fundamental DSP tradeoff: computing capability versus power consumption. Taking advantage of continuously evolving semiconductor process technologies, hearing aid manufacturers were able to specify more sophisticated DSPs that demanded more computing capability. That enabled them to introduce new features without increasing power consumption, permitting the development of smaller, more sophisticated hearing aids that did not need bigger batteries.

IC process technology continues to improve. In theory, hearing aid manufacturers could simply rely on this constant improvement to solve their DSP needs; however, this poses a few practical problems. Design cycles using sophisticated new semiconductor process technologies take longer because of their increasing complexity. This means that the time required to implement new audio processing algorithms, which may well require a new DSP engine, is still lengthy.

Manufacturers have attempted to include as many algorithm ideas as possible into one DSP circuit to extend its lifetime. However, not all ideas will fit on the same DSP, and sometimes the best idea emerges after the DSP's development phase has reached “feature lock-down.” In today's market, where new products are introduced more often, manufacturers need to modify or specify new DSPs more frequently to keep up with the competition.

The solution to this dilemma is flexible DSP technology, in which the portions of the signal processing algorithm that may change are written in software code (or microcode). Portions of these algorithms that are common and will not change can be coded in hardware or in microcoded hardware, which is generally more power-efficient than a fully software-programmable system. AMI Semiconductor (AMIS) calls this combination a reconfigurable application-specific signal processor or RASSP. Using RASSP, manufacturers can now create their algorithms in software and microcode, and re-program hearing aids with new features without changing the DSP hardware. The flexibility in rapidly creating new products comes from using RASSP solutions rather than crafting new DSPs each time.
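
As a loose software analogy only (not AMIS's actual architecture), the sketch below models the hardwired, power-efficient portion as a fixed kernel whose operation never changes, and the reconfigurable portion as a program expressed purely as data that can be replaced in the field:

```python
import numpy as np

def fir_kernel(x, coeffs):
    """Stands in for a hardwired multiply-accumulate block: the operation
    itself never changes, only the data and coefficients fed to it do."""
    return np.convolve(x, coeffs, mode="same")

# The "software" layer: an algorithm expressed purely as configuration,
# replaceable in the field without touching the kernel above.
program = [
    np.array([0.25, 0.5, 0.25]),   # stage 1: smoothing
    np.array([1.0, -1.0]),         # stage 2: differencing
]

def run(x):
    for coeffs in program:         # a new feature set = a new `program`,
        x = fir_kernel(x, coeffs)  # the same silicon underneath
    return x
```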

To develop RASSPs, semiconductor companies had to create specific DSP architectures that exploit the commonality found in many signal processing schemes, yet offer sufficient flexibility for a wide range of applications. It takes this level of innovation to meet the power-consumption constraints of hearing aids along with the demand for fast time-to-market development.

A few semiconductor companies, including AMIS, specialize in this type of RASSP technology and deliver more than just a flexible DSP; they deliver an entire system-on-a-chip (SoC) for hearing aids and related ultra-low-power applications. An SoC tightly integrates the required functional blocks of a hearing aid, such as the microphone interface; output driver; the program, volume, and fitting control functions; and battery management, which enables the instrument to operate with both disposable and rechargeable batteries.

The SoC is then further integrated to become a system-in-a-package (SiP) (Figure 3), which includes the necessary passive components and memory. The passive components are required to interface with the hearing aid's external transducers, while the memory stores the audio processing software and the user's data. Integrating everything in one package ensures that hearing aids can be built using the fewest components, while enabling small form factor designs.

[Figure 3: a system-in-a-package (SiP) combining the SoC with passive components and memory]

MORE COMPUTATIONS FOR THE SAME POWER

Today, semiconductor companies are working on RASSPs that offer more precise DSP operations for hearing aids. Much as a calculator carries a fixed number of digits after the decimal point, a DSP circuit manipulates data with greater or lesser precision, which in DSP terms equates to the number of bits being processed simultaneously.

Algorithm performance is strongly influenced by this parameter. For example, calculating more precisely which sounds ought to be heard, based on the direction from which they originate, results in greater listening comfort in noise. An adaptive feedback canceler also benefits from high-precision computing: it allows smaller hearing aids with high output power to be built, since feedback can be prevented even when the microphone(s) and the receiver are very close together.
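
For illustration only, here is a textbook normalized-LMS feedback canceler, not the algorithm of any particular product (the tap count and step size are arbitrary); it adapts an FIR model of the acoustic feedback path and subtracts the model's prediction from the microphone signal:

```python
import numpy as np

def lms_feedback_canceler(mic, receiver, taps=32, mu=0.01):
    """Adapt an FIR model of the acoustic feedback path and subtract its
    prediction from the microphone signal (normalized-LMS update)."""
    w = np.zeros(taps)                    # estimated feedback path
    out = np.zeros(len(mic))
    for n in range(taps, len(mic)):
        r = receiver[n - taps:n][::-1]    # most recent receiver samples
        e = mic[n] - w @ r                # microphone minus predicted leakage
        w += mu * e * r / (r @ r + 1e-9)  # adapt toward the true path
        out[n] = e                        # feedback-cancelled output
    return out
```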

Sound quality also benefits from higher precision. To render the full range of audible frequencies, RASSPs must have good audio front-end and back-end stages as well as ample computing precision and capability. Often, hearing aids process only the speech portion of the frequency spectrum, leaving out the fuller picture of audio and other sounds. Chip manufacturers are working to enhance the circuitry that captures signals from the microphone(s) so that the hearing aid considers a greater range of frequencies in processing the signal. Greater DSP computational precision will result in higher-quality audio. Not only will the essential sounds related to speech be adjusted to the individual user, but the listener will also get the full depth of sounds that contribute equally to the emotional experience of hearing.
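
Each additional bit of fixed-point precision buys roughly 6 dB of signal-to-quantization-noise ratio, as this small sketch illustrates (a uniform random signal stands in for audio):

```python
import numpy as np

def quantize(x, bits):
    """Round a signal in [-1, 1) onto a fixed-point grid of `bits` bits."""
    step = 2.0 ** (1 - bits)              # size of one quantization step
    return np.round(x / step) * step

x = np.random.uniform(-1, 1, 100_000)     # stand-in for an audio signal
for bits in (16, 20, 24):
    err = x - quantize(x, bits)
    snr = 10 * np.log10(np.mean(x ** 2) / np.mean(err ** 2))
    print(f"{bits}-bit: ~{snr:.0f} dB signal-to-quantization-noise ratio")
```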


PUSHING THE FRONTIER

One example of an RASSP is AMI Semiconductor's new family of DSPs for hearing aids. The Ezairo 5900 series builds on the flexible DSP concept in which algorithms are written in software code and then run on the DSP, allowing faster time to market with new product ideas. The product line's flexible and reconfigurable DSP engine not only has the power to run several of the sophisticated algorithms described earlier simultaneously, it also has the precision to produce superior sound quality.

Ezairo borrows techniques used in professional audio systems, such as 24-bit audio computing precision and an input dynamic range of up to 110 dB, to ensure that sounds from very soft to very loud can be captured, processed, and rendered with superior quality.

Electronic hearing aids have evolved very far from the carbon ball equipment of more than a century ago to today's multiple-utility aural communication devices. The days when hearing aids could focus on only a small portion of the sound frequency range and process only speech are over. Today's devices are on the cusp of becoming the aural interface between the user and the cacophony of sound that the real world presents every day.

© 2007 Lippincott Williams & Wilkins, Inc.
