Bone-Conduction Hearing Aids

Into the (Near) Future

Mejia, Jorge PhD; Nguyen, Cong-Van MEng; Yeend, Ingrid MA; Loi, Teck MBiomedE; Cowan, Robert PhD; Dillon, Harvey PhD; National Acoustic Laboratories and the HEARing Cooperative Research Center, Sydney

doi: 10.1097/01.HJ.0000470894.83693.cf
NAL News

Dr. Mejia is a senior research engineer with the HEARing Cooperative Research Center (CRC), focusing on the psychoacoustics of hearing and signal processing coding for binaural hearing devices. Mr. Nguyen is a research engineer with National Acoustic Laboratories (NAL), with interests in statistical signal processing and machine learning. Ms. Yeend, a NAL research audiologist, works as a member of the Rehabilitation Devices team on the evaluation and improvement of hearing device effectiveness. Mr. Loi led the development of HEARLab, a PC-based audiological test system. Dr. Cowan, who is CEO of the HEARing CRC and HEARworks P/L, its commercial arm, as well as a principal research fellow at the University of Melbourne and an adjunct professor at Macquarie University, has extensive experience in the management and commercialization of hearing research. NAL Director Dr. Dillon is best known for his hearing aid research, and he recently has been active in the study of auditory processing disorders and electrophysiological assessment techniques for infants.

Figure 1.

Welcome to a new column for The Hearing Journal. We will regularly be bringing you results from the research and development activities of the National Acoustic Laboratories (NAL) in Sydney. At NAL, we are interested in anything and everything concerning hearing loss, including assessment, rehabilitation, and prevention.

This month's article concerns two topics that you might not think have much to do with each other: bone-conduction hearing systems and super-directional beamformer microphones. These technologies actually work extremely well together; both require devices on each side of the head that connect to one another, and the headband that's needed for the bone-conduction hearing system facilitates this connection. NAL's interest in the devices arises from the need for improved hearing aids for children with fluctuating conductive hearing loss, but the instruments have other applications as well.

It is commonly thought that having a bone conductor anywhere on the head excites both cochleae equally, but, in reality, the signal is earlier and stronger in the closer cochlea, as occurs with an air-conduction signal, though to a lesser degree. As this article shows, the speech intelligibility advantages provided by super-directional processing (from microphones on both sides of the head) complement the localization advantages offered by wearing bone conductors on both sides of the head.

—Harvey Dillon, PhD

We examined a recently developed vibratory technology in combination with a state-of-the-art super-directional strategy called a beamformer. From the study, we concluded that listening through binaural bone-conduction devices allows distinct sounds to be perceived at the two cochleae. In this way, having two devices can preserve some localization cues for the listener.

Furthermore, the binaural processing provided by the NAL/HEARing Cooperative Research Center (CRC) beamformer, which we have shown provides equivalent listening performance to the normal air-conduction pathway, offers an intelligibility advantage. These outcomes are of particular importance for those who wear bone-anchored or bilateral bone-conduction devices.

Vibratory technology such as bone-conduction hearing aids can overcome conductive hearing loss in some hearing-impaired listeners who, for various reasons, receive limited benefit from conventional air-conduction hearing aids. These reasons include the presence of a fluctuating conductive loss, typically caused by recurrent otitis media, and the absence of a pinna or ear canal.

In bone-conduction hearing aids, the sound signals picked up by microphones are amplified and used to drive vibrating transducers typically positioned on the mastoid. These vibrators are either surgically anchored, as in a bone-anchored hearing system, or worn with a headband.

The sound vibrations applied to the skull bypass the normal transmission path through the outer and middle ear, instead causing vibration within the cochlea via several paths and mechanisms. In cases where the outer or middle ear attenuates the normal transmission of acoustic information, bone-conduction hearing aids enable people to receive amplified sound.

Postsurgical complications related to skin necrosis or infection of the tissue surrounding the implant have been reported in a small proportion of bone-anchored hearing system cases (Otol Neurotol 2010;31[5]:766-772. http://journals.lww.com/otology-neurotology/pages/articleviewer.aspx?year=2010&issue=07000&article=00010&type=abstract). Implantable devices are also more costly than headband-worn bone-conduction hearing aids.

On the other hand, the tightness of headband-worn devices may cause the pressure exerted by the bone conductor to exceed capillary closure pressure at the point of contact, which is known to result in discomfort and skin ulcerations, and, in extreme cases, a permanent depression in the skin (Aust N Z J Audiol 2008;30[2]:113-117. http://search.informit.com.au/documentSummary;dn=580112648175830;res=IELNZC). Further, the transmission of sound vibration from the transducer to the skull is not as effective with headbands compared with implanted devices because the vibratory force is absorbed to an extent by the intervening tissue.

Current bone-conduction transducers are known to have relatively high distortion and a more restricted frequency range than conventional hearing aids (Dillon H. Hearing Aids. 2nd ed. Turramurra, Australia: Boomerang Press; 2012). The bone-conduction transducer resonance limits the frequency range to 250 Hz to 4 kHz, with poor output levels near the extremes of this range.

The excitation levels achieved do not exceed those produced by a sound of 70 dB SPL in a normal ear (Otol Neurotol 2010;31[3]:492-497. http://journals.lww.com/otology-neurotology/pages/articleviewer.aspx?year=2010&issue=04000&article=00022&type=abstract). As a result, sound perceived via bone-conduction transducers is considerably limited compared with sound heard through air conduction by people with normal hearing.

However, recent developments in transducer vibratory technology offer the prospect of improved performance. Increasing the surface area at the point of contact makes headband-worn bone-conduction transducers more comfortable and wearable (see Figure 1).

Changes to the motor, based on balanced variable reluctance design, have improved the dynamic range and distortion, resulting in better sound quality for the listener (Aust N Z J Audiol 2008;30[2]:113-117. http://search.informit.com.au/documentSummary;dn=580112648175830;res=IELNZC; J Acoust Soc Am 2003;113[2]:818-825. http://scitation.aip.org/content/asa/journal/jasa/113/2/10.1121/1.1536633).


NEW STUDY WITH HEADBAND

The recent study at the National Acoustic Laboratories examined the capabilities of bone-conduction hearing aids used in combination with a beamformer algorithm developed by NAL and the HEARing CRC.

The beamformer combines microphone output signals from both sides of the head to produce a super-directional output signal (Mejia J, Dillon H, Nguyen C-V, Walravens E, Convery E, Keidser G. Directional benefit from binaurally linked noise-reduction processing for hearing aid applications. Paper presented at: Audiology Australia XX National Conference; July 1-4, 2012; Adelaide, Australia).
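The full NAL/HEARing CRC algorithm is not described here, but the underlying idea, combining the left and right microphone signals so that sound from the look direction adds coherently while off-axis sound is attenuated, can be illustrated with a minimal delay-and-sum sketch in Python. The function and parameter names are illustrative and are not part of the published algorithm.

```python
import numpy as np

def delay_and_sum_beamformer(left_mic, right_mic, fs, look_delay_s=0.0):
    """Minimal delay-and-sum sketch: time-align the two head-worn microphone
    signals toward the look direction and average them, so sound arriving
    from that direction adds coherently while off-axis sound is attenuated.
    (Illustrative only; the NAL/HEARing CRC binaural beamformer is a more
    sophisticated super-directional design.)"""
    delay_samples = int(round(look_delay_s * fs))
    # Advance the right channel by the interaural delay expected for the
    # look direction (zero for a frontal target).
    right_aligned = np.roll(right_mic, -delay_samples)
    return 0.5 * (left_mic + right_aligned)

# Example: combine two 1-second microphone recordings sampled at 16 kHz.
fs = 16000
rng = np.random.default_rng(0)
left, right = rng.standard_normal(fs), rng.standard_normal(fs)
output = delay_and_sum_beamformer(left, right, fs)
```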

The stimuli were recorded using behind-the-ear microphones on a Knowles Electronics Mannequin for Acoustic Research (KEMAR) head and stored to a computer. The recorded stimuli were processed off-line to form different microphone signals.

These signals were amplified and then presented both binaurally and monaurally to the participants by routing the sound card outputs to either insert earphones (Etymotic Research ER-2) or bone-conduction transducers (Ortofon BC2).
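As a rough sketch of how off-line processed stimuli might be routed from sound card outputs to a chosen pair of transducers, the following uses the third-party sounddevice library; the library choice and channel mapping are assumptions, not the software actually used in the study.

```python
import numpy as np
import sounddevice as sd  # assumed playback library, not from the study

def present_stimulus(stereo_stimulus, fs, output_channels=(1, 2)):
    """Sketch of presenting an off-line processed stereo stimulus through two
    sound-card output channels, which could drive insert earphones or
    bone-conduction transducers via an external amplifier. The channel
    numbers are illustrative."""
    sd.play(stereo_stimulus.astype(np.float32), samplerate=fs,
            mapping=list(output_channels), blocking=True)

# Example: play 0.5 s of quiet noise to output channels 1 and 2.
fs = 44100
stimulus = 0.05 * np.random.default_rng(0).standard_normal((int(0.5 * fs), 2))
present_stimulus(stimulus, fs)
```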

Figure 2.

In the bone-conduction listening condition, two microphone modes were formed—conventional directional and super-directional—and two listening modes were administered to all participants—monaural and binaural. In all conditions, the participants’ ears were occluded with tightly fitting, soft, ultra-carved shell earmolds with a long canal length. The experimental setup is shown in Figure 2.

Speech reception threshold in noise was measured for all presentation modes, and localization in quiet was assessed for the binaural listening conditions.


SPEECH INTELLIGIBILITY IN NOISE

Normal-hearing participants (7 women and 5 men) were asked to repeat Bamford–Kowal–Bench-like sentences that were recorded and presented with a background signal similar to that of a noisy cafeteria. The signal-to-noise ratio was varied adaptively to achieve a level at which 50 percent of morphemes were correctly identified for each sentence (Int J Audiol 2013;52[11]:795-800. http://informahealthcare.com/doi/abs/10.3109/14992027.2013.817688).
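The adaptive procedure can be sketched as a simple up-down track in which the SNR is lowered after a response scored above 50 percent morphemes correct and raised otherwise. This is a generic illustration under assumed rules and step sizes, not the exact published procedure; the score_trial callback is hypothetical.

```python
import numpy as np

def adaptive_srt(score_trial, n_sentences=30, start_snr_db=0.0, step_db=2.0):
    """Generic sketch of an adaptive speech-reception-threshold track.
    score_trial(snr_db) -> proportion of morphemes correct (0.0-1.0),
    supplied by the test software (hypothetical callback). The SRT is
    estimated as the mean SNR over the second half of the track."""
    snr = start_snr_db
    track = []
    for _ in range(n_sentences):
        track.append(snr)
        proportion_correct = score_trial(snr)
        # Make the task harder after a good trial, easier after a poor one.
        snr += -step_db if proportion_correct > 0.5 else step_db
    return float(np.mean(track[len(track) // 2:]))  # SRT estimate in dB SNR
```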

Figure 3.

The results, shown in Figure 3, suggest that the directional microphone processing presented binaurally through insert earphones is significantly better than binaural or monaural presentation through bone-conduction transducers. On the other hand, super-directional processing presented either monaurally or binaurally through bone-conduction transducers scored as well as the directional microphone processing presented binaurally via insert earphones.

There was no significant difference between binaural and monaural presentations in the super-directional processing mode. In other words, the super-directional processing itself provided the greatest benefit, and adding a second bone-conduction transducer had no significant effect on speech intelligibility in noise.


LOCALIZATION OF SOUND

Directional microphones were used in the localization test. The initial procedure consisted of a loudness balance adjustment between insert earphones and bone-conduction transducers.

A 300-ms pure-tone beep was presented via insert earphones at 60 dB SPL to 65 dB SPL (measured in a Zwislocki coupler), followed by a 300-ms silence gap, and then the same 300-ms beep was presented via bone-conduction transducers. The sequence was repeated after a 1-s silence gap. Each beep sequence was generated for every third-octave band, which ranged from 250 Hz to 5 kHz.

The loudness of the bone-conduction transducer mode was adjusted higher or lower by the listener to match the loudness of the insert earphone mode. The procedure was administered monaurally while the opposite ear was masked with a white noise presented at 65 dB SPL. The levels obtained for equal loudness were individually used to adjust the gain-frequency response applied to the white noise sound-burst stimuli used in the localization test.
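A minimal sketch of how per-band loudness-balance gains could be applied to a white-noise burst follows, assuming generic third-octave bandpass filters between 250 Hz and about 5 kHz; the band centres, filter order, and flat example gains are assumptions rather than the study's actual values.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def apply_third_octave_gains(noise, fs, centre_freqs_hz, gains_db):
    """Shape a noise burst by filtering it into third-octave bands and
    summing the bands after applying per-band gains (e.g. listener-derived
    loudness-balance corrections)."""
    shaped = np.zeros_like(noise)
    for fc, gain_db in zip(centre_freqs_hz, gains_db):
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)  # third-octave edges
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        shaped += 10 ** (gain_db / 20.0) * sosfilt(sos, noise)
    return shaped

fs = 44100
rng = np.random.default_rng(0)
noise = rng.standard_normal(int(0.05 * fs))      # 50-ms white-noise burst
centres = 250 * 2 ** (np.arange(14) / 3)         # 250 Hz ... ~5 kHz
gains = np.zeros_like(centres)                   # flat example gains
burst = apply_third_octave_gains(noise, fs, centres, gains)
```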

During the actual localization task, participants were asked to point, using a mouse, to the apparent direction of arrival for each stimulus on a spatial (virtual) map drawn on the computer screen. Each stimulus contained three shape- and level-adjusted white noise sound bursts of 50-ms duration, which were convolved with head-related impulse responses from horizontal azimuth directions at 0 degrees, ±22.5 degrees, ±45 degrees, ±67.5 degrees, and ±90 degrees.
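Generating one virtual-direction stimulus amounts to convolving the shaped noise burst with the left- and right-ear head-related impulse responses for the chosen azimuth. Below is a minimal sketch; the placeholder impulse responses in the example stand in for measured KEMAR HRIRs and are only for illustration.

```python
import numpy as np
from scipy.signal import fftconvolve

def spatialize_burst(burst, hrir_left, hrir_right):
    """Convolve a shaped noise burst with the left/right head-related
    impulse responses for one azimuth, returning a stereo stimulus."""
    left = fftconvolve(burst, hrir_left, mode="full")
    right = fftconvolve(burst, hrir_right, mode="full")
    return np.stack([left, right], axis=-1)  # shape: (samples, 2)

# Example with placeholder impulse responses (a real test would load
# measured HRIRs for each of the nine azimuths used in the study).
fs = 44100
burst = np.random.default_rng(0).standard_normal(int(0.05 * fs))
hrir_l = np.zeros(64); hrir_l[0] = 1.0    # sound reaches the left ear first
hrir_r = np.zeros(64); hrir_r[30] = 0.8   # delayed, attenuated at the right ear
stimulus = spatialize_burst(burst, hrir_l, hrir_r)
```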

In the insert earphone mode, 10 practice runs were provided. Visual feedback on the actual direction was provided on screen during practice runs but not during the actual localization test. Also, for every practice run, the participant was allowed to listen to each stimulus a second time, if needed. This procedure was repeated for the bone-conduction transducer mode.

Figure 4.

The results, shown in Figure 4, suggest that participants can perceive direction of arrival in the bone-conduction transducer mode but experience an apparent compression of horizontal azimuth. A considerable reduction in localization accuracy is apparent for angles greater than ±45 degrees azimuth.


CLOSE TO NORMAL HEARING

The combination of high-fidelity bone-conduction transducers with the advanced super-directional binaural beamformer algorithm appears to provide significant hearing advantages for listeners.

For hearing in noise, the advantage came from the noise suppression enabled by the binaurally linked super-directional processing rather than from binaural presentation itself. For hearing in quiet, listeners appear to retain accurate localization perception for angles less than 45 degrees. This finding is consistent with research by Ad F.M. Snik, PhD, and colleagues, who reported no difference between air- and bone-conduction localization for 45-degree source directions (Arch Otolaryngol Head Neck Surg 1998;124[3]:265-268. http://archotol.jamanetwork.com/article.aspx?articleid=218938).

However, the maximum apparent angle from the front of sounds under the bone-conduction transducer condition was compressed to 59 degrees compared with 77 degrees under the insert earphone condition. This result is consistent with findings from simulated models suggesting that interaural time cues are compressed from 0.8 ms in open ears to 0.4 ms under bone-conduction stimulation (O'Brien WD Jr, Liu Y. Evaluation of acoustic propagation paths into the human head. In: New Directions for Improving Audio Effectiveness. RTO-MP-HFM-123. North Atlantic Treaty Organization (NATO) Science and Technology Organization; 2005:15-1–15-24. http://ftp.rta.nato.int/public/PubFullText/RTO/MP/RTO-MP-HFM-123/MP-HFM-123-15.pdf).
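As a back-of-the-envelope illustration of why compressed time cues shrink the apparent azimuth range, the simple Woodworth spherical-head approximation (an assumption used here for illustration, not the model in the cited report) maps a halved interaural time difference to a much smaller apparent angle; the angles actually perceived in the study also depend on level cues and individual factors.

```python
import numpy as np

def woodworth_itd(azimuth_deg, head_radius_m=0.0875, c=343.0):
    """Woodworth spherical-head approximation of the interaural time
    difference (in seconds) for a far-field source at a given azimuth."""
    theta = np.radians(azimuth_deg)
    return (head_radius_m / c) * (theta + np.sin(theta))

# If bone conduction roughly halves the time cue, a source at 90 degrees
# produces the ITD normally associated with a much smaller azimuth.
itd_90 = woodworth_itd(90.0)                     # ~0.66 ms for open ears
azimuths = np.linspace(0.0, 90.0, 9001)
compressed = azimuths[np.argmin(np.abs(woodworth_itd(azimuths) - itd_90 / 2))]
print(f"ITD at 90 deg: {itd_90 * 1e3:.2f} ms -> halved cue maps to ~{compressed:.0f} deg")
```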

Because binaural hearing through the bone-conduction transducer condition provides some directional cues, it results in a more natural sound perception than the monaural listening condition. Likewise, bilaterally implanted bone-anchored hearing system users have reported improved levels of satisfaction (J Laryngol Otol 2002;116[suppl S28]:47-51. http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=402489&fulltextType=RA&fileId=S0022215102001524).

It is also unsurprising that, in most studies, listeners continue to experience some degree of difficulty during conversations with two or more people in noisy surroundings (J Laryngol Otol 2002;116[suppl S28]:47-51. http://journals.cambridge.org/action/displayAbstract?fromPage=online&aid=402489&fulltextType=RA&fileId=S0022215102001524).

Although some localization cues are preserved, the degree of preservation is not sufficient to assist the listener in segregating a desired target sound from the surrounding noise. This reality is evident from the speech discrimination results reported here, which were the same for monaural and binaural listening with the bone-conduction presentation.

By contrast, the super-directional microphone processing enabled close to normal speech intelligibility in noise.

Funding: This project was financially supported by the HEARing Cooperative Research Center, which is established and supported under the Cooperative Research Centers Program, an Australian Government Initiative.

Copyright © 2015 Wolters Kluwer Health, Inc. All rights reserved.