Difficulty hearing in noise affects approximately 10 percent of patients seeking audiology services. (Ear Hear 1989;10:200.) Researchers and audiologists have struggled to define and replicate this problem clinically, and have had difficulty determining the type of stimuli, noise, and signal-to-noise ratio (SNR) that best imitates real-life situations. Attention, cognition, and language further confound assessment, as do the complexity of the central auditory system and the challenge of pinpointing the sites responsible for such processing. The Listening in Spatialized Noise-Sentences (LiSN-S) test, recently developed by Australian researchers, may have sparked renewed interest in tackling this challenging task. (J Am Acad Audiol 2008;19:377.)
The LiSN-S, distributed by Phonak, is a computer-based software platform that includes standard instructions for patients. The clinician enters the number of correct words for each target sentence in noise, and the final scoring is done by computer. The test takes approximately 20 minutes to administer, and the program generates a report at the end comparing the patient's scores with normative data. Normative data are included with the test beginning at age 6, and the test has been normed for use in North America. (J Am Acad Audiol 2010;21:629.) Normative data extending to age 60 have also been published. (J Am Acad Audiol 2011;22:697.)
The LiSN-S uses mathematical algorithms (head-related transfer functions) to reproduce a three-dimensional space under headphones. Four subtests are designed to be administered in a standard order. Each subtest presents sentences binaurally under headphones with competing stories. The patient repeats as many words as possible from the target sentence while also listening to a story. The level of the target sentences varies in an adaptive approach to find the signal-to-noise ratio at which 50 percent of the words in the target sentences are understood and repeated by the client. Performance indicators are generated for the clinician: two speech-reception threshold (SRT) scores and three advantage scores.
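The idea of adaptively homing in on the 50 percent point can be illustrated with a simple one-up/one-down staircase. The step size, stopping rule, and sentence-level pass/fail scoring below are illustrative assumptions for the sketch, not the actual LiSN-S algorithm, which scores individual words:

```python
def adaptive_srt(respond, start_snr=10.0, step=2.0, reversals_needed=6):
    """Estimate the SNR (dB) at which about 50% of sentences are repeated
    correctly, using a one-up/one-down staircase.

    `respond(snr)` is a callable returning True if the listener repeated
    the sentence correctly at that SNR (a simplification of word-level
    scoring). The SRT estimate is the mean SNR at the reversal points.
    """
    snr = start_snr
    reversals = []
    last_correct = None
    while len(reversals) < reversals_needed:
        correct = respond(snr)
        # A reversal occurs when the response changes from the previous trial.
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)
        last_correct = correct
        # One-up/one-down rule: make it harder after success, easier after failure.
        snr += -step if correct else step
    return sum(reversals) / len(reversals)

# Example with a deterministic simulated listener whose true threshold is 0 dB SNR.
srt = adaptive_srt(lambda snr: snr >= 0.0)
```

With this deterministic listener the track oscillates around the threshold, and the mean of the reversal SNRs lands close to it.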
Interestingly, while the target sentences are always presented from the front, the competing stories are spoken by the same voice or a different voice and can arrive from the front, left, or right side of the patient. This generates four conditions: same voice from the same direction, different voices from the same direction, same voice from different directions, and different voices from different directions. These conditions reflect situations that occur in patients' everyday lives.
Three advantage scores are calculated from the four conditions. The talker advantage (the difference in SRT between the same-voice, same-direction condition and the different-voices, same-direction condition) shows the improvement in dB gained from using the different vocal qualities of the competition relative to the target voice. The spatial advantage (the difference between the same-voice, same-direction condition and the same-voice, different-directions condition) shows the improvement in dB gained from using the different spatial locations. The total advantage (the difference between the same-voice, same-direction condition and the different-voices, different-directions condition) reflects the improvement in dB gained from combining the different vocal qualities and the different spatial locations.
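The advantage scores are simple dB differences between condition SRTs. The sketch below uses hypothetical SRT values and condition labels chosen for illustration; a lower (more negative) SRT is better, so each advantage is the baseline low-cue SRT minus the SRT of the condition providing the cue:

```python
# Hypothetical SRTs (dB SNR) for the four LiSN-S conditions; the labels
# and values here are illustrative, not normative data.
srts = {
    "same_voice_front":  -1.0,   # low-cue condition: no talker or spatial cues
    "diff_voice_front":  -5.0,   # talker cue only
    "same_voice_sides":  -9.0,   # spatial cue only
    "diff_voice_sides": -13.0,   # high-cue condition: both cues
}

# Each advantage is the improvement relative to the low-cue baseline.
talker_advantage = srts["same_voice_front"] - srts["diff_voice_front"]
spatial_advantage = srts["same_voice_front"] - srts["same_voice_sides"]
total_advantage = srts["same_voice_front"] - srts["diff_voice_sides"]
# talker_advantage -> 4.0 dB, spatial_advantage -> 8.0 dB, total_advantage -> 12.0 dB
```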
Two individual subtest scores are also generated, the low-cue and high-cue SRTs, which are of great interest to the clinician. The low-cue condition (same voice from the same direction) reveals the SNR required when no talker or spatial cues are available to the client. The high-cue condition (different voices from different directions) measures the SNR required when both talker and spatial cues are available.
Researchers have suggested that, of all the scores the LiSN-S measures, the high-cue SRT has the potential to be affected by the widest range of disorders. (J Am Acad Audiol 2012;23:97.) This was discovered while attempting to prioritize patients for auditory processing testing, and it may be useful to assess patients with this subtest first. High-cue SRT scores at or near normal levels suggest that the patient will not have problems using spatial or talker cues. Screening this way takes approximately five minutes. If the patient scores poorly on this subtest, further testing with the remaining portions of the test can help delineate the problem.
Difficulty finding professionals who provide treatment for those diagnosed with auditory processing disorder is a common reason audiologists do not offer auditory processing testing. (J Am Acad Audiol 18:428.) A rehabilitation program, LiSN & Learn, has been developed to help patients with processing problems in background noise. (J Am Acad Audiol 2011;22:697.) Preliminary data on deficit-specific remediation from nine participants showed that children improved their SRT by an average of 10 dB over the 12-week training. Children as young as 6 years can complete the training, and the improvements last for three months posttraining. The LiSN & Learn software is available through the National Acoustic Laboratories website. (See FastLinks.)
Great strides have been made in validating more efficient and effective tests to add to audiologists' repertoire. We are perhaps one step closer to understanding the complex processing involved in hearing in the presence of background noise and to offering solutions for our patients' hearing problems. The LiSN-S is based on cutting-edge research, and it will be interesting to watch the clinical uptake of this test and its effects on assessing and, with the LiSN & Learn training software, remediating hearing-in-noise complaints.
© 2012 Lippincott Williams & Wilkins, Inc.
- Visit National Acoustic Laboratories' website at http://bit.ly/NatAcousLab.
- Read past Pathways columns in a special collection at http://bit.ly/PathwaysCollection.
- Visit HJ's Student Blog at http://bit.ly/HJStudentBlog.
- Check out HJ's R&D Blog at http://bit.ly/HJblogRD.
- Click and Connect! Access the links in The Hearing Journal by reading this issue on our website or in our new iPad app, both available at thehearingjournal.com.
- Comments about this article? Write to HJ at HJ@wolterskluwer.com.
- Follow us on Twitter at twitter.com/hearingjournal and like us on Facebook at www.facebook.com/HearingJournal.