Don't let the software take charge

Van Vliet, Dennis

doi: 10.1097/01.HJ.0000389931.95291.17
Final Word

Dennis Van Vliet, AuD, is Director of Education and Professional Relations, AudioSync Hearing Technologies. His column was honored both this year and last by the American Society of Healthcare Publication Editors. Readers may contact Dr. Van Vliet at


I was composing an e-mail the other day on a tablet device I use for a number of things, including responding to e-mails rather than squinting at my small wireless phone screen. The text program automatically corrects typos and guesses what the writer meant to write. In this case, I was trying to write the possessive form of "it," which, as all of us who made it through eighth grade knew at one time, is "its." The tablet device kept "correcting" my attempts with the contraction "it's," so that I was writing "it is" or "it has," which made no sense in my sentence.

The teacher I learned these grammatical rules from was a red-haired, short-tempered former marine named Mr. Sparks. We had to learn the "its-it's" and "there-their-they're" rules or suffer his drill-sergeant-style verbal tirades. I may not remember all the rules all the time, but I know enough to make sure it is (it's) correct when I write something rather than risk looking as if I don't know the rules. I was pretty sure that I was following the "its-it's" rule correctly, so I was frustrated that the software's self-correction was trying to make me look less than literate. I began wondering how the software was written, and what uninformed software code writer took one-too-few English grammar classes before setting out on his or her career.



The experience made me think of how dependent we are on software to guide us, predict where we are going, and lead us through any number of tasks, or, sometimes, to simply do what its logic thinks is best without telling us. Hearing aids are a perfect example. Software makes dozens of decisions when we are fitting hearing aids. The manufacturers' research teams look at the data (we hope!), or at what they think is best, and set default settings for gain, output, compression, noise reduction, and directional microphones, to name a few.

Directional microphones are a great example of this because the act of switching from omni to directional creates ample opportunity for error. If we set up the hearing aids for some form of automatic switching, the devices rely on the rules regarding the intensity, azimuth, and character of sounds at the front and back microphones to determine if switching is appropriate. We may assume that sound coming from the front is typically the signal of interest and be comfortable with an algorithm that automatically reduces the sensitivity to sounds behind the listener. However, we can all think of situations when this would not be what the user desired, so the automatic switching becomes a disadvantage.
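The rule-based switching described above can be pictured as a simple decision function. This is only a minimal sketch of the kind of logic involved; the function name, thresholds, and inputs are hypothetical illustrations, not any manufacturer's actual algorithm.

```python
def choose_microphone_mode(front_level_db, rear_level_db, speech_likelihood):
    """Pick omni vs. directional from simple front/rear cues.

    front_level_db / rear_level_db: estimated sound levels (dB SPL)
    at the front- and rear-facing microphones.
    speech_likelihood: 0.0-1.0 estimate that the front signal is speech.
    All thresholds below are invented for illustration.
    """
    # Quiet surroundings: stay omnidirectional to preserve awareness.
    if max(front_level_db, rear_level_db) < 50:
        return "omni"
    # Substantial sound from behind while speech is likely in front:
    # switch to directional to attenuate the rear.
    if rear_level_db > front_level_db - 3 and speech_likelihood > 0.5:
        return "directional"
    return "omni"

# Noisy restaurant, talker in front: the rule favors directional.
print(choose_microphone_mode(65, 70, 0.8))
# Quiet room: the rule stays omni.
print(choose_microphone_mode(40, 35, 0.2))
```

Even this toy rule shows where the errors come from: if the signal of interest happens to be behind the listener, the same comparison that helps in a restaurant now attenuates exactly what the user wants to hear.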

There is a good reason that manual switches for directional microphones are recommended for optimum benefit. Switching algorithms cannot predict all situations and cannot read the mind of the listener. No matter how sophisticated the algorithms, there will undoubtedly be switching errors. There are similar examples for noise reduction as well.

Our job is to understand the systems that we are fitting well enough to know how the various features affect the sound that the hearing aid wearer ultimately receives at the eardrum. Understanding the features and any software controls available will help us do a better job of matching up the hearing aids and subsequent adjustments with the user.

Manual switching can be very effective, but not every patient is able or willing to switch the hearing aids every time a change is needed. The best we can do is set things up so that whatever system the patient ends up with has the least potential for errors. Depending on the instrument's technology level, we may be able to modify the time constants of the features in question, or thresholds such as the signal-to-noise ratio at which slow-acting noise reduction takes effect and how much of an effect is applied. There are defaults for these settings, but no comprehensive protocols to guide us in modifying them.
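The two adjustable quantities mentioned above, an SNR threshold and a time constant, can be sketched in a few lines. The parameter names and default values here are invented for the sketch and are not taken from any fitting software.

```python
def noise_reduction_gain(snr_db, current_gain_db,
                         snr_threshold_db=5.0,   # hypothetical default
                         max_reduction_db=10.0,  # hypothetical default
                         time_constant_s=2.0,    # "slow-acting" behavior
                         frame_s=0.01):
    """Return the next noise-reduction gain in dB (always <= 0).

    When the estimated SNR falls below snr_threshold_db, the gain ramps
    slowly toward -max_reduction_db; otherwise it ramps back toward 0 dB.
    The step size per frame follows from the time constant, so a longer
    time constant means slower action in both directions.
    """
    target = -max_reduction_db if snr_db < snr_threshold_db else 0.0
    alpha = frame_s / time_constant_s  # small step -> slow action
    return current_gain_db + alpha * (target - current_gain_db)

# In steady noise (SNR 0 dB), the gain creeps toward -10 dB over seconds.
gain = 0.0
for _ in range(1000):  # 1000 frames of 10 ms = 10 s
    gain = noise_reduction_gain(0.0, gain)
```

Raising `snr_threshold_db` makes the feature engage in milder noise; lengthening `time_constant_s` makes it both slower to engage and slower to release, which is exactly the kind of trade-off a clinician, not a default table, should weigh for a given patient.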

The Final Word? Automatic settings and first-fit protocols are designed for the typical user. Unfortunately, few of our patients fall neatly in line with the typical. There is still a place for clinical expertise, no matter how sophisticated the hearing aid we are fitting. Our responsibility is to understand the needs of the patient through careful questioning and reflection on their experiences with the hearing aids, and also to understand the power and limits of the adjustments we have at our disposal with the hearing aids. The automatic features of the software will put us somewhere near the right place, but we can't forget that each individual fitting will likely need modifications to fully match up the performance of the hearing aids to what the user needs. We need to be in charge of the process, not wait for the software to lead us there.

© 2010 Lippincott Williams & Wilkins, Inc.