Mueller, H. Gustav; Bentler, Ruth A.; Wu, Yu-Hsiang
It seems that just about every month we read about some new advance in hearing aid technology. The practitioners dispensing these instruments must continually make decisions concerning what type of digital noise reduction, directional technology, or Bluetooth applications are best for their patients.
Hearing aid fitting and verification also seem to be constantly changing. We now have new prescriptive methods for both the DSL and the NAL, and real speech has become a routine input signal for probe-microphone measurements.
But one thing has pretty much stayed the same: selecting the appropriate maximum output for each patient. We still usually limit the output by adjusting the AGCo kneepoint, technology that has been available since the 1940s. And, if we don't get it right, we often have an unhappy hearing aid user. That probably hasn't changed in the last 60 years either.
SETTING MAXIMUM OUTPUT: THEORY AND PRACTICE
We've written about loudness, loudness discomfort levels (LDLs), and selecting the hearing aid's maximum output several times here in the Journal over the years.1-3 On the surface, getting the output “right” would seem to be a fairly simple task that could be accomplished during the fitting process, especially given the highly adjustable hearing aids of today. Yet, the most recent MarkeTrak survey shows that a dismal 60% of hearing aid users are satisfied with “comfort with loud sounds.”4 We recognize that this low satisfaction rate may not be related entirely to inappropriate maximum output settings, but it is still disappointing that satisfaction in this area has not improved significantly over the past several MarkeTrak surveys.
While there seems to be no precise formula for setting the output of hearing aids, there is evidence of a relationship between the patient's LDL, the hearing aid's maximum output, and satisfaction with loud sounds in the real world (see Mueller and Bentler for a review5). Given that most dispensers do not routinely conduct testing after the fitting of hearing aids to verify that the maximum output is “okay,”6 it seems reasonable to adjust the maximum output based on the patient's unaided loudness perceptions when the hearing aids are programmed, a procedure included in the Best Practices Guidelines for the management of hearing loss.7
One simple clinical method is to conduct earphone pure-tone LDLs at one or two key frequencies, convert these HL LDL values to 2-cc coupler SPL by adding the appropriate RETSPL (reference equivalent threshold sound pressure level), and use these values for setting the AGCo kneepoints. Although this is easy to do, it is not routine practice; despite the best practice guidelines, most people who dispense hearing aids fail to conduct frequency-specific LDLs (compliance is only about 17%).6
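The HL-to-coupler conversion described above can be sketched in a few lines. This is a minimal illustration, not clinical software; the RETSPL values below are approximate figures for a supra-aural earphone (consult ANSI S3.6 for the exact values for your transducer), and the function name is our own.

```python
# Illustrative, approximate RETSPL values (dB) for a supra-aural
# earphone at the four key frequencies; see ANSI S3.6 for exact values.
RETSPL = {500: 11.5, 1000: 7.0, 2000: 9.0, 4000: 9.5}

def ldl_hl_to_coupler_spl(ldl_hl_db, frequency_hz):
    """Convert an earphone pure-tone LDL in dB HL to an approximate
    2-cc coupler SPL by adding the frequency's RETSPL, as a basis
    for setting the AGCo kneepoint."""
    return ldl_hl_db + RETSPL[frequency_hz]

# Example: a 90-dB-HL LDL at 2000 Hz suggests an AGCo kneepoint
# near 99 dB SPL in the 2-cc coupler.
kneepoint = ldl_hl_to_coupler_spl(90, 2000)
```

The same table-plus-addition step works for any measured LDL frequency, provided the RETSPL table matches the earphone actually used.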
What does seem to be a common method for selecting the maximum output of hearing aids is to use the patient's pure-tone thresholds, and then select a prescribed maximum output based on average LDL data for individuals with that degree of hearing loss. A frequently referenced formula for this type of calculation is that of Dillon and Storey.8
Given the wide range of LDLs for patients with the same hearing loss—40 dB or greater according to Bentler and Cooley9—this method seems a little risky. Yet, Dillon and Storey showed that their predicted optimal output fell within a 2-dB acceptable window for about two-thirds of their field-trial subjects. And, interestingly, they reported that adding the frequency-specific LDLs did not improve the percentage of “acceptable” fittings.8
The fitting success of using average hearing loss data to set the hearing aid's maximum output is, of course, directly related to the particular output prediction formula used. That takes us to the question that our paper seeks to answer: How have different manufacturers implemented predicted optimal output settings in their fitting software? That is, if one followed the method most commonly used by dispensers of only entering pure-tone threshold data, would hearing aid output settings among manufacturers be similar?
MATERIALS AND PROCEDURES
The hearing aids used in this comparative study were the “premier” BTEs from six major manufacturers. If a given model was available in multiple power categories, we used the lowest power category. We programmed the gain and output for each product according to the manufacturer's recommended “default” fitting procedure. For the initial 2-cc coupler testing, we entered a hearing loss of 50 dB into the fitting software for all frequencies; no LDLs were entered.
We conducted all 2-cc coupler measures using the Audioscan Verifit VF-1, which incorporates the ANSI S3.22–1996 requirements.
However, we determined OSPL90 not at full-on gain, but rather at the default programmed settings. For convenience, we'll use the term OSPL90 in this paper, as the maximum output was determined using an input signal of 90 dB SPL. It is possible that the output of the instruments would have been higher had the gain been set to full-on. Our interest, though, was in comparing the maximum output levels recommended by the different manufacturers for this specific hearing loss. That might have been obscured had we set all instruments to full-on gain.
We recorded the OSPL90 value for four key frequencies: 500, 1000, 2000, and 4000 Hz. All hearing aid special features (e.g., noise reduction, directional technology, feedback reduction) were disabled for these measurements. Test procedures followed the Verifit User's Guide v2.8.
Do manufacturers agree?
Figure 1 shows the OSPL90 results for the six different hearing aids. The most notable finding is the rather large range of maximum output values. For example, if we look at 2000 Hz, a critical frequency for loudness tolerance issues, we see that two products have outputs as high as 103–109 dB SPL (Hearing Aids 1 and 6), while other products are no higher than ∼90–92 dB SPL (Hearing Aids 4 and 5).
We can only speculate on why this large range exists. It could be related to broad-band versus multichannel AGCo, how data from LDL research were converted to 2-cc coupler, or a given manufacturer's theory as to how a hearing aid's maximum output setting should relate to a patient's LDL (e.g., Set maximum output precisely at the average LDL? Set maximum output a fixed level above the average LDL?).
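The two hypothetical policies mentioned parenthetically above can be contrasted in a short sketch. Both policies, the function names, and the 5-dB margin are purely our illustrative assumptions; no manufacturer's documented algorithm is represented here.

```python
def kneepoint_at_avg_ldl(avg_ldl_spl_db):
    # Hypothetical policy A: set the AGCo kneepoint exactly at the
    # average LDL (in 2-cc coupler SPL) for this degree of loss.
    return avg_ldl_spl_db

def kneepoint_above_avg_ldl(avg_ldl_spl_db, margin_db=5):
    # Hypothetical policy B: set the kneepoint a fixed margin above
    # the average LDL.
    return avg_ldl_spl_db + margin_db

# With an assumed average LDL of 104 dB SPL, the two policies already
# differ by 5 dB before any differences in the underlying normative
# LDL data are even considered.
difference = kneepoint_above_avg_ldl(104) - kneepoint_at_avg_ldl(104)
```

Stack a policy difference like this on top of differing normative data sets and a spread of the size seen in Figure 1 becomes easy to imagine.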
Moreover, the differences could relate to the normative LDL data that were used in deriving the formula. For example, the average LDL data from Dillon and Storey8 are slightly higher than the findings of Bentler and Cooley,10 while the data from Pascoe11 (which were commonly used by manufacturers in the 1990s) show considerably higher average LDLs than either of these other studies.
While we are normally most concerned that we may set outputs too high, settings that are too low can also work against hearing aid satisfaction. The point of our comparison was not to determine which of these six different settings is “right,” but rather to point out how unlikely it seems that a patient who finds an output setting of 90–92 dB SPL satisfactory would also find a setting of 109 dB SPL satisfactory, or vice versa. Hence, patient preference for a product from a given manufacturer could be due to something as simple as the algorithm used for setting the maximum output.
What happens when LDLs are entered?
While most dispensers do not enter pure-tone LDLs into the fitting software, this procedure is included in the AAA's hearing aid fitting guidelines.7 We thought it would be interesting to see what changes, if any, would result when LDL values were entered. We selected an LDL value of 90 dB HL. We assumed that this value was lower than what any manufacturer would use as its normative data for a 50-dB hearing loss, yet it is a common finding when pure-tone LDLs are measured. We programmed the hearing aids as before, except that we entered the 90-dB-HL LDL values into the fitting software for all key frequencies.
The new OSPL90 results for the six hearing aids are shown in Figure 2. Observe that, in general, lower maximum output values were present. In fact, some products now have an output lower than the input. The change in output for a given product for the two test conditions is shown in Figure 3. Negative values mean that the maximum output was reduced when the 90-dB-HL LDL values were entered.
Observe that for two products (Hearing Aids 5 and 6), the output did not change. Hearing Aid 5 already had a relatively low output setting (∼93–95 dB SPL), so it's possible that the maximum output control (e.g., the AGCo kneepoint) was already at its lowest setting. Hearing Aid 6 also showed no change, even though it had one of the highest maximum output settings. One of many possible explanations for this would be a strict interpretation of the NAL-NL1 fitting method, which is the default fitting for some manufacturers. The NAL-NL1 method does not use patient-specific LDLs to determine maximum output.
The other four hearing aids showed an average maximum output reduction of about 5 to 10 dB (although some interesting frequency patterns existed). This degree of change could have occurred because a 90-dB-HL LDL was that much lower than the predicted LDL for someone with a 50-dB-HL hearing loss, or because the fitting algorithm used the lower 90-dB LDL to calculate the WDRC ratios. Again, for several products these lower LDLs resulted in a maximum output of ∼93 dB SPL or below, which might have been the lowest setting possible. Clearly, different fitting software handles entered LDL values quite differently, and dispensers who enter these values need to know this.
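To see why a lower LDL can change the WDRC ratios, consider a minimal sketch of the underlying arithmetic: compression maps a range of input levels onto the listener's available output range, whose ceiling is the LDL. All level values below are our illustrative assumptions, not any manufacturer's fitting targets.

```python
def wdrc_ratio(input_floor_db, input_ceiling_db, output_floor_db, ldl_spl_db):
    """Compression ratio needed to map a range of input levels onto the
    output range between an assumed comfortable floor and the LDL
    (both in 2-cc coupler SPL)."""
    input_range = input_ceiling_db - input_floor_db
    output_range = ldl_spl_db - output_floor_db
    return input_range / output_range

# Assumed 50-90 dB SPL input range and an 80-dB-SPL output floor:
# an LDL of 110 dB SPL needs a 40/30 ratio (about 1.3:1), whereas an
# LDL of 99 dB SPL (the 90-dB-HL case converted to coupler SPL) needs
# 40/19 (about 2.1:1) to keep loud inputs below discomfort.
```

The point is simply that shrinking the output ceiling steepens the required compression, so a fitting algorithm that recomputes WDRC ratios from the entered LDL will change more than just the kneepoint.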
Measured versus software simulations?
The final comparison we made was to observe how well the change in measured OSPL90 agreed with the change in maximum output displayed in the fitting software. For this comparison, we used the average of 500, 1000, 2000, and 4000 Hz, and examined the differences between the OSPL90 for the “no LDL entered” versus the “90-dB LDL entered” conditions. The software values were taken from either the frequency-specific I/O functions or the simulated 2-cc coupler curves.
The results were somewhat varied and interesting. For Hearing Aids 1, 2, 5, and 6, there was good agreement. For Hearing Aid 3, however, the software suggested a 13-dB drop in maximum output when the measured change was only 5 dB. The opposite effect was present for Hearing Aid 4, where the software reflected no change, but a 6-dB reduction in maximum output was measured. These comparisons appear in Figure 4.
We examined the prescribed maximum output for six premier hearing aids from different manufacturers, all programmed for the same hearing loss. When LDLs were not entered, the prescribed maximum output of these products varied by about 15 dB, a value large enough that it could influence patient satisfaction. We also found that entering an LDL of 90 dB HL reduced the maximum output for some products, but not others. Finally, when we compared the effects of entering the 90-dB LDL on both the measured values and the software displays, we found that these data were not in agreement for two of the six products. For one hearing aid, the software suggested an output 8 dB lower than what was measured.
Given that the maximum output of hearing aids has been shown to influence patient satisfaction, it's important in programming and fitting them to consider some of the differences we've reported here. These issues, of course, become essentially non-issues when the practitioner uses probe-microphone measures and other aided loudness testing for verification and maximum output adjustment, rather than relying on average predictions and computer simulations.
© 2008 Lippincott Williams & Wilkins, Inc.