1 I think the two of you have written about this topic on Page Ten before. Aren't you ever going to give up?
We may give up someday, but not yet. You're right, we did talk about loudness measures and hearing aid maximum output selection on these pages in 1994 and again in 2002,1,2 and we've also addressed the issue in a few other articles in this Journal.3-5 The reason we keep pounding on this topic is that we think a lot of patients are still not being fitted with the maximum output that is best for them.
2 But haven't a lot of things changed in technology and fitting techniques since you wrote your last paper 6 years ago?
Let's address the testing and fitting issues first. Back in the late 1990s, just preceding our last Page Ten article, there were guidelines stating the need to measure loudness discomfort and to set the hearing aid's maximum output appropriately. Yet, as we reported then, surveys showed that most dispensers did not follow these recommendations, and we learned from Sergei Kochkin's 2000 MarkeTrak V report that only about 50% of hearing aid users were satisfied when asked about “comfort with loud sounds.”6
So, what has happened since? Well, for starters we have new evidence-based hearing aid fitting guidelines from the American Academy of Audiology.7 And guess what? Consistent with previous guidelines, they recommend conducting pre-fitting frequency-specific LDLs and conducting verification of loudness at the time of the fitting. And yes, surveys continue to show that only 20%-30% of people fitting hearing aids follow these guidelines.8,9 As you also might guess, the recent MarkeTrak surveys of Sergei Kochkin (e.g., MarkeTrak VII) continue to show that only 60% of hearing aid users are satisfied when asked about “comfort with loud sounds.”10
3 I see the point you're trying to make, but hasn't a decade of digital technology mostly solved this problem?
First, consider that the disappointing MarkeTrak data we just mentioned were collected in 2005—only 3 years ago. Granted, some of the hearing aids the respondents were using were 5 years old, but there really hasn't been much change in hearing aid compression limiting and output control in the past 10 years. We agree that today's technology gives us the potential to adjust the output to appropriate levels. Even the "entry-level" products now have adjustable AGCo kneepoints. And nearly every hearing aid can be programmed for WDRC, which can indirectly be used to set the maximum output range (although the result is influenced by VC changes).
But just because the circuitry is there doesn't mean that it's being used any more appropriately with today's technology than it was in the past.
4 Maybe so, but if nothing else, I would think that with all the open-canal fittings there are fewer loudness problems, right?
Well, the mini-BTE open-canal (OC) products certainly are popular—maybe as much as 25% of all fittings last year. It might be short-sighted, however, to assume that just because these hearing aids tend to be low power, and the canal is open, there won't be any maximum output problems.
Consider that if the canal is left open (e.g., you don't use one of the tighter fitting domes), much of the open-ear resonance remains. On average this is probably around 12–15 dB in the range of 2000–3000 Hz.11 This often is considered a benefit when we're thinking about high-frequency gain, but it might not be a benefit regarding loudness discomfort. And the real-ear maximum output for an open fitting is difficult to predict from 2-cc coupler data, as the residual ear canal effects are quite variable. In addition, with an OC fitting there could be some summation effects at these higher levels (e.g., the direct flow SPL and the hearing aid output SPL might be similar).
5 So are you telling me that loudness discomfort is a real problem with OC fittings?
No, we're simply saying that giving some thought as to whether or not the maximum ear canal SPL exceeds the patient's LDL for an open fitting is probably just as important as when a more closed fitting is used. We've certainly heard many anecdotal comments about OC users “not liking a lot of gain.” We wonder, was it the gain or the hearing aid's maximum output that they didn't like? These are two different issues that usually require different treatment strategies. We're not aware of actual research data, but we do have case studies showing REAR90 outputs that are 10 dB higher with an open fitting than with a closed fitting for the same hearing aid settings in the same ear. Does 10 dB matter? We think so.
6 Okay, I understand. But let's get back to maximum output selection in general. Isn't this predicted fairly accurately by prescriptive fitting methods?
If you are referring to NAL-NL1 (soon to be NL2) and the DSL m[i/o] v5.0a, the only two validated methods we have, the answer is yes—sort of. If you're only going to enter pure-tone thresholds, it depends on how well you believe that LDLs (or the hearing aid's maximum output) can be predicted from the patient's hearing loss, and how big a mistake you believe you can make with the maximum output selection and still be okay.
As we discussed in our 2002 article, there are data suggesting that if you consider +/-5 dB a reasonable window of acceptance, then you'll be okay about 60%-70% of the time if you set the output by predicting from the hearing loss.2 Other data, however, such as those from Bentler and Cooley,12 are not this encouraging. They show a larger spread for LDLs, and the +/-5-dB acceptance window includes fewer than 50% of their 433 subjects.
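To put those acceptance-window percentages in perspective, here is a minimal sketch of our own (an illustration, not data from either study) that computes what fraction of prediction errors would land inside a ±5-dB window if the errors were roughly normally distributed around zero. The two standard deviations used are assumptions we picked to bracket the published figures.

```python
import math

def within_window(sd_db, window_db=5.0):
    # Fraction of zero-mean, normally distributed prediction errors
    # that fall inside +/- window_db (via the error function).
    return math.erf(window_db / (sd_db * math.sqrt(2.0)))

# Assumed spreads, for illustration only:
print(f"SD = 6 dB: {within_window(6.0):.0%} inside +/-5 dB")
print(f"SD = 8 dB: {within_window(8.0):.0%} inside +/-5 dB")
```

With a spread around 6 dB, coverage lands near the 60% figure; widen the spread to about 8 dB and coverage drops below 50%, consistent with the pattern Bentler and Cooley report.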
7 You're talking average values, but what if I were to tell you that I was one of those people who entered frequency-specific LDLs into the fitting software?
We probably wouldn't believe you! But, just in case you're serious, that's a tough question to answer. The reason is that most people like you don't use the "true" NAL or DSL software; they use an implementation of these methods embedded in their favorite manufacturer's fitting software.
Let's take NAL-NL1 for example. What we're saying is that it's possible that you could select “NAL-NL1” as your desired fitting method in the manufacturer's software, and this prescriptive method would then be used to select gain and compression characteristics. But it's also possible that the manufacturer uses a different fitting strategy to select the AGCo kneepoint (which would then control maximum output). Therefore, entering the patient's LDL might make a difference, but maybe not. This is something you can easily check out in the fitting software by altering the LDLs (e.g., 80 versus 120 dB HL) for the same hearing loss and then observing if the AGCo kneepoint setting changes accordingly.
8 And what if I were using the “real” software?
Here's a simple example using the DSL software (v5.0a), which provides the option of entering patient-specific LDLs. For starters, we'll say that your patient is an adult, has a 50-dB-HL hearing loss at 2000 Hz, and you do not enter his LDL. The software would then prescribe an OSPL90 setting of 97 dB SPL, and your REAR90 target would be 106 dB SPL.
Now, if you did some testing and found that your patient had an LDL of 90 dB HL at 2000 Hz, a fairly common finding for someone with a 50-dB-HL hearing loss, and you entered this into the fitting software, these target values would be 10 dB lower. Does 10 dB matter? We think so.
9 So how do manufacturers' output selection methods differ from what we've been discussing?
Good question, especially because that seems to be the most popular selection method. George Lindley, using survey results from over 200 dispensing audiologists, reported that 71% of this group stated that they pre-program hearing aids to “manufacturer's recommendations” compared with only 38% who said they pre-program to a “specific fitting strategy” (respondents could select more than one category).9
It's hard to give you a specific answer, however, because for some manufacturers their recommended (default) fitting is a recognized fitting strategy. Moreover, a manufacturer could use its proprietary method for selecting gain and compression characteristics, but then use a more well-established (or time-honored) method for selecting maximum output. It differs from manufacturer to manufacturer.
10 But do you have a hunch whether or not you would end up with about the same output settings with different manufacturers?
We have more than a hunch. You would not. We've recently been looking at how different manufacturers select the hearing aid's maximum power output and, in fact, we just published some data on this topic last month in the Journal.5
We were a little surprised to see that if you enter the same audiogram into the software of six leading manufacturers, you end up with hearing aids programmed with maximum outputs differing by as much as 15 dB. We simply used a 50-dB-HL hearing loss and the manufacturer's default settings; the resulting programmed outputs (input: 90-dB-SPL swept pure-tone signal) ranged from around 90 dB SPL to around 105 dB SPL.
11 Do you think the agreement would have been better if you had entered frequency-specific LDLs into the fitting software?
We did that. After those initial measurements, we entered an LDL of 90 dB HL for all frequencies. The agreement actually became worse. For two products, the maximum output was reduced by ∼10 dB, and in two other products, including the one with the initial highest output, the maximum output didn't change. Another important finding was that for two of the six hearing aids, the maximum output displayed on the fitting screen was quite different from what we measured in the coupler.
It all seems to go back to the same general theme: The only way to get things right for a given patient is to do some testing. Using pre-fitting LDLs to set the maximum output should get you close (by making your own corrections from HL to 2-cc coupler), and aided verification will get you even closer!
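As a concrete illustration of that HL-to-2-cc-coupler step, here is a minimal sketch. The per-frequency correction values are placeholders we made up for the example; in practice you would substitute the published corrections for your own earphone and coupler.

```python
# Placeholder HL-to-2-cc-coupler corrections in dB. These are NOT
# standardized values -- look up the ones for your transducer/coupler.
CORRECTION_DB = {500: 6.0, 1000: 1.0, 2000: 3.0, 4000: 5.0}

def coupler_output_limits(ldl_hl):
    # Convert frequency-specific LDLs in dB HL to approximate 2-cc
    # coupler SPL ceilings by adding a per-frequency correction.
    return {f: hl + CORRECTION_DB[f] for f, hl in ldl_hl.items()}

limits = coupler_output_limits({500: 95, 1000: 90, 2000: 90, 4000: 95})
```

The same logic works in reverse at verification: if the measured REAR90 exceeds the LDL expressed in ear canal SPL, the AGCo kneepoint comes down.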
12 It's clear that the two of you think there is a link between clinical measures and real-world benefit and satisfaction. Is there any proof of this?
That is the real question, isn't it? We think so, and in fact, that was a topic we addressed in a 2005 JAAA paper.13
We conducted a systematic review of peer-reviewed published research for two related topics: the use of unaided LDLs for setting the maximum output of hearing aids and the use of aided LDL measurements for adjusting the maximum output. In both cases, we then questioned if there was real-world evidence to support the practice.
Following principles of evidence-based practice (EBP), we limited our selection to studies involving adults, published from 1980 to 2005, that used a randomized control, non-randomized intervention, or non-intervention descriptive research design; included either unaided or aided LDL measures; and used self-report measures following real-world experience with the hearing aids. We searched a number of databases and found 187 articles of potential relevance. A first review of the abstracts made it clear that 173 of the "hits" did not meet our inclusion criteria; after more careful scrutiny of the remainder, only three articles could be included.
13 Three? Out of 187 articles?
You're right; you'd think that for an issue as long-standing as setting the maximum output of hearing aids, there would have been a lot of studies that qualified. Remember, however, that we had fairly strict criteria, and many otherwise good studies didn't meet the criterion of having the real-world component.
One that did was a large clinical trial undertaken by NIH (NIDCD) and the VA.14 In this study, the maximum output of the hearing aids was set following frequency-specific measures of loudness discomfort and RESR verification. The results were interesting on many levels, but of particular interest to us was the finding that only 10% of the time (across 330 subjects) did the measured RESR exceed LDL by 5 dB or more at one or more frequencies, and only one subject (of that 10%) complained of loudness discomfort during real-world use.
In all, the limited evidence we gathered in this review supported the use of clinically measured frequency-specific LDLs for selecting the real-ear maximum output of hearing aids. However, the dearth of studies, the low statistical power of the studies, and the level of the evidence did not allow us to make a strong recommendation supporting this clinical procedure.
14 That was 3 or 4 years ago. Anything new to support or refute your conclusions?
Not much, really. As you know, with all the new digital technology and algorithms today, clinical investigations related to setting the maximum output of hearing aids just don't seem to rise to the top of the heap for researchers or the people funding the research. There are a couple studies, however, worth mentioning.
For example, Carol Mackersie took a look at clinical protocols used to set the hearing aid maximum output for adult hearing aid users.15 In that retrospective study, patients were fitted according to the clinic's standard protocol, which was a method that predicted the hearing aid output based on the patient's hearing loss. Importantly, this method also included an alteration of the prescribed output values based on verification results (e.g., probe-microphone measures and loudness judgments for everyday sounds).
After the initial fitting session, the patients were seen again for a minimum of two follow-up visits prior to the final evaluation. The paper reports that adjustments of gain and/or output were made for 15 of the 28 subjects (54%) during those follow-up appointments based on comments regarding their real-world experiences. The good news was that at the final outcome assessment, only one participant had average RESR values that exceeded his ear canal SPL LDL by more than 5 dB, and no one reported discomfort to the high-intensity pure-tone sweep.
The message of this research seems to be that you need to do a lot of tweaking to get the output right. We, of course, wonder what the outcome would have been if the maximum output had been simply set to correspond to the patient's LDL in the first place. Just maybe some of that post-fitting tweaking could have been avoided.
15 Anything else that might convince me to do more loudness testing in my office?
There was another study from some Syracuse University researchers that caught our eye.16 They compared two different protocols for hearing aid fitting: one that included LDL testing along with aided loudness measures, and another that didn't.
Those subjects fitted with hearing aids using the protocol that included loudness measures returned for fewer adjustments within the first 45 days. We also would like to point out (mostly because we like it when data agree with what we believe should be right) that after 3 months of hearing aid use the group that did not have the loudness measures included in their fitting protocol had reduced satisfaction scores compared with the group that had the testing!
16 I'm almost convinced, but let's move on. The topic of acclimatization for loudness and hearing aid use has always interested me. Anything new on that front?
Actually, there is. Here are some recent data we found quite interesting. Remember that big NIDCD/VA study that we talked about earlier? Well, in a follow-up study, 190 subjects from the original study were re-evaluated 6 years later.17 As you would expect, RESR measures remained unchanged from the earlier findings (for those 81 using their original devices). That is, the output of the hearing aids didn't change over time.
The interesting finding is what did change. The LDLs obtained in this follow-up study were significantly lower than those measured at two different times in the original study (pre-fitting and at the conclusion). The change was not only statistically significant, but big enough to maybe make a clinical difference. The average reduction in LDL across test frequencies (500–4000 Hz) was 4.6 dB compared with the initial LDLs of the original study, and 6.4 dB compared with the final LDLs of the original study.
17 Isn't it strange that hearing aid use reduces LDLs?
Those findings have us puzzled too. It's tempting to think they changed the LDL test protocol or instructions, but the authors say no. The subjects were 6 years older, but we know that LDLs do not change as a function of age.12
But before you put the results of that study into your permanent memory, we have another one for you that tells a somewhat different story. It's about LDLs and wearing only one hearing aid. Kevin Munro and June Trotter compared pre-fitting LDLs to the patient's LDLs after several years of unilateral hearing aid use.18 Although hearing thresholds were unchanged, the average LDLs (2000–4000 Hz) increased by 14.5 dB in the fitted ear, and 7 dB in the unaided ear. The authors acknowledge that there might have been minor alterations in the wording of the instructions for the two test sessions, but even so this wouldn't account for the 7.5-dB difference between ears. So, in this case, we see LDLs increasing after hearing aid use.
18 Could this be some type of adaptive plasticity for loudness?
Hard to say. This same group has conducted two other studies with people fitted unilaterally (different subjects).19,20 In one study, they found that the patient's acoustic reflex threshold (in addition to loudness discomfort) increased in the aided ear (compared with the unaided ear), and in the other study, they observed differences in the ABR between ears—an increase in the mean wave V to SN10 peak-to-peak amplitude in the fitted ear. Both studies had small samples, but stay tuned.
19 Interesting stuff. I'm almost out of questions. Any late-breaking loudness data you can tell me about?
Mueller: Ruth, how about that study you've been working on for the past 2 years? Something you can talk about?
Bentler: Sure, we're writing it up right now. What Gus is referring to is a long-term consortium study that we have been involved with at the University of Iowa. The other investigators are at the University of Giessen (Germany) and the National Acoustic Laboratories (Australia).
One thing we examined was the relationship between measures of discomfort obtained in laboratory (or clinical) settings and self-reported discomfort from real-world aided experiences. This relates back to our earlier discussion on evidence-based practice, and the JAAA article Gus and I wrote.13
In this consortium study, we were particularly interested in using ecologically valid stimuli (real-world sounds like traffic, cutlery, etc.), but also included some ratings for narrow and wide bands of noise. We're still going through the data, but a couple of our preliminary findings are interesting:
First, many subjects recruited for this study (in all three parts of the world) reported experiencing loudness discomfort when wearing their current hearing aids, despite the average output being lower than a widely used prescription for maximum output.21 Also, we found that measurement of aided “loudness discomfort” using a 1500-Hz narrow-band noise stimulus was the best predictor of self-reported real-world discomfort. We believe this finding gives further credence to the use of frequency-specific measures of loudness discomfort, either in the fitting stage (to get it right) or in the verification stage (so you can adjust it to get it right)!
20 So, we're at the very end, and the two of you are still trying to convince me that loudness measures are worth my time.
Well, when we wrote our first Page Ten article on this topic 14 years ago, we closed by saying: “Maximum output problems don't go away. Either you take care of them when you fit the hearing aids, or you take care of them with repeat visits—or returns for credit.” We haven't changed our minds.
Let's take a quick trip back to January of 1994. Bill Clinton was delivering his first State of the Union address, the Cowboys were beating the Bills in the Super Bowl, and Lorena Bobbitt was being found not guilty regarding her little incident with her husband. In the world of audiology, the first AuD programs had just opened at Baylor and Central Michigan Universities, FDA commissioner David Kessler's name was being bounced around, and here at the Journal a new monthly feature appeared on page ten—titled, Page Ten.
At the time, people fitting hearing aids were pretty excited about “programmable,” WDRC, and this new style of hearing aid that fit completely in the canal. Ruth Bentler and I had a discussion about how it seemed that people had forgotten about one of the fundamentals of fitting hearing aids—getting the output right. So “How loud is allowed?” became the topic of the Journal's first Page Ten.
Seven years later Ruth and I had another discussion about what had changed regarding the attention paid to the selection and fitting of maximum output. Our conclusion was not much, so our second Page Ten article on the topic was born: “How loud is allowed: It's déjà vu all over again.”
Well, 14 years have now passed since our original article. Unfortunately, many of the problems associated with maximum output selection haven't gone away, so we're back for the hat trick. Unlike the original hat trick, getting the output right isn't like pulling rabbits out of a hat—it doesn't happen magically. Rather, you have to use a little bit of science and some second-grade math.
My co-author, Ruth Bentler, PhD, is professor of audiology at the University of Iowa. As you know, she is a noted researcher, writer, and international workshop lecturer. Since our last Page Ten together, she's gained even more international fame through her intriguing posting at www.earTunes.com. And when not singing, she has found time to work in extended teaching assignments in Hong Kong, the University of Western China (Chengdu), and the University of Canterbury (Christchurch, New Zealand). She's also become a regular participant in the Healthy Hearing component of the World Games of the Special Olympics.
Although we've said it before on these pages, our message is pretty simple: Don't forget about maximum output when fitting hearing aids. And who knows, if things don't change, maybe we'll be back for a “quad-row!”
Page Ten Editor