Estimating hearing thresholds using electrophysiologic measures in infants, children, and difficult-to-test patients is an especially challenging task. The ABR has been a good friend for 30 years, but it has limitations in the precision of its predictions and in the range of hearing losses it can estimate.
Variations of the MLR have been studied in an attempt to find a procedure that is effective in gathering more frequency-specific information. You might recall terms such as the SSR, SSEP, and the 40-Hz ERP. Over the years, researchers have continued to refine these procedures, and new research with what we now call the ASSR (auditory steady-state response) is very encouraging. The current application of this evoked potential, elicited with modulated tones (modulations around 90 Hz), shows promise as a reliable predictor of hearing sensitivity.
Back in the 1980s at Baylor College of Medicine, a young doctoral candidate completed a dissertation titled: “Optimum Stimulus Rate for Measurement of the Auditory Steady-State Evoked Potential.” In general, the research supported the use of the 40-Hz ERP. Now a seasoned audiologist, Brad Stach, PhD, has stopped by Page Ten to transition us from the 40-Hz ERP of the ‘80s to today's ASSR. As Brad says, “It looks like I only missed it by about 50 Hz.”
Dr. Stach is the director of audiology and clinical services, Central Institute for the Deaf, and professor and director of the Audiology Graduate Program, Washington University, St. Louis. He is the author of the Comprehensive Dictionary of Audiology, is completing the Encyclopedia of Audiology, and has a best-selling textbook, Clinical Audiology. Brad was a founder of the American Academy of Audiology, and currently is its president-elect. You regular Journal readers probably recall his previous visits to Page Ten, and his annual contributions to our Journal Club Review of the Best of Audiology Literature.
As Brad points out in this excellent review of the ASSR, we still have much to learn, and some of the techniques need further refinement. But, research to date strongly suggests that the ASSR has the potential to be a valuable clinical procedure for the assessment of hearing loss, and may even have applications in the diagnosis of neurologic pathology. Stay tuned for further developments in this exciting area.
Page Ten Editor
1 What is the auditory steady-state response?
The auditory steady-state response (ASSR) is an auditory evoked potential, elicited with modulated tones, that can be used to predict hearing sensitivity in patients of all ages. The response itself is an evoked neural potential that follows the envelope of a complex stimulus. It is evoked by the periodic modulation, or turning on and off, of a tone.
The neural response is a brain potential that closely follows the time course of the modulation. The response can be detected objectively at intensity levels close to behavioral threshold. Emerging data suggest that the ASSR will yield a clinically acceptable, frequency-specific prediction of behavioral thresholds in patients, regardless of age, subject state, or degree of hearing loss.
2 Sounds perfect, but do we really need another auditory evoked potential?
Well, yes, it would be nice, and here's why. The earliest averaged evoked potentials, described over 4 decades ago, are what we now refer to as the auditory middle-latency and late-latency responses. The late responses, as you know, can be elicited with tonal stimuli, permitting the electrophysiologic prediction of an audiogram. However, these potentials are fatally confounded by subject state of consciousness, rendering them unacceptable for routine clinical use in anyone but awake, cooperating adults.
3 But what about the ABR?
Be patient. I was getting to that. The auditory brainstem response (ABR), discovered about 3 decades ago, currently stands as the gold standard for threshold prediction. Its immunity to subject state makes it an excellent choice for predicting hearing in sleeping infants or sedated children.
However, there are two important limitations to the ABR. First, it is best elicited using a click stimulus, which is not frequency-specific and generally only allows an estimate over a broad range of higher frequencies. Tone-burst-elicited ABRs, albeit more frequency-specific, can be difficult to record and observe at near-threshold levels, especially at lower frequencies. So, ABR provides a good prediction of high-frequency hearing in a broad sense and, often, an idea of the shape of the audiogram. Unfortunately, the precision across frequencies remains less than optimal.
The second limitation of ABR is the range of hearing loss that can be estimated. As a rule of thumb, if the high-frequency pure-tone average exceeds 70 dB, the ABR may well be absent. Said another way, an ear with a 70-dB hearing loss will have the same absent ABR as an ear with a loss of 80 dB, 90 dB, 100 dB, and so on. That is a rather broad range of loss predicted by an absent ABR, especially if you are the one fitting a hearing aid on that ear.
4 Okay, that's convincing. So, why is the ASSR better?
First, the ASSR is elicited by a tone. That tone is modulated; you might think of amplitude modulation as turning the tone on and off periodically and frequency modulation as warbling it. Although the modulation expands the spectrum of the tone, the frequency spread is narrower than a tone-burst or, especially, a click. So, the portion of the basilar membrane being stimulated is more restricted, and a more precise audiogram can be predicted.
Second, if the modulation rates are high enough, the ASSR seems unaffected by subject state. When the modulation rate is greater than 60 times per second (60 Hz), the response can still be recorded reliably in sleeping babies.
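For readers who like to see these ideas concretely, here is a rough Python sketch of such a stimulus, using the 90-Hz rate and the 100% amplitude-modulation and 10% frequency-modulation depths mentioned elsewhere in this article. The carrier frequency, sample rate, and one-second duration are my own arbitrary choices for illustration, not part of any clinical protocol.

```python
import math

FS = 48000        # sample rate (Hz) -- an arbitrary choice for this sketch
CARRIER = 1000.0  # carrier frequency (Hz)
MOD_RATE = 90.0   # modulation rate (Hz)
AM_DEPTH = 1.0    # 100% amplitude-modulation depth
FM_DEPTH = 0.1    # 10% frequency-modulation depth

def modulated_tone(n_samples):
    """One channel of a mixed AM/FM ASSR-style stimulus."""
    samples = []
    phase = 0.0
    for n in range(n_samples):
        t = n / FS
        # Amplitude envelope: swings between 0 and 1 at the modulation rate
        # (the periodic "turning on and off" of the tone).
        env = (1.0 + AM_DEPTH * math.sin(2 * math.pi * MOD_RATE * t)) / 2.0
        # Instantaneous frequency: the carrier "warbled" +/-10% at the same rate.
        inst_f = CARRIER * (1.0 + FM_DEPTH * math.sin(2 * math.pi * MOD_RATE * t))
        phase += 2 * math.pi * inst_f / FS
        samples.append(env * math.sin(phase))
    return samples

tone = modulated_tone(FS)  # one second of stimulus
print(len(tone), round(max(tone), 2), round(min(tone), 2))
```

The envelope rises and falls 90 times per second while the carrier warbles at the same rate; the neural response of interest is the brain potential that follows that 90-Hz envelope.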
5 Could you step back for a minute? Is this the 40-Hz response, or the SSEP, that they talked about in the 1980s?
Well, yes and no. It is sort of a variation on the theme. But first, let me clarify terminology and acronyms. Several terms have been used synonymously to describe this modulation-following response. They are the amplitude-modulation-following response (AMFR), envelope-following response (EFR), steady-state evoked potential (SSEP), steady-state response (SSR), and the auditory steady-state response (ASSR). Perhaps the most popular has been the SSEP, but since that acronym is also used for somatosensory evoked potentials, ASSR is becoming the more commonly used term.
6 Thanks, but would you please answer my question?
What a crab! The 40-Hz response was first described by Galambos and his colleagues in 1981.1 At that time, and still today, the most common approach to electrophysiologic assessment was to measure transient evoked potentials. Transient responses are those that are elicited by rapid change in the auditory stimulus at rates that allow the response to be finished before the next stimulus presentation. For a middle-latency response (MLR), for example, a signal is presented, the response is recorded for 100 milliseconds or so, another signal is presented, the response is recorded again, and so on.
A steady-state evoked potential is an on-going response and is elicited in response to an on-going, periodically varying stimulus. The response is phase-locked to the modulation envelope. That is, the neural response closely follows the time course of the modulation. The phase relation of the response to the modulation is reasonably fixed or locked in time.
Galambos et al. noted, in measuring the transient MLR, that when the stimulation rate was increased until successive stimuli fell within the recording epoch, the responses overlapped. If the rate corresponded to the period of the major peaks of the waveform, the response would appear as a rather robust sinusoid. In adults, that occurred at rates near 40/s.
The so-called 40-Hz response, or 40-Hz event-related potential, was studied fairly exhaustively over the ensuing several years. David Stapells and his colleagues from Terry Picton's lab2 and some of us in Jim Jerger's lab at the Baylor College of Medicine3 began looking at it differently. Instead of doing conventional signal averaging, we measured how brain activity at the frequency corresponding to the click rate followed the envelope of the interrupted transient signal. That is, we turned a click or tone-burst signal on and off at a rate of, say, 40/s, and analyzed the 40-Hz component of the brain activity to see if it increased in amplitude and/or phase-locked to the periodic change in the stimulus. So, viewed in that way, the 40-Hz response could be considered an ASSR. The procedure worked well, and we learned much about the nature of the 40-Hz response and measurement techniques.4,5
7 If the 40-Hz response was so great, what happened?
Two things. First, we learned that the 40-Hz response was strongly influenced by subject state. Amplitudes varied tremendously from waking to sleeping states. The fact that there was a consistent phase relationship between the response and the modulation across subject state was encouraging.3
Second, it became readily apparent that the 40-Hz response was not recordable, at least not in any useful way, in infants or babies.6 So, we stopped pursuing the 40-Hz measure.
Fortunately, around this same time, Field Rickards and his colleagues at the University of Melbourne noted the ability to record this modulation-following response at higher rates.7 They found that ASSRs could be readily recorded and were particularly useful at modulation rates of greater than 60 Hz. Later, in the early 1990s, they found that at modulation rates of around 90 Hz, robust ASSRs could be recorded from sleeping adults and infants.8–10 Researchers in Terry Picton's lab at the University of Toronto were in hot pursuit as well.11 By the late '90s, clinical applications of the higher-rate ASSRs were being developed and implemented.
8 So, you losers back in the '80s were just looking at the wrong modulation rate.
Please, don't sugarcoat it. Actually, steady-state responses can be recorded over a range of modulation rates. Different modulation rates result in stimulation of different portions of the auditory nervous system. It now appears that lower rates (<20 Hz) reflect activity of the generators responsible for the late-latency response, moderate rates (20–60 Hz) reflect those responsible for the middle-latency response, and higher rates (>60 Hz) reflect activity from the brainstem.12 It is no wonder, then, that the lower modulation rates are more vulnerable to subject state than the higher rates,13 just as the middle and late responses are.
9 Okay, I think I understand it theoretically, but how do you actually record it?
The stimulus is a pure tone. In clinical applications, the frequencies of 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz are commonly used. The pure tone is either modulated in the amplitude domain or modulated in both the amplitude and frequency domains. Optimum modulation strategies continue to be evaluated.14–16 Electrodes are placed on the scalp at locations typically used for the recording of other auditory evoked potentials. Brain electrical activity is pre-amplified, filtered, sampled, and then subjected to spectral analysis. The frequency of interest in the brainwaves is that corresponding to the modulation rate.
Let me explain. When a tone of any frequency is modulated periodically at a rate of 90/s, the 90-Hz component of the brain electrical activity is measured; when the modulation rate is 96/s, the 96-Hz component is measured, and so on. Detection is then based on some aspect of the amplitude or phase of that component. One approach uses the amplitude of the response and its variability; various strategies for objectively determining a significant signal-to-noise ratio (SNR) are proving successful.
Another way of detecting a response is by analyzing phase and its variability. As an example, the concept of phase coherence has proven to be useful. Imagine that a sinusoid is modulated periodically and that the brain wave of the frequency corresponding to that modulation rate is following right along. The lag between the modulating signal and the response, if in fact the brain is following the signal, should be fairly constant. This lag can be measured as the phase angle.
The variability of the lag over successive samples represents the coherence of the phase relationship. A robust following response to the modulating tone will have a fairly consistent or coherent phase relationship. If the brain is not responding to the sound, then the phase relationship to the modulation will be random. A criterion level for significant phase coherence is then used to determine objectively whether or not a response occurs.
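The phase-coherence idea can be sketched in a few lines of Python. This is a toy illustration, not any published clinical algorithm: I simulate sweeps of "EEG" with and without a phase-locked 90-Hz component, estimate the phase of that component in each sweep with a single-bin discrete Fourier transform, and measure how tightly those phases cluster across sweeps. All parameter values are arbitrary choices of mine.

```python
import cmath
import math
import random

FS = 1000            # sample rate (Hz) -- an arbitrary choice for this sketch
MOD_RATE = 90.0      # modulation rate (Hz)
SWEEP_SAMPLES = 200  # 0.2 s per sweep
N_SWEEPS = 64

def phase_at(freq, sweep):
    """Phase of one frequency component of a sweep (single-bin DFT)."""
    acc = sum(x * cmath.exp(-2j * math.pi * freq * n / FS)
              for n, x in enumerate(sweep))
    return cmath.phase(acc)

def phase_coherence(sweeps):
    """Length of the mean unit phasor across sweeps: 0 (random) to 1 (locked)."""
    phasors = [cmath.exp(1j * phase_at(MOD_RATE, s)) for s in sweeps]
    return abs(sum(phasors)) / len(phasors)

rng = random.Random(1)

# A "responding brain": a 90-Hz component with a fixed lag, buried in noise.
locked = [[math.sin(2 * math.pi * MOD_RATE * n / FS - 0.7) + rng.gauss(0, 0.5)
           for n in range(SWEEP_SAMPLES)] for _ in range(N_SWEEPS)]

# "No response": noise only, so the 90-Hz phase is random from sweep to sweep.
noise = [[rng.gauss(0, 1.0) for _ in range(SWEEP_SAMPLES)]
         for _ in range(N_SWEEPS)]

print(phase_coherence(locked))  # high: consistent lag, response present
print(phase_coherence(noise))   # low: random phase, no response
```

When the brain is following the modulation, the lag is nearly the same sweep after sweep and the coherence approaches 1; with noise alone, the phasors point every which way and the coherence hovers near 0.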
10 Does everyone measure it the same way?
That would be too simple. I'll give you two examples. The strategy derived from the Melbourne group uses a modulation frequency of 90 Hz.10 Amplitude modulation depth is 100%; frequency modulation depth is 10%. Automatic determination of a response is based on phase coherence. One audiometric frequency is tested at a time.
The Toronto group takes a different approach.17 They use a stimulus that is a combination of four carrier frequencies of 500 Hz, 1000 Hz, 2000 Hz, and 4000 Hz, each of which is modulated at a slightly different frequency. For example, the 500-Hz tone is modulated at 77 Hz, the 1000-Hz tone at 85 Hz, the 2000-Hz tone at 93 Hz, and the 4000-Hz tone at 101 Hz. The four brainwave frequencies are analyzed independently to determine the presence of a response at each audiometric frequency. Just to make it more efficient, they use the same approach in the other ear, with each frequency assigned a slightly different modulation rate. In that way, both ears can be tested at all frequencies simultaneously. Analysis of the waveforms is made with fast Fourier transform, and detection is accomplished with statistical F ratios.
11 Whew! After all that, please tell me there are clinical advantages to using the ASSR. For starters, can it predict threshold?
Yes. Various studies have shown that it can provide a reasonably accurate prediction of behavioral thresholds.8,10,11,18–20 It appears to be comparable in accuracy to the ABR.12 Interestingly, some of the earlier work showed it to be a strong predictor of thresholds in patients with hearing loss, but a less than adequate predictor of normal hearing.20 That is probably an SNR issue that will be solved with technologic advancements.
12 Can the ASSR be recorded in infants?
Yes, it can. The response is present and readily measurable in newborns,21 sleeping infants,20 and sedated babies.
13 Is it frequency-specific?
As I stated before, the stimulus is a tone that is only slightly distorted by modulation. It stimulates a portion of the basilar membrane that is restricted enough to provide very acceptable frequency resolution. Not unexpectedly, I suppose, threshold prediction appears to be more accurate for higher-frequency signals than for lower-frequency signals.10,17 Nevertheless, the ASSR approach should enjoy an advantage over tone-burst ABR prediction of lower-frequency hearing. The outcome should be a more precise prediction of the audiogram.
14 And you say it can be objectively detected?
Remember, the response itself is periodic in nature and, when present, is at least somewhat phase-locked to the eliciting signal. Objective detection is made easier by these factors.
The Toronto group derives a response amplitude from a fast Fourier transform of the brainwaves.17 The outcome is subjected to a variance ratio test, or F test, comparing the amplitude of the response at the modulation frequency to the amplitude at some distant frequency. When the difference reaches a pre-determined statistical criterion, a response is deemed to have occurred.
The Melbourne group analyzes the variance, or coherence, of the phase relationship of the response to the modulation envelope.10 If the phase coherence reaches a pre-determined criterion, it is assumed that the brain is responding to the stimulus because the response is phase-locked to it.
Refinements in these techniques, as well as new techniques, will likely emerge. The point is that it is the periodic nature of the response that makes it so susceptible to objective detection.
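Here is a toy Python version of the amplitude-based, F-test-style detection just described: compare the power in the bin at the modulation rate against the mean power of nearby bins. The bin choices, noise levels, and criterion values are illustrative assumptions of mine, not the published algorithm.

```python
import cmath
import math
import random

FS = 1000      # sample rate (Hz); one second of data puts DFT bins on integer Hz
MOD_RATE = 90  # modulation rate (Hz)
N = 1000

def bin_power(freq, x):
    """Power of one frequency component of x (single-bin DFT)."""
    acc = sum(v * cmath.exp(-2j * math.pi * freq * n / FS)
              for n, v in enumerate(x))
    return abs(acc) ** 2

def f_ratio(x):
    """Power at the modulation rate vs. mean power of nearby noise bins."""
    signal = bin_power(MOD_RATE, x)
    noise_bins = [f for f in range(80, 101) if f != MOD_RATE]
    noise = sum(bin_power(f, x) for f in noise_bins) / len(noise_bins)
    return signal / noise

rng = random.Random(7)

# "EEG" with a small response at the modulation rate, buried in noise...
eeg_with_response = [0.3 * math.sin(2 * math.pi * MOD_RATE * n / FS)
                     + rng.gauss(0, 1) for n in range(N)]
# ...and "EEG" that is noise only.
eeg_noise_only = [rng.gauss(0, 1) for _ in range(N)]

print(f_ratio(eeg_with_response))  # well above 1: response detected
print(f_ratio(eeg_noise_only))     # near 1: no response
```

When the ratio exceeds a pre-determined statistical criterion, a response is deemed present; with noise alone, the modulation-rate bin is just another noise bin and the ratio stays near 1.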
15 Does the test take a long time?
Accurate threshold detection across audiometric frequencies in both ears can take a while—probably somewhere between 30 and 60 minutes. The speed of this approach seems acceptable and is likely to improve as the technology advances.
The Toronto approach provides a creative example of the potential for this technology. If each of eight frequencies (four per ear) is assigned a different modulation rate, ASSRs can be measured simultaneously at all frequencies in both ears at a single intensity level.17,22 The development of automated algorithms to enhance the efficiency of the threshold prediction process seems inevitable in the near future.
16 You said something earlier about dynamic range. Could you go over that again?
Sure. Gary Rance and his colleagues from Melbourne taught us this.23 Remember that modulated tones are being used to elicit the ASSR. Maximum signal intensity of modulated tones is comparable to equipment limits for pure tones on modern clinical audiometers. This can be as high as 120 dB HL.
By comparison, air-conducted clicks used to elicit ABRs typically have a maximum output of around 90 dB nHL. This is adequate for predicting hearing thresholds at the moderately severe level or better. However, the likelihood of recording an ABR in an ear with a high-frequency hearing loss greater than 70 dB is low. In those patients, threshold simply cannot be estimated with ABR. Thus, infants and young children with severe and profound hearing loss will have absent ABRs, and the audiologist will be left to guess whether the loss is severe, profound, or complete.
The ASSR has the potential to give an extra 50 dB worth of information about degree of loss. That is very important in determining early candidacy for amplification or implantation.
17 Does the ASSR have its own CPT code yet?
Um, no, it's a little early for that.
18 Why don't more people use the ASSR?
For all the potential of the ASSR in predicting hearing loss, the emerging techniques need refinement. Prediction of normal hearing thresholds and of low-frequency thresholds has room for improvement. In addition, ASSR clinical recording systems are just now becoming available commercially.
19 You've talked all about threshold prediction. Are there diagnostic applications?
Good question. As with the ABR, the ASSR is abnormal in patients with auditory dys-synchrony (auditory neuropathy).23 How sensitive it might be to other types of neurologic disorders remains unknown.
An intriguing finding was reported recently on the correlation of the ASSR to speech perception.14 Results suggest that steady-state response recordings to multiple independent amplitude and frequency modulations of a pure tone may provide an objective assessment of suprathreshold hearing. We obviously have much more to learn.
1. Galambos R, Makeig S, Talmachoff PJ: A 40-Hz auditory potential recorded from the human scalp. Proc Nat Acad Sci
2. Stapells DR, Linden D, Suffield JB, et al.: Human auditory steady state potentials. Ear Hear
3. Jerger J, Chmiel R, Frost JD, Coker N: Effect of sleep on the auditory steady state evoked potential. Ear Hear
4. Stapells DR, Makeig S, Galambos R: Auditory steady-state response threshold prediction using phase coherence. Electroencephal Clin Neurophysiol
5. Picton TW, Vajsar J, Rodriguez R, Campbell KB: Reliability estimates from steady-state evoked potentials. Electroencephal Clin Neurophysiol
6. Jerger J, Chmiel R, Glaze D, Frost JD: Rate and filter dependence of the middle-latency response in infants. Audiol
7. Rickards FW, Clark GM: Steady-state evoked potentials to amplitude modulated tones. In Nodar RH, Barber C, eds. Evoked Potentials II. Boston: Butterworth, 1984:163–168.
8. Cohen LT, Rickards FW, Clark GM: A comparison of steady-state evoked potentials to modulated tones in awake and sleeping humans. J Acoust Soc Am
9. Rickards FW, Tan LE, Cohen LT, et al.: Auditory steady-state evoked potentials in newborns. Br J Audiol
10. Rance G, Rickards FW, Cohen LT, et al.: The automated prediction of hearing thresholds in sleeping subjects using auditory steady-state potentials. Ear Hear
11. Lins OG, Picton TW, Boucher BL, et al.: Frequency-specific audiometry using steady-state responses. Ear Hear
12. Cone-Wesson B, Dowell RC, Tomlin D, et al.: The auditory steady-state response: Comparisons with the auditory brainstem response. JAAA
13. Pethe J, von Specht H, Muhler R, Hocke T: Amplitude modulation following responses in awake and sleeping humans—a comparison for 40 Hz and 80 Hz modulation frequency. Scand Audiol
14. Dimitrijevic A, John MS, van Roon P, Picton TW: Human auditory steady-state responses to tones independently modulated in both frequency and amplitude. Ear Hear
15. John MS, Dimitrijevic A, Picton TW: Auditory steady-state responses to exponential modulation envelopes. Ear Hear
16. John MS, Dimitrijevic A, van Roon P, Picton TW: Multiple auditory steady-state responses to AM and FM stimuli. Audiol Neuro-otol
17. Dimitrijevic A, John MS, Van Roon P, et al.: Estimating the audiogram using multiple auditory steady-state responses. JAAA
18. Herdman AT, Stapells DR: Thresholds determined using the monotic and dichotic multiple auditory steady-state response technique in normal hearing subjects. Scand Audiol
19. Vander Werff KR, Brown CJ, Gienapp BA, Schmidt Clay KM: Comparison of auditory steady-state response and auditory brainstem response thresholds in children. JAAA
20. Rance G, Rickards F: Prediction of hearing threshold in infants using auditory steady-state evoked potentials. JAAA
21. Cone-Wesson B, Parker J, Swiderski N, Rickards F: The auditory steady-state response: Full-term and premature neonates. JAAA
22. John MS, Purcell DW, Dimitrijevic A, Picton TW: Advantages and caveats when recording steady-state responses to multiple simultaneous stimuli. JAAA
23. Rance G, Dowell RC, Rickards FW, et al.: Steady-state evoked potential and behavioral hearing thresholds in a group of children with absent click-evoked auditory brain stem response. Ear Hear